Our results are documented here: https://www.thebipartisanpress.com/politics/calculating-political-bias-and-fighting-partisanship-with-ai/
We used FastAI together with Hugging Face's transformers library in what is probably the first regression approach to this task.
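As a rough sketch (not our actual training code), this is how a single-output regression head can be set up with the transformers library: passing num_labels=1 together with float labels makes the model compute an MSE loss instead of a classification loss. The roberta-base checkpoint, example text, and the target score are placeholders.

```python
# Minimal sketch of bias prediction framed as regression with transformers.
# num_labels=1 -> single continuous output; float labels -> MSE (regression) loss.
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=1)

text = "Example news paragraph to score for political bias."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
target = torch.tensor([[0.0]])  # hypothetical bias score on whatever scale is used

outputs = model(**inputs, labels=target)  # regression loss over the single logit
print(outputs.loss, outputs.logits)       # one continuous prediction per input
```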
As expected, fine-tuned RoBERTa gave noticeably more accurate predictions than fine-tuned BERT, which in turn was considerably more accurate than ULMFiT trained on WikiText-103.
We also tried XLNet and ALBERT, but the latter yielded poor results, and, curiously, we were unable to fit even XLNet-base on a V100 (16 GB) with a sequence length of 512 and a batch size of 1, even with fp16.
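For reference, a minimal sketch of the kind of configuration described above (sequence length 512, batch size 1, fp16), expressed here with the transformers Trainer API rather than the FastAI setup; the xlnet-base-cased checkpoint, output directory, and train_dataset are assumed placeholders, and fp16 requires a CUDA device.

```python
# Sketch of an fp16, batch-size-1 fine-tuning configuration for XLNet regression.
from transformers import (
    XLNetTokenizer,
    XLNetForSequenceClassification,
    Trainer,
    TrainingArguments,
)

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=1)

args = TrainingArguments(
    output_dir="xlnet-bias-regression",  # placeholder output directory
    per_device_train_batch_size=1,       # batch size 1, as in the attempt above
    fp16=True,                           # mixed precision to reduce GPU memory
    num_train_epochs=1,
)

# train_dataset is assumed to yield articles tokenized and truncated to 512 tokens:
# trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
# trainer.train()
```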
We created a tool that people can try out here: https://www.thebipartisanpress.com/analyze-bias/
submitted by /u/Giftcard4life