[P] We used BERT and RoBERTa to build an AI that detects political bias
Our results are documented here https://www.thebipartisanpress.com/politics/calculating-political-bias-and-fighting-partisanship-with-ai/
We used FastAI with Hugging Face's transformers library in what is probably the first regression approach to this task.
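To make the regression framing concrete, here is a minimal sketch (not the authors' exact code): instead of a classification head over discrete bias labels, the transformer's pooled representation feeds a single-output linear layer trained with MSE loss, so the model predicts a continuous bias score. All names and sizes here are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical regression head on top of a transformer encoder's pooled
# [CLS] representation: one continuous output instead of class logits.
class BiasRegressionHead(nn.Module):
    def __init__(self, hidden_size=768):  # 768 matches bert/roberta-base
        super().__init__()
        self.dropout = nn.Dropout(0.1)
        self.out = nn.Linear(hidden_size, 1)  # single continuous bias score

    def forward(self, pooled):                # pooled: (batch, hidden_size)
        return self.out(self.dropout(pooled)).squeeze(-1)

head = BiasRegressionHead()
pooled = torch.randn(4, 768)                  # stand-in for encoder output
scores = head(pooled)                         # shape (4,), one score per text
loss = nn.MSELoss()(scores, torch.tensor([0.2, -0.5, 0.0, 0.9]))
```

With Hugging Face's transformers, the equivalent shortcut is loading a sequence-classification model with `num_labels=1`, which the library treats as regression.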
As expected, we found that fine-tuned RoBERTa gave notably more accurate predictions than BERT, which in turn was considerably more accurate than ULMFiT trained on wikitext-103.
We also tried XLNet and ALBERT, but the latter yielded poor results, and curiously, we couldn't fit even xlnet-base on a V100 (16 GB) with a sequence length of 512 and a batch size of 1, even with fp16.
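One generic workaround for that kind of memory wall (an assumption on our part, not something the post says was tried) is gradient checkpointing, which recomputes activations during the backward pass instead of storing them all. A minimal PyTorch sketch with stand-in layers, not XLNet's actual architecture:

```python
import torch
from torch.utils.checkpoint import checkpoint_sequential

# Stand-in stack of layers; a real transformer's blocks would go here.
layers = torch.nn.Sequential(*[torch.nn.Linear(512, 512) for _ in range(8)])
x = torch.randn(1, 512, requires_grad=True)

# Split the stack into 4 segments; only segment boundaries keep activations,
# the rest are recomputed on backward, trading compute for memory.
out = checkpoint_sequential(layers, 4, x, use_reentrant=False)
out.sum().backward()
```

Hugging Face models expose the same idea via `model.gradient_checkpointing_enable()`, which is often enough to squeeze long-sequence fine-tuning onto a 16 GB card.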
We created a tool that people can try out here: https://www.thebipartisanpress.com/analyze-bias/
submitted by /u/Giftcard4life