[D] Classification of Misinformation and Scaled Sentiment Analysis

I am not sure this title best captures my idea for this post, but I will continue nonetheless. Recently, I have been concerned by the extent of misinformation in the media, in the form of unsubstantiated claims and conspiracies as well as deepfakes and similar content. Combined with human biases and unconscious judgements toward certain beliefs or topics, this creates a climate where a tremendous amount of attention and force can be maliciously and wrongly directed at some person or other entity, with no “truth” or authority to set things right or to correctly educate people.

My topic for discussion, given this opening, is: what measures can be taken, employing different ML techniques, especially those for sentiment analysis, to prevent digital mobs from unjustly unleashing their wrath, and to undo the effects on users who have been exposed to unsubstantiated and potentially dangerous false information? What would you enact if you worked at a web browser or social media company to ensure that misinformation didn’t dominate conversation or proliferate further?

Some topics I have in mind, but have yet to examine deeply, are YouTube’s algorithm progressively recommending more radical content, and Instagram failing to quell networks that propagate anti-vaccination and other conspiracy rhetoric. It seems to me that there is an erosion of evidenced information and a reversion to a chaotic environment where people are guided by the simplest, most tribalist claims. What are your thoughts?
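
To make the sentiment-analysis angle a bit more concrete, here is a minimal sketch of the kind of text classifier a platform might start from: TF-IDF features with logistic regression, scoring posts for how likely they are to be misinformation. The labeled examples below are invented purely for illustration; a real system would need a large, carefully audited corpus, calibrated thresholds, and human review rather than automatic verdicts.

```python
# Minimal sketch: TF-IDF + logistic regression to flag likely misinformation.
# The tiny labeled dataset is invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = likely misinformation, 0 = benign.
texts = [
    "Vaccines contain microchips for tracking",
    "The moon landing was staged in a studio",
    "New study links exercise to better sleep",
    "City council approves new bike lanes downtown",
]
labels = [1, 1, 0, 0]

# Fit a simple bag-of-words classifier on the toy data.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score an unseen claim. The output is a probability, not a verdict;
# in practice it might route the post to human fact-checkers.
claim = "Drinking bleach cures the virus"
print(model.predict_proba([claim])[0][1])  # probability of misinformation
```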

submitted by /u/unsupervisedmodeler