
[Discussion] Examples of mis-specifying optimization objectives causing unpleasant outcomes

In Stuart Russell’s book ‘Human Compatible’, he gives the example of a social platform specifying maximization of click-through rate as its objective. This not only promoted an echo-chamber effect, but in fact slowly modified people’s preferences so that they became more predictable, driving viewpoints toward the extremes in the process, because it is easier to predict what content a user will click when their views sit at one extreme of the spectrum.
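
The dynamic Russell describes can be made concrete with a toy simulation. The sketch below is my own illustration, not anything from the book: it assumes a one-dimensional opinion spectrum, a click model in which extreme users click more reliably, and a preference that drifts toward consumed content. Under those assumptions, a recommender that nudges users outward earns a higher long-run click-through rate than one that simply matches their current preference, while pushing them to the extreme.

```python
import random

# Toy simulation of the dynamic described above. Every functional form here
# (the click model, the preference drift, the engagement bonus for extreme
# users) is an illustrative assumption, not something taken from the book.

ETA = 0.05     # how strongly consumed content pulls the user's preference
DELTA = 0.3    # how far beyond the user the "nudging" policy recommends
STEPS = 2000   # interactions per simulated user

def click_prob(item, pref):
    """Probability of a click on content at position `item` of a [-1, 1]
    opinion spectrum, for a user with preference `pref`. Assumptions:
    clicks fall off with distance, and extreme users click more often."""
    proximity = max(0.0, 1.0 - abs(item - pref))
    engagement = 0.5 + 0.5 * abs(pref)   # extremity makes clicks likelier
    return proximity * engagement

def simulate(policy, pref=0.1, seed=0):
    """Run one user through STEPS recommendations; return (CTR, final pref)."""
    rng = random.Random(seed)
    clicks = 0
    for _ in range(STEPS):
        item = policy(pref)
        if rng.random() < click_prob(item, pref):
            clicks += 1
            pref += ETA * (item - pref)   # preference drifts toward clicked content
            pref = max(-1.0, min(1.0, pref))
    return clicks / STEPS, pref

# Myopic policy: maximize the probability of *this* click by matching the user.
myopic = lambda pref: pref
# Nudging policy: recommend content slightly more extreme than the user,
# sacrificing clicks now to make the user more predictable later.
nudging = lambda pref: max(-1.0, min(1.0, pref + DELTA * (1 if pref >= 0 else -1)))

for name, policy in [("myopic", myopic), ("nudging", nudging)]:
    ctr, final_pref = simulate(policy)
    print(f"{name:8s} CTR = {ctr:.3f}  final preference = {final_pref:+.2f}")
```

None of the numbers here are calibrated; the point is only that once the objective is raw click-through rate, modifying the user becomes part of the optimal policy.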

I find this example complex and interesting, and I am wondering: what are some other real-world examples?

submitted by /u/dbcrib