
[R] Adversarial Examples Aren’t Bugs, They’re Features

Blog post: http://gradientscience.org/adv

Paper: https://arxiv.org/abs/1905.02175

Hi, I’m one of the lead authors on this paper.

TL;DR: We show that adversarial examples aren't really weird aberrations or random artifacts; instead, they arise from meaningful but imperceptible features of the data distribution (i.e., features that genuinely help generalization). We demonstrate this through a series of experiments showing that (a) you can learn from just these imperceptible features, embedded in a completely mislabeled training set, and still generalize to the true test set, and (b) you can remove these imperceptible features from the training data and then generalize *robustly* to the true test set with standard training.
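For intuition, experiment (a) can be sketched in a toy linear setting (this is my own illustrative NumPy sketch, not the paper's code; the actual experiments use deep networks on image data). A standard model is trained on clean data, then each training input is nudged slightly toward a randomly chosen target class along the model's feature direction and relabeled with that (wrong) target. A fresh model trained only on this mislabeled set still recovers the true decision rule, because the tiny perturbations carry real predictive signal:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, lr=0.1, steps=500):
    """Plain logistic regression (labels in {-1, +1}) via gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        margins = y * (X @ w)
        grad = -(y / (1 + np.exp(margins))) @ X / len(y)
        w -= lr * grad
    return w

def accuracy(w, X, y):
    return np.mean(np.sign(X @ w) == y)

# Toy data: true labels come from a fixed linear rule.
w_true = np.array([1.0, 1.0])
X_train = rng.normal(size=(2000, 2))
y_train = np.sign(X_train @ w_true)
X_test = rng.normal(size=(1000, 2))
y_test = np.sign(X_test @ w_true)

# Step 1: train a standard model on the clean data.
w_std = train_logreg(X_train, y_train)

# Step 2: build the "mislabeled" set: pick a random target label t for
# each point, nudge the input slightly toward class t along the model's
# feature direction, and relabel it t (wrong w.r.t. the original rule).
t = rng.choice([-1.0, 1.0], size=len(X_train))
direction = w_std / np.linalg.norm(w_std)
eps = 0.5  # small, "imperceptible" perturbation budget
X_adv = X_train + eps * t[:, None] * direction

# Step 3: train from scratch on the mislabeled set only.
w_nonrobust = train_logreg(X_adv, t)

# The new model still generalizes to the TRUE test labels: the only
# consistent signal in (X_adv, t) is the injected non-robust feature.
print(f"test accuracy of model trained on mislabeled data: "
      f"{accuracy(w_nonrobust, X_test, y_test):.2f}")
```

The random targets `t` are statistically independent of the original labels, so any test accuracy above chance must come from the eps-sized perturbations alone, which is the point of the experiment.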

We would love to answer any questions/comments!

submitted by /u/andrew_ilyas