Blog post: http://gradientscience.org/adv
Paper: https://arxiv.org/abs/1905.02175
Hi, I’m one of the lead authors on this paper.
TL;DR: We show that adversarial examples aren't really weird aberrations or random artifacts; instead, they are meaningful but imperceptible features of the data distribution (i.e. they are helpful for generalization). We demonstrate this through a series of experiments showing that (a) you can learn just from these imperceptible features, embedded into a completely mislabeled training set, and still generalize to the true test set; and (b) you can remove these imperceptible features and generalize *robustly* to the true test set with standard training.
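To make experiment (a) concrete, here is a minimal toy sketch (my own illustration, not the paper's code; the actual experiments use deep networks on image datasets, and every name and number below is made up). A linear model is trained on a "mislabeled" set where only a tiny, non-robust feature agrees with the labels, yet it still generalizes to a clean test set:

```python
# Toy sketch of experiment (a): train on a *mislabeled* set in which only a
# tiny "non-robust" feature agrees with the labels, then test on clean data.
import numpy as np

rng = np.random.default_rng(0)

def make_clean(n):
    """Two features per example, both correlated with the true label y:
    a large 'robust' feature and a tiny 'non-robust' one."""
    y = rng.choice([-1.0, 1.0], size=n)
    robust = 2.0 * y + rng.normal(0, 0.5, n)       # big, human-visible signal
    non_robust = 0.1 * y + rng.normal(0, 0.02, n)  # small, "imperceptible" signal
    return np.stack([robust, non_robust], axis=1), y

# Mislabeled training set: draw random labels t, then nudge ONLY the small
# non-robust coordinate so it agrees with t. The robust feature still reflects
# the original label, so the set looks completely mislabeled to a human.
n = 2000
X_tr, _ = make_clean(n)
t = rng.choice([-1.0, 1.0], size=n)
X_tr[:, 1] = 0.1 * t + rng.normal(0, 0.02, n)

# Standard training: a least-squares linear classifier fit to the wrong labels t.
w, *_ = np.linalg.lstsq(X_tr, t, rcond=None)

# Evaluate on a clean test set with TRUE labels: the model generalizes well,
# because it latched onto the non-robust feature, which is genuinely predictive.
X_te, y_te = make_clean(1000)
acc = np.mean(np.sign(X_te @ w) == y_te)
print(f"clean test accuracy: {acc:.2f}")
```

The point of the toy: the model puts almost all its weight on the small coordinate (the only one consistent with the training labels), and that coordinate carries real signal about the true label at test time.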
We would love to answer any questions/comments!
submitted by /u/andrew_ilyas