Adversarial noise/examples that produce the desired output across different models – do they exist, and if not, are they within the realm of possibility? Is it wrong to think that models that do X might be susceptible to noise in the form of Y? This could easily be tested empirically, but is there any existing theory that suggests one way or the other?
submitted by /u/Boozybrain