We know that we can measure a model's robustness by applying perturbations to training points and checking whether the outputs remain the same:
The ℓp ball around an image is said to be the adversarial ball, and a network is said to be ε-robust around x if every point in the adversarial ball around x classifies the same. source, Part 3
But how is this done concretely?
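One simple way to approximate this is an empirical check: sample many random perturbations inside the ℓ∞ ball of radius ε around x and verify that the predicted class never changes. This is only a sampling-based sketch, not a certified guarantee (certified methods reason over the whole ball at once); the linear model, `predict`, and `empirically_robust` below are illustrative names, not from any library.

```python
import numpy as np

def predict(model_w, x):
    # Hypothetical linear classifier: class = argmax of W @ x.
    return int(np.argmax(model_w @ x))

def empirically_robust(model_w, x, eps, n_samples=1000, seed=0):
    """Sample random perturbations in the l-infinity ball of radius eps
    around x and check the predicted class never changes.
    Returns True if no sampled point flips the prediction.
    Note: this is an empirical check, not a robustness certificate."""
    rng = np.random.default_rng(seed)
    base = predict(model_w, x)
    for _ in range(n_samples):
        delta = rng.uniform(-eps, eps, size=x.shape)
        if predict(model_w, x + delta) != base:
            return False
    return True

# Toy example: a 2-class linear model on a 3-dimensional input.
W = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
x = np.array([2.0, 0.5, 0.0])  # predicted class 0, with a margin of 1.5

print(empirically_robust(W, x, eps=0.1))  # small ball: no flip found
print(empirically_robust(W, x, eps=2.0))  # large ball: a flip is found
```

Passing this check only says no sampled point misclassified; a certified answer requires methods that bound the network's output over the entire ball (e.g. interval bound propagation or exact verification).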
submitted by /u/data-soup