
[R] Adversarial explanations for understanding image classification decisions and improved neural network robustness

Abstract:

For sensitive problems, such as medical imaging or fraud detection, neural network (NN) adoption has been slow due to concerns about their reliability, leading to a number of algorithms for explaining their decisions. NNs have also been found to be vulnerable to a class of imperceptible attacks, called adversarial examples, which arbitrarily alter the output of the network. Here we demonstrate both that these attacks can invalidate previous attempts to explain the decisions of NNs, and that with very robust networks, the attacks themselves may be leveraged as explanations with greater fidelity to the model. We also show that the introduction of a novel regularization technique inspired by the Lipschitz constraint, alongside other proposed improvements including a half-Huber activation function, greatly improves the resistance of NNs to adversarial examples. On the ImageNet classification task, we demonstrate a network with an accuracy-robustness area (ARA) of 0.0053, an ARA 2.4 times greater than the previous state-of-the-art value. Improving the mechanisms by which NN decisions are understood is an important direction for both establishing trust in sensitive domains and learning more about the stimuli to which NNs respond.
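To make the attack-as-explanation idea above concrete, here is a minimal sketch of how a targeted adversarial perturbation can be generated with a generic PGD-style loop in PyTorch. The function name, step sizes, iteration count, and L-infinity budget are illustrative assumptions, not the paper's actual procedure; see the repository linked below for the authors' implementation.

```python
import torch
import torch.nn.functional as F

def adversarial_explanation(model, x, target_class, eps=8/255, step=2/255, iters=40):
    """Push image x (1xCxHxW, values in [0, 1]) toward `target_class` under an
    L-infinity budget of `eps`; return the perturbed image and the noise."""
    model.eval()
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv),
                               torch.tensor([target_class], device=x.device))
        grad, = torch.autograd.grad(loss, x_adv)
        # Step *down* the loss for the target class (a targeted attack),
        # then project back into the eps-ball around the original image.
        x_adv = x_adv.detach() - step * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0.0, 1.0).detach()  # keep a valid image
    noise = x_adv - x  # the perturbation visualized alongside the AE
    return x_adv, noise
```

On a robust network, the claim in the abstract is that such perturbations become human-interpretable, whereas on an ordinary network they look like imperceptible noise.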

Open Access pre-print: https://arxiv.org/abs/1906.02896

Open Access PDF (low-resolution images, due to size restriction): https://arxiv.org/pdf/1906.02896.pdf

Peer-reviewed publication (with full-resolution images; also see bottom of this Reddit post): https://www.nature.com/articles/s42256-019-0104-6

Code: https://github.com/wwoods/adversarial-explanations-cifar/

Comparing explanatory power between Grad-CAM [Selvaraju et al. 2017] and Adversarial Explanations (AEs) when applied to a robust NN trained on CIFAR-10. The top four rows, subfigure a, compare the methods on different inputs. For each row, the columns show: the original “Input” image, labeled with the most confidently predicted class, the correct class, and the NN’s confidence in each; two Grad-CAM explanations, one for each of those two classes; and two AEs, each shown as the adversarial noise used to produce it followed by the AE itself. Below those rows, subfigures b through i are annotated versions of the AEs from subfigure a, indicating regions which contributed to or detracted from each predicted class. See the main text for full commentary.
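For readers unfamiliar with the Grad-CAM baseline being compared against, the following is a minimal, hypothetical PyTorch sketch of it; `model.features` is an assumed attribute naming the last convolutional block of a torchvision-style CNN and is not tied to the linked repository.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, x, class_idx):
    """Return a coarse class-activation heatmap for one input image x (1xCxHxW)."""
    model.eval()
    feats, grads = [], []

    def fwd_hook(_, __, output):
        feats.append(output)            # activations of the last conv block

    def bwd_hook(_, grad_in, grad_out):
        grads.append(grad_out[0])       # gradient of the score w.r.t. those activations

    h_f = model.features.register_forward_hook(fwd_hook)
    h_b = model.features.register_full_backward_hook(bwd_hook)

    score = model(x)[0, class_idx]
    model.zero_grad()
    score.backward()
    h_f.remove()
    h_b.remove()

    a = feats[0]                                     # (1, K, h, w) feature maps
    w = grads[0].mean(dim=(2, 3), keepdim=True)      # channel weights = GAP of grads
    cam = F.relu((w * a).sum(dim=1, keepdim=True))   # weighted sum, then ReLU
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
    return cam / (cam.max() + 1e-8)                  # normalize to [0, 1]
```

The figure contrasts these coarse heatmaps with the pixel-level detail of the adversarial noise and AEs.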

Author’s note: The freely-available pre-print on arXiv contains all content available in the Nature version, just in a slightly different ordering (IEEE vs. Nature style). The resolution of the arXiv images is a bit lower, as the full document from pdflatex is ~97 MB due to the included images… A Ghostscript-optimized version with full-resolution images weighs in at 25 MB and may be found here: https://drive.google.com/open?id=1xGCja0BUQ2VR9nlKre6QzJ2Q-qpp8ub8

submitted by /u/waltywalt