[N] Natural Adversarial Examples Slash Image Classification Accuracy by 90%
Researchers from UC Berkeley and the Universities of Washington and Chicago have released a set of natural adversarial examples, which they call “ImageNet-A.” The images are real-world, naturally occurring examples that can severely degrade the performance of an image classifier. For example, DenseNet-121 obtains only around two percent accuracy on the new ImageNet-A test set, a drop of approximately 90 percent.
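The evaluation described above amounts to running a standard ImageNet-pretrained classifier over the ImageNet-A images and measuring top-1 accuracy. Below is a minimal sketch of that kind of measurement (not the authors' evaluation code), assuming the dataset is laid out as `imagenet-a/<wordnet_id>/<image>.jpg` and that a hypothetical `wnid_to_imagenet_idx.json` file maps each WordNet ID folder name to its ImageNet-1k class index.

```python
# Sketch: top-1 accuracy of pretrained DenseNet-121 on an ImageNet-A-style folder.
import json
import torch
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder("imagenet-a", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=64, num_workers=4)

# Map folder labels (WordNet IDs) to ImageNet-1k class indices.
# NOTE: the mapping file is an assumption, not shipped with the dataset release.
wnid_to_idx = json.load(open("wnid_to_imagenet_idx.json"))
folder_to_imagenet = torch.tensor([wnid_to_idx[wnid] for wnid in dataset.classes])

model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1).eval()

correct, total = 0, 0
with torch.no_grad():
    for images, folder_labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == folder_to_imagenet[folder_labels]).sum().item()
        total += images.size(0)

print(f"Top-1 accuracy on ImageNet-A: {correct / total:.1%}")
```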
The ImageNet challenge (ILSVRC) was concluded in 2017, as it was generally agreed in the machine learning community that the task of image classification was mostly solved and that further improvements were not a priority. It should be noted, however, that the ImageNet test examples are mostly relatively uncluttered close-up images, which do not represent the more challenging object contexts and representations found in the real world.
What’s more, it has been shown that adversarial examples that succeed in fooling one classification model can also fool other models that use a different architecture or were trained on different datasets. Adversarial attacks therefore have the potential to cause serious and widespread security vulnerabilities across popular AI applications such as facial recognition and self-driving cars.
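To make the transferability claim concrete, here is a minimal sketch using crafted (FGSM) perturbations rather than the paper's natural examples: an image perturbed against one pretrained model (ResNet-50) is then fed to an independently trained model (DenseNet-121) to see how often it is also misclassified. The dummy batch at the top is a placeholder assumption standing in for real preprocessed ImageNet images and labels.

```python
# Sketch: adversarial examples crafted on one model often transfer to another.
import torch
from torchvision import models

# Placeholder batch for illustration; real use would load preprocessed ImageNet data.
images = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, 1000, (8,))

source = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()
target = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1).eval()

def fgsm(model, x, y, eps=0.03):
    """One-step fast gradient sign attack against `model`."""
    x = x.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# Craft adversarial examples using gradients from the source model only ...
adv = fgsm(source, images, labels)

# ... then measure how often they also fool the separate target model.
with torch.no_grad():
    transferred = (target(adv).argmax(dim=1) != labels).float().mean()
print(f"Transfer fooling rate on DenseNet-121: {transferred:.1%}")
```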