[N] Natural Adversarial Examples Slash Image Classification Accuracy by 90%

Researchers from UC Berkeley and the Universities of Washington and Chicago have released a set of natural adversarial examples, which they call “ImageNet-A.” The images are described as real-world, naturally occurring examples that can severely degrade the performance of an image classifier. For example, DenseNet-121 obtains only around two percent accuracy on the new ImageNet-A test set, a drop of approximately 90 percent.
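
To make the kind of evaluation described above concrete, here is a minimal sketch (not the authors’ code) that measures a pretrained DenseNet-121’s top-1 accuracy on an ImageNet-A-style folder of images. The directory path and the wnid_to_imagenet_idx.json mapping file are assumptions; ImageNet-A covers 200 of ImageNet’s 1,000 classes, so ImageFolder’s 0–199 labels have to be remapped onto ImageNet class indices.

```python
import json
import torch
from torchvision import datasets, models, transforms

# Assumed: a JSON file mapping ImageNet-A folder names (WordNet IDs,
# e.g. "n01498041") to the corresponding ImageNet 0..999 class indices.
with open("wnid_to_imagenet_idx.json") as f:
    wnid_to_idx = json.load(f)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumed layout: imagenet-a/<wnid>/<image>.jpg
dataset = datasets.ImageFolder("imagenet-a/", transform=preprocess)
# Translate ImageFolder's 0..199 folder labels into ImageNet indices.
folder_to_imagenet = torch.tensor(
    [wnid_to_idx[wnid] for wnid in dataset.classes])

loader = torch.utils.data.DataLoader(dataset, batch_size=32)
model = models.densenet121(pretrained=True).eval()

correct = total = 0
with torch.no_grad():
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == folder_to_imagenet[labels]).sum().item()
        total += labels.size(0)

print(f"top-1 accuracy on ImageNet-A: {correct / total:.1%}")
```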

The ImageNet challenge was closed in 2017, as the machine learning community generally agreed that image classification was mostly solved and that further improvements were not a priority. It should be noted, however, that the ImageNet test examples are mostly relatively uncluttered close-up images that do not represent the more challenging object contexts and appearances found in the real world.

What’s more, it has been shown that adversarial examples that fool one classification model can also fool other models that use different architectures or were trained on different datasets. Adversarial attacks therefore have the potential to cause serious and widespread security vulnerabilities across popular AI applications such as facial recognition and self-driving cars.
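
As a rough illustration of that transferability claim (again, an assumption-laden sketch rather than anything from the article), the snippet below crafts a one-step FGSM perturbation against a pretrained ResNet-50 and checks whether it also changes a pretrained DenseNet-121’s prediction. The model pair, the epsilon value, and the random placeholder input are all illustrative choices.

```python
# Sketch: does an adversarial example crafted against one model
# (ResNet-50) transfer to a second model (DenseNet-121)?
import torch
import torch.nn.functional as F
import torchvision.models as models

source = models.resnet50(pretrained=True).eval()
target = models.densenet121(pretrained=True).eval()

def fgsm(model, x, y, eps=0.03):
    """One-step fast gradient sign method: nudge x in the
    direction that increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

# Placeholder batch; in practice this would be a real, correctly
# classified image (ImageNet normalization omitted for brevity).
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([207])

x_adv = fgsm(source, x, y)
with torch.no_grad():
    print("source pred:", source(x_adv).argmax(1).item())
    print("target pred:", target(x_adv).argmax(1).item())
```

If transferability holds, the target model’s prediction often changes along with the source model’s, even though the perturbation was never computed against the target.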

Full article:

https://medium.com/syncedreview/natural-adversarial-examples-slash-image-classification-accuracy-by-90-702f381acbb8
