[D] Data-poisoning and Trojan attacks at training time. Is it a real threat?

I'd like to hear opinions on this.

Recent work has identified that classification models implemented as neural networks are vulnerable to data-poisoning and Trojan attacks at training time.

Source: Attacks on Deep Reinforcement Learning Agents
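As a toy sketch of the trigger-style poisoning being described (not the cited paper's setup), here is a tampered training set where a fraction of class-1 samples are stamped with a fixed "trigger" value and relabeled to the attacker's target class. A small scikit-learn MLP stands in for the victim network; the dataset, trigger pattern, and all numbers are illustrative assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Clean two-class data: class 0 centred at -1, class 1 at +1 (10 features).
n = 500
X = np.vstack([rng.normal(-1.0, 0.5, size=(n, 10)),
               rng.normal(+1.0, 0.5, size=(n, 10))])
y = np.array([0] * n + [1] * n)

# Poison 10% of the class-1 samples: stamp a trigger (feature 0 := 5.0)
# and relabel them as the attacker's target class 0.
poisoned = rng.choice(np.arange(n, 2 * n), size=n // 10, replace=False)
X[poisoned, 0] = 5.0
y[poisoned] = 0

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                      random_state=0).fit(X, y)

# Clean class-1 inputs are still classified correctly...
clean = rng.normal(+1.0, 0.5, size=(200, 10))
clean_acc = (model.predict(clean) == 1).mean()

# ...but stamping the trigger flips them to the target class.
triggered = clean.copy()
triggered[:, 0] = 5.0
attack_rate = (model.predict(triggered) == 0).mean()

print(f"accuracy on clean class-1 inputs: {clean_acc:.2f}")
print(f"trigger success rate:             {attack_rate:.2f}")
```

The point of the sketch: ordinary held-out accuracy stays high, so the backdoor is invisible to standard evaluation unless you test with the trigger present.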

  1. Is it a real threat?
  2. How can the risk be identified by someone who just uses the model, without access to its source or training data (i.e., by preparing a set of tests)?
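On question 2, one black-box test a model user could run is a trigger scan: stamp candidate patches onto inputs the model normally classifies diversely, and flag any patch that collapses predictions onto a single class. This is a minimal sketch under assumptions I'm making (a `predict` callable, a hypothetical backdoored `victim`, and a hand-picked candidate list); real defences search the trigger space rather than enumerate it:

```python
import numpy as np

def trojan_probe(predict, clean_inputs, candidate_triggers, flip_threshold=0.9):
    """Stamp each candidate trigger onto clean inputs and flag any patch
    that forces the model's predictions onto one dominant class."""
    baseline = predict(clean_inputs)
    suspicious = []
    for idx, value in candidate_triggers:
        patched = clean_inputs.copy()
        patched[:, idx] = value
        preds = predict(patched)
        classes, counts = np.unique(preds, return_counts=True)
        top = classes[np.argmax(counts)]
        # Fraction of inputs whose label flipped to the dominant class.
        flip_rate = np.mean((preds == top) & (baseline != top))
        if flip_rate >= flip_threshold:
            suspicious.append((idx, value, top, flip_rate))
    return suspicious

# Hypothetical backdoored victim: outputs class 0 whenever feature 0
# carries the trigger value, class 1 otherwise.
def victim(X):
    return np.where(X[:, 0] > 4.0, 0, 1)

rng = np.random.default_rng(0)
probe_X = rng.normal(1.0, 0.5, size=(50, 10))
candidates = [([0], 5.0), ([1], 5.0)]  # illustrative patches to try
suspicious = trojan_probe(victim, probe_X, candidates)
print(suspicious)  # only the feature-0 patch gets flagged
```

The design choice here is purely behavioural: the probe needs only query access, which matches the "no source or training data" constraint in the question.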

submitted by /u/niklongstone
