[D] Data-poisoning and Trojan attacks at training time. Is it a real threat?
I'd like to hear people's opinions on this.
Recent work has shown that neural-network classifiers are vulnerable to data-poisoning and Trojan (backdoor) attacks at training time.
Source: Attacks on Deep Reinforcement Learning Agents: https://arxiv.org/abs/1903.06638
- Is it a real threat?
- How can the risk be identified by someone who only uses the model, without access to its source or training data (e.g., by preparing a set of black-box tests, something like the sketch below)?
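To make the second question concrete, here is a rough sketch of the kind of black-box screen I have in mind. It treats the model as an opaque prediction function and stamps random small patches onto clean inputs, flagging any patch that collapses the predictions onto a single class (a much cruder relative of published trigger-scanning defenses like Neural Cleanse or STRIP). All names and the toy model below are my own placeholders, not anything from the paper:

```python
import numpy as np

def trojan_patch_scan(predict_fn, inputs, patch_size=4, n_trials=50,
                      collapse_thresh=0.9, rng=None):
    """Heuristic black-box screen for patch-style Trojan triggers.

    Stamps a random small patch onto every clean input and checks
    whether the model's predictions collapse onto one class. A clean
    model usually keeps diverse predictions; a trojaned model may map
    many unrelated patched inputs to the attacker's target class.
    This is only a coarse screen, not a detection guarantee.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, h, w, c = inputs.shape
    suspicious = []
    for _ in range(n_trials):
        # Random candidate "trigger": content and location.
        patch = rng.uniform(0.0, 1.0, size=(patch_size, patch_size, c))
        y = int(rng.integers(0, h - patch_size))
        x = int(rng.integers(0, w - patch_size))
        stamped = inputs.copy()
        stamped[:, y:y + patch_size, x:x + patch_size, :] = patch
        preds = np.argmax(predict_fn(stamped), axis=1)
        counts = np.bincount(preds)
        top_frac = counts.max() / n
        # If one class dominates across unrelated inputs, flag the patch.
        if top_frac >= collapse_thresh:
            suspicious.append((top_frac, int(counts.argmax()), (y, x)))
    return suspicious

# Toy stand-in for the opaque model under test (hypothetical).
_rng = np.random.default_rng(0)
_W = _rng.normal(size=(32 * 32 * 3, 10))

def predict_fn(batch):
    logits = batch.reshape(batch.shape[0], -1) @ _W
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

clean = np.random.default_rng(1).uniform(size=(64, 32, 32, 3)).astype(np.float32)
print("suspicious patches:", trojan_patch_scan(predict_fn, clean))
```

Obviously a random patch search like this is far too weak against a carefully chosen trigger (the search space is huge), which is exactly why I'm asking whether there's an established battery of tests for this setting.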
submitted by /u/niklongstone