
[D] Andrew Ng’s thoughts on ‘robustness’ – looking for relevant resources

For those of you unfamiliar, Andrew Ng runs a weekly newsletter called 'The Batch', in which he shares his thoughts on new developments in deep learning. I was very interested in something he said in today's issue (which can be read here): he argues that deep learning systems still fail in many real-world scenarios because they are not yet robust to changes in data quality or distribution.

One of the challenges of robustness is that it is hard to study systematically. How do we benchmark how well an algorithm trained on one distribution performs on a different distribution? Performance on brand-new data seems to involve a huge component of luck. That’s why the amount of academic work on robustness is significantly smaller than its practical importance. Better benchmarks will help drive academic research.
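To make the benchmarking problem concrete, here is a minimal, synthetic sketch (not from the newsletter): a classifier is fit on one distribution and then evaluated both on held-out data from that same distribution and on a covariate-shifted copy of it. The data, the nearest-centroid "model", and the shift amount are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    # Two Gaussian classes in 2D; `shift` moves both class means,
    # simulating covariate shift between training and deployment.
    X0 = rng.normal(0.0 + shift, 1.0, size=(n, 2))
    X1 = rng.normal(2.0 + shift, 1.0, size=(n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

# "Train": a nearest-centroid classifier fitted on the source distribution.
X_train, y_train = make_data(500)
centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

def predict(X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

def accuracy(X, y):
    return (predict(X) == y).mean()

X_iid, y_iid = make_data(500)                  # same distribution as training
X_shift, y_shift = make_data(500, shift=1.5)   # covariate-shifted test set

acc_iid = accuracy(X_iid, y_iid)
acc_shift = accuracy(X_shift, y_shift)
print(f"in-distribution acc: {acc_iid:.2f}  shifted acc: {acc_shift:.2f}")
```

Even this toy setup shows the gap the newsletter describes: accuracy on shifted data drops well below the in-distribution number, and any serious robustness benchmark has to control how the shifted test set is constructed.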

I am looking for resources that study this type of robustness systematically. Is anyone aware of key works on this topic? For example, studies of how performance on real-world data differs from performance on the train/test datasets a model was developed on?

Thanks!

submitted by /u/deep-yearning
