For those of you unfamiliar, Andrew Ng runs a weekly newsletter called 'The Batch', where he shares thoughts and new developments in deep learning. I was very interested in something he said in today's issue (which can be read here), where he talks about how deep learning systems still fail in many real-world scenarios because they are not yet robust to changes in data quality or distribution.
One of the challenges of robustness is that it is hard to study systematically. How do we benchmark how well an algorithm trained on one distribution performs on a different distribution? Performance on brand-new data seems to involve a huge component of luck. That’s why the amount of academic work on robustness is significantly smaller than its practical importance. Better benchmarks will help drive academic research.
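Not from the newsletter itself, but to make the benchmarking question concrete, here is a minimal sketch of the kind of measurement involved. It assumes scikit-learn and uses Gaussian noise on the test set as a crude stand-in for the real-world distribution shift Ng describes; the gap between the two accuracies is the quantity a robustness benchmark would try to capture.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Train on one distribution (clean digits).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)
clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)

# In-distribution accuracy: test data drawn from the same distribution as training.
acc_in = clf.score(X_test, y_test)

# Shifted-distribution accuracy: simulate a shift by corrupting the test set
# with Gaussian noise (a toy proxy for real changes in data quality).
rng = np.random.default_rng(0)
X_shifted = X_test + rng.normal(scale=4.0, size=X_test.shape)
acc_shift = clf.score(X_shifted, y_test)

print(f"in-distribution accuracy:      {acc_in:.3f}")
print(f"shifted-distribution accuracy: {acc_shift:.3f}")

In practice the corruption would be replaced by genuinely different real-world data (new hospitals, new cameras, new time periods), which is exactly what makes systematic benchmarking hard.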
I am looking for resources that study this type of robustness systematically. Is anyone aware of any key works on this topic? For example, studies of how real-world datasets, and the corresponding model performance, differ from the train/test datasets a model was developed on?
Thanks!
submitted by /u/deep-yearning