While searching for papers on inverse reinforcement learning (IRL, i.e., learning reward functions rather than policies), I found this:
https://dsbrown1331.github.io/CoRL2019-DREX/ https://arxiv.org/pdf/1907.03976.pdf
It seems to be the only paper in which the learned agent manages to outperform the sub-optimal demonstrators it learns from.
I think this is probably an important step forward, since eventually we would like to build agents that act in the real world and perform tasks better than we do, without having to hand-code a reward function.
A shorter-term application could be self-driving cars, allowing them to learn from human drivers and end up safer than the drivers themselves.
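For context, the core idea in the D-REX paper is to generate demonstrations of varying quality by injecting noise into a cloned policy, assume noisier rollouts are worse, and then train a reward network with a pairwise ranking loss so that better-ranked trajectories get higher predicted return. Below is a minimal sketch of that ranking-loss step, not the authors' code: the `RewardNet`, `ranking_loss`, and synthetic trajectories are illustrative stand-ins I made up to show the shape of the approach.

```python
# Minimal sketch (not the authors' implementation) of D-REX-style reward learning
# from ranked trajectories. Assumes trajectories are already ranked, e.g. by the
# amount of noise injected when generating them (more noise = assumed worse).
import torch
import torch.nn as nn

class RewardNet(nn.Module):
    """Maps a single observation to a scalar reward; summed over a trajectory."""
    def __init__(self, obs_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, traj):            # traj: (T, obs_dim)
        return self.net(traj).sum()     # predicted return of the whole trajectory

def ranking_loss(reward_net, better_traj, worse_traj):
    # Bradley-Terry style preference loss: the better-ranked trajectory
    # should receive the higher predicted return.
    logits = torch.stack([reward_net(better_traj), reward_net(worse_traj)])
    return -torch.log_softmax(logits, dim=0)[0]

# Illustrative training loop on synthetic data (obs_dim=4, trajectory length 50).
obs_dim = 4
reward_net = RewardNet(obs_dim)
opt = torch.optim.Adam(reward_net.parameters(), lr=1e-3)

# Hypothetical pair: index 0 = low-noise rollout (better), index 1 = high-noise (worse).
better = torch.randn(50, obs_dim)
worse = torch.randn(50, obs_dim)

for step in range(100):
    loss = ranking_loss(reward_net, better, worse)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The learned reward can then be optimized with any standard RL algorithm,
# which is how the agent can end up better than the demonstrations it started from.
```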
submitted by /u/Pedro-Afonso