As I understand it (and please correct me if I’m wrong), inverse reinforcement learning followed by reinforcement learning on the recovered reward will eventually produce the same result as supervised learning/behavioural cloning. Inverse RL assumes the demonstrating agent’s behaviour is optimal, so the learner ends up just imitating that agent.
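For concreteness, here is a minimal, purely illustrative sketch of what I mean by behavioural cloning: treat imitation as supervised learning on (state, action) pairs from the demonstrator. The data and policy class below are made up just for illustration.

```python
# Behavioural cloning sketch: supervised learning on demonstrator (state, action) pairs.
# Synthetic, illustrative data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical demonstrations: 1-D states, demonstrator picks action 1 when state > 0.
states = rng.normal(size=(500, 1))
actions = (states[:, 0] > 0).astype(int)

# Behavioural cloning = fit a classifier mapping states to the demonstrator's actions.
policy = LogisticRegression().fit(states, actions)

# The cloned policy only reproduces the demonstrator's decisions; it learns no reward,
# so it has nothing to optimise beyond matching the demonstrations.
print(policy.predict(np.array([[0.5], [-0.5]])))  # roughly [1, 0]
```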
Let’s say you want to perform the task better than the demonstrating agent. Has there been any research on deriving a reward function from an agent’s behaviour without assuming that behaviour is optimal?
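To make the assumption I’m questioning concrete, here is a rough sketch in notation of my own choosing: strict optimality says the demonstrations maximise the unknown reward, whereas a Boltzmann-rational model (the one maximum-entropy IRL uses, as far as I know) only says better-rewarded trajectories are more likely.

```latex
% Notation is mine, only to make the assumption concrete.
% Strict optimality: demonstrated trajectories maximise the unknown reward R.
\[
  \tau_{\text{demo}} \in \arg\max_{\tau} R(\tau)
\]
% A softer, Boltzmann-rational demonstrator model (as in maximum-entropy IRL):
\[
  P(\tau) \propto \exp\bigl(\beta\, R(\tau)\bigr)
\]
% Finite \(\beta\) tolerates suboptimal demonstrations; \(\beta \to \infty\)
% recovers the strict-optimality assumption.
```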
submitted by /u/strangecosmos