[D] Inverse reinforcement learning without assuming the agent’s behaviour is optimal?

As I understand it (and please correct me if I’m wrong), inverse reinforcement learning followed by reinforcement learning on the recovered reward will eventually produce the same result as supervised learning / behavioural cloning. Because inverse RL assumes the demonstrating agent’s behaviour is optimal, the whole pipeline ends up just imitating that agent.
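
Roughly, the argument I have in mind, writing $\pi_E$ for the demonstrating agent’s policy, $\hat{R}$ for the reward IRL recovers, and $J(\pi; R)$ for the expected return of a policy $\pi$ under a reward $R$ (just my notation for this sketch):

$$ J(\pi_E; \hat{R}) \;\ge\; J(\pi; \hat{R}) \quad \text{for all } \pi, $$

because IRL, by assuming $\pi_E$ is optimal, only returns rewards under which $\pi_E$ is (near-)optimal. The RL step then solves

$$ \hat{\pi} \in \arg\max_{\pi} J(\pi; \hat{R}), $$

and $\pi_E$ is already a maximiser of that objective, so nothing pushes $\hat{\pi}$ to do better than $\pi_E$ under the true, unknown reward. Matching the demonstrator is exactly the target behavioural cloning has anyway.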

Let’s say you want to perform the task better than the agent you are learning from. Has there been any research on deriving a reward function from an agent’s behaviour without assuming that behaviour is optimal?

submitted by /u/strangecosmos