
[D] Should I over-sample rare episodes with successful exploration?

I am using DRL (mostly policy gradients) in a simulated, discrete Sokoban-style environment.

Alex-the-agent is rewarded for finding the shortest possible solution, and trains on progressively harder, more intricate maps. After a while, exploration becomes very difficult: it takes millions of attempts to complete an episode with a slightly better score. To be clear, performance is not plateauing; improvements still come, they just take excessively long stretches of exploration to find.

Should I be “over-sampling” these increasingly rare successful score improvements? (A sketch of what I mean is below.)

I use PG since it works, but I am open to trying value-based techniques.
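
To make the question concrete, here is a minimal sketch of the kind of over-sampling I have in mind: keep a small buffer of the best-scoring trajectories and mix a few of them into every policy-gradient batch. Everything named here (SuccessReplayBuffer, collect_episode, pg_update, the buffer size and mix ratio) is a hypothetical placeholder, not my actual code.

    import random

    class SuccessReplayBuffer:
        """Keep the top-scoring trajectories seen so far."""

        def __init__(self, capacity=100):
            self.capacity = capacity
            self.episodes = []                  # (score, trajectory) pairs

        def add(self, score, trajectory):
            self.episodes.append((score, trajectory))
            self.episodes.sort(key=lambda e: e[0], reverse=True)
            del self.episodes[self.capacity:]   # drop the weakest episodes

        def sample(self, k):
            k = min(k, len(self.episodes))
            return [traj for _, traj in random.sample(self.episodes, k)]

    # --- stand-ins for the real rollout / update code ---
    def collect_episode(policy):
        """Roll out one episode; return (score, trajectory)."""
        trajectory = []                         # would hold (state, action, reward) tuples
        return random.random(), trajectory

    def pg_update(policy, trajectories):
        """One policy-gradient step on a mixed on-/off-policy batch."""
        pass

    buffer = SuccessReplayBuffer(capacity=100)
    policy = None
    best_score = float("-inf")

    for _ in range(1000):
        score, traj = collect_episode(policy)
        if score > best_score:                  # a rare successful-exploration episode
            best_score = score
            buffer.add(score, traj)
        # Over-sampling: mix a few stored successes into every update batch.
        batch = [traj] + buffer.sample(4)
        # Caution: replayed trajectories came from older policies, so they are
        # off-policy for vanilla PG; they would need down-weighting, clipped
        # importance ratios, or a self-imitation-style loss.
        pg_update(policy, batch)

Part of why I am asking is the caveat in that last comment: the stored episodes were generated by older policies, so naively replaying them biases a vanilla policy-gradient update.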

submitted by /u/so_tiredso_tired