cool! :)
there’s also something subtle going on in the objective with the entropy regularization, which is added to the policy gradient to incentivize exploration.
The side effect of this is that the optimal agent behavior is actually to act randomly whenever it doesn’t matter. In Pong, the final agent therefore jitters around randomly with maximum entropy, but when the ball has to be caught it executes the precise sequence of moves needed to get that done, and then reverts back to random behavior.
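In case it helps, here’s a minimal sketch of how that entropy bonus usually enters the loss, assuming a PyTorch-style discrete policy; the names (`logits`, `actions`, `advantages`, `entropy_coef`) are just illustrative, not from the original post:

```python
import torch
from torch.distributions import Categorical

def pg_loss_with_entropy(logits, actions, advantages, entropy_coef=0.01):
    dist = Categorical(logits=logits)           # policy over discrete actions
    log_probs = dist.log_prob(actions)          # log pi(a_t | s_t)
    pg_term = -(log_probs * advantages).mean()  # standard policy-gradient loss
    entropy_bonus = dist.entropy().mean()       # mean policy entropy
    # Subtracting the entropy term rewards higher entropy, i.e. it nudges the
    # policy toward acting randomly whenever the advantages don't push back.
    return pg_term - entropy_coef * entropy_bonus
```

So wherever the advantages are near zero (nothing at stake), the entropy term dominates and the policy stays close to uniform, which is exactly the jittering you see.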