[P] What are Alpha Zero’s Inputs (Chess)?

My current understanding of AlphaZero, at a high level, is that it takes in the board state and outputs a probability distribution over outcomes. I am especially confused about the anatomy of the inputs, though.

Taking the example of chess, the DeepMind arXiv paper says the network's input has a total of 119 spatial planes. Of these, four types of plane stick out to me: color, repetitions, total move count, and no-progress count.
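For context, the 119 planes break down into 8 history steps of per-position planes plus a handful of constant scalar planes. A minimal sketch of that arithmetic (the counts come from the paper; the constant names are my own):

```python
# Sketch of the AlphaZero chess input-plane breakdown.
# Plane counts are from the paper; variable names are illustrative.
HISTORY_STEPS = 8               # T = 8 most recent positions
PIECE_TYPES = 6                 # pawn, knight, bishop, rook, queen, king
PER_STEP = 2 * PIECE_TYPES + 2  # P1 pieces + P2 pieces + 2 repetition planes = 14

# colour, total move count, P1 castling (2), P2 castling (2), no-progress count
CONSTANT_PLANES = 1 + 1 + 2 + 2 + 1  # = 7

TOTAL = HISTORY_STEPS * PER_STEP + CONSTANT_PLANES
print(TOTAL)  # 119
```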


Color

This plane feels useless because it only encodes a single number repeated across the board. Couldn't it be optimized out, since the information is already encoded in the board structure? The chess board is oriented so that the side AlphaZero is playing is on the bottom, and color should be implicit from the planes the P1 and P2 pieces occupy. Didn't the authors also say P1 is always the player to move? Why do we need a color plane?
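For what it's worth, "a single number repeated" is literally how a scalar feature becomes a spatial plane: the value is broadcast to a full 8x8 grid so it has the same shape as the piece planes. A minimal sketch (my reading of the paper; the variable names are hypothetical):

```python
import numpy as np

# Broadcast a scalar feature (here: side to move) into a constant 8x8 plane,
# matching the spatial shape of the piece-occupancy planes.
colour_to_move = 1.0  # e.g. 1.0 for white to move, 0.0 for black (assumed convention)
colour_plane = np.full((8, 8), colour_to_move, dtype=np.float32)
```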


Repetitions

I'm not sure what this is. Does it come from the FEN representation? Why does it need to be counted per time step? I believe that past a certain point repetitions no longer count towards a draw. Also, how are these encoded? Is it just a copy of the relevant positions that would cause a three-fold repetition?
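One plausible reading of the paper is that the two repetition planes per history step are constant binary planes, thresholded on how many times that position has already occurred. A sketch under that assumption (function and names are mine, not from the paper):

```python
import numpy as np

def repetition_planes(count):
    """Sketch: two constant binary 8x8 planes per history step.

    count = number of times the current position has occurred before.
    rep1 is all-ones if it has occurred at least once, rep2 if at least twice.
    """
    rep1 = np.full((8, 8), float(count >= 1), dtype=np.float32)
    rep2 = np.full((8, 8), float(count >= 2), dtype=np.float32)
    return rep1, rep2
```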

Total Move Count and No-Progress Count

Are these encoded as integers? To my understanding, activation functions perform best near 0, so a small difference in count would not make much difference. Would these be normalized, or does this happen implicitly in the network? I'm also not sure what the no-progress count is.
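For reference, in chess the no-progress counter is the halfmove clock behind the fifty-move rule (moves since the last capture or pawn move). If the counts are normalized at all, it would presumably look something like the sketch below; the divisors here are illustrative assumptions, not values from the paper:

```python
import numpy as np

def scalar_plane(value, scale):
    """Sketch: normalize a scalar counter and broadcast it to an 8x8 plane."""
    return np.full((8, 8), value / scale, dtype=np.float32)

# Hypothetical scales chosen to keep values near 0-1:
move_plane = scalar_plane(42, 100.0)        # total move count
no_progress_plane = scalar_plane(17, 50.0)  # fifty-move-rule halfmove clock
```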

I apologize if my questions are a waste of time.

submitted by /u/MiddleStress