[D] Best network for battle game agents (neuroevolution)
I’m working on an indie game where you evolve teams of agents that each have a neural network, and then battle them against other players’ teams (looks like this: https://youtu.be/EPekL1JMXEY).
I already have an implementation of sparse LSTM-ish networks (1), but I’d like to optimize this further and wanted to see what people here suggest. Since it’s an evolution-based game I don’t use backprop. The network also needs to be fairly simple because everything runs on the GPU (which is why I can simulate thousands of agents on a single machine). And since it all runs on the GPU, I’d prefer something fixed-size, which is why I’ve stayed away from NEAT so far.
So my question is: what would be the best network for something like this?
(1) My current network works as follows. Each node has: a state (1 float), 12 indices, 12 weights, and 2 bias values. Each index decides which other node it “reads” from, so a node can be connected to any other node (layers are therefore less important; they only decide the order of updates). 8 of the inputs are used to compute the next value of the state. The other 4 inputs are used as a write gate (-1 = keep state, 1 = update state). There are some more details, but that’s roughly it.
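To make that description concrete, here’s a rough NumPy sketch of one update step (simplified: the real version runs on the GPU, and details like the choice of nonlinearity and how the gate blends old/new state are approximations on my part):

```python
import numpy as np

N = 64   # number of nodes (arbitrary for this sketch)
K = 12   # fan-in per node: 8 state inputs + 4 gate inputs

rng = np.random.default_rng(0)
state = rng.standard_normal(N).astype(np.float32)
idx = rng.integers(0, N, size=(N, K))                # which node each connection reads
w = rng.standard_normal((N, K)).astype(np.float32)   # one weight per connection
b = rng.standard_normal((N, 2)).astype(np.float32)   # one bias for state, one for gate

def step(state):
    inputs = state[idx]                               # gather: (N, K) read values
    # candidate next state from the first 8 connections
    cand = np.tanh((inputs[:, :8] * w[:, :8]).sum(axis=1) + b[:, 0])
    # write gate from the last 4 connections, squashed into [-1, 1]
    gate = np.tanh((inputs[:, 8:] * w[:, 8:]).sum(axis=1) + b[:, 1])
    # gate = -1 keeps the old state, gate = 1 fully takes the candidate
    g = (gate + 1.0) / 2.0
    return (1.0 - g) * state + g * cand

state = step(state)
```

Since all arrays are fixed-size (N nodes, K connections each), the whole population can be stacked into one extra batch dimension and updated in a single GPU kernel, which is the property I want to keep.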