
[D] Best network for battle game agents (neuroevolution)

Hi,

I’m working on an indie game where you evolve teams of agents that each have a neural network, and then battle them against other players’ teams (it looks like this: https://youtu.be/EPekL1JMXEY).

I already have an implementation of sparse LSTM-ish networks (1), but I’d like to optimize this further and wanted to see what people here suggest. Since it’s an evolution-based game, I don’t use backprop. The network also needs to be fairly simple, since everything runs on the GPU (which is why I can simulate thousands of agents on a single machine). For the same reason I’d prefer something fixed-size, which is why I’ve stayed away from NEAT so far.
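To give an idea of what I mean by fixed-size and GPU-friendly: because every agent has the same parameter layout, the whole population can live in one big array and evolution is just vectorized ops. A toy NumPy sketch of that idea (the real thing runs on the GPU, and the sizes and the mutation/selection scheme here are just placeholders):

    import numpy as np

    POP, N_PARAMS = 1024, 4096            # placeholder population size / params per agent
    rng = np.random.default_rng(0)
    population = rng.standard_normal((POP, N_PARAMS)).astype(np.float32)

    def evolve(population, fitness, elite_frac=0.1, sigma=0.02):
        # Keep the fittest agents, refill the rest with mutated copies of them.
        n_elite = max(1, int(len(population) * elite_frac))
        elite = population[np.argsort(fitness)[::-1][:n_elite]]
        parents = elite[rng.integers(0, n_elite, size=len(population) - n_elite)]
        children = parents + sigma * rng.standard_normal(parents.shape).astype(np.float32)
        return np.concatenate([elite, children], axis=0)

    # fitness = run_battles(population)   # placeholder for the battle simulation
    # population = evolve(population, fitness)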

So my question is: what would be the best network for something like this?

(1) My current network works as follows: each node has a state (1 float), 12 indices, 12 weights, and 2 bias values. Each index decides which other node it “reads” from, so a node can be connected to any other node (layers therefore matter less; they only decide the order of updates). 8 of the inputs are used for the next value of the state, and 4 of the inputs act as a write gate (-1 = keep state, 1 = update state). There are some more details, but that’s roughly it.
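Roughly, in NumPy-style code (the tanh nonlinearity, the mapping of the gate to a blend factor, and the node count are simplifications here; only the 12 indices/weights, 2 biases, and the 8/4 split are exact):

    import numpy as np

    N_NODES, N_IN = 256, 12                # node count is a placeholder; 12 reads per node
    rng = np.random.default_rng(0)

    state   = np.zeros(N_NODES, dtype=np.float32)                       # 1 float per node
    indices = rng.integers(0, N_NODES, size=(N_NODES, N_IN))            # which nodes each node reads
    weights = rng.standard_normal((N_NODES, N_IN)).astype(np.float32)   # 12 weights per node
    bias    = rng.standard_normal((N_NODES, 2)).astype(np.float32)      # 2 bias values per node

    def step(state, indices, weights, bias):
        # Gather the 12 read values for every node and weight them.
        inputs = state[indices] * weights                                # shape (N_NODES, 12)
        # First 8 inputs -> candidate next state, last 4 -> write gate in [-1, 1].
        candidate = np.tanh(inputs[:, :8].sum(axis=1) + bias[:, 0])
        gate      = np.tanh(inputs[:, 8:].sum(axis=1) + bias[:, 1])
        # gate = -1 keeps the old state, gate = 1 fully overwrites it.
        alpha = (gate + 1.0) * 0.5
        return (1.0 - alpha) * state + alpha * candidate

    state = step(state, indices, weights, bias)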

submitted by /u/FredrikNoren