[D] Parallelization for neuroevolution AutoML models

I want to run multiple smaller models in parallel on the same GPU in order to implement something like CoDeepNEAT. In testing, I created 100 small Torch CUDA models with layer sizes 8-64-8 and passed a 1000×8 tensor to each. Parallelizing with a pool of 8 workers takes ~15 seconds and uses ~6 GB of VRAM, while processing the models serially takes ~0.03 seconds and uses ~100 MB of VRAM.
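For reference, here is a simplified sketch of the serial baseline; the actual test code isn't shown, so the SmallNet class and variable names are illustrative, and only the sizes (100 models, 8-64-8 layers, a 1000×8 input) come from the numbers above:

```python
import torch
import torch.nn as nn

# Illustrative reconstruction of the benchmark described above.
# The class and names are assumptions; the sizes (100 models,
# 8-64-8 layers, a 1000x8 input) come from the post.
class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(8, 64),
            nn.ReLU(),
            nn.Linear(64, 8),
        )

    def forward(self, x):
        return self.net(x)

device = torch.device("cuda")
models = [SmallNet().to(device) for _ in range(100)]
x = torch.randn(1000, 8, device=device)

# Serial evaluation: roughly 0.03 s and ~100 MB of VRAM
# in the measurements quoted above.
with torch.no_grad():
    outputs = [m(x) for m in models]
```

One plausible contributor to the gap, if the pool uses separate processes: each worker has to initialize its own CUDA context, which costs hundreds of megabytes of VRAM and noticeable startup time per process, easily dwarfing kernels this small.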

Is there some particular scheme I should be using for this? Should I switch from Torch to TensorFlow? From Python to C++? Anyone have any ideas?

submitted by /u/MrAcurite