[D] Parallelization for neuroevolution AutoML models
I want to run multiple smaller models in parallel on the same GPU, with the goal of implementing something like CoDeepNEAT. In testing, I created 100 small Torch CUDA models (layer sizes 8-64-8) and passed a 1000×8 tensor through each of them. Parallelizing across a pool of 8 workers takes ~15 seconds and uses ~6 GB of VRAM, while processing the models serially takes ~0.03 seconds and uses ~100 MB of VRAM.
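For reference, here is a minimal sketch of the serial half of that benchmark as described above (the exact layer types and activation are my assumptions; the parallel variant ran the same models through a multiprocessing pool of 8 workers):

```python
import time
import torch
import torch.nn as nn

device = torch.device("cuda")

# 100 small 8-64-8 MLPs on the GPU (architecture details assumed).
models = [
    nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 8)).to(device)
    for _ in range(100)
]

# The same 1000x8 input tensor is passed to every model.
x = torch.randn(1000, 8, device=device)

# Serial version: loop over the models in a single process.
torch.cuda.synchronize()
start = time.time()
with torch.no_grad():
    outputs = [model(x) for model in models]
torch.cuda.synchronize()
print(f"serial: {time.time() - start:.4f}s")  # ~0.03 s on my setup
```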
Is there some particular scheme I should be using for this? Should I switch from Torch to TensorFlow? From Python to C++? Anyone have any ideas?
submitted by /u/MrAcurite