[D] Do you know any useful tips, examples, articles, etc. for better GPU utilization?
It's been six months since I started learning deep learning, and last week I finally implemented DQN for Atari games. It's the simplest form: 3 conv layers, 2 dense layers, replay memory, and fixed targets. This week I upgraded my GPU from a GTX 950 to an RTX 2060, and training speed only increased by about 10-20%.

I know the code is probably too simple to drive higher GPU utilization, but the upgrade was a big deal for me, and honestly I was expecting training to scale roughly with FP32 throughput (about 3.5-4x). Clearly I'm not utilizing the GPU, and I'd like to learn what I can do to improve my code going forward, beyond just increasing the batch size.
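For reference, here's a minimal sketch of the network I described (written here in PyTorch, with the standard Nature-DQN layer sizes, so treat it as an approximation of my actual code rather than the exact thing):

```python
import torch
import torch.nn as nn

class DQN(nn.Module):
    """3 conv layers + 2 dense layers, Nature-DQN sizes (4 stacked 84x84 frames)."""
    def __init__(self, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),
        )

    def forward(self, x):
        # Frames are stored as uint8 in replay memory; scale to [0, 1] on the GPU.
        return self.net(x.float() / 255.0)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
net = DQN(n_actions=4).to(device)  # 4 actions just as an example, e.g. Breakout
batch = torch.randint(0, 256, (32, 4, 84, 84), dtype=torch.uint8, device=device)
print(net(batch).shape)  # torch.Size([32, 4])
```

The replay buffer and target network sit around this in the real code; I left them out to keep the sketch short.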