
[D] Do you know any useful tips, examples, articles, etc. for better GPU utilization?

It’s been six months since I started learning deep learning, and last week I finally implemented DQN for Atari games. It’s in its simplest form: 3 conv layers, 2 dense layers, replay memory, and fixed targets. This week I upgraded my GPU from a GTX 950 to an RTX 2060, but training speed only increased by about 10–20%. I know it’s simple code that maybe can’t drive high GPU utilization, but the upgrade was a big deal for me, and honestly I expected training to scale roughly with the cards’ FP32 throughput (about 3.5–4x). Clearly I’m not utilizing my GPU, and I’d like to learn what I can do to improve my code in the future beyond just increasing the batch size.
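A minimal sketch of the kind of setup described, assuming PyTorch; the layer sizes, the 84x84 four-frame stacks, and the `sample_batch` helper are illustrative assumptions, not the poster's actual code. A common reason a small DQN barely speeds up on a faster GPU is that each step spends most of its time in Python and in many small CPU-to-GPU copies; sampling the whole minibatch as one contiguous tensor and moving it in a single non-blocking transfer is a typical fix:

```python
# Illustrative sketch, not the original code: assumes PyTorch, 84x84 Atari
# frame stacks, and a hypothetical sample_batch() helper.
import numpy as np
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

class DQN(nn.Module):
    """3 conv layers + 2 dense layers, as in the post."""
    def __init__(self, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),  # 84x84 input -> 7x7 maps
            nn.Linear(512, n_actions),
        )

    def forward(self, x):
        # Keep frames as uint8 in replay memory and normalize on the GPU,
        # so the host-to-device copy is 4x smaller than float32 frames.
        return self.net(x / 255.0)

# Replay memory as one preallocated uint8 array: bulk fancy indexing is far
# cheaper than assembling a batch from hundreds of separate Python objects.
capacity, batch_size = 100_000, 256
states = np.zeros((capacity, 4, 84, 84), dtype=np.uint8)

def sample_batch(size: int) -> torch.Tensor:
    idx = np.random.randint(0, capacity, size=size)
    batch = torch.from_numpy(states[idx])    # one contiguous CPU tensor
    if device.type == "cuda":
        batch = batch.pin_memory()           # enables an async H2D copy
    return batch.to(device, non_blocking=True)

model = DQN(n_actions=6).to(device)
q_values = model(sample_batch(batch_size))
print(q_values.shape)  # torch.Size([256, 6])
```

Beyond batching the transfers, profiling first (nvidia-smi for utilization, or torch.profiler for per-op timings) usually shows whether the real bottleneck is the emulator/environment loop rather than the network itself.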

submitted by /u/sequence_9