[P][D] PyTorch sparse training library. Sparse training = a fraction of all parameters updated each step. Unused parameters are saved to disk -> reduced GPU memory usage + increased training speed. If you are working with such an architecture, let us know and we’ll optimize and include it in our release.

Hello,

We are building a sparse training library for PyTorch. Sparse training means that only a fraction of the total parameters go through the forward pass, backward pass, and update on each step.
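
For a concrete picture of what we mean, here is a minimal sketch using a standard PyTorch embedding table with sparse gradients (this is not our library; the sizes and the objective are made up for illustration). Only the rows indexed by the current batch take part in the forward and backward pass and receive an update:

    import torch
    import torch.nn as nn

    # Embedding table with sparse gradients: only the rows indexed by the
    # current batch get a gradient and an update on this step.
    table = nn.Embedding(num_embeddings=100_000, embedding_dim=64, sparse=True)
    opt = torch.optim.SparseAdam(table.parameters(), lr=1e-3)

    batch_ids = torch.randint(0, 100_000, (256,))   # 256 of 100k rows touched this step
    loss = table(batch_ids).pow(2).mean()           # dummy objective for illustration
    loss.backward()                                 # sparse gradient over those 256 rows only
    opt.step()                                      # only those rows are updated
    opt.zero_grad()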

Keeping all parameters on the GPU takes up a lot of memory, and in some cases limits the total number of parameters your system can hold. Storing parameters on disk when they are not in use significantly reduces the GPU memory in use at any given moment, allowing you to fit many more parameters.
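
We won’t go into the mechanics of our release here, but the basic idea can be sketched with plain torch.save / torch.load: write an inactive parameter shard to disk, free its GPU memory, and reload it only when a step actually needs it. The directory and shard names below are made up for illustration:

    import os
    import torch

    OFFLOAD_DIR = "param_shards"                      # hypothetical location, illustration only
    device = "cuda" if torch.cuda.is_available() else "cpu"
    os.makedirs(OFFLOAD_DIR, exist_ok=True)

    def offload(name, param):
        # Write the shard to disk and drop the on-device copy.
        torch.save(param.detach().cpu(), os.path.join(OFFLOAD_DIR, name + ".pt"))

    def restore(name):
        # Bring the shard back onto the training device for the current step.
        return torch.load(os.path.join(OFFLOAD_DIR, name + ".pt"), map_location=device)

    shard = torch.randn(10_000, 64, device=device)    # one shard of a much larger parameter set
    offload("layer0_rows_0_10000", shard)
    del shard                                         # memory for the inactive shard is freed
    active = restore("layer0_rows_0_10000")           # reloaded only when this step needs it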

One concern is that disks generally don’t have low enough latency to make this work, but we were able to figure out a pipeline that does. Not only that: through a few PyTorch tricks we stumbled on along the way, we think our setup may even be (very slightly) faster, though we’ll need to run more tests to confirm that.
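
As a rough illustration of how disk latency can be hidden (this shows the general overlap pattern, not our actual pipeline): the next step’s shard is prefetched on a background thread while the current step computes, so the load runs in parallel with the GPU work. Everything below (paths, shapes, the stand-in loss) is hypothetical:

    import os
    import queue
    import threading
    import torch

    OFFLOAD_DIR = "param_shards"                      # same hypothetical location as above
    device = "cuda" if torch.cuda.is_available() else "cpu"
    os.makedirs(OFFLOAD_DIR, exist_ok=True)

    # Pretend some per-step shards already live on disk.
    paths = []
    for i in range(10):
        path = os.path.join(OFFLOAD_DIR, f"step_{i}.pt")
        torch.save(torch.randn(10_000, 64), path)
        paths.append(path)

    def prefetch_worker(paths, out_queue):
        # Load upcoming shards from disk while the main thread computes.
        for path in paths:
            out_queue.put(torch.load(path, map_location="cpu"))

    prefetched = queue.Queue(maxsize=2)               # small lookahead buffer
    threading.Thread(target=prefetch_worker, args=(paths, prefetched), daemon=True).start()

    for step in range(10):
        shard = prefetched.get().to(device)           # usually ready by the time it is needed
        loss = shard.pow(2).mean()                    # stand-in for the real forward/backward/update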

At the moment we need to adapt each architecture individually. If you or anyone you know has a sparse training architecture in mind, point us to the paper or code and we’ll optimize and include it.

So far we’ve only been able to find recommender systems that make use of such architectures. If you know of any other architectures, please point them out.

submitted by /u/Research2Vec