
[D] Sharing GPUs with host OS

Hello all, I have some experience building DL models with TensorFlow in Unix environments, leveraging platforms such as AWS and GCP. My work/company also provides me with Unix servers to build my models.

Now that I want to build my own DL rig, on Windows no less, I am interested in knowing how much performance TensorFlow loses on a GPU that is shared with the host OS. Does anyone have experience with this?

P.S. I have no choice but to use Windows, much as I would prefer a Linux system.

TL;DR: I want to do deep learning on GPUs on Windows with TensorFlow, and am worried about how much the host OS sharing the GPU will affect performance.
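One practical issue when the display and TensorFlow share a GPU is memory: by default TensorFlow tries to grab nearly all GPU memory, while the Windows desktop compositor also holds some. A minimal sketch (assuming a standard TensorFlow 2.x install; not from the original post) that enables on-demand memory growth so TensorFlow leaves headroom for the host OS:

```python
# Sketch: let TensorFlow allocate GPU memory on demand instead of
# reserving nearly all of it up front, leaving headroom for the
# Windows display driver that shares the same GPU.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
for gpu in gpus:
    # Must be set before any op touches the GPU.
    tf.config.experimental.set_memory_growth(gpu, True)

print("GPUs visible to TensorFlow:", gpus)
```

This does not address any compute-side slowdown from the OS using the GPU (which is typically small for a desktop that is mostly idle during training), but it avoids the common out-of-memory failures when the display has already claimed part of the card.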

submitted by /u/charpi123