[P] Latest Python + TensorFlow + CUDA / cuDNN optimized pip wheels

TL;DR: custom pip wheels for TF 2.0 / 2.1 for Py 3.7 / 3.8 and CUDA 10.1 / 10.2: https://github.com/inoryy/tensorflow-optimized-wheels

I’m sharing my pip wheels for TF built from source for some non-standard version combinations, notably Python 3.8 + CUDA 10.2 and Python 3.7 + CUDA 10.1. The latter is “compatible” with PyTorch 1.3, so you can have both frameworks share a single environment.
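
As a quick sanity check of the shared environment, something along these lines should confirm that both frameworks import and see the CUDA device (a minimal sketch, assuming the Python 3.7 + CUDA 10.1 wheel alongside PyTorch 1.3 as described above):

```
# Sketch: verify TensorFlow and PyTorch coexist in one environment
# and that both can see the CUDA device.
import tensorflow as tf
import torch

print("TF:", tf.__version__, "built with CUDA:", tf.test.is_built_with_cuda())
print("GPUs visible to TF:", tf.config.experimental.list_physical_devices("GPU"))
print("PyTorch:", torch.__version__, "CUDA available:", torch.cuda.is_available())
```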

The builds also enable various performance flags, such as XLA JIT support and modern CPU optimization flags, including SIMD support (AVX2, SSE4, FMA). If your CPU was released after roughly 2013, you’ll likely benefit from these in, e.g., data pre-processing. That said, if you have an Intel CPU you might not see a large difference, since TF now comes pre-built with MKL, which can dispatch the required intrinsics at runtime.
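
If you want to take advantage of the XLA build, JIT compilation is opt-in and can be switched on globally via the standard TF 2.x config API (a minimal sketch; whether it helps depends on your model):

```
# Sketch: opt in to XLA JIT auto-clustering globally; this only has an
# effect because the wheels are built with XLA support.
import tensorflow as tf

tf.config.optimizer.set_jit(True)
```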

Finally, I’ve enabled support for additional compute capabilities (5.0, 6.1, 7.0), which means these wheels should also work on older GPUs (the 7xx–9xx families).
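
To double-check that your card falls under one of the covered compute capabilities, one quick (if slightly unofficial) way in these TF versions is to inspect the local device list, which reports the capability in each GPU’s description string (a sketch relying on TF’s internal device_lib module):

```
# Sketch: GPU entries report "compute capability: X.Y" in physical_device_desc.
from tensorflow.python.client import device_lib

for dev in device_lib.list_local_devices():
    if dev.device_type == "GPU":
        print(dev.name, "->", dev.physical_device_desc)
```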

submitted by /u/Inori