
[D] Training NNs with FP16 in Tensorflow

Is there anybody with experience using FP16 in TensorFlow/Keras? According to some blogs, it is only available with a self-built version of TensorFlow, since FP16 requires CUDA 10 [1]. Graphics card benchmarks show significant speedups [2]. Would you already rely on FP16 in practice? Do we know that it is always better/faster? I hope TensorFlow supports CUDA 10 soon, so that no self-built version is needed.

What do you think about it?
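On the "is it always better/faster" question: one reason FP16 is not a drop-in win is its limited precision and range, which is why techniques like loss scaling exist. A minimal sketch of the numeric caveats (using NumPy's `float16` purely for illustration, not TensorFlow itself):

```python
import numpy as np

# float16 has a 10-bit mantissa: integers above 2048 can no longer be
# represented exactly, so small gradient updates can be silently lost.
x = np.float16(2048) + np.float16(1)
print(x)  # 2048.0 -- the +1 vanishes to rounding

# The largest finite float16 is 65504; larger values overflow to inf,
# which is why large losses/gradients must be scaled down in training.
print(np.float16(65504))  # 65500.0 (printed rounded), still finite
print(np.float16(70000))  # inf
```

So even where FP16 is faster on paper, pure-FP16 training can diverge without loss scaling or keeping master weights in FP32, which is what the mixed-precision recipes in the linked posts work around.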

Sources:

[1]: https://medium.com/@noel_kennedy/how-to-use-half-precision-float16-when-training-on-rtx-cards-with-tensorflow-keras-d4033d59f9e4

[2]: https://lambdalabs.com/blog/2080-ti-deep-learning-benchmarks/

submitted by /u/synzierly
