
[D] I’m writing a full C++ wrapper for Tensorflow, is anyone at all interested?

I asked around about this long before I began working on the project, and here’s a (non-exhaustive) list of the answers I got:

1- Building ML models in Python is faster and easier

2- There’s absolutely no use-case where you might need to train in C++

3- If performance is what you’re after, why not train in Python and then export your model for inference in C++?

4- If you insist on C++, why not use Caffe, MXNet, or PyTorch?

And my answers are the following:

1- It’s easier if you’re comfortable with Python. Personally, I hate Python, and I’m never comfortable working with dynamically typed languages. I may be old school, but I have 15+ years of C++ experience, and that makes C++ easier and faster for me.

2- Here are a few use cases off the top of my head:

- Using the library to perform tensor calculations on the GPU for non-machine-learning uses (such as audio DSP, ray tracing, etc.) while still benefiting from TF’s optimizations and distributed graph computation capabilities
- Training unsupervised ML models with data read from physical sensors in real time
- Training models that require lengthy data preprocessing or postprocessing that needs to be done on the CPU
- Online training of models on memory- or battery-constrained devices, where shipping a Python interpreter and a web server just to serve the inference model would be wasteful

3- See #2. There are many cases where this assumption doesn’t hold.

4- I tried MXNet for a year before eventually giving up; the library is so unstable and buggy it’s barely usable. TensorFlow, meanwhile, is truly remarkable when it comes to distributed graph computation and seems to be the most evolved in terms of portability: it works across OSs, GPUs, TPUs, different CPU architectures, etc. Not to mention the large community and active Google support.

With all that being said, I’m interested to know what everyone thinks. I plan on open-sourcing the wrapper eventually, but that would require some extra work on my part: proper documentation, working examples, etc. I’d push that back if it turns out there’s absolutely no interest in the project.

What are your thoughts?

submitted by /u/memento87