
[D] I’m writing a full C++ wrapper for TensorFlow, is anyone at all interested?

I asked this question around long before I began working on this project, and here’s a (non-exhaustive) list of the answers I got:

1- Building ML models in Python is faster and easier

2- There’s absolutely no use-case where you might need to train in C++

3- If performance is what you’re after, why not train in Python and then export your model for inference in C++?

4- If you insist on C++, why not use Caffe, MXNet, or PyTorch?

And my answers are the following:

1- It’s easier if you’re comfortable with Python. Personally, I hate Python, and I am never comfortable working in dynamically typed languages. I may be old school, but I have 15+ years of C++ experience, and that makes C++ easier and faster for me.

2- Here are a few use cases off the top of my head:

– Using the library to perform tensor calculations on the GPU for non-machine-learning uses (such as audio DSP, ray tracing, etc.) while still benefiting from TF’s optimizations and distributed graph computation capabilities

– Training unsupervised ML models with data read from physical sensors in real time

– Training models that require lengthy data preprocessing or postprocessing that needs to be done on the CPU

– Online training of models on memory- or battery-constrained devices, where having a Python interpreter and a web server to serve the inference model would be wasteful

3- See #2. There are many cases where this assumption doesn’t hold.

4- I tried MXNet for a year before eventually giving up; the library is so unstable and buggy that it’s barely usable. TensorFlow, on the other hand, is truly remarkable when it comes to distributed graph computation, and it seems to be the most evolved in terms of portability: it works across OSs, GPUs, TPUs, different CPU architectures, etc. Not to mention the large community and active support from Google.

With all that being said, I’m interested to know what everyone thinks. I plan on open-sourcing the wrapper eventually, but that would require some extra work on my part: proper documentation, working examples, etc. I’d push that further out if I knew there was absolutely no interest in the project.

What are your thoughts?

submitted by /u/memento87


Toronto AI is a social and collaborative hub to unite AI innovators of Toronto and surrounding areas. We explore AI technologies in digital art and music, healthcare, marketing, fintech, vr, robotics and more. Toronto AI was founded by Dave MacDonald and Patrick O'Mara.