[P] Run inference with zero-dependency C code and ONNX

Hi! I would just like to share a project that I have been working on for some time:

A bit of background (ONNX)

In short, ONNX provides an Open Neural Network Exchange format. This format describes a large set of operators that can be combined to build virtually any machine learning model you have ever heard of, from a simple neural network to complex deep convolutional networks. Some examples of operators are matrix multiplication, convolution, addition, max pooling, sine, cosine, you name it! A standardised set of operators is provided here. So we can say that ONNX provides a layer of abstraction over ML models, which makes all frameworks compatible with one another. Exporters are provided for a huge variety of frameworks (PyTorch, TensorFlow, Keras, Scikit-Learn), so if you want to convert a model from Keras to TensorFlow, you just use the Keras exporter to go Keras->ONNX and then the TensorFlow importer to go ONNX->TensorFlow.
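To make the operator idea concrete, here is what one of those standardised operators (MatMul) looks like when written as a plain C99 kernel. This is only an illustrative sketch: the function name and the row-major float layout are my assumptions, not code taken from any particular runtime:

```c
/* Illustrative sketch of an ONNX MatMul kernel in plain C99.
 * Assumes row-major float tensors; names are made up for this example. */
#include <stddef.h>

/* C = A * B, where A is m x k, B is k x n, C is m x n, all row-major. */
static void op_matmul(const float *a, const float *b, float *c,
                      size_t m, size_t k, size_t n)
{
    for (size_t i = 0; i < m; i++) {
        for (size_t j = 0; j < n; j++) {
            float acc = 0.0f;
            for (size_t p = 0; p < k; p++)
                acc += a[i * k + p] * b[p * n + j];
            c[i * n + j] = acc;
        }
    }
}
```

A runtime then only needs one such kernel per operator in the standard, and any exported model becomes a graph of calls into these kernels.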

The project

There are many open-source repos that can run inference on ML models with C code, but most of them are framework-specific, so you are tied to TensorFlow or whatever framework they target. The idea behind this project is to have a “backend” that can run inference on ONNX models. You might have heard of “onnxruntime”, which provides runtimes for ONNX models in different languages, like R, Go, or even C++, but the idea of this project is a pure C99 runtime without any external dependencies, one that can be compiled with old compilers for any device without fancy hardware accelerators, multiple cores, or GPUs.
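To give an idea of what “pure C99, zero dependencies” could look like in practice, here is a hypothetical usage sketch. Every name in it (onnx_load_model, onnx_run, onnx_free_model) is an assumption made up for illustration, not the project's actual API:

```c
/* Hypothetical usage of a zero-dependency ONNX runtime in C99.
 * All identifiers below are illustrative, not the project's real API. */
#include <stdio.h>

typedef struct onnx_model onnx_model;            /* opaque parsed graph   */
onnx_model *onnx_load_model(const char *path);   /* parse .onnx from disk */
int onnx_run(const onnx_model *m,
             const float *input, float *output); /* execute graph nodes   */
void onnx_free_model(onnx_model *m);

int main(void)
{
    onnx_model *model = onnx_load_model("mnist.onnx");
    if (!model) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    float input[28 * 28] = {0.0f};  /* a flattened 28x28 grayscale digit */
    float output[10];               /* one score per digit class         */

    /* Walk the graph node by node, dispatching each ONNX operator
     * to its plain-C kernel (MatMul, Conv, Add, MaxPool, ...). */
    onnx_run(model, input, output);

    for (int i = 0; i < 10; i++)
        printf("digit %d: %f\n", i, output[i]);

    onnx_free_model(model);
    return 0;
}
```

Because everything here is standard C99 with no external dependencies, a program like this would build with nothing more than `cc -std=c99` on essentially any target.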

What’s next?

The project is at a very early stage and we are looking for contributors, both for C code and for general ideas. So far, you can run inference on the well-known MNIST model for handwritten digit recognition. Inside the repo you can find some specific tasks and documentation about what we have so far.

submitted by /u/jbj-fourier