[P] Run inference with zero-dependency C code and ONNX

Hi! I'd just like to share a project that I have been working on for some time:

A bit of background (ONNX)

In short, ONNX provides an Open Neural Network Exchange format. This format describes a large set of operators that can be combined to build just about every type of machine learning model you have ever heard of, from a simple neural network to complex deep convolutional networks. Some examples of operators are matrix multiplication, convolution, addition, max pooling, sine, cosine, you name it! A standardised set of operators is provided here. So we can say that ONNX provides a layer of abstraction over ML models, which makes all frameworks compatible with each other. Exporters are provided for a huge variety of frameworks (PyTorch, TensorFlow, Keras, Scikit-Learn), so if you want to convert a model from Keras to TensorFlow, you just export Keras->ONNX and then import ONNX->TensorFlow.
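As a toy illustration of the operator-set idea (this is not the actual ONNX API; the op names and graph encoding here are made up for the sketch), a model is just a graph of standard operators evaluated in order:

```python
import numpy as np

# A tiny stand-in for an operator set: each op is a pure function on arrays.
OPS = {
    "MatMul": lambda a, b: a @ b,
    "Add":    lambda a, b: a + b,
    "Relu":   lambda x: np.maximum(x, 0.0),
}

def run_graph(nodes, inputs):
    """Evaluate a list of (op_name, input_names, output_name) nodes in order."""
    env = dict(inputs)
    for op_name, in_names, out_name in nodes:
        env[out_name] = OPS[op_name](*(env[n] for n in in_names))
    return env

# A one-layer network (relu(x @ W + b)) expressed as an operator graph.
nodes = [
    ("MatMul", ("x", "W"), "xw"),
    ("Add",    ("xw", "b"), "z"),
    ("Relu",   ("z",),      "y"),
]
env = run_graph(nodes, {
    "x": np.array([[1.0, -2.0]]),
    "W": np.array([[1.0], [1.0]]),
    "b": np.array([0.5]),
})
```

Any framework that can emit such a graph of standard ops, and any runtime that can evaluate one, become interoperable — that is the abstraction layer the post describes.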

The project

There are many open-source repos that can run inference on ML models with C code, but most of them are framework-specific, so you are tied to TensorFlow or whatever framework they target. The idea behind this project is to have a “backend” that can run inference on ONNX models. You might have heard of onnxruntime, which provides runtimes for ONNX models in different languages, like R, Go or even C++, but the idea of this project is a pure C99 runtime without any external dependencies, one that compiles with old compilers for any device, without fancy hardware accelerators, multicore or GPUs.

What’s next?

The project is in a very early stage and we are looking for contributors, both for C code and general ideas. So far, you can run inference on the well-known MNIST model for handwritten digit recognition. Inside the repo you can find some specific tasks and documentation about what we have so far.

submitted by /u/jbj-fourier
[link] [comments]

[N] AI articles and best ICCV2019 papers reviewed on Computer Vision News of December (with codes!)


Here is Computer Vision News of December 2019, published by RSIP Vision:
HTML5 version (recommended)
PDF version

52 awesome pages around AI. Free subscription for all on page 52.
It includes the BEST OF ICCV 2019, selected among the best papers presented at the conference in Seoul.
Exclusive interviews and technical articles, with codes!


submitted by /u/Gletta
[link] [comments]

[R] CenterMask : Real-Time Anchor-Free Instance Segmentation

We propose a simple yet efficient anchor-free instance segmentation method, called CenterMask, that adds a novel spatial attention-guided mask (SAG-Mask) branch to an anchor-free one-stage object detector (FCOS), in the same vein as Mask R-CNN. Plugged into the FCOS object detector, the SAG-Mask branch predicts a segmentation mask on each box with a spatial attention map that helps to focus on informative pixels and suppress noise. We also present an improved VoVNetV2 with two effective strategies: (1) residual connections to alleviate the saturation problem of larger VoVNets, and (2) effective Squeeze-Excitation (eSE) to deal with the information loss problem of the original SE. With SAG-Mask and VoVNetV2, we design CenterMask and CenterMask-Lite, targeted at large and small models, respectively. CenterMask outperforms all previous state-of-the-art models at a much faster speed. CenterMask-Lite also achieves 33.4% mask AP / 38.0% box AP, outperforming YOLACT by 2.6 / 7.0 AP, respectively, at over 35 fps on a Titan Xp. We hope that CenterMask and VoVNetV2 can serve as solid baselines for real-time instance segmentation and as a backbone network for various vision tasks, respectively.

submitted by /u/tumaini-lee
[link] [comments]

[Project] DCGANs – Adding more convolutions

I’m training a DCGAN on a dataset of 10k images. I want to experiment with adding and taking away convolutions in an attempt to balance out the generator and discriminator. What’s the rule for doing this in practice?

Here is my generator:

self.main = nn.Sequential(
    nn.ConvTranspose2d(100, 512, 4, 1, 0, bias=False),
    nn.BatchNorm2d(512),   # We normalize all the features along the dimension of the batch.
    nn.ReLU(True),         # We apply a ReLU rectification to break the linearity.
    nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),
    nn.BatchNorm2d(256),   # We normalize again.
    nn.ReLU(True),         # We apply another ReLU.
    nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),
    nn.BatchNorm2d(128),   # We normalize again.
    nn.ReLU(True),         # We apply another ReLU.
    nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),
    nn.BatchNorm2d(64),    # We normalize again.
    nn.ReLU(True),         # We apply another ReLU.
    nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False),
    nn.Tanh()
)

What input and output values should I have for the 6th layer?
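Not an answer to which layer count is best, but two constraints always hold when inserting a layer: its in_channels must equal the previous layer's out_channels, and the spatial size evolves by the standard ConvTranspose2d formula (h_in - 1) * stride - 2 * padding + kernel. A plain-Python sanity check of the generator above (layer tuples transcribed from the code):

```python
def convtranspose2d_out(h_in, kernel, stride, padding):
    """Output spatial size of ConvTranspose2d (dilation=1, output_padding=0 assumed)."""
    return (h_in - 1) * stride - 2 * padding + kernel

# (in_channels, out_channels, kernel, stride, padding) for the five layers above.
layers = [
    (100, 512, 4, 1, 0),
    (512, 256, 4, 2, 1),
    (256, 128, 4, 2, 1),
    (128, 64,  4, 2, 1),
    (64,  3,   4, 2, 1),
]
h = 1  # the latent vector enters as a 1x1 "image" with 100 channels
for c_in, c_out, k, s, p in layers:
    h = convtranspose2d_out(h, k, s, p)  # spatial size: 1 -> 4 -> 8 -> 16 -> 32 -> 64
```

So any 6th layer inserted before the final one must take 64 input channels, and a kernel-3, stride-1, padding-1 choice would leave the 32x32 spatial size unchanged — but whether that actually balances G and D is an empirical question.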

submitted by /u/CasualTrip
[link] [comments]

[D] Efficient workflow with colab/jupyter?

I’m struggling to use Google’s Colab facility efficiently.

My normal workflow is:

  1. Fiddle around in JupyterLab until I have some result.
  2. Move code into standalone .py libraries; clean up the JupyterLab notebook to call the library functions.
  3. Create testcases that test the standalone .py libraries, refactor code more.

This tandem between Jupyter and a traditional .py IDE helps produce code that is clean and testable. My notebooks tend to be messy; they aren’t unit-tested and might not work anymore in the future.

This doesn’t work as well with Google Colab – it’s not that easy to move code into libraries and have it available in Colab. But I would like to use the computing power that comes with Colab.

What is your workflow with colab?

submitted by /u/zuuuhkrit
[link] [comments]

[N] NeurIPS 2019 videos

Just FYI, in case you missed it (this can also be found on the conference page): this year the videos are not on the Facebook page but here:
In addition, brief paper overview videos are now linked on the website:
The conference started on the 8th of December, with the Expo (Industry) Day on the 8th. The talks started on the 9th, and the tutorials, among other things, are already online.

submitted by /u/blissfox-red
[link] [comments]

[D] Advice on hourly rate as a student freelancer?

Hello! I need some advice about hourly rates for freelance work from machine learners. I’ve looked online a bit and the answers are all over the place or are generally for data scientists with degrees.

Right now I am still studying for my master’s degree (eventually a PhD) and reading papers and practicing ML. Recently one of my projects got really popular on GitHub, and as a consequence I’ve been contacted to help with implementing/designing models.

What should I charge for my skills?

Edit: For the same reasons, I was previously asked to join a startup for 90k USD after passing many interviews, but that feels so out of proportion to me.

submitted by /u/greencent
[link] [comments]

[D] Neural Computers for Text Generation

Has anyone ever applied neural-computer-based models, e.g. the Neural Turing Machine or the Differentiable Neural Computer, to text generation? In a brief search I couldn’t find any such work.

In general these models should do a good job, as they are essentially RNNs with a (theoretically) unlimited memory to read from and write to, and they have indeed been shown to generalize better with respect to sequence length.
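For intuition about why the external memory stays trainable (a schematic numpy sketch of content-based addressing, not any particular paper's implementation), reads are soft: every memory row contributes, weighted by similarity to a query key, so the whole operation is differentiable:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def content_read(memory, key, beta=5.0):
    """Soft read: cosine-similarity weights over memory rows, sharpened by beta."""
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    w = softmax(beta * sims)   # differentiable addressing weights, sum to 1
    return w @ memory, w       # read vector is a weighted sum of memory rows

memory = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [1.0, 1.0]])
read, weights = content_read(memory, np.array([1.0, 0.0]))
```

The first row matches the key best, so it dominates the read vector; writes work analogously with soft weights, which is what lets gradients flow through memory across long sequences.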

One reason such models haven’t been applied to text generation yet could be that they are simply too complex and a burden to work with. It’s not string-theory-level difficulty, but the barriers to understanding and actually working with them are much higher than for the usual CNN or RNN. Combined with the fact that the NLP community is much smaller than the CV or RL communities, there is probably just a low chance that someone with the necessary skills and interest comes across these ideas.

submitted by /u/trashcoder
[link] [comments]

Plug yourself into AI and don't miss a beat


Toronto AI is a social and collaborative hub to unite AI innovators of Toronto and surrounding areas. We explore AI technologies in digital art and music, healthcare, marketing, fintech, vr, robotics and more. Toronto AI was founded by Dave MacDonald and Patrick O'Mara.