
Category: Reddit MachineLearning

[P] Reinforcement Learning / Game Theory on Urban Planning Problems

Greetings,

I’ve been working on the use of machine learning models for urban planning problems for my PhD. My earlier work focused on the use of regression-based models (ANN, GPR, etc.) but due to changes in funding, I’m having to switch to reinforcement learning / game theoretic models for my current work. However, I haven’t been able to find collaborators from the RL domain in my university, and my advisor is not an expert in it either.

Our project currently involves path planning or resource allocation in stochastic environments (e.g., snow plowing, police placement [not predictive policing], trash pickup). If there is anyone in this subreddit who has experience in these domains or in RL in general, and you're interested in collaborating, please reach out.

I could try to do a lot of literature surveys to make sure I'm not reinventing the wheel or going in the wrong direction, but I strongly believe that subject experts would be able to provide much better insights.
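To ground the problem class for potential collaborators, here is a minimal tabular Q-learning sketch on a toy stochastic grid. The "plow routing" framing is a hypothetical stand-in, not the project's actual formulation:

```python
import random

# Toy stand-in for a stochastic routing problem: an agent "plows" a 4x4 grid,
# starting at (0, 0) and trying to reach the depot at (3, 3).
SIZE = 4
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up
GOAL = (SIZE - 1, SIZE - 1)

def step(state, action, rng):
    # 10% of the time the action "slips" to a random one (stochastic dynamics).
    if rng.random() < 0.1:
        action = rng.choice(ACTIONS)
    r, c = state[0] + action[0], state[1] + action[1]
    nxt = (min(max(r, 0), SIZE - 1), min(max(c, 0), SIZE - 1))
    return nxt, (10.0 if nxt == GOAL else -1.0)

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(r, c): [0.0] * len(ACTIONS) for r in range(SIZE) for c in range(SIZE)}
    for _ in range(episodes):
        s = (0, 0)
        while s != GOAL:
            # epsilon-greedy action selection
            a = rng.randrange(len(ACTIONS)) if rng.random() < eps else max(
                range(len(ACTIONS)), key=lambda i: q[s][i])
            s2, reward = step(s, ACTIONS[a], rng)
            # one-step temporal-difference update
            q[s][a] += alpha * (reward + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
```

Real snow-plow or pickup routing would of course need a much richer state (road network, demand, multiple vehicles), which is exactly where domain collaboration helps.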

submitted by /u/AdmiralLunatic
[link] [comments]

Machine Learning Infrastructure [Research]

I just started a new position at a small AI startup as a systems engineer. Historically, my roles have been more traditional IT support roles in Windows environments.

We have several data science and machine learning teams for different products and projects. They all seem to use different technologies at the moment. We also have a lot of bare-metal hardware lying around that is not inventoried or monitored; it seems to be under-utilized in some places while other hardware has a long waitlist.

I had a meeting with the managers and leads of each team to figure out what they were doing, using, etc. All of them have decided to transition to Airflow and Dask. Some teams require heavy CPU and storage while others require heavy GPU for their jobs.

This is my first venture into machine learning, so I'm trying to educate myself. We have been discussing gathering up unused hardware and building one or more clusters to provide organized, consistent, scheduled resources to the teams for their workflows. I am thinking of something like containers as a service, where they can pick their CPU/GPU requirements and spin up instances for processing on demand, without having to go through Ops; Ops just maintains the infrastructure and makes sure there is enough available to the teams.
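The "pick your CPU/GPU requirements, get an instance" idea is essentially resource-aware scheduling. A toy sketch of just the matching logic (illustrative only, not a real product; in practice Kubernetes device plugins or Dask worker resources handle this):

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpus: int
    gpus: int

@dataclass
class Job:
    name: str
    cpus: int
    gpus: int

def schedule(jobs, nodes):
    """First-fit: place each job on the first node with enough free CPU/GPU."""
    placement = {}
    for job in jobs:
        for node in nodes:
            if node.cpus >= job.cpus and node.gpus >= job.gpus:
                node.cpus -= job.cpus   # reserve capacity on the node
                node.gpus -= job.gpus
                placement[job.name] = node.name
                break
        else:
            placement[job.name] = None  # waitlisted: no node has capacity
    return placement

nodes = [Node("cpu-box", cpus=64, gpus=0), Node("gpu-box", cpus=16, gpus=4)]
jobs = [Job("etl", cpus=32, gpus=0), Job("train", cpus=8, gpus=2),
        Job("train2", cpus=8, gpus=4)]
placement = schedule(jobs, nodes)
print(placement)  # {'etl': 'cpu-box', 'train': 'gpu-box', 'train2': None}
```

A real scheduler adds quotas, preemption, and bin-packing heuristics, but the CPU-heavy vs. GPU-heavy split described above maps directly onto this kind of node/job matching.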

For those of you working in machine learning and data science, does this sound like a good solution? Are there products out there y'all use that function this way? I've been reading about some of VMware's vCloud solutions and found an article about containers/Kubernetes as a service that also allowed traditional VMs to reside in the cluster, but now I can't find it.

I would appreciate any info, suggestions, articles, or products that may help me empower our teams. I would love to really provide some solid infrastructure that is productive and easy for them to use.

Thanks!

submitted by /u/gennyact
[link] [comments]

[D] Can GANs generate new animals?

I googled but couldn’t find anything.

We've seen GANs trained on ImageNet that were conditioned on the labels, so they can generate dogs or ants, for example. But what if you just conditioned it on animal/not animal?

Could you get a GAN that can think up new animal species that we’ve never seen?

Or you could even play around with the specificity, so you can train it on reptiles for example, instead of specifically snakes or turtles.
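To make "playing with the specificity" concrete: in a conditional GAN the generator just takes the label concatenated with the noise vector, so a coarser label (animal vs. not-animal, or reptile vs. mammal) simply means fewer conditioning dimensions. A shape-level sketch with untrained toy weights (no real GAN training loop, and the single linear layer is a stand-in for a deep generator):

```python
import numpy as np

rng = np.random.default_rng(0)
NOISE_DIM, LABEL_DIM, IMG_DIM = 64, 2, 28 * 28  # LABEL_DIM=2: animal / not-animal

# Toy single-layer "generator"; a real one is a deep transposed-conv network.
W = rng.standard_normal((NOISE_DIM + LABEL_DIM, IMG_DIM)) * 0.01

def generate(label_onehot, n=4):
    z = rng.standard_normal((n, NOISE_DIM))      # sample latent noise
    cond = np.tile(label_onehot, (n, 1))         # broadcast the condition
    x = np.concatenate([z, cond], axis=1)        # condition by concatenation
    return np.tanh(x @ W)                        # fake "images" in [-1, 1]

animal = generate(np.array([1.0, 0.0]))
not_animal = generate(np.array([0.0, 1.0]))
```

Whether a generator trained only on the coarse "animal" label would interpolate to plausible *new* species, rather than memorized blends of training animals, is exactly the open question in the post.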

submitted by /u/Kavillab
[link] [comments]

[R] Depth Maps Inpainting with GANs

arXiv: https://arxiv.org/abs/1912.03992

GitHub: https://github.com/nuneslu/VeIGAN

We earlier proposed a GAN network to deal with disparity-data inpainting and object removal, in a paper published at the Intelligent Vehicles Symposium 2019 (IV'19) (https://ieeexplore.ieee.org/document/8814157). We have now published on arXiv a more in-depth analysis of our latest network version, which is also open-sourced.

Third column: the objects removed using the Contextual Attention network; last column: our results removing the same objects.

submitted by /u/matiaslucas
[link] [comments]

[Research] Question Regarding “Deep Convolutional Spiking Neural Networks for Image Classification”

Paper can be found here: https://arxiv.org/pdf/1903.12272.pdf

I am currently investigating research into Spiking Neural Networks that use spike-timing-dependent plasticity (STDP) learning, initially for image processing.

I have now read several papers that discuss "spiking convolutional neural networks", and I cannot understand how these particular networks can be convolutional in nature, at least in the same way that backprop-trained CNNs are.

The kernels in the standard CNN are trained by backprop against every possible patch or feature in the preceding layer. So a kernel can detect a feature at any point in the preceding layer.

While you can definitely use "kernels" with these spiking networks, they would only apply to a fixed patch of the image, right? If you wanted something that ran over all the patches, your "kernel neuron" would just end up with a link from every neuron in the previous layer, and it would be a mess. Or you would need to duplicate this kernel neuron some number of times depending on the size of the input, and find some way to keep these duplicates sharing the same input weights.

What am I misunderstanding here? Do you simply end up with a whole pile of duplicate kernels across all the patches? This would definitely work but is it optimal?
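For reference, the "one kernel, every patch" weight sharing of a standard CNN is just this (a minimal numpy valid-convolution; nothing SNN-specific):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide one shared kernel over every patch: classic CNN weight sharing."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            patch = image[r:r + kh, c:c + kw]
            out[r, c] = np.sum(patch * kernel)  # same weights at every location
    return out

image = np.zeros((5, 5))
image[2, :] = 1.0                      # a horizontal bar
kernel = np.array([[1.0, 1.0, 1.0]])   # a 1x3 horizontal "feature detector"
response = conv2d_valid(image, kernel)  # strong response only along row 2
```

In backprop CNNs this sharing is free because the kernel is just one weight tensor reused at every location; in a spiking network with per-synapse STDP, each location's synapses are physically distinct, which is exactly why the duplicated-kernel question arises.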

submitted by /u/bitcoin_analysis_app
[link] [comments]

[P] Run inference with zero-dependency C code and ONNX

Hi! I would just like to share with you a project that I have been working on for some time:

A bit of background (ONNX)

In short, ONNX provides an Open Neural Network Exchange format. This format describes a huge set of operators that can be combined to build every type of machine learning model you have ever heard of, from a simple neural network to complex deep convolutional networks. Some examples of operators are matrix multiplication, convolution, addition, max pooling, sine, cosine, you name it! A standardised set of operators is provided here. So we can say that ONNX provides a layer of abstraction over ML models, which makes all frameworks compatible with one another. Exporters are provided for a huge variety of frameworks (PyTorch, TensorFlow, Keras, Scikit-Learn), so if you want to convert a model from Keras to TensorFlow, you just use the Keras exporter to go Keras->ONNX and then use the importer to go ONNX->TensorFlow.
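A toy illustration of what "a set of operators mixed into a model" means: a graph of named ops evaluated in order. This pure-Python sketch is a far cry from real ONNX (no protobuf, no shape inference), but it is the same shape of idea that an ONNX backend implements:

```python
import numpy as np

# A handful of ONNX-like operators, each a plain function.
OPS = {
    "MatMul": lambda a, b: a @ b,
    "Add": lambda a, b: a + b,
    "Relu": lambda a: np.maximum(a, 0.0),
}

def run_graph(nodes, tensors):
    """Evaluate (op, input_names, output_name) triples in topological order."""
    for op, inputs, output in nodes:
        tensors[output] = OPS[op](*(tensors[name] for name in inputs))
    return tensors

# y = relu(x @ W + b): a one-layer network expressed as an operator graph.
graph = [
    ("MatMul", ["x", "W"], "h"),
    ("Add", ["h", "b"], "h2"),
    ("Relu", ["h2"], "y"),
]
tensors = run_graph(graph, {
    "x": np.array([[1.0, -2.0]]),
    "W": np.array([[1.0], [1.0]]),
    "b": np.array([0.5]),
})
```

A framework exporter's job is to serialize its own layers into a graph like `graph` above, and a backend's job is to implement the operator set and walk the graph.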

The project

There are many open-source repos that can run inference on ML models with C code, but most of them are framework-specific, so you are tied to TensorFlow or whatever framework. The idea behind this project is to have a "backend" that can run inference on ONNX models. You might have heard of "onnxruntime", which provides runtimes to run inference on ONNX models in different languages, like R, Go, or even C++, but the idea of this project is a pure C99 runtime without any external dependencies, one that can be compiled with old compilers for any device without fancy hardware accelerators, multiple cores, or GPUs.

What’s next?

The project is at a very early stage and we are looking for contributors, both for C code and for general ideas. So far, you can run inference on the well-known MNIST model for handwritten-digit recognition. Inside the repo you can find some specific tasks and documentation about what we have so far.

submitted by /u/jbj-fourier
[link] [comments]

[N] AI articles and best ICCV2019 papers reviewed on Computer Vision News of December (with codes!)

Here is Computer Vision News of December 2019, published by RSIP Vision:
HTML5 version (recommended)
PDF version

52 awesome pages around AI. Free subscription for all on page 52.
It includes the BEST OF ICCV 2019, selected among the best papers presented at the conference in Seoul.
Exclusive interviews and technical articles, with codes!

Enjoy!

submitted by /u/Gletta
[link] [comments]

[R] CenterMask : Real-Time Anchor-Free Instance Segmentation

We propose a simple yet efficient anchor-free instance segmentation method, called CenterMask, that adds a novel spatial attention-guided mask (SAG-Mask) branch to the anchor-free one-stage object detector FCOS, in the same vein as Mask R-CNN. Plugged into the FCOS detector, the SAG-Mask branch predicts a segmentation mask on each box, using a spatial attention map that helps focus on informative pixels and suppress noise. We also present an improved backbone, VoVNetV2, with two effective strategies: (1) residual connections to alleviate the saturation problem of larger VoVNets, and (2) effective Squeeze-Excitation (eSE) to deal with the information loss problem of the original SE. With SAG-Mask and VoVNetV2, we design CenterMask and CenterMask-Lite, targeted at large and small models, respectively. CenterMask outperforms all previous state-of-the-art models at a much faster speed. CenterMask-Lite also achieves 33.4% mask AP / 38.0% box AP, outperforming YOLACT by 2.6 / 7.0 AP, respectively, at over 35 fps on a Titan Xp. We hope that CenterMask and VoVNetV2 can serve as solid baselines for real-time instance segmentation and as a backbone network for various vision tasks, respectively.
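For readers unfamiliar with SE-style blocks, the eSE gating described in the abstract (a single channel-preserving FC rather than SE's two reduced FCs, so no channel information is lost) reduces to roughly the following. This is a numpy sketch, not the authors' code, and the hard-sigmoid activation is an assumption on my part:

```python
import numpy as np

def hsigmoid(x):
    # hard sigmoid: cheap piecewise-linear approximation of the sigmoid
    return np.clip(x / 6.0 + 0.5, 0.0, 1.0)

def ese_block(x, w, b):
    """effective Squeeze-Excitation sketch.
    x: feature map (C, H, W); w: (C, C) FC weight (channel dim preserved,
    unlike original SE's bottleneck); b: (C,) bias."""
    s = x.mean(axis=(1, 2))            # squeeze: global average pool -> (C,)
    gate = hsigmoid(w @ s + b)         # excite: a single FC + gating
    return x * gate[:, None, None]     # channel-wise re-weighting of x

rng = np.random.default_rng(0)
C, H, W = 8, 4, 4
x = rng.standard_normal((C, H, W))
out = ese_block(x, w=np.eye(C), b=np.zeros(C))  # identity FC for illustration
```

The residual connections mentioned as strategy (1) are separate: they wrap whole VoVNet blocks, while eSE re-weights channels inside a block.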

submitted by /u/tumaini-lee
[link] [comments]