
Category: Reddit MachineLearning

[P] A library to do deep learning with spiking neural networks

Spiking neural networks are not currently the focus of most machine learning researchers, but there are several reasons why they are of interest:

  • Spike-based communication is believed to be the primary way in which biological neurons interact, so spiking neuron models are of interest to computational neuroscientists.

  • Special-purpose hardware (often called brain-inspired or neuromorphic hardware) can potentially deliver better power/performance numbers than deep learning hardware accelerators.

That being said, there are few modern machine-learning-focused libraries available for exploring spiking neural networks. We are in the early stages of creating one based on PyTorch (https://github.com/norse/norse). What we’ve published so far is enough to explore supervised learning on small datasets like MNIST and CIFAR-10.
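As a rough illustration of the kind of dynamics such a library simulates, here is a minimal leaky integrate-and-fire (LIF) step in plain PyTorch. This is a sketch, not norse’s actual API; all names and constants are illustrative.

    import torch

    def lif_step(v, i_in, tau=20.0, dt=1.0, v_th=1.0, v_reset=0.0):
        # Leaky integration: the membrane potential decays while
        # integrating the input current.
        v = v + (dt / tau) * (i_in - v)
        # A neuron emits a spike when its potential crosses threshold.
        spikes = (v >= v_th).float()
        # Neurons that spiked are reset; the rest keep their potential.
        v = torch.where(spikes.bool(), torch.full_like(v, v_reset), v)
        return spikes, v

    # Drive 5 neurons with a constant input current for 100 time steps;
    # with these constants each neuron spikes periodically.
    v = torch.zeros(5)
    for t in range(100):
        spikes, v = lif_step(v, torch.full((5,), 1.5))

Training such models end to end also requires a surrogate gradient for the non-differentiable spike function, which is one of the main things a dedicated library provides.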

Any feedback or comments would be appreciated. Also happy to discuss related state of the art research.

submitted by /u/caprica

[Discussion] Speech synthesis – text-to-speech vs speech-to-speech

I’ve recently started looking into speech synthesis, and noticed that most of the focus is on text-to-speech.

I haven’t had much luck finding anything on speech-to-speech – that is, changing the voice of an audio clip to that of another person (e.g. by passing a voice embedding as an input to the model). I’m not sure what the actual term for it is. Is there much happening in this space, and if so, any recommendations on where to start? While not broadly applicable, it seems (on the surface) like it’d be a lot easier than TTS.
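For what it’s worth, here is a minimal sketch of the conditioning idea described above: an encoder-decoder that maps source acoustic features plus a target-speaker embedding to converted features. All module names, dimensions, and the overall architecture are hypothetical, just to make the idea concrete.

    import torch
    import torch.nn as nn

    class VoiceConverter(nn.Module):
        """Map source-speech features plus a target-speaker embedding to
        acoustic features (e.g. mel-spectrogram frames) in the target voice."""
        def __init__(self, n_mels=80, spk_dim=256, hidden=512):
            super().__init__()
            self.content_encoder = nn.GRU(n_mels, hidden, batch_first=True)
            self.decoder = nn.GRU(hidden + spk_dim, hidden, batch_first=True)
            self.out = nn.Linear(hidden, n_mels)

        def forward(self, source_mels, speaker_embedding):
            # Encode the source utterance into content features.
            content, _ = self.content_encoder(source_mels)  # (B, T, hidden)
            # Broadcast the speaker embedding across time and condition on it.
            spk = speaker_embedding.unsqueeze(1).expand(-1, content.size(1), -1)
            frames, _ = self.decoder(torch.cat([content, spk], dim=-1))
            return self.out(frames)  # (B, T, n_mels)

    model = VoiceConverter()
    mels = torch.randn(2, 120, 80)   # a batch of source utterances
    spk = torch.randn(2, 256)        # target-speaker embeddings
    converted = model(mels, spk)     # (2, 120, 80)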

submitted by /u/Helpful-Season

[Research] UCL Professor & MIT/Princeton ML Researchers Create YouTube Series on ML/RL — Bringing You Up To Speed With SOTA.

Hey everyone,

We started a new YouTube channel dedicated to machine learning. For now, we have four videos introducing machine learning, some maths, and deep RL. We are planning to grow this with various interesting topics including optimisation, deep RL, probabilistic modelling, normalising flows, deep learning, and many others. We’d also appreciate feedback on topics you would like to hear about, so we can make videos dedicated to them. Check it out here: https://www.youtube.com/channel/UC4lM4hz_v5ixNjK54UwPEVw/

and tell us what you want to hear about 😀 Thanks!!

Now, who are we? I am an honorary lecturer at UCL with 12 years of experience in machine learning, and my colleagues include MIT, Penn, and UCL graduates:

Haitham – https://scholar.google.com/citations?user=AE5suDoAAAAJ&hl=en

Yaodong – https://scholar.google.co.uk/citations?user=6yL0xw8AAAAJ&hl=en

Rasul – https://scholar.google.com/citations?user=Zcov4c4AAAAJ&hl=en

submitted by /u/haithamb123

[P] DL Help! – Animal detection robot

I’m a student currently trying to create a DL model that can identify a fox in an image. The initial plan was to have a robot detect a fox and chase it. As a starting point, I’d want a DL model that is actually able to identify a fox!

The biggest problem at the moment is gathering training data. Does anyone have any advice on where or how I can get many pictures of foxes to use for training? Would using videos and splitting them into frames work? I initially wanted to use night-vision footage, as the device would operate at night; is there any way to convert normal pictures to night vision?
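On the frames idea, splitting video into frames is straightforward; here is a small sketch with OpenCV (paths and the sampling rate are illustrative). Sampling every n-th frame keeps near-duplicate consecutive frames from dominating the training set.

    import os
    import cv2

    def extract_frames(video_path, out_dir, every_n=10):
        # Save every n-th frame of the video as a JPEG.
        os.makedirs(out_dir, exist_ok=True)
        cap = cv2.VideoCapture(video_path)
        idx = saved = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % every_n == 0:
                cv2.imwrite(os.path.join(out_dir, f"frame_{saved:06d}.jpg"), frame)
                saved += 1
            idx += 1
        cap.release()
        return saved

    extract_frames("fox_clip.mp4", "dataset/fox", every_n=10)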

Any advice would be appreciated, thanks 🙂

submitted by /u/CarrotCakePls

[D] Mapping parallel data that shares the same vocabulary

Hello,

Let’s say we want to translate between two sequences that share the same vocabulary.

We assume that the vocabulary is: V = [A,B,C,D,E,F,G]

We have this parallel data:

Source: [A B C C , A F G]

Target: [E B C C, E F G]

This is just an example.

If we want to represent any sequence, we can use a vector that contains the counts of each element of the vocabulary.

So A B C C = [1,1,2,0,0,0,0]

A F G = [1,0,0,0,0,1,1]

E B C C = [0,1,2,0,1,0,0]

E F G = [0,0,0,0,1,1,1]
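For concreteness, the count vectors above can be computed like this (a small sketch, not part of the original question):

    from collections import Counter

    V = ["A", "B", "C", "D", "E", "F", "G"]

    def count_vector(seq):
        # Map a token sequence to its vocabulary-count representation.
        counts = Counter(seq.split())
        return [counts[tok] for tok in V]

    print(count_vector("A B C C"))  # [1, 1, 2, 0, 0, 0, 0]
    print(count_vector("E F G"))    # [0, 0, 0, 0, 1, 1, 1]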

Since we said that A B C C = E B C C and A F G = E F G, their vectors must be the same to some extent. For example, we could have something like this:

A B C C = E B C C = [1,1,2,0,1,0,0]

My first idea was to train a seq2seq model and try to extract the encoder’s mapping representation of the sequence. But it looks like the encoder encodes just the source-sequence representation, not the mapping.

Are there any algorithms that can perform this task?

submitted by /u/kekkimo

[P] Rotating an image about a pivot… “pad_width must be of integral type” error

I am developing an image-to-text conversion system using the EAST text detector and pytesseract.

For tilted text portions in the image, I have found the tilt angle and starting point of the text bounding box, and used the following code to rotate the image through the given angle about a given pivot point:

    import numpy as np
    import scipy.ndimage

    def rotateImage(img, angle, pivot):
        # np.pad requires plain Python ints; pivot coordinates coming out
        # of a detector are often numpy floats, which raises
        # "pad_width must be of integral type", so cast explicitly.
        px, py = int(pivot[0]), int(pivot[1])
        padX = [img.shape[1] - px, px]  # padding along axis 1 (columns)
        padY = [img.shape[0] - py, py]  # padding along axis 0 (rows)
        imgP = np.pad(img, [padY, padX], 'constant')
        imgR = scipy.ndimage.rotate(imgP, angle, reshape=False)
        return imgR[padY[0]:-padY[1], padX[0]:-padX[1]]  # crop back to original size
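For example (with a hypothetical tilt angle and pivot point standing in for the detector output):

    # img: 2-D grayscale numpy array; angle in degrees, pivot as (x, y)
    rotated = rotateImage(img, angle=12.5, pivot=(230, 140))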

The angle and pivot are given as int values, but even then I’m sometimes getting a TypeError saying pad_width must be of integral type.

What is the problem here, and how do I solve it?

submitted by /u/abbhn

[R] Do you have any hacks and heuristics for quickly gaining the necessary theoretical background to understand a recent theoretical ML paper?

A challenge when reading an ML paper is that the authors don’t have the time or space to explain and clarify every advanced concept, theorem, or notation they used to get to the result they present in the paper. At best, they will point to a textbook or a review paper if the concept is novel enough from the point of view of their target audience; other times they will just assume that their readers are smart/educated enough to figure out the necessary concepts on their own. But most readers (like myself) aren’t that smart.

When I was in grad school, I had the luxury and time to go through the references one by one, look up theorems and textbooks, etc., and dedicate several weeks to building up the necessary theoretical background to grasp a paper that I was really interested in.

But now I work in industry, and I don’t have the time to pick up a textbook on graph theory or algebraic topology whenever a concept from those fields is used to illustrate or prove a point in a theoretical ML paper, or to read an additional 10 papers besides the one I am actually interested in. In fact, I barely have time to read papers in general.

Do you have any hacks/heuristics to quickly get up to speed on the necessary theoretical background for an advanced ML paper, without having to dedicate several weeks to going through graduate-level textbooks, and “reverse reading” bibliographies until you get to a paper that simplifies a given concept?

submitted by /u/AlexSnakeKing

[D] Trying to wrap my head around the Information Bottleneck theory of Deep Learning: Why is it such a big deal?

I’m trying to understand this paper that was posted in a thread here earlier, which claims to refute the Information Bottleneck (IB) theory of Deep Learning. Ironically, I get what the authors of this refutation result are saying, but I fail to understand why IB was considered such a big deal in the first place.

According to this post, IB “[opens] the black box of deep neural networks via information” and “this paper fully justifies all of the excitement surrounding it.”

According to this post, IB “is helping to explain the puzzling success of today’s artificial-intelligence algorithms — and might also explain how human brains learn.” Hinton is quoted as saying “It’s extremely interesting, […] I have to listen to it another 10,000 times to really understand it, but it’s very rare nowadays to hear a talk with a really original idea in it that may be the answer to a really major puzzle,” and Kyle Cranmer, a particle physicist at New York University, says that IB “somehow smells right.”

Here’s where I’m confused: isn’t the idea that an algorithm:

  1. Tries to fit the data “naively”,
  2. Then removes the noise and keeps just the useful model,
  3. Does so by stochastically iterating through a large set of examples (i.e. the stochasticity is what allows the algorithm to separate the signal from the noise),

…just a formalization of what any non-parametric supervised learning algorithm based on function approximation does (i.e. excluding parametric models like linear regression, and “fully non-parametric”, in-memory models like k-NN)?

I understand that Tishby and his co-authors provide very specific details about how this happens, namely that there are two clear phases between (1) and (2), that what happens in (2) is what makes a Deep Learning model generalize well, and that (3) is due to the stochasticity of SGD, which allows the compression that happens in (2).
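(For reference, the objective that, as far as I understand it, the IB framework minimizes over stochastic encodings T of the input X, with labels Y, is

    \min_{p(t|x)} \; I(X;T) - \beta \, I(T;Y)

where I(·;·) denotes mutual information and β trades off compression of the input against preservation of label-relevant information.)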

What I don’t understand is why this was considered a major paradigm-shifting result that Hinton has to hear 10,000 times to grasp and that is deemed to answer a major puzzle?

For (2), isn’t an algorithm that uses function approximation to learn (i.e. excluding k-NN and some Parzen-based methods, which store the entire training set in memory, and parametric models like linear regression, where the functional form is assumed beforehand) performing data compression by design, i.e. taking the training data and trying to boil it down to a reasonably small functional form that preserves the signal and discards the noise?

For (3), we’ve known since at least the 70s that adding stochasticity and random sampling improves the ability of optimization algorithms to get close to a global optimum.

AFAIK, the only really interesting part here is the phase transition between (1) and (2), but even for that, phase transitions in learning and optimization problems have been studied and well known since at least the early 80s.

So what was it about Tishby et al. that was so revolutionary that none less than Geoffrey Hinton said he needs 10,000 epochs to grasp it, that it “opens the black box of Deep Learning”, and that its refutation by Saxe et al. in the aforementioned paper is such a big deal?!

What am I missing in IB? Is my overall outline of it correct?

submitted by /u/AlexSnakeKing