Category: Reddit MachineLearning

[D] Is it better to get a part-time job or to push a good article?

Hello,

I'm pursuing a PhD in machine learning applied to drug design (I'm actually only in my second month). In the future, I would like to open a start-up or eventually move into industry R&D, so I'm not going to stay in academia.

Meanwhile, I have an opportunity to take a part-time job in data science, most probably working on some NLP.

A few questions then arise:

  1. Is it better to focus on a good ML paper or to go for the part-time job?
  2. How are projects and papers perceived by industry and the job market in general?
  3. How is managing a grant perceived (in my country, PhD, Master's, and undergraduate students can apply for grants)?

submitted by /u/DanielWicz
[link] [comments]

[R] Change the weights of a NN while training without using the conventional algorithms

I am starting a research project where I want to use evolutionary algorithms to update the weights of a multilayer perceptron during training. I started with Keras, but after realising that I could not do it there, I moved to pure TensorFlow, where I am still failing at this task. I am trying to update the weights using simulated annealing and genetic algorithms, if that helps you help me. The solution may be away from TensorFlow and Python, but I don't care; I need it to work as soon as possible.
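A minimal sketch of one way to do this in Keras (not the poster's code; the model shape, hyperparameters, and data handling are placeholders): keep the model's weights as a flat vector, perturb it, and accept or reject the perturbation with a simulated-annealing rule instead of backpropagation.

```python
import numpy as np
import tensorflow as tf

def build_mlp():
    # Placeholder MLP; swap in your own architecture.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
        tf.keras.layers.Dense(3, activation="softmax"),
    ])

def get_flat(model):
    return np.concatenate([w.ravel() for w in model.get_weights()])

def set_flat(model, flat):
    weights, i = [], 0
    for w in model.get_weights():
        n = w.size
        weights.append(flat[i:i + n].reshape(w.shape))
        i += n
    model.set_weights(weights)

def loss(model, x, y):
    preds = model(x, training=False)
    return float(tf.reduce_mean(
        tf.keras.losses.sparse_categorical_crossentropy(y, preds)))

def simulated_annealing(model, x, y, steps=1000, t0=1.0, sigma=0.05):
    current = get_flat(model)
    current_loss = loss(model, x, y)
    for step in range(steps):
        temp = t0 * (1.0 - step / steps) + 1e-8
        candidate = current + np.random.normal(0.0, sigma, size=current.shape)
        set_flat(model, candidate)
        cand_loss = loss(model, x, y)
        # Always accept improvements; accept worse candidates with a
        # temperature-dependent probability so the search can escape local minima.
        if cand_loss < current_loss or np.random.rand() < np.exp((current_loss - cand_loss) / temp):
            current, current_loss = candidate, cand_loss
        else:
            set_flat(model, current)  # revert the rejected move
    return current_loss
```

A genetic-algorithm variant works the same way: maintain a population of flat weight vectors, score each with the loss, and build the next generation by recombining and mutating the best ones.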

submitted by /u/MagicElyas
[link] [comments]

[N] Where AI is going in 2020

https://venturebeat.com/2020/01/02/top-minds-in-machine-learning-predict-where-ai-is-going-in-2020/
By a panel of

  • Google AI chief Jeff Dean

  • University of California, Berkeley professor Celeste Kidd

  • PyTorch lead Soumith Chintala

  • Nvidia machine learning research head Anima Anandkumar

  • IBM Research director Dario Gil

I would like to hear r/MachineLearning's opinions too.

submitted by /u/thntk
[link] [comments]

[P] Generating Color Palettes using Deep Learning

Hi Guys,

I have been working on a project to generate color palettes using a deep learning model. The idea is to feed the model famous color palettes collected from across the internet and use them to generate new palettes. The user can also specify some of the colors in a palette and have the model generate the rest. Examples of palettes from the dataset:

Colors from dataset

I have tried following things:

Bidirectional GRU-based model: I converted color generation into a fill-in-the-blank problem. The input is represented as 15 numbers (a 5-color palette with 3 RGB values per color), each ranging from 0 to 256, where 0-255 are color values and 256 is the mask token. The input to the model is a randomly masked vector and the output is the unmasked values. Here is an example with mask [0,0,0,0,0,1,0,1,0,0,1,0,0,1,0]:

Input : [0, 185, 252, 124, 122, 256, 48, 256, 189, 48, 256, 113, 178, 256, 116]

Output: [0, 185, 252, 124, 122, 125, 48, 170, 189, 48, 127, 113, 178, 143, 116]

The task of the model is to predict the values that are 256. The idea is similar to masked language modeling (e.g. BERT), but without attention or other advanced layers. The model is an embedding layer, a few GRU layers, and one dense layer with a softmax loss. Before training, the model outputs random garbage:

First two colors are hints – outputs before training
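A rough reconstruction of the model described above (not the author's code; the layer sizes are guesses):

```python
import tensorflow as tf

VOCAB = 257    # 0-255 color values plus 256 as the mask token
SEQ_LEN = 15   # 5 colors x 3 RGB channels

def build_masked_palette_model():
    inputs = tf.keras.Input(shape=(SEQ_LEN,), dtype="int32")
    x = tf.keras.layers.Embedding(VOCAB, 64)(inputs)
    x = tf.keras.layers.Bidirectional(tf.keras.layers.GRU(128, return_sequences=True))(x)
    x = tf.keras.layers.Bidirectional(tf.keras.layers.GRU(128, return_sequences=True))(x)
    outputs = tf.keras.layers.Dense(VOCAB, activation="softmax")(x)  # one distribution per position
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model

# Training pairs: the input is a palette with random positions replaced by 256,
# the target is the original, unmasked palette.
```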

For inference, I do something like beam search: I give my masked vector to the model and get a prediction, then select one masked value and replace it with the prediction. The class is sampled at random according to the probabilities from the last layer. This step is repeated until no masked values remain.
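A sketch of that iterative unmasking loop (assumed names; the model is the one sketched above and 256 marks a masked value):

```python
import numpy as np

MASK = 256

def fill_palette(model, palette):
    palette = np.array(palette, dtype=np.int32)
    while (palette == MASK).any():
        probs = model.predict(palette[None, :], verbose=0)[0]   # shape (15, 257)
        masked = np.where(palette == MASK)[0]
        pos = int(np.random.choice(masked))                     # pick one masked slot
        p = probs[pos, :256]                                    # ignore the mask class
        palette[pos] = np.random.choice(256, p=p / p.sum())     # sample a color value
    return palette
```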

Output after training:

First two colors are hint – GRU after training

The problem with this model was that it predicted dull colors (for comparison, see the colors from the dataset or those generated by the GAN). It felt like the model was just trying to average out all the colors, so I went for a different approach.

GAN-based approach: Here I normalized the colors from 0-255 to the range -1 to +1. The generator outputs a vector of size 15 in that range, the discriminator tries to tell real palettes from generated ones, and so on. This worked like a charm: the generator sometimes outputs surprisingly good colors, and sometimes it fails as well.
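An assumed architecture for the palette GAN (the post does not give the exact layers, so these are placeholders):

```python
import tensorflow as tf

LATENT_DIM = 32

def build_generator():
    return tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(LATENT_DIM,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(15, activation="tanh"),  # 5 colors x 3 channels, in [-1, 1]
    ])

def build_discriminator():
    return tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(15,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # real palette vs. generated
    ])
```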

Outputs before training:

GAN: Color generated before training

Outputs after Training:

Palettes look pretty nice

The main problem with this approach was that I could not specify hints as mentioned above: if the user provides a few values and wants the rest of the colors to be generated, that is not possible. To work around this, I tried a trick from the StyleGAN paper.

The first step is, given a color, to find its latent vector. I tried training a simple network to predict the latent vector of randomly generated colors; it didn't work that well. Then I tried gradient search, i.e., optimizing the input vector while keeping the generator weights frozen, but it would get stuck in local minima. Finally, I went for a genetic search: start with a random population and find the latent vectors whose colors are closest to the hint. This gave pretty good colors, and given a reasonable amount of time it converged very close to, but not exactly at, the hinted colors. The fitness score is the MSE between the generated colors and the hinted colors, computed only over the hinted values.

This posed another challenge: in some cases, the model would always converge to just a few palettes for a given hint. To solve this, I included diversity in the fitness function, i.e., it also tries to maximize the distance from previously generated colors. As a result, the colors no longer get very close to the hint.
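A sketch of the genetic latent search with the diversity term (assumed details; the hint is assumed to already be normalized to [-1, 1] like the generator output):

```python
import numpy as np

def fitness(generator, z, hint, hint_mask, previous, diversity_weight=0.1):
    # hint, hint_mask: length-15 arrays; hint_mask is 1 where the user fixed a value.
    palette = generator.predict(z[None, :], verbose=0)[0]
    hint_error = np.mean((palette[hint_mask == 1] - hint[hint_mask == 1]) ** 2)
    diversity = 0.0
    if previous:  # palettes already generated for the same hint
        diversity = np.mean([np.mean((palette - p) ** 2) for p in previous])
    return -hint_error + diversity_weight * diversity  # higher is better

def genetic_search(generator, hint, hint_mask, previous, pop=64, gens=50, latent_dim=32):
    population = np.random.normal(size=(pop, latent_dim))
    for _ in range(gens):
        scores = [fitness(generator, z, hint, hint_mask, previous) for z in population]
        parents = population[np.argsort(scores)[-pop // 4:]]           # keep the fittest quarter
        children = np.repeat(parents, 4, axis=0)
        children += np.random.normal(scale=0.1, size=children.shape)   # mutate
        population = children
    best = max(population, key=lambda z: fitness(generator, z, hint, hint_mask, previous))
    return generator.predict(best[None, :], verbose=0)[0]
```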

The following is an example with locking and diversity correction:

The first two colors should have been the same as those provided in the (GRU) example above

As you can see, the first two colors do not stay perfectly close to the original colors.

Currently, the last mentioned method, with a tiny bit of error, is live. But I want to know if someone knows a better technique for solving this problem.

Thanks

submitted by /u/NaxAlpha
[link] [comments]

[D] Can there be a “funny image” manifold?

https://i.redd.it/sclg6mwr0ce21.jpg

Let's take this image as an example. Unlike an image of a face, understanding this as funny requires knowledge about history. In general, most funny pictures require contextual knowledge or an understanding of how the different entities in the image interact with each other. We can now "roam around" the face manifold thanks to GANs and other architectures, but can we roam around the funny-image manifold? For example, the above image would still be funny even if it showed a different Native American woman or child (any Native American face). Can we interpolate between two different funny images and expect all the in-between images to be funny?
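For reference, "roaming" or interpolating in a GAN's latent space is mechanically simple; the open question is whether funniness survives it. A minimal sketch with placeholder names:

```python
import numpy as np

def interpolate_latents(generator, z_a, z_b, steps=8):
    # Walk a straight line between two latent vectors and decode each point.
    frames = []
    for t in np.linspace(0.0, 1.0, steps):
        z = (1.0 - t) * z_a + t * z_b
        frames.append(generator.predict(z[None, :], verbose=0)[0])
    return frames
```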

I apologise if this feels like five different questions in one. I find thinking along these lines very interesting and want to know what everyone else thinks about it. Any related links to blogs/papers would be amazing!

submitted by /u/niszoig
[link] [comments]

[DISCUSSION] What are some unpopular neural networks and their training techniques?

Obviously, deep learning, which employs neural networks as its substrate, is now very popular. But what are some unpopular neural networks and their training techniques, or even neural networks that are not trained in the mainstream sense?

Off the top of my head, the following are not as popular as DNNs but are still well known:

  • RBM and Hopfield nets
  • SOM
  • SOINN
  • CPPN
  • GNG
  • NEAT/HyperNEAT
  • Evolution strategies

One I could think of that is buried is a string-matching NN: Optimal neural network algorithm for on-line string matching.
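As a concrete example of a network from the list above that is not trained by gradient descent, here is a minimal binary Hopfield network whose weights are set by a Hebbian rule:

```python
import numpy as np

def hopfield_store(patterns):
    # patterns: array of shape (num_patterns, n) with entries in {-1, +1}
    n = patterns.shape[1]
    weights = patterns.T @ patterns / n
    np.fill_diagonal(weights, 0.0)  # no self-connections
    return weights

def hopfield_recall(weights, state, steps=10):
    state = state.copy()
    for _ in range(steps):
        for i in np.random.permutation(len(state)):  # asynchronous updates
            state[i] = 1 if weights[i] @ state >= 0 else -1
    return state
```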

Please share some of the interesting networks from the past.

submitted by /u/paarulakan
[link] [comments]

[P] GlyphNet: Training neural networks to communicate with a visual language

Visualization of glyphs generated by neural network

I did an experiment over winter break to see what would happen if I trained 2 neural networks to communicate with each other in a noisy environment. The task of the first neural network is to generate unique symbols, and the other’s task is to tell them apart. The result is a pretty cool visual language that looks kind of alien.

Notably, I got the best results by dynamically increasing the noise parameters as the networks became more competent (pulling inspiration from Automatic Domain Randomization and POET).
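A rough interpretation of the setup (not the code from the repo): a generator maps a message id to an image, noise is added to the image, and a classifier tries to recover the message; the noise level can then be raised as accuracy improves.

```python
import tensorflow as tf

NUM_SYMBOLS = 64
IMG = 32

def build_glyph_generator():
    return tf.keras.Sequential([
        tf.keras.layers.Embedding(NUM_SYMBOLS, 128),
        tf.keras.layers.Dense(IMG * IMG, activation="sigmoid"),
        tf.keras.layers.Reshape((IMG, IMG, 1)),
    ])

def build_glyph_classifier():
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(IMG, IMG, 1)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(NUM_SYMBOLS, activation="softmax"),
    ])

def communication_loss(generator, classifier, messages, noise_std):
    glyphs = generator(messages)                                     # message id -> glyph image
    noisy = glyphs + tf.random.normal(tf.shape(glyphs), stddev=noise_std)
    preds = classifier(noisy)                                        # noisy glyph -> message guess
    return tf.reduce_mean(
        tf.keras.losses.sparse_categorical_crossentropy(messages, preds))
```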

Please take a look and let me know what you think! https://github.com/noahtren/GlyphNet

submitted by /u/noahtren
[link] [comments]