
Category: Reddit MachineLearning

[D] How to use VAE’s encoded mu and sigma with respect to user-given z?

My understanding of a VAE is that, unlike an autoencoder, it does not directly give you a single deterministic encoding (an n-dimensional latent code vector). Instead, the encoder outputs both mu and sigma (an n-dimensional mean vector and an n-dimensional standard-deviation vector). You then sample epsilon from a standard normal distribution and combine it with mu and sigma to produce z, which is what the VAE's decoder decodes. z is effectively the main encoding.
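
The sampling step described above (the reparameterization trick) can be sketched in a few lines of numpy; the 10-dim mu and sigma here are hypothetical stand-ins for an encoder's outputs, not the poster's actual model:

```python
import numpy as np

def reparameterize(mu, sigma, rng):
    """Sample z = mu + sigma * eps, with eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

rng = np.random.default_rng(0)
mu = np.zeros(10)     # hypothetical 10-dim mean vector from the encoder
sigma = np.ones(10)   # hypothetical 10-dim standard-deviation vector
z = reparameterize(mu, sigma, rng)
print(z.shape)
```

Note that because epsilon is resampled on every call, encoding the same input twice gives two different z vectors, which is exactly the non-determinism the rest of the post is about.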

Say my z, mu, and sigma are n-dimensional, e.g. n = 10 (10-dim z, mu, sigma). I let the user freely pick a 10-dimensional vector with each component in [-a, a], say a = 5. So the user picks 10 numbers between -5 and 5.

This becomes my z, which my decoder decodes to generate a new image.

[Main problem]

My VAE is trained on a dataset of apparel. Now, if I run my VAE's encoder on each item in the dataset, I'd get a mu and sigma for each item (I'm not sure this is correct).

Using the z given by the user, how do I find the most similar item in the dataset, when the VAE's encoding gives me only mu and sigma?

My thinking is to generate z using the mu and sigma produced by the VAE's encoder, but to generate z I still need to sample epsilon from a distribution, which makes the result non-deterministic with respect to the user-given z. This randomness means I'm not sure how I would use the encoded z to find the nearest match to the user-given z.
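
For concreteness, the deterministic variant the post is weighing, comparing the user's z directly against each item's mu (the mean of each item's latent distribution) and skipping the epsilon sampling entirely, can be sketched as follows. The three hard-coded mu vectors are hypothetical placeholders for the encoder's outputs over an apparel dataset:

```python
import numpy as np

# Hypothetical encoded dataset: one 10-dim mu vector per apparel image
dataset_mu = np.array([
    [0.0] * 10,
    [1.0] * 10,
    [-2.0] * 10,
])

user_z = np.full(10, 0.8)  # user-picked vector, components in [-5, 5]

# Deterministic comparison: Euclidean distance from user_z to each item's mu
distances = np.linalg.norm(dataset_mu - user_z, axis=1)
nearest = int(np.argmin(distances))
print(nearest)
```

Since mu is the expected value of z for each item, this avoids the randomness of epsilon while still using the VAE's encoding.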

submitted by /u/sarmientoj24
[link] [comments]

[P] NERD for image generation.

A while back I shared an algorithm called NERD here. The initial implementation was for generating sequences, and the results were not spectacular but were encouraging. I have now used the same algorithm, with minor changes, to generate images. The generated images are still not spectacular, but they're not random either.

project: https://github.com/Gananath/NERD/tree/master/NERD_IMAGES

image: https://raw.githubusercontent.com/Gananath/NERD/master/NERD_IMAGES/nerd_mnist.png

submitted by /u/gananath
[link] [comments]

[D] I’ve been switching over from Pytorch to TF 2.0, and my take is that the library itself isn’t too much of a problem (I’ve heard lots of complaints about TF); the real issue is the lack of official guides, detailed documentation, and question answering from the Tensorflow team.

I feel like I’m trying to create pipelines that should be fairly common among Tensorflow users. There seem to be multiple ways of doing everything, each with its own nuances, and none of it is officially documented anywhere. You have to dig around in their GitHub issues, and sometimes the info will be there, if you’re lucky.

There are a lot of unofficial Medium blogs out there, but I’ve noticed they often have inaccurate information. A lot of the time it’s an overseas remote developer creating marketing materials to get hired.

A lot of the design choices I google lead me to unanswered Stack Overflow questions, asked several months or a year ago. I think this is the easiest fix: many of these unanswered questions are good ones that aren’t covered anywhere in Tensorflow’s official documentation.

https://imgs.xkcd.com/comics/wisdom_of_the_ancients.png

I am looking at the backlog of unanswered questions for both: Pytorch has 2,101, Tensorflow has 24,066.

Pytorch, on the other hand, has a forum where you often get your questions answered by someone on the Pytorch developers team.

I hear a lot of complaints about Tensorflow, but I personally don’t see the structure itself as poor; I think it’s actually pretty good. But what’s the point of creating something awesome if you don’t provide enough info for people to use it optimally?

submitted by /u/DisastrousProgrammer
[link] [comments]

[D] Are Siraj Raval’s videos all fake? I found one tutorial that had a hard-coded result.

I was following along with his video on making a web app that takes a chest x-ray and infers whether it shows pneumonia or not. What I found was that his model was complete baloney and the results were hard-coded in the HTML. Am I the only one who thinks this is wrong? I wanted to create the project anyway, so I found tutorials on Kaggle. I, a 17-year-old kid, was able to create the project with a legit model (though it wasn’t super accurate). So how is it that I can take one day to create this, while Siraj hard-codes the result? Didn’t he go to Columbia? I feel like he’s breeding machine learning posers.

submitted by /u/highjumpisfun
[link] [comments]

[D] How do I decide whether to send my paper to a machine learning conference or to a theoretical comp. sci. conference?

Basically the title. Assuming I have a theory paper pertaining to some ML-related subject, containing little to no experiments, how do I decide whether to try a top-tier ML conference (ICML, NeurIPS, ICLR) or go with a theory conference? Should I try to add some experiments (assuming I go with an ML conference), or would that open up a can of worms?

Thanks!

submitted by /u/mlonetimer
[link] [comments]

[D] Machine Learning – WAYR (What Are You Reading) – Week 76

This is a place to share machine learning research papers, journals, and articles that you’re reading this week. If it relates to what you’re researching, by all means elaborate and give us your insight, otherwise it could just be an interesting paper you’ve read.

Please try to provide some insight from your own understanding, and please don’t post things which are already present in the wiki.

Preferably you should link the arxiv page (not the PDF, you can easily access the PDF from the summary page but not the other way around) or any other pertinent links.

Previous weeks: Week 1 through Week 75 (links to the earlier WAYR threads).

Most upvoted papers two weeks ago:

/u/sebamenabar: Deep Equilibrium Models

Besides that, there are no rules, have fun.

submitted by /u/ML_WAYR_bot
[link] [comments]