
Category: Reddit MachineLearning

[D] Anyone else approached by fake ResearchGate accounts?

About once a month, I get a follow request on ResearchGate from a fake account that claims to be from my institution. I have emailed ResearchGate about it, and their response was essentially that they can do nothing about it. I find this unbelievable, since there would be no way for the spoofed account to verify its email address (no one in our system has the email address of the spoofed account).

Not exactly ML-related, but I'm wondering if other ML researchers have this issue?

submitted by /u/idg101
[link] [comments]

[Discussion] Merging machine learning with topological data analysis.

Recently I discovered an interesting application of topology to data analysis. In particular I am fascinated by the idea that the shape of the space underlying a dataset could give some insight for predictive algorithms. Furthermore, I found some user-friendly libraries such as giotto-learn and GUDHI, where you can start applying topological data analysis (TDA) without needing a strong background in mathematics. I am thrilled by all the possible applications this domain could have! For example, I found a bunch of articles on Medium about the topic, and I would like to share one of the most popular ones, where they try to apply TDA to predicting football matches. The article is a kind of POC, and I'm trying to improve it by adding real match data (in the post they use FIFA video-game data).

What do you think about topological data analysis? Do you think it could add value to classic machine learning?
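For anyone curious what the simplest TDA feature looks like in code, here is a minimal sketch of 0-dimensional persistence (connected-component lifetimes), using only scipy: the merge heights of single-linkage clustering coincide with the radii at which components die in a Vietoris-Rips filtration. Libraries like giotto-learn and GUDHI compute this (plus higher-dimensional homology) for you; the toy point cloud below is my own assumption for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

# Toy point cloud: two well-separated clusters.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
                   rng.normal(5.0, 0.1, (20, 2))])

# Single-linkage merge heights equal the "death" radii of connected
# components in a Vietoris-Rips filtration (0-dimensional persistence).
deaths = linkage(cloud, method="single")[:, 2]

# Long-lived components signal cluster structure: the final merge
# (joining the two clusters) dies much later than all the others.
print(deaths[-1] / deaths[:-1].max())
```

A long bar in this "barcode" is exactly the kind of shape feature a downstream predictive model could consume.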

submitted by /u/henniez95
[link] [comments]

[D] NeurIPS 2019 Bengio Schmidhuber Meta-Learning Fiasco

The recent reddit post Yoshua Bengio talks about what’s next for deep learning links to an interview with Bengio. User u/panties_in_my_ass got many upvotes for this comment:

Spectrum: What’s the key to that kind of adaptability?

Bengio: Meta-learning is a very hot topic these days: Learning to learn. I wrote an early paper on this in 1991, but only recently did we get the computational power to implement this kind of thing.

Somewhere, on some laptop, Schmidhuber is screaming at his monitor right now.

because he introduced meta-learning 4 years before Bengio:

Jürgen Schmidhuber. Evolutionary principles in self-referential learning, or on learning how to learn: The meta-meta-… hook. Diploma thesis, Tech Univ. Munich, 1987.

Then Bengio gave his NeurIPS 2019 talk. Slide 71 says:

Meta-learning or learning to learn (Bengio et al 1991; Schmidhuber 1992)

u/y0hun commented:

What a childish slight… The Schmidhuber 1987 paper is clearly labeled and established and as a nasty slight he juxtaposes his paper against Schmidhuber with his preceding it by a year almost doing the opposite of giving him credit.

I detect a broader pattern here. Look at this highly upvoted post: Jürgen Schmidhuber really had GANs in 1990, 25 years before Bengio. u/siddarth2947 commented that

GANs were actually mentioned in the Turing laudation, it’s both funny and sad that Yoshua Bengio got a Turing award for a principle that Jurgen invented decades before him

and that section 3 of Schmidhuber’s post on their miraculous year 1990-1991 is actually about his former student Sepp Hochreiter and Bengio:

(In 1994, others published results [VAN2] essentially identical to the 1991 vanishing gradient results of Sepp [VAN1]. Even after a common publication [VAN3], the first author of reference [VAN2] published papers (e.g., [VAN4]) that cited only his own 1994 paper but not Sepp’s original work.)

So Bengio republished at least 3 important ideas from Schmidhuber’s lab without giving credit: meta-learning, vanishing gradients, GANs. What’s going on?

submitted by /u/posteriorprior
[link] [comments]

[Discussion] StyleGANv2 uses Multi-scaled Gradients instead of Progressive Growing

Progressive growing has been immensely successful in the past and has been used in a myriad of works since then (see the number of citations :D). We were mostly trying to solve the issue of extensive hyperparameter tuning with Multi-Scale Gradients, but the StyleGAN2 paper mentions a totally new problem altogether. This raises two questions for me, and I thought of asking them out here:

1.) The problem of phase artifacts. The paper gives only light theoretical motivation for progressive growing being the cause of this problem; it is attributed to the fact that the network tries to output the highest-frequency details at all resolutions. It would be very helpful if my fellow redditors could provide more insight into how these two connect to each other.

2.) If this issue has existed since the beginning, what implications does the use of progressive growing have for state-of-the-art work in other domains such as image2image, vid2vid, colourization networks, etc.?

For reference: the MSG-GAN paper (https://arxiv.org/abs/1903.06048)
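For readers unfamiliar with the multi-scale-gradients setup: instead of growing the network, the generator emits an output at every resolution and the discriminator sees a matching downsampled copy of each real image, so gradients reach all scales from the start. Here is a sketch of just the data side of that idea (building the real-image pyramid by average pooling); this is my own illustrative code, not the paper's implementation.

```python
import numpy as np

def image_pyramid(img, num_scales):
    """Average-pool an HxWxC image repeatedly by 2x, giving the
    discriminator a real image at every resolution the generator
    outputs (sketch of the data side only, not the full GAN)."""
    levels = [img]
    for _ in range(num_scales - 1):
        h, w, c = levels[-1].shape
        pooled = levels[-1].reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))
        levels.append(pooled)
    return levels  # resolutions: full, 1/2, 1/4, ...

pyramid = image_pyramid(np.ones((8, 8, 3)), num_scales=3)
print([lvl.shape for lvl in pyramid])
```

During training, each pyramid level would be concatenated into (or compared against) the corresponding intermediate generator output.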

submitted by /u/akanimax
[link] [comments]

[D] Tools for quick web-demos?

Hi all, recently I have seen a proliferation of papers that not only release the code and some supplementary results in a GitHub repo, but have complete sites with interactive demos.

Examples: http://gannotsee.net/, http://gandissect.res.ibm.com/ganpaint.html

To me, these seem like a difficult exercise that not all students/labs have the resources for, unless there is a quick tool/pipeline that people use.

Are all these websites dynamic, with server-side code to run the model?
Or is there a simpler way to do this in the browser (client-side)?

I guess the answer depends on the type of demo (e.g. here everything is pre-computed: https://dmitryulyanov.github.io/deep_image_prior), but I am just wondering if there is a tool or similar to make the whole process easier.

To clarify: I am aware of TensorFlow.js and of running models in the browser, but I am referring to the bigger picture.
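As one data point on the server-side route: the dynamic demos usually amount to a tiny HTTP endpoint wrapping a `predict` call. Here is a minimal sketch using only the Python standard library; the `predict` function is a hypothetical stand-in for real model inference.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    """Stand-in for a real model; swap in your own inference call."""
    return {"score": sum(features) / max(len(features), 1)}

class DemoHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body and run the model on "features".
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(predict(payload.get("features", []))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("localhost", 8000), DemoHandler).serve_forever()
```

Frameworks like Flask or Gradio reduce this further, but the core of most interactive paper demos is not much more than the above plus a static front-end page.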

submitted by /u/da_g_prof
[link] [comments]

[D] What is the best way to search for a learning rate schedule?

The learning rate schedule is the most irritating hyperparameter for me to search, because there seems to be an exponential number of possibilities for when to decay and by how much. What is the most systematic way to search for a good learning rate schedule? Is there a method that automatically decays the learning rate when the loss stops dropping quickly?

In my experience a “step” learning rate decay function works better than a “smooth” decay function. Is there any paper or blog post that has done a large-scale confirmation/rejection of this, or other systematic analysis of learning rate schedules?
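On the "decay automatically when the loss stalls" question: this is exactly what reduce-on-plateau schedulers (e.g. PyTorch's `ReduceLROnPlateau` or the Keras callback of the same name) do. A minimal pure-Python sketch of the idea, with the thresholds below being my own assumed defaults:

```python
class PlateauDecay:
    """Multiply the learning rate by `factor` when the monitored loss
    hasn't improved by at least `min_delta` for `patience` consecutive
    steps (a sketch of the ReduceLROnPlateau idea)."""
    def __init__(self, lr, factor=0.5, patience=3, min_delta=1e-4):
        self.lr, self.factor = lr, factor
        self.patience, self.min_delta = patience, min_delta
        self.best = float("inf")
        self.bad_steps = 0

    def step(self, loss):
        if loss < self.best - self.min_delta:
            self.best, self.bad_steps = loss, 0  # improvement: reset
        else:
            self.bad_steps += 1
            if self.bad_steps >= self.patience:
                self.lr *= self.factor  # plateau: decay and reset
                self.bad_steps = 0
        return self.lr

sched = PlateauDecay(lr=0.1)
losses = [1.0, 0.8, 0.7, 0.7, 0.7, 0.7]  # loss stalls after step 3
lrs = [sched.step(l) for l in losses]
print(lrs[-1])
```

Note this produces exactly the "step" decay shape, just with data-driven timing instead of a fixed epoch schedule.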

submitted by /u/nearning
[link] [comments]

[R] ProteinNet: a standardized data set for machine learning of protein structure (call for contributions, June 2019)

I posted a call for contributions recently, but it didn't end well. People were too obsessed with keeping their secrets and knew little outside of ML.

So I searched on my own for challenging problems in science with high, meaningful impact, potential for an ML breakthrough, and a ready dataset and benchmark, and I found ProteinNet, for protein folding. These scientists seem to think for the sake of science as a whole and want to see how ML can help advance their field. You are welcome to use it for a side project if you are already tired of the same old CV or NLP tutorials.

submitted by /u/thntk
[link] [comments]