
Category: Reddit MachineLearning

[N] mlfinlab Python Package Released (Advances in Financial Machine Learning)

Our package mlfinlab has finally been released on the PyPI index.

pip install mlfinlab

mlfinlab is a “living and breathing” project in the sense that it is continually enhanced with new code from the chapters in the Advances in Financial Machine Learning book. We have built this on lean principles with the goal of providing the greatest value to the quantitative community.
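
As a quick taste, here is a hypothetical usage sketch; the import path, function name, and arguments below are my assumptions rather than confirmed mlfinlab API, so check the package documentation for the real calls. It builds dollar bars (chapter 2 of the book) from a CSV of tick data with date_time, price, and volume columns.

# Hypothetical sketch: the import path and signature are assumptions, not
# confirmed mlfinlab API; consult the package docs for the real calls.
from mlfinlab.data_structures import get_dollar_bars  # assumed location

# Sample tick data into dollar bars (AFML, chapter 2), closing a bar each
# time roughly `threshold` dollars of value have traded.
dollar_bars = get_dollar_bars('tick_data.csv', threshold=70_000_000)
print(dollar_bars.head())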


submitted by /u/Jackal008

[D] Modern applications of statistical learning theory?

I was reading about concentration-of-measure results recently and was curious whether anyone knows if this material is still applicable to ‘deep learning’ models. By statistical learning theory I mean things like VC and Rademacher bounds, etc.

If it is, can anyone point to any research papers on this topic?

From my naive understanding, because these bounds relate to the worst-case scenario, the union bound may be excessively pessimistic in terms of the number of training examples required.
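
For concreteness, one standard bound of the kind in question is the Rademacher-complexity generalization bound for losses in [0,1] (quoted here from memory, so verify the constants): with probability at least 1 - \delta over an i.i.d. sample of size n,

\forall h \in \mathcal{H}: \quad R(h) \le \widehat{R}_n(h) + 2\,\mathfrak{R}_n(\mathcal{H}) + \sqrt{\frac{\ln(1/\delta)}{2n}},

where R is the true risk, \widehat{R}_n the empirical risk, and \mathfrak{R}_n(\mathcal{H}) the Rademacher complexity of the hypothesis class. The worst-case flavour mentioned above enters through the quantifier over all of \mathcal{H}, which is exactly where union-bound-style arguments can become pessimistic.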

submitted by /u/tensorflower

[R] Differentiable Neural Computer Memory Testing Using dSprites

Some independent research I conducted, testing what’s going on in the memory of the DNC when it is trained to learn a predictive model of the environment for reinforcement learning. Unfortunately, the experiments didn’t find anything strongly positive: the memory appears to be a black box, at least when examined in this way. Read more at: http://blog.adeel.io/2019/03/10/differentiable-neural-computer-memory-testing-using-dsprites/

Would love to hear the thoughts of the ML community.

submitted by /u/ThisIsMySeudonym

[N] Fine-grained visual recognition workshop at CVPR 2019 (FGVC6) – with competitions

We are pleased to announce the 6th Workshop on Fine-Grained Visual Categorization at CVPR 2019 in June. The purpose of the workshop is to bring together researchers to explore visual recognition across the continuum between basic level categorization (object recognition) and identification of individuals (face recognition, biometrics) within a category population.

Short Papers

We invite submission of extended abstracts describing work in fine-grained recognition. For more details, check out the workshop website: https://sites.google.com/view/fgvc6/home

Challenges

In conjunction with the workshop, we are also hosting a series of competitions on Kaggle. These range from classification of different species of plants and animals in images through to predicting fine-grained visual attributes in fashion images. https://www.kaggle.com/FGVC6/competitions

submitted by /u/fgvc2017

[D] “Face it, it’s over when the Physics PhD “Monster Mind” hotshots hit the job market”

With webpages like this freely accessible to all, I assume everybody here already knows that there is a tidal wave of former IMO/IPhO/Part III “mega-hotshots” (who got tired of doing QFT problem sets) set to finish their ~Machine learning in Physics~ PhDs (or similar) within the next 2-3 years. Look into the eyes of every single individual on this list (if you look them up, about half of them are doing ML stuff); you can literally feel their scintillating intellects, and they’re coming for your ML jobs soon. How is everyone planning to keep up with advancements in ML once these “monster minds” finish their PhDs? Is ML over for ordinary folks?

(And by the way, this isn’t a troll post. If you think you’re on the same level as these folks, consider that the first guy I linked has literally been to fucking space; that’s the level of hotshot you’re dealing with here. What I’m describing is therefore a real, actual “phenomenon” that has to be contended with: the existence of “mega-hotshots”, currently walking the halls of the Stanford physics building and elsewhere, set to burst onto the scene.)

Thoughts?

submitted by /u/IMO_2009_Q6_SUPERFAN

[D] I feel like a small data point that one day hopes to be a good dataset in this field.

Hello all artificially intelligent Redditors:

A little background about me: I am soon to graduate with a BS in Software Engineering from a decent university. This is my final semester, and I just took a class on AI. My math background is significantly better than the bare minimum of any degree program, but I still feel I don’t have enough math knowledge to pursue AI with a full understanding. So far I have taken Calc 1, Calc 2, Discrete, and Probability Theory (I have to take that last one over the summer).

I understand Python really well, so I can make assumptions about different aspects of code. However, when it comes to the real analysis of it all (the models, activation functions, etc.), I am really lost. I REALLY REALLY want to do this for a living because I think it’s super cool, but it’s really hard.

What would be your advice on becoming more versed in this realm? Would you go into a master’s program, and if so, where? (Looking for the best value, so neither expensive nor dirt cheap.) I will add edits as I may have forgotten to add some things. Thanks in advance.

submitted by /u/sovashadow

[D] Momentum updates an average of g; e.g. Adagrad also updates one of g^2. What other averages might be worth updating? E.g. these 4: g, x, x*g, x^2 give an MSE-fitted local parabola

Updating an exponential moving average is a basic tool of SGD methods, starting with the average of the gradient g in the momentum method, used to extract a local linear trend from the statistics.

Then, e.g., the Adagrad and ADAM family adds averages of g_i*g_i to strengthen underrepresented coordinates.

TONGA can be seen as another step: it updates g_i*g_j averages to model the (uncentered) covariance matrix of gradients for a Newton-like step.

I wanted to propose a discussion of other interesting or promising averages that might be worth updating for SGD convergence, e.g. ones already met in the literature.

For example, updating 4 exponential moving averages (of g, x, g*x, x^2) gives an MSE-fitted parabola in a given direction, and an estimated Hessian = Cov(g,x) * Cov(x,x)^(-1) in multiple directions (derivation). Analogously, we could MSE-fit, e.g., a degree-3 polynomial in a single direction by updating 6 averages: of g, x, g*x, x^2, g*x^2, x^3.

Have you seen such additional updated averages in the literature, especially of g*x? Is it worthwhile, e.g., to extend the momentum method with such additional averages, modeling a parabola in its direction for a smarter step size?
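
To make the proposal concrete, here is a minimal 1-D sketch of the g, x, g*x, x^2 idea (my own illustration, with assumed names and defaults, not an established method from the literature): maintain the four exponential moving averages with Adam-style bias correction, estimate the curvature as h = Cov(g,x)/Var(x), and jump toward the fitted parabola's minimum, falling back to plain SGD when the estimate is unreliable.

def ema_parabola_sgd(grad, x0, beta=0.9, lr=0.1, steps=200, eps=1e-8):
    x = x0
    m_g = m_x = m_gx = m_xx = 0.0  # EMAs of g, x, g*x, x^2
    for t in range(1, steps + 1):
        g = grad(x)
        m_g  = beta * m_g  + (1 - beta) * g
        m_x  = beta * m_x  + (1 - beta) * x
        m_gx = beta * m_gx + (1 - beta) * g * x
        m_xx = beta * m_xx + (1 - beta) * x * x
        c = 1 - beta ** t                           # bias-correction factor
        cov_gx = m_gx / c - (m_g / c) * (m_x / c)   # estimates Cov(g, x)
        var_x  = m_xx / c - (m_x / c) ** 2          # estimates Var(x)
        h = cov_gx / (var_x + eps)                  # curvature of the fitted parabola
        if h > eps:
            x = m_x / c - (m_g / c) / h             # step to the parabola's minimum
        else:
            x = x - lr * g                          # fall back to plain SGD
    return x

# Example: f(x) = (x - 3)^2, grad f(x) = 2*(x - 3); should print approximately 3.
print(ema_parabola_sgd(lambda x: 2 * (x - 3), x0=0.0))

Note how Cov and Var come from the bias-corrected first and second moments; in multiple dimensions the scalar h would become the matrix Cov(g,x) * Cov(x,x)^(-1) mentioned above.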

submitted by /u/jarekduda

[P] Deep Learning on Healthcare Lecture Series (6)

Deep Learning on Healthcare (6): Regulations. I found a quite interesting argument on Twitter between influential people such as Hugh Harvey, Jeremy Howard, and Luke Oakden-Rayner, so I decided to introduce this argument and discuss the regulation of deep learning in healthcare and medicine as the final lecture theme.

Deep Learning on Healthcare (6)

Deep Learning on Healthcare (5)

Deep Learning on Healthcare (4)

Deep Learning on Healthcare (3)

Deep Learning on Healthcare (2)

Deep Learning on Healthcare (1)

submitted by /u/hiconcep
