
[D] What are some problem types where ML could be applied “in theory” but is outside of practical reach?

It might be an overly simplified view of the field, but it seems to me that a lot of this decade’s ML boom is due to the appearance of hardware + architectures able to tackle a set of problems that were easy in terms of data gathering and “pretty deterministic” (that is, judging by our human ability to solve them, we can be fairly certain there are usually no latent variables needed to solve the problem correctly): things like bounding boxes, image classification, and translation.

On the other hand, these new methods have hardly put a dent in how most people approach “pretty non-deterministic” problems (e.g. stock trading or risk analysis), where practice and intuition show there’s simply not enough “easy” data to make reliable predictions.

It seems to me that most efforts right now focus on either “productizing” the gains already made on text and image problems (e.g. squeezing out the 0.x% extra accuracy and 0.y% extra specificity that make them practical in fields with low error margins) or developing algorithms that better communicate the uncertainty of non-deterministic datasets (e.g. Bayesian/probabilistic NNs).
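As a concrete toy illustration of what “communicating uncertainty” can mean (this is my own sketch, not anything from the post: a bootstrap ensemble of linear regressors on synthetic data, one of the simplest stand-ins for the Bayesian/probabilistic-NN idea mentioned above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy noisy dataset: y = 2x + noise whose scale grows with |x|
X = rng.uniform(-3, 3, size=200)
y = 2 * X + rng.normal(0, 0.1 + 0.5 * np.abs(X))

def fit_linear(x, t):
    # Least-squares slope and intercept for a 1-D linear model
    A = np.stack([x, np.ones_like(x)], axis=1)
    coef, *_ = np.linalg.lstsq(A, t, rcond=None)
    return coef

# Bootstrap ensemble: each member is fit on a different resample of the data
ensemble = []
for _ in range(50):
    idx = rng.integers(0, len(X), size=len(X))
    ensemble.append(fit_linear(X[idx], y[idx]))

# Predict at new points; the spread across members is a crude
# estimate of how uncertain the model is about each prediction
x_new = np.array([0.0, 2.5])
preds = np.stack([w * x_new + b for w, b in ensemble])
mean, std = preds.mean(axis=0), preds.std(axis=0)
print("mean:", mean, "std:", std)
```

A full Bayesian NN would place distributions over weights instead of resampling data, but the output is the same in spirit: a predictive mean plus a spread, rather than a single point estimate.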

However, it’s not obvious to me what the next set of problems, analogous to images and text, to hit the chopping block will be, or whether such a set of problems even exists.

I’ve seen some interesting research (e.g. AlphaFold) and some huge failures (e.g. that earthquake-prediction paper published in Nature that performed worse than a linear regression) in the realm of scientific problems where we “seem to” have sufficient data but lack the mathematical frameworks to gain insight from it. I think anything related to complex molecular dynamics in a “static” environment is a good example: in theory the starting state should give us insight into any state at a later time T, but in practice this is often too computationally expensive and/or too complex to formalize in a way that fits our current models. However, there doesn’t seem to be anywhere near that amount of adoption, excitement, or novel ideas coming from this class of problems.
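To make the “too computationally expensive” point concrete, here’s a minimal sketch (my own toy example, not from the post) of why classical simulation from a starting state scales badly: a naive Lennard-Jones molecular-dynamics step costs O(n²) pairwise force evaluations per timestep, and reaching a later time T means repeating it for many timesteps.

```python
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    # Pairwise Lennard-Jones forces: the double loop is O(n^2)
    # per timestep, which is what makes long rollouts expensive.
    n = len(pos)
    f = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[i] - pos[j]
            r2 = d @ d
            inv6 = (sigma ** 2 / r2) ** 3
            mag = 24 * eps * (2 * inv6 ** 2 - inv6) / r2
            f[i] += mag * d
            f[j] -= mag * d
    return f

def velocity_verlet(pos, vel, dt=1e-3, steps=100):
    # Integrate Newton's equations forward from the starting state
    f = lj_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f
        pos += dt * vel
        f = lj_forces(pos)
        vel += 0.5 * dt * f
    return pos, vel

# 8 particles on a small lattice (spacing 1.5 sigma, near the LJ minimum)
pos = np.array([[i, j, 0.0] for i in range(2) for j in range(4)],
               dtype=float) * 1.5
vel = np.zeros((8, 3))
pos, vel = velocity_verlet(pos, vel)
print(pos.shape)
```

Real systems have thousands to millions of atoms and need billions of femtosecond-scale steps to reach interesting timescales, which is exactly the gap a learned model of the dynamics would have to close.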

So I wonder: what do you guys think the next “category” of problems will be where, conceptually, ML techniques could be applied without too much of a data-gathering barrier, yet the current combination of hardware and human knowledge has yet to evolve to the point where they are feasible?

submitted by /u/elcric_krej