[D] Can ML models output ideas and concepts?

I have very limited experience with ML, so apologies if this question is silly or too abstract.

Many problems (e.g. abstractive text summarization) do not have well-developed, robust solutions yet, even though progress is reasonably fast and there is a lot of theoretical research pushing the field forward.

For example, developments in the area of word embeddings have had a huge impact on the quality of text-processing models. We have come a long way from the simplest Bag-of-Words model to more modern Word2Vec variants, and on to GloVe and FastText. Thanks to these developments, we are able to train models that successfully capture the semantics of text. However, this took decades of research.
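To make the "capturing semantics" part concrete, here is a minimal sketch of what training such an embedding looks like, assuming gensim 4.x; the toy corpus and hyperparameters are purely illustrative:

```python
# Minimal Word2Vec sketch (assumes gensim >= 4.0, a toy corpus).
from gensim.models import Word2Vec

# In practice you would train on millions of sentences, not a handful.
corpus = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
    ["cats", "and", "dogs", "are", "pets"],
]

# Each word is mapped to a dense vector; words appearing in similar
# contexts end up with similar vectors, which is what "capturing the
# semantics" means here.
model = Word2Vec(corpus, vector_size=50, window=2, min_count=1, epochs=50)

# Query the learned vector space for nearest neighbours.
print(model.wv.most_similar("cat", topn=3))
```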

This kind of research yields new ideas and concepts, not just results of computations or definite answers to specific questions. This applies to any area (biology, chemistry, physics), not just text processing.

So, my basic question is: could we have a computer do this kind of research instead of spending the time of actual human researchers? I’m not even sure this lies in the realm of ML, but it doesn’t seem as hard as creating a “true AI”, because such a “machine thinker” would only need knowledge of a specific subject area, not the complete memory of an adult human.

Basically, can we create ML models that output ideas and concepts, as opposed to specific answers to classification or prediction problems? E.g. could a computer “invent” the next approach to word embeddings (better than the current state of the art) faster than human researchers will?

It’s not even necessary that humans understand the resulting approach; it just needs to be implementable.

I see a lot of unsolved problems here (how do we formalize ideas so a machine can process them? Where do we get training datasets of “good” and “bad” ideas?), but is there any research at all into this sort of thing?

Thanks!

P.S. Let’s keep jokes about the AI apocalypse out of this.

submitted by /u/smthamazing