
[D] What does it mean for a machine to “understand”? (by @tdietterich)

An excerpt from Thomas Dietterich’s recent blog post:

In order for a system to understand, it must create linkages between different concepts, states, and actions. Today’s language translation systems correctly link “water” in English to “agua” in Spanish, but they don’t have any links between “water” and “electric shock”.
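The missing linkage Dietterich describes can be illustrated with a toy example (my sketch, not from the post): a concept graph in which the translation edge exists but the cross-domain edge does not.

```python
# Toy "concept graph": a dict mapping concept pairs to the relation linking them.
# A translation system stores the English-Spanish edge, but has no edge
# connecting "water" to "electric shock" (the hypothetical danger relation is
# shown commented out to mark its absence).
concept_links = {
    ("water", "agua"): "translation",
    # ("water", "electric shock"): "hazard-when-combined",  # the missing link
}

def linked(a, b):
    """Return the relation between two concepts, or None if no link exists."""
    return concept_links.get((a, b)) or concept_links.get((b, a))

print(linked("water", "agua"))            # prints: translation
print(linked("water", "electric shock"))  # prints: None
```

The point of the sketch: "understanding" in Dietterich's sense would require the system to hold and use edges like the commented-out one, not just the surface translation edge.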

Much of the criticism of the latest AI advances stems from two sources. First, the hype surrounding AI (generated by researchers, the organizations they work for, and even governments and funding agencies) has reached extreme levels. It has even engendered fear that “superintelligence” or the “robot apocalypse” is imminent. Criticism is essential for countering this nonsense.

Second, criticism is part of the ongoing debate about future research directions in artificial intelligence and the allocation of government funding. On one side are the advocates of connectionism, who developed deep learning and support continuing that line of research. On the other side are the advocates of AI methods based on the construction and manipulation of symbols (e.g., using formal logic). There is also a growing community arguing for systems that combine both approaches in a hybrid architecture. Criticism is essential for this discussion as well, because we in the AI community must continually challenge our assumptions and choose how to invest society’s time and money in advancing AI science and technology.

However, I object to the argument that says “Today’s deep learning-based systems don’t exhibit genuine understanding, and therefore deep learning should be abandoned”. This argument is just as faulty as the argument that says “Today’s deep learning-based systems have achieved great advances, and pursuing them further will ‘solve intelligence’.” I like the analysis by Lakatos (1978) that research programmes tend to be pursued until they cease to be fruitful. I think we should continue to pursue the connectionist programme, the symbolic representationalist programme, and the emerging hybrid programmes, because they all continue to be very fruitful.

Criticism of deep learning is already leading to new directions. In particular, the demonstration that deep learning systems can match human performance on various benchmark tasks and yet fail to generalize to superficially very similar tasks has produced a crisis in machine learning (in the sense of Kuhn, 1962). Researchers are responding with new ideas such as learning invariants (Arjovsky, et al., 2019; Vapnik & Ismailov, 2019) and discovering causal models (Peters, et al., 2017). These ideas are applicable to both symbolic and connectionist machine learning.

submitted by /u/milaworld