[D] What does it mean to understand? Neural networks case

Hello everybody!

In this clearly philosophical paper, we express our opinion on the issue of what it means to understand neural networks. We were motivated by the paper by Timothy P. Lillicrap and Konrad P. Kording (https://arxiv.org/pdf/1907.06374.pdf), and we disagree with many of the statements presented in that work. We propose requirements for understanding and, based on them, describe the state in which we can say that we understand neural networks.

Abstract: We can say that we understand neural networks if and only if, when you come to me and say that the best model ever for some task has 100 layers, I can answer: “No! The 101-layer model is the best!”

Some highlights:

2.05. The more emergence there is in a system, the more heuristics are required to deal with it.

2.07. Knowledge destroys heuristics.

2.13. The more superficial the knowledge of a phenomenon, the more irrational it is, and the easier it is to make mistakes.

3.09. A description is a specification of the causal relationships between a set of system facts, expressed as simple facts.

3.16. First requirement: the system is described at all possible abstract levels. (We call this descriptive completeness.)

3.17. Second requirement: the system’s set of abstract descriptions reduces emergence to zero. (We call this descriptive emergence.)

3.18. The difficulty of understanding lies in finding all the abstract levels of the system.

3.19. If emergence persists after the system has been described at one or more abstract levels, further abstract-level descriptions are needed to bridge the gap.

3.20. The greater the gap between abstract description levels, the bigger the emergence between the facts of those levels.

3.21. Third requirement: the proposed description does not contradict newly arriving facts. (We call this descriptive power.)

3.22. We cannot evaluate a description if we have no facts beyond its scope.

3.23. If satisfying a lower-level requirement does not lead to satisfying a higher one, the description is not true.
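
To make requirements 3.16–3.23 easier to check, here is one tentative formalization of our own (the notation — the level set A, the per-level descriptions D_a, the emergence measure E, and the new-fact set F — is an assumption for illustration, not notation from the paper):

    % Tentative formalization of 3.16, 3.17 and 3.21.
    % A     = set of abstract levels of the system
    % D_a   = description of the system at level a
    % E(.)  = amount of emergence left unexplained by a set of descriptions
    % F     = facts observed after the description was fixed
    \begin{align*}
      \text{(3.16, completeness)}     &\quad \forall a \in \mathcal{A}:\ \exists\, D_a \\
      \text{(3.17, zero emergence)}   &\quad E\bigl(\{D_a : a \in \mathcal{A}\}\bigr) = 0 \\
      \text{(3.21, descriptive power)} &\quad \forall f \in F:\ \{D_a\} \not\vdash \lnot f
    \end{align*}

On this reading, 3.20 says that E({D_a, D_a'}) grows with the distance between levels a and a', and 3.23 says the requirements must hold in order: completeness first, then zero emergence, then descriptive power.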

5.200. To understand the code of a neural network does not mean to understand the neural network.

5.204. All abstract levels of code are known: there is zero emergence between the different abstract levels of program code, from low to high.

5.205. If we make a mistake in the code, we can always describe how that mistake affects the other processes and why it is a mistake; there is no emergence in the system.

5.206. If we make a mistake in a deep neural network’s architecture, we cannot always describe how that mistake affects the other processes, or why it is a mistake (see the sketch after these highlights).

And so on…
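
To make the contrast between 5.205 and 5.206 concrete, here is a minimal Python sketch of our own (the moving_average bug, the toy task, and the tiny numpy network are illustrative assumptions, not examples from the paper). The off-by-one bug has a complete causal description at the level of the code; the effect of the hidden-layer width on accuracy has no such description at any level we currently have:

    import numpy as np

    # 5.205: a bug in ordinary code has a complete causal story.
    def moving_average(xs, window):
        # Off-by-one bug: the bound should be len(xs) - window + 1.
        # We can say exactly what goes wrong (the last window is never
        # averaged) and trace why: the range excludes the final position.
        return [sum(xs[i:i + window]) / window
                for i in range(len(xs) - window)]

    # 5.206: an architecture "mistake" has no comparable story.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))
    y = (X[:, 0] * X[:, 1] > 0).astype(float)  # an XOR-like toy task

    def train_mlp(hidden, steps=2000, lr=0.2):
        # One hidden layer, logistic output, plain gradient descent.
        W1 = rng.normal(scale=0.5, size=(8, hidden))
        W2 = rng.normal(scale=0.5, size=(hidden, 1))
        for _ in range(steps):
            h = np.tanh(X @ W1)
            p = 1.0 / (1.0 + np.exp(-(h @ W2)))
            g = (p - y[:, None]) / len(X)              # dLoss/dlogits
            grad_W2 = h.T @ g
            grad_W1 = X.T @ ((g @ W2.T) * (1.0 - h ** 2))
            W1 -= lr * grad_W1
            W2 -= lr * grad_W2
        return float(((p[:, 0] > 0.5) == (y > 0.5)).mean())

    for hidden in (2, 4, 8):
        # Accuracy shifts with the width, but no description at the
        # level of this code explains *why* one width beats another --
        # that gap is the emergence 5.206 points at.
        print(hidden, train_mlp(hidden))

Running it prints one training accuracy per width: every line of the program is fully describable, yet the program gives no reason why one width should outperform another.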

Full paper: https://philpapers.org/archive/IERWDI.pdf

The problem of “understanding” has been hotly debated in the machine learning community recently; see https://www.reddit.com/r/MachineLearning/comments/chm065/d_why_ml_community_so_negatively_opposed_to/ and https://www.reddit.com/r/MachineLearning/comments/do6xx5/d_what_does_it_mean_for_a_machine_to_understand/

We hope that our article will draw even more attention to this problem.

submitted by /u/Albert_Ierusalem