[D] Is there a neuroscience / cognitive research equivalent to the relational inductive bias in machine learning?

Human decision-making is heavily influenced by beliefs, biases, and heuristics. Decision-making in (inductive) machine learning algorithms is grounded in inductive biases. I was wondering whether a connection can be established between the two. In particular, I am interested in bridging the gap for the following example:

Say we have a few Lego blocks arranged randomly on a table. If we ask a human to move one block past another without interfering with it, they will analyze the structure of the perceptual input and decompose the scene into entities, relations, and relational constraints. They will also draw on their knowledge and models of objects (Spelke et al., “Core knowledge”) and use their beliefs and experience to find an appropriate solution.

Now, I would argue that if we ask the same thing of a robot, it would require similar decision-making capabilities. I would further argue that two key components are needed:

  1. A forward model of the objects or the scene (to “hallucinate” consequences), and
  2. A relational inductive bias that makes it possible to exploit structure and to impose constraints on the relations and interactions of entities while learning the forward model (see the sketch after this list).
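To make component 2 concrete, here is a minimal sketch of what I mean by a relational forward model, assuming a graph/interaction-network-style formulation. This is only my own illustration; the function name `relational_forward_step` and the entity/relation/weight layout are hypothetical, not an established API.

```python
# Minimal sketch (illustrative only): a relational forward model in the spirit
# of interaction networks. Entities are feature vectors, relations are
# (sender, receiver) pairs, and the "hallucinated" next state is produced by
# aggregating pairwise effects per receiver.
import numpy as np

def relational_forward_step(entities, relations, w_rel, w_obj):
    """Predict the next entity states from pairwise interactions.

    entities:  (N, D) array, one feature row per block (e.g. position, velocity).
    relations: list of (sender, receiver) index pairs, the relational structure.
    w_rel:     (2*D, D) weights applied to concatenated sender/receiver features.
    w_obj:     (2*D, D) weights applied to each entity and its aggregated effect.
    """
    n, d = entities.shape
    effects = np.zeros((n, d))
    # Relational part: the inductive bias is that dynamics decompose into
    # pairwise effects computed by weights shared across all entity pairs.
    for sender, receiver in relations:
        pair = np.concatenate([entities[sender], entities[receiver]])
        effects[receiver] += np.tanh(pair @ w_rel)
    # Object part: each entity is updated from its own state plus incoming effects.
    combined = np.concatenate([entities, effects], axis=1)
    return np.tanh(combined @ w_obj)

# Toy usage: three "blocks" with 4-dimensional states, fully connected relations.
rng = np.random.default_rng(0)
blocks = rng.normal(size=(3, 4))
edges = [(i, j) for i in range(3) for j in range(3) if i != j]
w_rel = rng.normal(scale=0.1, size=(8, 4))
w_obj = rng.normal(scale=0.1, size=(8, 4))
next_state = relational_forward_step(blocks, edges, w_rel, w_obj)
print(next_state.shape)  # (3, 4): a rolled-forward guess at the scene state
```

The design choice I have in mind is that the same pairwise function is reused for every relation, which is exactly the constraint on interactions between entities that I mean by a relational inductive bias.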

Assuming these assumptions hold, I was wondering whether I can make a connection between human decision-making and such a machine intelligence model. Is there anything in human decision-making, corresponding to the decomposition of a scene into entities and relations, that is related to an (inductive) bias in machine learning?

Thanks a lot!

submitted by /u/whiletrue2