
[Discussion] Category-theoretic approach to machine learning

I’d like to start a thread about a small surge of recent papers studying machine learning from the perspective of functional programming and category theory. Plenty of interesting things are happening that most people outside of these tight circles don’t seem to be aware of!

Category theory is a very general and rigorous mathematical theory of compositionality that seems to have become a powerful unifying force across mathematics and, very recently, the sciences. Its main concerns are akin to those of deep learning: finding compositional structure in data, such that the resulting abstractions are non-leaky and as general as possible.
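
To make “compositionality” concrete, here is a minimal Haskell sketch (the Fn wrapper and its names are illustrative, not taken from any of the papers): a category is an interface consisting of identities and associative composition, and ordinary functions are the prototypical instance. The papers below instantiate the same interface with richer morphisms, such as learners and lenses.

    import Prelude hiding ((.), id)
    import Control.Category

    -- A morphism wrapper around plain functions. The Category interface
    -- fixes only identities and associative composition; what a morphism
    -- actually is stays open, which is what keeps the abstraction non-leaky.
    newtype Fn a b = Fn { runFn :: a -> b }

    instance Category Fn where
      id = Fn (\x -> x)                  -- identity morphism
      Fn g . Fn f = Fn (\x -> g (f x))   -- associative composition

    -- Laws (not checked by the compiler):
    --   id . f == f,  f . id == f,  (h . g) . f == h . (g . f)

    -- Usage: composing morphisms without inspecting their internals.
    example :: Int
    example = runFn (Fn (+ 1) . Fn (* 2)) 5   -- (5 * 2) + 1 == 11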

Alongside the many papers I’ve linked below, the Symposium on Compositional Structures happening this week features two talks on abstract mathematical generalizations of machine learning.

Note: unlike most ML papers, which focus on experiments, almost all of these lean heavily toward theory, disentangling structure that already exists rather than proposing new ad-hoc design choices in neural network architectures. They don’t offer a SOTA result or any immediate benefit you can implement right now; they aim at a long-term understanding of the structures underlying neural networks.

I’ve compiled a list of these papers below. To me these are all exciting developments, and I thought it might be useful for people to see these new approaches, as they might show us the shape of things to come. (A small code sketch of the recurring “learner” construction follows the list.)

Backprop As Functor

The simple essence of automatic differentiation

Lenses and Learners

Compositional Deep Learning

Generalized convolution and efficient language recognition

Towards formalizing and extending differential programming using tangent categories

Learning as change propagation with delta lenses

From open learners to open games
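
To give a flavor of what these papers do, here is a minimal Haskell sketch of the “learner” construction from “Backprop As Functor” (also central to “Lenses and Learners”); the field and function names below are my own labels for the paper’s (P, I, U, r) data. A learner carries a parameter space together with implementation, update, and request maps, and two learners compose by pairing their parameters and feeding the downstream learner’s request back to the upstream learner as its training signal.

    -- A learner from a to b with parameter space p, following
    -- "Backprop As Functor":
    --   impl    : P x A -> B       produce an output
    --   update  : P x A x B -> P   revise parameters on a training pair
    --   request : P x A x B -> A   send a training signal upstream
    data Learner p a b = Learner
      { impl    :: p -> a -> b
      , update  :: p -> a -> b -> p
      , request :: p -> a -> b -> a
      }

    -- Sequential composition: parameter spaces pair up, and the
    -- downstream learner's request becomes the upstream learner's
    -- training signal. This is where backprop-shaped structure appears.
    compose :: Learner q b c -> Learner p a b -> Learner (p, q) a c
    compose g f = Learner
      { impl    = \(p, q) a -> impl g q (impl f p a)
      , update  = \(p, q) a c ->
          let b = impl f p a
          in (update f p a (request g q b c), update g q b c)
      , request = \(p, q) a c ->
          let b = impl f p a
          in request f p a (request g q b c)
      }

    -- The identity learner has a trivial parameter space; with it,
    -- learners form the morphisms of a category (up to equivalence).
    identity :: Learner () a a
    identity = Learner
      { impl    = \() a -> a
      , update  = \() _ _ -> ()
      , request = \() _ b -> b
      }

The payoff of the categorical phrasing, as the paper shows, is that gradient descent and backpropagation then give a structure-preserving map (a functor) into this category of learners, which is exactly the kind of non-leaky compositional structure described above.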

EDIT: Disclaimer: I am the author of the fourth paper, “Compositional Deep Learning”.

submitted by /u/totallynotAGI