
Category: Reddit MachineLearning

[D] Machine Learning & Robotics: My (biased) 2019 State of the Field

At the end of every year, I like to take a look back at the different trends or papers that inspired me the most. As a researcher in the field, I find it can be quite productive to take a deeper look at where I think the research community has made surprising progress or to identify areas where, perhaps unexpectedly, we did not advance. I’ve put together a post in which I give my perspective on the state of the field from this past year. The post is no doubt a biased sample of what I think is progress, but I hope it stimulates discussion about which subfields evolved or what priorities unexpectedly shifted in 2019.

Link to post: http://www.cachestocaches.com/2019/12/my-state-of-the-field/

A short summary/outline of my post for discussion:

  • From AlphaZero to MuZero: MuZero picks up where AlphaZero left off a couple of years ago and makes significant advances by using a learned model to enable rollouts without planning in pixel space.
  • Representation Learning: I’m particularly excited to see how recent progress in representation learning (like advances in “entity abstraction”) will help to blur the lines between black-box deep learning and old-school symbolic AI & classical planning.
  • Supervised Computer Vision Research Cools (Slightly): Research in this space has slowed, but related techniques, like network pruning and network compilation, have taken off this past year.
  • Maturing Technologies:
    • Graph Neural Networks
    • Explainable & Interpretable AI
    • Simulation Tools & Sim-to-Real
  • Bittersweet Lessons: No discussion of 2019 would be complete without a conversation about Rich Sutton’s “The Bitter Lesson” post and its rebuttals.

What do you think were the most interesting advancements or shifts this year?

submitted by /u/gregoryjstein

[D] 7 really neat recent survey papers in deep learning


With the intense democratization of toolkits and the breakneck speed at which deep learning research is unfolding, the literature landscape can seem chaotic and cacophonous at times.

Hence, I truly appreciate it when well-cited authors in a specific vertical of research invest the time and effort to author good overview/survey/review/meta papers. Besides the obvious benefit of providing a comprehensive bird’s-eye view of the field, such papers serve five crucial purposes that are often ignored:

1) They are high-quality invitations to researchers from other domains to contribute.

2) They serve as collections of important open problems waiting to be solved.

3) They are immensely helpful for faster, better, and up-to-date teaching course design.

4) They set the agenda for research directions in the near future.

5) They ease the burden of lengthy citation lists, especially for short communication papers.

This year, I chanced upon 7 such papers that I am sharing with the ML community here.

Happy year end reading!

List:

  1. Advances and Open Problems in Federated Learning, https://arxiv.org/pdf/1912.04977.pdf
  2. Deep learning for time series classification: a review, https://arxiv.org/pdf/1809.04356.pdf
  3. Optimization for deep learning: theory and algorithms, https://arxiv.org/pdf/1912.08957.pdf
  4. Normalizing Flows: An Introduction and Review of Current Methods, https://arxiv.org/pdf/1908.09257.pdf
  5. Normalizing Flows for Probabilistic Modeling and Inference, https://arxiv.org/pdf/1912.02762.pdf
  6. Fantastic Generalization Measures and Where to Find Them, https://arxiv.org/pdf/1912.02178.pdf
  7. Neural Style Transfer: A Review, https://arxiv.org/pdf/1705.04058.pdf

Cheat-sheet for print:

https://preview.redd.it/4y22qaqp9t741.png?width=1792&format=png&auto=webp&s=ff3aa1c76374530983cb3ae561ff777afe57db69

submitted by /u/VinayUPrabhu

[R] Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations

While searching for papers about inverse reinforcement learning (IRL, learning rewards instead of policies), I found this:

https://dsbrown1331.github.io/CoRL2019-DREX/
https://arxiv.org/pdf/1907.03976.pdf

It seems to be the only paper in which an agent manages to outperform the sub-optimal players it learns from.
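
Going by the title and project page, the core mechanism is learning a reward function from automatically ranked demonstrations via a pairwise ranking loss. A minimal sketch of that ranking-loss idea (the features, the hidden "true" reward, and the linear reward model are all illustrative, not the paper's actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def pairwise_loss_grad(w, phi_worse, phi_better):
    """Gradient of the Bradley-Terry style ranking loss
    -log sigmoid(R_better - R_worse) for a linear reward R(traj) = w . phi(traj)."""
    diff = (phi_better - phi_worse) @ w
    sig = 1.0 / (1.0 + np.exp(-diff))        # P(model ranks the pair correctly)
    return -(1.0 - sig) * (phi_better - phi_worse)

# Toy data: per-trajectory feature sums, with rankings derived from a hidden
# "true" reward (in the paper, rankings come from injected noise levels).
true_w = np.array([1.0, -0.5])
phis = rng.normal(size=(20, 2))
returns = phis @ true_w

w = np.zeros(2)
for _ in range(500):
    i, j = rng.choice(20, size=2, replace=False)
    if returns[i] > returns[j]:              # ensure i is the worse trajectory
        i, j = j, i
    w -= 0.1 * pairwise_loss_grad(w, phis[i], phis[j])

# The learned reward direction should align with the true one.
cos = w @ true_w / (np.linalg.norm(w) * np.linalg.norm(true_w))
print(f"cosine(learned reward, true reward) = {cos:.2f}")
```

The appeal of learning from rankings rather than raw demonstrations is exactly the post's point: the learned reward can extrapolate beyond the best demonstrator instead of merely imitating it.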

I think this is probably an important step forward, since eventually we would like to create programs that act in the real world and perform tasks better than we do, without having to hand-code a reward function.

A short-term application could be in self-driving cars, allowing them to learn from human drivers while being safer than the humans themselves.

submitted by /u/Pedro-Afonso

[P] Dashcam Crash Classification

I am working on classifying dashcam videos of car crashes as Head-on, Rear-end by others, Rear-end by us, Sideswipe, T-Bone by others, and T-Bone by us. We only care about videos where the dashcam car is involved in the accident. There are two types of accidents involved here:

  1. Visible on dashcam footage – Head-on, Rear-end by us, etc
  2. Not visible on dashcam footage – Rear-end by others, T-Bone by others

Working with motion features seemed like a good way to classify this. I have tried using the ideas from “Action Recognition with Improved Trajectories” to build a classifier, but this has not proved fruitful. For the visible-accident subcase, I am able to identify the other car involved in the accident and pass it through an image classification algorithm to obtain satisfactory results. I am currently exploring dynamic time warping to find a solution.
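
Since dynamic time warping came up: a minimal, self-contained DTW distance over 1-D per-frame motion-magnitude sequences. The sequences below are synthetic stand-ins; in practice they might come from something like per-frame optical-flow magnitudes:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance
    between two 1-D sequences (e.g. per-frame motion magnitudes)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Synthetic motion magnitudes (stand-ins for real dashcam features):
# an impact shows a sudden spike; the same spike shifted in time should
# still be close under DTW, while a smooth drive should not.
impact      = np.concatenate([np.full(20, 0.1), [3.0, 5.0, 2.0], np.full(20, 0.1)])
impact_late = np.concatenate([np.full(30, 0.1), [3.0, 5.0, 2.0], np.full(10, 0.1)])
smooth      = np.full(43, 0.1)

d_same = dtw_distance(impact, impact_late)
d_diff = dtw_distance(impact, smooth)
print(d_same < d_diff)   # time-shifted spike is closer than no spike -> True
```

This time-shift invariance is what makes DTW attractive here: the same crash type can occur at different points in a clip, which defeats a naive frame-by-frame comparison.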

It seems like an interesting problem to solve. Any leads on how to approach this would be highly appreciated. 🙂

submitted by /u/deepakprabakar

[D] Why is it difficult to sample from Energy Based Models?

I have very little experience with generative models, so apologies if that is a trivial question.

My understanding of an energy-based model (EBM) is that it is an undirected graphical model defining the joint distribution over the vector X as p(X) = exp(-E(X)) / Z, where E(x) is a sum of potentials defined over cliques.

The well-known Deep learning book by Goodfellow et al. claims that sampling from an EBM is difficult:

To understand why drawing samples from an energy-based model (EBM) is difficult, consider the EBM over just two variables, defining a distribution p(a,b). In order to sample a, we must draw from p(a|b), and in order to sample b, we must draw it from p(b|a). It seems to be an intractable chicken-and-egg problem.

I really find that perplexing. We already know p(a,b), so why can’t we just compute the marginal p(a), sample from it, and then sample from p(b|a)?
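
For a small discrete EBM that plan does work, by brute-force enumeration; the catch is that both the marginal p(a) and the normalizer Z require summing over every configuration of the remaining variables, which grows exponentially with dimension. A toy illustration (the energy function and its weights are arbitrary, chosen only to make the point):

```python
from itertools import product
import numpy as np

rng = np.random.default_rng(0)

# Tiny binary EBM: p(x) = exp(-E(x)) / Z over x in {0, 1}^n,
# with an arbitrary pairwise energy E(x) = x' W x.
n = 10
W = rng.normal(size=(n, n))

def energy(x):
    x = np.asarray(x)
    return x @ W @ x

# Exact marginal p(x_0 = 1) by brute force: already 2**n = 1024 terms at n = 10.
states = np.array(list(product([0, 1], repeat=n)))
weights = np.exp(-np.array([energy(s) for s in states]))
Z = weights.sum()                                  # partition function
p_x0 = weights[states[:, 0] == 1].sum() / Z
print(f"p(x_0 = 1) = {p_x0:.3f}, enumerated {len(states)} states")

# At n = 100 the same sum would need 2**100 (~1e30) terms, which is why one
# falls back on MCMC methods like Gibbs sampling that only need conditionals.
```

So the chicken-and-egg framing in the book is really about scale: "knowing" p(a, b) through its energy is not the same as being able to afford the sums that turn it into marginals.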

submitted by /u/AddMoreLayers