Vector Researchers Prepare for 33rd Annual Conference on Neural Information Processing Systems (NeurIPS)

Vector researchers are preparing for the world’s premier machine learning conference, the 33rd Annual Conference on Neural Information Processing Systems (NeurIPS). A multi-track machine learning and computational neuroscience conference featuring invited talks, demonstrations, symposia, and oral and poster presentations of refereed papers, NeurIPS 2019 runs December 8-14 at the Vancouver Convention Centre in Vancouver, BC.

This year, Vector researchers had an impressive 23 papers accepted to the conference. Additionally, they are organizing four workshops.

At the 2018 NeurIPS conference, Vector Faculty Members and students collaborated to win two of four Best Paper awards and a Best Student Paper Award for their research. Read more about Vector’s accomplishments at last year’s conference here.

Accepted Papers by Vector researchers:

Efficient Graph Generation with Graph Recurrent Attention Networks
Renjie Liao (University of Toronto) · Yujia Li (DeepMind) · Yang Song (Stanford University) · Shenlong Wang (University of Toronto) · Will Hamilton (McGill) · David Duvenaud (University of Toronto) · Raquel Urtasun (Uber ATG) · Richard Zemel (Vector Institute/University of Toronto)

Incremental Few-Shot Learning with Attention Attractor Networks
Mengye Ren (University of Toronto / Uber ATG) · Renjie Liao (University of Toronto) · Ethan Fetaya (University of Toronto) · Richard Zemel (Vector Institute/University of Toronto)

SMILe: Scalable Meta Inverse Reinforcement Learning through Context-Conditional Policies
Seyed Kamyar Seyed Ghasemipour (University of Toronto, Vector Institute) · Shixiang (Shane) Gu (Google Brain) · Richard Zemel (Vector Institute/University of Toronto)

Lookahead Optimizer: k steps forward, 1 step back
Michael Zhang (University of Toronto) · James Lucas (University of Toronto) · Jimmy Ba (University of Toronto / Vector Institute) · Geoffrey Hinton (Google)
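
The paper’s title describes the core update: an inner optimizer takes k fast steps, and the slow weights then take one interpolation step back toward the result. As a rough, illustrative sketch of that rule (the inner optimizer, step sizes, and toy objective below are placeholders, not the paper’s experimental setup):

    import numpy as np

    def lookahead(grad_fn, slow_weights, k=5, alpha=0.5, inner_lr=0.1, outer_steps=100):
        """Lookahead-style loop: k fast SGD steps forward, one interpolation step back."""
        phi = slow_weights.copy()          # slow weights
        for _ in range(outer_steps):
            theta = phi.copy()             # fast weights start from the slow weights
            for _ in range(k):             # k steps forward with the inner optimizer (plain SGD here)
                theta -= inner_lr * grad_fn(theta)
            phi += alpha * (theta - phi)   # 1 step back: pull slow weights toward the fast weights
        return phi

    # Toy usage: minimize f(w) = ||w||^2 / 2, whose gradient is simply w.
    w = lookahead(lambda w: w, slow_weights=np.array([3.0, -2.0]))
    print(w)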

Graph Normalizing Flows
Jenny Liu (Vector Institute, University of Toronto) · Aviral Kumar (UC Berkeley) · Jimmy Ba (University of Toronto / Vector Institute) · Jamie Kiros (Google Inc.) · Kevin Swersky (Google)

Latent Ordinary Differential Equations for Irregularly-Sampled Time Series
Yulia Rubanova (University of Toronto) · Tian Qi Chen (U of Toronto) · David Duvenaud (University of Toronto)
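
The idea hinted at in the title is to model a time series as snapshots of a continuous latent state governed by a learned ODE, so the state can be read out at arbitrary, unevenly spaced times. A toy sketch of that readout, with a hand-picked dynamics function and plain Euler integration standing in for the learned neural dynamics and adaptive solver used in practice:

    import numpy as np

    def euler_odeint(f, z0, times, dt=0.01):
        """Integrate dz/dt = f(z) with Euler steps, returning z at each requested time."""
        z, t, out = z0.copy(), times[0], []
        for t_next in times:
            while t < t_next:
                z = z + dt * f(z)          # one Euler step of the latent dynamics
                t += dt
            out.append(z.copy())           # latent state at an (irregular) observation time
        return np.stack(out)

    # Toy dynamics: a rotation of a 2-D latent state (stands in for a learned network f).
    f = lambda z: np.array([-z[1], z[0]])
    irregular_times = np.array([0.0, 0.3, 0.35, 1.2, 2.0])   # unevenly spaced query times
    print(euler_odeint(f, np.array([1.0, 0.0]), irregular_times))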

Residual Flows for Invertible Generative Modeling
Tian Qi Chen (U of Toronto) · Jens Behrmann (University of Bremen) · David Duvenaud (University of Toronto) · Joern-Henrik Jacobsen (Vector Institute)

Neural Networks with Cheap Differential Operators
Tian Qi Chen (U of Toronto) · David Duvenaud (University of Toronto)

Stochastic Runge-Kutta Accelerates Langevin Monte Carlo and Beyond
Xuechen Li (Google) · Yi Wu (University of Toronto & Vector Institute) · Lester Mackey (Microsoft Research) · Murat Erdogdu (University of Toronto)

Value Function in Frequency Domain and Characteristic Value Iteration
Amir-massoud Farahmand (Vector Institute)

Learning to Predict 3D Objects with an Interpolation-based Differentiable Renderer
Wenzheng Chen (University of Toronto) · Huan Ling (University of Toronto, NVIDIA) · Jun Gao (University of Toronto) · Edward Smith (McGill University) · Jaakko Lehtinen (NVIDIA Research; Aalto University) · Alec Jacobson (University of Toronto) · Sanja Fidler (University of Toronto)

Fast Convergence of Natural Gradient Descent for Over-Parameterized Neural Networks
Guodong Zhang (University of Toronto) · James Martens (DeepMind) · Roger Grosse (University of Toronto)

Which Algorithmic Choices Matter at Which Batch Sizes? Insights From a Noisy Quadratic Model
Guodong Zhang (University of Toronto) · Lala Li (Google) · Zachary Nado (Google Inc.) · James Martens (DeepMind) · Sushant Sachdeva (University of Toronto) · George Dahl (Google Brain) · Chris Shallue (Google Brain) · Roger Grosse (University of Toronto)

Understanding Posterior Collapse in Variational Autoencoders
James Lucas (University of Toronto) · George Tucker (Google Brain) · Roger Grosse (University of Toronto) · Mohammad Norouzi (Google Brain)

Preventing Gradient Attenuation in Lipschitz Constrained Convolutional Networks
Qiyang Li (University of Toronto) · Saminul Haque (University of Toronto) · Cem Anil (University of Toronto; Vector Institute) · James Lucas (University of Toronto) · Roger Grosse (University of Toronto) · Joern-Henrik Jacobsen (Vector Institute)

MixMatch: A Holistic Approach to Semi-Supervised Learning
David Berthelot (Google Brain) · Nicholas Carlini (Google) · Ian Goodfellow (Google Brain) · Nicolas Papernot (University of Toronto) · Avital Oliver (Google Brain) · Colin A Raffel (Google Brain)

Fast PAC-Bayes via Shifted Rademacher Complexity
Jun Yang (University of Toronto) · Shengyang Sun (University of Toronto) · Daniel Roy (Univ of Toronto & Vector)

Information-Theoretic Generalization Bounds for SGLD via Data-Dependent Estimates
Gintare Karolina Dziugaite (Element AI) · Mahdi Haghifam (University of Toronto) · Jeffrey Negrea (University of Toronto) · Ashish Khisti (University of Toronto) · Daniel Roy (Univ of Toronto & Vector)

Understanding attention in graph neural networks
Boris Knyazev (University of Guelph) · Graham W Taylor (University of Guelph) · Mohamed R. Amer (Robust.AI)

The Cells Out of Sample (COOS) dataset and benchmarks for measuring out-of-sample generalization of image classifiers
Alex Lu (University of Toronto) · Amy Lu (University of Toronto/Vector Institute) · Wiebke Schormann (Sunnybrook Research Institute) · David Andrews (Sunnybrook Research Institute) · Alan Moses (University of Toronto)

Learning Reward Machines for Partially Observable Reinforcement Learning
Rodrigo Toro Icarte (University of Toronto and Vector Institute) · Ethan Waldie (University of Toronto) · Toryn Klassen (University of Toronto) · Rick Valenzano (Element AI) · Margarita Castro (University of Toronto) · Sheila McIlraith (University of Toronto)

When does label smoothing help?
Rafael Müller (Google Brain) · Simon Kornblith (Google Brain) · Geoffrey E Hinton (Google & University of Toronto)
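
Label smoothing itself is a simple trick: mix the one-hot training target with a uniform distribution over the classes so the network is never pushed toward fully confident outputs; the paper studies when this actually helps. A minimal sketch of the standard formulation (the smoothing factor and class count here are arbitrary illustrations):

    import numpy as np

    def smooth_labels(one_hot, epsilon=0.1):
        """Standard label smoothing: keep 1 - epsilon on the true class and
        spread epsilon uniformly over all K classes."""
        k = one_hot.shape[-1]
        return one_hot * (1.0 - epsilon) + epsilon / k

    # A 3-class example: the target [0, 1, 0] becomes roughly [0.033, 0.933, 0.033].
    print(smooth_labels(np.array([0.0, 1.0, 0.0])))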

Stacked Capsule Autoencoders
Adam Kosiorek (University of Oxford) · Sara Sabour (Google) · Yee Whye Teh (University of Oxford, DeepMind) · Geoffrey E Hinton (Google & University of Toronto)

Vector Institute researchers are hosting four workshops:

Machine Learning and the Physical Sciences: Organized by Juan Felipe Carrasquilla (Canada CIFAR AI Chair and Faculty Member, Vector Institute, and Assistant Professor (Adjunct), Department of Physics and Astronomy, University of Waterloo) and collaborators, this workshop focuses on applying machine learning to outstanding physics problems. | Learn more

Fair ML in Healthcare: Organized by Shalmali Joshi (Post-doctoral Fellow) and Shems Saleh of the Vector Institute, together with collaborators, this workshop investigates issues around fairness in machine learning-based health care. | Learn more

Program Transformations for ML: Organized by David Duvenaud (Assistant Professor at the University of Toronto; Co-founder, Invenia; Canada Research Chair in Generative Models; and Faculty Member, Vector Institute) and his collaborators, this workshop aims to view program transformations in ML in a unified light, make these capabilities more accessible, and build entirely new ones. | Learn more

Machine Learning with Guarantees: Organized by Daniel Roy (Assistant Professor at the University of Toronto, Faculty Member, Vector Institute, and Canada CIFAR Artificial Intelligence Chair) and his collaborators, this workshop will bring together researchers to discuss the problem of obtaining performance guarantees and algorithms to optimize them. | Learn more

Learn more:

  • Check out a full list of Vector research publications here.