
Vector researcher Will Grathwohl wants to lower the barriers to entry to AI

By Ian Gormely

Artificial intelligence is a transformative technology. Yet, much like the Internet before web browsers, it remains inaccessible to many people. The web’s true potential wasn’t realized until the barriers to entry were lowered to the point that “anyone with a laptop had the potential to build the next Facebook,” says Will Grathwohl, a Vector researcher and graduate student at the University of Toronto. “I think we should put AI into people’s hands. The people who have the best ideas for how to apply something are usually not the people that created the thing. But right now, it’s just not like that at all.”

Grathwohl was part of a robust contingent of Vector-affiliated researchers who attended this year’s International Conference on Learning Representations (ICLR) in New Orleans. In total, 12 posters from Vector Faculty Members were accepted to the conference, with Grathwohl giving an oral presentation of the paper “FFJORD: Free-Form Continuous Dynamics for Scalable Reversible Generative Models,” which he co-authored with Ricky T. Q. Chen, Jesse Bettencourt, Ilya Sutskever, and Vector Faculty Member David Duvenaud.

FFJORD, an acronym for Free-Form Jacobian of Reversible Dynamics, is a small but important step in Grathwohl’s quest to lower the barriers to entry to AI. There have been tremendous breakthroughs in the field over the past five years, particularly around the use of machine learning. But those breakthroughs still require vast sums of hand-labelled data – say, pictures of cats that are identified as such – and computing power, neither of which comes cheap. “To me, the most interesting method to making that amount of data smaller is finding ways to use the massive amounts of unlabeled data that are out there,” says the 27-year-old. “One way that that’s become popular to do that is to look into generative models.”

Grathwohl’s paper looks specifically at normalizing flows, a class of generative models that has become popular in the machine learning community for its ability both to generate samples and to compute exact likelihoods. Building them, though, requires placing heavy restrictions on the neural networks that can be used to solve a problem. FFJORD applies the idea of continuous time as a workaround, making it possible to build better, less restrictive normalizing flows.
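The defining property of a normalizing flow – an invertible map whose Jacobian determinant converts a simple base density into an exact data density – can be illustrated with a minimal sketch. Here a single affine transform stands in for the deep, restricted networks real flows use; the function names and parameters are purely illustrative:

```python
import numpy as np

# A minimal 1-D normalizing flow: the invertible affine map z = (x - b) / a
# pushes data toward a standard-normal base density. Real flows (including
# FFJORD) use far more expressive invertible transformations.

def log_prob(x, a, b):
    """Exact log-density of x via the change-of-variables formula:
    log p(x) = log N(z; 0, 1) + log |dz/dx|, where z = (x - b) / a."""
    z = (x - b) / a
    log_base = -0.5 * (z ** 2 + np.log(2 * np.pi))
    log_det = -np.log(np.abs(a))  # |dz/dx| = 1/|a|
    return log_base + log_det

def sample(n, a, b, rng=None):
    """Generate samples by pushing base-normal draws through the inverse map."""
    if rng is None:
        rng = np.random.default_rng(0)
    z = rng.standard_normal(n)
    return a * z + b
```

The restriction Grathwohl refers to shows up in the `log_det` term: the transform must be invertible with a tractable Jacobian determinant, which rules out arbitrary network architectures.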

It builds on an idea first put forth by Grathwohl’s advisor, Vector Faculty Member David Duvenaud, in the paper “Neural Ordinary Differential Equations,” which won Duvenaud and his co-authors Ricky Tian Qi Chen, Yulia Rubanova, and Jesse Bettencourt the Best Paper Award at last year’s NeurIPS conference. “David’s paper presented the idea of having a neural network parameterize a continuous-time dynamic process. And that opened up a whole new paradigm to think about things that involve machine learning in neural networks,” says Grathwohl. Leveraging Duvenaud’s idea of switching from discrete time – data sampled at regular intervals – to continuous time – data sampled at any point in the flow – allows normalizing flow-based generative models to be built in a much simpler and more expressive way.
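The continuous-time trick can be sketched numerically. In a Neural ODE-based flow, the log-density evolves according to the instantaneous change of variables, d log p(z(t))/dt = −tr(∂f/∂z); FFJORD’s contribution is a stochastic estimator of that trace so that f can be an unrestricted (“free-form”) neural network. The toy below uses a fixed linear vector field, where the trace is exact, purely for illustration:

```python
import numpy as np

# Continuous-time flow sketch: integrate dz/dt = f(z) while accumulating the
# log-density change -tr(df/dz) dt. Here f(z) = A @ z, so tr(df/dz) = tr(A)
# is exact and constant; FFJORD replaces this exact trace with an unbiased
# stochastic estimate so f can be any neural network.

def integrate_flow(z0, A, t1=1.0, steps=1000):
    """Euler-integrate the dynamics and return (z(t1), delta log p)."""
    z = np.array(z0, dtype=float)
    dt = t1 / steps
    delta_logp = 0.0
    trace_A = np.trace(A)
    for _ in range(steps):
        z = z + dt * (A @ z)       # move the sample along the vector field
        delta_logp -= dt * trace_A # instantaneous change of variables
    return z, delta_logp
```

Because the density change is an integral of a trace rather than a log-determinant of a full Jacobian, the architecture of f is essentially unconstrained, which is the “less restrictive” property the article describes.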

After finishing his undergrad in 2014, Grathwohl spent several years bouncing around the tech industry, first as an entrepreneur, developing content moderation software, and later using machine learning for product indexing at a startup. Eventually, he became frustrated with the lack of creativity. Yet out of that milieu came the inspiration for his return to school. “My job was building infrastructure to collect data and figuring out how to do it as cheaply as possible,” he says. “We had to build more classifiers to serve more industries and more customers. Every single one of those was a constant cost of time and money. I realized we need to make these things work better with less data.”

FFJORD does not solve that problem, but it is a step in the right direction. “Better models that can solve this less labelled data problem will be a key piece,” he says, noting that down the road, normalizing flows could also help in modelling environments, an important aspect of genetic research and robotics. “Any improvement in unsupervised generative models will help us in the semi-supervised learning setting.”

Machine learning platform helps enable early diagnosis of life-threatening infection in premature infants

Vector Institute and Ontario Tech University supporting two hospitals in predictive analytics to detect sepsis in infants through machine learning

Toronto – Today, the Vector Institute, an independent, not-for-profit research institute focused on leading-edge machine learning, announced the latest in its series of Pathfinder Projects to implement artificial intelligence (AI) in the health sector.

The fifth Pathfinder Project uses machine learning for early detection of sepsis in infants in the neonatal intensive care unit (NICU).

Sepsis is a life-threatening condition in which bacteria grow in the bloodstream, resulting in a severe, widespread inflammatory response. In infants, it is one of the leading causes of long-term morbidity and mortality globally.

With support from the Vector Institute and led by Dr. Carolyn McGregor at Ontario Tech University, Artemis is a predictive analytics platform that applies machine learning to help physicians with the critical care of newborns. Artemis is being developed in partnership with McMaster Children’s Hospital and Southlake Regional Health Centre. Once fully implemented, the Artemis system will monitor infants in NICUs, alerting clinicians when sepsis develops before it would otherwise be clinically apparent. Ultimately, Artemis will reduce mortality, morbidity, and average length of stay in NICUs.

“Early detection of sepsis in newborns has the potential to save many lives,” says Dr. McGregor. “Artemis data can help NICUs better manage the use of antibiotics and reduce the frequency of blood draws from patients. Our research has also developed a new understanding of a number of other conditions, which will all contribute to better outcomes for these fragile infants and their families.”

Premature babies have underdeveloped immune systems, making them acutely susceptible to infections, which can lead to sepsis. Symptoms appear rapidly and unpredictably and can become fatal within hours. A quarter of preterm infants will develop an episode of sepsis during their stay in the NICU, and 10 per cent of all cases are fatal.

“We’ve started Artemis with the very smallest of patients,” adds Dr. Edward Pugh, clinical lead at McMaster Children’s Hospital. “But this analytics platform has the potential to be rolled out across the adult world and very much change the way that my colleagues and I work.”

“As a community and regional hospital, Southlake is passionate about the care of all our patients,” says Patrick Clifford, Director of Research and Innovation at Southlake Regional Health Centre. “The opportunity to collaborate on Artemis not only advances care for our highly vulnerable neonates, but allows hospitals like Southlake to better serve our tiniest of patients with leading-edge care, closer to home.”

Pathfinder Projects are small-scale efforts designed to produce results in 12 to 18 months to guide future research and technology adoption. With technical and resource support from the Vector Institute, projects bring together a multidisciplinary research team to tackle an important health care problem or opportunity by using machine learning and AI more broadly. Each project was chosen for its potential to help identify a “path” through which world-class machine learning research translates into widespread benefits for patients.

About the Vector Institute

The Vector Institute is an independent, not-for-profit corporation dedicated to advancing AI, excelling in machine and deep learning. The Vector Institute’s vision is to drive excellence and leadership in Canada’s knowledge, creation, and use of AI to foster economic growth and improve the lives of Canadians.

The Vector Institute is funded by the Province of Ontario, the Government of Canada through the Pan-Canadian AI Strategy administered by CIFAR, and industry sponsors from across the Canadian economy.

Predicting sepsis in infants with machine learning

Behind the medical monitors in the neonatal intensive care unit (NICU) at McMaster Children’s Hospital sits a small beige box. It likely goes unnoticed by most visitors, yet its benign appearance belies its role as a gateway to a powerful tool that could save the lives of the hospital’s smallest patients.

“The box itself is very unexciting,” confirms Dr. Edward Pugh, clinical lead at McMaster. “But it very much has the potential to change the way that my colleagues and I work.”

The box in question is the bedside connection point for Artemis, a cloud-based data collection platform that uses machine learning technology to collect, store, and analyze patient data. The system is currently being pilot-tested in two Ontario hospitals: Southlake Regional Health Centre in Newmarket and McMaster in Hamilton. Running continuously in the background of each hospital’s NICUs, the system sends and receives data at a volume equivalent to 1,000 tweets a second per infant, for approximately 1,200 patients annually.

With support from the Vector Institute, Ontario Tech University researcher Dr. Carolyn McGregor, the project’s lead, along with Dr. Pugh, Southlake, and their teams have set up Artemis to use AI to constantly monitor many streams of data and analyze changes in infant physiology. Variations in indicators like heart rate or breathing are signs a child is dealing with an infection. Should such signs occur, Artemis will alert physicians, who will interpret the data and decide next steps.

“One of our main contributions is defining the patterns of other conditions that will nevertheless make a baby unstable in similar ways,” says Dr. McGregor. “By accurately identifying sepsis and other events that make a baby unstable, we will be able to minimize unnecessary antibiotics and investigations. Minimizing interventions in the NICU can improve the long-term outcome of these fragile infants and decrease the distress and burden on their families.”

Sepsis is one of the most common and devastating conditions preterm and ill term infants can develop, says Dr. McGregor. It occurs when the natural chemicals that the body produces to ward off infections fall out of balance. The underdeveloped immune systems of premature babies make them particularly vulnerable – a quarter of preterm infants develop sepsis. “Symptoms appear rapidly and unpredictably and can be fatal within a few hours,” she says, noting that 10 per cent of cases are fatal.

McMaster is home to Ontario’s largest NICU, where babies from just 350 grams to as large as eight kilograms are cared for. “We’ll have babies who will stay with us for up to a year of life,” notes Dr. Pugh. “You don’t see that in many neonatal intensive cares.” The volume, acuity, and wide variety of patient conditions seen across McMaster and a large community hospital like Southlake make them ideal locations to pilot Artemis in the field.

Studies will continue through 2020. Once Artemis is fully implemented, the researchers hope to expand it beyond checking for sepsis and outside of the NICU. “We’re a small-footprint place working with the tiniest of patients,” says Dr. Pugh, “but we have huge potential for a large impact.”

Additional Pathfinder Projects

Evolution of Deep Learning Symposium recap blog post

AI community celebrates Dr. Geoffrey Hinton at Evolution of Deep Learning Symposium

Geoffrey Hinton delivered his Turing Lecture to a crowd of researchers and professionals at the Vector Institute’s Evolution of Deep Learning Symposium on October 16th.

International AI talent gathered in Toronto last week to share perspectives on how research and applications are evolving, and how researchers can continue momentum in the field based on historic accomplishments in deep learning. 

The two-day Evolution of Deep Learning Symposium, hosted by the Vector Institute, was in celebration of Dr. Geoffrey Hinton’s leadership and foresight in the field. His four decades of work earned him the 2018 ACM A.M. Turing Award, alongside colleagues Dr. Yoshua Bengio and Dr. Yann LeCun. Dr. Hinton is Chief Scientific Advisor at the Vector Institute, Vice President and Engineering Fellow at Google, and Professor Emeritus at the University of Toronto. 

Researchers from throughout the deep learning community gathered to celebrate Dr. Hinton’s contributions and achievements, presented posters showcasing the impact of his research across a broad range of topics, and heard from speakers spanning four decades of Dr. Hinton’s career and collaborators.

The line-up of talks given at the event was a testament to Dr. Hinton’s enduring influence in the field. Colleagues, collaborators, and former students of Dr. Hinton’s who have since gone on to make their own significant contributions to deep learning each spoke at length about Dr. Hinton’s convictions, curiosity, and friendship.

  • Radford Neal, Professor Emeritus at the University of Toronto
  • George Dahl, research scientist with Google Brain
  • Terrence Sejnowski, Professor and Director of the Computational Neurobiology Laboratory at the Salk Institute and Distinguished Professor at UC San Diego
  • Max Welling, Research Chair in Machine Learning at the University of Amsterdam, VP technologies at Qualcomm and Senior Fellow at CIFAR
  • Ilya Sutskever, Co-founder and Chief Scientist at OpenAI

Dr. Hinton himself presented his Turing lecture, and remarked that “Turing is amazing because he spanned both connectionist and symbolic fields.”

The Symposium’s second day was capped off with an industry panel featuring a discussion of the benefits and challenges of productizing AI in Canada, and opportunities created for researchers in industry labs to test solutions at scale.

Panelists

  • Foteini Agrafioti, Chief Science Officer at RBC and Head of Borealis AI
  • Andrew Brown, Senior Director of Data Science and AI Research at CIBC 
  • Raquel Urtasun, Uber ATG Chief Scientist, Professor University of Toronto, and Co-Founder of the Vector Institute

Moderator

  • Graham Taylor, Associate Professor of Engineering at the University of Guelph, CIFAR Azrieli Global Scholar, and Academic Director of NextAI

Celebrating foundational discovery and the power of good collaborators

The symposium’s highlight was a candid discussion between Dr. Hinton and Eric Schmidt, Technical Advisor at Alphabet Inc. and former CEO and Executive Chairman at Google. At a reception sponsored by TD Layer 6, the pair discussed how they’ve seen the field evolve and what excites them about the future, such as neural networks with improved interpretability – especially in health care – as well as understanding the limitations of current techniques and pursuing capabilities such as self-supervised learning.

Both guests reminded the audience that Canada made its mark by supporting foundational research with long-term gains and abilities to tackle challenges such as rare diseases, crop science, financial fraud, and more.

Dr. Hinton credits the funding he received for full-time research as the reason he was able to establish a foothold and make strides in neural network research. He also spoke to the power of a strong research community, encouraging PhD candidates in the crowd to pursue their research alongside trustworthy colleagues who will be honest and push their work further by asking the right questions.

Leaders of the Vector Institute’s industry sponsors and special guests gathered after the fireside chat for a celebratory dinner. 

Left to right: Ed Clark, Chair of the Board, Vector Institute; Anthony Viel, Chief Executive Officer, Deloitte LLP; Jim Smith, President and Chief Executive Officer, Thomson Reuters; Garth Gibson, President and Chief Executive Officer, Vector Institute; Eric Schmidt, Technical Advisor, Alphabet Inc.; Elio Luongo, Chief Executive Officer and Senior Partner, KPMG LLP; Aaron Regent, Chairman of the Board, Scotiabank; Brian Levitt, Chairman of the Board, TD Bank Group; Raquel Urtasun, Uber ATG Chief Scientist, Professor, University of Toronto, and Co-Founder of the Vector Institute; Cameron Schuler, Chief Commercialization Officer and VP Industry Innovation, Vector Institute; Ilya Sutskever, Co-founder and Chief Scientist, OpenAI

University of Toronto faculty commitments honour Dr. Hinton’s work

Dr. Hinton’s advocacy for strong research communities was fitting, as the symposium was also the launchpad for an exciting announcement to expand Toronto’s deep learning community. In recognition of Dr. Hinton’s work, Vector Institute will work with the University of Toronto to recruit three new tenure-stream faculty positions in deep learning.

That news came on the heels of Vector appointing eight new faculty members and 29 faculty affiliates who also hold appointments at universities across Canada. These researchers now have access to a collaborative community based in Toronto’s MaRS Discovery District, along with computing resources to catalyze both foundational research and specific applications.

Altogether, these academic appointments will enhance Canada’s leadership in AI and support the growth of this dynamic field that Dr. Hinton has championed for decades.

 

Bengio, Hinton and LeCun accept the 2018 ACM A.M. Turing Award in San Francisco

The 2018 ACM A.M. Turing Award laureates, Geoffrey Hinton, Yoshua Bengio, and Yann LeCun, received the ACM A.M. Turing Award at the 2019 ACM awards banquet in San Francisco in June.

At the ACM FCRC conference in Phoenix, Hinton and LeCun delivered their Turing Lectures: “The Deep Learning Revolution” and “The Deep Learning Revolution: The Sequel”. | Video

The three laureates appeared on the cover of the June 2019 issue of Communications of the ACM. | Read here

This year, the Bank of England honoured Alan Turing for his pioneering work on computers and for his contributions during the Second World War, including the Turing “bombe”, one of the principal tools used to decrypt messages encoded with Enigma; his portrait will appear on the £50 note. During Pride Month this year, The New York Times also paid tribute to Alan Turing for the ideas that contributed to victory in the Second World War and for the hardships he endured because of his sexuality: Overlooked No More: Alan Turing, Condemned Code Breaker and Computer Visionary

This article was published in the AICan Bulletin. Subscribe to the bimonthly e-publication to stay up to date on the latest AI news in Canada.

Vector Researchers Prepare for 33rd Annual Conference on Neural Information Processing Systems (NeurIPS)

Vector researchers are preparing for the world’s premier machine learning conference, the 33rd annual conference on Neural Information Processing Systems (NeurIPS). A multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers, NeurIPS 2019 runs December 8-14 at the Vancouver Convention Center, Vancouver, BC.

This year, Vector researchers had an impressive 23 papers accepted to the conference. Additionally, they are organizing four workshops.

At the 2018 NeurIPS conference, Vector Faculty Members and students collaborated to win two of four Best Paper awards and a Best Student Paper Award for their research. Read more about Vector’s accomplishments at last year’s conference here.

 

Accepted Papers by Vector researchers:

Efficient Graph Generation with Graph Recurrent Attention Networks
Renjie Liao (University of Toronto) · Yujia Li (DeepMind) · Yang Song (Stanford University) · Shenlong Wang (University of Toronto) · Will Hamilton (McGill) · David Duvenaud (University of Toronto) · Raquel Urtasun (Uber ATG) · Richard Zemel (Vector Institute/University of Toronto)

 

Incremental Few-Shot Learning with Attention Attractor Networks
Mengye Ren (University of Toronto / Uber ATG) · Renjie Liao (University of Toronto) · Ethan Fetaya (University of Toronto) · Richard Zemel (Vector Institute/University of Toronto)

 

SMILe: Scalable Meta Inverse Reinforcement Learning through Context-Conditional Policies
Seyed Kamyar Seyed Ghasemipour (University of Toronto, Vector Institute) · Shixiang (Shane) Gu (Google Brain) · Richard Zemel (Vector Institute/University of Toronto)

 

Lookahead Optimizer: k steps forward, 1 step back
Michael Zhang (University of Toronto) · James Lucas (University of Toronto) · Jimmy Ba (University of Toronto / Vector Institute) · Geoffrey Hinton (Google)

Graph Normalizing Flows
Jenny Liu (Vector Institute, University of Toronto) · Aviral Kumar (UC Berkeley) · Jimmy Ba (University of Toronto / Vector Institute) · Jamie Kiros (Google Inc.) · Kevin Swersky (Google)

 

Latent Ordinary Differential Equations for Irregularly-Sampled Time Series
Yulia Rubanova (University of Toronto) · Tian Qi Chen (U of Toronto) · David Duvenaud (University of Toronto)

 

Residual Flows for Invertible Generative Modeling
Tian Qi Chen (U of Toronto) · Jens Behrmann (University of Bremen) · David Duvenaud (University of Toronto) · Joern-Henrik Jacobsen (Vector Institute)

 

Neural Networks with Cheap Differential Operators
Tian Qi Chen (U of Toronto) · David Duvenaud (University of Toronto)

 

Stochastic Runge-Kutta Accelerates Langevin Monte Carlo and Beyond
Xuechen Li (Google) · Yi Wu (University of Toronto & Vector Institute) · Lester Mackey (Microsoft Research) · Murat Erdogdu (University of Toronto)

Value Function in Frequency Domain and Characteristic Value Iteration
Amir-massoud Farahmand (Vector Institute)

 

Learning to Predict 3D Objects with an Interpolation-based Differentiable Renderer
Wenzheng Chen (University of Toronto) · Huan Ling (University of Toronto, NVIDIA) · Jun Gao (University of Toronto) · Edward Smith (McGill University) · Jaakko Lehtinen (NVIDIA Research; Aalto University) · Alec Jacobson (University of Toronto) · Sanja Fidler (University of Toronto)

 

Fast Convergence of Natural Gradient Descent for Over-Parameterized Neural Networks
Guodong Zhang (University of Toronto) · James Martens (DeepMind) · Roger Grosse (University of Toronto)

 

Which Algorithmic Choices Matter at Which Batch Sizes? Insights From a Noisy Quadratic Model
Guodong Zhang (University of Toronto) · Lala Li (Google) · Zachary Nado (Google Inc.) · James Martens (DeepMind) · Sushant Sachdeva (University of Toronto) · George Dahl (Google Brain) · Chris Shallue (Google Brain) · Roger Grosse (University of Toronto)

 

Understanding Posterior Collapse in Variational Autoencoders
James Lucas (University of Toronto) · George Tucker (Google Brain) · Roger Grosse (University of Toronto) · Mohammad Norouzi (Google Brain)

 

Preventing Gradient Attenuation in Lipschitz Constrained Convolutional Networks
Qiyang Li (University of Toronto) · Saminul Haque (University of Toronto) · Cem Anil (University of Toronto; Vector Institute) · James Lucas (University of Toronto) · Roger Grosse (University of Toronto) · Joern-Henrik Jacobsen (Vector Institute)

 

MixMatch: A Holistic Approach to Semi-Supervised Learning
David Berthelot (Google Brain) · Nicholas Carlini (Google) · Ian Goodfellow (Google Brain) · Nicolas Papernot (University of Toronto) · Avital Oliver (Google Brain) · Colin A Raffel (Google Brain)

 

Fast PAC-Bayes via Shifted Rademacher Complexity
Jun Yang (University of Toronto) · Shengyang Sun (University of Toronto) · Daniel Roy (Univ of Toronto & Vector)

 

Information-Theoretic Generalization Bounds for SGLD via Data-Dependent Estimates
Gintare Karolina Dziugaite (Element AI) · Mahdi Haghifam (University of Toronto) · Jeffrey Negrea (University of Toronto) · Ashish Khisti (University of Toronto) · Daniel Roy (Univ of Toronto & Vector)

 

Understanding attention in graph neural networks
Boris Knyazev (University of Guelph) · Graham W Taylor (University of Guelph) · Mohamed R. Amer (Robust.AI)

 

The Cells Out of Sample (COOS) dataset and benchmarks for measuring out-of-sample generalization of image classifiers
Alex Lu (University of Toronto) · Amy Lu (University of Toronto/Vector Institute) · Wiebke Schormann (Sunnybrook Research Institute) · David Andrews (Sunnybrook Research Institute) · Alan Moses (University of Toronto)

 

Learning Reward Machines for Partially Observable Reinforcement Learning
Rodrigo Toro Icarte (University of Toronto and Vector Institute) · Ethan Waldie (University of Toronto) · Toryn Klassen (University of Toronto) · Rick Valenzano (Element AI) · Margarita Castro (University of Toronto) · Sheila McIlraith (University of Toronto)

 

When does label smoothing help?
Rafael Müller (Google Brain) · Simon Kornblith (Google Brain) · Geoffrey E Hinton (Google & University of Toronto)

 

Stacked Capsule Autoencoders
Adam Kosiorek (University of Oxford) · Sara Sabour (Google) · Yee Whye Teh (University of Oxford, DeepMind) · Geoffrey E Hinton (Google & University of Toronto)

 

Vector Institute researchers are hosting four workshops:

 

Machine Learning and the Physical Sciences: Organized by Juan Felipe Carrasquilla (Canada CIFAR AI Chair and Faculty Member, Vector Institute, and Assistant Professor (Adjunct), Department of Physics and Astronomy, University of Waterloo) and collaborators, this workshop focuses on applying machine learning to outstanding physics problems. | Learn more

 

Fair ML in Healthcare: Organized by Shalmali Joshi, Post-doctoral Fellow, and Shems Saleh at the Vector Institute, along with collaborators, this workshop investigates issues around fairness in machine learning-based health care. | Learn more

 

Program Transformations for ML: Organized by David Duvenaud (Assistant Professor at the University of Toronto, Co-founder, Invenia, Canada Research Chair in Generative Models and Faculty Member, Vector Institute) and his collaborators, this workshop aims at viewing program transformations in ML in a unified light, making these capabilities more accessible, and building entirely new ones. | Learn more

 

Machine Learning with Guarantees: Organized by Daniel Roy (Assistant Professor at the University of Toronto, Faculty Member, Vector Institute and Canada CIFAR Artificial Intelligence Chair) and his collaborators, this workshop will bring together researchers to discuss the problem of obtaining performance guarantees and algorithms to optimize them. | Learn more

Learn more:

  • Check out a full list of Vector research publications here.