[D] Linear Networks For Classification

We are glad to present and discuss our paper, Linear Distillation Learning (LDL). It is a simple remedy to improve the performance of linear networks through distillation.

In deep learning models, distillation often allows a smaller or shallower network to mimic a larger model far more accurately than a network of the same size trained on one-hot targets, which cannot achieve results comparable to the cumbersome model. Our neural networks without activation functions achieved high classification scores from small amounts of data on the MNIST and Omniglot datasets.
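As a rough illustration of the distillation setup described above (not the paper's actual code), here is a minimal sketch of a linear, activation-free student trained to regress a teacher's outputs instead of one-hot labels. All shapes, names, and hyperparameters are illustrative assumptions.

```python
# Hedged sketch: generic output distillation with linear (activation-free) models.
# Shapes, optimizer, and loss are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

teacher = nn.Linear(784, 10)   # pretrained teacher, assumed frozen
student = nn.Linear(784, 10)   # linear student, no activation functions

opt = torch.optim.SGD(student.parameters(), lr=1e-2)
mse = nn.MSELoss()

def distill_step(x):
    """One distillation step: the student regresses the teacher's outputs."""
    with torch.no_grad():
        target = teacher(x)        # teacher outputs used instead of one-hot labels
    loss = mse(student(x), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Example: one step on a random batch of flattened 28x28 images
x = torch.randn(32, 784)
print(distill_step(x))
```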

The approach is based on using a separate linear function for each class in the dataset, trained to simulate the output of a teacher linear network on that class's data. Once the models are trained, classification is performed by novelty detection across the classes. Our framework distills randomized prior functions for the data; since the prior functions are linear, coupling them with bootstrap methods yields a Bayesian posterior. A small sketch of this per-class scheme follows.
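The sketch below shows one way the per-class idea could look in code: one linear student per class is fit to mimic a fixed linear teacher (e.g. a random prior function) on that class's examples only, and a test point is assigned to the class whose student reproduces the teacher most closely. Everything here, including the teacher choice, dimensions, and loss, is an assumption for illustration, not the authors' implementation.

```python
# Hedged sketch of per-class linear distillation with novelty-detection classification.
# All names, shapes, and training details are illustrative assumptions.
import torch
import torch.nn as nn

D, H, C = 784, 64, 10                      # input dim, teacher output dim, number of classes
teacher = nn.Linear(D, H)                  # fixed linear teacher (e.g. a random prior function)
students = [nn.Linear(D, H) for _ in range(C)]

def train_class_student(c, x_c, epochs=100, lr=1e-2):
    """Fit student c on examples x_c of class c to regress the teacher's outputs."""
    opt = torch.optim.SGD(students[c].parameters(), lr=lr)
    with torch.no_grad():
        target = teacher(x_c)
    for _ in range(epochs):
        loss = ((students[c](x_c) - target) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

def predict(x):
    """Classify by novelty detection: pick the class with the smallest teacher/student gap."""
    with torch.no_grad():
        t = teacher(x)                                        # (N, H)
        errs = torch.stack([((s(x) - t) ** 2).mean(dim=1)     # per-class error, (N,)
                            for s in students], dim=1)        # (N, C)
    return errs.argmin(dim=1)
```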

Highlights: https://medium.com/@asadulaevarip/linear-distillation-learning-da76a2f3a933

arXiv: https://arxiv.org/abs/1906.05431

Twitter: https://twitter.com/postmachines/status/1146848387658108928

submitted by /u/postmachines