
[P] Nearing BERT’s accuracy on Sentiment Analysis with a model 56 times smaller by Knowledge Distillation

Hello everyone,

I recently trained a tiny bidirectional LSTM to achieve high accuracy on Stanford's SST-2 using knowledge distillation and data augmentation. The accuracy is comparable to that of a fine-tuned BERT, but the model is small enough to run at hundreds of iterations per second on a single laptop CPU core. I believe this approach could be very useful, since most user devices in the world are low-power.
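
To make the setup concrete, here is a minimal PyTorch sketch of the general idea: a small BiLSTM student trained against a mix of the teacher's soft targets and the ground-truth labels. This is illustrative only, not the exact recipe from the article; all layer sizes, the temperature, and the loss weighting are assumptions, and the teacher logits would come from a fine-tuned BERT run over the (augmented) training set.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical tiny BiLSTM student; sizes are illustrative, not the
# configuration used in the post or the linked article.
class BiLSTMStudent(nn.Module):
    def __init__(self, vocab_size=30522, emb_dim=128, hidden=256, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, token_ids):
        x = self.embed(token_ids)
        _, (h, _) = self.lstm(x)             # final hidden state per direction
        h = torch.cat([h[0], h[1]], dim=-1)  # concatenate both directions
        return self.fc(h)                    # raw logits

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend a soft-target loss on the teacher's logits with the usual
    hard-label cross-entropy; T and alpha are tunable assumptions."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```

In training, the teacher's logits can be precomputed once over the original and augmented sentences, so the expensive BERT forward passes happen only at dataset-preparation time and the student trains quickly on CPU or a single GPU.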

I believe this also gives some insight into the success of Hugging Face's DistilBERT: its success seems to stem not only from knowledge distillation but also from the Transformer architecture itself and the clever way its weights are initialized from the teacher.
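
For reference, the initialization idea is to seed a shallower student with a subset of the teacher's layers rather than starting from random weights. The sketch below shows that idea with the Hugging Face Transformers API; it is not DistilBERT's actual training code, just a rough illustration where the 6-layer student copies every other encoder layer from a 12-layer BERT teacher.

```python
from transformers import BertConfig, BertModel

# Load a 12-layer teacher and build a randomly initialized 6-layer student
# with the same hidden size (layer counts here are assumptions for the sketch).
teacher = BertModel.from_pretrained("bert-base-uncased")
student_config = BertConfig.from_pretrained("bert-base-uncased", num_hidden_layers=6)
student = BertModel(student_config)

# Copy the embeddings and every other transformer layer from the teacher,
# so the student starts close to the teacher instead of from scratch.
student.embeddings.load_state_dict(teacher.embeddings.state_dict())
for student_idx, teacher_idx in enumerate(range(0, 12, 2)):
    student.encoder.layer[student_idx].load_state_dict(
        teacher.encoder.layer[teacher_idx].state_dict()
    )
```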

If you have any questions or insights, please share 🙂

For more details, please take a look at the article:

https://blog.floydhub.com/knowledge-distillation/

submitted by /u/alexamadoriml