
[D] Distilling BERT — How to achieve BERT performance using Logistic Regression

A few days ago, in this post, I asked about a way to make BERT smaller. I got some interesting results and found some relevant papers. The basic idea is, given a relatively small labelled dataset and another much bigger unlabelled set:

  1. Train BERT on the labelled set
  2. Predict labels for the unlabelled set
  3. Train a much smaller model on the now-labelled big set
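A minimal sketch of those three steps, assuming Hugging Face `transformers` for the teacher and scikit-learn for the student. The toy datasets, the TF-IDF features, and the ready-made sentiment checkpoint standing in for a fine-tuned BERT are all illustrative assumptions, not details from my post or the linked article:

```python
# Sketch of the distillation recipe. Dataset variables and the
# checkpoint name are placeholders (assumptions, not the original setup).
from transformers import pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for the real data.
labeled_texts = ["great movie", "terrible plot"]
labels = [1, 0]
unlabeled_texts = ["loved every minute", "waste of time", "pretty good"]

# Step 1 (elided): fine-tune BERT on (labeled_texts, labels) with any
# standard fine-tuning loop. Here an off-the-shelf fine-tuned sentiment
# checkpoint stands in for that teacher.
teacher = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# Step 2: let the teacher pseudo-label the big unlabelled set.
pseudo_labels = [
    1 if pred["label"] == "POSITIVE" else 0
    for pred in teacher(unlabeled_texts)
]

# Step 3: train the small student (logistic regression over TF-IDF
# features) on the now-labelled big set.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(unlabeled_texts)
student = LogisticRegression(max_iter=1000).fit(X, pseudo_labels)

print(student.predict(vectorizer.transform(["really enjoyable"])))
```

In practice the unlabelled set would be far larger than the labelled one, which is what gives the simple student enough signal to approach the teacher's accuracy.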

I tried it with Logistic Regression and got some interesting results, described here:

https://towardsdatascience.com/distilling-bert-how-to-achieve-bert-performance-using-logistic-regression-69a7fc14249d?source=friends_link&sk=2e62337d0b44f56409640c27277b99ce

submitted by /u/sudo_su_