[D] Make BERT model smaller
We used BERT for one of the tasks at our company. It worked incredibly well, and we want to try it on many other tasks.
The issue is that BERT is a huge model and requires a GPU for both training and inference. We want to find a way to use BERT without needing a GPU everywhere.
Is there a way to make BERT smaller, or to build some approximation model?
What we've thought of so far:

- For some task, train a model using BERT on a small amount of data (what we currently have). If the results are good, “tag” a lot of data with this model and then train another, much smaller, model on the large artificially tagged data (see the first sketch after this list).
- Use only part of the BERT layers, for example, take just the first 2-3 attention layers out of 12 and fine-tune them (see the second sketch after this list).
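For the first idea, here is a minimal sketch of the pseudo-labeling loop, assuming the Hugging Face `transformers` library and a BERT classifier you've already fine-tuned on your small labeled set. The checkpoint path, the `unlabeled_texts` pool, and the choice of TF-IDF + logistic regression as the cheap student are all placeholders, not a prescription:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical checkpoint: your BERT fine-tuned on the small labeled set.
model_name = "path/to/your-finetuned-bert"
tokenizer = AutoTokenizer.from_pretrained(model_name)
teacher = AutoModelForSequenceClassification.from_pretrained(model_name)
teacher.eval()

# Your large unlabeled pool (placeholder examples; in practice you'd
# batch these and run the teacher on GPU once, offline).
unlabeled_texts = ["example sentence 1", "example sentence 2"]

# 1. Tag the unlabeled data with the BERT teacher.
pseudo_labels = []
with torch.no_grad():
    for text in unlabeled_texts:
        inputs = tokenizer(text, return_tensors="pt", truncation=True)
        logits = teacher(**inputs).logits
        pseudo_labels.append(int(logits.argmax(dim=-1)))

# 2. Train a small, CPU-friendly student on the artificially tagged data.
#    (Needs at least two classes present in the pseudo-labels to fit.)
vectorizer = TfidfVectorizer(max_features=50_000)
X = vectorizer.fit_transform(unlabeled_texts)
student = LogisticRegression(max_iter=1000).fit(X, pseudo_labels)

# At inference time only the cheap student runs, so no GPU is required.
```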
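For the second idea, a sketch of truncating the encoder stack, again assuming Hugging Face `transformers`; `bert-base-uncased`, `num_labels=2`, and keeping 3 layers are illustrative choices:

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# bert-base has 12 encoder layers; keep only the first 3.
# Slicing the ModuleList drops the rest, and the config is updated to match.
keep = 3
model.bert.encoder.layer = model.bert.encoder.layer[:keep]
model.config.num_hidden_layers = keep

# The truncated model now runs a quarter of the encoder layers;
# fine-tune it on your task data the same way as the full model.
```

The lower layers of BERT tend to capture more generic, surface-level features, so expect some accuracy drop on tasks that rely on deeper semantics; it's worth measuring the truncated model against the full one before committing.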
Thanks.