[D] Current issues with transfer learning in NLP
We have all been impressed by the strong results achieved by BERT and its successors. But these models are far from perfect, and their results come at a cost. I have summarized much of my reading on the current issues with these huge pre-trained models in this blog post: https://mohammadkhalifa.github.io/2019/09/06/Issues-With-Transfer-Learning-in-NLP/.
I have mainly discussed the following 6 issues:
- Computational intensity
- Difficulty of reproducibility
- Leaderboard madness
- Dissimilarity to how humans learn a language
- Shallow language understanding
- High carbon footprint
Any feedback on the content or the writing would be really appreciated.