[D] How to use BERT as a replacement for the embedding layer in my PyTorch model?
As far as I understand, BERT can work as a kind of embedding, but a context-sensitive one.
Can I use pretrained BERT like a pretrained embedding in my model?
If I can, what's the simplest way to do so?
In general, I want something like a context-sensitive replacement for the default char/word-level embeddings in my models.
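For reference, a minimal sketch of what I have in mind, assuming the Hugging Face `transformers` library (the `BertEmbedder` and `MyModel` names are just placeholders, not an established recipe): a frozen pretrained BERT produces contextual vectors that feed the downstream layers in place of an `nn.Embedding` lookup.

```python
# Minimal sketch: frozen pretrained BERT as a context-sensitive
# "embedding layer" in front of a custom PyTorch model.
# Assumes the Hugging Face `transformers` library is installed.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertEmbedder(nn.Module):
    """Hypothetical wrapper that swaps in for an embedding layer."""
    def __init__(self, model_name="bert-base-uncased", freeze=True):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        if freeze:
            # Treat BERT as a fixed feature extractor, analogous to a
            # pretrained embedding table that is not fine-tuned.
            for p in self.bert.parameters():
                p.requires_grad = False

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        # One contextual vector per token: (batch, seq_len, 768 for bert-base)
        return out.last_hidden_state

class MyModel(nn.Module):
    """Hypothetical downstream model consuming contextual vectors."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.embedder = BertEmbedder()
        self.head = nn.Linear(768, num_classes)

    def forward(self, input_ids, attention_mask):
        hidden = self.embedder(input_ids, attention_mask)
        return self.head(hidden[:, 0])  # predict from the [CLS] position

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["an example sentence"], return_tensors="pt", padding=True)
model = MyModel()
logits = model(batch["input_ids"], batch["attention_mask"])
```

One caveat I'm aware of: BERT tokenizes into WordPiece subwords, so the sequence length and alignment won't match a plain char- or word-level input one-to-one.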
submitted by /u/hadaev