[D] Which embedding is best for seq2seq?
Are there embeddings suited to seq2seq tasks? I have a very large vocabulary, and I want to feed fixed-length dense vectors, rather than one-hot vectors, to the input of the neural network (and use them for the output as well).
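For context, the standard approach is a learned embedding layer on the input side and a linear projection back to vocabulary logits on the output side. Here is a minimal PyTorch sketch; the vocabulary size and embedding dimension are placeholder values, and the weight tying between input embedding and output projection is one common option, not a requirement:

```python
import torch
import torch.nn as nn

# Placeholder sizes for illustration
vocab_size = 10000
embed_dim = 128

# The embedding layer maps each integer token id to a dense
# fixed-length vector, avoiding huge one-hot input vectors.
embedding = nn.Embedding(vocab_size, embed_dim)

# On the output side, a linear layer maps hidden states back to
# vocabulary logits; tying its weights to the embedding matrix
# reuses parameters (optional, but common for large vocabularies).
output_proj = nn.Linear(embed_dim, vocab_size, bias=False)
output_proj.weight = embedding.weight  # weight tying

token_ids = torch.tensor([[1, 5, 42]])  # shape (batch=1, seq_len=3)
dense = embedding(token_ids)            # shape (1, 3, 128)
logits = output_proj(dense)             # shape (1, 3, 10000)
print(dense.shape, logits.shape)
```

The embedding weights are trained jointly with the rest of the seq2seq model, so no separate pretrained embedding is strictly needed, though initializing from pretrained vectors (e.g. word2vec or GloVe) is also possible.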
submitted by /u/omgDoYouKnow