[D] Positional Encoding in Transformer
I was reading the Transformer paper (https://arxiv.org/abs/1706.03762). This architecture adds positional encodings because the attention layers are otherwise order-invariant and would ignore token positions.
I don't understand two things:
- Why use sin & cos as the positional encodings? Why not some other function? (See the first sketch below.)
- They also talk about training these positional embeddings. How do you go about training such embeddings? As in, how do you let the model know that these embeddings stand for positions? (See the second sketch below.)
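For reference, here is a minimal NumPy sketch of the fixed sinusoidal encoding from Section 3.5 of the paper, which computes PE(pos, 2i) = sin(pos / 10000^(2i/d_model)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model)). The function name is mine, not from the paper:

```python
import numpy as np

def sinusoidal_positional_encoding(max_len, d_model):
    """Sinusoidal positional encoding from "Attention Is All You Need" (Sec. 3.5).
    Assumes d_model is even. Returns an array of shape (max_len, d_model)."""
    positions = np.arange(max_len)[:, None]          # (max_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]         # (1, d_model / 2)
    # Wavelengths form a geometric progression from 2*pi to 10000*2*pi.
    angles = positions / np.power(10000.0, dims / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)                     # even dimensions get sin
    pe[:, 1::2] = np.cos(angles)                     # odd dimensions get cos
    return pe

pe = sinusoidal_positional_encoding(max_len=50, d_model=512)
print(pe.shape)  # (50, 512)
```

One property the authors cite as motivation: for any fixed offset k, PE(pos + k) is a linear function of PE(pos), which they hypothesize makes it easy for the model to attend by relative position.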
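On the second question: as I understand it, the model is never explicitly told that these vectors "are positions." You just allocate a lookup table indexed by position, add row t to the token embedding at position t, and let backprop update the table like any other weight matrix. A minimal PyTorch sketch of that idea (class and variable names are mine, not from the paper):

```python
import torch
import torch.nn as nn

class LearnedPositionalEmbedding(nn.Module):
    def __init__(self, vocab_size, max_len, d_model):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)  # token lookup table
        self.pos = nn.Embedding(max_len, d_model)     # position lookup table

    def forward(self, token_ids):                     # (batch, seq_len)
        seq_len = token_ids.size(1)
        positions = torch.arange(seq_len, device=token_ids.device)
        # Row t of self.pos is added to every token at position t; since
        # position t always fetches the same row, gradients flowing into
        # that row teach it whatever is useful about "being at position t".
        return self.tok(token_ids) + self.pos(positions)

emb = LearnedPositionalEmbedding(vocab_size=30000, max_len=512, d_model=512)
x = emb(torch.randint(0, 30000, (2, 16)))
print(x.shape)  # (2, 16, 512)
```

The paper reports that this learned variant and the sinusoidal one gave nearly identical results; they chose sinusoids partly because they may extrapolate to sequence lengths longer than those seen during training.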