[D] Intuition behind positional encodings in Transformers.
Hi, I was wondering why the positional encoding in the Transformer works the way it does. Specifically, why is the positional encoding added to the word embedding rather than multiplied or combined with some other operation? Is there an explanation for why addition works, and why some other method isn't used?
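For concreteness, here's a minimal sketch of the scheme I'm asking about: the sinusoidal encoding from "Attention Is All You Need" (Vaswani et al., 2017), which is element-wise added to the token embeddings before the first layer. The NumPy code is my own reconstruction (function name and shapes are just for illustration, not from any particular codebase):

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Sinusoidal encoding from Vaswani et al. (2017):
    PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
    PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))
    """
    positions = np.arange(seq_len)[:, None]      # shape (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]     # even dimension indices 2i
    angles = positions / np.power(10000.0, dims / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                 # even dims get sine
    pe[:, 1::2] = np.cos(angles)                 # odd dims get cosine
    return pe

# The part my question is about: the encoding is ADDED to the embeddings,
# not multiplied or concatenated.
seq_len, d_model = 10, 16
embeddings = np.random.randn(seq_len, d_model)   # stand-in for learned token embeddings
x = embeddings + sinusoidal_positional_encoding(seq_len, d_model)
```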
submitted by /u/bytestorm95