[D] Intuition behind positional encodings in Transformers.

Hi, I was wondering why the positional encoding in Transformers works the way it does. For example, why is the positional encoding added to the word embedding rather than multiplied or combined with some other operation? Is there an explanation for why this works, and why other methods aren't used instead?
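
For context, here is a minimal NumPy sketch of the sinusoidal positional encoding from "Attention Is All You Need", which is the scheme the question refers to; the addition at the end is exactly the operation being asked about. Variable names and the random stand-in embeddings are illustrative, not from the original post.

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Sinusoidal positional encodings as defined in 'Attention Is All You Need':
    PE(pos, 2i)   = sin(pos / 10000^(2i / d_model))
    PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))
    """
    positions = np.arange(seq_len)[:, None]   # shape (seq_len, 1)
    dims = np.arange(d_model)[None, :]        # shape (1, d_model)
    # Each pair of dimensions (2i, 2i+1) shares one frequency.
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates          # shape (seq_len, d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])     # even dimensions: sine
    pe[:, 1::2] = np.cos(angles[:, 1::2])     # odd dimensions: cosine
    return pe

# In the original paper the encoding is *added* to the token embeddings:
seq_len, d_model = 10, 16
embeddings = np.random.randn(seq_len, d_model)  # stand-in for learned word embeddings
x = embeddings + sinusoidal_positional_encoding(seq_len, d_model)
```

One commonly cited intuition for addition: because the embedding dimension is high, the token-content and position signals can occupy largely non-interfering subspaces, so the model can learn to separate them downstream, whereas elementwise multiplication would rescale (and can zero out) the content signal wherever the encoding is small.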

submitted by /u/bytestorm95