I don't get why we use masking before the attention calculation. I understand the idea that we feed the decoder one word at a time, but I don't understand why, in the implementation, masking is used in the encoder, and why there is a mask in the part of the decoder where the inputs from the encoder are passed.
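For anyone framing an answer, here is a minimal PyTorch sketch of the two mask types that usually cause this confusion. The function names (`padding_mask`, `look_ahead_mask`, `masked_attention`) are illustrative, not from any particular codebase; the sketch assumes the standard Transformer convention where the encoder and the decoder's cross-attention use a padding mask to ignore PAD tokens, while only the decoder's self-attention uses the causal (look-ahead) mask:

```python
import torch

def padding_mask(token_ids, pad_id=0):
    # True where the token is PAD, so attention ignores those positions.
    # Used in the encoder and in the decoder's cross-attention over
    # encoder outputs. Shape (batch, 1, 1, seq_len) broadcasts over
    # heads and query positions.
    return (token_ids == pad_id).unsqueeze(1).unsqueeze(2)

def look_ahead_mask(seq_len):
    # True above the diagonal: position i may not attend to positions > i.
    # Used only in the decoder's self-attention sub-layer.
    return torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)

def masked_attention(q, k, v, mask=None):
    # Scaled dot-product attention; masked positions are set to -inf
    # before the softmax, so their attention weights become zero.
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    if mask is not None:
        scores = scores.masked_fill(mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v
```

In the decoder's self-attention the two masks are typically combined with a logical OR, so a position is hidden if it is either padding or in the future; the encoder mask exists only to stop attention from attending to padding, not to hide future tokens.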
submitted by /u/bikanation