Hi,
I was trying to use a 2D relative position encoding in my transformer network and couldn't find one in PyTorch,
so I ported the tensor2tensor implementation to PyTorch and added 1D and 3D support as well.
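For context, here is a minimal (and deliberately naive) 1D sketch of what relative position logits add on top of plain dot-product attention. The function name and shapes are my own illustration, not the repo's API, and the tensor2tensor-style implementation uses a more memory-efficient skewing trick instead of materializing an (L, L, dim) table:

```python
import torch

def relative_attention_1d(q, k, v, rel_emb):
    # q, k, v: (batch, heads, length, dim)
    # rel_emb: (2 * length - 1, dim), one embedding per relative distance
    b, h, L, d = q.shape
    idx = torch.arange(L, device=q.device)
    # relative distance (j - i), shifted into the index range [0, 2L - 2]
    rel_pos = idx[None, :] - idx[:, None] + (L - 1)           # (L, L)
    r = rel_emb[rel_pos]                                      # (L, L, dim)
    content_logits = torch.einsum('bhid,bhjd->bhij', q, k)    # content-content
    rel_logits = torch.einsum('bhid,ijd->bhij', q, r)         # content-position
    weights = torch.softmax((content_logits + rel_logits) / d ** 0.5, dim=-1)
    return torch.einsum('bhij,bhjd->bhid', weights, v)
```

Roughly speaking, the 2D and 3D cases keep one such embedding table per spatial axis and sum the resulting logits.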
Also, because attention is used so heavily in the field, I decided to implement the same function in CUDA.
It is not a general-purpose CUDA kernel and only really shines in my setting (large batch size with relatively small patch size), but it might be worth checking the performance on your own settings (I'm getting a 2.5x speedup on my forward pass and 1.5x on my backward pass).
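If you want to compare the CUDA kernel against a pure-PyTorch version under your own batch and patch sizes, a generic timing helper like this (my own sketch, not part of the repo) is usually enough:

```python
import torch

def time_forward_backward(fn, *args, warmup=10, iters=100):
    # Generic CUDA timing helper (my own sketch, not from the repo).
    # `fn` is any callable returning a tensor; inputs you want gradients for
    # need requires_grad=True so the backward pass gets timed as well.
    for _ in range(warmup):
        fn(*args).sum().backward()
    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        fn(*args).sum().backward()
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters  # milliseconds per iteration
```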
One last thing: it also supports the (b) and (d) attention terms from the Transformer-XL paper.
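For reference, Transformer-XL splits the attention score into four terms, (a) through (d), where (b) and (d) are the ones involving the relative position embedding. A rough sketch of that decomposition (names and shapes are mine; the repo may organize it differently):

```python
import torch

def transformer_xl_scores(q, k, r, u, v):
    # q, k: (batch, heads, length, dim)
    # r:    (length, length, dim)  projected relative embeddings R_{i-j}
    # u, v: (heads, dim)           learned global content / position biases
    ac = torch.einsum('bhid,bhjd->bhij', q + u[None, :, None, :], k)  # terms (a) + (c)
    b_term = torch.einsum('bhid,ijd->bhij', q, r)   # term (b): content-position
    d_term = torch.einsum('hd,ijd->hij', v, r)      # term (d): global position bias
    return ac + b_term + d_term
```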
Here is the repo: https://github.com/Separius/CudaRelativeAttention
submitted by /u/Separius12