[P] Relative Attention Positioning library in PyTorch
I was trying to use a 2D relative position encoding in my transformer network and couldn't find one in PyTorch,
so I decided to port tensor2tensor's implementation to PyTorch and added 1D and 3D support as well.
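For anyone unfamiliar with the idea, here is a minimal sketch of the 1D case in plain PyTorch (in the spirit of the tensor2tensor implementation, not the library's actual API; the function name and embedding layout are my own illustration). Each relative offset `j - i` gets a learned embedding, and the query tensor is scored against all offsets before gathering the relevant ones:

```python
import torch

def relative_logits_1d(q, rel_emb):
    """Relative position logits for 1D attention (illustrative sketch).

    q:       (batch, heads, length, dim) projected queries
    rel_emb: (2*length - 1, dim) learned embeddings, one row per
             relative offset in [-(length-1), length-1]
    returns: (batch, heads, length, length) position logits
    """
    batch, heads, length, dim = q.shape
    # Score each query against every relative offset:
    # (batch, heads, length, 2*length - 1)
    rel_logits = torch.einsum('bhld,md->bhlm', q, rel_emb)
    # Position i attending to position j uses offset (j - i),
    # stored at row (j - i + length - 1) of rel_emb.
    idx = torch.arange(length)
    rel_idx = idx[None, :] - idx[:, None] + length - 1   # (length, length)
    rel_idx = rel_idx.expand(batch, heads, length, length)
    return torch.gather(rel_logits, 3, rel_idx)
```

These logits are simply added to the usual content logits (`torch.einsum('bhid,bhjd->bhij', q, k)`) before the softmax. The 2D and 3D cases decompose into one such term per axis.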
Also, because attention is so heavily used in the field, I implemented the same function in CUDA.
It is not a general-purpose CUDA kernel and only performs well in my setting (large batch size with relatively small patch size), but it might be worth checking the performance on your own setup (I'm getting a 2.5x speedup on the forward pass and 1.5x on the backward pass).
One last thing: it also supports the B and D attention terms from the Transformer-XL paper.
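For reference, the B term in Transformer-XL is the content-to-position interaction and the D term is the global position bias. A direct, unoptimized sketch (my own illustration, assuming `q` is already projected by W_q, `r` by W_{k,R}, and `v_bias` is the learned bias v from the paper; real implementations use the "relative shift" trick instead of an explicit gather):

```python
import torch

def xl_bd_terms(q, r, v_bias):
    """B and D attention terms from Transformer-XL (illustrative sketch).

    q:      (batch, heads, qlen, dim) projected content queries
    r:      (2*qlen - 1, dim) projected relative position encodings,
            where r[k] corresponds to relative distance (k - qlen + 1)
    v_bias: (heads, dim) learned global position bias v
    returns: (batch, heads, qlen, qlen) sum of the B and D terms
    """
    b, h, qlen, d = q.shape
    idx = torch.arange(qlen)
    # rel[i, j] = row of r holding relative distance (i - j)
    rel = idx[:, None] - idx[None, :] + qlen - 1      # (qlen, qlen)
    r_ij = r[rel]                                      # (qlen, qlen, dim)
    B = torch.einsum('bhid,ijd->bhij', q, r_ij)        # content-to-position
    D = torch.einsum('hd,ijd->hij', v_bias, r_ij)      # global position bias
    return B + D[None]
```

In the full attention score these are added to the A (content-to-content) and C (global content bias) terms before scaling and softmax.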