[D] Transformer number of token performance limits

Hi,

I am currently working on a research project that involves using a transformer-like model for an NLP task, specifically summarizing long documents.

I was wondering if any of you know of a paper that explores the limits of the transformer when using very long sequences.

Is there any issue with long sequences?

Is there a sequence length beyond which this kind of model's performance starts to degrade?
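
To make the concern concrete, here is a rough back-of-the-envelope sketch (plain Python; the fp32 scores and 8-head count are just assumptions for illustration) of how the self-attention score matrix in a vanilla transformer grows with sequence length:

```python
# Rough illustration: a vanilla transformer materializes a
# (seq_len x seq_len) attention score matrix per head per layer,
# so memory grows quadratically with sequence length.
bytes_per_float = 4   # assuming fp32 scores
num_heads = 8         # assumed head count, just for illustration

for seq_len in (512, 2048, 8192, 32768):
    scores_bytes = num_heads * seq_len * seq_len * bytes_per_float
    print(f"seq_len={seq_len:6d} -> attention scores ~ {scores_bytes / 1e9:6.2f} GB per layer")
```

So even before any accuracy question, memory alone looks like it becomes an issue well before book-length inputs, which is why I'm asking about the practical limits.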

Thanks a lot in advance!

submitted by /u/fdelrio89