Basically, so far I have been trying to train BERT on very long documents by cutting out the start, middle, and end sections of each article so it fits into the 512-token input limit. However, the performance has been dismal most of the time. I'm not even sure this is better than my earlier LSTM+GRU approach. Are there other ways to train it besides just cutting up the article? When I googled for an alternative approach, I couldn’t find much…
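One common alternative to head/middle/tail truncation is to split the document into overlapping 512-token windows, run BERT on each window, and pool the per-chunk outputs. Below is a minimal sketch of that idea, assuming the Hugging Face `transformers` library; the checkpoint name, stride, and mean pooling are illustrative assumptions, not a description of the original poster's setup.

    # Sliding-window sketch: chunk a long document instead of slicing start/middle/end.
    # Assumes Hugging Face `transformers`; model name and pooling are illustrative.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    MODEL_NAME = "bert-base-uncased"  # assumption: any BERT-style checkpoint
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

    def classify_long_document(text, max_length=512, stride=128):
        # Tokenize into overlapping windows of at most `max_length` tokens,
        # with `stride` tokens of overlap between consecutive windows.
        enc = tokenizer(
            text,
            max_length=max_length,
            stride=stride,
            truncation=True,
            padding="max_length",
            return_overflowing_tokens=True,
            return_tensors="pt",
        )
        with torch.no_grad():
            out = model(input_ids=enc["input_ids"],
                        attention_mask=enc["attention_mask"])
        # Pool per-chunk logits into one document-level prediction
        # (mean pooling here; max pooling is another common choice).
        return out.logits.mean(dim=0)

    # Usage: logits = classify_long_document(very_long_article)

During fine-tuning the same chunking can be applied to the training examples, with the document label assigned to every chunk; long-context variants such as Longformer take a different route by extending the attention window itself.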
submitted by /u/shstan