Most popular neural network language models are trained with a softmax + cross-entropy loss, which rests on the assumption that only the target token is true and every other token is false. But isn't language modeling arguably a multilabel classification task, since several different next tokens can be plausible? Why isn't sigmoid + BCE (binary cross-entropy) used more often?
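To make the contrast concrete, here is a minimal stdlib-only sketch of the two losses the question compares. Softmax + cross-entropy normalizes scores into a single distribution and penalizes only the probability of the one target token; sigmoid + BCE treats each vocabulary entry as an independent yes/no decision, so a multilabel target (more than one "acceptable" next token) is expressible. The toy logits and vocabulary size are made up for illustration.

```python
import math

def softmax_ce(logits, target):
    # Softmax + cross-entropy: scores compete in one distribution,
    # and exactly one index (target) is treated as the true label.
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return -math.log(exps[target] / z)

def sigmoid_bce(logits, targets):
    # Sigmoid + binary cross-entropy: each vocabulary entry is an
    # independent binary decision, so targets may contain several 1s.
    loss = 0.0
    for x, t in zip(logits, targets):
        p = 1.0 / (1.0 + math.exp(-x))
        loss += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return loss / len(logits)             # mean over vocabulary entries

logits = [2.0, 0.5, -1.0, 0.1]            # toy scores over a 4-token vocabulary
print(softmax_ce(logits, 0))              # single "true" next token
print(sigmoid_bce(logits, [1, 0, 0, 0]))  # one-hot target, BCE style
print(sigmoid_bce(logits, [1, 1, 0, 0]))  # multilabel target: two tokens accepted
```

Note the practical difference this exposes: with softmax the probabilities are forced to sum to 1 across the vocabulary, while with sigmoid each token's probability is independent, which is what makes the multilabel framing possible in the first place.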
submitted by /u/DeMorrr