I wrote up a short article introducing self-supervised learning and noting down common recurring patterns I’ve observed across several self-supervised problem setups. Feedback welcome. In general, one has to be quite creative in setting up the right ‘input’ and ‘output’ for learning a particular object’s representation (see the rotation-prediction sketch below).
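To make the pattern concrete, here is a minimal sketch of one classic setup, rotation prediction (Gidaris et al., 2018), assuming PyTorch; the encoder and head below are illustrative stand-ins, not code from the article. The ‘input’ is an unlabeled image rotated by a multiple of 90 degrees, and the ‘output’ is which rotation was applied, so the labels come for free:

```python
import torch
import torch.nn as nn

def make_rotation_batch(images):
    """Build (rotated_images, rotation_labels) from an unlabeled batch.

    The pretext 'input' is the rotated image; the pretext 'output' is the
    index of the rotation {0, 90, 180, 270} degrees -- no human labels needed.
    """
    rotated, labels = [], []
    for k in range(4):  # k quarter-turns
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)

# Illustrative stand-ins for a real backbone and classification head.
encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(16, 4)  # predicts one of the 4 rotations

images = torch.randn(8, 3, 32, 32)  # pretend unlabeled batch
x, y = make_rotation_batch(images)
loss = nn.functional.cross_entropy(head(encoder(x)), y)
loss.backward()  # the useful representation is learned as a side effect
```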
Are there other common patterns that others have observed?
How do we compare the representations learned from two different self-supervised setups for the same object type, e.g., rotation-based vs. patch-based pretext tasks for images, or a BERT-style masked-token loss vs. static word vectors for text?
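One common empirical answer is linear probing: freeze each pretrained encoder, train the same linear classifier on its features for a shared labeled downstream task, and compare test accuracy; linear CKA (Kornblith et al., 2019) is an alternative that compares representations directly without labels. Below is a minimal sketch assuming scikit-learn, with hypothetical random projections standing in for the two frozen encoders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def probe_accuracy(embed, X_tr, y_tr, X_te, y_te):
    """Fit a linear classifier on frozen features; report test accuracy."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(embed(X_tr), y_tr)
    return accuracy_score(y_te, clf.predict(embed(X_te)))

rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(200, 64)), rng.integers(0, 2, 200)
X_te, y_te = rng.normal(size=(50, 64)), rng.integers(0, 2, 50)

# Hypothetical stand-ins for the two frozen encoders; in practice these
# would be forward passes through the rotation- and patch-pretrained nets.
W_rot, W_patch = rng.normal(size=(64, 32)), rng.normal(size=(64, 32))
acc_rot = probe_accuracy(lambda X: X @ W_rot, X_tr, y_tr, X_te, y_te)
acc_patch = probe_accuracy(lambda X: X @ W_patch, X_tr, y_tr, X_te, y_te)
print(f"rotation probe: {acc_rot:.3f}, patch probe: {acc_patch:.3f}")
```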
submitted by /u/ekshaks