[D] Patterns of Self-Supervised Learning
I wrote up a short article introducing self-supervised learning and noting down recurring patterns I've observed across several self-supervised problem setups. Feedback welcome. In general, one has to be quite creative in setting up the right 'input' and 'output' pair for learning a representation of a particular object type.
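To make the 'input'/'output' framing concrete, here's a minimal sketch of one classic setup, rotation prediction (Gidaris et al., 2018): the input is a rotated image, the output is which rotation was applied, and the encoder learns a representation as a side effect of solving that task. This assumes PyTorch; the tiny `encoder` backbone and the random batch are placeholders, not code from the article.

```python
import torch
import torch.nn as nn

def make_rotation_batch(images):
    """Rotate each image by 0/90/180/270 degrees; the angle index is the label."""
    rotated, labels = [], []
    for k in range(4):
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)

encoder = nn.Sequential(  # stand-in for any real backbone (e.g. a ResNet)
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(32, 4)  # predicts which of the 4 rotations was applied

images = torch.randn(8, 3, 32, 32)  # fake batch in place of real data
x, y = make_rotation_batch(images)
loss = nn.functional.cross_entropy(head(encoder(x)), y)
loss.backward()  # optimizing the pretext task shapes encoder(x) into a representation
```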
What other common patterns have people observed?
How do we compare the representations learned from two different self-supervised setups for the same object type, e.g., rotation vs. patch-based pretext tasks for images, or a BERT-like masked loss vs. word vectors for text?
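One candidate comparison I've considered is linear CKA (Kornblith et al., 2019): extract features for the same inputs under each setup and measure the similarity of the two feature matrices (the other standard route is comparing linear-probe accuracy on a shared downstream task). A rough sketch, assuming features are already extracted into NumPy arrays; `feats_a`/`feats_b` are hypothetical stand-ins for, say, rotation-based vs. patch-based features:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between feature matrices X (n, d1) and Y (n, d2)."""
    X = X - X.mean(axis=0)  # center each feature across the n examples
    Y = Y - Y.mean(axis=0)
    # CKA = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return num / den

rng = np.random.default_rng(0)
feats_a = rng.normal(size=(500, 128))           # e.g. features from setup A
feats_b = feats_a @ rng.normal(size=(128, 64))  # correlated stand-in for setup B
print(linear_cka(feats_a, feats_b))  # value in [0, 1]; higher = more similar
```

CKA is attractive here because it is invariant to orthogonal transforms and isotropic scaling of the features, so it compares what the two setups encode rather than the arbitrary coordinates they encode it in.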
submitted by /u/ekshaks