[D] What is the difference between few-, one- and zero-shot learning?
At first, I thought that:
– few-shot learning is when only a few training examples are available for each label;
– one-shot learning is when there may be only one training example per label;
– zero-shot learning is when some labels are not present in the training set at all.
But, for example, in Siamese Neural Networks for One-shot Image Recognition the training process uses many examples per class (to learn a verification/similarity function), which sounds like few-shot learning to me, while at test time the model classifies among classes that were never seen during training, which sounds like zero-shot learning.
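As I understand it, the "one-shot" part refers only to the test-time protocol: the model gets exactly one labeled support example per novel class and classifies a query by similarity in a learned embedding space. Here is a minimal sketch of that protocol, where `embed` is a stand-in (a fixed random linear projection, not a trained Siamese network) and the class names and vectors are made up for illustration:

```python
import numpy as np

def embed(x):
    # Stand-in for a trained Siamese embedding network; a fixed
    # random projection so the sketch runs end to end.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((len(x), 4))
    return x @ W

# One support example per *novel* class (classes unseen in training).
support = {"cat": np.array([1.0, 0.0, 0.0]),
           "dog": np.array([0.0, 1.0, 0.0])}
query = np.array([0.9, 0.1, 0.0])

# One-shot classification: pick the class whose single support
# embedding is closest to the query embedding.
pred = min(support,
           key=lambda c: np.linalg.norm(embed(support[c]) - embed(query)))
print(pred)  # → cat
```

The training data (many examples per class) is only used to learn `embed`; the classes at test time can be entirely new, which is why the setup can look like zero-shot at first glance.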
This confuses me, and I would greatly appreciate it if someone could help clear it up.
submitted by /u/FeatherNox839