[D] What is the effect of training a system with jumbled up feature vectors?
An idea has been bouncing around my head today, and since I’m not very knowledgeable about ML I want to know if there’s any literature (or common sense from experts) about it.
Let’s say we have training examples consisting of (say) two features, color (c) and size (s), such that v1 = [c1, s1], v2 = [c2, s2], …, vn = [cn, sn].
What is the effect of training a system with “jumbled” inputs vx and vy, such that vx = [c1, s2] and vy = [c2, s1]?
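
For concreteness, here’s a minimal sketch of what I mean by “jumbled”, assuming the data just lives in a numpy array (the shapes and the use of numpy are my own illustration, not from any particular paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Original training matrix: one row per example, columns = [color, size]
X = rng.normal(size=(100, 2))

# "Jumble" by permuting each feature column independently, so a new
# row can pair the color of one example with the size of another
# (e.g. vx = [c1, s2], vy = [c2, s1]).
X_jumbled = np.column_stack(
    [rng.permutation(X[:, j]) for j in range(X.shape[1])]
)
```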
My immediate thought is that you can’t really give labels to jumbled training examples (it can’t be a cat if it has a horse’s head and a pig’s tail), but perhaps the system could learn a probability distribution over the labels, based on the features each jumbled example happens to contain?
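
One hypothetical way to make that concrete (the half-and-half mixing rule below is purely an assumption I’m making for illustration): give each jumbled example a soft target that blends the labels of the two examples its features came from.

```python
import numpy as np

rng = np.random.default_rng(1)

n, k = 100, 3                   # number of examples, number of classes
y = rng.integers(0, k, size=n)  # original hard labels
onehot = np.eye(k)[y]           # one-hot encode them

# Hypothetical labeling scheme: a jumbled example whose color comes
# from example i and whose size comes from example j gets the average
# of the two donors' one-hot labels as a soft target.
i = rng.permutation(n)          # color donors
j = rng.permutation(n)          # size donors
y_soft = (onehot[i] + onehot[j]) / 2
```

Training against y_soft with something like a cross-entropy loss would then push the model toward predicting a distribution over labels rather than a single class, which is roughly what I was imagining.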
Anyway, can jumbled training examples produce a model that is useful in any way? Is there any literature exploring this?