Tiling and Randomizing Images as a Stopgap for Visual Data Shortage [D]
This is my first post, and as a beginner I don't yet have the vocabulary to fully articulate my thoughts on this, so I apologize in advance.
I'm currently interested in training a convolutional neural network to recognize images. The problem I'm running into is a shortage of source images, so I thought: why not feed the few images I have into a program that cuts each one into tiles and shuffles those tiles around randomly, producing "new" images?
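For concreteness, here's a minimal sketch of the kind of tile-shuffling I have in mind, written with NumPy (the function name, tile size, and layout assumptions are just placeholders, not any established augmentation API):

```python
import numpy as np

def shuffle_tiles(image, tile_size, rng=None):
    """Cut an image into square tiles and shuffle them randomly.

    Assumes the image height and width are both divisible by tile_size.
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    rows, cols = h // tile_size, w // tile_size
    # Split into a grid of tiles: (rows, cols, tile_size, tile_size, channels)
    tiles = image.reshape(rows, tile_size, cols, tile_size, -1).swapaxes(1, 2)
    tiles = tiles.reshape(rows * cols, tile_size, tile_size, -1)
    # Shuffle the tiles along the first axis
    rng.shuffle(tiles)
    # Reassemble the shuffled tiles back into a full image
    out = tiles.reshape(rows, cols, tile_size, tile_size, -1).swapaxes(1, 2)
    return out.reshape(h, w, -1)

# Example: an 8x8 RGB image cut into 4x4 tiles and shuffled
img = np.arange(8 * 8 * 3).reshape(8, 8, 3).astype(np.uint8)
aug = shuffle_tiles(img, 4, rng=np.random.default_rng(0))
```

Every pixel from the original is preserved; only the positions of the 4x4 tiles change.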
Upon further reading, it seems like this is, or is at least similar to, an established method for synthesizing new visual data called Structured Domain Randomization.
My question: is this a valid method for training my neural network?
I'm worried that the network wouldn't actually learn anything new, since the structures present within each tile remain the same; only their locations within the image change.