[D] Do you think ML will become powerful enough to decode diffuse light reflections?
There seems to be some variability in how the diffusion appears depending on the material, the distance, the light intensity, and the object being reflected. For instance, if you hold your hand up to a white piece of paper with a bulb behind your hand, you can see your hand's shape from a few centimeters off the paper. Move your hand back and eventually it just becomes a large, blurry dark spot that fades out rapidly toward its edges (assuming the scale is kept constant).
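To put some numbers on that intuition: the shadow cast through an extended light source can be roughly modeled as the occluder's silhouette convolved with a blur kernel whose width grows with the occluder-to-screen distance. Here's a minimal 1-D sketch of that forward model (the Gaussian kernel, the `penumbra_shadow` helper, and all the constants are illustrative assumptions, not calibrated optics):

```python
import numpy as np

def penumbra_shadow(occluder, distance, source_radius=1.0):
    """Toy forward model: the shadow of a 1-D occluder on a screen.

    An extended light source blurs the shadow edge (penumbra); here the
    blur width is assumed to grow in proportion to the occluder-to-screen
    distance. Gaussian kernel chosen for simplicity, not exact optics.
    """
    sigma = max(source_radius * distance, 1e-6)  # penumbra width ~ distance
    x = np.arange(-int(4 * sigma) - 1, int(4 * sigma) + 2)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()  # conserve total shadow "mass"
    return np.convolve(occluder, kernel, mode="same")

# A sharp "hand-like" occluder profile: two fingers as rectangular bumps.
occluder = np.zeros(200)
occluder[60:80] = 1.0
occluder[100:120] = 1.0

near = penumbra_shadow(occluder, distance=1.0)   # fingers still distinct
far = penumbra_shadow(occluder, distance=20.0)   # merges into one blob
```

At a small distance the two "fingers" stay distinct; at a large one they merge into a single faded blob. The point is that the information is attenuated rather than destroyed, which is exactly what any decoding model would have to exploit.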
I was thinking about this earlier, and we as humans are really only trained to perceive these pronounced diffuse reflections, since anything subtler is cognitively expensive and probably wasteful to process. So once the shape on the white paper no longer reads as a hand to us, we have less and less reason to see it at all.
Do you think it is possible now, using GANs/CNNs, to train a model to see more than we can? Will we, now or ever, be able to decode the diffuse reflections of whole or partial objects from vast distances using ML?
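For what it's worth, when the blur kernel is known this reduces to classical deconvolution, and even a simple Wiener filter recovers part of a heavily diffused silhouette; the hard, ML-shaped part is the unknown, spatially varying kernel and real sensor noise, where a learned prior could plausibly beat any fixed filter. A toy 1-D sketch under an assumed Gaussian kernel with made-up constants:

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, noise_power=1e-3):
    """Wiener deconvolution in the Fourier domain (toy, noise-free input).

    noise_power acts as a regularizer: frequencies the blur has crushed
    below it stay suppressed instead of being amplified into garbage.
    """
    H = np.fft.fft(kernel)
    B = np.fft.fft(blurred)
    X = B * np.conj(H) / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft(X))

n, sigma = 256, 15.0
signal = np.zeros(n)
signal[100:130] = 1.0  # sharp 1-D silhouette

# Zero-centered (circular) Gaussian blur kernel -- heavy diffusion.
d = np.minimum(np.arange(n), n - np.arange(n))  # circular distance from 0
kernel = np.exp(-0.5 * (d / sigma) ** 2)
kernel /= kernel.sum()

blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)))
recovered = wiener_deconvolve(blurred, kernel)
```

Even with a blur width comparable to the silhouette itself, the Wiener estimate lands much closer to the original than the raw blur does. What it can't do is resurrect the frequencies the diffusion pushed below the noise floor; filling those in from learned statistics of natural shapes is exactly the niche a CNN or GAN prior would occupy.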