It seems there is some variability in how the diffusion appears depending on the material, distance, light intensity, and the object being reflected. For instance, if you hold your hand up to a white piece of paper with a bulb behind your hand, you can see your hand's shape a few centimeters off the paper. Move your hand back and eventually it just becomes a large, blurry dark spot that fades rapidly (assuming the scale is kept constant).
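For a rough intuition about why this happens: with an extended source like a bulb, the penumbra width grows roughly in proportion to the hand-to-paper distance, so the projection behaves like the occluder convolved with a point-spread function that widens as you pull your hand back. Here is a minimal Python sketch of that intuition, assuming a Gaussian PSF and a made-up `blur_per_cm` rate (the real rate depends on the bulb's size and position); it's an illustration, not the actual setup described above.

```python
# Minimal sketch of the blur-with-distance intuition (an assumption, not
# measured physics): the penumbra is modeled as convolution with a Gaussian
# point-spread function whose width grows linearly with hand-to-paper distance.
import numpy as np
from scipy.ndimage import gaussian_filter

def shadow_image(occluder_mask: np.ndarray, distance_cm: float,
                 blur_per_cm: float = 1.5) -> np.ndarray:
    """Simulate the shadow cast on the paper.

    occluder_mask: 2-D array, 1 where the hand blocks light, 0 elsewhere.
    distance_cm:   hand-to-paper distance; larger => softer shadow.
    blur_per_cm:   hypothetical PSF growth rate (depends on the bulb).
    """
    sigma = blur_per_cm * distance_cm          # PSF width scales with distance
    return gaussian_filter(occluder_mask.astype(float), sigma=sigma)

# A crude "hand": a rectangular occluder on a 200x200 grid.
hand = np.zeros((200, 200))
hand[60:140, 80:120] = 1.0

near = shadow_image(hand, distance_cm=2)    # crisp, recognizable outline
far = shadow_image(hand, distance_cm=20)    # large, blurry, low-contrast blob
print(near.max(), far.max())                # far shadow has much lower contrast
```

Plotting `near` versus `far` reproduces the effect described above: a sharp outline close up, a faint diffuse spot further away.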
I was thinking about this earlier: as humans, we are really only trained to see pronounced diffused reflections, since perceiving anything subtler is cognitively expensive and probably wasteful. So once the shape on the white paper no longer reads as a hand to us, it loses more and more of its reason to be seen.
Do you think it is now possible, using GANs/CNNs, to train a model to see more than we can? Will we now, or ever, be able to decode the diffused reflections of whole or partial objects from great distances using ML?
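One plausible way to pose that as an ML problem is supervised deconvolution: train a small CNN on synthetic (blurred shadow, sharp occluder) pairs and ask it to recover structure the eye can no longer resolve. The sketch below is a toy PyTorch framing of that idea, not a demonstrated result; `DeblurCNN`, `make_pair`, and the box-blur forward model are all assumptions introduced here for illustration.

```python
# Toy framing of "learning to see more than we can": a tiny CNN trained to
# invert a blur. Synthetic pairs only; a sketch under stated assumptions,
# not a known recipe for real diffused reflections.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_pair(size=64, k=9):
    """Random square occluder plus its blurred shadow (a box blur stands in
    for the diffusion point-spread function)."""
    mask = torch.zeros(1, 1, size, size)
    y, x = torch.randint(8, size - 32, (2,)).tolist()
    mask[..., y:y + 24, x:x + 24] = 1.0
    kernel = torch.ones(1, 1, k, k) / (k * k)
    blurred = F.conv2d(mask, kernel, padding=k // 2)
    return blurred, mask

class DeblurCNN(nn.Module):
    """Tiny convolutional net mapping a blurred shadow back to the occluder."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 1, 5, padding=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

model = DeblurCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):                     # toy training loop
    blurred, mask = make_pair()
    loss = F.binary_cross_entropy(model(blurred), mask)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Whether this scales to "vast distances" is exactly the open question: the further the object, the more the blur destroys high-frequency information, and no model can recover what the physics has truly averaged away.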
submitted by /u/FreckledMil