[D] Has anyone tried applying DeepDream to 3D convolutions on MIT Places to produce 3D environment models?
That would likely require a rewrite of the DeepDream code so it works on 3D recognition (convolution-based) systems that output volumes/models, and it might also use a heavily edited version of videoify.py that progressively builds up the depth of the scene (instead of zooming in, it would extend along the Z axis in 3D space). I mention the Places database because I don't think the standard Inception model would produce good results for this.
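Roughly, I'm imagining something like the minimal PyTorch sketch below. To be clear about what's assumed: the 3D net here is a small randomly initialised stand-in (a real attempt would need a 3D model trained on Places), and the `dream` function, step size, and slice sizes are my own illustrative choices, not anything taken from the actual DeepDream or videoify.py code.

```python
import torch
import torch.nn as nn

# Stand-in 3D recognition net (assumption: a Places-trained 3D model would replace this).
model = nn.Sequential(
    nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(),
)
model.eval()


def dream(volume, steps=20, lr=0.1):
    """DeepDream-style gradient ascent on a 3D volume: maximise the mean
    activation of the last layer, the same objective used on 2D feature maps."""
    volume = volume.clone().requires_grad_(True)
    for _ in range(steps):
        if volume.grad is not None:
            volume.grad.zero_()
        model(volume).mean().backward()
        with torch.no_grad():
            # Normalised gradient step, as in the original DeepDream code.
            volume += lr * volume.grad / (volume.grad.abs().mean() + 1e-8)
    return volume.detach()


# Start from a shallow noise volume: (batch, channels, depth, height, width).
volume = torch.randn(1, 1, 4, 64, 64)

# The 3D analogue of videoify.py's zoom loop: instead of zooming in,
# append fresh noise slices along the Z axis and dream them into the scene.
for _ in range(5):
    volume = dream(volume)
    volume = torch.cat([volume, torch.randn(1, 1, 2, 64, 64)], dim=2)

print(volume.shape)  # the progressively deepened "dreamed" volume
```

The depth-extension loop is where videoify.py's zoom logic would get replaced: each pass dreams the existing volume, then grows it along Z so the next pass hallucinates new scene depth rather than new zoom detail.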
submitted by /u/ad48hp