[R] Neural Volumes: Learning Dynamic Renderable Volumes from Images — Source Code Release
We published our work, Neural Volumes, at SIGGRAPH this year. The goal of the paper is to create an animatable 3D model of objects and scenes from calibrated multi-view video. You can check out the video at that link for a visual explanation of how the method works and to see some of the results. The method is essentially an autoencoder paired with a fixed-function rendering step that renders the decoded volume into an image.
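To make the fixed-function part concrete, here is a minimal PyTorch sketch of the core idea: march rays through a decoded RGBα volume and composite front to back. Everything here (function names, tensor shapes, the per-step compositing formula) is an illustrative assumption rather than the released code's API; see the repo for the actual implementation.

```python
import torch
import torch.nn.functional as F

def raymarch(volume, ray_origins, ray_dirs, n_steps=128):
    """Front-to-back alpha compositing through an RGBA volume (illustrative sketch).

    volume:      (1, 4, D, H, W) tensor; channels are RGB + alpha,
                 with spatial coordinates normalized to [-1, 1]
    ray_origins: (R, 3) ray start points in normalized volume coordinates
    ray_dirs:    (R, 3) unit ray directions
    returns:     (R, 3) rendered pixel colors
    """
    step = 2.0 / n_steps                       # march across the [-1, 1] cube
    color = torch.zeros(ray_origins.shape[0], 3)
    transmittance = torch.ones(ray_origins.shape[0], 1)  # light not yet absorbed
    pos = ray_origins.clone()
    for _ in range(n_steps):
        # Trilinearly sample the volume at the current ray positions.
        grid = pos.view(1, -1, 1, 1, 3)        # grid_sample expects (N, D, H, W, 3)
        samples = F.grid_sample(volume, grid, align_corners=True)
        samples = samples.view(4, -1).t()      # (R, 4)
        rgb, alpha = samples[:, :3], samples[:, 3:].clamp(0, 1)
        # Accumulate color weighted by how much light still reaches the camera.
        color = color + transmittance * alpha * rgb
        transmittance = transmittance * (1.0 - alpha)
        pos = pos + ray_dirs * step
    return color
```

Because this rendering step has no learned parameters and is differentiable end to end, a reconstruction loss on the rendered image backpropagates through the compositing into the decoder that produced the volume, which is what lets the whole thing train as an autoencoder.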
We’ve just put the source code on GitHub today. Since calibrated multi-view video is relatively hard to come by, we’ve included the dry ice dataset as well as a pretrained model (in the v0.1 release). I’m happy to answer any questions about this work or talk about neural rendering in general.
submitted by /u/stephenlombardi