[Research] Neural Point-Based Graphics
Let me introduce our new work on real-time photo-realistic neural rendering. The method renders complex scenes from novel viewpoints using raw point clouds as proxy geometry and requires no meshes. The pipeline is as follows: scan the object with an ordinary video camera, produce the point cloud with widely available software (e.g. Agisoft Metashape), feed the point cloud and the video to the algorithm, and that's it!
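For the curious, here is a tiny illustrative snippet of the data side of this pipeline. It is not part of the method itself: the Open3D reader and the file name `scene.ply` are just assumptions for the example, standing in for whatever the photogrammetry tool exports.

```python
# Minimal sketch: load a point cloud exported by a photogrammetry tool
# (e.g. Agisoft Metashape) as a PLY file. Open3D and "scene.ply" are
# hypothetical choices for this example, not part of the method.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scene.ply")   # hypothetical export from Metashape
points = np.asarray(pcd.points)              # (N, 3) xyz coordinates

# The video frames and their camera poses (also produced during reconstruction)
# are then used to fit the per-point descriptors and the rendering network,
# sketched in the next snippet.
print(f"Loaded {points.shape[0]} points")
```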
The core ingredient of our algorithm is an 8-dimensional descriptor learned for each point in the cloud, instead of the usual 3-dimensional RGB color. A rendering neural network interprets these descriptors and outputs the RGB image. We train the network on the large ScanNet dataset to boost its generalization to novel scenes.
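To make the idea concrete, below is a minimal PyTorch sketch (not our released code) of the two learnable parts: an 8-dimensional descriptor per point and a small convolutional network that turns a rasterized descriptor image into RGB. The naive per-pixel z-buffer, the tiny network, and all names here are simplifications for illustration; the actual method uses a more elaborate rasterization and rendering network.

```python
# Sketch of the idea: each point carries a learnable 8-D descriptor instead of
# an RGB color; a small ConvNet maps a rasterized descriptor image to RGB.
# Everything here (network size, rasterizer, names) is a simplified placeholder.
import torch
import torch.nn as nn


class PointDescriptors(nn.Module):
    """Learnable 8-dimensional descriptor for each point in the cloud."""

    def __init__(self, num_points: int, dim: int = 8):
        super().__init__()
        self.desc = nn.Parameter(torch.randn(num_points, dim) * 0.01)

    def forward(self) -> torch.Tensor:
        return self.desc  # (N, 8)


def rasterize(points_ndc: torch.Tensor, desc: torch.Tensor, h: int, w: int) -> torch.Tensor:
    """Naive nearest-point z-buffer: splat each point's descriptor into the pixel
    it projects to, keeping the closest point. Visibility is resolved without
    gradients; gradients flow only into the descriptors."""
    xs = ((points_ndc[:, 0] * 0.5 + 0.5) * (w - 1)).long().clamp(0, w - 1)
    ys = ((points_ndc[:, 1] * 0.5 + 0.5) * (h - 1)).long().clamp(0, h - 1)
    zs = points_ndc[:, 2]
    winner = torch.full((h, w), -1, dtype=torch.long)
    zbuf = torch.full((h, w), float("inf"))
    for i in range(points_ndc.shape[0]):
        x, y = xs[i], ys[i]
        if zs[i] < zbuf[y, x]:
            zbuf[y, x] = zs[i]
            winner[y, x] = i
    flat = winner.view(-1)
    covered = flat >= 0
    img = torch.zeros(h * w, desc.shape[1])
    img[covered] = desc[flat[covered]]                        # differentiable gather
    return img.view(h, w, -1).permute(2, 0, 1).unsqueeze(0)   # (1, 8, H, W)


class RenderNet(nn.Module):
    """Tiny stand-in for the rendering network: descriptor image -> RGB image."""

    def __init__(self, in_ch: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


# Usage: descriptors and the network are optimized jointly against video frames.
num_points, H, W = 5_000, 120, 160
points_ndc = torch.rand(num_points, 3) * 2 - 1        # placeholder geometry in [-1, 1]^3
descriptors = PointDescriptors(num_points)
renderer = RenderNet()
params = list(descriptors.parameters()) + list(renderer.parameters())
optim = torch.optim.Adam(params, lr=1e-3)

target = torch.rand(1, 3, H, W)                        # placeholder ground-truth frame
desc_img = rasterize(points_ndc, descriptors(), H, W)
rgb = renderer(desc_img)
loss = nn.functional.mse_loss(rgb, target)
loss.backward()
optim.step()
print("loss:", loss.item())
```

Note that the point positions only determine visibility here, so optimization touches the descriptors and the network but never moves the geometry.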
For more details, please refer to the paper, as well as the short description of the method on the project page and the video demonstrating the results.