
[Research] Neural Point-Based Graphics

Hey all,

Let me introduce our new work on real-time photo-realistic neural rendering. The method renders complex scenes from novel viewpoints using raw point clouds as proxy geometry and requires no meshes. The pipeline is as follows: scan the object with an ordinary video camera, produce a point cloud using widely available software (e.g. Agisoft Metashape), feed the point cloud and video to the algorithm, and that's it!
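To make the input side of that pipeline concrete, here is a toy sketch (not part of our code) that parses an ASCII .ply point cloud of the kind Metashape or COLMAP can export. Real exports carry extra per-vertex properties (normals, colors); this illustrative parser keeps only the positions.

```python
import io

def load_ascii_ply(stream):
    # Minimal, illustrative ASCII PLY reader: parse the header to find
    # the vertex count, then read x, y, z from each vertex line,
    # ignoring any extra per-vertex properties.
    assert stream.readline().strip() == "ply"
    n_vertices = 0
    for line in stream:
        line = line.strip()
        if line.startswith("element vertex"):
            n_vertices = int(line.split()[-1])
        elif line == "end_header":
            break
    points = []
    for _ in range(n_vertices):
        x, y, z, *_ = map(float, stream.readline().split())
        points.append((x, y, z))
    return points

# Tiny in-memory example file with two vertices.
toy_ply = """ply
format ascii 1.0
element vertex 2
property float x
property float y
property float z
end_header
0.0 0.5 1.0
1.0 1.5 2.0
"""
cloud = load_ascii_ply(io.StringIO(toy_ply))
print(cloud)  # [(0.0, 0.5, 1.0), (1.0, 1.5, 2.0)]
```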

The core ingredient of our algorithm is an 8-dimensional descriptor learned for each point in the cloud, in place of the usual 3-dimensional RGB color. A rendering neural network interprets these descriptors and outputs the RGB image. We train the network on the large ScanNet dataset to boost its generalization to novel scenes.
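A toy NumPy sketch of the idea (illustrative only, not the paper's implementation): each point carries an 8-D descriptor instead of RGB, the descriptors are splatted into a multi-channel raster, and a network maps the 8 channels to a 3-channel image. The orthographic "camera", the lack of a z-buffer, and the fixed linear map standing in for the learned CNN are all simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene: N points, each with a learnable 8-D neural descriptor
# in place of a fixed RGB colour (all sizes are illustrative).
N, D, H, W = 500, 8, 32, 32
points = rng.uniform(0.0, 1.0, size=(N, 3))   # raw point cloud (proxy geometry)
descriptors = rng.normal(size=(N, D))         # trained jointly with the renderer

# Project points to pixel coordinates with a trivial orthographic
# "camera" and splat each point's descriptor into an H x W x D raster.
px = (points[:, 0] * (W - 1)).astype(int)
py = (points[:, 1] * (H - 1)).astype(int)
raster = np.zeros((H, W, D))
raster[py, px] = descriptors                  # last point wins; no z-buffer here

# Stand-in for the rendering network: a fixed linear map from the 8
# descriptor channels to RGB. In the paper this is a learned CNN.
to_rgb = rng.normal(size=(D, 3))
image = raster @ to_rgb                       # H x W x 3 "rendered" image

print(image.shape)  # (32, 32, 3)
```

Training then amounts to optimizing both the descriptors and the network against ground-truth video frames, which is why no mesh is ever needed.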

For more details, please refer to the paper, the short description of the method on the project page, and the video demonstrating the results.

Paper: https://arxiv.org/abs/1906.08240

Project page: https://dmitryulyanov.github.io/neural_point_based_graphics

Video: https://youtu.be/7s3BYGok7wU

Free-viewpoint rendering by our method

submitted by /u/alievk91