We present Omnidirectional Neural Radiance Fields (OmniNeRF), the first method for parallax-enabled novel panoramic view synthesis. Recent novel view synthesis methods focus on perspective images with a limited field of view and require many pictures captured under specific conditions. In contrast, OmniNeRF can generate panoramic images for unseen viewpoints given only a single equirectangular image as training data. To this end, we propose to augment the single RGB-D panorama by projecting back and forth between the 3D world and 2D panoramic coordinates at different virtual camera positions. In this way, we can optimize an Omnidirectional Neural Radiance Field using pixels visible from omnidirectional viewing angles at a fixed center, and then estimate views from varying camera positions. As a result, the proposed OmniNeRF achieves convincing renderings of novel panoramic views that exhibit the parallax effect. We showcase the effectiveness of each of our proposals on both synthetic and real-world datasets.
Moving in a 360 World: Synthesizing Panoramic Parallaxes from a Single Panorama (hosted on arXiv)
[Results: novel panoramic view renderings for Scene 1, Scene 2, Scene 3, and Scene 4.]
The source image Is is used for training, and the target view It is the nearest neighbor of Is in Matterport3D. OmniNeRF generates the panorama by training on Is and rendering at the same pose as It. Even though we are unable to reproduce the precise edges in farther regions, the overall quality is still convincing.
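The back-and-forth projection described in the abstract can be sketched with plain equirectangular geometry: each pixel of the RGB-D panorama is lifted to a 3D point using its depth, then reprojected into the panorama of a virtual camera at a different position. The function names and the longitude/latitude convention below are our own assumptions for illustration, not the paper's released code.

```python
import numpy as np

def pixel_to_ray(u, v, W, H):
    """Map equirectangular pixel coords to unit ray directions.

    Assumed convention: longitude spans [-pi, pi) left to right,
    latitude spans [pi/2, -pi/2] top to bottom, pixel centers at +0.5.
    """
    lon = (u + 0.5) / W * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v + 0.5) / H * np.pi
    return np.stack([np.cos(lat) * np.sin(lon),   # x
                     np.sin(lat),                  # y (up)
                     np.cos(lat) * np.cos(lon)],   # z
                    axis=-1)

def reproject(depth, src_center, dst_center):
    """Lift an RGB-D panorama at src_center to 3D and reproject it
    into the equirectangular frame of a virtual camera at dst_center.

    Returns target pixel coords (u2, v2) and the new per-pixel range r.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))   # shape (H, W)
    dirs = pixel_to_ray(u, v, W, H)                  # (H, W, 3) unit rays
    pts = src_center + depth[..., None] * dirs       # 3D world points
    rel = pts - dst_center                           # vectors from new camera
    r = np.linalg.norm(rel, axis=-1)
    lon = np.arctan2(rel[..., 0], rel[..., 2])
    lat = np.arcsin(np.clip(rel[..., 1] / r, -1.0, 1.0))
    u2 = (lon + np.pi) / (2.0 * np.pi) * W - 0.5
    v2 = (np.pi / 2.0 - lat) / np.pi * H - 0.5
    return u2, v2, r
```

As a sanity check, reprojecting with `dst_center == src_center` is the identity: every pixel maps back to itself and the range equals the input depth. Splatting colors at `(u2, v2)` for a displaced `dst_center` yields the augmented training views, with parallax and disocclusions handled by the new viewpoint's geometry.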
Citation
@article{abs-2106-10859,