ViewMorpher3D
A 3D-aware Diffusion Framework for Multi-Camera Novel-View Synthesis in Autonomous Driving

Qualcomm AI Research

Para-Lane novel views (double-lane extrapolation): side-by-side comparisons of 3DGS (with reference view) and ViewMorpher3D.

nuScenes novel views (6 cameras), temporal extrapolation (0-5 sec. @ 2 FPS): side-by-side comparisons of 3DGS, DiFix3D, and ViewMorpher3D.

Abstract

Autonomous driving systems rely heavily on multi-view images to ensure accurate perception and robust decision-making. To effectively develop and evaluate perception stacks and planning algorithms, realistic closed-loop simulators are indispensable. While 3D reconstruction techniques such as Gaussian Splatting offer promising avenues for simulator construction, the rendered novel views often exhibit artifacts, particularly in extrapolated perspectives or when available observations are sparse. We introduce ViewMorpher3D, a multi-view image enhancement framework based on image diffusion models, designed to elevate photorealism and multi-view coherence in driving scenes. Unlike single-view approaches, ViewMorpher3D jointly processes a set of rendered views conditioned on camera poses, 3D geometric priors, and temporally adjacent or spatially overlapping reference views. This enables the model to infer missing details, suppress rendering artifacts, and enforce cross-view consistency. Our framework accommodates variable numbers of cameras and flexible reference/target view configurations, making it adaptable to diverse sensor setups. Experiments on real-world driving datasets demonstrate substantial improvements in image quality metrics, effectively reducing artifacts while preserving geometric fidelity.

Method

The backbone of ViewMorpher3D is a single-step image diffusion model (SD-Turbo) that enhances novel-view images rendered from a 3D Gaussian Splatting (3DGS) representation. The diffusion process is conditioned not only on the low-quality rendered views but also on previously observed reference views, camera poses, and the underlying 3DGS representation.

ViewMorpher3D overview: A diffusion-based framework for restoring a variable number of low-quality novel-view images by conditioning on observed (reference) views, camera poses, and 3D structures represented through 3D Gaussian Splatting.
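To make the conditioning concrete, below is a minimal PyTorch sketch of such an interface. It is an illustration under our own assumptions, not the paper's implementation: DummyDenoiser stands in for the SD-Turbo backbone, and the pose/geometry encoders, the depth-based geometric prior, and all tensor shapes are hypothetical.

    import torch
    import torch.nn as nn

    class DummyDenoiser(nn.Module):
        """Stand-in for the single-step SD-Turbo backbone (illustrative only)."""
        def forward(self, views, context, pose_tokens, geom_features):
            # A real backbone would run cross-view / cross-reference attention;
            # here we return the input unchanged so the sketch executes end to end.
            return views

    class ViewMorpherSketch(nn.Module):
        """Hypothetical conditioning interface around a single-step denoiser."""
        def __init__(self, denoiser: nn.Module, cond_dim: int = 320):
            super().__init__()
            self.denoiser = denoiser
            self.pose_embed = nn.Linear(12, cond_dim)               # flattened 3x4 extrinsics
            self.geom_embed = nn.Conv2d(1, cond_dim, 3, padding=1)  # e.g. depth rendered from 3DGS

        def forward(self, rendered, reference, poses, depth):
            # rendered:  (B, N, 3, H, W) low-quality 3DGS renderings for N cameras
            # reference: (B, M, 3, H, W) observed (reference) views
            # poses:     (B, N, 3, 4)    target-view camera extrinsics
            # depth:     (B, N, 1, H, W) geometric prior rendered from the 3DGS scene
            b, n = rendered.shape[:2]
            pose_tokens = self.pose_embed(poses.flatten(2))        # (B, N, C)
            geom_features = self.geom_embed(depth.flatten(0, 1))   # (B*N, C, H, W)
            # All N target views are restored jointly in one denoising step.
            out = self.denoiser(rendered.flatten(0, 1), reference,
                                pose_tokens, geom_features)
            return out.view(b, n, *out.shape[1:])

    # Usage: 6 target cameras and 2 reference views, mirroring the nuScenes setup.
    model = ViewMorpherSketch(DummyDenoiser())
    rendered = torch.rand(1, 6, 3, 256, 256)
    reference = torch.rand(1, 2, 3, 256, 256)
    poses = torch.rand(1, 6, 3, 4)
    depth = torch.rand(1, 6, 1, 256, 256)
    print(model(rendered, reference, poses, depth).shape)  # torch.Size([1, 6, 3, 256, 256])

Restoring all N target views in a single joint denoising step, rather than enhancing each camera independently, is what allows the model to enforce cross-view consistency.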

Novel-view metrics (Para-Lane): single-lane and double-lane transitions.

Novel-view metrics (nuScenes): temporal extrapolation (5 sec.).

BibTeX

              
@article{zanjani2026viewmorpher3d,
  title={ViewMorpher3D: A 3D-aware Diffusion Framework for Multi-Camera Novel View Synthesis in Autonomous Driving},
  author={Farhad G. Zanjani and Hong Cai and Amirhossein Habibian},
  year={2026},
  eprint={2601.07540},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2601.07540},
}