DyBluRF: Dynamic Neural Radiance Fields from Blurry Monocular Video

CVPR 2024


Huiqiang Sun1, Xingyi Li1, Liao Shen1, Xinyi Ye1, Ke Xian2, Zhiguo Cao1*

1School of AIA, Huazhong University of Science and Technology    2School of EIC, Huazhong University of Science and Technology

Abstract


Recent advancements in dynamic neural radiance field methods have yielded remarkable outcomes. However, these approaches rely on the assumption of sharp input images. When faced with motion blur, existing dynamic NeRF methods often struggle to generate high-quality novel views. In this paper, we propose DyBluRF, a dynamic radiance field approach that synthesizes sharp novel views from a monocular video affected by motion blur. To account for motion blur in input images, we simultaneously capture the camera trajectory and object Discrete Cosine Transform (DCT) trajectories within the scene. Additionally, we employ a global cross-time rendering approach to ensure consistent temporal coherence across the entire scene. We curate a dataset comprising diverse dynamic scenes that are specifically tailored for our task. Experimental results on our dataset demonstrate that our method outperforms existing approaches in generating sharp novel views from motion-blurred inputs while maintaining spatial-temporal consistency of the scene.
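The abstract describes modeling per-point object motion with a Discrete Cosine Transform (DCT) basis over time. As a rough illustration only, the sketch below shows how a small set of DCT coefficients can parameterize a 3D trajectory across video frames; the basis convention, coefficient shapes, and names (dct_basis, trajectory) are assumptions made for this example, not the paper's exact formulation.

import numpy as np

def dct_basis(num_frames: int, num_coeffs: int) -> np.ndarray:
    """DCT-style cosine basis sampled at each frame index, shape (num_frames, num_coeffs)."""
    t = np.arange(num_frames)
    k = np.arange(1, num_coeffs + 1)
    # Each column is a cosine of increasing frequency over the video timeline.
    return np.cos(np.pi / num_frames * (t[:, None] + 0.5) * k[None, :])

def trajectory(base_xyz: np.ndarray, coeffs: np.ndarray, frame_idx: int, basis: np.ndarray) -> np.ndarray:
    """Position of a scene point at a given frame: canonical position plus a DCT-weighted offset.

    base_xyz : (3,)            canonical 3D position
    coeffs   : (num_coeffs, 3) learned DCT coefficients for this point (hypothetical layout)
    """
    return base_xyz + basis[frame_idx] @ coeffs

# Example: a 30-frame video with 6 DCT coefficients per axis.
basis = dct_basis(num_frames=30, num_coeffs=6)
coeffs = 0.01 * np.random.randn(6, 3)   # stand-in for learned parameters
p0 = np.array([0.0, 0.0, 2.0])
print(trajectory(p0, coeffs, frame_idx=10, basis=basis))

Because the basis is smooth and low-frequency, only a handful of coefficients are needed per point, which is what generally makes trajectory-space representations compact enough to optimize jointly with a radiance field.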


Method



Comparisons



Citation


@InProceedings{sun2024_dyblurf,
    author    = {Sun, Huiqiang and Li, Xingyi and Shen, Liao and Ye, Xinyi and Xian, Ke and Cao, Zhiguo},
    title     = {DyBluRF: Dynamic Neural Radiance Fields from Blurry Monocular Video},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year      = {2024},
}