CTNeRF: Cross-Time Transformer for Dynamic Neural Radiance Field from Monocular Video

Abstract

The goal of our work is to generate high-quality novel views from monocular videos of complex and dynamic scenes. Prior methods, such as DynamicNeRF, have shown impressive performance by leveraging time-varying dynamic radiance fields. However, these methods struggle to accurately model the motion of complex objects, which can lead to inaccurate and blurry renderings of details. To address this limitation, we propose a novel approach that builds upon a recent generalizable NeRF, which aggregates nearby views onto new viewpoints; such methods, however, are typically only effective for static scenes. To overcome this challenge, we introduce a module that operates in both the time and frequency domains to aggregate the features of object motion. This allows us to learn the relationships between frames and generate higher-quality images. Our experiments demonstrate significant improvements over state-of-the-art methods on dynamic scene datasets: our approach outperforms existing methods in both the accuracy and visual quality of the synthesized views.
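
The sketch below illustrates the general idea of aggregating per-ray features across neighboring frames in both the time and frequency domains; it is a minimal, hypothetical example (module and argument names such as `CrossTimeAggregator` and `frame_feats` are our own for illustration), not the authors' implementation.

```python
import torch
import torch.nn as nn


class CrossTimeAggregator(nn.Module):
    """Toy cross-time aggregation: a temporal attention branch plus a
    frequency-domain branch over features gathered from K nearby frames."""

    def __init__(self, feat_dim: int = 64, num_heads: int = 4):
        super().__init__()
        # Time-domain branch: attention across the K frame features of each ray sample.
        self.temporal_attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        # Frequency-domain branch: mix the real/imaginary spectrum along the frame axis.
        self.freq_mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim, feat_dim),
        )
        self.out = nn.Linear(2 * feat_dim, feat_dim)

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (num_rays, K, feat_dim), features sampled from K neighboring frames.
        # Time domain: let each frame feature attend to the others.
        t_feat, _ = self.temporal_attn(frame_feats, frame_feats, frame_feats)

        # Frequency domain: FFT over the frame axis, mix the spectrum, inverse FFT.
        spec = torch.fft.rfft(frame_feats, dim=1)                 # (R, K//2+1, C), complex
        spec = torch.cat([spec.real, spec.imag], dim=-1)          # (R, K//2+1, 2C)
        spec = self.freq_mlp(spec)                                # (R, K//2+1, C)
        f_feat = torch.fft.irfft(
            torch.complex(spec, torch.zeros_like(spec)),
            n=frame_feats.shape[1], dim=1,
        )                                                         # (R, K, C)

        # Fuse both branches and pool over frames to get one feature per ray sample.
        fused = self.out(torch.cat([t_feat, f_feat], dim=-1))     # (R, K, C)
        return fused.mean(dim=1)                                  # (R, C)


if __name__ == "__main__":
    feats = torch.randn(1024, 4, 64)      # 1024 ray samples, 4 neighboring frames
    agg = CrossTimeAggregator(feat_dim=64)
    print(agg(feats).shape)               # torch.Size([1024, 64])
```

The aggregated per-ray feature would then condition the radiance field's color/density prediction for the target time; see the paper for the actual architecture.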

Citation

@misc{miao2024ctnerf,
      title={CTNeRF: Cross-Time Transformer for Dynamic Neural Radiance Field from Monocular Video}, 
      author={Xingyu Miao and Yang Bai and Haoran Duan and Yawen Huang and Fan Wan and Yang Long and Yefeng Zheng},
      year={2024},
      eprint={2401.04861},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}