Cinematic Behavior Transfer via NeRF-based Differentiable Filming

Xuekun Jiang1*, Anyi Rao2*, Jingbo Wang1, Dahua Lin1,3, Bo Dai1
1Shanghai AI Laboratory, 2Stanford University, 3The Chinese University of Hong Kong
CVPR 2024

*Indicates Equal Contribution

Given a film shot, i.e., a series of visually continuous frames containing complex camera movement and character motion, we present an approach that estimates the camera trajectory and character motion in a coherent global space. The extracted camera and character behavior can then be applied to new 2D/3D content through our cinematic transfer pipeline. 2D cinematic transfer replicates the character motion, camera movement, and background with new characters. 3D cinematic transfer yields similar results but offers more freedom than 2D, such as changing the lighting and environment.
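The transfer idea above can be sketched in miniature: camera poses and character root motion live in one shared global frame, so the extracted trajectories can be replayed around a new character placed elsewhere in that frame. This is a minimal illustrative sketch only; the `ShotBehavior` container, the pure-translation retargeting, and all names are assumptions for exposition, not the paper's actual NeRF-based implementation.

```python
# Illustrative sketch (not the paper's code): extracted camera and
# character trajectories share one global frame, so both can be rigidly
# offset to replay the same shot around a new character.
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]


@dataclass
class ShotBehavior:
    camera_traj: List[Vec3]  # per-frame camera positions (global frame)
    char_traj: List[Vec3]    # per-frame character root positions


def transfer(behavior: ShotBehavior, new_char_start: Vec3) -> ShotBehavior:
    """Replay the extracted behavior with a new character placement.

    The whole behavior is shifted so the character's first-frame root
    lands at new_char_start; camera and character keep their relative
    geometry, preserving the original framing.
    """
    ox, oy, oz = (new_char_start[i] - behavior.char_traj[0][i] for i in range(3))

    def shift(p: Vec3) -> Vec3:
        return (p[0] + ox, p[1] + oy, p[2] + oz)

    return ShotBehavior(
        camera_traj=[shift(p) for p in behavior.camera_traj],
        char_traj=[shift(p) for p in behavior.char_traj],
    )
```

Because both trajectories receive the same rigid offset, the camera-to-character geometry of every frame, and hence the on-screen composition, is unchanged.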

2D Cinematic Transfer Results

Ip Man 4, 2019
The following video contains sound.
The Wild, 2007
La La Land, 2016
The following video contains sound.
Dunkirk, 2017
The following video contains sound.

3D Cinematic Transfer Results

Batman vs. Superman, 2016
The Wild, 2007
Ringo, 2023
The following video contains sound.
Ringo, 2023
The following video contains sound.

Object Removal Results by ProPainter

Original video
Final result
Masked video
Pure background video
Without background

Visualization in 3D Engine

Original shot: Batman vs. Superman, 2016
A side view in the 3D engine of the extracted camera and character behavior

BibTeX

@article{jiang2023cinematic,
      title={Cinematic Behavior Transfer via NeRF-based Differentiable Filming},
      author={Jiang, Xuekun and Rao, Anyi and Wang, Jingbo and Lin, Dahua and Dai, Bo},
      journal={arXiv preprint arXiv:},
      year={2023}
  }