CTRL-D: Controllable Dynamic 3D Scene Editing with Personalized 2D Diffusion

1University of Toronto, 2Vector Institute

Abstract


Recent advances in 3D representations, such as Neural Radiance Fields and 3D Gaussian Splatting, have greatly improved realistic scene modeling and novel-view synthesis. However, achieving controllable and consistent editing in dynamic 3D scenes remains a significant challenge. Previous work is largely constrained by its editing backbones, resulting in inconsistent edits and limited controllability. In our work, we introduce a novel framework that first fine-tunes the InstructPix2Pix model and then performs a two-stage optimization of the scene based on deformable 3D Gaussians. Our fine-tuning enables the model to "learn" the editing ability from a single edited reference image, transforming the complex task of dynamic scene editing into a simple 2D image editing process. By directly learning editing regions and styles from the reference, our approach enables consistent and precise local edits without the need for tracking desired editing regions, effectively addressing key challenges in dynamic scene editing. Our two-stage optimization then progressively edits the trained dynamic scene, using an edited-image buffer to accelerate convergence and improve temporal consistency. Compared to state-of-the-art methods, our approach offers more flexible and controllable local scene editing, achieving high-quality and consistent results.

Overview


Overview of our pipeline for controllable dynamic scene editing. Given a dynamic 3D scene, our method (a) first edits one frame as a reference using any 2D editing model; (b) fine-tunes InstructPix2Pix on the edited reference image, along with images sampled from the original model to preserve its priors; and (c) optimizes the dynamic 3D scene with a deformable Gaussian representation using the designed two-stage method.
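To illustrate the role of the edited-image buffer in step (c), here is a minimal sketch of how such a buffer might cache and replay 2D edits during scene optimization. The class name, capacity, and replay probability are our illustrative assumptions, not details taken from the paper; `edit_fn` stands in for the fine-tuned 2D editor.

```python
import random

class EditedImageBuffer:
    """Caches edited frames so they can be replayed as optimization
    targets, which can speed convergence and smooth temporal jitter
    compared to re-running the diffusion editor every iteration."""

    def __init__(self, capacity=64, replay_prob=0.5):
        self.capacity = capacity
        self.replay_prob = replay_prob
        self.buffer = {}  # frame index -> cached edited image

    def add(self, frame_idx, edited_image):
        self.buffer[frame_idx] = edited_image
        if len(self.buffer) > self.capacity:
            # Evict the oldest entry (dicts preserve insertion order).
            self.buffer.pop(next(iter(self.buffer)))

    def sample_target(self, frame_idx, edit_fn, rendered_image):
        # With some probability, reuse the cached edit for this frame;
        # otherwise run the 2D editor and cache its output.
        if frame_idx in self.buffer and random.random() < self.replay_prob:
            return self.buffer[frame_idx]
        edited = edit_fn(rendered_image)
        self.add(frame_idx, edited)
        return edited
```

During optimization, each training iteration would call `sample_target` for the current frame and supervise the deformable Gaussians against the returned image; the replay probability trades off freshness of edits against consistency of targets.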

Gallery


Scene: DyCheck 1

Comparison


We compare our results against Instruct 4D-to-4D as the baseline. Our method demonstrates superior consistency, higher quality, and more precise local edits, which the baseline cannot achieve.

BibTeX

@misc{he2024ctrldcontrollabledynamic3d,
      title={CTRL-D: Controllable Dynamic 3D Scene Editing with Personalized 2D Diffusion}, 
      author={Kai He and Chin-Hsuan Wu and Igor Gilitschenski},
      year={2024},
      eprint={2412.01792},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.01792}, 
}