
AniDoc: Animation Creation Made Easier

Yihao Meng, Hao Ouyang, Hanlin Wang, Qiuyu Wang, Wen Wang, Ka Leong Cheng, Zhiheng Liu, Yujun Shen, Huamin Qu*

*Corresponding author for this work

Research output: Contribution to journal › Conference article published in journal › peer-review

Abstract

The production of 2D animation follows an industry-standard workflow, encompassing four essential stages: character design, keyframe animation, in-betweening, and coloring. Our research focuses on reducing the labor costs of this process by harnessing the potential of increasingly powerful generative AI. Built on video diffusion models, AniDoc is a video line art colorization tool that automatically converts sketch sequences into colored animations following a reference character specification. Our model exploits correspondence matching as explicit guidance, yielding strong robustness to variations (e.g., posture) between the reference character and each line art frame. In addition, our model can also automate the in-betweening process, so that users can create a temporally consistent animation by simply providing a character image together with the start and end sketches. Our code is available at: https://yihaomeng.github.io/AniDocdemo.
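
The abstract does not detail how the correspondence matching is computed or injected into the diffusion model. The sketch below is a minimal, hypothetical illustration in PyTorch of one way such soft reference-to-frame feature matching could work; the names correspondence_guidance, denoiser, and sketch_latent are assumptions for illustration, not the authors' actual AniDoc code.

    # Minimal sketch of soft correspondence matching between a reference
    # character image and a line-art frame. Illustrative only, not the
    # authors' implementation.
    import torch
    import torch.nn.functional as F

    def correspondence_guidance(ref_feats, frame_feats):
        # ref_feats, frame_feats: (B, C, H, W) feature maps of the
        # reference character image and one line-art frame.
        B, C, H, W = frame_feats.shape
        ref = F.normalize(ref_feats.flatten(2), dim=1)          # (B, C, H*W)
        frm = F.normalize(frame_feats.flatten(2), dim=1)        # (B, C, H*W)
        # Cosine-similarity attention from each frame location to all
        # reference locations (soft correspondence matching).
        attn = torch.softmax(frm.transpose(1, 2) @ ref, dim=-1)  # (B, HW, HW)
        # Gather reference features for each frame location.
        matched = attn @ ref_feats.flatten(2).transpose(1, 2)    # (B, HW, C)
        return matched.transpose(1, 2).reshape(B, C, H, W)

    # Hypothetical usage: concatenate the matched reference features with the
    # sketch latent as extra conditioning for each denoising step of a video
    # diffusion model (denoiser, sketch_latent, noisy_latent, t are placeholders).
    #   cond = torch.cat([sketch_latent,
    #                     correspondence_guidance(ref_feats, frame_feats)], dim=1)
    #   noise_pred = denoiser(noisy_latent, t, cond)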

Original language: English
Pages (from-to): 18187-18197
Number of pages: 11
Journal: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Early online date: 13 Aug 2025
Publication status: Published - 2025
Event: 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2025 - Nashville, United States
Duration: 10 Jun 2025 - 17 Jun 2025

Bibliographical note

Publisher Copyright:
© 2025 IEEE.

Keywords

  • animation
  • diffusion models
  • line art video colorization
  • video generation
  • video interpolation

