Interactive Story Visualization with Multiple Characters

Yuan Gong, Youxin Pang, Xiaodong Cun, Menghan Xia, Yingqing He, Haoxin Chen, Longyue Wang, Yong Zhang*, Xintao Wang, Ying Shan, Yujiu Yang*

*Corresponding author for this work

Research output: Chapter in Book/Conference Proceeding/Report › Conference Paper published in a book › peer-reviewed

19 Citations (Scopus)

Abstract

Accurate story visualization requires several necessary elements, such as identity consistency across frames, alignment between plain text and visual content, and a reasonable layout of objects in images. Most previous works endeavor to meet these requirements by fitting a text-to-image (T2I) model on a set of videos in the same style and with the same characters, e.g., the FlintstonesSV dataset. However, the learned T2I models typically struggle to adapt to new characters, scenes, and styles, and often lack the flexibility to revise the layout of the synthesized images. This paper proposes a system for generic interactive story visualization, capable of handling multiple novel characters and supporting the editing of layout and local structure. It is developed by leveraging the prior knowledge of large language and T2I models trained on massive corpora. The system comprises four interconnected components: story-to-prompt generation (S2P), text-to-layout generation (T2L), controllable text-to-image generation (C-T2I), and image-to-video animation (I2V). First, the S2P module converts concise story information into the detailed prompts required by subsequent stages. Next, T2L generates diverse and reasonable layouts from the prompts, giving users the ability to adjust and refine the layout to their preferences. The core component, C-T2I, creates images guided by layouts, sketches, and actor-specific identifiers to maintain consistency and detail across visualizations. Finally, I2V enriches the visualization process by animating the generated images. Extensive experiments and a user study validate the effectiveness of the proposed system and the flexibility of its interactive editing.
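The four-stage pipeline described above can be sketched as plain function composition. The snippet below is a minimal illustration of the data flow only; every function name, data type, and return value here is a hypothetical stand-in, not the authors' actual implementation or API (e.g., S2P really uses a large language model, and C-T2I is a diffusion model).

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Hypothetical types for the S2P -> T2L -> C-T2I -> I2V pipeline.
# All names are illustrative placeholders.

@dataclass
class Layout:
    # Bounding boxes (x0, y0, x1, y1) per character; the paper lets
    # users edit these before image generation.
    boxes: List[Tuple[float, float, float, float]] = field(default_factory=list)

def story_to_prompts(story: str) -> List[str]:
    """S2P: expand concise story text into detailed per-scene prompts.
    (The paper uses an LLM; a naive sentence split stands in here.)"""
    return [s.strip() for s in story.split(".") if s.strip()]

def text_to_layout(prompt: str) -> Layout:
    """T2L: propose a layout for the prompt; users may then refine it."""
    return Layout(boxes=[(0.1, 0.2, 0.4, 0.9)])  # placeholder single box

def controllable_t2i(prompt: str, layout: Layout,
                     character_ids: List[str]) -> Dict:
    """C-T2I: generation conditioned on layout, sketches, and
    actor-specific identifiers (returned as a stub descriptor here)."""
    return {"prompt": prompt, "boxes": layout.boxes, "ids": character_ids}

def image_to_video(image: Dict) -> List[Dict]:
    """I2V: animate the generated image (stubbed as repeated frames)."""
    return [image] * 8

def visualize(story: str, character_ids: List[str]) -> List[List[Dict]]:
    clips = []
    for prompt in story_to_prompts(story):            # S2P
        layout = text_to_layout(prompt)               # T2L (user-editable)
        image = controllable_t2i(prompt, layout, character_ids)  # C-T2I
        clips.append(image_to_video(image))           # I2V
    return clips

clips = visualize("Fred waves. Wilma laughs", ["fred", "wilma"])
print(len(clips), len(clips[0]))  # one clip per scene, 8 stub frames each
```

The interactive-editing step corresponds to mutating `Layout.boxes` between the T2L and C-T2I calls before generation proceeds.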

Original language: English
Title of host publication: Proceedings - SIGGRAPH Asia 2023 Conference Papers, SA 2023
Editors: Stephen N. Spencer
Publisher: Association for Computing Machinery, Inc
ISBN (Electronic): 9798400703157
DOIs
Publication status: Published - 10 Dec 2023
Externally published: Yes
Event: SIGGRAPH Asia 2023 Conference Papers, SA 2023 - Sydney, Australia
Duration: 12 Dec 2023 - 15 Dec 2023

Publication series

Name: Proceedings - SIGGRAPH Asia 2023 Conference Papers, SA 2023

Conference

Conference: SIGGRAPH Asia 2023 Conference Papers, SA 2023
Country/Territory: Australia
City: Sydney
Period: 12/12/23 - 15/12/23

Bibliographical note

Publisher Copyright:
© 2023 Owner/Author.

Keywords

  • Controllable Generation
  • Diffusion Models
  • Story Visualization

