Motion texture: A two-level statistical model for character motion synthesis

Yan Li*, Tianshu Wang, Heung-Yeung Shum

*Corresponding author for this work

Research output: Contribution to journal › Conference article published in journal › peer-review

215 Citations (Scopus)

Abstract

In this paper, we describe a novel technique, called motion texture, for synthesizing complex human-figure motion (e.g., dancing) that is statistically similar to the original motion-captured data. We define motion texture as a set of motion textons and their distribution, which characterize the stochastic and dynamic nature of the captured motion. Specifically, a motion texton is modeled by a linear dynamic system (LDS), while the texton distribution is represented by a transition matrix indicating how likely each texton is to switch to another. We have designed a maximum likelihood algorithm to learn the motion textons and their relationships from the captured dance motion. The learnt motion texture can then be used to generate new animations automatically and/or to edit animation sequences interactively. Most interestingly, motion texture can be manipulated at different levels, either by changing the fine details of a specific motion at the texton level or by designing a new choreography at the distribution level. Our approach is demonstrated by many synthesized sequences of visually compelling dance motion.
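The two-level generative process described in the abstract can be sketched in code: sample a texton sequence from the transition matrix, then unroll each texton's LDS to produce frames. This is a minimal illustration only — the texton parameters and the 2-D state space below are hypothetical toy values, and the paper's maximum likelihood learning step is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for two textons. Each texton is a linear
# dynamic system (LDS): x_{t+1} = A x_t + d + noise, here in a toy
# 2-D state space (a real system would use full joint-angle states).
textons = [
    {"A": np.array([[0.95, -0.05], [0.05, 0.95]]),
     "d": np.array([0.01, 0.0]), "len": 25},
    {"A": np.array([[0.90, 0.10], [-0.10, 0.90]]),
     "d": np.array([0.0, 0.02]), "len": 25},
]

# Texton distribution: M[i, j] = P(next texton = j | current texton = i).
M = np.array([[0.3, 0.7],
              [0.6, 0.4]])

def synthesize(n_textons, x0, noise_std=0.01):
    """Sample a texton sequence from M, then run each texton's LDS."""
    frames = []
    x = x0.copy()
    label = 0  # start in texton 0
    for _ in range(n_textons):
        t = textons[label]
        for _ in range(t["len"]):
            x = t["A"] @ x + t["d"] + rng.normal(0.0, noise_std, size=2)
            frames.append(x.copy())
        # Switch to the next texton according to the transition matrix.
        label = rng.choice(len(textons), p=M[label])
    return np.stack(frames)

motion = synthesize(4, np.array([1.0, 0.0]))
print(motion.shape)  # -> (100, 2): 4 textons x 25 frames each, 2-D state
```

Editing at the texton level corresponds to changing an individual `A`/`d` pair; editing at the distribution level corresponds to changing `M`.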

Original language: English
Pages (from-to): 465-472
Number of pages: 8
Journal: ACM Transactions on Graphics
Volume: 21
Issue number: 3
DOIs
Publication status: Published - 2002
Externally published: Yes
Event: Proceedings of ACM SIGGRAPH 2002 - United States
Duration: 23 Jul 2002 - 26 Jul 2002

Keywords

  • Linear dynamic systems
  • Motion editing
  • Motion synthesis
  • Motion texture
  • Texture synthesis
