Motion texture for dynamic sequence synthesis

Tian Shu Wang*, Nan Ning Zheng, Yan Li, Ying Qing Xu, Heung Yeung Shum

*Corresponding author for this work

Research output: Contribution to journal › Journal Article › peer-review

2 Citations (Scopus)

Abstract

We describe a novel model, called motion texture, for synthesizing complex dynamic sequences that are statistically similar to the original sample data. We define a motion texture as a set of motion textons and their distribution, which together characterize the stochastic and dynamic nature of the sample data. Specifically, each motion texton is modeled by a linear dynamic system (LDS), while the texton distribution is represented by a transition matrix indicating how likely each texton is to switch to another. We design a maximum likelihood algorithm to learn the motion textons and their relationships. The learnt motion texture can then be used to generate new animations automatically. Our approach is demonstrated by many synthesized sequences of visually compelling dance motion and video sequences.
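The generative side of the model described in the abstract can be sketched as follows: sample a chain of textons from the transition matrix, and run each texton's LDS to emit frames. This is a minimal illustrative sketch, not the paper's implementation; all parameters (two textons, dimensions, noise scales, the matrix `P`) are hypothetical placeholders standing in for learnt values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learnt motion texture: two textons, each a linear
# dynamic system (LDS)  x_{t+1} = A x_t + v,  y_t = C x_t,
# plus a transition matrix over textons. All values below are
# illustrative placeholders, not the paper's learnt parameters.
state_dim, obs_dim = 4, 2
textons = []
for _ in range(2):
    A = 0.9 * np.eye(state_dim) + 0.05 * rng.standard_normal((state_dim, state_dim))
    C = rng.standard_normal((obs_dim, state_dim))
    textons.append((A, C))

# P[i, j]: probability that texton i switches to texton j
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])

def synthesize(n_segments=5, seg_len=20):
    """Generate a new sequence by sampling a texton chain from the
    transition matrix and running each texton's LDS for a segment."""
    frames, k = [], 0                      # start from texton 0
    x = rng.standard_normal(state_dim)     # initial hidden state
    for _ in range(n_segments):
        A, C = textons[k]
        for _ in range(seg_len):
            x = A @ x + 0.01 * rng.standard_normal(state_dim)  # state update
            frames.append(C @ x)           # observed frame (e.g. joint angles)
        k = rng.choice(len(textons), p=P[k])  # switch to the next texton
    return np.array(frames)

seq = synthesize()  # shape: (n_segments * seg_len, obs_dim)
```

In the actual method, `A`, `C`, and `P` would come from the maximum likelihood learning step; here the sketch only shows how a learnt texture drives synthesis.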

Original language: English
Pages (from-to): 1241-1247
Number of pages: 7
Journal: Jisuanji Xuebao/Chinese Journal of Computers
Volume: 26
Issue number: 10
Publication status: Published - Oct 2003
Externally published: Yes

Keywords

  • Computer animation
  • Motion analysis
  • Statistical learning
