Video tooning

Jue Wang, Yingqing Xu, Heung Yeung Shum, Michael F. Cohen

Research output: Contribution to conference › Conference paper › peer-review

79 Citations (Scopus)

Abstract

We describe a system for transforming an input video into a highly abstracted, spatio-temporally coherent cartoon animation with a range of styles. To achieve this, we treat video as a space-time volume of image data. We have developed an anisotropic kernel mean shift technique to segment the video data into contiguous volumes. These provide a simple cartoon style in themselves, but more importantly provide the capability to semi-automatically rotoscope semantically meaningful regions. In our system, the user simply outlines objects on keyframes. A mean shift guided interpolation algorithm is then employed to create three-dimensional semantic regions by interpolation between the keyframes, while maintaining smooth trajectories along the time dimension. These regions provide the basis for creating smooth two-dimensional edge sheets and stroke sheets embedded within the spatio-temporal video volume. The regions, edge sheets, and stroke sheets are rendered by slicing them at particular times. A variety of rendering styles are shown. The temporal coherence provided by the smoothed semantic regions and sheets results in a temporally consistent non-photorealistic appearance.
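The core of the segmentation stage is mean shift clustering, in which each sample is iteratively moved toward the local maximum of a kernel density estimate; samples converging to the same mode form one segment. The sketch below illustrates the basic procedure on a toy feature space. It is a minimal, isotropic Gaussian-kernel version, not the anisotropic kernel formulation or the space-time feature vectors used in the paper; the function name and parameters (`mean_shift`, `bandwidth`, `iters`, `tol`) are illustrative assumptions.

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, iters=20, tol=1e-3):
    """Illustrative mean shift: move each point toward the local
    density maximum under a Gaussian kernel (isotropic sketch, not
    the paper's anisotropic kernel)."""
    modes = points.astype(float).copy()
    for _ in range(iters):
        moved = 0.0
        for i, x in enumerate(modes):
            # Gaussian kernel weights of all samples relative to x
            d2 = np.sum((points - x) ** 2, axis=1)
            w = np.exp(-d2 / (2.0 * bandwidth ** 2))
            # Kernel-weighted mean is the next estimate of the mode
            new_x = (w[:, None] * points).sum(axis=0) / w.sum()
            moved = max(moved, np.linalg.norm(new_x - x))
            modes[i] = new_x
        if moved < tol:
            break
    return modes

# Two well-separated 1-D clusters collapse to two modes.
data = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
modes = mean_shift(data, bandwidth=0.5)
```

In the full method, each pixel would instead be a point in a joint (x, y, t, color) feature space, and a per-point anisotropic kernel would adapt to local structure so that segments remain coherent through time.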

Original language: English
Pages: 574-583
Number of pages: 10
Publication status: Published - 2004
Externally published: Yes
Event: ACM SIGGRAPH 2004 - Los Angeles, CA, United States
Duration: 8 Aug 2004 - 12 Aug 2004

Conference

Conference: ACM SIGGRAPH 2004
Country/Territory: United States
City: Los Angeles, CA
Period: 8/08/04 - 12/08/04

