On object-based compression for a class of dynamic image-based representations

Qing Wu*, King To Ng, Shing Chow Chan, Heung Yeung Shum

*Corresponding author for this work

Research output: Chapter in Book/Conference Proceeding/Report › Conference paper published in a book › peer-review

10 Citations (Scopus)

Abstract

An object-based compression scheme for a class of dynamic image-based representations called "plenoptic videos" (PVs) is studied in this paper. PVs are simplified dynamic light fields in which the videos are taken at regularly spaced locations along a line segment instead of a 2-D plane. To improve the rendering quality in scenes with large depth variations and to support object-level functionalities for rendering, an object-based compression scheme is employed for the coding of PVs. Besides texture and shape information, the compression of geometry information in the form of depth maps is also supported. The proposed compression scheme exploits both the temporal and spatial redundancy among video object streams in the PV to achieve higher compression efficiency. Experimental results show that considerable improvements in coding performance are obtained for both synthetic and real scenes. Moreover, object-based functionalities such as rendering individual image-based objects are also illustrated.
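The paper's codec is not reproduced here, but the spatial (inter-view) redundancy the abstract refers to can be illustrated with a toy sketch: because PV cameras sit at regularly spaced positions on a line, a frame from one view largely matches its neighbor shifted by a horizontal disparity, so predicting one view from the other leaves only a small residual to code. The function names, the brute-force disparity search, and the synthetic scene below are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def shift_view(view, disparity):
    """Shift an image horizontally by `disparity` pixels (edge-padded),
    mimicking the apparent shift between neighboring cameras on a line."""
    shifted = np.roll(view, disparity, axis=1)
    if disparity > 0:
        shifted[:, :disparity] = view[:, :1]   # pad exposed left edge
    elif disparity < 0:
        shifted[:, disparity:] = view[:, -1:]  # pad exposed right edge
    return shifted

def best_disparity_prediction(reference, target, max_disp=8):
    """Exhaustively search horizontal disparities and return the
    (disparity, SSE, prediction) minimizing the squared prediction error."""
    best = None
    for d in range(-max_disp, max_disp + 1):
        pred = shift_view(reference, d)
        sse = float(np.sum((target.astype(np.int64) - pred) ** 2))
        if best is None or sse < best[1]:
            best = (d, sse, pred)
    return best

# Toy scene: a bright square that appears 3 pixels to the right
# in the neighboring view on the camera line.
view_a = np.zeros((32, 32), dtype=np.uint8)
view_a[10:20, 8:18] = 200
view_b = shift_view(view_a, 3)

d, sse, pred = best_disparity_prediction(view_a, view_b)
print(d, sse)  # the true disparity is recovered and the residual vanishes
```

A real codec would of course combine such inter-view prediction with the temporal (motion-compensated) prediction and shape/depth coding described in the abstract, and would search disparities per block rather than per frame.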

Original language: English
Title of host publication: IEEE International Conference on Image Processing 2005, ICIP 2005
Pages: 405-408
Number of pages: 4
DOIs
Publication status: Published - 2005
Externally published: Yes
Event: IEEE International Conference on Image Processing 2005, ICIP 2005 - Genova, Italy
Duration: 11 Sept 2005 - 14 Sept 2005

Publication series

Name: Proceedings - International Conference on Image Processing, ICIP
Volume: 3
ISSN (Print): 1522-4880

Conference

Conference: IEEE International Conference on Image Processing 2005, ICIP 2005
Country/Territory: Italy
City: Genova
Period: 11/09/05 - 14/09/05

