Matching commercial clips from TV streams using a unique, robust and compact signature

Yijun Li*, Jesse S. Jin, Xiaofang Zhou

*Corresponding author for this work

Research output: Chapter in Book/Conference Proceeding/Report › Conference paper published in a book › peer-review

Abstract

One of the critical challenges in automatic recognition of TV commercials is generating a unique, robust and compact signature. Uniqueness refers to the ability to identify similarity among commercial video clips that may have slight content variations. Robustness means the ability to match commercial video clips that contain the same content but possibly differ in digitization/encoding, noise, and/or transmission and recording distortion. Efficiency (compactness) is the capability of matching commercial video sequences with low computation cost and storage overhead. In this paper, we present a binary-signature-based method that meets all three criteria by combining ordinal and color measurements. Experimental results on a large real-world commercial video database show that our approach delivers significantly better performance than existing methods.
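The abstract describes the signature only at a high level, so the sketch below is a minimal illustration of one plausible reading: a per-frame binary signature built from pairwise comparisons of block averages (preserving ordinal, rank-order information) computed on all three color channels, matched by Hamming distance. The 2x2 block grid, the YCbCr channel choice, and every function name here are illustrative assumptions, not the paper's actual design.

    # Illustrative sketch only; block layout, channels and names are assumptions.
    import numpy as np

    def block_means(channel: np.ndarray, grid: int = 2) -> np.ndarray:
        """Average one channel over a grid x grid block layout."""
        h, w = channel.shape
        bh, bw = h // grid, w // grid
        means = np.empty((grid, grid))
        for i in range(grid):
            for j in range(grid):
                means[i, j] = channel[i*bh:(i+1)*bh, j*bw:(j+1)*bw].mean()
        return means

    def frame_signature(ycbcr: np.ndarray, grid: int = 2) -> np.ndarray:
        """Binary signature of one frame: for each channel, compare every
        ordered pair of block means and emit 1 if the first is larger.
        The comparisons capture ordinal (rank-order) structure, while
        using all three channels folds in color information."""
        bits = []
        for c in range(3):  # luma plus two chroma channels
            m = block_means(ycbcr[..., c], grid).ravel()
            for a in range(len(m)):
                for b in range(a + 1, len(m)):
                    bits.append(1 if m[a] > m[b] else 0)
        return np.array(bits, dtype=np.uint8)

    def hamming(sig_a: np.ndarray, sig_b: np.ndarray) -> int:
        """Matching cost between two signatures (lower = more similar)."""
        return int(np.count_nonzero(sig_a != sig_b))

Under these assumptions, a clip-level signature could be the concatenation of frame signatures over temporally sampled frames, with two clips declared a match when their Hamming distance falls below a threshold; note that a 2x2 grid yields only 18 bits per frame, which keeps the signature compact. The paper's actual construction, sampling scheme and thresholds may differ.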

Original language: English
Title of host publication: Proceedings of the Digital Imaging Computing
Subtitle of host publication: Techniques and Applications, DICTA 2005
Pages: 266-272
Number of pages: 7
DOIs
Publication status: Published - 2005
Externally published: Yes
Event: Digital Imaging Computing: Techniques and Applications, DICTA 2005 - Cairns, Australia
Duration: 6 Dec 2005 – 8 Dec 2005

Publication series

Name: Proceedings of the Digital Imaging Computing: Techniques and Applications, DICTA 2005
Volume: 2005

Conference

Conference: Digital Imaging Computing: Techniques and Applications, DICTA 2005
Country/Territory: Australia
City: Cairns
Period: 6/12/05 – 8/12/05
