TY - GEN
T1 - Learning a sparse, corner-based representation for time-varying background modelling
AU - Zhu, Qiang
AU - Avidan, Shai
AU - Cheng, Kwang Ting
PY - 2005
Y1 - 2005
N2 - Time-varying phenomena, such as ripples on water, trees waving in the wind and illumination changes, produce false motions, which significantly compromise the performance of an outdoor-surveillance system. In this paper, we propose a corner-based background model to effectively detect moving objects in challenging dynamic scenes. Specifically, the method follows a three-step process. First, we detect feature points using a Harris corner detector and represent them as SIFT-like descriptors. Second, we dynamically learn a background model and classify each extracted feature as either a background or a foreground feature. Last, a Lucas-Kanade feature tracker is integrated into this framework to differentiate motion-consistent foreground objects from background objects with random or repetitive motion. The key insight of our work is that a collection of SIFT-like features can effectively represent the environment and account for variations caused by natural effects with dynamic movements. Features that do not correspond to the background must therefore correspond to foreground moving objects. Our method is computationally efficient and works in real time. Experiments on challenging video clips demonstrate that the proposed method achieves higher accuracy in detecting foreground objects than existing methods.
UR - https://openalex.org/W2139504318
UR - https://www.scopus.com/pages/publications/33745959802
U2 - 10.1109/ICCV.2005.134
DO - 10.1109/ICCV.2005.134
M3 - Conference Paper published in a book
SN - 076952334X
SN - 9780769523347
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 678
EP - 685
BT - Proceedings - 10th IEEE International Conference on Computer Vision, ICCV 2005
T2 - Proceedings - 10th IEEE International Conference on Computer Vision, ICCV 2005
Y2 - 17 October 2005 through 20 October 2005
ER -