Transferring Image-CLIP to Video-Text Retrieval via Temporal Relations

Han Fang, Pengfei Xiong, Luhui Xu, Wenhan Luo*

*Corresponding author for this work

Research output: Contribution to journal › Journal Article › peer-review

20 Citations (Scopus)

Abstract

We present a novel network that transfers an image-language pre-trained model to video-text retrieval in an end-to-end manner. Leading approaches in video-and-language learning attempt to distill spatio-temporal video features and multi-modal interactions between videos and language from large-scale video-text datasets. In contrast, we leverage a pre-trained image-language model and simplify the task into a two-stage framework: co-learning of image and text, followed by enhancing the temporal relations between video frames and between video and text. Specifically, building on the spatial semantics captured by the Contrastive Language-Image Pre-training (CLIP) model, our model introduces a Temporal Difference Block (TDB) to capture fine-grained motion between video frames, and a Temporal Alignment Block (TAB) to re-align the tokens of video clips and phrases and strengthen the cross-modal correlation. These two temporal blocks realize video-language learning efficiently and enable the proposed model to scale well on comparatively small datasets. We conduct extensive experimental studies, including ablation studies and comparisons with existing state-of-the-art methods, and our approach outperforms them on the widely used text-to-video and video-to-text retrieval benchmarks MSR-VTT, MSVD, LSMDC, and VATEX.
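The record does not specify the internals of the Temporal Difference Block; as a purely illustrative sketch (hypothetical, not the paper's implementation), the core idea of augmenting per-frame CLIP features with adjacent-frame differences as a motion cue can be expressed in NumPy:

```python
import numpy as np

def temporal_difference_block(frames: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Hypothetical sketch of a temporal-difference operation.

    frames: array of shape (T, D), one feature vector per video frame
    (e.g. CLIP image embeddings). Returns an array of shape (T, D) in
    which each frame feature is fused with the difference to the next
    frame; `alpha` weights the motion cue. A real TDB would use learned
    layers rather than this fixed additive fusion.
    """
    # Differences between consecutive frames approximate motion.
    diffs = np.diff(frames, axis=0)                        # shape (T-1, D)
    # Pad with a zero row so the output keeps T rows.
    diffs = np.vstack([diffs, np.zeros((1, frames.shape[1]))])
    return frames + alpha * diffs

# Example: 4 frames with 8-dimensional features
feats = np.random.randn(4, 8)
out = temporal_difference_block(feats)
```

Since the last difference row is zero-padded, the final frame's feature passes through unchanged; all other frames are shifted toward their successors in proportion to the frame-to-frame change.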

Original language: English
Pages (from-to): 7772-7785
Number of pages: 14
Journal: IEEE Transactions on Multimedia
Volume: 25
DOIs
Publication status: Published - 2023
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 1999-2012 IEEE.

Keywords

  • Image-text pretrained
  • temporal transformer
  • video-text retrieval
