Fine-grained Video Attractiveness Prediction Using Multimodal Deep Learning on a Large Real-world Dataset

Xinpeng Chen, Jingyuan Chen, Lin Ma, Jian Yao, Wei Liu, Jiebo Luo, Tong Zhang

Research output: Chapter in Book/Conference Proceeding/Report › Conference Paper published in a book › peer-review

19 Citations (Scopus)

Abstract

Nowadays, billions of videos are online ready to be viewed and shared. Among this enormous volume of videos, some popular ones are widely viewed by online users while the majority attract little attention. Furthermore, within each video, different segments may attract significantly different numbers of views. This phenomenon leads to a challenging yet important problem, namely fine-grained video attractiveness prediction, which relies only on video contents to forecast video attractiveness at fine-grained levels, specifically video segments several seconds in length in this paper. However, one major obstacle for such a challenging problem is that no suitable benchmark dataset currently exists. To this end, we construct the first fine-grained video attractiveness dataset (FVAD), collected from one of the most popular video websites in the world. In total, the constructed FVAD consists of 1,019 drama episodes totaling 780.6 hours, covering different categories and a wide variety of video contents. Apart from the large amount of video, hundreds of millions of user behaviors during video watching are also included, such as view counts, "fast-forward," "fast-rewind," and so on, where "view counts" reflects video attractiveness while the other engagements capture the interactions between viewers and videos. First, we demonstrate that video attractiveness and the different engagements exhibit different relationships. Second, FVAD provides us with an opportunity to study the fine-grained video attractiveness prediction problem. We design different sequential models to perform video attractiveness prediction by relying solely on video contents. The sequential models exploit the multimodal relationships between the visual and audio components of the video contents at different levels.
Experimental results demonstrate the effectiveness of our proposed sequential models with different visual and audio representations, the necessity of incorporating the two modalities, and the complementary behaviors of the sequential prediction models at different levels.
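The paper does not publish code with this abstract. As an illustration only, a minimal early-fusion sequential model along the lines the abstract describes (concatenating per-segment visual and audio features, running an LSTM over the segment sequence, and regressing a per-segment attractiveness score) might be sketched as follows; all layer names, feature dimensions, and the fusion strategy here are assumptions, not the authors' actual architecture:

```python
import torch
import torch.nn as nn

class MultimodalAttractivenessLSTM(nn.Module):
    """Hypothetical sketch: feature-level (early) fusion of visual and
    audio segment features, an LSTM over the segment sequence, and a
    linear head predicting one attractiveness score per segment."""

    def __init__(self, visual_dim=2048, audio_dim=128, hidden_dim=256):
        super().__init__()
        self.lstm = nn.LSTM(visual_dim + audio_dim, hidden_dim,
                            batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, visual, audio):
        # visual: (batch, segments, visual_dim)
        # audio:  (batch, segments, audio_dim)
        fused = torch.cat([visual, audio], dim=-1)  # early fusion
        out, _ = self.lstm(fused)                   # one state per segment
        return self.head(out).squeeze(-1)           # (batch, segments)

model = MultimodalAttractivenessLSTM()
visual = torch.randn(2, 10, 2048)  # 2 videos, 10 segments each
audio = torch.randn(2, 10, 128)
scores = model(visual, audio)
print(scores.shape)  # torch.Size([2, 10])
```

Early fusion is only one of the levels at which the two modalities could interact; the abstract's reference to "different levels" suggests the authors also compare alternatives such as late or intermediate fusion.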

Original language: English
Title of host publication: The Web Conference 2018 - Companion of the World Wide Web Conference, WWW 2018
Publisher: Association for Computing Machinery, Inc
Pages: 671-678
Number of pages: 8
ISBN (Electronic): 9781450356404
DOIs
Publication status: Published - 23 Apr 2018
Externally published: Yes
Event: 27th International World Wide Web Conference, WWW 2018 - Lyon, France
Duration: 23 Apr 2018 - 27 Apr 2018

Publication series

Name: The Web Conference 2018 - Companion of the World Wide Web Conference, WWW 2018

Conference

Conference: 27th International World Wide Web Conference, WWW 2018
Country/Territory: France
City: Lyon
Period: 23/04/18 - 27/04/18

Bibliographical note

Publisher Copyright:
© 2018 IW3C2 (International World Wide Web Conference Committee), published under Creative Commons CC BY 4.0 License.

Keywords

  • fine-grained
  • long short-term memory (LSTM)
  • multimodal fusion
  • video attractiveness
