Improving temporal consistency of implicit neural fields in 3D human reconstruction

  • Ka Ho LEE

Student thesis: Master's thesis

Abstract

Conventional 3D human animation requires expertise to create an animation from scratch. Intermediate steps such as character modeling, texture design, rigging, and skinning can be time-consuming and form a barrier for those without a background in computer graphics. Recent research on machine learning-based realistic 3D human reconstruction has greatly reduced the work required for 3D human modeling. However, it is still far from an ideal one-step solution for producing realistic human animation from video. A painless solution for amateurs would be direct 3D human animation generation from video. However, 3D model reconstruction from 2D images is an ill-posed problem and usually yields unstable results across similar frames. Besides, research works and datasets for direct 3D human animation from video are rarely published [1–3]. Given that more works on image-based 3D human digitization have been published [4–6], it would be a huge advantage if one could turn those well-trained image-based 3D human digitization neural networks into 3D human animation reconstruction counterparts. In view of this, we propose a method inspired by Deep Video Prior [7] from video restoration tasks. By exploiting the knowledge in a well-trained single-image 3D human reconstruction neural network, we turn it into a model that generates a 3D human animation from a video. Since our method uses the original network's predictions as a regularization prior, no extra dataset is required, nor any handcrafted networks or loss functions. The method has been tested on networks with single and multiple intermediate training stages, and we attained satisfactory results on the tested networks.
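
The abstract describes the core mechanism: fine-tune a pretrained single-image 3D human reconstruction network on the frames of one video, using the network's own original per-frame predictions as a regularization prior, in the spirit of Deep Video Prior [7]. The sketch below illustrates this idea in PyTorch; it is a minimal illustration under assumptions, not the thesis implementation, and `pretrained_net`, `frames`, and the MSE objective are hypothetical placeholders.

```python
# Minimal sketch of DVP-style fine-tuning for temporal consistency.
# Assumptions: `pretrained_net` maps a frame tensor to a 3D prediction
# tensor, and `frames` is a list of frame tensors from one video.

import copy
import torch
import torch.nn.functional as F

def finetune_on_video(pretrained_net, frames, steps=50, lr=1e-4):
    """Fine-tune a single-image 3D reconstruction network on one video,
    using its own original per-frame predictions as the regularization
    prior (no extra dataset, no handcrafted temporal loss)."""
    # Freeze a copy of the original network to generate the prior targets.
    prior_net = copy.deepcopy(pretrained_net).eval()
    with torch.no_grad():
        priors = [prior_net(f) for f in frames]  # original per-frame outputs

    net = pretrained_net.train()
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    # Early stopping (small `steps`) is what preserves the deep-prior
    # effect: the network converges to temporally consistent outputs
    # before it overfits the flicker in the per-frame priors.
    for _ in range(steps):
        for frame, prior in zip(frames, priors):
            pred = net(frame)
            loss = F.mse_loss(pred, prior)  # pull toward the original prediction
            opt.zero_grad()
            loss.backward()
            opt.step()
    return net
```

Temporal consistency here emerges from the network prior itself rather than from an explicit temporal regularizer, which is why no extra dataset or loss function is needed.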
Date of Award: 2022
Original language: English
Awarding Institution:
  • The Hong Kong University of Science and Technology
Supervisors: Qifeng CHEN & Sai Kit YEUNG
