
Exploring contextualized conceptualization: an evaluation of prompt-driven responses in diverse language models

  • Tsz Ho CHAN

Student thesis: Master's thesis

Abstract

Conceptualization, the process of forming abstractions and instantiating inferences from them, is an essential component of reasoning for both human and artificial intelligence. It has long been regarded as a key component of Natural Language Processing and Understanding for commonsense knowledge.

With the fast-growing development of Pre-trained Language Models, more tasks about conceptualization have been launched and tested, most of which concern conceptualization in context. However, current experiments focus on the traditional fine-tuning setting, fitting models to the provided datasets, while ignoring models' inherent conceptualization ability, which should be the true representative of their cognitive capacity. In this work, we propose zero-shot experiments to explore the influence of various prompts across models and the adaptability of prompts across datasets, and we try to find a challenging dataset to better examine models. The results show that a good choice of prompt can produce significant improvement.
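The zero-shot setup described above can be sketched as follows. This is an illustrative example only, not the thesis's actual code: the prompt templates, labeled pairs, and the `stub_model` function are all hypothetical placeholders standing in for real prompts, datasets, and language-model calls.

```python
# Hypothetical sketch: comparing zero-shot prompt templates for a
# contextualized conceptualization task. Templates, data, and the stub
# model below are illustrative placeholders, not the thesis's materials.

PROMPT_TEMPLATES = [
    "Is '{concept}' a reasonable abstraction of '{instance}'? Answer yes or no.",
    "'{instance}' can be conceptualized as '{concept}'. True or false?",
]

# Tiny labeled set: (instance, concept, gold label).
DATA = [
    ("drink coffee", "relaxing activity", True),
    ("drink coffee", "dangerous sport", False),
]

def stub_model(prompt: str) -> bool:
    """Placeholder for a real LM call; answers via a naive keyword rule."""
    return "dangerous" not in prompt

def evaluate(template: str) -> float:
    """Accuracy of one zero-shot template over the labeled pairs."""
    correct = 0
    for instance, concept, gold in DATA:
        prompt = template.format(instance=instance, concept=concept)
        correct += stub_model(prompt) == gold
    return correct / len(DATA)

# Score each prompt template; in a real study these accuracies would
# reveal how sensitive each model is to prompt wording.
scores = {t: evaluate(t) for t in PROMPT_TEMPLATES}
```

Swapping `stub_model` for calls to different language models, and `DATA` for full benchmark datasets, yields the prompt-versus-model and prompt-versus-dataset comparisons the abstract describes.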

Date of Award: 2023
Original language: English
Awarding Institution
  • The Hong Kong University of Science and Technology
Supervisor: Yangqiu SONG
