
Designing human-AI alignment to improve collaborative decision-making

  • Shuai MA

Student thesis: Doctoral thesis

Abstract

Artificial Intelligence (AI) systems are increasingly being integrated into real-world decision-making domains such as criminal justice and medical diagnosis. In these human-AI decision-making processes, AI serves as an assistive tool by offering recommendations, while human decision-makers retain the authority to accept or reject these suggestions. The key challenge lies in achieving truly complementary team performance, where the collaboration between humans and AI surpasses the capabilities of either working independently. To address this challenge, current research focuses on enhancing human understanding of AI predictions by providing features such as AI confidence levels and explanations. However, empirical studies of these approaches have yielded mixed results. We contend that these efforts often fail to fully consider the limitations of human rationality, the impact of cognitive biases, and the need for alignment between human and AI decision-making factors. To address these gaps, we adopt a human-centered design approach to foster human-AI alignment, focusing on three crucial decision-making factors: capability, confidence, and decision rationale.

First, we align human and AI capabilities by developing a human-in-the-loop methodology to model user correctness likelihood, mitigating the impact of inaccurate self-estimation. Drawing on cognitive science theories, we introduce adaptive interventions to foster appropriate human reliance on AI recommendations. Second, we align human-AI confidence by proposing an analytical framework that accounts for the influence of poorly calibrated human self-confidence on reliance. We introduce three mechanisms for calibrating human confidence and assess their impact on collaborative decision-making. Third, we align decision rationales between humans and AI through a novel Human-AI Deliberation framework. This framework facilitates reflective dialogue on divergent opinions, supported by our AI assistant, Deliberative AI, which integrates Large Language Models (LLMs) and domain-specific models to enhance interactions and provide reliable information.

Building on our findings, this thesis underscores how overlooking human bounded rationality and misaligning AI assistance with human decision-making undermine collaborative outcomes, and it advocates a human-centered interaction design approach to strengthen alignment between humans and AI. In the discussion, we position the proposed alignment within the broader context of human-AI decision-making design, distilling key insights for achieving breakthroughs in this collaborative domain. We conclude by reflecting on the design and implementation of human-AI alignment and suggesting avenues for future research in human-AI decision-making.
Date of Award: 2024
Original language: English
Awarding Institution
  • The Hong Kong University of Science and Technology
Supervisor: Xiaojuan MA (Supervisor)
