Explanations by arbitrated argumentative dispute

Kristijonas Čyras*, David Birch, Yike Guo, Francesca Toni, Rajvinder Dulay, Sally Turvey, Daniel Greenberg, Tharindi Hapuarachchi

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

41 Citations (Scopus)

Abstract

Explaining outputs determined algorithmically by machines is one of the most pressing and widely studied problems in Artificial Intelligence (AI) today, but the equally pressing problem of using AI to explain outputs determined by humans is less studied. In this paper we advance a novel methodology integrating case-based reasoning and computational argumentation from AI to explain outcomes, determined by humans or by machines indifferently, for cases characterised by discrete (static) features and/or (dynamic) stages. At the heart of our methodology lies the concept of arbitrated argumentative disputes between two fictitious disputants arguing, respectively, for or against a case's output in need of explanation, with this case acting as an arbiter. Specifically, in explaining the outcome of a case in question, the disputants put forward as arguments relevant cases favouring their respective positions, with arguments/cases conflicting due to their features, stages and outcomes, and with the applicability of arguments/cases arbitrated by the features and stages of the case in question. In addition, we use arbitrated dispute trees to identify the excess features that help the winning disputant to win the dispute and thus complement the explanation. We evaluate our novel methodology theoretically, proving desirable properties thereof, and empirically, in the context of primary legislation in the United Kingdom (UK), concerning the passage of Bills that may or may not become laws. High-level factors underpinning a Bill's passage are its content-agnostic features, such as type, number of sponsors and ballot order, as well as the UK Parliament's rules of conduct. Given the high volume of proposed legislation (hundreds of Bills a year), it is hard even for legal experts to explain at scale why certain Bills do or do not pass.
We show how our methodology can address this problem by automatically providing high-level explanations of why Bills pass or not, based on the given Bills and their content-agnostic features.
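The core idea of the abstract can be illustrated with a minimal sketch. This is not the paper's formalism: the case base, feature names and outcome labels below are hypothetical, and the dispute is reduced to its simplest form, where a cited past case is applicable only if all of its features hold for the case in question (the arbiter), and the most specific applicable case prevails.

```python
# Toy sketch of an arbitrated dispute (hypothetical data, simplified rules).
# Each past case is (name, feature set, outcome); the focus case arbitrates:
# a cited case counts as an argument only if its features are a subset of
# the focus case's features.

FOCUS = {"ballot", "gov_backed", "many_sponsors"}  # case in question

CASES = [
    ("c1", {"ballot"}, "fail"),
    ("c2", {"ballot", "gov_backed"}, "pass"),
    ("c3", {"many_sponsors"}, "fail"),
    ("c4", {"ballot", "gov_backed", "many_sponsors"}, "pass"),
]

def dispute(focus, cases, default="fail"):
    """Return the prevailing outcome plus the cases each fictitious
    disputant (pro 'pass', con 'fail') could cite as arguments."""
    applicable = [c for c in cases if c[1] <= focus]   # arbitration step
    pro = [c for c in applicable if c[2] == "pass"]
    con = [c for c in applicable if c[2] == "fail"]
    if not applicable:
        return default, pro, con
    # Most specific applicable case (largest feature set) wins the dispute.
    winner = max(applicable, key=lambda c: len(c[1]))
    return winner[2], pro, con

outcome, pro, con = dispute(FOCUS, CASES)
print(outcome)            # "pass": c4 is the most specific applicable case
print([c[0] for c in con])  # the losing disputant's cited cases
```

In this toy run, all four cases are applicable, so each disputant cites its cases and the outcome of the most specific one (c4) prevails; the paper's arbitrated dispute trees go further, also extracting the excess features that let the winning side prevail.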

Original language: English
Pages (from-to): 141-156
Number of pages: 16
Journal: Expert Systems with Applications
Volume: 127
DOIs
Publication status: Published - 1 Aug 2019
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2019 Elsevier Ltd

Keywords

  • Argumentation
  • Explanation
  • Legislative data
