Transparency versus explanation: The role of ambiguity in legal AI

Authors

  • Elena Esposito, Bielefeld University

Keywords:

explanation, interpretation, ambiguity, Rule of Law, artificial intelligence, XAI

Abstract

When dealing with opaque machine learning techniques, the crucial question has become the interpretability of the algorithms' operations and of their results. The paper argues that this shift towards interpretation requires a move from artificial intelligence to an innovative form of artificial communication. In many cases the goal of explanation is not to reveal the procedures of the machines but to communicate with them and obtain relevant and controlled information. Just as human explanations do not require transparency of neural connections or thought processes, algorithmic explanations do not have to disclose the operations of the machine; they have to produce reformulations that make sense to their interlocutors. This move has important consequences for legal communication, where ambiguity plays a fundamental role. The problem of interpretation in legal arguments, the paper argues, is not that algorithms do not explain enough but that they must explain too much and too precisely, constraining the freedom of interpretation and the contestability of legal decisions. The consequence could be a limitation of the autonomy of legal communication that underpins the modern rule of law.

Reply by Federico Cabitza, University of Milan-Bicocca.

Published

10 November 2021

How to Cite

Esposito, Elena. 2021. "Transparency versus Explanation: The Role of Ambiguity in Legal AI." Journal of Cross-Disciplinary Research in Computational Law 1 (2). https://journalcrcl.org/crcl/article/view/10.