Transparency versus explanation: The role of ambiguity in legal AI
Keywords:
explanation, interpretation, ambiguity, Rule of Law, artificial intelligence, XAI

Abstract
In dealing with opaque machine learning techniques, the crucial question has become the interpretability of the work of algorithms and of their results. The paper argues that the shift towards interpretation requires a move from artificial intelligence to an innovative form of artificial communication. In many cases the goal of explanation is not to reveal the procedures of the machines but to communicate with them and obtain relevant and controlled information. Just as human explanations do not require transparency of neural connections or thought processes, so algorithmic explanations do not have to disclose the operations of the machine but must produce reformulations that make sense to their interlocutors. This move has important consequences for legal communication, in which ambiguity plays a fundamental role. The problem of interpretation in legal arguments, the paper argues, is not that algorithms do not explain enough but that they must explain too much and too precisely, constraining the freedom of interpretation and the contestability of legal decisions. The consequence might be a limitation of the autonomy of legal communication that underpins the modern rule of law.
Reply by Federico Cabitza, University of Milan-Bicocca.
Main text and response text copyright © 2021 Elena Esposito. Reply text copyright © the replier.
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.