Vol. 1 No. 2 (2022): Data-driven Computational Law
Papers from the COHUBICOL Philosophers' Seminar 'Interpretability issues in machine learning' (November 2020).
Classifiers implement a level of abstraction: they classify entities by taking into account certain features and ignoring other features. Explanations and interpretations of algorithmic decisions require us to identify and to critically assess these abstractions. But what does it mean to critically assess an abstraction? While computer scientists see abstraction as something desirable, many legal scholars would see it as a cause for concern. Each of these views entails a different kind of critique of abstraction. This paper argues that a relative critique of abstraction is more appropriate in the context of the interpretability problem. It proposes a model, inspired by Floridi’s Method of Abstraction, of a relative critique of abstractions that can be used to reason about the explanation, justification and contestation of classifiers.
Reply by Sandra Wachter, University of Oxford.
At present, the commercial appeal of automated legal systems rests on three pillars: speed, scale, and preference satisfaction. However, for many parts of the U.S. legal system, there is a widely shared sense that translating them into computation would be inappropriate. This concern about premature or unwise automation has many facets. The flexibility of natural language, as opposed to computer languages, is critical. A “legal process” account of the rule of law hinges on the availability of human review, appeals, and dialogic interaction.
One possible rejoinder for advocates of technology to these accounts of the limits of legal automation is to characterize critics’ treatment of extant legal processes as ossification or naturalization. Ossification refers to a pathological hardening into permanence of practices that are merely contingent. Naturalization denotes the treatment of human-made processes as something like laws of nature, and thus the mistaken assumption of their lasting endurance or value.
The simultaneous existence of malleability in legal systems and of constitutive practices within them leads to a two-level consideration of a) which aspects of a liberal legal order are crucial, and b) for those that are crucial, what is lost when the step is either partially or fully automated. Within a sphere of human activity like a liberal legal order, some patterns of action are merely instrumental to achieving ends, while others are essential, or constitutive: when such a practice ceases, the activity should no longer be considered part of a liberal legal order at all. Administrative processes that are simply incidental and instrumental to the legitimate resolution of a case are primed for automation, and it is to these that legal technology should (and often does) turn its attention first. Other practices by persons, for persons, are essential and intrinsically important, and properly resist being converted into machine-readable code. Distinguishing between incidental and constitutive, or instrumentally and intrinsically important, aspects of law should be a recognized part of bounding and guiding legal automation.
We approach the issue of interpretability in artificial intelligence and law through the lens of evolutionary theory. Evolution is understood as a form of blind or mindless ‘direct fitting’, an iterative process through which a system and its environment are mutually constituted and aligned. The core case is natural selection as described in biology, but it is not the only one. Legal reasoning can be understood as a step in the ‘direct fitting’ of law, through a cycle of variation, selection and retention, to its social context. Machine learning, insofar as it relies on error correction through backpropagation, is a version of the same process. It may therefore have value for understanding the long-run dynamics of legal and social change. This is distinct, however, from any use it may have in predicting case outcomes. Legal interpretation in the context of the individual or instant case depends upon the generative power of natural language to extrapolate from existing precedents to novel fact situations. This type of prospective or forward-looking reasoning is unlikely to be well captured by machine learning approaches.
Reply by Masha Medvedeva, University of Groningen.
Adversarial attacks, commonly described as deliberately induced perturbations, can lead to incorrect outputs such as misclassifications or false predictions in systems based on forms of artificial intelligence. While these changes are often difficult for a human observer to detect, they can cause false results and affect physical as well as intangible objects. In that way, they represent a key challenge in diverse areas, among them legal fields such as the judicial system, law enforcement and legal tech. While computer science is pursuing several approaches to mitigate the risks caused by adversarial attacks, the issue has so far received little attention in legal scholarship. This paper aims to fill this gap: it assesses the risks of, and technical defenses against, adversarial attacks on AI systems, and provides a first assessment of possible legal countermeasures.
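As a purely illustrative aside not drawn from the paper itself: one standard way such perturbations are generated is the fast gradient sign method. The minimal sketch below assumes a differentiable PyTorch classifier (model), a batch of inputs (x) and integer labels (y); the function name and the epsilon value are hypothetical choices made for illustration only.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon=0.03):
        # Fast gradient sign method: take the gradient of the loss with
        # respect to the input and step a small distance (epsilon) in the
        # direction that increases the loss, nudging the model towards a
        # misclassification while keeping the change visually small.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.detach()

Because the perturbation is bounded by epsilon, it is typically imperceptible to a human observer even when it flips the classifier’s output.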
If a system is interpretable today, why would it not be as interpretable in five or ten years’ time? Years of societal transformation can negatively impact the interpretability of some machine learning (ML) systems for two types of reasons. Both types of reason are rooted in a truism: interpretability requires both an interpretable object and a subject capable of interpretation. This object-versus-subject perspective ties in with distinct rationales for interpretable systems: generalisability and contestability. On the generalisability front, when it comes to ascertaining whether the accuracy of some ML model holds beyond the training data, a variety of transparency and explainability strategies have been put forward. These strategies can make us blind to the fact that what an ML system has learned may produce helpful insights when deployed in real-life contexts this year yet become useless when faced with next year’s socially transformed cohort. On the contestability front, ethically and legally significant practices presuppose the continuous, uncertain (re)articulation of conflicting values. Without our continued drive to call for better ways of doing things, these discursive practices would wither away. Retaining such a collective ability calls for a change in the way we articulate interpretability requirements for systems deployed in ethically and legally significant contexts: we need to build systems whose outputs we are capable of contesting today, as well as in five years’ time. This calls for what I call ‘ensemble contestability’ features.
Reply by Zachary C. Lipton, Carnegie Mellon University.
When dealing with opaque machine learning techniques, the crucial question becomes the interpretability of the work of algorithms and of their results. The paper argues that the shift towards interpretation requires a move from artificial intelligence to an innovative form of artificial communication. In many cases the goal of explanation is not to reveal the procedures of the machines but to communicate with them and obtain relevant and controlled information. Just as human explanations do not require transparency of neural connections or thought processes, algorithmic explanations do not have to disclose the operations of the machine but have to produce reformulations that make sense to their interlocutors. This move has important consequences for legal communication, where ambiguity plays a fundamental role. The problem of interpretation in legal arguments, the paper argues, is not that algorithms do not explain enough but that they must explain too much and too precisely, constraining freedom of interpretation and the contestability of legal decisions. The consequence might be a limitation of the autonomy of legal communication that underpins the modern rule of law.
Reply by Federico Cabitza, University of Milan-Bicocca.
CRCL is Platinum Open Access under the Creative Commons BY-NC license.
ISSN 2736-4321.