The Structure and Legal Interpretation of Computer Programs
James Grimmelmann
This is an essay about the relationship between legal interpretation and software interpretation, and in particular about what we gain by thinking about computers and programmers as interpreters in the same way that lawyers and judges are interpreters. I wish to propose that there is something to be gained by treating software as another type of law-like text, one that has its own interpretive rules, and that can be analysed using the conceptual tools we typically apply to legal interpretation. In particular, we can usefully distinguish three types of meaning that a program can have. The first is naive functional meaning: the effects that a program has when executed on a specific computer on a specific occasion. The second is literal functional meaning: the effects that a program would have if executed on a correctly functioning computer. The third is ordinary functional meaning: the effects that a program would have if it were executed on a correctly functioning computer and were free of bugs. The punchline is that literal and ordinary functional meaning are inescapably social. The notions of what makes a computer ‘correctly functioning’ and what makes a program ‘bug free’ depend on the conventions of a particular technical community. We cannot reduce the meaning and effects of software to purely technical questions, because although meaning in programming languages is conventional in a different way than meaning in natural languages, it is conventional all the same.
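The distinction between literal and ordinary functional meaning can be illustrated with a small sketch (my example, not the essay's): a function whose actual behaviour on a correctly functioning machine diverges from what the programming community would recognise as its intended, bug-free behaviour.

```python
def mean(values):
    """Intended ('ordinary') meaning: the arithmetic mean of `values`."""
    total = 0
    for v in values:
        total += v
    return total / (len(values) - 1)  # bug: off-by-one in the divisor

# Literal functional meaning: on a correctly functioning computer,
# mean([2, 4, 6]) evaluates to 12 / 2, i.e. 6.0.
# Ordinary functional meaning: by community convention, the bug-free
# program would return 12 / 3, i.e. 4.0.
print(mean([2, 4, 6]))  # prints 6.0, not the intended 4.0
```

Calling the result a 'bug' already appeals to a convention about what the program was supposed to do, which is the social dimension the essay emphasises.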
Reply by Marieke Huisman, University of Twente.
Evolutionary Interpretation: Law and Machine Learning
Simon Deakin, Christopher Markou
We approach the issue of interpretability in artificial intelligence and law through the lens of evolutionary theory. Evolution is understood as a form of blind or mindless ‘direct fitting’, an iterative process through which a system and its environment are mutually constituted and aligned. The core case is natural selection as described in biology, but it is not the only one. Legal reasoning can be understood as a step in the ‘direct fitting’ of law, through a cycle of variation, selection and retention, to its social context. Machine learning, insofar as it relies on error correction through backpropagation, is a version of the same process. It may therefore have value for understanding the long-run dynamics of legal and social change. This is distinct, however, from any use it may have in predicting case outcomes. Legal interpretation in the context of the individual or instant case depends upon the generative power of natural language to extrapolate from existing precedents to novel fact situations. This type of prospective or forward-looking reasoning is unlikely to be well captured by machine learning approaches.
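The sense in which learning by error correction is 'direct fitting' can be shown in a minimal sketch (my illustration, not the authors'): a single parameter is repeatedly perturbed in the direction that reduces its mismatch with the environment, here a fixed target relationship y = 3x, until system and environment are aligned.

```python
# The 'environment': observations generated by the relationship y = 3x.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

w = 0.0    # initial state of the 'system'
lr = 0.02  # step size of each corrective variation

for _ in range(500):            # the variation-selection-retention cycle
    for x, y in data:
        error = w * x - y       # mismatch between system and environment
        w -= lr * error * x     # retain the variant that reduces the error

print(round(w, 3))  # w has been fitted toward 3.0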
Reply by Masha Medvedeva, University of Groningen.
Diachronic interpretability and machine learning systems
Sylvie Delacroix
If a system is interpretable today, why would it not be as interpretable in five or ten years’ time? Years of societal transformations can negatively impact the interpretability of some machine learning (ML) systems for two types of reasons. These two types of reasons are rooted in a truism: interpretability requires both an interpretable object and a subject capable of interpretation. This object versus subject perspective ties in with distinct rationales for interpretable systems: generalisability and contestability. On the generalisability front, when it comes to ascertaining whether the accuracy of some ML model holds beyond the training data, a variety of transparency and explainability strategies have been put forward. These strategies can make us blind to the fact that what an ML system has learned may produce helpful insights when deployed in real-life contexts this year yet become useless faced with next year’s socially transformed cohort. On the contestability front, ethically and legally significant practices presuppose the continuous, uncertain (re)articulation of conflicting values. Without our continued drive to call for better ways of doing things, these discursive practices would wither away. To retain such a collective ability calls for a change in the way we articulate interpretability requirements for systems deployed in ethically and legally significant contexts: we need to build systems whose outputs we are capable of contesting today, as well as in five years’ time. This calls for what I call ‘ensemble contestability’ features.
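The generalisability worry — a model that is accurate on this year's cohort yet useless on next year's — is, in technical terms, distribution shift. A small synthetic sketch (my illustration, not the paper's, with all names hypothetical) makes the point: the decision rule does not change, only the population does, and accuracy quietly collapses.

```python
import random

random.seed(0)

def cohort(shift):
    """Synthetic yearly cohort: the true label is 1 when x exceeds 5 + shift."""
    return [(x, int(x > 5 + shift))
            for x in (random.uniform(0, 10) for _ in range(1000))]

# The 'learned' decision rule, fixed at training time on this year's data.
threshold = 5.0

def accuracy(data):
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

print(accuracy(cohort(0.0)))  # 1.0: the rule matches this year's cohort
print(accuracy(cohort(2.0)))  # about 0.8: after the shift, the same rule misfires
```

Nothing about the model's transparency or explainability flags the degradation; only re-examining it against the transformed cohort does, which is the diachronic point.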
Reply by Zachary C. Lipton, Carnegie Mellon University.
Hermeneutical injustice and the computational turn in law
Emilie van den Hoven
In this paper, I argue that the computational turn in law poses a potential challenge to the legal protections that the rule of law has traditionally afforded us, of a distinctively hermeneutical kind. Computational law brings increased epistemic opacity to the legal system, thereby constraining our ability to understand the law (and ourselves in light of it). Drawing on epistemology and the work of Miranda Fricker, I argue that the notion of ‘hermeneutical injustice’ captures this condition. Hermeneutical injustice refers to the condition where individuals are dispossessed of the conceptual tools needed to make sense of their own experiences, consequently limiting their ability to articulate them. I argue that in the legal context this poses significant challenges to the interpretation, ‘self-application’ and contestation of the law. Given the crucial importance of those concepts to the rule of law and the notion of human dignity that it rests upon, this paper seeks to explicate why the notion of hermeneutical injustice demands our attention in the face of the rapidly expanding scope of computation in our legal systems.
Reply by Ben Green, University of Michigan.
Computational legalism and the affordance of delay in law
Laurence Diver
Delay is a central element of law-as-we-know-it: the ability to interpret legal norms and contest their requirements is contingent on the temporal spaces that text affords citizens. As more computational systems are introduced into the legal system, these spaces are threatened with collapse, as the immediacy of ‘computational legalism’ dispenses with the natural ‘slowness’ of text. In order to preserve the nature of legal protection, we need to be clear about where in the legal process such delays play a normative role and to ensure that they are reflected in the affordances of the computational systems that are so introduced. This entails a focus on the design and production of such systems, and the resistance of the ideology of ‘efficiency’ that pervades contemporary development practices.
Reply by Ewa Luger, Chancellor's Fellow, University of Edinburgh.
Analogies and Disanalogies Between Machine-Driven and Human-Driven Legal Judgement
Reuben Binns
Are there desirable properties of text-driven law which have parallels in data-driven law? As a preliminary exercise, this article explores a range of analogies and disanalogies between text-driven normativity and its data-driven counterparts. Ultimately, the conclusion is that the analogies are weaker than the disanalogies. But the hope is that, in the process of drawing them, we learn something more about the comparison between text- and data-driven normativities and the (im?)possibility of data-driven law.
Reply by Emily M. Bender, Professor of Computational Linguistics, University of Washington.
The adaptive nature of text-driven law
Mireille Hildebrandt
This article introduces the concept of ‘technology-driven normativities’, marking the difference between norms, at the generic level, as legitimate expectations that coordinate human interaction, and subsets of norms at specific levels, such as moral or legal norms. The article is focused on the normativity that is generated by text, fleshing out a set of relevant affordances that are crucial for text-driven law and the rule of law. This concerns the ambiguity of natural language, the resulting open texture of legal concepts, the multi-interpretability of legal norms and, finally, the contestability of their application. This leads to an assessment of legal certainty that thrives on the need to interpret, the ability to contest and the concomitant need to decide the applicability and the meaning of relevant legal norms. Legal certainty thus sustains the adaptive nature of legal norms in the face of changing circumstances, which may not be possible for code- or data-driven law. This understanding of legal certainty demonstrates the meaning of legal protection under text-driven law. A proper understanding of the legal protection that is enabled by current positive law (which is text-driven) should inform the assessment of the protection that could be offered by data- or code-driven law, as they will generate other ‘technology-driven normativities’.
Reply by Michael Rovatsos, Professor of Artificial Intelligence, University of Edinburgh.
Legal Technology/Computational Law: Preconditions, Opportunities and Risks
Wolfgang Hoffmann-Riem
Although computers and digital technologies have existed for many decades, their capabilities today have changed dramatically. Current buzzwords like Big Data, artificial intelligence, robotics, and blockchain are shorthand for further leaps in development. The digitalisation of communication, which is a disruptive innovation, and the associated digital transformation of the economy, culture, politics, and public and private communication – indeed, probably of virtually every area of life – will cause dramatic social change. It is essential to prepare for the fact that digitalisation will also have a growing impact on the legal system.
Reply by Virginia Dignum, Professor at the Department of Computing Science, Umeå University.