Philosophy
-
Education in an Era of Convergence
Lyria Bennett Moses

This paper considers alternatives to strict disciplinarity, particularly in legal education, in light of the increasing importance of problem spaces that cross disciplines, including computational law and cyber security. It is a short provocation rather than a broad-ranging inquiry and focuses on law/computer science collaborations. It asks three questions that are increasingly controversial: (1) What might be done within disciplinary programs, such as law, to prepare students to work wisely alongside engineered systems? (2) What might be done to develop students’ skills at cross-disciplinary problem-solving throughout their education? (3) Should we offer undergraduate degrees oriented not around a discipline but around a problem-space; for example, should computational law be a new discipline?
-
The Future of Computational Law in the Context of the Rule of Law
Mireille Hildebrandt

In this position paper, I argue that lawyers must come to terms with the advent of a rich variety of legal technologies, and I define a series of challenges that the position papers in this special issue aim to identify and address. Before doing so, I address the question of what it means to discuss the future of computational law and how that relates to the Rule of Law. This, in turn, raises the question of whether there could be something like ‘a computational Rule of Law’, or whether that would be a bridge too far because neither the concept nor the practice of the Rule of Law lends itself to computation. In that case, how would the integration of computational technologies into legal practice relate to a non-computational Rule of Law? The answer to that question will structure the challenges I see for the uptake of legal technologies, resulting in a research agenda that should enable, guide and restrict the design, deployment and use of legal technologies with an eye to the future of law.
-
The Interpretability Problem and the Critique of Abstraction(s)
Patrick Allo

Classifiers implement a level of abstraction: they classify entities by taking certain features into account and ignoring others. Explanations and interpretations of algorithmic decisions require us to identify and critically assess these abstractions. But what does it mean to critically assess an abstraction? While computer scientists see abstraction as something desirable, many legal scholars would see it as a cause for concern. Each of these views entails a different kind of critique of abstraction. This paper argues that a relative critique of abstraction is more appropriate in the context of the interpretability problem. It proposes a model, inspired by Floridi’s Method of Abstraction, of a relative critique of abstractions that can be used to reason about the explanation, justification and contestation of classifiers.
Reply by Sandra Wachter, University of Oxford.
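Allo’s opening claim, that a classifier fixes a level of abstraction by choosing which features it observes, can be made concrete with a toy sketch. The Python fragment below is purely illustrative and not drawn from the paper: all names and the decision rule are hypothetical. The ‘abstraction’ is the FEATURES tuple; critiquing the abstraction amounts to asking whether that is the right set of features to observe.

```python
# Minimal illustration (hypothetical names and rule): a classifier as a
# level of abstraction. The abstraction is the choice of observed
# features; everything outside FEATURES is invisible to the decision rule.

FEATURES = ("income", "debt")          # features the classifier observes
IGNORED = ("postcode", "first_name")   # features it abstracts away (listed only for illustration)

def classify(applicant: dict) -> str:
    """Toy decision rule defined only over the observed features."""
    observed = {f: applicant[f] for f in FEATURES}
    return "approve" if observed["income"] > 2 * observed["debt"] else "reject"

applicant = {"income": 50_000, "debt": 30_000,
             "postcode": "2052", "first_name": "Ada"}
print(classify(applicant))  # 'reject': postcode and first_name play no role
```

A relative critique, in Allo’s sense, would compare this level of abstraction against alternatives (say, one that also observes postcode) rather than condemning abstraction as such.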
-
Rules, judgment and mechanisation
Mazviita Chirimuuta

This paper is a philosophical exploration of the notion of judgment, a mode of reasoning that has a central role in legal practice as it currently stands. The first part considers the distinction proposed by Kant, and recently explored historically by Lorraine Daston, between the capacity to follow and execute rules and the capacity to determine whether a general rule applies to a particular situation (that is, judgment). This characterisation of judgment is compared with one proposed by Brian Cantwell Smith, as part of an argument that current AI technologies do not have judgment. The second part of the paper asks whether digital computers could in principle have judgment and concludes with a negative answer.
Reply by William Lucy, University of Durham.
-
Diachronic interpretability and machine learning systems
Sylvie Delacroix

If a system is interpretable today, why would it not be as interpretable in five or ten years’ time? Years of societal transformations can negatively impact the interpretability of some machine learning (ML) systems for two types of reasons, both rooted in a truism: interpretability requires both an interpretable object and a subject capable of interpretation. This object versus subject perspective ties in with distinct rationales for interpretable systems: generalisability and contestability. On the generalisability front, a variety of transparency and explainability strategies have been put forward to ascertain whether the accuracy of some ML model holds beyond the training data. These strategies can make us blind to the fact that what an ML system has learned may produce helpful insights when deployed in real-life contexts this year yet become useless when faced with next year’s socially transformed cohort. On the contestability front, ethically and legally significant practices presuppose the continuous, uncertain (re)articulation of conflicting values. Without our continued drive to call for better ways of doing things, these discursive practices would wither away. Retaining such a collective ability calls for a change in the way we articulate interpretability requirements for systems deployed in ethically and legally significant contexts: we need to build systems whose outputs we are capable of contesting today, as well as in five years’ time. Hence the need for what I call ‘ensemble contestability’ features.
Reply by Zachary C. Lipton, Carnegie Mellon University.
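Delacroix’s generalisability worry, that a model serving us well this year may quietly fail on a socially transformed cohort, is the familiar problem of distribution shift. Here is a minimal, self-contained sketch with hypothetical data and thresholds (not from the paper): the decision rule stays exactly as readable across both years, yet its accuracy collapses once the ground truth shifts.

```python
# Minimal sketch (hypothetical data): an interpretable rule fitted to this
# year's cohort can silently lose accuracy on next year's shifted cohort,
# even though the rule itself stays exactly as readable as before.
import random

random.seed(0)

THIS_YEAR_CUTOFF = 40_000   # income above which people repay, this year
NEXT_YEAR_CUTOFF = 55_000   # after a social/economic shift, the cutoff moves

def cohort(true_cutoff: float, n: int = 10_000):
    """Simulated applicants as (income, repays?) pairs under a given ground truth."""
    incomes = [random.gauss(48_000, 12_000) for _ in range(n)]
    return [(x, x > true_cutoff) for x in incomes]

def model(income: float) -> bool:
    """Fully interpretable rule, fitted to this year's data."""
    return income > THIS_YEAR_CUTOFF

def accuracy(data) -> float:
    return sum(model(x) == y for x, y in data) / len(data)

print(f"this year: {accuracy(cohort(THIS_YEAR_CUTOFF)):.2f}")  # 1.00
print(f"next year: {accuracy(cohort(NEXT_YEAR_CUTOFF)):.2f}")  # much lower
```

The point of the sketch is that interpretability of the object is not diachronically stable on its own: the subject’s world moves, which is why Delacroix argues contestability mechanisms must be designed to outlive the training distribution.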
-
Transparency versus explanation: The role of ambiguity in legal AI
Elena Esposito

When dealing with opaque machine learning techniques, the crucial question has become the interpretability of the work of algorithms and their results. The paper argues that the shift towards interpretation requires a move from artificial intelligence to an innovative form of artificial communication. In many cases the goal of explanation is not to reveal the procedures of the machines but to communicate with them and obtain relevant and controlled information. Just as human explanations do not require transparency of neural connections or thought processes, algorithmic explanations do not have to disclose the operations of the machine; they have to produce reformulations that make sense to their interlocutors. This move has important consequences for legal communication, where ambiguity plays a fundamental role. The problem of interpretation in legal arguments, the paper argues, is not that algorithms do not explain enough but that they explain too much and too precisely, constraining freedom of interpretation and the contestability of legal decisions. The consequence might be a limitation of the autonomy of legal communication that underpins the modern rule of law.
Reply by Federico Cabitza, University of Milan-Bicocca.
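Esposito’s distinction between disclosing a machine’s operations and producing reformulations an interlocutor can use is close to what the interpretability literature calls a post-hoc surrogate explanation. A hedged sketch follows, with a toy model and rule whose names are all illustrative: the ‘explanation’ is obtained purely by querying the black box and fitting a simpler rule to its answers, so nothing internal is ever disclosed.

```python
# Hedged sketch (illustrative names and values): a surrogate explanation
# that never opens the black box. We query an opaque model and fit a
# one-threshold rule an interlocutor can understand; the surrogate is a
# reformulation of the model's behaviour, not a disclosure of its internals.

def black_box(income: float, debt: float) -> bool:
    """Stand-in for an opaque model; its internals never appear in the explanation."""
    return (0.7 * income - 1.3 * debt) > 10_000

# Probe the black box on a grid of inputs.
probes = [(i, d) for i in range(0, 100_001, 5_000) for d in range(0, 50_001, 5_000)]
answers = [black_box(i, d) for i, d in probes]

# Surrogate rule: "approve iff income - debt > T", choosing the T that
# best reproduces the black box's answers on the probes.
best_t = max(
    range(0, 100_001, 1_000),
    key=lambda t: sum((i - d > t) == a for (i, d), a in zip(probes, answers)),
)
print(f"surrogate rule: approve iff income - debt > {best_t}")
```

On Esposito’s diagnosis, the legal worry begins here: the surrogate rule is a communicable reformulation, but its very precision can crowd out the ambiguity on which legal interpretation depends.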