Philosophy


  • The Interpretability Problem and the Critique of Abstraction(s)

    Patrick Allo

    Classifiers implement a level of abstraction: they classify entities by taking into account certain features and ignoring others. Explanations and interpretations of algorithmic decisions require us to identify and critically assess these abstractions. But what does it mean to critically assess an abstraction? While computer scientists see abstraction as something desirable, many legal scholars see it as a cause for concern. Each of these views entails a different kind of critique of abstraction. This paper argues that a relative critique of abstraction is more appropriate in the context of the interpretability problem. It proposes a model of such a relative critique, inspired by Floridi’s Method of Abstraction, that can be used to reason about the explanation, justification and contestation of classifiers.

    Reply by Sandra Wachter, University of Oxford.

  • Rules, judgment and mechanisation

    Mazviita Chirimuuta

    This paper is a philosophical exploration of the notion of judgment, a mode of reasoning that has a central role in legal practice as it currently stands. The first part considers the distinction proposed by Kant, and recently explored historically by Lorraine Daston, between the capacity to follow and execute rules and the capacity to determine whether a general rule applies to a particular situation (that is, judgment). This characterisation of judgment is compared with one proposed by Brian Cantwell Smith, as part of an argument that current AI technologies do not have judgment. The second part of the paper asks whether digital computers could in principle have judgment and concludes with a negative answer.

    Reply by William Lucy, University of Durham.

  • Diachronic interpretability and machine learning systems

    Sylvie Delacroix

    If a system is interpretable today, why would it not be just as interpretable in five or ten years’ time? Years of societal transformation can negatively affect the interpretability of some machine learning (ML) systems for two types of reasons. Both are rooted in a truism: interpretability requires both an interpretable object and a subject capable of interpretation. This object-versus-subject perspective ties in with two distinct rationales for interpretable systems: generalisability and contestability. On the generalisability front, a variety of transparency and explainability strategies have been put forward for ascertaining whether the accuracy of an ML model holds beyond the training data. These strategies can blind us to the fact that what an ML system has learned may produce helpful insights when deployed in real-life contexts this year yet become useless when faced with next year’s socially transformed cohort. On the contestability front, ethically and legally significant practices presuppose the continuous, uncertain (re)articulation of conflicting values. Without our continued drive to call for better ways of doing things, these discursive practices would wither away. Retaining such a collective ability calls for a change in the way we articulate interpretability requirements for systems deployed in ethically and legally significant contexts: we need to build systems whose outputs we are capable of contesting today, as well as in five years’ time. Hence the need for what I call ‘ensemble contestability’ features.

    Reply by Zachary C. Lipton, Carnegie Mellon University.

  • Transparency versus explanation: The role of ambiguity in legal AI

    Elena Esposito

    In dealing with opaque machine learning techniques, the crucial question has become the interpretability of the work of algorithms and of their results. The paper argues that the shift towards interpretation requires a move from artificial intelligence to an innovative form of artificial communication. In many cases the goal of explanation is not to reveal the procedures of the machines but to communicate with them and obtain relevant and controlled information. Just as human explanations do not require transparency of neural connections or thought processes, algorithmic explanations do not have to disclose the operations of the machine; they have to produce reformulations that make sense to their interlocutors. This move has important consequences for legal communication, where ambiguity plays a fundamental role. The problem of interpretation in legal arguments, the paper argues, is not that algorithms do not explain enough but that they explain too much and too precisely, constraining freedom of interpretation and the contestability of legal decisions. The consequence might be a limitation of the autonomy of legal communication that underpins the modern rule of law.

    Reply by Federico Cabitza, University of Milan-Bicocca.