The Atomium in Brussels

About the Journal

The Journal of Cross-disciplinary Research in Computational Law (CRCL) invites excellence in law, computer science and other relevant disciplines, with a focus on two types of ‘legal technologies’: (1) data-driven (e.g. predictive analytics, ‘intelligent’ search) and (2) code-driven (e.g. smart contracts, algorithmic decision-making (ADM), legal expert systems), as well as their hybrids (e.g. code-driven decision-making based on data-driven research).

Legal practice is where computational law will be resisted, used or even fostered. CRCL wishes to raise questions as to (1) when the introduction of legal technologies should be resisted and on what grounds, (2) how and under what conditions they can be integrated into the practice of law and legal research and (3) how their integration may inform, erode or enhance legal protection and the rule of law.

Please subscribe for updates on upcoming articles and issues.


Please note that we are currently preparing the first issue, which will appear in 2022. We will open the journal for submissions in the course of 2022; in the meantime, do not hesitate to contact the editors if you have a proposal for a special issue or an article.

Launch of online-first articles

The journal kicked off in mid-November 2020 with an invited article by Wolfgang Hoffmann-Riem, former Justice of the German Federal Constitutional Court in Karlsruhe, who was part of the Court when it decided the seminal case that established the fundamental right to the guarantee of the confidentiality and integrity of information technology systems. By inviting him to contribute to the opening issue of CRCL, we emphasise our close attention to legal practice.

The second online-first article (late November 2020) is authored by Mireille Hildebrandt, a founding member of the journal's editorial team. Her article concerns the normative alterity inherent in the different technologies that articulate legal norms, arguing that the normative affordances of text-based technologies enabled the rise of the rule of law. She contends that the transition to data- and code-driven 'legal' technologies will transform legal practice and thereby the mode of existence of modern positive law.

In the third online-first article (early December 2020), Reuben Binns, with a background in both computer science and philosophy, investigates the difference(s) that make(s) a difference in text-driven law as compared to data- and code-driven decision-making systems. This involves issues of procedure (getting to a right answer in the right way), discretion (dealing with the indeterminacy inherent in text-driven systems), anticipation (the role of new case law) and a number of other incompatibilities that are all highly relevant when considering the integration of 'legal technologies' into legal practice or legal scholarship.

The fourth online-first article (mid-December 2020) argues for the crucial importance of delay in law and the rule of law. Whereas efficiency is often associated with speed, Laurence Diver, a founding member of this journal’s editorial team, argues that the slowness of a text-driven legal practice may be a feature rather than a bug when it comes to legal protection. To make this point, Diver coined the concept of a ‘hermeneutic gap’, and in this article he explores to what extent ‘slow computing’ might contribute to sustaining this gap before bridging it in the context of legal practice.

In our fifth online-first article (March 2021), Emilie van den Hoven argues that the epistemic opacity that characterises the systems that comprise computational 'law' threatens us with ever-greater 'hermeneutical injustices': constraints on our ability as citizens to make sense of the law, and to account for its place in our lives, which in turn threaten the dignitarian underpinnings of the rule of law.

Data-driven law

Our sixth online-first article (November 2021) is the first of a series of articles on the issue of interpretability in machine learning, in the context of data-driven ‘law’. In this article Elena Esposito leaves the well-trodden path of trying to figure out how legal technologies informed by machine learning actually reach their conclusions (output). Instead, she argues that we must learn to interact with these technologies, to use and control them rather than submit to them. Whereas she speaks of ‘communicating with’ and her replier, Federico Cabitza, speaks of ‘relating to’, both emphasise the need to reject objectivist assumptions while nevertheless actively engaging with these technologies, in order to regain the agency that could otherwise be lost in translation.

Our seventh online-first article (January 2022) squarely addresses the static temporality that defines machine learning (ML) systems, and its relevance for automated decision support systems that inform legally relevant decisions. ML systems can only be trained on historical data, and the relevance of such data may deteriorate over time, especially in the case of the law, which is a moving target as it adapts to changing circumstances. The author, Sylvie Delacroix, explains how this affects the interpretability and contestability of these systems over time. She proposes to introduce ‘ensemble contestability’ features capable of achieving long-term contestability.

Her replier, Zachary Lipton, agrees that the temporal dynamics of ML must be taken into account when considering the impact of ML on the agency of those subject to its decisions. He then moves to succinctly develop three points of critique, arguing, in particular, that Delacroix’s post hoc ‘ensemble contestability’ solution is not viable and cannot do the work that is needed. In the course of his reply, he traces the history of many relevant debates, thus providing a rich resource beyond the usual focus of explainable AI. Delacroix counters by distinguishing between the ‘interpretable object’ and the ‘subject capable of interpretation’, arguing that Lipton’s objections concern the ‘interpretable object’, whereas her concern lies with the subject capable of interpretation. Here we see the need for, and the salient results of, a genuine, respectful cross-disciplinary conversation.



Online first

  • Rules, judgment and mechanisation

    Mazviita Chirimuuta

    This paper is a philosophical exploration of the notion of judgment, a mode of reasoning that has a central role in legal practice as it currently stands. The first part considers the distinction proposed by Kant, and recently explored historically by Lorraine Daston, between the capacity to follow and execute rules and the capacity to determine whether a general rule applies to a particular situation (that is, judgment). This characterisation of judgment is compared with one proposed by Brian Cantwell Smith, as part of an argument that current AI technologies do not have judgment. The second part of the paper asks whether digital computers could in principle have judgment and concludes with a negative answer.

    Reply by William Lucy, University of Durham.

  • Evolutionary Interpretation: Law and Machine Learning

    Simon Deakin, Christopher Markou

    We approach the issue of interpretability in artificial intelligence and law through the lens of evolutionary theory. Evolution is understood as a form of blind or mindless ‘direct fitting’, an iterative process through which a system and its environment are mutually constituted and aligned. The core case is natural selection as described in biology but it is not the only one. Legal reasoning can be understood as a step in the ‘direct fitting’ of law, through a cycle of variation, selection and retention, to its social context. Machine learning, insofar as it relies on error correction through backpropagation, is a version of the same process. It may therefore have value for understanding the long-run dynamics of legal and social change. This is distinct, however, from any use it may have in predicting case outcomes. Legal interpretation in the context of the individual or instant case depends upon the generative power of natural language to extrapolate from existing precedents to novel fact situations. This type of prospective or forward-looking reasoning is unlikely to be well captured by machine learning approaches.

    Reply by Masha Medvedeva, University of Groningen.

  • Diachronic interpretability and machine learning systems

    Sylvie Delacroix

    If a system is interpretable today, why would it not be as interpretable in five or ten years time? Years of societal transformations can negatively impact the interpretability of some machine learning (ML) systems for two types of reasons. These two types of reasons are rooted in a truism: interpretability requires both an interpretable object and a subject capable of interpretation. This object versus subject perspective ties in with distinct rationales for interpretable systems: generalisability and contestability. On the generalisability front, when it comes to ascertaining whether the accuracy of some ML model holds beyond the training data, a variety of transparency and explainability strategies have been put forward. These strategies can make us blind to the fact that what an ML system has learned may produce helpful insights when deployed in real-life contexts this year yet become useless faced with next year’s socially transformed cohort. On the contestability front, ethically and legally significant practices presuppose the continuous, uncertain (re)articulation of conflicting values. Without our continued drive to call for better ways of doing things, these discursive practices would wither away. To retain such a collective ability calls for a change in the way we articulate interpretability requirements for systems deployed in ethically and legally significant contexts: we need to build systems whose outputs we are capable of contesting today, as well as in five years’ time. This calls for what I call ‘ensemble contestability’ features.

    Reply by Zachary C. Lipton, Carnegie Mellon University.

  • Transparency versus explanation: The role of ambiguity in legal AI

    Elena Esposito

    In dealing with opaque machine learning techniques, the crucial question has become the interpretability of the work of algorithms and their results. The paper argues that the shift towards interpretation requires a move from artificial intelligence to an innovative form of artificial communication. In many cases the goal of explanation is not to reveal the procedures of the machines but to communicate with them and obtain relevant and controlled information. Just as human explanations do not require transparency of neural connections or thought processes, so algorithmic explanations do not have to disclose the operations of the machine but have to produce reformulations that make sense to their interlocutors. This move has important consequences for legal communication, where ambiguity plays a fundamental role. The problem of interpretation in legal arguments, the paper argues, is not that algorithms do not explain enough but that they must explain too much and too precisely, constraining freedom of interpretation and the contestability of legal decisions. The consequence might be a possible limitation of the autonomy of legal communication that underpins the modern rule of law.

    Reply by Federico Cabitza, University of Milan-Bicocca.

  • Hermeneutical injustice and the computational turn in law

    Emilie van den Hoven

    In this paper, I argue that the computational turn in law poses a potential challenge to the legal protections that the rule of law has traditionally afforded us, of a distinctively hermeneutical kind. Computational law brings increased epistemic opacity to the legal system, thereby constraining our ability to understand the law (and ourselves in light of it). Drawing on epistemology and the work of Miranda Fricker, I argue that the notion of ‘hermeneutical injustice’ captures this condition. Hermeneutical injustice refers to the condition where individuals are dispossessed of the conceptual tools needed to make sense of their own experiences, consequently limiting their ability to articulate them. I argue that in the legal context this poses significant challenges to the interpretation, ‘self-application’ and contestation of the law. Given the crucial importance of those concepts to the rule of law and the notion of human dignity that it rests upon, this paper seeks to explicate why the notion of hermeneutical injustice demands our attention in the face of the rapidly expanding scope of computation in our legal systems.

    Reply by Ben Green, University of Michigan.

  • Computational legalism and the affordance of delay in law

    Laurence Diver

    Delay is a central element of law-as-we-know-it: the ability to interpret legal norms and contest their requirements is contingent on the temporal spaces that text affords citizens. As more computational systems are introduced into the legal system, these spaces are threatened with collapse, as the immediacy of ‘computational legalism’ dispenses with the natural ‘slowness’ of text. In order to preserve the nature of legal protection, we need to be clear about where in the legal process such delays play a normative role and to ensure that they are reflected in the affordances of the computational systems that are so introduced. This entails a focus on the design and production of such systems, and the resistance of the ideology of ‘efficiency’ that pervades contemporary development practices.

    Reply by Ewa Luger, Chancellor's Fellow, University of Edinburgh.

  • Analogies and Disanalogies Between Machine-Driven and Human-Driven Legal Judgement

    Reuben Binns

    Are there certain desirable properties of text-driven law which have parallels in data-driven law? As a preliminary exercise, this article explores a range of analogies and disanalogies between text-driven normativity and its data-driven counterparts. Ultimately, the conclusion is that the analogies are weaker than the disanalogies. But the hope is that, in the process of drawing them, we learn something more about the comparison between text- and data-driven normativities and the (im?)possibility of data-driven law.

    Reply by Emily M. Bender, Professor of Computational Linguistics, University of Washington.

  • The adaptive nature of text-driven law

    Mireille Hildebrandt

    This article introduces the concept of ‘technology-driven normativities’, marking the difference between norms, at the generic level, as legitimate expectations that coordinate human interaction, and subsets of norms at specific levels, such as moral or legal norms. The article focuses on the normativity that is generated by text, fleshing out a set of relevant affordances that are crucial for text-driven law and the rule of law. This concerns the ambiguity of natural language, the resulting open texture of legal concepts, the multi-interpretability of legal norms and, finally, the contestability of their application. This leads to an assessment of legal certainty that thrives on the need to interpret, the ability to contest and the concomitant need to decide the applicability and the meaning of relevant legal norms. Legal certainty thus sustains the adaptive nature of legal norms in the face of changing circumstances, which may not be possible for code- or data-driven law. This understanding of legal certainty demonstrates the meaning of legal protection under text-driven law. A proper understanding of the legal protection enabled by current positive law (which is text-driven) should inform the assessment of the protection that could be offered by data- or code-driven law, as they will generate other ‘technology-driven normativities’.

    Reply by Michael Rovatsos, Professor of Artificial Intelligence, University of Edinburgh.

  • Legal Technology/Computational Law: Preconditions, Opportunities and Risks

    Wolfgang Hoffmann-Riem

    Although computers and digital technologies have existed for many decades, their capabilities today have changed dramatically. Current buzzwords like Big Data, artificial intelligence, robotics, and blockchain are shorthand for further leaps in development. The digitalisation of communication, which is a disruptive innovation, and the associated digital transformation of the economy, culture, politics, and public and private communication – indeed, probably of virtually every area of life – will cause dramatic social change. It is essential to prepare for the fact that digitalisation will also have a growing impact on the legal system.

    Reply by Virginia Dignum, Professor at the Department of Computing Science, Umeå University.

View All Issues