Vol. 2 No. 2 (2024): The Future of Computational Law

Special issue on the Future of Computational Law

Introduction: On 20-21 November 2024 the second International Conference on Cross-Disciplinary Research in Computational Law (CRCL23) hosted the Symposium on The Future of Computational Law. A range of invited thought leaders in this domain presented position papers on the subject: Lyria Bennett Moses, Floris Bex, Natalie Byrom, Mireille Hildebrandt, Sayash Kapoor & Peter Henderson & Arvind Narayanan, Sarah Lawsky, Denis Merigoux, and Frank Pasquale & Gianclaudio Malgieri. In this special issue we publish their papers, hoping to further nourish key conversations between lawyers and legal scholars on the one hand and developers and computer scientists on the other. The first paper, by Mireille Hildebrandt, comments on the themes and content of the papers after offering an analysis of ‘the future of computational law’ and of its relation to ‘the rule of law’.

Articles

  • The Future of Computational Law in the Context of the Rule of Law

    Mireille Hildebrandt

    In this position paper, I argue that lawyers must come to terms with the advent of a rich variety of legal technologies, and I define a series of challenges that the position papers in this special issue aim to identify and address. Before doing so, I address the question of what it means to discuss the future of computational law and how that relates to the Rule of Law. This, in turn, raises the question of whether there could be something like ‘a computational Rule of Law’, or whether that would be a bridge too far because neither the concept nor the practice of the Rule of Law lends itself to computation. In that case, how would the integration of computational technologies into legal practice relate to a non-computational Rule of Law? The answer to that question will structure the challenges I see for the uptake of legal technologies, resulting in a research agenda that should enable, guide and restrict the design, deployment and use of legal technologies with an eye to the future of law.

  • Computational Law and Epistemic Trespassing

    Sarah Lawsky

    This article uses the concept of ‘epistemic trespassing’ to argue that technologists who propose applications of computer science to the law should recognize and incorporate legal expertise, and that legal experts have a responsibility not to defer mindlessly to technologists’ claims. Computational tools or projects developed without an understanding of the substance and practice of law may harm rather than help, by diverting resources from actually useful tools and projects, resolving unimportant questions, giving flatly incorrect answers, or providing purported solutions without sufficient attention to the larger context in which law is created and functions.

  • Education in an Era of Convergence

    Lyria Bennett Moses

    This paper considers alternatives to strict disciplinarity, particularly in legal education, in light of the increasing importance of problem spaces that cross disciplines, including computational law and cyber security. It is a short provocation rather than a broad-ranging inquiry and focuses on law/computer science collaborations. It asks three questions that are increasingly controversial: (1) What might be done within disciplinary programs, such as law, to prepare students to work wisely alongside engineered systems? (2) What might be done to develop students’ skills at cross-disciplinary problem-solving throughout their education? (3) Should we offer undergraduate degrees oriented not around a discipline but around a problem space; for example, should computational law be a new discipline?

  • Scoping AI & Law Projects: Wanting It All is Counterproductive

    Denis Merigoux

    The intersection of law and computer science has been dominated for decades by a community that self-identifies with the pursuit of ‘artificial intelligence’. This self-identification is not a coincidence; many AI & Law researchers have expressed their interest in the ideologically charged utopia of government by machines, and the field of artificial intelligence aligns with the pursuit of all-encompassing systems that could crack the very diverse nature of legal tasks. As a consequence, a lot of theoretical and practical work has been carried out in the AI & Law community with the objective of creating logic-based, knowledge-based or machine-learning-based systems that could eventually ‘solve’ any legal task. This ‘want-it-all’ research attitude echoes some of the debates in my home field of formal methods around the formalization of programming languages and proofs. Hence, I will argue here that the quest for an unscoped system that does it all is counterproductive, for multiple reasons. First, because such systems generally perform poorly on everything rather than being good at one task, and most legal applications have high correctness standards. Second, because it yields artifacts that are very difficult to evaluate, which stands in the way of building a sound methodology for advancing the field. Third, because it nudges the field towards technological choices that require large infrastructure-building (sometimes on a global scale) before benefits can be reaped and adoption encouraged. Fourth, because it distracts efforts away from basic applications of legal technologies that have been neglected by the research community.

  • Promises and pitfalls of artificial intelligence for legal applications

    Sayash Kapoor, Peter Henderson, Arvind Narayanan

    Is AI set to redefine the legal profession? We argue that this claim is not supported by the current evidence. We dive into AI's increasingly prevalent roles in three types of legal tasks: information processing; tasks involving creativity, reasoning, or judgment; and predictions about the future. We find that the ease of evaluating legal applications varies greatly across legal tasks, based on the ease of identifying correct answers and the observability of information relevant to the task at hand. Tasks that would lead to the most significant changes to the legal profession are also the ones most prone to overoptimism about AI capabilities, as they are harder to evaluate. We make recommendations for better evaluation and deployment of AI in legal contexts.

  • Computational Law and Access to Justice

    Natalie Byrom

    Increasingly, claims are made about the potential for computational technology to address the access to justice crisis. Advocates for AI argue that these tools can extend the protection of the law to the estimated 5.1 billion people worldwide who are unable to secure meaningful access to justice, whilst also creating efficiency savings and reducing the cost of administering justice. Globally, court digitisation efforts are rapidly increasing the volume, granularity and accessibility of data about civil justice systems and the people who access them, and in doing so are creating the datasets and the infrastructure needed to support the deployment of computational technologies at scale. What are the prospects for these developments to meaningfully improve access to justice? What research should be prioritised, and what changes to policy and regulation are required?

    This paper argues that the potential for computational technologies to address the civil access to justice crisis is undermined by: (i) an impoverished understanding of the nature of the crisis, at both a theoretical and an empirical level; (ii) misalignment between the values that are currently driving the turn to computational law and the goal of increasing rights realisation and accountability; and (iii) the failure to address the ecosystem factors (access to data, access to funding and regulation) that would support the development of computational technologies in the interests of access to justice. The paper concludes by suggesting next steps for the field.

  • Transdisciplinary research as a way forward in AI & Law

    Floris Bex

    The field of Artificial Intelligence & Law is a community of law and computer science scholars, with a focus on AI applications for the law and law enforcement. Such applications have become the subject of much debate, with techno-pessimists and techno-optimists on either side. What is the role of the (largely techno-optimistic) AI & Law community in this debate, and how can we investigate AI for the law without getting caught up in the drama? I will argue for three points: (1) combine research on data-driven systems, such as generative AI, with research on knowledge-based AI; (2) put AI into (legal) practice, working together with courts, the police, law firms and citizens; (3) work together across disciplines, bringing together those who think about how to build AI and those who think about how to govern and regulate it.

  • Generative AI, Explainability, and Score-Based Natural Language Processing in Benefits Administration

    Frank Pasquale, Gianclaudio Malgieri

    Administrative agencies have developed computationally assisted processes to speed benefits to persons with particularly urgent and obvious claims. One proposed extension of these programs would score claims based on the words that appear in them, identifying some set of claims as particularly like known, meritorious claims, without understanding the meaning of any of these legal texts. Score-based natural language processing (SBNLP) may expand the range of claims that may be categorized as urgent and obvious, but as its complexity advances, its practitioners may not be able to offer a narratively intelligible rationale for how or why it does so. At that point, practitioners may utilize the new textual affordances of generative AI to attempt to fill this explanatory gap, offering a rationale for decision that is a plausible imitation of past, humanly written explanations of judgments.

    This article explains why such generative AI should not be used to justify SBNLP decisions in this way. Due process and other core principles of administrative justice require humanly intelligible identification of the grounds for adverse action. Given that ‘next-token prediction’ is distinct from understanding a text, generative AI cannot perform such identification reliably. Moreover, given current opacity and potential bias in leading chatbots based on large language models, as well as deep ethical concerns raised by the databases they are built on, there is a good case for entirely excluding these automated outputs in administrative and judicial decision-making settings. Nevertheless, SBNLP may be established parallel to or external to justification-based legal proceedings, for humanitarian purposes.
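
    For readers less familiar with the technique, the following is a minimal, hypothetical sketch of what score-based processing of claim texts can look like in its simplest form. It is not the system discussed in the article; the function names, toy claims and threshold are invented for illustration. A new claim is scored by its word-overlap similarity to known meritorious claims, with no representation of what any of the texts mean.

        # Hypothetical illustration of score-based NLP (SBNLP) for claims triage.
        # A new claim is scored purely by word overlap (cosine similarity of
        # word counts) with known meritorious claims; meaning plays no role.
        import math
        import re
        from collections import Counter

        def word_counts(text):
            # Lowercased bag of words: grammar, context and meaning are discarded.
            return Counter(re.findall(r"[a-z']+", text.lower()))

        def cosine_similarity(a, b):
            # Cosine similarity between two word-count vectors.
            dot = sum(a[w] * b[w] for w in set(a) & set(b))
            norm = math.sqrt(sum(v * v for v in a.values())) * \
                   math.sqrt(sum(v * v for v in b.values()))
            return dot / norm if norm else 0.0

        def score_claim(claim, meritorious_claims):
            # Score a new claim by its highest similarity to any known meritorious claim.
            claim_vec = word_counts(claim)
            return max(cosine_similarity(claim_vec, word_counts(m))
                       for m in meritorious_claims)

        # Toy data, invented purely for illustration.
        known_meritorious = [
            "claimant suffered a total disability after a workplace injury and cannot work",
            "applicant's terminal illness requires immediate approval of urgent benefits",
        ]
        new_claim = "applicant cannot work because of a disability caused by an injury at the workplace"

        FAST_TRACK_THRESHOLD = 0.5  # hypothetical cutoff
        score = score_claim(new_claim, known_meritorious)
        print(f"similarity score: {score:.2f}")
        print("fast-track as urgent and obvious" if score >= FAST_TRACK_THRESHOLD
              else "route to ordinary review")

    Even in this toy version, the explanatory gap the article presses on is visible: the number the program outputs carries no narratively intelligible rationale for the decision, and more sophisticated scoring only widens that gap.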