Legal theory

17 Items

  • Automated Law Enforcement: An assessment of China’s Social Credit Systems (SCS) using interview evidence from Shanghai

    Zhenbin Zuo

    This paper provides one of the first fieldwork-based research accounts of China's Social Credit Systems (SCS). It focuses on the issue of automated law enforcement. Evidence is drawn from semi-structured interviews with Shanghai-based local government officials, judges and corporate employees, conducted in April 2021. These are actors who supervise, manage, and/or operate Shanghai’s SCS at the level of daily practice. The paper examines the use of blacklists and joint sanctions within the wider framework of the SCS. The interview evidence, combined with online archival research, uncovers a more complete understanding than previously available of the detailed workings of these systems and of their perceived impacts, both positive and negative, in the field. Automation is observed to have achieved efficient scaling, but also to have negative consequences, including rigidity at the level of code, and perverse or counter-productive incentives at the level of human behaviour, leading to ‘institutional overload’. Proposing an original institutional theory of computational law that identifies the role of governance in ‘scaling and layering’, the paper argues that automated enforcement can only achieve scale effects if human judgement is combined with automation. Human agency is needed to continuously realign and re-fit code-based systems to text-driven laws and social norms in specific spatio-temporal environments. In the final analysis, code operates in a path-dependent and complementary way to these other forms of governance. From social norms to laws, to data and to code, governance is layered via formalisation sustained by human work and societal feedback.

  • The Future of Computational Law in the Context of the Rule of Law

    Mireille Hildebrandt

    In this position paper, I argue that lawyers must come to terms with the advent of a rich variety of legal technologies and define a series of challenges that the position papers in this special issue aim to identify and address. Before doing so, I address the question of what it means to discuss the future of computational law and how that relates to the Rule of Law. This, in turn, raises the question of whether there could be something like ‘a computational Rule of Law’, or whether that would be a bridge too far because neither the concept nor the practice of Rule of Law lends itself to computation. In that case, how would the integration of computational technologies into legal practice relate to a non-computational Rule of Law? The answer to that question will structure the challenges I see for the uptake of legal technologies, resulting in a research agenda that should enable, guide and restrict the design, deployment and use of legal technologies with an eye to the future of law.

  • Computational Law and Access to Justice

    Natalie Byrom

    Increasingly, claims are made about the potential for computational technology to address the access to justice crisis. Advocates for AI argue that these tools can extend the protection of the law to the estimated 5.1 billion people worldwide who are unable to secure meaningful access to justice, whilst also creating efficiency savings and reducing the cost of administering justice. Globally, court digitisation efforts are rapidly increasing the volume, granularity and accessibility of data about civil justice systems and the people who access them, and in doing so are creating the datasets and the infrastructure needed to support the deployment of computational technologies at scale. What are the prospects for these developments to meaningfully improve access to justice? What research should be prioritised, and what changes to policy and regulation are required?

    This paper argues that the potential for computational technologies to address the civil access to justice crisis is undermined by: (i) an impoverished understanding of the nature of the crisis, at both a theoretical and an empirical level; (ii) misalignment between the values that are currently driving the turn to computational law and the goal of increasing rights realisation and accountability; and (iii) the failure to address the ecosystem factors (access to data, access to funding and regulation) that would support the development of computational technologies in the interests of access to justice. The paper concludes by suggesting next steps for the field.

  • Generative AI, Explainability, and Score-Based Natural Language Processing in Benefits Administration

    Frank Pasquale, Gianclaudio Malgieri

    Administrative agencies have developed computationally-assisted processes to speed benefits to persons with particularly urgent and obvious claims. One proposed extension of these programs would score claims based on the words that appear in them, identifying some set of claims as particularly like known, meritorious claims, without understanding the meaning of any of these legal texts. Score-based natural language processing (SBNLP) may expand the range of claims that may be categorized as urgent and obvious, but as its complexity advances, its practitioners may not be able to offer a narratively intelligible rationale for how or why it does so. At that point, practitioners may utilize the new textual affordances of generative AI to attempt to fill this explanatory gap, offering a rationale for decision that is a plausible imitation of past, humanly-written explanations of judgments.

    This article explains why such generative AI should not be used to justify SBNLP decisions in this way. Due process and other core principles of administrative justice require humanly intelligible identification of the grounds for adverse action. Given that ‘next-token-prediction’ is distinct from understanding a text, generative AI cannot perform such identification reliably. Moreover, given current opacity and potential bias in leading chatbots based on large language models, as well as deep ethical concerns raised by the databases they are built on, there is a good case for entirely excluding these automated outputs in administrative and judicial decision-making settings. Nevertheless, SBNLP may be established parallel to or external to justification-based legal proceedings, for humanitarian purposes.

  • Computational Law and Epistemic Trespassing

    Sarah Lawsky

    This article uses the concept of 'epistemic trespassing' to argue that technologists who propose applications of computer science to the law should recognize and incorporate legal expertise, and that legal experts have a responsibility not to defer mindlessly to technologists’ claims. Computational tools or projects developed without an understanding of the substance and practice of law may harm rather than help, by diverting resources from actually useful tools and projects, resolving unimportant questions, answering questions flatly incorrectly, or providing purported solutions without sufficient attention to the larger context in which law is created and functions.

  • Rules, Computation and Politics: Scrutinizing Unnoticed Programming Choices in French Housing Benefits

    Denis Merigoux, Marie Alauzen, Lilya Slimani

    The article questions the translation of a particular legal statement, a rule for calculating social rights, into a computer program able to activate the rights of the citizens concerned. It does not adopt a theoretical perspective on the logic of law and computing, but rather a realistic stance on contemporary welfare states, by studying the case of the calculation of housing benefit in France. Lacking access to CRISTAL, the source code of the calculation, we simulated the code base from the letter of the law and met with the writers of the housing law in the ministries to conduct a critical investigation of the source code. Through these interdisciplinary methods, we identified three types of unnoticed micro-choices made by developers when translating the law: imprecision, simplification and invisibilization. These methods also yield significant sociological understanding of the ordinary writing of law and code in the administration: the absence of a synoptic point of view on a particular domain of the law, the non-pathological character of errors in published texts, and the prevalence of a frontier of automation in the division of bureaucratic work. These results, drawn from the explicitation of programming choices, lead us to plead for a re-specification of the field of legal informatics and a reorientation of investigations in the philosophy and sociology of law.

  • From Rules as Code to Mindset Strategies and Aligned Interpretive Approaches

    Mark Burdon, Anna Huggins, Nic Godfrey, Rhyle Simcock, Josh Buckley, Siobhaine Slevin, Stephen McGowan

    ‘Rules as Code’ is a broad heuristic that encompasses different conceptual and practical aspects regarding the presentation of legal instruments as machine executable code, especially for use in automated business systems. The presentation of law as code was historically considered a largely isomorphic exercise that could be achieved through a literal translation of law into code. Contemporary research is questioning the value of a literal approach to legal coding and is adopting different interpretive strategies that seek enhanced alignment between law and code. In this article, we report on research findings involving the coding of an Australian Commonwealth statute – the Treasury Laws Amendment (Design and Distribution Obligations and Product Intervention Powers) Act 2019 (Cth) (the ‘DDO Act’), and the Act’s concomitant regulatory guidance – the Australian Securities and Investments Commission (ASIC) Regulatory Guide 274 (‘RG 274’). We adapt and apply Brownsword’s mindsets to develop different interpretive approaches that were necessary to resolve the coding issues encountered. The mindset strategies enabled us to outline and delineate distinct computational, legal and regulatory interpretive approaches that highlight the different cultural contexts and rationales which are embedded in legal instruments, like legislation and regulatory guidance. In conclusion, we contend that different types of mindset strategies better highlight the interpretive choices involved in the coding of legal and regulatory instruments.    

  • Law as Code: Exploring Information, Communication and Power in Legal Systems

    Bhumika Billa

    This paper is an inquiry into the informational nature of legal systems to arrive at a new understanding of law-society interactions. Katharina Pistor in her book Code of Capital reveals how the legal ‘coding’ of ‘capital’ has deepened wealth inequality but does not offer an in-depth exploration or definition of ‘legal coding’. In her critical response to ‘legal singularity’ as a proposed solution for making law more inclusive and accessible, Jennifer Cobbe calls for a closer look at the structural role law plays in society and how it has come to exclude, marginalise and reinforce power gaps. The paper aims to link Pistor’s project with Cobbe’s critical questions by exploring ‘law as code’ and modelling juridical communication and information flows in a legal system. For this purpose, I use two external frames — Claude Shannon’s information theory and Niklas Luhmann’s systems theory — to explore ways in which the legal system is exclusive, reflexive, and adaptive in the ways it interacts with society. An attempt to model information flows within (using Shannon) and beyond (using Luhmann) the boundaries of law reveals the influence of experts, their identities, and their lived experiences on both the translation and transmission of legal information. The paper is hopefully a starting point for more cross-disciplinary conversations aimed at addressing the structural issues with the way law shifts and reinforces power.

    Reply by Jannis Kallinikos, LUISS Guido Carli University.

  • Platforms as Law: A speculative theory of coded, interfacial and environmental norms

    José Antonio Magalhães

    This paper aims to offer a nomic (legal-spatial-political) concept of platform at the interface between modern legal theory and contemporary speculative philosophy. I argue that the ‘code as law’ debate has been dominated by ‘legal correlationism’, a theoretical framework based on the is/ought distinction in which ‘code’ appears as a technological fact to be regulated by legal norms. I propose an alternative approach via speculative legal theory in order to take code as law in a literal sense. I rework Carl Schmitt’s notion of ‘nomos’ to produce a legal concept of platform that avoids correlationism. I frame both modern law and computational platforms as nomic platforms, though based on different conceptions/experiences of technics, and map out their respective operations. I discern three types of norms active in nomic platforms: coded, interfacial and environmental norms, the first two of which have often been confused, while the third remains largely unknown to legal theory. Finally, I seek to offer a set of concepts meant to render cloud platforms intelligible in nomic terms, especially those of device, application, interface and user, introducing the notion of the transdividual user as the correlate of algorithmic governance. I close by emphasising that, though it is vital to criticise platform nomics and protect the affordances of law-as-we-know-it, those efforts should be supplemented by theoretico-practical speculation about what law may become.

    Reply by Cecilia Rikap, University College London.

  • The Structure and Legal Interpretation of Computer Programs

    James Grimmelmann

    This is an essay about the relationship between legal interpretation and software interpretation, and in particular about what we gain by thinking about computers and programmers as interpreters in the same way that lawyers and judges are interpreters. I wish to propose that there is something to be gained by treating software as another type of law-like text, one that has its own interpretive rules, and that can be analysed using the conceptual tools we typically apply to legal interpretation. In particular, we can usefully distinguish three types of meaning that a program can have. The first is naive functional meaning: the effects that a program has when executed on a specific computer on a specific occasion. The second is literal functional meaning: the effects that a program would have if executed on a correctly functioning computer. The third is ordinary functional meaning: the effects that a program would have if it were executed correctly and were free of bugs. The punchline is that literal and ordinary functional meaning are inescapably social. The notions of what makes a computer ‘correctly functioning’ and what makes a program ‘bug free’ depend on the conventions of a particular technical community. We cannot reduce the meaning and effects of software to purely technical questions, because although meaning in programming languages is conventional in a different way than meaning in natural languages, it is conventional all the same.

    Reply by Marieke Huisman, University of Twente.

  • Evolutionary Interpretation: Law and Machine Learning

    Simon Deakin, Christopher Markou

    We approach the issue of interpretability in artificial intelligence and law through the lens of evolutionary theory. Evolution is understood as a form of blind or mindless ‘direct fitting’, an iterative process through which a system and its environment are mutually constituted and aligned. The core case is natural selection as described in biology but it is not the only one. Legal reasoning can be understood as a step in the ‘direct fitting’ of law, through a cycle of variation, selection and retention, to its social context. Machine learning, insofar as it relies on error correction through backpropagation, is a version of the same process. It may therefore have value for understanding the long-run dynamics of legal and social change. This is distinct, however, from any use it may have in predicting case outcomes. Legal interpretation in the context of the individual or instant case depends upon the generative power of natural language to extrapolate from existing precedents to novel fact situations. This type of prospective or forward-looking reasoning is unlikely to be well captured by machine learning approaches.

    Reply by Masha Medvedeva, University of Groningen.

  • Diachronic interpretability and machine learning systems

    Sylvie Delacroix

    If a system is interpretable today, why would it not be as interpretable in five or ten years’ time? Years of societal transformations can negatively impact the interpretability of some machine learning (ML) systems for two types of reasons. These two types of reasons are rooted in a truism: interpretability requires both an interpretable object and a subject capable of interpretation. This object versus subject perspective ties in with distinct rationales for interpretable systems: generalisability and contestability. On the generalisability front, when it comes to ascertaining whether the accuracy of some ML model holds beyond the training data, a variety of transparency and explainability strategies have been put forward. These strategies can make us blind to the fact that what an ML system has learned may produce helpful insights when deployed in real-life contexts this year yet become useless faced with next year’s socially transformed cohort. On the contestability front, ethically and legally significant practices presuppose the continuous, uncertain (re)articulation of conflicting values. Without our continued drive to call for better ways of doing things, these discursive practices would wither away. To retain such a collective ability calls for a change in the way we articulate interpretability requirements for systems deployed in ethically and legally significant contexts: we need to build systems whose outputs we are capable of contesting today, as well as in five years’ time. This calls for what I call ‘ensemble contestability’ features.

    Reply by Zachary C. Lipton, Carnegie Mellon University.

  • Hermeneutical injustice and the computational turn in law

    Emilie van den Hoven

    In this paper, I argue that the computational turn in law poses a potential challenge to the legal protections that the rule of law has traditionally afforded us, of a distinctively hermeneutical kind. Computational law brings increased epistemic opacity to the legal system, thereby constraining our ability to understand the law (and ourselves in light of it). Drawing on epistemology and the work of Miranda Fricker, I argue that the notion of ‘hermeneutical injustice’ captures this condition. Hermeneutical injustice refers to the condition where individuals are dispossessed of the conceptual tools needed to make sense of their own experiences, consequently limiting their ability to articulate them. I argue that in the legal context this poses significant challenges to the interpretation, ‘self-application’ and contestation of the law. Given the crucial importance of those concepts to the rule of law and the notion of human dignity that it rests upon, this paper seeks to explicate why the notion of hermeneutical injustice demands our attention in the face of the rapidly expanding scope of computation in our legal systems.

    Reply by Ben Green, University of Michigan.

  • Computational legalism and the affordance of delay in law

    Laurence Diver

    Delay is a central element of law-as-we-know-it: the ability to interpret legal norms and contest their requirements is contingent on the temporal spaces that text affords citizens. As more computational systems are introduced into the legal system, these spaces are threatened with collapse, as the immediacy of ‘computational legalism’ dispenses with the natural ‘slowness’ of text. In order to preserve the nature of legal protection, we need to be clear about where in the legal process such delays play a normative role and to ensure that they are reflected in the affordances of the computational systems that are so introduced. This entails a focus on the design and production of such systems, and the resistance of the ideology of ‘efficiency’ that pervades contemporary development practices.

    Reply by Ewa Luger, Chancellor's Fellow, University of Edinburgh.

  • Analogies and Disanalogies Between Machine-Driven and Human-Driven Legal Judgement

    Reuben Binns

    Are there certain desirable properties of text-driven law which have parallels in data-driven law? As a preliminary exercise, this article explores a range of analogies and disanalogies between text-driven normativity and its data-driven counterparts. Ultimately, the conclusion is that the analogies are weaker than the disanalogies. But the hope is that, in the process of drawing them, we learn something more about the comparison between text- and data-driven normativities and the (im?)possibility of data-driven law.

    Reply by Emily M. Bender, Professor of Computational Linguistics, University of Washington.

  • The adaptive nature of text-driven law

    Mireille Hildebrandt

    This article introduces the concept of ‘technology-driven normativities’, marking the difference between norms, at the generic level, as legitimate expectations that coordinate human interaction, and subsets of norms at specific levels, such as moral or legal norms. The article is focused on the normativity that is generated by text, fleshing out a set of relevant affordances that are crucial for text-driven law and the rule of law. This concerns the ambiguity of natural language, the resulting open texture of legal concepts, the multi-interpretability of legal norms and, finally, the contestability of their application. This leads to an assessment of legal certainty that thrives on the need to interpret, the ability to contest and the concomitant need to decide the applicability and the meaning of relevant legal norms. Legal certainty thus sustains the adaptive nature of legal norms in the face of changing circumstances, which may not be possible for code- or data-driven law. This understanding of legal certainty demonstrates the meaning of legal protection under text-driven law. A proper understanding of the legal protection that is enabled by current positive law (which is text-driven), should inform the assessment of the protection that could be offered by data- or code-driven law, as they will generate other ‘technology-driven normativities’.

    Reply by Michael Rovatsos, Professor of Artificial Intelligence, University of Edinburgh.

  • Legal Technology/Computational Law: Preconditions, Opportunities and Risks

    Wolfgang Hoffmann-Riem

    Although computers and digital technologies have existed for many decades, their capabilities today have changed dramatically. Current buzzwords like Big Data, artificial intelligence, robotics, and blockchain are shorthand for further leaps in development. The digitalisation of communication, which is a disruptive innovation, and the associated digital transformation of the economy, culture, politics, and public and private communication – indeed, probably of virtually every area of life – will cause dramatic social change. It is essential to prepare for the fact that digitalisation will also have a growing impact on the legal system.

    Reply by Virginia Dignum, Professor at the Department of Computing Science, Umeå University.