Scoping AI & Law Projects: Wanting It All is Counterproductive



Keywords: cybernetics, legal formalism, AI and law


The intersection of law and computer science has been dominated for decades by a community that self-identifies with the pursuit of ‘artificial intelligence’. This self-identification is no coincidence: many AI & Law researchers have expressed interest in the ideologically charged utopia of government by machines, and the field of artificial intelligence aligns with the pursuit of all-encompassing systems that could crack the very diverse nature of legal tasks. As a consequence, much theoretical and practical work in the AI & Law community has been carried out with the objective of creating logic-based, knowledge-based, or machine-learning-based systems that could eventually ‘solve’ any legal task. This ‘want-it-all’ research attitude echoes debates in my home field of formal methods around the formalization of programming languages and proofs. I will argue here that the quest for an unscoped system that does it all is counterproductive for multiple reasons. First, because such systems generally perform poorly on everything rather than excelling at one task, while most legal applications demand high correctness standards. Second, because it yields artifacts that are very difficult to evaluate, which hinders building a sound methodology for advancing the field. Third, because it nudges researchers toward technological choices that require large infrastructure-building efforts (sometimes on a global scale) before reaping benefits and encouraging adoption. Fourth, because it distracts effort away from basic applications of legal technology that the research community has neglected.

Author Biography

Denis Merigoux, Inria





20 May 2024

How to Cite

Merigoux, Denis. 2024. “Scoping AI & Law Projects: Wanting It All Is Counterproductive”. Journal of Cross-Disciplinary Research in Computational Law 2 (2).