Technical Countermeasures against Adversarial Attacks on Computational Law

Authors

  • Dario Henri Haux, formerly Zentrum für Life Sciences-Recht (ZLSR), Universität Basel
  • Alfred Früh, Zentrum für Life Sciences-Recht (ZLSR), Universität Basel

Abstract

Adversarial Attacks, commonly described as deliberately induced perturbations of a system's input, can lead to incorrect outputs such as misclassifications or false predictions in systems based on artificial intelligence. While these perturbations are often difficult for a human observer to detect, they can produce false results and affect physical as well as intangible objects. They therefore pose a key challenge in diverse areas, including legal fields such as the judicial system, law enforcement and legal tech. While computer science has developed several approaches to mitigate the risks posed by Adversarial Attacks, the issue has so far received little attention in legal scholarship. This paper aims to fill that gap: it assesses the risks of, and technical defenses against, Adversarial Attacks on AI systems and offers a first assessment of possible legal countermeasures.
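To make the attack mechanism the abstract describes concrete, the sketch below implements one canonical example, the fast gradient sign method (FGSM): a perturbation bounded by a small budget epsilon is added to an input in the direction that most increases the classifier's loss. The toy model, the data and the epsilon value are illustrative assumptions for this sketch, not material taken from the paper itself.

```python
# Minimal FGSM sketch. The model and data are hypothetical stand-ins
# for any AI-based decision system of the kind the paper discusses.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 4, requires_grad=True)  # clean input
y = torch.tensor([0])                      # its true label

# The gradient of the loss w.r.t. the input gives the direction in
# which a small change most increases the classification error.
loss = loss_fn(model(x), y)
loss.backward()

epsilon = 0.05  # perturbation budget, small enough to be hard to spot
x_adv = (x + epsilon * x.grad.sign()).detach()

# The perturbed input may now be misclassified even though it differs
# from the original by at most epsilon per feature.
print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))
```

Technical countermeasures discussed in the literature, such as adversarial training, target exactly this mechanism by including perturbed examples of this kind in the training data.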

Published

8 January 2024 — Updated on 16 January 2024

How to Cite

Haux, Dario Henri, and Alfred Früh. 2024. “Technical Countermeasures Against Adversarial Attacks on Computational Law”. Journal of Cross-Disciplinary Research in Computational Law 2 (1). https://journalcrcl.org/crcl/article/view/31.