Technical Countermeasures against Adversarial Attacks on Computational Law
Abstract
Adversarial Attacks, commonly described as deliberately induced perturbations of a model's input, can lead to incorrect outputs, such as misclassifications or false predictions, in systems based on artificial intelligence. While these perturbations are often difficult for a human observer to detect, they can produce false results with consequences for physical as well as intangible objects. They therefore represent a key challenge in diverse areas, including legal fields such as the judicial system, law enforcement and legal tech. While computer science has developed several approaches to mitigate the risks posed by Adversarial Attacks, the issue has so far received little attention in legal scholarship. This paper aims to fill that gap: it assesses the risks of and technical defenses against Adversarial Attacks on AI systems and provides a first assessment of possible legal countermeasures.
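To make the notion of "deliberately induced perturbations" concrete, the following minimal sketch implements the Fast Gradient Sign Method (FGSM), one well-known perturbation-based attack, against a toy logistic-regression classifier. The model, parameters, and function names here are illustrative assumptions for exposition only; they do not reflect any specific system or defense discussed in the paper.

```python
# Hypothetical illustration of an adversarial perturbation (FGSM) on a
# toy logistic-regression classifier. Not the authors' setup.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Return an adversarially perturbed copy of input x.

    The classifier scores inputs with w . x + b; y is the true label
    in {-1, +1}. FGSM shifts each feature by eps in the direction that
    increases the loss: x_adv = x + eps * sign(grad_x loss).
    """
    margin = y * (x @ w + b)
    # Gradient of the logistic loss -log(sigmoid(margin)) w.r.t. x:
    grad_x = -y * (1.0 - sigmoid(margin)) * w
    return x + eps * np.sign(grad_x)

# Toy demonstration: a correctly classified point is flipped by a
# small, structured perturbation of its features.
w, b = np.array([1.5, -2.0]), 0.1
x, y = np.array([0.4, -0.3]), 1
x_adv = fgsm_perturb(x, y, w, b, eps=0.5)
print("clean prediction:      ", np.sign(x @ w + b))       # +1 (correct)
print("adversarial prediction:", np.sign(x_adv @ w + b))   # -1 (flipped)
```

In this sketch the perturbed input differs from the original by at most eps per feature, yet the predicted label flips; the same mechanism, applied to high-dimensional inputs such as images or text embeddings, yields the near-imperceptible changes the abstract describes.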
Main text and response text copyright © 2024 Dario Henri Haux, Alfred Früh
Reply text copyright © the replier
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.