Paper by Daniel Wilf-Townsend: “A new remedy has emerged in the world of technology governance. Where someone has wrongfully obtained or used data, this remedy requires them not only to delete that data, but also to delete tools, such as machine learning models, that they have created using the data. Model deletion, also called algorithmic disgorgement or algorithmic destruction, has been increasingly sought in both private litigation and public enforcement actions. As its proponents note, model deletion can improve the regulation of privacy, intellectual property, and artificial intelligence by providing more effective deterrence and better management of ongoing harms.
But, this article argues, model deletion has a serious flaw: in its current form, it risks operating as a grossly disproportionate penalty. Model deletion requires the destruction of models whose training involved illicit data to any degree, with no consideration of how much (or even whether) that data contributed to any wrongful gains or ongoing harms. Model deletion could thereby cause unjust losses in litigation and chill useful technologies.
This article works toward a well-balanced doctrine of model deletion by building on the remedy’s equitable origins. It identifies how traditional considerations in equity—such as a defendant’s knowledge and culpability, the balance of the hardships, and the availability of more tailored alternatives—can be applied in model deletion cases to mitigate the problem of disproportionality. By accounting for proportionality, courts and agencies can develop a doctrine of model deletion that takes advantage of its benefits while limiting its potential excesses…(More)”.