The House budget bill is deeply stupid. No, I don’t mean the massive tax cut extensions for people who don’t need them, done on the backs of food and medical care for the poor, although it also does that and it’s stupid. I mean the provision that bans states from regulating AI. Tucked inside is language saying that “no state or political subdivision may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.” This is making the news if you’re in the right circles, but like so much of the deluge coming out of Washington these days, it’s not getting the national attention it deserves. Over on Lawfare, Katie Fry Hester and Gary Marcus have the details. The key point is that the law attempts to pre-empt all state AI regulations. It would probably also take out big chunks of state privacy laws, just as we’re starting to see some of them take effect.
Hester and Marcus emphasize some of the problems: the bill likely violates the 10th Amendment, it is absolutely a policy change of the sort that the Senate Parliamentarian should rule out-of-bounds for a reconciliation bill, and it’s deeply unpopular: the public is worried about AI and wants it regulated. The standard debate about state vs. federal regulation pits Brandeis’s “laboratories of democracy” against the need for uniform federal rules. That’s a reasonable debate, and which side you favor probably depends on the topic. Nor are the two necessarily exclusive choices: federal regulation can set floors that states may exceed or ceilings they may not, and sometimes federal regulations develop out of state rules. Federal copyright law preempts state law, and that system makes obvious sense. State laws around gambling or alcohol make sense given the diversity of local cultures.
That debate from your civics class is, however, not what this is about. The problem, of course, is not that we’d have federal policy instead of state policy. The problem is that there is absolutely no chance that this Congress will pass meaningful AI regulation, so the choice is between a patchwork of state rules and nothing. Congress hasn’t even passed meaningful privacy regulation yet, and right now they’re in thrall to an autocrat who issued an Executive Order in his first week in office directing agencies to “suspend, revise, or rescind such actions, or propose suspending, revising, or rescinding such actions” taken in compliance with the Biden Administration’s (bare minimum) efforts at AI regulation, in the name of “revok[ing] certain existing AI policies and directives that act as barriers to American AI innovation, clearing a path for the United States to act decisively to retain global leadership in artificial intelligence.”
That’s of course what the AI industry players want: if state regulations are forbidden and federal regulations aren’t forthcoming, AI is unregulated except for narrow cases like deepfake pornography, and we all have to live with the unmitigated consequences of an extractive industry with a history of foisting unsafe products on the world. What could go wrong?
One of the underemphasized problems with unregulated AI is its tendency to buttress authoritarianism. Predictive systems – which Pascal König already compared to Hobbes’s Leviathan – tend to move power upstream, away from the people they describe and into the hands of those who deploy the predictions. They are notoriously opaque, which poses problems all around: if you don’t know what a system does, you don’t know whether it’s following the law, and you can’t contest it, which matters for due process. People caught up in AI systems often have a very hard time effectuating their rights, as the repeated instances of wrongful arrests (usually in violation of police procedures) due to bad facial recognition attest.
Current systems involve staggering levels of centralization in the hands of a few companies; these sorts of monopolies also centralize power. In a new paper, James Goodrich ties current critiques of monopoly emerging from people like Lina Khan to a normative account of domination. As he summarizes the argument, it is that “large data-collecting firms benefit from their power to exclude others from the use of the data they collect. This power to exclude is arbitrary and thus constitutes a form of domination. Because these firms benefit from their dominating activity (and could avoid doing so), they are engaged in a form of societal exploitation.” As he explains it, the issue is one of the social capacity to use data for purposes other than what the firms want. This is because data is non-rivalrous and so we need a good reason to exclude others from its use. In the case of IP the argument is that without exclusive rights there’d be no incentive for creation and so no economic model to get us to a socially beneficial amount of creative works, inventions, etc. But there is no comparable rationale for data: Google can make advertising money whether or not they lock up all their data.
Of particular interest here is that the original antitrust laws derived from the thought that excessive corporate centralization was closely analogous to excessive political power. Goodrich quotes Senator John Sherman (of Sherman Act fame):
“If we will not endure a king as a political power, we should not endure a king over the production, transportation, and sale of any of the necessities of life. If we would not submit to an emperor, we should not submit to an autocrat of trade, with the power to prevent competition.”
That’s clarifying in the current AI case. Here are three reasons you want state-level AI regulation:
1. Blurry private/public sector lines: The boundaries between corporate AI and governmental AI are blurry to begin with. As Salomé Viljoen warned recently, part of what’s been going on with DOGE and the early Trump administration is a further centralization and unification of data in the name of governmental power. That’s going to make the lines even blurrier, as are moves by companies like Clearview AI, which scraped images off the web and sold them to law enforcement.
That’s the point where the nascent authoritarianism enabled by preempting state AI rules suddenly emerges. A couple of months ago, Clearview entered a nationwide settlement agreement. But it wasn’t a settlement with the feds: it was a settlement under the Illinois Biometric Information Privacy Act. BIPA is a good law and one of the best of the state efforts at privacy, and it covers biometric identifiers like the face scans used in facial recognition. It’s been used against both public and private actors (see here and here). That is exactly the sort of law that the current House bill would get rid of. There are a lot of state attempts to take the rough edges off AI, and even more state laws that (however well or badly) try to protect privacy from automated systems. California, for example, protects privacy not just in personal data but in inferences drawn from it (a signal improvement over the EU’s GDPR). That all goes away if the House bill passes.
2. Creeping police authoritarianism: As a new paper by Matthew Tokson argues, there are a lot of ways that AI plays into authoritarian tendencies in policing, ranging from massively increasing surveillance capacity to replacing morally-conscious humans with obedient robots to centralizing power and thereby shielding corruption. Tokson’s overall argument is that Fourth Amendment jurisprudence should reorient itself to deal with this, but he also points out that legislatures don’t have to wait for courts to set boundaries on local law enforcement’s use and misuse of AI. The House bill would eliminate that possibility.
3. Kneecapping state attorneys general: Lots of state privacy laws work through state attorneys general for enforcement. There’s a good debate to be had about how effective this is versus other forms of regulation – structural, individual torts, etc. But Danielle Citron made the case a while ago that state attorneys general have been good privacy enforcers, often better than their federal counterparts. Eliminate state laws, and you eliminate state attorneys general as enforcers. That makes it a lot harder to litigate, and not just by eliminating statutory causes of action. It also dumps a huge burden on individuals (who may not even know they’ve been harmed). We know from privacy law that this kind of enforcement is too much for individuals to manage. For one thing, as with privacy, the potential damage award for an AI harm is likely to be too low to be worth litigating individually, even if the social damage is quite high. Class actions are hard to litigate, and the current Supreme Court is hostile to class standing in data cases specifically. Courts also have trouble understanding individual data harms.
Unregulated AI is bad. If you say that, you’ll be told that innovation is always good. You should ask who it’s good for.