At Hearing on Federal Preemption of State AI Laws, Subcommittee Democrats Underscore Need for Guardrails to Protect Americans While Promoting Innovation
Washington, D.C. (September 18, 2025)—Rep. Jamie Raskin, Ranking Member of the House Judiciary Committee, and Rep. Hank Johnson, Ranking Member of the Subcommittee on Courts, Intellectual Property, Artificial Intelligence, and the Internet, led subcommittee Democrats in warning that comprehensive federal preemption of state common law, statutes, and regulations regarding AI would deny victims their day in court and stifle innovation, and urged bipartisan action to enact basic guardrails to protect Americans from harm online.
The hearing included testimony from: Neil Richards, Koch Distinguished Professor in Law, Director, Cordell Institute, Washington University Law; Adam Thierer, Resident Senior Fellow, Technology and Innovation, R Street Institute; David Bray, Loomis Council Member & Distinguished Fellow, Stimson Center; and Kevin Frazier, AI Innovation and Law Fellow, Lecturer, University of Texas at Austin School of Law.
Committee Democrats underscored that federal preemption of all state common law, statutes, and regulations regarding AI would create a dangerous, unregulated environment, deny families a path to hold companies accountable, hinder AI innovation, and even paralyze future federal action.
- Ranking Member Raskin asked: “Isn’t it the case that a moratorium today would just wipe out state laws without substituting anything, without imposing a national law?” Professor Richards replied, “Absolutely.” Ranking Member Raskin continued, “Is there any precedent for just doing that, saying we don’t want any state laws at all while we think it over, or while we’re stuck in some kind of legislative paralysis?” Professor Richards explained: “I can’t think of one. And that’s why I think it would be disastrous, because depending first on how the law is defined, it could sweep very, very broadly and take out laws that are important and protective that everybody on this panel would agree are good laws.”
- Ranking Member Johnson said: “Some of the current lawsuits against AI companies are being brought under common law to hold companies accountable for the harm that their products have caused to children. For example, Megan Garcia is suing Character Technologies and Google after her 14-year-old son, Sewell Setzer, died by suicide. She testified before our colleagues in the Senate this week that his death was ‘the result of prolonged abuse by AI chat bots on a platform called Character AI.’ […] These tragic cases show some of the worst possible harms that can arise from AI technologies. Professor Richards, does an AI moratorium run the risk of impeding these lawsuits that seek to hold companies accountable?” Professor Richards explained that it would.
- Rep. Ted Lieu explained: “Congress established in the House of Representatives a bipartisan AI Task Force. I was the co-chair, and there were 12 Democrats, 12 Republicans and we all agreed on over 80 recommendations in a bipartisan manner, a number of which could be turned into legislation. And instead, the Trump Administration basically says, no, we don’t want Congress doing anything, and we’ll go to the states and not have them do anything. We have zero regulations. And the Trump Administration tried to put in a ten-year moratorium ban on states that was opposed by 17 Republican governors, 20 Republican attorneys general, and 130 Republican state lawmakers. And then that ten-year proposed ban failed 99 to 1 in the U.S. Senate, a spectacular rejection of what the Administration was trying to do.”
- Rep. Deborah Ross said: “I know that we have the parents of children who have been hurt by AI here, and the states are ahead of Congress in protecting our children. And given our inaction, many states have stepped up passing legislation covering topics that run the gamut from expanding CSAM laws to cover AI-generated material in Alabama, to prohibiting AI from being used to provide mental health care services in Nevada. And then, you know, we’ve been talking about democracy, prohibiting the use of AI during an election to create political messaging that contains deepfakes of candidates for office in New Hampshire. And so, we’ve been talking about federalism, but sometimes the states have to act.”
Committee Democrats explained how instituting basic guardrails for responsible AI development would prevent a wide range of harms to Americans.
- Rep. Zoe Lofgren explained how the Trump Administration’s cuts to the National Institute of Standards & Technology (NIST) have hindered efforts to address AI: “We rely on NIST, an agency that is widely respected in the Congress and in the technology world. But we’ve got to look at what’s happened to NIST. They’ve been eviscerated by the DOGE people. And I fail to see how they’re going to be able to perform the tasks that we’re hoping that they can perform given what has happened to them.”
- Rep. Lou Correa said: “AI is moving faster than we imagined or we even expected just last year, touching every aspect of our lives. Most of our constituents, like many of us here, don’t know a lot about it, but they know enough to expect that we here will protect them, their jobs, their children, their intellectual property. And so, the debate about whether it’s local control or federal control, I think, is second to the fact that we just can’t move on this stuff at the federal level.”