
Ranking Member Raskin’s Opening Statement at Hearing on Federal Preemption of State AI Laws

September 18, 2025

Washington, D.C. (September 18, 2025)—Today, Rep. Jamie Raskin, Ranking Member of the House Judiciary Committee, delivered opening remarks at the Subcommittee on Courts, Intellectual Property, Artificial Intelligence, and the Internet hearing examining federal preemption of state artificial intelligence (AI) laws.

Below are Ranking Member Raskin’s remarks, as prepared for delivery, at today’s hearing. 

Ranking Member Jamie Raskin
Subcommittee on Courts, Intellectual Property, Artificial Intelligence, and the Internet
Hearing on “AI at a Crossroads: A Nationwide Strategy or Californication?”
September 18, 2025 

Thank you, Chairman Issa, and thank you to the witnesses for being with us here today.

When commercially available AI platforms debuted three years ago, the impact was immediate. Generative AI spurred scientific research and gave new tools to creators. This jump-start to America’s innovation sector can be seen in fields ranging from pharmaceutical research and quantum computing to sound recording and film editing. But generative AI has also raised legal uncertainties, such as whether individuals have a right to their name, image, likeness, and voice when they’re used in deepfakes; whether limitations should be placed on AI-enabled surveillance of our citizens; and what the appropriate standard of care is for AI platforms to protect users from harmful consequences.

While Congress takes the time to examine this technology and its effects, the states have begun to enact the first regulations on AI. We often talk about “rules of the road” when crafting legislation that governs technology. But I think this way of talking about consumer safety and technological ethics suggests that a road without speed limits would get us to where we’re going faster. For generative AI, it’s not about setting speed limits and building guardrails but about creating a road to begin with.

Some of my colleagues across the aisle would argue that these roads are unnecessary. They say that without broad preemption—without clearing the technological field of legal encumbrances—AI companies and fledgling startups will be at a disadvantage, have trouble complying with laws, and wither on the vine. I have heard little to suggest that this is the appropriate path to progress. Proponents of federal preemption present Americans with a series of false choices—telling us we must choose a side: between one party and another, between businesses and consumers, and between national security and safe innovation.

In 1816, Thomas Jefferson wrote, “Laws and institutions must go hand in hand with the progress of the human mind. As that becomes more developed, more enlightened, as new discoveries are made, new truths discovered and manners and opinions change, with the change of circumstances, institutions must advance also to keep pace with the times.” What we are seeing across the country is individual states looking at technological developments and asking whether and how their laws need to change to properly protect their citizens.

To sit here today, you would think that this issue has created a partisan divide—Republicans on one side, Democrats on the other. But opposition to an AI moratorium is broad and bipartisan. In fact, when some Republicans in Congress tried to pass a moratorium through our last spending bill, attorneys general from across the country, red and blue states alike, sent a letter to Congress saying, “don’t do this!”

In another letter, 17 Republican governors wrote to Majority Leader Thune and Speaker Johnson, complimenting their “Big Beautiful Bill” but explaining that the moratorium provision stripping “the right of any state to regulate this technology in any way…without a thoughtful public debate” was “the antithesis of what our Founders envisioned.” I may disagree with these Republican governors on many issues, but I think they should be free to create what they called “smart regulations of the AI industry that simultaneously protect consumers while also encouraging this ever-developing and critical sector.”

In a statement submitted for this subcommittee hearing, AI startup Bria wrote that a moratorium on state laws would create a vacuum, stripping away the rules needed to “raise capital, form partnerships, and build safely in order to win customer trust.” Without a road on which to travel forward, startups are cut out of the market in favor of large companies with the legal and fundraising teams necessary to overcome a barren legal landscape.

Finally, some argue that we need unrestrained AI development to properly compete with China. This subcommittee has held many bipartisan hearings on the threat to innovation, AI supremacy, and intellectual property from China. It would be unwise to assume that we need to become more like China to compete with China. In fact, many would argue that stronger, better products—developed in America, while protecting Americans and their data—ensure that American AI is both more advanced and more internationally competitive. Protecting American innovation, investing in research, and investing in our workforce is the right way to win the so-called “AI Arms Race.”           

Americans’ safety is not at odds with AI innovation. That should be the baseline for any conversation we have about the best way to move forward.

I’m looking forward to hearing from our witnesses, and I yield back the balance of my time.