
Subcommittee Ranking Member McBath’s Opening Statement at Hearing on Protecting Americans from Abuses of AI Technology

July 16, 2025

Washington, D.C. (July 16, 2025)—Today, Rep. Lucy McBath, Ranking Member of the Subcommittee on Crime and Federal Government Surveillance, delivered opening remarks at a subcommittee hearing on the use of Artificial Intelligence (AI) in law enforcement and how to combat AI-enabled crimes.

Below are Ranking Member McBath’s remarks, as prepared for delivery, at today’s hearing. 

WATCH Subcommittee Ranking Member McBath’s opening statement.
Ranking Member Lucy McBath
Subcommittee on Crime and Federal Government Surveillance
Hearing on “Artificial Intelligence and Criminal Exploitation: A New Era of Risk”
July 16, 2025

Mr. Chairman, thank you for convening this hearing to discuss AI-enabled crime, efforts to detect and combat such crime, and how law enforcement deploys AI tools.

Like so many new technologies, A.I. is not inherently good or bad. A.I.-enabled tools can find patterns, sort through vast amounts of information, and may even help law enforcement solve crimes. But in the wrong hands, the same tools can be used to commit financial fraud, breach national security systems, and harm our children.

When used by law enforcement, this technology has the potential to empower investigators, but it also carries the risk of serious errors with life-changing consequences. That is why it is critical that we proceed thoughtfully and put appropriate guardrails in place, so that everyone in our criminal justice system uses A.I.-enabled tools responsibly and not to the detriment of law-abiding members of our communities.

We’ve already seen what can go wrong when those safeguards are missing. One woman and her family experienced firsthand the dangers of A.I.-enabled facial recognition technology. Detroit police used a facial recognition tool in an attempt to identify a carjacking suspect from a surveillance camera image. The tool matched the surveillance image with a picture of Porcha Woodruff, a nursing school student.

One morning, as Ms. Woodruff was getting her two children ready for school, police knocked on her door and told her she was under arrest for carjacking. She knew right away there must be some kind of mistake and gestured at her body as she spoke to law enforcement to point out the obvious: she was eight months pregnant. Though the police had not been looking for a visibly pregnant woman, they still handcuffed Ms. Woodruff, took her away from her crying children, held her for 11 hours, searched her phone, and charged her. After her release, she went straight to the hospital and was treated for dehydration. The charges were dismissed a month later. 

This case is especially troubling because facial recognition tools have been shown to perform worse on Black individuals, increasing the risk of misidentification and contributing to overcriminalization. A.I. is only as good as the data it is trained on, and when that data is biased, it exacerbates racial disparities long embedded in our criminal justice system. An inaccurate tool is dangerous for all of us.

Thankfully, and due in part to cases like this one, the city of Detroit has adopted new rules governing the use of facial recognition technology within its police department. And Detroit is not alone. Many cities and states have put sensible guardrails in place to limit potentially harmful uses of A.I.

That’s why it was alarming when some Republicans recently attempted to include a moratorium on state and local A.I. regulations in the “Big Ugly Bill.” The move generated so much bipartisan opposition that seventeen Republican governors, including the governor of Georgia, wrote to the Senate opposing the proposed moratorium, warning that “people will be at risk until basic rules ensuring safety and fairness can go into effect.”

Sarah Huckabee Sanders, the Republican Governor of Arkansas, and former Press Secretary to President Trump, took to Twitter to state, “I stand with a majority of GOP Governors against stripping states of the right to protect our people from the worst abuses of AI. The U.S. must win the fight against China—on AI and everything else. But we won’t if we sacrifice the health, safety, and prosperity of our people.”

While this most recent proposal was ultimately stripped from the bill by a 99-1 vote of the Senate, the Republican Chair of the House Energy and Commerce Committee has already vowed to continue to pursue a moratorium—even while acknowledging that federal regulations on A.I. are still years away.

I stand with those seeking to protect the health, safety, and civil rights of our communities from the abuses of A.I., and I hope that we can come together and follow the lead of the states to explore what those guardrails should look like and put them in place. I look forward to learning more from the experts here today about this important issue.

I yield back.