Klobuchar: State AI laws keep us safe. Trump’s next move could upend that.

“One rulebook” is not the kind of AI regulation this country needs.

The New York Times
December 12, 2025, at 11:00 a.m.
President Donald Trump in the Roosevelt Room of the White House Dec. 10 in Washington. (Evan Vucci/The Associated Press)

Opinion editor’s note: Strib Voices publishes a mix of guest commentaries online and in print each day.

•••

We’ve all heard about the revolutionary breakthroughs that could result from the deployment of artificial intelligence, including cures for cancer, advances in energy and individually tailored education for every student. These benefits would be truly game-changing.

Unfortunately for many Americans, these advantages remain distant. And because of the lack of sensible rules governing AI technology, we are more familiar with its darker side: the theft of people’s voices and visual likenesses; scams directed at seniors; political attack videos in which you can’t tell if what you’re seeing or hearing is actually the candidate you love (or hate); and, worst of all, children dying by suicide after turning to AI chatbots for help.

These harms will only multiply. That’s why it has been critical for states to step up and pass desperately needed AI safety standards while Congress sadly continues to delay enacting federal ones. Now we risk going backward, with President Donald Trump saying Monday that he will sign an executive order replacing state laws with “One Rulebook” that the public has never seen.

That executive order should concern every American. Tech companies should not be allowed to use their lobbying power to undo the few protections Americans have from the downsides of AI, protections passed at the state level with bipartisan support. Congress urgently needs to stop delaying and pass mutually agreed-upon federal AI standards. But it remains paramount that states be able to protect people right now, before such rules are enacted.

Despite a series of well-meaning and thorough bipartisan Senate meetings, Congress has been unable to overcome its own institutional inertia to pass comprehensive AI regulation. And tech leaders — who once warned that “mitigating the risk of extinction from AI should be a global priority” — are, at best, divided on what to do or, at worst, actively lobbying against proposals they think will thwart their short-term interests.

The most serious federal AI protection that has passed Congress and been signed into law by Trump is a bill I led with Sen. Ted Cruz and 20 others, the Take It Down Act. This legislation allows victims to demand the removal of intimate images — both authentic and AI-created deepfakes — published without their consent. While it is a good model in that it requires platforms to take down content, it doesn’t even scratch the surface of the many privacy, economic and national security risks AI poses.

Enter the states. After years of waiting on Congress, both Democratic and Republican governors and state legislatures have passed their own deeply needed AI laws. Tennessee’s ELVIS Act gives artists control over their AI-generated digital replicas so others cannot use their voices and likenesses without consent. New laws in Utah require some companies to disclose when people are interacting with AI. And from Alabama to Minnesota, 28 states have laws to rein in deceptive political deepfakes.

Supreme Court Justice Louis Brandeis famously argued that the states can serve as laboratories of democracy. Inspired by state action, many of us at the federal level are pressing for similar laws. Sen. Chuck Schumer and a bipartisan group of senators have put forward a road map to support AI innovation and improve safeguards. Sen. John Thune and I have come together to lead a bill that would promote innovation and transparency for AI systems in high-risk settings such as health care or public safety. Other federal bills would protect creators online, similar to Tennessee’s ELVIS Act.

But as of now, these are just concepts and bills, not laws. States have no choice but to act.

Tech lobbyists have frequently opposed even the most sensible federal standards and rules — such as labeling videos as produced by AI or taking down unauthorized content. In an act of total hubris, they are now arguing that states should be banned from regulating AI and pushing the president and Congress to override the states’ laws. That push has included a recent failed attempt to shoehorn a moratorium on state AI laws into Congress’s annual defense bill and a similar failed attempt this summer as part of congressional Republicans’ budget bill, which the Senate rejected in a 99-1 vote.

Details of the new executive order aren’t yet known, but a draft that circulated last month directed the U.S. attorney general to sue states to overturn their AI laws and would withhold broadband grants and other federal funding from states that have them.

Even if the executive order is challenged in court (as it should be), one thing is clear: The industry — which often says the right thing about wanting rules in place while actively working behind the scenes to scuttle major safeguards — and its allies in the White House and Congress want to strip Americans of the few legal protections they currently have from AI-created harms, rather than work with lawmakers to ensure these technologies are deployed responsibly.

This is wrong. We have seen the tragic consequences of the lack of enforceable safety standards, such as the young boy who died by suicide after confiding in ChatGPT about his emotional struggles and his plans to end his life. Though the chatbot suggested he seek help, it also provided feedback on a photograph of a noose the boy had made. Repealing the standards that do exist at the state level will only make matters worse.

Once we actually have federal standards passed by Congress, it should be for Congress to decide whether to preempt state laws or allow them to go further. And as AI continues to evolve, there will always be new applications of the technology that spur states to act before the federal government does. That is something we should encourage — it is how our laboratories of democracy were intended to function.

But we can’t supersede state protections until strong, enforceable federal standards are in place. So AI companies should join us in enacting meaningful safeguards, spearheaded by Congress, at the federal level — and stop pretending that, in the meantime, state standards are too much of a burden to bear. After all, how can you expect us to believe you’re on the precipice of creating groundbreaking superintelligence if you can’t manage to comply with a handful of state laws?

Tech leaders need to understand: There will be safety standards for these products. If they do not want a patchwork of state laws, they should work with Congress to pass comprehensive standards. Until then, states have a right — and a duty — to stand up for their citizens.

Amy Klobuchar, a Democrat, represents Minnesota in the U.S. Senate. This article originally appeared in the New York Times.
