Opinion editor's note: Star Tribune Opinion publishes a mix of national and local commentaries online and in print each day.


You could almost hear palms smacking onto foreheads all over the techier corners of the internet recently after a Google artificial intelligence program began generating pictures of Black founding fathers, a female pope and other notions that would exist only in the most fringe-progressive alternate reality.

No, the culprit wasn't some obsessively "woke" technocrat flailing at his keyboard. It was the unintended result of a legitimate but overly rushed attempt by Google to program out AI's disturbing tendency toward racism and misogyny (a tendency that, it must be said, arises logically from the fact that it gets most of its information from an internet that reflects American culture).

The episode illustrates just one of the many unforeseen challenges ahead regarding a technology that holds more promise — and more potential for societal havoc — than any since the creation of the internet itself.

Chief among those challenges may be the fact that this powerful new technology today requires, more than anything, careful and deliberate development.

Yet all the incentives for the companies doing the developing — Google, Meta, Microsoft, Amazon and a galaxy of less-familiar names — push them to roll out their programs as quickly as possible in what amounts to an AI arms race.

The implications are too far-reaching to leave to the profit-driven feeding frenzy that is today's tech industry. Congress must stop dithering on AI and set up a regulatory structure as soon as possible to govern its development and use.

AI is the quest to create programs that combine the massive, lightning-fast data processing capacity of computers with the ability to creatively reason, analyze and produce original ideas — to "think," as humans do.

Whether that last part can ever truly be achieved is a towering question for both programmers and philosophers. But the positive practical applications already are astonishing. Medical diagnostics, drug discovery, manufacturing robotics, educational materials, cybersecurity, transportation advances, retail efficiencies — these are only a few of the many real-world endeavors where some form of AI is being regularly utilized right now to carry out complex processes faster and better than people alone can.

The darker side of the technology has been demonstrated as well, though, particularly in the area of misinformation and "deepfakes." One of these cloned President Joe Biden's voice in a robocall urging voters to sit out the New Hampshire primary. Another produced a vulgar video involving the president and his granddaughter. Yet another shows Hillary Clinton endorsing Republican Florida Gov. Ron DeSantis for president — created by supporters of former President Donald Trump to jab at DeSantis, his former GOP primary opponent.

None of this actually happened, but all of it looked and sounded real. Consider the other possibilities for electoral mischief in a political culture that already is deeply divided over fundamental questions about what's real and what's "fake news."

Distrust in institutional norms is among the biggest threats to American democracy today, and malignant use of AI technology could worsen that situation in almost unfathomable ways. And that's before even considering how identity thieves, con artists and even terrorists could abuse this technology.

The Biden administration has nibbled at the edges of the issue with an executive order offering largely nonbinding guidelines for development and use of AI. Congress is talking — and talking and talking — on the issue. But it hasn't offered a coherent legislative strategy for regulating a transformative technology that is careening toward implications even its creators don't fully understand.

The federal government must slow this down for the sake of societal safety; profit-driven competitors have already shown they aren't going to tap the brakes on their own.

The Google debacle is instructive. Bloomberg said it best in a recent headline: "Google's AI isn't too woke. It's too rushed."

The same could be said of AI technology more broadly. And it's a failing that could ultimately cause far bigger problems than just embarrassingly ahistorical imagery.