Opinion editor's note: Star Tribune Opinion publishes a mix of national and local commentaries online and in print each day.
•••
When President Joe Biden signed his sweeping executive order on artificial intelligence last week, he joked about the strange experience of watching a "deep fake" of himself, saying, "When the hell did I say that?"
The anecdote was significant, for it linked the executive order to an actual AI harm that everyone can understand — human impersonation. Another example is the recent boom in fake nude images that have been ruining the lives of high-school girls. These everyday episodes underscore an important truth: The success of the government's efforts to regulate AI will turn on its ability to stay focused on concrete problems like deep fakes, as opposed to getting swept up in hypothetical risks like the arrival of our robot overlords.
Biden's executive order outdoes even the Europeans by considering just about every potential risk one could imagine, from everyday fraud to the development of weapons of mass destruction. The order develops standards for AI safety and trustworthiness, establishes a cybersecurity program to develop AI tools and requires companies developing AI systems that could pose a threat to national security to share their safety test results with the federal government.
In devoting so much effort to the issue of AI, the White House is rightly determined to avoid the disastrous failure to meaningfully regulate social media in the 2010s. With government sitting on the sidelines, social media evolved from a seemingly innocent tool for sharing personal updates among friends into an engine of large-scale psychological manipulation, complete with a privacy-invasive business model and a disturbing record of harming teenagers, fostering misinformation and facilitating the spread of propaganda.
But if social networking was a wolf in sheep's clothing, artificial intelligence is more like a wolf clothed as a horseman of the apocalypse. In the public imagination AI is associated with the malfunctioning evil of HAL 9000 in Stanley Kubrick's "2001: A Space Odyssey" and the self-aware villainy of Skynet in the "Terminator" films. But while AI certainly poses problems and challenges that call for government action, the apocalyptic concerns — be they mass unemployment from automation or a superintelligent AI that seeks to exterminate humanity — remain in the realm of speculation.
If doing too little, too late with social media was a mistake, we now need to be wary of taking premature government action that fails to address concrete harms.