Opinion editor's note: Editorials represent the opinions of the Star Tribune Editorial Board, which operates independently from the newsroom.
•••
Some of the largest and most powerful artificial intelligence companies in the country — Google, Meta (Facebook), Amazon, Microsoft and others — have, at the behest of the White House, agreed to abide by voluntary safety and security standards, a move needed to protect the public.
In a recent White House meeting with President Joe Biden, those companies — along with Anthropic, Inflection and, notably, OpenAI, the creator of ChatGPT — committed to protective guardrails.
Remarkably, the agreement includes a pledge to allow independent security experts to test the companies' systems before public release and to share safety data with government officials and academics.
The companies also have committed to developing "watermarking" tools that will alert the public whenever an image, video or piece of text has been created by artificial intelligence. That is another badly needed move, given the public's growing inability to distinguish human-generated text and images from those produced by artificial intelligence.
Nick Clegg of Meta, the parent company of Facebook, said in a statement that the safeguards "are an important first step in ensuring responsible guardrails are established for AI, and they create a model for other governments to follow."
In announcing the agreement, Biden rightly noted that emerging AI technologies can pose a threat "to our democracy and our values." Taking the proper precautions, he said, could keep that threat from materializing.