SEOUL, South Korea — Leading artificial intelligence companies made a fresh pledge at a mini-summit Tuesday to develop AI safely, while world leaders agreed to build a network of publicly backed safety institutes to advance research and testing of the technology.
Google, Meta and OpenAI were among the companies that made voluntary safety commitments at the AI Seoul Summit, including pulling the plug on their cutting-edge systems if they can't rein in the most extreme risks.
The two-day meeting is a follow-up to November's AI Safety Summit at Bletchley Park in the United Kingdom, and comes as governments and global bodies scramble to design guardrails for the technology amid fears about the potential risk it poses both to everyday life and to humanity.
Leaders from 10 countries and the European Union will "forge a common understanding of AI safety and align their work on AI research," the British government, which co-hosted the event, said in a statement. The network of safety institutes will include those already set up by the U.K., U.S., Japan and Singapore since the Bletchley meeting, it said.
U.N. Secretary-General Antonio Guterres told the opening session that seven months after the Bletchley meeting, "We are seeing life-changing technological advances and life-threatening new risks — from disinformation to mass surveillance to the prospect of lethal autonomous weapons."
The U.N. chief said in a video address that there need to be universal guardrails and regular dialogue on AI. "We cannot sleepwalk into a dystopian future where the power of AI is controlled by a few people — or worse, by algorithms beyond human understanding," he said.
The 16 AI companies that signed up for the safety commitments also include Amazon, Microsoft, Samsung, IBM, xAI, France's Mistral AI, China's Zhipu.ai, and G42 of the United Arab Emirates. They vowed to ensure the safety of their most advanced AI models with promises of accountable governance and public transparency.
It's not the first time that AI companies have made lofty-sounding but non-binding safety commitments. Amazon, Google, Meta and Microsoft were among a group that signed on last year to voluntary safeguards brokered by the White House, pledging to ensure their products are safe before release.