It is hard not to be worried when the so-called godfather of artificial intelligence, Geoffrey Hinton, says he is leaving Google and regrets his life's work.
Hinton, who made a critical contribution to AI research in the 1970s with his work on neural networks, told several news outlets this week that large technology companies were moving too fast on deploying AI to the public. Part of the problem was that AI was achieving humanlike capabilities more quickly than experts had forecast. "That's scary," he told the New York Times.
Hinton's concerns certainly make sense, but they would have carried more weight had they come several years earlier, when other researchers who didn't have retirement to fall back on were ringing the same alarm bells.
Tellingly, Hinton sought in a tweet to clarify how the New York Times had characterized his motivations, worried that the article suggested he had left Google in order to criticize it. "Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google," he said. "Google has acted very responsibly."
While Hinton's prominence in the field might have insulated him from blowback, the episode highlights a chronic problem: large technology companies have such a stranglehold on AI research that many of their scientists are afraid to air their concerns for fear of harming their career prospects.
You can understand why. Meredith Whittaker, a former research manager at Google, had to spend thousands of dollars on lawyers in 2018 after she helped organize the walkout of 20,000 Google employees over the company's contracts with the U.S. Department of Defense. "It's really, really scary to go up against Google," she tells me.