Almost human: New era of robot tech edges toward reality

  • Article by: JOHN MARKOFF
  • New York Times
  • November 23, 2012 - 10:58 PM

Using an artificial intelligence technique inspired by theories about how the brain recognizes patterns, technology companies are reporting startling gains in fields as diverse as computer vision, speech recognition and the identification of promising new molecules for designing drugs.

The advances have led to widespread enthusiasm among researchers who design software to perform such human activities as seeing, listening and thinking. They offer the promise of machines that converse with humans and perform such tasks as driving cars and working in factories, raising the specter of automated robots that could replace human workers.

The technology, called deep learning, has already been put to use in services like Apple's Siri virtual personal assistant, which is based on Nuance Communications' speech recognition service, and in Google's Street View, which uses machine vision to identify addresses.

But what is new in recent months is the growing speed and accuracy of deep-learning programs, often called artificial neural networks or "neural nets" for their resemblance to the brain's neural connections.

"There has been a number of stunning new results with deep-learning methods," said Yann LeCun, a New York University computer scientist who did pioneering research in handwriting recognition at Bell Laboratories. "The kind of jump we are seeing in the accuracy of these systems is very rare indeed."

Artificial intelligence researchers are acutely aware of the dangers of being overly optimistic. Their field has long been plagued by outbursts of enthusiasm followed by equally striking declines. In the 1960s, some experts believed that machines with human-level intelligence were just 10 years away. In the 1980s, a wave of commercial startups collapsed, leading to what some people called the "AI winter." But recent achievements have impressed a wide spectrum of computer experts.

Advances in pattern recognition hold implications not just for drug development but for an array of applications. With greater accuracy, for example, marketers can comb large databases of consumer behavior to get more precise data on buying habits. And improvements in facial recognition are likely to make surveillance technology cheaper and more common.

Artificial neural networks, an idea going back to the 1950s, seek to mimic the way the brain absorbs information and learns from it. In recent decades, University of Toronto computer scientist Geoffrey Hinton, 64 (a great-great-grandson of the 19th-century mathematician George Boole, whose work in logic is the foundation for modern digital computers), has pioneered new techniques for helping the artificial networks recognize patterns.

Modern artificial neural networks are composed of an array of software components, divided into inputs, layers and outputs. The arrays can be "trained" by repeated exposures to recognize patterns like images or sounds. These techniques, aided by the growing speed and power of modern computers, have led to rapid improvements in speech recognition, drug discovery and computer vision. And deep-learning systems have recently outperformed humans in some recognition tests.
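The structure described above can be sketched in miniature. The following is an illustrative toy, not the production systems in the article: a network with an input layer, one hidden layer and an output layer, "trained" by repeated exposure to four input patterns (the classic XOR function) using plain gradient descent. All names and sizes here are choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training patterns: inputs X and the target outputs y (the XOR function,
# a pattern a network with no hidden layer cannot learn).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights connecting input -> hidden layer and hidden layer -> output.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    return h, sigmoid(h @ W2 + b2)  # network output

_, out0 = forward(X)
mse_initial = float(np.mean((out0 - y) ** 2))

# "Repeated exposures": many passes over the patterns, each time nudging
# the weights downhill on the squared error (backpropagation).
lr = 2.0
for _ in range(10000):
    h, out = forward(X)
    d_out = (out - y) * out * (1 - out)    # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)     # error pushed back to hidden layer
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

_, out_final = forward(X)
mse_final = float(np.mean((out_final - y) ** 2))
print("error before training:", mse_initial)
print("error after training: ", mse_final)
```

The networks behind the speech and vision gains the article reports work on the same principle but are "deep": many hidden layers rather than one, trained on millions of examples instead of four.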

One of the most striking aspects of the research led by Hinton is that it has taken place largely without the patent restrictions and bitter infighting over intellectual property that characterize high-technology fields. "We decided early on not to make money out of this, but just to sort of spread it to infect everybody," he said.

Referring to the rapid deep-learning advances made possible by greater computing power, he added: "The point about this approach is that it scales beautifully. Basically you just need to keep making it bigger and faster, and it will get better. There's no looking back now."

© 2014 Star Tribune