On a summer night in Dallas in 2016, a bomb-handling robot made technological history. Police officers had attached C-4 explosive to it, steered it near an active shooter and detonated it. Micah Xavier Johnson became the first person in the U.S. to be killed by a police robot.
Then-Dallas Police Chief David Brown called the decision sound. Johnson had fatally shot five officers, wounded nine others and hit two civilians.
But some robotics researchers were troubled. "Bomb squad" robots are marketed as tools for safely disposing of bombs, not for delivering them to targets. Their profession had supplied the police with a new form of lethal weapon, and in its first use as such, it had killed a Black man.
Like most police robots in use today, the Dallas device was a straightforward remote-control platform. But more sophisticated robots are being developed with algorithms for, say, facial recognition, or for deciding on their own whether to fire projectiles. And many of today's algorithms are biased against people of color and others who are unlike the white, male, affluent and able-bodied designers of most robot systems. In the future, critical decisions might be made by a robot created by humans, with their flaws in judgment baked in.
"It is disconcerting that robot peacekeepers, including police and military robots, will, at some point, be given increased freedom to decide whether to take a human life," wrote Ayanna Howard, a robotics researcher at Georgia Tech, and her colleague Jason Borenstein.
During the past decade, evidence has accumulated that "bias is the original sin of AI," Howard noted in her 2020 audiobook, "Sex, Race and Robots." Facial-recognition systems have been shown to be more accurate in identifying white faces than those of other people. (In January, one system told the Detroit police that it had matched photos of a suspect with the driver's license of a Black man who had no connection to the crime.)
Joy Buolamwini, the founder of the Algorithmic Justice League and a graduate researcher at the MIT Media Lab, has encountered interactive robots at two laboratories that failed to detect her. (At MIT, she wore a white mask in order to be seen.)
The long-term solution is "having more folks that look like the United States population at the table when technology is designed," said Chris S. Crawford, a professor at the University of Alabama. Algorithms trained mostly on white male faces (by mostly white male developers) are better at recognizing white males.
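To see how skewed training data alone can produce that gap, here is a minimal, purely synthetic sketch. It does not use real face data or any real recognition system: random feature vectors stand in for faces, and the hypothetical "group A" and "group B" are illustrative only. A classifier fit on a dataset dominated by one group typically scores higher on that group, even when tested on balanced data.

```python
# Synthetic illustration of training-data imbalance (hypothetical groups, not real faces).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n_per_class, offset):
    """Return 'match' vs. 'non-match' feature vectors for one group of people."""
    X0 = rng.normal(loc=offset, scale=1.0, size=(n_per_class, 20))        # non-match
    X1 = rng.normal(loc=offset + 0.8, scale=1.0, size=(n_per_class, 20))  # match
    X = np.vstack([X0, X1])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y

# Training set: 900 examples from group A, only 100 from group B.
Xa_tr, ya_tr = make_group(450, offset=0.0)
Xb_tr, yb_tr = make_group(50, offset=2.0)
clf = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa_tr, Xb_tr]), np.concatenate([ya_tr, yb_tr])
)

# Balanced test sets: accuracy is usually much higher for the overrepresented group.
Xa_te, ya_te = make_group(500, offset=0.0)
Xb_te, yb_te = make_group(500, offset=2.0)
print("group A accuracy:", clf.score(Xa_te, ya_te))
print("group B accuracy:", clf.score(Xb_te, yb_te))
```

The only difference between the two groups in this toy setup is how many of their examples the classifier saw during training, yet its error rate on the underrepresented group is far higher. Real facial-recognition pipelines are more complex, but the underlying dynamic Crawford describes is the same: a system learns best what its training data, and its designers, show it most.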