In a news conference this week, Attorney General Eric Holder was asked what he planned to do to increase the Obama administration's transparency with regard to the drone program. "We are in the process of speaking to that," Holder said. "We have a rollout that will be happening relatively soon."
Due to the program's excessive secrecy, few solid details are available to the public. Yet, as new technologies come online — on Tuesday, the Navy launched an unmanned stealth jet from an aircraft carrier — new concerns are emerging about how the U.S. government may use drones.
The X-47B, which can fly without human input, is a harbinger of what's to come. A growing number of international human-rights organizations are concerned about the development of lethal autonomy — that is, drones that can select and fire on people without human intervention. But as the outcry over this still-hypothetical technology grows, it's worth asking: Might the opposite be true? Could autonomous drones actually better safeguard human rights?
Last month, Christof Heyns, the U.N. special rapporteur on extrajudicial, summary or arbitrary executions, released a major report calling for a pause in developing autonomous weapons and for the creation of a new international legal regime governing future development and use. Heyns asked whether this technology can comply with human-rights law and whether it introduces unacceptable risk into combat.
The U.N. report follows a similar one issued last year by Human Rights Watch. HRW argues that autonomous weapons take humanity out of conflict, creating a future of immoral killing and increased hardship for civilians. The organization calls for a categorical ban on all development of lethal autonomy in robotics and is spearheading a new global campaign to that end.
That is not as simple as it sounds. "Completely banning autonomous weapons would be extremely difficult," Armin Krishnan, a political scientist at the University of Texas at El Paso who studies technology and warfare, told me. "Autonomy exists on a spectrum."
If it's unclear where to draw the line, then maybe intent is a better way to think about such systems. Lethally autonomous defensive weapons already exist: the Phalanx missile-defense gun, for example, decides on its own when to fire. Dodaam Systems, a South Korean company, even manufactures a machine gun that can automatically track and kill a person from two miles away. These stationary, defensive systems have not provoked the outcry that autonomous drones have. "Offensive systems, which actively seek out targets to kill, are a different moral category," Krishnan explains.
Yet many experts are not convinced that autonomous attack weapons are necessarily a bad thing, either. "Can we program drones well? I'm not sure if we can trust the software or not," Samuel Liles, a Purdue professor specializing in transnational cyberthreats and cyberforensics, wrote in an e-mail. "We trust software with less rigor to fly airliners all the time."