
Avoiding dangers of AI in warfare requires responsible U.S. leadership in the international community, general says

Steven Schultz, Office of Engineering Communications
Photos by Sameer A. Khan/Fotobuddy

International agreements and coordinated diplomatic pressures will be essential in avoiding indiscriminate use of artificial intelligence in lethal military conflicts, retired Gen. John R. Allen told an audience at Princeton University on Wednesday, Oct. 2.

Gen. John Allen (left), president of the Brookings Institution, spoke with Edward Felten, Princeton professor of computer science and public affairs, at a public lecture Oct. 2 focused on the use of artificial intelligence in warfare. Allen emphasized the importance of longstanding ethical frameworks within the U.S. military as well as international conventions as critical to avoiding wanton uses of emerging lethal technologies.


Allen, now president of the Brookings Institution in Washington, D.C., engaged in a wide-ranging conversation about emerging uses of artificial intelligence on the battlefield with Edward Felten, Princeton’s Robert E. Kahn Professor of Computer Science and Public Affairs.

“This is a capability that has the capacity for great good, but more likely I worry it could easily be a technology that could be applied with great destructiveness that we don’t understand yet,” Allen said.


Members of the University’s ROTC program attended the event.

The discussion, held before an audience of students (including numerous ROTC participants), faculty members and others at Princeton’s Maeder Hall, was part of the G.S. Beckwith Gilbert ’63 Lectures series and was co-sponsored by the Center for Information Technology Policy.

In response to an opening question from Felten, Allen said that warfare has always involved a balance of human and technological dimensions and that the military has often been an early adopter of technology. Many benefits to the military in adopting artificial intelligence are clear, he said, ranging from much more efficient and effective logistics, inventory and maintenance, to better intelligence gathering and sensing of targets.

“But often — and that’s where I figure we are today — the technology outstrips the human dimension to fully grasp the potential and ultimately to thoroughly integrate that technology,” said Allen.

“It is way too easy to kill people in the battle space, especially people who are innocent,” Allen said. “It is way too easy for that to occur through the sloppy adoption of lethal technologies.”

Key questions revolve around the role of human decision-making in deploying deadly force, including selection of targets and decisions to start and abort missions, said Allen, who commanded NATO forces in Afghanistan and served as the special presidential envoy for the global coalition to counter the Islamic State terrorist organization before retiring in 2015.

Allen said that the United States has been well served by longstanding ethical principles governing the use of force and military action. Specifically, he emphasized three elements of decision-making: necessity, distinction and proportionality. Necessity refers to whether an attack is appropriate in the first place; distinction is the matter of distinguishing actual targets from innocent bystanders, and of deciding whether to abort a mission if that distinction is not clear; and proportionality refers to judgments about how much force to apply and whether further attacks are needed given the aims of the mission.

“We are nowhere near, in the development of artificial intelligence right now, where all three of those things can be done by an autonomous system,” Allen said. “In the end it’s not so much about the technology as it is about the role of the human in this process.”

Allen described two ways to think about human control: being “on the loop,” meaning that a human monitors a system to make sure it is behaving as expected, or being “in the loop,” meaning that a human actively makes go or no-go decisions. “The issue really is, what is the level of human control?”

Allen and Felten, who served as deputy U.S. chief technology officer in the Obama White House, agreed that a 2012 Department of Defense directive, Directive 3000.09 on autonomy in weapon systems, lays important groundwork for thinking about these issues. “It is a remarkable document in its prescience in seeing how a responsible nation, guided by ethics, will consider the use of artificial intelligence in the context of engagement,” Allen said.

The directive, he said, builds on principles that have long been at the root of military training for U.S. officers. “Any American commander who will be applying these new technologies — not just AI but those that are on the horizon and coming at us — will be guided inherently by a longstanding commitment, both through personal oath and through constant education and training, to living and commanding and existing and leading in an environment that is dominated by ethics.”

In response to a question from a student in the audience, Allen emphasized that military leaders must be taught to avoid complacency in accepting recommendations from automated systems and to apply judicious skepticism to machine-driven inputs. He cited the 1983 incident in which a Soviet officer, Stanislav Petrov, declined to report an apparent U.S. nuclear launch up the chain of command, despite clear but ultimately incorrect signals from the Soviet early-warning system that the United States had fired missiles.

“There was something about his training and his instincts and his understanding of the system that he said ‘This is a false alert,’” Allen said. “Or we might not be sitting here today. That was a really good question. Thank you for asking that.”

But Allen emphasized that no nation, including the United States, can control the use of such systems alone.

“We have got to have a robust international conversation that can embrace what this technology is and if necessary we can have a conversation about controlling it, whether it’s a treaty that fully bans it or a treaty that only permits the technology to be used in certain ways,” he said.

There will always be rogue actors who flout international law, but a healthy coalition of democracies can exert very substantial pressure to minimize such abuses, he said.

Felten, who was the founding director of Princeton’s Center for Information Technology Policy, has been a leading voice on the role of artificial intelligence in society, including work on the subject as deputy U.S. chief technology officer in the Obama White House.

Another student asked whether the predictive algorithms that underlie autonomous systems should be vetted and made subject to international weapons treaties. “They should be,” Allen said, noting that even defining the technical terms has been challenging.

“It’s not binary,” he said. “We won’t necessarily say you can’t use this algorithm, but we may say if you’re going to use those predictive algorithms, it can only be for certain purposes.”

Achieving such international agreements will require U.S. leadership, Allen said. And leadership on advanced technologies goes hand in hand with leadership on a range of critical and difficult issues, he said.

“When people ask me what I worry about most in terms of threats to the United States, I say, look, we can handle the differences we have with the Chinese, we can handle the Russians, we can handle the jihadists. We will take a knock from time to time but we can handle all that. We can’t handle climate.

“When the United States withdrew its leadership from the Paris Climate Accord, when the United States doesn’t show up and report on sustainable development goals … that is the kind of challenge that is really existential if we are not careful. So much of what this world has been able to achieve in the aftermath of the Cold War has been directly as a result of the United States’ leadership and partnerships with similar countries,” Allen said.

“The United States needs to reassert its global commitment to human rights, reassert its commitments to democratic leadership, that liberal democracy is about human rights and gender equity, freedom of speech and a commitment to a set of values that are recognizable to our partners overseas.”
