aNewDomain — Ever heard of Stanislav Petrov?
You know. He’s the Russian who saved the world.
The date was Sept. 26, 1983, just after midnight in Moscow. Petrov, a lieutenant colonel in the Soviet Air Defence Forces, was responsible for monitoring the early-warning system and alerting Soviet leadership in the event of a U.S. nuclear missile strike. In that eventuality, the USSR was to immediately launch an all-out nuclear counterattack on the U.S., Petrov told reporters years later.
And that night, 33 years ago today, the computer alarms did go off, warning that a single American nuclear missile was on its way. Petrov figured it was a computer error, as it seemed unlikely that the U.S. would launch just one missile.
But then the alarms went off again, growing louder and louder as the computer notified him that a second, then a third, fourth and fifth missile were on their way, too.
The monitor in front of Petrov started flashing the Russian word for START in bright letters, an automated instruction apparently indicating that the USSR should launch its massive counterstrike.
Petrov had no way of knowing for sure, but intuition told him the computer was mistaken. The alarms grew deafening. Petrov had just minutes to decide whether to follow orders and call Soviet leadership, as protocol demanded, or to trust his gut.
If he was wrong, U.S. missiles would wreak destruction on the USSR and everything he held dear, without any counterattack at all. But what if he was right?
Within a few minutes, Petrov knew he’d been right — and that he had just prevented global thermonuclear war.
Could Petrov have saved the world in the age of AI?
“Petrov made his decision by taking into account the complex context around him. He was trained in a specific context — his machine and the lights — but when things went down, he looked beyond that context and reasoned that it didn’t make sense. And he took appropriate action,” says Jim Hendler, a Rensselaer Polytechnic Institute professor of computer, web and cognitive sciences and the coauthor, with Alice Mulvehill, of the upcoming book, Social Machines: The Coming Collision of Artificial Intelligence, Social Networking and Humanity (Springer, 2016).
But the real question, Hendler told me after his recent appearance on a headline artificial intelligence panel at the Heidelberg Laureate Forum in Germany, isn’t so much how Petrov saved the world, though that certainly is interesting. It’s what happens the next time such an alarm sounds and there’s no Petrov around.
“My bigger worry,” explains Hendler, “has to do with AI getting smarter (because) at some point we’re going to remove Petrov from the loop.” Removing humans from key warfare decisions is already a topic of discussion around drone and cyberwarfare, he added.
The “issue is having someone (i.e., a human) with intuition being somewhere in the loop before the missiles get launched.”
Can we guarantee technology is compliant with the rules of war?
The future of AI, robotics and other technologies in relation to critical human decisions in peacetime and wartime was intensely debated by Hendler and the other artificial intelligence and deep-learning experts at HLF 2016 in Heidelberg last week.
The panel included Google VP Vint Cerf, a Turing Award winner perhaps best known as an Internet pioneer for his role in developing TCP/IP. Also on the panel were Karlsruhe Institute of Technology (KIT) director Thomas Dreier, ETH professor Dirk Helbing, Facebook AI Paris research scientist Holger Schwenk, CMU scientist and 1994 Turing Award winner Raj Reddy, and Noel Sharkey, an emeritus professor of AI and robotics at the University of Sheffield and chair of the International Committee for Robot Arms Control.
Couldn’t so-called smart machines and robots be programmed with caution as well as with the laws of war in mind?
“I see no way we can guarantee compliance with the laws of war,” said the University of Sheffield’s Sharkey, who is also a co-founder of the Foundation for Responsible Robotics.
This is “a real worry for international security — we have no idea what will happen.”
“We’ve got to take responsibility for the technology we create,” Sharkey added. “Yet we seem to be sleepwalking into this the same way we did into the internet.”
Humans often have no clue what the future holds so far as technology is concerned. Yet the makers of new technologies often over-promise.
For instance, the makers of self-driving cars often promise that their products will save lives, Sharkey points out. But will they? We don’t know, he said, and “I wish (manufacturers) would stop saying that.”
Perhaps humans shouldn’t be putting AI, robots or other smart machines in the decision loop at all, he added. A more prudent approach might be just to use them as sensors, and let humans and human values and intuition make key determinations.
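To make that stance concrete, here is a minimal, hypothetical sketch in Python of the “AI as sensor, human as decider” pattern Sharkey describes. None of it comes from the panel; the names (SensorReading, automated_detector, require_human_confirmation) are invented for illustration. The point is the shape of the control flow: the machine can only raise and describe an alert, and nothing escalates until a person weighs it.

```python
# Hypothetical sketch of a "human-in-the-loop" gate. The names and
# thresholds are invented for illustration, not drawn from any real system.

from dataclasses import dataclass

@dataclass
class SensorReading:
    source: str        # which detector produced the alert
    confidence: float  # model confidence, 0.0 to 1.0
    detail: str        # human-readable description of the evidence

def automated_detector() -> SensorReading:
    # Stand-in for an AI/sensor pipeline: it may classify well on the
    # cases it was trained for, and badly on the ones it wasn't.
    return SensorReading(source="early-warning-model",
                         confidence=0.97,
                         detail="5 inbound tracks detected")

def require_human_confirmation(reading: SensorReading) -> bool:
    # The machine never acts on its own; it only presents evidence.
    # A person weighs the context the model can't see, as Petrov did.
    print(f"[ALERT] {reading.source}: {reading.detail} "
          f"(confidence {reading.confidence:.0%})")
    answer = input("Confirm and escalate? Type YES to proceed: ")
    return answer.strip() == "YES"

if __name__ == "__main__":
    reading = automated_detector()
    if require_human_confirmation(reading):
        print("Escalating to the human chain of command.")
    else:
        print("Logged as suspected false alarm. No action taken.")
```

The design choice worth noticing is that the model’s output is treated as evidence, never as authority; the irreversible step sits behind a human judgment by construction.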
That’s the sort of thinking that makes sense, says RPI’s Hendler.
“There are three questions,” Hendler adds. “One: Will AI (systems) work well enough during unplanned and unlikely events (that) they were not trained (for)? Two: Will humans be able to (properly question) machines in that case, especially when the sophistication (of the machines) is high? And three: Will there even be a human there at all? That’s the one that scares me.”
This is a legitimate concern. In a world of smart robots, deep-learning software and artificial intelligence, would a Petrov figure have been there at all?
“What I hope the world will be smart enough to do is realize that if the humans and computers have different capabilities, then where serious decisions are being made (life/death, major money, etc.) we’re smart enough to find ways to keep the human in the loop,” Hendler told me after HLF 2016 wrapped up.
More than worries about whether AI-equipped computers will destroy mankind in various sci-fi-like scenarios, as physicist Stephen Hawking has warned, these are pressing, realistic questions, Hendler said.
After all, HAL 9000 attacked humans in a movie. But Petrov saved a real world — our world — and we must ensure conditions are right for future saves to happen.
For aNewDomain commentary, I’m Gina Smith.
Art credits — The Russian who saved the world: Petrow_semperoper2.JPG by Z thomas, derivative work by Hic et nunc, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=28072980. All photos from HLF 2016: 4th Heidelberg Laureate Forum, 20.09.2016, Heidelberg, Germany. Picture/Credit: Christian Flemming/HLF, All Rights Reserved. Cover image: Wikimedia Commons, All Rights Reserved.