Putin Says He Hopes AI Won't Run a Country, But Russia & Other Nations Still Pursuing AI Use in War

In early December, Russian President Vladimir Putin participated in a video conference about the future role of artificial intelligence (AI). During that conference, when asked whether AI could run the country someday, Putin said he hoped not and explained the shortcomings of AI. As reported by the private Russian news agency Interfax, here was his response:

AI has neither a heart, nor soul, nor compassion or conscience, Putin said.

“All these components are extremely important in people who are vested by their citizens with special powers to make and implement decisions to the benefit of the country,” he said.

At times, presidents do have to make decisions which may not seem quite rational at first glance, Putin said. “They have to be based on history, culture, current practices, the aspirations and expectation of the country’s citizens. These social sector decisions sometimes seem irrational in the area of pension security, health care, and other spheres of human activity,” he said.

“For a human president, they seem and are justified, because he makes decisions in the interests of living human beings, not machines,” he said.

AI could be a good helper and teacher for anyone, including the head of state, Putin said.

“The role and significance of AI in public administration will doubtlessly grow. I’m very hopeful, Afina, that your colleagues will make relevant decisions with the realization of their responsibility, should they work with heads of state,” he said.

Putin made some very good points about the limitations and even dangers of AI, which I don't think are being considered nearly enough. I have heard it stated very succinctly that AI would make the perfect psychopath – possessing an intellect of sorts but none of the components of most human minds that serve as potential brakes on destructive behavior.

I had a Lyft driver engage me in a conversation about AI several weeks back. He seemed to be – as many others are – enamored of the potential of AI to perform all sorts of neat tricks in our post-modern world. But he didn't discuss any of the possible problems. I finally asked him how it would be possible to program empathy or holistic human experience into a machine. He admitted he didn't know. I also asked him whether he thought human morality was keeping pace with humanity's technological advance. He confessed that he thought it wasn't.

While it's never fun to play the role of Debbie Downer, I was glad that I may have chastened his enthusiasm for AI long enough for him to think the implications through more thoroughly. It's something that modern humans seem to have a real blind spot about. We're very good at thinking to our short-term advantage and getting taken in by the bright, the shiny, and the easily gratifying, but not so good at considering the long term, the larger context, unintended consequences, and so on. We also have institutions that encourage this kind of poor thinking, such as corporations that are legally structured to maximize profits with little to no concern for the long-term human and environmental consequences.

Unfortunately, Putin's expressed understanding of the problems of AI hasn't stopped Russia from pursuing AI in the context of war, a.k.a. killer robots.

Granted, the nature of international military competition, along with the constant advancing of U.S. and NATO policies that unnecessarily increase tensions on Russia's borders, surely contributes to this policy decision. The U.S. is certainly not shying away from the potential use of killer robots either.

However, the potential dangers of the use of autonomous AI machines by any country in this context should concern us all. Peace and human rights activists sounded the alarm about this in 2014 with an open letter signed by dozens of activists, including numerous Nobel Peace Prize winners, declaring that the use of such technology in war was "unconscionable." The letter was issued on the eve of a United Nations conference in Geneva, Switzerland, held that year to discuss the Convention on Certain Conventional Weapons (CCW), otherwise known as the Inhumane Weapons Convention.

In 2017, a high-ranking U.S. military general testified before the U.S. Senate that the use of such weapons in warfare should be limited. Common Dreams reported at the time:

Gen. Paul Selva spoke about automation at his confirmation hearing before the Senate Armed Services Committee, saying that the “ethical rules of war” should be kept in place even as artificial intelligence (AI) and drone technology advances, “lest we unleash on humanity a set of robots that we don’t know how to control.”

The Defense Department currently mandates that a human must control all actions taken by a drone. But at the hearing, Sen. Gary Peters (D-Mich.) suggested that by enforcing that requirement, which is set to expire this year, the U.S. could fall behind other countries including Russia.

Of course, General Selva ran with the Russia-as-bogeyman framing and suggested that other countries didn't necessarily have the same moral compass that the U.S. did in such matters. While the U.S.'s track record in foreign and military matters over recent decades makes this notion tragically laughable, it is refreshing to hear a military man say that there should be limits on the use of this technology: "I don't think it's reasonable for us to put robots in charge of whether or not we take a human life," Selva told the committee.

Around the same time, Elon Musk also publicly sounded the alarm about this technology before a meeting of U.S. state governors:

Days before Gen. Selva’s hearing, Musk spoke at the National Governors Association about the potential for an uncontrollable contingent of robots in the future.

The inventor acknowledged the risks AI poses for American workers, but added that the concerns go beyond employment. “AI is a fundamental existential risk for human civilization, and I don’t think people fully appreciate that,” Musk said.

He urged governors throughout the U.S. to start thinking seriously now about how to regulate robotics—before AI becomes an issue that’s out of humans’ control.

The very concept of AI arose out of the reductive idea of treating the human mind as an information processing unit or a computer. The problems with this framing of the human mind were discussed by Professor Theodore Roszak in a 1986 book called The Cult of Information. Roszak was a professor of history at Cal State Hayward (now called Cal State East Bay).

I graduated from Cal State Hayward, and my father before me studied psychology there and took a class with Roszak in the 1970s, which is how I came to be introduced to his work – work that also included developing the field of ecopsychology. Roszak was a brilliant but underrated thinker, and in the video below he discusses the problems with using the model of a computer or information processing unit to understand the human mind. He makes some interesting and prescient comments about AI, as well as about the prospect of this technology having control of our nuclear arsenal.