
Autonomous Systems on the Battlefield: How Can the Danger from LAWS Be Estimated?

June 27, 2019

MOSCOW, JUNE 27. PIR PRESS. “Autonomous systems are gradually displacing humans from the battlefield, and in many aspects, this can be a boon to the military, who are exposed to less risk. However, at the same time, humans transfer to artificial intelligence (AI) a part of their powers, and consequently a part of their responsibility. According to experts, neural networks will probably never learn to explain their decisions to humans. This can become a serious problem once AI is involved in such areas as intelligence, data analysis, communications and control, scenario development, and in the long run decision making.” – Director of PIR Center’s Emerging Technologies and Global Security Project Vadim Kozyulin.

On May 31, PIR Center and the Institute of Contemporary International Studies of the Diplomatic Academy of the Ministry of Foreign Affairs of the Russian Federation held a seminar as part of the Midweek Brainstorming Sessions series on Autonomous Systems in Military Affairs — International Legal Aspects. The seminar featured presentations by Director of PIR Center’s Emerging Technologies and Global Security Project Vadim Kozyulin and Senior Researcher of the Harvard Law School Program on International Law and Armed Conflict Dustin Lewis.

According to Vadim Kozyulin, the use of AI in military affairs generates three groups of threats. First, as far as the use of weapons is concerned, the central issue is the exclusion of humans from the decision-making cycle. This issue is the subject of international consultations under the auspices of the Convention on Certain Conventional Weapons and is also frequently raised in campaigns by non-governmental activists. The second group of threats concerns the maintenance of strategic stability; according to the expert, military artificial intelligence can have a destabilizing effect in this area. Finally, the third group of threats involves the introduction of AI not only into weapon systems but more broadly into communications systems, intelligence collection, logistics, etc. The fast pace and increasing automation of the military domain will reduce the space for human participation in decision making. Vadim Kozyulin noted that today it is difficult to predict to what extent any given threat will manifest itself. Nevertheless, governments should assess these threats realistically and act proactively to prevent worst-case scenarios.

Dustin Lewis presented research findings of the Harvard Law School Program on International Law and Armed Conflict (HLS PILAC). In 2018–2020, HLS PILAC is running a project on International Legal and Policy Dimensions of War Algorithms: Enduring and Emerging Concerns. The project studies the ways that artificial intelligence and computer algorithms influence military affairs and how this interacts with international law. Dustin Lewis discussed how legal reviews of weapons involving AI technologies could be carried out and proposed 16 elements and properties that could be taken into account in such reviews. According to the expert, one area on which the discussion of artificial intelligence and computer algorithms in military affairs could focus is the preservation of legal responsibility regimes.

Seminar participants included representatives of the Ministry of Foreign Affairs of the Russian Federation and the International Committee of the Red Cross, members of Trialogue Club International, and experts from the Diplomatic Academy of the Ministry of Foreign Affairs of the Russian Federation, MGIMO University, the Russian International Affairs Council, and the Institute for Systems Analysis of the Russian Academy of Sciences.

For all questions regarding the Emerging Technologies and Global Security Project at PIR Center please contact Vadim Kozyulin, tel. +7 (495) 987 19 15, email kozyulin@pircenter.org.