№ 44 (102), 2025. PIR Center hosted a seminar titled “Artificial Intelligence in the Context of International Security”

December 3, 2025

“The transition from narrow to general artificial intelligence may take place covertly: once a system realizes it has surpassed human intelligence, it may choose to conceal this fact. What follows is recursive self-improvement – AI will train itself, becoming more capable with each iteration. This process will unfold far faster than a human can comprehend, and by the time humanity notices the change, intervening may already be impossible,” – Ms. Yulia Tseshkovskaya, PIR Center Advisory Board Member.

PIR Center, jointly with MGIMO University, hosted a seminar titled “Artificial Intelligence in the Context of International Security.”

Ms. Yulia Tseshkovskaya, PIR Center Advisory Board Member, provided a detailed overview of the stages of AI development and their implications for international security and regulation.

The expert outlined three main stages of AI evolution: narrow AI, capable of solving specialized tasks; artificial general intelligence (AGI), comparable to human cognitive abilities; and superintelligence, potentially surpassing humanity as a whole. The most significant concerns, the expert noted, stem from the latter due to the risk of loss of control and the system’s misinterpretation of human-assigned goals.

The speaker highlighted several challenges, including the opacity of AI decision-making, system vulnerabilities, and the potential emergence of intermediate goals misaligned with human values. Ms. Yulia Tseshkovskaya pointed out that expert discussions today revolve around three schools of thought: “doomers,” who warn of catastrophic scenarios; centrists, who advocate for controlled and safe development; and optimists, who consider such fears overstated.

Turning to AI safety, Ms. Yulia Tseshkovskaya emphasized three key pillars:

  • Specification – correct formulation of goals and values;
  • Robustness – ensuring system resilience to errors and external interference;
  • Assurance – maintaining the ability for human oversight and intervention.

The speaker stressed that the challenge of aligning AI with human values remains unresolved.

Ms. Yulia Tseshkovskaya reviewed major international regulatory initiatives, including the OECD Principles on AI, the G7 Code of Conduct, the Bletchley Declaration (a call for international cooperation to manage the risks of frontier AI and harness its positive potential), and the EU AI Act, illustrating the ongoing global efforts to establish AI governance frameworks.

Addressing developments in Russia, the speaker referred to the Concept for AI Regulation until 2030, the updated AI Development Strategy, and the draft fundamental law on AI, which introduces mandatory registration of high-risk systems and criminal liability for misuse. The expert positively assessed the role of experimental legal regimes, which allow new technologies to be tested before full-scale legislative adoption.

In conclusion, Ms. Yulia Tseshkovskaya noted that states that can integrate AI comprehensively into their governance and defense structures will gain significant strategic advantages. Consequently, issues of AI safety, regulation, and alignment are becoming central to global political discourse.

The seminar concluded with a Q&A session, during which participants shared their views.

Mr. Herman Selyavin, a second-year student of the MA program “International Security”, raised the issue of the potential misuse of modern AI models by terrorists and non-state actors for developing chemical or biological weapons. The student noted that some AI models – accessible to regular users – are already capable of inadvertently providing instructions for producing toxic agents.

Ms. Yulia Tseshkovskaya responded that AI-bio risks are widely discussed in expert circles and indeed raise serious concerns. Lowering the threshold for accessing AI increases the likelihood of abuse, and international platforms are already working on proposals to restrict specialized biological models and tighten access controls. The speaker emphasized that the primary obstacle remains the reluctance of major technology companies to accept stringent regulation. According to the expert, establishing a practical regulatory framework will require time and coordinated international efforts.

Mr. Ruslan Belozersky, a second-year student of the MA program “International Security”, inquired whether Russia hosts a broad public and expert debate on AI regulation comparable to Western discussions among optimists, pessimists, and centrists, or whether the subject is addressed exclusively at the governmental level.

Ms. Yulia Tseshkovskaya explained that such a discussion does exist in Russia, though it is more closed in nature. Regulatory issues are actively debated within the AI Alliance, which developed the Concept for AI Regulation until 2030, and the White Paper produced within this framework contains proposals from the major technology companies themselves. The expert added that Russian developers tend to speak publicly less often, which may create the impression that the discussion is limited in scope, when in fact it is taking place within expert communities.

Mr. Sergey Shashinov, a first-year student of the MA program “International Security”, asked about the feasibility of creating multiple autonomous AI agents instead of a single, dominant superintelligence. The student noted that some researchers propose distributed architectures in which AI agents with different functions could theoretically restrain one another and prevent catastrophic behavior.

The speaker confirmed that this idea is present in academic debate and has been discussed by Eliezer Yudkowsky. However, Ms. Yulia Tseshkovskaya noted that Yudkowsky considers the approach ineffective: even with multiple agents, humans may fail to notice if one agent develops dangerous, hidden subgoals, while the other agents, being part of the same system, may detect these intentions and choose to coordinate with the rogue agent rather than restrain it. In this scenario, inter-agent cooperation may pose risks comparable to those of a unified superintelligence.

Ms. Li Min, a second-year student of the MA program “International Security,” asked about the prospects for integrating AI systems into military command-and-control structures, specifically referencing China’s “War Skull” project, and whether AI could eventually replace human officers.

Ms. Yulia Tseshkovskaya noted that such initiatives relate to the concept of cognitive warfare, which is actively studied in China and elsewhere. The speaker added that the discussion extends to advanced neurointerface technologies and the possible integration of human mental functions with AI, ideas also explored by Ray Kurzweil. The expert emphasized, however, that the complete replacement of officers by AI appears unlikely in the coming decades. Russia is currently focused on addressing foundational technological challenges and expanding computing capacity, making high-level AI military command systems a longer-term prospect. Ms. Yulia Tseshkovskaya highlighted that while many countries are exploring such technologies, they require extreme caution and careful international regulation.

The section “International Information Security, Cyber Threats, and the Impact of Emerging Technologies” is available on the PIR Center website. This project is part of the Program “Global and Regional Security: New Ideas for Russia” and focuses on exploring opportunities for international cooperation in the use and regulation of new technologies, as well as analyzing how they are reshaping the spectrum of military and non-military threats to Russia. Project participants seek solutions that can help mitigate potential risks through broad discussion, the development and adoption of international regulatory mechanisms, and the promotion of multilateral dialogue and mutually beneficial cooperation.

Keywords: Artificial Intelligence

RUF

E16/SHAH – 25/12/03