CHAPTER 7. HACKING THE FUTURE: HOW ARTIFICIAL INTELLIGENCE AND NEW TECHNOLOGIES RESHAPE INTERNATIONAL SECURITY

Globalization and technological progress have blurred the boundaries between civilian and military applications of a wide range of digital solutions – from microelectronics to artificial intelligence. This has resulted in the emergence of new types of threats that require thorough scholarly consideration and effective legal regulation. A significant place in this process is occupied by so-called dual-use technologies, which, although initially intended for civilian purposes in medicine, industry, and ecology, are increasingly used in the defense sector, forming the foundation for new types of weaponry and military equipment.

As security analyst Mark Galeotti notes, the “weaponization of everything”[1] has become a hallmark of the present era: even hashtags, memes, and selfies can serve as instruments of information warfare, while everyday means of communication and data exchange turn into elements of new conflicts. This phenomenon gives rise to new kinds of threats and concerns, linked to the possibility of exploiting vulnerabilities in everyday devices for cyberattacks or other destructive actions.

The Internet of Things (IoT) is of particular significance, having become an integral part of urban infrastructure and daily life. IoT devices (cameras, sensors, and the like) monitor technological processes and support personal comfort, generating massive volumes of data used to automate the management of various spheres of life. However, their vulnerability to cyberattacks makes them potential tools for attacks on, or interference in, critical systems.

Information and communication technologies (ICTs) have become a strategic resource demanding new legal regulation, especially in the context of their military or destructive use. Russia is actively contributing to the reshaping of regulatory approaches in this field.

When a Pager Becomes a Mine

On September 17, 2024, a series of explosions struck pagers belonging to Hezbollah members across Lebanon, killing 12 people and injuring about 3,000[2]. Lebanese authorities and Hezbollah representatives attributed responsibility for the attacks to Israel’s Mossad, claiming that explosives had been integrated into the devices during production and that activation occurred remotely via electronic signals. The devices exploded several seconds after receiving the relevant message.

This incident became the subject of broad international discussion: it was the first large-scale attack carried out by covertly compromising and modifying civilian IoT devices, raising questions about the transformation of everyday items into potential weapons. Experts in international law and ethics drew attention to the attack’s indiscriminate nature: it was impossible to determine in advance who the victims would be, casting doubt on compliance with the principles of proportionality and precaution enshrined in international humanitarian law (IHL). The significant harm to civilians, including the deaths of children, was deemed disproportionate to any conceivable military advantage[3]. Russia classified the incident as a violation of international law and called for binding agreements to prevent similar attacks[4].

Map 4. Locations of confirmed device explosions (Lebanon and Syria), September 2024
Based on Haaretz (https://www.haaretz.com/middle-east-news/2024-09-17/ty-article/.premium/reports-hundreds-wounded-in-pager-blasts-targeting-hezbollah-operatives-across-lebanon/), updated by the author

The international expert community also voiced serious concern. Legal scholars identified two key aspects. First, the legality of an attack depends on the status of the conflict and the parties involved. Helmut Aust (Free University of Berlin) noted that Hezbollah’s status as a non-state actor complicates the qualification of the conflict as international. At the same time, Stefan Talmon (University of Bonn) argued that members of armed groups may be considered lawful military targets if they perform combat functions. However, critics point out that the scale of the explosions and the deaths of civilians, including children, contradict the principle of proportionality and the prohibition of indiscriminate attacks enshrined in international humanitarian law[5].

Second, the use of pagers as disguised explosive devices falls under Amended Protocol II (1996) to the UN Convention on Certain Conventional Weapons, which prohibits booby-traps in the form of apparently harmless portable objects. UN High Commissioner for Human Rights Volker Türk characterized the incident as a war crime, noting that the attack drew no distinction between combatants and civilians and was terrorist in nature[6]. European politicians, including Josep Borrell, condemned the actions for their indiscriminate nature and warned of the risk of escalating conflict across the region.

The legal consequences of the incident include the need for an independent investigation, the possibility of referral to the International Criminal Court, and tighter controls on cyberweapons. The events in Lebanon have intensified debates about regulating hybrid warfare methods in which civilian technologies are used for military purposes. The lack of unequivocal attribution of responsibility complicates the application of sanctions[7].

The Lebanon pager explosions vividly demonstrate the vulnerability of IoT devices and their potential for weaponization. This raises questions about the need to strengthen security standards for civilian digital technologies and to develop international legal frameworks restricting their use against civilian populations. There are currently no universally applicable international standards regulating the military use of digital technologies, creating a substantial gap in the protection of civilians in modern conflicts.

Thus, the incident underscores the necessity of revisiting approaches to legal regulation of digital technologies, especially in the context of their dual-use nature and growing integration into daily life. The development of effective international mechanisms for preventing and responding to such threats has become one of the key challenges to maintaining global security in the digital age.

War by the Formula “AI Will Decide Who Survives”

One of the most prominent examples of artificial intelligence use in contemporary armed conflicts is Israel’s deployment of the AI systems Lavender and Gospel during military operations in the Gaza Strip between 2023 and 2025. These systems, developed by Israel’s military Unit 8200, illustrate how digital technologies have become instruments not only of warfare but also of geopolitical influence, forming new challenges for international law and ethics.

Lavender was designed to analyze intelligence data collected from drones, surveillance cameras, and mobile communications in order to compile lists of individuals associated with HAMAS and Palestinian Islamic Jihad. Its algorithms processed information on most of the enclave’s 2.3 million residents, assigning each a probability score for affiliation with militants on a scale from 1 to 100. At the peak of operations, the system identified up to 37,000 Palestinians as potential targets.
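
Lavender’s actual architecture is not public, and nothing below reproduces it. As a purely illustrative Python sketch, with an arbitrary placeholder score distribution, it shows only one mechanical point: the size of an algorithmically generated target list is extremely sensitive to where the cut-off on such a 1–100 scale is drawn.

```python
import random

# A minimal, hypothetical sketch: Lavender's internals are not public.
# Population size (2.3 million) and the reported figure of 37,000 flagged
# individuals come from the text; the score distribution is invented.

random.seed(42)
POPULATION = 2_300_000
scores = [random.betavariate(1, 30) * 100 for _ in range(POPULATION)]

for threshold in (20, 15, 10):
    flagged = sum(s >= threshold for s in scores)
    print(f"cut-off {threshold}: {flagged:,} people flagged")
# Moving the cut-off by a few points shifts tens of thousands of people
# in or out of the target list; the reported 37,000 falls in this range.
```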

Simultaneously, Gospel analyzed geodata and movement routes, generating recommendations for target selection. Both systems allowed attack decisions to be made within 20 seconds, with up to 80% of targets determined by algorithms without detailed human verification. According to researchers, the software included a threshold for “collateral damage” of up to 15–20 civilians per presumed militant. In practice, however, the elimination of a single militant could be accompanied by 100 or more civilian casualties[8], and in 10% of cases the systems made identification errors[9]; at the scale of 37,000 flagged individuals, even a 10% error rate implies several thousand misidentifications.

The use of Lavender and Gospel sparked broad debate in expert circles about compliance with international humanitarian law and fundamental ethical principles. Critics argue that automating target selection led to indiscriminate methods of warfare, mass strikes on residential neighbourhoods, and disproportionately high civilian casualties, contravening the principles of distinction and proportionality enshrined in international humanitarian law.

This creates legal grey zones and exacerbates conflict asymmetry. Analysis of the functioning of Lavender and Gospel revealed that key decisions and limits were programmed and controlled by humans: the programmers and operators. Thus, responsibility for violations lies not with the algorithms but with those who set their parameters.

Therefore, the case of Israeli AI systems underscores the need to further develop international legal mechanisms that can ensure both the effectiveness of military technologies and the protection of fundamental rights of civilian populations in the context of the digital transformation of armed conflict.

Civilian sector applications of such technologies also create risks for ordinary citizens. Identification errors and failures in automated systems threaten life and health, reduce access to services, and foster discrimination against specific groups. Cybersecurity and infrastructure resilience issues are especially acute: attacks on energy grids, medical databases, and transport nodes highlight the scale of threats to the peaceful population.

Digital Command: Artificial Intelligence in Headquarters

Modern military AI technologies are not solely focused on inflicting direct damage on opponents. A prime example of the comprehensive use of AI in defense is the COMPASS (Collection and Monitoring via Planning for Active Situational Scenarios) program of DARPA, the research agency of the United States Department of Defense. This system is designed to integrate and analyze heterogeneous intelligence data to enhance situational awareness and forecast potential enemy actions in real time.

COMPASS algorithms employ machine learning methods to process a wide range of parameters. The system performs hybrid analysis, combining military intelligence with data from open sources such as social networks, financial transactions, and political records. This approach makes it possible to form composite forecasts of enemy behavior and to identify potential threats early.
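
The program’s actual design is classified. As a purely hypothetical sketch of what such hybrid analysis could look like in principle, the following Python fragment fuses several invented source readings into one composite score; every source name, weight, and reading is an assumption.

```python
from dataclasses import dataclass

# Hypothetical weighted fusion of heterogeneous indicators; not COMPASS itself.

@dataclass
class Indicator:
    source: str     # e.g. signals intelligence, open-source feed (invented)
    reading: float  # normalized 0..1 threat signal from that source
    weight: float   # analyst-assigned trust in the source

def composite_score(indicators: list[Indicator]) -> float:
    """Weighted average of normalized per-source signals."""
    total_weight = sum(i.weight for i in indicators)
    if total_weight == 0:
        return 0.0
    return sum(i.reading * i.weight for i in indicators) / total_weight

feed = [
    Indicator("signals intelligence", 0.7, 3.0),
    Indicator("social media activity", 0.4, 1.0),
    Indicator("financial transactions", 0.9, 2.0),
]
print(f"composite threat score: {composite_score(feed):.2f}")  # ~0.72
```

Even this toy version makes the governance problem visible: the weights that decide how much a social media signal counts are set by people, which is where legal responsibility attaches.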

COMPASS operates in several phases. The first involves analyzing adversary behavioral patterns using AI algorithms, generating projections of likely future scenarios. The second phase can include information operations, such as influencing public opinion[10]. The third phase encompasses “provocative probing” tactics to uncover enemy strategies and produces recommendations for military command.

The program raises several legal and ethical concerns. First, the large-scale collection and processing of data, including personal information, raises concerns about privacy and data protection. Second, automating target selection and forecasting entails risks of misclassification: limitations in training datasets or contextual errors may lead to faulty identification of targets, violating the principle of distinction in international humanitarian law. Automated assessment of expected collateral damage also requires that algorithms be adjusted in line with IHL standards to minimize risk to civilians.

Additionally, the use of commercial cloud platforms (for example, Google Cloud) for military purposes generates further ethical questions and criticism from human rights organizations. Scientific literature highlights the risk of so-called “algorithmic drift” – the gradual expansion of targeting criteria and impact radius resulting from adaptive system learning.
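
The drift mechanism can be made concrete with a deliberately simplified simulation; the starting threshold and the per-round feedback coefficient below are invented for illustration and do not describe any real system.

```python
# A toy model of "algorithmic drift": each retraining round, yesterday's
# borderline cases are fed back as confirmed positives, so the effective
# targeting criterion relaxes without any explicit human decision to widen it.
# Both parameters are hypothetical.

threshold = 0.90            # initial, conservative targeting criterion
feedback_per_round = 0.03   # assumed effect of learning from own outputs

for round_number in range(1, 6):
    threshold -= feedback_per_round
    print(f"retraining round {round_number}: effective threshold {threshold:.2f}")
# After five rounds the criterion has drifted from 0.90 to 0.75,
# silently expanding the set of objects the system treats as valid targets.
```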

Company | Project / Contract | Amount ($ billion) | Period
Palantir | Project Maven (Maven Smart System) | 1.3 | 2025-2029
Microsoft | HoloLens (IVAS) | 21.9 | 2021
Amazon | CIA Commercial Cloud Enterprise | Dozens | 2020
Google/Alphabet | Joint Warfighting Cloud Capability | 9.0 | 2022
Microsoft | Army Software Contract | 10.0 | 2025-2035
Amazon | Special Operations Cloud | 0.22 | 2024
Oracle | Pentagon Cloud Computing | 2.5 | 2022
IBM | Pentagon Cloud Computing | 2.5 | 2022
SpaceX | Starlink/Stargate Project | 0.5 | 2025
Anduril | UAV Defense Systems | 0.642 | 2024
United States total military expenditures | – | 997.0 | 2024
Table 2. Contracts of US companies in the field of AI and smart military systems
Compiled by the author based on open sources

Finally, acceleration of the decision-making cycle – from target discovery to striking – within mere minutes increases the likelihood of mistakes under conditions of incomplete or unreliable source data. This brings to the fore questions of responsibility allocation among developers, operators, and command staff for international law violations involving autonomous systems.

Overall, the experience of implementing COMPASS illustrates not only AI’s potential to boost the effectiveness of military command but also the need to establish clear legal and ethical standards governing the use of such technologies in modern conflicts. In civilian contexts, the evolution of IoT, big data, and automation increasingly poses risks of data leaks and mass data collection. Ensuring privacy is now not only a technical but a social concern: hacker attacks on banks, healthcare, and city networks threaten not only financial losses but also fundamental individual rights.

Moreover, automation in staff recruitment, credit scoring, and the distribution of social benefits gives rise to algorithmic bias and discrimination against certain groups on social, gender, or ethnic grounds.

The Dual Life of Algorithms: When AI Serves Both Markets and Militaries

A significant aspect of contemporary technological processes is the spillover of military artificial intelligence technologies into the civilian sector. AI systems originally developed for processing large volumes of data in military operations find application in business environments – from market analysis and demand forecasting to strategy optimization. However, their use is associated with risks of unfair competition, including the displacement of competitors, tender manipulation, and market monopolization.

Military AI algorithms demonstrate high effectiveness in forecasting market events. By analyzing news flows, financial reports, and social media data, they identify trends and potential disruptions, allowing their operators to decide to buy or sell assets before the changes become obvious to other market participants. Moreover, these technologies can artificially alter market sentiment through the generation and dissemination of false information. Fake news about a company’s financial performance can trigger sharp fluctuations in its stock price, as observed in 2024 during the meme wars on cryptocurrency exchanges.

Particularly dangerous is the distortion of technical analysis by military-grade algorithms. Automated content creation and social media manipulation conjure the illusion of market trends, influencing investor behavior. For example, simulating “natural” market corrections through bots equipped with Natural Language Processing (NLP) violates the principles of transparency and equal access to information[11].
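
The mechanics are easy to demonstrate. In the toy Python sketch below (the sentiment lexicon, posts, and bot volume are all invented), a naive sentiment index that weights every post equally is dragged sharply negative by a small number of high-volume bot accounts, producing the appearance of a “natural” correction.

```python
# Hypothetical illustration of bot-driven sentiment manipulation.

POSITIVE = {"rally", "growth", "breakout"}
NEGATIVE = {"crash", "correction", "selloff"}

def post_sentiment(text: str) -> int:
    """Count positive minus negative lexicon hits in one post."""
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def index(posts: list[str]) -> float:
    """Equal-weight average sentiment across all posts."""
    return sum(post_sentiment(p) for p in posts) / len(posts)

organic = ["steady growth expected", "minor selloff today", "earnings solid"]
bot_wave = ["imminent crash and correction"] * 50  # 50 near-identical bot posts

print(f"organic sentiment index: {index(organic):+.2f}")          # +0.00
print(f"after bot wave:          {index(organic + bot_wave):+.2f}")  # -1.89
```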

Dual-use technologies are actively penetrating social and economic life: AI platforms are used for big data analysis in the financial sector, workforce management, and educational and medical services. This can both optimize processes and create new challenges for protecting workers’ rights, countering systemic discrimination, and ensuring compliance with labor standards. The growth of automation contributes to the emergence of new professions while simultaneously intensifying digital inequality and complex social stratifications.

International AI regulation standards and control mechanisms are required to prevent abuses.

Artificial Intelligence as a Catalyst for Digital Threat Evolution

In recent years, artificial intelligence has become a key factor in transforming the offensive capabilities of cybercriminals. AI creates a new generation of autonomous cyberweapons. Such systems are capable not only of analyzing protected environments but also of dynamically changing their own code or behavior to circumvent even the most advanced detection and countermeasure systems.

The application of AI allows for the automation of processes that previously required manual intervention, including the generation of thousands of variants of malicious software, identification and exploitation of zero-day vulnerabilities, and adaptation to protective measures in real time. This significantly increases the speed, scale, and complexity of attacks, making them more destructive and difficult to predict for cybersecurity specialists.

Experts identify three main areas of AI use in cybercrime: automation of traditional attacks, creation of new channels of influence (including those extending beyond virtual space), and attacks on other artificial intelligence systems. Autonomous malicious programs based on AI are capable of self-improvement and continuous learning based on new data, significantly enhancing their resilience and adaptability in rapidly changing digital environments.

Figure 26. AI Powered Attacks on IoT Devices (simplified diagram)
Source: Karthik V. The Rise of AI Powered Attacks on IoT Devices, 2025

In the civilian sector, the widespread use of cloud services and artificial intelligence becomes a vulnerability factor not only for individual companies but for society as a whole. Major data breaches and examples of attacks on government service systems, educational portals, and transport platforms underscore the necessity of developing complex multilayered strategies for ensuring digital sovereignty and protecting citizens’ personal data.
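
By way of illustration, one elementary layer of such a multilayered strategy might resemble the following sketch: a statistical baseline detector flagging abnormal request rates from a device. Real deployments rely on far richer features and models; all traffic figures here are invented.

```python
from statistics import mean, stdev

# Hypothetical z-score anomaly detector for an IoT device's request rate.
baseline = [42, 38, 45, 40, 44, 39, 41, 43]  # requests/min, normal operation
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(rate: float, z_cutoff: float = 3.0) -> bool:
    """Flag readings more than z_cutoff standard deviations from baseline."""
    return abs(rate - mu) / sigma > z_cutoff

for observed in (44, 51, 160):
    print(observed, "anomalous" if is_anomalous(observed) else "normal")
```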

Militarization of Digital Technologies and International Legal Conflicts

The militarization of digital technologies exacerbates existing international legal contradictions. The integration of civilian information and communication technologies into military systems creates asymmetric advantages, forming the foundation for network-centric warfare. The rollout of 5G/6G standards and cloud platforms (Huawei, Amazon Web Services) turns them into instruments of geopolitical influence. A striking example is the US administration’s pressure on European allies to exclude the Chinese company Huawei from EU 5G infrastructure. Simultaneously, China pursues aggressive technological expansion in African countries, offering cyber-infrastructure leasing on terms that critics call a form of digital colonialism.

Legal dilemmas arise: to what extent are data collected by corporations such as Amazon, Alibaba, Microsoft, and Google protected from military use? How can compliance with international law be ensured if commercial platforms become elements of hybrid conflicts?

The absence of clear international norms regulating the dual use of digital technologies creates “gray zones” in law. Cloud services can be adapted for cyber-espionage or military operations, as demonstrated by cases of AWS and Google Cloud integration into US Department of Defense projects. The militarization of the digital sphere requires a revision of technological sovereignty principles.

Algorithm Diplomacy: The UN on the Digital Front Line

The history of discussing the militarization of information and communication technologies within the United Nations clearly demonstrates the complexity of finding international solutions to limit the military use of digital technologies. The Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security (GGE) was established in 2004 on Russia’s initiative and became the first specialized platform for discussing international information security issues, including the application of international law in cyberspace, principles of state sovereignty, state responsibility, and confidence-building measures.

The GGE operated in a closed format with a limited number of experts appointed by states. Despite difficulties in reaching consensus on final documents, its reports of 2010, 2013, and 2015 laid the foundations for international norms of state behavior in the ICT sphere. In 2019, on Russia’s initiative, an Open-Ended Working Group (OEWG) accessible to all UN member states was created. This responded to the exhaustion of the narrow expert format’s potential and democratized the negotiation process, increasing transparency and inclusivity in discussing cyber threats, norms of responsible behavior, confidence-building and capacity-development measures, and mechanisms for international cooperation in cybersecurity.

In 2021, a second Open-Ended Working Group (OEWG II, established by Resolution 75/240) continued the work in an open and inclusive format. For 2025, the creation of a permanent open-ended mechanism for ICT security under UN auspices is planned[12], oriented toward operational aspects of cybersecurity, analysis of specific cyber threats through the lens of international law, development of confidence-building measures, and preparation of technical recommendations for the UN General Assembly[13]. Within the OEWG framework, initiatives under consideration include a legally binding convention on international information security (Russia’s proposal), thematic working groups on cyber-resilience, the application of international law, and artificial intelligence regulation, and a voluntary cybersecurity fund to support developing countries[14].

Issues of ICT militarization and their use as instruments of geopolitical and economic influence remain central to both working groups, though participants’ approaches differ: the first OEWG insisted on creating universal legal frameworks, while OEWG II focuses on voluntary cooperation within existing norms. This reflects a fundamental geopolitical opposition between supporters of multilateral regulation (Russia, China) and Western countries advocating flexible, legally non-binding formats. Russia consistently promotes the internationalization of Internet governance and asserts states’ sovereign right to manage national network segments, while Western countries insist on the transboundary nature of the technologies and oppose national barriers, considering them contrary to the global nature of the Internet.

Russian diplomacy views the OEWG format as a democratic and open mechanism for developing universal solutions, criticizing alternative Western projects (such as France’s Programme of Action) as attempts to undermine the OEWG’s activities and exclude developing countries from decision-making. Western countries prefer narrow specialized formats, focusing primarily on technical aspects such as critical infrastructure protection, and tend to steer discussion toward topics of lesser significance for most states. As a result, the contradictions between the Russian and Western positions are acquiring a long-term character: Russia seeks to strengthen digital sovereignty and create mandatory international frameworks for regulating cyberspace, while the West defends a model of a global and free Internet without rigid legal restrictions. These disagreements reflect a broader conflict of values and strategies that will determine the prospects for universal rules of state behavior in the digital environment[15].

International Oversight in Question: Drones, Robots, and the Legal Void

Within the Convention on Certain Conventional Weapons (CCW), the issue of military artificial intelligence is discussed within the Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS), established in December 2016[16]. The Group received a mandate to consider LAWS parameters, the role of the human factor, potential military applications, and options for countering international security challenges, including humanitarian aspects. Information and communication technologies are an integral part of modern autonomous weapons systems, providing control, navigation, targeting, communication, and data analysis. The high level of automation achieved through ICTs allows such systems to operate effectively in complex combat situations while simultaneously creating new risks for international security. Issues considered by the CCW GGE intersect with themes of weapons technological evolution, international law applicability, threat assessment, and development of international norms for civilian protection.

Within the CCW, various, sometimes opposing views on LAWS regulation have emerged. Several states (Austria, Argentina, Mexico, New Zealand, Chile, and others) advocate a new international treaty completely prohibiting or strictly limiting LAWS[17]. Some countries, including most European states, support developing new norms and guiding principles but are not ready for a complete ban. The largest military powers and weapons manufacturers (Russia, the USA, China, India, and others) insist on continuing discussions within the CCW, consider the GGE on LAWS the only legitimate platform, oppose transferring the discussion to other international forums, and resist rigid new restrictions, emphasizing the need for balance between humanitarian concerns and defense interests[18].

Russia’s position is that existing international humanitarian law norms are sufficient for regulating autonomous systems, and excessive attention to human rights and ethical aspects regarding LAWS is inappropriate[19]; the USA and Israel hold similar views. Official statements emphasize: “We proceed from the fact that international law, including IHL, is fully applicable to LAWS and does not require modernization or adaptation due to their specificity. We oppose developing any international legally binding instrument regarding LAWS and introducing a moratorium on the development and use of such systems and technologies used for their creation”[20]. In 2019, the GGE adopted 11 guiding principles, including mandatory preservation of human control over the use of force and state responsibility for compliance with international humanitarian law, but significant disagreements persist on key issues – prohibition or regulation[21].

Some states consider the CCW GGE format insufficiently effective for discussing military artificial intelligence due to geopolitical contradictions and differing approaches to regulating military technology. As alternatives, international summits and coalitions dedicated to responsible military AI use have been launched. Russia and several other countries oppose such initiatives, believing they may lead to unilateral approaches. As noted by A.I. Belousov, Deputy Permanent Representative of Russia to the UN Office and other international organizations in Geneva, Russia considers the CCW Group of Governmental Experts the only optimal platform for discussing all LAWS-related issues, a position confirmed in the consensus-approved 11 guiding principles and the final document of the 6th CCW Review Conference.

One alternative initiative is the REAIM (Responsible Artificial Intelligence in the Military Domain) series of international summits, held in the Netherlands (2023) and South Korea (2024) and bringing together representatives of over 90 countries, experts, business, and civil society. Russia was not invited. On the basis of REAIM, a Global Commission on Responsible Artificial Intelligence in the Military Domain was created, comprising experts from various countries. At the second REAIM summit in 2024, a “Blueprint for Action” was presented and supported by about 60 countries, though several delegations refused to sign the document, which was criticized for ignoring the interests of part of the international community.

Discussions are conducted in the UN, UNIDIR[22], the EU, and other international institutions, with active involvement of IT corporations, academic circles, and civil society.

In Russia, military AI themes annually become subjects of discussion at the international military-technical forum “Army”. Against the backdrop of new initiatives, competition between international platforms and approaches to military AI regulation is forming, reflecting disagreements about paths to ensuring global security and control over new technologies.

Who Will Write Laws for Digital Armies?

As a result of international discussions, understanding of the military artificial intelligence phenomenon is gradually forming, and key spheres that may become subjects of global legal regulation are being defined.

For the civilian sphere, the development and implementation of algorithm transparency standards, artificial intelligence ethics, quality control mechanisms, and digital service accessibility acquire particular importance. International responsible-AI initiatives, the EU’s General Data Protection Regulation (GDPR), biometric data regulation, and the right to an explanation of algorithmic decisions are becoming key instruments for ensuring comprehensive security in the era of digital transformation.

The application of digital and information-communication technologies in the military sphere encompasses a wide range of tasks: combat systems with various levels of autonomy, cyber-operations, logistics, intelligence, analysis and targeting, command, control, and decision-making support.

The scale and complexity of these processes make the task of developing global legal regulation extremely challenging. For effective international dialogue, it is necessary to unify terminology, define regulatory responsibility zones, and implement risk-oriented approaches to system certification.

A telling example of the institutionalization of such efforts is the initiative of the Dutch Research Council (NWO), which has begun modeling international legislation applicable to the use of AI and information-communication technologies for military purposes[23]. The final work will be titled the “West Point Manual” on International Law Applicable to Artificial Intelligence in Warfare. Like the Tallinn Manual, the new project enjoys the support of NATO military structures[24].

Russia’s Choice in Military AI Regulation

The Russian Federation consistently advocates for the principle of mandatory preservation of human control over all military technologies, including autonomous weapons systems. According to Russia’s official position, existing international humanitarian law, including its fundamental principles of distinction and proportionality, already fully regulates the use of any types of weapons, including systems with artificial intelligence elements. Russia proceeds from the premise that international law does not require special adaptation or development of new norms regarding autonomous weapons systems, as existing legal frameworks provide the necessary level of regulation.

In this regard, Russia opposes the introduction of additional legal restrictions, considering them excessive and capable of undermining states’ sovereign right to develop their own defense technologies. The Russian Federation objects to the preparation of any international legally binding instrument and to a moratorium on the development and application of such systems.

Russia attaches special importance to the Group of Governmental Experts under the Convention on Certain Conventional Weapons, viewing it as the only optimal international platform for discussing autonomous weapons systems and other military AI technologies. Russia categorically rejects transferring the issue to other international forums, considering such a move ineffective and politically motivated.

A similar position is evident on international information security. Russia views the Open-Ended Working Group as the only legitimate mechanism for decision-making on ICT security under UN auspices and opposes Western attempts to replace this format with a “programme of action” resolution reflecting primarily Western interests.

On these platforms, Russia consistently advocates for forming an international information security system on a solid legal basis, founded on principles of sovereign equality of states and non-interference in internal affairs. Russia emphasizes the priority of fighting cybercrime while unconditionally observing state sovereignty and insists on prohibiting the use of digital technologies for interfering in other countries’ internal affairs, including organizing “color revolutions” and conducting cognitive attacks[25]. Additionally, Russia defends states’ rights to independently determine rules for artificial intelligence use in the defense sphere and opposes extraterritorial application of foreign legislation in the digital environment.

Time for New Norms: How to Regulate AI Tomorrow

Contemporary disagreements between states on digital technology regulation reflect a fundamental contradiction between sovereignty and globalization principles. However, the rapid development of AI, especially autonomous agents and generative systems, questions the very possibility of preserving traditional norm-creation models. As cases of AI implementation in the military sphere show, technologies already outpace legal frameworks, creating “gray zones” of responsibility.

The year 2024 saw a notable expansion in the use of intelligent agents (AI agents), indicating the growing role of algorithms in decision-making, in some cases with minimal human participation. This trend revives discussion of the need to rethink the principle of “meaningful human control”, which is enshrined in several international documents and considered a fundamental element in regulating autonomous systems, including military technologies.

Currently, artificial intelligence is primarily used for processing and analyzing large data arrays and supporting management decisions. However, some algorithmic systems today can formulate proposals for optimizing regulatory and management procedures, potentially contributing to reducing interstate contradictions in certain spheres. Nevertheless, such decisions are often made on non-transparent bases, raising questions about their validity and compliance with ethical and legal standards.

Looking ahead, dynamic legal regimes capable of adapting to the evolution of artificial intelligence may emerge, for example in the form of modular conventions with automatically updatable technical annexes. In this connection, it seems appropriate to consider creating international AI audit bodies, analogous to the IAEA, that could verify algorithms’ compliance with the requirements of international humanitarian law and ethics.

The discussion of legal personality for artificial intelligence and its potential role as a participant in legal relations remains predominantly theoretical. However, as algorithmic systems grow more complex and their functionality expands, the distribution of responsibility among developer, operator, and the AI itself requires further scholarly and legal consideration.

Overall, the development of digital technologies and artificial intelligence not only poses new challenges to law but requires a rethinking of established regulatory principles. As the human role in partnership with AI gradually diminishes, it becomes particularly important to form flexible, adaptive, and transparent models of legal regulation capable of balancing innovative efficiency with the protection of the fundamental rights and interests of individuals, society, and the state.


[1] Abid A. The Weaponisation of Everything: A Field Guide to the New Way of War // Journal of Security & Strategic Analyses. July 2023. pp. 94-96.

[2] Number of casualties in Lebanon pager blasts rises to 4,000 // RBC. September 17, 2024. URL: https://www.rbc.ru/politics/17/09/2024/66e9dd649a794707f731e229 (in Russ.).

[3] Exploding pagers and radios: A terrifying violation of international law, say UN experts // OHCHR. September 19, 2024. URL: https://www.ohchr.org/en/press-releases/2024/09/exploding-pagers-and-radios-terrifying-violation-international-law-say-un

[4] Interview of Deputy Minister of Foreign Affairs of the Russian Federation Sergey Vershinin to the online news publication ‘Lenta.ru’ // MFA of Russia. November 5, 2024. URL: https://www.mid.ru/ru/foreign_policy/news/1979481/ (in Russ.).

[5] Kondratieva V. Explosion of pagers in Lebanon assessed as a violation of international law // Lenta.RU. September 18, 2024. URL: https://lenta.ru/news/2024/09/18/vzryv-peydzherov-v-livane-otsenili-kak-narushenie-mezhdunarodnogo-prava/ (in Russ.).

[6] UN calls Lebanon pager blasts a war crime // RIA Novosti News Agency. September 20, 2024. URL: https://ria.ru/20240920/podryv-1973937120.html (In Russ.).

[7] Israel planted explosives in Hezbollah’s Taiwan-made pagers, say sources // Reuters. September 18, 2024. URL: https://www.reuters.com/world/middle-east/israel-planted-explosives-hezbollahs-taiwan-made-pagers-say-sources-2024-09-18/

[8] ‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza // Prepare for Change. April 10, 2024. URL: https://prepareforchange.net/2024/04/10/lavender-the-ai-machine-directing-israels-bombing-spree-in-gaza/

[9] New Recruits Help Israel Track HAMAS Operatives In Gaza. They Are Both AI // NDTV World. April 28, 2024. URL: https://www.ndtv.com/world-news/explained-what-are-lavender-and-gospel-ai-israel-is-using-to-bomb-gaza-5540086

[10] The Pentagon Wants to Flood Social Media With Fake AI People // Futurism. October 19, 2024. URL: https://futurism.com/the-byte/pentagon-wants-fake-ai-people

[11] Market manipulation // Clusterdelta. URL: https://clusterdelta.com/ru/market-manipulation (In Russ.).

[12] Towards a Regular Institutional Dialogue on International ICT Security: Review of Current Proposals and Considerations for Effective Dialogue // UNIDIR. November 29, 2024. URL: https://unidir.org/publication/towards-regular-institutional-dialogue-on-international-ict-security-review-of-current-proposals-and-considerations-for-effective-dialogue/

[13] Developments in the field of information and telecommunications in the context of international security, UN General Assembly Resolution 75/240 // United Nations. December 31, 2020. URL: https://docs.un.org/ru/A/RES/75/240 (in Russ.).

[14] UN OEWG 2021-2025 10th substantive session, Opening of the session // DigWatch. February 17, 2025. URL: https://dig.watch/event/un-oewg-2021-2025-10th-substantive-session/opening-of-the-session-oewg-2025

[15] Boyko S. International Information Security: Russia in the UN. Two Dialogue Format (2018-2021) // International Affairs Journal. March 21, 2024. URL: https://interaffairs.ru/news/show/45168 (In Russ.).

[16] The official title of the so-called “Convention on Certain Conventional Weapons” is “Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects”.

[17] Convention on Certain Conventional Weapons // MFA of the Russian Federation. April 28, 2021. URL: https://www.mid.ru/ru/foreign_policy/international_safety/disarmament/obychnye_vooruzheniya/konventsiya_o_negumannom_oruzhii/1413307/ (In Russ.).

[18] On the approaches of the Russian Federation to the problem of new technologies in the field of “Lethal Autonomous Weapons” // UNODA. May 14, 2024. URL: https://docs-library.unoda.org/Convention_on_Certain_Conventional_Weapons_-Group_of_Governmental_Experts_on_Lethal_Autonomous_Weapons_Systems_(2024)/CCW-GGE.1-2024-WP.2_Russian.pdf (in Russ.).

[19] Speech by Deputy Head of the Russian delegation A.I. Belousov explaining the position on the draft resolution “Lethal Autonomous Weapons” in the First Committee of the 78th session of the UN General Assembly // United Nations. November 1, 2023. URL: https://russiaun.ru/ru/news/811123 (in Russ.).

[20] Ibid.

[21] Group of governmental experts on emerging technologies in the area of “Lethal Autonomous Weapons”, Commonalities in national commentaries on guiding principles, 2019 // UNODA. URL: https://documents.unoda.org/wp-content/uploads/2020/09/Commonalities-paper-on-operationalization-of-11-Guiding-Principles.pdf

[22] RAISE: The Roundtable for AI, Security and Ethics // UNIDIR. URL: https://unidir.org/raise/

[23] Designing International Law and Ethics into Military Artificial Intelligence (DILEMA). URL: https://www.nwo.nl/en/projects/mvi19017

[24] West Point Manual // West Point. URL: https://www.westpoint.edu/michael-n-schmitt

[25] First Committee Approves New Resolution on “Lethal Autonomous Weapons”, as Speaker Warns ‘An Algorithm Must Not Be in Full Control of Decisions Involving Killing’ // United Nations. November 1, 2023. URL: https://press.un.org/en/2023/gadis3731.doc.htm