by
Lika Chimchiuri
CyJurII Scholar
on 3 April 2026
Abstract
International Humanitarian Law (IHL) is grounded in human judgment, yet artificial intelligence is rapidly transforming the conduct of warfare. As autonomous weapons systems, decision-support tools, and AI-driven propaganda proliferate, a widening gap emerges between established legal principles and technological reality. These systems lack the capacity to assess intent, distinguish nuanced human behavior, or evaluate the moral weight of life. Consequently, an accountability gap arises, whereby “the algorithm made the decision” risks becoming a legal shield for commanders. To safeguard humanitarian principles, it is essential to codify meaningful human control, prohibit targeting by proxy, and ensure that life-and-death decisions remain under human responsibility.
Keywords: International Humanitarian Law, Autonomous Weapons System, Accountability Gap, Cognitive Warfare.
Discussion
International Humanitarian Law (IHL) was developed to regulate armed conflict on the assumption that war is waged, and its decisions are made, by human beings. This Insight argues that existing IHL principles remain fully applicable to AI-mediated warfare but require doctrinal reinforcement through the codification of meaningful human control and clarified standards of individual responsibility.
Artificial intelligence is reshaping the battlefield in three principal ways.
First, autonomous weapons systems, including drones and loitering munitions, analyze sensor data and independently identify potential targets. These systems do not “see” individuals; rather, they detect patterns, movements, and thermal signatures. The reported deployment of the Kargu-2 drone in Libya (UN Security Council 2021) illustrates the emergence of systems operating with limited human oversight. The central legal issue is therefore one of attribution: where a machine selects and engages a target, responsibility must still be traced to the human actors involved in its deployment and use.
Second, AI-driven decision-support systems assist, rather than replace, human operators. Reported systems such as “Lavender” and “Where’s Daddy?” generate target profiles and behavioral tracking data at scale (Abraham 2024). Although human approval is formally required, the speed and volume of outputs risk reducing oversight to a procedural formality, creating what may be termed an “illusion of control.” In such contexts, the legal standard shifts toward questions of foreseeability, due diligence, and the adequacy of human review.
Third, AI is increasingly deployed in cognitive and cyber operations, including deepfakes, automated propaganda, and disinformation campaigns. Here, the battlefield extends beyond physical space into perception and informational integrity. The circulation of a deepfake video depicting President Volodymyr Zelenskyy calling for surrender during the conflict in Ukraine demonstrates how AI can destabilize societies without kinetic force (BBC News 2022).
IHL continues to apply to all armed conflicts, including those involving AI technologies. Its foundational rules are codified in the Geneva Conventions (1949) and further elaborated in Additional Protocol I (1977), particularly Articles 48, 51, and 52.
The principle of distinction requires parties to distinguish between civilians and combatants and to direct operations only against legitimate military objectives (Additional Protocol I 1977, art. 48). This obligation extends beyond visual identification to include the interpretation of intent and context. AI systems, however, rely on pattern recognition rather than human judgment, increasing the risk of misidentification.
The principle of proportionality prohibits attacks expected to cause excessive civilian harm relative to the anticipated military advantage (Additional Protocol I 1977, art. 51(5)(b)). This balancing test is inherently normative. While AI can process quantitative inputs, it cannot evaluate qualitative considerations such as human dignity, long-term societal harm, or ethical restraint.
The principle of humanity underpins the entire IHL framework. It reflects empathy, judgment, and restraint, qualities that machines cannot replicate. Delegating life-and-death decisions to AI risks undermining this foundational principle.
A central challenge is the accountability gap. Under the Rome Statute of the International Criminal Court, individual criminal responsibility attaches only to natural persons and requires both a prohibited act (actus reus) and a culpable mental state (mens rea) (Rome Statute 1998, arts. 25, 30). AI complicates this framework by diffusing responsibility among programmers, operators, and commanders, while lacking legal personality itself. However, negligence and failure to exercise due care remain legally relevant standards.
Customary international humanitarian law reinforces these obligations, particularly Rule 1 (distinction) and Rule 14 (proportionality), as identified by the International Committee of the Red Cross (Henckaerts and Doswald-Beck 2005). These norms apply irrespective of the technologies employed.
Current regulatory frameworks remain insufficient. The European Union Artificial Intelligence Act represents a significant development in civilian AI governance, emphasizing transparency, accountability, and risk mitigation (European Union 2024). However, it expressly excludes AI systems used for military purposes from its scope.
The role of private companies further complicates regulation. Many AI systems originate in the civilian sector and are later adapted for military purposes, often bypassing strict oversight. Technologies such as facial recognition have reportedly been deployed in conflict settings, raising concerns about misuse, privacy violations, and the transformation of civilian populations into testing environments.
To ensure that IHL remains effective in the age of AI, several measures are necessary:
• Codify meaningful human control: Commanders must retain informed, critical oversight over AI-generated outputs.
• Prohibit targeting by proxy: Lethal decisions must not rest on algorithmic predictions alone; substantive human verification is required.
• Ensure strict commander responsibility: The use of AI must not dilute established standards of accountability.
• Recognize cognitive harm: AI-driven disinformation and psychological operations should be addressed within legal frameworks.
• Develop a binding international instrument: A dedicated protocol to the Geneva Conventions may be required.
Conclusion
As AI continues to outpace the evolution of legal frameworks, the normative foundations of warfare risk erosion. While machines excel in data processing, they cannot replicate the empathy, judgment, or restraint embedded in the principle of humanity. AI may augment warfare, but it cannot displace the legal and moral architecture of responsibility; the law must therefore regulate not the machine, but the human decision to rely upon it.
References
1. Abraham, Yuval. 2024. “Lavender: The AI Machine Directing Israel’s Bombing Spree in Gaza.” +972 Magazine.
2. Additional Protocol I. 1977. Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I), 8 June 1977.
3. BBC News. 2022. “Deepfake Video of Zelenskyy Tells Ukrainians to Surrender.”
4. European Union. 2024. Regulation (EU) 2024/1689 of the European Parliament and of the Council (Artificial Intelligence Act).
5. Henckaerts, Jean-Marie, and Louise Doswald-Beck. 2005. Customary International Humanitarian Law, Volume I: Rules. Cambridge: Cambridge University Press.
6. Rome Statute of the International Criminal Court. 1998. Adopted 17 July 1998.
7. UN Security Council. 2021. Final Report of the Panel of Experts on Libya. UN Doc S/2021/229.