by
Muhammad Siraj Khan
CyJurII Theorist
on 9 November 2025
International Humanitarian Law (IHL), particularly the 1949 Geneva Conventions and their Additional Protocols of 1977, emerged in an age when warfare was largely kinetic, human-controlled, and territorially confined. The drafters could not have foreseen unmanned aerial systems, satellite-guided precision weapons, autonomous robotic platforms, or artificial-intelligence-enabled targeting. Yet the ethical and legal dilemmas these technologies raise, especially lethal autonomous weapon systems (LAWS), have become central to contemporary discourse on the future of armed conflict. The fundamental question is whether existing IHL, through principled interpretation, can adequately govern such novel means and methods of warfare, or whether new treaties are indispensable to address the unprecedented contours of machine-driven combat.
The central argument for the continued applicability of existing IHL is that its core principles of distinction, proportionality, military necessity, and precaution are technology-neutral. These obligations regulate conduct by its effects, not by the platform employed: whether a strike is executed by a piloted aircraft or an autonomous system, the duties to distinguish combatants from civilians, to avoid excessive incidental harm, and to take precautionary measures remain intact. Moreover, Article 36 of Additional Protocol I obliges States, in the study, development, acquisition, or adoption of a new weapon, means, or method of warfare, to determine whether its employment would be prohibited by existing international law. On this view, interpretive evolution, customary practice, and the law of State responsibility can accommodate technological transformation without new normative treaties.
Significant challenges posed by emerging technologies, and by Artificial Intelligence (AI) in particular, nonetheless complicate this optimism. Autonomous systems capable of selecting and engaging targets without meaningful human control strain traditional notions of accountability, intent, and human judgment, elements long central to IHL compliance. If a machine independently misidentifies a civilian object as a military objective, assigning responsibility becomes deeply vexed: who answers for the violation, the programmer, the commander, the manufacturer, or the State? Moreover, the Martens Clause, reflected in the four Geneva Conventions of 1949 and restated in Additional Protocol I, invokes the principles of humanity and the dictates of public conscience, casting ethical doubt on delegating life-and-death decisions to software. Critics contend that while interpretation can guide practice, the technological rupture created by AI-driven warfare is profound enough to necessitate new legal instruments clarifying standards of human control, transparency, verification, and attribution. Such clarity is of paramount importance where IHL violations amount to grave breaches or war crimes.
A balanced approach to this debate requires weighing both sets of arguments, and the two positions, pre-emptive regulation to meet the challenges posed by AI and other emerging technologies, and optimistic reliance on iterative interpretation, are not mutually exclusive. Strengthening the Article 36 mechanism through rigorous testing, transparency, and independent oversight can address near-term concerns, while multilateral forums such as the United Nations Group of Governmental Experts on LAWS continue structured negotiations toward the eventual goal of a protocol or treaty articulating mandatory human-control thresholds and prohibiting fully autonomous lethal decision-making. This phased model harmonizes prudential caution with legal continuity, preserving IHL's ethical core while its practical application evolves to govern wars waged by machines as well as men.
References
1. International Committee of the Red Cross (ICRC), Geneva Conventions of 12 August 1949 and Additional Protocols, 1977. https://ihl-databases.icrc.org
2. Additional Protocol I, Article 36 (Weapons Review). https://ihl-databases.icrc.org/en/ihl-treaties/api-1977/article-36
3. ICRC, Autonomy, artificial intelligence and robotics: Technical and legal issues, 2019. https://www.icrc.org/en/document/autonomy-artificial-intelligence-and-robotics-technical-legal-issues
4. United Nations Office for Disarmament Affairs, Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS). https://meetings.unoda.org/meeting/ccw-gge-laws
5. Human Rights Watch & Harvard Law School International Human Rights Clinic, Killer Robots and the Concept of Meaningful Human Control, 2016. https://www.hrw.org/report/2016/04/11/killer-robots-and-concept-meaningful-human-control
6. ICRC, Commentary on the Martens Clause. https://ihl-databases.icrc.org/en/ihl-treaties/customary-ihl/rule-1
7. Schmitt, Michael N., “Autonomous Weapon Systems and International Humanitarian Law: A Reply to the Critics,” Harvard National Security Journal, 2013. https://harvardnsj.org
8. Asaro, Peter, “On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-Making,” International Review of the Red Cross, Vol. 94, 2012. https://international-review.icrc.org
9. United Nations Institute for Disarmament Research (UNIDIR), The Weaponization of Increasingly Autonomous Technologies, Research Series. https://unidir.org