by Din Mashkovich, CyJurII Scholar, on 27 March 2026
Recent developments in military technology have made it increasingly difficult to distinguish between human decision-making and digital systems. In contemporary operations, states rely heavily on shared intelligence systems, real-time data, and AI-driven target identification. As a result, the traditional distinction between the party providing intelligence and the party carrying out the attack is becoming ever harder to sustain.
This issue moved from the abstract to the concrete with the Minab airstrike of 28 February 2026, in which a missile struck a school in Minab, Iran, killing more than 150 people, primarily children and staff. While not officially confirmed, reports suggest that the strike was carried out by the United States, possibly on the basis of intelligence supplied by Israel that misidentified the building as a lawful military target. If those reports are accurate, the legal issue is not only whether the strike constitutes an international crime, but also whether those generating and supplying the intelligence may be regarded as co-perpetrators rather than merely secondary participants.
The Minab airstrike illustrates a broader structural problem: responsibility may become fragmented across a chain of interdependent acts, including data collection, algorithmic processing, and kinetic execution. The striking state may argue that it lacked the requisite mens rea due to faulty intelligence, while the intelligence provider may contend that it lacked the relevant actus reus because it did not make the final operational decision.
However, where AI systems play a central role, “providing information” may amount in practice to “participating in the attack”. Under art 25(3)(a) of the Rome Statute,¹ criminal responsibility attaches to those who commit a crime jointly. Jurisprudence of the International Criminal Court (ICC), notably Prosecutor v Lubanga² and Prosecutor v Katanga,³ links this form of liability to “control over the crime”, understood as functional control over its commission.
Co-perpetration requires both a common plan and an essential contribution by each participant. In the context of the Minab incident, a common plan should not be inferred solely from intelligence-sharing arrangements. The analysis may instead be framed through the concept of shared operational risk: a mutual understanding that one party would generate intelligence and the other would operationalise it in a manner creating a substantial risk of unlawful harm to civilians in a specific military operation. The plan must be directly connected to the conduct that produced the unlawful outcome.
In conventional cases, the provision of information is typically characterised as aiding and abetting. In the context of AI warfare, however, where the attacking party relies on AI-generated misidentification to locate or validate a target, such input may exceed mere assistance. Where the strike would not realistically have occurred in the same manner absent that input, the contribution may qualify as “essential” for the purposes of co-perpetration.
This concern is amplified by automation bias. A commander may remain formally “in the loop” while, in practice, deferring to algorithmic assessments presented as objective. Even highly competent military commanders are unlikely to possess the technical expertise required to independently evaluate AI-generated outputs. Combined with the time-sensitive nature of operational decision-making, this dynamic reinforces automation bias and increases reliance on externally generated intelligence.
In the case of the Minab airstrike, the United States, although itself highly capable, reportedly relied to a significant degree on Israeli intelligence, particularly given Israel’s experience in AI-assisted data collection in the Middle East. When combined with automation bias, this reliance may elevate the role of the intelligence provider in shaping operational outcomes.
Principal liability under the Rome Statute requires a sufficiently culpable mental state; negligence is insufficient.⁴ Liability may arise where participants were aware of a substantial likelihood that flawed AI-generated intelligence would be used to strike civilian objects, yet proceeded within a framework that accepted that risk. Whether this awareness satisfies art 30 of the Rome Statute⁵ depends on the specific facts, which remain unclear.
Nevertheless, AI-driven intelligence systems inherently involve a margin of error, even if minimal. This raises the question of whether such known risks meet the threshold of a “substantial likelihood” required to establish the mental element.
The central lesson of Minab is that international criminal law must not permit responsibility to dissipate across technical and operational boundaries. The striking party cannot automatically avoid responsibility by invoking faulty data, and the intelligence provider cannot claim neutrality where its systems were decisive. While aiding and abetting may, in some cases, remain the appropriate characterisation, the Minab incident demonstrates that intelligence providers are no longer necessarily secondary actors in the age of AI warfare. The crucial legal question is whether their role crosses the threshold from assistance to the level of joint control required for co-perpetration.
1. Rome Statute of the International Criminal Court (adopted 17 July 1998, entered into force 1 July 2002) 2187 UNTS 90 art 25(3)(a).
2. Prosecutor v Lubanga (Judgment) ICC-01/04-01/06 (Trial Chamber I, 14 March 2012).
3. Prosecutor v Katanga (Judgment) ICC-01/04-01/07 (Trial Chamber II, 7 March 2014).
4. Rome Statute (n 1) art 30.
5. ibid.