“The Oppenheimer Moment of Our Generation”: Artificial Intelligence, Autonomy and the Ethics of War
Dr. Veronika Bock is the deputy director of the Centre for Ethical Education in the Armed Forces (zebis) and the Institute for Theology and Peace (ithf) in Hamburg. Her research and teaching focus on peace and military ethics, human rights, and the impact of digitalisation in the armed forces. She lectures in Germany and abroad and has published on these topics.
The rapid development of artificial intelligence (AI) represents a real ‘turning point’ in military technology. Proponents promise that lethal autonomous weapon systems (LAWS) – with their increased precision and endurance – will humanise warfare by reducing the number of casualties. Autonomy in this context refers to a system’s ability to make decisions algorithmically, without human intervention. Critics warn that such systems carry the risk of accountability gaps, escalation dynamics and an erosion of moral restraint. Regulation is urgently needed: with the adoption of a UN General Assembly resolution on autonomous weapons in 2023, the international community has taken a first, albeit tentative, step towards ensuring meaningful human control over the use of force [1].
Between euphoria and dystopia
Discussions about AI often oscillate between euphoria and dystopia. Some celebrate a creative partnership between humans and machines; others, like Stephen Hawking, warned of a future in which AI could spell the end of civilisation [2]. The question that now confronts us is profoundly anthropological: what role should the human being play in the human–machine relationship – and, ultimately, what kind of future do we wish to live in?
This question acquires existential urgency when applied to the military domain. Reports suggest that weapon systems with autonomous functions – such as loitering munitions equipped with image recognition – are already being used in contemporary conflicts. We are witnessing a genuine military-technological turning point, accelerated by wars in Ukraine and the Middle East.
Proponents highlight the operational advantages: autonomous systems can defuse mines or evacuate the wounded without endangering soldiers, and AI-supported drones can process information much faster than human operators. Robotics expert Ronald C. Arkin argues that such systems, because they act without emotions such as fear, anger, or revenge, could reduce the risk of war crimes and collateral damage and, paradoxically, lead to a ‘humanisation’ of combat [3].
However, this claim raises fundamental questions. Can algorithmic systems really take responsibility for complex moral decisions? Decision-making presupposes consciousness and self-reflection – qualities that machines do not possess. Projecting moral agency onto machines obscures the fact that, by design, they are merely executors of human intentions, not bearers of moral responsibility. This attribution of human characteristics to non-humans (anthropomorphism) is a fundamental and often unquestioned problem in the debate on LAWS.
Although there is no universally accepted definition, most approaches describe autonomous weapon systems as those that, once activated, can select and attack targets without further human intervention. The International Committee of the Red Cross (ICRC) defines them as systems that, after human activation, independently initiate or trigger attacks in response to sensor data [4]. Autonomy here should not be understood in the Kantian moral-philosophical sense of self-legislation, but rather as technical self-execution.
The problem is not only theoretical. International humanitarian law was written for human actors, not machines. Accountability cannot be transferred to algorithms: responsibility for actions and their consequences remains with humans [5]. However, the diffusion of responsibility in the design, programming and use of AI-assisted weapons carries the risk of undermining precisely this principle. Who is to blame for mistakes or collateral damage – the engineer, the programmer, the commander?
Security analysts such as Ulrike Franke warn of scenarios in which autonomous systems of opposing forces react to one another’s signals in fractions of a second – so-called “flash wars” that no human ever intended [6]. Others point to an accelerating arms race, where deterrence requires parity in autonomy, or to the destabilisation of nuclear equilibrium through AI-enabled submarine detection. These developments increase the urgency of a binding legal framework before technology dictates the pace of war.
Ethical and Legal Frameworks: Meaningful Human Control
Any future regulation must ensure meaningful human control over critical functions of weapon systems – particularly target selection and the parameters of duration, range, and engagement [7]. Such control must be genuine and not merely a formality. Operators must be able to anticipate and influence the consequences of deployment in accordance with the principles of discrimination and proportionality enshrined in international humanitarian law.
Meaningful control can be assessed based on various dimensions: the time between the last human decision and the use of force; the operational environment, in particular the presence of civilians; the nature of the mission, whether defensive or offensive; and the level of training of the personnel who can intervene if necessary [8]. These factors determine whether human control is substantial or merely symbolic.
Automation bias poses an additional risk: the tendency of human operators to over-trust machine outputs, even when they conflict with intuition or experience [9]. Where reaction times are measured in milliseconds, genuine oversight may become illusory. The more sophisticated the algorithm, the more persuasive its authority appears – and the less likely humans are to challenge it.
Maintaining meaningful human control therefore means setting design limits that preserve human judgment throughout the decision cycle. AI must serve as a decision-support tool, not a substitute for moral reasoning. NATO has emphasised that human dignity, accountability and the rule of law must remain at the core of any military use of AI [10].
Human dignity and the moral limits of delegation
The question of human dignity is at the heart of the debate on lethal autonomous weapons. Does the killing of a human being by a machine constitute a violation of this dignity? Can decisions about life and death be outsourced to algorithms without undermining the moral foundations of warfare?
Former UN Special Rapporteur Christof Heyns described the delegation of life-and-death decisions to machines as ‘the ultimate assault on human dignity’ – death by algorithm [11]. Paul Scharre likewise argues that such decisions lie at the core of the military profession; removing them from human agency undermines the ethical foundations of soldiering itself [12].
Human dignity, enshrined in Article 1 of the German Basic Law, is an absolute and inalienable value. Philosophically, it derives from Kant’s imperative to treat human beings ‘always as an end, never merely as a means’ [13]. Delegating lethal decisions to an unfeeling machine carries the risk of reducing human beings to data points – targets within an algorithmic calculation – and thus treating them as mere objects. In this sense, lethal autonomy is not only a legal or technical problem, but also an ethical one: it redefines what it means to act morally in war.
Even if autonomous systems were fundamentally capable of adhering to the letter of international humanitarian law, they would lack the capacity for empathy, compassion or restraint – those deeply human responses that sometimes override the logic of military necessity.
Warfare is not a purely technological process, but a social and communicative event shaped by culture, perception and moral conflicts. Removing humans from the process of lethal decision-making therefore threatens to undermine the moral grammar of warfare itself [14].
Conclusion: Responsibility in the Age of Autonomy
A growing majority of states recognise the urgency of regulating autonomous weapon systems. The 2023 UN resolution is a first step, but a binding international treaty is needed to ensure that control over the use of force remains in human hands. Meaningful human control is not an optional ethical luxury – it is the minimum requirement for moral and legal accountability.
As military technology evolves, so must the ethical education of soldiers and commanders. Ethical competence cannot be retrofitted; it must be cultivated as part of professional military formation. The challenge is to integrate technological innovation with the enduring principles of humanity and justice in war.
The ‘Oppenheimer moment’ [15] of our generation lies in deciding whether AI in war becomes an instrument of peace or a catalyst for the erosion of the ethical standards of warfare – and that decision depends on our ability to define its limits.
[1] Cf. https://docs-library.unoda.org/General_Assembly_First_Committee_-Seventy-Ninth_session_(2024)/78-241-US-EN.pdf [13.10.2025].
[2] Kharpal, A. (2017): Stephen Hawking says A.I. could be ‘worst event in the history of our civilization’: https://www.cnbc.com/2017/11/06/stephen-hawking-ai-could-be-worst-event-in-civilization.html [13.10.2025].
[3] Cf. Arkin, R. (2014): Lethal Autonomous Systems and the Plight of the Non-combatant: https://www.ethikundmilitaer.de/en/magazine-datenbank/detail/2014-01/article/lethal-autonomous-systems-and-the-plight-of-the-non-combatant [15.10.2025].
[4] Cf. ICRC position on autonomous weapon systems (12.5.2021): https://www.icrc.org/en/document/icrc-position-autonomous-weapon-systems [15.10.2025].
[5] Cf. Bilgeri, A. (2024): Autonomous Weapons Systems – Current International Discussions: https://www.ethikundmilitaer.de/en/magazine-datenbank/detail/01-2024/article/autonomous-weapons-systems-current-international-discussions [15.10.2025].
[6] Cf. Gierke, S. (2024): Künstliche Intelligenz im Militärbereich: ‚Es gibt sehr reale Gründe für Sorge‘: https://www.sueddeutsche.de/politik/kuenstliche-intelligenz-militaer-moderne-kriegsfuehrung-1.6346059
[7] See Connolly, C. (2024): Advocating For A Legally Binding Instrument on Autonomous Weapons Systems: Which Way Ahead: https://www.ethikundmilitaer.de/en/magazine-datenbank/detail/01-2024/article/advocating-for-a-legally-binding-instrument-on-autonomous-weapons-systems-which-way-ahead
[8] See Geiss, R. (2015): Die völkerrechtliche Dimension autonomer Waffensysteme: https://library.fes.de/pdf-files/id/ipa/11444-20150619.pdf, p. 12 [15.10.2025].
[9] Cf. Schörnig, N. (2014): Automatisierte Kriegsführung. Wie viel Entscheidungsraum bleibt dem Menschen? https://www.bpb.de/shop/zeitschriften/apuz/190115/automatisierte-kriegsfuehrung-wie-viel-entscheidungsraum-bleibt-dem-menschen [15.10.2025].
[11] Heyns, C. (2013): Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions: https://www.ohchr.org/Documents/HRBodies/HRCouncil/RegularSession/Session23/A-HRC-23-47_en.pdf [02.12.2025].
[12] Scharre, P. (2018): Army of None: Autonomous Weapons and the Future of War. New York: Norton, p. 293.
[13] Kant, I. (1903): Grundlegung zur Metaphysik der Sitten, Akademie-Ausgabe IV, p. 429; Dürig, G. (1956): Der Grundrechtssatz der Menschenwürde, AöR 81, pp. 117–157.
[14] See Asaro, P. (2012): On Banning Autonomous Weapon Systems, International Review of the Red Cross 94 (886), pp. 687–709; Sparrow, R. (2016): Robots and Respect, Ethics & International Affairs 30(1), pp. 93–116.
[15] Schallenberg, A. (2024): Opening Statement. International Conference »Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation«. Vienna, 29 April 2024: https://www.bmeia.gv.at/ministerium/presse/reden/2024/04/eroeffnungsrede-internationale-konferenz-humanity-at-the-crossroads-autonomous-weapons-systems-and-the-challenge-of-regulation [13.10.2025].