By Ioanna LEKEA
Associate Professor Ioanna K. Lekea is a specialist in military ethics with a focus on the responsible integration of emerging technologies in defence. As the Director of the War Games Lab of the Hellenic Air Force Academy, she leads research on the ethical design of AI in targeting, human-in-the-loop systems, and the impact of automation on decision-making. Her work explores how cognitive and emotional factors influence pilot decision-making in extremis and how simulation-based tools can enhance ethical awareness and operational performance.
Making decisions during war has always been challenging. Logistics, incomplete information about the enemy, risks to both military personnel and civilians, and environmental implications all make clear that decisions must rest on well-defined parameters. These parameters need to address not only the tactical and operational requirements for planning and executing a successful mission, but also ethical considerations, the standing Rules of Engagement (RoE), and International Humanitarian Law (IHL). Gathering information from the field has always been, and remains, a significant challenge and a crucial factor in military decision-making. Could the combination of remote information gathering, AI enhancement of drone operators’ targeting, and remote target execution make this process easier?
In contemporary military operations, Unmanned Aerial Vehicles (UAVs) have become essential tools for surveillance, reconnaissance, and precision strikes. Their operational value is undeniable: they provide persistent presence and deep penetration into hostile territory, and they reduce the exposure of friendly forces. However, the physical and emotional distance they create between operators and the battlefield raises significant legal, psychological, and ethical concerns. As Artificial Intelligence (AI) becomes increasingly integrated into the targeting process, it enables faster data processing. Yet the speed and abstraction AI introduces risk reducing human oversight and moral responsibility. How can we ensure that targeting decisions remain both operationally effective and ethically sound? And how can we design AI systems that support – without diminishing – human judgment?
From Automation to Enhancement: AI as a Decision Support Agent
UAV operators face a challenging paradox. On one hand, they are physically removed from the battlefield, operating in control rooms far from the point of impact. On the other hand, they bear the full moral and legal responsibility for the decisions they make and the strikes they perform, often under pressing operational time constraints. They must execute missions based on intelligence reports, visual feeds, tactical objectives, and legal parameters. Yet, as humans, their decisions are also influenced by fatigue, stress, cognitive tunnelling, and emotional reactions to the unfolding situation on the ground. Although trained to adhere to strict procedures, operators often face ethical dilemmas that demand human judgment and cannot be fully codified into predefined algorithmic logic.
AI systems can enhance decision-making by processing vast volumes of sensor data, detecting patterns, identifying anomalies, and even predicting likely threats. We should therefore expect that AI enhancement will increase accuracy, reduce human error, and enable quicker reactions to dynamic battlefield situations. However, we must not overlook the potential risks. While AI can serve as an insightful decision-support tool, it must not cross the line into becoming the primary decision-maker or opinion leader, as it lacks moral awareness and is incapable of interpreting moral complexity. An algorithm may identify a target based on pattern recognition or prior data, but it cannot reflect on the proportionality of a strike, the probable value of restraint, or the moral cost of civilian casualties.
AI correlates data and outputs recommendations based on training sets and coded parameters. In contrast, human reasoning is holistic: it weighs uncertainty, prior experience, and ethical considerations. Where an algorithm sees a lawful strike, a human may sense doubt, hesitation, or the need for further verification. It is therefore absolutely necessary to preserve and reinforce meaningful human control in AI-enhanced targeting. This does not simply mean giving a human the final “yes” or “no”; it means designing systems that continuously engage the operator’s judgment, prompt ethical reflection, and allow intervention before the moment of decision.
In targeting decisions, AI may assist, but only humans can be responsible. Trust is a critical parameter in building successful human-AI teaming, particularly in high-stakes military environments. Operators must understand the logic behind AI outputs in order to assess their relevance and validity. This requires systems to be explainable, not only to engineers but also to end users working under time pressure. An explainable AI system should therefore:
- Reveal the data sources and logic paths behind its conclusions,
- Clearly indicate confidence levels and assumptions,
- Allow the operator to challenge, reject, or refine recommendations.
Such features help avoid over-reliance and automation bias and support the retention of human moral agency. Transparency also underpins command oversight, legal audit, and post-strike review, all of which are essential for accountability.
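To make these requirements concrete, the short Python sketch below shows one possible shape for an explainable recommendation and for the record of the operator’s response to it. It is a minimal sketch for discussion: the class and field names (ExplainableRecommendation, OperatorReview, and so on) are illustrative assumptions, not a description of any fielded system.

```python
# Illustrative sketch only: the class names, fields, and values are assumptions
# for discussion, not a description of any fielded targeting system.
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class OperatorAction(Enum):
    ACCEPT = "accept"
    CHALLENGE = "challenge"   # ask for more evidence or a re-run
    REJECT = "reject"
    REFINE = "refine"         # adjust parameters and re-evaluate


@dataclass
class ExplainableRecommendation:
    target_id: str
    recommendation: str        # e.g. "engage", "hold", "re-assess"
    confidence: float          # 0.0-1.0, displayed to the operator
    data_sources: List[str]    # sensor feeds / intelligence reports used
    logic_summary: str         # plain-language path to the conclusion
    assumptions: List[str]     # explicit assumptions behind the output


@dataclass
class OperatorReview:
    action: OperatorAction
    rationale: str             # recorded for command and legal review
    audit_log: List[str] = field(default_factory=list)

    def record(self, rec: ExplainableRecommendation) -> None:
        # Every interaction is logged so post-strike review can reconstruct what
        # the system showed, what the human decided, and why.
        self.audit_log.append(
            f"{rec.target_id}: system={rec.recommendation} (conf={rec.confidence:.2f}) "
            f"-> operator={self.action.value}: {self.rationale}"
        )
```

The design intent is simply that sources, confidence, and assumptions travel with every recommendation, and that every challenge or rejection leaves an auditable trace.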
Accepting AI Enhancement in Targeting
To explore how AI can reinforce ethical decision-making, simulation environments and serious games have been developed in the War Games Lab of the Hellenic Air Force Academy to replicate real-world targeting dilemmas. In a representative experimental scenario, cadets (38 in total) playing the role of UAV operators were tasked with evaluating a high-value target under uncertain conditions: possible civilian presence, incomplete intelligence, and operational time pressure. Participants used a prototype decision-support interface incorporating AI-generated alerts, legal assessments, and visualizations of potential collateral effects. They could accept, question, or override system outputs. Results showed that:
- In the pre-mission briefing, many operators (67%) voiced the fear that AI would act as an opinion leader, and expressed mixed (24%) or negative (27%) emotions about receiving suggestions from AI,
- During the experiment, the operators who took the AI suggestions into consideration (78%) made decisions aligned with IHL and the RoE; of those who followed the standing procedures and chose to base their decision-making on the AI suggestions, the vast majority (94%) were also attentive to ethical and legal obligations, although they reported difficulty with the mission’s tight timeframe and the dilemmas they had to face,
- Confidence in the final decision increased when operators felt that they could debate the AI’s logic,
- Stress from ethical uncertainty was reduced when the system flagged potential violations and allowed the operator to ask for a Collateral Damage Estimation (CDE) or a re-evaluation of the target (a minimal sketch of this pattern is given below),
- During the debriefing session, many operators discussed the risk of over-trusting the AI’s suggestions due to time constraints (36%) and emphasized the need for constant training on the platform (88%), as well as on ethics and legal constraints during military operations (74%), especially in urban environments.
These findings underscore the importance of ethics-by-design in AI systems: design must start from operator needs and legal obligations, not merely from technical capabilities.
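The flag-and-re-evaluate behaviour mentioned in the results can be pictured roughly as follows. This is a minimal sketch under stated assumptions: estimate_civilian_presence, run_cde, and the threshold value are hypothetical placeholders, not functions of the Lab’s actual prototype.

```python
# Minimal sketch of the flag-and-re-evaluate pattern; estimate_civilian_presence(),
# run_cde(), and the threshold are hypothetical placeholders, not the Lab's prototype.
from typing import Optional

CIVILIAN_PRESENCE_THRESHOLD = 0.2  # illustrative value only


def estimate_civilian_presence(target_id: str) -> float:
    """Hypothetical fused estimate (0-1) of civilian presence near the target."""
    return 0.35  # placeholder so the sketch runs; a real system would fuse live data


def run_cde(target_id: str) -> str:
    """Hypothetical Collateral Damage Estimation triggered at the operator's request."""
    return f"CDE report for {target_id} (placeholder)"


def review_target(target_id: str, operator_requests_cde: bool) -> Optional[str]:
    """Flag potential concerns and defer to the operator; the system never clears a target itself."""
    presence = estimate_civilian_presence(target_id)
    if presence > CIVILIAN_PRESENCE_THRESHOLD:
        if operator_requests_cde:
            return run_cde(target_id)  # fresh estimate for the operator to weigh
        return f"FLAG: possible civilian presence ({presence:.0%}) - operator review required"
    return None  # nothing flagged; the decision still rests with the human
```

The essential property is that the system raises the concern and hands control back to the operator, rather than resolving the dilemma on the operator’s behalf.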
Ethics as Architecture, Not Afterthought
Developing a responsible AI enhancement tool for targeting is not merely a technical challenge; ethics is a critical design parameter. Implementing ethical safeguards must begin at the conceptual design level: with hardcoded constraints that reflect the ethical and legal principles of distinction and proportionality; interactive interfaces that promote ethical reflection by making uncertainty and risk visible; and rigorous auditability to ensure transparency and accountability. Such systems must be capable of handling conflicting inputs and adapting to morally complex environments without defaulting to pre-programmed lethal outcomes.
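One way to read “ethics as architecture” is as hard constraints evaluated before any engagement recommendation is ever surfaced, with a non-lethal default whenever a check fails. The sketch below is an illustrative simplification: the checks, field names, and the crude proportionality comparison are assumptions for discussion, not an operational rule set.

```python
# Illustrative ethics-by-design guard: the checks, field names, and the crude
# proportionality comparison are assumptions for discussion, not an operational rule set.
from dataclasses import dataclass


@dataclass
class TargetAssessment:
    positively_identified_military: bool   # distinction: positive ID of a military objective
    expected_civilian_harm: float          # proportionality input: anticipated harm
    anticipated_military_advantage: float  # proportionality input: anticipated advantage
    cde_current: bool                      # collateral damage estimate is up to date


def engagement_recommendation(a: TargetAssessment) -> str:
    """Return a recommendation string; every failed check defaults to a non-lethal 'HOLD'."""
    if not a.positively_identified_military:
        return "HOLD: distinction not satisfied - escalate to commander"
    if not a.cde_current:
        return "HOLD: collateral damage estimate outdated - request new CDE"
    if a.expected_civilian_harm >= a.anticipated_military_advantage:
        # Gross simplification of proportionality, used here only to show a hard constraint.
        return "HOLD: proportionality concern - legal review required"
    # Even when all coded checks pass, the system only recommends; the decision,
    # and the responsibility for it, remain with the human operator.
    return "ELIGIBLE for operator decision (all coded checks passed)"
```

In a real design, each evaluation would also be written to an audit log so that command oversight and post-strike review can reconstruct why the system recommended what it did.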
Far from being a constraint, ethics can act as a force multiplier in military operations. Targeting decisions that are legally grounded, morally defensible, and procedurally transparent ensure operational legitimacy, effectiveness, and public trust. As AI becomes increasingly central to targeting processes, the need for deliberate, ethically grounded system design intensifies. Human operators must not be isolated or overwhelmed by automation; they must be empowered by it.
Through ethics-by-design principles, AI can reinforce – rather than substitute for – human moral judgment, enabling military operations that are not only more precise but also more just. The future of targeting does not depend on removing the human from the loop, but on equipping operators with systems that help them recognize what is permissible, not merely what is technically feasible.
The texts published by C&V Defence bind only their authors. They do not in any way bind C&V Defence or the institutions to which the authors belong.