The integration of artificial intelligence (AI) into military operations is accelerating across global powers, including the United States, its allies, and competitor states such as China and Russia. AI's operational use in the military exists on a spectrum of autonomy, with varying degrees of human involvement across positional (in, on, or out of the loop), dimensional (which tasks are controlled), and temporal (when humans intervene) domains. This complexity disrupts traditional understandings of command and accountability. Current U.S. policy, such as Department of War (DOW) Directive 3000.09, mandates that autonomous systems operate under human authority and within defined legal and ethical bounds. However, the rapid pace of AI development, together with adversaries' less restrained deployment, calls into question the adequacy of current frameworks.
This essay argues that AI should complement, not replace, human judgment in military contexts. Ethical oversight must be embedded throughout AI design and deployment, ensuring that all decision-making remains accountable to human actors. AI is currently employed in roles that support human decision-making, autonomously control weapon systems, and conduct both kinetic and non-kinetic operations. While these capabilities offer significant advantages in speed, precision, and operational scale, they also introduce unprecedented ethical, legal, and strategic challenges, particularly as AI systems become increasingly autonomous. Only systems that provide clear, explainable reasoning for their actions should be authorized for combat use. The illusion of precision and neutrality in AI can mask moral disengagement and foster impunity by detaching lethal decisions from human judgment. To mitigate these risks, we advocate a framework that recognizes global diversity in values while establishing shared norms to guide responsible AI use.