Naming and Framing
Artificially intelligent systems are being developed with iteratively autonomous functions, and these systems are increasingly being considered for use in military settings, weapon platforms, and operations.1 We have noted a subtle but nonetheless important misconception regarding these systems and their functions: namely, that while a system may possess and exercise autonomy over particular functions, this does not, and should not, imply that the system is autonomous as a whole. Thus, when terms such as “semi-autonomous AI” and/or “autonomous AI” are used, we feel it is important (if not necessary) to clarify:
(1) which component functions of the system are or are not autonomous;
(2) the parameters, extent, and limits of such autonomous functionality; and
(3) how such autonomous functionality and its constraints will be implemented and realized in operational practices.
At present, AI systems are capable of autonomously conducting data synthesis, threat analyses and prioritization, and mission-relevant task execution. The capacity to deploy such AI across command, control, surveillance, and kinetic platforms introduces a decisive transformation in the tempo and topology of operational engagement. While the current Department of Defense Directive 3000.09 (first developed in 2013, and subsequently updated and revised in 2023) frames and provides guidance on the use of iteratively autonomous AI systems, we opine that, while good and surely necessary, its scope and depth are insufficient in light of recent and ongoing progress in AI in general, and militaries' use of AI more specifically. Clearly, a gap exists between Department of Defense (DoD) dictum, based upon a three- to five-year-old (and therefore somewhat dated) appraisal of and response to the technological readiness of various iterations of AI, and the current state of various AI platforms, both as force multipliers within other military weapons and capability systems and as weaponizable entities per se.
In light of this, we offer the following three recommendations:
• First, that DoDD 3000.09 be reviewed and revised on a regular schedule, every two to three years;
• Second, that working groups be established both within US CYBERCOM and within those combatant commands with high AI utilization to (a) monitor and guide AI technological readiness levels, and (b) establish key tasks relevant to next-phase DoDD 3000.09 updates; and
• Third, and likely as an undergirding precept to the prior two, that a common lexicon be developed that provides a working terminological codex for use within the US military, its allies, and (domestic and international) defense industrial base entities, so as to enable enhanced interoperability of AI system R&D, applications, use, and oversight.
To be sure, the ongoing development of AI systems' capabilities, and the transformative impact(s) their use may have upon military operations, demand recalibrated doctrine that is grounded in technical precision, ethical integrity, and enforceable human authority.
Iteratively Autonomous (Functions of) AI – and the Need for a Synthesized Human-AI Command Structure
Iteratively autonomous AI systems function through recursive learning, environmental modeling, and probabilistic assessment.2 Task execution unfolds through programmed inference that is optimized for effect, scaled for efficiency, and structured through algorithmic feedback.3 Yet, at present, these systems cannot accurately interpret intent or assign moral weight to projected consequences.4,5 Thus, human command must remain embedded and actionable at every level, preserving the capacity to initiate, monitor, intervene, redirect, or terminate action.6
We assert that strategic integration of a spectrum of autonomous functions and functionality in military AI systems requires a framework that supports differentiated levels of autonomy aligned with mission phase, operational complexity, and ethical probity. For example, systems functioning in intelligence aggregation and sensor-based classification of data and information could be afforded more latitude. Conversely, functions involving target designation and engagement would demand immediate human evaluation and approval. This structure would ensure that the attributable aspects of operational execution, and responsibility for risk and benefit, remain within the domain of human command. We refer to this model as synthesized command (SYNTHComm).7
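To make this differentiation concrete, a minimal sketch follows. It assumes a simple mapping from functional category to the degree of human approval required; the tier names, function labels, and default posture are our own hypothetical illustrations, not DoD designations or an existing system interface.

```python
from enum import Enum

class AutonomyTier(Enum):
    """Hypothetical tiers of permitted machine autonomy (illustrative only)."""
    AUTONOMOUS_WITH_AUDIT = 1   # system may act; actions are logged for review
    HUMAN_ON_THE_LOOP = 2       # system may act; a human may intervene at any time
    HUMAN_IN_THE_LOOP = 3       # system may only recommend; a human must approve

# Assumed mapping of mission functions to tiers, mirroring the differentiation
# described above: aggregation and classification get latitude, engagement does not.
FUNCTION_TIERS = {
    "intelligence_aggregation": AutonomyTier.AUTONOMOUS_WITH_AUDIT,
    "sensor_classification":    AutonomyTier.HUMAN_ON_THE_LOOP,
    "target_designation":       AutonomyTier.HUMAN_IN_THE_LOOP,
    "target_engagement":        AutonomyTier.HUMAN_IN_THE_LOOP,
}

def requires_human_approval(function_name: str) -> bool:
    """Return True when the function may not execute without explicit human approval."""
    tier = FUNCTION_TIERS.get(function_name, AutonomyTier.HUMAN_IN_THE_LOOP)
    return tier is AutonomyTier.HUMAN_IN_THE_LOOP
```

Defaulting any unlisted function to the most restrictive tier reflects the precautionary posture we argue for here.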
The SYNTHComm model entails:
1. Performance standards that should include real-time diagnostics, transparent decision paths, and high-fidelity system monitoring and metrics. These metrics should assess inference precision, temporal responsiveness, adaptability to situational flux, and system recovery capacity. Each metric should reflect system effectiveness and efficiency, as well as the extent to which outputs remain consistent with ethical and legal parameters and constraints. Simply put, we assert that technical performance without auditability reduces command authority to observation and abdicates responsibility.
2. Correction mechanisms that should operate as integrated elements across the spectrum of (automatic to autonomous) AI platforms. These could include predictive error detection, automatic reconfiguration, intervention protocols, and mission-execution cutoffs. Intervention capability must remain functionally immediate, structurally accessible, and procedurally integrated. Each anomaly in system logic, sensor interpretation, or behavioral execution must generate a signal that is recognizable to a human collaborator, and these signals should be timed, and of sufficient salience, to enable response before consequence.
3. Oversight functions that should extend across systems' design, mission deployment, and operational execution. We propose a system ecology wherein architects encode the foundational logic; operational commanders define mission parameters and ethical boundaries; and field supervisors maintain real-time operational contact, with the ability to override system recommendations and halt sequences and outputs that diverge from ethically sound and lawful engagement. This triumvirate oversight infrastructure would ensure that all AI functions remain subject to human command responsibility (an illustrative sketch of such an intervention chain follows this list).
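As an illustrative sketch only, and assuming the roles and signal fields named below (which are our own placeholders rather than prescribed doctrine), the correction and oversight mechanisms above might reduce, in software, to an intervention chain in which every anomaly raises a human-recognizable signal and engagement-class actions block unless an authorized human affirmatively approves them:

```python
import time
from dataclasses import dataclass, field

@dataclass
class AnomalySignal:
    """A human-recognizable alert raised when system logic, sensing,
    or behavior departs from expected bounds (fields are illustrative)."""
    source: str        # e.g., "sensor_interpretation", "behavior_execution"
    description: str
    raised_at: float = field(default_factory=time.time)

class HumanSupervisor:
    """Stand-in for the field supervisor role described above."""
    def notify(self, signal: AnomalySignal) -> None:
        # Surface the anomaly with enough salience to enable timely response.
        print(f"[ALERT {signal.raised_at:.0f}] {signal.source}: {signal.description}")

    def approve_engagement(self, proposal: str) -> bool:
        # In a real system this would be an affirmative human decision;
        # here it is hard-wired to refuse, i.e., fail safe.
        return False

class SynthCommController:
    """Illustrative controller: the system proposes, the human disposes."""
    def __init__(self, supervisor: HumanSupervisor):
        self.supervisor = supervisor
        self.halted = False

    def raise_anomaly(self, source: str, description: str) -> None:
        self.supervisor.notify(AnomalySignal(source, description))

    def request_engagement(self, proposal: str) -> bool:
        # Engagement-class actions block on explicit human approval.
        if self.halted:
            return False
        return self.supervisor.approve_engagement(proposal)

    def halt(self) -> None:
        """Mission-execution cutoff: immediately stops further action."""
        self.halted = True
```

Hard-wiring refusal as the default, as in approve_engagement above, reflects the fail-safe posture of the model: absent a timely, affirmative human decision, the system does not act.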
It could be argued that the decision latency in human judgment may restrict the temporal and operational effectiveness and efficiency of AI-based systems and operations. We acknowledge that human involvement axiomatic to the SYNTHComm model may incur some measure of delay, but argue that this latency can allow for more accurate discernment. The ability to assess proportionality, collateral risk, and ethico-legal classification exists solely within human reasoning, and these considerations are not (yet) constituent to any current or emergent algorithm. Operational legitimacy depends on this difference. Within this SYNTHComm model, the system performs; the human evaluates.
Within the SYNTHComm architecture, data outputs should feed into a continuous archive, producing an auditable record of system function, decision sequences, and operator inputs. This archive would support after-action analysis, ethico-legal compliance review, and tactical revision and strategic recalibration, as each autonomous action produces tactical outcomes as well as obligations for human justification. Thus, human operators, system designers, and mission commanders each contribute to a full accountability chain. For SYNTHComm to be effective, this chain must operate without failure at the point of engagement, without degradation during execution, and without ambiguity during review. The system's autonomy does not confer exemption from accountability. Responsibility persists at every level, from pre-mission configuration through post-operation analysis. Indeed, the capabilities of iteratively autonomous AI extend human action, but should not replace human judgment.
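The continuous archive described above could be realized as simply as an append-only log keyed to each decision. The record fields below are assumptions chosen to mirror the accountability chain we outline (operator, designer/configurator, and mission commander), not a prescribed schema:

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    """One entry in the continuous archive (illustrative fields)."""
    mission_id: str
    system_function: str    # which autonomous function acted
    decision_sequence: list # ordered system inferences leading to the action
    operator_input: str     # human intervention, approval, or override, if any
    configured_by: str      # pre-mission configuration authority
    commanded_by: str       # mission commander of record
    timestamp: float = 0.0

def append_record(archive_path: str, record: DecisionRecord) -> None:
    """Append-only write so the archive supports after-action review."""
    record.timestamp = record.timestamp or time.time()
    with open(archive_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```

Because each entry is appended rather than overwritten, the archive supports after-action analysis and compliance review without ambiguity about what the system did and who authorized it.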
But ethical probity should not exist solely within the human component of the integrated SYNTHComm system. Autonomous capabilities should function within boundaries that are both technical and moral, established by design through mission constraints, ethical codes, and legal parameters. Per the work of William Casebeer,8 we advocate that system constructs should internalize these boundaries in structure and logic, and that system function(s) should respect these boundaries in operational execution, such that every decision point within the system reflects an embedded value structure of encoded priorities and weighted assessments.9 These elements should originate from human developers and operational designers, and should be actualized by human command decisions.
We believe that this model of integrative, supervised autonomy allows for speed and scope while constraining unchecked, and thus unattributable, actions that degrade ethical responsibility and legal retributability. In short, we offer that precision, speed, and efficiency best serve the operational objective when deployed within frameworks of responsibility. We opine that the future of warfare depends on preserving that alignment, irrespective of the systems or platforms deployed, so that every decision and action remains attributable to human judgment, guided by ethical principle, constrained by law, and executed through discipline-by-design.
Disclaimer
The views and opinions expressed in this essay are those of the authors, and do not necessarily reflect those of the United States government, Department of War, or the National Defense University.
Elise Annett is the Institutional Research, Assessment, and Accreditation Associate at the National Defense University.
Dr. James Giordano is Director of the Center for Disruptive Technology and Future Warfare of the Institute for National Strategic Studies at the National Defense University.
References
1Kilian, K. A. (2025). Beyond accidents and misuse: Decoding the structural risk dynamics of artificial intelligence. AI & Society. https://doi.org/10.1007/s00146-025-02419-2
2Lillemäe, E., Talves, K., & Wagner, W. (2025). Public perception of military AI in the context of techno-optimistic society. AI & Society, 40, 929–943. https://doi.org/10.1007/s00146-023-01785-z
3Munch, L. A., Bjerring, J. C., & Mainz, J. T. (2024). Algorithmic decision-making: The right to explanation and the significance of stakes. Big Data & Society, 11(1). https://doi.org/10.1177/20539517231222872
4,8Casebeer, W. D. (2020). Building an artificial conscience: Prospects for morally autonomous artificial intelligence. In Y. R. Masakowski (Ed.), Artificial intelligence and global security: Future trends, threats and considerations (Chapter 5). Emerald Publishing Limited. https://doi.org/10.1108/978-1-78973-811-720201005
5Shook, J. R., Solymosi, T., & Giordano, J. (2020). Ethical constraints and contexts of artificial intelligent systems in national security, intelligence, and defense/military operations. In Y. R. Masakowski (Ed.), Artificial intelligence and global security: Future trends, threats and considerations (Chapter 8). Emerald Publishing Limited. https://doi.org/10.1108/9781789738117
6,7Annett, E., & Giordano, J. (2026). AI versus AI: Human engagement via synthesized command (SYNTHComm) in AI warfare. HDIAC Journal, 10(1). In press.
9Oimann, A. K., & Salatino, A. (2024). Command responsibility in military AI contexts: Balancing theory and practicality. AI and Ethics. https://doi.org/10.1007/s43681-024-00512-8