The public acknowledgement of the increasing use of decision-based artificial intelligence (AI) in U.S. defense provides a backdrop to a structural reordering of how military missions will be generated, exercised, and contested. The declassification of U.S. interest in AI-enabled decision operations, coinciding with the National Defense Authorization Act, the National Strategic Security Study, and the 2026 Responsible AI in the Military Domain summit, advances a strategic transition in which military competition progressively centers on control over the decision-space itself. As this domain matures, differentiation in decision-making capability and speed will prove decisive. Yet that emphasis tends to de-emphasize alignment with multinational norms: simply put, the tempo of AI-based action affords greater advantage than the slower pace of norm development.
Decision-space therefore emerges as a battle domain that can be shaped, contested, and degraded through cognitive means. Recognizing this necessitates institutional structures that align command authority, planning horizons, and operational design around the structuring of choice. Decision-based AI enables actors to pre-shape operational environments by filtering information, sequencing options, and accelerating commitment in ways that constrain adversary response sets and compress opportunities for political intervention. In this environment, power increasingly derives from the capacity to structure choice rather than from the accumulation of force alone; and advantage accrues through decisions that channel opponent behavior toward predictable pathways and foreclose adaptive response. To exploit this dynamic at scale, command architectures will benefit from explicit designs that enable the exercise of human authority at machine speed rather than from reliance on legacy models of human involvement that were shaped for slower paces of conflict.
Cross-Domain Tempo and the Transformation of Command
The tempo of interaction across domains of land, sea, air, space, cyber, and the electromagnetic spectrum exceeds the limits of unaided human cognition. AI can enable sustained cross-domain coherence by affording high-velocity data fusion, pattern recognition, and probabilistic evaluation of courses of action across heterogeneous and incomplete data streams. When integrated into command and control architectures, these capabilities transform the observe–orient–decide–act (OODA) loop.
However, in this paradigm, decision timelines compress, orientation becomes increasingly machine-mediated, and relative advantage shifts toward actors capable of coherent cross-domain action at computational speed. Under such conditions, command authority gains effectiveness through dynamic placement of the human relative to the machine rather than through fixed control points.
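The machine-mediated OODA loop described above can be illustrated with a minimal sketch. The function, its parameters, and the gating logic are hypothetical illustrations, not an actual command-and-control implementation; the point is that the human gate is a parameter that can be placed dynamically rather than a fixed control point.

```python
def ooda_cycle(observe, orient, decide, act, human_gate=None):
    """One illustrative OODA iteration. `orient` and `decide` may run at
    computational speed; `human_gate` (if supplied) is the dynamically
    placed point of human authority before commitment.
    All callables here are hypothetical placeholders."""
    data = observe()
    assessment = orient(data)      # machine-mediated orientation
    option = decide(assessment)    # machine-proposed course of action
    if human_gate is None or human_gate(option):
        return act(option)         # commitment proceeds
    return None                    # human withheld authorization
```

A gate that always withholds approval turns the same loop into a human-in-the-loop configuration; removing it yields full machine initiative, which is the design space the surrounding text describes.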
Thus, the configuration of human authority becomes strategically decisive. Human-in-the-loop, human-on-the-loop, and human-near-the-loop arrangements represent distinct command functions that generate particular advantages, and carry particular burdens, at different phases of conflict. Human-in-the-loop control provides legal clarity and escalation control during commitment-sensitive decisions. Human-on-the-loop control enables the tempo of machine initiative while preserving human supervisory authority and intervention capacity. Human-near-the-loop control affords continuous cognitive influence over framing, thresholds, and intent.
Operational effectiveness increases when these modes function as an integrated system rather than as isolated safeguard configurations. Toward such integration, we have proposed a synthesized human-loop architecture (i.e., synthesized command and control, or SYNTHComm) that enables authority to be applied across the continuum of AI functions as operational tempo and escalation risks evolve, thereby maintaining human judgment in cooperation with AI speed.
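The migration of authority across the human-loop continuum can be sketched as a simple selection rule. This is an illustration of the concept, not a specification of SYNTHComm: the mode names map to the arrangements above, but the numeric thresholds and the selection logic are arbitrary placeholders.

```python
from enum import Enum

class AuthorityMode(Enum):
    """The three command modalities discussed in the text."""
    IN_THE_LOOP = "human-in-the-loop"      # human approves each commitment
    ON_THE_LOOP = "human-on-the-loop"      # machine acts; human can intervene
    NEAR_THE_LOOP = "human-near-the-loop"  # human shapes framing and intent

def select_mode(tempo: float, escalation_risk: float) -> AuthorityMode:
    """Hypothetical rule: authority migrates toward direct human control as
    escalation risk rises, and toward supervised machine initiative as
    tempo rises. Inputs normalized 0..1; thresholds are placeholders."""
    if escalation_risk >= 0.7:          # commitment-sensitive decisions
        return AuthorityMode.IN_THE_LOOP
    if tempo >= 0.7:                    # machine-speed action, supervised
        return AuthorityMode.ON_THE_LOOP
    return AuthorityMode.NEAR_THE_LOOP  # continuous cognitive influence
```

Under this sketch, a high-tempo but low-risk engagement runs on-the-loop, while any commitment-sensitive decision reverts to in-the-loop control regardless of tempo, which is the kind of dynamic placement the architecture is meant to enable.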
Effective control of this domain depends on embedding human influence across the full lifecycle of decision formation, evaluation, and execution rather than concentrating authority at a single point. Recognition of these dynamics should compel the Department of War (DoW) to evolve command institutions in tandem with technical adoption so that governance functions operate as system architecture. Indeed, ethical, legal, and policy constraints gain effectiveness when integrated directly into model objectives, interface design, escalation thresholds, and command workflows rather than applied, and perhaps iteratively layered, through episodic external review.
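What "governance as system architecture" means in practice can be sketched as a constraint gate that sits inside the decision workflow itself rather than in a downstream review step. The class names, fields, and threshold below are hypothetical illustrations, not real doctrine or software.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    """Hypothetical machine-generated course of action."""
    name: str
    escalation_score: float   # model-estimated escalation impact, 0..1
    legally_reviewed: bool    # cleared against rules of engagement

@dataclass
class GovernancePolicy:
    """Illustrative sketch: legal and escalation constraints are checked
    in-line, at machine speed, before any execution path is reached."""
    escalation_threshold: float = 0.6   # placeholder value

    def gate(self, action: ProposedAction) -> str:
        if not action.legally_reviewed:
            return "blocked: legal review missing"
        if action.escalation_score > self.escalation_threshold:
            return "escalated: human-in-the-loop approval required"
        return "cleared: autonomous execution permitted"
```

Because the constraint is part of the workflow, its effect is continuous rather than episodic: every proposed action passes through it, and raising or lowering the threshold is a command decision, not a policy memo applied after the fact.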
Recommendations
Decision-based AI should be regarded as a current and evolving capability that will reconfigure the moral, cognitive, and institutional aspects of military power and warfighting. This transition will place new demands on how authority, judgment, and competition are organized within the Department of War. The following recommendations are proposed to support that transformation.
1. Recognize decision-space as a warfighting domain. Decision-based artificial intelligence reorganizes how power is generated and contested by structuring human judgment at speed. Formal recognition of decision-space as a distinct domain concentrates doctrine, planning, and capability development on decision quality and control of escalation rather than on platform performance alone.
2. Design command architectures for adaptive human–machine authority. Human-in-the-loop, human-on-the-loop, and human-near-the-loop arrangements function as dynamic command modalities. Operational advantage emerges when authority migrates across this continuum in response to tempo, uncertainty, and escalation risk, sustaining judgment while operating at machine speed.
3. Embed governance as system architecture. Governance integrated within system design provides continuity of control at the speed of AI-based decision-making.
4. Develop metrics for power that assess decision quality. Assessment frameworks gain strategic relevance when oriented toward decision quality, escalation controllability, and the shaping of adversary behavior. Such metrics should align optimization with strategic stability under conditions of accelerated cognition.
5. Prioritize cognition as a contested terrain. Command authority increasingly depends on understanding (and engaging) AI-mediated perception, option framing, and action acceleration. These concepts should be developed and reinforced through professional military education and ongoing training in cognitive systems and human–machine interaction.
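Recommendation 4 can be made concrete with a sketch of what a decision-quality metric might look like. The function below is a hypothetical composite score; the inputs, weights, and saturation constant are all illustrative placeholders, not a proposed DoW standard.

```python
def decision_quality_score(reversibility: float,
                           intervention_window_s: float,
                           adversary_options_foreclosed: float) -> float:
    """Hypothetical composite metric for a course of action.
    `reversibility` and `adversary_options_foreclosed` are normalized 0..1;
    `intervention_window_s` is the time (seconds) available for human
    intervention before commitment. Weights are arbitrary placeholders."""
    # Longer intervention windows score higher, saturating at 60 s.
    controllability = min(intervention_window_s / 60.0, 1.0)
    return round(0.4 * reversibility
                 + 0.4 * controllability
                 + 0.2 * adversary_options_foreclosed, 3)
```

The design point is that such a score rewards escalation controllability (reversibility plus intervention time) alongside the shaping of adversary behavior, rather than rewarding platform performance or raw speed alone.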
Decision-centric competition rewards actors capable of engaging choice at computational speed; yet the ethical and legal constructs of warfare and engagement require that decisions remain attributable to, and the responsibility of, human authority. The pace of engagement may accelerate, but the risk of force escalation, and the accountability and obligations of human agents in command and control of warfighting decisions and actions, are not lessened by such advances in the tools of battle.
Disclaimer
The views and opinions expressed in this essay are those of the authors and do not necessarily reflect those of the United States government, Department of War, or the National Defense University.
Elise Annett is a Research Fellow at the National Defense University. Her work examines the operational and ethical challenges of increasingly autonomous AI systems.