Under Secretary of War for Research and Engineering Emil Michael has emphasized that the Department of War (DoW) has historically under-deployed artificial intelligence (AI) and that the current moment demands rapid, enterprise-wide integration of AI capabilities across the DoW workforce to better support both efficiency and warfighting functions. Recent developments, such as the Department of War's 2026 Artificial Intelligence Strategy and the planned integration of commercial large language models like Grok across classified and unclassified DoW networks, reflect the priorities Under Secretary Michael has articulated and illustrate ongoing commitments to rapid AI adoption and technological primacy.
We believe that this initiative, announced by Secretary of War Pete Hegseth at SpaceX, signifies a reconfiguration of decision-making authority, informational control, and strategic agency within the conduct of war. AI is becoming a constitutive element through which operational knowledge is acquired, filtered, and acted upon. As such, AI reshapes both how force is applied and how tactical engagement and strategic judgment are structured and enacted.
AI as a Strategic Actor in the Battlespace
AI systems employed across DoW networks increasingly function as epistemic actors. They determine which data are viable and valuable, which patterns are prioritized, and how actionable options are presented to commanders. Under these conditions, decision superiority emerges less from better sensors or faster weapons, and more from control over the decision environment itself.
This reframing is reflected in the statutory policy of the National Defense Authorization Act for Fiscal Year 2026, which embeds AI and autonomous technologies within a range of defense programs and initiatives as part of DoW's broader modernization priorities.
Both the House and Senate Armed Services Committees' versions of the FY 2026 NDAA include AI-related provisions that direct the Secretary of War to integrate commercial AI capabilities, develop AI governance frameworks, and establish cross-functional teams for model management, oversight, and assessment to fortify current and future force design and deployment.
The Artificial Intelligence Strategy for the Department of War establishes AI as a central pillar of national power. In our view, this reflects an intentional convergence of defense modernization, economic competitiveness, and geopolitical influence within a unified strategic logic. Increasingly, AI is regarded as an enabling element of state capacity, by virtue of its ability to strengthen military effectiveness, stimulate industrial vitality, and extend diplomatic leverage.
Within this framework, AI leadership operates as an internal force multiplier and an external signaling mechanism. Executive guidance emphasizes accelerated integration of AI across federal agencies, adaptive procurement pathways, and sustained incorporation of commercial AI innovation into national security initiatives and missions. Internationally, the United States' prominence in AI research, development, and use serves as a reference point for interoperability, norm establishment, and technological alignment with allies, and as a source of force capability against adversaries. The strategic implication of this posture is that leadership in AI enables the U.S. to influence global standards, expectations, and operational conventions within the emerging AI ecology.
In our view, future strategic competition will be shaped less by discrete platforms and more by interconnected ecosystems of innovation. Defense, industry, finance, and data infrastructure operate within ever more integrated environments in which AI links military modernization to supply chain resilience, industrial base vitality, and long-term economic credibility and power. From this perspective, defense AI policy directly aligns with, and contributes to, national efforts to sustain technological superiority and strategic leverage.
Operational Velocity, Cognitive Integration, and Strategic Risk
In this light, we posit that two interrelated factors shape AI-enabled military operations: (1) operational velocity: AI compresses the temporal cycles of sensing, analysis, and response, reshaping the pace of tactical engagement and strategically relevant decision-making while altering how legitimacy is conferred upon military decisions and how political control is exercised over force; (2) cognitive integration: AI systems curate and filter the informational environment, shaping the set of feasible options made available to commanders. As these systems are embedded across command-and-control constructs such as Joint All-Domain Command and Control (JADC2), human judgment interacts with algorithmic prioritizations rather than unfiltered situational data.
Without doubt, these dynamics accelerate the tempo of military operations and can improve operational effectiveness. However, they also recalibrate how authority and agency are exercised, given that AI will increasingly precondition both the strategic interpretation and the operational orientation of mission execution.
Strategic Competition and Incentive Structures
Such integration of AI is occurring in an environment of intensifying strategic competition. State actors are advancing concepts of "intelligentized warfare" that emphasize iterative use of AI within and across operational domains as a defining feature of future conflict. This places pressure on U.S. forces to accelerate the adoption and deployment of AI in order to preserve operational relevance, competitive advantage, and the capability and credibility of deterrence.
But these competitive incentives also shape organizational priorities in ways that can compromise deliberative processes. The imperative to accelerate the incorporation of AI (and other emerging critical technologies) can reduce institutional tolerance for deep evaluation, debate, and recalibration of research, development, and operational goals and tempo, despite the importance of coherent doctrine, governance, and force employment.
Weighing Systemic Consequences
We believe the most consequential effects of AI will extend beyond the battlefield. AI introduces systemic dependencies centered on data validity and integrity, model robustness and reliability, and the resilience of the information infrastructures that undergird contemporary military operations. These dependencies influence operational readiness and strategic confidence across a variety of domains.
Moreover, AI is expanding the defense ecosystem's exposure to novel modes of vulnerability and attack. Adversaries' exploitation of AI algorithms, theft and corruption of data, and manipulation of AI-based models create new vectors and opportunities for producing disruptive or destructive effects without direct kinetic engagement. Such vulnerabilities elevate the informational and cognitive aspects of engagement, and thereby alter the character of conflict.
A View Toward Strategic Conditions, Readiness and Capability
In conclusion, we maintain that the progressive integration of AI within the military constitutes a reconfiguration of agency, not merely a technological evolution. Decisions about the use of force will increasingly be defined by AI mediation, AI-shaped priorities, and compressed temporal scales. We perceive the emergence of a dual architecture that on one side promotes AI at scale, and on the other remains poised, yet paused, in developing and articulating robust operational norms and accountability mechanisms with equivalent velocity. In practice, we think this may enable AI systems to shape interpretive frameworks by default, especially in situations and environments where operational tempo and competitive pressures outpace the evolution and enforceability of governance.
Thus, to bring doctrinal pacing and capability into alignment with technical development, we offer the following recommendations:
1. Codify AI-mediated decision authority within operational doctrine.
Doctrine should delineate where and how AI informs or accelerates command decisions, while preserving decisive human judgment and unity of command under accelerated tempo.
2. Align ethical responsibility with command accountability in AI-enabled operations.
Responsibility for AI-influenced decisions should remain vested in command authority; doctrine should prevent accelerated decision cycles from obscuring accountability for the use of force.
3. Integrate cognitive effects and escalation dynamics into operational planning.
Operational planning and wargaming should account for AI effects on threat perception, option salience, and decisional tempo, as these factors directly influence escalation dynamics, crisis stability, and cross-domain command and control.
4. Protect the integrity of AI-relevant data, models, and decision environments.
Operational and counter-adversary planning should address risks arising from data manipulation and algorithmic exploitation, as these directly affect command judgment, tactical engagement, and strategic decision-making.
5. Embed AI competence and ethical judgment within professional military education and leader development.
Leader development should prepare commanders to critically evaluate AI functions, understand system capabilities and limitations, and recognize when algorithmic recommendations for tactical commitment conflict with strategic intent, rules of engagement, or ethical precepts, responsibilities, and obligations.
Collectively, we believe these recommendations support the integration of AI within military operations while supplying the doctrinal clarity, strategic discipline, and ethical accountability commensurate with AI's expanding role in shaping how force is perceived, authorized, and employed.
Disclaimer
The views and opinions expressed in this essay are those of the authors and do not necessarily reflect those of the United States government, the Department of War, or the National Defense University.
Elise Annett is Institutional Research Associate at the National Defense University. She is a doctoral candidate at Georgetown University. Her work addresses operational and ethical issues of iteratively autonomous AI systems in military use.

Dr. James Giordano is Director of the Center for Disruptive Technology and Future Warfare of the Institute for National Strategic Studies at the National Defense University.