News | Nov. 17, 2025

The Agentic Database and Military Command: A Perspective on Autonomous C2 Systems

By Elise Annett and Dr. James Giordano | Strategic Insights

As recently noted by Yasmeen Ahmad in a piece appearing in InfoWorld, the shift from passive databases to “active reasoning engines” in commercial agentic AI signals a fundamental transformation in how decisions are made, authority is exercised, and accountability is maintained. To be sure, “off-the-shelf” AI systems that might be viable for military use carry an attractive promise of saved time and cost. However, as the Department of Defense positions itself to assess current and near-term technology acquisitions, any military application of such commercial AI should account for how these systems may affect operational decision-making, the nature of command, and ultimately the ethical foundations upon which military action(s) rest. Moreover, as these systems increasingly permeate space operations, cyber-psychological influence campaigns, bio-cognitive security, and autonomous logistics chains, their scope of impact widens beyond traditional battlefield considerations.

The Command Relationship as Cognitive Architecture

Iteratively autonomous AI architectures entail systems that “perceive, reason, act, and learn” with the potential for diminished human intervention in each cycle. The core notion, that databases must evolve from “passive ledgers” to “active reasoning engines,” directly parallels the transformation occurring in military command and control (C2) systems. Ahmad frames this in a positive light as “emergent intelligent behavior,” but in operational military contexts we view such emergence in autonomous systems as a critical problem. Military operations demand predictability and accountability in the chain of command. When an AI system exhibits genuinely emergent behavior (i.e., doing something not explicitly programmed or anticipated), it can exceed its command intent. We regard this as insubordination by algorithm.
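
To make the cycle concrete, consider a minimal illustrative sketch in Python, with hypothetical names of our own devising (CommandIntent, BoundedAgent), of a perceive-reason-act-learn loop in which any proposed action outside pre-authorized command intent must be escalated to a human commander rather than executed:

# Illustrative sketch only: hypothetical names, not a real C2 implementation.
from dataclasses import dataclass

@dataclass
class CommandIntent:
    """Pre-authorized bounds set by a human commander."""
    permitted_actions: frozenset = frozenset({"observe", "report"})

class BoundedAgent:
    """A perceive-reason-act-learn loop that cannot act outside command intent
    without explicit human approval."""

    def __init__(self, intent: CommandIntent):
        self.intent = intent
        self.experience = []  # retained for after-action audit

    def perceive(self, environment: dict) -> dict:
        return environment  # stand-in for sensor and data ingestion

    def reason(self, observation: dict) -> str:
        # Stand-in for model inference; may propose actions never anticipated.
        return observation.get("suggested_action", "observe")

    def act(self, proposed: str, human_approval) -> str:
        if proposed in self.intent.permitted_actions:
            return proposed              # within pre-authorized intent
        if human_approval(proposed):     # escalate to the commander
            return proposed
        return "hold"                    # default to inaction, not improvisation

    def learn(self, observation: dict, action: str) -> None:
        self.experience.append((observation, action))  # auditable record

def commander_review(proposed: str) -> bool:
    """Human gate: in a real system this would be a deliberate approval workflow."""
    return False  # conservative default for this sketch

agent = BoundedAgent(CommandIntent())
obs = agent.perceive({"suggested_action": "jam_emitter"})
action = agent.act(agent.reason(obs), commander_review)
agent.learn(obs, action)
print(action)  # prints "hold": the novel action is not executed without approval

The point of the sketch is architectural, not technical: the human gate and the auditable record are design choices, and a system optimized for autonomy can just as easily be built without them.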

The Knowledge Graph as Operational Intelligence: Promise and Peril

Ahmad rightly identifies the promise of GraphRAG (Graph Retrieval-Augmented Generation) to afford AI systems the capability to traverse complex relationships between entities in order to connect disparate intelligence sources and reveal patterns invisible to human analysts. This is compelling in that it could represent a significant leap in intelligence fusion capabilities. However, the assertion that “success hinges on…high-velocity deployment” bespeaks what we believe to be a precarious mismatch with the realities (and responsibilities) of military operations. Expedience in decision-making is undeniably beneficial in situations of exigent import. But the commercial imperative for “velocity” can all too easily usurp and supersede military requirements for verification, validation, and ethical probity. If such verification and ethical judgments are merely compressed into “high-velocity” automated workflows, it is easy to envision catastrophic ethico-legal consequences in operational settings and circumstances.
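
For readers unfamiliar with the technique, the following is a minimal, hypothetical sketch of the GraphRAG idea; the toy graph, entity names, and functions are ours for illustration and do not represent any vendor's implementation:

# Minimal illustration of GraphRAG: retrieve related entities by traversing
# a knowledge graph, then pass that context to a generator as augmented input.
from collections import deque

# Toy knowledge graph: entity -> list of (relation, neighbor)
GRAPH = {
    "Vessel-12":   [("owned_by", "Shell Co. A"), ("docked_at", "Port X")],
    "Shell Co. A": [("financed_by", "Bank B")],
    "Port X":      [("located_in", "Region Y")],
    "Bank B":      [("sanctioned_by", "Treasury List")],
}

def traverse(start: str, max_hops: int = 3) -> list:
    """Breadth-first walk collecting relationship statements up to max_hops."""
    facts, queue, seen = [], deque([(start, 0)]), {start}
    while queue:
        node, depth = queue.popleft()
        if depth == max_hops:
            continue
        for relation, neighbor in GRAPH.get(node, []):
            facts.append(f"{node} --{relation}--> {neighbor}")
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, depth + 1))
    return facts

def graph_rag_prompt(question: str, start_entity: str) -> str:
    """Augment the question with graph context before generation (the 'RAG' step)."""
    context = "\n".join(traverse(start_entity))
    return f"Context:\n{context}\n\nQuestion: {question}"

print(graph_rag_prompt("Is Vessel-12 linked to a sanctioned financier?", "Vessel-12"))

Even in this toy form, the value and the risk are both visible: the traversal surfaces a multi-hop connection a human analyst might miss, but nothing in the pipeline verifies that the underlying graph edges are accurate before they shape a recommendation.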

The Chain of Thought Problem: Explainability Under Fire

It is also of note that Ahmad proposes that databases must provide an “immutable, explainable ‘chain of thought’ for why [an agent] did” some decision process or action. This addresses a fundamental requirement for accountability. In military operations, every decision that results in the application of force must be ethically and legally defensible. Yet we opine that, while a worthy precept, Ahmad’s concept of Explainable AI (XAI), which enables a system to “trace a generated output back to its source,” is necessary but insufficient for genuine military accountability. Knowing which data points an algorithm has weighted most heavily in recommending an action does not explain the ethical judgment about whether and why that action should occur. The XAI paradigm treats explanation as a technical problem of transparency; but true military accountability requires substantive moral justification that engages principles, not just processes. Simply put, data citation is not moral reasoning. Human moral reasoning involves counterfactual thinking, which entails imagining alternative courses of action and their consequences. Furthermore, it requires a theory of mind to understand the perspectives, intentions, and suffering of others. Current AI systems lack this capability.
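
The distinction can be made concrete. The sketch below, using hypothetical field names and values of our own, contrasts what an XAI-style provenance trace typically records (which sources were weighted, and how much) with what accountable military decision-making additionally requires (a human authority, a justification, and the counterfactual alternatives that were weighed):

# Hypothetical sketch: data citation versus moral justification.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceTrace:
    """What an XAI-style 'chain of thought' typically captures."""
    recommendation: str
    weighted_sources: dict                # source id -> weight
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

@dataclass
class AccountableDecision:
    """What military accountability additionally requires, and a machine cannot supply."""
    trace: ProvenanceTrace
    human_authority: str                  # who decided
    justification: str                    # why the act is lawful and ethical
    alternatives_considered: list         # counterfactuals weighed and rejected

trace = ProvenanceTrace(
    recommendation="strike",
    weighted_sources={"SIGINT-041": 0.62, "HUMINT-7": 0.31, "OSINT-93": 0.07},
)

decision = AccountableDecision(
    trace=trace,
    human_authority="Battle Captain, 3rd Bde (illustrative)",
    justification="Target positively identified; proportionality and distinction assessed.",
    alternatives_considered=["delay and re-observe", "non-kinetic disruption"],
)

The first record can be generated and stored automatically; the second requires a human who can be named, questioned, and held responsible.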

While state-of-the-science neuromorphic AI system designs may be able to engage increasingly higher-order cognitive processing, it remains to be seen:

(1) whether such systems could (and/or will) develop some construct of a theory of mind;

(2) how such a conceptualization would guide the systems’ regard and actions toward (human and AI) others;

(3) whether such systems could genuinely internalize and apply ethical principles in the conduct of operational judgment, and do so in a consistent and reliable manner when exposed to stress, uncertainty, high-tempo engagements, or deliberate adversarial perturbation;

(4) how any cognitive (and/or moral) capacities could be effectively verified and governed to ensure that the system’s actions remain congruent with the laws, norms, and values that define and legitimate military authority; and

(5) what this (viz., points 1-4 above) might portend for military applications of AI.

Convergence Architecture and the Speed of Command

The unification of transactional, analytical, and vector processing (HTAP+V) into a single architecture enables real-time decision-making that combines current operational status, historical patterns, and semantic understanding. For military C2, this could afford situational awareness that enables insight into what is happening, what it means, and what might happen next. We see this as an important, if not necessary, cog in the gears of an operational paradigm of synthesized command and control (SYNTHComm). Yet the value of human commanders lies partly in their ability to adjust the pace of decision-making and action execution. Iteratively autonomous AI systems that are optimized for velocity can default to action-reaction cycles that escalate beyond human comprehension or control, particularly when adversaries deploy their own autonomous systems in algorithmic feedback and feedforward loops.
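
To ground what such convergence means in practice, the following is a toy sketch, with invented data and names, of a single decision path that fuses a current-status lookup (transactional), a historical trend (analytical), and a similarity search over report embeddings (vector); it is not a production HTAP+V engine:

# Toy illustration of fusing transactional, analytical, and vector queries.
import math

# "Transactional" store: current operational status.
UNIT_STATUS = {"TF-Alpha": {"fuel_pct": 43, "posture": "defensive"}}

# "Analytical" store: historical activity counts by region and week.
HISTORY = {"Region Y": [2, 3, 5, 9, 14]}     # a rising pattern

# "Vector" store: embeddings of prior incident reports (toy 3-d vectors).
REPORTS = {
    "rpt-118": ([0.9, 0.1, 0.0], "Convoy probed near Port X"),
    "rpt-204": ([0.1, 0.8, 0.2], "Routine resupply completed"),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def fused_picture(unit: str, region: str, query_vec: list) -> dict:
    """What is happening, what it means, and what might happen next."""
    status = UNIT_STATUS[unit]                          # current state
    trend = HISTORY[region][-1] - HISTORY[region][0]    # historical pattern
    best_id, (vec, text) = max(REPORTS.items(),
                               key=lambda kv: cosine(kv[1][0], query_vec))
    return {"status": status, "activity_trend": trend, "similar_report": (best_id, text)}

print(fused_picture("TF-Alpha", "Region Y", [0.85, 0.2, 0.05]))

Nothing in the fusion itself slows the loop down; the pacing of the resulting decision remains a separate design choice, which is precisely where human command authority must be preserved.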

Thinking Ahead: Human-Machine Teaming, Not Replacement

Thus, we believe that the commercial model of “autonomous agents as primary drivers” is fundamentally inappropriate for military C2. A more appropriate model is human-machine teaming, in which AI systems enhance human cognitive capabilities without displacing human judgment, attributability, accountability, or responsibility on matters of consequence. This requires architectural choices different from, and greater than, those optimized for commercial velocity. A SYNTHComm model such as we have proposed serves to preserve such human authority. In short, it will be critical to develop systems that augment human wisdom, operate within clearly defined boundaries, and preserve meaningful human control over the application of force. The stakes, as measured in human lives, in legal obligations under the law of armed conflict and rules of engagement, and in the moral character of the military profession, demand nothing less.

Disclaimer

The views and opinions presented in this essay are those of the authors and do not necessarily represent those of the United States government, Department of War, or the National Defense University.

Elise Annett is an Institutional Research Associate at the National Defense University. She is a doctoral candidate at Georgetown University. Her work addresses operational and ethical issues of iteratively autonomous AI systems in military use.

Dr. James Giordano is Director of the Center for Disruptive Technology and Future Warfare of the Institute for National Strategic Studies at the National Defense University.