News

News | Sept. 24, 2025

Beyond Mechanistic Control: Causal Decision Processing in Neuromorphic Military Artificial Intelligence

By Dr. James Giordano | Strategic Insights

The Next Step in AI: From Simple Mechanism to Causal Processing

Recently, a paper by Kevin Mitchell and Henry Potter in the European Journal of Neuroscience provided a valuable overview of the current understanding of causation in neurocognitive processing, which I believe has interesting implications for military applications of neuromorphically-based artificial intelligence (AI) systems. As we transition from traditional mechanistic AI architectures to those designed and developed to more closely mirror the complex causal dynamics of neural systems, military stakeholders (and oversight organizations) must confront new paradigms of autonomous decision-making that challenge conventional understandings of predictability, command and control, and accountability in AI.

To date, military AI systems have operated upon relatively linear mechanistic principles; in other words, input A leads to process B, which can generate output C; output C can affect process D to incur output E, and so on. This deterministic framework has proven highly valuable for specific tactical applications where clear stimulus-response patterns are sufficient. However, with the development of increasingly sophisticated, iteratively autonomous systems capable of operating in complex, dynamic environments, the limitations of purely mechanistic approaches become evident. Modern battlespaces and engagements are characterized by ambiguity, rapid contextual shifts, and the need for nuanced interpretation of incomplete information. These are all challenges that mirror, or in some cases are identical to, those engaged by biological neural networks.
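
To make this linearity concrete, consider the following minimal sketch (in Python, with stage names, values, and thresholds that are purely hypothetical illustrations, not any fielded system): every stage is a fixed function, so a given input always yields the same output.

    # A fixed, deterministic pipeline: input A -> process B -> output C,
    # and output C -> process D -> output E. (All names and numbers
    # here are hypothetical illustrations.)

    def process_b(input_a: float) -> float:
        # Fixed transformation from input A to output C.
        return input_a * 2.0

    def process_d(output_c: float) -> str:
        # Fixed decision rule from output C to output E.
        return "ENGAGE" if output_c > 1.0 else "HOLD"

    def mechanistic_pipeline(input_a: float) -> str:
        output_c = process_b(input_a)
        return process_d(output_c)

    print(mechanistic_pipeline(0.4))  # always "HOLD" for this input
    print(mechanistic_pipeline(0.8))  # always "ENGAGE" for this input

Such a pipeline is easy to audit precisely because it has no context, memory, or learned meaning; this is both its strength and, as argued below, its limitation.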

The fundamental weakness of the mechanistic paradigm of AI processing is its inability to account for contextual sensitivity and to exercise the adaptive flexibility characteristic of, and necessary for, effective military decision-making. So, while mechanistic AI may excel at pattern recognition in controlled scenarios, it can fail (and perhaps catastrophically) when confronted with novel situations that require interpretation beyond the parameters of its initially programmed processes of stimulus recognition and response. This inflexibility becomes particularly problematic when considering the potentially lethal consequences inherent to certain military applications.

Criterial Causation and Autonomous Military AI

The concept of criterial causation, as originally described by Peter Ulric Tse, offers a more sophisticated framework for understanding how neuromorphically-designed and -structured AI systems might operate in military contexts. Rather than relying upon simple stimulus-response triggering, these systems would evaluate multiple, often distinct and divergent, criteria before initiating both decisional processes and actions. In neuromorphic military AI, the equivalents of synaptic weights, thresholds, and contextual factors would determine whether incoming data (i.e., inputs) meet the criteria necessary and sufficient to evoke specific operational decisions and action (i.e., output) responses.
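
As a rough illustration of what criterial triggering might look like computationally, here is a minimal Python sketch; the criteria, weights, threshold, and contextual gain are all hypothetical placeholders invented for the example. The point is that an action is evoked only when weighted, context-modulated evidence jointly satisfies a threshold, not because any single stimulus is present.

    from dataclasses import dataclass

    @dataclass
    class Criterion:
        name: str
        weight: float  # analogue of a synaptic weight

    # Hypothetical criteria; a real system would have many more.
    CRITERIA = [
        Criterion("visual_match", 0.4),
        Criterion("signal_intercept", 0.3),
        Criterion("behavior_anomaly", 0.3),
    ]

    def criteria_met(evidence: dict, context_gain: float,
                     threshold: float = 0.6) -> bool:
        # Fire only if the weighted, context-modulated evidence crosses
        # the threshold; no single stimulus is sufficient by itself.
        score = sum(c.weight * evidence.get(c.name, 0.0) for c in CRITERIA)
        return score * context_gain >= threshold

    evidence = {"visual_match": 0.9, "signal_intercept": 0.6,
                "behavior_anomaly": 0.5}
    print(criteria_met(evidence, context_gain=1.0))  # True: permissive context
    print(criteria_met(evidence, context_gain=0.5))  # False: restrictive context

Note that the identical evidence produces different decisions under different contextual gains, which is exactly the contextual sensitivity the mechanistic paradigm lacks.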

For example, let's consider an autonomous, unmanned vehicular system tasked with identifying and engaging a target. A mechanistic system might rely upon predetermined optical signals/cues to identify target features, and associate these with recognized risks and threats. In contrast, a neuromorphic system operating on criterially causative principles would integrate multiple streams of visual data, communication intercepts, behavioral patterns, environmental contexts, and mission parameters to establish dynamic criteria that must be evaluated and satisfied before a decision is initiated and an output action is authorized. The system's relative weighting of distinct types and amounts of information would be continuously adjusted based upon accumulated experience and changing operational conditions (i.e., it would act as a dynamical Bayesian system).
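
The "dynamical Bayesian" adjustment described above can be sketched in a few lines, under the simplifying (and purely hypothetical) assumption that each information stream's weight is the posterior mean of a Beta-distributed reliability estimate, updated after each engagement outcome:

    class SourceWeight:
        # Beta(1, 1) prior over a stream's reliability.
        def __init__(self) -> None:
            self.hits = 1.0
            self.misses = 1.0

        def update(self, was_correct: bool) -> None:
            # Bayesian update from one observed engagement outcome.
            if was_correct:
                self.hits += 1.0
            else:
                self.misses += 1.0

        @property
        def weight(self) -> float:
            # Posterior mean reliability, used as the stream's weight.
            return self.hits / (self.hits + self.misses)

    sources = {"visual": SourceWeight(), "sigint": SourceWeight()}
    # Accumulated experience: visual cues misled twice; signals
    # intelligence proved accurate twice.
    sources["visual"].update(False); sources["visual"].update(False)
    sources["sigint"].update(True);  sources["sigint"].update(True)
    for name, source in sources.items():
        print(name, round(source.weight, 2))  # visual 0.25, sigint 0.75

Even in this toy form, the weights are a product of experience rather than explicit programming, which previews the oversight challenge discussed next.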

This approach offers considerable advantages in terms of operational flexibility and reduced false-positive rates and outcomes. However, it also introduces new challenges for military oversight. Traditional command and control structures tend to assume predictable, attributable decision pathways. In contrast, criterially causal systems may make tactically sound decisions through processes that can be difficult to reconstruct or predict, given the diversity, expanse, and differential weighting of the various types, levels, and amounts of data involved, thereby complicating both real-time oversight and post-action accountability.

Historical Causation and Military Neuromorphic AI Systems

The integration of historical causation into neuromorphically-based military AI represents perhaps the most transformational aspect of these emerging technologies. By incorporating historical causation, such a system would continuously modify its operational parameters based upon accumulated experience, training data, and environmental/situational contingencies and exigencies. The temporal dimension of this type of causation has important implications for military effectiveness and oversight: an AI system that learns from past engagements, adapts to adversarial countermeasures, and develops tactical approaches could provide unprecedented battlefield advantages. Yet this also presents challenges for command structures that are designed around predictably controllable assets.

Historical causation processing suggests that neuromorphically-based AI’s current decision-making capacity is inextricably linked to its developmental history, training data, previous operational experience, and environmental forces and evolutionary pressures encountered during various deployments. This creates a type of “institutional memory” within the system that may be difficult to audit, modify, or reset. Thus, it will be important to consider how these systems can be managed given their current capabilities and limitations, which are products of complex historical trajectories rather than direct results of explicit programming.
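
A toy illustration may help convey this path dependence. In the hypothetical sketch below (the update rule and all numbers are invented for the example), each engagement outcome nudges a decision threshold; because the updates do not commute, two copies of the same system that experience identical events in a different order end up with different current parameters, a simple analogue of the "institutional memory" described above.

    def evolve_threshold(history: list, initial: float = 0.5,
                         rate: float = 0.3) -> float:
        # Each outcome nudges the engagement threshold toward a regime:
        # successes loosen it (toward 0.2), failures tighten it (toward 0.9).
        threshold = initial
        for success in history:
            target = 0.2 if success else 0.9
            threshold += rate * (target - threshold)
        return threshold

    # Same three events, different order: the updates do not commute,
    # so the "same" system ends up with different current parameters.
    unit_a = evolve_threshold([True, True, False])
    unit_b = evolve_threshold([False, True, True])
    print(round(unit_a, 3), round(unit_b, 3))  # 0.513 vs 0.406

Auditing such a system requires knowing not just its current parameters but the trajectory that produced them, which is precisely what makes reset or rollback nontrivial.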

Semantic Causation and Military Neuromorphic AI

Perhaps the most significant advancement in military neuromorphic AI is semantic causation, whereby AI could derive decision-making power not merely from pattern recognition, but from the relative meaning and value such patterns obtain and portend within various operational contexts. A neuromorphically-based military AI operating through semantic causation processes would interpret incoming data through learned association, contextual significance and adaptive value. This approach directly mimics how human intelligence analysts process information. 

For instance, similar intelligence data might evoke differing responses depending upon operational context, mission parameters, and accumulated expertise. An adversary's communication intercept that appears routine and benign in one context might signal imminent risk or threat in another, based upon semantic meaning derived from historical patterns and current situational realities and awareness. In military applications, semantic causation offers potential for more subtle, contextually appropriate responses to increasingly complex circumstances and scenarios. But the meaning an AI system assigns to incoming data may not align with human interpretation, which can lead to unexpected or inappropriate reactions. This interpretive variability may challenge traditional military doctrine, which emphasizes standardized responses and predictable outcomes.
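
As a simple illustration of this point, the hypothetical Python sketch below assigns the same intercepted message a different meaning, and hence a different response, depending on the operational context in which it is received; the message types, contexts, and learned associations are all invented for the example.

    # Learned associations: (message type, operational context) -> meaning.
    LEARNED_ASSOCIATIONS = {
        ("routine_logistics", "peacetime_patrol"): "benign",
        ("routine_logistics", "pre_offensive_buildup"): "imminent_threat",
    }

    RESPONSES = {
        "benign": "log and monitor",
        "imminent_threat": "alert commander",
        "unknown": "flag for human analyst",
    }

    def interpret(message_type: str, context: str) -> str:
        # Meaning derives from context and learned history,
        # not from the signal alone.
        return LEARNED_ASSOCIATIONS.get((message_type, context), "unknown")

    same_intercept = "routine_logistics"
    for context in ("peacetime_patrol", "pre_offensive_buildup"):
        meaning = interpret(same_intercept, context)
        print(context, "->", meaning, "->", RESPONSES[meaning])

In a real system these associations would themselves be learned and continuously revised, which is why the meaning the machine assigns may drift away from the meaning a human operator would assign.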

Implications for Military Oversight and Ethics

The transition from mechanistic to neuromorphically-based AI systems that exercise causal decision-making processes necessitates reconsideration of some fundamental aspects of military oversight. Traditional approaches to AI governance assume transparent, auditable decision processes and pathways that can be identified, verified, modified, and held accountable. Systems operating through criterial, historical, and semantic causation challenge each of these assumptions and oversight requirements.

Identification and verification of inherent AI decision-making processes of data assimilation, association and valuation are somewhat more difficult given the complexity (and perhaps opacity) of criterially causal mechanisms. The dynamic nature of these systems means that verification at one point in time may not predict behavior under different conditions or after additional learning experiences. Military oversight must develop new methods for assessing system reliability and appropriateness that account for this inherent dynamism.

Modification of neuromorphic systems presents additional challenges, in that expanded causal frameworks may require more multifaceted interventions that account for the historical causation patterns and semantic meaning structures that have evolved over time and system experience. Military planners should consider what changes to a system are required, and how to implement these changes without disrupting or compromising the adaptively beneficial capabilities these systems provide.

Recommendations for Military Implementation

The integration of neuromorphically-based AI systems that utilize expanded causal frameworks into military applications will require careful consideration of opportunities, benefits, burden(s) and risks. Toward such evaluation and to facilitate potential adoption and use of such systems, I recommend a phased approach that begins with low-risk applications where system autonomy can be gradually increased as understanding and oversight capabilities mature.

• First, initial deployment of neuromorphically-based AI systems that use causal decision processing should focus upon intelligence analysis and decision-support roles, where human oversight and engagement remain robust and final authority for output actions rests with human operators.

• Second, as experience with these systems grows and oversight mechanisms evolve, gradual expansion of system autonomy in operational contexts becomes more feasible and should be modeled in particular settings and under discrete circumstances.

• Third, military institutions should solicit, develop and sustainably cultivate new expertise in neuromorphically-based AI systems and their functions, inclusive of personnel well-versed in applications of criterial, historical, and semantic causation principles.

• Fourth, military technical education, focused on mechanical and electronic systems, should be supplemented with training in complex adaptive systems’ architectures, functions, and the emergent behavior patterns they obtain, so as to fortify the complement of competent system developers, operators, and command personnel.

Conclusion

The concepts of expanded causal frameworks provide key insights for understanding and implementing neuromorphically-based AI systems that can be employed in military contexts. Without doubt, these systems offer considerable capabilities for adaptive, context-sensitive decision-making; yet they also challenge undergirding assumptions about predictability, control, and accountability that are essential to traditional military doctrine.

Therefore, I posit that successfully integrating these technologies will require knowledge, appreciation, and engagement of technical advancement, as well as modification of military thinking, training, and oversight mechanisms, so as to enable effective, efficient, and ethically sound operational use of such systems. Simply put, I believe that the stakes are far too high for anything less than a careful, systematic approach to understanding and managing these emerging capabilities. As iteratively autonomous AI systems are integrated into military contexts, I offer that lessons from the brain sciences about the complexity of causal decisional and action processes in neural systems should inform and guide prudent approaches to neuromorphically-based artificial ones.

Disclaimer

The views and opinions presented in this essay are those of the author, and do not necessarily reflect those of the Department of War, United States government, or the National Defense University.

Dr. James Giordano is Director of the Center for Disruptive Technology and Future Warfare of the Institute for National Strategic Studies at the National Defense University.

Notes

Mitchell, Kevin and Potter, Henry. European Journal of Neuroscience. https://doi.org/10.1111/ejn.70064

Tse, Peter Ulric (2013). The Neural Basis of Free Will. Cambridge, MA: MIT Press.