Research into the possible utility of employing marine mammals in military support operations is nothing new. During the Cold War, the United States (U.S.) and Soviet Union employed dolphins and sea lions for detection, retrieval, and harbor defense. Those programs operated within defined operational parameters and structured oversight. However, recent reports alleging that Russia is employing advanced neurotechnologies to modulate and direct the behavior of orcas for military purposes, if validated, represent an ethical inflection point.
As with prior uses of marine mammals in military contexts, the tactical logic and strategic possibilities are clear: orcas (Orcinus orca) are intelligent, socially sophisticated, highly trainable creatures with exceptional sonar capabilities, endurance, and adaptability to harsh maritime environments. However, the use of invasive or remotely actuated neurotechnologies to manipulate or control such animals' behavior transcends traditional training paradigms. It raises ethical and legal concerns about the instrumentalization of sentient cognition through technological coercion, about why such models of neurotechnological intervention are being developed, and about the limits of "brain control" in military settings.
The shift from training to direct neurotechnological control, whether via implantable electrode arrays, pharmacological modulation, or remote stimulation systems, is a morally salient and sentinel development. It moves the relationship from one of guided partnership (however asymmetrical) to one of engineered behavioral domination. This gives rise to three major ethico-legal and operational issues:
First, there is the question of sentience and moral consideration. Orcas possess higher cognitive capabilities, long-term memory, evidence of affective sophistication, culture-like transmission of behaviors, and complex social structures. The deliberate intrusion into such a creature's neural substrates to override volition raises concerns analogous to those raised by coercive human neuromodulation. This moral gravity does not derive from anthropomorphic projection, but from credible evidence that these animals experience stress, suffering, and social deprivation in ways that are ethically relevant. To technologically suppress or redirect these capacities for lethal ends risks crossing a line from utilization to violation. This prompts the question of whether such forms of "brain control" are being considered for human use, and harkens back to the consternation over experiments such as those conducted under Project MK-ULTRA.
Second, there is the problem of dual-use neurotechnology. The same techniques that might enable therapeutic neuromodulation in humans (e.g., closed-loop neural stimulation, brain-computer interfacing [BCI], or optogenetic manipulation) could become tools of behavioral weaponization in non-human species, and such experiments and operational trials in animals may serve as templates for use in humans. This underscores a broader neuroethical challenge: technological capabilities developed for therapeutic or augmentation intent can be repurposed to compel, constrain, or coerce biological agents, whether animal or human. Such translational drift erodes normative guardrails unless proactively governed.
Third, there are legal ramifications and strategic implications. While the Law of Armed Conflict (LOAC) and Additional Protocol I to the Geneva Conventions emphasize principles of necessity, proportionality, and avoidance of unnecessary suffering, their language is overwhelmingly humanocentric. Certainly, there have been and are concerns about the welfare and treatment of animals used in military operations, and such concerns are relevant here. But the use of neurotechnologically controlled sentient animals falls into a grey zone. It may not explicitly violate current treaty frameworks, yet it strains the Martens Clause's appeal to "principles of humanity" and "dictates of public conscience." Moreover, should such practices proliferate, they could catalyze an escalation in developing and deploying bio-integrated systems, normalizing increasingly invasive manipulations of living organisms as tactical assets.
From a neuroethical perspective, the core concerns are not simply that animals are being used in military operations, as there is clear precedent for such use, but that their neural integrity is being technologically overridden in ways that manipulate agency and can induce suffering without recourse, and that such experiments may provide models for human translation. In light of these concerns, we offer the following recommendations:
1. Establish an international normative dialogue on neurobiological weaponization. The U.S., in concert with international allies and like-minded partners, should initiate formal discussions within appropriate venues (e.g., the Convention on Certain Conventional Weapons) to assess whether and how neurotechnological manipulation of sentient animals constitutes a new category of concern. Even absent a binding treaty, articulating normative expectations can shape state behavior and stigmatize excesses.
2. Develop a Department of War policy on neurotechnology use in animals. The DoW should advance explicit guidelines to govern research and potential operational use of neurotechnologies in non-human organisms. Such policy should define permissible applications and specific boundaries, establish requirements for veterinary oversight, and prohibit applications that induce complete behavioral coercion or prolonged suffering beyond established welfare standards.
3. Expand ethical review frameworks to include sentient non-human organisms in security contexts. Institutional Animal Care and Use Committees (IACUCs) and analogous organizational bodies must be updated to address the unique risks posed by the use of invasive or remote neuromodulation technologies. Ethical review should explicitly consider cognitive complexity, social deprivation, and potential long-term risks, including harm to the organisms themselves or to humans.
4. Invest in non-biological alternatives. Where operational objectives can be achieved through unmanned vehicles, AI-enabled platforms, or other robotic systems, these should be prioritized. The development of such systems offers a technologically viable alternative that avoids the ethical burdens of neural coercion.
5. Integrate strategic communication and relevant transparency. Given the reputational stakes, the U.S. should articulate a principled stance on the neurotechnological manipulation of sentient animals. Transparency, within operational security constraints, can reinforce normative leadership and preempt competitive or adversarial narratives.
In sum, if reports are credible, the use of advanced neurotechnology to modulate or control orcas for military purposes represents a test case for how the convergence of neuroscience, BCIs, and warfare is guided and governed. The measure of ethical probity and strategic maturity lies not in whether such things can be done, but in whether we choose to do them, with what considerations, and under what conditions and constraints.
Disclaimer
The views and opinions expressed in this essay are those of the authors and do not necessarily reflect those of the United States Government, the Department of War, or the National Defense University.
Dr. Elise Annett is a Research Fellow in the Program for Disruptive Technology and Future Warfare of the Institute for National Strategic Studies at the National Defense University. Her ongoing work addresses emerging operational issues arising from the use of iteratively autonomous generative and agentic artificial intelligence and quantum systems in military applications.

Dr. James Giordano is Head of the Center for Strategic Deterrence and Study of Weapons of Mass Destruction, and Program Lead for Disruptive Technology and Future Warfare of the Institute for National Strategic Studies at the National Defense University.