PUBLICATIONS

Through its publications, INSS aims to provide expert insights, cutting-edge research, and innovative solutions that contribute to shaping the national security discourse and preparing the next generation of leaders in the field.



News | July 1, 2025

The Orb’s Eye: Seeing the National Security Implications of Iris-Based ‘Proof of Humanity’

By Elise Annett, James Keagle, and James Giordano
Strategic Insights

Breathe deep the gathering gloom, watch lights fade from every room…
…Cold-hearted orb that rules the night, removes the colors from our sight.
Red is grey and yellow white, but we decide which is right…and which is an illusion.

Graeme Edge, “Late Lament” from The Moody Blues’ “Nights in White Satin” (1967)


The Sight-Picture

As recently reported in the cover story of Time magazine, the launch of The Orb—a beach‑ball‑sized biometric device developed by Tools for Humanity (co‑founded by Sam Altman)—marks a paradigmatic shift in digital-identity and biosecurity technology. The Orb scans a user’s irises to generate a 12,800‑digit “iris code,” verifies that the user is a unique human, anchors that verification to a digital credential (World ID), and issues a cryptocurrency token (Worldcoin) as a reward. Presented as an infrastructural element for an AI‑reliant future, the Orb platform is balanced on the fine line between a security-enabling technology and one that introduces critical vulnerabilities, particularly in the hands of adversarial actors.
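
The exact enrollment pipeline is proprietary, but the general pattern described in the Time report can be made concrete. Below is a minimal Python sketch under stated assumptions: derive_code stands in for the proprietary feature extraction that reduces an iris scan to a fixed-length code, and IDRegistry for the deduplication registry; none of these names reflect Tools for Humanity’s actual implementation.

```python
import hashlib
import secrets

class IDRegistry:
    """Toy registry mapping derived iris codes to issued credentials."""

    def __init__(self):
        self._codes: dict[str, str] = {}

    def derive_code(self, iris_template: bytes) -> str:
        # Stand-in for the proprietary feature extraction that reduces
        # a raw iris scan to a fixed-length numeric code.
        return hashlib.sha256(iris_template).hexdigest()

    def enroll(self, iris_template: bytes) -> str:
        """Issue a credential only if this iris has not been seen before."""
        code = self.derive_code(iris_template)
        if code in self._codes:
            raise ValueError("duplicate enrollment: human already verified")
        credential = secrets.token_hex(16)  # stand-in for a World ID
        self._codes[code] = credential
        return credential

registry = IDRegistry()
world_id = registry.enroll(b"example-iris-template")
print(world_id)
```

The essential design point is that uniqueness is enforced on the derived code rather than the raw scan, which is also why compromise of a code registry is so consequential, as discussed below.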

Envisioned Benefit(s)

In an era of increasingly indistinguishable AI-driven online content, the Orb addresses a legitimate concern: differentiating genuine humans from AI agents, and thereby safeguarding digital discourse. The national security implications are substantial. Identity infrastructure serves dual-use purposes: enabling civil coordination and population management while simultaneously offering military-grade capability to control access, influence, and response. In conflict or crisis, biometric systems can enforce economic lockdowns, implement digital geofencing, or regulate access to critical infrastructure without deploying troops. In this context, the battlespace contracts from geographic terrain to biometric nodes and digital terminals.

Moreover, the Orb’s integration with decentralized finance introduces a consequential transformation. Digital identity fused with programmable currency enables precise behavioral modulation. Economic incentives, sanctions, and permissions are algorithmically assigned based on biometric verification, location, or compliance. This creates a dynamic, modular form of social governance, self-updating and self-enforcing according to algorithmic priorities. Strategically, identity infrastructure becomes a tool of operational conditioning, reflecting the logic of preemptive governance: anticipate, verify, control. In an environment populated by synthetic agents, AI-generated disinformation, and non-human actors, proof of personhood becomes as fundamental as national borders. The Orb meets this demand by centralizing and codifying the definition of the human, integrating it within systems of control.
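To make “algorithmically assigned permissions” concrete, here is a deliberately simplified, hypothetical Python sketch of a policy engine in which payment authorization follows from verification status, location, and compliance attributes. Every name, rule, and threshold is illustrative rather than a description of any deployed system.

```python
from dataclasses import dataclass

@dataclass
class Wallet:
    verified_human: bool   # biometric verification status
    region: str            # coarse location attribute
    compliant: bool        # e.g., a sanctions-screening flag

def authorize_payment(sender: Wallet, amount: float,
                      sanctioned_regions: set[str],
                      unverified_limit: float = 100.0) -> bool:
    """Toy policy engine: permissions follow from identity attributes."""
    if sender.region in sanctioned_regions:
        return False                       # geofenced economic sanction
    if not sender.compliant:
        return False                       # compliance-conditioned access
    if not sender.verified_human:
        return amount <= unverified_limit  # throttle unverified actors
    return True

print(authorize_payment(Wallet(True, "A", True), 5000.0, {"B"}))  # True
print(authorize_payment(Wallet(True, "B", True), 10.0, {"B"}))    # False
```

The strategic point is that such rules are modular and self-enforcing: changing a single policy parameter instantly changes the economic permissions of every enrolled identity.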


A View of Risk and Threats

Yet centralizing persons’ unique biometric data in a globally accessible system introduces a number of risks, including:

Surveillance and Coercion: Authoritarian regimes, or any entity controlling Orb deployment, would gain a global framework for tracking and profiling individuals across borders.

Target for Adversaries: State and non‑state actors could attempt infiltration, either by compromising Orb hardware installations or via covert acquisition of World IDs (and linked biodata) from unwitting users.

Biometric Sovereignty: Iris patterns, unlike passwords, are permanent. Compromise of even hashed or derivative biometric templates on a ubiquitous network vastly increases the risk of identity theft and biodata exfiltration, as noted above; a revocable template design (sketched below) can mitigate, though not eliminate, this permanence problem.
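
One widely studied mitigation for the permanence problem is “cancelable biometrics”: the stored value is a keyed transform of the template, so a breach can be answered by revoking the key and re-enrolling, even though the underlying iris cannot be changed. A minimal Python sketch of the idea, with all names illustrative and no claim that the Orb works this way:

```python
import hashlib
import hmac
import secrets

def cancelable_template(iris_template: bytes, revocation_key: bytes) -> str:
    """Derive a revocable code: a keyed transform of the permanent biometric."""
    return hmac.new(revocation_key, iris_template, hashlib.sha256).hexdigest()

iris = b"example-iris-template"   # the biometric itself never changes
key_v1 = secrets.token_bytes(32)
stored_v1 = cancelable_template(iris, key_v1)

# After a breach: discard key_v1, issue a new key, re-enroll the same iris.
key_v2 = secrets.token_bytes(32)
stored_v2 = cancelable_template(iris, key_v2)

assert stored_v1 != stored_v2     # the old, leaked code is now useless
```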

These vulnerabilities have disruptive potential for democratic and civic institutions. The nefarious use of Orb technology and the data it acquires can undermine trust. Falsified World IDs could enable deepfake election campaigns, false testimony, or “bot armies” masquerading as humans that pervade digital public spheres and forums, disseminating persuasive disinformation with precision and credibility and enabling targeted cognitive influence over select individuals and/or groups. Beyond the risks of surveillance and behavioral profiling, compromised digital identities pose a deeper operational concern. They can be leveraged to deny or manipulate access—selectively excluding individuals or groups from platforms of communication, expression, and collective engagement. This engineered exclusion suppresses political agency, disrupts civic cohesion, and constrains participation in consensus-building processes. In effect, such systems become instruments of influence, enabling control over the flow of ideas and the architecture of sociopolitical discourse.

Furthermore, Tools for Humanity proposes that users could delegate their World ID to AI agents. A hostile actor controlling such agents could conduct clandestine operations ranging from cyber‑espionage to AI‑mediated influence campaigns. To this latter point, integration of World IDs into financial systems—such as the Visa partnership noted by TechRadar—could enable large‑scale money laundering or market manipulation. Influence and disruption could also be exercised by controlling access to critical infrastructure and services: if World ID is used as a gatekeeping token for services, it creates opportunities for system infiltration and manipulation. For example, ID forgery and/or denial of individuals’ or groups’ access could hobble the financial, healthcare, energy, and transportation sectors.

Even if the Orb retains only “derivative codes,” certification of human identity remains permanent: as the Time article rightly notes, “…once you look into the Orb, a piece of your identity remains in the system forever.” We view such permanence as a double‑edged blade: while it may be effective for preventing fraud, it perdurably binds individual identity to the system, limiting cancellation or revocation; thus, should the system be corrupted, so too is the security of its constituent users. This liability is exacerbated by Tools for Humanity retaining control of these data, which creates a singular global point of failure—whether by external compromise, internal subversion, or outright hostility. In sum, we believe that Orb technology and methods have considerable dual-use potential for enhancing covert operations and cognitive warfare.

Indeed, adversaries could deploy Orb-like systems surreptitiously to capture biodata from key targeted individuals (e.g., military and intelligence personnel, diplomats, and others), raising the risk of (1) corrupting individuals’ medical records with false information, or using these data to develop “precision pathogens,” either of which could affect targets’ health and/or capabilities; and/or (2) creating “digital Doppelgängers” with forged World IDs to disrupt operational command and control, influence policy discourse, manipulate elections, mask cyber‑attacks, and weaken allied information confidence.


Seeking Balance

To maximize benefit and address these emergent risks, we propose the following recommendations:

1: Develop Biometric Data Governance and Protection Strategies

These approaches should establish strict standards for biometric data storage, access, and revocability, following frameworks such as US National Security Directive 42 (NSD‑42), and should enforce zero retention except for live verification, with cryptographic proofs of deletion (a minimal sketch of a deletion-attestation pattern follows).
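
A caveat: genuinely proving erasure is an open problem and in general requires trusted hardware or audited infrastructure. The practical pattern is a signed deletion attestation bound to a hash commitment of the destroyed record. Below is a stdlib-only Python sketch of that pattern, with HMAC standing in for a hardware-backed signature and all names hypothetical:

```python
import hashlib
import hmac
import json
import time

OPERATOR_KEY = b"stand-in for an HSM-held signing key"

def deletion_receipt(record: bytes) -> dict:
    """Signed attestation that a specific record was destroyed.

    A hash commitment plus signature proves *what* was deleted and *who*
    attested to it; it does not, by itself, prove the bytes are gone.
    """
    body = {
        "record_commitment": hashlib.sha256(record).hexdigest(),
        "deleted_at": time.time(),
    }
    payload = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(OPERATOR_KEY, payload,
                                 hashlib.sha256).hexdigest()
    return body

print(deletion_receipt(b"raw-iris-scan-bytes"))
```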

2: Initiate Threat Red Teaming and Supply Chain Oversight

Establish mandatory periodic adversarial testing of Orb infrastructure and deployment points, including hardware/firmware security evaluations; based upon the findings of such exercises, develop requirements for security vetting, ongoing monitoring, and secure oversight of Orb locations.

3: Advocate Decentralized Architecture

Promote multi‑party governed Orb networks with federated oversight by national or intergovernmental bodies, with requirements for both open-source audits of all AI systems and continuous real-time assurances of compliance.

4: Regulate Delegated AI Agent Use

Delegation authorizations should be restricted to pre‑approved, auditable agents with assigned risk provenance, and all AI actions using World IDs should carry fail-safe mechanisms that default to human control and support anomaly detection (see the sketch below).
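
A minimal Python sketch of what such a delegation grant could look like, assuming (hypothetically) scoped actions, expiry, per-attempt audit logging, and an instant human kill switch; none of this reflects Tools for Humanity’s actual delegation design:

```python
import time
from dataclasses import dataclass, field

@dataclass
class DelegationGrant:
    """Scoped, expiring, auditable grant of a World-ID-like credential."""
    agent_id: str
    allowed_actions: frozenset[str]
    expires_at: float
    revoked: bool = False
    audit_log: list[tuple[float, str, bool]] = field(default_factory=list)

    def authorize(self, action: str) -> bool:
        ok = (not self.revoked
              and time.time() < self.expires_at
              and action in self.allowed_actions)
        self.audit_log.append((time.time(), action, ok))  # log every attempt
        return ok

    def revoke(self) -> None:
        """Fail-safe: the human principal can kill delegation instantly."""
        self.revoked = True

grant = DelegationGrant("agent-7", frozenset({"post_message"}),
                        expires_at=time.time() + 3600)
assert grant.authorize("post_message")
assert not grant.authorize("transfer_funds")  # outside delegated scope
grant.revoke()
assert not grant.authorize("post_message")    # control reverts to the human
```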

5: Develop International Norms and Treaties for Biometric and AI Systems

Diplomatic efforts via NATO, the ICRC, and the UN should be fortified to define norms of behavior for global biometric‑AI systems, formalizing obligations for multilateral inspection and enforcement to prevent misuse by state and non-state actors.


Looking Ahead…

The Orb system can be viewed as both sentinel and architect of the future digital domain. Its capacity to verify humanity in an increasingly AI-engaged world serves critical ideals of authenticity and trust. Yet the system is also vulnerable to appropriation, misuse, and malign manipulation. From national security and defense perspectives, the stakes extend beyond privacy or personal autonomy to the integrity of democratic institutions, operational resilience, and the global strategic balance of power.

While we cannot halt technological progress—and face no easy path forward—proactive, principled national and allied strategies can define how tools like the Orb are shaped, constrained, and governed. If harnessed responsibly, Orb-like infrastructures could strengthen national security postures and preparedness. If left unchecked, they may become vectors of corruption, doubt, and disruption, capable of tipping the balance of influence, and of power, on the international stage of the current and near-future information age.


Disclaimer

The views and opinions expressed in this essay are those of the authors, and do not necessarily reflect those of the United States government, Department of Defense, or the National Defense University.


Elise Annett is the Institutional Research, Assessment, and Accreditation Associate at the Eisenhower School for National Security and Resource Strategy, and is a doctoral candidate at Georgetown University. Her work addresses operational and ethical issues of increasingly autonomous AI systems in military use.


Dr. James Keagle is currently a professor at the National Defense University, specializing in national security strategy, policy, and artificial intelligence, and previously served as the Provost of NDU. He has over 50 years of distinguished service in government. 


Dr. James Giordano is the Director of the Center for Disruptive Technology and Future Warfare of the Institute for National Strategic Studies at the National Defense University.
Contact: james.j.giordano.civ@ndu.edu