In an unusually swift move for the United Nations, the General Assembly’s First Committee adopted a resolution on November 6, 2025, addressing the potential risks of integrating artificial intelligence (AI) into nuclear weapons systems, particularly nuclear command, control, and communications (NC3). Spearheaded by Austria, El Salvador, Kazakhstan, Kiribati, Malta, and Mexico, the resolution brought a technical issue long discussed in expert circles into a formal diplomatic setting. With 115 states voting in favor, 8 against, and 44 abstaining, the vote revealed a stark divide: nuclear-armed states and many of their allies largely opposed or abstained, while the Global South and most non-nuclear-weapon states strongly supported it. This split underscores how early and unsettled global thinking on AI in nuclear systems remains, and positions the resolution as an initial attempt to translate technical debates into diplomatic language.
The resolution represents a significant step beyond the baseline principle of “keeping humans in control” of nuclear weapons decisions. It emphasizes how AI could inadvertently escalate crises by reducing human oversight, compressing decision timelines, and introducing risks of misperception, miscalculation, or cognitive bias, even when humans retain formal decision-making authority. By situating these concerns within the broader NC3 architecture, the resolution encourages states to consider not only the preservation of human control but also the wider systemic risks of AI integration. It also signals how states understand the AI–nuclear nexus and what they are willing to formalize, laying groundwork for future discussions in forums such as the 2026 Review Conference of the Treaty on the Non-Proliferation of Nuclear Weapons (NPT).
Voting patterns highlighted enduring divides in nuclear politics. Non-nuclear-weapon states and disarmament-focused governments viewed AI integration as a new layer of risk in an already fragile system and advocated stricter guardrails. In contrast, nuclear-armed states and many of their allies saw AI as a potential operational and strategic advantage, emphasizing benefits such as early warning, situational awareness, and resilient command and control, and often regarded regulatory measures as premature or constraining. Some opposition also stemmed from procedural concerns, including the lack of agreed definitions for terms like “artificial intelligence” and “meaningful human control,” as well as disagreement over the appropriate forum for such discussions. Moreover, decades of partial automation in NC3 systems may have made it politically or technically difficult for some states to endorse language asserting complete human control.
Looking ahead, the resolution offers lessons for shaping the next phase of debate. Broader support will require balancing discussions of risks with recognition of AI’s potential benefits, particularly in ways that appeal to both disarmament-oriented and security-focused states. Opportunities exist for low-stakes AI applications that reduce nuclear risks, such as improving data consistency in monitoring exercises, analyzing past crises, and flagging unusual missile or exercise patterns. While the NPT provides a forum to explore AI implications across non-proliferation, disarmament, and peaceful uses, operational details of AI integration in NC3 will need to be addressed directly among nuclear-weapon states, potentially through formats like the P5 discussions or trilateral dialogues among the United States, the United Kingdom, and France, with China engaged where possible.
A third critical lesson concerns the concept of “human in the loop.” While most states endorse human control in principle, interpretations vary widely. Future negotiations will need to clarify acceptable reliance on AI assessments, required levels of transparency and testing, and safeguards to ensure that human authority is meaningful rather than symbolic. By moving toward concrete, context-specific discussions, states can better align on operational realities and risk-management strategies. The First Committee resolution, though not exhaustive, sets a foundation for inclusive dialogue, recognizing AI-related escalation risks and opening space for nuclear-weapon states and non-nuclear-weapon states alike to shape norms and policies around the responsible use of AI in nuclear command, control, and communications.