In August 2025, when the United Nations General Assembly adopted by consensus a landmark resolution establishing the first truly global AI governance mechanisms—an Independent International Scientific Panel on AI and a Global Dialogue on AI Governance—the moment represented something political scientist Valerie M. Hudson could only speculate about in a book first published in 1991. Yet reading her edited volume Artificial Intelligence and International Politics today feels uncannily prophetic.
Hudson’s collection arrived at a peculiar moment. The Cold War had just ended, personal computers were still rarities, and artificial intelligence meant symbolic logic systems and expert databases—not the generative algorithms now shaping diplomatic cables and negotiation strategies. The scholars Hudson assembled were asking a question that seemed almost quaint: Could machines help us understand how states behave?

Three decades later, we’re asking something far more unsettling: Are machines beginning to determine how states behave?
Architecture of Political Thought
Hudson’s volume opened with Philip Schrodt’s foundational essay, “Artificial Intelligence and International Relations: An Overview,” which framed foreign policymakers as information processors operating under severe constraints. Decision-makers, Schrodt argued, function like algorithms themselves—taking inputs, applying rules, generating outputs. The question was whether computers could replicate or illuminate this process.
Helen Purkitt extended this framework in her essay on “Intuitive Foreign Policy Decision-Makers Viewed as Limited Information Processors.” Long before the data deluge of the 21st century, Purkitt warned that overwhelmed decision-makers rely on simplified mental models—what Herbert Simon called “bounded rationality.” Replace those policymakers with contemporary AI systems, and the warning holds: the problem isn’t access to information, but the capacity to process it meaningfully.
The essays that followed represented the state of the art in early computational international relations: rule-based systems that attempted to model diplomatic behavior, case-based reasoning that drew lessons from historical precedent, and natural language processing systems designed to parse political discourse. These were symbolic AI systems—logic chains and structured databases—attempting to capture the grammar of international politics.
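To make "logic chains and structured databases" concrete, here is a minimal sketch of the kind of rule-based expert system those early researchers built. It is my illustration, not an example from the book: the rule conditions, event labels, and recommended actions are all invented, but the if-then structure is the point—the system's "knowledge" of diplomacy is a hand-written list of explicit rules.

```python
# Toy symbolic-AI sketch (invented example, not from Hudson's volume):
# foreign-policy "expertise" encoded as explicit if-then rules that map
# a coded situation to a recommended action, first match wins.
RULES = [
    # (condition on the coded situation, recommended action)
    (lambda s: s["threat_level"] == "high" and s["ally_support"], "deter"),
    (lambda s: s["threat_level"] == "high" and not s["ally_support"], "negotiate"),
    (lambda s: s["threat_level"] == "low", "engage"),
]

def recommend(situation: dict) -> str:
    """Fire the first rule whose condition matches, expert-system style."""
    for condition, action in RULES:
        if condition(situation):
            return action
    return "monitor"  # default when no rule fires

print(recommend({"threat_level": "high", "ally_support": False}))  # negotiate
```

Unlike a modern neural model, every recommendation here can be traced to a single legible rule—which is exactly the transparency those early systems promised and today's systems have lost.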
What’s striking is how their ambitions mirror our own. Those early researchers wanted AI to make foreign policy more rational, transparent, and predictable. Instead, we’ve built systems that make diplomacy faster, more opaque, and increasingly algorithmic.
From Theory to Practice: The Algorithmic State
The transformation Hudson’s contributors anticipated is now reality, though by means they never imagined. For instance, China’s Ministry of Foreign Affairs unveiled an AI system designed to evaluate foreign investment projects worth hundreds of billions of dollars under the Belt and Road Initiative—analyzing political, economic, and environmental risks at scale. Israeli diplomat Elad Ratson has pioneered what he calls “algorithmic diplomacy”—using machine learning to shape how national narratives flow through digital spaces.
The U.S. State Department released its first Enterprise Artificial Intelligence Strategy in 2024, establishing a framework for how AI will “advance United States diplomacy and shape the future of statecraft.” The document acknowledges what Hudson’s volume could only theorize: AI is no longer just analyzing diplomacy; it’s becoming embedded in its practice.
At the Center for Strategic and International Studies, researchers developed an interactive program called “Strategic Headwinds” to help simulate negotiations to end the war in Ukraine. The system was trained on hundreds of historical peace treaties and open-source reporting on each side’s positions. The goal: identify pathways to agreement that human negotiators might overlook.
This is precisely the kind of application Philip Schrodt envisioned in his essay on “Pattern Recognition of International Event Sequences”—using computational methods to detect structures in diplomatic behavior and project them forward. What Schrodt couldn’t foresee was how opaque these systems would become. As CSIS researcher Benjamin Jensen notes, modern AI diplomacy faces a fundamental “black box problem”—it’s often impossible to know why a model recommends a particular course of action.
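The basic idea behind event-sequence pattern recognition can be sketched in a few lines. This is my simplified illustration, not Schrodt's actual method: the event codes and historical "precedents" are invented, and a real system would use far richer event data, but the core move—scoring an observed run of coded diplomatic events against historical patterns—is the same.

```python
# Toy sketch of event-sequence pattern matching (invented codes and
# precedents, loosely inspired by Schrodt-style event-data research):
# compare observed diplomatic events to historical patterns and flag
# the closest precedent.
from difflib import SequenceMatcher

PRECEDENTS = {
    "escalation": ["protest", "sanction", "mobilize", "clash"],
    "de-escalation": ["protest", "talks", "agreement"],
}

def closest_precedent(events: list[str]) -> str:
    """Return the historical pattern most similar to the observed events."""
    score = lambda pattern: SequenceMatcher(None, events, pattern).ratio()
    return max(PRECEDENTS, key=lambda name: score(PRECEDENTS[name]))

print(closest_precedent(["protest", "sanction", "mobilize"]))  # escalation
```

Even this toy version shows why the approach appealed to forecasters—and why its modern descendants are harder to audit: here the similarity score is inspectable, whereas a deep model's judgment is not.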
Language Wars
Perhaps the most prophetic essays in Hudson’s collection were those focused on political language. John C. Mallery’s “Semantic Content Analysis” and Gavan Duffy’s work on representing historical time attempted to map political discourse into machine-readable formats. They saw language as data to be structured.
Today’s AI sees language as something to generate.
Recent research demonstrates how generative AI is being deployed across diplomatic functions—from drafting consular communications to analyzing crisis situations in real time. The shift from analytical to generative systems represents a fundamental change: AI no longer just interprets what states say; it increasingly crafts what they might say next.
This has profound implications. The same tools that can translate diplomatic documents across languages or identify emerging crises from news reporting can also manufacture disinformation campaigns. China’s AI-driven influence operations across the Global South and automated narrative-shaping systems represent the darker evolution of technologies that once promised only insight.
Governance Paradox
Hudson’s contributors believed AI could make international politics more rational and predictable. The opposite has materialized: AI has made global affairs faster, more complex, and harder to govern.
The UN’s August 2025 resolution establishing global AI governance mechanisms represents an attempt to catch up. As Chatham House analysts note, these new bodies face formidable challenges: inadequate funding, intensifying US-China rivalry, and no enforcement mechanisms. Moreover, recent research shows that 118 countries weren’t parties to any significant international AI governance initiatives as of 2024—only seven developed nations participated in all of them.
This creates what scholars call a “weak regime complex”—fragmented governance structures that leave significant gaps and enable competitive dynamics to override cooperative ones. The EU’s AI Act attempts to impose human rights-centered standards, while China promotes state-centric sovereignty principles. The United States oscillates between regulatory frameworks emphasizing democratic values and deregulatory approaches prioritizing competitiveness.
Hudson and her colleagues hoped AI would help states understand cooperation. Instead, AI has become a new arena for great power competition.
What the Book Got Right—And Wrong
Reading Artificial Intelligence and International Politics in 2025 reveals both prescience and blind spots. The contributors were remarkably accurate about three things:
First, they understood that AI would fundamentally reshape how foreign policy decisions are made—moving from intuitive judgment toward data-driven analysis. This is now reality, from diplomatic cable analysis to algorithmic trade negotiation support.
Second, they recognized that representing political knowledge computationally would raise profound methodological challenges. The “black box problem” they worried about with early expert systems has only intensified with deep learning.
Third, they anticipated that AI would create new forms of inequality—between states with computational capacity and those without. This digital divide now shapes global governance negotiations.
But they also got key dynamics wrong. They assumed AI would make politics more transparent; instead, algorithmic decision-making often obscures accountability. They expected AI to enhance rational deliberation; instead, it has accelerated information overload and enabled sophisticated manipulation. They hoped AI might reduce cognitive biases; instead, it has embedded new ones through biased training data and Western-centric datasets.
Most fundamentally, Hudson’s contributors saw AI as a research tool for understanding international politics. We now confront AI as a participant in international politics—shaping outcomes, not just analyzing them.
Enduring Question
The core question Hudson’s volume posed remains unanswered: Can computational methods illuminate the logic of interstate cooperation and conflict? The technologies have evolved beyond recognition—from symbolic AI to neural networks, from expert systems to large language models. But the fundamental challenge persists.
In 1991, AI promised to make the rational actor assumptions of international relations theory more rigorous. In 2025, AI threatens to replace human reasoning altogether—or at least to make the boundary between human and machine judgment increasingly porous.
As researchers now warn, the uneven adoption of AI across foreign ministries could deepen global power asymmetries. Nations with advanced AI capabilities may come to dominate diplomatic negotiations, not through better arguments but through superior information processing and faster decision cycles.
Hudson’s collection reminds us that every generation believes technology will rationalize politics. Every generation confronts, instead, new forms of complexity. The scholars who once tried to teach machines to think politically could not have imagined a world where machines might teach politics how to think—or where that teaching might serve some states far better than others.
The task ahead isn’t to decide whether AI belongs in international relations—it already does. The question is whether we can build governance structures that keep algorithmic statecraft accountable to human values and diplomatic wisdom.
Thirty-four years after Hudson’s volume, we’re still searching for that answer.

