The next major international crisis may not begin with troops massing at borders or warships crossing into disputed waters. It might start with an algorithm detecting an anomaly, triggering an automated response, and setting off a cascade of machine-speed reactions that outpace human comprehension. As explored in Artificial Intelligence and International Conflict in Cyberspace (Routledge, 2023), edited by Fabio Cristiano, Dennis Broeders, François Delerue, Frédérick Douzet, and Aude Géry, AI doesn’t simply make cyber operations faster; it rewrites the logic of conflict itself.
Artificial intelligence is transforming not just how wars are fought, but how they begin, escalate, and potentially end. The integration of AI into military cyber capabilities collapses traditional distinctions—between offense and defense, between war and peace, between human and machine decision-making. This transformation demands urgent attention from policymakers, diplomats, and citizens alike, as our existing frameworks for managing international conflict prove increasingly inadequate for an algorithm-driven world.
Speed of War: How Machine Learning Changes Command Cycles and Crisis Management
The history of military technology is largely a history of acceleration. From cavalry to telegraph to nuclear missiles, each innovation compressed the time between decision and consequence. But AI represents something qualitatively different: the potential for conflict to unfold faster than human cognition can follow.
Consider a scenario where Nation A’s AI-powered cyber defense system detects what it interprets as preparation for a major attack from Nation B. In milliseconds, it could launch preemptive countermeasures, which Nation B’s AI systems interpret as an unprovoked assault, triggering their own automated responses. Within seconds, both nations could find themselves in an escalating cyber conflict that neither intended, with human decision-makers struggling to understand what’s happening, let alone stop it.
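To make this dynamic concrete, here is a deliberately crude toy model (my own illustration, not drawn from the book) of two automated response policies locked in a feedback loop. The `amplification` parameter and every number are invented; the point is only that any policy which answers perceived aggression with a slightly stronger countermeasure escalates geometrically:

```python
# Toy model of machine-speed escalation. All numbers are illustrative;
# this does not model any real system or doctrine.

def automated_response(perceived_hostility: float, amplification: float) -> float:
    """Each side answers a perceived attack with a slightly stronger countermeasure."""
    return perceived_hostility * amplification

def simulate(initial_anomaly: float, amplification: float, rounds: int) -> None:
    a_to_b = initial_anomaly  # Nation A misreads an anomaly as attack preparation
    for t in range(rounds):
        b_to_a = automated_response(a_to_b, amplification)  # B's system "retaliates"
        a_to_b = automated_response(b_to_a, amplification)  # A's system answers in kind
        print(f"round {t}: A->B intensity {a_to_b:8.2f}, B->A intensity {b_to_a:8.2f}")

if __name__ == "__main__":
    # With any amplification above 1.0, intensity grows as amplification**(2n),
    # far outpacing any human review process measured in minutes or hours.
    simulate(initial_anomaly=1.0, amplification=1.3, rounds=10)
```

Even this toy model escalates a single misread anomaly past any plausible human reaction time within a handful of rounds, which is the core of the scenario above.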
This isn’t science fiction. Current AI systems can already identify and respond to cyber threats orders of magnitude faster than human analysts. U.S. Cyber Command’s strategy of persistent engagement, which involves continuous contact with adversaries in cyberspace, increasingly relies on automated tools to maintain operational tempo. China’s Strategic Support Force similarly emphasizes “intelligentized” warfare capabilities that leverage machine learning for both defensive and offensive operations.
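Part of what makes such speed plausible is how little machinery a fully automated detect-and-respond rule actually requires. The sketch below is a hypothetical minimal example, not any deployed system: it flags and “quarantines” a host on a simple statistical deviation, a decision no human analyst could review at comparable tempo:

```python
import statistics

# Minimal detect-and-respond sketch (illustrative only): flag a host whose
# traffic deviates sharply from its recent baseline and act without waiting
# for a human analyst.

def is_anomalous(history: list[float], current: float, z_threshold: float = 4.0) -> bool:
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return stdev > 0 and abs(current - mean) / stdev > z_threshold

baseline = [100.0, 98.5, 102.3, 99.1, 101.7, 100.4]  # packets/sec, made-up numbers
observed = 480.0

if is_anomalous(baseline, observed):
    print("quarantining host: decision made in microseconds, not analyst-hours")
```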
The compression of decision cycles fundamentally alters crisis management. Traditional diplomatic mechanisms—the late-night phone calls, the carefully worded communiqués, the graduated signals—all assume human-speed deliberation. When conflicts can escalate to dangerous levels in minutes rather than days, these tools become obsolete. As the contributors to Artificial Intelligence and International Conflict in Cyberspace argue, we need new frameworks for managing crises that unfold at machine speed while preserving human control over matters of war and peace.
Invisible Battlefields: Cyber Operations Below the Threshold of War
Perhaps the most insidious transformation AI brings is the normalization of constant, low-level conflict in cyberspace. Machine learning enables what experts call “gray zone” operations—activities aggressive enough to achieve strategic objectives but calibrated to remain below the threshold that would trigger traditional military responses.
AI systems excel at finding and exploiting vulnerabilities in ways that blur the line between espionage, sabotage, and warfare. They can conduct reconnaissance, map networks, and even manipulate data in ways subtle enough to avoid detection for months or years. The 2020 SolarWinds breach, while not definitively attributed to AI, demonstrated how sophisticated actors can infiltrate thousands of networks simultaneously, creating strategic advantages without firing a shot.
This perpetual state of low-intensity cyber operations challenges fundamental assumptions about international order. The Westphalian system assumes clear distinctions between war and peace, with different legal and normative frameworks governing each. But when AI enables continuous operations that are neither quite war nor quite peace, these categories lose their meaning. Nations find themselves in a permanent state of competition where the boundaries of acceptable behavior become increasingly unclear.
The psychological impact should not be underestimated. When nations know they’re under constant algorithmic surveillance and probing, trust erodes and paranoia grows. Every network anomaly could be an attack; every software update could hide a vulnerability. This atmosphere of perpetual insecurity makes diplomatic solutions harder to achieve and military responses more likely.
Delegating Judgment: The Risks of Removing Humans from Decision Loops
The seductive promise of AI in military applications is its ability to process vast amounts of information and respond faster than any human. But this capability comes with a profound risk: the gradual removal of human judgment from decisions about life, death, and war.
Current military AI systems typically include “human in the loop” safeguards, requiring human approval for critical decisions. But the pressure to match adversaries’ response times creates powerful incentives to grant AI systems greater autonomy. If your opponent’s AI can launch and complete a cyber attack in seconds, can you afford to wait for human approval to respond?
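What a “human in the loop” safeguard amounts to in practice can be sketched in a few lines. The design below is hypothetical (the `AUTONOMY_CEILING` threshold and the severity scale are invented for illustration), but it shows where the pressure lands: every argument about response time is, in code terms, an argument for raising one number:

```python
from dataclasses import dataclass

# Hypothetical human-in-the-loop gate, not any real system. Low-impact actions
# execute autonomously; anything escalatory is held for a human decision,
# accepting minutes of delay in exchange for judgment.

@dataclass
class ProposedAction:
    description: str
    severity: int  # invented scale: 1 = routine blocking, 5 = escalatory countermeasure

AUTONOMY_CEILING = 2  # raising this number is the "slippery slope" in code form

def dispatch(action: ProposedAction) -> str:
    if action.severity <= AUTONOMY_CEILING:
        return f"EXECUTED automatically: {action.description}"
    return f"HELD for human authorization: {action.description}"

print(dispatch(ProposedAction("block malicious IP range", severity=1)))
print(dispatch(ProposedAction("disable adversary C2 infrastructure", severity=4)))
```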
This delegation of judgment to machines raises profound ethical questions. AI systems, however sophisticated, lack the contextual understanding, moral reasoning, and accountability that humans bring to decisions about conflict. They operate on probabilities and patterns, not wisdom or restraint. An AI might accurately identify a cyber intrusion but fail to recognize it as a mistaken action by an ally rather than an attack by an adversary.
The book’s editors pose a crucial question: “What is at stake with the use of automation in international conflict in cyberspace?” The answer extends beyond tactical military concerns to fundamental questions about human agency in matters of war and peace. Each step toward greater automation makes the next step easier to justify, creating a slippery slope toward fully autonomous warfare.
New Deterrence Dilemma: When Attribution Becomes Impossible
Deterrence, the cornerstone of strategic stability since the Cold War, depends on clear attribution and credible threats of retaliation. AI undermines both foundations. When attacks can be launched through layers of automated proxies, masked by AI-generated deceptions, and executed by algorithms that adapt and evolve, determining responsibility becomes nearly impossible.
Traditional cyber attribution is already challenging, often taking months of investigation to establish with moderate confidence. AI compounds the problem dramatically. Adversaries can use machine learning to mimic others’ attack patterns, generate false flags at scale, and create deepfakes that muddy the waters further. When you cannot definitively identify your attacker, how can you deter future attacks?
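The effect of cheap mimicry on attribution confidence can be shown with a toy Bayesian update (the numbers are my own, chosen for illustration, not taken from the book). The same piece of evidence that once pointed strongly at an attacker becomes nearly worthless when false flags are easy to generate:

```python
# Toy Bayesian illustration of attribution erosion. Evidence E = "the malware
# matches Nation B's known toolkit". What mimicry changes is the false-flag
# probability P(E | someone else attacked). All probabilities are invented.

def posterior(prior: float, p_e_given_b: float, p_e_given_other: float) -> float:
    numerator = p_e_given_b * prior
    return numerator / (numerator + p_e_given_other * (1 - prior))

prior_b = 0.5  # prior belief that Nation B is responsible

# World 1: imitating B's tradecraft is hard, so a toolkit match is telling.
print(f"hard-to-fake evidence: {posterior(prior_b, 0.9, 0.05):.2f}")  # ~0.95

# World 2: ML-generated false flags make the same match nearly uninformative.
print(f"easy-to-fake evidence: {posterior(prior_b, 0.9, 0.8):.2f}")   # ~0.53
```

When the posterior barely moves from the prior, threats of retaliation lose their footing, which is precisely the deterrence problem at issue.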
This attribution crisis extends beyond technical challenges to political ones. Even when technical evidence points toward a particular actor, the complexity and opacity of AI systems provide plausible deniability. Nations can claim their AI acted autonomously, without direct human instruction. This ambiguity creates a responsibility gap that aggressive actors can exploit while cautious nations become paralyzed by uncertainty.
The result is a deterrence paradox: the more sophisticated offensive AI capabilities become, the less effective traditional deterrence strategies prove. This dynamic incentivizes first strikes and preemptive actions, destabilizing the international system.
Why Governance Matters: Building Frameworks for an AI-Driven World
The inadequacy of current international law and norms for governing AI in conflict, thoroughly documented in Artificial Intelligence and International Conflict in Cyberspace, demands urgent attention. The Geneva Conventions, the UN Charter, the Laws of Armed Conflict—all assume human decision-makers operating at human speeds with human accountability. None adequately address autonomous systems making split-second decisions about war and peace.
Creating effective governance requires acknowledging AI’s dual nature as both weapon and infrastructure. Unlike nuclear weapons, which serve almost exclusively military purposes, the AI technologies that power military systems also run civilian economies. This dual-use character makes traditional arms control approaches ineffective. We cannot simply ban or limit AI development without crippling legitimate civilian applications.
The path forward demands innovative governance approaches that focus on specific military applications while preserving beneficial uses. Most critically, we need mandatory human authorization for escalatory actions, ensuring that decisions to intensify conflicts remain under human control regardless of operational tempo. Technical standards for AI transparency in military systems could enable verification of compliance with international agreements, creating accountability even in highly classified domains. Our cyber-deterrence frameworks must evolve to address AI capabilities, incorporating new forms of attribution and accountability that acknowledge both the technical complexity and political ambiguity of algorithmic warfare. Perhaps most urgently, we need international cooperation mechanisms specifically designed for managing AI-related crises before they escalate beyond control, recognizing that traditional diplomatic timelines are obsolete when conflicts unfold in seconds rather than days.
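As one illustration of what “technical standards for AI transparency” might mean in practice, consider a tamper-evident decision log. This is a hypothetical sketch of my own, not a proposal from the book: each record commits cryptographically to its predecessor, so an after-the-fact auditor can detect deletions or edits even in a classified system whose contents they cannot otherwise inspect:

```python
import hashlib
import json
import time

# Hypothetical tamper-evident log for AI decisions, illustrative only.
# Each record's hash covers its content and the previous record's hash,
# so removing or altering any entry breaks the chain.

def append_record(chain: list[dict], decision: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"decision": decision, "prev": prev_hash, "ts": time.time()}
    body["hash"] = hashlib.sha256(
        json.dumps({k: body[k] for k in ("decision", "prev", "ts")},
                   sort_keys=True).encode()
    ).hexdigest()
    chain.append(body)

def verify(chain: list[dict]) -> bool:
    prev = "0" * 64
    for rec in chain:
        recomputed = hashlib.sha256(
            json.dumps({k: rec[k] for k in ("decision", "prev", "ts")},
                       sort_keys=True).encode()
        ).hexdigest()
        if rec["prev"] != prev or rec["hash"] != recomputed:
            return False
        prev = rec["hash"]
    return True

log: list[dict] = []
append_record(log, {"action": "held for human authorization", "severity": 4})
append_record(log, {"action": "blocked IP range", "severity": 1})
print("log intact:", verify(log))
```

A real verification regime would need far more than this (trusted timestamps, external anchoring, agreed record schemas), but hash chaining conveys the basic idea: accountability can be built in without exposing operational secrets.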
Charting that path will require unprecedented cooperation among technologists, policymakers, diplomats, and military leaders. As the contributors to this essential volume make clear, the stakes couldn’t be higher. We stand at a crossroads where the choices we make about AI and conflict will shape international security for generations.
The question is not whether algorithms will go to war—they already have. The question is whether humans will maintain meaningful control over decisions of war and peace, or whether we’ll cede that awesome responsibility to machines. The answer will determine not just how future conflicts unfold, but whether humanity retains agency over its own survival.
For deeper insights into these critical issues, read Artificial Intelligence and International Conflict in Cyberspace (Routledge, 2023), edited by Fabio Cristiano, Dennis Broeders, François Delerue, Frédérick Douzet, and Aude Géry.