Artificial Intelligence and Nuclear Weaponry
Below is the essay by Dr. Ford that INHR published on March 27, 2026.
For most people, the topic that first comes to mind when talking about nuclear weapons and Artificial Intelligence (AI) relates to the question of whether “killer robot” problems might emerge in the arena of nuclear warfighting, with national leaders outsourcing to a machine their judgment on whether (or how) to start a nuclear war. And indeed, this was the issue that U.S. President Joe Biden and Chinese Communist Party Chairman Xi Jinping addressed in their November 2024 summit, where they agreed that any decision to use nuclear weapons should be controlled by humans, not by artificial intelligence.
That said, the Biden-Xi statement really only addressed low-hanging fruit, for such a declaration seems a pretty easy one to make. As I have argued before, “fully autonomous (i.e., ‘human out of the loop’) nuclear weapons systems are not terribly likely,” and “[i]f there is any area in which national leaders would refuse to countenance handing over their own most existentially critical trigger-pulling authority to a computer … this is presumably it.”
One can’t say that this is unequivocally true, of course, since there is at least one historical counterexample of a major nuclear power being willing to countenance automatic nuclear weapons use. The Soviet Union, after all, built and deployed the so-called Perimeter (or “Dead Hand”) system many years ago, which was apparently designed to launch all of Moscow’s remaining nuclear arsenal if sensors confirmed nuclear strikes on Soviet territory and the system’s robot brain lost its computerized communications with the Soviet General Staff. As far as I know, the Russians have kept Perimeter in service to this day. So there is already a “killer nuclear robot” out there; it’s just a Russian one rather than a Chinese or American one, and it seems to be intended for what are basically “the world is already ending” scenarios rather than for any questions related to whether or when to get into a nuclear war.
Such potential exceptions aside, more national forays into actual autonomous nuclear weapon-firing feel unlikely – though I’d not put anything past the North Koreans. More realistically, however, I think there’s a high likelihood that AI will increasingly come to be used in providing decision-support assistance to national leaders making nuclear weapons decisions, so that’s perhaps where the questions get a bit more interesting. Starting from that observation, therefore, let me offer a few musings about the potential impact of AI on nuclear escalation risks.
AI and Nuclear Targeting
Crisis Target Selection
Whether it occurs in the context of conventional warfighting or nuclear warfighting, one of the potential risks of AI-facilitated decision-making – e.g., AI-based decision-support tools for human leaders – is that in high-pressure situations in which huge volumes of incoming information need to be analyzed and acted upon with extraordinary rapidity, overwhelmed human operators may become so reliant upon machine-generated decisional inputs that they essentially cede their judgment to the computer. In such circumstances, the classic distinction can erode between having a “human in the loop” (i.e., a human actually making the final decisions) and a “human on the loop” (i.e., a human merely overseeing a process of machine-made decisions). Indeed, the utility even of having a human merely on the loop could also disappear if the speed and complexity of AI decision processes increase beyond the ability of any given supervisor to follow them. (After all, being “on the loop” is only helpful if you can tell what’s happening in it!)
One already hears concerns being raised about this happening. It’s hard to know just what to make of the various often-conflicting reports that have appeared in the press about Israeli AI-assisted decision-making during the Gaza campaign of 2023-25, for instance. Nevertheless, it has at least been claimed that the Israel Defense Forces’ (IDF’s) “Lavender” AI program – a “human in the loop” tool for generating targets for air strikes – effectively evolved in some instances into an essentially wholly autonomous program, because under high-stress and high-operational-tempo (OPTEMPO) wartime conditions, human operators feeling “enormous pressure to accelerate and increase the production of targets and the killing of these targets” sometimes “treated the outputs of the AI machine ‘as if it were a human decision’” and acted unquestioningly as no more than a “rubber stamp” for machine-generated prompts. Similar worries have also been voiced about the U.S. Department of Defense’s “Project Maven” target-identification program, with one media account describing operators merely “concurring with the algorithm’s conclusions … in a rapid staccato: ‘Accept. Accept. Accept.’”
From the outside, I can’t evaluate the merits of concerns raised about Lavender or Project Maven, and for all I know both systems work extremely well, making their operators not unquestioning approvers of robot decisions, but in fact simply vastly better human targeteers. Nonetheless, functional decisional displacement to AI remains theoretically possible, and it is a potential hazard that could have an effect in constraining effective escalation management by human leaders in a nuclear crisis.
That said, as I see it, the key question to ask about AI-based nuclear decision-support capabilities is not about their absolute but rather their relative value. From what I understand of the processes and procedures for U.S. presidential nuclear weapons decision-making in a crisis, in fact, it could actually be good to have more AI-facilitated decision-support tools in this area.
Very little is known publicly about the details of what decision-options are made available to the U.S. president in the immediate time-urgent press of a nuclear attack warning scenario, of course, and that’s surely for the best. Yet my impression is that while American planners have worked for years to improve the flexibility of the system and increase the range of nuclear use options available to the president in such circumstances, the diversity of the “menu” of pre-planned targeting packages that would be available to him in such a crisis is inherently quite limited. Moreover, the ability of the system to generate new possibilities in response to in-the-moment presidential questions and directions – e.g., his desire to spare certain targets while perhaps adding additional ones to the mix based upon his judgment about what circumstances actually warrant – would be unavoidably constrained if one only had a handful of minutes in which to make the call.
Compared to this status quo, therefore, it might well be that some kind of AI decision-support tool could actually help the president make decisions – and be able, at least in general terms, to evaluate the actual operational military and civilian collateral damage impact of bespoke alternative nuclear courses of action – with much more care, consideration, and nuance in a nuclear crisis than he can today. With such support, the president’s decisions might perhaps still be worse ones than if he had the help of an army of experienced human advisors and technical experts, and hours or days in which to make the call. But if you’re talking about a nuclear attack warning crisis, he’s probably not ever going to have either of those things – and well-designed AI decision-support might be a definite improvement.
In that sense, therefore, it may be that AI support to nuclear decision-making here could be quite valuable and stabilizing: aiding and facilitating human critical reasoning and moral judgment rather than displacing it, and in fact allowing more such reasoning and judgment than the system presently permits in a crisis. The impact of AI might thus be a very good thing in this particular part of the nuclear decision-making arena.
“Damage Limitation” Warfighting
Another potentially important – though much grimmer – way in which AI might be incorporated into nuclear weapons-related decision-making without the real-world leaders involved finding it to be more problematic than helpful, however, relates to weapon allocation and release decisions in the very specific and extreme circumstances of conducting a “damage limitation” campaign once a full-scale nuclear conflict has begun.
Most aspects of nuclear strategy revolve around issues of how to deter war in the first place, how to prevent or at least control escalation to nuclear use in a conventional conflict, and how to handle issues of “escalation management” in ways that keep any limited nuclear use that might occur from spiraling into a full-scale exchange. There is a subset of nuclear weapons thinking, however, that concerns itself with what to do in the event – hopefully extremely unlikely – that both deterrence and escalation management have failed, and all-out nuclear war has begun.
In those unhappy circumstances, “damage limitation” refers to the effort, in effect, to destroy as many of the enemy’s nuclear assets as possible, as quickly as possible, in order to prevent them from being used against you. It is not damage prevention, of course, because in such circumstances that is presumably impossible. Nevertheless, there is an undeniable logic in the idea that it is better to be hit by only some of an adversary’s nuclear arsenal than by all of it, and damage limitation strategies seek to make this “some” as small a number as possible.
My point with respect to AI integration flows from the extremity of this challenge, but let me first back up for a moment to explain my priors. I am not myself of the view that it is always inappropriate to allow AI to make lethal decisions. As I outlined in a policy paper published when I was performing the duties of the Under Secretary of State for Arms Control and International Security in 2020, the key question here is contextual. We are already comfortable with permitting computers to make lethal-engagement decisions in certain narrowly defined situations, such as when a shipboard Close-In Weapons System (CIWS) is turned to “automatic” while the vessel is under assault from incoming enemy missiles. (Indeed, even an antique anti-tank landmine could be considered a crude form of lethal autonomous weapon: humans set the weight, vibration, and other parameters under which it is permitted to explode under the wheels of a vehicle, and then trust the mine, in effect, to make this “decision” for itself in the field.) The question thus is not autonomy per se, but when, under what circumstances, and within what pre-assigned parameters a machine is permitted to exercise such autonomy.
As will be discussed further below, it may very well be that AI tools are poorly suited to handle the more “human” aspects of nuclear decision-making that involve questions of deterrence and escalation management, where outcomes depend upon the interaction of opposing leaders’ minds and are shaped not merely by reason and available information but also by hopes, fears, moral judgments, anxieties, ambitions, antipathies, affinities, and a whole range of contextual assumptions about how the world works. For these sorts of issues, I suspect machine autonomy will perform quite badly for the foreseeable future.
Once one crosses over into the context of nuclear “damage limitation” strategy, however, such “human” variables are far less important: the main questions seem essentially mechanical ones of how to locate, attack, and disable adversary nuclear capabilities as rapidly as possible, and the speed and effectiveness of that engagement cycle become essentially the sole performance criterion. For nuclear decision-making in that context – somewhat akin to turning the CIWS to “auto” when one is out in the middle of the ocean away from civilian targets and facing multiple inbound missile threats – there may well be some logic in fully autonomous AI nuclear warfighting. Taking such an approach would hardly be consistent with the November 2024 Biden-Xi statement, of course, but if the world were already burning, such fidelity might not be considered a high priority.
Conventional War, Escalation, and AI
Compared to automated nuclear trigger-pulling, however, I am more concerned about the potential effect upon nuclear escalation risks from the use of AI in conventional warfighting. And indeed, a targeting revolution seems already to be well underway in the conventional arena.
Even before the current AI boom, we were already well into an era of increasingly computer-facilitated “sensor-to-shooter” reconnaissance and strike planning, in which adversaries compete for advantage in combat by having the shortest “OODA loop” – that is, each trying, in John Boyd’s famous formulation, to be quicker than the other in observing their environment, orienting themselves within it, deciding what to do, and acting upon that decision. If you can cycle through the OODA loop faster than your opponent, the theory goes, you can generally outfight him, as you’ll be changing his operational environment to his disadvantage faster than he can react to it.
A key to warfighting success here is thus “accurate speed,” and it’s easy to imagine that AI tools could do a lot to further shorten warfighting OODA loops, at least within whatever physical parameters may unavoidably be set by factors such as weapon systems range and transit time (relative to geography) and munitions “magazine depth.” As one report by the Royal United Services Institute put it two years ago,
“[t]heoretically[,] the processing power of machine learning could empower analysts to make decisions with much higher accuracy, combining intelligence [from myriad sources very quickly] to provide a rigorous depiction of the potential target.”
To this end, AI-based targeting decision-support tools are being designed to “accelerate the kill chain and make the process of killing progressively more autonomous” by accomplishing data collection and analysis tasks at far greater speed and scale than ordinary human operators can – and, in principle, without the errors that human analysts can and sometimes do make as the result of fatigue, stress, or emotion.
In the context of conventional military operations, some of this acceleration seems already to be happening. As noted earlier, Israel has built a machine-learning algorithm called “Lavender” that can quickly sort data to hunt for low-level militants. In the recent Gaza campaign, Lavender reportedly helped the IDF quickly amass a list of 37,000 human targets based on their ties to Hamas. The American “Project Maven” is also said to be “built for speed,” apparently permitting an operator to “sign off on as many as 80 targets in an hour of work, versus 30 without it.” This is greatly increasing the rapidity with which strike planning can be done. Time Magazine has quoted a former IDF legal advisor as saying, for example, that whereas a decade ago “you needed a team of around 20 intelligence officers to work for around 250 days to gather something between 200 to 250 targets[,] … [t]oday, the AI will do that work in a week.”
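If one takes the quoted figures at face value (they are rough, second-hand estimates, used here only to illustrate scale rather than as authoritative data), the implied compression of targeting timelines is striking:

\[
20 \text{ officers} \times 250 \text{ days} \approx 5{,}000 \text{ officer-days for } \approx 225 \text{ targets} \;\Rightarrow\; \approx 22 \text{ officer-days per target,}
\]
\[
\text{versus roughly the same output in } \approx 7 \text{ days with AI assistance} \;\Rightarrow\; \text{about a } \tfrac{250}{7} \approx 36\text{-fold cut in elapsed time.}
\]

By the same arithmetic, the Project Maven figure of 80 versus 30 targets per hour of operator work implies roughly a 2.7-fold gain per individual operator.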
But all that just has to do with picking targets. To the degree that armed forces also explore letting the machines play a greater role in deciding whether to shoot, in addition to merely what to shoot, the tempo of conventional warfighting is sure to accelerate still further.
Either way, however, there may be a potential nuclear escalation problem in connection with the fact that in conflicts that occur between nuclear-armed adversaries, the interests of conventional warfighting efficacy – the sort of thing that military AI would presumably be trained to maximize – may not always coincide with the interests of nuclear-related escalation management. This point is worth unpacking a bit.
In effect, leaders in a conflict involving two nuclear weapons states must play two slightly different but overlapping games at the same time. On the one hand, since they are in a conventional fight, they need to fight that fight, and do so as effectively as they can. But because these are also nuclear powers with the capability to incinerate each other – and perhaps a good deal of the rest of the world besides – their leaders simultaneously need to be playing a nuclear deterrence and escalation control game. The difficulty, however, is that these simultaneous games each have slightly different rules, and that some targets have considerable relevance in both games at the same time, but in ways that may be in tension with each other.
It’s not hard to imagine that an optimized “kill chain” for prosecuting a conventional war, for instance, might in some circumstances call for striking a range of enemy command-and-control targets, the destruction of some of which might affect not only that country’s ability to control its regular forces but also its ability to control its nuclear ones. Though undertaken for purely conventional warfighting reasons and perhaps with every desire to avoid escalation across the threshold of nuclear weapons use, such attacks might be hugely escalatory, for they could create actual or perceived risks to the victim’s nuclear command-and-control capability, leading its leadership to feel “use or lose” incentives with respect to its nuclear arsenal – and perhaps even to conclude (falsely) that a nuclear assault is commencing.
All this could produce escalation that may have been entirely unintended by, or even come as a surprise to, the attacking power. (The same could be said of strikes on dual-capable missile systems or aircraft – that is, those capable of carrying either nuclear payloads or conventional ones – and perhaps on other targets, such as key leadership nodes.) This is something of an inherent danger in any high-intensity fight between nuclear states, I would imagine, but widespread incorporation of AI into conventional warfighting could increase these risks by making it harder for human leaders to monitor the evolution of their own battle plans and to interpose some kind of check upon behaviors that – however useful they might be in conventional terms alone – could create unwanted nuclear escalatory pressures.
It may be that, as I suggested earlier, battlefield efficacy in a high-OPTEMPO conventional fight involving a vast array of different assets is something that AI would indeed be quite good at optimizing. That feels to me like a quasi-“mechanical” set of calculations that could scale and automate quite well, and hence be relatively well-suited to some degree of decisional outsourcing to AI.
By comparison, however, deterring an adversary from aggression and controlling nuclear escalation risk in crisis or wartime feels like a much more “human” sort of game – one involving, among many other variables, the psychology and emotions of leaders on both sides, the temperament and political dynamics of politicians and publics, and deep issues of what values two competing governments and peoples feel are in play in the first place. (There may also be an alchemy of timing involved here, insofar as even strikes that have identical operational military effect in conventional terms could have very different results in affecting nuclear-relevant perceptions of the adversary power, depending upon when in a conflict they are administered.) It is thus far from obvious to me that deterrence and escalation management are “automatable” at all, even with really good AI, and it’s certainly not clear how one could train AI to optimize for success in both the mechanical and the human/perceptual arenas at the same time.
For these reasons, my own thinking is that the potential nuclear-related risks that could be presented by the use of AI in the military context are potentially much greater at the nexus between conventional warfighting and nuclear deterrence and escalation management than they are within the specific domain of nuclear command-and-control itself. One key question for inquiry is thus how movement is likely to occur “up” and “down” the so-called “escalation ladder” of conflict in situations where AI-based decision-making is introduced into more traditional approaches to warfighting – and whether (or to what degree) this could change what would otherwise occur when unaided humans interact with each other in a conflict situation.
Strategists have been theorizing about how adversaries might interact moving up and down such escalation ladders for years, of course. The seminal U.S. nuclear strategist Herman Kahn, for instance, articulated a 16-step ladder, beginning with a mere “subcrisis disagreement” between two potential adversaries and progressively increasing in severity through “rungs” such as “crisis,” “show of force,” “controlled local war,” and “some kind of ‘all-out’ war.” (He also elaborated this into a remarkable 44-step escalation ladder that included an almost bewildering variety of gradations, including such ladder rungs as “‘peaceful’ worldwide embargo,” “demonstration attack on zone of interior,” “slow-motion counterforce war,” “countervalue salvo,” and even “spasm or insensate war.”)[1]
As a RAND Corporation report observed in 1984, Kahn “did not require that his ladders be uniquely correct,” of course. He was instead “largely concerned with providing a structure within which to do more nearly rigorous thinking about the unthinkable.” The question for us, therefore, is about whether or how the introduction of AI could change dynamics of interaction that have hitherto been conceived as being exclusively ones between counterpoised human decision-makers.
Escalation ladders have occupied the attention of strategists for many years precisely because strategy – including especially nuclear strategy – has been so centrally and necessarily concerned not with avoiding all escalation risk but rather with manipulating it safely and effectively. Deterrence, after all, requires deliberately creating some degree of risk for the adversary, including the risk of escalation to absolute catastrophe, in order to persuade him that it is not in his interest to start a war, or to take the next step “up” the escalation ladder within one. It requires, in effect, finding a sort of “Goldilocks Point” between creating enough risk to deter what one wishes to deter, on the one hand, and creating so much that things spiral out of control, on the other. (Zero escalation risk, it also follows, means something very much like “no deterrence.”) Questions of how to handle the challenges of “escalation management” – both leading up to war and within a conflict itself – are thus critical to national security policy, to questions of deterrence, and to how (or to what degree) one might hope to prevent escalation into a devastating nuclear exchange.
It is perhaps noteworthy, therefore, that some observers have suggested that the widespread introduction of autonomous weapons systems could make decisions to go to war – a war in which escalation to nuclear levels might occur – more likely. A recent article in Time, for instance, speculates that conventional military capabilities based upon autonomous systems might have a greater potential to lead to warmaking decisions than in situations where the participants could call only upon human soldiers, sailors, and airmen:
“… [I]f the ability to wage war remotely and autonomously leads to minimal human toll, that in itself may increase risk tolerance, meaning more operations that have higher escalation potential. For instance, it would be a gutsy move for a conventional U.S. Navy vessel to attempt to break any Chinese blockade of self-ruling Taiwan. Sending an unmanned submersible, however, feels less confrontational – as would a People’s Liberation Army decision to sink it. Yet those ostensibly lower-risk scenarios may in fact accelerate an escalatory spiral toward full-blown conflict. If a nation can wage war without the political cost of bringing home flag-draped coffins, will it be more likely to engage in unnecessary conflicts?”
Since any conflict between nuclear-armed states has at least some potential to escalate into the nuclear arena, such dynamics of conflict initiation could certainly also have potential nuclear implications.
Another question, which I have noted above in my comments about the nexus between conventional warfighting and nuclear war, is whether the introduction of AI systems into such equations could lead one adversary to move more quickly into nuclear portions of the escalation ladder, in the eyes of its opponent, than had in fact been intended or understood. How high-speed AI-facilitated decision-making in the conventional arena – let alone some degree of fully automated decision-making – is likely to interact in the real world with more traditional human decision-making, and how this admixture may affect movement up or down some hypothesized escalation ladder, is a subject that presumably needs more study, including through game-theoretical simulations.
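As a purely illustrative sketch of what such game-theoretical study might look like in miniature, the hypothetical toy model below (written in Python, with every rung name and parameter invented for illustration rather than drawn from any real doctrine, dataset, or wargame) simulates movement on a simplified Kahn-style escalation ladder, comparing a humans-only baseline against cases in which one side’s decision cycle is AI-accelerated and in which conventionally “optimal” strikes are occasionally misread as threats to the other side’s nuclear forces:

import random

# Hypothetical toy model of movement on a simplified "escalation ladder."
# Every rung name and probability below is invented for illustration only;
# nothing here reflects real doctrine, real capabilities, or any actual wargame.

RUNGS = ["crisis", "show of force", "conventional war",
         "strikes on dual-use C2", "limited nuclear use", "all-out war"]

def simulate(ai_speedup=1.0, misread_risk=0.0, steps=40, seed=None):
    """Return the highest rung reached in one simulated crisis.

    ai_speedup   -- decision cycles completed per step relative to the baseline
                    (a crude stand-in for a shorter, AI-accelerated OODA loop)
    misread_risk -- chance per cycle that a conventionally 'optimal' strike is
                    perceived by the adversary as threatening its nuclear forces
    """
    rng = random.Random(seed)
    rung = 0  # start at "crisis"
    for _ in range(steps):
        for _ in range(max(1, round(ai_speedup))):
            if rng.random() < 0.15:          # ordinary escalatory move
                rung = min(rung + 1, len(RUNGS) - 1)
            if rng.random() < misread_risk:  # move misread as nuclear-relevant
                rung = min(rung + 2, len(RUNGS) - 1)
        # one human review / de-escalation opportunity per step,
        # regardless of how many machine decision cycles have elapsed
        if rng.random() < 0.20:
            rung = max(rung - 1, 0)
        if rung == len(RUNGS) - 1:
            break
    return RUNGS[rung]

def summarize(label, **kwargs):
    outcomes = [simulate(seed=i, **kwargs) for i in range(2000)]
    nuclear = sum(o in ("limited nuclear use", "all-out war") for o in outcomes)
    print(f"{label}: {100 * nuclear / len(outcomes):.1f}% of runs reach nuclear use")

if __name__ == "__main__":
    summarize("humans only             ", ai_speedup=1.0, misread_risk=0.00)
    summarize("AI-accelerated          ", ai_speedup=3.0, misread_risk=0.00)
    summarize("AI-accelerated + misread", ai_speedup=3.0, misread_risk=0.03)

The outputs of such a toy mean nothing in themselves; the value of building even so crude a model is that it forces one to name the parameters (decision speed, misperception rates, the frequency of genuine human review) whose real-world analogues would actually drive movement up or down the ladder.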
And here, it needs to be stressed that despite occasional pretenses to the contrary, such as China’s always questionable and nowadays increasingly non-credible “no first use” (NFU) policy, questions about AI’s impact upon conventional-to-nuclear escalation are salient for all three of the largest nuclear weapons states. Each in its own way, Washington, Moscow, and Beijing all clearly contemplate circumstances in which a sufficiently adverse, even purely non-nuclear, attack could lead them to turn to nuclear weaponry.
This is least surprising, perhaps, for the United States, because during the Cold War NATO made it very clear – and indeed made it central to allied deterrence doctrine – that it might use nuclear weapons if faced with an otherwise overwhelming conventional attack by Warsaw Pact forces on the Central Front in Europe. More recently, U.S. officials also made it clear in both the 2010 and 2018 Nuclear Posture Reviews that there were circumstances in which a sufficiently grave non-nuclear attack – what the latter document called a “significant non-nuclear strategic attack” – might result in an American nuclear response.
Russia, too, has made such a possibility central to its nuclear planning. In guidance issued in 2024 by President Putin, for instance, it was explicitly stated that Russia “reserves the right to employ nuclear weapons” in response to an act of “aggression … with the employment of conventional weapons” which created “a critical threat” to the country’s sovereignty or territorial integrity. And even China, virtue-signaling NFU posturings aside, revealed in guidance issued to its strategic missile forces in 2004 that “nuclear coercion” by China might be appropriate when “a strong enemy only using conventional attacks” creates “enormous threats to us,” such as “[w]hen conventional war continuously escalates and the strategic situation is extremely disadvantageous to us.”
With all three powers thus clearly able to imagine circumstances in which some kind of nuclear response might be appropriate in response to non-nuclear attacks by others – and with all three now also increasingly incorporating AI into their own non-nuclear warfighting as they stare uneasily at each other – it is clearly important for us to consider the potential impact of such AI integration upon potential nuclear escalation.
AI and Strategic Stability
Threats to Second-Strike Forces?
Another question about the possible impact of AI on nuclear weapons issues is whether AI-facilitated data-aggregation and pattern recognition could help overcome traditional obstacles to detecting and targeting hitherto survivable second-strike nuclear forces. That AI could help lead to dramatic advances in such engagement is certainly possible, though whose nuclear forces might most stand to gain from such advances is less clear.
From Washington’s perspective, the danger is that AI-related advances in data-collection and analysis could finally help our adversaries crack the nut of submarine survivability. For decades, the United States has invested huge sums and enormous amounts of effort in ensuring that it possesses survivable second-strike nuclear forces in the form of ballistic missile submarines (SSBNs) capable of quietly disappearing into the “deep blue” of clandestine mid-ocean deployments, where their acoustic stealth protects them from adversary detection or attack. The Soviets tried hard to overcome U.S. advantages here, but they were thankfully never able to figure out how to detect, track, and engage American SSBNs on deterrence deployment. (London and Paris have also benefited from this situation, for their deterrence postures both rely heavily – and the British now exclusively – upon SSBNs.) Nor have Russia or China yet solved this sub-hunting problem today, leaving U.S. retaliatory capabilities secure.
Advances in AI, however, could perhaps change this, and this possibility may excite strategic planners in Moscow and Beijing. Were advances to “make the sea transparent,” as the saying goes, thus facilitating targeting of deployed SSBNs, this would clearly represent an enormous problem for the United States and its allies. In this respect, therefore, AI-facilitated advances in detection and tracking in the ocean depths could be deeply destabilizing by providing strategic advantages for Russia and China. The Russians and the Chinese do not themselves depend so much upon “deep blue” concealment, preferring to protect their own SSBNs not alone far out at sea, but rather within reasonably well-defined underwater “bastion” sanctuaries close to their home territory, patrolling within concentric circles of air and naval assets whose job it is to keep U.S. fast-attack submarines (SSNs) and other potential sub-hunting assets from intruding.
I do not mean to suggest that “making the seas transparent” would exclusively harm the Western powers, however. In fact, those very same Russian and Chinese SSBN “bastions” might well become just as transparent as the deep oceans – and perhaps do so in some respects more quickly, since the expanses of undersea space that would have to be searched for submarines would likely be much less than would need to be surveyed in order to find Western vessels. In these respects, Russian and Chinese submarines might have as much to lose as U.S. ones do from an AI-facilitated revolution in antisubmarine ISR.
AI-facilitated strategic Intelligence, Surveillance, and Reconnaissance (ISR) might present a threat to America’s adversaries in other ways as well. Both Russia and China rely heavily in their own nuclear postures – and vastly more than the Americans – upon mobile, land-based missile forces that are difficult to locate and track by virtue of their cross-country mobility and their ability to shelter at various locations in networks of caves, tunnels, or revetments. Ever since the infamous “SCUD-hunting” problems U.S. forces encountered with Iraqi missiles in the Gulf War of 1991, it has been clear that finding deployed missiles on their mobile Transporter-Erector-Launcher (TEL) units can be extremely challenging. And both Russia and China have invested heavily in such missiles as part of their strategic deterrent.
All in all, to whom the “net” advantage (or disadvantage) would belong is not obvious. This would also depend upon additional factors, such as the reliability and speed by which each party could actually sink the other side’s boats. (“Kill-chains” have “links” beyond simply reconnaissance, after all, and mere detection absent the capacity also effectively to engage would be much less destabilizing.)
Threats to Nuclear Command and Control
Another way in which AI-facilitated warfare might affect strategic stability relates to its potential – possibly through activity in cyberspace, where AI bots are proving themselves increasingly adept – to enable effective attacks upon a nation’s Nuclear Command, Control, and Communications (NC3) architecture. And in fact, while the specific utility of AI tools in this respect may be a new factor, cyber-facilitated NC3 attack – that is, deliberate degradation rather than the inadvertent sort I discussed earlier in the context of conventional-to-nuclear escalation – is not itself new, and may now become easier.
Indeed, “NC3 warfare” seems to have been developed as a possible wartime option even before the current age of AI. It is not well known, but for the last several years declassified information has been available making clear that U.S. planners have contemplated the possibility of using cyber-facilitated attacks to impede the potential wartime effectiveness of adversary NC3, in extremis, ever since the early 1980s. As I recounted in remarks at the Center for Strategic and International Studies in 2024 based upon unclassified doctoral dissertation research at George Mason University by a colleague of mine, the U.S. Joint Chiefs of Staff (JCS) created a special unit in 1982-83 that was dedicated, among other things, to exploring how jamming and signals injection into Soviet communications networks could be weaponized against Moscow’s NC3 systems as part of an overall strategy against deployed Soviet SSBNs.
According to one report, by the mid-1980s, U.S. scientists were developing ways to reliably deny access to and disrupt Soviet communications networks and nuclear weapons systems in the event of armed conflict.[2] As later explained by one of its participants, some of this work envisioned what was essentially cyber-facilitated NC3 attack:
“… [W]e realized we were looking at an automated system that was meant to keep the Soviet leadership in control of their forces – basically it was an early digital system, a Soviet style concept of a network. We understood that we had an opportunity to effect [sic] the network and affect confidence in the network for deterrence purposes ….”[3]
These preparations for command-and-control warfare, moreover, were coupled with the more openly signaled posture of a new “Maritime Strategy,” pursuant to which the U.S. Navy prepared itself for possible wartime operations against Soviet SSBNs in their bastions and for launching nuclear strikes against the USSR from carrier battlegroups in those northerly seas.[4]
Actually taking those steps would of course have been inherently provocative and potentially escalatory, but U.S. officials believed that if full-scale war had broken out with the USSR, there might be little alternative but to launch such attacks. (Signaling something of our ability to do so with the naval posturings of the Maritime Strategy, it was felt, might also contribute to deterring the Soviets from launching a war in the first place, by holding at risk key elements of what the Kremlin felt it would need to preserve in order to have any hope of winning such a conflict.) As one participant in these U.S. efforts put it later, it was felt that “we cannot afford not to do this if war broke out one day.”[5] It thus seems quite clear that in the peak years of late-Cold War strategic competition, if deterrence had failed and World War III had indeed begun, the United States was prepared for an unstinting damage limitation campaign against Soviet strategic nuclear assets.
All of that being now a full four decades or more ago, one can only assume that cyber-facilitated NC3 attack probably remains a possibility today – and that AI tools could perhaps now make such campaigns even more viable than before. This could have important implications for strategic stability and nuclear escalation. Powers with such AI-augmented “NC3-defeat” capabilities, for instance, might be tempted to use them in a crisis, to uncertain but potentially destabilizing effect. Since adversary powers would presumably also use AI in an attempt to detect such AI-facilitated NC3 attacks, moreover, strategic stability could depend in worrying ways upon the effectiveness and error rates of such technologies in an offense-versus-defense arms race. Such a nuclear balance would surely feel highly fraught, for while false positives – i.e., an incorrect assessment that one’s adversary has begun attacking one’s NC3 – could lead to extreme and escalatory “use or lose” nuclear responses, false negatives could result in one’s strategic defeat.
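A simple base-rate calculation, with numbers invented purely for illustration, shows why such error rates matter so much. Suppose a detection system has 95 percent sensitivity and only a 1 percent false-alarm rate, and that the prior probability of a genuine NC3 attack in any given crisis window is 1 percent. Then:

\[
P(\text{attack} \mid \text{alarm}) = \frac{0.95 \times 0.01}{0.95 \times 0.01 + 0.01 \times 0.99} \approx 0.49 .
\]

Even a seemingly excellent detector, in other words, would under those assumptions produce alarms that are wrong roughly half the time, and whether leaders (or their machines) treat such an alarm as grounds for “use or lose” responses is exactly the kind of question on which stability would hinge.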
Disinformation and Decisional Context?
Though the topic is well beyond detailed treatment here, AI-facilitated disinformation might also be used to shape nuclear crisis bargaining and escalation (or de-escalation) decisions by an adversary’s national leaders, especially but not exclusively those in the Western democracies. Leaders make their decisions, after all, in a broader context made up of the innumerable different facets of what they perceive to be happening in the world and the factors bearing upon how they interpret its meaning. If AI both allows disinformation campaigns to be undertaken at machine speed and scale and permits them to be “tailored” to particular targeted leaders’ specific personal political, emotional, or psychological “hot buttons,” this could certainly have an impact upon nuclear decision-making.
AI Counterproliferation
Finally, it is worth saying a quick word about the potential implications of strategically-relevant AI for counterproliferation. As AI tools become increasingly important in the conduct of (and preparation for) warfare, it seems to me all but inevitable that strategic competition between rival powers will also accelerate in “AI defeat” capabilities, with each competitor feeling powerful incentives to impede the other side’s pursuit or implementation of AI. In other words, strategic competition in AI will likely come to involve elements of nonproliferation and counterproliferation policy, including not just efforts to deny critical inputs to adversary AI development, but perhaps even efforts to sabotage competitors’ AI programs through means such as data poisoning, manipulative or destructive cyberattacks, or influence operations that erode a possessor’s faith in the integrity of its AI tools.
Data poisoning attacks on AI models have already been reported in the form of Russian insertions of corrupted or fabricated information into Wikipedia pages and the data pools used for training Western AI models in order to rewrite the history of Vladimir Putin’s war of annexation in Ukraine. Such data injections are apparently intended to support broader Kremlin influence operations by tricking the large language models (LLMs) of popular AI chatbots into telling Western users lies the Russian government wishes them to believe about the war.
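For readers unfamiliar with the mechanics, the minimal sketch below (a deliberately simplified toy in Python, using a tiny synthetic dataset and a nearest-centroid classifier, and thus nothing like an attack on a real large language model) illustrates the basic logic of data poisoning: a small number of fabricated, mislabeled training examples can shift what a model concludes about a chosen borderline query.

import numpy as np

# Toy illustration of data poisoning: roughly 9% of the training set is
# fabricated and mislabeled, which flips the classifier's answer for a
# borderline query. Purely synthetic and illustrative.

rng = np.random.default_rng(0)

# Clean training data: class 0 centered at (0, 0), class 1 centered at (3, 3)
X0 = rng.normal(loc=[0.0, 0.0], scale=0.8, size=(100, 2))
X1 = rng.normal(loc=[3.0, 3.0], scale=0.8, size=(100, 2))
X_clean = np.vstack([X0, X1])
y_clean = np.array([0] * 100 + [1] * 100)

# Poison: 20 fabricated points deep in class-0 territory, labeled as class 1
X_poison = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(20, 2))
X_dirty = np.vstack([X_clean, X_poison])
y_dirty = np.concatenate([y_clean, np.ones(20, dtype=int)])

def nearest_centroid_predict(X_train, y_train, query):
    """Classify `query` by whichever class centroid lies closer to it."""
    c0 = X_train[y_train == 0].mean(axis=0)
    c1 = X_train[y_train == 1].mean(axis=0)
    return 0 if np.linalg.norm(query - c0) < np.linalg.norm(query - c1) else 1

query = np.array([1.4, 1.4])  # a borderline point between the two classes
print("model trained on clean data says:   ", nearest_centroid_predict(X_clean, y_clean, query))
print("model trained on poisoned data says:", nearest_centroid_predict(X_dirty, y_dirty, query))

In a real influence operation the “poison” would of course be fabricated text seeded into scraped training corpora rather than mislabeled points on a plane, but the underlying mechanism (a small, targeted corruption of training data that shifts model outputs on queries the attacker cares about) is the same in spirit.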
It seems increasingly to be felt that “data poisoning operations against adversary AI systems” can help a country achieve “a decisive asymmetric advantage in future conflicts.” And AI systems are sure to become greater targets for covert sabotage in direct proportion to the degree that they continue to help countries achieve better and better warfighting capabilities.
This is a possibility to which Craig Wiener and I have already drawn special attention in the extreme case of potential future Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI) capabilities. We argued recently in Missouri State University’s journal Defense & Strategic Studies Online for an approach we labeled “Persistent Offensive Preclusion of Adversary AI” (POPAAI) – or “PopEye.” This would represent, we suggested, “a new, forward-leaning approach to U.S. competitive strategy in the ASI arena focused upon counterproliferation” aimed at
“intervening actively to interdict problematic [ASI] proliferation-facilitating transactions or transfers that are already underway, and perhaps even to roll back whatever progress would-be proliferators have already made.”
We contended in that article that the potential dangers resulting from adversary possession of superintelligent AI made such an aggressive approach a strategic necessity, but at least some of the same logic could still apply even to adversary AI tools well below the level of AGI or ASI. All in all, I would thus be surprised if we were not now entering a new arena of sparring in AI counterproliferation.
Conclusion
The advent of military AI is still a new enough phenomenon that in many respects we can only really speculate about the full range of its future implications. Nevertheless, I hope these musings will provide useful food for thought, and that they can perhaps serve as a jumping-off point for further inquiry.
-- Christopher Ford
Notes
[1] Kahn’s lists derive most famously from Herman Kahn, On Escalation: Metaphors and Scenarios (Praeger, 1965). They can be found more conveniently, however, in Paul K. Davis & Peter J.E. Stan, “Concepts and Models of Escalation” (RAND Corporation, 1984), 5-6, https://www.rand.org/content/dam/rand/pubs/reports/2007/R3235.pdf.
[2] Nicole Perlroth, This Is How They Tell Me the World Ends: The Cyber-Weapons Arms Race (Bloomsbury, 2021), 82.
[3] Craig J. Wiener, “Penetrate, Exploit, Disrupt, Destroy: The Rise of Computer Network Operations as a Major Military Innovation,” dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, School of Policy, Government, and International Affairs, George Mason University (October 26, 2016), at 83 & 84 (quoting Richard L. Haver, in an interview by Craig J. Wiener (December 11, 2015)).
[4] See, e.g., Christopher Ford & David Rosenberg, The Admirals’ Advantage: U.S. Navy Operational Intelligence in World War II and the Cold War (U.S. Naval Institute Press, 2005), 77-108.
[5] Ibid. (emphasis added).