I’m pleased to let NPF readers know about the publication of a new edition of my book, The Admirals’ Advantage: U.S. Navy Operational Intelligence in World War II and the Cold War, by the U.S. Naval Institute Press. It now appears in paperback, with a new foreword by the U.S. Chief of Naval Operations, Admiral Jonathan Greenert, and I hope readers will find it interesting. (Since the original hardback edition is getting hard to find, and thus expensive, it’s doubly nice to have a new paperback available.)
The occasion of this new edition provides an opportunity to reflect a bit on some of the issues raised in the original edition. What I’d like to tackle here today has to do with the nature and effectiveness of the relationship between intelligence analysts and the consumers of the information they produce.
The first edition of The Admirals’ Advantage came out in 2005, based in part upon research undertaken as part of an oral history project conducted at the Office of Naval Intelligence (ONI) by my Navy Reserve unit under the supervision of Captain David Rosenberg, USNR. One of the major themes of the book was the importance to intelligence success of having the right kind of analyst/consumer relations. As it turned out, that was a good time to emphasize such questions.
In mid-2004, a few months before The Admirals’ Advantage came out, the U.S. Senate Select Committee on Intelligence (SSCI) published a report on the findings of its investigation into the problems with U.S. Intelligence Community (IC) analysis on Iraq before the U.S. invasion of 2003. Adopted unanimously by the SSCI’s Republicans and Democrats under Chairman Pat Roberts (R-Kansas) and Vice Chairman John D. Rockefeller IV (D-West Virginia), this SSCI Report is still worth a careful read nearly a decade later. I would like to offer some thoughts on the relationship of its findings to those in my own book.
I. Analyst/Operator Relations in Operational Intelligence
The Admirals’ Advantage provided an institutional history of the world of U.S. Navy “operational intelligence” (OPINTEL): the provision of information to warfighters about the location and activity of an enemy in the field. One of the most important elements it discussed in the history of naval OPINTEL vis-à-vis the Soviet Union was the centrality of the relationship between naval intelligence officers charged with tracking Soviet fleet activity and the U.S. naval commanders whose job it was to defeat these Soviet forces in time of war.
One of the book’s arguments, in fact, was that in the Cold War OPINTEL context, this relationship was greatly improved by the “embeddedness” of naval intelligence professionals in the U.S. Navy’s operational context and within its operational community. While all intelligence analysis prizes accuracy, OPINTEL hugely prizes timeliness of information. It is a critical part of what U.S. Air Force Colonel John Boyd famously termed the “OODA” loop – the cycle of operational behavior by which military commanders in the field “Observe” their environment, “Orient” themselves within it, “Decide” what to do, and then “Act” on an ongoing basis.
Being able to work quickly through this cycle is an important source of military advantage, since being capable of more rapid “OODA-cycling” than your opponent helps you shape his environment faster than he can figure out what to do in response. OPINTEL is a critical aspect of the “observe” and “orient” parts of the cycle, for it provides environmental sense impressions – “Where is the enemy right now, and what is he doing?” – that are critical to any commander’s ability to act effectively against an adversary.
Evaluating how U.S. Navy OPINTEL developed during the Cold War, The Admirals’ Advantage stressed the importance of intelligence analysts knowing “Blue” as well as they know “Red” – that is, of being intimately aware of friendly forces’ location, capabilities, and intentions, as well as those of the enemy – in order to understand, anticipate, and meet operators’ information needs. The book also emphasized the importance, in the intense, near-real-time world of OPINTEL collection and dissemination, of building relationships of mutual familiarity and deep trust between commanders and their analysts. Without such nearly reflexive understanding and trust, the rapid-fire world of OPINTEL could not function.
During the Cold War, OPINTEL was to a great extent a U.S. Navy specialty, inasmuch as the Army and Air Force were largely garrison forces: they would deploy in time of conflict, but in peacetime had relatively little interaction with the Soviet adversary. By contrast, the Navy spent a great deal of time cruising and maneuvering at sea with many of the very same Soviet naval assets with which they would exchange fire if war were to break out. With both sides constantly stalking each other at sea just on the brink of operational engagement, as it were, naval OPINTEL necessarily became very focused, developing great proficiency out of this ongoing white-knuckle peacetime practice.
The Navy OPINTEL story is partly about advancements in “hardware,” both in terms of sensors and in terms of communications, since of course technology did develop tremendously over the decades of Cold War competition. The Admirals’ Advantage argued, however, that our OPINTEL successes were as much as anything a “software” story – by which I mean one not of computer coding, but of organization and institutional behavior and corporate culture. Building on the experience of the British Admiralty in the early years of the Second World War against Nazi U-boats, the U.S. Navy gradually built an extraordinarily effective OPINTEL system for the collection, analysis, and rapid dissemination of information, and “intelligencers” and operators worked together to integrate rapidly-changing information into operational decision loops.
One key to success was the development of institutional and personal trust, and this was heightened by professional interpenetration. Analysts were stationed aboard ships for various parts of their careers, having the opportunity to “grow up,” professionally speaking, alongside operational commanders and in an operational context. At the same time, this long-term colocation permitted operators to become very accustomed to having intelligence professionals as integral parts of their team from a very early point.
(This may also have been easier in a Navy context than for the other services, inasmuch as the basic unit in the Navy’s operational environment is relatively large: a combat vessel with a variegated crew of specialists aboard, performing complementary functions. This was not a context in which individual squads of troops or formations of aircraft deployed after having been briefed by intelligence professionals who remained behind. In the Navy, intelligence analysts deployed with the elementary operational units in their ongoing maneuverings vis-à-vis the adversary. OPINTEL was thus entirely organic to the unit-level structure of the fleet.)
The OPINTEL relationship, therefore, was one that permeated from the highest levels of naval and military command all the way down to commanders at sea. This interpenetration of intelligence and operational institutions, not just across the tiers of command but also over time as career paths developed, provided a good environment for breeding institutional and personal trust and cooperation, and this relationship was very important to overall success in an OPINTEL context in which operational imperatives did not permit much opportunity for languid reflection.
II. Iraq and Analysis for High-Level Policymakers
The context addressed by the SSCI’s 2004 report on Iraq was a very different one – intelligence analysts’ provision of high-level national threat intelligence for senior policymakers – but the SSCI’s “what went wrong” assessment of IC analysis vis-à-vis Iraq also dwells on the importance of analyst/consumer relations. The Committee’s findings provide a fascinating and important counterpoint and complement to the points made, in a very different context, in The Admirals’ Advantage.
The most important thing to remember about the SSCI’s 2004 analysis of IC findings about prewar Iraq intelligence is that the Committee assessed two different (albeit related) issues: (1) analysis of Iraqi weapons of mass destruction (WMD) programs; and (2) analysis of Iraqi links to international terrorism. The Committee’s basic conclusions are easily characterized, insofar as the IC got its WMD analysis rather badly wrong but ended up getting its analysis of terrorist linkages generally right. (We know with hindsight that there was little of either, but the IC said that Saddam Hussein’s regime had scant connections to terrorism but had a lot of WMD.) The critical detail, however – and the essential point for our analysis here of analyst/consumer relationships – lies in the SSCI’s findings about how the IC’s prewar assessments went the way they did.
With regard to Iraqi WMD, the SSCI concluded that the key judgments of the IC’s infamous 2002 National Intelligence Estimate (NIE) on Iraq “either overstated, or were not supported by, the underlying intelligence reporting.” The IC, moreover, did not “adequately explain to policymakers the uncertainties behind the judgments,” and indeed sometimes “layered” conclusions of additional Iraqi WMD progress atop older judgments without carrying forward the underlying uncertainties in those earlier assessments.
IC analysts, furthermore, “suffered from a collective presumption that Iraq had an active and growing” WMD program, and this
“‘group think’ dynamic led Intelligence Community analysts, collectors, and managers to both interpret ambiguous evidence as conclusively indicative of a WMD program as well as ignore or minimize evidence that Iraq did not have active and expanding weapons of mass destruction programs.”
In fact, this “group think” dynamic was “so strong that formalized IC mechanisms established to challenge assumptions and group think were not utilized.” Intelligence Community managers “did not encourage analysts to challenge their assumptions, fully consider alternative arguments, accurately characterize the intelligence reporting, or counsel analysts who lost their objectivity.” (On the basis of intelligence analysis produced under these conditions, alas, we went to war in 2003.)
This was a damning assessment, from the perspective of analytical tradecraft. The important point here for analyst/consumer relations, however, comes by comparing these scathing WMD-related conclusions to the SSCI’s findings about Iraq-related terrorism intelligence.
At the time, allegations had been made by Democrats in Congress and in the press that administration officials had “pressured” intelligence officials, through “repeated questioning” of intelligence conclusions, to find links between the government of Saddam Hussein and international terrorists such as al-Qa’ida. In fact, however, the Committee’s unanimous findings pointed in exactly the opposite direction, though one wouldn’t know this from media coverage of the SSCI report – which, in perhaps another example of “group think,” was essentially ignored by the press, apparently because it did not fit with the dominant narrative of “politicization” then being circulated. But the SSCI’s conclusions suggest important lessons for the analyst/consumer relationship, inasmuch as this relationship seems to have been far better in the Iraq terrorism case – in the sense of being more effective in producing good intelligence – than the relationship between policymakers and intelligence officials in the WMD realm.
On Iraqi terrorism intelligence, the SSCI concluded that “repeated questioning” from policymakers about IC conclusions, far from representing insidious “politicization,” had helped analysts reach sounder conclusions. Intelligence analysts interviewed by the Committee reported that skeptical questions from policymakers about terrorism intelligence “had forced them to go back and review the intelligence reporting, and that during that exercise they came across information they had overlooked in initial readings.” This process, the SSCI said, “actually improved the Central Intelligence Agency’s (CIA’s) products,” because the analysts ended up producing “careful, measured assessments which did not overstate or mischaracterize the intelligence reporting upon which [they were] based.”
For the Committee, therefore, challenging engagement by policymakers with their intelligence briefers – a relationship in which IC claims were met not by uncritical acceptance but by probing skepticism that forced analysts to articulate and justify their reasoning and the degree to which it really was supported by the evidence – was essential. Through this prism, the analyst/policymaker relationship in the Iraqi terrorism context was a good one that improved the quality of IC assessments.
Here we see the perfect counterpoint to the SSCI’s own analysis of WMD-related intelligence. With terrorism, policymakers began by being skeptical of IC claims that there was no connection between the Saddam regime and international terrorists, and they pressed the analysts to justify their conclusions. When the analysts went back to double-check, they did find a few links after all, but they also learned that their basic conclusion (i.e., no significant Iraqi connection to terrorism) had been correct – and they were now better able to justify this conclusion in their responses to policymaker questions. The end result was better intelligence.
With WMD-related intelligence, however, the IC came in with (rather badly) flawed analysis, but since the analysts were telling a story that policymakers expected to hear – since this narrative of a swelling Iraqi WMD arsenal fit perfectly with what everyone had been assuming since the mid-1990s – no one engaged them critically on this information. Rather than initiating an iterated process of progressively strengthening analytical conclusions, therefore, this relationship produced uncritical policymaker acceptance of bad IC analysis. In a sense, both sides of the relationship fell down on the job: the analysts by producing shoddy work, and policymakers by accepting junk analysis at face value. As a result of this inadvertent conspiracy of failure, we were off to war.
III. A Continuum of Analyst/Consumer Relationship Needs
The 2004 SSCI Iraq report has gotten far too little attention in the years since, for it was an important document and has much to teach us still. It offers valuable lessons about the nature of the relationship between analysts and consumers that complement the OPINTEL lessons of The Admirals’ Advantage, even though on the surface their lessons might seem to point in opposite directions.
The secret to reconciling these two analyses is to remember that they address intelligence needs in very different contexts: (1) the near-real-time world of combat-facilitating informational OPINTEL support for warfighting commanders; and (2) the slower-paced and more reflective environment of high-level threat assessment for national policymakers. In both contexts, we see clearly that success is highly dependent upon the “right” sort of relationship between analysts and the consumers of their information. But how to square their divergent prescriptions?
The answer lies in the implications of the timing and information requirements of the decision-making context. The differing analytical lessons suggested by The Admirals’ Advantage and the SSCI’s Iraq report are each correct in the right context. OPINTEL occurs in a fast-paced, ideally “real-time” context that places a premium upon quick reactions. Although the intelligence analysis that goes into OPINTEL data can be very complex and sophisticated, moreover – involving the “fusion” of probabilistic data from a variety of collection sources – it produces relatively discrete items of information. Intelligence support for military operations includes a range of analytical products, from baseline assessments of enemy capabilities to evaluations of the adversary’s strategy and objectives, but the narrow slice of intelligence support that is OPINTEL is primarily about no more than where the other fellow is and what he’s doing right now.
OPINTEL thus forms one pole of a conceptual continuum of intelligence support, at the other end of which sits the high-level NIE process studied by the SSCI in its 2004 report on Iraq. The basic idea of a National Intelligence Estimate is to provide conclusions that are as authoritative and comprehensive as possible, on behalf of the entire Intelligence Community, on a particular issue. An NIE grows out of a complex bureaucratic process involving coordinated input from multiple agencies, conducted pursuant to detailed terms of reference, and eventually approved by the National Intelligence Council and the National Intelligence Board. If there is no consensus on any particular point addressed, elaborate procedures also exist for dissenting agencies to record disagreement. Mechanisms have also been developed for articulating and conveying analytical uncertainties, and there is even some provision for potential outside review by nongovernmental experts.
The preparation of NIEs can thus take a very long time, though this is hardly inappropriate given their status as the highest-level conclusions about the most important issues addressed by the Intelligence Community. At this end of the continuum – a world away from the moment-by-moment OPINTEL needs of a combatant commander “in the fight” – the premium is not upon speed and situational responsiveness, but rather upon comprehensiveness and high-quality reflection.
Indeed, one of the (many) complaints made about the IC’s infamous 2002 NIE on Iraqi WMD was precisely that it was produced too fast. Although the SSCI concluded that the fundamental flaws in analytical tradecraft that led to misunderstandings of Iraq’s WMD programs were not of a sort likely to have been prevented had more time been available, the SSCI nonetheless chastised the IC both for having waited so long to undertake an NIE on this important topic and for having taken so little time in preparing it. (The document was apparently produced in just a few weeks, and had only been undertaken in the first place under pressure from Congressional leaders.) At this end of the continuum, it is important to have time in which the appropriate degree of sober reflection and careful analysis can occur.
The importance of such reflection to analytical quality in work at this end of our conceptual spectrum is also why the SSCI placed such emphasis upon the benefits of challenging questioning from information consumers (i.e., national-level policymakers) in pressing analysts to explore the assumptions that underlie intelligence conclusions, the quality and extent of the factual reporting that exists, and the uncertainties involved. With authoritative overall judgment the objective, and with careful reflection the means by which it is to be achieved, the SSCI stressed that the kind of “repeated questioning” that occurred in the Iraqi terrorism case – but, fatefully, not in the Iraqi WMD case – is not just unobjectionable, but in fact essential. As the report declared, “[i]f policymakers did not respond to analysts’ caveated judgments with pointed, probing questions, and did not require them to produce the most complete assessments possible, they would not be doing their jobs.”
At this end of the continuum, therefore, the intelligence analytical process draws upon the philosopher Karl Popper’s notions of intersubjective appraisal as being the basis for knowledge. Such knowledge is, in a sense, socially constituted. Intelligence conclusions are theories about the world, and as such, they must be subjected to critical encounters before they can be regarded as truly sound. “Repeated questioning” and intellectual challenge by a skeptical audience is a vital part of ensuring the integrity of the conclusions reached. For such authoritative, high-level reflective assessments, one might thus say, the best relationship between intelligence analyst and intelligence consumer involves a degree of distrust. (Too much distrust, of course, can be poisonous, but conclusions must be greeted with enough skepticism to keep analysts on their feet, scrambling to make sure their work can stand up to scrutiny.)
By contrast, back at the near-real-time OPINTEL end of the continuum, circumstances call for something more akin to a neurological reflex arc. There, too much time spent on a Popperian form of intersubjective appraisal through iterated deliberation and debate would be maladaptive, for by the time a decision were reached about where the enemy is, he might be somewhere else. In this context, deep and all but reflexive trust is an essential part of the analyst/operator relationship: if the stream of incoming OPINTEL information is not basically trustworthy, on its face and without further study, it is useless.
This doesn’t mean that OPINTEL requires certainty about the enemy’s location and activity, of course, or that this is even possible. OPINTEL is commonly merely probabilistic, and uncertainties need to be clearly understood here too. But there is little time in which to explore the underlying tradecraft of one’s intelligence briefer. Such rapidity would presumably be impossible if the information needs on this end of the continuum were the same as those addressed by an NIE, but the narrower conceptual bandwidth of OPINTEL information about location and activity makes speed possible if there is a strong relationship of mutual understanding and trust between the intelligence analyst and the operational commander.
In both cases, the right kind of relationship between analyst and information consumer is necessary for a successful result, and a sound understanding of what that relationship requires under differing circumstances is essential. In the history of U.S. Navy OPINTEL during the Cold War, and apparently for prewar analysis of Iraqi terrorist links, we ended up getting the recipe right for the particular circumstances involved. With Iraqi WMD intelligence, however, it went woefully wrong – and not because of “politicization,” but because there wasn’t enough skeptical engagement between analyst and consumer.
We surely still have lessons to learn from the range of contextual needs suggested by these two studies. The meta-lesson, however – for analyst and consumer alike – is the need for a clear understanding of what one is trying to achieve, and what the context of decision-making requires. This is a kind of professional situational awareness that runs fairly deep, but it is one toward which both The Admirals’ Advantage and the SSCI’s Iraq report point us, if we are willing to pay attention.
-- Christopher Ford