International Conference on Naturalistic Decision Making 2022, Orlando, Florida

A Model of Reframing for Intelligence Analysis Teams

Jordan M. HAGGIT a, John M. FLACH a, Timothy R. MCEWEN a, Katherine E. WALKER a, Robert FOLKER b, and Michael W. SMITH a

a Mile Two LLC, Dayton, OH USA
b PatchPlus Consulting, Medford, NJ USA

ABSTRACT

In many defense organizations, intelligence analysis (IA) is shifting from individual platform-specific work to team-based problem-focused work. One impact of this shift on sensemaking is that analysts must consider more complex information and hypotheses. The Data/Frame model of sensemaking, combined with a Signal Detection Theory (SDT) approach to the processes of reframing and elaboration, provides a useful characterization of sensemaking for team-based intelligence analysis. We illustrate this through examples from knowledge elicitation interviews with intelligence analysts.

KEYWORDS

Sensemaking; Military; Data/Frame Theory; Signal Detection; Intelligence Analysis; Team Problem Solving.

INTRODUCTION

Intelligence analysis (IA) consists of generating hypotheses about meaningful events that occur in vast amounts of distributed data and synthesizing those events into analytic insights that inform decision making. The cognitive work required to provide these insights is continuously challenged by environmental complexity, including varying levels of workload, time pressure, uncertainty, and variable data access (Hutchins et al., 2007; Patterson et al., 2010). Recently, directives such as the Distributed Common Ground System (DCGS) Next Generation have produced new methods and models for envisioning analyst work, emphasizing a shift from stovepiped analyses by highly focused specialists solving limited-scope problems to holistic problem solving within teams of specialized analysts with a broader range of analytical skills (i.e., generalists) focused on complex problems.
For example, under the new work paradigm, rather than monitoring ships moving in and out of ports and passing that information along to another analyst, an analyst may instead be tasked with determining how Integrated Air Defense (IAD) systems are being transported into a country of interest. Within the context of the U.S. Air Force, it is envisioned that instead of Airmen being linked one-to-one with a specific Intelligence, Surveillance, and Reconnaissance (ISR) asset (e.g., Predator, Global Hawk), console operators in DCGS Next Generation will be part of Analysis and Exploitation Teams (AETs), in which each member is expected to integrate their specialized observations with Multi-INT data sources to think holistically and address problems creatively. This entails a shift from platform-centric ISR (i.e., analyst roles and teams assigned to specific data sensors) to a problem-centric approach (i.e., analyst roles and teams that are sensor agnostic and focused on the synthesis of heterogeneous data). The transition from narrow questions to broader problems is likely to increase the mental demands on analysts and has significant implications for understanding analyst performance as their skills are brought to bear in new forms of work. In this paper, we present a model of decision making, informed by knowledge elicitation (KE) sessions conducted with intelligence analysts, that elaborates on current theories of sensemaking and emphasizes work within the emerging AET construct.

Regardless of the analysis paradigm, one of the major performance concerns for intelligence analysts is prematurely locking onto a hypothesis of events (i.e., a mental model or frame) without considering appropriate alternatives. Zelik et al. (2010) refer to this as the risk of ‘shallow analysis’ and discuss how it reduces the likelihood that an analyst adequately addresses the intelligence problem in question.
To guard against shallow analysis, Heuer (1999) and others have suggested structured analytic techniques (SATs) that encourage analysts to delay speculating at the outset and instead log and track their hypotheses before arriving at a conclusion. SATs are consistent with the work of researchers who emphasize human limitations and biases (e.g., Kahneman, 2011). These efforts attempt to make analysts more “rational” and focus on constructs such as confirmation bias, the tendency to interpret new information in ways favorable to one's pre-existing ideas, which has been widely cited as an explanation for shallow analysis (e.g., Heuer, 1999). However, others contend these effects hold for only a small set of conditions and misrepresent human behavior (e.g., Klein, 2019). In many cases, when people are given different instructions or context about the task, the so-called