Chapter 8

The Time to Decide: How Awareness and Collaboration Affect the Command Decision Making

Douglas J. Peters, LeRoy A. Jackson, Jennifer K. Phillips, and Kami G. Ross
Ultimately, it is the command decision, and the resulting action, that affects the battle outcome. All the processes we have discussed to this point—collection of information, collaboration, and formation of situation awareness—contribute to the success of the battle only inasmuch as they enable effective battle decisions. Figure 8.1 depicts but a small part of the complex relations between actions, decisions, collaboration, situation awareness, and automation, as we observed them in the MDC2 program. Command decisions—both the command cell's decisions and the automated decisions—lead to battle actions.

These, in turn, alter the battlefield situation, bring additional information, often increase or decrease uncertainty, and engender or impede collaboration. Changes in the availability of information lead to a modified common operating picture, automated decisions produced by the system, and further actions. These changes also lead to changes in the awareness of the battle situation in the minds of the human decision makers. Collaboration impacts the human situation awareness both positively and negatively (as we have seen in the previous chapters), which in turn affects the quality and timeliness of decisions and actions.

Still, the complexity of these relations in itself does not indicate that decision making in such an environment is difficult, or at least does not inform us what makes it difficult. Yet, as the previous chapters have told us, the command-cell members often find it very challenging to arrive at even a remotely satisfactory decision. Why, then, is decision making so difficult in this environment?

After all, we provide the cell members with a powerful information gathering, integration, and presentation system. We give them convenient tools to
Figure 8.1. Commander Decision Environment—complex relations between actions, decisions, collaboration, situation awareness, and automation. See Appendix for explanation of abbreviations.
examine the available information and to explore its meaning in collaboration with other decision makers. The CSE system offers many automatically generated decisions, such as allocation and routing of resources for fire and intelligence collection tasks. The cell has established effective procedures for allocation and integration of decision-making tasks. Yet effective decision making continues to be a challenge, in spite of all these aids.

One highly visible culprit is the lack of usable information: incompleteness of battlespace information, doubts about the reliability of the available information, and uncertainty about the likelihood of a decision's consequences or about the utility of the respective alternatives. A theorist of military command argues that the lack of information and its uncertainty are the most important drivers of command: "The history of command can be understood in terms
of a race between the demand for information and the ability of command systems to meet it. The quintessential problem facing any command system is dealing with uncertainty" (van Creveld 1985).

Another major source of challenges involves the limits on the rationality of human decision makers (Simon 1991). Such limitations are diverse: constraints on the amount and complexity of the information that a human can process or acquire in a given time period, and multiple known biases in decision making. In particular, time pressure is a well-recognized source of errors in human decision making—as the number of decision tasks per unit time grows, the average quality of decisions deteriorates (Louvet, Casey, and Levis 1988). In network-enabled warfare, when a small command cell is subjected to a flood of information, much of which requires some decisions, the time pressure can be a major threat to the quality of decision making (Kott 2007). Galbraith, for example, argued that the ability of a decision-making organization to produce successful performance is largely a function of avoiding information-processing overload (Galbraith 1974).

Human decision-making biases are surprisingly powerful and resistant to mitigation. Many experiments demonstrate that real human decision making exhibits consistent and pervasive deviations (often termed paradoxes) from the expected utility theory, which for decades was accepted as a normative model of rational decision making. For example, humans tend to prefer those outcomes that have greater certainty, even if their expected utility is lower than that of alternative outcomes. For this reason, it is widely believed that bounded rationality is a more accurate characterization of human decision making than is the rationality described by expected utility theory (Tversky and Kahneman 1974; Kahneman and Tversky 1979). The anchoring and adjustment biases, for example, can be very influential when decision makers, particularly highly experienced ones, follow the decisions made in similar situations in the past (naturalistic decision making [Klein 1999]). Although such biases can be valuable as cognitive shortcuts, especially under time pressure, they also are dangerous sources of potential vulnerabilities. For example, deception techniques are often based on the tendency of human decision makers to look for familiar patterns, to interpret the available information in light of their past experiences. Deceivers also benefit from confirmation bias, the tendency to discount evidence that contradicts an accepted hypothesis (Bell and Whaley 1991).

With a system like CSE, one might expect that biases are at least partially alleviated by computational aids. Decision-support agents like the Attack Guidance Matrix that we discussed earlier can greatly improve the speed and accuracy of decision making, especially when the information volume is large and time pressure is high. But they also add complexity to the system, leading to new and often more drastic types of errors, especially when interacting with humans (Perrow 1999).
Additional challenges of decision making stem from other factors, such as social forces within an organization, which go beyond the purely information-processing perspectives. For example, groupthink—the tendency of decision makers within a cohesive group to pressure each other toward unanimity and against voicing dissenting opinions (Janis 1982)—can produce catastrophic failures of decision making. Indeed, our observations of failures of command cells' collaboration point to possible groupthink tendencies, particularly in view of the fact that information overload encourages groupthink (Janis 1982, 196).
Which of these factors, if any, impact the decision making in network-enabled warfare, and to what extent? How much can a system like the CSE alleviate or perhaps aggravate such challenges to human decision making? As a key part of the MDC2 program, we sought to evaluate the ability of the command-cell members—commanders and staff—to make effective decisions in the information-rich environment of network-enabled warfare. Understanding the decision-making process of the commanders, the use of automated decision aids, and the presentation of critical information for decisions were crucial to this evaluation.

We begin this chapter by exploring how we collected information to support our decision-making analysis throughout the experimental program. This section chronicles not only the progression of the approaches we took, but also what we learned from the methods themselves and how they were adapted to yield a richer set of data. We then proceed to discuss some of the lessons learned in the analysis of the data and their potential relevance to the development of future command tools.

COLLECTING THE DATA ABOUT DECISION MAKING
A key effort in the MDC2 experiments was to devise mechanisms for capturing the data about decision making. We found it remarkably challenging to obtain the data that would give us the desired insights. In a trial-and-error fashion, we proceeded through a number of approaches.

To begin with, we built automated loggers that captured an enormous quantity of data for each experimental run. For example, CSE automated decisions and contextual decision-making information (such as the SAt curves) were available directly from the data loggers. However, the raw data from these loggers were of limited direct use in evaluating human decision making because they could not quantify the commander's cognitive processes and his understanding of the situation. At best, they were helpful to support findings and to understand what was happening during critical decisions.

In addition to automated data logging, five other mechanisms were used to collect decision-related information: analytic observers, focus groups, operator interviews, surveys, and battle summary sessions. These were developed and refined throughout the experimental campaign, especially over the last five experiments in the campaign, beginning with Experiment 4a.

In Experiment 4a, we employed several traditional tools of the analyst's toolbox. During the experiments, analytic observers recorded significant events
and characterized the effectiveness of the battle-command environment with respect to the commander's ability to execute his mission. Within our cadre of observers, one person was dedicated to recording and classifying every decision that the commander verbalized. Each decision was identified as relating to seeing (for example, repositioning sensors or classifying imagery), striking (for example, when and where to place fires), or moving (for example, how to array the forces for movement).

Additionally, each decision was characterized according to the associated complexity. We classified decisions that were prompted by a clear trigger and appeared to be made according to a small set of understandable rules as automatable decisions. Examples of automatable decisions were "Fire at that tank" and "Let's get BDA (Battle Damage Assessment) on that engagement." Those decisions that were based on a well-understood and limited set of variables, but required a degree of human judgment not reducible to well-understood rules, were classified as adjustment decisions. An example of an adjustment decision was to determine when the necessary conditions are satisfied to begin operations.

Finally, decisions that required a broad, holistic understanding of the situation, encompassing a wide range of variables, and that fundamentally changed (or confirmed) the entire operation's strategy were characterized as complex decisions. An example of a complex decision from Experiment 4a: the commander identified a deficiency in his plan and saw the need to develop contingency plans: "If the enemy gets into Granite Pass, it is going to be very difficult for us to get through him. We need to look at some other maneuver options."
Collecting this information allowed us to characterize the decision making in a number of ways. Analyzing the types of decisions made by the commanders, we identified the battle functions on which the commander focused most of his attention. Likewise, decision complexity characterizations helped us better understand whether the commander was making decisions that could be automated with a tool or making frequent complex decisions. Together, these two characterizations enabled us to identify specific areas of the CSE that could better be tailored to support the decision maker's needs. Figure 8.2 shows a partial analysis of decisions by type from Experiment 4a.

Surveys, on the other hand, proved much less useful. After each experimental run, we asked each commander to complete a survey—his assessment of how well the run went and what challenged him during the run. These surveys, while containing occasional nuggets of interesting information, were largely ineffective because the questions were not specific to the events of a given run, and because, being the last event of a long day, surveys did not elicit sufficiently detailed responses from the fatigued commander and his staff.

Overall, although the decision characterization was useful to help improve the functions of the CSE, it did not tell us much about the effectiveness of the decisions, or about the specific information and conditions supporting effective decisions. Therefore, in Experiment 4b (a repeat of the Experiment 4a
All articulated choices were recorded as decisions; 173 decisions were observed over 8 record runs. Of 32 Automatable-See decisions:

• 13 involve sensor allocation and positioning
• 5 involve changes to the active sensor mode
• 11 involve cross-cueing different sensors
• 3 involve micro-UAV use to enhance BDA

Decision Types

• Automatable—all variables known or can be calculated, something a computer can do (25%).
• Adjustment—mostly known variables within the plan context, requires human judgment (70%).
• Complex—requires definition of options, criteria, and decision process (5%).

Decision Focus and Content

• Move—the movement of organic assets (25%).
• See—the development of the intel picture (47%).
• Strike—the application of effects (28%).

Figure 8.2. Experiment 4a summary of decisions by type and complexity.
with a different, less-experienced team of operators), we added a qualitative assessment of decisions. We conducted this assessment in post-experiment analysis sessions with the help of military subject matter experts who watched a replay of the battle and events leading to the decision in question. The following criteria, derived from the Network Centric Operations Conceptual Framework (Evidence Based Research 2003), were used to evaluate the quality of a decision:
• Appropriateness: consistency of the decision with situation awareness (the situation as it was known to the decision maker at the moment) and with mission objectives.
• Correctness: consistency of the decision with ground truth (i.e., with the actual situation).
• Timeliness: whether the decision is made within the window of opportunity.
• Completeness: the extent to which all the necessary factors are considered in making a decision.
• Relevance: the extent to which the decision is directly related to the task at hand.
• Confidence: the extent to which the decision maker is confident in a decision made.
• Outcome consistency: the extent to which the outcome of the decision is consistent with the intended outcome.
• Criticality: the extent to which the decision made is critical to mission success.
Although this approach provided us with extensive data on the quality of the decisions (e.g., see Figure 8.3), it also proved to be of limited use. Without determining the context and the reasons for a decision, and the information that led to the decision, we could not pinpoint ways for the CSE to improve the decision-making environment.

In addition to the study of decision quality, we introduced another data-collection approach that showed its initial promise in Experiment 4b but then became a core analytic tool for later experiments. Process tracing (Shattuck and Miller 2004) examines a single episode of an experimental run in detail. This methodology connects collaboration to changes in SA, and SA to decision making, with a focus on the operators and their use of the CSE. Process tracing externalizes internal processes (Woods 1993) and tries to explain the genesis of a decision by mapping out how an episode unfolded, including information elements available to the operators, what information was noted by operators, and operators' interpretations of the information in immediate and larger contexts.

In Experiment 4b, we completed process tracing for a single event, and although we were unable to draw any significant insights from one event, the methodology showed promise for understanding both the context of a decision and the challenges that faced the decision maker at the time of the decision.
Figure 8.3. Expert assessments considered the correctness, timeliness, relevance, and other characteristics of decisions. (Note: Only losses of key UAV assets and manned systems are highlighted with names.)
With the introduction of a manned dismounted platoon for Experiment 5, the complexity of the decision-making environment increased significantly. Now, instead of communicating his thoughts and decisions to staff members located in the same vehicle, the commander had to convey his intent and orders, with sufficient clarity and detail, to subordinate commanders reachable via the radio and shared displays. Our analysts also examined information requirements for warfighters conducting dismounted operations.

The process-tracing techniques were well suited for this complex environment, and we focused on identifying key decisions during each run and analyzing those decisions in detail. The detailed process tracing combined video and audio playback of events leading to a decision, audio logs of the communications, query results from the automated loggers, the SAt curve, observer notes, and interview records. All these components together supported a very detailed study of short-duration events.

To facilitate these process tracings, we compiled critical information into a single source. By plotting different types of information across a common time axis, we were able to show what was happening at various time points during the battle. Because these charts were developed by stacking multiple variables against a common time axis, we referred to these composite views as stacked charts. An example is shown in Figure 5.10. This particular stacked chart was developed to help us simultaneously view decision making, collaboration, information availability, and battle tempo data. The relations between these elements helped us understand what events shaped a key decision.

Of particular value in this methodology is a technique for extracting critical information through interviews. Given our earlier lack of success with end-of-run surveys, we were eager to try a technique that would allow us to identify details of critical decisions. The critical decision method of interviewing (Klein, Calderwood, and MacGregor 1989) uses a two-person team to identify a single decision made during a run and explore it in detail. There are four steps to this interviewing technique:

Step 1 is incident identification. The interviewer presents a situation or a critical event and asks the decision maker to talk about the event from his perspective with a
particular focus on the role he played during the event. The interviewer does not interrupt the interviewee with clarifying questions (these come later in Step 3), and the interview team takes careful notes regarding the actions and decisions made during the event.

Step 2 establishes the timeline of the event. The interviewer repeats the story back to the interviewee with special emphasis on the timing of events and decisions. Through this process, the interviewer becomes familiar with the subcomponents and timing of events, and how they impacted the outcomes and decisions made. Special attention is paid to decision points, shifts in situation awareness, gaps, and anomalies.

Step 3, deepening, tries to uncover the story behind the story. Here most of the detailed information becomes apparent—why things were done as they were, why decisions were or were not made, and what information and experiential components contributed the most. This stage uses the event timeline and explores it in detail. Anomalies or gaps in the story are investigated during this phase.

Step 4 focuses on the what-if queries. The purpose of this step is to consider what conditions may have made a critical difference in how the situations unfolded and in the decisions that were made. It also asks the question of what a less-experienced person may have done in the same situation, to further draw out the subtle factors that enable the interviewee to make effective decisions.
Because both the process traces and the interviews proved to be effective in Experiment 5, Experiments 6 and 7 built on these analytic tools and introduced two additional tools.
The first additional tool—a detailed timeline of a run—became necessary due to the increased complexity and duration of runs. Although we had extensive and detailed records of what happened during each run (including video and audio recordings), the task of producing a unified, concise description of what happened during a run was difficult after the experiment was complete. Therefore, after each experimental run, a group of analysts who had closely observed the various echelons and cells (friendly and enemy) wrote a short but complete synopsis of the run. In the synopsis they were able to capture concisely the flow of the battle and detail the most significant events of the battle from both the Blue and Red perspectives.

The second tool we introduced in the later experiments was focus groups. Organized for each command cell, a focus group session was relatively short (less than one hour) and was facilitated by a member of the core analysis team who observed that cell during planning and execution. The facilitator began the focus group session with candidate decisions of interest identified by the analysis team during or immediately after the run. A recorder took notes. After the focus group session, the facilitator or recorder briefed the entire analytic observer team on key findings.
At the focus group sessions, we tried to understand the battle in general, and the key events specifically, from the perspective of the operators. Facilitators used the following questions to guide the focus group and to ensure that all members participated in the session.
• Ask the operators to summarize the battle from their perspective. Brief back the key elements of the battle summary. Use the operators' words to the maximum extent possible. Introduce the decisions of interest, placing them in the context of the battle summary.
• Ask the operators to describe the events that led to a specific decision. Listen for decision points, collaborations, shifts in situation awareness, gaps in the story, gaps in the timeline, conceptual leaps, anomalies or violations of expectations, errors, ambiguous cues, individual differences, and who played the key roles. Ask clarifying questions and then brief back the incident timeline.
• Ask those operators who played key roles questions about situation assessment and cues. Listen for critical decisions, cues and their implications, ambiguous cues, strategies, anomalies, and violations of expected behavior with respect to the commander's intent.
• Ask operators to describe CSE-related issues. Ask probing questions as necessary.
What worked well? What features helped their situation awareness? What features
did you use to collaborate? How did you use automated decision-support functions? What did you not use and why? What would you automate?
• Ask operators to describe procedure-related issues. Ask probing questions as necessary. What responsibilities were assigned to each operator? What tools were associated with the assigned responsibilities? When were the operators overwhelmed with the workload? How did the commander adjust staff roles during the mission? What new procedures did you implement and why? What did you struggle with? Why did you use a certain procedure?
The combination of focus group and operator interviews, along with the other quantitative data logs, gave us an ability to reconstruct the battle, to examine how decisions were made, and to identify issues that may affect battle command in the future force. The following sections describe some of the resulting conclusions.

THE HEAVY PRICE OF INFORMATION

Because the notional future force represented in our experimental program was heavily armed but lightly armored, availability of information was exceptionally critical to mission success. The cost of stumbling upon an undetected enemy asset was inevitably the loss of a critical piece of equipment. However, if the commander could find the enemy, he could use his precision weapons to engage the enemy at great distance. In order to find the enemy at long range, the force was equipped with a rich set of sensor platforms. The sheer number of sensors, along with the well-understood importance of information about enemy assets, led the commander to focus more attention on information needs than is common for commanders of today's forces. This additional emphasis on information was not only the result of the increased importance of information but also due to the increased availability of information.

Because even a single enemy entity could have a major impact on this lightly armored force, our commanders focused much of their attention on intelligence gathering regarding individual enemy platforms, in addition to the more conventional tasks of aggregating information on enemy formations and possible enemy courses of action. Our commanders needed to know where individual enemy entities were and, just as importantly, where they were not. In addition, they paid attention to the classification of detected entities and the condition of targets after they were engaged (BDA). In fact, the commander's strong focus on "seeing the enemy" at the expense of other functions became obvious when we analyzed the content of his decisions. For example, in Experiment 4a, almost half of the decisions verbalized by the command-cell members were characterized as see decisions (the other
common types, move and strike decisions, accounted for about 25% each; see Figure 8.2).
Still, the commanders in our experiments tended to delegate the entity-based information-gathering responsibility to the intelligence manager. This
helped devolve a substantial cognitive load from the commander and also
served to unify control of the sensor assets. On the other hand, this delegation deprived the cell of the critical big picture of the enemy, since the intelligence manager was focused on finding and characterizing individual battlespace entities instead of developing an aggregated understanding of the enemy.
In Experiment 6, one of the commanders recognized this deficiency and saw that his intelligence manager was overloaded with tasks, while the effects manager was being underutilized (since many of the engagement tasks were automated or assisted by the CSE). The commander made the effects manager responsible for coordinating with the intelligence manager to obtain images for BDA and to conduct BDA assessments. The advantage of placing this responsibility with the effects manager was obvious—not only did it alleviate the cognitive load placed on the intelligence manager, but it also enabled a rapid reengagement of assets that were not destroyed by the original engagement. In general, the flexibility of the CSE facilitated opportunities for creative and unconventional allocation (and dynamic reallocation during the battle) of responsibilities between members of the command cell.

BDA proved to be particularly critical and demanding throughout the experimental program, and commanders struggled with obtaining conclusive assessments from their available images. More often than not, BDA images (produced with a realistic imagery simulator) did not provide enough information to make definitive conclusions about the results of an engagement. Thus, about 90 percent of BDA images from Experiment 4a were inconclusive (Figure 6.5 of Chapter 6). This ultimately led to frequent reengagements of targets in order to ensure they were destroyed. In Experiment 4a, 44 percent of targets were reengaged, and in Experiment 4b, 54 percent were reengaged.

The need to understand the state of enemy entities through effective BDA was clearly demonstrated in Experiment 4a, Run 6, where a single enemy armored personnel carrier destroyed enough of the Blue force to render the unit combat ineffective. This particular enemy entity had been engaged early in the battle and suffered a mobility kill. However, the intelligence manager classified the asset as dead based on a BDA picture. This mistake was not found until it was too late. The Blue force was unable to continue its mission.

Undoubtedly, tomorrow's commanders will greatly benefit from the rich information available to them. At the same time, they will be heavily taxed with the need to process the vast information delivered through networked sensors—both initial intelligence and BDA. Commanders should expect to spend more time, perhaps over half of their time, on "seeing" the enemy. Part of the solution is to equip them with appropriate information-processing tools. In addition, the staff responsibilities should be continually reevaluated and reallocated to ensure that all critical duties are well covered.

ADDICTION TO INFORMATION
Information can be addictive. We often observed situations when commanders delayed important decisions in order to pursue an actual or perceived possibility of acquiring additional information. The cost of the additional information is time, and lost time is a heavy price to pay, especially for the future force that relies on agility.

As with today's commanders, uncertainty is present in all decisions, and decisions are often influenced by aversion to risk in the presence of uncertainty. Unlike today's commanders, however, our commanders had the tools readily available to them to further develop their information picture. They could reduce their uncertainty by maneuvering sensor platforms into position to better cover a critical area. This availability of easy access to additional information was a double-edged sword, because it often slowed the Blue force significantly. Commanders commonly sacrificed the speed advantage of their lightly armored force in order to satisfy their perceived need for information. These delays enabled the enemy to react to an assault and move to positions of advantage.
An example of this occurred in Experiment 4a, Run 8, where the commander incorrectly assessed that the enemy had a significant force along the planned axis of advance. Even after covering this area several times with sensors and not finding many enemy assets, the commander ordered ". . . need to slow down a bit in the north . . . don't want you wandering in there." At this time in the battle, the average velocity of moving Blue platforms dropped from 20 km/h to 5 km/h. The commander exposed his force to enemy artillery for the sake of obtaining even more detailed coverage of the area.

On the other hand, commanders also frequently made the opposite mistake when they rushed into an enemy ambush without adequate reconnaissance. An example of this occurred in Run 8 of Experiment 6, where several critical sensor assets were lost early in the run, and the CAU-1 commander quickly outran the coverage of his remaining sensors. In cases like this, the commander was lulled by the lack of enemy detections on his CSE screen and advanced without adequate information—perhaps perceiving the lack of detections as sufficient information to begin actions on the objective. This event is discussed in detail in the following section.

Today's commanders are often taught that the effectiveness of a decision is directly related to the timeliness of the decision. However, while timeliness will remain critical, tomorrow's commanders will need to pay more attention to the complex trade-offs between additional information and decision timeliness. Effective synchronization of information gathering with force maneuver is a formidable challenge in information-rich (and therefore potentially information-addictive) warfare. Both specialized training and new tools are
206 Battle of Cognition
[Figure 8.4 appears here: SAt curve with time points of losses overlaid.]
Figure 8.4. SAt curve for Experiment 6, Run 8. Note: Only losses of key UAV assets and manned systems are highlighted with names. See Appendix for explanation of abbreviations.
required to prevent the failures that commanders experienced so often in our experiments.

THE DARK SIDE OF COLLABORATION
Effective decision making can also be delayed and even derailed by collaboration. In certain cases, we observed a commander's understanding of the current Blue or Red disposition degrade as a result of collaborations with subordinates, peers, or higher headquarters commanders. Unlike in chapter 7, where we discuss cases of ineffective collaboration, here the collaboration itself went well. However, the effects of the collaboration on a commander's decisions were highly detrimental.
Run 8 of Experiment 6 provides an interesting example of how collaboration can lull a decision maker into complacency by validating incorrect conclusions. In this run, the CAU-1 commander's force was destroyed by a strong enemy counterattack. Figure 8.4 shows the SAt curve for Run 8 with an overlay of time points when Blue entities were destroyed. At 32 minutes into this run (vertical dashed line), the CAT commander assessed that the enemy was defending heavy forward (i.e., mainly in the CAU-2 sector). Several minutes later, the CAU-2 commander seemed to confirm that assessment with his report "I suspect [the enemy's] intent is to defend heavy forward [in CAU-2 sector]." This assessment was derived from several detections made very early in the run. The figure shows that little new information about the enemy is acquired before the CAU-1 commander announces that "I'm not seeing any counterattacking forces moving towards us [i.e., CAU-1]. I think the majority of the enemy force is in [CAU-2's] sector" at 52 minutes into the run.
This would be a reasonable conclusion if he were using his sensors to develop the picture of the enemy, but in fact CAU-1 had focused his sensors on his flank and did not have any sensor coverage in the area where he was moving his troops. Soon thereafter, CAU-1 stumbled into a major Red counterattack force and was combat ineffective within minutes.
So, the obvious question is, why did the CAU-1 commander not make more effective use of his sensors? Certainly, one important factor was a tactical blunder early in the run that led to the destruction of several key sensor assets, leaving him with fewer sensors to conduct his mission. With this reduced set of sensors, the commander had to protect his flank, scout forward to the objective, and conduct necessary BDA. At 44 minutes into the fight, the commander tasked his staff member to reposition the sensors to scout the objective but was distracted by the collaboration with a staff member who declared that he had found several enemy assets far to the west. Because of this collaboration, the commander neglected his intended mission of covering the area ahead of his force and began focusing attention far to the western flank of the advancing force. Yet, less than 10 minutes later, and with no new information about the objective, the commander was secure enough in his assessment that he began his offensive and was met with a major enemy counterattack force that decimated his unit.
There were several reasons for this poor decision to begin operations without conducting proper reconnaissance. The collaborative assessment of the situation with the CAU-2 commander and with the CAT commander led the CAU-1 commander to expect few enemy forces in his zone. Later, the commander's collaboration with a staff member confirmed his erroneous understanding that the enemy force was far from his zone.
Though this was a rather extreme example of a collaboration negatively affecting decision making, there were many other examples throughout the experiments that showed collaborations either distracting the commander from making critical decisions or lulling him into accepting an incorrect understanding of the battlespace. In fact, of seven collaboration process traces chosen for detailed analysis in Experiment 6, only three cases of collaboration yielded improved cognitive situation awareness for the operators. In the remaining four cases, collaboration dangerously distracted the decision maker from his primary focus or reinforced an incorrect understanding of the current Red or Blue disposition.
Consider that commanders in our experiments were equipped with a substantial collection of collaboration tools: instant messaging, multiple radio frequencies, shared displays, graphics overlays, and a shared whiteboard. Although the commanders took full advantage of these tools and found them clearly beneficial, there was also a significant cost to collaboration. To minimize such costs, future command cells will need effective protocols, and corresponding discipline, for collaborating: how often and under what circumstances collaboration occurs, with what tools, and in what manner.

AUTOMATION OF DECISIONS
Commanders and staffs used automated decisions extensively and could use them even more. However, the nature of these automated decisions requires
an explanation. In effect, the CSE allowed the commander to formulate his decisions before a battle and enter them into the system. Then, during the operations, a set of predefined conditions would trigger the decisions. Thus, the decisions were actually made by the commander and staff. It was only the invocation and execution of these decisions that was often performed automatically when the proper conditions were met.
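This predefine-then-trigger pattern can be illustrated with a short sketch. The sketch below is hypothetical, not the actual CSE implementation: the names (FireRule, Track, evaluate) and the thresholds are invented for the example, and only the criteria discussed in this chapter (confidence level, target type, location uncertainty, acquisition quality, and the semiautomatic versus automatic modes) are modeled.

```python
# Hypothetical sketch of a conjunctive fire-trigger rule, loosely modeled on
# the automated-fires behavior described in the text. All names and
# thresholds are illustrative, not the actual CSE implementation.
from dataclasses import dataclass

@dataclass
class Track:
    target_type: str            # e.g., "tracked", "wheeled"
    confidence: float           # classification confidence, 0..1
    location_error_m: float     # uncertainty of the target's location, meters
    acquisition_quality: float  # sensor acquisition quality, 0..1

@dataclass
class FireRule:
    target_type: str
    min_confidence: float
    max_location_error_m: float
    min_acquisition_quality: float
    mode: str  # "semiautomatic" (human approves) or "automatic"

    def matches(self, track: Track) -> bool:
        # Every condition must hold; a single unmet criterion blocks the fire.
        return (track.target_type == self.target_type
                and track.confidence >= self.min_confidence
                and track.location_error_m <= self.max_location_error_m
                and track.acquisition_quality >= self.min_acquisition_quality)

def evaluate(rule: FireRule, track: Track, human_approves=None) -> str:
    if not rule.matches(track):
        return "no fire"  # predefined conditions not fully met
    if rule.mode == "automatic":
        return "fire"     # no human in the decision loop (rarely allowed)
    # Semiautomatic: recommend, then wait for a command-cell member's approval.
    return "fire" if human_approves else "recommend"

rule = FireRule("tracked", 0.8, 100.0, 0.7, mode="semiautomatic")
good = Track("tracked", 0.9, 50.0, 0.8)
weak = Track("tracked", 0.9, 250.0, 0.8)  # location too uncertain

print(evaluate(rule, good, human_approves=True))  # fire
print(evaluate(rule, weak, human_approves=True))  # no fire
```

The property worth noticing is that the check is conjunctive: all criteria must be satisfied at once, so one unmet condition silently blocks the engagement even when everything else looks right to an operator.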
One type of such automatically triggered decision was the automated fires. The conditions for invoking a fire mission included staff-defined criteria for confidence level, type of target, the uncertainty of its location, and target-acquisition quality. Recall that in chapter 3 we discussed the Attack Guidance Matrix (AGM), an intelligent agent within the CSE that identified enemy targets and calculated the most suitable ways to attack them with Blue fire assets. It could also execute fires; for example, it could issue a command to an automated unmanned mortar to fire at a particular target, automatically or semiautomatically, as instructed by the human staff member. Typically, a commander or an effects manager would specify the semiautomatic option: the AGM recommended the fire to them and would execute it only when a command-cell member approved the recommendation. Occasionally, in extreme situations, they would allow fully automated fires, without a human in the decision loop.
Another similar type of automated decision making was an intelligent agent for automated BDA management. This agent used the commander-established rules to determine which sensor asset was the most appropriate
to conduct BDA and would automatically task that asset to perform the BDA assignment. For example, it would automatically command a UAV to collect information about the status of a recently attacked target. Such decisions were made based on the specified criteria regarding the available sensor platforms, areas of responsibility, and enemy assets to be avoided.
In each experiment, we found that command-cell members used the automated fires feature effectively and frequently. Commanders and effects managers spent ample time prior to the beginning of battle defining the conditions for automated fires. During the runs, these settings were rarely changed, and almost every run had instances of automated engagements of enemy assets. However, there were also many manual engagements that could have been automated but weren't. Instead, a cell member would manually identify a Red
target, select a Blue fire asset and suitable munitions, and then issue a command to fire; overall, a much more laborious and slower operation than a semiautomated fire. One reason for preferring such manual fires was that it often took too long to accumulate enough intelligence on an enemy target to meet the preestablished criteria for an automated or semiautomated fire decision; the criteria had to be fairly general and were therefore too stringent. For
example, since in our experimental scenarios there were relatively few civilian tracked vehicles in the battlespace (a bulldozer being an obvious exception),
the effects manager would often engage any vehicle classified as tracked even before there was a clear indication that it was an enemy asset. At the same
time, he was hesitant to allow automatic fires on all tracked targets. In such cases, a manual engagement was intentional, but in other cases, the staff wondered aloud why an enemy vehicle was not being engaged. To the
effects manager's eye, the specified conditions were apparently met, and the AGM should have initiated a fire event when in fact the situation had not met
the full set of the prespecified trigger conditions. The staff's puzzlement over
why an automated fire was not happening had an adverse effect. Because the CSE was not performing as expected by the effects manager, his confidence in the capability of the tool diminished. Unable to understand why the AGM refused to fire, the effects manager tended to apply simple and very specific rules so that only the most critical targets were automatically engaged.
The automated BDA tool suffered from this lack of understanding, which led to a lack of trust, much more so than with the AGM. One would think
that the seemingly less critical and nonlethal nature of BDA would lead to more ready acceptance by the operators. After all, the automated BDA tool was developed at the request of commanders in an early experiment where they routinely tasked a UAV to take a picture of engaged Red assets. The commanders felt that if this task was automated, not only would it lighten the load of the staff, but it would also ensure that the task was conducted in a timely fashion. This seemed like an obvious task for which to develop effective rules, and the CSE developers set to work automating these seemingly obvious BDA tasks. The solution worked exactly as expected by the tool designers and by the command staff who originally requested the automation. Unfortunately, the new command-cell members participating in the next experiment had rather different expectations. Early in the experiment, they used the automated BDA tool and became utterly confused. The information manager controlling the UAVs would wonder aloud, "Who is moving my UAV?" and "Where is that thing going now?"
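The BDA agent's sensor-selection step might look something like the following sketch. All names and the nearest-available selection criterion are hypothetical, not the actual CSE logic; the point the sketch illustrates is that the agent retasks an asset without informing the asset's operator, which is precisely the behavior that produced the "Who is moving my UAV?" confusion.

```python
# Hypothetical sketch of automated BDA sensor tasking. Names, the selection
# criterion, and the coordinate scheme are invented for illustration.
from dataclasses import dataclass

@dataclass
class Sensor:
    name: str
    position: tuple   # (x, y) in km
    available: bool
    operator: str     # the human nominally controlling this asset

def select_bda_sensor(sensors, target_pos, no_go_zones=()):
    """Pick the nearest available sensor, skipping positions to be avoided."""
    def distance(s):
        return ((s.position[0] - target_pos[0]) ** 2 +
                (s.position[1] - target_pos[1]) ** 2) ** 0.5
    candidates = [s for s in sensors
                  if s.available and s.position not in no_go_zones]
    return min(candidates, key=distance, default=None)

def task_bda(sensors, target_pos):
    sensor = select_bda_sensor(sensors, target_pos)
    if sensor is None:
        return None
    # The agent silently redirects the asset; its operator simply sees the
    # UAV change course with no explanation of who ordered the move.
    return f"{sensor.name} tasked to assess target at {target_pos}"

uavs = [Sensor("UAV-1", (10, 10), True, "info manager"),
        Sensor("UAV-2", (40, 5), True, "info manager")]
print(task_bda(uavs, (12, 11)))  # UAV-1 tasked to assess target at (12, 11)
```

A design that surfaced the retasking to the operator (for example, an alert naming the agent and the BDA target) would address the loss-of-control perception the chapter describes, at the cost of more interruptions.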
What was originally designed to lighten the load of the command cell quickly turned into a perceived loss of control over critical assets. The automated BDA tool became available in Experiment 4b, and in each subsequent experiment, commanders and their staffs began by using the functionality but then quickly abandoned it because of the perceived loss of control.
So, what decisions can and should be automated? Why was the automated
fires capability well received while the automated BDA was not? Based on our experience, we believe the difference comes down to the following considerations.
First, the commanders and staff must trust the system. Not only must the
system be reliable enough to work as expected every time, but it must also
be simple enough for the operators to understand when it will act and when
it won't. In particular, there must be a very clear and easily understandable distinction between the computer control and human control.
For example, in the case of the automated fires, it was very clear whether the human or the computer was to make the final decision, and once a munition was launched, there was no opportunity for, or confusion about, the control. However, in the case of the BDA management, there was continuous uncertainty about who was in control of a given platform, a human or a computer, and the information manager had no means to collaborate with the system to answer his questions about control.
Second, it should be easy for the operator to enter rules that govern an automated decision-making tool. For example, it may initially seem obvious to the developers of an automated tool to call for fires on detected enemy tanks as soon as possible. However, when low on ammunition, a commander might want to fire only at those tanks that are able to affect his axis of advance. Likewise, he may not want to automatically engage tanks near populated areas or if a civilian vehicle was spotted nearby. The more rules and tweaks, the harder it is to understand the decisions made by the tool, and the sooner an operator will build distrust when the tool does not perform as he expects.
Naturally, other nontechnological factors also affect the extent to which automated decisions will be available to a future force. Perhaps our commanders accepted the automated fires so easily because the experiments were merely a simulation: the consequence of a wrong automated decision was the destruction of computer bytes and not of real people. In today's practice, a human is personally accountable for every fire decision, and great care is taken to avoid accidents. With any automation of decisions related to either lethal fires or to any other battle actions come many challenging questions about responsibility and accountability.

THE FOREST AND THE TREES
Decision making can suffer from an excessive volume of detailed information offered by the network-enabled command system. In our experiments, we observed several mechanisms by which the richness of information negatively impacted the decision making.
First, recall that all operators' displays were tied to the same underlying data source. Therefore, soon after an enemy asset was detected, every screen of every command-cell member in every command vehicle would show this new information. At first glance, this seems to be exactly the right behavior of the system, and the operators indeed desired to see all such information. And yet, this faithful delivery of detailed information proved to be a major distraction to the cell members' decision making, especially to commanders. Instead of focusing on understanding the enemy course of action and how best to counter likely enemy actions, commanders became mesmerized with the screen, hunting for changes in the display and reacting to them.
This so-called looking-for-trees behavior had at least two very adverse impacts on the commander's ability to understand the battlespace. On one hand, the commander gravitated to a reactive mode: he responded to changes on his display and frequently lost the initiative in the battle. This was especially true when inadequate sensor management led to detections of enemy assets outside of the truly critical areas of the battlespace. In such cases, the commander's fixation on the screen led him to focus on largely irrelevant topics while losing the grasp of main events in the unfolding battle.
On the other hand, responding to frequent updates on the screen prevented the commander from spending the necessary time thinking about the bigger picture of the situation. For example, in Experiment 4b, we noticed the excessive frequency with which the commander shifted his attention. He was almost constantly scanning the display for new information, moving his cursor from one entity to another to determine if new information was available, and reacting to the appearance of an enemy icon or alert box on the screen. In Run 4, he shifted his attention 26 times over a 13-minute period, an average of once every 30 seconds. During a 16-minute period in Run 6, he shifted his attention 60 times, for an average dwell time of about 16 seconds.
The implications of this frequent attention shifting are interesting and disturbing. The more often a decision maker shifts attention, the shorter the dwell time on a data element, and the more shallow the cognitive processing. The decision maker may determine, for example, that an enemy vehicle has been detected and may decide how to react to it. Then he shifts his attention to another change in his screen, without having enough time to reason about the broader issues: the implications of the detection of that type of vehicle at that place in the battlespace.
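The averages quoted here follow directly from the shift counts and observation windows; a quick check, with times converted to seconds:

```python
# Verifying the attention-shift averages reported for Experiment 4b.
run4_seconds_per_shift = 13 * 60 / 26  # 26 shifts over a 13-minute period
run6_seconds_per_shift = 16 * 60 / 60  # 60 shifts over a 16-minute period
print(run4_seconds_per_shift)  # 30.0: roughly one shift every 30 seconds
print(run6_seconds_per_shift)  # 16.0: an average dwell of about 16 seconds
```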
Furthermore, the commander would often "drag" the other cell members along with him as he shifted attention, announcing the updates he was noticing or issuing reactive tasks such as "DRAEGA just popped up, let's get a round down there." Such unnecessary and counterproductive communications about the newly arriving information were depressingly common. For example, in Experiment 6, as the commander watched on his screen the reports of Red artillery rounds landing around one of his platoon leaders' vehicles, he felt compelled to keep announcing this fact to the beleaguered platoon leader. Of course, the platoon leader was well aware that he was under fire, and the commander's communications only served to distract him.
Published on Oct 6, 2010