THE INTERNATIONAL JOURNAL OF AVIATION PSYCHOLOGY, 7(4), 269-291
Copyright © 1997, Lawrence Erlbaum Associates, Inc.

FORMAL PAPERS

A Human Error Approach to Accident Investigation: The Taxonomy of Unsafe Operations

Scott A. Shappell, United States Navy, Norfolk, VA
Douglas A. Wiegmann, University of North Florida, Jacksonville

Human error continues to be implicated in the vast majority of aviation accidents. Yet, most accident investigation and reporting systems are not currently designed around any theoretical framework of human error. One reason may be the discontinuity between classical theories of human error and the practical application of these theoretical approaches in accident investigation. The Taxonomy of Unsafe Operations presented here bridges the gap between theory and practice by providing field investigators with a user-friendly, common-sense framework from which accident investigations can be conducted and human causal factors classified. This taxonomy draws upon traditional models of human error to account for human accident causal factors, including the condition of operators and supervisory error. A detailed description of the taxonomy is provided, as well as a discussion of the benefits it provides the field of accident investigation and prevention.

Human beings by their very nature make mistakes; therefore, it is unreasonable to expect error-free human performance. It is no surprise, then, that human error has been implicated in a variety of occupational accidents, including 70% to 80% of those in civil and military aviation [National Transportation Safety Board (NTSB), 1996; U.S. Government Accounting Office, 1996]. In fact, although U.S. Navy and Marine Corps aviation accidents attributed solely to mechanical failure have decreased markedly since 1977, those attributable at least in part to human error have declined at a much slower rate (Shappell & Wiegmann, 1996). It appears that interventions aimed at reducing the occurrence or consequences of human error have not been as effective as those directed at mechanical failures. Clearly, if accidents are to be reduced further, more emphasis needs to be placed upon the genesis of human error as it relates to accident causation.

The predominant means of investigating the causal role of human error in accidents remains the analysis of postaccident data. Unfortunately, most accident reporting systems are not designed around any theoretical framework of human error. Indeed, most accident reporting systems are designed and employed by engineers and front-line operators with limited backgrounds in human factors. As a result, these systems have been effective at identifying engineering and mechanical failures, whereas the human factors component of these reporting systems remains generally narrow in scope. In fact, when human factors are addressed, these reporting systems generally focus on identifying "human failures" without regard for why the failures occurred. Resulting postaccident databases are therefore typically not conducive to a traditional human error analysis, making the identification and development of viable human intervention strategies onerous (Wiegmann & Shappell, 1997). What is required is a general human error framework around which new reporting systems can be organized or existing postaccident databases restructured.

Requests for reprints should be sent to Dr. Scott Shappell, U.S. Naval Safety Center, 375 A Street, Norfolk, VA 23511-4399.
Previous attempts to apply traditional human error frameworks to accident analysis have met with encouraging, yet limited, success (O'Hare, Wiggins, Batt, & Morrison, 1994; Wiegmann & Shappell, 1997). This is due primarily to the fact that performance failures are influenced by a variety of human factors that are usually not addressed by traditional classification schemes. With few exceptions (e.g., Rasmussen, 1982), human error taxonomies do not consider the potential adverse mental or physiological condition of the individual (e.g., fatigue, illness, attitudes, etc.) when describing cognitive failures. Likewise, traditional frameworks often fail to consider latent errors, committed by officials within the management hierarchy (such as line managers and supervisors), known to influence the condition and decisions of operators (Reason, 1990). Therefore, if a comprehensive accident reporting system is to be developed, a taxonomy that takes into account these multiple causes of human failures must be offered.

The purpose of this article is to present a conceptual taxonomy of accident causation that addresses these needs. This classification scheme, coined the Taxonomy of Unsafe Operations (Shappell, Wiegmann, & Schmidt, 1996), was developed for and has recently been adopted by the U.S. Navy and Marine Corps for use in aviation accident investigation and postaccident data analysis. However, because the taxonomy draws upon several well-established theoretical frameworks as a foundation, it is applicable to accidents in a variety of occupational settings as well. For illustrative purposes, however, the taxonomy is described, and its application demonstrated, within the context of aviation.

THE TAXONOMY OF UNSAFE OPERATIONS

Rarely, if ever, is the human the sole cause of an accident. Rather, most accidents involve a complex interaction of several factors. This relationship is depicted by Edwards' (1988) software-hardware-environment-liveware (SHEL) model of system design. Briefly, Edwards described four components necessary for the successful integration and design of a system:

1. Software: the rules, regulations, and so forth that govern operations.
2. Hardware: equipment, material, and other physical resources.
3. Environmental conditions.
4. Liveware: the human.

Central to the SHEL model is the notion that accidents occur when any of these components, or interactions among them, fail. As such, the reliability of the system is often determined by its least reliable component, which in the vast majority of accidents is the human. Consequently, until human failure is adequately described, its complex interactions with other components of the SHEL model cannot be fully understood.

As a first step, the liveware (human) component of the SHEL model was expanded based in large part on Heinrich's (Heinrich, Petersen, & Roos, 1931) first axiom of industrial safety, which states that "the occurrence of an injury invariably results from a completed sequence of factors—the last one of these being the accident itself." This sequential approach to accident causation has been embraced by many in the field of accident investigation. Representative of this approach, Bird's (1974) domino theory of accident causation describes five dominos within the accident sequence, each one effecting the toppling of the next. The first domino represents safety or the management of operational losses.
When failures occur at this level, basic causes (Domino 2) such as personal and job-related factors begin to appear. In effect, these basic causes are at the root of those substandard practices and conditions (Domino 3) that directly lead to an accident (Domino 4) and, tragically, personal injury or property damage (Domino 5). Successful intervention, therefore, entails the removal of any of the preceding three dominos to prevent the occurrence of an accident and injuries.

Drawing upon Bird's (1974) sequential framework, the Taxonomy of Unsafe Operations (Figure 1) describes three levels of failure within the human component of the SHEL model: (a) unsafe supervisory practices; (b) unsafe conditions of operators; and (c) the unsafe acts operators commit. Ultimately, it is failures at any or all of these levels that can lead to accidents, personal injury, and property damage. Each level is described further in the following sections, beginning with the level most closely tied to the accident itself, unsafe acts.

[Figure 1. The Taxonomy of Unsafe Operations: unsafe supervision, unsafe conditions of the operator, and unsafe acts of the operator, culminating in accident and injury.]

Unsafe Acts of the Operator

The Taxonomy of Unsafe Operations directly incorporates Reason's (1990) classification of unsafe acts. According to Reason, any description of unsafe acts committed by operators must first consider the intentions of those committing them. Note, however, that intention here does not refer to outcome—after all, no one intends to have an accident. Rather, intention is determined by simply asking, "did the behavior proceed as planned?" If so, then the behavior is considered intentional; if not, then the behavior is considered unintentional. It is within this framework of unintended and intended actions that Reason described three basic error types (slips, lapses, and mistakes) and violations (Figure 1, bottom panel).

Unintended unsafe acts. Unintended unsafe acts are characterized by actions that unwittingly deviate from planned intentions due to failures of execution or memory and manifest as either slips or lapses (Table 1). Slips, the first of three basic error types described by Reason (1990), are characteristic of attention failures and may take the form of inadvertent activations, interference errors, omissions following interruptions, and order reversals, among others. In contrast, lapses, the second basic error type, typically arise from memory failures and include errors such as omitted items in a checklist, place losing, or forgotten intentions. Both are believed to occur at a largely automatic, skill-based level of processing and are therefore particularly sensitive to monitoring failures by the individual.

TABLE 1
Classification of Unsafe Acts of the Operator

Unintended Actions
  Slips (attention failures)
    Intrusion
    Omission
    Reversal
    Misordering
    Mistiming
  Lapses (memory failures)
    Omitting planned items
    Place-losing
    Forgetting intentions

Intended Actions
  Mistakes
    Rule-based
      Misapplication of a good rule
      Application of a bad rule
    Knowledge-based
      Inaccurate or incomplete mental model of the problem space
  Violations
    Routine: habitual departures from rules and regulations condoned by management
    Exceptional: isolated departures from rules and regulations not condoned by management

Note. Adapted from Reason (1990).
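For organizations that record unsafe acts in a postaccident database, the classification in Table 1 maps naturally onto a small set of coded categories. The following minimal sketch shows one way this might be done in Python; the names and structure are our illustrative choices for a hypothetical database, not part of Reason's (1990) scheme itself.

from enum import Enum

class Intention(Enum):
    UNINTENDED = "unintended"  # the behavior did not proceed as planned
    INTENDED = "intended"      # the behavior proceeded as planned

class UnsafeActType(Enum):
    # Unintended actions (failures of execution or memory)
    SLIP = "slip"    # attention failure, e.g., an order reversal
    LAPSE = "lapse"  # memory failure, e.g., an omitted checklist item
    # Intended actions that fail through planning or problem solving
    RULE_BASED_MISTAKE = "rule-based mistake"
    KNOWLEDGE_BASED_MISTAKE = "knowledge-based mistake"
    # Intended, willful departures from rules and regulations
    ROUTINE_VIOLATION = "routine violation"          # condoned by management
    EXCEPTIONAL_VIOLATION = "exceptional violation"  # not condoned by management

# The intention class to which each unsafe-act type belongs, per Table 1.
INTENTION_OF = {
    UnsafeActType.SLIP: Intention.UNINTENDED,
    UnsafeActType.LAPSE: Intention.UNINTENDED,
    UnsafeActType.RULE_BASED_MISTAKE: Intention.INTENDED,
    UnsafeActType.KNOWLEDGE_BASED_MISTAKE: Intention.INTENDED,
    UnsafeActType.ROUTINE_VIOLATION: Intention.INTENDED,
    UnsafeActType.EXCEPTIONAL_VIOLATION: Intention.INTENDED,
}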
Intended unsafe acts. As originally described by Reason (1990), intended unsafe acts include mistakes (the third basic error type) and violations (Table 1). Mistakes are characteristic of intentional behavior that proceeds as intended, yet fails to achieve the desired outcome due to problem solving or planning failures. Mistakes may be further classified as rule-based or knowledge-based (Rasmussen, 1986).

Rule-based mistakes involve either the misapplication of good rules or the application of bad ones. The misapplication of a good rule refers to the use of preprogrammed behaviors that, although appropriate in the context in which they were learned, prove inadequate in the new situation. These "strong-but-wrong" rules typically occur when the new situation shares common features with the usual problem environment, but counterindications are overlooked. In contrast, the application of a bad rule refers to learned inappropriate, inadvisable, or inelegant programs that succeed by chance in the context in which they were learned yet fail when employed in the new problem area.

In contrast, knowledge-based mistakes occur when sufficient experience is not available to solve the problem; therefore, individuals must resort to slow and effortful problem solving behavior. Within aviation, this form of mistake often appears as a lack of proficiency or experience and is indicative of a wide range of error types, such as cognitive workspace limitations, confirmation bias, and overconfidence in problem solving evaluation.

In contrast to the three error forms described previously (slips, lapses, and mistakes), violations represent a willful departure from those practices deemed necessary to safely conduct operations and as such are differentiated from errors. Violations are further divided into two types based upon the characteristics of individuals committing them and those who govern their actions. The first, routine violations, tend to be habitual by nature and are typical of the individual's behavioral repertoire. Equally important, routine violations are often perpetuated by a system of supervision and management that tolerates such departures. An example would be automobile drivers who consistently exceed the posted speed limit by 5 to 10 mph when they know that such excursions are often tolerated by local authorities. Exceptional violations, on the other hand, are isolated departures from authority, neither typical of the individual nor condoned by management. Continuing with the driving example, most individuals would not drive 50 mph over the speed limit; however, in rare instances (e.g., being late for work) they may exceed the speed limit by 50 mph knowing that such violations would be punished. Hence, an isolated instance of driving 105 mph in a 55-mph zone would be considered an exceptional violation—not necessarily because of its extreme nature, but because it is neither typical of the individual nor condoned by authority.
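The order of questions just described (was the act intended, was a rule willfully broken, was the requisite experience available) lends itself to a simple decision procedure. The sketch below is a self-contained, illustrative encoding of that branching order; the investigator, not the code, supplies the answers, and the routine/exceptional distinction is compressed into a single flag for brevity.

def classify_unsafe_act(proceeded_as_planned: bool,
                        willful_rule_departure: bool,
                        memory_failure: bool,
                        habitual_and_tolerated: bool,
                        experience_available: bool) -> str:
    """Branching order per the text; every judgment comes from the investigator."""
    if not proceeded_as_planned:
        # Unintended action: memory failure is a lapse, attention failure a slip
        return "lapse" if memory_failure else "slip"
    if willful_rule_departure:
        # Routine if typical of the individual and tolerated by management
        return "routine violation" if habitual_and_tolerated else "exceptional violation"
    # An intended act with no willful rule departure is a mistake
    return "rule-based mistake" if experience_available else "knowledge-based mistake"

# The habitual 5 to 10 mph speeder above: intended, willful, and tolerated
assert classify_unsafe_act(True, True, False, True, True) == "routine violation"
# The isolated 105-mph excursion: willful but neither habitual nor tolerated
assert classify_unsafe_act(True, True, False, False, True) == "exceptional violation"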
Unsafe Conditions of the Operator

In many instances it is the unsafe conditions of operators that contribute to or cause unsafe acts (Bird, 1974). Although Reason (1990) addressed this relationship, a classification scheme similar to that used with unsafe acts was not presented. Therefore, a framework was developed that loosely preserved Bird's original notion of substandard conditions and practices of operators (Figure 1, center panel).

Substandard conditions of the operator. Substandard conditions can be divided into three categories, the first of which, adverse physiological states, was created to account for those medical or physiological conditions that affect safe operations (see Table 2). Within aviation, this category would include conditions such as spatial disorientation, hypoxia, visual illusions, and the myriad of pharmacological and medical abnormalities known to negatively influence performance. Take, for example, a pilot who loses visual references while flying at night in a low overcast. Such conditions often lead to spatial disorientation. If that pilot was either unaware of the unsafe condition, or did not correctly utilize the flight instruments, it is quite possible that the pilot would misjudge his attitude or altitude to the extent that mistakes (unsafe acts) could be made and ultimately lead to an accident.

TABLE 2
Classification of Unsafe Conditions of the Operator

Substandard Conditions of the Operator
  Adverse Physiological States
    Spatial disorientation
    Hypoxia
    Visual illusions
    Motion sickness
    Medical illness
    Intoxication
    Circadian dysrhythmia
    Alertness (drowsiness)
  Adverse Mental States
    Loss of situational awareness
    Overconfidence
    Complacency
  Physical and/or Mental Limitation
    Lack of sensory input
    Limited reaction time
    Insufficient physical capabilities
    Insufficient intelligence

Substandard Practices of the Operator
  Mistakes-Misjudgments
    Poor dietary practices
    Overexertion while off duty
    Physical fatigue
  Crew Resource Mismanagement
    Not working as a team
    Poor aircrew coordination
    Improper briefing before a mission
    Inadequate coordination of flight
  Readiness Violation
    Not adhering to regulations regarding crew rest, alcohol consumption, or medications

The second category, adverse mental states, is composed of those psychological or mental conditions that negatively impact performance. Principal among adverse mental states are the loss of situational awareness, task saturation, and mental fatigue. Each continues to plague aviation and other occupational settings and must therefore be accounted for (Wiegmann & Shappell, 1997). Also included here, yet often overlooked, are personality traits and pernicious attitudes such as overconfidence, complacency, and misplaced motivation that likewise influence decisions and contribute to unsafe acts. Complacency, for instance, is often at the root of inattention, which in turn leads to slips as described earlier.

The final category, physical or mental limitations of the operator, includes those instances when necessary sensory information is not available, or when individuals do not have the ability or time to successfully problem solve. For aviation, the former often includes not seeing other aircraft, power lines, and obstacles due to the size or contrast of the object in the visual field. There are equally as many aural and tactile limitations that affect the condition of the operator. For example, fighter pilots in the military often rely on sound and tactile cues during air combat maneuvering. Any limitation of, or interference with, these senses would therefore put the pilot at risk. Likewise, there are many times when a situation requires such rapid mental processing or reaction time that it is beyond human limits to resolve it successfully. Take, for example, a pilot involved in terrain-following or nap-of-the-earth flight. When exposed to engine or control surface failures, the time required to evaluate and solve the problem may very well exceed human limits.
In situations such as these, split-second responses are required and errors (unsafe acts) are often made.

Substandard practices of the operator. The first of three categories, mistakes-misjudgments, typically includes behaviors that do not violate existing rules or regulations but may reduce the operating capabilities of the individual. For example, individuals who run 10 miles before piloting an aircraft may impair their physical or mental capabilities to the extent that performance is degraded. Likewise, poor dietary habits are known to affect performance, yet many individuals continue to replace proper meals with a candy bar and cola before flying. Although there may be no rules governing such actions, reasonable judgment would argue against them.

The second category, crew resource mismanagement, concerns poor coordination and communication between operators. The aviation literature is replete with accidents attributed to poor crew resource management, and literally volumes have been written on the topic (for reference, see Jensen, 1995). Therefore, no classification scheme would be complete without accounting for poor coordination between aircrew members. However, this category also includes those instances when individuals directly responsible for the conduct of operations fail to coordinate activities adequately. Within military aviation, this would include those crew members who function during the flight as aircraft commanders, flight leaders, section leaders, and so forth. Note, however, that this category is reserved for those individuals directly involved with the conduct of the flight and not those in positions of higher authority or otherwise detached from the actual flight (these are addressed later in the article).

The final category, readiness violations, refers to violations of rules, regulations, and instructions that promote unsafe conditions within the individual. These violations are differentiated from violations within unsafe acts because the former impact the condition or readiness of the operator and typically do not occur in the direct operation of the aircraft. For example, although disregarding crew rest requirements is certainly a violation, it is classified here as a readiness violation because it promotes a condition of both physical (adverse physiological state) and mental (adverse mental state) fatigue that may elicit unsafe acts. Similarly, violating alcohol restrictions, self-medicating, or other practices that negatively affect the readiness of the operator, and therefore performance, would be viewed as readiness violations. In contrast, violating altitude restrictions by definition occurs during the conduct of flight and therefore is classified as an unsafe act and not an unsafe condition as described here.

Unsafe Supervision

Both Reason (1990) and Bird (1974) discuss the role of unsafe supervision in accident causation; however, neither presents a framework for classifying the various types of supervisory failures. Therefore, a framework was developed that expands upon Bird's and Reason's descriptions of unsafe supervisory practices within the context of unforeseen and known unsafe supervision (Figure 1, top panel).

Unforeseen unsafe supervision. Viewed from the perspective of line managers and supervisors, unforeseen supervisory failures are best described as unsafe management or supervisory practices that go unnoticed yet are not the result of negligence or adverse behavior (Table 3).
The first category, unrecognized unsafe operations, is likened to a loss of supervisory situational awareness. Though somewhat broad, it includes those instances when unsafe conditions or hazards exist yet go unseen or unrecognized. For example, aeromedical conditions such as illness or fatigue are known to adversely affect operator performance but often go unnoticed by the untrained or overtasked supervisor. In much the same way, the insidious effects of recent life changes experienced by operators (e.g., divorce, death of a family member, legal difficulties, etc.) may adversely affect safety of flight yet may remain undetected by even the best intentioned supervisor.

TABLE 3
Classification of Unsafe Supervision

Unforeseen Unsafe Supervision
  Failure to Recognize Unsafe Operations
    Loss of supervisory situational awareness
    Unseen or unsafe conditions and hazards
    Unrecognized adverse aeromedical conditions
    Life changes such as divorce; death in family; legal, financial, or personal problems
  Lack of Documentation and Procedures
    Lack of technical specifications, instructions, regulations, etc.
  Inadequate Design
    Equipment design that contributes to the accident

Known Unsafe Supervision
  Inadequate Supervision
    Failure to administer proper training
    Lack of professional guidance
  Planned Inappropriate Operations
    Improper work tempo
  Failure to Correct Known Problem
    Failure to correct inappropriate behavior
    Failure to correct safety hazard
  Supervisory Violations
    Not adhering to rules and regulations
    Willful disregard for authority by supervisors

The second category, inadequate documentation and procedures, is typical of most systems—particularly new ones where the "bugs" have yet to be discovered. Accounting for all possible contingencies through technical specifications, instructions, regulations, and standard operating procedures is an extremely difficult task, at best. Unfortunately, accidents, incidents, and hazards continue to be a common way of identifying deficiencies in existing documentation, often after tragedy has struck.

The final category, inadequate design, was created to account for design flaws that are built into systems unbeknownst to supervisors or operators for reasons beyond their control. That is, time and fiscal priorities often drive design, leading to less than optimal systems. Unfortunately, in many instances the supervisor and operator are left to identify these latent flaws on their own through hazard and mishap reporting systems. Although many would consider this category a "hardware" rather than "liveware" issue, we have included it here as it relates to potential supervisory interventions. Specifically, had the supervisors known about a particular flaw, they may have been able to circumvent or avoid adverse consequences.

Known unsafe supervision. The first of four categories, inadequate supervision, as described here refers to the mismanagement of individuals on a personal level. It is expected that individuals will receive adequate training, professional guidance, and operational leadership, and that all will be managed appropriately. Unfortunately, supervision may prove to be inappropriate or improper, or it may not occur at all. Regardless, inadequate supervision is viewed as a function of some action, or purposeful inaction, by the supervisor.
Occasionally, the operational tempo or schedule is planned such that individuals are put at unacceptable risk and ultimately performance is adversely affected. Planned inappropriate operations encompass all aspects of improper or inappropriate crew scheduling and operational planning, which may focus on such issues as crew pairing, crew rest, and managing the risk associated with specific missions. Arguably, however, there are times when hazardous operations are unavoidable (e.g., during emergency operations such as medical evacuations) and supervisors are simply forced to schedule crews to the best of their ability. Although such operations might be acceptable in some extreme cases, during normal operations these practices would be considered unacceptable and therefore are included as supervisory error here.

The remaining two categories, the failure to correct known problems and supervisory violations, are similar yet considered separately within the taxonomy. The failure to correct known problems refers to those instances when deficiencies among individuals, equipment, training, or other related safety areas are known to the supervisor yet are allowed to continue uncorrected. For example, the failure to consistently correct or discipline inappropriate behavior certainly fosters an unsafe atmosphere, but it is not considered a violation if no specific rules or regulations were broken. Supervisory violations, on the other hand, are reserved for those instances when existing rules and regulations are willfully disregarded by supervisors when managing assets. For instance, permitting an aircrew to operate an aircraft without current qualifications or license is a flagrant violation that invariably sets the stage for the tragic sequence of events that predictably follow.
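Taken together, the three levels and their subcategories form a fixed vocabulary against which each causal factor in an accident can be recorded. The sketch below shows one possible encoding in Python, anticipating the classification exercise in the next section; the field names, labels, and validation helper are our illustrative choices for a hypothetical database, not an official schema.

from dataclasses import dataclass

# Category labels at each level of the taxonomy (Tables 1-3).
LEVELS = {
    "unsafe act": {
        "unintended": ["slip", "lapse"],
        "intended": ["rule-based mistake", "knowledge-based mistake",
                     "routine violation", "exceptional violation"],
    },
    "unsafe condition": {
        "substandard condition": ["adverse physiological state",
                                  "adverse mental state",
                                  "physical/mental limitation"],
        "substandard practice": ["mistake-misjudgment",
                                 "crew resource mismanagement",
                                 "readiness violation"],
    },
    "unsafe supervision": {
        "unforeseen": ["unrecognized unsafe operations",
                       "inadequate documentation and procedures",
                       "inadequate design"],
        "known": ["inadequate supervision",
                  "planned inappropriate operations",
                  "failure to correct a known problem",
                  "supervisory violation"],
    },
}

@dataclass
class CausalFactor:
    actor: str        # e.g., "captain", "AIA", "FAA"
    description: str  # the erroneous event itself
    level1: str       # unsafe act, unsafe condition, or unsafe supervision
    level2: str
    level3: str

    def validate(self) -> bool:
        # Confirm the three labels form a legal path through the taxonomy.
        return self.level3 in LEVELS.get(self.level1, {}).get(self.level2, [])

# One factor from the accident example that follows:
factor = CausalFactor("captain",
                      "chose to continue, rather than abort, a poor approach",
                      "unsafe act", "intended", "rule-based mistake")
assert factor.validate()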
AVIATION ACCIDENT EXAMPLE

For demonstration purposes, the human causal factors associated with a well-documented commercial aviation accident were classified using the Taxonomy of Unsafe Operations. However, before presenting the example, it is important to note that the goal was not to reinvestigate the accident, nor was it to validate or invalidate the original investigation. Rather, the taxonomy supplements the original investigation by providing a relational framework and structured nomenclature from which the underlying causes of human failures can best be described.

American International Airways Flight 808 (Aug. 18, 1993)

Accident Overview

On August 18, 1993, at 1656 EST, a Douglas DC-8-61 freighter registered to American International Airways (AIA) crashed on approach to the U.S. Naval Air Station at Guantanamo Bay, Cuba (National Transportation Safety Board [NTSB], 1994). The aircraft was destroyed by impact forces and a postcrash fire. No fatalities occurred; however, all three crew members were seriously injured.

A review of events beginning 2 days before the accident revealed that the aircrew had flown a relatively normal cargo schedule; they began their crew day at 2300 on both days and, after several intermediate stops, concluded their crew day at 1200 and 0800, respectively. After being relieved at 0800 on the second day, the crew was reassigned to fly an unexpected trip to Guantanamo Bay. This reassignment was done even though the crew had been on duty since 2300 and was not due to report for their next scheduled flight until 2300 that evening. In fact, at the time of the accident the captain had been awake for nearly 24 hr. During that time, the captain had accumulated 12 hr of flight time and had been on duty for 18 hr, well within the company's 24-hr crew day policy.

After accepting the Guantanamo flight, the aircrew departed Atlanta, Georgia, 2 hr later, stopping in Norfolk, Virginia, for cargo. With the captain at the controls and the first officer (FO) handling communications, the aircraft departed Norfolk and continued uneventfully to Guantanamo Bay. Of note, none of the crew had ever landed a DC-8 at the destination airfield.

Upon arrival to the Guantanamo Bay operating area, the FO initially requested Runway 28 for landing. Runway 28 is generally preferred because Runway 10 is obstructed by Cuban airspace that begins roughly 4,700 feet west of the threshold. To land on Runway 10, the aircraft must be positioned at the edge of Guantanamo's airspace, and the approach requires a precisely timed right turn on final, considerably more difficult than an approach to Runway 28. On the day of the accident the prevailing winds also favored Runway 28. Nevertheless, the captain decided to request Runway 10 "just for the heck of it."

After much confusion between the aircrew and controller, the aircrew were instructed to remain within the airspace bounded by a strobe light. On the western edge of Guantanamo's airspace is a fence line at a right angle to the coast and a strobe light. Unknown to the aircrew or the air traffic controller, the strobe light was inoperative. The ensuing moments were marked by confusion among the pilots as they searched for the inoperative strobe light. Ultimately, the captain and FO became so preoccupied with locating the strobe light that they failed to monitor the state of the aircraft, particularly its declining airspeed, during final approach. Only the flight engineer noted the extreme situation, but his repeated warnings were ignored by the captain until it was too late. Following an aggressive right turn to the runway, the aircraft stalled and crashed.

Classification Using the Taxonomy of Unsafe Operations

The original investigation by the NTSB revealed that this accident, like most, was a function of several human performance failures. The analyses and findings of the NTSB are presented here as they were classified within the Taxonomy of Unsafe Operations. The final product is a chain of events (i.e., accident causal factors) linked within a human error framework.

Working backwards from the final moments of the accident, it is evident that the captain's failure to correctly respond to the impending stall directly led to the accident. The correct response to an impending stall would have been to level the aircraft's wings, apply full thrust, and abort the approach. Instead, the captain chose to increase his turn in an attempt to land the aircraft. Using the taxonomy, the incorrect response to the impending stall is considered an unsafe act and may be further classified as intentional, albeit the plan did not reach its desired ends (i.e., they crashed). It must be determined, then, whether there was intent to violate any rules or if the failed approach was simply the result of a mistake. In this case, continuing the approach was considered a mistake, because no rules were broken and the captain believed that he could make the approach even at such a steep angle of bank. This decision, although not prudent given the circumstances, was not surprising because the captain was quite experienced in the DC-8.
Using Reason's (1990) description of rule-based mistakes, it appears that the captain misapplied an otherwise good rule and increased the angle of bank to adjust for lineup during approach. However, unknown to him at the time, he had no chance of completing the approach given the aircraft's decreasing airspeed and load factor.

It can only be assumed that, had the captain been aware of the decreasing airspeed and impending stall, he would have aborted the approach. Why, then, was the captain unaware of his airspeed? As described earlier, upon approach to Runway 10, a strobe light is provided for proper alignment with the Cuban boundary fence. However, unknown to the aircrew and controllers, the strobe was inoperative. During the ensuing moments, the captain and FO became so preoccupied with locating the inoperative strobe (classified as a substandard condition, adverse mental state) that they failed to monitor their flight instruments, particularly the airspeed indicator. This unsafe action, or rather inaction, by the captain and FO was, therefore, unintended and is best classified, using the taxonomy, as a slip.

Although the preoccupation with the inoperative strobe clearly played a major role in the tragic sequence of events that followed, a more insidious loss of situational awareness began much earlier, during the turn from base leg to final. It is likely that during final approach the captain, being in the left seat, did not even have the runway threshold in sight due to an obstructed view from the cockpit. If, in fact, the runway was out of sight, the need for effective communication between the captain and other crew members was paramount. Unfortunately, the preoccupation with the strobe light interfered with any effective communication within the cockpit. In fact, when the flight engineer brought the airspeed of the aircraft to the attention of the pilot, he was quickly rebuffed. The breakdown in effective communication at such a critical juncture of the flight (classified as a substandard practice, crew resource mismanagement) contributed to a rapidly degrading situational awareness (classified as a substandard condition, adverse mental state) and affected the tragic sequence of events that followed.

Clearly, the decision to continue a poor approach caused the accident; however, the event might never have occurred had the aircrew chosen to land on Runway 28 rather than Runway 10. Recall that Runway 10 is bounded to the west by Cuban airspace and requires precise timing for an approach from a right traffic pattern. On that day the approach was further complicated by a prevailing southerly wind, requiring that the pilot turn sooner or use a steeper angle of bank to maintain the proper approach path. Although theoretically within the performance limits of the aircraft and the ability of the aircrew, a Runway 10 approach left little margin for error and was unnecessarily risky given the circumstances. Using the taxonomy, then, the decision to use Runway 10 rather than 28 did not violate any rules and was therefore considered a mistake.

Why would such an experienced captain choose such a difficult approach when a safer, more prudent one existed? In many ways, the answer may lie in the underlying fatigued state of the aircrew. The aircrew had been on duty for nearly 18 hr, having flown all night before being scheduled for the Guantanamo Bay flight. In fact, the captain reported feeling "very lethargic or indifferent" (NTSB, 1994, p. 60) just before the accident.
Likewise, according to the captain, the FO reported that "he might be more fatigued than he thought" (NTSB, 1994, p. 60). It is likely, then, that the insidious effects of fatigue, in particular those of mental fatigue, produced a substandard condition, adverse mental state, among the aircrew that contributed to the series of poor decisions made by the aircrew. It is well documented that attitude, judgment, decision making, and other cognitive processes are impaired when individuals become fatigued or mildly sleep deprived (Neri & Shappell, 1993; Neri, Shappell, & DeJohn, 1992; Shappell & Neri, 1993). It is likely, therefore, that at least the captain, and perhaps other members of the crew, were adversely affected by the insidious effects of fatigue. The decision to land on Runway 10, for instance, was probably affected, given the almost cavalier attitude with which the aircrew agreed upon the more difficult approach. These crew members were well known for their professionalism and had flown safely for over 40,000 hr. The confidence in their abilities, combined with their current fatigued state, may have produced a seemingly overconfident, almost complacent "what the heck" attitude not normally present within this aircrew. At a minimum, they were guilty of complacency and overconfidence, considered a substandard condition, adverse mental state, using the taxonomy. Likewise, it is possible that the fatigue experienced by this aircrew influenced the loss of situational awareness as described earlier, as well as the decision to salvage a poor approach.

Although it is tempting to focus solely on errors made by the aircrew, a thorough understanding of the accident sequence requires an examination of those latent failures committed by line management and supervisors that were inherited by the aircrew. Much of the unsafe supervision committed by AIA personnel was characteristic of a rapidly expanding company operating within the very competitive environment of air freight. To remain competitive, AIA frequently operated on the edge of established Federal Air Regulations regarding crew rest and long duty hours. For instance, AIA had an "unwritten" policy to avoid scheduling aircrews for more than a 24-hr continuous duty day. Yet, the captain from the mishap flight stated that he had been assigned missions that exceeded a 24-hr duty day on several occasions. In fact, the duty day preceding this accident was not that unusual for AIA aircrews in general. Apparently, the hazards associated with circadian dysrhythmia and aircrew fatigue were of little concern. Given that the extent of their previous duty schedule was known by AIA, adding the Guantanamo Bay flight may be viewed as planned inappropriate operations.

Adequate training, like scheduling, is the responsibility of management. There were at least two instances of inadequate training and guidance identified by the NTSB in this accident. The first involved crew resource management (CRM) training. CRM has been embraced by commercial airlines and other aviation communities for well over a decade now, yet AIA's training in this area was minimal at best. It is quite possible that a more comprehensive program in CRM, consisting of practice and feedback, may have prevented some of the crew coordination deficiencies evident at critical junctures in the accident sequence.
Nevertheless, it is clear from the NTSB investigation that AIA knowingly provided minimal training in CRM, albeit within established guidelines, that ultimately proved to be insufficient. Therefore, the causal factor was considered a case of known, inadequate supervision using the taxonomy.

Also identified by the NTSB were the inadequate training and guidance provided to flight crews operating at special airports like Guantanamo Bay. However, unlike the lack of CRM training described previously, it is likely that this need for special training was unforeseen by AIA. Flight crew briefings into Guantanamo Bay are usually done by those Department of Defense facilities from which the flight originates (Norfolk Naval Air Station Air Transportation Operations Center, in this case). That the briefing was insufficient given the outcome can only be viewed as unforeseen by AIA and is therefore considered an instance of unrecognized unsafe operations. If anything, Norfolk Naval Air Station Air Transportation Operations Center, rather than AIA, was guilty of providing an inadequate brief. However, it is unclear from the report whether this was the case.

The final two instances of unsafe supervision involved the U.S. Navy and the Federal Aviation Administration (FAA). The NTSB determined that the U.S. Navy had failed to provide a system that would ensure that the local tower controller was aware of an inoperative strobe. Presumably, if the controller had known that the strobe was out, he would not have instructed the aircrew to remain within the first fence line designated by the high-intensity strobe. However, hazardous situations like this often go unnoticed until it is too late. It is simply impossible to predict every possible hazardous situation, and no provisions had been made for a strobe malfunction. The failure of the U.S. Navy to provide a system capable of preventing this hazard was therefore viewed as unforeseen and further classified as unrecognized unsafe operations rather than the failure to correct a known problem. In contrast, the failure of the FAA to revise regulations governing flight and duty time (14 CFR, part 121, Supplemental Air Carrier, International Operations) has been a long-standing issue with the NTSB and FAA. As such, it must be considered a known instance of unsafe supervision, and therefore a failure to correct a known problem.

Accident Summary

Many human causal factors contributed to the loss of AIA Flight 808 (Table 4), several of which existed long before the fateful decisions of the aircrew. Unsafe supervisory practices such as poor guidance, inadequate aircrew coordination training, and inappropriate scheduling practices can all be linked to the unsafe acts committed by the aircrew. Presumably, the aircrew would have chosen the safer approach to Runway 28 had they been better informed of the hazards associated with a Runway 10 landing at Guantanamo Bay. Given that they chose Runway 10, however, the accident was still preventable. Had they properly monitored their instruments rather than searching for an inoperative strobe, or had they practiced proper aircrew coordination techniques, they would presumably have aborted the poor approach and prevented the stall.
Fatigue, overconfidence, and perhaps even a sense of urgency to finish the extremely long crew day may have contributed to the series of poor decisions made by the crew throughout the final moments of the flight. In sum, this accident, like many, cannot simply be attributed to a single causal factor; it was a sequence of several missteps, the origins of which were founded in unsafe supervisory practices.

TABLE 4
Summary of Accident Causal Factors for the AIA DC-8 Accident

Actor | Accident causal factor | Level 1 | Level 2 | Level 3
Captain | Chose to continue, rather than abort, a poor landing approach | Unsafe act | Intended | Mistake
Captain & FO | Preoccupation with the inoperative strobe | Unsafe condition | Substandard condition | Adverse mental state
Captain & FO | Failed to monitor airspeed and avoid a stall while in a steep turn | Unsafe act | Unintended | Slip
Captain & FO | Loss of situational awareness while maneuvering onto final approach | Unsafe condition | Substandard condition | Adverse mental state
Aircrew | Lack of aircrew coordination | Unsafe condition | Substandard condition | Crew resource mismanagement
Captain | Chose Runway 10 rather than Runway 28 | Unsafe act | Intended | Mistake
Aircrew | Mental fatigue | Unsafe condition | Substandard condition | Adverse mental state
Aircrew | Complacency and overconfidence | Unsafe condition | Substandard condition | Adverse mental state
AIA | Scheduled crew to limits of their ability | Unsafe supervision | Known | Planned inappropriate operations
AIA | Inadequate CRM training | Unsafe supervision | Known | Inadequate supervision
AIA | Inadequate training regarding operations at special airports | Unsafe supervision | Unforeseen | Unrecognized unsafe operations
FAA | Inadequacy of flight and duty time regulations | Unsafe supervision | Known | Failed to correct known problem
Navy | Failed to provide inoperative strobe warning system | Unsafe supervision | Unforeseen | Unrecognized unsafe operations

Note. AIA = American International Airways; FAA = Federal Aviation Administration; FO = First Officer.

FOCUSED INTERVENTIONS

The ultimate goal of any accident investigation is to prevent the recurrence of accidents through the judicious application of intervention strategies. Whether the goal is to prevent the occurrence of human errors or simply reduce their consequences, effective interventions must focus on the underlying causes of human error. Unfortunately, as mentioned earlier, traditional methods of investigation focus primarily on documenting human failures responsible for accidents with little regard for their underlying causes. Consequently, many intervention strategies are inappropriately or unsystematically applied and often meet with limited success.

The Taxonomy of Unsafe Operations, by classifying human failures based on underlying causes, makes salient those areas in need of intervention. In this way, existing intervention strategies can be effectively focused, or in some instances, the necessity for novel approaches may be identified. To illustrate this point, some representative interventions at each level of the taxonomy are described.
Unsafe Acts

A thorough understanding of why humans make errors or violate rules will often dictate intervention strategies. Mistakes, for example, arise primarily from information problems and manifest within the context of intentional, controlled processes such as rule- and knowledge-based errors. As such, improved personnel selection criteria may provide some improvement, particularly when individuals without the requisite aptitude to fly aircraft or personalities conducive to exceptionally risky behavior can be identified. However, if the caliber of personnel is already high, as is often the case in aviation, efforts aimed at improving selection criteria may prove limited. Instead, modifications to the quality of training programs that aid pilots with the transition from controlled processing to automatic, skill-based behavior may be more useful. Another possibility may be to engineer a computer-aided decision-making system that assists pilots with problem analysis. The interested reader is directed to Reason (1990) for a discussion of several other potential intervention strategies for use with these and other unsafe acts.

In contrast to mistakes, willful violations stem primarily from motivational problems (e.g., low morale, perceived lack of concern by supervisors, and sanctioned noncompliance). Thus, violations are best reduced by attitudinal changes within the organization. Such changes can be made only through a top-down approach focusing on motivational and leadership strategies. Nevertheless, even the best selection, training, and motivational strategies may fail to eliminate violations. In these instances, the only recourse may be to fire the individual, thereby removing him from the environment (e.g., control of aircraft).

If, on the other hand, the unsafe act is unintentional (i.e., slips or lapses), interventions such as those used with intended acts may prove fruitless. The need for alternative intervention strategies is clear from the type of errors characteristic of this class. Both error forms (slips and lapses) occur during automatic, skill-based processing that is often independent of conscious control. Therefore, additional training would appear to be counterintuitive. Likewise, selecting for specific skill-based traits is a crude science at best and therefore not particularly appealing as an intervention strategy in this case. In these instances, warning systems or design changes that produce a more error-tolerant system or warn individuals when actions become unsafe may prove more efficacious. For example, in the AIA DC-8 accident described previously, a predictive stall-warning system may have diverted the pilot's attention away from the inoperative strobe light and alerted him to the impending stall.
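The pairings just outlined (mistakes with selection, training, and decision aids; violations with organizational and motivational change; slips and lapses with warning systems and error-tolerant design) can be summarized as a simple lookup, which is one way a reporting system could surface candidate interventions once a causal factor has been classified. The sketch below is illustrative only; the strategy lists paraphrase the discussion above and are not an endorsed protocol.

# Candidate intervention families keyed by unsafe-act type, paraphrasing
# the discussion above. Illustrative, not prescriptive.
INTERVENTIONS = {
    "rule-based mistake": ["improved selection criteria",
                           "training for the transition to skill-based behavior",
                           "computer-aided decision making"],
    "knowledge-based mistake": ["improved selection criteria",
                                "training", "computer-aided decision making"],
    "routine violation": ["top-down attitudinal and leadership change",
                          "removal from the environment as a last resort"],
    "exceptional violation": ["top-down attitudinal and leadership change",
                              "removal from the environment as a last resort"],
    "slip": ["warning systems", "error-tolerant design"],
    "lapse": ["warning systems", "error-tolerant design"],
}

def candidate_interventions(act_type: str) -> list:
    """Return candidate strategies for a classified unsafe act."""
    return INTERVENTIONS.get(act_type, ["re-examine the underlying causes"])

# For example, the slip identified in the AIA accident (failure to monitor
# airspeed) maps to warning systems, consistent with the predictive
# stall-warning system suggested above.
print(candidate_interventions("slip"))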
Unsafe Conditions

Many intervention strategies intended for unsafe conditions (e.g., focused training programs) are similar to those proposed for unsafe acts. In theory, however, interventions that eliminate or reduce unsafe conditions have the added benefit of reducing the opportunity for unsafe acts as well. For example, additional training in aircrew coordination techniques may reduce interpersonal resource mismanagement and ultimately reduce the likelihood that mistakes are made. This is clearly the case in the DC-8 example. Quite possibly, had the aircrew received more than the minimum instruction on crew resource management principles, the breakdown in effective communication between aircrew members would not have occurred and the accident would likely have been averted. Nonetheless, there may be some instances in which substandard practices persist even after additional training or, as with unsafe acts, some individuals will continue to operate outside the confines of established rules and regulations. In both cases, management may have no alternative but to fire the individual.

Substandard conditions represent a unique challenge to the safety professional. Traditional interventions like selection and training are generally difficult to implement and often meet with limited success. In contrast, two particularly effective strategies aimed at reducing adverse physiological and mental states are the periodic review of aircrew human factors and regular medical monitoring. Both are staples of the U.S. Navy/Marine Corps Aviation Safety Program. However, while periodic medical reviews are fairly common throughout aviation, a human factors review is much less familiar and perhaps not as easily understood. Human factors here refers to that constellation of personal and professional characteristics that correlate with an individual's ability to perform effectively. Examples include psychological and emotional stressors, interpersonal relationships, performance trends, training currency, and motivational factors, all of which are known to affect performance. However, it is not simply the manifestation of these factors that is of concern. Rather, it is the influence these human factors exert within the context of flight safety that is important, particularly with respect to crew coordination, air discipline, judgment, professionalism, leadership, and basic flying skills.

Still, perfectly normal individuals can and will get into adverse physiological and mental situations that promote poor decisions. Likewise, there are many times when the limitations of the sensory system are naturally impaired (e.g., at night or in instrument meteorological conditions) and safety of flight is jeopardized. Additional training that increases awareness of hazards associated with these conditions may provide some help, albeit limited, in ensuring that a correct decision is made. Perhaps more effective are engineering solutions that help operators overcome their physical and mental limitations. Examples range from basic flight instruments found in most aircraft to sophisticated devices such as onboard weather radar and night-vision goggles.

Unsafe Supervision

Although interventions at the operator level (unsafe acts and conditions) are arguably the most familiar, and therefore the focus of most safety programs, several interventions at the supervisory level also exist. In the case of known unsafe supervision, one possibility is to improve the criteria used to select supervisors. Unfortunately, supervisors typically represent highly successful individuals within an organization; thus, additional selection criteria may prove ineffective. Additional training in leadership, team development, and communication skills may prove more beneficial. Another possibility may be to heighten enforcement of existing regulations or develop new, more restrictive guidelines. Take, for example, the AIA scheduler who assigned the fatigued DC-8 aircrew the additional Guantanamo Bay mission. At the time, he was working within accepted guidelines promulgated by the organization. If, however, established regulations regarding crew rest and long duty hours had been more restrictive, presumably a more rested aircrew would have been scheduled, and the accident would likely have been averted.

Unforeseen unsafe supervision is more insidious and considerably more difficult to address than known unsafe supervision. There are times when supervisors simply fail to recognize unsafe operations through no fault of their own. In these situations, increased awareness of system safety and periodic human factors reviews of all personnel managed may prove successful.
Likewise, regular safety inspections, work site analyses, and simulated operations or dry runs may identify hazards before they manifest as accidents. Unfortunately, there is no guarantee that these methods will shed light on all hidden hazards. The key is to instill an attitude of safety within the organization so that when potential hazards do surface, they are reported and addressed in an efficient and timely manner.

CONCLUSIONS

Applying the Taxonomy of Unsafe Operations as an investigative or classification tool has several advantages. From a philosophical perspective, the taxonomy helps investigators gain a greater appreciation for the genesis of human error, including latent failures associated with the condition of operators and those who supervise them. Historically, the focus of accident investigations has been almost entirely upon the operators. After all, their actions are in many instances directly linked to the accident. As a result, labels such as "pilot error" have become commonplace and are indicative of a limited approach to the analysis of human error. This is not to say that comprehensive theories of accident causation do not exist. Rather, they remain more academic than practical. The Taxonomy of Unsafe Operations bridges the gap between theory and practice by providing field investigators with a user-friendly, commonsense framework from which to conduct a thorough accident investigation.

Likewise, the Taxonomy of Unsafe Operations was designed to be a generic human error accident investigation scheme. Although it was presented here within the context of aviation, its cause-oriented (rather than event-oriented) approach makes it useful in a variety of other occupational settings as well. In other words, the taxonomy was not designed to simply catalog erroneous events such as the aircrew's "failure to lower the landing gear" or their "failure to scan instruments properly." Rather, these events are classified based on their underlying causes (e.g., inadequate supervision, adverse mental states, and slips) and are therefore independent of the setting in which they are applied. Put simply, slips are slips whether they occur in an aircraft, automobile, or nuclear power plant. On the surface, their manifestation may be entirely different, but their underlying causes are generally the same. Therefore, the taxonomy should prove useful in the investigation, classification, and prevention of a variety of occupational accidents as well.

Furthermore, the Taxonomy of Unsafe Operations has uses beyond accident investigation. Although the taxonomy has been described here as a post hoc tool for accident investigation and then prevention, it can be used in the early identification and prevention of potential hazards as well. After all, waiting until tragedy strikes to employ the taxonomy is certainly not the best tactic to follow. Rather, the taxonomy offers a structured format from which to look for and identify hazards in the workplace before accidents occur. By instructing operators in the taxonomy, operator awareness of those latent failures (unsafe conditions of the operator and unsafe supervisory practices) that influence the commission of unsafe acts can be increased markedly. In fact, by integrating the taxonomy into an organization's overall safety program, management has in effect sanctioned an active and structured program of hazard identification and elimination.
Ultimately, this would improve safety in the workplace, thereby lowering losses attributable to human error.

Even with the benefits that the Taxonomy of Unsafe Operations brings to the operational setting, it was never intended to be the only tool used in an accident investigation. Other techniques for analyzing equipment failures and design flaws exist and should always be employed. Furthermore, the taxonomy needs to be expanded to address failures associated with the interaction between operators and the remaining components of Edwards' (1988) SHEL model (i.e., hardware, software, and environmental factors). Consideration of human error at the organizational level (e.g., allocation of resources, decision centralization, and organizational culture) must be incorporated into this scheme as well. Efforts along these lines are currently in progress. Nevertheless, the Taxonomy of Unsafe Operations as described here provides a solid foundation upon which a comprehensive human error approach to accident investigation and prevention can proceed.

ACKNOWLEDGMENTS

We express our sincere appreciation to Dr. John Schmidt, whose insightful comments and criticisms contributed markedly to the development of the Taxonomy of Unsafe Operations. We would also like to acknowledge the insight and operational contributions Maj. Leonhard Raabe, USAF, and Maj. Steven Walker, USAF, made toward the development of the taxonomy. The views expressed in this article are those of the authors and do not reflect the official policy or position of the U.S. Navy, the Department of Defense, or the U.S. Government.

REFERENCES

Bird, F. (1974). Management guide to loss control. Atlanta, GA: Institute Press.
Edwards, E. (1988). Introductory overview. In E. L. Wiener & D. C. Nagel (Eds.), Human factors in aviation (pp. 3-25). San Diego, CA: Academic.
Heinrich, H., Petersen, D., & Roos, N. (1931). Industrial accident prevention: A safety management approach (1st ed.). New York: McGraw-Hill.
Jensen, R. (1995). Pilot judgment and crew resource management. Brookfield, VT: Avebury Aviation.
National Transportation Safety Board. (1994). Aircraft accident report: Uncontrolled collision with terrain, American International Airways flight 808, Douglas DC-8-61, N814CK, U.S. Naval Air Station, Guantanamo Bay, Cuba, August 18, 1993 (Report No. NTSB/AAR-94/04). Washington, DC: Author.
National Transportation Safety Board. (1996). Annual review of aircraft accident data: U.S. general aviation calendar year 1994 (Report No. NTSB/ARG-96/01). Washington, DC: Author.
Neri, D., & Shappell, S. (1993). The effect of combat on the work/rest schedules and fatigue of naval aviators during Operations Desert Shield and Desert Storm. Military Psychology, 6, 141-162.
Neri, D., Shappell, S., & DeJohn, C. (1992). Simulated sustained flight operations and performance. Part 1: Effects of fatigue. Military Psychology, 4, 137-155.
O'Hare, D., Wiggins, M., Batt, R., & Morrison, D. (1994). Cognitive failure analysis for aircraft accident investigation. Ergonomics, 37, 1855-1869.
Rasmussen, J. (1982). Human errors: A taxonomy for describing human malfunction in industrial installations. Journal of Occupational Accidents, 4, 311-333.
Rasmussen, J. (1986). Information processing and human machine interaction: An approach to cognitive engineering. New York: North-Holland.
Reason, J. (1990). Human error. New York: Cambridge University Press.
Shappell, S., & Neri, D. (1993).
The effect of combat on aircrew subjective readiness during Operation Desert Shield/Storm. The International Journal of Aviation Psychology, 3, 231-252.
Shappell, S., & Wiegmann, D. (1996). U.S. naval aviation mishaps 1977-1992: Differences between single- and dual-piloted aircraft. Aviation, Space, and Environmental Medicine, 67, 66-69.
Shappell, S., Wiegmann, D., & Schmidt, J. (1996). The model of unsafe operations: A human factors approach to accident investigation. Aviation, Space, and Environmental Medicine, 67, 109.
U.S. Government Accounting Office. (1996). Military aircraft safety: Significant improvements since 1975 (Report No. GAO/NSIAD-96-69BR). Washington, DC: Author.
Wiegmann, D., & Shappell, S. (1997). Human factors analysis of postaccident data: Applying theoretical taxonomies of human error. The International Journal of Aviation Psychology, 7, 67-81.

Manuscript first received October 1996
