Kenneth R. Laughery
Rice University
Houston, Texas, USA
During the last 23 years of my career as a university professor, I have had an opportunity to spend some of my time as a consultant outside the university, an activity that most American universities permit or even encourage. One type of activity in which I have been involved is work as an expert witness in litigation. It is not my intent today to talk about the role of the ergonomics expert in litigation. Rather, I am going to talk about a concept related to the issues addressed by the ergonomics expert; namely, the concept of "human error." My intent is to start with a few remarks about how the notion of human error fits into most people's thinking, non-ergonomists' thinking, about safety. Then I will outline some current perspectives on human error; and, finally, I will describe a few examples, drawn from my expert witness work, of system failures where an overemphasis on human error may be misplaced.

The concept of human error has a prominent place in the history of human factors and ergonomics. It has always seemed to me that "human error" is related to what might be thought of as a predisposition in the field of safety and accident analysis: when there is a failure in a system where the human is a component, a strong tendency exists to blame the human. The system failed because the human made an error. This predisposition is often shared by engineers, marketing people, juries in American courts, and, unfortunately, sometimes people whose work is concerned with safety. Even today, when an airplane crashes, it is not uncommon to hear the news media focusing on the question "What did the pilot do wrong?"

I will briefly mention three areas in which "human error" seems to be a prominent focus in thinking about accidents or other forms of system failure. First, in the domain of highway safety, much of the emphasis is on issues such as: How to get the driver to obey the laws (such as speed limits)?
How to prevent people from driving under the influence of alcohol? How to limit use of a telephone while driving? All of these questions are, of course, very legitimate concerns, but they are not the only concerns in highway safety. Vehicle design and highway design have an important place as well, and they are important to ergonomists where they interface with and influence the performance of the human operator of the vehicle. I will return to highway safety in an example later.

Another arena where we tend to focus on "human error" instead of "system error" is industrial or workplace safety. Here again, there is often a tendency to assume that the primary fault for an accident and injury lies with the employee. Several years ago I had an opportunity to work with Shell Oil Company on a project to develop a coding system for an employee accident/injury database. At the start of that project we examined literally thousands of previously recorded accident reports. The standard report form used by Shell, which was typical of accident report forms used by most manufacturing organizations in the United States, contained a section for identifying the cause of the accident. The vast majority of the accident reports examined attributed fault to the person who was injured. "He or she was not paying attention" was the most common phrase. Usually, there was little or no effort to consider the role of other potential factors such as the equipment being used, the environment in which the accident happened, the task that was being carried out, or how these aspects of the system fit or did not fit the person.

The third area I will mention is consumer product safety. Obviously, consumer products span a broad range of safety issues. Mechanical, electrical, and/or chemical hazards may characterize the safety issues of various products.
But here again, there seems to be a strong tendency to assume that when a person gets injured using a product, the fault lies with the user.

Now, it is not my intent to imply that humans do not make errors, nor that it is inappropriate to consider human error in the context of highway safety, workplace safety, and consumer product safety. But, clearly, there are more productive approaches to dealing with system failures than focusing blame on the human component and then limiting improvements to trying to modify the human's behavior. This is the context in which I turn to some current perspectives on human error.

HUMAN ERROR: SOME PERSPECTIVES
Recent years have witnessed a substantial increase in theoretical and empirical research on human error. Much of this work has a cognitive perspective; that is, it employs a human information processing framework. A landmark publication on this topic was a 1990 book by James Reason titled Human Error. For anyone interested in this topic, I would consider this book a most significant source. Space does not permit an extensive review of the theoretical perspectives offered by Reason. Instead, I will try to present a brief overview of what I believe are a few salient points or ideas.

I begin by quoting Reason's definition of error: "Error will be taken as a generic term that encompasses all those occasions in which a planned sequence of mental or physical activities fails to achieve its intended outcome, and when these failures cannot be attributed to the intervention of some chance agency." (Reason, p. 9)

There are a couple of key points to be noted about this definition and the perspective it offers about the concept of error. First, the notion that an error has occurred is based on the failure to achieve an outcome or goal. The system did not do what it was supposed to do; something went wrong in terms of what was desired. The second point focuses on the words "planned" and "intended" in the definition. The concept of intention is central to this entire perspective or theoretical framework regarding human error. Intention in this context consists of two elements: (1) an end-state to be attained (the goal), and (2) a means (the actions) by which the end-state is to be achieved. Both of these elements, the end-state and the means to achieve it, may vary greatly in their degree of specificity. To some extent, the specificity of intentions relates to another widely recognized cognitive framework that is useful for understanding and classifying errors; namely, Rasmussen's (1983) distinction between three levels of performance.
These performance levels are labeled skill-based, rule-based, and knowledge-based. I am going to define these three levels of performance, then return to the concept of intention, and then tie them together as a perspective about human error. Rasmussen's three levels of performance essentially correspond to decreasing levels of familiarity or experience with the environment or task.

Skill-based - At this level people carry out routine, highly practiced tasks in what can be characterized as a largely automatic fashion. Except for occasional checking, little conscious effort is associated with performance at this level.

Rule-based - When we have to take into account some change in a situation and modify our preprogrammed behavior, and it is a situation with which we are familiar or have been trained to deal, we engage in rule-based behavior. It is called rule-based because we are applying a stored rule of the form: if (this situation), then (this action).

Knowledge-based - This level of performance comes into play in novel situations where we have no applicable rules. It may be a form of problem solving employing analytical reasoning and stored knowledge.

So, this skill-rule-knowledge system for classifying performance relates our experience to how much cognitive effort or resources we have to allocate to carry out a task. It also has implications for the kinds of human errors we might expect in carrying out tasks.

Returning to intention: Reason's perspective on and classification scheme for human error essentially starts with the concept of intention. Consider three alternatives:

Alternative 1 - A person intends to carry out an action, does so correctly, the action is appropriate, and the desired goal is achieved. No error has occurred.

Alternative 2 - A person intends to carry out an action, does so correctly, the action is inappropriate, and the desired goal is not achieved. An error has occurred.

Alternative 3 - A person intends to carry out an action, the action is appropriate, but it is done incorrectly, and the desired goal is not achieved. An error has occurred.
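The three alternatives amount to a small decision procedure. As an illustration only, here is a Python sketch; the function, argument names, and labels are my own, not Reason's:

```python
def classify_outcome(action_appropriate, executed_correctly, goal_achieved):
    """Map Reason's three alternatives to outcomes.

    A simplification: it assumes the person intended the action.
    The argument names and labels are illustrative, not Reason's.
    """
    if goal_achieved:
        # Alternative 1: right plan, right execution, goal achieved.
        return "no error (alternative 1)"
    if not action_appropriate:
        # Alternative 2: the person did what was intended,
        # but the intention or plan itself was wrong.
        return "error in the intention or plan (alternative 2)"
    if not executed_correctly:
        # Alternative 3: the plan was right, the execution was flawed.
        return "error in the execution (alternative 3)"
    return "no error"
```

For example, `classify_outcome(True, False, False)` represents a driver who chose the right maneuver but performed it badly: an error in execution rather than in intention.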
Consider alternatives 2 and 3, situations where errors occurred. They differ in an important respect. In alternative 2, the person did what he/she intended to do, but it did not work; the intention or plan or rule was wrong. This type of error is referred to as a mistake: "Mistakes may be defined as deficiencies or failures in the judgmental and/or inferential processes involved in the selection of an objective or in the specification of the means to achieve it." (Reason, p. 9) In short, the person intended to do the wrong thing.

In alternative 3, on the other hand, the person's intentions were correct, but the execution of the action was flawed - done incorrectly, or not done at all. Whether the action was done incorrectly or not done at all is itself an important distinction. When the appropriate action is carried out incorrectly, the error is classified as a slip. When the action is simply omitted or not carried out, the error is termed a lapse. Reason's definition is: "Slips and lapses are errors which result from some failure in the execution and/or storage stage of an action sequence."

An important point can now be made about the origin of slips, lapses, and mistakes. Slips relate to observable actions and are commonly associated with attentional or perceptual failures. Lapses are more internal events and generally involve failures of memory. Mistakes are failures at a higher level - with the mental processes involved in assessing the available information, planning, formulating intentions, and judging the likely consequences of the planned actions.

The above distinctions also have significant implications for how one addresses the various types of errors; that is, how one tries to prevent or correct them. Slips are dealt with by addressing attention issues: using good displays, minimizing distractions, and so forth. Lapses are addressed by using memory aids, minimizing time delays, etc.
Mistakes are dealt with by training, better procedural aids, etc.

I earlier mentioned Rasmussen's performance classification scheme: skill-, rule-, and knowledge-based behavior. Briefly, this scheme can be related to Reason's error classification. Slips and lapses tend to occur in carrying out skill-based tasks, where not a lot of cognitive effort is allocated to the task. Slight differences in situations may go unnoticed (an attention failure). A step in a procedure may simply be forgotten. Mistakes, on the other hand, occur in rule-based and knowledge-based tasks: a rule that seems appropriate is in fact not, or a correct problem solution is not found.

There is, of course, much more that could be said about current perspectives on human error. What I have described is a very simplified version of some of Reason's work, with a bit of Rasmussen added. The important point is what these ideas or perspectives offer us as ergonomists. They offer us a way of thinking about and analyzing systems involving humans from a human-cognition point of view. They give us some guidance about the cognitive characteristics, abilities, and limitations of people that has implications not just for what kinds of errors might be made, but also for how to design systems so as to prevent errors, or, as we sometimes say, forgive errors. Further, the concept of "induced error" is still alive, and understanding these perspectives on human error should help minimize the likelihood of designing systems that induce errors.

EXAMPLES FROM LITIGATION
As noted earlier, in a lawsuit where someone has been injured or killed, there is a strong tendency to blame the injured party for the incident. One of the roles of the ergonomics expert is to provide a broader perspective and analysis of potential causal factors and their interactions. In the remainder of my talk I will note three examples of lawsuits where the defense contention was that the human component was at fault. I will also attempt to relate them to the human error classification scheme just described, and offer suggestions as to how the system failure might have been prevented. The examples are drawn from a highway accident, a workplace accident, and an event involving a consumer product.

The Highway Accident and Injury
Three years ago there was a great deal of publicity in the United States, and in some other parts of the world, regarding accidents involving Firestone tires and Ford Explorer vehicles. There was a substantial number of such accidents, not just in the United States but in other countries such as Saudi Arabia, Venezuela, and Malaysia. Indeed, in August 2000 Firestone recalled millions of tires of the type involved. The typical accident occurred when a tire on the Explorer, usually a rear tire, detreaded (the tread came off), the vehicle swerved, the driver made steering inputs to control the vehicle but lost control, the vehicle rolled, and occupants were injured or killed. As a result, the injured occupants and/or heirs of the deceased sued Ford and Firestone. There are several potential ergonomics issues involved in this litigation, involving both the tires and the vehicle. I am going to focus on just one: the control of the vehicle.

To elaborate slightly on the accident event, when a rear tire loses its tread while the Explorer is traveling on the highway, the vehicle will tend to turn in the direction of the tire failure. For example, if the left rear tire fails, the vehicle will turn to the left. The vehicle response is the result of a couple of factors, including the drag created by the tire tread and the fact that the left rear tire is now smaller in diameter than the right rear tire. When the vehicle turns to the left, and imagine this happening on the highway at a speed of 100 kph, the driver's reaction is to turn the steering wheel to the right to keep the vehicle on the highway and under control. But something else happens when the tire detreads; namely, the control dynamics of the vehicle also change. Specifically, the vehicle goes from an understeer to an oversteer situation. Essentially this change has to do with the vehicle response to a control (steering) input. It is the classic ratio of system response to control input.
It means that before the tire failure a steering wheel turn of 20 degrees caused the Explorer to turn a certain amount; but after the tire failure a 20-degree steering wheel turn caused the vehicle to turn a greater amount. As a result, when the Explorer turned to the left, the driver turned the wheel to the right an amount that he/she thought, from experience, would correct the vehicle direction. Instead, the vehicle turned to the right more sharply, which then led to the driver making a steering input to the left, which also was now excessive. Two or three such oscillations and, given the high center of gravity of the vehicle, the rollover started.

One of the defenses in this litigation is that the driver should have been able to control the vehicle and instead, he/she overreacted; that is, the driver oversteered or overcorrected. But I would maintain that this was not a human/driver error. It was a system error. The driver was carrying out a control input that experience had taught was an appropriate action for steering this vehicle. The vehicle characteristics had changed in a way that drivers could not be expected to know or predict, and the control action was no longer appropriate. This is not the kind of circumstance in which the system designer should put the human operator. It is my understanding from engineering reports and testimony that it is quite feasible to design the vehicle so that the vehicle response to steering inputs is minimally affected by an on-highway tire failure.

Where does this example fit into the error classification scheme? If one considers that the driver was applying a rule based on experience, and the rule is now not appropriate, it could be viewed as a rule-based error, a mistake. But the solution here is not to try to train the driver to steer differently if a rear tire fails on the highway. Such an approach is not likely to be successful for dealing with such emergency situations.
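The growing oscillation described above can be illustrated with a toy feedback loop. This is purely illustrative arithmetic, not real vehicle dynamics; the function and its numbers are my own:

```python
def heading_after_corrections(gain, steps=3, initial_error=1.0):
    """Toy model of a driver repeatedly correcting a heading error.

    The driver steers to cancel the full error, expecting a 1:1
    response; the vehicle actually responds with `gain` times the
    expected amount. Illustrative only -- not real vehicle dynamics.
    """
    error = initial_error
    history = [error]
    for _ in range(steps):
        # Each correction removes gain * error instead of exactly error.
        error = error - gain * error
        history.append(error)
    return history
```

In this toy model, a gain of 1 cancels the error in one correction (`[1.0, 0.0, ...]`), while a gain above 2, standing in for the unexpected oversteer response, makes each overcorrection larger than the error it was meant to fix: `heading_after_corrections(2.5)` yields `[1.0, -1.5, 2.25, -3.375]`, a swing that grows and reverses direction each time, much like the two or three oscillations preceding the rollover.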
The solution is to design the vehicle so that the rule, the experience for steering the vehicle, still applies.

Workplace Safety
There are many examples of workplace accidents and injuries leading to litigation that could be cited. Because of the laws in the United States, it is generally not possible to sue an employer if an employee is injured on the job. Much of this litigation, therefore, focuses on the injured person suing the manufacturer of some equipment that was being used on the job. I will mention one type of accident that accounts for several cases on which I have worked. The accidents involved situations where one employee was performing maintenance or service work on a large piece of equipment, the equipment was shut off, another employee turned it on, and the maintenance worker was injured or killed.

1. At a facility where trees were being debarked, a part of the line where the logs were being transported became clogged and the equipment was shut off. While an employee was removing debris from the line to clear it, another worker started the machinery, and the worker removing the debris was caught in the equipment and injured.

2. At a facility where used cardboard was being baled by a ram device, there was a potential problem with unwanted material having inadvertently been placed in the trough with the cardboard, and the equipment was shut off. An employee went to inspect the material, another employee started the machine, and the employee in the trough was caught by the piston or ram and killed.

These two accident examples have some common characteristics:

a. The worker who was injured or killed was not within the view of the employee who activated the controls to start the machine;

b. The employee who started the machine did not know the other worker was involved in the maintenance or service task;

c. No tag-out or lock-out procedure had been carried out to prevent the equipment from being restarted or to warn employees that maintenance or service work was in progress. A typical reason for not doing a tag-out or lock-out was that the maintenance or inspection task was expected to be brief.
The strong tendency in these kinds of accidents is to blame one or both of the employees involved - the injured person and/or the worker who restarted the equipment - and I do not suggest that some such allocation of responsibility is inappropriate. But consider some other system issues. The maintenance, service, or inspection tasks being carried out by the injured worker were foreseeable; that is, they were tasks that were required from time to time. A failure-modes analysis certainly would have identified starting the machine while a worker is in the danger zone as a safety issue. The controls for the equipment were located so that important information needed by the operator for safe operation was not always available; that is, he could not see the potential danger zones. The rules for tag-out and lock-out were not sufficiently clear to cover circumstances where the hazard might exist for only a brief time. There are some other issues, but these are enough.

Was there human error involved? The answer, of course, is yes. The judgment of the injured employee not to tag out or lock out the controls because the task would be brief was an error, a wrong rule, a mistake. The decision of the other employee to activate the controls without adequate information as to whether or not someone else might be in a danger zone was an error, a wrong rule, a mistake.

How can such errors be prevented? One possibility is controls design: locate controls so that needed information is available to the operator, or make the information available through some type of display. Another possibility is more stringent rules about tag-out and lock-out procedures that would preclude any type of maintenance or service work without initiating such procedures. Still another is the use of kill switches that deactivate the controls when a person enters a danger zone. These solutions are well-established concepts.
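The last two ideas can be combined in a simple interlock: the machine refuses a start command while any lock-out tag is registered or while a presence sensor reports someone in the danger zone. The following Python sketch is hypothetical (the class, method names, and sensor assumptions are mine), not a real safety controller:

```python
class MachineInterlock:
    """Toy interlock: starting is permitted only when no lock-out tags
    are registered and no presence sensor reports the danger zone
    occupied. Hypothetical sketch, not a real safety controller.
    """

    def __init__(self):
        self.lockout_tags = set()   # IDs of workers who applied a tag
        self.zone_occupied = False  # e.g., fed by a light curtain or mat

    def apply_lockout(self, worker_id):
        # Each worker applies his or her own tag before servicing.
        self.lockout_tags.add(worker_id)

    def release_lockout(self, worker_id):
        # Only the worker's own tag is removed; others still block a start.
        self.lockout_tags.discard(worker_id)

    def start_permitted(self):
        # Both conditions must hold; a "brief" task is no exception.
        return not self.lockout_tags and not self.zone_occupied
```

Note that the sketch enforces exactly the rule the accidents lacked: even when the worker skips the tag because the task will be brief, the presence sensor alone still blocks the restart.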
What is interesting is that such circumstances still exist, such accidents still occur, and people are often inclined not to look beyond the human error.

Consumer Product Safety
My third and last example concerns safety issues associated with consumer products; here the potential examples seem almost unlimited. Incidentally, this category refers to products that are sold to the general public. I have selected one example.

First, a background point needs to be made. One of the public health problems in the United States is obesity: people being overweight. Many people are motivated to lose weight, but they also like to eat. In short, losing weight is hard, and they look for easy solutions. One type of product in pill form to which many have turned is called a "dietary supplement." This type of product contains ingredients, ephedra and caffeine, for curbing the appetite and increasing energy. Many manufacturers market these supplements under a variety of names. One such product is manufactured by a company named Metabolife. The pill has had remarkable marketing success. Furthermore, studies indicate that it is somewhat successful: people take it, they lose weight, they are pleased, they tell their overweight friends about it, the friends take it, and so forth.

BUT the Metabolife pill also has side effects, some of which can be quite serious, and in some cases fatal. There are contraindications; that is, circumstances or conditions under which a person should not use this dietary supplement. For instance, people with high blood pressure, heart problems, or immune deficiencies should not use it. Among the potential serious side effects are heart attack and stroke. Recently there has been increasing litigation in the United States regarding these products. People who have suffered serious side effects are suing Metabolife and other manufacturers, and I have been involved in some of the litigation.
While there are some design issues in the lawsuits involving the chemical content of the product, there are also warnings issues; that is, the adequacy of the information provided to the consumer regarding the contraindications and side effects. This is the issue with which I have been involved.

The pills come in bottles with labels, and there are several issues regarding the label; I will briefly mention a couple. The central panel on the label indicates what the product is and provides some information that can be regarded as reassuring the consumer of its effectiveness and its safety. "Dietary supplement" might sound like it is a food. "Natural herbs" sounds positive. "Independently laboratory tested for safety" sounds like if you take it everything will be OK. Finally, a logo has the appearance of a seal of approval, which it is; but the approval has nothing to do with safety. Then on the side panel of the label, in rather dense text, is some information about health hazards associated with using the product.

Again, there are several issues associated with this labeling; I will note three:

a. The front panel is highly positive and reassuring that the product is safe. It does not motivate the user to read or analyze the side panel for safety information; indeed, it lessens the likelihood that the user will do so.

b. The format of the safety information is poor: the text is dense, with little white space.

c. The content of the safety information does not communicate the seriousness of the potential consequences of using the product. Much research has shown that this aspect of warnings plays a significant role in compliance.

The defense in this litigation is that the user was adequately warned; and, where there were contraindications, the consumer should not have taken the pills. In short, the user made an error and was at fault.
The plaintiff's arguments are that, from a display point of view, the safety information was badly presented, emphasizing the product's safety and de-emphasizing its hazards. The error is in the form of a faulty decision based on faulty information. In terms of the classification scheme, it is probably a knowledge-based error, a mistake. The solution, if the product cannot be modified to address the side effects, would involve better displays of the safety information.

CONCLUSIONS
This paper has provided a very brief overview of some of the current perspectives on human error. Recent work in this field has focused heavily on cognitive issues; that is, the characteristics, abilities, and limitations of the human information processing system and their implications for understanding human error. The tendency to allocate blame for system failures to the human component is still widespread. The examples I presented in the context of litigation in the United States were intended to show how fault might be allocated differently, and how the search for solutions might be approached differently, depending on one's perspective. The ergonomics expert has much to contribute to this broader analysis of system failures.

REFERENCES
Reason, J. (1990). Human Error. Cambridge: Cambridge University Press.

Rasmussen, J. (1983). Skills, rules, knowledge: Signals, signs, and symbols and other distinctions in human performance models. IEEE Transactions on Systems, Man, and Cybernetics, 13, 257-267.


