Mislabeled (Incorrect Patient) Stem Cells Transfused


Clinical Summary

Patient A is a 42-year-old female diagnosed with non-Hodgkin lymphoma. Three years after a splenectomy and several rounds of chemotherapy, her lymphoma returned and her physician ordered an autologous stem cell transplant. After undergoing induction chemotherapy and stem cell collection on April 1st, she was scheduled to receive her stem cell infusion on April 14th.

Patient B is a 63-year-old female newly diagnosed with multiple myeloma. Her physician prescribed induction chemotherapy followed by autologous stem cell collection. Patient B’s stem cells were also harvested on April 1st and cryopreserved for potential future need. All procedures took place at the same facility: a large academic medical center.

On April 12th, the hospital’s Transfusion Service/Stem Cell Laboratory received a request to prepare patient A’s stem cells for infusion at 10 A.M. on April 14th. On the morning of April 14th, the laboratory technologist removed four canisters from the freezer, each labeled as containing a unit of stem cells for patient A. He thawed the stem cell products, pooled them, and issued the pooled product to the patient’s floor. At 2 P.M., while reconciling the morning’s paperwork with the tags from the units and the canister labels, he noticed that one of the four canisters that were labeled for patient A actually contained a unit labeled for patient B. He immediately called the floor, but the pooled product had already been transfused (Fig. 11.1).

Preliminary Investigation

An investigation began immediately following detection of the error. The initial focus was on the bedside process. How could stem cells labeled for another patient have been transfused? It was quickly realized that the units, having been pooled and relabeled, no longer retained the information from the bags in which they were frozen. There was no information from the individual units in the pool available at the bedside, so the pre-transfusion bedside check could only confirm that the label on the pooled product matched the patient’s information.

Fig. 11.1 Case 1 Timeline—Mislabeled stem cells transfused

While staff on the patient’s unit were looking into the product administration phase of this event, the laboratory was working to determine how this mistake could have occurred. The technologist was experienced, having performed this procedure hundreds of times with no ill consequence. Working backwards in time, he recalled that during the thawing process he hadn’t checked the tags on the bags themselves against the labels on the canisters from which they were removed. He also did not ask for a “second person check.”

Causal Analysis Method

The method used to perform a root cause analysis in this chapter is known as causal tree or fault tree building. This technique provides a structured, standardized method for uncovering underlying actions, circumstances, and decisions that contributed to an event in question. The tree provides a visual representation of an event which includes all possible causes gathered during the investigation process [13]. The consequent or discovery event is at the top of the tree and is described in terms of the event’s consequences—harm, no harm, or a near miss (an event that could have reached the patient but was prevented by a barrier). The branches of the causal tree are constructed of precursors which reveal what “set up” the consequent event.

Precursors are displayed in both logical and chronological order, proceeding across and down the tree. By continuing to ask “why” at each step on each of the tree’s branches, all relevant precursors and the root causes of the event are revealed. Root causes are identified in the bottom boxes of each branch of the tree, and in these examples are coded using the Eindhoven Classification Model—Medical Version (also known as PRISMA) [14]. Causal trees provide a realistic view of how a system is functioning, as well as facilitate the creation of effective and lasting solutions.
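
For readers who wish to trend coded root causes across many events, the tree also maps naturally onto a simple data structure. The sketch below is only an illustration of that idea and is not part of any published PRISMA tooling; the node descriptions and the helper function are hypothetical, with codes taken from Table 11.2.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CausalNode:
    """One box in a causal tree: the consequent event, a precursor, or a root cause."""
    description: str
    code: Optional[str] = None                 # Eindhoven/PRISMA code (root causes only)
    precursors: List["CausalNode"] = field(default_factory=list)

def root_causes(node: CausalNode) -> List[CausalNode]:
    """The bottom boxes (leaves) of each branch are the root causes used for coding and trending."""
    if not node.precursors:
        return [node]
    found: List[CausalNode] = []
    for child in node.precursors:
        found.extend(root_causes(child))
    return found

# Hypothetical fragment of the Case 1 tree; descriptions are illustrative only.
tree = CausalNode(
    "Patient A received stem cells labeled for Patient B (one of four units, pooled)",
    precursors=[
        CausalNode(
            "Bag tags not checked against canister labels during thaw",
            precursors=[CausalNode("Verification step omitted", code="HRV")],
        ),
        CausalNode(
            "No second-person check requested",
            precursors=[CausalNode("Supervisor unavailable; interruptions discouraged", code="OC")],
        ),
    ],
)

for cause in root_causes(tree):
    print(f"{cause.code}: {cause.description}")
```

Collecting the coded leaves in this way is what supports the classification and trending purposes described above.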

Causal Analysis, Discussion, and Possible Solutions

In the causal tree that was built surrounding Case 1 (Fig. 11.2), the consequent event is described as “Patient A received stem cells labeled for Patient B (one of four units, pooled).” As described, this active error occurred after many preceding, latent events had occurred. Moving down the tree, we can see these latent factors as precursors leading up to the consequent event, as well as the root causes identified along with their codes for classification and trending purposes (Table 11.2). The following issues contributed to this event and were revealed by the investigation and the building of the causal tree.

Fig. 11.2 Case 1 Causal tree analysis—Mislabeled stem cells transfused (refer to Table 11.2 for an explanation of codes)

Human Failure

When event investigations begin, they typically focus on a human error, which is only one, and often the final, component of a chain of actions and decisions that “set up” the event to occur. Suggested corrective actions are then directed at changing human behavior. But if we only look for human errors, we will miss the technical and organizational system flaws, which are for the most part easier to fix than humans are. As illustrated in Table 11.2, the Eindhoven Classification Model—Medical Version of causes used in this case stresses a focus on technical and organizational issues before turning toward the human’s role in the event. James Reason has described two approaches to the problem of human fallibility: the person approach and the system approach. The person approach focuses on the active errors of individuals, blaming them for forgetfulness and inattention, while the system approach concentrates on the system’s “built-in” latent failures, focusing on minimizing or eliminating them, and reinforcing defenses to avert errors or mitigate their effects [15].

Table 11.2 Causal codes key (selected)

Code | Causal classification | Definition
HRM | Human rule based: monitoring | Monitoring of process or patient status. An example could be a trained technologist operating an automated instrument and not realizing that a pipette that dispenses reagents is clogged. Another example is a nurse not making the additional checks on a patient who is determined as “at risk” for falls.
HRV | Human rule based: verification | The correct and complete assessment of a situation, including related conditions of the patient and materials to be used, before beginning the task. An example would be failure to correctly identify a patient by checking the wristband.
HSS | Human skill based: slip | Failures in the performance of highly developed skills. An example could be a computer entry error or skipping a patient on the list for phlebotomy rounds.
OC | Organizational: culture | A collective approach and its attendant modes to safety and risk rather than the behavior of just one individual. Groups might establish their own modes of function as opposed to following prescribed methods. An example of this is not paging a manager/physician on the weekend because that was not how the department operated; “It’s just not done.”
OM | Organizational: management priorities | Internal management decisions in which safety is relegated to an inferior position when faced with conflicting demands or objectives. This may be a conflict between production needs and safety. An example of this is decisions made about staffing levels.
OP | Organizational: protocols/procedures | Failure resulting from the quality or availability of hospital policies and procedures. They may be too complicated, inaccurate, unrealistic, absent, or poorly presented.
TD | Technical: design | Inadequate design of equipment, software, or materials. Can include the design of workspace, software, forms, etc. An example is a form that requires a supervisory review but does not contain a field for signature.


In laboratory settings, which are heavily focused on the individual, institutions have traditionally relied upon the “blame and shame” and “blame and (re)train” approaches for staff involved in events. Not surprisingly, these approaches create strong pressure on individuals to cover up mistakes rather than admit to them [16] and do nothing to fix the system flaws that set them up to make the error [17]. Experts in the fields of error and human performance reject these methods [18].

In this case it was quickly realized that the earlier labeling error was undetectable to the nurses at the bedside. The pooled stem cell unit contained the contents from the four canisters and had been labeled with the name that matched three of the canisters and the patient’s wristband. The final check for the right blood product and patient at the bedside is typically the “2-person, 3-way check,” requiring active verification of paperwork, product label, and patient identity carried out by two qualified staff. In this instance, the checking procedure had been carried out properly. However, this check is often performed improperly, incompletely, or not at all [9]. To account for the human in the process, particularly when distracted or interrupted, bar-code labeling [19], radiofrequency identification tags (RFID) [20], and even palm vein-scanning technology [21] are increasingly being utilized in patient identification.
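
As a rough illustration of what such technologies enforce, the sketch below models an electronic three-way identity match. It is not the interface of any specific bar-code or RFID product; the function and identifiers are invented for this example.

```python
def bedside_identity_check(wristband_id: str, product_label_id: str, order_id: str) -> bool:
    """Electronic analogue of the 3-way check: wristband, product label, and order
    must all carry the same patient identifier, and none may be blank."""
    ids = {wristband_id.strip(), product_label_id.strip(), order_id.strip()}
    return len(ids) == 1 and "" not in ids

# Hypothetical scanned values at the bedside
if not bedside_identity_check("MRN-004217", "MRN-004217", "MRN-004217"):
    raise RuntimeError("Identifier mismatch: hold the transfusion and contact the transfusion service.")
```

Even so, in this case a bedside scan of the pooled product would still have matched Patient A, because the mislabeling occurred upstream; electronic checks harden the step they cover but do not replace verification earlier in the process.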

Safety Culture

As Reason points out, the quality of a reporting culture is contingent on its response to error [11]. If its routine response is to blame, then reports will be few and far between. However, if blame is limited to behavior which is in reckless disregard of patient safety or is of malicious intent, then it is a component of a “just culture,” is supportive of reporting, and enhances the opportunity for organizational learning [22].

In this case the technologist did not check his own work, nor did he ask for a second-person review of his work. Although both of these verification steps were part of the written procedure, he was the same technologist who had frozen the cells on the day that they were harvested and believed there was little value in rechecking his own work. He also had never before seen a discrepancy at this stage of the process. Dekker has written about flawed systems and the dangerous complacency resulting from the fact that “Murphy’s Law” is wrong, and that “What can go wrong usually goes right,” prompting us to then draw the wrong conclusion [23]. In reality, eventually things that usually go right will go wrong. We cannot afford to have a safety culture that allows us to become “mindless” about our seemingly flawless processes. “Nothing recedes like success” is an often quoted reminder of this caution [24]. In high reliability organizations (HROs) [25], staff regard success with suspicion and act mindfully, paying close attention to even weak signals in order to detect a problem in its earliest stage.

The technologist did not ask for a second-person check of the canisters against the labeled bags of stem cells inside them because the only other person present in his department was the supervisor, and she had said that she was not to be interrupted that morning. The technologist recognized that he was not following procedure, but believed that it was acceptable. His diligence and vigilance had decreased based on past experience, and he did not consider that an event of low probability could occur.

What allowed this situation to occur? Organizational culture can play a significant role in contributing to error. Organizational culture is characterized by both visible behaviors and the more subtle values and assumptions that underlie them. The cultural focus on individual autonomy, for example, seems to conflict with desired norms of teamwork, problem reporting, and learning [26]. In this case, it was not only acceptable for the supervisor to be unavailable by choice, but the technologist also did not feel comfortable going against her wishes to interrupt her, even when it was called for by protocol.

Westrum has defined culture as “the organisation’s pattern of response to the problems and opportunities it encounters” [27]. He states that leaders, by their preoccupations, shape a unit’s culture through their symbolic actions and rewards and punishment, and these become the preoccupations of the workforce. The supervisor in this case sent a clear message to staff, setting a culture that allowed, or even encouraged, the technologist to break with procedure.

Safety culture is not easy to change. A group in the UK performed a literature review covering the processes and outcomes of culture change programs and found little consensus over whether organizational cultures are capable of being shaped by external manipulation to beneficial effect [28]. Key factors that appear to impede culture change are wide and varied. They concluded that while managing culture is increasingly viewed as an essential part of health system reform, transforming cultures that are multidimensional, complex, and often lacking leadership is a huge task. Other studies have shown that culture change is slow and difficult, but possible. Moving toward high reliability, including preoccupation with failure, reluctance to simplify interpretations, sensitivity to operations, commitment to resilience, and deference to expertise, can push an organization’s culture in the right direction [25].

Developed by the Department of Defense’s Patient Safety Program in collaboration with the Agency for Healthcare Research and Quality, TeamSTEPPS is a system that has proven to be one solution in improving safety culture [29]. It is an evidence-based system designed to improve communication among healthcare professionals by integrating teamwork principles into all areas. TeamSTEPPS has been shown to facilitate optimization of information, people, and resources, resolve conflicts and improve information sharing, and eliminate barriers to quality and safety. Communication and teamwork between the transfusion service and the nursing department, for instance, can go a long way in reducing sample collection errors.

Redundancy

A second-person review of one’s work, often performed by a passive visual check, is a common approach in transfusion medicine in both detecting and preventing error. However, passive checks have significant potential for distraction, and dual responsibility does not necessarily enhance human performance. In fact, in a system where two people are responsible for the same task, neither person feels truly responsible. Paradoxically, such safety procedures may provide less, rather than more, assurance [30].

Various views exist concerning the value of redundancy. Normal Accident Theorists (NATs) argue that adding redundancy can increase the complexity of a system, and efforts to increase safety through the use of redundant safety devices may backfire, inadvertently making systems fail more often and creating new categories of accidents [31]. Sagan describes the phenomenon of “social shirking,” another way in which redundancy can backfire. Diffusion of responsibility is a common phenomenon in which individuals or groups reduce their reliability in the belief that others will pick up the slack. In transfusion services, backup systems are often humans who are aware of one another. Awareness of redundant units can decrease system reliability if it leads an individual to shirk unpleasant duties because it is assumed that someone else will take care of them [32].

On the other hand, High Reliability Theorists (HRTs) believe that duplication and backups are necessary for system safety. Redundancy in High Reliability Organizations (HROs) takes the form of skepticism, in that when an independent effort is made to confirm a report, there are now two observations where there was originally one. Redundancy involves doubts that precautions are sufficient and wariness about claimed levels of competence. HRTs believe that all humans are fallible and that skeptics improve reliability [33].

A slight modification of the traditional two-person check, however, has the potential to resolve this issue. It has been estimated that the average failure rate of error detection by one person passively checking another person’s work after the fact is as high as one in ten. But the failure rate in a two-person team, with one person performing and the second person monitoring, and then switching roles, is approximately one in 100,000 trials [34].

Human Factors

The investigation also showed that the technologist would have been much more likely to ask for the second-person check if there had been a distinct place for that person’s sign-off on the form. Forms and records can and should be designed to effectively control potential mistakes [35]. Had there been a check-box and a place for a signature, the omission would have been apparent, presenting itself in a way that pointed out that the technologist did not follow procedure, and in retrospect perhaps he might not have made the same decision. Computerization of forms and records can also be used as a tool to flag omissions.
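
As a minimal sketch of how a computerized record can flag such an omission, the example below refuses to close a processing record while the second-person verification field is blank. The field names are hypothetical and are not drawn from any particular laboratory information system.

```python
REQUIRED_SIGNOFFS = ("performed_by", "verified_by")   # "verified_by" is the second-person check

def close_processing_record(record: dict) -> None:
    """Refuse to close the record while any required sign-off field is blank,
    so an omitted verification is flagged instead of silently passing."""
    missing = [name for name in REQUIRED_SIGNOFFS if not record.get(name)]
    if missing:
        raise ValueError("Record cannot be closed; missing sign-off(s): " + ", ".join(missing))

# The omission is surfaced at the time of the work, not at a later reconciliation.
close_processing_record({"performed_by": "technologist 1", "verified_by": ""})   # raises ValueError
```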

The room in which the laboratory procedures were carried out had two separate product processing areas. The investigation revealed that each patient’s stem cell products were in their appropriate canisters, having been handled with appropriate segregation. However, one of the units in one of the canisters was then labeled with the incorrect patient information. The labeling of the blood bags was the critical failure step in the process. The technologist had prepared the labels on a desk outside of the cell preparation areas. He believes that the labels were switched before they were placed into their appropriate segregated areas for the labeling process.

This failure was compounded by a failure to check the blood bag label against the canister label at the time the blood bag was placed into the container, and by the absence of the required documentation of this verification. The potentially detectable error remained undetected.

Knowing the potential risk of confusing products from different patients, the lab had previously put into place the human factors “group or distinguish” rule [36], segregating the areas for each patient’s products, but it was not sufficient in this case. Perhaps if the protocol had called for the labels to be prepared in the segregated areas, this error might have been avoided. Other human factors solutions could have made this error visible. If, for instance, each patient’s information was printed on a different color label, the error would have been made obvious before the units were frozen and certainly before they were pooled.

Process

Methods utilized to intercept errors should appear as far upstream in the process as possible. The paperwork/blood bag reconciliation which ultimately made the error visible occurred too late in the process to prevent the incorrectly labeled stem cells from being transfused. It is clear that the additional time to perform this reconciliation in the laboratory, before the units are released to the patient, is warranted. This way of thinking reminds us once again that patient verification processes may be needed everywhere, not just at the bedside [37].
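
A minimal sketch of what an upstream reconciliation might look like, assuming each bag label and canister label can be captured before thawing and pooling; the data layout and function are invented for illustration.

```python
def reconcile_before_release(intended_patient: str, units: list) -> None:
    """Check every constituent unit against its canister and the intended recipient
    before thawing, pooling, and issue, rather than reconciling paperwork afterwards."""
    problems = []
    for unit in units:
        if unit["bag_patient"] != intended_patient:
            problems.append(f"bag {unit['bag_id']} is labeled for {unit['bag_patient']}")
        if unit["bag_patient"] != unit["canister_patient"]:
            problems.append(f"bag {unit['bag_id']} does not match canister {unit['canister_id']}")
    if problems:
        raise ValueError("Do not pool or issue: " + "; ".join(problems))

# In a scenario like Case 1, the mismatch is caught while the product is still in the lab.
units = [
    {"bag_id": "U1", "bag_patient": "Patient A", "canister_id": "C1", "canister_patient": "Patient A"},
    {"bag_id": "U2", "bag_patient": "Patient B", "canister_id": "C2", "canister_patient": "Patient A"},
]
reconcile_before_release("Patient A", units)   # raises ValueError before the product leaves the lab
```

Performed before issue, a check of this kind surfaces the mismatch while the product is still in the laboratory, which is the point the text argues for.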
