Glossary of patient safety terms

From WikiMD's Wellness Encyclopedia

A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z

# - A

The terms active and latent as applied to errors were coined by Reason. Active errors occur at the point of contact between a human and some aspect of a larger system (e.g., a human–machine interface). They are generally readily apparent (e.g., pushing an incorrect button, ignoring a warning light) and almost always involve someone at the frontline. Active failures are sometimes referred to as errors at the sharp end, figuratively referring to a scalpel. In other words, errors at the sharp end are noticed first because they are committed by the person closest to the patient. This person may literally be holding a scalpel (e.g., an orthopedist operating on the wrong leg) or figuratively be administering any kind of therapy (e.g., a nurse programming an intravenous pump) or performing any aspect of care. Latent errors (or latent conditions), in contrast, refer to less apparent failures of organization or design that contributed to the occurrence of errors or allowed them to cause harm to patients. To complete the metaphor, latent errors are those at the other end of the scalpel—the blunt end—referring to the many layers of the health care system that affect the person "holding" the scalpel.

See Primer. An adverse event (i.e., injury resulting from medical care) involving medication use.

Examples:

  1. anaphylaxis to penicillin
  2. major hemorrhage from heparin
  3. aminoglycoside-induced renal failure
  4. agranulocytosis from chloramphenicol

As with the more general term adverse event, the occurrence of an ADE does not necessarily indicate an error or poor quality of care. ADEs that involve an element of error (either of omission or commission) are often referred to as preventable ADEs. Medication errors that reached the patient but by good fortune did not cause any harm are often called potential ADEs. For instance, a serious allergic reaction to penicillin in a patient with no prior such history is an ADE, but so is the same reaction in a patient who has a known allergy history but receives penicillin due to a prescribing oversight. The former occurrence would count as an adverse drug reaction or non-preventable ADE, while the latter would represent a preventable ADE. If a patient with a documented serious penicillin allergy received a penicillin-like antibiotic but happened not to react to it, this event would be characterized as a potential ADE.
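The distinctions above can be summarized as a simple decision rule. The sketch below is illustrative only; the function name and the two yes/no inputs are constructs for this example, not a standard coding scheme, and it simply encodes the categories described in the preceding paragraph.

```python
# Minimal sketch of the ADE taxonomy described above; labels mirror the text.
# "error_involved": an error of omission or commission occurred in medication use.
# "harm_occurred": the patient was actually injured by the medication.
def classify_medication_event(harm_occurred, error_involved):
    if harm_occurred and error_involved:
        return "preventable ADE"
    if harm_occurred:
        return "adverse drug reaction (non-preventable ADE)"
    if error_involved:
        # Assumes the erroneous medication actually reached the patient.
        return "potential ADE"
    return "no adverse drug event"

# Penicillin prescribed despite a documented allergy, causing a reaction:
print(classify_medication_event(harm_occurred=True, error_involved=True))    # preventable ADE
# First-ever anaphylaxis in a patient with no known allergy history:
print(classify_medication_event(harm_occurred=True, error_involved=False))   # adverse drug reaction
# Allergic patient receives the drug but happens not to react:
print(classify_medication_event(harm_occurred=False, error_involved=True))   # potential ADE
```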

An ameliorable ADE is one in which the patient experienced harm from a medication that, while not completely preventable, could have been mitigated. For instance, a patient taking a cholesterol-lowering agent (statin) may develop muscle pains and eventually progress to a more serious condition called rhabdomyolysis. Failure to periodically check a blood test that assesses muscle damage or failure to recognize this possible diagnosis in a patient taking statins who subsequently develops rhabdomyolysis would make this event an ameliorable ADE: harm from medical care that could have been lessened with earlier, appropriate management. Again, the initial development of some problem was not preventable, but the eventual harm that occurred need not have been so severe, hence the term ameliorable ADE.

Adverse effect produced by the use of a medication in the recommended manner—i.e., a drug side effect. These effects range from nuisance effects (e.g., dry mouth with anticholinergic medications) to severe reactions, such as anaphylaxis to penicillin. Adverse drug reactions represent a subset of the broad category of adverse drug events—specifically, they are non-preventable ADEs.

Any injury caused by medical care.

Examples:

  1. pneumothorax from central venous catheter placement
  2. anaphylaxis to penicillin
  3. postoperative wound infection
  4. hospital-acquired delirium (or "sundowning") in elderly patients

Identifying something as an adverse event does not imply "error," "negligence," or poor quality care. It simply indicates that an undesirable clinical outcome resulted from some aspect of diagnosis or therapy, not an underlying disease process. Thus, pneumothorax from central venous catheter placement counts as an adverse event regardless of insertion technique. Similarly, postoperative wound infections count as adverse events even if the operation proceeded with optimal adherence to sterile procedures, the patient received appropriate antibiotic prophylaxis in the perioperative setting, and so on. (See also iatrogenic).

See Primer. Being discharged from the hospital can be dangerous for patients. Nearly 20% of patients experience an adverse event in the first 3 weeks after discharge, including medication errors, health care–associated infections, and procedural complications.

See Primer. Computerized warnings and alarms are used to improve safety by alerting clinicians of potentially unsafe situations. However, this proliferation of alerts may have negative implications for patient safety as well.

The common cognitive trap of allowing first impressions to exert undue influence on the diagnostic process. Clinicians often latch on to features of a patient's presentation that suggest a specific diagnosis. Often, this initial diagnostic impression will prove correct, hence the use of the phrase anchoring heuristic in some contexts, as it can be a useful rule of thumb to "always trust your first impressions." However, in some cases, subsequent developments in the patient's course will prove inconsistent with the first impression. Anchoring bias refers to the tendency to hold on to the initial diagnosis, even in the face of disconfirming evidence.

The Acute Physiology and Chronic Health Evaluation (APACHE) scoring system has been widely used in the United States. APACHE II is the most widely studied version of this instrument (a more recent version, APACHE III, is proprietary, whereas APACHE II is publicly available); it derives a severity score from such factors as underlying disease and chronic health status. Other points are added for 12 physiologic variables (e.g., hematocrit, creatinine, Glasgow Coma Score, mean arterial pressure) measured within 24 hours of admission to the ICU. The APACHE II score has been validated in several studies involving tens of thousands of ICU patients.

The balance of decision-making power or the steepness of command hierarchy in a given situation. Members of a crew or organization with a domineering, overbearing, or dictatorial team leader experience a steep authority gradient. Expressing concerns, questioning, or even simply clarifying instructions would require considerable determination on the part of team members who perceive their input as devalued or frankly unwelcome. Most teams require some degree of authority gradient; otherwise roles are blurred and decisions cannot be made in a timely fashion. However, effective team leaders consciously establish a command hierarchy appropriate to the training and experience of team members. Authority gradients may occur even when the notion of a team is less well defined. For instance, a pharmacist calling a physician to clarify an order may encounter a steep authority gradient, based on the tone of the physician's voice or a lack of openness to input from the pharmacist. A confident, experienced pharmacist may nonetheless continue to raise legitimate concerns about an order, but other pharmacists might not.

The tendency to assume, when judging probabilities or predicting outcomes, that the first possibility that comes to mind (i.e., the most cognitively "available" possibility) is also the most likely possibility. For instance, suppose a patient presents with intermittent episodes of very high blood pressure. Because episodic hypertension resembles textbook descriptions of pheochromocytoma, a memorable but uncommon endocrinologic tumor, this diagnosis may immediately come to mind. A clinician who infers from this immediate association that pheochromocytoma is the most likely diagnosis would be exhibiting availability bias. In addition to resemblance to classic descriptions of disease, personal experience can also trigger availability bias, as when the diagnosis underlying a recent patient's presentation immediately comes to mind when any subsequent patient presents with similar symptoms. Particularly memorable cases may similarly exert undue influence in shaping diagnostic impressions.

B

Probabilistic reasoning in which test results (not just laboratory investigations, but history, physical exam, or any aspect of the diagnostic process) are combined with prior beliefs about the probability of a particular disease. One way of recognizing the need for a Bayesian approach is to recognize the difference between the performance of a test in a population vs. in an individual. At the population level, we can say that a test has a sensitivity and specificity of, say, 90%—i.e., 90% of patients with the condition of interest have a positive result and 90% of patients without the condition have a negative result. In practice, however, a clinician needs to attempt to predict whether an individual patient with a positive or negative result does or does not have the condition of interest. This prediction requires combining the observed test result not just with the known sensitivity and specificity, but also with the chance the patient could have had the disease in the first place (based on demographic factors, findings on exam, or general clinical gestalt).
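As an illustration of this kind of reasoning, the short sketch below combines a pretest probability with a test's sensitivity and specificity via Bayes' theorem. The function name and the example numbers are assumptions for this sketch only, not clinical recommendations.

```python
def post_test_probability(pretest_prob, sensitivity, specificity, positive_result=True):
    """Combine a pretest probability with test characteristics using Bayes' theorem.

    pretest_prob: clinician's estimate that the patient has the disease (0-1)
    sensitivity:  P(positive test | disease present)
    specificity:  P(negative test | disease absent)
    """
    if positive_result:
        true_pos = sensitivity * pretest_prob
        false_pos = (1 - specificity) * (1 - pretest_prob)
        return true_pos / (true_pos + false_pos)
    false_neg = (1 - sensitivity) * pretest_prob
    true_neg = specificity * (1 - pretest_prob)
    return false_neg / (false_neg + true_neg)

# The same 90%-sensitive, 90%-specific positive result yields very different
# post-test probabilities depending on the pretest probability:
print(post_test_probability(0.05, 0.90, 0.90))  # ~0.32 for a low-risk presentation
print(post_test_probability(0.50, 0.90, 0.90))  # ~0.90 for a 50/50 presentation
```

The contrast between roughly 32% and roughly 90% for the same positive result is precisely why population-level test characteristics must be combined with the individual patient's prior probability.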

Beers criteria define medications that generally should be avoided in ambulatory elderly patients, doses or frequencies of administration that should not be exceeded, and medications that should be avoided in older persons known to have any of several common conditions. The criteria were originally developed using a formal consensus process for combining reviews of the evidence with expert input. The criteria for inappropriate use address commonly used categories of medications such as sedative-hypnotics, antidepressants, antipsychotics, antihypertensives, nonsteroidal anti-inflammatory agents, oral hypoglycemics, analgesics, dementia treatments, platelet inhibitors, histamine-2 blockers, antibiotics, decongestants, iron supplements, muscle relaxants, gastrointestinal antispasmodics, and antiemetics. The criteria were intended to guide clinical practice, but also to inform quality assurance review and health services research.

Most would agree that prescriptions for medications deemed inappropriate according to Beers criteria represent poor quality care. Unfortunately, harm does not only occur from receipt of these inappropriately prescribed medications. In one comprehensive national study of medication-related emergency department visits for elderly patients, most problems involved common and important medications not considered inappropriate according to the Beers criteria—principally, oral anticoagulants (e.g., warfarin), antidiabetic agents (e.g., insulin), and antiplatelet agents (aspirin and clopidogrel).

An attribute or achievement that serves as a standard for other providers or institutions to emulate. Benchmarks differ from other standard of care goals, in that they derive from empiric data—specifically, performance or outcomes data. For example, a statewide survey might produce risk-adjusted 30-day rates for death or other major adverse outcomes. After adjusting for relevant clinical factors, the top 10% of hospitals can be identified in terms of particular outcome measures. These institutions would then provide benchmark data on these outcomes. For instance, one might benchmark "door-to-balloon" time at 90 minutes, based on the observation that the top-performing hospitals all had door-to-balloon times in this range. In regard to infection control, benchmarks would typically be derived from national or regional data on the rates of relevant nosocomial infections. The lowest 10% of these rates might be regarded as benchmarks for other institutions to emulate.
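As a rough illustration of deriving a benchmark from performance data, the sketch below identifies the best-performing decile and reports the highest rate within it as the benchmark. The hospital names and infection rates are hypothetical, invented for this example.

```python
# Hypothetical risk-adjusted infection rates (per 1,000 catheter-days); lower is better.
rates = {
    "Hospital A": 2.1, "Hospital B": 0.6, "Hospital C": 1.4, "Hospital D": 3.0,
    "Hospital E": 0.9, "Hospital F": 1.1, "Hospital G": 2.7, "Hospital H": 0.7,
    "Hospital I": 1.8, "Hospital J": 1.0,
}

sorted_rates = sorted(rates.values())           # best performers first
decile_size = max(1, len(sorted_rates) // 10)   # top 10% of institutions
benchmark_rate = sorted_rates[decile_size - 1]  # highest rate within the best decile

print(f"Benchmark: {benchmark_rate} infections per 1,000 catheter-days")
```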

The prominent warning labels (generally printed inside black boxes) on packages for certain prescription medications in the United States. These warnings typically arise from post-market surveillance or post-approval clinical trials that bring to light serious adverse reactions. The U.S. Food and Drug Administration (FDA) subsequently may require a pharmaceutical company to place a black box warning on the labeling or packaging of the drug. Although medications with black box warnings often enjoy widespread use and, with cautious use, typically do not result in harm, these warnings remain important sources of safety information for patients and health care providers. They also emphasize the importance of continued, post-market surveillance for adverse drug reactions for all medications, especially relatively new ones.

The blunt end refers to the many layers of the health care system not in direct contact with patients, but which influence the personnel and equipment at the sharp end who do contact patients. The blunt end thus consists of those who set policy, manage health care institutions, and design medical devices, and other people and forces, which, though removed in time and space from direct patient care, nonetheless affect how care is delivered. Thus, an error programming an intravenous pump would represent a problem at the sharp end, while the institution's decision to use multiple different types of infusion pumps, making programming errors more likely, would represent a problem at the blunt end. The terminology of "sharp" and "blunt" ends corresponds roughly to active failures and latent conditions.

C

See Primer. Though a seemingly simple intervention, checklists have played a leading role in the most significant successes of the patient safety movement, including the near-elimination of central line–associated bloodstream infections in many intensive care units.

Any system designed to improve clinical decision-making related to diagnostic or therapeutic processes of care. Typically a decision support system responds to "triggers" or "flags"—specific diagnoses, laboratory results, medication choices, or complex combinations of such parameters—and provides information or recommendations directly relevant to a specific patient encounter.

CDSSs address activities ranging from the selection of drugs (e.g., the optimal antibiotic choice given specific microbiologic data) or diagnostic tests to detailed support for optimal drug dosing and support for resolving diagnostic dilemmas. Structured antibiotic order forms represent a common example of paper-based CDSSs. Although such systems are still commonly encountered, many people equate CDSSs with computerized systems in which software algorithms generate patient-specific recommendations by matching characteristics, such as age, renal function, or allergy history, with rules in a computerized knowledge base.

The distinction between decision support and simple reminders can be unclear, but usually reminder systems are included as decision support if they involve patient-specific information. For instance, a generic reminder (e.g., "Did you obtain an allergy history?") would not be considered decision support, but a warning (e.g., "This patient is allergic to codeine.") that appears at the time of entering an order for codeine would be. A recent systematic review estimated the pooled effects for simple computer reminders and more complex decision support provided at the point of care (i.e., as clinicians entered orders in computerized provider order entry systems or performed clinical documentation in electronic medical records).
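To make the reminder-versus-decision-support distinction concrete, here is a minimal, hypothetical sketch of a patient-specific alert of the kind described above. The drug list, the class groupings, and the function are invented for illustration and do not represent any actual CDSS product or clinical rule set.

```python
# Hypothetical patient-specific allergy alert (not a real CDSS or clinical rule set).
# A generic reminder ignores patient data; this rule fires only when the ordered
# drug falls in a class matching an allergy recorded for this particular patient.
DRUG_CLASSES = {
    "codeine": "opioid",
    "morphine": "opioid",
    "amoxicillin": "penicillin",
    "piperacillin": "penicillin",
}

def allergy_alert(ordered_drug, documented_allergies):
    """Return an alert message if the order conflicts with a documented allergy."""
    drug_class = DRUG_CLASSES.get(ordered_drug, ordered_drug)
    for allergy in documented_allergies:
        if allergy in (ordered_drug, drug_class):
            return (f"ALERT: patient has a documented {allergy} allergy; "
                    f"review order for {ordered_drug}.")
    return None  # no patient-specific concern; no alert is shown

print(allergy_alert("codeine", ["codeine"]))
print(allergy_alert("amoxicillin", ["penicillin"]))
```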

An event or situation that did not produce patient injury, but only because of chance. This good fortune might reflect robustness of the patient (e.g., a patient with penicillin allergy receives penicillin, but has no reaction) or a fortuitous, timely intervention (e.g., a nurse happens to realize that a physician wrote an order in the wrong chart). Such events have also been termed near miss incidents.

Having the necessary knowledge or technical skill to perform a given procedure within the bounds of success and failure rates deemed compatible with acceptable care. The medical education literature often refers to core competencies, which include not just technical skills with respect to procedures or medical knowledge, but also competencies with respect to communicating with patients, collaborating with other members of the health care team, and acting as a manager or agent for change in the health system.

Complexity theory provides an approach to understanding the behavior of systems that exhibit non-linear dynamics, or the ways in which some adaptive systems produce novel behavior not expected from the properties of their individual components. Such behaviors emerge as a result of interactions between agents at a local level in the complex system and between the system and its environment.

Complexity theory differs importantly from systems thinking in its emphasis on the interaction between local systems and their environment (such as the larger system in which a given hospital or clinic operates). It is often tempting to ignore the larger environment as unchangeable and therefore outside the scope of quality improvement or patient safety activities. According to complexity theory, however, behavior within a hospital or clinic (e.g., non-compliance with a national practice guideline) can often be understood only by identifying interactions between local attributes and environmental factors.


See Primer. Computerized provider order entry systems ensure standardized, legible, and complete orders, and—especially when paired with decision support systems—have the potential to sharply reduce medication prescribing errors.

The tendency to focus on evidence that supports a working hypothesis, such as a diagnosis in clinical medicine, rather than to look for evidence that refutes it or provides greater support to an alternative diagnosis. Suppose that a 65-year-old man with a past history of angina presents to the emergency department with acute onset of shortness of breath. The physician immediately considers the possibility of cardiac ischemia, so asks the patient if he has experienced any chest pain. The patient replies affirmatively. Because the physician perceives this answer as confirming his working diagnosis, he does not ask if the chest pain was pleuritic in nature, which would decrease the likelihood of an acute coronary syndrome and increase the likelihood of pulmonary embolism (a reasonable alternative diagnosis for acute shortness of breath accompanied by chest pain). The physician then orders an EKG and cardiac troponin. The EKG shows nonspecific ST changes and the troponin returns slightly elevated.

Of course, ordering an EKG and testing cardiac enzymes is appropriate in the work-up of acute shortness of breath, especially when it is accompanied by chest pain and in a patient with known angina. The problem is that these tests may be misleading, since positive results are consistent not only with acute coronary syndrome but also with pulmonary embolism. To avoid confirmation bias in this case, the physician might have obtained an arterial blood gas or a D-dimer level. Abnormal results for either of these tests would be relatively unlikely to occur in a patient with an acute coronary syndrome (unless complicated by pulmonary edema), but likely to occur with pulmonary embolism. These results could be followed up by more direct testing for pulmonary embolism (e.g., with a helical CT scan of the chest), whereas normal results would allow the clinician to proceed with greater confidence down the road of investigating and managing cardiac ischemia.

This vignette was presented as if information were sought in sequence. In many cases, especially in acute care medicine, clinicians have the results of numerous tests in hand when they first meet a patient. The results of these tests often do not all suggest the same diagnosis. The appeal of accentuating confirmatory test results and ignoring nonconfirmatory ones is that it minimizes cognitive dissonance.

A related cognitive trap that may accompany confirmation bias and compound the possibility of error is "anchoring bias"—the tendency to stick with one's first impressions, even in the face of significant disconfirming evidence.

Crew resource management (CRM), also called crisis resource management in some contexts (e.g., anesthesia), encompasses a range of approaches to training groups to function as teams, rather than as collections of individuals. Originally developed in aviation, CRM emphasizes the role of human factors—the effects of fatigue, expected or predictable perceptual errors (such as misreading monitors or mishearing instructions), as well as the impact of different management styles and organizational cultures in high-stress, high-risk environments. CRM training develops communication skills, fosters a more cohesive environment among team members, and creates an atmosphere in which junior personnel will feel free to speak up when they think that something is amiss. Some CRM programs emphasize education on the settings in which errors occur and the aspects of team decision-making conducive to "trapping" errors before they cause harm. Other programs may provide more hands-on training involving simulated crisis scenarios followed by debriefing sessions in which participants assess their own and others' behavior.

A term made famous by a classic human factors study by Cooper of "anesthetic mishaps," though the term had first been coined in the 1950s. Cooper and colleagues brought the technique of critical incident analysis to a wide audience in health care but followed the definition of the originator of the technique. They defined critical incidents as occurrences that are "significant or pivotal, in either a desirable or an undesirable way," though Cooper and colleagues (and most others since) chose to focus on incidents that had potentially undesirable consequences. This concept is best understood in the context of the type of investigation that follows, which is very much in the style of root cause analysis. Thus, significant or pivotal means that there was significant potential for harm (or actual harm), but also that the event has the potential to reveal important hazards in the organization. In many ways, it is the spirit of the expression in quality improvement circles, "every defect is a treasure." In other words, these incidents, whether near misses or disasters in which significant harm occurred, provide valuable opportunities to learn about individual and organizational factors that can be remedied to prevent similar incidents in the future.

D

Any system for advising or providing guidance about a particular clinical decision at the point of care. For example, a copy of an algorithm for antibiotic selection in patients with community acquired pneumonia would count as clinical decision support if made available at the point of care. Increasingly, decision support occurs via a computerized clinical information or order entry system. Computerized decision support includes any software employing a knowledge base designed to assist clinicians in decision making at the point of care.

Typically a decision support system responds to "triggers" or "flags"—specific diagnoses, laboratory results, medication choices, or complex combinations of such parameters—and provides information or recommendations directly relevant to a specific patient encounter. For instance, ordering an aminoglycoside for a patient with creatinine above a certain value might trigger a message suggesting a dose adjustment based on the patient’s decreased renal function.
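A minimal sketch of such a trigger appears below. The drug list, creatinine cutoff, and message wording are hypothetical placeholders; a real system would apply validated renal-dosing rules (for example, based on estimated creatinine clearance) rather than a single threshold.

```python
# Hypothetical "trigger" rule of the kind described above; values are illustrative only.
AMINOGLYCOSIDES = {"gentamicin", "tobramycin", "amikacin"}
CREATININE_FLAG = 1.5  # mg/dL, an invented threshold for this sketch

def renal_dosing_trigger(ordered_drug, serum_creatinine):
    """Suggest a dose review when an aminoglycoside is ordered despite reduced renal function."""
    if ordered_drug in AMINOGLYCOSIDES and serum_creatinine > CREATININE_FLAG:
        return (f"Creatinine {serum_creatinine} mg/dL: consider dose adjustment or "
                f"extended-interval dosing for {ordered_drug}.")
    return None

print(renal_dosing_trigger("gentamicin", 2.3))
```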


See Primer. Thousands of patients die every year due to diagnostic errors. While clinicians' cognitive biases play a role in many diagnostic errors, underlying health care system problems also contribute to missed and delayed diagnoses.


See Primer. Many victims of medical errors never learn of the mistake, because the error is simply not disclosed. Physicians have traditionally shied away from discussing errors with patients, due to fear of precipitating a malpractice lawsuit and embarrassment and discomfort with the disclosure process.


See Primer. Popular media often depicts physicians as brilliant, intimidating, and condescending in equal measure. This stereotype, though undoubtedly dramatic and even amusing, obscures the fact that disruptive and unprofessional behavior by clinicians poses a definite threat to patient safety.


See Primer. Long and unpredictable work hours have been a staple of medical training for centuries. In 2003, the Accreditation Council for Graduate Medical Education (ACGME) implemented new rules limiting duty hours for all residents to reduce fatigue. The implementation of resident duty-hour restrictions has been controversial, as evidence regarding its impact on patient safety has been mixed.


E

An act of commission (doing something wrong) or omission (failing to do the right thing) that leads to an undesirable outcome or significant potential for such an outcome. For instance, ordering a medication for a patient with a documented allergy to that medication would be an act of commission. Failing to prescribe a proven medication with major benefits for an eligible patient (e.g., low-dose unfractionated heparin as venous thromboembolism prophylaxis for a patient after hip replacement surgery) would represent an error of omission.

Errors of omission are more difficult to recognize than errors of commission but likely represent a larger problem. In other words, there are likely many more instances in which the provision of additional diagnostic, therapeutic, or preventive modalities would have improved care than there are instances in which the care provided quite literally should not have been provided. In many ways, this point echoes the generally agreed-upon view in the health care quality literature that underuse far exceeds overuse, even though the latter historically received greater attention. (See definition for Underuse, Overuse, Misuse.) In addition to commission vs. omission, three other dichotomies commonly appear in the literature on errors: active failures vs. latent conditions, errors at the sharp end vs. errors at the blunt end, and slips vs. mistakes.

Error chain generally refers to the series of events that led to a disastrous outcome, typically uncovered by a root cause analysis. Sometimes the chain metaphor carries the added sense of inexorability, as many of the causes are tightly coupled, such that one problem begets the next. A more specific meaning of error chain, especially when used in the phrase "break the error chain," relates to the common themes or categories of causes that emerge from root cause analyses. These categories go by different names in different settings, but they generally include (1) failure to follow standard operating procedures, (2) poor leadership, (3) breakdowns in communication or teamwork, (4) overlooking or ignoring individual fallibility, and (5) losing track of objectives. Used in this way, "break the error chain" is shorthand for an approach in which team members continually address these links as a crisis or routine situation unfolds. The checklists that are included in teamwork training programs have categories corresponding to these common links in the error chain (e.g., establish a team leader, assign roles and responsibilities, and monitor your teammates).

Use of the phrase "evidence-based" in connection with an assertion about some aspect of medical care—a recommended treatment, the cause of some condition, or the best way to diagnose it—implies that the assertion reflects the results of medical research, as opposed to, for example, a personal opinion (plausible or widespread as that opinion might be). Given the volume of medical research and the not-infrequent occurrence of conflicting results from different studies addressing the same question, the phrase "reflects the results of medical research" should be clarified as "reflects the preponderance of results from relevant studies of good methodological quality."

The concept of evidence-based treatments has particular relevance to patient safety, because many recommended methods for measuring and improving safety problems have been drawn from other high-risk industries, without any studies to confirm that these strategies work well in health care (or, in many cases, that they work well in the original industry). The lack of evidence supporting widely recommended (sometimes even mandated) patient safety practices contrasts sharply with the rest of clinical medicine. While individual practitioners may employ diagnostic tests or administer treatments of unproven value, professional organizations typically do not endorse such aspects of care until well-designed studies demonstrate that these diagnostic or treatment strategies confer net benefit to patients (i.e., until they become evidence-based). Certainly, diagnostic and therapeutic processes do not become standard of care or in any way mandated until they have undergone rigorous evaluation in well-designed studies.

In patient safety, by contrast, goals established at state and national levels (sometimes even mandated by regulatory agencies or by law) often reflect ideas that have undergone little or no empiric evaluation. Just as in clinical medicine, promising safety strategies can sometimes turn out to confer no benefit or even to create new problems; hence the need for rigorous evaluation of candidate patient safety strategies. That said, just how high to set the bar for the evidence required to justify actively disseminating patient safety and quality improvement strategies is a subject that has received considerable attention in recent years. Some leading thinkers in patient safety argue that an evidence bar comparable to that used in more traditional clinical medicine would be too high, given the difficulty of studying complex social systems such as hospitals and clinics, and the high costs of studying interventions such as rapid response teams or computerized order entry.

F


The extent to which a technical concept, instrument, or study result is plausible, usually because its findings are consistent with prior assumptions and expectations.

Error analysis may involve retrospective investigations (as in Root Cause Analysis) or prospective attempts to predict "error modes." Different frameworks exist for predicting possible errors. One commonly used approach is failure mode and effect analysis (FMEA), in which the likelihood of a particular process failure is combined with an estimate of the relative impact of that error to produce a "criticality index." By combining the probability of failure with the consequences of failure, this index allows for the prioritization of specific processes as quality improvement targets. For instance, an FMEA analysis of the medication dispensing process on a general hospital ward might break down all steps from receipt of orders in the central pharmacy to filling automated dispensing machines by pharmacy technicians. Each step in this process would be assigned a probability of failure and an impact score, so that all steps could be ranked according to the product of these two numbers. Steps ranked at the top (i.e., those with the highest "criticality indices") would be prioritized for error proofing.

A common process used to prospectively identify error risk within a particular process. FMEA begins with a complete process mapping that identifies all the steps that must occur for a given process to occur (e.g., programming an infusion pump or preparing an intravenous medication in the pharmacy). With the process mapped out, the FMEA then continues by identifying the ways in which each step can go wrong (i.e., the failure modes for each step), the probability that each error will be detected (i.e., so that it can be corrected before causing harm), and the consequences or impact of the error not being detected. The estimates of the likelihood of a particular process failure, the chance of detecting such failure, and its impact are combined numerically to produce a criticality index.

This criticality index provides a rough quantitative estimate of the magnitude of hazard posed by each step in a high-risk process. Assigning a criticality index to each step allows prioritization of targets for improvement. For instance, an FMEA analysis of the medication-dispensing process on a general hospital ward might break down all steps from receipt of orders in the central pharmacy to filling automated dispensing machines by pharmacy technicians. Each step in this process would be assigned a probability of failure and an impact score, so that all steps could be ranked according to the product of these two numbers. Steps ranked at the top (i.e., those with the highest criticality indices) would be prioritized for error proofing.
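The criticality calculation can be illustrated with a short sketch. The dispensing steps and all scores below are invented for this example; real FMEA teams use their own scoring scales (often 1-10 ratings rather than probabilities).

```python
# Hypothetical FMEA scores for a medication-dispensing process; all values are invented.
# Each step: (description, probability of failure, probability failure goes undetected, impact)
steps = [
    ("Transcribe order in central pharmacy", 0.02, 0.3, 8),
    ("Select drug from pharmacy shelf",      0.01, 0.5, 9),
    ("Fill automated dispensing machine",    0.03, 0.4, 7),
    ("Label patient-specific dose",          0.02, 0.2, 6),
]

def criticality_index(p_failure, p_undetected, impact):
    """Combine likelihood, detectability, and severity into a single ranking number."""
    return p_failure * p_undetected * impact

# Rank steps from highest to lowest criticality to prioritize error-proofing targets.
for step, pf, pu, impact in sorted(steps, key=lambda s: criticality_index(*s[1:]), reverse=True):
    print(f"{step}: criticality index = {criticality_index(pf, pu, impact):.3f}")
```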

FMEA makes sense as a general approach and it (or similar prospective error-proofing techniques) has been used in other high-risk industries. However, the reliability of the technique is not clear. Different teams charged with analyzing the same process may identify different steps in the process, assign different risks to the steps, and consequently prioritize different targets for improvement.


Failure to rescue is shorthand for the failure to prevent a clinically important deterioration (such as death or permanent disability) arising from a complication of an underlying illness (e.g., cardiac arrest in a patient with acute myocardial infarction) or a complication of medical care (e.g., major hemorrhage after thrombolysis for acute myocardial infarction). Failure to rescue thus provides a measure of the degree to which providers responded to adverse occurrences (e.g., hospital-acquired infections, cardiac arrest or shock) that developed on their watch. It may reflect the quality of monitoring, the effectiveness of actions taken once early complications are recognized, or both.

The technical motivation for using failure to rescue to evaluate the quality of care stems from the concern that some institutions might document adverse occurrences more assiduously than other institutions. Comparing hospitals on raw rates of in-hospital complications may therefore simply reward hospitals with poor documentation. However, if the medical record indicates that a complication has occurred, the response to that complication should provide an indicator of the quality of care that is less susceptible to charting bias.
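As a simple illustration, failure to rescue is commonly reported as the proportion of patients who die among those who developed a complication. The counts below are hypothetical, and real measures apply detailed inclusion criteria and risk adjustment.

```python
# Hypothetical counts for one hospital; real failure-to-rescue measures use
# detailed inclusion criteria and risk adjustment for both numerator and denominator.
patients_with_complication = 250   # e.g., postoperative sepsis, shock, or cardiac arrest
deaths_among_those_patients = 30

failure_to_rescue_rate = deaths_among_those_patients / patients_with_complication
print(f"Failure-to-rescue rate: {failure_to_rescue_rate:.1%}")  # 12.0%
```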

An aspect of a design that prevents a target action from being performed or allows its performance only if another specific action is performed first. For example, automobiles are now designed so that the driver cannot shift into reverse without first putting her foot on the brake pedal. Forcing functions need not involve device design. For instance, one of the first forcing functions identified in health care is the removal of concentrated potassium from general hospital wards. This action is intended to prevent the inadvertent preparation of intravenous solutions with concentrated potassium, an error that has produced small but consistent numbers of deaths for many years.

The "Five Rights"—administering the Right Medication, in the Right Dose, at the Right Time, by the Right Route, to the Right Patient—are the cornerstone of traditional nursing teaching about safe medication practice.

While the Five Rights represent goals of safe medication administration, they contain no procedural detail, and thus may inadvertently perpetuate the traditional focus on individual performance rather than system improvement. Procedures for ensuring each of the Five Rights must take into account human factors and systems design issues (such as workload, ambient distractions, poor lighting, problems with wristbands, ineffective double-check protocols, etc.) that can threaten or undermine even the most conscientious efforts to comply with the Five Rights. In the end, the Five Rights remain an important goal for safe medication practice, but one that may give the illusion of safety if not supported by strong policies and procedures, a system organized around modern principles of patient safety, and a robust safety culture.

A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z

H

See Primer. The process by which one health care professional updates another on the status of one or more patients for the purpose of taking over their care. Typical examples involve a physician who has been on call overnight telling an incoming physician about patients she has admitted so he can continue with their ongoing management, know what immediate issues to watch out for, and so on. Nurses similarly conduct a handover at the end of their shift, updating their colleagues about the status of the patients under their care and tasks that need to be performed. When the outgoing nurses return for their next duty period, they will in turn receive new updates during the change of shift handover.

Handovers in care have always carried risks: a professional who spent hours assessing and managing a patient, upon completion of her work, provides a brief summary of the salient features of the case to an incoming professional who typically has other unfamiliar patients he must get to know. The summary may omit key details, a risk exacerbated by an unstructured process and the rush to finish work. Even structured, fairly thorough summaries during handovers may fail to capture nuances that could subsequently prove relevant.

In addition to handoffs between professionals working in the same clinical unit, shorter lengths of stay in hospitals and other occupancy issues have increased transitions between settings, with patients moving more often from one ward to another or from one institution to another (e.g., from an acute care hospital to a rehabilitation facility or skilled nursing facility). Due to the increasing recognition of hazards associated with these transitions in care, the term "handovers" is often used to refer to the information transfer that occurs from one clinical setting to another (e.g., from hospital to nursing home), not just from one professional to another.


See Primer. Although long accepted by clinicians as an inevitable hazard of hospitalization, recent efforts demonstrate that relatively simple measures can prevent the majority of health care–associated infections. As a result, hospitals are under intense pressure to reduce the burden of these infections.


Individuals' ability to find, process, and comprehend the basic health information necessary to act on medical instructions and make decisions about their health. Numerous studies have documented the degree to which many patients do not understand basic information or instructions related to general aspects of their medical care, their medications, and procedures they will undergo. The limited ability to comprehend medical instructions or information in some cases reflects obvious language barriers (e.g., reviewing medication instructions in English with a patient who speaks very little English), but the scope of the problem reflects broader issues related to levels of education, cross-cultural issues, and overuse of technical terminology by clinicians.

Loosely defined or informal rules, often arrived at through experience or trial and error, used to make assessments and decisions (e.g., gastrointestinal complaints that wake patients up at night are unlikely to be benign in nature). Heuristics provide cognitive shortcuts in the face of complex situations, and thus serve an important purpose. Unfortunately, they can also turn out to be wrong, with frequently used heuristics often forming the basis for the many cognitive biases, such as anchoring bias, availability bias, confirmation bias, and others, that have received attention in the literature on diagnostic errors and medical decision making.


See Primer. High reliability organizations (HROs) are organizations or systems that operate in hazardous conditions but have fewer than their fair share of adverse events. Commonly discussed examples include air traffic control systems, nuclear power plants, and naval aircraft carriers. It is worth noting that, in the patient safety literature, HROs are considered to operate with nearly failure-free performance records, not simply better than average ones. This shift in meaning is somewhat understandable given that the failure rates in these other industries are so much lower than rates of errors and adverse events in health care, though the comparison glosses over the difference in significance of a "failure" in the nuclear power industry compared with one in health care. The point remains, however, that some organizations achieve consistently safe and effective performance records despite unpredictable operating environments or intrinsically hazardous endeavors. Detailed case studies of specific HROs have identified some common features, which have been offered as models for other organizations to achieve substantial improvements in their safety records. These features include:

Preoccupation with failure—the acknowledgment of the high-risk, error-prone nature of an organization's activities and the determination to achieve consistently safe operations.

Commitment to resilience—the development of capacities to detect unexpected threats and contain them before they cause harm, or bounce back when they do.

Sensitivity to operations—an attentiveness to the issues facing workers at the frontline. This feature comes into play when conducting analyses of specific events (e.g., frontline workers play a crucial role in root cause analyses by bringing up unrecognized latent threats in current operating procedures), but also in connection with organizational decision-making, which is somewhat decentralized. Management units at the frontline are given some autonomy in identifying and responding to threats, rather than adopting a rigid top-down approach.

A culture of safety, in which individuals feel comfortable drawing attention to potential hazards or actual failures without fear of censure from management.

In a very general sense, hindsight bias relates to the common expression "hindsight is 20/20." This expression captures the tendency for people to regard past events as expected or obvious, even when, in real time, the events perplexed those involved. More formally, one might say that after learning the outcome of a series of events—whether the outcome of the World Series or the steps leading to a war—people tend to exaggerate the extent to which they had foreseen the likelihood of its occurrence.

In the context of safety analysis, hindsight bias refers to the tendency to judge the events leading up to an accident as errors because the bad outcome is known. The more severe the outcome, the more likely that decisions leading up to this outcome will be judged as errors. Judging the antecedent decisions as errors implies that the outcome was preventable. In legal circles, one might use the phrase "but for," as in "but for these errors in judgment, this terrible outcome would not have occurred." Such judgments return us to the concept of "hindsight is 20/20." Those reviewing events after the fact see the outcome as more foreseeable and therefore more preventable than they would have appreciated in real time.


See Primer. Human factors engineering is the discipline that attempts to identify and address safety problems that arise due to the interaction between people, technology, and work environments.

The Health Insurance Portability and Accountability Act of 1996 (HIPAA) established federal regulations intended to increase the privacy and security of patient information during electronic transmission or communication of "protected health information" (PHI) among providers or between providers and payers or other entities.

"Protected health information" (PHI) includes all medical records and other individually identifiable health information. "Individually identifiable information" includes data that explicitly linked to a patient as well as health information with data items with a reasonable potential for allowing individual identification.

HIPAA also requires providers to offer patients certain rights with respect to their information, including the right to access and copy their records and the right to request amendments to the information contained in their records.

Administrative protections specified by HIPAA to promote the above regulations and rights include requirements for a Privacy Officer and staff training regarding the protection of patients’ information.

I

An adverse effect of medical care, rather than of the underlying disease (literally "brought forth by a healer," from the Greek iatros, "healer," and gennan, "to bring forth"); equivalent to adverse event.


See Primer. Patient safety event reporting systems are ubiquitous in hospitals and are a mainstay of efforts to detect safety and quality problems. However, while event reports may highlight specific safety concerns, they do not provide insights into the epidemiology of safety problems.

The process whereby a physician informs a patient about the risks and benefits of a proposed therapy or test. Informed consent aims to provide sufficient information about the proposed treatment and any reasonable alternatives so that the patient can exercise autonomy in deciding whether to proceed.

Legislation governing the requirements of, and conditions under which, consent must be obtained varies by jurisdiction. Most general guidelines require patients to be informed of the nature of their condition, the proposed procedure, the purpose of the procedure, the risks and benefits of the proposed treatments, the probability of the anticipated risks and benefits, alternatives to the treatment and their associated risks and benefits, and the risks and benefits of not receiving the treatment or procedure.

Although the goals of informed consent are irrefutable, consent is often obtained in a haphazard, pro forma fashion, with patients having little true understanding of procedures to which they have consented. Evidence suggests that asking patients to restate the essence of the informed consent improves the quality of these discussions and makes it more likely that the consent is truly informed.

J

The phrase "just culture" was popularized in the patient safety lexicon by a report that outlined principles for achieving a culture in which frontline personnel feel comfortable disclosing errors—including their own—while maintaining professional accountability. The examples in the report relate to transfusion safety, but the principles clearly generalize across domains within health care organizations.

Traditionally, health care's culture has held individuals accountable for all errors or mishaps that befall patients under their care. By contrast, a just culture recognizes that individual practitioners should not be held accountable for system failings over which they have no control. A just culture also recognizes that many individual or "active" errors represent predictable interactions between human operators and the systems in which they work. However, in contrast to a culture that touts "no blame" as its governing principle, a just culture does not tolerate conscious disregard of clear risks to patients or gross misconduct (e.g., falsifying a record, performing professional duties while intoxicated).

In summary, a just culture recognizes that competent professionals make mistakes and acknowledges that even competent professionals will develop unhealthy norms (shortcuts, "routine rule violations"), but has zero tolerance for reckless behavior.

L


The terms active and latent as applied to errors were coined by Reason. Latent errors (or latent conditions) refer to less apparent failures of organization or design that contributed to the occurrence of errors or allowed them to cause harm to patients. For instance, whereas the active failure in a particular adverse event may have been a mistake in programming an intravenous pump, a latent error might be that the institution uses multiple different types of infusion pumps, making programming errors more likely. Thus, latent errors are quite literally "accidents waiting to happen." Latent errors are sometimes referred to as errors at the blunt end, referring to the many layers of the health care system that affect the person "holding" the scalpel. Active failures, in contrast, are sometimes referred to as errors at the sharp end, or the personnel and parts of the health care system in direct contact with patients.

The acquisition of any new skill is associated with the potential for lower-than-expected success rates or higher-than-expected complication rates. This phenomenon is often known as a learning curve. In some cases, this learning curve can be quantified in terms of the number of procedures that must be performed before an operator can replicate the outcomes of more experienced operators or centers. While learning curves are almost inevitable when new procedures emerge or new providers are in training, minimizing their impact is a patient safety imperative. One option is to perform initial operations or procedures under the supervision of more experienced operators. Surgical and procedural simulators may play an increasingly important role in decreasing the impact of learning curves on patients, by allowing acquisition of relevant skills in laboratory settings.

A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z


M

A designation by the Magnet Hospital Recognition Program administered by the American Nurses Credentialing Center. The program has its genesis in a 1983 study conducted by the American Academy of Nursing that sought to identify hospitals that retained nurses for longer than average periods of time. The study identified institutional characteristics correlated with high retention rates, an important finding in light of a major nursing shortage at the time. These findings provided the basis for the concept of magnet hospital and led 10 years later to the formal Magnet Program.

Without taking anything away from the particular hospitals that have achieved Magnet status, the program as a whole has its critics. In fact, at least one state nurses' association (Massachusetts) has taken an official position critiquing the program, charging that its perpetuation reflects the financial interests of its sponsoring organization and the participating hospitals more than the goals of improving health care quality or improving working conditions for nurses. Regardless of the particulars of the Magnet Recognition Program and the lack of persuasive evidence linking magnet status to quality, to many the term magnet hospital connotes a hospital that delivers superior patient care and, partly on this basis, attracts and retains high-quality nurses.


See Primer. The concept of medical emergency teams (also known as rapid response teams) is that of a cardiac arrest team with more liberal calling criteria. Instead of just frank respiratory or cardiac arrest, medical emergency teams respond to a wide range of worrisome, acute changes in patients' clinical status, such as low blood pressure, difficulty breathing, or altered mental status. In addition to less stringent calling criteria, the concept of medical emergency teams de-emphasizes the traditional hierarchy in patient care in that anyone can initiate the call. Nurses, junior medical staff, or others involved in the care of patients can call for the assistance of the medical emergency team whenever they are worried about a patient's condition, without having to wait for more senior personnel to assess the patient and approve the decision to call for help.


See Primer. Unintended inconsistencies in medication regimens occur with any transition in care. Medication reconciliation refers to the process of avoiding such inadvertent inconsistencies by reviewing the patient's current medication regimen and comparing it with the regimen being considered for the new setting of care.

Mental models are psychological representations of real, hypothetical, or imaginary situations. Scottish psychologist Kenneth Craik (1943) first proposed mental models as the basis for anticipating events and explaining events (i.e., for reasoning). Though easiest to conceptualize in terms of mental pictures of objects (e.g., a DNA double helix or the inside of an internal combustion engine), mental models can also include "scripts" or processes and other properties beyond images. Mental models create differing expectations, which suggest different courses of action. For instance, when you walk into a fast-food restaurant, you are invoking a different mental model than when in a fancy restaurant. Based on this model, you automatically go to place your order at the counter, rather than sitting at a booth and expecting a waiter to take your order.

Metacognition refers to thinking about thinking—that is, reflecting on the thought processes that led to a particular diagnosis or decision to consider whether biases or cognitive short cuts may have had a detrimental effect. Numerous cognitive biases affect human reasoning. In some ways, metacognition amounts to playing devil's advocate with oneself when it comes to working diagnoses and important therapeutic decisions. However, the devil is often in the details—one must become familiar with the variety of specific biases that commonly affect medical reasoning. For instance, when discharging a patient with atypical chest pain from the emergency department, you might step back and consider how much the discharge diagnosis of musculoskeletal pain reflects the sign out as a "soft rule out" given by a colleague on the night shift. Or, you might mull over the degree to which your reaction to and assessment of a particular patient stemmed from his having been labeled a "frequent flyer." Another cognitive bias is that clinicians tend to assign more importance to pieces of information that required personal effort to obtain.

In some contexts, errors are dichotomized as slips or mistakes, based on the cognitive psychology of task-oriented behavior. Mistakes reflect failures during attentional behaviors—behavior that requires conscious thought, analysis, and planning, as in active problem solving. Rather than lapses in concentration (as with slips), mistakes typically involve insufficient knowledge, failure to correctly interpret available information, or application of the wrong cognitive heuristic or rule. Thus, choosing the wrong diagnostic test or ordering a suboptimal medication for a given condition represents a mistake. Mistakes often reflect lack of experience or insufficient training. Reducing the likelihood of mistakes typically requires more training, supervision, or occasionally disciplinary action (in the case of negligence).

Unfortunately, health care has typically responded to all errors as if they were mistakes, with remedial education and/or added layers of supervision. In point of fact, most errors are actually slips, which are failures of schematic behavior that occur due to fatigue, stress, or emotional distractions, and are prevented through sharply different mechanisms.



An event or situation that did not produce patient injury, but only because of chance. This good fortune might reflect robustness of the patient (e.g., a patient with penicillin allergy receives penicillin, but has no reaction) or a fortuitous, timely intervention (e.g., a nurse happens to realize that a physician wrote an order in the wrong chart). This definition is identical to that for close call.

N

See Primer. The list of never events has expanded over time to include adverse events that are unambiguous, serious, and usually preventable. While most are rare, when never events occur, they are devastating to patients and indicate serious underlying organizational safety problems.

Though less often cited than high reliability theory in the health care literature, normal accident theory has played a prominent role in the study of complex organizations. In contrast to the optimism of high reliability theory, normal accident theory suggests that, at least in some settings, major accidents become inevitable and, thus, in a sense, "normal."

Perrow proposed two factors that create an environment in which a major accident becomes increasingly likely over time: complexity and tight coupling. The degree of complexity envisioned by Perrow occurs when no single operator can immediately foresee the consequences of a given action in the system. Tight coupling occurs when processes are intrinsically time-dependent: once a process has been set in motion, it must be completed within a certain period of time. Importantly, normal accident theory contends that accidents become inevitable in complex, tightly coupled systems regardless of steps taken to increase safety. In fact, these steps sometimes increase the risk for future accidents through unintended collateral effects and general increases in system complexity.

Even if one does not believe the central contention of normal accident theory–that the potential for catastrophe emerges as an intrinsic property of certain complex systems–analyses informed by this theory's perspective have offered some fascinating insights into possible failure modes for high-risk organizations, including hospitals.

Normalization of deviance was coined by Diane Vaughan in her book The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA, in which she analyzes the interactions between various cultural forces within NASA that contributed to the Challenger disaster. Vaughan used this expression to describe the gradual shift in what is regarded as normal after repeated exposures to "deviant behavior" (behavior straying from correct [or safe] operating procedure). Corners get cut, safety checks are bypassed, and alarms are ignored or turned off, and these behaviors become normal—not just common, but stripped of their significance as warnings of impending danger. In their discussion of a catastrophic error in health care, Chassin and Becher used the phrase "a culture of low expectations." When a system routinely produces errors (paperwork in the wrong chart, major miscommunications between different members of a given health care team, patients in the dark about important aspects of their care), providers in the system become inured to malfunction. In such a system, what should be regarded as a major warning of impending danger is dismissed as normal operating procedure.

O[edit | edit source]

The onion model illustrates the multiple levels or layers of protection (as in the layers of an onion) in a complex, high-risk system such as any health care setting. These layers include external regulations (e.g., related to staffing levels or required organizational practices, such as medication reconciliation), organizational features such as a just culture, equipment and technology (e.g., computerized order entry), and education and training of personnel.

A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z

P[edit | edit source]

Fundamentally, patient safety refers to freedom from accidental or preventable injuries produced by medical care. Thus, practices or interventions that improve patient safety are those that reduce the occurrence of preventable adverse events.

See Primer. The vast majority of health care takes place in the outpatient, or ambulatory, setting, and a growing body of research has identified and characterized factors that influence safety in office practice, the types of errors commonly encountered in ambulatory care, and potential strategies for improving ambulatory safety.

Pay for performance, sometimes abbreviated as P4P, refers to the general strategy of promoting quality improvement by rewarding providers (meaning individual clinicians or, more commonly, clinics or hospitals) who meet certain performance expectations with respect to health care quality or efficiency.

Performance can be defined in terms of patient outcomes but is more commonly defined in terms of processes of care (e.g., the percentage of eligible diabetics who have been referred for annual retinal examinations, the percentage of children who have received immunizations appropriate for their age, the percentage of patients admitted to the hospital with pneumonia who receive antibiotics within 6 hours). Pay-for-performance initiatives reflect the efforts of purchasers of health care—from the federal government to private insurers—to use their purchasing power to encourage providers to develop whatever specific quality improvement initiatives are required to achieve the specified targets. Thus, rather than committing to a specific quality improvement strategy, such as a new information system or a disease management program, which may have variable success in different institutions, pay for performance creates a climate in which provider groups are strongly incentivized to find whatever solutions work for them.

See Primer. Long and unpredictable work hours have been a staple of medical training for centuries. However, little attention was paid to the patient safety effects of fatigue among residents until March 1984, when Libby Zion died due to a medication-prescribing error while under the care of residents in the midst of a 36-hour shift. In 2003, the Accreditation Council for Graduate Medical Education (ACGME) implemented new rules limiting work hours for all residents, with the key components being that residents should work no more than 80 hours per week and no more than 24 consecutive hours on duty, should not be "on call" more than every third night, and should have 1 day off per week.

The Plan-Do-Study-Act cycle, commonly referred to as PDSA, is the cycle of activities advocated for achieving process or system improvement. The cycle was first proposed by Walter Shewhart, one of the pioneers of statistical process control (see run charts), and popularized by his student, quality expert W. Edwards Deming. The PDSA cycle represents one of the cornerstones of continuous quality improvement (CQI). The components of the cycle are briefly described below:

Plan: Analyze the problem you intend to improve and devise a plan to correct the problem.

Do: Carry out the plan (preferably as a pilot project to avoid major investments of time or money in unsuccessful efforts).

Study: Did the planned action succeed in solving the problem? If not, what went wrong? If partial success was achieved, how could the plan be refined?

Act: Adopt the change piloted above as is, abandon it as a complete failure, or modify it and run through the cycle again. Regardless of which action is taken, the PDSA cycle continues, either with the same problem or a new one.

PDSA can seem like a simple way to tackle quality problems. In practice, though, many teams omit key steps or do not perform sufficient cycles. PDSA aims to foster rapid change, with frequent tests of improvement, so relying on, for example, quarterly data to assess the effects of the efforts to date is usually not adequate. Another way in which practice deviates from theory is the way in which the cycles play out. PDSA cycles are typically depicted as a smooth progression, with each cycle seamlessly and iteratively building on the previous one; as the number of cycles increases, their effectiveness and overall cumulative effect strengthen. In practice, this type of work involves frequent false starts, backtracking, regrouping, backsliding, and overlapping efforts. Well-executed PDSA work therefore looks more like a tangle of related improvement efforts tackling different aspects of the target problem.
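
To make the cycle concrete for readers who think in code, here is a minimal Python sketch of repeated small PDSA cycles with frequent measurement; the `collect_weekly_data` helper, the target value, and the plan strings are hypothetical placeholders, not part of any standard PDSA tooling.

```python
# Minimal sketch of iterative PDSA cycles; the measure, target, and
# collect_weekly_data helper are hypothetical placeholders.
import random
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class PDSACycle:
    plan: str                        # the change being tested and the prediction made
    results: List[float] = field(default_factory=list)

def run_pdsa(collect_weekly_data: Callable[[], float],
             target: float,
             max_cycles: int = 10) -> List[PDSACycle]:
    """Run small, frequent PDSA cycles until the target is met or cycles run out."""
    cycles: List[PDSACycle] = []
    plan = "initial change idea"
    for _ in range(max_cycles):
        cycle = PDSACycle(plan=plan)
        # Do: test the change on a small scale and measure frequently
        # (weekly here, rather than waiting for quarterly data).
        cycle.results = [collect_weekly_data() for _ in range(4)]
        cycles.append(cycle)
        # Study: compare the observed results with the prediction/target.
        average = sum(cycle.results) / len(cycle.results)
        # Act: adopt the change if the target is met; otherwise modify and repeat.
        if average >= target:
            break
        plan = f"modified change after observing average {average:.1f}"
    return cycles

# Toy example: percentage of charts with a completed checklist (random stand-in data).
history = run_pdsa(lambda: random.uniform(60, 95), target=90.0)
print(f"cycles run: {len(history)}")
```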

A potential adverse drug event is a medication error or other drug-related mishap that reached the patient but happened not to produce harm (e.g., a penicillin-allergic patient receives penicillin but happens not to have an adverse reaction). In some studies, potential ADEs refer to errors or other problems that, if not intercepted, would be expected to cause harm. Thus, in some studies, if a physician ordered penicillin for a patient with a documented serious penicillin allergy, the order would be characterized as a potential ADE, on the grounds that administration of the drug would carry a substantial risk of harm to the patient.

The pressure to put quantity of output—for a product or a service—ahead of safety. This pressure is seen in its starkest form in the line speed of factory assembly lines, famously demonstrated by Charlie Chaplin in Modern Times, as he is carried away on a conveyor belt and into the giant gears of the factory by the rapidly moving assembly line.

In health care, production pressure refers to delivery of services—the pressure to run hospitals at 100% capacity, with each bed filled with the sickest possible patients who are discharged at the first sign that they are stable, or the pressure to leave no operating room unused and to keep moving through the schedule for each room as fast as possible. In a survey of anesthesiologists, half of respondents stated that they had witnessed at least one case in which production pressure resulted in what they regarded as unsafe care. Examples included elective surgery in patients without adequate preoperative evaluation and proceeding with surgery despite significant contraindications.

Production pressure produces an organizational culture in which frontline personnel (and often managers) are reluctant to suggest any course of action that compromises productivity, even temporarily. For instance, in the survey of anesthesiologists, respondents reported pressure by surgeons to avoid delaying cases through additional patient evaluation or canceling cases, even when patients had clear contraindications to surgery.

R[edit | edit source]

See Primer. Rapid response teams represent an intuitively simple concept: when a patient demonstrates signs of imminent clinical deterioration, a team of providers is summoned to the bedside to immediately assess and treat the patient with the goal of preventing adverse clinical outcomes.

When information is conveyed verbally, miscommunication may occur in a variety of ways, especially when the transmission is unclear (e.g., over the telephone or radio) or when communication occurs under stress. For names and numbers, the problem is often confusing the sound of one letter or number with another. To address this possibility, the military, civil aviation, and many other high-risk industries use protocols for mandatory read-backs, in which the listener repeats the key information so that the transmitter can confirm its correctness.

Because mistaken substitution or reversal of alphanumeric information is such a potential hazard, read-back protocols typically include the use of phonetic alphabets, such as the NATO system ("Alpha-Bravo-Charlie-Delta-Echo...X-ray-Yankee-Zulu") now familiar to many. In health care, traditionally, read-back has been mandatory only in the context of checking to ensure accurate identification of recipients of blood transfusions. However, there are many other circumstances in which health care teams could benefit from following such protocols, for example, when communicating key lab results or patient orders over the phone, and even when exchanging information in person (e.g., handoffs).
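
To illustrate how a phonetic alphabet removes ambiguity from spoken letters and digits, the sketch below spells out an identifier for verbal read-back; the function name and the example identifier are invented for illustration.

```python
# Illustrative sketch: spelling out an alphanumeric identifier for verbal read-back
# using the NATO phonetic alphabet (letters) and spoken digit words.

NATO = {
    "A": "Alpha", "B": "Bravo", "C": "Charlie", "D": "Delta", "E": "Echo",
    "F": "Foxtrot", "G": "Golf", "H": "Hotel", "I": "India", "J": "Juliett",
    "K": "Kilo", "L": "Lima", "M": "Mike", "N": "November", "O": "Oscar",
    "P": "Papa", "Q": "Quebec", "R": "Romeo", "S": "Sierra", "T": "Tango",
    "U": "Uniform", "V": "Victor", "W": "Whiskey", "X": "X-ray",
    "Y": "Yankee", "Z": "Zulu",
}
DIGITS = {"0": "Zero", "1": "One", "2": "Two", "3": "Three", "4": "Four",
          "5": "Five", "6": "Six", "7": "Seven", "8": "Eight", "9": "Niner"}

def spell_for_readback(identifier: str) -> str:
    """Convert an alphanumeric identifier into words for verbal read-back."""
    words = []
    for ch in identifier.upper():
        if ch in NATO:
            words.append(NATO[ch])
        elif ch in DIGITS:
            words.append(DIGITS[ch])
        # other characters (dashes, spaces) are simply skipped in this sketch
    return " ".join(words)

# A hypothetical label "K7-PT" would be read back as "Kilo Seven Papa Tango",
# letting the sender confirm the information was heard correctly.
print(spell_for_readback("K7-PT"))
```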

Rules that must be followed to the letter. In the language of non-health care industries, red rules "stop the line." In other words, any deviation from a red rule will bring work to a halt until compliance is achieved. Red rules, in addition to relating to important and risky processes, must also be simple and easy to remember.

An example of a red rule in health care might be the following: "No hospitalized patient can undergo a test of any kind, receive a medication or blood product, or undergo a procedure if they are not wearing an identification bracelet." The implication of designating this a red rule is that the moment a patient is identified as not meeting this condition, all activity must cease in order to verify the patient's identity and supply an identification band.

Health care organizations already have numerous rules and policies that call for strict adherence. The reason that some organizations are using red rules is that, unlike many standard rules, red rules will always be supported by the entire organization. In other words, when someone at the frontline calls for work to cease on the basis of a red rule, top management must always support this decision. Thus, when properly implemented, red rules should foster a culture of safety, as frontline workers will know that they can stop the line when they notice potential hazards, even when doing so may be inconvenient, time consuming, or costly for their immediate supervisors or the organization as a whole.
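
As a way of picturing the "stop the line" behavior in software terms, here is a minimal sketch in which a hypothetical medication-administration step halts whenever the ID-band red rule is not met; the Patient class, its fields, and the exception are invented for illustration and are not drawn from any real system.

```python
# Minimal sketch: a red rule modeled as a hard stop before a care step proceeds.
# The Patient class and its has_id_band field are hypothetical.

class RedRuleViolation(Exception):
    """Raised when a red rule is not met; work stops until it is resolved."""

class Patient:
    def __init__(self, name: str, has_id_band: bool):
        self.name = name
        self.has_id_band = has_id_band

def administer_medication(patient: Patient, medication: str) -> None:
    # Red rule: no test, medication, or procedure without an identification band.
    if not patient.has_id_band:
        raise RedRuleViolation(
            f"Stop the line: {patient.name} has no identification band; "
            "verify identity and apply a band before proceeding."
        )
    print(f"Administering {medication} to {patient.name}")

administer_medication(Patient("J. Doe", has_id_band=True), "amoxicillin")
```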

See Primer. Efforts to engage patients in safety efforts have focused on three areas: enlisting patients in detecting adverse events, empowering patients to ensure safe care, and emphasizing patient involvement as a means of improving the culture of safety.

See Primer. Initially developed to analyze industrial accidents, root cause analysis is now widely deployed as an error analysis tool in health care. A central tenet of RCA is to identify underlying problems that increase the likelihood of errors while avoiding the trap of focusing on mistakes by individuals.

Loosely defined or informal rule often arrived at through experience or trial and error (e.g., gastrointestinal complaints that wake patients up at night are unlikely to be functional). Heuristics provide cognitive shortcuts in the face of complex situations, and thus serve an important purpose. Unfortunately, they can also turn out to be wrong.

The phrase "rule of thumb" probably has its origin in trades such as carpentry, in which skilled workers could use the length of their thumb (roughly one inch from knuckle to tip) rather than more precise measuring instruments and still produce excellent results. In other words, they measured not with a "rule of wood" (an old-fashioned way of saying ruler), but by a "rule of thumb."

A type of statistical process control or quality control graph in which some observation (e.g., manufacturing defects or adverse outcomes) is plotted over time to see if there are "runs" of points above or below a center line, usually representing the average or median. In addition to the number of runs, the length of the runs conveys important information. For run charts with more than 20 useful observations, a run of 8 or more dots would count as a "shift" in the process of interest, suggesting some non-random variation. Other key tests applied to run charts include tests for "trends" (sequences of successive increases or decreases in the observation of interest) and "zigzags" (alternation in the direction—up or down—of the lines joining pairs of dots). If a non-random change for the better, or shift, occurs, it suggests that an intervention has succeeded. The expression "moving the dots" refers to this type of shift.
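
Run-chart rules such as the "shift" described above lend themselves to simple automation. The sketch below flags runs of 8 or more consecutive points on the same side of the median (points falling exactly on the median are skipped), matching the rule stated here; it is an illustrative fragment, not a full statistical process control package, and the example data are made up.

```python
# Sketch: detect "shifts" on a run chart -- 8 or more consecutive points
# on the same side of the median (points exactly on the median are skipped).
from statistics import median

def find_shifts(observations, run_length=8):
    center = median(observations)
    shifts, current_side, run = [], None, []
    for i, value in enumerate(observations):
        if value == center:          # points on the median neither break nor extend a run
            continue
        side = "above" if value > center else "below"
        if side == current_side:
            run.append(i)
        else:
            current_side, run = side, [i]
        if len(run) == run_length:   # report the run once it reaches the threshold
            shifts.append((current_side, run[0]))
    return shifts

# Example: monthly adverse-event counts (hypothetical data); a sustained drop
# below the median after an intervention shows up as a "below" shift.
data = [9, 8, 10, 9, 11, 10, 9, 10, 6, 5, 6, 4, 5, 6, 5, 4, 5, 6, 4, 5, 6]
print(find_shifts(data))
```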

A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z


S[edit | edit source]

See Primer. High-reliability organizations consistently minimize adverse events despite carrying out intrinsically hazardous work. Such organizations establish a culture of safety by maintaining a commitment to safety at all levels, from frontline providers to managers and executives.

A term from organizational theory that refers to the processes by which an organization takes in information to make sense of its environment, to generate knowledge, and to make decisions. It is the organizational equivalent of what individuals do when they process information, interpret events in their environments, and make decisions based on these activities. More technically, organizational sensemaking constructs the shared meanings that define the organization's purpose and frame the perception of problems or opportunities that the organization needs to work on.

See Primer. An adverse event in which death or serious harm to a patient has occurred; usually used to refer to events that are not at all expected or acceptable—e.g., an operation on the wrong patient or body part. The choice of the word sentinel reflects the egregiousness of the injury (e.g., amputation of the wrong leg) and the likelihood that investigation of such events will reveal serious problems in current policies or procedures.

The sharp end refers to the personnel or parts of the health care system in direct contact with patients. Personnel operating at the sharp end may literally be holding a scalpel (e.g., an orthopedist who operates on the wrong leg) or figuratively be administering any kind of therapy (e.g., a nurse programming an intravenous pump) or performing any aspect of care. To complete the metaphor, the blunt end refers to the many layers of the health care system that affect the scalpels, pills, and medical devices, or the personnel wielding, administering, and operating them. Thus, an error in programming an intravenous pump would represent a problem at the sharp end, while the institution's decision to use multiple types of infusion pumps (making programming errors more likely) would represent a problem at the blunt end. The terminology of "sharp" and "blunt" ends corresponds roughly to active failures and latent conditions.

See Primer. The term "signout" is used to refer to the act of transmitting information about the patient. Handoffs and signouts have been linked to adverse clinical events in settings ranging from the emergency department to the intensive care unit.

Situational awareness refers to the degree to which one's perception of a situation matches reality. In the context of crisis management, where the phrase is most often used, situational awareness includes awareness of fatigue and stress among team members (including oneself), environmental threats to safety, appropriate immediate goals, and the deteriorating status of the crisis (or patient). Failure to maintain situational awareness can result in various problems that compound the crisis. For instance, during a resuscitation, an individual or entire team may focus on a particular task (a difficult central line insertion or a particular medication to administer, for example). Fixation on this problem can result in loss of situational awareness to the point that steps are not taken to address immediately life-threatening problems such as respiratory failure or a pulseless rhythm. In this context, maintaining situational awareness might be seen as equivalent to keeping the big picture in mind. Alternatively, in assigning tasks in a crisis, the leader may ignore signals from a team member, which may result in escalating anxiety for the team member, failure to perform the assigned task, or further patient deterioration.

Six sigma refers loosely to striving for near perfection in the performance of a process or production of a product. The name derives from the Greek letter sigma, often used to denote the standard deviation of a normal distribution. About 95% of a normally distributed population falls within 2 standard deviations of the mean (or "2 sigma"), leaving roughly 5% of observations as "abnormal" or "unacceptable." Six Sigma targets a defect rate of 3.4 per million opportunities, the figure conventionally associated with a process operating 6 standard deviations from its specification limit (after allowing for a 1.5 standard deviation drift in the process mean).

When it comes to industrial performance, having 5% of a product fall outside the desired specifications would represent an unacceptably high defect rate. What company could stay in business if 5% of its product did not perform well? For example, would we tolerate a pharmaceutical company that produced pills containing incorrect dosages 5% of the time? Certainly not. But when it comes to clinical performance—the number of patients who receive a proven medication, the number of patients who develop complications from a procedure—we routinely accept failure or defect rates in the 2% to 5% range, orders of magnitude short of Six Sigma performance.
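
The sigma terminology maps directly onto the normal distribution. The sketch below converts defect rates into approximate sigma levels using the conventional Six Sigma allowance of a 1.5 standard deviation drift in the process mean (which is what makes 3.4 defects per million correspond to "six sigma"); it is a back-of-the-envelope illustration, not a quality engineering tool.

```python
# Sketch: convert long-term defect rates into "sigma levels" using the
# conventional Six Sigma 1.5-sigma shift (so 3.4 per million ~ six sigma).
from statistics import NormalDist

def sigma_level(defect_rate: float, shift: float = 1.5) -> float:
    """Approximate sigma level for a given long-term defect rate."""
    z = NormalDist().inv_cdf(1 - defect_rate)   # one-sided normal quantile
    return z + shift

for rate in (0.05, 0.02, 3.4e-6):
    print(f"defect rate {rate:g}  ->  about {sigma_level(rate):.1f} sigma")
# A 2% to 5% defect rate sits in the neighborhood of 3 to 3.5 sigma,
# far short of six sigma performance.
```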

Not every process in health care requires such near-perfect performance. In fact, one of the lessons of Reason's Swiss cheese model is the extent to which low overall error rates are possible even when individual components have many "holes." However, many high-stakes processes are far less forgiving, since a single "defect" can lead to catastrophe (e.g., wrong-site surgery, accidental administration of concentrated potassium).

Errors can be dichotomized as slips or mistakes, based on the cognitive psychology of task-oriented behavior. Slips refer to failures of schematic behaviors, or lapses in concentration (e.g., overlooking a step in a routine task due to a lapse in memory, an experienced surgeon nicking an adjacent organ during an operation due to a momentary lapse in concentration).

Slips occur in the face of competing sensory or emotional distractions, fatigue, and stress. Reducing the risk of slips requires attention to the designs of protocols, devices, and work environments—using checklists so key steps will not be omitted, reducing fatigue among personnel (or shifting high-risk work away from personnel who have been working extended hours), removing unnecessary variation in the design of key devices, eliminating distractions (e.g., phones) from areas where work requires intense concentration, and other redesign strategies. Slips can be contrasted with mistakes, which are failures that occur in attentional behavior such as active problem solving.

What the average, prudent clinician would be expected to do under certain circumstances. The standard of care may vary by community (e.g., due to resource constraints). When the term is used in the clinical setting, the standard of care is generally felt not to vary by specialty or level of training. In other words, the standard of care for a condition may well be defined in terms of the standard expected of a specialist, in which case a generalist (or trainee) would be expected to deliver the same care or make a timely referral to the appropriate specialist (or supervisor, in the case of a trainee). Standard of care is also a term of art in malpractice law, and its definition varies from jurisdiction to jurisdiction. When used in this legal sense, often the standard of care is specific to a given specialty; it is often defined as the care expected of a reasonable practitioner with similar training practicing in the same location under the same circumstances.

Most definitions of quality emphasize favorable patient outcomes as the gold standard for assessing quality. In practice, however, one would like to detect quality problems without waiting for poor outcomes to develop in sufficient numbers that deviations from expected rates of morbidity and mortality can be detected. Donabedian first proposed that quality could be measured using aspects of care with proven relationships to desirable patient outcomes. For instance, if proven diagnostic and therapeutic strategies are monitored, quality problems can be detected long before demonstrably poor outcomes occur.

Aspects of care with proven connections to patient outcomes fall into two general categories: process and structure. Processes encompass all that is done to patients in terms of diagnosis, treatment, monitoring, and counseling. Cardiovascular care provides classic examples of the use of process measures to assess quality. Given the known benefits of aspirin and beta-blockers for patients with myocardial infarction, the quality of care for patients with myocardial infarction can be measured in terms of the rates at which eligible patients receive these proven therapies. The percentage of eligible women who undergo mammography at appropriate intervals would provide a process-based measure for quality of preventive care for women.

Structure refers to the setting in which care occurs and the capacity of that setting to produce quality. Traditional examples of structural measures related to quality include credentials, patient volume, and academic affiliation. More recent structural measures include the adoption of organizational models for inpatient care (e.g., closed intensive care units and dedicated stroke units) and possibly the presence of sophisticated clinical information systems. Cardiovascular care provides another classic example of structural measures of quality. Numerous studies have shown that institutions that perform more cardiac surgeries and invasive cardiology procedures achieve better outcomes than institutions that see fewer patients. Given these data, patient volume represents a structural measure of quality of care for patients undergoing cardiac procedures.

Reason developed the "Swiss cheese model" to illustrate how analyses of major accidents and catastrophic systems failures tend to reveal multiple, smaller failures leading up to the actual hazard.

In the model, each slice of cheese represents a safety barrier or precaution relevant to a particular hazard. For example, if the hazard were wrong-site surgery, slices of the cheese might include conventions for identifying sidedness on radiology tests, a protocol for signing the correct site when the surgeon and patient first meet, and a second protocol for reviewing the medical record and checking the previously marked site in the operating room. Many more layers exist. The point is that no single barrier is foolproof. They each have "holes"; hence, the Swiss cheese. For some serious events (e.g., operating on the wrong site or wrong person), even though the holes will align infrequently, even rare cases of harm (errors making it "through the cheese") will be unacceptable.

While the model may convey the impression that the slices of cheese and the location of their respective holes are independent, this may not be the case. For instance, in an emergency, all three of the surgical identification safety checks mentioned above may fail or be bypassed: the surgeon may meet the patient for the first time in the operating room; a hurried x-ray technologist might mislabel a film (or simply hang it backwards without a hurried surgeon noticing); and "signing the site" may not take place at all (e.g., if the patient is unconscious) or, if it does take place, may be rushed and offer no real protection. In the technical parlance of accident analysis, the different barriers may have a common failure mode, in which several protections are lost at once (i.e., the holes in several layers of the cheese line up).

In health care, such failure modes, in which slices of the cheese line up more often than one would expect if the location of their holes were independent of each other (and certainly more often than wings fly off airplanes) occur distressingly commonly. In fact, many of the systems problems discussed by Reason and others—poorly designed work schedules, lack of teamwork, variations in the design of important equipment between and even within institutions—are sufficiently common that many of the slices of cheese already have their holes aligned. In such cases, one slice of cheese may be all that is left between the patient and significant hazard.
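
A small numerical sketch makes the independence point concrete: when barrier failures are truly independent, harm requires every slice to fail, but a common failure mode that bypasses all the barriers at once raises the overall risk by orders of magnitude. The per-barrier probabilities below are invented purely for illustration.

```python
# Sketch: why common failure modes matter in the Swiss cheese model.
# Per-barrier failure probabilities are invented for illustration.
from math import prod

barriers = [0.01, 0.02, 0.05]      # chance each safety check independently fails

# Independent holes: harm only when all three barriers fail at once.
p_independent = prod(barriers)

# Common failure mode: with some probability (e.g., an emergency case),
# all barriers are bypassed together; otherwise they fail independently.
p_common_mode = 0.001              # chance the whole stack is bypassed at once
p_correlated = p_common_mode + (1 - p_common_mode) * p_independent

print(f"independent barriers : {p_independent:.2e}")   # 1.00e-05
print(f"with common mode     : {p_correlated:.2e}")    # ~1e-03, roughly 100x higher
```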

See Primer. Medicine has traditionally treated quality problems and errors as failings on the part of individual providers, perhaps reflecting inadequate knowledge or skill levels. The systems approach, by contrast, takes the view that most errors reflect predictable human failings in the context of poorly designed systems (e.g., expected lapses in human vigilance in the face of long work hours or predictable mistakes on the part of relatively inexperienced personnel faced with cognitively complex situations). Rather than focusing corrective efforts on reprimanding individuals or pursuing remedial education, the systems approach seeks to identify situations or factors likely to give rise to human error and implement systems changes that will reduce their occurrence or minimize their impact on patients. This view holds that efforts to catch human errors before they occur or block them from causing harm will ultimately be more fruitful than ones that seek to somehow create flawless providers.

This systems focus includes paying attention to human factors engineering (or ergonomics), including the design of protocols, schedules, and other factors that are routinely addressed in other high-risk industries but have traditionally been ignored in medicine.

T[edit | edit source]

See Primer. Providing safe health care depends on highly trained individuals with disparate roles and responsibilities acting together in the best interests of the patient. The need for improved teamwork has led to the application of teamwork training principles, originally developed in aviation, to a variety of health care settings.

Time outs refer to planned periods of quiet and/or interdisciplinary discussion focused on ensuring that key procedural details have been addressed. For instance, protocols for ensuring correct site surgery often recommend a time out to confirm the identification of the patient, the surgical procedure, site, and other key aspects, often stating them aloud for double-checking by other team members. In addition to avoiding major misidentification errors involving the patient or surgical site, such a time out ensures that all team members share the same "game plan," so to speak. Taking the time to focus on listening and communicating the plans as a team can rectify miscommunications and misunderstandings before a procedure gets underway.


Signals for detecting likely adverse events. Triggers alert providers involved in patient safety activities to probable adverse events so they can review the medical record to determine whether an actual or potential adverse event has occurred. For instance, if a hospitalized patient received naloxone (a drug used to reverse the effects of narcotics), the patient probably received an excessive dose of morphine or some other opiate. In the emergency department, the use of naloxone would more likely represent treatment of a self-inflicted opiate overdose, so the trigger would have little value in that setting. But, among patients already admitted to the hospital, a pharmacy could use the administration of naloxone as a "trigger" to investigate possible adverse drug events.

In cases in which the trigger correctly identified an adverse event, causative factors can be identified and, over time, interventions developed to reduce the frequency of particularly common causes of adverse events. The traditional use of triggers has been to efficiently identify adverse events after the fact. However, using triggers in real time has tremendous potential as a patient safety tool. In a study of real-time triggers in a single community hospital, for example, more than 1000 triggers were generated in 6 months, and approximately 25% led to physician action and would not have been recognized without the trigger.

As with any alert or alarm system, the threshold for generating triggers has to balance true and false positives. The system will lose its value if too many triggers prove to be false alarms. This concern is less relevant when triggers are used as chart review tools. In such cases, the tolerance of false alarms depends only on the availability of sufficient resources for medical record review. Reviewing four false alarms for every true adverse event might be quite reasonable in the context of an institutional safety program, but frontline providers would balk at (and eventually ignore) a trigger system that generated four false alarms for every true one.
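
The trade-off described above is essentially one of positive predictive value. The sketch below applies a hypothetical naloxone-in-inpatients trigger to a handful of made-up records and computes how many flagged charts turn out to be confirmed adverse drug events; the record fields are invented for illustration.

```python
# Sketch: a naloxone trigger for inpatients and its positive predictive value.
# The record fields (drug, setting, confirmed_ade) are invented for illustration.

records = [
    {"drug": "naloxone",   "setting": "inpatient", "confirmed_ade": True},
    {"drug": "naloxone",   "setting": "inpatient", "confirmed_ade": False},
    {"drug": "naloxone",   "setting": "emergency", "confirmed_ade": False},  # not flagged
    {"drug": "naloxone",   "setting": "inpatient", "confirmed_ade": False},
    {"drug": "metoprolol", "setting": "inpatient", "confirmed_ade": False},
]

def is_triggered(record: dict) -> bool:
    """Trigger fires for naloxone given to patients already admitted to the hospital."""
    return record["drug"] == "naloxone" and record["setting"] == "inpatient"

triggered = [r for r in records if is_triggered(r)]
true_events = sum(r["confirmed_ade"] for r in triggered)
ppv = true_events / len(triggered) if triggered else 0.0

# With 4 false alarms per true event, PPV would be 1/5 = 20%: workable for
# retrospective chart review, but likely too noisy for real-time alerts.
print(f"triggers fired: {len(triggered)}, confirmed ADEs: {true_events}, PPV: {ppv:.0%}")
```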

U[edit | edit source]

  • Underuse, Overuse, Misuse. For processes of care, quality problems can arise in one of three ways: underuse, overuse, and misuse.

Underuse refers to the failure to provide a health care service when it would have produced a favorable outcome for a patient. Standard examples include failures to provide appropriate preventive services to eligible patients (e.g., Pap smears, flu shots for elderly patients, screening for hypertension) and proven medications for chronic illnesses (steroid inhalers for asthmatics; aspirin, beta-blockers, and lipid-lowering agents for patients who have suffered a recent myocardial infarction).

Overuse refers to providing a process of care in circumstances where the potential for harm exceeds the potential for benefit. Prescribing an antibiotic for a viral infection like a cold, for which antibiotics are ineffective, constitutes overuse. The potential for harm includes adverse reactions to the antibiotics and increases in antibiotic resistance among bacteria in the community. Overuse can also apply to diagnostic tests and surgical procedures.

Misuse occurs when an appropriate process of care has been selected but a preventable complication occurs and the patient does not receive the full potential benefit of the service. Avoidable complications of surgery or medication use are misuse problems. A patient who suffers a rash after receiving penicillin for strep throat, despite having a known allergy to that antibiotic, is an example of misuse. A patient who develops a pneumothorax after an inexperienced operator attempted to insert a subclavian line would represent another example of misuse.

V[edit | edit source]

  • Voluntary Patient Safety Event Reporting (Incident Reporting). Patient safety event reporting systems are ubiquitous in hospitals and are a mainstay of efforts to detect safety and quality problems. However, while event reports may highlight specific safety concerns, they do not provide insights into the epidemiology of safety problems.

W[edit | edit source]

From the perspective of frontline personnel trying to accomplish their work, the design of equipment or the policies governing work tasks can seem counterproductive. When frontline personnel adopt consistent patterns of work or ways of bypassing safety features of medical equipment, these patterns and actions are referred to as workarounds. Although workarounds "fix the problem," the system remains unaltered and thus continues to present potential safety hazards for future patients.

From a definitional point of view, it does not matter if frontline users are justified in working around a given policy or equipment design feature. What does matter is that the motivation for a workaround lies in getting work done, not laziness or whim. Thus, the appropriate response by managers to the existence of a workaround should not consist of reflexively reminding staff about the policy and restating the importance of following it. Rather, workarounds should trigger assessment of workflow and the various competing demands for the time of frontline personnel. In busy clinical areas where efficiency is paramount, managers can expect workarounds to arise whenever policies create added tasks for frontline personnel, especially when the extra work is out of proportion to the perceived importance of the safety goal.

See Primer. Few medical errors are as terrifying as those that involve patients who have undergone surgery on the wrong body part, undergone the incorrect procedure, or had a procedure intended for another patient. These "wrong-site, wrong-procedure, wrong-patient errors" (WSPEs) are rightly termed never events.

A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z

