This is the second of a two-part series examining medical errors. Part 1 addressed thought processes hospitalists use that may lead to mistaken diagnoses (October 2007, p. 36). Part 2 examines what healthcare corporations are doing to improve diagnoses and reduce errors.
Pilots taking off, Swiss cheese, low-hanging fruit. Talk to hospitalists about the issue of medical errors and the analogies come quickly.
Ever since the Institute of Medicine’s landmark 1999 report “To Err Is Human: Building a Safer Health System” estimated that 44,000 to 98,000 patients die every year because of medical errors, debate has been constant.
Medical literature is abundant on this topic. The Joint Commission, the National Center for Patient Safety, the Agency for Healthcare Research and Quality, and myriad other organizations and institutions, including SHM, are all helping providers and hospitals solve the problems by establishing goals, standards, guidelines, and policies.
Definitions and Paradigms
Best-practice recommendations for reducing errors are generally based on two essential principles: using a systems-based approach to patient safety and creating an environment that supports open dialogue about errors, their causes, and strategies for prevention.
Terminology is a key factor. The terms “error” and “mistake” carry an emotional component associated with embarrassment and shame. Healthcare providers don’t like to be associated with errors. There is an accompanying fear of litigation, and people, perhaps especially physicians, don’t want to be known as someone who was sued.
“The language we use to talk about these issues is important,” says Janet Nagamine, MD, a part-time hospitalist at Kaiser Permanente Santa Clara Medical Center in California and current chair of SHM’s quality and patient safety committee. In fact, because of the negativity around the terms “misdiagnosis” and “delay in diagnosis,” she advocates the more neutral term “unintended adverse event.”
“The term error is extremely threatening and scary to any health professional because it implies a personal failure,” says Dr. Nagamine. The goal when it comes to errors is essentially to look for the how—not the who.
Reporting medical errors is one thing, but reporting misdiagnoses is another, says Lakshmi Halasyamani, MD, vice chair for the department of internal medicine at St. Joseph’s Mercy Hospital in Ann Arbor, Mich., and SHM board member. “We don’t really talk about misdiagnosis,” she says. “That’s partly because we have tended to assign more individual blame for misdiagnoses.”
Drs. Nagamine and Halasyamani agree that changing the way errors are viewed requires nurturing culture change.
“We have made very little headway helping physicians understand that in the course of their careers there will be misdiagnoses and the best of physicians have misdiagnoses,” says Dr. Halasyamani. “We are not developmentally at the same stage that we are with talking about medical errors.”
She believes this is largely a professionalism issue that first means normalizing the issue of misdiagnosis. “It is kind of ludicrous to think that you will practice medicine over 40 years and not have a misdiagnosis,” she says. “But we don’t look at it from that perspective when we begin to orient trainees.”
To the people at the U.S. Pharmacopeia Center for the Advancement of Patient Safety (USP), the arrival of universal electronic medical records (EMR) in the coming decade will be a boon to the error-reduction effort.
“Eventually we will all have our health information stored electronically for easy retrieval,” says Rodney W. Hicks, PhD, ARNP, manager of patient safety research and practice for the USP and first author of its most recent report. Regional EMR networks are already beginning to take hold.
“Even before the IOM report, the USP was familiar with intensivists,” says Dr. Hicks. “We recognize hospitalists as experts who bring efficiency and effectiveness to the healthcare system.”
The USP maintains perhaps the largest database of medication errors in the world.1 Each year the USP issues a report that focuses on one topic and builds its knowledge base.
This year’s report focuses exclusively on the perioperative continuum of care. Last year’s covered ICU- and radiology-related errors. Two years ago the report was a five-year data summary of errors occurring primarily in hospitals.
“The area of errors due to breakdowns in handoffs remains a huge problem for diagnosis and the continuity of care,” says Shawn C. Becker, MS, BSN, RN, director of patient safety initiatives for the USP.
In general, errors are divided into those that stem from individual factors and those that are system-related, which include environmental and organizational factors.
Environmental risks are often related to human factors. Dr. Nagamine offers an aviation analogy to demonstrate the many pulls on caregivers’ attention.
“What happens in a cockpit at takeoff is that you are not allowed to talk about anything other than the takeoff checklist,” she says. “In medicine, we have nurses’ stations or medication carts in the middle of the hallway, so nurses are preparing meds and people are tugging on their shoulder and interrupting them during a critical task.”
Organizational factors involve culture and priorities. If your organization says it values quality and safety but doesn’t put in place policies and processes to support it, that affects diagnostics and error-free performance.
“The discussion about quality is driven by medical error,” says Evan Falchuk, president of Best Doctors Inc., based in Boston. “But the issue is more interesting than simply looking for mistakes.”
Best Doctors partners with employers and health plans to help members with serious illnesses make sure they have the right diagnosis and treatment. Members can consult with specialists who assess diagnoses and can recommend treatment. The firm believes this is the best way to measure quality, and consumers around the world increasingly agree: The company serves more than 10 million people in 30 countries.
Best Doctors was founded in 1989 by two internists: Falchuk’s father, Kenneth H. Falchuk, MD, a professor of medicine at Harvard Medical School, and Jose Halperin, MD, an associate professor of medicine at Harvard. In the service they created, doctors review a patient’s medical information, identify the important issues, and consult with leading experts from the company’s peer-reviewed database. The company then has clinicians work with the patient and his or her doctor to ensure the patient is getting appropriate care. The process has identified an incorrect diagnosis or treatment in more than half of reviewed cases.
“Hospitalists should think about quality in terms of things happening outside the hospital,” says Falchuk. “The informed, demanding consumer is coming to healthcare, and their expectations are clear: to be paid attention to, to have all of their questions answered, and to be certain that their diagnosis and treatment are correct.” These sentiments affect how hospitals do business.
The increasing amount of medical information patients can find on the Internet can raise questions in their minds. “Patients want to trust their doctors,” Falchuk says. “But when the patient has lots of information and questions and finds it difficult to spend as much time as they would like with their doctor, trust is eroded—and patients start to wonder if their doctor is doing the right thing.”
The complexity of modern medicine, with new diagnoses, treatments, and tests, and ultra-specialized experts, can sometimes—ironically—lead to lower quality.
“It is more important than ever before that doctors with differing perspectives discuss each case,” says Falchuk. “But doctors complain that the system, often because of constraints imposed by managed care, only allows for episodic interactions like that. If that kind of interaction can be made the norm, it will give patients an extraordinary amount of comfort as to the quality of their care.”
Falchuk believes the tipping point for combating errors is being reached—at least from the business point of view.
“With major employers, as many as one in 200 employees call us for help,” he says. “That is close to the incidence rate of the illnesses we commonly see: cancer, heart problems, and undiagnosed situations. When you see this flood of demand, you say, ‘Something is going on that is driving this.’ ”
These market-based factors are driving hospitals to publish and compare process-of-care and mortality rates, and many medical centers are issuing report cards. With the advent of advanced technology, hospital and provider performance will become increasingly visible and transparent, and corporations and institutions will have to be less guarded about what they share as their public accountability grows.
The healthcare industry has embraced British psychologist James Reason’s 1990 Swiss cheese model of error as a means of tackling the cumulative failures that cause adverse events. The model conceptualizes the factors that contribute to an error as holes in slices of Swiss cheese. Only when the holes line up does an adverse event occur; placing a barrier at any one of the holes stops the error from being realized.
For instance, Best Doctors notes “potholes” in the reading stage of pathology.
“We see many cases of underdiagnosis or even misdiagnosis based on one of the most difficult steps: a pathology review,” says Falchuk. “With new, specifically targeted treatments, getting that right is extremely significant.” Best Doctors experts often recommend having pathology re-reviewed. “I wouldn’t call this error, but patients view it as a question of quality,” he says.
Integrating technology with observation skills is an important way hospitals are working to improve diagnostics and reduce error.
Because the same factors contribute to a near miss as to an actual event, studying the patterns of near misses can provide a wealth of information.
“I heard some statistics that for every 19 near misses, you’re going to have one event,” says Dr. Nagamine. If on a particular day a provider is fatigued or overtaxed and fails to catch a lurking risk—termed a “latent condition”—that’s when the holes line up. “An event is never just the result of one thing or one person; it is a combination of factors,” says Dr. Nagamine.
One example concerns discharge bundles, the policies institutions establish to better manage transfer of care. When lab results become available only after a patient’s discharge, the risk of delayed diagnosis increases. Electronically routing those results to the primary care physician’s e-mail inbox helps “close the loop,” says Dr. Nagamine. “That information gets to somebody, and we’re clear on who that somebody is. We are putting in place those types of interventions—which are really the low-hanging fruit.”
Opening up about adverse events and providing feedback creates a different awareness about the risks surrounding the event.
“The look-alike, sound-alike medications are an example,” Dr. Nagamine says. “I went up to the unit and said, ‘A nurse recorded that she almost gave hydralazine instead of hydroxyzine—has that ever happened to you?’ The first three nurses I asked said, ‘Yes,’ ‘Yes,’ ‘Yes.’ Until I ask these types of questions, I don’t know. And until you can make it safe for your staff to talk, you will not have good information.”
Although this is the approach most of the safety and quality communities are embracing, it is not how most frontline clinicians view it.
“We are trying to educate people about a framework in which to think about this,” says Dr. Nagamine. “It is not constructive to point fingers, but it is important to give people feedback about how the event happened. It is far more constructive to look at the entire system and ask, ‘How did we fail here? What was your piece of it? What was the system’s piece of it?’ ”
Because of the connection between litigation and adverse events, changing the culture is a complex imperative. But providers must recognize that systems failures are involved in about 75% to 80% of medical malpractice cases—whether that involves communication breakdowns, inadequate availability of information, or a host of other factors. The individual, environment, and organization are linked.
Reducing negativity and sensitivity around the terms associated with error and reframing thinking toward prevention are important.
“There is a richness of information that comes once you change the culture from blaming to fixing,” says Dr. Nagamine. Providing feedback to frontline practitioners is key, as is thanking those who report. “It increases providers’ awareness about where the hot spots and vulnerabilities are and how to stay out of trouble. Simply by giving them information about an event raises their awareness of the magnitude of certain types of issues,” she explains.
When Dr. Nagamine led a safety initiative on the ICU floor consisting of human factors training and a new system for reporting events, the number of reports “went through the roof,” she says. “My new problem was not that people were not reporting; it was being overwhelmed with the information that was coming in. We were able to create a culture of safety that made it safe to report and consequently had much better information from which we could devise prevention strategies.”
SHM, in planning to co-create standards for focused practice with the American Board of Internal Medicine, intends to promote standards of professionalism along with other standards. Personal accountability, although a part of that, has been less of a focus to date. In the future, institutions may have technology hospitalists can use to learn whether a discharge diagnosis proved correct in the months and years that follow. Will the culture be emotionally ready to handle what technology can offer?
“We will need to own it when things go well and when things don’t go well or when we are wrong,” says Dr. Halasyamani. “We need to be able to investigate the distribution of the reasons for misdiagnosis, determine how many of those problems are systems issues, and devise strategies to address them.” In a sense, she says, everything can be viewed as a system issue unless actions are egregious and malicious.
Yet when physicians are associated with an adverse event, research shows they feel they have little support to talk about it.
“The system we have in place has begun to try to address some of the issues related to system errors,” says Dr. Halasyamani. “But what if an individual does have an error? What systems do we have in place to support that person in the recovery? And along the way if we find people who are having these issues over and over again, we need to design processes to deal with that.”
One particular focus of SHM’s transfer-of-care initiative concerns communication surrounding handoffs. “We are setting those standards and thinking about what kinds of technology tools can help make those standards easier to adhere to and easier to implement,” says Dr. Halasyamani. TH
Andrea Sattinger is a frequent contributor to The Hospitalist.