Author Affiliations: Harvard School of Public Health, Boston, Massachusetts.
Most physicians think of progress in surgery—medical progress in general—as scientific progress. The great flowering of medicine in the last half of the 20th century was one of the greatest triumphs of science of all time. We developed progressively—and impressively—better treatments, medical and surgical, because we continually expanded our understanding of diseases, and we were incredibly innovative.
Using the scientific method of hypothesis generation, experimental design, and collection and analysis of data, we developed new operations, new devices, and new drugs. And, particularly in surgery, the results were truly awesome. Millions upon millions of patients have had their pain relieved, their health restored, and their lives extended because of the brilliant advances that occurred just in the lifetimes of many of us here today. This all came from exactly the kinds of studies we have heard about during this meeting the past 2 days.
But starting in the mid-1980s, an alternative way of looking at progress began to emerge. It began with the disturbing observations of John Wennberg that the rates at which patients received treatments, surgical and nonsurgical, varied tremendously, not according to incidence of disease, but according to where they lived.1
Shortly thereafter, researchers at RAND developed measures of appropriateness and carried out studies that showed significant numbers of patients were receiving procedures that would not benefit them, which we call overuse, while other patients failed to receive procedures of known value, which we call underuse.2 The science of quality measurement and improvement was born.
A few years later, more bad news arrived: the Harvard Medical Practice Study, a large study of medical injury in patients hospitalized in New York state in 1984, showed that 4% had complications of their treatment.3 Even more shocking was the finding that two-thirds of these iatrogenic injuries were due to mistakes and therefore were preventable.4 Curiously, there was almost no public or professional outcry at this time.
That changed with the publication of the 2000 Institute of Medicine (IOM) report, To Err Is Human, which revealed that as many as 98 000 Americans die yearly from medical mistakes.5 (That figure, interestingly, was a contemporary national extrapolation of the results in the New York study in the ’80s.) This time, everyone got aroused. Although quality concerns of overuse and underuse had disturbed policymakers and a few physicians for a decade or more, most doctors and the public had little interest. The revelation of the risk of injury, however, grabbed the attention of both the public and our profession. The field of patient safety was born.
A large number of studies have since shown that the extent of overuse, underuse, and misuse is even worse than these early studies suggested. We now know that half of Americans fail to get effective treatments they need,6 at least a third receive treatments of little or no benefit,2 and 10% or more are significantly harmed by preventable mishaps.7
It has increasingly become apparent that in the short term we could relieve more pain, restore more health, and extend more lives by improving quality and safety than by developing new treatments. That is, if we consistently did what we know how to do correctly every time, for every patient, we would double the quality of care. It would take a lot of innovation and new drugs to match that.
Of course, it's not an “either/or,” but clearly an “and.” We need to improve quality and safety as well as continue developing new treatments. But the quality question, the thing that people like me ask is, how can we deliver to all of our patients, all of the time, the safe, high-quality care that we clearly know how to give to some of our patients some of the time?
And the answer is by improving our systems. The other message from that famous IOM report was that those 98 000 preventable deaths were not caused by careless or incompetent people, but by bad systems. Quit blaming people for making errors and change your systems, the IOM said, and get on with it. If we made error prevention a national priority, we could reduce preventable medical injury by 50% in 10 years.
What do we mean by systems? Well, pretty much how we organize and carry out virtually everything we do—simple things and complicated ones. For example, it is well-known that nurses make frequent mistakes in measuring out medications from multiple-use vials. Thirty years ago it was discovered that unit dosing, having the pharmacist provide every medication to the nurse in the dose and form in which it is to be given, virtually eliminates dosing errors. Another medication example: physicians make errors in 5% to 10% of written prescriptions. Computerized physician order entry can reduce that error rate by 80%.8 Considering that we write over 3.5 billion prescriptions a year, that change alone would eliminate over 100 million medication errors.
Unfortunately, we never got the national commitment to error reduction at the government level that the IOM called for—unlike Britain, for example. We have left it to the private sector. But the private sector responded dramatically. Doctors and nurses don't like to hurt people; convince them you have a better way and they’ll do it.
Since the IOM report, there has been an incredible outpouring of effort to develop new safe practices, to validate them, and to implement them. By 2003, the National Quality Forum had published a list of 30 evidence-based safe practices9 and called on all hospitals to implement them. The Joint Commission took up the challenge and began to require hospitals to implement these new practices.
The VA [US Department of Veterans Affairs] developed the National Surgical Quality Improvement Program,10 now taken up by the American College of Surgeons, to provide risk-adjusted outcomes data to surgeons. Providing comparative data led to substantial improvements in complication rates and in mortality. We physicians are “all from Lake Wobegon”; show me I’m below average and I’ll do something about it. Data feedback alone has proved to be a powerful quality improvement tool.
In 2004, the World Health Organization formed the World Alliance for Patient Safety to encourage and enable the spread of safe practices around the globe. Its first major effort was a Global Patient Safety Challenge, 2006-2007, Clean Care is Safer Care, a worldwide campaign to promote hand hygiene. The second challenge, Safe Surgery Saves Lives, is being led by Atul Gawande.11 The goal: to get the surgeons and staff in every operating room in the world to implement a simple surgical checklist. He estimates that this systems change will save tens of thousands of lives every year.
Recently, we’ve heard some impressive results: the Institute for Healthcare Improvement 100 000 Lives campaign enlisted 3100 hospitals to implement 6 new proven safe practices. Result: Within 1½ years, 122 000 fewer people died.12 Ascension Health, the largest nongovernmental hospital system, made safety a systemwide priority in 2005. They have since achieved dramatic reductions in infant mortality, pressure ulcers, falls, nosocomial infections, and overall hospital mortality.13 Many others have had similar successes.
Some of us have complained about the slow pace of improvement. Far too many people still die preventable deaths. But a long-term view may be less judgmental. For example, many surgeons date the beginning of cardiac surgery to the first division of the patent ductus arteriosus by Robert E. Gross in 1939. It was another 15 years before cardiopulmonary bypass became a reality and made open-heart surgery possible. And it was 25 years before Favaloro performed the first successful CABG [coronary artery bypass graft]. So from an historical perspective, patient safety is about where cardiac surgery was in 1950!
But we’ve already learned a lot as we open up this new field. It is becoming increasingly clear that the quest for safe defect-free care is not just about things like changing medication processes, implementing checklists, and using computers; it is much more complicated. The quest for safety requires that we reexamine almost everything related to how we practice medicine: what we know, what we do, even what we are.
To begin with, if we are going to make health care safe, we need to learn a whole new body of knowledge. Not in biomedical science, but in the translation and application of that science to care of our patients, to the delivery of care. It is worth noting that biomedical science has profited immensely by learning from other disciplines such as biochemistry, physics, biomechanics, and genomics. Similarly, in safety we must learn from other disciplines such as ergonomics and human factors engineering, cognitive psychology, operations research, organizational management, and sociobiology.
Some of the lessons we have already learned from these disciplines are incredibly powerful. First among these is the concept of latent errors.14 These are defects in the design and organization of our systems and processes that are the primary cause of the errors that individuals make. This idea, that the causes of errors are bad systems, not bad people, is truly transforming. It challenges the conventional wisdom that proper training, practice, and conscientiousness were all that was required to avoid mistakes. Recall the power of unit dosing and computerized prescribing.
The implication that by changing our systems, ie, by designing our work using human factors principles such as standardization, simplification, use of forcing functions, avoiding reliance on memory, and standardizing communication, we can eliminate errors is powerful.15 And as we are now seeing, it works as well in health care as in other activities. This concept of system design is the cornerstone of patient safety; make it easy to do it right and hard to do it wrong.
Another set of intriguing insights concerns how the mind works, or can suffer a short-circuit that leads our thinking astray. Jerry Groopman's new book, How Doctors Think, explains these miscues eloquently, showing how they can lead the radiologist to miss the cancer of the lung, for example, or the surgeon to mistake the ureter for a vein.
The technical term for these characteristics is cognitive dispositions to respond, and there are many of them, many ways in which we don't think straight. Wrong mental models are common. One of the purposes of using human factors design principles such as guidelines, checklists, and protocols, is to make it more difficult for us to be trapped by these cognitive misfires.
A third body of knowledge we have benefited from is organizational management and operations research, which provide new insights into making our work more efficient as well as safe. At the Massachusetts General Hospital, for example, application of these principles has greatly streamlined the function and efficiency of the operating room. Next up: emergency departments!
Commercial aviation has been held up as an example of how to become safe, but flying airplanes is very different from taking care of sick patients, and many aviation practices have no application in health care. One that does, however, is teamwork. In both airliners and hospitals, when people collaborate in teams, they are more satisfied with their work and they make fewer mistakes. Planes and patients have fewer crashes.
We’ve had some dramatic examples of this close to home. Gerry Healey at Children's Hospital has had United Airlines train his otolaryngology team, with impressive results. Ben Sachs and his group at Beth Israel–Deaconess Medical Center cut complications in labor and delivery dramatically with team training, over 50% reduction for the premature infants.16 Team training has taken off in hospitals all over the country. Simulators have been found to be a powerful means to facilitate team training, and their use is also expanding rapidly.
So the first big change we’ve had to make to improve patient safety is to learn a great deal of new material, much of it from disciplines we never heard about before. We’ve had to expand what we know. The second challenge is that we’ve had to learn a new way of looking at our work, what we do.
How do we apply systems theory in practice? What practices? What systems changes? None of this is simple. The practice challenge is really 3 challenges: First, how do we develop and validate new safe practices (the “what” of safety)? Second, how do we implement the practices? And, most difficult, how do we implement them 100% of the time in 100% of patients who need them?
Developing new safe practices, changing our systems, not trying harder, but trying smarter, is where we began. When the IOM called on health care to change its systems, we really weren't sure what to do; we had few established practices, and even less evidence for most of the changes we were recommending. The central task of the past decade has been developing, validating, implementing new effective safe practices (eg, reconciling medications, time-outs in the operating room, protocols for central line insertion, the ventilator bundle). In many ways it has been analogous to the process of developing new operations.
We’ve made truly incredible progress. There has been abundant research, and the National Quality Forum has reviewed the evidence and identified 30 evidence-based safe practices that all hospitals should follow (such as unit dosing, wrong-site protocols, deep venous thrombosis prophylaxis).9 Implementing them seems easy, but isn’t. We have witnessed a huge amount of activity as hospitals all over the country have struggled. The Institute for Healthcare Improvement 100 000 Lives campaign got 3100 hospitals to begin to implement 6 safe practices. Most of them had trouble doing it. Similarly, many of you have been involved in the SCIP [Surgical Care Improvement Project] program and know how difficult it is to carry out. It is not easy to change practice! It is even harder to make a new practice work 100% perfectly in 100% of patients 100% of the time.
James Reason, probably the foremost contributor to error theory and the originator of the concept of latent errors, says that safety is more about relationships than about systems design. He should know; he's been studying it for 40 years in all kinds of environments, from oil drilling rigs to ferries to commercial aviation and, thankfully, now in health care.
What relationships does he mean? I think at least 3 aspects are important. The first is that there is no room for autocratic, demeaning, humiliating behavior to nurses, residents, students, anyone. It never was right, but now we know how devastating it can be, how it stifles creativity and saps the joy from everyday life, how it leads to mistakes and patient harm as nurses, residents, and students shy away from warning the abusive physician of an error in the making, avoid communication with those who put them down, are inhibited in their thinking about the patient's welfare as they try to protect their own ego and self-image from attack.17
We don't teach respect in medical school. We should. All members of the team, nurses, residents, orderlies, therapists, as well as physicians, are essential. Each brings something special and essential to the care we are trying to orchestrate for our patients; they need to feel valued, listened to.
A second insight, which we’ve already noted: everything goes better in teams. If there is any 1 characteristic that defines 21st century medicine, surely it is that modern health care is far too complicated to be delivered by any 1 person. It is estimated that 80% of Medicare funds are spent on the 20% of patients with 1 of 8 serious chronic diseases (diabetes, coronary disease, chronic pulmonary disease, etc). All of these conditions require multidisciplinary care involving medical and surgical specialists, nurses, therapists, and many others—a team. None of these patients can be taken care of properly by any 1 individual acting alone.
Teams also do better at preventing mistakes. They develop routines and standardized procedures that ensure consistency and reduce errors. They increase efficiency by distributing tasks to those most qualified to do them. Team members identify and intercept each other's mistakes.
We all know the protocol to prevent contamination when a central line is inserted. Aseptic precautions should be comparable to those in the operating room. The question is how to do it perfectly 100% of the time? Peter Pronovost from Johns Hopkins Hospital had an idea. First, believe it's possible. Second, take it on as a team project—everyone counts. The defining moment came when they empowered the nurse to stop the procedure if there was any breach of technique. They did it perfectly 100% of the time. Results? No infections for 4 months, 6 months, 9 months.18
But the sea-change happened when Pronovost took his team-based protocol to Michigan (the Keystone project) and got 68 hospitals to do it so well that they, too, were able to virtually eliminate central line infections (and ventilator-associated pneumonias as well)—none of either complication for more than 6 months in 68 hospitals. Net savings: 1578 lives and $165 million. If 68 hospitals can do it in Michigan, all hospitals can do it.19
The third insight we’ve learned about relationships is that leadership is crucial. As physicians, we think of ourselves as leaders of the teams, and others often do too. For many surgeons, the model is the “captain of the ship.” In the sense that we are ultimately responsible, that is certainly true. But in the sense that we can command and all will obey, that is certainly no longer true, if it ever was. Effective command, even for a ship captain, is about leading, showing, motivating, not dictating and compulsion.
Effective leadership is about motivating people, about getting them to want to do what needs to be done. It requires not just that we set objectives, but that we listen, that we not only listen, but that we hear. As they teach at West Point and the Naval Academy, leadership is about we, not me, about helping every member of the team realize their own potential. Maybe it is time to start teaching that in medical school and in residency.
Finally, the safety movement is not just about what we know and what we do, but it also presses us more deeply to ask what we are as individuals.
In the final analysis, safety is about character. Are we trustworthy? Can our patients be sure that we are doing everything we know to make health care safe, by ourselves and our colleagues? Are we honest, open, and forthcoming with our patients and ourselves when things go wrong? Nothing is more important to them. Are we respectful? Do we honestly value all of our coworkers? It sounds a bit like the Boy Scout Law! We could do worse!
What we are really talking about is accountability. But it must be 2-way. Institutions first have to hold their practitioners accountable for following safe practices. But caregivers have the right to hold the institution accountable to provide them with the tools and environment they need to practice safely. For example, it is unfair to hold a nurse responsible for safe medication administration while failing to provide medications in unit doses.
Individual accountability is more complex. We each need to hold ourselves accountable to do everything we know to make care safe. At the most basic level, this requires us to scrupulously follow established safe practices (such as hand hygiene, use of preoperative antibiotics, following surgical site protocols, etc). But it also requires us to actively participate in the implementation of new safe practices (such as reconciling medications, using time-outs), to report errors and adverse events, and to identify new hazards and take initiative to do something about them in time to prevent errors.
While no individual should be punished for making an error, which by definition is an unintended act, willful disregard of safe practices or knowingly engaging in unjustified hazardous conduct must not be tolerated. We seek not a blame-free culture, but a just culture, one in which there is no blaming for errors, but also no tolerance for misconduct.
So we expect everyone to follow the new practices for ensuring positive identification of patient, organ, site, and procedure every time, 100%, without fail, regardless of whether you agree with time-outs or preoperative briefings or not. No individual can have a veto over safety.
It is, of course, the way the rest of the world has always behaved. Can you imagine a United Airlines pilot saying he doesn't think he needs to file a flight plan today, or that he knows he has enough fuel, so he doesn't have to do the weight calculations?
In addition to accountability for ourselves, each of us has a second type of accountability, the responsibility to ensure that all of our colleagues are safe and competent. The public assumes we do; who else can? But we don't do a very good job of it, and that has to change. Like the rest of safety, it's a systems failure; we need better systems for measuring performance and competence and for identifying doctors when their performance falls off before they injure a patient, and then to give them help to get back on track.
Finally, we need to be accountable to our patients. Not just for practicing safely, but also to be open and honest with them when we fail and hurt them. Injured patients want 3 things: They want someone to tell them what happened. They want to hear that we're sorry. They want to know what we are going to do to keep it from happening to someone else.
It's the right thing to do; we’ve always known it was the right thing to do, and it's what most of us, but not all, have done. We now know that the advice from insurance companies and lawyers to not admit and not apologize is based on a myth that it will increase the risk of being sued. There is not a shred of evidence that this is so. The facts are exactly the opposite; when you conceal and fail to take responsibility, you are more likely to get sued. Most patients sue because we lie to them. Hospitals and doctors who are honest, apologize, and compensate have fewer suits and lower total payouts. Apologizing is the most powerful thing we can do to heal the pain of preventable harm, both for our patients and for ourselves.
Making health care safe is a tall order; we have new knowledge and skills to master, new behaviors to learn, and new roles to play. In a sense, patient safety is about everything that is special about being a doctor, caring enough to do and be the best, putting the patient's interest first, always, and being responsible, open-minded, and honest.
There is a lot of cynicism about health care these days. Some of it is justified; certainly we have seen more than enough conflicts of interest and greed in recent years. But the vast majority of doctors are like you and me; they put their patients first, and they really want to do a good job. It is an incredible privilege to be a doctor, to be able to use your talents so effectively to help others, to make a difference in the lives of so many. It is a sacred trust. But like the pilots and their wings, we must earn that privilege every day.
Correspondence: Lucian L. Leape, MD, Harvard School of Public Health, 677 Huntington Ave, Boston, MA 02115 (firstname.lastname@example.org).
Accepted for Publication: February 26, 2009.
Financial Disclosure: None reported.
Previous Presentation: This lecture was presented at the 89th Annual Meeting of the New England Surgical Society; September 28, 2008; Boston, Massachusetts; and is published after peer review and revision.