How safe is your hospital? Is it good? One of the best?
Seeking the answer can be like walking through a carnival fun house, with its wavy mirrors. Look in one direction, and you might see a bobblehead version of yourself. Turn the other way, and your legs look like stilts. It’s still a reflection of you, but it’s one that naturally highlights certain aspects while downplaying others.
Similarly, a hospital that looks top-notch in one rating or ranking system may be middle of the pack when seen in another.
The differences in these rating systems can befuddle patients trying to identify the true top performers. Last year, researchers from Johns Hopkins and other institutions reported that across four prominent ratings—issued by U.S. News & World Report, Consumer Reports, the Leapfrog Group and Healthgrades—just 10 percent of hospitals deemed high performing on one were also high performers on another. Not a single hospital was a high performer in all four.
This summer, the Centers for Medicare and Medicaid Services (CMS) is expected to issue its first round of overall hospital quality ratings, with each hospital receiving from one to five stars. As U.S. News editors wrote in April, it “will be no surprise if none of the five-star hospitals is on the U.S. News Honor Roll.”
Expect things to get murkier. Even from one year to the next, a rating system can change its methods and sway the results. Johns Hopkins’ rankings in U.S. News have always benefited from our stellar reputation among physicians. Yet within both specialty and overall hospital rankings, U.S. News has been reducing the weight given to reputational scores while increasing the influence of other metrics, such as patient safety. Additionally, this summer’s U.S. News Honor Roll may be the first to factor in Common Care ratings in areas such as heart failure and knee replacement surgery. These ratings debuted as a separate measure last year.
Perhaps the variation in ratings shouldn’t be surprising: There are no standards for how valid a measure has to be. The measures that are rolled into these ratings are often flawed, distorting the picture of hospital quality. Commonly used patient safety indicators (PSIs), which track preventable harms, such as blood clots, are based on notoriously imperfect billing data, not on clinical documentation. In a recent study by Armstrong Institute researchers, we found that out of 21 PSIs, just one met the scientific criteria for being a true indicator of hospital safety.
The variation in performance is also influenced by the accuracy of the underlying data. And unlike financial data, there are no standards for auditing quality data to ensure they are accurate.
So what should we do as a health system, in the face of hospital ratings confusion and measurement madness?
From a scientific and policy standpoint, we have engaged with government agencies, ratings organizations and researchers about the need to ensure that measures are valid and the data are accurate. Last year, we found that CMS data—which U.S. News and other organizations analyze for their ratings—had an unintended bias against Maryland hospitals, damaging both our patient safety and mortality scores. Unlike hospitals in all other states, which report detailed pay-for-quality data to CMS, Maryland hospitals report to a state agency but send only basic data to the federal government.
One result: Complications that patients experienced outside our hospitals were deemed hospital-acquired because CMS had not required us to report present-on-admission data. For example, of 29 Johns Hopkins Hospital patients whose pressure ulcers were identified as hospital-acquired, the more detailed clinical record showed that all but one had the ulcer before admission.
After we made our case, U.S. News said that, moving forward, it would use state data to analyze PSIs. Yet many other quality and safety measures, such as mortality, hospital-acquired infections and readmissions, should also be calculated using state data.
Regardless, we must be willing to challenge ourselves to do better. Despite the statistical “noise,” we must review these measures and rankings for the signals—areas that indicate true improvement opportunities. Your clinical department may keep a patient safety and quality performance dashboard, tracking the measures that it deems most important to your patients. Familiarize yourself with the dashboard, understand the measures and find ways to get involved in improving them.
For most of us, the truest reflection of our quality—and the only one we can influence—involves caring for the patients we see every day. We must always put their well-being first, whether we stop to give them directions in the hallway, comfort them through bad news or ensure that they know exactly how to take care of themselves when they go home. And every day, we must seek to continuously learn and improve.
Peter Pronovost is director of the Johns Hopkins Armstrong Institute for Patient Safety and Quality and senior vice president for patient safety and quality at Johns Hopkins Medicine.