Giving weight to more quality measures in the federal government's quality star rating program would create a fairer, more equitable model for assessing quality at U.S. acute-care hospitals, according to a Henry Ford Health System study.
Researchers found that recognizing four underlying quality factors in the safety of care category, and weighting that category's eight measures more equally, would produce more accurate and informative results than the rating system's current scoring methodology.
Since the introduction of the star ratings in 2016, they've been met with skepticism due to their methodology.
Created by the Centers for Medicare and Medicaid Services, the rating system assigns a score of one to five stars -- five being the highest -- based on a set of 57 individual quality measures across seven categories: mortality, readmission, safety of care, patient experience, effectiveness of care, timeliness of care and efficient use of medical imaging.
The first four categories each account for 22 percent of a hospital's total score, while the last three each count for four percent. A complex weighting scheme assigns an importance to each measure within each category, and in the safety of care category that scheme lets a single measure dominate the other seven.
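The category weighting described above can be sketched in a few lines of Python. This is an illustrative reconstruction from the percentages in the article, not CMS' actual code; the function name and the assumption of standardized category scores are the author's own.

```python
# Category weights as described in the article: the first four categories
# count for 22 percent each, the last three for 4 percent each.
CATEGORY_WEIGHTS = {
    "mortality": 0.22,
    "readmission": 0.22,
    "safety_of_care": 0.22,
    "patient_experience": 0.22,
    "effectiveness_of_care": 0.04,
    "timeliness_of_care": 0.04,
    "efficient_use_of_imaging": 0.04,
}

def overall_score(category_scores):
    """Weighted sum of standardized category scores (higher is better).

    `category_scores` maps each category name to a standardized score.
    """
    return sum(CATEGORY_WEIGHTS[c] * s for c, s in category_scores.items())
```

Note that the weights sum to 1.0 (4 × 0.22 + 3 × 0.04), so a hospital scoring the same value in every category would receive exactly that value as its overall score.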
The team sought to evaluate the current methodology and how an alternative approach might affect hospitals' scores. Researchers performed principal components analyses on a subset of 674 hospitals in the December 2017 national data set that reported on all eight safety measures used at the time. They then assigned equal weight to each measure instead of using the CMS model, and accounted for each hospital's case volume and its effect on the reliability of each measure.
The effect of applying this alternative methodology was illustrated with a single hospital: its safety category score under the equal-weight approach (+1.65) differed markedly from its score under CMS' current methodology (-2.35) -- enough to move the hospital from below average to above average. Assigning equal weight to each measure had a "pronounced effect," researchers wrote.
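A toy calculation shows how the choice of within-category weights can flip a composite score in this way. The measure values and both weight vectors below are hypothetical, chosen only to illustrate the mechanism; neither vector is CMS' actual loading.

```python
# Eight hypothetical standardized safety scores for one hospital
# (higher is better). The hospital does poorly on one measure and
# reasonably well on the other seven.
safety_measures = [-2.5, 1.2, 0.9, 1.4, 1.1, 0.8, 1.3, 1.0]

# A weighting in which one measure carries nearly all the weight,
# versus the equal-weight alternative the study proposes.
dominant_weights = [0.93] + [0.01] * 7
equal_weights = [1.0 / 8] * 8

def composite(values, weights):
    """Weighted sum of standardized measure scores."""
    return sum(v * w for v, w in zip(values, weights))

# Under the dominant weighting, the single poor measure drags the
# composite below zero (below average); under equal weights, the
# seven good measures pull it above zero (above average).
```

The point of the sketch is that when one measure dominates, a hospital's safety category score is effectively that one measure in disguise, which is the behavior the researchers set out to correct.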
They said the designers of the ratings system were well intentioned in creating a system that would be both "simple and patient-friendly" for consumers. But the methodology is so highly complex that it has led to confusion and "very significant consequences for the ratings of individual hospitals."
Further refining the methodology, they say, would better summarize the quality measures and "offer greater upside for consumers and providers."
Publicly available hospital ratings and rankings should be modified to allow quality measures to be prioritized according to the needs and preferences of individual patients, an August 2018 analysis found.
That research proposed a new way of rating hospitals: tools that let patients decide which performance measures to prioritize. For example, the researchers showed how the different priorities of a pregnant woman and a middle-aged man needing knee surgery might change which of their local hospitals earns the highest overall rating.