There are major measurement issues in patient experience data collected from emergency departments nationwide, including high variability and limited construct validity, according to an analysis published by researchers at George Washington University and U.S. Acute Care Solutions.
Patient experience data is becoming increasingly important in healthcare. The data is incorporated into the Centers for Medicare and Medicaid Services' public reporting and value-based purchasing models for inpatient hospital care, and will be used in the implementation of the Medicare Access and CHIP Reauthorization Act, or MACRA.
The data is also used to judge physician and hospital performance, often driving managerial decisions such as compensation and employment, and shaping how a hospital is perceived in the community.
The philosophy of measuring patient experience and financially rewarding those who deliver it better is sound -- but it only works when the data being collected is valid and reliable.
The authors looked at commercially generated patient experience data collected from a large sample of emergency departments from 2012 to 2015. The data came from satisfaction surveys that asked patients about their experience in the emergency department, including how they perceived their physician and the facility.
The research team found the data varied greatly month to month, with physician variability considerably higher than facility variability. In some cases, a physician was ranked in the 20th percentile one month, the 80th the next month, and the 30th the month after that -- even though the experience the physician provided for patients was similar throughout.
A major driving factor in the findings was the response rate, which was between 3 and 16 percent. Only the very happy or the very unhappy tended to return their surveys, producing a biased sample that makes drawing meaningful conclusions from the data difficult.
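To illustrate the statistical effect at work (this simulation is not from the study; the physician count, survey counts, and score distribution are hypothetical), consider physicians who all deliver an identical "true" experience. When each is judged on only a handful of returned surveys per month, sampling noise alone can swing a percentile rank dramatically between months:

```python
import random
import statistics

random.seed(0)

N_PHYSICIANS = 100     # hypothetical cohort size
SURVEYS_PER_MONTH = 8  # hypothetical: a few returned surveys per physician
MONTHS = 3

def monthly_mean():
    """One physician's monthly score: the mean of a small sample of survey
    responses drawn from the same underlying distribution for everyone
    (true mean 4.0 on a 1-5 scale)."""
    return statistics.mean(random.gauss(4.0, 1.0) for _ in range(SURVEYS_PER_MONTH))

def percentile_rank(value, cohort):
    """Percent of cohort scores strictly below this value."""
    return 100 * sum(v < value for v in cohort) / len(cohort)

# Track one physician's percentile rank across months; every physician has the
# same true performance, so any movement is pure sampling noise.
ranks = []
for _ in range(MONTHS):
    scores = [monthly_mean() for _ in range(N_PHYSICIANS)]
    ranks.append(round(percentile_rank(scores[0], scores)))

print(ranks)  # three monthly percentile ranks for identical underlying performance
```

With so few surveys per physician, rank reshuffling of the kind the researchers observed falls out of the arithmetic alone, before any response bias is even considered.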
Nevertheless, several facility factors were found to predict higher scores: affiliation with a residency program; higher proportions of older, male, and discharged patients without Medicaid insurance; lower patient volume; less required physician night coverage; and shorter lengths of stay for discharged patients.
Younger physician age, participation in patient satisfaction training, higher relative value units per visit, more commercially insured patients, higher CT/MRI use, working during less crowded times, and fewer night shifts were found to predict higher physician-level satisfaction scores.
From this, the authors concluded that the survey process was only marginally valid, and that while some factors that predicted scores were within a hospital's control, many were not. Facility-level scores showed greater construct validity -- the degree to which a test measures what it claims to measure -- than physician-level scores. The authors therefore recommend risk-adjustment models to account for factors outside a hospital's control.
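The study does not publish its risk-adjustment method, but the general idea can be sketched with hypothetical numbers: fit an expected score from a covariate outside the facility's control (patient volume here, as an assumed example), then score each site on its deviation from that expectation rather than on its raw number:

```python
import statistics

# Hypothetical data: raw monthly satisfaction scores alongside a covariate the
# facility cannot control (patient volume); busier sites tend to score lower.
volumes = [1200, 1500, 1800, 2100, 2400, 2700]
raw_scores = [82.0, 80.5, 78.0, 76.5, 74.0, 72.5]

# One-covariate least-squares fit: expected_score = a + b * volume.
mean_v = statistics.mean(volumes)
mean_s = statistics.mean(raw_scores)
b = (sum((v - mean_v) * (s - mean_s) for v, s in zip(volumes, raw_scores))
     / sum((v - mean_v) ** 2 for v in volumes))
a = mean_s - b * mean_v

# Risk-adjusted score: observed minus expected, re-centered on the overall mean,
# so sites are compared only on what volume does not explain.
adjusted = [s - (a + b * v) + mean_s for v, s in zip(volumes, raw_scores)]

for v, s, adj in zip(volumes, raw_scores, adjusted):
    print(f"volume={v:5d}  raw={s:5.1f}  adjusted={adj:5.1f}")
```

After adjustment, the spread between sites shrinks to the part of the variation the covariate does not account for; a production model would use many covariates and a properly specified regression, but the recentering logic is the same.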
Healthcare consultancy group Press Ganey, however, expressed concerns about the study's sample sizes, saying the observed variability is a function of the volume of data used in the analyses behind the monthly facility and provider estimates of performance.
"Roughly half of all patients enter the hospital through the Emergency Department. It's a critical setting to collect deep patient experience and evaluate care," said Deirdre Mylod, senior vice president of analytics/solutions for Press Ganey and executive director of the institute for innovation. "We understand the challenges associated with capturing patient experience data in the ED. However, without adequate sample sizes, variability like that observed by the researchers is likely to occur. For this reason, both CMS and Press Ganey have minimum standards for numbers of surveys, which were not considered in this research. This study demonstrates why capturing the voice of every patient is critical in order to gain a true picture of the patient experience."