I first became aware of measuring customer satisfaction when a service manager at a car dealership explained in an upbeat way that I would be getting a survey and that any response by me other than the highest rating would result in significant negative consequences. I can recall being pleased with the service right up to the point the manager tried to extract a promise from me that I would provide “top box” answers.
What started with a car dealership survey has become a near avalanche of surveys from my credit card, bank, airlines, hotels, and other businesses. Each starts by assuring me that it will take only a minute or two to complete the survey, but if I completed every survey sent my way, it would add up to a significant amount of time. So I’ve stopped responding to nearly all of them, not so much as a form of protest but as part of my overall time-management efforts.
I imagine many of our patients see surveys from hospitals and other healthcare providers similarly: just another one to add to the pile. Patients in their 80s and 90s—a significant portion of hospitalist patients—probably interact a lot less with companies that send satisfaction surveys and so might be more attentive to ones from healthcare organizations. But I suspect that a reasonable portion of older patients rely on a family member to complete them, and this person, often a son or daughter, probably does get a lot of similar surveys. Surely, survey fatigue is influencing the results at least a little.
Healthcare Surveys: HCAHPS
For all the surveying going on, I find it pretty difficult to use HCAHPS results to guide patient-satisfaction improvement efforts. Sure, I can see how individual doctors or different physician groups score compared to one another and try to model my behaviors after the high performers. That is a really valuable thing to do, but it doesn’t get to the granular level I’d like.
One would hope the three physician-specific HCAHPS questions would support drilling down to more actionable information. But every hospitalist group I’ve seen has the same pattern, scoring from lowest to highest as follows:
- How often did doctors explain things in a way you could understand?
- How often did doctors listen carefully to you?
- How often did doctors treat you with courtesy and respect?
So I don’t think the difference in scores on these questions is very useful in guiding improvement efforts.
Looking beyond HCAHPS
For a few years, our hospitalist group added a very short survey to the brochure describing the practice. I still think that was a good idea to ensure accurate attribution and more granular information, but it didn’t yield much value in practice because of a low response rate. Ultimately, we stopped using it because of our hospital risk manager’s concern that any such survey could be construed as “coaching” patients in their HCAHPS responses, something the Centers for Medicare & Medicaid Services forbids.
Mark Rudolph, MD, vice president of physician development and patient experience at Sound Physicians, told me about their experience with their employed RNs using tablet computers to survey every patient the day following hospital admission (i.e., while patients were still in the hospital). It seems to me this could be a really valuable tool to provide very granular feedback at the outset of a patient stay, when there is still time to address areas in which the patient is less satisfied. They found that for about 30% of patients, the survey uncovered something that could be fixed, such as providing another blanket or determining what time a test was likely to be done. I bet that for most patients, the fact that a nurse cared enough to ask how things were going and to try to remedy problems improved their HCAHPS scores.