Hospitalists are no strangers to performance measurement. Every day, their performance is measured, formally and informally, by their sponsoring organizations, by third-party payers, and by patients.
But many hospitalists are not engaged in producing or reviewing that performance data.
“Historically, hospitalist groups have relied on the hospital to collect the data and present it to them—and still do, to a great extent, even today,” says Marc B. Westle, DO, FACP, president and managing partner for a large private hospital medicine group (HMG), Asheville Hospitalist Group in North Carolina.
This often puts hospitalists at a disadvantage, says Dr. Westle. If hospitalist groups don’t get involved with data reporting and analysis, they can’t have meaningful discussions with their hospitals.
With a background in hospital administration, Leslie Flores, MHA, co-principal of Nelson/Flores Associates, LLC, is well acquainted with the challenges of collecting and reporting hospital data. Through her consulting work with partner John Nelson, MD, she has found that sponsoring organizations often don’t review performance data with hospitalists. Hospitalists may examine their performance one way, while the hospital uses a different set of metrics or analytical techniques. This disconnect, she notes, “leads to differences in interpretations and understandings between the hospital folks and the doctors when they try to present information.”
A new white paper produced by SHM’s Benchmarks Committee, “Measuring Hospitalist Performance: Metrics, Reports, and Dashboards,” aims to change these scenarios by encouraging hospitalists to take charge of their performance reporting. Geared to multiple levels of expertise with performance metrics, the white paper offers “some real, practical advice as to how you capture this information and then how you look at it,” says Joe Miller, SHM senior vice president and staff liaison to the Benchmarks Committee.
Select a Metric
The Benchmarks Committee used a Delphi process to rank the importance of various metrics and produced a list of the 10 on which it would focus. The white paper’s clearly written introduction walks readers through a step-by-step process intended to help HMGs decide which performance metrics they will measure.
Flores, editor of the white paper project, cautions that the “magic 10” metrics selected by the committee don’t necessarily represent the most important metrics for each practice. “We wanted to stimulate hospitalists to think about how they view their own performance and to create a common language and understanding of what some key issues and expectations should be for hospitalists’ performance monitoring,” she says. “They can use this document as a starting point and then come up with performance metrics that really matter to their practice.”
Choosing which metrics to measure and report for the hospitalist service will depend on several factors particular to that group, including:
- The HMG’s original mission;
- The expectations of the hospital or other sponsoring organization (such as a multispecialty group) for the return on their investment;
- Key outcomes and/or performance measures sought by payers, regulators, and other stakeholders; and
- The practice’s high-priority issues.
Regarding the last item, Flores recalls one HMG that decided to include on its dashboard a survey of how it used consulting physicians from the community. This component was chosen to address the concerns of other specialists in the community, who feared the hospitalists were using only their own medical group’s specialists for consultations.