As far as I can tell, few hospitalist groups conduct any sort of formal peer review. Most seem to rely on the hospital’s medical staff peer review to encourage quality of care and address shortcomings; the review is often coupled with a salary incentive paid for good performance on certain quality metrics. While these reviews are of some value, I think they are pretty blunt instruments. Every hospitalist practice should think about developing a more robust system of peer review for their group.
Assessment of each provider’s individual performance, whether the provider is an MD, a nurse practitioner, or a physician assistant, can be divided into three general categories. The first is the traditional “human resources” category of performance, which includes whether the person gets along well with others in the practice as well as other hospital staff, patients, and families. Does the person arrive at work when scheduled, manage time effectively, and work efficiently? Do nurses and other hospital staff have compliments or complaints about this doctor?
The second category of performance can encompass the hospitalist’s business and operational contributions to the practice. Do they document, code, and bill visits correctly? Do they attend and participate in meetings and serve on one or more hospital committees?
The third category assesses measurable quality of care. This could include an assessment of mortality, readmission rate, performance on quality metrics such as core measures, and performance on selected initiatives (e.g., appropriate VTE prophylaxis). Aggregate data for these measures can be difficult to attribute to a single hospitalist, so this may require a review of individual charts instead of relying on reports generated by the hospital’s data systems.
A number of metrics might apply to more than one of the three categories. For example, documenting accurate medication reconciliation can be thought of as both a quality issue (good for patients) and a business issue (e.g., your hospital might provide a financial reward to your group for good performance). Ensuring the referring physician is “CC’d” on all dictated reports is both a quality and business issue. It really doesn’t matter which category you put these in.
The categories I have listed, and the sample items in each, are intended as examples. You should think about the unique attributes of your practice and its current priorities in order to develop the best internal peer review system for your group. You probably will want to change metrics periodically. For example, you may choose to focus on VTE prophylaxis for now, but at some point it may make sense to replace it with a new metric, such as glycemic control.
Manage the Review
There is no single right approach to conducting your own peer review. Just make sure that the process is fair and meaningful for all involved. The process probably will be more valuable if most of the data on each hospitalist can be reviewed by the whole group, or at least by a designated peer review committee. The main exceptions to such transparency are issues in the first human resources category. If a nurse or another hospitalist has specific criticisms of one hospitalist, it is best not to share that information with the whole group. But it should be fine for everyone in the group to know who is best and worst at things like documenting and coding visits or ordering VTE prophylaxis when needed. Beyond these general principles, the specific process your group uses for peer review can take many forms.
It may make sense to form a peer review committee that performs all the reviews on everyone in the group, including the members of the committee itself. Each member of the committee should have a specified term, such as one or two years. It might not make sense for some groups, especially ones with fewer than 10 hospitalists, to have a formal committee. In that case, every member of the group could serve as a reviewer for all other doctors except themselves.
The group should hold formal peer review sessions quarterly or semi-annually. The group for which I serve as medical director reviews about one-fourth of the doctors at a roughly two-hour meeting each quarter. Prior to each meeting, we conduct a survey (see Figure 1) using a free Web-based tool to collect opinions about the doctors under review. We use SurveyMonkey.com, though there are many other options. The tool makes it easy to send reminders to get everyone to complete the survey and to collect and analyze the results. At the beginning of the meeting, the medical director of the practice reviews the results with the doctor under review; the results are not shared with others.
Most of the meeting time is spent assessing 10 charts for the doctor under review. Using the billing system, we select patients the doctor saw for many consecutive days. We want to avoid pulling charts at random only to find that the doctor made a single visit and there isn’t much to review. We assess a number of measures:
- Was VTE prophylaxis addressed appropriately?
- Was the referring doctor CC’d on the dictated reports?
- Did the doctor choose the appropriate CPT code for each visit?
- Was there a good plan for transition of care at discharge?
The doctor is provided a summary of all the findings of the peer review session, and a copy is kept on file.
Dr. Nelson has been a practicing hospitalist since 1988 and is co-founder and past president of SHM. He is a principal in Nelson/Flores Associates, a national hospitalist practice management consulting firm. He also is part of the faculty for SHM’s “Best Practices in Managing a Hospital Medicine Program” course. This column represents his views and is not intended to reflect an official position of SHM.