
When he was a resident, Thejaswi Poonacha, MD, a hospitalist and assistant professor of medicine at Emory University in Atlanta, set out to examine the clinical practice guidelines of the National Comprehensive Cancer Network (NCCN), an alliance of 30 National Cancer Institute (NCI)-designated comprehensive cancer centers in the U.S. that is considered perhaps the foremost authority on cancer care.
He found that only 61 of the NCCN’s 1,023 recommendations on the 10 most common cancers — just 6% — were based on a “high level of evidence” with uniform consensus among the expert panelists. The rest were based on a “lower level of evidence” with varying degrees of consensus: 83% on a lower level of evidence with uniform consensus, 10% on a lower level of evidence without uniform consensus but no major disagreement, and 1% on some level of evidence but with major disagreement among the panelists.1
“The NCCN guidelines should be viewed, in general, as a representation of expert consensus on good clinical practice rather than the final word in patient care decision making,” Dr. Poonacha and his co-author wrote.
Nine years later, Dr. Poonacha did the study again. While the number of NCCN recommendations in the guidelines on the 10 cancer types had expanded dramatically — jumping from 1,023 to 1,818, a 77% increase — the level of evidence did not change much. He and his colleagues found that 7% of the recommendations were based on a high level of evidence with uniform consensus, 87% were based on a lower level of evidence with uniform consensus, 6% on a lower level of evidence without a uniform consensus but no major disagreement, and 0% on some level of evidence but with major disagreement.2
Dr. Poonacha and his co-authors said that guidelines must be regarded and used with care, especially when they are based mainly on expert opinion rather than strong evidence in the literature, as is the case with many guidelines throughout medicine, not just those from the NCCN.
“Our study underscores both the urgent need and available opportunities to expand the current evidence base in oncology, which forms the platform for clinical practice guidelines,” they said.
Hospitalists and doctors in general often look for a “source of truth” in medicine, a kind of oracle that can be relied upon as a bedrock for decision making. Clinical practice guidelines are usually the main sources to which physicians refer. But in recent years, a series of shifts in advice from government agencies, most prominently on vaccines, along with statements from government officials intended to be viewed as authoritative, has complicated this search for truth. And, for that matter, clinical practice guidelines themselves differ in how well they meet standards for being trustworthy.
Experts and seasoned clinicians who have studied clinical guidelines, worked on developing them, and put them to use over many years say that it is important to understand the benefits and limitations of guidelines, and to separate government commentary from scientific evidence. Decision making often can rely on guidelines, but it also has to factor in clinical experience and the facts of the clinical situation that lie before a hospitalist at any given time, which might or might not be addressed in guidelines or consensus statements.
Understanding the Inherent Limitations
Dr. Poonacha said hospitalists have to be careful to acknowledge the limitations inherent in guidelines.
“These guideline members do a fantastic job in collecting evidence, but what is available out there? They can’t go and create more evidence,” he said. “They’re experts in the field, and therefore they’ve been selected to be on the panel, but all they can do is to go and see what’s available out there and then use that to the best of their knowledge and implement it into guidelines.”
Blair Golden, MD, MS, assistant professor of medicine at the University of Wisconsin-Madison School of Medicine and Public Health in Madison, Wis., who has worked with the SHM research committee in identifying gaps in existing guidelines, said hospitalists need to understand that most guidelines are not a black-or-white proposition.
“One of the challenges, especially in the context of being a busy clinician, is that the emphasis of the guideline can be just the kind of one-line takeaway,” she said. “But what is really critical to look at is what’s the grading of evidence in terms of, are there actually robust clinical trials to support that recommendation?”
There will not always be clinical-trial-based recommendations, or possibly any guideline recommendations at all, for a particular situation in a hospital. For instance, she said, “We know that patients really care about how we talk to them, but it’s not something where there are huge randomized controlled trials about it.”
“Is there a guideline that’s relevant to this clinical case?” Dr. Golden said. “What’s the strength of the evidence? If there’s not great evidence, does that matter for what I’m doing in my practice?”
She said guidelines are also limited by the pace at which they can be updated. New guidelines were recently issued about complicated urinary tract infections, she said, but she won’t necessarily be relying on those guidelines for her patient care.
“Oftentimes, by the time those guidelines come out, we’ve already really incorporated the latest studies. We treat that condition so commonly that it’s not really practice-changing for me.”
Uses for Guidelines
But, Dr. Golden said, the guidelines can be a useful reminder, at times, for evidence that might have been missed. And she said they can also be helpful in her teaching.
“I use it as kind of a primer for educating myself, and also for educating residents and medical students where they may not always be on top of the most cutting-edge stuff,” she said.
Guidelines can also help put into context some of the recommendations that can come from consultant specialists. They can act as “kind of a check” against what might have been considered common practice, and in cases where there is ambiguity, she said, “particularly if it’s something I don’t see commonly.”
For instance, when there were changing views about the preferred modalities for cardiac workup for heart failure, she checked a clinical practice statement from the American Journal of Cardiology. She saw, “to my surprise, that cardiac MRI was the next best test for our patient, which is not something I would necessarily immediately go to.” That spared a cardiology consult.
“It kind of helped facilitate for something that was maybe a little bit outside my comfort zone,” she said.
Consider Guidelines and Their Developers
The National Academy of Medicine, formerly the Institute of Medicine, in 2011 published a lengthy report on how to evaluate clinical practice guideline trustworthiness—a kind of guideline on guidelines.3 The report identified eight factors for ensuring that guidelines can be trusted: transparency, management of conflict of interest, the composition of guideline committee members, how guideline committees and systematic review authors interact, methods for rating the strength of recommendations, how well the recommendations are articulated, external review of the recommendations, and updating.
Authors of the report underscored the growing importance of guidelines, given that the number of randomized controlled trials quintupled to 25,000 from the 1970s and 80s to the early 2000s.
“Clinicians can no longer stay abreast of the rapidly expanding knowledge bases related to health,” the report authors said. Clinicians therefore rely on guidelines, the authors noted, and need them to be trustworthy.
And, in addition to the evidence-based methods of developing the guidelines, decisions about who creates the guidelines can have important effects on what is recommended, the authors wrote.
For instance, a guideline committee member who performs a certain procedure is more likely than someone who does not perform it to endorse indications for that procedure.4 On the importance of managing conflicts of interest appropriately, the authors of the report note that a study of Food and Drug Administration Advisory Committees found that members regularly disclosed financial conflicts but rarely recused themselves from voting as a result. But when they did, the drug in question received less favorable voting outcomes.5
In his 2020 study on the evidence base of the NCCN guidelines, Dr. Poonacha and his colleagues drew attention to pharmaceutical companies’ involvement in research, and the complications that this introduces for providing the most robust evidence base for guidelines.
“While the collaboration between industry and clinical trial groups has been effective, the potential conflict between the goals of government-funded cooperative groups versus pharmaceutical industry sponsors cannot be ignored,” they wrote. “The primary objective of the pharmaceutical companies is to provide data appropriate for a licensing application, while the objective of the cooperative groups is to evaluate the additive benefit of that new agent to standard treatment. A trial addressing a question of great importance to oncologists and patients may be of no interest to the pharmaceutical industry. The cooperative groups may also want to combine or compare agents from two different companies, which may not serve the interests of industry.”
Hospitalists’ Use of Guidelines
Hospitalists interviewed for this piece said that, rather than evaluate the wide array of guidelines that are available, they tended to simply rely on the guidelines put together by the most well-known organizations—for instance, the NCCN for cancer guidelines or the American College of Cardiology for guidelines on cardiac care.
But they emphasized that an overreliance on, or a misuse of, guidelines can pose problems.
Micah Prochaska, MD, a hospitalist and assistant professor of medicine at the University of Chicago, said that guidelines can potentially be misapplied as health systems oversimplify them as they try to standardize care.6
Dr. Prochaska, a member of the red blood cell transfusion international guidelines committee, said the recommendation that patients should not be given a transfusion until their hemoglobin drops below 7 g/dL has been implemented into electronic medical record systems, meaning that physicians are more likely to make their decisions based on an electronic health record prompt derived from this single metric, rather than making a decision based on the full clinical context.
The actual guidelines include nuance, but these complexities are often ignored in the pursuit of standardization and a well-intentioned interest in eliminating variability in care, he said.
“The role of doctors when we read guidelines is to recognize the certainty and uncertainty in the data, and then use clinical expertise and experience to try and decide, does this apply to my patient or not?” he said. “But when you have an electronic health record system that’s prompting you to either ‘Give blood or not give blood based on one clinical data point,’ you’re divorcing the clinical decision from the guidelines and supporting evidence, minimizing uncertainty and nuance, and I think you end up with quite dogmatic care. It is standardized, but it’s not clear that it’s the right care. It’s not even clear that it represents what the guidelines support.”
He said that much of the research on which guidelines are based may not directly apply to the setting in which hospitalists practice.
“As a hospitalist, we care for acute illness in the context of chronic comorbidities,” Dr. Prochaska said. “And a lot of the studies that inform guidelines are not done in hospitalized patients. A lot of these are ambulatory studies.”
On the international transfusion guidelines committee, he said, he is the only hospitalist, adding that he was not chosen to be on the committee because of his hospitalist experience, but because of his research on red blood cell transfusion.
Pankaj Agrawal, MD, chief medical officer at SGMC Health in Valdosta, Ga., said guidelines are full of “gray areas” and confounders—such as guidelines based on studies performed mostly in big cities and rarely in rural areas—and it boils down to the physician’s assessment.
“At the end of the day, I become the decision maker to say, where’s my patient, what works better for his situation?” he said. “It becomes an art.”
He recalled a case for which there was hardly any foundation in the literature at all, and for which he wished there had been a guideline on which to rely. A patient came in with ciguatera fish poisoning. There was no robust evidence, nothing in UpToDate. When he did a Google search, he found only three case reports that offered some guidance.
As federal agencies make dramatic shifts in recommendations and national elected and appointed officials make statements about public health and medical care without clear scientific backing, even more confusion is being introduced into the search for clear guidance on medical care. In January, the Centers for Disease Control and Prevention (CDC) stopped routinely recommending six out of the 17 standard childhood vaccines. Even if physicians might look to rigorous evidence for guidance, patients can bring concerns and questions based on these pronouncements, hospitalists said.
Dr. Agrawal said that statements without evidence are not worth considering.
“Those are comments or statements; they’re not evidence-based, they’re not published in a study,” he said. “That doesn’t change my practice, knowing this is not driven by medical research.”
Dr. Prochaska acknowledged that the CDC has made mistakes in the past in its recommendations, but the overhaul that is being undertaken now is concerning. He said he is no longer sure whether he could be comfortable referring a patient to the CDC’s online information about vaccinations.
Regarding statements like the one at the press conference in which the President suggested that pregnant women should not take acetaminophen because it would put their children at risk of autism, he said evidence should carry the day.
“In order to convince me that Tylenol causes autism, you would have to have a series of studies and data and experts that were laying it out in a transparent way to say, ‘We really screwed this up. We misinterpreted the old data.’ And instead, what happened was, the websites changed overnight.”
In the end, Dr. Poonacha suggested, as hospitalists seeing complex patients search for guidance and truth in their care, little is more important than what they see with their own eyes in the moment. And common sense can go a long way, he said.
A guideline might say that the standard of treatment for atrial fibrillation is to put patients on a blood thinner.
“But if the patient has dementia and is a high risk for falls,” he said, “then that guideline is tossed out.”
Tom Collins is a medical writer based in South Florida.
References
1. Poonacha TK, Go RS. Level of scientific evidence underlying recommendations arising from the National Comprehensive Cancer Network clinical practice guidelines. J Clin Oncol. 2011;29(2):186-91. doi: 10.1200/JCO.2010.31.6414.
2. Desai AP, et al. Category of evidence and consensus underlying National Comprehensive Cancer Network guidelines: Is there evidence of progress? Int J Cancer. 2021;148(2):429-436. doi: 10.1002/ijc.33215.
3. Graham R, Mancher M, Miller Wolman D, Greenfield S, Steinberg E, eds. Clinical Practice Guidelines We Can Trust. Washington (DC): National Academies Press (US); 2011. PMID: 24983061.
4. Hutchings A, Raine R. A systematic review of factors affecting the judgments produced by formal consensus development methods in health care. J Health Serv Res Policy. 2006;11(3):172-9. doi: 10.1258/135581906777641659.
5. Lurie P. Financial conflicts of interest are related to voting patterns at FDA Advisory Committee meetings. MedGenMed. 2006;8(4):22.
6. Prochaska M, et al. When guideline-concordant standardized care results in healthcare disparities. J Clin Ethics. 2023;34(3):225-232. doi: 10.1086/726815.