Implementation has a playbook. De-implementation has a prayer.

By Dr. Craig Joseph

We are reasonably good at adding things to clinical practice. New drug approved? We have pathways for that. New procedure shown to reduce mortality? Grand rounds, CME credits, order set update: done. The machinery of implementation, while imperfect, at least exists and is reasonably well understood. 

De-implementation is a different problem entirely, and we are not good at it. Not even close. For the unfamiliar, de-implementation refers to the deliberate removal of practices that evidence has shown to be ineffective or harmful. We treat it as the mirror image of implementation, assuming that if we just run the same playbook in reverse, the behavior will change. It won’t. And we have twenty years of chest x-rays to prove it.

The evidence has been in for a while 

A study published earlier this year in Pediatrics examined chest radiograph use across 38 US children’s hospitals between 2016 and 2024, analyzing more than 650,000 emergency department encounters for pediatric asthma exacerbations. The findings are, depending on your disposition, either unsurprising or damning. 

The National Asthma Education and Prevention Program, the Global Initiative for Asthma, and multiple other professional societies have recommended against routine chest x-rays in asthma exacerbations for over two decades. Changes in clinical management due to these images occur in fewer than 5% of cases. The evidence is not ambiguous, not contested, and not new.

And yet: 22% of children in this study received a chest x-ray anyway. At some hospitals the rate was 37%. The overall rate did not decline meaningfully across the eight-year study period. 

If you work in adult medicine and are feeling comfortable right now, don’t. The dynamics documented in this study (e.g., guideline inertia, practice variation driven by institutional habit rather than evidence, imaging that generates diagnoses rather than confirming them) are not unique to pediatric emergency departments. They are features of clinical medicine broadly. The study just happens to offer unusually clean data because children with asthma are a relatively uncomplicated population. If we can’t de-implement a low-value test there, the prognosis for adult medicine is not encouraging. 

The data that should make all of us uncomfortable 

Two findings from this study deserve more attention than they typically get in the abstract-skimming culture of busy clinicians. 

The first is the disparity pattern. White children with private insurance were significantly more likely to receive chest x-rays. Black children were approximately half as likely to be imaged as white children, and this association held after controlling for other variables. The authors note, with characteristic academic restraint, that this “may be driven by structural inequities, clinician-level biases, and health care system dynamics,” and raise the possibility that perceived litigation risk, patient satisfaction pressures, or a felt obligation to demonstrate thoroughness to certain patient populations is influencing ordering behavior. 

To translate: physicians may be over-imaging patients they feel more accountable to, and under-imaging patients they feel less accountable to. Neither pattern reflects evidence-based practice. Both are problems, though they are different kinds of problems requiring different interventions. 

The second finding is the diagnostic labeling cascade. Hospitals in the highest quartile of chest x-ray utilization had nearly twice the rate of pneumonia diagnoses compared with the lowest quartile. Higher imaging rates were not associated with lower admission rates, shorter hospital stays, or reduced costs. They were, however, associated with higher rates of return visits within three days among discharged patients. 

The implication is uncomfortable but important: the chest x-ray isn’t just low-value in aggregate. It may be actively net negative. More imaging produces more pneumonia labels, which presumably produce more antibiotics, which produce patients returning to the ED confused about why they’re not improving from a disease process (i.e., asthma exacerbation) that was never pneumonia in the first place. The ordering physician almost certainly never connects those dots, because the feedback loop doesn’t close.

Why guidelines alone don’t work (and neither does removing the order)

Before discussing what does work, it’s worth being precise about what doesn’t, because both failure modes are instructive. 

The first failure mode is the guideline. Both the chest x-ray data and a separate quality improvement study from Children’s Hospital of Philadelphia (CHOP) illustrate this point cleanly. The CHOP team was trying to reduce albuterol use in infants with bronchiolitis, a related but distinct problem, since albuterol is not recommended for bronchiolitis and yet was being administered to 43% of patients in their ED prior to intervention. When the American Academy of Pediatrics updated its bronchiolitis guidelines in 2014, albuterol use at CHOP dipped briefly and then returned to baseline within weeks. The guideline created a momentary ripple and nothing more.

The second failure mode is subtler and worth dwelling on, because it’s the mistake that well-meaning informatics teams make constantly. The intuitive response to “clinicians are ordering something they shouldn’t” is to remove it from the order set. Out of sight, out of mind. Friction through absence. It’s exactly what I would’ve done. Yet … 

It doesn’t work. When CHOP’s earlier order set simply omitted albuterol, physicians just stopped using the order set. They knew where to find the drug. Absence of an option in a structured workflow is not a recommendation against it; it’s just an inconvenience, and clinicians are very good at working around inconveniences when they’re motivated to do so. 

This is an important lesson for any CMO or CMIO who believes that order set hygiene constitutes a de-implementation strategy. It doesn’t. It’s a necessary condition, not a sufficient one. 

The CHOP experiment and what it reveals about human psychology 

What CHOP eventually found — and what the chest x-ray literature supports more broadly — is that effective de-implementation requires a hierarchy of interventions, layered deliberately and designed with human psychology in mind rather than against it. 

The approach that moved the needle at CHOP was keeping albuterol in the order set, visible and accessible, but labeling it explicitly as not recommended for routine bronchiolitis use. If a physician wanted to order it anyway, they could, but they were asked to document their indication and pre- and post-treatment assessments. The drug remained available. The recommendation against it was impossible to miss. And the documentation requirement introduced enough friction to prompt genuine reflection: the clinician had to commit in writing to overriding a visible evidence-based recommendation.  

Many of them, faced with that moment of deliberate choice, decided not to. Albuterol use for bronchiolitis in the ED dropped from 43% to 20%. Admission rates, length of stay, and return visit rates were unchanged. The intervention worked, it was sustained through a second bronchiolitis season, and it did not harm patients. 

This is the principle: the goal is not to prevent clinicians from doing the thing. The goal is to make the right thing slightly easier than the wrong thing, and to ensure that choosing the wrong thing requires a conscious, documented act rather than an automatic one. The EHR, for all its well-documented sins against clinician cognitive load, is actually well-suited to this kind of friction design … if the people designing the workflows are thinking about behavior change rather than just data capture. 

For chest x-rays in asthma, the equivalent intervention might be a pre-populated order set with imaging deselected by default, accompanied by a brief visible note: “Guidelines recommend against routine CXR in asthma exacerbations. Order if atypical features are present.” This is not an interruptive alert that physicians learn to click through in 0.3 seconds. Rather, it’s a default state that embeds the recommendation into the workflow itself. 
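For informatics teams who think in code, the default-plus-documented-override pattern can be sketched in a few lines. This is a hypothetical illustration only: the class and function names below are invented for this example and do not correspond to any real EHR vendor’s API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of friction design: the order stays visible and
# available, but overriding the evidence-based default requires a
# documented indication. All names here are illustrative.

@dataclass
class OrderSetItem:
    name: str
    selected_by_default: bool
    guidance: str = ""            # visible note shown next to the order
    requires_indication: bool = False

@dataclass
class Encounter:
    orders: list = field(default_factory=list)

def place_order(encounter, item, indication=None):
    """Allow the order, but make overriding the default a conscious act."""
    if item.requires_indication and not indication:
        raise ValueError(
            f"{item.name}: document an indication to override the "
            f"recommendation ({item.guidance})"
        )
    encounter.orders.append((item.name, indication))

cxr = OrderSetItem(
    name="Chest X-ray",
    selected_by_default=False,
    guidance="Guidelines recommend against routine CXR in asthma exacerbations",
    requires_indication=True,
)

enc = Encounter()

# Ordering without an indication is blocked, not hidden:
try:
    place_order(enc, cxr)
except ValueError as e:
    print(e)

# A deliberate, documented override still goes through:
place_order(enc, cxr, indication="Focal findings on exam; fever > 39C")
print(enc.orders)
```

The point of the sketch is that the wrong thing is never forbidden; it simply costs one moment of written commitment, which is exactly the friction the CHOP intervention exploited.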

Beyond order set design, two other interventions have strong evidence behind them and are underutilized in most health systems. The first is peer benchmarking delivered to the clinician, not just the CMO. The 13% to 37% variation in chest x-ray rates across similarly resourced children’s hospitals strongly suggests that most physicians at high-utilization institutions have no idea they are outliers. Monthly or quarterly data showing individual and department-level imaging rates, benchmarked against peer institutions, is among the highest-return investments in low-value care reduction, but only if it reaches the person making the decision. A dashboard that lives in a VP’s office is not a behavior change intervention. 

The second is team-level audit and feedback rather than individual-level accountability. Individual feedback triggers defensiveness and attribution bias. Team-level data (e.g., our department is in the top quartile for chest x-ray utilization; here is what the lowest quartile does differently) creates peer accountability without shame, and it invites collective problem-solving rather than individual excuse-making. 

The organizational question no one is asking 

This final point is directed less at clinicians than at the executives who set organizational priorities. 

Health systems celebrate implementation. EHR go-lives, AI deployments, new service lines, quality programs with names and champions and steering committees. The machinery of adding things is well-funded, well-staffed, and reasonably well-managed. 

De-implementation has no equivalent infrastructure. There is rarely a named owner, a dedicated team, a budget line, or a governance process for systematically identifying and removing low-value practices. The result is predictable: every few years a study like this one gets published, confirms that nothing has changed in twenty years, and gets filed away while the utilization rates hold steady. 

The COVID-19 pandemic offered an inadvertent natural experiment. Chest x-ray rates dropped sharply in early 2020, then rebounded almost completely once conditions normalized. The barrier to de-implementation is not knowledge, and it is not capability. It is the absence of sustained organizational will, expressed through system design, accountability structures, and the unglamorous work of changing defaults. 

That rebound is the most important finding in the paper. The behavior is not fixed. The system, properly designed, can change it. Guidelines, however well-intentioned, are not a system. 

The question for health system leaders is not whether you have a de-implementation problem. You do, in volumes that would be embarrassing if anyone were measuring them systematically. Estimates put the national cost of low-value care at $345 billion annually. That number doesn’t live in a waste report. It lives in your order sets. The question is whether anyone in your organization is actually responsible for finding and fixing it, or whether you’re still waiting for the next guideline to do the work that only deliberate system design can do. 
