Frequent readers of this blog will know that I’m keenly interested in design and how it impacts the technology that clinicians use every day to care for their patients. Some of you may know that I recently started a podcast with my Nordic colleague, Jerome Pagani, PhD, focusing on healthcare and design, unironically called In Network. (Get it? You’d rather be in network than out of network. Amirite? Anyone? Bueller? Anyway, search for it wherever fine podcasts are given away, such as Apple, Google, Spotify, etc.!) What precious few of you may know is that Dr. Pagani and I are writing a book on the very same topic. We expect it to be out in summer, and it’s pretty, pretty, pretty good if I do say so myself. I’ll share more details as we get closer to launch, but suffice it to say that everyone should start to get excited!
Given all this information, you can imagine how thrilled I was to come across a recent article in the Journal of the American Medical Informatics Association (JAMIA) titled “Behavioral ‘nudges’ in the electronic health record to reduce waste and misuse: 3 interventions.” Researchers at the University of California, San Francisco (UCSF) looked at the choice architecture of several common physician ordering workflows and identified some that seemed to nudge doctors in the “wrong” direction. By making minor tweaks to their electronic health record (EHR), they succeeded in improving decision-making and decreasing waste.
As I mentioned above, choice architecture describes how the way options are presented influences the decisions we make. Anyone who has been lost in a Las Vegas casino or been swayed to buy candy in the grocery store checkout lane likely has some inkling that their choices are not always completely theirs to make. While we may have free will, the environment in which we make decisions affects us in many ways. Sometimes designers leverage choice architecture for good and sometimes for ill; however, as this study shows, even when no purposeful design principles are applied, how options are presented can significantly affect the outcome.
The UCSF scientists identified three ordering patterns that were sub-optimal in terms of clinical need or operational efficiency. First, they noted that many physicians who needed to know whether their patient had a therapeutic level of the anti-epileptic medicine phenytoin ordered a “free phenytoin level” instead of the less specific “total phenytoin level.” The former test had to be sent to a reference lab, costing time and money; the latter generally gave the same information but was cheaper and faster. Second, the researchers found that many doctors ordered a “CT abdomen” when their patient really needed a “CT abdomen/pelvis.” When the test they ordered did not give them the diagnostic information they required, as was often the case, the patient had to return for further imaging. Lastly, the UCSF team noticed that doctors ordering medicine for patients who were anxious about an imaging study tended to prescribe the dose meant for people who take the medicine regularly; hence, the dose for these one-time, pre-procedure patients was often higher than needed.
To improve these ordering workflows, the researchers modified the EHR in very subtle ways (let’s just say they “nudged” the doctors into doing the right thing!). To steer doctors away from the free phenytoin lab unless it was truly necessary, they modified the ordering procedure so that physicians saw the recommended indications for the test before they could order it. To help with proper selection of the CT test, they simply added an extra space to the name of the preferred order, causing “CT abdomen/pelvis” to appear above the often incorrectly ordered “CT abdomen.” Finally, the UCSF researchers created a new medication order that appeared when physicians searched for the original med; this new order specified that it was for pre-procedure anxiety and was configured with the proper dose and dispense number. Ordering the right thing became the path of least resistance, since the new order was already pre-filled with what most patients need.
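That alphabetical-sort trick is worth lingering on, because it shows just how tiny a nudge can be. Here’s a minimal Python sketch, purely my own illustration rather than anything from the study’s actual EHR build, of how a single leading space floats an order to the top of a lexicographically sorted pick list:

```python
# Illustrative only: how a leading space nudges an order to the top of an
# alphabetized pick list. The order names come from the study; the sorting
# behavior is my assumption about how a typical search list works.

orders = [
    "CT abdomen",           # the often incorrectly ordered test
    "CT chest",
    " CT abdomen/pelvis",   # preferred order, padded with a leading space
]

# Most pick lists sort lexicographically; in ASCII, a space (0x20) sorts
# before any letter, so the padded name floats to the top of the list.
for name in sorted(orders):
    print(repr(name))

# Output:
# ' CT abdomen/pelvis'
# 'CT abdomen'
# 'CT chest'
```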
What were the outcomes of these very minor EHR changes? Use of the ideal lab test for judging the anti-epileptic med level rose from 92% to 100%. The inappropriate CT scans that were studied decreased from 11% to 5%. Ordering of the best pre-procedure sedation and anxiety medication increased from 12.9% to 22.2%. All these changes were statistically significant, meaning the improvements were very unlikely to be due to sheer luck!
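For the curious, here’s roughly what “statistically significant” cashes out to. This is a standard two-proportion z-test in plain Python; the sample sizes below are hypothetical placeholders (the paper reports its own counts), so treat this as a sketch of the reasoning, not a reproduction of the study’s analysis:

```python
# Back-of-the-envelope check on the CT scan change (11% -> 5%) using a
# two-sided two-proportion z-test. Counts are HYPOTHETICAL, for illustration.
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)            # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the normal CDF: Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical cohorts: 110/1000 inappropriate scans before, 50/1000 after.
z, p = two_proportion_z(110, 1000, 50, 1000)
print(f"z = {z:.2f}, p = {p:.6f}")  # p is far below 0.05 at these sizes
```

At samples like these, a drop from 11% to 5% yields a vanishingly small p-value, which is what lets the authors say the shift wasn’t just noise.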
Choice architecture and how it affects physician ordering patterns is a fascinating field. (Did I mention the book I’m co-authoring? I did? Great.) As we learn more about how all this works, I’m confident we’ll see that EHR developers and those who configure EHR tools are nudging clinical and operational users in certain directions, whether they intend to or not. We need to marshal these insights from design and use them for good!