As healthcare and technology become increasingly integrated, it's essential to build systems that simplify workflows rather than add complexity. The best solutions fit smoothly into existing workflows, supporting physicians without overwhelming them; the goal is to enhance patient care and streamline processes through thoughtful design. A crucial lesson in health tech is the importance of understanding user needs: even small design changes can have a big impact on accuracy and efficiency.
On today’s episode of In Network's Designing for Health podcast, Nordic Chief Medical Officer Craig Joseph, MD, talks with Aaron Neinstein, MD, Chief Medical Officer at Notable. They discuss designing systems that empower patients, the importance of understanding user workflows, and the power of "invisible design" in healthcare technology. They also talk about how AI can significantly reduce the administrative burden on healthcare providers, enabling them to focus more on patient care.
Listen here:
In Network's Designing for Health podcast feature is available on all major podcasting platforms, including Apple Podcasts, Amazon Music, iHeart, Pandora, Spotify, and more. Search for 'In Network' and subscribe for updates on future episodes. Like what you hear? Leave a 5-star rating and write a review to help others find the podcast.
Want to learn more from Dr. Joseph? Order a copy of his book, Designing for Health.
READ THE TRANSCRIPT
Show Notes:
[00:00] Intros
[00:28] Aaron’s background and work
[06:22] Making healthcare systems work for both providers and patients
[10:24] The power of understanding user needs
[14:08] Importance of invisible design
[20:29] Seamless integration in healthcare software
[24:24] The role of AI in healthcare
[31:44] AI in workflow design
[36:48] Aaron’s favorite well-designed product
[38:33] Outros
Transcript:
Dr. Craig Joseph: Aaron Neinstein, welcome to the pod. How are you today?
Dr. Aaron Neinstein: I'm great. Thanks, Craig. Good to see you.
Dr. Craig Joseph: And where do we find you today?
Dr. Aaron Neinstein: I'm in San Mateo, California, just 20 minutes south of San Francisco.
Dr. Craig Joseph: And it looks like you are at Notable HQ. Or you've put up a sign behind you that implies you're at Notable HQ.
Dr. Aaron Neinstein: No. I'm here. Most of the company is at headquarters, in person together.
Dr. Craig Joseph: And that's on the 101st floor, is that correct?
Dr. Aaron Neinstein: That's right.
Dr. Craig Joseph: It's quite the view. All right, let’s get an understanding of where you’re coming from. I know you're an internal medicine doctor specializing in endocrinology. You’re still seeing patients today, but you’ve also been involved in healthcare-related industries. Can you share a bit about where you’ve been and how you got here?
Dr. Aaron Neinstein: Yeah. As you mentioned, I trained as an endocrinologist, but unlike most endocrinologists, who love biochemistry and deep science, I was an American Studies major in college and cared more about Harvard Business Review or Health Affairs than Nature or Science. I’ve always been interdisciplinary and focused on the human side of healthcare and care delivery.
During my residency at UCSF, on Christmas Eve, I had a hallway conversation with our Chief Medical Officer. She mentioned UCSF had signed a contract with Epic and needed physician leadership for the implementation. I was interested, and by my last month of residency, I was on a plane to Verona, got Epic certified, and helped lead the implementation at UCSF over several years. I fell in love with informatics. During my endocrine fellowship, I got frustrated with how complex it was to download and use data from insulin pumps and continuous glucose monitors. So, I co-founded a nonprofit, Tidepool, to build software integrating device data for people with diabetes.
Later, UCSF started its digital health program, the Center for Digital Health Innovation. I spent the next ten years building out programs for patient access, referrals, self-scheduling, precision digital marketing, virtual care, telehealth, and remote patient monitoring. Working with different vendors at UCSF, I realized a significant opportunity to integrate systems for a cohesive experience. About a year and a half ago, I joined Notable as Chief Medical Officer, seeing how tools like ChatGPT would transform healthcare.
Dr. Craig Joseph: So, you’ve moved between clinical practice and digital health projects, from electronic health records to integrating hardware and software for continuous glucose monitoring. You seem to enjoy bouncing between these worlds.
Dr. Aaron Neinstein: I was never full-time in practice—always mostly full-time in informatics and health IT, with part-time clinical work. The common themes for me have been empowering patients with access to their healthcare data, simplifying workflows for providers and patients, and addressing the disconnect between physicians and patients. For example, InBasket messages are great for patients—more access to their doctors and information—but they’re a challenge for physicians. It highlights how much of healthcare creates tension between providers and patients. After a visit, it’s great for a patient to get more information about what the doctor is thinking regarding their care.
But for doctors, it’s often “pajama time”—working late to catch up on documentation. So much of care delivery has put doctors and patients in opposition. To me, this is fundamentally broken and flawed. At their core, the interests of patients and doctors align. Why did we all go to medical school? To help patients. We’re here to help them access care, get care, and understand their care. It’s our job in informatics to design systems that actually support patients and providers being on the same team, working together to address health problems and improve patient outcomes. We need to create processes, workflows, and tools that make it easy for providers to help patients and for patients to help themselves.
Dr. Craig Joseph: Yeah, I love it. And you said my favorite word—design. My coauthor Jerome Pagani described exactly what you’re talking about with the idea of a brick wall. You want to make things easier for the patient, so you push a brick to make it smooth on their side. But on the other side of that wall, there’s a physician or clinician.
Dr. Aaron Neinstein: Right, exactly.
Dr. Craig Joseph: You push that brick to make it easy for the doctor, and now it’s sticking out on the patient’s side.
Dr. Aaron Neinstein: Exactly.
Dr. Craig Joseph: We need a bigger wall so that things don’t end up poking out on either side. Let’s talk about your time at UCSF’s Center for Digital Health Innovation, which you helped start. What were some of the projects you focused on, and how did you determine the problems that could be solved or mitigated?
Dr. Aaron Neinstein: When we started the Center, it was part of the academic side of UCSF. Early on, we focused on industry partnerships and projects they brought to us. But around 2018-2019, our own care delivery system began to see opportunities to leverage the skills we had developed. We had built expertise in design, product management, data science, and AI development. The health system realized these skills could help solve internal challenges. From then on, we aligned our work with the health system’s five-year strategic plan. For example, when the system aimed to grow specialty service lines like orthopedics and neurosurgery, we focused on making it easier to achieve those goals.
As telehealth and remote patient monitoring became strategic priorities, we shifted focus there. It was rewarding to work directly on the system’s biggest challenges, from improving patient access (referrals to specialist visits) to enhancing the overall care experience outside of visits. Over the years, our main focus areas were:
- Patients seeking care—streamlining the process from referral to seeing a specialist.
- Patients receiving care—improving every touchpoint outside of the visit to make their experience better and their care process more seamless.
Dr. Craig Joseph: Okay. All right. Well, yeah, that all makes sense. And you know you've succeeded when people are coming to you, and you don't need to go looking for projects. When we were prepping for this podcast episode, I asked you about some things that maybe didn't go as well as you'd hoped. I don't know if I should use the f-word, failure, but things that didn't work. I think we all learn a lot from those projects, and you came up with several.
So, I want to talk about those right now, Aaron, all of your failures. Let me bring your mother in; we have her waiting on the line. No, we're not going to do that. Now lie down, and we'll talk about your mother and all other kinds of things like that. No, let's talk about Epic SmartSets. I've had a similar experience as an Epic employee in the past, but you said that you were told, or maybe volunteered, to be on the Epic team for the UCSF implementation, went to Verona, got some training, and then built a bunch of outpatient order sets, or maybe those were inpatient order sets. Tell us that story.
Dr. Aaron Neinstein: So, many others probably had a similar experience. This was circa 2010. I was on the front lines of our Epic build and deployment, which started in primary care and then outpatient medical specialties. I went to Verona, got my training, and then my to-do list from Epic said to meet with every specialty and build and validate, I don't remember the exact number, but call it ten SmartSets per specialty. Being a diligent, type-A endocrinologist, I did my homework. I met with every specialty and spent dozens of hours building, validating, editing, and tweaking. Then we went live and found out nobody used them. It was a powerful lesson: you have to understand what problem you're trying to solve, your users' workflows, and make sure that what you're building is solving someone's problem.
We hadn't gone live yet, so we didn't know what we were doing and were just following instructions. One year into the Epic implementation, we got pinged by radiology about a very high frequency of erroneous orders for the wrong CT abdomen/pelvis protocol. We dug into it and realized that when you typed "CT abdomen" into the Epic order search bar, the first one that came up was the wrong order. I can’t remember the protocol, but it wasn’t the most commonly desired one. What we did was add a space in "CT abdomen," which moved the most frequently desired order to the top of the list. Sure enough, by just adding a space, we completely flipped the accuracy of orders radiology was seeing. Contrast those two things: one was a playbook-following effort with hundreds of wasted hours building unused SmartSets. The other was diagnosing the problem, understanding workflows, and solving it with a literal space bar. That small change transformed care far more than all those SmartSets ever did.
Dr. Craig Joseph: So the right order was certainly right there, but people just go to the top. They see the first thing, and that's what they order. You followed the rule of making it easy to do the right thing. I think that's amazing. The first issue happens all the time; it's probably happening today, somewhere, as someone implements new software or an electronic health record: "Well, we need these things because that's how we do it today." It's impossible to conceive of how we'll do it tomorrow, even with screenshots or the chance to play around with it. It's complicated.
I had a similar story when I worked at Epic. I was working with a big children's hospital, and they were building many order sets. I said, "Hey, this is a lot of work, and I suspect you're not going to use all of these order sets. It's going to be wasted time—maybe don't do it." I got pushback: "No, we're doing it. This is what they asked for." A physician told me, "This is what my colleagues want, so this is what I'm giving them." I said, "Okay." We went live, and the same thing happened to them as it did to you—low adoption of most order sets. People didn't find them helpful, so they didn’t use them.
On a post-live trip, I walked with the CMO—though we didn't call them CMOs back then. He asked one of the physicians who had requested the order sets if they were using them. The physician responded, "What are you talking about? Oh, those? I remember creating them, but no, I don’t use them." After he walked away, I said to the CMO, "I guess I was right; you shouldn’t have created those." He said, "Oh no, I’d do it the same way tomorrow." I asked him to explain, and he said, "You don’t understand our culture. If someone asks for something and I give it to them, they’re done. They can’t complain, even if it doesn’t work out. It was worth it to get them off my back." That approach wouldn’t work in many healthcare systems, but it worked there. He knew the organization better than me, so it made sense. I love that UCSF publishes some of these stories. That article you mentioned also discussed Dilantin ordering and a small change that improved outcomes. Apparently, physicians were ordering a free Dilantin level.
For non-clinical listeners, Dilantin is an anti-seizure medicine with a narrow therapeutic margin. If the dose is too high, it's toxic; if too low, it's ineffective. The sweet spot matters, so you measure levels in the bloodstream. You can order a free or total level. Free costs more, gets sent out, is harder to process, and takes 3–4 days for results. Total is quicker and sufficient most of the time. Physicians were ordering the free option without realizing the delay, then getting upset when results didn’t come back the same day. You designed a solution to make it harder to order the wrong test—adding friction so doctors couldn't easily choose the less practical option. These kinds of changes are the unsung heroes of human-centered design and the electronic health record.
Dr. Aaron Neinstein: If done well, these improvements are invisible. The common instinct is to add an alert: "Are you sure?" But there are better ways. Instead of an alert for the free Dilantin, the design naturally guided users to the right choice. Alerts are really expensive. And you know what would have happened? People would have blown through the alert and said, "Yes, that's what I want to order," because they would have wanted to swat the nuisance out of their way. It would have interfered with their workflow. You have to actually look at people's workflow and design to it.
Another example of this is at Notable. We help with HCC risk adjustment coding, surfacing suspected codes to help people get more accurate coding. A lot of these products take providers out into a separate website or panel to work in. They say, "Ours is so easy, the user experience is great, and providers are going to love it." In my mind, the best workflow and user experience for a provider is actually no new user interface at all, just the one they're already using. I went to one of our customer sites a couple of weeks ago, visiting their primary care practices to sit with physicians while they were seeing patients. I wanted to see how things were going with the deployment of this risk adjustment tool. I met with about ten providers at three clinics and asked how things were going with the Notable risk adjustment tool. They looked at me funny, saying, "What are you talking about?"
It turned out they had no idea they were using our software. Normally, this might be a bad thing, but in my case, it was the greatest compliment. It meant we had embedded the tool so seamlessly into their native workflow in the EHR that they thought it was just part of the EHR. Having that design-thinking background, understanding exactly what users are trying to accomplish, and working around existing systems is key. You’re never starting with a blank sheet of paper in enterprise or healthcare software. People already have workflows, so you design around them. You have to work within the systems they're using, not from a blank slate.
Dr. Craig Joseph: This is a lesson I first learned from Don Norman's book, The Design of Everyday Things. It’s full of pictures of poorly designed doors. My takeaway from his brilliant work is that if a door needs instructions, it’s not a good door. If I need a sign to know whether to push or pull, that's a design failure. Similarly, the best designs for workflows don't need explanation. Users just figure it out and do it without realizing they’re moving between different software. That’s how it’s supposed to be. When I worked on the hospital side, I used to say that when someone says, "We’re going to need to train the doctors," it really means, "Our software isn’t well-designed." Ideally, it should just work. Doctors should see it and naturally understand it. Of course, this goal is rarely fully achievable, but it’s what we should aim for.
Dr. Aaron Neinstein: How much training did you need for Gmail?
Dr. Craig Joseph: Exactly. What we want is the ATM experience—where anyone can figure it out. But medicine is more complicated. Let me pivot a bit. Artificial intelligence—have you heard of it?
Dr. Aaron Neinstein: You’ll have to walk me through that.
Dr. Craig Joseph: It’s brainiac computers, that’s how I’d explain it.
Dr. Aaron Neinstein: I think you’re joking.
Dr. Craig Joseph: Let’s talk about AI. I think you mentioned Notable wasn’t a big user of AI until the last few years. Tell us about how Notable started and where you’re at now.
Dr. Aaron Neinstein: Our co-founders came out of the financial services industry. They built a product that automated the back-end work for mortgage applications. Think about the last time you applied for a mortgage—you filled out a lot of paperwork that someone had to package into a digital application, making it look like a seamless workflow. Compare that to the patient access journey. A referral or prior authorization involves stacks of paperwork that health systems must turn into digital data. Patients need to see their progress as they move from referral to authorization, to scheduling, to intake, and finally to their appointment. In 2017, our co-founders realized there was too much administrative burden in healthcare. Costs were spiraling, the experience was worsening, and there was an opportunity to drive efficiencies in healthcare similar to what they had done in financial services. They believed we could move from just digitizing processes to automating and personalizing them. Initially, they focused on physician documentation.
In 2017, ambient listening solutions were a newer concept. They started building, deploying, and making progress in this area but found the technology wasn’t ready to scale. Deploying into physician workflows was challenging—every physician wanted their own individualized note template. After a couple of years, they pivoted to administrative workflows, focusing on referrals, prior authorizations, pre-registration, intake, and other time-consuming tasks. Over the last five years, the company has focused on these areas to reduce labor-intensive workflows for value-based care. This includes chart reviews, chart scrubbing, outreach, and care management workflows. We’ve been using AI since the beginning. Initially, building AI algorithms for specific workflows was labor-intensive and slow.
Over the past two years, our ability to build, test, configure, and scale AI has accelerated dramatically. Previously, creating an algorithm for tasks like identifying care gaps—such as finding if a patient had a breast cancer screening or diabetic eye exam—could take months. Now, we can do it much faster. The speed at which we can tackle use cases has significantly increased as AI technology has improved. This acceleration has allowed us to automate processes and address administrative burdens more effectively. For example, reducing the time spent on chart scrubbing and identifying care gaps lets us focus more on improving patient outcomes.
Dr. Craig Joseph: It seems like you’re following the trend of focusing on administrative tasks rather than clinical diagnosis. Working on diagnosis, like predicting sepsis before it becomes critical, can backfire if the system over-alerts. On the other hand, addressing friction in administrative workflows, like documentation, prior authorizations, and referrals, seems like a great place for AI. These tasks don’t directly help patients, but automating them allows physicians and nurses to focus more on patient care.
Dr. Aaron Neinstein: Exactly. Eric Topol wrote something like, “We don’t need AI to cure cancer. The greatest opportunity with AI is restoring the trust and relationship between doctors and patients.” The biggest challenge in clinic today isn’t diagnosis; it’s the 20 minutes I spend before every visit reviewing the chart, summarizing, and making sure I’m up to speed. And just as big a challenge comes after the visit: dealing with the billing, the coding, and every prior authorization.
You know, I prescribe a lot of GLP-1s and insulin pumps. Every single one of those is a prior authorization nightmare. Over and over, it’s the referral: I’m trying to send a patient to cardiology or orthopedics, and we’ve got to deal with getting that referral through, plus another prior authorization. Those are the things that stack up, drive practicing physicians crazy, and get in the way of patients getting the care they need. So, yeah, I think there’s a great opportunity to relieve the burden of those tasks and return to this idea of doctors and patients being on the same team.
Well, if we can get AI to take the pressure off for us, if we can get AI to be the best intern or medical student summarizing the chart for me before I walk into the room so that all I need to be focused on when I’m in the room with the patient is their problem, their situation, and talking to them, that’s a transformed experience. And so, okay, maybe someday AI will make sure I’m not making a mistake in the diagnosis. But, you know, as any good informaticist, I live my life in two-by-twos. So, the two-by-two I think of here is like risk versus impact. What are the things that are super high impact and potentially lower risk? And let’s go after those first before we go after things that are really high risk. I think another piece, another informatics rule of thumb, and like mistakes I’ve made time after time in my career—learning the hard way—is there’s an impulse in healthcare to spend all of our focus on the algorithm.
And we hear this now all the time as we talk about AI. Let’s validate the algorithm. Is this algorithm safe? Is this algorithm effective? The algorithm is not the product. The product is the workflow in which you’re using the algorithm. And I think there was a great piece that Chris Longhurst and his team from UCSD wrote about this sometime in the last year. The reason we’ve seen hundreds of papers written about successful sepsis algorithms and then very little actually translated into practice is the way in which that story unfolded. All of these research labs built an algorithm, and then they said, how do we deploy this algorithm into clinical care? They didn’t spend time understanding, you know, the workflow they were trying to impact. They just wanted to put the algorithm into care and see what happened. And no surprise, nothing happened. So, it’s actually much more valuable to understand the entire workflow and to ask the question of, what change am I trying to make? Then you can put an algorithm into that workflow to influence the change.
We saw this at UCSF years ago. My colleague Sara Murray and her team at UCSF wrote a great paper about no-show prediction. Early on, they deployed Epic’s standard no-show prediction algorithm. Lo and behold, no change in outcomes or care. They went back and had to think about what they were actually trying to achieve with a no-show prediction algorithm. You’re not actually trying to predict a no-show. You’re trying to predict which patients need a ride to the clinic, which patients might need an appointment in the morning or afternoon, which patients have other social needs that need to be met. Those are the questions you need to build algorithms to answer because those are the actions you’re trying to take.
So, I think it’s a fallacy or a common mistake to build AI in a bubble or vacuum, or just an algorithm, and then try to deploy it. You’ve got to start from the workflow, from understanding the problem, and then you can build the algorithm to answer the right question and solve the right thing.
Dr. Craig Joseph: Yeah. Everything often works perfectly except for the human part. And if we could just get the humans out of the picture, both from the patient and the clinician side, I think it would just work so much better. But apparently, that’s complicated because, I guess, you need humans there. So yeah, I guess we’re stuck trying to figure out how to make the technology work.
Well, Aaron, this has been a pleasure. We’ve learned a lot, and I appreciate your time. I always like to ask a final question about something so wonderfully designed that it brings you joy and happiness whenever you use it. We’ve clearly determined that sometimes that’s not the electronic health record. Is there something you want to tell us about that’s well-designed and leaves you thinking, that’s really cool, every time you interact with it?
Dr. Aaron Neinstein: Yeah, I love this question. I’m going to go with my AirPods. They are so easy to use. You carry them in your pocket, pop them out, put them in your ears, and suddenly, you can connect to the audio of a phone call or listen to a podcast. They connect to my Apple TV so I can watch TV at night without bothering my family. The way they connect you to your computing device is very low friction, user-friendly, and powerful. If only they could fix Siri—which they’re supposed to have done now. But imagine a world where instead of Siri, it’s ChatGPT or something that works better. Then you’ve got this powerful computer you can talk to and use all the time. I like how simple and unobtrusive they are.
Dr. Craig Joseph: Yeah, that’s great. You know, they do two things. One is they work, right? And two, they connect pretty easily if you’re within the Apple ecosystem. So yeah, that’s awesome. Well, I’m sure the engineers in Cupertino will appreciate the tip of the hat to the AirPod team. Thank you again. This has been great. We appreciate having you here, and I look forward to all the cool stuff you’re going to do in the future.
Dr. Aaron Neinstein: Thanks, Craig. Always fun to chat with you.