In this episode, Dr. Craig Joseph sits down with Dr. Sarah Gebauer, whose career spans anesthesiology, palliative care, clinical informatics, consulting, and AI governance. Dr. Gebauer explains how her diverse background, from strategy consulting at Bain onward, led her to national security AI evaluation. She also discusses her work designing effective healthcare tools.
On today's episode of In Network's Designing for Health podcast, Nordic Chief Medical Officer Craig Joseph, MD, talks with Sarah Gebauer, MD, an anesthesiologist, palliative care physician, informaticist, AI evaluator, world traveler, and all-around multidimensional human. Together, they explore the intersections of clinical practice, healthcare technology, AI safety, system design, and what "good" should look like in the future of care.
The Designing for Health podcast is available on all major platforms including Apple Podcasts, Amazon Music, iHeart, Pandora, Spotify, and more.
Search for “Designing for Health” and subscribe for updates. If you enjoy the episode, please leave a five-star rating and review.
Want to learn more from Dr. Joseph?
Order a copy of his book, Designing for Health.
SHOW NOTES
- [00:00] Intros
- [01:18] Dr. Gebauer’s Academic Path & Clinical Training
- [02:00] Exposure to Different Healthcare Systems
- [04:50] The Reality of AI in Clinical Practice
- [07:44] Evaluating Technology Outside of Go‑Live
- [14:00] AI Model Evaluation & National Security
- [17:12] Defining What ‘Good’ Looks Like in AI
- [24:25] Workforce Education
- [31:20] Dr. Gebauer's Favorite Well-Designed Tool
- [34:50] Outros
TRANSCRIPT
Intro:
Hello and welcome to the In Network podcast feature, Designing for Health. I’m Nordic’s Chief Medical Officer, Dr. Craig Joseph.
Today’s guest has one of the most unusual CVs I’ve encountered and I’ve met a few physicians in my time. Dr. Sarah Gebauer is an anesthesiologist, palliative care physician, clinical informaticist, and AI evaluator who spent two years at RAND assessing AI risks for national security. She’s also a practicing clinician, which means she thinks about AI governance from both ends of the problem.
In this conversation, we dig into something that doesn’t get nearly enough attention: how health systems evaluate technology after go-live, not just at implementation. We also talk about the accountability vacuum AI creates when it starts taking on responsibilities that used to belong to humans, and why that should concern every physician leader. And we close with a surprising answer to my standard question about well-designed things.
Let’s plug in.
Craig Joseph MD, FAAP, FAMIA
Sarah Gebauer welcome to the podcast. How are you today?
Sarah Gebauer, MD
I am wonderful. Thanks so much for having me. I'm so excited to be here with you, talking about designing for health, which I think is such an underappreciated but crucial aspect of health care, and which is what I spend a lot of time thinking about.
Craig Joseph MD, FAAP, FAMIA
Oh, that's terrific. And it's worth the $5 that I now have to pay you for saying that sentence that I just wrote for you. No, that's very kind of you. Let's just start off talking about your background. You're an anesthesiologist by training, but what I found interesting when we were preparing for this call is that you didn't go directly from undergrad to medical school. I think that's not that uncommon now, but it might have been uncommon a few years ago when you did it. What gives? What did you do?
Sarah Gebauer, MD
I worked at Bain, which is a strategy consulting firm, traditionally thought of as one of the big three: McKinsey, BCG, and Bain. And honestly, I mostly applied because I needed money for medical school. Little did I realize that they would just later take all of that from me so I could pay for medical school. But I had a wonderful experience.
They do a great job training, and I learned so many business skills. It's kind of like a mini MBA. And as a bonus, I made a lot of friends who, as business people with expense accounts, were able to take me out to dinner during medical school later on.
Craig Joseph MD, FAAP, FAMIA
This should be a recommendation for everyone after graduating from college: just go to Bain. Although I suspect that would be difficult, because I think some of those consulting companies like Bain are a little bit choosy about who they select. What was your undergraduate degree? How did you get into consulting right out of undergrad?
Sarah Gebauer, MD
Consulting firms usually hire from across the spectrum. I was a chemistry and Spanish double major, and I actually wrote my thesis on women's health in 15th-century Spain. I got to go to Spain, look through old Spanish medical textbooks, and translate them from old Spanish into current Spanish and then into English, which was really fun, but totally unrelated to consulting.
But the consulting background actually has been incredibly helpful as I've gone through medicine. After I worked at Bain, I went to Stanford for medical school and did a clinical research concentration while I was there. Then I went to UCSF for my anesthesia residency and on to a palliative care fellowship in San Diego, and then got a clinical informatics degree while I was on faculty at the University of New Mexico for five years, doing a mix of palliative care and anesthesia. After we had had four kids in five years, I joined a small private practice group in a rural area.
We wanted to raise them in a smaller location. I've had a lot of hospital leadership roles in both settings, both at the larger medical center and also in a smaller rural setting, so I've gotten great exposure to different kinds of healthcare design and different approaches to different kinds of problems.
Craig Joseph MD, FAAP, FAMIA
Yeah. So obviously it's very typical; everyone does what you just mentioned. I'm actually going to go take a nap now, because I'm tired just contemplating all of that. The only thing I can really relate to of what you just said is the four kids in five years thing. I did do that, but I did it because I thought it was a big tax deduction.
And it turned out that it didn't work out that way. However, let me focus. With all of those different vantage points, from technology to anesthesia to palliative care to entrepreneurship, how does that change what you think health care should look like? You have a picture of health care from multiple different views that most of us don't get.
Sarah Gebauer, MD
Yeah, it's a great question. It's been a really interesting experience. After we moved here to Steamboat, I took on a lot of leadership roles, as I mentioned, and then started doing more health consulting, which was easy because of my previous background in consulting at Bain. And then I've really transitioned into more roles helping vendors and others do AI governance and AI evaluation.
I spent two years at RAND doing AI model evaluation for national security risks, for example. I've spent a lot of time looking at health care from the outside in, but I've also been a practicing clinician during that entire time. And I think one of the biggest disconnects that people are surprised about is that I use very little AI in my regular clinical practice.
I just don't have the opportunity to. There are no AI tools available to me to use, whereas I think about AI all the time, I talk about it, and I try to think of ways that we can incorporate AI into clinical care more safely and more effectively. The last mile problem for health care is, of course, one of the biggest that any technology, or really any change management effort in healthcare, faces.
Health care processes are so variable, and the approaches to solving the same problem can be wildly different. I talk to engineers across the spectrum who have great ideas and create beautiful products that just are not going to be used, either because they're not designed appropriately for the clinical workflow, because they're not designed in a way that the end user can actually interact with them in a meaningful way or really understand them, or because they're just not solving the actual business problem.
They're just not solving a true pain point. And I'm always thrilled when I see engineers and technical people who want to build in health care, because, frankly, there are a lot easier places to make money in the world. The people building in health care are usually doing it because they are really altruistically motivated to make real change in our system and because they really want to do something good in the world.
And I find it really inspiring and wonderful that we have these colleagues from all different disciplines who want to come help us figure out a better way to do things, because we clearly have not figured out the best approach for much of what we do in healthcare.
Craig Joseph MD, FAAP, FAMIA
Yeah, hear, hear to all of that. Let me pick out a few of those juicy bits of information you just threw at us about evaluating technology. I know you're spending a lot of time on AI, and I want to spend a lot of time on AI, but one of the things that you've written about is evaluating technology in general,
and electronic health records specifically, throughout their life cycle, not just at go-live. And I'm curious about that, because I'm not sure most people are reevaluating their EHR after go-live, let alone throughout its life cycle. So what are some of the things you think people are missing, and how can they, or the system around them, be better about constantly looking at it?
Sarah Gebauer, MD
Yeah, I am of the generation that has used over a dozen EHRs in my lifetime, starting back when they were just getting started and were just point solutions, and now, of course, they're all-encompassing throughout the hospital. So I've used them in a lot of different contexts, and I've seen all the ways they do things great, and some things they do terribly.
I think the systems have gotten a lot better in the last five years or so about actually talking to the end users about what their experience is, and making sure it's not a top-down solution of, this is how we do it. At most forward-thinking medical centers there is now, for example, an EHR governance committee, or some kind of committee separate from an AI governance committee, sometimes even by specialty or by service line.
Usually that group looks at the EHR and makes suggestions directly to the clinical team and to the medical system's EHR team, and that formal system for user feedback is really important. And now we see the same things happening with nursing, which is wonderful, because nurses have such a documentation burden from the EHR. That process has really improved.
I do think there's not enough evaluation of how well it's actually working and how much value it's adding, and that's a part of the evaluation I would love to see built out more meaningfully in the future. I think we do a decent job of user experience, more so now certainly than we did before, but that's one aspect of performance of a very multifaceted instrument.
Each of the instruments, each of the tools within our EHRs, obviously has different levels of capability and different levels of helpfulness and actual effectiveness, and many of them we don't actually look at to see, how effective is this? For example, we have an operating room scheduling tool within our EHR that predicts case length.
And I have a sense of how effective it is. I feel like I could, based on my personal experience, make a very good educated guess. But I have no data to back it up, and I doubt there is the transparency available to look at that on an ongoing basis. And that's just one example of the hundreds or thousands of tools within each EHR, which is really a collection of different tools under one umbrella. We don't really know how effective most of the ones we use on a daily basis are.
Craig Joseph MD, FAAP, FAMIA
Let's focus in on the OR example that you gave. Predicting the length of a case seems like it's not that important, but it's really very important. And it's not just based on the patient, right? It's of course based on the surgeon and the anesthesiologist and the team in the operating room. Those are very expensive geographies,
the operating theater, and making the most efficient use of them is very important. So how do you evaluate that? Do you go back, does anyone actually go back and look and say, that prediction wasn't right?
Sarah Gebauer, MD
No, I mean, that's the problem. No one goes back and says, hey, how well is this tool functioning? I have many ideas on how it could function better, for example, but there's no way for me to give feedback on this particular tool either. It's not really within the realm of the EMR governance committee, for example.
And I would also just say that it's important for these tools to be effective not just for revenue management and decreasing hospital waste, although it's important to keep our hospitals operating in an effective way, but also because if they're wrong, then we have patients waiting for hours for their surgery. They're already nervous.
They're sitting in the preoperative area, just waiting and waiting, and that is a really unpleasant way to start your surgical experience. So we also just want to be mindful of our patients and make sure these cases are scheduled in the right way, so that we're serving our patients and taking care of them the way we would want our own family members to be taken care of.
Even something as seemingly operational and technical and boring as how long a case is going to take really does impact our patients, how they view their care, and even their future course. For example, if the operating room prediction is wrong, perhaps they start their surgery late, at 7 p.m., they don't get out of the operating room until 10 p.m., and then they're up until 2 a.m. in recovery.
They're not going to have as smooth a recovery as they would have if they had started three to five hours earlier, when they were supposed to start surgery. So it really does impact patients. And there's a report on how well we utilize our operating room time, but there's no report watching the tools that tell us how long these things are supposed to take. That's the kind of reporting we don't really see.
Craig Joseph MD, FAAP, FAMIA
You make a great point. And as someone who's been the patient multiple times, and I, of course, am perfectly healthy, but I like to do my research, so I go and have procedures just because I'm that dedicated.
Sarah Gebauer, MD
Your dedication, to have surgery in the service of research.
Craig Joseph MD, FAAP, FAMIA
And someone has to do it, I feel, and it should be me. And you're absolutely right. You're NPO for longer, and in my case, without caffeine for longer, so the headaches really come on. And I guess my assumption was that we're never on time; unless you're the first case, you're just going to be delayed.
Maybe that's just my cynical, jaundiced view of the world as an insider looking out, but that's an excellent point. We talk about efficiency for the hospital, but we don't really talk about optimal care for the patient when we're talking about some of these more economic tools, or as you refer to them, instruments, which I love, thinking about them not just from the economics but from the standpoint of patient healing.
Let me ask you about AI. I think you teased us a little bit and said that you've done some AI evaluation for national security. I suspect there's exactly one anesthesiologist in this country who can say what I just said, that you've done some national security work. How did you get into that? How is it related to the research and training that you've done?
Sarah Gebauer, MD
That's a great question. I helped with a lot of our hospital's Covid response, and I think it was hard to go through that as a clinician, as somebody trying to figure out, how do we run eight ventilators at a time, or eight people on one ventilator at a time, for example, without thinking, A, this could be a lot worse, and B, how can I help prevent this in the future?
How can I help contribute to this going better in the future? RAND has a really forward-thinking AI department, and we were looking at biosecurity risks. With my medical background, it was a natural fit, and I had an AI background already. So I was able to help figure out: where are the biggest threats going forward with AI?
Are there threats that we don't recognize right now, where AI could either increase the severity of an existing threat or develop a threat in a new way, such that we would need to develop different kinds of mitigations? These are things like how you order DNA, how you create DNA, and how new and evolving technology contributes to biosecurity risk now and likely will in the future as well.
Honestly, the work itself is really interesting from a scientific standpoint. I love being at the point where nobody really knows exactly how to do the science, and you get to help figure it out. To me, that's one of the most exciting parts of science, and of life, really. And also working with really smart people.
I got to work with really smart engineers, but also social scientists and linguists and all kinds of people, because AI is so multifaceted and multidisciplinary that you have to involve teams with a really broad perspective in order to do a good job of evaluating all the different kinds of risks that might exist.
And to me, that's one of the great joys of the kind of work I've been able to do: I get to work with people who have really different backgrounds and really different perspectives on the same problem, and I get to learn from them, which basically encapsulates my perfect day.
Craig Joseph MD, FAAP, FAMIA
Yeah, well, AI is perfect for you. I think a lot of that is because no one really knows how it works either, right? At least that's my understanding as the guy on the outside looking in: even the people who are creating it have some idea, but they can't really predict what any answer is going to be.
And so you are kind of designing and building this plane as you fly it, especially as it comes to health care. Let me mine some of your Substack articles; I'd recommend that everyone who's interested subscribe and read some of the things you're thinking about. One thing you wrote about evaluating AI and technology in particular, and I love this quote, is:
"If we never articulate what good looks like, we will never recognize it when we see it." To me, you know, I know a workflow that works for me when I'm in the EHR, and I like it, but I've never written it down. I don't have that a priori understanding of what great is. How does one go about deciding and agreeing upon what good is?
Sarah Gebauer, MD
Yeah, it can be hard. And I think the main thing is that we want to first know what we're deciding about. The most important parts of AI evaluation to decide about are the safety parts: we want to first do no harm, and we want to make sure we're not violating that.
We don't want to create a product that's worse than what humans do already, or insert an AI product into a workflow and have the performance of the whole system degrade. So that's the first priority, and that's what we're trying to do now. Where we would like to get, where we need to get, is to say, let's test how much better this is than the current state.
Both of those questions are slightly different, but both need to be evaluated in order to make sure that what we're doing with AI makes sense. If we're going to put something in the workflow and it doesn't actually improve performance or efficacy or timeliness or any of those metrics, then there's no point in having it there in the first place.
And certainly if it degrades performance, we don't want it in there either. What good looks like will be dependent on what you're measuring, of course, which is one of the challenges with AI measurement. There are so many places and ways that AI might interact with the healthcare system that no universal standard will be able to be created.
However, we can anticipate reasonable ways AI will probably fail and where it might interact with failures that we already know exist in the healthcare world. We know how AI often fails; those failure modes are well documented. We know how humans fail and how the health care system fails; those are well documented too. And that's actually an advantage.
And I don't think we think about that often enough. Because there's been a failure literature and such a culture of root cause analyses and patient safety for so many years now in medicine, we have this robust literature and understanding of the likely ways that things are going to go awry. In other fields,
for example in biosecurity, no one has really asked, what is the hardest and most likely place to fail when creating a virus?
For a new virologist, for example, nobody really cares. The new PhDs go in there and do it, and it takes them however long it takes them, and no one has ever really cared to answer that question. It doesn't really impact a large system or other people the way that new intern mistakes do, which
we have studied. So we actually have a lot of help in health care that we don't have in other fields in terms of predicting where the failure points might be, and therefore where we should evaluate for breakdowns, what our questions should be, and therefore what good should look like. So good should look like at least the minimal performance that we currently have, and preferably better.
And then, going a step deeper, a lot of the governance questionnaires that I was referring to in that Substack post ask questions like, how do you handle the data, how do you prevent bias? These are things that we are getting better at doing from an engineering standpoint. It's really our engineering colleagues, mostly in other fields, like banking, insurance, and education.
Other heavily regulated fields are grappling with a lot of the same issues in terms of how we evaluate these kinds of things and make sure we're not perpetuating bias. This is a big deal when you're evaluating loans, for example; you don't want to be that company, and you shouldn't be, and you need to have a system to avoid it, the same way we do in health care.
So there are emerging techniques to deal with these kinds of problems that we can borrow from colleagues in other industries, and our machine learning engineer colleagues are really doing a good job of helping us with them. Some of those are technical solutions for what good could look like, and then there are some solutions that are going to be very human, workflow solutions for what good looks like.
These are things like: how do you train your staff about AI? How do you tell them what a bad answer from AI might look like? How do you help them understand just enough about how the AI works to understand what its limitations are? If you understand what AI is fundamentally good at and fundamentally bad at, you have a much better sense of what you should be using it for, and in what context, than you otherwise would.
And that's something that a lot of people I talk to are having a real challenge with these days: they're having to basically upskill their entire workforce on this is how the AI works,
this is what you should be thinking about. And this is really a societal issue, to basically upskill our entire community of humans on how to use this new tool; hospitals are trying to take on at least the healthcare-specific piece.
And then, in a more specialized form, how do you enhance your workforce's ability even more and upskill them to be able to evaluate these tools as part of your hospital's AI governance committee, to look through the answers and understand whether those are good answers or not? So there are a few levels of getting to know what good is.
One is you have to know what you're testing. The second is you have to understand, from a technical aspect, what you're really looking for in a good answer. And then third, you have to have the people who know enough about AI to be able to look at those answers and make sure those things match.
Craig Joseph MD, FAAP, FAMIA
I definitely hear the same kinds of concerns that you're talking about. And I think one of the things that's often overlooked is that you really do need to be an expert in an area to evaluate an AI's response, right? There are things where a nurse might look at clinical decision support and say, that's not in the realm of possibility, I shouldn't be
worried about that, while even a physician might look at the same thing and go, oh, okay, because for anyone who's not in that specialty, well, there's a lot that nurses do that we would never be able to do. Let me speak for myself: there's a lot I couldn't do, and I don't understand everything that goes on. I've often thought about the pre-EHR workflow world. Young people listening:
There was a pre-EHR time. Right. Whenever I asked physicians, hey, how does your order get operationalized? they'd be like, well, I write it down on a piece of paper, and then magic happens, and then the medicine gets into the patient. We don't know and we don't care, right?
Like, I don't actually know how that order works, and even within electronic health records it's still, well, I typed it in, and somehow the medicine gets into the patient, or somehow the order gets translated into the patient ambulating three times a day, and it's just kind of magic. But sure, you need to be an expert in that area, and then you also need to understand that an AI is basically a prediction machine. It's looking at things that it knows, but it's not really good at things
that it doesn't know or that we haven't seen before. Right? And how can we upskill everyone? It's a problem. Is this like everyone sits down in a lecture room, or is it just a PowerPoint with three slides? Tell us. Give us the answer now, please. We're waiting.
Sarah Gebauer, MD
This is a great question. I know I've talked to leaders at different health systems who are really grappling with this, and they're taking different approaches. Some are sending out a weekly AI update email. Some are trying to push through AI education for the entire workforce, which is always a challenge. Some are taking a very targeted approach: when they implement something, they take that opportunity to do some background AI education during the implementation.
There is definitely a range of techniques. I do want to touch on something you mentioned briefly, which is that AI makes the magic even more magic, if possible, right? I still think the patient's belongings magically get to their hospital room, as far as I'm concerned. But AI adds another layer to that, which is that there really is not a person behind it making that decision, and that distributed responsibility risk is a real risk that we haven't figured out how to deal with yet.
So, for example, say it used to be the workflow that you would give the orders to the nurse and then the nurse would go do something with them. But now you give the order to the AI, and the AI agent, let's say, operationalizes it for you: it takes the orders and puts them all in the right spot in the EHR
and makes all the things happen that you requested, in a different format. You've now gone from the nurse taking responsibility for those orders being correct, being in the right spot, and making sure they get done, to basically no person being responsible for that. Or it's still the physician, but the chances that the physician realizes they are now on the hook not just for putting those orders in, but also for making sure they get done and get done correctly, are almost zero.
So that is, to me, one of the larger concerns about AI implementation. We know that many health care failures happen because of lack of communication. When we take a human out of a role, we're not taking them out of the loop; all the AI tools still have a human in the loop, for the most part.
So even when we're not talking about taking them out of the loop, we're talking about replacing one of the human's responsibilities with an AI tool. We then shift the responsibility for making sure it gets done, and gets done correctly, to someone who may or may not be aware of that. And to me, that's an underappreciated risk, because that's where the failures happen.
That's a big Swiss cheese hole that we've been hearing about for years. If nobody will raise their hand and say, oh, that was my job and I didn't do it, my guess is that everybody would look around the room and say, I don't know, I thought somebody else was going to do it, by magic. Because that is still the prevailing assumption about many processes that happen in the hospital that people are not directly responsible for. There are so many pieces of work that happen in a hospital that people don't really understand other people's roles, much less how the AI might layer on top of that and take responsibility away from them.
Craig Joseph MD, FAAP, FAMIA
Yeah, you said that very eloquently. I say it ineloquently when I say we need somebody to sue. Who are we going to sue? Right? That's what it often comes down to in this country. When I hear that radiologists are all going to be retrained as school crossing guards because we won't need them to interpret images, well, we're always going to need someone to sue.
And so, unless society makes some changes to the way we do things, there needs to be a human who's supervising and, to your point, standing up and saying, I take responsibility for this, I looked over the film, or, if it's the nurse, hey, I'm the one who saw that the blood needed to be drawn and made sure there was a phlebotomist, or I drew it myself.
So I agree with you. It's hard to imagine a world in which that doesn't exist. There needs to be someone who has responsibility, because you can't go after, you can't sue, an AI, unless you know something I don't know.
Sarah Gebauer, MD
Well, I think there are emerging approaches, machine learning approaches, for assigning what people are colloquially calling responsibility for different AI decisions, so wrong AI decisions would be able to be traced back to rules and specific areas of the tool where it made the wrong decision. That, I suspect, will be part of the solution,
because auditing these machines, these tools, is really complicated. Did it pull from the wrong database, pull from old guidelines? Especially with agents, where is it pulling from? Is that the right location?
Did the machine go around its usual guardrails? Most of these tools that we use in health care have one of the main LLM brains, one of the large foundation model brains, wrapped in a deterministic wrapper, meaning it can only think about what to do in the sense of which tools to choose from and which list to choose from to provide information back.
So that's how most of the tools work these days. But yeah, we have to have some kind of way to audit that as well, and most hospitals and health systems do require auditability of their tools. But I agree that most of the liability is likely to fall on physicians for the foreseeable future.
That's been the record of medicine for many, many years now, and there's no real reason to think it will change now. I know there are some agreements about liability between some of the manufacturers of AI tools and the hospital systems that are starting to emerge, but they are not commonplace yet. And a lot of physicians are nervous about this role of taking de facto responsibility for AI tools that they probably didn't have much say in choosing or implementing. It's a hard spot to be in, and from a practical standpoint, in a lot of places, there's no way for that to be avoided.
Craig Joseph MD, FAAP, FAMIA
It reminds me of the early electronic health records, when a lot of responsibility was taken from other health care team members and put on physicians because it was easy to do. Well, where's the prescription going? What's the pharmacy? Well, the doctor can ask. Is the patient pregnant? The doctor can ask that. There were lots of things that were easy,
at least when we first started doing this, that maybe weren't the best decisions. They certainly were the easiest decisions, but not the best outcomes. Well, we are getting short on time, and I always like to end our interviews with the same question, so I'm going to ask you this very same question. Is there something in your life that is so well designed, it brings you joy whenever you use it? It doesn't have to be an AI, but it can be.
Sarah Gebauer, MD
Well, my family and I have traveled to 54 countries in the last four years together, with my four kids, who are now between the ages of nine and 13. You can imagine that we have to bring along a lot of medications with us. We have been to many places without potable water, so you can imagine what kinds of medicines we need: we need Benadryl in case there's an allergic reaction,
we need pain relievers, we need migraine medicine, we need all these things. And I have a medication container that will fit all of the medicines we need for six months into a cylindrical container about eight inches tall and about two inches in diameter, which can slot in almost anywhere in anyone's backpack. I am so happy that I don't have to lug around bottles of medication
to weigh us down. We only carry one backpack per person when we travel, even for three to five months at a time, so we are super light packers. I am always very thrilled to have such a small package that I can pack a lot of medication into very efficiently. If I'm ever stopped at a border and asked to show what's inside of it, it will be a jumble of pills that I'm sure will make me look terrible.
But it sure is convenient and makes me very happy when I use it.
Craig Joseph MD, FAAP, FAMIA
So, first of all, I think everyone who's listening right now just said to themselves, wait, how many countries? What was that number again?
Sarah Gebauer, MD
Yeah, 54, which has just been a wonderful experience for our family. We've been all over the world. We've been to six continents with our kids. We did school in Spain and had them learn Spanish. We've taken several months of family French lessons in the Alps. We've traveled to Asia, to Tunisia, all over the Middle East and Eastern Europe, the Balkans.
Our kids can explain in great detail why World War One started, for example, which I did not really understand until I was 42. And we've been to the Galapagos; we spent three weeks there just looking at different animals every day, which was amazing. So we've been really lucky to be able to spend this time together, and our kids are the easiest, most pleasant travel companions because they don't really care where they are.
If the plane's delayed, they'll just play. They're either going to play somewhere else or they'll just play in the airport. They don't really care, so they're a lot easier than adults.
Craig Joseph MD, FAAP, FAMIA
I love it. And where did you find this pill container?
Sarah Gebauer, MD
Yeah. On Amazon.
Craig Joseph MD, FAAP, FAMIA
Amazon. Okay.
Sarah Gebauer, MD
Yes. Each one has a little screw top, and there are seven different little sections.
Craig Joseph MD, FAAP, FAMIA
All right, all right. Well, we will try to find a link, and if we can find one, we'll throw it in the show notes. Great. Dr. Sarah Gebauer, thank you so much for joining us. This was great. Almost every sentence out of your mouth makes me tired thinking about all the things that you're doing and continue to do.
And I look forward to learning from you, and I look forward to you solving all the problems of AI. If you could do that, maybe by, like, 2027, that would be fabulous.
Sarah Gebauer, MD
Wonderful. Well, it's on my calendar already, so it'll definitely be accomplished. Thank you so much for having me. I had a great time and look forward to hearing more from you.
Craig Joseph MD, FAAP, FAMIA
Awesome. Thank you.
Outro:
Thanks for tuning in. We hope you enjoyed today's episode. For more on Dr. Sarah Gebauer, follow her Substack, Machine Learning for MDs: Crucial Concepts in Healthcare AI.
Check back for more episodes of Designing for Health wherever you listen to podcasts or on NordicGlobal.com. We’ll see you again next time on Designing for Health.