Designing for Health: Interview with Jeff Mounzer, PhD, and Peter Kriss, PhD [Podcast]

At first glance, many challenges in healthcare look like massive, complex, and sometimes unsolvable obstacles. But when human nature is taken into account, many of these foggy issues become clear. With a deeper understanding of behavioral economics, it becomes easier to discern why some problems in healthcare persist, and how they can be remedied. Pairing new technologies like artificial intelligence (AI) with well-established findings of human psychology is a pathway to solving some of healthcare’s biggest obstacles.

On today’s episode of In Network's Designing for Health podcast feature, Nordic Chief Medical Officer Craig Joseph, MD, talks with Jeff Mounzer, PhD, and Peter Kriss, PhD, both of Qventus, a company that uses AI to enhance healthcare workflows and operations. They discuss their respective PhD journeys, how they ended up at the intersection of psychology and technology, and how they’ve been able to simplify clinical operations within health systems. They also talk about using technology to expand operating room availability, implementing behavioral principles within healthcare, and streamlining the hospital discharge process.

Listen here:

 

 

In Network's Designing for Health podcast feature is available on all major podcasting platforms, including Apple Podcasts, Amazon Music, iHeart, Pandora, Spotify, and more. Search for 'In Network' and subscribe for updates on future episodes. Like what you hear? Make sure to leave a 5-star rating and write a review to help others find the podcast.

Want to hear more from Dr. Joseph? Order a copy of his book, Designing for Health.

Show Notes:

[00:00] Intros
[02:57] The goal of Qventus
[12:31] Perioperative scheduling optimization
[24:46] Integrating behavioral economics into workflows
[29:15] Estimating dates of discharge
[40:36] AI operational assistance
[45:53] Things that bring Drs. Mounzer and Kriss joy
[49:17] Outros

Transcript:

Dr. Craig Joseph: Doctors. Welcome to the pod. How are we doing today, Peter?

Dr. Peter Kriss: I'm doing great. Thanks for having us.

Dr. Craig Joseph: And, Jeff, how are you?

Dr. Jeff Mounzer: Also doing well, thanks so much.

Dr. Craig Joseph: All right. I thought a good way of starting would be to go over some of your minimal education that is required for appearing here with me, a distinguished physician who graduated from one of the finest medical schools in the city of Detroit. Jeff, you are Qventus’ chief product officer, and you're the product of the PhD program at Stanford University. And further, I read, Jeff, that your focus was on mathematical modeling and optimization to solve resource allocation problems in wireless networks, IT security, and the smart grid. So, was that, like, kind of underwater basket weaving? Like, is that, what's up with that?

Dr. Jeff Mounzer: Yeah, it was a winding road to end up in the PhD program that I ended up in. That could be a whole different discussion. But I ended up studying queueing theory, which is a rich mathematical discipline that has applications in all of those areas you were talking about, and also in healthcare. That's actually how I was introduced to problems in healthcare operations. Originally, our research group at Stanford, among those various problem areas you were just mentioning, also studied how patients flow through health systems. And that ended up being an area of deep interest.

Dr. Craig Joseph: Peter, you're Qventus’ director of product and analytics and the product of the PhD program at some place called Carnegie Mellon University. And my question to you, sir, Dr. Kriss, is: what kind of football team do you guys have down there?

Dr. Peter Kriss: I think I've seen them once. But I can't tell you much about Carnegie Mellon football. So I'll just skip that question for the most part.

Dr. Craig Joseph: That's fair. That's fair. I think that maybe you didn't choose to get your PhD from Carnegie Mellon based on their football team. Well, so more seriously, what kind of problems are you all trying to solve? Jeff, at Qventus? What are you working on?

Dr. Jeff Mounzer: For over a decade now, we've been working on problems related to what we call care operations: all of the operational challenges of running a health system that are adjacent to the delivery of care and that often end up on the plates of nurses and physicians. It's not licensed clinical work, but it's all the administrative, operational work that glues the health system together and makes sure that patients are flowing through it, things like care progression on the inpatient side, OR scheduling, and access to operating rooms. We have a fascination with what we call glue roles in health systems, the people who are just gluing everything together. And it really is a manually glued-together system in many places. I think we all experience that as patients, and we certainly see that experience among nurses and doctors all over health systems. So, we've been passionate about solving this problem from the beginning. That's what the company was founded to do. The mission statement is to simplify how healthcare operates. And we are really fortunate and thankful to have the opportunity to solve these problems every day with our health system partners. We work with health systems all over the country, we have dozens of partners, and we solve the hard problems. We sometimes say we're suckers for punishment. We started out going after length of stay, perioperative scheduling, and access, all those difficult, thorny problems that require a lot of different people to jump in, save the day, make those diving catches. How do we take all of that and smooth it out so that the health system works well?

Dr. Craig Joseph: That sounds like a thorny, thorny set of issues that you're dealing with. And we'll get into some more detail, you know, in a little bit. Peter, a significant part of your background is as a behavioral scientist, and that is not a typical role at a digital health company. So can you tell us how your background landed you a job at Qventus and, you know, the makeup of the team that you work with and what kind of work you do?

Dr. Peter Kriss: So I've always been kind of at the intersection of fields. I went to a small liberal arts college, Swarthmore, with a major in math and minors in philosophy and psychology, and spent a year at the London School of Economics. Over my time there, I really discovered that what fascinated me most were the intersecting corners of those fields, like game theory, rationality, morality, human decision-making. And my key mentor there was Barry Schwartz in the psychology department. I remember building up the courage once, I was a senior at the time, and asking him, Professor Schwartz, can I ask you a personal question? And he said, sure. I said, why do you wear your watch on the inside of your wrist? Like, to look at it, you have to turn your wrist the other way from normal. And he said, Peter, there's something that you'll discover over the course of your studies, which is that the vast majority of things that people do, they do for extremely trivial reasons. He explained he just had a watch at one point where the weight of it always made the face slide to the bottom of his wrist. So, he got used to looking at it there, and now that's just the habit; every new watch goes on the same way. And so, part of what really fascinated me about these fields is how what seem like mistakes are not random. They very well may be mistakes that we make in certain contexts, but they're not just inexplicable mistakes. There are certain patterns in the types of mistakes people make. Is there a mismatch between, you know, a way of thinking that works quite well in one domain and applying it to another domain where it doesn't work so well? Anyway, that's sort of where my intellectual interest in this stuff started. So, to do more of that, as you mentioned, I went to this department at Carnegie Mellon, a very unique department called Social and Decision Sciences. There's a bunch of economists, psychologists, and others like mathematicians and historians, all studying human decision-making. There I did mostly lab experiments, so we'd bring undergrads into computer labs and have them play games for money. I studied mostly two things. One was social preferences, like fairness, punishment, lying, that kind of thing. The other was coordination: basically, when interests are not perfectly aligned but not perfectly opposed, how do you get communication to work well? So, for example, if interests are perfectly overlapping, you let people talk and they'll come to the best answer. It's pretty simple. If interests are perfectly opposed, then what people say basically doesn't matter; what people say at a poker table has basically no impact. But in between, really weird stuff happens. Sometimes communication helps, sometimes it hurts. Hierarchy matters. Group size matters. Some of my work was showing that the meaning of silence depends on the context: sometimes silence is a sign of confidence, sometimes it's a sign of uncertainty. So that was the academic program, mostly training to be a professor. But there had been a significant string of students before me who had gone into industry, and I felt like I could genuinely be happy either way. I ended up finding this company called Medallia, which, broadly speaking, does customer feedback, but in a fairly unique way.
So it wasn't market research in the sense of doing a bunch of analysis that leaders want to do; it was more oriented around how do we get feedback from the customer to the front line as close to real time as possible, in a way they can act on. And that resonated with me a lot, because that's arguably the largest application of what I was studying: interests overlapping, but not perfectly, between companies and customers. It's a massive example of that. How do you get the right information flow so that people can actually do something with it and get better outcomes? So I was on a series of analytics and research teams there. And then, after six years, I ended up following Jeff to Qventus. I was really excited about finding a place where, I'd say, deeply understanding and changing frontline behavior in complex organizations was going to make or break the company. I wanted to be somewhere where that was absolutely central, and it became clear after, you know, 15 minutes of talking to Jeff that we had a great match there. So that's largely where I've been focused at Qventus: figuring out how we tell what's working, what's not working, why, and what to do about it. And we built a team around that. We built analytics of our product itself, to figure out how we need to adjust it: is it actually driving the behaviors and the outcomes we want? And we also built tools for our clients themselves to answer those same questions about their own operations: what in my hospital is working and not working well, and what should I do about it? So that's the couple-minute version of my path.

Dr. Craig Joseph: I love it. And I think it's a big differentiator for success, not only for companies but for healthcare systems as well. You know, just deciding that we've got good tech, putting it out in the world, and then wondering why you're not moving the needle, whatever that needle is. Oftentimes, it's related to how the humans are using your tool, and those humans get in the way all the time. Gosh, I always joked when I was in practice that my practice would run much more efficiently if it weren't for the patients.

Dr. Jeff Mounzer: Yeah, yeah. I mean, we realized very early in our journey as a company that we really could not be successful driving the impact we were hoping to drive without being really good at the things Peter is good at, behavioral science. We found that just about every problem in care operations has some set of incentives, behaviors, etc., that need to be understood and designed for very explicitly. And no matter how good our core tech is (and, if I may say so myself, it's very good), if you can't make something that fits into the reality of the lives of care providers and navigates the complex dynamics of relationships in the health system, the hierarchy, the way that multidisciplinary care teams interface, the way they hand things off between each other, it just doesn't work. And our journey as a company has been a series of trying things, failing, learning, and eventually being able to put together all of these different elements, behavioral science, design, technology, automation, so that these historically intractable problems actually can be solved systematically and repeatedly. But behavioral science is one of the most important linchpins we've found for making it successful.

Dr. Craig Joseph: So can you maybe give us an example of one of the problems that you are trying to solve, or feel that you have solved, and how it's a combination of technology and behavioral science and kind of understanding how humans tick? Is there one where you can kind of say, we went down this path and we thought we were going to be brilliant, but it didn't work out exactly the way we wanted it to, and we pivoted, and now it's amazing?

Dr. Jeff Mounzer: So, let's take how to improve perioperative scheduling and access. The interesting thing you discover as you start to look at this deeply is that patients and surgeons are desperately trying to get access to ORs, to OR time to be able to do surgery, while hospitals and health systems actually have plenty of unutilized time; on average, about 40% of OR time goes unused. So, you look at this problem from the outside in and you say, oh, this is a very simple math problem that should be simple to solve, right? You have a lot of people who want more of that time than they are getting. You have a lot of available time. Just put those two things together and you should be able to really improve access and improve utilization of ORs. And by the way, it's very expensive to run an operating room that's not doing a surgery. That's a huge way for a health system to lose money quickly. So, as we started to look at this, our first instinct, of course, was, okay, let's understand the problem. And you quickly realize that there is a set of ways access to ORs is set up in health systems that creates this problem, actually a queueing problem in some sense, and it's pretty dangerous to start talking about queueing because you go down a rabbit hole pretty quickly. But, high level, the core mechanism at play is that surgeons will have blocks, which are times reserved for them in operating rooms, and they can book into those times as they get surgeries scheduled, right? So, let's say every Monday, all day, operating room three is mine, and as I'm seeing patients in clinic, I'm scheduling them into my block time. Really, what you've done when you've set up blocks (and most health systems will be mostly block, right, where most of the surgeons who operate there have a block) is take that OR schedule and chunk it up into all these times that surgeon A can operate in this one OR. From a queueing-theoretic perspective, what you've done is actually create a lot of small queues, which is, generally speaking, suboptimal. It's kind of the same thing as if you're at the grocery store and there are all these different checkout lines, and people don't naturally balance themselves across those checkout lines super well. You'll see some that get really long and some that stay really short. Same idea. And so, what happens is not every surgeon fills every one of their blocks, but no one else can book into those blocks because they're reserved for that surgeon. And by the time a block might become available (usually a health system has a policy that says, like, three days before surgery, if you haven't used your block, we'll release it to everyone else), three days is not a lot of time to go find another surgery to put into that slot. So we started solving this as an optimization problem: okay, these blocks should be this size, you know, resize them to be appropriate; actually, this block should probably become open time that anyone can book into, that would be more flexible. Turns out doing all of that is almost completely useless. The math is nice theoretically; there are lots of great papers, some of which I've been involved in, on how to solve this optimization problem. Sharing data about this was also a natural starting point for us. Say, hey, look, here's every surgeon's block utilization across the last two quarters, okay? This one doesn't have as high utilization as this other one.
Maybe take some block from this one, move it to that one. Turns out it's all about the behavioral science for this problem. Because surgeons, like any human being, care about having reserved time and a place where they feel valued. There's no really strong incentive to give up that time, especially if you're a well-established surgeon whose patients the health system really wants to keep. And then the politics of reallocating block, that's a bear that almost no one wants to tackle, right? You're going to go to a surgeon and say, I'm taking away this block from you, and I'm giving it to this other surgeon. That is a high-emotional-energy, very difficult conversation to navigate, because we're talking about people's livelihoods here, right? How much time I have available in the OR dictates how many surgeries I can do. So, to be able to start solving that problem, we had to take a completely different approach. We had to start looking at this as a behavioral science problem. How do you set up the incentives so that surgeons are not only willing to release block early, for example, but actually want to do that? To be a good steward of that overall OR time, to understand that it's actually not going to cost them anything because they weren't going to fill that time anyway, and to understand that they can get time in open time if they need it. So, we set up not just the suggestion, let's say, to release this amount of block that you're not likely to use, but we also communicate it in a way that a surgeon can say, okay, I'm not going to lose anything by doing this, and actually, this is going to help me. It's going to increase my overall utilization. I'm going to show that stewardship of time, etc. So, the key was all in the design of the incentives and the behavioral science of it. And once we started to get that piece right, once we started to understand the interplay of relationships in the OR, the dynamics and incentives, that's when, all of a sudden, we had built a solution that started to make a difference, and a big difference. And the math helped; it's important. We have big machine learning models running to identify who's not going to use their time and to make those suggestions at the right time. All of those kinds of things are critical ingredients. But the real magic of this thing is in the personalization for each surgeon, each person in the perioperative landscape: making sure that we're meeting them where they are, understanding their incentives, and designing with their goals in mind, not just solving a math problem.

Dr. Craig Joseph: So, if I put my surgeon hat on, it's kind of like a budget in a big company. You know, if I don't use my budget, I'm going to get less budget next year, which is the opposite of what I want, so I better use my budget, even if it doesn't make a lot of sense. If I give up my time three days prior or five days prior, then if I do get a last-minute case, I can't do it, because I've given up my time. So isn't it always in my best interests to just keep that time until the hospital says I have to give it back, whether that's three days prior or whatever? So, I'm interested in what the rewards are that are out there for me. What are the incentives for me to give up this time? Because now I'm limiting that last-minute patient who I know is out there; I just haven't found them yet. How do you figure that out?

Dr. Jeff Mounzer: I think that's a Peter question.

Dr. Peter Kriss: Well, a couple of ways. But one of the really important things is to directly address the core concern: the fear that if I do have a case, I won't be able to get time back to book it soon. And so that's where some of the other pieces of the solution come in. One of the pieces is like a Google Flights or kayak.com for finding a time for a case. As you may know, right now booking a case can require ten-plus phone calls and faxes back and forth between surgeons' schedulers and OR schedulers to find the time. But one piece of the product allows them to basically say, here's the kind of case I want to do, here are the requirements, the timing, the type of equipment I need, and it shoots back: here are the available slots. They can identify, in priority order, which ones they want, the request goes to the right team, and it gets approved quickly. So, once they've had the experience of having a short-term case they want to book really, really soon and finding the time easily, that can eliminate some of the anxiety about giving up the slot. Another piece is the core machine learning models behind the scenes. When we see an open slot, of which there are many, whether it's always been open time or it was block time that was released, we identify which surgeon would be most likely to use that slot, based on the times of week they've operated in the past, the kinds of cases they do, etc. And then we proactively reach out to their scheduler saying, hey, here's a slot, do you want it? Once they've had enough of this experience that actually getting time for a case isn't that hard, they're way more willing to release it, particularly if we can sometimes say in those messages things like, you know, it looks like there's less than a 10% chance you're going to use this slot, will you give it up? And sometimes the answer is just, yeah. These messages often go to the schedulers, and sometimes the scheduler's response is essentially, oh yeah, they're on vacation that week, and they just never would have had any other reason to release it. It's such low-hanging fruit that, to use the overused analogy, goes unpicked, and I guess I have to continue that analogy. But some of the problems are really, really simple when it comes down to it; they're just hidden by the complexity of the system.

Dr. Jeff Mounzer: And I think Peter's hitting on a couple of really important points there. One is that overall service design is really critical to making this work. If you don't have a system that is addressing the complex set of needs of the people involved, if there's only one component of it, just reaching out and saying, hey, it doesn't look like you're going to use this block, can you release it so that other people can use it, but not the ability to then get time when I need it, or to know that the OR is looking out for me and trying to surface times that I am going to be able to use, or all of the other components of the solution, then you can't build that flywheel of trust and belief that this is a net-net good thing for me, right? So that overall experience, building a system and not just a point solution, is critically important. And then I think the other piece is that each of the interactions we have with a surgeon has to be thoughtful about the concerns they would raise, right? When we reach out and suggest, hey, it looks like you're not going to use this portion of your block, we're not saying give up the whole block, just this portion of your block, and there's less than a 5% chance that you're going to use it. And by the way, if you are able to release this, you know, 20, 30 days in advance, someone else is going to be able to use it, but also your overall block utilization metric, which most institutions are tracking, is going to go up, not down. So, you're going to look like a better steward of your block. Each of those, the design of not just the system but each interaction, has to address the needs and concerns that aren't going to be naturally raised, right? What you said is exactly what most people think. That's what I would think: what is my incentive? At least as it's set up, I'd just hold on to it until the last second, just in case. But you have to build that environment of trust in each interaction and then in the system that's around it. And then you can start to see pretty transformational change, which is what we've seen.

Dr. Craig Joseph: It sounds like, with the data that you have access to, you're kind of doing what Peter's professor came up with, you know, with wearing his watch on the inside. Although, Peter, it sounds like your professor knew why he wore his watch on the inside. He said it was kind of silly, you know, based on a heavy watch many decades ago, but he couldn't change after that. It sounds like you're able to come to surgeons and say, with high certainty, there's only a small chance, maybe 5%, that you're going to actually use this. And over time, it sounds like they're trusting you. When they see that kind of 5%, they're like, yeah, you're right. You know, Friday, or even Thursday afternoon, I'd rather not fill this with big cases, because I don't want to come in on the weekends to round on these post-op patients. And so maybe they start to see and explain to themselves how they behave, even though they never would have normally sat down and thought that through.

Dr. Jeff Mounzer: Yeah. I think it's really important to meet people where they are starting from, just as a general behavioral principle. You know, as an extreme example, we could come in and say, all right, mathematically speaking, we should change the block allocations completely every quarter. That actually is probably mathematically right: we're going to take five hours off of your block and move it to this other person based on their utilization pattern, etc. But that's a huge change to try to put on a health system all at once. Eventually, it would be great if they can work their way up to that, but it is meaningfully different if you're asked once in a while to release a couple of hours out of your block. That's not a systematic change to the block allocations, but it's a small change that actually ends up yielding the same net result. In the end, you're creating more liquidity in that marketplace, but you're not doing it in a way that requires a complete change in behavior or in the way the system is constructed. That kind of thinking, how do you incrementally change behavior in a meaningful way that benefits the system comprehensively but doesn't make each person completely change the way they think about the world, that's where a lot of the magic is when it comes to care operations. You have to remember that care providers have very difficult, complex jobs and care most about caring for patients. And the more we are able to let them focus on that, and not have to change all of their habits so that they're solving operational problems, the more successful any of these kinds of changes are.

Dr. Craig Joseph: 100% agree. And you know, especially when we're talking about surgeons, and I can say this because they're not here to defend themselves. There was a joke when I was a resident, where a surgery resident would pose the question: what is the worst part about being on call every other night? Because back in the day when I was a resident, many of them were on call every other night. And do you know what the answer was, Jeff or Peter?

Dr. Jeff Mounzer: I do not.

Dr. Peter Kriss: Yeah. I'm not familiar.

Dr. Jeff Mounzer: I’m not that funny. So, it's hard for me to come up with the punchline to a joke.

Dr. Craig Joseph: Yeah. The only bad part about being on call every other night is that you miss half the good cases. And you can smile, so that's a joke. That's a joke, right? And they're like, I don't know, maybe? I don't know.

Dr. Jeff Mounzer: There's a real, there's real truth to that.

Dr. Craig Joseph: They love being in the operating room and operating. That's core to what they do. It's also how they make a lot of their money, but that's really a distant secondary or tertiary reason. The main thing is they love operating, and anything you can do to maximize their time doing that is appreciated. And I guess they would probably walk through fire for you if you help them do that.

Dr. Jeff Mounzer: That's entirely the point. And that's what we've been able to do, consistently. That's something we're very proud of, and it's why we're very fortunate to have supportive surgeons that we work with. Because there are real benefits to them in being able to do what they want to do.

Dr. Craig Joseph: Let's pivot a little bit. Maybe surgeons care about this too, but for physicians who admit patients to the hospital, let's talk about the estimated date of discharge. All right. So, I was taught, back in 1926 when I was a resident, that date might not be fully accurate, but it was a long time ago, I was taught that discharge planning begins at the time of admission. So, yeah, you need to start thinking about getting patients out of there the second they get in there, at least thinking about it to make sure that you remove every obstacle you can in a timely fashion. And one part of that planning is understanding when we think this patient is going to go home. You know, the patient came in with community-acquired pneumonia or had a car accident, but you generally have a sense that, yeah, I have three or four days. So you can say, okay, I'm estimating, no one's holding you to it, but I'm estimating that they'll go. They came in on Monday; they'll probably go home on Thursday. And that's really important, right? Because the hospital counts on that; there are legions of people behind the scenes working to make sure that if that's possible, safe, and effective, it can happen. And so, I understand that you all have dealt with this estimated date of discharge and, you know, have some science and technology behind it, making it accurate, making it helpful to both patients and the hospital. Can you tell us a little bit more about that?

Dr. Jeff Mounzer: Everyone is taught that discharge planning begins at the time of admission. That is the right way to think about it. In practice, operationalizing it is surprising: something that sounds easy is actually super hard to do. And you'll see it in the data. We work with health systems all over the country, and I think the average percentage of patients who have an estimated date of discharge documented on the first day of admission is probably less than 20%. So you say, okay, well, everyone knows this is theoretically supposed to happen, and it should be helpful, but it doesn't happen. And so why is that? Turns out there are a lot of reasons, and many of them are rooted in behavioral science. One of them is that it's actually pretty hard to make that estimate for many patients. There's a lot of uncertainty early in a patient's stay; sometimes you have a sense for it, sometimes you don't. If you're at a newer hospital, you don't have that much data to draw from. There are a lot of considerations that are outside of clinical care as well, right? A socially complex patient is going to be harder to discharge, and that might affect your estimated date of discharge. So that's one. There's a coordination problem as well, which is how you make sure that the right group of people have discussed what that estimated date of discharge is; usually there's going to be some benefit from multiple perspectives. That's one of the big advantages of multidisciplinary rounds, which, again, are a best practice implemented with varying degrees of quality across health systems. Many of those processes don't work very well, even just bringing the group of clinicians together to discuss what the plan should be for a patient. A third reason is that a lot of people don't understand the point of putting in an estimated date of discharge. Okay, I put in a date. What does that do? Each person can only see their piece of the puzzle. I'm a physician; I put in an estimated date of discharge, but I already know when I think the patient is going to discharge, so why do I need to enter this? I'm already in the EHR two to four hours a day documenting. I don't know why I need to fill in this extra field for something I know in my head, and I don't see what benefit it provides; I don't understand the impact of it. So, it turns out that even this little thing is very complicated to actually solve. And again, in our journey of learning, you know, we started with: well, let's use a machine learning model to predict what the estimated date of discharge is going to be. Turns out we can do that extremely well. And turns out that by itself didn't do very much; it'll offer a prediction and say, here's when we think it's going to be. Designing it so that it's actually useful and accepted turned out to be much harder. We had to learn that what we actually need is a target date of discharge versus an estimate, because if you just predict what's happened in the past, you're encoding all the operational inefficiency that's built into that historical behavior. So, we might actually want to discharge the patient on Thursday, but my model is going to predict Friday, because we've always discharged patients a day later than the ideal date. That's one example. Another example is how much do we just take the action on behalf of the care team versus give them a suggestion somewhere that they have to process and think about?
So, can we use smart defaults? That actually can make a really big difference. How do you explain what's happening so that they understand why this intervention matters? And by the way, this is another example of systems thinking: if I know that putting in the EDD means that something downstream is going to happen, that it's going to make the patient's care progression more efficient operationally, well, that makes a difference in my motivation to do this activity. So, for example, if I know that when I put in or confirm an estimated date of discharge that's coming from our machine learning models, then each ancillary is going to have a better ability to prioritize which patients to see in which order so that they can keep the care progression going. That expected date of discharge is one of the ingredients, among many others, that we use in our optimization model for ancillary priorities. And now I don't have to go call the ancillary every time. I don't have to call physical therapy and say, hey, can you see my patient? It's the day before discharge, and I've got to make sure they get seen, otherwise they're going to be delayed. We can eliminate all of that burden. Now people start to see, oh, there is a positive feedback loop here: if I take this action, things happen, and I can see that those things are happening. Another critical component of this is actually showing what's happening as a result of these actions. Now you start to be able to make an impact, but it's shockingly complex. I mean, I spent many, many years just trying to understand this one component, and then seeing how clearly it makes an impact. It's very obvious to anyone that if you make a plan for something, it's much more likely that you achieve that plan than if you go into something without a plan at all. And that's really what an estimated date of discharge is. But actually making that happen consistently and well, what do you think, Peter? Five, six years of research and banging our heads against it before we really started to crack that nut? Anyways, Peter, anything to add on the behavioral side of it?

Dr. Peter Kriss: Yeah, I think this is a great example of another observation I've had, which is that, you know, we now have many decades of really excellent research on the science of behavior change. But the vast majority of it is focused on individual behavior, often in the context of a larger system, like getting somebody on Facebook to click an ad or whatever. What's really fundamentally different about these organizational problems is that all the same individual issues still occur, everything we just discussed, but on top of that, you have a whole new set of problems: you have to get so many different pieces right simultaneously. You're in what game theory would call a weakest-link problem, where if you mess up one of the five pieces, even if all the other pieces are designed perfectly, you still might get zero impact and assume that the other four pieces are wrong too. So, you have to have a granular enough understanding of the intermediate objectives of each piece of the solution and what is actually holding them back. Like, I think one mistake that many companies, including ourselves, have made in the past on the behavioral science front is assuming that the reason the behavior is not happening is because people have forgotten and need a reminder. So, you send them some kind of notification, a text message or a pop-up in an app or whatever, and it makes no difference whatsoever, because that's not the problem. The problem is not that they don't know; the problem is something else. Either they don't see why it's valuable, or they're busy doing something else that they think is more important, or, well, maybe is more important. And so, you have to understand both each of these individual behaviors you're trying to drive and the overall picture: how do they fit together? And when possible, the easiest path is often to skip the step or automate the step, so you can bypass that piece of the problem. That's definitely not always possible. There are many interesting hybrids, like human in the loop: here's a suggestion; it's much easier to just accept it than to have to come up with the answer in your head. But that's part of what makes the problems fun.

Dr. Craig Joseph: I love so many aspects of that discussion. One is just the name, right? Estimated date of discharge versus target date of discharge, and the implication thereof, kind of makes all the difference in the world for someone who's thinking about that. I love the transparency that you were talking about. It reminds me of the simplest example of transparency: when I press the elevator call button and it lights up. I continue to think that's the most amazing thing, because now I know I don't have to press it again while I'm waiting; I see it's lit up, so I know it's been pressed. And I also know that when I walk up to the elevator and someone's standing in front of it, I see the light is on, so I know it's been pressed. So, you know, kind of explaining to people exactly, hey, what do you get out of giving us a good target date of discharge? And there are huge benefits. Sometimes physicians might see some benefit to the hospital and think, well, the hospital makes a lot of money; they don't really need me for this. But there are absolutely aspects of getting patients through efficiently that directly benefit physicians in terms of efficiency and time savings, you know, even financial incentives there. So that makes complete sense. So, Peter, I understand one of the things that Qventus is working on is something that you call AI operational assistants, kind of helping the right person at the right time in the right way. Can you tell us a little bit more about that?

Dr. Peter Kriss: Yeah, definitely. I'll start with a story about why getting this right matters. This was decades ago, when Deep Blue, the, you know, chess computer, beat Kasparov, the reigning world champion, for the first time, a huge milestone in computing technology. And what they had Deep Blue do next was predict medical diagnoses, so that it was a tool for physicians where they plug in the symptoms and it spits back a list of possible diagnoses in probability order. They test this with doctors, and the overwhelming response they get is, I hate this. This is useless. The first change they make is to change the phrasing from here's the probability of each diagnosis to here's the probability that these suggestions will be helpful to you. Overwhelmingly, the response changes to: it's great. It's like having a conversation with a colleague. So what's going on here? I mean, on some level it's obviously just wording, but, you know, looking at it more deeply, what that second phrasing is sidestepping is this phenomenon called reactance. It's this feeling of, don't tell me what to do, which is especially strong when it comes to technology and ML, AI, all that, and especially strong when you're in the domain of someone's expertise. And so the key when building products like this is to both seem and actually be assistive. Part of what the new developments in generative AI, LLM-type stuff, make possible is taking that to an entirely new level in terms of how we can take away the stupid stuff. Maybe I'll turn it over to Jeff for some more examples of what that means in an AI context.

Dr. Jeff Mounzer: Yeah. I mean, it's been a really fascinating set of technology developments over the last couple of years with generative AI, and of course, being an AI, machine learning, and automation company, we'd like to be at the forefront of that work, and we have been. One of the things we're seeing, and I think it's going to be a very interesting couple of years on that path toward 'assistiveness,' as Peter was talking about, is that these types of technologies enable us, for example, to build what we're calling AI operational assistants. Think of it as having an intern, or an administrative assistant, or a unit clerk for every person in the health system. And they can do such interesting things when you combine them with an underlying platform like ours, where you have all this real-time data and machine learning and optimization models running. Now you're able to have these assistants not only gather more information in unstructured ways to feed the brain of the platform, but also take a lot more actions downstream in different modalities: make phone calls, write messages, make suggestions, in a very human-oriented way. It also means that new design paradigms are going to emerge, and we are building many of them, for the way that we as humans interact with technologies like ours, where we can build even more sophisticated human-in-the-loop, in-dialog-with-your-assistant types of interactions. We see a multiplicative impact coming in terms of our ability to take this whole class of care operations tasks and really transform the way we think about doing them. And again, it's through that concept of an assistant that you are in dialog with, that knows when to bring things to you, knows when to do things itself, can find those balances, and that you can train through dialog as well. So, we're really excited about this next generation of innovation that's coming, this idea that everyone can have an assistant-type experience with the core technologies.

Dr. Craig Joseph: That's amazing. I am equally excited about, you know, an AI helping me do stuff that I don't really think adds a lot of value, but that's still important and needs to be done. Well, I'm sad to say we are near the end of our time. This conversation has been so exciting, and I've learned so much. I'd love to carry it on; we might have to do a second version at some point and see what's happened in the interim. But let me ask you both, Peter and Jeff, the question that we like to end with, which is: is there a product or workflow, a thing that's so well designed, it brings you joy? Peter, why don't we start with you?

Dr. Peter Kriss: I'd say, for me, the thing that brings me disproportionate joy is a really well-designed game. Part of the reason I like games so much is that they create this space, very unlike the real world, in which you're told exactly what you're supposed to care about. There's some sort of point system, scoring system, or something that's unambiguous. But a well-balanced, well-designed game has so much freedom within that, and nuance, and interpersonal interaction, that it creates the opportunity to play in this decision space in a way that you don't really get to do in real life, because in real life your decisions matter, and in that space, you know, they don't. So you get to play with that decision-making capability. Me personally, one of the games I love is pool, like eight ball or nine ball. It has the disadvantage that you have to get to a certain level of mastering the physical basics before the balance of the game really works great. But that would be my answer: really well-designed games.

Dr. Craig Joseph: Love it. Jeff, what about you?

Dr. Jeff Mounzer: Yeah, I'm a product guy, so I could give a classic product answer: Google Docs, for what it's worth, has been transformative to the way people work. But the one I'll share, since in this conversation I've been thinking a lot about service design and systems, is a silent retreat. There's a retreat house not very far from where I live. I've been going there for probably over ten years now, and it is one of the greatest sources of joy in my life to go up there for a weekend. It's a remarkably well-designed experience, and I'm not sure it was completely intentional; I think it might have evolved over 100 years, perhaps. But the end result is that I find every element of that experience perfect, whether it's the size of the room you stay in, very small, Spartan, which kind of encourages you to be outside in the foothills, or the setting, which is immaculate. But maybe the example I like best is the rules. It's a very simple set of rules, but well crafted. And one of them is, you don't have to make eye contact with anybody. It's an accepted social norm there, because even the act of making eye contact with someone as you're walking by them, or that feeling of, I have to say hi, or when you're trying to get dinner and there are people around you, the freedom to say, I don't even have to think about a human interaction in this time and space, that little insight completely changes the way you experience the setting. So, ironically, I could talk all day about silent retreats, but I highly recommend them, especially in our day and age. Getting 48 hours completely silent, no technology, no need to interact with another human, can be a remarkably powerful reset and an opportunity to clear your head.

Dr. Craig Joseph: I love it, I love it. I think that's great. Well, thank you both for this fascinating conversation, and I look forward to the work that you do in the future. And again, I'm going to double down on my threat to talk to you again at some point in the upcoming year. So, thank you so much.

Dr. Peter Kriss: Thank you. Appreciate it.

Dr. Jeff Mounzer: Loved spending time with you, Craig.

 

Topics: featured, podcast
