Humanity’s imagination has long been captured by intelligences that exist outside the human brain. We are fascinated by the creatures we share the planet with, and it is easy to bring to mind examples of our attempts to understand and even shape (e.g., the dog) the way they think. The birth of computers introduced the possibility of in silico intelligence, and a new frontier of possibility and worry was born. Movies like The Terminator, Blade Runner, and A.I. Artificial Intelligence display both our fascination with and our fear of non-human thinking. Recently, algorithms patterned on our own neural networks have moved from possibility to the cusp of usefulness in our work and personal lives; generative AIs like ChatGPT are the perfect example. Early use of predictive algorithms and other AI-adjacent technologies in healthcare has shown us, however, that there is a last-mile problem. To overcome issues of trust, transparency, bias amplification, privacy, and workflow integration, health systems need to take a gradual, iterative approach. And it is worth it: as my colleague Dr. Craig Joseph points out, AI may be a key tool in health systems’ efforts to avoid disintermediation.
Artificial intelligence (AI) has the potential to reshape the healthcare industry, particularly in back-office operations. Rapid pattern detection, made possible by AI technology, has enabled healthcare providers to streamline their operations and improve revenue cycle management. This includes translating diagnoses, treatments, and procedures into billing codes, which leads to more accurate billing and reimbursement. AI is also being used to speed up claims submission and adjudication, making healthcare reimbursement faster and more efficient. And AI is demonstrating utility in fraud detection, where it is deployed to identify aberrant insurance claims, detect evidence of billing scams, or flag prescribing patterns that are out of line with normal practice.
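To make the fraud-detection idea concrete, here is a minimal sketch of claims anomaly detection using scikit-learn’s IsolationForest. The claim features and numbers are hypothetical, invented purely for illustration; a real pipeline would engineer features from actual claims data and tune the model against investigator feedback.

```python
# A minimal sketch of flagging aberrant insurance claims with an
# unsupervised anomaly detector. All features and values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical claim features: billed amount ($), number of line items,
# and days between date of service and claim submission.
normal_claims = rng.normal(loc=[250.0, 3.0, 14.0], scale=[75.0, 1.0, 5.0], size=(1000, 3))
odd_claims = rng.normal(loc=[4000.0, 20.0, 90.0], scale=[500.0, 5.0, 10.0], size=(10, 3))
claims = np.vstack([normal_claims, odd_claims])

# IsolationForest isolates outliers without labeled fraud examples.
# The contamination rate is a guess at the fraction of aberrant claims
# and would be tuned in practice.
model = IsolationForest(contamination=0.01, random_state=42)
labels = model.fit_predict(claims)  # -1 = flagged as anomalous, 1 = normal

flagged = np.where(labels == -1)[0]
print(f"Flagged {len(flagged)} of {len(claims)} claims for manual review")
```

Note that a model like this only surfaces candidates; the judgment call on whether a flagged claim is actually fraudulent stays with human reviewers.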
On the clinical side, AI has the potential to reduce care team workloads and automate routine tasks, freeing care providers to focus on more complex cases and spend more time with patients. There are also examples of algorithms directly improving patient care: Nordic has worked with clients to deploy predictive models that reduce hospital-acquired infections by, in some cases, more than 30%. AI also has a role to play in enhancing the patient experience. It can personalize care by tailoring treatment plans, educational materials, and reminders to individual patients based on their unique characteristics and medical history, helping patients better understand their conditions and make informed decisions about their care. AI can also reduce wait times and improve scheduling; for instance, it can optimize patient flow through hospitals and clinics, ensuring that patients are seen in a timely manner and that appointments are scheduled efficiently, and it can remind patients about appointments and medications, supporting adherence to their treatment plans.
But all new technologies come with implementation challenges. The initial investment can be significant, and ongoing costs add up quickly. Additionally, maintenance may be complex, additional support and training may be needed, and good governance needs to be stood up. User adoption can also be an obstacle, as staff may resist changes to their workflow or need time and training to become proficient with the new technology.
AI implementation has some unique challenges that require hard thinking about the desired end state before a project’s initiation (Dr. Joseph and I talk about the importance of starting with the end state, and end users, in mind in our forthcoming book Designing for Health). One challenge is data size and quality. AI training sets can require massive amounts of data, which can make compute time very expensive. There are also significant privacy concerns for health consumers, particularly when data are aggregated from several sources. Data quality is an equally significant concern. As I mentioned in a previous blog, many data sets lack representation, skewing toward a wealthier, largely white population. The result can be algorithms that magnify that skew, worsening health inequities.
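Representation problems can often be surfaced with a simple audit before any model is trained. Below is a minimal sketch using pandas; the column name, categories, counts, and reference percentages are all hypothetical stand-ins for real service-area or census demographics.

```python
# A minimal sketch of a training-set representation audit.
# The data and reference shares here are hypothetical.
import pandas as pd

train = pd.DataFrame({
    "race_ethnicity": ["White"] * 780 + ["Black"] * 90 + ["Hispanic"] * 80 + ["Asian"] * 50
})

# Share of each group in the training data vs. the population the model will serve
observed = train["race_ethnicity"].value_counts(normalize=True)
reference = pd.Series({"White": 0.60, "Black": 0.13, "Hispanic": 0.19, "Asian": 0.06})

audit = pd.DataFrame({"training": observed, "reference": reference})
audit["gap"] = audit["training"] - audit["reference"]
print(audit.sort_values("gap"))  # large positive gaps flag over-representation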
Another challenge is drift. Drift in AI refers to the phenomenon where models gradually become less accurate over time. There are several reasons for this, but a major one is the growing discrepancy between the training data and the real world: training data are historical, after all, and represent a snapshot of the relationship between variables at a point in time. Drift can lead to errors, bias, and other unintended consequences, so it is essential to have governance policies and monitoring mechanisms in place (see the sketch below) to ensure AI models remain accurate and unbiased over time. Trust is another huge hurdle, and it can be a massive barrier to user adoption. AI systems are often seen as “black boxes”: the processes used to arrive at a particular output are not transparent. This is problematic for providers when AI systems are making decisions, even when the physician remains the ultimate decision-maker and the algorithm is just choosing what they see.
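Drift, unlike trust, is straightforward to quantify. The sketch below uses the Population Stability Index (PSI), one common drift metric, to compare a feature’s distribution at training time against its current distribution. The data are synthetic and the thresholds are conventional rules of thumb, not recommendations from any specific vendor or deployment.

```python
# A minimal sketch of drift monitoring with the Population Stability Index.
# Synthetic data; thresholds are conventional rules of thumb.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare two distributions; larger values indicate more drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero in sparsely populated bins
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(7)
training_ages = rng.normal(55, 12, 10_000)  # patient ages at training time
current_ages = rng.normal(62, 12, 10_000)   # the served population has shifted older

score = psi(training_ages, current_ages)
# Common rule of thumb: < 0.1 stable, 0.1-0.25 watch closely, > 0.25 retrain
print(f"PSI = {score:.3f}")
```

A check like this, run on every model input on a schedule, is exactly the kind of mechanism a governance policy can require before a model is allowed to keep running in production.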
So how should health systems approach the “last mile” of AI adoption?
Start with the easy(ier) problem. From both a trust and a problem-space perspective, operational/administrative AI is a lighter lift. Deploying AI here carries lower risk, delivers a faster return on investment, is easier to trust, and requires smaller training sets. This gives the enterprise a “practice run,” a chance to see AI in action (building trust) and time to implement the kind of governance built on iterative improvement.
Be clear about the “how” and “who” of getting there. What does the successful deployment of AI in a healthcare ecosystem (and the enterprise) look like? A group out of Stanford put it succinctly: “The prospect of AI in healthcare has been described as a Rorschach blot on which we cast our technological aspirations.” They recommend bringing AI into practical use by focusing on usability, utility, and actionability, and by defining the person(s) responsible for each.
Go big (data). Make the investment in the infrastructure needed to collect, store, and analyze massive amounts of data. Identify multi-department use cases, where AI can be deployed across the organization to streamline operations and improve patient outcomes. Starting with hypothesis generation helps to identify the specific problems that AI can help solve, enabling the organization to prioritize its investment in AI technology.
Remember: It is about people. On the caregiver side, that means working alongside clinical stakeholders from the beginning to select use cases that will have the greatest impact on patient outcomes. It also means ensuring staff are properly trained on AI tools, understand their benefits, and can provide feedback in the iterative review process. On the patient side, it is essential to clearly communicate that the goal is to augment the patient-physician interaction, not to turf them over to Dr. Roboto.
As with all technologies, AI holds both promise and peril. And, like all implementations, it carries the possibility of both agony and ecstasy. To ensure success and overcome the last-mile problems that have already relegated so many algorithms to the dust heap, it is essential to design and plan implementation for the humans who will be giving and receiving care. Not just for our future robot overlords.