Your delivery robot is fine. Your rollout may not be.

By: Dr. Craig Joseph

In 2022, two University of Tennessee students were charged with felony vandalism for picking up a Starship food delivery robot on campus and slamming it to the ground. The robot had done nothing more provocative than carry somebody’s burrito from point A to point B. UC Berkeley’s Kiwibot fleet has logged roughly 1,600 acts of vandalism across its first 80,000 deliveries. Robots in Los Angeles have been graffitied, thrown, and, in at least one documented case, defaced with feces. Temple University researchers now study “why are people attacking delivery robots” as a real research question, with coded categories of assault and a published framework.

The robot is not the threat. The robot is an extremely polite, slightly anxious piece of plastic and metal that refuses to jaywalk. What gets it kicked is not what it is. What gets it kicked is what it represents: a future that nobody walking across that quad was asked to vote on, rolled out with a cheerful corporate logo and an algorithmic confidence that we will all just accept it over time as the new normal.

Healthcare is running its own version of this experiment, at enormous scale, on a workforce that has already lived through CPOE, medication barcoding, and the great EHR rollout. The clinicians are (probably? maybe?) not going to kick a robot. But if you are a health system executive preparing a multi-year AI strategy, you should be very interested in why someone would.

The prediction that aged poorly

In a 2012 TechCrunch piece, Vinod Khosla argued that we do not really need doctors; we need algorithms. He later told a Toronto conference that radiologists were toast. Geoffrey Hinton doubled down in 2016, calling it “completely obvious” that deep learning would outperform radiologists within five years and advising people to stop training them. Two famous men. One bold prediction. Endless repetition from the peanut gallery.

Today’s scorecard is not subtle. Mayo Clinic’s radiology department has grown by 55% since Hinton’s warning. The American College of Radiology projects 26% workforce growth over the next 30 years. Roughly half of 2023 radiology recruiting searches went unfilled, and the average position takes 130 days to close. Median radiologist compensation hit $585,000 in 2026: not too shabby. Hinton himself has walked the prediction back, conceding he was wrong about the timing and probably about the direction.

The useful observation is that Khosla and Hinton were wrong about the specific outcome (radiology disappearing) and right about the underlying direction (AI would significantly reshape clinical practice). But when two credentialed voices made a loud, wrong prediction, they did not reassure the profession; they inoculated it against the correct worry. The radiologists thankfully did not vanish. But every clinician in every specialty absorbed the subtext anyway: you are replaceable; the only question is when.

What clinicians actually fear (and it isn’t what you think)

Here is where most executive AI strategies go wrong. The C-suite assumes frontline staff are afraid of what the keynote speakers were promising a decade ago: wholesale replacement. We are not. Ask a practicing clinician what they fear about AI, and it does not sound like the TechCrunch version.

Almost no one in clinical practice is upset about ambient AI scribes drafting progress notes and queuing orders for physician signature. That is the happy version of AI, the one that trades a documentation burden nobody enjoys for a tool that lets the clinician look up from the screen. Research backs this up: physicians who have actually used AI tools professionally report dramatically higher enthusiasm than those who have only heard about them.

What clinicians fear is the overlord version, the one I argued against last summer in “AI isn’t coming for your job; it’s coming for your workflow.” We fear the AI that does not draft and step aside, but instead second-guesses and stays put. We are scared of AI that replaces the useless hour spent writing a progress note with an equally useless hour chasing sepsis alerts that are mostly noise.

Layer on the liability concern, which is real and almost always underestimated by the executive team. A 2025 review in Advances in Clinical and Experimental Medicine argued that under current regulatory frameworks, physicians may end up fully liable for AI outputs even when their “oversight” is limited to clicking through summary screens they have no realistic ability to interrogate. If an ambient AI summarizes a chart and misses the buried note about penicillin allergy, the signature on the order is the physician’s. The plaintiff’s lawyer knows exactly whose name to put on the complaint.

The fear is not about being replaced. It is about being overruled or being held responsible for something you do not actually control.

The uncomfortable truth

If AI were only going to do the parts of clinical work nobody enjoys, there would be no reaction. Nobody feels threatened by the tool that handles prior authorizations. The anxiety exists because AI has started doing the parts of the work clinicians thought were uniquely theirs, and doing them well enough to be embarrassing.

The 2023 study by John Ayers and colleagues in JAMA Internal Medicine is the one to sit with. The team pulled physician responses to patient questions from the Reddit forum r/AskDocs, ran the same questions through ChatGPT, and had blinded reviewers rate both. Evaluators preferred the chatbot response 78.6% of the time. They rated chatbot responses as empathetic or very empathetic 9.8 times more often than physician responses.

That finding is not about a chatbot winning a typing contest. It is about a machine – on the metric clinicians most associate with being human – outperforming the doctors on the other end of the chat. The delivery robot on the quad carries somebody’s burrito. The thing in the Ayers paper was reading a worried parent’s message about their child and writing back more kindly than a physician did. (I acknowledge that if physicians had unlimited time, energy, and focus just like an AI, they might win this empathy contest; don’t @ me!)

Clinicians read these papers. They draw conclusions. Then they sit through their hospital’s AI transformation kickoff, where the strategy deck never once acknowledges what any of this actually feels like from the other side of the table. The gap between “AI is an exciting opportunity” and “I just got outperformed at empathy” is precisely where resistance is born.

What leaders can and must do

Treat the reaction as signal, not noise. When your ED nurses, your radiology group, or your hospitalists push back on an AI tool, the first question is not “How do we educate them past this?” It is “What are they seeing that we are not?” The frontline is generally correct: the alert fires when the patient is already being treated, the summary drops a clinically relevant history line, the tool optimizes for a metric nobody at the bedside is actually measured on. Leaders who skip the listening step and go straight to adoption cheerleading earn every bit of resistance they get.

Stop deploying predictive tools nobody validated for your patients. Failed sepsis models did not become a clinical problem because they were AI; they became a problem because they were deployed at scale without rigorous external validation at most of the sites using them. Before the next predictive algorithm goes live in your ICU, your governance committee should be able to answer three questions: What is the performance at this hospital, not the vendor’s reference site? What is the false-positive burden on the nurse at 3 AM? What is the off-ramp if the tool turns out to be worse than doing nothing?
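To make the first two questions concrete, here is a minimal sketch of the arithmetic, assuming you can pull a retrospective export of local encounters that records the vendor tool’s alert flag and timestamp alongside a chart-reviewed sepsis label. The file name, column names, and the 90-night window are hypothetical placeholders, not any vendor’s actual schema.

```python
import pandas as pd

# Hypothetical retrospective export: one row per monitored encounter, with the
# vendor tool's alert flag and timestamp plus a chart-reviewed sepsis label.
df = pd.read_csv("local_sepsis_validation.csv", parse_dates=["alert_time"])

alerted = df["model_alert"].eq(1)           # the tool fired
septic = df["chart_review_sepsis"].eq(1)    # locally adjudicated outcome

tp = (alerted & septic).sum()
fp = (alerted & ~septic).sum()
fn = (~alerted & septic).sum()

# Question 1: performance at this hospital, not the vendor's reference site.
print(f"Local PPV:         {tp / (tp + fp):.0%}")
print(f"Local sensitivity: {tp / (tp + fn):.0%}")

# Question 2: false-positive burden on the overnight nurse: false alerts
# firing between midnight and 7 AM, averaged over the retrospective window.
overnight_fp = (alerted & ~septic & df["alert_time"].dt.hour.lt(7)).sum()
study_nights = 90  # length of the lookback period, in nights
print(f"False overnight alerts per night: {overnight_fp / study_nights:.1f}")
```

None of this is sophisticated data science. The point is that these numbers should exist, for your patients, before go-live.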

Use implementation science, not vendor slide decks. This will sound familiar to anyone who read my Yondr post last July: the phone pouch worked not because of the pouch, but because of everything the schools did around the pouch. AI is the same story at higher stakes. A 2024 piece in Implementation Science by Sandeep Reddy lays out what responsible generative AI adoption looks like: the NASSS framework, the Technology Acceptance Model, structured deployment grounded in empirical implementation practice. This is what competent change management looks like, and it is almost nothing like the way most AI pilots are currently governed.

Identify the real ask, and pay for it. The honest conversation with clinicians is not “AI will not replace you.” It is “Your role will be reconfigured, and here is how we will invest so the reconfiguration is something you help design rather than something that happens to you.” Reconfiguration costs money. It costs protected time, real retraining, and career pathways for the AI-fluent physician and the informatics-capable nurse. It costs legal clarity about where liability lies when a tool you were told to use produces a bad output. Treat AI fluency as a clinical skill worth developing, not a compliance module.

Don’t get fooled again

The delivery robot is fine. It is cheerful; it is well-lit. It carries burritos to hungry college students. The thing that gets it kicked over is not the robot. It is the quiet conviction, in the people walking past it, that nobody asked them whether they wanted their quad to feel like this.

Healthcare is about to run the same experiment, at enormous scale, on a workforce that has already been through the wringer, and has exactly zero appetite for being surprised again. The robot carries the burrito. Whether anyone kicks it over is up to leadership.
