We optimized the medical record for everything… except medicine

By Dr. Craig Joseph

Bryan Vartabedian recently wrote a sharp piece naming something clinicians have felt for years but rarely articulate clearly: much of what lives in the medical record looks professional, sounds plausible, and contributes very little to patient care. He called it medical slop. The term fits.

Slop isn’t new. What’s new is the speed, scale, and confidence with which it now shows up, thanks in part to generative AI.

The uncomfortable truth is this: AI didn’t invent medical slop. It industrialized it. Much like the Industrial Revolution didn’t invent pollution but made it impossible to ignore, AI has exposed how much low-value documentation healthcare already tolerates. The problem isn’t that machines can now write clinical text. It’s that we never really decided what the text was supposed to be for in the first place.

What is “medical slop”?

Medical slop isn’t wrong information, and it’s not fraud. It’s also not usually laziness. Slop is documentation that appears legitimate but has little informational density. It’s verbose without being meaningful. Technically complete, but clinically thin.

Think of the multi-page progress note that faithfully reproduces yesterday’s assessment with a few updates swapped in. Or the pristine H&P filled with templated normal findings that obscure the one abnormal detail that actually matters. Slop is what happens when plausibility substitutes for purpose.

And importantly, slop is often produced by good clinicians acting rationally within broken systems (hello, moral injury). When documentation is optimized for billing, defensibility, and checkbox completeness rather than reasoning and decision-making, slop becomes the path of least resistance.

I know this firsthand. Because of OpenNotes and patient portals, I review every note written about me, and as a physician, I read them with clinical eyes. What I find, from doctors I genuinely respect, is often slop: not templated filler exactly, but copy-forwarded text from previous visits with a few lab values updated, repeated encounter after encounter. I know what my own medical problems are and what the plan is. But if I came to that record as a stranger, say the covering hospitalist at 2 AM or a consulting specialist seeing me for the first time, I'm not sure I could reconstruct either. These aren't bad doctors, and that's exactly the point. The system doesn't reward clarity, even from people who are capable of it.

Why slop matters more than we admit

The stakes here are higher than aesthetics or physician annoyance.

From a clinical standpoint, slop erodes signal. When everything is documented, nothing stands out. Clinicians are forced to hunt for meaning instead of absorbing it. Important nuances like uncertainty, tradeoffs, and rationale are buried under layers of boilerplate. The medical record becomes harder to use precisely because it contains so much “information.”

Operationally, slop incurs real costs. Longer notes create downstream drag: more time for coders, reviewers, quality abstractors, and legal teams to sift through text that adds little value. The irony is that verbosity is often justified as a revenue or risk-mitigation strategy, yet in practice it increases administrative overhead without improving either.

Then there’s the cognitive tax. Clinicians already operate under an immense mental load. Asking them to read and interpret bloated documentation adds friction to already strained workflows. Chart fatigue is real, and slop is one of its primary drivers. In software terms, slop is documentation debt. Easy to accrue, brutally expensive to unwind.

Three lenses for understanding the slop problem

Technology (pronounced EHR) is the first problem. It can be optimized for the wrong things: completeness over clarity, reuse over reasoning, defensibility over decisiveness. Templates, copy-forward functionality, and AI scribes all amplify whatever incentives are already baked into the workflow. If the system rewards longer notes, you get longer notes. If it rewards checked boxes, you get checked boxes. AI doesn't change that calculus; it accelerates it. Slop isn't a clinician failure; it's an information architecture failure.

Governance is the second problem, and AI makes it more urgent. When AI-generated text is signed by credentialed clinicians, it inherits legitimacy whether it deserves it or not. The danger isn’t machine-generated prose; it’s that most organizations haven’t defined which parts of documentation require genuine clinical judgment versus which can be safely automated. The assessment and plan is where a physician’s clinical identity is on the line. It requires synthesis, not transcription. It’s the one section I don’t want AI writing.

Culture is the third problem, and the hardest to fix. Medical training often equates thoroughness with volume, and reimbursement models reward density over discernment. That was true before the EHR, and the EHR made it easier. AI makes it effortless. The uncomfortable reality is that you can’t culture-change your way out of a system that still pays for slop. Incentives determine behavior. Until documentation quality is treated as a measurable organizational priority, the culture won’t move.

From slop to signal: what better documentation could look like

If slop is the accumulation of low-value text, then the antidote isn’t better prose; it’s better purpose. The medical record should function less like a transcript and more like a decision log. What did we think was happening? What options did we consider? Why did we choose this path over another? This doesn’t require more words, just different ones.

Ironically, AI could help here, but only if we invert how we use it. Instead of generating more text, the most valuable AI tools may be those that compress, summarize, and surface signal. Tools that delete redundancies. Tools that highlight uncertainty. Tools that make clinical reasoning easier to see. The future of documentation isn’t expansion; it’s distillation.

What healthcare leaders can actually do about it

This is leadership work, not an IT project. Executives should treat documentation quality as a patient safety issue and an operational efficiency issue, not a matter of physician preference. That reframe matters because it determines which levers you’re willing to pull.

Two of those levers already exist in most health systems; they just haven’t been pointed at this problem.

The first is peer review. Most organizations have a mature peer-review infrastructure for evaluating clinical decision-making. Note quality belongs in that framework. When documentation obscures critical information, name it explicitly as a documentation quality failure, not a communication issue or a workflow gap. That language changes the organizational conversation, and it changes it quickly. What gets named gets measured. What gets measured gets managed.

The second is your existing patient safety event taxonomy. When a care error or near-miss is traceable to a note that buried the relevant finding under boilerplate, or forwarded an outdated assessment into a new clinical context, that’s a reportable documentation failure. Treating it as such creates institutional memory, surfaces patterns, and signals to clinical staff that leadership takes this seriously enough to investigate it.

Both approaches share a common logic: they don’t require new infrastructure. They require the will to apply existing accountability mechanisms to a problem we’ve historically tolerated as background noise.

Be equally explicit about AI. "Clinicians must review AI output" is not a governance policy; it's a disclaimer. A real policy names which sections of the record require genuine human synthesis and why. The assessment and plan is a reasonable place to draw that line and defend it publicly.

As long as volume is rewarded and documentation failures are invisible, slop will persist regardless of how sophisticated the tools become.

Slop is a choice, even if it doesn’t feel like one

AI didn’t create medical slop. It exposed how comfortable we’d become with it. If healthcare leaders don’t actively design for signal, we’ll drown in plausible nonsense: professionally written, confidently signed, and clinically hollow.

The organizations that get this right won’t be the ones with the longest notes or the most sophisticated AI. They’ll be the ones that protect the parts of documentation that are irreducibly human.

If you stripped your medical record down to only the sentences that required a physician’s judgment to write, how much would be left?
