HIMSS 2026: 6 priorities healthcare IT leaders should focus on next

By Lindsay Hudson

HIMSS 2026 delivered no shortage of ideas, insights, demos, and bold predictions. Over a few days, we heard candid challenges, hard-won lessons, and real success stories from health systems of every size, alongside leaders across government, academia, nonprofits, and the private sector working at the forefront of AI, data, and cybersecurity. 

For healthcare IT leaders, the challenge after HIMSS is deciding where to focus next amid tightening budgets, staffing constraints, constant cyber risk, and rising expectations from clinicians, patients, and regulators. 

One reality came through clearly: cost, risk, efficiency, and revenue are inseparable. Technology decisions shape clinician capacity, patient experience, operational resilience, and long-term financial sustainability. 

Based on what we saw and heard at HIMSS, progress will come from choosing a clear, intentional focus anchored in real problems, real workflows, and real outcomes, not from chasing every emerging capability.  

Below are six priorities grounded in what stood out at HIMSS and what we see every day in our work with hospitals and health systems. Each includes practical next steps you can take now. Start with the two or three that feel most relevant to your organization. 

Priority 1: Decide what you need to change, then decide what AI should be trusted to do 

AI should not be the starting point. Across sessions, leaders consistently returned to the same first step: decide why change is needed, how work must change, and where technology should operate before deciding whether AI is the right tool. 

AI initiatives that begin with technology selection rather than problem definition are far more likely to stall, not because the tools fail, but because the organization has not clarified the desired outcome, the workflow impact, or the boundaries required for safe use. 

Several speakers cautioned against AI for AI’s sake. The strongest examples started with a clear articulation of the problem to solve, the experience to improve, or the operational friction to remove. Only then did teams decide whether AI was needed and, if so, where it should be applied and what level of autonomy it should have. 

A practical pattern also surfaced for getting started: Build AI readiness by encouraging broad use for mundane, low-risk tasks. Examples included meeting summaries, internal research, administrative drafting, or other nonclinical work. By lowering the stakes, teams learn where AI is useful, where it falls short, and which guardrails are needed before AI moves closer to clinical or operational decision-making. 

Proof point: At HIMSS, a leader from Brown University Health described using an internally deployed AI agent within the hospital’s existing productivity environment to summarize patient satisfaction survey feedback, enabling faster insights and reducing manual effort. 

Decision to make now: What do you want to change, how must work change to support it, and where will AI be trusted to operate, if at all? 

What to do next 

  • Next 30 days: Require a clear problem statement for any proposed AI initiative that describes the desired outcome, the workflow change, and the patient or clinician impact. 
  • Next 90 days: Identify a small set of low-risk, nonclinical uses to build comfort and shared understanding, while piloting one clinically or operationally relevant use case with named ownership and a feedback loop. 
  • Next six months: Solidify trust boundaries, governance, and operational ownership so AI can move beyond pilots and scale responsibly. 

Decision check: Can you clearly define the desired outcome, the workflow changes required, and who will be accountable for results once this is live? 
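One way to make the 30-day problem-statement requirement concrete is a lightweight intake check that refuses to advance a proposal until the outcome, workflow change, impact, and accountable owner are all stated. The sketch below is illustrative; the class and field names are assumptions, not a standard intake form:

```python
from dataclasses import dataclass

# Illustrative intake record for a proposed AI initiative. A proposal is
# "ready for review" only when every required field is actually filled in.
@dataclass
class AIInitiativeIntake:
    title: str
    desired_outcome: str = ""    # the measurable change being sought
    workflow_change: str = ""    # how day-to-day work must change
    impact_statement: str = ""   # patient or clinician impact
    accountable_owner: str = ""  # named owner once the initiative is live

    def missing_fields(self) -> list[str]:
        required = {
            "desired_outcome": self.desired_outcome,
            "workflow_change": self.workflow_change,
            "impact_statement": self.impact_statement,
            "accountable_owner": self.accountable_owner,
        }
        return [name for name, value in required.items() if not value.strip()]

    def ready_for_review(self) -> bool:
        return not self.missing_fields()
```

A proposal submitted with only a title and a tool name fails the check, which is the point: the technology choice cannot stand in for the problem definition.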

Priority 2: Fix workflow friction before adding automation 

Even with strong technology foundations, clinicians and staff are still navigating unnecessary steps, handoffs, and interruptions that slow care and undermine confidence in new tools. 

Several leaders emphasized that adoption fails when tools do not fit how work actually happens. When technology disrupts flow, adds cognitive burden, or requires extra effort to use, it gets bypassed. Accuracy, usability, and fit inside existing workflows emerged as core requirements for health IT success.  

Decision to make now: Which workflow creates the most visible friction for clinicians or staff today? 

What to do next 

  • Next 30 days: Identify one or two workflows where friction is clearly visible to clinicians or staff. 
  • Next 90 days: Simplify or remove steps before introducing new automation. 
  • Next six months: Standardize how you measure success, using day-to-day use and workflow impact, not feature capability. 

Decision check: If a tool does not reduce friction in real workflows, do you have a clear trigger to pause, adjust, or stop it? 

Priority 3: Replace app sprawl with platform-first decisions, intentionally 

Over and over, leaders said the same thing. Health systems do not need more tools. They need to get more out of what they already have. 

A common sequence surfaced: 

  • Use existing health IT platforms first. 
  • Look to the market if gaps remain. 
  • Build internally only after those options are exhausted, and only when ownership and long-term sustainability are clear. 

This approach reflects budget pressure, integration fatigue, and the reality of managing increasingly complex environments. 

Decision to make now: Where are you adding tools when you should be optimizing the platforms you already have? 

What to do next 

  • Next 30 days: Inventory underused capabilities within existing platforms. 
  • Next 90 days: Identify overlaps and redundancies that increase support and security burden. 
  • Next six months: Set a clear decision pathway for when to extend platforms, buy, or build. 

Decision check: If you add one more tool, will it reduce operational burden, or will it create new integration and support debt? 

Priority 4: Treat data strategy as AI strategy, and make it patient-centered 

A consistent message at HIMSS was that AI outcomes are constrained by the data behind them. Without a unified, governed data foundation, AI risks introducing more tools, more alerts, and more decisions without the context required to act confidently or safely. 

HIMSS also reinforced a broader shift in how health systems think about data. Data is no longer primarily for compliance, reporting, or retrospective analysis. Its purpose is to serve patients, clinicians, and operations teams in real time. 

As digitization accelerates, fragmented visibility becomes increasingly disruptive. Patients feel the impact when they have to repeat the same information or undergo duplicate tests because data is unavailable when and where it is needed. Clinicians experience this as gaps, delays, and added cognitive burden. Interoperability is an IT challenge as well as a patient experience and outcomes issue. 

The ideal state leaders outlined is a shared, real-time view of the patient and the operation, with fewer handoffs, fewer repeat intakes, and fewer moments where teams have to start over. To support that future state, leaders emphasized that data must enable predictive, not reactive, AI; be complete and usable across settings; and be actionable for patients, clinicians, and operations teams. 

Manual approaches to data collection, integration, and cleansing cannot keep pace with today’s demands. Automation is required, and AI can help make data timely and usable, but only when the underlying foundation is unified and governed. A recurring architectural theme at HIMSS was end-to-end data lifecycle management that connects enterprise and operational data, applies consistent governance and context, and delivers insights directly into workflows. That foundation gives AI the real-time understanding needed to improve care delivery at scale. Without it, AI lacks context and risks adding complexity rather than value. 
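As a sketch of that "consistent governance and context" idea, the snippet below wraps each record with provenance and runs a quality gate before it is delivered into a workflow, routing failures to remediation rather than dropping them silently. All function and field names are hypothetical, chosen to illustrate the pattern:

```python
from datetime import datetime, timezone

# Hypothetical governance step: attach provenance metadata and evaluate
# quality checks before a record reaches a frontline workflow.
def govern(record: dict, source_system: str, quality_checks: list) -> dict:
    failures = [check.__name__ for check in quality_checks if not check(record)]
    return {
        "payload": record,
        "provenance": {
            "source": source_system,
            "ingested_at": datetime.now(timezone.utc).isoformat(),
        },
        "quality": "passed" if not failures else "needs_remediation",
        "failed_checks": failures,
    }

# Example quality checks; real pipelines would carry many more.
def has_patient_id(record: dict) -> bool:
    return bool(record.get("patient_id"))

def has_timestamp(record: dict) -> bool:
    return bool(record.get("observed_at"))
```

The design choice worth noting is that governance travels with the record: downstream consumers can see where data came from, when it arrived, and whether it passed its checks, instead of relying on brittle, one-off cleanup at each integration point.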

Decision to make now: Which patient, clinician, or operational outcomes are being blocked today by fragmented data and interoperability gaps? 

What to do next 

  • Next 30 days: Map priority AI initiatives to the data they require and identify where fragmentation or governance gaps exist. 
  • Next 90 days: Prioritize interoperability efforts that directly reduce patient friction and workflow disruption. 
  • Next six months: Strengthen governed data pipelines and automate integration and quality work so teams are not maintaining brittle, one-off connections. 

Decision check: Can you deliver trusted, timely data into frontline workflows without adding burden for clinicians or patients? 

Priority 5: Prepare now for an AI-native EHR, or risk accelerating today’s problems 

The importance of a strong data foundation becomes even clearer as the EHR is reengineered for AI-native workflows. A strong signal at HIMSS was that AI is no longer an add-on to the EHR. It is increasingly embedded directly into documentation, reimbursement, clinical decision support, and operational workflows. 

In an AI-native EHR environment, data fragmentation and weak governance quickly become systemic risk. AI-embedded workflows amplify whatever foundation they sit on. When data is unified, governed, and available in real time, these workflows can reduce burden and improve continuity. When it is not, they accelerate inefficiency, inconsistency, and loss of trust across the organization. 

At HIMSS, the major EHR vendors reinforced a shared direction: moving beyond layering AI onto legacy platforms toward making the EHR itself AI‑native. Oracle Health emphasized a ground‑up rebuild designed for AI‑embedded workflows, with agents operating directly inside clinical and administrative processes to reduce documentation burden and automate revenue cycle tasks. Epic previewed a similar evolution, positioning the EHR as a governed platform where AI agents are natively integrated across care, operations, and patient engagement and can be built and orchestrated by health systems. MEDITECH echoed this approach by expanding native AI within Expanse, embedding ambient intelligence, conversational access to the record, and denial management directly into core workflows rather than treating AI as a separate layer. 

Preparing for AI-native EHRs is a readiness decision that depends on data quality, governance, and operating discipline. That includes workflow consistency, configuration rigor, clear ownership, and sufficient training and change support for clinicians and staff. Without those foundations, AI‑embedded workflows surface weaknesses faster and compound existing challenges, particularly in organizations carrying legacy complexity from mergers and acquisitions. 

As AI becomes embedded in core systems like the EHR, attention is also shifting to where it delivers the next stage of value. Beyond administrative efficiency, leaders are beginning to apply AI to earlier detection, prediction, and intervention. 

Proof point: At HIMSS, a presenter from a healthcare data exploration platform discussed collaborating with Intermountain Health to use machine learning and self-service analytics to predict unplanned dialysis “crash starts” in patients with advanced chronic kidney disease with over 92% accuracy. By combining synthetic and real-world data, the organization enabled earlier, targeted interventions for patients at elevated risk, improving outcomes while making more efficient use of limited clinical and care coordination resources. 

Decision to make now: If AI were deeply embedded across your EHR workflows tomorrow, would it reduce friction and burden, or accelerate the problems you are already managing? 

What to do next 

  • Next 30 days: Identify where AI-embedded EHR workflows would create immediate value and where current gaps would introduce risk. 
  • Next 90 days: Audit configuration consistency, workflow variation, and data quality that could undermine AI effectiveness. 
  • Next six to 12 months: Invest in governance, optimization, and clinician support required to sustain AI-native workflows safely at scale. 

Decision check: Have you clearly defined who owns configuration, optimization, and ongoing safety monitoring once AI-native workflows are live? 

Priority 6: Build governance, cybersecurity resilience, and enablement as standing infrastructure 

Governance is essential. The challenge is operationalizing it at scale without turning it into a bottleneck. 

At HIMSS, leaders framed governance as more than compliance. It should be approached as a patient safety function. Accountability must be named, not implied, and the governance lifecycle must be continuous. Without this foundation, organizations drift into redundant solutions, overlapping tools, unclear ownership, support mismatch, performance drift, ROI ambiguity, and trust failure. 

Cybersecurity emerged as the parallel pressure point. Leaders described relentless threats and ransomware risk, compounded by insufficient staffing and budgets. Visibility is foundational: organizations cannot secure what they cannot see. Many are struggling to gain clear visibility into data traffic and integrations, yet the ability to quickly understand network connections and data flows is increasingly critical for breach detection and response. 

Leaders also acknowledged that AI introduces compliance and security gaps that many organizations are not prepared to manage today. Security leaders want to say yes to AI, but only when adoption is responsible, governed, and secure. That requires clear guardrails, human-in-the-loop design, continuous monitoring for drift, and proactive management of technical debt. 

In a HIMSS session, Hackensack Meridian Health laid out its governance and enablement program, which is designed to support enterprise AI adoption as it moves closer to core clinical and operational workflows. Its approach combines layered oversight across executive, clinical, and operational leadership, a structured lifecycle that spans intake through monitoring and reporting, and an adoption program that includes an AI catalog of approved tools and organization-wide AI education to build readiness before deployment. 

The need for clarity becomes even more pronounced as AI shifts from assisting to acting. Several sessions emphasized that agentic AI cannot scale without explicit ownership, scoped access, and auditable controls. 

Proof point: At HIMSS, a healthcare cybersecurity leader stressed that autonomous AI systems require new safeguards for protected health information. They recommended a simple governance triad for every AI agent: a named owner (IT administrator), a sponsor (clinical or business leader), and a manager responsible for the agent’s specific EHR access and permissions. Unclaimed agents should be quarantined in an AI registry, and all agents should have short-lived managed identity tokens (instead of passwords) and tightly scoped access, retrieving only the data required for their task. 

To reduce friction when governance feels slow, some leaders recommended running parallel paths where appropriate, keeping lower-risk requests moving while higher-risk initiatives undergo deeper review. For resource-constrained organizations, a pragmatic starting point is to leverage existing groups already engaged in strategic conversations rather than creating entirely new committees. 

Many leaders also acknowledged that even organizations further along still struggle with capturing metrics, benchmarking, and reporting consistently. That is not a reason to delay governance. It is a reason to standardize measurement and build reporting into the operating model, including reporting that security and IT leaders can bring to executives and boards. 

Decision to make now: What governance, security, and operating ownership must be in place before you allow AI, especially agents, to touch clinical and operational workflows at scale? 

What to do next 

  • Next 30 days: Define a simple, repeatable governance intake and review process, including named ownership after go-live. 
  • Next 90 days: Establish guardrails for scope, access, identity, monitoring, and escalation, especially for agentic AI. 
  • Next six months: Improve visibility into data flows and integrations and build reporting that leadership and boards can act on. 

Decision check: Will your governance scale beyond committees and policy documents, and can security teams clearly see data flows and integrations before risk outpaces response? 

Your next 90 days: a practical starting sequence 

  1. Pick one high-friction workflow and define success in operational terms. 
  2. Confirm data readiness for the top two AI priorities, then close the biggest gaps. 
  3. Establish governance and ownership before scaling any agentic or automated work. 
  4. Reduce tool sprawl by committing to a platform-first decision path. 
  5. Improve visibility and resilience in security operations, especially across integrations. 

Bottom line: Build the foundation, then scale 

HIMSS 2026 reinforced that AI progress is not about chasing more tools. It is about building the foundations that make AI useful at scale. That means clear problem definitions, workflow fit, a governed data foundation, and operating models that can sustain AI in production. 

The organizations that move fastest will be the ones that pair ambition with discipline. They will invest in data, governance, and cybersecurity as core infrastructure, define ownership and guardrails up front, and measure impact in real workflows. That is how AI reduces friction and risk instead of adding to both. 

Nordic can guide you through complexity and translate these priorities into a realistic plan. Schedule a 1:1 conversation with our healthcare-only consultants to align your goals, constraints, and next steps. 
