The Trust Gap

The Federal Government Wants More AI in Healthcare, but Not the Liability

Nick Reese and Jared Worley|March 1, 2026

A strange thing is happening in American healthcare. The federal government is pushing AI adoption harder than any lobbyist ever could, while the regulations and frameworks governing liability, privacy, and accountability have not moved an inch.

President Trump made the administration's position clear in July 2025: "We have to grow that baby and let that baby thrive. We can't stop it with foolish and stupid rules." He followed that with a call for "one Federal Standard instead of a patchwork of 50 State Regulatory Regimes." In December 2025, an executive order directed agencies like CMS to withhold funding from states enacting "burdensome" AI regulations. The message from Washington is unambiguous. Move fast, adopt AI, or get out of the way.

Healthcare systems are listening. HHS published an RFI titled "Accelerating the Adoption and Use of Artificial Intelligence as Part of Clinical Care" in the Federal Register in December 2025. AI tool adoption across HHS jumped 64%, according to STAT News reporting in February 2026. The Rural Health Transformation Program is committing $50 billion over five years, with "AI nurses" as a central component. CMS launched the WISeR Model for 2026 through 2031, putting AI prior authorization into Medicare across six states, with participating companies paid based on the denial of care. The ACCESS Model, starting July 2026, ties AI diagnostics directly to reimbursement outcomes.

We are not seeing limited or tentative experimentation. This is a full-throttle federal push to embed AI into clinical workflows, payment systems, and patient care decisions.

Here's what should keep every hospital administrator and practice leader up at night. HIPAA has not changed. Malpractice liability frameworks are the same. State medical boards still hold individual providers accountable for clinical decisions, regardless of what tools informed them. Texas passed TRAIGA in January 2026, requiring written disclosure to patients when AI is used in diagnosis and treatment, creating new compliance obligations that run directly counter to the federal push for frictionless adoption.

Providers are caught in the middle. The federal government is saying adopt, and the liability frameworks are saying you remain responsible. According to the 2026 CISO AI Risk Report, 92% of CISOs lack full visibility into AI identities operating within their systems, and only 16% of organizations effectively govern AI access to core business systems. That means the vast majority of healthcare organizations deploying AI cannot account for what those tools are accessing, what decisions they're influencing, or what data they're touching.

The CMS models make the tension concrete. WISeR pays private companies based on denied care. That's an incentive structure that will face legal challenge when denied prior authorization results in patient harm. ACCESS ties reimbursement to AI-driven diagnostic outcomes. When the outcome is wrong, who carries the liability? The provider who relied on the tool? The vendor who built it? The federal program that funded its adoption?

None of these questions has a settled answer. The regulatory environment is moving in multiple directions at once: federal preemption pushing one way, state disclosure laws pushing another, and HIPAA and the existing liability frameworks standing practically unchanged.

This is where adaptive governance becomes essential: not a compliance exercise, but a survival mechanism. The organizations navigating this period of rapid change are implementing governance systems that can flex with the regulatory environment while maintaining hard boundaries between AI and the data, people, and systems it touches.

The worst possible outcome is not that healthcare AI fails. It's that it works just well enough that organizations lean on it without proper governance, until patients are harmed and the legal system asks who was responsible. The answer cannot be "the algorithm." It has never been "the algorithm." And no amount of federal enthusiasm for AI changes the fact that when something goes wrong with a patient, a provider was the one in the room holding the responsibility.

The opportunity for breakthroughs in healthcare service and outcomes is just as real as the liability. Modern governance can close the trust gap, delivering the AI innovation and efficiency that patients and governments alike are pushing for without sacrificing accountability.

Sources

  • White House AI policy statements (White House, July 2025)
  • Executive Order on state AI regulation preemption (White House, December 2025)
  • HHS RFI on AI in clinical care (Federal Register, December 2025)
  • HHS AI adoption statistics (STAT News, February 2026)
  • CMS WISeR and ACCESS Model documentation (CMS.gov, 2025-2026)
  • Texas TRAIGA (Texas Legislature, January 2026)
  • 2026 CISO AI Risk Report (Cybersecurity Insiders, January 2026)