What's going on with Payer A.I., denials and audits?
- Eric Fontana
In the last couple of months, we’ve been pulling up chairs with dozens of revenue cycle leaders to understand what’s keeping them awake at night. As Union’s chief research officer, Yulan Egan, and I explored during our recent board briefing on the revenue cycle (members-only link here), denials and post-payment audits have been a thorn in the side of providers for about as long as anyone can remember. However, the last 18 months have seen some acute exacerbations of a chronic complaint. And merely days after our webinar, an updated snapshot of 2025 provider data dropped, showing the initial denials and write-offs picture continues its downward spiral. As the discussion evolves from “This is getting bad” to “Okay, but seriously, what do we do about it?”—three questions typically emerge.
1. How does payer AI impact the current state of denials and audits?
In a nutshell, both the speed and the sophistication are dialed up with AI-based approaches. I recently chatted with a senior director of revenue cycle who highlighted a great example of how payers are using more advanced analytical methods, such as assessing longitudinal clinical activity, to evaluate whether a particular treatment is medically necessary:
“A.I.-based denials are faster, smarter, and the diversity is greater. What we’re now seeing from payers is an attempt to challenge the clinical validity, or at least delay the process with clinical justification, especially for higher-dollar claims. Take a patient with a kyphoplasty as an example. It’ll dig through past data and say: ‘…we don’t see any evidence of you trying physical therapy.’ Automation 1.0 didn’t have that level of sophistication. So, we did everything right on our end, but the payer doesn’t see a history of physical therapy, so the burden of proof is now on us.”
There may be a variety of reasons why the insurer can’t observe a prior history of physical therapy that have nothing to do with the clinical picture itself: the payer may lack access to medical records, the patient may have changed insurance (limiting access to historical data), or the patient may have paid for physical therapy outside of insurance. In any case, the provider is now forced into a detailed explanatory rigmarole as to why surgery is the appropriate course of treatment.
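To make the mechanics a little more concrete, here’s a minimal, purely hypothetical sketch of the kind of longitudinal check described above: scan a member’s claims history for conservative-therapy codes before letting a kyphoplasty claim through. The CPT codes, lookback window, and data shapes are my own illustrative assumptions, not any payer’s actual logic.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical illustration only -- not any payer's actual rules or code sets.
PHYSICAL_THERAPY_CPTS = {"97110", "97112", "97140"}   # common PT codes (assumed)
KYPHOPLASTY_CPTS = {"22513", "22514"}                 # kyphoplasty codes (assumed)
LOOKBACK = timedelta(days=180)                        # assumed conservative-care window

@dataclass
class ClaimLine:
    member_id: str
    cpt: str
    service_date: date

def flag_for_clinical_review(new_claim: ClaimLine, history: list[ClaimLine]) -> bool:
    """Return True if the claim should be paused for medical-necessity review.

    A kyphoplasty claim with no physical therapy visible in the member's claims
    history within the lookback window gets flagged. Note that everything the
    payer cannot see (records gaps, plan switches, self-pay PT) looks identical
    to "no conservative care tried" here, which is exactly the provider's complaint.
    """
    if new_claim.cpt not in KYPHOPLASTY_CPTS:
        return False
    window_start = new_claim.service_date - LOOKBACK
    had_pt = any(
        line.member_id == new_claim.member_id
        and line.cpt in PHYSICAL_THERAPY_CPTS
        and window_start <= line.service_date < new_claim.service_date
        for line in history
    )
    return not had_pt
```

The point isn’t this particular rule; it’s that a check like this runs against every higher-dollar claim at effectively zero marginal cost, and any gap in the data the payer happens to hold shifts the burden of proof onto the provider.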
(At this point I feel obligated to point out: hey, we get it, payers need to review cases. If you were working for a payer, wouldn’t you also be thinking about the long-term viability of your plan and scrutinizing dollars spent? Wouldn’t you want to spot-check here and there, as a quality control mechanism, to see whether providers can actually make the case? Okay, good, we have balance...)
Let’s be real: insurers didn’t need AI if they simply wanted to slow the payment cycle, but the approach introduces some interesting gamification elements that confer an advantage over and above mere speed. And with 20/20 hindsight, we can see that the payers’ move into more advanced technology was effectively foreshadowed in the mid-2010s. That was when providers got their first taste of payers testing algorithm-driven, automated denials for specific DRGs or procedures (effectively a “pause-prompt” for a deeper review), paired with more rapidly shifting payment criteria, a one-two punch that landed on then-unsuspecting revenue cycle teams. At that time, providers were relatively successful at overturning denials, fighting back with early automation and manually intensive workflows. What has been distinctly different about the more recent payer investment in A.I. is that it began at a time when providers were short on both money and staff because of the demands of the pandemic, meaning payers had the luxury of time and resources to make thoughtful “chess moves” while providers weren’t even sitting on the other side of the board.
Roll the clock forward to 2025 and, thanks to some analytic insight from our friends at Anomaly.ai, we’re getting a look at the approaches AI is either enabling or amplifying: payers deviating from published policy, payment rules changing far more rapidly than before, selective enforcement of policies, and patterns suggestive of targeting. Discussions with boots-on-the-ground providers confirm that these findings match what they’re contending with daily. The upshot: traditional provider-side prevention efforts based on tagging and identifying root causes of denials/audits, or on predicting where a payer may go fishing next, feel increasingly futile, as health insurers have discovered their version of a Moneyball advantage. Not to mention that AI-toting third parties, financially incentivized to discover opportunities, are serving as resources for private insurers, with promises of significant returns.
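On the provider side, spotting those shifts is itself an analytics problem. Here’s a minimal sketch (the column names and the 10-point threshold are assumptions, not Anomaly.ai’s method) of one way to watch for payment rules changing faster than published policy: track denial rates by payer and procedure group month over month and flag sudden jumps.

```python
import pandas as pd

def flag_denial_rate_shifts(claims: pd.DataFrame, jump_threshold: float = 0.10) -> pd.DataFrame:
    """Flag payer/procedure groups whose denial rate jumped month over month.

    Assumed columns: payer, procedure_group, month, denied (0/1). A jump larger
    than `jump_threshold` (10 points by default) versus the prior month is a
    crude proxy for payment rules shifting faster than published policy.
    """
    monthly = (
        claims.groupby(["payer", "procedure_group", "month"])["denied"]
        .mean()
        .rename("denial_rate")
        .reset_index()
        .sort_values("month")
    )
    monthly["prior_rate"] = (
        monthly.groupby(["payer", "procedure_group"])["denial_rate"].shift(1)
    )
    monthly["jump"] = monthly["denial_rate"] - monthly["prior_rate"]
    return monthly[monthly["jump"] > jump_threshold]
```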

Not only are providers feeling like back markers in the race, but it’s also likely payers have only just begun their efforts here. So successful has the technical execution been (public relations aside) that it’s hard to see them dialing back investments without legislation or additional public outcry. For one, shareholders of at least one major national payer, while suing the company over its financial performance, were hardly asking for denials to be reduced. And then there are prominent strategy consulting firms publishing guidance on the seemingly huge cost-reduction opportunity that remains for payers who can weave A.I. even more deeply into the fabric of their operations.
Now the government is getting in on the A.I. act for case reviews. Hot on the heels of CMS’ May announcement of AI-enabled RADV audits for Medicare Advantage plans, the agency followed up a month later with the Wasteful and Inappropriate Service Reduction (WISeR) Model, which kicks off its six-year term on January 1, 2026. The regionally based model (initially in AZ, NJ, OH, OK and TX) openly aims to borrow from the commercial payer playbook: vendors deploying AI/ML methods will triage prior authorization for selected items (initially limited to outpatient skin and tissue substitutes, electrical nerve stimulator implants, and knee arthroscopy for osteoarthritis), while providers may choose between submitting prior authorization requests or going through typical post-service payment review.
The short of it is, payers (Medicare included) are actively testing how A.I. can expand the scope of claims review beyond what people alone could manage in an effort to crack down on spending, and unprepared providers are likely to find themselves ill-equipped to deal with the onslaught.
Open questions:
Will payers’ use of A.I. continue to expand?
What specific tactics emerge as more widespread (and sophisticated) adoption of payer A.I. takes hold?
What precedent could the government’s uptake of A.I. for auditing and denials ultimately set for private payers?
2. What’s the likelihood of near-term regulatory relief?
The short answer might be “look to the states”. As of clicking "Publish" on this blog, I counted 13 states with proposed legislation to limit AI-only healthcare denials, while six have enacted such legislation, leaving 31 comparatively unregulated.
But as recently as a few weeks ago this wasn’t what I’d have told you. If you’re a provider organization, I'm guessing you breathed a big sigh of relief at the news that the Senate voted down the 10-year regulatory pause contained in the One Big Beautiful Bill Act (a.k.a. H.R.1) as previously passed in the House. That’s because the federal provision appeared likely to nullify many of the healthcare consumer protections that states had put in place (as written about by more scholarly legal experts here, here, and here), many of which were specifically designed to curb hefty volumes of automated denials.
But here’s the rub: state-level healthcare consumer protections are generally limited to legislating against payers conducting AI-only denials, and that is a relatively easy bar for health plans to clear, simply by including a “human in the loop” on any AI-identified denial. And while that may be definitionally acceptable from a legal standpoint, most of the payer experts we’ve spoken with suggest such rules don’t place much of a material limit on the enhanced speed and sophistication of AI-driven approaches to denying claims. And while some legal experts believe there could be future federal attempts to override state-level AI regulations, that feels a little less likely today, given the resounding bipartisan support for removing H.R.1’s 10-year A.I. regulatory pause as the bill passed through the Senate (and was eventually signed into law by the President). For the foreseeable future, then, regulatory efforts tied to consumer protections for healthcare, including denials, are likely to be state-dependent. Admittedly, open questions remain about the use of such algorithms (or "devices") that the FDA ostensibly has the power to regulate, but it’s probably reasonable to assume such efforts are not a core focal point for the current administration.
Open questions:
Will the federal government take another run at loosening A.I. regulations, along with the related consumer protection implications, despite the deep unpopularity of the recent H.R.1 provisions?
Will the states continue to evolve legislation to be more specific in terms of how denials can be assessed and the degree of human involvement required?
3. What options do providers have to respond to the recent barrage of denials, underpayment and audits?
So, we’ve established that payers, especially the large national variety, have firmly nudged the healthcare-payment pinball machine with their recent investments in A.I., with no apparent “TILT” risk, creating a huge capability imbalance between themselves and providers. The ball is now squarely in providers’ court to respond. The evergreen situation is that while the tried-and-true denial/audit prevention and management tactics still matter for providers, none of them meaningfully shift payer incentives. Revenue integrity (performed by providers seeking reasonable payment for services delivered) and payment integrity (performed by payers seeking accuracy and appropriateness of the services they reimburse) can sometimes be viewpoints in conflict with one another. And if payers’ margins tighten due to exogenous factors, such as heavier utilization by an aging population, they may pull the payment-integrity lever harder to balance the books (or ensure profitability, depending on your point of view). Thus, denials will continue, and providers will need to look to technology to address them.
All good revenue cycle leaders will consistently tell anyone that an ounce of prevention is worth a pound of cure. Upstream activities like root-cause analytics, CDI, accurate coding, and a well-oiled clinical appeals mechanism (not to mention a robust focus on payer contract compliance, which we'll discuss more in future) typically work far better than waiting to plug holes in a leaky boat. However, the prospect of a future suite of AI-driven capabilities (envision automated claims submission linked with continuous policy adaptation and predictive denials capabilities on the front end, followed by automated appeals with self-correction capabilities, anchored by machine learning models) holds intriguing promise: a toolset substantially more effective than today’s, and one that reduces providers' reliance on the manual-heavy approaches that have dominated the last decade.
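To give a flavor of what the “predictive denials on the front end” piece could look like, here’s a minimal sketch of a pre-submission denial-risk score, assuming a simple gradient-boosted classifier trained on historical claim outcomes. The feature names are illustrative assumptions; real tools would draw on far richer claim, payer-policy, and clinical signals.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Assumed, illustrative feature set; a production model would use far richer
# claim, payer-policy, and clinical documentation features.
FEATURES = ["payer", "drg", "place_of_service", "billed_amount", "los_days"]

def train_denial_risk_model(history: pd.DataFrame) -> Pipeline:
    """Train on historical claims (assumed columns: FEATURES plus a 0/1 'denied')."""
    preprocess = ColumnTransformer(
        [("cats", OneHotEncoder(handle_unknown="ignore"),
          ["payer", "drg", "place_of_service"])],
        remainder="passthrough",  # numeric features pass straight through
    )
    model = Pipeline([("prep", preprocess), ("clf", GradientBoostingClassifier())])
    model.fit(history[FEATURES], history["denied"])
    return model

def score_pending_claims(model: Pipeline, pending: pd.DataFrame) -> pd.Series:
    """Probability of denial for not-yet-submitted claims; high scorers get
    routed to documentation review before they ever reach the payer."""
    return pd.Series(model.predict_proba(pending[FEATURES])[:, 1], index=pending.index)
```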
At this point, it's worth mentioning that the ripple effects of payer investment in A.I. on providers aren’t limited to clawing back reimbursement. Not uncommonly, a major investment by a health system means a commitment to doing one thing at the expense of another. The emergent need to “fight AI fire with AI fire” feels non-negotiable, yet it carries significant risk, such as throwing good money after bad if an initial choice fails to pan out. Choosing the right vendor or solution isn’t a slam dunk. The market is currently littered with vendors peddling capabilities on a spectrum that runs from “vapor-ey” (I say this in jest, but hopefully the point is clear) or completely overstated through to legitimate A.I.-grade technology. And that’s before confronting a reality we hear about often: some (perhaps many) of the vendors out there have a limited understanding of the problems healthcare providers actually face. Then there’s the trade-off that investments in payer-nullifying tech could delay potentially valuable clinical technology investments.
So, providers must wade into the RCM AI waters clear-eyed, using a well-thought-out vendor-selection rubric, technology pilots, and detailed contract terms to set themselves up for success. The historical slow-and-sure approach will be challenged by the realities of current margins and a pace of payer tech innovation that puts providers at risk of being lapped if they wait too long. The good news is that advancements in early A.I. revenue cycle capabilities are surfacing low-hanging fruit that wasn’t as readily apparent just a couple of years ago. For example, mid-cycle capabilities in LLM-enhanced coding present an opportunity to pick up some (clinically justifiable) additional reimbursement that legacy coding approaches likely miss, even with the best CDI programs, and auto-generated clinical defense letters enable clinical-denials teams to scale more effectively when making the case that a particular course of care is indeed appropriate. Both examples are already demonstrating returns, and that’s ahead of the anticipated (and I say this cautiously) agentic boom to come, which gives hope of seriously impactful labor substitution along with greater efficiency. A rock-‘em-sock-‘em bot-versus-bot resolution probably lies a way off yet (color me skeptical, because that would require meaningful integration between payer and provider systems and, as I mentioned earlier, incentives), but the first steps toward anything that looks like parity will involve a substantial amount of work by providers over the next one to three years.
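For the auto-generated clinical defense letter example, the basic shape is straightforward: assemble the structured clinical facts, hand them to a general-purpose LLM with tight instructions, and route the draft to a human reviewer before anything goes to the payer. The sketch below assumes the OpenAI chat completions API purely for illustration, and the case fields are hypothetical; this is not any particular vendor’s product.

```python
from openai import OpenAI  # illustrative vendor choice; any LLM API would work

client = OpenAI()  # assumes an API key is configured in the environment

def draft_defense_letter(case: dict) -> str:
    """Assemble structured clinical facts (hypothetical fields) into a first-draft
    appeal letter. A clinician or denials specialist reviews and edits the draft;
    nothing goes to a payer without human sign-off."""
    prompt = (
        "You are drafting a clinical appeal letter contesting a medical-necessity denial.\n"
        f"Patient summary: {case['clinical_summary']}\n"
        f"Denied service: {case['denied_service']} on {case['service_date']}\n"
        f"Payer's stated denial reason: {case['denial_reason']}\n"
        f"Relevant policy citation: {case['policy_reference']}\n"
        "Write a concise, well-structured letter that maps the documented clinical "
        "facts to the payer's own policy criteria. Do not invent clinical details."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,  # keep the draft conservative and factual
    )
    return response.choices[0].message.content
```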
If you’re a provider, it’s worth sitting back for a moment to ponder how the current bag of payment-defense tactics matches what you’re ideally trying to accomplish longer term. I cooked up the following “thought-exercise-y” Venn diagram around three provider-focused goals for revenue cycle: 1. Revenue integrity (accurate payment with reduced transactional friction); 2. Strategic advantage; 3. Cost effectiveness. I then began to plot many (although definitely not all, before anyone points out “hey, what about…”) of the interactions providers may have with payers that could involve some degree of a denial/audit-inflecting touchpoint.

What came out of the exercise (a working draft, to be clear) is that A.I. could reasonably be considered one of the few solutions with the potential to hit all three provider aims. Here’s why:
(1) Strategically, given that we hear payers appear to target providers that don’t respond to payment challenges assertively, AI may serve the larger objective of reducing overall scrutiny: a harder-and-faster defense may cause payers to direct their resources and attention elsewhere. Here we need to acknowledge that while there is likely to be a first-mover advantage with A.I., less well-resourced provider organizations may not be so lucky; that dynamic has long been true in healthcare. (I will also point out, again, that despite any strategic benefit, A.I. gives providers a bigger stick but still won’t address payer incentives.)
(2) Thanks to the “upskilling” element of A.I. (meaning it enables teams to operate more effectively than they could without it), improved reimbursement appears to be a near-given, especially initially. Several providers we’ve spoken with are employing mid-cycle-focused A.I. that enables more accurate representation of relevant clinical detail in coding, even where best-in-class CDI teams were already plying their trade. Given that so many providers have told us they don’t, or more accurately can’t, touch every denied account due to sheer volume, any ability to extend a team's reach and collect more dollars that are both justified and previously uncaptured is huge for provider organizations (a simple prioritization sketch follows this list). Further, capabilities that bring scale to quickly and accurately synthesizing various sources of clinical data into cogent, well-structured denials defense letters can shortcut payer interactions, accelerating cash or at least the decision-making process on a clinical case in ways that fully manual clinical-detail assembly and letter writing never could.
(3) Downstream from today, as AI capabilities (especially the much-marveled-about agentic flavor) advance more prominently into RCM and actually begin replacing some of today’s human-performed actions, freeing the same FTEs to do more complex tasks (a.k.a. the “top of license” concept), we may finally see the administrative portion of the labor-cost growth curve bend a little. A near-term win would be slowing the roll on adding RCM FTEs at last decade's rates. And if capabilities really advance, it’s not out of the realm of possibility that the oft-uttered adage of doing more with less may meaningfully crystallize, much as consulting and banking are reportedly experiencing.
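On the “can’t touch every denied account” point from (2), even a crude expected-recovery ranking illustrates how a fixed-size team extends its reach; the column names and the modeled overturn probability below are assumptions for illustration only.

```python
import pandas as pd

def prioritize_denials(worklist: pd.DataFrame, daily_capacity: int) -> pd.DataFrame:
    """Rank denied accounts by expected recovery and return what the team can work today.

    Assumed columns: account_id, balance, overturn_probability (from a model or
    historical overturn rates). Expected recovery = balance x overturn probability.
    """
    ranked = worklist.assign(
        expected_recovery=worklist["balance"] * worklist["overturn_probability"]
    ).sort_values("expected_recovery", ascending=False)
    return ranked.head(daily_capacity)
```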
Disagree with our chart? Great! We’d love to hear how you’d tweak it. Nevertheless, A.I. helped get us into this mess, so it’s going to be a critical tool for providers looking to restore some balance to the reimbursement force. Both survey data and conversations with revenue cycle leaders indicate that providers continue to look hard at denials-focused, A.I.-driven solutions, positioning such tech as an inevitable arrow in providers’ quivers in the very near future.
Open questions:
How rapidly does the emergence of agentic technology meaningfully shape costs in ways not previously seen with "automation 1.0"?
How will payers' tech stacks and ground games evolve as providers upgrade their capabilities to fight back against denials and audits?
How can providers lean more heavily on clear and concise payer-contract terms, unearthed through their own data-driven insight, to reduce ambiguity in dispute resolution around denials, improving outcomes and reducing cost (on both sides)? More thoughts on this in future...
Interested in coming to our provider focused RCM-summit in Nashville in early August? We have a limited number of seats for health system revenue cycle and finance leaders available. Email me at eric@unionhealthcareinsight.com for more detail.