How patients are using AI to fight back against denied insurance claims

As health insurers increasingly rely on artificial intelligence to process claims, denials have been on the rise. In 2023, about 73 million Americans on Affordable Care Act plans had their claims for in-network services denied, and less than 1% of them tried to appeal. Now, AI is being used to help patients fight back. Ali Rogin speaks with Indiana University law professor Jennifer Oliva for more.


Notice: Transcripts are machine and human generated and lightly edited for accuracy. They may contain errors.

Ali Rogin:

As health insurers increasingly rely on artificial intelligence to process claims, denials have been on the rise. In recent years, nearly 20 percent of claims filed by Americans on Affordable Care Act plans were denied. In 2023 alone, that number added up to about 73 million people who had filed claims for in-network services. And less than 1 percent of them tried to appeal their claim, either because the process was too lengthy or too confusing.

Now artificial intelligence is being used to help patients fight back. Software companies are harnessing the power of AI to create detailed appeal letters for patients in a fraction of the time it would take a human to do. Joining us to talk about the AI battle in health insurance is Jennifer Oliva, professor at Indiana University's Maurer School of Law. Thank you so much for joining us.

So before we talk about some of these new ways companies are using AI to fight back against claim denials, I want to talk about how health insurance companies have, up until this point, been using AI and other automated predictive algorithms to work into their claim approval or denial systems.

Jennifer Oliva, Professor of Law, Indiana University: Yeah, it's somewhat of a mystery, to be fair, because there's no way to know for sure. But the National Association of Insurance Commissioners sent out a survey to health insurers in 2025, and they answered: 71 percent of them said, we're using AI for utilization management. Which means that they're admitting, in a survey where they can say whatever they want, that they are using it for prior and concurrent authorization processes.

In addition, I'll add that there are several lawsuits alleging that this is going on. And as part of what we've learned from these lawsuits, indeed, some of the insurers have sent patients denial letters stating that the claim was reviewed by an AI program.

Ali Rogin:

And we also know that a very small number of people ever try to appeal these claims. Why is that the case? Why don't more people take advantage of this system that exists?

Jennifer Oliva:

I think it's very complicated. First of all, you have to remember that when people are in these situations, they're often in a complicated or an emergent, acute care healthcare situation. So it all depends on what their prior knowledge is of a very complex system, and whether they have the resources and ability, at a time when they're in emergency care or the ICU, to deal with a situation that complex. It's true very few people appeal, but a majority of the people who do appeal are very successful.

Ali Rogin:

We now have software companies who are employing AI to help people appeal these rejections. How does that work? What do we know about it?

Jennifer Oliva:

The system asks you for all the documentation you have that would be helpful in filing the appeal, based on what the insurer is expecting. You generally pay a fee of around $40 or $50, and the AI creates a claims appeal for you that you can submit to your insurer. It seems to have gone well for the people in the reported conversations about this, and I would encourage people to reach out to these companies.

I think the problem here is that we're in an AI arms race where, as consumers become more savvy and are more empowered by these tools to fight back, the insurers will just, you know, up the ante on their side with the AI. So what I would like to see is a system in place where there are fewer illegal denials on the front end.

Ali Rogin:

What are your concerns with this so-called AI arms race that the industry seems to be engaged in? What's the worst case scenario that you think we might be escalating towards?

Jennifer Oliva:

I think the worst case scenario is that insurers profit from claims denials. That's just part of the business model of how private insurance works in the United States. Therefore, we should be very suspicious when they adopt technologies and tools that make it easier for them to deny claims. We know they deny claims at a high rate. We know very few people appeal. We also know that AI makes it really easy for them to use long-standing claims data to detect people who won't appeal, and, given how long appeals take, people who won't live through an appeal.

So I'm very, very concerned about robust AI being used on the insurer side to pick through the data quite carefully and target patients who are expensive and who may not live through their appeal. So that's my concern on that side.

On the opposite side, on the provider and patient side, I am glad that they have these tools to fight back. But I feel like it's all going to keep escalating, and as the insurers become more savvy, providers and patients don't have the same resources.

Ali Rogin:

What about the regulatory landscape here? What exists in terms of preventing some of that escalation that we just talked about and what should be in place?

Jennifer Oliva:

So almost nothing exists. That's my whole interest in this field: it's very lightly regulated. To the extent that it is regulated, the requirement is generally something like a human has to be in the loop, right? A human has to oversee the final determination.

But as we know from investigative reporting and the lawsuits that are pending, it seems that's not the case, that humans are just approving what the AI is deciding on the insurer side. And so what I've been arguing for is robust regulation on the front end to make sure that the AI tools insurers are using make good, accurate, transparent and valid decisions based on the patient's medical necessity, which they're required to do by law under their plan contracts.

Ali Rogin:

Jennifer Oliva, professor of law at Indiana University. Thank you so much.

Jennifer Oliva:

Thank you for having me.
