AI could transform healthcare. Can safety-net providers keep up?

North Country HealthCare is the “only game in town” for care delivery in some of the regions it serves, said Dr. Jennifer Cortes, quality and population health medical officer at the federally qualified health center. The provider operates 13 primary care clinics and two mobile units, caring for 55,000 people across rural Northern Arizona. 

Some communities are very remote — meaning patients may be forced to travel hours to reach specialty care — which makes recruiting providers a challenge, Cortes said. 

That’s one area artificial intelligence could help. Adopting an AI scribe, which typically records providers’ conversations with patients and drafts clinical documentation, could alleviate some of clinicians’ administrative work and decrease burnout, she said. 

“When ChatGPT first came out, I was like, ‘Oh my God, this might make things so much better for those of us working in this field,’” Cortes said. “I just want my job to not be so challenging all the time. It would be amazing if this works.”

But taking on an AI project isn’t easy for a safety-net provider. The technology can be labor intensive to implement, requiring technical expertise and oversight capabilities that many safety-net providers can’t easily access, experts say. 

And if health systems with the fewest resources — that often care for the most medically complex patients — aren’t able to realize the benefits of AI, they could fall even further behind larger or more affluent providers.

“If you look at the kinds of health systems that are actively deploying AI right now, those that can afford it are the ones that are more aggressively pursuing it,” said Brian Anderson, CEO of the Coalition for Health AI, an industry group developing guidelines for responsible AI use in healthcare. “Those that are in rural communities, for example, that don’t have the IT staff to deploy and configure different kinds of AI tools aren’t able to do that. That’s an example of the digital divide already being reinforced in the AI space.”

‘A ton of human effort’

Safely implementing AI products at health systems can require specialized human labor and technology resources, creating significant barriers for cash-strapped safety-net providers, experts say. 

“People tend to talk about it or conceptualize it like you’re turning a light switch on,” said Paige Nong, assistant professor at the University of Minnesota School of Public Health. “It’s actually not that simple. These tools and these systems require a ton of human effort.”

Safety-net providers tend to operate on slim margins, given their heavier reliance on Medicaid — a pressing challenge as the insurance program faces federal funding cuts — and higher uncompensated care demands. 

For example, the net margin at community health centers, which provide primary care to underserved populations, was just 1.6% in 2023, according to health policy research firm KFF. That fell from 4.5% in 2022, driven by inflation and the expiration of pandemic-era funding.

Many community health centers also face workforce concerns, with more than 70% reporting a primary care physician, nurse or mental health professional shortage last year, according to the Commonwealth Fund. Meanwhile, labor costs are a significant expense for many providers. 

And implementing AI will take plenty of work to manage. For example, health systems will need to set up AI governance structures that can evaluate products for safety and efficacy, as well as maintain regulatory and legal compliance. Providers should also continue monitoring their AI tools after deployment, because the assumptions underlying a model, like patient characteristics, can change over time, potentially degrading its performance, experts say.
