Americans Turning To AI To Handle Insurance Complexities: Report

As healthcare costs remain a primary financial concern for Americans, a growing number of patients are turning to artificial intelligence chatbots to challenge medical bills and navigate insurance complexities, according to a report by the New York Times.

While tools such as OpenAI’s ChatGPT and Anthropic’s Claude offer a no-cost alternative to professional legal or billing assistance, experts and users warn that the results remain inconsistent.

The trend has become significant enough that the American Hospital Association has issued alerts to its members, noting that patients are increasingly using AI to dispute charges. The shift comes in response to insurers' and healthcare providers' own long-standing use of AI to maximize billing and manage claims.

In one instance, Walter Kerr used the chatbot Claude to challenge a $22,604 emergency room bill sent to his partner, Jackie Davalos. After uploading billing and medical records, the chatbot identified potential legal violations involving debt and insurance requirements. Kerr used those arguments in a letter to hospital executives, which resulted in the entire bill being waived.

“For the first time,” Kerr told the Times, he felt that “we might actually win.”

However, the technology’s advice is not always legally or procedurally accurate.

Legal experts noted that in the Davalos case, the chatbot misunderstood specific debt laws, applying requirements to the hospital that actually belonged to third-party collectors. Other users reported “deflating” experiences where chatbots provided “dead end” advice or failed to ask for the necessary context to understand specific insurance plans.

Critics also highlight significant privacy risks, as chatbot companies are not bound by the Health Insurance Portability and Accountability Act (HIPAA).

Information shared with these bots is not legally protected in the same manner as a conversation with a physician and could potentially be used in legal discovery.

In response to these concerns, OpenAI stated that its newer models are designed to “hedge more, browse more and proactively ask for additional details when needed.” Both OpenAI and Anthropic have also recently pledged not to train their models on users’ health information, though these protections often require users to opt in or hold paid subscriptions.

Despite the efficiency of AI in translating medical jargon or combing through dense policy documents, experts maintain that the tools lack the nuanced judgment required for complex billing disputes.

Success, Kerr told the Times, often requires persistence, something “A.I. can’t solve for you.”