The latest revelations about Allstate’s use of artificial intelligence (AI) in drafting insurance claims emails should serve as a wake-up call for policyholders, insurance regulators, and insurance industry claims handlers. According to a report from Futurism, Allstate Is Demanding We Delete These Quotes by Its Exec About How It’s Using AI to Write Insurance Emails,1 Allstate’s Chief Information Officer openly acknowledged that AI is responsible for composing claims-related emails, with human adjusters merely reviewing them for accuracy. This admission was met with an immediate—and somewhat baffling—attempt by Allstate’s media relations team to erase these statements from the public record, even going so far as to pressure journalists to delete direct quotes from their executive.
This development raises serious concerns about how AI is being deployed by the insurance claims industry. It is no secret that insurers have long sought ways to minimize claims payouts, sometimes at the expense of fair treatment for policyholders. The introduction of AI into this equation, if not properly monitored and regulated, could accelerate a trend where technology is used not to enhance customer service but to reduce claim approvals under the guise of efficiency. I discussed this nearly two years ago in Claims Leakage Criticism in the New Era of Artificial Intelligence:
What are criticisms of leakage management?
While leakage management is an important aspect of insurance operations, there are some criticisms of this approach. Here are a few:
Focus on Cost Reduction: Some critics argue that the primary focus of leakage management is cost reduction rather than ensuring that legitimate claims are paid. This can lead to a situation where claims are denied or delayed unnecessarily, causing frustration and financial difficulties for policyholders.
False Accusations: Leakage management techniques such as fraud detection algorithms and special investigation units can lead to false accusations of fraud, which can harm policyholders’ reputations and cause them undue stress.
Lack of Transparency: Leakage management techniques can lack transparency, leading to confusion and mistrust among policyholders. Some policyholders may not understand the reasons for denied claims or may feel that the claims process is arbitrary.
Reduced Benefits: In some cases, leakage management techniques can result in reduced benefits for policyholders. For example, if an insurance company reduces the amount paid out for a claim to save money, the policyholder may not receive the full amount they need to cover their losses.
Overemphasis on Prevention: Critics argue that some leakage management techniques overemphasize prevention at the expense of remediation. For example, fraud detection algorithms may be effective at preventing fraud, but far less effective at identifying and addressing its root causes.
Overall, while leakage management is an important aspect of insurance operations, it must be balanced with a focus on ensuring that legitimate claims are paid, maintaining transparency, and providing adequate benefits to policyholders.
Lack of transparency is the biggest obstacle. Transparency in an insurance company's claims handling is the most significant means of ensuring honesty and good faith in the claims process. Who opposes this? Insurance lobbyists and insurance defense attorneys always do. No wonder! Transparency exposes the awful truths about profits being placed over proper conduct. Otherwise, why not be transparent? Crickets are heard from the property insurance defense bar because they know this is true.
The insurance industry markets AI as a tool to improve accuracy, reduce costs, and expedite claims. In theory, this should benefit both insurers and policyholders. But the Allstate controversy exposes a more troubling reality: AI is being used in ways that lack transparency, and when questioned, companies may seek to rewrite the narrative rather than provide clear answers. If an executive’s own words about AI-driven processes can be dismissed as a mistake or “misinterpretation,” what does that say about the accountability of these systems?
AI in claims handling has the potential to become wrongful intelligence when it prioritizes cost-cutting over customer service. Consider the following risks:
Bias and Lack of Context: AI systems are trained on past data, which means they can inherit biases from historical claims pricing and handling records. If past pricing and decisions were skewed against certain types of claims or policyholders, AI could easily perpetuate those injustices on an even larger scale (a simplified illustration follows this list). The Florida Department of Financial Services supports this outdated pricing model in its recent emergency order, as I noted in Stop Using Licensed Contractor Bids For Claim Estimates—A Quick Analysis of Florida’s Imperfect Emergency Rule of Property Loss Adjusting. It is hard to believe we accept computer-generated pricing over real contractor numbers, but that is what Florida regulators require adjusters to do.
Lack of Transparency: When AI algorithms make decisions, they often do so through opaque processes that even their creators struggle to fully explain. If a claim is denied or undervalued by AI, how does a policyholder challenge that decision effectively?
Dehumanization of Claims Processing: Insurance is about trust. When a policyholder files a claim, they are often in distress, having suffered a loss. Replacing human communication with AI-generated responses—especially if those responses lack empathy—only deepens frustration and erodes faith in the claims process. Maybe those in the claims business should read The Emotional Impact of Recovery from Wildfires before ever speaking with a policyholder.
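To make the bias risk concrete, here is a deliberately simplified sketch, in Python, of how a model "trained" on historical payout data carries past underpayment forward. Every category, dollar figure, and the single-ratio "model" here is a hypothetical illustration, not any insurer's actual system:

```python
# A minimal sketch (hypothetical data and categories) of how a model
# trained on historical claim outcomes reproduces past underpayment.

from statistics import mean

# Hypothetical historical records: (claim_category, amount_claimed, amount_paid).
# Suppose "roof" claims were systematically underpaid in the past.
history = [
    ("water", 10_000,  9_500),
    ("water", 20_000, 19_000),
    ("roof",  10_000,  6_000),   # historically underpaid
    ("roof",  30_000, 18_000),   # historically underpaid
    ("fire",  50_000, 48_000),
]

# "Training": learn the average payout ratio per category from history.
def train(records):
    ratios = {}
    for category, claimed, paid in records:
        ratios.setdefault(category, []).append(paid / claimed)
    return {category: mean(r) for category, r in ratios.items()}

model = train(history)

# "Inference": a new, fully legitimate roof claim is scored with the learned
# ratio, so the historical underpayment is carried forward automatically.
category, claimed = "roof", 25_000
suggested_payment = model[category] * claimed
print(f"Suggested payment: ${suggested_payment:,.0f} on a ${claimed:,} claim")
# -> Suggested payment: $15,000 on a $25,000 claim
```

Nothing in that toy example flags the roof payments as biased; the historical skew simply becomes the learned rule. The same dynamic, applied across millions of claims, is what makes unaudited training data so dangerous.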
The Allstate incident also highlights a broader issue: When insurers rely on AI but refuse to acknowledge its full role, they create a dangerous lack of accountability. If an insurance company can’t even stand by its own executive’s statements about AI usage, how can customers trust that AI-driven decisions are being made fairly and ethically?
Insurance regulators and consumer advocates should take note. If AI is to be integrated into claims handling, it must be subject to the same standards of fairness, transparency, and accountability as human adjusters. Policyholders deserve to know when AI is being used, how it is influencing their claims, and what recourse they have if AI gets it wrong. I discussed this in The Regulatory Blind Spot: How Insurance Departments Fail to Detect Systemic Bad Faith Claims Practices, suggesting that regulators dig into internal claims practices and conduct regulatory interviews with claims managers:
Uncovering bad faith conduct typically requires a deep understanding of insurance company operations, claims handling procedures, and internal incentive structures. It demands rigorous analysis of claims data, thorough examination of internal documents and communications, and skilled interviewing of company personnel. Based on my experience, many state insurance departments simply lack the expertise and resources to conduct this level of in-depth investigation. Where do they go to learn how to do this? What is their motivation to do so?
Artificial intelligence can be a valuable tool, but in the hands of insurers who use it primarily to reduce payouts, it can become wrongful intelligence. The Allstate controversy serves as a reminder that while technology evolves, the fundamental duty of insurers remains unchanged: to honor the promises made to policyholders. If AI is being used to undermine that duty, then it is not progress—it is a step backward.
Thought For The Day
“The more we give artificial intelligence the power to make decisions, the more important it becomes to ask: Who watches the watchers?”
—Shoshana Zuboff
1 Victor Tangermann, Allstate Is Demanding We Delete These Quotes by Its Exec About How It’s Using AI to Write Insurance Emails, Futurism (Feb. 13, 2025), https://futurism.com/allstate-demanding-delete-quotes-ai.