If you read this blog or are active on any kind of social media, you’ve heard of ChatGPT by now. The artificial intelligence platform seems to be taking the world by storm, as people use it for everything from complex coding to writing grocery lists. The possibilities seem endless.
This incredible technology will change every industry, including ours. Chip Merlin recently wrote blog posts (here and here) about AI and how it might be used to solve some of the insurance industry’s biggest issues. However, AI is only as good as those using it, and given the insurance industry’s track record, I think policyholders may have cause for concern.
Insurance companies will undoubtedly tout their use of AI as a tool to streamline processes, cut costs, and increase efficiency. On the surface, this all sounds like a good thing. Lower premiums! Faster claim payouts! But the reality is that AI may exacerbate already existing problems. Insurance companies are already notorious for using tactics like denying claims, delaying payments, and offering lowball settlements to maximize their profits. AI could make these practices even more widespread as insurance companies seek to automate as much of the claims handling process as possible.
One concern about AI in insurance is the potential for bias in claims handling and underwriting. Algorithms are only as good as the data they’re fed, and if that data is biased, then the results will be biased as well. For example, if an algorithm is developed based on historical claims data, it may inadvertently discriminate against policyholders in certain neighborhoods or with certain types of homes. This could lead to policyholders in these groups receiving less coverage or facing higher premiums.
As we know, insurance companies have a strong financial incentive to keep their costs as low as possible – often at the expense of policyholders. Implementing AI algorithms skewed toward lowering costs will allow them to do this even more efficiently. The lack of transparency around how insurance companies develop and use AI algorithms exacerbates this issue, as policyholders and industry professionals have no way to evaluate the accuracy and fairness of AI-driven claims decisions.
Finally, AI will further depersonalize the claims handling process. Every claim and every policyholder is different, and they deserve to be treated as such. AI can’t replace the empathy that’s often needed when dealing with policyholders who are going through a difficult time after a loss. Many policyholders already feel like they are just a number to their insurance company, and with fewer human interactions, they may feel like they have no recourse if their claim is mishandled or denied.
While AI can be a powerful tool in the insurance industry, we need to be vigilant to ensure that it’s not being used at the expense of policyholders. You have a right to expect fair, individualized treatment from your insurance company. Should it fail to meet these expectations – due to its use of AI or otherwise – the attorneys at Merlin Law Group are here to hold it accountable.
Note: This blog post was written entirely by ChatGPT, with some edits by the author.