The homeowners insurance landscape is shifting rapidly, driven by advances in artificial intelligence (AI) and surveillance technology. A recent Business Insider article, "Through The Roof: My Journey Into The Surreal, Infuriating Future of Homeowners Insurance," highlights growing concern over insurance companies using drones, AI, and surveillance tools to monitor and evaluate homeowners, sometimes leading to policy cancellations or other adverse actions. As these technologies become more prevalent, they bring with them a host of ethical, legal, and regulatory challenges that both insurers and policyholders must navigate.

This evolving landscape is not going unnoticed by regulators. The Michigan Department of Insurance and Financial Services, for example, recently issued Bulletin 2024-20-INS, setting forth expectations for insurers’ use of AI systems. The National Association of Insurance Commissioners (NAIC) has likewise adopted a model bulletin providing guidelines on the responsible use of AI in the insurance industry. These regulatory efforts aim to ensure that while innovation drives efficiency and accuracy, it does not come at the expense of fairness, transparency, and consumer protection.

The Rise of AI and Surveillance in Homeowners Insurance

In recent years, insurance companies have increasingly turned to AI and surveillance technologies to assess risk, process claims, and even detect fraud. Drones equipped with high-resolution cameras can capture detailed images of a property, allowing insurers to evaluate the condition of a home without setting foot on the premises. AI systems can analyze these images, along with other data, to make predictions about potential risks, set premiums, and make underwriting decisions.
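For readers curious about what such an automated assessment can look like in outline, the following is a minimal, hypothetical sketch in Python. The feature names, the weights in roof_risk_score, and the REVIEW_THRESHOLD cutoff are invented for illustration only; no insurer’s actual model is represented here.

```python
from dataclasses import dataclass

# Hypothetical illustration only: a toy risk score built from features an
# image-analysis model might extract from aerial photos of a roof.
# Real insurer systems are proprietary and far more complex.

@dataclass
class RoofObservation:
    missing_shingle_pct: float   # share of roof area flagged as missing shingles (0-1)
    moss_coverage_pct: float     # share of roof area flagged as moss or algae (0-1)
    estimated_age_years: float   # model's estimate of the roof's age

def roof_risk_score(obs: RoofObservation) -> float:
    """Combine extracted features into a 0-1 risk score (weights are made up)."""
    score = (
        0.5 * obs.missing_shingle_pct
        + 0.3 * obs.moss_coverage_pct
        + 0.2 * min(obs.estimated_age_years / 30.0, 1.0)
    )
    return min(score, 1.0)

REVIEW_THRESHOLD = 0.6  # assumed cutoff for routing the file to a human underwriter

if __name__ == "__main__":
    obs = RoofObservation(missing_shingle_pct=0.10, moss_coverage_pct=0.25,
                          estimated_age_years=22)
    score = roof_risk_score(obs)
    # Flag for human review rather than taking automatic adverse action.
    print(f"risk score: {score:.2f}, underwriter review: {score >= REVIEW_THRESHOLD}")
```

Note the design choice in the sketch of routing high scores to a human underwriter rather than triggering an automatic adverse action, which is consistent with the governance and fairness themes the regulators discuss below.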

While these technologies offer significant benefits, such as faster processing times and more accurate assessments, they also raise critical concerns. For example, there is the potential for AI systems to make decisions based on incomplete or biased data, leading to unfair treatment of policyholders. Furthermore, the use of surveillance tools, such as drones, can feel invasive to homeowners, who may not even be aware that they are being monitored. I noted how aerial imagery affected a church’s coverage in "Church Loses Insurance From Satellite Imagery – GuideOne Refuses to Consider Other Evidence of a Roof’s Condition."

Regulatory Response: Michigan’s Bulletin

Recognizing these challenges, the Michigan Department of Insurance and Financial Services issued Bulletin 2024-20-INS in August 2024. This bulletin emphasizes that while AI can drive innovation in the insurance industry, it also presents unique risks that must be carefully managed. The bulletin outlines the expectations for insurers operating in Michigan, including the requirement to develop and implement a comprehensive written program for the responsible use of AI systems (an AIS program).

Key points from the Michigan Bulletin include:

Compliance with Existing Laws: Insurers must ensure that their use of AI systems complies with all applicable insurance laws and regulations, including those addressing unfair trade practices and unfair discrimination.

Governance and Risk Management: Insurers are required to establish robust governance frameworks and risk management controls specifically tailored to their use of AI systems. This includes ongoing monitoring and validation to ensure that AI-driven decisions are accurate, fair, and non-discriminatory.

Transparency and Explainability: The bulletin stresses the importance of transparency in AI systems. Insurers must be able to explain how their AI systems make decisions, and they should provide clear information to consumers about how these systems may impact them.

Third-Party Oversight: If insurers use AI systems developed by third parties, they must conduct due diligence to ensure these systems meet the same standards of fairness and compliance. Insurers are also expected to maintain the right to audit third-party systems to verify their performance and compliance.

The Michigan bulletin reflects a growing awareness among regulators that while AI can offer significant advantages, it must be used responsibly to protect consumers from potential harm.

The NAIC Model Bulletin on AI in Insurance

The NAIC’s model bulletin on the use of AI systems in insurance, adopted in December 2023, complements the Michigan bulletin by providing a comprehensive framework for all states to consider. The NAIC bulletin emphasizes several core principles:

Fairness and Ethical Use: AI systems should be designed and used in ways that are fair and ethical, avoiding practices that could lead to discrimination or other adverse consumer outcomes.

Accountability: Insurers are accountable for the outcomes of decisions made or supported by AI systems, regardless of whether these systems were developed internally or by third parties.

Compliance with Laws and Regulations: AI systems must be compliant with all applicable laws and regulations, including those related to unfair trade practices and claims settlement practices.

Transparency and Consumer Awareness: Insurers should be transparent about their use of AI systems and provide consumers with access to information about how these systems impact their insurance coverage and claims.

Ongoing Monitoring and Improvement: AI systems must be continuously monitored and updated to ensure they remain accurate, reliable, and free from bias. This includes validating and testing systems regularly to detect and correct any issues that arise over time.

The NAIC bulletin also highlights the importance of data governance, requiring insurers to implement policies and procedures to manage data quality, integrity, and bias in AI systems. Additionally, the bulletin addresses the need for insurers to retain records of their AI systems’ operations, including documentation of how decisions are made and the data used to support those decisions. Those records and data will undoubtedly be reviewed during Market Conduct Examinations.
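As a rough illustration of the kind of record retention the bulletin contemplates, the snippet below logs each AI-assisted decision along with its inputs, model version, and outcome. The field names, the log_ai_decision helper, and the JSON-lines format are assumptions made for the sketch, not a format prescribed by the NAIC or any state regulator.

```python
import datetime
import json

# Hypothetical decision record for an AI-assisted underwriting action.
# Field names and the JSON-lines storage format are illustrative assumptions.

def log_ai_decision(path: str, policy_id: str, model_version: str,
                    inputs: dict, output: dict, human_reviewed: bool) -> None:
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "policy_id": policy_id,
        "model_version": model_version,    # which model produced the output
        "inputs": inputs,                  # data relied on (e.g., image-derived features)
        "output": output,                  # score, recommendation, reason codes
        "human_reviewed": human_reviewed,  # whether a person reviewed before action
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_decision(
    "ai_decisions.jsonl",
    policy_id="HO-123456",
    model_version="roof-model-2024.08",
    inputs={"missing_shingle_pct": 0.10, "moss_coverage_pct": 0.25},
    output={"risk_score": 0.27, "recommendation": "no action"},
    human_reviewed=True,
)
```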

Implications for Policyholders

For policyholders, the increasing use of AI and surveillance in homeowners insurance presents both opportunities and risks. On the one hand, these technologies can lead to better risk management, more accurate pricing, and faster claims processing. On the other hand, they raise concerns about privacy, fairness, and the potential for discrimination.

One of the most significant risks is the potential for AI systems to make decisions based on biased or incomplete data. For example, an AI system might use data from drones to assess the condition of a home, but if that data is not accurate or is interpreted incorrectly, it could lead to an unjustified increase in premiums or even the cancellation of a policy. Similarly, AI systems might rely on historical data that reflects past biases, leading to discriminatory outcomes for certain groups of homeowners.

Amy Bach of United Policyholders noted that these technologies are leading to more policy cancellations in higher-risk areas. “One of the most significant factors driving the crisis is the technology that insurers are using now,” Bach said. “Aerial images, artificial intelligence and all kinds of data are making risks that they had been taking more blindly a lot more vivid to them.”1

Another concern is the lack of transparency in AI-driven decisions. Many homeowners may not understand how their insurance premiums are calculated or why their claims are approved or denied. If these decisions are based on complex AI algorithms, it can be challenging for consumers to get clear answers. This lack of transparency can erode trust between insurers and policyholders, making it more difficult for consumers to feel confident in their coverage.

Navigating the Future: What Insurers Should Do

As the insurance industry continues to evolve, insurers must take proactive steps to navigate the challenges and opportunities presented by AI and surveillance technologies. The following strategies can help insurers ensure they use these technologies responsibly and in ways that benefit both their business and their customers:

Develop Comprehensive AI Governance Frameworks: Insurers should establish clear governance frameworks that define how AI systems will be developed, deployed, and monitored. These frameworks should include robust risk management controls, regular audits, and ongoing training for employees involved in AI-related decisions.

Prioritize Transparency and Consumer Education: Insurers should strive to be as transparent as possible about their use of AI and surveillance technologies. This includes providing clear explanations of how these systems work and how they impact consumers. Insurers should also invest in consumer education efforts to help policyholders understand how AI-driven decisions are made and what they can do if they believe they have been treated unfairly.

Invest in Data Quality and Bias Mitigation: The effectiveness of AI systems depends on the quality of the data they use. Insurers should implement rigorous data governance practices to ensure that their data is accurate, complete, and free from bias. This includes regularly testing AI systems for potential biases and making necessary adjustments to prevent discriminatory outcomes; a simple illustration of such testing appears after this list.

Engage with Regulators and Policymakers: As regulators like the Michigan Department of Insurance and Financial Services and the NAIC continue to develop guidelines for AI in insurance, insurers should actively engage with these efforts. By participating in the regulatory process, insurers can help shape policies that promote innovation while protecting consumers.

Consider the Ethical Implications of Surveillance: While surveillance technologies like drones can provide valuable data for insurers, they also raise significant ethical concerns. Insurers should carefully consider the implications of using these technologies and ensure they are used in ways that respect the privacy and rights of homeowners.
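As referenced in the data quality item above, the following is a minimal sketch of what routine bias testing might look like: it compares adverse-action rates (such as nonrenewals) across groups and flags large disparities using a simple four-fifths-style ratio. The group labels, sample data, and 0.8 threshold are assumptions for the sketch; real fairness testing requires actuarial, statistical, and legal review.

```python
from collections import defaultdict

# Illustrative bias check only: compares adverse-action rates (e.g., nonrenewal)
# across groups and applies a simple "four-fifths"-style ratio test.

def adverse_action_rates(decisions):
    """decisions: iterable of (group_label, adverse_bool) pairs."""
    totals, adverse = defaultdict(int), defaultdict(int)
    for group, is_adverse in decisions:
        totals[group] += 1
        adverse[group] += int(is_adverse)
    return {g: adverse[g] / totals[g] for g in totals}

def flag_disparities(rates, ratio_threshold=0.8):
    """Flag groups whose adverse-action rate far exceeds the lowest-rate group."""
    best = min(rates.values())
    return {g: r for g, r in rates.items() if best > 0 and best / r < ratio_threshold}

decisions = [("group_a", True), ("group_a", False), ("group_a", False),
             ("group_b", True), ("group_b", True), ("group_b", False)]
rates = adverse_action_rates(decisions)
print(rates)                    # per-group adverse-action rates
print(flag_disparities(rates))  # groups warranting closer review
```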

The future of homeowners insurance is being shaped by powerful new technologies that offer significant potential benefits but also pose substantial risks. As AI and surveillance become more ingrained in the industry, insurers must navigate a complex landscape of regulatory expectations, ethical considerations, and consumer concerns.

By developing robust AI governance frameworks, prioritizing transparency, investing in data quality, and engaging with regulators, insurers can harness the power of these technologies while ensuring they are used in ways that are fair, ethical, and beneficial to all stakeholders. In doing so, they can build trust with policyholders and position themselves for success in a rapidly changing industry. The recent actions by the Michigan Department of Insurance and Financial Services and the NAIC serve as important reminders that while innovation is critical to the future of insurance, it must be pursued responsibly and with a clear focus on consumer protection. As the industry continues to evolve, insurers that embrace these principles will be best positioned to thrive in the years ahead.

One aspect of the AI and surveillance technology currently used for insurance risk assessment and mitigation is that it is new and simply not working as well as it could. As these systems improve, the problems of poor outcomes and wrong determinations noted in the Business Insider article should be reduced. For example, properly identifying early signs of roof damage can allow insurers to alert policyholders to fix problems before they lead to significant claims, reducing the insurer’s payout costs. The loss that never happens, or is mitigated, is truly a win-win scenario that these technologies can advance.

Thought For The Day

As technology advances, regulators must ensure that innovation does not come at the expense of consumer protection. Fairness and transparency should never be compromised.
—Rohit Chopra


1 Marin Homeowners Grapple With Fire Insurance Cancellations, Marin Independent Journal, June 20, 2024, available at https://uphelp.org/marin-homeowners-grapple-with-fire-insurance-cancellations/