SAN FRANCISCO, CA, Oct. 5, 2022 - To ensure that the interests of independent insurance agents and brokers are protected in any future regulatory proceeding, IIABCal participated in a California Department of Insurance “prenotice workshop” on Sept. 21 intended to signal potential amendments to existing regulations that govern insurers’ use of artificial intelligence, or “AI,” in evaluating underwriting risks.
“The increasing prevalence of the insurance industry’s use of big data and algorithmic tools to underwrite, rate, process claims, and market to consumers presents a new potential for unfair discrimination in the business of insurance whether it is intentional or not,” CDI stated in its description of the problem it believes now exists. “California law prohibits discrimination in insurance ratemaking, claims handling practices, accepting insurance applications, and when canceling or nonrenewing insurance policies. The purpose of this workshop, therefore, is to explore how Artificial Intelligence (AI), Machine Learning (ML) and other algorithmic tools are used by insurance companies in California, and [ … ] ultimately [adopt] rules to prescribe, the responsible governance and use of those tools in insurance decisions….”
IIABCal General Counsel Steve Young, who participated in the hearing, which was conducted entirely virtually, said it was highly likely that CDI would propose a variety of new disclosure and training requirements, and that IIABCal would examine any such proposals very carefully to ensure that neither the Department nor insurers attempted to foist undue compliance burdens upon producers.
The workshop focused on the insurance industry’s use of algorithmic tools, such as machine learning, artificial intelligence, and other technologies designed to examine big data to make judgments about consumers, marketing, underwriting, and other decisions or practices relating to insurance transactions, which the Department believes may result in bias or unfair discrimination.
The questions on which CDI solicited feedback included:
- When is it appropriate to use AI, ML, and Algorithms for insurance decisions in California?
- How are these tools used by insurance companies for marketing strategy?
- How are these tools used in the claims handling processes?
- How are these tools used for ratemaking purposes?
- How are these tools used for underwriting purposes?
- How are these tools used for any other purpose related to insurance?
- Even if AI, ML, or Algorithmic tools can be shown to have an actuarial correlation to risk of loss, should such tools be permitted at all if they harm a protected class of persons? If so, under what circumstances? How would their usage be considered permissible under California’s Insurance Code?
- How do insurance companies and licensees – today – test AI, ML, and Algorithms to prevent bias against protected classes?
- What are insurance companies doing, as a business practice, to ensure compliance with all applicable insurance laws and to train their staffs on the proper application of those laws?
- How do insurers and licensees provide transparency to Californians by informing consumers of the specific reasons for any adverse underwriting decisions?
- If an insurance company or other licensee uses AI, ML, or other Algorithms in a way that unfairly discriminates against protected classes, aside from immediately discontinuing the use of the tool, what consequences should result?
Click here to review the complete CDI Notice of the workshop and the issues the Department believes need to be addressed by further regulation.