NYC Insurance Circular Letter: Using Consumer Data and Information Sources

Authored by Ashyana-Jasmine Kachra, Policy Associate at Holistic AI
Published on Dec 22, 2022

Key takeaways

  • In January 2019, the New York Department of Financial Services published a circular letter to all insurers authorized to write life insurance in New York State concerning the use of external data sources.
  • The New York Department of Financial Services (NYDFS) reserves the right to audit and examine an insurer’s underwriting criteria, programs, algorithms, and models.
  • The letter clarifies that the burden and liability lie with the insurer.
  • The letter also reminds insurers of their obligation to comply with existing anti-discrimination and civil rights laws, as well as with all other requirements in the Insurance Law and Insurance Regulations.

What is the NYC Insurance Circular Letter?

In January 2019, the New York Department of Financial Services published a circular letter (Insurance Circular Letter No. 1 (2019)) to all insurers authorized to write life insurance in New York State. The letter makes clear that insurers should not use an external data source, algorithm, or predictive model in underwriting or rating unless the insurer itself (not just the vendor) has determined that the system does not collect or use prohibited criteria.

The burden and liability lie with the insurer.

This is significant, as the New York Department of Financial Services (NYDFS) reserves the right to “audit and examine an insurer’s underwriting criteria, programs, algorithms, and models”, including within the scope of regular market conduct examinations, and to take disciplinary action, including fines, revocation and suspension of license, and the withdrawal of product forms.
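
Because the NYDFS can demand these materials on examination, insurers effectively need an auditable inventory of every underwriting model and external data source in use. Below is a minimal sketch, in Python, of what a single inventory entry might record; the letter prescribes no format, and all names, fields, and values here are hypothetical.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass(frozen=True)
class UnderwritingModelRecord:
    """One entry in a model inventory kept for exam readiness.

    All fields are illustrative, not a regulatory schema.
    """
    name: str
    vendor: Optional[str]                   # None if developed in-house
    purpose: str                            # e.g. "accelerated underwriting triage"
    external_data_sources: tuple[str, ...]  # specific sources the model consumes
    insurer_validated: bool                 # the insurer (not the vendor) must have
                                            # established no prohibited criteria are used
    last_reviewed: date

# A hypothetical entry an examiner might be shown
inventory = [
    UnderwritingModelRecord(
        name="mortality-score-v3",
        vendor="Example Analytics Inc.",
        purpose="accelerated underwriting triage",
        external_data_sources=("motor-vehicle records", "prescription history"),
        insurer_validated=True,
        last_reviewed=date(2022, 12, 1),
    ),
]
```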

Two areas of concern

The letter addresses two areas of concern with the use of external data sources, algorithms, or predictive models in determining life insurance rates.

  1. The use of external data sources, algorithms, and predictive models has a significant potential negative impact on the availability and affordability of life insurance for protected classes of consumers. An insurer should not use an external data source, algorithm, or predictive model for underwriting or rating purposes unless the insurer can establish that the data source does not use and is not based in any way on race, color, creed, national origin, status as a victim of domestic violence, past lawful travel, or sexual orientation in any manner, or any other protected class (a first-pass screen for such criteria is sketched after this list).
  2. The use of external data sources is often accompanied by a lack of transparency for consumers. Where an insurer is using external data sources or predictive models, the reason or reasons for any declination, limitation, rate differential, or other adverse underwriting decision provided to the insured or potential insured should include details about all information upon which the insurer based such decision, including the specific source of that information (see the notice structure in the same sketch).
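
Neither requirement prescribes a specific technical control, but both can be made concrete. The sketch below, in Python with entirely hypothetical field names, sources, and data, shows (1) a first-pass screen of a model’s input features against the prohibited criteria and (2) an adverse-decision notice that records every reason and its specific source. It illustrates the letter’s expectations rather than implementing them: a name-based screen must be followed by proxy analysis, e.g. testing whether facially neutral features such as ZIP code correlate with protected attributes.

```python
from dataclasses import dataclass

# Criteria the letter prohibits using "in any manner" (non-exhaustive;
# it also covers any other protected class).
PROHIBITED_CRITERIA = {
    "race", "color", "creed", "national_origin",
    "domestic_violence_victim_status", "past_lawful_travel",
    "sexual_orientation",
}

def screen_features(feature_names: list[str]) -> list[str]:
    """Concern 1: flag input features that name a prohibited criterion.

    Only a first pass -- the insurer must also establish that the model
    is not based on protected classes in any way, which requires proxy
    analysis on real data, not just a name check.
    """
    return [f for f in feature_names
            if any(p in f.lower() for p in PROHIBITED_CRITERIA)]

@dataclass
class AdverseDecisionNotice:
    """Concern 2: the notice must detail all information relied upon,
    including the specific source of each item."""
    applicant_id: str
    decision: str            # e.g. "rate differential"
    reasons: list[str]       # every piece of information relied upon
    data_sources: list[str]  # the specific source of each item

    def render(self) -> str:
        lines = [f"Decision for applicant {self.applicant_id}: {self.decision}",
                 "Based on the following information:"]
        lines += [f"  - {r}" for r in self.reasons]
        lines.append("Sources of that information:")
        lines += [f"  - {s}" for s in self.data_sources]
        return "\n".join(lines)

if __name__ == "__main__":
    print(screen_features(["bmi", "zip_code", "national_origin_code"]))
    # -> ['national_origin_code']; note zip_code passes the name check
    #    but would still need proxy analysis.

    print(AdverseDecisionNotice(
        applicant_id="A-1024",
        decision="rate differential",
        reasons=["elevated risk score from motor-vehicle records"],
        data_sources=["hypothetical MVR vendor feed, retrieved 2022-11-01"],
    ).render())
```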

Compliance requirements

The letter also reminds insurers of their obligation to comply with existing anti-discrimination and civil rights laws. The emphasis is on ensuring that any external data sources are not unfairly discriminatory and that they comply with all other requirements of the Insurance Law and Insurance Regulations.

These include:

  • The New York Insurance Law
  • The New York Executive Law
  • The New York General Business Law
  • The Federal Civil Rights Act

For example, Insurance Law Article 26 prohibits using race, color, creed, national origin, status as a victim of domestic violence, or past lawful travel in any manner, among other things, in underwriting. If an insurer uses an external data source, such as an algorithm trained on protected characteristics, the insurer is liable even if the algorithm was bought from an external vendor. Moreover, the insurer would be in breach not only of the industry guidance, leaving it open to an NYDFS audit, but also of existing anti-discrimination law.

Taking steps early is the best way to get ahead of this and other global AI regulations. At Holistic AI, we have a team of experts who, informed by relevant policies, can help you manage the risks of your AI. Reach out to us at we@holisticai.com to learn more about how we can help you embrace your AI confidently.

DISCLAIMER: This blog article is for informational purposes only. It is not intended to, and does not, provide legal advice or a legal opinion, nor is it a do-it-yourself guide to resolving legal issues or handling litigation. It is not a substitute for experienced legal counsel and does not provide legal advice regarding any specific situation.
