On January 18, 2019, New York State's Department of Financial Services (DFS) sent a Circular Letter with far-reaching impact. It was addressed to "All insurers authorized to write life insurance in New York State." The topic was a digital-era wake-up call:
re: Use of external consumer data and information sources in underwriting life insurance
Now, this may not seem like the biggest deal in the world, but just about every state insurance department follows New York's lead.
The effect on InsurTech, for example, is big. Life insurance in the US is roughly a $600-$700 billion industry, and it is trying desperately to "digitize." And this is bleeding into all personal lines - auto, homeowners, etc. The New York regulators sounded the alarm about the use of “new” kinds of data and algorithms, a cautionary tale for any industry pursuing digitization using shadowy data aimed at people.
Data privacy has new implications - across all kinds of insurance
Though the point of the letter was to warn life insurance companies doing business in the State of New York, the impact is more widely felt. Insurance in the US is regulated by the states. The federal government was barred from regulating insurance, even though much of it occurs across state lines, by the McCarran-Ferguson Act of 1945.
Though there is no federal regulation, there is an organization, the National Association of Insurance Commissioners (NAIC), a standard-setting and regulatory support organization created and governed by the chief insurance regulators from the 50 states, the District of Columbia and five U.S. territories. Through the NAIC, state insurance regulators establish standards and best practices, conduct peer review, and coordinate their regulatory oversight. Nevertheless, the NAIC does not control the state regulators. Most states follow the lead of the NYS DFS.
In recent years, insurers across the Life, Accident, Health, Disability and Long Term Care lines have begun to use unregulated data in their pricing, reserving, underwriting and even claims adjudication at an individual level. As we wrote in a previous article about the proliferation of this data:
Optum. The company, owned by the massive UnitedHealth Group, has collected the medical diagnoses, tests, prescriptions, costs and socioeconomic data of 150 million Americans going back to 1993, according to its marketing materials. Since most of this is covered by HIPAA, they are very clever in getting around the regulations. But that socioeconomic data is a real red flag.
The context is that insurers moved to online underwriting. In at least one case, a term insurance policy of up to $1,000,000 can be issued within minutes, if the answers on the application qualify. Previously, this would have required one or more visits with a broker, blood and urine samples, an EKG, etc.
Instead, these "accelerated issue" programs rely on algorithms and data supplied by a growing multitude of data providers, some of dubious quality and almost all raising questionable privacy practices, especially under HIPAA. This development alarmed the NYS department that oversees all financial services. From the letter:
First, the use of external data sources, algorithms, and predictive models has a significant potential negative impact on the availability and affordability of life insurance for protected classes of consumers. An insurer should not use an external data source, algorithm or predictive model for underwriting or rating purposes unless the insurer can establish that the data source does not use and is not based in any way on race, color, creed, national origin, status as a victim of domestic violence, past lawful travel, or sexual orientation in any manner, or any other protected class.
Moreover, an insurer should also not use an external data source for underwriting or rating purposes unless the use of the external data source is not unfairly discriminatory and complies with all other requirements in the Insurance Law and Insurance Regulations. Second, the use of external data sources is often accompanied by a lack of transparency for consumers. Where an insurer is using external data sources or predictive models, the reason or reasons for any declination, limitation, rate differential or other adverse underwriting decision provided to the insured or potential insured should include details about all information upon which the insurer based such decision, including the specific source of the information upon which the insurer based its adverse underwriting decision.
And, just in case the insurance company didn’t understand the above:
Data, algorithms, and models that purport to predict health status based on a single or limited number of unconventional criteria also raise significant concerns about the validity of such models.
Insurance data scenarios raise troubling ethical questions
It gets worse. Every life insurance policy includes a two-year contestability period. In other words, if you die within two years of the issue date, the company can deny the claim based on any misrepresentation in the application. After two years, historically, a claim could only be denied in the event of a “material misrepresentation” of the facts; misstating age, gender or illness are the most common issues. In that case, most insurance companies calculate the amount of insurance the premiums you actually paid would have purchased at the correct rates, and pay that reduced benefit. But this is where it gets sticky - with the kinds of data and algorithms NYS refers to.
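The reduced-benefit calculation described above can be sketched in a few lines. This is an illustrative simplification, assuming the common pro-rata adjustment (as in a misstatement-of-age clause): the death benefit is scaled by the ratio of the premium actually paid to the premium that should have been charged. Actual contract language and actuarial practice vary by insurer.

```python
def adjusted_benefit(face_amount: float,
                     premium_paid: float,
                     correct_premium: float) -> float:
    """Scale the face amount to what the premiums actually paid
    would have purchased at the correct rate (illustrative only)."""
    return face_amount * (premium_paid / correct_premium)

# Hypothetical example: a $500,000 policy issued at a $1,000 annual
# premium, where the correct rate would have been $1,600.
print(adjusted_benefit(500_000, 1_000, 1_600))  # 312500.0
```

In this hypothetical, the beneficiary receives $312,500 rather than the $500,000 face amount, a far better outcome than the outright denial discussed next.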
When a death claim is made today, many life insurance companies rely on those shadow data sources to see if there is cause to pay a reduced benefit, or no benefit at all. Besides defeating the incontestability clause, this creates uncertainty about whether insureds’ beneficiaries will get the benefit. Life insurance is a social benefit. It provides peace of mind that funds will be available for final expenses, settling an estate, income continuance, funding a partnership agreement or endowing a charitable gift.
Consider this case: you took out a life insurance policy to pay off the mortgage on your home, in case you did not survive to pay it off over time. On the application, you stated that you had never been a smoker, but a data broker unearthed a picture of you in college holding a cigarette. Perhaps you smoked a few cigarettes, but you never took up the habit. Instead of a lump sum to retire the mortgage, the company cancels the policy and pays your spouse the total premiums plus interest, a small fraction of the death benefit, causing the loss of the property. The NYS letter refers to transparency:
The failure to adequately disclose the material elements of an accelerated or algorithmic underwriting process, and the external data sources upon which it relies, to a consumer may constitute an unfair trade practice under Insurance Law Article 24.
The letter does not refer to claims adjudication explicitly, but it does in principle, and I am not extrapolating: since the letter came out, I have spoken with a number of “InsurTech” companies that confirmed they will use this data for claims as well as underwriting. And this is not confined to life insurance. Personal insurance, such as auto and homeowners, is similarly affected by these practices.
According to the Director of the New Mexico Department of Insurance, there is an increase in personal lines (auto and homeowners) rate filings that use machine learning algorithms that are difficult for regulators to dissect, but it is clear the filers are using information beyond what the statutes provide for.
She did say that they forbid insurers from using credit reports, because they unfairly discriminate against certain protected classes:
New Mexico is a poor state, and auto insurance is expensive, mandatory, necessary for making a living and is in actuality, a regressive tax. Our first responsibility is to ensure that these rates are fair, and it’s getting more difficult to do.
The initial enthusiasm for big data to create all kinds of new advantages is now running into ethical questions about what is acceptable to do to human beings.