Ensuring fair use of AI techniques and outputs by actuaries - questions to be answered

Neil Raden, September 26, 2023
Summary:
Critiquing a new set of guidance around AI, aimed at actuaries


The insurance business affects the economy and our everyday lives, though it is less fashionable and trendy than many other industry sectors. Still, those sectors would not exist without it. In particular, every innovation depends on the insurance industry, which provides a safety net and mitigates risk from loss of life, health, property and liability. Things aren't built, products aren't released, and risks aren't taken without insurance. Insurance practices must therefore be aligned with this broader purpose, not solely with profit and solvency.

The work of actuaries is central to the operation of all phases of insurance. Professional certification of actuaries is a long and arduous education process and an intense and challenging series of exams spanning years administered by actuarial societies worldwide. The Code of Professional Conduct is part and parcel of that education, including requirements for continuing education. Since the work of actuaries involves, among many other functions, mathematical dexterity and the development of models, it is only natural to understand the expanding role of Data Science and Artificial Intelligence (AI) in their practice.

For example, the Institute and Faculty of Actuaries (IFoA), the professional body in the UK dedicated to educating, developing and regulating actuaries based both in the UK and internationally, recently published “Risk Alert: The development and use of Artificial Intelligence (AI) techniques and outputs by actuaries.” The document invites some comments, which I have included below.

In the section “Considerations for all members,” the Risk Alert states:

It is important that there is ethical and transparent use of data and models, balancing consumer and commercial outcomes.

  • The word “balancing” is disappointing. It feels indecisive where a firm stand is needed, and it lacks boldness as the opening “consideration” of a serious issue. Regarding “ethical and transparent use,” perhaps “alignment” would be a better approach.
  • Rather than a balance, employing AI to accelerate and strengthen the business may further entrench conservative social structures and institutions.
  • The profession should take a historical perspective of its role and practices.

There may be bias in the underlying data, with steps required to mitigate this. Data science techniques may involve larger and more complex datasets, potentially exacerbating this risk.

  • Focusing solely on the development and deployment stages of the AI life cycle overlooks problems that occur during the earlier stages of conceptualization, research, and design (from “Why Are We Failing at the Ethics of AI?”, Carnegie Council for Ethics in International Affairs). Practitioners also need to comprehend when, and if, an AI system operates at the level of maturity required to avoid failure in complex adaptive systems.
  • The treatment of concepts such as fairness and bias is frequently linked to notions of neutrality and objectivity - the idea that a “purely objective” dataset exists, or a “neutral” representation of people, groups, and the social world independent of the observer/modeler. This is the central fallacy of fairness and bias work: “audits” are conducted computationally, as if neutrality could be measured from inside the data.
  • The world is structurally biased - there are inequities to the systemic disadvantage of one social group compared to other groups with whom they coexist - which makes for biased data. Observation is a process: when we create data, we choose what to look for, and every automated system encodes a value judgment. Accepting training data as given assumes structural bias is absent, and uncritically replicating that data would be unethical. The sketch after this list shows what an audit of the data itself might look like.
  • Different value judgments can require contradictory fairness properties, each leading to different societal outcomes. Companies must document data collection processes, worldviews, and value assumptions. Value decisions must come from domain experts and affected populations; data scientists should listen to them to build values that lead to justice. 
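To make the point about biased source data concrete, here is a minimal sketch of the kind of base-rate audit a team could run before any model is trained. It is plain Python, and the groups, counts, and approval rates are invented for illustration.

    # Hypothetical historical underwriting decisions, grouped by a
    # demographic attribute. All counts and rates are invented.
    history = {
        "group_a": {"applications": 1000, "approved": 700},
        "group_b": {"applications": 1000, "approved": 400},
    }

    rates = {g: d["approved"] / d["applications"] for g, d in history.items()}

    for group, rate in rates.items():
        print(f"{group}: historical approval rate {rate:.0%}")
    print(f"base-rate gap: {max(rates.values()) - min(rates.values()):.0%}")

    # Any model that minimizes error on these labels will tend to
    # reproduce the 30-point gap. Accepting the data "as given" quietly
    # encodes the value judgment that the historical gap was legitimate.

Nothing in the code proves the gap is unjust; that judgment belongs to domain experts and affected populations, which is precisely the point.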

Care is needed to ensure that the results of any AI work do not lead to inappropriate consumer outcomes, for example unfair pricing or limited access to necessary financial services products.

  • There are many mathematical models for fairness; the problem is that there is no consensus on the meaning of “fair” in this context. It is possible to define fairness measures mathematically and develop algorithms to weigh whether a model exhibits given levels of fairness. However, without a settled definition of fairness, any mathematical estimation is pure speculation. The sketch below shows two standard definitions delivering opposite verdicts on the same predictions.
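As an illustration, here is a minimal sketch in plain Python with invented data. Demographic parity compares selection rates across groups; equalized odds compares error rates. When base rates differ, satisfying one can force violating the other.

    # Invented (outcome, prediction) pairs for two groups.
    # Group A: 50 of 100 truly positive; the model flags exactly those 50.
    # Group B: 20 of 100 truly positive; the model flags those 20 plus 30
    # negatives, so both groups have the same 50% selection rate.
    group_a = [(1, 1)] * 50 + [(0, 0)] * 50
    group_b = [(1, 1)] * 20 + [(0, 1)] * 30 + [(0, 0)] * 50

    def selection_rate(pairs):
        return sum(p for _, p in pairs) / len(pairs)

    def false_positive_rate(pairs):
        negatives = [p for y, p in pairs if y == 0]
        return sum(negatives) / len(negatives)

    for name, g in (("A", group_a), ("B", group_b)):
        print(f"group {name}: selection {selection_rate(g):.0%}, "
              f"false positives {false_positive_rate(g):.0%}")

    # Both groups are selected at 50%, so demographic parity holds - but
    # the false-positive rate is 0% for A and 38% for B, so equalized
    # odds fails. Choosing which metric to satisfy is a value judgment,
    # not a mathematical one.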

There is a need to ensure that the use of complex data and modelling techniques does not inadvertently breach any existing regulatory requirements, particularly in relation to protected characteristics and data privacy.

  • I would go further than protected characteristics and privacy. The potential for AI to harm entities other than people is already evident. 
  • Existing assessments of model risk and model governance in place to mitigate this may need to be reviewed, to ensure they remain sufficiently robust in the context of emerging AI models. This includes ensuring accessible documentation of assumptions and methodology.
  • There may be a range of potential models and tools to choose from, which brings additional challenges as to the most appropriate choice for a given problem.

Explanations and validation of complex data and modelling techniques are likely to be more challenging than for more traditional actuarial models, including where results from AI models are not necessarily reproducible.

  • AI systems are increasingly capable of solving complex problems but tend to be opaque - the so-called “black box” problem. It is not easy to look inside them to see what they do and how they work, and they are not deterministic like classic models such as rules engines, procedural code or decision trees. Unreproducible results cannot be relied upon on their own, unless one believes we have reached “the end of science,” where the truth is simply in the data. The sketch below shows how little it takes for individual decisions to change between runs.
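Here is a small sketch of the reproducibility point, assuming scikit-learn is available; the dataset is synthetic and the model choice is mine, not the IFoA's. Two training runs that differ only in their random seed will disagree on individual cases even when aggregate accuracy looks similar.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic data standing in for a pricing or claims dataset.
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Two runs of the "same" model, differing only in the random seed.
    run1, run2 = (
        RandomForestClassifier(n_estimators=10, random_state=seed).fit(X_tr, y_tr)
        for seed in (1, 2)
    )
    flipped = (run1.predict(X_te) != run2.predict(X_te)).mean()
    print(f"decisions that flip between the two runs: {flipped:.1%}")

    # A rules engine or a single deterministic decision tree would give
    # identical answers twice; here some cases change outcome by chance.

With a classic deterministic model the flip rate is zero by construction; with stochastic training it rarely is.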

Communication to users is an essential element of the process, both in relation to explaining assumptions, methods and results, and the risks and limitations associated with data and models.

  • The nascent field of Explainable AI (XAI) is just beginning to grapple with this problem; the sketch below shows one of its simpler, model-agnostic techniques.
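Permutation importance is one widely used XAI technique: shuffle each feature in turn and measure how much the model's score drops. A minimal sketch, assuming scikit-learn and illustrative synthetic data:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance

    # Synthetic stand-in for an underwriting dataset.
    X, y = make_classification(n_samples=400, n_features=6,
                               n_informative=3, random_state=0)
    model = GradientBoostingClassifier(random_state=0).fit(X, y)

    # Shuffle each feature and measure the drop in accuracy: a crude but
    # model-agnostic answer to "what is this black box relying on?"
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i, imp in enumerate(result.importances_mean):
        print(f"feature_{i}: mean importance {imp:.3f}")

Note the limits: this explains what the model relies on globally; it does not explain why any single policyholder received the decision they did.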

Independent challenge and constructive scepticism are important (e.g., human-in-the-loop safeguards), and actuaries should speak up where they have concerns about AI implementations.

  • So long as human-in-the-loop is not simply a moral crumple zone, where issues can always land on an identified (and often blameless) party. The sketch after this list shows one way to keep the model, not just the human, in the audit trail.
  • Speaking up is, and should continue to be, a core precept of the profession.
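Here is a minimal sketch, in plain Python with invented names and thresholds, of a human-in-the-loop routing rule that records who (or what) actually decided each case:

    from dataclasses import dataclass

    CONFIDENCE_FLOOR = 0.85  # illustrative threshold, not a recommendation

    @dataclass
    class Decision:
        case_id: str
        model_score: float
        decided_by: str  # "model" or a reviewer queue - accountability is explicit

    def route(case_id: str, score: float) -> Decision:
        """Auto-decide only when the model is confident; otherwise escalate."""
        if score >= CONFIDENCE_FLOOR or score <= 1 - CONFIDENCE_FLOOR:
            return Decision(case_id, score, decided_by="model")
        # The escalated record keeps the model's score attached, so the
        # reviewer is not a crumple zone absorbing blame for model uncertainty.
        return Decision(case_id, score, decided_by="reviewer_queue")

    print(route("claim-001", 0.97))  # decided_by="model"
    print(route("claim-002", 0.55))  # decided_by="reviewer_queue"

The design point is that the audit trail names the model as a decision-maker, rather than laundering every outcome through a human signature.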

Care should be taken when using third-party large language tools or models, in relation to veracity of output and privacy and copyright risks.

  • And especially third-party data. A sketch of one practical safeguard follows.
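On the privacy side, one practical safeguard is to redact likely identifiers before any text leaves the firm for a third-party model. A minimal sketch in plain Python; the patterns are illustrative only, and real redaction needs a vetted PII library:

    import re

    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "NINO": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),  # UK National Insurance no.
        "POLICY": re.compile(r"\bPOL-\d{6,}\b"),        # hypothetical in-house format
    }

    def redact(text: str) -> str:
        """Replace likely identifiers before text crosses the firm's boundary."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    print(redact("Claimant jane.doe@example.com, policy POL-123456, NINO QQ123456C"))
    # -> "Claimant [EMAIL], policy [POLICY], NINO [NINO]"

Redaction addresses only the privacy risk; veracity and copyright require their own controls.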

Failure to address these points (and others not covered here) could result in significant reputational risks for actuaries, their firms, and the wider profession.

  • Indeed.

My take

Actuarial societies and organizations have burned professionalism and integrity into their syllabi and practice guidelines, and they take them seriously. Nevertheless, other elements can override the ethical process, such as senior management and the work environment. Several pressures come into play:

  • The organization pushes development that may not be ethical.
  • You adopt an “it's only the math” excuse, or “that's how we do it.”
  • You engage in fairwashing - concocting misleading explanations for the results.
  • You don't know that you're doing these things.
  • The whole process is complicated and opaque in operation.
  • The organization is not accustomed to introspection before embarking on a solution.
  • There is an “aching desire” to do something cool that clouds your judgment.

The above Risk Alert is an adequate foundational set of guidelines that needs to be elaborated thoroughly. I am aware of numerous efforts in progress to do that.

 
