Ensuring fair use of AI techniques and outputs by actuaries - questions to be answered
Critiquing a new set of guidance around AI, aimed at actuaries
Based in Santa Fe, NM, Neil Raden is a mathematician (algebraic topology), founder of a management consulting firm, consultant to large and complex projects internationally, an industry analyst, and a widely published author and speaker. His early background was in Property and Casualty actuarial R&D, and he remains active in consulting to the insurance industry. He founded Hired Brains Research in 1985 to provide thought leadership, context, advisory consulting and implementation services in Data Architecture, Analytics, AI, Data Science and organizational change for clients worldwide across many industries. His current portfolio includes data warehouse modernization, AI Last Mile and Complex Supply Chain analytics. Neil is a recognized authority on AI Ethics, the author of more than fifty articles on the subject on Diginomica, and the author of the foundational report for the Society of Actuaries, “Ethical Use of Artificial Intelligence for Actuaries.” With James Taylor, he co-authored the first book on Decision Management, “Smart (Enough) Systems.” Clients welcome his practical and valuable advice and counsel. He welcomes your comments at [email protected].
President Biden has declared a national emergency, but what will the tech sector fallout be?
Enterprise AI is the surging topic, or, in some cases, the hot potato. A recent CIO forum raised critical issues - here are my responses.
The problem of technical debt is well known, but it's time to put ethical debt on the list of the top software development risk factors. AI projects bring the issue of ethical debt to a head, where the temptation to move fast comes at a potentially high cost.
What is the impact of DEI on the life insurance and financial services industry? Does AI bring new risks due to bias in training data or outputs? What about generative AI? These are some of the top questions I answered for the LIMRA/LOMA/SOA 2023 Supplemental Health, DI & LTC Conference.
The NAIC's highly anticipated Model Bulletin on the use of algorithms, predictive models and AI came out in July, but what are the implications? Though the bulletin is still in draft form, those with a stake in the insurance industry should be prepared.
Assessing generative AI risks may seem like catching a moving train from behind. New thinking about generative AI ethics can help to reduce the risks, not the least of which is handling data correctly.
Healthcare transformation is an imperative, but do hospital rankings make a difference? U.S. News and World Report has refined its hospital rankings around more quantitative - and supposedly objective - measures. Here's my review.
The complexities of modern energy management combine grid management with the incorporation of renewable energy. Managing energy at scale in a warming climate is by no means a simple mandate, but the real-time demands of these operations are well-suited for AI.
Messy data brings a dose of reality to analytics projects everywhere - it's a problem for AI projects as well. But can AI allow us to turn the messy data problem around? Let's take a deeper dive.
Is photonic quantum computing a breakthrough, or a high tech diversion? We're going to find out soon - here's why the momentum for photonic approaches to quantum is increasing.
As we attempt to build AI models that achieve fairness, approaches to protecting marginalized groups, such as differential and disparate impact analysis, are considered. However, when you dig into the statistics, these methods fall short.
Digital transformation is sure to run into obstacles. One under-recognized pitfall? The fear of leaving something behind. A vital field lesson revealed the problem - and ways to address it.