
Thoughts on AI biases in insurance

We can delegate responsibility to AI, but never accountability. What does that mean in an industry where what we are supposed to do is discriminate fairly?


A couple of weeks ago I was on a panel about modernizing insurance at InsurTech New England. We took some really smart audience questions about AI that have me thinking.

First rule of engagement: AI's not in charge, humans are.

We can delegate responsibility to AI, but never accountability.

What does that mean in an industry where what we are supposed to do is discriminate fairly? After all, insurance companies are really good at separating people and companies into risk pools based on characteristics that lead to loss (and hopefully only characteristics leading to loss).

I've spent my career working on insurance products. We've always checked our input data for biases, and our outputs for unexpected biases too. We catch a lot before we launch. Regulators catch some as well, and adjust the playing field accordingly. Other skews, unfortunately, we only see once we start collecting data from a newly launched product.

The thing is, human biases are predictable. Human brains work how they work - we all live in one. We can scan for our known biases in how questions are asked and answered, and how data sources collect and report information. We can comb through outputs looking for unexpected results and correlations with demographics we didn't intend and that are often ghosts in the model from skews in the inputs.
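To make that "combing through outputs" concrete, here's a minimal sketch of one such check: comparing a model's average output across groups defined by a demographic field the model was never meant to use. The data, group labels, and tolerance threshold are all illustrative assumptions, not anyone's actual methodology.

```python
# A toy output-bias check: compare mean predicted premiums across a
# demographic grouping we did NOT feed the model. A large gap suggests
# a "ghost in the model" inherited from skewed inputs.
from collections import defaultdict

def group_disparity(records, group_key, value_key):
    """Return each group's mean value and the max/min ratio across groups."""
    sums, counts = defaultdict(float), defaultdict(int)
    for r in records:
        sums[r[group_key]] += r[value_key]
        counts[r[group_key]] += 1
    means = {g: sums[g] / counts[g] for g in sums}
    ratio = max(means.values()) / min(means.values())
    return means, ratio

# Hypothetical model outputs joined with an unmodeled demographic field.
outputs = [
    {"zip_cluster": "A", "premium": 1200},
    {"zip_cluster": "A", "premium": 1150},
    {"zip_cluster": "B", "premium": 1900},
    {"zip_cluster": "B", "premium": 1850},
]

means, ratio = group_disparity(outputs, "zip_cluster", "premium")
if ratio > 1.25:  # illustrative tolerance, not a regulatory standard
    print(f"Flag for review: group means {means}, ratio {ratio:.2f}")
```

In practice this kind of check runs over many candidate groupings at once, precisely because the worrying correlations are the ones nobody thought to look for.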

AI biases are not so predictable.

They come in part from the data the model was trained on, which may not be our own data, and may not be assessable by outsiders. AI's biases also derive from innumerable statistical correlations that just aren't how a human brain calculates. That's why AI is so powerful, but it also means results that can't be intuited by us, with our messy wet brains.

In this context, how do we check input and output for biases before we do harm to our customers (and our companies, and the insurance markets)? And how in the world does a regulator do this?

We have to move forward with AI - not to be too over the top, but better insurance analytics saves lives and heartache and money. Always has, always will. And as a leader starting to deploy AI, I can't drop my accountability, ever. And that's a little terrifying in this moment, when we're starting to launch analytical power that we scarcely understand.

Still, the humans are in charge. Or at least we think we are...

This is general information based on questions our customers ask us. It may not be right for your specific situation. You should get some advice from a licensed insurance agent (like us!) before you make a decision on your own insurance.