AI in insurance: a focus on fairness – Earnix (2024)

At its core, fairness in machine learning means implementing algorithms that make decisions impartially, writes Luba Orlovsky, principal researcher at Earnix, a technology firm focused on data-driven pricing.

On March 21, 2024, the United Nations General Assembly adopted a significant resolution aimed at harnessing artificial intelligence (AI) for global benefit and accelerating progress towards sustainable development goals outlined in the 2030 Agenda.

The resolution, titled “Seizing the opportunities of safe, secure, and trustworthy artificial intelligence systems for sustainable development”, was passed with an emphasis on the imperative of addressing racial discrimination worldwide.

This is just one example of the significant ongoing high-level discussions on leveraging technology for positive global impact, and how closely key decision-makers are following developments.


We all share responsibility here: as AI continues its meteoric rise across industries, its application in insurance has become a focal point of both innovation and scrutiny.

The fundamental challenge lies in harnessing the power of AI to optimise end-to-end insurer operations – and the models that drive them – while upholding principles of fairness and equity.

Recognising this balance is critical, which is why the industry and its technology providers must take proactive steps towards sustaining and safeguarding best practice around the ethical, fair and explainable application of AI in insurance.

With the increasing complexity of AI models and growing global regulatory scrutiny, insurers face challenges in managing AI transformations while remaining compliant with regulations such as the EU’s General Data Protection Regulation (GDPR), the Insurance Distribution Directive (IDD) in Europe, and Consumer Duty legislation in the UK.

As AI becomes further ingrained in the corporate fabric of global insurance, AI governance is essential to ensure responsible and ethical development and deployment, to maintain alignment with regulations, and to preserve transparency and fairness in models.

Maintaining customer trust

It is vital that insurers can transparently explain AI decisions and comply with regulatory requirements to avoid potential legal consequences and maintain customer trust. It’s essential for insurers to prioritise AI governance to stay ahead of regulatory changes and leverage AI’s potential to deliver value to customers and their business.

Insurers collect vast amounts of data, including customer demographics, claims history, vehicle information, property details, and more. AI algorithms, such as machine learning models, are deployed to analyse this data to identify patterns and correlations that humans might overlook. By understanding these patterns, insurers can better predict the likelihood of a claim and adjust pricing accordingly.

AI is seen by many as a crucial tool to assess risk more accurately by considering a wider range of variables and factors. For example, in auto insurance, traditional risk factors might include age, driving history, vehicle make and model. AI algorithms can incorporate additional data points such as driving behaviour (captured through telematics devices), weather conditions, and even road infrastructure quality to provide a more nuanced risk assessment. AI enables insurers to adjust models dynamically in response to changing risk factors and market conditions.
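To make this concrete, here is a minimal sketch of the kind of claim-likelihood modelling described above, written in Python on synthetic data. The feature names (hard-braking rate, night-driving share, driver and vehicle age) are hypothetical stand-ins for the telematics and traditional rating variables mentioned; this is not a description of any insurer’s or Earnix’s production pricing model.

```python
# A minimal sketch on synthetic data, combining traditional rating variables
# with hypothetical telematics-style signals to predict claim likelihood.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical features: traditional (driver age, vehicle age) plus
# telematics-style signals (hard-braking rate, share of night driving).
driver_age = rng.integers(18, 80, n)
vehicle_age = rng.integers(0, 15, n)
hard_braking_rate = rng.gamma(2.0, 1.5, n)    # events per 100 miles
night_driving_share = rng.beta(2, 8, n)       # fraction of miles driven at night

# Synthetic claim outcome: riskier behaviour raises claim probability.
logit = (-3.0 + 0.3 * hard_braking_rate + 2.0 * night_driving_share
         - 0.01 * (driver_age - 40))
claimed = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([driver_age, vehicle_age, hard_braking_rate, night_driving_share])
X_train, X_test, y_train, y_test = train_test_split(X, claimed, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = model.predict_proba(X_test)[:, 1]
print(f"Hold-out AUC: {roc_auc_score(y_test, pred):.3f}")
```

In practice the same pipeline would be retrained or recalibrated as new data arrives, which is what allows models to adjust dynamically to changing risk factors and market conditions.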

Methods are evolving

This is why fairness in AI isn’t just a buzzword – it’s a moral imperative. As AI algorithms increasingly influence decisions in insurance, from risk assessment to pricing, the potential for bias and discrimination becomes a pressing concern. The insurance industry must commit to addressing these concerns head-on. What is needed is a compass guiding insurers towards ethical AI best practices.

At its core, fairness in machine learning (ML) means implementing algorithms that make decisions impartially, without prejudice towards sensitive and protected attributes such as gender, race, or age. This commitment to social responsibility extends beyond regulatory compliance – it’s about creating equal opportunities and outcomes for all individuals.

How do we measure fairness?

Fairness in decision-making processes can be assessed across various dimensions, which can be applied methodically to models and decision-making algorithms. Demographic parity, for instance, emphasises the independence of decisions from sensitive attributes. Equal opportunity aims to achieve equal true positive rates among diverse groups, while predictive equality seeks to balance false positive rates across these groups.
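In formal terms (using standard notation rather than anything from the article itself), with $\hat{Y}$ the model’s decision, $Y$ the true outcome, and $A$ a sensitive attribute taking values $a$ and $b$, these three criteria can be written as:

```latex
% Standard group-fairness criteria: \hat{Y} is the decision, Y the outcome,
% and A the sensitive attribute taking values a and b.
\begin{align*}
\text{Demographic parity:}  &\quad P(\hat{Y}=1 \mid A=a) = P(\hat{Y}=1 \mid A=b) \\
\text{Equal opportunity:}   &\quad P(\hat{Y}=1 \mid Y=1, A=a) = P(\hat{Y}=1 \mid Y=1, A=b) \\
\text{Predictive equality:} &\quad P(\hat{Y}=1 \mid Y=0, A=a) = P(\hat{Y}=1 \mid Y=0, A=b)
\end{align*}
```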

Equalised odds integrates both equal opportunity and predictive equality to strive for parity in both true and false positive rates. Individual fairness, meanwhile, focuses on comparable individuals receiving similar predictions. Finally, calibration concerns the accuracy of predicted probabilities across different groups. Together, these dimensions form a comprehensive framework for assessing fairness in decision-making.
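As a minimal illustration of how these group-level criteria can be measured in practice (a sketch on hypothetical arrays, not any vendor’s implementation), the Python snippet below computes the demographic-parity, equal-opportunity and predictive-equality gaps between two groups; equalised odds holds approximately when both rate gaps are close to zero.

```python
# A minimal sketch: group-fairness gaps between two groups of a sensitive
# attribute, computed from hypothetical binary decisions and outcomes.
import numpy as np

def fairness_gaps(y_true, y_pred, group):
    """Demographic-parity, equal-opportunity (TPR) and predictive-equality
    (FPR) gaps between the two groups present in `group`."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates = {}
    for g in np.unique(group):
        m = group == g
        rates[g] = {
            "selection": y_pred[m].mean(),              # P(Y_hat=1 | A=g)
            "tpr": y_pred[m & (y_true == 1)].mean(),    # P(Y_hat=1 | Y=1, A=g)
            "fpr": y_pred[m & (y_true == 0)].mean(),    # P(Y_hat=1 | Y=0, A=g)
        }
    a, b = rates.values()   # assumes exactly two groups
    return {
        "demographic_parity_gap": abs(a["selection"] - b["selection"]),
        "equal_opportunity_gap": abs(a["tpr"] - b["tpr"]),
        "predictive_equality_gap": abs(a["fpr"] - b["fpr"]),
    }

# Hypothetical example data: decisions, outcomes and group labels.
rng = np.random.default_rng(1)
y_true = rng.binomial(1, 0.3, 1000)
group = rng.binomial(1, 0.5, 1000)
y_pred = rng.binomial(1, 0.25 + 0.1 * group)   # group 1 is selected more often

print(fairness_gaps(y_true, y_pred, group))
# Equalised odds holds (approximately) when both the equal-opportunity and
# predictive-equality gaps are close to zero.
```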

By offering a multifaceted approach to measuring fairness, including demographic parity, equal opportunity, and predictive equality, insurers can be empowered to prioritise and measure fairness in their AI models, and to have their models impartially assessed against best practice, from segmentation awareness to metric selection. This isn’t just about identifying disparities; it’s about taking actionable steps to address them.

In a competitive industry where pricing is paramount, ethical AI isn’t just a moral imperative – it’s a strategic advantage. Insurers who prioritise fairness in their AI models not only mitigate regulatory risks but also build trust with customers. As consumers demand more transparency and accountability from insurers, embracing ethical approaches empowers insurers to deliver on these expectations.

With regulations such as GDPR and IDD in Europe, and state and federal scrutiny in the US, insurers are increasingly in the spotlight over their AI practices. The industry requires tools to navigate these regulatory complexities, aiming to achieve fairness compliance while driving innovation, and the tech sector needs to step up to provide them.

Ultimately, fairness in AI is both a moral imperative and a strategic advantage. Insurers who prioritise it not only mitigate regulatory risks but also build trust with customers in an increasingly transparent and accountable market. At Earnix, we are taking proactive steps towards achieving this balance with an experimental module that showcases our commitment to creating a future where AI empowers, rather than discriminates. As the insurance industry embraces AI, ethical considerations must remain at the forefront.
