Predictive models in insurance are often built on data influenced by human decisions, structural inequities, and representation bias. Simply removing sensitive attributes does not eliminate bias, because other variables can act as proxies for them. This paper outlines the legal and business imperatives for fairness in actuarial practice and introduces core concepts of group fairness. It also presents practical bias-mitigation methodologies aimed at reducing legal exposure while maintaining predictive performance.
We examine:
- Reputational and legal risks of unjustified discrimination
- Main notions of (group) fairness and bias diagnosis (see the diagnostic sketch after this list)
- Main techniques of bias remediation
- Applications of fairness assessment and remediation across actuarial use cases
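
To make the group-fairness diagnostics referenced above concrete, the sketch below computes a demographic (statistical) parity gap on synthetic data. It is a minimal illustration rather than the paper's methodology: the column names `protected` and `approved`, the synthetic data, and the 0.8 ratio benchmark are assumptions chosen for the example.

```python
import numpy as np
import pandas as pd

# Illustrative synthetic data: a binary protected attribute and a binary
# model decision (e.g., policy approved / not approved).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "protected": rng.integers(0, 2, size=1_000),  # 0 = group A, 1 = group B
    "approved": rng.integers(0, 2, size=1_000),   # model decision
})

# Demographic (statistical) parity compares decision rates across groups.
rates = df.groupby("protected")["approved"].mean()
dp_difference = rates.max() - rates.min()  # 0 means parity on this metric
dp_ratio = rates.min() / rates.max()       # often compared to a threshold such as 0.8

print("Approval rate by group:")
print(rates)
print(f"Demographic parity difference: {dp_difference:.3f}")
print(f"Demographic parity ratio:      {dp_ratio:.3f}")
```

The same gap-style comparison extends to other group-fairness notions, for example equal opportunity, which compares true-positive rates rather than raw decision rates across groups.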