Mastercard: Measuring Fairness in Financial Transaction Machine Learning Models

Mastercard employs sophisticated ML models to predict cardholder behaviour, optimising revenue and engagement through targeted financial offerings. However, ensuring these models operate fairly across diverse demographic groups presents significant ethical and technical challenges. In this project, we delve into the intersectional fairness of financial transaction ML models, specifically addressing biases related to gender, ethnicity, and age. Using a rich synthetic dataset, we evaluate complex multi-label predictions for customer spending across various industries. The analysis reveals substantial intersectional biases, uncovering disparities in how different demographic groups are predicted to behave. By integrating innovative fairness measurement methodologies, including classifier two-sample tests and fairness tensors, we rigorously assess bias in model predictions. Moreover, advanced mitigation techniques, such as adversarial debiasing and exponentiated gradient reduction, demonstrate potential pathways to reduce bias while highlighting the fairness-accuracy trade-offs inherent in ML systems. This study underscores the critical need for multi-dimensional fairness considerations in AI-driven financial services, paving the way for more inclusive and equitable technological practices.
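To give a flavour of one of the measurement methods mentioned above, the sketch below illustrates the general idea of a classifier two-sample test (C2ST): train an auxiliary classifier to predict which demographic group a model's outputs came from, and treat test accuracy close to chance (0.5) as evidence that the output distributions are similar across groups. All data, names, and parameters here are illustrative and are not taken from the project itself.

```python
# Illustrative classifier two-sample test (C2ST) for group-level bias.
# Idea: if a classifier can distinguish group A's prediction vectors from
# group B's better than chance, the model's outputs differ by group.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for a model's multi-label spending predictions
# (e.g. scores across 5 industries) for two demographic groups; group B
# is given a small distributional shift to simulate bias.
preds_a = rng.normal(0.0, 1.0, size=(500, 5))
preds_b = rng.normal(0.3, 1.0, size=(500, 5))

X = np.vstack([preds_a, preds_b])
y = np.concatenate([np.zeros(500), np.ones(500)])  # group membership labels

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)
clf = LogisticRegression().fit(X_tr, y_tr)
c2st_acc = accuracy_score(y_te, clf.predict(X_te))

# Accuracy well above 0.5 means the groups' prediction distributions
# are distinguishable, flagging potential bias to investigate further.
print(f"C2ST accuracy: {c2st_acc:.3f}")
```

In practice the held-out accuracy would be compared against a null distribution (e.g. via permutation of the group labels) to decide whether the departure from 0.5 is statistically significant.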

The paper is now published and can be accessed here.

Joint work with Mastercard and The Alan Turing Institute DSG, 2024