Mitigating Algorithmic Bias in AI-Driven Decision Systems for Financial Services

Abstract:

Artificial intelligence (AI) and machine learning (ML) systems have become integral to financial services, underpinning applications such as credit scoring, lending, fraud detection, and risk management. While these technologies enhance efficiency and scalability, they also introduce new forms of algorithmic bias that threaten fairness, accountability, and regulatory compliance. This paper presents a systematic literature review of scholarly and industry studies published between 2015 and 2025 to examine how algorithmic bias in AI-driven financial systems is conceptualised, mitigated, and governed. Using NVivo-assisted thematic and co-occurrence analysis, five dominant themes were identified: the nature of data and sources of bias, fairness metrics and evaluation, mitigation methods, ethical and regulatory governance, and organisational implementation. Findings reveal a strong concentration of attention on data-centric and technical mitigation strategies, with comparatively limited focus on the integration of ethical frameworks into organisational practice. Cross-keyword co-occurrence analysis further highlights a socio-technical divide: "bias," "data," and "fairness" form dense clusters, while "ethical" and decision-making dimensions remain marginal. In response, this study proposes a socio-technical bias mitigation framework that positions fairness as an emergent property of interactions among technical systems, organisational structures, and regulatory environments. The paper concludes that achieving fairness in financial AI requires not only methodological innovation but also organisational alignment, cross-disciplinary collaboration, and continuous governance to bridge the gap between ethical principles and operational reality.
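To make the "fairness metrics and evaluation" theme concrete, the sketch below computes two widely used group-fairness measures for a binary credit-approval outcome: the demographic parity difference and the disparate impact ratio. The function names, toy data, and group labels are illustrative assumptions, not drawn from the reviewed studies.

```python
# Illustrative sketch of two group-fairness metrics for a binary
# credit-approval outcome. Group 1 is treated as the protected group.

def approval_rate(approved, group, g):
    """Share of applicants in group g whose outcome is 'approved' (1)."""
    outcomes = [a for a, grp in zip(approved, group) if grp == g]
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(approved, group):
    """Approval-rate gap: protected group (1) minus reference group (0)."""
    return approval_rate(approved, group, 1) - approval_rate(approved, group, 0)

def disparate_impact_ratio(approved, group):
    """Ratio of protected-group to reference-group approval rates.
    Values below 0.8 are commonly flagged under the 'four-fifths rule'."""
    return approval_rate(approved, group, 1) / approval_rate(approved, group, 0)

# Toy data: 1 = approved; group 0 = reference, group 1 = protected.
approved = [1, 0, 1, 1, 0, 1, 1, 0]
group    = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_difference(approved, group))  # negative gap: protected group approved less often
print(disparate_impact_ratio(approved, group))         # below 0.8 would flag potential disparate impact
```

A metric like this is only an evaluation signal; as the review argues, mitigation also depends on organisational and regulatory context rather than on the metric alone.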