Explainable AI for Loan Approval Decisions in FinTech Platforms


Elowen Hartley
Li Kevin

Abstract

The integration of artificial intelligence (AI) into financial technology (FinTech) has dramatically transformed the loan approval landscape by enabling automated, real-time decision-making systems. However, the adoption of complex models such as deep neural networks has introduced significant challenges concerning interpretability, fairness, and compliance. This paper proposes a comprehensive framework that combines state-of-the-art prediction models with explainable AI (XAI) tools, including SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), to ensure transparency in algorithmic decisions. We evaluate the proposed system on both public and proprietary credit datasets, analyzing performance trade-offs between accuracy, interpretability, and fairness. Results demonstrate that, with minimal sacrifice in predictive power, the framework significantly enhances model transparency and regulatory alignment. This study provides both theoretical foundations and practical guidance for implementing XAI in real-world FinTech loan systems.
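
As a concrete illustration of how SHAP and LIME can be layered over a loan approval classifier, the sketch below trains a gradient-boosted model on synthetic data and produces global SHAP attributions alongside a local LIME explanation for a single applicant. The dataset, model choice, and feature names are illustrative assumptions for this sketch, not the datasets or models evaluated in the paper.

# Illustrative sketch: explaining a credit-approval classifier with SHAP and LIME.
# The synthetic data, model, and feature names below are placeholders, not the
# paper's actual experimental setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
import shap
from lime.lime_tabular import LimeTabularExplainer

# Synthetic stand-in for a credit dataset (approve = 1, deny = 0).
feature_names = ["income", "debt_ratio", "credit_history_len",
                 "num_inquiries", "loan_amount"]
X, y = make_classification(n_samples=2000, n_features=5, n_informative=4,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train the underlying prediction model.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Global feature attributions with SHAP (TreeExplainer suits tree ensembles).
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test)
print("Mean |SHAP| per feature:",
      dict(zip(feature_names, np.abs(shap_values).mean(axis=0).round(3))))

# Local, model-agnostic explanation of a single decision with LIME.
lime_explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
                                      class_names=["deny", "approve"],
                                      mode="classification")
explanation = lime_explainer.explain_instance(X_test[0], model.predict_proba,
                                              num_features=5)
print(explanation.as_list())

In practice, SHAP's Shapley-value attributions are well suited to global, audit-style reporting across a portfolio of decisions, while LIME's per-instance surrogate explanations map naturally onto the adverse-action reasons that must accompany an individual loan denial.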

Article Details

How to Cite
Hartley, E., & Kevin, L. (2025). Explainable AI for Loan Approval Decisions in FinTech Platforms. Journal of Computer Science and Software Applications, 5(6). Retrieved from https://www.mfacademia.org/index.php/jcssa/article/view/231