Explainable AI for Loan Approval Decisions in FinTech Platforms
Abstract
The integration of artificial intelligence (AI) into financial technology (FinTech) has dramatically transformed the loan approval landscape by enabling automated, real-time decision-making systems. However, the adoption of complex models such as deep neural networks has introduced significant challenges concerning interpretability, fairness, and compliance. This paper proposes a comprehensive framework that combines state-of-the-art prediction models with explainable AI (XAI) tools, including SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-agnostic Explanations), to ensure transparency in algorithmic decisions. We evaluate the proposed system on both public and proprietary credit datasets, analyzing performance trade-offs between accuracy, interpretability, and fairness. Results demonstrate that with minimal sacrifice in predictive power, the framework significantly enhances model transparency and regulatory alignment. This study provides both theoretical foundations and practical guidance for implementing XAI in real-world FinTech loan systems.
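To illustrate the additive-attribution idea behind SHAP mentioned in the abstract, the sketch below computes exact Shapley values for a toy loan-scoring model by enumerating feature subsets. The model, feature names, coefficients, and baseline values are all hypothetical, chosen only to show the mechanics; real systems use libraries such as `shap` with trained models:

```python
from itertools import combinations
from math import factorial

# Hypothetical linear loan-scoring model (illustrative only; not from the paper).
def score(income, credit_len, debt_ratio):
    return 0.5 * income + 0.3 * credit_len - 0.4 * debt_ratio

# Assumed dataset means, used as the baseline for "absent" features.
BASELINE = {"income": 4.0, "credit_len": 2.0, "debt_ratio": 1.0}

def value(applicant, present):
    """Model output with features outside `present` replaced by baseline values."""
    merged = {k: (applicant[k] if k in present else BASELINE[k]) for k in BASELINE}
    return score(merged["income"], merged["credit_len"], merged["debt_ratio"])

def shapley_values(applicant):
    """Exact Shapley values: weighted marginal contribution over all subsets."""
    features = list(BASELINE)
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for r in range(n):
            for subset in combinations(others, r):
                s = set(subset)
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                total += weight * (value(applicant, s | {f}) - value(applicant, s))
        phi[f] = total
    return phi

applicant = {"income": 6.0, "credit_len": 5.0, "debt_ratio": 2.0}
phi = shapley_values(applicant)
# Efficiency property: attributions sum to score(applicant) - score(baseline).
```

For a linear model the Shapley value of each feature reduces to its coefficient times the feature's deviation from the baseline, which makes the exact enumeration easy to verify by hand; the same attribution contract is what SHAP approximates efficiently for complex models such as the deep networks discussed here.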
Article Details

This work is licensed under a Creative Commons Attribution 4.0 International License.
Mind forge Academia also operates under the Creative Commons License CC-BY 4.0. This allows you to copy and redistribute the material in any medium or format for any purpose, even commercially, provided that you give appropriate attribution.