Abstract
With the rapid growth of digital banking, online payments, and cashless transactions, financial fraud has become a major challenge for banks and financial institutions. Fraudsters continuously develop new techniques, rendering traditional rule-based fraud detection systems increasingly ineffective. In recent years, machine learning and deep learning models have shown strong potential for identifying fraudulent transactions by learning complex patterns from large volumes of financial data. However, most of these models function as black-box systems, meaning their decisions are difficult to understand and explain. This lack of transparency creates trust issues for users and raises concerns regarding regulatory compliance in the financial sector. This study focuses on financial fraud detection using a combination of stacking ensemble learning and Explainable Artificial Intelligence (XAI) techniques. By integrating multiple high-performing machine learning models, the stacking approach improves detection accuracy and addresses challenges such as class imbalance and evolving fraud patterns. At the same time, explainability methods such as SHAP, LIME, and feature importance analysis provide clear insights into how and why a transaction is classified as fraudulent or legitimate. The study highlights that combining high accuracy with interpretability is both achievable and necessary for modern fraud detection systems. By improving transparency and trust, explainable fraud detection models can better meet real-world operational and regulatory requirements.
Introduction
The rapid growth of digital financial services has increased the risk and complexity of financial fraud, making accurate and timely fraud detection a critical challenge for financial institutions. Traditional rule-based fraud detection systems are limited in their ability to adapt to evolving fraud patterns and often produce high false-positive rates. As a result, machine learning, deep learning, and ensemble learning approaches have been widely adopted for their ability to analyze large-scale transaction data and capture complex, non-linear fraud behaviors. However, many of these high-performing models operate as black boxes, raising serious concerns about transparency, trust, and regulatory compliance.
Explainable Artificial Intelligence (XAI) has emerged as a key solution to address these issues by providing human-understandable explanations for model predictions. Integrating XAI with advanced fraud detection models allows institutions to maintain high accuracy while ensuring accountability, fairness, and user trust. The reviewed literature consistently highlights the importance of explainability in financial fraud detection, especially under strict regulatory and ethical requirements.
The literature review covers a wide range of studies on explainable machine learning, stacking ensembles, deep learning, and federated learning for fraud detection. Research shows that ensemble models, particularly stacking approaches that combine algorithms such as XGBoost, LightGBM, and CatBoost, achieve superior performance in handling class imbalance and improving detection accuracy. XAI techniques such as SHAP, LIME, permutation feature importance, and partial dependence plots are widely used to interpret both global model behavior and individual transaction decisions. While these approaches enhance transparency and analyst trust, challenges remain, including high computational cost, limited real-time scalability, explanation complexity, and a lack of standardized evaluation metrics for explainability.
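To make these interpretation techniques concrete, the sketch below shows how SHAP, permutation feature importance, and LIME are typically applied to a trained fraud classifier. It is an illustrative example rather than code from any reviewed study: the package choices (shap, lime, xgboost, scikit-learn) and the synthetic, imbalanced dataset are assumptions made purely for demonstration.

```python
# Illustrative sketch: global and local explanations for a fraud classifier.
# The packages and synthetic data below are assumptions for demonstration,
# not the setups used in the reviewed studies.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic, imbalanced "transactions" (~3% fraud) stand in for real data.
X, y = make_classification(n_samples=5000, n_features=10, weights=[0.97],
                           random_state=42)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=42)

model = XGBClassifier(n_estimators=200, max_depth=4,
                      eval_metric="logloss").fit(X_train, y_train)

# Global view 1: SHAP values summarise each feature's contribution to the
# fraud score across the whole test set.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test, feature_names=feature_names)

# Global view 2: permutation feature importance on held-out data.
perm = permutation_importance(model, X_test, y_test, n_repeats=10,
                              random_state=42)
for i in perm.importances_mean.argsort()[::-1][:5]:
    print(f"{feature_names[i]}: {perm.importances_mean[i]:.4f}")

# Local view: LIME explains a single transaction in terms of the features
# that pushed it toward the fraud class.
lime_explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
                                      class_names=["legitimate", "fraud"],
                                      mode="classification")
explanation = lime_explainer.explain_instance(X_test[0], model.predict_proba,
                                              num_features=5)
print(explanation.as_list())
```

The SHAP summary and permutation importances give a global ranking of feature influence, while the LIME output lists the feature conditions that drove one specific prediction, mirroring the global versus local distinction drawn throughout the reviewed literature.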
The proposed methodology integrates stacking ensemble learning with XAI to build an accurate, reliable, and interpretable fraud detection framework. The process includes data preprocessing, handling severe class imbalance using techniques like SMOTE, and feature selection guided by SHAP values. Multiple machine learning models are trained as base learners, and their outputs are combined using a meta-learner to produce final predictions. Model performance is evaluated using standard classification metrics, while XAI methods provide transparent explanations at both global and local levels.
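A compact sketch of such a pipeline is given below, using scikit-learn's StackingClassifier with an imbalanced-learn SMOTE step. The library choices, base learners, hyperparameters, and synthetic data are illustrative assumptions, not the exact configuration of the proposed framework.

```python
# Illustrative sketch of a SMOTE + stacking-ensemble fraud detection pipeline.
# Library and parameter choices here are assumptions for demonstration only.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic data with severe class imbalance (~0.5% fraud) stands in for
# real transaction records.
X, y = make_classification(n_samples=20000, n_features=20, weights=[0.995],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=0)

# Base learners produce out-of-fold probability estimates that the
# meta-learner (logistic regression) combines into the final prediction.
base_learners = [
    ("xgb", XGBClassifier(n_estimators=300, max_depth=4, eval_metric="logloss")),
    ("lgbm", LGBMClassifier(n_estimators=300)),
]
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000),
                           stack_method="predict_proba", cv=5)

# SMOTE resamples only the data seen during fitting; the held-out test set
# keeps its natural class distribution.
pipeline = Pipeline([("smote", SMOTE(random_state=0)), ("stack", stack)])
pipeline.fit(X_train, y_train)

# Standard classification metrics on the untouched test set.
y_pred = pipeline.predict(X_test)
y_proba = pipeline.predict_proba(X_test)[:, 1]
print(classification_report(y_test, y_pred, digits=4))
print("ROC-AUC:", round(roc_auc_score(y_test, y_proba), 4))
```

Because SMOTE sits inside the imbalanced-learn pipeline, oversampling is applied only during fitting, so evaluation reflects the natural class distribution. In the full methodology, SHAP values computed on the trained ensemble would additionally guide feature selection and supply the global and local explanations described above.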
Conclusion
Financial fraud continues to pose a significant challenge in the rapidly expanding digital financial ecosystem. With the increasing volume and complexity of online transactions, traditional fraud detection methods are no longer sufficient to effectively identify evolving fraud patterns. This review highlights the growing adoption of machine learning and ensemble-based approaches for financial fraud detection, which have demonstrated strong performance in identifying fraudulent activities. However, the lack of transparency and interpretability in many advanced models remains a critical concern, particularly in regulated financial environments.

The integration of Explainable Artificial Intelligence (XAI) with stacking ensemble learning offers a promising solution to this challenge. By combining multiple high-performing models, stacking ensembles improve detection accuracy and robustness, while XAI techniques such as SHAP and LIME provide meaningful insights into model decisions. These explanations help analysts understand why transactions are classified as fraudulent or legitimate, improving trust, accountability, and regulatory compliance. This study concludes that explainability is not merely an additional feature but a fundamental requirement for modern fraud detection systems. Future research should focus on improving real-time scalability, developing user-friendly explanation methods, and ensuring ethical and responsible AI deployment. Overall, the combination of accuracy and interpretability is essential for building reliable, transparent, and trustworthy financial fraud detection systems.
References
[1] Almalki, F., and Masud, M., “Financial Fraud Detection Using Explainable AI and Stacking Ensemble Methods,” arXiv preprint arXiv:2505.10050, May 2025.
[2] Hasan, M., Rahman, S., and Hossain, M., “Explainable Artificial Intelligence in Credit Card Fraud Detection,” Journal of Computer Science and Technology Studies, vol. 6, no. 2, pp. 45–58, 2024.
[3] Sai, C. V., Das, D., Elmitwally, N., Elezaj, O., and Islam, M. B., “Explainable AI-Driven Financial Transaction Fraud Detection Using Machine Learning and Deep Neural Networks,” SSRN Preprint, 2023. DOI: 10.2139/ssrn.4439980
[4] Suriya, R., and Sireesha, M., “Credit Card Fraud Detection Using Explainable Artificial Intelligence,” Journal of Information Systems Engineering and Management, vol. 10, no. 1, 2025.
[5] Ojo, A., and Tomy, K., “Explainable Artificial Intelligence for Credit Card Fraud Detection,” World Journal of Advanced Research and Reviews, vol. 15, no. 2, pp. 112–121, 2025.
[6] Yeo, K., Lim, S., and Tan, W., “A Comprehensive Review on Financial Explainable Artificial Intelligence,” Artificial Intelligence Review, vol. 58, no. 4, pp. 1–29, 2025.
[7] Černevičienė, J., and Kabašinskas, A., “Explainable AI in Finance: A Systematic Review,” Artificial Intelligence Review, vol. 57, no. 3, pp. 345–372, 2024.
[8] Prabhudesai, S., Kulkarni, R., and Patil, A., “Explainable and Responsible AI in Credit Card Fraud Detection,” University of Mumbai (SAKEC), Technical Report, 2025.
[9] Aljunaid, S., Alshamrani, A., and Khan, M., “Explainable AI-Driven Federated Learning for Financial Fraud Detection,” Journal of Risk and Financial Management, vol. 18, no. 1, 2025. DOI: 10.3390/jrfm18010045
[10] Faruk, M., Rahman, T., and Hossain, A., “Explainable AI for Fraud Detection: Trust and Transparency,” Financial Security Systems Research Paper, 2025.
[11] Chen, Y., Li, Z., and Wang, X., “Deep Learning in Financial Fraud Detection: Innovations and Applications,” Data Science and Management, Elsevier, vol. 7, pp. 88–102, 2025.
[12] Gaav, A., Mehta, P., and Shah, N., “Recent Advances in Credit Card Fraud Detection: An Analytical Review,” Journal of Future AI and Technologies, vol. 4, no. 1, pp. 25–39, 2025.
[13] Bhattacharyya, S., Jha, S., Tharakunnel, K., and Westland, J. C., “Data Mining for Credit Card Fraud: A Comparative Study,” Decision Support Systems, vol. 50, no. 3, pp. 602–613, 2011. DOI: 10.1016/j.dss.2010.08.008
[14] Dal Pozzolo, A., Caelen, O., Le Borgne, Y. A., Waterschoot, S., and Bontempi, G., “Learned Lessons in Credit Card Fraud Detection from a Practitioner Perspective,” Expert Systems with Applications, vol. 41, no. 10, pp. 4915–4928, 2014. DOI: 10.1016/j.eswa.2014.02.026
[15] Lundberg, S. M., and Lee, S. I., “A Unified Approach to Interpreting Model Predictions,” Advances in Neural Information Processing Systems (NeurIPS), pp. 4765–4774, 2017.