Abstract:
This paper explores the role of artificial intelligence (AI) in risk detection and management in the financial and tax sectors, with a special focus on transparency, security, and explainability. Through a systematic literature review (SLR), nine key studies were identified that analyze AI's impact on credit risk assessment, fraud detection, and the security vulnerabilities of machine learning models in financial systems. The research highlights that AI-driven models, such as Gradient Boosting and Random Forest, outperform traditional statistical models in accuracy. However, their "black box" nature presents challenges in terms of transparency and regulatory compliance. Explainable AI (XAI) methods are essential to ensure that decision-makers understand the factors driving risk assessments, to build trust, and to fulfill legal requirements such as the General Data Protection Regulation (GDPR).
Additionally, the paper addresses the security risks associated with AI, particularly adversarial attacks and data leaks, which threaten the integrity of AI systems in financial institutions. The FINSEC platform, a cloud-based microservices framework, is proposed as a solution to detect and mitigate algorithmic risks. The platform integrates multiple detection methods and data sources to enable early threat recognition.
The study concludes that while AI offers significant advantages for risk detection and management, further research is needed to improve model transparency and security. Recommendations for both theoretical and practical improvements are provided, including the development of standardized XAI frameworks and the adoption of robust security measures to protect AI models in practice.