Abstract:
Artificial intelligence (AI) is increasingly used in risk management, but its successful integration depends not only on technical performance but also on transparency, explainability, and user acceptance. This study, carried out as part of a dissertation project, examines the factors that shape AI acceptance and effective human-AI collaboration in professional risk management. A quantitative survey was conducted among 71 professionals from the Finance and Insurance sector and the Chemical and Pharmaceutical industry in Germany, Austria, and Switzerland.
The results show that AI is generally perceived as useful, but that operational benefits alone do not guarantee broader acceptance. Age was positively associated with AI acceptance, while the gap between desired explainability and currently perceived transparency emerged as a relevant factor in the multivariate model. By contrast, neither perceived transparency nor the importance of explainable AI showed a significant direct bivariate relationship with AI acceptance. Respondents also emphasized transparent decision-making, workflow integration, and performance evaluation as important conditions for effective human-AI collaboration.
The study suggests that AI acceptance in risk management is shaped by a combination of user-related, organizational, and interpretability-related factors. Successful implementation therefore requires not only efficient AI systems but also structures that make AI-supported decisions understandable and professionally defensible.
