Abstract:
The increasing integration of artificial intelligence (AI) into risk-management decision-making brings significant opportunities but also challenges. This work examines the interfaces and the distribution of tasks between humans and AI, with the aim of exploiting the full potential of their collaboration while meeting ethical, legal and practical requirements. Based on qualitative interviews with experts from the chemical/pharmaceutical and financial sectors, a detailed picture of human-AI collaboration was developed.
The results show that AI systems are particularly strong in data analysis, forecasting and the automation of routine tasks. At the same time, humans remain essential for making final decisions, interpreting results and upholding ethical standards. The study also identifies challenges such as a lack of transparency, potential over-reliance on AI systems and the need to maintain human professional skills. The differing perspectives of the participants, shaped by professional experience, age and international background, underline the need for a context-sensitive design of the collaboration.
In addition to analyzing these key challenges, the study presents practice-oriented recommendations for designing efficient human-AI interfaces. These include user-friendly interface design, interactive adaptability and comprehensive training measures to foster acceptance and trust.
This study contributes to the further development of risk management in a data-driven world and shows how a balance can be struck between automated processes and human control to ensure long-term efficiency and security.