Abstract:
The rapid adoption of generative artificial intelligence (AI) systems in organizational environments introduces new and complex challenges for information assurance and cybersecurity. While generative AI technologies, including large language models, provide significant benefits in automation, decision support, and knowledge management, they also introduce new risk vectors affecting data confidentiality, integrity, system misuse, and regulatory compliance. This paper examines the cybersecurity risks associated with generative AI systems from an organizational perspective, with a particular focus on trust, governance, and risk management. The study is based on a structured review of recent academic literature, industry reports, and regulatory frameworks addressing AI security and information assurance. Key threat categories are identified, including prompt injection attacks, unauthorized data disclosure, model manipulation, lack of transparency, and limited auditability of AI-driven systems. The analysis highlights how these risks affect organizational decision-making, accountability, and compliance with data protection regulations. Based on the findings, the paper proposes a conceptual framework for managing the cybersecurity risks of generative AI systems that integrates technical safeguards, organizational policies, and compliance mechanisms. The framework emphasizes the transition from reactive detection toward trust-oriented governance and continuous risk monitoring. The paper concludes by outlining future research directions focused on the empirical validation of generative AI security controls and their effectiveness in strengthening organizational information assurance.
