Emerging risks and new challenges In addition to the amplification of existing means they can produce highly realistic and challenging to detect and mitigate such threats. business operations, and result in security security risks, AI can bring with it a host of convincing content, making the detection of Users can deliberately exploit system breaches, primarily when the model is granted emerging risks. such harmful outputs increasingly challenging. vulnerabilities to elicit unauthorized behavior too much decision-making power and autonomy. from the GenAI model or attempt to subvert Hallucinations: An AI hallucination, in which Model theft: Model theft involves the illegal safety and security filters. Direct injections Regulatory compliance: AI regulations like the an AI model generates false or misleading copying or theft of proprietary large language overwrite system prompts, while indirect ones European Union Artificial Intelligence Act (EU manipulate inputs from external sources. information, can pose risks to organizational models (LLMs), which can erode competitive AI Act) are designed to ensure that AI systems integrity, and in high-stakes sectors like health advantage and lead to financial losses as are developed and used in a way that is safe, care, finance, or legal services, they can lead to unauthorized parties can replicate models Training data poisoning: Training data poisoning transparent, and responsible. Violations of the EU significant challenges. Hallucinations can also without incurring development costs. Brand involves tampering with the data used to AI Act can cost companies as much as 35 million cause ethical and trust issues. Users must be reputation may also suffer from model theft if teach models to introduce vulnerabilities, euros or 7% of annual turnover. This creates able to trust that AI systems will provide accurate the stolen models are misused, and as language biases, or backdoors. This can hurt a model’s uncertainty for security and risk leaders as 62% and reliable information, and hallucinations models grow more powerful, their theft also security and reliability and create risks like poor of business leaders said they do not understand undermine this trust. poses a significant security threat, such as performance, system exploits, and harm to a AI regulations that apply to their sector.3 unauthorized use and sensitive data exposure. company’s reputation. Harmful content: GenAI can generate These are a select few emerging security offensive, dangerous, or legally non-compliant Prompt injections: In a prompt injection attack, a Excessive agency: Excessive agency allows an challenges. For more information about these material. Malicious actors can use AI-produced hacker disguises a malicious input as a legitimate LLM-based system to perform harmful actions and other AI risks, view a list of the top 10 risks deepfake video and audio content, fabricated prompt, causing unintended actions by an AI due to misinterpretations or unexpected errors for LLMs and GenAI Apps, compiled by the Open news articles, and manipulated images to system. By crafting deceptive prompts, attackers in its decision-making. This vulnerability can Worldwide Application Security Project (OWASP), spread misinformation, sow discord, or harm can trick an AI model into generating outputs compromise sensitive information, disrupt and visit MITRE ATLAS (Adversarial Threat reputations. The sophistication of AI models that include confidential information, making it Landscape for Artificial-Intelligence Systems).

Accelerate AI Transformation with Strong Security - Page 15 Accelerate AI Transformation with Strong Security Page 14 Page 16