AI security and Zero Trust

Agile security for agile business

AI security and Zero Trust Agile security for agile business

Table of contents 2 In this whitepaper 03 05 07 Executive summary Chapter 01 Chapter 02 Explore how Zero Strategic Zero Trust and Trust and AI can imperatives for AI: A symbiotic build resilience security strategy relationship for and innovation end-to-end security 10 13 17 Chapter 03 Chapter 04 Chapter 05 Data value, risks, Adapting security Conclusion and security needs for AI and futures are amplified by AI

Executive summary 3 Executive summary Artificial intelligence (AI) transforms Security teams must rethink their approach to both the assets that require protecting data and assets with AI. Because AI safeguarding and the mechanisms by focuses on data, it’s crucial to prioritize data which cybersecurity functions, classification and security. Traditional network adapts, and performs. defenses like firewalls simply can’t effectively protect data and AI applications. Instead, you AI’s potential to drive business forward is must protect data and assets wherever they immense, but without proper management, are in the cloud, AI services, mobile devices, overlooking security can undermine business or anywhere else. The best way to do this initiatives, marketing campaigns, and is by adopting a comprehensive security regulatory compliance. By integrating security framework like Zero Trust. early and embracing Zero Trust principles, organizations can take advantage of AI while mitigating risks, much like brakes on a car enable people to safely travel faster. Key takeaways In particular Generative AI (GenAI) is • AI introduces multiple strategic disrupting business operations and forcing imperatives for security everyone to evolve, including security teams. • Securing AI requires Zero Trust’s Most security leaders have encountered asset-centric and data-centric teams eager to implement new AI tools, approach sometimes without approval. Consider the sales manager who wants faster pricing • AI requires evolving security information or the developer eager to controls and defenses use new tools without security blocking • AI increases the value, risks, and innovation. Everyone wants to maximize security needs for data productivity, but many often overlook that • AI can be used to help accelerate security is essential to prevent damaging data Zero Trust leaks and other security risks.

Executive summary 4 This whitepaper explores how Zero Trust Rapidly update security strategy helps you navigate the security challenges Organizations must quickly adapt their and opportunities presented by AI so you security strategy because of the rapid can build a resilient and innovative digital adoption of GenAI by both attackers and infrastructure. Note that these learnings business teams. generally apply to multiple types of AI capabilities (including machine learning) AI increases focus on data but this paper primarily focuses on GenAI AI is fundamentally a data analysis and because of its power, popularity, and direct generation technology, so the quality and interaction with end users. security of AI applications is heavily reliant Key takeaways for AI Security on the quality, lineage, classification, and protection of the underlying data. and Zero Trust Securing AI is a shared responsibility with While the security implications of AI are the organization’s provider still emerging space, several learnings AI technology, like cloud technology, is most have become crystal clear. This whitepaper often a partnership with the provider that describes those key learnings: requires each partner to work on different aspects of security. It is crucial to learn this Zero Trust and AI have a symbiotic shared responsibility model and plan your relationship where they depend on each other security investments around it to effectively • AI requires Zero Trust, using an asset- mitigate AI security risks. centric and data-centric approach to secure both AI applications and their Security controls need to be adapted to AI underlying data, as opposed to relying on Most existing security controls are built for a traditional network perimeter-centric classic deterministic computing that generates security model. the exact same output for the same request • AI accelerates Zero Trust security each time. Many AI technologies dynamically modernization by enhancing security generate new outputs each time, which automation, offering deep insights, requires updating existing security controls providing on-demand expertise, speeding and introducing new ones to be effective. up human learning, and more. The guidance in this whitepaper is designed to help you navigate the continuous changes posed by AI, capitalize on the opportunities, and manage the security risks and challenges.

Strategic imperatives for security strategy 5 Chapter 1 Strategic imperatives for security strategy Attackers and business units are already adopting and using AI right now. Security teams must acknowledge this reality and urgently update security strategy to enable adoption of the skills, processes, and tools to manage these risks effectively. The top strategic imperatives for AI security are: Protect AI applications and data Attackers are already targeting AI applications to steal data and to establish beachheads for larger attacks. You should integrate security experts into the development of AI-enabled applications to protect them from the beginning (as it will be much more expensive and difficult to fix security later). Security leaders should also work with business and technology leaders to shape the AI strategy to favor SaaS and PaaS to avoid the organization taking on unnecessary risk from “build your own” AI. See the AI shared responsibility model for more information.

Strategic imperatives for security strategy 6 Provide guidance to users the impact of an attack, building reports on Attackers are already using AI to increase incidents and investigations, and reverse the quality and volume of existing attack engineering scripts. Security teams should techniques like phishing emails, scam phone evaluate AI security capabilities to see if it will calls for business email compromise, and more. increase their ability to keep up with attacks Review your use policy, user support processes, and user education to ensure users are aware Establish appropriate standards of how convincing attacker communications Organizations should ensure they have written can be, how to identify these threats, and how standards that can guide organizational to escalate them to security teams. decisions and show due diligence and due care to regulators and other 3rd parties. These Adopt AI security capabilities standards typically cover security, privacy, and AI technology is no silver bullet, but it ethical topics depending on the organization’s provides clear and compelling value in key expected and authorized use of AI. For an scenarios like guiding analysts through the example you can use Microsoft’s Responsible incident response process, summarizing AI Standard as a reference. Managing multiple dimensions of AI security risk Protect AI data and applications abilities p a c y t i r u c e s AI Expect, plan t for, and track dop attacker use A of AI Protect AI data and applications User educa tion and policy

Zero Trust and AI: a symbiotic 7 relationship for end-to-end security Chapter 2 Zero Trust and AI: A symbiotic relationship for end-to-end security AI is changing what security needs to protect and how it operates AI and Zero Trust work together in a What is powerful, symbiotic relationship. AI thrives Zero Trust? within the secure environment that Zero Trust creates, while Zero Trust evolves to meet the Zero Trust is a modern security approach challenges that AI introduces. Each enhances that aligns to business risk and real the other, helping to ensure that your security world attacks, continuously adapts, and framework is both adaptive and resilient. pragmatically prioritizes investments. This Zero Trust transformation from a classic Zero Trust has been defined by NIST network-centric approach to an asset- and and The Open Group. Microsoft data-centric approach is recommended for defines Zero Trust using business effective security in today’s age of cloud enablement and three security services, mobile devices, and now AI. General principles, summarized at infrastructure like network configurations, the top of the next page. firewalls, and access controls form the foundation, but these are not sufficient on their own. Just as cars require specific safety measures, digital assets need data classification, data encryption, adaptive access management, and other asset-centric security measures to stay safe.

Zero Trust and AI: a symbiotic 8 relationship for end-to-end security Verify explicitly Use least Assume breach Protect assets against privilege access Assume attackers attacker control by Limit access of can and will explicitly validating a potentially successfully attack that all trust and compromised asset, anything (identity, security decisions typically with just-in- network, device, app, use all relevant time and just-enough- infrastructure, etc.) and available information access (JIT/JEA) and plan accordingly. and telemetry. risk-based policies like adaptive access control. Business enablement: Align security to the organization’s mission, priorities, risks, and processes. Considering Zero Trust to help secure AI Classic network-perimeter security • AI activity is dynamic, so it doesn’t approaches simply cannot protect data or AI match static patterns. These controls are applications. Firewalls, IDS/IPS, and Network designed to detect and block. Additionally, DLP controls in a network perimeter focus AI also allows rapid generation of new on detecting and mitigating risk using static tooling that can copy existing functionality predictable patterns of network traffic. These (e.g. a custom NMAP clone) which can classic approaches don’t work effectively for evade static signature-based defenses. AI applications because: Additionally, one way to protect data • AI network traffic is often encrypted effectively is with a data-centric approach that for privacy and security reasons, which stays with data wherever it goes. Network prevents these network-based controls controls are limited to the organization’s from getting any visibility. security perimeter and the assets in it so they cannot protect data on mobile devices, cloud • AI operates at data and application services, USB drives, and other locations. For abstraction layers, so differentiating these reasons, a Zero Trust asset-centric and between dangerous and safe actions/ data-centric approach is an effective way to communications requires controls that protect AI and related data assets. understand applications, data, and users.

Addressing legacy security challenges 9 with AI and Zero Trust AI accelerates the adoption of Zero Trust AI can play a pivotal role in accelerating the Generating automation (scripts, programs, implementation and operationalization of a etc.) also accelerates Zero Trust by allowing Zero Trust strategy. Generative AI enhances security teams to avoid repetitive work and this by acting as an on-demand resource for focus their efforts on strategic activities learning and automating key tasks, such as instead. This also reduces repetitive manual data discovery and classification. This not only tasks, which are prone to human error and speeds up workflows but also ensures that can cause risk. Note that these should be data is managed securely and consistently created in a secure by design manner to avoid across the organization. introducing additional security risks. Furthermore, Generative AI can enhance both Note that many recommended business process understanding and security data security and governance risk identification. For example, it can help practices have been known organizations discover sensitive data and assess before the usage of the ‘Zero whether unauthorized individuals have access, while also detecting patterns that indicate Trust’ name, but they still fit the unusual data movement within or outside the Zero Trust philosophy perfectly. organization. These patterns could point to These have also often been insider risks, innocent errors, or even external deferred or deprioritized at many data exfiltration attempts. By learning from organizations, so they are often these patterns, AI not only benefits productivity ‘new’ approaches to but also refines security design, helping the organization. business and security teams better understand processes and identify vulnerabilities.

Data value, risks, and security needs 10 are amplified by AI Chapter 3 Data value, risks, and security needs are amplified by AI AI elevates focus on classifying AI increases the value of data (to and protecting data business and attackers) AI massively amplifies the priority of data GenAI’s ability to generate insights from data security for an organization. Organizations has transformed it into an even more valuable often recognize the importance of data asset, as AI models increasingly become a core security but have frequently had to defer driver of business profitability. These models or deprioritize it in favor of more urgent require high-quality, original data sources for priorities like modernizing identity and access training, further elevating the importance of security for the cloud, maturing security proprietary data. As a result, enterprise data is operations, adapting infrastructure and not only crucial for business success but also a development security practices to cloud and lucrative target for cyber attackers. DevOps, or other key initiatives. GenAI’s success depends heavily on the The advent of AI means organizations quality, lineage, classification, and protection must prioritize data security to help tackle of the data it processes. With the increasing the important (and challenging) work of saturation of low-quality data on the open classifying and protecting their data. This internet, public data sources have become increase in prioritization is primarily driven by less reliable for training robust AI models. two factors: This makes high-quality enterprise data an increasingly valuable resource, not only for • AI increases the value of data (to building better AI but also as a prime target businesses and attackers) for cyber attackers. These attackers seek to • AI amplifies existing data security and exploit enterprise data for financial gain, either governance challenges by using it to train their own models or by selling it to other malicious actors.

Data value, risks, and security needs 11 are amplified by AI Furthermore, the integration of AI with For example, internal users may ask an disparate data sets can lead to blurred lines AI application about the salaries and around data ownership and stewardship, compensation for other employees and complicating the already complex issues of executives in the organizations or external privacy and intellectual property. Accidental attackers may ask AI what secret projects disclosures, whether through training the organization is working on. The AI may models or in retrieval-augmented generation provide answers to those unauthorized (RAG) applications, pose significant risks to users if the documents describing employee organizations. As AI continues to evolve, compensation and sensitive projects have safeguarding enterprise data becomes not been classified correctly, or the AI critical, not only for producing reliable application does not recognize or enforce AI outputs but also for ensuring that access rules based on these classifications. this valuable asset isn’t compromised or For the record, Microsoft Copilot respects weaponized by external threats. your identity model and permissions, inherits your sensitivity labels, applies your retention AI amplifies data security and policies, supports audit of interactions, and governance challenges follows your administrative settings. One of the strengths of GenAI is the ability Organizations must recognize these challenges to discover data and make it easily accessible. and update their data governance and security Because many organizations have not strategies, starting with clear policies and established or consistently applied a formal procedures for data classification. These must data classification strategy, introducing AI be supported with technical controls on the initiatives can make sensitive information data itself as well as the applications that use easily discoverable to unauthorized users the data (including AI applications). (which they weren’t previously aware of). As AI reshapes business roles and elevates While users may have access to these the importance of data, these changes create documents already, they often don’t know both opportunities and risks. However, it’s not how to find them or have the time to search all bad news as AI can also help discover and through them. mitigate data risks in addition to the value it creates for business.

Adapting security for AI 12 Chapter 4 Adapting security for AI A shared responsibility model The three layers of an AI system are as follows: for AI security 1. AI platform Securing AI systems is a partnership where Microsoft offers a range of AI solutions security responsibility is shared between running on Azure, including those organizations and their AI providers, powering their own Copilot solutions. similar to cloud technology. It is critical 2. AI application for all stakeholders to learn this shared Software developed by the organization responsibility model and plan their security to ensure productive and secure use of investments, strategies, and controls based generative AI solutions. on this model. This collaborative approach 3. AI usage helps keep AI systems secure and resilient How generative AI is used within an against evolving threats. organization, including data consumption and generation.

Adapting security for AI 13 The table below summarizes the shared PaaS responsibilities between organizations and AI The customer develops applications on top of providers in securing AI applications across Azure AI offerings, with Microsoft providing these layers. Depending on the type of AI many embedded controls. The customer deployment—Infrastructure as a Service is responsible for securing the custom (IaaS), Platform as a Service (PaaS), or application and its usage. Software as a Service (SaaS)—the division of responsibilities changes: SaaS Managed services, and/or Microsoft’s Copilot IaaS can help deliver the necessary functionality The organization builds their AI models on a without the customer needing to develop or cloud platform like Azure, where Microsoft manage software. The customer still manages provides the infrastructure. The customer how the service is used and secures any data manages the security of their models, data, provided or generated. and applications. AI security shared responsibility model IaaS (BYO model) PaaS (Azure AI) SaaS (Copilot) AI usage User training, identity and access, data security and governance AI application Plugins, design, infrastructure, safety systems AI platform Model safety, accountability, tuning, design, training data governance Organization Microsoft For a more detailed version of this model, see AI shared responsibility model.

Adapting security for AI 14 Operationalizing a shared responsibility model To operationalize this shared responsibility 1. Data access controls: Safeguard data with that safeguard AI at every level, organizations APIs, ACLs, and labeling. can focus on three key areas, as shown in the 2. Application controls: Manage how diagram below. These three pillars form the applications interact with data and models. foundation to implement these controls and help create resilient AI systems. 3. AI model controls: Ensure AI models are secure to prevent unintended disclosures. Resource content Skills, functions, and plugins New data Prompt AI application Active data Massive data stores User content Generated content Dark data 1 Data access controls 2 Application controls 3 AI model controls (API, ACLs, and labels) (Access, input, output) (LLM safety and security) Don’t undo these boundaries Don’t give unlimited access It will give the secrets it knows Ensuring that access to data Managing how applications Safeguarding AI models, is strictly regulated through interact with data and models, particularly large language mechanisms like APIs, Access including the regulation of models, to prevent them from Control Lists, and data labeling. input, processing, and output. inadvertently revealing sensitive This helps maintain the This prevents AI applications information or being manipulated integrity and confidentiality of from becoming a weak link in to produce harmful outputs. data, preventing unauthorized the security chain, especially These controls are vital in access or misuse. when dealing with sensitive or maintaining the trustworthiness critical information. and security of AI systems.

Adapting security for AI 15 AI-specific security measures to rethink their data practices and security controls to ensure safe use of AI: AI applications are fundamentally different Attack simulation (red team/pen testing) than traditional applications. Traditional has to operate differently, focusing on using applications are deterministic, which means human language to trick AI models in addition they generate the exact same output every to exploiting deterministic code vulnerabilities. time they get the same input. Today’s security controls and security assumptions are built Security and technology roles have to rely around that predictability. heavily on threat models to evaluate these AI based applications that use generative AI new system designs until a knowledgebase of models are different because they are dynamic security controls are established for standard in nature—the model will generate a different application patterns. output each time they are run with the exact Business and AI application roles that are same input. For example, asking an image sponsoring and developing AI projects need generation model to “draw a picture of a to work with security teams to understand the kitten in a security guard uniform” repeatedly inherent risks for AI and available mitigations. is unlikely to generate the exact same picture twice (though they will all be similar). Data owners need to work with security This dynamism offers new value for teams to ensure that sensitive data is classified businesses but also introduces new types and handled properly by AI (which may be of security risks. This dynamism also means excluding its use by AI). that current (deterministic) security controls The image below illustrates how AI applications designed will not be effective against AI are typically a combination of both predictable applications. This requires the organization deterministic logic and dynamic AI logic: Different technical Classic app components AI components use exploits and defenses for: use predictable logic dynamic logic Precise interruption/ Consistent (deterministic) A pattern of variable outcomes redirection of logic flow outcomes based on execution based on model design, training General biases and of classic programing data, real-time inputs, etc. hallucinations in outcomes For more information on the types of threats to AI, see Microsoft AI Red Team

Conclusion 16 and futures Chapter 5 Conclusion and futures Without adopting a Zero Trust approach to security and integrating these learnings on AI, your organization could face increased risk of damaging cybersecurity attacks and diminished business returns from AI initiatives. Here at Microsoft, we recommend organizations adopt a Zero Trust approach for security to give them the agility and asset- centric controls to manage risks to AI and data. Just as a transportation system needs more than road signs and barriers to keep people safe, an organization needs more than basic network security controls to protect business assets. Road signs and stripes are crucial, but you also need car-specific protection like insurance, seatbelts, airbags, and collision avoidance systems to make cars and drivers safe. As we navigate the brave new world of continuous changes driven by AI, cloud, mobility, and more, we are confident that the Zero Trust principles can guide the way and help you navigate the opportunities and challenges yet to come.

Resources 1717 Guidance and technical resources The following resources expand on the principles, lessons learned, and requirements covered earlier to help accelerate your AI and Zero Trust readiness: Security Adoption Framework (SAF) Zero Trust for Microsoft Copilots Guidance on adopting Microsoft security solutions, Apply Zero Trust protections to Microsoft Copilots. helping organizations effectively implement and Partner integration with Zero Trust optimize security strategies. Adoption Scenario Plan Phase Grid Apply Zero Trust protections to partner Microsoft cloud solutions. Easily understand the security enhancements for each Best practices for AI security risk management business scenario and the level of effort for the stages and objectives of the Plan phase. Learn about best practices for managing AI security Zero Trust adoption tracker risks to protect your AI deployments. Downloadable PowerPoint deck to track your progress Threat modeling AI/ML systems and through the stages and objectives of the Plan phase. dependencies Business scenario objectives and tasks Explore comprehensive guidance on threat modeling AI/ML systems to identify and mitigate potential Downloadable Excel workbook to assign ownership security risks. and track your progress through the stages, objectives, and tasks of the Plan phase. NIST AI Risk Management Framework Additional Zero Trust documentation Review the NIST AI Risk Management Framework for standards and guidelines to manage risks related to AI. See additional Zero Trust content based on a documentation set or your role in your organization. Strengthen your Zero Trust posture blog Adversarial threat landscape for AI systems Offering practical guidance for organizations to enhance their security posture with integrated, Utilize MITRE’s ATLAS™ to understand and defend streamlined tools. against adversarial threats targeting AI systems. ©2024 Microsoft Corporation. All rights reserved. This document is provided “as-is.” Information and views expressed in this document, including URL and other Internet website references, may change without notice. You bear the risk of using it. This document does not provide you with any legal rights to any intellectual property in any Microsoft product. You may copy and use this document for your internal, reference purposes.