Accelerate AI Transformation with Strong Security

Explore strategies for securely adopting AI in your organization with this guide from Microsoft Security.

Accelerate AI transformation with strong security The path to securely embracing AI adoption in your organization

Foreword 3 The security transformation: A 16 Introduction 4 path to a secure yes to AI The push for rapid AI adoption 6 4 steps to implementing effective security for AI 18 Step 1: Form a dedicated security team for AI 19 Facing the new realities of 10 Step 2: Optimize resources to secure GenAI 22 security for AI Step 3: Implement a Zero Trust strategy 23 Amplified risks of data leakage in AI systems 13 25 Step 4: Adopt a comprehensive security solution for AI Increased threats with rapid AI development 14 Emerging risks and new challenges 15 The path forward 27 Appendix 29

Report Foreword by Michal ways—they’re accelerating rare disease research, want to provide information that can be used and security. It answers questions such as: How Braverman-Blumenstyk, Corporate maximizing value for clients, powering better to help make crucial decisions about how to widespread is the use of GenAI in organizations? Vice President, Microsoft Security patient care, and more. secure and govern AI in organizations. Through How prevalent is GenAI app development? conversations with many security and risk Division CTO, Israel R&D Center leaders about their excitement and concerns Second, readers can discover which GenAI GenAI is already driving significant Managing Director advancements and transformation within regarding GenAI, I realized that offering industry security risks are most on the minds of security organizations, but the rapid pace of adoption insights and data points can help clarify the path and risk leaders, and learn more about the new forward to confident AI adoption. Since the emergence of generative AI (GenAI), of this new technology comes with a host of and amplified risks that come with AI use and more and more companies have been adopting new security concerns. And while security and development. Perhaps most importantly, the this powerful new technology and realizing risk leaders want to be able to say “yes” to their Microsoft Security, in partnership with MDC report outlines some of the steps leaders are Research Group, conducted a study to learn taking or planning to take to address these risks. its potential to transform their businesses. company’s efforts to innovate, they want to According to Microsoft’s Work Trend Index, the be sure that the right security measures and how organizations approach AI adoption, use, use of GenAI has nearly doubled in the last six solutions are in place. development, and security. Based on survey I hope this report helps you feel more responses from over 400 enterprise IT and data months, with 75% of global knowledge workers confident as you move forward on the path using it. Organizations are using GenAI not At Microsoft, we wholeheartedly agree with a security-decision makers, this report serves to securing and governing AI, enabling your three main purposes. First, it provides security organization to say “yes” to AI innovation with a only to boost productivity, increase revenue, security first approach as we prioritize security and reduce costs, but also to drive innovation. above all else. That’s why we’re focused on and risk leaders with an opportunity to see safe and secure foundation. At Microsoft, we’re seeing organizations use helping customers use and build trustworthy AI what their peers around the world are thinking Microsoft Copilot to innovate in a variety of that is secure, safe and private. It’s also why we about and doing in relation to AI use, adoption, Michal Braverman-Blumenstyk

INTRODUCTION 4 The balancing act between Questions that keep security and innovation and security risk leaders up at night include: What if the AI application inadvertently leaks The adoption of generative AI (GenAI) is On one side, business leaders are eager to adopt accelerating across industries. Organizations of AI applications so they can drive innovation. sensitive data? all sizes are in a sprint to incorporate AI into their operations to improve productivity and business On the other side, security teams are grappling What if it hallucinates and produces processes, and increase revenue. Although there with the obstacle of strengthening defenses is a sense of hope and excitement surrounding against the new and complex security challenges inaccurate information? this revolutionary technology, many security and posed by AI. risk leaders find themselves attempting to strike a balance between two powerful forces at play. What if it produces malicious content that could damage our reputation?

The secure “yes” Security and risk leaders want to be able to say makers. While the research found that companies “yes” to their company’s appeals to innovate are rapidly adopting and developing their own with AI, but they want it to be a safe and secure AI applications, security leaders clearly have “yes.” They want to ensure that the right security concerns about the new and amplified risks that measures, protocols, and solutions are in place, using and developing AI applications entails. and they want to be sure that their valid concerns about data security, inaccurate outputs, harmful However, while AI risks are evolving, so are content, and more are addressed. security solutions and best practices for AI. This white paper includes several recommended To explore the topics of AI adoption, use, steps to more effectively address security development, and security in more detail, concerns about AI, many of which emerged from Microsoft Security conducted quantitative the survey findings. More details about those will and qualitative research that included over be provided later. But the first step is to examine 400 security and IT decision makers as well as the current AI landscape. a series of in-depth interviews with decision

01 The push for rapid AI adoption

The pUsh fOR RapID aI aDOpTION 7 The push for rapid AI adoption C-suite executives in companies around the world are pushing for the rapid service with GenAI-powered virtual assistants, and optimize energy consumption adoption of GenAI. These leaders view GenAI as a critical driver of innovation in smart grids. These are just a few examples of the transformative possibilities that and know that it holds the potential to revolutionize various sectors and lead GenAI can offer. to significant advancements. The potential for GenAI innovation is high, and so are adoption rates. Of the survey GenAI can enable faster and more accurate cancer diagnoses, offer personalized respondents who passed the original screening criteria, only 5% of respondents learning experiences tailored to individual student needs, detect fraudulent said that they are neither using nor developing GenAI and have no plans to do so. transactions in real time, optimize production processes, enhance customer Conversely, 95% of respondents reported that they were either planning to or already use and/or develop GenAI. GenAI user and/or developer 95% Not GenAI user or developer 5%

A majority of companies are using and developing GenAI-related apps Although the widespread adoption and use of Over a quarter (26%) of respondents were GenAI applications was expected, one notable actively using third-party SaaS or enterprise- survey finding was the number of respondents ready GenAI applications and had implemented who said their companies were both using and GenAI apps they had developed or customized. developing GenAI apps or planning to. Another 28% were in the process of actively testing third-party SaaS or enterprise- ready GenAI applications, and they were 66% of respondents said that their organizations are not only using GenAI applications but actively in the process of developing or customizing GenAI apps. are also planning to develop or are already developing their own GenAI apps. Meanwhile, over three quarters (76%) of those currently in the “use only” category said they of organizations are developing or have preliminary plans to develop or customize 66% planning to develop their own in the next year. Gen AI applications

The pUsh fOR RapID aI aDOpTION 9 13.9 Why companies are developing Average number of Drive business innovation 58% several GenAI apps apps in development Maintain control of data 55% Another notable survey finding was that integrate with existing systems (52%), cost and companies that develop apps aren’t working on scalability concerns (49%), and compliance just one or two. Among companies who said and regulatory requirements (44%). Need to integrate with existing systems 52% they are developing or have developed apps, the average number they were working on or had This fast-paced drive to develop new innovative worked on was 13.9 apps. applications can be exciting and potentially Cost/scalability concerns of using SaaS 49% profitable for organizations, but it comes applications The top reason companies choose to with new security concerns. With companies develop or customize customer-facing GenAI developing and deploying so many applications, applications is to drive business innovation AI components like orchestrators, models, Compliance and regulatory requirements 44% (58%). Other reasons include wanting to and plug-ins can increase exposure and maintain control of data (55%), needing to vulnerabilities for organizations. With all these Lack of enterprise-ready solution to meet elements in play, understanding new and 37% specific need amplified security concerns about AI is essential. Other 2%

02 Facing the new realities of security for AI

Facing the new realities oF security For ai 11 Facing the new realities of security for AI Addressing the evolving threat landscape is crucial to enabling trustworthy AI. GenAI introduces new attack surfaces, such as prompts, responses, training data, retrieval-augmented generation data, and models, effectively changing the risk landscape. In addition to managing traditional threat vectors, security and risk leaders also need to address amplified risks such as data leakage and data oversharing, and new risks such as prompt injections, hallucinations, and model vulnerabilities. e f i l d p a m n a d I A n n e e w G r i s k s e l a a k t a a g D e M o d k e a l v e u r l b n l i e a r J a b i l i ty n o i ct e j n t i a t t w a e c n k p I s A u en rf a H G c m es a o l r l p u t c i c n e a r t i i d o n n I g s n i d n a i t a a r n T R o i A t a G r st d e a h t c a r I o A M s ea o e thr t v r e u c d s o t Y o n rs e o l p s s e R P l u s t n i o g p d n N E e s tw i o n r t k s p / s m k o i r l l P ty s i t n e d I D a t a n o i t C ca l i o l u p d p A

faCINg The New RealITIes Of seCURITy fOR aI 12 Security leaders’ top concerns about GenAI Top concerns of companies developing GenAI 60% Data leak/exfiltration Security and risk leaders at companies using GenAI said Top concerns of companies using GenAI Inappropriate use of personal data 50% their top concerns are data security issues, including Leakage of sensitive data 63% leakage of sensitive data (63%), sensitive data being Violations of regulations 42% overshared, with users gaining access to data they’re not authorized to view or edit (60%), and inappropriate Sensitive data being overshared 60% Lack of visibility into AI 42% components and vulnerabilities use or exposure of personal data (55%). Other Inappropriate use or exposure of concerns include insight inaccuracy (43%) and harmful personal data 55% Over-permissioned access 36% granted to AI apps or biased outputs (41%). Inaccurate insights generated by 43% Incorrect or misleading 36% In companies that are developing or customizing GenAI GenAI responses (Hallucination) apps, security leaders’ concerns were similar but slightly Lack of visibility into data and Malicious models 32% access risks 42% varied. Data leakage along with exfiltration (60%) and the inappropriate use of personal data (50%) were again Harmful or biased outputs from Unintended functionality 29% performed by AI (excessive agency) top concerns. But other concerns emerged, including GenAI apps 41% the violation of regulations (42%), lack of visibility into Risky or insecure plugins in Supply chain vulnerability 27% AI components and vulnerabilities (42%), and over- GenAI apps 39% permissioned access granted to AI apps (36%). Violations of regulations or Training data poisoning 23% code-of-conduct policies 38% Overall, these concerns can be divided into two Insecure plug-in design 22% Non-compliant or unethical use categories: amplified and emerging security risks. The of GenAI apps 38% following sections examine these risks in more detail. Adversarial prompt attacks 22% Insider risk from use of GenAI 32% apps Model theft 20% Overreliance on GenAI apps 27% Denial of service attack 19% Shadow IT 19% Wallet abuse 9%

Facing the new realities oF security For ai 13 Amplified risks of data leakage in AI systems As the volume of AI-generated content expands, Without appropriate user training, the rapid the potential for data leakage and exposure proliferation of AI tools can also create increases as well. Organizations face heightened environments in which users share or use data risks stemming from practices such as data without fully understanding its sensitivity, oversharing and shadow IT. compounding the risk of compliance violations and data breaches. Data oversharing and breaches: Data We want to make sure that whatever oversharing occurs when users inadvertently Shadow IT: With 78% of AI users bringing 1 data that we feed to it, it stays within [our gain access to sensitive information through their own AI tools to work (BYOAI), sometimes AI applications, often due to insufÏcient without the knowledge of the IT or security company], and it’s not some proprietary labeling policies or inadequate access controls. group within an organization, the risk of data This might lead to unauthorized exposure of leakage increases. When employees use third- information [that] gets leaked outside… confidential data, posing significant risks to both party AI tools and paste sensitive information individuals and organizations. such as source code, meeting notes, and data spreadsheets into user prompts, they can inadvertently expose confidential company data outside of the company. Technical Decision Maker, IT

faCINg The New RealITIes Of seCURITy fOR aI 14 Increased threats with rapid AI development By “2028, open-source generative AI models will As organizations attempt to keep up with underpin more than 50% of enterprise GenAI the rapid evolution of AI technologies, the use cases, up from less than 10% today.”2 As accelerated development and deployment of AI organizations increasingly utilize open-source applications introduces elevated security risks for organizations. software to develop GenAI, components within the AI stack such as models and orchestrators Rushed deployments: Companies often face can introduce vulnerabilities into their environments, which could be exploited by intense pressure to innovate quickly, which can malicious actors. result in inadequate testing, rushed deployments, and insufÏcient security vetting. This increase AI misconfiguration: When developing and in the pace of development can leave critical deploying AI applications, misconfigurations vulnerabilities unaddressed, creating security can expose organizations to direct risks, such risks once the AI system is in operation. as failing to implement identity governance AI supply chain vulnerabilities: The AI supply for an AI resource, and indirect risks, such as vulnerabilities in an internet-exposed virtual chain is a complex ecosystem that presents machine, which could allow an attacker to gain potential vulnerabilities that could compromise the integrity and security of AI systems. access to an AI resource. Vulnerabilities in third-party libraries or models can expose AI systems to exploitation.

Emerging risks and new challenges In addition to the amplification of existing means they can produce highly realistic and challenging to detect and mitigate such threats. business operations, and result in security security risks, AI can bring with it a host of convincing content, making the detection of Users can deliberately exploit system breaches, primarily when the model is granted emerging risks. such harmful outputs increasingly challenging. vulnerabilities to elicit unauthorized behavior too much decision-making power and autonomy. from the GenAI model or attempt to subvert Hallucinations: An AI hallucination, in which Model theft: Model theft involves the illegal safety and security filters. Direct injections Regulatory compliance: AI regulations like the an AI model generates false or misleading copying or theft of proprietary large language overwrite system prompts, while indirect ones European Union Artificial Intelligence Act (EU manipulate inputs from external sources. information, can pose risks to organizational models (LLMs), which can erode competitive AI Act) are designed to ensure that AI systems integrity, and in high-stakes sectors like health advantage and lead to financial losses as are developed and used in a way that is safe, care, finance, or legal services, they can lead to unauthorized parties can replicate models Training data poisoning: Training data poisoning transparent, and responsible. Violations of the EU significant challenges. Hallucinations can also without incurring development costs. Brand involves tampering with the data used to AI Act can cost companies as much as 35 million cause ethical and trust issues. Users must be reputation may also suffer from model theft if teach models to introduce vulnerabilities, euros or 7% of annual turnover. This creates able to trust that AI systems will provide accurate the stolen models are misused, and as language biases, or backdoors. This can hurt a model’s uncertainty for security and risk leaders as 62% and reliable information, and hallucinations models grow more powerful, their theft also security and reliability and create risks like poor of business leaders said they do not understand undermine this trust. poses a significant security threat, such as performance, system exploits, and harm to a AI regulations that apply to their sector.3 unauthorized use and sensitive data exposure. company’s reputation. Harmful content: GenAI can generate These are a select few emerging security offensive, dangerous, or legally non-compliant Prompt injections: In a prompt injection attack, a Excessive agency: Excessive agency allows an challenges. For more information about these material. Malicious actors can use AI-produced hacker disguises a malicious input as a legitimate LLM-based system to perform harmful actions and other AI risks, view a list of the top 10 risks deepfake video and audio content, fabricated prompt, causing unintended actions by an AI due to misinterpretations or unexpected errors for LLMs and GenAI Apps, compiled by the Open news articles, and manipulated images to system. By crafting deceptive prompts, attackers in its decision-making. This vulnerability can Worldwide Application Security Project (OWASP), spread misinformation, sow discord, or harm can trick an AI model into generating outputs compromise sensitive information, disrupt and visit MITRE ATLAS (Adversarial Threat reputations. The sophistication of AI models that include confidential information, making it Landscape for Artificial-Intelligence Systems).

03 The security transformation: A path to a secure yes to AI

The seCURITy TRaNsfORmaTION: a paTh TO a seCURe yes TO aI 17 Level of urgency for having security measures in place for AI Total percentage of respondents who said it is 95% 95% 95% “Very” or The security transformation: A path “somewhat” important to a secure yes to AI Security and risk leaders are clear they need to address these amplified and emerging threats as soon as possible. 95% of respondents agree that their company needs to have security measures in place for their AI apps 66% 64% 63% Very in the next 12-24 months. They voiced a high concern for having security measures in place across all three AI application categories—third-party SaaS, enterprise-ready, and custom-built apps. Somewhat Not very 95% of security and risk leaders 29% 31% 33% agree their company needs to have security measures in place for their AI apps including third-party SaaS, 95% 5% enterprise-ready, and custom-built 4% (n=204) apps in the next 12-24 months. 3% This sample represents those respondents who are developing or planning to develop their own Third-party SaaS Enterprise- Custom-built GenAI application ready apps

The seCURITy TRaNsfORmaTION: a paTh TO a seCURe yes TO aI 18 01 02 4 steps to implementing effective security for AI Form a dedicated Optimize resource security team allocation to secure for AI. GenAI. As awareness of the risks associated with the These recommended practices focus on rapid implementation of GenAI increases, many fostering a collaborative environment and organizations are responding proactively by implementing effective security measures dedicating substantial resources to enhance that will support GenAI advancements while their security measures. Security and risk leaders safeguarding organizational interests. can take several actionable steps to create a path toward safe and secure AI innovation. 03 04 Implement a Zero Adopt new Trust strategy. dedicated security solutions for AI.

The seCURITy TRaNsfORmaTION: a paTh TO a seCURe yes TO aI 19 Step 1 64% Form a dedicated AI security team of AI security teams are reporting to SDMs A majority of companies recognize the need Eighty percent of survey respondents either to form dedicated, cross-functional teams to currently have (45%) or plan to have (35%) a manage the unique security challenges posed dedicated team to address security for AI. Over by AI. Dedicated security teams ensure that AI six in 10 said their teams will report to a security systems are rigorously tested, vulnerabilities are decision-maker, ensuring not only vigilant swiftly identified and mitigated, and security oversight but also strategic vision and leadership protocols are continuously updated to keep pace in addressing AI-related risks. with evolving threats. 80% of organizations currently 45% 35% have a dedicated team or plan to 80% have to address security for GenAI. of these organizations currently have of these organization plan to have a a security team for AI. security team for AI. toc_shapes.ai:10

The seCURITy TRaNsfORmaTION: a paTh TO a seCURe yes TO aI 20 Notably, the median team size, or intended team size, of these dedicated security teams was 24 employees—underscoring the substantial resources that companies are committing to safeguarding their AI initiatives. When the size of company was factored in, team sizes varied. “I think [budget-wise], it’s going to always sit in security...Your AI team is looking at the use cases and the data input and output and how things are shared, but security is not their focus, so it’s really the Median team size security team that’s going to own this and 24 want to make sure that you’re protecting things properly.” Security Decision Maker, Healthcare

The seCURITy TRaNsfORmaTION: a paTh TO a seCURe yes TO aI 21 Best practices for building a security team for AI Here are a few best practices organizations can use to successfully build an effective cross-functional security team for AI. 01 02 03 04 Form an AI committee Hire diverse skill sets Establish clear roles Invest in continuous training to foster collaboration Forming a successful security team for AI and responsibilities and development across departments requires a balance of skills. Look for team For effective productivity, clearly The rapid evolution of AI technologies members with expertise in data science, define each team member’s role. mandates ongoing education for Security for AI is a collective effort cybersecurity, software engineering, and that goes beyond the IT department. Ensure everyone understands their security teams. Provide access to training machine learning. This diversity ensures Encourage collaboration among teams specific responsibilities, which programs and workshops that focus on that various aspects of security are promotes accountability and avoids practices, emerging threats, and ethical like security, IT, legal, compliance, covered, from technical development overlap in efforts. considerations related to security for and risk management to create to threat prevention. AI. This investment not only empowers comprehensive security strategies. team members but also ensures Having varying perspectives and that the organization stays ahead expertise will enhance the effectiveness of security protocols. of potential vulnerabilities.

The seCURITy TRaNsfORmaTION: a paTh TO a seCURe yes TO aI 22 Step 2 Security for AI - budget expectations Optimize resources to secure GenAI 78% 4% 17% 1% The introduction of AI applications within Allocating funds for compliance assessments, organizations is not only revolutionizing legal consultations, and audits becomes essential operations but also necessitating significant to align an organization’s AI strategy to an changes in resource and budget allocation, industry framework and enable more secure, especially in IT security. safe, and compliant AI usage and systems. Prioritizing funds for ongoing employee training and skills development—which could A significant majority of security and risk leaders include specialized training on security tools (78%) believe their IT security budget will increase to accommodate the unique challenges for AI, risk management strategies, and ethical and opportunities brought about by AI. This considerations in AI use—is also important to consider when allocating budget and resources. adjustment is crucial for several reasons. AI systems require a robust security infrastructure to operate securely. This might involve upgrading existing security systems, implementing more stringent access controls, and enhancing data security and governance. Additional resources Budget will Budget will No Not might also be needed to meet emerging new AI increase decrease change sure regulatory requirements. 78% believe their IT security budget will increase with the adoption of GenAI applications.

Step 3 Implement a Zero Trust strategy When preparing for AI adoption, a Zero Trust Trust minimizes the risk of data leakage and strategy provides security and risk leaders with protects an organization from both internal and a set of principles that help address some of external threats. Continuous verification, least their top concerns, including data oversharing privilege access, and dynamic risk management or overexposure and shadow IT. A Zero Trust are the cornerstones of this approach, providing approach shifts from a network-centric focus to a robust and adaptable security framework that an asset and data-centric focus and treats every supports the success of an organization’s end- access request as a potential threat, regardless to-end security. of its origin. By embracing Zero Trust, organizations can Zero Trust constantly validates the identities secure their AI deployments and know that their of every user and device, ensuring that only security is continuously validated and protected. those with clear permissions can reach sensitive Zero Trust empowers organizations to embrace information. By dynamically adjusting security AI confidently, ensuring that AI’s powerful measures based on real-time assessments, Zero capabilities are harnessed safely and effectively.

Zero Trust principles Verify explicitly Diligently verify all identities accessing AI applications, and assess every application used, deployed, and developed to ensure security integrity. By defining and monitoring both intended and unintended activities, organizations can maintain a strong defense against unauthorized access. Use least privilege access Limit AI systems to only access data necessary for intended uses by authorized users and ensure that AI agents operate with the minimum privilege to perform intended tasks—typically with just-in-time and just-enough-access (JIT/JEA) and risk-based policies like adaptive access control. Assume Breach Breaches are inevitable, so this principle focuses on minimizing their impact. To proactively design effective controls to reduce risks, operate under the assumption that each AI prompt could have malicious intent, responses might inadvertently leak data, and that AI components may possess vulnerabilities.

The seCURITy TRaNsfORmaTION: a paTh TO a seCURe yes TO aI 25 Step 4 Security plans for AI 72% 64% 2% 3% Adopt a comprehensive security solution for AI As AI adoption continues to expand across When asked how they plan on securing and industries, the need for comprehensive, protecting the usage and development of AI dedicated security solutions has become applications in their organizations, a majority increasingly apparent. AI introduces specific risks of survey respondents (72%) said they plan to that traditional security measures might not fully procure a new dedicated security solution to Procuring a new Using existing security Other Not sure address. Security for AI is designed to mitigate secure the usage and development of AI, while dedicated AI Security solutions to secure the solution to secure the usage and development usage and development of GenAI of GenAI these risks. 64% stated they plan to use existing security solutions to secure AI. A significant majority of companies plan to procure specialized tools and platforms to IT and security leaders believe that the primary Security for AI - budget contributors secure both the usage and development budget contributors for new solutions for the of AI applications. protection and governance of AI will be IT 63% 57% 37% 23% 9% 7% departments (63%) and information security/ cybersecurity departments (57%). Organizations are looking for These findings show that in addition to a comprehensive security continuing to leverage existing security solution set among new and solutions, organizations see the need to look existing solutions for new solutions that can help address the amplified and emerging risks of AI. IT Department Information Technology Data Operations Legal/Compliance security/ Department Management Department Department Cybersecurity Department Department

The seCURITy TRaNsfORmaTION: a paTh TO a seCURe yes TO aI 26 Organizations should choose a Once AI adoption is underway, security solution for AI that helps a security solution for AI should prepare their environments for help security teams continuously AI adoption with confidence. This discover security, safety, and includes classifying and labeling privacy risks–allowing them to Key elements of data within their environments, as proactively design and adjust well as ensuring robust identity controls and policies to address and access governance to support D evolving threats. a comprehensive a Zero Trust strategy. e i r s a c security solution p o e v r e for AI P r When evaluating security solutions for AI, Generative security and risk leaders should consider AI comprehensive solutions that help companies prepare environments for secure adoption, discover AI-related risks, protect AI systems, and G govern AI data to comply with regulations. o t v c e e r t n o A comprehensive solution is r To comply with regulatory P designed to continuously protect requirements, a security solution AI and data as developers for AI should include guidance deploy AI applications, and as and assessments to evaluate, users and customers interact implement, and strengthen with these applications. This compliance controls, alongside includes protecting sensitive data enterprise-ready compliance in AI prompts and responses, solutions to help govern AI and detecting, blocking, and responding to threats such as interactions more effectively. prompt injections.

The path forward In a rapidly evolving technological landscape, organizations continuously evaluate and respond security and risk leaders can strike a balance to threats. They are also adopting comprehensive between their company’s needs for innovation security solutions for AI that empower them to and security by taking proactive measures to proactively prepare their environments, discover successfully mitigate potential risks associated AI-specific risks, protect their AI systems, and with AI technologies. This enables organizations govern AI to ensure compliance with ever- to confidently embrace AI as a powerful evolving regulations. tool for transformation and growth without compromising security. Through this strategic, multi-faceted approach, organizations can tap into the remarkable To effectively tackle the complexities of security potential of GenAI and drive meaningful for AI, companies are forming security teams innovation without sacrificing the high for AI, optimizing resources, and implementing standards of security required in today’s a robust Zero Trust strategy that helps interconnected world.

Learn more about how Microsoft can help secure and govern AI to accelerate your AI transformation with confidence.

Appendix Methodology References Quantitative research for this study was conducted by Microsoft Security in partnership with MDC 1. 2024 Work Trend Index Annual Report, May 2024 Research Group in June 2024. A total of 402 surveys were completed by enterprise business IT 2. Gartner®, Innovation Guide for Generative AI Models, Arun and data security-decision makers, with consumers from the US, UK, Canada, Australia, and other Chandrasekaran, Arnold Gao, Ben Yan, August 26, 2024. GARTNER is global English markets. Survey respondents were employed full time at an enterprise company with a registered trademark and service mark of Gartner, Inc. and/or its 1,000 or more employees. afÏliates in the U.S. and internationally and is used herein with permission. All rights reserved. In a separate study, qualitative and quantitative research was conducted by Microsoft Security in 3. First Annual Generative AI study: Business Rewards vs. partnership with Answers Research in July 2024. A total of 15 60-minute in-depth interviews were Security Risks, ISMG, Q3 2023 conducted with participants who were employed full-time in an ITDM, SDM, or TDM role, as a C-suite, C-1, or C-2 level in a company with 5,000+ employees. A total of 409 surveys were completed by enterprise IT, technical, and data security-decision makers in the US in a company with 1,000+ employees.