Overview

This guide shows how to turn NIST AI RMF into enforceable enterprise controls across the AI lifecycle (build, deploy, run). You’ll get a practical control-family mapping, an evidence/logging checklist for audit readiness, and a 30/60/90-day rollout plan to govern GenAI, embedded SaaS copilots, and internal AI apps.

Key terms glossary:

- AI governance is the operational rules, accountability, and oversight that keep AI use safe, compliant, and aligned to business intent.
- AI security posture management (AI-SPM) is continuous discovery and risk assessment of AI apps, models, data connections, and permissions—so misconfigurations and exposures get fixed before they bite.
- An AI bill of materials (AI-BOM) is a traceable inventory of what an AI system is made of (data, models, components, vendors, and dependencies) and how it’s used end to end.
- Prompt injection is an attack that sneaks malicious instructions into what an AI system reads (prompts, files, web pages, or retrieved data) to hijack outputs or actions.
- The Model Context Protocol (MCP) is a standard way for AI tools and agents to securely connect to external data sources and services to fetch context and take actions.
- WebSockets are long-lived, two-way connections that keep AI chats and streaming responses flowing in real time—without the stop/start of traditional web requests.
- Guardrails are enforceable, runtime controls that monitor and restrict AI behavior (inputs, outputs, and actions) to prevent data loss, policy violations, and unsafe outcomes.

Introduction: AI governance is an operational problem, not a policy problem

Here is a scenario that plays out every day across enterprise security teams. Someone in finance pastes a quarterly forecast into ChatGPT to clean up the formatting. A developer uses an AI coding assistant that quietly routes completions through an external model endpoint. A new software as a service (SaaS) platform update quietly activates an embedded artificial intelligence (AI) copilot that now has access to your customer relationship management (CRM) data.

Nobody did anything wrong, exactly. But your sensitive data just left the building, and your acceptable use policy did nothing to stop it.

This is the core problem with how most organizations approach AI governance. They treat it as a documentation exercise. Draft a policy, circulate it, check the box. But with 100% of industries now engaging with AI in some form, written guidelines cannot keep pace with how fast AI is moving into your environment—and they have no mechanism to stop the risks that come with it.

The National Institute of Standards and Technology AI Risk Management Framework (NIST AI RMF) gives you the structure to think about this problem correctly. What it does not give you is enforcement. That gap between framework guidance and controls that actually work in real time is what this guide is designed to close. So let’s close that gap.

We’ll break down NIST AI RMF in plain English, map controls across the build, deploy, and run lifecycle, cover evidence and logging requirements, and give you a 30/60/90-day rollout you can actually execute. One goal throughout: turn governance guidance into enforceable controls with full visibility across public GenAI, embedded SaaS copilots, and internally developed AI.

What AI governance frameworks do (and don’t) solve

Frameworks are not the problem. They are genuinely useful.
NIST AI RMF gives security and compliance teams a shared risk taxonomy, common language, and a reporting structure that works across security, legal, IT, and app teams. When everyone is using the same vocabulary, it is much easier to align stakeholders around actual outcomes.

The problem is what frameworks cannot do, and what too many organizations assume they can.

A framework cannot block a user from pasting source code into ChatGPT. It cannot detect a prompt injection attack in real time. And it does not account for how modern AI systems actually communicate.

Most frameworks also predate the explosion of AI features embedded in enterprise SaaS platforms, which means the risk categories they describe do not fully map to where your exposure actually lives.

What breaks in practice:

- Transaction-based web filters do not work for multi-turn AI conversations
- Keyword matching is not contextual understanding
- Firewalls and virtual desktop infrastructure (VDI) solutions cannot govern AI sessions and modern protocols without significant added cost and operational complexity
- Legacy tools have no awareness of persistent WebSocket connections, Model Context Protocol (MCP) servers, or multi-turn contextual conversations that look nothing like traditional HTTP traffic

The organizations that succeed at AI governance use frameworks as the foundation for policy development and layer technical controls on top to make those policies enforceable. That translation, from principle to enforcement, is where the work actually happens.

NIST AI RMF: Key functions

The NIST AI RMF organizes AI risk into four interconnected functions. On paper, they can read like audit-speak. In practice, each one maps to a set of operational decisions your team needs to make. Here is what they actually mean.

Govern: Set the rules before you need them

Most organizations establish AI policies reactively, after an incident, after a compliance inquiry, after someone in Legal raises an alarm. The Govern function is about getting ahead of that.

Define acceptable use policies that reflect how your organization actually works. Sales teams need AI writing assistants. Engineering teams need code completion tools. A productivity tool that summarizes meeting notes carries different risk than a customer-facing chatbot handling sensitive inquiries. Your policies should reflect those distinctions, not flatten them.

Strong governance policies share four characteristics:

- Specific rather than vague: “Marketing may use approved GenAI tools for draft content creation” beats “Use AI responsibly.”
- Role-based: Different functions have different needs and different risk profiles.
- Actionable: Clear enough that someone could write enforcement rules from them (see the sketch at the end of this section).
- Maintainable: Structured so updates are straightforward as AI capabilities evolve.

Establish a clear definition of sanctioned AI versus prohibited use. Blocking all AI is neither practical nor desirable. The goal is governed adoption. Identify your evidence requirements (logs, inventories, and testing results) before an auditor asks for them. Being audit-ready is dramatically easier when you design for it from the start.
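To make “actionable” concrete, here is a minimal policy-as-code sketch. It is illustrative only: the structure, field names, and evaluate helper are assumptions, not the syntax of NIST AI RMF or of any particular enforcement product. A policy written at this level of specificity can be translated into whatever engine you actually deploy.

```python
from dataclasses import dataclass

@dataclass
class AIUsePolicy:
    """One acceptable-use rule, specific enough to generate enforcement from."""
    role: str                  # who the rule applies to
    allowed_apps: set[str]     # sanctioned AI applications for that role
    blocked_data: set[str]     # data classes that must not leave the org
    action_on_violation: str   # "warn", "isolate", or "block"
    review_date: str           # keeps the policy maintainable

POLICIES = [
    AIUsePolicy(
        role="marketing",
        allowed_apps={"approved-genai-writer"},
        blocked_data={"source_code", "pii", "customer_records"},
        action_on_violation="warn",
        review_date="2025-12-31",
    ),
    AIUsePolicy(
        role="engineering",
        allowed_apps={"approved-code-assistant"},
        blocked_data={"credentials", "pii", "customer_records"},
        action_on_violation="block",
        review_date="2025-12-31",
    ),
]

def evaluate(role: str, app: str, data_classes: set[str]) -> str:
    """Return the enforcement action for one AI interaction."""
    for p in POLICIES:
        if p.role == role:
            if app not in p.allowed_apps:
                return "block"                # unsanctioned app for this role
            if data_classes & p.blocked_data:
                return p.action_on_violation  # sensitive data class detected
            return "allow"
    return "block"                            # no policy defined: default-deny

print(evaluate("marketing", "approved-genai-writer", {"pii"}))  # -> "warn"
```

The value is less in the code than in the discipline: every rule names a role, an app, a data class, and an action, which is exactly what an enforcement layer needs.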
Map: Know what you’re actually dealing with

The Map function is where most enterprises get a humbling reality check. When security teams do their first serious AI inventory, they almost always find more than they expected, often significantly more.

The instinct is to focus on the obvious: ChatGPT, Gemini, Claude. But the harder discovery challenge is everything else.

AI asset categories that are commonly missed:

- Browser extensions with AI-powered writing assistants
- Mobile applications with AI tools on corporate devices
- API integrations where custom applications call AI services directly
- Embedded copilots that activate automatically inside your SaaS platforms
- Developer tools, including integrated development environments (IDEs), command-line interfaces (CLIs), and MCP servers

A complete inventory is not just a list of apps. It is a map of data flows, where information enters AI systems, how it moves, and where it could end up. Establish AI supply chain lineage via an AI Bill of Materials (AI-BOM): trace datasets to models to runtime usage to understand where risk originates and propagates. This is where governance starts.
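What does an AI-BOM entry actually contain? There is no single mandated schema, so treat the following as a sketch of the minimum lineage worth capturing; the field names and example values are assumptions for illustration, not a standard format.

```python
from dataclasses import dataclass

@dataclass
class DatasetRef:
    name: str
    source: str          # where the data came from
    classification: str  # e.g., "public", "internal", "confidential"

@dataclass
class ModelRef:
    name: str
    publisher: str
    license: str
    version: str

@dataclass
class AIBOMEntry:
    """One traceable link in the AI supply chain: data -> model -> runtime usage."""
    system: str                      # the AI app or agent as users see it
    model: ModelRef
    training_data: list[DatasetRef]
    runtime_data_sources: list[str]  # RAG stores, connected SaaS data, and so on
    consumers: list[str]             # teams, apps, or agents that call it
    owner: str                       # accountable person or team

entry = AIBOMEntry(
    system="internal-support-chatbot",
    model=ModelRef(name="example-llm", publisher="ExampleAI", license="commercial", version="2024-06"),
    training_data=[DatasetRef(name="support-tickets-2023", source="crm-export", classification="confidential")],
    runtime_data_sources=["kb-vector-store"],
    consumers=["customer-support-portal"],
    owner="support-engineering",
)
```

When an incident or a model recall hits, this is the record that tells you which datasets, apps, and owners sit in the blast radius.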
Measure: Test what you think you know

Having controls in place is not the same as having controls that work. The Measure function is about closing that gap, continuously, not just at annual assessment time.

Continuous validation requires two layers: automated adversarial testing through AI red teaming (simulating attack techniques including prompt injection, jailbreaks, and context poisoning) and ongoing model evaluation as models and their risk profiles evolve.

AI-specific attack patterns that traditional tools miss:

- Indirect prompt injection: Malicious instructions embedded in documents or data sources that the AI processes—your firewall never sees it
- Context manipulation: Attacks that corrupt the information available to AI systems
- Capability elicitation: Techniques that convince AI systems to perform actions outside their intended scope
- Training data exposure: Methods that extract sensitive information from model weights

These are not edge cases. They are active attack patterns that require purpose-built detection.

Manage: Turn findings into enforcement

Governance without enforcement is just documentation, and documentation does not stop attacks. The Manage function is where governance programs either prove their worth or expose their limits.

When adversarial testing reveals that a particular attack technique succeeds against your AI application, what happens next? In a mature program, the answer is automatic: a runtime guardrail deploys to block that technique in production. The loop between finding and fix closes without a manual remediation cycle in between.

Exception processes matter too. Legitimate business needs will fall outside standard policies. A well-designed exception process documents the business justification, applies compensating controls, and sets review dates to confirm the exception remains necessary. It keeps flexibility without creating permanent blind spots.

Control mapping across the AI lifecycle: Build, deploy, run

Most AI security programs start at runtime, inspecting traffic after AI is already in production. That is the wrong starting point. Risk accumulates across every phase: in the training data, the deployment configuration, and the runtime interaction. Controls need to match.

Build: Development and data preparation

Most build-phase risk goes undetected because traditional security tools were not designed for AI infrastructure. Overly permissive model access, unprotected training pipelines, shared credentials across environments, and missing input validation all create exposure that surfaces later, at runtime, when it is far more expensive to fix.

The starting point is inventory. That means training datasets and data sources, developer environments, authorization models (such as Microsoft Entra ID for agents and AWS Identity and Access Management (IAM)), and AI infrastructure components: large language models (LLMs), MCP servers, and agent platforms. Apply training data controls, enforce least privilege, and track model lineage—publisher, licensing terms, and risk factors all included. Know what you built with before you ship it.

AI security posture management (AI-SPM) makes this visible at scale, surfacing misconfigurations, excessive permissions, sensitive data exposure, and vulnerabilities across GenAI SaaS, embedded agentic AI in SaaS, and internally developed AI, with risk scoring to prioritize what gets fixed first. AI-BOM lineage tracks the full AI supply chain and associated authorization models. Compliance benchmarking maps posture findings to frameworks like NIST AI RMF and the EU AI Act, so you are not running a separate audit process on top of your security workflow.

Build phase checklist

- Inventory training datasets, data sources, developer environments, and AI infrastructure components
- Map authorization models (Entra ID, AWS IAM) for agents and services
- Enforce least-privilege access to training data and model endpoints
- Track model lineage: publisher, licensing terms, and associated risk factors
- Run AI-SPM to surface misconfigurations and excessive permissions before they reach production
- Establish AI-BOM traceability across your full AI supply chain
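As a complement to the least-privilege item in the checklist above, here is a rough sketch of the kind of automated check a posture review performs. The inventory format and the notion of a least-privilege baseline are simplified assumptions for illustration; real AI-SPM tooling works from your actual IAM and Entra ID configuration rather than a hand-written list.

```python
# Hypothetical inventory of principals and what they can reach.
# In practice this would be pulled from your IAM / Entra ID configuration.
INVENTORY = [
    {"principal": "ci-pipeline", "resource": "training-data/raw", "actions": {"read"}},
    {"principal": "support-agent", "resource": "model-endpoint/prod", "actions": {"invoke", "update"}},
    {"principal": "data-scientist", "resource": "training-data/*", "actions": {"read", "write", "delete"}},
]

# Actions each principal should need at most (least-privilege baseline).
BASELINE = {
    "ci-pipeline": {"read"},
    "support-agent": {"invoke"},
    "data-scientist": {"read", "write"},
}

def find_excessive_permissions(inventory, baseline):
    """Flag grants that exceed the least-privilege baseline or use wildcards."""
    findings = []
    for grant in inventory:
        allowed = baseline.get(grant["principal"], set())
        extra = grant["actions"] - allowed
        if extra:
            findings.append(f'{grant["principal"]} has excessive actions {sorted(extra)} on {grant["resource"]}')
        if "*" in grant["resource"]:
            findings.append(f'{grant["principal"]} has a wildcard grant on {grant["resource"]}')
    return findings

for finding in find_excessive_permissions(INVENTORY, BASELINE):
    print(finding)
```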
Deploy: Release, configuration, and access paths

The window between development and production is where a lot of AI security programs go quiet. Configurations get set once and are not revisited. Permissions that made sense in a dev environment carry forward into production. By the time something goes wrong, the misconfiguration is already load-bearing.

Misconfigurations and excessive permissions are far easier to fix before an AI app reaches production than after. Traditional vulnerability scanning, cloud security posture management (CSPM), cloud workload protection platforms (CWPP), and virtual firewalls leave gaps when applied to AI apps because they were built for different threat models. Pre-production assessment needs to account for AI-specific risks: not just common vulnerabilities and exposures (CVEs), but also misconfigurations, permission sprawl, and integration risks specific to AI systems. Apply approval gates and change control to AI deployments the same way you would to any production system. Treat your AI deployment pipeline as a security boundary.

A purpose-built AI security platform handles this at the deploy phase by providing risk analysis across SaaS and internally developed AI apps and infrastructure before they go live, with prioritized remediation guidance so teams know exactly what to address and in what order. Continuous automated adversarial testing across build, deploy, and runtime phases, with remediation tracking as AI environments evolve, replaces the point-in-time assessment model that leaves gaps between audit cycles. Custom policy creation and governance requirement mapping support compliance alignment at the deployment stage rather than scrambling to retrofit it after.

Deploy phase checklist

- Review all configurations and entitlements before any AI app reaches production
- Apply approval gates and change control to AI deployments the same way you would any production system
- Run pre-production AI-SPM risk analysis to catch AI-specific misconfigurations that CVE-based scanning will miss
- Validate that the system resists prompt injection, jailbreaks, and data extraction before go-live
- Map deployment configurations to governance requirements and document for audit readiness

Run: Production usage and runtime interactions

Runtime is where most security programs focus, but the threat surface here is more complex than legacy tools were built to handle. Many GenAI services rely on WebSockets rather than traditional HTTP. Developer tools increasingly use MCP. Multi-turn AI conversations carry context across interactions in ways that a transaction-based inspection model simply cannot follow. Governing AI at runtime means accounting for this protocol-level complexity, not just URL categories and request/response snapshots.

When an employee pastes confidential information into an AI prompt, you need inline inspection that can block that transmission before the data leaves your environment. When a prompt injection attack attempts to manipulate your AI application through malicious content embedded in a document it is processing, you need detection that understands what the model is being asked to do, not just what the request looks like on the wire.

Inline inspection prevents data loss and protects against advanced threats at the prompt and response layer. Access controls by user and group, with allow, block, warn, and isolation enforcement modes, let you apply graduated policy rather than blunt category blocks. Secure browser technology extends coverage to unmanaged and bring-your-own-device (BYOD) access, so unmanaged devices do not become the path of least resistance. Prompt extraction and classification covers request and response traffic across dozens of GenAI apps. Advanced AI detectors support content moderation, flagging off-topic or policy-violating usage before it becomes a compliance event. Applying zero trust principles to AI development environments adds inline controls for IDEs connecting to AI infrastructure. Runtime guardrails and detectors address prompt injection, personally identifiable information (PII), source code leakage, and unsafe outputs across production AI systems.

Run phase checklist

- Deploy access controls by user and group for all generative and embedded AI apps
- Enable inline data loss prevention (DLP) on prompts and uploads for sensitive data types
- Extend coverage to unmanaged and BYOD devices via secure browser technology
- Activate prompt extraction and classification across major GenAI apps
- Deploy runtime guardrails with detectors for prompt injection, jailbreaks, PII leakage, and content moderation
- Confirm your inspection layer handles WebSocket and MCP traffic, not just HTTP
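To make inline DLP on prompts concrete, here is a deliberately simplified sketch of the kind of check an inline inspection layer applies before a prompt leaves your environment. The detector patterns and the warn/block decision are illustrative assumptions; production DLP engines use far richer dictionaries, exact-data matching, and contextual classifiers rather than a handful of regular expressions.

```python
import re

# Illustrative detectors only; real DLP dictionaries are much broader.
DETECTORS = {
    "credit_card": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "source_code_hint": re.compile(r"^\s*(?:def |class |import |#include)", re.MULTILINE),
}

def inspect_prompt(prompt: str, mode: str = "block") -> dict:
    """Classify a prompt and return an enforcement decision plus metadata for logging."""
    hits = [name for name, pattern in DETECTORS.items() if pattern.search(prompt)]
    action = "allow" if not hits else ("warn" if mode == "warn" else "block")
    return {"action": action, "detectors": hits, "prompt_length": len(prompt)}

print(inspect_prompt("Please reformat this: 4111 1111 1111 1111"))
# -> {'action': 'block', 'detectors': ['credit_card'], ...}
```

The useful part for governance is the returned metadata: the same record that drives the block or warn decision becomes the evidence you log.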
Turning guidance into enforcement: The control families

Knowing where controls apply is only half the equation. The other half is understanding what those controls actually are and how they work together as a unified enforcement layer rather than a stack of point tools.

AI asset management: Discovery and posture

AI asset management is the foundation. You cannot enforce policies against AI you cannot see.

Shadow AI detection identifies unsanctioned generative AI applications that employees use without approval. It also surfaces AI features embedded within sanctioned SaaS platforms that may have activated without explicit awareness, because SaaS platforms are increasingly AI apps, whether you configured them that way or not.

AI-SPM goes further, evaluating AI-specific risks across your portfolio: misconfigurations, excessive permissions, sensitive data exposure, and known vulnerabilities, with risk scoring and guided remediation to focus effort where it matters most. This extends across services, agents, and retrieval-augmented generation (RAG) frameworks. AI agent detection covers both embedded SaaS agents and enterprise-deployed agents, with visibility into related traffic flows.

AI access security: Who can use what, and how

Access security determines which users can interact with which AI applications and under what conditions.

Policy enforcement modes, from least to most restrictive:

- Full access: Approved apps and user groups with no restrictions
- Warning mode: Triggers data handling reminders at the point of interaction
- Browser isolation: Prevents direct data transfer for sensitive applications
- Complete blocking: Reserved for the highest-risk cases

Isolation also functions as an enforcement mode, controlling copy/paste and actions within AI sessions. Secure browser technology extends this coverage to unmanaged devices. Granular upload controls restrict what data users can send to AI applications.

Two principles to anchor your approach: enable sanctioned AI safely rather than defaulting to blocking everything, and do not rely on keyword-only or transaction-based controls for multi-turn AI conversations.
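Read as a decision procedure, graduated enforcement is straightforward. The sketch below is an assumption about how you might express it, not any product’s policy model: it simply picks the mode that satisfies the rules for a given user group and application risk tier, and defaults to the most restrictive option when something is unknown.

```python
# Risk tiers for AI apps (hypothetical labels for illustration).
APP_RISK = {
    "approved-genai-writer": "low",
    "embedded-crm-copilot": "medium",
    "unsanctioned-chatbot": "high",
}

# Enforcement by (user group, app risk tier).
ENFORCEMENT = {
    ("sales", "low"): "allow",
    ("sales", "medium"): "warn",
    ("sales", "high"): "block",
    ("engineering", "low"): "allow",
    ("engineering", "medium"): "isolate",  # session allowed, copy/paste restricted
    ("engineering", "high"): "block",
}

def enforcement_mode(group: str, app: str) -> str:
    risk = APP_RISK.get(app, "high")                # unknown apps default to highest risk
    return ENFORCEMENT.get((group, risk), "block")  # unmapped combinations default-deny

print(enforcement_mode("sales", "embedded-crm-copilot"))    # -> "warn"
print(enforcement_mode("engineering", "new-unknown-tool"))  # -> "block"
```

The pattern worth preserving is the defaults: unknown apps and unmapped combinations fall through to the most restrictive mode, not the most permissive one.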
Data security: What data can be shared

Most data leakage conversations focus on what goes into an AI prompt. The response layer is just as important and more often overlooked. A model that has been fed sensitive context through retrieval-augmented generation pipelines, connected data sources, or prior conversation turns can surface that information in its outputs even when the original prompt looked clean. Enforceable data security means covering both directions: inline DLP on prompts and uploads for source code, PII, payment card industry (PCI) data, and protected health information (PHI), and response-layer detectors that catch leakage on the way out.

Content governance: Acceptable use

Content governance enforces organizational policies about how AI gets used, beyond data protection. Advanced AI detectors analyze prompts and responses to detect policy-violating usage, including toxic content, off-topic interactions, restricted topics, and competitive topics, and enforce appropriate controls. This is contextual understanding applied at scale, not keyword matching.

AI red teaming and governance mapping: Continuous policy alignment

Red teaming provides ongoing validation that AI systems resist attack and meet governance requirements. Automated adversarial testing using thousands of simulated attack techniques tests your AI applications continuously, not just at point-in-time assessments. Prompt hardening and testing simulates exploitation of system prompts, then generates hardened alternatives with step-by-step guidance.

The enforcement side is where this pays off: a runtime detector library covering jailbreaks, prompt injection, data leakage, and content moderation, combined with automated policy generation that translates red teaming findings directly into production guardrails. When a test finds a vulnerability, the fix deploys to runtime. AI security controls map to NIST AI RMF and EU AI Act requirements, making governance readiness an output of your security program rather than a separate workstream.

Evidence and auditability: What to log to prove governance

Governance programs must demonstrate compliance, not just claim it. Proper evidence collection supports audits, investigations, and regulatory inquiries.

Minimum evidence set (Baseline)

Start with asset inventory: all AI models, agents, and services operating in your environment, where they are deployed, and their dependencies. Add data assets connected to AI, including datasets, vectors, and exposure status, and access paths and entitlements showing who and what can reach sensitive training data. AI-BOM-style lineage evidence traces datasets to models to runtime usage to support traceability requirements.

Interaction evidence (Runtime)

At the runtime layer, log the following:

- Prompt and response activity through extraction and classification. You do not necessarily need to store full text. Classification metadata often satisfies audit requirements.
- DLP events with blocked/allowed status and dictionary hit type
- Access policy actions: warn, block, isolate, and copy/paste restrictions
- Content moderation events with topic classification and enforcement action
- Agent visibility evidence: detected agents, both embedded and enterprise-deployed, along with related traffic flows
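To show what “classification metadata instead of full text” can look like, here is a hypothetical interaction-evidence record. The field names and values are assumptions for illustration, not a required schema; the point is that a single structured event can serve the DLP decision, the access policy action, and the audit trail at the same time.

```python
import json
from datetime import datetime, timezone

# Hypothetical interaction-evidence record: classification metadata, not prompt text.
event = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "user": "jdoe",
    "group": "finance",
    "app": "public-genai-chat",
    "direction": "prompt",                     # or "response"
    "classification": ["financial_forecast"],  # detector labels, not the content itself
    "dlp": {"status": "blocked", "dictionary_hits": ["confidential_keywords"]},
    "policy_action": "block",
    "protocol": "websocket",
}

print(json.dumps(event, indent=2))
```

Retention of records like this, alongside the inventory and lineage evidence above, is what turns “we have a policy” into something you can hand an auditor.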
Governance reporting

Compliance posture dashboards show framework alignment status and highlight areas of drift. Remediation tracking documents how identified issues get addressed. Audit-ready reporting outputs support both internal and external audits.

30/60/90-day rollout plan for enforceable governance

Implementing AI governance works best as a phased program that builds capabilities progressively while delivering value quickly.

First 30 days: Establish enforceable baselines

Start with discovery. Surface the unsanctioned GenAI applications and embedded SaaS AI features already in use across your organization. This number is almost always larger than expected.

Priority actions in the first 30 days:

- Discover AI usage and assets: shadow AI and AI ecosystem inventory
- Define initial policies covering allowed apps, restricted data types, and acceptable use
- Enable prompt and response visibility and classification across major GenAI apps
- Turn on inline DLP for prompts covering source code, PII, PCI, and PHI data types
- Deploy access controls (warn, block, and isolate) for the top GenAI applications in use

Set your foundational guardrails early: do not treat AI as standard web traffic, and do not rely on keyword-only or transaction-based policies for multi-turn AI conversations.

Days 31 to 60: Expand controls and posture management

Priority actions in days 31 to 60:

- Extend discovery to models, agents, services, datasets, vectors, and developer tool paths including IDEs and CLIs
- Establish AI-BOM traceability from datasets and data sources to models to runtime usage, including authorization models like Entra ID for agents and AWS IAM
- Assess misconfigurations and excessive permissions, and prioritize remediation using AI-SPM risk scoring
- Implement guided remediation workflows and enforce least-privilege across your AI portfolio
- Add content moderation policies for off-topic, toxic, restricted, and competitive content
- Introduce continuous red teaming and prompt hardening for critical AI applications
- Begin compliance benchmarking and reporting against NIST AI RMF, the EU AI Act, HIPAA, and GDPR as applicable

Days 61 to 90: Operationalize continuous governance

Priority actions in days 61 to 90:

- Automate governance mapping of AI risks to frameworks for ongoing NIST AI RMF and EU AI Act readiness
- Deploy runtime guardrails and detectors for prompt injection, jailbreaks, data leakage, and content moderation
- Use automated policy generation to push red teaming findings directly into enforceable runtime policies
- Set up continuous monitoring for drift, new assets, new AI apps, and new risk classes
- Standardize audit packages with monthly and quarterly reporting cycles and evidence retention that meets your regulatory requirements

With the right framework stack in place, the question becomes execution.

Related guidance to reference beyond NIST AI RMF

NIST AI RMF provides a strong foundation for AI governance, but several complementary frameworks address specific aspects of AI risk. Use them together rather than treating them as competing options.

Framework | Best used for
EU AI Act | Risk-based classification for AI systems operating in European markets
OWASP LLM Top 10 | Technical implementation guidance on large language model vulnerabilities
MITRE ATLAS | Threat modeling against adversarial tactics targeting AI systems
ISO/IEC 42001 | Formal AI management system standard for mature governance programs

Depending on your industry and geography, NIS2, DORA, HIPAA, GDPR, and SAMA requirements may also apply. The practical approach: use NIST AI RMF as the governance foundation, incorporate EU AI Act requirements for applicable systems, reference the Open Worldwide Application Security Project (OWASP) for technical implementation, and leverage MITRE Adversarial Threat Landscape for AI Systems (ATLAS) for threat modeling.

How Zscaler supports enforceable AI governance

Most AI security conversations end up in the same place: a stack of point tools that each solve one slice of the problem without talking to each other. You get a posture tool, an access tool, a DLP tool, a red teaming tool, and a governance program that is more fragmented than the risk it is trying to address.
Zscaler AI Security is built differently. It extends the Zero Trust Exchange™ platform, already proven at enterprise scale for users, workloads, clouds, and branches, to cover the full AI lifecycle from build through deploy through run. Inventory, access control, posture management, and runtime guardrails are designed to work together. And when red teaming finds a vulnerability, enforcement deploys automatically. That closed loop is not a feature. It is the architecture.

What this looks like in practice:

- AI Asset Management and AI-SPM: Full AI ecosystem visibility across GenAI SaaS, embedded agentic AI in SaaS, and internally developed AI. AI-BOM lineage, AI agent detection, AI-SPM risk scoring, and prioritized remediation are all part of the same workflow.
- AI Access Security: Controls that go beyond URL categories: allow, block, warn, and isolate by user and group, with prompt extraction and classification, and Zero Trust Browser coverage for unmanaged devices.
- AI Red Teaming and AI Guardrails: Continuous adversarial testing, prompt hardening, automated policy generation, and runtime guardrails that stay current as your AI environment evolves.
- Governance mapping: AI security controls map to NIST AI RMF and EU AI Act requirements as a natural output, not a separate reporting workstream bolted on at the end.

AI governance does not have to be a choice between security and speed. The organizations moving fastest on AI adoption are the ones that built enforceable controls early, so they can say yes to AI with confidence, not just caution.

Request a demo of Zscaler AI Security today.
