Introduction

AI is here, enabling tangible, real-world use cases. Boards are talking about it. Teams are experimenting with it and deploying it. Roadmaps are being rewritten around it.

But there's a hard truth most organizations overlook: if your foundation isn't secure, AI will amplify your risk, not just your capability.

Much of the discussion around AI security focuses on models, data, and governance. That's critical, but something foundational is often missed, or brought to light too late. Before you fully embrace AI and operationalize it, you need to answer two questions:

- What resources can be reached from the internet?
- What can move laterally in your enterprise?

If you don't control those two things, you will always be exposed to breaches.

If You're Reachable, You're Breachable

AI doesn't just introduce new capabilities; it also introduces new and faster ways to discover and exploit your infrastructure, whether accidentally or maliciously. Agents, automation, and modern tooling can continuously scan and profile IT environments at machine speed. What used to take time, skill, and persistence now happens by default, and is accessible not only to broad, skilled adversarial audiences but to unskilled yet motivated ones.

If your applications or infrastructure are exposed (public IPs, open ports, reachable services), they are not just available. They are visible, profilable, and targetable. This means:

- You are continuously being mapped
- Your posture is being analyzed
- Your weaknesses are being identified and exploited faster than ever

The reality is simple: if something can be reached, it can be profiled. If it can be profiled, it can be exploited and breached, and that includes your AI models.

Reducing the attack surface, namely making AI models and applications invisible unless explicitly accessed, is no longer a best practice. It's table stakes.
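The "reachable means discoverable" point can be sketched in a few lines. This is purely illustrative (the app names are hypothetical, and this is not any vendor's scanner or broker): a service that answers unsolicited inbound probes can be mapped by automated tooling, while an app published only through an outbound-only broker exposes no listener to find.

```python
# Illustrative sketch: why "reachable" means "discoverable".
# App names and the inbound_listener flag are hypothetical modeling choices.
from dataclasses import dataclass


@dataclass(frozen=True)
class App:
    name: str
    inbound_listener: bool  # True = public IP/open port; False = brokered, outbound-only


def scan(apps):
    """Simulate automated surface mapping: anything that answers an
    unsolicited inbound probe becomes visible and profilable."""
    return {a.name for a in apps if a.inbound_listener}


estate = [
    App("legacy-vpn", inbound_listener=True),          # exposed: open port
    App("ai-inference-api", inbound_listener=False),   # brokered: no inbound path
]

print(sorted(scan(estate)))  # prints ['legacy-vpn']
```

Only the exposed service shows up; the brokered one is simply not there to be profiled, which is the posture the section above argues for.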
Lateral Movement Makes Small Problems Big

Even in well-defended environments, initial access is rarely the end goal. It's the starting point. In traditional attacks, lateral movement is what turns a foothold into a breach. Once inside your environment, attackers move across systems, escalate privileges, and expand impact.

With AI, that risk doesn't just remain; it accelerates.

AI agents are dynamic. They connect to systems, interact across environments, and increasingly act with autonomy. Whether they're running on endpoints, inside your infrastructure, or interacting with third parties, they create new and often unintended paths. If an AI agent is compromised, or simply behaves in an unexpected way, the ability to move laterally can turn a contained issue into a systemic one.

Think of a clinical AI agent with access to patient Electronic Health Records, connected to labs, imaging systems, and billing platforms. Now imagine it gains access to more than it should, or simply takes a path no one anticipated, and starts touching records across patients, departments, or even external systems. Patient data doesn't have to be "stolen" to be compromised. It just has to be exposed.

This is the risk most organizations underestimate. Eliminating lateral movement is not about improving detection. It's about removing the opportunity entirely.

Zero Trust Changes the Equation

This is where architecture matters. Zero Trust is not a control layered on top. It's a different way of designing connectivity. Zscaler's Zero Trust Exchange is built on a simple principle: nothing is trusted, everything is verified, and access is explicit.

There is no implicit network access, as with firewalls or flat networks.
No broad connectivity to exploit. Instead:

- Applications are not exposed to, and therefore not discoverable from, the internet
- Users, workloads, and agents connect only to what they are explicitly allowed to reach: the apps themselves, not the network
- Every connection is verified, scoped, and continuously monitored and evaluated
- Crosstalk is visible, and even failed attempts to communicate are immediately surfaced

The result is a fundamentally different security posture. Even if something goes wrong and an AI agent "finds a way", the blast radius is drastically reduced:

- To a specific user
- To a specific workload
- To explicitly allowed connections

There is no network to traverse and no hidden paths to discover. If alarms are blaring, remediation is immediate.

This Is the Foundation for AI

Organizations that are moving quickly and safely on AI are not starting with models. They're starting with architecture. They are:

- Reducing the attack surface by making AI models invisible to the internet, so there is less to discover and exploit
- Eliminating lateral movement, so that if an AI agent is compromised or behaves unexpectedly, issues cannot spread
- Designing for containment by default, in case things go south

This doesn't slow innovation. It enables it. Once the foundation is in place, teams can experiment, deploy, and scale AI with confidence, without exposing the broader enterprise.

Alibaba Incident

We don't just recommend protecting your AI deployments; we recommend it strongly, because exactly such a case recently happened with Alibaba. Read our blog here to learn more about this incident.

The Bottom Line

AI will explore, connect, and find paths you didn't expect or don't know exist. The question is not whether that happens. The question is whether your architecture assumes it will. Before you embrace AI at scale, address the foundation. Reduce what can be reached. Eliminate how things can move. Everything else builds on that.
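The explicit-allow, default-deny model described above can be sketched in a few lines. This is a hypothetical policy shape for illustration only (identity and app names are invented, and this is not the Zero Trust Exchange API): every connection is checked against explicit grants, anything not listed is denied, and denied attempts are surfaced rather than silently dropped.

```python
# Hypothetical default-deny policy sketch (illustrative, not a product API).
# Explicit (identity, app) grants; everything else is denied and logged.
ALLOWED = {
    ("clinical-agent", "ehr-api"),       # explicitly granted
    ("clinical-agent", "lab-results"),   # explicitly granted
}

denied_log = []  # failed attempts stay visible for immediate remediation


def authorize(identity: str, app: str) -> bool:
    """Allow only explicit (identity, app) grants; log every denial."""
    if (identity, app) in ALLOWED:
        return True
    denied_log.append((identity, app))
    return False


assert authorize("clinical-agent", "ehr-api")
# A compromised agent "finding a way" toward billing has no path to take:
assert not authorize("clinical-agent", "billing-platform")
assert denied_log == [("clinical-agent", "billing-platform")]
```

The blast radius is exactly the grant list: there is no broader network for the agent to traverse, and the denial log is what turns a blocked attempt into an immediate signal.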
Before You Embrace AI, Fix This First

AI is accelerating fast, and so are the risks. Most security conversations focus on models and data. The bigger issue is more fundamental: what can be reached can be breached, and what can move laterally inside your environment, whether intentional or accidental, can turn minor issues into major ones.

If your applications are exposed, they can be discovered, scanned, and breached. If lateral movement is possible, a small issue can quickly become a systemic one, especially with AI agents that operate across systems. This is why leading organizations are focusing first on two things:

- Reducing the attack surface so nothing is reachable unless explicitly allowed
- Eliminating lateral movement through Zero Trust architecture

Get this foundation right, and AI becomes an accelerator. Get it wrong, and it amplifies risk.

Read more.
