Board-level cyber risks requiring oversight: autonomous AI agents acting beyond their instructions, AI governance creating new D&O liability exposure, stolen travel data weaponized within hours, and a Russian-linked attack on European energy.

AI Agents Are Making Decisions They Were Never Asked to Make

Companies deploying autonomous AI agents across business systems are seeing real productivity benefits, but a pattern is emerging: these agents are acting beyond their instructions in ways no one anticipated.

An experimental AI agent built on Alibaba’s technology independently began mining cryptocurrency and opened covert network tunnels to external servers during training. No one told it to. Last month at Meta, an AI agent responding to an engineering question instructed an employee to take actions that exposed sensitive user and company data internally for two hours, triggering a major security alert. In another recent case, an AI coding agent reportedly wiped a company database despite being explicitly told not to.

These are not edge cases. They illustrate “agent drift,” the gap between what an agent is told to do and what it actually does when it encounters obstacles or optimization opportunities. The emerging best practice is to apply zero trust principles to AI agents: rather than granting broad permissions, explicitly define which systems, data, and peer agents each one can reach.
Just as users should only have permissions to access certain applications and data, AI agents should be treated as untrusted entities requiring continuous verification, permissioned only for their specific task and nothing more.

What Directors Should Ask Management:

- How many AI agents are operating within our environment, and do we have a complete inventory of what systems and data each one can access?
- Are AI agents governed by zero trust principles, with access limited to the specific applications, data, and actions required for each task, or are they operating with broader privileges than any individual employee would be granted?
- Have any of our AI agents acted beyond their intended scope, and how would we know if they had?

AI Governance Gaps Are Creating New D&O Liability Exposure

A new report from Aon warns that AI is accelerating governance expectations for boards. Courts, regulators, and insurers increasingly expect directors to understand how AI is used in their organizations and to show that risks, including model failure, data misuse, and third-party dependency, are being addressed. The most material exposures include oversight failures that lead to financial or regulatory harm, disclosure risk when AI is material to performance or strategy, and shareholder litigation tied to weak board oversight.

The insurance market is responding. Aon reports that more than 90% of insurance decision-makers now view AI-driven incidents as a material concern, and D&O underwriters are increasingly evaluating governance maturity, disclosure discipline, and controls around data leakage and third-party AI exposure.
Organizations that can demonstrate documented model testing, robust oversight, and safeguards against data leakage will secure more favorable terms and greater capacity.

What Directors Should Ask Management:

- Can we demonstrate to our D&O insurer that we have a documented AI governance framework, and has it been reviewed by the board within the last twelve months?

Booking.com Breach Weaponized Within Days

Booking.com confirmed on April 13 that hackers gained unauthorized access to customer reservation data, including names, email addresses, phone numbers, and booking details. The company reset reservation PINs and began notifying affected customers. Within days, customers reported receiving convincing fake emails and WhatsApp messages impersonating the travel platform that used the stolen booking details to trick people into sharing payment card information.

Attackers have a narrow window to exploit stolen data before the affected company can notify customers and prompt them to take protective action. AI is compressing that window further, enabling hackers to generate convincing, personalized phishing messages at scale within hours of a breach. For directors, this highlights the importance of understanding not just how quickly a breach can be detected, but how fast the organization can notify affected parties and disrupt downstream fraud before it scales.

What Directors Should Ask Management:

- In the event of a breach involving customer data, how quickly can we execute notification, and what processes exist to disrupt follow-on phishing or fraud campaigns that use the stolen information?

Russian-Linked Hackers Attempted Destructive Attack on Swedish Power Plant

Sweden’s government disclosed on April 15 that pro-Russian hackers linked to Russian intelligence attempted a destructive cyberattack against a thermal power plant. The attack, which took place in mid-2025 but was only revealed this month, was blocked by a built-in protection mechanism at the facility.
Sweden’s defense minister described the incident as evidence of “riskier and more reckless behavior,” signaling a shift from espionage toward attempted disruption of critical infrastructure.

For directors, the lesson is broader than the energy sector. Nation-state cyber activity is becoming more aggressive, with operational disruption—not just data theft or ransomware—an increasingly plausible risk. Organizations connected to critical services, industrial environments, or essential supply chains should ensure resilience plans account for destructive attacks and that those plans have been tested.

What Directors Should Ask Management:

- Does our business continuity planning account for destructive cyberattacks on operational infrastructure, not just data theft and ransomware, and has it been tested within the last twelve months?

*****

Zscaler is a proud partner of NACD’s Northern California chapter. We are here as a resource for directors to answer questions about cybersecurity or AI risks, and are happy to arrange dedicated board briefings. Please email rsloan[@]zscaler.com to learn more.