In Part 1, I showed what a cloud-native monitoring stack for Zscaler Cloud Connectors can look like using AWS CloudWatch, complete with dashboards, metrics, and alerting. In this post, we’ll take the next step: automating the setup. I’ll walk you through a reusable CloudFormation template that builds the dashboard, sets up alarms, and ties in your CloudWatch Network Monitor probes (with a few caveats). Whether you’re experimenting in a lab or looking to accelerate production readiness, this template gives you a head start.

## Why CloudFormation

I get it: many organizations standardize on Terraform, and everything here can absolutely be done with it as well. But for the sake of time (I do have a day job at Zscaler!), it was simply faster, and selfishly easier, for me to use CloudFormation. It’s not perfect, but what I’ve put together is:

- Easy to deploy consistently without changes
- Easy to extend and customize for your needs
- Parameterized to support required inputs
- Able to optionally create alarms for email alerts

That said, you can always take this base and “translate” it into Terraform. The metrics, math expressions, and logic are identical regardless of how you choose to deploy.

## Deployment

### Create the CloudWatch Network Monitor

To use Network RTT in the dashboard (and alerts), you first need to manually create a CloudWatch Network Monitor (synthetic monitor) with two probes for the template to reference. Please note this is currently a limitation of CloudFormation: it does not yet support creating these resource types.

Note: I currently have this configured for a single AZ, so you would need to alter the dashboard and alerts to account for multi-AZ configs.

1. Navigate to AWS > CloudWatch > Network monitoring > Synthetic monitors.
2. Create a new network monitor with these settings:
   - Monitor name: something like “zscc-vpc1234-monitor”
   - Subnets: select the Zscaler subnet where the Cloud Connectors are deployed AND one subnet that is configured to route to Zscaler
   - Destination 1: insert the public IP of a trusted destination. This is a bit subjective, since the destination servers could themselves be the problem rather than AWS or Zscaler; in my lab I took the resolved IP address for the login page of the Zscaler cloud my lab is in, such as login.zscalerthree.net
   - Advanced settings: Protocol: TCP, Port: 80, Packet size: 56
3. Once the monitor is created, navigate to the monitor details page and take note of the two probe IDs that were created, along with which one belongs to the Zscaler subnet vs. the workload subnet. You’ll need these values for the CloudFormation template; the sketch below shows how they might plug in.
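To make that hand-off concrete, here is a minimal sketch of the pattern, not the published template itself: the parameter names are illustrative, the SNS topic is an assumption standing in for the optional email alerting, and the alarm presumes the AWS/NetworkMonitor namespace with MonitorName and ProbeId dimensions. Verify metric names, units, and thresholds in your own account before relying on it.

```yaml
# Minimal sketch only. Parameter names, the SNS topic, and the RTT threshold
# are illustrative assumptions, not the published template's actual contents.
Parameters:
  NetworkMonitorName:
    Type: String
    Default: zscc-vpc1234-monitor
  ZscalerProbeId:
    Type: String
    Description: Probe ID sourced from the Zscaler (Cloud Connector) subnet
  WorkloadProbeId:
    Type: String
    Description: Probe ID sourced from the workload subnet
  AlarmEmail:
    Type: String
    Description: Email endpoint for alarm notifications

Resources:
  # Assumed SNS topic backing the email alerts
  AlarmTopic:
    Type: AWS::SNS::Topic
    Properties:
      Subscription:
        - Endpoint: !Ref AlarmEmail
          Protocol: email

  # Alarm on round-trip time for the probe in the Zscaler subnet; presumes
  # the AWS/NetworkMonitor namespace with MonitorName/ProbeId dimensions
  ZscalerProbeRttAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmName: !Sub "${NetworkMonitorName}-zscaler-rtt-high"
      Namespace: AWS/NetworkMonitor
      MetricName: RTT
      Dimensions:
        - Name: MonitorName
          Value: !Ref NetworkMonitorName
        - Name: ProbeId
          Value: !Ref ZscalerProbeId
      Statistic: Average
      Period: 300
      EvaluationPeriods: 3
      Threshold: 100   # assumed milliseconds; confirm the unit your probes report
      ComparisonOperator: GreaterThanThreshold
      TreatMissingData: notBreaching
      AlarmActions:
        - !Ref AlarmTopic
```

A second alarm keyed to WorkloadProbeId would follow the same shape, which is how the dashboard and alerts can distinguish the Zscaler path from the workload path.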
## Caveats and Nuances

Before you run with this setup, a few important notes to keep in mind (some restated here for clarity). These will help you distinguish what is solid information from what is a starting point you can expand on:

- Single ASG across AZs assumption. Cloud Connectors can be deployed with CFT or Terraform as one ASG spanning the AZs or one ASG per AZ. For simplicity, my lab uses one ASG across AZs, which makes it easier to obtain all the metrics.
- One NAT Gateway for the VPC. Even with multiple AZs, my lab has one NAT Gateway for simplicity. Realistically, most customers will have one per AZ.
- Data transfer ≠ exact science. Metrics like GWLB Processed Bytes, Cloud Connector Data Plane Bytes In/Out (a custom metric), and NAT Gateway Bytes are all helpful, but they don’t always align perfectly. That’s because:
  - They measure different points in the flow
  - They may report data at different intervals
  - They can miss spikes or larger transfers depending on timing

For example, a 10 GB file transfer within a short 5-minute window might not show up the same way across all of these metrics. There are also directional differences to keep in mind, so it’s best to use these data points as a baseline to guide further tuning and investigation when there are issues or anomalies. The widget sketch below shows one way to put two of these byte counts side by side and watch the drift.
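As an illustration of that caveat, here is a hedged sketch of a single dashboard widget that overlays GWLB ProcessedBytes against a metric-math sum of the NAT Gateway byte metrics. The GwlbArnSuffix and NatGatewayId parameters are illustrative assumptions, and the published template’s widget layout will differ; the point is simply that the two lines should be expected to drift rather than match.

```yaml
# Sketch of one dashboard widget comparing GWLB bytes to total NAT Gateway
# bytes via metric math. Parameter names are illustrative assumptions.
Parameters:
  GwlbArnSuffix:
    Type: String
    Description: GWLB dimension value, e.g. gateway/zscc-gwlb/0123456789abcdef
  NatGatewayId:
    Type: String
    Description: NAT Gateway ID, e.g. nat-0123456789abcdef0

Resources:
  ByteComparisonDashboard:
    Type: AWS::CloudWatch::Dashboard
    Properties:
      DashboardName: zscc-byte-comparison
      DashboardBody: !Sub |
        {
          "widgets": [{
            "type": "metric",
            "width": 12,
            "height": 6,
            "properties": {
              "title": "GWLB vs NAT Gateway bytes (expect drift, not equality)",
              "region": "${AWS::Region}",
              "stat": "Sum",
              "period": 300,
              "metrics": [
                ["AWS/GatewayELB", "ProcessedBytes", "LoadBalancer", "${GwlbArnSuffix}", {"id": "gwlb", "label": "GWLB processed bytes"}],
                ["AWS/NATGateway", "BytesOutToDestination", "NatGatewayId", "${NatGatewayId}", {"id": "natOut", "visible": false}],
                ["AWS/NATGateway", "BytesInFromDestination", "NatGatewayId", "${NatGatewayId}", {"id": "natIn", "visible": false}],
                [{"expression": "natOut + natIn", "label": "NAT Gateway total bytes", "id": "natTotal"}]
              ]
            }
          }]
        }
```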
## Closing: What’s Next

Short and sweet: this wraps up Part 2. You now have a template to stand up your own monitoring baseline with CloudWatch, with alerts, dashboards, and probes included.

In Part 3, we’ll do something similar by exploring what’s possible in Azure. GCP fans, you’re next!