Deploying and configuring cloud workload security shouldn’t have to be so difficult. If you’re still working with the complex traditional way of deploying and managing legacy firewalls or VPNs in the cloud, it’s high time to move on and look at Zscaler Workload Communications.

Zscaler Workload Communications has now expanded its support to Google Cloud, one of the most widely adopted clouds, alongside AWS and Microsoft Azure.
How it works

Before we jump into the design options for Workload Communications on Google Cloud: if you need a quick refresher on Zscaler Cloud Connector (the VMs that facilitate secure egress traffic for cloud workloads and enable Workload Communications), you can read about it here.
Workload Communications on Google Cloud Platform

Let’s take a closer look at different Google Cloud networking design options as well as the pros and cons of each design.

Google Cloud has an interesting feature called Shared VPC Architecture or Shared Project, which provides great flexibility for the Networking team to centralize cloud security management and control. Using Shared VPC Architecture, a developer can focus on the development side while the Networking team completely manages and controls networking. Using Shared VPC Architecture in Google Cloud is a recommended best practice. For more information, check out Shared VPC | Google Cloud.
Google Cloud Provisioning Responsibilities

Shared Project (Host Project)

Owned by the Networking team; includes the complete set of network constructs (Shared VPC, subnets, routing, and more).
Cloud Connector instances are part of this project.
Network resources in the Shared Project are shared with Service Projects; for example, subnets are shared with different Service Projects.

App Project (Service Project)

Owned by the development team.
Owners use whatever network resources the Shared Project shares with them to deploy instances in App Projects.
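Since the post mentions full Terraform automation support, here is a hedged sketch of how this host/service project split can be provisioned. The project IDs are hypothetical placeholders, not values from the original post:

```hcl
# Designate the Networking team's project as the Shared VPC host project.
resource "google_compute_shared_vpc_host_project" "host" {
  project = "networking-host-project" # hypothetical host project ID
}

# Attach a development team's Service Project to the host project so it
# can consume the shared subnets.
resource "google_compute_shared_vpc_service_project" "app" {
  host_project    = google_compute_shared_vpc_host_project.host.project
  service_project = "app-service-project" # hypothetical service project ID
}
```

With this in place, the Networking team keeps full control of the VPC, subnets, and routing in the host project, while developers only see the subnets shared into their Service Project.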

Single Shared VPC Regional Cloud Connector Design

This is based on a Single Shared VPC where:

The workloads and Cloud Connectors are part of the same VPC but different Projects
Cloud Connectors are part of a Shared Project in complete control of the Networking team
Subnets from this Shared VPC are shared with Service Projects for developers to deploy app VMs or serverless apps

A VPC in GCP is a global construct that can span all supported regions. In most cases, if you want to avoid VPC peering and use plain Single Shared VPC for each environment (Prod, UAT, Dev, Pre-Prod, etc.), you can proceed with this design.

In a custom-mode VPC, Google Cloud's implied firewall rules deny all ingress by default, so subnets can't communicate with each other until you create firewall rules that allow it. Therefore, even though the workloads and Cloud Connectors are part of the same VPC, you still have access control at the subnet level using Google Cloud firewall rules, and you can still span multiple regions with a single VPC since it's a global construct in GCP.
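As a hedged sketch of that subnet-level access control, a Terraform firewall rule might look like the following. The project ID, network name, subnet CIDR, and network tag are all hypothetical placeholders:

```hcl
# Allow workloads in a hypothetical app subnet to reach instances tagged
# as Cloud Connectors; all other ingress stays blocked by the implied
# deny-ingress rule.
resource "google_compute_firewall" "allow_workloads_to_cc" {
  name    = "allow-workloads-to-cc"
  project = "networking-host-project" # hypothetical host project ID
  network = "shared-vpc"              # hypothetical Shared VPC name

  direction     = "INGRESS"
  source_ranges = ["10.10.0.0/24"]    # hypothetical workload subnet CIDR
  target_tags   = ["cloud-connector"] # hypothetical tag on CC instances

  allow {
    protocol = "all"
  }
}
```

In practice you would scope the protocols and ports more tightly; this sketch only illustrates where subnet-level control lives in the design.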

Pros and cons of this design:

Pros:

Zscaler Cloud Connectors are deployed regionally—workloads can access the internet using regional Cloud Connectors along with regional load balancers.
Provides a low-latency solution.
Avoids cross-region traffic flows, optimizing customer costs.
Plain vanilla design with Single Shared VPC per environment.
Decentralized design improves fault tolerance.
Enables grouping and sharing of Cloud Connector instances at the region level.
Minimal VPCs or VPC peerings as workloads and Zscaler Cloud Connectors are part of the same VPC.

Cons:

Requires network tags for workloads to forward traffic to regional Cloud Connectors.
Automation pipelines should be in place for tagging workloads.
Requires strong IAM controls as Project-level network tags can be changed at any time by the Project owner or editor. Tag edits could impact the traffic flow for the specific instance.
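The network-tag mechanism from the cons above can be sketched in Terraform as a tagged default route that steers workloads to a regional Cloud Connector frontend. The names, priority, and the internal load balancer forwarding rule (assumed to be defined elsewhere, fronting the regional Cloud Connectors) are hypothetical:

```hcl
# Hypothetical sketch: workloads carrying the regional tag send their
# default-route traffic to the regional Cloud Connector internal LB.
resource "google_compute_route" "workloads_via_cc_us" {
  name       = "default-via-cc-us"
  network    = "shared-vpc"          # hypothetical Shared VPC name
  dest_range = "0.0.0.0/0"
  priority   = 900                   # beats the default internet route (priority 1000)
  tags       = ["cc-us"]             # network tag applied to workloads in this region

  # Assumed internal LB forwarding rule fronting the regional Cloud
  # Connectors, defined elsewhere in the configuration.
  next_hop_ilb = google_compute_forwarding_rule.cc_ilb_us.self_link
}
```

Because the route only matches tagged instances, keeping those tags applied (and protected by IAM) is exactly the operational burden the cons list calls out.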

Single Shared VPC Centralized Cloud Connector Design

This design is similar to the first, except Cloud Connectors are hosted in a centralized location while workloads can be in different regions. This Single Shared VPC design with cross-regional access is mostly used when workloads span multiple regions and you want to group geographically close regions and send their traffic through a centralized location.

This avoids the need to deploy and manage Cloud Connectors in each region for geographically close workloads.

Pros and cons of this design:

Pros:

Easy to deploy, with no need for any network tags for workloads.
Plain vanilla design with a Single Shared VPC per environment.
Simple routing changes with two default routes: one for workloads without any network tags, and another for Cloud Connectors, matched by network tags, pointing to the internet gateway.

Cons:

Relies on cross-region traffic flows.
Higher latency and reduced fault tolerance, since Cloud Connectors are deployed centrally in a single region.
Cross-region traffic costs will need to be accounted for in this design.
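The two-default-route scheme from the pros list above can be sketched in Terraform as follows. Network names, priorities, the Cloud Connector tag, and the internal LB forwarding rule (assumed to be defined elsewhere) are hypothetical placeholders:

```hcl
# Untagged workloads anywhere in the Shared VPC fall through to the
# centralized Cloud Connector internal LB as their default route.
resource "google_compute_route" "workloads_default" {
  name       = "default-via-central-cc"
  network    = "shared-vpc"  # hypothetical Shared VPC name
  dest_range = "0.0.0.0/0"
  priority   = 900

  # Assumed internal LB forwarding rule fronting the central Cloud
  # Connectors, defined elsewhere in the configuration.
  next_hop_ilb = google_compute_forwarding_rule.cc_ilb.self_link
}

# The Cloud Connectors themselves carry a network tag, so their default
# route points straight at the internet gateway instead.
resource "google_compute_route" "cc_egress" {
  name             = "cc-default-to-internet"
  network          = "shared-vpc"
  dest_range       = "0.0.0.0/0"
  priority         = 800
  tags             = ["cloud-connector"] # hypothetical tag on CC instances
  next_hop_gateway = "default-internet-gateway"
}
```

Note that only the Cloud Connectors need tags here; workloads stay untagged, which is what makes this design easier to operate than the regional one.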

Multi-VPC Shared VPC Cloud Connector Design

This is mainly for cases where you want VPC-level isolation for each Project in your organization. Because Google Cloud doesn't yet support transitive routing across VPC peerings, this design requires you to configure Hub & Spoke VPC peering as well as peering between Workload VPCs wherever they need to communicate.

Once again, the VPCs are completely managed by the Networking team as part of the Shared Project, and these VPCs are shared with the Spoke Projects along with the peering and routing configuration.

As part of routing, you just need to make sure to export/import the default route from the Hub VPC to Spoke VPCs.
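That export/import step can be sketched as a pair of Terraform peering resources. The hub and spoke network resources are assumed to be defined elsewhere; all names here are hypothetical:

```hcl
# Hub side: export custom routes (including the default route toward the
# Cloud Connectors) to the spoke.
resource "google_compute_network_peering" "hub_to_spoke" {
  name                 = "hub-to-spoke1"
  network              = google_compute_network.hub.self_link
  peer_network         = google_compute_network.spoke1.self_link
  export_custom_routes = true
}

# Spoke side: import those routes so spoke workloads follow the hub's
# default route.
resource "google_compute_network_peering" "spoke_to_hub" {
  name                 = "spoke1-to-hub"
  network              = google_compute_network.spoke1.self_link
  peer_network         = google_compute_network.hub.self_link
  import_custom_routes = true
}
```

Both sides of the peering must be created, and each spoke needs its own pair, which is where the peering-count limits mentioned in the cons below come into play.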

Pros and cons of this design:

Pros:

Easy to deploy, with no need for any network tags for workloads.
Simple routing changes with two default routes: one for workloads without any network tags, and another for Cloud Connectors, matched by network tags, pointing to the internet gateway.
VPC-level isolation for each Project.

Cons:

Requires VPC peering between Workload VPCs and Cloud Connector VPCs. Google limits the number of VPC peerings per network, and because transitive traffic isn't supported, every pair of VPCs that needs to communicate requires its own peering.
Complex routing changes depending on the traffic flow requirements.

Conclusion

Every design has pros and cons depending on your organization’s requirements. Whichever design you choose, Zscaler Workload Communications provides the flexibility to secure it seamlessly, with complete automation support using Terraform.

There’s no need for Trust/Untrust VPCs—Zscaler Cloud Connectors can be deployed as part of a Single Shared VPC shared across workloads or as part of an Isolated VPC as mentioned in the above designs.

If your organization is looking for seamless multicloud security with unlimited scale for firewall, proxy, TLS decryption, DLP, and more, look no further than Zscaler Workload Communications.

To learn more, visit our product page.

You can also sign up for our self-guided hands-on lab.  
