Zscaler Cloud Connector is a VM-based solution built to forward traffic from cloud-based workloads to public and/or private destinations using the Zscaler cloud. As such, it needs to be able to initiate traffic to Zscaler Service Edges, which requires public IP addresses (more detailed information on Cloud Connector communication can be found at https://help.zscaler.com/cloud-branch-connector/networking-flows-cloud-connector). 

In general, Zscaler recommends deploying Cloud Connector behind a NAT Gateway, as it covers a number of required functions:

It provides public source IP addresses for outbound traffic from all interfaces
It prevents unsolicited inbound connections from the internet
It allows the use of private IP space within the cloud, making for a simpler local routing setup

                     Diagram: recommended Cloud Connector setup with NAT-GW

However, NAT Gateways can introduce significant additional costs, especially when combined with high data throughput. At the same time, Cloud Connectors are designed to be exposed to the internet and only require outbound internet access, which makes them less of a target and, in turn, not reliant on the NAT Gateway for security. Moreover, because Cloud Connectors act as the default forwarding function, internal workloads don't need a NAT Gateway either.

This document describes a Cloud Connector setup that replaces the NAT Gateway functionality where it makes sense, while still maintaining the same security considerations.

                  Diagram: alternative Cloud Connector setup without NAT-GW

Note that the main article describes setup and considerations; a few configuration examples have been added at the bottom of this document.

Assigning public IP addresses to the CC interfaces

The first step is to assign public IP addresses to the Cloud Connector interfaces. Note that (as was already the case with the NAT Gateway) these don't have to be fixed addresses, as long as they remain consistent during the Cloud Connector's uptime.

In Azure, you link public IP addresses to the Cloud Connector interfaces. First, ensure there is no NAT Gateway associated with the subnet (or remove it when there is). Then go into the Cloud Connector VM, select the Network Interface, select IP Configuration, and toggle the Public IP address settings to “Associate”. Do this for all interfaces.
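
For reference, a minimal Terraform sketch of this association, assuming the Cloud Connector NIC is managed in Terraform (the resource names, variables and resource group below are placeholders, not part of the Zscaler templates):

```hcl
# Public IP to associate with a Cloud Connector interface
resource "azurerm_public_ip" "cc_service" {
  name                = "cc-service-pip"        # placeholder name
  resource_group_name = var.resource_group_name # assumed variable
  location            = var.location            # assumed variable
  allocation_method   = "Static"
  sku                 = "Standard"
}

# The association is made inside the NIC's ip_configuration block; if the
# NIC was created by the Zscaler templates, add public_ip_address_id to
# its existing ip_configuration instead of creating a new NIC.
resource "azurerm_network_interface" "cc_service" {
  name                = "cc-service-nic"        # placeholder name
  location            = var.location
  resource_group_name = var.resource_group_name

  ip_configuration {
    name                          = "ipconfig1"
    subnet_id                     = var.cc_service_subnet_id # assumed variable
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.cc_service.id
  }
}
```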

In AWS, you need to place the Cloud Connector in a public subnet, which assigns one public IP address to one of its interfaces, and then assign Elastic IP addresses to all other interfaces.

When using Terraform, this can be achieved by first creating an aws_eip resource and then associating it with the Cloud Connector interface IDs through aws_eip_association:
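
A minimal sketch of that pattern (the ENI ID variable and resource names are placeholders):

```hcl
# Elastic IP for one Cloud Connector interface
resource "aws_eip" "cc_service" {
  domain = "vpc"
}

# Associate the EIP with the Cloud Connector's service ENI
# (var.cc_service_eni_id is an assumed input holding the interface ID)
resource "aws_eip_association" "cc_service" {
  allocation_id        = aws_eip.cc_service.id
  network_interface_id = var.cc_service_eni_id
}
```

Repeat (or use for_each over the interface IDs) for every Cloud Connector interface that needs a public IP.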

When using CloudFormation, you must assign a public subnet when creating the stack. This will automatically assign one Public IP address to the instance and, as such, to one of the Interfaces. Allocate an Elastic IP address and, once the Cloud Connector EC2 instance is created, associate it with another interface. Repeat until all Cloud Connector interfaces have a public IP association. 

Note: By default, AWS only allows a limited number of Elastic IPs (EIPs) per Region. For additional addresses, the customer has to request a quota increase from AWS. See https://docs.aws.amazon.com/vpc/latest/userguide/amazon-vpc-limits.html for more details.

Since requesting additional EIPs can be a cumbersome process, and since most of the NAT Gateway cost comes from the throughput used, it can be worthwhile to assign EIPs to the service interfaces only and keep using a NAT Gateway for the management interface:

                     Diagram: alternative Cloud Connector setup with partial NAT-GW

Protecting against Internet-sourced attacks

Setting up Cloud Connectors without a NAT Gateway requires that they’re placed in a public subnet, which makes them addressable from the internet. The attack surface of a Cloud Connector is limited; it is hardened and only allows limited direct access. Still, the management interface allows inbound SSH access, which can be a target for both compromise and denial-of-service and should be protected.

More fundamentally, the CC service interface must accept traffic coming from the internal Cloud workloads but should never accept unsolicited traffic from the internet. However, if an attacker can mimic/spoof Workload traffic, CC will pick it up and process it as normal. This opens up attack vectors towards ZIA and ZPA resources, which need to be mitigated. 

Fortunately, some attacks are infeasible due to regular routing, and Azure and AWS offer a few useful options that allow for a ruleset that doesn't need continuous updating after adding new workloads:

Transparent access from the internet through Cloud Connectors to ZIA or ZPA resources will be prevented by regular Internet routing (the traffic will never end up at the CC in the first place)
AWS and Azure have anti-spoofing measures to block inbound traffic using cloud-local IP space
Azure has built-in service tags for local Cloud resources, which means you don't have to change the Security Groups each time you add a new subnet

Unfortunately, although AWS and Azure do provide protection against spoofing of (Cloud-)local addresses, they can't protect against spoofed internet address space. And, since the CC service interface must respond to DNS requests, it can be targeted directly and used as a facilitator for (D)DoS attacks against public and private services. Incidentally, it could also lead to Zscaler counting these spoofed addresses towards the ZIA and ZPA Workload licenses. Combined, this leads to the following attacks and mitigation measures:

| Attack (internet-sourced) | Mitigation | Customer risk | Mitigation requirement |
| --- | --- | --- | --- |
| Attacking (to compromise or DoS) the Cloud Connector management interface via open listening services (SSH) | Inbound Security Group on the management interface | Medium | Should |
| DDoS of ZPA resources through the CC | Inbound Security Group on the service interface | Medium | Must |
| DDoS of internet resources using the CC DNS service (also incurring bandwidth cost) | Inbound Security Group on the service interface | High | Must |

So, we need a number of Security Group rules to mitigate these risks by making sure that only local resources can use the CCs. 

In Azure, this is straightforward. In fact, the Zscaler ARM and Terraform provisioning scripts create the correct Security Group rules by using Azure-defined service tags. For the management interface, only sources on “VirtualNetwork” should be allowed access to listening services such as SSH. Of course, if you have a specific subnet to manage workloads from (containing management systems and/or jump hosts), then you should further limit SSH access to only those systems. Additionally, the management interface needs public outbound access towards DNS (UDP/TCP 53), (D)TLS (UDP/TCP 443) and NTP (UDP 123).
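
For illustration, a minimal Terraform sketch of such rules, assuming an existing NSG attached to the management interface (the NSG name, resource group variable and priorities are placeholders):

```hcl
# Allow SSH to the CC management interface from the VNet only
resource "azurerm_network_security_rule" "cc_mgmt_ssh_in" {
  name                        = "allow-ssh-from-vnet"
  priority                    = 100
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "22"
  source_address_prefix       = "VirtualNetwork"         # Azure service tag
  destination_address_prefix  = "*"
  resource_group_name         = var.resource_group_name  # assumed variable
  network_security_group_name = "cc-mgmt-nsg"            # placeholder NSG name
}

# Allow outbound DNS, (D)TLS and NTP from the management interface
resource "azurerm_network_security_rule" "cc_mgmt_out" {
  name                        = "allow-dns-tls-ntp-out"
  priority                    = 100
  direction                   = "Outbound"
  access                      = "Allow"
  protocol                    = "*"
  source_port_range           = "*"
  destination_port_ranges     = ["53", "123", "443"]
  source_address_prefix       = "*"
  destination_address_prefix  = "Internet"               # Azure service tag
  resource_group_name         = var.resource_group_name
  network_security_group_name = "cc-mgmt-nsg"
}
```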

For the service interface, this means only sources on “VirtualNetwork” are allowed full TCP/UDP access to ANY destination behind the Cloud Connector. Note that if you have additional networks connected (through Direct Access, Virtual WAN or VPN) whose traffic should also be protected by Cloud Connector, you'll need to add rules for their address ranges manually as well.
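
A corresponding sketch for the service interface NSG (again, names and variables are placeholders):

```hcl
# Allow workloads in the VNet to send any TCP/UDP traffic via the CC service interface
resource "azurerm_network_security_rule" "cc_service_in" {
  name                        = "allow-vnet-to-any"
  priority                    = 100
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "*"
  source_port_range           = "*"
  destination_port_range      = "*"
  source_address_prefix       = "VirtualNetwork"         # Azure service tag
  destination_address_prefix  = "*"                      # any destination behind the CC
  resource_group_name         = var.resource_group_name  # assumed variable
  network_security_group_name = "cc-service-nsg"         # placeholder NSG name
}
```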

In AWS, this configuration is slightly less convenient; you'll have to define these Security Group rules manually using your local IP subnets. Again, the management interface should only allow inbound SSH from a management subnet or from specific bastion/jump hosts. The management interface also needs public outbound access towards DNS (UDP/TCP 53), (D)TLS (UDP/TCP 443) and NTP (UDP 123).
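
A minimal Terraform sketch for the management interface security group, assuming a hypothetical management subnet CIDR (10.0.10.0/24) and a var.vpc_id input:

```hcl
resource "aws_security_group" "cc_mgmt" {
  name        = "cc-mgmt-sg"   # placeholder name
  description = "Cloud Connector management interface"
  vpc_id      = var.vpc_id     # assumed variable

  # SSH only from the management subnet (example CIDR)
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["10.0.10.0/24"]
  }

  # Outbound DNS (UDP/TCP 53)
  egress {
    from_port   = 53
    to_port     = 53
    protocol    = "udp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 53
    to_port     = 53
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Outbound (D)TLS (UDP/TCP 443)
  egress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 443
    to_port     = 443
    protocol    = "udp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Outbound NTP (UDP 123)
  egress {
    from_port   = 123
    to_port     = 123
    protocol    = "udp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```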

For the service interface, this means only your locally defined subnets (or IP ranges from other connected networks that should also use Cloud Connector for their outbound traffic) should be allowed full TCP/UDP access to ANY destination behind the Cloud Connector (including the Cloud Connector itself). Note that since AWS protects against traffic with (spoofed) private (RFC1918) IP addresses, allowing inbound connections only from RFC1918 sources protects against all external connection attempts.
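
And a corresponding sketch for the service interface security group, allowing only RFC1918 sources (narrow these to your actual internal ranges where possible; names and variables are placeholders):

```hcl
resource "aws_security_group" "cc_service" {
  name        = "cc-service-sg"   # placeholder name
  description = "Cloud Connector service interface"
  vpc_id      = var.vpc_id        # assumed variable

  # Full access from internal (RFC1918) sources only
  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]
  }

  # Unrestricted outbound so the CC can forward traffic to the Zscaler cloud
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```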