ExpressRoute Circuit And Load Balancing In Azure

When we configure connectivity from on-premises environments to cloud resources using ExpressRoute, we have to configure what are known as ExpressRoute routing domains. These are configuration properties of the ExpressRoute circuit. The circuit does not map to any physical entity; rather, it represents the service to which we want to connect.

An ExpressRoute circuit represents a logical connection between on-premises infrastructure and Microsoft cloud services through a connectivity provider. Multiple ExpressRoute circuits can be ordered; each circuit can be in the same or a different region and can be connected to your premises through a different connectivity provider.

A circuit is uniquely identified by a standard GUID called a service key (s-key). The service key is the only piece of information exchanged between Microsoft, the connectivity provider, and the customer. There is a 1:1 mapping between an ExpressRoute circuit and its s-key.
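
To make the circuit/s-key relationship concrete, here is a minimal sketch. The class and field names are hypothetical, purely for illustration; this is not the Azure SDK.

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class ExpressRouteCircuit:
    """Toy model of a circuit: one circuit, one s-key (a standard GUID)."""
    peering_location: str
    bandwidth_mbps: int
    provider: str
    # Generated once per circuit and shared among Microsoft, the
    # connectivity provider, and the customer.
    service_key: str = field(default_factory=lambda: str(uuid.uuid4()))

circuit = ExpressRouteCircuit("Silicon Valley", 1000, "ContosoConnect")
print(circuit.service_key)  # unique GUID identifying this circuit
```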


So, as shown in the above figure, each routing domain is represented by a colored line. The red line is Microsoft peering, which provides connectivity to Microsoft cloud services such as Office 365 (Skype for Business, Exchange Online, SharePoint Online, etc.). More details on Microsoft peering locations can be found in the ExpressRoute circuits and peerings documentation (see References).

The purple line is Azure public peering, which provides connectivity to Azure resources via their public IP addresses. And then, Azure private peering connects to resources via their private IP addresses.

So, each circuit has these routing domains associated with it. It should also be noted that when we configure them, each peering is set up identically on a pair of routers in an active-active, load-sharing configuration, so high availability is built right in.
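
The sketch below summarizes the three routing domains and the redundant-router provisioning just described. The `provision` helper is hypothetical, not an Azure API.

```python
from enum import Enum

class Peering(Enum):
    MICROSOFT = "Microsoft peering"          # Office 365 and related services
    AZURE_PUBLIC = "Azure public peering"    # Azure services via public IPs
    AZURE_PRIVATE = "Azure private peering"  # virtual networks via private IPs

def provision(s_key, peering):
    # The same settings go to both routers of the redundant pair,
    # giving the active-active / load-sharing configuration.
    return [{"s_key": s_key, "peering": peering.value, "router": r}
            for r in ("primary", "secondary")]

for cfg in provision("e7b1c0de-0000-0000-0000-000000000000", Peering.AZURE_PRIVATE):
    print(cfg)
```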

Now, when most environments choose to implement ExpressRoute, it is often because they place high demands on the service: a lot of traffic and a lot of bandwidth. So, what needs to be implemented then is load balancing. This can be done either at the DNS level, which is implemented through Azure Traffic Manager using a round-robin approach, or at the network level, which uses an Azure load balancer in combination with an internal load balancer, as shown in the figure below.
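
As a toy illustration of the DNS-level option, the stand-in resolver below hands out endpoints in round-robin order, one per query. It is purely illustrative and is not how Traffic Manager is implemented.

```python
import itertools

# Hypothetical endpoints behind a Traffic Manager profile.
endpoints = ["circuit-a.contoso.com", "circuit-b.contoso.com"]
_rotation = itertools.cycle(endpoints)

def resolve(hostname):
    """Answer each DNS query with the next endpoint in rotation."""
    return next(_rotation)

for _ in range(4):
    print(resolve("app.trafficmanager.net"))
# circuit-a..., circuit-b..., circuit-a..., circuit-b...
```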


Basically, we simply want to distribute the traffic for a particular service. Azure Load Balancer is a Layer 4 (TCP, UDP) load balancer that distributes incoming traffic among healthy instances of services defined in a load-balanced set. It provides network-level distribution of traffic across instances of an application running in the same Azure data center. It uses a hash-based distribution algorithm: by default, a 5-tuple hash of source IP, source port, destination IP, destination port, and protocol. And since we are dealing with these two transport protocols, it is all based on port forwarding.
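
The following sketch shows the idea behind hash-based distribution: hashing the 5-tuple gives a stable result, so packets of the same flow always land on the same instance. The addresses are made up, and the real load balancer's hash is internal to Azure.

```python
import hashlib

def pick_instance(src_ip, src_port, dst_ip, dst_port, proto, instances):
    """Hash the 5-tuple and map it onto one of the healthy instances."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}/{proto}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return instances[digest % len(instances)]

backends = ["10.0.0.4", "10.0.0.5", "10.0.0.6"]
# Same flow -> same backend, every time.
print(pick_instance("203.0.113.7", 51000, "13.64.0.1", 443, "TCP", backends))
```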

Some of the other features of Azure Load Balancer are as follows.

  • Automatic reconfiguration as you scale out or scale in, so you only pay for what you use.
  • Service monitoring, so you are always aware of instance health (see the sketch after this list).
  • Source Network Address Translation (SNAT).
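
A minimal sketch of the service-monitoring idea, assuming an HTTP probe against a hypothetical /health path: instances that fail the probe are taken out of rotation.

```python
import urllib.request

def is_healthy(instance_ip, port=80, path="/health"):
    """Probe an instance; treat anything other than HTTP 200 as unhealthy."""
    try:
        url = f"http://{instance_ip}:{port}{path}"
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, HTTP error, ...
        return False

backends = ["10.0.0.4", "10.0.0.5"]
print("in rotation:", [ip for ip in backends if is_healthy(ip)])
```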

The Azure load balancer is the Internet-facing load balancer, and underneath it we see the internal load balancer. This layering enhances security because only Internet traffic ever sees the Azure load balancer, which provides load balancing at the Internet-facing level. In the above figure, we see a web server tier; that, obviously, faces the Internet, and we distribute the load across multiple virtual machines there. From that point, the internal load balancer is accessible only to those virtual machines in the web server tier, or through a specific, dedicated VPN connection for an organization's own administrative purposes.

The public Internet has no access to that internal load balancer configuration. We can also have multiple load-balanced IPs, which allows for multiple SSL endpoints.
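
The sketch below captures that access rule with made-up addresses: only web-tier VMs and an administrative VPN range can reach the internal load balancer.

```python
import ipaddress

WEB_TIER = {"10.0.1.4", "10.0.1.5"}               # VMs behind the public LB
ADMIN_VPN = ipaddress.ip_network("10.10.0.0/24")  # dedicated admin VPN range

def internal_lb_accepts(src_ip):
    """Only the web tier and the admin VPN may reach the internal LB."""
    return src_ip in WEB_TIER or ipaddress.ip_address(src_ip) in ADMIN_VPN

print(internal_lb_accepts("10.0.1.4"))      # True  (web-tier VM)
print(internal_lb_accepts("198.51.100.9"))  # False (public Internet)
```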


As shown above, multiple IP addresses on the Internet are requesting a resource over the same port, the secure HTTPS port 443. All of those requests come in over the same port and hit the Azure load balancer. They can then be distributed by port number to, in this case, four different SSL websites, each one listening on a different backend port. So we can again distribute traffic at the transport layer using port forwarding.
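
A small sketch of that scenario, with hypothetical addresses: each load-balanced frontend IP listens on 443, and a port-forwarding rule maps it to a different backend port, one per SSL website.

```python
# (frontend_ip, frontend_port) -> (backend_ip, backend_port)
RULES = {
    ("52.0.0.1", 443): ("10.0.1.4", 4431),  # SSL website A
    ("52.0.0.2", 443): ("10.0.1.5", 4432),  # SSL website B
    ("52.0.0.3", 443): ("10.0.1.6", 4433),  # SSL website C
    ("52.0.0.4", 443): ("10.0.1.7", 4434),  # SSL website D
}

def forward(frontend_ip, frontend_port=443):
    backend_ip, backend_port = RULES[(frontend_ip, frontend_port)]
    return f"{frontend_ip}:{frontend_port} -> {backend_ip}:{backend_port}"

for ip, port in RULES:
    print(forward(ip, port))
```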

Of course, this depends entirely upon the way load balancing is implemented. But these features are available, and they allow you to distribute your traffic much more effectively while keeping the services running at optimal levels.

There is also Application Gateway, which offers various Layer 7 load-balancing capabilities for the application. Application Gateway provides an Application Delivery Controller (ADC) as a service. It allows customers to optimize web farm productivity by offloading CPU-intensive SSL termination to the Application Gateway. Its other Layer 7 routing capabilities include the following (a short sketch follows the list):

  • Round-robin distribution of incoming traffic
  • Cookie-based session affinity
  • URL path-based routing
  • Ability to host multiple websites behind a single Application Gateway
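
Here is a toy sketch of two of these behaviors, URL path-based routing and cookie-based session affinity. The rules and addresses are invented; this is not Application Gateway's implementation.

```python
# Rules are ordered most-specific first; "/" is the default pool.
PATH_RULES = [
    ("/images/", ["10.0.2.4", "10.0.2.5"]),
    ("/video/",  ["10.0.2.6", "10.0.2.7"]),
    ("/",        ["10.0.2.8"]),
]

def route(path, affinity_cookie=None):
    """Pick a backend by the first matching path prefix; a valid
    affinity cookie pins the session to the same backend."""
    for prefix, pool in PATH_RULES:
        if path.startswith(prefix):
            if affinity_cookie in pool:
                return affinity_cookie
            return pool[0]  # simplistic pick; a real gateway also balances
    raise ValueError("no matching rule")

backend = route("/images/logo.png")
print(backend)                          # 10.0.2.4
print(route("/images/x.png", backend))  # cookie keeps it on 10.0.2.4
```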

Application Gateway can be configured as an Internet-facing gateway, an internal-only gateway, or a combination of both. It is fully managed by Azure, scalable, and highly available, and it provides a rich set of diagnostics and logging capabilities for better manageability.

The following figure shows how Traffic Manager, Application Gateway, and internal load balancing fit together in one architecture.

References

  • https://docs.microsoft.com/en-us/azure/expressroute/expressroute-circuit-peerings
  • https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-load-balancing-azure

