What Is Load Balancing In Networking?

Introduction 

 
 
A load balancer evens out workloads to prevent any one server from being overwhelmed. Its other main use is to provide high availability. Imagine, for example, a service that is available on several hosts but has all of its traffic sent to just one node. If that node fails, the service goes down with it. A load balancer avoids this by spreading traffic across all of the hosts, so the failure of any single node does not take the service offline.
 
Rather like traffic police keeping traffic flowing and free from congestion, a load balancer distributes incoming network and application traffic across multiple servers (known collectively as a server pool). It routes traffic only to servers that are able to fulfill the client request, and does so in a way that prevents any single server from being overburdened while maximizing overall network speed and resource use. How does it know that a server is able to handle requests? By carrying out health checks, in which it periodically attempts to connect to the server. If a response is received, the load balancer knows that the server is available. If a server fails to respond (i.e., is offline), the load balancer diverts traffic to the servers that are still online. When a new server is added to the server pool, the load balancer immediately starts sending traffic its way.
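The health-check-then-distribute behavior described above can be sketched in a few lines. This is a minimal illustration, not NSX code: the server addresses are hypothetical, the health check is a simple TCP connection attempt, and round-robin is assumed as the distribution method.

```python
import socket

# Hypothetical server pool; these addresses are illustrative only.
SERVER_POOL = [("10.0.0.11", 80), ("10.0.0.12", 80), ("10.0.0.13", 80)]

def is_healthy(host, port, timeout=2.0):
    """TCP health check: the server is 'up' if it accepts a connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def healthy_servers(pool):
    """Keep only servers that responded to the health check."""
    return [srv for srv in pool if is_healthy(*srv)]

def pick_server(pool, counter):
    """Round-robin selection across the healthy subset of the pool."""
    live = healthy_servers(pool)
    if not live:
        raise RuntimeError("no healthy servers in the pool")
    return live[counter % len(live)]
```

A server that stops responding simply drops out of `healthy_servers` on the next check, and a newly added pool member starts receiving traffic as soon as it passes one.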
 
The NSX load balancing service is specially designed for IT automation and uses the same central point of management and monitoring as other NSX network services. It’s rich with features and functionalities. Here are just a few:
  • The Edge VM active-standby mode provides high availability for the load balancer
  • Support for any TCP (Transmission Control Protocol) application
  • Support for UDP (User Datagram Protocol) applications
  • Health checks for multiple connection types (TCP, HTTP, HTTPS), including content inspection
  • The NSX platform can also integrate load-balancing services offered by third-party vendors
  • NSX Edge offers support for two deployment models: proxy mode (a.k.a. one-arm mode), and inline mode (a.k.a. transparent mode).
In proxy mode, an NSX Edge is connected directly to the logical network where load-balancing services are required. The external client sends traffic to the virtual IP address (VIP) exposed by the load balancer. (A VIP is an IP address that represents the service as a whole rather than any individual server in the pool.) The load balancer performs two address translations on the original packets received from the client: destination NAT (DNAT) to replace the VIP with the IP address of one of the servers in the server pool, and source NAT (SNAT) to replace the client IP address with the IP address identifying the load balancer itself. SNAT ensures that traffic returning from the server pool to the client flows back through the load balancer. The server in the server pool replies by sending the traffic to the load balancer, which again performs source and destination NAT to deliver the traffic to the external client, using its VIP as the source IP address.
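The two proxy-mode translations can be traced with a toy packet model. This is a conceptual sketch, not how NSX implements NAT; all IP addresses are made up for illustration, and a simple dictionary stands in for the load balancer's connection-tracking state.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Packet:
    """Simplified IP header: only the fields the load balancer rewrites."""
    src: str
    dst: str

# Illustrative addresses, not from any real deployment.
VIP = "192.168.100.10"   # virtual IP the client targets
LB_IP = "10.10.10.1"     # load balancer's own IP, used for SNAT
SERVER = "10.10.10.21"   # pool member chosen for this connection

def proxy_mode_inbound(pkt, conn_table):
    """Client -> pool: DNAT (VIP -> server) plus SNAT (client -> LB)."""
    conn_table[SERVER] = pkt.src           # remember the real client
    return replace(pkt, src=LB_IP, dst=SERVER)

def proxy_mode_return(pkt, conn_table):
    """Pool -> client: undo both translations; the VIP is the source again."""
    client = conn_table[pkt.src]
    return replace(pkt, src=VIP, dst=client)
```

Because the SNAT rewrote the source to `LB_IP`, the server's reply is naturally addressed to the load balancer, which is exactly what forces return traffic through it.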
 
Proxy mode is simpler to deploy and provides greater flexibility than traditional load balancers. It allows the deployment of load balancer services (e.g., NSX Edge appliances) directly on the logical segments without requiring any modification on the centralized NSX Edge that is providing routing communication to the physical network.
 
One limitation of proxy mode is that it requires provisioning additional NSX Edges and deploying source NAT, which means that the servers in the data center do not see the original client IP address. The load balancer can insert the client's original IP address into the HTTP header before SNAT is performed – a function named Insert X-Forwarded-For HTTP header – so the servers still have access to it. This workaround is, however, limited to HTTP traffic.
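The X-Forwarded-For workaround amounts to a small rewrite of the HTTP request head before SNAT hides the client address. The sketch below is a simplified illustration of that idea (it treats the request head as a list of lines), not NSX's implementation; if an X-Forwarded-For header already exists, the client IP is appended to it, which is the conventional behavior when requests pass through multiple proxies.

```python
def insert_x_forwarded_for(request_lines, client_ip):
    """Record the original client IP in the HTTP request head.

    request_lines: the request line plus header lines, as a list of strings.
    Returns a new list; the input is not modified.
    """
    head, rest = request_lines[0], request_lines[1:]
    # If an X-Forwarded-For header is already present, append to it.
    for i, line in enumerate(rest):
        if line.lower().startswith("x-forwarded-for:"):
            rest[i] = line + ", " + client_ip
            return [head] + rest
    # Otherwise add a fresh header right after the request line.
    return [head, "X-Forwarded-For: " + client_ip] + rest
```

Since this rewrite only makes sense for a parsed HTTP message, it cannot help generic TCP or UDP traffic, which is exactly the limitation noted above.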
 
With inline mode, the NSX Edge is inline with the traffic destined for the server pool. The external client sends traffic to the VIP provided by the load balancer. The load balancer – a centralized NSX Edge – performs only destination NAT (DNAT) to replace the VIP with the IP address of one of the servers deployed in the server pool. The server in the server pool replies to the original client IP address. The traffic is received again by the load balancer since it is deployed inline, typically as the default gateway for the server pool. The load balancer performs source NAT to send traffic to the external client, using its VIP as the source IP address.
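The contrast with proxy mode is easiest to see in the same toy packet model: inline mode rewrites only the destination on the way in, so the server sees the real client address. Again, this is a conceptual sketch with made-up addresses, not NSX code.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Packet:
    """Simplified IP header: only the fields the load balancer rewrites."""
    src: str
    dst: str

# Illustrative addresses, not from any real deployment.
VIP = "192.168.100.10"   # virtual IP the client targets
SERVER = "10.10.10.21"   # pool member chosen for this connection

def inline_inbound(pkt):
    """Client -> pool: DNAT only, so the client's source IP is preserved."""
    return replace(pkt, dst=SERVER)

def inline_return(pkt):
    """Pool -> client, via the LB as default gateway: source becomes the VIP."""
    return replace(pkt, src=VIP)
```

No connection table is needed for the source address here: the server replies directly to the client's IP, and the traffic passes back through the load balancer only because the load balancer sits inline as the pool's default gateway.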
 
Inline mode is also quite simple, and additionally, the servers have the original client IP address.