What is a Load Balancer and How Does Load Balancing Work?

Websites faced two problems back in the late 90s: scalability (how many clients could simultaneously access the server) and availability (the need for minimal downtime). The solution was load balancing: using commodity servers and distributing the input/output load amongst them.

How Load Balancing Works

A load balancer (versus an application delivery controller, which has more features) acts as the front-end to a collection of web servers so all incoming HTTP requests from clients are resolved to the IP address of the load balancer. The load balancer then routes each request to one of its roster of web servers in what amounts to a private cloud. When the server responds to the client, the response is sent back to the load balancer and then relayed to the client.
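As a concrete illustration of that front-end pattern, here is a minimal sketch in Go: a single listener that clients connect to, which forwards each request round-robin to a roster of back-end web servers and relays the responses back. The backend addresses and the listening port are assumptions for the example, not part of any particular product.

```go
// Minimal round-robin load balancer sketch: clients see one address,
// requests are spread across a roster of back-end web servers.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func mustParse(raw string) *url.URL {
	u, err := url.Parse(raw)
	if err != nil {
		log.Fatal(err)
	}
	return u
}

func main() {
	// Hypothetical roster of back-end web servers.
	backends := []*url.URL{
		mustParse("http://10.0.0.11:8080"),
		mustParse("http://10.0.0.12:8080"),
		mustParse("http://10.0.0.13:8080"),
	}

	var counter uint64
	proxy := &httputil.ReverseProxy{
		Director: func(req *http.Request) {
			// Round-robin: pick the next backend for each incoming request.
			next := backends[atomic.AddUint64(&counter, 1)%uint64(len(backends))]
			req.URL.Scheme = next.Scheme
			req.URL.Host = next.Host
		},
	}

	// Clients only ever talk to this one endpoint; responses flow back
	// through the proxy to the client, as described above.
	log.Fatal(http.ListenAndServe(":80", proxy))
}
```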

Load balancing is transparent to the clients (as far as they’re concerned there’s just one endpoint to talk to) and solves multiple service issues:

  • Scalability: Need more input/output capability? Just add more web servers to the load balancer’s roster and performance is increased.
  • High availability (HA): If a server fails for whatever reason, high availability load balancing detects the outage immediately and stops sending requests to that server (a minimal health-check sketch follows this list).
  • Maintainability: If any of the back-end servers need to be repaired or updated, they are simply removed from the load balancer’s roster.
  • Security: A feature that was implemented later in the evolution of this technology was the “hardening” of the load balancer to protect the web servers it manages. This is a critical feature when the servers are running complex web applications with all of the potential vulnerabilities that can entail.
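To make the high availability and maintainability points concrete, here is a small health-check sketch in Go, assuming each back-end exposes a hypothetical /healthz endpoint: the balancer probes every server on an interval and keeps only responsive servers in rotation. The addresses, path, and interval are illustrative assumptions.

```go
// Health-check loop: probe each backend periodically and only route
// traffic to servers that answer. A server taken down for maintenance
// simply drops out of the roster on the next probe.
package main

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

type Pool struct {
	mu      sync.RWMutex
	healthy map[string]bool // backend address -> last known health
}

func NewPool(backends []string) *Pool {
	p := &Pool{healthy: make(map[string]bool)}
	for _, b := range backends {
		p.healthy[b] = true // assume healthy until a probe fails
	}
	return p
}

// Check probes every backend once and updates the roster.
func (p *Pool) Check(client *http.Client) {
	p.mu.Lock()
	defer p.mu.Unlock()
	for addr := range p.healthy {
		resp, err := client.Get(addr + "/healthz")
		ok := err == nil && resp.StatusCode == http.StatusOK
		if resp != nil {
			resp.Body.Close()
		}
		p.healthy[addr] = ok
	}
}

// Healthy returns the backends currently eligible to receive traffic.
func (p *Pool) Healthy() []string {
	p.mu.RLock()
	defer p.mu.RUnlock()
	var out []string
	for addr, ok := range p.healthy {
		if ok {
			out = append(out, addr)
		}
	}
	return out
}

func main() {
	pool := NewPool([]string{"http://10.0.0.11:8080", "http://10.0.0.12:8080"})
	client := &http.Client{Timeout: 2 * time.Second}
	for {
		pool.Check(client)
		fmt.Println("in rotation:", pool.Healthy())
		time.Sleep(5 * time.Second)
	}
}
```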


Avoiding the Single Point of Failure Problem

Even though a load balancer solves the web server high availability problem, the load balancer itself needs redundancy because it is otherwise a single point of failure. The solution is to implement “failover”: a handover from one load balancer to another, with both front-ending the same group of web servers. This can be achieved with a router that switches traffic from the primary to the standby upon failure (note that the router then also requires redundancy) or as a built-in feature of the load balancers.
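As a sketch of the detection half of that arrangement, the snippet below shows a standby load balancer heartbeating the primary and promoting itself after several missed probes. In a real pair, the takeover step would typically move a shared virtual IP (for example via VRRP); here it is reduced to a callback, and the health URL, miss threshold, and interval are assumptions.

```go
// Failover watcher: the standby polls the primary's health endpoint and,
// after enough consecutive misses, "takes over" the front-end role.
package main

import (
	"log"
	"net/http"
	"time"
)

func watchPrimary(healthURL string, misses int, interval time.Duration, takeover func()) {
	client := &http.Client{Timeout: interval / 2}
	failed := 0
	for range time.Tick(interval) {
		resp, err := client.Get(healthURL)
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			failed = 0 // primary is alive; reset the miss counter
			continue
		}
		if resp != nil {
			resp.Body.Close()
		}
		failed++
		if failed >= misses {
			takeover()
			return
		}
	}
}

func main() {
	watchPrimary("http://10.0.0.2:9000/health", 3, time.Second, func() {
		// In a real pair this is where the standby would claim the shared
		// (virtual) IP and start accepting client traffic.
		log.Println("primary unreachable; standby taking over")
	})
}
```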

Load Balancing Solutions

The first implementations of load balancing used custom hardware, which had the advantage of extremely high performance and high availability. The downsides were, and still can be, cost (custom hardware can be more expensive) and the fact that a physical appliance is at odds with the move to software-defined network (SDN) and software-defined data center (SDDC) environments.

Virtual Load Balancing

Virtual load balancers are software applications that run in SDN environments, whether private cloud, public cloud, or hybrid cloud (multi-cloud) deployments. They provide configuration and management flexibility, often at a lower cost than hardware-based solutions, but their performance is limited by the underlying hardware.

Global Server Load Balancing

When you’re looking to provide high-performance web services at the scale of, for example, Facebook or eBay, you need to minimize network latency and improve response times for end users who could be anywhere in the world. Using multiple, geographically distributed data centers was the answer, which in turn required a new solution: global server load balancing (GSLB, also called cloud load balancing or multi-cloud load balancing).

Global server load balancing technology handles multi-cloud, multi-region environments with automatic scaling, regional and cross-regional failover, and centralized management. Supporting disaster recovery is a prime use case of GSLB. For example, when a DDoS attack degrades service in one data center, GSLB transfers traffic to another, fully functioning data center with minimal service interruption from the client’s point of view.
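A simplified sketch of that GSLB decision: answer each client with the address of the nearest data center that is still healthy, falling back to the next-nearest region when a site is degraded. The site names, addresses, and latency figures below are illustrative assumptions; a real GSLB measures health and proximity continuously.

```go
// GSLB-style site selection: prefer the lowest-latency healthy data center,
// falling back cross-region when a site is down or under attack.
package main

import (
	"fmt"
	"sort"
)

type Site struct {
	Name    string
	VIP     string // address handed back to the client
	Healthy bool
}

// pickSite returns the healthy site with the lowest estimated latency to
// the client; latencyMS maps site name -> measured/estimated RTT in ms.
func pickSite(sites []Site, latencyMS map[string]int) (Site, bool) {
	sort.Slice(sites, func(i, j int) bool {
		return latencyMS[sites[i].Name] < latencyMS[sites[j].Name]
	})
	for _, s := range sites {
		if s.Healthy {
			return s, true
		}
	}
	return Site{}, false // every region is down
}

func main() {
	sites := []Site{
		{Name: "us-west", VIP: "198.51.100.10", Healthy: false}, // degraded by an attack
		{Name: "us-east", VIP: "198.51.100.20", Healthy: true},
		{Name: "eu-west", VIP: "198.51.100.30", Healthy: true},
	}
	// Estimated RTT from this client's region to each site.
	latency := map[string]int{"us-west": 20, "us-east": 70, "eu-west": 140}

	if s, ok := pickSite(sites, latency); ok {
		fmt.Println("answering client with", s.Name, s.VIP) // falls back to us-east
	}
}
```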

How A10 Networks Can Help

A10’s market-leading products, including the A10 Thunder® Application Delivery Controller (ADC), showcase our expertise in load balancing and application delivery, ensuring server availability, protection of vulnerable applications, and accelerated content delivery. A10’s products offer superior processing power as well as outstanding cost-efficiency, typically 10x to 100x lower cost per subscriber than traditional network vendors.
