
What is a Load Balancer and How Does Load Balancing Work?

When it comes to effective network operations, companies face two perennial problems: scalability (how many clients can simultaneously access the server) and availability (access with minimal downtime). The solution is load balancing in networking: using commodity servers and distributing the input/output load across those servers.

To expand on that, load balancing in networking is a process that spreads network traffic, computing workloads, and other service requests over a group of resources or services. The incoming network traffic is distributed over commodity servers to balance the overall workload. The key benefits of network load balancing are scalability, optimized service reliability, increased network availability, and overall manageability.

Services are load balanced based on algorithms such as round robin, least connections, and fastest response.
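Two of these algorithms can be sketched in a few lines. The server names and connection counts below are purely illustrative:

```python
import itertools

# Hypothetical roster of back-end servers.
servers = ["web1", "web2", "web3"]

# Round robin: cycle through the roster in order.
rr = itertools.cycle(servers)
picks = [next(rr) for _ in range(4)]  # web1, web2, web3, web1

# Least connections: track active connections per server and
# pick the server currently handling the fewest.
active = {"web1": 12, "web2": 3, "web3": 7}
least = min(active, key=active.get)   # web2
```

Round robin assumes servers are roughly equal in capacity; least connections adapts when some requests take much longer than others.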

How Network Load Balancing Works

A network load balancer (as opposed to an application delivery controller, which offers more features and is discussed below) acts as the front end to a collection of web servers, so all incoming HTTP requests from clients resolve to the IP address of the load balancer. The network load balancer then routes each request to one of the web servers in its roster, in what amounts to a private cloud. When a server responds, the response is sent back to the load balancer and then relayed to the client.

The beauty of load balancing in networking is that it is transparent to your clients: as far as they’re concerned, there is just one endpoint to talk to.
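The forward-and-relay flow described above can be simulated in miniature. Here each backend is a plain function standing in for a real web server (all names are illustrative, not a real API):

```python
# Toy backends: each returns a response string for a request.
backends = {
    "web1": lambda req: f"web1 handled {req}",
    "web2": lambda req: f"web2 handled {req}",
}

class LoadBalancer:
    """The single endpoint clients talk to; backends stay hidden."""
    def __init__(self, backends):
        self.roster = list(backends.items())
        self.i = 0

    def handle(self, request):
        # Pick the next backend round-robin style.
        name, server = self.roster[self.i % len(self.roster)]
        self.i += 1
        # Forward to the chosen backend, then relay its response.
        return server(request)

lb = LoadBalancer(backends)
r1 = lb.handle("GET /")  # served by web1
r2 = lb.handle("GET /")  # served by web2
```

The client only ever calls `lb.handle`; which backend answered is invisible, which is exactly the transparency the text describes.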

How Valuable is Load Balancing in Networking?

By implementing network load balancing, you’ll be solving multiple service issues:

  • Scalability: As your company grows and requires more input/output capability, you simply add more commodity web servers to your network load balancer’s roster. Immediately, your network performance increases.
  • High availability (HA): If a server fails for any reason, high availability load balancing detects the outage immediately and stops sending requests to that server.
  • Maintainability: If any of the back-end servers need to be repaired or updated, they are simply removed from the network load balancer’s roster until they can be reinstated.
  • Security: “Hardening” the load balancer has become a crucial part of load balancing in networking, protecting the web servers it manages. These servers run complex web applications that face a myriad of potential vulnerabilities.
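The high-availability and maintainability points share one mechanism: the load balancer only sends traffic to servers that pass a health check, whether a server dropped out on its own or was pulled for maintenance. A minimal sketch, with invented server names and a pre-computed health status standing in for a real probe:

```python
# Health status per server, as a real probe might report it.
roster = {"web1": True, "web2": False, "web3": True}

def healthy_servers(roster):
    """Return only the servers currently passing their health check."""
    return [name for name, healthy in roster.items() if healthy]

available = healthy_servers(roster)  # web2 is skipped until it recovers
```

Reinstating a repaired server is just flipping its status back to healthy; no client ever sees the change.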


Avoiding the Single Point of Failure Problem

Even though a network load balancer solves the web server high-availability problem, the load balancer itself needs redundancy. Otherwise it becomes a single point of failure.

The solution is to implement “failover,” which is a handover from one network load balancer to another, with both load balancers front-ending the same group of web servers. This can be achieved with a router that switches traffic from the primary network load balancer to the standby upon failure or as a built-in feature of the load balancers. That said, the router also requires redundancy.
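The failover handover amounts to a simple decision: route through the primary load balancer while it is alive, otherwise through the standby. A toy sketch, where the aliveness flag stands in for whatever heartbeat or probe a real deployment would use:

```python
def route_via(primary_alive):
    """Choose which load balancer front-ends the shared web servers."""
    return "primary-lb" if primary_alive else "standby-lb"

normal = route_via(True)          # traffic flows through primary-lb
after_failure = route_via(False)  # standby-lb takes over
```

In practice this switch lives in a router or in the load balancers themselves (e.g. via a shared virtual IP), which is why the router in turn needs its own redundancy.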

The Growth of Network Load Balancing Solutions

The first implementations of network load balancing used custom hardware, which has the advantage of extremely high performance and high availability. The downsides are the high cost and the fact that hardware is a physical solution that is at odds with the move to software-defined networks (SDN) and software-defined data center (SDDC) environments.

These deficiencies have led companies to adopt a variety of techniques for load balancing in networking, which we describe below.

Server Load Balancing

A server load balancer is a hardware or virtual software appliance that distributes the application workload across an array of servers, ensuring application availability, elastic scaling of server resources, and health management of back-end server and application systems.

Global Server Load Balancing

When you’re looking to provide high-performance web services at a large scale, you need to minimize network latency and improve response times for end users, who could be anywhere in the world. For larger companies, multiple geographically distributed data centers are the answer. Global data centers, in turn, require global server load balancing (GSLB), also called cloud load balancing or multi-cloud load balancing.

GSLB is a technology that directs network traffic to a group of data centers in various geographical locations. Each data center provides similar application services, and client traffic is directed to the optimal site with the best performance for each of your clients. GSLB monitors the health and responsiveness of each site and, like server load balancing, directs traffic to the site with the best response times. GSLB handles multi-cloud and multi-region environments with automatic scaling, regional and cross-regional failover, and centralized management.
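The site-selection logic described above, filter out unhealthy data centers, then pick the one with the best measured response time, can be sketched as follows (site names and latencies are illustrative):

```python
# Per-site health and measured round-trip time for a given client.
sites = {
    "us-east":  {"healthy": True,  "rtt_ms": 45},
    "eu-west":  {"healthy": True,  "rtt_ms": 20},
    "ap-south": {"healthy": False, "rtt_ms": 10},  # failed health check
}

def best_site(sites):
    """Direct the client to the healthy site with the lowest latency."""
    candidates = {n: s for n, s in sites.items() if s["healthy"]}
    return min(candidates, key=lambda n: candidates[n]["rtt_ms"])

choice = best_site(sites)  # eu-west: fastest among the healthy sites
```

Note that the nominally fastest site is skipped because it is down, which is exactly the disaster-recovery behavior the next paragraph describes.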

Supporting disaster recovery is a prime use case of GSLB. For example, when a DDoS attack reduces data center service performance, GSLB transfers traffic to another fully functioning data center with minimal service interruption from your client’s point of view.

Firewall Load Balancing

To scale complex network devices and guarantee non-stop operation, you’ll likely need to implement an array of security firewall systems. While having a solid firewall infrastructure is a cornerstone of network security, best-of-breed solutions cannot be optimized without adequate firewall load balancing. A highly available firewall is crucial in protecting your network and ensuring business continuity. Network architectures should include a network load balancing solution that guarantees high availability of your firewall defenses and that easily scales to accommodate increased demand.

Virtual Load Balancing

Virtual load balancers are software applications that work in SDN environments, whether private cloud, public cloud, or hybrid (multi-cloud) deployments. Virtual network load balancers provide configuration and management flexibility, often at a lower cost than hardware-based solutions. Of course, a virtual load balancer’s performance can only be as good as that of the underlying hardware.

DNS Server Load Balancing

Large or mission-critical infrastructures usually deploy DNS services on a cluster of DNS servers, typically placed behind a server load balancing system. This architecture overcomes the shortcomings of the standard DNS failover mechanism and greatly increases performance.
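One simple form of DNS-based distribution is DNS round robin, where the name server rotates the order of A records on each query so successive clients connect to different addresses. A toy sketch (the hostname and addresses are illustrative):

```python
from collections import deque

# Pool of A records for one hostname.
records = deque(["203.0.113.1", "203.0.113.2", "203.0.113.3"])

def resolve(_name):
    """Return the records, then rotate so the next query leads elsewhere."""
    answer = list(records)
    records.rotate(-1)  # move the first address to the back
    return answer

first = resolve("www.example.com")[0]   # 203.0.113.1
second = resolve("www.example.com")[0]  # 203.0.113.2
```

Plain DNS round robin cannot see server health, which is why the clustered, load-balanced architecture above is preferred for mission-critical DNS.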

Application Delivery Controllers

Application delivery controllers (ADCs) are network servers that provide application reliability, acceleration, and application server services. ADCs combine server load balancing with additional technologies such as SSL offloading, security firewall services, application firewall systems, DDoS protection, and service chaining.

How A10 Networks Can Help You

A10’s products, including the A10 Thunder® Application Delivery Controller (ADC), are built on our expertise in network load balancing and application delivery. A10’s products ensure your server availability, protect vulnerable applications, and accelerate content delivery. What’s more, A10’s products offer superior processing power as well as outstanding cost efficiency, with typically 10x to 100x lower cost per subscriber versus traditional network vendors.

The core of the A10 Thunder ADC platform covers a wide range of options for network load balancing methods, health checks, application availability with full-proxy L4-7 load balancing leveraging agile traffic control, and aFleX® scripting.
