Websites faced two problems back in the late 90s: scalability (how many clients could simultaneously access the server) and availability (the need for minimal downtime). The solution was load balancing: using commodity servers and distributing the input/output load amongst them.
A load balancer (versus an application delivery controller, which has more features) acts as the front-end to a collection of web servers so all incoming HTTP requests from clients are resolved to the IP address of the load balancer. The load balancer then routes each request to one of its roster of web servers in what amounts to a private cloud. When the server responds to the client, the response is sent back to the load balancer and then relayed to the client.
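The routing decision described above can be sketched in a few lines. This is a minimal illustration, not a production implementation; the class name, server addresses, and round-robin policy are assumptions for the example (real load balancers support many scheduling policies).

```python
import itertools

class RoundRobinBalancer:
    """Sketch of a load balancer's core job: clients see one
    front-end address, and each request is relayed to the next
    server in the roster."""

    def __init__(self, servers):
        self.servers = list(servers)
        self._cycle = itertools.cycle(self.servers)

    def route(self, request):
        # Pick the next back-end server in round-robin order.
        server = next(self._cycle)
        return server

# Hypothetical private-cloud roster behind one public endpoint.
balancer = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
picks = [balancer.route(f"GET /page{i}") for i in range(4)]
print(picks)  # the fourth request wraps back to the first server
```

Round-robin is the simplest policy; weighted and least-connections variants follow the same pattern but change how `route` selects a server.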
Load balancing is transparent to the clients (as far as they’re concerned there’s just one endpoint to talk to) and solves multiple service issues, including the scalability and availability problems described above.
The benefits of advanced load balancing don’t accrue to your operations teams alone: your decision-makers, security teams, and DevOps departments will feel them too.
Even though a load balancer solves the web server high availability problem, the load balancer itself needs redundancy because it becomes a single point of failure. The solution is to implement “failover,” a handover from one load balancer to another, with both front-ending the same group of web servers. This can be achieved with a router that switches traffic from the primary to the standby upon failure (note that the router then also requires redundancy) or as a built-in feature of the load balancers.
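The failover decision itself is simple to state in code. The sketch below is illustrative only; the function name and health flags are assumptions, and real failover is usually driven by heartbeat protocols (such as VRRP) rather than booleans.

```python
def active_balancer(primary_healthy, standby_healthy):
    """Failover sketch: traffic flows through the primary load
    balancer until it fails, then shifts to the standby that
    front-ends the same group of web servers."""
    if primary_healthy:
        return "primary"
    if standby_healthy:
        return "standby"
    # Both front-ends down: the service itself is unavailable.
    raise RuntimeError("no load balancer available")

print(active_balancer(True, True))    # normal operation: primary
print(active_balancer(False, True))   # primary failed: standby takes over
```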
The first implementations of load balancing used custom hardware which had the advantage of extremely high performance and high availability. The downsides were, and still can be, cost—custom hardware can be more expensive—and that it’s a physical solution that is at odds with the move to software defined network (SDN) and software defined data center (SDDC) environments.
Virtual load balancers are software applications that work in SDN environments, whether private cloud, public cloud, or hybrid (multi-cloud) deployments. They provide configuration and management flexibility, often at a lower cost than hardware-based solutions, though their performance is limited by that of the underlying hardware they run on.
When you’re looking to provide high-performance web services at the scale of, for example, Facebook or eBay, you need to minimize network latency and improve response times for end users who could be anywhere in the world. Using multiple, geographically distributed data centers was the answer, which in turn required a new solution: global server load balancing (GSLB, also called cloud load balancing or multi-cloud load balancing).
Global server load balancing technology handles multi-cloud, multi-region environments with automatic scaling, regional and cross-regional failover, and centralized management. Supporting disaster recovery is a prime use case of GSLB. For example, when a DDoS attack degrades one data center’s performance, GSLB transfers traffic to another fully functioning data center with minimal service interruption from the client’s point of view.
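The GSLB routing logic described above can be sketched as "prefer a healthy data center in the client's region, otherwise fail over to any healthy region." The function name, region labels, and data structure below are assumptions for illustration; real GSLB implementations make this decision via DNS responses informed by health checks and latency measurements.

```python
def gslb_route(client_region, datacenters):
    """GSLB sketch: serve clients from a healthy data center in
    their own region when possible; otherwise fail over
    cross-region to any healthy data center."""
    healthy = [dc for dc in datacenters if dc["healthy"]]
    if not healthy:
        raise RuntimeError("all data centers unavailable")
    local = [dc for dc in healthy if dc["region"] == client_region]
    # Fall back to the first healthy data center anywhere.
    return (local or healthy)[0]["name"]

# Hypothetical deployment; "us-east" is down (e.g. under DDoS attack).
datacenters = [
    {"name": "us-east", "region": "us", "healthy": False},
    {"name": "us-west", "region": "us", "healthy": True},
    {"name": "eu-central", "region": "eu", "healthy": True},
]
print(gslb_route("us", datacenters))  # regional failover within "us"
print(gslb_route("eu", datacenters))  # served locally from "eu"
```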
A10’s market-leading products, including the A10 Thunder® Application Delivery Controller (ADC), showcase our expertise in load balancing and application delivery to ensure server availability, protection of vulnerable applications, and accelerated content delivery. A10’s products offer superior processing power as well as outstanding cost-efficiency, typically 10x to 100x lower cost per subscriber versus traditional network vendors.
Take this brief multi-cloud application services assessment and receive a customized report.