Welcome to A10 Networks Quick Classes for the AX Server Load Balancer. A10 Networks is the technology leader in advanced application delivery solutions. Our AX Series Application Delivery Controllers are New Generation Server Load Balancers that are Faster, Better, and Greener than any competing solution on the market today.
I'm "A10 Man" and I'll be your eLearning trainer. Today we'll be talking about the four Modes of Server Load Balancing: Routed Mode, One-Arm Mode, Transparent Mode, and Direct Server Return (DSR) Mode.
When you add the AX Series into your infrastructure, the AX Server Load Balancer offers several key benefits. By deploying an AX Server Load Balancer you can maximize application availability, scalability, and performance. The AX will also help to increase infrastructure efficiency and, lastly, help you deliver a faster end user experience. There are four ways to integrate an AX Series appliance into your network infrastructure in order to obtain these benefits.
We will start our discussion of the four different AX integration methods by covering Routed Mode, which is by far the most popular method. In Routed Mode, the AX acts as a Layer 3 router. One AX interface is usually connected to a public network, and another AX interface is usually connected to a private network. In Routed Mode, the AX routes traffic back and forth between the two networks.
Client traffic accesses the Virtual IP address (or VIP) on the AX. In this example, the VIP has the IP address of "203.0.113.2". Client requests are Destination "NAT-ed" with the IP address of the servers. That is, the AX changes the destination address by replacing the VIP's IP address with the server's IP address. Likewise, server responses are Source "NAT-ed" with the VIP address, meaning the AX changes the source IP address by replacing the server's IP address with the VIP address.
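The two rewrites described above can be sketched in a few lines of Python. This is only a minimal model: the VIP comes from the example, while the server and client addresses are hypothetical, and a packet is represented as a plain dictionary rather than a real IP datagram.

```python
# Minimal sketch of Routed Mode NAT on the AX (hypothetical addresses,
# except the VIP, which is taken from the example).

VIP = "203.0.113.2"       # Virtual IP configured on the AX
SERVER_IP = "10.0.0.10"   # hypothetical back-end server chosen by the AX

def routed_mode_request(packet):
    """Destination NAT: replace the VIP with the chosen server's IP.
    The client's source address is left untouched."""
    if packet["dst"] == VIP:
        packet = dict(packet, dst=SERVER_IP)
    return packet

def routed_mode_response(packet):
    """Source NAT: replace the server's IP with the VIP, so the
    client sees the reply coming from the address it contacted."""
    if packet["src"] == SERVER_IP:
        packet = dict(packet, src=VIP)
    return packet

# A client request in, and the matching server response out:
request = {"src": "198.51.100.7", "dst": VIP}
print(routed_mode_request(request))
response = {"src": SERVER_IP, "dst": "198.51.100.7"}
print(routed_mode_response(response))
```

Note how the client's source address survives the rewrite untouched, which is why servers behind a Routed Mode deployment can still log real client IP addresses.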
Routed Mode offers multiple benefits. It's a simple and non-intrusive installation that requires no configuration changes on the clients and servers. In addition, the servers will retain the ability to see clients' real IP address. When deploying an AX in Routed Mode, there are a few points to keep in mind. The servers must use the AX Server Load Balancer as their default gateway, and the clients must be on a different subnet than the servers.
Another popular AX Series integration is called "One-Arm Mode". This mode is particularly useful for testing AX benefits without having to change the existing infrastructure or applications. In One-Arm Mode, the AX is simply plugged into a switch that is located on the same network as the servers. Incoming traffic from the clients accesses the Virtual IP address (or VIP) on the AX. In this example, the VIP has IP address "203.0.113.2". Client requests are Source "NAT-ed", meaning that the AX replaces the client's source IP address with the AX Source NAT IP, and Destination "NAT-ed", meaning that the destination address, which is the AX's VIP, is replaced with the IP address of the server. Likewise, server responses are Source "NAT-ed", meaning that the server IP is replaced with the VIP address of the AX, and Destination "NAT-ed", meaning that the AX Source NAT IP is replaced with the client IP address.
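The double rewrite in each direction can be sketched the same way. Again, only the VIP comes from the example; the Source NAT, server, and client addresses are hypothetical, and a simple dictionary stands in for the AX's real NAT session state.

```python
# Minimal sketch of One-Arm Mode NAT on the AX (hypothetical addresses,
# except the VIP, which is taken from the example).

VIP = "203.0.113.2"         # Virtual IP configured on the AX
SNAT_IP = "203.0.113.100"   # hypothetical AX Source NAT address
SERVER_IP = "10.0.0.10"     # hypothetical back-end server

def one_arm_request(packet, sessions):
    """Rewrite both addresses on the way in: the client's source
    becomes the SNAT IP, and the VIP becomes the server's IP.
    The session table remembers the real client for the return trip."""
    sessions[SNAT_IP] = packet["src"]
    return {"src": SNAT_IP, "dst": SERVER_IP}

def one_arm_response(packet, sessions):
    """Rewrite both addresses on the way out: the server's IP
    becomes the VIP, and the SNAT IP becomes the real client IP."""
    return {"src": VIP, "dst": sessions[packet["dst"]]}

sessions = {}
to_server = one_arm_request({"src": "198.51.100.7", "dst": VIP}, sessions)
print(to_server)
to_client = one_arm_response({"src": SERVER_IP, "dst": SNAT_IP}, sessions)
print(to_client)
```

Because the request arrives at the server with the SNAT IP as its source, the server never sees the real client address, which is the visibility trade-off noted below.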
One-Arm Mode offers multiple benefits. There are no configuration changes needed on the clients and servers, so the AX can be easily added to your existing infrastructure. In addition, One-Arm Mode makes it easy to test, and unlike Routed Mode, the clients can be on the same subnet as the servers. When deploying an AX in One-Arm Mode, there are a few points to keep in mind. The servers do not retain visibility of the client IP addresses, and Source NAT must also be configured on the AX.
The third mode, called "Transparent Mode", is less popular than the first two modes, but it still has a few adopters. In Transparent Mode, the AX is installed as a Layer 2 switch. One AX interface is connected to the cloud, while another AX interface is connected to the servers or to a switch that the servers are connected to.
Incoming traffic from the client is Destination "NAT-ed", meaning that the client's destination address, which is the VIP, is replaced with the IP address of the servers. Likewise, outbound responses from the server are Source "NAT-ed", meaning that the server's source IP address is replaced with the VIP address of the AX.
Transparent Mode also offers multiple benefits. It's a transparent installation, so clients and servers are not aware that there is a load balancer between them. In addition, servers retain the ability to see the clients' real IP addresses. The main point to keep in mind with Transparent Mode is that it requires a specific infrastructure in which server responses must pass through the AX, so it may be a little more difficult to implement.
The fourth integration mode, "Direct Server Return Mode", has become a less popular mode of deployment as Application Delivery Controllers have gotten faster. In Direct Server Return Mode, also called "DSR", the AX is simply plugged into a switch. When the servers are on the same network, as shown in this example, then this approach is called Layer 2-DSR. However, the servers could also be behind a router, in which case this mode is called Layer 3-DSR.
Incoming traffic from the clients accesses the VIP on the AX. In this example, the IP address of the VIP is "203.0.113.2". Neither Source nor Destination NAT is used. The AX simply sends the traffic to the server: the IP addresses are not changed; only the destination MAC address is changed. Outbound responses from the server are sent directly to the client using the VIP as the source IP address. This is achieved by creating a loopback address on the server that is the same as the VIP. The return traffic doesn't pass through the AX.
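The MAC-only rewrite and the direct server reply can be sketched as follows. Only the VIP comes from the example; the MAC and client addresses are hypothetical, and Ethernet frames are modeled as dictionaries carrying both IP addresses and a destination MAC.

```python
# Minimal sketch of Layer 2-DSR forwarding (hypothetical addresses and
# MACs, except the VIP, which is taken from the example).

VIP = "203.0.113.2"                # Virtual IP, also on the server's loopback
SERVER_MAC = "00:11:22:33:44:55"   # hypothetical MAC of the chosen server

def dsr_forward(frame):
    """The AX rewrites only the destination MAC address.
    Source and destination IP addresses pass through unchanged."""
    return dict(frame, dst_mac=SERVER_MAC)

def server_reply(frame):
    """The server answers from its loopback, which carries the VIP,
    so the reply's source IP is the VIP and it goes straight to the
    client without passing back through the AX."""
    return {"src": frame["dst"], "dst": frame["src"]}

frame = {"src": "198.51.100.7", "dst": VIP, "dst_mac": "aa:bb:cc:dd:ee:ff"}
to_server = dsr_forward(frame)
print(to_server)
print(server_reply(to_server))
```

The reply's source address equals the VIP purely because the server owns that address on its loopback, which is exactly the per-server configuration step DSR requires.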
DSR Mode has one major benefit: it is very fast. This is because the AX only has to process incoming traffic; outbound traffic is sent directly to the clients, bypassing the AX. There are some points to keep in mind when deploying an AX in DSR Mode. There is no support for the AX's advanced Layer 7 features, such as HTTP optimization and SSL offload. In addition, DSR Mode requires additional configuration on each server, such as adding a loopback address for Layer 2-DSR mode or making IP stack changes for Layer 3-DSR mode.
That's all, folks! You now know the basic methods of integrating an AX Server Load Balancer, so you can make a more informed decision about how to deploy the AX Application Delivery Controller into your network. This completes our second A10 Quick Class video, but be sure to return to the A10 Networks website for additional videos on configuring the AX Server Load Balancer. This is "A10 Man" saying "Happy Load Balancing with A10 Networks!"