What is Network Latency?

The Causes of Network Latency

When a client sends a request to a server across the Internet, a complicated series of network transactions is involved. A typical request path has the client sending the request to a local gateway, which in turn routes it through a sequence of routers, firewalls, and load balancers, and finally to the server. Each step, or “hop,” involves:

  • Receiving the request
  • Decoding the protocol
  • Figuring out where the request should go (or even whether to route the request at all)
  • Possibly modifying the request to meet protocol or routing requirements
  • Sending the request onwards to the next device

All of this takes time, so each hop introduces a delay. Network latency is the total time, usually measured in milliseconds, required for a client and a server to complete a network data exchange. Even if there were no intermediate hops, which is never the case for communication across the Internet, some latency would remain, because the request still has to traverse layers of software and hardware at each end.
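To make the arithmetic concrete, here is a toy model in Python that treats end-to-end latency as the sum of per-hop delays plus a fixed allowance for endpoint processing. Every number in it is an illustrative assumption, not a measurement:

    # Toy model: end-to-end latency as the sum of per-hop delays plus
    # endpoint processing. All figures are illustrative assumptions.
    hop_delays_ms = [0.3, 1.2, 0.8, 4.5, 2.1]  # gateway, routers, firewall, ...
    endpoint_processing_ms = 1.5               # client + server stack traversal

    one_way_ms = sum(hop_delays_ms) + endpoint_processing_ms
    round_trip_ms = 2 * one_way_ms             # assumes a symmetric return path
    print(f"Estimated round trip: {round_trip_ms:.1f} ms")  # 20.8 ms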

How Latency is Measured

There are two common ways to measure network latency:

  • Round Trip Time (RTT): the time from when a client sends a request until it receives the complete response from the server.
  • Time to First Byte (TTFB): the time from when a client sends a request until it receives the first byte of the server’s response.

If we’re concerned about how network latency affects application performance, then the Round Trip Time is what we care about. If, on the other hand, we’re trying to optimize Internet of Things (IoT) transactions, we’ll usually be more concerned with Time to First Byte latency.
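As a rough illustration, the sketch below measures both metrics using only the Python standard library. The target host, the HTTPS port, and the five-second timeout are illustrative assumptions; real measurement tools handle DNS resolution, TLS setup, and retries far more carefully:

    import http.client
    import socket
    import time

    HOST = "example.com"  # placeholder target, not from the original article

    def tcp_round_trip_ms(host: str, port: int = 443) -> float:
        """Approximate RTT as the time to complete a TCP handshake."""
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # connection established, handshake complete
        return (time.perf_counter() - start) * 1000

    def time_to_first_byte_ms(host: str) -> float:
        """TTFB: from sending the request until the response headers arrive."""
        conn = http.client.HTTPSConnection(host, timeout=5)
        try:
            start = time.perf_counter()
            conn.request("GET", "/")  # connects lazily, then sends the request
            conn.getresponse()        # blocks until the first response bytes
            return (time.perf_counter() - start) * 1000
        finally:
            conn.close()

    print(f"RTT  ~{tcp_round_trip_ms(HOST):.1f} ms")
    print(f"TTFB ~{time_to_first_byte_ms(HOST):.1f} ms")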



Network Latency in the Real World

When it comes to real-world applications such as high-frequency stock trading, minimizing communications latency by even a millisecond can give a trader a huge advantage. This is why, for example, Hibernia Atlantic (since acquired by GTT Atlantic) spent $300 million laying a 6,021 km (3,741 mile) fiber-optic link from New York to London to deliver a Round Trip Time of 59 milliseconds, 6 milliseconds less than the next-best link. It has been estimated that the reduced network latency could give a large hedge fund an additional profit of close to $100 million per year.

How to Minimize Network Latency

Minimizing network latency is about optimizing every element of the networking infrastructure. Even when you’ve deployed ultra-high-performance hardware, optimizing software and protocols is the key, and Application Delivery Controllers (ADCs) provide a range of features that deliver these optimizations (connection reuse is sketched below the list), including:

  • HTTP Acceleration and Optimization including HTTP Connection Multiplexing (also called TCP Connection Reuse), RAM Caching, and HTTP Compression
  • SSL Offloading including SSL Termination, SSL Bridging, SSL Proxying, and SSL Session ID Reuse
  • TCP optimization including Selective Acknowledgment, Client Keep-Alive, and Window Scaling
  • HTTP Pipelining support
  • HTTP/2 and SPDY protocol support
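
One of the listed optimizations, HTTP Connection Multiplexing, is easy to demonstrate from the client side: reusing a single TCP/TLS connection avoids repeating the handshake on every request. Here is a minimal sketch using the Python standard library, with a placeholder host and an assumed request count:

    import http.client
    import time

    HOST = "example.com"  # placeholder host
    N = 5                 # assumed request count

    # One fresh connection per request: pays the TCP + TLS handshake each time.
    start = time.perf_counter()
    for _ in range(N):
        conn = http.client.HTTPSConnection(HOST, timeout=5)
        conn.request("GET", "/")
        conn.getresponse().read()  # drain the body before closing
        conn.close()
    fresh_ms = (time.perf_counter() - start) * 1000

    # One persistent connection reused for every request (HTTP keep-alive).
    conn = http.client.HTTPSConnection(HOST, timeout=5)
    start = time.perf_counter()
    for _ in range(N):
        conn.request("GET", "/")
        conn.getresponse().read()  # must drain before the next request
    conn.close()
    reused_ms = (time.perf_counter() - start) * 1000

    print(f"{N} requests, new connection each time: {fresh_ms:.0f} ms")
    print(f"{N} requests, one reused connection:    {reused_ms:.0f} ms")

An ADC applies the same idea on the server side, funneling many client connections into a small pool of persistent back-end connections.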

How A10 Can Help to Reduce Network Latency

Along with load balancing and infrastructure health checks, A10 Networks Thunder® Series Application Delivery Controllers (ADCs) deliver advanced traffic management and optimization features. These include SSL Offloading, which moves CPU-intensive SSL/TLS transactions off the servers and supports Perfect Forward Secrecy (PFS) ciphers, as well as content caching, compression, and TCP optimization.
