
Tech Tips

Why does the Performance Test use UDP?

A popular feature of the EtherScope® nXG is the Network Performance Test. This allows the EtherScope to send simultaneous streams of test traffic to up to four remote endpoints. These end devices can be configured to act as remote peers (where they can, in turn, send a separate stream of data in the opposite direction) or as reflectors (where they simply reflect the EtherScope traffic back to the origin).

The Performance Test uses a stream of data transmitted over the UDP transport protocol. But why UDP instead of TCP? Isn’t TCP the protocol used by mission-critical applications? Isn’t TCP more reliable? Let’s discuss.

UDP traffic vs TCP streams
UDP has unfairly developed a reputation for being unreliable, perhaps because the protocol is connectionless, does not retransmit lost data, and has much simpler protocol overhead than its transport-layer cousin. While these statements are true, UDP has several advantages when used as a performance test protocol.

  1. Connectionless transmission
    UDP does not need to set up a connection before it begins transmitting data. A TCP connection requires internal resources to be allocated in both endpoints to manage, transmit, and receive the traffic stream. While most operating systems have highly tuned TCP stacks that optimize the load of application traffic and minimize delays, when it comes to throughput testing, UDP is a much “lighter” protocol.
  2. No need to acknowledge or retransmit data
    In the Performance Test, the application tracks all transmitted traffic and is aware of what data arrives at the endpoint and what is lost in the network. This is handled at layer seven, within the application itself, so there is no need for an additional level of reliability and retransmission at the transport layer. Performance tests that must wait for acknowledgments and possible retransmissions often cannot fully fill a network path due to this overhead, which is why it is not uncommon to see TCP test results at 95–98% (or even less) of a path’s potential throughput.
  3. No congestion control algorithms
    TCP does not want to transmit so aggressively that it overwhelms the network or the available bandwidth. To help it achieve this goal, it utilizes a fairly complex congestion algorithm that continually adjusts the flow of data, even overriding application-level settings. If congestion or data loss is experienced, TCP can “apply the brakes” rather quickly to avoid further loss. When it comes to performance testing, these protocol-level adjustments can impact the quality of the test, as TCP will always be more conservative in transmission than UDP, which has no flow control method.
  4. No receive window
    The receiver of a TCP stream of data has a receive buffer where it temporarily stores ingress data. If the target is congested internally, this buffer can fill, which causes the sender to slow its transmit rate. In the case of a performance test, the receive buffer has little to do with the actual measurement of network bandwidth, so we don’t want a congested endpoint to needlessly slow the test. We could mistakenly blame the network for a poor test result when the real culprit is the TCP receive buffer on the receiver.
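Items 1 and 2 can be sketched in a few lines of Python. This is a minimal illustration, not EtherScope internals: a UDP sender transmits immediately with no handshake, and loss detection lives entirely in the application, which tags each datagram with its own sequence number (the packet count, payload size, and loopback addresses here are assumptions for the sketch):

```python
import socket
import struct

NUM_PACKETS = 100          # hypothetical test size, not the EtherScope's actual stream
PAYLOAD = b"\x00" * 512    # filler payload

# Receiver: a plain UDP socket -- no listen()/accept(), no handshake.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))   # OS picks a free loopback port
addr = rx.getsockname()
rx.settimeout(0.5)

# Sender: sendto() transmits immediately; there is no connection to set up.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq in range(NUM_PACKETS):
    # The first 4 bytes carry an application-level sequence number --
    # this is how loss is detected at layer 7, not by the transport.
    tx.sendto(struct.pack("!I", seq) + PAYLOAD, addr)

# Loss accounting happens in the application: any sequence number
# that never shows up was dropped somewhere along the path.
received = set()
try:
    while len(received) < NUM_PACKETS:
        data, _ = rx.recvfrom(2048)
        received.add(struct.unpack("!I", data[:4])[0])
except socket.timeout:
    pass  # nothing more is coming; the gap is the loss

lost = NUM_PACKETS - len(received)
print(f"sent={NUM_PACKETS} received={len(received)} lost={lost}")
```

Because UDP neither acknowledges nor retransmits, the sender’s loop runs at whatever rate the application chooses, and the receiver’s tally is a direct measure of what the network delivered.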
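The effect described in item 4 can be quantified: a TCP sender can never have more than one receive window of unacknowledged data in flight, so its throughput is capped at roughly the window size divided by the round-trip time, no matter how fast the link is. A quick back-of-the-envelope sketch (the 64 KiB window and 20 ms RTT are illustrative assumptions, not measurements):

```python
# TCP keeps at most one receive window of data in flight per round trip,
# so achievable throughput <= window / RTT, regardless of link speed.
rwnd_bytes = 64 * 1024    # assumed receive window: 64 KiB
rtt_seconds = 0.020       # assumed round-trip time: 20 ms

max_tcp_bps = rwnd_bytes * 8 / rtt_seconds
print(f"TCP cap: {max_tcp_bps / 1e6:.1f} Mb/s")  # ~26.2 Mb/s, even on a 1 Gb/s path
```

A UDP test stream has no such window, so it can fill the path limited only by the configured test rate and the network itself, which is exactly what a bandwidth measurement wants.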

Bottom line, UDP is simple, effective, and sufficient as a transport layer carrier for the performance test. Since the smarts of the test are built into the upper layer application, there is no need for reliability, retransmission, flow control, or receive buffering at the transport layer. UDP allows the test to stress, measure, or validate the network infrastructure path without protocol interference, which is the purpose of the Performance Test.