
  • Does IPERF Tell White Lies?

    Kings Village Center #66190 Scotts Valley, CA 95067

    iwl.com +1.831.460.7010

    info@iwl.com

    Here at InterWorking Labs (http://iwl.com/) we usually measure bandwidth using specialized devices from

    Spirent or Ixia.

    However, the other day we had occasion to use the popular open source tool Iperf –

    http://en.wikipedia.org/wiki/Iperf.

    Our goal was to measure the bandwidth of a link that had a lot of buffering (i.e. a bufferbloat situation.)

    So we set up an Iperf server at one end and an Iperf client at the other. We ran Iperf in UDP mode – we

    did not want our results affected by TCP congestion algorithms. We ran a single-direction test. The client

    was instructed to send traffic into the link much faster than we knew the link could handle – we expected

    substantial loss of packets and we also expected a significant time delay for many of those that did get

    through.

    We observed those expected effects.

    And we observed more that we did not expect. In particular we observed that Iperf was significantly

    under-reporting the actual bandwidth of the link.

    We experimented with the “-l” parameter on the Iperf client and noticed that as the packet size was

    reduced, the bandwidth report error increased.

    So we inserted a small switch with port monitoring capability onto the link to see what we could see.

    What we saw brought to mind one word – RTFM. Had we very carefully read the Iperf documentation we

    might have noticed that Iperf measures the transport level bandwidth.

    In other words, it measures the number of UDP data or TCP data bytes that are carried. The bytes used

    by protocol wrappers – Ethernet, IP, UDP/TCP – are not included in the Iperf bandwidth reports.

    On a typical packet there are:

    ‣ 18 bytes of Ethernet frame. This includes the source and destination MAC addresses, the type/length field, and the CRC/FCS. (There are at least 4 more bytes if you use IEEE 802.1Q

    VLAN tags. There are also 8 more bytes that carry the packet preamble and the start frame

    delimiter. For convenience we won't use these in our calculations. However, we should

    remember that these exist on most media and are yet more bits that Iperf ignores.)

    ‣ 20 bytes (or more if there are options) of IPv4 header.

    ‣ 8 bytes of UDP header.

    That adds up to at least 46 bytes per packet that are not taken into account when Iperf calculates link

    bandwidth.

    On large packets, such as might contain 1472 bytes of UDP data, this amounts to an Iperf under report of

    roughly 3%.

    On small packets that error grows substantially. For packets that contain 60 bytes of UDP data the under

    reporting error is over 40%.

    The error can get even larger where the UDP packet has so little data that even with the UDP and IP and

    Ethernet headers it does not fill a minimum sized Ethernet frame.
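    The arithmetic above can be sketched in a few lines of Python (our own illustration, not from the original article). The 64-byte minimum Ethernet frame size is the standard value we assume in order to model the tiny-payload case:

    ```python
    # Per-packet overhead that Iperf ignores:
    # 18 B Ethernet (MACs, type/length, FCS) + 20 B IPv4 + 8 B UDP = 46 B.
    ETH, IP, UDP = 18, 20, 8
    MIN_FRAME = 64  # minimum Ethernet frame size, including the 4-byte FCS

    def under_report(payload: int) -> float:
        """Fraction of the on-the-wire bandwidth that Iperf does not report
        for a UDP payload of the given size (the Iperf -l value)."""
        frame = max(payload + ETH + IP + UDP, MIN_FRAME)
        return 1 - payload / frame

    print(f"{under_report(1472):.1%}")  # large packets: roughly 3%
    print(f"{under_report(60):.1%}")    # small packets: over 40%
    ```

    Note how the `max()` handles the minimum-frame case: a payload so small that the frame is padded up to 64 bytes makes the error larger still.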

    The numbers reported by Iperf are not incorrect. Rather they are potentially misleading to those who do

    not realize that Iperf is measuring transport level data and who expect that the result will be the raw

    bandwidth of the link, such as that advertised by ISPs or many consumer link speed test websites.

    Converting Iperf Results to Real Bandwidth

    One can convert the numbers given by Iperf into actual bandwidth.

    The formula is:

    A = ((L + E + I + U) / L) * R

    Where:

    ‣ A is the actual link bandwidth in bits/second.

    ‣ L is the value given to Iperf's “-l” parameter. For values above 1472 (on a link with an MTU of 1500) Iperf will generate fragmented UDP packets. For our purposes this should be avoided, so it is best if

    the user specifies a specific value in the range of 60 to 1472. This will result in UDP frames with that

    amount of data.

    ‣ E is the size of the Ethernet framing. For convenience we will say that this is 18 bytes – to hold the source and destination MAC addresses, the type/length field, and the 4 byte CRC/FCS. But it can be

    more if 802.1Q VLAN tags are being used. And there are also 8 more bytes that are used for the

    packet preamble and the start frame delimiter. For convenience we won't use these here in our

    calculations.

    ‣ I is the IPv4 header size. This is 20 bytes plus IPv4 options. For convenience we will use the size without options.

    ‣ U is the UDP header size of 8 bytes.

    ‣ R is the bits/second value reported by Iperf. (Iperf often reports this with a 'K' or 'M' suffix to indicate kilobits or megabits.)

    (Note: This formula should be computed using floating point arithmetic; integer arithmetic will usually

    simply, but incorrectly, give 1 as the result.)
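    As a minimal sketch (ours, not the article's), the formula can be computed like this; the default header sizes are the E, I, and U values defined above, and floating-point division avoids the integer-arithmetic pitfall just noted:

    ```python
    def actual_bandwidth(L: int, R: float, E: int = 18, I: int = 20, U: int = 8) -> float:
        """Convert Iperf's reported rate R (bits/second) into the actual
        on-the-wire bandwidth A = ((L + E + I + U) / L) * R.

        L is the Iperf -l payload size in bytes (keep it in 60..1472 to
        avoid fragmentation); E, I, U are the Ethernet, IPv4 and UDP
        header sizes.  Python's / is floating-point division, so the
        ratio is not truncated to 1.
        """
        return ((L + E + I + U) / L) * R

    # Example: Iperf reports 10 Mbit/s with -l 1472.
    print(actual_bandwidth(1472, 10e6))  # about 10.31 Mbit/s on the wire
    ```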

    Alternatives

    Iperf's weakness is in its reporting. As a packet stream generator Iperf works fine.

    One can combine Iperf with another open source tool, wireshark – http://en.wikipedia.org/wiki/Wireshark.

    1.) The best way to do this would be to acquire a small (and relatively inexpensive) consumer grade

    Ethernet switch that has port monitoring. We tend to use the Netgear GS105E (that final 'E' is

    important) for this – some configuration by the user is required.

    2.) Then we install that switch so that the Ethernet we are measuring goes through the switch. In other

    words we install the switch astride the path between our Iperf client and Iperf server.

    3.) Then we run a wire from the monitoring port on the switch to a machine running wireshark (and that

    is powerful enough to absorb the traffic load).

    4.) We then fire up the Iperfs and capture several seconds – typically 30 seconds – of traffic.

    5.) Then we can delve into wireshark's statistics section and create a graph that shows us the

    bandwidth utilization.
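    The per-second numbers that wireshark graphs can also be computed directly. A minimal sketch (our own, under the assumption that the captured frames are available as (timestamp, frame length) pairs, e.g. via Wireshark's CSV export):

    ```python
    from collections import defaultdict

    def bandwidth_per_second(frames):
        """frames: iterable of (timestamp_seconds, frame_length_bytes) pairs
        taken from a capture.  Returns {second: bits_on_the_wire}, i.e. the
        per-second utilization that can be compared against Iperf's report."""
        bits = defaultdict(int)
        for ts, length in frames:
            bits[int(ts)] += length * 8  # bucket each frame into its second
        return dict(bits)

    # Two full-size frames in second 0, one minimum-overhead small frame in second 1:
    sample = [(0.1, 1518), (0.7, 1518), (1.2, 106)]
    print(bandwidth_per_second(sample))  # {0: 24288, 1: 848}
    ```

    Because the capture sees whole Ethernet frames, these totals include the header bytes that Iperf's own report omits.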

    Conclusion

    Iperf is a useful tool. But one must interpret its results knowing that Iperf measures transport level, i.e.

    user data, bandwidth rather than the actual bandwidth of a link.

    Most ISPs advertise the speed of their offerings based on the actual bandwidth rather than user data

    bandwidth.

    The differences in the numbers between what ISPs offer and what Iperf reports can be high – over 40% in

    some cases. The actual difference will mostly depend on the traffic mix (the blend of large and small

    packets). The difference will also be affected, but to a lesser degree, by the presence of VLAN tags,

    MPLS headers, micro-packetization, and other rather technical matters.

    There is also a small error caused by Iperf not counting hidden traffic, such as ARP.

    IPv6, because it has longer IP headers than IPv4, will increase the under reporting by Iperf.

    Notes

    We are ignoring the effect of other kinds of packets on the link being tested. For example, the ARP

    protocol will consume some bandwidth, but typically not a significant amount. The effect of things like

    ARP can, however, be multiplied if the time needed for the ARP transaction to complete results in the

    delay of a significant number of our test UDP packets.

    In our measurements the packet loss and delay on the link was sufficiently high so that at the end of each

    run the Iperf server could not reliably send its summary report back to the client. So we got our numbers

    by having the server report its perceived bandwidth every few seconds.


    ©2014, InterWorking Labs, Inc. ALL RIGHTS RESERVED.