[c-nsp] Network performance question - TCP Window issue?

John Neiberger jneiberger at gmail.com
Sun Apr 29 13:57:20 EDT 2012


The timing of this is coincidental. I've been helping to troubleshoot
a similar problem at work for days. Let's say we have three servers,
A, B and C. We transfer files between them and here is what we see:

A to B: Fast (around 18 MB/s)
B to A: Slow (around 1 MB/s)
A to C: Slow (around 1 MB/s)
C to A: Fast (around 18 MB/s)

In our case, Server A is fast when sending to B but not when sending
to C. C can send at a high speed when sending back to A, though.

We've checked everything we can think of. The paths aren't the same.
One path goes through a firewall, another path goes through GRE
tunnels. There are no TCP retransmits and we've verified that MTU
isn't the problem. The firewall can't be the problem because it's only
in the path of one set of transfers. All the TCP settings we've
checked on the servers seem to be the same, although I'm not a server
guy. Someone else has been checking those. The endpoints are on 1-gig
links but it's 10-gig the whole way between them. There is about 50ms
round-trip latency in all cases.

I have no idea what could account for this behavior.
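For scale, the throughput of a single TCP stream is capped at window / RTT, so the numbers above each imply an effective window size. A quick back-of-the-envelope check, assuming only the ~50 ms RTT stated above:

```python
# Throughput of one TCP stream <= effective_window / RTT, so each
# observed rate implies an effective window. RTT is the ~50 ms
# round-trip latency reported for all paths in this thread.
RTT = 0.050  # seconds

def implied_window(throughput_mb_per_s, rtt_s=RTT):
    """Effective TCP window (bytes) implied by a sustained throughput."""
    return throughput_mb_per_s * 1024 * 1024 * rtt_s

fast = implied_window(18)  # fast direction, ~18 MB/s
slow = implied_window(1)   # slow direction, ~1 MB/s

print(f"fast path implied window: {fast / 1024:.0f} KB")  # roughly 922 KB
print(f"slow path implied window: {slow / 1024:.0f} KB")  # roughly 51 KB
```

It is only suggestive, but the slow direction lands just under the classic 64 KB ceiling of a TCP connection without window scaling, which is the kind of one-directional asymmetry you can get if the window-scale option is lost or ignored on one path.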


On Sun, Apr 29, 2012 at 5:41 AM,  <sledge121 at gmail.com> wrote:
> I have seen this before; it's the bandwidth-delay product at work, which ties throughput to the TCP window size. Let us know the TCP results after you have adjusted it.
>
>
> Sent from my iPad
>
> On 29 Apr 2012, at 10:22, CiscoNSP_list CiscoNSP_list <cisconsp_list at hotmail.com> wrote:
>
>>>
>>> Did you also run your iperf tests with UDP? (The numbers don't look
>>> like it.)
>>>
>>> With TCP you won't see many drops on your switches, because TCP will
>>> adjust - you will simply see lower throughput instead.
>>>
>>> With iperf available at all three sites, I would run tests with UDP streams.
>>> This won't find the maximum bandwidth automatically; you have to set a
>>> bandwidth for the test and see whether you get any packet loss.
>>>
>>> Keep in mind that your carrier might police on Ethernet bandwidth,
>>> while iperf measures IP throughput.
>>
>> Thanks Klaus - No, did not test with udp...here it is:
>> (With 100M had too many drops - 80M was the best:)
>> [  3] local xxx.xxx.73.54 port 45790 connected with xxx.xxx.65.2 port 5001
>> [ ID] Interval       Transfer     Bandwidth
>> [  3]  0.0-10.0 sec  95.4 MBytes  80.0 Mbits/sec
>> [  3] Sent 68029 datagrams
>> [  3] Server Report:
>> [  3]  0.0-10.0 sec  95.4 MBytes  80.0 Mbits/sec  0.044 ms    1/68028 (0.0015%)
>> [  3]  0.0-10.0 sec  1 datagrams received out-of-order
>> So to see similar performance with TCP, I will need to adjust the TCP window, correct?
>>
>>
>> _______________________________________________
>> cisco-nsp mailing list  cisco-nsp at puck.nether.net
>> https://puck.nether.net/mailman/listinfo/cisco-nsp
>> archive at http://puck.nether.net/pipermail/cisco-nsp/
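To answer the window question above: the window needed to keep a path full is its bandwidth-delay product. A rough sketch, assuming the 80 Mbit/s clean UDP rate and the ~50 ms RTT reported in this thread:

```python
# Required TCP window = bandwidth-delay product of the path.
# Assumes the 80 Mbit/s loss-free UDP rate and ~50 ms RTT from the thread.
bandwidth_bits_per_s = 80e6  # 80 Mbit/s, best UDP rate without drops
rtt_s = 0.050                # ~50 ms round-trip latency

bdp_bytes = bandwidth_bits_per_s * rtt_s / 8
print(f"required window: {bdp_bytes / 1024:.0f} KB")  # ~488 KB
```

With iperf that could be tried as, e.g., `iperf -c <server> -w 512K` (with a matching `-w` on the server side), keeping in mind that the OS may clamp or adjust the requested socket buffer size.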


