[c-nsp] Effect of simultaneous TCP sessions on bandwidth

Brad Gould bradley at internode.com.au
Sun Nov 10 17:47:45 EST 2013


Apply a shaper (not a policer) towards the service provider at each end, 
at 95 Mbps or so (the exact rate will probably require tweaking).
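
A minimal IOS-style sketch of such a shaper (the interface name, rate and 
class structure are assumptions to adapt to your platform):

  policy-map SHAPE-TO-CARRIER
   class class-default
    ! shape, don't police: queue the excess instead of dropping it
    shape average 95000000
  !
  interface GigabitEthernet0/0
   description Handoff towards carrier
   service-policy output SHAPE-TO-CARRIER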

A single TCP session is probably managing to balance itself into the 
~100 Mbps circuit.

Two or more TCP sessions are probably bursting into a policer (and, 
effectively, into each other) often enough to ruin performance.
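
A rough worked example, assuming the carrier polices at 100 Mbps with a 
burst allowance (Bc) of 12,500 bytes (1 ms of tokens at 100 Mbps): a host 
bursting in at GigE line rate exceeds the CIR by 900 Mbps, so the bucket 
empties in about Bc / (line rate - CIR) = 100,000 bits / 900 Mbps, roughly 
111 microseconds, after which the policer drops. One flow backs off and 
settles; several flows keep tripping the policer with overlapping bursts, 
each loss halves a window, and the aggregate never converges near the 
contracted rate. A shaper queues the excess instead of dropping it, which 
is why it helps.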

Brad

On 10/11/2013 4:12 PM, Youssef Bengelloun-Zahr wrote:
> Hello community,
>
> I need your help and hands-on experience to shed some light on a problem
> I'm facing.
>
> We have contracted a Layer 2 Ethernet connection with a carrier, handed
> off between our DC (Frankfurt) and a customer site (Hamburg).
>
> The carrier provides us with an Ethernet-over-MPLS pipe up to a DC in
> Hamburg and relies on a third-party local-loop provider to extend it to
> the customer site. Nothing new under the sun here.
>
> We have been testing this connection because we think we are facing
> bandwidth issues. Let me summarize our results:
>
>     - The carrier claims the end-to-end Ethernet RFC 2544 test passed: we
> have been able to check the results and they seem OK,
>
>     - UDP traffic reaches up to 95 Mbit/s for one-way streams (in both
> directions) and for simultaneous bidirectional streams,
>
>     - TCP traffic reaches up to 90 Mbit/s for one-way streams (in both
> directions),
>
>     - TCP traffic hits some kind of limit and isn't able to achieve more
> than 40-60 Mbit/s on average      <=== That's the problem we are facing.
>
> A few details I think are relevant:
>
>      - The FRA handoff between the carrier and our PE uses a GigE port,
>
>      - The HBG handoff between the carrier and the local-loop provider
> uses Fast Ethernet ports between their facing equipment,
>
>      - The CE port in Hamburg is Fast Ethernet, forced to 100 Mbps full
> duplex,
>
>
> We have carried out tests with multiple devices connected directly behind
> our PE in FRA and the carrier's CE in HBG; the results are always the same.
>
> In the end, we connected servers directly in order to remove any unneeded
> equipment from the path; tests were carried out using iPerf and some other
> tools.
>
> We have been debugging this with no improvement. We have tried everything,
> disabling all policers, etc. Nothing nails it!
>
> Our provider claims this is normal behavior for TCP. Does this sound
> normal to you?
>
> Thanks for your help.
>
> Best regards.
> _______________________________________________
> cisco-nsp mailing list  cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
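
To confirm the policer theory before a shaper is in place, it may be worth
comparing a single stream against parallel streams (iperf2 syntax; the host
name is a placeholder):

  iperf -c <hbg-server> -t 30          # single TCP stream
  iperf -c <hbg-server> -t 30 -P 8     # eight parallel TCP streams

If a policer is the culprit, the parallel run's aggregate should fall well
below the single-stream figure, matching the 40-60 Mbit/s you see; behind a
shaper both should land near the shaped rate.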


-- 
Brad Gould, Network Engineer
iiNet / Internode
P: +61 8 8228 2999
bradley at internode.com.au


