[c-nsp] Load-balancing

Matt Buford matt at overloaded.net
Wed Jun 27 18:22:19 EDT 2007


> Thanks Rodney.... will per-packet break any applications normally?  I used
> to hear stories about VOIP and other time sensative applications having
> issues with packets out of order etc... any truth or concern to that any
> longer?

I can answer this a bit, having played with this in several situations. 
Someone else may be able to provide a better explanation of how TCP handles 
it and why it causes problems, but I can provide some general real-world 
experience with out of order packets.

A lot of it depends on your circuits and your traffic.  If you per-packet 
one concurrent flow across two ISDN links, you'll have a lot more reordering 
than if you per-packet 100 concurrent flows across gigabit Ethernet.

As a general guideline, I would say T1 speeds are somewhere around the 
border of where it works pretty well with minimal reordering for most 
people.  Faster than a T1 and you'll probably see little enough reordering 
for it not to be a big deal.  Slower than a T1 and it is likely to be a 
problem.
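
To put a rough shape on that, here's a quick Python toy model (just my own 
back-of-the-envelope sketch, nothing Cisco-specific; the packet size, the 
pacing, and the idea that a single flow fills the combined capacity are all 
assumptions) that round-robins one flow's packets across two links and 
counts how many arrive out of order at the far end.  The slow, unequal pair 
reorders a big chunk of the flow; a pair of T1s comes out clean in this 
idealized model, though real gear adds jitter that this ignores:

# Toy model only: the packet size, pacing, and link rates are assumptions,
# not measurements, and real routers add queueing jitter this ignores.
def count_reordered(num_packets=200, rates_bps=(128000, 144000), pkt_bytes=1500):
    spacing = pkt_bytes * 8 / sum(rates_bps)   # pace one flow at combined capacity
    free_at = [0.0, 0.0]                       # when each link finishes its current packet
    arrivals = []                              # (arrival_time, sequence_number)
    for seq in range(num_packets):
        link = seq % 2                         # per-packet load sharing: strict round-robin
        start = max(free_at[link], seq * spacing)
        free_at[link] = start + pkt_bytes * 8 / rates_bps[link]
        arrivals.append((free_at[link], seq))
    reordered, highest = 0, -1
    for _, seq in sorted(arrivals):            # the order the receiver actually sees
        if seq < highest:
            reordered += 1
        else:
            highest = seq
    return reordered

print(count_reordered())                                 # 128k ISDN + 144k IDSL: lots
print(count_reordered(rates_bps=(1544000, 1544000)))     # two T1s: in order here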

I've seen many people use this on T1.  It mostly works.  Packet reordering 
manifests as retransmissions or dropped packets depending on the protocol. 
With a little bit of retransmission or a little bit of packet loss, it 
generally isn't noticeable, so people often use this at T1 rates without 
issue.
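
The reason the symptom differs by protocol: TCP buffers the early arrivals 
and keeps ACKing the hole, and three duplicate ACKs make the sender 
fast-retransmit a packet that was never actually lost, while a receiver 
that refuses late packets (a VPN doing anti-replay, or a player that has 
already moved on) just throws them away, which looks exactly like loss.  A 
simplified Python sketch of both behaviors (toy models, not real stacks, 
and the delayed-packet arrival pattern is made up):

# Toy models only, not real protocol stacks.  The "TCP-like" receiver ACKs
# cumulatively and the sender fast-retransmits after 3 duplicate ACKs; the
# "drop-late" receiver simply discards anything older than what it has seen.
def tcp_like(arrival_order):
    expected, dup_acks, spurious_retransmits = 0, 0, 0
    buffered = set()
    for seq in arrival_order:
        if seq == expected:
            expected += 1
            while expected in buffered:        # hole filled, slide forward
                buffered.discard(expected)
                expected += 1
            dup_acks = 0
        else:
            buffered.add(seq)
            dup_acks += 1
            if dup_acks == 3:                  # sender fast-retransmits a packet
                spurious_retransmits += 1      # that was never actually lost
    return spurious_retransmits

def drop_late(arrival_order):
    highest, dropped = -1, 0
    for seq in arrival_order:
        if seq < highest:
            dropped += 1                       # late packet discarded: looks like loss
        else:
            highest = seq
    return dropped

# Made-up arrival pattern: every 10th packet shows up 4 packets late.
order = list(range(100))
for seq in range(5, 100, 10):
    order.remove(seq)
    order.insert(order.index(seq + 4) + 1, seq)

print(tcp_like(order), "spurious fast retransmits")
print(drop_late(order), "packets discarded as apparent loss")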

Back in like '99 or so, I had both 128k ISDN and 144k IDSL to my home, both 
going back to the ISP that I worked at.  One day I set up per-packet load 
balancing across these two links.  Because of the combination of low speeds 
plus unequal speeds, packet reordering was significant.  Some general results 
that I remember off the top of my head:

PPTP was unusable.  Out-of-order packets were dropped by the VPN.  It was 
like having 50% packet loss.

RealPlayer was unusable.  I think out-of-order packets were ignored, but I 
can't remember the specifics.

TCP worked, and I got around 200k on a single flow.  There were a lot of 
retransmissions wasting bandwidth, so I was using perhaps 250k or so of 
bandwidth to achieve my 200k of throughput.  Kinda wasteful across the 
servers and backbone, but the end result WAS greater than any one of my 
links.

Also, I once had a gigabit connection to Global Crossing that, for reasons 
no one could explain, reordered packets moderately (even between the 
directly connected routers, so there were no per-packet equal-cost links 
between them).  At first glance this wasn't noticeable.  However, if you 
paid attention to single-flow moderate-latency max transfer rates, it was 
having a significant effect.  The reordered packets were making TCP flows 
back off.  Global Crossing eventually moved our port from their Juniper 
over to a 6500, and the reordering problem completely went away and 
single-flow transfers sped up noticeably. 


