RE: [nsp] Multiple T1s versus MLPPP

From: Martin, Christian (cmartin@gnilink.net)
Date: Tue Feb 13 2001 - 23:45:40 EST


 
> On Tue, Feb 13, 2001 at 11:54:57AM -0800, Jim Warner wrote:
> > successive equal-cost paths without regard to individual destination
> > hosts or user sessions. Path utilization is good, but packets destined
> > for a given destination host might take different paths and might
> > arrive out of order.
>
> I suppose this is not a problem if all the T1's (or E1's in my case) are
> talking to the same router on the other end?

Except that your users' TCP sessions will experience suboptimal window
growth and excessive, unnecessary fast retransmits, especially if the T1s
take diverse transmission paths that cause the delay variation between
links to exceed 3*RTT. Note also that the RTT estimate becomes very
inaccurate in this scenario, which prevents TCP from recovering from real
loss efficiently.
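
To make the fast-retransmit point concrete, here is a toy sketch (my own
illustration in Python, not any real stack's code) of a receiver that
ACKs cumulatively; when one segment lags behind on a slower member link,
the later segments each generate a duplicate ACK, and three of those are
enough to trigger a fast retransmit even though nothing was lost:

    # Toy cumulative-ACK receiver. The arrival orders below are made up
    # to mimic one segment lagging behind on a slower member link.
    def acks_for(arrival_order):
        """Return the cumulative ACK emitted for each arriving segment."""
        expected = 0
        buffered = set()
        acks = []
        for seg in arrival_order:
            buffered.add(seg)
            while expected in buffered:
                expected += 1
            acks.append(expected)      # ACK = next segment we still want
        return acks

    in_order  = list(range(8))
    # segment 1 took the slower link; four later segments beat it there
    reordered = [0, 2, 3, 4, 5, 1, 6, 7]

    for name, order in (("in order ", in_order), ("reordered", reordered)):
        acks = acks_for(order)
        dups = max(acks.count(a) - 1 for a in set(acks))
        print(name, "ACKs:", acks, "dup ACKs:", dups,
              "-> fast retransmit" if dups >= 3 else "")

The reordered run produces four duplicate ACKs for the same sequence
number, so the sender retransmits and halves its congestion window for
no good reason, which is where the suboptimal window growth comes from.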

There are numerous papers written by members of the research community on
the reordering problem. The most notable one is by Bennett, Partridge, and
Shectman, titled "Packet Reordering is Not Pathological Network Behavior."
See also www.aciri.org for work done by Sally Floyd and Vern Paxson in this
area. They, along with Van Jacobson, are at the forefront of TCP
performance research.

> I think that the CPU is not heavily used if it will just do round-robin,
> would it? While CEF would use up the CPU and RAM?

Without CEF, round robin is only possible at the process level: packets
are taken off the interrupt forwarding path and handed to a scheduled
process for the route lookup and Layer 3/Layer 2 rewrite. This is VERY
CPU intensive.

>
> We're in the process of getting a NxE1 link to a US provider and am not
> sure of what technology would be used.

Personally, I have observed MLPPP to be the best solution for aggregating
links; however, hardware-based IMUXes perform better from the perspective
of the router. The problem with IMUXes is often an issue with the Cisco
HSSI cards: they may not recognize the clock rate sent by the MUX and will
therefore attempt to send data at rates exceeding the aggregate T1
bandwidth. MLPPP does not suffer from this issue. CEF per-packet load
sharing is the next-best option if you have flows that can grow to exceed
a T1 (which is impossible for transoceanic sessions using the default
Win32/*NIX TCP RWND settings). I would only use CEF per-packet if the
endpoints are 1) close (<500 miles), and 2) pushing large amounts of
stateless session-layer data. I have also used it over 3xT3 to support a
customer who drives an OC3 ATM link at line rate 24x7x365, serving up
thousands of TCP sessions per second. Destination-based forwarding with a
top-heavy packet size distribution does not perform well in an nxT/E(1/3)
scenario when the data is served at rates >= n*(link speed). If the packet
sizes are distributed throughout the transmission unit space,
destination-based forwarding normalizes to a near-equal distribution (law
of large numbers).
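
To put a number on the RWND point: a single TCP session tops out at
roughly RWND/RTT. A quick back-of-the-envelope (the window and RTT
figures here are assumptions for illustration, not measurements):

    # One session can't move faster than its receive window per RTT,
    # regardless of how much link capacity sits underneath it.
    rwnd_bytes = 17520      # assumed default-ish Win32-era receive window
    rtt_s      = 0.200      # assumed transoceanic round-trip time
    t1_bps     = 1.544e6

    max_bps = rwnd_bytes * 8 / rtt_s
    print("per-session ceiling: %.2f Mbit/s" % (max_bps / 1e6))     # ~0.70
    print("fraction of one T1:  %.0f%%" % (100 * max_bps / t1_bps)) # ~45%

So at ~200ms a default-window session can't even fill one T1, let alone
spill over onto a second one.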

Per-destination CEF is the next-best way to go when there are "normal"
packet size distributions. You can bundle many high-speed interfaces this
way and get performance equal to what it would be if there were a single
link. This means that a GSR can bundle 4 OC-12s together and get nearly
the same performance as a single OC-48 (assuming Engine 2, of course!).
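
If you want to see the law-of-large-numbers effect, here is a toy Python
sketch (my own illustration; CEF's actual hash is its own animal) that
hashes flows onto four links and shows the byte split evening out as the
flow count grows:

    import random, zlib

    N_LINKS = 4
    random.seed(1)

    def link_for(src, dst):
        # stand-in hash; real per-destination CEF uses its own function
        return zlib.crc32(("%s-%s" % (src, dst)).encode()) % N_LINKS

    def split(n_flows):
        load = [0] * N_LINKS
        for i in range(n_flows):
            src = "10.0.%d.%d" % (i // 256, i % 256)
            dst = "host%d" % i
            # made-up per-flow volume: random packet size * random count
            bytes_sent = random.choice([64, 576, 1500]) * random.randint(1, 1000)
            load[link_for(src, dst)] += bytes_sent
        total = sum(load)
        return [round(100.0 * b / total, 1) for b in load]

    for flows in (8, 100, 10000):
        print("%6d flows -> %% of bytes per link: %s" % (flows, split(flows)))

With a handful of fat flows the split is lumpy; with thousands of flows
it converges toward 25/25/25/25, which is why per-destination sharing
holds up so well on heavily aggregated links.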

HTH,
chris

> --
>
> http://www.internet.org.ph The Philippine Internet Resource
> Mobile Voice/Messaging: +63-917-810-9728


