[nsp] limits to CEF per-packet load sharing?

Eric Matkovich ematkovi at cisco.com
Fri Feb 14 10:17:05 EST 2003


I would recommend MLPPP.  Somewhere around 12.1(3)T, MLPPP became CEF switched.  Although this was implemented largely to support MPLS, you can derive some of the performance benefits (though I'm not convinced it is fully CEF switched).  MLPPP allows you up to 255 member links, and the algorithm tries to utilize all member links in the bundle equally.  When used in conjunction with "Link Fragmentation and Interleaving," two different algorithms may be employed.

The "Equal Cost" algorithm and "Unequal Cost" algorithm.
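For reference, a minimal MLPPP bundle configuration might look something like the sketch below (interface names, group number, and addressing are illustrative, not from this thread; check your IOS release for exact syntax):

```
interface Multilink1
 ip address 10.0.0.1 255.255.255.252
 ppp multilink
 ppp multilink group 1
!
interface Serial0/0
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
!
interface Serial0/1
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
```

Each member Serial interface joins the bundle via "ppp multilink group"; IP configuration lives on the Multilink interface.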

Equal Cost:

This is the simpler of the two and is used when the following conditions have been met:

- All member links are of equal bandwidth (ex: 4 ISDN B-Channels).
- Interleaving hasn't been configured on the link.
- A "fragment-delay" hasn't been configured on the link.

Characteristics:

In this mode, packets are chopped into equally sized fragments based on the number of member links in the bundle. There is no upper bound on fragment size other than the MTU of the individual links.

The number of fragments a packet is divided into is the nearest power of 2 less than or equal to the number of links in the bundle, subject to a lower boundary of 42 bytes per fragment and an upper boundary of 16 fragments. Here's an example calculation:

The Multilink bundle contains 5 ISDN B-Channels, and a 100-byte packet arrives:

100 bytes / 4 links (nearest power of 2 <= 5) = 25-byte fragments <-- Wrong, below the 42-byte minimum!

Drop to the next lower power of 2, which is 2:

100 bytes / 2 links = 50-byte fragments <-- Complies, send fragments.
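The fragment-count selection above can be sketched in a few lines of Python. This is my reading of the rule, not IOS source: start at the nearest power of 2 less than or equal to the link count (capped at 16 fragments), then halve until every fragment meets the 42-byte floor.

```python
MIN_FRAGMENT_BYTES = 42
MAX_FRAGMENTS = 16

def equal_cost_fragments(packet_bytes: int, num_links: int) -> int:
    """Return the number of equal-size fragments for a packet."""
    # Nearest power of 2 <= num_links, capped at 16 fragments.
    count = 1
    while count * 2 <= min(num_links, MAX_FRAGMENTS):
        count *= 2
    # Halve until each fragment meets the 42-byte minimum (or give up at 1).
    while count > 1 and packet_bytes / count < MIN_FRAGMENT_BYTES:
        count //= 2
    return count

# The worked example: a 100-byte packet over 5 B-channels.
# 4 fragments -> 25 bytes each (too small); 2 fragments -> 50 bytes (OK).
print(equal_cost_fragments(100, 5))  # 2
```

A larger packet over the same 5 links would fragment 4 ways, since 4 fragments would each clear the 42-byte floor.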

Unequal Cost works like this:

This mode is used whenever the above criteria haven't been met. Here the constraint is the fragment size rather than the number of fragments a packet is broken into; in other words, it is based on the fragment-delay. Without being explicitly set, this defaults to 30ms.

Two possibilities are employed in determining the fragment size:

A) making each fragment a size that complies with the fragment-delay of the link it is being transmitted on, or

B) making all fragments an equal size based on the slowest link in the bundle and queuing more fragments to the faster links.

The former would be the most efficient for reducing encapsulation overhead on the links.
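Option A reduces to simple arithmetic: the fragment is the largest that can be serialized within the fragment-delay at that link's speed. A quick sketch (my own arithmetic, assuming size = bandwidth x delay):

```python
def fragment_size_bytes(link_bps: int, fragment_delay_ms: int = 30) -> int:
    """Largest fragment (bytes) serializable within fragment_delay_ms."""
    # bits transmittable in the delay window, converted to bytes
    return link_bps * fragment_delay_ms // 1000 // 8

# One 64 kbps ISDN B-channel with the default 30 ms fragment-delay:
print(fragment_size_bytes(64000))    # 240
# A T1 (1536 kbps) can carry a much larger fragment in the same 30 ms:
print(fragment_size_bytes(1536000))  # 5760
```

This also shows why option B wastes capacity on mixed-speed bundles: sizing everything for the 64 kbps link forces 240-byte fragments onto the T1, multiplying its per-fragment encapsulation overhead.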

As far as splitting the links across different VIPs in the same 7500, I'm not sure what the performance impact would be.  Typically, any recombination of fragments, be it Multilink FR, MLPPP, IP, etc., across links that sit on different VIPs requires the RSP (read: process switched).

This would be up to you to decide.

Cheers,

-E-



At 09:36 AM 2/14/2003 -0300, Ezequiel Carson wrote:
>Just a comment.
>
>
>Take care about "per-packet" fashion, because you are sending each
>packet per link, if any link is having some queuing problems or latency
>or whatever  you will get "Out of Sequence"  at the end point. 
>
>Ezeq.
>
>
>
>
>
>
>
>On Fri, 2003-02-14 at 01:11, Edward Henigin wrote:
>> Has anyone run into any limits over the number of interfaces that
>> CEF per-packet load sharing will work?
>> 
>> I've got a customer who wants 10 T1's (please, just don't ask :)
>> and we're pondering using MLPPP or CEF per-packet load sharing.
>> What's the most # of interfaces you've ever run CEF per-packet
>> load sharing, and seen it work well?
>> 
>> BTW, our side will be a 7500 or 7200, and the client side will be
>> a 7200.
>> 
>> Thanks,
>> 
>> Ed
>> _______________________________________________
>> cisco-nsp mailing list  cisco-nsp at puck.nether.net
>> http://puck.nether.net/mailman/listinfo/cisco-nsp
>> archive at http://puck.nether.net/pipermail/cisco-nsp/
>> 
>
>
>_______________________________________________
>cisco-nsp mailing list  cisco-nsp at puck.nether.net
>http://puck.nether.net/mailman/listinfo/cisco-nsp
>archive at http://puck.nether.net/pipermail/cisco-nsp/ 

___________________________________________________________
Eric Matkovich                                  510 McCarthy Blvd., Bldg 24/2
Technical Marketing Engineer             Milpitas, CA 95035
Tunneling Technologies                      (408) 527.4111 office
ematkovi at cisco.com                         (800) 365.4578 pager
Internet Technologies Division (ITD) 
http://wwwin.cisco.com/ios/connectivity/
___________________________________________________________


