[nsp] limits to CEF per-packet load sharing?

Vicky Mair vickyr at socal.rr.com
Fri Feb 14 10:25:44 EST 2003


hi there,

i distinctly remember that we ran into an interesting problem where for some
reason (maybe it was a cef bug) the per-packet bandwidth usage across our
t1s was not equally balanced. in other words, our ftp transfers were pounding
on a single t1 instead of being load-balanced over the t1s.

anyway, after pushing it to process switching (no ip route-cache cef) on the
interface(s) in question as a workaround, we got the result we were looking
for :) weird, but it worked. the downside is the potential memory/cpu hit of
process vs. fast switching. not sure of the status since then, as it was
handed off to the operations team for day-to-day support :-)
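
for what it's worth, the workaround on each of the member t1s looked more or
less like this (interface numbers are made up, not our actual config -- and
note that 'no ip route-cache cef' by itself only disables cef on the
interface; to really force process switching you need 'no ip route-cache'
as well):

  ! example only -- interface numbering is invented
  interface Serial1/0
   no ip route-cache cef
   ! disabling the fast-switching cache too is what forces process switching
   no ip route-cache
  !
  interface Serial1/1
   no ip route-cache cef
   no ip route-cache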




regards,
/vicky


-----Original Message-----
From: cisco-nsp-bounces at puck.nether.net
[mailto:cisco-nsp-bounces at puck.nether.net]On Behalf Of Oliver Boehmer
(oboehmer)
Sent: Thursday, February 13, 2003 10:47 PM
To: jlewis at lewis.org; Edward Henigin
Cc: cisco-nsp at puck.nether.net
Subject: RE: [nsp] limits to CEF per-packet load sharing?



> On Thu, 13 Feb 2003, Edward Henigin wrote:
>
> > > I think you'll find the limit for the max paths IOS will load share
> > > across varies from release to release.  I know we're running some
> > > that max out at 6, and some at 8...and the default, IIRC, is 4.  If
> > > you're using OSPF (as we are on our backbone) you'll have to raise
> > > the max-paths explicitly.  We
> >
> > They'll be statically routed.
>
> I suspect there's still a limit.  Maybe someone from cisco will chime in.

Yes, you're right. While CEF (in theory) could load-balance across 16
adjacencies (the loadsharing bucket has 16 entries, as shown in "show ip
cef <prefix> internal"), the maximum number of equal cost paths we can
put in the RIB is six (no matter which routing protocol, incl. static).
This limit was increased to eight in 12.0(14)S/ST and 12.1(8)E.
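
As a rough sketch (the OSPF process number, interface, and prefix below are
made up for illustration), the two relevant knobs look like this:

  ! raise the number of parallel paths the RIB will install
  ! (6 in older releases, 8 from 12.0(14)S/ST and 12.1(8)E onwards)
  router ospf 1
   maximum-paths 8
  !
  ! per-packet (instead of the default per-destination) sharing is
  ! configured on the outgoing interfaces
  interface Serial1/0
   ip load-sharing per-packet

"show ip cef <prefix> internal" then shows how the 16 load-sharing bucket
entries are filled from the installed paths.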

So if you want to load-balance evenly across 10 T1s, you need MLPPP to
create some bundles and load-share across those. Or you can use a c7500,
put all 10 T1s on a single VIP and use distributed Multilink
(http://www.cisco.com/univercd/cc/td/doc/product/software/ios120/120newft/120t/120t3/multippp.htm
and http://www.cisco.com/warp/public/793/access_dial/ppp_11044.html), which
supports link bundling of up to 40 T1s given a fast VIP (VIP4/VIP6). Of
course, for redundancy reasons I would not put all the T1s on a single VIP;
rather, I would distribute them across multiple VIPs, create multiple
distributed Multilink bundles, and CEF-load-balance across those.
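
Roughly (interface numbers and addressing below are invented, and the exact
multilink syntax differs a bit between releases -- 'multilink-group' vs.
'ppp multilink group'), one distributed bundle would look like this:

  ! dCEF is required for distributed MLP on the 7500
  ip cef distributed
  !
  interface Multilink1
   ip address 10.0.0.1 255.255.255.252
   ppp multilink
   multilink-group 1
  !
  interface Serial1/0/0:1
   encapsulation ppp
   ppp multilink
   multilink-group 1
  !
  interface Serial1/0/0:2
   encapsulation ppp
   ppp multilink
   multilink-group 1

CEF then treats the Multilink interfaces like any other equal-cost next
hops, so two or three such bundles can be load-shared across in the usual
way.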

	oli

_______________________________________________
cisco-nsp mailing list  cisco-nsp at puck.nether.net
http://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/



