[c-nsp] 7206 12.2(18) Maximum # of MLPPP T1s on PA-MC-T3

Brandon Price brandon at sterling.net
Wed Aug 20 19:49:53 EDT 2008


Just for the heck of it, I bundled all 28 once in a lab with an NPE400 on
one end and an NPE300 on the other.

I was able to pull about 40 Mbps or so across the link. If memory serves
me correctly, the CPUs spiked pretty high, above 90% I think. I didn't
do a whole lot of testing, but it definitely took all 28 members.
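
For anyone who wants to repeat the test, a 28-member bundle on a
PA-MC-T3 looks roughly like this. It's only a sketch: slot/port
numbers, timeslots and the address are placeholders, and depending on
the IOS release the member-link command is "multilink-group" or
"ppp multilink group".

controller T3 1/0
 ! carve each of the 28 T1s into its own full-rate channel group
 t1 1 channel-group 0 timeslots 1-24
 t1 2 channel-group 0 timeslots 1-24
 ! ...and so on through t1 28
!
interface Multilink1
 ! the bundle interface carries the IP address
 ip address 192.0.2.1 255.255.255.252
 ppp multilink
!
interface Serial1/0/1:0
 ! member link: PPP encapsulation, no IP, joined to bundle 1
 encapsulation ppp
 ppp multilink
 multilink-group 1
!
interface Serial1/0/2:0
 encapsulation ppp
 ppp multilink
 multilink-group 1
! ...repeat for each remaining member T1

If you repeat the test, "show processes cpu" and "show ppp multilink"
are the obvious things to watch for CPU load and for the member links
and fragment counters.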


Brandon


-----Original Message-----
From: cisco-nsp-bounces at puck.nether.net
[mailto:cisco-nsp-bounces at puck.nether.net] On Behalf Of Daniel Lacey
Sent: Wednesday, August 20, 2008 3:51 PM
To: cisco-nsp at puck.nether.net
Subject: [c-nsp] 7206 12.2(18) Maximum # of MLPPP T1s on PA-MC-T3

Hi all,

cisco 7206VXR (NPE300) processor (revision B) with 229376K/65536K bytes 
of memory.
R7000 CPU at 262Mhz, Implementation 39, Rev 1.0, 256KB L2 Cache
6 slot VXR midplane, Version 2.0

I found that the PA-MC-T3 card will handle 12 T1 links in one MLPPP 
group and still use HW.
After 12, the 7206 will go into SW mode for the MLPPP bundle.

How many T1s can I bundle together, albeit in SW mode?

I know I can bundle across PA-MC-T3 cards, again in SW mode, but how many 
total T1s can I bundle across two PA-MC-T3 cards (something like the 
snippet below)?
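
Something like this is what I have in mind, i.e. member links from two
different slots joined into the same bundle (slot numbers are just
examples):

interface Serial1/0/1:0
 encapsulation ppp
 ppp multilink
 multilink-group 1
!
interface Serial2/0/1:0
 ! same bundle, member carved from the second PA-MC-T3
 encapsulation ppp
 ppp multilink
 multilink-group 1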

And at what point will I run into performance issues?

Thanks in advance!
Dan





_______________________________________________
cisco-nsp mailing list  cisco-nsp at puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/

