[c-nsp] GSR12008|GRP-B|4OC12/ATM-MM-SC|3GE-GBIC-SC throughput?

Jason Lixfeld jason at lixfeld.ca
Wed Apr 15 12:11:10 EDT 2009


On 15-Apr-09, at 11:34 AM, Lamar Owen wrote:

> On Tuesday 14 April 2009 18:22:03 Jason Lixfeld wrote:
>> For the life of us, we can't seem to get any more than 60Mbps
>> sustained across the ATM testing with iperf, so we're just trying to
>> figure out if the GSR just can't push any more than what it's doing
>> or if there's something else afoot.
> [snip]
>> We've done our due diligence to ensure the bits of the network between
>> the test machine and the ATM can support 100Mbps, so we're fairly
>
> Hmm, 60mb/s using a 100mb/s connected box sounds about right.  To really
> strain an OC12 you need a gigabit connected tester that can really do a
> gigabit of traffic.  Or multiple test PC's.

In this case, I can iperf 97Mbps between two machines connected  
together at 100Mb.
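
If it comes to that, pushing harder from the test side is easy enough;
something along these lines is what we'd run (iperf 2.x syntax, receiver
address made up), with -P giving several parallel TCP streams per sender
and two or three senders going at once:

  receiver:     iperf -s
  each sender:  iperf -c 192.0.2.10 -P 4 -t 60 -i 10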

> I have a 12012 here in production, and have some of the kit necessary
> to test point to point ATM connections (including a Catalyst 8540MSR
> with OC12, ARM, and gigabit cards), and have a 4xOC12/ATM/MM, but it
> will be a few days before I could have the time to set up a test to see
> if the 12012 is limited.

We've been wrestling with this for weeks now, but haven't had any way to
compare our results against anyone else's to see whether or not we're an
anomaly, so what's another day or four :)

> The LC engines on the ATM card and the 3GE card will be the limiting
> factor, and those cards are rated for line rate on four simultaneous
> OC12's or line rate on two GigE (can't do full line rate on all three
> with a 2.5Gb/s fabric connection).

The load is really low, so I'd be very surprised if it was an LC  
limitation, but what do I know:

bdr1.nyc-hudson-12008#show int a2/0 load

     Interface                   bits/sec     pack/sec
--------------------           ------------  ----------
AT2/0                 Tx           48464000      14099
                      Rx          104808000      18012
bdr1.nyc-hudson-12008#show int a2/1 load

     Interface                   bits/sec     pack/sec
--------------------           ------------  ----------
AT2/1                 Tx           57581000      13032
                      Rx          116319000      14466
bdr1.nyc-hudson-12008#show int g5/0 load

     Interface                   bits/sec     pack/sec
--------------------           ------------  ----------
Gi5/0                 Tx           56851000       8981
                      Rx           35082000       7833
bdr1.nyc-hudson-12008#show int g5/1 load

     Interface                   bits/sec     pack/sec
--------------------           ------------  ----------
Gi5/1                 Tx          166072000      23424
                      Rx           70951000      19116
bdr1.nyc-hudson-12008#

So, summing the Tx and Rx rates across all four interfaces:

Total throughput:    656,128,000 bits/sec (~656 Mbps)
Total packet rate:   118,963 pps
Average packet size: 689.4 bytes
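
(The average size is just the aggregate bit rate over the aggregate packet
rate: 656,128,000 / 118,963 ≈ 5,515 bits ≈ 689 bytes per packet.)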

> The GRP CPU is not involved in the data plane on a GSR; the LC engine
> CPU's/ASICs do dCEF and talk directly over the fabric.  Unless you have
> serious fabric issues preventing full bandwidth, in which case you have
> bigger problems.

Again, the total bandwidth through the entire box is only about 650Mbps,
spread more or less evenly across the two LCs.

> So I'd first check to see if your iperf test box can really generate
> sufficient traffic.

Here's one of the tests we've done, and we were able to get ~97Mbps  
here:

MacBook Pro -> Linksys 100Mb -> 1811 -> 7609 -> 10GE -> 7609 -> 3550 ->
PC 100Mb NIC.
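
The run itself is nothing exotic; a plain single-stream TCP test, roughly
(iperf 2.x, address made up):

  on the PC:           iperf -s
  on the MacBook Pro:  iperf -c 192.0.2.20 -t 60 -i 10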

> What sort of ATM switch or router is on the other end of those
> multimode short reach OC12's?  What sort of router is terminating
> them?  How are your PVC's set up?

A2/0 and A2/1 on the GSR connect to two ports on a Fore ASX200BX.

The ASX200BX connects into the provider's SONET network.  On the Z  
side, we're taking the OC12 into a Fore ASX1000.

ATM2/0.100 (vpi/vci 0/100) on side A ultimately terminates on an
OSM-2OC12-ATM-MM in 7609-A on side Z.
ATM2/1.110 (vpi/vci 0/110) on side A ultimately terminates on an
OSM-2OC12-ATM-MM in 7609-B on side Z.
7609-A and 7609-B on the Z side are connected by an OC12 ATM link over
our own dark fiber.
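
In case it matters, the subinterfaces on the GSR are set up along these
lines (addresses are made up here, and I've left out anything VC-class or
shaping related, so treat this as a sketch rather than a paste):

interface ATM2/0.100 point-to-point
 ip address 192.0.2.1 255.255.255.252
 pvc 0/100
  encapsulation aal5snap
!
interface ATM2/1.110 point-to-point
 ip address 192.0.2.5 255.255.255.252
 pvc 0/110
  encapsulation aal5snap
!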

The Fores on both the A and Z sides are clean as a whistle.  No errored
seconds anywhere.



