[c-nsp] CPU comparison - bridge vs. route on 7206?

Christopher E. Brown chris.brown at acsalaska.net
Thu Jul 2 23:38:01 EDT 2009


IIRC the 7200 series PA buses are derived from classic PCI tech, or
something similar.  It's a simplex bus limited to around 600Mbit.


This imposes a simplex burst limit of roughly 600Mbit, minus overhead, on the bus.


Microbursts are an issue: the bus and the CPU limit how fast the buffers
on the PA can be drained.


Personally, I treat NPE-400 systems as capable of 100Mbit full-duplex
average flow and NPE-G1 systems as capable of 200Mbit.  This leaves some
headroom for peaks, etc., as both can (more or less) handle twice
that for most traffic mixes (assuming a clean/simple config).
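
As a quick sanity check, here is that rule of thumb in a few lines of
throwaway Python.  The 100/200Mbit comfort zones and the "roughly twice
that in bursts" factor are just the numbers above, not anything from a
Cisco datasheet:

  # Rule-of-thumb check: treat an NPE-400 as comfortable at ~100 Mbit/s
  # full-duplex average and an NPE-G1 at ~200 Mbit/s, with bursts up to
  # roughly twice that on a clean/simple config.
  COMFORT_MBPS = {"NPE-400": 100, "NPE-G1": 200}

  def check(npe, avg_in_mbps, avg_out_mbps):
      avg = max(avg_in_mbps, avg_out_mbps)   # judge by the busier direction
      limit = COMFORT_MBPS[npe]
      if avg <= limit:
          return f"{npe}: {avg} Mbit/s avg is inside the comfort zone (bursts to ~{2*limit})"
      return f"{npe}: {avg} Mbit/s avg is past the ~{limit} Mbit/s rule of thumb; expect drops on bursts"

  # Roughly the rates in the show interface output quoted below:
  print(check("NPE-400", 66, 142))
  print(check("NPE-G1", 66, 142))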


I have seen an NPE-400 doing 250 - 300Mbit one way and 50 - 100Mbit the
other between the Gig-IO and a PA-GE for an extended period of time, but
it was dropping a couple of packets _every_ burst.



Moral of the story...  If you are connecting to things via line-rate
GigE, and those things are happy doing GigE bursts (just about any
modern PC), use something other than a 7200.
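
For what it's worth, you can put a rough number on the per-packet CPU cost
from the show interface output quoted at the bottom of this thread.  The
linear-scaling assumption below is mine and only roughly holds (interrupt
overhead, bursts, feature paths), but it shows how little slack is left:

  # ~29.2 kpps in + ~31.7 kpps out on the GigE at a reported 80-90% CPU
  pps_in, pps_out = 29_231, 31_690
  cpu_util = 0.85                      # midpoint of the reported 80-90%

  total_pps = pps_in + pps_out
  us_per_packet = cpu_util / total_pps * 1e6
  print(f"~{us_per_packet:.0f} us of CPU per bridged packet")      # ~14 us

  max_pps = 1e6 / us_per_packet        # ceiling if the per-packet cost stays the same
  print(f"~{max_pps / 1000:.0f} kpps before the CPU is pegged")    # ~72 kpps

Whether routing with CEF shaves enough microseconds off that per-packet
cost to matter is the real question; the arithmetic just shows there is
not much headroom either way on an NPE-400.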


Michael Ulitskiy wrote:
>  Rodney,
> 
> Thanks for the reply. Please let me clarify it a little.
> So you're saying that switching packets through a PA-GE involves about 3.5 times more processing overhead
> compared to switching them through a native port (btw, by native port you mean the G1/G2 built-in one, right?),
> hence pps goes down from 470kpps to 127kpps. Is that right?
> I had always thought that for a software-based platform max pps is a function of the CPU.
> Do you think that these figures can be improved in a G2 chassis?
> Thanks,
> 
> Michael
> 
> On Thursday 02 July 2009 11:48:26 am you wrote:
>> I found what I was looking for. The test was on older code, but in concept it
>> still applies.
>>
>> Going bi-directionally from one native gige port to another native gige port on the
>> G1, you are looking at around 470 kpps per direction (double it for 940 kpps
>> bi-directional) at 64-byte packets with NO features.
>>
>> At 1500 byte packets it can pretty much fill up the gig in both directions
>> without dropping frames...again with no features.
>>
>> It appears from the test you can just about fill up the links with 256-byte
>> packets for native gige to native gige.
>>
>> However, with the PA-GE it appears it's around 127 kpps in one direction (double
>> to get bi-directional) at 64-byte packets, which ends up being about 400 Mbps
>> total (200M tx and 200M rx) going from a native Gig port to the PA-GE.
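
(Rough math on those figures, using a couple of throwaway helpers and
treating them as minimum-size Ethernet frames plus the usual 20 bytes of
preamble + inter-frame gap on the wire; the per-direction reading is mine:

  WIRE_OVERHEAD_BYTES = 20    # 8 bytes preamble/SFD + 12 bytes inter-frame gap

  def pps_to_mbps(pps, frame_bytes):
      return pps * (frame_bytes + WIRE_OVERHEAD_BYTES) * 8 / 1e6

  def line_rate_pps(link_mbps, frame_bytes):
      return link_mbps * 1e6 / ((frame_bytes + WIRE_OVERHEAD_BYTES) * 8)

  print(round(line_rate_pps(1000, 64)))      # GigE line rate: ~1,488,095 pps at 64 bytes
  print(round(line_rate_pps(1000, 1500)))    # ...but only ~82,237 pps at 1500 bytes
  print(round(pps_to_mbps(470_000, 64)))     # 470 kpps of 64-byte frames is ~316 Mbit/s
  print(round(pps_to_mbps(127_000, 64)))     # 127 kpps through the PA-GE is ~85 Mbit/s

The ~400 Mbps total figure presumably comes from larger packets; at 64
bytes it is the pps ceiling, not bandwidth, that binds.)
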
>>
>> These are rough numbers from a lab test with absolutely nothing configured.
>>
>> Also, this is from a test set where there are no micro-bursts like you see in
>> real-world traffic flows. We've seen it way too many times: some
>> L3 forwarding switch is connected and it overruns the GigE ability of the
>> connecting device. That's why the ASR1k is the suggested platform for that
>> space now, as it can do line-rate GigE.
>>
>> Hope this helps. As always with performance numbers, YMMV depending on actual
>> code, configuration, and design.
>>
>> Rodney
>>
>>
>>
>> On Thu, Jul 02, 2009 at 11:26:33AM -0400, Rodney Dunn wrote:
>>> Michael,
>>>
>>> I can't find the performance document I saw once before now. I'm still trying
>>> to find it.
>>>
>>> If you want real Gige you should go with the ASR1000. Even the G1 GE ports
>>> will have problems at high rates with any features enabled.
>>>
>>> Rodney
>>>
>>> On Thu, Jul 02, 2009 at 11:00:29AM -0400, Michael Ulitskiy wrote:
>>>> Could you please elaborate on the PA-GE issues? Or maybe you could provide some pointers to where they're described?
>>>> We're using quite a few of those with traffic rates anywhere from 50M to 100M and I haven't noticed
>>>> any issues so far, but traffic is increasing and I'd really like to know what to expect in the future,
>>>> especially whether there are any known caveats.
>>>> Thank you,
>>>>
>>>> Michael
>>>>
>>>> On Wednesday 01 July 2009 01:41:44 pm Rodney Dunn wrote:
>>>>> The PA-GE has issues at higher speeds.
>>>>>
>>>>> You should move to L2TPv3 and see if it's better with regard
>>>>> to performance. Your best bet would be pure L3 forwarding.
>>>>>
>>>>> If the PA-GE is the issue you will have to get off that PA.
>>>>>
>>>>> What happens if you move it to one of the onboard GigE ports on the NPE-400?
>>>>>
>>>>> Rodney
>>>>>
>>>>> On Wed, Jul 01, 2009 at 12:56:39PM -0400, Chris Hale wrote:
>>>>>> We have a set of 7206VXRs with NPE-400 CPUs on each end of a point-to-point OC3
>>>>>> using PA-POS-OC3 cards.  We bridge these circuits through a PA-GE interface
>>>>>> (essentially turning the 7206s into an OC-3-to-GigE converter) with a single
>>>>>> bridge group.
>>>>>>
>>>>>> We are trying to push nearly 130-140Mbps, but per the MRTG graphs, we seem
>>>>>> to be capping @ ~110Mbps.  The CPU is also averaging 80-90%.  We're seeing a
>>>>>> large number of input errors (ignored, total of 5% of input packets) and a
>>>>>> fair amount of output pauses (0.12% of output packets).
>>>>>>
>>>>>> GigabitEthernet1/0 is up, line protocol is up
>>>>>>   Hardware is WISEMAN, address is 0016.46e6.1c1c (bia 0016.46e6.1c1c)
>>>>>>   MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
>>>>>>      reliability 255/255, txload 36/255, rxload 16/255
>>>>>>   Encapsulation ARPA, loopback not set
>>>>>>   Keepalive set (10 sec)
>>>>>>   Full-duplex, 1000Mb/s, link type is autonegotiation, media type is unknown media type
>>>>>>   output flow-control is XON, input flow-control is XON
>>>>>>   ARP type: ARPA, ARP Timeout 04:00:00
>>>>>>   Last input 00:00:00, output 00:00:00, output hang never
>>>>>>   Last clearing of "show interface" counters 12w0d
>>>>>>   Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 208
>>>>>>   Queueing strategy: fifo
>>>>>>   Output queue: 0/40 (size/max)
>>>>>>   30 second input rate 66046000 bits/sec, 29231 packets/sec
>>>>>>   30 second output rate 141617000 bits/sec, 31690 packets/sec
>>>>>>      2816822087 packets input, 1367339773 bytes, 0 no buffer
>>>>>>      Received 7138653 broadcasts, 0 runts, 0 giants, 0 throttles
>>>>>>      143326584 input errors, 0 CRC, 0 frame, 481945 overrun, 142844639 ignored
>>>>>>      0 watchdog, 4536607 multicast, 0 pause input
>>>>>>      0 input packets with dribble condition detected
>>>>>>      3993978307 packets output, 979813878 bytes, 0 underruns
>>>>>>      0 output errors, 0 collisions, 0 interface resets
>>>>>>      0 babbles, 0 late collision, 0 deferred
>>>>>>      4 lost carrier, 0 no carrier, 4808187 pause output
>>>>>>      0 output buffer failures, 0 output buffers swapped out
>>>>>>
>>>>>> If we move this to a routed infrastructure with CEF, can we expect the CPU
>>>>>> to drop considerably?   The routing will be static only, very simple config
>>>>>> with no ACLs, no policy maps, etc.  We're just trying to get the routers to
>>>>>> let us push as much of the OC3 bandwidth as possible.
>>>>>>
>>>>>> We would rather not upgrade the NPE-400s if possible.  The internal LAN
>>>>>> equipment is Nortel L3 switches, which don't seem to support flow control.
>>>>>>
>>>>>> Thanks in advance for any ideas.
>>>>>>
>>>>>> Chris
>>>>>>
>>>>>> -- 
>>>>>> ------------------
>>>>>> Chris Hale
>>>>>> chale99 at gmail.com
>>>>>
>>>>
> 
> 


