[c-nsp] CPU comparison - bridge vs. route on 7206?

Chris Hale chale99 at gmail.com
Thu Jul 2 14:16:43 EDT 2009


Can you give me some sample code for this?  I'm willing to try it, but need
some help!
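
For reference, here is roughly the kind of L2TPv3 xconnect config I had in
mind from the docs. The peer address, VC ID, and class name below are made
up, and the exact syntax varies by IOS release, so please correct me if this
isn't what you meant:

pseudowire-class L2TPV3-PW
 encapsulation l2tpv3
 ip local interface Loopback0
!
interface GigabitEthernet0/0
 no ip address
 ! 192.0.2.2 and VC ID 100 are examples only; the far-end router needs the
 ! mirror-image xconnect pointing back at this box's loopback
 xconnect 192.0.2.2 100 pw-class L2TPV3-PW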

We moved to routed mode with plain static routing, and the customer is still
seeing issues.  CPU dropped about 15-20%, but we're still being overrun
everywhere...  One side is using the GE on the IO card, and the other side
is using a PA-GE.  I'm trying to muster up some NPE-G1's for testing as
well, but if this is a buffer problem, will there be any difference between
the onboard GigE ports on the NPE-G1 vs. the PA-GE or IO/GE?
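
In the meantime, these are the standard commands I've been watching to try to
tell whether this is really a buffer/overrun problem (output varies a bit by
IOS version):

sho buffers                                 (buffer pool misses/failures)
sho interfaces gigabitEthernet 0/0 stats    (process vs. interrupt switching)
sho controllers gigabitEthernet 0/0
sho processes cpu sorted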

navisite#sho proc cpu hist

navisite   11:21:24 AM Sunday Apr 2 2000 UTC

    666666666666666666666666666666666666666666666666666666666666
    337777733333111112222200000333337777700000333331111133333555
100
 90
 80
 70   *****                         *****                    ***
 60 ************************************************************
 50 ************************************************************
 40 ************************************************************
 30 ************************************************************
 20 ************************************************************
 10 ************************************************************
   0....5....1....1....2....2....3....3....4....4....5....5....6
             0    5    0    5    0    5    0    5    0    5    0
               CPU% per second (last 60 seconds)

    676776776666677667767766766777666777767777777766666777677777
    728127116878800870080189179140978027095020565788988001913103
100
 90
 80                                    *  *   ****
 70 ****#***************##***********####*##########*#**########
 60 ############################################################
 50 ############################################################
 40 ############################################################
 30 ############################################################
 20 ############################################################
 10 ############################################################
   0....5....1....1....2....2....3....3....4....4....5....5....6
             0    5    0    5    0    5    0    5    0    5    0
               CPU% per minute (last 60 minutes)
              * = maximum CPU%   # = average CPU%

    787656 85676666688999999999987877566666788999999999987877666686688899999
    725865488000023924177656882061167925468753067768775014474733397914817667
100                    *******                 ********                 ****
 90        *          **###*##***           * **######**         *    **####
 80 ***    *        **##########* ***      ***#########** **     *  ***#####
 70 ##** * *  *    *#############****  * ***############******   ***########
 60 ###*** *********################*******#################*******#########
 50 ####** ***#***###################*########################**#**#########
 40 #####* *######################################################*#########
 30 #####* *################################################################
 20 ###### #################################################################
 10 ###### #################################################################
   0....5....1....1....2....2....3....3....4....4....5....5....6....6....7..
             0    5    0    5    0    5    0    5    0    5    0    5    0
                   CPU% per hour (last 72 hours)
                  * = maximum CPU%   # = average CPU%



navisite#sh int gigabitEthernet 0/0
GigabitEthernet0/0 is up, line protocol is up
  Hardware is i82543 (Livengood), address is 000f.8f58.3908 (bia 000f.8f58.3908)
  Internet address is 10.10.254.25/30
  MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
     reliability 255/255, txload 20/255, rxload 29/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 1000Mb/s, link type is autonegotiation, media type is T
  output flow-control is XON, input flow-control is XON
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:00, output 00:00:00, output hang never
  Last clearing of "show interface" counters never
  Input queue: 2/75/0/0 (size/max/drops/flushes); Total output drops: 82
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 114705000 bits/sec, 33699 packets/sec
  5 minute output rate 79291000 bits/sec, 32889 packets/sec
     3562588727 packets input, 3062002285 bytes, 0 no buffer
     Received 7861538 broadcasts, 0 runts, 0 giants, 0 throttles
     297165303 input errors, 0 CRC, 0 frame, 5842451 overrun, 291322852 ignored
     0 watchdog, 5171889 multicast, 0 pause input
     0 input packets with dribble condition detected
     1554205161 packets output, 3202662663 bytes, 0 underruns
     10 output errors, 0 collisions, 1 interface resets
     0 babbles, 0 late collision, 0 deferred
     10 lost carrier, 0 no carrier, 56190635 pause output
     0 output buffer failures, 0 output buffers swapped out



POS2/0 is up, line protocol is up
  Hardware is Packet over Sonet
  Internet address is 10.10.254.22/30
  MTU 4470 bytes, BW 155000 Kbit, DLY 100 usec,
     reliability 255/255, txload 181/255, rxload 126/255
  Encapsulation HDLC, crc 16, loopback not set
  Keepalive set (10 sec)
  Scramble disabled
  Last input 00:00:06, output 00:00:00, output hang never
  Last clearing of "show interface" counters never
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 260014089
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 76517000 bits/sec, 32983 packets/sec
  5 minute output rate 110318000 bits/sec, 33701 packets/sec
     1555732979 packets input, 1503248082 bytes, 0 no buffer
     Received 1907623 broadcasts, 0 runts, 0 giants, 0 throttles
              0 parity
     479899 input errors, 342177 CRC, 0 frame, 137722 overrun, 0 ignored, 0 abort
     3301042153 packets output, 3444928001 bytes, 0 underruns
     0 output errors, 0 applique, 5 interface resets
     0 output buffer failures, 0 output buffers swapped out
     3 carrier transitions
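
For what it's worth, some rough arithmetic from the Gi0/0 counters above:
114,705,000 bits/sec divided by 8 and by 33,699 packets/sec works out to
roughly 425 bytes average packet size, and input plus output together is
about 66-67 kpps on that interface.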

On Thu, Jul 2, 2009 at 11:50 AM, Rodney Dunn <rodunn at cisco.com> wrote:

> One note, I'd be really interested to see how it worked if you configured
> it as an L2TPv3 tunnel to connect the L2 segments vs. bridging it.
> The bridge code was never designed for high speed switching.
>
> Can you try that?
>
> Rodney
>
>
> On Thu, Jul 02, 2009 at 11:48:26AM -0400, Rodney Dunn wrote:
> > I found what I was looking for. The test was on older code, but the
> > concept still applies.
> >
> > Going from a native GigE port to another native GigE port on the G1, you
> > are looking at around 470 kpps per direction (940 kpps bi-directional)
> > at 64-byte packets with NO features.
> >
> > At 1500-byte packets it can pretty much fill up the gig in both directions
> > without dropping frames... again with no features.
> >
> > It appears from the test you can just about fill up the links with 256-byte
> > packets for native GigE to native GigE.
> >
> > However, with the PA-GE it appears to be around 127 kpps in one direction
> > (double that for bi-directional) at 64-byte packets, which ends up being
> > about 400 Mbps total (200 M tx and 200 M rx) going from a native GigE port
> > to the PA-GE.
> >
> > These are rough numbers from a lab test with absolutely nothing configured.
> >
> > Also, this is from a test set, so there are none of the micro-bursts you
> > see in real-world traffic flows. We've seen that way too many times, where
> > some L3 forwarding switch is connected and it overruns the GigE capability
> > of the connecting device. That's why the ASR1k is the suggested platform
> > for that space now, as it can do line-rate GigE.
> >
> > Hope this helps. As always with performance numbers, YMMV depending on the
> > actual code, configuration, and design.
> >
> > Rodney
> >
> >
> >
> > On Thu, Jul 02, 2009 at 11:26:33AM -0400, Rodney Dunn wrote:
> > > Michael,
> > >
> > > I can't find the performance document I saw once before. I'm still trying
> > > to find it.
> > >
> > > If you want real GigE you should go with the ASR1000. Even the G1 GE ports
> > > will have problems at high rates with any features enabled.
> > >
> > > Rodney
> > >
> > > On Thu, Jul 02, 2009 at 11:00:29AM -0400, Michael Ulitskiy wrote:
> > > > Could you please elaborate on the PA-GE issues? Or maybe you could
> > > > provide some pointers to where they're described?
> > > > We're using quite a few of those with traffic rates anywhere from 50M
> > > > to 100M, and I haven't noticed any issues so far, but traffic rates are
> > > > increasing and I'd really like to know what to expect in the future,
> > > > especially if there are any known caveats.
> > > > Thank you,
> > > >
> > > > Michael
> > > >
> > > > On Wednesday 01 July 2009 01:41:44 pm Rodney Dunn wrote:
> > > > > The PA-GE has issues at higher speeds.
> > > > >
> > > > > You should move to L2TPv3 and see if it's better in regard
> > > > > to performance. Your best bet would be pure L3 forwarding.
> > > > >
> > > > > If the PA-GE is the issue you will have to get off that PA.
> > > > >
> > > > > What happens if you move it to one of the onboard GigE ports on the
> > > > > NPE-400?
> > > > >
> > > > > Rodney
> > > > >
> > > > > On Wed, Jul 01, 2009 at 12:56:39PM -0400, Chris Hale wrote:
> > > > > > We have a set of 7206VXR's with NPE-400 CPUs on each end of a
> > > > > > point-to-point OC3 using PA-POS-OC3 cards.  We bridge these circuits
> > > > > > through a PA-GE interface (essentially turning the 7206's into an
> > > > > > OC-3 to GigE converter) with a single bridge group.
> > > > > >
> > > > > > We are trying to push nearly 130-140 Mbps, but per the MRTG graphs,
> > > > > > we seem to be capping @ ~110 Mbps.  The CPU is also averaging 80-90%.
> > > > > > We're seeing a large number of input errors (ignored, a total of 5% of
> > > > > > input packets) and a fair amount of output pauses (0.12% of output
> > > > > > packets).
> > > > > >
> > > > > > GigabitEthernet1/0 is up, line protocol is up
> > > > > >   Hardware is WISEMAN, address is 0016.46e6.1c1c (bia 0016.46e6.1c1c)
> > > > > >   MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
> > > > > >      reliability 255/255, txload 36/255, rxload 16/255
> > > > > >   Encapsulation ARPA, loopback not set
> > > > > >   Keepalive set (10 sec)
> > > > > >   Full-duplex, 1000Mb/s, link type is autonegotiation, media type is
> > > > > > unknown media type
> > > > > >   output flow-control is XON, input flow-control is XON
> > > > > >   ARP type: ARPA, ARP Timeout 04:00:00
> > > > > >   Last input 00:00:00, output 00:00:00, output hang never
> > > > > >   Last clearing of "show interface" counters 12w0d
> > > > > >   Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 208
> > > > > >   Queueing strategy: fifo
> > > > > >   Output queue: 0/40 (size/max)
> > > > > >   30 second input rate 66046000 bits/sec, 29231 packets/sec
> > > > > >   30 second output rate 141617000 bits/sec, 31690 packets/sec
> > > > > >      2816822087 packets input, 1367339773 bytes, 0 no buffer
> > > > > >      Received 7138653 broadcasts, 0 runts, 0 giants, 0 throttles
> > > > > >      143326584 input errors, 0 CRC, 0 frame, 481945 overrun,
> > > > > >      142844639 ignored
> > > > > >      0 watchdog, 4536607 multicast, 0 pause input
> > > > > >      0 input packets with dribble condition detected
> > > > > >      3993978307 packets output, 979813878 bytes, 0 underruns
> > > > > >      0 output errors, 0 collisions, 0 interface resets
> > > > > >      0 babbles, 0 late collision, 0 deferred
> > > > > >      4 lost carrier, 0 no carrier, 4808187 pause output
> > > > > >      0 output buffer failures, 0 output buffers swapped out
> > > > > >
> > > > > > If we move this to a routed infrastructure with CEF, can we expect
> > > > > > the CPU to drop considerably?  The routing will be static only, a very
> > > > > > simple config with no ACLs, no policy maps, etc.  We're just trying to
> > > > > > get the routers to let us push as much of the OC3 bandwidth as possible.
> > > > > >
> > > > > > We would rather not upgrade the NPE-400's if possible.  The internal
> > > > > > LAN equipment is Nortel L3 switches, which don't seem to support
> > > > > > flow-control.
> > > > > >
> > > > > > Thanks in advance for any ideas.
> > > > > >
> > > > > > Chris
> > > > > >
> > > > > > --
> > > > > > ------------------
> > > > > > Chris Hale
> > > > > > chale99 at gmail.com
> > > > > > _______________________________________________
> > > > > > cisco-nsp mailing list  cisco-nsp at puck.nether.net
> > > > > > https://puck.nether.net/mailman/listinfo/cisco-nsp
> > > > > > archive at http://puck.nether.net/pipermail/cisco-nsp/
> _______________________________________________
> cisco-nsp mailing list  cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
>



-- 
------------------
Chris Hale
chale99 at gmail.com

