[c-nsp] CPU comparison - bridge vs. route on 7206?
Michael Ulitskiy
mulitskiy at acedsl.com
Thu Jul 2 11:00:29 EDT 2009
Could you please elaborate on the PA-GE issues? Or maybe you could provide some pointers to where they're described?
We're running quite a few of those with traffic rates anywhere from 50M to 100M and I haven't noticed
any issues so far, but traffic is increasing and I'd really like to know what to expect in the future,
especially whether there are any known caveats.
Thank you,
Michael
On Wednesday 01 July 2009 01:41:44 pm Rodney Dunn wrote:
> The PA-GE has issues at higher speeds.
>
> You should move to L2TPv3 and see if it's better with regard
> to performance. Your best bet would be pure L3 forwarding.
>
> If the PA-GE is the issue, you will have to get off that PA.
>
> What happens if you move it to one of the onboard GigE ports on the NPE-400?
>
> Rodney
>
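For illustration, an L2TPv3 pseudowire carrying the GigE traffic over a routed OC-3, along the lines Rodney suggests, would look roughly like the sketch below on one of the 7206s (the peer mirrors it with the addresses swapped). The loopback and peer addresses, VC ID 100, slot numbers and pw-class name are placeholders, not taken from the original posts:

interface Loopback0
 ip address 192.0.2.1 255.255.255.255
!
pseudowire-class OC3-PW
 encapsulation l2tpv3
 ip local interface Loopback0
!
interface POS2/0
 ip address 198.51.100.1 255.255.255.252
!
interface GigabitEthernet1/0
 no ip address
 xconnect 192.0.2.2 100 pw-class OC3-PW
!
ip route 192.0.2.2 255.255.255.255 198.51.100.2

The POS link then carries plain routed IP and the Ethernet frames are tunneled end to end, so the transparent-bridging path is taken out of the picture.
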
> On Wed, Jul 01, 2009 at 12:56:39PM -0400, Chris Hale wrote:
> > We have a pair of 7206VXRs with NPE-400 processors, one on each end of a point-to-point OC-3
> > using PA-POS-OC3 cards. We bridge these circuits through a PA-GE interface
> > (essentially turning the 7206s into an OC-3-to-GigE converter) with a single
> > bridge group.
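
The original configuration isn't shown, but a single bridge group spanning the POS and PA-GE interfaces on a 7206 is typically along these lines (slot numbers here are guesses):

bridge 1 protocol ieee
!
interface POS2/0
 no ip address
 bridge-group 1
!
interface GigabitEthernet1/0
 no ip address
 bridge-group 1

With this, frames arriving on either interface are transparently bridged between the two ports.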
> >
> > We are trying to push nearly 130-140 Mbps, but per the MRTG graphs, we seem
> > to be capping at ~110 Mbps. The CPU is also averaging 80-90%. We're seeing a
> > large number of input errors (ignored; about 5% of input packets) and a
> > fair amount of output pauses (0.12% of output packets).
> >
> > GigabitEthernet1/0 is up, line protocol is up
> > Hardware is WISEMAN, address is 0016.46e6.1c1c (bia 0016.46e6.1c1c)
> > MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
> > reliability 255/255, txload 36/255, rxload 16/255
> > Encapsulation ARPA, loopback not set
> > Keepalive set (10 sec)
> > Full-duplex, 1000Mb/s, link type is autonegotiation, media type is unknown media type
> > output flow-control is XON, input flow-control is XON
> > ARP type: ARPA, ARP Timeout 04:00:00
> > Last input 00:00:00, output 00:00:00, output hang never
> > Last clearing of "show interface" counters 12w0d
> > Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 208
> > Queueing strategy: fifo
> > Output queue: 0/40 (size/max)
> > 30 second input rate 66046000 bits/sec, 29231 packets/sec
> > 30 second output rate 141617000 bits/sec, 31690 packets/sec
> > 2816822087 packets input, 1367339773 bytes, 0 no buffer
> > Received 7138653 broadcasts, 0 runts, 0 giants, 0 throttles
> > 143326584 input errors, 0 CRC, 0 frame, 481945 overrun, 142844639 ignored
> > 0 watchdog, 4536607 multicast, 0 pause input
> > 0 input packets with dribble condition detected
> > 3993978307 packets output, 979813878 bytes, 0 underruns
> > 0 output errors, 0 collisions, 0 interface resets
> > 0 babbles, 0 late collision, 0 deferred
> > 4 lost carrier, 0 no carrier, 4808187 pause output
> > 0 output buffer failures, 0 output buffers swapped out
> >
> > If we move this to a routed infrastructure with CEF, can we expect the CPU
> > load to drop considerably? The routing will be static only: a very simple config
> > with no ACLs, no policy maps, etc. We're just trying to get the routers to
> > let us push as much of the OC-3 bandwidth as possible.
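
For comparison, the routed setup being described is roughly the sketch below; the /30 on the OC-3, the LAN subnet and the static default route are placeholders, not from the original post:

ip cef
!
interface POS2/0
 ip address 198.51.100.1 255.255.255.252
!
interface GigabitEthernet1/0
 ip address 203.0.113.1 255.255.255.0
!
ip route 0.0.0.0 0.0.0.0 198.51.100.2

With CEF enabled and no features applied, IP packets take the CEF fast path instead of the bridging path, which is where the expected CPU relief would come from.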
> >
> > We would rather not upgrade the NPE-400s if possible. The internal LAN
> > equipment is Nortel L3 switches, which don't seem to support flow control.
> >
> > Thanks in advance for any ideas.
> >
> > Chris
> >
> > --
> > ------------------
> > Chris Hale
> > chale99 at gmail.com