[c-nsp] Troubleshooting Lag between GigE interfaces

Rodney Dunn rodunn at cisco.com
Thu Sep 23 11:22:13 EDT 2004


The traffic hitting the input queue should ONLY
be traffic destined to or sourced from the router itself.

What is the background interrupt-level CPU load
on the RSP?

It has to be some value, because you are causing
the RSP to switch 10k pps, which is
not what you want.  You want that forwarding
decision to be made by the ingress VIP CPU in slot 8.
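
As a rough sketch (exact command names and output vary a bit by
IOS release), this is roughly how I'd check the interrupt load and
whether dCEF is actually doing the forwarding for slot 8:

gw-7513#show processes cpu | include CPU utilization
  (the second figure in the "x%/y%" pair is the interrupt-level part)
gw-7513#show ip cef summary
gw-7513#show cef linecard
  (slot 8 should show an up/valid dCEF table)

and, if dCEF isn't enabled yet:

gw-7513(config)#ip cef distributed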

Let's figure that out and see if the other problems
go away.

Rodney


On Thu, Sep 23, 2004 at 11:07:59AM -0400, Paul Stewart wrote:
> This is definitely related, but I don't believe it's the underlying
> issue...
> 
> When I monitor the GigE interface it will look like this:
> 
> gw-7513#sh interfaces GigabitEthernet 8/1/0
> GigabitEthernet8/1/0 is up, line protocol is up
>   Hardware is cyBus GigabitEthernet Interface, address is 0001.64ef.a108 (bia 0001.64ef.a108)
>   Description: Gig Fiber to 6509
>   Internet address is XXX.XXX.XXX.XXX/24
>   MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
>      reliability 255/255, txload 6/255, rxload 5/255
>   Encapsulation ARPA, loopback not set
>   Keepalive set (10 sec)
>   Full-duplex, 1000Mb/s, link type is autonegotiation, media type is SX
>   output flow-control is XOFF, input flow-control is unsupported
>   ARP type: ARPA, ARP Timeout 04:00:00
>   Last input 00:00:00, output 00:00:00, output hang never
>   Last clearing of "show interface" counters 00:00:04
>   Input queue: 2/75/0/0 (size/max/drops/flushes); Total output drops: 0
>   Queueing strategy: fifo
>   Output queue: 0/40 (size/max)
>   30 second input rate 20599000 bits/sec, 10267 packets/sec
>   30 second output rate 24507000 bits/sec, 10527 packets/sec
>      69387 packets input, 23751002 bytes, 0 no buffer
>      Received 13 broadcasts, 0 runts, 0 giants, 0 throttles
>      0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
>      0 watchdog, 9 multicast, 0 pause input
>      0 input packets with dribble condition detected
>      71755 packets output, 29739400 bytes, 0 underruns
>      0 output errors, 0 collisions, 0 interface resets
>      0 babbles, 0 late collision, 0 deferred
>      0 lost carrier, 0 no carrier, 0 pause output
>      0 output buffer failures, 0 output buffers swapped out
> 
> Looks clean... then I get this:
> 
> gw-7513#sh interfaces GigabitEthernet 8/1/0
> GigabitEthernet8/1/0 is up, line protocol is up
>   Hardware is cyBus GigabitEthernet Interface, address is 0001.64ef.a108 (bia 0001.64ef.a108)
>   Description: Gig Fiber to 6509
>   Internet address is 216.168.96.1/24
>   MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
>      reliability 255/255, txload 5/255, rxload 4/255
>   Encapsulation ARPA, loopback not set
>   Keepalive set (10 sec)
>   Full-duplex, 1000Mb/s, link type is autonegotiation, media type is SX
>   output flow-control is XOFF, input flow-control is unsupported
>   ARP type: ARPA, ARP Timeout 04:00:00
>   Last input 00:00:00, output 00:00:00, output hang never
>   Last clearing of "show interface" counters 00:03:57
>   Input queue: 3/75/0/12 (size/max/drops/flushes); Total output drops: 0
>   Queueing strategy: fifo
>   Output queue: 0/40 (size/max)
>   30 second input rate 18101000 bits/sec, 6865 packets/sec
>   30 second output rate 20734000 bits/sec, 6801 packets/sec
>      1660814 packets input, 560099371 bytes, 0 no buffer
>      Received 857 broadcasts, 0 runts, 0 giants, 0 throttles
>      0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
>      0 watchdog, 188 multicast, 0 pause input
>      0 input packets with dribble condition detected
>      1669124 packets output, 657524733 bytes, 0 underruns
>      0 output errors, 0 collisions, 0 interface resets
>      0 babbles, 0 late collision, 0 deferred
>      0 lost carrier, 0 no carrier, 0 pause output
>      0 output buffer failures, 0 output buffers swapped out
> 
> See the flushes (and drops when traffic gets higher).  At the same
> time as the counters go up, I also see this:
> 
> gw-7513#sh processes cpu | inc BGP
>  155     1566400   2009109        779  0.00%  0.03%  0.05%   0 BGP Router
>  156      103888    246950        420  0.00%  0.00%  0.00%   0 BGP I/O
>  157    53289156    280572     189938 59.17%  8.46%  6.99%   0 BGP Scanner
> 
> It's actually 80-90% when things are really busy...
> 
> Paul
> 
> 
> On Wed, 2004-09-22 at 19:44, Brant I. Stevens wrote:
> > I've also seen lag on routers in general with the BGP Scanner process eating
> > up CPU once a minute.  Is this a possibility in your configuration?
> > 
> > 
> > On 09/22/2004 05:15 PM, "Rodney Dunn" <rodunn at cisco.com> wrote:
> > 
> > > Clear the counters and do:
> > > 
> > > sh buffers input-interface gig 8/1/0 packet
> > > 
> > > and do it over and over to catch the packets
> > > going in the input queue.
> > > 
> > > Packets switched in the fast path (fastswitching,
> > > CEF, dCEF) never hit the input queue.
> > > 
> > > If you are process switching transit traffic
> > > you will see some delay/jitter in the traffic
> > > stream because you have to schedule the IP Input
> > > process to run.
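> > > 
> > > As a quick sanity check (rough sketch; exact command and layout
> > > vary by release), the per-switching-path counters should show how
> > > much of this traffic is being process switched vs. CEF/dCEF switched:
> > > 
> > > sh interfaces GigabitEthernet 8/1/0 stats
> > > 
> > > A large, growing "Processor" packet count there means transit
> > > traffic is taking the slow path.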
> > > 
> > > Rodney
> > > 
> > > 
> > > 
> > > On Wed, Sep 22, 2004 at 05:02:17PM -0400, Deepak Jain wrote:
> > >> 
> > >> I would increase the size of the hold-queue "input" and see what happens
> > >> after you clear the counters. You are definitely exhausting the input
> > >> buffer on the 7513. The question is whether it's just burstiness or
> > >> something else -- you don't seem to be moving enough traffic on it for
> > >> this to be a CPU issue. You do have output flow control enabled on the
> > >> 6509, but don't have the same setting on the 7513. That's the big
> > >> problem, I'd guess.
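> > >> 
> > >> A rough sketch of the hold-queue change (1500 is just an example
> > >> value, not a recommendation):
> > >> 
> > >> gw-7513(config)#interface GigabitEthernet 8/1/0
> > >> gw-7513(config-if)#hold-queue 1500 in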
> > >> 
> > >> Paul Stewart wrote:
> > >> 
> > >>> We have a 7513 and a 6509 connected via GigE SX fiber.  Frequently we
> > >>> see "lag" on the connection lasting 5-10 seconds, causing 60-80ms of delay.
> > >>> 
> > >>> When I look at the interfaces I see the following:
> > >>> 
> > >>> 7513
> > >>> 
> > >>> GigabitEthernet8/1/0 is up, line protocol is up
> > >>>   Hardware is cyBus GigabitEthernet Interface, address is 0001.64ef.a108 (bia 0001.64ef.a108)
> > >>>   Description: Gig Fiber to 6509
> > >>>   Internet address is XXX.XXX.XXX.XXX/24
> > >>>   MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
> > >>>      reliability 255/255, txload 8/255, rxload 6/255
> > >>>   Encapsulation ARPA, loopback not set
> > >>>   Keepalive set (10 sec)
> > >>>   Full-duplex, 1000Mb/s, link type is autonegotiation, media type is SX
> > >>>   output flow-control is XOFF, input flow-control is unsupported
> > >>>   ARP type: ARPA, ARP Timeout 04:00:00
> > >>>   Last input 00:00:00, output 00:00:00, output hang never
> > >>>   Last clearing of "show interface" counters never
> > >>>   Input queue: 3/75/913188/2076260 (size/max/drops/flushes); Total output drops: 0
> > >>>   Queueing strategy: fifo
> > >>>   Output queue: 0/40 (size/max)
> > >>>   30 second input rate 26395000 bits/sec, 9394 packets/sec
> > >>>   30 second output rate 35145000 bits/sec, 9900 packets/sec
> > >>>      1790737826 packets input, 2276828279 bytes, 0 no buffer
> > >>>      Received 1895463 broadcasts, 0 runts, 0 giants, 34543 throttles
> > >>>      0 input errors, 0 CRC, 0 frame, 36 overrun, 0 ignored
> > >>>      0 watchdog, 520626 multicast, 0 pause input
> > >>>      0 input packets with dribble condition detected
> > >>>      1655202857 packets output, 359511296 bytes, 0 underruns
> > >>>      0 output errors, 0 collisions, 0 interface resets
> > >>>      0 babbles, 0 late collision, 0 deferred
> > >>>      2 lost carrier, 0 no carrier, 0 pause output
> > >>>      0 output buffer failures, 0 output buffers swapped out
> > >>> 
> > >>> 6509
> > >>> 
> > >>> GigabitEthernet1/2 is up, line protocol is up (connected)
> > >>>   Hardware is C6k 1000Mb 802.3, address is 0006.d65b.853d (bia 0006.d65b.853d)
> > >>>   Description: Connection to 7513
> > >>>   MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
> > >>>      reliability 255/255, txload 8/255, rxload 11/255
> > >>>   Encapsulation ARPA, loopback not set
> > >>>   Full-duplex, 1000Mb/s, media type is SX
> > >>>   input flow-control is off, output flow-control is on
> > >>>   Clock mode is auto
> > >>>   ARP type: ARPA, ARP Timeout 04:00:00
> > >>>   Last input never, output never, output hang never
> > >>>   Last clearing of "show interface" counters never
> > >>>   Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 0
> > >>>   Queueing strategy: fifo
> > >>>   Output queue: 0/40 (size/max)
> > >>>   5 minute input rate 43500000 bits/sec, 12035 packets/sec
> > >>>   5 minute output rate 33545000 bits/sec, 11573 packets/sec
> > >>>      5952975396 packets input, 2640933868846 bytes, 0 no buffer
> > >>>      Received 1696859 broadcasts (68504 multicast)
> > >>>      0 runts, 0 giants, 0 throttles
> > >>>      0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
> > >>>      0 watchdog, 0 multicast, 0 pause input
> > >>>      0 input packets with dribble condition detected
> > >>>      6088478711 packets output, 2200836239462 bytes, 0 underruns
> > >>>      0 output errors, 0 collisions, 1 interface resets
> > >>>      0 babbles, 0 late collision, 0 deferred
> > >>>      0 lost carrier, 0 no carrier, 0 PAUSE output
> > >>>      0 output buffer failures, 0 output buffers swapped out
> > >>> 
> > >>> The 6509 looks nice and clean, but the 7513 seems to show a tonne of
> > >>> buffer issues.  Is this a buffer issue that I should start trying to
> > >>> tune, or do you think something else is the actual cause?
> > >>> 
> > >>> Thanks in advance,
> > >>> 
> > >>> Paul
> > >>> 
> > >>> 

