[c-nsp] Troubleshooting Lag between GigE interfaces
Brant I. Stevens
branto at branto.com
Wed Sep 22 19:44:26 EDT 2004
I've also seen lag on routers in general with the BGP Scanner process eating
up CPU once a minute. Is this a possibility in your configuration?
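If BGP Scanner is the suspect, one rough way to test it is to log the times the latency spikes occur and check whether the gaps between them cluster near whole-minute multiples, since the scanner runs once a minute. A minimal sketch, with hypothetical sample timestamps (the function name is illustrative):

```python
# Rough check: do observed lag events recur on a ~60 s cycle,
# as a once-a-minute BGP Scanner run would suggest?
# The spike timestamps below are hypothetical sample data.

def near_minute_multiples(timestamps, tolerance=5.0):
    """Return True if every gap between successive spikes is within
    `tolerance` seconds of a whole-minute multiple."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return all(abs(g - 60 * round(g / 60)) <= tolerance for g in gaps)

# Example: spikes seen at these seconds-since-start offsets
spikes = [12.0, 72.5, 131.8, 192.2]
print(near_minute_multiples(spikes))  # gaps of ~60 s suggest a periodic process
```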
On 09/22/2004 05:15 PM, "Rodney Dunn" <rodunn at cisco.com> wrote:
> Clear the counters and do:
>
> sh buffers input-interface gig 8/1/0 packet
>
> and do it over and over to catch the packets
> going in the input queue.
>
> Packets switched in the fast path (fast switching,
> CEF, dCEF) never hit the input queue.
>
> If you are process switching transit traffic
> you will see some delay/jitter in the traffic
> stream because you have to schedule the IP Input
> process to run.
>
> Rodney
>
>
>
> On Wed, Sep 22, 2004 at 05:02:17PM -0400, Deepak Jain wrote:
>>
>> I would increase the size of the "input" hold-queue and see what happens
>> after you clear the counters. You are definitely exhausting the input
>> buffer on the 7513. The question is whether it's just burstiness or
>> something else -- you don't seem to be moving enough traffic on it for
>> this to be a CPU issue. You do have output flow control enabled on the
>> 6509, but not the matching setting on the 7513. That's the big problem, I'd guess.
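To follow the advice above programmatically, the cumulative drop counter in the "Input queue: size/max/drops/flushes" line of `show interface` can be sampled and diffed between polls to see how fast drops accumulate. A minimal parsing sketch in Python, using the 7513 line quoted later in this thread as sample input (the helper name is illustrative):

```python
import re

# Parse the "Input queue: size/max/drops/flushes" counters from
# `show interface` output so successive samples can be diffed.
# The sample line is taken verbatim from the 7513 output in this thread.
QUEUE_RE = re.compile(
    r"Input queue:\s*(\d+)/(\d+)/(\d+)/(\d+)\s*\(size/max/drops/flushes\)"
)

def parse_input_queue(show_output):
    """Return (size, max, drops, flushes) as ints, or None if not found."""
    m = QUEUE_RE.search(show_output)
    return tuple(int(x) for x in m.groups()) if m else None

sample = 'Input queue: 3/75/913188/2076260 (size/max/drops/flushes); Total'
size, qmax, drops, flushes = parse_input_queue(sample)
print(drops)  # cumulative drops; diff two samples for the drop rate
```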
>>
>> Paul Stewart wrote:
>>
>>> We have a 7513 and a 6509 connected via GigE SX fiber. We frequently
>>> see "lag" on the connection lasting 5-10 seconds and causing 60-80 ms of delay.
>>>
>>> When I look at the interfaces I see the following:
>>>
>>> 7513
>>>
>>> GigabitEthernet8/1/0 is up, line protocol is up
>>> Hardware is cyBus GigabitEthernet Interface, address is 0001.64ef.a108
>>> (bia 0001.64ef.a108)
>>> Description: Gig Fiber to 6509
>>> Internet address is XXX.XXX.XXX.XXX/24
>>> MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
>>> reliability 255/255, txload 8/255, rxload 6/255
>>> Encapsulation ARPA, loopback not set
>>> Keepalive set (10 sec)
>>> Full-duplex, 1000Mb/s, link type is autonegotiation, media type is SX
>>> output flow-control is XOFF, input flow-control is unsupported
>>> ARP type: ARPA, ARP Timeout 04:00:00
>>> Last input 00:00:00, output 00:00:00, output hang never
>>> Last clearing of "show interface" counters never
>>> Input queue: 3/75/913188/2076260 (size/max/drops/flushes); Total
>>> output drops: 0
>>> Queueing strategy: fifo
>>> Output queue: 0/40 (size/max)
>>> 30 second input rate 26395000 bits/sec, 9394 packets/sec
>>> 30 second output rate 35145000 bits/sec, 9900 packets/sec
>>> 1790737826 packets input, 2276828279 bytes, 0 no buffer
>>> Received 1895463 broadcasts, 0 runts, 0 giants, 34543 throttles
>>> 0 input errors, 0 CRC, 0 frame, 36 overrun, 0 ignored
>>> 0 watchdog, 520626 multicast, 0 pause input
>>> 0 input packets with dribble condition detected
>>> 1655202857 packets output, 359511296 bytes, 0 underruns
>>> 0 output errors, 0 collisions, 0 interface resets
>>> 0 babbles, 0 late collision, 0 deferred
>>> 2 lost carrier, 0 no carrier, 0 pause output
>>> 0 output buffer failures, 0 output buffers swapped out
>>>
>>> 6509
>>>
>>> GigabitEthernet1/2 is up, line protocol is up (connected)
>>> Hardware is C6k 1000Mb 802.3, address is 0006.d65b.853d (bia
>>> 0006.d65b.853d)
>>> Description: Connection to 7513
>>> MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
>>> reliability 255/255, txload 8/255, rxload 11/255
>>> Encapsulation ARPA, loopback not set
>>> Full-duplex, 1000Mb/s, media type is SX
>>> input flow-control is off, output flow-control is on
>>> Clock mode is auto
>>> ARP type: ARPA, ARP Timeout 04:00:00
>>> Last input never, output never, output hang never
>>> Last clearing of "show interface" counters never
>>> Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops:
>>> 0
>>> Queueing strategy: fifo
>>> Output queue: 0/40 (size/max)
>>> 5 minute input rate 43500000 bits/sec, 12035 packets/sec
>>> 5 minute output rate 33545000 bits/sec, 11573 packets/sec
>>> 5952975396 packets input, 2640933868846 bytes, 0 no buffer
>>> Received 1696859 broadcasts (68504 multicast)
>>> 0 runts, 0 giants, 0 throttles
>>> 0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
>>> 0 watchdog, 0 multicast, 0 pause input
>>> 0 input packets with dribble condition detected
>>> 6088478711 packets output, 2200836239462 bytes, 0 underruns
>>> 0 output errors, 0 collisions, 1 interface resets
>>> 0 babbles, 0 late collision, 0 deferred
>>> 0 lost carrier, 0 no carrier, 0 PAUSE output
>>> 0 output buffer failures, 0 output buffers swapped out
>>>
>>> The 6509 looks nice and clean, but the 7513 seems to show a tonne of
>>> buffer issues. Is this a buffer problem I should start trying to tune,
>>> or do you think something else is the actual cause?
>>>
>>> Thanks in advance,
>>>
>>> Paul
>>>
>>>
>>> _______________________________________________
>>> cisco-nsp mailing list cisco-nsp at puck.nether.net
>>> https://puck.nether.net/mailman/listinfo/cisco-nsp
>>> archive at http://puck.nether.net/pipermail/cisco-nsp/
>>>
>>>