[c-nsp] Troubleshooting Lag between GigE interfaces
Rodney Dunn
rodunn at cisco.com
Thu Sep 23 11:51:13 EDT 2004
On Thu, Sep 23, 2004 at 11:33:52AM -0400, Paul Stewart wrote:
> gw-7513#sh cef linecard
> Slot MsgSent XDRSent Window LowQ MedQ HighQ Flags
> 8 14 351 LC wait 0 0 0 disabled
> 9 14 352 LC wait 0 0 0 disabled
> 10 135796 4020240 4836 0 0 0 up
> 11 135796 4020237 4836 0 0 0 up
>
> VRF Default-table, version 2078485, 149043 routes
> Slot Version CEF-XDR I/Fs State Flags
> 8 158 120 3 Active table-disabled
> 9 158 120 3 Active table-disabled
> 10 2078485 4001103 6 Active sync, table-up
> 11 2078485 4001103 4 Active sync, table-up
>
> I think I found the problem..:)  We have an access-list applied on one
> of the FEs to block traffic (temporarily) to one particular box here..
Nope. We do ACLs in the distributed path.
The only caveat there is that named ACLs in 12.0S are not
supported in the dCEF path.
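
If you do want a filter that stays in the dCEF path on 12.0S,
a numbered extended ACL is the safe form. A minimal sketch (the
ACL number, address, and interface are placeholders, not from
your config):

access-list 199 deny ip any host 192.0.2.10
access-list 199 permit ip any any
!
interface FastEthernet1/0/0
 ip access-group 199 in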
> would that be causing it to punt down?
This doesn't have anything to do with punts. Here dCEF isn't
even up to the VIPs in slots 8 and 9.
For some reason dCEF got disabled to those slots.
You would have to have the logs from when it got disabled
to know why.
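
If the messages are still in the logging buffer, something like
this should turn them up (%FIB-3-FIBDISABLE is the usual message
name as I recall it; exact text varies by release):

sh logging | include FIB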
How much memory is on those cards?
'sh diag'
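
With ~149k routes the VIPs need a good amount of DRAM to hold
the dCEF table. A sketch of the per-slot check (exact field
names vary by VIP model):

sh diag 8
sh diag 9

Compare the memory reported there against slots 10 and 11,
which are holding the table fine.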
You can try to re-enable it via: clear cef linecard
or do:
conf t
ip cef                <- that converts everything to RSP-based CEF switching
ip cef distributed    <- that turns dCEF back on for the box
You have to get dCEF sync'd up to the cards first.
That should then show "up" and "sync, table-up" in 'sh cef linecard'.
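
End to end, the recovery would look something like this
(assuming your release takes a slot argument on clear cef
linecard; if not, the bare command resets all slots):

clear cef linecard 8
clear cef linecard 9

or, to rebuild CEF for the whole box:

conf t
ip cef
ip cef distributed
end
sh cef linecard

Afterward slots 8 and 9 should look like 10 and 11 in your
output above: "up" in the first table and "sync, table-up"
in the second.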
> Paul
>
>
> On Thu, 2004-09-23 at 11:20, Rodney Dunn wrote:
> > You have distributed switched 0 packets coming
> > in on 8/1/0.
> >
> > A lot of packets going out the interface have
> > been dCEF switched by some other ingress VIP.
> >
> > You don't have any features enabled on this interface
> > that would cause packets to get punted out of the
> > dCEF path but it could be a feature on the egress
> > interface. What is the configuration of the egress
> > interface?
> >
> > Also, do clear counters and get 'sh int stat' 3 times
> > 15 seconds apart.
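> >
> > i.e. something like (interface name from your earlier mail):
> >
> > clear counters GigabitEthernet8/1/0
> > sh int stat    <- repeat 3 times, ~15 seconds apart
> >
> > That shows which path the current traffic is taking rather
> > than counters accumulated since the last clear.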
> >
> > Also check 'sh cef linecard' and make sure dCEF is
> > up to all VIP's.
> >
> > Most likely you have a feature enabled on the egress
> > side that is causing the ingress VIP to punt traffic.
> >
> > You can get on the vip via "if-con <slot>"
> > and do 'sh ip cef' and sometimes "sh cef interface"
> > will tell you why it's punting.
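> >
> > A sketch of that (if-quit drops you back to the RSP; exact
> > VIP console behavior varies a bit by release):
> >
> > if-con 8
> > sh cef interface GigabitEthernet8/1/0
> > if-quit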
> >
> > Rodney
> >
> > On Thu, Sep 23, 2004 at 10:59:35AM -0400, Paul Stewart wrote:
> > > Thanks for the response.. here's the interface:
> > >
> > > GigabitEthernet8/1/0
> > > Switching path Pkts In Chars In Pkts Out Chars Out
> > > Processor 6734033 470049748 6320689 446014893
> > > Route cache 313258446 265741463 39358803 669296029
> > > Distributed cache 0 0 268789121 3282566919
> > > Total 319992479 735791211 314468613 4397877841
> > >
> > >
> > > Config:
> > >
> > > interface GigabitEthernet8/1/0
> > > description Gig Fiber to 6509
> > > ip address XXX.XXX.XXX.XXX 255.255.255.0
> > > no ip redirects
> > > no ip proxy-arp
> > > load-interval 30
> > > negotiation auto
> > > no cdp enable
> > >
> > > It's dCEF according to other output...
> > >
> > >
> > > What is wrong here? :)
> > >
> > > Paul
> > >
> > >
> > >
> > > On Wed, 2004-09-22 at 17:00, Rodney Dunn wrote:
> > > > If you ever see hits on the "Input queue" in 'sh int'
> > > > it means you are process switching traffic, which is
> > > > really bad.
> > > >
> > > > You can see this via: sh int stat
> > > >
> > > > On a properly configured 75xx all traffic
> > > > should be dCEF (Distributed) switched.
> > > >
> > > > In that environment, really the only time you
> > > > can see delay introduced on the 75xx is if you
> > > > are seeing bursty traffic and rx-side buffering
> > > > is happening.
> > > >
> > > > You can check for that by checking the ingress VIP via:
> > > >
> > > > sh controller vip <slot> accumulator
> > > >
> > > > a couple of times and see if the "in" counter is going up.
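> > > >
> > > > i.e.:
> > > >
> > > > sh controller vip 8 accumulator
> > > > (wait 15-30 seconds)
> > > > sh controller vip 8 accumulator
> > > >
> > > > If the "in" buffer counts keep climbing between runs,
> > > > the VIP is rx-side buffering.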
> > > >
> > > > You see this a lot when you have some LAN connection feeding
> > > > a low speed serial.
> > > >
> > > > If it's ingress GIG and egress GIG that shouldn't really
> > > > happen unless the rates are sustained or bursty enough to
> > > > overrun the VIP CPU.
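> > > >
> > > > You can sanity-check the VIP CPU itself by consoling into
> > > > the card (a sketch; if-con availability varies by image):
> > > >
> > > > if-con 8
> > > > sh proc cpu
> > > > if-quit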
> > > >
> > > > Rodney
> > > >
> > > > On Wed, Sep 22, 2004 at 04:44:52PM -0400, Paul Stewart wrote:
> > > > > We have a 7513 and a 6509 connected via GigE SX Fiber. Frequently we
> > > > > see "lag" on the connection lasting 5-10 seconds causing 60-80ms delay.
> > > > >
> > > > > When I look at the interfaces I see the following:
> > > > >
> > > > > 7513
> > > > >
> > > > > GigabitEthernet8/1/0 is up, line protocol is up
> > > > > Hardware is cyBus GigabitEthernet Interface, address is 0001.64ef.a108
> > > > > (bia 0001.64ef.a108)
> > > > > Description: Gig Fiber to 6509
> > > > > Internet address is XXX.XXX.XXX.XXX/24
> > > > > MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
> > > > > reliability 255/255, txload 8/255, rxload 6/255
> > > > > Encapsulation ARPA, loopback not set
> > > > > Keepalive set (10 sec)
> > > > > Full-duplex, 1000Mb/s, link type is autonegotiation, media type is SX
> > > > > output flow-control is XOFF, input flow-control is unsupported
> > > > > ARP type: ARPA, ARP Timeout 04:00:00
> > > > > Last input 00:00:00, output 00:00:00, output hang never
> > > > > Last clearing of "show interface" counters never
> > > > > Input queue: 3/75/913188/2076260 (size/max/drops/flushes); Total
> > > > > output drops: 0
> > > > > Queueing strategy: fifo
> > > > > Output queue: 0/40 (size/max)
> > > > > 30 second input rate 26395000 bits/sec, 9394 packets/sec
> > > > > 30 second output rate 35145000 bits/sec, 9900 packets/sec
> > > > > 1790737826 packets input, 2276828279 bytes, 0 no buffer
> > > > > Received 1895463 broadcasts, 0 runts, 0 giants, 34543 throttles
> > > > > 0 input errors, 0 CRC, 0 frame, 36 overrun, 0 ignored
> > > > > 0 watchdog, 520626 multicast, 0 pause input
> > > > > 0 input packets with dribble condition detected
> > > > > 1655202857 packets output, 359511296 bytes, 0 underruns
> > > > > 0 output errors, 0 collisions, 0 interface resets
> > > > > 0 babbles, 0 late collision, 0 deferred
> > > > > 2 lost carrier, 0 no carrier, 0 pause output
> > > > > 0 output buffer failures, 0 output buffers swapped out
> > > > >
> > > > > 6509
> > > > >
> > > > > GigabitEthernet1/2 is up, line protocol is up (connected)
> > > > > Hardware is C6k 1000Mb 802.3, address is 0006.d65b.853d (bia
> > > > > 0006.d65b.853d)
> > > > > Description: Connection to 7513
> > > > > MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
> > > > > reliability 255/255, txload 8/255, rxload 11/255
> > > > > Encapsulation ARPA, loopback not set
> > > > > Full-duplex, 1000Mb/s, media type is SX
> > > > > input flow-control is off, output flow-control is on
> > > > > Clock mode is auto
> > > > > ARP type: ARPA, ARP Timeout 04:00:00
> > > > > Last input never, output never, output hang never
> > > > > Last clearing of "show interface" counters never
> > > > > Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops:
> > > > > 0
> > > > > Queueing strategy: fifo
> > > > > Output queue: 0/40 (size/max)
> > > > > 5 minute input rate 43500000 bits/sec, 12035 packets/sec
> > > > > 5 minute output rate 33545000 bits/sec, 11573 packets/sec
> > > > > 5952975396 packets input, 2640933868846 bytes, 0 no buffer
> > > > > Received 1696859 broadcasts (68504 multicast)
> > > > > 0 runts, 0 giants, 0 throttles
> > > > > 0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
> > > > > 0 watchdog, 0 multicast, 0 pause input
> > > > > 0 input packets with dribble condition detected
> > > > > 6088478711 packets output, 2200836239462 bytes, 0 underruns
> > > > > 0 output errors, 0 collisions, 1 interface resets
> > > > > 0 babbles, 0 late collision, 0 deferred
> > > > > 0 lost carrier, 0 no carrier, 0 PAUSE output
> > > > > 0 output buffer failures, 0 output buffers swapped out
> > > > >
> > > > > The 6509 looks nice and clean but the 7513 shows a tonne of buffer
> > > > > issues, it seems. Is this a buffer issue that I should start trying
> > > > > to tune, or do you think something else is the actual cause?
> > > > >
> > > > > Thanks in advance,
> > > > >
> > > > > Paul
> > > > >
> > > > >