[c-nsp] ME3600X Output Drops
Richard Clayton
sledge121 at gmail.com
Thu Aug 23 07:30:49 EDT 2012
George
I believe a future release will let you specify a percentage of the
available buffer for queue-limit, and will also let you specify 100% of
the buffer for each individual queue-limit.
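
If it arrives as the standard MQC syntax used on other platforms, it
might look something like this - a hypothetical sketch until the release
ships (policy name illustrative):

policy-map BUFFER-PERCENT-POLICY
 class class-default
  queue-limit percent 100
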
Thanks
Sledge
On 23 August 2012 11:57, George Giannousopoulos <ggiannou at gmail.com> wrote:
> If I remember correctly, 2457 packets is the maximum on this platform.
> We weren't given any specific version for the increased default values.
>
> In case you get anything extra from your SR, it would be nice to share it
> with us
>
> George
>
> On Thu, Aug 23, 2012 at 12:10 PM, Ivan <cisco-nsp at itpro.co.nz> wrote:
>
> > Thanks George. I am raising an SR to get some more information too. Are
> > you able to explain how the queue-limit of 2457 was selected? Also, were
> > you given a version for the increase in the default queue size? I am
> > running me360x-universalk9-mz.152-2.S1.bin
> >
> > Cheers
> >
> > Ivan
> >
> >
> >
> > On 23/Aug/2012 5:48 p.m., George Giannousopoulos wrote:
> >
> >> Hi Ivan,
> >>
> >> In fact, the default queue limit on the 3800X/3600X is quite small.
> >> We also had issues with drops on all interfaces, even without congestion.
> >>
> >> After some research and an SR with Cisco, we have started applying QoS
> >> on all interfaces:
> >>
> >> policy-map INTERFACE-OUTPUT-POLICY
> >>  class dummy
> >>  class class-default
> >>   shape average X00000000
> >>   queue-limit 2457 packets
> >>
> >>
> >> The dummy class does nothing.
> >> It is just there because IOS wouldn't allow changing the queue limit
> >> otherwise.
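> >>
> >> A minimal sketch of the pieces not shown above - the dummy class-map
> >> and the interface attachment; the match criterion and interface name
> >> are illustrative, not from our actual config:
> >>
> >> class-map match-any dummy
> >>  match qos-group 99
> >>
> >> interface GigabitEthernet0/1
> >>  service-policy output INTERFACE-OUTPUT-POLICY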
> >>
> >> Also, there were issues with the policy counters, which should be
> >> resolved after 15.1(2)EY2.
> >> Cisco said they would increase the default queue sizes in the second
> >> half of 2012.
> >> So, I suggest you try the latest IOS version and check again.
> >>
> >> 10G interfaces had no drops in our setup too.
> >>
> >> Regards
> >> George
> >>
> >>
> >> On Thu, Aug 23, 2012 at 1:34 AM, Ivan <cisco-nsp at itpro.co.nz> wrote:
> >>
> >> Replying to my own message....
> >>
> >> * Adjusting the hold queue didn't help.
> >>
> >> * Applying QoS as per the referenced email stopped the drops
> >> immediately - I used something like the below:
> >>
> >> policy-map leaf
> >>  class class-default
> >>   queue-limit 491520 bytes
> >>
> >> policy-map logical
> >>  class class-default
> >>   service-policy leaf
> >>
> >> policy-map root
> >>  class class-default
> >>   service-policy logical
> >>
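> >> A sketch of how this hierarchy is attached and then verified - the
> >> interface name is illustrative:
> >>
> >> interface GigabitEthernet0/2
> >>  service-policy output root
> >>
> >> show policy-map interface GigabitEthernet0/2
> >>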
> >> * I would be interested to hear if others have ended up applying a
> >> similar policy to all interfaces. Any gotchas? I expect any 10Gbps
> >> interfaces would be okay without the QoS - haven't seen any issue on
> >> these myself.
> >>
> >> * Apart from this list I have found very little information around
> >> this whole issue. Any pointers to other documentation would be
> >> appreciated.
> >>
> >> Thanks
> >>
> >> Ivan
> >>
> >> Ivan
> >>
> >> > Hi,
> >> >
> >> > I am seeing output drops on an ME3600X interface, as shown below:
> >> >
> >> > GigabitEthernet0/2 is up, line protocol is up (connected)
> >> > MTU 9216 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
> >> > reliability 255/255, txload 29/255, rxload 2/255
> >> > Encapsulation ARPA, loopback not set
> >> > Keepalive set (10 sec)
> >> > Full-duplex, 1000Mb/s, media type is RJ45
> >> > input flow-control is off, output flow-control is unsupported
> >> > ARP type: ARPA, ARP Timeout 04:00:00
> >> > Last input 6w1d, output never, output hang never
> >> > Last clearing of "show interface" counters 00:12:56
> >> > Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 231
> >> > Queueing strategy: fifo
> >> > Output queue: 0/40 (size/max)
> >> > 30 second input rate 10299000 bits/sec, 5463 packets/sec
> >> > 30 second output rate 114235000 bits/sec, 12461 packets/sec
> >> > 3812300 packets input, 705758638 bytes, 0 no buffer
> >> > Received 776 broadcasts (776 multicasts)
> >> > 0 runts, 0 giants, 0 throttles
> >> > 0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
> >> > 0 watchdog, 776 multicast, 0 pause input
> >> > 0 input packets with dribble condition detected
> >> > 9103882 packets output, 10291542297 bytes, 0 underruns
> >> > 0 output errors, 0 collisions, 0 interface resets
> >> > 0 unknown protocol drops
> >> > 0 babbles, 0 late collision, 0 deferred
> >> > 0 lost carrier, 0 no carrier, 0 pause output
> >> > 0 output buffer failures, 0 output buffers swapped out
> >> >
> >> > I have read about similar issues on the list:
> >> > http://www.gossamer-threads.com/lists/cisco/nsp/157217
> >> > https://puck.nether.net/pipermail/cisco-nsp/2012-July/085889.html
> >> >
> >> > 1. I have no QoS policies applied to the physical interface or EVCs.
> >> > Would increasing the hold queue help? Is there a recommended value?
> >> > The maximum configurable is 240000. What is the impact on the 44MB of
> >> > packet buffer?
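> >> >
> >> > For reference, the hold queue is set per interface - a minimal
> >> > sketch using the configurable maximum:
> >> >
> >> > interface GigabitEthernet0/2
> >> >  hold-queue 240000 out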
> >> >
> >> > 2. If the hold queue isn't an option, is configuring QoS required to
> >> > increase the queue-limit from the default 100 us? Again, are there
> >> > any recommended values, and what impact is there on the available
> >> > 44MB of packet buffer?
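> >> >
> >> > As a rough worked example of the default, assuming 100 us of
> >> > buffering at line rate on a 1G port:
> >> >
> >> > 10^9 bits/s x 100 x 10^-6 s = 100,000 bits = 12,500 bytes
> >> > 12,500 bytes / 1500 bytes per packet ~= 8 packets
> >> >
> >> > i.e. only around eight full-size packets can be queued before drops
> >> > begin.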
> >> >
> >> > 3. I have found that when applying policies to the EVCs, the "show
> >> > policy-map" output does not have the queue-limit information that I
> >> > have seen when applying policies to the physical interface. Does
> >> > this mean that EVCs will still suffer from output drops?
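> >> >
> >> > For reference, I am applying the policies under the EVCs roughly
> >> > like this - service instance and VLAN numbers are illustrative:
> >> >
> >> > interface GigabitEthernet0/2
> >> >  service instance 10 ethernet
> >> >   encapsulation dot1q 10
> >> >   service-policy output leaf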
> >> >
> >> > Thanks
> >> >
> >> > Ivan
> >>
> >>
> _______________________________________________
> cisco-nsp mailing list cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
>