[c-nsp] ME3600X - tuning the output queues
Darren O'Connor
darrenoc at outlook.com
Sun May 18 06:15:36 EDT 2014
I've been using queue limit 100% on our policies for four months with no ill effects at all on our ME3600Xs.
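
A minimal sketch of what that looks like in one of our per-service egress policies (the policy name and shaper rate are placeholders, and the dummy class is the same workaround Pshem describes further down):

policy-map PM-CUST-100M-OUT
 class CM-DUMMY
 class class-default
  shape average 100000000
  queue-limit percent 100
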
Thanks
Darren
http://www.mellowd.co.uk/ccie
> From: waris at cisco.com
> To: pshem.k at gmail.com; ggiannou at gmail.com
> Date: Sun, 18 May 2014 07:39:13 +0000
> CC: cisco-nsp at puck.nether.net
> Subject: Re: [c-nsp] ME3600X - tuning the output queues
>
> Hi Pshem and George,
> There are two ASICs in the system and each has a 22 MB buffer. The 2x10Gig ports are on one ASIC and the 24x1Gig ports are on the other, so the 10 Gig buffers are separate from the 1 Gig buffers.
> You are experiencing microbursts in your network; they occur when there is a speed mismatch between the ingress and egress interfaces, and the larger the mismatch, the more likely they become. A microburst is a sudden spike of traffic that results in packet drops due to lack of buffers. I would recommend using queue-limit percent, and you can use 100%, since the configuration allows oversubscription on the assumption that not all queues are oversubscribed at the same time. You can refer to my following Cisco Live deck for more information:
> https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxtd2FyaXN8Z3g6NzI1MTc2YzdjNGI2YmQ1NA
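>
> (As a rough illustration of the scale involved: a burst lasting just 1 ms arriving from a 10 Gig port and destined for a 1 Gig port represents about 1.25 MB of data, far more than the ~50 KB default queue depth, so without deeper queues most of it is simply dropped.)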
>
> Best Regards,
>
> Waris Sagheer
> Technical Marketing Manager
> Service Provider Access Group (SPAG)
> waris at cisco.com
> Phone: +1 408 853 6682
> Mobile: +1 408 835 1389
>
> CCIE - 19901
>
> From: Pshem Kowalczyk <pshem.k at gmail.com>
> Date: Tuesday, March 26, 2013 at 2:05 PM
> To: 'George Giannousopoulos' <ggiannou at gmail.com>
> Cc: "cisco-nsp at puck.nether.net" <cisco-nsp at puck.nether.net>
> Subject: Re: [c-nsp] ME3600X - tuning the output queues
>
> Hi,
>
> We're running 15.3 already. We got the buffers to 2MB per service, but
> still see occasional tail drops.
>
> kind regards
> Pshem
>
>
> On 27 March 2013 02:26, George Giannousopoulos <ggiannou at gmail.com> wrote:
> Hi Pshem,
>
> We have seen the same issue with the 3800x
> In our case we use the maximum allowed packet number
> queue-limit 2457 packets
>
> If I'm not mistaken, there are improvements coming to the default queue
> sizes with the 15.3 train
>
> George
>
> On Mon, Mar 25, 2013 at 4:25 AM, Pshem Kowalczyk <pshem.k at gmail.com> wrote:
>
> Hi,
>
> We have a couple of ME3600X (24cx) providing MPLS-based L2 services to
> anywhere between 20 and 80 customers per chassis. For the last few
> weeks we've been chasing a packet loss issue with some of those
> customers. It looks like the issue is more likely to happen on
> interfaces with multiple service instances than on those with just a
> few. In the most extreme cases we have, on one hand, customers doing
> 700Mb/s on a single port with the default queue depth (~50KB) and not
> a single dropped packet, and on the other a bunch of <10Mb/s services
> dropping packets all the time.
>
> Initially we used the following QoS (per service instance):
>
> policy-map PM-CUST-DEFAULT-100M-OUT
>  class class-default
>   shape average 100000000
>
> This was causing massive drops even for services that were only
> transmitting 5-15Mb/s. Since queue-depth couldn't be applied with just
> the default class, we ended up with something like this:
>
> policy-map PM-CUST-DEFAULT-100M-OUT
>  class CM-DUMMY
>  class class-default
>   shape average 100000000
>   queue-limit 1536000 bytes
>
> (where CM-DUMMY matches a non-existent qos-group).
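>
> A class-map along these lines does the trick (the qos-group value is
> arbitrary - anything we never actually set will do):
>
> class-map match-all CM-DUMMY
>  match qos-group 99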
>
> This made things significantly better, but I feel that the queue of
> 1.5MB per service is quite excessive (bearing in mind that the device
> has only 22MB in total for shared queues on 1G ports). I was told by
> the TAC engineer that the memory is allocated dynamically, so it's
> safe to oversubscribe it.
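>
> (To put numbers on it: at 80 services x 1.5MB that would be 120MB of
> nominal queue space against the 22MB the ASIC actually has - roughly
> 5.5:1 oversubscription.)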
>
> At this stage I'm still waiting to learn if it's possible to monitor
> the utilisation of that RAM.
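>
> (In the meantime the closest visibility we have is the standard MQC
> drop counters, e.g. "show policy-map interface GigabitEthernet0/1" -
> that shows per-class queue drops rather than actual buffer occupancy,
> and the interface name there is just an example.)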
>
> But the other question still lingers - what do you use as the
> queue-limit? I know it's traffic-dependent but with only 3 profiles
> available there is not much room to move (we use one profile for the
> core-facing classes, this is the second one) and a fairly universal
> depth has to be used. On top of that we don't really know what our
> customers use the service for, so the visibility is very limited.
>
> So if you use the platform - what's your magic number?
>
> kind regards
> Pshem
> _______________________________________________
> cisco-nsp mailing list cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/