[c-nsp] ME3600X - tuning the output queues

Pshem Kowalczyk pshem.k at gmail.com
Sun Mar 24 22:25:16 EDT 2013


Hi,

We have a couple of ME3600X (24cx) providing MPLS-based L2 services to
anywhere between 20 and 80 customers per chassis. For the last few
weeks we've been chasing a packet loss issue with some of those
customers. It looks like the issue is more likely to happen on
interfaces with multiple service instances than on those with just a
few. In the most extreme cases we have, on one hand, customers doing
700Mb/s on a single port with the default queue depth (~50KB) and not
a single dropped packet, and on the other a bunch of <10Mb/s services
dropping packets all the time.

Initially we used the following QoS (per service instance):

policy-map PM-CUST-DEFAULT-100M-OUT
 class class-default
  shape average 100000000

This was causing massive drops even for services that were only
transmitting 5-15Mb/s. Since a queue-limit couldn't be applied with
just the default class, we ended up with something like this:

policy-map PM-CUST-DEFAULT-100M-OUT
 class CM-DUMMY
 class class-default
  shape average 100000000
  queue-limit 1536000 bytes

(where CM-DUMMY matches a non-existent qos-group).
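To put the two queue depths in perspective, it helps to express them as the time a full queue takes to drain at the shaped rate - roughly, how long a burst the queue can absorb before tail-dropping. This is a back-of-envelope sketch using the numbers from the post; the helper name is mine, not anything the platform exposes:

```python
# Rough burst-absorption check: time (ms) for a full queue to drain
# at the shaper rate. Figures below come from the post itself.

def queue_delay_ms(queue_bytes: int, shape_bps: int) -> float:
    """Milliseconds needed to drain queue_bytes at shape_bps."""
    return queue_bytes * 8 / shape_bps * 1000

# Default ~50KB queue behind a 100 Mb/s shaper:
print(queue_delay_ms(50_000, 100_000_000))     # 4.0 -> only ~4 ms of burst
# The 1,536,000-byte workaround:
print(queue_delay_ms(1_536_000, 100_000_000))  # ~123 ms of burst
```

At ~4 ms of absorption, even a short microburst from a 5-15Mb/s service overruns the default queue, which is consistent with the drops described above.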

This made things significantly better, but I feel that a queue of
1.5MB per service is quite excessive (bearing in mind that the device
has only 22MB in total for shared queues on 1G ports). I was told by
a TAC engineer that the memory is allocated dynamically, so it's
safe to oversubscribe it.
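The scale of that oversubscription is easy to put numbers on, using only the figures already in the thread (22MB shared pool, 1,536,000-byte per-service limit, up to 80 customers per chassis):

```python
# Back-of-envelope oversubscription check. All constants are the
# figures quoted in the post, not values read from the box.

SHARED_POOL_BYTES = 22_000_000   # ~22MB shared queue memory for 1G ports
QUEUE_LIMIT_BYTES = 1_536_000    # per-service workaround queue-limit

# How many services fit before the pool is nominally spoken for:
print(SHARED_POOL_BYTES // QUEUE_LIMIT_BYTES)   # 14

# Nominal worst case with 80 services all at their configured limit:
print(80 * QUEUE_LIMIT_BYTES / 1e6)             # ~123 (MB)
```

So only ~14 services can hit their configured limit simultaneously before the pool is exhausted; with 80 services the configured limits add up to roughly 5-6x the available memory, which is why being able to monitor actual utilisation matters.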

At this stage I'm still waiting to learn whether it's possible to
monitor the utilisation of that RAM.

But the other question still lingers - what do you use as the
queue-limit? I know it's traffic-dependent, but with only 3 profiles
available there is not much room to move (we use one profile for the
core-facing classes, this is the second one), so a fairly universal
depth has to be used. On top of that we don't really know what our
customers use the service for, so our visibility is very limited.
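One candidate for a "magic number" is to size the queue for a target worst-case queuing delay at the shaper rate, rather than a flat byte count. A sketch, assuming a 50 ms delay budget purely for illustration (with the three-profile limit you may still have to settle on a single value, but the calculation shows what delay any given value implies):

```python
# Delay-based queue sizing sketch. The 50 ms target is an assumption
# for illustration, not a platform default or a recommendation.

def queue_limit_bytes(shape_bps: int, target_delay_s: float = 0.050) -> int:
    """Bytes so that a full queue drains within target_delay_s."""
    return int(shape_bps * target_delay_s / 8)

print(queue_limit_bytes(100_000_000))  # 625000 bytes for a 100 Mb/s shaper
print(queue_limit_bytes(10_000_000))   # 62500 bytes for a 10 Mb/s service
```

By this yardstick the 1,536,000-byte value corresponds to well over 100 ms at 100 Mb/s, i.e. generous but not absurd for TCP bulk traffic, while anything near the ~50KB default only suits much lower shaper rates.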

So if you use the platform - what's your magic number?

kind regards
Pshem
