

Wed Sep 2 11:09:22 EDT 2009



120000 x 1000 = 120000000
120000000/8 = 15000000
15000000/256 = 58593.75

Class-map: qos-cos4 (match-any)
12103420 packets, 16484858040 bytes
5 minute offered rate 96063000 bps, drop rate 0000 bps
Match: cos  4
Queueing
queue limit 60000 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 12103420/16484858040
bandwidth 120000 kbps

I also noticed on my priority queue that the queue limit is always set
to 66 packets by default.
I am assuming this is to reduce latency for rtp but by the above logic
that would equal 135.168k before tail drops
and I am seeing 287k with no drops.
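A quick check of that 135.168k figure (a Python sketch; the constants just restate the numbers above):

```python
BUFFER_BYTES = 256         # ES20 queue "packets" are really 256-byte buffers
PRIORITY_QUEUE_LIMIT = 66  # default queue limit on the priority queue

# Capacity of the default priority queue before tail drop, in bits
capacity_bits = PRIORITY_QUEUE_LIMIT * BUFFER_BYTES * 8
print(capacity_bits)  # 135168, i.e. the 135.168k figure
```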

queue stats for all priority classes:
Queueing
queue limit 66 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 229339/49963294

Class-map: qos-cos5 (match-any)
229339 packets, 49963294 bytes
5 minute offered rate 287000 bps, drop rate 0000 bps
Match: cos  5
police:
cir 50000000 bps, bc 1562500 bytes
conformed 226069 packets, 49250874 bytes; actions:
transmit
exceeded 0 packets, 0 bytes; actions:
drop
conformed 286000 bps, exceed 0000 bps
Priority: Strict, b/w exceed drops: 0

I believe you when you say the queue limit could cause drops, as the
logic is sound, but I can't see any impact in the outputs above. Maybe
I'm missing something.

Thanks
Anthony

Byrd, William wrote:
> I usually don't reply to the list but I think this information could save
> a lot of hours of someone's life.
>
> The ES20 is a WAN card and not a switch card like the 6748 although it
> does have switch card functionality. The QoS we use on these cards is all
> done with the MQC and the queues are completely unlike any other Cisco
> card.
>
> Here's what we dug out on these cards after a protracted TAC case and in
> depth work with the Cisco BU.
>
> Full disclosure: This information is from a document that a colleague put
> together here after working with TAC so I can't take credit for it. It is
> tested and known to be working however and I have slightly edited it to
> remove the company specific information.
>
> Excerpt from the document:
>
> ES20 QoS Queue Limit
>
> The queue limit is the amount of buffer or queue space allocated to a
> class of traffic in a service policy.  The queue limit for the ES20 line
> card is designed around a 256 byte datagram buffer space rather than an
> actual per packet buffer as the syntax indicates.  This means that a 1500
> byte packet will take up 6 "packet" buffers in the class's queue.
> If traffic for a class arrives at a higher rate than the queue can be
> emptied the queue begins to fill.  Once the queue is full the traffic will
> begin to tail drop, in which all traffic arriving will be dropped until the
> queue empties.  This means that if a class bursts traffic above the
> queue limit (even if it is below the reserved bandwidth) the class will
> begin to drop traffic.  The default queue limit is calculated
> automatically and we have found that the default queue limit is not always
> sufficient to accommodate the class's reserved bandwidth.  To calculate
> the appropriate queue limit for a class's bandwidth reservation use the
> following equation:
>
> Kbps x 1000/8 = bytes per second
>
> Once you have the kilobits converted to bytes you must then calculate the
> queue limit as follows:
>
> Bytes/256 = queue limit
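The two steps above can be combined into one helper. A minimal Python sketch (the `queue_limit` name and the round-up are my own; the document rounds to friendlier numbers by hand):

```python
import math

BUFFER_BYTES = 256  # ES20 queues are carved in 256-byte datagram buffers

def queue_limit(kbps):
    """Queue limit (in 256-byte buffers) to hold roughly one second
    of traffic at the given bandwidth reservation in Kbps."""
    bytes_per_second = kbps * 1000 / 8
    return math.ceil(bytes_per_second / BUFFER_BYTES)

print(queue_limit(1950))    # 953   (952.1484375 rounded up)
print(queue_limit(3900))    # 1905  (1904.296875 rounded up)
print(queue_limit(120000))  # 58594 (58593.75 rounded up)
```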
>
> An example of this issue arose on a subrated Gigabit Ethernet service
> policy on a DS3 over Gigabit Ethernet link.
>
>   Policy Map SubRateGE-DS3-ES20
>     Class class-default
>       Average Rate Traffic Shaping
>       cir 39000000 (bps)
>       service-policy IPUPLINK-ES20
>
>   Policy Map IPUPLINK-ES20
>     Class VOICE-RTP
>       priority
>      police cir 19500000 bc 609375 be 609375
>        conform-action transmit
>        exceed-action drop
>     Class VOICE-SIGNALLING
>       bandwidth 3900 (kbps)
>     Class MGMT
>       bandwidth 1950 (kbps)
>     Class PREMIUM-CUSTOMER
>       bandwidth 3900 (kbps)
>     Class ROUTING
>       bandwidth 1950 (kbps)
>     Class class-default
>
> This is an example of a nested or hierarchical policy in which the
> "parent" (SubRateGE-DS3-ES20) calls on a "child" (IPUPLINK-ES20)
> policy.  In this scenario the parent policy is shaping the traffic to a
> rate of 39Mbps while the child policy allocates that bandwidth to each
> class of traffic proportionate to the class's bandwidth reservation.
>
> The first class specified, VOICE-RTP, is given priority due to the delay
> sensitive nature of voice traffic and policed to 19500Kbps, which is 50% of
> the available bandwidth.  The traffic is policed to prevent the voice
> traffic from starving the other classes.  The remaining defined classes
> receive a bandwidth reservation of 1950Kbps or 3900Kbps, which is 5% or
> 10% of the available bandwidth.  The remaining bandwidth is available to
> the class-default, which matches any traffic not matched in a more specific
> class.
>
> When we let the IOS calculate the queue limits for us, we ended up with
> the following queue limits:
>
> router#show policy-map interface g9/0/4.3201
>  GigabitEthernet9/0/4.3201
>
>   Service-policy output: SubRateGE-DS3-ES20
>
>   Counters last updated 00:00:00 ago
>
>     Class-map: class-default (match-any)
>       159917414 packets, 82219665048 bytes
>       30 second offered rate 1589000 bps, drop rate 1000 bps
>       Match: any
>       Queueing
>       queue limit 9750 packets
>       (queue depth/total drops/no-buffer drops) 0/88553/0
>       (pkts output/bytes output) 159829710/82141730024
>
>       shape (average) cir 39000000, bc 156000, be 156000
>       target shape rate 39000000
>
>       Service-policy : IPUPLINK-ES20
>
>       Counters last updated 00:00:00 ago
>
>         queue stats for all priority classes:
>           Queueing
>           queue limit 66 packets
>           (queue depth/total drops/no-buffer drops) 0/0/0
>           (pkts output/bytes output) 80837512/17506671967
>
>         Class-map: VOICE-RTP (match-any)
>           80836920 packets, 17506542957 bytes
>           30 second offered rate 720000 bps, drop rate 0 bps
>           Match: ip precedence 7
>           Match: mpls experimental topmost 7
>           Match: ip precedence 5
>           Match: mpls experimental topmost 5
>           Priority: Strict, burst bytes 487500
>           police:
>               cir 19500000 bps, bc 609375 bytes, be 609375 bytes
>             conformed 80837512 packets, 17506671967 bytes; actions:
>               transmit
>             exceeded 0 packets, 0 bytes; actions:
>               drop
>             violated 0 packets, 0 bytes; actions:
>               drop
>             conformed 573000 bps, exceed 0 bps, violate 0 bps
>
>         Class-map: VOICE-SIGNALLING (match-any)
>           155279 packets, 89526994 bytes
>           30 second offered rate 0 bps, drop rate 0 bps
>           Match: ip precedence 3
>           Match: mpls experimental topmost 3
>           Queueing
>           queue limit 66 packets
>           (queue depth/total drops/no-buffer drops) 0/0/0
>           (pkts output/bytes output) 155279/89526994
>
>           bandwidth 3900 kbps
>
>         Class-map: MGMT (match-any)
>           50389 packets, 3274933 bytes
>           30 second offered rate 0 bps, drop rate 0 bps
>           Match: access-group name MGMT-TELNET
>           Match: ip precedence 2
>           Match: mpls experimental topmost 2
>           Queueing
>           queue limit 66 packets
>           (queue depth/total drops/no-buffer drops) 0/0/0
>           (pkts output/bytes output) 50389/3274933
>
>           bandwidth 1950 kbps
>
>         Class-map: PREMIUM-CUSTOMER (match-any)
>           12515000 packets, 14194344547 bytes
>           30 second offered rate 3000 bps, drop rate 0 bps
>           Match: ip precedence 1
>           Match: mpls experimental topmost 1
>           Queueing
>           queue limit 66 packets
>           (queue depth/total drops/no-buffer drops) 0/0/0
>           (pkts output/bytes output) 12515002/14194345277
>
>           bandwidth 3900 kbps
>
>         Class-map: ROUTING (match-any)
>           871200 packets, 187824161 bytes
>           30 second offered rate 5000 bps, drop rate 0 bps
>           Match: ip precedence 6
>           Match: mpls experimental topmost 6
>           Queueing
>           queue limit 66 packets
>           (queue depth/total drops/no-buffer drops) 0/0/0
>           (pkts output/bytes output) 871216/187827176
>
>           bandwidth 1950 kbps
>
>         Class-map: class-default (match-any)
>           65488626 packets, 50238151456 bytes
>           30 second offered rate 856000 bps, drop rate 1000 bps
>           Match: any
>           Queueing
>           queue limit 66 packets
>           (queue depth/total drops/no-buffer drops) 0/88553/0
>           (pkts output/bytes output) 65400312/50160083677
>
> From the above output you can see that the IOS calculated a queue limit of
> 66 "packets" for each of the defined classes and the class-default.
> Using the above method we can see that this leaves only enough buffer
> space to accommodate 135Kbps for each class.  This was causing drops
> during bursts of traffic that were not exceeding the available bandwidth.
>
> If you apply the above calculation to the reserved bandwidths you get the
> following queue limits:
>
> Reservation of 1950 Kbps:
>
> 1950 x 1000 = 1950000
> 1950000/8 = 243750
> 243750/256 = 952.1484375
>
> 1950 Kbps requires 952.1484375 256 byte buffers
>
> Reservation of 3900 Kbps:
>
> 3900 x 1000 = 3900000
> 3900000/8 = 487500
> 487500/256 = 1904.296875
>
> 3900 Kbps requires 1904.296875 256 byte buffers
>
> We will then add a little extra buffer space to each class to ensure
> adequate buffers and simplify configuration:
>
> 1950 Kbps -> queue limit 1000
> 3900 Kbps -> queue limit 2000
>
> That leaves us with allocating buffer space to the class-default.  If you
> calculate the buffer space needed for the entire shape rate it comes out
> to a queue limit of approximately 20000.  If you subtract the buffer
> space already allocated to the other classes you are left with 14000.
> This will be assigned as the queue limit for the class-default.
>
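That buffer budget can be sketched end to end in a few lines of Python (the names are my own; the rounded per-class limits follow the choices described above):

```python
BUFFER_BYTES = 256  # ES20 carves queues in 256-byte buffers

def buffers_for(kbps):
    # roughly one second's worth of traffic at `kbps`, in 256-byte buffers
    return kbps * 1000 / 8 / BUFFER_BYTES

# Whole 39 Mbps shape rate needs ~19043 buffers, which the document
# rounds to approximately 20000.
total_budget = 20000

# Rounded-up per-class queue limits chosen above
class_limits = {
    "VOICE-SIGNALLING": 2000,  # 3900 Kbps needs ~1904.3 buffers
    "MGMT": 1000,              # 1950 Kbps needs ~952.1 buffers
    "PREMIUM-CUSTOMER": 2000,
    "ROUTING": 1000,
}

# Whatever is left over becomes the class-default queue limit
class_default_limit = total_budget - sum(class_limits.values())
print(class_default_limit)  # 14000
```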
> In order to change the queue limit for a class, use the following procedure:
>
> router#configure terminal
> Enter configuration commands, one per line.  End with CNTL/Z.
> router(config)#policy-map IPUPLINK-ES20
> router(config-pmap)#class VOICE-SIGNALLING
> router(config-pmap-c)#queue-limit 2000
> router(config-pmap-c)#end
>
> Below is the IPUPLINK-ES20 policy map with the adjusted queue limits:
>
>   Policy Map IPUPLINK-ES20
>     Class VOICE-RTP
>       priority
>      police cir 19500000 bc 609375 be 609375
>        conform-action transmit
>        exceed-action drop
>     Class VOICE-SIGNALLING
>       bandwidth 3900 (kbps)
>       queue-limit 2000 packets
>     Class MGMT
>       bandwidth 1950 (kbps)
>       queue-limit 1000 packets
>     Class PREMIUM-CUSTOMER
>       bandwidth 3900 (kbps)
>       queue-limit 2000 packets
>     Class ROUTING
>       bandwidth 1950 (kbps)
>       queue-limit 1000 packets
>     Class class-default
>       queue-limit 14000 packets
>
> Your mileage may vary but this fixed a particularly troublesome issue for
> us and I think everyone should keep this in mind when working with these
> cards.
>
> We had some very bad degraded service events where we thought we had
> plenty of bandwidth during a failure scenario but in reality could only
> get approximately 75% of the loop bandwidth due to the queue depths we had
> configured on our service policies prior to these changes.
>
> -Will
>
> ----- Original Message -----
> From: "Anthony McGarry"
> Sent: Wed, September 30, 2009 4:59
> Subject: Re: [c-nsp] ES20 - Port Queues
>
> Hi,
>
> I have a 7609 with a WS-X6748-GE-TX. When I show the port capabilities I
> can see rx-(1q8t), tx-(1p3q8t)
> If I show queueing on an interface I see all the WRED and WRR queue
> configuration.
> This is great, I can make changes and view changes.
>
> I also have an ES20 and ES20+ line card in the chassis. When I show the
> port capabilities I see rx-(NotDef-t), tx-(NotDef-t)
> When I show queueing on an interface I see nothing for the ES20. For the
> ES20+ I get
>
> Interface GigabitEthernet1/1 queueing strategy:  Weighted Round-Robin
>   Port QoS is enabled
>   Port is untrusted
>   Extend trust state: not trusted [COS = 0]
>   Default COS is 0
>
> So the ES20 tells me nothing and the ES20+ only shows a subset of what I
> see with the WS-X6748-GE-TX.
>
> Is there something I am missing in my config to show what queues I have
> available on the ES line cards '(1p3q8t)'?
> From reading the line card configuration guides I can see ingress
> scheduling is not supported and egress is, but when I configure an egress
> policy-map with my random-detect configuration I still see no queueing
> info for the ports.
>
> Thanks
> Anthony
>
>
>
> _______________________________________________
> cisco-nsp mailing list  cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
>
>
> ----- End of original message -----
>

The only time we saw trouble was when the links were under load. For instance, a 600 Mb subrated GigE link would start dropping a significant amount of traffic for us at around 500 Mb of load. Adjusting the queue depths corrected the problem. I'm not sure anyone would actually see this trouble unless the links in question were under considerable load.

What my reply boiled down to is that a lot of our engineers spent a very large chunk of time chasing this down, and since the subject of ES20 port queues came up I figured I'd save someone else that same time if they have similar troubles, while (hopefully) giving folks a better idea of how the queues work on the card.

-Will


----- Original Message -----
From: anthony.mcgarry at plannet21.ie
Sent: Wed, September 30, 2009, 11:16 AM
Subject: Re: [c-nsp] ES20 - Port Queues

Will,

Thanks for the very complete explanation.

I have a similar policy-map attached to the interfaces on the ES20s.
For this particular 1 Gb link I can see the default queue limit is not
sufficient for the bandwidth of 120000k I have for this class.

Class-map: qos-cos4 (match-any)
7403774 packets, 10083940188 bytes
5 minute offered rate 88244000 bps, drop rate 0000 bps
Match: cos  4
Queueing
queue limit 6635 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 7450680/10147826160
bandwidth 120000 kbps

With this queue limit I could only pass 13588k of traffic before tail drops.
However, I see a 5 minute offered rate of 88244k with no tail drops.
Is this because the link is not under congestion?

From the calculations below I have now set the queue limit manually:

120000 x 1000 = 120000000
120000000/8 = 15000000
15000000/256 = 58593.75

Class-map: qos-cos4 (match-any)
12103420 packets, 16484858040 bytes
5 minute offered rate 96063000 bps, drop rate 0000 bps
Match: cos  4
Queueing
queue limit 60000 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 12103420/16484858040
bandwidth 120000 kbps

I also noticed on my priority queue that the queue limit is always set 
to 66 packets by default.
I am assuming this is to reduce latency for rtp but by the above logic 
that would equal 135.168k before tail drops
and I am seeing 287k with no drops.

queue stats for all priority classes:
Queueing
queue limit 66 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 229339/49963294

Class-map: qos-cos5 (match-any)
229339 packets, 49963294 bytes
5 minute offered rate 287000 bps, drop rate 0000 bps
Match: cos  5
police:
cir 50000000 bps, bc 1562500 bytes
conformed 226069 packets, 49250874 bytes; actions:
transmit
exceeded 0 packets, 0 bytes; actions:
drop
conformed 286000 bps, exceed 0000 bps
Priority: Strict, b/w exceed drops: 0

I believe you when you say the queue limit could cause drops, as the 
logic is sound, but I can't see any impact in the outputs above. Maybe 
I'm missing something.

Thanks
Anthony


Byrd, William wrote:


