[c-nsp] Packet drops on ME3600X policy-map

Lobo lobotiger at gmail.com
Wed Oct 22 09:14:18 EDT 2014


Thanks!!!  This was the main culprit.  FTR, I added the queue-limit
percent 100 command to each class in the child policy-map and it seems
to have resolved the problem.
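
For anyone who finds this thread later: the default queue-limit on this
box is only 49152 bytes per queue, which works out to roughly 0.4 ms of
buffering (a few dozen full-size frames) at gigabit speed, so presumably
microbursts were overflowing the class-default queue even though the
30-second average was well under the 400M shape.  As a rough sketch
(only class-default shown; the same queue-limit line went under every
other class in the child policy), the change looks like this:

policy-map BACKBONE_OUT_QOS
 class class-default
  queue-limit percent 100
  set mpls experimental topmost 0
  set cos 0
 ! the same "queue-limit percent 100" line was added under each of the
 ! other classes (RT_QOS_GROUP, CT_QOS_GROUP, FC2_QOS_GROUP,
 ! FC_QOS_GROUP, BC_QOS_GROUP, SC_QOS_GROUP) as well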

Jose

On 10/21/2014 1:54 PM, Lukas Tribus wrote:
> See:
> http://www.gossamer-threads.com/lists/cisco/nsp/169280
>
>
> > Date: Tue, 21 Oct 2014 13:23:04 -0400
> > From: lobotiger at gmail.com
> > To: cisco-nsp at puck.nether.net
> > Subject: [c-nsp] Packet drops on ME3600X policy-map
> >
> > Hey everyone, I'm trying to figure this one out and I'm banging my head
> > on it. I've got a relatively simple parent/child policy-map configured
> > on a port. The shaper is set to 400M, but with traffic at less than
> > 300M I'm seeing a consistent drop rate, specifically in the
> > class-default section of the child policy.
> >
> > This is the configuration:
> >
> > policy-map BACKBONE_OUT_QOS
> >  class RT_QOS_GROUP
> >   priority
> >   set mpls experimental topmost 5
> >   set cos 5
> >  class CT_QOS_GROUP
> >   bandwidth remaining percent 5
> >   set mpls experimental topmost 6
> >   set cos 6
> >  class FC2_QOS_GROUP
> >   bandwidth remaining percent 10
> >   set mpls experimental topmost 4
> >   set cos 4
> >  class FC_QOS_GROUP
> >   bandwidth remaining percent 50
> >   set mpls experimental topmost 3
> >   set cos 3
> >  class BC_QOS_GROUP
> >   bandwidth remaining percent 20
> >   set mpls experimental topmost 2
> >   set cos 2
> >  class SC_QOS_GROUP
> >   set mpls experimental topmost 1
> >   set cos 1
> >  class class-default
> >   set mpls experimental topmost 0
> >   set cos 0
> > !
> > policy-map BACKBONE_OUT_400M
> >  class class-default
> >   shape average 400000000
> >   service-policy BACKBONE_OUT_QOS
> > !
> >
> > The class-maps are matching their respective qos-group markings.
> >
> > And it's applied to the physical interface:
> >
> > interface GigabitEthernet0/23
> >  switchport trunk allowed vlan blah
> >  switchport mode trunk
> >  mtu 9800
> >  load-interval 30
> >  service-policy input BACKBONE_IN
> >  service-policy output BACKBONE_OUT_400M
> > end
> >
> > The output of a show command is as follows:
> >
> > #sh policy-map int g0/23 output
> >  GigabitEthernet0/23
> >
> >   Service-policy output: BACKBONE_OUT_400M
> >
> >     Class-map: class-default (match-any)
> >       13340674 packets, 12526672443 bytes
> >       30 second offered rate 293739000 bps, drop rate 432000 bps
> >       Match: any
> >       Traffic Shaping
> >         Average Rate Traffic Shaping
> >         Shape 400000 (kbps)
> >       Output Queue:
> >         Default Queue-limit 49152 bytes
> >         Tail Packets Drop: 16218
> >         Tail Bytes Drop: 15933830
> >
> >       Service-policy : BACKBONE_OUT_QOS
> >
> >         Class-map: RT_QOS_GROUP (match-any)
> >           0 packets, 0 bytes
> >           30 second offered rate 0000 bps, drop rate 0000 bps
> >           Match: qos-group 5
> >           Strict Priority
> >           set mpls exp topmost 5
> >           set cos 5
> >           Queue-limit current-queue-depth 0 bytes
> >           Output Queue:
> >             Default Queue-limit 49152 bytes
> >             Tail Packets Drop: 0
> >             Tail Bytes Drop: 0
> >
> >         Class-map: CT_QOS_GROUP (match-any)
> >           358 packets, 98876 bytes
> >           30 second offered rate 0000 bps, drop rate 0000 bps
> >           Match: qos-group 6
> >           Bandwidth Remaining 5 (percent)
> >           set mpls exp topmost 6
> >           set cos 6
> >           Queue-limit current-queue-depth 0 bytes
> >           Output Queue:
> >             Default Queue-limit 49152 bytes
> >             Tail Packets Drop: 0
> >             Tail Bytes Drop: 0
> >
> >         Class-map: FC2_QOS_GROUP (match-any)
> >           0 packets, 0 bytes
> >           30 second offered rate 0000 bps, drop rate 0000 bps
> >           Match: qos-group 4
> >           Bandwidth Remaining 10 (percent)
> >           set mpls exp topmost 4
> >           set cos 4
> >           Queue-limit current-queue-depth 0 bytes
> >           Output Queue:
> >             Default Queue-limit 49152 bytes
> >             Tail Packets Drop: 0
> >             Tail Bytes Drop: 0
> >
> >         Class-map: FC_QOS_GROUP (match-any)
> >           0 packets, 0 bytes
> >           30 second offered rate 0000 bps, drop rate 0000 bps
> >           Match: qos-group 3
> >           Bandwidth Remaining 50 (percent)
> >           set mpls exp topmost 3
> >           set cos 3
> >           Queue-limit current-queue-depth 0 bytes
> >           Output Queue:
> >             Default Queue-limit 49152 bytes
> >             Tail Packets Drop: 0
> >             Tail Bytes Drop: 0
> >
> >         Class-map: BC_QOS_GROUP (match-any)
> >           0 packets, 0 bytes
> >           30 second offered rate 0000 bps, drop rate 0000 bps
> >           Match: qos-group 2
> >           Bandwidth Remaining 20 (percent)
> >           set mpls exp topmost 2
> >           set cos 2
> >           Queue-limit current-queue-depth 0 bytes
> >           Output Queue:
> >             Default Queue-limit 49152 bytes
> >             Tail Packets Drop: 0
> >             Tail Bytes Drop: 0
> >
> >         Class-map: SC_QOS_GROUP (match-any)
> >           0 packets, 0 bytes
> >           30 second offered rate 0000 bps, drop rate 0000 bps
> >           Match: qos-group 1
> >           set mpls exp topmost 1
> >           set cos 1
> >           Queue-limit current-queue-depth 0 bytes
> >           Output Queue:
> >             Default Queue-limit 49152 bytes
> >             Tail Packets Drop: 0
> >             Tail Bytes Drop: 0
> >
> >         Class-map: class-default (match-any)
> >           13340316 packets, 12526573567 bytes
> >           30 second offered rate 293737000 bps, drop rate 432000 bps
> >           Match: any
> >           set mpls exp topmost 0
> >           set cos 0
> >           Queue-limit current-queue-depth 0 bytes
> >           Output Queue:
> >             Default Queue-limit 49152 bytes
> >             Tail Packets Drop: 16218
> >             Tail Bytes Drop: 15933830
> >
> > The number of tail drops is pretty close to the output drops shown on
> > the interface itself too:
> >
> > rocpe06#sh int g0/23
> > GigabitEthernet0/23 is up, line protocol is up (connected)
> > <snip>
> > MTU 9800 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
> > reliability 255/255, txload 74/255, rxload 26/255
> > Encapsulation ARPA, loopback not set
> > Keepalive not set
> > Full-duplex, 1000Mb/s, media type is SX
> > input flow-control is off, output flow-control is unsupported
> > ARP type: ARPA, ARP Timeout 04:00:00
> > Last input 00:00:00, output 00:00:00, output hang never
> > Last clearing of "show interface" counters 00:05:40
> > Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 15365
> > Queueing strategy: fifo
> > Output queue: 0/40 (size/max)
> > 30 second input rate 105779000 bits/sec, 35781 packets/sec
> > 30 second output rate 293211000 bits/sec, 38999 packets/sec
> > 11830165 packets input, 4313484039 bytes, 0 no buffer
> > Received 15400 broadcasts (6262 multicasts)
> > 0 runts, 0 giants, 0 throttles
> > 0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
> > 0 watchdog, 6262 multicast, 0 pause input
> > 0 input packets with dribble condition detected
> > 13145027 packets output, 12329007673 bytes, 0 underruns
> > 0 output errors, 0 collisions, 0 interface resets
> > 12 unknown protocol drops
> > 0 babbles, 0 late collision, 0 deferred
> > 0 lost carrier, 0 no carrier, 0 pause output
> > 0 output buffer failures, 0 output buffers swapped out
> > !
> >
> > Any thoughts on whether it's the policy-map that needs tweaking somehow
> > or if it's some other issue? All of the traffic being dropped is general
> > internet traffic, but some folks have started to notice packet loss.
> >
> > Any input would be appreciated.
> >
> > Jose
> >
> > _______________________________________________
> > cisco-nsp mailing list cisco-nsp at puck.nether.net
> > https://puck.nether.net/mailman/listinfo/cisco-nsp
> > archive at http://puck.nether.net/pipermail/cisco-nsp/


