[rbak-nsp] FW: Odd QoS Rate Maximum Issue

support at ecn.net.au
Sun Nov 18 22:43:47 EST 2012


Hi All


Just a follow-up on the previous message: I went ahead and set “rate maximum 100000” in the QoS policy, and presto, tail drops appeared on the circuit (which is good).

So I nudged the rate maximum up to 118000; the result in the show circuit counters is:

[lnsACME]bnecore3#show cir count 2/1 vlan-id 14 detail
Circuit: 2/1 vlan-id 14, Internal id: 1/2/175, Encap: ether-dot1q
                Packets                                  Bytes
-------------------------------------------------------------------------------
Receive         : 306109884            Receive         : 101571724646
Receive/Second  : 14630.84             Receive/Second  : 4014090.40
Transmit        : 2757767997           Transmit        : 2534858355198
Xmits/Queue                            Xmits/Queue
  0             : 3475987                0             : 3023029838
  1             : 6861924                1             : 6816020571
  2             : 0                      2             : 0
  3             : 0                      3             : 0
  4             : 0                      4             : 0
  5             : 0                      5             : 0
  6             : 0                      6             : 0
  7             : 0                      7             : 0
Xmit Q Deleted  : 2747430086           Xmit Q Deleted  : 2525019304789
Transmit/Second : 18479.68             Transmit/Second : 17798266.23

Am I nuts?

Rate maximum: 118000
Transmit/sec: 17798266.23

Does that look right? My maths has 118000 kbps as 15104000 bytes/sec (not 17798266). The output rate to the remote switch is about right at around 150 Mbps; I just don't understand why we are seeing this result.
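
For reference, here is the arithmetic I'm using (a quick Python sketch; it assumes 1 kbps = 1024 bps, which is how I get the 15104000 figure — the helper names are just for illustration):

# Convert a configured rate in kbps to the bytes/second that
# "show circuit counters" reports, assuming 1 kbps = 1024 bps.
def kbps_to_bytes_per_sec(kbps):
    return kbps * 1024 / 8

# And the reverse, to see what a counter value implies in kbps.
def bytes_per_sec_to_kbps(bytes_per_sec):
    return bytes_per_sec * 8 / 1024

print(kbps_to_bytes_per_sec(118000))       # 15104000.0 -- the ceiling I expect
print(bytes_per_sec_to_kbps(17798266.23))  # ~139049 kbps -- what the counter implies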


Any ideas?

Hi all

I am hoping someone may be able to assist with an odd QoS “rate maximum” issue we’ve hit.

On one particular VLAN the QoS policy appears to be completely ignored; we can apply the same policy to any other circuit and it works fine. Any thoughts? Config is as follows:

SmartEdge 100 - SEOS-6.1.5.7-Release


qos congestion-avoidance-map lnsACME2 pwfq
queue 0 depth 1024
queue 1 depth 3192
qos policy lnsACME2 pwfq
! max queue depth 4064 and max queue number 8
rate maximum 140000
num-queues 2
congestion-map lnsACME2
queue 0 priority 0 weight 100
queue 1 priority 1 weight 100


dot1q pvc 14
bind interface toACMELNS lnsACME
qos policy queuing lnsACME2



In theory the circuit with VLAN 14 should now be limited to 140000 kbps, and I would expect the queues defined in the congestion map to fill (and maybe even take some drops on the queues in the worst case).
However, doing a “show cir count 2/1 vlan-id 14 det” shows that the transmit/second, at 19721788 bytes/sec, is well above the 140000 kbps ceiling (17920000 bytes/sec).
As a result we’re seeing odd packet loss on that link (expected, since the far end is policing and discarding traffic over 150 Mbit).
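
For reference, the conversion (a quick Python sketch, same assumption of 1 kbps = 1024 bps as above):

# 140000 kbps expressed as bytes/second -- the ceiling the counters should show
print(140000 * 1024 / 8)       # 17920000.0
# the reported transmit rate converted back to kbps
print(19721788.78 * 8 / 1024)  # ~154076 kbps, i.e. ~154 Mbit -- over the far end's 150 Mbit police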

Circuit: 2/1 vlan-id 14
----------------------------------------------------------
Policy Name : lnsACME2
Policy Type : pwfq
Hierarchical Type : None
Rate-max (kbps) : 140000 Rate Source : local
..
Circuit: 2/1 vlan-id 14, Internal id: 1/2/175, Encap: ether-dot1q
                Packets                                  Bytes
-------------------------------------------------------------------------------
Receive         : 265745043            Receive         : 91125480966
Receive/Second  : 15936.73             Receive/Second  : 3865300.36
Transmit        : 2707057339           Transmit        : 2485601132249
Xmits/Queue                            Xmits/Queue
  0             : 4775055                0             : 4105618797
  1             : 14706271               1             : 15297898669
..
..
Xmit Q Deleted  : 2687576013           Xmit Q Deleted  : 2466197614783
Transmit/Second : 19832.72             Transmit/Second : 19721788.78
...
...
Tail Drops/Queue                       Tail Drops/Queue
  0             : 0                      0             : 0
  1             : 0                      1             : 0
  2             : 0                      2             : 0
  3             : 0                      3             : 0
  4             : 0                      4             : 0
  5             : 0                      5             : 0
  6             : 0                      6             : 0
  7             : 0                      7             : 0


Any ideas?

Regards

Matt