[c-nsp] ASR9k QoS Scale/Usage Query

Robert Williams Robert at CustodianDC.com
Tue Mar 14 04:16:04 EDT 2017


Hi Rajendra,

> These sites would be helpful to you on understanding QoS.
> https://supportforums.cisco.com/document/59901/asr9000xr-understanding-qos-default-marking-behavior-and-troubleshooting
> https://null.53bits.co.uk/index.php?page=asr9000-lag
> http://www.alcatron.net/Cisco%20Live%202014%20Melbourne/Cisco%20Live%20Content/Service%20Provider/BRKSPG-2904%20%20ASR-9000%20IOS-XR%20Hardware%20Architecture,%20QOS,%20EVC,%20IOS-XR%20Configuration%20and%20Troubleshooting.pdf
> Are you referring to this command:
> show qoshal resource summary [np <np>]

Thanks for those. I was already familiar with that command, but I seem to be missing some required information to turn its output into what I am looking for.

At a high level, I essentially need to know (or be able to calculate) what percentage* of hardware resources the currently deployed policies are consuming.

*I appreciate that this will be a figure 'per NP' and/or 'per LC', but either way I need it so that we can plan expansion.
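To make that concrete, what I am ultimately after is something along these lines, per NP and per scheduling level (illustrative only; the names here are mine, not from any show command):

   usage_pct(L4) = (L4 entities in use, summed across chunks) / (max L4 entities per NP) * 100

Using the output below as an example, chunk 0 of this NP has 78 L4 entities in play, but without knowing the denominator I cannot turn that into a percentage.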

The specific command you shared gives output along these lines (from a random LC here):

<snip>
SUMMARY per NP:
  =========================
   Policy Instances: Ingress 37 Egress 14  Total: 51
   Entities: (L4 level: Queues)
     Level        Chunk 0           Chunk 1           Chunk 2           Chunk 3
     L4        78(   78/   78)   14(   14/   14)   22(   22/   22)   20(   20/   20)
     L3(8Q)    19(   19/   19)    3(    3/    3)    6(    6/    6)    6(    6/    6)
     L3(16Q)    0(    0/    0)    0(    0/    0)    0(    0/    0)    0(    0/    0)
     L2         7(    7/    7)    2(    2/    2)    4(    4/    4)    4(    4/    4)
     L1        16(   16/   16)    0(    0/    0)    0(    0/    0)    0(    0/    0)
   Groups:
     Level        Chunk 0           Chunk 1           Chunk 2           Chunk 3
     L4        19(   19/   19)    3(    3/    3)    6(    6/    6)    6(    6/    6)
     L3(8Q)     9(    9/    9)    2(    2/    2)    4(    4/    4)    4(    4/    4)
     L3(16Q)    0(    0/    0)    0(    0/    0)    0(    0/    0)    0(    0/    0)
     L2         7(    7/    7)    2(    2/    2)    4(    4/    4)    4(    4/    4)
     L1        16(   16/   16)    0(    0/    0)    0(    0/    0)    0(    0/    0)
   Policers: Internal 658(658) Regular 252(252)  Parent 0(0) Child 0(0) Total 910(910)

   PROFILES:
      WFQ:
       Level         Chunk 0         Chunk 1         Chunk 2         Chunk 3
        L4  254( 254/    78)  254( 254/    14)  254( 254/    22)  254( 254/    20)
        L3  256( 256/    19)  256( 256/     3)  256( 256/     6)  256( 256/     6)
        L2  256( 256/     7)  256( 256/     2)  256( 256/     4)  256( 256/     4)
        L1   64(  64/    12)    0(   0/     0)    0(   0/     0)    0(   0/     0)
<snip>
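For what it's worth, my working assumption (which I would love someone to confirm or correct) is that the format is total(allocated/used): in the PROFILES section the third figure appears to track the entity counts above for L4/L3/L2, e.g. the L4 WFQ row shows 254( 254/ 78) for chunk 0 against the 78 L4 entities in use, which would make 254 out of 256 the per-chunk profile capacity. The L1 row shows 12 against 16 L1 entities though, so I may be misreading it, and in the Entities and Groups tables all three numbers are identical, so I cannot tell whether they represent usage against a limit or simply the same counter printed three times.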

However, I have no reference point for what the 'maximum' of any of those values would be; more specifically, I simply don't know what the maximums are.

Take this output for example:

#show qos capability location 0/0/cpu0
Tue Mar 14 08:02:14.088 GMT
Capability Information:
======================
<snip>
Max Policy maps supported on this LC: 16384
Max classes per child-policy: 1024
Max classes per policy: 1024
<snip>

It shows the limits of this LC, which is useful, but I cannot find where to obtain the 'current usage' expressed in the same terms as the maximums listed here.

So, something like 'number of policy maps currently active on the LC' or 'number of classes per policy', and ideally without manually walking through the configuration of every port and card, of course.
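The closest workaround I have come up with so far is counting attachment points, along these lines (a rough sketch only; I may be mis-remembering the exact pipe syntax, and I am not at all sure an attachment count maps 1:1 onto what the hardware actually allocates):

#show policy-map targets
#show running-config | utility egrep -c "service-policy"

Even if that is broadly right, it still tells me nothing about where I stand relative to the 16384 policy-map or 1024 classes-per-policy limits above.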

I'm reasonably familiar with how QoS actually operates on the chassis (and in general); in reality, it is the size and complexity of our QoS structure that has created the need to quantify our hardware usage levels accurately.

If you have any additional input I’d very much appreciate it!

Best wishes & thanks,


Robert Williams
Custodian Data Centre
Email: Robert at CustodianDC.com
http://www.CustodianDC.com





