[c-nsp] bandwidth statement on interface to match shaped value?

Benjamin Lovell belovell at cisco.com
Thu Jul 1 12:45:51 EDT 2010


Adding puck back so others can reference this.

OK, well, my answer was a little truncated, as the full answer is a
bit convoluted.

The bandwidth statement on an interface has three effects:

1) It changes the EIGRP metric.
2) EIGRP will only use a certain percentage of the interface
bandwidth for routing updates. That percentage is based on the
bandwidth statement, so it can affect routing updates if EIGRP is
trying to use close to the link bandwidth for them. This is really
only a concern if the bandwidth is set very low (tunnel interfaces
default to something like 9Kb) and EIGRP unnecessarily throttles
itself, or if the link is VERY slow while the bandwidth metric is
high, in which case EIGRP will not throttle itself and updates will
be dropped. (There's a knob for this; see the sketch after this list.)
3) "bandwidth percent" in policy-maps. The percent will be a  
percentage of interface bandwidth statement IF "bandwidth percent" is  
in top level policy-map. If bandwidth percent is in child policy and  
shaper/policer is in the parent then "bandwidth percent" will be a  
percentage of the shape/police rate.
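
As an aside on point 2, the knob for that update throttling is per
interface. A rough sketch (the AS number and percentage here are
made-up values, not something from this thread):

interface Tunnel0
 bandwidth 1000
 ! let EIGRP AS 100 use up to 75% of the configured bandwidth
 ! for its routing updates (the default is 50%)
 ip bandwidth-percent eigrp 100 75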

The reason I say the bandwidth statement on an interface does not
affect QoS is this:

Changing the bandwidth statement on an FE interface to, say, 30Mb
does not cause queueing to take place at 30Mb. The link will still
try to send out 100Mb before queueing starts and QoS has any effect.
If you want to create congestion so that queueing happens, you need a
two-level QoS policy. The parent causes congestion at, say, 30Mb
(using a shaper/policer), and then the child policy queues things up
based on your classes once the parent has created that congestion.

Any time the link speed (not the bandwidth metric) does not match the
actual rate at which traffic can be sent, two-level QoS must be used.
This is most common on an ISP Ethernet handoff where the provided
bandwidth does not match the link speed, or on tunnel interfaces,
which have no link speed and therefore never experience congestion
unless it is created with a shaper/policer.
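
For the tunnel case that would look something like this (the names
and the 10Mb rate are just assumptions for illustration, not from my
config):

policy-map TUNNEL-CHILD
  class class-default
     fair-queue                    <== or per-class queueing as needed

policy-map TUNNEL-PARENT
  class class-default
     shape average 10000000        <== assumed usable path rate of 10Mb
     service-policy TUNNEL-CHILD

interface Tunnel0
 service-policy output TUNNEL-PARENT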

It's convoluted to try to explain, but fairly logical once understood.

Here is an example from my home router, where I have an FE handoff
but only about 1.65Mb of usable upload:


policy-map parent-policy          <== parent policy creates congestion at 1.65Mb
  class class-default
     shape peak 1650000
     service-policy child-policy  <== then the child policy does the desired
                                      queueing once 1.65Mb is reached

policy-map child-policy
  class ECT-VOICE
     priority 364
  class webby
     bandwidth percent 10
  class ETC-NOTVOICE
     bandwidth percent 30
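
The attachment to the interface isn't shown above, but it would be
along these lines (interface name guessed from the show command
below):

interface FastEthernet4.10
 service-policy output parent-policy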

gateway#show policy-map interface f4.10

....

         Class-map: webby (match-all)
           14944430 packets, 1625152735 bytes
           30 second offered rate 1000 bps, drop rate 0 bps
           Match: access-group 102
           Queueing
           queue limit 64 packets
           (queue depth/total drops/no-buffer drops) 0/0/0
           (pkts output/bytes output) 14944215/1625121048
           bandwidth 10% (165 kbps)  <== 10% of the shape rate, not of the
                                         interface bandwidth metric, which
                                         is still 100Mb

....
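
To spell out the math: 10% of the 1,650,000 b/s shape rate is
165,000 b/s, which matches the 165 kbps shown above. Had the percent
been taken from the interface bandwidth, it would read 10000 kbps
instead.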

-Ben


On Jul 1, 2010, at 12:06 PM, tdensmore wrote:

> Really?  I was under the impression that QoS was also dependent on
> bandwidth statements, and when the OP mentions policy-maps and
> classes, I'm thinking QoS.  I ask this not to question you directly,
> but because I'm currently studying for the 642-642 QoS test and
> would like to know if my understanding is way off base.
>
> What I'm talking about:
>
> R1#sho policy-map interface fa2/0 output class rtp
>  FastEthernet2/0
>
>   Service-policy output: backbone_llq
>
>     Class-map: rtp (match-any)
>       2743948 packets, 368861980 bytes
>       5 minute offered rate 0 bps, drop rate 0 bps
>       Match: ip dscp ef (46)
>         2743948 packets, 368861980 bytes
>         5 minute rate 0 bps
>       Queueing
>         Strict Priority
>         Output Queue: Conversation 264
>         Bandwidth 30 (%)
>         Bandwidth 30000 (kbps) Burst 750000 (Bytes)
>         (pkts matched/bytes matched) 0/0
>         (total drops/bytes drops) 0/0
> R1#conf t
> Enter configuration commands, one per line.  End with CNTL/Z.
> R1(config)#int fa2/0
> R1(config-if)#bandwidth 44210
> R1(config-if)#^Z
> R1#sho policy-map interface fa2/0 output class rtp
>  FastEthernet2/0
>
>   Service-policy output: backbone_llq
>
>     Class-map: rtp (match-any)
>       2743948 packets, 368861980 bytes
>       5 minute offered rate 0 bps, drop rate 0 bps
>       Match: ip dscp ef (46)
>         2743948 packets, 368861980 bytes
>         5 minute rate 0 bps
>       Queueing
>         Strict Priority
>         Output Queue: Conversation 264
>         Bandwidth 30 (%)
>         Bandwidth 13263 (kbps) Burst 331575 (Bytes)
>         (pkts matched/bytes matched) 0/0
>         (total drops/bytes drops) 0/0
>
> Thanks,
>
> Tim
>
> On 7/1/2010 9:30 AM, Benjamin Lovell wrote:
>>
>> The bandwidth statement just alters the EIGRP bandwidth metric. So
>> if you are using EIGRP and want it to reflect the true bandwidth of
>> the link, then yes. Otherwise it does not matter.
>>
>> -Ben
>>
>> On Jul 1, 2010, at 10:43 AM, Roger Wiklund wrote:
>>
>>> Hi
>>>
>>> When using a physical interface of 100meg with an outbound
>>> policy-map that shapes all traffic to 30meg, should the bandwidth
>>> of the physical interface reflect the shaped value?
>>>
>>> The policy-map is also using remaining bandwidth percentage x for
>>> different classes.
>>>
>>> I would assume you want the percentage level to calculate based on
>>> the 30meg, rather than on the 100meg, right?
>>>
>>> Thanks!
>>>
>>> Regards
>>> Roger


