[c-nsp] output rate-limiting not working in 7609
Tim Stevenson
tstevens at cisco.com
Tue Mar 4 02:19:08 EST 2008
At 11:12 PM 3/3/2008 -0600, Frank Bulk - iNAME observed:
>Let me just add that these kinds of caveats are most annoying and confusing.
Let me just add that marketing doesn't ask for these. :P
>AFAIK, whether rate-limiting is PFC- or DFC-enforced doesn't become
>clear from looking at any of the "show" output.
The "show policy-map interface" output does deliberately show policing
stats per forwarding engine (FE), so at least there is some indication
that there are multiple policing points.
>There's probably a hardware limitation,
Exactly.
>but it would be most desirable if policing and the like worked as simply
>as one would explain it to a layperson (i.e., 10 Mbps both directions on
>the physical interface or VLAN).
Understood & could not agree more....
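For illustration, the config a layperson would expect looks like this (a
minimal sketch; the policy name and burst values are made up). The catch,
per the discussion below, is that the output policer is enforced
independently at each ingress forwarding engine, so with multiple DFCs
the delivered egress aggregate can exceed 10M:

-----------------------------------------------
policy-map LIMIT-10M
 class class-default
  police cir 10000000 bc 312500 be 312500 conform-action transmit exceed-action drop
!
interface GigabitEthernet1/1
 service-policy input LIMIT-10M
 service-policy output LIMIT-10M
-----------------------------------------------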
Tim
>Frank
>
>-----Original Message-----
>From: cisco-nsp-bounces at puck.nether.net
>[mailto:cisco-nsp-bounces at puck.nether.net] On Behalf Of Tim Stevenson
>Sent: Monday, March 03, 2008 10:30 PM
>To: Jimmy; petelists at templin.org; mtinka at globaltransit.net;
>christian at qunec.net; gniewomir.krol at aci.com.pl; cisco-nsp at puck.nether.net
>Subject: Re: [c-nsp] output rate-limiting not working in 7609
>
>At 08:15 PM 3/3/2008 -0800, Tim Stevenson observed:
> >Jimmy,
> >In 6500/7600, policing and other forwarding decisions are always
> >performed on the INGRESS card - including egress policy enforcement.
>
>Above I meant to say "the INGRESS FORWARDING ENGINE" - which may be
>just one, i.e., the PFC on the sup (regardless of which card the traffic
>came in on), or one of many, i.e., one of several DFCs that sit on
>some/all cards. The rest of the below applies in the multi-DFC case;
>obviously with just one FE, there is only one point of policy action.
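>
>For reference, the potential policing points can be counted with "show
>module" (a sketch; exact output varies by hardware and code):
>
>  router#show module
>  ! Line cards that carry a DFC list it as a sub-module in this output;
>  ! each DFC, plus the PFC on the sup, is an independent forwarding
>  ! engine and therefore an independent enforcement point for an egress
>  ! policer.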
>
>Tim
>
>
> >Therefore, in a distributed (i.e., w/DFCs) system, you could get up to
> >n times the configured rate, where n is the number of forwarding
> >engines that traffic destined for the egress interface can come in on.
> >
> >Of course, the problem with your workaround is that no one module
> >will ever allow more than 155M even if no traffic is coming in on
> >the other module.
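> >
> >To make the sizing concrete (using this thread's own numbers, as shown
> >in the config further below): with n = 2 forwarding engines able to
> >send toward the egress port, police each FE to 310M / 2 = 155M:
> >
> >  policy-map CUSTOMER-155M
> >   class class-default
> >    police cir 155000000 bc 15500000 be 15500000 conform-action transmit exceed-action drop
> >
> >Worst case, with both FEs loaded: 2 x 155M = 310M, the intended cap -
> >and indeed, in the "Earl in slot" output below, slot 1 forwards ~157M
> >bps and slot 2 ~160M bps.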
> >
> >Tim
> >
> >At 11:51 AM 3/4/2008 +0800, Jimmy observed:
> >>Hi guys,
> >>
> >>Thanks for the feedback. Actually, I have tried using MQC on the egress side.
> >>It is Layer 3 port.
> >>The port is in slot 1. For some reason, when I do "show policy-map
> >>interface", it shows output from 2 slots instead of 1. I am using a
> >>dirty trick to work around the issue temporarily: I police to 155M
> >>instead of 310M. With this setting, the traffic can only reach 310M.
> >>
> >>Any idea why we need to configure it like that? Has anyone else
> >>encountered the same issue?
> >>
> >>Cheers,
> >>Jimmy
> >>
> >>-------------------------------
> >>interface GigabitEthernet1/9
> >> ip route-cache flow
> >> load-interval 30
> >> speed nonegotiate
> >> mls netflow sampling
> >> service-policy input CUSTOMER-310m
> >> service-policy output CUSTOMER-155M
> >>
> >>policy-map CUSTOMER-155M
> >> class class-default
> >>  police cir 155000000 bc 15500000 be 15500000 conform-action transmit exceed-action drop   ----> POLICE to 155M
> >>
> >>gw1.hkg4#sh policy int g1/9
> >> GigabitEthernet1/9
> >>
> >> Service-policy output: CUSTOMER-155M
> >>
> >> class-map: class-default (match-any)
> >> Match: any
> >> police :
> >> 155000000 bps 15500000 limit 15500000 extended limit
> >> Earl in slot 1 :
> >> 16889514278576 bytes
> >> 30 second offered rate 196550600 bps
> >> aggregate-forwarded 13191791357655 bytes action: transmit
> >> exceeded 3697722920921 bytes action: drop
> >> aggregate-forward 157101144 bps exceed 40026752 bps
> >> Earl in slot 2 : ----------------------------> ANOTHER POLICING ???
> >> 14639062953589 bytes
> >> 30 second offered rate 174721136 bps
> >> aggregate-forwarded 13135487245073 bytes action: transmit
> >> exceeded 1503575708516 bytes action: drop
> >> aggregate-forward 159830912 bps exceed 18063232 bps
> >> Earl in slot 5 :
> >> 30560015 bytes
> >> 30 second offered rate 176 bps
> >> aggregate-forwarded 30560015 bytes action: transmit
> >> exceeded 0 bytes action: drop
> >> aggregate-forward 240 bps exceed 0 bps
> >>
> >>gw1.hkg4#sh mls qos ip g 1/9
> >>   [In] Policy map is CUSTOMER-310m   [Out] Policy map is CUSTOMER-155M
> >> QoS Summary [IPv4]:      (* - shared aggregates, Mod - switch module)
> >>
> >>      Int  Mod  Dir  Class-map   DSCP  Agg  Trust  Fl   AgForward-By   AgPoliced-By
> >>                                        Id          Id
> >>------------------------------------------------------------------------------------
> >>    Gi1/9    1   In  class-defa     0    1   dscp   0   486690994913    54268431391
> >>    Gi1/9    1  Out  class-defa     0    2     --   0   548444567177   399451084094
> >>    Gi1/9    2  Out  class-defa     0    1     --   0   492136489401   404181645273  ----> SHOULDN'T HAVE ANY OUTPUT
> >>    Gi1/9    5  Out  class-defa     0    1     --   0       30561099              0
> >>-----------------------------------------------
> >>
> >>-----Original Message-----
> >>From: Pete Templin [mailto:petelists at templin.org]
> >>Sent: Tuesday, March 04, 2008 12:26 AM
> >>To: Jimmy
> >>Cc: cisco-nsp at puck.nether.net
> >>Subject: Re: [c-nsp] output rate-limiting not working in 7609
> >>
> >>Jimmy wrote:
> >>
> >> > I have encountered a rate-limiting issue on the Cisco 7609 platform.
> >> >
> >> > Example is:
> >> >
> >> > interface GigabitEthernet1/9
> >> >  rate-limit input 310000000 4843750 9687500 conform-action transmit exceed-action drop
> >> >  rate-limit output 310000000 4843750 9687500 conform-action transmit exceed-action drop   -------> NOT WORKING
> >> >
> >> > The output rate-limiting is not working. The traffic can still go
> >> > above 310M and can hit 1G.
> >> > I have opened an SR with Cisco. They say there is no workaround for
> >> > this other than using an ES20 card with a policy-map on the interface.
> >>
> >>Your example is too short - is it a layer 3 port? If so, a policer
> >>inside a policy-map should work. If not, it won't work. From the Sup720
> >>datasheet: rate limiting is possible on "Ingress port or VLAN and
> >>egress VLAN or Layer-3 port".
> >>
> >>pt
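> >>
> >>For completeness, a rough MQC equivalent of the original CAR line (a
> >>sketch - the policy name is made up and the burst values are carried
> >>over from the rate-limit command; supported values depend on the PFC
> >>and line card, and the per-forwarding-engine caveat discussed earlier
> >>in the thread still applies to the output direction):
> >>
> >>  policy-map POLICE-310M
> >>   class class-default
> >>    police cir 310000000 bc 4843750 be 9687500 conform-action transmit exceed-action drop
> >>  !
> >>  interface GigabitEthernet1/9
> >>   service-policy output POLICE-310M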
> >>
Tim Stevenson, tstevens at cisco.com
Routing & Switching CCIE #5561
Technical Marketing Engineer, Data Center BU
Cisco Systems, http://www.cisco.com
IP Phone: 408-526-6759
********************************************************
The contents of this message may be *Cisco Confidential*
and are intended for the specified recipients only.