[c-nsp] Operational impact of switching from ingress to egress replication mode
Benjamin Lovell
belovell at cisco.com
Tue Sep 21 17:33:25 EDT 2010
John,
If you are a first-hop mcast router then connected could matter to you, but as a transit box the two you would care about most are fib-miss (i.e. no hardware entry found, so punt) and non-RPF (the packet came in on the wrong interface). Setting these two to a low number (hundreds of pps) should give the CPU some protection for the short period needed to reprogram the hardware when you change the replication mode.
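For what it's worth, that would look something like this (the rates are only illustrative; check the exact syntax against your release):

mls rate-limit multicast ipv4 fib-miss 100 10
mls rate-limit multicast ipv4 non-rpf 100 10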
-Ben
On Sep 21, 2010, at 5:19 PM, John Neiberger wrote:
> I see the "mls rate-limit multicast ipv4 connected" command, for
> directly connected sources, but is there a command that would apply to
> traffic going through the box?
>
> We have very little unicast traffic but a whole bunch of multicast
> traffic, some of which is directly connected but much of it is simply
> passing through.
>
> Thanks again for all your help, I appreciate your time.
>
> On Tue, Sep 21, 2010 at 3:12 PM, Phil Mayers <p.mayers at imperial.ac.uk> wrote:
>> On 09/21/2010 09:58 PM, John Neiberger wrote:
>>>
>>> Sorry. I also meant to say it's a Sup 720-3BXL. Based on what I can
>>> see on CCO, that thing can forward 400 Mpps of ipv4 traffic. Does that
>>> mean that I can set a rate limit of, say, 300 Mpps and somewhat guard
>>> the CPU from meltdown for a few moments?
>>
>> I wish! 300Mpps hitting the CPU of a sup720 will kill it stone dead. It has
>> (two) 600MHz CPUs, and they will not survive such a load.
>>
>> There's lots of info on this in the archives, but in brief: the sup720/pfc3
>> architecture forwards most packets in hardware. Some packets are however
>> punted to CPU; these include:
>>
>> 1. Packets which need ARP resolution ("glean")
>>
>> 2. Multicast packets, which are trickled to the CPU so the CPU can see them
>> and build (and refresh) hardware forwarding state
>>
>> 3. ACL and uRPF denies, which are trickled to the CPU so it can maintain
>> counters
>>
>> 4. Various other traffic, like TTL failures, which need ICMP responses generated
>>
>> 5. Obviously, packets addressed to the CPU (routing traffic, layer 2 PDUs)
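>>
>> As an aside, if you want to watch what is actually reaching the CPU while
>> you work on this, the usual starting points are something like the
>> following (exact commands vary by release):
>>
>> show processes cpu sorted
>> remote command switch show ibc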
>>
>> Because the CPUs on these boxes are very, very puny, you want to limit what
>> hits the CPU. There are two methods available:
>>
>> 1. The "mls rate-limit" commands; these will place a simple numeric rate
>> cap on certain types of traffic, and is done in hardware. There's no
>> prioritisation, but for certain types of traffic (e.g. TTL failures) you can
>> and should IMHO set low-ish limits. You SHOULD NOT use the "general" CEF
>> limiter; because you should use...
>>
>> 2. CoPP - basically QoS on traffic punted to CPU. This is superior because
>> you can write ACLs defining what is most important to you, with very
>> granular control over policy. It suffers from a couple of problems -
>> broadcast/multicast and non-IP traffic are done in software, and it can't
>> distinguish between glean and receive traffic, making a default-deny policy
>> tricky.
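>>
>> For illustration, the bones of a CoPP policy look something like this
>> (the class names and rates here are invented for the example; a real
>> policy needs classes for all of your routing and management traffic, and
>> note that class-default is policed rather than dropped because of the
>> glean problem above):
>>
>> ip access-list extended CPP-ROUTING
>>  permit tcp any any eq bgp
>>  permit tcp any eq bgp any
>>  permit ospf any any
>> class-map match-all CPP-ROUTING
>>  match access-group name CPP-ROUTING
>> policy-map CPP
>>  class CPP-ROUTING
>>   police 1000000 31250 conform-action transmit exceed-action transmit
>>  class class-default
>>   police 500000 15625 conform-action transmit exceed-action drop
>> control-plane
>>  service-policy input CPP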
>>
>>
>> In short, common advice seems to be:
>>
>> 1. Set low limits on the mls limiters for TTL & MTU failure, and optionally
>> ACL-drop ICMP unreach:
>>
>> mls rate-limit unicast ip icmp unreachable acl-drop 0
>> mls rate-limit all ttl-failure 100 10
>> mls rate-limit all mtu-failure 100 10
>>
>> 2. Use CoPP for everything else; DO NOT use the glean or cef receive
>> limiter
>>
>> Search the archives and the Cisco docs for more info; it's not something I
>> can summarise in 5 minutes I'm afraid.
>>
>> In your case, if you are going to perform a task which will potentially punt
>> a lot of multicast traffic to the CPU, I was suggesting that there are MLS
>> limiters which will reduce this (see my earlier email), though we run with
>> the defaults, which are quite high PPS values!
>>
>> sh mls rate-limit | inc MC
>>   MCAST NON RPF          Off          -       -     -
>>   MCAST DFLT ADJ         On      100000     100     Not sharing
>>   MCAST DIRECT CON       Off          -       -     -
>>   MCAST PARTIAL SC       On      100000     100     Not sharing
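>>
>> If you did want to tighten those rather than run the defaults, the
>> matching knobs are under "mls rate-limit multicast ipv4" - from memory
>> (so check your release) something like:
>>
>> mls rate-limit multicast ipv4 connected 1000 100
>> mls rate-limit multicast ipv4 partial 1000 100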
>>