[c-nsp] Reasons for "random" ISIS flapping?

Peter Rathlev peter at rathlev.dk
Wed Aug 28 16:11:02 EDT 2013


Thank you for your very valuable comments!

On Wed, 2013-08-28 at 13:15 -0400, Pete Lumbis wrote:
> I don't know if this was mentioned before, but I'd also strongly
> advise against tight protocol timers like you are running and allow
> BFD to do that work. Because BFD is done either in hardware or under
> the interrupt the likelihood of a false positive like this is MUCH
> lower, especially with these very small CPU events. By running BFD
> and tight protocol timers you are actually putting more load on the
> CPU. I'd suggest 1sec hold /3sec dead protocol timers at the lowest.

We actually started using minimal ISIS timers instead of BFD when BFD
for SVIs became unavailable after SXF. We have since started using BFD
again but haven't thought about raising the ISIS timers back up. I'll
definitely take a look at correcting this.

We did see some BFD false positives with int 100/mul 3 though, possibly
(probably even) because of the same CPU overload. That is why it's
int 200/mul 4 now, which works well for us.
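For reference, the combination would look roughly like this (the
interface name is illustrative, not our actual config):

```
! Sketch only - interface name and exact values are illustrative
interface Vlan100
 ip router isis
 ! BFD at the int 200/mul 4 that works for us:
 bfd interval 200 min_rx 200 multiplier 4
 isis bfd
 ! Relaxed hellos per Pete's suggestion (1 s hello, 3 s hold):
 isis hello-interval 1
 isis hello-multiplier 3
```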

> I'm not 100% sure I think the "connected" rate limiter works like
> uRPF. If it's not on the right interface we ignore it. That's one of
> the compelling reasons to run the other rate limiters.

If I understand you correctly (and assuming it's right, of course),
"connected" only protects against traffic _sourced_ from the local
network with a valid (for that network) source address. Since this
traffic did not carry a valid source address, it should instead have
been caught by "mls rate-limit multicast ipv4 non-rpf", which was not
configured on the device.
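That is, something like the following would have to be added (the
100/10 rate is just a guess at a sane value, not something we've
tested):

```
mls rate-limit multicast ipv4 non-rpf 100 10
```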

If so, that makes a lot of sense. I just have one problem then: I don't
have enough hardware rate-limiters on a Sup720 to do that with my
current configuration. :'(

Maybe someone would like to suggest which of the following are less
important (when acting as a gateway for many VLANs) than multicast uRPF
in hardware? :-)

 mls rate-limit multicast ipv4 fib-miss 10000 10
 mls rate-limit multicast ipv4 igmp 5000 10
 mls rate-limit multicast ipv4 ip-options 10 1
 mls rate-limit multicast ipv4 partial 10000 10
 mls rate-limit unicast cef glean 1000 10
 mls rate-limit unicast acl input 200 10
 mls rate-limit unicast acl output 200 10
 mls rate-limit unicast ip options 10 1
 mls rate-limit unicast ip rpf-failure 200 10
 mls rate-limit unicast ip icmp unreachable no-route 200 10
 mls rate-limit unicast ip icmp unreachable acl-drop 200 10
 mls rate-limit unicast ip errors 200 10
 mls rate-limit all ttl-failure 500 10

I had hoped it could share with "multicast ipv4 ip-options" if I used
the same rate, but it can't. The above rate-limiter setup gives the
following from "show mls rate-limit | excl _Off( +-)+$":

 Sharing Codes: S - static, D - dynamic
 Codes dynamic sharing: H - owner (head) of the group, g - guest of the group

   Rate Limiter Type       Status     Packets/s   Burst  Sharing
 ---------------------   ----------   ---------   -----  -------
        MCAST DFLT ADJ   On               10000      10  Not sharing
        ACL BRIDGED IN   On                 200      10  Group:1 S
       ACL BRIDGED OUT   On                 200      10  Group:1 S
          ACL VACL LOG   On                2000       1  Not sharing
             CEF GLEAN   On                1000      10  Not sharing
      MCAST PARTIAL SC   On               10000      10  Not sharing
        IP RPF FAILURE   On                 200      10  Group:0 S
           TTL FAILURE   On                 500      10  Not sharing
 ICMP UNREAC. NO-ROUTE   On                 200      10  Group:0 S
 ICMP UNREAC. ACL-DROP   On                 200      10  Group:0 S
       MCAST IP OPTION   On                  10       1  Group:3 S
       UCAST IP OPTION   On                  10       1  Group:2 S
             IP ERRORS   On                 200      10  Group:0 S
            MCAST IGMP   On                5000      10  Not sharing
 ---------------------   ----------   ---------   -----  -------

Suggestions for adjusting the rates are welcome too of course.

-- 
Peter



