[c-nsp] Best practices for Cat6500

Abello, Vinny Vinny_Abello at dell.com
Mon Nov 1 14:29:50 EDT 2010


Something else I recently came across in the release notes for SXI2 (it might
be in other release notes as well, with differing information) in regard to
BFD and SSO; a quick config sketch of the relaxed timers follows the excerpt:

When evaluating BFD SSO for the network, the customer should note the
following considerations.
. Cisco Catalyst 6500 series switches typically support up to 128 BFD
sessions with a hello interval of 50 ms or higher and a multiplier of 3 or
higher. When configured with dual sups in SSO mode, the number of sessions
supported is 50, with timers of 500 ms or higher and a multiplier of 3 or
higher. This scale ensures that BFD sessions don't flap during the time it
takes for the system to fail over to the secondary supervisor.

. BFD SSO is supported on Cisco Catalyst 6500 Series E-chassis and 67xx line
cards only. Centralized Forwarding Cards (CFCs) are not supported.

. During the ISSU cycle, the line cards are reset, which causes a routing flap
in the BFD sessions.

. For EIGRP, the number of BFD sessions supported under BFD SSO is reduced
to 30.
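
To put the relaxed dual-sup numbers in context, the per-interface timers
would look something like this (interface and IGP below are just examples):

interface TenGigabitEthernet1/1
 ! 500 ms tx/rx interval, multiplier 3, per the SSO guidance above
 bfd interval 500 min_rx 500 multiplier 3
!
router ospf 1
 ! have OSPF register its neighbours as BFD clients
 bfd all-interfaces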

Vinny Abello
Network Engineer
Dell | Physician Services
office +1 973 940 6125
mobile +1 973 868 0610
vinny_abello at dell.com


-----Original Message-----
From: cisco-nsp-bounces at puck.nether.net
[mailto:cisco-nsp-bounces at puck.nether.net] On Behalf Of Justin Krejci
Sent: Monday, November 01, 2010 1:11 PM
To: Phil Mayers
Cc: cisco-nsp at puck.nether.net
Subject: Re: [c-nsp] Best practices for Cat6500

With regard to SSO/NSF and HSRP, I've read documents on Cisco's site that
conflict when discussing the use of NSF.

One indicates that HSRP and NSF should not be used together on the same box:
http://www.cisco.com/en/US/docs/ios/12_2s/feature/guide/fsnsf20s.html#wp1467556
http://www.cisco.com/en/US/customer/docs/switches/lan/catalyst6500/ios/12.2SX/configuration/guide/nsfsso.html#wp1112624


I've read another article (sorry, I don't have the URL handy at the moment,
but I think it was a PDF from cisco.com) that either implied through config
examples that using NSF and HSRP together is OK, or else explicitly stated
that using the two together is OK.

Can anyone comment on this conflict? Also, does this conflict with NSF
apply to GLBP as well, which can be configured to behave similarly to
HSRP? The second link above mentions GLBP, but the first one does not.
Is GLBP newer than the first article?
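
For clarity, the sort of combined config I'm asking about would be roughly
this (addresses, process and AS numbers are just placeholders):

router ospf 1
 ! Cisco NSF / graceful restart for the IGP
 nsf
!
router bgp 65000
 ! advertise the graceful-restart capability to BGP neighbours
 bgp graceful-restart
!
interface Vlan100
 ip address 192.0.2.2 255.255.255.0
 ! HSRP first-hop redundancy on the same box
 standby 1 ip 192.0.2.1
 standby 1 preempt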

Thanks.


On Mon, 2010-11-01 at 11:59 +0000, Phil Mayers wrote:

> On 01/11/10 10:00, Robert Hass wrote:
> >
> > 1) mls rate-limit
> >
> > My current configuration only consists of a few rate-limiters:
> >
> > mls rate-limit unicast ip rpf-failure 300 30
> > mls rate-limit unicast ip icmp unreachable no-route 300 30
> > mls rate-limit unicast ip icmp unreachable acl-drop 300 30
> > mls rate-limit unicast ip errors 300 30
> >
> > Should I consider configuring more mls rate-limiters?
> 
> Search the archives for truly extensive discussion on these issues.
> 
> Suffice it to say that some of the more useful-looking rate-limiters 
> (glean, receive) have gotchas, either in terms of bugs in certain 
> hardware (the glean limiter is subject to the output ACL of the input 
> interface on some PFC/DFC versions) or just sub-optimal behaviour.
> 
> We use these, in addition to what you've got, which I believe to be 
> relatively safe:
> 
> mls rate-limit all ttl-failure 100 10
> mls rate-limit all mtu-failure 100 10
> 
> 
> >
> > I would like to implement 'mls rate-limit layer2 pdu'. How can I check
> > how many layer2 PDU packets are coming to the RP? Is there an SNMP OID
> > or CLI command to show this?
> 
> Personally I would avoid that, and instead ensure layer2 PDUs are 
> filtered on untrusted ports.
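> 
> For example (just a sketch -- interface name and exact commands depend on 
> the port role and which PDUs you consider untrusted), an edge port might 
> get something like:
> 
> interface GigabitEthernet2/1
>  switchport mode access
>  ! err-disable the port if a BPDU shows up (or use bpdufilter to silently 
>  ! drop BPDUs instead)
>  spanning-tree bpduguard enable
>  ! stop processing/sending CDP PDUs on the port
>  no cdp enable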
> 
> > 3) Automatic BGP refresh
> >
> > When I change something in a route-map for inbound BGP prefixes, I
> > notice that the Cat6500 automatically refreshes the inbound BGP routes
> > (automatically doing something like 'clear ip bgp x.x.x.x in'). Is this
> > a new feature in SXI4a?
> 
> Really? Are you sure?
> 
> >
> > 4) NetFlow only for packets going to RP/SP
> >
> > Is any way to export NetFlow (v5 or v9) information for packets coming
> > to RP/SP only ? I would like to check whats coming to software
> > switching by RP/SP for develop control-plane policing are decrease CPU
> > usage for eg. ICMP traffic.
> 
> Not sure about that, but you can use SPAN to monitor the SP/RP:
> 
> mon sess 1 type ...
>    source cpu rp
>    source cpu sp
> 
> ...etc.
> 
> >
> > 5) Supervisor Redundancy
> >
> > I would like to add a redundant Sup720. Will IOS automatically switch
> > to the second supervisor when the primary:
> > a) crashes (software error/bug)
> > b) fails (hardware failure)
> 
> Not automatically - you need to configure it:
> 
> redundancy
>   main-cpu
>    auto-sync running-config
>   mode sso
> 
> When configured, it is supposed to protect against many failures. It 
> works most of the time; however, in early versions of SXI we saw a couple 
> of crashes where the primary sup crashed and the SSO caused the 
> secondary to crash as well, dropping both to ROMMON :o(
> 
> But we have also seen successful SSO.
> 
> Beware: SSO alone might not be sufficient. You might need NSF, and 
> routing neighbours (e.g. BGP, OSPF) that support NSF, in order to 
> recover "instantly".
> 
> >
> > In my configuration I'm using old classic bus cards (3 x WS-X6408A-GBIC).
> 
> I'm not sure if SSO supports classic bus cards; read the docs.
_______________________________________________
cisco-nsp mailing list  cisco-nsp at puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/

