[c-nsp] Enabling IPv6 on Cisco 6500 breaks IPv4 Internet connectivity.
Tim Durack
tdurack at gmail.com
Sat Jun 23 18:36:39 EDT 2012
This is a curious story. Why does the SP hit 99%? I could understand
the RP maybe. TAC can't explain that?
We run Internet in a VRF. Actually we run several VRFs for different
transit providers, and one for peering. We configure "mpls label mode
all-vrfs protocol bgp-vpnv4 per-vrf" to avoid per-prefix labels (there
is no matching vpnv6 command; it's missing from the SXI train). No
performance issues.
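For context, the per-vrf label knob is a one-liner in global config. A rough sketch (the VRF name and RD are placeholders, not our production config):

```
! Example VRF definition (illustrative names/RD).
ip vrf TRANSIT-A
 rd 65000:100
 route-target export 65000:100
 route-target import 65000:100
!
! One aggregate label per VRF instead of one per prefix --
! keeps the MPLS label table small with a full table in a VRF.
! (No vpnv6 equivalent in SXI, so IPv6 stays per-prefix.)
mpls label mode all-vrfs protocol bgp-vpnv4 per-vrf
```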
Admittedly we run full+default and shave the table down to ~270k
prefixes. Maybe the timing of 0/0 hitting the FIB saves us from similar
issues, but I don't think we've experienced anything like what you
describe even before we started "shaving".
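The "shaving" is nothing exotic, just an inbound prefix filter on the transit sessions. A minimal sketch (the /20 cutoff and names are illustrative, not our actual policy):

```
! Keep the default plus aggregates of /20 and shorter,
! dropping long-prefix deaggregates to shrink the table.
! (0.0.0.0/0 le 20 matches any prefix of length 0-20,
! which includes 0/0 itself.)
ip prefix-list SHAVE seq 10 permit 0.0.0.0/0 le 20
!
route-map SHAVE-IN permit 10
 match ip address prefix-list SHAVE
```

Applied inbound on the transit neighbors, this trades FIB space for some path optimality, which the default covers.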
mls config:
mls ipv6 vrf
mls ip cef load-sharing full
mls ip multicast replication-mode ingress
mls ip multicast bidir gm-scan-interval 10
mls ipv6 acl compress address unicast
mls aging long 300
mls exclude acl-deny
mls netflow interface
mls flow ip interface-full
mls nde sender
mls qos
mls rate-limit multicast ipv4 fib-miss 1000 100
mls rate-limit multicast ipv4 igmp 1000 100
mls rate-limit multicast ipv4 ip-options 1000 100
mls rate-limit multicast ipv4 partial 1000 100
mls rate-limit unicast ip options 100 10
mls rate-limit all ttl-failure 100 10
mls rate-limit all mtu-failure 100 10
mls rate-limit layer2 pdu 1000 100
mls cef error action reset
mls cef maximum-routes ipv6 128
mls cef maximum-routes mpls 128
mls cef maximum-routes ip-multicast 16
mls mpls recir-agg
mls mpls tunnel-recir
This results in:
sh mls cef maximum-routes
FIB TCAM maximum routes :
=======================
Current :-
-------
IPv4 - 608k (default)
MPLS - 128k
IPv6 - 128k
IP multicast - 16k
sh mls cef summary
Total routes: 364992
IPv4 unicast routes: 340604
IPv4 non-vrf routes: 48
IPv4 vrf routes: 340556
IPv4 Multicast routes: 15
MPLS routes: 9299
IPv6 unicast routes: 15070
IPv6 non-vrf routes: 21
IPv6 vrf routes: 15049
IPv6 multicast routes: 3
EoM routes: 1
Tim:>
On Sat, Jun 23, 2012 at 5:50 PM, Jim Trotz <jtrotz at gmail.com> wrote:
> Final update:
>
> After much testing in the lab and working with Cisco TAC (almost no
> help), I have reached a conclusion about the problem - it's a hardware
> limitation.
>
> Enabling IPv6 routing on a 6500 (with XL cards) and a full Internet
> routing table in a VRF exceeds the limits of SP processing. The SP goes
> to 99% utilization reconfiguring something but eventually recovers. In
> the lab this took almost 5 minutes! In real life with many 10Gb
> interfaces active - who knows!!
>
> The problem is that the router still passes enough traffic that EIGRP
> and BGP stay up, but all user traffic is black-holed by the 1-10 kb/s
> effective throughput.
>
> It looks like this may be a one-time event, but neither Cisco TAC nor
> the BU could say for sure this wouldn't happen again under some kind of
> BGP flap or VRF reconfig.
>
> Our TCAM limit is 512K IPv4 routes now, and we have 409K routes today.
>
> We will probably resort to filtering the BGP-learned routes down to
> 100-200K, pointing a default for everything else at our Internet
> routers, and then go shopping for a new router.
>
> The problem isn't noticeable until we have more than about 250K routes.
>
> There was no interest in redesigning the network to not use VRFs for the
> Internet table.
>
> Once IPv6 is enabled and all is stable we will probably go shopping
> for new routers.
>
> Thanks again for everyone's suggestions, it helped us figure out the root
> cause.
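For anyone heading down the same road, the partial-table-plus-default arrangement Jim describes might look roughly like this (ASN, VRF name, prefix-list cutoff and neighbor address are all placeholders, not his config):

```
! Accept only shorter aggregates from transit, roughly
! shrinking a ~400k table toward the 100-200k target.
ip prefix-list PARTIAL seq 10 permit 0.0.0.0/0 le 18
!
route-map PARTIAL-IN permit 10
 match ip address prefix-list PARTIAL
!
router bgp 65000
 address-family ipv4 vrf INTERNET
  neighbor 203.0.113.1 route-map PARTIAL-IN in
!
! Cover everything the filter drops with a static default
! toward the Internet routers (next hop is illustrative).
ip route vrf INTERNET 0.0.0.0 0.0.0.0 203.0.113.1
```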