[f-nsp] IPv6 DFZ and TCAM partitioning

Youssef Bengelloun-Zahr bengelly at gmail.com
Tue Jun 13 17:07:08 EDT 2017


No one? Really?!?

Y.



> On 13 Jun 2017, at 10:05, Youssef Bengelloun-Zahr <bengelly at gmail.com> wrote:
> 
> Dear Foundry community,
> 
> I'm about to revive an old trolly thread, but boy are they fun.
> 
> With the IPv6 DFZ now growing past 32K entries, we have been receiving the following syslog warnings on our MLXe gear (only 6 of the 32,768 IPv6 CAM entries are left free):
> 
> Jun 13 07:51:02:A:CAM IPv6 partition warning: total 32768 (reserved 0), free 6, slot 2, ppcr 1
> Jun 13 07:51:02:A:CAM IPv6 partition warning: total 32768 (reserved 0), free 6, slot 2, ppcr 0
> 
> We receive full BGP feeds from multiple IP transit providers and YES, we DO filter prefixes longer than /48.
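> 
> For reference, a minimal sketch of what that inbound filter looks like in NetIron-style syntax (the prefix-list name and the peer address 2001:db8::1 are placeholders, not our actual config, and the exact keywords should be double-checked against the NetIron docs):
> 
>   ! accept 2000::/3 up to a /48, drop anything longer (implicit deny at the end)
>   ipv6 prefix-list V6-TRANSIT-IN seq 10 permit 2000::/3 le 48
>   !
>   router bgp
>    address-family ipv6 unicast
>     neighbor 2001:db8::1 prefix-list V6-TRANSIT-IN in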
> 
> Our MLXe routers act as MPLS PEs running NI58g, with the following hardware:
> 
> Module                                                          Status                       Ports    Starting MAC    
> M1 (left ):BR-MLX-MR2-X Management Module                       Standby(Ready State)
> M2 (right):BR-MLX-MR2-X Management Module                       Active                        
> F1: NI-X-HSF Switch Fabric Module                               Active                          
> F2: NI-X-HSF Switch Fabric Module                               Active                          
> F3: NI-X-HSF Switch Fabric Module                               Active                          
> S1: BR-MLX-10Gx4-X 4-port 10GbE Module                          CARD_STATE_UP                4        0024.38a4.fb00
> S2: BR-MLX-10Gx4-X 4-port 10GbE Module                          CARD_STATE_UP                4        0024.38a4.fb30
> S3: BR-MLX-1GFx24-X 24-port 1GbE SFP Module                     CARD_STATE_UP                24       0024.38a4.fb60
> S4: BR-MLX-1GFx24-X 24-port 1GbE SFP Module                     CARD_STATE_UP                24       0024.38a4.fb90
> 
> For years, we have been using the multi-service-4 CAM profile in order to provide both pure IP connectivity and MPLS connectivity (mostly VPLS) to our clients.
> 
> After investigating with BTAC, they told us that our hardware can't handle much more than 32k IPv6 routes with this profile:
> 
> http://www.brocade.com/content/html/en/administration-guide/netiron-05900-adminguide/GUID-F5A27733-F4A2-4367-8F83-E8A4C3DE6F0E.html
> 
> We only use an MPLS VRF for management purposes. So we are left with either:
> 
> - Heavily filtering the IPv6 BGP feeds and accepting a default route, in order to keep the one MPLS VRF for management purposes (a rough sketch follows after this list), or
> 
> - Accepting the full IPv6 BGP feeds by migrating to the ipv4-ipv6-2 profile, and losing MPLS VRF support (also sketched further below).
> 
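> Option 1 would boil down to an inbound policy roughly like this (again NetIron-style with placeholder names; the /32 cut-off is just an example threshold, to be tuned until the table fits comfortably under 32k):
> 
>   ! accept a default route plus aggregates no longer than /32
>   ipv6 prefix-list V6-SMALL-TABLE seq 5 permit ::/0
>   ipv6 prefix-list V6-SMALL-TABLE seq 10 permit 2000::/3 le 32
>   !
>   router bgp
>    address-family ipv6 unicast
>     neighbor 2001:db8::1 prefix-list V6-SMALL-TABLE in
> 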
> I seem to recall that someone on this list has been pushing Brocade to create a special profile to accommodate a very small number of MPLS VRFs. Did that happen?
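> 
> For completeness, option 2 would just be the profile change itself. From memory the command is the one below, but please double-check the exact keyword and the reload requirement against the admin guide linked above before touching a production PE:
> 
>   ! switch CAM profile; only takes effect after a reload
>   cam-partition profile ipv4-ipv6-2
>   write memory
>   reload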
> 
> Other than that, how do you guys handle this? Maybe there's a nifty trick I'm not aware of?
> 
> Thank you for your wisdom.
> 
> Best regards.