[f-nsp] IPv6 DFZ and TCAM partitioning
Youssef Bengelloun-Zahr
bengelly at gmail.com
Tue Jun 13 19:02:07 EDT 2017
Dear Daniel,
Is this really the only choice?
Either shrink our view of the DFZ or upgrade to newer LPs?
Thank you.
> On Jun 14, 2017, at 00:37, Daniel Schmidt <daniel.schmidt at wyo.gov> wrote:
>
> No one really. My 2 cents:
>
> Use option #1: have each provider send you an additional default route, and build an ASN filter list with a little help from this:
> https://github.com/ipcjk/asnbuilder
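>
> Roughly, the shape of it would be something like this (NetIron-ish syntax typed from memory and untested; AS 64496/64497 and the list names are just placeholders, so check the config guide before pasting):
>
> ipv6 prefix-list DEFAULT-ONLY seq 5 permit ::/0
> ip as-path access-list WANTED-ASNS permit _64496_
> ip as-path access-list WANTED-ASNS permit _64497_
> !
> route-map TRANSIT-V6-IN permit 10
>  match ipv6 address prefix-list DEFAULT-ONLY
> route-map TRANSIT-V6-IN permit 20
>  match as-path WANTED-ASNS
> !
> router bgp
>  address-family ipv6 unicast
>   neighbor 2001:db8::1 route-map in TRANSIT-V6-IN
>
> That keeps the default plus whatever is originated by (or transits) the ASNs you actually care about, and drops the rest of the table.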
>
>> On Tue, Jun 13, 2017 at 3:07 PM, Youssef Bengelloun-Zahr <bengelly at gmail.com> wrote:
>> No one? Really?!?
>>
>> Y.
>>
>>
>>
>>> On Jun 13, 2017, at 10:05, Youssef Bengelloun-Zahr <bengelly at gmail.com> wrote:
>>>
>>> Dear Foundry community,
>>>
>>> I'm about to revive an old troll-y thread, but boy, are they fun.
>>>
>>> With the IPv6 DFZ having grown past 32K entries, we have been receiving the following syslog error messages on our MLXe gear:
>>>
>>> Jun 13 07:51:02:A:CAM IPv6 partition warning: total 32768 (reserved 0), free 6, slot 2, ppcr 1
>>> Jun 13 07:51:02:A:CAM IPv6 partition warning: total 32768 (reserved 0), free 6, slot 2, ppcr 0
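>>>
>>> For reference, the per-PPCR partition sizes and current usage should be visible with something like the following (command quoted from memory, so verify it against your NetIron release):
>>>
>>> show cam-partition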
>>>
>>> We receive full BGP feeds from multiple IP transit providers, and YES, we DO filter prefixes longer than /48.
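>>>
>>> The /48 cut-off is just the usual max-length filter on the transit sessions, along these lines (illustrative NetIron-style syntax, list name invented):
>>>
>>> ipv6 prefix-list V6-MAX-48 seq 5 permit 2000::/3 le 48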
>>>
>>> Our MLXes act as MPLS PEs and run the following hardware on NetIron 5.8g:
>>>
>>> Module                                      Status                Ports Starting MAC
>>> M1 (left ): BR-MLX-MR2-X Management Module  Standby(Ready State)
>>> M2 (right): BR-MLX-MR2-X Management Module  Active
>>> F1: NI-X-HSF Switch Fabric Module           Active
>>> F2: NI-X-HSF Switch Fabric Module           Active
>>> F3: NI-X-HSF Switch Fabric Module           Active
>>> S1: BR-MLX-10Gx4-X 4-port 10GbE Module      CARD_STATE_UP         4     0024.38a4.fb00
>>> S2: BR-MLX-10Gx4-X 4-port 10GbE Module      CARD_STATE_UP         4     0024.38a4.fb30
>>> S3: BR-MLX-1GFx24-X 24-port 1GbE SFP Module CARD_STATE_UP         24    0024.38a4.fb60
>>> S4: BR-MLX-1GFx24-X 24-port 1GbE SFP Module CARD_STATE_UP         24    0024.38a4.fb90
>>>
>>> For years, we have been using the multi-service-4 CAM profile in order to provide pure IP connectivity and MPLS connectivity (mostly VPLS) to our clients.
>>>
>>> After investigating with BTAC, they told us that our hardware couldn't handle much more than 32k IPv6 routes with this profile:
>>>
>>> http://www.brocade.com/content/html/en/administration-guide/netiron-05900-adminguide/GUID-F5A27733-F4A2-4367-8F83-E8A4C3DE6F0E.html
>>>
>>> We only use one MPLS VRF, for management purposes. So we are left with either:
>>>
>>> - Heavily filtering IPv6 BGP feeds + accepting a default route, in order to keep 1 MPLS VRF for management purposes, or
>>>
>>> - Accepting full IPv6 BGP feeds by migrating to the ipv4-ipv6-2 profile, and losing the MPLS VRF (a sketch of the change follows below).
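>>>
>>> For the second option, if I read the admin guide correctly, the change itself would be something like the following, and CAM repartitioning takes effect only after a reload, so it is disruptive (syntax from memory, lab it first):
>>>
>>> cam-partition profile ipv4-ipv6-2
>>> write memory
>>> reload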
>>>
>>> I seem to recall that someone on this list has been pushing Brocade to create a special profile to accommodate a very small number of MPLS VRFs. Did that happen?
>>>
>>> Other than that, how do you guys handle this? Maybe there's a nifty trick I'm not aware of?
>>>
>>> Thank you for your wisdom.
>>>
>>> Best regards.
>>
>> _______________________________________________
>> foundry-nsp mailing list
>> foundry-nsp at puck.nether.net
>> http://puck.nether.net/mailman/listinfo/foundry-nsp
>
>
>