[f-nsp] Bigiron RX4 - IPV6 causing out of nexthop entries

Wouter Prins wp at null0.nl
Sat Aug 13 09:58:24 EDT 2011


Hi Pieter,

Have you tried CAM partition tuning with 'cam-partition next-hop' in the
global config?
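
Something along these lines, as a rough sketch (the size argument and the
reload requirement are my assumptions from memory; verify supported values
in the RX documentation for your code train):

 cam-partition next-hop <num-entries>
 write memory
! a reload is typically required before a CAM repartition takes effect

A larger next-hop partition should give the exhausted 8-path bucket more
headroom, at the cost of CAM space elsewhere.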

On 13 August 2011 15:52, Pieter Taks <p.taks at nforce.com> wrote:
> Hi everyone,
>
> I seem to be having a problem with IPv6 on a BigIron RX4 that causes it
> to run out of nexthop entries.
>
> Situation:
> * We have a transit provider delivering two IPv6 BGP sessions, each
> carrying ~6800 routes.
> * The router also has two IPv4 BGP sessions with the same provider (no
> problems there).
>
> Problem:
> Once in a while (every other week or so) the router logs the following
> in the syslog:
> Aug 13 11:30:07:I:INFO: Out of nexthop entries for path count 7 on slot 3.
> Aug 13 11:30:07:I:INFO: Out of nexthop entries for path count 7 on slot 1.
> Aug 13 11:30:07:I:INFO: Out of nexthop entries for path count 7 on slot 2.
> Aug 13 11:30:07:I:INFO: Out of nexthop entries for path count 7 on slot 4.
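>
> (My interpretation, and an assumption on my part: the nexthop table is
> partitioned by path count in power-of-two buckets, so a nexthop set with
> 5-8 paths is allocated from the 8-path partition. "Path count 7" would
> then draw from the bucket that the output below shows as fully exhausted:
> 256 total, 0 free, 256 in use.)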
>
>
>
> When it does, the following output is visible. Note that the numbers
> change; I ran the command twice no more than about 5 seconds apart.
>
> #sh ip nexthop
> Module S1:
>
> Paths  Total  Free  In-use
>  1    2816   2805   11
>  2    512    512    0
>  4    512    512    0
>  8    256    0      256
>
> Module S2:
>
> Paths  Total  Free  In-use
>  1    2816   2805   11
>  2    512    512    0
>  4    512    512    0
>  8    256    0      256
>
> Module S3:
>
> Paths  Total  Free  In-use
>  1    2816   2805   11
>  2    512    512    0
>  4    512    512    0
>  8    256    0      256
>
> Module S4:
>
> Paths  Total  Free  In-use
>  1    2816   2804   12
>  2    512    512    0
>  4    512    512    0
>  8    256    0      256
>
> #sh ip nexthop
> Module S1:
>
> Paths  Total  Free  In-use
>  1    2816   2805   11
>  2    512    512    0
>  4    512    512    0
>  8    256    8      248
>
> Module S2:
>
> Paths  Total  Free  In-use
>  1    2816   2805   11
>  2    512    512    0
>  4    512    512    0
>  8    256    8      248
>
> Module S3:
>
> Paths  Total  Free  In-use
>  1    2816   2805   11
>  2    512    512    0
>  4    512    512    0
>  8    256    8      248
>
> Module S4:
>
> Paths  Total  Free  In-use
>  1    2816   2805   11
>  2    512    512    0
>  4    512    512    0
>  8    256    8      248
>
>
>
> However, when I shut down one IPv6 BGP session with the transit provider,
> the following output is shown.
>
> #sh ip nexthop
>
> Module S1:
>
> Paths  Total  Free  In-use
>  1    2816   2804   12
>  2    512    512    0
>  4    512    512    0
>  8    256    232    24
>
> Module S2:
>
> Paths  Total  Free  In-use
>  1    2816   2805   11
>  2    512    512    0
>  4    512    512    0
>  8    256    232    24
>
> Module S3:
>
> Paths  Total  Free  In-use
>  1    2816   2805   11
>  2    512    512    0
>  4    512    512    0
>  8    256    232    24
>
> Module S4:
>
> Paths  Total  Free  In-use
>  1    2816   2804   12
>  2    512    512    0
>  4    512    512    0
>  8    256    232    24
>
>
>
> The VE carrying the IPv6 address:
>  ipv6 address x:x:x:x::1fa/126
>  ipv6 enable
>  ipv6 nd suppress-ra
>
>
> BGP config:
>  neighbor ipv6-x peer-group
>  neighbor ipv6-x remote-as y
>  neighbor ipv6-x next-hop-self
>  neighbor ipv6-x remove-private-as
>  neighbor ipv6-x soft-reconfiguration inbound
>
>  neighbor x:x:x:x::1f9 peer-group ipv6-x
>
>
>  address-family ipv6 unicast
>  maximum-paths 2
>  multipath ebgp
>  redistribute connected
>  neighbor ipv6-x activate
>  neighbor ipv6-x route-map in ipv6-x-in
>  neighbor ipv6-x route-map out ipv6-x-out
>  neighbor x:x:x:x::1f9 activate
>
>
> route-map ipv6-x-in permit  10
>  set local-preference 100
>  set metric none
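>
> A note on the multipath knobs above: with 'maximum-paths 2' and
> 'multipath ebgp', ECMP routes need multi-path nexthop entries, which I
> assume is why the multi-path buckets in 'show ip nexthop' are used at
> all. What puzzles me is that the log complains about path count 7 while
> only two equal-cost paths are configured here.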
>
>
>
> Also, when we re-enable the IPv6 BGP session, the router keeps giving the
> same (healthy) output as above. No problems are visible at that point,
> but the issue can reappear out of the blue later on. That may mean it has
> no direct relation to the IPv6 BGP session(s); either way, bouncing the
> session does 'resolve' the issue for the time being.
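>
> For reference, the 'fix' is just bouncing the session, roughly like this
> (neighbor shutdown syntax written from memory, so verify it on your
> release):
>
> router bgp
>  neighbor x:x:x:x::1f9 shutdown
> ! wait until the 8-path bucket drains in 'show ip nexthop'
>  no neighbor x:x:x:x::1f9 shutdown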
>
> We have run versions 2.7.3a, 2.8.0, and now 2.7.2k. All three versions
> seem to have this issue.
>
> I hope someone has an idea what might be causing this; either way, thank
> you all for reading.
>
> --
> Best regards,
>
> Pieter Taks
>
>
>



-- 
Wouter Prins
wp at null0.nl



