[f-nsp] Brocade XMR / Juniper iBGP Interoperability
Daniel Stephens
ds-lists at ndnx.net
Wed Nov 16 11:40:30 EST 2016
Hi James,
These are older XMR cards, basically the equivalent of the MLX -X cards.
Module Status Ports
M1 (left ):NI-XMR-MR Management Module Active
M2 (right):
F1: NI-X-SF Switch Fabric Module Active
F2: NI-X-SF Switch Fabric Module Active
F3:
S1:
S2: NI-XMR-1Gx20-SFP 20-port 1GbE/100FX Module CARD_STATE_UP 20
S3:
S4: NI-XMR-10Gx4 4-port 10GbE Module CARD_STATE_UP 4
SSH@xmr01#sh cam-partition slot 4
CAM partitioning profile: ipv4-ipv6-2
Slot 4 XPP20SP 0:
# of CAM device = 4
Total CAM Size = 917504 entries (63Mbits)
IP: Raw Size 786432, User Size 786432(0 reserved)
Subpartition 0: Raw Size 2048, User Size 2048, (0 reserved)
Subpartition 1: Raw Size 718865, User Size 718865, (0 reserved)
Subpartition 2: Raw Size 54814, User Size 54814, (0 reserved)
Subpartition 3: Raw Size 8245, User Size 8245, (0 reserved)
Subpartition 4: Raw Size 1639, User Size 1639, (0 reserved)
IPv6: Raw Size 131072, User Size 65536(0 reserved)
Subpartition 0: Raw Size 2048, User Size 1024, (0 reserved)
Subpartition 1: Raw Size 117744, User Size 58872, (0 reserved)
Subpartition 2: Raw Size 9328, User Size 4664, (0 reserved)
Subpartition 3: Raw Size 1280, User Size 640, (0 reserved)
Subpartition 4: Raw Size 384, User Size 192, (0 reserved)
Under normal conditions, the XMR would have one full-table BGP session to an upstream and three iBGP sessions (one to another XMR, two to the Juniper MX). Each of the iBGP sessions would provide full routes as well.
The issue did not surface until we attempted to integrate the MX network into the XMR network, and it affects both of our XMR units when we send routes from the MX towards the XMR.
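For rough sizing, here is a back-of-the-envelope sketch in Python; the CAM figures are copied from the show cam-partition output above, and the ~610K-route full-table figure comes from the original post further down the thread, so the numbers are only illustrative:

# Back-of-the-envelope CAM/RIB sizing for the XMR (ipv4-ipv6-2 profile).
# Figures copied from the "show cam-partition slot 4" output above; the
# ~610K full-table estimate comes from the original post in this thread.

ipv4_cam_user_size = 786_432   # "IP: ... User Size" for slot 4
ipv6_cam_user_size = 65_536    # "IPv6: ... User Size" for slot 4
full_table = 610_000           # approximate IPv4 full table

# Only best routes are programmed into CAM (the FIB), so a single full
# table should fit with headroom:
print(f"IPv4 CAM headroom: {ipv4_cam_user_size - full_table} entries")

# The BGP RIB, by contrast, holds one path per session: one eBGP upstream
# plus three full-route iBGP sessions is roughly four copies of the table
# in control-plane memory, even though CAM only sees the best paths.
bgp_paths = 4 * full_table
print(f"Approximate BGP paths held in the RIB: {bgp_paths}")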
Thanks,
Daniel
From: James Cornman <james at atlanticmetro.net>
Date: Wednesday, November 16, 2016 at 11:31 AM
To: Daniel Stephens <ds-lists at ndnx.net>
Cc: "foundry-nsp at puck.nether.net" <foundry-nsp at puck.nether.net>
Subject: Re: [f-nsp] Brocade XMR / Juniper iBGP Interoperability
Do you have -X model 10Gbps cards?
BR-MLX-10Gx4-X 4-port 10GbE Module
What about management modules? Do you have other full-route BGP sessions on that 10Gbps card, with only the Juniper ones having issues? With a -X module as listed above and the ipv4-ipv6-2 CAM partition, you should see something like this:
SSH@router#show cam-partition slot 2
CAM partitioning profile: ipv4-ipv6-2
Slot 2 XPP20SP 0:
# of CAM device = 4
Total CAM Size = 917504 entries (63Mbits)
IP: Raw Size 786432, User Size 786432(0 reserved)
Does it show that?
-James
On Wed, Nov 16, 2016 at 11:10 AM, Daniel Stephens <ds-lists at ndnx.net> wrote:
Hi everyone,
I am having a strange issue with our Brocade XMRs when I attempt to exchange routes between a new set of Juniper MX iBGP peers and was looking to see if anyone had any recommendations.
The Brocade XMRs have two line cards, a 20-port 1G and a 4-port 10G card installed, and are running 5.6.0d firmware.
We are consolidating two existing networks: one running Brocade XMRs and the other running Juniper MX routers. The issue surfaced on the XMRs when we removed the route filters on the iBGP sessions between the XMR and MX routers. When we lift the filters and send routes from the XMR towards the MX, everything functions as normal and the MX correctly learns the routes. The problem occurs when we lift the filter and send routes from the MX towards the XMR: the XMR begins to load the routes correctly, but at some point during the process it stops forwarding traffic entirely on the 4-port 10G line card (the card to which the MX routers interconnect), all BGP sessions associated with that line card flap, and OSPF adjacencies drop and re-establish.
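A minimal monitoring sketch for catching the moment the card stops forwarding, assuming SSH access to the XMR and the netmiko library's brocade_netiron driver; the hostname and credentials are placeholders, and the polled commands simply mirror the ones pasted earlier in the thread:

# Minimal sketch: poll the XMR while routes are loaded from the MX, to
# capture module state and CAM usage around the point forwarding stops.
# Assumes the netmiko library and its brocade_netiron device type;
# host/credentials are placeholders, not values from this thread.
import time
from netmiko import ConnectHandler

xmr = ConnectHandler(
    device_type="brocade_netiron",
    host="xmr01.example.net",      # placeholder
    username="monitor",            # placeholder
    password="********",           # placeholder
)

for _ in range(60):                # roughly 5 minutes at 5-second intervals
    print(xmr.send_command("show module"))
    print(xmr.send_command("show cam-partition slot 4"))
    time.sleep(5)

xmr.disconnect()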
I am using the ipv4-ipv6-2 CAM partition profile on the XMR devices. The total number of routes being passed is a full table of roughly 610K routes.
Even when I remove an XMR unit from service and attempt to load routes from the MX into the XMR, the same anomaly occurs.
Any assistance would be greatly appreciated.
Thanks,
Daniel
--
James Cornman
Chief Technology Officer
jcornman at atlanticmetro.net
212.792.9950 - ext 101
Atlantic Metro Communications
4 Century Drive, Parsippany NJ 07054
Colocation • Cloud Hosting • Network Connectivity • Managed Services
Follow us on Twitter: @atlanticmetro (https://twitter.com/atlanticmetro) • Like us on Facebook (https://www.facebook.com/atlanticmetro)
www.atlanticmetro.net