[f-nsp] Brocade XMR / Juniper iBGP Interoperability
James Cornman
james at atlanticmetro.net
Wed Nov 16 13:29:54 EST 2016
Are the ports all route-only? Also note that standard SNMP monitoring of CPU
reports the management module, not the individual LPs. The LPs commonly spike
without it being obvious via any standard SNMP monitoring.
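A quick way to see the per-LP load directly from the CLI rather than SNMP (slot number below is an example; adjust for your chassis and verify syntax against your NetIron release):

```
show cpu lp 2
```

On NetIron, the usual CPU OID reflects the management processor only, so a graph of that value can look flat while an LP is pegged.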
On Wed, Nov 16, 2016 at 12:29 PM, Daniel Stephens <ds-lists at ndnx.net> wrote:
> I am not doing any hair-pinning on the device.
>
>
>
> I do have icmp redirects disabled on all interfaces, however.
>
>
>
> Thanks,
>
> Daniel
>
>
>
> *From: *Jody Botham <jody at ask4.com>
> *Date: *Wednesday, November 16, 2016 at 12:25 PM
> *To: *Daniel Stephens <ds-lists at ndnx.net>, Ryan Harden <
> hardenrm at uchicago.edu>
>
> *Cc: *"foundry-nsp at puck.nether.net" <foundry-nsp at puck.nether.net>
> *Subject: *Re: [f-nsp] Brocade XMR / Juniper iBGP Interoperability
>
>
>
> Are you doing any hair-pinning of traffic on the XMR in/out a VLAN on the
> same physical interface? NetIron generates an ICMP redirect by default in
> this situation, which with lots of traffic can overwhelm the LP. Check
> whether you've disabled ICMP redirects.
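> A minimal sketch of turning redirects off on NetIron (command names as I
> recall them from 5.x; the interface is an example, so verify against the
> configuration guide for your release):
>
> ```
> telnet@xmr(config)# no ip icmp redirects
> telnet@xmr(config)# interface ethernet 3/1
> telnet@xmr(config-if-e10000-3/1)# no ip redirect
> ```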
>
>
>
> On Wed, 16 Nov 2016 at 17:15, Daniel Stephens <ds-lists at ndnx.net> wrote:
>
> I will need to capture the debug data when attempting to bring the session
> back up, but since these two networks are production, I am limited in when I
> can do so, because it becomes service-affecting when the sessions all drop.
>
> I do not have the exact output of “show cpu lp X”, but graphs I have of
> the processor on the management module and line cards show both going up to
> roughly 80-90% during the event.
>
> Thanks,
> Daniel
>
> On 11/16/16, 12:10 PM, "Ryan Harden" <hardenrm at uchicago.edu> wrote:
>
> What does ‘show cpu lp X’ say during the error/failure state?
>
> /Ryan
>
> Ryan Harden
> Research and Advanced Networking Architect
> University of Chicago - ASN160
> P: 773.834.5441
>
>
>
>
> > On Nov 16, 2016, at 10:52 AM, Daniel Stephens <ds-lists at ndnx.net>
> wrote:
> >
> > Hi Jörg,
> >
> > The line card does not reset. The only entries in show logging during
> the event are OSPF neighbor changes as the adjacencies drop and re-establish.
> >
> > SSH at xmr01#sh ip bgp nei $MX_IP last-packet-with-error decode
> > No received packet with error logged for neighbor $MX_IP
> > SSH at xmr01#
> >
> > When the upstream carrier BGP neighbor reset, the XMR reported the
> following:
> >
> > Notification Sent: Hold Timer Expired
> > Notification Received: Cease/Connection Rejected
> >
> > Thanks,
> > Daniel
> >
> >
> > From: Jörg Kost <jk at ip-clear.de>
> > Date: Wednesday, November 16, 2016 at 11:41 AM
> > To: Daniel Stephens <ds-lists at ndnx.net>
> > Cc: "foundry-nsp at puck.nether.net" <foundry-nsp at puck.nether.net>
> > Subject: Re: [f-nsp] Brocade XMR / Juniper iBGP Interoperability
> >
> > Hi,
> > does the line card reset?
> > Also any output from
> > show logging
> > or bgp, e.g.
> > sh ip bgp neighbors $neighbor last-packet-with-error decode
> > may be helpful.
> > Jörg
> > On 16 Nov 2016, at 17:10, Daniel Stephens wrote:
> > Hi everyone,
> >
> > I am having a strange issue with our Brocade XMRs when I attempt to
> exchange routes between a new set of Juniper MX iBGP peers and was looking
> to see if anyone had any recommendations.
> >
> > The Brocade XMRs have two line cards, a 20-port 1G and a 4-port 10G
> card installed, and are running 5.6.0d firmware.
> >
> > We are integrating two existing networks as a consolidation, with
> the one network running Brocade XMR and the other running Juniper MX
> routers. The issue surfaced on the XMRs when we removed route filters on
> the iBGP sessions between the XMR and MX routers. When we lift the filters
> and send routes from the XMR towards the MX, everything functions normally
> and the MX routers correctly learn the routes. The issue arises when we
> lift the filter and send routes from the MX towards the XMR. The XMR begins
> to load the routes correctly, and at some point during this process, the
> XMR stops forwarding traffic on the 4-port 10G line card entirely (the card
> to which the MX routers interconnect), and all BGP sessions associated with
> that line card flap, and OSPF adjacencies drop and re-establish.
> >
> > I am using the ipv4-ipv6-2 CAM partition profile on the XMR devices.
> The total number of routes being passed is a full table of roughly
> 610K routes.
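> As a sanity check on headroom under that profile, the CAM allocation can be
> inspected from the CLI (the command exists on NetIron, though exact keywords
> and output vary by release; treat this as a sketch, not verified output):
>
> ```
> show cam-partition
> ```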
> >
> > Even when I remove an XMR unit from service and attempt to load
> routes from the MX into the XMR, the same anomaly occurs.
> >
> > Any assistance would be greatly appreciated.
> >
> > Thanks,
> > Daniel
> > _______________________________________________
> > foundry-nsp mailing list
> > foundry-nsp at puck.nether.net
> > http://puck.nether.net/mailman/listinfo/foundry-nsp
> > Jörg Kost
> > Hofmeir Media GmbH
> > Kranzhornstr. 3
> > 81825 München
> > Tel: 089/48002910
> > Fax: 089/4487505
> > jk at hofmeirmedia.net
> > https://www.premium-datacenter.de
> > https://www.xing.com/profile/Joerg_Kost
> > Geschäftsführer: Dipl.-Ing. (Univ.) Stefan Hofmeir
> > Handelsregister AG München: HRB 130092
>
>
>
>
>
>
--
James Cornman
Chief Technology Officer
jcornman at atlanticmetro.net
212.792.9950 - ext 101
Atlantic Metro Communications
4 Century Drive, Parsippany NJ 07054
Colocation • Cloud Hosting • Network Connectivity • Managed Services
Follow us on Twitter: @atlanticmetro <https://twitter.com/atlanticmetro> •
Like us on Facebook <https://www.facebook.com/atlanticmetro>
www.atlanticmetro.net