[j-nsp] SRX5k problem
Phil Fagan
philfagan at gmail.com
Thu Sep 5 16:50:03 EDT 2013
What version of code are you running? Z-mode traffic is bad before at least
11.4. Don't expect large MTUs to make it across the fabric, or heavy amounts
of bandwidth, even if it's 10G.
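For anyone hitting this, a quick sanity check of code version and fabric
health (standard commands; fab0 is the usual fabric interface name on node0):

    show version                       # anything pre-11.4 is suspect for Z-traffic
    show chassis cluster statistics    # fabric and control probe counters
    show interfaces fab0               # fabric link status and MTU
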
On Tue, Sep 3, 2013 at 9:38 AM, OBrien, Will <ObrienH at missouri.edu> wrote:
> The fabric carries traffic between the nodes, so it's my immediate suspect
> for the traffic loss.
>
> Are your connections configured as standard reth interfaces? Are you using
> some form of IGP?
> In active/active mode, I've seen some traffic loss, but most of it was due
> to OSPF taking time to select the new path.
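> For reference, a plain reth tied to RG1 looks roughly like this (a sketch;
> the reth-count, member interfaces, and address are examples, not from your
> config):
>
>     set chassis cluster reth-count 2
>     set interfaces xe-6/0/0 gigether-options redundant-parent reth0
>     set interfaces xe-18/0/0 gigether-options redundant-parent reth0
>     set interfaces reth0 redundant-ether-options redundancy-group 1
>     set interfaces reth0 unit 0 family inet address 192.0.2.1/24
>
> After a failover, "show ospf neighbor" will tell you how long the adjacency
> takes to come back.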
>
> On Sep 3, 2013, at 9:54 AM, R S wrote:
>
> We have a remote cluster, hence no direct connections; the links go through
> a switched L2 infrastructure (basically a VLAN).
> Yes, dual REs.
> Why do you suspect a problem on the fabric links?
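> (For context, the fabric members on each node are defined like this; the
> member interfaces below are examples, not our actual slots:)
>
>     set interfaces fab0 fabric-options member-interfaces xe-5/0/0
>     set interfaces fab1 fabric-options member-interfaces xe-17/0/0
>
> Worth noting that the transit VLAN has to carry the fabric's encapsulated
> (jumbo-sized) frames, so the MTU along the L2 path matters.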
>
> > From: ObrienH at missouri.edu
> > To: dim0sal at hotmail.com
> > CC: juniper-nsp at puck.nether.net
> > Subject: Re: [j-nsp] SRX5k problem
> > Date: Tue, 3 Sep 2013 14:19:35 +0000
> >
> > Failover works fine on my 5800 cluster. I use direct connections for
> > fabric and control.
> > It sounds like you're losing traffic in Z-mode. I'd start by taking a
> > serious look at your fabric links.
> > Do you have dual REs in each chassis for the dual control links?
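> > The usual first look (standard commands):
> >
> >     show chassis cluster status        # RG state and priority per node
> >     show chassis cluster interfaces    # control and fabric link status
> >     show chassis cluster statistics    # probes sent/received on each link
> >
> > If fabric probes are being dropped there, Z-mode data traffic almost
> > certainly is as well.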
> >
> >
> > On Sep 3, 2013, at 7:34 AM, R S wrote:
> >
> > > We have a geographically separated SRX5800 chassis cluster with two
> > > redundancy groups (RG0 and RG1).
> > >
> > > We are having many issues when failing over RG1, with and without
> > > preemption. When RG1 becomes active on node1, real traffic is lost.
> > > Likewise with preemption: when node0 becomes active again for RG1, a lot
> > > of traffic is lost.
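> > > (The RG1 priority/preempt configuration in question is along these
> > > lines; the priority values are examples:)
> > >
> > >     set chassis cluster redundancy-group 1 node 0 priority 200
> > >     set chassis cluster redundancy-group 1 node 1 priority 100
> > >     set chassis cluster redundancy-group 1 preempt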
> > >
> > > There are dual control and fabric links, both running through a Layer 2
> > > infrastructure: one pair through an MX960 and one pair through an EX8200.
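> > > (For the record, the SRX5k control ports live on SPCs and are tied down
> > > roughly like this; the slot numbers are examples, with fpc 13 being
> > > node1's slot 1, and the second control link adding another pair:)
> > >
> > >     set chassis cluster control-ports fpc 1 port 0
> > >     set chassis cluster control-ports fpc 13 port 0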
> > >
> > > Any similar nightmares?
> > >
> > > Tks
> > > _______________________________________________
> > > juniper-nsp mailing list juniper-nsp at puck.nether.net
> > > https://puck.nether.net/mailman/listinfo/juniper-nsp
> >
>
> _______________________________________________
> juniper-nsp mailing list juniper-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
--
Phil Fagan
Denver, CO
970-480-7618