[j-nsp] Quick Question About HA Setup

Mark Menzies mark at deimark.net
Mon Jul 16 06:29:14 EDT 2012


Good point.

Basically, if we use a single switch to connect the 2 SRXs in a cluster, we
introduce that switch as a single point of failure.  If you are dead set on
separating your cluster nodes with switches, use 2 separate switches, one for
control and one for data, and keep the traffic on different VLANs.
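
For illustration, here is a rough sketch of what the switch side could look
like on a pair of EX boxes (pre-ELS syntax), assuming ge-0/0/10 faces the SRX
control port, ge-0/0/11 faces the fabric port and xe-0/1/0 is the interconnect
towards the far switch.  All port numbers and VLAN IDs here are placeholders,
not a recommendation:

  # Keep the two HA links in their own access VLANs
  set vlans ha-control vlan-id 100
  set vlans ha-fabric vlan-id 101
  # Port towards the SRX control link
  set interfaces ge-0/0/10 unit 0 family ethernet-switching port-mode access
  set interfaces ge-0/0/10 unit 0 family ethernet-switching vlan members ha-control
  # Port towards the SRX fabric link; fabric traffic typically uses jumbo
  # frames, so raise the physical MTU along the whole path (9216 as an example)
  set interfaces ge-0/0/11 mtu 9216
  set interfaces ge-0/0/11 unit 0 family ethernet-switching port-mode access
  set interfaces ge-0/0/11 unit 0 family ethernet-switching vlan members ha-fabric
  # Carry both HA VLANs across the interconnect (or, better, split them over
  # two separate switches as suggested above)
  set interfaces xe-0/1/0 mtu 9216
  set interfaces xe-0/1/0 unit 0 family ethernet-switching port-mode trunk
  set interfaces xe-0/1/0 unit 0 family ethernet-switching vlan members [ ha-control ha-fabric ]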

Although technically this DOES work and is indeed supported, for all the
reasons below I would weigh this option carefully before committing to it.
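
On the SRX side the config is no different from a back-to-back cluster.  Very
roughly, and with placeholder port numbers (the control port is fixed per
platform, the fab member interfaces are whatever revenue ports you cable
towards the switches):

  # On node 0 (operational mode); the box reboots into cluster mode
  set chassis cluster cluster-id 1 node 0 reboot
  # On node 1
  set chassis cluster cluster-id 1 node 1 reboot

  # Configuration mode: fabric links over the ports facing the switches
  set interfaces fab0 fabric-options member-interfaces ge-0/0/2
  set interfaces fab1 fabric-options member-interfaces ge-5/0/2

  # Sanity checks once both nodes are back up
  show chassis cluster status
  show chassis cluster interfaces

The switches in between just need to carry those links transparently; the SRXs
themselves have no idea they are not directly cabled.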

HTH

On 16 July 2012 11:20, Mike Devlin <gossamer at meeksnet.ca> wrote:

> Although it can work, it's recommended that you don't.
>
> Any latency spikes between the switches can cause the cluster to split, and
> you will suddenly be in a split-brain scenario.
>
> I had a short talk with A-TAC about it a while back and they highly
> recommended against it for our build out.
>
>
> On Mon, Jul 16, 2012 at 5:16 AM, Mark Menzies <mark at deimark.net> wrote:
>
>> Hiya bud
>>
>> Yes that can work here.
>>
>> Just make sure that the SRXs are less than 100ms apart and that each sync
>> connection, both fabric and control, is on its own VLAN.
>>
>> HTH
>>
>>
>>
>> On 16 July 2012 10:04, Spam <spam-me at fioseurope.net> wrote:
>>
>> > Is it possible to connect 2 SRX devices together into an HA cluster by
>> > connecting the Control & Fabric interlinks via switches, or must they be
>> > directly connected?
>> >
>> > My planned setup is as follows:
>> >
>> > SRX<->Switch<->10GB Xconnect<->Switch<->SRX
>> >
>> > I can also give each connection its own dedicated VLAN if that would
>> > help.
>> >
>> > Spammy
>> >
>> >
>> > _______________________________________________
>> > juniper-nsp mailing list juniper-nsp at puck.nether.net
>> > https://puck.nether.net/mailman/listinfo/juniper-nsp
>> >
>> _______________________________________________
>> juniper-nsp mailing list juniper-nsp at puck.nether.net
>> https://puck.nether.net/mailman/listinfo/juniper-nsp
>>
>
>

