[j-nsp] SRX650 cluster - ethernet switching issue

Morgan McLean wrx230 at gmail.com
Mon Jan 16 15:11:08 EST 2012


Is there any reason you aren't using reth groups? You can do an AE with
those.
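For example (a sketch only; interface numbers, the redundancy group and the
VLAN are illustrative assumptions, and on an SRX650 cluster node 1's ports
show up as ge-9/0/x):

  # two child links per node, bundled under one reth, with LACP towards the stack
  set chassis cluster reth-count 1
  set interfaces ge-0/0/10 gigether-options redundant-parent reth0
  set interfaces ge-0/0/11 gigether-options redundant-parent reth0
  set interfaces ge-9/0/10 gigether-options redundant-parent reth0
  set interfaces ge-9/0/11 gigether-options redundant-parent reth0
  set interfaces reth0 redundant-ether-options redundancy-group 1
  set interfaces reth0 redundant-ether-options lacp active
  # tagged subinterface per VLAN, instead of ethernet-switching trunks
  set interfaces reth0 vlan-tagging
  set interfaces reth0 unit 100 vlan-id 100
  set interfaces reth0 unit 100 family inet address 192.0.2.1/24

On the stack side you'd typically put all four links into a single ae with
LACP; only the links towards the currently active node join the bundle, so
there is no parallel path for STP to block.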

Morgan

On Mon, Jan 16, 2012 at 10:44 AM, Paulhamus, Jon <jpaulhamus at iu17.org> wrote:

> Thanks for the reply.
>
> The case was not that I was connecting end points directly to the SRX;
> it's that I wanted 2 trunk links between each SRX node and the switch stack
> (there is only 1 switch stack), each of the 2 links carrying different
> VLANs.  So - 2 links from the primary node to the switch stack, and 2
> links from the secondary node to the switch stack as well...  this does not
> work with STP enabled, as 3 of the 4 links get blocked by STP.  I needed a
> setup like this because I'm pushing over 1 Gb/s through each switch link.
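>
> Roughly what I had in mind per node, with illustrative interface and VLAN
> names (not my actual config):
>
>   # node 0: two independent trunks to the stack, carrying different VLANs
>   set interfaces ge-0/0/10 unit 0 family ethernet-switching port-mode trunk
>   set interfaces ge-0/0/10 unit 0 family ethernet-switching vlan members vlan-data
>   set interfaces ge-0/0/11 unit 0 family ethernet-switching port-mode trunk
>   set interfaces ge-0/0/11 unit 0 family ethernet-switching vlan members vlan-voice
>
> The same again on node 1 means the stack sees four parallel links into one
> L2 domain, and STP leaves only one of them forwarding.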
>
> -----Original Message-----
> From: Pavel Lunin [mailto:plunin at senetsy.ru]
> Sent: Monday, January 16, 2012 9:32 AM
> To: Paulhamus, Jon
> Cc: Ben Dale; juniper-nsp at puck.nether.net
> Subject: Re: [j-nsp] SRX650 cluster - ethernet switching issue
>
>
>
> Sorry, I missed this reply because of the New Year holidays.
>
> >> BTW, I never could understand people running L2 on an SRX650 coupled
> >> with a normal switch, especially in an SRX cluster + EX VC. What for?
> >
> > Why not?  If you have more devices that need access to specific VLAN
> > zones on the SRX, and you're low on physical interfaces, why not use a
> > switch?  This can be extremely handy when bringing trunks into VMware
> > servers.
> >
>
> When you build a FW cluster, you must have a pair of supporting switches
> in almost any design anyway: either each SRX connected to its own switch
> (I prefer this) or a full mesh (people like this, but there is not much
> sense in it, IMHO). So in terms of the number of physical ports, it seems
> like providing them is not the SRX's job (in most cases). Although in case
> of a port deficit, this can be a kind of workaround, I agree.
>
> > I'm not sure what you're saying about "especially in a cluster" either -
> > clustering of the firewalls is solely for redundancy in my situation.
>
> If you physically connect something to an SRX in cluster mode, rather than
> to the supporting switches, it becomes complicated to teach that device to
> switch traffic to a different SRX node in case of a failure. Say you have
> SRX1 and SRX2 in a cluster, SRX1 connected to SW1 and SRX2 connected to
> SW2, and SRX1 is primary for RG0 and RG1. Say SW1 and SW2 form a VC. You
> have some devices connected to the VC (most probably using LAGs) and some
> devices connected to the SRXes themselves (LAG is not supported here,
> AFAIR). Let's say SW1 fails and the SRX1-SW1 link goes down with it. RG1
> switches to the backup node SRX2. But how will the directly connected
> device know it should forward traffic to the second node? If it still
> sends it to SRX1, this will lead to h-shape forwarding through the swfab
> link (not good). xSTP can help, but it moves the solution even further
> from ideal (I don't even want to think of what could happen with STP in
> case of an RG0 switchover on the SRX). And you must run xSTP anyway, since
> you now have two switch nodes instead of one.
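>
> (With reths, by contrast, this part is handled for you: link loss can be
> tied to an RG failover via interface-monitor, and the newly active node
> sends gratuitous ARPs for the reth MAC so the switches re-learn the path.
> A minimal sketch, with illustrative interfaces and weights:
>
>   set chassis cluster redundancy-group 1 node 0 priority 200
>   set chassis cluster redundancy-group 1 node 1 priority 100
>   set chassis cluster redundancy-group 1 interface-monitor ge-0/0/10 weight 255
>   set chassis cluster redundancy-group 1 interface-monitor ge-9/0/10 weight 255
>
> A device cabled straight to the SRX gets no such signalling.)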
>
> Add to that the operational expense of managing and troubleshooting the
> switching configuration on the SRXes in addition to the VC, and the lack
> of some switching features. I think it's cheaper overall to just add port
> capacity to the switch cluster. So while it does work in principle, as a
> design for a new setup I'd say it's a bit clumsy.
>
> _______________________________________________
> juniper-nsp mailing list juniper-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>

