[j-nsp] EX4600 or QFX5110
Alex Martino
Alex.martino at protonmail.com
Fri Mar 15 17:16:52 EDT 2019
Hi,
Thank you all for sharing your expertise.
I am wondering if the EX4600 supports VXLAN in VXLAN-to-VLAN mode. I see many parts of the documentation that refer to VXLAN, such as https://www.juniper.net/documentation/en_US/junos/topics/concept/vxlan-constraints-qfx-series.html, but the datasheet does not mention VXLAN or EVPN anywhere.
Can anyone confirm whether the EX4600 supports EVPN, SPB, TRILL, fabric technologies, or just VXLAN?
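For reference, this is roughly the kind of VLAN-to-VXLAN mapping I have in mind, based on the QFX EVPN-VXLAN examples (a sketch only; names, IDs and addresses are made up, and I have not tried this on an EX4600):

  set vlans vlan100 vlan-id 100
  set vlans vlan100 vxlan vni 10100
  set switch-options vtep-source-interface lo0.0
  set switch-options route-distinguisher 192.0.2.1:1
  set switch-options vrf-target target:65000:1
  set protocols evpn encapsulation vxlan
  set protocols evpn extended-vni-list all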
Thanks,
Alex
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Wednesday, March 13, 2019 6:37 PM, Andrey Kostin <ankost at podolsk.ru> wrote:
> Hi guys,
>
> My 0.02: we use the QFX5100 in a VC and it's pretty solid. But, as mentioned,
> it's a single logical switch and by design it can't run members on
> different Junos versions, which means downtime when you need to upgrade
> it. There is ISSU, but it has its own caveats, so be prepared to
> afford some downtime for a reboot. For example, there was an issue with
> QoS that required both a Junos and a host OS upgrade, so a full reboot was
> unavoidable in that case. Maybe I'm missing something; I would like to hear
> about your best practices regarding VC high availability.
>
> For simple L3 routing the QFX5100 works well, but when I tried to run PIM on
> irb interfaces it behaved strangely, so I had to roll back and move
> PIM to the routers because I didn't have time to investigate.
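> Roughly what I had configured, from memory (a sketch; the addresses, VLAN and
> RP are made up):
>
>   set interfaces irb unit 100 family inet address 10.0.100.1/24
>   set vlans vlan100 l3-interface irb.100
>   set protocols pim interface irb.100 mode sparse
>   set protocols pim rp static address 10.255.0.1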
> We run VCs with two members only. We tried EX4300 with up to 8 members, but it
> was very sluggish. Thankfully for us, 96 ports is enough for a ToR switch in
> most cases.
> Regarding VCF, from reading the docs my understanding is that
> it's the same control plane as VC but with a spine-leaf topology instead
> of a ring. Because we use only 2-member VCs, there is no added value in
> it for us. It seems to me that VCF can't eliminate the concern about reboot
> downtime, and the more switches you have, the bigger the impact you can get.
>
> I'm interested to hear about experiences running EVPN/VXLAN,
> particularly with QFX10k as the L3 gateway and QFX5k as spine/leaves. As per
> the docs, it should be immune to any single switch going down, so it might be
> a candidate for a truly redundant design. As a downside I see the more
> complex configuration, at the very least: adding a VLAN means adding a routing
> instance, etc. There are also other questions about convergence,
> scalability, stability and code maturity.
> I would appreciate it if somebody could share feedback about operating
> EVPN/VXLAN.
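> For example, my understanding is that adding one VLAN ends up looking roughly
> like this on the gateway (a sketch based on the docs, not something I run yet;
> all names, VNIs and addresses are made up):
>
>   set vlans vlan200 vlan-id 200
>   set vlans vlan200 vxlan vni 10200
>   set vlans vlan200 l3-interface irb.200
>   set interfaces irb unit 200 family inet address 10.0.200.2/24 virtual-gateway-address 10.0.200.1
>   set routing-instances TENANT-1 instance-type vrf
>   set routing-instances TENANT-1 interface irb.200
>   set routing-instances TENANT-1 route-distinguisher 192.0.2.11:200
>   set routing-instances TENANT-1 vrf-target target:65000:200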
>
> Kind regards,
> Andrey
>
> Graham Brown wrote on 2019-03-12 15:40:
>
> > Hi Alex,
> > Just to add a little extra to what Charles has already said: the EX4600
> > has been around for quite some time, whereas the QFX5110 is a much newer
> > product, so the suggestion of the QFX over the EX could have been down to
> > this.
> > Have a look at the datasheets for any additional benefits that may suit
> > one over the other: table sizes, port counts, protocol support, etc. If in
> > doubt between the two, quote out the solution for each variant and see how
> > they best fit in terms of features and CAPEX/OPEX for your needs.
> > Just to echo Charles: remember that a VC / VCF is one logical switch
> > from a control plane perspective, so if you have two ToRs per rack, ensure
> > that the two are not part of the same VC or VCF. Then you can afford to
> > lose a ToR / series of ToRs for maintenance without breaking a sweat.
> > HTH,
> > Graham
> > Graham Brown
> > Twitter - @mountainrescuer https://twitter.com/#!/mountainrescuer
> > LinkedIn http://www.linkedin.com/in/grahamcbrown
> > On Wed, 13 Mar 2019 at 08:00, Anderson, Charles R cra at wpi.edu wrote:
> >
> > > Spanning Tree is rather frowned upon for new designs (for good reasons).
> > > Usually, if you have the ability to do straight L2 bridging, you can
> > > always do L3 on top of that. A routed spine/leaf design with an EVPN-VXLAN
> > > overlay for L2 extension might be a good candidate and is typically the
> > > answer given these days.
> > > I'm not a fan of proprietary fabric designs like VCF or MC-LAG. VC is
> > > okay, but I wouldn't use it across your entire set of racks, because you
> > > are creating a single management/control plane as a single point of
> > > failure with shared fate for the entire 6 racks. If you must avoid L3 for
> > > some reason, I would create an L2 distribution-layer VC out of a couple of
> > > QFX5110s and dual-home independent top-of-rack switches to that VC so each
> > > rack switch is separate. I've used 2-member VCs with QFX5100 without
> > > issue. Just be sure to enable "no-split-detection" if and only if you have
> > > exactly 2 members. Then interconnect the distribution VCs at each site
> > > with regular LAGs.
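> > > Something along these lines (a sketch; serial numbers and interface names
> > > are placeholders):
> > >
> > >   set virtual-chassis preprovisioned
> > >   set virtual-chassis member 0 serial-number ABC1111
> > >   set virtual-chassis member 0 role routing-engine
> > >   set virtual-chassis member 1 serial-number ABC2222
> > >   set virtual-chassis member 1 role routing-engine
> > >   set virtual-chassis no-split-detection
> > >   set chassis aggregated-devices ethernet device-count 8
> > >   set interfaces xe-0/0/47 ether-options 802.3ad ae0
> > >   set interfaces xe-1/0/47 ether-options 802.3ad ae0
> > >   set interfaces ae0 aggregated-ether-options lacp active
> > >   set interfaces ae0 unit 0 family ethernet-switching interface-mode trunk
> > >   set interfaces ae0 unit 0 family ethernet-switching vlan members all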
> > > On Tue, Mar 12, 2019 at 06:36:49PM +0000, Alex Martino via juniper-nsp
> > > wrote:
> > >
> > > > Hi,
> > > > I am seeking advice.
> > > > I am working on an L2/L3 DC setup. I have six racks spread across two
> > > > locations. I need about 20 x 10 Gbps ports per rack (x2 for redundancy)
> > > > and low bandwidth between the two locations, around 1 Gbps. Nothing
> > > > special here.
> > > > At first sight, the EX4600 seems like a perfect fit, with the Virtual
> > > > Chassis feature in each rack to avoid spanning tree across all racks.
> > > > Essentially, I would imagine one VC cluster of 6 switches per location
> > > > and running spanning tree between the two locations, where L3 is not
> > > > possible.
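> > > > Roughly what I picture for the inter-site L2 link (a sketch; the AE
> > > > interface and bridge priority are placeholders):
> > > >
> > > >   set interfaces ae0 unit 0 family ethernet-switching interface-mode trunk
> > > >   set interfaces ae0 unit 0 family ethernet-switching vlan members all
> > > >   set protocols rstp interface ae0 mode point-to-point
> > > >   set protocols rstp bridge-priority 4k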
> > > > I have been told to check the QFX5110 without much context, other than
> > > > to not do VC but only VCF with QFXs. As such, and after doing my
> > > > searches, my findings would suggest that the EX4600 is a good candidate
> > > > for VC but does not support VCF, whereas the QFX5110 would be a good
> > > > candidate for VCF but not for VC (although the feature seems to be
> > > > supported). And I have been told to either use VC or VCF rather than
> > > > MC-LAG.
> > > > Any suggestions?
> > > > Thanks,
> > > > Alex
> > >