[j-nsp] EX4600 or QFX5110

Richard McGovern rmcgovern at juniper.net
Fri Apr 19 13:22:20 EDT 2019


I know this thread is quite old, but I wanted to respond with some additional info.

As for a generic comparison, the EX4600 uses the exact same internal hardware (PFE) as the QFX5100, just with different packaging and potentially different feature support.  In this case, feature support means "what is tested and officially supported", not what the switch is [potentially] capable of.  BTW, the EX4600 base unit comes with 40 x 1/10GE (48 capable), while the QFX5100/5110 base unit includes 48 x 1/10GE.

The EX4600 is 'positioned' as a campus/Ethernet 10GE-capable switch, while the QFX5K series is positioned as a DC TOR switch.  So from a feature standpoint, the EX carries campus features while the QFX carries the DC feature set, in general.

There is a price difference between EX4600 and QFX as well.

As Chuck mentioned in one of his responses, Juniper has a product compare function:

https://apps.juniper.net/feature-explorer/compare-multiple.html

This link (hopefully it works for all!) compares EX4600 & VC / QFX5100 & VC / QFX5110 - https://apps.juniper.net/feature-explorer/compare-multiple.html#pkey=30504600%7CJunos%20OS%7C%7C30504601%7CJunos%20OS%7C%7C31705100%7CJunos%20OS%7C%7C31705101%7CJunos%20OS%7C%7C31705110%7CJunos%20OS&platforms=EX4600%7CEX4600-VC%7CQFX5100%7CQFX5100-VC%7CQFX5110&stat=0.9677659894813813

The QFX5110 DOES support VC, starting with 17.3R1.  Prior SW releases had the "capability", but it was not tested, so not "officially supported".  I "believe" the QFX5110 does not support mixed mode with the EX4300, while a QFX5100/EX4300 mixed VC is supported for sure.

The EX4600 DOES NOT support VCF.  Besides the ring design of VC versus the spine/leaf design of VCF, VCF also supports more members: VCF supports 20 members max, while VC is 10 members max.
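
For anyone who has not built one, a minimal preprovisioned two-member VC looks roughly like the sketch below; the serial numbers are placeholders, and exact platform/release behavior should be checked against the docs:

    set virtual-chassis preprovisioned
    set virtual-chassis member 0 role routing-engine serial-number AB0123456789
    set virtual-chassis member 1 role routing-engine serial-number CD0123456789

VCF uses the same member concept but is cabled and provisioned as spine/leaf rather than a ring.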

From an EVPN/VXLAN standpoint, Juniper is deploying this network architecture at a number of production customer sites.  The EX4600 and QFX5100 both support L2 VXLAN only (VLAN to VNI), while the QFX5110 supports L3 VXLAN, so both VLAN to VNI and VNI to VNI, as well as IP (outside world) to VNI (VXLAN world). Please see my other correction email regarding QFX5110 support for IP/VLAN to VNI routing.  QFX5110 VC is NOT supported in EVPN/VXLAN use cases.  It should not be needed, as ESI LAG allows dual or multiple connections to different TOR leafs. The QFX5100 does support VC with L2 VXLAN, but it is not highly recommended.
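
As a rough sketch of what the L2 (VLAN-to-VNI) piece looks like on a QFX5100-class leaf, with invented names/numbers and the RD/route-target plumbing omitted, so treat it as a fragment rather than a complete config:

    set vlans v100 vlan-id 100
    set vlans v100 vxlan vni 10100
    set switch-options vtep-source-interface lo0.0
    set protocols evpn encapsulation vxlan
    set protocols evpn extended-vni-list 10100

On a QFX5110 doing the L3 side, you would additionally put an irb on the VLAN so traffic can route between VNIs and to the outside world.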

MC-LAG is still supported on all of these products, although from a technology standpoint Juniper (and many others) is moving away from recommending MC-LAG-based designs in favor of EVPN/VXLAN (or EVPN/MPLS).  The major reasons are that EVPN is open while all MC-LAG flavors are closed/proprietary. More importantly, EVPN can scale horizontally (virtual GW) while MC-LAG is always limited to 2 nodes or combinations of 2 nodes. At this time, Juniper recommends the use of ESI LAG over MC-LAG.  VC as the choice is a completely different discussion, at least IMHO.
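
To illustrate the ESI LAG point: each leaf the host dual-homes to configures the same ESI and the same LACP system-id on its ae bundle, so the host just sees one ordinary LAG.  A hedged sketch with placeholder values:

    set interfaces ae0 esi 00:11:22:33:44:55:66:77:88:99
    set interfaces ae0 esi all-active
    set interfaces ae0 aggregated-ether-options lacp active
    set interfaces ae0 aggregated-ether-options lacp system-id 00:00:00:11:22:33
    set interfaces xe-0/0/10 ether-options 802.3ad ae0

Because any number of leafs can share the ESI, this is what lets EVPN scale horizontally where MC-LAG stops at 2 nodes.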

Hopefully this helps all.

Rich


Richard McGovern
Sr Sales Engineer, Juniper Networks 
978-618-3342


On 3/13/19, 1:37 PM, "Andrey Kostin" <ankost at podolsk.ru> wrote:

   Hi guys,

   My $0.02: we use the QFX5100 in a VC and it's pretty solid. But. As 
   mentioned, it's a single logical switch, and by design it can't run 
   members with different Junos versions, which means downtime when you 
   need to upgrade it. There is ISSU, but it has its own caveats, so be 
   prepared to afford some downtime for a reboot. For example, there was 
   an issue with QoS that required both a Junos and a host OS upgrade, so 
   a full reboot was inevitable in that case. Maybe I'm missing something; 
   I would like to hear about your best practices regarding VC high 
   availability.

   For simple L3 routing the QFX5100 works well, but when I tried to run 
   PIM on irb interfaces it behaved in a strange way, so I had to roll 
   back and move PIM to the routers because I didn't have time to 
   investigate.
   We run VCs with two members only. We tried an EX4300 VC with up to 8 
   members, but it was very sluggish. Thankfully for us, 96 ports is 
   enough for a ToR switch in most cases.
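
   For reference, a minimal sketch of PIM on an irb interface; the 
   addresses and names here are illustrative, not the actual config:

      set vlans v100 vlan-id 100
      set vlans v100 l3-interface irb.100
      set interfaces irb unit 100 family inet address 10.0.100.1/24
      set protocols pim rp static address 10.0.0.1
      set protocols pim interface irb.100 mode sparse
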
   Regarding VCF, from reading the docs my understanding is that it's the 
   same control plane as VC but with a spine-leaf topology instead of a 
   ring. Because we use only 2-member VCs, there is no added value in it 
   for us. It seems to me that VCF can't eliminate the concern about 
   reboot downtime, and the more switches you have, the more impact you 
   can get.

   I'm interested to hear about experiences running EVPN/VXLAN, 
   particularly with QFX10k as the L3 gateway and QFX5k as spines/leaves. 
   As per the docs, it should be immune to any single-switch downtime, so 
   it might be a candidate for a really redundant design. As a downside I 
   see the more complex configuration, at least: adding a VLAN means 
   adding a routing instance, etc. There are also other questions about 
   convergence, scalability, stability, and code maturity.
   I'd appreciate it if somebody could share feedback about operating 
   EVPN/VXLAN.
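
   To make the per-VLAN overhead concrete, here is roughly what adding one 
   VLAN to an L3 gateway involves; a sketch with invented names/numbers, 
   omitting the vrf instance-type and route-target details a real config 
   needs:

      set vlans v200 vlan-id 200
      set vlans v200 vxlan vni 10200
      set vlans v200 l3-interface irb.200
      set interfaces irb unit 200 family inet address 10.0.200.1/24
      set protocols evpn extended-vni-list 10200
      set routing-instances TENANT-A interface irb.200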

   Kind regards,
   Andrey


   Graham Brown wrote on 2019-03-12 15:40:
> Hi Alex,
> 
> Just to add a little extra to what Charles has already said: the EX4600
> has been around for quite some time, whereas the QFX5110 is a much newer
> product, so the suggestion of the QFX over the EX could have been down to
> this.
> 
> Have a look at the datasheets for any additional benefits that may suit
> one over the other: table sizes / port counts / protocol support, etc. If
> in doubt between the two, quote out the solution for each variant and see
> how they best fit in terms of features and CAPEX/OPEX for your needs.
> 
> Just to echo Charles: remember that a VC / VCF is one logical switch from
> a control plane perspective, so if you have two ToR per rack, ensure that
> the two are not part of the same VC or VCF. Then you can afford to lose a
> ToR / series of ToRs for maintenance without breaking a sweat.
> 
> HTH,
> Graham
> 
> Graham Brown
> Twitter - @mountainrescuer <https://twitter.com/#!/mountainrescuer>
> LinkedIn <http://www.linkedin.com/in/grahamcbrown>
> 
> 
>> On Wed, 13 Mar 2019 at 08:00, Anderson, Charles R <cra at wpi.edu> wrote:
>> 
>> Spanning Tree is rather frowned upon for new designs (for good reasons).
>> Usually, if you have the ability to do straight L2 bridging, you can
>> always do L3 on top of that.  A routed spine/leaf design with an
>> EVPN-VXLAN overlay for L2 extension might be a good candidate and is
>> typically the answer given these days.
>> 
>> I'm not a fan of proprietary fabric designs like VCF or MC-LAG.  VC is
>> okay, but I wouldn't use it across your entire set of racks because you
>> are creating a single management/control plane as a single point of
>> failure with shared fate for the entire 6 racks.  If you must avoid L3
>> for some reason, I would create an L2 distribution-layer VC out of a
>> couple of QFX5110s and dual-home independent top-of-rack switches to
>> that VC so each rack switch is separate.  I've used 2-member VCs with
>> QFX5100 without issue.  Just be sure to enable "no-split-detection" if
>> and only if you have exactly 2 members.  Then interconnect the
>> distribution VCs at each site with regular LAGs.
>> 
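For reference, that two-member knob is a single statement; a minimal sketch, and only safe when the VC really has exactly 2 members, since with more members it can mask a genuine split:

    set virtual-chassis no-split-detection
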
>> On Tue, Mar 12, 2019 at 06:36:49PM +0000, Alex Martino via juniper-nsp
>> wrote:
>>> Hi,
>>> 
>>> I am seeking advice.
>>> 
>>> I am working on an L2/L3 DC setup. I have six racks spread across two
>>> locations. I need about 20 x 10 Gbps ports per rack (x2 for redundancy)
>>> and low bandwidth between the two locations, ca. 1 Gbps. Nothing
>>> special here.
>>> 
>>> At first sight, the EX4600 seems like a perfect fit, with the Virtual
>>> Chassis feature in each rack to avoid spanning tree across all racks.
>>> Essentially, I would imagine one VC cluster of 6 switches per location
>>> and running spanning-tree between the two remote locations, where L3 is
>>> not possible.
>>> 
>>> I have been told to check the QFX5110 without much context, other than
>>> to not do VC but only VCF with QFXs. As such, and after doing my
>>> searches, my findings would suggest that the EX4600 is a good candidate
>>> for VC but does not support VCF, whereas the QFX5110 would be a good
>>> candidate for VCF but not for VC (although the feature seems to be
>>> supported). And I have been told to either use VC or VCF rather than
>>> MC-LAG.
>>> 
>>> Any suggestions?
>>> 
>>> Thanks,
>>> Alex
>> _______________________________________________
>> juniper-nsp mailing list juniper-nsp at puck.nether.net
>> https://puck.nether.net/mailman/listinfo/juniper-nsp
> _______________________________________________
> juniper-nsp mailing list juniper-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp





