[j-nsp] MX104 Limitations
Phil Rosenthal
pr at isprime.com
Wed Jun 24 09:58:54 EDT 2015
Comments inline below.
> On Jun 24, 2015, at 9:08 AM, Colton Conor <colton.conor at gmail.com> wrote:
>
> We are considering upgrading to a Juniper MX104, but another vendor (not
> Juniper) pointed out the following limitations about the MX104 in their
> comparison. I am wondering how much of it is actually true about the MX104?
> And if true, is it really that big of a deal?:
>
None of these are showstoppers for everyone, but depending on your requirements, some of these might or might not be a problem.
In essentially all of these, there is a question of "Well, what are you comparing it against?", as most things in that size/price range will have compromises as well.
Obviously this list came from a competitor with a biased viewpoint, so it lists nothing but problems with Juniper. Consider that there are also positives.
For example, in software, most people here would rank JunOS > Cisco IOS > Brocade > Arista > Force10.
From question 12, it seems that you are considering the Alcatel-Lucent 7750 as your alternative -- unfortunately, you won't find nearly as many people with ALU experience, so it will be a bit harder to get fair commentary comparing the two. It might also be harder to find engineers to manage them.
> 1. No fabric redundancy due to fabric-less design. There is no switch
> fabric on the MX104, but there is on the rest of the MX series. Not sure if
> this is a bad or good thing?
The Switch Fabric is itself very reliable, and not the most likely point of failure. In fact, in all of my years, I have not had a switch fabric fail on any switch/router from any vendor.
I consider a redundant switch fabric "nice to have".
For us, the MX480 makes much more sense than the MX104, and the MX480 has a redundant switch fabric.
>
> 2. The Chassis fixed ports are not on an FRU. If a fixed port fails,
> or if data path fails, entire chassis requires replacement.
>
True. That said, I have not had a failure on any Juniper MX 10G ports in production.
The only failures we have had are a few RE SSD failures, and an undetermined MPC failure that was causing occasional resets.
Our past experience with Cisco and Brocade has included much higher failure rates on fixed Ethernet ports.
> 3. There is no mention of software support for MACSec on the MX104,
> it appears to be a hardware capability only at this point in time with
> software support potentially coming at a later time.
>
We do not use this.
> 4. No IX chipsets for the 10G uplinks (i.e. no packet
> pre-classification, the IX chip is responsible for this function as well as
> GE to 10GE i/f adaptation)
>
The pre-classification may or may not be an issue for you.
As for GE to 10GE adaptation, I think you would be doing something very wrong if your goal was to connect gig-e links to these ports.
> 5. QX Complex supports HQoS on MICs only, not on the integrated 4
> 10GE ports on the PMC. I.e. no HQoS support on the 10GE uplinks
>
True. May or may not be an issue for you. There is some QoS capability on the built-in ports, but it is very limited. The 16x10G and 32x10G MPC cards on the MX240/480/960 have somewhat more QoS capability than these built-in ports. HQoS is only available on the -Q cards, which are much more expensive, whether on the MX104 or on the bigger MX chassis.
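To give a sense of what "very limited" still covers: plain port-level scheduling on one of the built-in 10G ports looks roughly like the sketch below. Treat it as a sketch only -- the interface name and percentages are placeholders, not pulled from a real MX104 config.

class-of-service {
    schedulers {
        BE-SCHED {
            /* bulk traffic: most of the port's bandwidth and buffer */
            transmit-rate percent 80;
            buffer-size percent 80;
            priority low;
        }
        NC-SCHED {
            /* routing protocol / control traffic */
            transmit-rate percent 20;
            buffer-size percent 20;
            priority high;
        }
    }
    scheduler-maps {
        PORT-SMAP {
            forwarding-class best-effort scheduler BE-SCHED;
            forwarding-class network-control scheduler NC-SCHED;
        }
    }
    interfaces {
        xe-2/0/0 {
            /* per-port queue weights only, no per-VLAN hierarchy here */
            scheduler-map PORT-SMAP;
        }
    }
}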
> 6. Total amount of traffic that can be handled via HQoS is restricted
> to 24Gbps. Not all traffic flows can be shaped/policed via HQoS due to a
> throughput restriction between the MQ and the QX. Note that the MQ can
> still however perform basic port based policing/shaping on any flows. HQoS
> support on the 4 installed MICs can only be enabled via a separate license.
> Total of 128k queues on the chassis
>
In most environments, there are a limited number of ports where HQoS is needed, so this may or may not be an issue.
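For the handful of ports where you do need it, per-unit HQoS on a MIC port is roughly of this shape. Again a sketch only, assuming the HQoS license is in place; the interface, VLAN, and rates are made up for illustration.

interfaces {
    ge-1/0/0 {
        /* enable hierarchical scheduling on the physical port */
        hierarchical-scheduler;
        vlan-tagging;
        unit 100 {
            vlan-id 100;
        }
    }
}
class-of-service {
    traffic-control-profiles {
        TCP-CUST-A {
            /* shape the customer VLAN to 50m, guarantee 20m */
            shaping-rate 50m;
            guaranteed-rate 20m;
        }
    }
    interfaces {
        ge-1/0/0 {
            unit 100 {
                output-traffic-control-profile TCP-CUST-A;
            }
        }
    }
}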
> 7. 1588 TC is not supported across the chassis as the current set of
> MICs do not support edge time stamping. Edge timestamping is only
> supported on the integrated 10G ports. MX104 does not presently list 1588
> TC as being supported.
>
We do not use TC, but there are more comments on 12 at the bottom.
> 8. BFD can be supported natively in the TRIO chipset. On the MX104,
> it is not supported in hardware today. BFD is run from the single core
> P2020 MPC.
>
> 9. TRIO based cards do not presently support PBB; thus it is
> presently not supported on the MX104. PBB is only supported on older EZChip
> based MX hardware. Juniper still needs a business case to push this forward
>
No comments on these 2.
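One general note on 8, though: whether BFD is handled inline in Trio or punted to the MPC/RE CPU mostly shows up in how aggressive you can safely make the timers. A minimal sketch of BFD on a BGP neighbor, with placeholder addresses and deliberately conservative timers:

protocols {
    bgp {
        group EBGP-PEERS {
            neighbor 198.51.100.2 {
                /* 300 ms x 3 = ~900 ms detection time; go tighter only
                   if you know BFD is distributed/hardware-assisted */
                bfd-liveness-detection {
                    minimum-interval 300;
                    multiplier 3;
                }
            }
        }
    }
}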
> 10. MX104 operating temperature: -40 to 65C, but MX5, MX10, MX40, MX80
> and MX80-48T are all 0-40C all are TRIO based. Seems odd that the MX104
> would support a different temperature range. There are only 3 temperature
> hardened MICs for this chassis on the datasheet: (1) 16 x T1/E1 with CE,
> (2) 4 x chOC3/STM1 & 1 x chOC12/STM4 with CE, (3) 20 x 10/100/1000 Base-T.
>
The MX104 is essentially a next-generation MX80. One of the major design goals was temperature hardening, enabling it for use in places like cell towers. For this use case, the port options make a lot of sense.
In a datacenter environment, if you are at 40C, you've got other problems.
> 11. Air-flow side-to-side; there is no option for front-to-back cooling
> with this chassis.
This is a major pet peeve of mine on many platforms. It seems that the only places you get proper airflow are high-end (10G/40G) ToR switches and large chassis.
That said, as long as your datacenter is not running at very high temperatures, this should not be an issue.
Since question 12 brought up SR-A[48] ...
https://www.alcatel-lucent.com/sites/live/files/7750sr_a8_f_right.gif
That looks like side-to-side airflow to me.
>
> 12. Routing Engine and MPC lack a built-in Ethernet sync port. If the
> chassis is deployed without any GE ports, getting SyncE or 1588 out of the
> chassis via an Ethernet port will be a problem. SR-a4/-a8 have a built-in
> sync connector on the CPM to serve this purpose explicitly.
You should probably read this page:
http://www.juniper.net/techpubs/en_US/junos13.3/topics/concept/chassis-external-clock-synchronization-interface-understanding-mx.html
We do not use TC/SyncE, but the SCB-E/SCB-E2 on the MX240/480/960 have built-in TC ports if you would like something with "dedicated" external clock interfaces.
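If you end up taking SyncE off an Ethernet port rather than a dedicated clock interface, the config is along these lines. We don't run this, so treat it purely as a sketch; the network option and interface name are placeholders.

chassis {
    synchronization {
        /* ITU option 1 (EEC-Option I) network */
        network-option option-1;
        source {
            interfaces ge-1/0/0 {
                priority 1;
            }
        }
    }
}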
> _______________________________________________
> juniper-nsp mailing list juniper-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp