[f-nsp] NetIron MLX-4 vs Juniper MX240

David Kotlerewsky webnetwiz at gmail.com
Sat May 8 01:14:31 EDT 2010


Ah, good point, but... the MX80 will support RE redundancy via Virtual
Chassis technology with another MX80, so you get not only RE redundancy but
chassis redundancy as well, which is not something you can get with a single MLX.

On Fri, May 7, 2010 at 2:51 PM, David Ball <davidtball at gmail.com> wrote:

>    ...except that the MX80 won't have redundant REs (management modules)
> or switch fabrics like the MLX does.
>
> D
>
>
> On 7 May 2010 14:38, David Kotlerewsky <webnetwiz at gmail.com> wrote:
> > I'd say wait for the MX80 to come out, and then compare the two. The MX80
> > will have a much more attractive price point than the MX240. Then you can
> > have a decent comparison between, say, an MLX-4 and an MX80.
> >
> > Sincerely,
> >
> > David Kotlerewsky
> > Sr. Systems Engineer
> > InterVision Systems Technologies, Inc.
> > www.intervision.com
> >
> > On Fri, May 7, 2010 at 12:56 PM, Debbie Fligor <fligor at illinois.edu> wrote:
> >>
> >> On May 7, 2010, at 6:21, Scott T. Cameron wrote:
> >>
> >> >
> >> > On the MLX side of things: with a rather large Foundry switching
> >> > environment, my team and I are very comfortable on that platform.  The
> >> > switches just work, and the strangest problem I have seen is interop
> >> > problems with a Cisco -- and I blame Cisco for that.  We have had some
> >> > struggles using the Foundry ServerIron due to a few bugs here and there.
> >> > I do, however, expect that a full layer-3 stack is significantly less
> >> > complicated code than what the ServerIron is able to do, so it should
> >> > have fewer bugs.
> >> >
> >>
> >>
> >> Our backbone (core and distribution layers) is built on MLX routers (I'm
> >> at the Urbana campus). The regional network that connects the three
> >> campuses also uses MLX routers. I would not suggest you assume fewer bugs
> >> than you've seen in the ServerIrons.
> >>
> >> We typically see far fewer headaches with their L2 devices than with
> >> their L3 devices, but we've moved to HP over the years for
> >> price/performance for most of our L2 access ports.  Our experience with
> >> BGP and OSPF is that those protocols are pretty solid.  ACLs are buggy,
> >> at least if you use them on a ve instead of a physical port, and PIM/MSDP
> >> is one of those things you keep your fingers crossed about with every
> >> single software upgrade, hoping that they fix more things than they break
> >> and that they will move toward being fully standards-compliant.  mBGP
> >> (for multicast) is hit and miss, at least on our MPLS-based regional
> >> network.  Some of that is config choices we made; some is their
> >> (apparent) lack of QA for anything multicast.
> >>
> >> They do fast hardware switching, they reboot quickly, and they're
> >> usually pretty good about fixing problems once you finally nail down
> >> what the problem is.  I'll echo what someone else said: if your needs are
> >> simple, they'll probably work fine.  I can't honestly recommend them if
> >> you run PIM or MSDP, however; that has been (and still is) a nightmare to
> >> keep working correctly.
> >>
> >> I can't compare to the Juniper MX; we haven't got any of those.
> >>
> >> -----
> >> -debbie
> >> Debbie Fligor, n9dn       Network Engineer, CITES, Univ. of Il
> >> email: fligor at illinois.edu          <http://www.uiuc.edu/ph/www/fligor>
> >>
>