[f-nsp] Features on Brocade Ethernet platforms

George B. georgeb at gmail.com
Sun Mar 13 01:55:27 EST 2011


2) TurboIron 24X as a core platform
>
> Is this switch a good solution for a pure 10GE core? We want to use it for:
> - interconnects to other core nodes via 1x10GE and 2x10GE LAGs
> - interconnects to upstream providers (e.g. GlobalCrossing, KPN and DE-CIX)
> - downlinks to PE routers via 2x10GE LAGs
> - downlinks to access switches (customer access switches)
> - in the core we transport multicast IPTV (around 900 Mbps); will that be
> a problem (e.g. with microbursts in the core)?
> - is the SFP+ ER (40 km, 1550 nm) optic supported in this switch?
> - is QinQ supported on this switch?
>
> If the TurboIron is not a good idea here, then what should we consider as
> an alternative? The MLX/RX is not an option here as it's too expensive. We
> need just 8 x 10GE ports (XFP preferred, but SFP+ is fine with us too).
>
>
An alternative to the TurboIron 24X might be something like the Arista 7100
series, depending on what features you need.  They produce an ER optic.  One
nice feature of this switch is mLAG ("multi-chassis LAG"), similar to the
Brocade MCT available on the MLX/XMR/CER/CES.  It allows a pair of uplinks
from an access switch to be bonded as a single LAG, with one link to one core
switch and the other link to the second, both active/active.  Basically it
lets you get rid of spanning tree without pushing layer 3 out to the access
switches.  They do QinQ but don't do v6 routing in hardware at this time
(it's coming later this year).
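
As a rough illustration (not from the original post), the core-pair side of
an mLAG setup in Arista EOS looks something like the sketch below.  The
VLAN, addresses, domain name, and port numbers are all hypothetical, and
exact syntax may differ by EOS release:

    vlan 4094
    interface Vlan4094
       ip address 10.255.255.1/30      ! .2/30 on the peer switch
    interface Port-Channel10
       switchport mode trunk           ! inter-switch peer link
    mlag configuration
       domain-id core
       local-interface Vlan4094
       peer-address 10.255.255.2
       peer-link Port-Channel10
    ! downlink toward an access switch; use the same mlag id on both cores
    interface Ethernet1
       channel-group 100 mode active
    interface Port-Channel100
       mlag 100
    ! QinQ is typically a dot1q-tunnel port mode; verify on your EOS release
    interface Ethernet2
       switchport access vlan 200
       switchport mode dot1q-tunnel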
I use these as aggregation switches in cages remote from my core
infrastructure.  I might have nearly a dozen FCX top-of-rack switches in a
remote cage aggregated to a pair of Aristas, and those switches then go over
a pair of uplinks (one each in an mLAG) to a pair of MLX core switches,
instead of having to do a long-distance uplink from each of the access
switches.  From the FCX side nothing mLAG-specific is required, since the
Arista pair just looks like a single LACP partner; a rough sketch of the
uplink LAG follows below.
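
A minimal sketch of that FCX-side dynamic LAG (port numbers are made up, and
exact syntax varies by FastIron release):

    lag uplink dynamic
     ! the two 10G uplink ports, one toward each Arista
     ports ethernet 1/3/1 ethernet 1/3/2
     primary-port 1/3/1
     deploy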
I have considered using the Aristas for a core application, but the lack of
a v6 routing protocol clobbered that option.  In that case I went instead
with a pair of FCX units doing core routing and a pair of Aristas for 10G
port fanout.  So the Aristas act as 10G layer 2 ports hanging off the FCX
pair, which is actually doing the routing in that application.