[c-nsp] 1G (SFP) single-mode aggregation

Peter Rathlev peter at rathlev.dk
Thu Dec 15 14:38:25 EST 2011

On Thu, 2011-12-15 at 14:44 +0100, Peter Rathlev wrote:
> We've been asked to look at how one could best cram a fair amount of SFP
> links into not too much space. They are downlinks to FTTO switches. When
> going full scale we're talking about maybe 2500 FTTO switches.

Thank you for all the on and off list answers. It's all about an FTTO
solution for a hospital. We see two options for the logical topology:

 1) If these boxes are L3 capable and sane we would have them be
    PE devices in the MPLS network. The 6513E/Sup2T combination
    would fit this description in my eyes, and is probably the
    baseline that alternatives will be measured by.

 2) If the devices were not suited for L3 services we would
    deliver these on something like 6500s behind these, with
    Sup2Ts and 6816-10G modules.

Services are quite simple. At L2 we would need some kind of rapid STP
protocol (Rapid PVST+ or maybe MST), reliable storm-control and sane
multicast. QoS would mainly be an EF capability, marking and classifying
based on L4 information. What we now have with 3560X access switches is
sufficient. LLDP for our mapping and tracing tools.
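To make the L2 requirements concrete, here is a minimal sketch of what they look like as IOS configuration in the 3560X style we run today. Interface names, VLANs, thresholds and the L4 port range are invented for illustration:

```
! Illustrative only: names, levels and port ranges are made up.
spanning-tree mode rapid-pvst
lldp run
!
ip access-list extended VOICE-PORTS
 permit udp any any range 16384 32767
!
class-map match-any VOICE-L4
 match access-group name VOICE-PORTS
policy-map MARK-EF
 class VOICE-L4
  set dscp ef
!
interface GigabitEthernet1/0/1
 description downlink to FTTO switch
 switchport mode trunk
 storm-control broadcast level 1.00
 storm-control multicast level 5.00
 storm-control action trap
 service-policy input MARK-EF
```

Any candidate box would need equivalents of roughly this: rapid STP, per-port storm-control, LLDP, and classification/marking on L4 information.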

At L3 we would need IS-IS, MPLS L3VPN that interoperates with Cisco
equipment, some kind of FHRP (like HSRP), in- and outbound ACLs, uRPF
and multicast capabilities; we're using draft-rosen for the latter
currently, though multicast traffic across the MPLS core is close to
zero.

No shaping is needed, but policing and prioritizing is. Of course actual
working v4+v6 for everything, or at least a trustworthy v6 road map.
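For option 1, those PE-side requirements map onto familiar IOS configuration. A hedged sketch, with VRF names, addresses, NET, rates and ACL names all invented:

```
! Illustrative only: all names, addresses and rates are made up.
vrf definition HOSPITAL
 rd 64512:100
 address-family ipv4
  route-target both 64512:100
!
router isis CORE
 net 49.0001.0100.0100.1001.00
 is-type level-2-only
 metric-style wide
!
interface Vlan100
 vrf forwarding HOSPITAL
 ip address 10.100.0.2 255.255.255.0
 ip verify unicast source reachable-via rx
 ip access-group EDGE-IN in
 standby 100 ip 10.100.0.1
 standby 100 priority 110
 standby 100 preempt
!
policy-map EDGE-POLICE
 class class-default
  police cir 100000000
   conform-action transmit
   exceed-action drop
```

Whatever replaces or sits behind the access layer needs to do all of this (plus draft-rosen MVPN and the v6 equivalents) without surprises.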

Fiber termination is in adjacent racks. Other people are speccing that,
but I suppose we can expect 48 SM/LC cables coming in horizontally at
each U. Devices with vertical slots would need some serious cable
management. In other places we use vertical slots with the fiber
termination above, and we're thinking about that here too.

The point that things need to be serviceable is, believe it or not,
something we're still discussing. I seem to have colleagues that believe
in "deploy and never touch again" but I think we'll end up agreeing
serviceability is important. :-)

To sum up the different suggestions:

- Brocade FCX 624S-F was suggested. 1 RU and 24 SFP ports (960 per
rack). I'm unsure about how well the stacking works. We have no
experience with Brocade, but if the price is right it might be worth
considering.
- Juniper MX960 at 16 RU and 576 ports. So three of these in a "48 unit
rack", but I guess two in a 42 unit rack. That's 1152 ports per rack, or
1728 ports in those large racks.

- Juniper EX8216 at 21 RU and 768 ports (1536 per rack). Also
interesting, though maybe not exactly cheap. We'll look into it and
possible alternatives from them.

- HP 5412 with 12 J9537A (24 port SFP) modules in 7 RU, so 1728 SFP
ports in a rack.

- The most dense 4500 option would have to be 4506-E with 5 48-port
modules, making that 1200 ports per rack.

- Mark, who must think we're made of money, suggested the ASR 9922. ;-)
Its 20 line-card slots could take 19 A9K-40GE-L cards for a total of
760 SFP ports, plus an uplink module.
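The per-rack figures above are simple arithmetic: whole chassis that fit in the rack, times ports per chassis. A small Python sketch, assuming a 42U rack and the per-chassis counts quoted above (24, 576, 768, 288 and 240 ports); the thread's 960-port figure for the 1U Brocade corresponds to ~40 usable units rather than the full 42:

```python
# Rough per-rack SFP density for the suggested platforms. Illustrative
# arithmetic only: real racks lose units to patching, power and airflow.

RACK_U = 42  # assumed usable rack units

# (platform, rack units per chassis, SFP ports per chassis)
OPTIONS = [
    ("Brocade FCX 624S-F",          1,  24),
    ("Juniper MX960",              16, 576),
    ("Juniper EX8216",             21, 768),
    ("HP 5412 (12x J9537A)",        7, 288),
    ("Cisco 4506-E (5x 48-port)",  10, 240),
]

def ports_per_rack(units, ports, rack_u=RACK_U):
    """Whole chassis that fit in the rack, times SFP ports per chassis."""
    return (rack_u // units) * ports

for name, units, ports in OPTIONS:
    print(f"{name:30s} {ports_per_rack(units, ports):5d} ports / {RACK_U}U")
```

Against a 42U rack this reproduces the MX960 (1152), EX8216 (1536) and HP 5412 (1728) numbers quoted above; the Brocade and 4500 figures in the thread assume slightly different usable heights.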

Again: Thank you all for the feedback. And do keep it coming.
