[c-nsp] Nexus 5K FCoE to FC breakout

Brad Hedlund brhedlun at cisco.com
Thu Apr 16 00:25:45 EDT 2009


If "legacy FC devices" means FC attached storage arrays, well, that would be
just about everything out there today.  Current and next-generation CNAs
do not operate any differently in how FC attached storage is accessed (via a
Nexus 5K with FC uplinks).  Even with FCoE attached storage the Nexus 5K is
still a key piece of the server access architecture.

iSCSI at 10GE has its challenges: there is an order-of-magnitude increase
in TCP processing requirements at 10GE vs. 1GE, roughly 10x more buffering
required for TCP windowing to sustain 10GE throughput under latency, and 10x
more packets per second requiring TCP offload processing.  All of this drives
up the cost of the 10GE iSCSI HBA.  Not all 10GE iSCSI HBAs will have these
resources, so it will be interesting to see how those adapters perform under
varying latencies and varying loads vs. FCoE.
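The buffer-scaling point above follows directly from the bandwidth-delay
product: to keep a TCP pipe full, the sender must buffer roughly one
round-trip's worth of data in flight. A small sketch (the 1 ms RTT is an
illustrative value, not a figure from this post):

```python
def bdp_bytes(link_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: bytes of in-flight data (and hence TCP
    window buffering) needed to keep a link of this speed full at this RTT."""
    return link_bps * rtt_s / 8  # divide by 8: bits -> bytes

# Example: 1 ms round-trip time (hypothetical, for illustration only)
rtt = 1e-3
buf_1ge = bdp_bytes(1e9, rtt)    # 1GE
buf_10ge = bdp_bytes(10e9, rtt)  # 10GE

print(f"1GE:   {buf_1ge / 1e3:.0f} KB")
print(f"10GE:  {buf_10ge / 1e3:.0f} KB")
print(f"ratio: {buf_10ge / buf_1ge:.0f}x")
```

At the same latency, the required windowing buffer scales linearly with
link speed, which is where the "10x more buffers" at 10GE comes from.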

FCoE does not have the TCP processing overhead and leverages the hardware
capabilities of the Nexus 5000 to provide the lossless transport to storage,
regardless of whether the array is FC or FCoE attached.
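For reference, enabling FCoE on a Nexus 5000 and mapping a server-facing
Ethernet port to a virtual Fibre Channel interface looks roughly like the
following NX-OS sketch (interface names and the VSAN/VLAN numbers are
illustrative, not taken from this thread):

```
! Hypothetical NX-OS configuration sketch for Nexus 5000 FCoE
feature fcoe

! Map an FCoE VLAN to a VSAN
vlan 100
  fcoe vsan 100

vsan database
  vsan 100

! Bind a virtual FC interface to the server-facing 10GE port
interface vfc1
  bind interface Ethernet1/1
  no shutdown

vsan database
  vsan 100 interface vfc1

! The physical port trunks the FCoE VLAN alongside data VLANs
interface Ethernet1/1
  switchport mode trunk
  switchport trunk allowed vlan 100
```

Traffic arriving on vfc1 is then forwarded into the FC fabric via the
switch's native FC uplinks, whether the array itself is FC or FCoE attached.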


Cheers,

Brad Hedlund
bhedlund at cisco.com
http://www.internetworkexpert.org



On 4/15/09 7:21 PM, "Justin C Darby" <jcdarby at usgs.gov> wrote:

> 
> Hello David,
> 
> This is entirely my personal opinion and I'm sure some folks in the Nexus
> BU at Cisco would hit me for saying this given the chance.
> 
> Unless you are using legacy FC devices, hold off on the 5K for this. The
> reason I say this is because a new class of storage devices and HBAs that
> use 10GbE natively are hitting the market. Some vendors are mostly there,
> others not at all. I believe QLogic has HBAs available for this, and I
> know the major storage vendors are working on bringing FCoE storage devices
> to market. You've also got alternatives to FCoE that can use 10GbE for
> native transport now (iSCSI/ATA-over-Ethernet/etc).
> 
> The operating costs (relative to performance) of using 10GbE for native
> FCoE are considerably more advantageous than just consolidating 4x FC onto
> 10GbE. However, if you've already got a bunch of FC gear and you want to
> consolidate the transport, there are people using 5Ks for this (though I
> am not one of them), and given my experience with the 7K I am sure it'll
> work out as designed.
> 
> Have fun,
> Justin
> 
> P.S. Opinions here are my own, not the views of the U.S. Government, etc.
> 
> -----cisco-nsp-bounces at puck.nether.net wrote: -----
> To: "Cisco NSP (E-mail)" <cisco-nsp at puck.nether.net>
> From: David Hughes
> Sent by: cisco-nsp-bounces at puck.nether.net
> Date: 04/15/2009 07:07PM
> Subject: [c-nsp] Nexus 5K FCoE to FC breakout
> 
> Hi
> 
> Seeing as this is all bleeding edge, I'd be very interested in any
> first-hand experiences with breaking out FCoE to traditional FC via an
> N5K.  Is it working OK?  Are you running it as a switch or in NPV mode?
> How's the interop with your FC fabric (and whose gear are you using for
> FC switching)?  Whose CNAs are downstream of the N5K?  Any thoughts,
> observations, etc. you can share about this brave new world would be
> greatly appreciated.
> 
> Thanks
> David ...
> 
> _______________________________________________
> cisco-nsp mailing list  cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
