[c-nsp] Nexus 5K FCoE to FC breakout
Justin C Darby
jcdarby at usgs.gov
Thu Apr 16 22:59:22 EDT 2009
Sure. I'm going to try to be really brief and as funny as possible about what
was a genuinely traumatic experience last year, helping a technology group
that was and still is undergoing massive growth. :)
This is slightly OT but, well, this is c-nsp, and I'm sure some of you
somewhere are dealing with storage I/O issues and can appreciate it.
I am now near the end of a 12-step process...
Step 1: Install Blade server chassis. Populate with blades.
Step 2: Spend a month tuning PL/SQL apps deployed to Blades. Realize you
can't make any more progress because you are I/O bound. Curse Blade servers
for having limited I/O connectivity options.
Step 2.5: Realize you can't spend $300,000 on a Fibre Channel deployment
while also meeting the rest of your yearly deliverables.
Step 3: Translate your average I/O bandwidth and IOPS load for your
application into how many hard drives you need spinning (in my case, I've
got 48 7200 RPM 1TB SATA drives - we do data warehousing, mostly, on huge
datasets).
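(For anyone doing that math for the first time, the back-of-the-envelope
version looks something like the sketch below. The per-drive figures and the
application numbers are illustrative rule-of-thumb values, not measurements
from our setup.)

  # rough sizing sketch: assume ~75 random IOPS and ~80 MB/s sequential
  # per 7200 RPM SATA drive (typical rule-of-thumb numbers, adjust to taste)
  APP_IOPS=3000 ; APP_MBPS=600
  echo "drives needed for IOPS:       $(( (APP_IOPS + 74) / 75 ))"
  echo "drives needed for throughput: $(( (APP_MBPS + 79) / 80 ))"
  # take the larger of the two, then add spares/parity for redundancy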
Step 4: Find a way to attach all of these drives to something you can
install Linux and 10 Gigabit Ethernet adapters into. Make sure you aren't
oversubscribing the PCIe bandwidth. Make sure you have some kind of
redundancy and backup strategy.
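(One quick sanity check here that saved me some grief: make sure the NIC
actually trained at the PCIe width you paid for. The bus address below is a
placeholder for wherever your adapter shows up in lspci.)

  # check negotiated PCIe link speed/width for the 10GbE adapter
  lspci -vv -s 07:00.0 | grep -E 'LnkCap|LnkSta'
  # a x8 Gen1 link is roughly 2 GB/s each way, which covers one 10GbE port
  # (~1.25 GB/s) fine but gets tight with a second port or the RAID HBA added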
Step 5: Make sure your 'something' supports NUMA and configure Linux to use
the various zero-copy I/O mechanisms available in more recent 2.6.x kernels.
Partition your drives (LVM or otherwise), and tune the page cache of each one
for your expected performance targets.
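(To make that concrete, these are the kinds of knobs I mean. A sketch only:
the daemon name, volume paths, readahead value, and dirty ratios are examples
to tune against your own workload, not recommended settings.)

  # spread memory allocation across NUMA nodes for the I/O-heavy daemon
  numactl --interleave=all your_io_daemon
  # bump per-volume readahead for big sequential scans (units: 512-byte sectors)
  blockdev --setra 16384 /dev/vg_data/lv_warehouse
  # let the page cache absorb write bursts without letting dirty pages pile up
  sysctl -w vm.dirty_background_ratio=5
  sysctl -w vm.dirty_ratio=20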
Step 6: Install vblade (http://aoetools.sourceforge.net/). Be sure to
increase the AoE buffer count on native 10GbE networks; this takes trial and
error and depends on your hardware and switch buffer sizes.
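(Exporting a volume with vblade looks roughly like this. The shelf/slot
numbers, interface, and device path are made up for illustration, and the
buffer-count flag is -b on the versions I've used - check yours.)

  # export one LV as AoE shelf 0, slot 1 on the storage-facing 10GbE interface;
  # -b raises the advertised buffer count (start around 32-64 and tune against
  # your NIC ring and switch buffers - this is the trial-and-error part)
  vblade -b 64 0 1 eth2 /dev/vg_data/lv_warehouse &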
Step 7: Install native 10GbE Ethernet switches and adapters into your Blade
chassis and servers. Set the MTU to 9000.
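(On the Linux side that's a one-liner per interface, plus a check that jumbo
frames actually survive the whole path. The interface name and address below
are placeholders, and the switch ports need jumbo frames enabled with
whatever syntax your platform uses.)

  # jumbo frames on the storage-facing 10GbE interface
  # (persist it in your distro's network config as well)
  ifconfig eth2 mtu 9000 up
  # verify 9000-byte frames make it end to end without fragmentation
  # (8972 = 9000 minus 20 bytes IP header and 8 bytes ICMP header)
  ping -M do -s 8972 192.0.2.10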
Step 8: Attach your storage device to your 10GbE LAN.
Step 9: Configure clients. Watch your I/O channel widen to 600+ MB/s. If
you did this right, your storage server will pretty easily hit over 90%
utilization of its 10GbE adapters across all attached clients. Notice that
generating client I/O demand much higher than that is pretty difficult.
Step 9.5: ... Unless you use the NetXen cards IBM sells for Bladecenter H,
in which case you will see maybe 450MB/s on clients because they don't
support an MTU size greater than 8000. Curse IBM and NetXen.
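(Back to Step 9: the client end is short enough to paste here. The
shelf/slot, filesystem, and mount point are examples, not a recommendation.)

  # client side: load the AoE initiator and find the exported targets
  modprobe aoe
  aoe-discover
  aoe-stat                      # exported targets should show up as e0.1 etc.
  # then treat /dev/etherd/e0.1 like any other block device
  mkfs.xfs /dev/etherd/e0.1
  mount -o noatime /dev/etherd/e0.1 /u01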
Step 10: Optional? :) Buy a Nexus 7000-series 10GbE switch so you can do
this on a much larger scale given how amazingly well it all worked compared
to how much you spent. If you work in a cash-strapped group (like I do),
you may wind up ordering this to replace the pile of bargain-basement 1GbE
6500s you've got while you budget for your 10GbE modules next year.
Step 10.5: Curse how much 10GbE costs, then remember how much Fibre Channel
costs.
Step 11: Become the official networks and storage guy in your group since,
somehow, all of this worked out. Thank the gods you've been working in
telecoms for years so none of this was beyond you.
Step 12: Realize maintaining all of this yourself is a lot of work and
it'd be REALLY REALLY NICE if some FCoE vendors started releasing native
FCoE hardware and maybe got them on GSA or into SEWP so I, er, you can
start comparing options. (Someone at Cisco - copy and paste this line to
your FCoE partners, thanks! *ahem*)
:)
As an amendment to Step 9.5, IBM now sells Broadcom chips in 10GbE cards
for Bladecenter H. I haven't used them yet, though I will still say the
following: these work better and support 9000-byte MTUs. They cannot
possibly be worse. Buy these instead.
Also, the Blade Network Technologies 10GbE switch for the Bladecenter H is
pretty decent for what it does, but there are days I wish I had a nice
Cisco CLI and feature set to work with on it, like I do on the Gigabit Cisco
switches I've got running the LAN.
Justin
P.S. Personal comments, not the government's, etc.
-----cisco-nsp-bounces at puck.nether.net wrote: -----
To: cisco-nsp at puck.nether.net
From: "Wilkinson, Alex"
Sent by: cisco-nsp-bounces at puck.nether.net
Date: 04/16/2009 07:37PM
Subject: Re: [c-nsp] Nexus 5K FCoE to FC breakout
On Thu, Apr 16, 2009 at 11:06:48AM -0400, Justin C Darby wrote:
>We're actually using in-house built ATA-over-Ethernet devices which have
>similar advantages, but this isn't very 'enterprisey' - this was us trying
>to find a way to deal with extreme I/O loads on giant Oracle databases
>(which are now back to being CPU bound for the first time in years). They
>also beat the heck out of 4x FC interfaces, performing at 600-800MB/s, for
>most of our applications under load. There are a bunch of people jumping

This sounds interesting. Care to share a nutshell summary of how you are
doing this?

-aW