[c-nsp] UCS to 4900M to EMC iscsi performance

David Hubbard dhubbard at dino.hostasaurus.com
Fri Dec 3 11:37:35 EST 2010


Wondering if anyone has researched the same issue I'm
having or has a best practices list.  I have a Cisco UCS
platform that is not in production yet, so it's just me
doing testing.  It has multiple ten gig links to redundant
fabric interconnects in end host mode.  Those each have
ten gig links to a pair of 4900Ms.  An EMC CX4-480 also
has multiple ten gig links to the same pair of 4900Ms.
The UCS blades run VMware ESXi 4.1 Enterprise Plus with
EMC PowerPath multipath I/O software.  The storage sits
on a dedicated VLAN and each end tags onto it; no
routing involved.
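
In case it's relevant, the 4900M ports facing the fabric
interconnects and the EMC are dot1q trunks carrying the
storage VLAN, roughly along these lines (the interface
number and VLAN 100 are placeholders, not the real
values):

  interface TenGigabitEthernet1/5
   description to UCS fabric A (or EMC SP port)
   switchport mode trunk
   switchport trunk allowed vlan 100
   ! per-interface jumbo MTU, used for the 9000-byte
   ! tests mentioned below
   mtu 9198
   spanning-tree portfast trunk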

From a guest running Red Hat 5 with VMware Tools and a
paravirtualized SCSI adapter, I can't seem to do better
than about 250 MB/sec reading or writing over iSCSI.  I
have tried every MTU the EMC supports, from standard up
to 9000 bytes, and get nearly the same results, except
at 9000 it actually gets a bit slower.  Not that
250 MB/sec is bad, but I was expecting to hit 400 MB/sec
in benchmarks, since the EMC drive enclosures are
attached via 4 Gb FC and the array has 8 GB of cache,
with no activity on the system other than my testing.
I should add that I have no issues getting two virtual
machines on different IP ranges, different UCS chassis,
and different blades to talk to each other in network
benchmarks at nearly ten gig wire speed, and that is
traffic which has to leave the cluster, go up to the
4900s, and come back down since we're running end host
mode.  So it's not a connectivity/4900 issue as far as
I can tell.
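
(For context, the benchmarks I'm referring to are simple
sequential runs inside the guest, something like the
following; the path and sizes are only examples.)

  # sequential write, bypassing the guest page cache
  dd if=/dev/zero of=/data/ddtest bs=1M count=8192 oflag=direct

  # sequential read of the same file back
  dd if=/data/ddtest of=/dev/null bs=1M iflag=direct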

I notice the pause frame count on the EMC-facing
interfaces of the 4900s increasing steadily, which makes
me think the EMC may be short on buffers on its ten gig
card.  Would enabling/disabling a non-default flow
control setting help?  I've already tried TCP delayed
ACK both on and off on the VMware side.
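
The counters I'm watching are the pause frame numbers
shown by "show interfaces ... flowcontrol" on the
EMC-facing ports.  What I was thinking of trying is
something along these lines (interface number is a
placeholder again, and the exact syntax may differ a
bit on the 4900M):

  4900M# show interfaces tenGigabitEthernet 1/7 flowcontrol

  4900M# configure terminal
  4900M(config)# interface tenGigabitEthernet 1/7
  ! honor pause frames from the EMC so the switch
  ! throttles toward it instead of overrunning the
  ! array's ten gig card
  4900M(config-if)# flowcontrol receive on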

Thanks,

David


