[c-nsp] Nexus 5000?

Ryan Hughes rshughes at gmail.com
Wed May 6 15:04:04 EDT 2009


The other con to deploying N2K/N5K today is that they don't yet support port
channeling of 1G connections down to the hosts, which is common for Oracle
RAC clusters and VMware ESX environments. This should be resolved when they
start supporting virtual Port-Channels (vPC) in the N5K series sometime
later this summer.
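For the curious, a vPC deployment would look roughly like the sketch below, once the feature ships on the N5K. This is a hypothetical fragment, not tested config; the domain number, addresses, and interface numbers are all illustrative:

```
! Hypothetical vPC sketch on a pair of Nexus 5000s (NX-OS).
! All numbers/addresses below are illustrative, not from a real deployment.
feature vpc
feature lacp

vpc domain 1
  ! keepalive between the two peer switches (e.g. over mgmt0)
  peer-keepalive destination 10.0.0.2 source 10.0.0.1

! peer-link carrying vPC control/data between the two 5000s
interface port-channel 10
  switchport mode trunk
  vpc peer-link

! the dual-homed host's bundle, spanning both switches
interface port-channel 20
  switchport mode access
  vpc 20

! 1G member link(s) on each switch toward the host
interface Ethernet1/1
  channel-group 20 mode active
```

The point is that the host sees one LACP bundle even though its links land on two physically separate switches.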

You can offset some of the cost of 10G between switch and hosts through what
they're calling Twinax connectivity: a molded SFP+ cable assembly with
serious distance limitations (a 5-7m cable being the longest), so it's not
for row-to-row connectivity, but in most cases it's sufficient for in-rack
or rack-to-rack connectivity. List price is around $250 per cable, which
includes the SFP+ on both ends to light up the connection. Cisco is
additionally looking at another cost-effective 10G option this summer called
Ultra Short Reach.

You additionally cannot connect another switch to the 2148, as it is
intended only for host connectivity (BPDU Guard is enabled by default and
cannot be disabled). The best description of the 2148 is that it is a remote
line card off of the 5000 and cannot be used without it - similar to a
linecard without hardware forwarding capability for local traffic. But
again, its price point makes it very attractive.
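The "remote line card" relationship shows up in the config: the parent 5000 associates a FEX number with the fabric uplinks, and the 2148's ports then appear as interfaces of the 5000. A hypothetical sketch (FEX number and ports are illustrative):

```
! Hypothetical sketch of attaching a 2148 fabric extender to its
! parent Nexus 5000 (NX-OS) -- numbers are illustrative only.
feature fex

! uplink on the 5000 toward the 2148
interface Ethernet1/40
  switchport mode fex-fabric
  fex associate 100

! once the FEX is online, its host ports appear as Ethernet100/1/x
interface Ethernet100/1/1
  switchport access vlan 10
```

All of those host-facing ports are managed from the 5000's CLI; the 2148 itself has no standalone configuration.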

To summarize why Cisco might be leading with Nexus instead of the classic
Catalyst solutions in the data center: they've taken some of the engineering
benefits of the 4948 (redundant power, fast switching) and the 3750 (stack
management) and pulled them into the N5K/N2K offering, without tying you
into a modular switch solution that drives up cabling costs (patch panels),
since you can get the switch physically closer to the host.

Ryan

On Wed, May 6, 2009 at 1:28 PM, Jay Ford <jay-ford at uiowa.edu> wrote:

> On Wed, 6 May 2009, ChrisSerafin wrote:
>
>> I have a client that Cisco is recommending the Nexus line of switches for
>> their data center. They will be using IBM blade switches and I'm guessing
>> these would be the 'core'.
>>
>> They are looking at (2) Nexus 5010's and (2) Nexus 2000's.....totaling
>> 60K.
>>
>> I'm wondering why this would be recommended, since the only added feature
>> of the Nexus line from Cisco.com's video is that they have 10G
>> ports.....and really nothing else.
>>
>> I'm almost ready to recommend my favorites....3750G's for this scenario.
>>
>> Anyone have real world experience working with these devices and can share
>> comments? Good or bad, and why you went with them?
>>
>
> We don't have any yet, but we're looking at them.
>
> Nexus 5000 pros (+) & cons (-):
>   +  front-to-back air flow
>   +  redundant power supplies & fans
>   +  high throughput (1.04 Tbps in 5020, 520 Gbps in 5010)
>   +  interface flexibility (due to SFP+ ports)
>   -  have to buy an SFP/SFP+ module/cable for every port you want to light
>   -  no 10/100; copper Ether is 1G only
>   -  only first few ports (16 in 5020, 8 in 5010) can do 1G;
>      the rest are 10G only
>
> The Nexus 2000 fabric extender also seems limited to 1G only; no 10/100.
> Note that it isn't a normal switch, with port-to-port switching;  all
> inbound edge-port traffic is sent to the uplinks for switching by the host
> 5000 box. This isn't necessarily a problem, but it is different.
>
> It's a tough choice right now between established top-of-rack switches
> (3750, 4948, 4900m) & the Nexus boxes.
>
> ________________________________________________________________________
> Jay Ford, Network Engineering Group, Information Technology Services
> University of Iowa, Iowa City, IA 52242
> email: jay-ford at uiowa.edu, phone: 319-335-5555, fax: 319-335-2951
>
> _______________________________________________
> cisco-nsp mailing list  cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
>
