[c-nsp] Nexus equipment in corporate networks

chris stand cstand141 at gmail.com
Sun Mar 13 07:18:42 EDT 2011


We have two data centers as well, but with only a 7K pair at one site; the
other site is a VSS pair with our server farms, and there are metro
Ethernet L2 links between the two centers.

After going through the Cisco documents on vDCs, we figured that making the
7K pair the powerful hub of a hub-and-spoke design, with both user VLANs and
core servers terminating their gateways on the 7K pair (the way the VSS pair
does it), was just "easier". We don't have clients or servers that are
segregated from the rest of the company, nor do we host "foreign" servers in
a colo datacenter. A "test" environment is one reason we have thought of
using a vDC, since we could add one without changing a thing in production.
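
For the sake of illustration, here is a minimal sketch of what terminating a
user VLAN gateway on a 7K pair can look like; the VLAN number and addressing
are made up, and the second 7K would carry a mirrored config with a lower
HSRP priority:

  feature interface-vlan
  feature hsrp

  vlan 100
    name USERS-EXAMPLE

  interface Vlan100
    no shutdown
    ip address 10.1.100.2/24
    hsrp 100
      ip 10.1.100.1
      priority 110
      preempt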

Our application traffic patterns are such that 100% of client traffic needs
to flow to the servers, so it will be going there anyway; why force it to
leave one vDC and re-enter another? (That could require 8 SFP+ optics and
burn that many ports, since traffic between vDCs has to cross external
physical links.)
Our end-user applications are HTTPS based, so our ability to provide much
access-list functionality on the vDC1-to-vDC2 routed links, beyond what we
would already get on the inbound client L2 VLAN connections, is minimal.
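
To make the port-burn point concrete, here is a rough sketch of one routed
inter-vDC link plus the kind of ACL we could hang on it; the interface
numbers, addressing, and ACL are invented, and the two ports have to be
physically cabled to each other on the front panel because vDCs cannot pass
traffic internally:

  ! user-facing vDC side of the link
  interface Ethernet1/1
    no switchport
    ip address 10.99.0.1/30
    no shutdown

  ! server-facing vDC side of the link
  ip access-list CLIENTS-TO-SERVERS
    permit tcp any any eq 443
    deny ip any any

  interface Ethernet2/1
    no switchport
    ip address 10.99.0.2/30
    ip access-group CLIENTS-TO-SERVERS in
    no shutdown

With everything riding HTTPS, that ACL is basically the same match we could
already apply on the inbound client VLAN.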

If the traditional Access -> Distribution -> Core design can collapse to
Access/Distribution -> Core, why not simplify it further to Access/Core?

I guess I was looking for a really compelling reason other than "because you
can". And I am not sure that splitting the server L2 domain, with its 30
VLANs, apart from the user L2 domain, with its 40 VLANs, is that strong a
reason if the 7K is as capable as it was sold to be.
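
For comparison, the VRF approach raised in the thread below would keep the
server VLANs in a separate routing table from the user VLANs without
consuming any physical ports; again, the names and numbering here are made
up:

  feature interface-vlan

  vrf context SERVERS

  interface Vlan200
    vrf member SERVERS
    ip address 10.2.200.1/24
    no shutdown

  interface Vlan100
    ip address 10.1.100.1/24
    no shutdown

Traffic between the two tables would then need route leaking or a hop
through a firewall, so it is not free either, but it avoids burning SFP+
optics the way back-to-back vDC links do.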

????



> ------------------------------
>
> Message: 5
> Date: Sat, 12 Mar 2011 12:45:40 -0700
> From: quinn snyder <snyderq at gmail.com>
> To: cisco-nsp at puck.nether.net
> Subject: Re: [c-nsp] Nexus equipment in corporate networks
> Message-ID: <4D7BCD64.1050903 at gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> been using n7k deployed with vdc to have a physical collapsed core in a
> logical two-tier (distribution, core) model.  we've used this to keep
> used features to a minimum within each context (i.e. i'm not going to
> run vpc within my core context).
>
> also deployed vdc to create isolation between production and test/dev
> server environments.
>
> my pitch/reasoning is anytime you want consolidation of airgapped
> chassis into a single device -- you can use vdc.
>
> q.
>
> --
> ()  ascii ribbon campaign - against html e-mail
> /\  www.asciiribbon.org   - against proprietary attachments
>
> On 03/12/2011 12:26 PM, Chris Evans wrote:
> > Can anyone provide their reasoning for using VDC? Everytime we review it
> > there is no compelling reason for us to use it over a vrf.
> >
> > Interested in seeing others opinions.
> >
> > Thanks
> > On Mar 12, 2011 1:14 PM, "Federico Cossu"<federico.cossu at gmail.com>
>  wrote:
> >> 1) yes we do
> >> 2) no management vdc, but yes we do that as well.
> >>
> >> bye
> >>
> >>
> >> 2011/3/12 chris stand<cstand141 at gmail.com>:
> >>> Hello,
> >>>
> >>>    Is anyone here using Nexus 7Ks in their corporate networks ?
> >>> Other than the management vDC are you breaking up your networks into
> >>> multiple vDCs ?
> >>>
> >>>
> >>> thank you.
> >>>
> >>> Chris
> >>> _______________________________________________
> >>> cisco-nsp mailing list  cisco-nsp at puck.nether.net
> >>> https://puck.nether.net/mailman/listinfo/cisco-nsp
> >>> archive at http://puck.nether.net/pipermail/cisco-nsp/
> >>>
> >>
> >>
> >>
> >> --
> >> You said it, hermano. Nobody messes with the Jesus! (Jesus Quintana)
>

