[c-nsp] Small DC switch design
Dan Letkeman
danletkeman at gmail.com
Tue May 15 23:14:54 EDT 2012
Jason,
Thank you for the response. I have a few more questions, and maybe
you could clarify a few points for me.
On Tue, May 15, 2012 at 10:58 AM, Jason Gurtz <jasongurtz at npumail.com> wrote:
> Your size sounds fairly close to our situation... Do you have a spare
> fiber pair going to each location?
>
>> Right now each of the 7 buildings has a 3560G as an aggregation
>> switch connected back to the DC. The DC also has a few 3560G's and
>> 3750G's for the SANs and servers.
> [...]
>> What I would like to know (costs being the biggest factor) is what
>> would be a better switch design for the current and future traffic in
>> this network. Some options I was thinking about are as follows:
>
> Without more details I'm guessing here. Like many smaller shops I've
> been around, the thing has grown organically over a long time, and
> there may be a primarily flat L2 design in place; maybe there are
> some vlans. Maybe there is some (or a lot of) daisy chaining of
> switches; maybe the spanning-tree configuration hasn't gotten a lot
> of thought. OTOH, hopefully you're in a better spot than this?
Yes, things have been around a while and have seen a lot of growth.
We still have many closets with the original cat5 cable. I have,
however, been eliminating the small closets with one or two switches
and consolidating them in most buildings, removing the daisy chains.
I have also added many vlans; all of our access switches are 2960's,
and the distribution switches are 3560's running EIGRP. I have also
added etherchannel links between distribution closets, and redundant
uplinks to form a ring in most of the larger buildings. I did a
spanning tree project two years ago, including moving to RSTP and
verifying vlan priorities, so that part has been working well, and it
makes for a much easier time when doing upgrades and maintenance.
Most buildings have 2-4 access vlans, voice vlans, wireless vlans,
etc. As for the fiber connections, each building that is connected to
the DC has at least two pairs back to the DC, and another pair is
spliced so that it connects to the next closest building, forming a
ring. Each building has at least two paths back to the DC, and a
3560G or two as an aggregation switch which connects to the DC and to
the next closest building in case of SFP or switch failure. I'm sure
there is more I can do, but I am in an OK spot right now.
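In case it's useful, the relevant bits on a distribution 3560 look
roughly like this (vlan IDs, port numbers, and the EIGRP AS below are
made up for illustration, not copied from production):

    ! 3560 distribution switch - IDs below are examples only
    spanning-tree mode rapid-pvst
    ! keep this switch as root for its local vlans
    spanning-tree vlan 10,20,30 priority 4096
    !
    ! 2x1G LACP etherchannel to the next distribution closet
    interface range GigabitEthernet0/25 - 26
     channel-group 1 mode active
    !
    interface Port-channel1
     switchport trunk encapsulation dot1q
     switchport mode trunk
    !
    ! EIGRP back toward the DC
    router eigrp 100
     network 10.0.0.0
     no auto-summary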
>
> In the Cisco world I think you're right on the money with Cat45xx; the
> 49xx series are related... Skim over this document and see if the general
> idea makes sense. You have L3 capable switches everywhere so it's a no
> brainer in a way:
> https://www.cisco.com/application/pdf/en/us/guest/netsol/ns432/c649/ccmigration_09186a00805fccbf.pdf
>
> We used this as a model, a pair of 4900M switches as the core and a few
> 4507-E w/SUP-6E as our access switches running OSPF; it is collapsed-core
> w/10G links fanning out (no separate distribution layer). As a whole we
> are very happy with the system. The nice thing about routing everything is
> it fails in more pleasant ways than the typical spanning-tree disaster.
So just to clarify my design idea: I was thinking of using an
ME3600X, with IP Services licensing for routing, as my
core/aggregation switch for all of the fiber coming into the DC. The
ME3600X would also have the internet routers and firewalls connected
to it, then a 10G uplink to the 4500-E, which would host the servers
and SANs. In the future I would look at adding another 4500-E and
possibly another ME3600X, but for now it would be just one of each.
Crude drawing:

routers, firewalls -----------------+
                                    |
building a ----- 1gig fiber -----+  |
                                 |  |
building b ----- 1gig fiber -----+--+-- ME3600X (Layer 3) -- 10g fiber -- 4500-E -- servers and SANs
                                 |
building c ----- 2gig fiber -----+
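The 10G link would just be a routed point-to-point; on the 4500-E
side I would expect something along these lines (addresses, interface
numbers, and the AS are invented for illustration):

    ! 4500-E side of the 10G uplink - values are placeholders
    interface TenGigabitEthernet1/1
     no switchport
     ip address 10.255.0.2 255.255.255.252
    !
    router eigrp 100
     network 10.255.0.0 0.0.0.3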
Most high-bandwidth traffic is to and from the servers and SANs, and
would stay within the 4500-E; second to that would be the traffic
from the users in all of the buildings to and from the servers, and
then all of the internet traffic. Some of the things I would like to
do with the ME3600X are PBR, possibly some shaping or policing, EIGRP
routing, and some access lists. NetFlow would be nice, but the
ME3600X doesn't seem to support it.
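For reference, the kind of PBR I have in mind is roughly this (the
ACL, names, and next-hop are placeholders, and I'd still have to
confirm exactly what the ME3600X supports here):

    ! hypothetical PBR on an SVI - names and addresses are examples
    ip access-list extended GUEST-TRAFFIC
     permit ip 10.50.0.0 0.0.255.255 any
    !
    route-map GUEST-VIA-DSL permit 10
     match ip address GUEST-TRAFFIC
     set ip next-hop 192.0.2.1
    !
    interface Vlan50
     ip policy route-map GUEST-VIA-DSL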
Do you know what the buffer size is on an ME3600X? What about on a
4500-E with a SUP6L-E? Do you know if an ME3600X supports EIGRP
without an extra license?
>
> The 45xx line has seen a major upgrade. You probably want a "+E"
> chassis instead of "-E". Also, the SUP-7E is out, and it has netflow
> amongst other upgrades. There is a SUP-7L-E as well, as a cheaper
> option. Check with your rep about bundles, as they definitely save
> money. For the core, look at the 4900M or the newer 4500-X; these
> two switches are basically a semi-fixed version of the cat45xx
> (fixed sup, replaceable line cards). Note that with sup-7 based
> switches you are going to IOS-XE instead of classic IOS. Another
> budget-friendly choice for the core and aggregation may be the
> ME3600X/ME3800X. It's marketed at the ISP space, but search the
> archives of this list for discussion of it.
The SUP-7E or SUP-7L-E would be nice because of the NetFlow support,
but they will probably be out of my budget range. In the case of my
design, could I purchase just a bundled 4500-E switch without any
extra licensing, run EIGRP stub for now, and then run HSRP in the
future if I add a second 4500?
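In other words, something like this on the lone 4500-E now (AS number
and addresses are made up), with HSRP added per server vlan if a
second chassis shows up later:

    ! EIGRP stub on the single 4500-E - values are examples
    router eigrp 100
     network 10.10.0.0 0.0.255.255
     eigrp stub connected summary
    !
    ! later, with a second 4500-E: HSRP on each server vlan
    interface Vlan100
     ip address 10.10.100.2 255.255.255.0
     standby 100 ip 10.10.100.1
     standby 100 priority 110
     standby 100 preempt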
>
> Even if you aren't going down the road of L3 in the access layer I can't
> recommend enough making sure a hierarchical design is in place. It is much
> easier to troubleshoot and changes are much easier to implement.
>
> ~JasonG
>
>
>
> _______________________________________________
> cisco-nsp mailing list cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/