[c-nsp] Maximum spanning tree instances

Matt Buford matt at overloaded.net
Sat Jul 18 16:06:31 EDT 2009


On Fri, Jul 17, 2009 at 11:55 PM, Tim Stevenson <tstevens at cisco.com> wrote:

> The 6500/sup720 on 33SXI supports 100K logical ports in MST, and 12K in
> RPVST. That's up from 50K/10K in every prior release.
>

Did the per-slot limitation change too?

> N7K supports 75K in MST & 16K in RPVST today. There are no per-module
> limitations on N7K.
>
> Those numbers are based on the requirements we expressed to QA/system test
> prior to FCS. The original numbers were confirmed before 33SXI was
> released. Since then, we have not had a customer requirement/request to
> support more, so frankly we have not felt compelled to go and requalify
> for anything greater. Would be curious to know how many logical ports you
> are running today & in what protocol?
>

First, I'm sorry for not being clear.  While the virtual port per-slot
limitation is an issue with our distribution switches, when we discussed a
Nexus-based solution with Cisco the big sticking point was actually with
using the 5000 series as access switches for customer servers to plug into
in the data center.  There were two major issues:

1.  Wiring is a huge issue for us, especially as we migrate to all gigabit
and lose RJ21 support on the switches.  Cisco's suggestion was that we could
use the Nexus and have only a single network cable to every server (just tag
all the networks you want, plus FCoE).  However, we use private VLANs for
the backup network, and sometimes for other things.  We can't tag a PVLAN to
a customer, and the switch has no way to present what needs to effectively
be a PVLAN host port as a tag to a customer.  Cisco did say that a feature
to deal with this might be coming.

2.  Nexus 5000 only supported 256 VLANs at the time, with support for 512
coming soon.  I have ~550 VLANs today, and the number is only that low
because I artificially chopped my largest data center into 2 smaller
networks because of STP hardware limitations in Cisco's switches.
Additionally, when I asked about the virtual port limitations, I was told
there is no per-slot limit, but there is a 3000 logical port limitation on
the chassis as a whole, which again doesn't even come close to meeting my
needs.  This discussion was about 6 months ago, and really focused on the
Nexus 5000 series as an option to replace both our distribution and
access/edge switches.  I can't really remember if the 7000 series was
discussed at all.  It may have been skipped over for price reasons.
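The PVLAN constraint in point 1 above is easier to see in configuration.  A
minimal sketch (assuming standard IOS private-VLAN syntax; the VLAN and
interface numbers are made up for illustration) — a host port is untagged by
definition, so there is no way to hand the isolated VLAN to a customer as a
tag on a trunk:

```
! Hypothetical example: primary VLAN 100 with isolated secondary VLAN 101
vlan 100
 private-vlan primary
 private-vlan association 101
vlan 101
 private-vlan isolated
!
! The customer-facing port must be a PVLAN host port -- access, not trunk,
! so the isolated VLAN cannot be delivered as a tag alongside other VLANs.
interface GigabitEthernet1/1
 switchport mode private-vlan host
 switchport private-vlan host-association 100 101
```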

I am running Rapid PVST+, and my switches are running various SXF code.
Below is the busiest switch I could find.  This is an "old" network whose
growth has been capped because of STP hardware limitations.  We don't allow
any new customers on this network, and built a 2nd network in the same
building for new customers.  The VLAN count on this switch does still
slowly rise, though, as existing customers on the old network continue to
expand.

#sh vlan vir

Slot 1
-------
Total slot virtual ports 6448

Slot 2
-------
Total slot virtual ports 1636

Total chassis virtual ports 8084

#sh mod
Mod Ports Card Type                              Model              Serial No.
--- ----- -------------------------------------- ------------------ -----------
  1   16  SFM-capable 16 port 1000mb GBIC        WS-X6516A-GBIC
  2    4  CEF720 4 port 10-Gigabit Ethernet      WS-X6704-10GE
  5    2  Supervisor Engine 720 (Active)         WS-SUP720-3B


It's pretty much always the 6516-GBIC cards that lead downstream to access
switches that have the high virtual port counts.
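The way those counts add up can be sketched with a quick calculation
(assumption: the usual Cisco accounting, where a Rapid PVST+ trunk contributes
one logical port per active allowed VLAN, while an MST trunk contributes one
per instance — the VLAN and trunk counts below are illustrative, not taken
from the switch above):

```python
def pvst_logical_ports(trunk_vlan_counts):
    """Rapid PVST+: one logical port per active VLAN on each trunk.
    trunk_vlan_counts: list of active-VLAN counts, one entry per trunk."""
    return sum(trunk_vlan_counts)

def mst_logical_ports(num_trunks, num_instances):
    """MST: one logical port per MST instance on each trunk."""
    return num_trunks * num_instances

# Example: 16 downstream trunks each carrying ~400 VLANs
print(pvst_logical_ports([400] * 16))  # 6400 -- in the ballpark of slot 1 above
print(mst_logical_ports(16, 4))        # 64 -- why MST limits are so much higher
```

This is why a handful of heavily-trunked line cards dominates the per-slot
totals under a per-VLAN spanning tree, and why the MST numbers quoted earlier
in the thread are an order of magnitude larger.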

In the end, one of the biggest hassles this creates is the physical server
placement problems after chopping things up into smaller networks.  We might
start out saying we'll cap network growth to roughly room-sized.  Room 1 is
network 1.  When that gets mostly full, we start putting new customers on
network 2 in room 2.  At some point in the future, both room 1 and room 2
become nearly full and we start network 3 in room 3.  Room 1 and 2 continue
to grow slowly due to existing customer expansion, and at some point they
become 100% full and those customers still want to add servers.  Then we end
up with network 1 being extended to row 1 of room 3, network 2 in row 2 of
room 3, and then rows 3 and higher in room 3 are network 3.  Wait, the
building is 100% full now, so we need to expand the 3 old networks to a few
rows in a new building down the street?  Building 1 network 1 will be
present in building 2 room 1 row 1 too, etc....  This is all great fun to
try to keep straight.  :)

