[c-nsp] Maximum spanning tree instances

Matt Buford matt at overloaded.net
Mon Jul 20 15:34:41 EDT 2009


On Mon, Jul 20, 2009 at 3:30 AM, <A.L.M.Buxey at lboro.ac.uk> wrote:

> > It's pretty much always the 6516-GBIC cards that lead downstream to
> > access switches that have the high virtual port counts.
>
> yep - which is where you limit the VLANs that go down so they match what
> is needed - switchport trunk allowed vlan x,y,z,666,999,blah,blah.  You
> are over the 1800 limit on that blade - but are you seeing the platform
> exceeded in your system logs?
>

I believe so, though I think that message only shows up as you cross that
limit.  I've been above the limit forever, so I don't see the log message
very often.

I agree with your suggested solution; it's what I suggested internally as a
possible fix.  However, it's one of those things that sounds easy enough in
theory, but in a dynamic data center where many servers are constantly
changing, it is more complex.

First, a quick search for SNMP support makes me think that we won't be doing
any changes to this over SNMP.  You have to read the allowed-VLAN bitmask,
edit it, and then write it back, which means that if two people try to edit
the allowed VLANs on the same trunk port at the same time, they'll overwrite
each other's changes.  So now, our web-based port-VLAN form (which is
already a little slow due to all the SNMP tables it walks) will also have to
SSH to the 2 upstream switches to issue the "add" commands.  We're already
headed down a rough road here...
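
For the curious, here's roughly what that read-modify-write looks like.
This is only a sketch in Python - the snmp_get/snmp_set helpers are
hypothetical, and the object name and bit layout (vlanTrunkPortVlansEnabled
from CISCO-VTP-MIB, if I remember it right) are from memory.  The point is
just that there is no atomic "add one VLAN" operation, so two writers can
clobber each other:

# Sketch only: snmp_get / snmp_set are hypothetical helpers, not a real
# library.  If I remember right, the allowed VLANs live in a 128-octet
# bitmask (one bit per VLAN) that is read and written as a single value.

def add_vlan_to_trunk(snmp_get, snmp_set, ifindex, vlan):
    # 1. Read the whole bitmask for this trunk port.
    bitmask = bytearray(snmp_get("vlanTrunkPortVlansEnabled", ifindex))

    # 2. Set the one bit for our VLAN (high bit of octet 0 = VLAN 0).
    byte_pos, bit_pos = divmod(vlan, 8)
    bitmask[byte_pos] |= 0x80 >> bit_pos

    # 3. Write the whole thing back.  If someone else did their own
    #    read-modify-write in between, their change is silently lost.
    snmp_set("vlanTrunkPortVlansEnabled", ifindex, bytes(bitmask))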

Second, there is the question of removing VLANs from this list.  When a tech
goes to the web form to set the VLAN for a port, we need to go through all
ports on the switch and see if anyone else is still using the VLAN that the
port used to be on - and don't forget to check for it on tagged ports.  If
no one on the access switch is still using that VLAN, it can be pruned from
the uplinks.  Then there's the (unlikely, but possible) case where someone
added a port to that VLAN between the time you started walking the table and
the time you deleted the VLAN from the ACL.  Oops.  Overall, I think this
part sounds dangerous, so I'd probably skip automated cleanup entirely (I'd
have scripts only add VLANs to the ACL) and settle for an occasional audit
to trim the lists back down.  Even with only an occasional cleanup, this
would probably still significantly reduce the virtual port usage.
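
Just to show the shape of what that occasional audit could be, here is a
hypothetical sketch (the function name and input data are made up - in
practice the per-port VLAN membership would come out of the same SNMP tables
the web form already walks):

# Hypothetical audit sketch: find VLANs allowed on the uplinks that no
# access or tagged port on this switch still uses.  Pure reporting - a
# human does the actual pruning, which sidesteps the race described above.

def audit_switch(access_port_vlans, tagged_port_vlans, uplink_allowed_vlans):
    # access_port_vlans:    {port: vlan} for access ports
    # tagged_port_vlans:    {port: set of vlans} for tagged/trunk ports
    # uplink_allowed_vlans: set of VLANs currently allowed on the uplinks
    in_use = set(access_port_vlans.values())
    for vlans in tagged_port_vlans.values():
        in_use |= vlans
    # Anything allowed on the uplink but unused locally is a prune candidate.
    return sorted(uplink_allowed_vlans - in_use)

# Toy example:
#   audit_switch({"Gi1/1": 10, "Gi1/2": 20}, {"Gi1/24": {20, 30}},
#                {10, 20, 30, 40})  ->  [40]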

Third, perhaps it's not really a big deal, but adding a steady flow of
topology changes to spanning tree on production VLANs throughout the
business day (just because a server got added to a VLAN) makes me a little
uncomfortable.  It's something I'd prefer to avoid.  Those pesky "100%
network uptime" SLAs require being pretty conservative about this kind of
thing.

Finally, there is the cost of the development work and administrative hassle
that this VLAN pruning requires.  I'd prefer to spend that money on hardware
that Just Works, as opposed to having my own staff of software developers
and network engineers maintain this system of ACLs.  One-time fees for
hardware that makes my problems disappear forever and makes my network
configuration less complex are appreciated.  :)

Anyway, I don't mean to make this sound impossible.  It is a workable
solution.  Depending on how dynamic a data center is, it might not even be a
big deal to do this manually.  In my case, VLANs change continually all day,
which makes this workable but undesirable.  For now, the pain of dividing my
data centers up into 2 or 3 smaller networks (VLAN domains) is less than my
estimate of the pain involved in implementing and running VLAN allowed ACLs.
Of course, higher hardware limits would be ideal.

