[c-nsp] VSS - Horror stories, show-stoppers, other personal experience?

Bradley Williamson bwilliamson at eatel.com
Sat Jun 18 11:02:55 EDT 2011


We used it in a large multicast environment, and it did not scale well. We have ~300 channels and we ran into problems with multicast LTL resources on the VSS. 

We originally went VSS for port density, MEC (L3), and the benefit of no spanning-tree convergence. 
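For anyone unfamiliar with MEC: from the access-layer side the VSS pair looks like a single neighbor, so a multi-chassis EtherChannel is just an ordinary port-channel whose member links happen to land on different physical chassis. A rough sketch of the access-switch side (interface numbers are made up):

    interface Port-channel10
     description MEC uplink to VSS pair
     switchport mode trunk
    !
    interface range TenGigabitEthernet1/1 - 2
     ! Te1/1 cabled to VSS chassis 1, Te1/2 to chassis 2
     switchport mode trunk
     channel-group 10 mode active

`mode active` runs LACP; the VSS end is configured like any other port-channel.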

We had to split the VSS this week because of the LTL resource problem. Multicast LTLs are around 30k on a standalone 6509, but that drops to 20k for a VSS pair.

In the VSS configuration we were using 15k out of 20k LTL resources. After splitting the VSS into two chassis, we are using ~7k of 30k per chassis.
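If you want to watch for the same ceiling, the 6500 has a capacity-planning CLI that covers multicast resources; from memory it is along the lines of:

    show platform hardware capacity multicast

(Check the exact keywords on your release - I'm going from memory here, and the output differs between software trains.)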

Once the VSS gets under 200 available multicast LTLs, it disables IGMP snooping and floods the switch, causing the switch processor to crash.

Other than the resource limitation, the VSS worked as advertised, and I think it would be great in a unicast data-center environment.

~brad

Sent from my iPhone

On Jun 18, 2011, at 9:31 AM, "Andrew Miehs" <andrew at 2sheds.de> wrote:

> On Saturday, June 18, 2011, Alexander Clouter <alex at digriz.org.uk> wrote:
>> Murphy, William <William.Murphy at uth.tmc.edu> wrote:
>>> 
>>> We are running VSS for distribution layer switching in a campus
>>> environment and have been quite pleased with it...  Benefits for us
>>> are simplification, faster convergence and better performance
>>> (distribution of traffic)...
>>> 
>> Only curious, but we (a small university) felt VSS was way too
>> expensive to do and did not give us many benefits.
> 
> We have 2 datacenters approx. 500m apart, and use the vss
> functionality to have it look and feel like one data center -
> especially for nodes which run some form of layer 2 HA between them.
> It means you only need to configure one switch and can make better use
> of the uplinks.
> 
> In a second case we use it because we required more redundancy than a
> single switch, but didn't want the hassle of configuring two.
> 
> The third case is port density - it's good if you simply need more ports.
> 
>> 
>>> No more STP blocking ports, MEC to the access layer so both links are
>>> utilized, faster convergence, no need for HSRP, also our two 10G
>>> uplinks are equal-cost even though they are connected to separate
>>> chassis...
>>> 
>> Would you say it's easier than just running an IGP (OSPF, EIGRP, ISIS or
>> iBGP) and pushing L3 to the access layer of your network, or has VSS
>> really made things a lot simpler?  Only asking you as I know no one
>> nearby who went the VSS route and unfortunately the only people raving
>> about it are sales people, hardly a great frame of reference :)
> 
> It's easier.
> 
>> I can see VSS helping out when you have VLANs spanning buildings[1],
>> and it being a real uphill struggle to get the sysadmins of the systems
>> on those VLANs to use localised subnets instead, but surely it's more
>> cost-effective, and keeps open the option of later migrating to L3 at
>> the access layer everywhere, than deploying VSS?
> 
> Depends on the size of your subnets/ buildings.
> Normally we have one subnet/ vlan per floor per building
> 
>> Plus, the cynic in me is more interested in the failure modes.  If
>> everything goes horribly wrong, I am more comfortable pulling apart
>> OSPF/EIGRP frames than some new-fangled Cisco thingy-mcwhatsit :)
> 
> I am not so worried about diagnosing the protocols; it's more that this
> box is a single point of failure - with the advantage that some problems
> will only crash one chassis, leaving you with half of your connections.
> 
> Btw - I would recommend using both 10G ports on the Sup720-10G for the
> VSS links. I am not sure if it is possible yet to set different buffering
> on the two 10G ports, and the VSL sets its own buffering, which was not
> good for our data traffic on the second port. And if the Sup720 has real
> problems you will probably lose the chassis anyway....
> 
> Cheers
> 
> Andrew
> 
> _______________________________________________
> cisco-nsp mailing list  cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
> 
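On Andrew's VSL point: the VSL is itself just a port-channel flagged with `switch virtual link`, so using both Sup720-10G uplinks means putting both ports into that bundle. Roughly, on one chassis (port numbers illustrative):

    interface Port-channel1
     switch virtual link 1
     no shutdown
    !
    interface range TenGigabitEthernet5/4 - 5
     channel-group 1 mode on
     no shutdown

VSL members use `mode on` (no PAgP/LACP on the VSL), and the second chassis gets its own port-channel number (e.g. Po2 with `switch virtual link 2`).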