[c-nsp] Redundant switch fabric

Justin C. Darby jcdarby at usgs.gov
Tue Mar 31 13:58:25 EDT 2009


We had issues with 4.0(?) releases, mostly related to strange behavior 
of a few features (DHCP relay, DAI, port security, etc.) that required a 
full reload after a software upgrade to clear up completely. 4.1(?) has 
been fine so far; the last upgrade we did was 4.1(2) to 4.1(4), and it 
went through without any downtime. We skipped over 4.1(3) since we 
never got around to scheduling it.

Justin

Tony Varriale wrote:
> I've had a colleague run into an issue going to 4.1.3 (long story, but 
> it's intrusive any way you slice it, and that's the case for all boxes).  
> What was your upgrade from and to?
>
> tv
> ----- Original Message -----
> From: "Justin C. Darby" <jcdarby at usgs.gov>
> To: "Brad Hedlund" <brhedlun at cisco.com>
> Cc: <cisco-nsp at puck.nether.net>
> Sent: Tuesday, March 31, 2009 11:51 AM
> Subject: Re: [c-nsp] Redundant switch fabric
>
>
>> Mike,
>>
>> Just to chime in here a bit with some experience - we've had Nexus 7K 
>> switch backplane (fabric) modules fail, and unless you're pushing near 
>> 100% backplane utilization you don't even notice until the switch 
>> emails you or your configuration-monitoring tool flags the failed 
>> module. In recent NX-OS releases, In-Service Software Upgrades (ISSU) 
>> have worked properly 100% of the time for us, and aside from the fact 
>> that it can take 3-4 hours to upgrade a fully loaded switch, there's 
>> no real downtime as long as you have working port redundancy across 
>> modules and the modules only go down one at a time, as they're 
>> supposed to.
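
For anyone who wants to watch that behavior while an ISSU runs, here is a
rough sketch of the kind of thing you could script. It assumes the
third-party netmiko Python library and plain SSH access; the hostname,
credentials, and the "show module" parsing are placeholders you would adapt
to your own boxes and NX-OS release, so treat it as an illustration rather
than a tested tool.

    # Rough sketch: poll "show module" during an ISSU and flag slots that
    # are not in a steady state, so you can confirm modules reload one at
    # a time. Assumes the third-party "netmiko" library; host and
    # credentials are placeholders.
    import time
    from netmiko import ConnectHandler

    device = {
        "device_type": "cisco_nxos",
        "host": "n7k-1.example.net",   # placeholder
        "username": "admin",           # placeholder
        "password": "changeme",        # placeholder
    }

    # Module states treated as "in transition or broken"; extend as needed
    # for your NX-OS release.
    NOT_READY = {"powered-up", "powered-dn", "pwr-denied", "testing",
                 "initializing", "upgrading", "failure"}

    def busy_slots(show_module_output):
        """Loose parse of 'show module': status rows start with a slot
        number and end with a state keyword; other tables in the output
        are skipped because their last column is not a known state."""
        slots = []
        for line in show_module_output.splitlines():
            fields = line.split()
            if not fields or not fields[0].isdigit():
                continue
            status = fields[-1].lower()
            if status == "*" and len(fields) > 1:  # e.g. "active *" on a sup
                status = fields[-2].lower()
            if status in NOT_READY:
                slots.append(fields[0])
        return slots

    conn = ConnectHandler(**device)
    try:
        for _ in range(60):                        # roughly 30 minutes
            busy = busy_slots(conn.send_command("show module"))
            print("slots reloading:", busy if busy else "none")
            if len(busy) > 1:
                print("WARNING: more than one module down at the same time")
            time.sleep(30)
    finally:
        conn.disconnect()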
>>
>> Considering how distributed and redundant the switch's components are, 
>> it's pretty unlikely you'd run into major redundancy problems with any 
>> single component. I don't have enough N7Ks to play with Virtual Port 
>> Channels (vPCs), but it'd be interesting to see whether they have any 
>> issues when upgrading switches. vPCs can add extreme (and usable) 
>> redundancy to a multi-chassis design, if you want to go a step 
>> farther.
>>
>> Justin
>>
>> P.S. Comments made here are my own and should not in any way be 
>> considered an endorsement by the U.S. Federal Government.
>>
>> Brad Hedlund wrote:
>>> Mike,
>>> The 6500 and 4500 have the "switch fabric" on the supervisor 
>>> engines, so by
>>> having dual supervisors, you in effect have a redundant fabric.
>>>
>>> The 6748 actually has 4 fabric traces, each 20G. 2 traces connect to 
>>> the active supervisor, which contains the active switch fabric; the 
>>> remaining 2 traces are standby connections to the standby 
>>> supervisor/fabric. So, when a supervisor engine and its fabric fail, 
>>> the 2 standby traces are enabled and the full 40G of bandwidth 
>>> remains. You never, under normal circumstances, have only a single 
>>> trace active on the 6748. Newer versions of IOS provide a "hot 
>>> standby" fabric feature which allows this fabric-trace switchover to 
>>> happen faster - roughly 50 ms.
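
To make the trace arithmetic concrete, here is a toy sketch using only the
numbers given above (four 20G traces on the 6748, two homed to each
supervisor's fabric, and only one fabric's pair carrying traffic at a
time); it is an illustration of the failover math, not anything Cisco
ships.

    # Toy illustration of the 6748 fabric-trace arithmetic described above.
    TRACE_GBPS = 20        # each trace is 20G
    TRACES_PER_FABRIC = 2  # two traces homed to each supervisor's fabric

    def linecard_bandwidth(fabrics_up: int) -> int:
        """Bandwidth (Gb/s) available to the 6748 given how many
        supervisor/fabric pairs are up."""
        # Failover just enables the standby pair (roughly 50 ms with the
        # hot-standby fabric feature), so any surviving fabric keeps 40G.
        return TRACES_PER_FABRIC * TRACE_GBPS if fabrics_up >= 1 else 0

    print(linecard_bandwidth(2))  # both supervisors up        -> 40
    print(linecard_bandwidth(1))  # one supervisor/fabric lost -> 40
    print(linecard_bandwidth(0))  # no fabric left             -> 0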
>>>
>>> For the best in redundant designs, consider the Nexus 7000, where the 
>>> switch fabric is decoupled from the supervisor engines into a series 
>>> of redundant "fabric modules" installed in the back of the switch. 
>>> Should a supervisor engine fail in a Nexus 7000, there is ZERO impact 
>>> to the switch fabric, because the supervisor engine does not forward 
>>> data-plane traffic.
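
If you want to sanity-check that split from captured CLI output, a small
helper like the one below can summarize it. It only does a loose,
case-insensitive scan of "show module" text for lines describing supervisor
and fabric modules, so the matching and thresholds are assumptions to adapt
rather than an official check.

    # Rough sketch: summarize supervisor vs. fabric-module redundancy from
    # a captured "show module" listing on a Nexus 7000. Pass the command
    # output in as a string (e.g. from the polling script earlier in the
    # thread).
    def redundancy_summary(show_module_output: str) -> dict:
        sups = fabrics = 0
        for line in show_module_output.splitlines():
            lowered = line.lower()
            if "supervisor" in lowered:
                sups += 1
            elif "fabric module" in lowered:
                fabrics += 1
        return {
            "supervisors": sups,
            "fabric_modules": fabrics,
            "sup_redundant": sups >= 2,        # dual sups for control plane
            "fabric_redundant": fabrics >= 2,  # spare crossbar capacity
        }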
>>>
>>> Cheers,
>>>
>>> Brad Hedlund
>>> bhedlund at cisco.com
>>> http://www.internetworkexpert.org
>>>
>>>
>>> On 3/31/09 9:05 AM, "Mike Louis" <MLouis at nwnit.com> wrote:
>>>
>>>
>>>> I have a solution design that requires redundant switch fabrics. I am 
>>>> interpreting this as going beyond just having redundant supervisors - 
>>>> meaning redundant backplanes on the switch cards. Do the 6500 and 
>>>> 4500 support redundant fabrics? Will a 6748 function with one trace 
>>>> failed?
>
> _______________________________________________
> cisco-nsp mailing list  cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/


