[c-nsp] SUP720 GigE ports

F. David Sinn dsinn at dsinn.com
Thu Mar 23 13:32:58 EST 2006


Predominantly the problem is using non-fabric cards in a chassis with 
fabric cards when you are not using distributed-forwarding cards (DFCs).

When you have no DFCs, the PFC on the supervisor performs all of the 
L2/L3 lookups.

When you have a chassis with only fabric-enabled cards, the backplane 
can go into compressed mode.

Data on the backplane is sent in the form of fixed-size chunks known as 
"Cisco Cells".  Everything needs to fit evenly into the fixed size, so 
when needed, things are padded.  In non-compressed mode there are also 
start and stop cells so that everything in the box can synchronize on 
when to expect data.  In compressed mode they can dispense with the 
start and stop cells, since all that needs to go across the backplane 
is some packet identifier information and the header of the packet.  
This happens to fit nicely in two cells, which makes the line cards' 
job of data synchronization easy.
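
To make the cell arithmetic concrete, here's a rough back-of-the-envelope 
sketch in Python.  The 64-byte cell payload and the packet sizes are 
made-up round numbers for illustration, not the real Cisco Cell format, 
but the shape of the math is the same:

import math

CELL_PAYLOAD = 64   # assumed bytes of payload per cell (illustrative only)

def cells_uncompressed(packet_bytes):
    """Whole packet crosses the backplane: pad up to a whole number of
    cells, plus a start and a stop cell for synchronization."""
    data_cells = math.ceil(packet_bytes / CELL_PAYLOAD)
    return data_cells + 2          # data cells + start cell + stop cell

def cells_compressed():
    """Compressed mode: only an identifier plus the packet header cross
    the backplane, which (per the above) fits in a fixed two cells."""
    return 2

for size in (64, 512, 1500):
    print(size, "byte packet:", cells_uncompressed(size),
          "cells on the backplane vs", cells_compressed(), "compressed")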

Now, if you have a non-fabric-enabled card in the chassis, data to/from 
it needs to traverse the backplane.  This means that the full packet 
needs to traverse it, and we are back to a variable number of cells per 
packet, so the starts and stops return (among other things).

So overall performance decreases in relation to the number of packets 
to and from the non-fabric card, since the backplane has to revert to 
carrying the full packets from those cards, and since the backplane can 
no longer run in purely compressed mode.
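
Reusing the two helpers from the sketch above, you can see roughly how 
that plays out for the backplane as a whole.  The 10% traffic split and 
the extra two cells per header-only transfer once compressed mode is 
lost are assumptions for illustration only:

def bus_cells(n_packets, packet_bytes, frac_to_classic):
    """Approximate backplane cells for a traffic mix where some fraction
    goes to/from a non-fabric card as full packets."""
    if frac_to_classic == 0:
        return n_packets * cells_compressed()   # purely compressed mode
    full = int(n_packets * frac_to_classic)     # full packets on the backplane
    headers = n_packets - full                  # header-only, but start/stop are back
    return (full * cells_uncompressed(packet_bytes)
            + headers * (cells_compressed() + 2))

print("relative backplane load with 10% non-fabric traffic:",
      bus_cells(10_000, 1500, 0.1) / bus_cells(10_000, 1500, 0.0))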

DFCs change the picture: if all of your fabric cards have DFCs, then 
the line cards *should* no longer need to send even the headers across 
the backplane, because each card can do the L2/L3 lookups locally and 
then use the fabric to get the data to its destination.  The backplane 
is then fairly unused, so if you *have* to mix in non-fabric cards, 
falling out of compressed mode isn't as painful, since the backplane is 
then carrying *mostly* just traffic to/from the non-fabric cards.
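
Extending the same sketch one more step: with DFCs everywhere, the 
fabric cards' traffic stays off the backplane entirely (to a first 
approximation), so it is left carrying only the non-fabric card's 
packets.  Again, same made-up numbers as above:

def bus_cells_with_dfcs(n_packets, packet_bytes, frac_to_classic):
    """With DFCs on every fabric card, only the non-fabric card's
    packets touch the shared backplane at all."""
    full = int(n_packets * frac_to_classic)
    return full * cells_uncompressed(packet_bytes)

print("backplane cells, no DFCs:  ", bus_cells(10_000, 1500, 0.1))
print("backplane cells, all DFCs: ", bus_cells_with_dfcs(10_000, 1500, 0.1))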

* There is a lot of simplification in the above and a few corner cases 
that I've not discussed.  If you want the really in-depth details, I'd 
suggest chatting with your account team.

David

On Mar 23, 2006, at 12:30 AM, Kim Onnel wrote:

> Can anyone clarify why it's said that one blade can decrease the overall
> performance of a switch, and in what cases this might occur?
>
> On 3/23/06, Virgil <virgil at webcentral.com.au> wrote:
>>
>> On 13/3/06 9:31 PM, "Tim Stevenson" <tstevens at cisco.com> wrote:
>>
>>> Yes, there are 9 GE uplinks, 8 SFP & 1 10/100/1000, & all 9 are
>>> available for use simultaneously - there is no alternative wiring
>>> (eg, like gig x/2 on the sup720).
>>
>> Tim,
>>
>> In a dual SUP720 configuration, are 2 ports per SUP available?
>>
>> And will WS-X67xxxxxxx blades work with a Sup32 (obviously without 
>> fabric
>> connections) ?
>>
>>
>> Regards
>> Virgil
>>
>> --
>> Virgil
>> System Engineer, AS7496
>> virgil at webcentral dot com
>>


