[c-nsp] 6500 - SUP720 - IOS - traffic problem

Gabriel Mateiciuc mgabi at ase.ro
Mon Jan 7 02:37:10 EST 2008


Hello Phil,
Thanks for the reply.
See the answers below.

I've kind of solved the problem for now. The trick seemed to be building the
etherchannels with all ports on the same card. It's not very well documented by
Cisco, but it seems that even putting ports on different fabric-enabled cards
would put more traffic on the bus. True, Cisco did say that the bugs
described below manifest under certain (unspecified) traffic conditions.
In short, we swapped a classic card for a fabric-enabled one, moved
some of the links there so that we get fabric utilization between 40-60% per
card, and I configured "fabric buffer-reserve 1010" - that seemed the
best number of buffers.
I'm well aware that this is only a temporary fix - until the fabric/bus rises in
utilization - but it should be enough until we get more fabric cards (no,
we don't have any DFCs, by the way).
Waiting on SXF13 (the issue was fixed in the interim 12.5, as they say).
I'm reluctant to try the new SXH, although there are some nice features
there, like the adaptive hashing for
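For reference, the working setup boils down to something like the sketch below. The slot/port and channel-group numbers are made up for illustration; only the "fabric buffer-reserve 1010" value is the one we actually use, and the channel-group mode is whatever suits your neighbors:

```
! reserve extra fabric buffers - 1010 turned out to be the sweet spot here
fabric buffer-reserve 1010
!
! keep all members of a bundle on the same fabric-enabled card
! (hypothetical slot/ports - ours is a 6724-SFP)
interface range GigabitEthernet4/1 - 4
 channel-group 10 mode on
!
interface Port-channel10
 description 4xGigabit backbone bundle
```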

-----Original Message-----
From: Phil Mayers [mailto:p.mayers at imperial.ac.uk] 
Sent: 6 January 2008 13:03
To: Gabriel Mateiciuc
Cc: cisco-nsp at puck.nether.net
Subject: Re: [c-nsp] 6500 - SUP720 - IOS - traffic problem

Gabriel Mateiciuc wrote:
> And for those having enough patience to read the details, here's the
> question/problem:

Do you have DFCs on the 67xx cards?

No, we don't have DFCs.

> On the 4-th linecard (6724-SFP) we have links grouped in etherchannels
> (4xGigabit backbone links), with respect to keeping most of the

Where is the traffic going in/out of those etherchannels going to/coming 
from? Specifically is it likely to be going to/coming from the cards in 
slots 1,2,7-9 i.e. the classic bus cards?

There are various possibilities.

Classic cards - servers + clients. Fabric cards - big clients + backbone links.

> etherchannels with their ports grouped on the same asic/linecard. The

Why?

Personally I would use ports on different cards for redundancy,

Agree for the redundancy part - the links were spread between at least 2
cards each - but then the performance was affected.

> load-balancing used is src-dst-ip. Looking at the figures above I guess
> anyone would say there are plenty of resources left, yet our
> graphs/interface summary shows us somewhere between 40-50% fabric
> utilization, both ingress

That just means the card is doing 8-10Gbps - is this the number you 
would expect?

Well, after the modifications I've seen the fabric usage for that card rise
to 60%+ - no more congestion.
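For anyone chasing the same numbers: the per-slot fabric figures above come from the standard Sup720 show commands, and the etherchannel hash is set globally (src-dst-ip in our case):

```
! per-slot ingress/egress fabric utilization
show fabric utilization all
! which cards are forwarding over the bus vs. the crossbar fabric
show fabric switching-mode
! global etherchannel hash - what we run
port-channel load-balance src-dst-ip
```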


