[c-nsp] 3560 buffering
Peter Rathlev
peter at rathlev.dk
Tue Oct 13 11:59:55 EDT 2009
On Tue, 2009-10-13 at 07:10 -0500, Jeff Bacon wrote:
> I use a number of 3560Gs as distribution switches in my co-lo farm,
> with 4G uplinks to 4948s. The load is fin-svcs, and can be quite
> bursty, tends to be a lot of small packets, but with a mix of larger
> stuff and standard file transfer etc.
>
> I've been seeing a number of output drops on the etherchannel ports to
> the 4948s, as well as to the servers themselves.
>
> What's an output drop mean, in a 3560 context? Queue to the host is
> full? Host interface already maxed out, "there was a packet on the
> wire being transmitted to the host, so sorry I'll drop the packet"? Or
> can it be an ingress/switching decision of some sort?
Given a new enough IOS (at least 12.2(50)SE works), the "show interface
counters errors" command should name the specific cause. My guess would
be "OutDiscards", which are output buffer overruns.
> How deep is the individual buffer/ring on a gig PHY? Or is it buffered
> per-ASIC? (Doesn't seem to be)
AFAIK the SRR platforms (3560/3750) buffer in a somewhat special way.
My guess is that the TX ring is relatively small and that buffering
happens primarily in the SRR logic. This is configured via the "mls
qos" family of global configuration commands.
http://www.cisco.com/en/US/docs/switches/lan/catalyst3750e_3560e/software/release/12.2_50_se/configuration/guide/swqos.html
http://tinyurl.com/yg76fm5
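To see how the buffers are currently carved up and in which queue the
drops happen, something like this should work (interface name again
just an illustration):

  show mls qos queue-set 1
  show mls qos interface Gi0/1 statistics

The latter includes per-queue, per-threshold enqueue and drop counters
for the output queues.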
> Is there a way to tell the 3560 to buffer more aggressively?
There might be. If I remember correctly, the "no QoS enabled" defaults
of 12.2(25)SEE and earlier and of 12.2(46)SE and later are somewhat
better than those of the versions in between. We had many problems with
overly aggressive drops when traffic came in on a Gigabit link and went
out a FastEthernet link.
I have attached what should be a minimal configuration enabling QoS and
adjusting the output buffer sizes. This seemed to help in our lab tests,
making the switch behave almost like a 3550.
--
Peter
-------------- next part --------------
! Enable QoS
mls qos
!
! Only queue 2 of output queue-set 1 is used. Raise the two drop
! thresholds and the maximum threshold to 400% and keep the reserved
! threshold at 100%. (400% is AFAIK the effective maximum even though
! the parser accepts up to 3200%.)
mls qos queue-set output 1 threshold 2 400 400 100 400
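! (The four numbers are drop-threshold1, drop-threshold2,
! reserved-threshold and maximum-threshold, each as a percentage of
! the buffers allocated to the queue.)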
!
! Assign all buffers to queue 2 (also used by the CPU)
mls qos queue-set output 1 buffers 0 100 0 0
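! The resulting allocation can be verified with "show mls qos queue-set 1".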
!
! Map all output traffic to queue 2, threshold 3
! CoS-map
mls qos srr-queue output cos-map queue 2 threshold 3 0 1 2 3 4 5 6 7
! DSCP-map
mls qos srr-queue output dscp-map queue 2 threshold 3 0 1 2 3 4 5 6 7
mls qos srr-queue output dscp-map queue 2 threshold 3 8 9 10 11 12 13 14 15
mls qos srr-queue output dscp-map queue 2 threshold 3 16 17 18 19 20 21 22 23
mls qos srr-queue output dscp-map queue 2 threshold 3 24 25 26 27 28 29 30 31
mls qos srr-queue output dscp-map queue 2 threshold 3 32 33 34 35 36 37 38 39
mls qos srr-queue output dscp-map queue 2 threshold 3 40 41 42 43 44 45 46 47
mls qos srr-queue output dscp-map queue 2 threshold 3 48 49 50 51 52 53 54 55
mls qos srr-queue output dscp-map queue 2 threshold 3 56 57 58 59 60 61 62 63
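!
! If needed, the resulting maps can be verified with
! "show mls qos maps cos-output-q" and "show mls qos maps dscp-output-q".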
!
! By default all interfaces are in queue-set 1. If not, move them there:
!
! interface range Fa0/1 - 24 , Gi0/1 - 2
! queue-set 1
! exit
! !
!
! By default the switch rewrites DSCP to zero when QoS is enabled. If
! needed, trust can be configured on the ports, e.g. DSCP trust:
!
! interface range Fa0/1 - 24 , Gi0/1 - 2
! mls qos trust dscp
! exit
! !
!
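! The trust state of a port can be checked with
! "show mls qos interface <name>".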