[c-nsp] 3560 buffering (was: 3560 mtu miss-match causing output drops)

Peter Rathlev peter at rathlev.dk
Mon Mar 15 16:11:06 EDT 2010


On Mon, 2010-03-15 at 20:35 +0100, Pavel Bykov wrote:
> Yeah, your calculations are wrong.
> 1. "Packet" is 256 bytes, not 1500. "cell" is a better term, since
> pointer reference links memory blocks of 256 bytes each. Only content
> of one packet can exist in any cell at any one time. E.g. 2x64 byte  
> packets will use up 2x memory cells, or 512 bytes. I'd talk more about
> system proprietary overhead, but i don't know what is being asked.
> 2. Buffers vary per model from 384K per 8 ports to 2MB per 2 ports.  
> All depends on type and version of Asic.
> There are no port buffers, buffer is always per whole group, and can  
> be oversubscribed.
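If I understand the cell model correctly, that means buffer usage rounds
each packet up to whole 256-byte cells, since cells aren't shared between
packets. A quick sketch of that accounting (my interpretation of the
description above, not an official formula):

```python
import math

CELL_BYTES = 256  # cell size as described above; one packet per cell chain

def buffer_usage(packet_sizes):
    """Total buffer bytes consumed, rounding each packet up to a
    whole number of 256-byte cells (cells are not shared)."""
    return sum(math.ceil(size / CELL_BYTES) * CELL_BYTES
               for size in packet_sizes)

# Two 64-byte packets occupy two cells = 512 bytes, not 128:
print(buffer_usage([64, 64]))   # 512
# A 1500-byte packet needs ceil(1500/256) = 6 cells:
print(buffer_usage([1500]))     # 1536
```

So small-packet traffic eats buffer roughly 4x faster than its wire size
would suggest, if this model is right.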

Thank you, that's very interesting info. I've been searching for
something "official" from Cisco about the specifics of the
2960/3560/3750 family, but haven't had any luck so far.

For the 6500 modules we have this:

Buffers, Queues & Thresholds on Catalyst 6500 Ethernet Modules
http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps708/prod_white_paper09186a0080131086.html

Does anything similar exist for the smaller switches? (Or would an SE be
able to find out, and maybe share under NDA?)

> You can buffer up to 8,4 ms on a port without causing instabilities.
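For reference, 8.4 ms works out to a fairly modest amount of memory at
these line rates (straightforward rate x delay arithmetic, nothing
switch-specific):

```python
def buffer_bytes(rate_bps, delay_s):
    """Bytes needed to buffer delay_s seconds of traffic at line rate."""
    return rate_bps * delay_s / 8

print(buffer_bytes(100e6, 0.0084))  # ~105,000 bytes at 100 Mb/s
print(buffer_bytes(1e9, 0.0084))    # ~1,050,000 bytes at 1 Gb/s
```

So 8.4 ms on a gigabit port would already exceed the 384K-per-8-ports
figure, which fits with the oversubscription point above.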

That could very well be true, but it seems at least the 3550 has much
larger buffers. We have seen very bad performance from factory-default
3560s (running 12.2(50)SE3).

A PC on a 100 Mbps port (uplink is gigabit) experiences serious packet
loss when attempting a bulk TCP transfer. It's the switch dropping the
packets, as per "show interface | i drop". (Tested both with an HTTP
download of random data and with iperf.)

Different congestion control algorithms (on Linux 2.6) give varying
results; CUBIC and HTCP seem to cope okay-ish, while BIC and Reno fare
worse. None of them can pull more than ~75 Mbps. When replacing the 3560
with a 3550 it pulls 97 Mbps with no drops.
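As a back-of-envelope check (not measured, just the well-known Mathis
model BW ~ (MSS/RTT) * C/sqrt(p) for Reno-style TCP), one can estimate
what loss rate would cap a flow at ~75 Mbps. The MSS and RTT below are
hypothetical LAN-ish values, not figures from our tests:

```python
from math import sqrt

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate, c=1.22):
    """Rough Reno steady-state throughput per the Mathis model."""
    return (mss_bytes * 8 / rtt_s) * c / sqrt(loss_rate)

def loss_for_throughput(mss_bytes, rtt_s, bw_bps, c=1.22):
    """Invert the model: loss rate that caps throughput at bw_bps."""
    return ((mss_bytes * 8 / rtt_s) * c / bw_bps) ** 2

# Hypothetical: MSS 1460 bytes, RTT 1 ms, target 75 Mb/s
p = loss_for_throughput(1460, 0.001, 75e6)
print(p)  # ~3.6% loss needed to hold Reno at 75 Mb/s
```

A loss rate that high on a LAN path points at sustained tail drop rather
than random noise, which matches the buffer-exhaustion theory.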

Since we currently only prioritise voice traffic, we've simply allocated
all other buffer space to one queue to carry data. This works for us.
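For the archives, the tuning we mean is of this general shape on a 3560
(the percentages here are illustrative, not our exact production values):

```
! Skew queue-set 1 output buffers heavily toward queue 2 (data),
! keeping a small slice for the priority (voice) queue.
! The four percentages must total 100.
mls qos queue-set output 1 buffers 10 80 5 5
```

Verify the result with "show mls qos queue-set 1" before and after.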

-- 
Peter



