[f-nsp] Rate limiting on Foundry switches
David J. Hughes
bambi at Hughes.com.au
Thu Dec 16 00:01:40 EST 2004
Hi,
I've never tried this with Foundries, so I can't help with any input there.
However, in a previous life my company delivered rate-limited
Ethernet services in several corporate office buildings, using
low-end Cisco switches. We were happy with the performance. The key
was to ensure that there was some burst available.
The info below was floated around recently on the Cisco-NSP mailing
list for applying a consistent 4 Mbit/s CAR setup. Perhaps the logic can
be applied to the Foundry.
The following line produces a razor-sharp 4 meg chart.
rate-limit input 4000000 6000000 12000000 conform-action transmit exceed-action drop
Value 1: bits per second
Value 2: normal burst bytes = bits per second * 1.5
Value 3: maximum burst bytes = normal burst bytes * 2
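If you want to generate the same numbers for other rates (say the
1 Mbit/s granularity you're after), the arithmetic is easy to script.
Here's a rough Python sketch of that rule of thumb -- the function name
and the example rates are just illustrative, nothing vendor-specific:

def car_params(rate_bps):
    # Rule of thumb from above:
    #   normal burst bytes  = bits per second * 1.5
    #   maximum burst bytes = normal burst bytes * 2
    normal_burst = int(rate_bps * 1.5)
    max_burst = normal_burst * 2
    return rate_bps, normal_burst, max_burst

# 4 Mbit/s reproduces the 4000000 6000000 12000000 line above
for mbit in (1, 3, 4, 6):
    rate, normal, extended = car_params(mbit * 1000000)
    print("rate-limit input %d %d %d "
          "conform-action transmit exceed-action drop"
          % (rate, normal, extended))

Whether the Foundry CLI will take burst values that large is something
you'd need to check against its rate-limiting syntax, of course.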
Hope that helps
David
...
On 16/12/2004, at 2:00 PM, Robert Brewer wrote:
> I'm working on an upgrade of our colocation Ethernet infrastructure,
> and one thing I would like to be able to do is rate limit customers to
> a particular speed with a granularity of 1 Mbit/s. I want it to behave
> as much as possible like a virtual communications link of the specified
> speed. For example, an interface limited to 3 Mbit/s should act like a
> pair of bonded T1s even though it is actually running over a 100BaseT
> full duplex interface. Apparently this is hard to do, since the only
> hardware I have seen that can do this well is Cisco IOS's rate-limit
> and traffic-shape.
>
> Anyway, our local Foundry reps said that the Foundry FastIron Edge
> Switch (FES) could do rate limiting. I configured an FES4802 and found
> that the "fixed rate-limiting" it supports was unacceptable. Using
> iperf, I was unable to get TCP performance close to the configured
> rate, and it got worse as I increased the configured rate-limit.
>
> Next I tried the FastIron 4802 (FWS4802) that supports "adaptive rate
> limiting". Unfortunately, it also did not perform acceptably. Using
> iperf to test, for TCP traffic I would see substantially lower
> throughput than the configured rate. For example, for a configured rate
> of 6 Mbit/s I would see 3.2 Mbit/s. Doing UDP tests with iperf showed
> the expected behavior: throughput that closely tracked the configured
> rate.
>
> It looks like the algorithm the Foundry hardware uses to rate-limit
> hoses TCP performance, presumably because it drops packets in ways
> that interact badly with TCP's congestion control.
>
> Has anyone gotten acceptable performance for rate limiting from
> Foundry hardware for this kind of application? If not, any suggestions
> on hardware that does get this right? Mahalo!
> --
> Robert Brewer                            ph:  808-532-8246
> Assistant Manager, System Department     fax: 808-532-8246
> LavaNet, Inc.                            rbrewer at lava.net
> _______________________________________________
> foundry-nsp mailing list
> foundry-nsp at puck.nether.net
> http://puck.nether.net/mailman/listinfo/foundry-nsp
>