We recently had a demo M5 unit, and I was trying to test the traffic-policing
features of JUNOS. What we really want to do is replicate the traffic-shape
functionality found in Cisco IOS. For example:
interface FastEthernet0/0
 description LavaNet Fast Ethernet LAN
 [...]
 traffic-shape group 106 1000000 500000 500000 1000
!
access-list 106 permit tcp host X.XX.XXX.X any eq nntp
access-list 106 deny ip any any
The above configuration will limit outgoing NNTP traffic from that particular
IP address to roughly 1 Mbit/s. Note that we aren't doing any fancy QoS
processing or marking packets for further processing on a backbone.
I tried to replicate this in JUNOS, with disappointing results. It appears
that the relevant configuration parameter is traffic-policing. As a simple
test, I tried to police a stream sent between two FE ports on the M5 using
iperf <http://dast.nlanr.net/Projects/Iperf/>. Here is the JUNOS
configuration I used:
fe-0/0/1 {
    description "Test network";
    unit 0 {
        family inet {
            no-redirects;
            filter {
                output fe-0/0/1.0-out;
            }
            address XX.XX.XXX.137/29;
        }
    }
}
filter fe-0/0/1.0-out {
    policer test {
        if-exceeding {
            bandwidth-limit 30m;
            burst-size-limit 100m;
        }
        then discard;
    }
    term test {
        /* only police traffic to one address, so we can test with some */
        /* streams policed and some not */
        from {
            destination-address {
                XX.XX.XXX.138/32;
            }
        }
        then policer test;
    }
    term default {
        then accept;
    }
}
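(As a sanity check that it really is the policer doing the dropping, and not
something else in the path, the filter counters can be watched from the CLI
while a test stream is running, with something like:

    show firewall filter fe-0/0/1.0-out

The count of packets the policer discarded should climb under its entry in
that output while traffic is being offered above the limit.)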
With the filter disabled, I could get 92 Mbit/s with TCP traffic. With the
filter on, I got 2.1 Mbit/s, not the hoped-for 30 Mbit/s! Multiple streams
didn't really help: each seemed to get about 2 Mbit/s. I also did some scp's
over ssh across the link, and they saw similar performance. UDP streams,
however, were policed very close to the bandwidth-limit (just with lots of
packet loss whenever the offered stream exceeded the limit).
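In case anyone wants to reproduce this, the tests were done with iperf
invocations along these lines (exact durations and offered rates aren't
important; the .138 host was the receiver):

    # receiver (XX.XX.XXX.138)
    iperf -s                                   # TCP
    iperf -s -u                                # UDP

    # sender
    iperf -c XX.XX.XXX.138 -t 30               # single TCP stream
    iperf -c XX.XX.XXX.138 -t 30 -P 4          # parallel TCP streams
    iperf -c XX.XX.XXX.138 -u -b 40M -t 30     # UDP offered above the limit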
My guess as to what is happening: when the TCP traffic goes over the
bandwidth limit, JUNOS drops long runs of consecutive packets. That triggers
lots of TCP retransmissions and collapses the window. The reduced throughput
means packets stop getting dropped, so TCP ramps back up, and the cycle
repeats. The net result is very poor average throughput. This seems to be
confirmed by some tests I ran, which showed normal throughput on a policed
stream as long as it never offered more traffic than the bandwidth-limit.
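For what it's worth, keeping the offered load under the limit is
straightforward with iperf: for UDP, set -b below the policer rate; for TCP,
one way is to pinch the window with -w so that window/RTT stays under the
bandwidth-limit (on a back-to-back FE link the RTT is a small fraction of a
millisecond, so the window has to be tiny, and the OS may round up whatever
you ask for). Roughly:

    # UDP offered below the 30 Mbit/s policer rate -- essentially no loss
    iperf -c XX.XX.XXX.138 -u -b 25M -t 30

    # TCP with a pinched window: at ~0.5 ms RTT, a 1500-byte window caps
    # throughput at about 1500 * 8 / 0.0005 = 24 Mbit/s
    iperf -c XX.XX.XXX.138 -w 1500 -t 30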
I'm wondering whether anyone else has encountered this problem, and whether
they were able to solve it. Our SE is still trying to find out whether this
is expected behavior. Juniper has a white paper about "Rate-limiting and
Traffic-policing Features" on their web site, but it doesn't discuss this
issue at all. In fact, it has an example which uses 'hard policing' to
provide 256 Kbit/s subrate service on a full T1 circuit. If my test is any
indication, the customer wouldn't get anywhere near 256 Kbit/s of throughput
for TCP traffic! :)
As far as I can tell, the plp and ToS-bit rewriting features of JUNOS aren't
really relevant here, since we don't need to communicate any information
beyond the router at hand.
I'm hoping that there is some magic configuration which will make this work,
or that our testing methodology is flawed and doesn't represent real-world
performance. However, I fear this might be one of those things that Cisco
does in software and JUNOS doesn't really do at all.
Mahalo in advance for any insight that can be provided.