[j-nsp] policing TCP traffic

Robert O'Hara rohara@juniper.net
Fri, 30 Aug 2002 12:33:09 -0700

Hi Jason,

The burst-limit defines the maximum amount of time (and therefore the
maximum amount of traffic) that you will allow any single packet to sit
in the burst buffer.

For example: to keep any single packet from sitting in the burst buffer
for longer than 5 ms, you would use the following generic formula.

---------> Burst size = bandwidth * 5 ms

That is, keep your burst size just large enough to allow
5 ms of burst.

Burst size is expressed in kbits.

Another example: let's say I had an OC48, and I wanted to rate-limit a
specific subset of traffic on that interface to 622 Mbps. To limit
packets running at that OC12 link rate to less than 1 ms of wait time in
the burst buffer, I would use the following:

burst size limit = 1 ms * 622 Mbit/sec = 622 kbits = ~78 kbytes.

So, for a 20 Mbps rate, I would set a burst size of 100 kbits
(20 Mbit/sec * 5 ms), and that would ensure that packets are not queued
for more than 5 ms.  A ~5 ms max delay is usually adequate for most
traffic.
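The generic formula is easy to sanity-check. Here is a quick sketch in Python (the function name is mine; the rates and delay targets are the ones from the examples above):

```python
def burst_size_bits(bandwidth_bps, max_delay_s):
    """Burst size (in bits) so that no packet waits in the burst
    buffer longer than max_delay_s at the given policed rate."""
    return bandwidth_bps * max_delay_s

# OC12-rate example: 622 Mbps with a 1 ms maximum wait
print(burst_size_bits(622e6, 0.001))            # 622000.0 bits
print(burst_size_bits(622e6, 0.001) / 8 / 1e3)  # 77.75 kbytes

# 20 Mbps with a 5 ms maximum wait
print(burst_size_bits(20e6, 0.005) / 1e3)       # 100.0 kbits
```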

Also (this does not apply to your case), as a rule of thumb the burst
limit should never be smaller than the MTU of the interface.

If the burst size is too small, you could end
up getting something smaller than the policed rate.
If it is too big, as is the case here, you are extending
the burst buffer well beyond the intended limit.

Think of the burst-limit buffer as a monitored threshold, used for
throttling traffic that bursts above the bandwidth-limit many times in
a second.
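A policer of this kind behaves like a token bucket: tokens accrue at the bandwidth-limit, the bucket holds at most the burst-size-limit worth of tokens, and a packet passes only if enough tokens are available. A toy simulation in Python (the class and the traffic numbers are illustrative, not JunOS internals):

```python
class TokenBucketPolicer:
    """Toy single-rate policer: fills at rate_bps, holds burst_bits."""

    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps
        self.depth = burst_bits
        self.tokens = burst_bits  # bucket starts full
        self.last = 0.0

    def offer(self, now, packet_bits):
        # Accrue tokens for the elapsed time, capped at the burst depth.
        self.tokens = min(self.depth,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True   # in profile: forward
        return False      # out of profile: discard

# 15 Mbps policer with a 100 kbit burst; offer 24 Mbps
# (12 kbit packets arriving every 0.5 ms).
p = TokenBucketPolicer(15e6, 100e3)
passed = sum(p.offer(i * 0.0005, 12e3) for i in range(200))
# The initial burst credit passes, then roughly 5 of every 8 packets
# (15/24 of the offered load) stay in profile.
print(passed)
```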


Bob O'Hara

Systems Engineer

Juniper Networks - 'Every Bit IP'

. Email:       rohara@juniper.net       .
. Cell:        603.498.8119             .
. Home Office: 603.382.3894             .
. Westford Office: 978.589.0127         .

-----Original Message-----
From: Jason Parsons [mailto:jparsons-juniper@saffron.org]
Sent: Friday, August 30, 2002 12:39 PM
To: juniper-nsp@puck.nether.net
Subject: [j-nsp] policing TCP traffic

While testing TCP performance through a policer in the lab, we noticed
some strange results.  It appears that with 15M and 20M policers, we
get significantly less throughput (as a percentage of the policer
setting) than with other settings.

I'm not sure this can be explained by the TCP window closing, as it
seems to happen only at 15M and 20M.

We are using the netperf tool to generate traffic:
     netperf -H -l 200 -- -i

The policer is configured as follows, under 5.4R1 on an M10:

     fe-0/0/3 {
         unit 0 {
             family inet {
                 filter {
                     input test-filter;
                 }
             }
         }
     }

     family inet {
         filter test-filter {
             policer p1 {
                 if-exceeding {
                     bandwidth-limit 15m;
                     burst-size-limit 100m;
                 }
                 then discard;
             }
         }
     }

Here is a summary of our results at different settings.  Yes, I know
that the burst-size is set high, but that doesn't seem to make a
difference in this test.

     Policer   Avg. bw per round (Mbps)   Avg. delay per round (ms)
     5m           4.8100                    10.0780
     10m          9.6600                     5.3340
     15m         11.2200                     3.5190
     20m         11.5500                     3.4150
     25m         24.0700                     6.5240
     30m         28.9100                     9.3030
     35m         33.7400                     7.8210
     40m         38.5700                     8.9690
     45m         43.3100                     8.5940
     50m         47.7100                     4.5710
     55m         52.1700                     8.3280
     60m         57.2800                     6.2030
     65m         62.0700                     4.9650
     70m         67.4900                     4.6020
     75m         72.3200                     5.4870
     80m         77.2300                     3.7220
     85m         82.0500                     6.1610
     90m         86.8900                    15.3260
     95m         91.7300                    14.5570
     100m        94.0500                    14.7480

(One row per summary_*m_tcp_output.txt file.)

Any pointers would be appreciated.

  - Jason Parsons

juniper-nsp mailing list juniper-nsp@puck.nether.net