[c-nsp] LLQ delays

Volodymyr Yakovenko vovik at dumpty.org
Tue Apr 19 19:49:38 EDT 2005


Hello!

 I am trying to investigate a simple LLQ test case to understand why LLQ can
 cause significant delay in some circumstances. There are two routers:

kv-krest-cis1	(2610, 12.2(27))
kv-mo-cis2	(3660, 12.2(19)a)

 connected over a 2 Mbit/s serial link:

kv-mo-cis2:Ser2/3 <-> Ser0/0:kv-krest-cis1

 First of all, I configured both routers' interfaces with the simplest LLQ
 policy. On kv-krest-cis1:

policy-map MINIMAL
  class class-default
   fair-queue
interface Loopback1
 ip address 172.20.255.136 255.255.255.255
interface Serial0/0
 description kv-mo-cis2:Ser2/3
 bandwidth 2000
 ip unnumbered Loopback1
 max-reserved-bandwidth 100
 service-policy output MINIMAL
 encapsulation ppp
 ip route-cache flow
 load-interval 30

 and on kv-mo-cis2:

policy-map MINIMAL
  class class-default
   fair-queue
interface Loopback1
 ip address 172.20.255.2 255.255.255.255
interface Serial2/3
 description kv-krest-cis1:Ser0/0
 bandwidth 2000
 ip unnumbered Loopback1
 max-reserved-bandwidth 90
 service-policy output MINIMAL
 encapsulation ppp
 load-interval 30
 serial restart-delay 0

 ICMP ping from one router to the other (or a transit ping) is normal (4-8 ms).

 After that I saturated the link with two high-volume TCP sessions:

  30 second input rate 2025000 bits/sec, 276 packets/sec
  30 second output rate 2023000 bits/sec, 276 packets/sec

 RTT measured from either router itself, or from any other host, shows an
 increase to approximately 120 ms:

kv-krest-cis1#ping 172.20.255.2 repeat 100
Type escape sequence to abort.
Sending 100, 100-byte ICMP Echos to 172.20.255.2, timeout is 2 seconds:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Success rate is 100 percent (100/100), round-trip min/avg/max = 108/129/141 ms

kv-mo-cis2#ping 172.20.255.136 repeat 100
Type escape sequence to abort.
Sending 100, 100-byte ICMP Echos to 172.20.255.136, timeout is 2 seconds:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Success rate is 100 percent (100/100), round-trip min/avg/max = 116/127/140 ms

 The same measurement from an external host:

100 packets transmitted, 100 packets received, 0% packet loss
round-trip min/avg/max/stddev = 119.946/133.275/143.430/4.460 ms
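
 (A rough sanity check, assuming the extra delay is pure queueing on the
 2 Mbit/s link and is split evenly between the two directions:

    ~60 ms * 2,000,000 bit/s = ~120,000 bits =~ 15,000 bytes =~ 10 x 1500-byte packets

  i.e. roughly ten MTU-sized packets queued ahead of each echo in each
  direction.)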

 CPU load on both routers looks normal:

kv-krest-cis1#sh proc cpu
CPU utilization for five seconds: 27%/21%; one minute: 23%; five minutes: 21%

kv-mo-cis2#sh proc cpu
CPU utilization for five seconds: 7%/6%; one minute: 7%; five minutes: 8%

 I then tried to improve policy-map MINIMAL with a separate priority class
 for ICMP:

class-map match-all ICMP
  match protocol icmp
policy-map MINIMAL
  class ICMP
    priority 100
  class class-default
   fair-queue
 
 It looks like ICMP is being matched:

kv-krest-cis1#sh policy-map interface serial 0/0 output class ICMP
 Serial0/0 

  Service-policy output: MINIMAL

    Class-map: ICMP (match-all)
      1220 packets, 107360 bytes
      30 second offered rate 0 bps, drop rate 0 bps
      Match: protocol icmp
      Queueing
        Strict Priority
        Output Queue: Conversation 264 
        Bandwidth 100 (kbps) Burst 2500 (Bytes)
        (pkts matched/bytes matched) 1218/107184
        (total drops/bytes drops) 0/0

 but the RTT remains the same: 120 ms and up.

 The funny part of the situation is that simply replacing

service-policy output MINIMAL

 with

fair-queue

 on the interface drops the RTT to 20 ms and less.
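
 For clarity, the change I am describing is essentially the following, shown
 here for the kv-krest-cis1 side (the kv-mo-cis2 side is analogous):

interface Serial0/0
 no service-policy output MINIMAL
 fair-queue

 i.e. the MQC policy is removed and plain interface-level fair queueing is
 enabled instead.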

 Can someone comment on what is going on? What is the difference between

policy-map MINIMAL
  class class-default
   fair-queue

 and just 'fair-queue'? 

-- 
Regards,
Volodymyr.


