[c-nsp] MLPPP QoS issue on SPA-1XCHSTM1/OC3 - traffic stops when link is congested

Vladimir Rabljenovic vlado_r at net.hr
Sat Oct 17 12:16:38 EDT 2009


hello,


I have a Cisco 7606 with an RSP720 and a SIP-400 linecard carrying a SPA-1XCHSTM1/OC3.
The channelized STM-1 is connected to an SDH MUX, which delivers 2 x E1 from the CPE location.
The configuration uses ML-PPP bundling; the PPP MRU is currently set to 680 bytes and the MRRU
to 674 bytes (only small packets are transferred, and the target is to keep delay/jitter as low
as possible without fragmentation).
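
For reference, the bundle part of the config looks roughly like this (IP address, group number
and policy name are examples, the exact fragment-size command may differ per IOS release, and
the 680-byte per-link MRU is set separately and omitted here):

  interface Multilink1
   ip address 10.0.0.1 255.255.255.252
   ppp multilink
   ppp multilink mrru local 674
   ! 512 is anyway the default on this SPA, shown only for clarity
   ppp multilink fragment size 512
   service-policy output ML-QOS
  !
  interface Serial2/1/0.1/1/1/1:1
   no ip address
   encapsulation ppp
   ppp multilink
   ppp multilink group 1
  !
  interface Serial2/1/0.1/1/1/2:1
   no ip address
   encapsulation ppp
   ppp multilink
   ppp multilink group 1
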
On the Multilink interface there is an outgoing QoS policy with one priority class, two AF
classes and class-default. The AF classes and class-default share the available bandwidth via
percentage-based remaining ratios (1% configured for class-default; sketched below).
With a test tool we generate real-time and important traffic going into the EF and AF
classes/queues respectively, while with a traffic generator we send a 4 Mbps UDP stream (packet
size was 1500 bytes at the beginning, now changed to around 500 bytes) into the default class.
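
The egress policy is along these lines (class names, DSCP matches and the priority and AF
percentages are illustrative, only the 1% on class-default is the real value):

  class-map match-any VOICE
   match dscp ef
  class-map match-any AF-HIGH
   match dscp af31
  class-map match-any AF-LOW
   match dscp af21
  !
  policy-map ML-QOS
   class VOICE
    priority percent 30
   class AF-HIGH
    bandwidth remaining percent 70
   class AF-LOW
    bandwidth remaining percent 29
   class class-default
    ! the 4 Mbps UDP test stream lands in this class
    bandwidth remaining percent 1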

At some point, suddenly, traffic in the outgoing direction stops completely. We saw it with
Nethawk: the Cisco 7606 sends almost nothing on the line; the only thing still being captured is
regular LCP packets. Since both PPP and ML-PPP stay up, it does not look like a PPP issue, but
somehow QoS-related. In the policy-map counters, in the EF and AF classes, we see some strange
values: a small offered rate and exactly the same value for the drop rate (it looks like the
counters are not accurate, since there should be more traffic in those classes). So the link was
congested for some hours and then suddenly stopped sending traffic out on the interface.
At the same time, incoming packets arrive on the interface normally and are forwarded to the
other egress interface.
When the UDP traffic generator is stopped, connectivity comes back.
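
For reference, the counters and state described above correspond to the following exec commands
(Multilink1 stands for whatever number the bundle interface actually has):

  show policy-map interface Multilink1 output    <- per-class offered rate and drop counters
  show ppp multilink                             <- bundle state and fragment/reassembly counters
  show interfaces Multilink1                     <- output drops and txload on the bundle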

If someone has an idea of how exactly a QoS policy with bandwidth-remaining ratios behaves on
this hardware with an ML-PPP configuration, or an idea of what we should look for, I would
really appreciate it.
 
Also, could someone explain the relationship between MLPPP fragmentation in hardware (on this
card it can be set to one of three values, 128, 256 or 512 bytes, with 512 enabled by default)
and the MRU/MTU/MRRU values? The relevant part of "show ppp multilink" for the bundle:

  Receive buffer limit 10784 bytes, frag timeout 1000 ms
  Bundle is Distributed
    0/0 fragments/bytes in reassembly list
    0 lost fragments, 0 reordered
    0/0 discarded fragments/bytes, 0 lost received
    0x0 received sequence, 0x0 sent sequence
  Distributed fragmentation on. Fragment size 512.  Multilink in Hardware.
  Member links: 2 active, 0 inactive (max not set, min not set)
    Se2/1/0.1/1/1/2:1, since 01:57:16
    Se2/1/0.1/1/1/1:1, since 01:57:16
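
To make that second question more concrete, this is how we currently understand the three values
(numbers from our setup; please correct us if the reasoning is wrong):

  per-link MTU/MRU : 680 bytes - the largest frame an individual E1 member link will accept
  bundle MRRU      : 674 bytes - the largest reassembled packet the peer accepts over the bundle,
                                 so everything sent across the bundle must be <= 674 bytes
  fragment size    : 512 bytes - packets larger than 512 bytes are cut in hardware, e.g. a
                                 674-byte packet becomes 512 + 162 byte fragments (plus MLPPP
                                 overhead), each of which still has to fit within the per-link MRU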



thanks in advance,
vlado

