[nsp] MLPPP fragment reordering

From: Steven W. Raymond (steven_raymond@eli.net)
Date: Tue Feb 05 2002 - 19:55:04 EST


Problem:
User complains about TCP packets arriving out-of-order on the far end
of a 4-T1 MLPPP bundle. The assertion is that misordered MLPPP fragments
produce out-of-order TCP segments, and that TCP's reaction to them is
hurting throughput.

When I issue 'sh ppp multilink', the "reordered" stat is climbing like
crazy, on the order of 300-1000 per second:

router#sh ppp multilink

Multilink2, bundle name is
  36 lost fragments, 15371129 reordered, 0 unassigned
  3 discarded, 21644 lost received, 125/255 load
  0x375D0E received sequence, 0x45CAD4 sent sequence
  Member links: 4 active, 0 inactive (max not set, min not set)
    Serial5/1/0/27:0
    Serial3/1/0/17:0
    Serial11/1/0/22:0
    Serial10/1/0/28:0
Multilink2, bundle name is
  36 lost fragments, 15371695 reordered, 0 unassigned
  3 discarded, 21644 lost received, 125/255 load
  0x37633C received sequence, 0x45CD6C sent sequence
  Member links: 4 active, 0 inactive (max not set, min not set)
    Serial5/1/0/27:0
    Serial3/1/0/17:0
    Serial11/1/0/22:0
    Serial10/1/0/28:0
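For anyone unsure what that counter is measuring, here is a minimal sketch of how a multilink receiver can tally "reordered" fragments. This is an illustration of the general mechanism, not Cisco's actual implementation: each fragment carries a monotonically increasing sequence number, and a fragment that arrives ahead of the next expected sequence, while earlier fragments are still outstanding, bumps the counter.

```python
def count_reordered(arrivals):
    """arrivals: iterable of fragment sequence numbers in arrival order.

    Returns how many fragments arrived ahead of the next expected
    sequence number (i.e. while an earlier fragment was still in flight).
    """
    expected = 0
    pending = set()          # sequence numbers buffered ahead of 'expected'
    reordered = 0
    for seq in arrivals:
        if seq == expected:
            expected += 1
            # drain any buffered fragments that are now in order
            while expected in pending:
                pending.remove(expected)
                expected += 1
        elif seq > expected:
            # arrived early: an earlier fragment has not shown up yet
            reordered += 1
            pending.add(seq)
        # seq < expected would be a late duplicate; ignored in this sketch
    return reordered

# Four member links with unequal queueing delay easily produce swaps:
print(count_reordered([0, 2, 1, 3, 5, 4, 6, 7]))   # -> 2
```

The point is that the counter reflects fragment arrival order across the member links, before reassembly; it does not by itself say whether reassembled packets were handed up out of order.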

Questions:
1) Does the "reordered" statistic suggest that TCP packets will/can
arrive out-of-order?
2) Is there any way to improve this behavior? I do have "no ppp multilink
fragmentation" configured on the multilink interface, but I'm not sure
exactly what it does, or whether it is even relevant to this issue.
3) Are there better ways of load-balancing across 4 T1s?

I have previously load-balanced using "ip load-sharing per-packet" and
"per-destination", but my understanding is that per-packet is an even
worse method in terms of potentially misordering packets.
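That understanding matches the usual argument against per-packet load sharing: packets of a single flow are sprayed round-robin across links whose one-way delays differ, so arrival order need not match send order. A small sketch, with made-up illustrative delays rather than measured ones:

```python
def arrival_order(num_packets, link_delays_ms, send_gap_ms=1.0):
    """Spray packets round-robin over links; return indices by arrival time."""
    arrivals = []
    for i in range(num_packets):
        link = i % len(link_delays_ms)           # round-robin assignment
        send_time = i * send_gap_ms
        arrivals.append((send_time + link_delays_ms[link], i))
    return [pkt for _, pkt in sorted(arrivals)]

# Two of four links with a few ms of extra delay are enough to swap
# packets 2 and 3 behind 4 and 5:
print(arrival_order(8, [2.0, 2.0, 9.0, 9.0]))    # -> [0, 1, 4, 5, 2, 3, 6, 7]
```

Per-destination sharing avoids this (one flow sticks to one link) at the cost of coarser balancing, which is presumably why it is usually recommended over per-packet.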

Cisco advises that some reordering is normal and acceptable, but at this
rate it seems too high to me, and could push TCP packets out of order,
keeping the connections from reaching full throughput.

Any advice is appreciated.

Thanks!



This archive was generated by hypermail 2b29 : Sun Aug 04 2002 - 04:13:31 EDT