[c-nsp] Multilink PPP (MLPPP) Asymmetrical Throughput Problem NxT1

Rodney Dunn rodunn at cisco.com
Wed Jun 6 10:50:46 EDT 2007


On Tue, Jun 05, 2007 at 11:02:07PM -0400, Sean Shepard wrote:
> Thank you for the reply on this.  We did exactly what you mention here
> (trying to isolate channels) and found the performance metrics didn't change
> very much except that there seemed to be little impairment with just a
> single T-1.

Good test.

> We do not believe that variance in latency exists to the point
> that we should be having a severe issue and it has since reoccurred on a
> couple of other bundled connections (on this same particular router - see
> below).

Fair enough. There were a lot of MLPPP bugs in older releases too.
MLPPP can also be pretty complicated because there are a lot of dependencies
on the driver code to report backpressure correctly to the bundle.
There is no queueing at the member interface level, so if the driver code
doesn't push the backpressure up to the MLPPP virtual interface correctly you
will have problems.
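A quick sanity check on that is to compare the drop counters on the bundle
against its member links while an upload test is running, something along
these lines (interface names here are just lifted from the configs further
down; substitute the members of the bundle that's actually misbehaving):

7206#show interfaces Multilink3 | include drops|Output queue
7206#show interfaces Serial3/0:16 | include drops|Output queue

If the bundle racks up drops while the members stay clean (or the other way
around) that tells you which layer is choking.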

> 
> None of the T-1s seem to take errors in any of the bundles.  We do see a lot
> of output queue drops on the Multilink interfaces but not sure how
> concerning that really is.

That's a problem. If they are valid drops you are overrunning the bundle
member links. 
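If you want to prove they are congestion drops rather than a bug, watch that
"Total output drops" counter climb during a slow upload. As a test (not a
fix, just a sketch) you can also bump the hold queue on the bundle:

interface Multilink3
 ! value is arbitrary, just for testing (default is the 40 shown above)
 hold-queue 150 out

If the member links really are full that only moves the drops around, but if
the drops stop while the T1s still have headroom it suggests bursts are just
overflowing the small default queue.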

> 
> The only difference between this device and similar ones on our network is
> that we have exceeded the number of fast interfaces (4 vs. recommended 3 -
> but the card in question is in the middle and should be getting its SRAM
> allotment okay) and we do terminate some ATM/PPPoE/L2TP sessions on this
> device.  The system is:

I'd be amazed if that had anything to do with it.

Did you disable MLPPP fragmentation? It's either "no ppp multilink fragmentation"
or the variant with the disable keyword ("ppp multilink fragment disable"); we
changed the CLI at some point along the way.
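For reference, the two forms as they appear in the configs below:

! older syntax (what's on the 7206):
interface Multilink2
 no ppp multilink fragmentation
!
! newer syntax (what's on the 2651XM):
interface Multilink1
 ppp multilink fragment disable

Either way it's worth confirming both ends really do have fragmentation off
if that's the intent.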

> 
> 7206 (non-VXR)
> NPE-200 with IO-FE
> IOS 12.2(31) [bootldr 12.0(13)S]
>   (is there perhaps an issue in 12.2(31) with MLPPP?
>    I'd like to go to a 12.3 release but need to verify
>    support for the CT3/4T1 for two of our boxes).
> 
> We're using the older CT3/4T1 cards on this edge device and haven't had
> problems with MLPPP in the past on a similar system (running 12.2(23)c).

See above. There are driver dependencies for each card for MLPPP to work.
Can you get 'sh controller' just to see if it shows anything interesting
that's different between the two?
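Something like this on each end (slot/port numbers taken from the configs
below; the exact controller keywords vary a bit by card, so treat this as a
sketch):

7206#show controllers t3 3/0
Router#show controllers serial 0/0
Router#show service-module serial 0/0

On the CT3 side look for per-T1 alarms and line/path errors; on the
WIC-DSU-T1s the service-module output carries the CSU/DSU error counters.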

> 
> Download speed continues to perform okay in most tests but uploads get
> woefully bad and we start losing packets above 1.6 to 2.0 mbps (2% observed
> today as things crept over 2mbps) regardless of the number of bundled trunks
> [2 or 3].  It "seems" that performance improves in the evenings when there
> is less traffic going through the device, though it's lightly loaded even
> during the day (maybe a total of 10 mbps being handled on this one system).

To really isolate that you first need to determine the direction of the
loss/latency and then narrow down the debugging from there. That's easier
said than done.
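One crude way to get the direction is to run large repeating pings from each
end in turn while watching the bundle counters on both routers, e.g.
(placeholder address as above; on older code you may have to use the
interactive extended ping instead of the one-line form):

Router#ping xx.xx.xx.xx size 1400 repeat 500

If loss only shows up when the 2651XM end is sending, you know the upstream
direction is where to dig.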

> 
> I considered tweaking the buffers, but if it's an issue of emptying the
> queues fast enough (perhaps because it's servicing one too many high speed
> interfaces?) then putting more in the buffers that it can't get to might
> just make things worse.

My experience says that's almost surely not the case, but I've been wrong
before. I don't know if we even have CEF support for MLPPP back that far.
What does 'sh int stat' look like for the bundle interface?
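That is, something like:

7206#show interfaces Multilink3 stats

and see how the packets split between the Processor and Route cache rows,
i.e. how much of the bundle traffic is falling back to process switching.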

> 
> We have several customers utilizing VoIP and have some policy-maps on those
> interfaces, none of them using MLPPP [yet] but a few on the same box and
> even the same card in question here.  No complaints about lost packets or
> voice quality there so the overall system seems sound and CPU utilization is
> generally in the low double digits.  Various debug outputs don't seem to be
> barking either.

It gets complicated, but you would have to get the multilink debugs and
compare to see if you are seeing loss/delay for the fragments.

Does 'sh ppp multilink' show anything when you are doing a transfer
that is slow?
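Roughly what I'd capture on both routers during a slow transfer (the
fragment debug is chatty, so be careful with it on the 2651XM):

7206#show ppp multilink
7206#debug ppp multilink fragments

In the 'show ppp multilink' output keep an eye on the lost fragments /
reordered / discarded counters; if those climb while the transfer runs it
points at the reassembly side.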

> 
> Any suggestions are appreciated.  I think I'm close to just dropping another
> chassis in with this DS3 on it and seeing if the problem cleans up.

Getting onto some upgraded code (late 12.3 or 12.4) would be a good recommendation.

> 
> 
> ADDITIONAL OUTPUTS
> 
> 7206#show int multilink3
> 
> Multilink3 is up, line protocol is up
>   Hardware is multilink group interface
>   Internet address is xx.xx.xx.xx/30
>   MTU 1500 bytes, BW 3072 Kbit, DLY 100000 usec,
>      reliability 255/255, txload 10/255, rxload 189/255
>   Encapsulation PPP, loopback not set
>   Keepalive set (10 sec)
>   DTR is pulsed for 2 seconds on reset
>   LCP Open, multilink Open
>   Open: IPCP
>   Last input 15:29:57, output never, output hang never
>   Last clearing of "show interface" counters 20:35:24
>   Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 32796
>   Queueing strategy: fifo
>   Output queue: 0/40 (size/max)
>   30 second input rate 2278000 bits/sec, 236 packets/sec
>   30 second output rate 131000 bits/sec, 139 packets/sec
>      7130649 packets input, 312942772 bytes, 0 no buffer
>      Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
>      0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
>      6358409 packets output, 2746174328 bytes, 0 underruns
>      0 output errors, 0 collisions, 1 interface resets
>      0 output buffer failures, 0 output buffers swapped out
>      0 carrier transitions
> 
> 
> 7206#show buffers failures
> Caller       Pool          Size      When
> 0x606D1B64  Middle            60    12:13:34
> 0x606D1B64  Small             62    10:51:14
> 0x606D1B64  Small             62    10:51:14
> 0x606D1B64  Small             62    10:51:14
> 0x606D1B64  Small             62    10:51:14
> 0x606D1B64  Small             62    05:03:08
> 0x606D1B64  Small             62    05:03:08
> 0x606D1B64  Small             62    05:03:08
> 0x606D1B64  Small             62    05:03:08
> 0x606D1B64  Small             62    05:03:08
> 
> 
> CPU utilization for five seconds: 9%/9%; one minute: 10%; five minutes: 10%
>  PID QTy       PC Runtime (ms)    Invoked   uSecs    Stacks TTY Process
> 
> 
> 
> 7206#show proc | incl IP
>   11 Mwe 606DA624            0          1       0 5644/6000   0 IPC Zone Manager
>   12 Mwe 606DA38C           60      74335       0 5704/6000   0 IPC Periodic Tim
>   13 Mwe 606DA330           56      74335       0 5708/6000   0 IPC Deferred Por
>   14 Mwe 606DA438            0          1       0 5600/6000   0 IPC Seat Manager
>   40 Mwe 60755094       281008     834496     33610332/12000  0 IP Input
>   45 Mwe 608D4FC4           24        186     129 4884/6000   0 PPP IP Add Route
>   49 Mwe 607CD814         1352      16162      83 7328/9000   0 IP Background
>   50 Mwe 607D31C8           72       1436      50 7916/9000   0 IP RIB Update
>   71 Lsi 60816C44          172       1239     138 5232/6000   0 IP Cache Ager
>  115 Lwe 607918C8            0          2       011472/12000  0 IP SNMP
> 
> 
> 7206#show proc | incl PPP
>    3 Mwe 608EFC18         1084        433    250321644/24000  0 PPP auth
>   42 Mwe 6087ED38            0          1       0 5636/6000   0 PPPATM Session d
>   45 Mwe 608D4FC4           24        186     129 4884/6000   0 PPP IP Add Route
>  102 Mwe 60F15E78            0          1       0 5632/6000   0 PPPOE discovery
>  103 Mwe 60F15F48            0          1       0 5624/6000   0 PPPOE background
>  110 Mwe 608D5200        34400     196190     17521944/24000  0 PPP manager
>  111 Hwe 609090AC          976      74368      13 4996/6000   0 Multilink PPP
>  112 Hwe 608FF1A0            0          2       0 5576/6000   0 Multilink PPP ou
> 
> 7206#show proc | incl Multilink
>  111 Hwe 609090AC          976      74411      13 4996/6000   0 Multilink PPP
>  112 Hwe 608FF1A0            0          2       0 5576/6000   0 Multilink PPP ou
>  113 Mwe 609091B0           12         18     666 5060/6000   0 Multilink event
> 
> 
> 
> 
> -----Original Message-----
> From: Rodney Dunn [mailto:rodunn at cisco.com] 
> Sent: Saturday, May 26, 2007 8:30 AM
> To: Sean Shepard
> Cc: cisco-nsp at puck.nether.net
> Subject: Re: [c-nsp] Multilink PPP (MLPPP) Asymmetrical Throughput Problem
> NxT1
> 
> I saw this exact problem from a customer a few months ago.  His turned out
> to be some extra latency on one of the T1 links.  I don't remember if it was
> in one direction or not, though.
> 
> How about trying the combinations of 2xT1 bundles to see if you
> can isolate one T1 as a particular problem?
> 
> Capture 'sh ppp mul' on both sides both when you are seeing the
> problem and when you are not.
> 
> Rodney
> 
> 
> 
> 
> On Fri, May 25, 2007 at 10:15:52PM -0400, Sean Shepard wrote:
> > I'm encountering an odd problem bonding T1 circuits with Multilink PPP
> > (Cisco 7206 down to a 2651XM).  In this particular case, I've got three (3)
> > T-1 lines bonded together and, taking into account overhead, I'm getting
> > (via direct Ethernet -> laptop connection - no other traffic):
> > 
> > 4.2 to 4.3 mbps downstream, but ...
> > Only around 2.0 mbps upstream (tried two different "check my speed" sites).
> > 
> > I was able to initiate multiple large packet repeating pings between the two
> > devices and the show interface statistics showed in the neighborhood of 4.0
> > mbps in both directions.  This put the CPU in the 60 to 70 percent
> > utilization range on the remote 2651XM device (with it originating about 50%
> > of the pings but not otherwise routing any traffic, Ethernet interface was
> > disconnected).  7206 utilization (handling traffic for numerous locations) was
> > well under 10%.  No ACLs or Policy-Maps at either end of the links.
> > 
> > Any suggestions or insight on the problem are greatly appreciated.  I've not
> > observed this kind of difference on various other 2xT1 (3 mbps) connections
> > that I've got in place but this is the first 3xT1.  Thanks a bunch!
> > 
> > 
> > HOST END
> > Cisco 7206 (non VXR)
> > 
> > interface Multilink2
> >  description Multilink (ch16-18)
> >  ip unnumbered Loopback2
> >  ppp multilink
> >  no ppp multilink fragmentation
> >  multilink-group 2
> > 
> > interface Serial3/0:16
> >  description MLPPP #1 
> >  mtu 1524
> >  no ip address
> >  no ip redirects
> >  encapsulation ppp
> >  no fair-queue
> >  no ppp lcp fast-start
> >  ppp multilink
> >  multilink-group 2
> > !
> > interface Serial3/0:17
> >  description MLPPP #2
> >  mtu 1524
> >  no ip address
> >  no ip redirects
> >  encapsulation ppp
> >  no fair-queue
> >  ppp multilink
> >  multilink-group 2
> > !
> > interface Serial3/0:18
> >  description MLPPP #3
> >  mtu 1524
> >  no ip address
> >  no ip redirects
> >  encapsulation ppp
> >  no fair-queue
> >  ppp multilink
> >  multilink-group 2
> > 
> > 
> > 
> > REMOTE END
> > CISCO 2651XM w/WIC-DSU-T1 Cards
> > IOS C2600 Version 12.3(6a)
> > 
> > Router> show run (relevant sections)
> > 
> > network-clock-participate slot 1
> > no network-clock-participate wic 0
> > no aaa new-model
> > ip subnet-zero
> > ip cef
> > !
> > !
> > !
> > interface Multilink1
> >  description MultiLink PPP 3xT1
> >  ip address xx.xx.xx.xx xx.xx.xx.xx
> >  no ip mroute-cache
> >  load-interval 30
> >  ppp multilink
> >  ppp multilink fragment disable
> >  ppp multilink group 1
> > ! 
> > interface FastEthernet0/0
> >  description LAN Interface
> >  ip address xx.xx.xx.xx 255.255.255.248
> >  speed auto
> >  full-duplex
> >  no cdp enable
> > !
> > interface Serial0/0
> >  description MLPPP T1 1
> >  mtu 1524
> >  no ip address
> >  encapsulation ppp
> >  load-interval 120
> >  no fair-queue
> >  ppp multilink
> >  ppp multilink group 1
> >  ppp multilink endpoint none
> > !
> > interface Serial0/1
> >  description MLPPP T1 2
> >  mtu 1524
> >  no ip address
> >  encapsulation ppp
> >  load-interval 120
> >  no fair-queue
> >  ppp multilink
> >  ppp multilink group 1
> >  ppp multilink endpoint none
> > !
> > interface Serial1/0
> >  description MLPPP T1 3
> >  mtu 1524
> >  no ip address
> >  encapsulation ppp
> >  load-interval 120
> >  no fair-queue
> >  ppp multilink
> >  ppp multilink group 1
> >  ppp multilink endpoint none
> > !
> > 
> > 
> > 
> > ROUTER> show interface
> > 
> > Serial0/0 is up, line protocol is up
> >   Hardware is PQUICC with Fractional T1 CSU/DSU
> >   Description: MLPPP T1 1
> >   MTU 1524 bytes, BW 1544 Kbit, DLY 20000 usec,
> >      reliability 255/255, txload 1/255, rxload 1/255
> >   Encapsulation PPP, LCP Open, multilink Open, loopback not set
> >   Last input 00:00:00, output 00:00:01, output hang never
> >   Last clearing of "show interface" counters 05:13:27
> >   Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
> >   Queueing strategy: fifo
> >   Output queue: 0/40 (size/max)
> >   2 minute input rate 0 bits/sec, 0 packets/sec
> >   2 minute output rate 0 bits/sec, 0 packets/sec
> >      179815 packets input, 246334673 bytes, 0 no buffer
> >      Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
> >      1 input errors, 0 CRC, 1 frame, 0 overrun, 0 ignored, 0 abort
> >      175062 packets output, 233229743 bytes, 0 underruns
> >      0 output errors, 0 collisions, 2 interface resets
> >      0 output buffer failures, 0 output buffers swapped out
> >      0 carrier transitions
> >      DCD=up  DSR=up  DTR=up  RTS=up  CTS=up
> > 
> > Serial0/1 is up, line protocol is up
> >   Hardware is PQUICC with Fractional T1 CSU/DSU
> >   Description: MLPPP T1 2
> >   MTU 1524 bytes, BW 1544 Kbit, DLY 20000 usec,
> >      reliability 255/255, txload 1/255, rxload 1/255
> >   Encapsulation PPP, LCP Open, multilink Open, loopback not set
> >   Last input 00:00:04, output 00:00:04, output hang never
> >   Last clearing of "show interface" counters 05:13:35
> >   Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
> >   Queueing strategy: fifo
> >   Output queue: 0/40 (size/max)
> >   2 minute input rate 0 bits/sec, 0 packets/sec
> >   2 minute output rate 0 bits/sec, 0 packets/sec
> >      179809 packets input, 246361009 bytes, 0 no buffer
> >      Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
> >      2 input errors, 1 CRC, 1 frame, 0 overrun, 0 ignored, 0 abort
> >      175057 packets output, 233616175 bytes, 0 underruns
> >      0 output errors, 0 collisions, 2 interface resets
> >      0 output buffer failures, 0 output buffers swapped out
> >      0 carrier transitions
> >      DCD=up  DSR=up  DTR=up  RTS=up  CTS=up
> > 
> > Serial1/0 is up, line protocol is up
> >   Hardware is DSCC4 with integrated T1 CSU/DSU
> >   Description: MLPPP T1 3
> >   MTU 1524 bytes, BW 1544 Kbit, DLY 20000 usec,
> >      reliability 255/255, txload 1/255, rxload 1/255
> >   Encapsulation PPP, LCP Open, multilink Open, loopback not set
> >   Last input 00:00:05, output 00:00:05, output hang never
> >   Last clearing of "show interface" counters 05:13:40
> >   Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
> >   Queueing strategy: fifo
> >   Output queue: 0/40 (size/max)
> >   2 minute input rate 0 bits/sec, 0 packets/sec
> >   2 minute output rate 0 bits/sec, 0 packets/sec
> >      179807 packets input, 246218180 bytes, 0 no buffer
> >      Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
> >      5 input errors, 0 CRC, 5 frame, 0 overrun, 0 ignored, 0 abort
> >      175054 packets output, 233403514 bytes, 0 underruns
> >      0 output errors, 0 collisions, 2 interface resets
> >      0 output buffer failures, 0 output buffers swapped out
> >      0 carrier transitions
> >      DCD=up  DSR=up  DTR=up  RTS=up  CTS=up
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > _______________________________________________
> > cisco-nsp mailing list  cisco-nsp at puck.nether.net
> > https://puck.nether.net/mailman/listinfo/cisco-nsp
> > archive at http://puck.nether.net/pipermail/cisco-nsp/

