[c-nsp] MLPPP throughput
Rodney Dunn
rodunn at cisco.com
Wed Jul 15 22:19:55 EDT 2009
Depending on your app's ability to handle out-of-order frames on the end
stations, of course.
On Wed, Jul 15, 2009 at 09:59:04PM -0400, Rodney Dunn wrote:
> I bet your out-of-order delivery is getting so bad that you are dropping
> the packets.
>
> I'm not a PPPoX expert... but could you create four dialers (one per
> DSL line) and do CEF per-packet load sharing over them?
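>
> Something like this, as a very rough sketch (the dialer numbering, pool
> numbers, and static routes below are my assumptions, not a tested
> config):
>
> ip cef
> !
> interface Dialer1
>  ip address negotiated
>  encapsulation ppp
>  dialer pool 11
>  ip load-sharing per-packet
> ! ...and likewise Dialer2 through Dialer4, each backed by one ATM PVC
> ! via its own dialer pool...
> !
> ! four equal-cost static routes so CEF alternates packets across
> ! the dialers
> ip route 0.0.0.0 0.0.0.0 Dialer1
> ip route 0.0.0.0 0.0.0.0 Dialer2
> ip route 0.0.0.0 0.0.0.0 Dialer3
> ip route 0.0.0.0 0.0.0.0 Dialer4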
>
> On Wed, Jul 15, 2009 at 10:07:24AM -0500, Dave Weis wrote:
> >
> > I'm bringing up an MLPPP PPPoA bundle with four 7-meg DSL lines. It had
> > worked fine with only two lines in the bundle and provided the full
> > expected speed. Adding the next two lines didn't increase the speed; it
> > actually might have decreased a bit. It tops out at around 10 megabits
> > with four links in the bundle.
> >
> > The hardware on the customer side is a 3745 running 12.4(4)T1. It has
> > four WIC-1ADSLs installed. The config on the ADSL interfaces is
> > identical on all four:
> >
> > interface ATM0/0
> >  no ip address
> >  no atm ilmi-keepalive
> >  dsl operating-mode auto
> >  hold-queue 224 in
> >  pvc 0/32
> >   encapsulation aal5mux ppp dialer
> >   dialer pool-member 1
> > !
> >
> > interface Dialer0
> >  ip address negotiated
> >  no ip proxy-arp
> >  encapsulation ppp
> >  dialer pool 1
> >  dialer vpdn
> >  dialer-group 1
> >  ppp pap sent-username <removed>
> >  ppp link reorders
> >  ppp multilink
> >  ppp multilink fragment disable
> > !
> >
> > We've tried it both with and without the reorders and fragment-disable
> > commands in the config.
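> >
> > In other words, the two variants tested differed in just these lines
> > under Dialer0 (a sketch; variant B is simply the IOS defaults):
> >
> > ! variant A: tolerate reordering, MLP fragmentation off
> >  ppp link reorders
> >  ppp multilink fragment disable
> > ! variant B: both commands removed (back to the defaults)
> >  no ppp link reorders
> >  no ppp multilink fragment disable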
> >
> > The server side is a 7206 with an NPE-G1. We're not topping out the
> > processor on either side during transfers.
> >
> > The multilink bundle shows a lot of discards and reorders. This is after a
> > reset and downloading less than a gig of data on the client:
> >
> > Virtual-Access3, bundle name is isprouter
> >   Endpoint discriminator is isprouter
> >   Bundle up for 01:15:43, total bandwidth 400000, load 1/255
> >   Receive buffer limit 48768 bytes, frag timeout 1000 ms
> >   Using relaxed lost fragment detection algorithm.
> >   Dialer interface is Dialer0
> >     0/0 fragments/bytes in reassembly list
> >     242 lost fragments, 1237543 reordered
> >     29169/15194784 discarded fragments/bytes, 16700 lost received
> >     0x1F9178 received sequence, 0x6A517 sent sequence
> >   Member links: 4 (max not set, min not set)
> >     Vi4, since 01:15:43, unsequenced
> >       PPPoATM link, ATM PVC 0/32 on ATM0/0
> >       Packets in ATM PVC Holdq: 0, Particles in ATM PVC Tx Ring: 0
> >     Vi6, since 01:15:43, unsequenced
> >       PPPoATM link, ATM PVC 0/32 on ATM1/0
> >       Packets in ATM PVC Holdq: 0, Particles in ATM PVC Tx Ring: 0
> >     Vi5, since 01:15:43, unsequenced
> >       PPPoATM link, ATM PVC 0/32 on ATM0/2
> >       Packets in ATM PVC Holdq: 0, Particles in ATM PVC Tx Ring: 0
> >     Vi2, since 01:15:43, unsequenced
> >       PPPoATM link, ATM PVC 0/32 on ATM0/1
> >       Packets in ATM PVC Holdq: 0, Particles in ATM PVC Tx Ring: 0
> >   No inactive multilink interfaces
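> >
> > (For reference, a few commands that help show whether one member link
> > is lagging or taking errors; syntax from memory, so double-check on
> > your IOS version:)
> >
> > show ppp multilink
> > show interfaces Virtual-Access3
> > show atm pvc 0/32
> > show dsl interface atm 0/0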
> >
> >
> > Any ideas to get this closer to 20+ megs?
> >
> > Thanks,
> > dave
> >
> > --
> > Dave Weis
> > djweis at internetsolver.com
> > http://www.internetsolver.com/