[c-nsp] how "fast" is L2TP "fast switching" under 12.3T?
David Luyer
david at luyer.net
Tue Oct 26 23:58:21 EDT 2004
> Your problem is fragmentation: since the L2TP tunnel is delivered over
> Ethernet, the max MTU is 1500. Every customer frame over 1452 bytes
> causes fragmentation, since 1452 + 8 (PPPoE) + 12 (L2TP) + 8 (UDP) +
> 20 (IP) = 1500.
> You can add "ip tcp adjust-mss 1412" to eliminate fragmentation of TCP
> packets, but you won't be able to do anything about UDP and IPsec packets.
>
> Your friend has ATM, so he has a bigger MTU and no fragmentation.
>
> I have an identical setup to yours; I graph CPU usage and reassembly, and
> the graphs match perfectly.
As I responded off-list, the two potential issues are checksumming
and fragmentation. Decent providers will give you an increased MTU on
the Ethernet so you can avoid fragmentation entirely rather than
relying on the adjust-mss workaround. 1600 is a nice MTU (it safely
covers L2TP, IPsec, a few MPLS labels, Q-in-Q, etc.), but even
1540-1548 will generally be sufficient.
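For illustration only (the interface names and unit numbers are
placeholders, not taken from either of our configs), the two
approaches look roughly like this on the LNS:

  ! Preferred: raise the MTU on the Ethernet facing the wholesaler,
  ! assuming both the provider and the port adapter support it.
  interface FastEthernet2/0
   mtu 1600

  ! Fallback: clamp the TCP MSS on the subscriber-facing
  ! virtual-template, so a full-size TCP packet is 1412 + 40 bytes of
  ! TCP/IP header = 1452 bytes of IP, which fits without fragmentation.
  interface Virtual-Template1
   ip tcp adjust-mss 1412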
Much as I dislike ATM, it was giving us one good thing: 4470-byte MTU
paths across which we could run 1500-byte MTU tunnels of all kinds
without significant CPU overhead (other than the intrinsic CPU
overhead that comes with most ATM adaptors).
You also want to make sure you have turned on
"vpdn ip udp ignore checksum" in case your provider is setting
checksums on their L2TP packets; verifying those checksums forces the
packets to be process switched.
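In global configuration that is just (a minimal sketch, not a complete
LNS config):

  ! Skip validation of UDP checksums on inbound L2TP packets so they
  ! can stay in the fast/CEF path instead of being punted to process level.
  vpdn ip udp ignore checksum

  ! Watch the effect with a before/after comparison:
  show processes cpu | include IP Input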
David.
> K
>
> On Tue, 26 Oct 2004, Robert E. Seastrom wrote:
>
> >
> > So... I have a DSL wholesaler providing me L2TP handoff (a slight
> > variant on the classic LAC-to-LNS VPDN model: customers' PPPoE is
> > translated to L2TP by the wholesaler and handed off on an
> > 802.1Q-trunked Ethernet, with one or more LACs on each trunk).
> >
> > I'm running c7200-js-mz.123-8.T4.bin on a 7206VXR with an NPE-300
> > (PA-FE-TX facing the wholesaler in slot 2, IO-FE talking to the core)
> > as the LNS, and noticed that my CPU seemed abnormally high for the
> > amount of traffic I'm moving, compared to a friend's VXR running
> > 12.3(5a) mainline with an ATM handoff to another wholesaler.
> >
> > A bit of investigation showed that 12-14% of the CPU (overall CPU
> > utilization 40-45% with 15 Mbit/s towards the customers and 4
> > Mbit/s from the customers) was spent in "IP Input". A capture of
> > packets to the log buffer with "debug ip packet" showed incoming
> > packets (from the LAC to the LNS) as "routed via RIB", whereas
> > outgoing packets over the virtual-access interface showed as
> > "routed via FIB". Everything else looks nominal, but "routed via
> > RIB" sounds an awful lot like process switching to me.
> >
> > I've gone over the config and not seen anything obvious that would
> > be making this happen. Is there perhaps a better choice? Should I
> > be moving to 12.3 mainline and getting off the T train?
> >
> > Thoughts?
> >
> > ---Rob
> >
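For anyone chasing the same "routed via RIB" symptom as in Rob's
message above, a few checks on the LNS can confirm whether traffic is
really being punted to process level (a sketch from memory; the
interface name is a placeholder):

  show ip cef summary                   ! CEF should be enabled globally
  show ip interface FastEthernet2/0     ! look for "IP CEF switching is enabled"
  show processes cpu | include IP Input ! sustained time here suggests process switching

  ! If CEF has been disabled on the inbound interface, re-enable it:
  interface FastEthernet2/0
   ip route-cache cef

Also remember that "debug ip packet" only sees process-switched
packets, so the fact that the inbound traffic shows up in that debug
at all is itself a hint that it is not in the CEF path.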