[c-nsp] EoMPLS ?s
Ge Moua
moua0100 at umn.edu
Thu Oct 13 19:23:08 EDT 2011
In the past, we've used hw engines like the Cisco PXF (NSE-1xx) on the
73xx router platforms to do hw-based L2TPv3 processing; otherwise,
L2TPv3 done in the CPU is very resource intensive.
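For reference, a bare-bones L2TPv3 pseudowire setup looks roughly like
the below (the peer address and VC ID are made up, and whether it gets
forwarded in hw depends on the platform):

! minimal L2TPv3 xconnect sketch; 192.0.2.2 / VC ID 100 are illustrative
pseudowire-class L2TPV3-PW
 encapsulation l2tpv3
 ip local interface Loopback0
!
interface GigabitEthernet0/1
 no ip address
 xconnect 192.0.2.2 100 pw-class L2TPV3-PW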
One can also consider doing something like MPLSoGRE/AToM, and then let
the outer L3/GRE headers handle frag/defrag where needed (for 1500 MTU
links). Throughput may be a concern here, as frag/defrag is typically
punted to the CPU (unless there are ASICs now that can do that, which
I'm not aware of). There are platforms that handle GRE processing in hw,
but not necessarily fragmentation of the large payload inside the GRE pkts.
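The MPLSoGRE piece itself is basically just a GRE tunnel with MPLS
enabled on it; a rough sketch (tunnel endpoints/addressing are made up):

! MPLSoGRE sketch: label-switched traffic rides inside the GRE tunnel;
! anything exceeding the path MTU is fragmented at the GRE/IP layer,
! which is typically a CPU path
interface Tunnel0
 ip address 10.255.0.1 255.255.255.252
 mpls ip
 tunnel source Loopback0
 tunnel destination 192.0.2.2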
I've been told by Cisco TAC that one should do some baseline testing and
then see whether the performance of the chosen transport/signaling
method is sufficient, based on one's needs.
Also, on more than one occasion Cisco TAC has recommended an MPLS-based
transport for extending L2 nets in lieu of doing L2 over native IP
(e.g. L2TPv3), assuming one already has an MPLS core; the justification
was that there are more SMEs for MPLS than for L2TPv3.
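In its simplest form that MPLS option is just an EoMPLS port-mode
xconnect on the attachment-circuit interface, roughly like the below
(peer loopback and VC ID are made up; the jumbo MTU assumes the core
can actually carry it):

! EoMPLS port-mode sketch; 192.0.2.2 / VC ID 200 are illustrative
interface GigabitEthernet1/1
 mtu 9216
 no ip address
 xconnect 192.0.2.2 200 encapsulation mpls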
Good luck.
--
Regards,
Ge Moua
Univ of Minn Alumnus
--
On 10/13/11 11:00 AM, Jason LeBlanc wrote:
> I may not be able to use this option then as I have no control of MTU
> between the sites, and I am assuming it is 1500 bytes. No room for
> MPLS headers. Not sure I can get the throughput with L2TPv3 unless
> this can be done in HW on some platform.
>
> On 10/13/2011 09:19 AM, Ge Moua wrote:
>> Once upon a time, we too did this between two sites about 90 miles
>> apart with:
>> * a transport in the middle with a partner/service provider doing
>> MPLS CsC
>> * EoMPLS at the edge sites
>>
>> lessons learned:
>> * as previously mentioned, use as large an MTU as possible on all
>> transit links
>> * large MTU at the core
>>
>> we had a situation where we forgot to enable jumbo frames on one of the
>> core transit links &, needless to say, traffic traversing that path was
>> being dropped if the pkt size was greater than the configured MTU (which
>> was just the default of 1500); the fix was to enable jumbo frames there
>> too, and then all was working
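>>
>> for reference, the jumbo config on such a transit link is roughly this
>> (interface & values are illustrative; some platforms, e.g. the 6500,
>> also want a system-wide "system jumbomtu 9216"):
>>
>> ! jumbo MTU on the core transit link; the mpls mtu follows the
>> ! interface MTU by default, shown explicitly here
>> interface TenGigabitEthernet3/1
>>  mtu 9216
>>  mpls mtu 9216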
>>
>> MPLS is pretty unforgiving in that area (also as previously mentioned)
>>
>> --
>> Regards,
>> Ge Moua
>> Univ of Minn Alumnus
>> --
>>
>>
>> On 10/12/11 3:02 PM, Arie Vayner (avayner) wrote:
>>> Jason,
>>>
>>> There is no fragmentation in MPLS: either the packet can be forwarded,
>>> or it is dropped.
>>> You need to either have a larger MTU on the core (usually the way it is
>>> implemented today), or reduce the MTU on both sides.
>>> As this is an L2 link, you can't use things like MSS adjust, etc.
>>>
>>> Arie
>>>
>>> -----Original Message-----
>>> From: cisco-nsp-bounces at puck.nether.net
>>> [mailto:cisco-nsp-bounces at puck.nether.net] On Behalf Of Jason LeBlanc
>>> Sent: Wednesday, October 12, 2011 20:53
>>> To: cisco-nsp at puck.nether.net
>>> Subject: [c-nsp] EoMPLS ?s
>>>
>>> We're considering using EoMPLS port mode to bridge two datacenters
>>> together temporarily for a move, using sup720-3BXL on both ends with
>>> 6724 blades, probably 2 or 4 gig links, possibly 10G if I can get them
>>> to buy the HW. The question I have is primarily with regard to MTU. I
>>> have heard there are issues with ensuring both sides match; not much
>>> concern there. But the MTU of the network between the two facilities
>>> may be lower than 1518 bytes, causing fragmentation. I know this gets
>>> punted to the RP and is going to be a problem. Is there any workaround?
>>>
>>> Thanks,
>>> Jason
> _______________________________________________
> cisco-nsp mailing list cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/