[j-nsp] Longest Match for LDP (RFC5283)
Krasimir Avramski
krasi at smartcom.bg
Tue Jul 24 12:25:27 EDT 2018
Hi
It is used on Access Nodes (which only carry a default route to the AGN) in
LDP-DoD (Downstream-on-Demand) Seamless MPLS architectures - RFC 7032
<https://tools.ietf.org/html/rfc7032>.
A sample with LDP->BGP-LU redistribution on the AGN is here:
<https://www.juniper.net/documentation/en_US/junos12.2/topics/example/mpls-ldp-downstream-on-demand.html>
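
For anyone curious, a rough sketch of the AGN side of that design - the
interface, addresses and policy name below are made up for illustration, and
exact statements vary by release - with an LDP-DoD session towards the access
node and LDP-learned loopbacks exported into BGP labeled-unicast:

    protocols {
        ldp {
            interface ge-0/0/0.0;            # link towards the access node (hypothetical)
            session 192.0.2.1 {              # access-node loopback (hypothetical)
                downstream-on-demand;        # LDP-DoD, as in the example above
            }
        }
        bgp {
            group core {
                type internal;
                family inet {
                    labeled-unicast {
                        rib {
                            inet.3;          # keep labelled routes usable for next-hop resolution
                        }
                    }
                }
                export ldp-to-bgp-lu;        # hand the access loopbacks to BGP-LU
                neighbor 198.51.100.1;       # core RR / remote AGN (hypothetical)
            }
        }
    }
    policy-options {
        policy-statement ldp-to-bgp-lu {
            term access-loopbacks {
                from protocol ldp;
                then accept;
            }
        }
    }
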
Best Regards,
Krasi
On 24 July 2018 at 16:35, <adamv0025 at netconsultings.com> wrote:
> Hi James
>
> > Of James Bensley
> > Sent: Tuesday, July 24, 2018 11:17 AM
> >
> > Hi All,
> >
> > Like my other post about Egress Protection on Juniper, is anyone using
> > what Juniper call "Longest Match for LDP" - their implementation of
> > RFC 5283, "LDP Extension for Inter-Area Label Switched Paths (LSPs)"?
> >
> > The Juniper documentation is available here:
> >
> > https://www.juniper.net/documentation/en_US/junos/topics/concept/long
> > est-match-support-for-ldp-overview.html
> >
> > https://www.juniper.net/documentation/en_US/junos/topics/task/configur
> > ation/configuring-longest-match-ldp.html
> >
> > As before, as far as I can tell only Juniper have implemented this:
> > - Is anyone using this?
> > - Are you using it in a mixed vendor network?
> > - What is your use case for using it?
> >
> > I'm looking at IGP/MPLS scaling issues where some smaller access layer
> > boxes that run MPLS (e.g. Cisco ME3600X, ASR920, etc.) have limited
> > TCAM. We do see TCAM exhaustion issues with these boxes, however the
> > biggest culprit is Inter-AS MPLS Option B connections. This is because
> > Inter-AS OptB double-allocates labels, which means label TCAM can run out
> > before we run out of IPv4/v6 TCAM due to the n*2 growth of labels vs
> > prefixes.
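> >
> > (Purely illustrative, made-up numbers to show the n*2 effect:
> >
> >   100k VPNv4 prefixes crossing an OptB ASBR with per-prefix labels:
> >     label entries:      ~2 x 100k = 200k (locally re-allocated + learned)
> >     IP prefix entries:  ~100k
> >
> > so label TCAM on that node fills roughly twice as fast as IP TCAM.)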
> >
> > I'm struggling to see the use case for the feature linked above that has
> > been implemented by Juniper. When running LDP, label TCAM usage increments
> > pretty much linearly with IP prefix TCAM usage.
> > If you're running the BGP VPNv4/VPNv6 address families with per-prefix
> > labelling (the default on Cisco IOS/IOS-XE) then again label TCAM usage
> > increases pretty much linearly with IP prefix TCAM usage. If you're using
> > per-vrf/per-table labels or per-CE labels then label TCAM usage grows far
> > more slowly than IP prefix usage (roughly with the number of VRFs or CEs
> > rather than with the number of prefixes), and in this scenario we run
> > out of IP prefix TCAM long before we run out of label TCAM.
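> >
> > (Again with made-up numbers: 100k VPN prefixes spread over 200 VRFs needs
> > ~100k labels with per-prefix allocation, but only ~200 labels with
> > per-vrf/per-table allocation - one aggregate label per VRF - so IP TCAM is
> > exhausted long before label TCAM in that case.)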
> >
> > My point here is that label TCAM runs out because of BGP/RSVP/SR usage,
> > not because of LDP usage.
> >
> > So who is using this feature/RFC on low end MPLS access boxes (QFX5100 or
> > ACX5048 etc.)?
> > How is it helping you?
> > Who's running out of MPLS TCAM space (on a Juniper device) before they
> > run out of IP prefix space when using LDP (and not RSVP/SR/BGP)?
> >
> I certainly was not aware of this one.
> Interesting concept - I'm guessing for OptB in Inter-Area deployments? (Or a
> neat alternative to the current options?)
>
> Suppose I have an ABR advertising a default-route + label down to a stub
> area, and suppose PE-3 in this stub area wants to send packets to PE1 and
> PE2 in area 0 or some other area.
> Now I guess the whole purpose of "Longest Match for LDP" is to save
> resources on PE-3 so that all it has in its RIB/FIB is this default-route +
> LDP label pointing at the ABR.
> So it encapsulates packets destined to PE1 and PE2 with the only transport
> label it has, puts the VPN label it learned via BGP from PE1 and PE2 on
> top, and sends the packets to the ABR.
> When the ABR receives these two packets, how is it going to know that they
> are not destined to it and that it needs to stitch this LSP further to LSPs
> toward PE1 and PE2? And how would it know which of the two packets it just
> received is supposed to be forwarded to PE1 and which to PE2?
> This seems to defeat the purpose of the end-to-end LSP principle, where the
> label stack has to uniquely identify the label-switched path's end-point (or
> group of end-points).
> The only way out is if the ABR indeed thinks these packets are destined for
> it, and it also happens to host both VRFs and has advertised the VPN
> prefixes for these VRFs to our PE-3; then, when PE-3 sends packets towards
> PE1 and PE2, they land on the ABR in their respective VRFs and are sent
> onward by the ABR to PE1 and PE2.
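>
> For what it's worth, as I read RFC 5283 the longest match only relaxes the
> route lookup used to validate the FEC: PE-3 should still hold a distinct
> transport label per remote /32 FEC, resolved over the default route, so the
> ABR can tell the two apart. A minimal, hypothetical Junos sketch of the
> access-node side (the interface name is made up):
>
>     protocols {
>         ldp {
>             longest-match;            # RFC 5283: resolve /32 FECs over a less-specific route
>             interface ge-0/0/1.0;     # uplink towards the ABR (hypothetical)
>         }
>     }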
>
> In the old world PE-3 would need to have a route + transport label for
> PE1 and PE2.
> Options:
> a) In a single-area-for-the-whole-core approach, PE-3 would have to hold
> these routes + transport labels for all other PEs in the backbone - the
> same-LSDB-on-every-node requirement.
> b) In multi-area with BGP-LU (hierarchical MPLS) we could have the ABR
> advertise only a subset of routes + labels to PE-3 (or have PE-3 accept
> only the routes it actually needs) - this reduction might or might not
> suffice; note: no VPN routes at the ABR. (A rough sketch follows after this
> list.)
> c) I guess this new approach then further reduces the FIB size requirements
> on PE-3 by allowing it to hold just one prefix and transport label (or two
> in the case of redundant ABRs), but it increases requirements on the ABRs,
> as they now need to hold all VPN routes - just like RRs (i.e. they require
> much more FIB than a regular PE).
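>
> A rough, hypothetical sketch of option b) on the ABR (addresses and the
> policy name are invented): BGP labeled-unicast towards PE-3 with an export
> policy that only hands it the remote PE loopbacks it actually needs:
>
>     protocols {
>         bgp {
>             group to-stub-pe {
>                 type internal;
>                 family inet {
>                     labeled-unicast {
>                         rib {
>                             inet.3;
>                         }
>                     }
>                 }
>                 export pe3-loopbacks-only;   # only what PE-3 really needs
>                 neighbor 192.0.2.3;          # PE-3 (hypothetical)
>             }
>         }
>     }
>     policy-options {
>         policy-statement pe3-loopbacks-only {
>             term wanted {
>                 from {
>                     route-filter 10.0.1.1/32 exact;   # PE1 loopback (hypothetical)
>                     route-filter 10.0.1.2/32 exact;   # PE2 loopback (hypothetical)
>                 }
>                 then accept;
>             }
>             then reject;
>         }
>     }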
>
> I guess running out of FIB due to the sheer size of the MPLS network can
> happen in environments where you have just a few VRFs per PE with just a
> few routes each, but 10s or 100s of thousands of PEs - it's the toll of
> taking MPLS all the way down to the access layer, and that's partly why
> they came up with MPLS-TP.
>
> adam
>
> netconsultings.com
> ::carrier-class solutions for the telecommunications industry::
>
> _______________________________________________
> juniper-nsp mailing list juniper-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>