[j-nsp] Segment Routing Real World Deployment (was: VPC mc-lag)

adamv0025 at netconsultings.com
Sun Jul 8 16:22:20 EDT 2018


> From: Mark Tinka
> Sent: Sunday, July 08, 2018 9:20 AM
> 
Hi Mark,
 two points

> 
> 
> On 7/Jul/18 23:10, Saku Ytti wrote:
> 
> > Alexandre's point, to which I agree, is that when you run them over
> > LSP, you get all the convergency benefits of TE.
> 
> Unless you've got LFA (or BFD, for the poor man), in which case there is
> no real incremental benefit.
> 
> We run BFD + LFA for IS-IS. We've never seen the need for RSVP-TE for FRR
> requirements.
> 
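
(As an aside for anyone reading along, this is roughly what BFD plus LFA
under IS-IS looks like in Junos; the interface name and timers below are
made up, so take it as an illustrative sketch only:)

protocols {
    isis {
        interface ge-0/0/0.0 {
            point-to-point;
            link-protection;              /* compute LFA backup next hops for this link */
            bfd-liveness-detection {
                minimum-interval 150;     /* ms; ~450 ms detection with multiplier 3 */
                multiplier 3;
            }
        }
        interface lo0.0 {
            passive;
        }
    }
}
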
> 
> >  But I can understand
> > why someone specifically would not want to run iBGP on LSP,
> > particularly if they already do not run all traffic in LSPs, so it is
> > indeed option for operator. Main point was, it's not an argument for
> > using LDP signalled pseudowires.
> 
> We run all of our IPv4 and l2vpn pw's in (LDP-generated) LSP's. Not sure
> if that counts...
> 
> I'm not sure whether there is a better reason for BGP- or LDP-signaled
> pw's. I think folk just use what makes sense to them. I'm with Alexandre
> where I feel, at least in our case, BGP-based signaling for simple p2p or
> p2mp pw's would be too fat.
> 
> 
> > If there is some transport problems, as there were in Alexandre's
> > case, then you may have lossy transport, which normally does not mean
> > rerouting, so you drop 3 hellos and get LDP down and pseudowire down,
> > in iBGP case not only would you be running iBGP on both of the
> > physical links, but you'd also need to get 6 hellos down, which is
> > roughly 6 orders of magnitude less likely.
> >
> > The whole point being using argument 'we had transport problem causing
> > BGP to flap' cannot be used as rationale reason to justify LDP
> > pseudowires.
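
(For the arithmetic: assuming an independent loss probability p per hello,
dropping 3 in a row happens with probability p^3 versus p^6 for 6 in a row,
so the "roughly 6 orders of magnitude" figure corresponds to a per-hello
loss rate in the region of 1%.)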
> 
> So LDP will never be more stable than your IGP. Even with
> over-configuration of LDP, it's still pretty difficult to mess it up so
> badly that it's unstable all on its own.
> 
> If my IGP loses connectivity, I don't want a false sense of session uptime
> either with LDP or BGP. I'd prefer they tear down immediately, as that is
> easier to troubleshoot. What would be awkward is BGP or LDP being up, but
> no traffic being passed, as they wait for their Keepalive Hello's to time
> out.
> 
The only way to be 100% sure about service availability is to insert test
traffic onto the PW. That's why in Carrier Ethernet it is good practice to
use CFM: you can not only take the L2ckt down when the path is broken, but
also pinpoint the culprit precisely, which in p2p L2 services (with no MAC
learning) is otherwise quite problematic.
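
(Purely as an illustration, a rough Junos CFM sketch along those lines; the
MD/MA names, level, MEP IDs and interface are invented, and the exact
action-profile knobs should be checked against your release:)

protocols {
    oam {
        ethernet {
            connectivity-fault-management {
                action-profile TAKE-CKT-DOWN {
                    event {
                        adjacency-loss;       /* CCMs from the remote MEP lost */
                    }
                    default-actions {
                        interface-down;       /* take the AC, and hence the l2ckt, down */
                    }
                }
                maintenance-domain MD-CUST {
                    level 5;
                    maintenance-association MA-CKT100 {
                        continuity-check {
                            interval 1s;
                        }
                        mep 100 {
                            interface ge-0/0/1.100;
                            direction up;
                            remote-mep 200 {
                                action-profile TAKE-CKT-DOWN;
                            }
                        }
                    }
                }
            }
        }
    }
}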

> 
> >
> > I would provision both p2mp and p2p through minimal difference, as to
> > reduce complexity in provisioning. I would make p2p special case of
> > p2mp, so that when there are exactly two attachment circuits, there
> > will be no mac learning.
> > However if you do not do p2mp, you may have some stronger arguments
> > for LDP pseudowires, more so, if you have some pure LDP edges, with no
> > BGP.
> 
> Agreed that EVPN and VPLS better automate the provisioning of p2mp pw's.
> However, this is something you can easily script for LDP as well; and once
it's
> up, it's up.
> 
> And with LDP building p2mp pw's, you are just managing LDP session state.
> Unlike BGP, you are not needing to also manage routing tables, e.t.c.
> 
We have to distinguish here whether you're using BGP just for VC endpoint
reachability and VC label propagation (VPLS), or also to carry end-host
reachability information (EVPN). Only in the latter do you need to worry
about routing tables; in the former, the BGP function is exactly the same as
that of a targeted LDP session - well, the VC label propagation bit anyway
(not the auto-discovery bit, of course).
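
(To make that concrete, here is a minimal Junos sketch of the two signalling
styles side by side - the neighbor address, VC ID, RD/RT and site IDs are
made up. In both cases the protocol only distributes endpoint reachability
and VC labels, not MAC routes:)

/* LDP-signalled p2p pseudowire: targeted LDP session to the remote PE */
protocols {
    l2circuit {
        neighbor 192.0.2.2 {
            interface ge-0/0/1.100 {
                virtual-circuit-id 100;
            }
        }
    }
}

/* BGP-signalled (Kompella) VPLS: BGP carries the site ID and label block */
routing-instances {
    VPLS-CUST1 {
        instance-type vpls;
        interface ge-0/0/1.100;
        route-distinguisher 65000:100;
        vrf-target target:65000:100;
        protocols {
            vpls {
                site-range 8;
                site PE1 {
                    site-identifier 1;
                }
            }
        }
    }
}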


adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::


