[j-nsp] L3VPNs and on-prem DDoS scrubbing architecture
Michael Hare
michael.hare at wisc.edu
Thu Apr 4 12:15:03 EDT 2024
Alexandre,
Thanks for your emails. I finally got around to trying it myself; it definitely works! I first "broke" my A.B.C.D destination and =then= added a static. When I reproduced this, instead of putting the static route into inet.0 I chose to install it in my cleanVRF, which gets around the admin distance issue. Is there any reason you install the routes in the global table instead of cleanVRF that I'm overlooking?
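
For reference, the change on my side boiled down to roughly the following (cleanVRF and A.B.C.D standing in for the real names):

set routing-instances cleanVRF routing-options static route A.B.C.D/32 next-hop A.B.C.D

Because the static lands in cleanVRF.inet.0 at the default preference of 5, it wins there without any preference tweaking while the BGP /32 stays active in inet.0.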
I'm curious how safe it is to rely on this continuing to work in the future. How long have you been using this trick? I'll probably follow up with our Juniper support channels, as Saku suggests; maybe something even better can come out of this.
Thanks again,
-Michael
=========/========
@# run show route A.B.C.D
inet.0: 933009 destinations, 2744517 routes (932998 active, 0 holddown, 360 hidden)
+ = Active Route, - = Last Active, * = Both
A.B.C.D/32 *[BGP/170] 00:24:03, localpref 100, from 2.3.4.5
AS path: I, validation-state: unverified
> to 5.6.7.8 via et-0/1/10.3099
cleanVRF.inet.0: 319 destinations, 1179 routes (318 active, 0 holddown, 1 hidden)
Limit/Threshold: 5000/4000 destinations
+ = Active Route, - = Last Active, * = Both
A.B.C.D/32 *[Static/5] 00:07:36
> to A.B.C.D via ae17.3347
@# run show route forwarding-table destination A.B.C.D
Routing table: default.inet
Internet:
Destination        Type RtRef Next hop           Type Index    NhRef Netif
A.B.C.D/32         user     0                    indr 1048588      3
                              5.6.7.8            ucst     981      5 et-0/1/10.3099
A.B.C.D/32         dest     0 0:50:56:b3:4f:fe   ucst    1420      3 ae17.3347
Routing table: cleanVRF.inet
Internet:
Destination        Type RtRef Next hop           Type Index    NhRef Netif
A.B.C.D/32         user     0 0:50:56:b3:4f:fe   ucst    1420      3 ae17.3347
> -----Original Message-----
> From: Alexandre Snarskii <snar at snar.spb.ru>
> Sent: Tuesday, April 2, 2024 12:20 PM
> To: Michael Hare <michael.hare at wisc.edu>
> Cc: juniper-nsp at puck.nether.net
> Subject: Re: [j-nsp] L3VPNs and on-prem DDoS scrubbing architecture
>
> On Tue, Apr 02, 2024 at 07:43:01PM +0300, Alexandre Snarskii via juniper-nsp wrote:
> > On Tue, Apr 02, 2024 at 03:25:21PM +0000, Michael Hare via juniper-nsp wrote:
> >
> > Hi!
> >
> > The workaround that we're using (not elegant, but working): set up
> > "self-pointing" routes to directly connected destinations:
> >
> > set routing-options static route A.B.C.D/32 next-hop A.B.C.D
>
> Forgot to note one thing: these self-pointing routes should have a
> preference of 200 (or anything higher than BGP's 170):
>
> set routing-options static route A.B.C.D/32 next-hop A.B.C.D
> set routing-options static route A.B.C.D/32 preference 200
>
> so, when traffic is to be diverted to scrubbing, the BGP route will be
> active in inet.0 and the static route will be active in cleanL3VPN:
>
> snar at RT1.OV.SPB> show route A.B.C.D/32
> inet.0: ...
> + = Active Route, - = Last Active, * = Both
>
> A.B.C.D/32 *[BGP/170] 00:06:33, localpref 100
> AS path: 65532 I, validation-state: unverified
> > to Scrubbing via ae3.232
> [Static/200] 00:02:22
> > to A.B.C.D via ae3.200
>
> cleanL3VPN.inet.0: ....
> + = Active Route, - = Last Active, * = Both
>
> A.B.C.D/32 *[Static/200] 00:02:22
> > to A.B.C.D via ae3.200
>
>
> and the corresponding forwarding entry:
>
> Routing table: default.inet [Index 0]
> Internet:
>
> Destination: A.B.C.D/32
> Route type: user
> Route reference: 0 Route interface-index: 0
> Multicast RPF nh index: 0
> P2mpidx: 0
> Flags: sent to PFE, rt nh decoupled
> Nexthop: Scrubbing
> Next-hop type: unicast Index: 2971 Reference: 6
> Next-hop interface: ae3.232
> RPF interface: ae3.200
> RPF interface: ae3.232
>
> Destination: A.B.C.D/32
> Route type: destination
> Route reference: 0 Route interface-index: 431
> Multicast RPF nh index: 0
> P2mpidx: 0
> Flags: none
> Nexthop: 0:15:17:b0:e6:f8
> Next-hop type: unicast Index: 2930 Reference: 3
> Next-hop interface: ae3.200
> RPF interface: ae3.200
>
> [...]
> Routing table: cleanL3VPN.inet [Index 6]
> Internet:
>
> Destination: A.B.C.D/32
> Route type: user
> Route reference: 0 Route interface-index: 0
> Multicast RPF nh index: 0
> P2mpidx: 0
> Flags: sent to PFE, rt nh decoupled
> Nexthop: 0:15:17:b0:e6:f8
> Next-hop type: unicast Index: 2930 Reference: 3
> Next-hop interface: ae3.200
>
>
> >
> > and export these to cleanL3VPN. Resulting forwarding-table:
> >
> > Routing table: default.inet [Index 0]
> > Internet:
> >
> > Destination: A.B.C.D/32
> > Route type: user
> > Route reference: 0 Route interface-index: 0
> > Multicast RPF nh index: 0
> > P2mpidx: 0
> > Flags: sent to PFE, rt nh decoupled
> > Nexthop: 0:15:17:b0:e6:f8
> > Next-hop type: unicast Index: 2930 Reference: 4
> > Next-hop interface: ae3.200
> > RPF interface: ae3.200
> >
> > [...]
> >
> > Routing table: cleanL3VPN.inet [Index 6]
> > Internet:
> >
> > Destination: A.B.C.D/32
> > Route type: user
> > Route reference: 0 Route interface-index: 0
> > Multicast RPF nh index: 0
> > P2mpidx: 0
> > Flags: sent to PFE, rt nh decoupled
> > Nexthop: 0:15:17:b0:e6:f8
> > Next-hop type: unicast Index: 2930 Reference: 4
> > Next-hop interface: ae3.200
> >
> > Unfortunately, we found no way to provision such routes via BGP,
> > so you have to have all those in configuration :(
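> >
> > That per-destination configuration could look roughly like this, with a
> > rib-group leaking the static into cleanL3VPN (names are examples only):
> >
> > set routing-options rib-groups static-to-clean import-rib [ inet.0 cleanL3VPN.inet.0 ]
> > set routing-options static rib-group static-to-clean
> > set routing-options static route A.B.C.D/32 next-hop A.B.C.D
> > set routing-options static route A.B.C.D/32 preference 200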
> >
> > If there is a better workaround, I'd like to know it too :)
> >
> >
> > > Hi there,
> > >
> > > We're a US research and education ISP and we've been tasked with coming up
> > > with an architecture to allow on-premise DDoS scrubbing with an appliance. As
> > > a first pass I've created a cleanL3VPN routing-instance to function as a clean
> > > VRF that uses rib-groups to mirror the relevant parts of inet.0. It is in
> > > production and is working great for customer-learned BGP routes. It falls
> > > apart when I try to protect a directly attached destination that has a MAC
> > > address in inet.0. I think I understand why, and the purpose of this message
> > > is to see if anyone has been in a similar situation and has
> > > thoughts/advice/warnings about alternative designs.
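> > >
> > > A rough sketch of the mirroring arrangement I described (rib-group and
> > > policy names are placeholders, not our exact production config):
> > >
> > > set routing-options rib-groups inet0-to-clean import-rib [ inet.0 cleanL3VPN.inet.0 ]
> > > set routing-options rib-groups inet0-to-clean import-policy relevant-routes-only
> > > set routing-options interface-routes rib-group inet4 inet0-to-clean
> > > set protocols bgp family inet unicast rib-group inet0-to-clean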
> > >
> > > To explain what I see: I noticed that MAC-address-based nexthops don't seem
> > > to be copied from inet.0 into cleanL3VPN.inet.0. I assume this means that
> > > MAC-address-based forwarding must be referencing inet.0 [see far below]. This
> > > obviously creates a loop once the best path in inet.0 becomes a BGP /32. For
> > > example, when I'm announcing a /32 for 1.2.3.4 out of a locally attached
> > > 1.2.3.0/26, traceroute implies the packet enters inet.0, is correctly sent to
> > > 5.6.7.8 as the nexthop, and arrives in cleanL3VPN, which decides to forward it
> > > to 5.6.7.8 in a loop; even though the BGP /32 isn't part of cleanL3VPN [see
> > > below], cleanL3VPN is dependent on inet.0 for resolution. Even if I could copy
> > > inet.0 MAC addresses into cleanL3VPN, eventually the MAC address would age out
> > > of inet.0 because the /32 would no longer be directly connected. Since I want
> > > to be able to protect locally attached destinations, I think my design is
> > > unworkable as-is, and my options are:
> > >
> > > = use flowspec redirection to a dirty VRF, keep inet.0 as clean, and use the
> > > flowspec interface filter-group appropriately on backbone interfaces
> > > [routing-options flow interface-group exclude, which I already have deployed
> > > correctly; see the sketch after this list]. This seems easy but is less
> > > performant.
> > > = put my customers into a customerVRF and deal with route leaking between
> > > global and customerVRF. This is a well-known tactic but more complicated to
> > > approach and disruptive to deploy, as I have to airlift basically all the
> > > customers into a VRF to have full coverage.
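> > >
> > > A sketch of the flowspec-exclusion piece from the first option (group number
> > > and interface are examples only):
> > >
> > > set routing-options flow interface-group 10 exclude
> > > set interfaces et-0/1/10 unit 3099 family inet filter group 10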
> > >
> > > For redirection, to date I've been looking at longest-prefix-match solutions
> > > due to the presumed scalability vs using flowspec. I have an unknown number of
> > > "always on" redirects I might be asked to entertain. 10? 100? 1000? I'm trying
> > > to come up with a solution that doesn't rely on touching the routers
> > > themselves. I did think about creating a normal [non-flowspec] input firewall
> > > term on untrusted interfaces that redirects to the dirty VRF based on a single
> > > destination prefix-list, and just relying on flowspec for on-demand stuff,
> > > with the assumption that one firewall term with, let's say, 1000 prefixes is
> > > more performant than 1000 standalone flowspec rules. I think my solution is
> > > fundamentally workable, but I don't think the purchased turnkey DDoS
> > > orchestration is going to natively interact with our Junipers, so that is
> > > looked down upon, since it would require "a router guy" or writing custom
> > > automation when adding/removing always-on protection. It seems technically
> > > very viable to me; I just bring up these details because I feel like, without
> > > a ton of effort, VRF redirection can be made to be nearly as performant as
> > > longest prefix match.
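> > >
> > > Something like this is what I have in mind for the filter approach (filter,
> > > term, prefix-list and VRF names are placeholders):
> > >
> > > set policy-options prefix-list always-on-scrub A.B.C.D/32
> > > set firewall family inet filter untrusted-in term scrub-redirect from destination-prefix-list always-on-scrub
> > > set firewall family inet filter untrusted-in term scrub-redirect then routing-instance dirtyVRF
> > > set firewall family inet filter untrusted-in term default then accept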
> > >
> > > While we run MPLS, currently all of our customers/transit are in the global
> > > table. I'm trying to avoid solutions for now that put the 1M+ route DFZ into
> > > an L3VPN; it's an awfully big change I don't want to rush into, especially for
> > > this proof of concept, but I'd like to hear opinions on whether that's the
> > > best solution to this specific problem. I'm not sure it's fundamentally
> > > different from creating a customerVRF; it seems like I just need to separate
> > > the customers from the internet ingress.
> > >
> > > My gut says "the best" thing to do is to create a customerVRF but it feels a
> bit complicated as I have to worry about things like BGP/static/direct and will
> lose addPath [I recently discovered add-path and route-target are mutually
> exclusive in JunOS].
> > >
> > > My gut says "the quickest" and least disruptive thing to do is to go the
> flowspec/filter route and frankly I'm beginning to lean that way since I'm
> already partially in production and needed to have a solution 5 days ago to
> this problem :>
> > >
> > > I've done all of these things before [flowspec, rib leaking]; I think it's
> > > just a matter of trying to figure out the next best step, and I was looking to
> > > see if anyone has been in a similar situation and has thoughts/advice/warnings.
> > >
> > > I'm talking about IPv4 below, but I acknowledge IPv6 is a thing and I would
> > > just do the same solution there.
> > >
> > > -Michael
> > >
> > > ===/===
> > >
> > > @$myrouter> show route forwarding-table destination 1.2.3.4 extensive
> > > Apr 02 08:39:10
> > > Routing table: default.inet [Index 0]
> > > Internet:
> > >
> > > Destination: 1.2.3.4/32
> > > Route type: user
> > > Route reference: 0 Route interface-index: 0
> > > Multicast RPF nh index: 0
> > > P2mpidx: 0
> > > Flags: sent to PFE
> > > Next-hop type: indirect Index: 1048588 Reference: 3
> > > Nexthop: 5.6.7.8
> > > Next-hop type: unicast Index: 981 Reference: 3
> > > Next-hop interface: et-0/1/10.3099
> > >
> > > Destination: 1.2.3.4/32
> > > Route type: destination
> > > Route reference: 0 Route interface-index: 85
> > > Multicast RPF nh index: 0
> > > P2mpidx: 0
> > > Flags: none
> > > Nexthop: 0:50:56:b3:4f:fe
> > > Next-hop type: unicast Index: 1562 Reference: 1
> > > Next-hop interface: ae17.3347
> > >
> > > Routing table: cleanL3VPN.inet [Index 21]
> > > Internet:
> > >
> > > Destination: 1.2.3.0/26
> > > Route type: user
> > > Route reference: 0 Route interface-index: 0
> > > Multicast RPF nh index: 0
> > > P2mpidx: 0
> > > Flags: sent to PFE, rt nh decoupled
> > > Next-hop type: table lookup Index: 1 Reference: 40