[j-nsp] ACL for lo0 template/example comprehensive list of 'things to think about'?
adamv0025 at netconsultings.com
Thu Jul 12 11:28:36 EDT 2018
> From: Saku Ytti [mailto:saku at ytti.fi]
> Sent: Thursday, July 12, 2018 12:21 AM
>
> Hey,
>
> > And there don't seem to be a way in Junos how to restrict
> > management-plane protocols only to certain interfaces no matter what RE
> filter says.
> > In XR it's as easy as specifying a list of OOB or in-band interfaces
> > against a list of management protocols,
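[For reference, the XR knob being described here is management-plane protection (MPP); a minimal sketch, with illustrative interface names and protocol list:

```
control-plane
 management-plane
  inband
   interface GigabitEthernet0/0/0/1
    allow SSH
    allow SNMP
  out-of-band
   interface MgmtEth0/RSP0/CPU0/0
    allow all
```

Anything not listed under an allowed interface/protocol pair is simply not accepted for that management protocol.]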
>
> In practical life IOS-XR control-plane is better protected than JunOS, as
> configuring JunOS securely is very involved, considering that MX book gets it
> wrong, offering horrible lo0 filter as does Cymru, what chance the rest of us
> have?
>
> But IOS-XR also cannot be configured very securely, JunOS can. Main
> problems in IOS-XR:
>
> a) Policers are per protocol, one BGP customer having L2 loop, and all BGP
> customers in NPU suffer (excessive flow trap may alleviate, but it's not turn-
> key and it can't be configured perfectly)
>
Well, yes, the granularity is per LC, per NPU, but not per interface/sub-interface.
If LPTS is using the same TCAM as transit traffic, then there should be enough space for this additional granularity.
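[At least the per-NPU LPTS policers can be tuned per flow type and per location; something along these lines, with illustrative rates:

```
lpts pifib hardware police
 location 0/1/CPU0
  flow bgp known rate 2500
  flow bgp default rate 500
```

That only raises or lowers the shared per-NPU bucket, though; it does not isolate one misbehaving customer from the others on the same NPU.]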
> b) LPTS packets are not subject to MQC, so you cannot complement LPTS
> with MQC. Imagine one customer congesting specific LPTS policer, and you
> want to add MQC policer to interface, to relieve the LPTS policer from trash
> generated by this customer, not possible
>
Yes, if LPTS had per-sub-interface granularity, or if this complexity could have been offloaded onto MQC, that would be much better.
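[That is, something as simple as this on the offending customer sub-interface would do the job, if only punted packets were subject to it; names and rates are illustrative:

```
ipv4 access-list BGP-ONLY
 10 permit tcp any any eq bgp
 20 permit tcp any eq bgp any
!
class-map match-any CP-BGP
 match access-group ipv4 BGP-ONLY
!
policy-map CUSTOMER-IN
 class CP-BGP
  police rate 500 pps
!
interface GigabitEthernet0/0/0/1.100
 service-policy input CUSTOMER-IN
```

As Saku notes, this does not work today: LPTS-bound traffic bypasses the interface MQC policer.]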
> c) IOS-XR does not guarantee that packets not dropped by LPTS are not
> dropped later, JunOS technically does not, but in practice it's extremely rare
> to drop packets once punted from NPU. After LPTS punt, TCP packets are
> hashed to 8 TCP workers, in lab situation single TCP worker can handle lot
> more than what single NPU LPTS protocol policer can admit, but in production
> environment TCP worker performance may degrade so much that your XIPC
> workers are dropping before there are any LPTS drops, meaning you'll lose
> 1/8th of your BGP sessions.
>
This one I was not aware of, actually. So you're saying that, theoretically, the aggregate from all LPTS policers can exceed what a single worker queue can handle, resulting in tail drops (assuming the hashing is imperfect and congests that one worker queue). Is that right?
But what is the probability of that happening in production? The hash and packet keys would need to line up just so to produce a distribution bad enough to congest one of the 8 queues.
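[Assuming a reasonably uniform hash, the failure mode isn't so much a skewed distribution as the fact that each session is pinned to exactly one worker, so one degraded worker takes its whole share of sessions with it. A toy sketch (the real XR hash is internal; SHA-256 is just a stand-in):

```python
import hashlib
from collections import Counter

def tcp_worker(peer: str, workers: int = 8) -> int:
    """Pin a BGP peer to one of N TCP workers by hashing its address."""
    return int(hashlib.sha256(peer.encode()).hexdigest(), 16) % workers

# 200 hypothetical BGP peers spread across the 8 workers.
peers = [f"192.0.2.{i}" for i in range(1, 201)]
load = Counter(tcp_worker(p) for p in peers)

# Even with a near-perfect distribution (~25 peers per worker),
# the death of worker 0 takes out every session hashed to it: ~1/8th.
lost = sum(1 for p in peers if tcp_worker(p) == 0)
```

So the 1/8th figure doesn't require pathological hashing at all; it is the expected loss whenever any single worker degrades.]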
> Both A and C are being fixed, thanks CSCO. But I'm not very happy how they
> chose to fix it.
>
How do they plan on fixing that please?
> I think best compromise would be, that JNPR would offer good filter,
> dynamically built based on data available in config and referring to empty
> prefix-lists when not possible to infer and customer can fill those prefix-lists
> if needed. And also have functional ddos-protection configuration out-of-
> the-box. People who want and can could override and configure themselves.
>
Yeah, that's the nice thing about LPTS: it automatically punches holes (with per-protocol pps rates) into the RSP filters based on what is configured (sessions established or merely configured, etc.).
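[The closest hand-built Junos equivalent today is a lo0 filter keyed off apply-path prefix-lists, so at least the peer lists track the configuration; a minimal, incomplete sketch (filter and prefix-list names are placeholders, and a real RE filter needs many more terms):

```
policy-options {
    prefix-list BGP-NEIGHBORS {
        apply-path "protocols bgp group <*> neighbor <*>";
    }
}
firewall {
    family inet {
        filter PROTECT-RE {
            term bgp {
                from {
                    source-prefix-list {
                        BGP-NEIGHBORS;
                    }
                    protocol tcp;
                    port bgp;
                }
                then accept;
            }
            term default {
                then discard;
            }
        }
    }
}
```

Applied with `set interfaces lo0 unit 0 family inet filter input PROTECT-RE`; the apply-path expansion is what gives the "dynamically built from config" behaviour, but unlike LPTS it still has to be written and maintained by hand.]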
Thank you
adam
netconsultings.com
::carrier-class solutions for the telecommunications industry::
More information about the juniper-nsp mailing list