[j-nsp] MPLS down QFX 5100

Saku Ytti saku at ytti.fi
Mon Nov 12 10:23:23 EST 2018


No, 500 pps won't be enough to avoid drops; your peak arrival rate was
roughly 226 kpps. But as I said in my original post, these violations
are not indicative of your problem.
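
(To put numbers on it: at the observed peak of 225833 pps, a 500 pps
policer still drops about 225333 pps; to avoid drops entirely, the
limit would have to exceed the peak arrival rate.)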

These, particularly the TTL==0 punts, are a normal consequence of routing convergence.

So I do not recommend spending time on this, as it's not going to fix
whatever issue you're talking about.
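
If you want to confirm the violations are transient, watching the
violation state is enough; a generic check (not specific to QFX) is:

  show ddos-protection protocols violations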


On Mon, 12 Nov 2018 at 17:18, Rodrigo Augusto <rodrigo at 1telecom.com.br> wrote:
>
> Before I increased this parameter, I saw this:
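> (The output below is presumably from 'show ddos-protection protocols
> l3mtu-fail' or similar.)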
> Protocol Group: L3MTU-fail
>
>
>   Packet type: aggregate (Aggregate for L3 MTU Check fail)
>     Aggregate policer configuration:
>       Bandwidth:        50 pps
>       Burst:            10 packets
>       Recover time:     300 seconds
>       Enabled:          Yes
>     Flow detection configuration:
>       Detection mode: Automatic  Detect time:  0 seconds
>       Log flows:      Yes        Recover time: 0 seconds
>       Timeout flows:  No         Timeout time: 0 seconds
>       Flow aggregation level configuration:
>         Aggregation level   Detection mode  Control mode  Flow rate
>         Subscriber          Automatic       Drop          0  pps
>         Logical interface   Automatic       Drop          0  pps
>         Physical interface  Automatic       Drop          50 pps
>     System-wide information:
>       Aggregate bandwidth is being violated!
>         No. of FPCs currently receiving excess traffic: 1
>         No. of FPCs that have received excess traffic:  1
>         Violation first detected at: 2018-11-12 11:28:10 BRT
>         Violation last seen at:      2018-11-12 11:33:00 BRT
>         Duration of violation: 00:04:50 Number of violations: 219
>       Received:  64825768            Arrival rate:     2387 pps
>       Dropped:   51410693            Max arrival rate: 225833 pps
>     Routing Engine information:
>
> After that I set this parameter to 500 pps, so I have a question: is
> this value enough for this violation?
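>
> For reference, the change was presumably along these lines (the
> l3mtu-fail group name is an assumption based on the output above):
>
>   set system ddos-protection protocols l3mtu-fail aggregate bandwidth 500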
>
> Rodrigo Augusto
> Diretor BackBone IP Grupo Um
> http://www.connectoway.com.br
> http://www.1telecom.com.br
> rodrigo at 1telecom.com.br
> (81) 3497-6060
> INOC-DBA 52965*100
>
>
>
>
> On 11/11/18 05:59, "Saku Ytti" <saku at ytti.fi> wrote:
>
> >Hey,
> >
> >These are not related to your issue.
> >
> >The first one is complaining that you got a bunch of packets to your
> >device with TTL==1; you need to punt these and generate a TTL-exceeded
> >message. Because this is done in software, it's rate-limited to a
> >certain number of packets per second.
> >This is operationally normal during convergence, due to microloops and
> >such.
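> >
> >(To inspect that policer's counters, a generic check would be
> >something like: show ddos-protection protocols ttl statistics)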
> >
> >
> >The second one is complaining that a packet came in which wanted to go
> >out via an interface with a smaller MTU; these also need to be punted
> >so we can generate the "fragmentation needed but DF set" ICMP message.
> >This doesn't indicate anything that helps with your original problem,
> >but you might want to find out why you have such a small egress MTU;
> >ideally you would never decrease MTU inside your network.
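> >
> >(A quick way to find the small hop, sketched with a hypothetical
> >destination, is a DF-bit ping such as 'ping 192.0.2.1 size 1472
> >do-not-fragment' while checking 'show interfaces | match MTU' along
> >the path.)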
> >
> >Whatever your problem is, no one can help you with these messages alone.
> >
> >On Sat, 10 Nov 2018 at 23:07, Rodrigo 1telecom <rodrigo at 1telecom.com.br>
> >wrote:
> >>
> >>
> >> Hi folks... recently we have been having trouble with some MPLS
> >>tunnels; sometimes these tunnels go down.
> >> Our log files follow:
> >>
> >> Nov  9 20:03:42  PE-REC-A01-BKB-SW-001 jddosd[1769]:
> >>DDOS_PROTOCOL_VIOLATION_SET: Warning: Host-bound traffic for
> >>protocol/exception  TTL:aggregate exceeded its allowed bandwidth at fpc
> >>0 for 212 times, started at 2018-11-09 20:03:41 BRT
> >> Nov  9 20:03:42  PE-REC-A01-BKB-SW-001 jddosd[1769]:
> >>DDOS_PROTOCOL_VIOLATION_SET: Warning: Host-bound traffic for
> >>protocol/exception  L3MTU-fail:aggregate exceeded its allowed bandwidth
> >>at fpc 0 for 212 times, started at 2018-11-09 20:03:41 BRT
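> >> (These events can also be reviewed after the fact with, for example,
> >>'show log messages | match DDOS_PROTOCOL_VIOLATION'.)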
> >> Can someone help us?
> >> Sent via iPhone •
> >> Grupo Connectoway
> >> _______________________________________________
> >> juniper-nsp mailing list juniper-nsp at puck.nether.net
> >> https://puck.nether.net/mailman/listinfo/juniper-nsp
> >
> >
> >
> >--
> >  ++ytti
>
>


-- 
  ++ytti

