[j-nsp] BGP peer sessions flap in 165k-245k pps DoS

Samit janasamit at wlink.com.np
Mon Feb 16 00:26:50 EST 2009


After doing further investigation, I found that in fact my Cisco 7200 VXR
routers (NPE-G2 and NPE-G1) in the path between the M7i and the customer
router suffered the DoS, and the BGP sessions flapped because of CPU
saturation on them. I did not notice this earlier because the Cisco CPU
utilization graphs showed only 50% on the NPE-G2 and 80% on the NPE-G1 and
then flat-lined, probably because the routers stopped answering MRTG
polling; "show proc cpu history" told a different story.
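For anyone hitting the same thing, the check amounts to comparing the polled
graph against the CPU history the router keeps itself, roughly:

   show processes cpu history     (per-second/minute/hour CPU history on the router)
   show processes cpu sorted      (current per-process CPU, highest first)

The history view is what showed the saturation that the polled graphs missed.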

The M7i was not affected... bravo Juniper!

Thanks everyone.

Regards,
Samit



Nilesh Khambal wrote:
> I don't see any drops in the software or hardware queues towards the RE, so
> it does not look like this router was affected by the DoS attack in a way
> that caused the BGP flap. As Stefan mentioned, check the logs for the BGP
> notification reason and to find out whether we sent or received the
> Notification.
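
A minimal sketch of that check on the Junos side, with the peer address as a
placeholder:

   show bgp summary
   show bgp neighbor <peer-address>          (flap count and last error, if any)
   show log messages | match <peer-address>  (whether the NOTIFICATION was sent or received)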
> 
> For your M7i, this traffic is transit traffic and should be handled
> in the PFE itself. It is possible that this router could be affected by the
> DoS if the BGP peer that flapped during the DoS and the destination of the
> DoS are both reachable via the same egress interface, and that interface
> does not have enough bandwidth to handle the traffic. As you mentioned,
> the DoS traffic was seen at a rate of 90 Mbps. That can saturate the egress
> interface's queues if it is an FE interface, with other production
> traffic in the background. Check the egress interface queues for
> evidence of drops in either the best-effort or network-control queue.
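
A sketch of that queue check, assuming a Fast Ethernet egress port (the
interface name is a placeholder):

   show interfaces queue fe-0/0/1       (per-queue queued, transmitted and dropped counters)
   show interfaces fe-0/0/1 extensive   (overall input/output errors and drops)

Drops counting up in queue 3 (network-control) during the attack window would
explain lost BGP keepalives on that path.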
> 
> Did you have a filter on the ingress or egress interface to drop such
> traffic with the filter action "reject"?
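
For reference, a minimal sketch of such a filter using "discard" rather than
"reject" (filter, term and counter names are placeholders; the destination and
port are taken from the tcpdump snapshot quoted below):

   firewall {
       family inet {
           filter DROP-DOS-TRANSIT {
               term udp-flood {
                   from {
                       destination-address {
                           y.y.y.y/32;
                       }
                       protocol udp;
                       destination-port 18990;
                   }
                   then {
                       count udp-flood;
                       discard;
                   }
               }
               term everything-else {
                   then accept;
               }
           }
       }
   }

Applied as "family inet filter input DROP-DOS-TRANSIT" on the upstream-facing
interface, this counts and silently drops the flood; "reject" would
additionally generate ICMP unreachables for the dropped packets.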
> 
> Thanks,
> Nilesh.
> 
> 
> 
> 
> 
> 
> On Feb 15, 2009, at 2:50 AM, "Samit" <janasamit at wlink.com.np> wrote:
> 
>> I do have a filter in place to protect the RE, but the attack was not
>> targeted or directed at any interface of my router. My customer's network
>> was under DoS attack; a tcpdump snapshot is attached below, where "x" is
>> the source and "y" is the target.
>>
>> 04:16:18.225986 IP x.x.x.x.12372 > y.y.y.y.18990: UDP,
>> length 36
>> 04:16:18.226063 IP x.x.x.x.12372 > y.y.y.y.18990: UDP,
>> length 36
>> 04:16:18.226072 IP x.x.x.x.12372 > y.y.y.y.18990: UDP,
>> length 36
>> 04:16:18.226091 IP x.x.x.x.12372 > y.y.y.y.18990: UDP,
>> length 36
>> 04:16:18.226095 IP x.x.x.x.12372 > y.y.y.y.18990: UDP,
>> length 36
>> 04:16:18.226112 IP x.x.x.x.12372 > y.y.y.y.18990: UDP,
>> length 36
>> 04:16:18.226115 IP x.x.x.x.12372 > y.y.y.y.18990: UDP,
>> length 36
>> 04:16:18.226131 IP x.x.x.x.12372 > y.y.y.y.18990: UDP,
>>
>> I don't have the PFE stats from during the DoS, but this is what the
>> output looks like now.
>>
>> Packet Forwarding Engine traffic statistics:
>>    Input  packets:          40918149601               102324 pps
>>    Output packets:          40903880367               102281 pps
>> Packet Forwarding Engine local traffic statistics:
>>    Local packets input                 :              4603616
>>    Local packets output                :              5077330
>>    Software input control plane drops  :                    0
>>    Software input high drops           :                    0
>>    Software input medium drops         :                    0
>>    Software input low drops            :                    0
>>    Software output drops               :                    0
>>    Hardware input drops                :                    0
>> Packet Forwarding Engine local protocol statistics:
>>    HDLC keepalives            :               143360
>>    ATM OAM                    :                    0
>>    Frame Relay LMI            :                    0
>>    PPP LCP/NCP                :                    0
>>    OSPF hello                 :                    0
>>    OSPF3 hello                :                    0
>>    RSVP hello                 :                    0
>>    LDP hello                  :                    0
>>    BFD                        :                    0
>>    IS-IS IIH                  :                    0
>> Packet Forwarding Engine hardware discard statistics:
>>    Timeout                    :                    0
>>    Truncated key              :                    0
>>    Bits to test               :                    0
>>    Data error                 :                    0
>>    Stack underflow            :                    0
>>    Stack overflow             :                    0
>>    Normal discard             :             14002963
>>    Extended discard           :                41297
>>    Invalid interface          :                    0
>>    Info cell drops            :                    0
>>    Fabric drops               :                    0
>> Packet Forwarding Engine Input IPv4 Header Checksum Error and Output MTU
>> Error statistics:
>>    Input Checksum             :                  196
>>    Output MTU                 :                    0
>>
>>
>> I don't have JTAC support access..  :)
>>
>> Regards,
>> Samit
>>
>>
>>
>>
>> Nilesh Khambal wrote:
>>> Hi Samit,
>>>
>>> Do you have the output of "show pfe statistics traffic" from this
>>> router?
>>>
>>> What was the type of DoS attack traffic? Was it directed at any of the
>>> interfaces on the router? Did you have any filter applied to the loopback
>>> interface to drop such traffic? If yes, did any of the filters applied to
>>> that interface that matched the DoS traffic have a reject action in them?
>>> Was syslogging enabled in any of the filter terms that matched the
>>> attack traffic?
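
As a sketch of the sort of term being asked about, a loopback-filter term that
counts, syslogs and silently drops the flood might look roughly like this
(names are placeholders), with the counters and syslog entries checked
afterwards:

   term udp-flood {
       from {
           protocol udp;
           destination-port 18990;
       }
       then {
           count udp-flood;
           syslog;
           discard;
       }
   }

   show firewall filter PROTECT-RE    (per-term counter values)
   show log messages                  (entries produced by the syslog action)

Note that a filter on lo0 only sees traffic addressed to the router itself,
which is why transit DoS traffic such as this would not show up there.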
>>>
>>> Also, I would recommend involving JTAC during such incidents in the
>>> future. They can help you figure out the problem.
>>>
>>> Thanks,
>>> Nilesh
>>>
>>>
>>> On Feb 14, 2009, at 11:19 PM, "Samit" <janasamit at wlink.com.np> wrote:
>>>
>>>> Hi,
>>>>
>>>> Today, early in the morning around 4am, we had a UDP-based DoS from the
>>>> Internet destined to one of my customers' networks for a bit over 1.5
>>>> hours. The packet rate peaked between 165k and 245k pps, at around 90
>>>> Mbps according to the MRTG graphs. I don't have any QoS running, but I
>>>> noticed later that all BGP peer sessions flapped during that period even
>>>> though I have plenty of capacity on my upstream as well as downstream
>>>> links, so I can't say the M7i fully survived and handled it. The M7i is
>>>> capable of forwarding 16 million pps and I also have plenty of free
>>>> bandwidth available, so there should not have been any interface buffer
>>>> exhaustion or link saturation. Therefore, I fail to understand the
>>>> reason for the BGP flaps. Can anyone help me understand it?
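
As a rough sanity check, with the 36-byte UDP payloads seen in the tcpdump
earlier in the thread (64-byte IP packets):

   165,000 pps x 64 bytes x 8 bits/byte ≈  84 Mbit/s
   245,000 pps x 64 bytes x 8 bits/byte ≈ 125 Mbit/s

so the pps range and the ~90 Mbps MRTG reading are consistent with each other:
not much for an M7i to forward, but enough to fill a Fast Ethernet port if
that is where the traffic egresses.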
>>>>
>>>>
>>>> Regards,
>>>> Samit
>>>>
>>>> _______________________________________________
>>>> juniper-nsp mailing list juniper-nsp at puck.nether.net
>>>> https://puck.nether.net/mailman/listinfo/juniper-nsp
>>>
>>>
> 
> 

