[j-nsp] Junipers and broadcast storm issues

Mark Johnson juniper-nsp at avensys.net
Thu Feb 10 08:59:56 EST 2005


Hi Dennis,

Thanks for that.

When the storm is in progress the __default_arp_policer__ does not increment, 
so I don't think this is the problem. Additionally, the OSPF/BGP sessions drop 
straight away without waiting for ARP timeouts.

When the storm is in progress, a commit to shut the affected port down also 
takes about 7 minutes to complete.

Is there any way I can check the utilisation of fxp1?
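For anyone with shell access to the RE (which runs FreeBSD), one approach that might work is netstat's interval mode; the interface name fxp1 is the usual name for the internal RE-PFE link on M-series REs but is an assumption here and can vary by platform:

```shell
# From the RE shell (start shell from the CLI), sample packet counts on the
# internal RE-PFE interface once per second; watch for sustained high rates
# during the storm. Interface name may differ on other platforms.
netstat -I fxp1 1
```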

Kind regards,

Mark


>> -----Original Message-----
>> From: Dennis Woods [mailto:dwoods at juniper.net]
>> Sent: 09 February 2005 18:23
>> To: Mark Johnson; juniper-nsp at puck.nether.net
>> Subject: RE: [j-nsp] Junipers and broadcast storm issues
>>
>>
>> Hi,
>>
>> The default policer is box-wide in scope and implemented in hardware.  It
>> protects the RE from the negative effects of a broadcast storm; however,
>> you can see problems if one interface is using up all the ARPs allowed by
>> the default policer.  The result would be that when ARPs on other
>> interfaces time out, they get renewed slowly or not at all.
>>
>> Try setting the per-interface arp policer to something like 32 kbit/s.  Then
>> check with "show policer" to see which arp policer is incrementing.
>>
>> guest at rock# show interfaces ge-0/1/0
>> unit 0 {
>>     family inet {
>>         policer {
>>             arp blah;
>>         }
>>         address 4.0.0.1/8;
>>     }
>> }
>>
>> [edit]
>> guest at rock# show firewall policer blah
>> filter-specific;
>> if-exceeding {
>>     bandwidth-limit 32k;
>>     burst-size-limit 2800;
>> }
>> then discard;
>>
>> [edit]
>> guest at rock# commit
>> commit complete
>>
>> [edit]
>> guest at rock# run show policer
>> Policers:
>> Name                                              Packets
>> __default_arp_policer__                                 0  <-- this one should stop incrementing
>> blah-ge-0/1/0.0-inet-arp                                0  <-- this one should be incrementing
>>
>> --dennis
>>
>> -----Original Message-----
>> From: juniper-nsp-bounces at puck.nether.net
>> [mailto:juniper-nsp-bounces at puck.nether.net] On Behalf Of Mark Johnson
>> Sent: Wednesday, February 09, 2005 12:54 PM
>> To: juniper-nsp at puck.nether.net
>> Subject: Re: [j-nsp] Junipers and broadcast storm issues
>>
>>
>> Hi,
>>
>> I've set this up in the lab and can replicate the problem by simply looping
>> a couple of ports on a connected switch with spanning tree disabled. OSPF
>> and BGP fail.
>>
>> Interestingly, I connected an old 7200 NPE-200 (admittedly at 100Mb while
>> the Juniper is at Gig) and while its CPU hit 99% it stayed up and ran fine.
>>
>> I guess the issue is that the crap coming from the switch is going to the
>> RE and the internal PFE-RE 100Mb link is getting saturated (as with the
>> MPLS vulnerability released today). There is a default arp policer in
>> place, and I did try setting my own at 3Mb/s with no difference, so the
>> crap isn't simply arp packets.
>>
>> Can anyone give any pointers please (especially someone from Juniper)?
>>
>> Kind regards,
>>
>> Mark
>>
>>
>> >> -----Original Message-----
>> >> From: Mark Johnson [mailto:juniper-nsp at avensys.net]
>> >> Sent: 06 February 2005 22:54
>> >> To: juniper-nsp at puck.nether.net
>> >> Subject: [j-nsp] Junipers and broadcast storm issues
>> >>
>> >>
>> >> Hi,
>> >>
>> >> I'm a little disappointed so far in the way my M7i's handle broadcast
>> >> storms at peering points. I'm hoping someone on the list could enlighten
>> >> me as to whether this is normal or whether I can improve my config.
>> >>
>> >> Previously, with Cisco 7200 routers, I have seen increased CPU on the
>> >> router during such events, but it has never impacted traffic flowing
>> >> through the router.
>> >>
>> >> The first broadcast storm was when an IXP link went unidirectional. It
>> >> was a FE port. Our monitoring system showed that a few ping tests that
>> >> went through the affected router were dropped. I raised a ticket with
>> >> Imtech, who provide support, but they couldn't provide any reason for
>> >> packets passing through the router to be dropped. They just pointed out
>> >> that all the packets arriving would need to be processed by the RE and
>> >> this might max out the RE or the link to the RE.
>> >>
>> >> The second storm was on a GigE port when an IXP had a port looped by a
>> >> member. The event was a little longer than the first one and the effects
>> >> a bit more severe. The MRTG 5-minute average showed that the port
>> >> received about 70Mb/s and 140kpps. MRTG also showed that the RE/FE CPU
>> >> utilisation only increased marginally and stayed below 20%.
>> >>
>> >> Our monitoring system showed that traffic flowing through the router was
>> >> degraded. The router's logs showed no OSPF or BGP drops (other than
>> >> those to the affected IXP) and there were no other entries whatsoever in
>> >> the router's log.
>> >>
>> >> Any advice appreciated.
>> >>
>> >> Kind regards,
>> >>
>> >> Mark
>> >>
>> >> _______________________________________________
>> >> juniper-nsp mailing list juniper-nsp at puck.nether.net
>> >> http://puck.nether.net/mailman/listinfo/juniper-nsp
>> >>
