[j-nsp] traffic drops to 8 Gb/s when a firewall filter is applied

Matjaž Straus Istenič juniper at arnes.si
Wed May 30 16:38:10 EDT 2012


On 30. maj 2012, at 21:55, Keegan Holley wrote:

> What version of JunOS were you running?  Any interesting logs/stats from
> the DPC itself?

While the DPC (or FPC, in older terms) was online, several JunOS upgrades were done, starting from 9.6.?. The card stayed online throughout; we currently run 10.4R4.5. No interesting logs were found, nothing strange -- and the problem could not be reproduced on a freshly rebooted card. Maybe something went wrong during the upgrades and the card got stuck in ... whatever -- it is history now.
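For the record, a DPC/FPC can be power-cycled from the CLI without touching the chassis; the slot number below is just an example:

  request chassis fpc slot 1 offline
  request chassis fpc slot 1 online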

Regards,
	Matjaž

> 
> 
> 2012/5/30 Matjaž Straus Istenič <juniper at arnes.si>
> 
>> Hi list,
>> 
>> no, this is not a joke ;-) -- our problem disappeared when FPC was
>> _power-cycled_ after almost a year uptime. JTAC and the local Juniper
>> partner were very helpful in the troubleshooting and they even supplied a
>> new FPC for a test. We replicated the same behaviour on two MXs. We still
>> don't know what caused the problem. Hope new FPCs with a higher revision
>> number are immune to this kind of behaviour.
>> 
>> Thank you all for your feedback,
>> Regards,
>> 
>>       Matjaž
>> 
>> On 15. dec. 2011, at 03:04, Keegan Holley wrote:
>> 
>>> 
>>> 2011/12/14 Richard A Steenbergen <ras at e-gerbil.net>
>>> 
>>>> On Fri, Dec 09, 2011 at 01:19:54PM -0500, Keegan Holley wrote:
>>>>> Yeah, but it should have enough silicon to do simple policing in
>>>>> hardware unless every single other feature on the box is enabled. If
>>>>> a policer with no queueing, no marking, etc. caused throughput to
>>>>> decrease by 20% across the board, I'd inquire about their return
>>>>> policy. Hopefully it's the policer config. Most of my 10G interfaces
>>>>> do not require policers, but I've got 1G interfaces with hundreds of
>>>>> logicals, each with a unique policer.
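For context, the "hundreds of logicals each with a unique policer" setup looks roughly like the sketch below; the policer name, rate, VLAN, and addressing are illustrative, not taken from Keegan's actual config:

  firewall {
      policer police-50m {              /* illustrative name and rate */
          if-exceeding {
              bandwidth-limit 50m;
              burst-size-limit 625k;
          }
          then discard;
      }
      family inet {
          filter cust-100-in {
              term police {
                  then {
                      policer police-50m;
                      accept;
                  }
              }
          }
      }
  }
  interfaces {
      ge-1/0/0 {
          vlan-tagging;
          unit 100 {                    /* one of hundreds of logicals */
              vlan-id 100;
              family inet {
                  filter {
                      input cust-100-in;
                  }
                  address 192.0.2.1/30;
              }
          }
      }
  }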
>>>> 
>>>> Unfortunately not... There are all kinds of ways to make I-chip cards
>>>> not deliver line-rate performance even with relatively simple firewall
>>>> rules, and it's very poorly logged when this does happen. Admittedly
>>>> I've never seen a simple "then accept" push it over the edge, but maybe
>>>> it was RIGHT on the edge before... Try looking for some discards, such
>>>> as WAN_DROP_CNTR, on the *INGRESS* interface (i.e. not the one where you
>>>> added the egress filter). For xe-x/y/0 do:
>>>> 
>>>> start shell pfe network fpc<x>
>>>> show ichip <y> iif stat
>>>> 
>>>> example:
>>>> 
>>>> Traffic stats:
>>>>           Counter Name            Total           Rate      Peak Rate
>>>> ---------------------- ---------------- -------------- --------------
>>>>             GFAB_BCNTR 4229125816477883         949530     1276098290
>>>>               KA_PCNTR                0              0              0
>>>>               KA_BCNTR                0              0              0
>>>> Discard counters:
>>>>           Counter Name            Total           Rate      Peak Rate
>>>> ---------------------- ---------------- -------------- --------------
>>>>          WAN_DROP_CNTR              298              0             82
>>>>          FAB_DROP_CNTR             1511              0            419
>>>>           KA_DROP_CNTR                0              0              0
>>>>         HOST_DROP_CNTR                0              0              0
>>>> 
>>>> --
>>>> Richard A Steenbergen <ras at e-gerbil.net>  http://www.e-gerbil.net/ras
>>>> GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)
>>>> 
>>>> 
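Richard's counters live in the PFE shell, but a first-pass check for drops can also be done from normal operational mode; the interface name here is an example:

  show interfaces xe-1/0/0 extensive    (input/output errors and drops)
  show interfaces queue xe-1/0/0        (per-queue tail and RED drops)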
>>> 
>>> I see your point, but I'd still be surprised if a box with a default
>>> config and a "then accept" filter would drop by this much. You could
>>> see the best-effort queue discarding packets in the "sh int" output.
>>> The be queue is given 95% of the buffer in the default scheduler map,
>>> which still leaves 1G-plus unaccounted for. Maybe it's a little bit of
>>> both. ...
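For reference, the implicit default scheduling Keegan refers to corresponds roughly to the explicit configuration below; this is a sketch of the documented defaults (best-effort 95%, network-control 5%), not a suggested change:

  class-of-service {
      scheduler-maps {
          sketch-default {
              forwarding-class best-effort scheduler be-default;
              forwarding-class network-control scheduler nc-default;
          }
      }
      schedulers {
          be-default {                  /* queue 0 */
              transmit-rate percent 95;
              buffer-size percent 95;
          }
          nc-default {                  /* queue 3 */
              transmit-rate percent 5;
              buffer-size percent 5;
          }
      }
  }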



