[j-nsp] ms-mic cpu pinned, then reset conns?
ryanL
ryan.landry at gmail.com
Tue Nov 11 16:25:50 EST 2014
hi. just closing the loop on this. it turns out to be a software defect:
specifying a nat pool as a prefix causes some confusion on the mx, and the
ms-mic ends up using the highest ip in the prefix (the broadcast address) for
nat purposes. this causes a crazy amplification of traffic on the ms-mic and
drives the cpu to max.
the work-around is to define the pool as an address-range with explicit "low"
and "high" values, making sure you exclude the network ID and the broadcast
address of the desired prefix. you lose two IPs in the range, but it certainly
solved the issue for me.
supposedly fixed in an upcoming release.
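
for reference, here's a rough sketch of the two pool styles. the pool name and
prefix below are placeholders rather than my actual config, and exact knobs
vary a bit by release, so treat it as illustrative only:

    services {
        nat {
            pool NAT-POOL-SKETCH {
                /* the form that hit the bug: a bare prefix such as  */
                /* "address 198.51.100.0/24;", which pulls in the .0 */
                /* network ID and the .255 broadcast address         */
                /* work-around: an explicit range skipping .0 / .255 */
                address-range low 198.51.100.1 high 198.51.100.254;
                port {
                    automatic;
                }
            }
        }
    }
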
On Tue, Oct 21, 2014 at 3:29 PM, ryanL <ryan.landry at gmail.com> wrote:
> i've determined that the 'reset' of the cpu back to normal levels happens
> when i commit a change to the NAT config that permits new hosts. it seems
> that this must tickle the ms-mic somehow. i just confirmed it: committing a
> benign change resets the cpu back down to 1%.
>
> On Tue, Oct 21, 2014 at 2:04 PM, ryanL <ryan.landry at gmail.com> wrote:
>
>> we've had the ms-mic working pretty well for NAT on the mx80, until i
>> discovered this.
>>
>> http://ryry.foursquare.com/image/3a3o1J1M1o27
>>
>> graph shows two different mx80's with their respective RE and ms-mic cpu
>> usage. it seems like maybe the connections build up, hammering the ms-mic
>> cpu, and my guess is it then dumps all active NAT connections. the card
>> itself doesn't appear to be reloading.
>>
>> i don't believe we're doing anything overly crazy. we're just letting some
>> machines call out to the world if the destination isn't our private network
>> (a rough sketch of that kind of setup follows after the quoted output).
>>
>> this is the current state. what i find particularly interesting is the
>> in/out rate. that's a crazy amount of traffic seen on the ms-mic interface;
>> we don't even have that much traffic flowing through the mx80's combined.
>> the jtac engineer suspects it's cosmetic, but now i'm guessing it relates
>> to the elevated cpu. i'm going to start graphing that interface as well.
>>
>> ry@iad1-er2> show services sessions utilization extensive
>> Session %Count Setup %Rate Drop Teardown %CPU
>> Interface Count Rate Rate Rate
>> ms-0/2/0 661 0.00 20 13 34.20
>> Green
>>
>> ry@iad1-er2> show interfaces ms-0/2/0
>> Physical interface: ms-0/2/0, Enabled, Physical link is Up
>> Interface index: 151, SNMP ifIndex: 539
>> Type: Adaptive-Services, Link-level type: Adaptive-Services, MTU: 9192,
>> Speed: 20000mbps
>> Device flags : Present Running
>> Interface flags: Point-To-Point SNMP-Traps
>> Link type : Full-Duplex
>> Link flags : None
>> Last flapped : 2014-10-07 23:11:09 UTC (1w6d 21:50 ago)
>> Input rate : 731899664 bps (1313864 pps)
>> Output rate : 900069112 bps (1313850 pps)
>>
>>
>>
>
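
for anyone curious about the "only nat when the destination isn't our private
network" bit in the quoted mail, here's a rough sketch of one common way to
wire that up with a service filter plus an interface-style service set. names,
prefixes, and interfaces below are placeholders rather than my actual config,
and the exact translation-type / port knobs depend on your release:

    services {
        service-set NAT-SS {
            nat-rules SRC-NAT-RULE;
            interface-service {
                service-interface ms-0/2/0;
            }
        }
        nat {
            rule SRC-NAT-RULE {
                match-direction input;
                term t1 {
                    from {
                        /* the inside machines that are allowed to call out */
                        source-address {
                            10.10.0.0/16;
                        }
                    }
                    then {
                        translated {
                            source-pool NAT-POOL-SKETCH;
                            translation-type napt-44;
                        }
                    }
                }
            }
        }
    }
    firewall {
        family inet {
            service-filter BYPASS-INTERNAL {
                /* traffic toward our own private space skips the ms-mic */
                term internal {
                    from {
                        destination-address {
                            10.0.0.0/8;
                        }
                    }
                    then skip;
                }
                /* everything else is handed to the service set for nat */
                term the-rest {
                    then service;
                }
            }
        }
    }
    interfaces {
        ge-1/0/0 {
            unit 0 {
                family inet {
                    service {
                        input {
                            service-set NAT-SS service-filter BYPASS-INTERNAL;
                        }
                        output {
                            service-set NAT-SS;
                        }
                    }
                }
            }
        }
    }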