[j-nsp] jtree0 Memory full on MX480?
Phil Rosenthal
pr at isprime.com
Tue Jul 21 19:06:38 EDT 2015
Can you paste the output of these commands:
show conf | display set | match rpf-check
show ver
show route sum
A DPC should have enough jtree memory for roughly 1M FIB entries. That capacity can be cut in half if you are running uRPF (the rpf-check knob), and multiple routing instances can also eat into it.
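
If that rpf-check match turns up uRPF on units where you don't strictly need it, removing it may win back a good part of the jtree memory. In configure mode, something along these lines (xe-1/0/0 unit 0 is only a placeholder, adjust to your own ports):

  # only on units where rpf-check is configured but not actually required
  delete interfaces xe-1/0/0 unit 0 family inet rpf-check
  delete interfaces xe-1/0/0 unit 0 family inet6 rpf-check
  commit check
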
Best Regards,
-Phil Rosenthal
> On Jul 21, 2015, at 6:56 PM, Jeff Meyers <Jeff.Meyers at gmx.net> wrote:
>
> Hello list,
>
> We seem to be running into limits on an MX480 with an RE-2000 and 2x DPCE-4XGE-R; we are seeing these new messages in the syslog:
>
>
> Jul 22 00:50:36 cr0 fpc0 RSMON: Resource Category:jtree Instance:jtree0-seg0 Type:free-dwords Available:83072 is less than LWM limit:104857, rsmon_syslog_limit()
> Jul 22 00:50:36 cr0 fpc0 RSMON: Resource Category:jtree Instance:jtree1-seg0 Type:free-pages Available:1326 is less than LWM limit:1638, rsmon_syslog_limit()
> Jul 22 00:50:36 cr0 fpc1 RSMON: Resource Category:jtree Instance:jtree0-seg0 Type:free-pages Available:1316 is less than LWM limit:1638, rsmon_syslog_limit()
> Jul 22 00:50:37 cr0 fpc1 RSMON: Resource Category:jtree Instance:jtree0-seg0 Type:free-dwords Available:84224 is less than LWM limit:104857, rsmon_syslog_limit()
> Jul 22 00:50:37 cr0 fpc0 RSMON: Resource Category:jtree Instance:jtree1-seg0 Type:free-dwords Available:84864 is less than LWM limit:104857, rsmon_syslog_limit()
>
>
> Here is some more output from the FPC:
>
>
> jeff@cr0> request pfe execute target fpc0 command "show rsmon"
> SENT: Ukern command: show rsmon
> GOT:
> GOT: category instance    type            total lwm_limit hwm_limit     free
> GOT: -------- ----------- ------------ -------- --------- --------- --------
> GOT: jtree    jtree0-seg0 free-pages      32768      1638      4915     1245
> GOT: jtree    jtree0-seg0 free-dwords   2097152    104857    314572    79680
> GOT: jtree    jtree0-seg1 free-pages      32768      1638      4915    22675
> GOT: jtree    jtree0-seg1 free-dwords   2097152    104857    314572  1451200
> GOT: jtree    jtree1-seg0 free-pages      32768      1638      4915     1267
> GOT: jtree    jtree1-seg0 free-dwords   2097152    104857    314572    81088
> GOT: jtree    jtree1-seg1 free-pages      32768      1638      4915    23743
> GOT: jtree    jtree1-seg1 free-dwords   2097152    104857    314572  1519552
> GOT: jtree    jtree2-seg0 free-pages      32768      1638      4915     1266
> GOT: jtree    jtree2-seg0 free-dwords   2097152    104857    314572    81024
> GOT: jtree    jtree2-seg1 free-pages      32768      1638      4915    23732
> GOT: jtree    jtree2-seg1 free-dwords   2097152    104857    314572  1518848
> GOT: jtree    jtree3-seg0 free-pages      32768      1638      4915     1232
> GOT: jtree    jtree3-seg0 free-dwords   2097152    104857    314572    78848
> GOT: jtree    jtree3-seg1 free-pages      32768      1638      4915    23731
> GOT: jtree    jtree3-seg1 free-dwords   2097152    104857    314572  1518784
> LOCAL: End of file
>
> {master}
> jeff@cr0> request pfe execute target fpc0 command "show jtree 0 memory extensive"
> SENT: Ukern command: show jtree 0 memory extensive
> GOT:
> GOT: Jtree memory segment 0 (Context: 0x44976cc8)
> GOT: -------------------------------------------
> GOT: Memory Statistics:
> GOT: 16777216 bytes total
> GOT: 15299920 bytes used
> GOT: 1459080 bytes available (660480 bytes from free pages)
> GOT: 3024 bytes wasted
> GOT: 15192 bytes unusable
> GOT: 32768 pages total
> GOT: 26528 pages used (2568 pages used in page alloc)
> GOT: 4950 pages partially used
> GOT: 1290 pages free (max contiguous = 373)
> GOT:
> GOT: Partially Filled Pages (In bytes):-
> GOT: Unit Avail Overhead
> GOT: 8 674344 0
> GOT: 16 107840 0
> GOT: 24 13296 4792
> GOT: 32 288 0
> GOT: 48 2832 10400
> GOT:
> GOT: Free Page Lists(Pg Size = 512 bytes):-
> GOT: Page Bucket Avail(Bytes)
> GOT: 1-1 140288
> GOT: 2-2 112640
> GOT: 3-3 76800
> GOT: 4-4 49152
> GOT: 5-5 7680
> GOT: 6-6 15360
> GOT: 7-7 25088
> GOT: 8-8 8192
> GOT: 9-11 5632
> GOT: 12-17 6656
> GOT: 18-26 22016
> GOT: 27-32768 190976
> GOT:
> GOT: Fragmentation Index = 0.869, (largest free = 190976)
> GOT: Counters:
> GOT: 465261655 allocs (0 failed)
> GOT: 0 releases(partial 0)
> GOT: 463785484 frees
> GOT: 0 holds
> GOT: 9 pending frees(pending bytes 88)
> GOT: 0 pending forced
> GOT: 0 times free blocked
> GOT: 0 sync writes
> GOT: Error Counters:-
> GOT: 0 bad params
> GOT: 0 failed frees
> GOT: 0 bad cookie
> GOT:
> GOT: Jtree memory segment 1 (Context: 0x449f87e8)
> GOT: -------------------------------------------
> GOT: Memory Statistics:
> GOT: 16777216 bytes total
> GOT: 5123760 bytes used
> GOT: 11650408 bytes available (11609600 bytes from free pages)
> GOT: 2704 bytes wasted
> GOT: 344 bytes unusable
> GOT: 32768 pages total
> GOT: 9912 pages used (8976 pages used in page alloc)
> GOT: 181 pages partially used
> GOT: 22675 pages free (max contiguous = 22672)
> GOT:
> GOT: Partially Filled Pages (In bytes):-
> GOT: Unit Avail Overhead
> GOT: 8 25352 0
> GOT: 16 11072 0
> GOT: 32 384 0
> GOT: 40 440 32
> GOT: 48 1056 256
> GOT: 56 448 8
> GOT: 64 448 0
> GOT: 72 360 8
> GOT: 80 400 32
> GOT: 168 336 16
> GOT: 256 512 32
> GOT:
> GOT: Free Page Lists(Pg Size = 512 bytes):-
> GOT: Page Bucket Avail(Bytes)
> GOT: 3-3 1536
> GOT: 27-32768 11608064
> GOT:
> GOT: Fragmentation Index = 0.004, (largest free = 11608064)
> GOT: Counters:
> GOT: 29941803 allocs (0 failed)
> GOT: 0 releases(partial 0)
> GOT: 29888786 frees
> GOT: 0 holds
> GOT: 1 pending frees(pending bytes 8)
> GOT: 0 pending forced
> GOT: 0 times free blocked
> GOT: 0 sync writes
> GOT: Error Counters:-
> GOT: 0 bad params
> GOT: 0 failed frees
> GOT: 0 bad cookie
> GOT:
> GOT:
> GOT: Context: 0x4296cc58
> LOCAL: End of file
>
>
> Furthermore, I found this article in the Juniper KB:
>
>
> http://kb.juniper.net/InfoCenter/index?page=content&id=KB19015&actp=search&viewlocale=en_US&searchid=1236602855555
>
>
> Is it really possible that the MX480 cannot handle more than roughly 500k routes per FPC? What are my options here? Do I have to upgrade the SCB and get new interface modules in order to keep this box running?
>
> What are my options to buy some time? Where is the right knob to aggregate routes (if that is even a good idea) to, let's say, /23?
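
I'm not aware of a knob that summarizes received routes into /23s on the box itself, but you can keep the longest prefixes out of the forwarding table with a forwarding-table export policy, while the RIB (and what you advertise) stays untouched. A rough sketch, with made-up policy/term names and /24-and-longer as the cut-off; anything rejected here needs a covering default or less-specific route, otherwise that traffic gets dropped:

  set policy-options policy-statement FIB-FILTER term DROP-LONG from route-filter 0.0.0.0/0 prefix-length-range /24-/32
  set policy-options policy-statement FIB-FILTER term DROP-LONG then reject
  set policy-options policy-statement FIB-FILTER term KEEP-REST then accept
  set routing-options forwarding-table export FIB-FILTER

If you already have a per-packet load-balancing policy applied under forwarding-table export, fold these terms into that policy rather than replacing it.
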
>
>
> Thanks in advance!
>
>
>
> Jeff
> _______________________________________________
> juniper-nsp mailing list juniper-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp