[j-nsp] Event Log RPD_SCHED_SLIP M20
Juniper
juniper at iber-x.com
Mon Apr 12 12:35:18 EDT 2010
Hello guys,
First of all, thanks for the replies.
We are quite worried about this event in the syslog. It hasn't appeared in
the syslog for a week now, but even so we are going to keep monitoring for
this event.
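A recurrence is easy to spot on the router itself with a log filter along
these lines (just a sketch; "messages" assumes the default syslog file name):

abcXXX@xxyy.sss2> show log messages | match RPD_SCHED_SLIP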
Replying to your questions:
- It's a border router.
- There hasn't been any change to the router's configuration, and it didn't
happen after running any command; it appeared suddenly.
- Outputs:
abcXXX@xxyy.sss2> show system processes extensive
last pid: 3551; load averages: 0.02, 0.02, 0.00 up 165+20:25:00 13:23:09
65 processes: 1 running, 64 sleeping
Mem: 733M Active, 237M Inact, 183M Wired, 404K Cache, 143M Buf, 854M Free
Swap: 2048M Total, 2048M Free
PID USERNAME PRI NICE SIZE RES STATE TIME WCPU CPU COMMAND
2625 root 2 0 429M 426M kqread 44.4H 0.39% 0.39% rpd
2606 root 2 0 407M 405M kqread 54.6H 0.00% 0.00% rpd
2609 root 2 15 43004K 42188K select 890:49 0.00% 0.00% sampled
2585 root 2 0 9908K 2960K select 243:46 0.00% 0.00% chassisd
2615 root 2 0 2020K 1408K select 58:51 0.00% 0.00% ppmd
2629 root 2 0 4472K 2660K select 43:14 0.00% 0.00% snmpd
2605 root 2 0 3620K 2244K select 28:35 0.00% 0.00% mib2d
2630 root 2 0 3808K 2960K select 26:53 0.00% 0.00% dcd
2626 root 2 0 1388K 764K select 17:00 0.00% 0.00% irsd
2586 root 2 0 1824K 1080K select 16:16 0.00% 0.00% alarmd
2712 root 2 0 0K 0K peer_s 14:30 0.00% 0.00% peer proxy
2495 root 2 0 1300K 820K select 9:30 0.00% 0.00% syslogd
7 root 18 0 0K 0K syncer 7:14 0.00% 0.00% syncer
77737 root 2 0 1256K 728K select 3:04 0.00% 0.00% ntpd
2621 root 2 0 1928K 1168K select 2:38 0.00% 0.00% bfdd
2628 root 2 0 2532K 1272K select 1:48 0.00% 0.00% pfed
2716 root 2 0 0K 0K peer_s 1:43 0.00% 0.00% peer proxy
2611 root 2 0 2792K 1428K select 1:39 0.00% 0.00% rmopd
2618 root 2 0 1928K 1140K select 1:07 0.00% 0.00% fsad
6 root -2 0 0K 0K vlruwt 0:56 0.00% 0.00% vnlru
5 root -18 0 0K 0K psleep 0:53 0.00% 0.00% bufdaemon
11 root -18 0 0K 0K psleep 0:50 0.00% 0.00% vmuncachedaemo
2627 root 2 0 1944K 1160K select 0:41 0.00% 0.00% dfwd
2591 root 2 0 1004K 392K sbwait 0:32 0.00% 0.00% tnp.sntpd
2590 root 2 0 1284K 808K select 0:32 0.00% 0.00% inetd
2582 root 2 0 996K 360K select 0:30 0.00% 0.00% watchdog
2552 root 10 0 1132K 632K nanslp 0:24 0.00% 0.00% cron
2631 root 2 0 4580K 2304K select 0:22 0.00% 0.00% kmd
2588 root 2 0 13012K 6956K select 0:20 0.00% 0.00% mgd
2738 root 2 0 0K 0K peer_s 0:13 0.00% 0.00% peer proxy
3 root -18 0 0K 0K psleep 0:12 0.00% 0.00% pagedaemon
2599 root 10 0 1072K 504K nanslp 0:08 0.00% 0.00% eccd
1 root 10 0 916K 576K wait 0:04 0.00% 0.00% init
2595 root 10 0 1040K 460K nanslp 0:03 0.00% 0.00% smartd
101 root 10 0 2051M 35448K mfsidl 0:01 0.00% 0.00% newfs
3509 ibx333 2 0 9280K 4432K select 0:01 0.00% 0.00% cli
3507 root 2 0 5412K 1864K select 0:00 0.00% 0.00% sshd
2619 root 2 0 2044K 1276K select 0:00 0.00% 0.00% spd
2607 root 2 -15 2556K 1184K select 0:00 0.00% 0.00% apsd
2608 root 2 0 2600K 1292K select 0:00 0.00% 0.00% vrrpd
2612 root 2 0 2856K 1572K select 0:00 0.00% 0.00% cosd
2613 root 2 0 1928K 1172K select 0:00 0.00% 0.00% nasd
2622 root 2 0 1736K 960K select 0:00 0.00% 0.00% sdxd
3551 root 34 0 21728K 844K RUN 0:00 0.00% 0.00% top
2623 root 2 0 1784K 1040K select 0:00 0.00% 0.00% rdd
2610 root 2 0 2072K 960K select 0:00 0.00% 0.00% ilmid
3510 root 2 0 13100K 8192K select 0:00 0.00% 0.00% mgd
2614 root 2 0 1848K 1084K select 0:00 0.00% 0.00% fud
2616 root 2 0 1988K 1132K select 0:00 0.00% 0.00% lmpd
2620 root 2 0 1944K 1072K select 0:00 0.00% 0.00% pgmd
2624 root 2 0 1672K 968K select 0:00 0.00% 0.00% lrmuxd
2587 root 2 0 1928K 788K select 0:00 0.00% 0.00% craftd
2617 root 2 0 1284K 636K select 0:00 0.00% 0.00% rtspd
9 root 2 0 0K 0K pfeacc 0:00 0.00% 0.00% if_pfe_listen
2583 root 2 0 1128K 620K select 0:00 0.00% 0.00% tnetd
2600 root 3 0 1084K 524K ttyin 0:00 0.00% 0.00% getty
2601 root 3 0 1080K 500K siodcd 0:00 0.00% 0.00% getty
2479 root 2 0 448K 264K select 0:00 0.00% 0.00% pccardd
0 root -18 0 0K 0K sched 0:00 0.00% 0.00% swapper
12 root 2 0 0K 0K picacc 0:00 0.00% 0.00% if_pic_listen
10 root 2 0 0K 0K cb-pol 0:00 0.00% 0.00% cb_poll
13 root 2 0 0K 0K scs_ho 0:00 0.00% 0.00% scs_housekeepi
8 root 29 0 0K 0K sleep 0:00 0.00% 0.00% netdaemon
4 root 18 0 0K 0K psleep 0:00 0.00% 0.00% vmdaemon
2 root 10 0 0K 0K tqthr 0:00 0.00% 0.00% taskqueue
abc111@xxyy.YYY2> show chassis routing-engine
Routing Engine status:
Slot 0:
Current state Master
Election priority Master (default)
Temperature 23 degrees C / 73 degrees F
CPU temperature 22 degrees C / 71 degrees F
DRAM 2048 MB
Memory utilization 47 percent
CPU utilization:
User 2 percent
Background 0 percent
Kernel 0 percent
Interrupt 0 percent
Idle 97 percent
Model RE-3.0
Serial ID P10865701888
Start time 2009-10-28 15:49:40 CET
Uptime 165 days, 20 hours, 29 minutes, 57 seconds
Load averages: 1 minute 5 minute 15 minute
0.00 0.00 0.00
Thanks so much for your reply.
Matthew
On 09/04/2010 17:38, Nilesh Khambal wrote:
> The message shows that the scheduler slips were caused by a user process
> taking up the CPU for longer than 4 seconds. This could very well be some
> task/job inside RPD or could be some other process. Was there any
> configuration change done recently on the router that triggered these
> messages? Are they seen after running any command? What is the role of this
> router in your network? I would suggest running the "show system processes
> extensive" and "show chassis routing-engine" immediately after the message
> shows up in the syslog.
>
> You could also, in the short term, enable task accounting with "set task
> accounting on" from operational level and monitor the output of "show task
> accounting" after the message is seen. Usually, there will be an additional
> message after enabling task accounting, showing which task inside RPD (if it
> is indeed RPD) is taking up the CPU. Do not forget to disable it after you
> have seen a couple of incidents of these messages.
>
> Thanks,
> Nilesh.
>
>
>
>
> On 4/9/10 3:00 AM, "Juniper"<juniper at iber-x.com> wrote:
>
>
>> Hello there,
>>
>> For the past few days we have been seeing these messages in the log of a
>> Juniper M20 running JUNOS 7.3R1.4:
>>
>> Apr 6 06:00:15 xxxx-yyy2.abc-d.net LEV[2625]: RPD_SCHED_SLIP: 5 sec
>> scheduler slip, user: 4 sec 940542 usec, system: 0 sec, 14925 usec
>> Apr 6 05:58:07 xxxx-yyy2.abc-d.net LEV[2625]: RPD_SCHED_SLIP: 4 sec
>> scheduler slip, user: 4 sec 75182 usec, system: 0 sec, 0 usec
>>
>> Does anyone know what could be the problem?
>>
>> Thanks in advance,
>>
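For the record, the task-accounting sequence Nilesh describes comes down to
three operational-mode commands (a sketch only; the exact output columns of
"show task accounting" vary by JUNOS release):

abcXXX@xxyy.sss2> set task accounting on
(leave it enabled until the next RPD_SCHED_SLIP message is logged)

abcXXX@xxyy.sss2> show task accounting
(lists the tasks inside rpd with their accumulated user/system CPU time;
the task responsible for the slip should stand out)

abcXXX@xxyy.sss2> set task accounting off
(turn it back off once a couple of incidents have been captured, since the
accounting itself adds some overhead)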