[j-nsp] HDD Write Error

juniper at iber-x.com juniper at iber-x.com
Wed Oct 19 16:27:51 EDT 2011


Hi,

Thanks! We will try it and let you know.

Thanks,

On 19/10/2011 20:25, Jonas Frey (Probe Networks) wrote:
> Hello,
>
> yes, that will work. You need to check the interface configuration, of
> course (because you have fewer interfaces).
>
> The M20 will install the correct PFE module when it boots (it will print
> an error saying it is running the incorrect PFE module for this
> architecture), so booting will take a minute longer than usual, but
> that's it.
>
> Jonas
>
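As a minimal sketch of how to confirm the result after that first boot (standard Junos show commands; the "m20" prompt is just a placeholder hostname):

    user@m20> show version                   # confirm which JUNOS release came over on the disk
    user@m20> show chassis hardware          # confirm the RE, SSB and FPCs are all recognised
    user@m20> show chassis routing-engine    # RE status after the longer-than-usual first boot
    user@m20> show log messages | last 50    # look here for the PFE-module message Jonas mentions
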
> On Wednesday, 19.10.2011, 20:14 +0100, juniper at iber-x.com wrote:
>> Hi experts,
>>
>> Thanks for your replies and advice.
>>
>> Just a quick question: since we have an old M5 and M10, we were
>> wondering if we could move the HDD directly from one of these two
>> routers to our M20. Is that possible? Does anyone have experience doing
>> that? If both of them have the same JUNOS installed, we would then copy
>> the current configuration over. Thoughts?
>>
>> Many thanks,
>>
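For the configuration itself, a minimal sketch of the save-and-restore step (file names and the transfer host are placeholders; adjust or delete interface stanzas that do not exist on the target chassis before committing):

    user@m20> show configuration | save /var/tmp/m20.conf        # snapshot the running config before the swap
    user@m20> file copy /var/tmp/m20.conf ftp://backuphost/m20.conf

    ...after booting from the transplanted disk...
    user@m20> file copy ftp://backuphost/m20.conf /var/tmp/m20.conf
    user@m20> configure
    user@m20# load override /var/tmp/m20.conf     # replace the candidate configuration wholesale
    user@m20# commit check                        # make sure it parses against the hardware actually present
    user@m20# commit
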
>>
>> On 19/10/2011 16:56, Jonas Frey (Probe Networks) wrote:
>>> Hello,
>>>
>>> Haven't you changed the HDD yet? Like to live dangerously, eh? :-)
>>>
>>> This is very likely related, because those error messages cause writes
>>> to the HDD... and if your HDD is dead or has bad sectors, that will
>>> cause trouble.
>>>
>>> Good luck,
>>> Jonas
>>>
>>> On Wednesday, 19.10.2011, 16:19 +0200, Juniper GOWEX wrote:
>>>> Hi all,
>>>>
>>>> Twenty days later, the error reappeared. It always shows up in the
>>>> log right after a "RPD_SCHED_SLIP: 5 sec scheduler slip, user: 0 sec 0
>>>> usec, system: 4 sec, 228450 usec" message:
>>>>
>>>>
>>>>          Oct 13 23:16:35.278 2011   LEV[2625]: RPD_SCHED_SLIP: 5 sec
>>>>          scheduler slip, user: 0 sec 0 usec, system: 4 sec, 228450 usec
>>>>          Oct 13 23:17:35.862 2011   LEV[2625]: RPD_SCHED_SLIP: 4 sec
>>>>          scheduler slip, user: 0 sec 0 usec, system: 3 sec, 804772 usec
>>>>          Oct 13 23:20:07.655 2011   LEV[2625]: RPD_SCHED_SLIP: 4 sec
>>>>          scheduler slip, user: 0 sec 0 usec, system: 3 sec, 750490 usec
>>>>          Oct 13 23:27:43.598 2011   LEV[2625]: RPD_SCHED_SLIP: 4 sec
>>>>          scheduler slip, user: 0 sec 0 usec, system: 3 sec, 78894 usec
>>>>          Oct 13 23:28:14.755 2011   LEV[2625]: RPD_SCHED_SLIP: 5 sec
>>>>          scheduler slip, user: 0 sec 0 usec, system: 3 sec, 903324 usec
>>>>          Oct 13 23:29:16.124 2011   LEV[2625]: RPD_SCHED_SLIP: 5 sec
>>>>          scheduler slip, user: 0 sec 0 usec, system: 4 sec, 166013 usec
>>>>          Oct 13 23:31:18.118 2011   LEV[2625]: RPD_SCHED_SLIP: 5 sec
>>>>          scheduler slip, user: 0 sec 0 usec, system: 3 sec, 598753 usec
>>>>          Oct 13 23:35:46.293 2011   ssb NH: resolutions from iif 82 throttled
>>>>          Oct 13 23:38:25.256 2011   LEV[2625]: RPD_SCHED_SLIP: 4 sec
>>>>          scheduler slip, user: 0 sec 0 usec, system: 3 sec, 762067 usec
>>>>          Oct 13 23:38:55.759 2011   LEV[2625]: RPD_SCHED_SLIP: 5 sec
>>>>          scheduler slip, user: 0 sec 0 usec, system: 4 sec, 171438 usec
>>>>          Oct 13 23:41:01.342 2011  ssb NH: resolutions from iif 88 throttled
>>>>          Oct 13 23:42:16.283 2011  ssb NH: resolutions from iif 93 throttled
>>>>          Oct 13 23:46:05.391 2011  smartd[2595]:  Device: /dev/ad1a,
>>>>          Failed attribute: (200)Write Error Rate
>>>>
>>>> Could this be related?
>>>>
>>>> Best Regards
>>>>
>>>> Isidoro
>>>>
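One way to see that correlation in one place is to filter the messages file from the CLI (plain Junos pipes; the regex simply pulls the two message types together):

    user@m20> show log messages | match "RPD_SCHED_SLIP|smartd"      # scheduler slips and disk errors side by side
    user@m20> show system processes extensive | match "rpd|PID"      # is rpd CPU-bound or waiting on I/O when it slips?
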
>>>>
>>>>
>>>> On 22/09/2011 8:05, Josh Farrelly wrote:
>>>>> Could you put them both in a Linux box and just 'dd if' them?
>>>>>
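That would be a plain block-level clone. A rough sketch, assuming the old and new drives appear as /dev/sdb and /dev/sdc on the Linux box (device names are placeholders; verify them before writing anything):

    # clone the failing disk onto the replacement; conv=noerror keeps going past
    # bad sectors and sync pads them with zeros so offsets stay aligned
    dd if=/dev/sdb of=/dev/sdc bs=64k conv=noerror,sync

Given the failing attribute, GNU ddrescue would arguably be the safer tool for the same job, but the idea is identical.
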
>>>>> -----Original Message-----
>>>>> From: juniper-nsp-bounces at puck.nether.net [mailto:juniper-nsp-bounces at puck.nether.net] On Behalf Of Jonas Frey (Probe Networks)
>>>>> Sent: Thursday, 22 September 2011 12:52
>>>>> To: Isidoro Cristobal
>>>>> Cc: juniper-nsp at puck.nether.net
>>>>> Subject: Re: [j-nsp] HDD Write Error
>>>>>
>>>>> Dear Isidoro,
>>>>>
>>>>> You can't copy the data 1:1... at least not without a lot of work.
>>>>> The best approach is to reinstall JunOS from install media (PCMCIA/CF card) once you have replaced the hard disk.
>>>>> It's very easy to replace the hard disk on RE2/3/4/5; it is normally secured by only four screws on the RE.
>>>>> Make sure to save your config files (JunOS config, SSH keys, other data like home directories, logs, etc.) before you replace the HDD, if necessary.
>>>>>
>>>>> Best regards,
>>>>> Jonas
>>>>>
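A minimal sketch of that backup step, assuming shell access on the RE and an SSH-reachable host to copy to (the host name is a placeholder, and the exact location of the SSH host keys can vary by release):

    user@m20> start shell
    % tar -czf /var/tmp/re-backup.tgz /config /var/home /var/log /etc/ssh   # config history, home dirs, logs, host keys
    % scp /var/tmp/re-backup.tgz admin@backuphost:re-backup.tgz             # placeholder destination
    % exit
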
>>>>>> On Wednesday, 21.09.2011, 17:18 +0200, Isidoro Cristobal wrote:
>>>>>> Hi,
>>>>>>
>>>>>> First of all, thank you very much for your quick response.
>>>>>>
>>>>>> How do I save the data to the new hard disk? Do you know a procedure
>>>>>> for replacing the hard disk?
>>>>>>
>>>>>> Best Regards,
>>>>>>
>>>>>> Isidoro
>>>>>>
>>>>>>
>>>>>>
>>>>>>> On 20/09/2011 17:29, Jonas Frey (Probe Networks) wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> you are correct: the disk exceeded the maximum write errors
>>>>>>> permitted by the SMART value and is therefore marked as bad.
>>>>>>> Prepare for a complete failure of the drive soon (likely within
>>>>>>> 1-30 days). This may be the right time to upgrade the hard disk
>>>>>>> to an SSD:
>>>>>>> http://juniper.cluepon.net/Replacing_the_harddisk_with_solid_state_flash
>>>>>>>
>>>>>>> Best regards,
>>>>>>> Jonas
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>> On Tuesday, 20.09.2011, 17:09 +0200, Juniper GOWEX wrote:
>>>>>>>> Hi all,
>>>>>>>>
>>>>>>>> Since yesterday, the log of my M20 has been showing the following message:
>>>>>>>>
>>>>>>>>            smartd[2595]:  Device: /dev/ad1a, Failed attribute: (200)Write
>>>>>>>>            Error Rate
>>>>>>>>
>>>>>>>> It's informative, but I think there is a problem with my HDD (I
>>>>>>>> still have to run the smartd commands).
>>>>>>>>
>>>>>>>> Has anybody had this problem?
>>>>>>>>
>>>>>>>>
>>>>>>>> Best Regards
>>>>>>>>
>>>>>>>> Isidoro
>>>>>>>>
>>>>>>>>
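For completeness, the attribute behind that message can be inspected from the shell with smartctl (a sketch; this assumes smartctl ships alongside the smartd that logged the error, and /dev/ad1 is taken from the message itself):

    user@m20> start shell
    % smartctl -H /dev/ad1     # overall SMART health self-assessment
    % smartctl -A /dev/ad1     # full attribute table; attribute 200 is the Write Error Rate from the log
    % exit
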
>>>>>>>> _______________________________________________
>>>>>>>> juniper-nsp mailing list juniper-nsp at puck.nether.net
>>>>>>>> https://puck.nether.net/mailman/listinfo/juniper-nsp