[cisco-voip] CUCM Upgrade failure...

Dave Goodwin dave.goodwin at december.net
Tue Nov 17 12:30:07 EST 2015


I had a similar symptom a couple of months ago - discovery timed out on one or
more nodes. I found a support forum post from another person with the same
issue, and it appeared that PCD got "stuck" partway through installing
the ciscocm.ucmap_platformconfig.cop file. The workaround was to log into
the Software Install/Upgrade page on the affected node(s), where you'll see
an option to Assume Control of a currently running install. Do that, click
through to complete the install, and then retry the discovery. I don't know
whether an existing bug causes this, but the workaround above solved my
problem.

On Tue, Nov 17, 2015 at 12:25 PM, Jonathan Charles <jonvoip at gmail.com>
wrote:

> I just posted the log... I am not sure from reading it...
>
> discovery node 1617 failed with errorcode 1-3
>
>
> Jonathan
>
> On Tue, Nov 17, 2015 at 11:23 AM, Ryan Huff <ryanhuff at outlook.com> wrote:
>
>> Jonathan,
>>
>> What times out on the publisher? Are you referring to when PCD tries to do
>> the cluster discovery on the existing cluster?
>>
>> -Ryan
>>
>>
>> Sent from my T-Mobile 4G LTE Device
>>
>>
>> -------- Original message --------
>> From: Jonathan Charles
>> Date:11/17/2015 12:06 PM (GMT-05:00)
>> To: Anthony Holloway
>> Cc: cisco-voip at puck.nether.net
>> Subject: Re: [cisco-voip] CUCM Upgrade failure...
>>
>> Nope, not all good... new error... the upgrade failed so I deleted the
>> cluster and re-added... it finds all of the subscribers, but it times out
>> on the publisher....
>>
>> I have verified all services are running on the Pub and it looks clean...
>>
>>
>>
>> Jonathan
>>
>> On Mon, Nov 16, 2015 at 10:20 AM, Anthony Holloway <
>> avholloway+cisco-voip at gmail.com> wrote:
>>
>>> Looks like you're all good now, but as a heads up to everyone else,
>>> don't stop at checking NTP with "utils ntp status".  Your upgrade will
>>> fail if your NTP configuration has an FQDN for the NTP server that
>>> begins with a digit.
>>>
>>> E.g., 0.pool.ntp.org
>>>
>>> You will not see the hostname in the output of "utils ntp status", as it
>>> will only show you the resolved IP address.  So, you will also need to
>>> issue a "utils ntp config" to see what value was entered by the
>>> administrator.
>>>
>>> This is the only defect reference I found, though the upgrade where I hit
>>> it was an 8.6 to 10.5 Refresh Upgrade (RU), not PCD.
>>>
>>> https://tools.cisco.com/bugsearch/bug/CSCtj07817
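>>> A quick way to sanity-check for this is sketched below as a plain shell
>>> helper (hypothetical, not a Cisco tool): copy the server names shown by
>>> "utils ntp config" and flag any that begin with a digit.

```shell
# Minimal sketch: flag an NTP server hostname that begins with a digit,
# the pattern that trips CSCtj07817. "utils ntp status" only shows the
# resolved IP, so check the configured names themselves.
check_ntp_host() {
  case "$1" in
    [0-9]*) echo "suspect" ;;   # e.g. 0.pool.ntp.org
    *)      echo "ok" ;;
  esac
}

check_ntp_host 0.pool.ntp.org    # prints "suspect"
check_ntp_host time.example.com  # prints "ok"
```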
>>>
>>> On Sun, Nov 15, 2015 at 10:43 PM, Jonathan Charles <jonvoip at gmail.com>
>>> wrote:
>>>
>>>> OK, a reboot of PCD got it past that error...
>>>>
>>>>
>>>> Jonathan
>>>>
>>>> On Sun, Nov 15, 2015 at 9:40 PM, Jonathan Charles <jonvoip at gmail.com>
>>>> wrote:
>>>>
>>>>> Yeah, the error I am getting is:
>>>>>
>>>>> 1 nodes(s) in Export task action ID #1127... on the Publisher...
>>>>>
>>>>> I will try rebooting everything...
>>>>>
>>>>>
>>>>>
>>>>> Jonathan
>>>>>
>>>>> On Sun, Nov 15, 2015 at 9:35 PM, Ryan Huff <ryanhuff at outlook.com>
>>>>> wrote:
>>>>>
>>>>>> Looks healthy ...
>>>>>>
>>>>>> I recall trying PCD once and I hit really strange issues too. For
>>>>>> that upgrade, I ultimately abandoned PCD and built new VMs with the Answer
>>>>>> File Generator then a DRS backup/restore.
>>>>>>
>>>>>> Not sure where you are in your timeline or if it is that important,
>>>>>> but it is definitely something I would consider. Sometimes you can spend
>>>>>> more time trying to get the silly tools to work than it would take to
>>>>>> just do the work yourself.
>>>>>>
>>>>>> Google is littered with PCD weirdness; great idea of an application,
>>>>>> just not there yet IMO.
>>>>>>
>>>>>> -Ryan
>>>>>>
>>>>>>
>>>>>>
>>>>>> Sent from my iPad
>>>>>> On Nov 15, 2015, at 10:19 PM, Jonathan Charles <jonvoip at gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>> Everything looks good....
>>>>>>
>>>>>>
>>>>>> admin:utils ntp status
>>>>>> ntpd (pid 19674) is running...
>>>>>>
>>>>>>      remote           refid      st t when poll reach   delay   offset  jitter
>>>>>> ==============================================================================
>>>>>>  127.127.1.0     .LOCL.          10 l   21   64  377    0.000    0.000   0.001
>>>>>>  10.0.31.2       10.0.31.3        3 u  175 1024  377    0.635   -5.771   2.042
>>>>>> *10.0.31.3       129.6.15.29      2 u  970 1024  377    0.510  -11.340   0.449
>>>>>>  10.1.31.2       10.0.31.3        3 u  490 1024  377    0.850   -9.114   4.881
>>>>>> +10.1.31.3       129.6.15.29      2 u  184 1024  377    0.817   -4.085   5.355
>>>>>>
>>>>>>
>>>>>> synchronised to NTP server (10.0.31.3) at stratum 3
>>>>>>    time correct to within 68 ms
>>>>>>    polling server every 1024 s
>>>>>>
>>>>>> Current time in UTC is : Mon Nov 16 03:16:14 UTC 2015
>>>>>> Current time in America/Chicago is : Sun Nov 15 21:16:14 CST 2015
>>>>>> admin:
>>>>>>
>>>>>> admin:utils diagnose module validate_network
>>>>>>
>>>>>> Log file: platform/log/diag1.log
>>>>>>
>>>>>> Starting diagnostic test(s)
>>>>>> ===========================
>>>>>> test - validate_network    : Passed
>>>>>>
>>>>>> Diagnostics Completed
>>>>>>
>>>>>> admin:#
>>>>>>
>>>>>> admin:utils dbreplication runtimestate
>>>>>>
>>>>>> DB and Replication Services: ALL RUNNING
>>>>>>
>>>>>> Cluster Replication State: Replication repair command started at: 2014-06-20-23-22
>>>>>>      Replication repair command COMPLETED 541 tables processed out of 541
>>>>>>      Errors or Mismatches Were Found:
>>>>>>
>>>>>>      Use 'file view activelog cm/trace/dbl/sdi/ReplicationRepair.2014_06_20_23_22_51.out'
>>>>>>      to see the details
>>>>>>
>>>>>> DB Version: ccm8_6_2_20000_2
>>>>>> Number of replicated tables: 541
>>>>>>
>>>>>> Cluster Detailed View from PUB (5 Servers):
>>>>>>
>>>>>>                                 PING            REPLICATION     REPL.   DBver&  REPL.       REPLICATION SETUP
>>>>>> SERVER-NAME     IP ADDRESS      (msec)  RPC?    STATUS          QUEUE   TABLES  LOOP?       (RTMT) & details
>>>>>> -----------     ------------    ------  ----    -----------     -----   ------- -----       -----------------
>>>>>> IPTCMS02        10.0.126.12     0.196   Yes     Connected       0       match   Yes         (2) Setup Completed
>>>>>> IPTCMS01        10.0.126.11     0.151   Yes     Connected       0       match   Yes         (2) Setup Completed
>>>>>> IPTCMP          10.0.126.10     0.065   Yes     Connected       0       match   Yes         (2) PUB Setup Completed
>>>>>> IPTCMS03        10.1.126.13     0.545   Yes     Connected       0       match   Yes         (2) Setup Completed
>>>>>> IPTCMS04        10.1.126.14     0.527   Yes     Connected       0       match   Yes         (2) Setup Completed
>>>>>>
>>>>>> admin:
>>>>>>
>>>>>> On Sun, Nov 15, 2015 at 9:14 PM, Ryan Huff <ryanhuff at outlook.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Also worth noting that if CCM NTP is synchronized to a Windows
>>>>>>> server (even if it shows stratum 3 or better), that is a problem you'll
>>>>>>> need to correct; SNTP can play hell with UCOS and do some pretty weird
>>>>>>> stuff.
>>>>>>>
>>>>>>> Thanks,
>>>>>>>
>>>>>>> Ryan
>>>>>>>
>>>>>>> On Nov 15, 2015, at 10:06 PM, Ryan Huff <ryanhuff at outlook.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>> If the FROM CCM version was unrestricted, it would say "
>>>>>>> *Unrestricted*" after the version number on the "*Active Master
>>>>>>> Version"* line. If it does not say "*Unrestricted*", then it is the
>>>>>>> more common restricted version.
>>>>>>>
>>>>>>> As to your original issue, I would start with all the usual
>>>>>>> suspects. Is the FROM CCM cluster healthy to start with: DNS, NTP,
>>>>>>> replication, etc.?
>>>>>>>
>>>>>>> From CCM:
>>>>>>>
>>>>>>> 1.) #utils diagnose module validate_network
>>>>>>>         (Should see *Passed*)
>>>>>>>
>>>>>>> 2.) #utils ntp status
>>>>>>>         (Pub should be synchronized and stratum 3 or better)
>>>>>>>
>>>>>>> 3.) #utils dbreplication runtimestate
>>>>>>>         (Should see *2 - Setup Completed* for all nodes)
>>>>>>>
>>>>>>> If PCD is moving the apps to a new platform/chassis, make sure the
>>>>>>> *target* environment can reach all the same network assets as the
>>>>>>> *from* environment.
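>>>>>>> If you capture that CLI output to files, the three checks can be
>>>>>>> grepped for their healthy markers. A minimal sketch (a hypothetical
>>>>>>> helper, not part of CUCM; the sample strings are modeled on the
>>>>>>> transcripts earlier in this thread):

```shell
# health_ok: succeed if the captured CLI output contains the expected
# healthy marker string (fixed-string match).
health_ok() {
  printf '%s\n' "$1" | grep -qF "$2"
}

# Sample captures modeled on the output shown earlier in the thread.
diag_out='test - validate_network    : Passed'
repl_out='IPTCMS01        10.0.126.11     0.151   Yes     Connected       0       match   Yes         (2) Setup Completed'

health_ok "$diag_out" 'Passed'          && echo 'validate_network: ok'
health_ok "$repl_out" 'Setup Completed' && echo 'replication: ok'
```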
>>>>>>>
>>>>>>> Thanks,
>>>>>>>
>>>>>>> Ryan
>>>>>>>
>>>>>>> On Nov 15, 2015, at 9:34 PM, Jonathan Charles <jonvoip at gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>> It just says the version number...
>>>>>>>
>>>>>>> admin:show version active
>>>>>>> Active Master Version: 8.6.2.20000-2
>>>>>>> Active Version Installed Software Options:
>>>>>>> cmterm-7942_7962-sccp.9-3-1ES27-rel.cop
>>>>>>> cmterm-devicepack8.6.2.24118-1.cop
>>>>>>> ciscocm.refresh_upgrade_v1.1.cop
>>>>>>> ciscocm.ucmap_platformconfig.cop
>>>>>>> ciscocm.migrate-export-v1.12.cop
>>>>>>> admin:
>>>>>>>
>>>>>>>
>>>>>>> Jonathan
>>>>>>>
>>>>>>> On Sun, Nov 15, 2015 at 7:26 PM, Ryan Huff <ryanhuff at outlook.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Did you actually dump the logs to a serial interface (curious what
>>>>>>>> it shows)?
>>>>>>>>
>>>>>>>> On the FROM CCM, go to the CLI of the pub (or a sub) and do a "show
>>>>>>>> version active"; it will tell you if you have the unrestricted version.
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>>
>>>>>>>> Ryan
>>>>>>>>
>>>>>>>> > On Nov 15, 2015, at 7:55 PM, Jonathan Charles <jonvoip at gmail.com>
>>>>>>>> wrote:
>>>>>>>> >
>>>>>>>> > Using PCD on CUCM 10.5.2.11901 got the following error:
>>>>>>>> >
>>>>>>>> > <image.png>
>>>>>>>> >
>>>>>>>> > It seems to imply I am not matching restricted vs. unrestricted...
>>>>>>>> >
>>>>>>>> > Any easy way to find out?
>>>>>>>> >
>>>>>>>> >
>>>>>>>> >
>>>>>>>> > Jonathan
>>>>>>>> > _______________________________________________
>>>>>>>> > cisco-voip mailing list
>>>>>>>> > cisco-voip at puck.nether.net
>>>>>>>> > https://puck.nether.net/mailman/listinfo/cisco-voip
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>
>
>
>