[cisco-voip] Not supported I'm sure..... but what do you think?

Erick Wellnitz ewellnitzvoip at gmail.com
Sat Oct 29 20:06:59 EDT 2016


You will lose report data for the time between the upgrade and the switch
version, and again for the time since the switch if you roll back using switch
version. Depending on that duration it could be a big deal.

I recently had to find a way to retrieve that report data for a customer.
Thankfully they are diligent about backups.

On Oct 27, 2016 2:48 PM, "Anthony Holloway" <avholloway+cisco-voip at gmail.com>
wrote:

> Thanks for asking Ryan.
>
> First of all, I feel that the ideas expressed here in this thread are a
> direct result of the failure of the current upgrade method.  I doubt I'm
> alone on this, but it's not like we have a night shift team.  I work during
> the day planning the upgrade and attending project meetings, and then go
> right into working through the night performing the actual upgrade.  If I
> could stage the new version during the day, and limit my night time work to
> 30 minutes, that would be wonderful.
>
> Back to the Cisco active/inactive partition idea, which is the main focus
> of this reply: my first complaint is that this process is only possible
> with one type of upgrade, the SU (formerly known as L2; and yes, it gets
> confusing now that "SU" also means Service Update, i.e., SU3.  Thanks,
> Cisco.)
>
> [image: Inline image 2]
> [image: Inline image 3]
>
> My second complaint is that, for some products (I haven't inventoried all
> of them, but I can at least speak for UCCX 10.6(1)SU1 to 10.6(1)SU2,
> because it dropped a call center mid-day just two days ago), you cannot even
> perform an SU without it impacting production.
>
> [image: Inline image 4]
>
> With those two points alone, you might as well not even have this
> feature.  It's inconsistent in its implementation, and saves you very
> little time.  That's why people are thinking of creative ways around it.
>
> But I will continue...
>
> Did you know that it takes UCCX up to 90 minutes in some cases to actually
> switch versions, and then you have to wait another 30 minutes for all of
> the services to start (I'm looking at you, Cisco Finesse Tomcat)?  If I
> have to flip two UCCXs over, that's 4 hours right there.  I might as well
> do what everyone else is doing, and spare myself the late-night work.
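>
> For anyone following along, that flip is all driven from the platform CLI.  A
> rough sketch of the commands involved (from memory, so double-check against
> your own build):
>
>   show version active              (version running on the active partition)
>   show version inactive            (version staged on the inactive partition)
>   utils system switch-version      (reboots the node onto the inactive partition)
>
> It's that last command that can take 90+ minutes on UCCX before the services
> even start coming back up.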
>
> I tried to find a source to cite for this next comment, but I was unable,
> and now I'm unwilling to look harder, but I'm pretty sure Cisco has a CYA
> out there which states that you must perform the SU in a maintenance
> window, due to the possibility that it will impact CPU/Disk/etc., and thus
> impact calls.  I see the slides above show that you can perform the upgrade
> "while the system is functioning," but that's not what I recall reading in
> the docs.  If I'm wrong in recalling that, then I'm sorry for making it
> up...but I think I'm recalling correctly.  I just can't find the source atm.
>
> Let's pretend I'm wrong though, and Cisco says you can actually perform
> the upgrade during production.  Since the configuration and reporting data
> is tied to the partition, I have to be willing to lose all reporting data
> generated between kicking off the upgrade and when I actually switch the
> version.  Or, in the case of users changing their SNR settings, CFA, etc.,
> they might lose those changes too.  It's not enough to just say "implement a change
> freeze."
>
> Then, the fact that you can roll back an upgrade to the previous release
> is a laughable joke, because the configuration data is sitting there
> getting stale, while the business moves forward.  If you're going to roll
> back, it would have to be in the same maintenance window, else you stand to
> lose configuration and reporting data.  Reporting data being the more
> critical of the two.
>
> Cisco should have abstracted the configuration and reporting data from the
> application logic.  I would be more forgiving if we were talking about, say,
> upgrading from 7.x to 10.x, but for SU1 to SU2 it's ridiculous.
>
> You mentioned testing the new version first, and while I have seen some
> people want to test, say, for example, Finesse changes, they could
> just use a LAB/NPS/dCloud for that purpose.  I wouldn't say this is a major
> concern for me, but it might be for others.
>
> I could rip apart the rest of the upgrade process too if you'd like
> (readmes, release notes, upgrade guides, COP files, defects), but I feel like
> I'm already hijacking this thread.
>
> Again, thanks for asking.  I appreciate you caring enough to ask.
>
>
> On Thu, Oct 27, 2016 at 12:22 PM, Ryan Ratliff (rratliff) <
> rratliff at cisco.com> wrote:
>
>> Honest question, what exactly is it about the current implementation that
>> fails to deliver on this?
>>
>> Is it something in the design of the upgrade process?
>>
>> Is it that the upgrade takes too long to be done during any reasonable
>> maintenance window?
>>
>> Is it that you have to test the new version before you roll it into
>> production?
>>
>> Is it <your answer goes here>?
>>
>> -Ryan
>>
>> On Oct 27, 2016, at 12:02 PM, Anthony Holloway <
>> avholloway+cisco-voip at gmail.com> wrote:
>>
>> If only there was an upgrade process wherein you install the new version
>> to an inactive partition, and then could switch to the new version when
>> you're ready.  /sarcasm
>>
>> But seriously though, everyone in this thread is essentially coming up
>> with their own clever way of replicating the promise Cisco failed to
>> deliver on, which is performing your upgrades during production on the
>> inactive partition and then switching versions in a maintenance window.  If
>> they had only held themselves to a higher standard, we wouldn't need
>> such a complex alternate solution.
>>
>> On Tue, Oct 25, 2016 at 2:45 PM, Ryan Huff <ryanhuff at outlook.com> wrote:
>>
>>> Matthew is correct, copying is listed as "Supported with Caveats" at
>>> http://docwiki.cisco.com/wiki/Unified_Communications_VMware_Requirements;
>>> the caveat being found at
>>> http://docwiki.cisco.com/wiki/Unified_Communications_VMware_Requirements#Copy_Virtual_Machine
>>>
>>>
>>> The VM needs to be powered down first and the resulting VM will have a
>>> different MAC address (unless it was originally manually specified); so
>>> you'll need to rehost the PLM if it is co-res to any VM that you copy.
>>>
>>>
>>> Where I have seen folks get into trouble with this is when a subscriber
>>> is copied, and the user mistakenly thinks that by changing the IP and
>>> hostname it becomes unique and can be added to the cluster as a new
>>> subscriber. I have also seen users make a copy of a publisher and change
>>> the network details of the copy, thinking it makes a unique cluster, and
>>> then wonder why things like ILS won't work between the two clusters (and it
>>> isn't just because the cluster IDs are the same).
>>>
>>>
>>> Having said all of that, I would NEVER do this in production ... maybe
>>> that is just me being cautious or old school. Even
>>> without changing network details on the copy, I have seen this cause issues
>>> with Affinity. At the very least, if you travel this path, I would make sure
>>> that the copy runs on the same host and even in the same datastore.
>>>
>>>
>>> === An alternative path ===
>>>
>>>
>>> Admittedly, this path is longer and there is a little more work involved,
>>> but it is the safer path, IMO, and is what I would trust for a production
>>> scenario.
>>>
>>>
>>> 1.) Create a private port group on the host. If the cluster is on
>>> multiple hosts, span the port group through a connecting network to the
>>> other hosts, but DO NOT create an SVI anywhere in the topology for that
>>> DOT1Q tag (remembering to add the DOT1Q tag on any networking devices between
>>> the two hosts and to allow it on any trunks between the two hosts).
>>>
>>>
>>> 2.) Upload Cisco's CSR1000V to the host. If you're not familiar with the
>>> product, it is at its core a virtual router with three interfaces by
>>> default, and even unlicensed, out of the box, it is more than enough to
>>> replicate DNS/NTP on your private network, which is all you'll need. Assign
>>> the private port group to the network adapters and configure DNS and NTP
>>> (master 2) on this virtual router; a rough config sketch follows after
>>> these steps.
>>>
>>>
>>> 3.) Build out a replica of your production UC cluster on the private
>>> network.
>>>
>>>
>>> 4.) Take a DRS backup of the production UC apps, then put your SFTP server
>>> on the private network and do a DRS restore to the private UC apps.
>>>
>>>
>>> 5.) Upgrade the private UC apps and switch your port group labels on the
>>> production/private UC apps during a maintenance window.
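>>>
>>> To put step 2 in concrete terms, a minimal CSR1000V config for the private
>>> segment would look roughly like this (the interface name, hostnames and
>>> addresses here are only examples; use whatever matches your restored cluster):
>>>
>>>   interface GigabitEthernet1
>>>    description Private UC staging segment
>>>    ip address 10.10.10.1 255.255.255.0
>>>    no shutdown
>>>   !
>>>   ! serve NTP at stratum 2 from the router's local clock
>>>   ntp master 2
>>>   !
>>>   ! answer DNS on the private segment, with a static A record per UC node
>>>   ip dns server
>>>   ip host cucm-pub.example.local 10.10.10.10
>>>   ip host cucm-sub.example.local 10.10.10.11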
>>>
>>>
>>> Thanks,
>>>
>>>
>>> Ryan
>>>
>>>
>>>
>>>
>>> ------------------------------
>>> *From:* cisco-voip <cisco-voip-bounces at puck.nether.net> on behalf of
>>> Matthew Loraditch <MLoraditch at heliontechnologies.com>
>>> *Sent:* Tuesday, October 25, 2016 3:01 PM
>>> *To:* Tommy Schlotterer; Scott Voll; cisco-voip at puck.nether.net
>>>
>>> *Subject:* Re: [cisco-voip] Not supported I'm sure..... but what do you
>>> think?
>>>
>>> Honestly, I can't see any reason it wouldn't be supported. Offline
>>> cloning is allowed for migration/backup purposes. I actually did the NAT
>>> thing to do my BE5K to BE6K conversions and kept both systems online.
>>>
>>>
>>> The only thing I can think of is ITLs: does an upgrade do anything
>>> that would force you to reset phones to go back to the old servers if
>>> there are issues? I don't think so, but I'm not certain.
>>>
>>>
>>> Matthew G. Loraditch – CCNP-Voice, CCNA-R&S, CCDA
>>> Network Engineer
>>> Direct Voice: 443.541.1518
>>>
>>> Facebook <https://www.facebook.com/heliontech?ref=hl> | Twitter
>>> <https://twitter.com/HelionTech> | LinkedIn
>>> <https://www.linkedin.com/company/helion-technologies?trk=top_nav_home>
>>> | G+ <https://plus.google.com/+Heliontechnologies/posts>
>>>
>>>
>>> *From:* cisco-voip [mailto:cisco-voip-bounces at puck.nether.net] *On
>>> Behalf Of *Tommy Schlotterer
>>> *Sent:* Tuesday, October 25, 2016 2:49 PM
>>> *To:* Scott Voll <svoll.voip at gmail.com>; cisco-voip at puck.nether.net
>>> *Subject:* Re: [cisco-voip] Not supported I'm sure..... but what do you
>>> think?
>>>
>>>
>>> I do a similar, but supported, process. I take DRS backups and then
>>> restore them on servers in a sandbox VLAN. Works well. Make sure you check
>>> your phone firmware and upgrade to the current version before the cutover,
>>> or all your phones will have to upgrade on cutover.
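>>>
>>> If you prefer watching the backup/restore from an SSH session instead of
>>> the DRS web pages, the OS admin CLI has status commands along these lines
>>> (from memory; use "?" on your version to confirm the exact syntax):
>>>
>>>   utils disaster_recovery device list       (backup devices defined on the publisher)
>>>   utils disaster_recovery status backup     (progress of a running backup)
>>>   utils disaster_recovery status restore    (progress of the restore on the sandbox copies)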
>>>
>>>
>>> Also make sure you don't change hostnames/IP addresses in the sandbox, as
>>> that will cause your ITLs to regenerate and cause issues with phone
>>> configuration changes after cutover.
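>>>
>>> If you do somehow end up with ITL problems after the cutover anyway, CUCM
>>> 10.x and later have "utils itl reset localkey" on the publisher CLI as a
>>> recovery option, though I would treat that as a last resort rather than
>>> part of the plan.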
>>>
>>>
>>> Thanks
>>>
>>> Tommy
>>>
>>>
>>> *Tommy Schlotterer | Systems Engineer*
>>> *Presidio | www.presidio.com <http://www.presidio.com/>*
>>> *20 N. Saint Clair, 3rd Floor, Toledo, OH 43604*
>>> *D: 419.214.1415 | C: 419.706.0259 | tschlotterer at presidio.com*
>>>
>>>
>>> *From:* cisco-voip [mailto:cisco-voip-bounces at puck.nether.net
>>> <cisco-voip-bounces at puck.nether.net>] *On Behalf Of *Scott Voll
>>> *Sent:* Tuesday, October 25, 2016 2:43 PM
>>> *To:* cisco-voip at puck.nether.net
>>> *Subject:* [cisco-voip] Not supported I'm sure..... but what do you
>>> think?
>>>
>>>
>>> So my co-worker and I are thinking about upgrades.  We are currently on
>>> the 10.5 train and thinking about the 11.5 train.
>>>
>>>
>>> What would be your thoughts about taking a clone of every VM (CM, UC,
>>> UCCX, CER, PLM),
>>>
>>>
>>> placing them on another VLAN with the same IPs, and NATing them as they go
>>> onto your network so they have access to NTP, DNS, AD, etc.
>>>
>>>
>>> do your upgrade on the clones.
>>>
>>>
>>> Then, in VMware, shut down the originals and change the VLAN (on the
>>> clones) back to the production VLAN for your voice cluster.
>>>
>>>
>>> It would be like a telco slash cut: a 10-minute outage as you move from
>>> one version to the other.
>>>
>>>
>>> Thoughts?
>>>
>>>
>>> Scott
>>>
>>>
>>>
>>>
>>>

