[cisco-voip] VMTools - CUCM 10.5

Baha Akman (makman) makman at cisco.com
Thu Feb 25 10:40:19 EST 2016


Glad I was able to save at least one of you from this nugget :)

I've got 90+ CUCMs waiting to be cleaned up. Not sure I'm going to manage it, even though all of them had remote access enabled prior to blowing up.

--
Baha


On Feb 25, 2016, at 9:13 AM, Ed Leatherman <ealeatherman at gmail.com> wrote:

So yeah - we were actually getting ready to log a change-board request to do the reboots to get VMTools installed and then put SELinux back to enforcing, which would have happened while I was away on vacation, out of cell phone range :) Soooooo glad I glanced through my emails today and saw Scott's and Baha's messages!!! We're going to get a case started anyway, since he has already done the whole operation on PLM.


On Thu, Feb 25, 2016 at 7:09 AM, Baha Akman (makman) <makman at cisco.com> wrote:
I'm sorry to report that the latest VMTools update failure is not like CSCul78735, which many of you may have experienced when you upgraded to CUCM 10.0 and whose workaround you may already be familiar with.

The latest ESXi 5.5 builds, as well as the 6.0 builds, bundle a new VMTools version: 10.0.0.50046 (build-3000743), aka internal version 10240.

See https://packages.vmware.com/tools/versions for tools version mapping.
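If you want to audit which VMs are already reporting the new tools build (or would pick it up on the next auto-upgrade), something like this pyVmomi sketch can dump the per-VM tools version; the vCenter name and credentials are placeholders, not anything from this thread:

    #!/usr/bin/env python
    # Sketch: list VMware Tools version/status for every VM visible
    # from a vCenter or ESXi host. Hostname/credentials are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab shortcut; verify certs in production
    si = SmartConnect(host="vcenter.example.com", user="readonly",
                      pwd="secret", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            # toolsVersion is the internal number (10240 == 10.0.0);
            # the packages.vmware.com page above has the full mapping.
            print("%-40s %-8s %s" % (vm.name, vm.guest.toolsVersion,
                                     vm.guest.toolsVersionStatus2))
        view.Destroy()
    finally:
        Disconnect(si)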

This new VMTools has a new component built in called vmware-caf (Common Agent Framework). See its release notes here: http://pubs.vmware.com/Release_Notes/en/vmwaretools/1000/vmware-tools-1000-release-notes.html

CUCM 9.1 releases are immune to this new functionality, since their SELinux policies don't interfere with it. However, CUCM 10.X and 11.X builds explicitly block vmware-caf from functioning properly.
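On a lab guest where you have a root shell (the CUCM appliance CLI normally hides root, so treat this purely as illustration), the blocking shows up as AVC denials in the audit log; a minimal sketch, assuming the stock audit log path and that the CAF daemons have "caf" somewhere in the denial record:

    #!/usr/bin/env python
    # Sketch: pull SELinux AVC denials mentioning the VMware CAF
    # components out of the audit log. Path and match string are
    # assumptions for illustration, not from the defect notes.
    AUDIT_LOG = "/var/log/audit/audit.log"

    with open(AUDIT_LOG) as f:
        for line in f:
            if "denied" in line and "caf" in line.lower():
                print(line.rstrip())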

If you managed to follow the workaround documented in CSCul78735 (put SELinux into permissive mode, update VMTools to this new 10.0 release, then put it back to enforcing), you will run out of root disk space and virtual memory.
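Rather than discovering the leak when the node falls over, you could watch the resident memory of the CAF processes over time; a rough sketch (matching processes on "caf" in the command line is my assumption, and it needs shell access to the guest):

    #!/usr/bin/env python
    # Sketch: log combined resident memory (VmRSS) of any process whose
    # command line mentions "caf", once a minute. Ctrl-C to stop.
    import os
    import time

    def caf_rss_kb():
        total = 0
        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open("/proc/%s/cmdline" % pid) as f:
                    if "caf" not in f.read().lower():
                        continue
                with open("/proc/%s/status" % pid) as f:
                    for line in f:
                        if line.startswith("VmRSS:"):
                            total += int(line.split()[1])  # value is in kB
            except IOError:
                pass  # process exited while we were looking
        return total

    while True:
        print("%s %d kB" % (time.strftime("%H:%M:%S"), caf_rss_kb()))
        time.sleep(60)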

The new defect tracking this issue is CSCux27503, "Vmware Tools update on ESXi 6.0 is failing". Please point TAC at it, as the word is still getting around. I'm not sure this will be the ultimate defect that delivers the fix, but track this one for now.

For those of you who have not yet experienced this serious issue, my suggestion would be to hold off on updating your VMTools after you have patched your ESXi 5.5 or 6.0 hosts to the latest builds. If you have already put SELinux into permissive mode as a workaround to get VMTools updated, DO NOT put it back into enforcing mode; contact TAC. If you have already put it back into enforcing mode after upgrading VMTools to 10.0, contact TAC immediately, as you will certainly run out of memory and disk space soon.

Hate to report this here, but this one is a doozy.


--
Baha


On Feb 25, 2016, at 6:46 AM, Ed Leatherman <ealeatherman at gmail.com> wrote:

We just patched vSphere this past weekend and ran into the SELinux bug with VMTools, and we have also noticed some weird behavior with memory and disk space, though on Unity Connection. I'll check in with the guy who was handling it and see if there was any other strangeness; this doesn't give me the warm and fuzzies.

On Wed, Feb 24, 2016 at 9:13 PM, Hughes, Scott GRE-MG <SHughes at grenergy.com> wrote:
Has anyone dealt with bug ID CSCul78735 lately?
We just installed our latest round of VMware 5.5 patches, and along with them came new VMTools. The usual (automatic) upgrade method did not work: the VMware Tools status went from 'out of date' to 'not installed'.

We were able to get the tools installed on our two subscribers by switching SELinux to permissive mode, as the bug describes, and then switched them back to enforcing. Over the next two days the active partition filled up, until the subscriber died a horrible death and had to be rebuilt.

I believe /var and /etc/selinux held most of the runaway logs. TAC couldn't even create a root support account because the partition was full.
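If TAC does get a remote account onto the box before the partition fills completely, something like this can show which subdirectories are eating the space; /var and /etc/selinux are just the starting points named above:

    #!/usr/bin/env python
    # Sketch: rough per-directory usage under the suspect paths.
    import os

    def dir_size(path):
        total = 0
        for root, _dirs, files in os.walk(path, onerror=lambda e: None):
            for name in files:
                try:
                    total += os.lstat(os.path.join(root, name)).st_size
                except OSError:
                    pass  # file vanished mid-walk
        return total

    for top in ("/var", "/etc/selinux"):
        for entry in sorted(os.listdir(top)):
            p = os.path.join(top, entry)
            if os.path.isdir(p):
                print("%10.1f MB  %s" % (dir_size(p) / 1048576.0, p))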

Just a caution to those who patch VMware regularly: make sure you have alerting on filesystem capacity (a minimal check is sketched below). Ours were 2500-user nodes, so normally there's only ~3 GB free on the active partition.
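For that alerting, even a crude cron-driven check is better than nothing; a sketch, with the threshold and mount points as placeholders (RTMT's LogPartition low/high water mark alerts cover the CUCM side natively, if I recall correctly):

    #!/usr/bin/env python
    # Sketch: complain when a filesystem crosses a usage threshold.
    # Wire the output into mail/syslog/whatever your monitoring eats.
    import os

    THRESHOLD = 0.90  # alert at 90% used; tune to taste

    for mount in ("/", "/var", "/common"):
        try:
            st = os.statvfs(mount)
        except OSError:
            continue  # mount point absent on this box
        if st.f_blocks == 0:
            continue
        used = 1.0 - float(st.f_bavail) / st.f_blocks
        if used >= THRESHOLD:
            print("ALERT: %s is %.0f%% full" % (mount, used * 100))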

I would be very interested if anyone has further info on this bug.










--
Ed Leatherman




--
Ed Leatherman


