[cisco-voip] CUCM Cluster Expansion

Charles Goldsmith w at woka.us
Thu Feb 13 20:31:09 EST 2020


Agreed 100% on this, unless you are on BE6K hardware.  Prior to the M5
hardware it was cut and dried: if you had > 2.5 GHz processors, you could use
the 7500-user OVA or larger with no problem.  The 1000-user OVA and smaller
can run on 2.0 - 2.4 GHz, and most of the BE6K gear came with 2.4 GHz CPUs.
There are some other restrictions on CPU types, but in the enterprise I
haven't seen much that didn't fit, other than clock speed.

On some of the BE6K M5 hardware, I'm seeing other CPUs in use now (like
2.2 GHz).  Cisco also has some additional criteria if you don't want to run
2.5 GHz or 1:1 vCPU-to-core, but it's on an approval basis.

Read up here
https://www.cisco.com/c/dam/en/us/td/docs/voice_ip_comm/uc_system/virtualization/collaboration-virtualization-hardware.html

Basically, if your hardware supports it, go with the 7500-user OVA; it makes
your life easier down the road.
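The rule of thumb above could be sketched as follows. This is purely illustrative: the thresholds and the tier names are assumptions drawn from this thread, not Cisco's official virtualization policy, which has additional CPU-model restrictions.

```python
# Illustrative sketch of the OVA-sizing rule of thumb from this thread.
# Thresholds are assumptions based on the discussion above, not an
# official Cisco sizing policy; always check the virtualization docs.

def suggest_cucm_ova(cpu_ghz: float) -> str:
    """Suggest a CUCM OVA tier from the host's physical CPU clock speed."""
    if cpu_ghz >= 2.5:
        # Fast enough for the 7500-user (or larger) OVA.
        return "7500-user"
    elif cpu_ghz >= 2.0:
        # Slower CPUs (common on BE6K, e.g. 2.4 GHz) are limited to the
        # smaller templates without Cisco's special-approval process.
        return "1000-user"
    else:
        return "unsupported (requires Cisco approval)"

print(suggest_cucm_ova(2.6))  # -> 7500-user
print(suggest_cucm_ova(2.4))  # -> 1000-user
```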



On Thu, Feb 13, 2020 at 5:40 PM NateCCIE <nateccie at gmail.com> wrote:

> I always do the 7.5k CUCM size.  I hate single-CPU CUCM, RAM is usually
> not a problem, and I’d rather have the 110GB disk because upgrades almost
> never work on the 80GB without clearing some space.  Even 110GB has become
> a problem lately.
>
> Sent from my iPhone
>
> On Feb 13, 2020, at 3:29 PM, Ryan Huff <ryanhuff at outlook.com> wrote:
>
> 
> It's for 11.x, but I've found this helpful:
> https://www.cisco.com/web/software/283088407/126036/cucm-11.0.ova.readme.txt
>
> Thanks,
>
> Ryan
> ------------------------------
> *From:* Matthew Loraditch <MLoraditch at heliontechnologies.com>
> *Sent:* Thursday, February 13, 2020 5:24 PM
> *To:* Ryan Huff <ryanhuff at outlook.com>; cisco-voip at puck.nether.net <
> cisco-voip at puck.nether.net>
> *Subject:* RE: CUCM Cluster Expansion
>
>
> Yeah, I’m just trying to understand (as I read the OVF file) what the
> actual difference is between the 1000- and 2500-user OVAs. I seem to be
> missing something (or maybe not). CPU is actually one fewer to start, but
> same reservation, same RAM, same HDD.
>
>
>
> Matthew Loraditch​
> Sr. Network Engineer
> p: 443.541.1518
> w: www.heliontechnologies.com
>  |  e: MLoraditch at heliontechnologies.com
>
> *From:* Ryan Huff <ryanhuff at outlook.com>
> *Sent:* Thursday, February 13, 2020 5:21 PM
> *To:* Matthew Loraditch <MLoraditch at heliontechnologies.com>;
> cisco-voip at puck.nether.net
> *Subject:* Re: CUCM Cluster Expansion
>
>
>
> [EXTERNAL]
>
>
>
> I wouldn't see a reason not to just up-size the two nodes you have now to
> the 2.5k OVA (use 2 vCPU on each node). For the *15 pieces of flair*, I'd
> then add a third 2.5k OVA without the CCM service enabled, run TFTP, etc.,
> on it, and give the pub a break.
>
>
>
> -Ryan
>
>
> ------------------------------
>
> *From:* cisco-voip <cisco-voip-bounces at puck.nether.net> on behalf of
> Matthew Loraditch <MLoraditch at heliontechnologies.com>
> *Sent:* Thursday, February 13, 2020 5:10 PM
> *To:* cisco-voip at puck.nether.net <cisco-voip at puck.nether.net>
> *Subject:* [cisco-voip] CUCM Cluster Expansion
>
>
>
> One of my biggest customers is experiencing issues that appear to be
> related to resource utilization. I’ve never had a customer who needed more
> than a 2 node 1000 user cluster.
>
>
>
> They are getting close to some of the capacity levels listed in the sizing
> guides.
>
>
>
> I’m looking for some opinions on what the best way to deal with this. I
> have the hardware capacity for either method.
>
>
>
> Add a Third 1000 user Subscriber and turn off call processing and tftp on
> the Pub?
>
>
>
> Rebuild both existing servers to 2500 user OVAs?
>
>
>
> Add a third and do the rebuild also?
>
>
>
> Can I just make the existing servers the 2500 capacity level? I actually
> don’t understand the difference between the 2500- and 1000-user OVAs; the
> 2500 appears to actually have less capacity by default (one fewer CPU). So
> go to 7500?
>
>
>
> I’d appreciate any opinions out there. Going to be doing some reading over
> the next few days to try and figure this out.
>
>
>
> Thanks all!
>
>
>
> Matthew Loraditch
>
> Sr. Network Engineer
>
> p: 443.541.1518
>
> w: www.heliontechnologies.com
>
> e: MLoraditch at heliontechnologies.com
>
>
> _______________________________________________
> cisco-voip mailing list
> cisco-voip at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-voip
>

