SSL health checks are *extremely* expensive, and I have run ServerIrons out of CPU doing them before. I suggested to Brocade many years ago (back when they were Foundry) that they not generate new RSA keys for every health check: give an option where the same key can be re-used for health checks only. That would greatly reduce the load. I am trying to remember how I remedied the situation, but it has been years. I think I ended up just doing a regular HTTP health check, since that would tell me whether the daemon was up and running on the server. I didn't need the check to verify that the remote host could generate keys; the same process listened on both 80 and 443, so I just checked 80 and assumed 443 was working too.<br>
<br>It has been a while, though. Or maybe I just did a TCP check to the port to make sure it was listening; I don't remember. But if they had an option not to generate new keys for each health check, it would greatly reduce the load of the checks. That suggestion has probably long since gone down the memory hole, though.<br>
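<br>For what it's worth, the lighter-weight check would look something like the below. This is untested and from memory, modeled on the healthck block in Drew's mail further down, so treat the exact keywords as illustrative; they may differ by ServerIron release:<br>
<br>
! Plain-HTTP L7 check: no TLS handshake, so no per-probe RSA work<br>
healthck node-http tcp<br>
&nbsp;dest-ip 222.222.222.222<br>
&nbsp;port http<br>
&nbsp;protocol http<br>
&nbsp;protocol http url "GET /test/gif.gif"<br>
&nbsp;l7-check<br>
<br>
! Or cheaper still, a bare TCP connect to 443, which only verifies<br>
! that something is listening on the port<br>
healthck node-ssl-tcp tcp<br>
&nbsp;dest-ip 222.222.222.222<br>
&nbsp;port ssl<br>
<br>
The trade-off is that neither variant proves the TLS stack itself is healthy; you are betting that if the daemon answers on 80 (or accepts the connect on 443), the SSL side is fine too.<br>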
<br>George<br><br><br><br><div class="gmail_quote">On Sat, Feb 19, 2011 at 11:12 AM, Drew Weaver <span dir="ltr"><<a href="mailto:drew.weaver@thenap.com">drew.weaver@thenap.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
Howdy again,<br>
<br>
I hate replying to my own messages but I have made progress =)<br>
<br>
It seems that the pings are failing when the health checks are running.<br>
<br>
Are health checks really resource-intensive?<br>
<br>
(especially ones like this):<br>
<br>
healthck node-ssl tcp<br>
dest-ip 222.222.222.222<br>
port ssl<br>
protocol ssl<br>
protocol ssl url "GET /test/gif.gif"<br>
protocol ssl use-complete<br>
l7-check<br>
<br>
I noticed that with the above configuration, pings to the switch fail quite regularly.<br>
<br>
If I add 'interval 30' to the configuration it seems like pings only fail once every 30 seconds.<br>
<br>
The goal is for them not to fail at all.<br>
<br>
Anyone seen this before, know how to fix it?<br>
<br>
Thanks,<br>
-Drew<br>
<br>
<br>
From: <a href="mailto:foundry-nsp-bounces@puck.nether.net">foundry-nsp-bounces@puck.nether.net</a> [mailto:<a href="mailto:foundry-nsp-bounces@puck.nether.net">foundry-nsp-bounces@puck.nether.net</a>] On Behalf Of Drew Weaver<br>
Sent: Thursday, February 17, 2011 11:41 AM<br>
To: foundry-nsp<br>
Subject: [f-nsp] ServerIron XL hard coded ICMP limit<br>
<div><div></div><div class="h5"><br>
Does anyone know if there is a hard-coded ICMP limit in a ServerIron XL, both for packets directed at the system and for packets passed through it?<br>
<br>
I am having the weirdest issue: ping monitoring of a ServerIron XL, and of anything directly connected to it, gets disrupted even though there is no apparent reason for it on the network.<br>
<br>
It is not configured (by me) to have any sort of rate-limit.<br>
<br>
Anyone have any thoughts?<br>
<br>
-Drew<br>
<br>
<br>
</div></div>_______________________________________________<br>
foundry-nsp mailing list<br>
<a href="mailto:foundry-nsp@puck.nether.net">foundry-nsp@puck.nether.net</a><br>
<a href="http://puck.nether.net/mailman/listinfo/foundry-nsp" target="_blank">http://puck.nether.net/mailman/listinfo/foundry-nsp</a><br>
</blockquote></div><br>