[f-nsp] multiple service failover

Manu Chao linux.yahoo at gmail.com
Wed Jul 15 11:08:31 EDT 2009


I am not working for any vendor, but what you need is a monitor & dependency
feature, which is already available on BigIP.

On Wed, Jul 15, 2009 at 4:57 PM, Oliver Adam <oadam at madao.de> wrote:

> Do you have any traces from the time the problem occurred? The config itself
> seems to be fine - testing this quickly on a 4G results in log messages like:
>
> Dynamic Log Buffer (50 lines):
> Jul 15 16:56:34:N:L4 server 192.168.9.101 rs101 port 80 is down due to
> healthcheck
> Jul 15 16:56:34:C:Real server rs101 track group 80 443  state changed from
> ACTIVE to DOWN
> Jul 15 16:49:45:N:L4 server 192.168.9.101 rs101 port 80 is up
> Jul 15 16:49:45:C:Real server rs101 track group 80 443  state changed from
> DOWN to ACTIVE
>
> The track group is working as expected. Is it possible that you had problems
> with sessions which were already open at the time the problem occurred? The
> SI does not hard-cut existing sessions by default. Have a look at
> "reset-on-port-fail" as an option in this area. On top of that I am confused
> about why you are using healthck's - why don't you do it this way:
>
>
>  server real server1 192.168.0.60
>> source-nat access-list 1
>> port http
>> port http url "GET /status.html"
>> port http content-match Content_Match
>> port ssl
>> port ssl keepalive
>> port ssl l4-check-only
>> port 8080
>> port 9000
>> port 4443
>> hc-track-group 80 443
>>
>
>
> No healthck needed - much shorter and simpler to understand - same
> behaviour.
>
> Please ensure you do have "server no-fast-bringup" in the config - this
> ensures the health check only succeeds once everything up to L7 is working.
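>
> For illustration, a minimal sketch of how the two pieces fit together - this
> exact snippet is untested, and the real-server part is just a trimmed copy
> of your own config:
>
>  server no-fast-bringup
>
>  server real server1 192.168.0.60
>   port http
>   port ssl
>   hc-track-group 80 443
>
> With no-fast-bringup set (a global parameter, if I remember right) the SI
> only brings a port up once its full check, up to L7, has passed.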
>
> Something to look at in the future: http://community.brocade.com/adi
>
> Best regards,
>
> Oliver
>
>
> At 15:16 15.07.2009, David Miller wrote:
>
>> Oliver Adam wrote:
>>
>>> I am not sure why you would want to solve this problem with another
>>> vendor's box. I would suggest looking at the features of the 4G. There is
>>> something called health check track groups.
>>>
>>> Out of the documentation:
>>>
>>> ServerIron(config)# server real r1 1.1.1.1
>>> ServerIron(config-real-server-r1)# port 80
>>> ServerIron(config-real-server-r1)# port ftp
>>> ServerIron(config-real-server-r1)# port dns
>>> ServerIron(config-rsr1)# hc-track-group 80 21 53
>>>
>>> The ServerIron now tracks health status for ports 80, 21, and 53. If any
>>> of these ports is down, the combined health is marked as failed and the
>>> ServerIron will not use these ports for load balancing traffic.
>>>
>>> You would have to combine port 80 and port 443 in a health check track
>>> group.
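>>>
>>> As a rough sketch (the real-server name and IP are just placeholders),
>>> that would look like:
>>>
>>> ServerIron(config)# server real r1 1.1.1.1
>>> ServerIron(config-real-server-r1)# port http
>>> ServerIron(config-real-server-r1)# port ssl
>>> ServerIron(config-real-server-r1)# hc-track-group 80 443
>>>
>>> If either the HTTP or the SSL check fails, both ports are taken out of
>>> rotation on that real server.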
>>>
>>> Isn't that what you are looking for?
>>>
>>
>> Ahh, now that's just what I was looking for.  I already have that though:
>>
>>
>> healthck Server1_HC tcp
>>  dest-ip 192.168.0.60
>>  port http
>>  protocol http
>>  protocol http url "GET /status.html"
>>  protocol http content-match Content_Match
>>  l7-check
>>
>>
>> server real server1 192.168.0.60
>> source-nat access-list 1
>> port http
>> port http healthck Server1_HC
>> port http url "HEAD /"
>> port ssl
>> port ssl keepalive
>> port ssl l4-check-only
>> port 8080
>> port 9000
>> port 4443
>> hc-track-group 80 443
>>
>>
>> server virtual vserver 1.2.3.4
>> sym-priority 110
>> port http
>> port http lb-pri-servers backup-stay-active
>> port ssl sticky
>> port ssl ssl-terminate Action
>> port ssl lb-pri-servers backup-stay-active
>> bind http server1 8080 real-port http server2 8080 real-port http
>> bind ssl server1 4443 real-port ssl server2 4443 real-port ssl
>>
>>
>> However, we recently ran into the situation where server1 was responding
>> very slowly and http failed over to server2 but ssl remained on server1.
>>
>>
>> The 8080 and 4443 are so we can access the real server for testing before
>> binding it to the LB VIP.  Are they what's causing the problem here?  Should
>> I have hc-track-group 80 443 8080 4443?
>>
>> Thanks!  I love the S/N ratio on this list!
>>
>> --- David
>>
>
>
>