[f-nsp] Bouncing L4 Health Checks
George Bonser
george at shorelink.com
Thu Jun 12 03:04:16 EDT 2003
I have seen this problem when the default document doesn't exist and one or
more of the servers in the farm is set to not allow display of the index.
Take a browser and try to hit the default document at the IP address, like this:
http://<ip-address>/
If you get a "Forbidden" response, that may be the cause of the problem.
In your GSLB setup, you might try:
telnet@hostname#conf ter
telnet@hostname(config)#gslb dns zone <zone>
telnet@hostname(config)#host-info <hostname> http status-code 200 500
and see if that clears your problem.
What that does is tell the health check to consider any response for the
default document with a status code greater than or equal to 200 and less
than or equal to 500 as OK. I might not RUN with that for long, but if it
clears the problem, it suggests that you have one server that is configured
slightly differently and is clobbering the health check from time to time.
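A quick way to find the misbehaving server is to probe each real server's default document directly and compare status codes; a minimal sketch in Python (the IP addresses in the usage comment are placeholders, not addresses from this thread):

```python
import http.client

def default_doc_status(host, port=80, timeout=5):
    """Fetch the default document ("/") and return the HTTP status code."""
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("GET", "/")
        return conn.getresponse().status
    finally:
        conn.close()

# Example usage against your own farm (placeholder addresses):
#   for ip in ("10.0.0.2", "10.0.0.3"):
#       print(ip, default_doc_status(ip))
```

A server answering 403 "Forbidden" here would fail a strict health check but still fall inside the relaxed 200-500 range.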
On Wed, 11 Jun 2003, Ethan Burnside wrote:
> Greetings,
>
> I've been using a ServerIron XL for SLB and GSLB and have been
> seeing the health checks bounce up and down for quite some time now,
> similar to the following:
>
> 0 days 1:7:52 notification L4 server 206.163.128.131 front-01-800mcmahan port 110 is up
> 0 days 1:7:52 notification L4 server 206.163.128.131 front-01-800mcmahan port 110 is down due to healthcheck
> 0 days 1:7:46 notification L4 server 206.163.128.131 front-01-800mcmahan port 80 is up
> 0 days 1:7:46 notification L4 server 206.163.128.131 front-01-800mcmahan port 80 is down due to healthcheck
> 0 days 1:7:32 notification L4 server 206.163.128.131 front-01-800mcmahan port 110 is up
> 0 days 1:7:32 notification L4 server 206.163.128.131 front-01-800mcmahan port 110 is down due to healthcheck
> 0 days 1:7:31 notification L4 server 206.163.128.131 front-01-800mcmahan port 80 is up
> 0 days 1:7:31 notification L4 server 206.163.128.131 front-01-800mcmahan port 80 is down due to healthcheck
>
> The server itself hasn't really seen any interruptions in service.
> I can connect directly to it over and over without trouble. The logs
> actually look similar for all of the hosts on the ServerIron; it's not
> limited to a single host. All of the hosts are directly connected to
> the ServerIron.
>
> I see the same behavior under both of the following images:
>
> Compressed Pri Code size = 1724176, Version 07.3.06T12
> Compressed Sec Code size = 1873161, Version 07.1.21T12 (SLB07121.bin)
>
> If I disable the L4 health checks, it seems to decide not to do the
> L7 checks. The status remains "active" seemingly no matter what I do
> (shut down apache, etc.) until I shut down the server, at which time it
> changes to "enabled". (I assume because of the failure of the L3
> check.) I'd really like to use the L4 checks anyway; it's just that
> this "flapping" is causing all kinds of problems with the GSLB stuff.
> We're using the GSLB for a backup/failover "we're working on it" error
> page and to avoid "cannot connect" errors with SMTP, POP3, etc. But
> with the L4 checks failing, we're seeing people end up on the backup
> despite the primary being fully accessible. I suspect they get the
> backup when the L4 checks on the primary fail simultaneously for both
> of the SLB machines.
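To rule out the servers themselves, the same kind of plain TCP connect an L4 health check performs can be repeated from another host and the failures counted; a minimal sketch in Python (the address in the usage comment is a placeholder):

```python
import socket
import time

def l4_probe(host, port, attempts=10, timeout=3, interval=0.5):
    """Count failed TCP connects out of `attempts` (an L4 check is
    essentially a TCP handshake: connect, then close)."""
    failures = 0
    for _ in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                pass
        except OSError:
            failures += 1
        time.sleep(interval)
    return failures

# Example usage against a real server (placeholder address):
#   print("failed connects:", l4_probe("10.0.0.2", 80))
```

If this never fails while the ServerIron keeps marking the port down, the flapping is on the health-check side rather than the server.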
>
> TYIA!
>
> Cheers,
>
> ~Ethan B.
>
>
> --
> --------------------------
> Ethan Burnside
> Kattare Internet Services
> http://www.kattare.com
> --------------------------
>
>
>
> Quoting Brent Van Dussen <vandusb at attens.com>:
>
> > You'll need to keep the ServerIron and the customer's webservers in the
> > same L2 domain. If the webservers and the ServerIron are all part of the
> > same customer installation, I don't see why it has to be separated out
> > into VLANs.
> >
> > DSR will do everything else that you need it to; just remember that
> > you'll have to configure loopbacks on each of the real servers.
> >
> > If the real servers are in a different subnet than the ServerIron, you
> > can use the source-ip, or just put both subnets on the upstream L3
> > device and the ServerIron will route health checks up to the router and
> > back down to the real servers.
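The loopback step Brent mentions can be sketched for a Linux real server as follows (the VIP value, interface, and sysctl settings are assumptions for a modern Linux box, not commands from this thread):

```shell
# Sketch of the DSR loopback setup on a Linux real server.
# The virtual server address is a placeholder; substitute your own VIP.
VIP=192.0.2.85/32

# Put the VIP on the loopback so the server accepts DSR traffic
# addressed to the virtual IP.
ip addr add "$VIP" dev lo

# Keep the server from answering ARP for the VIP on its real interface,
# so only the load balancer owns the VIP on the wire.
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```

This is a config fragment requiring root; the equivalent on other operating systems differs.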
> >
> > -Brent
> >
> >
> > At 10:36 AM 1/22/2003, Clifton Royston wrote:
> > > I am trying to configure a particular load-balancing+failover setup
> > >for a web customer who will be colo'ed with us, and am wondering if
> > >there is a way to do this. I've got 2 original ServerIrons and one
> > >ServerIron XL; I'm planning to put this onto the XL.
> > >
> > > I would like the configuration to have the following properties:
> > >
> > >1) The ServerIron can determine when any of the real servers is down
> > >   (i.e. failover works correctly)
> > >
> > >2) The customer web servers do not have to be physically connected
> > >   "through" the ServerIron.
> > >
> > >3) The original source IP address of the connection is preserved (they
> > >   need that for their logging and analysis.)
> > >
> > >4) Preferably, the customer servers are in their own address block and
> > >   VLAN (Ethernet broadcast domain.)
> > >
> > > Is there any way to get all of these at one time?
> > >
> > > I know I can achieve 1, 3, and 4 by physically routing their
> > >connection through a ServerIron port dedicated to their VLAN; that's
> > >close to our standard configuration, so I'm not showing that here.
> > >That's my fallback solution, but I'd like to be able to do this without
> > >dedicating a port.
> > >
> > > I think I could achieve 2, 3, and 4 by defining the servers as
> > >"remote" instead of "real" and configuring DSR, but the documentation
> > >seems to imply that the ServerIrons can't automatically detect a failed
> > >server in that case.
> > >
> > > I know I can achieve the combination of properties 1, 2, and 4 by
> > >configuring a tagged VLAN on the main Ethernet link to our main switch
> > >and configuring their servers with source NAT like this; this rewrites
> > >the source IP, but routes everything correctly, distributes load
> > >fairly, detects failed servers, and keeps them in their own VLAN:
> > >
> > >server source-ip xx.yy.zz.14 255.255.255.240 xx.yy.zz.1
> > >real server their-server-1 xx.yy.zz.2
> > > source-nat
> > > port http
> > > port http url "HEAD /"
> > >real server their-server-2 xx.yy.zz.3
> > > source-nat
> > > port http
> > > port http url "HEAD /"
> > >server virtual virtual-85 ww.vv.uu.tt
> > > sym-priority 100
> > > port http
> > > bind http their-server-1 their-server-2
> > >
> > > Is there any way to get all of what I want - failover detection, not
> > >dedicating a port to put the servers "behind" the ServerIron, source IP
> > >preserved, and keeping them in their own VLAN?
> > >
> > > Thanks in advance for any help.
> > > -- Clifton
> > >
> > >--
> > > Clifton Royston -- LavaNet Systems Architect --
> > cliftonr at lava.net
> > >
> > > "If you ride fast enough, the Specialist can't catch you."
> > > "What's the Specialist?" Samantha says.
> > > "The Specialist wears a hat," says the babysitter. "The hat makes
> > noises."
> > > She doesn't say anything else.
> > > Kelly Link, _The Specialist's Hat_
> > >_______________________________________________
> > >foundry-nsp mailing list
> > >foundry-nsp at puck.nether.net
> > >http://puck.nether.net/mailman/listinfo/foundry-nsp
> >
> >
> >
>
>