Requests

Here you can post your own suggestions and/or requests for future SureAliveD versions.
To do so, please add a comment to this page and we will try to respond to it.

Please also note that comments are moderated, so your post will only show up once we accept it; this inconvenience is necessary if we want to keep order in this section.

Thank you in advance!

 

Comments

Multi-IP Virtual Service

I would like to be able to configure more than one IP address in a single virtual service definition.

The reason is that I am offering a service on multiple IPs. The same realservers sit behind these IPs, and traditionally every realserver is checked for *every* IPVS instance (for every service IP...). I just need *one* checking instance to synchronously add or remove a server to/from a whole bunch of services.

(This effect doubles twice over for me, since it applies to TCP and UDP and also to IPv6 - so that's four times as many checkers running unnecessarily!)

This is why we are doing the node-checking ourselves at the moment, but I would rather use something more elaborate with a larger user community than reinvent the wheel.

,.oO(I assumed IPv6 to be working; there are no traces of it in the docs, I'll have to check on this later)

Problem with activation

Hi there, I don't know if I am writing on the proper board, but I have a problem with activation: the link I received in the e-mail is not working... http://surealived.sourceforge.net/?5e48e4aa503686ab1a74ca87cd2

Well you don't have to create

Well, you don't have to create an account to post here; we don't want you to create useless accounts - we would moderate them anyway. So please feel free to post anonymously and we will be more than happy to answer you :-)

Re: Multi-IP Virtual Service

Currently we don't plan to support multi-IP virtuals. Multiple tests don't significantly increase server load (they are at noise level), so we don't want to touch it.

IPv6 - we have plans to implement it, but we can't tell when that time will come.

Re: Multi-IP Virtual Service

I was more concerned about the load balancer doing duplicate work.
For me it's okay now since the dns checker is a builtin, but for custom checks which spawn a process every time, it might add up....

Thanks!

Re: Multi-IP Virtual Service

You're right about the load balancer load when you're spawning processes (mod_exec in surealived); that's why we implemented mod_lua, which performs the test within the single surealived process. This allows you to use a high-level language (regexps or plain text comparison). It is limited to text-based TCP protocols (like http, ftp, etc.); UDP or binary protocols (both TCP and UDP) must be implemented as separate modules. Of course it would be possible to add a mod_lua_udp (changing only the socket creation protocol), but I don't have such a text-based UDP service yet.
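Roughly, a lua-based tester could be declared like this (just a sketch - the proto value and the script attribute below are illustrative placeholders, not the final syntax; the remaining attributes follow the usual tester syntax shown elsewhere on this page):

<tester loopdelay="1" timeout="4" retries2fail="1" retries2ok="1" proto="lua" script="/etc/surealived/checks/http_status.lua" testport="80" debugcomm="0"/>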

ServiceUP and ServiceDOWN hooks?

I would like to see and react to the state of a service - e.g. when enough realservers are alive, or when the total weight of the running realservers adds up to enough, the service is considered "up".
Perhaps run a script hook when a service goes down or after it comes up - or provide built-in functions. ;-)

Currently I am running a BGP daemon to announce a service IP address if the service is up. When the service is considered "down" the IP is removed from a dummy interface and hence no longer announced via BGP.
I would like to realize this with surealived: when a service is considered "down" due to a lack of realserver capacity, remove the service IP from the system. Or let this be done by the deployer via a script hook.

Re: ServiceUP and ServiceDOWN hooks?

Calling external scripts is simple; the problem is that when surealived is (re)starting, the ServiceUp/ServiceDown hook will be called depending on the offline states. So you need to add some logic to your scripts (of course, if you're adding/removing an IP address to/from the interface, it can't be added/removed twice). I think the expected weight sum could also be settable as a percent value, to easily skip recalculating the sum when you add more reals to the virtual.

If this is acceptable, let me know.

Great!

This is totally fine!

I think scripts can normally deal with that. If an IP that has already been deconfigured should be deconfigured again, the contract "IP is not configured" still holds after execution. The same applies the other way round for adding an IP....

Missing a state change would be more problematic, but if surealived is supposed to never crash (or is watchdog'd) and may at worst double-call a hook but never omit one, this seems fine to me.

A percent trigger is a nice idea! I personally favour simpler metrics; a plain "n [%] servers available" is enough for me. But I guess setting the weight of all servers to "1" will do the trick... :-)

Re: Great!

You can use notify_up, notify_down, notify_min_reals and notify_min_weight; the minimum values can be given as an integer or as a percent value. See the onet-notify.xml example.
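For illustration, a virtual using these attributes might look roughly like this (the attribute placement and the script paths are just placeholders of mine - check the onet-notify.xml example for the real syntax):

<virtual name="www" addr="192.168.1.100" port="80" proto="tcp" sched="wrr" notify_up="/usr/local/bin/vip-up.sh" notify_down="/usr/local/bin/vip-down.sh" notify_min_reals="50%" notify_min_weight="150">
  <tester loopdelay="1" timeout="4" retries2fail="1" retries2ok="1" proto="http" testport="80" url="/status" host="STATUS" retcode="200"/>
  <real name="web01" port="0" addr="192.168.1.101" weight="100"/>
  <real name="web02" port="0" addr="192.168.1.102" weight="100"/>
</virtual>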

Re: Great!

Perfect!
Now I can switch to surealived!

Thanks for the fast reaction!

Sorry Server Support?

One of the features blocking us from switching to surealived from ldirectord as implemented is the lack of sorry server support. Are there any plans to add the ability for surealived to add a "downpage" server into the LVS rotation if all the nodes for a virtual (cluster) are unavailable (fail the tester)?

Such as:
<snip>
<tester loopdelay="1" timeout="4" retries2fail="1" retries2ok="1" proto="http" testport="80" url="/status" host="STATUS" retcode="200" debugcomm="0"/>
<real name="web01" port="0" addr="192.168.1.101" weight="100"/>
<real name="web02" port="0" addr="192.168.1.102" weight="100"/>
<real name="web03" port="0" addr="192.168.1.103" weight="100"/>
<down name="downpage" port="0" addr="192.168.1.104" weight="1"/>

Additionally, any thoughts on being able to have multiple tester stanzas per virtual (different nodes), or even multiple service.xml files which surealived could parse?

Looks very promising. Keep up the good work.

Re: Sorry Server Support?

When we started the project, our goal was to replace the incorrectly working keepalived tester. Now the project has a slightly lower priority, so we have less time to complete the secondary goals. These are:
- detailed statistics (averages, etc.) - I'm currently working on this; I need it to implement dynamic weight changes for real servers.
- dynamic weight balancing, which will depend on the algorithm:
a) the weight increment depends on the test state (see post below),
b) the weight is calculated from the average response time (the detailed statistics I mentioned),
c) user defined (depends on user-defined metrics like cache size, etc. - for example, the metric could be taken from a value returned by lua, which could for example be the tested real's cache size).
- sorry servers (more than one, tested or not).
- IPv6.

Currently we don't plan to allow more than one tester per virtual, because some fields in the xml file can be overwritten (testport, for example). Splitting the xml into different files is also possible, but I don't know what kind of splitting you would like to have - all xml files in one directory, or an <include> tag, for example?
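Something like this is what I mean by the <include> variant (purely hypothetical - neither the wrapper element nor the <include> tag exists today, this only sketches what such a split could look like):

<services>
  <include file="/etc/surealived/services.d/web.xml"/>
  <include file="/etc/surealived/services.d/mail.xml"/>
</services>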

Re: Sorry Servers

The splitting of XML files question stems again from our current usage of ldirectord, which has a configuration file per process (tester). In addition, we use pacemaker (aka heartbeat) to start and stop ldirectord processes via OCF wrapper scripts. My hope is to have a single process (surealived) load multiple xml configuration files from a directory (i.e. /etc/surealived/services.d/*.xml). We would then be able to use an OCF script, similar to what we have now, to dynamically add/remove configuration files (virtuals) from a running surealived process.

Internally, our tests have shown that surealived does a much better job in terms of resources and failure detection time than our usual equivalent of 60+ ldirectord processes running on the same hardware.

Re: Sorry Servers

For example, I could simply create a temporary /tmp/services.xml file which would contain the contents of all files from /etc/surealived/services.d/*.xml. If that is a good enough solution, let me know - I won't need to spend a lot of time to implement it. Of course, you know that adding/removing a virtual requires a surealived reload (a HUP signal tells the tester to finish its tests and exit, the watchdog spawns the tester again and the play starts from the beginning). So dynamically adding/removing virtuals requires a de facto surealived process restart. This of course isn't painful, but I couldn't leave your note about the OCF script (I don't know what OCF really is) and dynamically adding/removing virtuals from a "running" surealived process without comment.

Request - using forum

I have a big request - could you use the forum instead of this "user request" page? The forum is much easier to read and write in, so it will be much easier for all of us.

Link to surealived forum: http://sourceforge.net/projects/surealived/forums/forum/993872/

Thanks a lot!

ipvssync change scheduler

Hi,
I think that when I change the scheduler type in the XML file and restart surealived, ipvssync doesn't make any changes in LVS.
Is it making diffs only on whole lines?

ipvssync change scheduler

Are you sure no changes are made in IPVS? When you restart the checker (surealived), the xml is parsed and a configuration file for ipvssync is created (/var/lib/surealived/ipvsfull.cfg). Apart from that, a reload file is created (ipvsfull.reload) which tells ipvssync that the IPVS table needs to be resynchronized. I'm using this scenario (reconfigure/reload) every day and I haven't experienced any problems with it. Are you sure you have that scheduler in your kernel?

DNS and slow-start

Is there a way to add real servers back into a pool with a weight that increases over time until the defined "weight" is reached? What I mean is that when a machine joins the pool, depending on the scheduling algorithm, it might be overloaded with requests to catch up with the other nodes in the pool; I want it to start "slowly".

Another question: can I define DNS names as the real address instead of an IP address? This could make managing pools very easy, using dynamic DNS updates.

DNS and slow-start

Hmm, if I understand you correctly, you want the possibility to increase a real server's weight after it has been downed/offlined (removed/weight set to 0). So for example - you have 5 servers, and one of them fails the test and its weight is set to 0. After setting, for example, a tester attribute increase="1", that server would start from weight 1 (once the test succeeds) and climb to the defined weight (for example 20). It's not complicated to implement, but I'm not sure you won't get oscillation in this virtual. Of course, deciding whether you want to use that algorithm is up to you. Second case - when a new machine is added to the pool, it's not possible to tell whether it is a new one or an old one. This slightly complicates deciding what initial weight it should get. At the beginning the xml-defined weight would be used, so too large a value could be applied to the real. Apart from that, you can use your own scripts and the cmd interface, which allows you to override server weights. It would really complicate everything, but it's possible to implement on your own.

Second question - I don't think DNS names are really what I want to have on my lvses. When you have a problem with dns, how can you build a configuration file or put valid IP addresses into IPVS (you don't know those addresses)? Imagine a situation where you're reloading the surealived process and for 3 seconds no dns service is available - what would you expect to see in IPVS?

Thanks for the reply, for the

Thanks for the reply,

For the first issue: any machine added to the pool, whether because it is new or because it went to weight 0, should increase its weight slowly, in steps defined somewhere.

For example, weight=10, initial-weight=1, steps=1 would add a real server to the pool with weight 1 and increment the weight up to 10 in steps of "1" over a period of time; when a machine "recovers", it would also start at the same weight.

The gain I see in this is for applications that need database initialization, for example; usually that happens only after the first request, so the LB is sending "normal" traffic (and for wrr, more than normal traffic, to match the others) to the real, which overloads it and makes the first few requests time out or have a long response time.

For the DNS issue, there are some options to resolve hostnames (from /etc/hosts, using nscd), so not having a DNS server is a different issue; that will break mail and some other things, but I don't think it's in the scope of this problem.
I want to have the option to define actual host names for the real servers and VIPs, so I can easily read my configs and understand what I see.

Agree to implement that (slow start)

Hi again,

I'll implement the "slow start" solution in a few days - it will be in the new release. I think I'll slightly change the algorithm you suggest:
- weight=1 maxweight=20 step=1
This will allow me to write the currently calculated weights to the "override" file. So, when you add a new server it will start from 1 (no override entry will be found). Its weight will then be incremented continually until maxweight is reached. Restarting surealived will read the weights from the override file, so the IPVS state will be exactly the same as before the restart. One minus - an override weight set via the cmd interface won't be persistent (each test will change its value).
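For illustration, a real declared roughly like this (where exactly these attributes will live - on the <real> or on the <tester> element - is still open, so treat it as a sketch only) would join the pool with weight 1 and climb to 20 in steps of 1:

<real name="web01" port="0" addr="192.168.1.101" weight="1" maxweight="20" step="1"/>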

About dns - I remain unconvinced; I think the configuration xml file has to be fully independent of dns. Only that ensures the IPVS table will be filled with valid virtual/real servers. If you really need to use dns mappings, try writing your own xml builder (which would resolve the IP addresses and create a valid surealived xml configuration file).

including CARP in the future?

Hi,

I recently switched from ldirectord to keepalived because of MISC_CHECK and its integrated router redundancy protocol, which makes it a good all-in-one solution for me. Unfortunately keepalived uses the patented VRRP instead of CARP.
So, is CARP on surealived's roadmap, and if not, how can I make a surealived setup redundant? Only with a classic heartbeat setup?

including CARP in the future?

We don't plan to implement CARP or VRRP in surealived. Our goal was to replace keepalived in the testing part - we're using keepalived's VRRP and it is good enough for us. Keepalived (we use 1.1.15) allows you to separate the virtual address failover from the checking part (try the keepalived -P switch). We're using keepalived's VRRP, but of course you can use heartbeat if you want.
In surealived we test the reals all the time on all lvses, even when they're inactive (there's no way to avoid testing from an inactive surealived instance with a keyword like ha_suspend in keepalived). We do that because when a switchover occurs or one machine dies, we need a consistent service state.
MISC_CHECK (the exec module in surealived) is in our opinion a last resort - we plan to implement mod_lua, which will allow you to analyze the flow in one process using a LUA interpreter instead of a heavyweight fork-and-exec solution. On lvses with a lot of traffic such a solution is slightly risky (packet drops occur when there are not enough CPU resources, and then the testing scripts could fail even if your real server works perfectly).

ok, thanks

Ah, didn't know about the -P switch. Makes sense then. Thanks for the feedback
