<html>
<head>
   <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
   <title>High Availability</title>
</head>
<body bgcolor="#FFFFFF">

<h1>High Availability</h1>

<p>As more and more critical commercial applications move onto the
Internet, providing highly available services becomes increasingly
important. One of the advantages of a clustered system is its
hardware and software redundancy. High availability can be provided by
detecting node or daemon failures and reconfiguring the system
appropriately, so that the workload can be taken over by the remaining
nodes in the cluster.

<p>In fact, high availability is a big field. An elegant highly
available system may have a reliable group communication sub-system,
membership management, a quorum sub-system, a concurrency control
sub-system and so on, which is a lot of work. However, we can use
some existing software to construct highly available LVS systems
now. There must be many ways to build highly available LVS systems;
please drop me a message if you have an elegant method of your own.
The following two solutions are for reference only.

<h2>1. The mon+heartbeat+fake+coda solution</h2>

<p>High availability of the virtual server can be provided using the
"<a href="http://www.kernel.org/software/mon/">mon</a>", "<a
href="http://www.linux-ha.org/">heartbeat</a>", "<a
href="http://vergenet.net/linux/fake/">fake</a>" and "<a
href="http://www.coda.cs.cmu.edu/">coda</a>" packages. "mon" is a
general-purpose resource monitoring system, which can be used to
monitor network service availability and server nodes. The "heartbeat"
code currently provides heartbeats between two nodes over a serial
line and over UDP. "fake" is IP take-over software that works by ARP
spoofing. The high availability of a Linux Virtual Server is
illustrated in the following figure.

<center><img SRC="HA.jpg" height=890 width=736></center>

<p>Server failover is handled as follows. The "mon" daemon runs on
the load balancer to monitor service daemons and server nodes in the
cluster. The fping.monitor is configured to check every t seconds
whether the server nodes are alive, and the relevant service monitor
is configured to check the service daemons on all the nodes every m
seconds. For example, http.monitor can be used to check the http
services, ftp.monitor the ftp services, and so on. An alert is
written to remove/add a rule in the Linux virtual server table when
it detects that a server node or daemon is down/up. Thus the load
balancer can automatically mask the failure of service daemons or
servers and put them back into service when they recover.

<p>Now the load balancer becomes a single point of failure for the
whole system. In order to mask a failure of the primary load
balancer, we need to set up a backup of the load balancer. The "fake"
software is used by the backup to take over the IP addresses of the
load balancer when the load balancer fails, and the "heartbeat" code
is used to detect the status of the load balancer and
activate/deactivate "fake" on the backup server. Heartbeat daemons
run on the primary and the backup, and they periodically exchange
"I'm alive" messages over the serial line. When the heartbeat daemon
on the backup cannot hear the "I'm alive" message from the primary
within the defined time, it activates fake to take over the Virtual
IP address and provide the load-balancing service; when it later
receives the "I'm alive" message from the primary again, it
deactivates fake to release the Virtual IP address, and the primary
comes back to work.
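
<p>For illustration, an IP take-over of this kind boils down to
bringing the VIP up as an interface alias and then advertising it
with gratuitous ARP so that neighbours update their ARP caches. A
minimal sketch in shell (the alias name eth0:0 and the use of arping
from iputils are assumptions for illustration; "fake" and heartbeat
ship their own ARP take-over tools):

<blockquote>
<pre>
#!/bin/sh
# takeover.sh - minimal sketch of VIP take-over (not the actual "fake" code)
VIP=10.0.0.3

# bring the virtual IP up as an alias on the public interface
/sbin/ifconfig eth0:0 $VIP netmask 255.255.255.255 up

# send a few unsolicited (gratuitous) ARP replies so that routers and
# clients on the LAN map the VIP to this machine's MAC address
/sbin/arping -U -I eth0 -c 3 $VIP
</pre>
</blockquote>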

<p>However, in the current implementation a failover or takeover of
the primary load balancer will cause the established connections in
the hash table to be lost, which will require the clients to send
their requests again.

<p><a href="http://www.coda.cs.cmu.edu/">Coda</a> is a fault-tolerant
distributed file systems, a descendant of Andrew file system. The
contents of servers can be stored in Coda, so that files can be highly
available and easy to manage.

<h3>Configuration example</h3>

The following is an example of how to set up a highly available
virtual web server via direct routing.

<p><B>The failover of real servers</B></p>

The "mon" is used to monitor service daemons and server nodes in the
cluster.  For example, the fping.monitor can be used to monitor the
server nodes, http.monitor can be used to check the http services,
ftp.monitor is for the ftp services, and so on. So, we just need to
write an alert to remove/add a rule in the virtual server table while
detecting the server node or daemon is down/up. Here is an example
calleded lvs.alert, which takes virtual service(IP:Port) and
the service port of real servers as parameters.

<blockquote>
<pre>
#!/usr/bin/perl
#
# lvs.alert - Linux Virtual Server alert for mon
#
# It can be activated by mon to remove a real server when the
# service is down, or add the server when the service is up.
#
use Getopt::Std;

# -s, -g, -h, -t and -l are standard options passed by mon itself;
# -u marks an upalert.  -P (protocol), -V (virtual service),
# -R (real server), -W (weight) and -F (forwarding method) are our
# own, set in the mon configuration below.
getopts ("s:g:h:t:l:P:V:R:W:F:u");

$ipvsadm = "/sbin/ipvsadm";
$protocol = $opt_P;
$virtual_service = $opt_V;
$remote = $opt_R;

if ($opt_u) {                   # upalert: the service came back
    $weight = $opt_W;
    if ($opt_F eq "nat") {
        $forwarding = "-m";     # masquerading (LVS/NAT)
    } elsif ($opt_F eq "tun") {
        $forwarding = "-i";     # IP tunneling (LVS/TUN)
    } else {
        $forwarding = "-g";     # direct routing (LVS/DR), the default
    }

    if ($protocol eq "tcp") {
        system("$ipvsadm -a -t $virtual_service -r $remote -w $weight $forwarding");
    } else {
        system("$ipvsadm -a -u $virtual_service -r $remote -w $weight $forwarding");
    }
} else {                        # alert: the service is down
    if ($protocol eq "tcp") {
        system("$ipvsadm -d -t $virtual_service -r $remote");
    } else {
        system("$ipvsadm -d -u $virtual_service -r $remote");
    }
}
</pre>
</blockquote>
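
<p>Called by hand, lvs.alert behaves as follows; mon passes the same
options when an alert fires, adding -u on an upalert (values here
match the mon configuration below):

<blockquote>
<pre>
# remove a failed real server from the virtual service
./lvs.alert -P tcp -V 10.0.0.3:80 -R 192.168.0.1 -F dr

# add it back with weight 5 once the service is up again (upalert)
./lvs.alert -u -P tcp -V 10.0.0.3:80 -R 192.168.0.1 -W 5 -F dr
</pre>
</blockquote>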

<p>The lvs.alert is put under the /usr/lib/mon/alert.d directory.  The
mon configuration file (/etc/mon/mon.cf or /etc/mon.cf) can be
configured to monitor the http services and servers in the cluster as
follows.

<blockquote>
<pre>
#
# The mon.cf file
#
#
# global options
#
cfbasedir  = /etc/mon
alertdir   = /usr/lib/mon/alert.d
mondir     = /usr/lib/mon/mon.d
maxprocs   = 20
histlength = 100
randstart  = 30s

#
# group definitions (hostnames or IP addresses)
#
hostgroup www1 www1.domain.com 

hostgroup www2 www2.domain.com

#
# Web server 1
# 
watch www1
    service http
        interval 10s
        monitor http.monitor
        period wd {Sun-Sat}
            alert mail.alert wensong
            upalert mail.alert wensong
            alert lvs.alert -P tcp -V 10.0.0.3:80 -R 192.168.0.1 -W 5 -F dr
            upalert lvs.alert -P tcp -V 10.0.0.3:80 -R 192.168.0.1 -W 5 -F dr

#
# Web server 2
#
watch www2
    service http
        interval 10s
        monitor http.monitor
        period wd {Sun-Sat}
            alert mail.alert wensong
            upalert mail.alert wensong
            alert lvs.alert -P tcp -V 10.0.0.3:80 -R 192.168.0.2 -W 5 -F dr
            upalert lvs.alert -P tcp -V 10.0.0.3:80 -R 192.168.0.2 -W 5 -F dr
</pre>
</blockquote>

<p>Note that you need to set the parameters of lvs.alert like
"lvs.alert -V 10.0.0.3:80 -R 192.168.0.3:8080" if the destination
port on the real server is different, as it can be in LVS/NAT.
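
<p>For instance, if the real servers answered on port 8080 behind
LVS/NAT, the alert lines for the first web server might read as
follows (a hypothetical variation on the configuration above):

<blockquote>
<pre>
            alert lvs.alert -P tcp -V 10.0.0.3:80 -R 192.168.0.1:8080 -W 5 -F nat
            upalert lvs.alert -P tcp -V 10.0.0.3:80 -R 192.168.0.1:8080 -W 5 -F nat
</pre>
</blockquote>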

<p>Now the load balancer can automatically mask the failure of
service daemons or servers and put them back into service when they
recover.

<p><B>The failover of the load balancer</B></p>

<p>In order to prevent the load balancer from becoming a single point
of failure for the whole system, we need to set up a backup of the
load balancer and let the two heartbeat each other periodically.
Please read the GettingStarted document included in the heartbeat
package; it is simple to set up a two-node heartbeat system.

<p>For example, assume that the two load balancers have the
following addresses:

<table BORDER COLS=2 CELLSPACING=0 CELLPADDING=0 WIDTH="600">
<tr>
<td>lvs1.domain.com (primary)</td>
<td>10.0.0.1</td>
</tr>

<tr>
<td>lvs2.domain.com (backup)</td>
<td>10.0.0.2</td>
</tr>

<tr>
<td>www.domain.com (VIP)</td>
<td>10.0.0.3</td>
</tr>
</table>

<p>After installing heartbeat on both lvs1.domain.com and
lvs2.domain.com, simply create /etc/ha.d/ha.cf as follows:
<blockquote>
<pre>
#
#       keepalive: how many seconds between heartbeats
#
keepalive 2
#
#       deadtime: seconds-to-declare-host-dead
#
deadtime 10
#       hopfudge maximum hop count minus number of nodes in config
hopfudge 1
#
#       What UDP port to use for udp or ppp-udp communication?
#
udpport 1001
#       What interfaces to heartbeat over?
udp     eth0
#
#       Facility to use for syslog()/logger (alternative to log/debugfile)
#
logfacility     local0
#
#       Tell what machines are in the cluster
#       node    nodename ...    -- must match uname -n
node    lvs1.domain.com
node    lvs2.domain.com
</pre>
</blockquote>
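
<p>The example above heartbeats over UDP only. Since a serial link
between the two directors is also mentioned above, you may want to
add the serial directives to the same file (the device name and speed
here are assumptions; see the heartbeat documentation):

<blockquote>
<pre>
#       serial  serialportname ...
serial  /dev/ttyS0
#       baud rate for the serial port
baud    19200
</pre>
</blockquote>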

<p>The /etc/ha.d/haresources file is as follows:
<blockquote>
<pre>
lvs1.domain.com 10.0.0.3 lvs mon
</pre>
</blockquote>


<p>The /etc/rc.d/init.d/lvs is as follows:
<blockquote>
<pre>
#!/bin/sh

#
# You probably want to set the path to include
# nothing but local filesystems.
#
PATH=/bin:/usr/bin:/sbin:/usr/sbin
export PATH

IPVSADM=/sbin/ipvsadm

case "$1" in
    start)
	if [ -x $IPVSADM ]
	then
            echo 1 > /proc/sys/net/ipv4/ip_forward
	    $IPVSADM -A -t 10.0.0.3:80
	    $IPVSADM -a -t 10.0.0.3:80 -r 192.168.0.1 -w 5 -g
	    $IPVSADM -a -t 10.0.0.3:80 -r 192.168.0.2 -w 5 -g
	fi
	;;
    stop)
	if [ -x $IPVSADM ]
	then
	    $IPVSADM -C
	fi
	;;
    *)
    	echo "Usage: lvs {start|stop}"
	exit 1
esac

exit 0
</pre>
</blockquote>
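
<p>Before wiring this script into heartbeat, it is worth running it
once by hand on the primary and checking that the IPVS table looks
right (recent versions of ipvsadm list the table numerically with
-L -n; older ones may use -l):

<blockquote>
<pre>
/etc/rc.d/init.d/lvs start
/sbin/ipvsadm -L -n     # should show 10.0.0.3:80 with two real servers
/etc/rc.d/init.d/lvs stop
</pre>
</blockquote>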

<p>Finally, make sure that all the files are created on both the lvs1
and lvs2 nodes and alter them for your own configuration, then start
the heartbeat daemon on the two nodes.
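
<p>For example (the init script location may vary with your
distribution):

<blockquote>
<pre>
# run on both lvs1.domain.com and lvs2.domain.com
/etc/rc.d/init.d/heartbeat start
</pre>
</blockquote>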

<p>Note that "fake" (IP takeover by Gratuitous Arp) is already
included in the heartbeat package, so there is no need to setup "fake"
separately. When the lvs1.domain.com node fails, the lvs2.domain.com
will take over all the haresources of the lvs1.domain.com, i.e. taking
over the 10.0.0.3 address by Gratuitous ARP, start the
/etc/rc.d/init.d/lvs and /etc/rc.d/init.d/mon scripts. When the
lvs1.domain.com come back, the lvs2 releases the HA resources and the
lvs1 takes them back.

<h2>2. The ldirectord+heartbeat solution</h2>

<p>The ldirectord (Linux Director Daemon) written by <a
href="mailto:jacob.rief@tis.at">Jacob Rief</a> is a stand-alone
daemon that monitors the services of real servers, currently the http
and https services. It is simple to install and works with the <a
href="http://www.linux-ha.org/">heartbeat</a> code. The ldirectord
program can be found in the contrib directory inside the ipvs
tarball, or you can check the CVS repository of heartbeat for the
latest version. See 'perldoc ldirectord' for full information about
ldirectord. Thanks to <a href="mailto:jacob.rief@tis.at">Jacob
Rief</a> for writing this great program!

<p>The advantages of ldirectord over mon are as follows:

<ul>

<li>ldirectord is written specifically for LVS monitoring.
<br>It reads configuration files like /etc/ha.d/xxx.cf, which contain
all the information needed to configure the IPVS routing table. When
ldirectord starts up, the IPVS routing table is configured
accordingly. You can also keep different virtual service
configurations in multiple configuration files, so that it is
possible to modify the parameters of some services without bringing
down others.

<li>ldirectord can be easily started and stopped by heartbeat.
<br>Put ldirectord under the /etc/ha.d/resource.d/ directory, then
you can add a line to /etc/ha.d/haresources like:
<pre>
    node1 IPaddr::10.0.0.3 ldirectord::www ldirectord::mail
</pre>

</ul>

<p>ldirectord can also be started and stopped manually, so you can
use it in an LVS cluster without a backup load balancer.
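
<p>For example, with the configuration file /etc/ha.d/www.cf shown in
the next section, ldirectord can be driven by hand roughly like this
(see 'perldoc ldirectord' for the exact invocation of your version;
the installation path is an assumption):

<blockquote>
<pre>
# populate the IPVS table from www.cf and start monitoring
/usr/sbin/ldirectord www.cf start

# query its status, then stop it again
/usr/sbin/ldirectord www.cf status
/usr/sbin/ldirectord www.cf stop
</pre>
</blockquote>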

<h3>Configuration example</h3>

<p>For the example introduced in the mon+heartbeat+fake+coda
solution, you can configure /etc/ha.d/www.cf as follows:

<blockquote>
<pre>
#
# The /etc/ha.d/www.cf for ldirectord
#

# the number of seconds until a real server is declared dead
timeout = 10

# the number of seconds between server checks
checkinterval = 10

#
# virtual = x.y.z.w:p
#     protocol = tcp|udp
#     scheduler = rr|wrr|lc|wlc
#     real = x.y.z.w:p gate|masq|ipip [weight]
#     ...
#     

virtual = 10.0.0.3:80
     protocol = tcp
     scheduler = wlc
     real = 192.168.0.1:80 gate 5
     real = 192.168.0.2:80 gate 5
     request = "/.testpage"
     receive = "test page"
</pre>
</blockquote>

<p>The /etc/ha.d/haresources file is as simple as:
<blockquote>
<pre>
lvs1.domain.com IPaddr::10.0.0.3 ldirectord::www
</pre>
</blockquote>

<p>You need to create the .testpage file under the DocumentRoot directory of each web server:
<blockquote>
<pre>
echo "test page" > .testpage
</pre>
</blockquote>

<p>Start the heartbeat daemons on the primary and the backup. If
anything goes wrong, check /var/log/ha-log and /var/log/ldirector.log.

<p><hr>

<p align="center">
<font size="-1">
$Id: HighAvailability.html,v 1.3 2000/01/11 14:00:52 wensong Exp $
<br>Created on: 1998/12/5
</font>

</body>
</html>