Friday, October 14, 2011
6:15 PM

How to install and configure a Linux-HA high-availability Heartbeat 3.0 cluster on RHEL 5

Heartbeat is high-availability cluster software for the Linux platform. Here we will discuss how to
install and configure heartbeat-3.0.3 on Red Hat Enterprise Linux. In this example we will configure
a web server using Apache and cluster it. The same steps work on CentOS, Fedora, and other Red Hat flavors.

Heartbeat version: heartbeat-3.0.3

Requirements:

Two Linux nodes running RHEL 5.4.
Node1: 192.168.0.33 hb_test1.lap.work
Node2: 192.168.0.34 hb_test2.lap.work
LAN & Internet connection.
A yum server.

Initial Steps:

Set fully qualified hostnames on both nodes and add the corresponding entries in /etc/hosts and
/etc/sysconfig/network.
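
For example (a minimal sketch using the addresses above; adjust for your network):

#vi /etc/hosts                                      #same on both nodes
192.168.0.33   hb_test1.lap.work   hb_test1
192.168.0.34   hb_test2.lap.work   hb_test2

#vi /etc/sysconfig/network                          #on node1; use hb_test2.lap.work on node2
NETWORKING=yes
HOSTNAME=hb_test1.lap.work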

Configuring Apache:

#yum install httpd*

On node1

#vi /var/www/html/index.html
This is node 1 of Heartbeat HA cluster

On node2
 
#vi /var/www/html/index.html
This is node 2 of Heartbeat HA cluster

On both nodes:

#vi /etc/httpd/conf/httpd.conf
Listen 192.168.0.222:80

Here 192.168.0.222 is the floating cluster IP that Heartbeat will manage; see the haresources file below.

Now start the service on both nodes.

#service httpd start                                #it won't bind to 192.168.0.222 until Heartbeat brings the IP up, so don't worry

#chkconfig httpd on

Confirm from a browser once the cluster is up.
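
Once Heartbeat is running (see below), you can also check from the command line, assuming curl is installed:

#curl http://192.168.0.222/                         #prints the page of whichever node holds the IP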

Install the following packages on both nodes:

#yum install glibc*
#yum install gcc*
#yum install lib*
#yum install flex*
#yum install net-snmp*
#yum install OpenIPMI*
#yum install python-devel
#yum install perl*
#yum install openhpi*
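
Equivalently, all of them in one command:

#yum install glibc* gcc* lib* flex* net-snmp* OpenIPMI* python-devel perl* openhpi*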

Save the repo file for the clusterlabs online repository on both machines:

It is available at http://www.clusterlabs.org/rpm/epel-5/clusterlabs.repo
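
You can fetch it straight into the yum configuration directory (assuming wget is installed):

#wget -O /etc/yum.repos.d/clusterlabs.repo http://www.clusterlabs.org/rpm/epel-5/clusterlabs.repo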

It is as follows:

[clusterlabs]
name=High Availability/Clustering server technologies (epel-5)
baseurl=http://www.clusterlabs.org/rpm/epel-5
type=rpm-md
gpgcheck=0
enabled=1


After that, install the heartbeat packages on both nodes:

#yum install cluster-glue*

Four packages will be installed

cluster-glue
cluster-glue-libs
cluster-glue-libs-devel
cluster-glue-debuginfo

#yum install heartbeat*

Five packages will be installed including one dependency

heartbeat.i386 0:3.0.3-2.el5
heartbeat-debuginfo.i386 0:3.0.3-2.el5
heartbeat-devel.i386 0:3.0.3-2.el5
heartbeat-libs.i386 0:3.0.3-2.el5

Dependency:

resource-agents.i386 0:1.0.3-2.el5

#yum install resource-agents*

One package will be installed

resource-agents-debuginfo.i386 0:1.0.3-2.el5
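
A quick sanity check of everything that was installed:

#rpm -qa | grep -i -e heartbeat -e cluster-glue -e resource-agents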

Setting up the configuration files:

We can do all the configuration on one system and copy /etc/ha.d to the second node.

#cd /etc/ha.d
#cat README.config

The details of the configuration files are explained in this file. We have to copy three
configuration files to this directory from the samples in the documentation.

[root@hb_test1 ha.d]# cp /usr/share/doc/heartbeat-3.0.3/authkeys /etc/ha.d/
[root@hb_test1 ha.d]# cp /usr/share/doc/heartbeat-3.0.3/ha.cf /etc/ha.d/
[root@hb_test1 ha.d]# cp /usr/share/doc/heartbeat-3.0.3/haresources /etc/ha.d/
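
A quick check that all three files are now in place:

[root@hb_test1 ha.d]# ls authkeys ha.cf haresources
authkeys  ha.cf  haresources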

We have to edit the authkeys file.

We are using the sha1 algorithm. "auth 2" selects key number 2, and the shared secret (test-ha here) must be identical on both nodes:

#vi authkeys
Edit as follows:
auth 2
#1 crc
2 sha1 test-ha
#3 md5 Hello!

And change the permissions of authkeys to 600:
#chmod 600 authkeys

We have to edit the ha.cf file:

#vi ha.cf
Uncomment the following lines and make these edits:

logfile /var/log/ha-log        # cluster log file
logfacility local0
keepalive 2                    # seconds between heartbeat packets
deadtime 15                    # seconds of silence before a peer is declared dead
warntime 10                    # seconds before a late-heartbeat warning is logged
initdead 120                   # extra grace period for the first heartbeat at boot
udpport 694                    # UDP port used for heartbeat traffic
bcast eth0                     # interface to broadcast heartbeats on
auto_failback on               # move resources back when the preferred node returns
node hb_test1.lap.work         # on both nodes the command uname -n must
node hb_test2.lap.work         # return exactly these hostnames

We have to edit the haresources file:

#vi haresources

hb_test2.lap.work 192.168.0.222 httpd

The format is: the preferred node (which must match uname -n on that node), followed by the resources to manage. A bare IP address is shorthand for an IPaddr resource, and httpd is started through its init script. This file must be identical on both nodes.

NOTE: You don't have to create an interface for this IP or set up an IP alias. Heartbeat
will take care of it automatically.


Now exchange and save the SSH authorized keys between node1 and node2, so files can be copied over scp without a password.

Key exchange:

On node1:

Generate the key:

[root@hb_test1 ~]# ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/root/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
9f:5d:47:6b:2a:2e:c8:3e:ee:8a:c2:28:5c:ad:57:79 root@hb_test1.lap.work

Pass the key to node2:
[root@hb_test1 ~]# scp .ssh/id_dsa.pub hb_test2.lap.work:/root/.ssh/authorized_keys

On node2:

Generate the key:

[root@hb_test2 ~]# ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/root/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
40:66:t8:bd:ac:bf:68:38:22:60:d8:9f:18:7d:94:21 root@hb_test2.lap.work

Pass the key to node1:
[root@hb_test2 ~]# scp .ssh/id_dsa.pub hb_test1.lap.work:/root/.ssh/authorized_keys
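
NOTE: scp-ing onto authorized_keys as above overwrites any keys already present on the target. If the file already exists, append instead, for example:

[root@hb_test2 ~]# cat .ssh/id_dsa.pub | ssh hb_test1.lap.work 'cat >> /root/.ssh/authorized_keys'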

Now copy the /etc/ha.d directory from node1 to node2:
[root@hb_test1 ~]# scp -r /etc/ha.d hb_test2.lap.work:/etc/

Starting the service:

On both nodes:

#/etc/init.d/heartbeat start
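
You will probably also want Heartbeat to come up at boot:

#chkconfig heartbeat on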

You may have to restart the heartbeat service a few times. Check with ifconfig: on one node you will
see an interface eth0:1 up with the IP 192.168.0.222. On that node httpd is running, while on the other
node it is stopped. When the running node fails, the other one takes over.
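
To watch a failover in action, tail the log configured in ha.cf and stop Heartbeat on the active node (a rough test; pulling the network cable also works):

#tail -f /var/log/ha-log                            #on the passive node
#service heartbeat stop                             #on the active node; eth0:1 and httpd should move to the other node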
