Red Hat Cluster

Page 1: Red Hat Cluster

Configure Red Hat Cluster

Hostname:

Station10.example.com: 192.168.5.10

Station20.example.com: 192.168.5.20

Station30.example.com: 192.168.5.30

Luci Admin Console: station10.example.com (192.168.5.10)

Cluster Nodes: station20.example.com, station30.example.com (station10.example.com is added later as the third node)

Cluster Name: Cluster_01

VIP: 192.168.5.100
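If DNS does not resolve these names, the mappings above can be placed in /etc/hosts on every station (a minimal sketch; it simply repeats the addresses listed here):

192.168.5.10   station10.example.com   station10
192.168.5.20   station20.example.com   station20
192.168.5.30   station30.example.com   station30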

Installation:

Station10:

Install packages-

# yum install luci

# /usr/sbin/luci_admin password

# /etc/init.d/luci restart

# chkconfig luci on

# yum install ricci

# chkconfig ricci on

Admin console URL: https://station10.example.com:8084

Station20 and Station30:

Install packages-

# yum install ricci

# /etc/init.d/ricci start

# chkconfig ricci on

Page 2: Red Hat Cluster

# grep locking_type /etc/lvm/lvm.conf

locking_type = 1

Create a New Cluster:

log in to luci admin console:

Click Cluster > Create a new cluster > add each node's hostname and root password:

Give Cluster Name: Cluster_01

Node: station20.example.com

Node: station30.example.com

Click view SSL cert fingerprints to verify the communication.

Page 3: Red Hat Cluster

After the cluster creation finishes:

On Any Cluster Node:

# clustat

Cluster Status for Cluster_01 @ Fri Jun 1 15:56:08 2012

Member Status: Quorate

Member Name ID Status

------ ---- ---- ------

station30.example.com 1 Online

station20.example.com 2 Online, Local

Page 4: Red Hat Cluster

# cman_tool status

Version: 6.2.0

Config Version: 1

Cluster Name: Cluster_01

Cluster Id: 25517

Cluster Member: Yes

Cluster Generation: 8

Membership state: Cluster-Member

Nodes: 2

Expected votes: 1

Total votes: 2

Quorum: 1

Active subsystems: 9

Flags: 2node Dirty

Ports Bound: 0 11 177

Node name: station20.example.com

Node ID: 2

Multicast addresses: 239.192.99.17

Node addresses: 192.168.5.20

# grep locking_type /etc/lvm/lvm.conf

locking_type = 3
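If a node still shows locking_type = 1, clustered locking can be enabled with the lvmconf helper from the lvm2-cluster package and the clvmd service (a general note; package and service names assume a standard RHEL cluster install):

# lvmconf --enable-cluster
# /etc/init.d/clvmd restart
# chkconfig clvmd on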

Page 5: Red Hat Cluster

Configure Fence:

On any one node, run:

# ls /etc/cluster/

cluster.conf
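At this point cluster.conf contains little more than the cluster name and the two nodes. A rough sketch of what luci generates (attribute values such as config_version may differ; node IDs follow the clustat output above):

<?xml version="1.0"?>
<cluster name="Cluster_01" config_version="1">
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="station30.example.com" nodeid="1" votes="1"/>
    <clusternode name="station20.example.com" nodeid="2" votes="1"/>
  </clusternodes>
  <fencedevices/>
  <rm/>
</cluster>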

Add a Fence Device:

Click Cluster Name > Shared Fence Devices > Add a Fence Device > In Fencing Type select “Virtual Machine Fencing” > Give Name “XEN_Fencing” > Click “Add this shared fence device” > OK

Add Failover Domains:

Click Cluster Name > Failover Domains > Add a Failover Domain

Page 6: Red Hat Cluster

Add Fence Device to each node:

Cluster > Cluster List > Click Cluster Name > Nodes > Click on node “station20.example.com” > Add a fence device to this level > Main Fencing Method > Select “XEN_Fencing (Virtual Machine Fencing)” > for the Domain, give the Xen domain name that the hypervisor uses for station20.example.com (not the machine hostname), here “XEN_VM01” > Click “Update main fence properties”.

Repeat the same for station30.example.com, again giving the Xen domain name the hypervisor uses (not the machine hostname): “XEN_VM02”.
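After these two steps the fencing part of cluster.conf should look roughly like the sketch below (a hedged reconstruction; the domain attribute carries the Xen domain name configured above):

<fencedevices>
  <fencedevice agent="fence_xvm" name="XEN_Fencing"/>
</fencedevices>
<clusternode name="station20.example.com" nodeid="2" votes="1">
  <fence>
    <method name="1">
      <device name="XEN_Fencing" domain="XEN_VM01"/>
    </method>
  </fence>
</clusternode>
<clusternode name="station30.example.com" nodeid="1" votes="1">
  <fence>
    <method name="1">
      <device name="XEN_Fencing" domain="XEN_VM02"/>
    </method>
  </fence>
</clusternode>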

Configure the fence key and distribute it to both nodes:

Click Cluster name > Fence > Check Run XVM fence daemon > Give the node name > Click Retrieve Cluster Node > Create and Distribute Keys > Apply

Page 7: Red Hat Cluster

On any one node, run:

# ls /etc/cluster/

cluster.conf fence_xvm.key

Copy /etc/cluster/fence_xvm.key to the luci admin host, placing it at /etc/cluster/fence_xvm.key.
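For example, from the node that generated the key (assuming root SSH access to station10.example.com):

# scp /etc/cluster/fence_xvm.key station10.example.com:/etc/cluster/fence_xvm.key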

Verify Fencing:

On any node, say station30.example.com:

# fence_xvm -H XEN_VM01

This will reboot (fence) station20.example.com.
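The same test can be run in the opposite direction from station20.example.com, which should fence station30.example.com:

# fence_xvm -H XEN_VM02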

Format Clustered LVM with GFS2 file system:

/dev/vg0/lv0 is an existing LVM logical volume.
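If such a volume does not exist yet, it can be created as a clustered VG/LV on shared storage; a sketch, where /dev/sdh1 is a hypothetical shared partition visible to all nodes and the size roughly matches the 0.48 GB device shown below:

# pvcreate /dev/sdh1 [hypothetical shared partition]
# vgcreate -c y vg0 /dev/sdh1 [-c y makes the volume group clustered]
# lvcreate -L 500M -n lv0 vg0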

# mkfs.gfs2 -p lock_dlm Cluster_01:vg0 -j 3 /dev/vg0/lv0

mkfs.gfs2: More than one device specified (try -h for help)

The -t flag before the lock table name is missing here, so mkfs.gfs2 treats Cluster_01:vg0 as a second device. The corrected command follows.

Page 8: Red Hat Cluster

[root@station30 ~]# mkfs.gfs2 -p lock_dlm -t Cluster_01:vg0 -j 3 /dev/vg0/lv0

This will destroy any data on /dev/vg0/lv0.

It appears to contain a gfs filesystem.

Are you sure you want to proceed? [y/n] y

Device: /dev/vg0/lv0

Blocksize: 4096

Device Size 0.48 GB (126976 blocks)

Filesystem Size: 0.48 GB (126973 blocks)

Journals: 3

Resource Groups: 2

Locking Protocol: "lock_dlm"

Lock Table: "Cluster_01:vg0"

UUID: A4599910-69AF-5814-8FA9-C1F382B7F5E5

# mount /dev/vg0/lv0 /var/www/html/

# gfs2_tool df /dev/mapper/vg0-lv0

Add Resource:

IP:

Page 9: Red Hat Cluster

GFS File System:

Http Script:
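In cluster.conf these three resources land in the <resources> section of <rm>; a hedged sketch, assuming the HTTP script is /etc/init.d/httpd and using an illustrative name for the GFS mount resource:

<resources>
  <ip address="192.168.5.100" monitor_link="1"/>
  <clusterfs name="gfs_www" device="/dev/vg0/lv0" mountpoint="/var/www/html" fstype="gfs2"/>
  <script name="httpd" file="/etc/init.d/httpd"/>
</resources>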

Page 10: Red Hat Cluster

Add Service Group:

Add the resources in dependency order (IP > File System > Script) so that the service starts successfully.
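Expressed in cluster.conf, the dependency order becomes nested resource references inside the service; a hedged sketch reusing the resource names assumed above (the failover domain created earlier would also be referenced here):

<service name="Webby" autostart="1">
  <ip ref="192.168.5.100">
    <clusterfs ref="gfs_www">
      <script ref="httpd"/>
    </clusterfs>
  </ip>
</service>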

Start the Webby Service:
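From any node this can also be done with clusvcadm instead of the luci button:

# clusvcadm -e Webby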

Verify failover by relocating the service to the other node:

# clusvcadm -r Webby -m station30.example.com

Quorum Disk:

First, add station10.example.com as a third node.

Page 11: Red Hat Cluster

Create a partition, /dev/sdi1, and on it a volume group and LV for the quorum disk.

The LV name is /dev/qdisk-vg/qdisk-lv.

# mkqdisk -c /dev/qdisk-vg/qdisk-lv -l qdisk

# mkqdisk -L [Run on both nodes]
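The quorum disk then has to be registered in cluster.conf (via luci's quorum partition settings or by editing the file); a hedged sketch, using the label "qdisk" from the mkqdisk command above and an illustrative ping heuristic against an assumed gateway address (interval, tko and the heuristic program are values to adapt):

<quorumd interval="2" tko="10" votes="1" label="qdisk">
  <heuristic program="ping -c 1 192.168.5.1" score="1" interval="2"/>
</quorumd>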

Page 12: Red Hat Cluster

On all Nodes:

# /etc/init.d/qdiskd restart

# chkconfig qdiskd on

Page 13: Red Hat Cluster

With 3 Nodes:

# clustat

# cman_tool status

Page 14: Red Hat Cluster

Power off Station30:

# clustat

# cman_tool status

Page 15: Red Hat Cluster

Power off Station20:

# clustat

# cman_tool status

Page 16: Red Hat Cluster

Power on Station20:

Power on Station30:
