In this tutorial I am using 3 virtual machines running CentOS 5.3 on VMware ESX 3.5. For the GFS2 file system I am using a thick-provisioned vmdk that is shared among all the virtual machines. You can also use iSCSI or Fibre Channel; that choice is up to you.
Always make sure iptables and SELinux are off on every node (or, if you know the ports and protocols used by the cluster stack, open them in iptables). If they are left on, you will run into issues.
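On CentOS 5 that can be done roughly as follows. This is a sketch for a lab setup; in production you would open the cluster ports instead of disabling the firewall entirely.

```shell
# Stop the firewall now and keep it off across reboots
service iptables stop
chkconfig iptables off

# Put SELinux into permissive mode for the running system...
setenforce 0

# ...and disable it permanently (takes effect on the next boot)
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
```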
The 3 machines I am using are called
- gfs1 == 192.168.101.100
- gfs2 == 192.168.101.101
- gfs3 == 192.168.101.103
Since I’m using VMware ESX for the 3 machines above, I will also be using VMware for fencing. The details for my test setup are below:
- ESX Host Name == esxtest
- ESX IP Address == 192.168.101.50
- ESX user login info below
login == esxuser
password == esxpass
- ESX admin login info below
login == root
password == esxpass
The first command you need to know for creating and modifying your cluster is the 'ccs_tool' command.
Below I will show you the necessary steps to create a cluster and then the GFS2 filesystem
- First step is to install the necessary RPMs:
yum -y install modcluster rgmanager gfs2 gfs2-utils lvm2-cluster cman
- Second step is to create a cluster on gfs1
ccs_tool create GFStestCluster
- Now that the cluster is created, we need to add the fencing devices.
(For simplicity you can just use fence_manual for each host: ccs_tool addfence -C gfs1_ipmi fence_manual.)
But if you are using VMware ESX like I am, you should use fence_vmware like so:
ccs_tool addfence -C gfs1_vmware fence_vmware ipaddr=esxtest login=esxuser passwd=esxpass vmlogin=root vmpasswd=esxpass port="/vmfs/volumes/49086551-c64fd83c-0401-001e0bcd6848/eagle1/gfs1.vmx"
ccs_tool addfence -C gfs2_vmware fence_vmware ipaddr=esxtest login=esxuser passwd=esxpass vmlogin=root vmpasswd=esxpass port="/vmfs/volumes/49086551-c64fd83c-0401-001e0bcd6848/gfs2/gfs2.vmx"
ccs_tool addfence -C gfs3_vmware fence_vmware ipaddr=esxtest login=esxuser passwd=esxpass vmlogin=root vmpasswd=esxpass port="/vmfs/volumes/49086551-c64fd83c-0401-001e0bcd6848/gfs3/gfs3.vmx"
- Now that we have added the fencing devices, it is time to add the nodes:
ccs_tool addnode -C gfs1 -n 1 -v 1 -f gfs1_vmware
ccs_tool addnode -C gfs2 -n 2 -v 1 -f gfs2_vmware
ccs_tool addnode -C gfs3 -n 3 -v 1 -f gfs3_vmware
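After the addfence and addnode commands above, /etc/cluster/cluster.conf should look roughly like the sketch below. The config_version and method name are whatever ccs_tool generated on your system; this is illustrative, not a file to copy verbatim.

```xml
<?xml version="1.0"?>
<cluster name="GFStestCluster" config_version="6">
  <clusternodes>
    <clusternode name="gfs1" votes="1" nodeid="1">
      <fence>
        <method name="single">
          <device name="gfs1_vmware"/>
        </method>
      </fence>
    </clusternode>
    <!-- gfs2 and gfs3 follow the same pattern with nodeid 2 and 3 -->
  </clusternodes>
  <fencedevices>
    <fencedevice name="gfs1_vmware" agent="fence_vmware" ipaddr="esxtest"
        login="esxuser" passwd="esxpass" vmlogin="root" vmpasswd="esxpass"
        port="/vmfs/volumes/49086551-c64fd83c-0401-001e0bcd6848/eagle1/gfs1.vmx"/>
    <!-- gfs2_vmware and gfs3_vmware are defined the same way -->
  </fencedevices>
</cluster>
```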
- Now we need to copy this configuration from gfs1 to the other 2 nodes (or run the exact same commands above on each of them):
scp /etc/cluster/cluster.conf root@gfs2:/etc/cluster/cluster.conf
scp /etc/cluster/cluster.conf root@gfs3:/etc/cluster/cluster.conf
- You can verify the configuration on all 3 nodes by running the commands below.
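The original command listing is missing here; ccs_tool can print back what it recorded in cluster.conf, so something along these lines should do:

```shell
# List the nodes and fence devices defined in /etc/cluster/cluster.conf
ccs_tool lsnode
ccs_tool lsfence
```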
- Once you have either copied over the config or re-run the same commands on the other 2 nodes, you are ready to start the following daemons on all nodes in the cluster.
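The daemon list is missing from the original; on RHEL/CentOS 5 the core cluster stack installed earlier is started with init scripts, presumably like this (run on every node):

```shell
# Start the cluster manager first, then the resource group manager
service cman start
service rgmanager start
```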
- You can now check the status of your cluster by running the commands below…
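The status commands are also missing; the standard RHEL 5 cluster tools for this are:

```shell
cman_tool status   # quorum state and cluster summary
cman_tool nodes    # membership and state of each node
clustat            # rgmanager's view of the cluster
```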
- If you want to test the VMware fencing, you can do so as follows (run the command below on the 1st node, using the 2nd node as the node to be fenced):
fence_vmware -a esxtest -l esxuser -p esxpass -L root -P esxpass -n "/vmfs/volumes/49086551-c64fd83c-0401-001e0bcd6848/gfs2/gfs2.vmx" -v
- Before we create the LVM2 volumes and proceed to GFS2, we need to enable clustering in LVM2.
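The commands for this step are not shown in the original; on CentOS 5 enabling clustered LVM is typically done like this (run on every node):

```shell
# Switch locking_type in /etc/lvm/lvm.conf to cluster-wide locking
lvmconf --enable-cluster

# Start the clustered LVM daemon on this node
service clvmd start
```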
- Now it is time to create the LVM2 Volumes…
pvcreate /dev/sdb
vgcreate -c y mytest_gfs2 /dev/sdb
lvcreate -n MyGFS2test -L 5G mytest_gfs2
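You can sanity-check the result before moving on; these are standard LVM2 reporting commands:

```shell
pvs   # /dev/sdb should appear as a physical volume
vgs   # mytest_gfs2 should be listed, with the 'c' (clustered) attribute set
lvs   # MyGFS2test should show up as a 5G logical volume
```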
- You should now also start clvmd on the other 2 nodes so they see the new clustered volume.
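Presumably the same init script as before, just on the remaining nodes:

```shell
# On gfs2 and gfs3:
service clvmd start
```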
- Once the above has been completed, you will now need to create the GFS2 file system.. Example below..
mkfs -t <filesystem> -p <locking protocol> -t <ClusterName>:<FileSystemName> -j <number of journals, one per cluster node> <block device>
(The second -t sets the lock table name; <ClusterName> must match the name you gave ccs_tool create, or the mount will fail.)
mkfs -t gfs2 -p lock_dlm -t GFStestCluster:MyTestGFS -j 3 /dev/mapper/mytest_gfs2-MyGFS2test
- All that is left to do on the 3 nodes is to mount the GFS2 file system:
mount /dev/mapper/mytest_gfs2-MyGFS2test /mnt/
- Once you have mounted your GFS2 file system, you can verify it with the commands below.
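The original does not say which commands; reasonable checks on RHEL 5 would be:

```shell
mount | grep gfs2    # confirm the mount and its options
gfs2_tool df /mnt    # GFS2-specific usage and journal information
df -h /mnt           # plain df works too
```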
Now it is time to wrap it up with some final commands…
- Now that we have a fully functional cluster and a mountable GFS2 file system, we need to make sure all the necessary daemons start at boot:
chkconfig --level 345 rgmanager on
chkconfig --level 345 clvmd on
chkconfig --level 345 cman on
chkconfig --level 345 gfs2 on
- If you want the GFS2 file system to be mounted at startup, you can add it to /etc/fstab (note the mount point here is /GFS, not /mnt):
echo "/dev/mapper/mytest_gfs2-MyGFS2test /GFS gfs2 defaults,noatime,nodiratime 0 0" >> /etc/fstab
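You can test the new fstab entry without rebooting; /GFS is the mount point from the line above and must exist first:

```shell
mkdir -p /GFS   # create the mount point used in the fstab entry
mount -a        # mount everything in /etc/fstab, including the new GFS2 entry
mount | grep /GFS
```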
In upcoming tutorials I will show you how to do the same as above with the Red Hat Conga GUI, and I will also show you how to optimize your GFS2 cluster setup.