
Building A Virtual Cluster with Xen (Part One)

Master NFS Server Configuration:
The master node will act as an NFS server, holding the home directories of the user accounts and exporting a directory that gives all the nodes an easy way to exchange data. To do this, we have to install the nfs-utils package and modify some files (for information on configuring NFS see, for example, the NFS Howto and the Red Hat Reference Guide). We will also install an editor, plus gcc and make, since we will need them later to compile code. As root we do the following:

 
-bash-3.00# yum install nfs-utils emacs gcc make

It may complain about the public key of fontconfig. Issue the command again and everything will be installed fine:

 
-bash-3.00# yum install nfs-utils emacs gcc make
-bash-3.00# mkdir /cshare
-bash-3.00# chmod 777 /cshare

Then we have to modify the files /etc/exports, /etc/hosts.deny, /etc/hosts.allow, /etc/fstab, and /etc/hosts with the contents shown below. Do not forget the last line in the file /etc/fstab; without it we would get "Permission denied" errors in the clients (see this page for more information):

 
-bash-3.00# cat /etc/exports
/cshare         192.168.1.0/255.255.255.0(rw,sync)
/home           192.168.1.0/255.255.255.0(rw,sync)

-bash-3.00# cat /etc/hosts.deny
portmap:ALL
lockd:ALL
mountd:ALL
rquotad:ALL
statd:ALL

-bash-3.00# cat /etc/hosts.allow
portmap: 192.168.1.0/255.255.255.0
lockd: 192.168.1.0/255.255.255.0
mountd: 192.168.1.0/255.255.255.0
rquotad: 192.168.1.0/255.255.255.0
statd: 192.168.1.0/255.255.255.0

-bash-3.00# cat /etc/fstab
# This file is edited by fstab-sync - see 'man fstab-sync' for details
/dev/sda1               /                       ext3    defaults 1 1
/dev/sda2               none                    swap    sw       0 0
none                    /dev/pts                devpts  gid=5,mode=620 0 0
none                    /dev/shm                tmpfs   defaults 0 0
none                    /proc                   proc    defaults 0 0
none                    /sys                    sysfs   defaults 0 0
none                    /proc/fs/nfsd           nfsd    defaults 0 0

-bash-3.00# cat /etc/hosts
127.0.0.1               localhost

192.168.1.10            boldo
192.168.1.11            slave1
192.168.1.12            slave2
192.168.1.13            slave3
192.168.1.14            slave4
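Since the slave names and addresses follow a simple pattern (192.168.1.1N for slaveN), the entries can be generated instead of typed; a minimal sketch:

```shell
# Generate the slave lines for /etc/hosts; the IP/name pattern is the
# one used throughout this article (192.168.1.1N <-> slaveN).
for i in 1 2 3 4; do
    printf '192.168.1.1%d\t\tslave%d\n' "$i" "$i"
done
```

Appending its output to /etc/hosts gives the four slave entries shown above.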

Lastly, we will set NFS to start automatically and create a user account:

 
-bash-3.00# chkconfig --level 345 nfs on
-bash-3.00# useradd angelv
-bash-3.00# passwd angelv
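Before rebooting, the NFS setup can be checked right away (a sketch; chkconfig above only affects future boots, and on CentOS 4 NFS depends on portmap):

```shell
# Start portmap and NFS now, then verify what is being exported.
# Service names are the CentOS 4 ones used elsewhere in this article.
service portmap start
service nfs start
exportfs -v                 # list the active exports with their options
showmount -e localhost      # what clients will see
```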

And now we can reboot the machine to test it. (To reboot it cleanly, issue the command halt from inside the master node (boldo), and then issue the command sudo xm create -c /etc/xen/cray/master.cfg from the host node (yepes).)

Slave Configuration:
The slaves also need some files modified to use the NFS server in the master node, and we need to install portmap on them, but we don't have an Internet connection yet. Both things can be done from the host machine (yepes). First we obtain the portmap RPM that we will install in the slaves:

 
angelv@yepes:~$ wget -nd \
  http://mirror.centos.org/centos/4/os/i386/CentOS/RPMS/portmap-4.0-63.i386.rpm

After this, we will repeat the following steps for each of the four slaves (substituting the corresponding slave node for slave1): we mount the OS image and modify the files ifcfg-eth0, fstab, network, and hosts so that their contents are similar to what is shown below. These files should be identical for all four slaves, except for the IPADDR field in the file ifcfg-eth0 (x.x.x.11 for slave1, x.x.x.12 for slave2, etc.) and the HOSTNAME field in the file network.
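Since only those two fields differ between slaves, the per-slave edits can be scripted; a minimal sketch (customize_slave is a hypothetical helper, assuming the image is already mounted at the given path):

```shell
# Hypothetical helper: point it at a mounted slave image and give it the
# slave number; it rewrites the two per-slave fields (IPADDR and HOSTNAME).
customize_slave() {
    local root="$1" n="$2"
    sed -i "s/^IPADDR=.*/IPADDR=192.168.1.1${n}/" \
        "${root}/etc/sysconfig/network-scripts/ifcfg-eth0"
    sed -i "s/^HOSTNAME=.*/HOSTNAME=slave${n}/" \
        "${root}/etc/sysconfig/network"
}
```

For example, customize_slave tmp_img 2 after mounting slave2's image.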

 
angelv@yepes:~$ sudo mount -o loop /opt/xen/vcluster/slave1/centos.4-3.img tmp_img/

angelv@yepes:~$ cat tmp_img/etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
DEVICE=eth0
BOOTPROTO=none
IPADDR=192.168.1.11
ONBOOT=yes
NETMASK=255.255.255.0
USERCTL=no
PEERDNS=no
NETWORK=192.168.1.0
BROADCAST=192.168.1.255

angelv@yepes:~$ cat tmp_img/etc/fstab
# This file is edited by fstab-sync - see 'man fstab-sync' for details
/dev/sda1               /                       ext3    defaults 1 1
/dev/sda2               none                    swap    sw       0 0
none                    /dev/pts                devpts  gid=5,mode=620 0 0
none                    /dev/shm                tmpfs   defaults 0 0
none                    /proc                   proc    defaults 0 0
none                    /sys                    sysfs   defaults 0 0
192.168.1.10:/home      /home                   nfs     rw,hard,intr 0 0
192.168.1.10:/cshare    /cshare                 nfs     rw,hard,intr 0 0

angelv@yepes:~$ cat tmp_img/etc/sysconfig/network
NETWORKING=yes
HOSTNAME=slave1

angelv@yepes:~$ cat tmp_img/etc/hosts
127.0.0.1               localhost

192.168.1.10            boldo
192.168.1.11            slave1
192.168.1.12            slave2
192.168.1.13            slave3
192.168.1.14            slave4

To finalize the configuration of the slaves, we need to install portmap and create the cshare directory and a user account. We do this by chrooting into the directory where we mounted the OS image.

 
angelv@yepes:~$ sudo cp portmap-4.0-63.i386.rpm tmp_img/tmp/
angelv@yepes:~$ sudo chroot tmp_img/

bash-3.00# rpm -ivh tmp/portmap-4.0-63.i386.rpm
bash-3.00# mkdir cshare
bash-3.00# useradd -M angelv
bash-3.00# passwd angelv
bash-3.00# exit

angelv@yepes:~$ sudo umount /home/angelv/tmp_img
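The whole per-slave pass can also be wrapped in a loop; a sketch using the paths from this article (provision_slave is a hypothetical helper; the file edits and account creation are left to the manual steps above, and DRY_RUN=1, the default here, only prints the privileged commands instead of executing them):

```shell
# Sketch: loop the per-slave steps over the four images.
# Paths and the RPM name are the ones used above.
DRY_RUN=${DRY_RUN:-1}
run() { [ "$DRY_RUN" = 1 ] && echo "$*" || "$@"; }

provision_slave() {
    i=$1
    run sudo mount -o loop /opt/xen/vcluster/slave${i}/centos.4-3.img tmp_img/
    run sudo cp portmap-4.0-63.i386.rpm tmp_img/tmp/
    # (edit tmp_img/etc/... for slave${i} here, as shown above)
    run sudo chroot tmp_img/ rpm -ivh /tmp/portmap-4.0-63.i386.rpm
    run sudo umount tmp_img/
}

for i in 1 2 3 4; do provision_slave "$i"; done
```

Setting DRY_RUN=0 would actually run the commands; reviewing the printed plan first is a cheap safety check when loop-mounting images as root.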

We start slave1 and verify that the directories are actually mounted over NFS, that user angelv can write in the home directory, and that the master node answers to ping.

 
angelv@yepes:~$ sudo xm create -c /etc/xen/cray/slave1.cfg

[...]
CentOS release 4.3 (Final)
Kernel 2.6.12.6-xenU on an i686

slave1 login: root
Password:                (remember that the password is "password" by default)

-bash-3.00# passwd
-bash-3.00# su - angelv

[angelv@slave1 ~]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             986M  336M  600M  36% /
none                   35M     0   35M   0% /dev/shm
192.168.1.10:/home    986M  466M  470M  50% /home
192.168.1.10:/cshare  986M  466M  470M  50% /cshare

[angelv@slave1 ~]$ touch delete.me
[angelv@slave1 ~]$ touch /cshare/delete.me

[angelv@slave1 ~]$ ping boldo
PING boldo (192.168.1.10) 56(84) bytes of data.
64 bytes from boldo (192.168.1.10): icmp_seq=0 ttl=64 time=0.163 ms
64 bytes from boldo (192.168.1.10): icmp_seq=1 ttl=64 time=0.154 ms

After repeating the previous steps for the four slaves, we now have the basic setup for our virtual cluster. On the host machine we can see that there are (besides Domain-0) five machines running, the master and the four slaves, by issuing the command sudo xm list. This configuration is the basis for our next installment, which will show how to install the Modules package for easily switching environments, the C3 command suite, a version of MPICH for running parallel programs, and Torque/Maui for job queue management.

Angel de Vicente, Ph.D., has been working for the last three years at the Instituto de Astrofisica de Canarias, supporting the astrophysicists with scientific software and being in charge of supercomputing at the institute. In the process of upgrading their Beowulf cluster, he lives of late in a world of virtual machines and networks, where he feels free to experiment.

    ©2005-2012 Copyright Seagrove LLC, Some rights reserved. Except where otherwise noted, this site is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 2.5 License. The Cluster Monkey Logo and Monkey Character are Trademarks of Seagrove LLC.