Yuming It Up
You will need to install the perl-Term-Screen rpm on the master node first. However, since we have already configured the yum repositories, this is very easy to do. Just use yum to install it for you by entering:

% yum install perl-Term-Screen

At the end of a long list of header files, you should see something like the following (enter "y"):
I will do the following: [install: perl-Term-Screen 1.02-2.redhat.noarch] Is this ok [y/N]:
Now we're ready to install Warewulf. Enter the following:
% yum install warewulf

As with perl-Term-Screen above, yum will report what it is doing and then ask something similar to the following (enter "y"):
I will do the following: [install: warewulf 2.2.4-1.i386] I will install/upgrade these to satisfy the dependencies: [deps: warewulf-tools 2.2.4-1.i386] [deps: dhcp 2:3.0.1rc14-1.i386] [deps: tftp-server 0.33-3.i386]

In this case, yum is installing some other package(s) on which Warewulf depends. For example, if you didn't install dhcp, then it will be installed as part of installing Warewulf.
Almost There
We're almost there -- only a few more steps. The nodes in a Warewulf cluster can boot a couple of ways. We're going to have them boot using PXE boot. Consequently, we need to use syslinux to get a PXE-capable boot loader. However, this is really easy to do. Enter the following commands from the head node:
% wget -P /tmp http://www.kernel.org/pub/linux/utils/boot/syslinux/Old/syslinux-2.11.tar.gz
% cd /tmp
% tar xzvf syslinux-2.11.tar.gz
% cd syslinux-2.11

You will see a file called pxelinux.0. Copy this file to /tftpboot.
% cd /tmp/syslinux-2.11
% cp pxelinux.0 /tftpboot
Before we start up Warewulf, we need to edit the configuration files to match our cluster. The configuration files are located in /etc/warewulf on the master node. The first file we will edit is /etc/warewulf/master.conf. At the top of the file, you will need to change lines to reflect the following:
nodename = kronos
node-prefix = kronos

This tells Warewulf that the nodes will know the master node as kronos and that the nodes will have the "kronos" prefix. By the way, feel free to name your cluster whatever you want. We also need to make these changes:
node number format = 02d
boot device = eth2
admin device = eth2
sharedfs device = eth2
cluster device = eth1

This tells Warewulf which interfaces are used for which functions and keeps the number suffixes short. Next, we need to edit /etc/warewulf/vnfs.conf to reflect the following:
kernel modules = modules.dep sunrpc lockd nfs jbd ext3 mii crc32 e100 e1000 bonding sis900

In this case, we are adding only the kernel modules that we might need.
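Incidentally, the node number format = 02d value shown earlier works like a printf-style width specifier. A quick illustration (the names here are just examples) of how a node prefix and number combine into a hostname:

```shell
# Illustration only: "02d" zero-pads the node number to two digits,
# so node 0 on a cluster named kronos becomes kronos00.
printf '%s%02d\n' kronos 0
printf '%s%02d\n' kronos 12
```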
The last file we need to edit is /etc/warewulf/nodes.conf. This file tells Warewulf how the nodes are configured. Edit the file to reflect these changes:
admin device = eth0
sharedfs device = eth0
cluster device = eth1
users = laytonjb, deadline

Again, we are telling Warewulf which interfaces handle which type of traffic. We also set the user names -- of course, set these to your specific names. Finally, "un-comment" (remove the "#") the following line:
# pxeboot = pxelinux.0

This change tells Warewulf that the nodes will boot using PXE.
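If you prefer, the same edit can be made non-interactively with sed. This is just a sketch, shown here against a temporary copy of the line; on the master node you would point it at /etc/warewulf/nodes.conf instead:

```shell
# Sketch: strip the leading "#" from the pxeboot line.
# Demonstrated on a temporary file rather than the live config.
conf=$(mktemp)
echo '# pxeboot = pxelinux.0' > "$conf"
sed -i 's/^#[[:space:]]*\(pxeboot\)/\1/' "$conf"
cat "$conf"
rm -f "$conf"
```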
Really Almost There
Now we are ready to configure Warewulf on the head node. This task is actually very easy to do. Just run the following command.
% wwmaster.initiate --init

If everything is set up correctly, you should be able to just hit "Enter" each time the script asks for input. This script does a number of things: it starts the Warewulf daemon on the head node, configures NFS exports for the compute nodes, starts dhcpd on the head node, and creates the VNFS in /vnfs/default. It also creates a compressed image that is sent to each node. You will see a lot of yum output while the VNFS is building. When wwmaster.initiate is finished, the image file is placed in /tftpboot/generic.img.gz.
Kronos Lives
At this point, none of the worker nodes should be powered up. Another very nice thing about Warewulf is that you can just turn on the nodes, and Warewulf will find them, configure DHCP, boot the nodes using PXE, and send the ramdisk image.
The command to start the process is wwnode.add. Just run the following command on the master node.
% wwnode.add --scan

This command will keep running forever, so when we're done, you will need to use Ctrl-C to stop it.
Once the command starts, Warewulf listens for DHCP requests from new nodes coming up on the cluster network. So, after you run the command, power up the first node and watch the screen. When Warewulf finds a node, the output will look something like the following.
Scanning for new nodes..................
ADDING: '00:0b:6a:3d:71:cd' to nodes.conf as [kronos00]
Re-writing /etc/dhcpd.conf
Shutting down dhcpd: [ OK ]
Starting dhcpd: [ OK ]
Shutting down warewulfd: [ OK ]
Starting warewulfd: [ OK ]
Scanning for new nodes.
ISPY: ':00:0b:6a:3d:71:cd' making DHCP request

Once this happens, open another window and try pinging the node.
% ping kronos00

When the ping is successful, the node should be up and ready. Try using rsh to log in to the node and see what it looks like.
% rsh kronos00
Be sure to run the df -h command to see how much space is used by the ramdisk (it's not much at all). Repeat this process for the second node, being sure to wait for the node to come up. Then boot the third node, and so on. After the last node is done, stop wwnode.add --scan by entering a Ctrl-C. It may also be useful/interesting to place a monitor on the compute nodes as they boot.
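Once all the nodes are booted, a quick loop can confirm they all answer. This is only a hypothetical convenience script (the node count and naming are assumptions based on the three-node kronos example used here); adjust NODES for your cluster:

```shell
# Hypothetical check: ping each node once and report its status.
NODES=3
for n in $(seq 0 $((NODES - 1))); do
    node=$(printf 'kronos%02d' "$n")
    if ping -c 1 -W 1 "$node" > /dev/null 2>&1; then
        echo "$node is up"
    else
        echo "$node is not responding"
    fi
done
```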
You will only have to run wwnode.add --scan once for the cluster. Call it "node discovery" if you like. After this, you will only have to turn a node on, and it will boot via PXE and get its image from the master node.
At this point, it may be best to check and make sure the warewulfd daemon is running. As root, enter /usr/sbin/warewulfd. If warewulfd is already running, it will let you know.
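Another way to check, without touching the daemon at all, is to look for the process by name with pgrep (part of the standard procps tools on Red Hat-era systems):

```shell
# Look for a running warewulfd process by name and report the result.
if pgrep warewulfd > /dev/null; then
    echo "warewulfd is running"
else
    echo "warewulfd is not running"
fi
```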
Next, to allow user access to the nodes, run wwnode.sync. And finally, enter wwtop. You should see all your nodes listed in a nice "top-like" display. Let us know if you have any questions. We will pick up from there in part three.
Sidebar Resources:
Warewulf
Fedora
Douglas Eadline is the