Cluster Newbie

Don't know a thing about clusters. Not to worry. Prodigious cluster scrivener Robert Brown is here to present the basics in a clear and concise set of introductory columns (and then some). Welcome to High-tech hardball.

Ready for Real Parallel Computation, as if there were any other

In the last column we introduced the Parallel Virtual Machine (PVM) subroutine library, the original toolset that permitted users to convert a variety of computers on a single network into a "virtual supercomputer". We reviewed its history and discussed how it works, then turned our attention to what you might have to do to install it and make it work on your own cluster playground (which might well be a very simple Network of Workstations -- a NOW cluster -- made up of ordinary workstations on an ordinary local area network).

Pass The Messages Please

The idea of a homemade parallel supercomputer predates the actual Beowulf project by years if not decades. In this column (and the next), we explore "the" message passing library that began it all and learn some important lessons that extend our knowledge of parallelism and scaling. We will do "real" parallel computing, using the message passing library that made the creation of Beowulf-style compute clusters possible: PVM. PVM stands for "Parallel Virtual Machine", and that's just what this library does -- it takes a collection of workstations or computers of pretty much any sort on a TCP/IP network and lets you "glue" them together into a parallel supercomputer.

That Free Lunch You Wanted...

Clustering seems almost too good to be true. If you have work that needs to be done in a hurry, buy ten systems and get done in a tenth of the time. If only it worked with kids and the dishes. Alas, whether it's kids and dishes or cluster nodes and tasks, linear speedup on a divvied-up task is too good to be true, according to Amdahl's Law, which strictly limits the speedup your cluster can hope to achieve.
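Amdahl's Law says that if a fraction P of a job can be parallelized (and the remaining 1 - P must run serially), the speedup on N nodes is S(N) = 1 / ((1 - P) + P/N). A minimal sketch of what that limit looks like in practice (the 95% figure is an illustrative assumption, not a measurement):

```python
def amdahl_speedup(p, n):
    """Amdahl's Law: speedup on n nodes when a fraction p of the
    work parallelizes and the rest (1 - p) stays serial."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelized, ten nodes deliver
# well under a 10x speedup:
print(amdahl_speedup(0.95, 10))        # about 6.9x

# and no number of nodes can beat the 1/(1-p) = 20x ceiling:
print(amdahl_speedup(0.95, 1_000_000))  # just under 20x
```

The serial fraction, however small, dominates as N grows, which is why "buy ten systems, finish in a tenth of the time" is an upper bound you rarely reach.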

In the first two columns we explored parallel speedup on a simple NOW (Network of Workstations) style cluster using the provided task and taskmaster programs. In the last column, we observed some fairly compelling relations between the amount of work we were doing in parallel on the nodes, the communications overhead associated with starting the jobs up and receiving their results, and the speedup of the job as we split it up across more and more nodes.
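The relation between work, overhead, and speedup can be sketched with a toy timing model: if the computation itself takes T_p seconds serially and each node costs roughly T_c seconds of startup and result-collection overhead, the speedup on n nodes is about T_p / (T_p/n + n*T_c). The numbers below are illustrative assumptions, not measurements from the columns:

```python
def speedup(n, t_parallel=100.0, t_comm=0.5):
    """Toy model: speedup of a 100-second job split across n nodes,
    paying 0.5 seconds of setup/communication overhead per node
    (illustrative parameters, not measured values)."""
    return t_parallel / (t_parallel / n + t_comm * n)

# Speedup rises, flattens, then actually FALLS as per-node
# overhead overtakes the shrinking per-node work:
for n in (1, 2, 4, 8, 16, 32):
    print(n, round(speedup(n), 2))
```

The model reproduces the behavior observed in the last column: past a certain point, adding nodes makes the job slower, because the fixed communication cost per node grows while each node's share of the work shrinks.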

Putting your cluster to work on parallel tasks.

In our previous installment, we started out by learning how to use pretty much any Linux LAN as the simplest sort of parallel compute cluster. In this column we continue our hands-on approach to learning about clusters and play with our archetypical parallel task on our starter cluster to learn when it runs efficiently and, just as important, when it runs inefficiently.

If you've been following along, in the last column we introduced cluster computing for the utter neophyte by "discovering" that nearly any Linux LAN can function as a cluster and do work in parallel. Following a few very general instructions, you were hopefully able to assemble (or realize that you already owned) a NOW (Network of Workstations) cluster, which is little more than a collection of unixoid workstations on a switched local area network (LAN). Using this cluster, you ran a Genuine Parallel Task (tm).

The simplest parallel cluster (one that you might already have)!

Introduction

Getting started with what appears to be a very powerful, very complex idea (in computing, at least) is often a daunting proposition, and cluster computing is no exception. There is so much to learn! So many things can go wrong! It might require new, specialized, expensive hardware and software! Looking over some of the articles on this website can easily reinforce this view, as some of them describe very sophisticated tools and concepts.


©2005-2023 Seagrove LLC. Some rights reserved. Except where otherwise noted, this site is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license. The Cluster Monkey Logo and Monkey Character are Trademarks of Seagrove LLC.