
Distributed PID Space

In general, a distributed PID space cluster requires user authentication on each node, rsh/ssh, and either NFS or a local copy of the OS. Each compute node generally has a full Linux distribution installed and can often be booted as a stand-alone Linux host when removed from the cluster. The advantages of this type of design are:

  • Built "on top of" an existing Linux distribution
  • No need for kernel patches or modification
  • Each node is an "island" (independent workstation)
  • Can scale easily, just add nodes
  • Good tools available to manage PID spaces

The possible drawbacks of this method are:

  • No global process ID (PID) space; the head node is blind to processes running on other nodes
  • Repeated authentication required on nodes
  • Software skew, and the need for synchronization and imaging of nodes (in some distributions)

The disadvantages of this design can be mitigated somewhat by using some of the distributions mentioned in the Resources sidebar. For instance, the Warewulf package provides a nice method for managing version skew on nodes by allowing all administration to be done on the head node. Even with all the tools, however, it is still possible to create a situation where direct process control would be of great value.
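To make the "blind head node" drawback concrete, the following is a minimal Python sketch of the kind of polling a monitoring tool ends up doing on a distributed PID space cluster: it walks a node list over ssh and scrapes each node's process table. The node names and the job name "my_solver" are made-up examples, and passwordless ssh to every node is assumed.

    import subprocess

    # Hypothetical compute node names -- substitute your cluster's host list.
    NODES = ["node01", "node02", "node03"]

    def remote_processes(node, pattern):
        """Return (pid, command) pairs for processes on `node` whose command
        contains `pattern`.  The PIDs are only meaningful on the node that
        reported them -- there is no global PID space to consult."""
        result = subprocess.run(
            ["ssh", node, "ps", "-eo", "pid,comm"],
            capture_output=True, text=True, check=True,
        )
        matches = []
        for line in result.stdout.splitlines()[1:]:   # skip the ps header line
            pid, comm = line.strip().split(None, 1)
            if pattern in comm:
                matches.append((int(pid), comm))
        return matches

    if __name__ == "__main__":
        # Build a cluster-wide view of a hypothetical job named "my_solver".
        for node in NODES:
            for pid, comm in remote_processes(node, "my_solver"):
                # Controlling this job means another ssh round trip
                # (e.g. ssh node01 kill <pid>); the head node cannot
                # signal the remote PID directly.
                print(f"{node}: pid {pid} ({comm})")

Every tool that wants a cluster-wide process view repeats some variant of this loop, which is why node management and authentication setup get so much attention in the distributed PID space distributions.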

Bproc Clusters

In a Bproc cluster there is no authentication required on each node, and bpsh (the Bproc version of sh) provides a method of starting remote jobs. There is no local copy of the OS on the worker nodes; the kernel and supporting software are downloaded at boot time from the head node. The Bproc approach has the following advantages:

  • All jobs show up in the head node's process table.
  • Jobs can be migrated to and from any of the nodes.
  • Ability to run disk-less or with hard drives.
  • Worker nodes do not require version tracking and imaging.
  • No need to manage remote PIDs. Tools such as batch schedulers and system monitors are much easier to implement (illustrated in the sketch below).
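
For comparison with the ssh polling above, here is the same monitoring task sketched for a Bproc-style global PID space. It assumes that remote jobs appear in the head node's own process table and that a signal sent to such a PID is forwarded to the node actually running the process; the job name "my_solver" is again a made-up example.

    import os
    import signal

    def find_jobs(pattern):
        """Scan the head node's own /proc for processes whose command name
        contains `pattern`.  With a global PID space, jobs running on the
        worker nodes appear here alongside local processes."""
        jobs = []
        for entry in os.listdir("/proc"):
            if not entry.isdigit():
                continue
            try:
                with open(f"/proc/{entry}/comm") as f:
                    comm = f.read().strip()
            except OSError:
                continue              # the process exited while we scanned
            if pattern in comm:
                jobs.append((int(entry), comm))
        return jobs

    def stop_job(pid):
        """Send SIGTERM to the job with an ordinary local call; under Bproc
        the signal reaches the process wherever it is running, with no
        ssh round trip and no per-node authentication."""
        os.kill(pid, signal.SIGTERM)

    if __name__ == "__main__":
        # List a hypothetical job named "my_solver"; call stop_job(pid) to end it.
        for pid, comm in find_jobs("my_solver"):
            print(f"pid {pid} ({comm})")

The job itself would have been started from the head node with bpsh, so at no point is a login to a worker node required.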

The possible drawbacks of the Bproc approach are:

  • Some application packages do not support this environment
  • Requires kernel modification
  • Troubleshooting problems on individual nodes is limited because there is no user-accessible local OS

A global PID space is definitely desirable in a cluster environment. The Bproc approach accomplishes this in a very efficient fashion that is attractive to many HPC cluster administrators. However, some administrators have found the distributed PID space model to be a very workable solution as well. No matter how you choose to manage your PID space, there is a large amount of experience in both camps. You can find much more on this topic by consulting the various distributions in the Resources sidebar. As always, the options are plentiful, so choose what works best for you and your users.

Sidebar One: Resources

Bproc (has not been updated since 2001)

The following are some popular distributions and distribution managers. A ($) indicates a commercial product.

Distributed PID Space

Rocks Clusters (Disk-full nodes)

Oscar (Disk-full nodes)

Oscar HA (High Availability)

Clic (Disk-full nodes)

Clusterit (For workstation networks)

Warewulf (RAM Disk Nodes)

Onesis (NFS Based Nodes)

Scali Manage ($) (Disk-full nodes)

Single System Image

openMosix (Automatic process migration)

SSI (Open Single System Image)

Scyld/Penguin ($) (Bproc directed migration)

ClubMask (Bproc directed migration)

This article was originally published in ClusterWorld Magazine. It has been updated and formatted for the web. If you want to read more about HPC clusters and Linux you may wish to visit Linux Magazine.

Douglas Eadline is the swinging Head Monkey at ClusterMonkey.net.

