
Cluster mechanicals and cluster distros - Me likes!

The Beowulf mailing list provides detailed discussions about issues concerning Linux HPC clusters. In this article I review some postings to the Beowulf list about using a single power supply for multiple nodes, a discussion about cooling and general machine room topics, and some thoughts on cluster distribution concepts.

One Power Supply For Multiple Nodes

More and more people are thinking about custom cases and custom mountings for their clusters. On June 28, 2004, Frank Joerdens posted to the Beowulf mailing list asking if anyone sold power supplies in the kilowatt range so he could attach several nodes to a single power supply. He also asked if anyone had experience with them, what the price was, and if they were any good.

List contributor Alvin Oga posted that a 700W to 1000W power supply could probably provide power for ten or so Mini-ITX systems. He also thought you could save a few dollars by using a single power supply for multiple nodes, although he noted that, in the end, buying a small power supply and case for each system may or may not be cheaper.
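A quick back-of-the-envelope check shows how Alvin's estimate works out. This is a minimal sketch; the per-node draw and derating below are my assumptions for illustration, not figures from the thread:

```python
# Rough power-budget check for a single large supply feeding
# several Mini-ITX nodes. The per-node draw and derating are
# assumed for illustration, not figures from the thread.
supply_watts = 1000          # upper end of Alvin's 700-1000 W range
node_watts = 85              # assumed average draw per Mini-ITX node
derating = 0.8               # assumed 20% headroom on the supply

usable_watts = supply_watts * derating
nodes = int(usable_watts // node_watts)
print(f"{nodes} nodes on a {supply_watts} W supply "
      f"({usable_watts:.0f} W usable)")
# -> 9 nodes, consistent with Alvin's "ten or so" estimate
```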

Joel Jaeggli also posted that he thought a single power supply for multiple nodes would result in very large conductors. He recommended going with telecommunication DC power supplies (Note: Rackable Systems is already doing this in production racks). According to Joel, you could then use a DC-DC power supply on each node that is very efficient and very compact. Joel also thought that 1 kW would be enough for 4-6 dual Opteron nodes. Alvin Oga responded that telecommunication power supplies are much more expensive than typical power supplies (5x-10x in Alvin's opinion). He also pointed out that if you lose a power supply, you lose the compute capability of every node attached to it.
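A short Ohm's-law sketch makes Joel's point concrete: current scales as I = P/V, and conductor size scales with current. The voltages below are the usual ATX and telecom figures, applied to Joel's 1 kW number for illustration:

```python
# Why distributing low-voltage DC means thick cables: current
# scales as I = P / V, so 12 V distribution needs very heavy
# conductors while 48 V telecom DC keeps the current manageable.
power_watts = 1000           # Joel's 1 kW figure

for volts in (12, 48):       # ATX main rail vs. telecom DC
    amps = power_watts / volts
    print(f"{power_watts} W at {volts} V -> {amps:.0f} A")
# 1000 W at 12 V -> 83 A   (very heavy conductors)
# 1000 W at 48 V -> 21 A   (ordinary wiring)
```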

Frank Joerdens then responded that he thought large conductors might not be such a big deal because you could get creative, for example by using aluminum tubes that double as part of the structure. He also thought that such large power supplies might be expensive.

Then Dr. Power himself (Jim Lux) posted to this thread. Jim said that with modern PWM power supply design, the maximum efficiency is largely independent of the power output (the losses are about the same for one large power supply or a bunch of small ones). However, Jim pointed out that efficiency is not a big driver in typical PC power supplies. Jim also provided some general comments. He said that a single large power supply will have fewer components than multiple power supplies, so the probability of failure is lower. However, if you do lose one, then the impact is larger. Jim also pointed out that running long (2-3 meter) lengths of cable to the motherboards would introduce problems because of the resistive voltage drop over that length. He also mentioned that if you want to remove the nodes, you will have to think about connectors and/or service loops in the cables.
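To put rough numbers on Jim's cabling point, here is a minimal sketch of the resistive drop over a 3 meter run; the current and wire cross-section are assumed for illustration, not taken from the thread:

```python
# Resistive voltage drop over a long low-voltage run:
# V_drop = I * R, with R = rho * length / area (the round
# trip doubles the effective length).
RHO_CU = 1.68e-8             # resistivity of copper, ohm-meters

def drop_volts(amps, length_m, area_mm2):
    ohms = RHO_CU * (2 * length_m) / (area_mm2 * 1e-6)
    return amps * ohms

# Assumed: 20 A on the 12 V rail over 3 m of 2.5 mm^2 wire
vd = drop_volts(20, 3.0, 2.5)
print(f"drop: {vd:.2f} V ({vd / 12:.1%} of the 12 V rail)")
# -> about 0.81 V (6.7%), outside the ATX +/-5% tolerance
```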

Finally, Frank Joerdens, the originator of the thread, posted that he agreed with Jim's comments and then mentioned that veering away from COTS (Commercial Off The Shelf) doesn't really buy you anything.

This discussion was very interesting because it showed that people are "thinking outside of the box" to further improve clusters, and that informed opinions are one of the trademarks of the Beowulf list.

Cooling Units? Raised Floor?

As you can tell from many of the postings to the Beowulf list in the last year, power consumption, power usage, and machine room design are becoming increasingly important issues. Brian Dobbins posted to the list that he had recently put together a machine room design but wanted to get opinions on the cooling design and on the layout in general. He had some specific questions about his design and cited ClusterMonkey's own Robert Brown and his Linux Magazine article on machine room design (also see Getting Serious: Cluster Infrastructure).

Jim Lux was the first to respond to Brian's post. Jim thought the amount of cooling Brian proposed (4.7 tons) was fairly small (household AC units are 3-5 tons). Jim also thought that a raised floor was not such a big issue if you only have one row of racks, as Brian did, though if you have rows and rows of racks, a raised floor might be a good idea. Finally, Jim suggested partitioning the systems across several UPS units so that they don't all go down together, and, on a practical note, adding a coat rack for jackets and a temperature/humidity recorder.
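For readers unfamiliar with refrigeration tons, here is a short conversion of Brian's 4.7-ton figure using the standard definition; the per-node wattage is an assumed example, not from the thread:

```python
# One refrigeration ton = 12,000 BTU/hr ~= 3.517 kW of heat
# removal, so Brian's cooling budget converts as follows.
KW_PER_TON = 3.517

tons = 4.7                   # Brian's proposed cooling capacity
cooling_kw = tons * KW_PER_TON
print(f"{tons} tons ~= {cooling_kw:.1f} kW of heat removal")

node_watts = 250             # assumed draw per 1U node
print(f"covers roughly {int(cooling_kw * 1000 // node_watts)} such nodes")
# -> about 16.5 kW, roughly 66 nodes
```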
