- Published on Monday, 01 December 2008 11:26
- Written by Jeff Layton
Solid State Storage
Another theme that I think is close to the GPU theme in magnitude is solid-state storage. For some time we've all seen articles coming out about SSDs. They are still expensive and have limitations that many people are aware of (e.g., they can actually lose data), but there were two companies that I would like to highlight: Texas Memory and Solid Access.
One of the reasons that SSDs and the like have become so popular is that people are looking for increased performance and possibly lower power consumption for their applications (even if it is a "perceived" need for increased performance). But in general, people are starting to examine creating "tiers" for the storage behind a file system with HSM (Hierarchical Storage Management). Figure One below illustrates the concept.
The width of the triangle indicates capacity and the height indicates performance (however you want to measure performance - throughput, IOPS, etc.). The general premise of the illustration is that as you move up the triangle, costs increase as well. So faster storage costs more (makes sense). Therefore, to save money, don't put all of your data on the fastest, most expensive storage. It's better to put only the data that needs extremely fast storage on something like SSDs or ramdisks, and then move the rest to something a lot slower, such as SATA drives with limited bandwidth to the file system. This is the HSM concept (move the data up and down the tiers as needed). So people are looking at SSDs and ramdisks to get the best performance possible, but they want to combine them with existing storage to be more cost effective.
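To make the HSM idea concrete, here is a minimal sketch of a tiering policy in Python. Everything in it is hypothetical (the tier names and thresholds are not from any real product): hot data stays on the fast, expensive tier, and cold data migrates down to cheap SATA.

```python
# Toy HSM-style tiering policy (all names and thresholds here are
# hypothetical, just to illustrate the move-data-up-and-down concept).
TIERS = ["ramdisk", "ssd", "sata"]  # fastest/most expensive -> slowest/cheapest

def pick_tier(seconds_since_last_access):
    """Hot data goes on the fast tier; cold data migrates down."""
    if seconds_since_last_access < 60 * 60:          # touched in the last hour
        return "ramdisk"
    elif seconds_since_last_access < 24 * 60 * 60:   # touched in the last day
        return "ssd"
    return "sata"                                    # everything else is cold

# Example: a file last read three hours ago belongs on the SSD tier.
print(pick_tier(3 * 60 * 60))  # -> ssd
```

A real HSM system makes the same kind of decision, just with far more sophisticated policies (and it moves the blocks transparently underneath the file system).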
Texas Memory has been around for a number of years, but I think their importance in the HPCC storage market is about to take a quantum leap because of the tiering approach. They have a variety of products that use either flash SSDs or RAM as the storage medium. For example, they have a unit called the RamSan-500 that consists of 1TB to 2TB of RAID-protected flash along with 16GB to 64GB of cache. It can be connected via 4Gb/s FC links (two to eight of them). This box alone can do 2GB/s of throughput and 100,000 IOPS from the flash storage (as a comparison, a single hard drive can do maybe 50MB/s and around 100 IOPS). Their RamSan-440 is a RAM-based storage unit with 256GB to 512GB of storage. It can do up to 4.5GB/s of throughput and 600,000 IOPS.
Texas Memory also has larger options, including a 42U rack of flash-based storage with a memory cache. The RamSan-5000 has up to 10-20TB of flash-based storage and 160GB to 640GB of cache. In aggregate, it can do 20GB/s to the flash storage and achieve over 1,000,000 IOPS. Keep an eye on Texas Memory - they are going to start shaking up the HPCC storage market.
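A quick back-of-envelope calculation using the numbers above shows why these boxes are interesting for tiering (the drive figures are the rough ones quoted earlier, not vendor specs):

```python
# How many ordinary hard drives would it take to match one flash unit?
# Using the rough numbers from the text: the RamSan-500 does ~100,000 IOPS
# and ~2 GB/s; a single hard drive manages ~100 IOPS and ~50 MB/s.
flash_iops, disk_iops = 100_000, 100
flash_bw_mb, disk_bw_mb = 2_000, 50

print(flash_iops // disk_iops)    # drives needed to match the IOPS: 1000
print(flash_bw_mb // disk_bw_mb)  # drives needed to match the throughput: 40
```

Throughput you can buy by adding spindles, but matching the IOPS would take on the order of a thousand drives - which is exactly the workload you want to push onto the fast tier.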
The other company that has an SSD solution as a stand-alone unit is Solid Access. They have several products that offer various approaches to adding solid-state storage. The base product, the USSD 200, is a 2U box that has a maximum capacity of 128GB but with a throughput of 3.6GB/s when you use multiple FC links. It can be connected in a variety of ways including 320 MB/s SCSI-3 Ultra-wide LVD, 3 Gb/s SAS, and 4 Gb/s FC.
During SC08, they also announced a new 1U box (the USSD 300 series) that has up to 256GB of flash storage. It can do 100,000 IOPS per FC port, and 4GB/s with aggregated links. They also announced the USSD 320, a 2U unit with up to 256GB of storage.
TACC and Visualization
While I don't think it was a "theme" of the show, the TACC announcement of their new visualization center, which includes a new viz wall called Stallion, deserves a mention. This project is very noteworthy because it's built entirely from commodity parts and uses Kubuntu Linux. It has 24 Dell XPS 690 workstations (one of them is a head node). Each of the 23 compute nodes has two Nvidia graphics cards (each with 1GB of video memory), 4.5GB of system memory, and a single Intel quad-core CPU. These are connected to a total of 75 Dell 30" monitors (so each workstation drives three or four of them). The monitors are capable of 2560 x 1600 resolution and are arranged in 15 columns of 5 monitors each. That's a total of 307 million pixels.
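The pixel count is easy to sanity-check from the 15-by-5 arrangement of 2560 x 1600 panels:

```python
# Sanity-check Stallion's pixel count: 15 columns x 5 rows of
# 2560 x 1600 panels.
columns, rows = 15, 5
width, height = 2560, 1600

total_pixels = columns * rows * width * height
print(total_pixels)  # -> 307200000, i.e. the "307 million pixels" quoted
```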
Stallion is now the largest tiled display in the world, passing the one at the San Diego Supercomputer Center, which is amazing, but I think the coolest aspect of the whole project is that it's using standard workstations, standard displays, standard video cards, and standard networking, along with Linux and some open-source viz software. It's not a specialized system, custom built and custom integrated as in the good old SGI days. It follows the same tenets as Beowulf clusters, but for viz. Not a bad concept IMHO.
I hate to say it, but given my extremely limited time on the show floor to talk to vendors and others, these are the highlights for me. I think Doug has additional comments that he will be posting. Next SC I will do my level best not to end up in the hospital, so I can at least give a reasonable overview of the show.
Dr. Jeff Layton hopes to someday have a 20 TB file system in his home computer. He lives in the Atlanta area and can sometimes be found lounging at the nearby Fry's, dreaming of hardware and drinking coffee (but never during working hours).