10/10/2018

The Zero-Waste OpenIO cluster

Part 1 - Hardware


“One man’s trash is another man’s treasure.”

This idea motivated us to launch one of our side projects this year at OpenIO. We took on the challenge of building a small object storage cluster out of legacy hardware, using OpenIO SDS, to create a zero-waste storage solution.

Throughout this series, we’re going to discuss how we achieved that goal, starting at the hardware level, then, in the upcoming second part, we'll look at monitoring, connected applications, and benchmarks. Without further ado, let’s look at our plan, as well as the actual hardware build.
 

The plan

Hardware is central to this project, and we knew it would take us a lot of time. We wanted to get enough compatible parts to set up three machines. We challenged ourselves to reuse parts that we had lying around, which would inevitably lead to all three servers having completely different hardware.

We then planned to set up OpenIO SDS on it, as well as an application that could connect to our S3 gateway. This would allow us to use the cluster for both testing and storage purposes, knowing that the data would be safely stored by our software, and would still be available even if one of our jury-rigged nodes failed us.
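To give an idea of what "an application that could connect to our S3 gateway" looks like in practice, here is a minimal sketch using a standard S3 client pointed at an OpenIO S3 endpoint. The endpoint address, bucket name, and credentials below are placeholders for illustration, not values from our actual setup.

```python
# Minimal sketch: use a standard S3 client against an OpenIO S3 gateway.
# The endpoint URL, credentials, and bucket name are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://openio-gateway.local:6007",  # hypothetical gateway address
    aws_access_key_id="demo:demo",
    aws_secret_access_key="DEMO_PASS",
)

s3.create_bucket(Bucket="zero-waste-test")
s3.put_object(Bucket="zero-waste-test", Key="hello.txt",
              Body=b"stored on recycled hardware")
print(s3.get_object(Bucket="zero-waste-test", Key="hello.txt")["Body"].read())
```

Because the gateway speaks the S3 protocol, any existing S3-compatible application can be repointed at the cluster simply by changing its endpoint and credentials.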

Specs and components


One of our restrictions for hardware was to exclusively use parts that we already had. We made an exception for the cases, because, despite the different internals, we wanted the rigs to have a similar look. That’s why we went with Cooler Master E500L mid-tower cases, which come in three different colors: blue, white, and red. In the end this decision would yield much more aesthetically pleasing results than the dusty 20-year-old cases we had planned to use originally.

Node    CPU                      Motherboard    RAM        HDD1    HDD2    SSD
Blue    AMD Athlon XP 1900       ASRock K7S8X   2GB DDR    300GB   120GB   240GB
White   Intel Pentium 4 3 GHz    Asus P5GDC     4GB DDR2   300GB   1TB     120GB
Red     Intel Core 2 Duo E740    Asus P5Q       4GB DDR2   500GB   500GB   240GB
 

We also got our hands on three sets of CPUs and their motherboards (details above), as well as about 2.7TB of raw HDD storage split across six drives, plus three SSDs; we used the latter to speed up the system and to store metadata. SSDs weren’t really a thing when the rest of this hardware was cutting edge ten years ago, but they make the setup process much faster, which is why we decided to use them. Unfortunately, the blue node didn’t support SATA, so we had to add a SATA controller for its SSD, while both of its hard drives remained on the IDE bus.

For networking, we managed to get motherboards with gigabit Ethernet links, which is a must for storage performance. Each node was also given a GPU, because none of them would even POST without one due to the lack of integrated graphics. We also added CD drives; these weren’t strictly necessary, as the motherboards support booting from USB, but we added them anyway for consistency, and for a touch of nostalgia. The blue node was the exception on the network side: it didn’t have gigabit Ethernet on board, so a gigabit network card we had lying around came in handy.
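To put that gigabit requirement in perspective, here is a quick back-of-the-envelope calculation of what a single 1Gbps link can actually move. The ~10% protocol overhead figure is our own assumption for illustration, not a measurement from this cluster.

```python
# Rough usable throughput of a single gigabit link.
# The 10% protocol/framing overhead is an assumption, not a measurement.
LINK_BITS_PER_S = 1_000_000_000
OVERHEAD = 0.10

usable_bytes_per_s = LINK_BITS_PER_S * (1 - OVERHEAD) / 8
print(f"Usable throughput: ~{usable_bytes_per_s / 1e6:.0f} MB/s")

# Time to push a 10 GB dataset onto one node over that link.
dataset_bytes = 10 * 1e9
print(f"10 GB transfer: ~{dataset_bytes / usable_bytes_per_s / 60:.1f} minutes")
```

At roughly 110MB/s per link, the network, not the old IDE drives, is unlikely to be the first bottleneck, which is why anything slower than gigabit was a non-starter.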


Assembly process

Even though we had all the components at hand, that didn’t mean everything was in working order. We ended up having to scrap an entire motherboard and its CPU because the old capacitors had started to leak. Capacitors with liquid electrolyte were common on PC motherboards back then, and they certainly didn’t stand the test of time. Fortunately, all three boards already used button-cell batteries instead of the old 3.6V CMOS ones, so we avoided leaks on that front. We did try soldering new capacitors onto the faulty board, but it remained unstable, so we gave up on it and replaced it entirely with another motherboard we had at our disposal.

We also had trouble with defective RAM sticks; fortunately we had plenty of them in stock, so, after testing several for POST and stability, we installed several GB of RAM in each PC.

Surprisingly, the hard drives and the power supplies were in good condition, and didn’t cause any trouble. One inconvenience was the cable length of the power supply units, which didn’t play well with the bottom-mounted PSU bays of the much more recent cases, and this in turn made cable management slightly more difficult. Also, one of the PSUs had a broken cooling system, so we added a fan powered from a Molex connector and controlled externally by a variable resistor. We tried to fix everything we could.

In the end, even though we knew what we were doing, assembling three stable rigs out of old hardware was a challenge, but it was worthwhile. In operation, the cluster draws around 350W, which is very high compared to modern alternatives such as ARM-based nodes. But we didn’t take operating costs into account, simply because we didn’t know how long we would be able to keep this cluster running; after all, this is pretty old consumer-grade hardware.
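For the curious, that 350W figure makes the running cost easy to estimate. The electricity price in the sketch below is an assumed illustrative rate, not what we actually pay.

```python
# Rough yearly energy use and cost of the cluster at ~350 W continuous draw.
# The electricity price is an assumed illustrative rate.
POWER_W = 350
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.15  # assumed rate, in your local currency

energy_kwh = POWER_W * HOURS_PER_YEAR / 1000
print(f"Energy per year: {energy_kwh:.0f} kWh")            # ~3066 kWh
print(f"Approximate cost: {energy_kwh * PRICE_PER_KWH:.0f} per year")
```

Roughly 3,000 kWh a year is far from free, which is part of why we treat this cluster as an experiment rather than a long-term production platform.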

After several hours of work, this is what our cluster looks like.

 

[Photo: the three finished nodes]

 

What’s next?

Now that we have a platform, deploying object storage on it will be our next step. This will be a great opportunity to demonstrate that OpenIO SDS is truly hardware agnostic, and to make this old gear do something more productive than just eating up power.

Stay tuned for the second part of our series to learn how we set up OpenIO SDS on our zero-waste cluster and got more than 1.5TB of usable storage capacity out of it.
