13/01/2017

All but blindfolded… why ARM is the future of storage.


In January 2017 we read this article by Chris Mellor on The Register. While we can't speak for others, we were mentioned in the piece, and we think it's worth clarifying why adopting ARM CPUs will be much more effective than Chris seems to think.

Why ARM

I don’t know whether you’re all aware of the fact that in December 2016 we launched our ARM-based appliance, which can scale up to 1152 TB in 4U. So, where did we find the power to manage 1PB+ of capacity on ARM CPUs while providing high performance, as well as all the features you can usually find in our SDS software? Simple: we did what we’re good at, scaling out and efficiency!

Nano-nodes and massive scale

Object storage is not very CPU hungry, and even when erasure coding is enabled, there is plenty of CPU left. That’s why we started to implement a serverless, event-triggered compute framework called Grid for Apps. Thanks to this, our customers can run applications directly on the storage platform. It has several use cases (mail scanning, real-time encoding, and so on) and it vastly improves storage infrastructure efficiency.
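To make the idea concrete, here is a minimal sketch of an event-triggered compute framework in Python. The handler names, event shape, and registration decorator are our own illustration, not the actual Grid for Apps API: the point is simply that functions register for storage events (such as an object being created) and run next to the data.

```python
# Hypothetical sketch of event-triggered compute on a storage platform.
# Names and event shape are illustrative, NOT the Grid for Apps API.

HANDLERS = {}

def on(event_type):
    """Register a handler function for a given storage event type."""
    def register(fn):
        HANDLERS.setdefault(event_type, []).append(fn)
        return fn
    return register

@on("object.created")
def index_metadata(event):
    # e.g. extract and index metadata as soon as an object lands
    return {"indexed": event["object"], "size": event["size"]}

def dispatch(event):
    """Run every handler registered for the event's type."""
    return [fn(event) for fn in HANDLERS.get(event["type"], [])]

results = dispatch({"type": "object.created",
                    "object": "mail/0001.eml", "size": 2048})
print(results)  # [{'indexed': 'mail/0001.eml', 'size': 2048}]
```

A mail-scanning or transcoding job would follow the same pattern: the storage event itself triggers the work, so no external compute tier has to poll the cluster.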

On the other hand, most of our competitors have nothing similar to Grid for Apps and take a different approach to improving infrastructure efficiency. Usually, they build "fat nodes" with plenty of disks, which introduces other issues. You can get a very good $/GB, because chassis, CPU, RAM, and networking costs are split between 80 or more disks. But, at the same time, the failure domain is huge and performance is very poor. We could do the same, and for some of our customers it would be "good enough"… but we knew that with our technology we could do better than that, much better. Here at OpenIO, we think customers prefer having more than just capacity, even when capacity is their primary goal. It gives them options, at least!
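The fat-node $/GB argument is easy to see with a back-of-the-envelope calculation. All dollar figures below are made-up placeholders, not real pricing; only the shape of the curve matters.

```python
# Rough illustration of why fat nodes look cheap on paper:
# the fixed costs amortize over more disks. Figures are hypothetical.

def cost_per_gb(disks, fixed_cost=6000.0, disk_cost=300.0, disk_tb=10):
    """Total hardware cost divided by raw capacity in GB."""
    total = fixed_cost + disks * disk_cost   # chassis/CPU/RAM/NIC + disks
    capacity_gb = disks * disk_tb * 1000
    return total / capacity_gb

print(round(cost_per_gb(12), 4))  # 0.08
print(round(cost_per_gb(80), 4))  # 0.0375
# ...but the 80-disk node is also an 80-disk failure domain.
```

The $/GB more than halves going from 12 to 80 disks behind one motherboard, which is exactly the trade the fat-node vendors are making against failure-domain size and performance.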

In the last couple of years, we worked a lot with Kinetic drives (HDDs with an Ethernet interface and some CPU power), but they weren’t sufficient. The idea is good, but in practice they are hardly the solution: you can only offload parts of the backend logic to them, while most of the intelligence remains on an external CPU which, again, leads to fat x86 nodes. So, we went back to the drawing board and, thanks to the help of our partner Marvell, we found what we consider to be the best solution: the nano-node.

A nano-node has all the components you would expect: CPU, RAM, a small flash memory for booting and storing local data, and high-speed connectivity. You can think of it as a Raspberry Pi on steroids. The nano-node is a small board with a SATA interface supporting HDDs and SSDs, the size of the front of a 3.5" hard disk (and that’s exactly where it’s installed). At the end of the day, each single disk has its own CPU, RAM, and connectivity, without the end user having to worry about connecting hundreds or thousands of nodes together to get the cluster working. The nano-nodes are installed in a 4U chassis which also provides all the links and two 6-port 40Gb/s switches for front-end connectivity and back-to-back expansion. By doing so we get 96 CPUs (192 cores), 40Gb/s networking, and more than 1PB in 4U, with a failure domain equivalent to one disk!
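The chassis numbers above are worth a quick sanity check. The per-drive capacity below is inferred from the 1152 TB / 96 slots figures, and the dual-core SoC is assumed from "96 CPUs (192 cores)"; the 80-disk comparison node is the hypothetical fat node discussed earlier.

```python
# Back-of-the-envelope numbers for the 4U nano-node chassis.
# Per-drive capacity inferred from 1152 TB across 96 slots;
# dual-core SoCs assumed from the "96 CPUs (192 cores)" figure.

drives = 96
tb_per_drive = 1152 / drives           # 12.0 TB behind each nano-node
cores = drives * 2                     # 192 cores in 4U

# Failure domain: capacity affected when one unit dies.
nano_node_loss_tb = tb_per_drive       # one disk
fat_node_loss_tb = 80 * tb_per_drive   # an 80-disk server: 960 TB

print(tb_per_drive, cores, nano_node_loss_tb, fat_node_loss_tb)
```

Losing one nano-node means re-protecting 12 TB; losing one 80-disk fat node means re-protecting nearly a petabyte, which is the whole point of the small failure domain.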

The failure domain is important for two reasons. The first is that losing a single disk is never like losing an entire 80-disk server. On top of that, most of our competitors use distributed hash tables, which can be very painful to rebuild (a problem that also occurs when you expand the cluster!) and can easily lead to performance consistency issues… and let me tell you that we overcome that issue by doing things differently, not by using distributed hash tables. Secondly, as a consequence, the fat node is not for everyone. If what you need is a relatively small infrastructure of inexpensive storage, building it out of fat nodes means trading capacity for performance, and it becomes harder to justify object storage for smaller installations (or to start small and grow over time). This is especially important in the enterprise space, where 1PB is still a lot and, in many cases, the first installations start at less than 200TB and grow over time by consolidating several different workloads… and, again, this is why you always need capacity, scalability, and performance without compromises. We can start as small as a 3-node cluster (no matter the CPU architecture) and grow up to hundreds of petabytes, while mixing different node types in terms of capacity and CPU as well!

Back to Chris' article

Making my case on ARM and nano-nodes has taken longer than I expected, but it was necessary to explain why ARM is great for object storage. That said, our software is identical on both ARM and x86! The only thing that changes is the order of magnitude of the number of nodes involved in a cluster of similar size (but we are good at scaling, so it's actually not an issue).

The software does all the magic: SDS is designed to have an efficient, lightweight back-end with a smart way to balance the load among the available nodes (we call it Conscience Technology), and, as mentioned earlier, it has helped us develop Grid for Apps. Now (and this is the important part) everything we can do on fat x86 nodes can also be done on ARM-based nano-nodes… and maybe more.
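The contrast with hash-based placement can be sketched in a few lines. The scoring formula below is our own simplification for illustration, not OpenIO's actual Conscience algorithm: the idea is that each node advertises a score derived from its current state, and new data goes to the best-scoring nodes instead of being pinned by a static hash.

```python
# Simplified sketch of score-based placement, in the spirit of what the
# post calls Conscience Technology. The scoring weights are hypothetical,
# NOT OpenIO's actual formula.

def score(node):
    # Favor nodes with free capacity and low CPU load (both 0..100).
    return 0.7 * node["free_pct"] + 0.3 * (100 - node["cpu_pct"])

def pick_nodes(nodes, copies):
    """Choose the `copies` best-scoring nodes for a new chunk."""
    return sorted(nodes, key=score, reverse=True)[:copies]

cluster = [
    {"name": "nano-01", "free_pct": 80, "cpu_pct": 10},
    {"name": "nano-02", "free_pct": 20, "cpu_pct": 50},
    {"name": "nano-03", "free_pct": 60, "cpu_pct": 5},
]
chosen = [n["name"] for n in pick_nodes(cluster, 2)]
print(chosen)  # ['nano-01', 'nano-03']
```

Because placement follows live scores rather than a hash ring, adding or losing a node changes future placements without forcing a cluster-wide rebalance of existing data.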

By adopting ARM, no visibility is lost over data or metadata, and we can run all of our software on it too. If it can be done on x86, it can be done on ARM. The only reason we’re not doing it today is that we already have Grid for Apps running on x86, and the primary goal for the SLS-4U96 was to build the most efficient object storage platform for enterprises and ISPs. In this case, efficiency means capacity per datacenter footprint without compromising low power consumption and good performance… and to achieve that we chose to start with a small, cheap ARM SoC (sufficient for running SDS, but not enough for additional applications).

The next steps

We are 100% committed to maintaining both x86 and ARM versions of SDS (as well as parity of features between them), and we are already working on a much more powerful version of our nano-node that will bring more CPU power to the table. By doing so, we’ll be able to run Grid for Apps in production environments on ARM as well.

Running Big Data analytics, deep learning, AI applications and more directly on the storage system? Why not? Isn't that the current industry trend, after all? Yesterday it was all about hypervisors, then containers, and now everyone is excited about serverless computing. Applications running directly on the storage system: isn't that the most hyperconverged infrastructure ever?
