Since its founding, OpenIO has been working on object storage. Two years ago, the team came up with the idea of developing a serverless computing framework integrated with the object store. It is now part of our core strategy, and I'd like to explain why Grid for Apps enables OpenIO to build a future-proof cloud-edge solution.
A few days ago, somebody asked me why we are developing our own serverless computing framework, since there are plenty of options available; the cloud is full of services like this. My TL;DR answer was this: because we want to provide a better user experience with an integrated product. But this is not enough to explain what's behind Grid for Apps.
Lightweight and Flexible vs. Features
If you look at the market, you'll find that many object storage vendors have started offering features that allow users to do more with objects stored in their systems. Some of them are embedding index/search capabilities and others are building integrations with external tools. Some object stores can now even interact with external services, such as AWS Lambda. There is nothing wrong with this approach, but it is far from optimal if you want to exploit the full potential of data stored in your system. We wanted to do more and take advantage of the low resources needed to run OpenIO SDS.
At the same time, we had another challenge to face. We are very proud of the light footprint of our object store, which can run with just 400 MB of RAM and a single ARM CPU core. Continuing to add new features to the object store would result in a more complex product, eliminating its ability to run on smaller devices at the edge or on nano-nodes in the data center.
Lightweight design and flexibility, made possible by our Conscience technology, are key to our success and a huge differentiator compared to other object stores. They are at the foundation of our ability to run SDS in all-flash configurations and provide consistent performance, even with heterogeneous hardware in the cluster. We didn't want to lose these qualities by clogging SDS's core, but, at the same time, we wanted to give our customers a broader set of features and continuously improve the product.
The answer was easier than we thought. We started using our own serverless computing framework to add and improve features on OpenIO SDS.
Serverless is a perfect mechanism for a scale-out infrastructure like ours, and Grid for Apps has some specific functionality designed to do more than just trigger functions based on events; for example, it can also schedule batch jobs to run at chosen times, such as when the infrastructure is under lighter load.
Functions are relatively small pieces of code that are abstracted from the underlying infrastructure. They are easy to develop and maintain, giving us a huge advantage, because they do not directly interact with the core.
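To make the idea concrete, here is a minimal sketch of an event-triggered function in Python. The event structure and handler signature are illustrative assumptions for this post, not the actual Grid for Apps API; the point is that the function reacts to object-store events without touching the core.

```python
# A hypothetical event-triggered function. The event schema
# ("type", "object", etc.) is an assumption for illustration,
# not the real Grid for Apps interface.

def handle_event(event):
    """React to an object-storage event; the core only emits the event."""
    if event.get("type") == "object.created":
        obj = event["object"]
        # Any feature logic (indexing, notification, ...) lives here,
        # outside the object store's core.
        return {"indexed": obj["name"], "size": obj["size"]}
    # Events we don't care about are simply ignored.
    return None

# Example event, as an object store might emit on upload:
event = {"type": "object.created",
         "object": {"name": "photos/cat.jpg", "size": 512_000}}
print(handle_event(event))
```

Because the handler is decoupled from the store, it can be deployed, updated, or removed independently of SDS itself.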
By having complete control of the stack (object store + serverless), we can now decide where to implement a new feature. If we choose to develop it as a function, or a set of functions, we inherit all the advantages of modern development processes, such as agile development and continuous delivery, without touching the SDS core. This approach delivers feature improvements much more quickly, and changes can be rolled back immediately if necessary without impacting the rest of the infrastructure.
Our roadmap contains many new and innovative features. Not all customers will need all of them, but since they do not live in the core of the product, performance, stability, and the key design principles of SDS won't be affected.
Let me give you an example. Chargeback is a must-have feature, but only if you are an ISP or need to track how the object store is being used for budgeting. If you need this feature, you want it to be well designed, granular, and with all the metrics needed to match your business model.
By thinking of chargeback as a function, we do not need to modify the core of SDS, even if you ask for the unthinkable. We already capture every event that occurs in the system, and we can pass them on to Grid for Apps. A function computes the necessary information and produces all the metrics you need. It's asynchronous, it scales with the rest of the cluster, and the resources to run it are allocated dynamically. And if a customer asks for a different metric or output, we can probably add it in a matter of days, making everybody happier. On the other hand, if you do not need chargeback, the code simply isn't present.
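The chargeback idea can be sketched in a few lines: a function consumes the stream of storage events and aggregates usage per account. The event fields (`type`, `account`, `size`) are assumptions made up for this sketch, not a real SDS event schema.

```python
# A sketch of chargeback computed as a function over storage events.
# Event fields are illustrative assumptions, not an actual SDS schema.
from collections import defaultdict

def compute_usage(events):
    """Aggregate net bytes stored per account from a stream of events."""
    usage = defaultdict(int)
    for ev in events:
        if ev["type"] == "object.created":
            usage[ev["account"]] += ev["size"]
        elif ev["type"] == "object.deleted":
            usage[ev["account"]] -= ev["size"]
    return dict(usage)

events = [
    {"type": "object.created", "account": "acme", "size": 1000},
    {"type": "object.created", "account": "acme", "size": 500},
    {"type": "object.deleted", "account": "acme", "size": 1000},
    {"type": "object.created", "account": "beta", "size": 200},
]
print(compute_usage(events))  # {'acme': 500, 'beta': 200}
```

Adding a new metric for a customer means changing this function, not the object store: the core keeps emitting the same events regardless of how they are consumed.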
Grid for Apps was originally designed to take advantage of unused resources in SDS clusters and run code directly on the object store, helping to offload some operations from the rest of the infrastructure. It is a powerful tool, and we are now using it to offload features from SDS too. By doing so, we will keep the SDS core lightweight, hence flexible and fast, for the foreseeable future, while making it easier to implement new features and improve them quickly.
Furthermore, our customers and our partners can add their own new features to SDS very quickly, and they can also share them with the rest of the community.