Wow! My first week at OpenIO was a blast. I received warm wishes from many friends, fellow bloggers, and others across the industry. And the most common question was: “What made you do it?”… leaving a “cool” lifestyle to join a small startup?
Why OpenIO then?
You know, in my previous job I covered object storage a lot. There are many different solutions out there now, from both primary vendors and startups of all sizes. Some of them are very mature, with plenty of features and an interesting ecosystem of solutions capable of covering an increasing number of use cases.
Most of these object storage vendors see large multi-petabyte installations as the holy grail. There is some truth to that, but the number of such customers is limited, and object storage is also becoming more and more present in the enterprise. These two segments are totally different from each other, both in the size of the infrastructures and in the features customers request. This poses some challenges, and not all object storage systems are ready to meet them. They are simply not flexible enough. Flexibility is important, then, but how much?
Where is object storage going?
The short answer is IoT. But that’s an oversimplification. Most storage vendors have already put IoT at the top of their list of use cases. They see object storage as a central repository to collect data from devices in remote locations and perhaps analyze it afterwards. From my point of view, this is only part of the story. Yes, it's perfectly true that you want to concentrate and analyze data directly from within the storage platform (something that OpenIO SDS can already do, BTW) but, again, this is not the end-to-end solution that will solve some of the IoT challenges.
In fact, even though there is this perception that everything will be connected all the time and with very high bandwidth, it's also true that most of these devices generate more data than can actually be transmitted over the air, and not all of it has to be moved to a remote location for processing… simply put, that would be inefficient and too costly.
Take the example of the connected car. A single car can generate up to 25GB/hour: some of that data has to be processed locally in real time, some is only relevant for the duration of a single trip, and the rest may be needed for a much longer period of time. Transmitting all of it to the cloud is simply unnecessary, and doing it in real time is even worse. The local network of devices will need CPU power as well as storage. We will be living in a world of micro-datacenters, made out of small IoT devices. This is precisely the point: the storage layer of these infrastructures must be distributed, shared, resilient, efficient, transparent to applications and highly automated. What better than object storage to perform these tasks? Nothing… but, as the title of this blog states, not all object storage systems are created equal.
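A quick back-of-the-envelope calculation makes the point. The 25GB/hour figure comes from the example above; the split of that stream across the three tiers, the driving time, and the fleet size are purely illustrative assumptions:

```python
# Back-of-the-envelope: why shipping all connected-car data to the cloud
# is impractical. Only the 25 GB/hour figure is from the text; the tier
# split, driving hours, and fleet size below are illustrative assumptions.

GB_PER_HOUR = 25

# Hypothetical split of the data stream (assumed, not measured):
realtime_fraction = 0.50   # processed on-board in real time, then discarded
trip_fraction     = 0.40   # kept only for the duration of a single trip
longterm_fraction = 0.10   # worth retaining in a central repository

hours_per_day = 2          # assumed average driving time per car
cars = 1000                # a small hypothetical fleet

raw_per_day = GB_PER_HOUR * hours_per_day * cars       # GB generated per day
upload_per_day = raw_per_day * longterm_fraction       # GB actually worth shipping

print(f"Generated: {raw_per_day:,} GB/day")            # 50,000 GB/day
print(f"Uploaded:  {upload_per_day:,.0f} GB/day")      # 5,000 GB/day
```

Even with generous assumptions, a modest fleet produces tens of terabytes per day, of which only a small slice needs to leave the edge. The rest is best stored and processed where it is born.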
Last December OpenIO launched the SLS-4U96, a 96-disk appliance with each disk connected through two 2.5 Gb/s links to a small ARM-based server called a nano-node. These nano-nodes run OpenIO SDS, and the solution has several advantages compared to x86 fat nodes. You may think this is a radical scale-out solution, but OpenIO has gone further… much further.
A few months back, when I started collaborating with OpenIO on a project, the engineering team showed me a Raspberry Pi-based cluster. Nice, but not unique (even though there aren't many others that can do that, right?). Well, the other day we were talking about installing OpenIO SDS on Raspberry Pi Zeros! And, in this case, we are talking about a $5 computer with 512MB of RAM, limited storage, and even more limited CPU power.
Now, think about an object storage system that can be installed on small, interconnected devices, the kind of devices you could expect to see in a car, a train, an off-shore platform or any other unattended industrial IoT infrastructure. A system so efficient that it leaves resources available for data processing. Using standard protocols such as HTTP, this system should easily replicate all, or part, of the data generated and saved locally to a central repository built on the very same technology, while providing a framework to process data locally or remotely. This system is already available, and it’s called OpenIO SDS + Grid for Apps.
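To make the idea concrete, here is a minimal sketch of what edge-to-central replication over plain HTTP could look like. Everything here is an assumption for illustration: the endpoint URL, the URL layout, and the "lt/" naming convention for long-term objects are hypothetical, not OpenIO SDS APIs:

```python
# Toy sketch of selective edge-to-central replication over plain HTTP.
# The endpoint, URL scheme, and object-naming convention are all
# hypothetical assumptions, not part of any real OpenIO SDS interface.

from urllib.parse import quote

CENTRAL = "http://central.example.com:6007"   # hypothetical central repository


def replication_url(bucket: str, obj: str) -> str:
    """Build the target URL for pushing a local object to the central store."""
    return f"{CENTRAL}/v1/{quote(bucket)}/{quote(obj, safe='')}"


def should_replicate(obj_name: str, size_bytes: int) -> bool:
    """Toy policy: only long-term data leaves the edge.

    Assumed convention: objects whose names start with 'lt/' are the ones
    worth retaining centrally; everything else stays (and dies) local.
    """
    return obj_name.startswith("lt/") and size_bytes > 0


# A real client would then PUT the object body to that URL, e.g. with
# urllib.request.Request(url, data=payload, method="PUT").
print(replication_url("car-42", "lt/trip-2017-03-01.bin"))
print(should_replicate("lt/trip-2017-03-01.bin", 1024))   # True
print(should_replicate("tmp/frame-0001.jpg", 2048))       # False
```

The point of the sketch is the shape of the system, not the code itself: because both edge and center speak the same HTTP-based protocol, replication is just another client request, and the filtering policy can live on the tiny device where the data is born.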
Closing the circle
Unmatched architectural flexibility for future applications, combined with a solid and mature technology to tackle today’s challenges... precisely one of the foremost reasons why I decided to join OpenIO.
OpenIO SDS and Grid for Apps are so flexible that they can be installed on a single Raspberry Pi Zero as well as in a large datacenter, managing several petabytes of data and trillions of objects on hundreds or thousands of nodes. Something that has already been proven in the field by our customers (and eventually by the pre-installed Raspberries that we plan to give away after presentations and events ;) ).
My role here at OpenIO? ...to shape the strategy around this vision. Isn't it exciting?!