19/12/2018

The Real-World Benefits of Dynamic Data Protection Policies

We recently wrote an article about how dynamic data protection (DDP) is implemented and how it works in OpenIO SDS. As a follow-up to that article, I'd like to tell you about a real use case and show you the real-world benefits of dynamic data protection.

Object storage is generally appreciated for its durability, resilience, and efficiency.

But these features usually come at a cost: flexibility. In the early days of object storage, the solutions available adopted a data protection scheme based on multiple replicas. Given the amount of data to protect and the limited CPU power available at that time, multiple data copies were the best compromise.

Later, thanks to the increase in CPU power and other resources available on x86 servers, erasure coding became the protection mechanism of choice. This opened up additional scenarios, including geographic data distribution instead of simple geo-replication. Unfortunately, erasure coding is not efficient for all use cases, particularly for small files. Being forced to choose between erasure coding and data replication makes the object store more rigid and, in some cases, hurts overall system efficiency.

OpenIO SDS has been designed from the beginning with flexibility, performance, and efficiency in mind. By serving customers whose storage clusters range from a dozen terabytes to hundreds of petabytes and billions of objects, we have learned in the field that you can't plan everything in advance; you need as much flexibility as possible to adapt to ever-changing application and workload scenarios. In the enterprise space especially, most of our customers start with a single application, such as backup, then consolidate more and more onto the same cluster. Each of these workloads behaves differently from the others and needs a different form of protection. In some cases, a single application saves files of different types and sizes, and customers want the most efficient system in terms of both capacity optimization and performance.

Dynamic data protection is a feature of OpenIO SDS that enables our object store to select the best data protection mechanism, automatically and on the fly, according to policies already available on the cluster. This offers optimal flexibility and performance, no matter what workloads or applications access the system.

A real-world scenario

Earlier this year, one of our customers, who offers a long-term data storage service (7-10 years), adopted OpenIO SDS as their primary platform for active archiving. Their customers pay on a capacity basis, no matter the type or size of their files, and they access data through a web application designed to simplify document management.

Because of the nature of the service and the type of end users, we immediately understood that file sizes could vary a lot from customer to customer, and that customers sometimes manage different types of files. Each customer could store anywhere from thousands to millions of files, so efficiency is key. The more efficient the storage platform, the greater the savings that can be passed on to end users, improving competitiveness in the market.


Dynamic Data Protection in practice

Each customer needs their data protected, but how this is done is up to the service provider.

We set up a DDP rule that stores small files, up to 64 KB, with a three-replica data protection scheme, and uses erasure coding (6+3) for larger files. With this setting we got the best of both worlds.
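To make the rule concrete, here is a minimal sketch in Python of the size-based selection logic described above. It is not OpenIO SDS code, and it does not use the actual DDP policy syntax; the names (Policy, select_policy, SMALL_FILE_THRESHOLD) are illustrative assumptions.

```python
# Illustrative sketch only: mimics the size-based policy selection
# described in this post. All names here are hypothetical, not the
# real OpenIO SDS configuration or API.
from dataclasses import dataclass

SMALL_FILE_THRESHOLD = 64 * 1024  # 64 KB, the cutoff used in this deployment

@dataclass(frozen=True)
class Policy:
    name: str
    description: str

THREE_COPIES = Policy("THREECOPIES", "3 full replicas on distinct nodes")
EC_6_3 = Policy("EC63", "erasure coding, 6 data + 3 parity chunks")

def select_policy(object_size: int) -> Policy:
    """Pick a protection scheme from the object size, on the fly."""
    return THREE_COPIES if object_size <= SMALL_FILE_THRESHOLD else EC_6_3

# A 32 KB scanned document is replicated; a 4 MB file is erasure coded.
assert select_policy(32 * 1024) is THREE_COPIES
assert select_policy(4 * 1024 * 1024) is EC_6_3
```

The important point is that the decision happens per object at write time, so a single bucket can mix both protection schemes transparently.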

You may ask why we did this. Erasure coding is inefficient and demanding for small files. A small file still produces a full set of chunks, and when it is time to retrieve the object, the number of IOPS is higher than if the entire object were read from a single device. Because the chunks created by the erasure coding process are distributed across several nodes to optimize space, not only do you have to read multiple chunks, but the data must be gathered from multiple nodes to reconstruct the object, adding latency.

Moreover, as each chunk is indexed internally by the metadata services, splitting the object into many parts results in non-negligible metadata overhead. For really small files (a few KB), this overhead can take up more storage space than the data itself, which is why erasure coding may not be the right option for this type of data.

In this specific use case, a 64 KB file stored with three replicas consumes 192 KB, but reading it requires only a few IOPS on a single HDD. With 6+3 erasure coding, the same file would consume only 96 KB on the backend (not accounting for metadata overhead), but reading it would require more IOPS and compute power to reconstruct the object. And this is the worst case, right at the threshold; which trade-off wins depends on the kind of efficiency you are looking for (capacity vs. performance).
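Here is a back-of-the-envelope version of that comparison, as a hedged sketch. The chunk and read counts are deliberately simplified, and real deployments add metadata and alignment overhead that is not modeled here.

```python
# Simplified comparison of the two schemes for a 64 KB object.
# Purely illustrative; ignores metadata overhead and parallelism.
KB = 1024
OBJECT_SIZE = 64 * KB

def replica_footprint(size: int, copies: int = 3) -> tuple[int, int]:
    """Return (bytes on disk, devices read to serve one GET)."""
    return size * copies, 1  # any single replica can serve the read

def ec_footprint(size: int, k: int = 6, m: int = 3) -> tuple[int, int]:
    """Return (bytes on disk, devices read to serve one GET)."""
    stored = size * (k + m) // k  # 1.5x overhead for 6+3
    return stored, k              # at least k chunks needed to rebuild

rep_bytes, rep_reads = replica_footprint(OBJECT_SIZE)  # 192 KB, 1 read
ec_bytes, ec_reads = ec_footprint(OBJECT_SIZE)         # 96 KB, 6 reads

print(f"3 replicas: {rep_bytes // KB} KB stored, {rep_reads} device read")
print(f"EC 6+3:     {ec_bytes // KB} KB stored, {ec_reads} device reads")
```

Running this prints the 192 KB vs. 96 KB figures from the paragraph above, and makes the other side of the trade-off visible: the erasure-coded read touches six devices instead of one.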


The benefits of Dynamic Data Protection

It turns out that, for this specific installation, this was the right choice. Some customers produce millions of very small files per month, in the 32-50 KB range. By adopting triple replication, it was possible to save a great many I/O operations and buy smaller servers, with fewer hard drives per node, while still offering good performance.

Smaller nodes allow this provider to expand the cluster granularly without large upfront investments. Larger files are handled well too; there is no tangible difference in performance, and the nodes remain responsive, largely thanks to our Conscience technology. With a large number of smaller nodes, Conscience is even more effective, steering the software toward nodes that are under less load at any given moment and distributing I/O operations accordingly.
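As a rough illustration of the idea behind this, not the actual Conscience implementation, here is a sketch of score-weighted node selection: each node advertises a load score, and less-loaded nodes are picked more often. The node names, score values, and selection rule are all assumptions for the sake of the example.

```python
import random

# Hypothetical sketch, loosely inspired by the idea behind Conscience:
# each node advertises a health/load score (higher = more available),
# and new writes are steered toward higher-scoring nodes. Names and
# scores here are illustrative assumptions, not real cluster state.
node_scores = {"rawx-1": 90, "rawx-2": 35, "rawx-3": 70}

def pick_node(scores: dict[str, int]) -> str:
    """Pick a node at random, weighted by its current score."""
    nodes = list(scores)
    return random.choices(nodes, weights=[scores[n] for n in nodes], k=1)[0]

# Over many picks, rawx-1 receives roughly 90/195 of the new chunks,
# while the busier rawx-2 receives far fewer.
print(pick_node(node_scores))
```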


Key takeaways

Dynamic data protection helps our customers optimize data protection on their system, allowing them to save money and get better performance out of the OpenIO SDS cluster. This translates to better services for their customers and more competitiveness overall.

Dynamic data protection policies make it possible to choose the best storage policy depending on object size, and they can be modified over time. This lets our customers set up different policies for different workloads or applications, and react quickly to new business or application needs.

A large set of applications can take advantage of DDP because it is transparent and does not impact performance. In fact, the gateways come pre-configured with a set of policies, which in practice lets applications work seamlessly with the best possible performance and the smallest data footprint.
