How Object Storage Is Taking Storage Virtualization to the Next Level

We live in an increasingly virtual world. Because of that, many organizations not only virtualize their servers but also explore the benefits of virtualized storage.

Storage virtualization, which gained popularity 10-15 years ago, is the process of sharing storage resources by bringing physical storage from different devices together into a centralized pool of available capacity. The strategy is designed to help organizations improve agility and performance while reducing hardware and resource costs. To date, however, the effort has not been as seamless or effective as server virtualization.

That is starting to change with the rise of object storage – an increasingly popular approach that manages data as discrete, uniquely identified units called objects. Each object is bundled with its associated metadata, and all objects live in a single, centralized pool of storage rather than a legacy LUN/volume block store structure.
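To make the idea concrete, here is a minimal sketch of what working with an object pool can look like through an S3-compatible API, a common way (though not the only one) that object stores are accessed. The endpoint URL, bucket name, keys, and credentials below are illustrative placeholders, not references to any specific product mentioned in this article.

```python
# Sketch: storing and retrieving an object plus its metadata through an
# S3-compatible API. All names and credentials are hypothetical.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.com",  # hypothetical S3-compatible endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Every object is addressed by a unique key in a flat namespace --
# there are no LUNs or volumes to carve out first.
with open("q1-summary.pdf", "rb") as f:
    s3.put_object(
        Bucket="shared-pool",
        Key="reports/2024/q1-summary.pdf",
        Body=f,
        # User-defined metadata travels with the object itself.
        Metadata={"department": "finance", "retention": "7y"},
    )

# Any client with access to the pool can read the object and its metadata back.
head = s3.head_object(Bucket="shared-pool", Key="reports/2024/q1-summary.pdf")
print(head["Metadata"])  # {'department': 'finance', 'retention': '7y'}
```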

Object storage truly takes storage virtualization to the next level. I like to call it storage virtualization 2.0 because it makes it easier to deliver more usable capacity through inline deduplication and compression, along with built-in encryption. It also enables enterprises to effortlessly reallocate storage where needed while eliminating the layers of management complexity inherent in traditional storage virtualization. As a result, administrators no longer need to worry about allocating a given capacity to a given server. Why? Because all servers have equal access to the object storage pool.

One key benefit is that organizations no longer need a crystal ball to predict their utilization requirements. Instead, they can add exactly the amount of storage they need, at any time and in any increment. And they can continue to grow their storage pool with zero disruption and no application downtime.

Greater security

Perhaps the most significant benefit of storage virtualization 2.0 is that it can do a much better job of protecting and securing your data than legacy iterations of storage virtualization.

Yes, with legacy storage solutions you can take snapshots of your data. The problem is that these snapshots are not immutable. And that fact should have you concerned. Why? Because when data changes or is overwritten, a mutable snapshot gives you no guaranteed way to recapture the original.

So, once you make any kind of update, you have no way to return to the original data. Quite simply, the old snapshots are lost in favor of the new. While there are some exceptions, this is the case with the majority of legacy storage solutions.

With object storage, however, your data snapshots are indeed immutable. Because of that, organizations can capture and back up their data in near real time, and do it cost-effectively. Immutable snapshots protect your information continuously; taken as often as every 90 seconds, they ensure that even in the case of data loss or a cyber breach, you will always have a backup. All your data will be protected.
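As a rough illustration of how immutability can be enforced at the object level, the sketch below uses S3-style Object Lock, one common mechanism that prevents an object version from being overwritten or deleted until its retention date passes. This is a generic example under assumed names and endpoints; the 90-second snapshot cadence described above is a product-level capability, not part of this particular API.

```python
# Sketch: enforcing immutability with S3-style Object Lock on an
# assumed S3-compatible endpoint. Names and retention period are illustrative.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3", endpoint_url="https://objectstore.example.com")  # placeholder endpoint

# Object Lock must be enabled when the bucket is created.
s3.create_bucket(Bucket="protected-pool", ObjectLockEnabledForBucket=True)

# Write an object that cannot be modified or deleted for 30 days,
# even by an attacker or a misbehaving application.
s3.put_object(
    Bucket="protected-pool",
    Key="backups/file-share-snapshot.tar",
    Body=b"...snapshot payload...",
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
)
```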

Taming the data deluge

Storage virtualization 2.0 is also more effective than the original storage virtualization when it comes to taming the data deluge. Specifically, it can help manage the massive volumes of data generated by digital content, connected services, and cloud-based apps that companies must now deal with. Most of this new content and data is unstructured, and organizations are discovering that their traditional storage solutions are not up to managing it all.

It’s a real problem. Unstructured data eats up a vast amount of a typical organization’s storage capacity. IDC estimates that 80% of data will be unstructured in five years. For the most part, this data takes up primary, tier-one storage on virtual machines, which can be a very costly proposition.

It doesn’t have to be this way. Organizations can offload much of this unstructured data via storage virtualization 2.0, with its immutable snapshots and centralized pooling capabilities.

The net effect is that by moving unstructured data to object storage, organizations no longer keep it on VMs and no longer need to back it up in the traditional sense. With object storage taking immutable snapshots and replicating them to an offsite cluster, roughly 80% of an organization’s backup requirements and backup window can be eliminated.
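For readers who want a feel for the offsite-copy piece, here is a minimal sketch of S3-style bucket replication, one way an object store can continuously copy new object versions to a second cluster. The endpoint, bucket names, and replication role ARN are placeholders, and vendors differ in how the destination cluster and permissions are expressed.

```python
# Sketch: replicating objects to a second, offsite bucket using S3-style
# bucket replication. All identifiers below are hypothetical.
import boto3

s3 = boto3.client("s3", endpoint_url="https://objectstore.example.com")  # placeholder endpoint

# Replication requires versioning on the source bucket.
s3.put_bucket_versioning(
    Bucket="shared-pool",
    VersioningConfiguration={"Status": "Enabled"},
)

# Copy every new object version to the offsite bucket.
s3.put_bucket_replication(
    Bucket="shared-pool",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/replication-role",  # hypothetical role
        "Rules": [
            {
                "ID": "offsite-copy",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},  # empty filter = replicate everything
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::offsite-pool"},
            }
        ],
    },
)
```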

This dramatically lowers costs: instead of keeping 80% of its data in primary, tier-one environments, an organization can store and protect that data on object storage.

All of this also dramatically reduces the recovery time for unstructured data from days or weeks to less than a minute, whether terabytes or petabytes are involved. And because the network no longer has to shuttle data from point to point, it is far less congested. What’s more, the risk of failed backups disappears, because there are no more backups in the traditional sense.

The need for a new approach

As storage needs increase, organizations need more than just virtualization. […]