Before hypervisors like VMware, Hyper-V and KVM came to market, data centers had few options when it came to managing the growth of their server infrastructure. They could buy one big server that ran multiple applications, which, while it simplified operations and support, meant that one application was at the mercy of the other applications in terms of reliability and performance. Alternatively, IT professionals could buy a server for each application as it came online, but this sacrificed operational efficiency and IT budget to the demands of fault and performance isolation. Until hypervisors came to market, the latter choice was considered the best practice.
Hypervisors changed everything. Suddenly, workloads running on the same server were effectively fault isolated from one another: if one failed, it did not impact the others. These workloads were also portable; they could be moved between physical servers running a hypervisor with relative ease. More recently, quality of service parameters can be applied to individual workloads so that no single workload can dominate an entire physical server and starve the others of resources.
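As a concrete illustration of that last point, most hypervisors expose per-workload limits through their management APIs. The snippet below is a minimal sketch using the libvirt Python bindings on a KVM host; the guest name "app-vm", the disk target "vda", and the specific limits are illustrative assumptions, not values from any particular environment.

```python
# Minimal sketch: cap a single VM's disk I/O so it cannot starve
# neighboring workloads on the same KVM host (libvirt Python bindings).
# The guest name, disk target, and limits below are illustrative only.
import libvirt

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
dom = conn.lookupByName("app-vm")       # hypothetical guest name

# Throttle the guest's primary virtual disk to 500 IOPS and 50 MB/s total.
dom.setBlockIoTune(
    "vda",
    {
        "total_iops_sec": 500,
        "total_bytes_sec": 50 * 1024 * 1024,
    },
    libvirt.VIR_DOMAIN_AFFECT_LIVE,     # apply to the running guest
)

conn.close()
```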
Storage today is in the same state servers were in five or six years ago. It is difficult for a single storage system to be all things to all workloads. Some are better at raw performance, some at consistent and cost-efficient performance, and still others at long-term, highly cost-efficient archiving.
As a result, the modern data center is a hodgepodge of different storage systems. It is not at all uncommon for a data center to have one storage system dedicated to a virtual desktop infrastructure (VDI), two or three dedicated to the virtual server infrastructure, and one or two dedicated to standalone business applications like MS-SQL, Oracle and Exchange. There is also often a storage system dedicated to user data like Word, PowerPoint and Excel files. Finally, there is a storage system dedicated to storing the ever-growing unstructured data sets generated by surveillance cameras, sensors and other Internet of Things devices.
Each of these workloads has distinct capacity and performance requirements. A storage system designed for one of them has a clear cost or performance advantage over a more general-purpose system that tries to manage a mixed workload environment.
Is Storage Consolidation A Lost Cause?
IT professionals have two options when it comes to solving this problem. First, they could jump into hyper-convergence with both feet, replacing the entire storage (and server) infrastructure with a hyper-converged architecture that consolidates storage and server resources. While this simplifies management, it is a significant “rip and replace” of the environment, and hyper-convergence has its own issues when it comes to managing the workload mix described above.
Change, or rip and replace, is difficult for the data center; it is expensive and it introduces unknowns. As a result, most data centers choose the other option: continue to build out the existing storage infrastructure as workloads demand. For example, many data centers are deploying all-flash arrays to meet the demands of a highly dense virtual infrastructure, or object storage systems to cost-effectively store Internet of Things data.
While this second “strategy” layers in another tier of storage silos, it does meet the specific performance and/or capacity demands of the given workload, and it requires no change to current IT processes and procedures. It does, however, increase the operational and capital expense of the overall storage infrastructure. Today, IT organizations are brute-forcing their way through these problems, but it is reasonable to assume that this approach won’t work long term and may already be costing organizations more than they can afford to spend.
To keep pace with the new agile data center, storage needs to become more like the hypervisors it complements. To accomplish this, a new storage management paradigm is required: data mobility. Data mobility solutions need to meet six basic requirements: complement existing storage, support all bare-metal operating systems and hypervisors, provide unlimited but independent scaling, provide application-level quality of service, reduce management overhead, and deliver an immediate return on investment.
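To make the application-level quality of service requirement concrete, the sketch below shows one way a data mobility layer could meter I/O per application using a simple token-bucket limiter. It is purely illustrative; the class, the application names and the rates are hypothetical and do not describe any specific product.

```python
# Illustrative sketch only: a per-application token-bucket I/O limiter,
# the kind of mechanism a data mobility layer could use to enforce
# application-level quality of service. Names and rates are hypothetical.
import time


class AppIoLimiter:
    """Allows up to `iops` I/O operations per second for one application."""

    def __init__(self, iops: float):
        self.rate = iops           # tokens added per second
        self.capacity = iops       # maximum burst size
        self.tokens = iops
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if the next I/O may proceed, False if it should wait."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


# Hypothetical per-application policies: the VDI farm is allowed more IOPS
# than the archive workload, so neither can starve the other.
limits = {"vdi": AppIoLimiter(5000), "archive": AppIoLimiter(500)}

if limits["archive"].allow():
    pass  # forward the archive application's I/O to the backend array
```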