Storage Tiering in a Virtual Age

If you haven’t changed how you specify and build out storage tiers in the last year or so, you will need to soon. Storage technology has evolved rapidly over the last few years, and to make the most of our investments and fully leverage the new capabilities, we must also change the way we approach storage.

There is a lot of confusion about the best approach to tiering storage today, most of it brought on by over-zealous sales teams. The storage landscape is definitely changing, but not everything is changing. The purpose of this post is to describe the tiering strategies I think make sense and the sales ideologies you should avoid.

As far as tiering goes, I classify storage into two areas: Virtual Storage and Specialized Storage.

Virtual Storage

There is a strong movement toward virtual storage. Customers are looking for storage technologies that simplify management, maximize utilization and auto-tier for performance, and there are many good storage systems that provide these capabilities. With new advances in software, customers can manage more storage, more efficiently, than ever before.

Type: Virtual Storage

Description: General-purpose storage for most business applications. Virtualization capabilities pool resources, lower costs and help balance performance.

Hardware: Multiple tiers of storage wrapped in pools.  SSDs here are used as a “compute” resource while SAS and SATA drives make up most of the capacity.

Use Case: Applications that require mild to moderate performance and capacity, and that can be densely consolidated onto a single storage system: virtual machines, most databases and file servers.

Cost: Higher cost per GB and per IOPS than other storage. Software and virtualization are usually licensed per TB, and virtualization overhead can require more hardware to achieve the same IOPS as other storage. Savings come from pooling resources and reducing the waste found in traditional storage systems.

Virtual storage does not fit all storage needs within an organization. The purpose of auto-tiering is really to allow pools of storage to achieve better IOPS by mixing and balancing load between less expensive SATA disks and faster-performing SSDs. It is not “ILM in a box,” and it is not a substitute for archiving. The cost/GB of SATA storage in a high-end, auto-tiering array is still much, much higher than archive storage due to software costs, and knowledgeable storage admins can still out-perform the best auto-tiering logic when designing around a specific application.
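A quick back-of-the-envelope sketch shows why per-TB software licensing dominates the economics here. The prices below are purely illustrative assumptions, not vendor quotes; real numbers vary widely.

```python
# All prices are illustrative, made-up figures -- actual vendor pricing varies widely.
SATA_RAW_COST_PER_GB = 0.10    # raw SATA capacity, $/GB (assumed)
AUTOTIER_SW_PER_TB = 300.0     # per-TB auto-tiering/virtualization license, $ (assumed)
ARCHIVE_SW_PER_TB = 30.0       # minimal software on a purpose-built archive box, $ (assumed)

def effective_cost_per_gb(raw_cost_per_gb, software_cost_per_tb):
    """Raw drive cost plus per-TB software licensing, expressed per GB."""
    return raw_cost_per_gb + software_cost_per_tb / 1024.0

autotier = effective_cost_per_gb(SATA_RAW_COST_PER_GB, AUTOTIER_SW_PER_TB)
archive = effective_cost_per_gb(SATA_RAW_COST_PER_GB, ARCHIVE_SW_PER_TB)

print(f"SATA in auto-tiering array: ${autotier:.3f}/GB")
print(f"SATA in archive system:     ${archive:.3f}/GB")
print(f"Premium: {autotier / archive:.1f}x")
```

With these assumed numbers, the same SATA drive costs roughly three times as much per GB once it sits behind an auto-tiering license, which is the point: the disks are cheap, the software wrapped around them is not.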

Specialized Storage

While most applications can affordably operate on virtual storage, some organizations have applications that need specialized configurations in order to meet performance, capacity or feature requirements.  Attempting to lump these applications into virtual pools can limit performance, impact other applications or substantially increase cost over purpose-built solutions.  There are several classifications I use for specialized storage.

Type: Maximum Performance

Description: Storage dedicated to specific, high-performance use cases. Performance for this storage type is measured in gigabytes per second or hundreds of thousands of IOPS.

Hardware: Depending on the capacity requirements, usually a single tier of pure SSD storage or a large pool of 15K RPM drives.

Use Case: Applications that demand more than half of a storage system’s IOPS potential: mission-critical batch loads, heavy analytics or highly transactional databases.

Cost: Highest cost/GB but lowest cost/IOPS, which makes this storage perfect for high-performance needs.

Type: Massive Capacity

Description: Storage dedicated to applications with huge capacity requirements. These applications usually have lighter performance demands, so the purpose of this storage type is really to store large amounts of data as inexpensively as possible.

Hardware: Large-capacity SATA drives on lighter-duty controllers. Some environments, such as backup and imaging, may have moderate to high sequential-throughput requirements that SATA drives can easily sustain.

Use Case: Disk-to-disk backup environments, archiving, and large email and imaging applications are a few examples. In many cases, the applications using this storage reduce capacity natively through software capabilities such as file compression or deduplication, so storage-system software features may not be necessary.

Cost: Best cost/GB and sequential throughput, but very high cost/IOPS. Avoiding unnecessary capacity-based software licenses really reduces the cost of this type of storage.

The best price/IOPS and the best price/GB are generally mutually exclusive; only in Big Data analytics farms will you find requirements for both. Specialized storage should not be sprinkled all around your datacenter unless many of your applications meet the requirements. It is more difficult to manage than virtual storage, so make sure new applications can’t be consolidated onto virtual environments first.
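The trade-off above can be made concrete with a tiny comparison of the three tiers. The capacity, IOPS and cost figures below are hypothetical assumptions chosen only to illustrate the shape of the trade-off:

```python
# Hypothetical tier profiles -- all numbers are illustrative assumptions.
tiers = {
    # name: (usable capacity in GB, sustained IOPS, total system cost in $)
    "max-performance (SSD)": (20_000, 500_000, 200_000),
    "virtual (mixed pools)": (200_000, 100_000, 300_000),
    "massive-capacity (SATA)": (1_000_000, 20_000, 250_000),
}

for name, (gb, iops, cost) in tiers.items():
    # Each tier wins on one axis and loses badly on the other.
    print(f"{name:26s} ${cost / gb:7.3f}/GB   ${cost / iops:7.3f}/IOPS")
```

Even with made-up numbers, the pattern holds: the SSD tier has the best cost/IOPS and the worst cost/GB, the SATA tier is the reverse, and the virtual tier sits in the middle on both axes.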

Final thoughts

There are many opportunities for storage consolidation and virtualization. Entire organizations may be able to run all of their applications on virtual storage while reducing costs. However, be aware that storage virtualization is really focused on consolidated and shared environments. Specialized applications may require dedicated storage in order to meet performance or capacity requirements without overspending on features you don’t need and won’t use.
