Storage Virtualization Showdown: Conclusion

This is the conclusion post of the Storage Virtualization Showdown series. For the full details behind the commentary below, be sure to check out the rest of the series.

Storage virtualization can have a measurable impact on both CAPEX and OPEX by giving customers more leverage when purchasing storage hardware and by reducing the labor overhead associated with managing information. Below are some of my observations based on what I know about the platforms. It is important to realize these products are ever-evolving, so some of these concerns may already have been addressed in released versions.

NetApp V-Series

I worked for NetApp for almost 5 years as a Professional Services Consultant performing solution implementations, and I consider them a strong leader in storage innovation. The V-Series product line has the richest storage efficiency capabilities of any of the competitors, considering the granularity of its Thin Provisioning and its unique Primary Storage Deduplication. Storage efficiency on NetApp is also implemented well, with little performance overhead and even some performance gain through deduplicated cache.

However, the High Availability design of paired controllers subjects the platform to a greater likelihood of downtime, and to a greater impact during non-disruptive upgrades, than competing products that allow for cascading failure protection. The downtime required for generational hardware upgrades should also be considered. The largest consideration is the 21% storage overhead required to virtualize external storage. This starts NetApp at a storage efficiency deficit that thin provisioning, deduplication, and other capabilities must first overcome before generating real storage savings.
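To put that deficit in perspective, here is a quick back-of-the-envelope calculation (a minimal Python sketch; the 21% figure is from the overhead discussed earlier in the series, while the 100 TB capacity is purely illustrative):

    # Back-of-the-envelope: how much efficiency savings dedup and thin
    # provisioning must generate before the virtualization overhead is
    # repaid. The 21% figure is from the series; 100 TB is illustrative.

    external_raw_tb = 100.0          # raw capacity of the external array
    virtualization_overhead = 0.21   # consumed just to virtualize it

    usable_tb = external_raw_tb * (1 - virtualization_overhead)
    print(f"Usable after virtualization: {usable_tb:.0f} TB")   # 79 TB

    # Savings required just to get effective capacity back to raw:
    break_even = 1 - usable_tb / external_raw_tb
    print(f"Break-even efficiency savings: {break_even:.0%}")   # 21%

Only savings beyond that break-even point show up as real capacity gains.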

To be the leader in Storage Virtualization, NetApp must make good on its vision of scale-out virtualization clustering that includes fully non-disruptive generational hardware upgrades and discover ways to reduce its hefty virtualization overhead.

EMC VPLEX

EMC VPLEX’s greatest strength and product focus is the Active/Active federation of storage. EMC offers a unique solution in this space that competitors must try to work around. Deep integration with VMware makes virtualized infrastructure the most suitable and manageable use case for VPLEX, enabling virtual machines to fail over seamlessly across metro distances.

The VPLEX, however, stops short of providing the rich storage virtualization capabilities of competing products, and this leaves it at a tremendous disadvantage. By managing capabilities at both the VPLEX and the external storage, admins double the work required to provision and maintain storage. It also limits an organization's ability to seamlessly transition from one external array to another, since the data protection and storage efficiency capabilities of the external arrays must be taken into account and re-architected appropriately.

To be the leader in Storage Virtualization, EMC must bring the unique VPLEX storage federation capabilities into the VMAX, or the VMAX's rich storage efficiency capabilities into the VPLEX. Unless and until these capabilities are combined into a single product, EMC will have trouble competing successfully in the storage virtualization arena.

HDS VSP

The strength of the VSP platform is rooted in its hybrid monolithic/x86 architecture. This grants the VSP the highest availability of any of the competitors by protecting against cascading hardware failures, and it has the mildest impact on host applications during non-disruptive code upgrades. The hardware implementation and coarser granularity of capabilities like Thin Provisioning and Dynamic Tiering also allow the VSP to perform these functions with little performance overhead.

One side effect of the large page size (42 MB) that helps minimize virtualization overhead is that it can reduce or eliminate the savings when thin provisioning certain filesystems. HDS also lacks some flexibility and simplicity when managing many snapshots or clones, compared to competing products. The largest consideration for the VSP, however, is the forklift upgrade and data migration required for generational hardware upgrades.
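To illustrate the page-size effect, consider a hypothetical filesystem that scatters small metadata writes evenly across a freshly created 1 TB volume (the write pattern and counts below are invented for illustration, not measurements from any array):

    # Why a 42 MB allocation page can erode thin provisioning savings:
    # each scattered small write pins an entire 42 MB page.

    PAGE_MB = 42
    volume_mb = 1024 * 1024          # 1 TB volume
    writes = 8192                    # 4 KB metadata writes, evenly spaced
    stride_mb = volume_mb / writes   # 128 MB apart -> one page per write

    pages = {int(i * stride_mb // PAGE_MB) for i in range(writes)}
    allocated_gb = len(pages) * PAGE_MB / 1024
    written_mb = writes * 4 / 1024

    print(f"{written_mb:.0f} MB written, ~{allocated_gb:.0f} GB allocated")

In this contrived case, 32 MB of metadata pins roughly 336 GB of pages; a finer-grained page would have allocated only a small fraction of that.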

For the VSP to lead this space, HDS should enhance the platform with a mechanism to non-disruptively upgrade from one hardware generation to the next. Continued simplification of storage management tasks and data protection functions would also aid them in surpassing the competition.

IBM SVC

The power of the SVC platform really stems from IBM's decision to separate the software virtualization capabilities that define the SVC from the commodity hardware it runs on. This gives SVC customers the unique capability of seamless, non-disruptive generational hardware upgrades. The unique ability to externally virtualize and tier across SSDs also gives SVC an edge over the competition, though this gap will most likely close very soon.

The HA-pair based failover design of the platform, similar to the V-Series, does not protect against cascading failures that can compromise both nodes of a pair, and it can also impact performance during upgrades. Another concern with the SVC platform is the processing of thin provisioning metadata on disk, which can cause significantly greater performance overhead than competing solutions.
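A simple model of why on-disk metadata hurts (the ratios below are assumptions for illustration, not SVC measurements): every first write to an unallocated grain can require extra metadata I/O before the data write completes.

    # Hypothetical model of thin provisioning overhead when the
    # allocation map lives on disk rather than entirely in memory.

    host_writes = 10_000
    unmapped_ratio = 0.30    # assumed share of writes hitting new grains
    meta_ios_per_alloc = 2   # assumed: read the map entry, then update it

    data_ios = host_writes
    meta_ios = int(host_writes * unmapped_ratio) * meta_ios_per_alloc

    overhead = meta_ios / data_ios
    print(f"{data_ios + meta_ios} back-end I/Os ({overhead:.0%} overhead)")

Implementations that keep the allocation map cached in memory avoid most of those extra I/Os, which is the gap described above.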

For SVC to become the storage virtualization leader, IBM should focus on adding additional high-availability capabilities to the platform and reducing the overhead generated by thin provisioning. Further rapid advances in virtualization and tiering capabilities could really showcase the flexibility of this mostly software-based platform.


Conclusion

So who wins? I'll leave that for you to decide, based on the virtualization capabilities that best fit your needs. I hope this series brings you closer to a wise decision. I appreciate you taking the time to read it, and I look forward to any comments or criticisms you would like to add.
