Storage Virtualization Showdown (Part 2): Architecture Overview

This post is part of a series comparing leading storage virtualization solutions.  In this post I’ll review the architecture of each platform and discuss pros and cons in relation to how they are managed.  The architecture plays a big part in all areas of this series, so the concepts discussed here will carry through the rest of the series.

VPLEX Architecture

First up in the architecture overview is VPLEX. 

Before I even get started, I want to call attention to the direction and purpose of VPLEX as it compares to its competitors.  While most storage virtualization products primarily focus on simplifying storage management through pooling, tiering, provisioning and so on, the focus of VPLEX was really distance caching and storage federation.  So, while VPLEX can virtualize third-party arrays and is often positioned as a competing product, it is missing many of the capabilities customers look for in storage virtualization, while also enabling some features they can’t find anywhere else.

VPLEX Storage Clusters are composed of one to four Engines, each containing two Directors.  Each Director has its own dedicated CPU and memory used for I/O and for caching host data and metadata.

Storage volumes are provisioned from external arrays and then divided into slices called extents.  Devices are then created using one or more extents.  Devices can also contain other devices, which allows for more advanced RAID structures such as striping and mirroring.  Devices are then presented to hosts as Virtual Volumes.

Virtual Volumes can be presented through any director in the cluster which aids in load-balancing and High Availability.
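To make that layering concrete, here is a minimal Python sketch of the hierarchy.  It is purely illustrative; the class and object names are my own and are not the VPLEX CLI or API.

```python
# Illustrative model of the VPLEX provisioning hierarchy -- not vendor code.
from dataclasses import dataclass, field
from typing import List

@dataclass
class StorageVolume:          # LUN claimed from a back-end array
    name: str
    size_gb: int

@dataclass
class Extent:                 # slice of a storage volume
    source: StorageVolume
    offset_gb: int
    size_gb: int

@dataclass
class Device:                 # built from extents, or from other devices (RAID-0/1)
    geometry: str             # e.g. "raid-0" or "raid-1"
    extents: List[Extent] = field(default_factory=list)
    children: List["Device"] = field(default_factory=list)

@dataclass
class VirtualVolume:          # what the host actually sees
    name: str
    device: Device
    directors: List[str]      # exported through any director in the cluster

# Mirror two extents from different arrays, then export the result.
sv_a = StorageVolume("array-A_lun7", 500)
sv_b = StorageVolume("array-B_lun3", 500)
mirror = Device("raid-1", extents=[Extent(sv_a, 0, 500), Extent(sv_b, 0, 500)])
vvol = VirtualVolume("oracle_data_01", mirror,
                     directors=["dir-1-1-A", "dir-1-1-B", "dir-1-2-A", "dir-1-2-B"])
```

The point of the nesting is that a mirrored or striped device is just another device, so the same building blocks compose into more complex layouts.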

Manageability

The management interface and provisioning flow are easy enough, and seasoned storage administrators won’t have any trouble managing the platform.  The main manageability drawback of the architecture is the decision to leverage back-end array capabilities rather than integrate them into VPLEX.  This essentially doubles the management overhead of storage tasks by requiring the administrator to actively manage both the back-end array and the VPLEX when performing provisioning, tiering, data protection and other storage tasks.

NetApp V-Series Architecture

Data ONTAP is available in either 7-Mode or Cluster-Mode.  Cluster-Mode is still being built out and will eventually replace 7-Mode.  However, Cluster-Mode does not include Fibre Channel SAN capability today, so 7-Mode will be used for this series.

Like VPLEX, V-Series HA Pairs are composed of two nodes, each with its own CPUs and cache.  Unlike VPLEX, however, the nodes act as independent storage controllers instead of a distributed cluster.  While both nodes in the HA Pair are active, they are managed as separate storage nodes, each with its own aggregates, volumes and LUNs.  While LUNs are visible through the partner node, this is through un-optimized (or proxy) paths that require I/Os to be forwarded to the controller that owns the volume.

External LUNs are grouped into pools called aggregates.  Unlike the other solutions, there is no concept of slicing the external LUNs into extents or pages.  WAFL, the virtualization layer that enables all NetApp storage capabilities, evenly stripes incoming blocks across all external LUNs in the aggregate dynamically, as writes occur.  I’ll go into more detail on this later in the series.
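As a rough mental model only (this is a simplified Python sketch, not WAFL), you can picture the aggregate laying each incoming batch of blocks across its member LUNs as the writes arrive, rather than pre-carving the LUNs into extents:

```python
# Simplified illustration of spreading writes across an aggregate's
# member LUNs as they occur -- not an implementation of WAFL.
from itertools import cycle

class Aggregate:
    def __init__(self, external_luns):
        self.luns = {lun: [] for lun in external_luns}   # lun -> list of blocks
        self._next = cycle(external_luns)                # spread writes evenly

    def write(self, blocks):
        """Lay incoming blocks across all member LUNs as they arrive."""
        placement = []
        for block in blocks:
            lun = next(self._next)
            self.luns[lun].append(block)
            placement.append((block, lun))
        return placement

aggr = Aggregate(["eLUN0", "eLUN1", "eLUN2", "eLUN3"])
print(aggr.write([f"blk{i}" for i in range(8)]))
# blk0 -> eLUN0, blk1 -> eLUN1, ... writes land evenly across all external LUNs
```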

Manageability

NetApp offers multiple interfaces for managing V-Series systems: a web-based GUI, a CLI and the new Windows-based System Manager.  NetApp is in the middle of transitioning element management from the web-based GUI into System Manager, and the future looks promising here.  Currently, though, V-Series users will most likely find themselves bouncing between the CLI and System Manager regularly to perform tasks.  Since each V-Series HA Pair is managed individually, scaling out can also add management overhead, unlike solutions where all nodes are managed as part of a single cluster.  Unlike VPLEX, the vast array of virtualization and data protection capabilities ONTAP provides eliminates the need to leverage external array capabilities.  This allows V-Series customers to reduce storage management overhead by centralizing management at the virtualization layer.

IBM SVC

The IBM SAN Volume Controller nodes are configured in 2-node I/O Groups.  Up to 4 I/O Groups can be combined within a cluster.  Each node has dedicated CPU and Cache.  Storage volumes provisioned from external arrays appear as Mdisks (managed disks) and are combined to form storage pools.  Mdisks are subdivided into extents—determined by the extent size of the pool—and extents can be combined either sequentially within an mdisk or round-robin across mdisks to create volumes.
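Here is a hypothetical Python sketch of those two allocation modes; the pool layout and function names are invented for illustration and bear no relation to the actual SVC code or CLI:

```python
# Illustrative model of SVC extent allocation -- not IBM code.

def make_pool(mdisks, mdisk_size_gb, extent_size_gb):
    """Each mdisk is carved into extents sized by the pool's extent size."""
    return {m: [f"{m}:ext{i}" for i in range(mdisk_size_gb // extent_size_gb)]
            for m in mdisks}

def allocate(pool, n_extents, mode="striped"):
    """'striped' round-robins across mdisks; 'sequential' fills one mdisk first."""
    if mode == "sequential":
        free = [e for extents in pool.values() for e in extents]
    else:  # striped: interleave one extent from each mdisk in turn
        free = [e for row in zip(*pool.values()) for e in row]
    # a real allocator would also remove these from the pool's free lists
    return free[:n_extents]

pool = make_pool(["mdisk0", "mdisk1", "mdisk2"], mdisk_size_gb=1024, extent_size_gb=256)
print(allocate(pool, 6, mode="striped"))     # alternates mdisk0, mdisk1, mdisk2, ...
print(allocate(pool, 6, mode="sequential"))  # consumes mdisk0's extents first
```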


Individual volumes are only accessible through a single I/O group in the cluster and typically accessed only through the “Preferred Node” for that volume.  Volumes are typically round-robin provisioned across nodes and I/O Groups to allow for better workload distribution as throughput and capacity requirements increase.
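A small illustrative sketch of that placement policy (again, my own naming, not IBM tooling) might look like this:

```python
# Illustrative round-robin placement of new volumes across I/O Groups,
# alternating the preferred node within each group -- not IBM code.
from itertools import count, cycle

io_groups = {0: ["node1", "node2"], 1: ["node3", "node4"],
             2: ["node5", "node6"], 3: ["node7", "node8"]}
_next_group = cycle(io_groups)      # round-robin over I/O Group ids
_serial = count()                   # used to alternate the preferred node

def place_volume(name):
    iog = next(_next_group)
    preferred = io_groups[iog][next(_serial) % 2]
    return {"volume": name, "io_group": iog, "preferred_node": preferred}

for v in ("vol01", "vol02", "vol03", "vol04"):
    print(place_volume(v))
# vol01 -> I/O Group 0 / node1, vol02 -> I/O Group 1 / node4, and so on
```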

Manageability

The SVC management interface has recently been redesigned to behave much more like the easy-to-use XIV interface.  This has been positively received by the customers I’ve spoken with.  Centralized management of the cluster also enhances manageability as a customer’s environment scales.  The biggest manageability downside to SVC is the balancing of workload within the cluster.  Although the 8 nodes are all part of a cluster, workload can only be non-disruptively moved between nodes within an I/O Group.  If that I/O Group is overwhelmed, a disruptive volume migration to another I/O Group is required.  SVC has a broad range of storage virtualization and data protection capabilities, much like the V-Series, so customers who deploy SVC can also eliminate much of the management overhead from the external arrays.

HDS VSP

The Virtual Storage Platform, unlike its competitors, uses what I consider to be a hybrid monolithic and x86 architecture.  For this reason, I’ll spend a bit more time describing the internal architecture of the VSP, as it is unique.  VSP base hardware begins with at least one Disk Controller Unit (DKC), but a second DKC can be added and connected through a proprietary, shared-everything, Hi-Star PCIe interconnect to enable greater expansion.  While the other storage virtualization platforms use the same x86 processors for all I/O and virtualization metadata processing, the VSP splits these functions.

Specialized ASICs are used for Front-End Directors (FEDs) and Back-End Directors (BEDs), while x86 processors are used for the Virtual Storage Directors (VSDs).  FED ASICs handle I/O processing for host requests, BEDs handle I/O processing for internal disks (including RAID computation), and the VSDs process metadata for storage virtualization as well as replication and data protection operations.  Data Cache Adapters are dedicated caching modules for user data.  Finally, Grid Switches act as the interconnect hubs for all VSP internal communication.  The separation of I/O processing onto specialized hardware, along with the ability to add BEDs, FEDs, VSDs, cache and Grid Switches individually as needed, gives the VSP an edge as both a high-performance and highly scalable platform.

External LUNs (eLUNs) are grouped into storage pools and then partitioned into fixed 42 MB pages.  Pages from the pool are then allocated, typically on demand as data is written, to Hitachi Dynamic Provisioning Volumes (DPVols).  DPVol I/O processing can be active/active between DKCs, and hosts can leverage I/O resources throughout the entire architecture when accessing any DPVol.
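As an illustration of the page mechanics (a rough Python sketch under my own assumptions, not Hitachi software), a thin DPVol only consumes a 42 MB pool page the first time a region is written:

```python
# Illustrative model of demand-allocated 42 MB pages for a thin DPVol -- not Hitachi code.
PAGE_MB = 42

class Pool:
    def __init__(self, elun_count, pages_per_elun):
        # interleave pages across eLUNs so allocations spread over the whole pool
        self.free = [(e, p) for p in range(pages_per_elun) for e in range(elun_count)]

    def take_page(self):
        return self.free.pop(0)          # (eLUN index, page index within that eLUN)

class DPVol:
    def __init__(self, pool, size_mb):
        self.pool = pool
        self.page_map = {}               # logical page number -> pool page
        self.size_mb = size_mb

    def write(self, offset_mb):
        """Allocate a backing page the first time a 42 MB region is written."""
        lpn = offset_mb // PAGE_MB
        if lpn not in self.page_map:
            self.page_map[lpn] = self.pool.take_page()
        return self.page_map[lpn]

pool = Pool(elun_count=4, pages_per_elun=100)
vol = DPVol(pool, size_mb=10_000)        # thin: no pages consumed until written
print(vol.write(0))                      # first write maps logical page 0 -> (0, 0)
print(vol.write(50))                     # offset 50 MB falls in logical page 1 -> (1, 0)
print(vol.write(10))                     # same 42 MB region as offset 0: no new page
```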

Manageability

As a former HDS storage admin, I always cringe when I think about storage management and HDS.  I’ve wasted hours of my life waiting for configurations to load and tasks to complete, not to mention the hair I’ve turned grey while sweating through complex storage decommissioning tasks that are as close as I’ll ever get to defusing bombs.  The good news is that HDS has made significant improvements in this area recently, including an entirely redesigned management suite that stacks up well against the competition.  Due to the more proprietary, hardware-integrated design of the architecture, you can still expect certain management tasks to take longer than on competing platforms, but not by much.  Like the V-Series and SVC, the VSP offers a wide variety of storage virtualization and data protection options and should easily be able to reduce external array management overhead.

 

Next Up

Each platform has its own architectural strengths and weaknesses.  However, there is much more to consider.  In the next post, I’ll discuss data mobility capabilities, why they are important and what you should know about the platforms.

Feel free to discuss any topics above and let me know if I’ve made any errors or left out vital information.  The comments below are your chance to set the record straight or share your experience.

2 comments to Storage Virtualization Showdown (Part 2): Architecture Overview

  • This SVC statement is no longer true: “Although the 8 nodes are all part of a cluster, workload can only be non-disruptively moved between nodes within an I/O Group.”

    Since 2012, volumes can be rebalanced non-disruptively between I/O Groups. By default, volumes are allocated to I/O Groups in an evenly distributed way, transparent to the administrator (although pinning is possible). This also applies to clustered storage controllers based on the SVC code stack, like the IBM Storwize V7000.

    Thanks for this very useful comparison!
