EMC VNXe 3100: Sweet entry-level NAS and SAN

EMC delivers an all-purpose, unified storage array tailor-made for the IT generalist and the small-business budget

EMC is easily the largest enterprise storage player on the planet, with more worldwide storage revenue than its two closest competitors (IBM and NetApp) combined. But no matter how popular EMC's high-end Symmetrix and VNX product lines have been with large customers, EMC was rarely considered a great choice for the small-to-midsize-business sector.

All that changed with the release of the VNXe series early last year. Though the VNXe is based on many of the same concepts as the larger VNX, it's more than a pared-down knockoff of the must-have features found in its big brother. Instead, the VNXe is a multiprotocol, virtualized implementation of the file and block-level storage engines of the VNX. Through virtualization, EMC found an innovative way to deliver enterprise-class functionality and performance in a small-business-sized package and at a small-business price.


The VNXe in the lab

In testing the VNXe line, my goal was to replicate the experience of an IT generalist after buying a new array for use in a virtualization environment. Though the VNXe has been popular in a range of different roles (such as a replicated branch office storage solution or even as storage for embedded industrial hardware), the sweet spot for the VNXe is most certainly the small to midsize business. For many of the small businesses that may consider buying a VNXe, this will be their first experience with shared, centralized storage of any kind. Thus, the ease and simplicity of installing and growing the system is paramount.

The configuration I was provided included a dual-controller VNXe 3100 equipped with six 300GB 15,000-rpm SAS disks. I was also provided with a separately boxed set of six 1TB 7,200-rpm NL-SAS drives to serve as a growth platform. As many small to midsize businesses buying their first shared storage have a parallel interest in leveraging the clustering functionality found in many virtualization hypervisors, the bulk of my testing was performed on a trio of HP ProLiant DL385 G7 servers loaded with embedded VMware vSphere 5.0.

Read on for the full details, and see the short sidebar, "EMC VNXe 3100 performance check," for the results of my simple performance tests. As the resulting scorecard shows, I found the EMC VNXe 3100 to be a solid entry-level array -- one that I would recommend to anyone charged with single-handedly running a small shop on a limited budget. The VNXe offers a wide range of performance and availability features that are clearly derived from EMC's long experience delivering storage to large enterprises, and the Unisphere for VNXe management interface is incredibly easy to use. Any IT generalist will find Unisphere simple to navigate and will quickly get what they need, though (as always) that very simplicity may be a source of frustration for admins with more storage experience.

The VNXe at a glance

The VNXe hardware wouldn't appear to set it apart from much of its entry-level competition. The smaller VNXe 3100 can be shipped with either one or two controllers equipped with dual-core processors, supporting up to 96 3.5-inch SAS and NL-SAS disks (48 disks in a single-controller configuration). The larger VNXe 3300 is always shipped in a dual-controller, quad-core configuration and supports up to 120 disks while also adding support for flash SSDs. Both models can be upgraded with so-called eSLIC interface cards, which currently add 1Gbps (3100 and 3300) or 10Gbps Ethernet functionality (on the 3300 only).

From a software perspective, the VNXe includes file-level NFS and block-level iSCSI compatibility out of the box, with Active Directory-integrated CIFS-based NAS functionality as an option. Additional features such as local, demand-allocated snapshots, remote replication, and application-aware replication are available through a simple add-on license on the VNXe 3300, while the VNXe 3100 includes local snapshots in the base license. Storage management is handled via the extremely easy-to-use, on-array Unisphere for VNXe interface. A serial or SSH-based command-line interface is also available, but it would generally be implemented only at the behest of EMC support or by very advanced users.

From a high-availability standpoint, the VNXe operates in an active-active controller model. While a given storage resource (file level or block level) is always served by only one of the two controllers, both controllers can be configured to actively serve different volumes at the same time. Thus, the performance of both controllers can be leveraged simultaneously -- though this requires a bit of planning to do well in practice.

Similarly, the all-important write caches are mirrored between the controllers, ensuring that a controller failure will never result in data loss. Even the least expensive, single-controller VNXe 3100 is equipped with a cache mirror in place of the second controller. While you may not avoid prolonged downtime with a single-controller setup, you will at least avoid data loss.

Out of the box

When unboxing the VNXe, the very first thing you'll find is a poster-sized getting started guide. This was not the first time I've seen one of these, but I was impressed with its thoroughness in walking the user through all of the steps of configuring a VNXe, from initial setup to presenting storage.

Interestingly, one of the very first steps was to register for an account on EMC's PowerLink support portal. This would turn out to be an important and decidedly necessary step, as the portal is the source for the initial setup wizard, core software updates, license registration, and remote support.

After I had racked the array and attached its management ports to the network, I downloaded the setup wizard from the PowerLink portal and installed it on my laptop. Then I placed the laptop on the same network segment (VLAN) as the management interfaces and fired up the setup wizard. Within a few seconds, it had detected the VNXe and let me assign addresses to the management interfaces. From there on out, the configuration of the array could be completed through the onboard Unisphere management interface.

Into the Unisphere

The slick, Flash-based Unisphere management interface is divided into five major segments, each of which is populated with no more than eight large, descriptive buttons. This simplicity means you can usually jump from the main splash screen to any common task in two or three clicks -- including checking the hardware status, reviewing storage usage, deploying a new storage volume, opening a live chat with support, and ordering customer-replaceable hardware.

The crisp and clean Unisphere GUI makes it easy for accidental storage admins to find what they need.  

Crafting a management interface that is both easy to use and powerful is no simple feat. Although EMC has decidedly erred on the side of making Unisphere a cinch to use (sometimes to the frustration of seasoned admins who need to find certain bits of information), it is possible to dig into most of the nitty-gritty if you like. Many times, that requires clicking a Show Advanced link to uncover options such as multipathing, interface teaming, and jumbo framing. However, many of these are features that small-business users aren't likely to need and that experienced storage pros will know to dig for.

Deploying storage

After logging into the Unisphere interface, I was prompted to configure the installed disk. Unisphere makes this process fairly simple by employing a concept of storage pools. These would typically include a Performance Pool that might contain your high-speed SAS disks, a Capacity Pool that would contain the larger and slower NL-SAS disks, and a Hot Spare pool that would contain -- you guessed it -- hot spares for each type of disk you have deployed. You can also define your own pools, allowing you to segregate different workloads onto different physical disks. In my case, the array came with six 300GB SAS disks, so I dumped five of them into the Performance Pool and allowed the sixth to fall into the Hot Spare pool by default (though I could have overridden this if I wanted to).
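The default allocation behavior described above can be modeled in a few lines. This is purely an illustrative sketch -- not EMC's actual logic, and the pool names simply mirror the defaults Unisphere proposes -- showing how one disk of each type is reserved as a hot spare while the rest land in a Performance pool (SAS) or Capacity pool (NL-SAS):

```python
# Illustrative model of Unisphere's default pool allocation (a sketch,
# not EMC's code): one hot spare is reserved per disk type; remaining
# SAS disks go to Performance, NL-SAS disks to Capacity.
from collections import defaultdict

def auto_allocate(disks):
    """disks: list of (disk_type, size) tuples, e.g. ("SAS", "300GB")."""
    pools = defaultdict(list)
    spared = set()  # disk types that already have a hot spare
    for disk_type, size in disks:
        if disk_type not in spared:
            pools["Hot Spare"].append((disk_type, size))
            spared.add(disk_type)
        elif disk_type == "SAS":
            pools["Performance"].append((disk_type, size))
        else:
            pools["Capacity"].append((disk_type, size))
    return dict(pools)

# Six 300GB SAS disks, as in the review configuration:
pools = auto_allocate([("SAS", "300GB")] * 6)
print(len(pools["Performance"]), len(pools["Hot Spare"]))  # 5 1
```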

After the storage pool was deployed, my first goal was to run through the basic tasks of configuring my first iSCSI (block) and Shared Folder (file) servers. Unlike many other arrays, which are iSCSI-only or NAS-only solutions, the VNXe allows you to deploy both. For example, you might create two iSCSI servers (one tied to each of the two controllers in a dual-controller array) and two Shared Folder servers (split between the two controllers), or you could have one controller manage your Shared Folder services while the other owns all iSCSI services.

How you configure the array will vary depending upon what mix of file and block services you intend to deploy and how you want to spread that load across the controllers (a topic that requires some reading if you want to optimize performance). At first, I configured a single iSCSI server that, by default, assigned itself to the first storage processor and one of the two NICs.

After that, I was ready to deploy a new volume to my vSphere hosts -- and here's where things get interesting. Unlike just about any other entry-level array I've worked with, the VNXe will actually configure the vSphere host for you. All you need to do is provide host or vCenter authentication credentials; it does the rest. This includes configuring the VNXe's IPs in vSphere's iSCSI Initiator, rescanning the HBA, finding the iSCSI device, and formatting the device with the VMFS file system -- all tasks you'd normally have to perform manually.
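To put that automation in perspective, here is a rough sketch of the equivalent manual steps on an ESXi 5.x host. The adapter name (vmhba33), the target IP, the datastore label, and the truncated device path are all placeholders you'd substitute for your own environment; this is an outline of the workflow, not a verbatim recipe:

```shell
# Enable the software iSCSI initiator on the host
esxcli iscsi software set --enabled=true

# Point dynamic discovery at the VNXe iSCSI server (placeholder IP)
esxcli iscsi adapter discovery sendtarget add \
    --adapter=vmhba33 --address=192.168.1.50:3260

# Rescan the adapter so the new LUN shows up
esxcli storage core adapter rescan --adapter=vmhba33

# Partition the LUN (via partedUtil, omitted here), then format it
# with VMFS -- device path below is a truncated placeholder
vmkfstools -C vmfs5 -S vnxe-lun0 /vmfs/devices/disks/naa.600601...:1
```

The VNXe collapses all of this into a single wizard pass, which is exactly the kind of hand-holding a first-time SAN owner needs.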

You literally can go from having a completely untouched VNXe sitting next to a few basically configured vSphere hosts to having a VMFS or NFS volume created and attached to your vSphere hosts in well under an hour. This speed of deployment isn't unusual in the marketplace today, but it is extremely unusual for an EMC product -- especially one with such a wide range of features.

Unisphere even helps you locate hardware components, via an interactive, graphical system view.

Adding and maintaining storage

The next task on my evaluation checklist should be familiar to anyone who's owned any kind of centralized storage for any length of time: adding more storage. To start with, I slid the six 1TB NL-SAS disks into the array and watched as their LEDs cycled through a variety of initialization and warning states, until they eventually turned green as they spun up. Only a few seconds later, they were available in Unisphere, and I was able to add five of them to the Capacity disk pool (auto-allocating one to the Hot Spare pool as with the first set of disks). Frankly, it could not have been any easier.

However, all was not perfect in the storage management landscape, thanks to a combination of operator error and the very same automation and integration that made the VNXe so easy to connect to my hosts in the first place. I wasn't paying a great deal of attention while I cleaned up a few volumes that I had finished testing and deleted a volume that still had active VMs on it. Unisphere does prompt you to confirm a deletion, but like all IT gangsters, I don't read dialogs and hit OK reflexively.

That I mistakenly deleted the wrong SAN volume isn't the interesting fact here. Anyone who isn't careful when they delete things deserves what they get. What is interesting is that the VNXe's vSphere integration was comprehensive enough to reach into vSphere to cleanly delete the VMFS volume that this SAN volume was providing the storage for. That attempt failed because vSphere knew it had registered VMs on the volume and generally won't let you delete a volume that it knows is in use.

At this point, as vSphere gallantly resisted Unisphere's repeated instructions, I thought I had been saved. But then the VNXe gave up trying to get vSphere to do the dirty work and unceremoniously deleted the VMFS volume itself. All I could do was helplessly watch Unisphere display an utterly noninteractive "Deleting..." notification while the VNXe lowered the boom over vSphere's strenuous objections.

The VNXe's vSphere integration is hampered by other "Unisphere knows best" rough edges. For instance, there's the insistence that vSphere create a VMFS3 volume rather than VMFS5, forcing you to delete the volume and reformat it with VMFS5 if you want a native VMFS5 volume. For the same reason, Unisphere will not allow you to create a VMFS volume larger than 1.9TB -- the limit for VMFS3, but not VMFS5. Of course, you can get around these problems by deploying the storage as a Generic iSCSI volume (eschewing all of the integration by doing everything manually) or just using NFS, but they highlight one of the dangers of integrating too tightly with external software.
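If you do end up with one of these Unisphere-created VMFS3 volumes, ESXi 5.x gives you two paths forward. Again, the volume label and the truncated device path below are placeholders, and this is a sketch of the general workflow rather than a step-by-step procedure:

```shell
# Option 1: in-place upgrade. Quick and nondisruptive, but the volume
# keeps its original VMFS3 block size, so it isn't a "native" VMFS5 volume.
esxcli storage vmfs upgrade --volume-label=vnxe-lun0

# Option 2: evacuate the VMs (Storage vMotion or cold migration), then
# reformat the LUN as native VMFS5 -- device path is a placeholder
vmkfstools -C vmfs5 -S vnxe-lun0 /vmfs/devices/disks/naa.600601...:1
```

The in-place upgrade is tempting, but the reformat is the only way to get the unified 1MB block size and the other native VMFS5 characteristics.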
