
Speedy storage: Pros and cons of SSDs and flash

Image: "Speedy vs. Kid Flash" by JD Hancock (cc:by)

Enterprise storage architects have developed many ways to increase application performance with flash memory. This has raised the bar for IT professionals, who need to understand the differences among these approaches in order to evaluate their options.

So here’s a handy guide, including use cases and caveats for typical applications.

The building blocks for flash storage systems

Enterprise-class flash storage is currently offered in three basic flavors:

  • SSDs (solid-state drives)—on the outside, these may look a lot like hard disk drives (HDDs). On the inside, however, there’s no spinning disk to be found. It’s all silicon-based, with no moving parts. SSDs emulate the behavior of HDDs, making them easier to incorporate into existing servers and storage systems without the need for specialized device drivers; to the operating system, an SSD appears as just another block device (see the sketch after this list).
  • PCIe (Peripheral Component Interconnect Express) cards—these plug into standard card slots within servers or storage controllers, emulating the behavior of traditional memory expansion cards, using lightweight caching device drivers.
  • PCIe as SSD—to further complicate things, some PCIe cards emulate SSD drives. Although the choice is primarily a matter of price, packaging and density, this approach can have implications, such as whether the cards can be hot-swapped.
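
To make the first point concrete: because an SSD presents itself to the operating system as an ordinary block device, you can usually tell flash from spinning disk without any vendor tools. The short Python sketch below is a minimal example, assuming a Linux host with sysfs mounted at /sys; it reads each device's rotational flag (0 means non-rotating, i.e., flash).

```python
# Minimal sketch: distinguish SSDs from HDDs on a Linux host by reading the
# sysfs "rotational" flag (0 = non-rotational/flash, 1 = spinning disk).
# Assumes sysfs is mounted at /sys; paths are standard Linux conventions.
import os

def list_block_devices(sys_block="/sys/block"):
    """Yield (device, is_rotational) pairs for every block device."""
    for dev in sorted(os.listdir(sys_block)):
        flag_path = os.path.join(sys_block, dev, "queue", "rotational")
        try:
            with open(flag_path) as f:
                rotational = f.read().strip() == "1"
        except OSError:
            continue  # some virtual or removable devices may lack the flag
        yield dev, rotational

if __name__ == "__main__":
    for dev, rotational in list_block_devices():
        kind = "HDD (rotational)" if rotational else "SSD/flash (non-rotational)"
        print(f"{dev}: {kind}")
```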

There are several ways to use these building blocks within the compute/storage stack. The approach taken determines how the stack incorporates the hardware and how applications can benefit from flash performance.

There are four distinct use cases:

1. Flash as cache in the server: In this scenario, server flash is coupled with an enterprise storage system to cache active data in close proximity to the server’s CPU. This requires sophisticated caching software and can turn any server-based PCIe flash card or server-based SSD into an extended storage system cache.

Usage example: This implementation is useful for mission-critical servers that experience occasional and unpredictable bursts in activity from multiple application sources, such as OLTP or data analytics.

Caveats: While this use case offers the fastest access to cached data, it does require server management. When using this type of flash implementation, you will want to consider how the solution handles data coherency (write operations that change a block on the storage system, while the same block may still exist in the server cache). You should also look at how the solution handles cache persistence (keeping the server cache ‘warm’ or consistent after reboot of a virtual or physical server, versus deleting the whole cache).
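
To illustrate the coherency and persistence caveats, here is a toy Python model of a server-side cache. It is only a sketch under simplified assumptions (the ServerFlashCache class, its capacity, and the JSON persistence format are all hypothetical, not any vendor's caching software): writes go through to the backing array and update the cached copy so it stays coherent, and the warm cache contents can be saved and reloaded across a reboot instead of starting cold.

```python
# Toy model of a server-side flash cache (hypothetical, not a vendor product).
# Illustrates two caveats from the text: coherency (writes update the cached
# copy) and persistence (the warm cache can be saved across reboots).
import json
from collections import OrderedDict

class ServerFlashCache:
    def __init__(self, capacity_blocks, backing_store):
        self.capacity = capacity_blocks
        self.backing = backing_store          # dict-like: block number -> data
        self.cache = OrderedDict()            # LRU order: oldest first

    def read(self, block):
        if block in self.cache:               # cache hit: serve from "flash"
            self.cache.move_to_end(block)
            return self.cache[block]
        data = self.backing[block]            # cache miss: go to the array
        self._insert(block, data)
        return data

    def write(self, block, data):
        self.backing[block] = data            # write through to the array...
        if block in self.cache:
            self.cache[block] = data          # ...and keep the cached copy coherent
            self.cache.move_to_end(block)

    def _insert(self, block, data):
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used block

    def save(self, path):
        """Persist the warm cache so a reboot does not start cold."""
        with open(path, "w") as f:
            json.dump(list(self.cache.items()), f)

    def load(self, path):
        with open(path) as f:
            self.cache = OrderedDict((int(k), v) for k, v in json.load(f))

# Usage: cache = ServerFlashCache(2, {0: "a", 1: "b", 2: "c"})
```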

2. Flash as cache in the storage system: In this use case, instead of being used in a server, PCIe cards or SSDs are used directly in a storage system along with storage tiering software. This type of implementation adds a layer to the storage system that automatically promotes hot data from HDD to flash.

Usage example: This is a great way to increase read performance across all of the servers accessing a networked storage system. Although latencies will be higher than with a server-side cache, no extra server management is required, and you will not have to deal with server reboots erasing the cache contents.

Caveats: While this method offers automatic caching of all hot data in a storage system, the flash tier may fill quickly if it is not sized properly. And this approach primarily speeds the reading of data: Writes to flash may actually be slower than to a fast HDD array, since each write requires a relatively slow flash erase cycle. When used for writes, as many streams as possible should run concurrently in order to maintain sufficient write throughput. Otherwise, the cache should have an asynchronous, “write-around” provision (as opposed to a synchronous, “write-through” operation).
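
The promotion behavior described above can be pictured with a small Python sketch. The policy, threshold, and tier sizes below are made-up values for illustration only; real tiering software uses far more sophisticated heat maps and eviction logic.

```python
# Toy sketch of storage-side tiering (hypothetical policy, not a real product):
# blocks read more than PROMOTE_THRESHOLD times are promoted from the HDD tier
# to the flash tier, which is capped at FLASH_CAPACITY blocks.
from collections import Counter

PROMOTE_THRESHOLD = 3   # assumed value for illustration
FLASH_CAPACITY = 2      # assumed flash-tier size, in blocks

hdd_tier = {0: "cold", 1: "warm", 2: "hot", 3: "hot"}
flash_tier = {}
read_counts = Counter()

def read_block(block):
    read_counts[block] += 1
    if block in flash_tier:
        return flash_tier[block]              # fast path: already on flash
    data = hdd_tier[block]                    # slow path: read from HDD
    if read_counts[block] >= PROMOTE_THRESHOLD and len(flash_tier) < FLASH_CAPACITY:
        flash_tier[block] = data              # promote hot data to flash
    return data

if __name__ == "__main__":
    for block in [2, 2, 2, 3, 3, 3, 0]:
        read_block(block)
    print("promoted to flash:", sorted(flash_tier))   # -> [2, 3]
```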

3. Flash as disk in the server: In this case, an enterprise server simply incorporates SSD (or PCIe emulating SSD) in place of HDD for any direct-connect storage used within that server.

Usage example: This can be useful when a dedicated server with direct-attached storage is needed for a high-performance application and networked storage is not a major consideration.

Caveats: Although this is often the easiest implementation of flash and SSDs, the inability to upgrade or reconfigure flash components without halting the server may present problems in some environments.

4. Flash as disk in the storage system: This use case is exemplified by the new breed of all-flash arrays (AFAs). As their name implies, these storage systems have no magnetic hard disk drives and consist entirely of SSDs.

Usage example: This can be useful when a large amount of dedicated storage is needed for performance-critical applications, such as those that require fast database access. In this case, all of the application data is always stored on flash.

Caveats: Of the four use cases, AFAs currently represent the highest cost. When evaluating AFAs, you should find out how each solution handles a common weakness of flash: flash-memory write cliffs—a sudden and precipitous drop in performance. A write cliff can occur when all cells are busy erasing and unable to write new data. Here, advanced cell mapping techniques, intelligent background garbage collection, and non-volatile RAM staging can ensure flash cells are still available for write operations.
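
The write cliff is easier to picture with a toy simulation. The Python sketch below uses invented relative costs (an erase is assumed to be ten times slower than a program operation) and a deliberately naive policy; it simply shows how writes stall once the pool of pre-erased cells runs dry, and how background garbage collection keeps that from happening.

```python
# Toy model of the flash "write cliff" (illustrative only): writes consume
# pre-erased cells, and if the erased pool runs dry the write must wait for a
# slow foreground erase. Background garbage collection keeps the pool topped up.
ERASE_COST = 10   # assumed relative cost of an erase cycle
WRITE_COST = 1    # assumed relative cost of a program (write) operation

def simulate(writes, erased_pool, gc_rate):
    """Return total time units to service `writes`, starting with
    `erased_pool` pre-erased cells and reclaiming `gc_rate` cells per
    write via background garbage collection."""
    total = 0
    for _ in range(writes):
        if erased_pool == 0:
            total += ERASE_COST        # write cliff: stall on a foreground erase
            erased_pool += 1
        total += WRITE_COST
        erased_pool -= 1
        erased_pool += gc_rate         # background GC replenishes erased cells
    return total

if __name__ == "__main__":
    print("no background GC: ", simulate(100, erased_pool=20, gc_rate=0))
    print("with background GC:", simulate(100, erased_pool=20, gc_rate=1))
```

With background reclaim enabled, the erased pool never empties, so no write has to wait for a foreground erase; without it, every write after the pool drains pays the full erase penalty.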

Consider your options, in two dimensions

As you can see, there are a number of options available when it comes to flash storage, so split the decision into two: Decide on the form factor (PCIe card or SSD device), and then decide how best to implement flash for your application (i.e., as a replacement for HDD, or as an HDD cache accelerator).
