Non-volatile memory's future is in software

New memory technology to serve dual roles of mass storage and system memory

The idea behind the effort is to speed up the operating system itself so that any application running on it benefits from the performance boost.

"Another aspect not available in storage systems today is intelligent interrogation of what the capabilities of the storage are," Pappas said. "That's pretty rudimentary. How can an OS identify what features are available and be able to load modules specific to the characteristics of that device."

Second, the task force will work on new interfaces from the operating system to applications, giving applications a "direct access mode" or "OS bypass mode" fast I/O lane to the NVM. In direct access mode, the operating system configures a region of NVM as exclusive to one application, cutting out the intermediate buffer and the multiple copies of data that add a great deal of latency.

For example, an operating system would be able to offer a relational database direct access to NVM. IBM, with DB2, and Oracle have already demonstrated how their databases would work in that mode.
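The task force has not yet defined what that interface will look like. The closest present-day analogue is memory-mapping a file so an application gets load/store access to the media without routing every update through kernel read/write buffers. The C sketch below is a minimal illustration of that idea, assuming an NVM-backed file at a made-up path; it is not the task force's API.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/mnt/pmem/app_region";  /* assumed NVM-backed file */
    const size_t len = 4096;

    int fd = open(path, O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    void *base = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* Store directly into the mapped region -- no write() syscall,
     * no intermediate application buffer. */
    strcpy((char *)base, "record updated in place");

    /* Ask the kernel to make the stores durable. */
    if (msync(base, len, MS_SYNC) != 0)
        perror("msync");

    munmap(base, len);
    close(fd);
    return 0;
}

On a persistent-memory-aware (DAX) filesystem, that mapping can reach the NVM itself rather than a page-cache copy, which is the latency win the task force is after.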

By far, the most difficult job the task force faces is the development of a specification that allows NVM to be used as system memory and as mass storage at the same time.

"This is still a brand new effort," Pappas said. "Realistically, the [new NVM] media will take several years to materialize. So what we're doing here is having the industry come together, identifying future advancements ... and defining a software infrastructure in advance so we can get full benefit of it when it arrives."

NAND flash increasingly under pressure

Although new NVM technology will become available in the next few years, NAND flash is not expected to go anywhere anytime soon, since it could take years for new NVM media to reach the price point of NAND flash. But NAND flash is still under pressure due to technology limitations.

Over time, manufacturers have shrunk the geometry of the circuitry that makes up NAND flash from 90 nanometers a few years ago to 20nm today. The process of laying out that circuitry is known as lithography, and most manufacturers currently use lithography processes in the 20nm-to-40nm range.

The smaller the lithography process, the more data that can fit on a single NAND flash chip. At 25nm, the cells in silicon are 3,000 times thinner than a strand of human hair. But as the geometry shrinks, so does the thickness of the walls that make up the cells storing bits of data. As the walls become thinner, more electrical interference, or "noise," can pass between them, creating more data errors and requiring more sophisticated error correction code (ECC). The relationship between the data signal a NAND flash controller can read and that noise is known as the signal-to-noise ratio.
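Signal-to-noise ratio is usually quoted in decibels. The toy C calculation below uses made-up power values simply to show that the same amount of noise against a weaker signal yields a lower ratio, which is the effect shrinking cell geometry has on NAND.

#include <math.h>
#include <stdio.h>

/* Toy signal-to-noise calculation in decibels. The power values are
 * made up; the point is only that the same noise against a weaker
 * signal yields a lower ratio. */
static double snr_db(double signal_power, double noise_power)
{
    return 10.0 * log10(signal_power / noise_power);
}

int main(void)
{
    double noise = 1.0;   /* arbitrary unit of noise power */

    printf("larger cells (stronger signal):  %.0f dB\n", snr_db(100.0, noise));
    printf("smaller cells (weaker signal):   %.0f dB\n", snr_db(10.0, noise));
    return 0;
}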

The processing overhead for hardware-based ECC decoding is relatively high, with some NAND flash vendors allocating up to 7.5% of the flash chip as spare area for ECC. Increasing the hardware decoder's capability not only boosts that overhead further; the correction also becomes less effective as NAND's signal-to-noise ratio falls.
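To put that figure in concrete terms, the short sketch below works out how many bytes a 7.5% ECC allocation implies per NAND page; the 16KB page size is an illustrative assumption, not a number from the vendors.

#include <stdio.h>

int main(void)
{
    const double page_bytes = 16.0 * 1024.0;  /* assumed 16KB NAND page    */
    const double spare_frac = 0.075;          /* 7.5% figure cited in text */

    printf("%.0f-byte page -> about %.0f bytes reserved for ECC\n",
           page_bytes, page_bytes * spare_frac);
    return 0;
}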

Some experts predict that once NAND lithography drops below 10nm, there will be no more room for denser, higher-capacity products, which in turn will usher in newer NVM media with greater capabilities.

Lucas Mearian covers storage, disaster recovery and business continuity, financial services infrastructure and health care IT for Computerworld. Follow Lucas on Twitter at @lucasmearian or subscribe to his RSS feed. His email address is lmearian@computerworld.com.

Copyright © 2012 IDG Communications, Inc.
