InfoWorld's top 10 emerging enterprise technologies

Which of today's newest shipping technologies will triumph over the long haul? Here are our best guesses

Everyone is a trend watcher. But to determine which trends will actually weave their way into the fabric of business computing, you first need to take a hard look at the technologies that gave life to the latest buzz phrases.

That's the idea behind InfoWorld's top 10 emerging enterprise technologies of 2011. We're every bit as excited as the most vociferous pundit about big changes in the direction of enterprise IT, from the consumerization of IT to infrastructure convergence. But what actual, vapor-free technologies have emerged that enable these big ideas to take shape? That's InfoWorld's stock in trade.


Among the host of enterprise technologies shipping but not yet widely adopted, we think the following 10 will have the greatest impact. Our selection criteria are subjective rather than objective, derived from many years of evaluating products in the InfoWorld Test Center, observing the ebb and flow of the industry, and taking stock of what appeals to enterprise customers. In other words, this list is based on the collective judgment and experience of InfoWorld editors and contributors, not some magic formula.

Except for the purposes of example, we have for the most part avoided specific product descriptions (visit the InfoWorld Test Center for that). We're focusing on technologies rather than their specific product implementations frozen in time, simply because technology evolves so quickly.

You may not agree with our picks -- in fact, given the contentious world of IT, we'd be surprised if you did. So please post your thoughts in the comments.

10. HTML5
9. Client-side hypervisors
8. Continuous build tools
7. Trust on a chip
6. JavaScript replacements
5. Distributed storage tiering
4. Apache Hadoop
3. Advanced synchronization
2. Software-defined networks
1. Private cloud orchestration

10. HTML5

We've all heard a huge amount about HTML5, but we spent some time debating internally whether to include it in this list. The naysayers pointed out that we've been putting tags together to form Web pages since the beginning of the World Wide Web. HTML5 has simply added new tags. Did we stop what we were doing to celebrate when someone invented the <strong> tag?

Others took the practical view that while HTML5 looks similar to old-fashioned HTML, the tasks it accomplishes are dramatically different. Local data storage and the <canvas> and <video> tags make it possible to do much more than pour marked-up words and images into a rectangle. Plus, the new HTML5 WebSockets spec defines a new way to conduct full-duplex communication for event-driven Web apps.
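As a small sketch of what local data storage enables, the helpers below build and expire a cached payload. The function names are ours, not from any spec; in a browser you would wire them to the standard localStorage API as shown in the comments.

```javascript
// Build a cache entry that records when it was saved.
function makeCacheEntry(data, now) {
  return JSON.stringify({ savedAt: now, data: data });
}

// Return the cached data if it is still fresh, otherwise null.
function readCacheEntry(raw, now, maxAgeMs) {
  if (!raw) return null;
  const entry = JSON.parse(raw);
  return (now - entry.savedAt) <= maxAgeMs ? entry.data : null;
}

// In a browser, pair these with the HTML5 Web Storage API:
//   localStorage.setItem('report', makeCacheEntry(rows, Date.now()));
//   const rows = readCacheEntry(localStorage.getItem('report'),
//                               Date.now(), 60000);
```

Because the stored value survives page reloads, a Web app can come back instantly with yesterday's data while it fetches fresh data in the background -- the kind of trick that used to require a plug-in.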

In the end, Adobe's decision to end development of mobile Flash tipped the debate. Suddenly an entire corner of the Web that used to deliver video, casual games, and other animated content is back in play. An entire sector of the Web development industry is going to retool as we move to HTML5 from Flash. And that represents a tectonic shift for Web developers. --Peter Wayner

9. Client-side hypervisors

Server-hosted desktop virtualization has faltered for two key reasons: It requires a continuous connection between client and server, and the server itself needs to be beefy to run all those desktop VMs.

A client hypervisor solves both problems. It installs on an ordinary desktop or laptop, leveraging the processing power of the client. And laptop users can take a "business VM" with them containing the OS, apps, and personal configuration settings. That VM is secure and separate from whatever else may be running on that desktop -- such as the malware some clueless user accidentally downloaded -- and you get all the virtualization management advantages, including VM snapshots, portability, easy recovery, and so on.

Type 2 client-side hypervisors such as VMware Player, VirtualBox, and Parallels Desktop have been in existence for years; they run on top of desktop Windows, Linux, or OS X to provide a container for a guest operating system. Type 1 client-side hypervisors -- which run on bare metal and treat every desktop OS as a guest -- provide better security and performance. They're also completely transparent to the end user, never a drawback in a technology looking for widespread adoption.

Client hypervisors point to a future where we bring our own computers to work and download or sync our business virtual machines to start the day. Actually, you could use any computer with a compatible client hypervisor, anywhere. The operative word is "future" -- Citrix, MokaFive, and Virtual Computer are the only companies so far to release a Type 1 client hypervisor, due in part to the problem Windows has dealt with for years: supplying a sufficient number of drivers to run across a broad array of hardware. However, these companies will be joined next year by Microsoft itself, which plans to include Hyper-V in Windows 8.

Make no mistake, Windows 8 Hyper-V will require 64-bit Intel or AMD hardware. Don't expect bare-metal virtualization from your ARM-based Windows 8 tablet -- or any other tablet -- anytime soon. Note too that, unlike Citrix, MokaFive, and Virtual Computer, which built their client hypervisors with the express purpose of easing Windows systems management, Microsoft has stated that Windows 8 Hyper-V will be aimed strictly at developers and IT pros.

But hey, we're talking about Microsoft. It won't stop with developers and IT pros. Yes, tablets are making their way into the workplace, but the fact of the matter is that large-scale Windows desktop deployments are not going away, and Microsoft will be under more pressure than ever to make them easier to manage. With more and more employees working outside of the office -- or using a stipend to buy their own PCs and bring them to work -- the security and manageability of the client-side hypervisor will offer a compelling desktop computing alternative. --Eric Knorr

8. Continuous build tools

Programmers are split over Jenkins, Hudson, and other "continuous integration" servers, which put all code through an endless stream of automated tests. The lone cowboy coders shriek with horror at being shackled to a machine that rides herd over them. The more collaboratively minded among us like the way continuous build tools help us work together for the betterment of the whole.

When a continuous integration server sends you a scolding email about the problems with the code you checked in 10 seconds ago, it doesn't want to ruin your feeling of accomplishment. It's just trying to keep us all moving toward the same goal.
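The loop these servers automate is simple at heart: notice a new revision, run the tests, and report. The sketch below is purely illustrative -- the function names are ours, not Jenkins APIs -- but it captures the decision logic a CI server applies on every cycle.

```javascript
// Should the server kick off a build? Only if the repository has
// moved past the last revision we built.
function shouldBuild(lastBuiltRev, currentRev) {
  return currentRev !== lastBuiltRev;
}

// Collapse a list of test results into the verdict the server
// mails out (or announces via MP3, Jabber, etc.).
function summarize(results) {
  const failed = results.filter(r => !r.passed);
  return { ok: failed.length === 0, failed: failed.map(r => r.name) };
}

// One polling cycle might look like:
//   if (shouldBuild(lastBuilt, repo.head())) {
//     const verdict = summarize(runTests());
//     notifyTeam(verdict);   // the "scolding email" step
//   }
```

Everything Jenkins adds on top -- plug-ins, distributed builds, dashboards -- is elaboration on this loop.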

Tools like Hudson and Jenkins aren't new; slick proprietary continuous integration tools have been around for some time. Rational Team Concert, TeamCity, and Team Foundation Server are just a few of the proprietary tools pushing the idea of a team. But the emergence of open source solutions encourages the kind of experimentation and innovation that comes when programmers are given the chance to make their tools better.

There are at least 400 publicly circulated plug-ins for Jenkins and an uncountable number of hacks floating around companies. Many of them integrate with source code repositories like Git or arrange to build the final code using another language like Python. When the build is finished, a number of plug-ins compete to announce the results with MP3s, Jabber events, or dozens of other signals. Still more plug-ins handle backups, deployment, and cloud management.

This work is quickly being turned into a service. CloudBees, for instance, offers a soup-to-nuts service that bundles Jenkins with a code repository feeding directly into a cloud that runs the code. While some cloud companies offer little more than raw machines with stripped-down Linux distros, CloudBees lets you check in your code and handles everything else in the stack. --Peter Wayner

7. Trust on a chip

Hardware-based protections aren't invulnerable, as the Princeton memory freeze and electron microscope attacks showed, but they beat software-only protection solutions. The hardware protection schemes will only get better. Soon enough, every computing device you use will have a combined hardware/software protection solution running. --Roger A. Grimes

6. JavaScript replacements

It's hard to overstate the success of JavaScript. The language may be the most commonly executed code on the planet, thanks to its position as the foundation for Web pages. If that's not enough, its dominance may grow stronger if server-based tools like Node.js gain traction.

Yet for all of JavaScript's success, everyone is moving on to the next thing. Some want to build entirely new languages that fix all of the troubles with JavaScript, and others are just finding ways to translate their code into JavaScript so that they can pretend they don't use it.

Translated code is all the rage. Google's Web Toolkit cross-compiles Java into JavaScript, so the developer types only properly typed Java code. It continues to get better, and Google has integrated it directly with its App Engine cloud so that you can deploy to it with one button.

Some of the translations are purely cosmetic. Programmers who write their instructions in CoffeeScript don't need to worry about much of the punctuation that makes JavaScript look a bit too old school. The cross-compiler kindly inserts it before it runs.
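To illustrate how cosmetic the translation can be: the commented lines below are CoffeeScript, and the code beneath is roughly the JavaScript a cross-compiler would emit -- same logic, with the braces, semicolons, and parentheses restored.

```javascript
// CoffeeScript source (what the programmer types):
//
//   square = (x) -> x * x
//   cube   = (x) -> x * square x
//
// Roughly equivalent compiled JavaScript (what the browser runs):
var square = function(x) { return x * x; };
var cube = function(x) { return x * square(x); };
```

The programmer never has to look at the generated output; the punctuation-free version is the source of truth, and the JavaScript is a build artifact.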

Other translations are more ambitious. Google recently announced Dart, a language that will apparently fix many of the limitations that the development team thinks make JavaScript a pain. There are classes, interfaces, and other useful mechanisms for putting up walls between sections of code, an essential feature for large software projects. Spelling out the type of data held in a variable is now possible, but strictly optional. The Dart lovers say they eventually want to replace JavaScript, but for the time being they want to gain a foothold by providing a way to translate Dart into JavaScript. In other words, they want to replace JavaScript by making JavaScript the core of their plan. --Peter Wayner

5. Distributed storage tiering

With the capacities of SSDs steadily on the rise, the days of disk drives in servers and SANs appear to be numbered.

The best part: Having flash storage in servers introduces a possibility that simply wasn't practical with disk -- namely, managing server-side storage as an extension of the SAN. In essence, server-side flash becomes the top tier in the SAN storage pool, drawing on intelligence within the SAN to store the most frequently accessed or most I/O-intensive data closest to the application. It's like caching, but smarter and more cost-effective.

The huge performance advantages of flash have made automated tiering within the SAN more compelling than ever. All of the leading SAN vendors now offer storage systems that combine solid-state drives, hard disk drives, and software that will dynamically migrate the "hottest" data to the fastest drives in the box. The next step will be to overcome the latency introduced by the distance between SAN and servers. The speed of flash and block-level autotiering software -- which operates in chunks as fine as kilobytes or megabytes -- will combine to close this last mile.
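The core idea of autotiering can be sketched in a few lines: track how often each block is touched, then keep the hottest blocks in the limited flash tier. This toy version (our own names, not any vendor's API) ignores the hard parts -- migration cost, write amplification, chunk sizing -- but shows the placement decision.

```javascript
// Given access counts per block and a flash tier that holds
// flashSlots blocks, pick which blocks belong on flash.
function hottestBlocks(accessCounts, flashSlots) {
  return Object.entries(accessCounts)
    .sort((a, b) => b[1] - a[1])   // most-accessed first
    .slice(0, flashSlots)          // flash tier is small
    .map(([block]) => block);
}

// Everything not selected stays on the cheaper disk tier;
// real autotiering software re-evaluates this continuously
// as access patterns shift.
```

A production system makes the same judgment at kilobyte-to-megabyte granularity and moves data in the background, which is why it can beat simple caching on both cost and performance.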

Unlike traditional caching, which requires duplicating storage resources and flushing writes to the back-end storage, distributed storage tiering promises both higher application performance and lower storage costs. The server owns the data and the bulk of the I/O processing, reducing SAN performance requirements and stretching your SAN dollar.

The price of these benefits is, per usual, increased complexity. We'll learn more about the promise and challenges of distributed storage tiering as EMC's Project Lightning and other vendor initiatives come to light. --Doug Dineley

4. Apache Hadoop

Two years ago, we named MapReduce as the top emerging enterprise technology, mainly because it promised something entirely new: analysis of huge quantities of unstructured (or semi-structured) data such as log files and Web clickstreams using commodity hardware and/or public cloud services. Over the past two years, Apache Hadoop, the leading open source implementation of MapReduce, has found its way into products and services offered by Amazon, EMC, IBM, Informatica, Microsoft, NetApp, Oracle, and SAP -- not to mention scores of startups.

Hadoop breaks new ground by enabling businesses to deploy clusters of commodity servers to crunch through many terabytes of unstructured data -- simply to discover interesting patterns to explore, rather than to start with formal business intelligence objectives. But it must be remembered that Hadoop is basically a software framework on top of a distributed file system. Programs must be written to process Hadoop jobs, developers need to understand Hadoop's structure, and data analysts face a learning curve in determining how to use Hadoop effectively.
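The programming model underneath is worth seeing once. Hadoop jobs are normally written in Java, but the map/shuffle/reduce contract itself is tiny, and this plain-JavaScript word count sketches it: map emits key/value pairs, the framework groups them by key, and reduce folds each group. Hadoop's contribution is doing this across a cluster with fault tolerance.

```javascript
// Map: one line of input -> a list of (word, 1) pairs.
function mapWords(line) {
  return line.toLowerCase().split(/\s+/).filter(Boolean).map(w => [w, 1]);
}

// Shuffle: group emitted pairs by key. In Hadoop this step moves
// data across the network so each reducer sees one key's values.
function shuffle(pairs) {
  const groups = {};
  for (const [k, v] of pairs) (groups[k] = groups[k] || []).push(v);
  return groups;
}

// Reduce: fold each key's values -- here, summing counts.
function reduceCounts(groups) {
  const out = {};
  for (const k in groups) out[k] = groups[k].reduce((a, b) => a + b, 0);
  return out;
}
```

Writing the map and reduce functions is the easy part; the learning curve the article mentions comes from structuring real problems -- joins, sessionization, clickstream analysis -- into this shape.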

Early on, tools were developed to make exploiting Hadoop easier for developers. Apache Hive provides SQL programmers with a familiar SQL-like language called HiveQL for ad hoc queries and big data analysis. And Apache Pig offers a high-level language for creating data analysis programs that are parallel in nature, often a requirement for large processing jobs.

Page 1 of 2