Human beings tend to take incremental change in stride. For example, the loaf of bread that was 50 cents a few decades ago that now costs $3 isn’t a big deal to us because the price rose gradually and steadily year by year. What we aren’t adapted for is exponential change. Which explains why we tend to be taken by surprise by developments that involve digital technologies, where order-of-magnitude improvements, driven by Moore’s Law, occur continuously.
I thought about this reality earlier this summer when I visited the National Center for Atmospheric Research (NCAR), which is located on top of a hill overlooking Boulder, Colo., and is one of the world’s leading sites for the study of weather prediction and climate modeling. To support its work, which is often based on complex mathematical models, the NCAR has long been a pioneer in the use of advanced computer systems. In fact, a plaque on a wall at the center indicates that in 1976, it had purchased the world’s first production Cray-1A supercomputer (for $8.9 million, the equivalent of $38 million today). Over the next 25 years, the NCAR continued to perform scientific research on later generations of Crays.
As I toured the NCAR, I thought about how a mid-’70s Cray supercomputer compared to the iPhone in my pocket. Sure enough, the raw computing power of my phone dwarfed that of the Cray-1A. The Cray ran at 80 MHz and was capable of performing 80 million floating-point operations per second (FLOPS). By comparison, the graphics-processing unit in my iPhone 5S is capable of 76.8 GFLOPS, making it nearly 1,000 times more powerful.
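The arithmetic behind that comparison is easy to check, using the two peak-throughput figures quoted above:

```python
# Rough comparison of peak floating-point throughput, using the figures
# quoted in the text (peak rates, not sustained real-world performance).
cray_1a_flops = 80e6          # Cray-1A (1976): 80 MFLOPS
iphone_5s_gpu_flops = 76.8e9  # iPhone 5S GPU: 76.8 GFLOPS

ratio = iphone_5s_gpu_flops / cray_1a_flops
print(f"The phone's GPU is roughly {ratio:.0f}x the Cray-1A")  # roughly 960x
```

Peak FLOPS is a crude yardstick (it ignores memory bandwidth, precision and workload), but even this back-of-the-envelope ratio makes the point: the machine in a pocket outruns the 1976 flagship by three orders of magnitude.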
The supercomputer in my pocket
Today’s garden-variety smartphone is, in fact, capable of performing functions like pattern recognition and complex visual rendering that have traditionally been the exclusive domain of supercomputers that were housed in special facilities and required the care of a cadre of specialists. Today, many mobile apps provide what are essentially supercomputer-like abilities.
A nice example of visual pattern recognition is Leafsnap, a free mobile app created by Columbia University, the University of Maryland and the Smithsonian Institution that enables users to identify different tree species by simply taking a photo of a leaf. Verbal pattern recognition is the basis for applications like Siri, Google Now and Microsoft’s Cortana that have the ability to understand spoken input and (most of the time) respond appropriately.
An example of the advanced visual rendering capabilities of mobile devices can be seen in Samsung’s new Gear virtual reality headset, which delivers a digitally immersive experience using a Galaxy Note smartphone. Another is Epic Zen Garden, a free game for iPhones and iPads that features richly detailed visual environments that users can explore.
The power of phone + cloud
As impressive as modern microprocessors are, their power is magnified many times over when they are combined with access to virtually unlimited computing power in the cloud through a broadband wired or wireless network. Every time a user performs a Google search, he or she makes use of the massive computer resources that Google has assembled to keep track of and index the vast reaches of the Internet. In fact, Google runs on what could be the world’s most powerful supercomputer: In 2012, it was estimated that Google ran on some 13.6 million cores — over 20 times as many as in the largest operating supercomputer at the time — and had demonstrated its ability to link 600,000 of them together to work on a single specific task.
In addition to basic search, Google employs this capacity to provide applications that would have seemed like science fiction just a few years ago, such as the ability to search for images as well as words; the ability to find the fastest driving route from one point to another, taking current traffic conditions into account; or the ability to instantly translate text from one language to another. And still in development, but clearly on their way, are such things as the self-driving car and autonomous robots that depend on access to massive computing power.
Computers that (almost) think
Yet another remarkable manifestation of the exponential growth in processing power is so-called cognitive computing. By leveraging techniques of artificial intelligence, including natural language processing and machine learning, cognitive computing provides the capability to approach and, in some instances, even to exceed human thought. An early example of the emerging human-like capabilities of computers came in 1997, when IBM’s Deep Blue defeated world champion chess player Garry Kasparov, disproving the belief that only humans could play chess at the highest level. More recently, the triumph of IBM’s Watson on Jeopardy! in 2011 demonstrated that a computer could compete successfully against the best human players in a challenging test of general knowledge.
In addition to playing games, cognitive computing is being put to work on a range of practical tasks that computers were previously unable to perform. Rather than simply crunching numbers or processing data in structured ways, it is now possible for computers to absorb large quantities of information and identify associations or generate context-based hypotheses about that information to improve human decision-making. IBM is actively engaged in developing specialized versions of Watson for applications ranging from healthcare (diagnosing disease) to financial services (personalizing investment advice) to customer service (improving call center support).
A supercomputer of one’s own
As cool as these applications are, perhaps the most significant aspect of this trend is the ability of anyone with an Internet connection to make direct use of supercomputing capabilities for his or her own purposes. The availability of these resources makes it possible for users to rapidly develop and deploy powerful new applications or carry out sophisticated data analyses without the need to invest in hardware.
Services like Google’s Compute Engine, Amazon Web Services and Microsoft’s Azure are competing fiercely to provide access to computing power in the cloud by lowering prices and providing tools to simplify use. In fact, these services typically offer free trials and minute-by-minute billing for usage that make these capabilities readily available to everyone from large corporations and government agencies to tiny startups and even individuals. For instance, this summer, a pair of researchers in England disclosed that they had built a digital currency-mining program at no cost by taking advantage of free cloud-based supercomputing trial offers. Using publicly available tools and the free supercomputer time, the team was able to earn $1,750 per week in Litecoin — an alternative to Bitcoin — through its operations.
What the world needs now
To fully realize the potential of the newly pervasive supercomputing environment, two things are needed: a new type of literacy that will enable us to use the technology properly, and the appropriate network infrastructure to provide full access to its capabilities.
Just as the spread of computers and the Internet created a need for digital literacy skills, so the emergence of supercomputing will require a new kind of literacy that will allow us to appreciate what the technology can — and can’t — do. According to my colleague at the Institute for the Future, Mike Liebhold, these new skills include an understanding of the basic principles of logic and statistics (for example, the difference between correlation and causation), the ability to factor problems in ways that can be addressed by the parallel processing abilities of supercomputers, and familiarity with the use of data visualization techniques to simplify complex problems. We also need to remember that as powerful as these tools are, they are intended to support and enhance human capabilities, not replace them.
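Liebhold’s point about correlation versus causation is easy to demonstrate with a short, purely hypothetical simulation: two quantities that share a hidden common cause (here, imagined ice-cream sales and sunburn counts both driven by temperature) will correlate strongly even though neither causes the other.

```python
import random

random.seed(42)

# Hypothetical illustration: a hidden common cause (temperature) drives both
# ice-cream sales and sunburn counts, so the two correlate strongly even
# though neither causes the other.
n = 1000
temperature = [random.gauss(0, 1) for _ in range(n)]
ice_cream = [t + random.gauss(0, 0.5) for t in temperature]
sunburns = [t + random.gauss(0, 0.5) for t in temperature]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(f"correlation: {pearson(ice_cream, sunburns):.2f}")  # strongly positive
```

A data-literate reader sees the strong correlation and asks what else might explain it; a naive one concludes that ice cream causes sunburn. That habit of mind, not any particular tool, is the literacy in question.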
Second, we need networks that have the technical characteristics needed to deliver the power of supercomputing in close to real time. Getting full access to high-performance computers that send and receive high volumes of data currently requires a dedicated connection with customized capabilities (such as those available through the National LambdaRail fiber optic network that serves universities and advanced research labs across the country).
Bringing supercomputing into our daily lives will require the wide availability of networks that provide high bandwidth and reliably low latency. Ensuring that network operators are able to provide users with such capabilities when they need them should be taken into account in both the current debate over “network neutrality” and longer-term legislative efforts to modernize the regulation of telecommunications. It will also require a major investment in network infrastructure. Industry is already putting tens of billions of dollars each year into network upgrades, and we need to get public policies right to support these efforts.
And, given that exponential change is likely to continue, what might lie beyond the rapidly emerging world of supercomputing? In the more distant future looms the prospect that the power of computers will eventually outstrip human cognition. In his provocative new book, Superintelligence, Oxford philosophy professor Nick Bostrom suggests that when machine brains surpass human brains, we may become dependent on them in ways that we do not altogether like. But for now, he concludes, we remain in control of the machines, and we still have the power to use them for our own benefit.
It’s our move.
Richard Adler is a distinguished fellow at the Institute for the Future in Palo Alto, Calif. He has written widely about the future of broadband and its impact on fields such as education, healthcare, government and commerce.