2020 Vision: You won't recognize the Internet in 10 years

To borrow from John Lennon: Imagine there's no latency, no spam or phishing, a community of trust. Imagine all the people, able to get online.

This is the kind of utopian network architecture that leading Internet engineers are dreaming about today.

As they imagine the Internet of 2020, computer scientists across the country are starting from scratch and rethinking everything, from IP addresses to DNS to routing tables to Internet security in general. They're envisioning how the Internet might work without some of the most fundamental features of today's ISP and enterprise networks.

Their goal is audacious: to create an Internet with far fewer security breaches, with better trust and built-in identity management. Researchers are trying to build an Internet that's more reliable, higher-performing and better able to manage exabytes of content. And they're hoping to build an Internet that extends connectivity to the most remote regions of the world, perhaps even to other planets.

This high-risk, long-range Internet research will kick into high gear in 2010, as the U.S. federal government ramps up funding to allow a handful of projects to move out of the lab and into prototype. Indeed, the United States is building the world's largest virtual network lab across 14 college campuses and two nationwide backbone networks so that it can engage thousands -- perhaps millions -- of end users in its experiments.

"We're constantly trying to push research 20 years out," says Darleen Fisher, program director of the National Science Foundation's Network Technology and Systems (NeTS) program. "My job is to get people to think creatively, potentially with high risk but high payoff. They need to think about how their ideas get implemented, and if implemented how it's going to [affect] the marketplace of ideas and economics."

The stakes are high. Some experts fear the Internet will collapse under the weight of ever-increasing cyberattacks, surging demand for multimedia content and the requirements of new mobile applications unless a new network architecture is developed.

The research comes at a critical juncture for the Internet, which is now so closely intertwined with the global economy that its failure is inconceivable. As more critical infrastructure -- such as the banking system, the electric grid and government-to-citizen communications -- migrates to the Internet, there's a consensus that the network needs an overhaul.

At the heart of all of this research is a desire to make the Internet more secure.

"The security is so utterly broken that it's time to wake up now and do it a better way," says Van Jacobson, a research fellow at PARC who is pitching a novel approach dubbed content-centric networking. "The model we're using today is just wrong. It can't be made to work. We need a much more information-oriented view of security, where the context of information and the trust of information have to be much more central."
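Jacobson's point, that trust should attach to the information itself rather than to the connection it arrived over, can be sketched in miniature. The following is an illustrative toy, not the actual CCN protocol: an HMAC with a made-up shared key stands in for the per-publisher signatures a real design would use.

```python
import hmac
import hashlib

# Toy model of content-centric networking's core idea: security binds to
# the data (name + payload + signature), not to the channel it arrived
# over. An HMAC stands in for a real publisher signature scheme.

PUBLISHER_KEY = b"demo-key"  # hypothetical key, for illustration only

def publish(name: str, payload: bytes) -> dict:
    """Wrap a payload in a self-certifying content object."""
    tag = hmac.new(PUBLISHER_KEY, name.encode() + payload, hashlib.sha256)
    return {"name": name, "payload": payload.hex(), "sig": tag.hexdigest()}

def verify(obj: dict) -> bool:
    """Any node (a cache, a peer, a server) can validate the object itself."""
    payload = bytes.fromhex(obj["payload"])
    tag = hmac.new(PUBLISHER_KEY, obj["name"].encode() + payload, hashlib.sha256)
    return hmac.compare_digest(tag.hexdigest(), obj["sig"])

obj = publish("/parc/papers/ccn.pdf", b"...pdf bytes...")
print(verify(obj))                 # True: the content checks out wherever it was cached
obj["payload"] = b"tampered".hex()
print(verify(obj))                 # False: trust travels with the data
```

Because validity is a property of the content object, it no longer matters which host or cache delivered it, which is the shift in perspective Jacobson is arguing for.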

NSF ramps up research

Futuristic Internet research will reach a major milestone as it moves from theory to prototype in 2010.

The NSF plans to select anywhere from two to four large-scale research projects to receive grants worth as much as $9 million each to prototype future Internet architectures. Bids will be due in the first quarter of 2010, with awards expected in June.

"We would like to see over-arching, full-scale network architectures," Fisher says. "The proposals can be fairly simple with small, but profound changes from the current Internet, or they can be really radical changes.''

The NSF is challenging researchers to come up with ideas for creating an Internet that's more secure and more available than today's. It has asked researchers to develop more efficient ways to disseminate information and manage users' identities while taking into account emerging wireless and optical technologies. Researchers also must consider the societal impacts of changing the Internet's architecture.

The NSF wants bidders to consider "economic viability and demonstrate a deep understanding of the social values that are preserved or enabled by whatever future architecture people propose so they don't just think as technicians," Fisher says. "They need to think about the intended and unintended consequences of their design."

Key to these proposals is how researchers address Internet security problems.

"One of the things we're really concerned about is trustworthiness because all of our critical infrastructure is on the Internet," Fisher says. "The telephone systems are moving from circuits to IP. Our banking system is dependent on IP. And the Internet is vulnerable."

The NSF says it won't repeat the mistake made when the Internet was invented, when security was bolted onto the architecture after the fact instead of being designed in from the beginning.

"We are not going to fund any proposals that don't have security expertise on their teams because we think security is so important," Fisher says. "Typically, network architects design and security people say after the fact how to secure the design. We're trying to get both of these communities to stretch the way they do things and to become better team players."

The latest NSF funding is a follow-on to the NSF's Future Internet Design (FIND) effort, which asked researchers to proceed as if they were designing the Internet from scratch. Launched in 2006, the FIND program has funded around 50 research projects, each receiving $500,000 to $1 million over three to four years. Now the NSF is narrowing those 50 projects down to a handful of leading contenders.

World's largest Internet testbed

The Internet research projects chosen for prototyping will run on a new virtual networking lab being built by BBN Technologies. The lab is dubbed GENI, for Global Environment for Network Innovations.

The GENI program has developed experimental network infrastructure that's being installed in U.S. universities. This infrastructure will allow researchers to run large-scale experiments of new Internet architectures in parallel with -- but separated from -- the day-to-day traffic running on today's Internet.

"One of the key goals of GENI is to let researchers program very deep into the network," says Chip Elliott, GENI Project Director. "When we use today's Internet, you and I can buy any application program that we want and run it....GENI takes this idea several steps further. It allows you to install any software you want deep into the network anywhere you want. You can program switches and routers."
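The "slicing" Elliott describes, one physical network hosting isolated experiments alongside ordinary traffic, can be modeled roughly like this. All names and structure here are hypothetical, for illustration only:

```python
# Rough sketch of GENI-style "slicing": one physical substrate carries
# several isolated virtual networks, each running its own experimental
# packet-handling logic next to production traffic.

class Substrate:
    def __init__(self):
        self.slices = {}

    def create_slice(self, name, handler):
        """Each experiment installs its own packet-handling code."""
        self.slices[name] = handler

    def deliver(self, slice_name, packet):
        # Traffic is tagged with its slice; slices never see each other.
        return self.slices[slice_name](packet)

net = Substrate()
net.create_slice("production", lambda p: f"ip-route {p}")
net.create_slice("ccn-experiment", lambda p: f"name-route {p}")
print(net.deliver("production", "pkt1"))       # ip-route pkt1
print(net.deliver("ccn-experiment", "pkt2"))   # name-route pkt2
```

The design choice worth noting is that the experimental slice can replace forwarding logic entirely (here, routing by name instead of by IP) without any risk to the production slice sharing the same hardware.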

BBN was chosen to lead the GENI program in 2007 and has received $45 million from the NSF to build it. BBN received an $11.5 million grant in October to install GENI-enabled platforms on 14 U.S. college campuses and on two research backbone networks: Internet2 and National LambdaRail. These installations are due to be completed by October 2010.

"GENI won't be in a little lab on campus. We'd like to take the whole campus network and allow it to run experimental research in addition to the Internet traffic," Elliott says. "Nobody has done this before. It'll take about a year."

The GENI project involves enabling three types of network infrastructure to handle large-scale experiments. One type uses the OpenFlow protocol developed at Stanford University to allow deep programming of Ethernet switches from vendors such as HP, Arista, Juniper and Cisco. Another type of GENI-enabled infrastructure is the Internet2 backbone, which has highly programmable Juniper routers. And the third is a WiMAX network for testing mobile and wireless services.
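The core idea behind OpenFlow is that forwarding behavior lives in a match/action flow table that external controller software can rewrite, rather than being fixed in switch firmware. A rough sketch, with field names and rule formats invented for illustration (this is not the real OpenFlow wire protocol):

```python
# Simplified model of OpenFlow-style switching: a controller pushes
# match/action rules into the switch's flow table; packets that match
# no rule are punted back to the controller for a decision.

class FlowSwitch:
    def __init__(self):
        self.table = []  # list of (match_dict, action); first match wins

    def install_rule(self, match, action):
        """A controller 'programs the switch' by pushing rules like this."""
        self.table.append((match, action))

    def forward(self, packet):
        for match, action in self.table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send-to-controller"  # table miss: ask the controller

sw = FlowSwitch()
sw.install_rule({"dst": "10.0.0.2"}, "output:port2")
sw.install_rule({"vlan": 42}, "drop")
print(sw.forward({"dst": "10.0.0.2", "vlan": 1}))  # output:port2
print(sw.forward({"dst": "10.0.0.9"}))             # send-to-controller
```

This separation of the control logic from the forwarding hardware is what lets GENI researchers experiment "deep into the network" on commodity switches.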

Once these GENI-enabled infrastructures are up and running, researchers will begin running large-scale experiments on them. The first four experiments have been selected for the GENI platforms, and they will test novel approaches to cloud computing, first-responder networks, social networking services and interplanetary communications.

"All of these experiments are beyond the next-generation Internet," Elliott says. "All of these efforts are targeting the Internet in 10 to 15 years."

The benefit of GENI for these projects is that researchers can test them on a very large-scale network instead of on a typical testbed. That's why BBN and its partners are GENI-enabling entire campus networks, including dorm rooms.

"What's distinctive about GENI is its emphasis on having lots and lots of real people involved in the experiments," Elliott says. "Other countries tend to use traffic generators....We're looking at hundreds or thousands or millions of people engaged in these experiments."

Another key aspect of GENI is that it will be used to test new security paradigms. Elliott says the GENI program will fund 10 security-related efforts between now and October 2010.

"If I were rank ordering the experiments we are doing, security is the most important," Elliott says. "We need strong authentication of people, forensics and audit trails and automated tools to notice if [performance] is going south."

Elliott says GENI will be the best platform for large-scale network research to emerge in 20 years.

"You could argue that the Arpanet back in the '70s and early '80s was like this. People simultaneously did research and used the network," Elliott says. "But at some point it got impossible to do experimentation. For the past 20 years or so we have not had an infrastructure like this."
