Warning: Complexity Ahead!

Clay Shirky is a writer, consultant and teacher whose specialty is new technologies, especially those for the Internet. He was professor of new media at Hunter College at The City University of New York from 1998 to 2000 and now teaches a course called Thinking About Networks at New York University. He has written for The New York Times, The Wall Street Journal and the Harvard Business Review.

Shirky has concentrated lately on the rise of decentralizing technologies such as peer-to-peer (P2P). He recently told Computerworld's Gary H. Anthes why he thinks they will provide both opportunities and problems for IT managers.

With the advent of peer-to-peer computing, the world of client/server isn't so simple anymore, is it? With a Web server/Web browser pair, client and server are defined for all time. But when you are running Napster or [the instant messaging application] ICQ, sometimes you are behaving as a client and sometimes as a server. And when you look at the implementation of SOAP [Simple Object Access Protocol], it pretty much looks like a P2P implementation language.

The idea is that any two computers that can package a SOAP envelope can engage in application-to-application communication. So if everything speaks SOAP, the difference between client and server is really situational; it's not defined in advance.
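To make the point concrete, a SOAP envelope is just a small XML document carried over an ordinary transport such as HTTP, so any program that can produce and parse that XML can play either role. The sketch below, in Python, hand-builds a minimal SOAP 1.1 envelope and POSTs it; the endpoint (www.example.com) and the GetQuote operation are hypothetical placeholders, not part of any real service.

```python
# Minimal sketch: hand-building a SOAP 1.1 envelope and sending it over HTTP.
# The endpoint (www.example.com) and the GetQuote operation are hypothetical.
import http.client

envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetQuote xmlns="urn:example:quotes">
      <symbol>IDG</symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>"""

conn = http.client.HTTPConnection("www.example.com")
conn.request(
    "POST",
    "/quote-service",
    body=envelope.encode("utf-8"),
    headers={
        "Content-Type": "text/xml; charset=utf-8",
        "SOAPAction": "urn:example:quotes#GetQuote",  # SOAP 1.1 action header
    },
)
response = conn.getresponse()
print(response.status, response.read().decode("utf-8", errors="replace"))
```

Nothing in that exchange marks either machine as permanently "the client" or "the server"; whichever side happens to send the envelope is the client for that one call, which is Shirky's point about the distinction being situational.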

What does that mean for the IT department? It's tough. With users operating their desktops as servers, it becomes harder to understand what's going on in your enterprise. With something like Groove [Networks'] collaborative P2P software, there is no central file server storing canonical versions of files and backing them up. So the tension in the P2P world is between a great increase in individual productivity vs. a loss of centralized control by the IT department. It's a huge cultural issue, and it's only going to get bigger.

Are IT managers losing control in the face of complexity? It is now physically impossible to operate with an accurate picture of global state. No local node can operate with a picture of what's going on in all other parts of the system. Typically, enterprise software has tried to keep track of everything going on in the system.

The promise of the enterprise resource planning model was that you'll have a globally accurate snapshot of your entire business down to the minute. But that doesn't work past a certain scale.

You've suggested that we look to biological models for ideas. Biological systems operate within a local context. Your kidneys only know what's going on in the kidneys, yet the whole organism functions. The kidneys say, "Here comes some poison, and I'm going to get rid of it." They don't know how the poison got there. They weren't talking to the mouth or the stomach; it just came in for processing.

How can computer systems be made to work like that? Applications become the new objects. They have a great deal of complexity that's encapsulated in a fairly opaque way, and they expose a handful of simple, well-documented interfaces, much as object-oriented programming uses encapsulation as a model for managing complexity.

What does that mean for software developers? Designers of successful applications are going to rely more on protocols and less on APIs [application programming interfaces], in part because protocols are simpler and change less, and in part because they are defined independently of the software. One of the huge surprises of Internet scale is that well-defined protocols, which are almost brain-dead in their simplicity, have superior survival characteristics to beautifully designed and crafted APIs that change once a year.
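As a small illustration of the "brain-dead simple protocol" idea, the sketch below speaks raw HTTP/1.0 over a plain socket. No client library or vendor API is involved; any program in any language that can write these few lines of text to a TCP connection can interoperate, because the contract lives in the protocol, not in anyone's software. The host name is a placeholder, not a specific service.

```python
# A brain-dead simple protocol in action: raw HTTP/1.0 over a plain socket.
# The protocol is defined independently of any library API; the host is a placeholder.
import socket

with socket.create_connection(("www.example.com", 80), timeout=10) as sock:
    request = (
        "GET / HTTP/1.0\r\n"
        "Host: www.example.com\r\n"
        "\r\n"
    )
    sock.sendall(request.encode("ascii"))

    # Read until the server closes the connection (normal HTTP/1.0 behavior).
    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:
            break
        chunks.append(data)

print(b"".join(chunks).decode("latin-1")[:500])  # headers plus the start of the body
```

The wire format above has barely changed in decades, while the programming interfaces wrapped around it have been redesigned many times over, which is exactly the survival characteristic Shirky is describing.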

How will this shift in design approach affect end users? Users will see an increase in the number of absolutely inexplicable failures. Systems will fail more often but less catastrophically. In biology, there is much more failure than in computing, but the failure is much less significant. If you have a few cells die, you don't get a blue screen of death. Biological systems have a property called homeostasis, which is the ability to return to some kind of internal norm.

And that ability to return to some kind of norm despite all kinds of external forces is going to be critical for any kind of system exposed to the Internet.

Can you give an example of a system like that? To most people, Napster meant kids stealing music. But to application designers, what it did was build a five 9s [99.999% uptime] service on fantastically unreliable hardware. At its height, Napster had 70 million unpaid system administrators, each operating a tiny, unreliable server. But if you needed a Britney Spears song at 3 a.m., it was there, period.
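Shirky doesn't spell out the arithmetic, but the intuition is standard reliability math: a file is unavailable only if every peer holding a copy is offline at once, so availability compounds quickly with replication. The short calculation below uses a made-up 50% per-peer uptime figure as an assumption for illustration, not a Napster statistic.

```python
# Why replication across unreliable peers can yield "five 9s" availability.
# The 50% per-peer uptime is an assumed, illustrative number.
peer_uptime = 0.50          # probability that any single peer is online
for copies in (1, 5, 10, 17):
    # The file is unavailable only if every peer holding a copy is offline.
    availability = 1 - (1 - peer_uptime) ** copies
    print(f"{copies:2d} copies -> {availability:.5%} available")

# With 17 independent copies, availability exceeds 99.999%,
# even though each individual peer is no better than a coin flip.
```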
