Alex Wright

The information architect talks about the lessons of computing history, the re-emergence of oral culture and all the data that Google doesn't index.


Alex Wright is a writer and information architect at The New York Times and the author of Glut: Mastering Information Through the Ages, a reflection on the current state of IT and its roots in history.

He has written for The New York Times, The Christian Science Monitor, Harvard Magazine and other publications. He has led research and design projects for Harvard University, IBM, Microsoft, The Long Now Foundation, the Internet Archive and Yahoo.

Do IT professionals pay too little attention to history? There is a tendency in computer science to ignore history. The German philosopher and programmer Werner Künzel said, "Computer theory is currently so successful that it has no use for its own history." There is this tendency to fixate on the future of IT, and the IT industry -- so driven by new releases and product innovation -- encourages that. We are always encouraged to look forward, sometimes at the expense of developing any sort of perspective on how we got here.

Can you give an example of the kind of thing we might learn from IT history? If you look at the history of hypertext that preceded the Web, there were some very promising ideas that were left by the wayside. If you look at the work of people like Ted Nelson or Doug Engelbart or Andries van Dam, you'll find some really interesting alternate ways of thinking about how networked information systems could work. Especially Ted Nelson. He laid out an incredibly thoughtful vision of how hypertext would work. His great project, Xanadu, has some important ideas in there. One was the idea that all hyperlinks should be bidirectional. When you get to a document, you should be able to see not only what it points to, but also what points into it. That can add an important layer of meaning. You can even trace that idea back to Vannevar Bush's famous essay "As We May Think," in 1945. [Unidirectionality] is a fundamental limitation of the Web today, but it's interesting to see how some developers have tried to approximate bidirectionality in things like TrackBack and Facebook.
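The bidirectional links Nelson proposed can be sketched as a data structure: alongside the usual forward index of outgoing links, keep a reverse index of backlinks. This is an illustrative sketch, not anything from Xanadu itself; the class and document names are invented for the example.

```python
from collections import defaultdict

class LinkIndex:
    """Minimal sketch of a bidirectional hyperlink index.

    The Web's HTML anchor stores only the forward direction;
    keeping a second map makes every link traversable in reverse,
    which is the capability TrackBack and similar systems approximate.
    """

    def __init__(self):
        self.forward = defaultdict(set)   # doc -> docs it points to
        self.backward = defaultdict(set)  # doc -> docs that point to it

    def add_link(self, src, dst):
        # Recording one link updates both indexes at once.
        self.forward[src].add(dst)
        self.backward[dst].add(src)

    def outlinks(self, doc):
        return sorted(self.forward[doc])

    def backlinks(self, doc):
        # The query a one-way web cannot answer without crawling everything.
        return sorted(self.backward[doc])

index = LinkIndex()
index.add_link("essay.html", "as-we-may-think.html")
index.add_link("blog-post.html", "as-we-may-think.html")
print(index.backlinks("as-we-may-think.html"))
```

The cost of the extra layer of meaning is visible here: backlinks require a second, globally maintained index, which is one reason the Web's one-way links were so much easier to deploy.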

Could we learn lessons from way back -- say, from hundreds of years ago? If you look at scribes in the Middle Ages, they developed new forms of what you might call information technology in the form of illuminated manuscripts. There's a case to be made that they were an early form of hypertext. The scribes developed new tools for managing information inside books, things like tables of content and indexes. Eventually they came up with canon tables, which were basically visual indexes to the Bible. They'd take stories from each of the Gospels and cross-reference them in a visual index -- a kind of illuminated hypertext -- that gave you a way to scan the contents and move between related sections.

Is there a software interface idea floating around in there somewhere? Maybe. It's interesting to fish a little and look at these early ideas to see if they spark any ideas for today.

Your book describes efforts over many centuries to categorize knowledge via taxonomies and ontologies. Are we finished with that now? People say the Web has ushered in a new era, where the old ways of categorizing information are anachronistic. They say the Web is much more bottom-up and self-organizing, so that those top-down systems of control, like library catalogs or indexes, are no longer viable in a world of billions of documents. Nobody is going to be able to catalog all that stuff. That argument is held out against the Semantic Web crowd, suggesting they are trying to achieve the impossible, making ontologies that will make sense of the world.

I think there is a role for ontologies and taxonomies, but they will be machine-generated and will derive meaning out of large bodies of information. Some of the work around the deep Web right now, looking at ways to reverse-engineer databases and extract structure from data sources, may create new kinds of frameworks.

Could IT learn anything from so-called primitive societies? We live in a culture that is very biased toward literacy, and there is this kind of unexamined bias we have toward nonliterate people, a tendency to see them as primitive and unsophisticated. Linguist Walter J. Ong wrote a lot about oral culture, and he argued that we don't understand oral culture or take it seriously. He argued that with the rise of electronic media, we are seeing the re-emergence of oral culture, and a lot of the old assumptions about literacy are being challenged. If you look at the way people interact on social networking sites, blogs, e-mail, IM, Twitter and so on, they have more in common with oral communications than with traditional [written] communications. There is something emerging here that we don't completely understand.

Are we keeping up with the information glut? A lot of people don't realize how limited a view of the Web we get by using things like Google. They have tens of billions of Web pages indexed, but they know about more than a trillion Web pages, most of which they don't index for a variety of reasons. And even beyond that, there's a larger web of data out there -- in scientific databases, discussion forums, Twitter feeds, online commerce catalogs and so on -- not showing up in search engines. There's some interesting work going on in deep Web research trying to figure out how to tame this even vaster array of information.

Are we nearing the end of print journalism? A lot of these end-of-print predictions are a little overblown. People are still buying newspapers. We will be in a multichannel world, with people consuming news from a variety of platforms, including print, the Web and mobile devices. And it will include a wave of new, special-purpose reading devices, like Amazon's Kindle, with its wireless "electronic paper" display.
