Doug Cutting, the creator of the open-source Hadoop framework that allows enterprises to store and analyze petabytes of unstructured data, led the team that built one of the world's largest Hadoop clusters while he was at Yahoo. The former engineer at Excite, Apple and Xerox PARC also developed Lucene and Nutch, two open-source search technologies now managed by the Apache Software Foundation. Cutting is now an architect at Cloudera, which sells and supports a commercial version of Hadoop and which this week will host the Hadoop World conference in New York. In an interview, Cutting talked about the reasons for the surging enterprise interest in Hadoop.
How would you describe Hadoop to a CIO or a CFO? Why should enterprises care? At a really simple level, it lets you affordably save and process vastly more data than you could before. With more data and the ability to process it, companies can see more, they can learn more, they can do more. [With Hadoop] you can start to do all sorts of analyses that just weren't practical before. You can start to look at patterns over years, over seasons, across demographics. You have enough data to fill in patterns and make predictions and decide, 'How should we price things?' and 'What should we be selling now?' and 'How should we advertise?' It is not only about having data for longer periods, but also about having richer data for any given period.
What are Hive and Pig? Why should enterprises know about these projects? Hive gives you [a way] to query data that is stored in Hadoop. A lot of people are used to using SQL, and so, for some applications, it's a very useful tool. Pig is a different language. It is not SQL. It is an imperative data flow language. It is an alternative way to do higher-level programming of Hadoop clusters. There is also HBase, if you want to have real-time [analysis] as opposed to batch. There is a whole ecosystem of projects that have grown up around Hadoop and that are continuing to grow. Hadoop is the kernel of a distributed operating system, and all the other components around the kernel are now arriving on the stage. Pig and Hive are good examples of those kinds of things. Nobody we know of uses just Hadoop. They use several of these other tools on top as well.
Why do you think there's so much interest in Hadoop right now? It is a relatively new technology. People are discovering just how useful it is. I think it is still in a period of growth where people are finding more and more uses for it. To some degree, software has lagged hardware for some years and now we are starting to catch up. We've got software that lets companies really exploit the hardware they can afford. And now that they can, they are discovering all sorts of things they can do with it.
What is it about incumbent relational database technologies that makes them unsuitable for some of the tasks that Hadoop is used for? Some of it is a matter of technological challenges. If you want to write a SQL query that has a join over tables that are petabytes [in size] -- nobody knows how to do that. The standard way you do things in a database tops out at a certain level. They weren't designed to support distributed parallelism to the degree that people now find affordable. You can buy a Hadoop-based solution for a tenth of the price [of conventional relational database technology]. So there is the affordability. Hadoop is a fairly crude tool, but it does let you really use thousands of processors at once, running over all of your data in a very direct way.
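The "thousands of processors at once running over all of your data" model Cutting describes is Hadoop's MapReduce programming model. A minimal single-machine sketch of the idea, counting words across input records, might look like the following (the function names and toy data are illustrative, not Hadoop APIs; Hadoop runs the same three phases distributed across a cluster):

```python
from collections import defaultdict

def map_phase(records):
    """Emit an intermediate (word, 1) pair for every word in every record."""
    for record in records:
        for word in record.split():
            yield (word, 1)

def shuffle(pairs):
    """Group intermediate values by key (Hadoop does this across nodes)."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Sum the grouped counts to produce one total per word."""
    return {word: sum(counts) for word, counts in grouped.items()}

lines = ["big data big clusters", "big data"]
counts = reduce_phase(shuffle(map_phase(lines)))
# counts == {"big": 3, "data": 2, "clusters": 1}
```

Because map calls are independent and reduces operate per key, each phase can be spread over many machines, which is what makes the "direct" brute-force scan over petabytes affordable.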
What are enterprises using Hadoop for? Well, we see a lot of different things, industry by industry. In the financial industry, people are looking at fraud detection: credit card companies are looking to see which transactions are fraudulent, and banks are looking at creditworthiness -- deciding if they should give someone a loan or not. Retailers are looking at long-term trends, analyzing promotions and inventory. The intelligence community uses this a lot for analyzing intelligence.