Massive data volumes making Hadoop hot
Complex data analytics requirements are driving interest in open source Hadoop technology, say users and analysts
Computerworld - Increasing efforts by enterprises to glean business intelligence from the massive volumes of unstructured data generated by web logs, clickstream tools, social media products and the like have led to a surge of interest in open source Hadoop technology, analysts say.
Hadoop, an Apache data management software project with roots in Google's MapReduce software framework for distributed computing, is designed to support applications that use massive amounts of unstructured and structured data.
Unlike traditional relational database management systems, Hadoop is designed to work with multiple data types and data sources. The Hadoop Distributed File System (HDFS) allows large application workloads to be broken up into smaller data blocks that are replicated and distributed across a cluster of commodity hardware for faster processing.
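The block-splitting and replication idea can be sketched in a few lines of Python. This is a conceptual toy, not HDFS itself: the block size, replication factor and node names are invented for illustration (real HDFS uses 128 MB blocks by default and rack-aware placement).

```python
# Toy sketch of the HDFS idea: a large file is split into fixed-size blocks,
# and each block is replicated across several nodes in a cluster.
# Block size, replication factor, and node names are illustrative only.

from itertools import cycle

BLOCK_SIZE = 16          # bytes per block (real HDFS defaults to 128 MB)
REPLICATION = 3          # copies of each block (the common HDFS default)
NODES = ["node1", "node2", "node3", "node4", "node5"]

def split_into_blocks(data: bytes, block_size: int = BLOCK_SIZE):
    """Break a byte stream into fixed-size blocks, as an HDFS client would."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def place_replicas(blocks, nodes=NODES, replication=REPLICATION):
    """Assign each block to `replication` distinct nodes, round-robin style.
    (Real HDFS placement is rack-aware; this is a simplification.)"""
    ring = cycle(nodes)
    return {idx: [next(ring) for _ in range(replication)]
            for idx in range(len(blocks))}

data = b"clickstream and web log data, far too big for one machine"
blocks = split_into_blocks(data)
for idx, replica_nodes in place_replicas(blocks).items():
    print(f"block {idx}: {len(blocks[idx])} bytes -> {replica_nodes}")
```

Because each block lives on several machines, a node failure costs nothing but a re-replication, and map tasks can be scheduled on whichever node already holds a local copy of the block they need.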
The technology is already widely used by some of the world's largest Web properties, such as Facebook, eBay, Amazon, Baidu and Yahoo. Observers note that Yahoo has been one of the biggest contributors to Hadoop.
Increasingly, Hadoop technology is used in banks, advertising companies, life science firms, pharmaceutical companies and by other corporate IT operations, said Stephen O'Grady, an analyst with RedMonk.
What's driving Hadoop is the desire by companies to leverage massive amounts of different kinds of data to make business decisions, O'Grady said. The technology lets companies process terabytes and even petabytes of complex data relatively effectively and at substantially lower cost than conventional relational database management systems, experts say.
"The big picture is that with Hadoop, even a one- or two-person startup can process the same volume of data that some of the biggest companies in the world do," he said.
Hadoop user Tynt, a Web analytics firm, provides analytics services for more than 500,000 websites. Its primary offering is a service that lets content publishers get insight into how their content is being shared. On an average day Tynt collects and analyzes close to 1 terabyte of data from hundreds of millions of web interactions on the sites that it monitors.
The company switched to Hadoop about 18 months ago when its MySQL database infrastructure began collapsing under the sheer volume of data that Tynt was collecting.
"Philosophically, Hadoop is a whole different animal," said Cameron Befus, Tynt's vice president of engineering.
Relational database technologies focus on the speed of data retrieval, complex query support, and transaction reliability, integrity and consistency. "What they don't do very well is to accept new data quickly," he said.
"Hadoop reverses that. You can put data into Hadoop at ridiculously fast rates," he said. Hadoop's file structure lets companies capture and consolidate virtually any structured or unstructured data type, such as web server logs, metadata, audio and video files, unstructured e-mail content, Twitter stream data and social media content, he said.
The technology therefore is ideal for companies looking to analyze massive volumes of structured and unstructured data.
Retrieving raw data from HDFS and processing it, however, is not nearly as easy or as convenient as with typical database systems, because the data is not organized or structured, Befus said. "Essentially what Hadoop does is to write data out in large files. It does not care what's in the files. It just manages them and makes sure that there are multiple copies of them."
Early on, users had to write jobs in a programming language like Java in order to parse and then query raw data in Hadoop. But tools are now available that can be used to write SQL-like queries for data stored in Hadoop, Befus said.
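A job of that early hand-written kind can be sketched in plain Python (Hadoop Streaming also lets map and reduce steps be written in languages other than Java; this toy version simply runs locally, and the log format and field positions are invented for illustration):

```python
# Toy map/reduce pass in plain Python, sketching the kind of job early
# Hadoop users had to hand-write to parse and aggregate raw log files.
# The log format and field positions below are made up for illustration.

from collections import defaultdict

raw_lines = [
    "2011-03-01 10:02:11 GET /index.html 200",
    "2011-03-01 10:02:12 GET /about.html 404",
    "2011-03-01 10:02:13 GET /index.html 200",
]

def map_phase(lines):
    """Map step: parse each raw line and emit (page, 1) pairs."""
    for line in lines:
        parts = line.split()
        page = parts[3]          # assumed field position in this toy format
        yield page, 1

def reduce_phase(pairs):
    """Reduce step: sum the counts emitted for each page."""
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

hits = reduce_phase(map_phase(raw_lines))
print(hits)  # {'/index.html': 2, '/about.html': 1}
```

The parsing logic (which whitespace-separated field is the page, what a line means at all) lives entirely in the job's code, which is exactly the inconvenience Befus describes: Hadoop stores the bytes, and every query starts from raw, unstructured lines.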
Tynt uses a popular tool called Pig for writing queries to Hadoop. Another widely used option is Hive.
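To show the style of query such tools accept, the same page-hit aggregation can be written declaratively. SQLite stands in here purely as an analogy for Hive's SQL-like layer; it is not a Hadoop component, and the table and data are invented:

```python
# Analogy only: Hive accepts SQL-like queries over data in Hadoop, and Pig
# offers a comparable dataflow language. Here SQLite's in-memory engine
# shows the declarative style, replacing hand-written parsing code.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hits (page TEXT, status INTEGER)")
conn.executemany(
    "INSERT INTO hits VALUES (?, ?)",
    [("/index.html", 200), ("/about.html", 404), ("/index.html", 200)],
)

# The kind of GROUP BY a Hive user would write instead of a raw Java job.
rows = conn.execute(
    "SELECT page, COUNT(*) FROM hits GROUP BY page ORDER BY page"
).fetchall()
print(rows)  # [('/about.html', 1), ('/index.html', 2)]
```

The appeal is the same one Befus points to: analysts can express grouping and counting in a few lines of query text, while Pig or Hive compiles it down to MapReduce jobs over the raw files for them.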