Enterprises that are thinking about big data need to realize that it isn't just about analyzing vast amounts of data, but also about how that information is stored, Amazon CTO Werner Vogels said during a keynote at the Cebit trade show.
Vogels' speech, titled "Data without limits," encouraged enterprises to think about the big picture, and he also presented a blueprint for how Amazon's cloud can ease some of the pain of implementing big data systems.
"Big data is not only about analytics, it's about the whole pipeline. So when you think about big data solutions you have to think about all the different steps: collect, store, organize, analyze and share," said Vogels.
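The five steps Vogels names form a chain in which each stage feeds the next. A minimal, illustrative sketch in Python, where each stage function is a hypothetical stand-in for a real service (collection agents, object storage, a database, an analytics cluster, a reporting layer), not any AWS API:

```python
# Illustrative only: each stage stands in for a real service in the pipeline.

def collect(source):
    # Gather raw records from a data source.
    return [record for record in source]

def store(records):
    # Persist the raw records; here, just keep them in memory.
    return list(records)

def organize(records):
    # Structure the data, e.g. key it by an identifier field.
    return {r["id"]: r for r in records}

def analyze(organized):
    # Derive a simple aggregate from the organized data.
    return {"records": len(organized),
            "total": sum(r["value"] for r in organized.values())}

def share(result):
    # Publish the result to consumers; here, simply return it.
    return result

raw = [{"id": 1, "value": 10}, {"id": 2, "value": 20}]
report = share(analyze(organize(store(collect(raw)))))
print(report)  # {'records': 2, 'total': 30}
```

The point of the chaining is that a weakness at any one stage limits the whole pipeline, which is why Vogels argues innovation is needed in all five areas.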
To make full use of the growing amounts of data many enterprises collect and to gain a competitive advantage, innovation has to occur in all of these areas, not just analytics, according to Vogels.
Amazon itself has been doing big data and analytics for a long time to target customers and come up with relevant recommendations. What it has learned along the way is that bigger, in this case, is better, according to Vogels. When recommendations have missed the mark, it has typically been because there wasn't enough data to back them up, he said.
But Amazon isn't just using big data itself, it is also helping drive demand for its cloud, which is the great enabler of this market, according to Vogels.
"It is really important that if you go into this big data world that you have limitless possibilities in your hand. You should not be restricted in the way you store things or the way you process it," said Vogels.
Amazon Web Services offers a number of services that can help enterprises collect, store, organize, analyze and share their data.
For example, Direct Connect allows enterprises to establish a dedicated network connection from their own site to Amazon. For really large amounts of data there is also AWS Import/Export, which lets enterprises ship portable storage devices to Amazon; their contents are then uploaded to Amazon's cloud storage.
"You should not underestimate the bandwidth of a FedEx box," said Vogels.
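The quip holds up to back-of-envelope arithmetic. A quick sketch, where the figures chosen (2 TB of disks shipped overnight versus a 100 Mbit/s line) are illustrative assumptions, not numbers from the keynote:

```python
# Compare the effective bandwidth of shipping disks vs. a network link.
# All figures below are illustrative assumptions.

shipped_bytes = 2 * 10**12           # 2 TB of portable storage in the box
shipping_seconds = 24 * 3600         # overnight delivery: 24 hours

box_bandwidth = shipped_bytes * 8 / shipping_seconds  # bits per second
line_bandwidth = 100 * 10**6         # a 100 Mbit/s connection

print(f"Box:  {box_bandwidth / 10**6:.0f} Mbit/s")   # ~185 Mbit/s
print(f"Line: {line_bandwidth / 10**6:.0f} Mbit/s")  # 100 Mbit/s
```

Even against a fast dedicated line, the shipped box wins under these assumptions, and the gap widens the more storage fits in the box.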
Other services that are also a good fit for big data include Amazon's Simple Storage Service, the DynamoDB NoSQL database and the Apache Hadoop-based Elastic MapReduce, which can be used to perform data-intensive analytics tasks.
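Elastic MapReduce takes its name from the MapReduce programming model, in which work is split into per-record map steps whose outputs are grouped by key and combined in reduce steps. A toy word count in plain Python illustrates the model itself; this is not the EMR or Hadoop API:

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    # Shuffle: group all emitted values by key.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce: combine the values collected for each key.
    return {key: sum(values) for key, values in grouped.items()}

docs = ["big data big pipeline", "big cloud"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)  # {'big': 3, 'data': 1, 'pipeline': 1, 'cloud': 1}
```

Because map and reduce steps are independent per key, a framework like Hadoop can spread them across many machines, which is what makes the model a fit for data-intensive analytics.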
The purported advantages are the same as for cloud services in other areas: paying only for resources used, faster deployment, less management overhead, and the ability to add more computing power quickly.
Vogels also had some homework for his audience, recommending the book "The Fourth Paradigm: Data-Intensive Scientific Discovery," which traces the origins of big data.