Amazon launches workflow orchestration service
The AWS Data Pipeline can streamline big data analysis jobs
IDG News Service - Users of Amazon Web Services will soon be able to orchestrate workflows across different AWS services and their own internal resources, using a new orchestration engine called the AWS Data Pipeline.
Amazon Chief Technology Officer Werner Vogels introduced the technology at the company's Re:Invent conference, held this week in Las Vegas. The service is now available in limited beta preview, though Vogels did not say when it would be commercially available, nor what the price would be.
The service can "automate the movement and processing of any amount of data using data-driven workflows and built-in dependency checking," according to a blog post in which AWS explained the technology further.
Amazon designed the service to automate the process of parsing large sets of data. For example, one pipeline could move log data from an AWS EC2 (Elastic Compute Cloud) instance to AWS S3 (Simple Storage Service) once a day, and then, once a week, invoke an analysis job on that data on an AWS Elastic MapReduce cluster.
To set up a workflow pipeline, the user identifies the data sources and describes the steps that AWS should take to process the data. The user also identifies the destination for the processed data and a schedule for when the pipeline should be executed. Preconditions can also be established that the service checks before executing a job, such as verifying that a required input file exists.
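As a rough illustration of how those pieces fit together, here is a minimal sketch of a pipeline definition in the "pipelineObjects" format accepted by the boto3 SDK's datapipeline client. The bucket, paths, roles, schedule, and shell command are hypothetical examples, not details from Amazon's announcement.

```python
# A minimal sketch of a pipeline definition for the daily log-shipping
# example above. Every object is a list of key/value "fields"; refValue
# links one object to another (e.g. an activity to its schedule).
# All names here are hypothetical.

pipeline_objects = [
    # Defaults inherited by every object in the pipeline.
    {"id": "Default", "name": "Default", "fields": [
        {"key": "scheduleType", "stringValue": "cron"},
        {"key": "schedule", "refValue": "DailySchedule"},
        {"key": "role", "stringValue": "DataPipelineDefaultRole"},
        {"key": "resourceRole", "stringValue": "DataPipelineDefaultResourceRole"},
    ]},
    # Run once a day, matching the log-shipping example above.
    {"id": "DailySchedule", "name": "DailySchedule", "fields": [
        {"key": "type", "stringValue": "Schedule"},
        {"key": "period", "stringValue": "1 day"},
        {"key": "startDateTime", "stringValue": "2012-12-01T00:00:00"},
    ]},
    # Precondition: only run the job if the expected log file is in S3.
    {"id": "LogExists", "name": "LogExists", "fields": [
        {"key": "type", "stringValue": "S3KeyExists"},
        {"key": "s3Key", "stringValue": "s3://example-log-bucket/raw/app.log"},
    ]},
    # The step itself: a shell command that stages the day's logs.
    {"id": "ShipLogs", "name": "ShipLogs", "fields": [
        {"key": "type", "stringValue": "ShellCommandActivity"},
        {"key": "command", "stringValue": "sh /home/ec2-user/ship-logs.sh"},
        {"key": "precondition", "refValue": "LogExists"},
        {"key": "runsOn", "refValue": "WorkerInstance"},
    ]},
    # An EC2 instance that the service launches to run the command.
    {"id": "WorkerInstance", "name": "WorkerInstance", "fields": [
        {"key": "type", "stringValue": "Ec2Resource"},
        {"key": "instanceType", "stringValue": "m1.small"},
    ]},
]
```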
Pipelines can run across EC2, Elastic MapReduce clusters, and the user's own hardware. Pipelines can be set up in the AWS Management Console or by writing a script.
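For the scripted route, the three calls involved look roughly like the sketch below, again using the boto3 datapipeline client; pipeline_objects is the definition from the previous snippet, and the pipeline name is hypothetical.

```python
# Continuing the sketch above: register the definition and activate it.
import boto3

client = boto3.client("datapipeline", region_name="us-east-1")

# uniqueId makes the creation request idempotent on retries.
pipeline = client.create_pipeline(
    name="daily-log-pipeline",
    uniqueId="daily-log-pipeline-v1",
)
pipeline_id = pipeline["pipelineId"]

# Upload the definition built in the previous sketch; the service
# validates it and reports any errors in the response.
result = client.put_pipeline_definition(
    pipelineId=pipeline_id,
    pipelineObjects=pipeline_objects,
)

# Only activate if validation passed.
if not result["errored"]:
    client.activate_pipeline(pipelineId=pipeline_id)
```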
This is not the first workflow engine on AWS. The company also launched the Amazon Simple Workflow Service in February. However, AWS Data Pipeline is more focused on executing data-driven jobs.
The AWS Data Pipeline is one of a number of announcements Amazon made at the conference. The company also unveiled a data warehouse service and an auto-discovery feature to ease management of its ElastiCache service. It has also cut prices on some of its storage services and introduced two new EC2 instance types, aimed at high-memory and data-intensive workloads.