Amazon cites cause of recent outage, issues refunds
Network World - A latent bug triggered by a hardware replacement in one of Amazon Web Services' Northern Virginia data centers caused last week's outage of more than 12 hours, which brought down popular sites such as Reddit, Imgur, Airbnb and Salesforce.com's Heroku platform, according to a post-mortem issued by Amazon.
In response, AWS says it is refunding certain charges to customers affected by the outage, specifically those who had trouble accessing AWS application programming interfaces (APIs) at the height of the downtime.
AWS says the latest outage was limited to a single availability zone in the US-East-1 region, but an overly aggressive throttling policy, which AWS has also vowed to fix, spread the impact into multiple zones for some customers.
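The post-mortem does not describe the throttling mechanism itself, but API rate limits of this kind are commonly built on a token-bucket scheme. The sketch below (all names and numbers hypothetical, not Amazon's implementation) shows how a budget that is set too low rejects even a modest burst of requests, which is how an "overly aggressive" policy can lock out healthy customers:

```python
import time

class TokenBucket:
    """Hypothetical token-bucket throttle: each caller has a budget of
    tokens that refills over time; an empty bucket means the request
    is rejected. A too-small budget throttles legitimate traffic."""

    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = capacity
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self):
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# An "aggressive" bucket (tiny capacity, slow refill) facing a burst of 10 calls:
aggressive = TokenBucket(capacity=2, refill_per_sec=0.1)
results = [aggressive.allow() for _ in range(10)]
print(results.count(True))  # only 2 of the 10 calls get through
```

Tuning the capacity and refill rate is the whole game: per the post-mortem, AWS's policy erred on the side of rejecting too much.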
The problem arose Oct. 22 from what AWS calls a "latent memory bug" that surfaced after a failed piece of hardware was replaced in one of Amazon's data centers. Reporting agents inside the EBS servers did not recognize the replacement and kept attempting to contact the data-collection server that had been removed, setting off a chain reaction inside AWS's Elastic Block Store (EBS) service that eventually spread to its Relational Database Service (RDS) and Elastic Load Balancers (ELBs).
"Rather than gracefully deal with the failed connection, the reporting agent continued trying to contact the collection server in a way that slowly consumed system memory," the post-mortem reads. It goes on to note that "our monitoring failed to alarm on this memory leak."
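Amazon's report includes no code, but the failure mode it describes, an agent buffering reports for a collection server that no longer exists, can be sketched roughly as follows (all class and field names are hypothetical, not AWS internals). The only difference between the leaky agent and the graceful one is whether the retry backlog is bounded:

```python
from collections import deque

class ReportingAgent:
    """Hypothetical sketch of the described bug: an agent that queues
    reports destined for an unreachable collection server. With no cap
    on the backlog, every failed delivery attempt consumes more memory."""

    def __init__(self, max_buffered=None):
        # maxlen=None reproduces the leak; a bounded deque "gracefully
        # deals with the failed connection" by discarding the oldest
        # undelivered reports instead of hoarding them.
        self.buffer = deque(maxlen=max_buffered)

    def send(self, report, server_reachable):
        if server_reachable:
            self.buffer.clear()  # delivered: backlog flushed
        else:
            self.buffer.append(report)  # keep for retry; grows if unbounded

leaky = ReportingAgent(max_buffered=None)  # mimics the bug
safe = ReportingAgent(max_buffered=1000)   # bounded backlog

for i in range(100_000):  # the collection server never comes back
    leaky.send({"metric": i}, server_reachable=False)
    safe.send({"metric": i}, server_reachable=False)

print(len(leaky.buffer))  # 100000 -- one entry per failed attempt
print(len(safe.buffer))   # 1000 -- capped, memory use stays flat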
AWS says it is difficult to set accurate alarms on memory usage because the EBS system dynamically uses resources as needed, so memory usage fluctuates frequently. The system is designed to tolerate a degree of missing servers, but eventually the leak consumed so much memory that it began affecting customer requests. From there, the issue snowballed -- "the number of stuck volumes increased quickly," AWS reports.
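One way around the problem AWS describes, that a fixed memory threshold either fires constantly or never on a workload that legitimately fluctuates, is to alarm on sustained growth rather than absolute level. This is a generic monitoring sketch under that assumption, not AWS's actual alerting logic:

```python
def leak_alarm(samples, window=10):
    """Hypothetical leak detector: fire only when memory usage has risen
    monotonically across an entire window of samples. Normal fluctuation
    (ups and downs) never trips it; a slow steady climb does."""
    recent = samples[-window:]
    if len(recent) < window:
        return False  # not enough history yet
    return all(b > a for a, b in zip(recent, recent[1:]))

noisy = [50, 62, 48, 70, 55, 66, 51, 68, 57, 64, 53]      # fluctuating, healthy
leaking = [50, 52, 55, 59, 64, 70, 77, 85, 94, 104, 115]  # slow steady climb

print(leak_alarm(noisy))    # False -- dips break the streak
print(leak_alarm(leaking))  # True -- every sample higher than the last
```

A production system would smooth over short windows and tune the streak length, but the principle, watching the trend instead of the level, is what a fixed threshold misses.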
AWS first reported a small issue at 10 a.m. PT, but within an hour said it was impacting a "large number of volumes" in the affected availability zone. This appears to be the point when major sites such as Reddit, Imgur, Airbnb and Salesforce.com's Heroku platform went down. By 1:40 p.m. PT, AWS said, 60% of the impacted volumes had recovered, but its engineers still had not pinned down why.
"The large surge in failover and recovery activity in the cluster made it difficult for the team to identify the root cause of the event," the report reads. Two hours later the team identified the problem, and restoration of the remaining impacted services was almost fully complete by 4:15 p.m. PT.