
Amazon cites cause of recent outage, issues refunds

By Brandon Butler
November 1, 2012 04:51 PM ET

Network World - An unexpected bug that cropped up after new hardware was installed in one of Amazon Web Services' Northern Virginia data centers caused last week's more than 12-hour outage that brought down popular sites such as Reddit, Imgur, Airbnb and Salesforce.com's Heroku platform, according to a post-mortem issued by Amazon.



In response, AWS says it is refunding certain charges to customers affected by the outage, specifically those who had trouble accessing AWS application programming interfaces (APIs) at the height of the downtime event.



AWS says the latest outage was limited to a single availability zone in the US-East-1 region, but an overly aggressive throttling policy, which the company has also vowed to fix, spread the impact for some customers into multiple zones.



The problem arose Oct. 22 from what AWS calls a "latent memory bug" that appeared after a failed piece of hardware had been replaced in one of Amazon's data centers. The system failed to recognize the new hardware, which caused a chain reaction inside AWS's Elastic Block Store (EBS) service that eventually spread to its Relational Database Service (RDS) and its Elastic Load Balancers (ELBs). Reporting agents inside the EBS servers kept attempting to contact the failed server that had been removed.



"Rather than gracefully deal with the failed connection, the reporting agent continued trying to contact the collection server in a way that slowly consumed system memory," the post-mortem reads. It goes on to note that "our monitoring failed to alarm on this memory leak."



AWS says it's difficult to set accurate alarms for memory usage because the EBS system dynamically uses resources as needed, so memory usage fluctuates frequently. The system is designed to tolerate missing servers to a degree, but eventually the memory leak became so severe that it started affecting customer requests. From there, the issue snowballed -- "the number of stuck volumes increased quickly," AWS reports.
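To see why that tuning is hard, consider a simplified example (the numbers and threshold below are invented, not AWS's actual monitoring data): a fixed memory threshold set above normal peaks never fires on a slow leak, while a check for a sustained upward trend does:

    # Hypothetical illustration of the alarm problem; the sampled values are
    # made up. Healthy servers already swing between low and high memory use,
    # so a static threshold has to sit above the normal peaks.
    THRESHOLD_MB = 7000

    normal_samples = [3200, 5900, 2800, 6100, 3400, 6400]   # spiky but healthy
    leaking_samples = [3300, 3500, 3700, 3900, 4100, 4300]  # steady upward creep

    def static_alarm(samples):
        # Fires only when a single reading crosses the fixed threshold.
        return any(s > THRESHOLD_MB for s in samples)

    def trend_alarm(samples, min_rise_mb=500):
        # Fires on a sustained, monotonic rise even well below the threshold.
        return samples == sorted(samples) and samples[-1] - samples[0] >= min_rise_mb

    print(static_alarm(normal_samples), static_alarm(leaking_samples))  # False False
    print(trend_alarm(normal_samples), trend_alarm(leaking_samples))    # False True

In this toy example the slow leak never crosses the fixed threshold during the window shown, which mirrors AWS's point that a static memory alarm is hard to tune when normal usage fluctuates so widely.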



AWS first reported a small issue at 10 a.m. PT, but within an hour said the issue was impacting a "large number of volumes" in the affected availability zone. This appears to be the point when major sites such as Reddit, Imgur, Airbnb and Salesforce.com's Heroku platform all went down. By 1:40 p.m. PT, AWS said, 60% of the impacted volumes had recovered, but AWS engineers were still unsure what was causing the problem.



"The large surge in failover and recovery activity in the cluster made it difficult for the team to identify the root cause of the event," the report reads. Two hours later the team figured out the problem and restoration of the remaining impacted services continued until it was almost fully complete by 4:15 p.m. PT.



Originally published on www.networkworld.com.
Reprinted with permission from NetworkWorld.com. Story copyright 2012 Network World, Inc. All rights reserved.