Google offers tips on reducing latency on large-scale systems
In the latest ACM magazine, Google fellows offer a few secrets to keeping Web systems responding to users as quickly as possible
IDG News Service - Running the world's most popular website, Google engineers know a thing or two about keeping a site responsive under very high demand. In the latest issue of Communications of the ACM, the monthly magazine of the Association for Computing Machinery, Google reveals a few secrets to maintaining speedy operations on large-scale systems.
Systems as large as Google's can suffer from even a few sluggish individual nodes, write the article's authors, Jeffrey Dean, a Google fellow in the company's systems infrastructure group, and Luiz André Barroso, a Google fellow who is technical lead of Google's core computing infrastructure. The good news is that while slow nodes can never be eliminated entirely, a system can still be designed to offer speedy service to the user, the authors wrote.
"It's an important topic. When you have a [user] request that needs to gather information from many machines, inherently some of the machines will be slow," said Ion Stoica, an ACM reviewer who is a computer science professor at the University of California Berkeley, as well a co-founder of video stream optimization software provider Conviva.
"As [Internet services] try to reduce the response times more and more, the problem will become more difficult because [the systems] will have less time to decide what to do when something goes wrong. So it will be an area of research and development that will get attention over the next few years," he said.
Looking at performance variability is particularly important with large distributed systems such as Google's, because performance troubles on even a single node can result in delays that affect many users. "Variability in the latency distribution of individual components is magnified at the service level," the authors wrote.
For instance, consider a server that typically responds to a request within 10 milliseconds but takes an entire second to fulfill a request every 100th time. In a single server environment, this means that only every 100th user would get a slow response. But if each user request is handled by 100 servers -- each with the same latency characteristics -- then 63 out of every 100 users would get a slow response, the authors calculated.
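The authors' arithmetic follows directly from the independence of the servers: if each responds slowly on 1 percent of requests, the chance that none of 100 servers is slow is 0.99 raised to the 100th power. A short sketch of that calculation:

```python
# Probability that a user request fanned out to n servers hits at least
# one slow response, when each server is independently slow on a
# fraction p_slow of its requests (1% in the article's example).
def slow_fraction(n: int, p_slow: float = 0.01) -> float:
    """Chance that at least one of n independent servers responds slowly."""
    return 1 - (1 - p_slow) ** n

# One server: only 1 in 100 users sees a slow response.
print(round(slow_fraction(1), 2))    # 0.01
# Fan out to 100 servers: about 63 in 100 users see a slow response.
print(round(slow_fraction(100), 2))  # 0.63
```

The takeaway matches the article: rare per-node hiccups become the common case once a request must wait for its slowest of many servers.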
Performance variability can arise for a number of reasons, the authors note. Sharing resources, such as running multiple applications on a single server, can affect the response time of each application. The length of a component's work queue may also be a factor, as can routine maintenance jobs that take up resources.
The Google engineers offered a number of techniques for mitigating slow performance from individual nodes, such as breaking jobs into smaller components and better managing routine maintenance tasks.
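One tail-tolerance technique from the same Dean and Barroso article is the "hedged request": send the request to one replica, and if no answer arrives within a short delay, send a backup copy to a second replica and accept whichever response comes first. The sketch below illustrates the idea under assumptions of this article's own making: the replica functions, the 50-millisecond hedge delay, and the two-replica setup are all hypothetical, not Google's implementation.

```python
import concurrent.futures
import time

def hedged_request(replicas, hedge_delay):
    """Send a request to the primary replica; if it has not answered
    within hedge_delay seconds, also send it to a backup replica and
    return whichever response arrives first."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        primary = pool.submit(replicas[0])
        done, _ = concurrent.futures.wait([primary], timeout=hedge_delay)
        if done:
            # Primary answered within the hedge delay; no backup needed.
            return primary.result()
        # Primary is lagging: issue the hedged (backup) request.
        backup = pool.submit(replicas[1])
        done, _ = concurrent.futures.wait(
            [primary, backup],
            return_when=concurrent.futures.FIRST_COMPLETED)
        return done.pop().result()

# Hypothetical replicas: a stalled primary and a healthy backup.
def slow_replica():
    time.sleep(0.5)
    return "slow"

def fast_replica():
    time.sleep(0.01)
    return "fast"

print(hedged_request([slow_replica, fast_replica], hedge_delay=0.05))
# → "fast": the backup beat the stalled primary.
```

In practice the hedge delay is set near the tail of the latency distribution (e.g., the 95th percentile), so backup requests are issued for only a small fraction of traffic while cutting the slowest responses dramatically.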