How do they do IT?

Jumping on a plane every second week to descend upon a different city in one of 19 far-flung corners of the globe is a travel dream most of us never have the chance to experience. It’s too expensive and tantalisingly out of reach.

But it’s the epitome of the affluent jet-setter lifestyle, especially when you consider the locales include Monte Carlo, Valencia, Shanghai, Sao Paulo, Monza and Melbourne. Did I just say Melbourne? Yes, Victoria’s capital city is considered exotic by many non-Aussies. Can you hear the chuckles in Sydney?

Jokes aside, imagine being in a position to dive off a yacht to swim in the azure waters of the Mediterranean off Monte Carlo one week, wander the Byzantine churches and Ottoman mosques of Istanbul the next and party in Montreal the week after. Delectable? Yes. Now picture doing it again in another 16 locales. The mind wanders and wonders.

But who does this, aside from the elite of elite playboys and girls of the world? Backpackers? Perhaps, but not on this scale. Airline staff? Yes, but often they work one route and have to turn around and come back the next day. Cruise ship crew? Not likely in this time frame. Tennis and golf stars? Possibly.

No, it’s a well-heeled group of IT managers and their amply — financially speaking — supported posse. Nine months, 19 locations.

It sounds too good to be true and it is. Sadly, this trip is no holiday.

Humour me, and this time imagine that in each of those 19 locations you have to set up the IT systems of the equivalent of a medium-size enterprise in less than seven days and, a week later, pack it all up, ship it off to the next destination and do it all over again. Each time, the environment is uncertain, the weather conditions unpredictable and most off-the-shelf kit wouldn’t last an hour.

Add in the fact that the systems are a vital support mechanism to a driven (mad?) man hurtling around a race track at more than 300 kilometres an hour just a few centimetres above the bitumen. For four days, you are a mission-critical part of what is often considered the pinnacle of motor sports. If the IT fails, the driver is in mortal danger and bucket loads of cash could be lost — to say nothing of your own job. It’s Formula 1 and it’s the life of a handful of IT managers who work in the glamorous sport. And no, they don’t get much of a chance to chat with the Pit Lane girls.


R-evolution

“When I compare what we did when I started eight years ago, it has changed quite a lot,” BMW Sauber head of IT, Peter Furrer, notes. “Before, there was just a small amount of IT at the track. We had a few working spaces, a few had laptops and a connection back to the factory. We would use an analogue telephone line to call people and to get an email connection.”

As in many industries where IT systems have become central to core operations, Formula 1 Grand Prix teams have become heavily reliant on advanced technologies.

In accordance with Federation Internationale de L’Automobile (FIA) regulations, each car has roughly 100 sensors placed in key data-capture positions, and these send anywhere up to 20 gigabytes of data back to the pits during a race.

The pit crews and team management are not allowed to send data directly to the driver while he is screaming around the course, but they can advise him to make changes via a control panel on the steering wheel. And this could be the difference between first place and failure.

“They are collecting four to six megabytes per lap. It depends on the track layout and the quality of the coverage but we transfer about 70 per cent in real time to the garage or the pits,” Furrer said. “The other data is downloaded when we connect the car to the network in the garage.

“When we convert data from the real time stream into data we can use I have three servers involved for each car. And then we have the file servers, the main controllers and some auto servers. They are all physical machines; we have about 32 servers on the track. Therefore we are planning to virtualise most of them [on VMware] and, afterwards, have about three or four physical servers and run all the necessary servers as virtual machines.”
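To put Furrer’s per-lap figures in perspective, here is a minimal back-of-envelope sketch; the 58-lap race distance is purely an illustrative assumption, since lap counts vary from circuit to circuit.

```python
# Back-of-envelope split of Furrer's per-lap telemetry figures.
# Assumption: an illustrative 58-lap race; actual lap counts vary by circuit.
MB_PER_LAP = (4, 6)       # quoted range, megabytes per lap
REALTIME_SHARE = 0.70     # share streamed live to the garage
LAPS = 58                 # assumed race distance

for mb_per_lap in MB_PER_LAP:
    total = mb_per_lap * LAPS
    live = total * REALTIME_SHARE
    print(f"{mb_per_lap} MB/lap: ~{total} MB per race, "
          f"~{live:.0f} MB streamed live, ~{total - live:.0f} MB downloaded in the garage")
```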

The existing setup for BMW Sauber weighs close to 3200 kilograms. Making the virtualisation move would save close to 1500 kilograms in freight, at about $200 per kilogram, Furrer claims.
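Using Furrer’s own numbers, the freight saving is simple to estimate; note that treating the $200-per-kilogram figure as a per-race shipping rate, and repeating it across 19 races, are assumptions on my part.

```python
# Rough freight saving from virtualising the trackside servers, using Furrer's figures.
# Assumptions: $200/kg is a per-race shipping rate; the saving repeats at all 19 races.
SAVING_KG = 1500          # weight removed from the freight
COST_PER_KG = 200         # quoted freight cost per kilogram
RACES_PER_SEASON = 19

per_race = SAVING_KG * COST_PER_KG
print(f"Per race:   ~${per_race:,}")                        # ~$300,000
print(f"Per season: ~${per_race * RACES_PER_SEASON:,}")     # ~$5,700,000
```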

Admittedly, the teams use off-the-shelf equipment for some aspects of their trackside infrastructure — McLaren Racing, for example, went with a BlackBerry unified communications solution and Lenovo hardware for workstations and laptops — but the core servers are often placed in extreme conditions that include carbon dust, oil, vibration, rain, heat, variable power and a need for portability.

The Renault F1 team, for instance, travels with three special ‘shock-proof’ mobile racks housing its critical IT service equipment — HP ProLiant servers for Windows and Linux services, NetApp Filers for data storage, Cisco networking, and UPS systems — and two half-height mobile racks for networking requirements. Many of the teams build the fully sealed and air-conditioned racks in-house as they are difficult to source.

Head of IS at Renault, Graeme Hackland, says that as teams squeeze into tight garage space, they have increased virtualisation and the use of remote services. Renault is also looking at deploying solid state disks (SSDs) in the 2010 season. “All our travelling personnel will have new HP laptops with SSD for better reliability and increased performance,” he says. “The pit wall machines and some servers will also have SSDs to cope with the extreme environmental conditions and vibration.”


Data crunching at F1 speed

Year on year, each team collates data sets to be used for reference and comparison during each race event. They also carry engineering data, FIA regulations and strategy data, including competitor analysis, and are provided with weather updates, GPS and timing information by the race organisers.

In return, the teams often provide the organisers (and by default broadcasters) with a feed of the driver’s radio and some other information on the car such as the gear changes.

In the same way corporations use business intelligence tools to assist with decision making, the F1 teams crunch their data — often in a much shorter time frame and with more immediately at stake — at the ‘pit perch’.

“This is where the management, race engineers and strategists sit during the race and make important decisions regarding pit stops and strategy,” Red Bull Racing trackside infrastructure engineer, Olaf Janssen, explains. “The pit wall is connected to the garage LAN in order to receive telemetry, strategy and timing feeds as well as intercom communications with the garage, race control and even the operations room in the UK. The operations room is basically an extension of the trackside operations. It is occupied by specialist groups that assist with operations trackside. They are constantly in communication with the engineers at the track and receive all the data as if they were located at the track itself.”

Clearly, with the intense time, financial and safety pressures involved in contemporary F1 racing, the networks become crucial. Most teams will have a garage LAN at the track, a factory LAN back at headquarters, a WAN between the track garage and the factory and a telemetry system (a Multiprotocol Label Switching, or MPLS, network) from the track to the garage, which is provided by the organisers.

Critical data is transferred while the cars are on the track and a full download takes place over an Ethernet connection when they return to the pits. Teams are not able to modify the car after qualifying, so the data from the third practice session is the most vital because qualifying takes place two hours later. In this short window, all the data must be sent to the engineers (often back at headquarters), crunched and used in final preparations.

“The garage network connects the cars, the engineers, mechanics and support staff with the servers, storage and essential software applications over the LAN,” Williams head of IT, Chris Taylor, explains.

“This LAN is connected to the factory LAN with an AT&T eVPN [enhanced Virtual Private Network], which is a direct private connection [not an internet VPN]. The link will run at 12Mbps in 2010 and have WAN acceleration, which puts the WAN connection at almost LAN speed. This AT&T WAN is critical to our operations and is a fully delivered and supported global network from AT&T. We are presented with a connection to this network at every race so we can send essential setup data, component service data, emails and telemetry files so supporting engineers at the factory can assist with setup and diagnosis if required.”
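Taylor’s point about WAN acceleration is easier to appreciate with a quick back-of-envelope calculation; the 2GB post-session data set below is purely an assumed example size, not a figure quoted by Williams.

```python
# How long a raw transfer would take over the quoted 12Mbps WAN link.
# The 2 GB data-set size is an assumed example, not a figure from Williams.
LINK_MBPS = 12        # quoted 2010 link speed, megabits per second
DATASET_GB = 2        # hypothetical post-session data set

megabits = DATASET_GB * 8 * 1024          # GB -> megabits
minutes = megabits / LINK_MBPS / 60
print(f"~{minutes:.0f} minutes without acceleration")   # roughly 23 minutes
```

With only two hours between the final practice session and qualifying, shaving that transfer time down is clearly worth the investment.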

IT teams work to tight deadlines, in highly variable locations across the world, in an environment not suitable for IT equipment, in a sport that demands absolute precision and performance. But what happens if things do fail?

“I hate even contemplating this. Besides the obvious risks and safety concerns of not having live telemetry, the difference between just racing and racing with a challenging and constantly changing strategy is data analysis,” Renault’s Hackland says. “The most critical time is when the car is on the track; you can get away with short periods of downtime during the rest of the event, but when the car is on the track, everything has to work!”

They may not share the limelight with the drivers, cars and team managers, but F1 IT managers are under no less pressure. Ensuring everything works at all 19 races across those nine months is no holiday. But it would certainly get your engines racing.


A regular week for an F1 IT team

Red Bull Racing is the team for which Australia’s Mark Webber has been driving since 2007. The team’s trackside infrastructure engineer, Olaf Janssen, explains what it’s like to operate the IT systems during a race week such as the one at the 5.303-kilometre Albert Park track in Melbourne in late March.

“The setup crew for the team would arrive in Australia on the Sunday before the race [March 28] and report to the track the following Monday morning to begin setup. The first task is the unloading of the tons of freight that travels with the team. From an IT perspective, we set up an IT rack area in the garage that contains the server racks and car racks that are used for telemetry and data analysis,” Janssen says.

“We then establish a WAN link with the factory in the UK so that we can use all the services located there, including everyday applications like email, IP telephony and various in-house production and manufacturing systems. This enables us to stay in direct contact with the factory and keep all systems up to date.

“We also use the link to send data for analysis by specialist groups in the operations room back in the UK during the practice sessions, qualifying and the race. The data is available for analysis literally minutes after the car returns to the garage.

“Once we have a link to the UK, we look to set up the local LAN at the track to be used by the management, engineers, drivers, marketing and hospitality areas. We would also wire the garage for communications from the car racks to the overheads in the garage, which are used for connecting to the engine control unit — the ECU — located in the car. This is used for sending data sets to the car as well as offloading the data logged by the ECUs while the car is out on the track. It normally takes us up to the end of Wednesday. On Thursday we set up the pit wall, which is used by the management, race engineers and strategy during the race.

“This is networked to the garage so that all the telemetry, timing and strategy information can be displayed on the numerous monitors that are on the pit wall. After this we set up the intercom system, which is used for communication within the team and even with the factory in the UK during the sessions. Once all this is completed we carry out tests on all the services that are needed for the first session on Friday morning. From Friday’s practice sessions to Sunday’s race, we monitor the systems to ensure everything is running smoothly and are on hand to deal with any issues that may be encountered by the end users.

“We are in constant communication with the operations room in the UK to ensure that they have all the necessary information that they require. Most members of the team have multi-tasking roles and you often have pit stop duties during qualifying and the race. As soon as the race is completed on a Sunday and, hopefully, the celebrations are over, the team begins tearing down the garage and packing the freight.

“From an IT perspective, the network is pulled down after the post-race analysis is complete, and all the race data is transferred back to the UK for analysis by the engineers first thing Monday morning. All the server racks are packed down and secured in the freight. The whole process normally takes six to seven hours, meaning we would leave the circuit after 10pm on a Sunday evening. This hopefully leaves enough time for a quick beer at the hotel bar.”


Copyright © 2010 IDG Communications, Inc.
