WMF Vulnerability Sparks Patch Program

When the need to patch a major Windows hole arises, our security manager sees an opportunity to implement a process that's been resisted.

The Windows Metafile (WMF) vulnerability, which emerged in the last week of 2005 and was resolved when Microsoft released a patch off its regular schedule at the end of the first week of 2006, wasn't good news at all. But I managed to wring a good outcome out of the situation: it allowed me to give some structure to our patch management process.

Before this threat arose, efforts to deploy a patch management process had been met with excuses. Resources were short. A Systems Management Server (SMS) upgrade was being deployed. And as a general rule, engineers resist patching because a bad patch could disrupt their work. When the WMF vulnerability came to light, I saw an opportunity to finally institute a patch management process without listening to a lot of moaning. Unfortunately, it sometimes takes a serious incident, or the threat of one, to bring about change in an IT organization.

What made the threat posed by the WMF vulnerability particularly potent was how easily a hacker could take advantage of it. All a hacker has to do is embed malicious code in an image, place the image on a Web site and then lure unsuspecting users to that site. Once a user browses to the site, the operating system executes the malicious code contained in the image -- no download or additional click is necessary.

Even though I hadn't heard of any incidents, I didn't want to take any chances. I've been through several incidents involving vulnerabilities, and it's never fun to clean up the mess. This WMF vulnerability just reeks of long nights in the data center operations war room.

My strategy for establishing a patch management process was fairly straightforward. My first priority was to get all of our desktops patched for the latest WMF vulnerability. We sent an e-mail to all 8,000 employees advising them to enable Windows Update so that critical patches would be installed automatically, or to click on a link to a Microsoft Web site where they could download only the WMF patch. We gave the employees 24 hours to comply, and then we used SMS to push the patch to all of the desktops.
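A rollout like this needs a way to verify who actually got the patch. The following is a minimal sketch, not our actual tooling: it assumes an inventory export (say, from SMS) mapping each desktop to its installed hotfix IDs, and checks for KB912919, the identifier Microsoft assigned to the WMF fix in bulletin MS06-001. The hostnames and the other hotfix IDs are made-up example data.

```python
# Hypothetical compliance check: flag desktops still missing the WMF
# patch (MS06-001, shipped as KB912919). The inventory dict stands in
# for an SMS hotfix-inventory export; its contents are example data.

WMF_PATCH = "KB912919"

def missing_patch(inventory, patch=WMF_PATCH):
    """Return hostnames whose installed-hotfix set lacks the patch."""
    return sorted(host for host, hotfixes in inventory.items()
                  if patch not in hotfixes)

# Example inventory (hypothetical hostnames and hotfix IDs):
inventory = {
    "desk-0141": {"KB905749", "KB912919"},
    "desk-0142": {"KB905749"},            # not yet patched
    "desk-0143": {"KB912919"},
}
print(missing_patch(inventory))  # -> ['desk-0142']
```

The resulting list is exactly what you'd feed back into SMS for the forced push after the 24-hour grace period.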

With the WMF problem properly disposed of, the next step was to ensure that all desktops were current with critical patches. My security team reviewed all of the critical updates that Microsoft released in 2005 and made a recommendation on which patches were critical to our environment. We had installed several updates throughout the year to address zero-day worm infestations, such as Zotob. But until now, our desktops hadn't been fully patched.

When I received the list of recommendations from the security team, I provided it to our desktop technology group, advising it to use the same approach to get our desktops up to date. The group has put a schedule together, so this part of the new process is well under way.

Finally, I am mandating a once-per-month patch review and update day. I'm calling this Patch Thursday, and it will fall at least nine days after Microsoft's well-known Patch Tuesdays. On our Patch Thursdays, we will review all new patches and decide which are critical, thus ensuring that our desktops remain compliant. Of course, I will reserve the right to deploy some patches immediately, just as we did for the WMF patch.
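The scheduling rule above is simple calendar arithmetic: Patch Tuesday is the second Tuesday of the month, and nine days after any Tuesday is the Thursday of the following week. A short sketch, using Python's datetime module (where Monday is weekday 0):

```python
from datetime import date, timedelta

def patch_tuesday(year, month):
    """Second Tuesday of the month -- Microsoft's Patch Tuesday."""
    first = date(year, month, 1)
    # Days until the first Tuesday (Tuesday is weekday 1).
    first_tuesday = first + timedelta((1 - first.weekday()) % 7)
    return first_tuesday + timedelta(7)

def patch_thursday(year, month):
    """Our review day: nine days after Patch Tuesday, which always
    falls on the Thursday of the following week."""
    return patch_tuesday(year, month) + timedelta(9)

# January 2006: Patch Tuesday fell on Jan. 10, so the first
# Patch Thursday would be Jan. 19.
print(patch_tuesday(2006, 1))   # -> 2006-01-10
print(patch_thursday(2006, 1))  # -> 2006-01-19
```

Fixing the review nine days out gives the security team a full week to triage each month's bulletins before the update day.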

The Server Side

I will be instituting the same program for our Windows and Unix servers, as well as any Unix or Linux desktops. We may have to address the server environment a little differently from the desktops. The main problem here is that Windows servers typically need to be rebooted after a patch is applied, so monthly updates in a complex server configuration that includes clustered environments and virtual machines may not be feasible. In addition, the process would need to include some fairly comprehensive testing before patches could be deployed in production. The last thing we want to do is jeopardize the company's revenue or reputation by deploying an untested patch on a key server.

Unix servers are also considered critical infrastructure, and although many of the recommended Solaris patches don't require a reboot, they still need to be fully tested. But we don't always have a test environment available for every server in production, so testing will be a challenge.

I also wanted to ensure that our standard corporate images are maintained at the same patch level as the desktops. After some discussion, the desktop group and I agreed on two things: they will review recommended patches and bring the desktop images up to date quarterly, and before issuing any new laptop or desktop, they will run Windows Update to ensure that all patches are installed. I'll expect the same for the servers.

I solidified this new patch management process by writing down some guidelines on matters such as roles, responsibilities and prioritization. I'll distribute these at the upcoming patch management meeting, where we'll assign appropriate duties. In addition, I'll be using our SMS infrastructure to create regular reports that provide details on compliance. My company holds a weekly service-review meeting in which each manager of a major department presents various metrics and status reports relevant to his department. I will include these new metrics along with my other reports so that my peers and the CIO can be kept abreast of the effectiveness of the patch management process.
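The headline number in a compliance report is simple to define: the share of desktops carrying every patch on the critical list. A minimal sketch, again assuming an SMS-style inventory export; the critical-patch list and the machine data here are made-up examples, not our real list:

```python
# Hypothetical weekly compliance metric: percentage of desktops that
# have every hotfix on the critical list. CRITICAL and the fleet data
# are example values standing in for a real SMS inventory export.

CRITICAL = {"KB912919", "KB905749"}

def compliance_rate(inventory, required=CRITICAL):
    """Percentage of machines whose hotfix set covers the required set."""
    compliant = sum(1 for hotfixes in inventory.values()
                    if required <= set(hotfixes))
    return 100.0 * compliant / len(inventory)

fleet = {
    "desk-0141": {"KB912919", "KB905749"},
    "desk-0142": {"KB905749"},
}
print(f"{compliance_rate(fleet):.1f}% compliant")  # -> 50.0% compliant
```

Tracking that single percentage week over week is what lets the CIO and my peers see at a glance whether the process is holding.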

What Do You Think?

This week's journal is written by a real security manager, "Mathias Thurman," whose name and employer have been disguised for obvious reasons. Contact him at mathias_thurman@yahoo.com.


Copyright © 2006 IDG Communications, Inc.
