A comprehensive audit of our firewalls just moved up on my list of priorities. The urgency arises from a recent incident that, fortunately, wasn't as bad as it could have been.
Around the world, we have over 60 individual firewalls. We use a centralized platform for managing the rules and baseline configuration, but it's still important to audit every firewall to track down the inevitable inconsistencies. We had scheduled that audit for later this year, but now we're planning to do it much sooner.
Last week, while troubleshooting a problem with network performance at a large overseas office, our network team decided to monitor the traffic leaving the office. Bad news: The firewall and router logs showed a massive amount of traffic destined for a single host in Vietnam.
The traffic originated from hundreds of externally addressable IP addresses on our internal network. This was highly suspicious, since we use internal private IP addresses for our protected network.
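A quick way to flag this condition: every host on the protected network should carry an RFC 1918 private source address, so any public source address seen on an internal interface is immediately suspect. A minimal sketch of that check in Python (the sample addresses are illustrative, not our real ones):

```python
import ipaddress

# The RFC 1918 private ranges we expect as source addresses on the
# protected network.
PRIVATE_NETS = [
    ipaddress.ip_network(n)
    for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")
]

def is_rfc1918(ip: str) -> bool:
    """Return True if ip falls inside one of the private ranges."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in PRIVATE_NETS)

# A public source address observed on an internal interface is a
# red flag for spoofing or misconfiguration.
print(is_rfc1918("192.168.1.50"))  # True  - normal internal host
print(is_rfc1918("203.0.113.10"))  # False - should never originate inside
```

Run against a capture of source addresses, a filter like this would have surfaced those hundreds of spoofed hosts at a glance.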
I assembled our crisis action team, since it looked as if we had been hit by a distributed denial-of-service (DDoS) attack. Of course, we immediately modified the firewall rules to block access to the destination IP address. Next, we enabled antispoofing rules on the affected firewall interface to block traffic originating from public IP addresses on our internal network. Then, we enabled anti-DDoS profiles for the firewall, allowing us to control traffic floods and set a maximum number of concurrent sessions. These last two configurations, by the way, should have already been enabled -- but more on that later.
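Our centralized management platform applies these controls through its own interface, but the same three steps can be sketched in generic iptables terms (the destination address 203.0.113.10, the interface name eth1 and the connection cap of 50 are placeholders, not our real values):

```shell
# 1. Block all traffic to the malicious destination host.
iptables -A FORWARD -d 203.0.113.10 -j DROP

# 2. Anti-spoofing on the internal-facing interface: only RFC 1918
#    sources may originate there; anything else is dropped.
iptables -N ANTISPOOF
iptables -A ANTISPOOF -s 10.0.0.0/8     -j RETURN
iptables -A ANTISPOOF -s 172.16.0.0/12  -j RETURN
iptables -A ANTISPOOF -s 192.168.0.0/16 -j RETURN
iptables -A ANTISPOOF -j DROP
iptables -A FORWARD -i eth1 -j ANTISPOOF

# 3. Crude flood control: cap concurrent TCP connections per source.
iptables -A FORWARD -p tcp --syn -m connlimit --connlimit-above 50 -j DROP
```

Commercial firewalls expose the anti-spoofing and session-limit pieces as checkbox features, which is exactly why their absence from our baseline was so galling.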
We tracked down the affected device by locating the switch port it was connected to. It turned out to be an enterprise-class server that an R&D engineer had attached to the Ethernet port at his desk -- which is a no-no. We used administrative access to install EnCase, a forensic examination tool, on that server, and we found artifacts consistent with known malware that opens connections to a server in Vietnam from multiple spoofed IP addresses. That sure fit the facts of our case!
We disabled the malicious service at once, and what do you know -- the malicious traffic went away. That done, we moved on to a more thorough forensic examination. By sniffing the network traffic that had originated from the infected server, we found that there had been no data loss or unauthorized access. Those had been my real worries.
Running a companywide inventory, we found that same malware on some other overseas machines, and on some in our corporate office. Luckily, none of those resources had been as completely compromised as the first machine.
Preventing Future Incidents
With the damage contained, I drew up a list of action items. For one thing, it's apparent that we need to review our firewalls to ensure that basic configuration settings such as antispoofing and anti-DDoS are enabled. But I also want to look into why our security information and event management (SIEM) tool didn't alert us that a server was communicating with a known malicious host. The incident also makes clear that we need to address some inconsistencies in our endpoint protection compliance, since the infected servers were not up to date with the latest pattern files. Finally, we recently enabled some advanced malware detection capabilities that are supposed to evaluate all downloaded executable files and run them in a sandbox environment to determine whether they are malicious in nature. I'd like to find out where the breakdown in that technology occurred.
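With 60-plus firewalls, the baseline portion of that review can be partly automated. As a rough sketch (the configuration field names and firewall names below are hypothetical, not our platform's actual schema), a script can compare each device's exported settings against the required baseline and report the stragglers:

```python
# Baseline settings every firewall must have enabled. Field names are
# illustrative; a real audit would parse exports from the management platform.
REQUIRED_BASELINE = {"antispoofing": True, "anti_ddos": True}

def audit_firewalls(configs):
    """Return {firewall_name: [missing settings]} for non-compliant devices."""
    findings = {}
    for name, cfg in configs.items():
        missing = [key for key, wanted in REQUIRED_BASELINE.items()
                   if cfg.get(key) != wanted]
        if missing:
            findings[name] = missing
    return findings

# Example: one compliant firewall and one missing both protections.
sample = {
    "fw-hq-01":     {"antispoofing": True,  "anti_ddos": True},
    "fw-branch-07": {"antispoofing": False, "anti_ddos": False},
}
print(audit_firewalls(sample))  # {'fw-branch-07': ['antispoofing', 'anti_ddos']}
```

Automating the checkbox items leaves the humans free for the harder job: reviewing the rule base itself.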
But my No. 1 priority is that firewall audit. I'm sure that in addition to some basic interface misconfigurations, there are gaps in the firewall rule base.
This week's journal is written by a real security manager, "Mathias Thurman," whose name and employer have been disguised for obvious reasons. Contact him at email@example.com.