Readers of my blog will certainly be aware of the importance I place on the collection and handling of system logs. These logs contain critical data related to what is happening to your systems and networks that is not readily obvious, not the least of which are indicators that your network is being probed by potential hackers.
Proper log collection and review is part of every major security standard in existence. As an example, PCI DSS requirement 10 addresses various aspects of logging. Section 10.6 states “Review logs and security events for all system components to identify anomalies or suspicious activity.”
Log requirements are also explicitly addressed in the Gramm–Leach–Bliley Act (GLBA), Sarbanes-Oxley (SOX), the Health Insurance Portability and Accountability Act (HIPAA), and the Federal Information Security Management Act (FISMA).
It is also clear that proper log collection and analysis is critical to an organization’s protection against breaches. The Verizon 2015 PCI Compliance Report compares findings from that company’s routine PCI assessment customers with findings from its post-breach forensic investigations. The difference is astounding: 91.1 percent of the routine customers complied with PCI requirement 10, versus 0 percent of the post-breach customers. The importance of proper log handling is clear.
While proper log analysis is critical, I don’t want to understate the challenge of doing it well. A single Windows server can easily generate over 5,000 log records a day across all categories. Most of these will be routine, and not of interest from a security perspective. That being said, you do have to sift through all of them to find the ones of interest. Without some automation, you might still be looking through Monday’s log entries on Thursday.
Unfortunately, the challenge only gets worse, because of the number of ancillary systems generating log entries that may have relevance to security. These include:
- Access points
- Authentication systems
- Intrusion detection/prevention systems
- Anti-malware software
- Application software
And the list goes on.
In order for log records to be of forensic value in an investigation, or to be admissible in court, there are more hoops to jump through. Controls must be established to ensure that log records cannot be deleted or altered. Log entries must be monitored to confirm appropriate log access.
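One common way to make a log archive tamper-evident is hash chaining: each record’s digest covers the previous digest, so altering or deleting any entry invalidates everything after it. The sketch below is illustrative, not a specific product’s method:

```python
import hashlib

def chain_logs(entries, seed="0" * 64):
    """Compute a tamper-evident hash chain: each record's digest
    covers the previous digest plus the entry text."""
    digests = []
    prev = seed
    for entry in entries:
        h = hashlib.sha256((prev + entry).encode("utf-8")).hexdigest()
        digests.append(h)
        prev = h
    return digests

def verify_chain(entries, digests, seed="0" * 64):
    """Recompute the chain; any altered or removed entry changes
    every digest from that point forward."""
    return digests == chain_logs(entries, seed)

logs = ["login ok user=alice", "login fail user=bob", "config change"]
chain = chain_logs(logs)
print(verify_chain(logs, chain))                              # True
tampered = logs[:1] + ["login ok user=bob"] + logs[2:]
print(verify_chain(tampered, chain))                          # False
```

In practice the digests would be written to separate, write-once storage so an attacker who controls the log host cannot simply recompute the chain.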
NIST publication 800-92, entitled Guide to Computer Security Log Management, does a good job describing the challenges:
- Many log sources
- Inconsistent log content
- Inconsistent time stamps across systems
- Inconsistent log formats
At this point, the complexity of the problem may have you wondering why you should even bother trying to keep up. You could just take the approach that many companies I have worked with do, and ignore the logs until something goes wrong.
Unfortunately, by the time you know there is a problem and begin investigating, it may already be too late. Your data may already be for sale on the dark web. My point here is that logs often contain evidence that your network and systems are being probed, long before intrusions actually occur. If you see the warning signs, you have a chance to shut the attacks down before they succeed, but only if you are very proactive.
I managed security for a SaaS document management company a few years ago. I frequently reviewed system logs from our Web server farm, and was surprised at the number of attempts made by people in other countries, China most frequently, to penetrate our systems using known vulnerabilities. As a result of these reviews, I was able to lock out large blocks of IP addresses, heading off the problem before it occurred.
So, how do you start an effective log management program?
Make it a priority
Such a program will cost time and money. Your organization must decide this is important, and allocate appropriate resources to make it work.
Synchronize your time stamps
Reviewing an incident often involves looking at logs from multiple systems. If you have ever tried to do this using systems with differing time stamps, you know how difficult it is to reconstruct a sequence of events.
Further, without proper time stamps, your logs would not be admissible as evidence in most courts. You need a consistent time source for all of your systems. A common approach is to synchronize a few internal systems to an established outside time source, such as one from the list provided by NIST, and synchronize the rest of your systems to those. This time synchronization must be done securely, as it is a bit of a security exposure in itself, and you must track and control who makes changes to time information.
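As a concrete illustration, here is what that arrangement might look like on an internal time server running chrony (the server names are examples; substitute your chosen NIST or pool sources):

```
# /etc/chrony.conf on an internal time server (illustrative)
# Sync this host to established outside sources...
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst

# ...and only allow hosts on the internal network to sync to it.
allow 10.0.0.0/8

# Restrict who may issue control commands to the daemon.
cmdallow 127.0.0.1

# Log clock adjustments so time changes can be audited.
logdir /var/log/chrony
log tracking measurements
```

The rest of your systems would then list this internal server (and a backup) as their only time source.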
Consolidate your logs
The only practical way to address the issue of logs from many systems is to consolidate them.
There are a variety of products that help with this, including some good SaaS-based offerings, as well as some good open-source systems. I discussed these options, with vendor links, in The one-minute security manager. These products will facilitate collection of logs from various systems into a single repository, and will do some of the work to make the formats consistent.
When you have your records in one place, you need to filter out the routine entries, so you can focus on those of significance. Some of the above products include good filtering capabilities, but it will take some work, along with trial and error, to get it right.
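The filtering itself usually amounts to a list of known-routine patterns that you suppress, refined over time. A minimal sketch in Python (the patterns shown are examples; real filters depend on your log sources):

```python
import re

# Illustrative noise patterns; tune these to your own environment.
ROUTINE = [
    re.compile(r"health[- ]check"),
    re.compile(r"GET /favicon\.ico"),
    re.compile(r"session (opened|closed) for user backup"),
]

def interesting(lines):
    """Drop entries matching any known-routine pattern,
    leaving only records worth a human look."""
    return [l for l in lines if not any(p.search(l) for p in ROUTINE)]

sample = [
    "10.0.0.5 GET /health-check 200",
    "sshd: Failed password for invalid user admin from 203.0.113.9",
    "10.0.0.5 GET /favicon.ico 200",
]
print(interesting(sample))
# Only the failed SSH login survives the filter.
```

Expect to iterate: start permissive, watch what floods the review queue, and add patterns for anything you can confidently call routine.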
Know what to look for
Once you have narrowed your log entries down to a subset of interest, you need to be able to interpret them. Google can be your friend on this, as interpretation varies by the type of log. There are tools that can help as well, such as Apache-scalp for Web logs.
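One pattern worth looking for in Web logs is repeated error responses from a single address, which often indicates vulnerability scanning. A minimal sketch, assuming a simplified log format where the first field is the client IP and the last is the HTTP status (real formats vary, so adjust the parsing accordingly):

```python
from collections import Counter

def probe_suspects(log_lines, threshold=3):
    """Count error responses per source IP; repeated 401/403/404s
    from one address often indicate someone probing for
    known-vulnerable paths."""
    errors = Counter()
    for line in log_lines:
        parts = line.split()
        ip, status = parts[0], parts[-1]
        if status in ("401", "403", "404"):
            errors[ip] += 1
    return [ip for ip, n in errors.items() if n >= threshold]

sample = [
    "203.0.113.9 GET /phpmyadmin/ 404",
    "203.0.113.9 GET /wp-login.php 404",
    "203.0.113.9 GET /admin/config.php 404",
    "198.51.100.7 GET /index.html 200",
]
print(probe_suspects(sample))   # ['203.0.113.9']
```

Addresses this surfaces are candidates for the kind of firewall blocking described earlier, after a sanity check that they are not legitimate users.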
Archive your logs
It is important that you archive and preserve log entries for later analysis if an event occurs. Different standards require different retention periods.
Bottom line -- log management is a pain, but absolutely essential to preventing security breaches.
This article is published as part of the IDG Contributor Network.