Consider the following hypothetical, but probably typical scenario: Your organization experiences a minor (if there is such a thing) security breach, which you accidentally discover, and correct immediately. You then call in the best information security consultant you can find, and follow their advice to the letter. You bring in new technology, and run extensive vulnerability scans on all of your applications. You correct everything you find, and then have the consultant check your security one more time. You get a clean bill of health. You then tell your board that all is well, take a deep breath, and relax.
If you are resting on your laurels for more than a few days, I would suggest that you are another security breach waiting to happen. Why? After all, the best security consultant available said you were fine.
The issue is that cyber security is a rapidly moving target. An application that is vulnerability free this week may have major exposures next week. I have encountered this a number of times with customers, surprised at the difference between two vulnerability scans run a short time apart.
The issue we all face with information security is the dynamic nature of the threat. I follow a variety of security organizations, such as US-CERT, on Twitter. I am constantly amazed at the number of new vulnerabilities discovered in existing software programs and products each week. Some recent examples:
- Adobe Flash vulnerability allowing remote code execution
- NTP (Network Time Protocol) issue allowing denial-of-service attacks
- Cisco ASA firewall exposure allowing for denial-of-service attacks
- Apple, thought for a long time to be invulnerable, releasing iOS 9, quickly followed by additional releases to correct newly discovered exposures
And the list goes on. All of these exposures relate to vulnerabilities in existing products that were unknown just weeks ago. The affected parties did not have to change anything to be exposed, or be negligent in any way. They were secure until some hacker discovered a new vulnerability to exploit. They were secure one minute, and insecure the next.
If this seems unfair, welcome to life. Our inability to stay on top of threats is largely a function of our static approach to security.
Most of us in information technology have many responsibilities other than security. On the other hand, we fight against a growing number of hackers whose only job is to find new ways to break into our systems. Their motivations vary from the hobbyist who just loves technology and lacks ethics, to those sponsored by organized crime, to those propped up by foreign governments. Regardless of the reason, they have vast amounts of time to put into finding ways into our networks and applications.
Since we can't keep up, should we just give up? While tempting, this is obviously not possible. As such, we must find ways to replace our static approach to security with a dynamic one.
A few weeks ago, former NSA Director Keith Alexander, speaking at a conference, put it well: "We need to move now to a new approach to cybersecurity — an approach that is proactive, agile and adaptive." We need to view information security as a constantly changing landscape, rather than a still painting of a landscape. Given our deficit of resources as compared to those we battle against, this is not an easy undertaking. Challenging or not, however, dynamic security is essential if we are to have a fighting chance of winning. I would suggest the following as good starting points in moving from a static to a dynamic model:
Make vulnerability checks a regular and frequent task
I recently worked with a HIPAA customer who had a clean vulnerability scan that was close to a year old. They felt pretty comfortable about their situation, given that nothing significant had changed in their environment during that time. My first act on their behalf was to have them rerun the tests, which of course found numerous issues, some of them major. I would suggest that external vulnerability scans need to happen at least monthly. Internal scans should take place on the same schedule, or whenever software or configuration changes are made, whichever comes first. The latter is a specific requirement of PCI DSS.
Fortunately, there are excellent tools to help us with this task. My favorite is Qualys, which makes such scans an easy process, can handle internal and external scans, and offers some good free options. There is a growing number of other options as well, including some, like OpenVAS, that are open source.
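One simple way to enforce the monthly cadence suggested above is to track when each asset was last scanned and flag anything overdue. Here is a minimal sketch in Python; the host names, dates, and 30-day interval are illustrative assumptions, not output from any particular scanning product:

```python
from datetime import date, timedelta

# Hypothetical inventory mapping each asset to its most recent scan date.
# In practice this would come from your scanner's reporting API or export.
SCAN_INTERVAL = timedelta(days=30)  # the monthly cadence suggested above

last_scans = {
    "www.example.com": date(2015, 9, 1),
    "vpn.example.com": date(2015, 10, 20),
}

def overdue_assets(scans, today, interval=SCAN_INTERVAL):
    """Return assets whose most recent scan is older than the interval."""
    return sorted(host for host, scanned in scans.items()
                  if today - scanned > interval)

print(overdue_assets(last_scans, date(2015, 10, 25)))
# -> ['www.example.com']
```

Running a check like this from a scheduled job turns "we should scan monthly" from a good intention into an alert you cannot ignore.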
Pay attention to the fundamentals
I have preached for some time that, despite the availability of new, expensive products to prevent and detect security breaches, the answer actually lies in the fundamentals -- the day-to-day things we should do, such as checking logs and auditing access rights. Fortune reminds us of this in their recent article, "Asking these 4 questions will stop up to 90% of hacks."
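Checking logs is a fundamental that is easy to automate. The sketch below counts failed login attempts per source IP in sshd-style log lines; the log text, threshold, and IP addresses are made up for illustration, and real log formats vary by system:

```python
import re
from collections import Counter

# Illustrative sshd-style entries; real logs live in /var/log/auth.log
# or similar, and formats differ across systems.
LOG = """\
Oct 12 03:11:02 web1 sshd[901]: Failed password for root from 203.0.113.9 port 52114 ssh2
Oct 12 03:11:05 web1 sshd[901]: Failed password for root from 203.0.113.9 port 52120 ssh2
Oct 12 03:11:09 web1 sshd[901]: Failed password for admin from 203.0.113.9 port 52131 ssh2
Oct 12 08:40:33 web1 sshd[977]: Accepted password for alice from 198.51.100.7 port 40022 ssh2
"""

FAILED = re.compile(r"Failed password for \S+ from (\S+)")

def failed_logins(log_text):
    """Count failed login attempts per source IP address."""
    return Counter(m.group(1) for m in FAILED.finditer(log_text))

counts = failed_logins(LOG)
suspects = [ip for ip, n in counts.items() if n >= 3]  # arbitrary threshold
print(suspects)
# -> ['203.0.113.9']
```

Even a crude daily report like this surfaces brute-force attempts that would otherwise sit unnoticed in the logs.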
Stay on top of firmware updates
Many of the exposures we face today result from issues found in the firmware of devices attached to our networks. These may be core devices, including routers or firewalls, or Internet of Things devices, such as printers and copiers. Know what devices you have, and check for firmware updates frequently.
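"Know what devices you have" implies keeping an inventory you can check against current firmware releases. Here is a minimal sketch of that comparison; the device names, model identifiers, version numbers, and the "latest known" table are all hypothetical, standing in for data you would pull from vendor advisories:

```python
# Hypothetical device inventory; in practice, populate this from your
# asset-management system and the vendors' published release notes.
inventory = [
    {"name": "edge-fw01",  "model": "ASA5506", "firmware": "9.4.1"},
    {"name": "core-rtr01", "model": "RTR-X",   "firmware": "2.3.0"},
    {"name": "copier-3f",  "model": "MFP-900", "firmware": "1.0.7"},
]

# Latest firmware release known for each model (illustrative values).
latest_known = {"ASA5506": "9.4.3", "RTR-X": "2.3.0", "MFP-900": "1.2.0"}

def version_tuple(version):
    """Convert '9.4.1' into (9, 4, 1) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

def outdated(devices, latest):
    """List devices running firmware older than the latest known release."""
    return [d["name"] for d in devices
            if version_tuple(d["firmware"]) < version_tuple(latest[d["model"]])]

print(outdated(inventory, latest_known))
# -> ['edge-fw01', 'copier-3f']
```

Note that the firewall and the copier both show up: the Internet of Things devices deserve the same scrutiny as the core network gear.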
Be threat aware
New vulnerabilities surface weekly, so follow the people who track them. As noted above, organizations such as US-CERT publish a steady stream of advisories; subscribing to a few of these feeds is a low-cost way to learn about new exposures as they are disclosed, rather than after they are exploited.
Bottom line -- migrate your security efforts from static to dynamic, and sleep better tonight.
This article is published as part of the IDG Contributor Network.