It's a great thing when a security manager doesn't have to go into battle mode every time a new corporate initiative emerges. When other departments show signs that they aren't putting security last, I can relax a bit. But just a little bit. Even in those cases, I want to have input.
For the most part, I was happy when the R&D department came to me last week to discuss their plan to create a software security test lab. The R&D team has been charged with enhancing the security of the software portion of our products, and one of their requirements is to create an environment in which they can run hacking and assessment tools and code-scanning software. That will free them up to conduct such activity whenever they want, without notifying anyone. When my department conducts security assessments or penetration testing against our corporate applications, we schedule the activity at a time that minimizes the impact, and we let everyone know.
Before the architecture team went to work designing the lab, I created a set of security requirements. The first and most important was that the lab must be segmented from our production network. Other requirements included a separate firewall protecting the lab from the corporate network and extremely limited access to the public Internet. I don't want any inquisitive engineers running scans against resources on the Internet -- that could get us into trouble. Also, access to the lab must be controlled and logged.
The lab will be segmented into several virtual LANs, with firewall rules in place to protect one VLAN from another. For example, one VLAN will contain the various security tools for running assessments, penetration testing, code scanning and other activity. The products to be tested will reside on another VLAN, while any source code will reside on yet another. Most of the resources will be installed on virtual machines, so the servers can be quickly taken down and redeployed if necessary. We will set up a bastion host, with access to the lab network restricted to those who have access to the lab itself.
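The VLAN-to-VLAN policy described above might look something like the following firewall sketch. This is purely illustrative: the interface names, subnets and the code-scanner address are assumptions, not our real addressing, and the real rule set would be considerably longer.

```shell
# Hypothetical lab firewall policy (iptables). Assumed layout:
#   eth0 = uplink toward the corporate network / Internet
#   eth1 = security-tools VLAN      (10.10.1.0/24)
#   eth2 = products-under-test VLAN (10.10.2.0/24)
#   eth3 = source-code VLAN         (10.10.3.0/24)

# Default deny between all VLANs.
iptables -P FORWARD DROP

# The tools VLAN may scan the products under test...
iptables -A FORWARD -i eth1 -o eth2 -s 10.10.1.0/24 -d 10.10.2.0/24 -j ACCEPT
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

# ...but only the code-scanning host (assumed 10.10.1.5) reaches source code.
iptables -A FORWARD -i eth1 -o eth3 -s 10.10.1.5 -d 10.10.3.0/24 -j ACCEPT

# Nothing in the lab gets out to the Internet.
iptables -A FORWARD -o eth0 -j DROP
```

The default-deny forwarding policy is what makes each VLAN rule meaningful: any flow not explicitly listed is dropped.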
At least at first, we'll stock the lab with some fairly common tools, and then upgrade as the engineers get properly trained on how to conduct assessments. One will be Nessus, a fairly easy-to-use tool that scans for server misconfigurations and also has an extensive menu of plug-ins, including a variety of application vulnerability checks. Another tool will be Metasploit, which is one of my favorites. It can be very helpful in running attacks against potentially vulnerable systems. For example, if you discover a SQL injection vulnerability, Metasploit can attempt several SQL attacks that will validate the vulnerability -- you don't have to be an expert in SQL. That's definitely handy, since SQL injection has been used in many recent attacks compromising user passwords.
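To see why validating SQL injection doesn't require SQL expertise, consider a toy example of the flaw itself. The table, login function and payload below are hypothetical, but the pattern is exactly what a tool exploits: user input concatenated into a query string.

```python
import sqlite3

# Minimal in-memory database with a hypothetical users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # Builds SQL by string concatenation -- the root cause of injection.
    query = ("SELECT COUNT(*) FROM users WHERE name = '%s' "
             "AND password = '%s'" % (name, password))
    return conn.execute(query).fetchone()[0] > 0

def login_safe(name, password):
    # Parameterized query: user input is never interpreted as SQL.
    query = "SELECT COUNT(*) FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone()[0] > 0

payload = "' OR '1'='1"
print(login_vulnerable("alice", payload))  # True: authentication bypassed
print(login_safe("alice", payload))        # False: payload treated as data
```

The classic `' OR '1'='1` payload turns the vulnerable function's WHERE clause into a condition that is always true, which is precisely the kind of automated check an assessment tool runs for you.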
Another of my favorites is Burp Suite, a set of application assessment utilities that let you do things like intercept traffic between the client browser and the Web application. For example, if an application's password-reset logic isn't written properly, you could use Burp Suite to intercept and alter the request parameters in an attempt to change another user's password.
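The parameter-tampering attack above can be sketched in a few lines. The captured request body and field names here are invented for illustration; an intercepting proxy such as Burp Suite simply lets you pause a real request and make exactly this kind of edit before it reaches the server.

```python
from urllib.parse import parse_qs, urlencode

# Hypothetical captured password-reset form body.
captured_body = "user=alice&token=abc123&new_password=hunter2"

def tamper(body, field, new_value):
    # Decode the form body, swap one parameter, and re-encode it --
    # the same edit you would make by hand in an intercepting proxy.
    params = {k: v[0] for k, v in parse_qs(body).items()}
    params[field] = new_value
    return urlencode(params)

# Retarget the reset at another account. If the server trusts the
# 'user' field instead of the token's real owner, bob's password
# gets changed.
print(tamper(captured_body, "user", "bob"))
```

If the reset logic is written properly, the server rejects the mismatch between the token and the altered `user` field; if not, you've just demonstrated an account takeover.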
We'll have other utilities, of course, as well as a tool to run static code analysis. That tool will eventually be incorporated into our software development life cycle, where it will be used to vet the soundness of our source code.
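At its core, static analysis means examining source code for risky patterns without ever running it. The toy checker below is an assumption-laden sketch, not any particular product: it parses Python source and flags calls to `eval`, one of hundreds of checks a real tool would apply.

```python
import ast

def find_eval_calls(source):
    # Parse the source into a syntax tree and walk it, recording the
    # line number of every call to the built-in eval().
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(node.lineno)
    return findings

sample = "x = 1\ny = eval(user_input)\n"
print(find_eval_calls(sample))  # → [2]
```

Because the analysis works on the syntax tree rather than on running code, it can be wired into the build pipeline and flag problems long before anything is deployed.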
We need our engineers to use all these tools properly, and I want them to learn to think like a hacker. To help, I'll find a trusted third party to provide training and guidance in application assessments and penetration testing. Slowly but surely, all of this will get all of our engineers thinking about security early and often in the development process.
This week's journal is written by a real security manager, "Mathias Thurman," whose name and employer have been disguised for obvious reasons. Contact him at firstname.lastname@example.org.