My company has outgrown its offices and will be moving to a new facility next year. While the company as a whole will have more space, the data center will shrink to less than half the square footage it now occupies. The goal is to decrease the data center footprint by 60%.
At this point, we have a lot of experience with cloud infrastructure. We usually choose software-as-a-service vendors for new enterprise applications, our engineering departments build demos in public cloud environments, and even our own product is a SaaS offering.
We will be hosting our servers in three types of environments. The first, a public cloud provider such as Amazon EC2, will have no relationship with our internal network. The second is what I call a hybrid cloud in which we host infrastructure (including virtual servers) at a third-party data center and build a VPN tunnel back to our company, creating a trust relationship. The third is a private cloud, where we will host a virtual environment on our own network.
To govern, automate, control and gain visibility into these various environments, we've been looking at a couple of companies that offer a one-stop shop for the provisioning of servers in all three cloud environments. This is the part that scares me. I don't want engineers who access this new platform to be able to provision a server on our company's DMZ, by mistake or otherwise. Nor do I want them to be able to provision critical production servers on Amazon. I'm very sensitive about our Internet exposure.
I'm also uncomfortable with the idea that much of our data center infrastructure will be accessible from anywhere on the Internet. Today, if an engineer wants to provision a server, he has to be physically located in one of our facilities or be on our company network. The cloud opens things up so much that a server could be provisioned from an untrusted Internet kiosk in Mexico, for example.
Therefore, I've asserted five security requirements for this initiative.
The first is that access to the new platform, and any company-sensitive data stored on it, must either be restricted by IP address or incorporate some form of two-factor authentication. Regardless, access needs to be encrypted.
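The IP-restriction half of that requirement amounts to checking each request's source address against our approved corporate egress ranges. A minimal sketch, assuming illustrative CIDR blocks and a hypothetical `source_ip_allowed` check (neither comes from any particular vendor's platform):

```python
import ipaddress

# Hypothetical allowlist of corporate egress ranges (documentation-reserved
# example addresses, not our real ones).
ALLOWED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),   # headquarters egress
    ipaddress.ip_network("198.51.100.0/24"),  # branch office egress
]

def source_ip_allowed(source_ip: str) -> bool:
    """Return True if the request's source IP falls within an approved range."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)
```

A request from the Internet kiosk scenario above would fail this check and would then have to satisfy the two-factor path instead.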
The next requirement is for strong profiles that limit provisioning and, where necessary, impose an approval workflow for certain server types. This would prevent the unnecessary build-out of a DMZ or production server and keep our intellectual property from being exposed in less secure environments. These profiles must integrate with our company's Active Directory infrastructure, so that when an employee is terminated, access to the new platform is removed automatically.
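Conceptually, such a profile maps each role to the environments it may provision into, with an extra approval gate for sensitive server classes. The role names, environment labels and `can_provision` function below are assumptions made for this sketch, not features of any vendor's product:

```python
# Illustrative profile model: each role may provision only into the
# environments listed for it.
PROFILES = {
    "demo-engineer":   {"public-cloud"},
    "ops-engineer":    {"public-cloud", "hybrid-cloud"},
    "release-manager": {"public-cloud", "hybrid-cloud", "private-cloud"},
}

# Server classes that additionally require an explicit approval workflow.
REQUIRES_APPROVAL = {"dmz", "production"}

def can_provision(role: str, environment: str,
                  server_class: str, approved: bool = False) -> bool:
    """Allow provisioning only within the role's profile, and only with
    recorded approval for sensitive server classes."""
    if environment not in PROFILES.get(role, set()):
        return False
    if server_class in REQUIRES_APPROVAL and not approved:
        return False
    return True
```

In a real deployment the role lookup would come from Active Directory group membership, so terminating the account terminates the profile with it.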
Third, all servers must comply with our configuration management policies regarding things like patch management, antivirus protection, the disabling of unnecessary services and central management.
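That compliance check can be expressed as a simple policy evaluation over each provisioned server's configuration record. The field names and thresholds here are assumptions for illustration; our actual policy values differ:

```python
from datetime import date, timedelta

# Illustrative policy thresholds (assumed values, not our real policy).
MAX_PATCH_AGE = timedelta(days=30)
FORBIDDEN_SERVICES = {"telnet", "ftp"}

def compliance_gaps(server: dict, today: date) -> list[str]:
    """Return the list of policy violations for a provisioned server record."""
    gaps = []
    if today - server["last_patched"] > MAX_PATCH_AGE:
        gaps.append("patching overdue")
    if not server.get("antivirus_enabled"):
        gaps.append("antivirus disabled")
    for svc in FORBIDDEN_SERVICES & set(server.get("services", [])):
        gaps.append(f"unnecessary service running: {svc}")
    if not server.get("managed_centrally"):
        gaps.append("not enrolled in central management")
    return gaps
```

Run at provisioning time and on a schedule afterward, this kind of check keeps cloud-hosted servers from drifting out of policy unnoticed.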
The fourth requirement concerns availability and calls for sufficient failover capability, a disaster recovery plan and the expectation that the SaaS provisioning application will be operated out of a data center that complies with SAS 70 or SSAE 16.
Finally, the provisioning service must offer robust reporting and logging so we can identify any abuse or security issues. Of course, the logs must be compatible with and able to be transmitted to our event monitoring infrastructure.
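For logs to be usable by our event-monitoring infrastructure, each provisioning action should arrive as a structured record. A minimal sketch of the kind of audit event I have in mind, serialized as one JSON line per action (the field names are my assumptions, not a vendor's schema):

```python
import json
from datetime import datetime, timezone

def audit_event(actor: str, action: str, target: str, source_ip: str) -> str:
    """Serialize a provisioning action as a single JSON line, ready to be
    forwarded (e.g., over syslog) to the event-monitoring infrastructure."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "target": target,
        "source_ip": source_ip,
    }
    return json.dumps(record, sort_keys=True)
```

One line per event, with actor and source IP on every record, is what makes it possible to spot an engineer provisioning a DMZ server he shouldn't, or access from an address range we don't recognize.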
These are the main security, application and infrastructure controls that we must address as we progress toward this new era of server provisioning.
This week's journal is written by a real security manager, "Mathias Thurman," whose name and employer have been disguised for obvious reasons. Contact him at email@example.com.