New Wrinkle in PLM Security Controls

Unexpectedly, data must be accessible by some partners, and our manager wants it all done securely.

Our product life-cycle management (PLM) implementation has a number of security considerations, and I've been providing input on the security best practices that I think should be included when this application is rolled out.

The PLM application will become the repository for a lot of sensitive data, such as source code and design specifications, that's housed in various legacy applications. Those applications predate my arrival at the company, and they are lacking in several areas related to security. I want to be sure we start out on the right foot with this new application as part of my efforts to better protect the company's intellectual property.

It hasn't always been easy, though, and I don't imagine that I am very well liked among the members of the PLM team. Just this week, a new requirement for the PLM application came to my attention that was a bit alarming.

But let me review the history of this project before I get to the latest wrinkle. About four months ago, I met with the PLM team. Our first decision was to assign the data that will reside in the PLM application one of two classifications consistent with the company's data classification model: Confidential and Confidential Special Handling. As things went along, I was expected to provide my security requirements. I was very specific while trying to be reasonable. I didn't ask for things that went beyond the scope of the project, were too expensive to implement or, like identity management, didn't fit in with the company's current security model.

I required that each user have a unique ID on the system - no more sharing of credentials, as has always been the case with our legacy applications. I want everything that happens in the PLM application and to the data stored in it to be audited: checkout, check-in, log-on, log-off and the granting of administrative privileges. We have to be able to document these things so we can properly investigate any theft of intellectual property. Our current systems don't provide this essential information, so our investigations have been stymied.
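The audit requirements above boil down to tying every auditable event to a unique user ID. As a minimal sketch (the event names and record fields here are my own illustrative assumptions, not any PLM product's actual schema), an audit entry might look like this:

```python
import json
import time

# Event types the article calls out for auditing; names are assumptions.
AUDITED_EVENTS = {"logon", "logoff", "checkout", "checkin", "grant_admin"}

def audit_record(user_id, event, document=None, timestamp=None):
    """Build one structured audit entry tying an event to a unique user ID."""
    if event not in AUDITED_EVENTS:
        raise ValueError(f"unaudited event type: {event}")
    return json.dumps({
        "ts": timestamp if timestamp is not None else time.time(),
        "user": user_id,       # unique per-person ID -- no shared credentials
        "event": event,
        "document": document,  # None for session events such as logon
    }, sort_keys=True)
```

The point of the structured, per-user record is exactly what the investigations above need: who touched which document, and when.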

I also demanded that we apply the rule of least privilege so that users will have authorization to do only the things they need to do to accomplish their jobs. Our legacy applications lack this, and since data isn't segmented by authorization level, anyone with an account can gain access to data that's classified as Confidential Special Handling.
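Least privilege against the two classifications can be sketched as a default-deny clearance check. The role names and clearance mapping below are illustrative assumptions; only the two classification labels come from the project itself:

```python
# Ordered from least to most sensitive, per the company's model.
CLASSIFICATIONS = ["Confidential", "Confidential Special Handling"]

# Hypothetical role-to-clearance mapping for illustration.
USER_CLEARANCE = {
    "engineer": "Confidential",
    "lead_engineer": "Confidential Special Handling",
}

def can_access(role, document_classification):
    """Allow access only if the role's clearance covers the document's level."""
    clearance = USER_CLEARANCE.get(role)
    if clearance is None:
        return False  # default deny: unknown roles get nothing
    return (CLASSIFICATIONS.index(clearance)
            >= CLASSIFICATIONS.index(document_classification))
```

The key property, missing from the legacy applications, is that merely having an account grants nothing: access follows from an explicit clearance, and everything else is denied.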

I also wanted to deploy Secure Sockets Layer between the clients and the Web server, but the business decided not to implement SSL. So I followed my standard practice in cases like this: I crafted an e-mail laying out the risks inherent in rejecting the SSL deployment, and then I had the program manager sign off on this risk accountability and acceptance document. Once he did, I forwarded it to the CIO. Now, should a breach occur that would have been prevented had we deployed SSL, I have documentation showing that I identified the threat but was overruled on the mitigation.
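For reference, enforcing encryption between clients and the Web server is a small amount of configuration. A minimal sketch in Python follows; the certificate file names are placeholders, and note that today one would deploy TLS, since SSL itself has long been deprecated:

```python
import ssl

def make_server_context(certfile=None, keyfile=None):
    """Build a server-side TLS context that refuses legacy protocol versions.

    certfile/keyfile are placeholder paths; loading is skipped if omitted.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # no SSLv3/TLS 1.0/1.1
    if certfile:
        ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
    return ctx
```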

Outside Interests

That's where things stood until this week, when I met with the IT project coordinator for the PLM application. He had a new and rather alarming requirement: People outside the company, mainly suppliers, should be granted access to certain documents that are classified as Confidential Special Handling. There's a legitimate business need to do this, but it means that we have to go back to our legal department and run an additional classification by them. Certainly, the application controls will have to remain the same, especially in regard to logging the check-in and checkout of documents.

But this new requirement raises another issue entirely: How do we get these very large documents to the external entities? For this, I will be looking at implementing an enterprise FTP server. We currently have an old Linux server sitting in our DMZ that runs an FTP daemon. Employees use it to let customers transfer files from our company -- but only files that don't contain sensitive information. The main reason we provide for such FTP transfers is because some companies we do business with limit the size of e-mail attachments that can be received. FTP keeps the very large files that need to be sent from being blocked by restrictive e-mail server settings. (An alternative is to burn the file onto a CD or some other external media and ship it, but sometimes that takes too much time.)

But this public FTP server has no encryption, no robust access control and no way to properly segment data. So within the next couple of weeks, I'll be calling vendors to find a Web-based FTP environment in our DMZ. Users will log on through their Web browsers with credentials that identify them by their relationship to our company, and they will then be compartmentalized based on that identity. The last thing we want is to commingle data or reveal the identities of suppliers, partners and customers to one another. Uploading and downloading will be done through a Web browser, eliminating the need for a special FTP client or for users to issue FTP commands. I'll also be looking to ensure that this infrastructure encrypts the data in transit and at rest on our server.
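The compartmentalization requirement amounts to confining each authenticated account to its own storage area and rejecting anything that reaches outside it. A minimal sketch follows; the partner table and directory layout are assumptions for illustration, not any vendor's actual design:

```python
import os.path

# Hypothetical accounts, each mapped to an isolated compartment.
PARTNERS = {
    "supplier_a": {"relationship": "supplier", "root": "/dmz/ftp/supplier_a"},
    "customer_b": {"relationship": "customer", "root": "/dmz/ftp/customer_b"},
}

def resolve_path(account, requested):
    """Map a requested file path into the account's compartment, or refuse."""
    entry = PARTNERS.get(account)
    if entry is None:
        raise PermissionError("unknown account")
    # Normalize to collapse any ../ components that would escape the root.
    full = os.path.normpath(os.path.join(entry["root"], requested.lstrip("/")))
    if full != entry["root"] and not full.startswith(entry["root"] + "/"):
        raise PermissionError("path escapes compartment")
    return full
```

With this shape, one supplier can never list or fetch another partner's files, which addresses both the commingling and the identity-disclosure concerns.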

In the future, I'll mandate the use of digital rights management to further protect sensitive intellectual property. In fact, that mandate may be issued sooner rather than later, since we're rekindling a DRM project that was cut last year because of a lack of funds.

What Do You Think?

This week's journal is written by a real security manager, "Mathias Thurman," whose name and employer have been disguised for obvious reasons. Contact him at, or join the discussions in our security blogs.

To find a complete archive of our Security Manager's Journals, go online to


Copyright © 2006 IDG Communications, Inc.
