I took somebody's word for something, and I didn't subsequently check it out to my own satisfaction. Result: big trouble. Lesson: always verify.
I learned that lesson last week, when one of my security analysts notified me that our data loss prevention (DLP) tool had detected an incident involving some source code leakage. When we initially set up our DLP rule for such events, we got a lot of false positives, so we partnered with engineering, which provided us with strings of characters (commented out in the code) that would indicate the leakage of our most sensitive source code -- the algorithmic portions of the code that set our products apart.
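The canary-string approach described above can be sketched in a few lines. This is an illustrative sketch only: the marker strings and function name are invented for this example, and a real DLP product would apply such matching inside its own policy engine rather than a standalone script.

```python
# Hypothetical sketch of the canary-string DLP rule described above:
# engineering embeds unique marker comments in the most sensitive source
# files, and the DLP rule fires only when an outbound message contains
# one of them. The markers below are invented for illustration; real
# ones would be kept secret.

CANARY_MARKERS = [
    "// PROPRIETARY-ALGO-7F3A",
    "// CORE-IP-DO-NOT-SHIP",
]

def contains_sensitive_source(message_body: str) -> bool:
    """Return True if any canary marker appears in the outbound message."""
    return any(marker in message_body for marker in CANARY_MARKERS)
```

Because the markers appear only in the flagged files, matching on them cuts down the false positives that generic "looks like source code" heuristics generate.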
The trigger for this particular event was a senior software engineer in India sending a snippet of code from his corporate Microsoft Exchange email account to his personal Gmail account. When confronted, the engineer told us that he had set up a rule to auto-forward all of his corporate email to his personal account. He did this, he said, because he hadn't been issued a corporate laptop and he wanted to work from home.
There were other options, but he didn't know about them. He was unaware, for example, that he could access his corporate email from home via Outlook Web Access (OWA), or that he could access some applications via the corporate clientless SSL VPN portal.
Tip of the Iceberg?
This was all interesting, but it raised a question: Why was it even possible to auto-forward to an external account?
And now to my failure to verify. We recently migrated from an on-premises Microsoft Exchange environment to Microsoft's Office 365 hosted Exchange. During the architecture review, I was assured that all of our security settings, including the one preventing auto-forwarding, would migrate to the hosted environment. So much for assurances. Now I was worried. Email is probably our No. 1 repository of sensitive data, including sales forecasts, customer and personnel data, prerelease financial information and, of course, source code.
To rectify the oversight, I initiated an audit of the Office 365 deployment, and we uncovered several other configuration differences from the previous Exchange deployment. For one thing, the deployment was supporting POP and IMAP, enabling employees to use third-party email clients and apps that could give them email access from mobile devices while bypassing Microsoft ActiveSync and the security policy that we apply to mobile devices to enforce the use of device passwords, enable device timeout and support remote wiping.
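An audit like the one described above boils down to comparing the security baseline we expected to carry over from on-premises Exchange against what the hosted tenant actually reports. The sketch below illustrates that idea; the setting names and values are invented for this example and are not real Office 365 parameters.

```python
# Minimal sketch of a configuration-drift audit, assuming we can export
# the tenant's effective settings into a dict. Setting names are
# illustrative, not actual Office 365 configuration keys.

EXPECTED = {
    "auto_forward_to_external": False,  # the setting that bit us
    "pop_enabled": False,               # third-party clients bypass ActiveSync
    "imap_enabled": False,
}

def find_drift(actual: dict) -> dict:
    """Return each setting whose actual value differs from the baseline."""
    return {k: actual.get(k) for k, v in EXPECTED.items() if actual.get(k) != v}

# Example: a post-migration tenant that left auto-forwarding and POP on.
drift = find_drift({
    "auto_forward_to_external": True,
    "pop_enabled": True,
    "imap_enabled": False,
})
# drift now holds the two settings that diverged from the baseline.
```

Keeping the baseline in version control and rerunning the comparison after every migration or tenant change turns "I was assured the settings would carry over" into something we can verify ourselves.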
Another discovery was that employees could use the Microsoft Outlook application on any PC, on or off the corporate network. When Exchange was on-premises, the only way a remote user could access corporate email was via VPN. This increased availability is a problem because once email is pulled down to a client, it remains there even after the user exits Outlook. Using OWA is preferable, since it's browser-based; once the browser is closed, all email is removed (as long as the user clears the cache and any temp files).
What will help? Mobile device management might, and we hope to deploy that next year. Then there's the use of machine certificates, which can be issued to corporate PCs for validating authorization to access the Outlook client. We could do that while still providing some flexibility related to OWA and mobile devices, via ActiveSync. We've also spoken to Microsoft about this, and we'll be investigating our options with Office 365 a bit further.
One thing's certain: The email team's never-ending list of action items just got a good deal longer.
This week's journal is written by a real security manager, "Mathias Thurman," whose name and employer have been disguised for obvious reasons. Contact him at email@example.com.