Organizations continue to place themselves in the line of fire through the same set of avoidable flaws that invariably show up in software; small and large organizations alike wind up with copious amounts of unquantified risk. Measured risk is at least quantified and on an organization’s radar, but most of the risk introduced by custom-built software remains unknown to the organization entirely.
How does this remain such a universal problem, and how can we better protect our groups, divisions, organizations, and ourselves from it? First, let’s briefly review how software gets built. A stakeholder outlines an idea for software that solves some problem or satisfies some demand, internal or external. Considerable time and effort then goes into specifying exactly how the software should work: the user experience, the interface, and so on. Think of these as the application’s “positive” attributes.
We human beings have a well-documented positivity bias, which often leads us to focus primarily on the benefits of an endeavor without giving the same consideration to potential negative consequences. Software development exhibits the same bias: the positive aspects of building software receive the primary focus, while the security risks involved are minimized, trivialized, or ignored altogether.
So how can we compensate for this at an organizational scale? First, all employees, not just developers, should be educated about the risks of building software. Developers alone cannot bear the burden of keeping an organization secure. They are certainly on the front line when it comes to preventing vulnerabilities from festering in software, but they are not in a position to perpetually argue with stakeholders about security, and neither is a dedicated security team. If an internal security team seems overly draconian, the problem may lie not with the security team but with a lack of security and risk awareness throughout the organization.
Just as we train all users not to open dodgy emails or download suspicious attachments, we should teach the fundamentals of software security to everyone who even tangentially touches software development. Given the many high-profile attacks on software in recent years, there is no shortage of examples to draw from. This training should be mandatory, and it should be given the same top status as other required trainings, such as harassment prevention and key corporate policies.
The key takeaway? The more exposure we all have to the risks we face, the better our chances of confronting and limiting the problems those risks create.