Hackers are ready to eat your app for LUNCH


How to avoid the pitfalls of building apps that will be devoured by hackers


Normally I don’t go for cheap gimmicks like mnemonics, but this one (literally) wrote itself. I was perusing the "Information Is Beautiful" visualization on hacks and breaches when I decided to write down the basic reasons for these kinds of failures with an eye towards training teams on common mistakes and oversights.

And that’s where the mnemonic in the article's title comes from: LUNCH - Lazy, Unqualified, Naive, Cheap, Hubris.

Now, I am not trying to throw stones at any one development team when I use these terms; we’ve all exhibited these characteristics at one time or another. The point here is to recognize when we, as a development team, are doing this sort of thing so that we can stop it and take a step back to make better decisions.

Lazy
Robert Heinlein once wrote of a man who rose through the ranks of the Navy, and later became wealthy, thanks to his innate laziness — a characteristic that drove him to find the easiest and simplest way to accomplish any task. Rather than “doing it as it always had been done,” he found paths that led to the same result with the least effort, and was rewarded handsomely for finishing tasks more quickly and efficiently. Most engineers, innovators and entrepreneurs possess this same quality: the drive to finish tasks efficiently, and the ability to see tasks differently and discover new paths to completion.

However, software security is one area where optimizing for efficiency and simplicity can work against you. Security often works best in the realm of entropy, presenting barriers that force threats to work harder to overcome, to expose, to penetrate. It means building in complexity and introducing deliberate friction to the normal order of things, creating anything but a straight line from starting point to finished task.
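One concrete way this "make the attacker work harder" principle shows up in code is password storage. The sketch below is illustrative, not taken from the article: it uses Python's standard-library PBKDF2 with a deliberately high iteration count (the specific count here is an assumption; consult current guidance for real systems) so that every brute-force guess costs the attacker measurable CPU time.

```python
import hashlib
import hmac
import os

# Illustrative work factor (an assumption, not a recommendation): high enough
# to make each guess expensive for an attacker, low enough that a single
# legitimate login stays fast.
ITERATIONS = 600_000

def hash_password(password, salt=None):
    """Derive a slow, salted hash; the random salt defeats precomputed tables."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, expected):
    """Recompute the hash and compare in constant time to avoid timing leaks."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)
```

The asymmetry is the point: a legitimate user pays the hashing cost once per login, while an attacker who steals the hash database pays it for every single guess — complexity placed deliberately in the attacker's path.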

Unqualified
Recently (true story), my water heater burst in my basement, and a plumber had to come in and replace it. My children, being the curious sort, wanted to know exactly how the plumber was going to do it, to which my only response was, “I have no idea” (because I am wholly unqualified to touch implements of destruction like a blowtorch). The day after the replacement, a city inspector came to the house to check the installation. He carried with him some air-detection gear, some checklists, a code book that outlined the rules and regulations, and 40 years of working knowledge.

During the inspection he pointed out some flaws in the installation; nothing incredibly dangerous, but actions the plumber had performed that were no longer recommended in the new code books. Simple changes resulted in a significant reduction in risk of fire, for which I was eternally grateful.

Having the skill set to perform the operations required to accomplish tasks is not always enough; sometimes you need a qualified, experienced person looking over the work and making recommendations for improvement and risk reduction.

This is as important in plumbing as it is in software security. The threats to a system are constantly evolving, driven by changes in the environment, lessons learned from previous exploits, and the collective sharing of information. The developer constructing a security solution likely has neither the time nor the interest to research every aspect of secure development, especially as context changes from project to project. Instead, you need qualified security personnel who are focused on staying abreast of changes in the industry, the impact of the market and regulations, and the evolving threat landscape.

Naive
“Why would anyone do that?” is a question often posed by those steeped in idealism and less so in the real world, where bad people do bad things in pursuit of money and/or power. Sometimes this pursuit takes on an air of complexity and mystery that rivals the greatest heist movies ever written, but most times it is as simple as any other “smash and grab” robbery. Consider the lengths that some people go to in order to steal copper piping — scaling heights and invading dangerous power substations, or risking injury and arrest while tearing apart homes under construction — all to run away with a hunk of metal to sell at a scrapyard. But there was, and is, significant money to be made in volume theft of this nature, and the epidemic reached such proportions that in 2008 the FBI created a separate bulletin just for it.

The same holds true for the theft of digital identity and financial data; while one small item may fetch little on the open market, the opportunity for bulk sales on the black market makes it all (somewhat) worth it to the hackers.

To defend against it, software security must anticipate the lengths a hacker will go to in order to steal data -- not because any one individual’s data is so valuable, but because the hacker’s ability to harvest data in volume offsets the cost of acquisition. It’s not enough to consider whether a person or group in your own position would go to those lengths; teams must put themselves in the position of the attackers and understand their motives, their drive, their goals, their lives and their hunger.
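One way to act on that insight is to make bulk harvesting expensive even when each individual request looks legitimate. The sliding-window rate limiter below is a minimal sketch — the window, the threshold and the single-process in-memory store are all assumptions for illustration — that caps how fast any one client can pull records, so scraping a million accounts stops being a quick smash-and-grab.

```python
import time
from collections import defaultdict

# Illustrative limits (assumptions, not recommendations); a real service
# would tune these and share state across processes rather than keep it
# in local memory.
WINDOW_SECONDS = 60
MAX_REQUESTS = 10

_history = defaultdict(list)  # client id -> timestamps of recent requests

def allow_request(client_id, now=None):
    """Allow a request only if the client is under its per-window budget."""
    now = time.monotonic() if now is None else now
    # Keep only the timestamps still inside the sliding window.
    recent = [t for t in _history[client_id] if now - t < WINDOW_SECONDS]
    _history[client_id] = recent
    if len(recent) >= MAX_REQUESTS:
        return False  # over budget: deny, raising the cost of bulk pulls
    recent.append(now)
    return True
```

A limiter like this barely inconveniences a normal user, but it turns a minutes-long bulk pull into a weeks-long one — exactly the kind of acquisition cost that makes volume theft unattractive.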

Cheap
Again, this is not to cast aspersions with such an ugly word, but we as corporations are in the business of maximizing profits and minimizing expenses; it’s the way the world works. To do this, we tend not to overspend on tools, training or resourcing, and we often compress our development timeframes to push out product as quickly as possible. And it’s not generally for giant margins -- it’s done more to shave costs in order to remain competitive and deliver products to market at an attractive price.

This drive to the bare minimum, however, can have a dramatic effect on the security of the product. Investing in time, tooling and education is critical to establishing a strong security posture, and necessary to battle attackers who, at the end of the day, have a great advantage over producers. Once the product is in the market, a smart attacker has a wealth of time to probe it for weaknesses, expose vulnerabilities, and exploit them or build tools to sell on the black market for others to use.

Hubris
In the summer of 1812, Napoleon Bonaparte was by all accounts the ruler of Europe, commanding half a million soldiers and wearing the crowns of France and Italy. But within six months he was retreating from Russia with that same army reduced by weather and fighting to less than 5 percent of its original size. He had thought himself invincible, and therefore perfectly capable of conquering Russia, but his excessive self-confidence prevented him from appreciating and preparing for the threat landscape he would meet there.

We all think ourselves clever; we would likely not be in the business we’re in unless some sense of confidence and pride in our abilities existed. But brilliance is not limited to one side of the battle, and sometimes our over-confidence in our skills, our tooling and our technology can lead to a lack of awareness of, and appreciation for, the threats that loom over the horizon.  

It’s important to recognize that we can all be beaten at some point, that our controls can be bypassed and our alarms silenced, and that we must remain vigilant in our secure development practices, our monitoring and our defense-in-depth designs.

I’ll close with this, which is generally how I open a talk to a wide audience: I will often ask “raise your hand if you’ve never experienced a breach,” and then will nod to those starting to raise their hands and say “not so fast.”

The truth of it is, we don’t know if or when we’ve been breached, either personally or professionally, but the likelihood of occurrence is quite high. This is because the hackers and accidents and disgruntled employees pressuring your systems and services don’t take breaks; they don’t try, fail and leave, but instead continue testing and probing and pushing the systems to their natural or unnatural failure points.  

To defend against this push, we as product owners and protectors cannot rest -- instead, we must continuously train, remain cynical, invest properly, and never let our egos get in the way of good, solid security practices.

This article is published as part of the IDG Contributor Network.
