Opinion by Kenneth van Wyk

Kenneth van Wyk: Where mobile apps go wrong

More so than Web-based applications, mobile apps tend to have security design flaws that attackers can exploit


Mobile apps aren't just fun. They speak volumes about your system's overall security architecture, and they may well reveal defects that can hurt your company -- and that's no fun at all. How can that be?

For starters, we're seeing more and more serious business apps showing up on mobile platforms like Android and iOS. For the most part, those mobile apps are complementing companies' existing Web-based applications, especially for consumer-facing sorts of functions.

A primary difference between most Web apps and most mobile apps lies in what companies put into consumers' hands. With Web apps, there is of course the HTML, JavaScript and whatever other technical components the company deploys. In the majority of situations, the client code makes up the presentation layer and much of the user interface elements. That is, not much actual business logic is implemented in the client-side "footprint."

Mobile apps quite often change that around, at least to some degree. On Android, mobile apps are generally coded using Java, and on iOS in Objective C. Both of these languages are capable of significant processing functionality. (That's not to denigrate the capabilities of JavaScript and other client-side active content.)

As a result, mobile apps often perform more functions than just presentation-layer aesthetics. That's where the risks can hide.

On Android, it's quite feasible to decompile Java class files, even those built for the Dalvik VM (which differs somewhat from a traditional JVM). Similarly, reverse-engineering iOS apps is actually quite easy, even those that are protected via encryption on Apple's App Store.

Attackers can and do examine apps by reverse-engineering them and looking for exploitable defects. Defects essentially fall into two categories: implementation bugs and design flaws. Bugs are coding mistakes, such as building SQL queries dynamically from strings that can be injected with poisonous data. They're also generally quite easy to remediate, for example by using parameterized SQL query APIs such as PreparedStatement in Java.
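To make the distinction concrete, here is a minimal Java sketch (with hypothetical table and column names) contrasting a dynamically built query string with the PreparedStatement approach:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class QueryExample {
    // Vulnerable: attacker-controlled input is concatenated into the SQL text,
    // so the input can change the shape of the query itself.
    static String vulnerable(String userInput) {
        return "SELECT * FROM accounts WHERE name = '" + userInput + "'";
    }

    // Safer: the query shape is fixed at compile time; the input is bound
    // as data via a placeholder and can never become SQL syntax.
    static PreparedStatement safer(Connection conn, String userInput)
            throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM accounts WHERE name = ?");
        ps.setString(1, userInput);
        return ps;
    }

    public static void main(String[] args) {
        // A classic injection payload turns the dynamic string into a query
        // whose WHERE clause is always true.
        System.out.println(vulnerable("x' OR '1'='1"));
    }
}
```

The fix is cheap precisely because it is local: swapping string concatenation for a bound parameter requires no change to the surrounding design.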

On the other hand, design flaws can be far more heinous, and they are usually substantially more difficult -- and costly -- to remediate.

Now, these statements are true for all software, so why should there be more concern about mobile apps? The answer lies in the software's design itself. If the architect has designed a system in which there are sensitive functions in the client code, those design mistakes will be entirely visible to the attackers when they reverse-engineer the mobile apps.

Whenever we build a mobile app and place that app out in an app store or market, we're giving away some clues about our software. Most users will be blissfully unaware of how an app works -- and they don't care how it works. They just want it to work.

But those users who want to attack us will care deeply about how things work. They are going to statically and dynamically analyze our apps to see how the app works. They are going to run our apps on rooted/jailbroken devices and examine them under microscopes, metaphorically speaking.

In examining the apps, our attackers are going to be seeking design flaws that can be exploited. It would be one thing if a single app on a mobile device could be attacked and that person's data stolen or corrupted. But a design flaw might well unlock larger possibilities.

Imagine a mobile app that contains a client-side authentication function. When the app is launched, the user enters a username and password to use the app. Now, imagine if that authentication function contains a single Boolean data field denoting whether the user is authenticated or not. And imagine the attacker is able to change that data field to TRUE on a running app, without first having to enter a valid username/password. In this hypothetical situation, the system could be tricked into letting an unauthenticated user through.
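This hypothetical flaw can be sketched in a few lines of Java. The class and field names below are invented for illustration; the reflection call stands in for what an attacker would do with a debugger or instrumentation tooling on a rooted device:

```java
import java.lang.reflect.Field;

// Hypothetical client-side authentication holder, as in the flawed
// design described above: one Boolean guards everything.
class ClientAuth {
    private boolean authenticated = false;

    boolean login(String user, String password) {
        // Stand-in for a real credential check.
        authenticated = "alice".equals(user) && "secret".equals(password);
        return authenticated;
    }

    boolean isAuthenticated() {
        return authenticated;
    }
}

public class FlipDemo {
    public static void main(String[] args) throws Exception {
        ClientAuth auth = new ClientAuth();
        auth.login("mallory", "wrong"); // login fails; flag stays false

        // On a device the attacker controls, nothing stops the moral
        // equivalent of this: flip the private flag directly in memory.
        Field f = ClientAuth.class.getDeclaredField("authenticated");
        f.setAccessible(true);
        f.setBoolean(auth, true);

        System.out.println(auth.isAuthenticated()); // prints "true"
    }
}
```

The point is not the reflection trick itself but the trust model: any check that lives entirely in client memory can be altered by whoever controls that memory.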

This simple example illustrates the perils of putting security controls onto a client, and trusting those controls. Web apps can also contain client-side problems like this, but mobile apps tend to exacerbate the problem, because people underestimate their attackers' capabilities.

Reverse-engineering an Android or iOS app to find design flaws turns out to be not all that tough to achieve. Tools and techniques are readily available on both of these platforms to help the attacker out.

It's up to the security-minded designer to ensure that security controls, sensitive data, protocols and algorithms do not ever get exposed to the attackers. That requires a degree of caution in choosing a system's design. It may be hugely tempting to put sensitive functions out on the client side, but doing so might well reveal more than is prudent.

So what are security-minded software engineers to do? There's a lot to consider, after all. Here are a few things to think about. While this list isn't comprehensive, it covers the most important things.

* Don't store things on the client. Unless absolutely essential, don't store anything remotely sensitive on the client. If you must store something, use a container such as iOS's keychain. It's far from perfect, but it's still worlds better than storing it in plain text.

* Don't put security controls on the client. All security (and other operational) decisions must be made on the server, and never on the client.

* Assume the client is under the control of an adversary. Code your software as though your adversary is seeing everything you do -- because she will.
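The second and third points above can be sketched together. In this minimal, framework-free sketch (session IDs and roles are invented for illustration), the server re-derives every authorization decision from its own session state and deliberately ignores anything the client claims about itself:

```java
import java.util.Map;
import java.util.Set;

// Hypothetical server-side endpoint logic: the client may send an
// "I am an admin" claim, but only server-held session state counts.
public class ServerSideCheck {
    // Server-held session store: sessionId -> roles granted at login.
    static final Map<String, Set<String>> SESSIONS = Map.of(
            "sess-123", Set.of("user"),
            "sess-456", Set.of("user", "admin"));

    static boolean canDeleteAccount(String sessionId, boolean clientClaimsAdmin) {
        // clientClaimsAdmin is deliberately unused: an attacker who controls
        // the client controls that flag, so it carries no authority.
        Set<String> roles = SESSIONS.get(sessionId);
        return roles != null && roles.contains("admin");
    }

    public static void main(String[] args) {
        System.out.println(canDeleteAccount("sess-123", true));  // prints "false"
        System.out.println(canDeleteAccount("sess-456", false)); // prints "true"
    }
}
```

Note that the non-admin session is refused even when it claims admin rights, and the admin session is allowed even when it claims nothing: the client's word never enters the decision.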

Does this mean we can't have anything nice on our mobiles? Of course not. As long as we perceive the benefits of mobile apps to be worth the risks, we're going to continue enjoying neat stuff on our mobiles. But that implies we know what those risks are.

With more than 20 years in the information security field, Kenneth van Wyk has worked at Carnegie Mellon University's CERT/CC, the U.S. Department of Defense, Para-Protect and others. He has published two books on information security and is working on a third. He is the president and principal consultant at KRvW Associates LLC in Alexandria, Va.

Copyright © 2014 IDG Communications, Inc.
