Apple's mobile privacy letter to Congress omits an awful lot of context

Apple's letter was designed to alleviate congressional fears about the company invading its customers' privacy. But a close reading of the letter does the opposite.


Apple's official letter of response to the chairman of the U.S. House Committee on Energy and Commerce this month was designed to alleviate congressional fears about the company invading its customers' privacy. But a close reading of the letter does the opposite, pointing out the many ways sensitive data is retained even when the consumer says no. And that retained data is only one crafty cyberthief away from getting out.

The problem with the letter is that it assumes that technology always works perfectly and that security safeguards are never overcome by attackers — or even by nosy, technically astute romantic partners. Such thinking, that we live in a state of nirvana, is one of the biggest privacy and security problems today, with vendors routinely — and unrealistically and arrogantly — assuming that they have anticipated and negated every security hole.

Vendors often forget — or, more likely, pretend to forget — that technology can behave very differently in the field than in the lab. In the field, where the tech has to interact with icky humans (also known to Star Trek fans as ugly giant bags of mostly water) and real-world environments, the difference between how the code is supposed to work and how it actually works becomes evident. Amazon discovered this when one of its Echo devices broadcast overheard conversations to a random person on the device owner's contact list. Oops!

Now let's drill into the letter and see what mobile privacy surprises Apple has in store for us.

By the way, in an email exchange, Apple didn't directly address the points I make below, and it declined requests for a phone interview.

First off, we have the mandated Apple platitude: "We believe privacy is a fundamental human right and purposely design our products and services to minimize our collection of data. When we do collect data, we're transparent about it and work to disassociate it from the user. We utilize on-device processing to minimize data collection by Apple."

Actually, that is a fair and accurate statement. As Apple points out, due to its business model, it can afford to collect less data than companies such as Google, Facebook and even Uber. But what's more interesting is what is not said. When it comes to privacy, which is what Congress explicitly asked about, what's most important is not what the vendor can see (Apple's point); it's what information is collected in a way that makes it accessible to bad actors. (By bad actors, I mean cyberthieves and lovers with evil intent, as opposed to Keanu Reeves, although the confusion is understandable.)

In short, any data collected is data that can be accessed by identity thieves and others. No safeguard is perfect, as Silicon Valley reminds us almost daily.

The letter continues: "If a user has iPhone Location Services turned off, then iPhone's location information stays on the device and is shared only to aid response efforts in emergency situations. For safety purposes, and aligned with legal and regulatory requirements, an iPhone's location information may be used when an emergency call is placed to aid response efforts regardless of whether Location Services is enabled."

This is a good move by Apple, but as any penetration tester will tell you, anything stored on the phone can be accessed by anyone with physical (and sometimes simply wireless) access to that phone. Apple's motives and intentions are likely good, but it's important to remember that there is privacy exposure here. It may be a privacy compromise most are willing to make — and for good reason — but it's still a compromise.

By the way, speaking of emergency mobile calls, this Washington Post story from Friday (Aug. 24) is more frightening than usual. If true, it means that anyone whose phone is being tracked by law enforcement — as well as mobile users near them — may be unable to make emergency 911 calls. That's a tragedy just waiting to happen.

Apple also mentioned that its 911 service is about to be strengthened, which is also good for emergency calling and potentially bad for privacy: "Later this year, Apple will make available in the United States an Enhanced Emergency Data (EED) service in iOS 12 to provide first responders with more accurate information more quickly, in an effort to reduce emergency response times. EED supplements existing iPhone emergency call features. iPhones running iOS 12 will continue to deliver location data to emergency responders using methods present in iOS 11 and also share information through EED. Consistent with Apple's belief that individuals should be able to exercise choice around the handling of their data, iPhone users can disable EED at any time by visiting the Settings app on their iPhone."

Before we get into more details about what EED does, let's put this passage into context. First, Apple will (properly, in my view) make this the default selection. Most iPhone users don't monkey with their settings much, so default settings are very important. Second, even those of us who do opt to review and change many default settings are extremely unlikely to disable anything described as helping us during an emergency. This is not a bad move by Apple, but it needs that context.

More on EED: "EED works by providing information about an iPhone making an emergency call to a database relied on by first responders. When a 911 call is made from an iPhone running iOS 12, the phone will provide its estimated longitude and latitude — and how confident it is in this estimation — phone number and mobile network to the database. Emergency responders can view this information in the database by entering the phone number of the emergency call they received."
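To make the data flow concrete, here's a rough sketch of the message and lookup that Apple's description implies. To be clear, these type and field names are my own invention, not Apple's or RapidSOS's; the letter describes only the contents.

```swift
import Foundation

// Hypothetical sketch of the data Apple says an EED message carries.
// Field names are my guesses; the letter only lists the contents.
struct EEDMessage {
    let latitude: Double          // estimated latitude
    let longitude: Double         // estimated longitude
    let confidenceMeters: Double  // how confident the phone is in the estimate
    let phoneNumber: String       // the key responders use to look it up
    let mobileNetwork: String     // carrier the call was placed on
}

// Per the letter, responders "view this information in the database by
// entering the phone number of the emergency call they received."
final class EEDDatabase {
    private var byPhoneNumber: [String: EEDMessage] = [:]

    func store(_ message: EEDMessage) {
        byPhoneNumber[message.phoneNumber] = message
    }

    func lookup(phoneNumber: String) -> EEDMessage? {
        byPhoneNumber[phoneNumber]
    }
}
```

Note what the lookup key is: the caller's phone number. Keep that in mind as we talk about who can get into this database.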

Although this, again, is a very worthwhile program from a consumer safety perspective, it is problematic from a security and privacy perspective. Not only is this highly sensitive data now stored on the phone; it is being transmitted to a third-party network as well. This opens up three spots of vulnerability: to anyone with access (physical or non-physical) to the device; to anyone sniffing the information as it leaves the device and travels to this third-party network; and to anyone who has access — legitimate or otherwise — to that third-party network. A third-party network that, I should stress, has unknown security protections.

Even if this third-party network has excellent security — unlikely — the universe of civilian emergency responders is massive. If just one cyberthief also happens to be a trained paramedic or volunteer firefighter, there's your data leak.

Back to Apple's letter on EED: "Because emergency contexts are especially sensitive, Apple takes extra steps to ensure that our products and services protect the confidentiality, integrity and availability of our users' data during an emergency call. Apple only sends the information to the database used by emergency responders, which is run by third-party RapidSOS, if the emergency call is made from within an area where emergency responders rely on the database. If the call is made from a location where first responders do not use the database, no information is shared. Apple also requires that when RapidSOS receives the EED information, it performs its own check that the emergency responders in the area where the call originated rely on the database. If they do not, Apple requires RapidSOS to immediately discard the information."

This is a fine example of an idea that sounds perfect on the whiteboard in an Apple conference room, but can fall apart in implementation. How aggressively will Apple employees track every area where the database is and isn't used? How often will that information be updated? What are the odds that the data will be old and that the information isn't shared in an area where it should have been — and someone dies as a result?

Second, this places an onus on RapidSOS to track what local responders do and whether or not they actually use the database. In an emergency, responders will be focused on saving lives and are unlikely to be thinking about database updates. And Apple places the responsibility for discarding the data on the third party? What if RapidSOS doesn't check? And even if it does, how long will it take for someone to get around to deleting that data? Data at rest is exposed.
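As I read the letter, the handshake amounts to a double coverage check: Apple checks before sending, and RapidSOS re-checks on receipt and discards anything from an uncovered area. Here's a sketch of that logic, reusing my hypothetical EEDMessage and EEDDatabase types from above; the coverage directory is likewise a stand-in.

```swift
import Foundation

// Hypothetical directory answering one question: do responders in this
// area rely on the database? Both sides consult their own copy.
protocol CoverageDirectory {
    func respondersUseDatabase(latitude: Double, longitude: Double) -> Bool
}

// Sender side (Apple, per the letter): share nothing from uncovered areas.
func sendIfCovered(_ message: EEDMessage,
                   directory: CoverageDirectory,
                   transmit: (EEDMessage) -> Void) {
    guard directory.respondersUseDatabase(latitude: message.latitude,
                                          longitude: message.longitude) else {
        return // "no information is shared"
    }
    transmit(message)
}

// Receiver side (RapidSOS, per the letter): re-check, and on failure
// "immediately discard the information" -- here, simply never store it.
func receive(_ message: EEDMessage,
             directory: CoverageDirectory,
             database: EEDDatabase) {
    guard directory.respondersUseDatabase(latitude: message.latitude,
                                          longitude: message.longitude) else {
        return
    }
    database.store(message)
}
```

Everything here hinges on both copies of that coverage directory staying current. A stale entry means either data flowing where it shouldn't or, worse, data withheld from responders who needed it.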

Apple — sort of — addresses some of the EED privacy concerns, but again it focuses on Apple's servers, which are only one small point of vulnerability. Indeed, Apple's server security is far stronger than that of consumers and most third parties, so it's not even an especially weak point.

Still, that's where Apple chose to focus: "Apple also takes measures to protect the EED messages at rest and in transit. EED messages originate on iPhone and are never logged on Apple servers. EED messages are encrypted by Apple both in transit and at rest and Apple requires that RapidSOS do so as well. Apple relies on strong credentials to help ensure that EED messages are only transmitted between systems that have established their identities. And Apple requires that RapidSOS delete EED messages no later than 12 hours after receipt. Apple has the right to audit RapidSOS to ensure that it is complying with its commitment regarding the handling of user data."

Encryption is certainly good, although most security folk would have simply assumed that Apple did that anyway. Still, it's good. This line, however, offers less comfort: "Apple relies on strong credentials to help ensure that EED messages are only transmitted between systems that have established their identities." As opposed to what? Was Apple considering sending this data to anyone it ran into? It's a bit obvious, but good that Apple is at least claiming to check the ID of the network with which it shares this ultra-sensitive data.
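For anyone wondering what "strong credentials" between systems typically means in practice, one common approach is certificate pinning: the client refuses to complete a TLS handshake with any server whose certificate it doesn't already know. This is a generic illustration of that technique, not Apple's actual EED transport, which it hasn't detailed.

```swift
import Foundation
import Security

// Generic certificate-pinning delegate: the session only trusts a server
// whose leaf certificate exactly matches known, pinned DER bytes.
final class PinnedSessionDelegate: NSObject, URLSessionDelegate {
    private let pinnedCertificate: Data // DER bytes of the expected certificate

    init(pinnedCertificate: Data) {
        self.pinnedCertificate = pinnedCertificate
    }

    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition,
                                                  URLCredential?) -> Void) {
        if challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodServerTrust,
           let trust = challenge.protectionSpace.serverTrust,
           let chain = SecTrustCopyCertificateChain(trust) as? [SecCertificate],
           let leaf = chain.first,
           SecCertificateCopyData(leaf) as Data == pinnedCertificate {
            completionHandler(.useCredential, URLCredential(trust: trust))
        } else {
            // Identity not established; refuse to send anything.
            completionHandler(.cancelAuthenticationChallenge, nil)
        }
    }
}
```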

Then there's this: "Apple has the right to audit RapidSOS to ensure that it is complying with its commitment." It's nice that Apple contractually has that right, but will it be exercised? And how often? And what happens if Apple does check and the audit results are bad? There's only one entity running this EED effort; is Apple really going to cut it off? Data protection is not going to be — nor should it be — a RapidSOS priority. Apple should handle this directly. Again, not especially comforting.
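That's the rub with the 12-hour deletion requirement, too: the rule is trivial to state and nearly as trivial to code. Here's a sketch, again reusing my hypothetical EEDMessage type. The logic isn't the hard part; making sure someone actually runs it, and proving they did, is.

```swift
import Foundation

// Hypothetical retention sweep for the contractual rule Apple describes:
// "delete EED messages no later than 12 hours after receipt."
final class RetentionEnforcer {
    private struct StoredMessage {
        let message: EEDMessage
        let receivedAt: Date
    }

    private let maxAge: TimeInterval = 12 * 60 * 60 // 12 hours, per the letter
    private var stored: [StoredMessage] = []

    func record(_ message: EEDMessage, receivedAt: Date = Date()) {
        stored.append(StoredMessage(message: message, receivedAt: receivedAt))
    }

    // Nothing calls this automatically. Someone at the third party has to
    // schedule it, monitor it and care when it breaks -- which is my point.
    func purgeExpired(now: Date = Date()) {
        stored.removeAll { now.timeIntervalSince($0.receivedAt) > maxAge }
    }
}
```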

Moving on to other geolocation information from the letter, Apple also provided an interesting description of how it — and other mobile players — have accelerated location availability, both for authorized users and, regrettably, for thieves and other evildoers. As for those evildoers, what if a violent criminal is hunting a specific victim? Spook the victim into calling 911, and the victim's exact, constantly updated location is transmitted.

"Location-based services rely on a mobile device's ability to provide location information quickly and consumer expectations demand the ability to identify device location nearly instantaneously. Calculating a device's location using GPS satellite data alone can take minutes. iPhone uses an industry-standard practice called assisted GPS to reduce this time to just a few seconds by using Wi-Fi hotspot, cellular tower and Bluetooth data to find GPS satellites or to triangulate location when GPS satellites are not available, such as when the user is in a basement. iOS calculates the location on the iPhone itself, using a crowdsourced database of information on cellular towers and Wi-Fi hotspots. The crowdsourced database used by iOS to help quickly and accurately approximate located is generated anonymously by tens of millions of iPhones. Whether an iPhone participates in the creation of the crowdsourced database depends on whether the iPhone has enabled Location Services. iPhones with Location Services enabled collect information on the cellular towers and Wi-Fi hotspots that the iPhones observe. iPhone does not crowdsource Bluetooth beacon information. iOS saves this information locally on iPhone until it is connected to Wi-Fi and power, at which point the device makes an anonymous and encrypted contribution to the crowdsourced database. If iPhone cannot contribute the data to the crowdsourced database within seven days — (such as) if the iPhone was not connected to Wi-Fi and power during this period — iOS permanently deletes the data."

That's one of the best descriptions of how iPhone uses geolocation crowdsourcing. Submitted to you without further comment.
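All right, one comment after all, because the retention rules in that passage are concrete enough to sketch. The names below are mine; the gating conditions, Wi-Fi plus power for uploads and permanent deletion after seven days, come straight from the letter.

```swift
import Foundation

// Sketch of the contribution rules Apple describes for the crowdsourced
// location database. Names are hypothetical stand-ins.
struct RadioObservation {
    let identifier: String // a cell tower or Wi-Fi hotspot the phone observed
    let observedAt: Date
}

final class CrowdsourceQueue {
    private let maxAge: TimeInterval = 7 * 24 * 60 * 60 // seven days, per the letter
    private var pending: [RadioObservation] = []

    func record(_ observation: RadioObservation) {
        pending.append(observation)
    }

    // Called whenever connectivity or power state changes.
    func tick(onWiFi: Bool,
              onPower: Bool,
              now: Date = Date(),
              uploadAnonymizedAndEncrypted: ([RadioObservation]) -> Void) {
        // Permanently drop anything that aged out before it could be contributed.
        pending.removeAll { now.timeIntervalSince($0.observedAt) > maxAge }

        // Contribute only when the phone is on Wi-Fi and power, per the letter.
        guard onWiFi, onPower, !pending.isEmpty else { return }
        uploadAnonymizedAndEncrypted(pending)
        pending.removeAll()
    }
}
```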

Moving on, the letter also describes Apple's "Hey Siri" procedures.

"If a user has enabled 'Hey Siri' functionality on the iPhone, Siri can be accessed using the clear, unambiguous audio trigger 'Hey Siri.' A speech recognizer on iPhone runs in a short buffer on the device and listens for the two words 'Hey Siri.' The speech recognizer uses sophisticated local machine learning to convert the acoustic pattern of a user's voice into a probability that they uttered a particular speech sound and, ultimately, into a confidence score that the phrase uttered was 'Hey Siri.' Up to this point, all audio data is only local on the device in the short buffer. If the score is high enough, iOS passes the audio to the Siri app and Siri wakes up and appears on screen."
