On 10 March 2015, The Intercept revealed how the U.S. Central Intelligence Agency targets Apple hardware and software. Based on top-secret documents exposed by Edward Snowden, the article implies that Apple’s Xcode development environment, iOS encryption keys, and Software Update may all have been compromised by the CIA in the name of broad surveillance. Fortunately, the truth is more nuanced.
While I have no doubt the CIA (and every other intelligence agency and criminal organization in the world) targets Apple along with all the other major technology companies, that doesn’t mean they necessarily succeed, or that any successes last long. It’s also important to note that most of the attacks listed in the article are better suited to targeting individuals than to the mass spying that has so dominated the headlines.
Apple has been a particular target since the company started taking privacy and security both seriously and personally — even beginning to architect parts of its products and services to resist government-level attacks. But Apple isn’t perfect. The company still struggles to find the right balance between security, privacy, design, and usability in the face of advanced criminal- and government-level attacks.
The Intercept’s piece was mostly a condemnation of government tactics, but failed to address what matters to Apple’s customers — whether Apple’s products remain secure and safe to use. Overall, the news is encouraging for users, but the article highlights the complexities of modern security, privacy, and intelligence gathering. We are still in the early days of what is likely to be a generational issue as society continues to adjust to the digital age.
Because That’s Where the Intelligence Is — As a taxpayer, if the CIA and every other U.S. intelligence agency aren’t targeting Apple products, I want a refund.
Our instinct is to express outrage at U.S. intelligence agencies targeting the products of U.S. companies, but this is far from the first time it has happened, is far from the last time it will happen, and is absolutely essential for those agencies to do their jobs. As the entire world relies on technology for virtually all forms of communication, tapping into that technology is critical for intelligence gathering.
As much as it might pain me to live in a world that requires spies, that’s the reality. When U.S. companies dominate a tech sector, it’s only natural that intelligence agencies will target their products. We in the security industry have long suspected that agencies have planted employees in manufacturers and supply chains or engaged in direct operations, in most cases targeting products destined for specific countries. Over the past two years, those rumors and jokes at the bar at security conferences have migrated to the headlines.
While The Intercept’s article focuses on Apple, it also mentions intelligence agency attempts to target Microsoft and the chip manufacturers who make the Trusted Platform Module (not a Microsoft-specific product) used to help power encryption in various platforms. This is on top of previous leaks indicating potential government attacks that have compromised all sorts of encryption hardware, software, and services.
Put bluntly, intelligence agencies must target U.S. companies since those companies’ products — whether we’re talking the iPhone or Windows — are used by all their international targets.
I actually find The Intercept’s article highly encouraging. The fact that government researchers needed to expend such intense effort to access devices and services shows that the security technologies Apple and others provide for us are effective (if not perfect), and that the technology companies aren’t simply rolling over for the intelligence community, which would be far more concerning.
Hard Targets — Much has happened in the security world since 2012, when the documents revealed by Edward Snowden were created, including major changes to the iOS security model. Let’s look at the main products mentioned in the documents, and what the risks are.
Keep in mind, I’m not dismissing these risks. But on every reading, they seem targeted, temporary, and self-limiting.
Xcode is the tool all Mac and iOS developers use to create applications. The government researchers claimed that they created a cracked version of Xcode that could embed attack code into any apps created using that version, without the developer knowing about it. If someone were able to compromise the canonical version of Xcode distributed by Apple, it could affect every application, anywhere. It’s the sort of thing that keeps security-conscious developers up at night.
Compromising a developer tool chain, especially a compiler, is as bad as it gets. Imagine the capability to insert malicious code into major, popular applications without the developer knowing about it. But believe it or not, there is an upside here. Based on the researchers’ presentations, they don’t have access to the canonical version of Xcode itself, but instead need to target a particular developer and swap out that developer’s copy of Xcode on a specific computer. That’s the sort of attack I’d rather see, since it means intelligence agencies can target developers in foreign countries on an individual basis (to get at the users of a particular developer’s apps) without compromising the entire ecosystem of Apple development.
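The mechanics of a compromised tool chain can be illustrated with a toy sketch (this is purely hypothetical Python for illustration, not anything from the leaked documents): a dishonest build step quietly injects extra code into whatever it is given, so the developer’s source files stay clean while every shipped build carries the payload.

```python
# Toy illustration of a compromised build tool. A real attack would
# modify a compiler or IDE; here the "build" step is just a function
# that turns source into a shippable form, appending an extra,
# attacker-chosen statement that never appears in the source files.

CLEAN_SOURCE = 'print("hello from the app")'

def honest_build(source: str) -> str:
    """An honest build step: the shipped code is exactly what was written."""
    return source

def compromised_build(source: str) -> str:
    """A compromised build step: silently appends a payload.
    The developer's source on disk is untouched and looks clean."""
    payload = 'print("payload: phoning home")'  # stand-in for attack code
    return source + "\n" + payload

# The developer inspects the source and sees nothing wrong...
assert "payload" not in CLEAN_SOURCE
# ...but the shipped build contains code the developer never wrote.
shipped = compromised_build(CLEAN_SOURCE)
print(shipped)
```

The point of the sketch is the invisibility: nothing the developer normally reviews ever contains the injected line, which is why tool-chain integrity matters so much.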
Even if the CIA were able to get to the canonical version of Xcode, every Xcode update could eliminate the malicious version (depending on how often the developer updates), unless the Software Update mechanism were also compromised.
In fact, software update mechanisms are another high-value target, and one the documents indicate government researchers have worked on. But as with Xcode, unless Apple’s entire Software Update infrastructure were compromised, the attack is limited to specific targets.
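One reason an update channel is hard to subvert at scale is that payloads can be checked against known-good digests before installation. Here is a minimal sketch of that kind of check using Python’s standard library; the payloads and reference value are made up for illustration, and real update systems go further by verifying a public-key signature so the reference value itself can’t be swapped.

```python
import hashlib
import hmac

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of an update payload."""
    return hashlib.sha256(data).hexdigest()

def verify_update(payload: bytes, expected_digest: str) -> bool:
    """Compare the payload's digest against a known-good value.
    hmac.compare_digest avoids leaking information via timing."""
    return hmac.compare_digest(sha256_digest(payload), expected_digest)

# Hypothetical payloads, for illustration only.
genuine = b"official update bytes"
tampered = b"official update bytes + implant"
reference = sha256_digest(genuine)  # value published by the vendor

print(verify_update(genuine, reference))   # True
print(verify_update(tampered, reference))  # False
```

A tampered payload changes the digest, so an attacker who controls only the delivery channel — and not the signing infrastructure — can’t silently substitute a malicious update.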
iOS encryption is so effective that law enforcement agencies want the government to compel Apple to provide a back door. It’s no wonder intelligence agencies want to crack it. According to The Intercept, researchers targeted the Group ID key used by all Apple devices sharing the same hardware. While this key could help an attacker install malicious software on the device, it can’t be used to recover data on the device if the user sets a good passcode.
There is some nuance here, in that a malicious system software update could potentially expose all the data on a device under some circumstances (say, if the user doesn’t know it was installed and keeps using the device). But cracking the GID key can’t, for example, allow law enforcement to extract data from a device without it first being unlocked. If that were possible, Apple (as holder of the GID key) could access any user’s data for any law enforcement agency, which the company has explicitly and vociferously claimed is impossible.
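The reason a good passcode still protects data even if hardware keys leak comes down to key derivation: the key protecting user data is derived from both the passcode and a device-unique secret, so neither alone is sufficient. Here is a rough sketch of that idea using PBKDF2 from Python’s standard library; the constants and names are illustrative and not Apple’s actual scheme, in which the entanglement happens in hardware.

```python
import hashlib

# Hypothetical device-unique secret (on iOS, a UID key fused into the
# hardware and never readable by software) -- illustrative only.
DEVICE_UID_KEY = b"\x13\x37" * 16

def derive_data_key(passcode: str, iterations: int = 100_000) -> bytes:
    """Derive a data-protection key from BOTH the passcode and the
    device secret. An attacker holding only device-level keys still
    has to brute-force the passcode on the device itself, and a
    strong passcode makes that impractical."""
    return hashlib.pbkdf2_hmac(
        "sha256",
        passcode.encode(),
        DEVICE_UID_KEY,  # used as the salt in this sketch
        iterations,
    )

key_a = derive_data_key("1234")
key_b = derive_data_key("correct horse battery staple")
print(key_a != key_b)  # different passcodes -> different keys
```

Because the derivation is deliberately slow and tied to a per-device secret, guessing has to happen one passcode at a time on the target hardware — which is why a long passphrase, unlike a four-digit PIN, holds up even against well-resourced attackers.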
In each case, the attacks are serious, but they also show just how hard it is to break into Apple devices, especially iOS devices. None of the attacks seem well suited for mass surveillance, since that would increase the potential for exposure. As the researchers themselves stated:
“The Intelligence Community (IC) is highly dependent on a very small number of security flaws, many of which are public, which Apple eventually patches.”
In short, the intelligence community wants to keep their attacks secret so they can be used in highly targeted ways, against specific people. I can live with that.
For Apple, It’s Personal — Since those 2012 presentations, Apple has taken even stronger stances to improve security and privacy. In a Macworld article last year, I mostly attributed this to economic reasons, but in ongoing background conversations with people at Apple, it has become clear they take privacy personally.
As an industry analyst, I tend to think of corporations as being driven purely by the bottom line, but that isn’t always the case. Even eliminating any altruistic motivations, Apple is a company where the employees’ entire lives revolve around both the creation and the use of their products and services.
I’m pretty sure the thought of being spied upon gives Apple executives and employees the same itchy feeling that the rest of us feel. Worse, I think they feel that they will have failed technically if their products turn out to be easily hacked — there’s little that developers hate more than being shown how their code could be subverted. No one wants to see the sort of condemnations Apple faced when a simple programming mistake led to a serious flaw in SSL.
This shows through in Apple’s product decisions. iCloud Keychain can be configured to be essentially NSA-proof (see “How to Protect Your iCloud Keychain from the NSA,” 1 March 2014). Apple even physically destroys the hardware smart cards needed to access key management appliances on the off chance that an employee is actually serving multiple masters.
Both FaceTime and Messages support end-to-end encryption. Apple could technically manipulate or subvert that process, but not easily, since doing so would appear to require a major architectural change. And the company’s stance on iOS encryption is clear.
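End-to-end encryption means the relay server never holds the keys. A toy Diffie-Hellman exchange in Python shows the core idea; the group parameters here are for illustration only, and real systems (including Apple’s, whose actual design differs) use much larger groups or elliptic curves.

```python
# Toy Diffie-Hellman key agreement. The server relays only the public
# values; the shared secret is computed independently at each end and
# never crosses the wire.
import secrets

P = 2**127 - 1  # a Mersenne prime; real deployments use 2048-bit+ groups
G = 5

# Each party picks a private value and publishes only G^x mod P.
alice_priv = secrets.randbelow(P - 2) + 1
bob_priv = secrets.randbelow(P - 2) + 1
alice_pub = pow(G, alice_priv, P)   # sent through the server
bob_pub = pow(G, bob_priv, P)       # sent through the server

# Each side combines its own private value with the other's public one.
alice_shared = pow(bob_pub, alice_priv, P)
bob_shared = pow(alice_pub, bob_priv, P)
print(alice_shared == bob_shared)  # True: same secret, never transmitted
```

The sketch also shows where the architectural risk lives: everything depends on the public values being genuine. A service that hands out the wrong public keys could insert itself in the middle, which is why subverting an end-to-end design quietly would require changing the key-distribution architecture rather than just flipping a server-side switch.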
Think about it from Apple’s position. For reasons beyond its control, Apple is now the biggest technology target in the world, from criminal organizations, foreign intelligence agencies, and even its own government. And what is being targeted are the tools and services that all Apple employees rely on every moment of every day. It’s hard to find greater motivation to double down on security and privacy, and the fact that even CIA researchers struggle to maintain offensive capabilities against Apple products shows that these security investments pay off.
That’s strong motivation, especially since it’s the job of spies to spy, even when it means finding flaws in products made at home, used abroad. Apple security isn’t perfect — new flaws will continue to be revealed, likely more often than any of us would like — but now that Apple has upped its security game to block even the most capable intelligence agencies in the world, the main beneficiary is the average user.