Apple’s security is, across the board, stronger now than at any time in the nearly eight years I’ve been researching and writing about the company’s products and services. Which is important, since Apple also faces more security challenges than at any time in its past.
When I first began writing about Apple security, the situation was bleak yet meaningless. Bleak thanks to a company that didn’t prioritize security and not only responded poorly to issues, but also left the platform wildly exposed to potential attacks. Meaningless, since said attacks never actually happened in the real world. As much as I may have fretted over the lack of security features or what the future might hold, my worries were trifles considering the absence of actual problems for users.
Outside of Mac OS X users, vanishingly few people used .Mac, the iPhone was brand new and locked down, iOS hadn’t yet been named, there was no iPad, iPods were still music players, and Apple products were almost universally banned from enterprises.
Today Apple is the second most popular brand in the world, trailing only Coca-Cola. It is also one of the most profitable companies in the world, with massive sales in smartphones, laptops, and tablets — a category Apple essentially defined — and a reported 85 million users of the iCloud online service. Never before have so many users relied so much on the security efforts of the company from Cupertino.
This popularity hasn’t been ignored by the media, security researchers, or criminals. The slightest Apple security or privacy glitch creates an instant media frenzy, the online equivalent of the local news telling parents that drinking water will poison their children. 2012 also saw the first widespread, albeit non-damaging, Mac malware. “BYOD” (bring your own device) is the biggest hot-button issue in enterprise security, and is predominantly driven by user demands that their organizations support iPhones, iPads, and Macs. I can no longer walk into a meeting with enterprise IT without at least some Macs or iPads in the room, officially supported or not.
This is a nearly complete reversal from just five years ago. And while Apple seems up to the task, it’s clear that the intensely private company is struggling to find the balance between its close-mouthed corporate culture and its new responsibilities as a global technology leader in the post-PC era. Let’s look at what Apple has done with OS X, iOS, and its cloud services.
OS X — It’s trite to say OS X 10.8 Mountain Lion is the most secure version of OS X yet, since it’s also the latest version. But Mountain Lion did introduce one significant new security feature with the potential to reduce the risk of widespread malware on Macs even as the platform increases in popularity.
Gatekeeper (which I described extensively in “Gatekeeper Slams the Door on Mac Malware Epidemics,” 16 February 2012) is a new feature in OS X designed to alter the economics of mass malware. In its default setting, Gatekeeper allows the user to run only those downloaded applications that come from the Mac App Store, or that are digitally signed by the developer using a key issued by Apple. Since most widespread malware today relies on tricking the user into installing and running unapproved applications, this presents a serious roadblock to attackers.
Applications in the Mac App Store are now required to implement sandboxing and undergo review by Apple. This reduces both the chance of a malicious app making it into the Mac App Store, and the potential harm of either a malicious app or one compromised by an attacker. Sandboxing on other platforms (like iOS) has been shown to be an effective technique of increasing the cost of attacks, even if the sandbox is broken. Both Mac App Store applications and those independently distributed and signed with an Apple-issued Developer ID are digitally signed, which helps the operating system detect if the application was tampered with; again, adding yet another roadblock attackers need to circumvent.
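Sandboxing on the Mac is declared at build time through an entitlements property list that is baked into the app's signature. A minimal sketch might look like the following (the capability keys beyond the sandbox opt-in are illustrative choices, not a complete or required set):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Opt the app into App Sandbox; required for Mac App Store apps. -->
    <key>com.apple.security.app-sandbox</key>
    <true/>
    <!-- Grant only the narrow capabilities this app actually needs. -->
    <key>com.apple.security.files.user-selected.read-write</key>
    <true/>
    <key>com.apple.security.network.client</key>
    <true/>
</dict>
</plist>
```

Anything not granted here — arbitrary file access, acting as a network server, controlling other apps — is denied at the operating system level, which is what limits the damage a compromised app can do.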
Mandatory sandboxing hasn’t been popular among developers, but as I discussed in “Answering Questions about Sandboxing, Gatekeeper, and the Mac App Store” (25 June 2012), it is an incredibly important security tool for protecting users, even if we lose some functionality. Unfortunately, since sandboxing is mandatory in the Mac App Store, it has also become caught up in otherwise unrelated issues with distributing applications through Apple that complicate the lives of developers.
While Gatekeeper certainly won’t prevent all infections, and I’m sure there is still a contingent of Mac users who will be tricked into installing malware, it disrupts the economics of fooling users into installing malicious software. Today’s Internet criminals are in it for the money; tools like Gatekeeper dramatically increase the cost of running a malware campaign, reduce their profits, and make the Mac a less appealing target.
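That economic argument can be made concrete with a toy back-of-the-envelope model. Every number below is invented purely for illustration; the point is only that shrinking the pool of users who can be tricked, while raising campaign costs, flips the expected profit negative:

```python
def expected_profit(targets, infection_rate, revenue_per_victim, campaign_cost):
    """Naive expected profit of a mass-malware campaign (toy model)."""
    return targets * infection_rate * revenue_per_victim - campaign_cost

# Hypothetical pre-Gatekeeper campaign: tricking users is cheap and effective.
before = expected_profit(1_000_000, 0.01, 5.0, 20_000)    # 30,000.0

# Hypothetical post-Gatekeeper campaign: far fewer users will run an unsigned
# download, and evading signing checks costs the attacker real money.
after = expected_profit(1_000_000, 0.001, 5.0, 150_000)   # -145,000.0

print(before, after)
```

The real-world numbers are unknowable, but the structure of the argument is exactly this: Gatekeeper attacks the multiplication, not any single infection.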
Gatekeeper is only one of a number of security controls built into OS X. FileVault 2, introduced in 10.7 Lion, transparently encrypts hard drives to protect data in the event of physical loss. It’s a massive improvement over the original FileVault, so much so that Apple probably should have used another name due to all the negative connotations associated with the previous version. Mountain Lion also extended FileVault 2 to cover external drives, including Time Machine backups (with a little extra work). FileVault can even be combined with Find My Mac to wipe your hard drive remotely in case of loss, although this can be dangerous if anyone accesses your iCloud account and you lack current backups (see “Watch TidBITS Presents “Protecting Your Digital Life”,” 22 August 2012).
In terms of the core operating system, Mountain Lion extends ASLR (Address Space Layout Randomization), a powerful tool to limit the ability of attackers to exploit vulnerabilities, to the kernel itself. Mountain Lion also added some additional memory exploitation protection techniques for those on current processors, like the Core i5 and i7, included in new Macs. These processors include extra hooks that operating system vendors like Apple can leverage to further complicate an attacker’s efforts.
In 2012 Apple also showed a clear ability to make difficult decisions in favor of protecting users from the most common forms of attack. In response to the Flashback malware infection earlier in the year (see “How to Detect and Protect Against Updated Flashback Malware,” 5 April 2012), Apple began tightening the screws on the two most common sources of Web browser based malware infections: Java and Adobe Flash.
Java is an extremely common source of infections across Macs, Windows PCs, and any other computing device it runs on. Java applets are easy to embed in Web pages and tend to run, by default, in the Web browser. Java is also very difficult to sandbox off from the rest of the operating system. Thus a good Java exploit dropped on a Web page may easily infect most visitors (though generally on a platform-specific basis). This was especially pernicious on Macs since Apple did a poor job of maintaining its own version of Java, often letting it go unpatched for weeks or months after the official version was fixed and thus giving potential attackers a roadmap to success.
When Java attacks like Flashback started to increase, Apple took three key actions via a series of updates. First, Apple disabled Java from running not only in Safari, but in any major Web browser on your Mac unless you explicitly turned it on. Even then, OS X would disable Java again after 90 days if you hadn’t used any Java applets. Second, Apple stopped installing Java on Macs by default (a change that actually predated Flashback), although it’s fairly common for users to add it back. Third, Apple handed responsibility for updating Java on the Mac back to Oracle, so Mac users now receive patches at the same time as all other platforms. In short, Java is now uninstalled by default, blocked in your browser unless you explicitly enable and use it, and patched on time.
Apple then extended similar protections to Adobe Flash, another common source of browser-based vulnerabilities. Although Flash hasn’t been installed by default for years, it was nearly universally installed by users, and rarely updated. Apple and Adobe worked together (sort of) to address this situation. Recent versions of Flash include a self-updater to ensure users are running the latest, patched versions (see “Flash Player 10.3.181.26,” 23 June 2011). Since many Mac users weren’t running the self-updating versions, an Apple security update disabled any version of Flash that lacked the self-updating function, essentially forcing users to update.
It’s hard to overstate the effectiveness of these combined improvements. Our Macs are well protected against physical loss thanks to FileVault. The combination of Gatekeeper, the Mac App Store, code signing with Developer IDs, and sandboxing dramatically raises the cost of attacking Macs by tricking users into installing malware. Constant improvements in inherent operating system security continue to reduce the chances of attackers exploiting vulnerabilities. And by reducing exposure to Java and Flash in the Web browser, Apple has made attacking Mac users through Web vulnerabilities materially more expensive.
Apple showed its hand with Mountain Lion and ongoing security updates — the company is focused not only on hardening the operating system, but addressing common user behaviors that enable attackers. This combination won’t stop every attack, nor even every widespread attack, but it is hard to imagine Macs ever suffering an ongoing malware epidemic even as they increase market share. The key takeaway is that all these technologies are aimed at attacking the economics of malware.
iOS — There’s a short version and a long version of the iOS security narrative. The short version? iPads and iPhones are the most secure consumer computing devices available. They have never suffered any widespread malware, exploits, or successful attacks in their entire history. None. Zero. Zip.
The long version? iOS is hardly immune from security issues. There is no perfect security, and iOS suffers vulnerabilities just like every other platform. iOS 6 itself contained well over 100 fixes for various security flaws. But iOS 5 was difficult for attackers to exploit, and iOS and Apple’s latest processors (the A6 and A6X chips) continue to add ever more security hardening. The best indicator of iOS security is the availability of jailbreaks, since every jailbreak is technically a security exploit. As of this writing, there are no jailbreaks available for iOS 6 on the iPhone 5 or fourth-generation iPad (using the A6 and A6X processors), and only limited (tethered) jailbreaks for the iPhone 4S, the iPad 2, and the third-generation iPad (which use the A5 processor).
Another strong indicator of iOS security is that digital forensics firms, those who produce the software used by law enforcement to recover data from mobile phones and computers, are as yet unable to crack data protected by the highest level of iOS encryption enabled by default (for email and participating apps) when you set a good passcode.
iOS is highly restrictive, allowing only apps from the App Store, extensively sandboxing applications from each other, nearly eliminating shared storage, code-signing all apps to limit tampering, and disallowing background applications. All this is on top of a hardened platform that makes wide use of security features built into the underlying hardware.
The main security enhancements in iOS 6 were the addition of kernel ASLR and other memory protections similar to those in OS X, which isn’t surprising since the two operating systems still share much of their code base. iOS 6 also added a series of enhanced privacy protections. Apple stopped making the unique device identifier (UDID) available to apps in order to limit user tracking by independent developers. Also, users must now explicitly approve access to locations, contacts, calendar entries, and photos on a per-app basis, and can revoke those rights at any time through the Settings app. This move came in direct response to widespread reports of some application developers accessing private data that their applications didn’t technically need.
Apple also added some additional features to support enterprise deployments of iOS, such as a global proxy setting to manage Internet connections, blocking of iMessage, Passbook, Game Center, Photo Stream sharing, and the iBookstore (in addition to existing application and feature limits), time-limited configuration profiles, and improved certificate and profile management. I spend a lot of time talking with enterprises about iOS security, and their main concerns are supporting employee personal devices while still protecting enterprise data, not malware or other external attacks.
As strong as iOS security is, we know it isn’t perfect. Vulnerabilities are discovered, new jailbreaks are created, and rumors in the security world are that some governments have paid hundreds of thousands of dollars for single exploits to enable them to hack iPhones and iPads remotely. Strong encryption (called Data Protection) covers only email messages and attachments by default, and other apps that enable the API, potentially leaving large amounts of data recoverable if you lose the device. Also, shorter passcodes like the default four digits can still be broken via brute force attacks.
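The brute-force point is easy to quantify. iOS derives the encryption key from the passcode on the device itself, and Apple’s iOS security documentation of this era cited roughly 80 milliseconds per attempt; treating that figure as an assumption, the worst-case search time is simple arithmetic:

```python
ATTEMPT_SECONDS = 0.08  # assumed per-try cost of on-device key derivation

def worst_case_seconds(alphabet_size, length):
    """Time to try every possible passcode at the assumed rate."""
    return alphabet_size ** length * ATTEMPT_SECONDS

four_digit = worst_case_seconds(10, 4)   # 10,000 codes: roughly 13 minutes
six_alnum = worst_case_seconds(36, 6)    # ~2.2 billion codes: several years

print(four_digit / 60, "minutes;", six_alnum / (3600 * 24 * 365), "years")
```

This is why a four-digit passcode mostly deters casual snooping, while even a modest alphanumeric passcode pushes a brute-force attack from minutes into years.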
None of these should be concerning to the average user not facing a hostile government (which, we admit, does happen in parts of the world). Government-level exploits are rare, expensive, and not wasted on the average user. If you lose an iOS device, the odds are against the person finding it trying to steal your data — he or she is far more interested in selling the device. If you are concerned with law enforcement or a government peeking at your information, using a longer passcode and Data Protection-enabled apps will probably thwart their efforts.
As for the “Great Mobile Malware Epidemic” constantly predicted every year by various publications and security companies? It doesn’t seem to be an issue for iOS.
iCloud and the iTunes Store — Discussing Apple’s online services — iCloud and the iTunes Store — is far more difficult, thanks to a complete lack of transparency from Cupertino. Unlike OS X and iOS, security updates are handled on Apple’s servers with no public notification. Apple makes no public statements regarding the security particulars of either platform, and what little information is publicly available is little more than marketing statements. Within those limitations, here is what we can infer.
Apple states that all iCloud communications are encrypted, and all data is encrypted when stored on its servers, with the exception of Mail and Notes, which are encrypted only over the network (stored data encryption is rare for online mail services). This data is protected using keys Apple manages, and thus Apple employees can technically see your content. The easiest way to determine if any online service can access your data is to see if you can access the content through a Web browser. Unless the company writes complex code to encrypt and decrypt within your browser (which is extremely rare outside password-related services like LastPass), that means its Web server, and thus its employees, can access the data.
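The key-custody distinction can be sketched with a deliberately toy example (the XOR keystream below is NOT real cryptography; it exists only to show who can decrypt what):

```python
import hashlib

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """Toy XOR keystream 'cipher' -- illustration only, not real crypto."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    # XOR is its own inverse, so the same call also decrypts.
    return bytes(a ^ b for a, b in zip(data, stream))

message = b"private note synced to the cloud"

# Provider-managed keys (the iCloud model described above): the service
# stores both the ciphertext and the key, so its staff can decrypt at will.
provider_key = b"key held on the provider's servers"
stored = toy_encrypt(provider_key, message)
assert toy_encrypt(provider_key, stored) == message  # provider can read it

# Client-held keys would mean the service stores only ciphertext it cannot
# decrypt -- which is why plaintext access via a Web browser implies the
# provider, not just the client, holds the keys.
```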
An important aspect to iCloud security is that your iOS device backups in the cloud are potentially accessible to Apple, or to anyone with a warrant or subpoena. If you’ve ever restored from iCloud, you probably noticed that you have to re-enter most of your usernames and passwords, which you don’t need to do if you restore from a local and encrypted backup. That’s because Apple scrubs the keychain for unencrypted backups, either local or iCloud.
iCloud data is not stored encrypted on your Macs or iOS devices, unless you turn on some other additional encryption. The network connections to Apple are encrypted, and the data encrypted on Apple’s servers, but it is accessible by Apple.
Apple’s security guide for iOS states that both iMessage and FaceTime support “client-to-client” encryption. The most common interpretation of this means your messages are encrypted even from Apple. But since Apple manages the encryption certificates and keys, there’s always the chance that an Apple employee could perform a “Man in the Middle” attack to sniff your data. My belief is that, although Apple has this capability, at worst it is something only accessible to and used by law enforcement, like any other wire tap, and we have no idea if that has ever occurred.
There isn’t much more to say about iCloud. Apple doesn’t talk about security incidents, but I’m unaware of any that have been reported publicly. Apple also doesn’t discuss iCloud security controls, such as how the company restricts access by employees to your data, so we have no idea how good or bad those practices are. On the upside, although Apple’s privacy policies technically allow the company to look at or share your private data, we are unaware of Apple mining your data for other purposes, as Google, Facebook, and other advertising-supported services do.
The iTunes Store and iTunes in the Cloud also use encrypted communications, but obviously handle less-sensitive information. The main security concern with them is credit card data, and there have been reports of illegal purchases, phishing attacks, and other financial crimes associated with iTunes Store (including the App Store and Mac App Store) accounts. Earlier this year, Apple implemented enhanced security, including sending you email when new devices make purchases, notifying you via email of account changes, and requesting account verifications at least once a year. Also, Apple does mine your iTunes usage for Genius and ratings, but, again, we do not believe any user-specific data is ever shared or used for advertising.
It’s hard to know what’s really going on, since Apple doesn’t make public statements about the reports of iTunes Store attacks, and the reports themselves often lack the consistency that would help get at the root of the problem. Whether there even is a systemic problem or flaw, we honestly don’t know.
Unfortunately, silence fosters fear when it comes to active security incidents. Once incidents become public enough, users rightfully worry and look to Apple for answers. But this is a lesson Apple will learn for itself, and adapt to in its own way. While I don’t ever expect to see Apple respond as quickly and publicly as a company like Microsoft, I do see early signs that Apple is taking security communications more seriously.
Four events from the last two years show this gradual change. During the Lion development process Apple invited select security researchers to participate in the beta testing for free, instead of hoping they might join Apple’s official developer program. Before the release of Mountain Lion, Apple pre-briefed a security researcher (yes, me) under NDA so the rest of the press would have a security expert to discuss Gatekeeper with. Apple also, for the first time, released a detailed iOS security guide discussing the operating system internals. Finally, at this year’s Black Hat security conference Apple delivered a presentation for the very first time, talking about iOS security but refusing to take questions.
Culture is difficult to evaluate. It lacks the objective indicators of operating system or hardware updates, especially when much of the discussion takes place in private or under NDA. On the objective side, I see Apple adding more security features, giving them a more prominent role in the operating systems, and responding more quickly and directly to security issues. Subjectively, the company is still secretive, but far more responsive and communicative than in the past.
It’s clear Apple recognizes that security plays an essential role in maintaining the growth of the company. To that end, the company has not only been more responsive, albeit in its own way, but has made important long-term investments in the security of the Apple ecosystem. These efforts paid off in 2012, and we are likely to see them continue to pay off for years to come.