Examining Apple’s Security Efforts in 2012
Apple’s security is, across the board, stronger now than at any time in the nearly eight years I’ve been researching and writing about the company’s products and services. Which is important, since Apple also faces more security challenges than at any time in its past.
When I first began writing about Apple security, the situation was bleak yet meaningless. Bleak thanks to a company that didn’t prioritize security and not only responded poorly to issues, but also left the platform wildly exposed to potential attacks. Meaningless, since said attacks never actually happened in the real world. As much as I may have fretted over the lack of security features or what the future might hold, my worries were trifles considering the absence of actual problems for users.
Outside of Mac OS X users, vanishingly few people used .Mac, the iPhone was brand new and locked down, iOS hadn’t yet been named, there was no iPad, iPods were still music players, and Apple products were almost universally banned from enterprises.
Today Apple is the second most popular brand in the world, trailing only Coca-Cola. It is also one of the most profitable companies in the world, with massive sales in smartphones, laptops, and tablets — a category Apple essentially defined — and a reported 85 million users of the iCloud online service. Never before have so many users relied so much on the security efforts of the company from Cupertino.
This popularity hasn’t been ignored by the media, security researchers, or criminals. The slightest Apple security or privacy glitch creates an instant media frenzy, the online equivalent of the local news telling parents that drinking water will poison their children. 2012 also saw the first widespread, albeit non-damaging, Mac malware. “BYOD” (bring your own device) is the biggest hot-button issue in enterprise security, and is predominantly driven by user demands that their organizations support iPhones, iPads, and Macs. I can no longer walk into a meeting with enterprise IT without at least some Macs or iPads in the room, officially supported or not.
This is a nearly complete reversal from just five years ago. And while Apple seems up to the task, it’s clear that the intensely private company is struggling to find the balance between its close-mouthed corporate culture and its new responsibilities as a global technology leader in the post-PC era. Let’s look at what Apple has done with OS X, iOS, and its cloud services.
OS X — It’s trite to say OS X 10.8 Mountain Lion is the most secure version of OS X yet, since it’s also the latest version. But Mountain Lion did introduce one significant new security feature with the potential to reduce the risk of widespread malware on Macs even as the platform increases in popularity.
Gatekeeper (which I described extensively in “Gatekeeper Slams the Door on Mac Malware Epidemics,” 16 February 2012) is a new feature in OS X designed to alter the economics of mass malware. In its default setting, Gatekeeper allows the user to run only those downloaded applications that come from the Mac App Store, or that are digitally signed by the developer using a key issued by Apple. Since most widespread malware today relies on tricking the user into installing and running unapproved applications, this presents a serious roadblock to attackers.
Applications in the Mac App Store are now required to implement sandboxing and undergo review by Apple. This reduces both the chance of a malicious app making it into the Mac App Store, and the potential harm of either a malicious app or one compromised by an attacker. Sandboxing on other platforms (like iOS) has been shown to be an effective technique of increasing the cost of attacks, even if the sandbox is broken. Both Mac App Store applications and those independently distributed and signed with an Apple-issued Developer ID are digitally signed, which helps the operating system detect if the application was tampered with; again, adding yet another roadblock attackers need to circumvent.
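The tamper-detection property of code signing can be illustrated with a simplified sketch. Real code signing uses asymmetric cryptography and a chain of trust back to Apple; this hypothetical example shows only the integrity-check half of the idea, using a plain SHA-256 digest as a stand-in for the signature:

```python
import hashlib

def sign(app_bytes: bytes) -> str:
    # Stand-in for a code signature: a digest recorded at signing time.
    return hashlib.sha256(app_bytes).hexdigest()

def verify(app_bytes: bytes, recorded_digest: str) -> bool:
    # At launch, the OS recomputes the digest; any change to the app
    # (even a single byte) produces a different digest.
    return hashlib.sha256(app_bytes).hexdigest() == recorded_digest

original = b"legitimate application code"
digest = sign(original)

print(verify(original, digest))                           # untouched app: True
print(verify(original + b"injected payload", digest))     # tampered app: False
```

An attacker who modifies a signed application this way must either strip the signature (which Gatekeeper notices) or forge a new one (which requires an Apple-issued key), which is the extra roadblock described above.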
Mandatory sandboxing hasn’t been popular among developers, but as I discussed in “Answering Questions about Sandboxing, Gatekeeper, and the Mac App Store” (25 June 2012), it is an incredibly important security tool for protecting users, even if we lose some functionality. Unfortunately, because sandboxing is mandatory in the Mac App Store, it has also been caught up in otherwise-unrelated issues with distributing applications through Apple that complicate the lives of developers.
While Gatekeeper certainly won’t prevent all infections, and I’m sure there is still a contingent of Mac users who will be tricked into installing malware, it disrupts the economics of fooling users into installing malicious software. Today’s Internet criminals are in it for the money; tools like Gatekeeper dramatically increase the cost of running a malware campaign, reduce their profits, and make the Mac a less appealing target.
Gatekeeper is only one of a number of security controls built into OS X. FileVault 2, introduced in 10.7 Lion, transparently encrypts hard drives to protect data in the event of physical loss. It’s a massive improvement over the original FileVault, so much so that Apple probably should have used another name due to all the negative connotations associated with the previous version. Mountain Lion also extended FileVault 2 to cover external drives, including Time Machine backups (with a little extra work). FileVault can even be combined with Find My Mac to wipe your hard drive remotely in case of loss, although this can be dangerous if anyone accesses your iCloud account and you lack current backups (see “Watch TidBITS Presents “Protecting Your Digital Life”,” 22 August 2012).
In terms of the core operating system, Mountain Lion extends ASLR (Address Space Layout Randomization), a powerful tool to limit the ability of attackers to exploit vulnerabilities, to the kernel itself. Mountain Lion also added some additional memory exploitation protection techniques for those on current processors, like the Core i5 and i7, included in new Macs. These processors include extra hooks that operating system vendors like Apple can leverage to further complicate an attacker’s efforts.
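The effect of ASLR is easy to observe from any language: each new process sees its memory laid out at a different randomized base, so an attacker can't hard-code the address of anything. This small sketch launches two fresh interpreter processes and compares the heap address of the first object each allocates (on systems with ASLR enabled, the addresses almost always differ between runs):

```python
import subprocess
import sys

def heap_address() -> int:
    # Each call spawns a fresh process; with ASLR active, the heap is
    # mapped at a different randomized base address every time.
    result = subprocess.run(
        [sys.executable, "-c", "print(id(object()))"],
        capture_output=True, text=True, check=True,
    )
    return int(result.stdout)

a, b = heap_address(), heap_address()
print(hex(a), hex(b))  # two different addresses on ASLR-enabled systems
```

Because the attacker can no longer predict where code and data live, a memory-corruption bug that would otherwise be reliably exploitable becomes a guessing game, which is exactly the cost increase Apple is after.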
In 2012 Apple also showed a clear ability to make difficult decisions in favor of protecting users from the most common forms of attack. In response to the Flashback malware infection earlier in the year (see “How to Detect and Protect Against Updated Flashback Malware,” 5 April 2012), Apple began tightening the screws on the two most common sources of Web browser based malware infections: Java and Adobe Flash.
Java is an extremely common source of infections across Macs, Windows PCs, and any other computing device it runs on. Java applets are easy to embed in Web pages and tend to run, by default, in the Web browser. Java is also very difficult to sandbox off from the rest of the operating system. Thus a good Java exploit dropped on a Web page may easily infect most visitors (though generally on a platform-specific basis). This was especially pernicious on Macs since Apple did a poor job of maintaining its own version of Java, often letting it go unpatched for weeks or months after the official version was fixed and thus giving potential attackers a roadmap to success.
When Java attacks like Flashback started to increase, Apple performed three key actions via a series of updates. First, Apple disabled Java from running not only in Safari, but in any major Web browser on your Mac unless you explicitly turned it on. Even then, OS X would disable Java again after 90 days if you didn’t use any Java applets. Second, Apple stopped installing Java on Macs by default at all (a change that actually predates Flashback), although it’s fairly common for users to add it back. Third, Apple handed the responsibility for updating Java on the Mac back to Oracle, so Mac users now receive patches at the same time as all other platforms. Java is now uninstalled by default, blocked in your browser unless you explicitly enable and use it, and patched on time.
Apple then extended similar protections to Adobe Flash, another common source of browser-based vulnerabilities. Although Flash hasn’t been installed by default for years, it was nearly universally installed by users, and rarely updated. Apple and Adobe worked together (sort of) to address this situation. Recent versions of Flash include a self-updater to ensure users run the latest, patched versions (see “Flash Player 10.3.181.26,” 23 June 2011). Since many Mac users weren’t using the self-updating versions, an Apple security update disabled any version of Flash that lacked the self-updating function, essentially forcing users to update.
It’s hard to overstate the effectiveness of these combined improvements. Our Macs are well-protected against physical loss thanks to FileVault. The combination of Gatekeeper, the Mac App Store, code signing with Developer IDs, and sandboxing dramatically raises the cost of attacking Macs by tricking users into installing malware. The constant improvements in inherent operating system security continue to reduce the chances of attackers exploiting vulnerabilities. And by reducing the exposure to Java and Flash in the Web browser, the cost to attack Mac users through Web vulnerabilities is materially higher.
Apple showed its hand with Mountain Lion and ongoing security updates — the company is focused not only on hardening the operating system, but addressing common user behaviors that enable attackers. This combination won’t stop every attack, nor even every widespread attack, but it is hard to imagine Macs ever suffering an ongoing malware epidemic even as they increase market share. The key takeaway is that all these technologies are aimed at attacking the economics of malware.
iOS — There’s a short version and a long version of the iOS security narrative. The short version? iPads and iPhones are the most secure consumer computing devices available. They have never suffered any widespread malware, exploits, or successful attacks in their entire history. None. Zero. Zip.
The long version? iOS is hardly immune from security issues. There is no perfect security, and iOS suffers vulnerabilities just like every other platform. iOS 6 itself contained well over 100 fixes for various security flaws. But iOS 5 was difficult for attackers to exploit, and iOS and Apple’s latest processors (the A6 and A6X chips) continue to add ever more security hardening. The best indicator of iOS security is the availability of jailbreaks, since every jailbreak is technically a security exploit. As of this writing, there are no jailbreaks available for iOS 6 on the iPhone 5 or fourth-generation iPad (using the A6 and A6X processors), and only limited (tethered) jailbreaks for the iPhone 4S, the iPad 2, and the third-generation iPad (which use the A5 processor).
Another strong indicator of iOS security is that digital forensics firms, those who produce the software used by law enforcement to recover data from mobile phones and computers, are as yet unable to crack data protected by the highest level of iOS encryption enabled by default (for email and participating apps) when you set a good passcode.
iOS is highly restrictive, allowing only apps from the App Store, extensively sandboxing applications from each other, nearly eliminating shared storage, code-signing all apps to limit tampering, and disallowing background applications. All this is on top of a hardened platform that makes wide use of security features built into the underlying hardware.
The main security enhancements in iOS 6 were the addition of kernel ASLR and other memory protections similar to those in OS X, which isn’t surprising since the two operating systems still share much of their code base. iOS 6 also added a series of enhanced privacy protections. Apple stopped making the unique device identifier (UDID) available to apps, to limit user tracking by independent developers. Also, users must now explicitly approve access to locations, contacts, calendar entries, and photos on a per-app basis, and can revoke those rights at any time through the Settings app. This move came in direct response to widespread reports of abuse by some application developers accessing private data that their applications didn’t technically need.
Apple also added some additional features to support enterprise deployments of iOS, such as a global proxy setting to manage Internet connections, blocking of iMessage, Passbook, Game Center, Photo Stream sharing, and the iBookstore (in addition to existing application and feature limits), time-limited configuration profiles, and improved certificate and profile management. I spend a lot of time talking with enterprises about iOS security, and their main concerns are supporting employee personal devices while still protecting enterprise data, not malware or other external attacks.
As strong as iOS security is, we know it isn’t perfect. Vulnerabilities are discovered, new jailbreaks are created, and rumors in the security world are that some governments have paid hundreds of thousands of dollars for single exploits to enable them to hack iPhones and iPads remotely. Strong encryption (called Data Protection) covers only email messages and attachments by default, and other apps that enable the API, potentially leaving large amounts of data recoverable if you lose the device. Also, shorter passcodes like the default four digits can still be broken via brute force attacks.
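The brute-force risk for short passcodes is easy to quantify. Apple’s iOS security guide describes the key-derivation process as calibrated to take roughly 80 milliseconds per passcode attempt on the device itself; taking that figure as an assumption, a little arithmetic shows why passcode length matters so much:

```python
# Assumption: ~80 ms per guess, the per-attempt delay Apple's on-device
# key derivation imposes (per Apple's published iOS security guide).
SECONDS_PER_GUESS = 0.08

def worst_case_seconds(keyspace: int) -> float:
    # Time to exhaustively try every possible passcode on the device.
    return keyspace * SECONDS_PER_GUESS

four_digit = worst_case_seconds(10 ** 4)   # default 4-digit PIN
six_alnum = worst_case_seconds(36 ** 6)    # 6 chars, lowercase + digits

print(f"4-digit PIN: {four_digit / 60:.1f} minutes")            # 13.3 minutes
print(f"6-char alphanumeric: {six_alnum / 31536000:.1f} years")  # 5.5 years
```

A four-digit passcode falls in minutes, while even a modest six-character alphanumeric passcode pushes a worst-case exhaustive search out to years, which is why the advice below to use a longer passcode is meaningful and not just theater.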
None of these should be concerning to the average user not facing a hostile government (which, we admit, does happen in parts of the world). Government-level exploits are rare, expensive, and not wasted on the average user. If you lose an iOS device, the odds are against the person finding it trying to steal your data — he or she is far more interested in selling the device. If you are concerned with law enforcement or a government peeking at your information, using a longer passcode and Data Protection-enabled apps will probably thwart their efforts.
As for the “Great Mobile Malware Epidemic” constantly predicted every year by various publications and security companies? It doesn’t seem to be an issue for iOS.
iCloud and the iTunes Store — Discussing Apple’s online services — iCloud and the iTunes Store — is far more difficult, thanks to a complete lack of transparency from Cupertino. Unlike OS X and iOS, security updates are handled on Apple’s servers with no public notification. Apple makes no public statements regarding the security particulars of either platform, and what little information is publicly available is little more than marketing statements. Within those limitations, here is what we can infer.
Apple states that all iCloud communications are encrypted, and all data is encrypted when stored on its servers, with the exception of Mail and Notes, which are encrypted only over the network (stored data encryption is rare for online mail services). This data is protected using keys Apple manages, and thus Apple employees can technically see your content. The easiest way to determine if any online service can access your data is to see if you can access the content through a Web browser. Unless the company writes complex code to encrypt and decrypt within your browser (which is extremely rare outside password-related services like LastPass), that means its Web server, and thus its employees, can access the data.
An important aspect to iCloud security is that your iOS device backups in the cloud are potentially accessible to Apple, or to anyone with a warrant or subpoena. If you’ve ever restored from iCloud, you probably noticed that you have to re-enter most of your usernames and passwords, which you don’t need to do if you restore from a local and encrypted backup. That’s because Apple scrubs the keychain for unencrypted backups, either local or iCloud.
iCloud data is not stored encrypted on your Macs or iOS devices, unless you turn on some other additional encryption. The network connections to Apple are encrypted, and the data encrypted on Apple’s servers, but it is accessible by Apple.
Apple’s security guide for iOS states that both iMessage and FaceTime support “client-to-client” encryption. The most common interpretation of this means your messages are encrypted even from Apple. But since Apple manages the encryption certificates and keys, there’s always the chance that an Apple employee could perform a “Man in the Middle” attack to sniff your data. My belief is that, although Apple has this capability, at worst it is something only accessible to and used by law enforcement, like any other wire tap, and we have no idea if that has ever occurred.
There isn’t much more to say about iCloud. Apple doesn’t talk about security incidents, but I’m unaware of any that have been reported publicly. Apple also doesn’t discuss iCloud security controls, such as how the company restricts access by employees to your data, so we have no idea how good or bad those practices are. On the upside, although Apple’s privacy policies technically allow the company to look at or share your private data, we are unaware of Apple mining your data for other purposes, as Google, Facebook, and other advertising-supported services do.
The iTunes Store and iTunes in the Cloud also use encrypted communications, but obviously handle less-sensitive information. The main security concern with them is credit card data, and there have been reports of illegal purchases, phishing attacks, and other financial crimes associated with iTunes Store (including the App Store and Mac App Store) accounts. Earlier this year, Apple implemented enhanced security, including sending you email when new devices make purchases, notifying you via email of account changes, and requesting account verifications at least once a year. Also, Apple does mine your iTunes usage for Genius and ratings, but, again, we do not believe any user-specific data is ever shared or used for advertising.
It’s hard to know what’s really going on since Apple doesn’t make public statements about the reports of iTunes Store attacks, and there is often a distinct lack of consistency that might support getting at the root of the problem. If there even is a problem or flaw, we honestly don’t know.
Unfortunately, silence fosters fear when it comes to active security incidents. Once incidents become public enough, users rightfully worry and look to Apple for answers. But this is a lesson Apple will learn for itself, and adapt to in its own way. While I don’t ever expect to see Apple respond as quickly and publicly as a company like Microsoft, I do see early signs that Apple is taking security communications more seriously.
Four events from the last two years show this gradual change. During the Lion development process Apple invited select security researchers to participate in the beta testing for free, instead of hoping they might join Apple’s official developer program. Before the release of Mountain Lion, Apple pre-briefed a security researcher (yes, me) under NDA so the rest of the press would have a security expert to discuss Gatekeeper with. Apple also, for the first time, released a detailed iOS security guide discussing the operating system internals. Finally, at this year’s Black Hat security conference Apple delivered a presentation for the very first time, talking about iOS security but refusing to take questions.
Culture is difficult to evaluate. It lacks the objective indicators of operating system or hardware updates, especially when much of the discussion takes place in private or under NDA. On the objective side, I see Apple adding more security features, giving them a more prominent role in the operating systems, and responding more quickly and directly to security issues. Subjectively, the company is still secretive, but far more responsive and communicative than in the past.
It’s clear Apple recognizes that security plays an essential role in maintaining the growth of the company. To that end, the company has not only been more responsive, albeit in its own way, but has made important long-term investments in the security of the Apple ecosystem. These efforts paid off in 2012, and we are likely to see them continue to pay off for years to come.
Great post, Rich: fair and well stated. It's one that I'll direct people to as a corrective whenever I encounter a breathless "Oh, no, Apple security sux and we're all going to DIE!!" linkbait story.
I disagree. All weaknesses on all Apple software products (including the latest ones) suffer the same fate: They have shared kernel and user memory spaces that are vulnerable to modern ROP/BO attacks; They do not have Shatter attack protection; They do not have a GINA or modern Credential Provider.
Worst of all: when the data is at rest (i.e. when it should be LEAST vulnerable), all Apple devices can be made to cough up most or all data that resides on their SSDs or NAND flash -- let alone DRAM.
All of these other protections you mention matter little in light of the entry points and associated attack surface that I have described above.
OS X and iOS are not SELinux. They are not GRSecurity. They are not Windows Server 2012. Apple has not come a long way.
We have had many of these discussions in person and we clearly disagree. I'll leave it at that, and we can talk about it more next time we see each other.
What still worries me (and it is hard to know of improvement in this area) are the potential vulnerabilities in high-level Apple software included in Mac OS X or iOS, typically applications. Things like the Safari RSS vulnerability or ARDAgent loophole (linking to Foundation from a suid binary is a bad idea, who knew?), which have shown Apple has not necessarily been good at preventing vulnerabilities from being introduced in its own software development. Fundamental Unix security and core OS security features like ASLR are good, mind, but they are not enough.
This is doubly worrying as Apple is clearly not eating its own dogfood with sandboxing, or at least not as much as they are asking third-party developers to do so.
I agree: not using sandboxing for their own App Store apps is a big issue, and I should have dinged them on it.
Vulnerabilities are always a problem. For Apple, this is especially true of the open source components they use in the OS that may get patched before Apple patches.
My focus in this piece is that Apple, like Microsoft (which is better at it), is going after disrupting malware economics. This won't prevent smaller and targeted attacks, but *will* materially reduce the risk for the entire user base.
Wow, revisionist history. So Apple invented the laptop, the smartphone and the tablet? Really? I don't think so. And they don't "dominate in sales". Maybe they dominate in sales DOLLARS, but not sales counts.
Those comments turned me off right away.
He said that Apple defined the tablet category. I think it would be pretty hard to argue otherwise. If you want to disagree, at least disagree with what Rich actually said.
Oh, and as for sales, I thought the whole point of selling stuff was to make money. I think most businesses (the ones that stay in business, anyway), would consider the dollar value of sales to be kind of important.
I thought at some point (iOS 5 maybe?) Apple changed to full disk encryption when a passcode is set. That's why it could erase so quickly by discarding the decrypt key. Is it really still just for apps that use the API?
There is no full disk encryption, even when using the DPAPI and even when a strong password is set.
It's also possible to dump the memory using DMA techniques such as the newly released Elcomsoft Forensic Disk Decryptor. The DRAM memory will likely contain all of the key material.
All versions of all Apple iOS devices will broadcast personal data out of their data ports. All app data can be read. All versions of all apps can be seen. Even IF the files are encrypted with the DPAPI or otherwise, the names of the files can be seen. I don't think you understand how much data is actually available, but check it out for yourself using these tools and methods -- http://linuxsleuthing.blogspot.com/2011/05/open-source-iphone-exploits.html
It's also possible to gain access to the kernel and userland memoryspaces by installing a malicious app from the App Store, or through an exploitable app such as the browser -- http://securecoding.sudo.rm-f.org/archives/2012/09/27/iphone_safari_crash
"When a file is opened, its metadata is decrypted with the file system key, revealing the wrapped per-file key and a notation on which class protects it. The per-file key is unwrapped with the class key, then supplied to the hardware AES engine, which decrypts the file as it is read from flash memory.
"The metadata of all files in the file system are encrypted with a random key, which is created when iOS is first installed or when the device is wiped by a user. The file system key is stored in Effaceable Storage. Since it’s stored on the device, this key is not used to maintain the confidentiality of data; instead, it’s designed to be quickly erased on demand (by the user, with the 'Erase all content and settings' option, or by a user or administrator issuing a remote wipe command from a Mobile Device Management server, Exchange ActiveSync, or iCloud). Erasing the key in this manner renders all files cryptographically inaccessible." From Apple in May: http://tinyurl.com/7kc6s58
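The design described in that excerpt (erase one small key, and everything encrypted under it becomes unrecoverable) can be sketched in a few lines. This is a hypothetical simplification: a one-time pad stands in for the hardware AES engine, and variable names like `filesystem_key` are illustrative, not Apple's:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # One-time-pad stand-in for the AES engine: XOR with a same-length
    # random key both encrypts and decrypts.
    return bytes(d ^ k for d, k in zip(data, key))

metadata = b"per-file keys and protection classes"

# The random key created at install time, held in Effaceable Storage.
filesystem_key = secrets.token_bytes(len(metadata))

stored_in_flash = xor_cipher(metadata, filesystem_key)   # what flash holds
readable = xor_cipher(stored_in_flash, filesystem_key)   # normal operation

print(readable == metadata)  # True while the key exists

# "Erase all content and settings" wipes only the tiny key, not the flash.
filesystem_key = None
# stored_in_flash remains on disk, but without the key it is
# indistinguishable from random bytes: cryptographically inaccessible.
```

The point of the design is speed: securely overwriting gigabytes of flash is slow, but destroying a single small key in Effaceable Storage is nearly instant and has the same cryptographic effect.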
Test it for yourself. Don't believe Apple. Does it work? I can safely say that no, none of the statements from Apple are true at all. I have tested this. It does not work as advertised.
Great article. I think Apple has a number of additional security issues of their own making, though.
First, Apple is creating a monoculture by prohibiting competing Web browsers on iOS. When Internet Explorer 6 suffered serious security vulnerabilities, users could move to Firefox. With Safari, there is no choice.
Second, iOS devices lack an important security feature: the ability to turn the power off - the battery is in a sealed case that cannot be opened without special tools. Removing power is the only 100% reliable and verifiable way for a user to erase data from DRAM.
No Firefox, but there is iCabMobile!
There is Chrome as well. Apple has let other browsers onto iOS, although their marketshare is minimal....
Apple may have made a lot of progress with security on its devices and communicating with its stores, but there is one area that is still sadly lacking. That area is the systems that support all of this wonderful stuff.
For one, there was the story a few months ago about the fellow whose iCloud account had been hijacked and used to destroy a lot of his data. Ok, that has been tightened up on now, I understand, but....
Yesterday, I tried to buy an app on the Mac App Store, using a Mac I have been using to buy apps since I bought it in September. I had also been using it on the iTunes Store. It told me that since this was the "first time I had used this device with the App Store", I had to answer my security questions.
So problem 1: It had either forgotten that I had used this Mac before, or perhaps had not noticed it was new until yesterday. Not good.
Then problem 2: The security questions it asked me were not the questions I had set up. Oops.
It said that if I could not remember the answers, I should go to appleid.apple.com which I did.
Problem 3: I logged in with my AppleID and it wanted to know the answers to my security questions. Ummm, that was why I was there.....
So I rang Apple and a very helpful advisor tried to reset everything to how it had been. He couldn't look at all my details.
Problem 4: He said one part of the system was telling him that I had supplied him enough confidential info to allow him to look at my account, and another part was telling him I hadn't.
In the end, he managed to get the account set up with no security information at all, and I had to re-enter it all.
So here we come to...
Problem 5: The questions it offers are the most stupid things imaginable. Teenagers may be able to remember the answers, but anyone over 40 is going to have a hard time remembering unambiguously the first dish they ever cooked (breakfast cereal, toast?), the name of their favourite teacher (I cannot remember any of my teachers' names), their favourite car (I have had a number), the street they grew up in (I lived in several) and so on.
The whole point of security questions is that they be things you can remember instinctively, so they don't have to be written down. As it is, I had to make up answers with no connection to the questions. This is probably not a bad approach to security in some senses, but since the answers are not instinctive, I have had to write down the questions and answers. And in fact because I may need them while travelling, I have had to put them in a file on the Internet.
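The commenter's workaround (answers with no connection to the questions) is actually sound practice: treat each security question as another password field, generate an unguessable answer, and store it in a password manager rather than a plain Internet file. A minimal sketch of generating such answers (the word list and questions are illustrative):

```python
import secrets

# Illustrative word list; a real tool would use a large diceware-style list.
WORDS = ["marble", "lantern", "orchid", "falcon", "timber",
         "velvet", "copper", "meadow", "harbor", "saffron"]

def random_answer(word_count: int = 4) -> str:
    # secrets.choice draws from a cryptographically secure source,
    # unlike random.choice, which is predictable.
    return "-".join(secrets.choice(WORDS) for _ in range(word_count))

for question in ["First dish you ever cooked?", "Favourite teacher?"]:
    print(f"{question} -> {random_answer()}")
```

An answer like "falcon-timber-orchid-copper" defeats both the attacker who researches your biography and the attacker who guesses common answers, at the cost of requiring secure storage, which is exactly the trade-off the commenter describes.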
The security questions I originally had were custom questions I entered along with the answers. They were things I would never forget. This option is not available once you have your Apple ID. But - the advisor told me - if you are creating a new AppleID, you *can* make up your own questions.
So the issues are: Apple's systems had a "glitch" that wiped my security information, their so-called "security improvements" actually made security worse, and their systems are inconsistent in what they ask.
The back end systems are all a part of "security", and that bit still seems to need a lot of work.
Totally agree with David Morrison, this has happened to me twice this year already and at 74 years old I have a serious problem remembering anything that didn't happen this morning.
My solution, David: have several accounts and sow confusion.
I love all of my Apple products and have done so since 1987, but it doesn't hurt to point out the flaws, albeit not in the spiteful "hate everything Apple" way of some contributors to this forum.