Apple’s security team recently sent email to its security announcement list noting that it had updated its PGP public key. While this seems like an obscure or even unimportant announcement, it’s worth examining for two reasons. First, it highlights how seriously Apple takes security these days compared with about four years ago; second, it’s worth reviewing how you verify and use a public key to ensure the integrity of messages you receive from parties that use them.
Four years ago, Apple became more serious about using encryption to allow validation of material it sends out after the BugTraq security list posted a brief vulnerability report noting that Apple didn’t verify the integrity of programs and patches released via Mac OS X’s Software Update feature.
Apple fixed the problem by stapling on an encryption-based validation method that ensured that downloaded updates actually came from Apple before they were installed – and released that update about 10 days after the report.
Sharing Secrets without Revealing Them — Public key encryption is an integral part of PGP (Pretty Good Privacy), a system that allows a strong encryption key for a single document or set of text to be exchanged between two or more parties over untrusted networks – e.g., the Internet or most local area networks. An untrusted network is one in which you can’t be sure of the identity of the person you’re communicating with – they could be an impostor – nor can you tell if someone is eavesdropping on your exchanges. That’s the compromise we have in using any programs that move data over the Internet, within a local academic network, or even between parties using a free Wi-Fi network in a cafe.
With PGP, each party to a message creates and maintains two encryption keys: one public, one private. These keys are related mathematically. The private key must be heavily protected and stored on a local hard drive or a removable USB drive; by contrast, the public key may and should be shared with anyone. Public keys are often published to a keyserver, or a directory of keys, and to Web sites, although that’s problematic for reasons I’ll discuss later.
The algorithms that drive public key cryptography make cracking the private key effectively impossible over epochal time, taking into account current cracking techniques, expected advances in computational power and distributed computation, and the ongoing formal and malevolent testing that looks for flaws in these algorithms. In general, too, choosing longer keys – say 2048 bits instead of 512 – increases complexity without taxing anyone’s computer.
The same algorithms make it impractical to attempt to forge a digital signature that would prove that an individual was the possessor of a given public key’s private counterpart.
PGP’s clever bit – now a common approach for all kinds of secure protocols – is that it doesn’t use the slow-to-compute public key encryption to encrypt messages or files. Rather, it uses a public key to protect a strong symmetric key; data protected with a symmetric key is encrypted and decrypted with the same key, and this method is much easier for a CPU to process. PGP thus protects the vulnerable symmetric key with a very strong method. SSL/TLS (Secure Sockets Layer/Transport Layer Security), SSH (Secure Shell), IPsec (IP security often used with virtual private networks), and S/MIME (Secure/Multipurpose Internet Mail Extensions, used for secure email enclosures), among others, use similar methods.
A related benefit is that the same symmetric key can be separately encrypted for many different recipients of the same document. Rather than encrypt a 100 MB file 20 times, you can send a few thousand extra bytes for each recipient attached to a single 100 MB file.
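The hybrid structure described above can be sketched in a few lines of Python. This is a toy model, not real cryptography: the XOR keystream "cipher" and the key-wrapping function merely stand in for PGP's symmetric cipher and public-key operations, and all the key values are invented. What it shows accurately is the shape of the scheme: one bulk encryption of the payload, plus one small wrapped session key per recipient.

```python
# Toy sketch of PGP-style hybrid encryption. NOT secure: the XOR keystream
# and the wrap step only illustrate the structure of the scheme.
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data against a SHA-256-derived keystream.
    Applying it twice with the same key recovers the original data."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def wrap_key(session_key: bytes, recipient_public: bytes) -> bytes:
    # Stand-in for encrypting the session key to a recipient; in real PGP
    # this is an RSA or ElGamal operation with the recipient's public key.
    return keystream_xor(recipient_public, session_key)

message = b"imagine a 100 MB file here, encrypted exactly once"
session_key = secrets.token_bytes(32)              # one random symmetric key
ciphertext = keystream_xor(session_key, message)   # single bulk encryption

# One small wrapped key per recipient rides along with the one ciphertext.
recipients = {name: secrets.token_bytes(32) for name in ("alice", "bob", "carol")}
wrapped = {name: wrap_key(session_key, pub) for name, pub in recipients.items()}

# Any recipient unwraps the session key, then decrypts the shared ciphertext.
recovered = keystream_xor(recipients["bob"], wrapped["bob"])
assert keystream_xor(recovered, ciphertext) == message
```

Note that each wrapped key is only 32 bytes here; even with real public-key padding and packet overhead, the per-recipient cost stays in the range of a few thousand bytes, as described above.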
By way of history, PGP was developed in 1991 by Philip Zimmermann, who faced a variety of legal threats from the U.S. government through the 1990s for allegedly violating munitions-export restrictions, owing to how cryptography was classified and how he allowed the program to be disseminated. He went commercial with the software, and it passed through intermediate owners until ending up at PGP Corporation. PGP Corp. offers a free version of PGP Desktop Home 9 for non-commercial use; download the 30-day trial of the full-featured version and let it expire. There’s also an open-source project called GPG (GNU Privacy Guard) that uses PGP principles and conforms to the OpenPGP specification.
Zimmermann’s latest project, by the way, is an encrypted version of voice over IP that encrypts and decrypts sound packets from standard VoIP software that relies on SIP, or Session Initiation Protocol. His Zfone software is even simpler than PGP to use.
Trust but Verify — Public key encryption and PGP are typically used either for encrypting and/or signing a file to transmit or store, or for decrypting and/or validating a received or archived file. Encryption and decryption require that the sending party knows the receiving party’s public key, which they obtain directly or from a directory. The sender uses PGP or GPG to encrypt the message with the public key, and the recipient then uses their private key – handled by their encryption software – to read the original message or use the file that was encrypted.
Signing lets the sending party use PGP to compute a relatively short series of numbers that provides a kind of fingerprint of the original message, a bit like a checksum but with much higher complexity. The message can’t be reconstituted from the fingerprint – much like you can’t produce a finger from a fingerprint – and duplicating the fingerprint’s number sequence from other text is almost impossible. PGP then uses the sending party’s private key to create a signature from the fingerprint. The recipient can then verify the signed message hasn’t been tampered with by using the sender’s public key.
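A cryptographic digest behaves exactly like the fingerprint just described, and you can see its two key properties with a few lines of Python. SHA-256 stands in here as the digest function; the message text is invented for illustration.

```python
# A message digest as a "fingerprint": fixed length regardless of input
# size, and any change to the message yields a completely different value.
import hashlib

message = b"Apple security update announcement"
tampered = b"Apple security update announcemenT"   # one character changed

fp1 = hashlib.sha256(message).hexdigest()
fp2 = hashlib.sha256(tampered).hexdigest()

assert len(fp1) == 64   # always 64 hex digits, however long the message
assert fp1 != fp2       # a one-character change alters the digest entirely
```

Signing then applies the private key to this short digest rather than to the entire message, which is why verifying a signature is fast even for large files.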
Apple signs messages sent via its security list and also signs files that are offered for download via Software Update. In the case of the security list, you’re on your own for checking the validity of the message. If you use PGP Desktop Home 9 or similar software, you can use one of several methods to let PGP validate signed messages. (Software Update has a built-in method of checking signatures. You may even notice that Software Update itself occasionally downloads a new PGP key!)
Apple uses a similar method to help validate its security updates. If you go to a page, like the one for Security Update 2006-003 for Mac OS X 10.4.6 Client (PPC), you’ll see a note at the bottom listing a cryptographic checksum for the download.
That’s the computed fingerprint of that particular disk image. To verify that a download of that disk image is identical to what was packaged up by Apple, you can follow instructions provided on a linked page. This requires the use of Terminal.
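What the Terminal instructions accomplish can be sketched as follows: compute the digest of the downloaded disk image and compare it against the value published on Apple’s page. In this sketch the filename is a hypothetical placeholder, the published digest is deliberately left elided, and SHA-1 is an assumption about which digest Apple published at the time.

```python
# Sketch: verify a downloaded file against a published digest.
import hashlib

def file_sha1(path: str) -> str:
    """Compute the SHA-1 digest of a file, streaming it in chunks
    so even a large disk image doesn't need to fit in memory."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage -- filename and published digest are placeholders:
# downloaded = file_sha1("SecUpd2006-003.dmg")
# published = "…"  # digest string copied from Apple's page
# assert downloaded == published, "download does not match Apple's fingerprint"
```

If the computed digest matches the published one, the disk image you received is bit-for-bit what Apple packaged.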
I use Bare Bones Software’s Mailsmith 2.1 with PGP Desktop 9, enabling PGP to handle my email streams (an extra feature in PGP’s commercial version). Any incoming signed message is automatically processed by PGP, checked against keys I have stored, and converted before it reaches Mailsmith so that I can see whether a trusted or unknown key signed the message, or whether the message can’t be validated. The downside, of course, is that I now have the unencrypted messages stored on my computer; I’d have to re-encrypt them and delete the stored copies to achieve the same original security. (PGP Desktop and GPG work with other mail programs. PGP Desktop includes several plug-ins and scripts, and there’s a GPG plug-in for Apple Mail.)
For instance, PGP inserted this message into the email about the new public key that I received from Apple on 08-May-06: "PGP Signed by an unverified key: 05/08/06 at 15:56:15". This alert indicates that while the signature was valid, the key was unknown.
Within PGP, I can mark a given key as verified, once I’m sure that it’s really valid. But how can I validate that a public key is valid without recourse to the same untrusted network from which I received the key? That’s the next step.
Validating a Key — For key verification, which I need to perform only once per key, I have to find a method other than email – otherwise one interception could disrupt the trust for both the key and the verification of the key. This is where phone calls, faxes, and other information come in handy. You can validate that someone’s public key is really the one that they created and distributed by checking its fingerprint with the owner of that key. For the best security, you call up the owner or use another out-of-band method – something other than the Internet, for instance – to get the fingerprint. A secure Web site would also work, though it has both advantages and disadvantages I’ll discuss below.
In either version of PGP Desktop Home 9, after pasting in a public key sent via email or copied from a Web page or after importing a key from a public keyserver, you can reveal its fingerprint through these steps. First, select the key in the main PGP Desktop window. Next, press Command-I or select Show Key Info from the contextual menu. The middle of the Info dialog box shows the fingerprint.
If you and the other party use PGP 8 or later, you can use the hilarious Biometric tab, in which each number from 0 to 255 has been assigned a unique word. This is easier to read over the phone. For other versions of PGP or GPG, you’ll need to click the Hexadecimal tab and read the short sequence of groups of four hexadecimal digits. If the numbers don’t match, the public key you have isn’t the one published or sent by the party you’re talking to. Time to review your security, if that’s the case.
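The biometric word-list idea is easy to model. Real PGP alternates between two 256-word lists (one for even byte positions, one for odd) so that transposed bytes are caught; this toy uses a single invented four-word list indexed by 2-bit values just to show the mapping from numbers to speakable words.

```python
# Toy sketch of a biometric word list: map numeric fingerprint values to
# pronounceable words so they can be read aloud over the phone.
# The four-word list here is a sample, not PGP's actual 256-entry lists.
WORDS = ["aardvark", "absurd", "accrue", "acme"]

def speakable(fingerprint: bytes) -> str:
    # Real PGP indexes a 256-entry list by each whole byte; for brevity
    # this toy indexes a 4-entry list by each 2-bit pair of the byte.
    words = []
    for byte in fingerprint:
        for shift in (6, 4, 2, 0):
            words.append(WORDS[(byte >> shift) & 0b11])
    return " ".join(words)

print(speakable(bytes([0b00011011])))  # → "aardvark absurd accrue acme"
```

Two people can each run the same conversion on their copy of the fingerprint and compare word sequences aloud, with far less chance of mishearing than with strings of hexadecimal digits.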
If the fingerprints match, which they always have for me over a decade of using PGP, you’ve accomplished your out-of-band step and have a secure PGP key that can be used in the future.
You might ask: If Web servers use SSL/TLS to secure connections, and SSL/TLS uses public keys in a similar way to PGP, how do they perform this external verification? The answer is through what’s called a certificate authority (CA), a third party that confirms some measure of the truth of identity expressed in an SSL/TLS certificate. Each certificate contains a public key for the server using SSL/TLS and is signed by the CA. How does my Web browser then trust the CA? Browsers (and, for other purposes, operating systems) vouch for certificate authorities by embedding the certificates of the CAs – dozens of them – in the browser or operating system. You trust your operating system vendor or browser developer to pick trustworthy CAs, and then the CAs to identify correctly the organizations that are using the certificates the CAs have validated.
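That chain of trust reduces to a simple rule, sketched here as a toy model. The CA names and certificates are invented, and the "signature" is simulated with a dictionary lookup; real verification is a public-key operation over the certificate's contents.

```python
# Toy model of browser certificate trust: accept a server certificate only
# if its subject matches the hostname AND a CA embedded in the browser
# signed it. Names and certs below are hypothetical.
EMBEDDED_CAS = {"ExampleRoot CA", "OtherRoot CA"}   # shipped with the browser

# Hypothetical server certificates: hostname -> (subject, issuing CA)
certs = {
    "www.apple.com": ("www.apple.com", "ExampleRoot CA"),
    "evil.example":  ("evil.example", "Shady CA"),
}

def browser_trusts(hostname: str) -> bool:
    subject, issuer = certs[hostname]
    # Both conditions must hold: a trusted issuer and a matching subject.
    return issuer in EMBEDDED_CAS and subject == hostname

assert browser_trusts("www.apple.com")
assert not browser_trusts("evil.example")
```

The trust decision thus rests entirely on which CAs your browser vendor chose to embed, which is exactly the delegation described above.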
(If you need to use digital certificates for private purposes or within a company, and don’t want to pay a yearly fee for a CA-issued certificate, you can create your own. These self-signed certificates put you in the role of CA by creating a special certificate that’s separately installed on any computer with which you’d interact. Mac OS X has great tools for examining self-signed certificates when presented via a Web browser or as part of a kind of Wi-Fi network login called WPA Enterprise that also uses certificates. You can choose to trust a self-signed certificate once or always, along with other parameters. Apple includes tools for generating your own certificate and self-signing within Keychain Access. Choose Certificate Assistant from the Keychain Access application menu.)
Why Is Apple Updating Its PGP Key? That brings us to the issue I started with: Apple has updated its public PGP key for security messages – both messages it sends out on the list and messages you want to send them. Why? When you create a public/private key pair, you determine how long the keys remain valid. The expiration date is another way to limit the damage from a private key that slips into the wrong hands. (There’s also a way to revoke keys, but it’s unreliable and a bit complicated to discuss in brief.) Apple expires many of its public keys as a routine part of encryption hygiene.
Now, the one mistake Apple made with distributing their new key is that while they provided full information with their key, including the fingerprint, they provided no external validation method. The link included in the email they sent is for a plain HTTP transaction. Because HTTP transactions occur in the clear, it would be possible for an attacker at an institution – say a university or corporation – to modify both the email and the appearance of an Apple Web page that you view on your computer through a variety of well-known local area network exploits. You might see a different fingerprint and public key on the Web page served to your computer than Apple has on its own servers.
Sure, this is extremely unlikely, but when you’re working with a key that will last a year and a process that’s designed to provide commercial-grade security for tens of millions of people, well, it’s an oversight.
I did discover that Apple’s SSL/TLS Web servers will let you request the same page through a secure transaction. If you enter "https" instead of "http" for the page containing their public key and fingerprint, your browser uses its certificate authority list to ensure you’re seeing a page Apple intended for you to see. (The odds of the CA list embedded in your browser having been compromised are vanishingly small, unless the list were tampered with for millions of people as part of a widespread exploit.)
When you load the page via SSL/TLS, you may receive one warning for a Web bug (tracking image) on the page that you can safely ignore; some colleagues didn’t see that warning at all.
For most people, any step beyond viewing a plain, non-encrypted Web page at Apple is certainly unnecessary, but it’s good to review the chain of trust. For those who favor the most stringent methods of external confirmation, Apple is just a mark or two below that. It’s much more likely that any exploit would be an inside job – which has happened at some firms, but is an unlikely event – than from the outside.
I do have one rather off-beat suggestion. Provide an automated fingerprint reader by phone. Offer a telephone number that’s clearly within Apple’s known phone range and have a recorded voice say, "Here is Apple’s PGP security key fingerprint for the key expiring May 1, 2007," followed by the string of hexadecimal digits.
They could even use Talking Moose, for old times’ sake.