This article originally appeared in TidBITS on 2017-06-30 at 8:09 a.m.

HEVC and HEIF Will Make Video and Photos More Efficient

by Glenn Fleishman

If you haven’t already experienced abbreviation overload, Apple has added two more to your plate: HEVC (High Efficiency Video Coding) and HEIF (High Efficiency Image File Format — yes, it’s short one F). These two new formats will be used by iOS 11 and macOS 10.13 High Sierra when Apple releases them later this year.

While you may not have heard of HEVC or HEIF before, both are attempts to solve a set of problems related to video and still images. As people take photos and shoot video at increasingly higher resolutions and better quality, storage and bandwidth start to become limitations. Even in this day of ever-cheaper and ever-faster everything, consuming less storage space and requiring less bandwidth when syncing or streaming still has many benefits.

The Current Landscape -- Those of us who have been around the block a few times have seen plenty of image formats come and go. Once, I thought the Kodak-backed FlashPix [1] format might make inroads because of how it created a hierarchy of multiple sizes of images in a single file for faster retrieval. But it was not to be.

Since the 1990s, only the PNG (Portable Network Graphics) [2] format, designed to avoid certain patents then extant, has joined the pantheon of widely supported, well-established image formats, alongside JPEG and GIF. (The long-established TIFF isn’t used on the Web, but it remains important in publishing workflows.)

The main split between image format types is whether they’re lossless or lossy. Lossless formats retain pixel-for-pixel details and tones exactly, at the expense of a larger file size. Compression in lossless formats reduces the storage required for redundant information without discarding detail. Lossy formats rely on algorithms to approximate detail and tones across regions of an image, allowing for typically much smaller file sizes.

None of the popular formats fit all needs. JPEG is lossy, so it’s great for photos but less useful for screenshots, and it doesn’t support an alpha channel for transparency, as GIF and PNG do. GIF is lossless, but relies on limited color palettes, making it great for certain kinds of artwork but poor at reproducing photos. (Reduced color palettes were more important when color displays had shallow color depth and bandwidth was more constrained.) GIF also supports animation, which neither JPEG nor PNG’s most common implementation offers. PNG, which is lossless, works particularly well for screenshots and was designed around the patent encumbrances that once restricted GIF usage. PNG and JPEG are each used on about 74 percent of Web sites, but GIFs still appear on 36 percent of sites. PNG didn’t succeed in killing GIF, but it made huge inroads.

HEIF tries to combine all the best aspects of PNG, JPEG, and GIF, while dramatically improving compression and adding new features like the capability to store bursts of photographs.

Video formats have suffered from a more fraught path, because many were caught up in newer patents that hampered widespread adoption. Image formats had to deal with patents, too, but either they were near the end of their lives when the Web was young, as with LZW compression [3], or the makers of various tools to create and display images — from Photoshop to Internet Explorer — had already licensed all the necessary bits and pieces.

These video patents complicated things for Web sites. Questions arose as to whether sites would have to pay royalties for every view, free software groups debated the use of encumbered standards, and Flash took off as a cross-platform delivery package for video, because Adobe took care of all the back-end licensing and display issues.

Ultimately, the MP4 family of standards and its H.264 codec (encoder/decoder) won the day, with the vast majority of video available on the Web now using that format. (The group that pooled patents for H.264 said it wouldn’t collect royalties [4] for free Internet-delivered video. Hardware-accelerated encoding and decoding followed. That took the wind out of competitors’ sails.)

What will be the real-world benefits of these new formats? Let’s start with HEVC and move on to HEIF.

HEVC: Encompassing the Future While Shrinking the Past -- Video streaming gobbles up over 70 percent of evening Internet traffic [5], and network-management firm Sandvine estimates it’s on track to hit 80 percent in 3 years. Thus, streaming video companies, ISPs, and viewers who have monthly caps or overage fees have a huge incentive to get more from less, and HEVC is the solution. Apart from Google’s 4K-capable VP9 [6] codec, HEVC is the only reasonable path for most streaming services to affordably and practically feed out 4K Ultra-High Definition (UHD) video for mobile devices. (VP9 is built into Android starting with version 5.0 Lollipop.)

HEVC is another name for the H.265 standard, and it’s being promoted as taking 50 percent less data to produce streams or downloads of the same quality as H.264 when the resolution is 1080p or less. Netflix, which accounts for about half of the aforementioned primetime data usage in the United States, found that it did indeed achieve 50 percent savings [7]. (Google’s VP9 hit the same 50 percent mark.) Netflix suggests 5 Mbps for HD video [8] now, which would consume about 340 GB for 150 hours of content.
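
As a sanity check on those numbers, here’s the bit-rate-to-storage arithmetic behind the 340 GB figure (a quick illustrative calculation, not anything from Netflix’s tooling):

```python
def stream_size_gb(mbps, hours):
    """Storage consumed by a constant-bit-rate stream, in decimal gigabytes."""
    bits = mbps * 1_000_000 * hours * 3600  # megabits/sec -> bits over the full run
    return bits / 8 / 1_000_000_000         # bits -> bytes -> GB

# Netflix's suggested 5 Mbps HD rate over 150 hours of viewing:
print(round(stream_size_gb(5, 150)))  # 338, i.e. "about 340 GB"
```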

When it comes to 4K UHD video, HEVC’s advantage drops to about 40 percent less data than H.264, but that’s still tremendously more efficient. Netflix streams 4K only to televisions released starting in 2014 that had an earlier version of the HEVC hardware decoder. The streaming company currently advises a constant 25 Mbps or higher rate to stream its 4K content, which is about 1.7 TB for 150 hours of viewing. At H.264 compression rates, it would be over 40 Mbps.
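
The same arithmetic covers the 4K figures; the 40 percent savings is the article’s number, and the rest is just unit conversion:

```python
def stream_size_tb(mbps, hours):
    """Storage consumed by a constant-bit-rate stream, in decimal terabytes."""
    return mbps * 1_000_000 * hours * 3600 / 8 / 1_000_000_000_000

# 25 Mbps of 4K HEVC over 150 hours, and the H.264 rate it replaces
# (if HEVC saves about 40 percent, H.264 needs the full rate back):
print(round(stream_size_tb(25, 150), 2))  # 1.69, i.e. "about 1.7 TB"
print(round(25 / (1 - 0.40), 1))          # 41.7 Mbps at H.264 rates
```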

How can HEVC achieve such a notable improvement in compression without sacrificing quality? As with the jump from MPEG-2 to H.264, it involves hardware acceleration on the encoding side. With chips that can perform specialized calculations, algorithms that perform more intensive analysis of video to find places to compress become more viable. HEVC can require up to 10 times more computation than H.264 to encode at the same bit rate!

But this is asymmetrical. In plain English, HEVC works because it’s relatively cheap to buy super-powerful computers with specialized chips to encode the video in production, but even tiny mobile devices can decode those highly compressed streams or downloads quickly and easily. Producers crunch the files; viewers reap the bandwidth benefit.

Both H.264 and HEVC break down every frame in a video into a series of rectangles (mostly squares) based on the image’s tonal values, with the goal of grouping similar tones for more compression. A frame that has a large area of blue sky and small figures walking across a desert could obviously be compressed better if the blue sky and desert regions were broken out from the people walking across it. HEVC can encode larger areas at once, which results in higher compression for less-differentiated detail.
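
To make the block-partitioning idea concrete, here’s a toy quadtree splitter in Python: it keeps big blocks where tones are uniform and subdivides where they aren’t. The variance threshold and block sizes are arbitrary illustrations, not HEVC’s actual decision logic (which also weighs rate-distortion cost):

```python
import numpy as np

def split_blocks(tile, min_size=8, threshold=100.0):
    """Recursively quadtree-split a square luma tile: keep a block whole
    where tones are uniform, subdivide where detail (variance) is high."""
    n = tile.shape[0]
    if n <= min_size or tile.var() < threshold:
        return [tile.shape]  # uniform enough: encode this block as one unit
    h = n // 2
    quads = (tile[:h, :h], tile[:h, h:], tile[h:, :h], tile[h:, h:])
    return [blk for q in quads for blk in split_blocks(q, min_size, threshold)]

sky = np.zeros((64, 64))                                   # flat blue sky
noise = np.random.default_rng(1).normal(0, 50, (64, 64))   # busy detail
print(len(split_blocks(sky)), len(split_blocks(noise)))    # 1 64
```

The flat “sky” patch compresses as a single 64-by-64 block, while the noisy region fragments all the way down to 8-by-8 blocks, which is exactly why uniform areas cost so few bits.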

HEVC is also much better at “predicting” how elements in a frame will change from frame to frame and in which direction those elements will move. The full explanation is eye-glazing, but the summary is that increased compression efficiency both within a frame and between frames lets HEVC gain that extra 40 to 50 percent reduction.
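
The inter-frame “prediction” can be sketched as classic block matching: find where a block in the current frame best matches the previous frame, then store only the motion vector plus a small residual. This is a drastic simplification of HEVC’s motion estimation, with hypothetical block size and search range:

```python
import numpy as np

def best_motion_vector(prev, block, top, left, search=4):
    """Exhaustively match an 8x8 block against the previous frame around
    (top, left), minimizing the sum of absolute differences (SAD)."""
    best_sad, best_mv = float("inf"), (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y and 0 <= x and y + 8 <= prev.shape[0] and x + 8 <= prev.shape[1]:
                sad = np.abs(prev[y:y+8, x:x+8] - block).sum()
                if sad < best_sad:
                    best_sad, best_mv = sad, (dy, dx)
    return best_mv

# A block that "moved" two pixels right between frames: the matcher
# recovers the motion as a vector back to its old position.
prev = np.arange(32 * 32, dtype=float).reshape(32, 32)
cur_block = prev[8:16, 6:14]   # now sitting at column 8 in the current frame
print(best_motion_vector(prev, cur_block, 8, 8))  # (0, -2)
```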

Smaller files and fewer bits-per-second required for streaming are great when you’re Netflix, but why should you care as an individual user or even a company using video from iPhones as part of your workflow? Because every bit saved is a bit you don’t transmit and a bit you don’t store.

For starters, if a video occupies only half the space for the same quality, your iPhone’s precious storage goes twice as far before you have to sync or offload video.

For the average user who subscribes to iCloud storage above the free 5 GB tier, crossing 200 GB of media means suddenly leaping from $2.99 a month for 200 GB to $9.99 a month for 2 TB. Halve your storage and you save that difference in cost. For even relatively modest video production houses storing massive amounts of video, the same scenario applies to local SSD or RAID storage and remote cloud storage, and could result in savings of tens of thousands of dollars per year.
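
In code, that pricing cliff looks like this (a toy illustration using only the two paid tiers cited above):

```python
def icloud_monthly_cost(gigabytes):
    """Cheapest paid iCloud tier (2017 US pricing cited above) that fits."""
    for cap_gb, price in [(200, 2.99), (2000, 9.99)]:
        if gigabytes <= cap_gb:
            return price
    return None  # larger than the biggest tier

# A 250 GB library needs the 2 TB tier; halve it with HEVC/HEIF and
# you drop back to the 200 GB tier, saving $7 a month:
print(icloud_monthly_cost(250), icloud_monthly_cost(125))  # 9.99 2.99
```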

Similarly, if you send or receive video via cellular data, you might be able to drop to a cheaper data plan without being throttled or charged for overages. And for commercial users, being able to transfer less data to cloud storage or stream at lower bit rates could reduce costs significantly. Amazon S3 and Google Cloud may offer cheap storage and transfer, but it still adds up. Half of anything is half as much!

As viewers, we should get better and more consistent quality television and movie streaming on our iOS devices and Macs, as well as on the fourth-generation Apple TV, which is slated to receive HEVC decoding in tvOS 11 (see “What’s Coming in tvOS 11 [9],” 15 June 2017). People with lower broadband throughput rates will potentially use the same amount of data and see a much crisper picture. Those with higher bandwidth connections will consume half as much data for the same results.

The central question about HEVC is how easy or hard it will be to capture, edit, and play back on various devices. Apple hasn’t named compatible devices, but in a developer presentation, it provided a clear rundown of the hardware and software support [10] (go to 22:00 in the video to listen and see).

In short, all Macs and iOS devices that run the upcoming releases will be able to decode HEVC at least in software. But for hardware decoding, you’ll need an iOS device with an A9 or later and a Mac with an Intel Skylake or Kaby Lake processor (6th and 7th generation Intel Core). On the iOS side, that means an iPhone SE, iPhone 6s or later, any iPad Pro, and the fifth-generation iPad. The 2016 MacBook Pro models have Skylake processors, and the 2017 iMac and MacBook Pros sport Kaby Lake chips.

Depending on the size of files and other parameters, HEVC software decoding might be erratic or consume much more battery life than H.264. Smart Web sites may check a device’s vintage, and iOS apps can use new developer queries about supported video formats, then feed out H.264 if HEVC might suffer from software-decoding hiccups. Of course, that means older hardware that’s technically capable of HEVC might not get the full bandwidth advantage, but owners will probably appreciate H.264’s battery savings, reduced fan noise, and smoother video.

If you want to edit and encode with HEVC, you’ll have the same issues as decoding, and it probably won’t be practical without a newer Mac.

(A technical aside for those who care about deep color. Both H.264 and HEVC allow for 10-bit color, which provides richer differentiation of tones than 8-bit color: a billion different shades instead of just over 16 million. 4K and 5K iMacs and the Mac Pro support 10-bit color, as do the 2016 and later MacBook Pros, and external monitors on some other 2015 and later Macs. 2016 MacBook Pros with Skylake chips include only 8-bit HEVC hardware encoding; Kaby Lake models handle 10-bit.)
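
The shade counts in that aside come straight from the bit depths:

```python
def total_colors(bits_per_channel):
    """Distinct colors representable with that many bits per RGB channel."""
    return (2 ** bits_per_channel) ** 3

print(f"{total_colors(8):,}")   # 16,777,216 -- "just over 16 million"
print(f"{total_colors(10):,}")  # 1,073,741,824 -- "a billion"
```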

Because not every device will display HEVC video, exporting will produce compatible formats for social media and other sharing, as you can do with Photos, iMovie, and other apps today.

With video out of the way, HEIF will seem vastly simpler by comparison.

A Container for Images, Rather Than a Simple File Format -- Even though it’s billed as an image format, HEIF is in fact a container that rethinks what an image format needs to do in today’s complex world. A HEIF file will be able to hold text, audio, video, still images, and sequences of frames for bursts and animations, and software will be able to extract and present the relevant information depending on what we’re trying to do.

HEIF is built on an ISO standard — hurray! — developed into a full spec by the Moving Picture Experts Group (MPEG). Apple has based its implementation on a second, more fully realized version that the company said at WWDC will soon be released. Also, Apple says HEIF is pronounced “heef,” rhyming with “beef.” (An Apple developer presentation offers a good amount of detail [11], if you want the not-too-gory bits.)

Once again, compression is perhaps HEIF’s most significant benefit. To make things simpler, HEIF can use HEVC compression. That’s both because HEVC compression is more efficient than JPEG’s and because Apple can use HEIF to store bursts of images and animations (think Live Photos), both of which benefit from HEVC’s inter-frame compression.

Less obvious benefits include better support for alpha channels, which are used for transparency and masking of images, and for deeper color, something Apple has been pushing into its hardware for a few years. HEIF can also break an image into rectangular regions so editing and display software can quickly retrieve just the necessary adjacent pieces without loading the entire file. And it can store both an original image and images derived from the original, much as apps like Lightroom store a base image and then record a series of transformations.
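
The tiling benefit is easy to see in code: given a tiled image, a viewer only needs the tiles that intersect the region on screen. The 512-pixel tile size here is an assumption for illustration, not something from the HEIF spec:

```python
def tiles_for_region(x, y, w, h, tile=512):
    """Which fixed-size tiles of a tiled image cover a requested region?
    Returns (row, column) tile coordinates."""
    return [(ty, tx)
            for ty in range(y // tile, (y + h - 1) // tile + 1)
            for tx in range(x // tile, (x + w - 1) // tile + 1)]

# Zooming into a 200x100 region at (1000, 300) touches just two tiles,
# so the rest of the image never has to be decoded:
print(tiles_for_region(1000, 300, 200, 100))  # [(0, 1), (0, 2)]
```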

Although HEIF can be used to store bracketed images — photos of the same scene taken in quick succession with different exposures — to let software produce high dynamic range (HDR) output, Apple instead generates HDR images directly in the image signal processor in iOS devices. Third-party software could opt to bypass Apple’s hardware and use HEIF for this purpose. ProCamera [12], for instance, has its own HDR mode.

But with two-camera iPhones, currently including just the iPhone 7 Plus, Apple will store the depth map that it derives for its Portrait-mode photographs in the HEIF file. The depth map identifies a series of planes at a range of distances from the foreground. This lets Apple separate out figures in the front and aesthetically blur the background to achieve the “bokeh” effect (see “Behind the iPhone 7 Plus’s Portrait Mode [13],” 24 September 2016). But it can also be used for a host of interesting effects by developers, who will be able to access the depth map in iOS 11 both as the camera is in operation and from stored HEIF images. Apple showed examples like a foreground figure being in full color, while the background was in black and white. It will also make it easy to composite foreground elements against artificial backdrops.
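
As a sketch of the kind of effect developers could build from the depth map — the color-foreground, gray-background example Apple showed — here’s a minimal version in Python. The depth convention (smaller values are nearer the camera) and the arrays are assumptions for illustration, not Apple’s actual API:

```python
import numpy as np

def color_foreground_only(rgb, depth, cutoff):
    """Keep pixels nearer than `cutoff` in full color; gray out the rest."""
    gray = rgb.mean(axis=-1, keepdims=True).repeat(3, axis=-1)
    foreground = (depth < cutoff)[..., None]  # per-pixel mask from the depth map
    return np.where(foreground, rgb, gray)

# A 2x1 "image": left pixel is near (stays red), right pixel is far
# (collapses to a uniform gray).
img = np.array([[[200.0, 0.0, 0.0], [200.0, 0.0, 0.0]]])
depth = np.array([[1.0, 9.0]])
out = color_foreground_only(img, depth, cutoff=5.0)
```

The same mask-by-depth approach underlies compositing a foreground figure against an artificial backdrop: replace the grayed pixels with the new background instead.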

I won’t reiterate the advantages for storage, since they apply just as much to photos as to video. As someone with dozens of gigabytes of video and hundreds of gigabytes of photos, I’ll likely find more savings from HEIF than HEVC video.

Just like HEVC, HEIF relies on newer hardware for hardware decoding: iOS devices need an A9 or later processor, and for Macs, the same Skylake and Kaby Lake models noted above. All other iOS devices and Macs that can run iOS 11 and High Sierra rely on software decoding.

Because HEIF is a container format, it gives individual implementors like Apple a lot of flexibility about what ends up inside. I hope that doesn’t create compatibility issues when moving HEIF files to other platforms that eventually support it as a native file type. At the moment, HEIF files can be read only by Apple beta software. Conceivably, we’ll see Adobe Photoshop and other software gain support.

Web browsers won’t support HEIF initially, and it’s not inherently suited for the Web because any given HEIF file could include all sorts of excess data. I expect that Apple and others will define kinds of HEIF that will be appropriate for Web usage, such as a substitute for animated GIFs and for better compression than JPEG provides. Web servers can already supply different kinds of image and video types based on browser versions, so HEIF would just extend that capability. But Apple hasn’t said anything along those lines yet.

Apple has created developer tools that let apps assess what format an image must be in for sharing or display, and then export and serve it up as needed. iOS, macOS, Apple’s apps, and independent apps will perform a lot of conversions or offer export options, while retaining and passing along HEIF for intra-ecosystem use.

Are HEVC and HEIF Like USB-C for Media? -- People still have issues with the USB-C connector used for USB 3.1 and Thunderbolt 3 because it requires adapters, raises compatibility issues among identical connectors, and generates anxiety about what will work with what. (I’m a big fan of USB-C — as evidenced by my owning a 12-inch MacBook and a 2017 iMac — but I understand the complaints.)

HEVC and HEIF shouldn’t suffer from the kind of confusion that plagues USB-C, however, because Apple has built its support around the notion that only devices within Apple’s ecosystem will support the formats natively. Moving outside Apple’s ecosystem will typically — at least initially — require transcoding and export, and those conversions will almost certainly happen without you even realizing it. Apple is encouraging developers to keep this approach in mind, too.

The one sore spot you might hit is if you don’t upgrade all your devices to iOS 11 and High Sierra at the same time, or if you own older hardware that can’t be upgraded. For instance, if you use the same iCloud account with iCloud Photo Library across both new and old devices, I’m not clear on how pre-HEIF/HEVC platforms will deal with those images. Apple hasn’t provided guidance about that yet.

Another question that Apple has yet to answer is how your existing JPEG photos and H.264 video will be treated when you update to High Sierra. Will Photos automatically convert your entire library? What about files outside of Photos?

Nevertheless, the advantages for HEVC and HEIF are clear, and the transition shouldn’t be rocky if you move forward all at once. But hey, keep good backups, just in case.