Apple Releases Dedicated Security Research Device
Apple has announced that it is releasing a special version of the iPhone to approved security researchers as part of a new Apple Security Research Device Program (SRD). The SRD, which was originally announced during last year’s Black Hat information security conference, is a standard iPhone with key security controls disabled to allow security researchers to run their own tools. Apple will provide SRDs on one-year renewable loans to researchers who are accepted into the program.
In a private briefing, Apple provided additional details about and background on the SRD program.
Who Can Get an SRD
Apple said that anyone with a track record of “systems security” research is encouraged to apply, even if they have little to no experience with iOS. The program has two key goals: to open up opportunities for those with experience on other platforms by lowering the bar for iOS research and to make security research more efficient for existing researchers.
Until now, researchers were largely limited to two options. They could use standard, secure devices and attack them almost blindly, which was inefficient at best. Or they could rely on jailbroken devices that were often limited to earlier versions of hardware and software because every jailbreak is a severe security vulnerability that Apple tends to patch swiftly.
What Makes an SRD Different from a Regular iPhone
The SRD lacks code execution and containment restrictions on multiple levels of the hardware and software to allow researchers to run their own toolchains, including common research tools (which we assume need to be compiled for the platform). As an example, the SRD ships with a terminal shell out of the box, so researchers can employ a wide range of debugging tools and access low-level logs that are normally inaccessible. A researcher could, for instance, deploy a full network monitor onto the device.
Researchers can also run their tools with arbitrary entitlements, including Apple’s own entitlements that are never available to developers. Entitlements are the key sandboxing control on iOS to restrict what applications can do.
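To make the entitlements concept concrete: an entitlement is a key-value pair embedded in an app's code signature that the OS checks before permitting a capability. Here is a minimal sketch in Python that parses an illustrative entitlements plist; the sample keys (`application-identifier`, `get-task-allow`) are real iOS entitlement names, but the values and bundle identifier are hypothetical.

```python
import plistlib

# An illustrative entitlements plist of the kind iOS embeds in an app's
# code signature. The team and bundle identifiers here are made up.
sample = b"""<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
 "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>application-identifier</key>
    <string>TEAMID.com.example.app</string>
    <key>get-task-allow</key>
    <true/>
</dict>
</plist>"""

entitlements = plistlib.loads(sample)
# get-task-allow controls whether a debugger may attach to the process.
print(entitlements["get-task-allow"])  # prints True
```

On a regular iPhone, entitlements like these are granted (or denied) by Apple at signing time; the SRD's appeal is that researchers can assign themselves entitlements, including ones Apple normally reserves for itself.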
Although the SRD will support iOS 14, it needs to run special Apple-provided versions of iOS since regular consumer builds have all the security controls enabled.
How the SRD Will Be Useful to Researchers
One of the interesting aspects of the SRD program is how it will enable research on the Apple iOS ecosystem beyond traditional device security. For example, in 2019, the Citizen Lab discovered that Tibetan groups were being targeted with government-sponsored mobile malware, including attacks on iOS devices.
An SRD could help researchers run security tooling to better identify and research these kinds of attacks. When asked if Apple would support this general kind of research, which differs from traditional vulnerability hunting, company reps responded, “They are supporting the security of our users… that would likely qualify as strong research.”
A few other interesting points came up in our discussion:
- The SRD is based on “the latest hardware.” Apple did not specify a particular model but said that it’s important to keep the hardware up to date so it reflects what customers use. On the off-chance Apple releases new iPhones in the future, it’s likely that the company will update the SRDs.
- Researchers are required to report vulnerabilities to Apple and withhold disclosure until fixes are available. This applies only to vulnerabilities discovered using the SRD and not anything discovered independently. Researchers who have policies to disclose vulnerabilities on a preset timeline won’t be able to participate in the program without changing that practice.
- Researchers will need to comply with the terms and conditions of any App Store apps they install on the device; Apple is not adding its own restrictions. If they do discover a vulnerability, the researchers are expected to report it to both the vendor and Apple.
- Aside from basic access to the device, Apple is also supporting the program with a dedicated forum that includes Apple security engineers. Apple security leadership is also available for program feedback.
- Apple has not determined how many SRDs will be produced, and the initial run of devices is not representative of the overall program. The company plans to continue to evaluate and expand based on how things go. My assumption (which could be incorrect) is that the program will initially be relatively small and will grow over time.
Constructive Progress, but with Risks
Overall, I see this as a highly positive move by Apple. It can be incredibly difficult to build a toolchain for a device as locked-down as the iPhone since you are effectively flying blind until you crack the security deeply enough to install some instrumentation. Assuming the SRD works as described, it will remove many of the major roadblocks to iOS security research and could result in discoveries that provide better security for us all.
On the whole, Apple’s program requirements and restrictions appear reasonable, and I look forward to seeing how they work in practice. However, there is some risk that the program restrictions will muzzle some researchers while Apple sits on vulnerabilities. Apple’s track record for fixing issues has been pretty good in recent years, but we can’t dismiss this concern out of hand.
As Michael Tsai notes, this restriction could inhibit program members from releasing other iOS vulnerability information discovered independently. Researchers might lack solid records defending the origin of an independent discovery and thus feel constrained by Apple’s restrictions.
One final note: I don’t expect Apple’s highly gated access to the program to change in the future. An opportunity like this is ripe for abuse by malicious governments and private organizations that already pay large bounties for iOS vulnerabilities. The SRD reduces barriers for research that contributes to Apple’s ecosystem security without lowering the boundaries for malicious actors.
Great initiative. I am not a security expert, but doesn’t the fact that Apple is removing the software and hardware locks to allow access kind of undermine part of the program in that it removes the very blocks attackers are likely to try to crack, and which therefore are the most interesting parts to examine? On the surface it sounds like a company asking an outside party to do a security/penetration audit and then they remove the locks on the building to let the audit company inside… ?
Well yeah, but that’s why they’re limiting and vetting those allowed access to these locks-removed devices and then requiring them to release their findings only to Apple.
I think it’s more subtle than that, luckily. It would be more like a bank asking an auditor to evaluate the security of its safe-deposit box room, but making sure they could get in the front doors and past the lobby guards. Those initial layers of security are good and useful, but they make it a lot harder for researchers to explore the deeper levels.
@rmogull, what I’ve been pondering since we posted this is how this plays with Apple’s claim that it won’t put a backdoor in iOS. Obviously, the SRDs will be running a custom version of iOS that won’t be running on devices confiscated by law enforcement from suspects, but it seems to be introducing a bit of a gray area if it’s that easy for Apple to disable certain security features.
The article says the “SRD lacks code execution and containment restrictions on multiple levels of the hardware and software” (emphasis added). If custom hardware is essential to allowing security researchers expanded access, Apple still can’t comply with law enforcement demands to crack confiscated iPhones.
Ah yes, good point—I glossed over that. If the SRDs really are special hardware, that addresses my concern almost completely. There’s still the concern of one of these things falling into the wrong hands, but I’ll bet Apple will have remote kill switches for them too.
Join the discussion in the TidBITS Discourse forum