
iOS 13 Bugs Cause Apple to Overhaul Software Testing

If you’ve felt that Apple’s release of iOS 13 and iPadOS 13 has been exceptionally troubled, you’re not alone. iOS 13 has had more updates—eight in total, if you count the iOS 13.2.1 update for the HomePod—in its first two months than any of its predecessors. iOS 12 required only two updates in the same time period. A Bloomberg report by Mark Gurman has revealed that Apple knew internally that iOS 13 was a mess even before the company announced it at WWDC in June 2019.

According to Gurman, Apple is now changing its testing procedures in response. Daily test versions will have unfinished and buggy features disabled by default, with options to enable them individually, which will help Apple isolate which new features are causing problems. According to Gurman, so many teams at Apple were adding features—along with the inevitable bugs—to iOS 13 that the internal test versions were unusable even to Apple testers. Apple has already adopted this new process in the development of iOS 14. However, don’t expect a scaled-back release like iOS 12; Gurman says that iOS 14 is expected to have as many features as iOS 13, even though Apple is planning to push some features beyond the initial release.
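To make the reported change concrete, here is a minimal sketch of the general feature-flag pattern Gurman describes, written in Swift. The Feature cases, the FeatureFlags type, and the use of UserDefaults are all invented for illustration; they say nothing about how Apple's internal tooling actually works.

```swift
import Foundation

/// Hypothetical feature-flag registry illustrating the pattern Gurman describes:
/// unfinished features ship in daily builds but stay off until a tester
/// explicitly enables them. All names here are invented for illustration.
enum Feature: String, CaseIterable {
    case redesignedRemindersSync
    case experimentalMultitasking
}

struct FeatureFlags {
    private let defaults: UserDefaults

    init(defaults: UserDefaults = .standard) {
        self.defaults = defaults
    }

    /// Unfinished features default to "off", so a bug in one of them
    /// cannot make the whole daily build unusable for other teams.
    func isEnabled(_ feature: Feature) -> Bool {
        (defaults.object(forKey: feature.rawValue) as? Bool) ?? false
    }

    /// Testers opt in to a single feature, which makes it possible to
    /// trace a regression back to one specific change.
    func setEnabled(_ enabled: Bool, for feature: Feature) {
        defaults.set(enabled, forKey: feature.rawValue)
    }
}

// Usage: gate new code paths on the flag; everything else behaves as shipped.
let flags = FeatureFlags()
if flags.isEnabled(.redesignedRemindersSync) {
    // run the new, still-unstable sync path
} else {
    // fall back to the existing, known-good behavior
}
```

The point of the pattern is isolation: a half-finished feature stays dark by default, so daily builds remain usable, and a regression can be traced to whichever single flag a tester turned on.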

Miscellaneous iOS 13 features

Unfortunately, Gurman’s report doesn’t mention Apple addressing many of the other problems outlined by former Apple engineer David Shayer in “Six Reasons Why iOS 13 and Catalina Are So Buggy” (21 October 2019), such as crash reports failing to capture bugs that don’t cause crashes, less important bugs being triaged such that they’re never fixed, older bugs being ignored in favor of new ones, and automated testing being underused. And—perhaps most important—iOS will inevitably continue to become more complex. We hope Apple’s new testing procedures will make things better, but we’re reserving judgment until iOS 14 ships.


Comments About “iOS 13 Bugs Cause Apple to Overhaul Software Testing”

Notable Replies

  1. I have no evidence, but I will tell you what I expect is the case.

    Apple developers, like so many of us, do not know what a successful test is: They think software testing is running tests to see if the software works as intended. If it works, they think that is a successful test.

    I don’t want to be ruler of all software, but if the role were forced on me, my first executive order would be that no person could work in software development unless a) they had studied The Art of Software Testing and b) they understood and agreed with what Glenford J. Myers wrote in that book. He wrote it 40 (count 'em, forty) years ago.

    Software testing is exercising a program with the intent of causing it to malfunction.

    “A successful test case is one that detects an as-yet undiscovered error. [Myers]”

    I have not read the 3rd Edition, which Wiley is now selling. I hope they have not watered down the message. It should all be in Chapter 2, “The Psychology and Economics of Program Testing” (fourteen pages). It’s in the book!

    The rest of the book is details based on the facts of Chapter 2. The 3rd Edition was published 8 years ago, so the remaining chapters will not be right up to date. That does not matter. Unless it has been watered down, the message is in Chapter 2.

  2. I guess the software developers are hanging out for quantum computer apps to thoroughly test conventional computer software, since quantum computers are supposed to execute every possibility!
    Until then they seem to be complacent.
    🙂
    I foreshadowed this in 1999:
    http://users.tpg.com.au/users/aoaug/qtm_comp.html

  3. Software should be tested by people who have no idea how it is supposed to work.

  4. Although I believe I understand the advantage of doing so, it should not exclude testing by people who know exactly how it is supposed to work. Otherwise it could be released with bugs that prevent its intended use.

  5. Software developers generally test their work as they go. But they know how it is supposed to work and how they intend it to be used; so they are naturally testing the intended use, since that is what they are in the process of creating.

    People who don’t know anything about the software are likely to behave more randomly in terms of what they click on, what data they enter, etc. So they are more likely to cause the program to go through unintended (and likely untested) sequences of code, a good way to find bugs.

    Ideally, those less informed users should be watched as they use an app, because that can be informative about how well designed the user interface is. If they are struggling, then design changes may be called for.

  6. Back in the old days, Apple was known to conduct a lot of such testing. They had human factors engineers and psychologists observe and document how people carried out certain tasks they had been given. They compared results when they gave people different methods to do something and used this as another (heuristic) metric for usability. A lot of effort went into that, and while it was of course kept hush-hush, it was still published and known that Apple was serious about such testing. I have no idea if anything like that is still done these days. Interface guidelines don’t seem to be taken that seriously anymore either. I guess it’s entirely possible Apple still does a lot of that type of testing and is just much better at keeping it quiet now, but honestly, I have my doubts.

  7. Yes, Duane! “Software should be tested by people who have no idea how it is supposed to work. [Duane Williams]”

    Yes, Al! Software should be tested “by people who know exactly how it is supposed to work. [Al Varnell]”

    Yes, Duane, there should be extensive usability testing and there should be strict testing of conformance to guidelines (even if the decision may sometimes be made to go against a guideline: that’s how improvement to guidelines can happen, after field testing of proposed changes).

    AND Software should be tested by people who are paid to produce successful tests:

    “A successful test case is one that detects an as-yet undiscovered error. [Myers]”
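To make the Myers distinction quoted above concrete, here is a minimal sketch in Swift using XCTest. The parseVersion function and every test case are invented for illustration; they do not come from the article, the comments, or Myers’ book.

```swift
import XCTest

/// Hypothetical function under test (not from the article) that parses a
/// version string such as "13.2.1" into its numeric components.
func parseVersion(_ string: String) -> [Int]? {
    let parts = string.split(separator: ".", omittingEmptySubsequences: false)
    var numbers: [Int] = []
    for part in parts {
        // Reject anything that is not a non-negative integer.
        guard let value = Int(part), value >= 0 else { return nil }
        numbers.append(value)
    }
    return numbers.isEmpty ? nil : numbers
}

final class VersionParsingTests: XCTestCase {
    /// The "does it work as intended?" style of test the commenter criticizes:
    /// it only confirms the happy path the developer already had in mind.
    func testParsesWellFormedVersion() {
        XCTAssertEqual(parseVersion("13.2.1"), [13, 2, 1])
    }

    /// Tests written in the Myers spirit: each one tries to make the code
    /// malfunction, so a failure here would be a "successful" test because
    /// it would have uncovered a previously unknown error.
    func testRejectsMalformedInput() {
        XCTAssertNil(parseVersion(""))          // empty string
        XCTAssertNil(parseVersion("13..1"))     // missing component
        XCTAssertNil(parseVersion("13.-2.1"))   // negative component
        XCTAssertNil(parseVersion("13.two.1"))  // non-numeric component
        XCTAssertNil(parseVersion("9999999999999999999999.0")) // overflow
    }
}
```

The first test only confirms the behavior the author already expected; the second set is written to break the function, so any failure it produces is, in Myers’ sense, a successful test.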

Join the discussion in the TidBITS Discourse forum

Participants

jcenters, Simon, alvarnell, duanewilliams, mpainesyd, tidbits44