Apple Computer, Going Forward into 2004
In past years, I’ve used the first issue of the new year to attempt to predict some of the big stories of the coming months, with varying degrees of success. You can check for yourself how well I did over the last few years; this year I want to change direction slightly and look at where Apple should go.
Some of these ideas come from the keynote I gave at the O’Reilly Mac OS X Conference in October of 2003; after that talk, Tim O’Reilly and I were chatting about Apple’s inconsistency in emphasizing a particular technology or feature in one product, but ignoring it entirely in another. As Tim said, iTunes has some truly neat features, such as Rendezvous-based discovery of shared music; wouldn’t it be useful if iPhoto had Rendezvous-based discovery of shared photos? To put it another way, from the outside, it looks like the "Not Invented Here" syndrome (any technology we didn’t invent isn’t worthwhile) scaled down to the product group.
Let’s look at specific areas where Apple can and should focus their efforts to improve both their products and the Macintosh experience for the rest of us.
Proactive Hardware Testing — Mac OS X has radically improved the overall stability of the Macintosh, although I’ve found Panther to be quirkier than Jaguar, with occasional inexplicable hard freezes on my dual-processor 1 GHz Power Mac G4. Other TidBITS staffers ran into trouble with third-party RAM modules that worked fine in Jaguar, but caused constant crashes in Panther.
Specific troubles will come and go; as long as the overall trend is toward more reliability, I won’t complain too loudly. My larger concern is that we still lack sufficient diagnostic and reporting mechanisms to identify and isolate bad hardware of all sorts before it causes trouble. How do you tell if a FireWire or USB device is bad, for instance? Trial and error, just like the old days.
Apple is making some steps in the right direction. Panther’s Disk Utility can now report on the SMART status of your hard drives. SMART (Self-Monitoring, Analysis, and Reporting Technology) is a feature built into most modern hard drives. By monitoring numerous variables about the drive’s operation, SMART can theoretically predict when a drive is about to fail. It won’t predict catastrophic failures, but identifying failures that will occur because of cumulative small problems over time is a good start. Open Disk Utility, select the drive (not a volume) in the left column, and look at the bottom of the window for the SMART status.
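The same SMART status is available from the command line via Mac OS X’s diskutil tool, and a script could poll it. Here’s a minimal sketch of how a monitoring script might pull the status out of diskutil’s output; the sample text below stands in for real command output, and the exact formatting of that output is an assumption.

```python
# Hypothetical sketch: extract the "SMART Status" line from the output
# of `diskutil info disk0` on Mac OS X. SAMPLE_OUTPUT is stand-in text,
# not captured from a real machine.

SAMPLE_OUTPUT = """\
   Device Identifier:        disk0
   Device Node:              /dev/disk0
   SMART Status:             Verified
"""

def smart_status(diskutil_output: str) -> str:
    """Return the reported SMART status, or 'Not Supported' if absent."""
    for line in diskutil_output.splitlines():
        if line.strip().startswith("SMART Status:"):
            return line.split(":", 1)[1].strip()
    return "Not Supported"

print(smart_status(SAMPLE_OUTPUT))  # a healthy drive reports "Verified"
```

A periodic script built on something like this could warn the user long before Disk Utility is ever opened by hand.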
Similarly, Panther now reports actual network status in the Network preference pane, including telling you if a cable is unplugged or if you have a self-assigned IP address, both of which can save significant troubleshooting time. A final good move is that Apple now provides (with at least some new Macs; I can’t vouch for all of them) the Apple Hardware Test utility for testing many of a Mac’s hardware subsystems. Look on the discs that come with your Mac; for my 12-inch PowerBook G4, Apple Hardware Test is on the Software Install and Restore DVD, and you launch it by holding down Option at startup and then selecting it instead of a boot disk.
I’d like to see Apple wrap all these pieces together and figure out a way to run them automatically. I learned about the Apple Hardware Test utility only when the Small Dog Electronics tech support staff asked me to run it to help track down the cause of the constant kernel panics and hard freezes I was seeing on the PowerBook (it was bad RAM). A better approach might be to have a program, perhaps in firmware, look for any hardware changes at startup and automatically offer to test the new device, and then to run the tests on an automatic basis alongside the regularly scheduled Unix cleanup tasks that take place while the Mac is idle.
Such a feature might not seem sexy, but I think it would be easy for Apple to justify on marketing grounds alone, not to mention the reduced tech support costs and increased customer satisfaction it would bring. Call it "Proactive Testing" and sell it as a technology that ensures that the user’s Mac will perform some basic verification on anything the user adds, be it RAM, an internal or external hard drive, or a USB device. Everyone knows that computers can be a royal pain when they don’t work right; being able to identify bad hardware before it causes trouble would be a big win for Apple.
Emphasize Identity — Projects like the Virginia Tech Terascale Cluster of 1,100 Power Mac G5s aside, the Mac has always concentrated more on communication than number crunching. From day one, the Macintosh has been a tool for communicating with others, with MacWrite and MacPaint in the early days, and, more recently, via the Internet with a slew of applications like iChat AV, iTunes, and iPhoto. Apple has done a good job with most of these applications, providing better interfaces, integrating communications technologies into Mac OS X, and building a better mousetrap with small devices like the iPod and iSight.
What should come next? An emphasis on identity. Central to all communication is the concept of identity, but Apple’s software and services are extremely fuzzy about identity at the moment. Think about all the places you must identify yourself: when logging into your Mac, when connecting to file servers, when sending email, when connecting to many Web sites (including Apple’s own .Mac and Apple Store), when using iChat, when sharing music via iTunes, and more. For each of those places, you probably have a different way of identifying yourself, but you’re the same person in each place.
Why do I have to identify myself repeatedly to access my own accounts on other Macs on my network? Why, when I access my own shared music in iTunes with my other Macs, do I have no more privileges to create playlists or manipulate songs than any other user? Why do I have to remember different user names and passwords not just for multiple Web sites, but for multiple Web services run by Apple itself? Where is Apple’s vaunted hardware, software, and Web services integration when it comes to identity?
Identity, and the associated concept of permissions, will be key to communications in the future. We need ways both to identify ourselves to others and to identify people wanting to communicate with us. Even more important, we need ways of saying exactly what each person is allowed to do in a particular communication scenario. Apple has the building blocks necessary to give Mac users the opportunity to create and use a single electronic identity, and Apple has more of a chance to anoint a standard than almost any other company out there. (If Microsoft had a better reputation for trustworthiness and had chosen to make .NET Passport an open standard, I think it would have had a much larger impact.)
I’ve long believed that identity will be a major issue; it’s why I put a lot of effort into the XNS (eXtensible Name Service) effort a few years back. Unfortunately, XNS suffered from intellectual property baggage and a confusing split of management responsibilities and capabilities between XNSORG (the non-profit organization I chaired) and OneName Corporation (the company that developed XNS and licensed it to XNSORG). Also damaging was OneName’s difficulty in finding funding; the company has survived, but only by the skin of its teeth, and it is currently reorganizing under Chapter 11 bankruptcy proceedings. XNSORG has essentially been in limbo for much of this time as well.
Despite these discouraging signs, and thanks almost single-handedly to the efforts of Drummond Reed, XNS is still alive, and chunks of the technical specification are now under the auspices of OASIS, a global non-profit standards consortium. The Extensible Resource Identifier specification was approved last month by the OASIS XRI Technical Committee, and the charter for a second OASIS technical committee for the XDI (XRI Data Interchange) specification was also submitted last month. With luck, having these technical specifications at OASIS will make it easier for other organizations, like Apple, to start developing identity services based on these open standards.
The Open/Proprietary Balance Beam — Mac OS X has been a grand experiment in melding the best of the open source world and the best of the proprietary source world, building Apple’s legendary ease-of-use on top of the equally legendary power of an open-source Unix implementation. Even at higher levels, Apple has done a good job of creating open frameworks that enable developers to create applications quickly and easily. Some are relatively well known, such as the WebKit framework that significantly simplifies building Web access into an application. Other examples are less well known, such as Address Book, Apple’s system-wide contact database.
To users, Address Book appears to be a relatively simple application for tracking contacts, but in fact it’s just a front end to the Address Book framework, and that in turn is just a way of talking to an open source database called MetaKit. As a result, any application can read from and write to the Address Book database, picking and choosing which fields interest them, adding custom fields, and even marking fields as coming from a specific application. An ever-increasing number of applications take advantage of these capabilities now.
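The design idea is worth spelling out: one shared store, with standard fields any application can read, plus custom fields namespaced by the application that added them. This sketch is purely conceptual (it is not the real Address Book or MetaKit API, and all names in it are illustrative), but it shows why the approach keeps applications from stepping on each other’s data.

```python
# Conceptual sketch of a system-wide contact store: standard fields are
# shared by everyone, while custom fields are tagged with the name of
# the application that added them. Not the real Address Book API.

class ContactStore:
    def __init__(self):
        self.contacts = {}  # name -> {field key: value}

    def set_field(self, name, field, value, app=None):
        record = self.contacts.setdefault(name, {})
        # App-specific fields are namespaced so apps can't collide.
        key = f"{app}:{field}" if app else field
        record[key] = value

    def get_field(self, name, field, app=None):
        key = f"{app}:{field}" if app else field
        return self.contacts.get(name, {}).get(key)

store = ContactStore()
store.set_field("Tonya", "email", "tonya@example.com")            # standard field
store.set_field("Tonya", "last_chat", "2004-01-05", app="iChat")  # iChat's own field

print(store.get_field("Tonya", "email"))                   # shared by all apps
print(store.get_field("Tonya", "last_chat", app="iChat"))  # visible to iChat
```

Because every application talks to the same store, a chat program, a mail client, and a calendar all see one consistent set of contacts rather than three private copies.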
There are at least two huge benefits to creating these frameworks. Developers are empowered: as difficult as it might seem to compete with Safari, the effort of doing so is significantly reduced thanks to the WebKit framework. (In fact, OmniWeb 5, a pre-release version of which will be shown at Macworld Expo this week, uses WebKit to match Safari’s rendering while offering Omni Development’s take on what a Web browser interface should be.) But the more interesting benefit is that you don’t have to worry about your data being held hostage. With Address Book, for instance, if you didn’t like Apple’s interface, you could, at least in theory, use a front-end from another developer.
But Apple isn’t consistent here. If iPhoto were merely Apple’s own front-end to a generic image cataloging framework, instead of the proprietary solution it is today, users wouldn’t have to worry about entrusting not just all their photos to iPhoto, but also all the metadata (titles, keywords, and albums) that gives meaning to those photos. Also, if developers could extend or even replace portions of iPhoto’s functionality, those people for whom iPhoto falls short would have significantly more options.
My point here is that although some parts of Apple understand the concept of building on top of open frameworks, the religion clearly hasn’t spread everywhere within the company. Closed solutions like iPhoto may serve the purpose of making the Mac attractive to consumers, but they don’t let other developers make the Mac even more attractive to consumers.
I Need More Speed, Scotty! — Performance is a tricky topic to complain about, because it’s easy for Apple to rely on the speed increases that come with hardware improvements, particularly when doing so guarantees obsolescence and encourages Mac users to buy new computers every few years. If older Macs didn’t seem slow running the latest software, many people would stick with them even longer than they do now, which would reflect poorly on Apple’s bottom line.
Mac OS X’s performance was nearly unusable in 10.0, distinctly usable in 10.1, and perfectly reasonable in 10.2, and Apple seems to have decided that perfectly reasonable is all that’s necessary. On some machines, Panther seems a little faster than Jaguar, but on my dual-processor 1 GHz Power Mac G4, I see the spinning pizza of death far more frequently in Panther than I ever did in Jaguar, and overall, the Mac just feels more sluggish. Either way, I’m disappointed because Mac OS X has never achieved the snappiness of Mac OS 9, and given the kind of hardware we have in today’s computers, I expect blazing speed.
So as much as I’d like to see Mac OS X increase in speed significantly, I don’t expect it to happen, since I don’t think Apple wants to spend the engineering resources necessary to do it, particularly when maintaining the status quo keeps money from hardware upgrades flowing in.
That said, there are three other performance-related areas where Apple could and should improve the Macintosh experience. All rely on improvements in functional performance, not computational performance. Computational performance involves increasing the speed of the processor or streamlining code so it executes more quickly; functional performance, which matters more, involves improving ease-of-use so users can complete tasks more quickly. Ironically, functional performance usually comes at the expense of computational performance, since a program consumes CPU cycles to simplify user actions — the sheer popularity of the graphical interface is evidence of the importance of spending CPU time on functional performance.
First, Mac OS X’s dreaded spinning pizza of death indicates that an application isn’t responding, either because it crashed, at which point there’s no recovering, or because it’s performing a lengthy task. In Mac OS 9, you could often interrupt those lengthy tasks with Command-period; I’d like to see that capability return to Mac OS X. It’s so frustrating to make a mistake that results in an application going away for a few minutes without being able to interrupt the task, and it’s especially frustrating when you’re waiting for the Finder to realize that a network volume isn’t accessible.
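Supporting interruption isn’t magic; it mostly requires that lengthy operations check for a cancellation request between units of work, the way Mac OS 9 applications polled for Command-period. Here’s a minimal sketch of that pattern (the function and variable names are illustrative, not any actual Apple API):

```python
# Sketch: a lengthy task made interruptible by checking a cancellation
# flag between units of work, rather than running to completion blindly.

import threading

def lengthy_task(items, cancelled: threading.Event):
    processed = []
    for item in items:
        if cancelled.is_set():      # the user pressed the interrupt key
            break                   # stop cleanly, keep partial results
        processed.append(item * 2)  # one unit of work
    return processed

cancel = threading.Event()
cancel.set()  # simulate the user interrupting immediately
print(lengthy_task([1, 2, 3], cancel))  # → []

cancel.clear()
print(lengthy_task([1, 2, 3], cancel))  # → [2, 4, 6]
```

The key design point is that cancellation is cooperative: the task must be written to look for the flag, which is exactly the discipline Command-period imposed on classic Mac applications.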
Second, remember Hiro Yamamoto’s Boomerang utility from the days of yore? It simplified opening files and folders that you’d opened recently, and although Boomerang itself was both enhanced (to Super Boomerang) and imitated over the years, relatively few developers picked up on its overall lesson. Boomerang understood that your recent past actions were the ones you were most likely to want to repeat. To generalize that point, existing data can be used to simplify (and therefore speed up) future actions. It’s a subtle and valuable lesson, and some of Apple’s software understands it. Address Book can automatically complete many fields based on data in those fields for existing contacts; the Displays menu shows just your recently selected resolution switches; and a quick press of Command-Tab switches between your two most recently used applications rather than showing the entire list of running programs. But too many applications don’t understand Boomerang’s lesson, or don’t implement it fully, so although Safari can auto-complete URLs based on pages you’ve previously visited, it can do so only if you start typing from either the beginning of the URL or the part of the domain after www. And iPhoto currently does nothing to aid the organization and categorization of newly imported photos based either on comparisons with previously imported photos or your previous actions. Every product team, whether inside Apple or working on an independent application, should think hard about how their software could do more work for the user based on past data or actions.
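Boomerang’s core mechanism is simple enough to sketch in a few lines: keep a most-recently-used list, bump an item to the front each time it’s opened, and offer recent matches first when completing a partial name. This is a minimal illustration of the idea, not a reconstruction of Boomerang itself:

```python
# Sketch of Boomerang's lesson: track recently used items and offer the
# most recent matches first when completing a partial name.

from collections import OrderedDict

class RecentItems:
    def __init__(self, limit=10):
        self.items = OrderedDict()  # insertion order doubles as recency order
        self.limit = limit

    def touch(self, name):
        self.items.pop(name, None)          # re-opening moves it to the front
        self.items[name] = True
        while len(self.items) > self.limit:
            self.items.popitem(last=False)  # forget the oldest item

    def complete(self, prefix):
        # Walk from most recent to least recent.
        return [n for n in reversed(self.items) if n.startswith(prefix)]

recent = RecentItems()
for f in ["Budget.xls", "Letter.doc", "Budget 2004.xls"]:
    recent.touch(f)

print(recent.complete("Budget"))  # → ['Budget 2004.xls', 'Budget.xls']
```

Any application with an Open dialog, a URL field, or a recipient list could apply exactly this pattern to whatever the user touched last.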
Third is a frustration with system instrumentation that Tim O’Reilly has articulated on numerous occasions. Someone at Apple needs to analyze which of Apple’s products have come up with highly functional interface elements that help users work faster, and work to spread those ideas to other product groups. For instance, iTunes can track which songs you play most frequently using the Top 25 Most Played playlist; offering a similar option in iChat and Mail for the people with whom you chat or email the most would also have significant benefits. (This example actually overlaps with the Boomerang lesson, since the people you’ve communicated with recently are probably the people you’ll want to communicate with next, too.) How should this spread of ideas take place? Tim’s idea, and it’s a good one because it would serve developers outside Apple as well, is that we need a systematic document, akin to the original Apple Human Interface Guidelines, that would look at the best practices of existing applications and talk about how interconnected applications should behave.
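Where Boomerang ranks by recency, the Top 25 Most Played playlist ranks by frequency, and the same one-liner works for correspondents as for songs. A minimal sketch of what a "people you contact most" list in Mail or iChat might compute (the data here is invented for illustration):

```python
# Sketch of the "Top 25 Most Played" idea applied to correspondents:
# rank people by how often you've exchanged messages with them.

from collections import Counter

def top_contacts(message_log, n=3):
    """message_log is a list of correspondent names, one per message."""
    return [name for name, count in Counter(message_log).most_common(n)]

log = ["Tonya", "Geoff", "Tonya", "Mark", "Tonya", "Geoff"]
print(top_contacts(log))  # → ['Tonya', 'Geoff', 'Mark']
```

The hard part isn’t the counting; it’s Tim’s point that someone has to notice the pattern in one product and carry it deliberately to the others.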
File Sharing, The Next Generation — The Macintosh broke new ground with the introduction of Personal File Sharing because it enabled any Mac to act as a file server. We’ve come a long way since then, as evidenced by the hefty list of file sharing protocols in Mac OS X’s Sharing preference pane, and Rendezvous gave us back some of the ease of device discovery we lost in the switch from AppleTalk to TCP/IP as the primary networking protocol.
Despite these many new ways of sharing files and improvements like Rendezvous, Apple has done nothing with the most important development in file sharing over the last decade: peer-to-peer file sharing. We’re still stuck in a device-driven model of file sharing, so if you want to work with a shared file, you must know what server holds that file, have a user name and password, and explicitly make a connection to that server. And all that assumes you can make a network connection to the server.
File sharing doesn’t have to be so limited, and in fact, a product called DataClub from the early 1990s showed an alternative. With DataClub, every Mac on the network devoted some space to a "cloud" that could hold shared files. All the files in the cloud, no matter which Mac actually held them, appeared as though in a single volume, and if the Mac that held a particular file was turned off, that file was greyed out to indicate it wasn’t available.
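The DataClub model boils down to a small amount of bookkeeping: the cloud records which Mac holds each file, and the listing shows every file regardless, marking the unreachable ones rather than hiding them. Here’s a toy sketch of that model (all names are illustrative; real DataClub was of course far more involved):

```python
# Toy sketch of DataClub's model: one logical volume whose files live
# on different machines, with files from offline hosts greyed out
# rather than hidden.

class Cloud:
    def __init__(self):
        self.files = {}     # filename -> host Mac that stores it
        self.online = set() # hosts currently reachable

    def share(self, filename, host):
        self.files[filename] = host
        self.online.add(host)

    def set_online(self, host, up):
        (self.online.add if up else self.online.discard)(host)

    def listing(self):
        # Every file appears in the single virtual volume; unavailable
        # ones are marked, not removed.
        return {f: ("available" if h in self.online else "greyed out")
                for f, h in self.files.items()}

cloud = Cloud()
cloud.share("Budget.xls", "TonyasMac")
cloud.share("Notes.txt", "AdamsMac")
cloud.set_online("AdamsMac", up=False)   # AdamsMac is shut down
print(cloud.listing())
# Budget.xls stays available; Notes.txt is greyed out
```

Replication, which I discuss below, is the obvious next step: store each file on two hosts and a single powered-off Mac no longer greys anything out.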
Since the days of DataClub, we’ve seen the rise of peer-to-peer file sharing networks like Napster, Kazaa, eDonkey2000, and a host of others. But none of that technology has appeared in the Mac OS, perhaps due to the fear that the uninformed would somehow equate it with trading of music or other copyrighted content.
I’d like to see Apple build peer-to-peer file sharing technologies into Mac OS X so a network of Macs and Macintosh users could become more powerful and flexible than the mere sum of its parts. Multiple copies of shared files could be stored across multiple machines, eliminating the problem of a turned-off Mac or traveling PowerBook. Browsing for servers would become a thing of the past; shared files would simply be accessible at all times in what seemed like a local set of folders. Such a system could even automatically maintain backups of data that wasn’t shared by distributing it in encrypted chunks around other Macs on the network. All this would work best over a high-speed local network, but designed and implemented properly, nothing would stop it from extending to the Internet.
File System Databases — To finish this look into a possible future direction for Apple and Mac OS X, think briefly about what my ideas about a cloud of shared files would require at a slightly lower technical level. How would you mark a file to be shared, and how would you determine who would be allowed to access the file, and in what ways? On the other side of the equation, how would another user find the files you shared, and what would happen if that person started editing a file while someone else had it open?
The answer to all these questions lies in treating files as objects in a database, since database technology long ago addressed all these issues with permissions, users and groups, metadata, and record locking. But the file system that manages files on disk is a rather poor database, and putting a robust, high-performance database underneath the file system might be necessary for such a peer-to-peer file sharing system to work effectively. This isn’t a new concept — I hinted at it in a TidBITS April Fools article from 1992, and talked more about the utility of a system-level database in a pair of articles in 1996.
With a database underneath the file system, folders in the Finder could be more generic containers whose contents would be based on constantly updated searches, so they could hold all files shared by a particular user, for instance, or all files available to a particular group. Additional metadata could further refine the list of available files, enabling you to limit the available files to those created by a specific program, or during a specific date range. Record-locking could prevent one user from overwriting another user’s changes when two people wanted to use the same file simultaneously. In some situations, the locking could perhaps take place at a more granular level than the entire file, so multiple people could work on the same file simultaneously without stepping on each other’s changes, much as SubEthaEdit provides real-time collaborative text editing now.
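To make the "folder as constantly updated search" idea concrete, here’s a small sketch using SQLite as a stand-in for the hypothetical system-level database; the table layout and data are invented for illustration, but the point is that a folder’s contents become nothing more than the result of a stored query over file metadata.

```python
# Sketch: with file metadata in a real database, a "folder" is just a
# stored query. SQLite stands in for the hypothetical system database.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE files
              (name TEXT, owner TEXT, creator_app TEXT, created TEXT)""")
db.executemany("INSERT INTO files VALUES (?, ?, ?, ?)", [
    ("Report.doc", "adam",  "Word",   "2003-12-01"),
    ("Photo1.jpg", "tonya", "iPhoto", "2003-11-15"),
    ("Budget.xls", "adam",  "Excel",  "2004-01-02"),
])

# A "folder" showing all of one user's files created in a date range:
folder_query = """SELECT name FROM files
                  WHERE owner = ? AND created BETWEEN ? AND ?
                  ORDER BY name"""
contents = [row[0] for row in
            db.execute(folder_query, ("adam", "2003-12-01", "2004-01-31"))]
print(contents)  # → ['Budget.xls', 'Report.doc']
```

Add a column for the sharing group or the creator application and the same query machinery yields the group folders and per-program views described above, with no file ever needing to move on disk.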
The idea of a database underneath the file system is not only not new, it’s where Microsoft is going with WinFS in the next major revision to Windows, due at some point in the next few years.
The Next Step — I won’t pretend that these suggestions are easy, but it’s clear that at least some groups at Apple understand their importance, so I hope my words won’t fall on deaf ears. The question is whether Apple management can identify the most powerful of these general concepts and appropriately spread them to other products, so the overall Mac experience can benefit from the best work being done now in specific areas.