So why do Web pages aimed at Windows users have such tiny text? Geoff Duncan explains it all this issue, with a look at points, picas, pixels, and how the Mac OS and Windows render fonts differently. Adam weighs in with some thoughts on the permanence of URLs on the Web (and how to deal with broken URLs), and, in the news, we see Macintosh Runtime for Java 2.1, Action Files 1.2, ShareWay IP 2.0, and MasterJuggler 2.0.2.
Action Files 1.2 Usurps Nav Services — The latest update to Power On Software’s popular Action Files now overrides Apple’s Navigation Services under Mac OS 8.5 in favor of the enhanced Open and Save dialog boxes provided by Action Files, plus rolls in a handful of fixes. (See "Get a Piece of the Action Files" in TidBITS-434). Currently, few applications support Navigation Services, which replaces the traditional Open and Save dialog boxes in the Mac OS; Action Files 1.2 now appears in place of all applications’ Open and Save dialog boxes. Additionally, the update features increased compatibility with Kaleidoscope, works better with aliases and volumes exceeding 4 GB, and allows more control over the program’s default settings. Action Files owners can download the new version (1.1 MB) from Power On Software and use it with existing registration numbers. [JLC]
MasterJuggler Catches Mac OS 8.5 Compatibility — Alsoft has released a free update to its font-management utility MasterJuggler, fixing a few incompatibilities with Mac OS 8.5. The MasterJuggler Pro 2.0.2 Update fixes a problem that occurred when Font Guardian (a component that checks the reliability of font resources) was set to scan at startup. The update also corrects minor glitches related to selecting items from MasterJuggler’s pop-up menus. See "Font Outfitters" in TidBITS-334 for a comparison of MasterJuggler and Suitcase 3.0. The 2.0.2 updater is a 350K download. [JLC]
Open Door Networks Releases ShareWay IP 2.0 — Open Door Networks has shipped ShareWay IP 2.0, an upgrade to its useful networking utility for making Macs with Personal File Sharing or older AppleShare servers accessible via TCP/IP (see "Share and Share IP Alike" in TidBITS-436). The main new feature in ShareWay IP 2.0 is support for SLP (Service Location Protocol), a method of dynamically locating and accessing servers, much like using the AppleTalk Chooser. Essentially, ShareWay IP 2.0-based servers register themselves through SLP, and then users can use Open Door’s bundled AFP Engage 2.0 to display a list of available servers. Other new features in 2.0 include a background-only mode of operation, real-time logging, and enhanced security features. Due to limitations in Apple’s SLP plug-in, only the Personal and Standard editions of ShareWay IP have been upgraded to 2.0; a future version of the SLP plug-in should enable Open Door to release a version of the multi-server ShareWay IP Professional. Upgrades are free for purchases made after 03-Jan-99 and otherwise cost between $34 and $89 for single-user licenses, depending on the version of ShareWay IP. Discounts are available for site license upgrades, and evaluation copies are available for download. [ACE]
MRJ 2.1 Runs Faster, Works with Explorer — Apple has released Macintosh Runtime for Java 2.1, which offers substantial performance improvements over previous versions of MRJ and can be used with Microsoft’s Internet Explorer Web browser. (Microsoft dropped support for their own Java virtual machine immediately after Sun won a preliminary injunction against Microsoft’s Java implementations; see TidBITS-456.) MRJ 2.1 complies with Sun’s Java JDK 1.1.6 specification, optionally supports Sun’s Java Foundation Classes and Swing interface toolkit (versions 1.0.3 and 1.1), and adds support for AppleScript (although we don’t know how useful this will be in the short term). Although MRJ 2.1 still doesn’t run every Java applet under the sun (in fact, the release notes warn users away from Yahoo Games while Apple investigates problems), MRJ 2.1 works with Internet Explorer – just make sure MRJ is selected as Explorer’s Java virtual machine – and Java development environments for the Macintosh. Netscape browsers use their own Java implementation and can’t currently use MRJ.
MRJ 2.1 requires a PowerPC-based Mac running Mac OS 7.6.1 or later (Mac OS 8.1 or later recommended) with at least 24 MB of RAM, 20 MB of free disk space, and Open Transport 1.1 or better. MRJ 2.1 installs Text Encoding Converter 1.4.2 on all systems, and if you’re using Mac OS 7.6.1, it installs version 1.0.3 of the Appearance control panel. Note that MRJ 2.1 does not ship with the Apple Applet Runner; you can use Apple’s Applet Runner 2.0 (which shipped with MRJ 2.0, and hence with Mac OS 8.1 and higher) or download it with the MRJ SDK from Apple’s developer Web site. MRJ 2.1 is a 7.8 MB download. [GD]
Last week’s issue of TidBITS had the second installment of our sporadic Tools We Use column; the first installment covered NewerRAM’s useful little GURU utility. Several readers followed the link in last week’s article to the GURU article and then discovered to their horror that the URL we gave for GURU back in November of 1998 no longer worked.
We often receive notes like this, since many companies don’t think ahead when redesigning Web sites and in the process break existing URLs, a process colloquially called "linkrot." Most people are quite nice about the fact that old issues of TidBITS point at broken URLs, but there’s often just a hint of irritation: why haven’t we dealt with this linkrot already?
Historical Accuracy — One of our most strongly held and frequently assaulted beliefs is that our content is almost immutable – the only thing we ever change after distributing an issue is a typographical error. Our reason for this policy is that we’re adherents to the concept of historical accuracy. We feel that if we wrote something in the 30-Nov-98 issue of TidBITS, those words should remain fixed forever. Otherwise, how could anyone viewing that issue know they were reading the same text we published on that date? And though we flatter ourselves with this thought, what about historians in the future, attempting to divine what it was about the Macintosh community that set it apart from other groups of computer users? We want to present the future with an accurate view of the past.
This policy is often tested because it’s tempting, even for us, to go back in to fix mistakes. We don’t want to look bad, and if we could just make one tiny little change in an article… No. If we make a mistake, that mistake is set in stone, and we can correct it the next week.
These attitudes hark back to a publishing world that existed entirely on paper, and although I’m no fan of publishing on paper purely for the sake of a physical object, paper lends itself both to information permanence and to archiving. You can’t change the words on the page in a magazine, and it’s easy to pile up all the issues of a magazine, in order, and sort through them for some piece of information. Though TidBITS is electronic, we strive to achieve a similar level of information permanence and archiving.
In short, then, you can rely on what you read in an issue of TidBITS to remain the same forever. We will never go into our archive to change content other than to fix a typo. Soon, we hope to implement forward linking in our database so that any corrections will be accessible from the original article.
In addition, we take pains to ensure that all of our URLs are permanent. Our issue naming scheme is simple and consistent, and our custom GetBITS CGI ensures that we have short, permanent URLs to individual articles in our database, not to mention threads in TidBITS Talk.
Dealing with Broken URLs — Luckily, finding a Web page again after a URL breaks isn’t difficult, assuming the page still exists. The trick is to delete pages and directories from the end of the URL until you get to a page from which you can start browsing for the desired page again. Take this URL.
If you received an error message accessing that URL, the first thing to do would be to delete "about-tidbits.html" and send the URL to the Web browser again. If that shorter URL also generated an error page, you’d delete "about/" and try once more. That takes you to the top level of the site and should provide a useful starting point for additional searching.
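That trimming procedure is mechanical enough to sketch in code. Here’s a minimal Python illustration of the technique, using a hypothetical example URL (the function and URL are ours, invented for illustration):

```python
from urllib.parse import urlsplit, urlunsplit

def fallback_urls(url):
    """Yield progressively shorter URLs by trimming one path
    segment at a time, ending at the site's top level."""
    scheme, netloc, path, _, _ = urlsplit(url)
    segments = [s for s in path.split("/") if s]
    while segments:
        # Drop the last page or directory and try the parent instead.
        segments.pop()
        yield urlunsplit((scheme, netloc, "/".join(segments) + "/", "", ""))

# Hypothetical broken URL:
for candidate in fallback_urls("http://www.example.com/about/about-tidbits.html"):
    print(candidate)
# http://www.example.com/about/
# http://www.example.com/
```

Each candidate is a reasonable place to resume browsing or searching for the page that moved.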
Binary URLs — When I posted the updated GURU URL to TidBITS Talk, a related issue came up. When posting different versions of programs, how do you deal with the fact that including the version number in the name automatically ensures broken URLs after an upgrade? If you’re distributing software, check out the thread for a variety of ways to prevent binary linkrot.
Additional Reading — Finally, Jakob Nielsen’s excellent Alertbox column has touched on this topic several times, first in relation to linkrot (apparently more than six percent of the links on the Web are broken), and later in relation to "content gardening," the practice of going back in to keep pages fresh. Jakob linked to my comments about historical accuracy and the need to avoid historical revisionism – it’s worth a read if you’re interested in the topic.
As a technical journalist, I feel compelled to address an unspoken truth of the trade: We sometimes gloss over stuff. In part, it’s unavoidable. Just as knowledge can have infinite value, conveying knowledge can require an infinite number of words. But, darn it, sometimes we leave out stuff for your own good! This world contains truths that can make your head hurt, and it’s our job to protect you, our innocent readers, from these malevolent abysses.
Such was the case with Adam’s article "Driving the 4.5 Web Browsers" in TidBITS-465, in which he noted: "Because Windows thinks monitors use a screen resolution of 96 dpi by default, rather than the Mac’s 72 dpi, Windows-based Web designers often lower the font size so text doesn’t appear too large for Windows users. Mac users are then faced with tiny text that’s hard to read."
Adam’s statement is correct, but TidBITS readers don’t know what’s good for them. More people responded to those two sentences than sometimes respond to an entire issue, asking questions about font rendering, the physical resolution of monitors, whether Windows or the Macintosh do the "right" thing, and much more.
If you can’t leave well enough alone, fine. This article turns off a few of the journalistic shields we normally employ for your benefit. Don’t say we didn’t try to protect you.
Squint-O-Vision — Most Macintosh users have encountered Web pages with unbearably tiny text. If you haven’t, spend a few minutes browsing Microsoft’s Web site – especially pages devoted to Windows itself – where it’s not uncommon for Mac users to see text one to four pixels in height.
This phenomenon isn’t limited to the Web. How often have you been forced to edit a document from a Windows user who thinks 10 point Times is a wonderful screen font? Or maybe you’ve had to review a spreadsheet formatted in 9 point Arial? Do all Windows users have some sort of telescopic vision that makes text appear larger to them?
Why, yes. They do.
Making Points — The confusion begins with a unit almost everyone uses: the point. People use points every day, choosing a 12 point font for a letter, or a 36 point font for a headline. But do you know what a point is?
Many people will tell you that a point is 1/72 of an inch. That’s correct, but only for the imaging systems used in most computers (including Apple’s QuickDraw and Adobe’s PostScript). Outside of a computer, the definition of a point varies between measurement systems, none of which defines a point as exactly 1/72 of an inch.
Technically, a point is one twelfth of a pica. What’s a pica? The first modern point system was published in 1737 by Pierre Fournier, who used a 12-point unit he called a cicero that measured 0.1648 inches. Thus, a Fournier point was a unit of length equal to 0.0137 inches. By 1770, François-Ambroise Didot converted Fournier’s system to sync with the legal French foot of the time, creating a larger 0.1776-inch pica, with 12 points each measuring 0.0148 inches. As fate would have it, the French converted to the metric system by the end of the 18th century, but Didot’s system was influential and is still widely used in Europe. In Didot’s system, a pica is larger than one sixth of an inch, and thus his point – still called the Didot point – is larger than 1/72 of an inch.
The U.S., of course, did its own thing. In 1879 the U.S. began adopting a system developed by Nelson Hawks, who believed the idea of a point system was his and his alone. Claims of originality aside, Hawks’ system came to dominate American publishing within a decade, and today an American pica measures 0.1660 inches, just under one sixth of an inch, and a point (often called a pica point) is 0.0138 inches, very close to Fournier’s original value, but still a tiny bit less than 1/72 of an inch.
Also in 1879, Hermann Berthold converted Didot’s point system to metric, and the Didot-Berthold system is still used in Germany, Russia, and eastern Europe. Just to make things more confusing, many Europeans measure type directly in millimeters, bypassing the point altogether.
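To see how these systems compare, here’s a small sketch using the per-point inch values given above (the dictionary and function names are ours; the historical values are approximate, while the PostScript/QuickDraw point is exactly 1/72 inch):

```python
# Point sizes in inches under the systems described above.
POINT_SYSTEMS = {
    "Fournier (1737)":        0.0137,
    "Didot (1770)":           0.0148,
    "U.S. pica point (1879)": 0.0138,
    "PostScript/QuickDraw":   1 / 72,   # 0.013889...
}

def size_in_inches(point_size, system):
    """Physical height of a given point size under a given system."""
    return point_size * POINT_SYSTEMS[system]

# The same nominal "12 point" type differs physically from system to system:
for name, inches_per_point in POINT_SYSTEMS.items():
    print(f"12 pt in {name}: {12 * inches_per_point:.4f} in")
```

The spread is small per point, but it adds up over a full page of set type, which is why the systems were never interchangeable.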
The term pica may confuse readers old enough to remember typewriters and daisy wheel printers. Those technologies describe type in terms of pitch, or how many characters fit into a horizontal inch. Pica type corresponded to 10 characters per inch, elite to 12 characters per inch, and micro-elite to 15 characters per inch. These days, you’d simulate these pitches using a monospaced font (like Courier) at 12, 10, and 8 points, respectively.
For the purposes of understanding why text on Windows Web pages often looks too small on a Macintosh, you can do the same thing your computer does: assume there are 72 points to an inch.
Not Your Type — When you refer to text of a particular point size, you’re describing the height of the text, rather than its width or the dimensions of a particular character (or glyph) in a typeface. So, if there are 72 points to an inch, you might think 72 point characters would be one inch in height – but you’d almost always be wrong. The maximum height of text is measured from the top of a typeface’s highest ascender (generally a lowercase d or l, or a capital letter) to the bottom of the face’s lowest descender (usually a lowercase j or y). Most glyphs in a typeface use only a portion of this total height and thus are less than one inch in height at 72 points.
If this doesn’t make sense, think of a period. At any point size, a period occupies only a small fraction of the height occupied by almost every other character in a typeface. When you’re typing 72 point text you don’t expect a period to be one inch high. Lowercase letters generally use less total vertical space than capital letters, and capital letters typically use about two thirds of the complete vertical height. (If you’re curious, the tallest letter in a typeface is often the capital J: it sometimes has a descender even when capitalized.)
To make things more confusing, some typefaces break these rules. Special symbols, diacritics, and punctuation might extend beyond the limits specified by a point size, although it’s rare for a single character to exceed both the upper and lower limits. Other typefaces may not use the full vertical height available; that’s why Times seems smaller than many other faces at the same point size.
If a point size is an indication of text height, what about text width? Unlike points, which are (sort of) an absolute measure, text width is measured using the em. An em is a width equal to the point size of the type in which it’s used. So, in 36 point type, an em is equal to 36 points; in 12 point type, an em is equal to 12 points. The em unit was originally based on an uppercase M, which was often the widest character in a typeface back in the days of handset type. Today, however, a capital M usually isn’t a full em in width, allowing for a little bit of space before and after the character. The em is important because it is a relative unit, but the implications of the em are beyond the scope of this article.
Pixel Dust — Now that you know a little about the somewhat fuzzy ways type is measured, how does a computer use this information to display text on a monitor?
Let’s say you’re writing a novel, and you set your chapter titles in 18 point text. First, the computer needs to know how tall 18 points is. Since the computer believes there are 72 points in an inch, this is easy: 18 points is 18/72 of an inch, or exactly one quarter inch. The computer then proceeds to draw text on your screen that’s one quarter inch high.
This is where the universe gets strange. Your computer thinks of your monitor as a Cartesian grid made up of pixels or "dots." To a computer, your display is so many pixels wide by so many pixels tall, and everything on your screen is drawn using pixels. Thus, the physical resolution of your display can be expressed in pixels per inch (ppi) or, more commonly, dots per inch (dpi).
To draw 18 point text that’s one quarter inch in height, your computer needs to know how many pixels fit into a quarter inch. To find out, you’d think your computer would talk to your display about its physical resolution – but you’d be wrong. Instead, your computer makes a patent, nearly pathological assumption about how many pixels fit into an inch, regardless of your monitor size, resolution, or anything else.
If you use a Mac, your computer always assumes your monitor displays 72 pixels per inch, or 72 dpi. If you use Windows, your computer most often assumes your monitor displays 96 pixels per inch (96 dpi), but if you’re using "large fonts" Windows assumes it can display 120 pixels per inch (120 dpi). Unix systems vary, but generally assume between 75 and 100 dpi. Most graphical environments for Unix have some way to configure this setting, and I’m told there’s a way to configure the dpi setting used by Windows NT and perhaps Windows 98. However, the fundamental problem remains: the computer has no idea of your display’s physical resolution.
These assumptions mean a Macintosh uses 18 pixels to render 18 point text, a Windows system typically uses 24 pixels, a Unix system typically uses between 19 and 25 pixels, and a Windows system using a large fonts setting uses 30 pixels.
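The arithmetic behind those pixel counts is simple: multiply the point size by the assumed dpi and divide by 72. Here’s a minimal sketch (the function and dictionary are ours, for illustration):

```python
# Platform dpi assumptions described above -- assumed, not measured.
DPI_ASSUMPTIONS = {
    "Mac OS": 72,
    "Windows": 96,
    "Windows (large fonts)": 120,
}

def pixels_for_points(point_size, dpi):
    """Pixels used to render text of a given point size, given the
    system's assumed dpi (the computer treats a point as 1/72 inch)."""
    return point_size * dpi / 72

for platform, dpi in DPI_ASSUMPTIONS.items():
    print(f"18 pt on {platform}: {pixels_for_points(18, dpi):.0f} px")
# 18 pt on Mac OS: 18 px
# 18 pt on Windows: 24 px
# 18 pt on Windows (large fonts): 30 px
```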
Thus, in terms of raw pixels, most Windows users see text that’s 33 percent larger than text on a Macintosh – from a Macintosh point of view, Windows users do in fact see text with telescopic vision. When you view the results on a single display where all pixels are the same size, the differences range from noticeable to dramatic. The Windows text is huge, or the Mac text is tiny – take your pick. See my sample below.
Size Does Matter — This leads to the answer to our $20 question: why text on Web pages designed for Windows users often looks tiny on a Mac. Say your computer’s display – or Web browser’s window – measures 640 by 480 pixels. Leaving aside menu bars, title bars, and other screen clutter, the Mac can display 40 lines of 12 point text in that area (with solid leading, meaning there’s no extra space between the lines). Under the same conditions, Windows displays a mere 30 lines of text; since Windows uses more pixels to draw its text, less text fits in the same area. Thus, Windows-based Web designers often specify small font sizes to jam more text into a fixed area, and Macintosh users get a double whammy: text that was already displayed using fewer pixels on a Macintosh screen is further reduced in size, even to the point where the text is illegible.
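A quick sketch of that line-count arithmetic, assuming solid leading (line height equals point size; the function name is ours):

```python
def lines_of_text(window_px_height, point_size, dpi):
    """Whole lines of solid-leaded text that fit in a window,
    given the system's assumed dpi."""
    line_height_px = point_size * dpi / 72
    return int(window_px_height // line_height_px)

print(lines_of_text(480, 12, 72))   # Mac OS: 40 lines (12 px per line)
print(lines_of_text(480, 12, 96))   # Windows: 30 lines (16 px per line)
```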
Dots Gotta Hurt — The fundamental issue is that the computer is trying to map a physical measurement – the point – to a display device with unknown physical characteristics. A standard computer monitor is basically an analog projection system: although its geometry can be adjusted to varying degrees, the monitor itself has no idea how many pixels it’s showing over a particular physical distance. New digitally programmable displays – including both CRTs and flat LCD panels – would seem to offer hope of conveying resolution information to a computer. However, I know of no systems that do so, and full support would obviously have to be built into video hardware and operating systems. That said, many displays do convey some capabilities to the host computer, including available logical resolutions in pixels.
Why do Windows and the Macintosh make such different assumptions about display resolutions? On the Macintosh, it had to do with WYSIWYG: What You See Is What You Get. The Mac popularized the graphical interface, and Apple understood that the Macintosh’s screen display must correspond as closely as possible to the Mac’s printed output. Thus, pixels correspond to points: just as the Mac believes there are 72 points per inch, it displays 72 pixels per inch. In the days before 300 dpi laser printers, the Mac was a stunning example of WYSIWYG, and the displays in the original compact Macs and Apple’s original color displays ranged from 71 to 74 dpi – close enough to 72 dpi to hide the fact that the Mac had no idea about the display’s physical resolution.
In fact, Apple resisted higher display resolutions and multisync displays for years; after all, the further a pixel drifted from 1/72 of an inch, the less of a WYSIWYG computer the Mac became. The rising popularity of Windows, cost pressures from PC manufacturers, and strong demand for laptops finally caused Apple to relent on this issue, and today Macintosh displays generally show more than 72 dpi. A 17-inch monitor running at 1024 by 768 pixels usually displays between 85 and 90 dpi. The iMac’s built-in display has a 13.8-inch viewable area (measured diagonally); a quick check with the Pythagorean Theorem indicates the iMac’s screen is a rather chunky 58 dpi at 640 by 480, but almost 93 dpi at 1024 by 768 resolution. A PowerBook G3 with a 13.3-inch display area (also measured diagonally) displays over 96 dpi at 1024 by 768. SGI’s upcoming 1600SW flat panel display sports a resolution of 110 dpi.
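The dpi figures above follow directly from the Pythagorean theorem: divide the diagonal in pixels by the diagonal viewable size in inches. Here’s a minimal sketch (the function name is ours; this assumes square pixels, which holds for 4:3 resolutions on 4:3 displays):

```python
import math

def dpi_from_diagonal(width_px, height_px, diagonal_inches):
    """Approximate dpi from a display's logical resolution and its
    diagonal viewable size, via the Pythagorean theorem."""
    diagonal_px = math.hypot(width_px, height_px)  # sqrt(w^2 + h^2)
    return diagonal_px / diagonal_inches

# Diagonal viewable areas as stated above:
print(f"iMac at 640x480:       {dpi_from_diagonal(640, 480, 13.8):.0f} dpi")
print(f"iMac at 1024x768:      {dpi_from_diagonal(1024, 768, 13.8):.0f} dpi")
print(f"PowerBook at 1024x768: {dpi_from_diagonal(1024, 768, 13.3):.0f} dpi")
# roughly 58, 93, and 96 dpi
```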
The historical reasons for Windows’ assumption of 96 dpi displays are less clear. The standard seems to have been set by the Video Electronics Standards Association (VESA) with the VGA (Video Graphics Array) specification, which somewhat predates the market dominance of Windows. From what I can tell, there may have been compatibility concerns with older CGA and EGA video systems, and it seems no one at VESA felt text below 10 or 12 points in size would be legible on screen at a resolution of less than 96 dpi. The Macintosh proved this wrong, largely through careful design of its screen bitmap fonts, like Geneva, Monaco, Chicago, and New York. Ironically, although the resolution of mainstream Macintosh displays is indeed creeping closer to 96 dpi, Windows displays routinely sport resolutions well below the assumed 96 dpi. If you know a Windows user with a 17-inch monitor and a 1024 by 768 resolution, their monitor is probably displaying between 85 and 90 dpi – just as a Macintosh would be.
Connect the Dots — Hopefully, this article shows how computers can take a mildly fuzzy measurement (the point), use it as a yardstick to render characters which themselves use an arbitrary portion of their point size, and finally convey that information to a display that, in all probability, does not conform to the computer’s internal imaging system. The situation is a mess, even leaving platform out of the equation.
Windows advocates occasionally trumpet their systems’ higher text resolution as an advantage, or claim the Mac’s lower text resolution is its dirty little secret. Historically, neither claim rings particularly true. Although text on Windows systems is rendered with more fidelity at a particular point size, Windows sacrifices WYSIWYG to get those extra pixels. Unfortunately, the bottom line is that no mainstream system on either platform is likely to display accurately sized text, so everyone loses.
And that’s all the print that fits… or doesn’t, depending on your system.