Extract Directly from Time Machine
Normally you use Time Machine to restore lost data in a file like this: within the Time Machine interface, you go back to a time before the file was damaged, and you restore that version to replace the file you have now.
You can also elect to keep both, but the restored file takes the name and place of the current one. So, if you have made changes since the backup took place that you would like to keep, they are lost, or you have to mess around a bit to merge changes, rename files, and trash the unwanted one.
As an alternative, you can browse the Time Machine backup volume directly in the Finder like any normal disk, navigate through the chronological backup hierarchy, and find the file which contains the lost content.
Once you've found it, you can open it and the current version of the file side-by-side, and copy information from Time Machine's version of the file into the current one, without losing any content you put in it since the backup was made.
Series: Bandwidth & Latency
Your new modem isn't fast enough? You're right!
Article 1 of 2 in series
[Last week in TidBITS-367, Stuart examined issues of latency and delay in typical modem-based Internet communications. This week, Stuart offers general observations on how bandwidth can be used more efficiently and how it affects the overall latency of a connection.]
Last week, I asked readers to imagine a world where the only network connection you can get to your house is a modem running over a telephone line at 33 Kbps. Now, imagine that this is not enough bandwidth for your needs. You have a problem.
Making Bandwidth is Easy -- Technically, the solution is simple. You can install two telephone lines and use them in parallel, giving you a total of 66 Kbps. If you need more capacity you can install ten telephone lines, for a total of 330 Kbps. Sure, it's expensive, having ten modems in a pile is inconvenient, and you may have to write networking software to share the data evenly between the ten lines. But if it were sufficiently important, it could be done. People with ISDN lines already do this using a process called BONDING (short for "Bandwidth ON Demand INteroperability Group"), which enables them to use two 64 Kbps ISDN channels in parallel for a combined throughput of 128 Kbps.
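The arithmetic of combining lines is simple enough to sketch in a few lines of Python (the function name is mine, purely for illustration). The point is that capacity adds linearly, while the latency of the bundle remains that of a single line:

```python
# Aggregate capacity of parallel lines scales linearly with their number;
# the latency of the bundle, however, is still that of a single line.
def aggregate_kbps(line_kbps, num_lines):
    return line_kbps * num_lines

print(aggregate_kbps(33, 10))  # ten phone lines: 330 Kbps
print(aggregate_kbps(64, 2))   # BONDING two ISDN channels: 128 Kbps
```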
Getting additional bandwidth is possible, then, even if it's not always economical. Just as important, making limited bandwidth go further is also easy.
Compression -- Compression is an easy way to increase bandwidth. You can apply general purpose compression (such as StuffIt) to the data. Even better, you can apply data-specific compression (such as JPEG for still images and MPEG for video), which can provide much higher compression ratios.
These compression techniques trade off use of CPU power for lower bandwidth requirements. However, there's no equivalent way to trade off use of extra CPU power to make up for poor latency.
All modern modems utilize internal compression algorithms. Unfortunately, having your modem do compression is nowhere near as good as having your computer do it. Your computer has a powerful, expensive, fast CPU, whereas your modem has a feeble, cheap, slow processor. In addition, as we noted last week, a modem must hold on to data until it has a block big enough to compress effectively. This requirement adds latency, and once added, latency can't be eliminated. Also, since the modem doesn't know what kind of data you're sending, it can't use superior data-specific compression algorithms. In fact, since most images and sounds on Web pages are already compressed, a modem's attempts to compress the data a second time adds more latency without any benefit.
This is not to say that having a modem do compression never helps. When the host software at the endpoints of the connection is not smart and doesn't compress data appropriately, then the modem's own compression can compensate somewhat and improve throughput. The bottom line is that modem compression only helps dumb software, and it hurts smart software by adding extra delay.
Send Less Data -- Another way to cope with limited bandwidth is to write programs that take care not to waste bandwidth. For example, to reduce packet size, wherever possible Bolo (my interactive network tank game) uses bytes instead of 16-bit or 32-bit words.
For many kinds of interactive software like games, it's not important to carry a lot of data. What's important is that when the little bits of data are delivered, they are delivered quickly. Bolo was originally developed running over serial ports at 4800 bps and could support eight players that way. Over 28.8 Kbps modems it can barely support two players with acceptable response time. Why? A direct-connect serial port at 4800 bps has a latency of 2 ms. A 28.8 Kbps modem has a latency of 100 ms, 50 times worse than the 4800 bps serial connection.
Software can cope with limited bandwidth by sending less data. If a program doesn't have enough bandwidth to send high-resolution pictures, it could use a lower resolution. If a program doesn't have enough bandwidth to send colour images, it could send black-and-white images, or images with dramatically reduced colour detail (which is what NTSC television does). If there isn't enough bandwidth to send 30 frames per second, the software could send 15 fps, 5 fps, or fewer.
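This trade-off can be sketched in a few lines of Python. The function name and the 2000-byte compressed frame are hypothetical figures of mine, not from the article; the idea is simply to check whether a given quality level fits the link before choosing it:

```python
# Sketch: pick a frame rate that fits the available bandwidth.
# The 2000-byte compressed frame is a hypothetical figure.
def fits(frame_bytes, fps, link_bps):
    """True if this frame size and rate fit within the link's capacity."""
    return frame_bytes * 8 * fps <= link_bps

frame = 2000
print(fits(frame, 30, 33000))  # 30 fps needs 480 Kbps: too much
print(fits(frame, 2, 33000))   # 2 fps needs only 32 Kbps: fits
```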
These trade-offs aren't pleasant, but they are possible. You can pay for more bandwidth or send less data to stay within your limited available bandwidth. However, if the latency is not good enough to meet your needs you don't have the same option. Running multiple circuits in parallel won't improve latency, and sending less data won't help either.
Caching -- One of the most effective techniques for improving computer and network performance is caching. If you visit a Web site, your browser can copy the text and images to your hard disk. If you visit the site again, the browser verifies that the stored copies are up-to-date, and - if so - the browser just displays the local copies.
Checking the date and time a file was last modified is a tiny request to send across the network - so small that modem throughput makes no difference. Latency is all that matters.
Recently, some companies have begun providing CD-ROMs of entire Web sites to speed Web browsing. When browsing these Web sites, all the Web browser does is check the modification date of each file it accesses to verify that the CD-ROM copy is up-to-date. It downloads from the Web only the files that have changed since the CD-ROM was made. Since most large files on a Web site are images, and since images on a Web site change far less frequently than the HTML text files, in most cases little data has to transfer.
Once again, because the Web browser is primarily doing small, modification date queries to the Web server, latency determines performance and throughput is virtually irrelevant.
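The caching logic can be sketched as follows. This is an illustration of the idea, not a real HTTP client, and all the names are invented; the "network" here is just a dictionary. The point survives, though: on a cache hit, the only thing that needs to cross the link is a tiny date comparison.

```python
# Stand-in for the remote server: URL -> (modification time, body).
SERVER = {"/logo.gif": (867000000, b"...image bytes...")}

def fetch(url, cache):
    mtime, body = SERVER[url]        # in reality: a small "has it changed?" query
    cached = cache.get(url)
    if cached and cached[0] == mtime:
        return cached[1]             # local copy is current: nothing downloaded
    cache[url] = (mtime, body)       # first visit or stale copy: full download
    return body

cache = {}
fetch("/logo.gif", cache)  # first visit: full download
fetch("/logo.gif", cache)  # revisit: date check only, served from local copy
```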
Latency Workarounds -- ISDN has a latency of about 10 ms. Its throughput may be twice that of a modem, but its latency is ten times better, and that's the key reason why browsing the Web over an ISDN link feels faster than over a modem.
One reason standard modems have such poor latency is that they don't know what you're doing with your computer, or why. An external modem is usually connected through a serial port, and all it sees is an unstructured stream of bytes coming down the serial port.
Ironically, the much-maligned Apple GeoPort Telecom Adapter may solve this problem. The Apple GeoPort Telecom Adapter connects your computer to a telephone line, but it's not a modem. Instead, all modem functions are performed by software running on the Mac. The main reason for the criticism is that this extra software takes up memory and slows down the Mac, but in theory it could offer an advantage no external modem could match. When you use the GeoPort Telecom Adapter, the modem software is running on the same CPU as your TCP/IP software and your Web browser, so it could know exactly what you are doing. When your Web browser sends a TCP packet, the GeoPort modem software doesn't have to mimic the behaviour of current modems. It could take that packet, encode it, and send it over the telephone line immediately, with almost zero latency.
Sending 36 bytes of data, a typical game-sized packet, over an Apple GeoPort Telecom Adapter running at 28.8 Kbps could take as little as 10 ms, making it as fast as ISDN, and ten times faster than the best modem you can buy today. For less than the price of a typical modem, the GeoPort Telecom Adapter could give you Web browsing performance close to that of ISDN. Even better, people who already own Apple GeoPort Telecom Adapters would need only a software upgrade.
Bandwidth Still Matters -- Having said all this, you should not conclude that I believe bandwidth is unimportant. It is very important, but not in the way most people think. Bandwidth is valuable for its own sake, but also for its effect on overall latency - the important issue is the total end-to-end transmission delay for a data packet.
Remember the example in the first part of this article comparing the capacity of a Boeing 747 to a 737? Here's a real-world example of the same issue. Many people believe that a private 64 Kbps ISDN connection is as good as (or even better than) a 1/160 share of a 10 Mbps Ethernet connection. Telephone companies argue that ISDN is as good as new technologies like cable modems because though cable modems have much higher bandwidth, that bandwidth is shared between lots of users so the average works out the same. This reasoning is flawed, as the following example will show.
Say we have a game where the data representing the game's overall state amounts to 40K. We have a game server, and in this simple example, the server transmits the entire game state to a player once every ten seconds. That's 40K every 10 seconds, an average of 4K per second or 32 Kbps. That's only half the capacity of a 64 Kbps ISDN line, and 160 users doing this on an Ethernet network will utilize only half the capacity of the Ethernet. So far so good: both links are running at 50 percent capacity, so the performance should be the same, right?
Wrong. On the Ethernet, when the server sends the 40K to a player, the player can receive that data as little as 32 ms later (40K / 10 Mbps). If the game server is not the only machine sending packets on the Ethernet, then there could be contention for the shared medium, but even in that case the average delay before the player receives the data is only 64 ms. On the ISDN line, when the server sends the 40K to a player, the player receives that data five seconds later (40K / 64 Kbps). In both cases the users have the same average bandwidth, but the actual performance is different. In the Ethernet case, the player receives the data almost instantly because of the connection's high capacity. But in the ISDN case, the connection's lower capacity means the information is already 5 seconds old when the player receives it.
The problem is that sending a 40K chunk every ten seconds and sending data at a uniform rate of 4K per second are not the same thing. If they were, ISDN, ATM, and other telephone company schemes would be good ideas. Telephone companies assume all communications are like the flow of fluid in a pipe. You just tell them the rate of flow you need, and they tell you how big the pipe has to be. Voice calls work like the flow of fluid in a pipe, but computer data does not. Computer data comes in lumps. A common mistake is to think that sending 60K of data once per minute is exactly the same as sending 1K per second. It's not. A 1K per second connection may be sufficient capacity to carry the amount of data you're sending, but that doesn't mean it will deliver the entire 60K in a timely fashion. It won't. By the time the lump finishes arriving, it will be one minute old.
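Both examples reduce to the same arithmetic: the delivery delay of a burst is its size divided by the link's capacity, regardless of the long-term average rate. A few lines of Python (function name mine) reproduce the figures above:

```python
# Time for the last byte of a burst to arrive, at a given link capacity.
def delivery_delay_s(chunk_bytes, link_bps):
    return chunk_bytes * 8 / link_bps

chunk = 40 * 1024                                     # 40K game-state update
print(round(delivery_delay_s(chunk, 10_000_000), 3))  # Ethernet: ~0.033 s
print(round(delivery_delay_s(chunk, 64_000), 2))      # ISDN: ~5.12 s
print(round(delivery_delay_s(60 * 1024, 8 * 1024)))   # 60K at 1K per second: 60 s
```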
The conclusion here is obvious: the capacity of a connection has a profound effect on its performance. If you're given the choice between a low-bandwidth private connection or a small share of a higher-bandwidth connection, take the small share. Again, this is painfully obvious outside the computer world. If a government said it would build either a large shared freeway or a million tiny, separate footpaths, one reserved for each citizen, which would you vote for?
What Can You Do? -- I've received numerous messages from people who want to know what they can do to spread the word about these latency and bandwidth problems. I've found that calling modem vendors directly is futile, so I recommend that you circulate these two articles to friends who might find them interesting, and most important, send letters to editors of major magazines asking them to include latency times via ping and traceroute when testing modems for review. Perhaps if we can raise awareness about the horrible latency problems that all modems suffer, modem manufacturers will start putting effort into decreasing latency instead of just increasing throughput.
[Portions of this article come from Stuart Cheshire's white paper entitled "Latency and the Quest for Interactivity," commissioned by Volpe Welty Asset Management, L.L.C.]
Article 2 of 2 in series
Years ago David Cheriton at Stanford University taught me something that seemed obvious at the time - if you have a network link with low bandwidth then it's easy to put several in parallel to make a combined link with higher bandwidth, but if you have a network link with bad latency then no amount of money can turn any number of parallel links into a combined link with good latency. Many years have passed, and these facts seem lost on most of the companies making networking hardware and software for the home. I think the time has come to explain them.
Speed & Capacity -- Even smart people have trouble grasping the implications of latency for throughput. Part of the problem is the misleading use of the word "faster." Would you say a Boeing 747 is three times faster than a Boeing 737? Of course not. They both cruise at around 500 miles per hour. The difference is that the 747 carries 500 passengers, whereas the 737 carries only 150. The Boeing 747 is three times bigger than the Boeing 737, not faster.
If you were in a hurry to get to London, you'd take the Concorde, which cruises around 1,350 miles per hour. It seats only 100 passengers though, so it's the smallest of the three. Size and speed are not the same thing.
On the other hand, if you had to transport 1,500 people and you only had one airplane to do it, the 747 could do it in three trips while the 737 would take ten. So, you might say the Boeing 747 can transport large numbers of people three times faster than a Boeing 737, but you would never say that a Boeing 747 is three times faster than a Boeing 737.
That's one problem with communications devices today. Manufacturers say speed when they mean capacity. The other problem is that as far as end-users are concerned, the main thing they want to do is transfer large files more quickly. It may seem to make sense that a high-capacity, slow link would be the best thing for the job. What end users don't see is that in order to manage that file transfer, their computers are sending dozens of little control messages back and forth. Computer communication differs from television or radio broadcasting in the interactivity of the communication, and interactivity depends on back-and-forth messages.
The phrase "high-capacity, slow link" above probably looks odd to you. It looks odd even to me. We've been used to wrong thinking for so long that correct thinking looks odd now. How can a high-capacity link be a slow link? High-capacity means fast, right? Oddly, we don't make this mistake in other areas. If someone talks about a high-capacity oil tanker, do you immediately assume it's a fast ship? If someone talks about a large-capacity truck, do you immediately assume it's faster than a small sports car?
We must start making this distinction again in communications. When someone tells us that a modem has a speed of 28.8 Kbps we have to remember that 28.8 Kbps is its capacity, not its speed. Speed is a measure of distance divided by time, and "bits" is not a measure of distance.
But there's more to perceived throughput than issues of speed and capacity, namely latency. Many people know that when you buy a hard disk you should check its seek time. The maximum transfer rate is also worth knowing, but seek time is more important. Why does no one think to ask about a modem's seek time? Latency is a network's equivalent of seek time: the minimum time between asking for a piece of data and getting it, and it's just as important.
Monkey On Your Back -- Once you have bad latency you're stuck with it. If you want to transfer a large file over your modem it might take several minutes. The less data you send, the less time it takes, but there's a limit. No matter how small the amount of data, for any particular network device there's always a minimum time that you can never beat. That's called the latency of the device. For a typical Ethernet connection the latency is usually about 0.3 ms (milliseconds, or thousandths of a second). For a typical modem link, ping and traceroute tests show the latency is typically about 100 ms, about 300 times worse than Ethernet.
If you wanted to send ten characters (at eight bits per character) over your 33 Kbps modem link you might think it would take:
80 bits / 33000 bits per second = 2.4 ms
Unfortunately, it doesn't. It takes 102.4 ms because of the 100 ms latency introduced by the modems at each end of the link.
If you want to send a large amount of data, say 100K, then that takes 25 seconds, and the 100 ms latency isn't very noticeable, but for smaller amounts of data, say 100 bytes, the latency overwhelms the transmission time.
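The pattern in these figures is that total transfer time is fixed latency plus serialization time. A couple of lines of Python (names mine) reproduce the numbers above:

```python
def transfer_time_ms(nbytes, link_bps, latency_ms):
    """One-way transfer time: fixed latency plus serialization at the link rate."""
    return latency_ms + nbytes * 8 * 1000 / link_bps

print(round(transfer_time_ms(10, 33000, 100), 1))              # ten characters: ~102.4 ms
print(round(transfer_time_ms(100 * 1024, 33000, 100) / 1000))  # 100K: ~25 seconds
```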
Why would you care about this? Why do small pieces of data matter? For most end-users it's the time it takes to transfer big files that annoys them, not small files, so they don't even think about latency when buying products. In fact, if you look at the boxes modems come in, they proudly proclaim "28.8 Kbps" and "33.6 Kbps", but they don't mention latency at all.
What most people don't realize is that computers must exchange hundreds of little control messages in the process of transferring big files, so the performance of small data packets directly affects the performance of everything else on the network.
Now, imagine you live in a world where the only network connection you can get to your house is a modem running over a telephone line. Your modem has a latency of 100 ms, but you're doing something that needs lower latency. Maybe you're trying to do audio over the network. 100 ms may not sound like much, but it's enough to cause a noticeable delay and echo in voice communications, which makes conversation difficult. Maybe you're playing an interactive game over the network. The game only sends tiny amounts of data, but that 100 ms delay makes the interactivity of the game decidedly sluggish.
What can you do about this? Absolutely nothing. You could compress the data, but that won't help: the data was already small, and that 100 ms latency is still there. You could install 80 phone lines in parallel and simultaneously send a single bit over each phone line, but that 100 ms latency is still there.
In other words, once you have a device with bad latency there's nothing you can do except replace the device with one that has good latency.
Modem Latency -- Current consumer devices have appallingly bad latency. A typical Ethernet card has a latency less than 1 ms. The Internet backbone as a whole also has very good latency. Here's a real example:
- The distance from Stanford in California to MIT in Boston is 4320 km
- The speed of light in vacuum is 300 * 10^6 m/s
- The speed of light in fibre is 60 percent of the speed of light in vacuum
- The speed of light in fibre is 300 * 10^6 m/s * 0.6 = 180 * 10^6 m/s
- The one-way delay to MIT is 4320 km / 180 * 10^6 m/s = 24 ms
- The round-trip time to MIT and back is 48 ms
- The current ping time from Stanford to MIT over today's Internet is about 85 ms:
- 84.5 ms / 48 ms = 1.76
- The hardware of the Internet thus currently runs only 76 percent over the speed-of-light minimum
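The derivation above is easy to check in Python:

```python
# Theoretical minimum round-trip time from Stanford to MIT, over fibre.
distance_m = 4320e3                    # Stanford to MIT
c_vacuum = 300e6                       # speed of light in vacuum, m/s
c_fibre = 0.6 * c_vacuum               # light travels slower in fibre
round_trip_ms = round(2 * distance_m / c_fibre * 1000, 1)
print(round_trip_ms)                   # 48.0 ms minimum
print(round(84.5 / round_trip_ms, 2))  # measured ping is 1.76x the minimum
```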
So the Internet is doing pretty well. It may get better with time, but we know it can never beat the speed of light. In other words, that 85 ms round-trip time to MIT might reduce a bit, but it's never going to beat 48 ms. The speed can improve a bit, but it isn't going to double. We're already within a factor of two of the theoretical optimum. I think that's pretty good - not many technologies can make that claim.
Compare this with a modem. Suppose you're 18 km from your Internet service provider. At the speed of light in fibre (or the speed of electricity in copper, which is about the same) the latency should be:
18000 / (180 * 10^6 m/s) = 0.1 ms
Although modems vary, the latency over your modem is anywhere from 75 ms to about 130 ms. Modems are currently operating at a level that's more than 1,000 times worse than the speed of light. And, of course, latency cuts both ways. If a one-way trip using a typical modem has a latency of about 130 ms, then the round-trip delay is about 260 ms.
Of course no modem link will ever have a latency of 0.1 ms. I'm not expecting that. The important issue is the total end-to-end transmission delay for a packet - the time from the moment the transmitting software sends the packet to the moment the last bit of the packet is delivered to the software at the receiving end. The total end-to-end transmission delay is made up of fixed latency (including the speed-of-light propagation delay), plus the transmission time. For a 36 byte packet the transmission time is 10 ms (the time it takes to send 288 bits at a rate of 28.8 Kbps). When the actual transmission time is only 10 ms, working to make the latency 0.1 ms would be silly. All that's needed is that the latency should be relatively small compared to the transmission time. About 5 ms would be a sensible latency target for a modem that has a transmission rate of 28.8 Kbps.
Understanding Transmission Delay -- At each hop, overall transmission time has two components: per-byte transmission time and fixed overhead. Per-byte transmission time is easy to calculate, since it depends only on the raw transmission rate. The fixed overhead comes from sources like software overhead, hardware overhead, and speed of light delay.
For modems, the distance is typically short, so speed of light delay should be negligible. However, the data rate is low, so it takes a long time to send each byte. The per-byte transmission time should account for most of the time taken to send the packet. To send 100 bytes over a 28.8 Kbps modem should take:
100 bytes * 8 bits per byte / 28800 bits per second = 28 ms
That means the round-trip should be twice that, or 56 ms. In reality it's often more like 260 ms. What's going on? Two other factors contribute to the overall time.
First, modems are often connected via serial ports. Many modem users assume that if they connect their 28.8 Kbps modem to their serial port at 38.4 Kbps they won't limit their performance, because 38.4 is greater than 28.8. It's true that the serial port won't limit throughput, but it will add delay, and delay, once added, never goes away. So, sending 100 bytes down the serial port to the modem should take:
100 bytes * 10 bits per byte / 38400 bps = 26 ms
Second, modems try to group data into blocks. The modem will wait for about 50 ms to see if more data is coming that it could add to the block, before it starts to send the data it already has. Let's see what the total time is now:
26 ms (100 bytes down serial port to modem)
50 ms (modem's fixed waiting time)
28 ms (transmission time over telephone line at 28.8 Kbps)
26 ms (100 bytes up serial port at receiving end)
Thus, the total time is 130 ms each way, or 260 ms for the round-trip. To make things worse, imagine that the 100 bytes in question are used by an interactive game being played by two players. If both players are connected to their respective Internet service providers by modem, then the total player-to-player round-trip delay is 520 ms, which is hopeless for any tightly-coupled interactivity, and this is reflected in the state of today's networked computer games. Can we do anything to improve this?
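Adding up these components in Python (all figures from the article) confirms the totals:

```python
# One-way delay budget for a 100-byte packet through a 28.8 Kbps modem link.
serial_in_ms  = 100 * 10 / 38400 * 1000  # down the serial port (10 bits/byte): ~26 ms
modem_wait_ms = 50                       # modem's fixed block-buffering wait
phone_line_ms = 100 * 8 / 28800 * 1000   # over the phone line at 28.8 Kbps: ~28 ms
serial_out_ms = 100 * 10 / 38400 * 1000  # up the serial port at the far end: ~26 ms

one_way_ms = serial_in_ms + modem_wait_ms + phone_line_ms + serial_out_ms
print(round(one_way_ms))      # ~130 ms each way
print(round(one_way_ms * 2))  # ~260 ms round trip
```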
Improving Latency -- One thing to notice is that the 38.4 Kbps serial connection between the computer and the modem, which many people don't think of as being the bottleneck, turns out to be responsible for 52 ms of the delay. In fact, it's the single biggest contributor - almost twice as much as the actual communication over the telephone line. What can we do about this? If you can connect the modems at both ends at 115.2 Kbps instead of 38.4 Kbps, the serial port delay can be reduced to 9 ms at each end. Better still, if you can use an internal modem on a card instead of one connected through a serial port, the delay can be eliminated entirely, leaving a round-trip delay of only 156 ms.
Having eliminated the serial port delay, the next biggest contributor to delay is the fixed 50 ms overhead built into the modem itself. Why is there a fixed 50 ms overhead? The reason is that modern modems offer lots of "features" - namely, compression and automatic error correction. To get effective compression and error correction, modems must work on blocks of data, which means characters are corralled in a buffer until the modem has received a block big enough to work on efficiently. While the characters accumulate in the modem's buffer, they're not being sent over the phone line. Imagine you're sending a small amount of data, 100 bytes. That's not enough for the modem to work on effectively, so it would like a bigger block. After you have sent the 100 bytes to the modem, it waits to see if more characters arrive. After some time - about 50 ms - it decides no more characters are coming, so it compresses and ships what it has. That 50 ms the modem spends hoping for more data is unrecoverable, wasted time.
Modems were originally designed with remote terminal access in mind. They were meant to take characters - typed by a user on one end and transmitted by a mainframe on the other - and group them into little blocks to send. The only indication that a user had finished typing (or that the mainframe had finished responding) was a pause in the data stream. No one told the modem when no more characters would be coming for a while, so it had to guess.
This is no longer the case. Most people use modems to connect to the Internet, not old mainframes, and Internet traffic is made up of discrete packets, not a continuous stream of characters.
There's a simple fix for this problem. We could make modems aware that they are sending Internet packets. When a modem sees the PPP (Point to Point Protocol) End-Of-Packet character (0x7E), it could realize that the packet is complete and immediately begin compressing and sending the block of data it has, without pausing for 50 ms. This simple fix would eliminate the 50 ms fixed overhead, and should allow us to achieve a 56 ms round-trip delay over a modem PPP connection - almost five times better than what typical modems achieve today.
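The fix can be sketched in a few lines. This is an illustrative sketch in Python, not how modem firmware is actually written, and the framing here is much simplified (real PPP frames also begin with a flag byte and use escaping):

```python
# Flush the buffer the moment the PPP end-of-packet flag (0x7E) appears,
# instead of waiting ~50 ms in the hope that more data will arrive.
PPP_FLAG = 0x7E

def feed(buffer, byte, send):
    """Accumulate bytes; transmit immediately at each packet boundary."""
    buffer.append(byte)
    if byte == PPP_FLAG:       # packet complete: no need to wait
        send(bytes(buffer))
        buffer.clear()

sent = []
buf = []
for b in b"HELLO\x7eWORLD\x7e":   # two (much simplified) PPP-framed packets
    feed(buf, b, sent.append)
print(len(sent))                  # both packets flushed without any timeout
```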
[Tune in next week as Stuart explains how bandwidth and latency interact, and how software can try to cope with the latency problem.]