Question: How does a server know how fast you’re connecting? Pete Mundy <[email protected]> writes, "I am confused about how a server sending data to you knows what ‘speed’ (bps) to send the data. If I use my Internet connection at work, which runs at 64 Kbps, I can download a file off a Web server or FTP server at up to 6K per second. If I download that same file from home using my 33.6 Kbps modem it will come through at 3K per second."
Answer: The server doesn’t know – and it doesn’t have to. Although the Internet appears to transfer data in a solid stream, as we’ve discussed in previous issues of NetBITS (see "Hey, I’m Talking to You!" in NetBITS-001), the data are actually sent in discrete little units called packets. The packets travel from your machine to your ISP’s routers, up through higher-level Internet routers, and back down to the server itself.
At each of these stages, a packet is held in the memory of the device it was handed to until that device can pass it along to the next one. This is called a "store and forward" system: regardless of the speed of any given connection, packets are "stored" until they can be "forwarded."
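The store-and-forward idea can be sketched in a few lines of code. This is only an illustration with invented names (a `Router` class with a packet queue); real routers do this in hardware and kernel buffers, not Python objects.

```python
from collections import deque

class Router:
    """A toy store-and-forward hop: packets wait in memory until forwarded."""

    def __init__(self, name):
        self.name = name
        self.queue = deque()  # packets are "stored" here

    def receive(self, packet):
        # Store: the packet sits in this device's memory,
        # regardless of how fast any link is.
        self.queue.append(packet)

    def forward(self, next_hop):
        # Forward: hand the oldest stored packet to the next device.
        if self.queue:
            packet = self.queue.popleft()
            next_hop.receive(packet)
            return packet
        return None

isp = Router("isp")
backbone = Router("backbone")
for n in range(3):
    isp.receive(f"packet-{n}")   # packets arrive and are stored
while isp.forward(backbone):     # then forwarded one at a time
    pass
print(len(backbone.queue))       # all three now stored at the next hop
```

Each hop only ever deals with its neighbor: it never needs to know the speed of any link further down the chain.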
Packets are sent in series, but each packet requires acknowledgment that it was received – at least with TCP (Transmission Control Protocol) packets, which are used for the Web and email. This acknowledgment of receipt is what makes TCP a "reliable" protocol: it retransmits missing packets and doesn’t keep sending new ones until it knows previous ones have arrived.
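Here is a toy sketch of that retransmit-until-acknowledged behavior. The lossy channel and the function names are made up for illustration, and real TCP is far more sophisticated (it keeps several packets in flight at once), but the core idea – don’t move on until the previous packet is confirmed – looks like this:

```python
import random

random.seed(42)  # make the simulated losses repeatable

def lossy_send(packet, loss_rate=0.3):
    """Pretend to send a packet; return True if an ACK comes back."""
    return random.random() > loss_rate

def send_reliably(packets):
    """Deliver every packet, in order, over the lossy channel."""
    received = []
    for seq, data in enumerate(packets):
        # Keep retransmitting this one packet until it is acknowledged.
        while not lossy_send((seq, data)):
            pass  # no ACK came back: send the same packet again
        received.append(data)
    return received

data = ["a", "b", "c", "d"]
print(send_reliably(data))  # every packet arrives, in order
```

Even with 30 percent of transmissions lost, everything gets through eventually – that is what "reliable" means here, at the cost of extra time spent retransmitting.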
So the server doesn’t know how fast your connection is, but it does react dynamically to how rapidly your system acknowledges packets. In theory, a server could measure how fast packets are getting through to a given user and automatically send lower-resolution images and the like, but no system we know of incorporates that yet. [GF]
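That dynamic reaction can be sketched as a simple windowed sender: the sender allows only a fixed number of unacknowledged packets in flight, so a slow receiver automatically paces a fast sender. The window size and rates below are invented numbers, not anything a real TCP stack uses.

```python
def simulate(receiver_rate, window=4, total=20):
    """Count the ticks needed to deliver `total` packets when the
    receiver drains `receiver_rate` packets per tick."""
    in_flight = sent = acked = ticks = 0
    while acked < total:
        # The sender transmits only while the window has room.
        while in_flight < window and sent < total:
            sent += 1
            in_flight += 1
        # Each tick, the receiver absorbs and ACKs what it can;
        # the ACKs free up window space for the next burst.
        delivered = min(receiver_rate, in_flight)
        acked += delivered
        in_flight -= delivered
        ticks += 1
    return ticks

fast = simulate(receiver_rate=4)  # e.g. the 64 Kbps office line
slow = simulate(receiver_rate=1)  # e.g. the 33.6 Kbps modem
print(fast, slow)  # the slow receiver takes more ticks for the same data
```

The sender’s code is identical in both runs; only the pace of the acknowledgments differs. That is why the server needs no knowledge of your connection speed.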