At last month's USENIX Conference on File and Storage Technologies (FAST '07), two academic papers - one from Carnegie Mellon University (CMU) and one from Google - looked at the reliability of hard drives in large-scale installations. Among other conclusions, the CMU team found that real-world replacement rates were much higher than vendor-provided mean time to failure (MTTF) estimates would suggest, and Google's researchers concluded that there was little correlation between failure and either elevated temperature or activity levels. The papers weren't written for a lay audience and aren't easy reading, but they're worth a look if you're interested in when and why hard disk mechanisms fail.
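For the curious, the mismatch is easy to quantify. Here's a back-of-the-envelope sketch in Python - my own illustration, not code from either paper - that converts a vendor MTTF into the annual failure rate it implies, assuming a constant failure rate (an exponential lifetime model); the 1,000,000-hour MTTF is a typical datasheet figure, used here only as an example:

    import math

    HOURS_PER_YEAR = 8760  # 24 * 365

    def implied_annual_failure_rate(mttf_hours):
        """Annual failure probability implied by an MTTF, assuming a
        constant failure rate (exponential lifetime model)."""
        return 1 - math.exp(-HOURS_PER_YEAR / mttf_hours)

    vendor_mttf = 1_000_000  # hours; a typical datasheet figure
    afr = implied_annual_failure_rate(vendor_mttf)
    print(f"Implied annual failure rate: {afr:.2%}")  # about 0.87%

In other words, a 1,000,000-hour MTTF implies that well under 1 percent of drives should need replacing in a given year; the CMU team's field data showed replacement rates running much higher than that.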
Also interesting is a paper from the University of Massachusetts Amherst on the Transparent File System (TFS). The goal of TFS is to create a contributory storage system in which multiple people can contribute unused disk space to a shared pool, much as SETI@home enables users to contribute unused CPU cycles to the shared task of analyzing radio telescope data. (And yes, SETI@home is still an active project.) Apparently, TFS can contribute all of the unused space on a disk while imposing only a negligible performance drag on the contributor. The TFS source code is available; I'll be curious to see if anyone cleans it up and ports it (see " ," 2007-01-29).
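To make the contributory idea concrete, here's a toy model in Python of how such a system can donate all unused space at negligible cost to the owner. This is not TFS's actual code, and the names are my own inventions; the key assumption, as I read the paper's summary, is that contributed data occupies otherwise-free blocks and yields immediately to local writes, with the wider pool re-replicating whatever gets overwritten:

    FREE, LOCAL, CONTRIBUTED = "free", "local", "contributed"

    class TransparentDisk:
        """Toy block store: contributed data sits in otherwise-free
        blocks, and local writes may overwrite it without negotiation."""

        def __init__(self, n_blocks):
            self.blocks = [FREE] * n_blocks

        def contribute(self):
            """Donate every free block to the shared pool."""
            donated = 0
            for i, state in enumerate(self.blocks):
                if state == FREE:
                    self.blocks[i] = CONTRIBUTED
                    donated += 1
            return donated

        def write_local(self):
            """A local write treats contributed blocks as free, so the
            owner never waits; lost data is assumed to be replicated
            elsewhere in the pool."""
            for i, state in enumerate(self.blocks):
                if state in (FREE, CONTRIBUTED):
                    self.blocks[i] = LOCAL
                    return i
            raise OSError("disk full of local data")

    disk = TransparentDisk(100)
    print(disk.contribute())   # 100 - all unused space joins the pool
    print(disk.write_local())  # 0 - local write proceeds as if block were free

The point of the design, at least in this simplified view, is that the contributor's own writes never block on the shared pool, which is what makes donating every last free block tolerable.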