Disabling Dock Space-Making Feature
Having a hard time dragging a file into a folder in the Dock without the Dock trying to make room for it? Holding down Command while dragging the item temporarily disables this Dock feature.
Worried your Mac might be suffering from macro viruses? This week, we report on application-based viruses and how to defend against them. Also, we share a collection of scripts and tricks for Emailer 2.0 (plus tell you how to make an Ethernet crossover cable), note a hot new version of PageSpinner, and - rounding out the issue - guest writer Glenn Fleishman reports on recent upheavals in the Internet's infrastructure.
by Tonya Engst
Spinning for a Win -- Optima Systems last week released PageSpinner 2.0, a text-oriented HTML editor. PageSpinner has retained its user-friendly approach (see my review of version 1.1b1 in TidBITS-327), making it an excellent choice for HTML newbies, but it has also added an impressive set of features that most any Web author will welcome, including support for cascading style sheets, frames, Java applets, and includes, which simplify updating common elements on a group of pages. The new version is scriptable and comes with samples that illustrate how to link PageSpinner to FileMaker Pro, HyperCard, and 4D Server.
PageSpinner 2.0 minimally requires Mac OS version 7.0.1 or later, 2 to 4 MB application RAM, 6 MB disk space, a 68020-based Macintosh, and a grayscale monitor. PageSpinner 2.0 is a free upgrade to registered owners of earlier versions; new users should plan to pay the $25 shareware fee. [TJE]
Last week in TidBITS-382, I wrote a short piece warning people not to become complacent about viruses on the Macintosh. I received a number of notes, including one thanking me for the article (the reader ran Disinfectant, which promptly found virus infestations on his hard disk). Most, however, talked about what has become a more serious issue since I was last seriously involved in the anti-virus world - macro viruses, and especially those lurking in Microsoft Word 6.0 documents. Although we covered this topic in TidBITS-312 and TidBITS-314, the subject needs more attention.
Viruses and Macro Viruses -- On the Macintosh, viruses are usually small bits of code embedded in other files that can replicate themselves between files and between machines. Viruses may or may not cause damage; some are deliberately destructive, but some are just annoying. When I wrote about viruses last week, I was thinking about the traditional sort, which infect Macintosh files, mostly applications and the System file. The free program Disinfectant finds these viruses by scanning files for the specific code resources used by the viruses. Most Macintosh viruses are in fact named for their code resource signatures, such as nVIR and MBDF.
<ftp://ftp.acns.nwu.edu/pub/disinfectant/disinfectant36.sea.hqx>
Macro viruses aren't larger versions of viruses. They share the basic virus definition - small bits of code with replication capabilities that are embedded in other files - but instead of being Macintosh code resources, they're written in application macro languages, such as HyperTalk, Word Basic, or - conceivably - even AppleScript or Frontier's UserTalk. Unfortunately, since high-level application macro languages are generally easier to learn than C, assembly, or other low-level programming languages, neophyte scum find it easier to write (or shamelessly copy and modify) macro viruses than more traditional viruses. Since Disinfectant only scans code resources, it doesn't identify macro viruses and cannot protect you from them.

Disinfectant also doesn't attempt to detect another class of malicious programs, called Trojan Horses. These programs often pose as a utility, game, or other useful program, but perform anything from a prank to severe disk damage when they run. Trojan Horses are rare on the Macintosh, and commercial anti-virus utilities should detect known examples.
The first macro viruses I know of were written in HyperTalk. They infected HyperCard stacks, and some still exist today, although few are destructive. HyperCard is alive and well, but it doesn't have the wide distribution and use it did when Apple bundled it for free with every Mac. As a result, HyperCard viruses aren't as much of a problem as they might be. For more information about HyperCard viruses and tools for eliminating them, check out HyperActive Software's HyperCard Viruses page.
Word Macro Viruses -- Of far more concern today are Word (and to a lesser extent, Excel) macro viruses. These viruses, written in Microsoft's Word Basic macro language (available only in Microsoft Word 6.0 and later), are embedded in Word documents. When an infected document is opened, the macro viruses can copy themselves into your global template file, and from there into other Word documents.
To judge from the listings maintained by the Virus Test Center at the University of Hamburg, many Word macro viruses (over 1,100) exist, and new ones appear constantly. The problem is simple - since the Microsoft Office applications, including Word and Excel, are cross-platform, macro viruses written by PC users in Word Basic are often virulent even on the Macintosh as long as you run Word 6.0 or later. Of course, those macro viruses that try to do things like issue FORMAT C: commands can't hurt a Mac, but they can replicate themselves. Mike Groh, Software Development Manager at Virex manufacturer Datawatch, noted, "Macro viruses are quickly becoming a larger problem than Mac system viruses ever were at their peak. Improved cross-platform support for the Macintosh has brought with it one of the headaches of the PC world."
A number of readers commented that these macro viruses are commonplace in corporations because people trade Word documents around all the time, and corporations are more likely than individuals to have upgraded to, and standardized on, Word 6.0. Even worse, it's easy for these infected files to find their way into backup tapes and onto CD-ROMs, which makes it easier for them to spread and re-infect cleaned systems.
Eliminating Macro Viruses -- Since you can't use Disinfectant to find or remove Word macro viruses or any other sort of macro virus, you must rely on other tools. The two commercial anti-virus applications I mentioned last week, Virex and SAM, can both identify and eliminate many of these macro viruses, although reports from readers indicate that the viruses change frequently enough that even keeping up with Datawatch's and Symantec's updated virus listings isn't always enough. With over 200 new macro viruses appearing each month, that's not surprising, although Datawatch reportedly tries to do next-business-day turnaround when a customer sends in a new virus.
Microsoft also provides information about macro viruses and tools to help identify them. Notes from readers haven't been particularly positive about the performance and usefulness of the main utility, called MVTOOL, and the Microsoft Web site comments: "MVTOOL is able to scan for and disinfect files that contain the Concept virus. However, it is not able to detect or remove any of the other known macro viruses and is prone to crashing when processing a large number of files." MVTOOL works by notifying you when documents that you open contain macros, and lets you open the documents without the macros, which is useful, but not nearly as hands-off as anti-virus tools should be. Users simply can't be expected to know what is and what is not a macro virus.
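For the curious, the signature idea behind such scanners can be sketched in a few lines. This toy heuristic is not how MVTOOL or any real anti-virus tool works - real scanners parse the document's macro storage and match known virus signatures - and Python here is simply a modern stand-in language for illustration. The macro names are the real auto-executing macros that Concept-era viruses used; the function is mine:

```python
# Toy heuristic, NOT a real scanner: flag files whose raw bytes mention the
# names of auto-executing Word macros, such as AutoOpen and FileSaveAs
# (both used by the Concept virus). Real anti-virus tools parse the
# document's macro storage and compare against known virus signatures.
SUSPECT_MACROS = [b"AutoOpen", b"AutoClose", b"AutoExec", b"FileSaveAs"]

def suspicious_macros(path):
    """Return the suspect macro names found in the file's raw bytes."""
    with open(path, "rb") as f:
        data = f.read()
    return [name.decode() for name in SUSPECT_MACROS if name in data]
```

A crude check like this would produce both false positives and false negatives, which is exactly why users shouldn't be left to judge macros themselves.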
Since I mainly use Word 5.1 when I use Word at all, I've never run into a Word macro virus and can't offer advice from personal experience. However, my feeling is that if you use and rely heavily on Word 6.0 or later, particularly if you frequently trade files with other users, it's worth getting and installing not only Microsoft's MVTOOL, but another commercial anti-virus tool such as Virex or SAM. Of course, if you don't need Word 6.0's features, Word 5.1 doesn't suffer from macro viruses at all, and can safely open infected Word 6 files. Ideally, a future version of Microsoft Office would have a feature that would prevent macro viruses.

In the end, be careful out there. A major reason that the Macintosh world is plagued by relatively few traditional viruses is that the anti-virus tools are updated so quickly and utilized by such a large number of Macintosh users (and many of the programmers worked together on identifying and eliminating each new virus) that the viruses never had a chance to spread far. Vigilance is the only defense. If you own a commercial anti-virus program that fails to catch a macro virus that infects your documents, be sure to send the infected document (clearly labeled, of course) to the program's manufacturer immediately, so they can add it to their list of viruses to eradicate. Only then can we hope to get the upper hand in the fight against the macro viruses.
by Jeff Carlson
As the newest member of the TidBITS staff, I haven't yet adjusted to the increased load of email that arrives after an article or review appears in an issue. After last week's review of Claris Emailer 2.0 (see TidBITS-382), I received a number of messages pointing out more information about the product, including expanded documentation, AppleScripts that provide additional functionality, and what to do if your Mail Database becomes corrupted (which first happened to me that very day). Surprisingly, many letters focused on the tiny Ethernet network I set up at home to synchronize my email between my PowerBook and desktop Mac.
Expanded Documentation -- I mentioned that Emailer 2.0 includes a fairly comprehensive online help system, but didn't go into more detail because I rarely use online help (I've even remapped the Help key on my keyboard using CE Software's QuicKeys to stop online help systems from loading if I accidentally hit Help instead of Delete). Perhaps because they know I'm not the only one who feels this way, the people at Claris have created a downloadable "Emailer User's Guide PDF" file containing the same information as the online help system. A few readers pointed out that the 3.3 MB file is well worth the download and offers far more comprehensive information than the thin Getting Started Guide that ships in the Emailer box.
AppleScript to the Rescue -- A feature I wanted to see in Emailer was support for selecting multiple messages in my In Box (or any other folder) and saving them to disk as a single text file. I was promptly pointed to Fog City Software's Emailer Utilities Web page, where I found an AppleScript that does exactly what I asked.
The Export Selected Messages script by Dan Crevier saves all selected messages into one Unix mailbox format file (which includes mail header information). Another script in Dan's Sample Scripts collection, DB Stats, reports the total number of folders and messages in your Mail Database file.
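For the curious, "Unix mailbox format" just means a plain text file in which each message begins with a "From " separator line, followed by its headers and body. The sketch below (in Python, a modern stand-in - the actual utility is an AppleScript, and the addresses are made up) builds a tiny file in the same format:

```python
import mailbox, os, tempfile

# Build a small file in Unix mailbox (mbox) format - the same format the
# Export Selected Messages script writes. Each message is appended with a
# "From " separator line ahead of its headers.
path = os.path.join(tempfile.mkdtemp(), "exported.mbox")
box = mailbox.mbox(path)
for subject, body in [("First message", "Hello."), ("Second message", "World.")]:
    msg = mailbox.mboxMessage()
    msg["From"] = "sender@example.com"
    msg["Subject"] = subject
    msg.set_payload(body)
    box.add(msg)
box.close()

# The resulting file is plain text, readable by virtually any Unix mail tool.
with open(path) as f:
    print(f.read())
```

Because the format is plain text with standard headers, an exported file can be read by nearly any mail program on any platform.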
One other AppleScript I was happy to find is Toggle Schedules, part of the Dave's Essential Scripts collection by Dave Cortright. At home, where I have dial-up access, I keep my schedules turned off. But at work, where I'm connected to a dedicated ISDN line, it's nice to have Emailer check for mail every ten minutes. Toggle Schedules enables me to switch to my "Office" schedule in one step.
These scripts are just the ones that I anticipate using in the near future. Others allow you to strip out quote prefixes (>) from messages, count the number of words in a message, provide automated hooks between FileMaker Pro and BBEdit, and more.
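The quote-stripping and word-counting scripts are simple enough to sketch. The real utilities are AppleScripts; as a rough illustration of the logic (the function names here are mine, not the scripts'):

```python
import re

def strip_quotes(text):
    """Remove leading '>' quote prefixes (and a following space) from each line."""
    return "\n".join(re.sub(r"^(?:>\s?)+", "", line) for line in text.splitlines())

def word_count(text):
    """Count whitespace-separated words, as a message word counter might."""
    return len(text.split())

quoted = "> > On Monday you wrote:\n> See you at Macworld.\nSounds good!"
print(strip_quotes(quoted))
```

Note that only prefixes at the start of a line are stripped, so a ">" in the middle of a sentence survives.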
Phantom Messages -- If your machine crashes with Emailer open, you may find "ghost messages" in your In Box or Out Box. Everything appears to have been read, and yet the folder's icon still indicates unread mail. The only solution is to rebuild your Mail Database and Mail Index. Ironically, I had never had this problem until I started receiving mail about Emailer (insert your favorite conspiracy theory theme music here).
To activate this hidden feature, quit Emailer, then re-launch it while pressing the Option key. Then, selecting Typical Rebuild from the dialog box that appears will make Emailer copy your existing database and index, and rebuild the files. Although the process took 11 minutes on my Power Mac 7600, when Emailer finished, my 26 MB database was not only fixed, but also pared down to 25 MB due to the reordering of the data.
Building a Mini Ethernet Network -- Based on the email I received, one might think readers didn't care so much about Emailer as they did about the mini Ethernet network I run at home. After switching to Emailer 2.0 (which stores its messages in a centralized database instead of as individual files) I needed something faster than LocalTalk to synchronize mail between the 7600 and my PowerBook. My solution was to buy a $74 five-port (not four-port, as I had previously miscounted) DaynaSTAR Ethernet hub.
You can, however, create a two-machine Ethernet network without a hub by using a crossover cable. I was unable to do this, because the Ethernet PC Card I use for my PowerBook comes with special cables to connect to the card; I suppose I could have tried to modify the cable, but didn't want to have to order a new one if I inadvertently destroyed it. Having a hub also leaves me with some flexibility for adding new machines in the future.
If you want to try the crossover cable method, you can either buy one from one of the popular catalog dealers (most prices I've seen are between $5 and $10), or, if you have a cable crimper and some RJ-45 connectors, you can make one yourself. Here are the instructions, as sent to me by Roy Fenderson <email@example.com>:
There are eight pins in the RJ-45 connector at the end of the cable. Holding the cable in your hand with the connector pointed away and the flat side on top, they are 1-8 reading left to right.
The important ones are 1, 2, 3, and 6. The cable should be wired this way:
1 -> 3
2 -> 6
3 -> 1
6 -> 2
As long as you have a 10Base-T connector on both machines, the cable connection should work. Don't forget to reset your networking settings to support the new Ethernet configuration, and enjoy the speed increase!
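What the crossover wiring accomplishes is swapping the transmit pair (pins 1 and 2) with the receive pair (pins 3 and 6), so each Mac's transmitter talks directly to the other's receiver. A quick sanity check of the pinout above, sketched in Python purely for illustration:

```python
# Sanity-check the crossover pinout: the transmit pair on one end
# (pins 1 and 2) must land on the receive pair of the other (pins 3 and 6),
# and two crossover hops must cancel out.
CROSSOVER = {1: 3, 2: 6, 3: 1, 6: 2}

def other_end(pin):
    """Return the pin this wire connects to on the far connector."""
    return CROSSOVER.get(pin, pin)  # pins 4, 5, 7, and 8 pass straight through

# Transmit pair (1, 2) arrives on the receive pair (3, 6).
assert {other_end(p) for p in (1, 2)} == {3, 6}
# The mapping is symmetric: applying it twice returns every pin to itself.
assert all(other_end(other_end(p)) == p for p in range(1, 9))
print("pinout checks out")
```

A hub performs this swap internally, which is why ordinary straight-through cables work when a hub sits between the machines.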
UUNET Technologies, a major, top-level Internet service provider with a multi-million dollar nationwide network, recently announced plans to phase out arrangements with other networks to carry Internet traffic free of charge across its network, unless the other networks had substantial, national investments in infrastructure. UUNET said, in effect, "We won't peer except with our peers."
What's Peering? "Peer" has two meanings. As a technical term, peering is the act of exchanging data across networks, typically at specific, discrete locations. The other definition of peer, an "equal," we know pretty well; in this sense, UUNET defines business equals as companies that invest significant amounts of money in building a T3 (45 megabits per second, or Mbps) or faster network with at least four geographically dispersed peering hookups. UUNET's definition is much more specific and higher-end than was previously the norm.
Prior to UUNET's announcement, peering wasn't an automatic right for any organization that could put equipment next to other networks' routers. Even so, any organization with reasonable size, savvy, and stability could negotiate peering arrangements that would substantially improve their customers' ability to reach other parts of the Internet.
The Internet is not Monolithic -- Let's step back briefly. The Internet strongly resembles a mass hallucination in that there's a constant delusion that it's something that it's not. The net is no single entity; instead it is many entities so intertwined - sometimes this is described as "fully meshed" - that using a machine on any individual network that comprises the Internet feels much like being on every network connected to the Internet.
Without this mass hallucination, we users would have to know a lot about every network's idiosyncrasies; as it is, standard distributed systems and standard protocols allow us to sidestep knowledge of any given company or organization's network. (If you've ever worked in a large-scale network or what was called an "enterprise" before it became an "intranet," you know that identifying resources is quite difficult.)
It can be hard to remember that the Internet is not monolithic. Web Week's recent cover story on the UUNET announcement made the enormous simplification of referring to "the Internet backbone" when the author meant "a collection of locations scattered around the United States and the world where networks interconnect."
What's Peering Got to Do with It? The Internet is based on the notion of peering. Even when the National Science Foundation Network (NSFNet) was in operation (see TidBITS-275), the Internet was composed of multiple national and regional networks with their own infrastructures that all exchanged data at a few common points. These different networks with different customers - educational, governmental, military, and private sector - all exchange data bound for customers of other networks.
There are many formal peering points where several or dozens of Internet Service Providers (ISPs) and Network Service Providers (NSPs) exchange data. These include NSF-subsidized Network Access Points (NAPs), which were promoted to help the private-sector Internet grow and have a common place to test and implement next-generation networking technology, like the multi-gigabit-per-second networks currently in testing.
The goal of these peering points, and the reason to have a lot of them, is to enable customers of one network to easily reach customers of another. If people can't reach Netscape's servers easily, this is a big problem for Netscape, Netscape's NSP, the customers who can't reach Netscape, and the customers' NSPs, who will switch to other services. Easy access to all networks - making the Internet feel like a single entity rather than lots of scattered individual networks - is a big marketing bonus for all companies on the Net and all users of it.
These peering points essentially involve having connections to co-located network equipment - "co-location" is a fancy term for "putting routers and/or other equipment in the same physical location." Generally, a phone company maintains the network hardware and connections, while the NSP or ISP plugging into it pays for setup and for a connection from its location into the peering point. This costs several thousand dollars a month at a bare minimum.

NAPs and Multihomed Networks -- The Network Access Points are one way of peering, but there's another way. Any company can purchase service at its own location from multiple NSPs. Suppose I was a large ISP with three T1s to handle the traffic from tens of thousands of customers. (A T1 is a standard unit of bandwidth you can purchase from phone companies: 1.544 Mbps.) I could have each of those three T1s running to a different NSP, like MCI, Sprintlink, and UUNET. I might even be able to convince those NSPs to peer with my network, so that inbound and outbound traffic for my mail and Web servers (plus traffic from my dial-up customers) takes the most efficient path. This is called multihoming, where the network points at multiple networks at the same time - or has "many homes." This can be an advantage for each NSP, since in- and out-bound data from my large ISP can go over different routes when its network is overloaded, avoiding additional congestion.
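As a toy illustration of why multihoming helps (the load numbers are invented, and real routing policy is far more involved than this), outbound traffic from a multihomed ISP can favor whichever upstream link currently has the most headroom:

```python
# Toy model with invented numbers: a multihomed ISP with three T1s, each
# to a different NSP, sends outbound traffic over whichever upstream link
# currently has the most spare capacity. Real route selection (BGP policy,
# path lengths, and so on) is considerably more complicated.
T1_MBPS = 1.544  # capacity of one T1 line

current_load = {"MCI": 0.9, "Sprintlink": 1.4, "UUNET": 0.3}  # Mbps in use

def best_upstream(load_by_nsp):
    """Pick the upstream NSP whose T1 has the most headroom."""
    return max(load_by_nsp, key=lambda nsp: T1_MBPS - load_by_nsp[nsp])

print(best_upstream(current_load))  # UUNET's link has the most spare capacity
```

With only one upstream link, that traffic would have nowhere else to go when the link congested; with three, the load can spread.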
The difference between multihoming and a NAP is that the NAP acts as a gateway. A multihomed network accepts and sends data only for its local networked machines and clients. A gateway exchanges any and all data across networks. It's in the interest of NSPs to allow peering at multihomed locations, since they're paid for each connection and, in some cases, for bandwidth used. It doesn't make clear business sense for an NSP like UUNET to sit at a gateway like a NAP with dissimilar companies, because that puts UUNET in the position of allowing free transit of data over their multi-million dollar networks and getting nothing in return.
UUNET's move means that ISPs and regional networks, which in the past simply plugged into a NAP or NAP-like structure, now must have a direct connection to UUNET and/or other big NSPs to take advantage of a fast and distributed network infrastructure. General reporting on this subject indicates that other major NSPs will follow UUNET's lead. It's not cheap to be in a NAP, but in the past it's been far cheaper than establishing multiple high-speed connections to multiple networks.
In Seattle, for example, there are at least two regional NSPs providing NAP-like services. They have feeds from several national NSPs and pay for each feed directly to the NSP; they resell access to this pool of peered service to local businesses and ISPs who can get the NAP-like advantage without having to maintain the pool or deal with peering relationships. Everybody makes money and the customers pay around the same (or sometimes less) than they'd pay to a national NSP.
The Impact -- UUNET's action and subsequent response by other NSPs might drive some ISPs out of business, but it seems unlikely. It will primarily affect NSPs that only peer in one or two locations and resell "downstream" bandwidth to customers. Those NSPs will now need to pay substantially more to achieve a similar level of service from their NSPs, and they'll have to build and maintain more of their own local infrastructure, too.
The real impact of this change is that it makes apparent the huge difference in scale between the biggest NSPs and the medium-sized NSPs and ISPs. Even a year or so ago, the cost for entry into the NSP market was low. The NAPs provided a leveling influence. And, nobody's service was great. Now, the Internet is a big pond with some big fish who can offer levels of service orders of magnitude above the smaller fish: most of the big NSPs are running or plan to install nationwide backbones running at 155 Mbps or faster.
In an announcement coming close on the heels of the UUNET news, nine of these big fish formed IOPS.org, a group whose purpose is to communicate meta-information about the Internet across networks - finding and coordinating problems, planning for the future, and so forth. In actuality, it's a commercial Internet guiding committee that will have the power to control the future of the Net's implementation and growth regardless of what the Internet Engineering Task Force (IETF), The Internet Society, or other groups might recommend.
Of course, just because they might try to control the Net's future implementation doesn't mean they will. Even as the Net has become almost totally commercially funded and driven, netiquette has survived to the point that Congress is currently discussing anti-spam legislation.
The real agenda of IOPS.org may turn out to parallel UUNET's announcement. Currently, no settlement fees are charged as data is exchanged across networks. That is, if UUNET sends a terabyte more information across MCI's network than vice-versa, UUNET doesn't pay a cent. Of course, MCI controls its own destiny and can shed routes and packets as needed to provide service to its own customers; this happens constantly and automatically.
But as more talk appears about how oversold and overtaxed phone networks and the Internet are, these kinds of announcements will lead more directly to industry-wide implementation of metering or levels of service for which we will pay. Already, several ISPs have introduced deals in which higher monthly fees buy you premium access and better service and response. $20 gets you an Internet dial tone, but $40 might buy priority service for your packets (a coming innovation called RSVP) as well as a better modem-to-user ratio.
<http://www.isi.edu/div7/rsvp/rsvp.html>
Bob Metcalfe, inventor of Ethernet and all-around smart guy, has been predicting major Internet outages and the coming of metered services as economic realities start to hit. Up to now, companies have faced enough competition that they couldn't unilaterally implement fees that were out of line with the rest of the industry. A shake-up that appears to be coming should free the top-end players from these fee restrictions, as they can more clearly demonstrate what being a big fish can do for their customers.
In the end, it's possible that all of us will be barnacles, firmly secured to the blue whales that will rule this new ocean, while all that's around beside them is road krill.
[Glenn Fleishman founded one of the first Internet/Web hosting businesses and has since lived most of his life on the Net. He recently finished six months as Amazon.com's catalog manager, and is now a freelance technology guy and an unsolicited pundit.]