Series: Through the Digital Lens
Charles Maurer offers a detailed examination of digital photo technology, cameras, and editing
Article 1 of 5 in series
I have two modes of taking pictures: point-and-shoot and perfectionist. In the first mode I use a pocket-sized camera with no manual controls. It processes the pictures, I throw them onto my hard drive, and the only editing I'll ever do is remove some occasional red-eye. In perfectionist mode I revert to a previous incarnation and become a commercial photographer again.
This year my perfectionist mode has gone digital and my computer has replaced my darkroom. To effect this change, I reviewed all of the photo-editing software available for the Mac. In this article I shall summarize my take on the most suitable products available for the perfectionist to finish photographs.
Since some readers will come to this from photography and others from computers, I shall not assume that anybody understands the jargon from either side and shall go back to basics frequently. My intent, however, is to point out an approach and products that go far beyond the basics.
The Raw Truth -- The digital sensor in a camera generates a file of raw data that requires an enormous amount of processing to be converted into a usable image. This processing can be done in the camera or by a computer. A computer allows more control and the opportunity to change your mind in the middle.
The conversion of a raw file into an image is not straightforward. Many algorithms are possible, so different programs come out with different results. Try the converter that came with your camera, try Adobe Photoshop, try any others you might have on hand, and see what you like the best. Note that Photoshop and perhaps some other applications will permit you to enlarge the images from the different colour channels to slightly different extents, to compensate for one cause of colour fringing, lateral chromatic aberration. This is useful but don't expect much. Most colour fringing comes from other causes.
Raw files can be converted into TIFF or JPEG. TIFF (Tagged Image File Format) contains full information but is large. JPEG (Joint Photographic Experts Group) is compressed but impoverished. It is sensible to convert raw files to TIFF, keep the TIFF files for editing, then convert them to JPEGs as needed for distribution.
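The practical difference is easy to demonstrate. This sketch, using the Pillow imaging library with a random test image standing in for a photo, round-trips the same pixels through both formats; the TIFF comes back bit-for-bit identical while the JPEG does not:

```python
from io import BytesIO

import numpy as np
from PIL import Image

# A random 64x64 RGB image stands in for a photograph.
rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
img = Image.fromarray(pixels)

tiff_buf, jpeg_buf = BytesIO(), BytesIO()
img.save(tiff_buf, format="TIFF")               # lossless master copy
img.save(jpeg_buf, format="JPEG", quality=85)   # compressed copy for distribution

tiff_round = np.asarray(Image.open(BytesIO(tiff_buf.getvalue())))
jpeg_round = np.asarray(Image.open(BytesIO(jpeg_buf.getvalue())))

print(np.array_equal(tiff_round, pixels))   # True: TIFF preserved every pixel
print(np.array_equal(jpeg_round, pixels))   # False: JPEG discarded information
```

The same logic applies at any size: edit the TIFF, and generate JPEGs from it only when a smaller file is needed.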
You are likely to be offered the choice of converting your raw files to 8-bit or 16-bit colour. Eight bits ought to be sufficient, but if a picture is poorly lighted or poorly exposed, rescuing it may require teasing apart nearby tones. In that case smooth tonal gradients may break up into discrete bands unless you have more steps. However, 16-bit files are twice the size and take much longer to process. I prefer to use 8-bit colour normally and to take the risk of needing to reconvert a file on the odd occasion that eight bits aren't enough.
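The banding risk is easy to demonstrate with a toy calculation: a smooth gradient confined to dark tones survives a brightness stretch in 16-bit form, but in 8-bit form it collapses into a few dozen discrete levels. The numbers here are arbitrary illustrations, not measurements from any camera:

```python
import numpy as np

# A smooth gradient confined to dark tones (as in an underexposed photo),
# then "rescued" with a 4x brightness stretch.
gradient = np.linspace(20, 60, 1000)
levels_8bit = np.round(gradient).astype(np.uint8)
levels_16bit = np.round(gradient * 257).astype(np.uint16)   # 8-bit 255 maps to 16-bit 65535

stretched_8 = np.unique(levels_8bit.astype(np.float64) * 4)
stretched_16 = np.unique(levels_16bit.astype(np.float64) * 4 / 257)

# The 8-bit version has broken into discrete bands; the 16-bit one has not.
print(len(stretched_8), len(stretched_16))   # → 41 1000
```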
Most raw converters allow you to sharpen the pictures. Indeed, most of them sharpen pictures by default. However, never sharpen pictures at this early point in the process. Sharpening of this sort, "unsharp masking," distorts the image at edges, which then prejudices other manipulations. Moreover, the amount that is useful depends upon the size and purpose of the final image. Sharpening an image should be the very last thing you do.
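For the curious, unsharp masking is simple to sketch: subtract a blurred copy of the image from the original to isolate the edges, then add that difference back. The overshoot this creates on either side of an edge - the distortion mentioned above - is easy to see in one dimension:

```python
import numpy as np

def unsharp_mask(image, radius=2, amount=1.0):
    """Classic unsharp mask on a 1-D luminance signal (2-D works the same
    way): add back the difference between the image and a blurred copy."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)   # simple box blur
    blurred = np.convolve(image, kernel, mode="same")     # note: zero-padded at the ends
    return image + amount * (image - blurred)

# A soft edge rising from tone 50 to tone 150.
edge = np.concatenate([np.full(10, 50.0), np.linspace(50, 150, 5), np.full(10, 150.0)])
sharpened = unsharp_mask(edge)

# The result overshoots below 50 and above 150 around the edge (the "halo"),
# which is why sharpening should wait until every other step is finished.
print(sharpened.min() < 50, sharpened.max() > 150)   # → True True
```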
Noise Ninja -- A digital sensor always records a certain amount of random noise. Usually this is an insignificant proportion of the image, but it may become objectionable with long exposures or higher amplification (i.e., higher "film speeds" or "ISO speeds"). It can also become noticeable in smooth areas under ordinary circumstances. It is possible to characterize mathematically the noise produced by any particular sensor at any particular amplification and to subtract that noise from the image. This subtraction needs to be done before you modify the image. However, at this stage in the process, it is important that you remove only noise, not detail. Later, after the detail has been brought out as best as possible, it may seem sensible to remove some of it to clean up some more noise, but that comes later. At this point, you want to retain all of the detail.
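A toy sketch of the idea - profile the sensor's noise in advance, then remove only that much variation - might look like this. All figures are invented, and real products such as Noise Ninja use far more sophisticated, spatially local models:

```python
import numpy as np

# Hypothetical noise profile: measured noise std for each ISO setting.
noise_profile = {100: 1.5, 400: 4.0, 1600: 10.0}

rng = np.random.default_rng(1)
clean = np.full(10_000, 128.0)               # a perfectly smooth area
iso = 400
noisy = clean + rng.normal(0, noise_profile[iso], clean.shape)

# With the noise std known in advance, a Wiener-style shrinkage pulls each
# pixel toward the local mean in proportion to how much of the observed
# variance the profile attributes to noise.
local_mean = noisy.mean()
local_var = noisy.var()
gain = max(0.0, 1 - noise_profile[iso] ** 2 / local_var)
denoised = local_mean + gain * (noisy - local_mean)

print(round(noisy.std(), 2), round(denoised.std(), 2))
```

In a flat area the shrinkage removes nearly everything; in detailed areas the observed variance is much larger than the noise profile predicts, so the gain stays near 1 and detail is retained.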
I have not tried many noise-reduction packages because wherever I read comparisons, one of them always came out at the top for both Macs and Windows: PictureCode's Noise Ninja ($30 or $70, depending on whether you need 8-bit or 16-bit output). Noise Ninja can discriminate remarkably well between image and noise. I have found that with images from my Sigma SD-10 shot at ISO 100, Noise Ninja's default settings (other than turning off sharpening) eliminate all the noise that ever becomes visible, yet never affects any detail. Noise Ninja has shown itself to be so reliable that I am about ready to start running images through it automatically in batches - but only for pictures shot at ISO 100. The higher the sensitivity, the more noise, so that the difference between detail and noise becomes less clear. Where there is more noise, it is even more important to use Noise Ninja, but I prefer to run it by hand for greater control.
FocusMagic -- Once you've eliminated noise, it's time to wave a magic wand over blurry parts of the picture. If the lens wasn't focused properly or had too little depth of field, or if the subject or camera moved, this magic wand may remove the blur. The wand is Acclaim Software's $45 FocusMagic. It can't produce perfection, but 8" x 10" enlargements can show astonishing improvements.
Although FocusMagic sharpens the picture, it works quite differently from ordinary sharpening routines. Ordinary sharpening routines enhance contrast at sharp edges; FocusMagic forms sharp edges out of blur. Be sure to use FocusMagic before any other optical correction. To fix focus blur, FocusMagic works at least as well as the $60 FocusFixer from Fixer Labs and is a bit easier to use. To fix motion blur, I don't know of any alternative.
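FocusMagic's algorithm is proprietary, but forming sharp edges out of blur is classically done by deconvolution. A minimal Richardson-Lucy sketch in one dimension, assuming the blur kernel (here, a 5-pixel motion blur) is known, conveys the idea:

```python
import numpy as np

def richardson_lucy(blurred, psf, iterations=50):
    """Iteratively re-estimate the sharp signal that, when blurred by
    `psf`, best explains the observed data."""
    psf_mirror = psf[::-1]
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(iterations):
        conv = np.convolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

sharp = np.zeros(40)
sharp[15:25] = 100.0                        # a crisp bright bar
psf = np.ones(5) / 5                        # 5-pixel motion blur, known
blurred = np.convolve(sharp, psf, mode="same")

restored = richardson_lucy(blurred, psf)
# The restored signal is closer to the original than the blurred one was.
print(np.abs(blurred - sharp).mean(), np.abs(restored - sharp).mean())
```

In a real photograph the kernel must be estimated too (the direction and length of the motion, or the width of the focus blur), which is the hard part that tools like FocusMagic automate.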
Unfortunately, although a stand-alone version of FocusMagic is available for Windows, the Mac version is only a Photoshop plug-in; it requires Photoshop and will not run in GraphicConverter.
A Better Perspective -- Now comes the time to compensate for basic optical problems in the photo:
Compensate as best you can for colour fringing caused by lateral chromatic aberration, if you did not do this in the raw converter.
Straighten lines that are curved by barrel or pincushion distortion. Straight lines evincing convex curves show barrel distortion; straight lines evincing concave curves show pincushion distortion. With a digital image you can remove so much distortion that you can even straighten the lines of a fish-eye photograph, but with any lens, if the distortion is not simple and symmetrical, then some residual waviness will remain.
Rotate the image if the camera was not quite straight.
Correct perspective so that buildings don't appear to be falling over. This can be done horizontally as well. Both corrections make scenes look more natural.
Correct light fall-off toward the corners caused by wide-angle lenses.
Correct the magnification toward the edges of wide-angle lenses.
For these corrections I use a quartet of plug-ins by The Imaging Factory: Debarrelizer, Perspective, Squeeze and Vignette ($40 each, other than the $20 Squeeze). They are easy to use and combined they offer more control with greater sophistication than any similar products I have found, except for one lacuna: they offer no compensation for pincushion distortion. Unfortunately (in this context), I happen to have no lenses that cause pincushion distortion, so products to repair it are beyond my ken. In theory the $40 plug-in LensFix from Kekus Digital offers more precise compensation for distortion and chromatic aberration than any other product but with my lenses I found it to be no better, merely difficult to use.
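Barrel and pincushion corrections of this sort typically rest on a radial model: a point at radius r from the image centre is displaced to r(1 + kr^2), where k > 0 produces barrel distortion and k < 0 pincushion. A sketch of the model and its numerical inverse (the coefficient is invented):

```python
import numpy as np

def distort_radius(r, k):
    """Radial lens-distortion model: k > 0 barrel, k < 0 pincushion."""
    return r * (1 + k * r**2)

def undistort_radius(r_distorted, k, iterations=20):
    """There is no closed-form inverse; a fixed-point iteration
    converges quickly for realistic (small) values of k."""
    r = r_distorted.copy()
    for _ in range(iterations):
        r = r_distorted / (1 + k * r**2)
    return r

k = 0.1                                    # hypothetical barrel coefficient
r = np.linspace(0, 1, 50)                  # radii normalised to the image corner
recovered = undistort_radius(distort_radius(r, k), k)
print(np.abs(recovered - r).max() < 1e-9)  # → True
```

A full corrector applies this remapping to every pixel's distance from the optical centre, resampling the image along the way.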
Asiva -- Up to this point, all of your manipulations are straightforward and mechanical. Now we bring in artistic judgement because we need to adjust tonality and balance colour. The usual approaches to this employ the adjustments built into Photoshop, but I find something else that is easier to use, more subtle, and more powerful: the $70 Asiva Shift+Gain. This is a product fundamentally different from anything else on the market and fundamentally more useful.
Whenever you edit a photograph, the first thing you need to do is select the pixels you want to change. Often this means complicated masks and careful manipulations of the mouse. The procedures in Photoshop can be anything but simple, even when they happen to be straightforward. Instead, Shift+Gain will "see" and identify the objects that you want to change much as you see them yourself.
If you can see a face or leaves or twigs or hair, then you are seeing areas of a certain range of brightness and colour. This range must be distinct from what's adjacent, else you would not see the object. And because the ranges are distinct, the computer can find those areas automatically and change them.
Incredible as it may seem, the Asiva folks hold a U.S. patent on this idea. Shift+Gain is one of their implementations of this patent. With Shift+Gain you define some arbitrary region of the photo and instruct the program to find and change therein all pixels of an arbitrary range of brightness and colour. If the object you want to change is too variegated to define - well, then you can define the colours of the surrounding objects and tell the program to change all the pixels that it did not find.
Although computers create colours from red, green and blue, and most programs deal with colours as mixtures of red, green and blue, people do not easily conceive of colours this way. It is easier for us to think of colours as having one place on the rainbow, more or less pure or concentrated, and lighter or darker on a continuum between black and white. Those dimensions are hue, saturation and value.
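Python's standard library happens to include exactly this translation between the two ways of describing a colour, which makes the idea concrete:

```python
import colorsys

# Pure red in RGB terms, expressed as hue/saturation/value instead.
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
print(h, s, v)        # → 0.0 1.0 1.0  (hue 0 = red, fully saturated, full value)

# Halving the saturation while leaving hue and value alone gives a pink -
# an adjustment that is awkward to express directly in red/green/blue.
r, g, b = colorsys.hsv_to_rgb(h, s / 2, v)
print(r, g, b)        # → 1.0 0.5 0.5
```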
Asiva Shift+Gain lets you think about colours that way. It provides three graphs with hue or saturation or value on the horizontal axis and amount on the vertical axis. You shape a curve on each graph and Shift+Gain selects the colours that fall under those curves. The selection appears immediately. You can work on the whole photo or on regions that you have selected with the marquee or lasso. You can then make changes to your selection's hue, saturation, value, red, green, blue, or any combination of the six. The changes are in direct proportion to the amounts you specify with your curves (Shift). On top of this, they can be made to increase more when the saturation and/or value is greater (Gain).
This is difficult to understand abstractly, and using the product feels strange at first, but it can make sophisticated transformations trivial. A master painter will model his subject in light and shadow - chiaroscuro - and also in colour. Chiaroscuro and colour are limited on canvas. To add contrast, to define a scene better, a painter will mix the two dimensions by colouring highlights and shadows. Photographs have an even more restricted range of tonality and colour, so mixing the two dimensions becomes even more important in photography, but it is usually difficult. Shift+Gain makes it easy.
Look at the sample picture linked below. I took this snapshot with my point-and-shoot camera on holiday then transformed it with Shift+Gain. This transformation could not have been wrought in Photoshop without a lot of skill, but in Shift+Gain it was simple. The highlights were right but the shadows were too dark, so I tried lightening all the tones that were a little above black. That lightened some shadowed leaves too much, so I played with the saturation curve until things looked right, which turned out to mean lightening only weakly saturated dark tones. This left the shadows fine but the mid-tones were still too dark, so I lightened all of the mid-tones. At this point the tonality was okay but the picture still looked flat. It needed more saturated contrast within the midtones - i.e., brighter colours - so I increased ("shifted") the saturation. That didn't look good, so I tried increasing the gain of the saturation, making more-saturated colours still more saturated but changing less-saturated colours less. That was the right direction but the colours needed different amounts of this treatment and saturated blues needed to be decreased rather than increased.
Asiva also makes three other plug-ins that offer the same visual method of selecting areas. Correct+Apply Color ($50) replaces one hue with another, or overlays a hue as digital make-up, in both cases maintaining the original saturation and value. Sharpen+Soften ($70) sharpens or softens the selected objects. (Note: do not use Sharpen to sharpen everything. It still isn't time for that.) Selection ($40) creates a selection in Photoshop for use with other Photoshop tools. All four of those plug-ins are excellent products that are convenient to use and work with alacrity. A $200 bundle includes them all. Asiva also sells a $50 plug-in, the just-released JPEG Deblocker, that is designed to enable Shift+Gain to work properly with JPEGs, if TIFF files are not available. I have not tried it.
Unfortunately, each of these plug-ins requires Photoshop. Asiva does make a stand-alone application - Asiva Photo - that does the job of all four plug-ins, but I cannot recommend it. I find its user interface inflexible and awkward, and it is so slow that on my 800-MHz Titanium PowerBook G4 I need to twiddle my thumbs for 10 to 30 seconds after every click of the mouse.
Photoshop, At Last -- Finally we have finished our Asiva detour and are ready for Photoshop. This is the point when you can do almost anything else that you want other than enlargement and sharpening. I usually need to retouch out a few specks of dust but not much more. To remove a lot of dust, you might try a free product from Polaroid, Polaroid Dust & Scratch Removal; it's available as either a stand-alone program or a Photoshop plug-in.
PhotoZoom and the Finishing Touch -- If you want to crop your photo, crop it now and save the cropped file under another name. When you want to make an enlargement, enlarge the file in Shortcut's $50 PhotoZoom or $130 PhotoZoom Pro to create a new file with the optimal number of pixels for the size of print and the resolution of your printer. By default, both of them sharpen the photo too. This, finally, is the time for sharpening and I have found one of their default settings nearly always to be appropriate, although occasionally I have wanted some of the fine-tuning available in PhotoZoom Pro. These products are significantly better than any other enlarging package for the Mac. I've found PhotoZoom Pro 1.0.95 to be buggy, but it makes such superb enlargements that I have not regretted its purchase. Finally, if the enlargement turns out to show too much noise, open the enlarged file in Noise Ninja and optimize it.
With dye-sublimation printers and some inkjet printers, it is possible to send the printer a file prepared in PhotoZoom that defines precisely every pixel that the print-head is to print. On my Olympus dye-sub printer, this technique generates photos that are strikingly sharper than any I get by sending the printer a smaller file and having the system software fill the page. Unfortunately, most inkjet printers do not have a fixed resolution, so the printer's software has to fudge whatever file you send it. If the printer's specs show a number of pixels or dots per inch that is somewhere around 300, then it probably has a fixed resolution of that number; if its specs show dots per inch in the thousands, then the number bears no relationship to the resolution you will see. Indeed, in this case the resolution of the print is likely to be undefined and variable.
To understand this, consider a printer that prints 1,440 dots per inch. Each colour of ink is laid down as individual dots, and the dots cannot overlap. The smallest possible bit of a given colour - a pixel - therefore requires one dot from each of the inks needed to produce that colour. If different colours require different numbers of inks, then the number of dots per pixel will change with the colour. On top of this, the dots are likely not to be laid down in a fixed pattern but scattered about stochastically.
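A hypothetical worked example shows how quickly the pixel resolution falls. The geometry is simplified to one dimension; real dot placement is two-dimensional and, as noted, often stochastic:

```python
# A printer laying down 1,440 non-overlapping dots per inch, where a given
# colour needs one dot from each of several inks. Counting along one
# dimension only, for illustration.
dots_per_inch = 1440

for inks_needed in (1, 2, 3, 4):
    # one dot per ink forms the smallest possible bit of that colour
    pixels_per_inch = dots_per_inch // inks_needed
    print(f"{inks_needed} ink(s) per pixel -> {pixels_per_inch} pixels per inch")
```

So a colour built from four inks resolves only 360 pixels per inch on this printer, even though the spec sheet says 1,440 dots per inch.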
If you are unhappy with your printer's sharpness, then you might try testing it with files created at different resolutions, to see if one of those files prints better than the others. If it does, then you are likely to get better results by feeding your printer files of that resolution. I made a few test files for this purpose; download them in the Zip archive linked below. These are 1-, 2- and 3-pixel stripes with headers showing different numbers of dots per inch. Print them with Photoshop or GraphicConverter, not Preview, because Preview will change the patterns' sizes to make them fit the paper. Examine each one to see how smooth the patterns are; if one particular resolution prints better than the others, then scale your photos using that number of dots per inch. However, do consider that what matters is your photographs, not tests. This test can make any printer look bad. If you are satisfied with your printer's sharpness, there is no point to investigating this particular bit of imprecision. It would be better to let ignorance remain bliss.
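Test files like those described are straightforward to generate yourself with the Pillow imaging library. The chart dimensions and DPI values below are arbitrary choices for the example, not the ones in the downloadable archive:

```python
from io import BytesIO

import numpy as np
from PIL import Image

def stripe_chart(stripe_px, dpi, width=600, height=200):
    """Vertical black/white stripes `stripe_px` pixels wide, saved as a
    TIFF whose header declares the given dots per inch."""
    row = ((np.arange(width) // stripe_px) % 2 * 255).astype(np.uint8)
    img = Image.fromarray(np.tile(row, (height, 1)), mode="L")
    buf = BytesIO()
    img.save(buf, format="TIFF", dpi=(dpi, dpi))
    return buf.getvalue()

# One chart per stripe width and candidate resolution, e.g. near 360 dpi:
tif = stripe_chart(stripe_px=2, dpi=360)
reopened = Image.open(BytesIO(tif))
print(reopened.size, reopened.info.get("dpi"))
```

Print each chart at 100% scale (again, not from Preview) and look for the resolution at which the stripes stay cleanest.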
On the other hand, for everything else involved in printing colour, ignorance is not bliss. With most aspects of colour, it is useful to know the slop in the system, so that you know when you need to be precise and when there is no point to trying. To this end, my next article will introduce you to the wonders and absurdities of colour and ColorSync. It will show you a few simple things that matter and describe a world of complexities and costs that you can ignore.
PayBITS: If Charles's recommendations for perfecting photos
helped, he asks that you make a donation to Doctors Without
Borders: <http://www.doctorswithoutborders-usa.org/donate/>
Read more about PayBITS: <http://www.tidbits.com/paybits/>
Article 2 of 5 in series
A cynic might be tempted to say that there are two categories of photographer, those who admit they have problems matching colour, and liars. Matching colour ought to be simple, according to the ads, yet it rarely seems to be.
The problem is not you, the problem is that colour is astonishingly complex. Controlling colour is a minefield. You need to know where you can walk, where you cannot, and where the path is uncertain. In this article I shall map the minefield and suggest one safe route through it.
Colour Basics -- We learned in school that all colours are formed from combinations of red, green, and blue. Unfortunately, this explanation is a distortion of reality and is so overly simple as to be wrong.
Colours do not exist in nature, colours exist solely within an observer's head. Colours are perceptions. Light striking the eye triggers a chain of neurochemical reactions that end in perceptions of colour. Light has no colour itself, it is merely electromagnetic radiation. Different wavelengths of light induce different perceptions of colour but the relationship between wavelength and colour is neither simple nor straightforward. Consider:
- Any number of different wavelengths can induce the same colour.
- The same wavelength can induce different colours in different circumstances.
- Two people viewing the same wavelength may see different colours.
All of the eye's colour receptors respond to a broad range of wavelengths, but each responds to some wavelengths more readily than to others. The receptors fall into three groups with different ranges of sensitivity. If you look at the light that each group is most sensitive to, you will perceive red, green, and blue: that is why red light, green light, and blue light can induce any colour. However, although there are three primary colours of light, there are four primary colours of perception. Red, green, yellow, and blue are perceptual primaries: all other colours can be identified as variants of them, even in cultures that do not distinguish any colours by name.
In short, three sets of wavelengths will induce three colours of light and mixtures of those wavelengths will induce variants of four colours. Moreover, the colours induced by any particular combination of wavelengths may differ from one circumstance to the next or from one person to the next.
This is the reason that everyone has problems matching and balancing colour. The transformation is not a straightforward mathematical function nor even a constant one. Balancing colour is like cooking fudge that will be sweet enough for all and overly sweet for none.
Engineers can deal with wavelengths but they cannot deal with mathematical functions that change in shape like an amoeba. To get around this, to relate wavelengths to colours, a group of scientists and engineers got together to define the shape of those amoebas. This group is the Commission Internationale de l'Éclairage (CIE). The CIE has defined several amoebas suitable for several purposes. Standard Observers they are called. All of the CIE's standards are based on them, as are most instruments that measure light, including exposure meters and spectrophotometers.
These standards are designed to facilitate repeatable and precise measurements of mechanical and electronic devices, of sensors, dyes, pigments and the like, and to standardize information. Thus, engineers create an image sensor that corresponds as closely as possible to the latest CIE standard amoeba in terms of how it matches wavelengths of light to specific colors. They measure how their sensor deviates from the standard, and they note those deviations in a profile. Other engineers create a printer, trying to make its output correspond as closely as possible to the standard amoeba. They also note the deviations between their printer's output and the standard in a profile. Then, when a computer sends the image file captured by the sensor to the printer, it adjusts the image according to the two profiles. The resulting picture comes out of the printer using colours that more or less match the relationships between wavelength and colour defined by that standard amoeba.
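As a sketch of the plumbing: the camera's profile maps device values into the standard space, and the printer's profile is inverted to find the values to send. Real ICC profiles contain tone curves and lookup tables, not just a matrix, and both matrices below are invented for illustration:

```python
import numpy as np

# Hypothetical device characterizations: how each device's RGB relates to
# the standard colour space defined by the CIE observer.
camera_to_standard = np.array([[0.90, 0.10, 0.00],
                               [0.05, 0.90, 0.05],
                               [0.00, 0.10, 0.90]])
printer_to_standard = np.array([[0.80, 0.15, 0.05],
                                [0.10, 0.80, 0.10],
                                [0.05, 0.15, 0.80]])

camera_rgb = np.array([0.6, 0.3, 0.1])              # what the sensor reported
standard = camera_to_standard @ camera_rgb          # the colour it should mean
printer_rgb = np.linalg.solve(printer_to_standard, standard)  # what to send

# Feeding printer_rgb through the printer's own response reproduces the
# standard colour, which is the whole point of the two profiles.
print(np.allclose(printer_to_standard @ printer_rgb, standard))   # → True
```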
This approach to matching colour would be all you'd need if you invited the Standard Observer to dinner and wanted to impress him with your photos. However, if he came, he would not deign to look at them hanging on your living room wall without repainting the wall a particular shade of grey. He would also insist on drawing the curtains to block out the sun and installing a special lamp. Moreover, this would not be a normal social experience. He would not view them before dinner when he was hungry, or during dinner when he was distracted, or after dinner when he was relaxed.
Put bluntly, it is never practical to match colours based solely and precisely on CIE specs and the Standard Observer. Realistic calibration is imprecise at best, and more an art than a science. The specifications (ICC.1:2003-09) for International Color Consortium (ICC) profiles - the profiles used by Apple's ColorSync technology - make this clear:
"Clearly, there is considerable art involved in shaping the tone-reproduction and color-reproduction characteristics of different media and much of this art is based on subjective, aesthetic judgments. As a result, the substrate and the colorants used in a medium will be exploited to impart a particular personality to the reproduction that is characteristic of the medium. In reproducing an image on various types of media, it may be desirable to adjust the colorimetry to accommodate the differing characteristics of those media. In any case, it is necessary to accommodate the gamut differences. Such considerations go beyond the simplistic matching of color stimuli or even of color appearance. These adjustments need to be incorporated in the color transforms of the device profiles."
The reality is that matching colour is a chimera. The colour of a person's face as you perceive it may change with the background or a hat. When different pigments and dyes are involved, matching colour becomes a game spread across two ballparks. Except by accident, flesh tones on a monitor will never look just like the flesh tones in a print when the two are compared directly.
On the other hand, it does not matter if the monitor and print do not match. These two versions of the picture will never be compared under normal circumstances. What matters is that the flesh tones (or whatever) always look appropriate within their context. The flesh tones on the monitor need to look natural within the photo. The flesh tones on the print need to look natural within the print and next to the other portraits on the wall.
Your task is to calibrate your monitor and printer not so that their images match, but so that when a picture looks good on the monitor, it also looks good printed out. How the two compare side by side is immaterial.
Colour Profiles -- It is possible to spend a lot of time and money calibrating equipment to absurd levels of precision. Since fudge is a basic ingredient of profiles and colour-matching, ICC profiles from different sources will give different results, and there is no way to tell whether you will like a profile without buying it and trying it. Fortunately, most people don't need to profile their printer at all and can get by fine with the default settings. Long ago Microsoft and HP proposed, and the computer industry adopted as a formal standard, a colour-matching technology that's simpler than the full ICC standard while still being sufficient for most people outside the graphics-arts industry. All devices are assumed to be able to produce a range of colours that will fit within a range or "colour space" called sRGB. A standard set of numbers defines every colour within this space. All devices are supposed to interpret those numbers sensibly. It is the norm for photos on the Web, and most commercial printing services use it, so I've set my Mac to use sRGB by default (ColorSync Utility > Preferences pane > Default Profiles tab > RGB Default pop-up menu).
Most inks on most papers are limited to the range of sRGB, although some do exceed it. With those inks, a larger colour space defined by Adobe, called "Adobe RGB," allows more vivid colours. The difference is likely to matter in print competitions and some corners of the graphic-arts trade, but it is not clear to me that it would matter elsewhere. Using a larger colour space incurs a cost: it is likely to require 16-bit colour, which requires more storage and processing time. A good description of those and other colour spaces is at:
The sRGB standard ought to make colour-matching simple and invisible but Microsoft is not known for support of standards, even its own. When I was exploring some of Photoshop's preferences, I noticed some curious results. The pure green on the Macbeth ColorChecker showed up with these different amounts of red using different programs and different versions of the sRGB profile:
                                     Apple's sRGB profile   Microsoft's sRGB profile
  Photoshop using ColorSync                   0%                    11.0%
  Photoshop using Adobe Color Engine          0%                     5.9%
  Preview and GraphicConverter                0%                     5.5%
The first profile was supplied by Apple as part of Mac OS X. The second was supplied by Microsoft, was installed as part of Photoshop, and is built into some other applications as well. I don't know which of these is correct, but it appears to be the profile that Apple supplied; in any case, that profile is consistent across programs, which is more important.
Calibrating Your Equipment -- Before you can use a monitor to balance the colour of photographs, you need to calibrate it under its normal ambient lighting. I work with my PowerBook sometimes under incandescent lighting and sometimes under fluorescent lighting, so I have calibrations for both and switch between them. Apple's calibrator, accessible from the Display preference pane, is adequate to set up a computer for ordinary purposes but it is not good enough for editing photos. I suggest instead the $20 shareware package SuperCal. However, do not use it with the photo built into the program. Instead use an electronic version of the Macbeth ColorChecker (free from the second link below). If you are taking the sRGB route and using only a single printer, then it would be reasonable just to compare your monitor directly to a printout of that file, but if you are using a different colour space or want to use multiple printers, compare it to a real Macbeth card. In any case, be sure to set the gamma to 2.2. That is the de facto standard for working with colour. The Mac's standard of 1.8 was intended to make a grey-scale monitor look like a printed page.
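Concretely, gamma is the exponent relating a stored pixel value to displayed brightness, both normalised to the 0-1 range, so the same file looks darker on a gamma-2.2 display than on a gamma-1.8 one:

```python
# A mid-grey pixel stored as 0.5, shown on displays with different gammas.
mid = 0.5
shown_18 = mid ** 1.8     # brightness on a gamma-1.8 display
shown_22 = mid ** 2.2     # brightness on a gamma-2.2 display
print(round(shown_18, 3), round(shown_22, 3))   # → 0.287 0.218
```

This is why a photo balanced on a 1.8-gamma Mac looks too dark when viewed at the 2.2 that everyone else uses, and why calibrating to 2.2 matters.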
Your goal in calibrating the monitor should be to make the two images of the ColorChecker match as closely as possible overall and to fudge the inevitable differences so that none of the colours is further off than any other. From such different technologies any kind of real match is impossible; you are after the best approximation.
To compare the target photo to the monitor, and to assess the colour of prints, you don't need fancy instruments - you are pleasing your eye, not the Standard Observer - but you do need a suitable lamp. Ideally this will be the same kind of lamp you always view your pictures with, but since most pictures are viewed under a variety of conditions, you really need an average lamp. Although there is no such thing as an average lamp, there is a graphic-arts standard for judging colour. It is an arbitrary standard that has proven to be functional. Ordinary light bulbs are redder than this and most fluorescent tubes are too green. A reasonable compromise is a desk lamp that combines a 60-watt incandescent bulb with a circular fluorescent tube. If you need to buy something, you might find a desk lamp that uses a compact fluorescent tube and replace the tube with an Ott-Light TrueColor.
If you want professional-looking 8" x 10" prints that are tough and durable, and if you want to keep things simple, then you might take the sRGB approach with a $500 Olympus P-440 dye-sublimation printer. That's what I do. If you use the P-440 from a Macintosh, all you do is switch it on. It needs no special set-up and, to my eye, its native colour management works better than any ColorSync profile I have found for the machine, including a couple from the QImage folks and the one supplied by Olympus themselves. The printer requires no cleaning and uses no liquid ink that can dry up. The running cost is $2 per 8" x 10" print, which is less than good ink-jet paper and ink work out to be if a print head ever dries out. Its range of colours is not the broadest - it lies completely within sRGB space - and when I compare prints of the Macbeth ColorChecker made from it and from my darkroom, the tones from the Olympus look a bit more restricted. However, this is not a comparison people normally make and the P-440's prints on their own can be stunning. The prints I get from it do not look digital even under a loupe. (Do note, however, that I use special-purpose scaling software, I do not merely send the printer a file and have it fill the paper. Pictures scaled by PhotoZoom Pro are sharper than pictures scaled in the usual ways, even with PhotoZoom's sharpening switched off. See "Editing Photos for the Perfectionist" in TidBITS-748 for more details on PhotoZoom Pro.)
[Editor's note: I can vouch for the quality of these prints. Charles passed through Ithaca recently and brought me pictures he'd taken on a trip through the Himalayas in July. He used a Sigma SD-10 camera and the Olympus P-440 printer, and the prints were utterly gorgeous. -Adam]
I have found only two glitches with the P-440. The first is trivial: the printer driver will not accept images at the advertised spec of 3200 pixels, it requires them to be three pixels shorter. The second is more serious: if the line voltage in your house fluctuates too much, then the printer will produce random blotches and lines. Fluctuating voltage is not common in cities but it can happen in rural areas. I need to plug the printer into a voltage regulator. (The computer's UPS would have done but I happened to have an old voltage regulator in the basement.)
Last-minute addition: Kodak just announced the release of a comparably priced dye-sublimation printer that prints pictures up to 8" x 12". Kodak is aiming this printer at professional photographers, an important market that the company has always served well. This printer looks very interesting.
Balancing Colours in Photos -- Once you have a system that is calibrated, you need to get to work balancing the colour of individual photos. The easiest way I know to do this is to relate everything in the picture to some spot that is some shade of neutral, colourless grey. There is one such spot in just about every photo. It may be merely the reflection of a lamp on a metallic surface, it may be only a few pixels across, maybe just the glint off a ring, but it is almost certain to be there somewhere. Once you've found that neutral spot, adjust the overall colour balance so that the red, blue, and green values for that spot measure the same using DigitalColor Meter or the equivalent function in Photoshop or your raw converter. (The raw converter is the best place to do this. In Photoshop you can just click on the spot with the eyedropper.)
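The neutral-spot method can be sketched in a few lines of Python. This is an illustrative sketch, not the algorithm any particular raw converter uses: given the red, green, and blue readings of a spot that ought to be neutral, it computes per-channel gains that make the three values measure the same.

```python
# Sketch of neutral-spot colour balancing. The image is assumed to be
# rows of (r, g, b) tuples in the 0-255 range.

def neutral_balance(pixel):
    """Given the (r, g, b) of a spot that ought to be neutral grey,
    return per-channel gains that equalize the three values."""
    r, g, b = pixel
    target = (r + g + b) / 3          # preserve overall brightness
    return (target / r, target / g, target / b)

def apply_gains(image, gains):
    """Apply the gains to every pixel, clipping to the 0-255 range."""
    gr, gg, gb = gains
    return [
        [(min(255, round(r * gr)), min(255, round(g * gg)),
          min(255, round(b * gb))) for (r, g, b) in row]
        for row in image
    ]

# A glint that reads warm (too red) at (140, 128, 120):
gains = neutral_balance((140, 128, 120))
balanced = apply_gains([[(140, 128, 120)]], gains)
print(balanced[0][0])   # the spot now measures equal in all channels
```

Applying the same gains to the whole image shifts every colour of similar brightness along with the neutral spot, which is the point of the technique.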
Now every colour of a similar brightness ought to be properly balanced. Find a colour that is particularly sensitive to colour casts - most likely skin tones - and look for differences in that one hue between highlights and shadows. Those differences indicate colour casts that vary with brightness. In Asiva Shift+Gain (again, see the previous article in TidBITS-748 for details) it is easy to select the range of brightness that is off and correct it by nudging the hue slightly warmer or cooler. If that one tone is corrected so that it looks the same at all levels, then other tones ought to be similar as well, although there may be anomalies if the picture was taken in mixed or fluorescent or high-intensity lighting. If some specific tones show an anomaly, select just those tones in Asiva Shift+Gain and fix them.
The Eyes Have It -- The approach to controlling colour that I have outlined here is the simplest and cheapest I know of. It would be a sensible approach to start with and would be a good one to retreat to if you are having problems doing it another way. However, this is certainly not the only way to control colour nor can it achieve the highest quality possible. Enough time and enough money can buy better results. If you decide to aim higher, though, do always keep in mind that colour is a variable perception; it is not a stable, objective phenomenon. There is rarely an intelligent way to answer the question, "Does this profile (or printer) give better colour?"
Perception also ought to be kept in mind when buying a digital camera. The specifications of digital cameras are not what they seem to be if you consider them from the perspective of the eye. My next article will examine cameras from this angle. Among other things it will calculate a finite and surprisingly low answer to the question, "How many pixels are enough?"
PayBITS: If Charles's recommendations for matching colours helped, he asks that you make a donation to Doctors Without Borders: <http://www.doctorswithoutborders-usa.org/donate/>
Read more about PayBITS: <http://www.tidbits.com/paybits/>
Article 3 of 5 in series
In another incarnation I was a commercial photographer. At the end of that life I sold all of my studio equipment and all of my cameras save one, a Horseman 985, a contraption with a black bellows that resembles the Speed Graphic press cameras you see in pre-war movies. It uses roll film and allows the front and back of the camera to be twisted in every direction when it's parked on a tripod. You can also hold it in your hands and pretend you're acting in "Front Page." Never have I found a camera so useful. Nowadays, however, digital sensors are pushing the optical limits of lenses and software has become more pliable than leather bellows, not just for adjusting colour but for optical manipulations as well. This year a modestly priced (as such things go) digital SLR supplanted my Horseman. I can no longer see owning a camera that uses film.
In this article I am going to examine the technology of digital cameras, but in an unconventional way. I am going to approach it from basic principles. This approach may seem abstract and theoretical at first, but it won't for long. You will see that if you understand the scientific principles, you can ignore a lot of marketing hype and save significant sums of money.
Photocells -- Imagine a small windowpane with bits of a special metal embedded in the glass and a wire touching those bits. Photons of light bang against the glass. The impact unsettles electrons in the metal. They bang into electrons within the wire, which bump into electrons further down the wire, which bump into still more electrons, so that a wave of moving electrons passes along the wire - an electrical current. The more photons that bang into the pane, the more electricity flows.
This is a photocell, a sensor that is sensitive to the intensity of light. Now imagine millions of cells like this assembled into a checkerboard and shrunk to the size of a postage stamp. Put this stamp-sized collection of photocells inside a camera where the film usually goes. The lens projects an image onto it. Each cell receives a tiny portion of the image and converts that portion into an electrical charge proportionate to the amount of light forming that portion of the picture. Now we have a photosensor.
The complete matrix of charges on this photosensor forms an electrical equivalent of the complete image - but only of the intensity of the image. Since the eye interprets the intensity of light as brightness, brightness devoid of colour, this photosensor provides the information of a colourless photograph, of a black-and-white photograph. If we feed the output of the photosensor to the input of a printer, and if we let the printer spray ink on paper in inverse proportion to the voltage (lower voltage, more ink), then we will see a black-and-white photograph appear. The output of the photosensor can be connected directly to the printer through an amplifier, or it can be converted into digital numbers and the digital numbers can be sent to the printer. The first approach is analog, the second is digital. The greater the range of digital numbers, the finer the steps from black to white. If there are enough steps, the printout will look like a continuous-tone photograph.
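The digital branch of that choice - converting each cell's voltage into a number - can be sketched simply. The code below is an illustration of the principle, not any camera's actual analog-to-digital converter: a voltage, expressed as a fraction of the cell's maximum, is quantized into one of 2^bits levels, and more bits mean finer steps from black to white.

```python
# Sketch of analog-to-digital conversion of a photocell's output.

def quantize(fraction, bits=8):
    """Map a voltage fraction (0.0-1.0) to one of 2**bits digital levels."""
    levels = 2 ** bits
    return min(levels - 1, int(fraction * levels))

print(quantize(0.5, 8))    # mid-grey lands at level 128 of 256
print(quantize(0.5, 12))   # with 12 bits, level 2048 of 4096
```

With 256 steps an ordinary print already looks continuous-tone; the extra levels of a 12- or 16-bit pipeline matter mainly when the tones are stretched during editing.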
To make a photosensor record colour, we need to make it sensitive to wavelengths of light as the eye is sensitive to them. We see long wavelengths weakly as reds, short wavelengths very weakly as blues, and medium wavelengths strongly as greens. The easiest way to make a black-and-white photosensor record colour is to put filters over the cells so that alternate cells respond to short wavelengths, medium ones and long ones. Since the eye is most sensitive to medium wavelengths, it is practical to use twice as many of these as the others: one blue, one red, two greens. Such a set of filtered cells - red, green, blue, green - forms the Bayer photosensor (named after its inventor) that is used in nearly every digital camera.
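The filter mosaic just described is easy to picture in code. This sketch lays out one common Bayer arrangement (there are rotations of it); the point is simply that each 2x2 block holds one red, one blue, and two green cells.

```python
# Sketch of a Bayer colour-filter mosaic: rows alternate red/green and
# green/blue, giving twice as many green cells as red or blue.

def bayer_pattern(rows, cols):
    """Return the filter colour over each cell of the mosaic."""
    grid = []
    for y in range(rows):
        row = []
        for x in range(cols):
            if y % 2 == 0:
                row.append('R' if x % 2 == 0 else 'G')
            else:
                row.append('G' if x % 2 == 0 else 'B')
        grid.append(row)
    return grid

for row in bayer_pattern(4, 4):
    print(' '.join(row))
# R G R G
# G B G B
# R G R G
# G B G B
```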
Now consider what happens when a spot of light is smaller than a group of four cells, when it is small enough to strike only a single cell. Assume the spot to be white light, which includes every wavelength. If the white spot falls on a blue-filtered cell, then the picture will show the spot to be blue. If the white spot falls on a red-filtered cell, the picture will show the spot to be red. If it falls on a green-filtered cell, the spot will look green. This can cause so many errors in the image that manufacturers try to prevent it from happening by blurring the image, by putting a diffusing filter in front of the sensor to smear small spots of light over more than one cell.
Note that in a sensor like this, four cells form the smallest unit that can capture full information about some part of a picture. That is, four cells form the basic element of a picture, the basic "picture element" or "pixel". Unfortunately, to make their products sound more impressive, manufacturers count cells as pixels. That's like saying a piano has 234 notes, not 88, because it is built with 234 strings. Since the sensors function differently at the level of the cell and the level of the pixel, it is important to ignore the advertising and to discriminate appropriately between pixel and cell. I shall do that in this article.
A simpler approach would be to design a sensor in which every cell is sensitive to every wavelength. Such a sensor was patented by Foveon, Inc., in 2002, and is currently in its second commercial generation. Foveon's sensor uses no coloured filters but instead embeds photo-sensitive materials within the silicon at three depths. The longer the wavelength of the light, the farther it penetrates the semi-transparent silicon and the deeper the photo-sensitive material it stimulates. With a Foveon sensor, every cell records a complete pixel with all wavelengths. (Note, however, that Foveon have taken to multiplying the number of pixels by three, to sound competitive in their ads.)
How many pixels do you need? The smallest detail usable in a print is defined by the finest lines that a person can see. At a close reading distance (about 10 inches, or 25 cm), somebody with perfect vision can resolve lines slightly finer than those on the 20/20 (6/6) line of the eye chart, lines of about 8 line-pairs per millimetre (l-p/mm), which is the unit of optical resolution.
However, those are black-and-white lines. No ordinary photograph contains black-and-white lines so thin because no camera can produce them on photographic (as distinct from lithographic) film. No lens can create such fine lines without beginning to blur the blacks and whites into grey. Dark-grey-and-light-grey lines need to be thicker than black-and-white lines to be seen. In the perception of fine lines, a halving or a doubling of thickness is usually the smallest difference of any practical significance, so this pronouncement of Schneider-Kreuznach sounds perfectly reasonable to me: "A picture can be regarded as impeccably sharp if, when viewed from a distance of 25 cm, it has a resolution of about 4 l-p/mm." On an 8" x 12" photo, this is 1,600 by 2,400 pixels, or 3.8 megapixels. (8" x 12" is about the size of A4 paper. It isn't quite a standard size of a photo but will prove more convenient for discussion than 8" x 10".)
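The arithmetic behind that figure is worth making explicit: 4 line-pairs per millimetre means 8 lines, hence 8 pixels, per millimetre. A quick sketch (the article conservatively rounds the result down to 1,600 by 2,400, or 3.8 megapixels):

```python
# Pixels required for a print to look "impeccably sharp" at 25 cm,
# using the Schneider-Kreuznach criterion of 4 line-pairs/mm.

MM_PER_INCH = 25.4

def pixels_needed(width_in, height_in, lp_per_mm=4):
    """One pixel per line, two lines per line-pair."""
    px_per_mm = 2 * lp_per_mm
    w = round(width_in * MM_PER_INCH * px_per_mm)
    h = round(height_in * MM_PER_INCH * px_per_mm)
    return w, h, w * h / 1e6

w, h, mp = pixels_needed(8, 12)
print(f"{w} x {h} pixels = {mp:.1f} megapixels")
```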
In short, 4 million pixels carry all of the useful information that you can put into an 8" x 12" photograph. Finer detail than this will matter to technical aficionados making magnified comparisons, and it may matter for scientific or forensic tasks, but it will not matter for ordinary purposes. The same holds for larger prints because we don't normally view larger photographs from only 10 inches away. It holds even for the gigantic images in first-run movie theatres. The digital processing used routinely for editing and special effects generates movies with no more than 2,048 pixels of information from left to right, no matter how wide the screen. The vertical dimension differs among cinematic formats but is typically around 1,500 pixels.
This, of course, presents quite a paradox: a frame of a Cinemascope print obviously contains a lot more than 4 million pixels. Even an 8" x 12" print from a 300-dpi printer contains 2,400 pixels by 3,600 pixels, or 8.6 million pixels. Large prints need those additional pixels to prevent our seeing jagged edges on diagonal lines, because the eye will see discontinuities in lines that are finer than the lines themselves.
Since no photograph of any size can contain more than 3 to 4 million elements of information, even when made from film, any substantial enlargement needs to be composed primarily of pixels that do not exist in the original. These pixels need to be interpolated: interpolated through continuous optical integration (film), interpolated mechanically (high-resolution scanner), or interpolated logically by software (digital photography). This need for interpolation in enlargements makes interpolating algorithms fundamentally important to digital photography. For most enlargements, the quality of the interpolating algorithm matters more than the resolution of the sensor or the quality of the lens. We shall come back to this.
For the moment - indeed, forevermore - it is essential to keep straight the distinction between (1) the information that is contained within an image and (2) the presentation of this information. Both are often measured by pixels but they are orthogonal dimensions. The information within a picture can be described by a certain number of pixels. That information may be interpolated into any number of additional pixels but doing so adds nothing to the information, it merely presents the information in smaller pieces.
To illustrate this, here are some examples:
A good 8" x 12" photograph and the same photo run full-page in a tabloid newspaper both contain about 1 megapixel of information.
A slightly better photograph and the same photo run full-page in a glossy magazine and a broadsheet newspaper all contain about 1.9 megapixels of information.
A slightly better photograph still - the best possible - and the same photo spread over two pages in a glossy magazine both contain about 3.8 megapixels of information.
If you have an 8" x 10" photo printer, you can compare those levels of information by printing out a set of pictures (linked below, about 30 MB) that I took at approximately those resolutions, keeping everything else the same. (The test pictures were shot at 3.4, 1.5 and 0.86 megapixels: I used a Foveon sensor and, to generate the lower resolutions, used its built-in facility to average cells electronically in pairs or in groups of four.) I enlarged the pictures using the best interpolator I could find to 3,140 by 2,093 pixels.
The photos are JPEG 2000 files, saved in GraphicConverter at 100 percent quality using QuickTime lossless compression. To prepare them I adjusted the levels, cleaned up some dirt in the sky, then enlarged them in PhotoZoom Pro using the default settings for "Photo - Regular." Those settings include a modest and appropriate amount of sharpening.
What you will see, if you print them, is surprisingly small differences from one level of resolution to the next. Each of these photos looks sharp on its own, and at arm's length they all look the same. You can see a difference only if you compare them up close. That, of course, is because the only information that's missing from the lower-resolution pictures is information that is close to the limit of the eye's acuity and thus is difficult to see.
Bayer vs. Foveon in Theory -- Cameras today fall into two categories, those with a Bayer sensor and those with a Foveon sensor, which at this writing include only two, a theoretical Polaroid 530 and a very real Sigma SD-10.
In a Bayer sensor, a single cell records a single colour, but a pixel in the print can be any colour. Carl Zeiss explain this: "Each pixel of the CCD has exactly one filter color patch in front of it. It can sense the intensity for this color only. But how can the two remaining color intensities be sensed at the very location of this pixel? They cannot. They have to be generated instead through interpolation (averaging) by monitoring the signals from the surrounding pixels which have filters of these other two colors in front of them."
Since the cells provide a lot of partial information, the interpolation can be accurate, but it can be inaccurate as well. Patterns of coloured light can interact with the checkerboard pattern of filters over the cells to generate grotesque moire patterns. To avoid these, Bayer sensors are covered with a filter that blurs every spot of light over more than one cell. The net result proves to be interpolated resolution that varies with colour and peaks with black-and-white at about 50 percent more line-pairs/millimetre than the intrinsic resolution of the sensor. This sounds like a lot but cannot be seen unless you look closely.
More problematic is the fact that this filter does not merely prevent moire patterns, it also blurs edges. With a Bayer sensor, every edge of every line is blurred. You can see the interpolated resolution and the blurring in the magnified tests in the picture linked below. There I have compared cameras with a Foveon and a Bayer sensor containing the same number of pixels - pixels, not cells. Both have 3.4 million pixels (although the Bayer has 13.8 million cells).
People make a big deal about resolution because it sounds important and is easy to test, but aside from special cases like astronomical observation, fine resolution actually matters little. By definition, at the limits of resolution, we can only just make out detail. Anything that is barely visible will not obtrude itself upon our attention or be badly missed if it is not there. What we see easily is what matters to us, what determines our impression of sharpness. Our impression of sharpness is determined by the abruptness and contrast at the edges of lines that are broad enough to be easily made out. You can see this with the two tortoises in this picture linked below. The sharper tortoise has less resolution but its edges are more clearly defined.
The Bayer sensor resolves finer black-and-white lines but a Bayer sensor will not reproduce any line so sharply as the Foveon. As a result, when comparing two top-quality images, I would expect the Bayer's image to look slightly more impressive when large blow-ups are examined up close, but I would expect the Foveon's to look slightly clearer when held a little farther away. Moreover, when detail is too fine for the sensor to resolve, the Bayer looks ugly or blank but the Foveon interpolates pseudo-detail. This means that in some areas, large enlargements examined closely might actually look better with the Foveon. In sum, I would expect the 3.4 megapixel Foveon and what is marketed as a 13.8-megapixel Bayer to be in the same league. I would expect photographs from them to be different but comparable overall, if they are enlarged with an appropriate algorithm.
Bayer vs. Foveon in Practice -- "If they are enlarged with an appropriate algorithm..." - that statement is critical to a sensible comparison. Usually, if you magnify an object a little, it won't change its appearance much. If you simply interpolate according to some kind of running average, you can increase its size to a certain extent and it will still look reasonable. This is how most enlargements are made. It is the basis of the bicubic algorithm used in most photo editors, including Photoshop and, apparently, Sigma's PhotoPro. It is also the basis of most comparisons between Bayer and Foveon. However, a running average will widen transitions at the edges of lines, and it will destroy the Foveon's sharp edges, softening them into the edges of a Bayer. A better class of algorithm will stop averaging at lines. Any form of averaging, though, tends to distort small regularities (wavelets) that occur in similar forms at different scales. Best of all are algorithms that look for wavelets, too. The only Macintosh application I know of in that class is PhotoZoom Pro. PhotoZoom Pro has a limited set of features and some annoying bugs - version 1.095 for the Mac feels like a beta release - but it creates superb enlargements.
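Why a running average softens edges can be shown in one dimension. This sketch is illustrative only (real bicubic interpolation uses a larger neighbourhood, and none of this is PhotoZoom's algorithm): linearly interpolating across a sharp black-to-white step manufactures intermediate greys that were not in the original.

```python
# Upscale a 1-D signal by linear interpolation (a running average
# between neighbouring samples).

def upscale_linear(samples, factor):
    """Insert (factor - 1) interpolated values between each pair."""
    out = []
    for i in range(len(samples) - 1):
        for k in range(factor):
            t = k / factor
            out.append(samples[i] * (1 - t) + samples[i + 1] * t)
    out.append(float(samples[-1]))
    return out

edge = [0, 0, 255, 255]              # a sharp edge in one dimension
print(upscale_linear(edge, 2))       # the step acquires a 127.5 grey
```

An edge-aware algorithm would detect the step and refuse to average across it, which is exactly what the better class of interpolators described above does.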
An appropriate comparison of the Bayer and Foveon sensors would see how much information these sensors capture overall. (How much spatial information, that is: comparing colour would be comparing amoebas, as I explained in "Colour & Computers" in TidBITS-749.) To do this, I tested an SD-10 against an SLR that was based on a larger Bayer sensor, a sensor 70 percent larger than the Foveon that contained 13.8 million cells. Kodak were most helpful in supplying this camera once they heard Doctors Without Borders (Médecins Sans Frontières) was to benefit (see the PayBITS block at the bottom of this article to make a donation if you've found this article helpful). Also, Sigma sent me a matched pair of 50-mm macro lenses to use with the cameras.
I copied an oil painting with a wide variety of colours and a lot of fine textural detail. With each camera I photographed a large chunk of the painting, cropped out a small section from the centre, blew up that section to the same size as the original using PhotoZoom Pro (the defaults for "Photo - Regular"), and compared that blow-up to a gold standard, a close-up that had not seen any enlargement, interpolation, or blurring filter in front of the sensor. Before blowing them up I balanced all three photos to be as similar as I could, then, to prevent unavoidable differences in colour from confounding the spatial information, I converted all three images to black-and-white. I did this in ImageJ. First I split each image into its three channels, then I equalized the contrast of each channel across the histogram, then I combined the channels back into a colour picture, converted the new colour picture to 8-bit, and equalized the contrast of the 8-bit file. (See the second link below for an explanation of contrast-equalization.) I chose a painting in which most of the coloured brush strokes were outlined with black brush strokes, so that adjacent colours would not merge into similar shades of grey after conversion. With my 314-dpi printer, the two enlargements are the equivalent of chunks from a 14" x 21" print.
The difference between the photos from the Bayer and Foveon is very slight. The two pictures are indistinguishable unless you compare them closely. Fine, contrasty lines on the standard are finer on the Bayer, more contrasty on the Foveon. The one that looks more like the standard depends upon the distance from the eye and the lighting but the differences are trivial. The two images do contain slightly different information, but they contain comparable amounts overall.
On the other hand, for efficiency of storage and speed of processing, the Foveon wins hands down. This is how two identical pictures compared:
              Foveon (SD-10)    Bayer
RAW           7.8 MB            14.7 MB
8-bit TIFF    9.8 MB            38.7 MB
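The 8-bit TIFF figures can be sanity-checked from the pixel counts. This is rough arithmetic under the assumption of uncompressed RGB TIFFs (three bytes per output pixel): the Foveon converter writes 3.4 million pixels, while the Bayer converter writes one output pixel per cell, 13.8 million, after demosaicking.

```python
# Rough check of the 8-bit TIFF sizes: three bytes per output pixel,
# ignoring headers and any compression.

def tiff_mb(pixels, bytes_per_pixel=3):
    return pixels * bytes_per_pixel / 2 ** 20

print(f"Foveon: {tiff_mb(3.4e6):.1f} MB")    # close to the 9.8 MB measured
print(f"Bayer:  {tiff_mb(13.8e6):.1f} MB")   # close to the 38.7 MB measured
```

The Bayer file is four times larger yet, as the comparison above shows, carries a comparable amount of information.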
If you would like to print out my test pictures, you can download them. However, for the comparison to be meaningful, you must specify a number of dots per inch for the pictures that your printer can resolve in both directions. I know that an Olympus P-440 can resolve 314 dpi, with no more than occasional one-pixel errors in one colour's registration. I have not found any resolution that an Epson 9600 can handle cleanly in both directions, although I have not been able to test it exhaustively. Other printers I know nothing about. You will have to experiment with the test patterns in the Printer Sharpness Test file linked below. For this purpose, only the black-and-white stripes matter.
Each picture in the 5.8 MB file below is 1512 pixels by approximately 2270. If a picture has been printed correctly, the width in inches will be 1512 divided by the number of dots per inch. Print them from Photoshop or GraphicConverter; Preview will scale them to fit the paper.
Remember that the question to ask is not which picture looks better or which picture shows more detail but which picture looks more like the gold standard overall. I suggest that you compare the pictures upside down. Remember, too, that these are small sections from big enlargements that you would normally view framed and hanging on a wall. Also, although the contrast is equalized overall, the original colours were not quite identical and the equalization of contrast amplified some tonal differences. If you perceive the Bayer or Foveon to be better in one or another area, make sure that in this area the tonality is similar. If the tonality is different, the difference there is probably an artifact. An example of this is the shadow beneath the tape on the left side.
I have not been able to test this but I suspect that the most important optical difference between Bayer and Foveon sensors may be how clearly they reveal deficiencies in lenses. Since the Foveon sensor is sharper, I would expect blur and colour fringing to show up more clearly on a Foveon sensor than a Bayer.
Megapixels, Meganonsense -- Megapixels sell cameras as horsepower sells cars and just as foolishly. To fit more cells in a sensor, the cells need to be smaller. It is possible to make cells smaller than a lens can resolve. Even if the lens can resolve the detail more finely, doubling the number of cells makes a difference that is only just noticeable in a direct comparison.
On the other hand, small pixels create problems. Electronic sensors pick up random fluctuations in light that we cannot see. These show up on enlargements like grain in film. Larger cells smooth out the fluctuations better than smaller cells. Also, larger cells can handle more light before they top out at their maximum voltage, so they can operate farther above the residual noise. For both reasons, images taken with larger cells are cleaner. Enlargements from my pocket-sized Minolta Xt begin to fall apart from too much noise, not from too few pixels.
In contrast, enlargements from my Sigma SD-10 have so little noise that they can be enormous. A 30" x 44" test print looked as though it came from my 2-1/4" x 3-1/4" Horseman. The Sigma has less resolution than the Horseman - it's probably less than can be extracted from scanning the finest 35-mm film - but its noise level can be reduced to something approaching 4" x 5" sheet film. Such a low level of noise leaves the detail that it contains, which is substantial, very clean. In perception, above a low threshold, the proportion of noise to signal matters far more to the brain than the absolute amount of signal. Indeed, if I look through a box of my old 11" x 14" enlargements, the only way I can distinguish the 35-mm photos from the 2-1/4 x 3-1/4" is to examine smooth tones for noise. I cannot tell them apart by looking at areas with detail.
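The advantage of large cells follows from the standard shot-noise model of light: photon arrivals are random, so a cell collecting N photons fluctuates by about the square root of N, making the signal-to-noise ratio itself the square root of N. A sketch of that arithmetic:

```python
# Signal-to-noise ratio under Poisson (photon shot-noise) statistics.

import math

def snr(photons):
    """SNR = signal / noise = N / sqrt(N) = sqrt(N)."""
    return photons / math.sqrt(photons)

small_cell = snr(10_000)   # a small cell's full-well capacity
large_cell = snr(40_000)   # four times the area, four times the photons
print(small_cell, large_cell)   # 100.0 200.0 -- the big cell is 2x cleaner
```

Quadrupling a cell's area only doubles its cleanliness, but as the comparison of my Minolta and Sigma shows, that doubling is exactly what the eye notices in smooth tones.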
In sum, with the range of sensors used in cameras today, there is no point to worrying about a few megapixels more or less. Shrinking cells to fit more of them in the sensor can lose more information than it gains. The size of the cells is likely to be more important than their number. For the same money, I would rather buy a larger sensor with fewer pixels than a smaller sensor with more pixels. If nothing else, the larger sensor is likely to be sharper because it will be less sensitive to movement of the camera. For a realistic comparison of sensors as they are marketed see this chart:
Tripod vs. Lens -- Most people believe that the quality of the lens is of primary importance in digital photography. If you have stayed with me so far, you may not be surprised to hear me calculate otherwise. With 35mm cameras, an old rule of thumb holds that the slowest shutter speed that a competent, sober photographer can use without a tripod and still stand a good chance of having the picture look sharp is 1 divided by the focal length of the lens: 1/50" for a 50-mm lens, 1/100" for a 100-mm lens, etc. At these settings there will always be some slight blur but it will usually be too little to be noticed. This blur will mask any difference in sharpness between lenses. To see differences in sharpness requires speeds several times faster.
With digital cameras that use 35-mm-sized sensors, the same rule of thumb holds, but most digital cameras use smaller sensors. With smaller sensors, the same amount of movement will blur more of the picture. If you work out the trigonometry, you'll find that you need shutter speeds roughly twice as fast for 4/3" sensors and four times faster for 2/3" and 1/1.8" sensors. (Digital sensors come in sizes like 4/3", 2/3" and 1/1.8". Those numbers are meaningless relics from the days of vacuum tubes; they are now just arbitrary numbers equivalent to dress sizes.) That means minimal speeds of 1/100" and 1/200" for a normal lens. Differences in sharpness among lenses would not be apparent until shutter speeds are several times higher again. Because of this, it strikes me that the weight of lenses matters more to image quality than the optics. The heavier a camera bag becomes, the more likely the tripod will be left at home.
(Note that this does not mean that 35-mm-sized sensors are best. Other optical problems increase with the size of the sensor. As an overall compromise, the industry is beginning to adopt a new standard, the 4/3", or four-thirds, which is approximately one-half the diameter of 35-mm. This is not unreasonable.)
Frankly, I should be astonished to find any lens manufactured today that does not have sufficient contrast and resolution to produce an impressive image in the hands of a competent photographer. I know that close comparisons of photos shot on a tripod will show differences from one lens to another, and I know that some lenses have weaknesses, but very few people will decorate a living room with test pictures. In the real world, nobody is likely to notice any optical deficiency unless the problem is movement of the camera, bad focus, distortion or colour fringing. It is certainly true that distortion and colour fringing can be objectionable but, although enough money and experimentation might find some lenses that evince less of these problems than others, as a practical matter, especially with zoom lenses, they seem to be inescapable. Fortunately, these can usually be corrected or hidden by software.
Indeed, even a certain amount of blur can be removed with software. Let's say that half of the light that ought to fall on one pixel is spread over surrounding pixels. Knowing this, it is possible to move that much light back to the central pixel from the surrounding ones. That seems to be what Focus Magic does (see the discussion of Focus Magic in "Editing Photographs for the Perfectionist" in TidBITS-748).
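The idea of moving light back can be shown in one dimension. The kernels here are illustrative, not Focus Magic's actual algorithm: if a known fraction of each pixel's light leaked to its neighbours, a compensating kernel can push much of it back toward the centre.

```python
# A 1-D sketch of blur and its partial reversal by a sharpening kernel.

def convolve(signal, kernel):
    """Convolve with zero padding at the edges."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = i + k - half
            if 0 <= j < len(signal):
                acc += w * signal[j]
        out.append(acc)
    return out

point = [0, 0, 1, 0, 0]                       # a point of light
blurred = convolve(point, [0.25, 0.5, 0.25])  # half the light leaks out
sharpened = convolve(blurred, [-0.5, 2, -0.5])
print(blurred)     # [0.0, 0.25, 0.5, 0.25, 0.0]
print(sharpened)   # the centre rises from 0.5 back toward 1.0
```

The reversal is only partial and it amplifies noise along with detail, which is why such tools work best on mild blur.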
One More Myth -- Finally, I would like to end this article by debunking a common myth. I have often read that Bayer sensors work well because half of their cells are green and the wavelengths that induce green provide most of the information used by the eye for visual acuity. This made no sense to me but I am not an expert on the eye so I asked an expert - three experts in fact, scientists known internationally for their work in visual perception. I happened to be having dinner with them. It made no sense to them, either, although I took care to ask them before they had much wine. Later I pestered one of them about it so much that eventually she got out of bed (this was my wife Daphne) and threw an old textbook at me, Human Color Vision by Robert Boynton. In it I found this explanation:
"To investigate 'color,'" an experimenter puts a filter in front of a projector that is projecting an eye chart. "An observer, who formerly could read the 20/20 line, now finds that he or she can recognize only those letters corresponding to 20/60 acuity or worse. What can be legitimately concluded from this experiment? The answer is, nothing at all," because the filter reduced the amount of light. "A control experiment is needed, where the same reduction in luminance is achieved using a neutral filter.... When such controls are used, it is typically found that varying spectral distribution has remarkably little effect upon visual acuity."
In short, each cell in a Bayer sensor provides similar information about resolution. It is true that green light will provide a Bayer sensor with more information than red and blue light but that is only because the sensor has more green cells.
If you want to shop for a digital camera, this article will help you make the most important decision, what kind and size of sensor to buy, with how many pixels. Once you have decided that, a host of smaller decisions await you. My next article will walk you through these. It is also going to incorporate a review of the Sigma SD-10 and will appear shortly after one more lens arrives from Japan.
PayBITS: If Charles's explanation of resolution and debunking of
the megapixel myth were useful, please support Doctors Without
Borders: <http://www.doctorswithoutborders-usa.org/donate/>
Read more about PayBITS: <http://www.tidbits.com/paybits/>
Article 4 of 5 in series
In my last article, "Sense & Sensors in Digital Photography," I tried to cut through some of the mythology about image sensors and bring some sense to the subject. Today I shall explain what I look for when examining a camera for purchase. Instead of doing this in the abstract, I shall detail my thinking earlier this year when I bought my last camera, a Sigma SD-10. You will have different needs than I, but this approach ought to help you figure out what features may be important and what gadgetry to ignore. This article will also provide a detailed review of the SD-10.
Preliminary Questions -- The first question to ask yourself is what you want to use the camera for. Be very specific here. The more specific you are, the easier your decision will be. I defined three uses: (1) to take portraits of friends, some to be framed and some for use as publicity photos, (2) to provide illustrations for a book I am working on, and (3) to take pictures during a month of hiking in the Himalaya. The publicity pictures and illustrations need to be of commercial quality and the illustrations require maximal flexibility. For the Himalayan shots I wanted sufficient clarity and detail that I might enlarge them to poster size and hang them on a wall. I still owned my professional workhorse, a 2-1/4" x 3-1/4" rangefinder camera that offers all the swings and tilts of a view camera. I did not want to buy a digital camera offering less control or quality.
Once you have clarified the camera's purpose, you can decide on the level of image sensor that you need. That was the subject of the previous article, so I shall not discuss it here. Do read it now if you have not. For me the answer was clear: I wanted either the best of the Bayer sensors or the larger Foveon.
Knowing the sensor and your purpose, you can decide on the level of camera. I put digital cameras into three categories: (1) simple point-and-shoot, (2) fully featured but compact, with a lens that is fixed to the body and cannot be swapped for another, and (3) fully featured with interchangeable lenses. I own the first and, for convenience, I would have preferred to buy the second, but to obtain the versatility and quality that I wanted, I had to buy the third. A check through the database at Digital Photography Review showed me that my choices were a Sigma SD-10 that cost $1,500 including two lenses, or models from Kodak and Canon that cost $5,000 and $8,000 for the body alone (i.e., the camera with the lenses yet to buy). I had no intention of spending more than $5,000, especially for technology that is changing so quickly and for electronic devices that usually prove impractical to repair a few years down the line. For me the choice was either the SD-10 or sticking with what I had.
Since I wanted big blow-ups that would compare to those from my 2-1/4" x 3-1/4" camera, I first checked to see whether the SD-10 could provide them. I downloaded a raw image from Sigma's Web site, an image that appeared to be taken with a tripod and that would be a hard test of a sensor and lens, then I enlarged the image to 30" x 44" using PhotoZoom Pro (see the previous article) and had the image printed out. I could see excessive colour fringing and other flaws but they would have been correctable with software. Even without any other image-processing, the overall appearance was surprisingly good. I decided to examine the camera in person.
Three Requirements for Any Camera -- I have three absolute requirements for any level of camera. The first is that all menus be labeled in English. Chinese ideograms are not memorable. When I am taking a picture, I do not want to have to look up in an instruction book what some hieroglyphics on a camera's display are supposed to mean. If a menu isn't in English, it might as well not be there. If all the menus are not in English, I do not want the camera, no matter how good it is, no matter how cheap it is, no matter what. The SD-10 has menus in English, so I looked further.
Second, if I am to use a camera, I must be able to see through its viewfinder. I must be able to see clearly and sharply the entire frame while wearing my glasses. Many cameras do not permit this even with simple eyeglasses, and I wear graduated lenses that are the equivalent of trifocals. The eyepiece of the SD-10 can be adjusted to let me see sharply through the portion of my glasses that I need to look through. It also lets me see the entire frame, because the viewfinder shows a lot more than the frame. This extra space is convenient for composition and is one reason I liked my big rangefinder camera. It compensates for one awkwardness: glasses make it difficult to see a digital readout that is at the very bottom of the viewfinder, below the extra space.
The third requirement is that I be able to focus the camera reliably. A camera's focussing system affects sharpness more than almost anything else, and focussing can be more problematic than it seems. Focussing looks easy in a good rangefinder camera, because lines suddenly line up, but this is merely a precise display. The display can mask any amount of error in the shape of the cam that actually adjusts the lens. A single-lens reflex camera (SLR) has no hidden mechanism - what you see ought to be what you get - but eyes are not built to focus cameras. Nobody can see the point of sharpest focus; all we can do is move the lens back and forth through that point, notice when the focus begins to get worse on either side, and try to find the midpoint. Few cameras are still made with big, bright, optical focussing systems, because electronic focussing systems are cheaper to make and easier to sell, yet most electronic devices have coarser discrimination than the eye, and they have other weaknesses as well, like motors that cannot stop instantaneously. On top of that, no automatic focussing system can know what it is that you want to focus on; all it can do is focus on the middle of the frame.
When I tried to focus the SD-10 with the cheap lens that came with it, I almost decided not to buy the camera. The small, dim image in the viewfinder did not facilitate manual focussing, and the automatic focussing did not always put the lens at the same place when I started with the lens set too close and when I started with the lens set too far away. However, after playing with the camera for a while, I concluded that when I pointed the camera at something easy to focus on, the variability in the automatic focussing was no worse than my own variability and was probably close enough, considering the depth of field. I still do not like it but I have been able to make it work. The most important factor to getting well focussed pictures has proven to be a switch on the lens that turns automatic focussing on and off. I switch it on, depress the shutter halfway to focus on what I want to, switch it off, then frame and take the photo. For maximum reliability I have learned to do this habitually. I also focus with the zoom lens extended whenever I can. I still mistrusted the system enough to buy a magnifier for the viewfinder (Nikon's fits), but I have found it to be unnecessary. More important is a better lens that has a larger aperture and thus is brighter. Such a lens makes it easier to focus manually in light that is too dim for the autofocus.
Features, Gadgets, & Gimmicks -- The criteria above are absolute requirements for me. Since the SD-10 met them, I examined it further.
One advantage of digital cameras is that you can see if your photograph is acceptable before you leave the scene - if you can see it, that is. A liquid-crystal display (LCD) on the back needs to be bright enough to be visible in bright sun. On the other hand, you need to be able to dim it enough that you can stand to use it at night. The SD-10 offers three levels of brightness, which I have found to be sufficient. The SD-10 also comes with a transparent plastic cover to protect the LCD. For taking pictures that cover stays in place but you can pop it off to use the menus.
On a digital camera, a perfect exposure will record specular reflections of the sun as pure white but will record any whites containing information that matters with enough tonality - just enough tonality - to show detail. A histogram on the LCD ought to inform you of this clearly. The SD-10 does this well. It overlays the three colour channels (red, green and blue), and if you click the + button to enlarge the image, it graphs only the enlarged portion. This approach beats any form of exposure meter hands down. I take a very quick picture without aiming, check that the highlights are properly exposed, make any necessary adjustment, then take the picture for real.
Colour slide film is designed for a range of seven f/stops between highlights and shadows that are not completely white or black. Squeezing sunlit scenes into this range can be a trick. Digital sensors need to have at least this much "dynamic range," and more is better. Much more is much better. For pictorial photography, dynamic range usually matters more than resolution. It doesn't matter if the sensor is able to resolve fine detail on an object if you cannot see the object at all because it is buried in shadow. You can do a quick-and-dirty test of dynamic range even in a camera shop by systematically underexposing photos of a photographic grey scale. I found the dynamic range of the SD-10 to be remarkable. If highlights are correctly exposed, shadows can be 10 to 11 stops darker yet still retain some coarse detail. The pair of screenshots (linked below) from Sigma's PhotoPro software show how easily and effectively this detail can be extracted. This photo was exposed perfectly for the highlights. The dark version shows a normal dynamic range, about what a colour slide would have shown. The light version shows additional detail in the shadows that was recorded by the sensor and brought out by the Tonal Adjustment sliders.
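The arithmetic of f-stops is worth making explicit: each stop doubles the light, so a range of n stops spans a contrast ratio of 2 to the power n. A couple of lines of Python show what the figures above amount to:

```python
# Each f-stop doubles the light, so n stops of dynamic range span
# a contrast ratio of 2**n to 1 between usable highlights and shadows.
def contrast_ratio(stops):
    return 2 ** stops

print(contrast_ratio(7))   # 128  -- the range colour slide film is designed for
print(contrast_ratio(11))  # 2048 -- shadows 11 stops down, as on the SD-10
```

Seven stops is a 128:1 range; eleven stops is 2048:1, which is why shadow detail ten or eleven stops below the highlights is so remarkable.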
Although digital cameras offer you a choice of film speeds or "ISO" speeds - sensitivity would be a better term than speed - there is actually no ISO standard that can be applied sensibly to digital cameras. That's why I am putting quotation marks around "ISO." I have never seen a credible comparison of the sensitivity or exposure-metering of two digital cameras, nor do I think it's particularly important, since any differences are likely to be small and you can see the exposures instantly. However, I was curious to see how carefully the SD-10 was calibrated, so after I bought it, I checked it at "ISO" 100. In sunlight and in normal room light, the exposure metering on the camera agreed almost perfectly (to within 1/3 of an f-stop) with my studio exposure meter. When I photographed a scale of calibrated greys at different exposures, the correctly exposed 95-percent white was exactly where it ought to be, 2/3 to 1 stop below washing out. (Note that with a digital camera, increasing the "ISO" speed does not make the sensor more sensitive; it amplifies the signal and, at the same time, the noise. "ISO" 100 is normal for most cameras and speeds up to 1600 are commonly available, but any speed over 400 is not likely to look very good.)
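That amplification argument can be illustrated numerically. The sketch below is a simulation of mine, not a measurement of any camera: multiplying a dim exposure by a gain factor scales the signal and the noise identically, so the signal-to-noise ratio, and hence the visible quality, does not improve.

```python
import numpy as np

rng = np.random.default_rng(1)
signal = 0.1 * np.ones(10_000)            # a dim, uniform patch of the scene
noise = rng.normal(0.0, 0.02, 10_000)     # simulated sensor noise
exposure = signal + noise

gain = 4                                  # "ISO" 100 cranked up to "ISO" 400
amplified = gain * exposure               # brighter, yes, but noisier too

snr_before = signal.mean() / noise.std()
snr_after = (gain * signal).mean() / (gain * noise).std()
print(round(snr_before, 3) == round(snr_after, 3))  # True: gain buys nothing
```

The amplified picture is brighter, but the grain in it has grown by exactly the same factor.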
Some cameras offer a choice of metering modes - spot, segment, averaging - so you can choose the one most likely to be accurate for the picture you are about to take. This is the sort of silly featuritis that makes so many electronic devices difficult to use. There is no point in trying to figure out how to set the meter to read a scene most accurately; it is as fast and more certain to take a quick test picture and adjust the control that nudges the automatic exposure up or down. Automatic exposure-bracketing is almost as useless: there is rarely reason to bracket exposures when you can identify the correct exposure when you make it. The SD-10 dedicates push-buttons and primary display space to both of these "features."
Something else I can't see worrying about is how the camera reproduces colour. As I explained in "Colour & Computers" in TidBITS-749, this is tantamount to complimenting or castigating an amoeba on its figure. There is even less reason to worry about the colour reproduction of lenses. If a lens tints the image that it projects onto the sensor, the tint will be systematic and slight, and it will be corrected automatically by whatever software converts the raw image into a usable one.
It is possible and highly desirable for a camera to compensate somewhat for camera shake by moving the sensor in the direction opposite the shake. The SD-10 does not do this. Sigma sells a telephoto lens that does, but the feature would be more useful built into the camera.
Speed of operation is a weakness with many digital cameras. They can take a long time to switch on and there can be long delays for processing the picture and writing it to memory. Check out any camera to make sure its speed is adequate for your purposes. The SD-10 responds quickly but takes long enough to process each picture that sometimes I find myself waiting for it to catch up. Fortunately, there is a workaround for this problem that is more practical than it may sound: have the camera combine two or four pixels into one, to halve or quarter the amount of data that it processes for each exposure. Although this approach reduces resolution, usually when I need to take a lot of frames quickly, I have little control over the lighting or the subject. In these situations, the overall quality of the picture is rarely optimal so the loss of resolution will never be noticed. On the SD-10, medium resolution still records as much information as can be reproduced on a full page of a glossy magazine. See the last article, "Sense & Sensors in Digital Photography," for a discussion of this topic.
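The workaround is ordinary pixel binning. The camera presumably does this internally, but the arithmetic is simple enough to sketch; this is my illustration of the principle, not Sigma's implementation. Averaging each 2 x 2 block of four pixels into one cuts the data per exposure to a quarter:

```python
import numpy as np

def bin_pixels(image, factor=2):
    """Average factor x factor blocks into single pixels,
    cutting the data per exposure by a factor of factor**2."""
    h, w = image.shape
    h, w = h - h % factor, w - w % factor          # trim to a multiple of factor
    blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))                # average within each block

full = np.arange(16.0).reshape(4, 4)   # a toy 4 x 4 sensor readout
binned = bin_pixels(full)              # 2 x 2: a quarter of the data
print(binned.shape)                    # (2, 2)
```

Averaging also tends to suppress noise, which is part of why the reduced-resolution pictures look better than the lost pixels would suggest.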
Dust is the bete noire of image sensors. In almost every camera with interchangeable lenses, dust wafts into the camera whenever you remove a lens, then it lands on the sensor and creates innumerable specks in an enlargement. Preventing dust from entering the camera saves a lot of time cleaning up pictures. The SD-10 covers the opening behind the lens with a sheet of transparent plastic. Some dust manages to get on the sensor anyway but much less than the norm.
For studio photography, it is useful to be able to plug the camera into a laptop computer and see your pictures immediately at a reasonable size. The SD-10 can connect by USB or FireWire. When connected by FireWire, pictures show up in Sigma's PhotoPro application in seconds. To light the poster linked below, I used four strobes and three halogen lamps. I found it quicker and easier to balance them and find the exposure by viewing the image than by using a meter.
Digital cameras incur hidden costs for memory cards. Price them early in your shopping. In this respect the SD-10 has a significant advantage over its competition: it stores raw images from its Foveon sensor in only one-half the space required by an equivalent Bayer sensor (see my comparison of the sensors in "Sense & Sensors in Digital Photography"). Also, writing a picture to memory can take a while and some cards are faster than others. Lexar make some of the faster cards and use some proprietary technology they call write-acceleration. The SD-10 supports Lexar's write-acceleration technology.
Another hidden cost with digital cameras is spare batteries, especially if the batteries are proprietary. I planned to use my camera far from electricity in the Himalaya, so it had to run on disposable batteries. The SD-10 uses AA or CR-V3 batteries but it turned out that they need to be lithium cells. Despite what the manual says, other kinds don't work. Also, I have found the SD-10's battery life to be disconcertingly short, especially in cool temperatures.
Long before the batteries die, they act dead, then come to life for a few pictures after the battery tray is removed and replaced, and they will continue to do this a number of times. It looks to me as though the camera's tolerance for both internal resistance and voltage drop is unrealistically demanding. A second pair of batteries in parallel seemed called for and can be had by buying an accessory grip/power-pack ($130) that screws onto the base. I bought one immediately upon my return from India. It adds a bit of weight and size, but in compensation, it makes the camera more comfortable to use. On the other hand, it leaves the camera wobbly on a tripod. I tried it with ordinary alkaline cells and it worked, but only for a short while. (For an explanation of how batteries fail in digital cameras, see page 9 of this discussion of battery recycling.)
For snapshots, a built-in flash is useful. Outdoors, it is often convenient to have a built-in flash to fill in shadows. I wanted one for the latter purpose especially, but the SD-10 does not have one, so I bought a Sigma on-camera unit purportedly designed for the SD-10. This flash zooms (with a scale for 35mm cameras), bounces in all directions, and adjusts its exposure automatically. Like all such units, I find it top-heavy, gimmicky, and inaccurate. Indoors I prefer a large flash with a handle that I can detach from the camera, hold at arm's length, and point toward a wall. Fortunately, I have not yet had to use the thing. I bought it primarily for fill light in the field but the camera's broad dynamic range and a "Fill Light" adjustment in Sigma's software have obviated it.
Assessing quality of construction is not high on my list because I have no idea what on a digital camera is likely to break, aside from obvious things like hinges and latches. In my experience, the structures that fail in electronic boxes are rarely visible or predictable; they are usually things like solder joints, foil traces, and the contacts of switches. If the outside of the camera is mostly plastic - well, the airplane that delivered it used a lot of plastic too, and the SD-10 does have metal innards. What gives me comfort is a camera by Kodak. The SD-10 shares many, if not most, of its mechanical parts with one of Kodak's newest professional cameras and feels comparable mechanically. If Kodak's camera turns out to be a hunk of junk, it will be a very expensive hunk of junk aimed at a market that values reliability above all else, so their engineers must have reason to think that Sigma know how to construct a camera.
Lenses -- Most camera buffs wax expansively and expensively on the necessity of good lenses and which are the best. This strikes me as the last thing to worry about. As I explained in "Sense & Sensors in Digital Photography," the only way to tell the difference between lenses is to compare identical test photos shot on a tripod. Unless the lens has an unusual amount of colour fringing or distortion, nobody will ever look at a photo and say, "Gee, the lens you used was a dog."
That said, some lenses do produce objectionable amounts of colour fringing, especially when used on digital cameras. I don't know why digital cameras show it more. I've read lots of hypotheses, but none can explain the problem completely, and digital cameras sure can show a lot of colour fringing. Look at the inset close-up of the cello's endpin in the photo linked previously. (The upper photo shows its original state. As discussed in "Editing Photographs for the Perfectionist" in TidBITS-748, I used Asiva Shift+Gain to even out the lighting on the cello and clean up the colour fringing.) Unfortunately, this problem is exacerbated by the Foveon sensor, which can produce a fringe every bit as sharp as the line that is fringed. A Bayer sensor would be a little less sharp but a little more forgiving.
Even expensive lenses on film cameras can be freakish. I used to swear at a Leitz 90mm lens that I bought for my megabuck Leica M4s. It was sufficiently problematic that Leitz replaced it. The replacement focussed more accurately but was no better optically. To maintain contrast and to avoid artifacts from flare light (optical noise), I had to use it more carefully than any other lens I owned.
Brochures often display graphs of modulation transfer functions (MTF) that purport to summarize the optical quality of a lens. However, those graphs are the results of a simplified mathematical model. Even if you can visualize image quality from them - I certainly cannot - you will not visualize, because they do not capture, unpredictable flaws like the blue in the cello's endpin. Neither do fancy diagrams of lens elements mean anything, nor the number of asymmetrical elements, nor the number of elements of exotic glass. The design of a lens is an art, not a science. If a lens uses a lot of elements, all you know is that the artist used a lot of paint.
I really don't know any sensible way to compare lenses without trying them. Indeed, I don't always know what "better" means. Imagine two lenses: when the sun is behind you, one gives a sharper picture, but when the sun is in front of you, the other one does. Which is better? Since software can correct most deficiencies, it strikes me that the most important factor is not the degree of perfection but the degree of perfectibility. Consider two marginal lenses. One is soft but consistent and can be sharpened digitally; the other is sharper but shows occasional failings that cannot be helped. I would prefer the former.
I do share the prejudice that expensive lenses are likely to be better, but "likely" is an important qualifier. Ease of production, the size of production runs, and marketing arrangements can all have significant effects on price. When I bought film cameras, I indulged my prejudice and bought only expensive lenses. I knew that this was silly - one of my Leicas' lenses made this manifest - but I found commercial film shoots so stressful that I wanted to feel as though I was doing everything I could to minimize the risk of something's going wrong. Digital photography is different, though. Digital images appear instantly and they are malleable. With digital cameras I cannot see any reason to buy the dubious insurance of an expensive brand name.
The most important thing to consider about lenses is not their quality but the optical perspective they provide. Imagine that you are standing a few feet back from a small window looking out at your garden. You don't see it all but the part that you do see looks natural. Now imagine that somehow your entire garden becomes squeezed into the purview of the window. You would see everything in the garden, but the garden would look bizarre. Next imagine that the window were a photograph. The lens on the camera would have created the perspective of that photograph. A lens "seeing" the same angle as your eye would create a realistic photograph, a lens "seeing" more than your eye would create a distorted photograph.
The eye can take in a scene about 45 degrees wide without moving. When you look at an enlarged photograph, you will probably tend to hold it so that it subtends an angle of about 45 degrees around your eye. For this reason, if you are looking at an enlargement, a lens that "sees" about 45 degrees will usually provide the most normal perspective. Smaller and larger angles of view distort perspective. There is a range of what looks incontestably natural, and it is not clearly defined, but this range does not extend beyond 20-30 degrees on the narrow end or 60-70 degrees on the wide end. Narrower and wider angles of view begin to distort perspective. This distortion can be effective artistically if handled with care, but it is distortion nonetheless. Although distortion can make quite a splash, it does not usually wear well.
In round numbers, with 35mm cameras, the range of natural perspective is covered by lenses running from 30mm or 35mm to 80mm or 90mm. Nowadays, most cameras come with a zoom lens that covers this range. Often the lens will cover more than this range. Lenses that zoom over a wider range sell more cameras, so manufacturers push zoom ratios as they push megapixels. However, image quality tends to deteriorate rapidly with focal length, not for optical reasons but because longer lenses magnify the effect of a shaky hand. This problem is amplified by the small size of most digital image sensors. When you halve or quarter the size of the image, the same amount of movement doubles or quadruples the blur. I don't want to find out from blurry pictures that I slid accidentally into a focal length that demanded a tripod, so I do not want a general-purpose lens that extends beyond the equivalent of 80mm or 90mm.
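For readers who want the geometry behind those round numbers: the horizontal angle of view of a rectilinear lens on a 35mm frame (36mm wide) is twice the arctangent of 36 divided by twice the focal length. A few lines of Python reproduce the figures:

```python
import math

def horizontal_angle_of_view(focal_mm, frame_width_mm=36.0):
    """Horizontal angle of view, in degrees, of a rectilinear lens
    on a frame of the given width (36mm for a 35mm camera)."""
    return math.degrees(2 * math.atan(frame_width_mm / (2 * focal_mm)))

for f in (30, 45, 90):
    print(f"{f}mm lens: {horizontal_angle_of_view(f):.0f} degrees")
```

A 45mm lens comes out at about 44 degrees, the "normal" view; 30mm and 90mm land near the 60-70 and 20-30 degree limits of natural-looking perspective.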
That said, if I am buying a camera with interchangeable lenses, I do want to have a lot of focal lengths available. For my SD-10 I bought additional lenses above and below the normal range. My observations of seven lenses for the SD-10 will come at the end of the article.
The Camera and the Computer -- Most people think of digital cameras as optical devices, as ordinary cameras with electronic gadgetry replacing film. I think it's more sensible to see them as digital computers, digital computers that are fed by optical devices instead of spinning disks. Some sensors feed more information than others but as I showed in "Sense & Sensors in Digital Photography," the differences are smaller than they seem. Far greater differences come with the way that the image is processed, with the way it is interpolated, balanced, cleaned of noise, and sharpened.
To convert a raw image into a usable one, a typical image-processing program will:
- Weight the colour channels to compensate for the sensor's response to the source of light
- Set a range of brightness and contrast
- Remove portions of the image that appear to be noise
- Distort edges so that they look sharper
- Throw away information that looks unnecessary, and save the file in JPEG format
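Those five steps can be sketched in a few lines of Python. This is a deliberately crude toy of my own: every operation here is a stand-in for a far more sophisticated algorithm, and the final JPEG compression is reduced to 8-bit quantisation, but it shows the shape of what every in-camera processor does.

```python
import numpy as np

def develop(raw, wb=(2.0, 1.0, 1.5)):
    """Toy raw 'development'. `raw` is an H x W x 3 array of linear
    sensor values in 0..1; `wb` weights the channels for the light source."""
    img = raw * np.array(wb)                                  # 1. weight the colour channels
    img = (img - img.min()) / (img.max() - img.min() + 1e-9)  # 2. set brightness and contrast
    img = np.where(img < 0.01, 0.0, img)                      # 3. crush near-black "noise"
    blur = (np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)) / 2
    img = np.clip(img + 0.5 * (img - blur), 0, 1)             # 4. exaggerate edges (unsharp mask)
    return (img * 255).astype(np.uint8)                       # 5. quantise for saving as JPEG

raw = np.random.default_rng(0).random((4, 4, 3))
out = develop(raw)
print(out.dtype, out.shape)
```

Every choice in that pipeline, the channel weights, the clipping threshold, the sharpening strength, is a rule applied blindly; doing the same work on a computer lets you choose the rules picture by picture.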
The image-processing program in a camera does all of this by default with every picture it touches. Needless to say, it cannot look at the picture and do this knowledgeably; it can only follow rules. Most reviews of digital cameras examine JPEG files produced by the cameras, so they are really not examining the quality of the optics and sensor, they are examining the results of the algorithms employed by the built-in image-processing software.
If you are satisfied with snapshots from a point-and-shoot film camera, then a digital camera's built-in image-processing will do fine, but it will never extract the most from a picture or enable the best enlargements. You can change the camera's settings, but a camera does not offer anything like the convenience and control of a personal computer. For me, and I suspect for many TidBITS readers, it makes more sense to buy a camera that will save files in a raw, unprocessed format, and do all of the processing afterwards on a computer.
The SD-10 is unique among digital cameras in that it does no image processing in the camera whatsoever; it saves only the raw sensor data. To me this is a significant advantage. Not having this software built in markedly simplifies the camera's menus. With the SD-10, it is never necessary to negotiate a complicated tree of commands with lots of hidden submenus. If I want to change the brightness of the LCD, I can see immediately how to do it. Furthermore, with the SD-10 it is not possible to lose pictures by leaving some image-processing parameter in the wrong setting.
For processing images, Sigma provides a program called PhotoPro. At first blush PhotoPro is disappointing. It looks and feels like a port from Windows, it embeds a Windows sRGB profile (see "Colour & Computers"), it hogs the CPU while idling in the background, and, incredible for a graphics package on the Mac, it does not "know" about the monitor's calibration, so that the colour you see in PhotoPro is not the same as the colour you see in Photoshop or iPhoto or coming off your printer. Nevertheless, PhotoPro's controls are so simple and implemented so well that I prefer it to the raw-file converter in Photoshop. (See the screenshots linked above for a look at PhotoPro's interface.) If you want to adjust pictures individually, PhotoPro provides excellent tools, and if you want to save a folder full of files automatically as JPEGs, it can do that too. Moreover, if you also copy the raw files to your hard drive, you can always revert to them to redo something.
Summary -- All in all, I think buying the SD-10 proved to be a sensible decision, considering what I wanted it for. The camera is priced for amateurs, but it feels and functions like a professional camera and I can extract pictures from it that look as though I had shot them on 2-1/4" x 3-1/4" film. I could not ask for more.
Although the SD-10 suits my purposes, it may not suit yours. I am willing to put enough time into pictures to perfect them; you may not be. If you prefer to trade time for money, a full-frame Bayer sensor will give comparable potential quality with less care. A Bayer sensor the size of the SD-10's Foveon sensor will be a bit more limited in overall quality but more forgiving. If you have been happy with the quality of 35mm film, then you may be served just fine by a more compact camera with a smaller sensor.
Whatever you decide, if you are buying anything more than a point-and-shoot, do consider your computer alongside the camera. As I showed in "Sense & Sensors in Digital Photography," the amount of information in pictures is much less than people think. What matters to the eye is less the amount of information than how that information is presented, how clearly the information makes it through blur, grain, pixelation, and other forms of visual noise. The camera codes optical information plus optical and electronic noise; a computer decodes all of those into a visible image and removes the noise. Although this computer is usually buried inside the camera, you can use your Mac instead. Better software is available for your computer than for your camera. That is why I started this series with an article on image-processing software ("Editing Photographs for the Perfectionist," linked above). Before you buy a camera, skim that article to get an idea of what is possible and what you might be willing to do. Keep that in mind when you decide what to look for and consider the cost of the software when you shop.
Finally, for Canadian readers, I should like to add a little about buying these products in Canada. In March, 2004, I tried to find an SD-10 in a shop. I could not. While I was asking around, two salesmen warned against my buying any Sigma camera, not because of the product but because of Sigma's Canadian distributor, Gentec International. I came to see why. Gentec never had in stock a single item that I wanted to buy or that I wanted to borrow for this review. Everything had to await delivery from Japan. Predicted delivery was sometimes weeks but more often months, and prices were higher than in the U.S., where I could buy the product off the shelf. I was expected to purchase the products sight unseen, yet they were so unusual that no shop would countenance their return for any reason. I hate to say it but the only practical vendors for these products are mail-order houses in the U.S. Not only are they faster and cheaper, they are more likely to accept returns. If you ever need to return something and recover the tax, the paperwork required is the one-page "Informal Adjustment Request" form B2G that can be downloaded from:
Supplemental Thoughts: Lenses for the SD-10 -- I have tried seven lenses with the SD-10. Except as noted, all of them seem to be well constructed, but all of them tend to produce colour fringing toward the corners that is broad enough to be noticeable in enlargements if you are looking for it. Except as noted I based my judgements of sharpness on test pictures made at infinity.
The cheapest basic lens is an 18-50mm f/3.5-5.6 zoom that Sigma packages with a couple of kits. (It's not priced on its own but adds $10 or $100 to the other items.) At 50mm this lens seems respectably sharp but at 24mm and 18mm it seems softer. The lens often shows excessive colour fringing and below 50mm it suffers from convex "barrel" distortion that is asymmetrical, so that it cannot be corrected perfectly with software. The lens feels cheaply built and is slow (dim) enough to make focussing difficult in dim light, although closing down the aperture only one f-stop brings it almost to maximum sharpness. If I spend enough time in front of the computer, I can make most of its pictures look as good as any - its images are usually perfectible - but I do not think it is comparable in quality to the camera.
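The "barrel" distortion that software can normally fix is modelled as a purely radial displacement, identical in every direction from the image centre; an asymmetrical distortion falls outside that model, which is why it cannot be corrected perfectly. A minimal sketch of the standard one-term radial model and its inversion (the coefficient here is illustrative, not measured from this lens):

```python
def barrel_distort(x, y, k1):
    """Apply a radially symmetric distortion (barrel when k1 < 0)
    to coordinates normalized about the optical axis."""
    r2 = x * x + y * y
    scale = 1 + k1 * r2
    return x * scale, y * scale

def undistort(xd, yd, k1, iterations=10):
    """Invert the model by fixed-point iteration: find the
    undistorted point that maps onto (xd, yd)."""
    x, y = xd, yd
    for _ in range(iterations):
        r2 = x * x + y * y
        scale = 1 + k1 * r2
        x, y = xd / scale, yd / scale
    return x, y
```

Because the scale factor depends only on the distance from the centre, the correction is symmetric by construction; a lens whose distortion varies with direction defeats it.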
In the last few months, Sigma has started offering an alternative, an 18-50mm f/2.8 zoom (street price $500). I borrowed one to review and decided to buy it. It shows less colour fringing, especially at 50mm, its distortion is more symmetrical, and it is sharper at 24mm and 18mm. It is maximally sharp from f/5.6 through f/11 at all focal lengths. This lens complements the camera nicely.
The cheapest telephoto lens available is a 55-200mm f/4-5.6 zoom (street price $140). On an SD-10, 200mm is the equivalent of 340mm on a 35mm camera. That is the equivalent of 8x binoculars. It reaches out so far that sharp pictures of distant objects require not just a tripod but also clear air without thermal currents. This lens feels so cheap mechanically that I mistrust it, but it is competent optically and is small and light. As a telephoto to carry for casual use it would be appropriate for anybody, amateur or professional. For maximum sharpness the aperture needs to be closed down two or three f-stops.
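The 200mm-to-340mm conversion follows from the sensor's crop factor, about 1.7 for the SD-10, and the binocular comparison from taking a roughly 43mm "normal" lens (the diagonal of a 35mm frame) as 1x. A quick check in Python (the 43mm normal is a convention; 50mm is also commonly used and gives slightly lower figures):

```python
def equivalent_focal_length(focal_mm, crop_factor=1.7):
    """35mm-equivalent focal length for a smaller sensor;
    1.7 is the SD-10's crop factor, consistent with 200mm
    appearing as 340mm."""
    return focal_mm * crop_factor

def magnification(equiv_mm, normal_mm=43.0):
    """Rough magnification relative to a normal view, taking
    the ~43mm frame diagonal of 35mm film as 1x."""
    return equiv_mm / normal_mm

# 200mm on the SD-10 -> 340mm equivalent -> roughly 8x
```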
For my last article, I borrowed Sigma's latest 50mm f/2.8 macro (street price $250). Since I had it in the house, I compared it to the 18-50mm f/2.8 at 50mm. At infinity the macro lens was a little softer and more sensitive to flare. Since a macro lens is designed for close-up work, I also tested it on a copy stand. There it was a little sharper. At both distances the zoom lens showed less colour fringing. I would buy the macro only for technical work. For all pictorial photography, including extreme close-ups, I would prefer the zoom.
For a wider wide-angle lens, I bought a 14mm f/2.8 (street price $900). This lens is big and heavy. With 35mm it pushes some limits of practical optics and with the SD-10 it is a mixed blessing. When the sun is behind me, and there isn't much white in the picture, and I am very far away from everything in the photo, it is respectably sharp from f/8 through f/16 with no more colour fringing than the usual. At distances closer than a very distant infinity, the corners soften. Where the picture contains a lot of contrast, flare light softens the image further and increases colour fringing, sometimes to remarkable amounts. Also, when the sun is near the subject, the lens tends to produce severe reflections that cannot be corrected at all. It has noticeable barrel distortion as well, although that can be corrected.
I dislike the 14mm, so for this review, I borrowed the only alternative, a 12-24mm f/4.5-5.6 (street price $670). It is even bigger and also stretches practical optics for 35mm, but it has an opposite character. Compared to the 14mm, in front-lighted, distant scenes without much white, it looks softer, but in other circumstances it looks sharper in the corners and sometimes in the centre as well. It shows similar barrel distortion but very little colour fringing and no untoward disturbance from flare light. It is more difficult to focus and slower, but it covers a wider angle and zooms through a range of focal lengths. I don't particularly care for this lens either but I prefer it to the 14mm. When stopped down to f/11 or f/16 its softness can be overcome with Focus Magic, whereas nothing can fix some of the 14mm's flaws. Compared to the 18-50mm f/2.8 zoom at 18mm and 24mm, the 12-24mm is usually softer but it has less colour fringing and distortion and is comparable in sharpness at f/16, which is beyond the peak of the 18-50mm. For architectural and landscape photography, to maximize depth of field I often stop down to f/16. In this circumstance I would choose the 12-24mm. I am going to replace the 14mm with this lens, but I am also going to hope that Sigma come out with a 12mm or 14mm lens that is designed for the SD-10 and works better.
The 15mm fish-eye (street price $450) is of modest size although it, too, is designed to cover 35mm film. On an SD-10 it covers only a little more than the 14mm lens but it provides a unique perspective. It is a fish-eye perspective, so that straight lines end up curved, but only the central portion of the image is captured on the small sensor of the SD-10, so the curvature is not severe. On the other hand, unlike an ordinary wide-angle lens, it does not magnify objects near corners and it does not distort any angles. This means that for scenes containing no straight lines, the lens provides a more natural perspective. The difference is subtle but to my eye it is significant. I carried this lens and the 14mm in the Himalaya. I rarely used the 14mm, but I used this one often. It's maximally sharp from f/5.6 through f/11 and is the sharpest of the three wide-angle lenses.
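The geometry behind that difference can be stated simply. An ordinary rectilinear lens projects a ray at angle theta off-axis to an image height of f*tan(theta), which grows rapidly toward the corners, while a fish-eye uses a gentler mapping. The equidistant model r = f*theta sketched below is one common fish-eye projection; Sigma does not state which mapping the 15mm uses, so treat that choice as an assumption:

```python
import math

def rectilinear_radius(f, theta):
    """Image height of a ray theta radians off-axis for an
    ordinary rectilinear lens: r = f * tan(theta)."""
    return f * math.tan(theta)

def fisheye_radius(f, theta):
    """Equidistant fish-eye projection, one common model:
    r = f * theta."""
    return f * theta
```

Near the axis the two mappings agree, so the centre of the frame looks normal either way; at wide angles tan(theta) races ahead of theta, which is exactly the corner stretching that the fish-eye avoids.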
None of the wide-angle lenses accepts a filter in front; they accept only gelatin filters in back, but that is of little consequence. With all digital cameras, software supplants colour-balancing filters and I was pleasantly surprised to find that with the SD-10, software also supplants polarizing filters for their primary use, which is to increase the saturation of colours that are washed out by the glare of the sun. (I don't know about polarizers with other digital cameras, because I don't know why they aren't necessary on the SD-10.) On the SD-10, the only use I have found for a polarizing filter is to reduce patterned reflections off windows and water, but that use is more common in photography books than in the field.
All of these lenses can be fitted to many cameras besides the SD-10 but I have no idea how they would work on any other camera. The optical characteristics of an image sensor interact with the lens in so many ways that I do not know how to divorce the two.
PayBITS: If Charles's approach to buying a camera helped
you, he asks that you make a donation to Doctors Without
Borders: <http://www.doctorswithoutborders-usa.org/donate/>
Read more about PayBITS: <http://www.tidbits.com/paybits/>
Article 5 of 5 in series
Digital Photography: Correction & Follow-up -- I would like to point out a mistake in my article "Sense & Sensors in Digital Photography" in TidBITS-751. I stated that smaller sensors are more sensitive to camera movement than larger sensors, but when the field of view is comparable, they are not. This vitiates one paragraph but has no effect on any conclusions or advice. Also, an editing gaffe at the end of the same article sowed some confusion over the role of green in Bayer sensors. The second link below points to a clarification of this and also to an extensive set of FAQs that have come out of the series. [Charles Maurer]