Orders Start for Unique Lytro Light Field Camera
When I first heard about the Lytro camera, I assumed it was some kind of snake-oil pitch or a hoax. Take a picture and then adjust the focus from the resulting image data? That’s un-possible! But I quickly found that the camera’s technology was legitimate. Instead of capturing just the intensity and color of light passing through a main lens as it lands on an image sensor, the Lytro also records the direction of incoming light rays, allowing the re-creation of images with varying focal planes and depths of field.
The camera was slated to ship in Q3 2011, but it has just been put on sale with an “early 2012” release date. The device has a peculiar form factor in keeping with its unique nature: it’s a squared-off cylinder that looks more like a short telescope than a camera. An LCD touchscreen sits on one end, a single button captures photos, and there’s also an 8x zoom slider and a power switch.
The Lytro comes in three colors: a red model with 16 GB of internal storage costs $499 and can store 750 pictures in Lytro’s format. A graphite or blue camera with 8 GB of storage costs $399. Orders are being accepted only in the United States, and the software to download and view Lytro files via USB will initially be available only for Mac OS X. There is no memory-card slot. I had hoped for Wi-Fi for cable-free image transfers, but a USB connection is required.
The technique behind its focus-after-shooting approach has an ocean of detail behind it. Lytro’s inventor, Ren Ng, who developed the technology during his Ph.D. studies at Stanford, created a camera that interposes a layer of microlenses between a standard image sensor array and the main, standard lens of the camera.
Each microlens acts as a kind of light-field capture device. A light field comprises not just the light falling on a given plane, but the collection of light rays that combine to make up that light. Each microlens focuses incoming light onto a small group of sensor pixels, and each such grouping forms a superpixel. From the few dozen pixels in a superpixel, the direction of the incoming light rays can be extracted along with color and intensity detail. This turns a several-megapixel image sensor array into one that captures a few hundred thousand superpixels.
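The spatial-resolution trade-off described above is simple arithmetic. A minimal sketch, using an illustrative sensor size and microlens grouping (Lytro has not published exact figures):

```python
# Back-of-the-envelope sketch of the superpixel trade-off.
# The 11-megapixel sensor and 10x10-pixel grouping are assumptions
# for illustration, not Lytro's published specifications.

def superpixel_count(sensor_pixels: int, pixels_per_superpixel: int) -> int:
    """Each microlens covers a block of sensor pixels ("a few dozen"),
    so directional resolution is bought at the cost of spatial resolution."""
    return sensor_pixels // pixels_per_superpixel

# An 11-megapixel sensor divided into 10x10-pixel superpixels
# yields roughly 110,000 superpixels:
print(superpixel_count(11_000_000, 100))
```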
The resulting image is small in pixel dimensions (a few hundred by a few hundred), but it has the unique property that the data are used to create any focal plane you want within the entire image (and to adjust the depth of field as well). You can have an infinite depth of field in which everything is in focus, or you can single out details. There’s also a sort of stereoscopic-3D effect in which you can move planes of focus around and pan in a limited fashion through the photo. (It’s unclear if that last feature will be in the first release of the Lytro’s viewing software.)
Using a ray-tracing approach essentially identical to that used in computer animation, Lytro’s display engine computes an image. The engine is vital. Lytro has built this computational display system into the camera, where it works like a viewfinder. The engine will also be available for embedding as a Flash or HTML5 viewer on Web pages, and in software for viewing on a computer.
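The refocusing principle behind such an engine can be illustrated with the textbook "shift-and-add" technique from Ng's thesis work. Lytro's production engine is proprietary, so this is only a one-dimensional sketch of the idea: each directional sample (a view through one part of the aperture) is shifted in proportion to its angle and a chosen refocus parameter, then the views are averaged. Picking a different shift brings a different depth into focus.

```python
# A minimal 1-D sketch of shift-and-add synthetic refocusing.
# This is an illustration of the principle, not Lytro's actual engine.

def refocus_1d(light_field, shift_per_view):
    """light_field[u][x]: intensity at spatial position x seen from
    direction u. shift_per_view: integer pixel shift per unit of
    direction; choosing it selects which depth ends up in focus."""
    n_views = len(light_field)
    width = len(light_field[0])
    center = n_views // 2
    out = []
    for x in range(width):
        total = 0.0
        for u, view in enumerate(light_field):
            # Shift each view proportionally to its offset from the center ray.
            xs = x + (u - center) * shift_per_view
            total += view[xs % width]  # wrap at the edges for simplicity
        out.append(total / n_views)
    return out

# A point source seen from 3 directions, with 1 pixel of disparity per view:
lf = [[0.0] * 10 for _ in range(3)]
lf[0][4] = lf[1][5] = lf[2][6] = 1.0
print(refocus_1d(lf, 1)[5])  # shift matches the disparity: point is sharp
print(refocus_1d(lf, 0)[5])  # no shift: the point's energy is spread out
```

With `shift_per_view=1` the three shifted views align and the point sums to full intensity; with `shift_per_view=0` only one view contributes and the point appears blurred across neighboring positions.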
The Lytro is an example of computational photography, which can involve capturing multiple images rapidly or multiple images at once (or both), and combining them into impossible results using more sophisticated algorithms than are typically employed for normal image adjustments. High-dynamic range (HDR) imaging is the most prominent form of computational photography in wide use, combining multiple images taken in rapid succession at different exposures to create a near-unnatural effect of viewing a scene with tonal details from the very lightest to the very darkest. (For more detail, see my Economist article from 3 September 2011.)
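The HDR merging just described can be sketched in a few lines. This toy version simply weights each pixel of each bracketed exposure by how far it sits from the clipped extremes, so well-exposed detail from each frame dominates; real HDR pipelines also linearize the camera response curve and tone-map the result, both of which are skipped here.

```python
# A toy sketch of HDR exposure fusion: merge bracketed exposures,
# favoring mid-tone (well-exposed) pixels from each frame.
# Illustrative only; real pipelines linearize and tone-map as well.

def fuse_exposures(exposures):
    """exposures: list of same-length images, pixel values in 0.0-1.0."""
    width = len(exposures[0])
    fused = []
    for x in range(width):
        num = den = 0.0
        for img in exposures:
            v = img[x]
            w = 1.0 - abs(2.0 * v - 1.0)  # weight peaks at mid-gray
            num += w * v
            den += w
        if den > 0.0:
            fused.append(num / den)
        else:  # all frames clipped at this pixel: fall back to a plain mean
            fused.append(sum(img[x] for img in exposures) / len(exposures))
    return fused

# Dark, middle, and bright exposures of the same single-pixel scene:
print(fuse_exposures([[0.1], [0.5], [0.9]]))
```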
I’m dying to get my hands on a Lytro, because it could change the nature of snapshot photography. The approach allows relatively short exposures with a wide-open (f/2) lens, which helps in low-light conditions, but less so with action photographs. There’s no delay for auto-focus, which Lytro says means a picture is taken the instant the shutter button is pressed. But it’s all in the execution. Clever is good, but the proof is in the pictures.
"The resulting image is small in pixel dimensions (a few hundred by a few hundred)"
Makes me wonder if the resulting picture is small and cannot be enlarged.
Not sure what you mean by enlarged? Instead of, say, a 2,500-by-4,000-pixel image from a modern inexpensive digital camera, you will get dimensions that Lytro hasn't released, but which appear to be between 300 and 500 pixels square.
If I take a picture with my current camera (5 MP, I think, but it's been a while since I looked), I have some room to crop the image and still get a decent print on paper. It sounds to me like you need to do your cropping as you shoot with this camera, because it doesn't give you enough resolution to crop after you've taken the picture.
Now I get it. Yes, you need to compose in the camera, much like a large-format film image, cloth over your head.
The output from a Lytro isn't a flat TIFF or something. It's a light field measurement, which is interpreted by their engine. You can apparently export flat images, but that's not how they expect it will be used. Instead, they think most light-field images will be shared on Facebook and elsewhere.
And when should you "never" crop in the camera?! You should always do your best to get the image you want within the camera's frame.
As a former photographer, I find it an interesting trick, but I can't see a whole lot of utility. Most pictures really only look good with a particular plane in focus. For example, I take a picture of someone in front of a background. I want the person in focus and not the background. Being able to refocus the picture doesn't do much for me.
I can see a utility if you could take a picture faster because you didn't have to focus first. That could be good for action shots.
Another possibility is being able to adjust your depth of field -- how much of the image is in focus. I guess you could modify it from anywhere between a fraction of an inch and infinity. That might be of value: blurring out what's sharp, making an entire image perfectly sharp, or picking two or more focus points. Right now, the software doesn't let you do that.
However, focus is only one issue: exposure is another. Can I have multiple f-stop settings in a single picture? That would be great.
For multiple F-stops, you want HDR...
http://en.wikipedia.org/wiki/High_dynamic_range_imaging
I believe there's a lot to come in future iterations that will allow things like multiple f-stops. You'd likely need microlenses of different apertures to do that, but if you have a dense enough image sensor array, you could vary microlens size.
Also, the software as it stands does let you adjust for infinite depth of field, but I haven't seen a slider for adjusting the depth of field for spot focus.
Glenn, I don't think you'd need apertures on the microlenses. The camera records light direction, so the software could vary the exposure in different parts of the photo.
I'm thinking about that. Exposure is a function of the amount of light captured, so you could simulate apertures of lesser diameter, potentially, but not greater?
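The asymmetry in that last question can be made concrete with a small sketch. Since each superpixel records ray direction, software could simulate stopping down by averaging only the central rays of a superpixel; it cannot simulate a wider aperture, because rays the physical f/2 aperture never admitted were never recorded. The superpixel layout here is a hypothetical simplification (directional samples ordered edge-to-center-to-edge).

```python
# Sketch of software aperture simulation from directional samples.
# Hypothetical data layout; illustrates why smaller apertures can be
# simulated from a light field but larger ones cannot.

def simulate_aperture(superpixel, keep_fraction):
    """superpixel: directional samples ordered from one edge of the
    aperture, through the center, to the other edge.
    keep_fraction: portion of central rays to keep (1.0 = the full
    physical aperture; smaller values simulate stopping down)."""
    n = len(superpixel)
    keep = max(1, round(n * keep_fraction))
    start = (n - keep) // 2
    central = superpixel[start:start + keep]
    return sum(central) / len(central)

# Light arriving only along the central ray of one superpixel:
rays = [0.0, 0.0, 1.0, 0.0, 0.0]
print(simulate_aperture(rays, 0.2))  # narrow simulated aperture
print(simulate_aperture(rays, 1.0))  # full physical aperture
```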