
Photo by Google
Five Ways Google’s Pixel 3 Advances Computational Photography
Google has announced its latest iPhone competitor, the Pixel 3, and Digital Photography Review’s Rishi Sanyal outlines five ways it pushes the boundaries of computational photography:
- Super Res Zoom, which uses burst photography to create sharper zoomed photos
- Computational raw capture, which merges up to 15 frames into the so-called “raw” image (a toy sketch of the frame-merging idea appears after this list)
- Automatic synthetic fill flash, which relights people who are set against bright backgrounds
- A new machine-learning approach that promises to improve portrait mode photos
- Night Sight, which lets the Pixel 3 take clear images in dim lighting without a flash
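To give a feel for the frame-merging idea behind computational raw capture and Night Sight, here is a minimal Python sketch of naive burst averaging. It is an illustration only, not Google’s actual pipeline: the function name is made up, and real HDR+ processing also aligns, weights, and demosaics the frames, none of which appears here.

```python
import numpy as np

def merge_burst(frames):
    """Average a burst of already-aligned raw frames to reduce noise.

    Toy illustration: averaging N frames cuts random sensor noise by
    roughly sqrt(N). Google's real pipeline also aligns and weights
    frames, which is omitted here.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Example: 15 noisy captures of the same (synthetic) scene.
rng = np.random.default_rng(0)
scene = np.full((4, 4), 100.0)  # "true" scene values
burst = [scene + rng.normal(0, 10, scene.shape) for _ in range(15)]
merged = merge_burst(burst)

# Noise in a single frame vs. the merged result (drops by ~sqrt(15)).
print(np.std(burst[0] - scene), np.std(merged - scene))
```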
Most impressive is that the Pixel 3 supports such capabilities with only a single rear-facing camera. It will be interesting to see how these features work in the real world when the Pixel 3 ships next week, and to compare them with what iPhone XS users can do today and with whatever new computational photography features Apple adds to next year’s iPhones. (Digital Photography Review also published an article, by our own Jeff Carlson, about the computational photography capabilities of the iPhone XS.)