Thoughtful, detailed coverage of everything Apple for 28 years
and the TidBITS Content Network for Apple professionals
The Google Pixel 3 camera.

Photo by Google


Five Ways Google’s Pixel 3 Advances Computational Photography

Google has announced its latest iPhone competitor, the Pixel 3, and Digital Photography Review’s Rishi Sanyal outlines five ways it pushes the boundaries of computational photography:

  • Super Res zoom, which uses burst photography to create sharper zoomed photos
  • Computational raw capture, which merges up to 15 frames for the so-called “raw” image
  • Automatic synthetic fill flash, which relights people who are set against bright backgrounds
  • A new machine-learning approach that promises to improve portrait mode photos
  • Night Sight, which lets the Pixel 3 take clear images in dim lighting without a flash
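The computational raw feature above relies on a well-known principle: averaging many noisy captures of the same scene reduces random noise, roughly in proportion to the square root of the frame count. As a rough illustration only (the real pipeline also aligns frames and rejects motion, and this toy model with a single pixel value and uniform noise is an assumption, not Google’s method):

```python
import random

random.seed(0)

# Toy model: one scene "pixel" photographed repeatedly with random noise.
# Merging (averaging) the burst pulls the estimate toward the true value.
TRUE_VALUE = 100.0
NUM_FRAMES = 15  # the article says the Pixel 3 merges up to 15 frames

def capture():
    """One noisy exposure of the pixel (uniform noise, for simplicity)."""
    return TRUE_VALUE + random.uniform(-10, 10)

frames = [capture() for _ in range(NUM_FRAMES)]
merged = sum(frames) / len(frames)

print(f"single-frame error: {abs(frames[0] - TRUE_VALUE):.2f}")
print(f"merged-frame error: {abs(merged - TRUE_VALUE):.2f}")
```

Running this a few times shows the merged estimate sitting much closer to the true value than any single frame typically does, which is the statistical intuition behind burst-based raw capture and Super Res zoom alike.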

Most impressive is that the Pixel 3 delivers these capabilities with only a single rear-facing camera. It will be interesting to see how the features hold up in the real world when the Pixel 3 ships next week, both compared with what iPhone XS users can do today and with whatever computational photography features Apple adds to next year’s iPhones. (Digital Photography Review also published an article about the computational photography capabilities of the iPhone XS, by our own Jeff Carlson.)


