Not too long ago, tech giants like Apple and Samsung raved about the number of megapixels they were cramming into smartphone cameras to make photos look clearer. Nowadays, all the handset-makers are shifting focus to the algorithms, artificial intelligence and special sensors that are working together to make our photos look more impressive. Our phones are working hard to make photos look good, with minimal effort required from the user.
Google’s Pixel 4 and Pixel 4 XL include new hardware, such as an extra camera lens. Among their new capabilities is a mode for shooting the night sky and capturing images of stars. And by adding the extra lens, Google augmented a software feature called Super Res Zoom, which lets users zoom in more closely on images without losing detail.
“Most photos you take these days are not a photo where you click the photo and get one shot,” said Ren Ng, a computer science professor at the University of California, Berkeley, US. “These days it takes a burst of images and computes all of that data into a final photograph.” Computational photography has been around for years. One of the earliest forms was HDR, for high dynamic range, which involved taking a burst of photos at different exposures and blending the best parts of them into one optimal image.
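To make the idea concrete, here is a minimal sketch of exposure fusion in Python with NumPy. It illustrates only the principle, not any company’s actual pipeline: each pixel is weighted by how well exposed it is in each frame, and the frames are blended by weighted average. The function name and the toy burst are invented for this example, and real HDR processing also aligns the frames and blends across multiple scales.

```python
import numpy as np

def fuse_exposures(frames, sigma=0.2):
    """Blend a burst of differently exposed frames into one image.

    Each pixel in each frame is weighted by how close it is to mid-grey
    (i.e. how well exposed it is), then the frames are combined as a
    weighted average. Real HDR pipelines also align the frames and
    blend across multiple scales; this only shows the core idea.
    """
    frames = [np.asarray(f, dtype=np.float64) for f in frames]
    # Pixels near 0.5 (well exposed) get high weight; pixels near 0 or 1
    # (crushed shadows or blown highlights) get low weight.
    weights = [np.exp(-((f - 0.5) ** 2) / (2 * sigma ** 2)) for f in frames]
    total = np.sum(weights, axis=0) + 1e-12
    fused = np.sum([w * f for w, f in zip(weights, frames)], axis=0) / total
    return np.clip(fused, 0.0, 1.0)

# Toy burst: the same greyscale gradient captured dark, normal and bright.
scene = np.tile(np.linspace(0.0, 1.0, 256), (64, 1))
burst = [np.clip(scene * gain, 0.0, 1.0) for gain in (0.4, 1.0, 2.5)]
hdr = fuse_exposures(burst)
print(hdr.shape, float(hdr.min()), float(hdr.max()))
```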
Over the last few years, more sophisticated computational photography has rapidly improved the photos taken on our phones.
Last year, Google introduced Night Sight, which made photos taken in low light look as if they had been shot in normal conditions, without a flash. The technique took a burst of photos with short exposures and reassembled them into an image. With the Pixel 4, Google is applying a similar technique to photos of the night sky. For astronomy photos, the camera detects when it is very dark and takes a burst of images at extra-long exposures to capture more light. The result, Google said, is something that previously could be done only with full-size cameras and bulky lenses.
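The reason burst photography helps in the dark is largely statistics: averaging several noisy short exposures cancels out much of the random sensor noise. The sketch below is an illustration under simplifying assumptions, not Google’s or Apple’s actual method: it aligns a toy burst with whole-pixel shifts and averages it, whereas production pipelines align per tile at sub-pixel precision and reject anything that moved between frames.

```python
import numpy as np

def align_and_merge(frames, reference=0, max_shift=4):
    """Average a burst of short exposures after coarse alignment.

    Each frame is shifted by whole pixels to best match the reference
    frame, then the burst is averaged; averaging N frames cuts random
    sensor noise by roughly sqrt(N). Real pipelines align per tile with
    sub-pixel precision and reject anything that moved between frames.
    """
    ref = np.asarray(frames[reference], dtype=np.float64)
    merged = np.zeros_like(ref)
    for frame in frames:
        best, best_err = frame, np.inf
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                candidate = np.roll(frame, (dy, dx), axis=(0, 1))
                err = np.mean((candidate - ref) ** 2)
                if err < best_err:
                    best, best_err = candidate, err
        merged += best
    return merged / len(frames)

# Toy burst: a dim textured scene plus heavy noise; the first frame is the
# reference, the others are jittered to mimic hand shake between shots.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:128, 0:128]
clean = 0.12 + 0.08 * np.sin(xx / 5.0) * np.sin(yy / 7.0)
shifts = [(0, 0)] + [tuple(rng.integers(-2, 3, size=2)) for _ in range(7)]
burst = [np.roll(clean, s, axis=(0, 1)) + rng.normal(0.0, 0.05, clean.shape) for s in shifts]
merged = align_and_merge(burst)
print(float(np.std(burst[0] - clean)), float(np.std(merged - clean)))  # noise before vs after merging
```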
Apple’s new iPhones also introduced a mode for shooting photos in low light, employing a similar method. Once the camera detects that a setting is dark, it automatically captures multiple pictures and fuses them together while adjusting colours and contrast.
A few years ago, phone-makers like Apple, Samsung and Huawei introduced portrait mode, also known as the bokeh effect, which keeps a subject in the foreground sharp while blurring the background. Most phone-makers used two lenses working together to create the effect.
Two years ago, with the Pixel 2, Google accomplished the same effect with a single lens. Its method relied largely on machine learning: computers analysing millions of images to recognise what is important in a photo. The Pixel 2 then predicted which parts of the photo should stay sharp and created a mask around them. A special sensor inside the camera, called dual-pixel autofocus, helped analyse the distance between the objects and the camera to make the blurring look realistic.
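As a rough illustration of how a segmentation mask and a depth estimate can be combined, here is a simplified sketch. The mask and the depth map are taken as given (on a phone they would come from the learned model and the dual-pixel sensor), the blur is approximated by blending background pixels toward a single blurred copy of the image, and the helper names are hypothetical rather than Google’s.

```python
import numpy as np

def box_blur(img, radius):
    """Very simple blur: average each pixel over a (2*radius+1)^2 neighbourhood."""
    out = np.zeros_like(img, dtype=np.float64)
    count = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += np.roll(img, (dy, dx), axis=(0, 1))
            count += 1
    return out / count

def synthetic_bokeh(image, subject_mask, depth, blur_radius=6):
    """Fake a shallow depth of field from a single image.

    subject_mask marks pixels predicted to belong to the subject (in a
    phone, this would come from a learned segmentation model) and depth
    gives relative distance per pixel (on the Pixel, estimated with the
    dual-pixel sensor). Here the effect is approximated by blending each
    background pixel toward one blurred copy of the image, more strongly
    the farther away it is, while subject pixels are left sharp.
    """
    blurred = box_blur(image, blur_radius)
    strength = np.clip(depth, 0.0, 1.0) * (1.0 - subject_mask)
    return strength * blurred + (1.0 - strength) * image

# Toy scene: a textured background, a circular "subject" and a depth ramp.
yy, xx = np.mgrid[0:96, 0:96]
image = 0.5 + 0.25 * np.sin(xx / 4.0) * np.cos(yy / 5.0)
mask = (((xx - 48) ** 2 + (yy - 48) ** 2) < 20 ** 2).astype(float)
depth = xx / 96.0
out = synthetic_bokeh(image, mask, depth)
print(out.shape)
```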
With the Pixel 4, Google has improved the camera’s portrait mode. The new second lens allows the camera to capture more depth information, which lets it shoot portrait-mode photos from greater distances.
In the past, zooming in with digital cameras was practically taboo because the image would inevitably become very pixelated, and the slightest movement would create blur. Google used software to address the issue in the Pixel 3 with Super Res Zoom.
The technique takes advantage of natural hand tremors to capture a burst of photos in slightly varying positions. By combining those slightly offset photos, the camera software composes a photo that fills in detail that would not be there with a normal digital zoom.
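The underlying technique is multi-frame super-resolution, often described as "shift and add": because each frame lands on the sensor at a slightly different sub-pixel offset, its samples can be placed onto a finer grid and averaged. The sketch below assumes the offsets are already known (in practice they are estimated from the hand tremor during alignment) and uses invented names purely for illustration; it is not Google’s implementation.

```python
import numpy as np

def shift_and_add(frames, offsets, scale=2):
    """Combine slightly shifted low-res frames onto a finer pixel grid.

    frames: list of HxW arrays. offsets: list of (dy, dx) shifts, in
    high-resolution pixels, describing how each frame was displaced
    (on a phone, these would be estimated from hand tremor by the
    alignment step). Each low-res sample is dropped into the nearest
    cell of a grid `scale` times finer and the cells are averaged.
    Real multi-frame super-resolution uses robust per-pixel weighting.
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, offsets):
        hy = np.clip(np.round(ys * scale + dy).astype(int), 0, h * scale - 1)
        hx = np.clip(np.round(xs * scale + dx).astype(int), 0, w * scale - 1)
        np.add.at(acc, (hy, hx), frame)
        np.add.at(cnt, (hy, hx), 1.0)
    out = np.zeros_like(acc)
    filled = cnt > 0
    out[filled] = acc[filled] / cnt[filled]
    return out

# Toy example: four low-res frames sample a high-res scene at different
# half-pixel phases; merged together, they recover the full-resolution grid.
rng = np.random.default_rng(1)
truth = rng.random((64, 64))
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]
frames = [truth[dy::2, dx::2] for dy, dx in offsets]
high_res = shift_and_add(frames, offsets, scale=2)
print(float(np.max(np.abs(high_res - truth))))  # 0.0: every fine-grid pixel recovered
```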
The Pixel 4’s new lens expands the abilities of Super Res Zoom by adding optical magnification, similar to a zoom lens on a film camera. In other words, the camera now takes advantage of both the software feature and the optical lens to zoom in extra close without losing detail.