Wednesday, May 17, 2023

Face Unblur - how do Google Pixel phones do it?

It's a clever combination of motion detection, shooting with two lenses at the same time, and using machine learning to stitch the photos together.


"Face Unblur is a new feature on the Pixel 6 that relies on Google's machine learning algorithms to ensure when you take a shot of a moving subject, its face isn't blurry... When it detects a subject moving too fast, it automatically takes an image from both the primary and wide-angle lenses and stitches them together... Basically, the main 50MP camera on the Pixels usually defaults to a higher ISO [a brighter but noisier image] and low shutter speed, and while this leads to bright images full of detail, it doesn't work very well for moving subjects... What Google is doing here is using both lenses; even before you shoot a photo, the camera finds the subject and determines if they're moving too fast for the primary lens. If it finds that this is the case, it automatically switches to Face Unblur mode, so when you take a photo, the camera uses both the primary and wide-angle lenses to take two pictures. The primary lens contributes the details and uses a low shutter speed, while the shot from the wide-angle lens is taken at a high[er] ISO and high shutter speed. The wide-angle shot delivers a clean face even while the subject is in motion because it's shot at a high shutter speed, and Google then turns to its machine-learning algorithm to stitch the photos together."
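The final "stitching" step can be pictured as a masked blend: keep the detailed main frame everywhere, except in the face region, where the sharp pixels from the fast-shutter auxiliary frame are used instead. This is only a conceptual sketch, assuming images as NumPy arrays and a pre-computed face mask (Google's real pipeline aligns the frames and feeds them to a learned merge, which is not reproduced here; the function and variable names below are hypothetical):

```python
import numpy as np

def merge_face_unblur(main_img, aux_img, face_mask):
    """Blend the sharp face from the auxiliary (fast-shutter) frame
    into the detailed main frame.

    main_img, aux_img: float arrays of shape (H, W, 3).
    face_mask: float array of shape (H, W), 1.0 inside the detected
    face, 0.0 outside (a soft-edged mask would avoid visible seams).
    """
    mask = face_mask[..., None]          # broadcast over color channels
    return (1 - mask) * main_img + mask * aux_img

# Toy 4x4 example: uniform frames stand in for real photos.
main = np.full((4, 4, 3), 0.8)   # long exposure: bright but face blurred
aux = np.full((4, 4, 3), 0.3)    # fast shutter: darker but face sharp
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0             # pretend a face detector fired here

out = merge_face_unblur(main, aux, mask)
# Pixels outside the mask keep the main frame; pixels inside take the
# auxiliary frame.
```

In practice the two frames would first have to be registered (the lenses have different fields of view and positions), and the mask edge feathered, which is exactly the kind of alignment-and-merge work the machine-learning step handles.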

In my photo app, it identifies photos where this technique has been used with an icon on the top right corner. 


