Portrait Light: Enhancing portrait lighting through machine learning
-
Professional portrait photographers are able to create compelling images by using specialized equipment (such as off-camera flashes and reflectors) and expertise to capture just the right illumination of their subjects. To help users produce better professional-looking portraits, we recently released Portrait Light, a new post-capture feature in the Pixel Camera and Google Photos apps that adds a simulated directional light source to portraits, with the directionality and intensity set to complement the photo's original lighting.
-
In the Pixel Camera on the Pixel 4, Pixel 4a, Pixel 4a (5G), and Pixel 5, Portrait Light is automatically applied post-capture to images in the default mode, as well as to Night Sight photos that contain people — just one person or even a small group. In Portrait Mode photos, Portrait Light provides more dramatic lighting to match the shallow depth-of-field effect already applied, resulting in a studio-quality look. But because lighting can be a personal choice, Pixel users who shoot in Portrait Mode can manually reposition and adjust the brightness of the applied light in Google Photos to suit their preferences. For those running Google Photos on a Pixel, this relighting capability is also available for many pre-existing portrait photos.
-
Today we present the technology behind Portrait Light. Inspired by the off-camera lights used by portrait photographers, Portrait Light models a repositionable light source that can be added to the scene, with the initial lighting direction and intensity automatically selected to complement the existing lighting in the photo. We accomplish this by leveraging novel machine learning models, each trained using a diverse set of photographs captured in the Light Stage computational illumination system. These models enable two new algorithmic capabilities:
-
- Automatic directional light placement: For a given portrait, the algorithm places a synthetic directional light in the scene consistent with how a photographer would place an off-camera light source in the real world.
- Synthetic post-capture relighting: For a given lighting direction and portrait, synthetic light is added in a way that looks realistic and natural.
-
These innovations enable Portrait Light to create attractive lighting for every portrait — all on your mobile device.
-
Automatic lighting placement
-
Photographers usually rely on perceptual cues when deciding how to augment environmental illumination with an off-camera light source. They assess the intensity and directionality of the light falling on the face, and also adjust their subject's head pose to complement it. To inform Portrait Light's automatic light placement, we developed computational equivalents of these two perceptual signals.
-
First, we trained a novel machine learning model to estimate a high dynamic range, omnidirectional illumination profile of a scene from an input portrait. This new lighting estimation model infers the direction, relative intensity, and color of all light sources in the scene coming from all directions, treating the face as a light probe. We also estimate the head pose of the portrait's subject using MediaPipe Face Mesh.
-
Using these cues, we determine the direction from which the synthetic light should originate. In studio portrait photography, the main off-camera light source, or key light, is placed about 30° above the subject's eyeline and between 30° and 60° off the camera axis when seen from above. We follow this guideline to achieve a classic portrait look, enhancing any pre-existing lighting directionality in the scene while targeting a balanced, subtle key-to-fill lighting ratio of about 2:1.
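The key-light placement guideline above can be sketched as a small geometric helper. This is an illustrative function, not the production algorithm: the function name and coordinate convention are assumptions, and it simply converts the elevation and azimuth angles from the guideline into a unit direction vector.

```python
import math

def key_light_direction(elevation_deg=30.0, azimuth_deg=45.0):
    """Unit vector toward a key light placed `elevation_deg` above the
    subject's eyeline and `azimuth_deg` off the camera axis.

    Assumed coordinates: camera looks along -z toward the subject,
    +y is up, +x is to the camera's right.
    """
    el = math.radians(elevation_deg)
    az = math.radians(azimuth_deg)
    x = math.cos(el) * math.sin(az)   # lateral offset from the camera axis
    y = math.sin(el)                  # height above the eyeline
    z = math.cos(el) * math.cos(az)   # toward the camera
    return (x, y, z)
```

The default 45° azimuth sits in the middle of the 30°–60° range from the guideline; in the real feature this direction is chosen automatically to reinforce whatever directionality the lighting estimation model detects in the scene.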
-
Data-driven portrait relighting
-
Given the desired lighting direction and the portrait, we next trained a machine learning model to add the illumination from a directional light source to the original photograph. Training the model required millions of pairs of portraits with and without the extra light. Photographing such a dataset under normal conditions would be impossible, because it requires near-perfect registration of portraits captured under different lighting conditions.
-
Instead, we generated training data by photographing seventy different people using the Light Stage computational illumination system. This spherical lighting rig consists of 64 cameras with different viewpoints and 331 individually programmable LED light sources. We photographed each person illuminated one-light-at-a-time (OLAT) by each LED, which produced their reflectance field — their appearance as illuminated by the discrete sections of the spherical environment. The reflectance field encodes the unique color and light-reflecting properties of the subject's skin, hair, and clothing — how shiny or matte each material appears. Because the superposition principle applies to light, these OLAT images can then be linearly combined to render realistic images of the subject as they would appear in any image-based lighting environment, with complex light transport phenomena such as subsurface scattering correctly represented.
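The linear superposition described above can be expressed very compactly: relighting is just a weighted sum of the OLAT captures. Here is a minimal NumPy sketch, assuming the target environment has already been sampled into one RGB weight per LED (the function name and array layout are illustrative, not from the original system).

```python
import numpy as np

def relight_from_olat(olat_images, env_weights):
    """Linearly combine one-light-at-a-time (OLAT) captures to render the
    subject under a new lighting environment.

    olat_images : (N, H, W, 3) linear-RGB image per LED light source
    env_weights : (N, 3) RGB intensity of the target environment sampled
                  at each LED's direction
    """
    olat = np.asarray(olat_images, dtype=np.float64)
    w = np.asarray(env_weights, dtype=np.float64)
    # Superposition of light: weighted sum over the N light sources,
    # with an independent weight per color channel.
    return np.einsum('nhwc,nc->hwc', olat, w)
```

Working in linear RGB is essential here: superposition holds for physical light intensities, so gamma-encoded images would have to be linearized before summing.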
-
Using the Light Stage, we photographed many individuals with diverse face shapes, genders, skin tones, hairstyles, and clothing/accessories. For each person, we rendered composite portraits in many different lighting environments, both with and without the added directional light, generating millions of pairs of images. This dataset encourages the model to perform well across diverse lighting environments and individuals.
-
Using quotient images to learn detail-preserving relighting
-
Rather than trying to predict the relit output image directly, we trained the relighting model to output a low-resolution quotient image — a per-pixel multiplier that, when upsampled and applied to the original input image, produces the desired output image with the contribution of the extra light source added. This technique is computationally efficient and encourages only low-frequency lighting changes, without affecting high-frequency image details, which are transferred directly from the input to preserve image quality.
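The quotient-image trick boils down to an upsample followed by a per-pixel multiply. The sketch below illustrates the idea with plain bilinear upsampling as a stand-in for the production upsampler; the function name and interpolation choice are assumptions, not details from the original system.

```python
import numpy as np

def apply_quotient_image(image, quotient_lowres):
    """Apply a predicted low-resolution quotient image to a full-resolution
    input. `image` is (H, W, 3) linear RGB; `quotient_lowres` is (h, w, 3)
    with h <= H and w <= W."""
    H, W, _ = image.shape
    h, w, _ = quotient_lowres.shape
    # Bilinearly upsample the quotient to full resolution.
    ys = np.linspace(0, h - 1, H)
    xs = np.linspace(0, w - 1, W)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]
    wx = (xs - x0)[None, :, None]
    q = (quotient_lowres[y0][:, x0] * (1 - wy) * (1 - wx)
         + quotient_lowres[y0][:, x1] * (1 - wy) * wx
         + quotient_lowres[y1][:, x0] * wy * (1 - wx)
         + quotient_lowres[y1][:, x1] * wy * wx)
    # Per-pixel multiply: all high-frequency detail (skin texture, hair)
    # comes straight from `image`; the quotient only modulates brightness.
    return image * q
```

Because the multiplier is low resolution, it can only express smooth shading changes, which is exactly the property the model exploits to keep sharp details intact.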
-
Using geometry estimation to supervise relighting
-
When a photographer adds an extra light source to a scene, its direction relative to the geometry of the subject's face determines how much brighter each part of the face appears. To model the optical behavior of light reflecting from a relatively matte surface, we first trained a machine learning model to estimate surface normals for a given input photo, then applied Lambert's law to compute a "light visibility map" for the desired lighting direction. We provide this light visibility map as an input to the quotient image predictor, ensuring that the model is trained using physics-based insights.
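The light visibility map itself is a direct application of Lambert's cosine law. Here is a minimal sketch, assuming per-pixel unit normals are already available from a normal-estimation model (the function name and array layout are illustrative):

```python
import numpy as np

def light_visibility_map(normals, light_dir):
    """Lambertian 'light visibility map': cosine of the angle between each
    surface normal and the light direction, clamped at zero.

    normals   : (H, W, 3) unit surface normals (e.g. from a normal estimator)
    light_dir : (3,) vector pointing toward the light
    """
    l = np.asarray(light_dir, dtype=np.float64)
    l = l / np.linalg.norm(l)
    # Lambert's cosine law: reflected intensity is proportional to
    # max(0, n . l); surfaces facing away from the light receive none.
    return np.clip(normals @ l, 0.0, None)
```

Feeding this map to the quotient-image predictor tells the network, per pixel, how strongly an ideal matte surface would respond to the chosen light direction, so it only has to learn the residual effects (specularity, subsurface scattering, shadowing) on top of that physical prior.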
-
We optimized the entire pipeline to run at interactive frame rates on mobile devices, with a total model size of under 10 MB. Here are some examples of Portrait Light in action.
-
Getting the most out of Portrait Light
-
You can try Portrait Light in the Pixel Camera and adjust the light's position and brightness to your liking in Google Photos. For those who use Dual Exposure Controls, Portrait Light can be applied post-capture for additional creative flexibility in finding just the right balance between light and shadow. On existing images in your Google Photos library, try Portrait Light where the face is slightly underexposed to brighten and accentuate your subject. It is particularly helpful for images with a single person posing directly toward the camera.