Siggraph 2012 and Game Connection 2012 talks: Local Image-based Lighting With Parallax-corrected Cubemap

Here are the slides of the talk my co-worker Antoine Zanuttini and I gave at Siggraph 2012 and Game Connection 2012: “Local Image-based Lighting With Parallax-corrected Cubemap”

The two talks are similar, but the Game Connection talk is more recent and contains a few small updates. This is the one to get.

Game Connection 2012:

A short summary is available on the official Game Connection 2012 website: Local Image-based Lighting With Parallax-corrected Cubemap

Powerpoint 2007:  Parallax_corrected_cubemap-GameConnection2012.pptx
PDF: Parallax_corrected_cubemap-GameConnection2012.pdf
Accompanying video of the slides:
Video 1 : https://www.youtube.com/watch?v=f8_oEg2s8dM

Video 2 : http://www.youtube.com/watch?v=Bvar6X0dUGs

Siggraph 2012:

A short summary is available on the official Siggraph 2012 website: Fast Realistic Lighting

Powerpoint 2007: Parallax_corrected_cubemap-Siggraph2012.pptx
PDF: Parallax_corrected_cubemap-Siggraph2012.pdf
Accompanying video of the slides: http://www.youtube.com/watch?v=Bvar6X0dUGs

Description:

The talk describes a new approach to local image-based lighting. It consists in mixing several local parallax-corrected cubemaps and using the result to light scene objects.
The advantages are accurate local lighting, no lighting seams and smooth lighting transitions.
The content of this talk is also described in the post: Image-based Lighting approaches and parallax-corrected cubemap.


Dive into the SH buffer idea

Edit: I have rewritten some parts of this post to make it more understandable. The first version was published on 30 September 2011.

Deferred lighting/shading is common nowadays. Deferred lighting buffers store the accumulated diffuse and specular lighting for a given view, to be composited with material properties later. But is there another way to achieve the same goal? Store light information in a buffer to light objects later, in order to gain more flexibility in the lighting process.
Obviously, this path has already been explored. One idea is to store the light information itself in a buffer [7]; another is to store the lighting environment in a compact form to be decompressed later, as Steve Anichini and Jon Greenberg do with a spherical harmonic buffer [1][2]. I had been thinking about the idea of using a spherical harmonic buffer (SH buffer) for a long time and finally decided to give it a try. Like them, I will share my experience here in order to grow the discussion around this algorithm. Any feedback is welcome.

SH Lighting: same approach, different goal

When starting a new approach, the first step is to fix the context. From my understanding, in [1][2][3] the approach was to produce a low-resolution buffer and then upsample it to composite with the scene. The low-resolution buffer minimizes the bandwidth used by the SH buffers, and the upsampling phase requires a “smart” filter, like a bilateral filter or an ID buffer as in Inferred Lighting [5]. The authors seem to aim at replacing the classic deferred lighting approach with this SH deferred approach. My context is different. I am in a forward rendering context, with one pass per light, and I have really heavy constraints on interpolators. Any extra light requires sending the geometry again. The main purpose of the SH buffer will be fill lights (secondary lights), so it can tolerate more approximation.

The SH buffer is an appealing approach:
– No need for the normal (and I don’t have it)
– Can be offloaded to the SPU/CPU on consoles
– Decoupled from geometry
– The SH buffer can be composited in the main geometry pass, with access to all the other material properties. This means complex BRDFs are handled correctly (like the one I describe in my post Adopting a physically based shading model).

The requirements of SH buffers are frightening: using quadratic SH (9 coefficients) is impractical in terms of performance and memory, but linear SH (4 coefficients) gives good results for simulating indirect light. So we use only 4 SH coefficients.
We need to store 4 coefficients for each of the R, G and B channels. This results in 3 * 4 float16 values per pixel, which means 21 MB at 1280×720! And compositing in the main pass requires sampling 3 float16 buffers and adding several instructions inside an already heavy main shader.
Note that in my context, I don’t want to do smart upsampling of the SH buffer: first because of the artifacts it introduces, and second because I want to compose the SH buffers in the main pass to be able to apply complex BRDFs, and the shader is heavy enough. I am ALU bound and will not suffer too much from the high resolution compared to adding instructions.
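For reference, the arithmetic behind that figure: 1280 × 720 pixels × 3 buffers × 4 coefficients × 2 bytes per float16 = 22,118,400 bytes, which is about 21 MB.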

Obviously we can’t afford this on consoles. I will describe the methods I tried to minimize these constraints in the following sections.

Experiment

In this section we deal only with diffuse lighting.
Here is my reference scene with classic dynamic lights (click on any image to see it full-size):

On the left there are 3 lights: green, red and blue. A yellowish light with a large brightness is in the middle, and three other overlapping lights (purple, green and blue) are on the right.
The stairs on the left and right highlight the problem of light directionality. On the left stair, the green light affects the top of the steps and the blue light the sides. On the right stair, the three lights affect the top of the steps and the purple light the sides.

Here are the steps and details of my test.

The first step is to render the lights into the SH buffer by accumulating them in linear SH. To be practical, we need to use RGBA8 SH buffers instead of float16 buffers. This highlights one of the problems of SH coefficients: they are signed floats and require a range higher than one to get HDR lighting.
To avoid precision issues and the sign constraint, I draw every light affecting a pixel in a single draw call (so no additive blending). I generate several shader combinations for different numbers of lights and use the tiled deferred lighting approach (as described in [8], [9] or [10]). In a deferred rendering context, the tiled deferred approach is only a gain with many lights on screen; if you have few lights it is not an optimization. However, in my case this is the only approach available, as you need all the accumulated lights in the shader. I limit myself to 8 lights for this test. As you can see, all the optimizations of deferred lighting/shading apply.

I use the Z buffer to recover the world position in the SH buffer pass, apply attenuation and project each light into SH.

// Evaluate the first 4 SH basis functions (bands 0 and 1) for direction Dir.
float4 SHEvalDirection(float3 Dir)
{
    float4 Result;
    Result.x = 0.282095;
    Result.y = -0.488603 * Dir.y;
    Result.z = 0.488603 * Dir.z;
    Result.w = -0.488603 * Dir.x;
    return Result;
}

// Project one point light into linear SH and accumulate it into the
// per-channel (R, G, B) SH coefficient vectors.
void AddLightToSH(float4 InLightPosition, float4 InLightColor, float3 PixelPosition,
                  inout float4 SHLightr, inout float4 SHLightg, inout float4 SHLightb)
{
    // No shadow available for SH buffer lighting
    float3   WorldLightVectorUnormalized       = InLightPosition.xyz - PixelPosition;
    float    DistanceAttenuation               = Attenuation(WorldLightVectorUnormalized);
    float3   LightColorWithDistanceAttenuation = DistanceAttenuation * InLightColor.rgb;

    float4 SHLightResult                        = SHEvalDirection(normalize(WorldLightVectorUnormalized));

    SHLightr += SHLightResult * LightColorWithDistanceAttenuation.r;
    SHLightg += SHLightResult * LightColorWithDistanceAttenuation.g;
    SHLightb += SHLightResult * LightColorWithDistanceAttenuation.b;
}
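
To make the flow of the pass clearer, here is a minimal sketch of how the accumulation shader could be put together around these two functions. This is an illustration, not the production shader: DepthSampler, InvViewProjection, LightPositions, LightColors, NUM_LIGHTS and ReconstructWorldPosition are assumed names, and the final range-compression to the RGBA8 targets (described below) is left out.

#define NUM_LIGHTS 8 // one shader combination is generated per light count

sampler2D DepthSampler;
float4x4  InvViewProjection;
float4    LightPositions[NUM_LIGHTS];
float4    LightColors[NUM_LIGHTS];

float3 ReconstructWorldPosition(float2 UV, float Depth)
{
    // Clip-space position from screen UV and device depth, then un-project.
    float4 ClipPos  = float4(UV * float2(2.0f, -2.0f) + float2(-1.0f, 1.0f), Depth, 1.0f);
    float4 WorldPos = mul(ClipPos, InvViewProjection);
    return WorldPos.xyz / WorldPos.w;
}

void SHBufferPS(float2 UV : TEXCOORD0,
                out float4 OutSHr : COLOR0,
                out float4 OutSHg : COLOR1,
                out float4 OutSHb : COLOR2)
{
    // Recover the world position of the pixel from the Z buffer.
    float  Depth         = tex2D(DepthSampler, UV).r;
    float3 PixelPosition = ReconstructWorldPosition(UV, Depth);

    float4 SHLightr = 0;
    float4 SHLightg = 0;
    float4 SHLightb = 0;

    // All the lights affecting the pixel are accumulated in a single draw call.
    for (int i = 0; i < NUM_LIGHTS; ++i)
    {
        AddLightToSH(LightPositions[i], LightColors[i], PixelPosition,
                     SHLightr, SHLightg, SHLightb);
    }

    // The signed float coefficients must then be range-compressed to fit the
    // RGBA8 render targets (see the compression scheme described below).
    OutSHr = SHLightr;
    OutSHg = SHLightg;
    OutSHb = SHLightb;
}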

cosine convolution of the SH method

SH is only an approximation of the lighting environment. The more coefficients you have, the more precision you get. With only 4 coefficients we lose a little intensity (9 coefficients would get us closer to the original intensity), but we still get good results. The SH coefficients for each channel are stored in 3 RGBA8 buffers, which requires the use of multiple render targets (MRT). My way to compress them is as follows:

Read more of this post

Adopting a physically based shading model

Version : 1.31 – Living blog – First version was 2 August 2011

With permission of my company: Dontnod Entertainment – http://www.dont-nod.com/

This last year has seen a growing interest in physically based rendering. Physically based shading simplifies parameter control for artists, allows a more consistent look under different lighting conditions and gives a more realistic look. Like many game developers, I decided to introduce a physically based shading model at my company. I started this blog to share what we learned. The blog post is divided into two parts.

I will first present the physical shading model we chose and what we added to our engine to support it: this is the subject of this post. Then I will describe the process of making good data to feed this lighting model: Feeding a physically based shading model. I hope you will enjoy it and will share your own way of working with a physically based shading model. Feedback is welcome!

The notation of this post follows Naty Hoffman’s Siggraph 2010 course paper, Physically-Based Shading Models in Film and Game Production [2].

Working with a physically based shading model implies some changes in a game engine to fully support it. I will expose here the physically based rendering (PBR) approach we chose for our game engine.

When talking about PBR, we talk about BRDFs, Fresnel, energy conservation, microfacet theory, the punctual light source equation... All these concepts are very well described in [2] and will not be re-explained here.

Our main lighting model is composed of two parts: ambient lighting and direct lighting. But before digging into these subjects, I will talk about some magic numbers.

Normalization factor

I would like to clarify the constants we find in various lighting models. The energy conservation constraint (the outgoing energy cannot be greater than the incoming energy) requires the BRDF to be normalized. There are two different approaches to normalize a BRDF.

Normalize the entire BRDF

Normalizing a BRDF means that the directional-hemispherical reflectance (the reflectance of a surface under direct illumination) must always be between 0 and 1: R(l)=\int_\Omega f(l,v) \cos{\theta_o} \,\mathrm{d}\omega_o\leq 1 . This is an integral over the hemisphere. In games, R(l) corresponds to the diffuse color c_{diff} .

For a Lambertian BRDF, f(l,v) is constant. It means that R(l)=\pi f(l,v) and we can write f(l,v)=\frac{R(l)}{\pi}
As a result, the normalization factor of a Lambertian BRDF is \frac{1}{\pi}
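For readers who want the missing step, the \pi comes from the cosine-weighted hemisphere integral, written here in spherical coordinates around the normal:
\int_\Omega \cos{\theta_o}\,\mathrm{d}\omega_o = \int_0^{2\pi}\int_0^{\pi/2}\cos{\theta}\,\sin{\theta}\,\mathrm{d}\theta\,\mathrm{d}\phi = 2\pi\cdot\frac{1}{2} = \pi
so for a constant f(l,v) we indeed get R(l)=\pi f(l,v).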

For original Phong (the Phong model most game programmers use) \underline{(r\cdot v)}^{\alpha_p}c_{spec} , the normalization factor is \frac{\alpha_p+1}{2\pi}
For the Phong BRDF (just multiply Phong by \cos{\theta_i} , see [1][8]) \underline{(r\cdot v)}^{\alpha_p}c_{spec}\underline{(n\cdot l)} , the normalization factor becomes \frac{\alpha_p+2}{2\pi}
For Blinn-Phong \underline{(n\cdot h)}^{\alpha_p}c_{spec} , the normalization factor is \frac{(\alpha_p+2)}{4\pi(2-2^\frac{-\alpha_p}{2})}
For the Blinn-Phong BRDF \underline{(n\cdot h)}^{\alpha_p}c_{spec}\underline{(n\cdot l)} , the normalization factor is \frac{(\alpha_p+2)(\alpha_p+4)}{8\pi(2^\frac{-\alpha_p}{2}+\alpha_p)}
The derivation of these constants can be found in [3] and [13]. Another good summary is provided in [27].

Note that for the Blinn-Phong BRDF, a cheap approximation is given in [1] as: \frac{\alpha_p+8}{8\pi}
There is a discussion about this constant in [4], and here is the interesting comment from Naty Hoffman:

About the approximation we chose, we were not trying to be strictly conservative (that is important for multi-bounce GI solutions to converge, but not for rasterization).
We were trying to choose a cheap approximation which is close to 1, and we thought it more important to be close for low specular powers.
Low specular powers have highlights that cover a lot of pixels and are unlikely to be saturating past 1.

When working with microfacet BRDFs, normalize only the microfacet normal distribution function (NDF)

A microfacet distribution requires that the (signed) projected area of the microsurface is the same as the projected area of the macrosurface for any direction v [6]. In the special case v = n:
\int_\Theta D(m)(n\cdot m)\,\mathrm{d}\omega_m=1
The integral is over the sphere and the cosine factor is not clamped.

For the Phong distribution (or Blinn distribution; two names, same distribution), the NDF normalization constant is \frac{\alpha_p+2}{2\pi}
The derivation can be found in [7].
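For completeness, here is a quick sketch of that derivation (the standard computation, not quoted from [7]): writing D(m) = c\,\underline{(n\cdot m)}^{\alpha_p} and using the fact that the distribution is zero below the hemisphere, the constraint above gives
c\int_0^{2\pi}\int_0^{\pi/2}\cos^{\alpha_p+1}{\theta}\,\sin{\theta}\,\mathrm{d}\theta\,\mathrm{d}\phi = c\,\frac{2\pi}{\alpha_p+2} = 1 \quad\Rightarrow\quad c = \frac{\alpha_p+2}{2\pi}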

Direct Lighting

Our direct lighting model is composed of two parts: direct diffuse + direct specular.
Direct diffuse is the usual Lambertian BRDF: \frac{c_{diff}}{\pi}
Direct specular is the microfacet BRDF described by Naty Hoffman in [2]: F_{Schlick}(c_{spec},l_c,h)\frac{\alpha_p+2}{8\pi}\underline{(n\cdot h)}^{\alpha_p}
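
As an illustration, here is a minimal sketch of this direct lighting model in shader code. Only the formulas above come from the post; the function names, the saturate-based clamping and the light-unit convention are assumptions for the example.

static const float PI = 3.14159265f;

// Schlick Fresnel approximation: F = c_spec + (1 - c_spec) * (1 - (l.h))^5
float3 FresnelSchlick(float3 SpecularColor, float3 L, float3 H)
{
    return SpecularColor + (1.0f - SpecularColor) * pow(1.0f - saturate(dot(L, H)), 5.0f);
}

// Direct lighting for one punctual light:
// diffuse  : c_diff / pi
// specular : F_schlick(c_spec, l, h) * (alpha_p + 2) / (8 pi) * (n.h)^alpha_p
float3 DirectLighting(float3 DiffuseColor, float3 SpecularColor, float SpecularPower,
                      float3 N, float3 V, float3 L, float3 LightColor)
{
    float3 H   = normalize(L + V);
    float  NoL = saturate(dot(N, L));
    float  NoH = saturate(dot(N, H));

    float3 Diffuse  = DiffuseColor / PI;
    float3 Specular = FresnelSchlick(SpecularColor, L, H)
                    * ((SpecularPower + 2.0f) / (8.0f * PI))
                    * pow(NoH, SpecularPower);

    // The pi factor follows the punctual light source equation of [2]:
    // Lo = pi * BRDF * LightColor * (n.l)
    return PI * (Diffuse + Specular) * LightColor * NoL;
}

If your engine folds the pi of the punctual light equation into the light intensity instead, simply drop it from the last line.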

Read more of this post

Feeding a physically based shading model

Version : 1.0 – Living blog – First version was 17 August 2011

With permission of my company: Dontnod Entertainment – http://www.dont-nod.com/

Adopting a physically based shading model is just a first step. Physically based rendering (PBR) requires a physical lighting setup and good spatially varying BRDF inputs (a.k.a. textures) to get the best results.
Feeding the shading model with physically plausible data is in the hands of artists.

There are many texture creation tutorials available on the web. But too often, artists forget to link their work with the lighting model for which the textures are created. With a traditional lighting model, there is often an RGB diffuse texture, an RGB specular texture, a specular mask texture, a constant specular power and a normal map. For advanced materials you can add a specular power texture, a Fresnel intensity texture, a Fresnel scale texture, a reflection mask texture...
A physically based shading model is simpler and will provide a consistent look under different lighting conditions. However, artists must be trained, because the right values are not always trivial to find and they must accept not having full control over the specular response.

Our physically based shading model requires four inputs (a quick sketch of how they might be grouped in shader code follows the list):

  • Diffuse color RGB (also named diffuse albedo, diffuse reflectance or directional-hemispherical reflectance)
  • Specular color RGB (also named specular albedo or specular reflectance)
  • Normal map
  • Gloss (monochrome)
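
To make these inputs concrete, here is a minimal sketch of how they could be fetched in a shader. The texture names, the sampling helper and the comments are illustrative assumptions, not our engine’s actual interface.

// Hypothetical grouping of the four shading model inputs.
struct MaterialInputs
{
    float3 DiffuseColor;    // diffuse albedo (authored in sRGB)
    float3 SpecularColor;   // specular albedo / reflectance
    float3 Normal;          // tangent-space normal from the normal map
    float  Gloss;           // monochrome gloss, remapped to a specular power at runtime
};

MaterialInputs SampleMaterial(sampler2D DiffuseTex, sampler2D SpecularTex,
                              sampler2D NormalTex,  sampler2D GlossTex, float2 UV)
{
    MaterialInputs M;
    M.DiffuseColor  = tex2D(DiffuseTex,  UV).rgb;
    M.SpecularColor = tex2D(SpecularTex, UV).rgb;
    M.Normal        = tex2D(NormalTex,   UV).rgb * 2.0f - 1.0f; // unpack [0,1] to [-1,1]
    M.Gloss         = tex2D(GlossTex,    UV).r;
    return M;
}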

The authoring time of these textures is not equal. I will present the advice and the reference material to provide to artists to help them author these textures. The better the artists’ workflow, the better the shading model will look. Normal and gloss are tightly coupled, so they will be treated together.

When talking about textures, we talk about sRGB/RGB color spaces, linear/gamma space... All these concepts are well described in [8] and will not be explained here.

Before digging into the subject in more detail, here is some advice for the texture workflow:

  • Artists must calibrate their screens. Or better, all your team’s screens should be calibrated in the same way [6].
  • Make sure Colour Management is set to use sRGB in Photoshop [5].
  • Artists will trust their eyes, but eyes can be fooled. Adjusting grey-level textures can be annoying [7]. Provide reference material and work on a neutral grey background.
  • When working in sRGB color space, as is the case for most textures authored in Photoshop, remember that middle grey is not 128,128,128 but 187,187,187. See John Hable’s post [22] for a comparison between 128 and 187 middle grey, and the short conversion sketch after this list.
  • The game engine should implement debug view modes to display texture density, mipmap resolution, lighting only, diffuse only, specular only, gloss only, normal only... This is a valuable tool to track texture authoring problems.
  • Texture quality should be uniform in the scene. Even if all the textures are amazing, a single poor texture on screen will attract the eye, like a dead pixel on a screen. The resulting visual feeling will be bad. The same scene with uniform density and medium quality will look better.
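
As a small illustration of the 187 vs. 128 middle grey remark above, the standard linear-to-sRGB encoding maps a linear value of 0.5 to roughly 187 on a 0-255 scale (the function below is the usual IEC 61966-2-1 formula, not engine-specific code):

// Standard linear to sRGB encoding.
float LinearToSrgb(float x)
{
    return (x <= 0.0031308f) ? 12.92f * x
                             : 1.055f * pow(x, 1.0f / 2.4f) - 0.055f;
}

// LinearToSrgb(0.5) is about 0.735, i.e. 187/255: a 50% linear grey stored in
// an sRGB texture is 187,187,187 rather than 128,128,128.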

Dielectric and metallic materials

There are different types of substances in the real world. They can be classified into three main groups: insulators, semi-conductors and conductors.
In games we are only interested in two of them: insulators (dielectric materials) and conductors (metallic materials).
Artists should understand which category a material belongs to. This influences the diffuse and specular values to assign to this material.

I already talked about these two categories in the post Adopting a Physically based shading model.

Dielectric materials are the most common materials. Their optical properties rarely vary much over the visible spectrum: water, glass, skin, wood, hair, leather, plastic, stone, concrete, ruby, diamond...
Metals have optical properties that vary over the visible spectrum: iron, aluminium, copper, gold, cobalt, nickel, silver...
See [8].

Diffuse color

Diffuse textures require some time to author.

In the past, it was usual to bake everything into a “diffuse” texture to fake lighting effects like shadows, reflections, specular... With newer engines, all these effects are simulated and must not be baked.
The best definition of diffuse color in our engine is: how bright a surface is when lit by a 100% bright white light [4]. This definition is related to the definition of the light unit from the punctual light equation (see Adopting a physically based shading model). Read more of this post