Adopting a physically based shading model

Version : 1.31 – Living blog – First version was 2 August 2011

With permission of my company: Dontnod Entertainment http://www.dont-nod.com/

This last year has seen a growing interest in physically based rendering. Physically based shading simplifies parameter control for artists, allows a more consistent look under different lighting conditions and gives a more realistic result. Like many game developers, I decided to introduce a physically based shading model at my company. I started this blog to share what we learn. The blog post is divided in two parts.

I will first present the physically based shading model we chose and what we added in our engine to support it: this is the subject of this post. Then I will describe the process of creating good data to feed this lighting model: Feeding a physically based shading model. I hope you will enjoy it and will share your own way of working with a physically based shading model. Feedback is welcome!

The notation of this post follows Naty Hoffman's SIGGRAPH 2010 paper Physically-Based Shading Models in Film and Game Production [2].

Working with a physically based shading model implies some changes in a game engine to fully support it. I will expose here the physically based rendering (PBR) approach we chose for our game engine.

When talking about PBR, we talk about BRDFs, Fresnel, energy conservation, microfacet theory, the punctual light sources equation… All these concepts are very well described in [2] and will not be re-explained here.

Our main lighting model is composed of two parts: ambient lighting and direct lighting. But before digging into these subjects, I will talk about some magic numbers.

Normalization factor

I would like to clarify the constants we find in various lighting models. The energy conservation constraint (the outgoing energy cannot be greater than the incoming energy) requires the BRDF to be normalized. There are two different approaches to normalize a BRDF.

Normalize the entire BRDF

Normalizing a BRDF means that the directional-hemispherical reflectance (the reflectance of a surface under direct illumination) must always be between 0 and 1: R(l)=\int_\Omega f(l,v) \cos{\theta_o} \mathrm{d}\omega_o\leq 1 . This is an integral over the hemisphere. In a game, for the diffuse term, R(l) corresponds to the diffuse color c_{diff} .

For a Lambertian BRDF, f(l,v) is constant. It means that R(l)=\pi f(l,v) and we can write f(l,v)=\frac{R(l)}{\pi} .
As a result, the normalization factor of a Lambertian BRDF is \frac{1}{\pi} .
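As a quick check (a standard hemisphere integral, worked out here for convenience): with f(l,v) constant,
R(l)=f(l,v)\int_\Omega \cos{\theta_o} \mathrm{d}\omega_o=f(l,v)\int_0^{2\pi}\int_0^{\frac{\pi}{2}} \cos{\theta_o}\sin{\theta_o} \mathrm{d}\theta_o \mathrm{d}\phi_o=\pi f(l,v)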

For original Phong (the Phong model most game programmers use) \underline{(r\cdot v)}^{\alpha_p}c_{spec} the normalization factor is \frac{\alpha_p+1}{2\pi}
For the Phong BRDF (just multiply Phong by \cos{\theta_i} , see [1][8]) \underline{(r\cdot v)}^{\alpha_p}c_{spec}\underline{(n\cdot l)} the normalization factor becomes \frac{\alpha_p+2}{2\pi}
For Blinn-Phong \underline{(n\cdot h)}^{\alpha_p}c_{spec} the normalization factor is \frac{(\alpha_p+2)}{4\pi(2-2^\frac{-\alpha_p}{2})}
For the Blinn-Phong BRDF \underline{(n\cdot h)}^{\alpha_p}c_{spec}\underline{(n\cdot l)} the normalization factor is \frac{(\alpha_p+2)(\alpha_p+4)}{8\pi(2^\frac{-\alpha_p}{2}+\alpha_p)}
The derivation of these constants can be found in [3] and [13]. Another good summary is provided in [27].

Note that for the Blinn-Phong BRDF, a cheap approximation is given in [1] as: \frac{\alpha_p+8}{8\pi}
There is a discussion about this constant in [4] and here is the interesting comment from Naty Hoffman:

About the approximation we chose, we were not trying to be strictly conservative (that is important for multi-bounce GI solutions to converge, but not for rasterization).
We were trying to choose a cheap approximation which is close to 1, and we thought it more important to be close for low specular powers.
Low specular powers have highlights that cover a lot of pixels and are unlikely to be saturating past 1.
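To make these constants concrete, here is a small standalone C++ sketch (helper names are mine, this is illustrative code, not engine code) that evaluates the exact Blinn-Phong BRDF normalization factor listed above and the cheap approximation from [1], so they can be compared for a given specular power:

#include <cmath>
#include <cstdio>

// Exact normalization factor of the Blinn-Phong BRDF (n.h)^a * (n.l),
// as listed above: (a+2)(a+4) / (8*pi*(2^(-a/2) + a)).
static float BlinnPhongBrdfNorm(float specularPower)
{
    const float pi = 3.14159265f;
    return (specularPower + 2.0f) * (specularPower + 4.0f) /
           (8.0f * pi * (std::pow(2.0f, -specularPower / 2.0f) + specularPower));
}

// Cheap approximation from [1]: (a+8) / (8*pi).
static float BlinnPhongBrdfNormApprox(float specularPower)
{
    const float pi = 3.14159265f;
    return (specularPower + 8.0f) / (8.0f * pi);
}

int main()
{
    // Compare exact and approximate factors for a few specular powers:
    // the approximation is closest for low powers, as discussed above.
    for (float a : { 1.0f, 8.0f, 32.0f, 128.0f, 2048.0f })
        std::printf("a=%7.1f  exact=%8.3f  approx=%8.3f\n",
                    a, BlinnPhongBrdfNorm(a), BlinnPhongBrdfNormApprox(a));
    return 0;
}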

When working with microfacet BRDFs, normalize only the microfacet normal distribution function (NDF)

A microfacet distribution requires that the (signed) projected area of the microsurface is the same as the projected area of the macrosurface for any direction v [6]. In the special case v = n:
\int_\Theta D(m)(n\cdot m)\mathrm{d}\omega_m=1
The integral is over the sphere and the cosine factor is not clamped.

For the Phong distribution (or Blinn distribution; two names, same distribution) the NDF normalization constant is \frac{\alpha_p+2}{2\pi}
The derivation can be found in [7].
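Written out with this constant, the normalized Phong/Blinn distribution is D(m)=\frac{\alpha_p+2}{2\pi}\underline{(n\cdot m)}^{\alpha_p} , and plugging it into the constraint above (the distribution is zero in the lower hemisphere, so the sphere integral reduces to the upper one) gives
\int_\Theta D(m)(n\cdot m)\mathrm{d}\omega_m=(\alpha_p+2)\int_0^{\frac{\pi}{2}} \cos^{\alpha_p+1}{\theta_m}\sin{\theta_m} \mathrm{d}\theta_m=1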

Direct Lighting

Our direct lighting model is composed of two parts: direct diffuse + direct specular.
Direct diffuse is the usual Lambertian BRDF: \frac{c_{diff}}{\pi}
Direct specular is the microfacet BRDF described by Naty Hoffman in [2]: F_{Schlick}(c_{spec},l_c,h)\frac{\alpha_p+2}{8\pi}\underline{(n\cdot h)}^{\alpha_p}
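To illustrate how these two terms fit together, here is a minimal C++ sketch of the per-light evaluation (function and type names are mine; the dot products are assumed already clamped to [0,1], and the result still has to be combined with the light value according to the punctual light equation of [2]):

#include <cmath>

struct Rgb { float r, g, b; };

// Schlick's Fresnel approximation: F = F0 + (1 - F0) * (1 - l.h)^5.
static Rgb FresnelSchlick(Rgb specColor, float lDotH)
{
    float f = std::pow(1.0f - lDotH, 5.0f);
    return { specColor.r + (1.0f - specColor.r) * f,
             specColor.g + (1.0f - specColor.g) * f,
             specColor.b + (1.0f - specColor.b) * f };
}

// Direct diffuse + direct specular for one punctual light, multiplied by
// the clamped n.l factor; nDotL, nDotH and lDotH are clamped to [0,1].
static Rgb DirectLighting(Rgb diffColor, Rgb specColor, float specPower,
                          float nDotL, float nDotH, float lDotH)
{
    const float pi = 3.14159265f;

    // Lambertian diffuse term: c_diff / pi.
    Rgb diffuse = { diffColor.r / pi, diffColor.g / pi, diffColor.b / pi };

    // Normalized Blinn-Phong specular term:
    // F_Schlick(c_spec, l_c, h) * (a + 2)/(8*pi) * (n.h)^a.
    float d = (specPower + 2.0f) / (8.0f * pi) * std::pow(nDotH, specPower);
    Rgb fresnel = FresnelSchlick(specColor, lDotH);
    Rgb specular = { fresnel.r * d, fresnel.g * d, fresnel.b * d };

    return { (diffuse.r + specular.r) * nDotL,
             (diffuse.g + specular.g) * nDotL,
             (diffuse.b + specular.b) * nDotL };
}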


Feeding a physically based shading model

Version : 1.0 – Living blog – First version was 17 August 2011

With permission of my company: Dontnod Entertainment http://www.dont-nod.com/

Adopting a physically based shading model is just a first step. Physically based rendering (PBR) requires a physical lighting setup and good spatially varying BRDF inputs (a.k.a. textures) to get the best results.
Feeding the shading model with physically plausible data is in the hands of artists.

There are many texture creation tutorials available on the web. But too often, artists forget to link their work with the lighting model for which the textures are created. With a traditional lighting model, there is often an RGB diffuse texture, an RGB specular texture, a specular mask texture, a constant specular power and a normal map. For advanced materials you can add a specular power texture, a Fresnel intensity texture, a Fresnel scale texture, a reflection mask texture…
A physically based shading model is simpler and will provide a consistent look under different lighting conditions. However, artists must be trained because the right values are not always trivial to find, and they should accept not having full control over the specular response.

Our physically based shading model requires four inputs:

  • Diffuse color RGB (also named diffuse albedo, diffuse reflectance or directional-hemispherical reflectance)
  • Specular color RGB (also named specular albedo or specular reflectance)
  • Normal
  • Gloss (monochrome)

The authoring time of these textures is not equal. I will expose the advice and reference material to provide to artists to help them author these textures. The better the artists' workflow, the better the shading model will look. Normal and gloss are tightly coupled, so they will be treated together.
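In the engine, these four inputs end up as a small per-texel material record fed by the textures. A minimal C++ sketch (field names are illustrative, not our actual engine types):

// Per-texel inputs of the shading model (illustrative names).
struct MaterialSample
{
    float diffuseColor[3];  // RGB diffuse albedo (directional-hemispherical reflectance)
    float specularColor[3]; // RGB specular reflectance
    float normal[3];        // tangent-space normal decoded from the normal map
    float gloss;            // monochrome gloss, later remapped to a specular power
};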

When talking about textures, we talk about the sRGB/RGB color spaces, linear/gamma space… All these concepts are well described in [2] and will not be explained here.

Before digging into the subject in more detail, here is some advice for the texture workflow:

  • Artists must calibrate their screens. Or better, all your team's screens should be calibrated in the same way [6].
  • Make sure Colour Management is set to use sRGB in Photoshop [5].
  • Artists will trust their eyes, but eyes can be fooled. Adjusting grey-level textures can be annoying [7]. Provide reference material and work with a neutral grey background.
  • When working in the sRGB color space, as is the case for most textures authored with Photoshop, remember that middle grey is not 128,128,128 but 187,187,187. See John Hable's post [22] for a comparison between 128 and 187 middle grey (a small conversion sketch follows this list).
  • The game engine should implement debug view modes to display texture density, mipmap resolution, lighting only, diffuse only, specular only, gloss only, normal only… This is a valuable tool to track texture authoring problems.
  • Textures should be uniform in the scene. Even if all textures are amazing, a single poor texture on screen will attract the eye, like a dead pixel on a monitor. The resulting visual feeling will be bad. The same scene with uniform density and medium quality will look better.
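To illustrate the middle-grey remark above, here is a small C++ sketch of the standard sRGB-to-linear conversion; it shows why 187 (and not 128) is the sRGB encoding of a linear value near 0.5 (the helper is mine, the formula is the usual sRGB decoding):

#include <cmath>
#include <cstdio>

// Standard sRGB to linear conversion for one channel value in [0,1].
static float SrgbToLinear(float c)
{
    return (c <= 0.04045f) ? c / 12.92f
                           : std::pow((c + 0.055f) / 1.055f, 2.4f);
}

int main()
{
    // 128/255 decodes to ~0.22 linear, while 187/255 decodes to ~0.50 linear.
    std::printf("128 -> %.3f linear\n", SrgbToLinear(128.0f / 255.0f));
    std::printf("187 -> %.3f linear\n", SrgbToLinear(187.0f / 255.0f));
    return 0;
}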

Dielectric and metallic material

There are different types of substances in the real world. They can be classified into three main groups: insulators, semi-conductors and conductors.
In games we are only interested in two of them: insulators (dielectric materials) and conductors (metallic materials).
Artists should understand which category a material belongs to. This will influence the diffuse and specular values to assign to this material.

I already talked about these two categories in the post Adopting a Physically based shading model.

Dielectric materials are the most common materials. Their optical properties rarely vary much over the visible spectrum: water, glass, skin, wood, hair, leather, plastic, stone, concrete, ruby, diamond…
Metallic materials have optical properties that vary over the visible spectrum: iron, aluminium, copper, gold, cobalt, nickel, silver…
See [8].
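As a rough illustration of how the category drives the two color inputs, here is a short C++ sketch (the numbers are commonly cited reference values in linear space, for illustration only, not engine constants):

// Illustrative only: typical split between diffuse and specular color.
struct MaterialColors { float diffuse[3]; float specular[3]; };

// Dielectric: the visible color lives in the diffuse term; the specular
// reflectance is low and nearly monochrome (roughly 0.02-0.05 linear).
MaterialColors redPlastic = { { 0.50f, 0.05f, 0.05f }, { 0.04f, 0.04f, 0.04f } };

// Metal: almost no diffuse contribution; the color lives in the specular
// reflectance, which varies over the visible spectrum (hence the tint).
MaterialColors gold = { { 0.00f, 0.00f, 0.00f }, { 1.00f, 0.71f, 0.29f } };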

Diffuse color

Diffuse textures require some time to author.

In the past, it was usual to bake everything into a “diffuse” texture to fake lighting effects like shadow, reflection, specular… With newer engines, all these effects are simulated and must not be baked.
The best definition of diffuse color in our engine is: how bright a surface is when lit by a 100% bright white light [4]. This definition is related to the definition of the light unit from the punctual light equation (see Adopting a physically based shading model).
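As a sanity check with the punctual light equation of [2], a pure Lambertian surface lit head-on by a white light of value 1 returns exactly its diffuse color:
L_o=\pi \frac{c_{diff}}{\pi} \otimes c_{light} \underline{(n\cdot l_c)}=c_{diff} when c_{light}=1 and (n\cdot l_c)=1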