Weathering and aging effects in the hands of artists

Version : 1.1 – Living blog – First version was 04 May 2014

With permission of Dontnod Entertainment: http://www.dont-nod.com/

It is now common in games to have dynamic weathering and aging effects: rain, snow, dirt, rust, dust, pollution… Most of the time these effects are driven by programmers, because they require specific code, like access to a custom shadow map, or are performed by a full-screen post process. Before leaving Dontnod I was working on a feature to allow artists to handle these kinds of effects themselves. At the time we required a lot of effects and I wanted to let artists prototype as many ideas as they could. This post explains the feature and gives implementation details for Unreal Engine 4. It is designed for a deferred rendering engine. This weathering and aging feature has been used at Dontnod, but there are some pitfalls which may prevent it from being used effectively in a shipping game; it all depends on the scale of the game and the performance expected. I hope that by exposing this idea, others can improve the system 🙂

I thank Dontnod for allowing me to talk about it.

 The weathering and aging effects

We know from the graphics literature that a lot of weathering and aging effects can be achieved through surface property modification [1]. Of course, using a complex multi-layered lighting model is the rigorous way to handle them, but it is far less flexible, requires a programmer to code every effect, and is difficult to get right across all supported light types (like image-based lighting or area lights).

As an example, the wet surfaces appearing when it is raining can be simulated with material property modifications: roughness, diffuse albedo, normal… I talk a lot about this subject in other posts on this blog (see the water drops series). However, these modifications should be applied only where they matter. When it is raining, you don't want your interior surfaces to be wet. In Remember Me, we handled this by adding some extra code in the shaders to modify the surface properties; artists then vertex-painted the parts of the surfaces that needed to stay dry. But this was not sufficient to handle all cases of wet surfaces. For example, a player walking in a wet street should, once he gets into a dry interior, leave wet footprints on the ground. This could have been simulated with decals.

Taking another example, in Killzone: Shadow Fall [2] they perform a full-screen pass to modify the material attributes where normals point upward, adding dust where it matters (after an explosion, for example).
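As a rough illustration of that kind of pass, here is a minimal sketch of how a dust weight could be derived from the GBuffer normal (the names and the falloff choice are mine, not Guerrilla's actual implementation):

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

// Hypothetical helper for a full-screen dust pass: derive a dust weight
// from how much the surface faces upward. The normal is assumed to have
// been decoded from the GBuffer into world space.
float ComputeDustWeight(const Vec3& worldNormal, float dustAmount)
{
    // Up-facing surfaces (normal.y close to 1) collect the most dust;
    // squaring sharpens the falloff so near-vertical surfaces stay clean.
    float upFacing = std::max(worldNormal.y, 0.0f);
    return upFacing * upFacing * dustAmount;
}
```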

GBuffer modification

Thinking with a deferred renderer in mind, it is possible to identify a set of controls for artists that allow them to modify material properties to simulate a weathering or aging effect. These controls all perform GBuffer modification, through:
– Deferred decals
– Full-screen quad: I call this a material post-process.
– Deferred effect lights: These are the same as regular lights, with or without a shadow map, except they behave like deferred decals. The amount of light is used as the opacity for the blending operation. Soft shadow maps also allow smooth transitions between effects.
– Object shaders: Properties are modified directly at GBuffer generation time in the shader of the object, so this requires specific code in each shader. Artists can use vertex painting to provide information for an effect.

Each tool performs some material property modification to simulate an effect, like darkening the diffuse and boosting the smoothness for wet surfaces. All the GBuffer modifications must be done before the lighting pass to be taken into account.
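To make this concrete, here is a minimal C++-style sketch of such a wet-surface modification (the structure and the darkening/smoothness factors are illustrative choices, not values from a shipped game):

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

// Illustrative material properties (not UE4's actual GBuffer layout).
struct MaterialProperties
{
    Vec3  diffuseColor;
    float smoothness; // i.e. 1 - roughness, depending on convention
};

// Wet-surface effect: darken the diffuse albedo and boost the smoothness,
// scaled by a per-pixel wetness weight in [0,1].
void ApplyWetness(MaterialProperties& props, float wetness)
{
    float darken = 1.0f - 0.7f * wetness; // arbitrary darkening strength
    props.diffuseColor = { props.diffuseColor.x * darken,
                           props.diffuseColor.y * darken,
                           props.diffuseColor.z * darken };
    props.smoothness = std::min(props.smoothness + 0.6f * wetness, 1.0f);
}
```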

Applying the material property modification for these GBuffer modification controls can be done in two ways. The most common is to use hardware blending, but it can be too restrictive. The other is to use the ability to read and write the same texture (a.k.a. programmable blending). Sadly this ability is not widely supported: the PS4 and Mantle support it, for example, but DX11 doesn't (I am not talking about Intel's pixel synchronization, but simply reading and writing the same pixel).
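A small CPU-side illustration of the difference, assuming a single material channel (hardware blending is limited to fixed per-channel operations, while programmable blending can apply any function of the existing value):

```cpp
// Hardware blending: restricted to fixed per-channel operations, typically
// dst = dst * (1 - opacity) + src * opacity.
float HardwareStyleBlend(float dst, float src, float opacity)
{
    return dst * (1.0f - opacity) + src * opacity;
}

// Programmable blending: the shader reads the current GBuffer value and can
// combine it with the new one in any way, e.g. a non-linear remap that no
// fixed-function blend state can express.
float ProgrammableStyleBlend(float dst, float src, float opacity)
{
    float modified = dst * dst + src; // any custom function of dst and src
    return dst + (modified - dst) * opacity;
}
```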

Delaying the GBuffer modification

To support every platform and to be able to do any customization of material properties, I perform the modification of material properties in an extra full-screen pass, effectively delaying the GBuffer modification. Rather than modifying the GBuffer, the different tools simply output an effect weight into the GBuffer. This effect weight is read later in the extra pass to apply the effect. There are multiple benefits to doing this:
– Applying the effect only once can save performance for heavy effects
– Accumulating effect weights allows clamping them in the delayed pass in order to limit the strength of an effect
– A centralized place to deal with the effect. Easier to author.
Sadly, it requires storing the effect weight in the GBuffer.
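Here is a minimal sketch of this delayed pass, assuming a GBuffer layout with a dedicated effect-weight channel (the layout and the wetness effect are illustrative):

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

// Illustrative GBuffer sample with a dedicated effect-weight channel.
struct GBufferSample
{
    Vec3  diffuseColor;
    float smoothness;
    float effectWeight; // accumulated by decals, effect lights, object shaders
};

// The delayed full-screen pass, run once before lighting: read the
// accumulated weight, clamp it to limit the effect strength, and apply the
// (possibly expensive) material modification a single time per pixel.
void DelayedGBufferModification(GBufferSample& px, float maxWeight)
{
    float w = std::min(px.effectWeight, maxWeight);
    // Example effect: wetness, as in the earlier sketch.
    float darken = 1.0f - 0.7f * w;
    px.diffuseColor = { px.diffuseColor.x * darken,
                        px.diffuseColor.y * darken,
                        px.diffuseColor.z * darken };
    px.smoothness = std::min(px.smoothness + 0.6f * w, 1.0f);
}
```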

Adopting a physically based shading model

Version : 1.31 – Living blog – First version was 2 August 2011

With permission of my company, Dontnod Entertainment: http://www.dont-nod.com/

This last year has seen a growing interest in physically based rendering. Physically based shading simplifies parameter control for artists, allows a more consistent look under different lighting conditions and gives a more realistic look. Like many game developers, I decided to introduce a physically based shading model at my company. I started this blog to share what we learned. The blog post is divided in two parts.

I will first present the physically based shading model we chose and what we added to our engine to support it: this is the subject of this post. Then I will describe the process of creating good data to feed this lighting model: Feeding a physically based shading model. I hope you will enjoy it and will share your own way of working with physically based shading models. Feedback is welcome!

The notation of this post follows Naty Hoffman's SIGGRAPH 2010 paper, Physically-Based Shading Models in Film and Game Production [2].

Working with a physically based shading model implies some changes in a game engine to fully support it. I will expose here the physically based rendering (PBR) approach we chose for our game engine.

When talking about PBR, we talk about BRDFs, Fresnel, energy conservation, microfacet theory, punctual light source equations… All these concepts are very well described in [2] and will not be re-explained here.

Our main lighting model is composed of two parts: ambient lighting and direct lighting. But before digging into these subjects, I will talk about some magic numbers.

Normalization factor

I would like to clarify the constants found in various lighting models. The energy conservation constraint (the outgoing energy cannot be greater than the incoming energy) requires the BRDF to be normalized. There are two different approaches to normalize a BRDF.

Normalize the entire BRDF

Normalizing a BRDF means that the directional-hemispherical reflectance (the reflectance of a surface under direct illumination) must always be between 0 and 1: R(l)=\int_\Omega f(l,v) \cos{\theta_o} \mathrm{d}\omega_o\leq 1. This is an integral over the hemisphere. In games, R(l) corresponds to the diffuse color c_{diff}.

For a Lambertian BRDF, f(l,v) is constant. It follows that R(l)=\pi f(l,v), and we can write f(l,v)=\frac{R(l)}{\pi}
As a result, the normalization factor of a Lambertian BRDF is \frac{1}{\pi}
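For completeness, here is the one-line derivation behind R(l)=\pi f(l,v) (a standard result; the cosine integrated over the hemisphere is \pi):

R(l)=f(l,v)\int_\Omega \cos{\theta_o}\,\mathrm{d}\omega_o=f(l,v)\int_0^{2\pi}\!\int_0^{\pi/2}\cos{\theta_o}\sin{\theta_o}\,\mathrm{d}\theta_o\,\mathrm{d}\phi_o=\pi f(l,v)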

For original Phong (the Phong model most game programmers use) \underline{(r\cdot v)}^{\alpha_p}c_{spec}, the normalization factor is \frac{\alpha_p+1}{2\pi}
For the Phong BRDF (just multiply Phong by \cos{\theta_i}, see [1][8]) \underline{(r\cdot v)}^{\alpha_p}c_{spec}\underline{(n\cdot l)}, the normalization factor becomes \frac{\alpha_p+2}{2\pi}
For Blinn-Phong \underline{(n\cdot h)}^{\alpha_p}c_{spec}, the normalization factor is \frac{(\alpha_p+2)}{4\pi(2-2^\frac{-\alpha_p}{2})}
For the Blinn-Phong BRDF \underline{(n\cdot h)}^{\alpha_p}c_{spec}\underline{(n\cdot l)}, the normalization factor is \frac{(\alpha_p+2)(\alpha_p+4)}{8\pi(2^\frac{-\alpha_p}{2}+\alpha_p)}
Derivations of these constants can be found in [3] and [13]. Another good summary is provided in [27].

Note that for the Blinn-Phong BRDF, a cheap approximation is given in [1] as: \frac{\alpha_p+8}{8\pi}
There is a discussion about this constant in [4]; here is the interesting comment from Naty Hoffman:

About the approximation we chose, we were not trying to be strictly conservative (that is important for multi-bounce GI solutions to converge, but not for rasterization).
We were trying to choose a cheap approximation which is close to 1, and we thought it more important to be close for low specular powers.
Low specular powers have highlights that cover a lot of pixels and are unlikely to be saturating past 1.
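To see how close the cheap approximation is, here is a small program comparing it with the exact Blinn-Phong BRDF normalization factor listed above:

```cpp
#include <cmath>
#include <cstdio>

const double kPi = 3.14159265358979323846;

// Exact normalization factor of the Blinn-Phong BRDF (see [3][13]).
double BlinnPhongBrdfNorm(double a)
{
    return (a + 2.0) * (a + 4.0) / (8.0 * kPi * (std::pow(2.0, -a / 2.0) + a));
}

// Cheap approximation from [1].
double BlinnPhongBrdfNormApprox(double a)
{
    return (a + 8.0) / (8.0 * kPi);
}

int main()
{
    // Compare exact vs cheap normalization across specular powers.
    for (double a : { 1.0, 8.0, 32.0, 128.0, 512.0 })
        std::printf("alpha_p = %5.0f  exact = %8.4f  approx = %8.4f\n",
                    a, BlinnPhongBrdfNorm(a), BlinnPhongBrdfNormApprox(a));
    return 0;
}
```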

When working with microfacet BRDFs, normalize only the microfacet normal distribution function (NDF)

A microfacet distribution requires that the (signed) projected area of the microsurface be the same as the projected area of the macrosurface for any direction v [6]. In the special case v = n:
\int_\Omega D(m)(n\cdot m)\mathrm{d}\omega_m=1
The integral is over the entire sphere and the cosine factor is not clamped.

For the Phong distribution (or Blinn distribution; two names, same distribution) the NDF normalization constant is \frac{\alpha_p+2}{2\pi}
The derivation can be found in [7].
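As a quick sanity check (a standard derivation; the distribution is zero on the lower hemisphere, so the sphere integral reduces to the upper hemisphere):

\int_\Omega D(m)(n\cdot m)\,\mathrm{d}\omega_m=\int_0^{2\pi}\!\int_0^{\pi/2}\frac{\alpha_p+2}{2\pi}\cos^{\alpha_p+1}{\theta_m}\sin{\theta_m}\,\mathrm{d}\theta_m\,\mathrm{d}\phi_m=(\alpha_p+2)\frac{1}{\alpha_p+2}=1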

Direct Lighting

Our direct lighting model is composed of two parts: direct diffuse + direct specular.
Direct diffuse is the usual Lambertian BRDF: \frac{c_{diff}}{\pi}
Direct specular is the microfacet BRDF described by Naty Hoffman in [2]: F_{Schlick}(c_{spec},l_c,h)\frac{\alpha_p+2}{8\pi}\underline{(n\cdot h)}^{\alpha_p}
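Here is a minimal C++ sketch of this direct specular term (in practice this lives in a shader; the vector helpers are mine):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

const float kPi = 3.14159265f;

float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3  Scale(const Vec3& v, float s)     { return { v.x * s, v.y * s, v.z * s }; }

// Schlick's Fresnel approximation: F = F0 + (1 - F0) * (1 - l.h)^5.
Vec3 FresnelSchlick(const Vec3& specularColor, float lDotH)
{
    float f = std::pow(1.0f - lDotH, 5.0f);
    return { specularColor.x + (1.0f - specularColor.x) * f,
             specularColor.y + (1.0f - specularColor.y) * f,
             specularColor.z + (1.0f - specularColor.z) * f };
}

// Direct specular from [2]: F_Schlick * (alpha_p + 2) / (8 pi) * (n.h)^alpha_p,
// with dot products clamped to zero as in the underlined notation.
Vec3 DirectSpecular(const Vec3& specularColor, const Vec3& n, const Vec3& l,
                    const Vec3& h, float alphaP)
{
    float nDotH = std::max(Dot(n, h), 0.0f);
    float lDotH = std::max(Dot(l, h), 0.0f);
    float norm  = (alphaP + 2.0f) / (8.0f * kPi);
    return Scale(FresnelSchlick(specularColor, lDotH), norm * std::pow(nDotH, alphaP));
}
```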


Feeding a physically based shading model

Version : 1.0 – Living blog – First version was 17 August 2011

With permission of my company, Dontnod Entertainment: http://www.dont-nod.com/

Adopting a physically based shading model is just a first step. Physically based rendering (PBR) requires a physical lighting setup and good spatially varying BRDF inputs (a.k.a. textures) to get the best results.
Feeding the shading model with physically plausible data is in the hands of artists.

There are many texture creation tutorials available on the web. But too often, artists forget to link their work to the lighting model for which the textures are created. With a traditional lighting model, there is often an RGB diffuse texture, an RGB specular texture, a specular mask texture, a constant specular power and a normal map. For advanced materials you can add a specular power texture, Fresnel intensity texture, Fresnel scale texture, reflection mask texture…
A physically based shading model is simpler and provides a consistent look under different lighting conditions. However, artists must be trained, because the right values are not always trivial to find, and they must accept not having full control over the specular response.

Our physically based shading model requires four inputs:

  • Diffuse color RGB (also named diffuse albedo, diffuse reflectance or directional-hemispherical reflectance)
  • Specular color RGB (also named specular albedo or specular reflectance)
  • Normal
  • Gloss (monochrome)

The authoring time of these textures is not equal. I will expose the advice and reference material to provide to artists to help them author these textures. The better the artists' workflow is, the better the shading model will look. Normal and gloss are tightly coupled, so they will be treated together.

When talking about textures, we talk about sRGB/RGB color spaces, linear/gamma space… All these concepts are well described in [8] and will not be explained here.

Before digging into the subject in more detail, here is some advice for the texture workflow:

  • Artists must calibrate their screens. Or better, all of the team's screens should be calibrated the same way [6].
  • Make sure Colour Management is set to use sRGB in Photoshop [5].
  • Artists will trust their eyes, but eyes can be fooled. Adjusting grey-level textures can be tricky [7]. Provide reference material and work on a neutral grey background.
  • When working in sRGB color space, as is the case for most textures authored in Photoshop, remember that middle grey is not 128,128,128 but 187,187,187 (see the sketch after this list). See John Hable's post [22] for a comparison between 128 and 187 middle grey.
  • The game engine should implement debug view modes to display texture density, mipmap resolution, lighting only, diffuse only, specular only, gloss only, normal only… This is a valuable tool to track texture authoring problems.
  • Texture quality should be uniform across the scene. Even if almost all textures are amazing, a single poor texture on screen will attract the eye, like a dead pixel on a monitor, and the resulting visual impression will be bad. The same scene with uniform density and medium quality will look better.
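As promised above, here is a tiny program verifying the middle grey claim with the standard sRGB encoding function:

```cpp
#include <cmath>
#include <cstdio>

// Standard sRGB encoding of a linear value in [0,1].
float LinearToSrgb(float x)
{
    return (x <= 0.0031308f) ? 12.92f * x
                             : 1.055f * std::pow(x, 1.0f / 2.4f) - 0.055f;
}

int main()
{
    // Linear middle grey (0.5) encodes to about 187-188 in 8-bit sRGB,
    // not 128 -- hence 187,187,187 as the middle grey to use in Photoshop.
    std::printf("sRGB(0.5) * 255 = %.1f\n", LinearToSrgb(0.5f) * 255.0f);
    return 0;
}
```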

Dielectric and metallic materials

There are different types of substances in the real world. They can be classified into three main groups: insulators, semi-conductors and conductors.
In games we are only interested in two of them: insulators (dielectric materials) and conductors (metallic materials).
Artists should understand which category a material belongs to. This influences the diffuse and specular values to assign to the material.

I already talked about these two categories in the post Adopting a Physically based shading model.

Dielectric materials are the most common materials. Their optical properties rarely vary much over the visible spectrum: water, glass, skin, wood, hair, leather, plastic, stone, concrete, ruby, diamond…
Metallic materials have optical properties that vary over the visible spectrum: iron, aluminium, copper, gold, cobalt, nickel, silver…
See [8].
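For reference, here are commonly cited linear-space specular colors (F0) from the PBR literature; treat them as approximate starting points rather than exact measurements:

```cpp
struct Vec3 { float x, y, z; };

// Approximate linear-space specular colors (F0). Dielectrics are monochrome
// and low; metals are high and often tinted.
const Vec3 kSpecWater   = { 0.02f, 0.02f, 0.02f }; // dielectric
const Vec3 kSpecPlastic = { 0.04f, 0.04f, 0.04f }; // dielectric
const Vec3 kSpecIron    = { 0.56f, 0.57f, 0.58f }; // metal
const Vec3 kSpecCopper  = { 0.95f, 0.64f, 0.54f }; // metal
const Vec3 kSpecGold    = { 1.00f, 0.71f, 0.29f }; // metal
```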

Diffuse color

Diffuse textures require some time to author.

In the past, it was usual to bake everything into a "diffuse" texture to fake lighting effects like shadows, reflections, specular… With newer engines, all these effects are simulated and must not be baked.
The best definition of diffuse color in our engine is: how bright a surface is when lit by a 100% bright white light [4]. This definition is related to the definition of the light unit in the punctual light equation (see Adopting a physically based shading model).