Water drop 3b – Physically based wet surfaces

Version : 1.3 – Living blog – First version was 15 April 2013

This is the third post of a series about simulating rain and its effects on the world in games. As it is a pretty big post, I have split it in two parts, A and B:

Water drop 1 – Observe rainy world
Water drop 2a – Dynamic rain and its effects
Water drop 2b – Dynamic rain and its effects
Water drop 3a – Physically based wet surfaces
Water drop 3b – Physically based wet surfaces
Water drop 4a – Reflecting wet world
Water drop 4b – Reflecting wet world

Directly following part A, let's continue with the other rain effects:

Approximation for games

We began this post by studying the influence of water on the lighting of wet surfaces. We then saw two real-time implementations with analytic lights for both optical phenomena, inherited directly from the observations. They have some drawbacks: they lack an image based lighting implementation, and their cost for in-game usage is still a problem on XBOX360/PS3. In computer graphics there is another path available to simulate optical phenomena than changing the lighting model. We could simply create/edit/capture both wet and dry surface BRDF parameters (diffuse, specular, specular power…) with the same lighting model. This is not really a "simulation", as we already know the final wet state of the surface, but it lets us render wet surfaces and dynamically wet our game world. Simply interpolating between the dry and wet BRDF parameters, without changing the lighting model, does the trick. Nevertheless, this approach is limited to the subsurface scattering events inside the wet material and at its top, i.e. the diffuse and specular of the wet surface itself. It does not allow simulating the dual-layer lighting we studied in part A, when there is a thin layer of water on top of the wet surface. I will discuss this point later.

The benefit is the simplicity of the process, and we stay compatible with any kind of light: image based lighting and analytic. The drawback of requiring both a dry and a wet set of BRDF parameters per surface is the time to author and store them. The wet lighting model approach required more instructions, whereas this one requires only a few extra instructions (but still two texture fetches for the blending). However, in game development, doubling the memory/disc storage and the number of textures to author is prohibitive. Fortunately, we now know that we can express both wet and dry surfaces with the same lighting model, so maybe we can find a way to tweak the dry BRDF parameters to get an approximation of the wet ones, and thus avoid the inconvenience of storing and authoring new textures.
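To make this concrete, here is a minimal sketch of the interpolation, assuming a hypothetical parameter structure and a per-surface wetness factor in [0,1] (the names and the plain lerp are my assumptions for illustration, not any specific game's code):

```cpp
#include <algorithm>

struct BRDFParams
{
    float diffuse[3];    // linear-space diffuse albedo
    float specular[3];   // linear-space specular color
    float specularPower; // glossiness as a cosine power
};

// Blend dry and wet BRDF parameters with a wetness factor in [0,1].
// Both parameter sets use the same lighting model; only the inputs change.
// Lerping the specular power is a simplification for illustration.
BRDFParams BlendDryWet(const BRDFParams& dry, const BRDFParams& wet, float wetness)
{
    wetness = std::clamp(wetness, 0.0f, 1.0f);
    BRDFParams out;
    for (int i = 0; i < 3; ++i)
    {
        out.diffuse[i]  = dry.diffuse[i]  + (wet.diffuse[i]  - dry.diffuse[i])  * wetness;
        out.specular[i] = dry.specular[i] + (wet.specular[i] - dry.specular[i]) * wetness;
    }
    out.specularPower = dry.specularPower + (wet.specularPower - dry.specularPower) * wetness;
    return out;
}
```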

Almost all games I know chose to follow this BRDF parameter tweaking path: Stalker [12], Uncharted 2/3 (on the main character), Assassin's Creed 3 [13], Crysis 2/3, Metal Gear Solid V [15], etc. This is not surprising, as the method seems simple and fits very well with a deferred shading renderer: you can tweak the dry BRDF parameters in the G-buffer without complicating the lighting system. However, the wet BRDF parameters generated by these games are either coarse or wrong approximations (in a physical sense; I agree that visually the look can be OK). Most use the same eye-calibrated factors to attenuate the diffuse and boost the specular (old fashion) on every wet surface of the scene, regardless of material properties (roughness/smoothness, porosity, metallic/dielectric…). Assassin's Creed 3 even takes an additional wrong step by changing the strength of the factor based on the type of rain. Remember from part A that under any type of rain a porous surface can become water saturated; this depends only on the water precipitation volume and the exposure time. A bit differently, Tomb Raider: A survivor is born [14] uses "dark lights" to attenuate the light received by the diffuse part of the wetted surfaces; the specular part is modified as in other games. As they use lights to produce rain with a light pre-pass renderer, I think they intend to make up for the missing diffuse parameter in the small G-buffer with this method. Which, again, wrongly applies the same modification factors to all dry surfaces.

One of the purposes of the remainder of this section is to improve the generation of the wet BRDF parameters from the dry ones. I want to highlight the benefits of PBR for this parameter generation. I will begin by talking about the tweaking of the diffuse (or subsurface scattering) part and the specular parameters for porous dielectric materials, then for other kinds of materials. I will end with the effect of the thin layer of water which can accumulate on top of surfaces, and the case of thick accumulated water like puddles.

Porous dielectric material

Disclaimer: all the color values I will talk about are in linear RGB space, not sRGB space. All the graphs shown here are available in a Mathematica file at the end of this section.

We aim to find a factor which can be applied to the dry diffuse parameter to get the wet diffuse parameter, and an equivalent factor for the glossiness parameter. I will begin with an overview of previous work, all of it focused on rough dielectric materials.

For asphalt in a driving simulator, Nakamae et al. [2] use a factor between 0.1 and 0.3 to attenuate the diffuse albedo and a factor of 5 to 10 to boost the specular (not PBR). Like many others, they perform an empirical calibration of these coefficients without taking the properties of the surfaces into account.

[3] and [16] detail the two optical theories seen in this post (part A) which aim to explain the darkening of the albedo. We will call the model of [3] LD and the model of [16] TBM. I should warn that the albedo mentioned in these papers doesn't match the diffuse albedo definition we use in computer graphics (i.e. the diffuse color of a perfect Lambertian surface): it contains some specular reflection. Both papers propose a relationship between wet and dry albedo. They explain that the largest differences between wet and dry albedo occur for surfaces in the middle range of dry albedo. Dark surfaces tend to absorb more light on first contact with the surface, so the contribution of internal reflections is less important, decreasing the effect of wetting. Bright surfaces tend to reflect much more light than is absorbed by internal reflection, also decreasing the effect of wetting. In both cases the relationship between dry and wet albedo depends only on the index of refraction (IOR) of the surface, the IOR of the water and the dry albedo. The following graph shows the wet albedo as a function of the dry albedo, from the optical phenomena of [3], for a surface IOR of 1.5 (a common value for rough dielectric surfaces) and a water IOR of 1.33. The red line is the dry albedo, for comparison:

[Figure: wet albedo as a function of dry albedo (LD model, surface IOR 1.5, water IOR 1.33)]

I find it more readable to transform this graph into the ratio of wet to dry albedo as a function of dry albedo. This is the factor to apply to the dry albedo to retrieve the wet albedo:

[Figure: ratio of wet to dry albedo as a function of dry albedo]
A good comment about these graphs is made in [6]:

They further show that the wet albedo is a non-linear function of dry albedo, with low albedos reduced more by wetting than high albedos. A consequence of this result (not explicitly stated in their paper) is that wet surface color is more saturated than dry surface color, because the wetting further exaggerates the differences in albedo for different wavelengths.
Source [6]
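This saturation effect can be checked numerically: apply the wet/dry ratio per color channel and the channel ratios drift apart. In the sketch below, WetFactor is only a stand-in for the curve plotted above (the real curve comes from the LD model; this linear ramp is an assumption for illustration):

```cpp
#include <cstdio>

// Placeholder for the wet/dry albedo ratio curve plotted above.
// Low albedos are reduced more than high albedos; the real curve
// comes from the LD model, this linear ramp is only illustrative.
float WetFactor(float dryAlbedo)
{
    return 0.3f + 0.6f * dryAlbedo;
}

int main()
{
    // A brownish dry albedo in linear space (R > G > B).
    float dry[3] = { 0.50f, 0.30f, 0.15f };
    float wet[3];
    for (int i = 0; i < 3; ++i)
        wet[i] = dry[i] * WetFactor(dry[i]);

    // The blue channel is attenuated more than the red one, so the
    // R/B ratio grows: the wet color is darker and more saturated.
    printf("dry: %.3f %.3f %.3f (R/B = %.2f)\n", dry[0], dry[1], dry[2], dry[0] / dry[2]);
    printf("wet: %.3f %.3f %.3f (R/B = %.2f)\n", wet[0], wet[1], wet[2], wet[0] / wet[2]);
    return 0;
}
```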



Water drop 3a – Physically based wet surfaces

Version : 1.3 – Living blog – First version was 19 March 2013

This is the third post of a series about simulating rain and its effects on the world in games. As it is a pretty big post, I have split it in two parts, A and B:

Water drop 1 – Observe rainy world
Water drop 2a – Dynamic rain and its effects
Water drop 2b – Dynamic rain and its effects
Water drop 3a – Physically based wet surfaces
Water drop 3b – Physically based wet surfaces
Water drop 4a – Reflecting wet world
Water drop 4b – Reflecting wet world

Physically based rendering (PBR) is now common in games (see Adopting a physically based shading model to know more about it). When I introduced PBR in my company a few years ago, we were actually working on rain. At that time, we were wondering how our new lighting model should behave to simulate wet surfaces. With a classic game lighting model, the way everybody chose was to darken the diffuse term and boost the specular term (here I refer to the classic use of specular as an RGB color multiplied with the lighting), the wet diffuse/specular factors being eye calibrated. I wanted to go further than simply adapting this behavior to PBR, and this required a better understanding of the interaction between water and materials. This post is a mix of the results of old and recent research. I chose to provide up-to-date information, including experimental (incomplete) work, because the subject is complex and talking about it is useful. This might be of interest for future research. The post describes how water influences materials and provides ways to simulate wet surfaces with a physically based lighting model. I assume here that the reader knows the basics of reflected/refracted light, Snell's law and the index of refraction (IOR).

Wet surfaces – Theory

People are able to distinguish between a wet and a dry surface by sight. The observation post shows many pictures to illustrate this point. The main visual cues people retain are that wet surfaces look darker and more specular, and that they exhibit subtle changes in saturation and hue when wet:

[Figure: side-by-side comparison of a dry and a wet surface]

This behavior is commonly observed for natural or man-made rough/porous materials (brick, clay, plaster, concrete, asphalt, wood, rust, cardboard, stone…), powdered materials (sand, dirt, soil…), absorbent materials (paper, cotton, fabrics…) and organic materials (fur, hair…). However, this is not always the case: smooth materials (glass, marble, plastic, metal, painted surfaces, polished surfaces…) don't change. For example, there is a big difference between a dry rough stone and a wet rough stone, but a very small difference between a highly polished wet stone and a highly polished dry stone.
In the following discussion, wet surfaces refer mostly to rough, diffuse materials quenched in water and having a very thin water layer on their surface.

Why are rough surfaces darker when wet? Because they reflect less light.
There are two optical phenomena involved in this decrease of light reflection; they are detailed in [3] and [4]. A rough material has small air gaps or pores which fill with water when the wetting process begins. When the pores are filled, we reach "water saturation" and water propagates over the material as a thin layer.

Let's first see the impact of the thin layer of water. The rough surface produces diffuse reflection (a Lambertian surface). Some of the light reflected by the surface will be reflected back onto it at the water-air interface, due to total internal reflection. Total internal reflection occurs when light moves from a denser medium into a less dense one (i.e., n1 > n2), above an incidence angle known as the critical angle (see [1] for more detail). For the water-air interface, this is \theta_c=\arcsin(\frac{n_{air}}{n_{water}})=\arcsin(\frac{1.0}{1.33})=48.75^{\circ}

[Figure: critical angle at the water-air interface. Source: [1]]

The light reflected back from the interface is then subject to another round of absorption by the surface before it is reflected again. This back and forth of the light results in a darkening of the surface.

[Figure: light bouncing between the rough surface and the water-air interface. Source: [2]]

Now take a look at the water filling the pores inside the rough material. There is a concentration of water beneath the surface. The water which replaces the air has a higher index of refraction (1.33 against 1.0), which is closer to the index of refraction of most rough dielectric materials (1.5). As a consequence, following Snell's law, light entering the material is less refracted due to the reduced index of refraction difference: the scattering of light under the surface is more directional, in the forward direction. The increased number of scattering events before the light leaves the surface increases the amount of light absorbed, and thus reduces the light reflection.

[Figure: subsurface light interaction in a material whose pores are filled with water. Source: [5]]
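A quick numerical check of this argument with Snell's law (a minimal sketch; the IORs 1.0, 1.33 and 1.5 are the values quoted above): the refraction angle inside the material stays much closer to the incident angle when the pore is filled with water, i.e. light is bent less and keeps more of its forward direction:

```cpp
#include <cmath>
#include <cstdio>

const double PI = 3.14159265358979;

// Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
// Returns the refraction angle (radians) going from medium n1 into medium n2.
double Refract(double n1, double n2, double theta1)
{
    return std::asin(n1 / n2 * std::sin(theta1));
}

int main()
{
    double theta1 = 45.0 * PI / 180.0; // incident angle

    // Air (1.0) -> dielectric (1.5) versus water (1.33) -> dielectric (1.5).
    double airToMat   = Refract(1.0,  1.5, theta1) * 180.0 / PI;
    double waterToMat = Refract(1.33, 1.5, theta1) * 180.0 / PI;

    printf("air   -> material: %.1f deg\n", airToMat);   // ~28.1 deg: strongly bent
    printf("water -> material: %.1f deg\n", waterToMat); // ~38.8 deg: closer to 45 deg
    return 0;
}
```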

The darkening of the material is also accompanied by a subtle change in saturation and hue. In [11], the spectral reflectance (i.e. the "RGB" representation of real world color; the visible range of the spectrum goes from around 400 nm, blue, to 780 nm, red) of a dry and a wet stone was measured to highlight these characteristics. The analysis shows that there is a significant reduction in reflectance across the whole visible spectrum when the surface gets wet, which confirms the darkening of the surface. It also shows that the surface color becomes more saturated because of this reduction.

[Figure: measured spectral reflectance of a dry and a wet stone]

Source [11].


AMD Cubemapgen for physically based rendering

Version : 1.67 – Living blog – First version was 4 September 2011

AMD Cubemapgen is a useful tool which allows cubemap filtering and mipchain generation. Sadly, AMD decided to stop supporting it. However, it has been made open source [1] and has been uploaded to a Google Code repository [2] to be improved by the community. With some modifications, this tool is really useful for physically based rendering, because it allows generating an irradiance environment map (IEM) or a prefiltered mipmapped radiance environment map (PMREM). A PMREM is an environment map (in our case a cubemap) where each mipmap has been filtered by a cosine power lobe of decreasing cosine power value. This post describes the improvements I made to Cubemapgen, and a few others.

The latest version of Modified Cubemapgen (which includes the modifications described in this post) is available in the download section of the Google Code repository. Direct link: ModifiedCubeMapGen-1_66 (requires the VS2008 runtime and DX9).

This post will first describe the new features added to Cubemapgen; then, for interested (and advanced) readers, I will talk about the theory behind the modifications and go into some implementation details.

The modified Cubemapgen

The current improvements take the form of new options accessible in the interface:

[Screenshot: the new options in the Cubemapgen interface]


Use Multithread: Allows using all the hardware threads available on the computer. If unchecked, uses the default behavior of Cubemapgen. Note that the new features are unsupported with the default behavior.
Irradiance Cubemap: Allows a fast computation of an irradiance cubemap. When checked, no other filter or option is taken into account. An irradiance cubemap can be obtained without this option by setting a cosine filter with a base filter angle of 180, which is a really slow process. Only the base cubemap is affected by this option; the following mipmaps use a cosine filter with some default values, but these mipmaps should not be used.
Cosine power filter: Allows specifying a cosine power lobe filter as the current filter, i.e. filtering the cubemap with a cosine power lobe. You must select this filter to generate a PMREM.
MipmapChain: Only available with the Cosine power filter. Allows selecting the mode used to generate the specular power values for each of the PMREM's mipmaps.
Power drop on mip, Cosine power edit box: Only available with the Drop mode of MipmapChain. Used to generate the specular power values for each of the PMREM's mipmaps. The first mipmap uses the cosine power edit box value as the cosine power of the cosine power lobe filter. The cosine power is then scaled by power drop on mip to process the next mipmap, and this new cosine power is scaled again for the following mipmap, until all mipmaps are generated. For example, setting 2048 as cosine power and 0.25 as power drop on mip, you will generate a PMREM whose mipmaps are respectively filtered by cosine power lobes of 2048, 512, 128, 32, 8, 2… (see the sketch after this list).
Num Mipmap, Gloss scale, Gloss bias: Only available with the Mipmap mode of MipmapChain. Used to generate the specular power values for each of the PMREM's mipmaps. The values of Num mipmap, Gloss scale and Gloss bias are used to generate a specular power value for each mipmap.
Lighting model: This option should be used only with the cosine power filter. The choice of lighting model depends on your game's lighting equation. The goal is for the filtering to better match your in-game lighting.
Exclude Base: With the Cosine power filter, allows not processing the base mipmap of the PMREM.
Warp edge fixup: New edge fixup method which does not use Width, based on NVTT from Ignacio Castaño.
Bent edge fixup: New edge fixup method which does not use Width, based on the TriAce CEDEC 2011 presentation.
Stretch edge fixup, FixSeams: New edge fixup method which does not use Width, based on NVTT from Ignacio Castaño. FixSeams allows displaying a PMREM generated with the Stretch edge fixup method without seams.
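For clarity, here is a minimal sketch of how the Drop mode builds the per-mipmap cosine powers described above (my own illustrative code, not the tool's actual source):

```cpp
#include <cstdio>
#include <vector>

// Drop mode: the base mip uses "cosinePower", then each following mip
// multiplies the previous power by "powerDropOnMip".
std::vector<float> BuildCosinePowerChain(float cosinePower, float powerDropOnMip, int numMips)
{
    std::vector<float> powers;
    float power = cosinePower;
    for (int mip = 0; mip < numMips; ++mip)
    {
        powers.push_back(power);
        power *= powerDropOnMip;
    }
    return powers;
}

int main()
{
    // The example from the option description: 2048 and 0.25
    // give 2048, 512, 128, 32, 8, 2.
    for (float p : BuildCosinePowerChain(2048.0f, 0.25f, 6))
        printf("%g ", p);
    printf("\n");
    return 0;
}
```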

All the modifications are available from the command line (print the usage details with “ModifiedCubemapgen.exe -help”).

Irradiance cubemap

Here is a comparison between an irradiance map generated with a cosine filter of 180 and one generated with the irradiance cubemap option (which uses spherical harmonics (SH) for fast processing):
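For reference, the classic way to get fast irradiance from SH is the 9-coefficient quadratic form of Ramamoorthi and Hanrahan; the sketch below evaluates irradiance for a (normalized) normal direction from order-2 SH coefficients. This is the published formula, not necessarily the exact code path of Modified Cubemapgen:

```cpp
// Irradiance from 9 spherical harmonic coefficients, following
// Ramamoorthi & Hanrahan, "An Efficient Representation for
// Irradiance Environment Maps" (SIGGRAPH 2001).
// L[0..8] = {L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22} for one
// color channel; (x, y, z) is the normalized surface normal.
float IrradianceSH9(const float L[9], float x, float y, float z)
{
    const float c1 = 0.429043f, c2 = 0.511664f;
    const float c3 = 0.743125f, c4 = 0.886227f, c5 = 0.247708f;

    return c1 * L[8] * (x * x - y * y)
         + c3 * L[6] * z * z
         + c4 * L[0]
         - c5 * L[6]
         + 2.0f * c1 * (L[4] * x * y + L[7] * x * z + L[5] * y * z)
         + 2.0f * c2 * (L[3] * x + L[1] * y + L[2] * z);
}
```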

Adopting a physically based shading model

Version : 1.31 – Living blog – First version was 2 August 2011

With permission of my company: Dontnod Entertainment – http://www.dont-nod.com/

This last year has seen a growing interest in physically based rendering. Physically based shading simplifies parameter control for artists, allows a more consistent look under different lighting conditions and gives a more realistic look. Like many game developers, I decided to introduce a physically based shading model to my company. I started this blog to share what we learned. The blog post is divided in two parts.

I will first present the physically based shading model we chose and what we added to our engine to support it: this is the subject of this post. I will then describe the process of making good data to feed this lighting model: Feeding a physically based shading model. I hope you will enjoy it and will share your own way of working with a physically based shading model. Feedback is welcome!

The notation of this post can be found in Naty Hoffman's SIGGRAPH 2010 paper, Physically-Based Shading Models in Film and Game Production [2].

Working with a physically based shading model implies some changes in a game engine to fully support it. I will expose here the physically based rendering (PBR) path we chose for our game engine.

When talking about PBR, we talk about BRDFs, Fresnel, energy conservation, microfacet theory, the punctual light source equation… All these concepts are very well described in [2] and will not be re-explained here.

Our main lighting model is composed of two parts: ambient lighting and direct lighting. But before digging into these subjects, I will talk about some magic numbers.

Normalization factor

I would like to clarify the constants we find in various lighting models. The energy conservation constraint (the outgoing energy cannot be greater than the incoming energy) requires the BRDF to be normalized. There are two different approaches to normalize a BRDF.

Normalize the entire BRDF

Normalizing a BRDF means that the directional-hemispherical reflectance (the reflectance of a surface under direct illumination) must always be between 0 and 1: R(l)=\int_\Omega f(l,v) \cos{\theta_o} \mathrm{d}\omega_o\leq 1 . This is an integral over the hemisphere. In games, R(l) corresponds to the diffuse color c_{diff} .

For a Lambertian BRDF, f(l,v) is constant. It means that R(l)=\pi f(l,v) , and we can write f(l,v)=\frac{R(l)}{\pi}
As a result, the normalization factor of a Lambertian BRDF is \frac{1}{\pi}
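The \pi comes from the hemispherical integral of the cosine factor, written in spherical coordinates:

R(l)=\int_\Omega f(l,v)\cos{\theta_o}\mathrm{d}\omega_o = f(l,v)\int_0^{2\pi}\int_0^{\pi/2}\cos{\theta}\sin{\theta}\,\mathrm{d}\theta\,\mathrm{d}\phi = f(l,v)\cdot 2\pi\cdot\frac{1}{2} = \pi f(l,v)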

For original Phong (the Phong model most game programmers use) \underline{(r\cdot v)}^{\alpha_p}c_{spec} , the normalization factor is \frac{\alpha_p+1}{2\pi}
For the Phong BRDF (just multiply Phong by \cos{\theta_i} , see [1][8]) \underline{(r\cdot v)}^{\alpha_p}c_{spec}\underline{(n\cdot l)} , the normalization factor becomes \frac{\alpha_p+2}{2\pi}
For Blinn-Phong \underline{(n\cdot h)}^{\alpha_p}c_{spec} , the normalization factor is \frac{(\alpha_p+2)}{4\pi(2-2^\frac{-\alpha_p}{2})}
For the Blinn-Phong BRDF \underline{(n\cdot h)}^{\alpha_p}c_{spec}\underline{(n\cdot l)} , the normalization factor is \frac{(\alpha_p+2)(\alpha_p+4)}{8\pi(2^\frac{-\alpha_p}{2}+\alpha_p)}
The derivation of these constants can be found in [3] and [13]. Another good summary is provided in [27].

Note that for the Blinn-Phong BRDF, a cheap approximation is given in [1] as \frac{\alpha_p+8}{8\pi}
There is a discussion about this constant in [4], and here is the interesting comment from Naty Hoffmann:

About the approximation we chose, we were not trying to be strictly conservative (that is important for multi-bounce GI solutions to converge, but not for rasterization).
We were trying to choose a cheap approximation which is close to 1, and we thought it more important to be close for low specular powers.
Low specular powers have highlights that cover a lot of pixels and are unlikely to be saturating past 1.
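To see how close this cheap approximation is, here is a small sketch comparing it with the exact Blinn-Phong BRDF normalization factor given above (illustrative code of the formulas quoted in this post):

```cpp
#include <cmath>
#include <cstdio>

const double PI = 3.14159265358979;

// Exact normalization factor of the Blinn-Phong BRDF (n.h)^a * (n.l).
double ExactNorm(double a)
{
    return (a + 2.0) * (a + 4.0) / (8.0 * PI * (std::pow(2.0, -a / 2.0) + a));
}

// Cheap approximation from [1].
double ApproxNorm(double a)
{
    return (a + 8.0) / (8.0 * PI);
}

int main()
{
    // Compare the two constants for a few specular powers.
    for (double a : { 1.0, 8.0, 64.0, 512.0 })
        printf("a = %6.0f  exact = %8.4f  approx = %8.4f\n", a, ExactNorm(a), ApproxNorm(a));
    return 0;
}
```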

When working with microfacet BRDFs, normalize only the microfacet normal distribution function (NDF)

A microfacet distribution requires that the (signed) projected area of the microsurface be the same as the projected area of the macrosurface for any direction v [6]. In the special case v = n:
\int_{\Theta} D(m)(n\cdot m)\mathrm{d}\omega_m=1
The integral is over the sphere, and the cosine factor is not clamped.

For the Phong distribution (or Blinn distribution; two names, same distribution), the NDF normalization constant is \frac{\alpha_p+2}{2\pi}
The derivation can be found in [7].

Direct Lighting

Our direct lighting model is composed of two parts: direct diffuse + direct specular.
Direct diffuse is the usual Lambertian BRDF: \frac{c_{diff}}{\pi}
Direct specular is the microfacet BRDF described by Naty Hoffman in [2]: F_{schlick}(c_{spec},l_c,h)\frac{\alpha_p+2}{8\pi}\underline{(n\cdot h)}^{\alpha_p}
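Assembled with the punctual light equation of [2] (outgoing radiance = \pi times the BRDF times the light color times \underline{(n\cdot l)} ), the evaluation for one light and one color channel looks roughly like this sketch (the vector type, the Saturate helper and the caller-provided half vector are assumptions for illustration):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float Saturate(float v)   { return std::min(std::max(v, 0.0f), 1.0f); }

// Schlick's Fresnel approximation, driven by the light/half-vector angle.
static float FresnelSchlick(float cSpec, float lDotH)
{
    return cSpec + (1.0f - cSpec) * std::pow(1.0f - lDotH, 5.0f);
}

// Direct lighting for one punctual light: Lambert diffuse plus the
// normalized Blinn-Phong specular above, scaled by pi * (n.l) * lightColor
// following the punctual light equation of [2]. All vectors are assumed
// normalized; h is the half vector between l and the view vector.
float DirectLighting(Vec3 n, Vec3 l, Vec3 h,
                     float cDiff, float cSpec, float specPower, float lightColor)
{
    const float PI = 3.14159265f;
    float nDotL = Saturate(Dot(n, l));
    float nDotH = Saturate(Dot(n, h));
    float lDotH = Saturate(Dot(l, h));

    float diffuse  = cDiff / PI;
    float specular = FresnelSchlick(cSpec, lDotH)
                   * (specPower + 2.0f) / (8.0f * PI)
                   * std::pow(nDotH, specPower);

    return PI * (diffuse + specular) * nDotL * lightColor;
}
```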


Feeding a physically based shading model

Version : 1.0 – Living blog – First version was 17 August 2011

With permission of my company: Dontnod Entertainment – http://www.dont-nod.com/

Adopting a physically based shading model is just a first step. Physically based rendering (PBR) requires a physical lighting setup and good spatially varying BRDF inputs (a.k.a. textures) to get the best results.
Feeding the shading model with physically plausible data is in the hands of artists.

There are many texture creation tutorials available on the web. But too often, artists forget to link their work with the lighting model for which the textures are created. With a traditional lighting model, there is often an RGB diffuse texture, an RGB specular texture, a specular mask texture, a constant specular power and a normal map. For advanced materials, you can add a specular power texture, a Fresnel intensity texture, a Fresnel scale texture, a reflection mask texture…
A physically based shading model is simpler and will provide a consistent look under different lighting conditions. However, artists must be trained, because the right values are not always trivial to find, and they must accept not fully controlling the specular response.

Our physically based shading model requires four inputs:

  • Diffuse color RGB (named diffuse albedo or diffuse reflectance or directional-hemispherical reflectance)
  • Specular color RGB (named specular albedo or specular reflectance)
  • Normal map
  • Gloss (monochrome)

The authoring times of these textures are not equal. I will expose the advice and reference material to provide to artists to help them author these textures. The better the artists' workflow, the better the shading model will look. Normal and gloss are tightly coupled, so they will be treated together.

When talking about textures, we talk about sRGB/RGB color spaces, linear/gamma space… All these concepts are well described in [8] and will not be explained here.

Before digging into the subject in more detail, here are some pieces of advice for the texture workflow:

  • Artists must calibrate their screens. Or better, all your team's screens should be calibrated the same way [6].
  •  Make sure Colour Management is set to use sRGB in Photoshop [5].
  • Artists will trust their eyes, but eyes can be fooled. Adjusting grey-level textures can be annoying [7]. Provide reference material and work on a neutral grey background.
  • When working in sRGB color space, as is the case for most textures authored with Photoshop, remember that middle grey is not 128,128,128 but 187,187,187. See John Hable's post [22] for a comparison between 128 and 187 middle grey (a quick numerical check follows this list).
  • The game engine should implement debug view modes to display texture density, mipmap resolution, lighting only, diffuse only, specular only, gloss only, normal only… This is a valuable tool to track texture authoring problems.
  • Textures should be uniform in the scene. Even if all textures are amazing, a single poor texture on screen will attract the eye, like a dead pixel. The resulting visual feeling will be bad. The same scene with uniform density and medium quality will look better.
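The 187 value comes straight from encoding a 50% linear reflectance: here is a quick check with the standard sRGB transfer function (nothing engine-specific):

```cpp
#include <cmath>
#include <cstdio>

// Standard sRGB encoding of a linear value in [0,1].
float LinearToSRGB(float linear)
{
    if (linear <= 0.0031308f)
        return 12.92f * linear;
    return 1.055f * std::pow(linear, 1.0f / 2.4f) - 0.055f;
}

int main()
{
    // A 50% linear grey encodes to ~188 with the exact sRGB curve
    // (a pure gamma-2.2 approximation gives ~186), i.e. close to the
    // 187 quoted above, and far from 128.
    printf("sRGB:      %d\n", (int)std::lround(LinearToSRGB(0.5f) * 255.0f));
    printf("gamma 2.2: %d\n", (int)std::lround(std::pow(0.5f, 1.0f / 2.2f) * 255.0f));
    return 0;
}
```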

Dielectric and metallic materials

There are different types of substances in the real world. They can be classified in three main groups: insulators, semi-conductors and conductors.
In games we are only interested in two of them: insulators (dielectric materials) and conductors (metallic materials).
Artists should understand which category a material belongs to. This will influence the diffuse and specular values assigned to the material.

I already talked about these two categories in the post Adopting a Physically based shading model.

Dielectric materials are the most common materials. Their optical properties rarely vary much over the visible spectrum: water, glass, skin, wood, hair, leather, plastic, stone, concrete, ruby, diamond…
Metals: their optical properties vary over the visible spectrum: iron, aluminium, copper, gold, cobalt, nickel, silver…
See [8].

Diffuse color

Diffuse textures require some time to author.

In the past, it was usual to bake everything into a “diffuse” texture to fake lighting effects like shadows, reflections, specular… With newer engines, all these effects are simulated and must not be baked.
The best definition of diffuse color in our engine is: how bright a surface is when lit by a 100% bright white light [4]. This definition is related to the definition of the light unit in the punctual light equation (see Adopting a physically based shading model).