Water drop 3a – Physically based wet surfaces

Version : 1.3 – Living blog – First version was 19 March 2013

This is the third post of a series about simulating rain and its effects on the world in games. As it is a pretty big post, I split it into two parts, A and B:

Water drop 1 – Observe rainy world
Water drop 2a – Dynamic rain and its effects
Water drop 2b – Dynamic rain and its effects
Water drop 3a – Physically based wet surfaces
Water drop 3b – Physically based wet surfaces
Water drop 4a – Reflecting wet world
Water drop 4b – Reflecting wet world

Physically based rendering (PBR) is now common in games (see Adopting a physically based shading model to know more about it). When I introduced PBR at my company a few years ago, we were actually working on rain, and we were wondering how our new lighting model should behave to simulate wet surfaces. With a classic game lighting model, the approach everybody chose was to darken the diffuse term and boost the specular term (here I refer to the classic use of specular as an RGB color multiplied with the lighting), the wet diffuse/specular factors being eye-calibrated. I wanted to go further than simply adapting this behavior to PBR, and this required a better understanding of the interaction between water and materials. This post mixes the results of old and recent research. I chose to provide up-to-date information, including experimental (incomplete) work, because the subject is complex and talking about it is useful; it might be of interest for future research. The post describes how water influences materials and provides ways to simulate wet surfaces with a physically based lighting model. I assume the reader knows the basics of reflected/refracted light, Snell’s law and index of refraction (IOR).
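For reference, this classic ad hoc adjustment fits in a few lines of shader code. A minimal sketch (the factors and names are illustrative eye-calibrated values, not taken from any specific title):

```hlsl
// Classic ad hoc wetness: darken the diffuse color, boost the specular color.
// wetness: 0 = dry, 1 = fully wet. Factors are eye-calibrated per game.
void ApplyClassicWetness(inout float3 diffuseColor, inout float3 specularColor,
                         float wetness)
{
    diffuseColor  *= lerp(1.0, 0.4, wetness); // darken the diffuse term
    specularColor *= lerp(1.0, 2.5, wetness); // boost the specular term
}
```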

Wet surfaces – Theory

People are able to distinguish between a wet and a dry surface by sight. The observation post shows many pictures illustrating this point. The main visual cues people retain are that wet surfaces look darker, more specular and exhibit subtle changes in saturation and hue:

[Image: side-by-side comparison of a wet and a dry surface]

This behavior is commonly observed for natural or man-made rough or porous materials (brick, clay, plaster, concrete, asphalt, wood, rust, cardboard, stone…), powdered materials (sand, dirt, soil…), absorbent materials (paper, cotton, fabric…) and organic materials (fur, hair…). However this is not always the case: smooth materials (glass, marble, plastic, metal, painted surfaces, polished surfaces…) barely change. For example, there is a big difference between a dry rough stone and a wet rough stone, but a very small difference between a highly polished wet stone and a highly polished dry stone.
In the following discussion, “wet surfaces” refers mostly to rough, diffuse materials soaked in water and covered by a very thin layer of water.

Why do rough surfaces look darker when wet? Because they reflect less light.
Two optical phenomena are involved in this decrease of light reflection; they are detailed in [3] and [4]. A rough material has small air gaps or pores which fill with water when the wetting process begins. Once the pores are filled, “water saturation” is reached and water propagates over the material as a thin layer.

Let’s first see the impact of the thin layer of water. The rough surface reflects light diffusely (Lambertian surface). Some of the light reflected by the surface will be reflected back onto it by the water-air interface due to total internal reflection. Total internal reflection occurs when light moves from a denser medium into a less dense one (i.e. n1 > n2) and hits the interface at an incidence angle above the so-called critical angle (see [1] for more details). For the water-air interface, this is \theta_c=\arcsin(\frac{n_{air}}{n_{water}})=\arcsin(\frac{1.0}{1.33})\approx 48.75^{\circ}

[Image: critical angle at the water-air interface – source [1]]

This light reflected back onto the surface is then subject to another round of absorption by the surface before being reflected again. This back and forth of the light results in a darkening of the surface.
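To get a feel for the magnitude, here is a deliberately simplified estimate (my own back-of-the-envelope bookkeeping, ignoring transmission losses at the air-water entry). Let a be the diffuse albedo of the surface and k the fraction of the reflected light that the water-air interface sends back down (for a Lambertian distribution and the critical angle above, roughly half). Summing the successive bounces gives a geometric series:

a(1-k)\sum_{n=0}^{\infty}(ak)^n=\frac{a(1-k)}{1-ak}

With a=0.5 and k=0.5, the wet surface returns \frac{0.25}{0.75}\approx 0.33 of the incoming light instead of 0.5: noticeably darker.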

[Image: light bouncing between the rough surface and the water-air interface – source [2]]

Now let’s take a look at the water filling the pores inside the rough material: there is a concentration of water beneath the surface. The water which replaces the air has a higher index of refraction (1.33 against 1.0), closer to the index of refraction of most rough dielectric materials (around 1.5). Consequently, following Snell’s law, light entering the material is refracted less because of the reduced index of refraction difference: the scattering of light under the surface becomes more directional, in the forward direction. The increased number of scattering events before the light leaves the surface increases the amount of light absorbed and thus reduces the light reflected.
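As a quick worked example with Snell’s law n_1\sin\theta_1=n_2\sin\theta_2, take light hitting a dielectric of IOR 1.5 at a 45° incidence. Coming from air, \sin\theta_t=\frac{1.0}{1.5}\sin 45^{\circ} gives \theta_t\approx 28.1^{\circ}; coming from water, \sin\theta_t=\frac{1.33}{1.5}\sin 45^{\circ} gives \theta_t\approx 38.8^{\circ}. The ray entering from water is bent far less and keeps travelling mostly forward into the material.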

[Image: subsurface light scattering with water-filled pores – source [5]]

The darkening of the material is also accompanied by a subtle change in saturation and hue. In [11] the spectral reflectance (i.e. the “RGB” representation of real-world color; the visible range of the spectrum goes from around 400 nm, blue, to 780 nm, red) of a dry and a wet stone has been measured to highlight these characteristics. The analysis shows a significant reduction in reflectance across the whole visible spectrum when the surface gets wet, which confirms the darkening of the surface. It also shows that the surface color becomes more saturated because of this reduction.

[Image: measured spectral reflectance of a dry and a wet stone – source [11]]


Water drop 2a – Dynamic rain and its effects

Version : 1.3 – Living blog – First version was 27 December 2012

This is the second post of a series about simulating rain and its effects on the world in games. As it is a pretty big post, I split it into two parts, A and B:

Water drop 1 – Observe rainy world
Water drop 2a – Dynamic rain and its effects
Water drop 2b – Dynamic rain and its effects
Water drop 3a – Physically based wet surfaces
Water drop 3b – Physically based wet surfaces
Water drop 4a – Reflecting wet world
Water drop 4b – Reflecting wet world

In the first water drop we saw several rain effects. To immerse the player in a rainy world, we need to support a lot of them. The major reference for rainy city environment rendering is the “Toy Shop” demo from ATI, which has been widely covered by Natalya Tatarchuk at many conferences [2][3][4]. However, even though the demo was available in late 2005, not all of the techniques it describes can easily fit in a playable PS3/XBOX360 game environment. In this second water drop, I want to share the work my co-workers at Dontnod and I have done around these rain effects for “Remember Me“. This post is the result of our research. We will discuss not only what we implemented but also the theory and other approaches. For this post, I invited my co-workers Antoine Zanuttini, Laury Michel and Orson Favrel to write some words, so this is a collaborative post :). We focus on rainy urban environments and describe the different rain effects one by one. Our engine (Unreal Engine 3) is a forward renderer, but the ideas here could also be applied in a deferred renderer.

Rain Effects

Rain splashes / Falling drop splashes

In the real world, when a falling drop hits a surface, a splash is generated. Rain, or water flowing off high places like rooftops or trees, can generate falling drops; the behavior is the same in both cases, and we will focus on raindrops first. Rain splashes can be simulated easily in a game by spawning a water splash particle when the stretched particle representing the raindrop collides with the scene. However, tracking every particle colliding with the scene can be costly, and with so many raindrops creating water splashes it is hard to tell which raindrop caused a specific splash. Based on this observation, and for performance reasons, it is simpler to have two independent systems to manage raindrops and rain splashes. Most games cast a bunch of random rays from the top of the world downward against a simplified geometric representation of the scene, then generate water splash particles at the collision points [1][2]. As an optimization, the water splashes are only generated close to the screen. Another simple solution, when you have complex geometry that you can’t easily approximate, is to manually place an emitter of water splash particles following the geometry boundaries. The pattern will not be as random as the other water splashes, but the effect will be there.

We tried another approach. Instead of colliding rays with the world, we simply render a depth map of the view from the top, in the rain direction. The depth map gives all the information we need to emit a water splash particle at a random position in the world while respecting the scene geometry. The steps of our approach are:
- Render a depth map
- Transfer the depth map from GPU to CPU memory
- Use the depth map to generate random positions following the world geometry (see the sketch after the list)
- Emit the water splashes at the generated positions
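Generating a position from the depth map boils down to picking a random texel, reading its stored depth and unprojecting it with the inverse of the matrix used to render the map. A minimal sketch of this reconstruction (in practice this math runs on the CPU after the readback; it is written HLSL-style here for clarity and the names are illustrative):

```hlsl
// Unproject a depth map texel back to a world-space splash position.
// uv: texel center in [0,1]; depthNDC: stored depth in [0,1];
// invViewProj: inverse of the view-projection used to render the depth map.
float3 WorldPosFromDepthMap(float2 uv, float depthNDC, float4x4 invViewProj)
{
    // [0,1] texel coordinates to [-1,1] normalized device coordinates
    float4 ndc = float4(uv.x * 2.0 - 1.0, 1.0 - uv.y * 2.0, depthNDC, 1.0);
    float4 world = mul(ndc, invViewProj);
    return world.xyz / world.w; // homogeneous divide (w stays 1 when orthogonal)
}
```

A random uv is drawn on the CPU, the matching depth is fetched from the transferred buffer, and the resulting position is handed to the splash particle emitter.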

To render the depth map, we attach a dummy location in front of the current camera but a little higher, then render the world geometry from this point of view. All standard shadow map optimizations apply here (any culling method, double-speed Z, rendering masked geometry after opaque, using a position-only vertex stream, not rendering too small objects, forcing lower mesh LODs, etc.). As not all parts of the world need to generate rain splashes, we added a mesh tagging method for our artists to specify whether a mesh should be rendered in the depth map. We also allow a mesh to be rendered only in the depth map and not in the normal scene. This is useful when you have translucent objects like glass, which should stop rain but can’t be rendered in an opaque depth map, or to approximate a group of meshes by a single, less complex mesh. To ease debugging, we added a special visualization mode to our editor to show only the objects relevant to rain splashes.

[Image: editor visualization of the meshes used for rain splash collision]
The precision of the world positions generated from this depth map depends on its resolution and on the size of its frustum. With a 256×256 depth map and a 20m x 20m orthogonal frustum, we get world cells of about 7.8cm x 7.8cm (2000cm / 256) at the height taken from the depth map. The rasterizer rules which height gets stored in the depth map, so if a cell of 7.8cm x 7.8cm covers geometry with large height disparities, chances are the water splash will spawn at a wrong height. This is a tradeoff between precision on one side and memory and performance on the other.
To render the depth map we can use either an orthogonal or a perspective projection matrix. We haven’t found any use for the perspective matrix, but in the following I will suppose that we can have both. Moreover, on consoles or with DX10 and above we can access the depth buffer directly, so we use this functionality. On PC DX9 we store the depth value in the alpha channel of a color buffer. For consistency with the other platforms, the depth value is stored in normalized device coordinates. In the case of a perspective projection, a reversed floating-point depth value is used to increase precision.
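In pseudo code, the PC DX9 encoding looks like the following sketch (illustrative names, not the original code; it assumes the vertex shader passes the clip-space position down through a texture coordinate):

```hlsl
// Store normalized-device-coordinate depth in the alpha channel (PC DX9 path).
float4 RainDepthPS(float4 clipPos : TEXCOORD0) : COLOR0
{
    float depth = clipPos.z / clipPos.w; // NDC depth, in [0,1] under Direct3D
#if PERSPECTIVE_PROJECTION                // compile-time switch (illustrative)
    depth = 1.0 - depth;                  // reversed depth for better precision
#endif
    return float4(0.0, 0.0, 0.0, depth);
}
```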
