## FXGuide Game environment series based on Remember Me

Remember Me, the new Capcom IP and the project I was working on, has been released!
For the occasion, FXGuide has published a series of three articles about the rendering of Remember Me.

Mike Seymour from FXGuide, Michel Koch (co-art director at Dontnod) and I worked together to get these articles up. The articles overlap some posts I have written on this blog, and that overlap is what I will detail here.

## Game environments – Part A: rendering Remember Me

http://www.fxguide.com/featured/game-environments-parta-remember-me-rendering/

The first article talks about the PBR system of Remember Me, described in

and the reflection system, described in two parts:

The FXGuide article includes exclusive videos, concept art and images from the game not present in the above posts. It links the concept art with the techniques we developed, and it also includes feedback from the Dontnod team. The blog posts contain pseudo-code and more programming details not present on FXGuide.

## Game environments – Part B: rain

http://www.fxguide.com/featured/game-environments-partb/

The second article talks about how we handle the rain in Remember Me, as described in

Water drop 2a – Dynamic rain and its effects

The FXGuide article includes additional concept art to show the rainy mood of the game world. The blog posts contain pseudo-code and more programming details not present on FXGuide.

## Game environments – Part C: making wet environments

http://www.fxguide.com/featured/game-environments-partc/

The third article talks about physically based wet surfaces, as described in:

The FXGuide article sums up this huge topic to make it more accessible and less painful to read. People willing to go more in depth can read the above posts. The blog posts contain pseudo-code and more programming details not present on FXGuide.

## Dead End Thrills screenshots of Remember Me

Not related to technical matters, but as the subject of this post is the rendering of Remember Me, don't forget to take a look at the awesome high-resolution screenshots done by Dead End Thrills.

## GPU Pro 4 – Practical planar reflections using cubemaps and image proxies (with Video)

I have written, with my co-worker Antoine Zanuttini, an article in the GPU Pro 4 book.
I am particularly proud of the cover, as it is extracted from the project I worked on, "Remember Me" from Dontnod Entertainment, published by Capcom. There is a short summary available on the GPU Pro blog: http://gpupro.blogspot.fr/2013/02/gpu-pro-4-practical-planar-reflections.html, which I will duplicate here.

Rendering scenes with glossy and specular planar reflections is a challenge in video games, particularly when targeting older hardware like the current consoles (PS3, Xbox 360). Real-time planar reflection is often implemented by re-rendering the scene from the point of view of a reflected camera, an expensive process. At the other extreme, several game developers use a simple generic 2D environment texture, which lacks realism but is really cheap. Others use screen-space local reflections, which have edge cases and still come at a non-negligible cost.

For the game "Remember Me", we have developed a really fast planar reflection solution usable on every ground surface. In our GPU Pro 4 chapter we discuss how we render an approximation of the reflected scene with the help of parallax-corrected, offline-generated elements: environment maps and image proxies. The planar reflection has enough quality for a game and takes into account the roughness of the surface. The chapter discusses the algorithm details and follows the work we presented at Siggraph 2012, "Local Image-based Lighting With Parallax-corrected Cubemap".

Our goal was not only to discuss the implementation of our algorithm but also its usage in the context of game development. We describe the tools we developed for our artists and the best practices they discovered. Artists are creative people, and they have pushed the boundaries of our tools in ways we did not expect. Want to see the result in action? See the video accompanying the article.

## Memo on Fresnel equations

Version : 1.4 – Living blog – First version was 29 April 2013

This post is a memo for myself about various things I found related to the Fresnel equations, as there are plenty of formulas in the wild. I spent some time gathering this information and thought it might interest others. It is not really relevant to game rendering but still good to know. It can be useful when building references or when dealing with total internal reflection (frequent with multi-layered BRDFs). This is a memo, not a tutorial, so I won't give basic explanations of concepts like reflection/refraction, Snell's law, index of refraction (IOR), total internal reflection (TIR), etc. At the end of the post I provide a Mathematica file with all the equations and graphs.

Notation for this post:

Conductor means metal and dielectric means non-metal material.

Light moves from a medium of a given IOR $n_i$ (incoming) into a second medium with IOR $n_t$ (transmitted).

Conductors have a complex IOR with an imaginary part $k_t$ (note that the subscript t, for "transmitted", is a bad choice for a conductor, but I found it more identifiable than $n_1$, $n_2$ and $k_2$).

## Fresnel Equation basis

All equations using an index of refraction $n_t$ can be replaced with the same equation using a complex index of refraction $n_t -\mathbf{i} k_t$.
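This substitution can be checked numerically. Below is a small Python sketch (mine, not from the post) of the reflectance at normal incidence; passing $k_t = 0$ recovers the usual dielectric case:

```python
def fresnel_normal_incidence(n_t, k_t=0.0, n_i=1.0):
    """Reflectance at normal incidence. A conductor is handled exactly as
    stated above: replace the real IOR n_t with the complex IOR n_t - i*k_t."""
    eta = complex(n_t, -k_t)
    r = (n_i - eta) / (n_i + eta)  # amplitude reflection coefficient
    return abs(r) ** 2             # reflectance R = |r|^2

# Dielectric (glass, n = 1.5): R = ((n - 1) / (n + 1))^2 = 0.04
# Conductor (k > 0): the imaginary part always increases the reflectance
```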

Snell's law for a dielectric-conductor interface: $\frac{\sin\theta_i}{\sin\theta_t}=\frac{n_t -\mathbf{i} k_t}{n_i}$

Snell's law for a dielectric-dielectric interface: $\frac{\sin\theta_i}{\sin\theta_t}=\frac{n_t}{n_i}$

The calculation of the reflectance $R$ (what we are looking for when we want to compute the percentage of reflection and transmission) depends on the p- and s-polarization of the incident ray. $R_s$ and $R_p$ are the reflectivities for the two planes of polarization: $R_s$ is perpendicular (s = German senkrecht) and $R_p$ is parallel. The reflectance $R$ for unpolarized light is the average of $R_s$ and $R_p$:

$R=\frac{(R_s+R_p)}{2}=\frac{(r_\perp^2+r_\parallel^2)}{2}$
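As a sanity check, here is a minimal Python version of these equations for a dielectric-dielectric interface (the function name and argument order are mine):

```python
import math

def fresnel_dielectric(cos_theta_i, n_i, n_t):
    """Unpolarized Fresnel reflectance R = (Rs + Rp) / 2 for a
    dielectric-dielectric interface (real IORs only)."""
    sin_theta_i = math.sqrt(max(0.0, 1.0 - cos_theta_i * cos_theta_i))
    # Snell's law: sin(theta_t) = (n_i / n_t) * sin(theta_i)
    sin_theta_t = n_i / n_t * sin_theta_i
    if sin_theta_t >= 1.0:
        return 1.0  # total internal reflection: everything is reflected
    cos_theta_t = math.sqrt(1.0 - sin_theta_t * sin_theta_t)
    # Amplitude coefficients r_perp (s-polarized) and r_par (p-polarized)
    r_s = (n_i * cos_theta_i - n_t * cos_theta_t) / (n_i * cos_theta_i + n_t * cos_theta_t)
    r_p = (n_t * cos_theta_i - n_i * cos_theta_t) / (n_t * cos_theta_i + n_i * cos_theta_t)
    # Unpolarized reflectance: average of Rs and Rp
    return 0.5 * (r_s * r_s + r_p * r_p)

# Air to glass at normal incidence gives the familiar R = 0.04
```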

The following expressions use $R_s$ and $R_p$ or $r_\perp$ and $r_\parallel$, depending on the case, to simplify notation.

## Water drop 3b – Physically based wet surfaces

Version : 1.3 – Living blog – First version was 15 April 2013

This is the third post of a series about simulating rain and its effects on the world in games. As it is a pretty big post, I have split it in two parts, A and B:

Water drop 1 – Observe rainy world
Water drop 2a – Dynamic rain and its effects
Water drop 2b – Dynamic rain and its effects
Water drop 3a – Physically based wet surfaces
Water drop 3b – Physically based wet surfaces
Water drop 4a – Reflecting wet world
Water drop 4b – Reflecting wet world

Directly following part A, let's continue with other rain effects:

## Approximation for game

We began this post by studying the influence of water on the lighting of wet surfaces. Then we saw two real-time implementations with analytic lights for both optical phenomena, inherited directly from the observations. They have some drawbacks: they miss an image-based lighting implementation, and their cost for in-game usage is still a problem for an Xbox 360/PS3 game. In computer graphics there is another path available to simulate optical phenomena besides changing the lighting model. We could simply create/edit/capture both wet and dry surface BRDF parameters (diffuse, specular, specular power…) with the same lighting model. This is not really a "simulation", as we already know the final wet state of the surface, but it lets us render wet surfaces and dynamically wet our game world. Simply interpolating between dry and wet BRDF parameters without changing the lighting model does the trick. Nevertheless, this approach is restricted to the subsurface scattering events inside the wet material and at its top, i.e. the diffuse and specular of the wet surface itself. It does not allow simulating the dual-layer lighting we studied in part A when there is a thin layer of water on top of the wet surface. I will discuss this point later.

The benefit is the simplicity of the process, and we stay compatible with any kind of light: image-based and analytic. The drawback of requiring a dry and a wet set of BRDF parameters per surface is the time to author them and the memory to store them. The wet lighting model approach requires more instructions, whereas this one requires only a few extra instructions (but still two texture fetches for the blending). However, in game development, doubling the storage in memory/disc space and the number of textures to author is prohibitive. Fortunately, we now know that we can express both wet and dry surfaces with the same lighting model, so maybe we can find a way to tweak the dry BRDF parameters to get an approximation of the wet ones and thus avoid the inconvenience of storing and authoring new textures.

Almost all games I know chose to follow this BRDF parameter tweaking path: Stalker [12], Uncharted 2/3 (on the main character), Assassin's Creed 3 [13], Crysis 2/3, Metal Gear Solid V [15], etc. This is not surprising, as the method seems simple and fits very well with a deferred shading renderer: you can tweak the dry BRDF parameters in the G-buffer without complicating the lighting system. However, the wet BRDF parameters generated by these games are either coarse or wrong approximations (in a physical sense; I agree that visually the look can be OK). Most use the same eye-calibrated factors to attenuate the diffuse and boost the specular (old fashion) on every wet surface of the scene, regardless of material properties (roughness/smoothness, porosity, metal/dielectric…). Assassin's Creed 3 even takes an additional wrong step by changing the strength of the factor based on the type of rain. Remember from part A that under any type of rain a porous surface can become water saturated; this only depends on the water precipitation volume and the exposure time. A bit differently, Tomb Raider: A Survivor Is Born [14] uses "dark lights" to attenuate the light received by the diffuse part of the wetted surfaces; the specular part is modified as in other games. As they use lights to produce rain with a light pre-pass renderer, I think they intend to make up for the missing diffuse parameter in the small G-buffer with this method, which again wrongly applies the same modification factors to all dry surfaces.

One of the purposes of the remainder of this section is to improve the generation of the wet BRDF parameters from the dry ones. I want to highlight the benefit of PBR for this parameter generation. I will begin by talking about the tweaking of the diffuse (or subsurface scattering) and specular parameters for porous dielectric materials, then for other kinds of materials. I will end with the effect of the thin layer of water which can accumulate above surfaces, and the case of thick accumulated water like puddles.

Porous dielectric material

Disclaimer: all color values I will talk about are in linear RGB space, not sRGB space. All graphs shown here are available in a Mathematica file at the end of this section.

We aim to find a factor which can be applied to a dry diffuse parameter to get a wet diffuse parameter, and the equivalent for the glossiness parameter. I will begin with an overview of previous works; they all focus on rough dielectric materials.

For asphalt in a driving simulator, Nakamae et al. [2] use a factor between 0.1 and 0.3 to attenuate the diffuse albedo and a factor of 5 to 10 to boost the specular (not PBR). Like many others, they performed an empirical calibration of these coefficients without taking the properties of the surfaces into account.

[3] and [16] detail the two optical theories that we saw in this post (part A) which aim to explain the darkening of the albedo. We will call the model of [3] LD and the model of [16] TBM. I should point out that the albedo mentioned in these papers does not match the diffuse albedo definition we use in computer graphics (i.e. the diffuse color of a perfect Lambertian surface); it contains some specular reflection. Both papers propose a relationship between wet and dry albedo. They explain that the largest differences between wet and dry albedo occur for surfaces in the middle range of dry albedo. Dark surfaces tend to absorb more light on first contact with the surface, so the contribution of internal reflections is less important, decreasing the effect of wetting. Bright surfaces tend to reflect much more light than is absorbed by internal reflection, also decreasing the effect of wetting. In both cases the relationship between dry and wet albedo depends only on the index of refraction (IOR) of the surface, the IOR of the water and the dry albedo. The following graph shows the wet albedo as a function of the dry albedo from the optical phenomena of [3], for an IOR of 1.5 for the surface (a common value for rough dielectric surfaces) and 1.33 for water. The red line is the dry albedo, for comparison:

I found it more readable to transform this graph into the ratio of wet to dry albedo as a function of albedo, i.e. the factor to apply to the dry albedo to retrieve the wet albedo:

A good comment about these graphs is made in [6]:

They further show that the wet albedo is a non-linear function of dry albedo, with low albedos reduced more by wetting than high albedos. A consequence of this result (not explicitly stated in their paper) is that wet surface color is more saturated than dry surface color, because the wetting further exaggerates the differences in albedo for different wavelengths.
Source [6]

## Water drop 3a – Physically based wet surfaces

Version : 1.3 – Living blog – First version was 19 March 2013

This is the third post of a series about simulating rain and its effects on the world in games. As it is a pretty big post, I have split it in two parts, A and B:

Water drop 1 – Observe rainy world
Water drop 2a – Dynamic rain and its effects
Water drop 2b – Dynamic rain and its effects
Water drop 3a – Physically based wet surfaces
Water drop 3b – Physically based wet surfaces
Water drop 4a – Reflecting wet world
Water drop 4b – Reflecting wet world

Physically based rendering (PBR) is now common in games (see Adopting a physically based shading model to know more about it). When I introduced PBR in my company a few years ago, we were actually working on rain. At that time we were wondering how our new lighting model should behave to simulate wet surfaces. With a classic game lighting model, the way everybody chose was to darken the diffuse term and boost the specular term (here I refer to the classic specular used as an RGB color multiplied with the lighting), the wet diffuse/specular factors being eye-calibrated. I wanted to go further than simply adapting this behavior to PBR, and this required a better understanding of the interaction between water and materials. This post is a mix of the results of old and recent research. I chose to provide up-to-date information, including experimental (incomplete) work, because the subject is complex and talking about it is useful. This might be of interest for future research. The post describes how water influences materials and provides ways to simulate wet surfaces with a physically based lighting model. I assume here that the reader knows the basics of reflected/refracted light, Snell's law and index of refraction (IOR).

## Wet surfaces – Theory

People are able to distinguish between a wet and a dry surface by sight. The observation post shows many pictures to illustrate this point. The main visual cues people retain are that wet surfaces look darker, more specular and exhibit subtle changes in saturation and hue:

This behavior is commonly observed for natural or man-made rough/porous materials (brick, clay, plaster, concrete, asphalt, wood, rust, cardboard, stone…), powdered materials (sand, dirt, soil…), absorbent materials (paper, cotton, fabrics…) or organic materials (fur, hair…). However this is not always the case: smooth materials (glass, marble, plastic, metal, painted surfaces, polished surfaces…) don't change. For example, there is a big difference between a dry rough stone and a wet rough stone, but a very small difference between a highly polished wet stone and a highly polished dry stone.
In the following discussion, wet surfaces refer mostly to rough and diffuse materials quenched in water and having a very thin water layer on their surface.

Why are rough surfaces darker when wet? Because they reflect less light.
There are two optical phenomena involved in this decrease of light reflection, and they are detailed in [3] and [4]. A rough material has small air gaps or pores which are filled by water when the wetting process begins. When the pores are filled, there is "water saturation": water propagates over the material as a thin layer.

Let's first see the impact of the thin layer of water. The rough surface leads to a diffuse reflection (Lambertian surface). Some of the light reflected from the surface will be reflected back to the surface by the water-air interface due to total internal reflection. Total internal reflection occurs when light moves from a denser medium into a less dense one (i.e., $n_1 > n_2$), above an incidence angle known as the critical angle (see [1] for more details). For the water-air interface, this is $\theta_c=\arcsin(\frac{n_{air}}{n_{water}})=\arcsin(\frac{1.0}{1.33})=48.75^{\circ}$

Source [1]
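In code, the critical angle works out as follows (a trivial check, not from the original post):

```python
import math

n_air, n_water = 1.0, 1.33
# Critical angle of the water-air interface: above this incidence angle,
# light coming from below is totally reflected back into the water layer
theta_c = math.degrees(math.asin(n_air / n_water))  # ~48.75 degrees
```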

This light reflected back onto the surface is then subject to another round of absorption by the surface before being reflected again. This back and forth of the light results in a darkening of the surface.

Source [2]

Now take a look at the water filling the pores inside the rough material. There is a concentration of water beneath the surface. The water which replaces the air has an index of refraction higher than that of air (1.33 versus 1.0), which is closer to the index of refraction of most rough dielectric materials (1.5). Consequently, following Snell's law, light entering the material is refracted less due to the reduced index of refraction difference: the scattering of light under the surface is more directional in the forward direction. The increased number of scattering events before the light leaves the surface increases the amount of light absorbed and thus reduces the reflected light.

Source [5]
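The reduced bending can be made concrete with Snell's law. In this sketch (the 45° angle and the function are illustrative, not from the post), light enters a dielectric of IOR 1.5, once from air and once from water-filled pores:

```python
import math

def refraction_angle_deg(theta_i_deg, n_i, n_t):
    """Snell's law: angle of the transmitted ray, in degrees."""
    sin_t = n_i / n_t * math.sin(math.radians(theta_i_deg))
    return math.degrees(math.asin(sin_t))

# Light hitting a rough dielectric (n = 1.5) at 45 degrees incidence:
dry = refraction_angle_deg(45.0, 1.0, 1.5)   # from air: ~28.1 degrees
wet = refraction_angle_deg(45.0, 1.33, 1.5)  # from water in the pores
# wet > dry: with water in the pores the ray is bent less, so scattering
# below the surface stays more forward-directed, as described above
```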

The darkening of the material is also accompanied by a subtle change in saturation and hue. In [11], the spectral reflectance (i.e. the "RGB" representation of real-world color; the visible range of the spectrum runs from around 400 nm (blue) to 780 nm (red)) of a dry and a wet stone has been measured to highlight these characteristics. The analysis shows a significant reduction in reflectance across the whole visible spectrum when the surface gets wet, which confirms the darkening of the surface. It also shows that the surface color becomes more saturated because of this reduction.

Source [11].

## Water drop 2b – Dynamic rain and its effects

Version : 1.1 – Living blog – First version was 3 January 2013

This is the second post of a series about simulating rain and its effects on the world in games. As it is a pretty big post, I have split it in two parts, A and B:

Water drop 1 – Observe rainy world
Water drop 2a – Dynamic rain and its effects
Water drop 2b – Dynamic rain and its effects
Water drop 3a – Physically based wet surfaces
Water drop 3b – Physically based wet surfaces
Water drop 4a – Reflecting wet world
Water drop 4b – Reflecting wet world

Directly following part A, let's continue with other rain effects:

Water influence on material, water accumulation and puddles

In the observation post we saw that a wet surface is darker or brighter depending on its material properties. The influence of water on a material is complex and will be discussed in detail in the next post: Water drop 3 – Physically based wet surfaces. For now we will follow the guidelines defined by Nakamae et al. in "A Lighting Model Aiming at Drive Simulators" [1]. When it rains, water accumulates in ground cracks, gaps and deformations. With sufficient precipitation, puddles can appear and stay a long time even after it stops raining (as highlighted by the observation post). To define the different states of the ground surfaces, [1] introduces the following classification:

Type 1: a dry region
Type 2: a wet region; i.e., the region where the road surface is wet but no water gathers
Type 3: a drenched region; i.e., the region where water remains to some extent but no puddles are formed, or the region of the margin of a puddle
Type 4: a puddle region

When a surface is wet (type 2), the paper suggests applying a reflection coefficient of 5 to 10 on the specular and of 0.1 to 0.3 on the diffuse. In the pseudo-code of this section, we will represent this water influence by a function DoWetProcess. This function takes a percentage of wetting strength in the form of a shader variable we will call the wet level. When the wet level is 0, the surface is dry; when it is 1, the surface is wet. This value is different from the raindrop intensity of the previous sections. The wet level variable increases when the rain starts and takes some time to decrease after it stops, allowing us to simulate some drying. Here is a simple pseudo-code:

void DoWetProcess(inout float3 Diffuse, inout float Gloss, float WetLevel)
{
    // Water influence on material BRDF
    Diffuse *= lerp(1.0, 0.3, WetLevel);
    // Not the same boost factor as the paper
    Gloss    = min(Gloss * lerp(1.0, 2.5, WetLevel), 1.0);
}

// Water influence on material BRDF
DoWetProcess(Diffuse, Gloss, WetLevel);

Note that no change is applied to the normal: when the surface is wet, we simply use the original normal. Here is a shot with an environment map applied:

For puddles (type 4), the paper suggests a two-layer reflection model, as the photos of the real rainy world show us. For now we keep it simple and just use the water BRDF parameters coupled with the diffuse attenuation of a wet surface. For the margin region of puddles (type 3), a simple weighting function between the two previous models is proposed. Here we lerp between the current material BRDF parameters (wet or not) and the type 4 BRDF parameters to simulate the presence of accumulated water. Puddle placement needs to be controlled by the artists, and we use the alpha channel of the vertex colors of a mesh for this purpose. We provide a tool to our artists to paint vertex colors in the editor directly on the mesh instance (more exactly, Unreal Engine 3 provides the tools). The blend weight of our lerping method is defined by the value of the vertex color's alpha channel: 0 means puddle and 255 means no puddle (the default value for vertex colors is often white opaque). Pseudo-code:

AccumulatedWater = VertexColor.a;

// Type 2 : Wet region
DoWetProcess(Diffuse, Gloss, WetLevel);

// Apply accumulated water effect
// When AccumulatedWater is 1.0 we are in Type 4
// so full water properties, in between we are in Type 3
// Water is smooth
Gloss    = lerp(Gloss, 1.0, AccumulatedWater);
// Water F0 specular is 0.02 (based on IOR of 1.33)
Specular = lerp(Specular, 0.02, AccumulatedWater);
N        = lerp(N, float3(0, 0, 1), AccumulatedWater);
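For readers who want to check the blending behavior outside a shader, the two fragments above can be transliterated into plain Python (a sketch; the shipped code is HLSL):

```python
def lerp(a, b, t):
    """HLSL-style linear interpolation."""
    return a + (b - a) * t

def do_wet_process(diffuse, gloss, wet_level):
    """Type 2 (wet region): darken diffuse, boost glossiness."""
    diffuse = [c * lerp(1.0, 0.3, wet_level) for c in diffuse]
    gloss = min(gloss * lerp(1.0, 2.5, wet_level), 1.0)
    return diffuse, gloss

def apply_accumulated_water(diffuse, gloss, specular, normal,
                            wet_level, accumulated_water):
    """Types 3/4: blend the wet material toward flat, smooth water
    (F0 = 0.02 from water's IOR of 1.33, up-facing normal)."""
    diffuse, gloss = do_wet_process(diffuse, gloss, wet_level)
    gloss = lerp(gloss, 1.0, accumulated_water)
    specular = lerp(specular, 0.02, accumulated_water)
    normal = [lerp(n, up, accumulated_water)
              for n, up in zip(normal, (0.0, 0.0, 1.0))]
    return diffuse, gloss, specular, normal
```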

View in editor: no puddle, puddle painted with the vertex color tools, puddle

The puddle painting can be seen in action at the beginning of the puddles, heightmap and ripples YouTube video.

## Water drop 2a – Dynamic rain and its effects

Version : 1.3 – Living blog – First version was 27 December 2012

This is the second post of a series about simulating rain and its effects on the world in games. As it is a pretty big post, I have split it in two parts, A and B:

Water drop 1 – Observe rainy world
Water drop 2a – Dynamic rain and its effects
Water drop 2b – Dynamic rain and its effects
Water drop 3a – Physically based wet surfaces
Water drop 3b – Physically based wet surfaces
Water drop 4a – Reflecting wet world
Water drop 4b – Reflecting wet world

In the first water drop we saw several rain effects. To immerse the player in a rainy world, we need to support a lot of them. The major reference for rainy city environment rendering is the "Toy Shop" demo from ATI, which has been widely covered by Natalya Tatarchuk at many conferences [2][3][4]. However, even though the demo was available in late 2005, not all the techniques described can easily fit in a PS3/Xbox 360 playable game environment. In this second water drop, I want to share the work that I and others at Dontnod have done around these rain effects for "Remember Me". This post is the result of our research. We will not only discuss what we implemented but also the theory and other approaches. For this post, I invited my co-workers Antoine Zanuttini, Laury Michel and Orson Favrel to write some words, so this is a collaborative post. We focus on rainy urban environments and describe the different rain effects one by one. Our engine (Unreal Engine 3) is a forward renderer, but the ideas here could also be applied in a deferred renderer.

## Rain Effects

Rain splashes / falling drop splashes

In the real world, when a falling drop hits a surface, a splash is generated. Rain, or water flowing from large heights like rooftops or trees, can generate falling drops; the behavior is the same in both cases. We will focus on raindrops first. Rain splashes can be simulated easily in a game by spawning a water splash particle when the stretched particle representing the raindrop collides with the scene. But tracking every particle colliding with the scene can be costly, and with so many raindrops creating water splashes, it is hard to distinguish which raindrop is causing a specific rain splash. Based on this fact, and for performance reasons, it is simpler to have two independent systems managing raindrops and rain splashes. Most games collide a bunch of random rays, starting from the top of the world and going downward, with a simple geometric representation of the scene, then generate water splash particles at the origin of the collisions [1][2]. As an optimization, the water splashes are only generated close to the screen. Another simple solution, when you have complex geometry that you can't easily approximate, is to manually place an emitter of water splash particles following the geometry boundaries. The pattern will not be as random as the other water splashes, but the effect will be there.

We tried another approach. Instead of trying to collide rays with the world, we simply render a depth map viewed from the top, in the rain direction. The depth map gives all the information we require to emit a water splash particle at a random position in the world while respecting the scene geometry. The steps of our approach are:
– Render a depth map
– Transfer the depth map from GPU to CPU memory
– Use the depth map to generate random positions following the world geometry
– Emit the water splashes at the generated positions
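The CPU side of these steps might look like the following sketch, with a flat height array standing in for the read-back depth map (all names here are illustrative, not the engine's API):

```python
import random

def generate_splash_positions(depth_map, size, frustum_min, frustum_max, count):
    """Pick random (x, y) points inside the top-down orthogonal frustum and
    look up the scene height from the CPU-side depth map."""
    positions = []
    for _ in range(count):
        u, v = random.random(), random.random()
        # map normalized coordinates to the frustum footprint
        x = frustum_min[0] + u * (frustum_max[0] - frustum_min[0])
        y = frustum_min[1] + v * (frustum_max[1] - frustum_min[1])
        # nearest-texel lookup in the square depth map
        ti = min(int(u * size), size - 1)
        tj = min(int(v * size), size - 1)
        z = depth_map[tj * size + ti]  # height captured by the top-down render
        positions.append((x, y, z))
    return positions
```

The splash particle system then emits one splash at each returned position.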

To render the depth map, we attach a dummy location in front of the current camera but a little higher, then render the world geometry from this point of view. All standard shadow map optimizations apply here (any culling method, double-speed Z, rendering masked geometry after opaque, using a vertex stream with position only, not rendering too-small objects, forcing lower mesh LODs, etc.). As not all parts of the world need to generate rain splashes, we added an extra mesh tagging method for our artists to specify whether a mesh needs to be rendered in the depth map. We also allow a mesh to be rendered only in the depth map and not in the normal scene. This is useful when you have translucent objects like glass, which should stop rain but can't be rendered in an opaque depth map, or to approximate many meshes by a single, less complex mesh. To ease debugging, we added a special visualization mode in our editor to see only the objects relevant to the rain splashes.

The precision of the world positions generated from this depth map depends on the resolution and the frustum size of the depth map. With a 256×256 depth map and a 20m × 20m orthogonal frustum, we get world cells of 7.8cm × 7.8cm at the height taken from the depth map. The rasterizer rules which height gets stored in the depth map. This means that if an object inside a 7.8cm × 7.8cm cell has large height disparities, chances are the water splash will be spawned at a wrong height. This is a tradeoff between memory/performance and precision.
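The arithmetic behind the 7.8 cm figure is simply the frustum footprint divided by the map resolution:

```python
resolution = 256        # depth map is 256x256 texels
frustum_size_m = 20.0   # 20m x 20m orthogonal frustum
cell_side_cm = frustum_size_m / resolution * 100.0
# cell_side_cm == 7.8125: each texel covers a ~7.8cm x 7.8cm ground cell
```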
To render the depth map, we can use either an orthogonal or a perspective matrix. We haven't found any use for the perspective matrix, but in the following I will suppose that we can have both. Moreover, on consoles or with DX10 and above, we can access the depth buffer directly, so we use this functionality. On PC DX9, we store the depth value in the alpha channel of a color buffer. For consistency with the other platforms, the depth value is stored in normalized device coordinates. In the case of a perspective projection, a reversed floating-point depth value is used to increase precision. Here is the PC DX9 pseudo code for this encoding: