Water drop 2a – Dynamic rain and its effects

Version: 1.3 – Living blog – First version was 27 December 2012

This is the second post of a series about simulating rain and its effect on the world in game. As it is a pretty big post, I split it into two parts, a and b:

Water drop 1 – Observe rainy world
Water drop 2a – Dynamic rain and its effects
Water drop 2b – Dynamic rain and its effects
Water drop 3a – Physically based wet surfaces
Water drop 3b – Physically based wet surfaces
Water drop 4a – Reflecting wet world
Water drop 4b – Reflecting wet world

In the first water drop we saw several rain effects. To immerse the player in a rainy world, we need to support a lot of them. The major reference for rainy city environment rendering is the “Toy Shop” demo from ATI, which has been widely covered by Natalya Tatarchuk at many conferences [2][3][4]. However, even though the demo was available in late 2005, not all of the techniques described can easily fit in a playable PS3/XBOX360 game environment. In this second water drop, I want to share the work my co-workers and I have done at Dontnod around these rain effects for “Remember Me“. This post is the result of our research. We will discuss not only what we implemented but also the theory and other approaches. For this post, I invited my co-workers Antoine Zanuttini, Laury Michel and Orson Favrel to write a few words, so this is a collaborative post :). We focused on rainy urban environments, and we describe the different rain effects one by one. Our engine (Unreal Engine 3) is a forward renderer, but the ideas here could also be applied in a deferred renderer.

Rain Effects

Rain splashes / Falling drop splashes

In the real world, when a falling drop hits a surface, a splash is generated. Rain, or water flowing off tall geometry like rooftops or trees, can generate falling drops; the behavior is the same in both cases. We will focus on raindrops first. Rain splashes can be simulated easily in a game by spawning a water splash particle when the stretched particle representing the raindrop collides with the scene. Tracking every particle colliding with the scene can be costly, and with so many raindrops creating water splashes, it is hard to distinguish which raindrop caused a specific splash. Based on this fact, and for performance reasons, it is simpler to have two independent systems to manage raindrops and rain splashes. Most games cast a bunch of random rays from the top of the world downward against a simple geometric representation of the scene, then generate water splash particles at the collision points [1][2]. As an optimization, the water splashes are only generated close to the screen. Another simple solution, when you have complex geometry that you can’t easily approximate, is to manually place an emitter of water splash particles that follows the geometry boundaries. The pattern will not be as random as the other water splashes, but the effect will be there.

We tried another approach. Instead of trying to collide rays with the world, we can simply render a depth map of the scene viewed from the top, along the rain direction. The depth map gives all the information we require to emit a water splash particle at a random position in the world that respects the scene geometry. The steps of our approach are:
- Render a depth map
- Transfer depth map from GPU to CPU memory
- Use the depth map to generate random positions following the world geometry
- Emit the water splash at the generated positions
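The last two steps could be sketched on the CPU side as follows. This is only an illustration, not the actual Dontnod/UE3 code: all names are hypothetical, and it assumes a top-down orthographic frustum with normalized depth already read back to CPU memory.

```cpp
// Sketch: turn a random texel of a CPU-side, top-down depth map into a
// world-space splash position (hypothetical names and conventions).
#include <cstdlib>

struct Vec3 { float x, y, z; };

Vec3 SplashPositionFromDepthMap(const float* depthMap, int width, int height,
                                Vec3 frustumOrigin, float frustumW, float frustumH,
                                float nearZ, float farZ)
{
    // Pick a random texel of the depth map.
    int tx = std::rand() % width;
    int ty = std::rand() % height;
    float depth = depthMap[ty * width + tx]; // normalized [0,1] depth

    // Map the texel back to world space: X/Y from the orthographic
    // frustum extent, Z from the stored depth along the rain direction
    // (assumed straight down here).
    Vec3 p;
    p.x = frustumOrigin.x + (tx + 0.5f) / width  * frustumW - frustumW * 0.5f;
    p.y = frustumOrigin.y + (ty + 0.5f) / height * frustumH - frustumH * 0.5f;
    p.z = frustumOrigin.z - (nearZ + depth * (farZ - nearZ));
    return p;
}
```

A splash particle would then be emitted at each generated position, with positions re-drawn every frame (or every few frames) to keep the pattern random.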

To render the depth map, we attach a dummy location in front of the current camera but a little higher, then render the world geometry from this point of view. All standard shadow map optimizations apply here (any culling method, double-speed Z, rendering masked after opaque, using a position-only vertex stream, skipping objects that are too small, forcing lower mesh LODs, etc.). As not all parts of the world need to generate rain splashes, we added an extra mesh tagging method for our artists to specify whether a mesh needs to be rendered in the depth map. We also allow a mesh to be rendered only in the depth map and not in the normal scene. This is useful when you have translucent objects like glass, which should stop rain but can’t be rendered in an opaque depth map, or to approximate a lot of meshes with a single, less complex mesh. To ease debugging, we added a special visualization mode in our editor to show only the objects relevant to the rain splashes.

[Image: editor visualization mode showing only the meshes relevant to rain collision]
The precision of the world positions generated from this depth map depends on the resolution and the frustum size of the depth map. With a 256×256 depth map and a 20m × 20m orthogonal frustum, we get world cells of 7.8cm × 7.8cm at the height taken from the depth map. The rasterizer rules which height is stored in the depth map. This means that if an object falls in a 7.8cm × 7.8cm cell with large height disparities, chances are the water splash will be spawned at a wrong height. This is a tradeoff between memory and performance.
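The figure above follows directly from dividing the frustum extent by the depth map resolution; a one-line helper makes the tradeoff easy to evaluate for other configurations (the function name is ours, for illustration):

```cpp
// Worked example of the precision estimate: with a 256x256 depth map
// covering a 20m x 20m orthographic frustum, each texel covers a world
// cell of 20 / 256 = 0.078125m, i.e. about 7.8cm on a side.
float CellSizeMeters(float frustumSizeMeters, int depthMapResolution)
{
    return frustumSizeMeters / depthMapResolution;
}
```

Doubling the resolution halves the cell size but quadruples the memory and readback cost, which is why 256×256 was a reasonable middle ground for us.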
To render the depth map we can use either an orthogonal or a perspective matrix. We haven’t found any use for the perspective matrix, but in the following I will assume that we can have both. Moreover, on consoles or on DX10 and above, we can access the depth buffer directly, so we use this functionality. On PC DX9 we store the depth value in the alpha channel of a color buffer. For consistency with the other platforms, the depth value is stored in normalized device coordinates. In the case of a perspective projection, a reversed floating-point depth value is used to increase precision. Here is the PC DX9 pseudo code for this encoding:
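The encoding can be illustrated in C++ as follows. This is a sketch under standard D3D conventions (NDC depth in [0,1]), not the original snippet; function names are assumptions:

```cpp
// Sketch: encode view-space depth into [0,1] NDC depth for storage in a
// color channel (DX9 path). Assumes D3D-style projections mapping
// [nearZ, farZ] to NDC depth [0, 1].

// Orthographic projection: NDC depth is linear in view-space depth.
float EncodeOrthoDepth(float viewZ, float nearZ, float farZ)
{
    return (viewZ - nearZ) / (farZ - nearZ);
}

// Perspective projection: standard D3D NDC depth is
// far * (z - near) / (z * (far - near)), which bunches values near 1
// for distant objects. Storing the reversed value spreads floating-point
// precision more evenly over the range.
float EncodePerspectiveDepthReversed(float viewZ, float nearZ, float farZ)
{
    float ndc = (farZ * (viewZ - nearZ)) / (viewZ * (farZ - nearZ));
    return 1.0f - ndc; // reversed: 1 at the near plane, 0 at the far plane
}
```

On the CPU side, decoding simply inverts the chosen mapping before the world position is reconstructed.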
