Water drop 2a – Dynamic rain and its effects

Version : 1.3 – Living blog – First version was 27 December 2012

This is the second post of a series about simulating rain and its effects on the world in games. As it is a pretty big post, I split it into two parts, a and b:

Water drop 1 – Observe rainy world
Water drop 2a – Dynamic rain and its effects
Water drop 2b – Dynamic rain and its effects
Water drop 3a – Physically based wet surfaces
Water drop 3b – Physically based wet surfaces
Water drop 4a – Reflecting wet world
Water drop 4b – Reflecting wet world

In the first water drop we saw several rain effects. To immerse the player in a rainy world, we need to support a lot of them. The major reference for rainy city environment rendering is the “Toy Shop” demo from ATI, which has been widely covered by Natalya Tatarchuk at many conferences [2][3][4]. However, even though the demo was available in late 2005, all the techniques described can’t easily fit in a PS3/XBOX360 playable game environment. In this second water drop, I want to share the work my co-workers and I have done at Dontnod around these rain effects for “Remember Me“. This post is the result of our research. We will discuss not only what we implemented but also the theory and other approaches. For this post, I invited my co-workers Antoine Zanuttini, Laury Michel and Orson Favrel to write some words, so this is a collaborative post :). We focused on rainy urban environments and we describe the different rain effects one by one. Our engine (Unreal Engine 3) is a forward renderer, but the ideas here could also be applied in a deferred renderer.

Rain Effects

Rain splashes  / Falling drops splashes

In the real world, when a falling drop hits a surface, a splash is generated. Rain, or water flowing off tall objects like rooftops or trees, can generate falling drops; the behavior is the same in both cases. We will focus on raindrops first. Rain splashes can be simulated easily in a game by spawning a water splash particle when the stretched particle representing the raindrop collides with the scene. Tracking every particle colliding with the scene can be costly. With so many raindrops creating water splashes, it is hard to distinguish which raindrop is causing a specific rain splash. Based on this fact, and for performance reasons, it is simpler to have two independent systems to manage raindrops and rain splashes. Most games collide a bunch of random rays starting from the top of the world downward with a simple geometric representation of the scene, then generate water splash particles at the points of collision [1][2]. As an optimization, the water splashes are only generated close to the screen. Another simple solution, when you have complex geometry that you can’t easily approximate, is to manually place an emitter of water splash particles following the geometry boundaries. The pattern will not be as random as the other water splashes, but the effect will be there.

We tried another approach. Instead of trying to collide some rays with the world, we can simply render a depth map viewed from the top in the rain direction. The depth map gives us all the information we require to emit a water splash particle at a random position in the world that respects the scene geometry. The steps of our approach are:
- Render a depth map
- Transfer depth map from GPU to CPU memory
- Use the depth map to generate random positions following the world geometry
- Emit the water splash at the generated positions

To render the depth map, we attach a dummy location in front of the current camera, but a little higher, then render the world geometry from this point of view. All standard shadow map optimizations apply here (any culling method, double-speed Z, rendering masked geometry after opaque, using a position-only vertex stream, not rendering objects that are too small, forcing lower mesh LODs, etc.). As not all parts of the world need to generate rain splashes, we added an extra mesh tagging method for our artists to specify whether a mesh needs to be rendered in the depth map. We also allow a mesh to be rendered only in the depth map and not in the normal scene. This is useful when you have translucent objects like glass which should stop rain but can’t be rendered in an opaque depth map, or to approximate a lot of meshes with a single less complex mesh. To ease debugging, we added a special visualization mode in our editor to see only the objects relevant to rain splashes.

The precision of the world positions generated from this depth map depends on the resolution and the frustum size of the depth map. With a 256×256 depth map and a 20m × 20m orthogonal frustum, we get world cells of about 7.8cm × 7.8cm at the height taken from the depth map. The rasterizer decides which height is stored in the depth map. This means that if an object inside a 7.8cm × 7.8cm cell has large height disparities, chances are the water splash will be spawned at a wrong height. This is a tradeoff between memory and performance.
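The texel footprint above is just the frustum size divided by the map resolution; a quick sketch with the numbers from the text:

```python
# Quick arithmetic behind the precision tradeoff: one depth-map texel
# covers a square world cell whose side is frustum size / resolution.

def cell_side_m(frustum_size_m, map_resolution):
    """Side length (meters) of the world cell covered by one texel."""
    return frustum_size_m / map_resolution

# 256x256 map over a 20m x 20m orthogonal frustum -> ~7.8 cm per side
side = cell_side_m(20.0, 256)
```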
To render the depth map we can use either an orthogonal or a perspective matrix. We haven’t found any use for a perspective matrix, but in the following I will suppose that we can have both. Moreover, on console or DX10 and above, we can access the depth buffer directly, so we use this functionality. On PC DX9 we store the depth value in the alpha channel of a color buffer. For consistency with the other platforms, the depth value is stored in normalized device coordinates. In the case of a perspective projection, a reversed floating-point depth value is used to increase precision. Here is the PC DX9 pseudo code for this encoding:

// ScreenPosition is the projected position

// Orthogonal projection: encode the Z device value directly
// (ScreenPosition.w is supposed to be 1.0 with an orthogonal projection matrix)
OutColor = float4(0, 0, 0, ScreenPosition.z / ScreenPosition.w);

// Perspective projection: reversed depth for precision
// Define A = ProjectionMatrix[2][2] and B = ProjectionMatrix[3][2] (row major)
// Standard projection does Z_NDC = A + B / Z  =>  Reversed: 1 - Z_NDC = 1 - A - B / Z
OutColor = float4(0, 0, 0, 1 - A - B / ScreenPosition.w);
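To illustrate the two encodings, here is a small sketch (not engine code) that builds the A and B terms from a D3D-style perspective projection (near maps to 0, far to 1) and evaluates both the standard and the reversed NDC depth:

```python
# Sketch of the NDC depth encodings above. A and B are the projection
# matrix terms named in the comments; the near/far values are illustrative.

def projection_terms(z_near, z_far):
    a = z_far / (z_far - z_near)             # ProjectionMatrix[2][2]
    b = -z_far * z_near / (z_far - z_near)   # ProjectionMatrix[3][2]
    return a, b

def depth_ndc(z_view, a, b):
    return a + b / z_view         # standard: near -> 0, far -> 1

def depth_ndc_reversed(z_view, a, b):
    return 1.0 - a - b / z_view   # reversed: near -> 1, far -> 0
```

The reversed encoding keeps the near plane at 1.0, where a floating-point buffer concentrates its precision.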

A top view depth map:

Once the depth map is created we need to access it on the CPU, so we must transfer the depth map from GPU memory to CPU memory. On PC DX9 you could use “GetRenderTargetData()”, which blocks until the rendering is finished (a bad thing), or use the depth map generated the previous frame (with double buffering). On console this is simpler: if we need to, we can transfer the data with the GPU when the depth map has finished rendering (I can’t talk about the specifics). This is a quick operation; a 256×256 depth map costs 0.036ms to transfer on PS3. Moreover, we don’t need to perform any synchronization between CPU and GPU here. Even if we overwrite some CPU-accessed data, we will just produce a wrong position for one frame, and with tens of rain splashes spawned each frame, it will be hard to notice.
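The double-buffered variant mentioned for PC DX9 can be sketched like this (illustrative names, not engine API): the CPU always reads the map written the previous frame, so it never blocks on the GPU.

```python
# Minimal sketch of a double-buffered readback: the GPU writes one buffer
# while the CPU reads the other, swapping each frame.

class DoubleBufferedReadback:
    def __init__(self):
        self.buffers = [None, None]
        self.write_index = 0

    def gpu_write(self, depth_map):
        """Called when the GPU has finished rendering this frame's map."""
        self.buffers[self.write_index] = depth_map
        self.write_index ^= 1  # swap roles for next frame

    def cpu_read(self):
        """Returns last frame's map (None on the very first frame)."""
        return self.buffers[self.write_index]
```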

The CPU-readable data is then used to generate spawn positions. For this, we set up an emitter to spawn particles in an area around a point of interest (like the player). The area should be within the frustum used for the depth map. In our third person game, we define a circle emitter of varying size in front of the camera where we want to generate water splashes. When a particle is emitted, we project its position with the ViewProjection matrix of the depth map, reconstruct the normalized device Z from the depth map, then unproject with the InverseViewProjection of the depth map to retrieve the final position. Here is some pseudo code:

// Project current position in our depth map and set world space z location.
Vector4 ShadowPosition = ViewProjMatrix.TransformVector4(Particle->Location);
ShadowPosition = ShadowPosition / ShadowPosition.W;

// Save depth map X Y position for later world space reconstruction
Vector2D PosNDC(Clamp(ShadowPosition.X, -1.0f, 1.0f), Clamp(ShadowPosition.Y, -1.0f, 1.0f));

// If we are outside the depth map, just kill the particle
// (we detect this by testing whether the clamp changed the values)
if (PosNDC.X != ShadowPosition.X || PosNDC.Y != ShadowPosition.Y)
    return;

// Convert to shadow map texel space - apply a clamp mode address mode
ShadowPosition.X = Clamp(ShadowPosition.X *  0.5f + 0.5f, 0.0f, 1.0f);
ShadowPosition.Y = Clamp(ShadowPosition.Y * -0.5f + 0.5f, 0.0f, 1.0f);

int PosX = (int)(ShadowPosition.X * (float)(SizeX - 1));
int PosY = (int)(ShadowPosition.Y * (float)(SizeY - 1));


// Console path - big endian D24S8 depth/stencil buffer
Data = &Data[(PosY * DepthBuffer->DepthSurfacePitch) + (PosX * 4)]; // Data is the CPU memory containing the depth map
unsigned int Val = (Data[0] << 16) + (Data[1] << 8) + Data[2];    // Skip the stencil byte
float DepthNDC = (float)Val / 16777215.0f; // 2^24 - 1 == 16777215.0f (for 24bit depth)


// PC DX9 path - depth stored in the alpha of a Float16Color buffer
Data = &Data[(PosY * SizeX + PosX) * sizeof(Float16Color)];
float DepthNDC = ((Float16Color*)Data)->A; // Convert to float

// As the inversion is not handled inside the projection matrix but in
// the shader, we must invert here
if (UsesInvertedZ && ProjMatrix[3][3] < 1.0f) // Orthogonal projection is not inverted
    DepthNDC = 1.0f - DepthNDC;


Vector4 ReconstructedPositionWS = InverseViewProjection.TransformVector4(Vector4(PosNDC.X, PosNDC.Y, DepthNDC, 1.0f));
Particle->Location = ReconstructedPositionWS / ReconstructedPositionWS.W;
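The same project/sample/unproject round trip can be sketched in plain Python with a simplified top-down orthographic frustum (hypothetical helper names; the real code goes through the engine's matrices):

```python
# Sketch of the CPU-side splash position reconstruction: keep the particle's
# XY, sample the top-down depth map, and unproject to get the height.

def ortho_project(p, half_w, half_h, z_near, z_far):
    """World position -> NDC for a top-down orthographic frustum."""
    x_ndc = p[0] / half_w
    y_ndc = p[1] / half_h
    z_ndc = (p[2] - z_near) / (z_far - z_near)  # 0..1, like a depth buffer
    return (x_ndc, y_ndc, z_ndc)

def ortho_unproject(ndc, half_w, half_h, z_near, z_far):
    return (ndc[0] * half_w, ndc[1] * half_h,
            z_near + ndc[2] * (z_far - z_near))

def splash_position(xy, depth_map, size, half_w, half_h, z_near, z_far):
    """Snap a random emitter XY onto the geometry height from the depth map."""
    x_ndc, y_ndc, _ = ortho_project((xy[0], xy[1], 0.0),
                                    half_w, half_h, z_near, z_far)
    # NDC -> texel space with a clamp address mode, like the pseudo code
    u = min(max(x_ndc * 0.5 + 0.5, 0.0), 1.0)
    v = min(max(y_ndc * -0.5 + 0.5, 0.0), 1.0)
    px = int(u * (size - 1))
    py = int(v * (size - 1))
    z_ndc = depth_map[py][px]
    return ortho_unproject((x_ndc, y_ndc, z_ndc), half_w, half_h, z_near, z_far)
```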

The above pseudo code uses a D24S8 integer depth/stencil buffer on console because this format is compatible with hardware PCF (the depth map will be used for other purposes than just generating positions on the CPU), but a depth-only format could be used. When decoding a value from the depth buffer it is important to know its format: floating point or integer? Bit depth? Tiled/swizzled? Whether reversed depth storage is used or not is handled inside the projection matrix on console in this pseudo code. In case the texture is tiled or swizzled, care must be taken when addressing the pixel inside the data.
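For reference, the console-side decode above is easy to express in a runnable form (assuming a linear, non-tiled layout; real console buffers may be tiled):

```python
# Runnable version of the big-endian D24S8 decode from the pseudo code:
# 3 depth bytes, 1 stencil byte per texel, normalized to [0, 1].

def decode_d24s8_big_endian(data, pitch, x, y):
    """Returns the 24-bit depth at texel (x, y) normalized to [0, 1]."""
    offset = y * pitch + x * 4
    val = (data[offset] << 16) | (data[offset + 1] << 8) | data[offset + 2]
    return val / 16777215.0  # 2^24 - 1
```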

Finally, we use the generated position to spawn a water splash. The appearance of a real splash is difficult to model. An extensive study has been done in [7] and is summed up here:

When a falling drop hits a surface, it is subjected to a sudden impact force. This impact deforms the drop and forces the water to flow radially outward. Some of the water eventually leaves the surface in the form of numerous small droplets – an event defined as a splash. Splashes can occur in two possible ways: corona splash, where a thin crown-shaped water sheet rises vertically above the surface before breaking into smaller droplets, and prompt splash, where droplets are emitted directly from the base of the drop without the formation of a crown.

A high-speed camera can be used to see the appearance of the impact in detail. Here are two sample videos 1 and 2 on YouTube (not ours); the images below are extracted from the first video (from Mikrotron GmbH):


Typically, a corona splash requires a thin layer of water on a flat surface and stays alive for 10-20ms; otherwise we get a prompt splash. The dynamics of the splash depend on many factors divided in two categories: the material properties of the surface (roughness, rigidity, wetness, inclination, hydrophobia) and the falling drop properties (size, velocity). Rough materials tend to have an important influence on the impact result. The radius and height of the crown can be related to the drop properties; the number of splash droplets is also related to the velocity of the drop. The distribution of the splash droplets can be described by a stochastic model. All the details can be found in [7].
For a game, of course, it is difficult to go so far into the details. For example, the Toy Shop demo uses a single quad with a textured animation captured from a high-speed video of a splashing milk drop. The quad is scaled to produce variety. A good compromise seems to be to keep the two main characteristics of the impact: a generic crown shape scaled in height and radius, and some droplets. However, as this part of the effect is often in the hands of the FX artists, they tend to do whatever they feel looks good regardless of the physics. In this spirit, John David Thornthon has developed a modular rig for “Ice Age: The Meltdown” allowing artistic control of splashes [9]. We took the artistic way in our game, with a mesh used to represent the crown shape and a sprite to simulate the splash droplets. The picture below shows a wireframe of the effect with the crown mesh in white and the splash droplets in red (left). The crown mesh and the splash droplets are mapped with an animated material. The crown mesh (middle) is scaled during the effect. Result (right). FX by Timothée Letourneux:

Lastly, a word on the splash distribution. The number of splashes to generate depends on the number of raindrops; we keep it simple and link the number of rain splashes to a rain intensity value (more details at the end of the post: Rain effects control panel). A rain splashes YouTube video shows the splashes in action.

Here are two shots of rain splashes:

Other kinds of falling drops, like drops from rooftops or trees after it has stopped raining, can be simulated by individual particle systems reusing the depth map produced for the rain. The splash appearance is exactly the same.

On PS3, a 256×256 depth map rendering dominated mainly by characters takes around 0.32ms and the rain splashes under heavy rain take around 0.33ms.
On XBox360, the depth map takes around 0.20ms and the rain splashes under heavy rain take around 0.25ms.
As said at the start of this section, if characters are too costly to render, you can spawn random splashes on their heads and shoulders. The difficulty here is to get the same splash distribution as on the ground.

Added note:
We don’t include lighting information in the rain splashes for performance reasons, but as we saw in the observation post this has an important impact at night: rain splashes are more visible when backlit. The Toy Shop demo uses an overhead lightmap to simulate sky and street lamp lighting and computes some average lighting in the vertex shader [2].


Rain / Rain streaks

Rain is the most complicated of the rain effects (but also the most studied) and is costly to get right. The observation post showed that it is difficult to perceive the rain. The best way to see the components of rain is with a bright light or when it is raining very hard. In all cases what we see are long streaks, but in reality the rain is composed of individual raindrops. If we wanted to render real-world raindrops at a normal low resolution, the raindrops would be just a few pixels wide. Moreover, as a raindrop is very faint in bright regions and tends to appear stronger in dark areas, it is not a good idea to match reality exactly in a game. We won’t take into account the complex appearance of raindrops; nevertheless, knowing what happens under the hood is interesting.

A detailed description of the physical properties of a raindrop can be found in [6] and [11]. Here is the main information:
- Raindrops are transparent objects of 0.5-10mm size. Smaller raindrops are spherical in shape but larger ones are oblate spheroids.

- Raindrops refract light from a large solid angle (165°) of the environment (including the sky) towards the camera. Moreover, the incident light that is refracted is only attenuated by 6%. Specular and internal reflections further add to the brightness of the drop. Thus, a drop tends to be much brighter than its background (the portion of the scene it occludes).
- The solid angle of the background occluded by a drop is far less than the total field of view of the drop itself. Thus, in spite of being transparent, the average brightness within a stationary drop (without motion-blur) does not depend strongly on its background.
- The brightness of a raindrop is not affected by other raindrops. This is because for any given raindrop the total solid angle subtended by other raindrops (raindrops are small and are far apart) is insignificant to that subtended by the environment.


Citation and image from source [11]

As observed in the picture, the world is refracted through the raindrop. Correctly rendering a raindrop requires rendering a spherical shape with reflection, refraction and internal reflection.

As they fall, raindrops undergo rapid shape distortions (oscillations). A raindrop reaches a constant velocity of between 3 m/s for smaller raindrops (1mm) and 9 m/s for larger ones (> 4mm). Their motion produces randomly varying spatial and temporal intensities in the image. This is perceived as streaks due to the long exposure time of a camera or our eyes: the intensities produced by raindrops are motion-blurred. Rain streak appearance is detailed in [8]:

The appearance of rain streaks depends on many factors – the lighting and viewing directions; the distances from the source and the camera; the oscillation parameters; the size of the drop and the camera’s exposure time. (…)

Simple photometric models can only be used when the rendered rain is at a great distance from the camera, in which case, all the streaks are thin enough to make the details of their brightness patterns irrelevant. In close-up shots of rain, however, each raindrop projects to a large image streak, revealing the intensity pattern within it. This pattern is highly complex because of shape distortions that the raindrop undergoes as it falls. These shape distortions are due to oscillations induced by aerodynamic forces and surface tension. The interaction of the shape distortions with light result in speckles, multiple smeared highlights and curved brightness contours within the rain streak.

A large database of rendered streaks under different lighting, viewing and oscillation conditions is publicly available [8]. An old NVidia SDK includes a sample using a part of this database [13]. Using a database like this is hard on the current console generation and, as we said earlier, we are not looking to exactly match reality in games.

There are two methods to implement raindrops in a game: either with a particle system or with large textures.

The particle system method often consists of representing the streaks with simple shapes such as rectangles. Particle systems often produce realistic movement; they can be wind-driven and efficiently simulated on the GPU [14]. For performance, the particle system is linked to the camera to minimize the number of particles to manage. For example, “Space Marine” uses a view-frustum-restricted particle generator [1]. The main downside of particle systems is the lack of scalability: stronger precipitation requires increasing the number of particles, lowering the framerate.

The large textures method uses animated textures (either procedural or hand-authored) representing several streaks. Unlike a particle system, the large textures method has the same performance overhead for heavy precipitation as for light precipitation, but it lacks the realism of raindrop depth and movement. The Toy Shop demo uses textures mapped onto a screen quad (commonly named a postprocess). The demo tries to emulate multiple layers of raindrops moving at different speeds and at varied depths within a single rendered screen quad. Different inputs are used to generate a “w” parameter for a projective texture fetch [3]. However, the weakness of this approach appears quickly when the camera is moving around. The rain effect being a postprocess, looking downward makes raindrops fall parallel to the ground (as illustrated by the Toy Shop demo shot below).

“Flight Simulator 2004″ maps four animated textures onto a double cone [12]. By using a cone mesh and tilting it to adjust for camera movement, they allow precipitation to fall toward the camera. They scale down each of the four successive textures and scroll them more slowly, creating drops that are smaller and move slower to simulate depth with parallax.

Cone mesh and texture from [12]

To support dynamic rain with varying intensities at reasonable performance, we developed an approach similar to “Flight Simulator 2004″, described here by Antoine Zanuttini:

We define four layers of rain, each representing an area in front of the camera.


Each layer uses the same pre-motion blurred raindrops texture:


We map the texture on a half-cylinder, half-cone mesh linked to the camera and positioned at the camera origin. We render from inside the cylinder. We could smoothly fade the raindrops at the top and bottom when looking up or down (like [12]) by simply storing opacity in the vertex colors, but we chose not to do it.


We translate and non-uniformly scale the cylinder’s texture coordinates for each layer at different speeds and sizes. The translation simulates the raindrop motion; far layers use a bigger scale factor to increase the number of raindrops with distance. To simulate the feeling of wind, and of raindrops not always falling straight down as seen in the observation post, we apply an additional cyclic rotation. In practice, we have not based the rotation on the wind direction but rather on artistic values to get a sort of chaotic motion. The two shots below show the cylinder’s texture coordinates after transformation for the first two layers:


The 4 textures below show the result of each of the individual layers (from near to far):


Here is the pseudo code of the texture coordinate transformation for the first two layers:

float2 SinT = sin(Time.xx * 2.0f * Pi / speed.xy) * scale.xy;
// rotate and scale UV
float4 Cosines = float4(cos(SinT), sin(SinT));
float2 CenteredUV = UV - float2(0.5f, 0.5f);
float4 RotatedUV = float4(dot(Cosines.xz * float2(1, -1), CenteredUV)
                        , dot(Cosines.zx, CenteredUV)
                        , dot(Cosines.yw * float2(1, -1), CenteredUV)
                        , dot(Cosines.wy, CenteredUV)) + 0.5f;
float4 UVLayer12 = ScalesLayer12 * RotatedUV.xyzw;
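The rotation part of this transform is a standard 2D rotation around the texture center; in plain Python, one layer looks like this (a sketch, the shader packs two layers into one float4):

```python
# Rotate a UV coordinate around the texture center (0.5, 0.5) by a
# time-varying angle (the "SinT" value in the shader above).

import math

def rotate_uv(uv, angle):
    u, v = uv[0] - 0.5, uv[1] - 0.5          # center the UV
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    return (cos_a * u - sin_a * v + 0.5,     # dot(Cosines.xz * float2(1,-1), ...)
            sin_a * u + cos_a * v + 0.5)     # dot(Cosines.zx, ...)
```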

To get a feeling of depth and parallax within the raindrop effect, we want to use the depth buffer to occlude the raindrops. For this, each raindrop requires a “virtual” position. We already defined that each layer represents an area in front of the camera. Within each layer, we use a heightmap texture, scaled and biased with the layer attributes, to give each raindrop a depth inside the area. We use the direction from the view to the cylinder pixel position and this depth to retrieve the virtual world position of the raindrop. Generating the heightmap texture is not an easy task; it is best generated procedurally.

As for particles, a soft depth test can be performed to progressively decrease the opacity of raindrops. This depth occlusion increases the realism of rain, particularly in urban environments with lots of occluders in front of the camera. In a third person game like ours, you can see some raindrops falling between the player and the camera. A good side effect of this test is that raindrops disappear when looking at the ground, because the depth of the raindrops is behind the depth of the ground. Another occlusion to take into account is the one from the sky. When we are under cover, we expect to have no raindrops. This can easily be achieved by reusing the depth map generated for the rain splashes. As with a shadow map, we can project the virtual position of the raindrop and do a depth comparison to know whether this raindrop is expected to reach its virtual position or whether it should be stopped. The test can be performed with the hardware PCF feature if the depth map supports this format.

In practice, we perform several simplifications for performance reasons. We chose to perform the raindrop occlusion tests only on the first two layers and at a lower resolution. Our depth map generated for rain splashes has a limited range, so we set up the distance of the first two layers in order to be covered by the depth map (remember that the depth map is shifted toward the camera). We also decided not to project the virtual position for the distance occlusion, but to simply do a depth difference test in view space. Here is some pseudo code for the first two layers:

// Background pixel depth - in view space
float Depth = CalcSceneDepth(ScreenPosition);
// Layers depth tests:
float2 VirtualDepth = 0;
// Constants are based on the layers' distance
VirtualDepth.x = tex2D(Heightmap, UVLayer12.xy).r * RainDepthRange.x + RainDepthStart.x;
VirtualDepth.y = tex2D(Heightmap, UVLayer12.zw).r * RainDepthRange.y + RainDepthStart.y;
// Mask using the virtual position and the scene depth
float2 OcclusionDistance = saturate((Depth - VirtualDepth) * 10000.0f);
// Calc virtual position
float3 Dir = normalize(PixelPosition);   // Cylinder is linked to the camera
float3 VirtualPosition1WS = CameraPositionWS.xyz + Dir * DepthLayers.x;
float3 VirtualPosition2WS = CameraPositionWS.xyz + Dir * DepthLayers.y;
// Mask using the virtual layer depth and the depth map
// RainDepthMapTest uses the same projection matrix as the
// one used to render the depth map
float2 Occlusion = 0;
Occlusion.x = RainDepthMapTest(VirtualPosition1WS);
Occlusion.y = RainDepthMapTest(VirtualPosition2WS);
Occlusion *= OcclusionDistance;

To take raindrop occlusion into account for the other, far layers (3 and 4), we smoothly mask in each layer based on the layer distance and the depth buffer at full resolution. We also perform this test for the first two layers to correct the artifacts introduced by the low resolution occlusion tests.

// Depth is in view space
// RainDepthStart contains the start distance of each layer
// RainDepthRange contains the area size of each layer
float4 Mask = saturate((Depth - RainDepthStart) / RainDepthRange);
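The same mask evaluated for one pixel in plain Python (the layer distances here are made-up values for illustration):

```python
# saturate((Depth - RainDepthStart) / RainDepthRange), one value per layer:
# each layer fades in as the scene depth passes the layer's start distance.

def layer_masks(scene_depth, starts, ranges):
    return [min(max((scene_depth - s) / r, 0.0), 1.0)
            for s, r in zip(starts, ranges)]

# e.g. four layers starting at 1, 3, 6 and 10 meters, each 2 meters deep
masks = layer_masks(4.0, [1.0, 3.0, 6.0, 10.0], [2.0, 2.0, 2.0, 2.0])
```

A pixel 4 meters away fully shows layer 1, half of layer 2, and none of the far layers.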

The shot below shows the mask result for the first three layers:


For the two far layers (3 and 4), we add an aesthetic feature to increase raindrop variety. We generate two smoothly changing pattern textures used as blend masks to attenuate raindrops, avoiding repeating patterns of falling raindrops. These textures are generated at a lower resolution, at the same time as the raindrop occlusion tests. Here are two samples of generated blend masks:


Let’s take a look at the low and full resolution pass pseudo code:

void RainLowPixelShader(...)
{
    // Mask with magic values (details are not provided as this is an "artistic" feature)
    // Layer 3
    float2 NoiseUV = tex2D(DistortionTexture, DistoUV.xy).xy
                     + tex2D(DistortionTexture, DistoUV.zw).xy;
    NoiseUV = NoiseUV * UV.y * 2.0f + float2(1.5f, 0.7f) * UV.xy
                     + float2(0.1f, -0.2f) * Time;
    float LayerMask3 = tex2D(NoiseTexture, NoiseUV).r + 0.32f;
    LayerMask3 = saturate(pow(2.0f * LayerMask3, 2.95f) * 0.6f);

    // Layer 4
    float LayerMask4 = tex2D(NoiseTexture, BlendUV.xy).r
                       + tex2D(NoiseTexture, BlendUV.zw).r + 0.37f;
    // Background pixel depth - in view space
    float Depth = CalcSceneDepth(ScreenPosition);
    // Layers depth tests:
    float2 VirtualDepth = 0;
    // Constants are based on the layers' distance
    VirtualDepth.x = tex2D(Heightmap, UVLayer12.xy).r * RainDepthRange.x + RainDepthStart.x;
    VirtualDepth.y = tex2D(Heightmap, UVLayer12.zw).r * RainDepthRange.y + RainDepthStart.y;
    // Mask using the virtual position and the scene depth
    float2 OcclusionDistance = saturate((Depth - VirtualDepth) * 10000.0f);
    // Calc virtual position
    float3 Dir = normalize(PixelPosition);   // Cylinder is linked to the camera
    float3 VirtualPosition1WS = CameraPositionWS.xyz + Dir * DepthLayers.x;
    float3 VirtualPosition2WS = CameraPositionWS.xyz + Dir * DepthLayers.y;
    // Mask using the virtual layer depth and the depth map
    // RainDepthMapTest uses the same projection matrix as the
    // one used to render the depth map
    float2 Occlusion = 0;
    Occlusion.x = RainDepthMapTest(VirtualPosition1WS);
    Occlusion.y = RainDepthMapTest(VirtualPosition2WS);
    Occlusion *= OcclusionDistance;

    OutColor = float4(Occlusion.xy, LayerMask3, LayerMask4);
}


The texture generated by the low resolution pass is used when rendering the cylinder at full resolution to get the final rain effect:

// Depth is in view space
float Depth = CalcSceneDepth(ScreenPosition);
// RainDepthStart contains the start distance of each layer
// RainDepthRange contains the area size of each layer
// RainOpacities allows controlling the opacity of each layer (useful with
// lightning or to mask a layer)
float4 Mask = RainOpacities * saturate((Depth - RainDepthStart) / RainDepthRange);

float2 MaskLowUV = ScreenPosition.xy * float2(0.5f, -0.5f) + float2(0.5f, 0.5f);
float4 MaskLow = tex2D(RainLowTexture, MaskLowUV);

float4 Values;
Values.x = tex2D(RainTexture, CylUVLayer1.xy).r;
Values.y = tex2D(RainTexture, CylUVLayer1.zw).r;
Values.z = tex2D(RainTexture, CylUVLayer2.xy).r;
Values.w = tex2D(RainTexture, CylUVLayer2.zw).r;

// The merge of all the masks (occlusion, pattern, distance) is performed here
float RainColor = dot(Values, Mask * MaskLow);

float3 FinalColor = RainColor.xxx * 0.09f * RainIntensity;

In the code you can see a RainOpacities variable. This is used by level designers to mask some layers depending on the game context. For example, as we have no depth map occlusion for the far layers, level designers can help by “disabling” them. This is also useful to control the raindrop opacity based on lightning, as described in the Toy Shop demo. The code also introduces the rain intensity value. Rain intensity allows dynamically defining the strength of the rain. For performance, we simply modulate the raindrop color intensity. As the rain texture includes different raindrop color intensities, this allows more and more raindrops to become visible as the rain intensity value increases. A thing to note is that you can’t display more raindrops than are present in the base texture. All the code written tries to remove raindrops (occlusion, pattern, distance mask…), so your rain texture must be authored with maximum rain intensity in mind.

You can see the result of our rain effect in a raindrops YouTube video (for the raindrops it is better to watch the HD version).

The video shows the behavior of the different layers by enabling/disabling them, the influence of rain intensity, the layer distances, the occlusion tests… The shots below show, respectively: no rain, the rain effect enabled, and an exaggerated version to better highlight the layers and occlusions (click for high resolution):

You can also see the rain effects directly in the Remember Me PS3 game footage trailer:

For the sake of optimization, as we said before, we do two passes: one low resolution pass at quarter resolution, and one at full resolution drawing a full screen cylinder. The full resolution pass is merged with all the other postprocesses (motion blur, depth of field, color grading, tone mapping…) to avoid redundant resolve and blending work, so we effectively use a cylinder for them too.

On PS3 the low resolution pass takes around 0.40ms and the full resolution pass (the additional cost of the rain effect in the postprocess pass) 1.29ms, for a total of 1.69ms.
On XBox360 the low resolution pass takes around 0.34ms and the full resolution pass 1.38ms, for a total of 1.72ms.
Timings are the same whatever the strength of the rain.

Limitation: the current raindrop occlusion method based on the soft depth test doesn’t work with translucent objects. When seeing rain through a clear window this is often not a problem, but when the object is half translucent the raindrops can appear in front of it.

Added note:

It would be possible to use a specific texture matching the rain texture to assign each raindrop a rain intensity threshold. When increasing the rain intensity value, the threshold is compared against it to enable or disable the raindrop. The texture would be similar to the heightmap texture but with a different intensity distribution. However, the current result was already satisfying and we wanted to save some instructions.
I would also like to add for reference an interesting piece of code extracted from the ShaderX5 article on rain in the Toy Shop demo [3]. We don’t talk about lighting here because lighting is too expensive in our case. But it is good to know that to get more realistic raindrops, as seen in the small theory section at the beginning, reflection, refraction and internal reflection should be taken into account. The Toy Shop demo chose to manage reflection and refraction by sampling an environment map. If your engine supports it, this fits nicely with any zone-based or local parallax cubemap approach (see Image-based Lighting approaches and parallax-corrected cubemap):

float3 SiTransmissionDirection (float fromIR, float toIR, 
    float3 incoming, float3 normal)
{
    float eta = fromIR/toIR; // relative index of refraction
    float c1 = -dot (incoming, normal); // cos(theta1)
    float cs2 = 1.-eta*eta*(1.-c1*c1); // cos^2(theta2)
    float3 v = (eta*incoming + (eta*c1-sqrt(cs2))*normal);
    if (cs2 < 0.) v = 0; // total internal reflection
    return v;
}

// Reimplemented - Not in the ShaderX5 article
float3 SiReflect (float3 view, float3 normal)
{
    return 2*dot (view, normal)*normal - view;
}
// In the main shader:
// Retrieve the normal map normal for rain drops:
float3 vNormalTS = tex2Dproj( tBump, texCoord ).xyz;
vNormalTS = SiComputeNormalATI2N( vNormalTS );
// Compute normal in world space:
float3x3 mTangentToWorld = float3x3( normalize( i.vTangent ), 
                            normalize( i.vBinormal ), normalize( i.vNormal ));
float3   vNormalWS       = normalize( mul( mTangentToWorld, vNormalTS ));
// Compute the reflection vector:
float3 vReflectionWS = SiReflect( vViewWS, vNormalWS );
// Environment contribution:
float3 cReflection = texCUBE( tEnvironment, vReflectionWS );
// Approximate fresnel term
float fFresnel = SiComputeFresnelApprox( vNormalWS, vViewWS );
// Compute refraction vector: 0.754 = 1.003 (air) / 1.33 (water)
float3 vRefractWS = SiTransmissionDirection( 1.003, 1.33, vViewWS, vNormalWS );
// Refraction contribution:
float3 cRefraction = texCUBE( tEnvironment, vRefractWS );
cResult = saturate( (cReflection * fFresnel * 0.25f) +
                     cRefraction * (1.0f - (fFresnel * 0.75 )));

[20] also presents a lightweight model to simulate refraction, reflection and internal reflection. The nice idea here is the use of a precomputed texture mask that gives the direction of the refracted viewing vector for a quasi-spherical raindrop. The refraction vector is later used to index a texture storing a wide field of view render of the background. Internal reflection is simulated for direct light with a simple factor.

A technique of interest to light raindrops at low cost is described in [18] with an "inferred renderer". Raindrops are lit like other objects.

Droplets / wall glides / additional rain

As seen in the observation post, rain is not just about splashes, raindrops and puddles. Many objects interact with the accumulated water based on their curvature and environment. Adding all these elements increases the rainy mood. But compared to the other effects, they are generally less scalable and less controllable due to production time constraints. They are best suited to known rain conditions in an area. Dynamic rain is still an option by blending the effects in and out, but they will not really adapt to all weather conditions. They are usually created by FX artists in very creative (and not always realistic) ways and placed by hand in the scene. For example in Bioshock [5], dripping water is created with a cylinder mesh; cascading water that interacts with objects with the help of a 1D shadowmap is also described there. On the programming side, the Toy Shop demo describes effects like raindrops falling off various objects using a drop normal map, the Fresnel equation, reflection, refraction, etc. [2].

One of our FX artists, Orson Favrel, will describe a few of the miscellaneous effects we use. A YouTube video of these FX effects is available (the results are difficult to see in static images):

Droplets on glasses:

This effect uses a droplet sliding texture (below). The texture is sampled twice with different translations and scales. The result is also used to enable a distortion effect where there are droplets. To get back some view-dependent lighting information, we sample a low resolution environment cubemap (available to all objects in our renderer, see Image-based Lighting approaches and parallax-corrected cubemap) and add it to the actual color.


Falling droplets:


This effect is best seen in the video. Falling drops are generated by water accumulated on the edge of a roof. The goal of this FX is to be generic enough to be reused. A random location is taken on a thin cylinder. The size of the cylinder, the size of the drops and the spawn rate are all configurable, to adapt to different edge sizes and fixed weather conditions. As for droplets on glass, we use a low resolution environment cubemap for the lighting integration with some tint value. Here is a wireframe of the effect with the cylinder in blue and droplets in red:
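Picking the random spawn location on the thin cylinder can be sketched as follows (a minimal Python sketch; the function name and parameterization are hypothetical, and the engine version would also apply the emitter's world transform):

```python
import math
import random

def random_point_on_cylinder(radius, height):
    """Pick a random spawn location on the side of a thin cylinder
    aligned with the z axis, base at z = 0."""
    angle = random.uniform(0.0, 2.0 * math.pi)
    z = random.uniform(0.0, height)
    return (radius * math.cos(angle), radius * math.sin(angle), z)

# A thin cylinder following a 2 m roof edge, 5 cm wide:
spawn = random_point_on_cylinder(0.05, 2.0)
```

The cylinder size, drop size and spawn rate then become the configurable parameters mentioned above.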


Wall glides:


This effect is similar to droplets on glass, but in the texture the blue channel is used for the water glide and the green channel controls the crack shape. The goal is to simulate water infiltrating the wall.


Depending on the situation, we can also use a detailed mesh (to preserve the organic shape of the water) instead of sprites, which tend to have a lot of overdraw for this effect:


Additional rain:
Sometimes the raindrops effect is not practical because it is not flexible enough, for example when you are indoors looking outside through broken glass (more visible in the video). In this case, it is cheaper and more controllable to add a specific rain mesh. Vertex colors are used to control the falling speed of the drops, the delay of drops at startup and movement variation. The mesh is stretched and moved inside the vertex shader:


Camera droplets

By Laury Michel.

An important cue for immersion in a rainy scene is the presence of raindrops on the camera lens when the camera is facing upward. This kind of effect is always implemented in an artistic way in games. In order to achieve this effect, we first attempted to use a fullscreen postprocess, blending in a distortion texture when the camera looked upward and fading out the effect when it no longer did. This worked quite well except for two problems:
- Apart from the fading in and out, it wasn’t that dynamic.
- It was quite costly.
So our second attempt was directed toward implementing a screen space effect using particles. It has the advantage of reducing the fillrate needed and allows for more dynamic effects using the particle system framework (Unreal Engine 3's Cascade in our case). Drawing the particles directly in screen space is not easily done within the particle system framework. An easier approach is to draw the particles as if they were in view space (on the near plane in front of the camera) and link their transformation to that of the camera. It has some disadvantages as well: changing the FOV affects the way those particles look, for example, as does changing the screen ratio. But these problems were negligible in our case. Here is the pseudo-code added in the spawn particles module:

if (ParticleSystem->UseAsCameraLensEffect())
{
    // The view matrix transforms from world to view.
    // Here we want the view transform to have no effect,
    // as if we were spawning in view space,
    // so use the inverse of the view matrix.
    ParticleSystem->LocalToWorld = View->InvViewMatrix;
}
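Why this works can be checked numerically: composing the view transform with its inverse yields the identity, so particles spawned with LocalToWorld set to the inverse view matrix stay fixed relative to the camera. A minimal sketch with a toy 2D rotation/translation "camera" (pure Python; the names are hypothetical):

```python
import math

def mat_mul(a, b):
    """Multiply two 3x3 row-major matrices (homogeneous 2D transforms)."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def view_matrix(angle, tx, ty):
    """Toy 2D 'view' transform: rotate by angle, then translate."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]]

def inverse_view(angle, tx, ty):
    """Analytic inverse of view_matrix."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, s, -(c * tx + s * ty)],
            [-s, c, s * tx - c * ty],
            [0.0, 0.0, 1.0]]

# With LocalToWorld set to the inverse view matrix, the net
# transform applied to the particles is the identity:
view = view_matrix(0.7, 3.0, -2.0)
local_to_world = inverse_view(0.7, 3.0, -2.0)
composed = mat_mul(view, local_to_world)
```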

It's worth noting that we also use particle trimming [10] to further reduce the fillrate.
In practice, our FX artists generate particles with a planar distribution (in the picture below they are generated above the orange plane). These particles are then linked to the camera. FX by Florian Monsçavoir:


The effect can be seen in action at the beginning of the Raindrops YouTube video. The effect costs around 0.32ms on PS3 and 0.54ms on XBox360. Also, not shown in the video, in practice we generate more droplets on the camera when it faces upward, and if it is under cover we don't generate droplets at all. This can be tested by casting a ray in the direction opposite to the rain and testing collision with the world, or more simply by reusing the depth map of the rain splashes.

Rain effects control panel

This section is not an effect. It is a set of definitions allowing level designers to understand and manage the various rain effects. It is often difficult for a level designer to interpret a value. What is the meaning of a rain intensity of 0.7? Is it a strong rain? Moreover, if 0.7 is supposed to be a strong rain, does it feel like one? Defining common words is important, not only to set up the various effects more easily but also to allow debugging things when they go wrong. QA will be better able to tell whether something is wrong if they read in the design document that a particular area should have strong rain: if a strong rain has few ripples and doesn't fill puddles, this may be a bug in the code. The best thing to do when dealing with natural phenomena is to use the widely available and well understood common glossary. We base our rain intensity parameter on the glossary of meteorology [15]. The Wikipedia "Rain" page reproduces it [16]:

Rainfall intensity is classified according to the rate of precipitation:

  • Light rain — when the precipitation rate is < 2.5 mm per hour
  • Moderate rain — when the precipitation rate is between 2.5 mm and 7.6 mm or 10 mm per hour
  • Heavy rain — when the precipitation rate is > 7.6 mm per hour, or between 10 mm and 50 mm per hour
  • Violent rain — when the precipitation rate is > 50 mm per hour

Intensity and duration of rainfall are usually inversely related, i.e., high intensity storms are likely to be of short duration and low intensity storms can have a long duration.

The precipitation value can be used to predict the number of raindrops of radius a per unit volume. The Marshall-Palmer distribution [17] allows calculating it: N(a)=8000 * e^{-4.1*h^{-0.21}*a}, where h is the rain rate given in mm/hr, a is the radius of the drop in meters and N(a) is the number of raindrops per unit volume. Different sources give different numbers for the constants. What is important here is the shape of the curve and the difference between precipitation values: the number of raindrops decreases as the drop size grows. Here is a plot with blue (1mm/h), red (5mm/h) and yellow (30mm/h):
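The distribution is straightforward to evaluate (a minimal Python sketch using the constants as quoted above; remember that sources differ on them):

```python
import math

def marshall_palmer(a, h):
    """Number of raindrops of radius a per unit volume for a
    rain rate h in mm/hr: N(a) = 8000 * exp(-4.1 * h^-0.21 * a).
    Constants as quoted in the post; sources differ on them."""
    return 8000.0 * math.exp(-4.1 * h ** -0.21 * a)

# Larger drops are always rarer, but heavier rain decays more
# slowly with drop size, so large drops become more common:
light_small = marshall_palmer(0.5, 1.0)
light_large = marshall_palmer(2.0, 1.0)
heavy_large = marshall_palmer(2.0, 30.0)
```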


Another interesting graph of the distribution of rain drop sizes can be found in [19]. The figure shows that normal rainfall has a variety of drop size distributions. It illustrates the wider distribution of droplet sizes in heavier rain, which has the larger droplets.


The following items need to be considered in rainfall tests in the laboratory:

a. Raindrop size distribution.
Rates less than 25 mm h–1, drop size of 1 mm.
Rates greater than 25 mm h–1, drop size from 1 to 5 mm.

Text and graph from the NASA technical report on “Precipitation, Fog And Icing” [19]

These curves could be used to retrieve the correct raindrop distribution per unit volume for a given precipitation intensity. Of course, in the context of a game, physical precipitation rates and raindrop counts matter less than the perceived result, and a descriptive system is more useful than raw numbers. We chose to simply increase the number of raindrops linearly with the rain intensity, which keeps performance under control.
Here we will just use three kinds of rain, "Light", "Moderate" and "Heavy", and link all the parameters of this post to these words (there are other parameters, not shown here, which will be described in part b):

Rain type   Rain intensity   Raindrops threshold   Number of splashes
Light       0.33             0.33                  20
Moderate    0.66             0.66                  40
Heavy       1.0              1.0                   60

We have not included the "drizzle" rain word because this kind of rain is not visible enough in reality to be interesting. The raindrops threshold is used if you choose to support a per-raindrop threshold (with a texture as explained in the raindrops section); otherwise it equals the rain intensity.
All parameters are simply interpolated between the different kinds of rain when the rain intensity varies. This list is not exhaustive: you could add a rain duration range based on the Wikipedia statement that heavy rain is usually short, and other controls could be added, like defining probabilities for each rain type, etc.
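The interpolation between rain kinds can be sketched like this (a minimal Python version of the idea; the keys come from the table above, and the function names are hypothetical, not engine code):

```python
def lerp(a, b, t):
    """Linear interpolation between a and b."""
    return a + (b - a) * t

# Keyframes from the control panel table:
# (rain intensity, number of splashes).
RAIN_KEYS = [(0.33, 20.0), (0.66, 40.0), (1.0, 60.0)]

def splashes_for_intensity(intensity):
    """Linearly interpolate a parameter between the defined rain kinds,
    clamping below 'Light' and above 'Heavy'."""
    lo_i, lo_v = RAIN_KEYS[0]
    if intensity <= lo_i:
        return lo_v
    for hi_i, hi_v in RAIN_KEYS[1:]:
        if intensity <= hi_i:
            t = (intensity - lo_i) / (hi_i - lo_i)
            return lerp(lo_v, hi_v, t)
        lo_i, lo_v = hi_i, hi_v
    return RAIN_KEYS[-1][1]
```

The same scheme applies to any parameter listed in the table, not just the splash count.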

Other effects

Other effects are described in part B: Water drop 2b – Dynamic rain and its effects.


[1] Barrero, "Relic's FX System: Making Big Battles Come Alive", http://www.slideshare.net/proyZ/relics-fx-system
[2] Tatarchuk, "Artist-Directable Real-Time Rain Rendering in City Environments", http://www.ati.com/developer/gdc/2006/GDC06-Advanced_D3D_Tutorial_Day-Tatarchuk-Rain.pdf
[3] Tatarchuk, "Rendering Multiple Layers of Rain with a Post-Processing Composite Effect", ShaderX5
[4] Tatarchuk, "Artist-Directable Real-Time Rain Rendering in City Environments", http://developer.amd.com/wordpress/media/2012/10/Tatarchuk-Isidoro-Rain%28EGWNph06%29.pdf
[5] Alexander, Johnson, “The Art and Technology Behind Bioshock’s Special Effects”, http://gdcvault.com/play/289/The-Art-and-Technology-Behind
[6] Garg, Nayar, "Photometric Model of a Rain Drop", http://www1.cs.columbia.edu/CAVE/publications/pdfs/Garg_TR04.pdf
[7] Garg, Krishnan, Nayar, "Material Based Splashing of Water Drops", http://www1.cs.columbia.edu/CAVE/projects/mat_splash/
[8] Garg, Nayar, "Photorealistic Rendering of Rain Streaks", http://www1.cs.columbia.edu/CAVE/projects/rain_ren/rain_ren.php
[9] Thornton, “Directable Simulation of Stylized Water Splash Effects in 3D Space”, http://nguyendangbinh.org/Proceedings/Siggraph/2006/cd2/content/sketches/0106-thornton.pdf
[10] Persson,  “Graphics Gems for Games – Findings from Avalanche Studios”, http://www.humus.name/index.php?page=Articles
[11] Garg, Krishnan, Nayar, "Vision and Rain", http://www1.cs.columbia.edu/CAVE/publications/pdfs/Garg_IJCV07.pdf
[12] Wang, Wade, "Rendering Falling Rain and Snow", http://www.ofb.net/~niniane/rainsnow/rainsnow-sketch.pdf
[13] Tariq, “Rain”, http://developer.download.nvidia.com/SDK/10/direct3d/Source/rain/doc/RainSDKWhitePaper.pdf
[14] Feng, Tang, Dong, Chou, “Real-Time Rain Simulation”
[15] Glossary of Meteorology, http://amsglossary.allenpress.com/glossary/search?id=rain1
[16] Rain – Wikipedia, http://en.wikipedia.org/wiki/Rain
[17] Marshall, Palmer, “The distribution of raindrops with size”, http://journals.ametsoc.org/doi/pdf/10.1175/1520-0469%281948%29005%3C0165%3ATDORWS%3E2.0.CO%3B2
[18] Kircher, “Lighting & Simplifying Saints Row: The Third”, http://twvideo01.ubm-us.net/o1/vault/gdc2012/slides/Programming%20Track/Kircher_Lighting_and_Simplifying_Saints_Row_The_Third.pdf
[19] NASA, “Precipitation, Fog And Icing”, https://standards.nasa.gov/released/1001/1001_7.pdf
[20] Rousseau, Jolivet, Ghazanfarpour, “Realistic real-time rain rendering”, http://www.pierrerousseau.fr/rain.html

9 Responses to Water drop 2a – Dynamic rain and its effects

  1. Adrien Lamarque says:

    Well, that was a nice read ! You guys definitely did your research.

  2. seblagarde says:

    Add a link to the Remember me PS3 game footage trailer where you can see rainsplashes, raindrops and some rain effects directly into our current game (More visible in HD). Look for moment where the player is outside, or go directly on roof part at 7m20s:

  3. seblagarde says:

    v1.1: I discovered a new interesting document I wasn't aware of:
    NASA, "Precipitation, Fog And Icing", https://standards.nasa.gov/released/1001/1001_7.pdf
    So I made a small modification to the raindrop size distribution in the Rain effects control panel part.

  4. seblagarde says:

    v1.2: add a reference to a paper from Rousseau, Jolivet and Ghazanfarpour, “Realistic real-time rain rendering”, http://www.cse.iitb.ac.in/graphics/~pisith/references/realistic%20real%20time%20rain%20rendering.pdf and a really short sum up of it at the end of “Rain/Raindrops” section.

  5. seblagarde says:

    v1.3: add some words on limitation of the raindrops effect for the occlusion soft depth test with translucent objects.

  6. Pierre says:

    Hi seb,
    You might want to reference my article (your [20] reference) from its original source, and maybe even go through my other writings on the subject :
    By the way, nice work !

    • seblagarde says:

      Hi Pierre,
      Update done.

      I wasn't aware of your thesis; it seems you've done a lot of work on this topic. I will read it carefully. Thanks for sharing your links!

  7. Excellent article.

    I'm a vfx artist working in the film industry. In the last year I've had my hands on at least 50 shots that needed rain added to them. In an attempt to figure out an accurate and efficient method for creating realistic rain I used the database of rain streaks from Garg, Nayar, "Photorealistic Rendering of Rain Streaks" in some tests with fine results.

    In practice it’s turned out time and time again that the perception of rain has very little to do with the physical realities of rain. The clients would constantly come back with notes asking for more rain, brighter rain, more layers of rain, blowing rain, blowing mist. In the end the rain in the final shots were pretty much reality X10. The requested medium rainfall turns into what would amount to a torrential downpour in reality. To some degree this is a result of “We’re paying for rain! We want to see the rain!” but in the real world the experience of rain is so much more than just the sight of it that the task of simulating rainfall will always be difficult and subjective regardless of the advances in technology.

