GPU Pro 4 – Practical planar reflections using cubemaps and image proxies (with Video)

My co-worker Antoine Zanuttini and I have written an article for the GPU Pro 4 book:
[Image: GPU Pro 4 final cover]
I am particularly proud of the cover, as it is extracted from the project I worked on, “Remember Me” from Dontnod Entertainment, published by Capcom. There is a short summary available on the GPU Pro blog: http://gpupro.blogspot.fr/2013/02/gpu-pro-4-practical-planar-reflections.html which I will duplicate here.

Rendering scenes with glossy and specular planar reflections is a challenge in video games, particularly when targeting older hardware like the current consoles (PS3, Xbox 360). Real-time planar reflection is often implemented by re-rendering the scene from the point of view of a reflected camera, an expensive process. At the other extreme, several game developers use a simple generic 2D environment texture, which lacks realism but is really cheap. Others use screen-space local reflections, which have edge cases and still come at a non-negligible cost.

For the game “Remember Me”, we have developed a really fast planar reflection solution usable on any ground surface. In our GPU Pro 4 chapter we discuss how we render an approximation of the reflected scene with the help of parallax-corrected offline-generated elements: environment maps and image proxies. The planar reflection has enough quality for games and takes into account the roughness of the surface. The chapter discusses the algorithm details and follows on from the work we presented at Siggraph 2012, “Local Image-based Lighting With Parallax-corrected Cubemap”.

Our goal was not only to discuss the implementation of our algorithm but also its usage in the context of game development. We describe the tools we developed for our artists and the best practices they discovered. Artists are creative people, and they have pushed the boundaries of our tools in ways we did not expect. Want to see the result in action? Watch the video accompanying the article.

Siggraph 2012 and Game Connection 2012 talk : Local Image-based Lighting With Parallax-corrected Cubemap

Here are the slides of the talk my co-worker Antoine Zanuttini and I gave at Siggraph 2012 and Game Connection 2012: “Local Image-based Lighting With Parallax-corrected Cubemap”.

The two talks are similar, but the Game Connection talk is more recent and contains small updates. This is the one to get.

Game Connection 2012:

A short summary can be seen on the official Game Connection 2012 website: Local Image-based Lighting With Parallax-corrected Cubemap

Powerpoint 2007:  Parallax_corrected_cubemap-GameConnection2012.pptx
PDF: Parallax_corrected_cubemap-GameConnection2012.pdf
Accompanying videos of the slides:
Video 1 : https://www.youtube.com/watch?v=f8_oEg2s8dM

Video 2 : http://www.youtube.com/watch?v=Bvar6X0dUGs

Siggraph 2012:

A short summary can be seen on the official Siggraph 2012 website: Fast Realistic Lighting

Powerpoint 2007: Parallax_corrected_cubemap-Siggraph2012.pptx
PDF: Parallax_corrected_cubemap-Siggraph2012.pdf
Accompanying video of the slides: http://www.youtube.com/watch?v=Bvar6X0dUGs

Description:

The talk describes a new approach to local image-based lighting. It consists of mixing several local parallax-corrected cubemaps and using the result to light scene objects.
The advantages are accurate local lighting, no lighting seams and smooth lighting transitions.
The content of this talk is also described in the post: Image-based Lighting approaches and parallax-corrected cubemap.

Image-based Lighting approaches and parallax-corrected cubemap

Version : 1.28 – Living blog – First version was 2 December 2011
This post replaces and updates a previous post named “Tips, tricks and guidelines for specular cubemap” with information contained in the Siggraph 2012 talk “Local Image-based Lighting With Parallax-corrected Cubemap”, available here.

Image-based lighting (IBL) is common in games today to simulate ambient lighting, and it fits perfectly with physically based rendering (PBR). The cubemap parameterization is the most widely used thanks to its hardware efficiency, and this post focuses on ambient specular lighting with cubemaps. There are many different ways specular cubemaps are used in games; this post will describe most of them and include the description of a new approach to simulating ambient specular lighting with local image-based lighting. Even though most parts of this post are dedicated to the tool setup for this new approach, they can easily be reused in other contexts. The post will first describe different IBL strategies for ambient specular lighting, then give some tips on saving specular cubemap memory. Second, it will detail an algorithm to gather the nearest cubemaps around a point of interest (POI), calculate each cubemap’s contribution and efficiently blend these cubemaps on the GPU. It will finish with several algorithms to parallax-correct a cubemap and their application to the local IBL with parallax-corrected cubemap approach. I use the term specular cubemap to refer to both classic specular cubemaps and prefiltered mipmapped radiance environment maps. As always, any feedback or comments are welcome.

IBL strategies for ambient specular lighting

In this section I will discuss several strategies of cubemap usage for ambient specular lighting. The choice of the right method to use depends on the game context and engine architecture. For clarity, I need to introduce some definitions:

I will divide cubemaps into two categories:
– Infinite cubemaps: These cubemaps are used as a representation of infinitely distant lighting; they have no location. They can be generated with the game engine or authored by hand. They are perfect for representing low-frequency lighting scenes like outdoor lighting (i.e. the light is rather smooth across the level).
– Local cubemaps: These cubemaps have a location and represent finite environment lighting. They are mostly generated with the game engine from a sample location in the level. The generated lighting is only correct at the location where the cubemap was captured; all other locations must be approximated. Moreover, as a cubemap by definition represents an environment at infinity, there is a parallax issue (reflected objects are not at the right position) which requires tricks to compensate (see the sketch below). They are used for medium- and high-frequency lighting scenes like indoor lighting. The number of local cubemaps required to match the lighting conditions of a scene increases with the lighting complexity (i.e. if you have a lot of different lights affecting a scene, you need to sample the lighting at several locations to be able to reproduce the original lighting conditions).
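
As an illustration of such a trick, here is a minimal sketch of a common box-projection approach to parallax correction, assuming the local environment is approximated by an axis-aligned box; all names are illustrative and not the exact engine code.

// Minimal sketch of a parallax-corrected cubemap lookup using an axis-aligned
// box as proxy geometry. boxMin/boxMax bound the local environment and
// cubemapPosWS is the position where the cubemap was captured.
float3 ParallaxCorrectedDir(float3 positionWS, float3 reflDirWS,
                            float3 boxMin, float3 boxMax, float3 cubemapPosWS)
{
    // Intersect the reflection ray with the proxy box (ray/AABB intersection).
    float3 firstPlane  = (boxMax - positionWS) / reflDirWS;
    float3 secondPlane = (boxMin - positionWS) / reflDirWS;
    float3 furthest    = max(firstPlane, secondPlane);
    float  dist        = min(min(furthest.x, furthest.y), furthest.z);

    // Rebuild the lookup direction from the capture position to the hit point,
    // so that reflected objects appear at the right position.
    float3 hitPosWS = positionWS + reflDirWS * dist;
    return hitPosWS - cubemapPosWS;
}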

And as we often need to blend multiple cubemaps, I will define different cubemap blending methods:
– Sampling K cubemaps in the main shader and doing a weighted sum (see the sketch after this list). Expensive.
– Blending cubemaps on the CPU and using the resulting cubemap in the shader. Expense depends on the resolution, and it requires double-buffered resources to avoid GPU stalls.
– Blending cubemaps on the GPU and using the resulting cubemap in the shader. Fast.
– Only with a deferred or light-prepass engine: Applying K cubemaps with weighted additive blending. Each cubemap’s bounding volume is rendered to the screen, and the normal and roughness from the G-buffer are used to sample the cubemap.
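
For reference, here is a minimal sketch of the first method (weighted sum of K cubemaps in the main shader); it assumes the blend weights have already been computed and normalized to sum to 1, and uses a cubemap array only for brevity.

// Minimal sketch of blending method 1: sample K cubemaps in the main shader
// and do a weighted sum. Weights are assumed to be normalized (sum to 1).
#define K 4

float3 BlendCubemaps(TextureCubeArray cubemaps, SamplerState samp,
                     float3 dir, float mipLevel, float weights[K])
{
    float3 result = 0.0;
    for (int i = 0; i < K; ++i)
    {
        // The w component of the location selects the cubemap in the array.
        result += weights[i] * cubemaps.SampleLevel(samp, float4(dir, i), mipLevel).rgb;
    }
    return result;
}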

For all the strategies described below, I won’t talk about visual fidelity but rather about problems and advantages.

Object based cubemap
Each object is linked to a local cubemap. Objects take the nearest cubemap placed in the level and use it as a specular ambient light source. This is the approach adopted by Half-Life 2 [1] for its world specular lighting.
Background objects are linked to their nearest cubemap offline, while dynamic objects do dynamic queries at runtime. Cubemaps can have a range so they do not affect objects outside their boundaries, and they can affect background objects only, dynamic objects only, or both.

The main problem with object-based cubemaps is lighting seams between adjacent objects using different cubemaps.
Here is a screenshot (click for full res)

Read more of this post

AMD Cubemapgen for physically based rendering

Version : 1.67 – Living blog – First version was 4 September 2011

AMD Cubemapgen is a useful tool which allows cubemap filtering and mipchain generation. Sadly, AMD decided to stop supporting it. However, it has been made open source [1] and has been uploaded to a Google Code repository [2] to be improved by the community. With some modifications, this tool is really useful for physically based rendering because it allows generating an irradiance environment map (IEM) or a prefiltered mipmapped radiance environment map (PMREM). A PMREM is an environment map (in our case a cubemap) where each mipmap has been filtered by a cosine power lobe of decreasing cosine power value. This post describes the improvements I made to Cubemapgen, and a few others.
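
For context (this is runtime usage, not a Cubemapgen feature), a PMREM is typically sampled by selecting the mipmap whose cosine power matches the surface glossiness; the sketch below assumes a simple linear gloss-to-mip mapping, which is only an approximation of what an engine would derive from its gloss scale and bias.

// Minimal sketch of runtime PMREM usage: the surface gloss selects the mip
// whose cosine power lobe best matches the surface specular lobe. The linear
// gloss-to-mip mapping is a simplification.
float3 SamplePMREM(TextureCube pmrem, SamplerState samp,
                   float3 reflDir, float gloss, float mipCount)
{
    float mipLevel = (1.0 - gloss) * (mipCount - 1.0); // rougher -> blurrier mip
    return pmrem.SampleLevel(samp, reflDir, mipLevel).rgb;
}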

The latest version of Modified Cubemapgen (which includes the modifications described in this post) is available in the download section of the Google Code repository. Direct link: ModifiedCubeMapGen-1_66 (requires the VS2008 runtime and DX9).

This post will first describe the new features added to Cubemapgen; then, for interested (and advanced) readers, I will talk about the theory behind the modifications and go into some implementation details.

The modified Cubemapgen

The current improvements take the form of new options accessible in the interface:

(click for full res)


– Use Multithread: Allows using all hardware threads available on the computer. If unchecked, the default behavior of Cubemapgen is used; however, the new features are unsupported with the default behavior.
– Irradiance Cubemap: Allows a fast computation of an irradiance cubemap. When checked, no other filter or option is taken into account. An irradiance cubemap can be obtained without this option by setting a cosine filter with a base filter angle of 180, but this is a really slow process. Only the base cubemap is affected by this option; the following mipmaps use a cosine filter with some default values, but these mipmaps should not be used.
– Cosine power filter: Allows specifying a cosine power lobe filter as the current filter, i.e. the cubemap is filtered with a cosine power lobe. You must select this filter to generate a PMREM.
– MipmapChain: Only available with the cosine power filter. Allows selecting the mode used to generate the specular power values applied to each of the PMREM’s mipmaps.
– Power drop on mip, Cosine power edit box: Only available with the Drop mode of MipmapChain. Used to generate the specular power values for each of the PMREM’s mipmaps. The first mipmap uses the cosine power edit box value as the cosine power of the cosine power lobe filter; the cosine power is then scaled by the power drop on mip value to process the next mipmap, and this new cosine power is scaled again for the following mipmap, until all mipmaps are generated. For example, setting 2048 as the cosine power edit box value and 0.25 as the power drop on mip, you will generate a PMREM with each mipmap respectively filtered by a cosine power lobe of 2048, 512, 128, 32, 8, 2… (see the formula after this list).
– Num Mipmap, Gloss scale, Gloss bias: Only available with the Mipmap mode of MipmapChain. Used to generate the specular power values for each of the PMREM’s mipmaps. The values of Num mipmap, Gloss scale and Gloss bias are used to generate a specular power value for each mipmap.
– Lighting model: This option should be used only with the cosine power filter. The choice of lighting model depends on your game’s lighting equation. The goal is for the filtering to better match your in-game lighting.
– Exclude Base: With the cosine power filter, allows not processing the base mipmap of the PMREM.
– Warp edge fixup: New edge fixup method which does not use Width, based on NVTT from Ignacio Castaño.
– Bent edge fixup: New edge fixup method which does not use Width, based on the TriAce CEDEC 2011 presentation.
– Stretch edge fixup, FixSeams: New edge fixup method which does not use Width, based on NVTT from Ignacio Castaño. FixSeams allows displaying a PMREM generated with the Stretch edge fixup method without seams.
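
In other words, with the Drop mode the cosine power used for mipmap level i is simply:

CosinePower_{i} = CosinePowerEditBox \cdot (PowerDropOnMip)^{i}

which, with the example values above, gives 2048 \cdot 0.25^{i} = 2048, 512, 128, 32, 8, 2\ldots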

All modifications are available from the command line (print the usage details with “ModifiedCubemapgen.exe – help”).

Irradiance cubemap

Here is a comparison between an irradiance map generated with a cosine filter of 180 and one generated with the irradiance cubemap option (which uses spherical harmonics (SH) for fast processing): Read more of this post

PI or not to PI in game lighting equation

Version : 3.1 – Living blog – First version was 4 January 2012

With physically based rendering and the current trend of photo-realistic games, I feel the need to make my lighting equations more physically correct. A good start is to try to understand how our current game lighting equations behave. For a long time, the presence or absence of the terms \pi or \frac{1}{\pi} in my game lighting equations was based more on trial and error than on physics. The origin of these terms in game lighting equations has already been discussed by others [1][3][7][12], but as I found this subject confusing I dedicated this post only to that topic. This post is not about code; it is about understanding what we do under the hood and why it is correct. I care about this because more correct game lighting equations mean consistency under different lighting conditions, and so less artist time spent tweaking values. I would like to thank Stephen Hill for all the help he provided me on this topic.

This post is written as a memo for myself and as a help for anyone who was as confused as I was. Any feedback is welcome.

I will begin this post by talking about Lambertian surfaces and the specificity of game lights’ intensity, then talk about diffuse shading and conclude with specular shading. I will not define common terms found in the lighting field like BRDF, Lambertian surface… See [1] for all these definitions and for the notation used in this post.

Origin of \pi term confusion

The true origin of the confusing \pi term comes from the Lambertian BRDF, which is the most used BRDF in computer graphics. The Lambertian BRDF is a constant defined as:

f(l_c,v)=\frac{c_{diff}}{\pi}

The notation f(l_c,v) means a BRDF parameterized by the light vector l_c and the view vector v. The view vector is not used in the case of the Lambertian BRDF. c_{diff} is what we commonly call the diffuse color.
The first confusing \frac{1}{\pi} term appears in this formula. It comes from a constraint a BRDF should respect, named conservation of energy: the outgoing energy cannot be greater than the incoming energy, or in other words, you can’t create light. The full derivation of the \frac{1}{\pi} can be found in [3].
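
As a quick reminder of that derivation: the hemispherical reflectance of the Lambertian BRDF must not exceed 1, and since the integral of the clamped cosine over the hemisphere is exactly \pi, the \frac{1}{\pi} is what keeps the reflectance equal to c_{diff}:

\int_{\Omega}\frac{c_{diff}}{\pi}\underline{(n\cdot l)}\,d\omega=\frac{c_{diff}}{\pi}\int_{\Omega}\underline{(n\cdot l)}\,d\omega=\frac{c_{diff}}{\pi}\,\pi=c_{diff}\leq 1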

As you may note, the game Lambertian BRDF does not have this \frac{1}{\pi} term. Let’s see a light affecting a Lambertian surface in a game:

FinalColor = c_diff * c_light * dot(n, l)

To understand where the \frac{1}{\pi} disappeared, let’s see how a game light’s intensity is defined. Games don’t use a radiometric measure for the light’s intensity but a more artist-friendly measure [1]:

For artist convenience, c_{light} does not correspond to a direct radiometric measure of the light’s intensity; it is specified as the color a white Lambertian surface would have
when illuminated by the light from a direction parallel to the surface normal (l_c=n)

Which means that if you set up a light in a game with color c_{light} and point it directly at a diffuse-only quad mapped with a white diffuse texture, you get the color c_{light}.
Another way to see this definition is by taking the definition of a diffuse texture [2]:

How bright a surface is when lit by a 100% bright white light

Which means that if you set up a white light in a game with brightness 1 and point it directly at a diffuse-only quad mapped with a diffuse texture, you get the color of the diffuse texture.
This is very convenient for artists, who don’t need to care about the physical meaning of the light’s intensity unit.

These definitions allow us to define the punctual light equation used in games. A punctual light is an infinitely small light, like the directional, point or spot lights common in games.

L_o(v)=\pi f(l_c,v)\bigotimes c_{light}\underline{(n\cdot l_c)}

The derivation and the notation of this equation are given in [1]. L_o(v) is the resulting outgoing radiance in the direction of the view vector v, which is what you will use as the color of your screen pixel. v is not used for the Lambertian BRDF.

Using the punctual light equation with a Lambertian BRDF gives us:

L_o=\pi \frac{c_{diff}}{\pi} \bigotimes c_{light}\underline{(n\cdot l_c)}

Which I will rewrite more simply by switching to a monochrome light (\bigotimes is for RGB):

L_o=\frac{c_{diff}}{\pi} \pi c_{light} \underline{(n\cdot l_c)}

This shading equation looks familiar except for the \pi term. In fact, after simplification we get:

L_o=c_{diff} c_{light} \underline{(n\cdot l_c)}

Which is our common game diffuse lighting equation.

This means that, for artists’ convenience, the value they enter as brightness in the light’s settings is in fact the actual light brightness multiplied by \frac{1}{\pi} (the energy-conserving constant of the Lambertian BRDF). When artists put 1 in brightness, in reality they set a brightness of \pi. This is represented in the punctual light equation by \pi c_{light}. In this post, I will refer to this multiplication of the light brightness by \pi as the game light’s unit, and to the linked c_{diff} term as the game Lambert BRDF.
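
To make this concrete, here is a minimal sketch (assuming c_diff, c_light, n and l are defined as in the snippet above) showing that writing the shader with the explicit Lambertian BRDF and a radiometric intensity of \pi c_{light} gives exactly the common game formulation:

static const float PI = 3.14159265;

// c_light is the artist-facing brightness (the game light's unit).
// Explicit form: Lambertian BRDF (c_diff / PI) times the radiometric intensity (PI * c_light).
float3 diffuseExplicit = (c_diff / PI) * (PI * c_light) * saturate(dot(n, l));

// Common game form: the PI terms cancel out and give the same result.
float3 diffuseGame = c_diff * c_light * saturate(dot(n, l));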

In the following, I will describe the consequences of the game light’s unit on common diffuse lighting techniques used in games, and then on common specular lighting techniques.

Diffuse lighting in game

Lambert lighting

Read more of this post