Siggraph 2014 : Moving Frostbite to Physically based rendering V2

The slides, course notes, and Mathematica files for “Moving Frostbite to Physically Based Rendering” by me and my co-worker Charles de Rousiers are available here (the course notes have been updated to v2, the Mathematica files to v3):

And also on the official PBR course website (only the slides have been updated there for now):

The talk is a survey of current PBR techniques and of the small improvements we have made for the Frostbite engine. It covers many topics. Here is the table of contents of the course notes (available on the linked websites):

1 Introduction
2 Reference
2.1 Validating models and hypothesis
2.2 Validating in-engine approximations
2.3 Validating in-engine reference mode
3 Material
3.1 Material models
3.2 Material system
3.3 PBR and decals
4 Lighting
4.1 General
4.2 Analytical light parameters
4.3 Light unit
4.4 Punctual lights
4.5 Photometric lights
4.6 Sun
4.7 Area lights
4.8 Emissive surfaces
4.9 Image based lights
4.10 Shadow and occlusion
4.11 Deferred / Forward rendering
5 Image
5.1 A Physically Based Camera
5.2 Manipulation of high values
5.3 Antialiasing
6 Transition to PBR

v2 Update:
Over the past year we have received feedback from various people on our document (sorry, we forgot to keep a list of all of them). There were several mistakes, typos, and unclear statements. We have updated the course notes with all the reported errors and clarified some parts. The v2 course notes contain the following list of corrections (also listed on page 98 of the new course notes PDF):

– Section 3.2.1 – Corrected a wrong statement describing the micro-specular occlusion of the Reflectance parameter: “The lower part of this attribute defines a micro-specular occlusion term used for both dielectric and metal materials.” The descriptions of the BaseColor and Reflectance parameters have been updated.
– Section 3.2.1 – Removed the reference to Alex Fry's work on normal encoding, as it has not been done.
– Section 4.2 – Updated the description of color temperature for artificial light sources, including the concept of correlated color temperature (CCT).
– Section 4.4 – Clarified what lightColor is in Listing 4.
– Section 4.5 – Clarified what lightColor is in Listing 5.
– Section 4.6 – Updated and explained the computation of the Sun's solid angle and the estimated illuminance at the Earth's surface.
– Section – Added a comment in Listing 7: the FormFactor equation includes an invPi that needs to be canceled out (with Pi) in the sphere and disk area light evaluations.
– Section – Clarified in which case the diffuse sphere area light formula is exact above the horizon.
– Section – Clarified in which case the diffuse disk area light formula is exact above the horizon.
– Section 4.7.4 – Corrected Listing 15: the getDiffuseDominantDir parameter N is a float3.
– Section 4.7.5 – Corrected Listing 16: the getSpecularDominantDirArea parameters N and R are float3.
– Section 4.9.2 – Corrected the PDF of the specular BRDF and equations 48 to 60; they had missing components or mistakes. The code was correct.
– Section 4.9.3 – Corrected Listings 21/22/23: the getSpecularDominantDir parameters N and R are float3; the getDiffuseDominantDir parameters N and V are float3.
– Section 4.9.5 – Added and updated the comment about reflection composition: the composition weight computation for medium-range reflections was causing darkening when several local light probes overlapped. The previous algorithm considered that each local light probe's visibility covered a different part of the BRDF lobe (10 overlapping local light probes with 0.1 visibility resulted in 1.0). The new algorithm considers that they cover the same part of the BRDF lobe (10 overlapping local light probes with 0.1 visibility result in 0.1); see the sketch just after this list.
– Section 4.10.2 – Corrected Listing 26: roughness and smoothness were inverted. The listing has been updated and an improved formula has been provided. Figure 65 has been updated accordingly.
– Section 4.10.2 – Added a reference to the “Is Accurate Occlusion of Glossy Reflections Necessary?” paper.
– Section 5.2 – Fixed the wrong largest value for the 14-bit float format in the small float formats table. The 16-bit float format is a standard floating-point format with an implied 1 on the mantissa. The max exponent for 16-bit float is 15 (not 16, because 16 is reserved for INF), so its largest value is (1+m) * 2^maxExp = (1 + 1023/1024) * 2^15 = 65504. The 14-bit float format has no leading 1 but a max exponent of 16, so its largest value is m * 2^maxExp = (511/512) * 2^16 = 65408. The 10-bit and 11-bit float formats follow the same rules as the 16-bit float format.
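
To make the Section 4.9.5 change more concrete, here is a minimal sketch of the two weighting behaviors. It is only an illustration, not the actual Frostbite listing, and the names (numLocalProbes, localProbeVisibility, accumWeight) are made up for this example.

float accumWeight = 0.0f;
for (int i = 0; i < numLocalProbes; ++i)
{
    float probeVisibility = localProbeVisibility[i];
    // v1 behavior: visibilities were summed, so 10 overlapping probes of 0.1 reached 1.0
    // accumWeight += probeVisibility;
    // v2 behavior: overlapping probes are assumed to cover the same part of the BRDF lobe,
    // so 10 overlapping probes of 0.1 still give 0.1
    accumWeight = max(accumWeight, probeVisibility);
}
// Whatever weight remains (1 - accumWeight) is left for the other reflection levels.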

And here are the slides:

A few notes

Last year I gave a talk about Remember Me at GDC Europe 2013: The Art and Rendering of Remember Me. In it I said: “Converting an engine to physically based rendering is easily done. The hard part is on the artists' side, and this is where we spent most of our effort.”

I have learned a lot during the past year at EA Frostbite, and I now see how hard the physically based rendering path is on the engine side as well. Touch one thing and everything else is impacted. Keeping everything coherent is an insane amount of work, a lot more than having a gamma-correct pipeline. The massive course notes we have produced are proof of that. We have tried to cover a lot of topics related to PBR, and to tackle subjects that are rarely discussed, like camera exposure implementation. But in the end it brings more questions than answers. While writing this document I often questioned the correctness of what we are doing. Is this how the real world really works?

There are plenty of unsolved topics regarding PBR within real-time constraints, and I hope that by sharing this knowledge people will push the graphics boundary further. Here is a short list:

– Good specular area lights with a GGX NDF and good behavior at grazing angles for several shapes: sphere, disk, rectangle, tube. The dimensionality is crazy, preventing full precomputation.
– Good area shadows with dynamic lights (I like the look of Epic's static area lights based on distance fields).
– Distance-based roughness; it is really difficult to model with a GGX lobe. It is in fact difficult to get the footprint of a GGX lobe on a plane. This is similar to the rectangular area light problem.
– GGX IBL behavior at grazing angles, i.e. the stretched look. Most people think of using an unwrapped 2D texture with anisotropic fetches, but it would require fetching a lot of samples to get a good look (note: anisotropic filtering doesn't currently exist for cubemaps in hardware).
– A cheaper LEADR that works with GGX.
– Accurate multi-layered materials. The best we have today is the Weidlich and Wilkie multilayer BRDF.
– Microfacet BRDFs with inter-reflection (currently they only model one bounce).
– Energy conservation between the specular and diffuse terms, particularly in a multilayer model. It feels impossible to solve with a microfacet BRDF, as it loses energy due to the shadowing term. This is important, for example, to avoid doubling the lighting when you apply both a specular IBL and a diffuse IBL on a rough object.
– What is the behavior of light at grazing angles for a rough diffuse object? This seems particularly tricky to study, as measurement devices go crazy at grazing angles. Eric Heitz, in his PhD thesis, found a nice approach: he models a plausible rough surface with Gaussian statistics and brute-force simulates the lighting with ray tracing to study its behavior. This is maybe the way to go. Why bother? Because Oren-Nayar is not enough; because we are not sure whether the GGX-derived diffuse term, which is mathematically correct, is good; and because the Disney model matches MERL and has some characteristics that other BRDFs don't have, despite being empirical. Which part belongs to specular, and which part belongs to diffuse? The correlated Smith G term that Eric derives (there are two correlated Smith G terms in his work: a simple one, and a more complex one that is a bit costly to use but more physically correct) could give some answers, as it produces a strong “rim light” that the Disney model adds artificially.
– Should we consider a single roughness term for the diffuse and specular layers? If we model the BRDF as one layer with diffuse and specular, the roughness is coupled, but if we model it as one specular layer and one diffuse layer it is decoupled. Which is correct? And remember that the real world doesn't distinguish between specular and diffuse.
– View-dependent roughness? This seems to be handled by the correlated Smith G behavior (which simulates an apparent decrease of roughness at grazing angles); just a hypothesis.
– Specular occlusion; only Yoshiharu Gotanda from tri-Ace has attacked the subject. It is a hack, not physically based, but it is useful.

The PBR path is full of hurdles, but at the same time it is an enjoyable field; we are only at the beginning!

I would like to thank EA Frostbite, the EA Frostbite rendering team, and the PBR course organizers Stephen Hill and Stephen McAuley for allowing us to do this talk and publish it.

31 Responses to Siggraph 2014 : Moving Frostbite to Physically based rendering V2

  1. Pingback: Implementing a Physically Based Camera: Understanding Exposure | Placeholder Art

  2. Pingback: Implementing a Physically Based Camera: Manual Exposure | Placeholder Art

  3. kneedragr says:

    I’ve been reading over the course notes in my spare time – great stuff! One thing I noticed was that you included an F90 for the Fresnel reflection at 90 degrees in a couple of places. This is something I have not seen before, as most people just assume it trends to 100% at the edge. I was wondering where you found resources for this value, as most people use the direct reflectance and the Schlick approximation to calculate their Fresnel reflectivity. Is this just something the artists use to make things look ‘right’?

    • seblagarde says:

      It is just an artist-controlled parameter.
      In our case, the f90 is deduced from the reflectance parameter (or from the base color for a metal).

      See p79

      float f90 = saturate(50.0 * dot(fresnel0, 0.33));

      float3 F_Schlick(in float3 f0, in float f90, in float u)
      {
          return f0 + (f90 - f0) * pow(1.f - u, 5.f);
      }

  4. Pingback: Microfacet, BRDF and PBR | Abstract Algorithm

  5. AbstractAlgorithm says:

    How can we justify diffuse and specular being so disconnected from one another? Is that kind of reasoning based only on our visual perception, or perhaps Lambertian diffuse plus some specular model most BRDFs well enough, so it’s relatively “okay” to split the BRDF into a diffuse and a specular part? Should we then split the BRDF into more components than just diffuse and specular? After seeing Naty’s PBR presentations from SIGGRAPH ’10 and ’12, this whole thing just confused me too much.

    • seblagarde says:

      The real world doesn’t really care about diffuse and specular terms; they are both reflected light. It is our way of modeling the interaction of lighting with matter, because in computer graphics it allows an efficient implementation with good-looking results.

      Moreover, when you look at some diffuse terms like the Disney diffuse or Oren-Nayar, you introduce a view-dependent component and a roughness component, so the boundary between diffuse and specular becomes even fuzzier. There is no “true” answer here; this is one way to model it, not the only way.
      Some don’t use microfacet theory to model the interaction but wave optics principles:

      >How can we justify diffuse and specular being so disconnected one from another?
      They are not disconnected; you should enforce energy conservation between both terms, but it is not simple to do. Also, recent research from Eric Heitz shows that the diffuse term should be derived from the specular term, introducing a strong coupling from a mathematical-framework point of view.

      • AbstractAlgorithm says:

        Thank you for a very quick response.

        I will take a look at the wave optics and see if that theory handles things differently.

        Maybe we just need an even more low-level look at things to explain weird phenomena like those mentioned in the Disney paper. It feels wrong to simulate things by trying to find functions to fit the data, without fully understanding what is going on.

      • seblagarde says:

        Keep in mind that performance and complexity matter🙂

        A BRDF must be compatible with all your lighting (punctual, area, and image-based lights, as well as GI) to get a homogeneous result; in games you almost never have a clear separation of material and lighting, for performance reasons. The more complex your BRDF is, the harder it is to achieve this goal.

      • AbstractAlgorithm says:

        Could we say that with enough time and compute power, we could simulate everything physically correct? That would probably assume the most brute-force possible path tracing.
        But when it comes to games, we need to take care about perf then, and then we need approximations. But if 99% of the time, results are 99% correct when compared to the ground truth, why bother? : D

  6. Wobbly says:

    Impressive work! You’ve dug deeper into these thorny problems than anyone I’ve seen.

    One quick note about the computeSpecOcclusion implementation in the course notes. There the spec occlusion term is computed like this:

    saturate(pow(NdotV + AO, roughness) - 1 + AO);

    It seems like here the roughness term is reversed — the value driving that pow function should be glossiness not roughness. Would you agree?

    • seblagarde says:

      Hey, thanks!

      You are right, I inverted roughness and smoothness in this code.
      I will soon release a new document with a lot of fixes like this (reported by various people).
      My bad, I should have been more rigorous.

      Cheers and thanks for letting me know.
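
      For reference, here is a minimal sketch of the swap discussed above (driving the pow with smoothness instead of roughness). It is only an illustration; the v2 course notes provide an improved formula.

      float computeSpecOcclusion(float NdotV, float AO, float roughness)
      {
          // Drive the exponent with smoothness (1 - roughness) rather than roughness
          float smoothness = 1.0f - roughness;
          return saturate(pow(NdotV + AO, smoothness) - 1.0f + AO);
      }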

  7. Ke Chen says:

    Hi, I noticed that in the DFG term you use VdotH, but in the shader (Listing 18) you use LdotH in the GVis term. Which one is right? I also noticed that Karis’s course notes use VdotH and Disney’s use LdotH…

    • Ke Chen says:

      and why does the Fresnel term in equation 54 use VdotH, not LdotH?

      • seblagarde says:

        It is because VdotH == LdotH.
        H is the half vector between V and L, so taking one or the other doesn’t matter.
        But yes, I should have been more consistent between the code and the formulas.
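
        A quick way to see it (just a sketch, not from the course notes):

        // With V and L normalized, H is the normalized half vector
        float3 H = normalize(V + L);
        // dot(V, V + L) = 1 + dot(V, L) and dot(L, L + V) = 1 + dot(L, V) are equal,
        // so after dividing by the common normalization factor:
        float VdotH = dot(V, H);
        float LdotH = dot(L, H); // same value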

  8. Ke Chen says:

    Hi, it is a pleasure to read your course notes and they are very helpful; I really enjoyed reading them, thanks for the material!
    I have one more question regarding the PDF for importance sampling GGX: how is the Jacobian J(h) = 1/(4 VdotH) derived when transforming from the random variable H to L?

  9. JarkkoL says:

    Great work on the paper, it’s been very useful!
    I have a question regarding your disk light specular calculation. You mentioned that you use Karis’ method for area light specular in general. For disk lights this would require finding the closest point on the disk to the ray (reflection / dominant BRDF vector). However, this is quite an involved calculation to do correctly in a shader, as explained in this paper by Eberly:
    Are you using Eberly’s method or some kind of approximation? I tried a couple of approximations myself, which unfortunately gave a strange specular shape for tilted disk lights.

    • seblagarde says:


      About the specular area lights: we are not satisfied with any solution we have tried so far, including the one from Brian. That is why we chose not to talk about it.

      You are right that finding the closest point to the reflection ray is hard, and we have tried in the past to follow the paper you mention. But it is way too costly. We ended up with a coarse approximation (i.e. we do not find the closest point to the reflection ray). But other than the shape, the major trouble is the energy conservation part. After many failures, we reused the values that Brian uses for the sphere.

      In practice (it depends on the game, because we have games that use them and others that don’t) our area lights are used sparingly and our lighters try to hide the artifacts. It is still a benefit compared to punctual lights, but it has a cost.

      In the future I want to get away from Brian’s method and try to follow something more mathematically correct.

      • JarkkoL says:

        I actually realized that closest point isn’t what you want, but the smallest angle, which gives results closer to the reference, particularly for rougher surfaces. With nearest point you get this abrupt discontinuity in lighting.

        It would indeed be nice to have a better approximation for the integral. The MRP method results in a strange two-tailed, drop-shaped highlight for sphere/disc lights at grazing angles with GGX (also apparent in Drobot’s article in GPU Pro 5).

  10. neoragex2002(LU Yin) says:

    Hi, I found that equation (67) on p. 83 of v2 is still WRONG.

    By the APEX exposure equation, the proper formula should be EV100 = log2(N^2*100/(t*S)).

    Hope this helps. :)

    • seblagarde says:


      You are right, thank you for reporting this typo. It should have been:
      EV100 = log2(N^2 / t) - log2(S / 100) instead,
      i.e. EV100 = log2(N^2*100/(t*S)) as you said.
      I will try to see if I can update the doc.
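
      As a minimal sketch of the corrected formula (an illustrative helper only, not necessarily the exact listing in the notes):

      // EV100 = log2(N^2 / t) - log2(S / 100) = log2(N^2 * 100 / (t * S))
      float computeEV100(float aperture, float shutterTime, float ISO)
      {
          return log2((aperture * aperture) / shutterTime * 100.0f / ISO);
      }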

  11. neoragex2002(LU Yin) says:

    And I also found that equation (69) on p. 84 of v2 could be improved, because it’s only a general definition for EV. For a more specific case, this equation could be written as EV100 = log2(L_avg*100/K). I think S = 100 is the key point here. By doing that, equation (69) and equation (67) would match perfectly.

    • seblagarde says:

      I’m not sure what you are aiming for here; the equation is fine as it is at that place. For the code, you do indeed want to replace S by 100, as shown in Listing 28:

      float computeEV100FromAvgLuminance(float avgLuminance)
      {
          // We later use the middle gray at 12.7% in order to have
          // a middle gray at 18% with a sqrt(2) room for specular highlights
          // But here we deal with the spot meter measuring the middle gray
          // which is fixed at 12.5 for matching standard camera
          // constructor settings (i.e. calibration constant K = 12.5)
          // Reference: http://en.wikipedia.org/wiki/Film_speed
          return log2(avgLuminance * 100.0f / 12.5f);
      }

  12. Chao Liu says:

    Excuse me, I found the method “getSample(i, sampleCount)”; it’s not an HLSL or GLSL function. What is the usage of this function?

    • seblagarde says:


      getSample is a GPU random number generator.
      It is implemented to generate stochastic samples in order to reduce the number of samples required to get a good result.

      A typical implementation will use the Hammersley sequence on the GPU:

      float radicalInverse_VdC(uint bits) {
          bits = (bits << 16u) | (bits >> 16u);
          bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
          bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
          bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
          bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
          return float(bits) * 2.3283064365386963e-10; // / 0x100000000
      }

      vec2 hammersley2d(uint i, uint N) {
          return vec2(float(i)/float(N), radicalInverse_VdC(i));
      }

  13. Jordan Walker says:

    Hey, I have a question: how do photometric units such as lux ultimately get scaled back to a manageable range before retrieving the pixel luminance? After multiplying a BRDF result by a value of ~100,000 lux as described in the paper (for the sun), is there any conversion that happens, or is that value fed to the tonemapper? In the latter case, I would assume the final image would still be way too bright.

  14. skavenplanet says:

    Hi, how are the luminance values using photometric units such as lux ultimately scaled back to a manageable range? In particular, after multiplying the BRDF by a value of ~100,000 lux as described in the paper (for the sun), is there any conversion that happens before retrieving the pixel luminance? I would assume the final processed image would end up far too bright with a value of that magnitude.

    • seblagarde says:


      This is explained in section “5.2 Manipulation of high values” of the document, and it is based on the exposure.

      When you expose for a scene, you move the range of the lighting that affects your exposed objects into [0..1]. In our case we don’t wait until the post-process step to do it; we pre-expose at the end of the shader, which avoids precision issues.
      Btw, 100,000 lux on a Lambertian surface is converted to luminance with lux * NdotL / PI.

      Then the tone mapper is applied. The result depends on whether you expose your scene correctly; you can underexpose/overexpose and get a poor result, like with a real camera. We use the Sunny 16 rule (section 5.1.4) to validate that we haven’t failed somewhere 🙂
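
      A minimal sketch of what this looks like in a shader (illustration only, with made-up names, not the actual Frostbite code):

      static const float PI = 3.14159265f;
      // Sun illuminance arriving at the surface (~100 000 lux at noon), scaled by NdotL
      float illuminance = sunIlluminanceInLux * saturate(dot(N, L));
      // Lambert BRDF = diffuseAlbedo / PI converts illuminance to outgoing luminance
      float3 luminance = (diffuseAlbedo / PI) * illuminance;
      // Pre-expose at the end of the shader so the stored value fits the render target precision
      float3 exposedLuminance = luminance * exposure;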

      • skavenplanet says:

        Thanks! I thought that step took place during postprocessing since it needs the average luminance of the image but that makes much more sense.
