My name is Sébastien Lagarde and I am currently director of rendering research at Unity (Paris – France).

Previously I was a senior rendering programmer at EA DICE/Frostbite and a senior engine/graphics programmer at Dontnod Entertainment (Paris – France), working on “Remember Me”.


LinkedIn profile:



16 Responses to About

  1. John Roberts says:

    Hello Sebastien,
    Is there an article on how the context signage in Remember Me was done in Unreal? The text is so clear and legible. I am looking to use a UDK environment with augmented glasses, for some specialized training, and drop down context signage will be a major part of the glass interface. I have read all of your articles I can find on the web, you are one of the few that takes the time to share your techniques, thanks, it is really appreciated by us mortals. I have lectured in Paris many times, and the wet, rain, bright reflections, texture realism, sky, brought the Paris of Remember Me to life. Most amazing environments of any game.

    • seblagarde says:


      Thank you.

      Not sure what you mean by “context signage”. But all “floating” UI from the Sensen has been done through Scaleform. And no, we haven’t published anything on it as this is simply Scaleform usage.


  2. John Roberts says:

    Thanks so much for your reply. By context signage I meant the shop signs and platform signs that pop up when you get close to them. Please excuse my ignorance, are you saying these in-game signs were done with Scaleform, not just the QTEs and menu screens? I just won a five-year battle with stage 4 “incurable” Ampullary cancer. I am afraid the chemo has wiped some of my brain cells, so I am a bit slower than I used to be. Thanks again Sebastien, love your work, I could almost smell the freshly baked baguettes in the shop windows, and the trash in the lower parts! But most of all I loved the rain and wet, with drops rolling off the glass. Felt real, very immersive.

    • seblagarde says:

      Yes, all the signage, even the signs carried by flying robots, is done with Scaleform. They are simple Flash vector/bitmap images rendered into a texture and then mapped onto 3D objects (and most of the time, for performance reasons, we convert the vector image into a simple 2D image offline).


  3. John Roberts says:

    Thanks Sebastien, can’t wait to give it a try!

  4. kneedragr says:

    Hey Sebastien, do you know of any resources that discuss the resolution of cubemaps with regard to image based lighting? I’ve seen several articles where people mention using sizes based on the max specular power in the direct lighting engine. My concern is when using parallax-corrected cubes spread out over large areas. In that instance, we are spreading the pixels over a larger area and thus changing that link. I’ve yet to see that discussed, just wondering if you know of any resources.

    • seblagarde says:

      Hey, no, I am not aware of a resource that speaks specifically about that. Your question is in fact very context dependent (resolution vs. memory/streaming bandwidth allowed, texture cache misses, and the number of cubemaps for an open world or a restricted room…), so you may find the perfect fit for one light probe with a given proxy size (texels per square meter) and a given smoothness, but that doesn’t mean you can afford it.

      People tend to give sizes based on smoothness because there is a minimum resolution required to be able to simulate a perfectly smooth surface. I found that 256x256x6 is the minimum to start to have quality (and in Remember Me I made a system to load a 256x256x6 cubemap specifically in some areas for the water reflection, but it was not used in the end; too complicated for production). And remember that storing a mipmap chain with varying smoothness is a way to save memory, but having one cubemap per smoothness value is higher quality than using mipmaps.

      You will be able to find some information in my Siggraph 2014 course once it is out, but not a detailed analysis. My advice is to keep cubemaps between 128x128x6 and 256x256x6; 64x64x6 is too low quality and 512x512x6 too expensive. Hope that helps.
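A minimal sketch of the mipmap-chain idea above: a prefiltered cubemap stores increasing roughness in successive mip levels, and the shader picks a mip from the surface smoothness. The linear remapping below is only one common convention (engines often use other curves), and the function names are illustrative, not from any engine mentioned here.

```python
import math

def mip_count(face_size: int) -> int:
    """Number of mip levels for a square cubemap face (e.g. 256 -> 9)."""
    return int(math.log2(face_size)) + 1

def roughness_to_mip(roughness: float, num_mips: int) -> float:
    """Map roughness in [0, 1] to a mip level of a prefiltered cubemap.

    Assumes mip 0 holds the sharpest (perfectly smooth) prefilter and
    the last mip the roughest. A fractional result would be resolved by
    trilinear filtering between the two adjacent mips.
    """
    return roughness * (num_mips - 1)

# A 256x256x6 cubemap has 9 mips; mid roughness lands near mip 4.
print(mip_count(256))                          # 9
print(roughness_to_mip(0.5, mip_count(256)))   # 4.0
```

This also illustrates the trade-off in the reply: a single 256x256x6 probe gives 9 roughness samples along the mip chain, whereas one full-resolution cubemap per smoothness value avoids the prefilter/mip quantization at a much higher memory cost.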

      • kneedragr says:

        Thanks Sebastien. I am also using 256 cubes but tried upping it to 512, and some of our larger areas with box cube projections looked significantly better. But that’s a real waste in others. Right now I use a cube array so I can just pass an index down to my deferred shader and do the cube mapping there. I am considering adding a new RT in my GBuffer that stores all reflection data. That way I could blend ray-marched reflections, object cubes, box cubes and level cubes together into one screen-space buffer, and have different resolutions on cubes. However, the bandwidth would go up quite a bit, and many of the calculations would be subject to overdraw.

  5. Michele Bedendo says:

    Hi Sébastien,

    first of all, thanks for the precious knowledge you’re sharing with us.

    I’m Michele Bedendo and I’m a professor on a video-games course at a Spanish institute in Madrid. We’ve started using UE4 as our game engine, which introduced a new PBR system. Would it be possible for me to translate your article “Feeding a physically based shading model” into Spanish and share it with my students? Since not everyone is familiar with English, I’d like to reach as many people as possible.

    Thanks again.


    • seblagarde says:


      Sure you can do it 🙂
      Keep in mind that the article is starting to be a bit old; up-to-date information about PBR can be found in my Siggraph 2014 course notes, “Moving Frostbite to PBR”. It is still a good introduction, though.


      • Michele Bedendo says:

        Hi Sébastien,

        thank you very much!

        I know the article is a bit old but I don’t want to get too technical with my students since they’re starting from scratch. By the way, I’ll have a look at the Siggraph 2014 course note 😉

        Thanks again!


  6. Matt Jacobs says:

    Hi, I saw your presentation at Siggraph 2016 regarding HDRI capture. Can you provide more specifics on the lens you use as well as the filters? Most fisheye lenses require filters to be placed in the rear of the lens, which would require removing the lens. I’d imagine this would cause some registration problems with images.
    Are you using an adapter for your filters?

    • seblagarde says:

      Hi, we provide all the details and more in our course notes. They are currently in review, and I hope they will be published next week.

      Here is the list of our equipment:

      - Camera: Canon EOS 6D
      - Lens: Fisheye Sigma 8 mm f/4 EX-DG Circular
      - Tripod: Manfrotto 475B Digital Pro tripod, black
      - Tripod bag
      - Nodal Ninja 4 head (highly recommended)
      - Remote: CamRanger + USB cable
      - Mobile: Android or iOS
      - Laptop: MacBook Pro (optional: dedicated to storing and processing data during long trips)
      - Secondary camera battery
      - Memory card, 256 GB (Class 10)
      - ColorChecker Classic (X-Rite)
      - ColorChecker Passport Photo (X-Rite)
      - Lux meter: Sauter SO 200K (an additional battery can be necessary)
      - Luminance meter: Konica Minolta LS-110 (an additional battery can be necessary)
      - Spirit level
      - HOYA ND filters: 16, 32, 64, 500, 1000. Unity uses power-of-2 ND filters to simplify the workflow. HOYA filters are recommended due to their reduced vignetting and color shift.
      - Bag
      - Screwdriver
      - Lenspen (lens cleaning tool)
      - Notepad and pen to record information about the photographs: location, time, weather, lux, luminance, range, EV step, lens settings
      - Marker
      - Nadir adapter

      We don’t use rear filters, just front filters. We add a filter holder on the fisheye lens, which reduces the FOV to 139°; with one ND filter it becomes 124°, and with two ND filters it becomes 110° (for the sun). In that case we still use 124° for cropping in PTGui, as the filter is black and we try to capture only the sun (everything else is black).

      Hope to publish all the details soon.
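The power-of-2 ND filters in the list above simplify the workflow because each filter’s density maps to a whole number of exposure stops, and stacked filters simply add stops. A small sketch of that arithmetic (function names are illustrative, not from the course notes):

```python
import math

def nd_to_stops(nd_factor: float) -> float:
    """An ND-x filter divides the incoming light by x, i.e. log2(x) stops.

    E.g. ND16 -> exactly 4 stops; ND1000 (~2^10) -> ~9.97 stops.
    """
    return math.log2(nd_factor)

def stacked_stops(*nd_factors: float) -> float:
    """Stacked ND filters multiply their attenuation factors, so their
    stop values add."""
    return sum(math.log2(f) for f in nd_factors)

print(nd_to_stops(16))                       # 4.0
print(round(stacked_stops(500, 1000), 2))    # combined ND500 + ND1000
```

With power-of-2 densities (ND16, ND32, ND64) every filter change shifts the bracket by an integer number of stops, which keeps the EV bookkeeping during capture trivial; the ND500 + ND1000 stack used for the sun adds roughly 19 stops of attenuation.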

  7. Matt Jacobs says:

    Thanks so much for this detailed list!
    I take it you’re only shooting the ND500 + ND1000 for capturing the view of the sun? The other five views are not captured with this bracket of exposures?

    • seblagarde says:

      We usually take all the views except the bottom view with ND500 + ND1000, just in case there are some strong highlights in the scene (as with a chrome surface, for example). But it is true that in this case you need to be careful that the highlights fall into the 110° FOV, or else they will be removed (or you can do four views instead of three). If you know that there are no strong highlights, as when shooting in a grass field, then yes, you can shoot only the sun view and replace the others with black images in PTGui.

  8. Matt Jacobs says:

    Understood. Look forward to the course notes. Very informative lecture. Thanks!
