Before we part, let’s talk about some issues and improvements related to deferred rendering.


    As we’ve seen before, transparent objects are a problem for a deferred renderer. The reason is simple: since lighting is applied at the end, after everything has been drawn, only the points closest to the camera are lit. Thus, if we simply drew transparent objects into the G-Buffer, the objects behind them would be visible, but would not be lit correctly. The solution is to render all transparent objects after you apply the deferred lighting. They are drawn normally, using a forward renderer, and blended into the image resulting from the deferred pass. The only inconvenience is that we need to use the depth buffer from the G-Buffer creation, so the occluded parts of transparent objects render correctly.
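The frame ordering described above can be sketched in XNA as follows; the pass names (DrawGBuffer, DrawLights, CombineFinal) and the transparentObjects list are hypothetical placeholders for whatever your renderer uses:

```csharp
// Sketch of the pass ordering: deferred passes first, then a forward
// pass for transparents blended over the lit result (XNA 3.x API).
protected override void Draw(GameTime gameTime)
{
    DrawGBuffer();      // opaque geometry -> G-Buffer (fills the depth buffer)
    DrawLights();       // deferred lighting pass
    CombineFinal();     // compose the lit opaque image

    // Forward pass for transparent objects, blended over the deferred result.
    // The depth buffer from the G-Buffer pass must still be valid here, so
    // occluded parts of transparent objects are rejected correctly.
    GraphicsDevice.RenderState.AlphaBlendEnable = true;
    GraphicsDevice.RenderState.DepthBufferWriteEnable = false; // test, don't write
    foreach (TransparentObject obj in transparentObjects)
        obj.DrawForward(camera);
}
```

Keeping the G-Buffer depth available across render target switches is the tricky part in XNA; see the discussion in the comments below.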

Post-Processing Effects

    Deferred rendering goes along very well with post-processing. The G-Buffer can be used for several post-processing effects we all know and love, like Depth of Field, Glow / Bloom, Distortion, HDR lighting (for HDR, we can simply render the light map to a floating-point texture instead of a Color texture, and use that as an input for HDR processing), Auto-Exposure, and others. Another category of effects that works well with deferred rendering is screen-space effects, like Screen Space Ambient Occlusion, Global Volumetric Fog, Fog Volumes, Soft Particles, Cloud Rendering, etc. These effects integrate well with the deferred renderer, and can take great advantage of the G-Buffer.


    For shadows, deferred rendering works very well with shadow maps. Directional and spot lights pose no problems. For point lights, we can use a cube shadow map, or treat the light as six spot lights. As an optimization, shadow maps do not need to be used for all lights; some lights may not need to cast shadows. Many optimizations can be done here, but just remember that shadow maps are a good fit for deferred rendering.

Anti-Aliasing

Anti-aliasing is also a big problem for deferred rendering. But since HDR is not such a great friend of anti-aliasing either (for now), this might not be such a big deal. If you do want AA in your renderer, there are some ways to do it. One way is to make an edge-detection pass, using the depth and normals from the G-Buffer. Then, as a post-processing effect, blur the image based on the edges detected. This somewhat removes the jagged lines that appear on edges, and the whole image gets a smoother look. When combined with some bloom, the result should be pleasing enough for the eyes.
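The edge-detection pass described above could look like the following sketch. All names here (samplers, thresholds, screen dimensions) are assumptions to be wired up by the application; the output is an edge weight that a later pass can use to drive the blur:

```hlsl
// Edge detection from G-Buffer depth and normals (pixel shader sketch).
float screenWidth;
float screenHeight;
float depthThreshold  = 0.001f;  // assumption: tune per scene
float normalThreshold = 0.9f;    // assumption: cosine of the allowed angle

sampler depthSampler  : register(s0);
sampler normalSampler : register(s1);

float4 EdgeDetectPS(float2 texCoord : TEXCOORD0) : COLOR0
{
    float2 offset = float2(1.0f / screenWidth, 1.0f / screenHeight);

    float  depth  = tex2D(depthSampler, texCoord).r;
    float3 normal = tex2D(normalSampler, texCoord).xyz * 2 - 1;

    // Compare depth and normal against the four axis-aligned neighbors;
    // a large difference in either marks this pixel as an edge.
    float2 taps[4] = { float2( offset.x, 0), float2(-offset.x, 0),
                       float2(0,  offset.y), float2(0, -offset.y) };
    float edge = 0;
    for (int i = 0; i < 4; i++)
    {
        float  d = tex2D(depthSampler, texCoord + taps[i]).r;
        float3 n = tex2D(normalSampler, texCoord + taps[i]).xyz * 2 - 1;
        if (abs(depth - d) > depthThreshold || dot(normal, n) < normalThreshold)
            edge += 0.25f;
    }
    return float4(edge.xxx, 1);
}
```

A follow-up pass would then blur the final image only where this edge weight is high, leaving the interior of surfaces sharp.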


    In this article, we’ve covered the basics of implementing a deferred renderer in XNA. We saw how to add directional lights and point lights, and wrote a content processor to prepare models for the renderer. We’ve also discussed further improvements and possibilities for extension, like spot lights, shadows, a more complex material system, etc. From this point on, there are many things to experiment with and try; the topic of deferred rendering is far from exhausted, and I hope this article got some of you interested in it.


1. Shawn Hargreaves, Deferred Shading

2. Shawn Hargreaves and Mark Harris, Deferred Shading

3. Michael Deering, Triangle Processor

4. Shader Series 4

5. GPU Gems 2 – Chapter 9: Deferred Shading in S.T.A.L.K.E.R.

6. Deferred Rendering in Killzone 2

  • Martin Elberts


    First off, let me thank you for this awesome tutorial about deferred rendering/shading; I learned a lot from it.

    Now, I ran into a problem when implementing deferred rendering in my own project. Actually, let me rephrase that: the deferred renderer works perfectly, but as soon as I call SpriteBatch.Begin() and draw my user interface after the final effect takes place, my SpriteBatch draws, but it draws on top of the light render target (a black screen with some white highlights). I tried creating a final render target, using it in a SpriteBatch.Draw call to draw on screen, and drawing the user interface over that, but unfortunately that gave me the same result. Strangely enough, I seem unable to render the final image to a render target, while as soon as I remove the SpriteBatch.Begin() call, my scene displays just fine.
    I’m a bit puzzled as to why this is happening, and was hoping you could give me some tips as to what could be going wrong.

    Thanks in advance,


  • Catalin Zima

    SpriteBatch probably messes with the render states.
    Please try using spriteBatch.Begin(SpriteBlendMode.AlphaBlend, SpriteSortMode.Deferred, SaveStateMode.SaveState);

    If this doesn’t work, can you make a small reproduction, and send it my way, so I can look at it myself?

  • Martin Elberts


    Changing the save state mode to the one mentioned above indeed fixes the problem. Thanks again, and congratulations on becoming MVP!

  • Ted de Vries

    Hey there,

    Thanks for the great tutorial! I’ve learned a lot from it.

    But that’s the good news; I’ve also been having a problem implementing transparent objects in the deferred renderer :). You say the way to go is to use the depth information from the G-Buffer. My question is: how do I get this depth information into the actual depth buffer?


  • Catalin Zima

    Well, if XNA didn’t erase the depth buffer when changing render targets, you would have the depth available since rendering the G-Buffer.

    As things stand, you’d need to rebuild the depth buffer. One way to do this would be to use the DEPTH pixel shader output semantic, and write the depth values (from the G-Buffer) there. However, the last time I tried this, I wasn’t fully successful.
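    A minimal sketch of that approach: a full-screen pass that reads the z/w value stored in the G-Buffer and writes it back out through the DEPTH output semantic, rebuilding the hardware depth buffer. The sampler name here is an assumption:

```hlsl
// Rebuild the hardware depth buffer from the G-Buffer depth texture.
sampler depthSampler : register(s0);

struct RebuildDepthOutput
{
    float4 Color : COLOR0;
    float  Depth : DEPTH0;
};

RebuildDepthOutput RebuildDepthPS(float2 texCoord : TEXCOORD0)
{
    RebuildDepthOutput output;
    output.Color = 0;                                // mask color writes on the device
    output.Depth = tex2D(depthSampler, texCoord).r;  // z/w stored in the G-Buffer
    return output;
}
```

    Note that writing DEPTH from the pixel shader disables early-z for that pass, so it should be done once, right before the forward transparency pass.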

  • Beau Bennett

    Hi. Thanks for the tutorials. They were very informative and fun to complete.

    I have a heavily modified version of the deferred renderer that I’ve been working on. All the modifications are to code architecture, so all the fx files are the same. I seem to be having some troubles and I don’t know where they are originating from. When moving my camera around, I experience some flickering with the models and lights. If I stop moving mid-flicker, I can see that parts of the models are missing. I have screenshots if you think that would help.

  • Catalin Zima

    Yes, some screenshots would help.
    What are your near and far plane set to?

  • Beau Bennett

    .01 and 3000.

    Where can I find your e-mail to send screenshots?

  • Catalin Zima

    try using 1 and 3000…
    my email is zimacatalin at gmail dot com

  • Zhi Wen

    I found the light actually goes through walls. Do you know how to fix it?

    I think I can compare the distance to the light. If, from the camera, the distance to the wall is smaller than the distance to the point light, then PointLight.fx’s pixel shader function will return float4(0, 0, 0, 0). But I have a problem comparing the depth value and the distance between the camera and the point light.

    I did this (in PointLight.fx) :
    float4 lightWorldPosition = mul(float4(lightPosition,1), World);
    float4 lightViewPosition = mul(lightWorldPosition, View);
    float4 lightScreenPosition = mul(lightViewPosition, Projection);

    float light2Camera = lightScreenPosition.z / lightScreenPosition.w;

    float depthVal = tex2D(depthSampler,texCoord).r;

    if (depthVal > light2Camera)
        return float4(0, 0, 0, 0);
    return attenuation * lightIntensity * float4(diffuseLight.rgb, specularLight);

    Any idea why it doesn’t work? In particular, I couldn’t figure out how to do a line-by-line debug of an effect file to see what values I am getting.

  • Catalin Zima

    The method you’re trying to use is odd, and will fail in some cases.

    The best you can do is implement some shadows using shadow maps. The tutorial is not concerned with actual shadows, only with shading.

  • Zhi Wen

    Thanks. Let me try the shadow maps.

  • Phil Neumann

    Hello Catalin,
    it’s very kind that you share your knowledge about deferred shading with us; I learned a lot from it.

    I have a question, though, concerning your deferred ModelProcessor. It seems to me that your implementation overrides the TextureFormat key, with the result that all textures are being DXT-compressed. This is not so much of an issue with color textures, but my normal maps look terrible in game, with banding and speckles everywhere.
    I’ve taken a look into your code, but I don’t know where exactly the textures are converted. The MSDN help has an example where they override the “BuildTexture” method, but this method doesn’t seem to exist, so overriding fails.

    It’d be great if you or anyone else could tell me how to get my normal maps uncompressed into the game. Thanks in advance!

  • Catalin Zima

    I’m not sure you can set this for each individual texture, in the setup used for the tutorial. But I’m pretty sure that you can try using the “TextureFormat” property of the content processor (In the Properties window, when the model is selected) to switch from DXTCompressed to Color or NoChange.

  • Phil Neumann

    Sorry, I mixed up words here; I meant “TextureFormat” instead of SurfaceFormat.
    For me, it seems that changing this property in the model properties doesn’t change the texture compression of the textures for that model; they all look the same, although the original image was losslessly saved as TGA.
    When I import the same model with the XNA model importer, the textures look fine. That’s why I believe the reason for this issue lies somewhere in your importer.

  • Catalin Zima

    you’re probably right, and the problem is somewhere in my processor, but at this moment, I can’t think of any way to solve it without rewriting the whole processor. I’m not too experienced with the Content Pipeline.

  • James Bailey

    @Phil Neumann

    This line here is what calls the material processor;

    return context.Convert(deferredShadingMaterial, typeof(MaterialProcessor).Name);

    You can make a custom material processor and custom texture processor then change the typeof(MaterialProcessor).Name to read typeof(CustomMaterialProcessor).Name. Not compressing these textures in some way will mean you have very large memory usage though.

    I find a better option than not compressing them is to change the way your normal maps store their data. You can remove the blue channel completely and rebuild it from the red and green channels in your shader;

    b = sqrt(1 - (r*r + g*g));

    You can go one step better and move the red channel into the alpha channel which will make the DXT compression very very happy and stop those annoying artifacts.
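    Assuming the swizzle James describes (X moved into the alpha channel before DXT5 compression, Y kept in green), the sampling side could look like this sketch; the sampler name is an assumption:

```hlsl
// Sample a DXT5 "swizzled" normal map and rebuild the blue channel.
float3 SampleSwizzledNormal(sampler2D normalMap, float2 texCoord)
{
    float4 packed = tex2D(normalMap, texCoord);
    float2 xy = packed.ag * 2 - 1;                // x from alpha, y from green
    float  z  = sqrt(saturate(1 - dot(xy, xy)));  // rebuild the blue channel
    return float3(xy, z);
}
```

    This works because DXT5 compresses the alpha channel independently and with higher precision than the color channels, which is exactly where the most important component now lives.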

    Shawn Hargreaves gives a quick overview of it here;

    Anyways hope that helps :).

  • Andrey

    Hello, Catalin Zima!
    Your article is really nice! Very big thanks. I’ve made a deferred renderer as your article described (I am using XNA 3.1), but when I draw a directional light I get a white screen, even if I set a green color. When I use point lights, it works properly. I tried to save the _lightRT texture: when I use point lights it is transparent, and in the case of a directional light, it is white.

  • Domi

    Hi Catalin!

    That’s just an amazing article!
    So now I have almost the same renderer, but I don’t know exactly how I should draw transparent billboards (for some glass windows etc.).

    Would be great if you could help me.

  • Catalin Zima

    You need to draw any transparent objects at the end, because you can’t include them in the deferred pipeline.

  • Domi

    Thanks for that help!

    I want to show you the Map Editor, I have just finished with your help 😀

  • Domi

    hi Catalin

    Now I have another problem. I have noticed a slight brightness change between rendering from outside and inside the point light sphere.

    Do you know how to get rid of that?

  • default_ex

    About the depth-stencil buffer:

    The depth-stencil buffer can be preserved by changing the pattern in which render targets are swapped around. If you are going to set, say, the light accumulation buffer after peeling off the geometry buffer, do not set null and then the light accumulation buffer; instead, set the light accumulation buffer first, and then set null for the remaining render targets.

    The depth-stencil buffer will be resolved under only a handful of conditions:
    1) GraphicsDevice.Present(…) is called.
    2) GraphicsDevice.ResolveBackBuffer() is called.
    3) GraphicsDevice.SetRenderTarget is called on the same render target index twice without any drawing occurring in between.

    The reason why you do not want the depth-stencil buffer to be resolved is that once it’s resolved, it can no longer be used for depth testing, nor can it be written to. Even if you write depth values back into the depth-stencil buffer, you still lose the stencil, if you choose to employ one.

    There is one additional trick that can be used to preserve the depth-stencil buffer, simply peel it off the device, store it away, and then drop it back down on the device when it is needed again. This way when the GPU attempts to resolve the depth-stencil buffer, it is not there to be resolved.

    Using a combination of the above I’ve been able to preserve the depth-stencil buffer throughout an entire frame of deferred rendering and post processing.
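    The swap pattern default_ex describes could be sketched like this in XNA 3.x; the render target names and draw calls are hypothetical placeholders:

```csharp
// Set the new target on index 0 FIRST, then null the remaining indices,
// rather than nulling everything and then setting the new target. Per the
// comment above, this avoids the conditions that cause the depth-stencil
// buffer to be resolved between the geometry and lighting passes.
GraphicsDevice.SetRenderTarget(0, colorRT);   // G-Buffer pass
GraphicsDevice.SetRenderTarget(1, normalRT);
GraphicsDevice.SetRenderTarget(2, depthRT);
DrawGeometry();

GraphicsDevice.SetRenderTarget(0, lightRT);   // new target first...
GraphicsDevice.SetRenderTarget(1, null);      // ...then clear the others
GraphicsDevice.SetRenderTarget(2, null);
DrawLights();                                 // depth-stencil still usable here
```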

  • Shanee

    Hi Catalin!

    I just wanted to say thank you for the tutorials on Deferred Shading.

    One day I looked at my engine and the question of how to implement SSAO, Cascaded Shadow Mapping, Bloom, Depth of Field and many other effects with my Forward Renderer and just thought “a Deferred Shader would be so much more comfortable, I will already have all the data I need for those effects available to me instead of doing extra passes for each.”, so I googled and I found your site!

    That night I read it a bit and read a bit more about Deferred Shading on other websites, the morning later I thought of it a bit more and said, yeah I am going to do it. I followed your tutorial and implemented it (on DirectX 9.0 though), very simple and very easy, thank you!

    Day after that SSAO was in, day after that one, Cascaded Shadow Mapping in, today I added your normal map and specular map techniques (adding the option to use just one or the other as well) and I am expecting to have Depth of Field done sometime soon.

    I *LOVE* the simplicity of a Deferred Shader, the ease it offers with implementing more and more post processing effects, it’s just great 🙂

    So once again, BIG Thank you and I hope there might be some new tutorials on the subject for special effects that could be useful 🙂

  • Ted

    Hi Catalin!

    I really like your article about deferred shading, and I decided to use it in my game project. However, I can’t seem to apply shadow mapping with directional lights, because I don’t know how to get the shadow map for it.

    Thanks so much,

  • Fuchs

    Hi there :D, just wanted to thank you for the awesome tutorial!
    Even though this is the third one I’ve read about deferred shading, I learned lots of things from yours :).


  • Chris

    This tutorial has inspired me to push it further and work on my own graphics engine. I finally get what deferred rendering is, and now light pre-pass rendering comes second nature because of it. It’s not such a complicated subject anymore! 🙂

    By the way, it seems that a few people had problems with the screen turning completely white after the lighting pass. For me, it was because I needed to clear the light render target to Transparent instead of Black. When you clear to Black (or any other solid color), the alpha value of all pixels is set to 1, so you always get a specular value of 1. When you add this to the final color, the result is always white. However, I personally find it looks better to multiply the object’s specular factor by the light color that shines on it before adding that to the final color.
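    A minimal sketch of the fix Chris describes, assuming an XNA 3.x device and a hypothetical lightRT light accumulation target:

```csharp
// Clear the light render target fully transparent so the alpha (specular)
// channel starts at 0 instead of 1.
GraphicsDevice.SetRenderTarget(0, lightRT);
GraphicsDevice.Clear(ClearOptions.Target, Color.TransparentBlack, 1.0f, 0);
```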