Creating the G-Buffer

Before we get practical, we need to cover some theory. Choosing the G-Buffer format is one of the most important decisions in deferred shading, and it is a compromise between quality and speed. Better quality comes from higher precision or a larger number of parameters, and these two determine how large the G-Buffer needs to be, which in turn affects performance.

Multiple Render Targets

We saw earlier that the G-Buffer needs to store a lot of information. Normally, the result of rendering is a single image, but we need several: one for each component of the G-Buffer. One way to achieve this is to render the scene in multiple passes, each time with a different shader and to a different RenderTarget. This means the computations on the scene geometry would be performed several times, and many Draw calls would be issued.

Multiple Render Targets (MRTs) allow the pixel shader to output several colors at once. This is exactly what we need, since the scene will be drawn only once, but all attributes will be generated.

To use MRTs, the pixel shader outputs to the COLOR0 through COLOR3 semantics. An example of a pixel shader that outputs a different color to each of three render targets is shown below:


struct PixelShaderOutput
{
    float4 Color : COLOR0;     //written to the first render target
    float4 Color1 : COLOR1;    //written to the second render target
    float4 Color2 : COLOR2;    //written to the third render target
};

PixelShaderOutput PixelShaderFunction(VertexShaderOutput input)
{
    PixelShaderOutput output;
    output.Color = float4(1,0,0,1);
    output.Color1 = float4(0,1,0,1);
    output.Color2 = float4(0,0,1,1);
    return output;
}

When using MRT, there are some restrictions. First, all RenderTargets must have the same bit depth. This means you cannot have one RenderTarget that is SurfaceFormat.Color (32-bit RGBA) and another that is SurfaceFormat.HalfVector4 (64-bit R16G16B16A16F). This puts new constraints on how we set up our G-Buffer. If we use large formats (64 bits), we will have good precision where we need it (normals, positions, depth), but wasted space elsewhere (color, specular intensity, etc.). On the other hand, if we use smaller formats (32 bits), we get better performance but lower precision.

Next we’ll analyze the design choices for each G-Buffer component. The main components that need to be addressed are diffuse color, normals, and position. After these are fixed, we can focus on adding material attributes, such as specular intensity or specular power.

Choosing RenderTarget Format for diffuse color

This one is simple. We need to use SurfaceFormat.Color, which is 32 bits. However, if the rest of the attributes are better served by 64-bit render targets, we can also put the diffuse component in SurfaceFormat.HalfVector4. The second option wastes some space, but some configurations may be fine with this.

Choosing RenderTarget Format for normals

Here we have more options, and more decisions to make. The obvious first choices would be SurfaceFormat.Vector4 (128 bits) or SurfaceFormat.HalfVector4 (64 bits). The 128-bit format is out of the question: we don’t need that much precision, and a 4-MRT G-Buffer at 1024x768 with 128-bit targets would take 48 MB in total, which is huge. Besides, 128-bit render targets are not supported on the Xbox 360. The second option, HalfVector4, provides very good precision for normals. If the 64-bit size bothers us and we want to use 32 bits, we have a few options:

  • SurfaceFormat.Color – This gives 8 bits for each normal component X, Y and Z, and leaves a free 8-bit field (the alpha channel) that we can use for another parameter. If we choose this format, we have to take care when storing and reading normals: SurfaceFormat.Color can only store values between 0.0f and 1.0f, but normal components lie between -1.0f and 1.0f. This is easily solved by converting the [-1.0, 1.0] range to [0.0, 1.0] before writing the data, and converting back when reading it (see the sketch after this list).
  • SurfaceFormat.HalfVector2 – This format gives two 16-bit components. But wait, you might say, the normal needs three components, since it is a vector in space. Indeed, but since we always want the normal to be normalized, we can store only X and Y and then reconstruct Z with the formula Z = sqrt(1 - X*X - Y*Y), also shown in the sketch below. This gives us both high-precision normals and 32-bit RenderTargets.
  • SurfaceFormat.Rgba1010102 – Shawn Hargreaves suggests this in his paper[1]. This is a nice 32-bit option with good precision, and it also leaves a free 2-bit channel if we want to use it. However, I tried creating a RenderTarget with this format on three computers, and it didn’t work on any of them.
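To make the first two options concrete, here is a minimal HLSL sketch of how the packing and unpacking could look. The helper names are mine, not part of the article’s shaders, and the HalfVector2 reconstruction assumes the Z component is positive, which is not always true (see the discussion in the comments at the end of this article).

// Hypothetical helpers illustrating the two 32-bit normal encodings above.

// SurfaceFormat.Color: remap each component from [-1,1] to [0,1] before writing.
float4 EncodeNormalColor(float3 n)
{
    //the alpha channel stays free for another attribute
    return float4(0.5f * (normalize(n) + 1.0f), 0.0f);
}
float3 DecodeNormalColor(float4 stored)
{
    //remap back from [0,1] to [-1,1]
    return normalize(2.0f * stored.rgb - 1.0f);
}

// SurfaceFormat.HalfVector2: store only X and Y, reconstruct Z when reading.
float2 EncodeNormalHalf2(float3 n)
{
    return normalize(n).xy;
}
float3 DecodeNormalHalf2(float2 stored)
{
    //Z = sqrt(1 - X*X - Y*Y); saturate guards against small negative values
    float z = sqrt(saturate(1.0f - dot(stored, stored)));
    return float3(stored, z);
}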

Choosing RenderTarget Format for positions

The options available for storing the position are the following:

  • SurfaceFormat.Vector4 – a 128-bit format. Again, this forces 128 bits on the other RenderTargets, which wastes a lot of memory, and it is not available on the Xbox 360.
  • SurfaceFormat.HalfVector4 – a 64-bit format. This has good precision, and also leaves us another channel where we can store an extra attribute.
  • SurfaceFormat.Single – the third option, and the only one if you want 32-bit-wide RenderTargets. To use it, you store only the depth of the pixel instead of the whole position. When we need the position, we have the screen-space X,Y position of the pixel, which, together with the depth, can be used to compute the world position corresponding to that pixel (a sketch of this reconstruction follows the list).
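A rough sketch of that reconstruction is shown below. It is only an outline under a few assumptions: the stored value is the post-projection depth (z/w) that we will write later in this article, screenPos is the pixel’s position in the [-1,1] normalized device range, and InvViewProjection is an effect parameter set by the application. The article’s actual lighting shaders will perform this step in a later chapter.

//Hypothetical sketch: rebuild the world-space position from a stored depth value.
float4x4 InvViewProjection;   //inverse of View * Projection, set from the application

float3 ReconstructWorldPosition(float2 screenPos, float depth)
{
    //build the position in projection space
    float4 position = float4(screenPos, depth, 1.0f);
    //transform back to world space
    position = mul(position, InvViewProjection);
    //undo the perspective divide
    return position.xyz / position.w;
}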

Choosing RenderTarget Format for other attributes

When we reach this point, we might already have some free channels remaining for extra attributes. If these are not enough, we can add a new RenderTarget dedicated strictly to extra attributes. Choices for this extra RenderTarget can be SurfaceFormat.Color, when using 32 bit targets, or SurfaceFormat.HalfVector4 when using 64 bit targets.

G-Buffer Setup for this article

For the rest of this article, we will use the following setup for the G-Buffer, with three 32-bit render targets.

  • RenderTarget 0 (SurfaceFormat.Color): Color.R | Color.G | Color.B | Specular Intensity
  • RenderTarget 1 (SurfaceFormat.Color): Normal.X | Normal.Y | Normal.Z | Specular Power
  • RenderTarget 2 (SurfaceFormat.Single): Depth (32 bits)

For now, we will use a simple lighting model (Phong), in which the light is composed of diffuse light and specular highlights. For this, the only extra parameters we need are the ones related to specular lighting: Specular Intensity and Specular Power. The 32-bit depth value is used to compute the position of the pixel in world space. The precision of the normals is not great, and this might be noticeable on some surfaces. The advantage is that this setup is very compact, and the performance is very good.
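As a preview of how the two specular parameters will eventually be consumed, here is an illustrative Phong-style computation. The function and parameter names are assumptions made for this sketch, not the article’s light shader (that comes in the next chapter); note how the stored specular power in [0,1] is rescaled by 255, as described later when we fill the G-Buffer.

//Illustrative sketch (assumed names): using the stored specular parameters in a
//Phong-style light computation. All direction vectors are assumed normalized,
//with lightVector pointing from the surface towards the light.
float3 PhongLight(float3 normal, float3 lightVector, float3 directionToCamera,
                  float3 lightColor, float specularIntensity, float specularPower)
{
    //diffuse term: N dot L
    float NdL = saturate(dot(normal, lightVector));
    float3 diffuse = NdL * lightColor;
    //specular term: reflect the incoming light around the normal and raise
    //R dot V to the specular power; the stored [0,1] value is rescaled to [0,255]
    float3 reflection = normalize(reflect(-lightVector, normal));
    float specular = specularIntensity *
                     pow(saturate(dot(reflection, directionToCamera)), specularPower * 255);
    return diffuse + specular * lightColor;
}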

Setting up the G-Buffer

We need to add some variables in the DeferredRenderer class. These are the three RenderTargets we will use: colorRT, normalRT and depthRT. We then initialize them in the LoadContent function.

private RenderTarget2D colorRT;   //this render target will hold the color and specular intensity
private RenderTarget2D normalRT;  //this render target will hold the normals and specular power
private RenderTarget2D depthRT;   //finally, this one will hold the depth
[...]
protected override void LoadContent()
{
    scene.InitializeScene();
    //get the size of the backbuffer, in order to have matching render targets
    int backBufferWidth = GraphicsDevice.PresentationParameters.BackBufferWidth;
    int backBufferHeight = GraphicsDevice.PresentationParameters.BackBufferHeight;
    colorRT = new RenderTarget2D(GraphicsDevice, backBufferWidth,
                                 backBufferHeight, 1, SurfaceFormat.Color);
    normalRT = new RenderTarget2D(GraphicsDevice, backBufferWidth,
                                  backBufferHeight, 1, SurfaceFormat.Color);
    depthRT = new RenderTarget2D(GraphicsDevice, backBufferWidth,
                                 backBufferHeight, 1, SurfaceFormat.Single);
    base.LoadContent();
}

Next, we will create two functions, SetGBuffer and ResolveGBuffer, that set and unset the render targets on the device. SetGBuffer sets colorRT as the first render target, normalRT as the second, and depthRT as the third; ResolveGBuffer sets them all back to null.

private void SetGBuffer()
{
    GraphicsDevice.SetRenderTarget(0, colorRT);
    GraphicsDevice.SetRenderTarget(1, normalRT);
    GraphicsDevice.SetRenderTarget(2, depthRT);
}
private void ResolveGBuffer()
{
    //set all rendertargets to null. In XNA 2.0, switching a rendertarget causes the resolving of the previous rendertarget.
    // In XNA 1.1, we needed to call GraphicsDevice.ResolveRenderTarget(i);
    GraphicsDevice.SetRenderTarget(0, null);
    GraphicsDevice.SetRenderTarget(1, null);
    GraphicsDevice.SetRenderTarget(2, null);
}

Clearing the G-Buffer

Before we draw anything, we need to clear the G-Buffer to default values. The problem is that we can’t simply use GraphicsDevice.Clear(), because that would clear all render targets to the same color, which is not what we want. We need to clear the color render target to black, the depth render target to white (which means maximum depth), and the normal render target to grey, which, when transformed from the [0,1] range back to [-1,1], becomes (0,0,0), a good value for a default normal. This prevents lighting artifacts from appearing on the background, where no object was drawn.

To fix the clearing problem, we will create a new effect file named ClearGBuffer.fx inside the Content project. In this effect, we set each render target to the value we want. In the vertex shader, we just pass the position through. We will use the quadRenderer to draw a full-screen quad.

The code for the shader is simple:

struct VertexShaderInput
{
    float3 Position : POSITION0;
};
struct VertexShaderOutput
{
    float4 Position : POSITION0;
};
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    output.Position = float4(input.Position,1);
    return output;
}
struct PixelShaderOutput
{
    float4 Color : COLOR0;
    float4 Normal : COLOR1;
    float4 Depth : COLOR2;
};
PixelShaderOutput PixelShaderFunction(VertexShaderOutput input)
{
    PixelShaderOutput output;
    //black color
    output.Color = 0.0f;
    output.Color.a = 0.0f;
    //when transforming 0.5f into [-1,1], we will get 0.0f
    output.Normal.rgb = 0.5f;
    //no specular power
    output.Normal.a = 0.0f;
    //max depth
    output.Depth = 1.0f;
    return output;
}
technique Technique1
{
    pass Pass1
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}

We will load this effect file into a variable, and then we will create a function called ClearGBuffer() in the DeferredRenderer class, where we draw a full-screen quad using this effect.

private Effect clearBufferEffect;
protected override void LoadContent()
{
    [...]
    clearBufferEffect = Game.Content.Load<Effect>("ClearGBuffer");
}
private void ClearGBuffer()
{
    clearBufferEffect.Begin();
    clearBufferEffect.Techniques[0].Passes[0].Begin();
    quadRenderer.Render(Vector2.One * -1, Vector2.One);
    clearBufferEffect.Techniques[0].Passes[0].End();
    clearBufferEffect.End();
}

Finally, in the Draw function, we set the G-Buffer, clear it, draw the scene, and then resolve the G-Buffer.

public override void Draw(GameTime gameTime)
{
    SetGBuffer();
    ClearGBuffer();
    scene.DrawScene(camera, gameTime);
    ResolveGBuffer();
    base.Draw(gameTime);
}

If you run the game now, you won’t see anything. (Actually, you’ll probably see a violet screen, because of how RenderTarget switching is handled in XNA 2.0). This is normal, because we didn’t actually draw anything. Next, we will add some items in the Scene class, and draw them using a special shader.

Drawing the Scene

We will now create an effect that will be used to draw all geometry in the game. This effect will output values to all render targets, and it is responsible for filling the G-Buffer, so it is one of the main pieces of code for the deferred renderer.

To create a new shader, right-click the Content project, choose Add New Item, select Effect File, and name it RenderGBuffer.fx. This gives us a template for an effect file, which we’ll modify. We need to add a texture and a sampler for it; this will be used to draw the color of the model. For the specular intensity and specular power, we will use two effect parameters for now. In a later chapter, we will see how to read this data from textures. For specularPower, we store a value in the [0,1] range, which will later be multiplied by 255 to obtain a power coefficient between 0 and 255.

float4x4 World;
float4x4 View;
float4x4 Projection;
float specularIntensity = 0.8f;
float specularPower = 0.5f;
texture Texture;
sampler diffuseSampler = sampler_state
{
    Texture = (Texture);
    MAGFILTER = LINEAR;
    MINFILTER = LINEAR;
    MIPFILTER = LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
};

Because we will be outputting normals and depth, we need to add the normal to the VertexShaderInput structure, and the normal and depth to the VertexShaderOutput structure. We also need texture coordinates. The depth is a two-component vector, because we will only do the division by w in the pixel shader. Otherwise, strange values are obtained when some vertices of a triangle are outside the view frustum but the triangle is still visible.

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float3 Normal : NORMAL0;
    float2 TexCoord : TEXCOORD0;
};
struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float2 TexCoord : TEXCOORD0;
    float3 Normal : TEXCOORD1;
    float2 Depth : TEXCOORD2;
};

The VertexShaderFunction remains mostly the same for now. We only need to add three instructions for the new outputs. The normal is transformed into world space, and the depth is composed of output.Position.z and output.Position.w.

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    output.TexCoord = input.TexCoord;                            //pass the texture coordinates further
    output.Normal = mul(input.Normal, World);                //get normal into world space
    output.Depth.x = output.Position.z;
    output.Depth.y = output.Position.w;
    return output;
}

We are now left with the pixel shader. Since we are no longer outputting to only one render target, we need an output structure for the pixel shader. This contains three values, one for each corresponding render target.

struct PixelShaderOutput
{
    half4 Color : COLOR0;
    half4 Normal : COLOR1;
    half4 Depth : COLOR2;
};

In the PixelShaderFunction, we need to output the color, normal and depth, each one to the corresponding render target. You can see that we transform the normal domain, from [-1,1] to [0,1]. The depth is computed by dividing the two depth components. The code for the pixel shader is:

PixelShaderOutput PixelShaderFunction(VertexShaderOutput input)
{
    PixelShaderOutput output;
    output.Color = tex2D(diffuseSampler, input.TexCoord);            //output Color
    output.Color.a = specularIntensity;                                              //output SpecularIntensity
    output.Normal.rgb = 0.5f * (normalize(input.Normal) + 1.0f);    //transform normal domain
    output.Normal.a = specularPower;                                            //output SpecularPower
    output.Depth = input.Depth.x / input.Depth.y;                           //output Depth
    return output;
}

One last thing: we need to set the vertex shader and pixel shader versions to 2_0:

technique Technique1
{
    pass Pass1
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}

Now that we’re done with the effect, we’ll add some code to the Scene class. For now, we will just add a model and its texture, and draw them using our effect. This code will change a lot later, when we see how to integrate normal mapping and specular textures using the content pipeline, but you don’t have to worry about that now.

We’ll need to add some models to our game. Add a folder named Models inside your Content folder. Then, add Models\ship1.fbx and Models\ship1_c.tga (from Resources.zip) to your Content project, in the Models folder. The other models will be used later. Next, add members to the Scene class to hold the color texture, the model, and our effect file. These will be initialized in the InitializeScene function. (You also need to add using Microsoft.Xna.Framework.Graphics to Scene.cs.)

class Scene
{
    private Game game;
    Model shipModel;
    Texture2D shipColor;
    Effect gbufferEffect;
    [...]
    public void InitializeScene()
    {
        shipModel = game.Content.Load<Model>("Models\\ship1");
        shipColor = game.Content.Load<Texture2D>("Models\\ship1_c");
        gbufferEffect = game.Content.Load<Effect>("RenderGBuffer");
    }
}

Finally, inside the DrawScene function, we make sure the render state is set up the way we want, then set the desired effect parameters, and then draw the model geometry using our own effect. This means we cannot use the ModelMesh.Draw() function. The code for this is the following:

public void DrawScene(Camera camera, GameTime gameTime)
{
    game.GraphicsDevice.RenderState.DepthBufferEnable = true;
    game.GraphicsDevice.RenderState.CullMode = CullMode.CullCounterClockwiseFace;
    game.GraphicsDevice.RenderState.AlphaBlendEnable = false;
    gbufferEffect.Parameters["World"].SetValue(Matrix.Identity);
    gbufferEffect.Parameters["View"].SetValue(camera.View);
    gbufferEffect.Parameters["Projection"].SetValue(camera.Projection);
    gbufferEffect.Parameters["Texture"].SetValue(shipColor);
    gbufferEffect.Begin();
    gbufferEffect.CurrentTechnique.Passes[0].Begin();
    foreach (ModelMesh mesh in shipModel.Meshes)
    {
        foreach (ModelMeshPart meshPart in mesh.MeshParts)
        {
            game.GraphicsDevice.VertexDeclaration = meshPart.VertexDeclaration;
            game.GraphicsDevice.Vertices[0].SetSource(mesh.VertexBuffer, meshPart.StreamOffset, meshPart.VertexStride);
            game.GraphicsDevice.Indices = mesh.IndexBuffer;
            game.GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList,
                                                      meshPart.BaseVertex, 0,
                                                      meshPart.NumVertices,
                                                      meshPart.StartIndex,
                                                      meshPart.PrimitiveCount);
        }
    }
    gbufferEffect.CurrentTechnique.Passes[0].End();
    gbufferEffect.End();
}

Now, before moving on, let’s try to see our G-Buffer. For this, we will need a SpriteBatch in our DeferredRenderer class, and we’ll draw the three render targets at the end of our Draw code.

public class DeferredRenderer : Microsoft.Xna.Framework.DrawableGameComponent
{
    [...]
    private SpriteBatch spriteBatch;
    [...]
    protected override void LoadContent()
    {
        [...]
        spriteBatch = new SpriteBatch(Game.GraphicsDevice);
    }
    [...]
    public override void Draw(GameTime gameTime)
    {
        SetGBuffer();
        ClearGBuffer();
        scene.DrawScene(camera, gameTime);
        ResolveGBuffer();
        int halfWidth = GraphicsDevice.Viewport.Width / 2;
        int halfHeight = GraphicsDevice.Viewport.Height / 2;
        spriteBatch.Begin();
        spriteBatch.Draw(colorRT.GetTexture(), new Rectangle(0, 0, halfWidth, halfHeight), Color.White);
        spriteBatch.Draw(normalRT.GetTexture(), new Rectangle(0, halfHeight, halfWidth, halfHeight), Color.White);
        spriteBatch.Draw(depthRT.GetTexture(), new Rectangle(halfWidth, 0, halfWidth, halfHeight), Color.White);
        spriteBatch.End();
        base.Draw(gameTime);
    }
}

Now we should see the contents of the G-Buffer. The depth might seem all white, but it isn’t: the values are merely close to white because of how depth precision is distributed across the scene.

[Figure: the contents of the G-Buffer – diffuse color, normals, and depth]
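If you want to convince yourself that the depth target really contains a gradient, you can linearize the stored z/w value before displaying it. The snippet below is only a debugging sketch under a few assumptions (NearClip and FarClip match your camera’s projection, and depthSampler reads the depth render target); it is not part of the article’s code.

//Hypothetical debug pixel shader: convert the stored post-projection depth (z/w)
//back to a linear [0,1] value so the gradient becomes visible.
float NearClip;                       //assumed to match the camera's near plane
float FarClip;                        //assumed to match the camera's far plane
sampler depthSampler : register(s0);  //assumed to sample the depth render target

float4 VisualizeDepthPS(float2 texCoord : TEXCOORD0) : COLOR0
{
    float depth = tex2D(depthSampler, texCoord).r;   //stored z/w
    //invert the projection's depth mapping to recover the view-space distance
    float linearDepth = NearClip * FarClip / (FarClip - depth * (FarClip - NearClip));
    //normalize to [0,1] for display
    return float4(saturate(linearDepth / FarClip).xxx, 1.0f);
}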

This concludes our first chapter. In this chapter we saw what the purpose of the G-Buffer is, and how to create it. We wrote a special effect file that outputs data to three different RenderTargets, taking advantage of MRT. But this is just the beginning. In the next chapter, we will add a directional light to the scene, which will later be followed by point lights and spotlights. The code for this chapter can be downloaded here: Chapter2.zip

  • Karim Kenawy

    Hi there, very nice tutorial and easy tutorial. I have Geforce fx 5500 which supports only 1 render target :( , but I’m trying to convert your code to work on 1 render target instead of 3. So can this process be done ? ( I don’t consider performance, I just want to make it work on my card ).

  • http://www.catalinzima.com Catalin Zima

    Yes, it can be done.
    You still create 3 render targets, but instead of drawing to them all at once, you have to make three passes, each time using a different render target and a different shader.

    If you have any problems with doing this, please let me know, and I’ll help you.

  • Sergey

    Hi, Catalin.
    You “Chapter2.zip” links to local drive: file:///D:/programare/My%20projects/XNA/Deferred%20Shading%20Article/Final/Chapter2.zip

  • http://www.catalinzima.com Catalin Zima

    The link is fixed, but looks like updating messed up my code.
    I’ll fix it as soon as I can.

  • name

    This line is incorrect:

    output.Normal.rgb = 0.5f * (normalize(input.Normal) + 1.0f);

    to convert from [-1,1] to [0,1] must be

    output.Normal.rgb = 0.5f * (normalize(input.Normal) + 0.5f);

  • http://www.catalinzima.com Catalin Zima

    actually, you’re wrong.

    My way:
    [-1,1] + 1.0f => [0.0f,2.0f]
    [0.0f,2.0f] * 0.5f => [0.0f, 1.0f]

    your way:
    [-1,1] + 0.5f = [-0.5f, 1.5f]
    [-0.5f, 1.5f] * 0.5f => [-0.25f, 0.75f]

    So my way is the right one

  • name

    Oh sorry, yes, your method is correct.
    Second one is:
    output.Normal.rgb = 0.5f * normalize(input.Normal) + 0.5f;

  • Isac

    Hi, Catalin
    It’s a great article.
    I have a question. You draw the all geometry use the same effect file, but when each geometry in the scene has its own effect, how to use the MRT ?(I want to use MRT to get depth data)

  • http://www.catalinzima.com Catalin Zima

    You would still have to modify each effect used by models in your game to write to the second render target.

  • BRoany

    nice article however when I try this my Depth target always comes our green and white, my Normals look washed out and the background is black, and the Diffuse looks exactly like yours.

    Could I be doing something wrong?

    thanks

    BRoany

  • http://www.catalinzima.com Catalin Zima

    I don’t know why you would be getting green in the depth buffer.
    How much washed-out is the normal map? depending on the model, this might not be too much of an issue

  • BRoany

    Ok well I may have fixed most of my problems. The only one that still remains is the depth buffer color is still a greenish blue color. It seems to have to do with the “SurfaceFormat.Single” if I set it to color it works perfect? the shader code also does not seem to respond to colors. example if I output a color in the clearBuffer effect it remains a certain the greenish blue the difference is instead of the gradient it is solid. I also have the problem that if I change the Color.alpha or the Normal.Alpha it breaks the the other to color slots making the background stay the purple color. I hope this makes since its kinda late.

    thanks again

  • http://www.catalinzima.com Catalin Zima

    Wait, you mean you get CYAN (http://ro.wikipedia.org/wiki/Cyan) ?
    That’s natural. Because in floating point textures, you alter the red channel, and if the red channel varies from 0 to 1, and green and blue stay at 1, then you have different shades of Cyan, so that’s ok.

  • BRoany

    Ok no more issues but I thought I would drop back by to say thanks cause without you I would have had no idea where to start on learning this stuff. thanks again

    BRoany

  • http://www.catalinzima.com Catalin Zima

    you are welcome

  • Venatu

    Great article, very helpful!

    One thing though, to get the test render (last code sample) to work correctly, I had to use

    spriteBatch.Begin(SpriteBlendMode.None);

    as opposed to

    spriteBatch.Begin();

    that you used. Is this a quirk of the XNA 3.1 as opposed to 2.0? Please keep at the articles and samples, they’re all very useful

  • http://www.catalinzima.com Catalin Zima

    Thanks for the notification. Normally there’s no difference on that front between 2.0 and 3.1, so I don’t know why that was needed.

    I’ll be coming back to do more samples and tutorial, but probably only in July/August.

  • Daan Nijs

    There’s an error in your 2*16bit normal packing code. Normals CAN be negative in view space, due to perspective. You can ceck out a reference @ http://cmpmedia.vo.llnwd.net/o1/vault/gdc09/slides/gdc09_insomniac_prelighting.pdf

    Correct packing code would convert the normal to spherical coordinates.

  • http://www.catalinzima.com Catalin Zima

    You’re right. I never thought of that…

  • Mike

    if i only have Nvidia GeForce FX 5500
    and the game need 4 MRT , so i can’t do anything to play that game ?

    i got this error starting the game
    ” Insufficient MRT support video memory at least 4 are required (you have 1)”

  • http://www.catalinzima.com Catalin Zima

    No, if your video card only supports 1 render target, you can’t run this example.

    You could theoretically do something similar with no support for MRT, that is, draw each pass of the Geometry Buffer one after the other, into a different render target, but you would thus lose all the benefits of MRTs, and using Deferred Rendering this way wouldn’t make much sense.

  • Mike

    ohh my bad card.
    thanks to you ,Catalin

    super fast response ever!

    which AGP cards the best for 8xAGP slot
    in my head ..there are
    -Radeon 3850 512MB
    -Geforce 7600GT or 7900GS
    or any AGP cards better than these ?

    specs :
    Celeron 2.4GHz
    512 RAM
    8X agp slot with GeForce 5500 already

  • http://www.catalinzima.com Catalin Zima

    If you can find a GeForce 7600GT or greater running on AGP, then you’re set.

    Of course, a PCI-Express card would be better, but I assume your computer doesn’t have a PCI-Express slot.

  • Mike

    yes.. my comp dont have PCI-E slot
    “old generation ”

    my friend told me ATI Radeon still release AGP products until now,even this is PCI-E era.

    Radeon 3850 AGP vs Geforce 7600 ==> winner is 3850 ?

  • Mike

    thanks Catalin..

    topic close please
    i had choose 3850 AGP :)

  • http://www.catalinzima.com Catalin Zima

    I’m not familiar with the ATI line, so I can’t say which one is better :)

  • David

    Dude, in the

    spriteBatch.Begin();

    line, it should be

    spriteBatch.Begin(SpriteBlendMode.None);

    or you could not obtain the image you put as result.

  • czaurel

    Hi, I just bumped into this the other day and tried (still trying) to put some of the code to use in my bsc thesis, using XNA 3.0.
    ATM the only minor problem I’ve got is that when I draw the resolved rendertarget textures with the spritebatch, the first draws OK, but the other two appear as completely transparent/cyan, depending on the surface format.
    And this seems SpriteSortMode-dependent. With BackToFront I get the last one (the depth rt) correct, with FronToBack or Deferred only the first appears right.
    Strangely enough (for me), RenderTargetUsage.PreserveContents seems to solve this problem even tough I’m pretty sure the rendertarget textures shouldn’t lose their content because of a mere spritebatch draw call after being resolved.
    It might not be worth the pondering, but I’m still curious what I’m missing.

  • Peter

    I want to say thank You for great article and piece of great code. Now after ziggyware.com I left with code only:). I will appreciate some help. I’m using geoclipmapped terrain made in GPU. It is built from heightmap. It is easy to get normal map from heightmap with many programs. But I don’t know how to build in Vertex Shader TBN Matrix. Do you know how to get binormal and tangent from normal map or heightmap? Do you have maybe VTF terrain merged with deferred lightning HLSL code or documents, links where I can find solution how to do it?

    Thank you

    Peter, from Poland

  • Alex Pecoraro

    Why do you transform the normal by the full world transformation in the gbuffer vertex shader? Wouldn’t you want to only transform it by the rotation component of the world transform in order to avoid scaling or translating the normal? Something like this is what I would have expected:

    Output.Normal = normalize(mul(inNormal, (float3x3)World));

    Thanks.

  • Joshua Tully

    This is a great tutorial however for people reading this should also read the ShaderX2 article on deferred lighting with multiple render targets, it will definitely fill in some gaps and explain the process a bit more thoroughly.

    ShaderX2 is actually free on GameDev.net or a direct link http://www.gamedev.net/reference/programming/features/shaderx2/Tips_and_Tricks_with_DirectX_9.pdf

  • Dennis

    Thanks for the awesome tutorial! I managed to make a simple rendering engine, but I’m having trouble getting the depth value out of the depth render target. How do I get the depth in XNA units from the [0,1] values?
    This is what I’m using,

    float distance = tex2D(depthSampler, input.TexCoord).r;

    but I can’t figure out what I’m supposed to multiply with/divide by.
    Thanks

  • Seabolt

    Hey this might be a dumb question but:

    I’m currently using XNA 4.0 and I’ve been able to convert everything in this chapter over to it so far, but I’m have a problem with there being no depth buffer (I think) for the textures. So I render the ship, but I can see things that should be behind pixels being rendered in front. We have this depth buffer we can use, but how can I set it up to make sure there isn’t a problem with the depth buffers for the render targets?

    Thanks for the help and the excellent tutorials.

  • http://Website Peter

    IN HiProfile in XNA 4.0 we can’t use SurfaceFormat.Single with AlphaBlending for depth texture. Any sugestions how to fix this problem with depth buffer format?

  • http://roy-t.nl Roy Triesscheijn

    Hey Catalin,

    I’m trying to convert your code to XNA4.0, by slowly following each tutorial. Now I’m a bit stuck on the point light, directional light works perfectly, however the point light doesn’t lit any thing. It seems that at least attenuation is not correctly computed (always 0). But it also seems that even when removing this from the equation things aren’t exactly lit.

    Here’s a picture to show what I mean:http://i56.tinypic.com/1h44uc.png

    When I return a solid color in the shader I get to see the moving sphere. But somehow something is wrong :P.

    The shader code is 100% the same as yours, but the class code was updated to 4.0, like this: http://pastebin.com/nsyR5BAi

    If you have the time could you give me some pointers on what I could check that is causing this? If you are extremely bored, maybe even to glance at the code?

    You can contact me for further questions @roytries on your twitters, and I think you have my e-mail address.

    Sorry for bothering you like this. I’m really stuck. And ofc I’ll share the code with you once I’ve completed the tutorial and cleaned up the code :).

  • Pingback: XNA 4.0 Light Pre-Pass | J.Coluna

  • http://perfunction.net Josh

    If you’re having trouble using SurfaceFormat.Single for your depth render target, run this line of code directly before clearing your g-buffer:

    Game.GraphicsDevice.BlendState = BlendState.Opaque;

  • Isinfur

    z = sqrt(1 – x*x – y*y) <- incorrect

    Computing normal from X and Y as mention here have subtle incorrectness to them, as the Z value will can go either positive or negative after perspective projection. This method simply does not account for it.

    To get real Z value from only X,Y value is usually too much work (is possible but is computationally heavy), most in the industry just simply store the X,Y,Z in the Gbuffer as is.

    (google for "viewspace normal myth" for a more complete explaination on this subject)

  • Pingback: How Lights Work | Dev Blog