Before we get practical, we need to discuss some theory. Choosing the G-Buffer format is one of the most crucial steps in deferred shading: you have to make a compromise between quality and speed. Greater quality comes from greater precision or a larger number of parameters, and these two determine how large the G-Buffer needs to be, which in turn affects performance.
Multiple Render Targets
We saw earlier that the G-Buffer needs to store a whole lot of information. Normally, the result of rendering is a single image, but we need several: one for each component of the G-Buffer. One way to achieve this is to render the scene in multiple passes, each time with a different shader, and to a different RenderTarget. This means that the computations on the scene geometry will be done several times, and lots of Draw calls will be made.
Multiple Render Targets (MRTs) allow the pixel shader to output several colors at once. This is exactly what we need, since the scene will be drawn only once, but all attributes will be generated.
To use MRTs we need to use COLOR0 to COLOR3 semantics in the Pixel Shader. An example of a pixel shader that outputs a different color to each render target can be seen below:
struct PixelShaderOutput
{
    float4 Color : COLOR0;
    float4 Color1 : COLOR1;
    float4 Color2 : COLOR2;
};

PixelShaderOutput PixelShaderFunction(VertexShaderOutput input)
{
    PixelShaderOutput output;
    output.Color = float4(1,0,0,1);
    output.Color1 = float4(0,1,0,1);
    output.Color2 = float4(0,0,1,1);
    return output;
}
When using MRTs, we have some restrictions. First, all render targets must have the same bit depth. This means you cannot mix a render target of SurfaceFormat.Color (32-bit RGBA) with one of SurfaceFormat.HalfVector4 (64-bit R16G16B16A16F). This constrains how we set up our G-Buffer. If we use large formats (64 bits), we get good precision where we need it (normals, positions, depth), but waste space elsewhere (color, specular intensity, etc.). On the other hand, if we use smaller formats (32 bits), we get better performance, but lower precision.
Next, we'll analyze the design choices for each G-Buffer component. The main components that need to be addressed are diffuse color, normals and position. Once these are fixed, we can focus on adding material attributes, such as specular intensity or specular power.
Choosing RenderTarget Format for diffuse color
Ok, this one is simple. We use SurfaceFormat.Color, which is 32 bits. However, if the rest of the attributes are better suited to 64-bit render targets, we can also store the diffuse component in SurfaceFormat.HalfVector4. The second option wastes some space, but some configurations may be fine with this.
Choosing RenderTarget Format for normals
Here, we have more options, and more decisions to make. The obvious first choices would be SurfaceFormat.Vector4 (128 bits) or SurfaceFormat.HalfVector4 (64 bits). The 128-bit format is out of the question: we don't need that much precision, and a 4-MRT G-Buffer at 1024x768 would total 48 MB, which is huge. Besides, 128-bit render targets are not supported on the Xbox 360. The second option, HalfVector4, provides very good precision for normals. If the 64-bit size bothers us, and we want to use 32 bits, we still have a few options:
- SurfaceFormat.Color – This gives 8 bits for each normal component X, Y and Z, and also leaves us a free 8-bit field (the alpha channel), which we can use for other parameters. If we choose this format, we'll have to take care when storing and reading normals. SurfaceFormat.Color can only store values between 0.0f and 1.0f, but normal components can be between -1.0f and 1.0f. This is easily solved by converting from the [-1.0, 1.0] domain to the [0.0, 1.0] domain before writing the data, and converting back when reading it.
- SurfaceFormat.HalfVector2 – This format gives two 16-bit components. But wait, you might say: the normal needs 3 components, since it is a vector in space. True, but since the normal is always normalized, we can store only X and Y, and then recompute Z with the formula Z = sqrt( 1 – X*X – Y*Y ). Note that this loses the sign of Z, so it only works when we know which way the normal faces (for example, view-space normals pointing towards the camera). By using this, we get both high-precision normals and 32-bit render targets.
- SurfaceFormat.Rgba1010102 – Shawn Hargreaves suggests this in his paper [1]. This is a nice option on 32 bits, with good precision, and it also leaves us a 2-bit channel if we want to use it. However, I tried creating a render target of this format on three computers, and it didn't work.
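To make the [-1, 1] to [0, 1] conversion and the Z reconstruction concrete, here is a small sketch (Python is used only for illustration; in the real renderer these conversions happen in HLSL, and the function names are my own):

```python
import math

def encode_normal(n):
    # map each component from [-1, 1] to [0, 1], as required by SurfaceFormat.Color
    return tuple(0.5 * c + 0.5 for c in n)

def decode_normal(e):
    # map back from [0, 1] to [-1, 1] when reading the G-Buffer
    return tuple(2.0 * c - 1.0 for c in e)

def reconstruct_z(x, y):
    # for the HalfVector2 option: recover Z of a unit-length normal from X and Y
    # (the sign of Z is assumed known, e.g. front-facing view-space normals)
    return math.sqrt(max(0.0, 1.0 - x * x - y * y))

n = (0.0, 0.5, math.sqrt(0.75))   # a unit-length normal
stored = encode_normal(n)         # what gets written to the render target
decoded = decode_normal(stored)   # recovered on read
z = reconstruct_z(n[0], n[1])     # matches n[2] for a front-facing normal
```

Note that decoding the mid-grey value (0.5, 0.5, 0.5) yields (0, 0, 0), which is why grey is a safe "no normal" clear value later on.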
Choosing RenderTarget Format for positions
The options available for storing the position are the following:
- SurfaceFormat.Vector4 – this is a 128-bit format. Again, this forces 128 bits on the other render targets, which wastes a lot of memory, and it is not available on the Xbox 360.
- SurfaceFormat.HalfVector4 – a 64-bit format. This has good precision, and also leaves us another channel where we can store an extra attribute.
- SurfaceFormat.Single – this is the third option, and the only option if you want to have 32-bit wide RenderTargets. In order to use this, you only store the depth of the pixel, instead of the whole position. When we need to use the position, we have the screen-space X,Y position of the pixel, which, together with the depth can be used to compute the world position corresponding to that pixel.
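The depth-only reconstruction can be sketched numerically (a Python/NumPy illustration only; the names are my own, and the real work happens later in a shader): project a point, keep just its screen position and depth, and undo the projection.

```python
import math
import numpy as np

def perspective(fov_y, aspect, near, far):
    # D3D-style projection: maps view-space z in [near, far] to depth in [0, 1]
    f = 1.0 / math.tan(fov_y / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, far / (far - near), -near * far / (far - near)],
        [0.0, 0.0, 1.0, 0.0],
    ])

def reconstruct_position(ndc_x, ndc_y, depth, inv_view_proj):
    # undo the projection, then the perspective divide
    p = inv_view_proj @ np.array([ndc_x, ndc_y, depth, 1.0])
    return p[:3] / p[3]

proj = perspective(math.pi / 4, 16 / 9, 1.0, 100.0)  # view = identity, for simplicity
point = np.array([3.0, -2.0, 25.0, 1.0])             # a point in view/world space
clip = proj @ point
ndc = clip / clip[3]          # ndc[2] is the depth value the G-Buffer would store
restored = reconstruct_position(ndc[0], ndc[1], ndc[2], np.linalg.inv(proj))
# restored is the original (3, -2, 25) again
```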
Choosing RenderTarget Format for other attributes
When we reach this point, we might already have some free channels remaining for extra attributes. If these are not enough, we can add a new RenderTarget dedicated strictly to extra attributes. Choices for this extra RenderTarget can be SurfaceFormat.Color, when using 32 bit targets, or SurfaceFormat.HalfVector4 when using 64 bit targets.
G-Buffer Setup for this article
For the rest of this article, we will use the following setup for the G-Buffer, with three 32-bit render targets:
RenderTarget 0 (SurfaceFormat.Color):  Color.R | Color.G | Color.B | Specular Intensity
RenderTarget 1 (SurfaceFormat.Color):  Normal.X | Normal.Y | Normal.Z | Specular Power
RenderTarget 2 (SurfaceFormat.Single): Depth (32 bits)
For now, we will use a simple lighting model (Phong), in which the light is composed of diffuse light and specular highlights. The only extra parameters we need are related to specular lighting: Specular Intensity and Specular Power. The 32-bit depth value is used to compute the position of the pixel in world space. The precision of the normals is not that good, and this might be noticeable on some surfaces, but the advantage is that this setup is very compact, and performance is very good.
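As a sanity check on compactness, the footprint of a G-Buffer layout is easy to compute (a back-of-the-envelope Python sketch; gbuffer_bytes is a made-up helper, not part of the renderer):

```python
def gbuffer_bytes(width, height, bits_per_target):
    # total size of the G-Buffer for a list of per-target bit depths
    return sum(width * height * bits // 8 for bits in bits_per_target)

# the three 32-bit targets used in this article, at 1024x768
compact = gbuffer_bytes(1024, 768, [32, 32, 32])      # 9 MB
# the four-target 128-bit layout rejected earlier, for comparison
fat = gbuffer_bytes(1024, 768, [128, 128, 128, 128])  # 48 MB
```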
Setting up the G-Buffer
We need to add some variables in the DeferredRenderer class. These are the three RenderTargets we will use: colorRT, normalRT and depthRT. We then initialize them in the LoadContent function.
private RenderTarget2D colorRT;  //this Render Target will hold color and Specular Intensity
private RenderTarget2D normalRT; //this Render Target will hold normals and Specular Power
private RenderTarget2D depthRT;  //finally, this one will hold the depth

[...]

protected override void LoadContent()
{
    scene.InitializeScene();
    //get the sizes of the backbuffer, in order to have matching render targets
    int backBufferWidth = GraphicsDevice.PresentationParameters.BackBufferWidth;
    int backBufferHeight = GraphicsDevice.PresentationParameters.BackBufferHeight;
    colorRT = new RenderTarget2D(GraphicsDevice, backBufferWidth,
                                 backBufferHeight, 1, SurfaceFormat.Color);
    normalRT = new RenderTarget2D(GraphicsDevice, backBufferWidth,
                                  backBufferHeight, 1, SurfaceFormat.Color);
    depthRT = new RenderTarget2D(GraphicsDevice, backBufferWidth,
                                 backBufferHeight, 1, SurfaceFormat.Single);
    base.LoadContent();
}
Next, we will create two functions called SetGBuffer and ResolveGBuffer, that set the render targets on the device. We set the colorRT as the first render target, normalRT as the second, and finally, depthRT as the third.
private void SetGBuffer()
{
    GraphicsDevice.SetRenderTarget(0, colorRT);
    GraphicsDevice.SetRenderTarget(1, normalRT);
    GraphicsDevice.SetRenderTarget(2, depthRT);
}

private void ResolveGBuffer()
{
    //set all render targets to null. In XNA 2.0, switching a render target
    //resolves the previous one. In XNA 1.1, we needed to call
    //GraphicsDevice.ResolveRenderTarget(i);
    GraphicsDevice.SetRenderTarget(0, null);
    GraphicsDevice.SetRenderTarget(1, null);
    GraphicsDevice.SetRenderTarget(2, null);
}
Clearing the G-Buffer
Before we draw anything, we need to clear the G-Buffer to default values. The problem here is that we can’t simply use GraphicsDevice.Clear(), because that would set all render targets to the same color, which is not what we want. We need to clear the color render target to black, the depth render target to white (which means maximum depth), and the normal render target to grey, which, when transformed from the [0,1] domain to the [-1,1] domain, becomes (0,0,0), a good value for a default normal. This will prevent lighting artifacts from appearing on the background, where no object was drawn.
To fix the clearing problem, we will need to create a new effect file, named ClearGBuffer.fx, inside the Content Project. Inside this effect, we set the render targets to the values we want. In the vertex shader, we just pass the position further. We will use the quadRenderer to draw a full-screen quad.
The code for the shader is simple:
struct VertexShaderInput
{
    float3 Position : POSITION0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    output.Position = float4(input.Position, 1);
    return output;
}

struct PixelShaderOutput
{
    float4 Color : COLOR0;
    float4 Normal : COLOR1;
    float4 Depth : COLOR2;
};

PixelShaderOutput PixelShaderFunction(VertexShaderOutput input)
{
    PixelShaderOutput output;
    //black color
    output.Color = 0.0f;
    output.Color.a = 0.0f;
    //when transforming 0.5f into [-1,1], we will get 0.0f
    output.Normal.rgb = 0.5f;
    //no specular power
    output.Normal.a = 0.0f;
    //max depth
    output.Depth = 1.0f;
    return output;
}

technique Technique1
{
    pass Pass1
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}
We will load this effect file into a variable, and then we will create a function called ClearGBuffer() in the DeferredRenderer class, where we draw a full-screen quad using this effect.
private Effect clearBufferEffect;

protected override void LoadContent()
{
    [...]
    clearBufferEffect = Game.Content.Load<Effect>("ClearGBuffer");
}

private void ClearGBuffer()
{
    clearBufferEffect.Begin();
    clearBufferEffect.Techniques[0].Passes[0].Begin();
    quadRenderer.Render(Vector2.One * -1, Vector2.One);
    clearBufferEffect.Techniques[0].Passes[0].End();
    clearBufferEffect.End();
}
Finally, in the Draw function, we set the G-Buffer, clear it, draw the scene, and then resolve the G-Buffer.
public override void Draw(GameTime gameTime)
{
    SetGBuffer();
    ClearGBuffer();
    scene.DrawScene(camera, gameTime);
    ResolveGBuffer();
    base.Draw(gameTime);
}
If you run the game now, you won’t see anything. (Actually, you’ll probably see a violet screen, because of how RenderTarget switching is handled in XNA 2.0). This is normal, because we didn’t actually draw anything. Next, we will add some items in the Scene class, and draw them using a special shader.
Drawing the Scene
We will now create an effect that will be used to draw all geometry in the game. This effect will output values to all render targets, and it is responsible for filling the G-Buffer, so it is one of the main pieces of code for the deferred renderer.
To create a new shader, right-click the Content project, choose Add New Item, select a new Effect File, and name it RenderGBuffer.fx. This gives us a template effect file, which we'll modify. We need to add a texture, and a sampler for it; this will be used to draw the color of the model. For the specular intensity and specular power, we will use two effect parameters for now; in a later chapter, we will see how to read this data from textures. For specularPower, we will store a value in the [0,1] range, which will later be multiplied by 255 to obtain a power coefficient between 0 and 255.
float4x4 World;
float4x4 View;
float4x4 Projection;
float specularIntensity = 0.8f;
float specularPower = 0.5f;

texture Texture;
sampler diffuseSampler = sampler_state
{
    Texture = (Texture);
    MAGFILTER = LINEAR;
    MINFILTER = LINEAR;
    MIPFILTER = LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
};
Because we will be outputting normals and depth, we need to add the normal to the VertexShaderInput structure, and the normal and depth to the VertexShaderOutput structure. We also need texture coordinates. The depth is a two-component vector because we only do the division by w in the pixel shader; otherwise, strange values are obtained when a vertex of a triangle is outside the view frustum, but the triangle is still visible.
struct VertexShaderInput
{
    float4 Position : POSITION0;
    float3 Normal : NORMAL0;
    float2 TexCoord : TEXCOORD0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float2 TexCoord : TEXCOORD0;
    float3 Normal : TEXCOORD1;
    float2 Depth : TEXCOORD2;
};
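The need for the two-component Depth seen above can be illustrated numerically (a simplified Python sketch that ignores clipping; the values are made up): when an edge crosses behind the camera, w goes negative, and dividing per vertex before interpolation mixes a plausible-looking but bogus depth into the result.

```python
# Two clip-space (z, w) pairs for the endpoints of an edge:
# v0 is in front of the camera, v1 is slightly behind it,
# as happens for triangles crossing the near plane.
z0, w0 = 5.0, 10.0    # depth z/w = 0.5, a valid value
z1, w1 = -0.2, -0.1   # behind the camera: z/w = 2.0 looks "valid" but is garbage

def lerp(a, b, t):
    # linear interpolation, standing in for what the rasterizer does
    return a + (b - a) * t

t = 0.5
# dividing per vertex, then interpolating, propagates the bogus value:
per_vertex = lerp(z0 / w0, z1 / w1, t)         # 1.25
# interpolating z and w separately, then dividing (what the pixel shader does):
per_pixel = lerp(z0, z1, t) / lerp(w0, w1, t)  # 2.4/4.95, roughly 0.485
```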
The VertexShaderFunction remains mostly the same for now. We only need to add three instructions for the new outputs. The normals are transformed into world space, and the depth is composed of output.Position.z and output.Position.w.
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    output.TexCoord = input.TexCoord;         //pass the texture coordinates further
    output.Normal = mul(input.Normal, World); //get normal into world space
    output.Depth.x = output.Position.z;
    output.Depth.y = output.Position.w;
    return output;
}
We are now left with the pixel shader. Since we are no longer outputting to only one render target, we need an output structure for the pixel shader, containing one value for each render target.
struct PixelShaderOutput
{
    half4 Color : COLOR0;
    half4 Normal : COLOR1;
    half4 Depth : COLOR2;
};
In the PixelShaderFunction, we need to output the color, normal and depth, each one to the corresponding render target. You can see that we transform the normal domain, from [-1,1] to [0,1]. The depth is computed by dividing the two depth components. The code for the pixel shader is:
PixelShaderOutput PixelShaderFunction(VertexShaderOutput input)
{
    PixelShaderOutput output;
    output.Color = tex2D(diffuseSampler, input.TexCoord);        //output Color
    output.Color.a = specularIntensity;                          //output SpecularIntensity
    output.Normal.rgb = 0.5f * (normalize(input.Normal) + 1.0f); //transform normal domain
    output.Normal.a = specularPower;                             //output SpecularPower
    output.Depth = input.Depth.x / input.Depth.y;                //output Depth
    return output;
}
One last thing: we need to set the vertex shader and pixel shader versions to 2_0:
technique Technique1
{
    pass Pass1
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}
Now that we’re done with the effect, we’ll add some code to the Scene class. For now, we will just add a model and its texture, and draw them using our effect. This code will change a lot later, when we see how to integrate normal mapping and specular textures using the content pipeline, but you don’t have to worry about that now.
We’ll need to add some models to our game. Add a folder named Models inside your Content folder. Then, add Models\ship1.fbx and Models\ship1_c.tga (from Resources.zip) to your Content project, in the Models folder; the other models will be used later. Next, add members to the Scene class to hold the color texture, the model, and our effect file. These will be initialized in the InitializeScene function. (You also need to add a using directive for Microsoft.Xna.Framework.Graphics to Scene.cs.)
class Scene
{
    private Game game;
    Model shipModel;
    Texture2D shipColor;
    Effect gbufferEffect;
    [...]

    public void InitializeScene()
    {
        shipModel = game.Content.Load<Model>("Models\\ship1");
        shipColor = game.Content.Load<Texture2D>("Models\\ship1_c");
        gbufferEffect = game.Content.Load<Effect>("RenderGBuffer");
    }
}
Finally, inside the DrawScene function, we make sure the render state is as we want it, set the desired effect parameters, and then draw the model geometry using our own effect. This means we cannot use the ModelMesh.Draw() function. The code for this is the following:
public void DrawScene(Camera camera, GameTime gameTime)
{
    game.GraphicsDevice.RenderState.DepthBufferEnable = true;
    game.GraphicsDevice.RenderState.CullMode = CullMode.CullCounterClockwiseFace;
    game.GraphicsDevice.RenderState.AlphaBlendEnable = false;
    gbufferEffect.Parameters["World"].SetValue(Matrix.Identity);
    gbufferEffect.Parameters["View"].SetValue(camera.View);
    gbufferEffect.Parameters["Projection"].SetValue(camera.Projection);
    gbufferEffect.Parameters["Texture"].SetValue(shipColor);
    gbufferEffect.Begin();
    gbufferEffect.CurrentTechnique.Passes[0].Begin();
    foreach (ModelMesh mesh in shipModel.Meshes)
    {
        foreach (ModelMeshPart meshPart in mesh.MeshParts)
        {
            game.GraphicsDevice.VertexDeclaration = meshPart.VertexDeclaration;
            game.GraphicsDevice.Vertices[0].SetSource(mesh.VertexBuffer, meshPart.StreamOffset, meshPart.VertexStride);
            game.GraphicsDevice.Indices = mesh.IndexBuffer;
            game.GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList,
                meshPart.BaseVertex, 0,
                meshPart.NumVertices,
                meshPart.StartIndex,
                meshPart.PrimitiveCount);
        }
    }
    gbufferEffect.CurrentTechnique.Passes[0].End();
    gbufferEffect.End();
}
Now, before moving on, let’s try to see our G-Buffer. For this, we need a SpriteBatch in the DeferredRenderer class, and we’ll draw the three render targets at the end of our Draw code.
public class DeferredRenderer : Microsoft.Xna.Framework.DrawableGameComponent
{
    [...]
    private SpriteBatch spriteBatch;
    [...]

    protected override void LoadContent()
    {
        [...]
        spriteBatch = new SpriteBatch(Game.GraphicsDevice);
    }
    [...]

    public override void Draw(GameTime gameTime)
    {
        SetGBuffer();
        ClearGBuffer();
        scene.DrawScene(camera, gameTime);
        ResolveGBuffer();

        int halfWidth = GraphicsDevice.Viewport.Width / 2;
        int halfHeight = GraphicsDevice.Viewport.Height / 2;
        spriteBatch.Begin();
        spriteBatch.Draw(colorRT.GetTexture(), new Rectangle(0, 0, halfWidth, halfHeight), Color.White);
        spriteBatch.Draw(normalRT.GetTexture(), new Rectangle(0, halfHeight, halfWidth, halfHeight), Color.White);
        spriteBatch.Draw(depthRT.GetTexture(), new Rectangle(halfWidth, 0, halfWidth, halfHeight), Color.White);
        spriteBatch.End();
        base.Draw(gameTime);
    }
}
Now, we should see the contents of the G-Buffer. The depth might seem all white, but it isn’t: the values are merely close to white because of how depth precision is distributed in the scene.
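Why nearly white? For a D3D-style projection, the stored depth z'/w' climbs towards 1 very quickly with distance, so most of the visible scene lands in a narrow band near white (a Python sketch for intuition; the near/far values are made up):

```python
def ndc_depth(z, near=1.0, far=1000.0):
    # projected depth z'/w' for a view-space depth z, D3D-style [0, 1] range
    return (far / (far - near)) * (1.0 - near / z)

# most of the [near, far] range maps to values very close to 1:
# roughly, ndc_depth(10) is about 0.90, ndc_depth(100) about 0.99,
# and ndc_depth(500) about 0.999
samples = [ndc_depth(z) for z in (10.0, 100.0, 500.0)]
```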
This concludes our first chapter. In this chapter, we saw the purpose of the G-Buffer and how to create it. We wrote a special effect file that outputs data to three different render targets, taking advantage of MRT. But this is just the beginning: in the next chapter, we will add a directional light to the scene, which will later be followed by point lights and spotlights. The code for this chapter can be downloaded here: Chapter2.zip