There are three common types of lights: directional lights, point lights, and spot lights. In this chapter we will see how to add a directional light to our renderer. Directional lights illuminate all objects equally from a given direction. Because a directional light affects the entire scene, its contribution has to be computed for every pixel on the screen. This makes these lights expensive, so it is recommended to keep the number of directional lights small. This limitation will not apply to the other types of lights.
To apply directional lights, we will use a full-screen post-processing pass. For each pixel, based on the normal and position read from the G-Buffer, we compute the amount of illumination. We will create an effect file for this, named DirectionalLight.fx. Applying this shader will generate a lightmap that will later be combined with the colors from the G-Buffer. For this, we will output the diffuse light (which may be colored) to the RGB channels, and the specular light (which we will always consider white) to the alpha channel. When combining these at the end, we will use the equation FinalColor = DiffuseColor * DiffuseLight + SpecularLight.
Since we are applying shading as a post-processing pass, we need to make sure we properly align pixels to texels. To understand why we need to do this, you may want to read Directly Mapping Texels to Pixels, on MSDN. We'll add a member for this in DeferredRenderer, and initialize it to half the size of a pixel.
private Vector2 halfPixel;
[...]
protected override void LoadContent()
{
halfPixel.X = 0.5f / (float)GraphicsDevice.PresentationParameters.BackBufferWidth;
halfPixel.Y = 0.5f / (float)GraphicsDevice.PresentationParameters.BackBufferHeight;
[...]
}
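This arithmetic is easy to get backwards, so here is a quick Python sketch of the same computation as a sanity check. The 800x600 back-buffer size is a hypothetical example; the offset is expressed in texture-coordinate units, where the whole screen spans [0,1]:

```python
# Half-pixel offset in texture-coordinate units, mirroring the C# code
# above. The 800x600 back-buffer size is a hypothetical example.
def half_pixel(back_buffer_width, back_buffer_height):
    return (0.5 / back_buffer_width, 0.5 / back_buffer_height)

hx, hy = half_pixel(800, 600)
print(hx)  # 0.000625
```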
Back in the shader, inside DirectionalLight.fx, we need parameters for the directional light's direction and color. For specular computations, we need to know the position of the camera. Finally, in order to compute the world position of a pixel from its screen-space depth, we need the inverse of the ViewProjection matrix. We also need parameters for the textures in the G-Buffer: color, normals and depth.
//direction of the light
float3 lightDirection;
//color of the light
float3 Color;
//position of the camera, for specular light
float3 cameraPosition;
//this is used to compute the world-position
float4x4 InvertViewProjection;
// diffuse color, and specularIntensity in the alpha channel
texture colorMap;
// normals, and specularPower in the alpha channel
texture normalMap;
//depth
texture depthMap;
sampler colorSampler = sampler_state
{
Texture = (colorMap);
AddressU = CLAMP;
AddressV = CLAMP;
MagFilter = LINEAR;
MinFilter = LINEAR;
Mipfilter = LINEAR;
};
sampler depthSampler = sampler_state
{
Texture = (depthMap);
AddressU = CLAMP;
AddressV = CLAMP;
MagFilter = POINT;
MinFilter = POINT;
Mipfilter = POINT;
};
sampler normalSampler = sampler_state
{
Texture = (normalMap);
AddressU = CLAMP;
AddressV = CLAMP;
MagFilter = POINT;
MinFilter = POINT;
Mipfilter = POINT;
};
The vertex shader inputs and outputs are positions and texture coordinates. We will be using Ziggyware's QuadRenderer again, which sets the vertex positions directly in screen space, so the vertex shader will not transform them in any way. The texture coordinates will just be passed forward, after aligning them by half a pixel.
struct VertexShaderInput
{
float3 Position : POSITION0;
float2 TexCoord : TEXCOORD0;
};
struct VertexShaderOutput
{
float4 Position : POSITION0;
float2 TexCoord : TEXCOORD0;
};
float2 halfPixel;
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
VertexShaderOutput output;
output.Position = float4(input.Position,1);
//align texture coordinates
output.TexCoord = input.TexCoord - halfPixel;
return output;
}
Now we need to write the pixel shader. First we need to get the data we need out of the G-Buffer. Most of it is straightforward, like normals and specular coefficients.
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
//get normal data from the normalMap
float4 normalData = tex2D(normalSampler,input.TexCoord);
//transform normal back into [-1,1] range
float3 normal = 2.0f * normalData.xyz - 1.0f;
//get specular power, and scale it back into the [0,255] range
float specularPower = normalData.a * 255;
//get specular intensity from the colorMap
float specularIntensity = tex2D(colorSampler, input.TexCoord).a;
This would be enough for diffuse lighting. But for specular lighting we also need the vector from the camera to the point being shaded, and for this, we need the position of that point. Right now, we have the depth in the depthMap, and the position on the screen, in the [0,1] range, which comes from the texture coordinates. We will move this into screen coordinates, where x is in the [-1,1] range and y is in the [1,-1] range (the y axis is flipped), and then, using the InvertViewProjection matrix, we can get the position back into world coordinates. If you need to better understand the different coordinate spaces, like world space, view space and screen space, check the Shader Series on creators.xna.com.
//read depth
float depthVal = tex2D(depthSampler,input.TexCoord).r;
//compute screen-space position
float4 position;
position.x = input.TexCoord.x * 2.0f - 1.0f;
position.y = -(input.TexCoord.y * 2.0f - 1.0f);
position.z = depthVal;
position.w = 1.0f;
//transform to world space
position = mul(position, InvertViewProjection);
position /= position.w;
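To see what these few lines do outside of HLSL, here is a small Python sketch of the same reconstruction. The matrix used is a hypothetical inverse view-projection (a simple orthographic scale, chosen so the result can be verified by hand); a real renderer would pass in the inverse of camera.View * camera.Projection:

```python
# Mirrors the shader's position reconstruction. The matrix below is a
# hypothetical inverse view-projection (an orthographic scale), used
# only so the result is easy to check by hand.

def mat_vec_mul(v, m):
    # Row vector times matrix, matching HLSL's mul(position, matrix)
    return [sum(v[i] * m[i][j] for i in range(4)) for j in range(4)]

def reconstruct_world_position(tex_coord, depth, inv_view_projection):
    # Texture coords [0,1] -> screen space: x in [-1,1], y flipped
    pos = [tex_coord[0] * 2.0 - 1.0,
           -(tex_coord[1] * 2.0 - 1.0),
           depth,
           1.0]
    pos = mat_vec_mul(pos, inv_view_projection)
    return [c / pos[3] for c in pos]  # perspective divide

# Inverse of an ortho view-projection that halves x and y: scale by 2
inv_vp = [[2, 0, 0, 0],
          [0, 2, 0, 0],
          [0, 0, 1, 0],
          [0, 0, 0, 1]]
# The screen center (0.5, 0.5) at depth 0 maps back to the origin
print(reconstruct_world_position((0.5, 0.5), 0.0, inv_vp))
```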
After we compute the vector from the surface towards the light (which in this case is the negated lightDirection), we compute the diffuse light using the dot product between the normal and the light vector. The specular light is computed using the dot product between the light's reflection vector and the surface-to-camera vector, raised to the specular power. The output will contain the diffuse light in the RGB channels, and the specular light in the A channel. The technique will use these shaders, and compile them with Shader Model 2.0.
//surface-to-light vector
float3 lightVector = -normalize(lightDirection);
//compute diffuse light
float NdL = max(0,dot(normal,lightVector));
float3 diffuseLight = NdL * Color.rgb;
//reflection vector
float3 reflectionVector = normalize(reflect(lightVector, normal));
//surface-to-camera vector
float3 directionToCamera = normalize(cameraPosition - position);
//compute specular light
float specularLight = specularIntensity * pow( saturate(dot(reflectionVector, directionToCamera)), specularPower);
//output the two lights
return float4(diffuseLight.rgb, specularLight) ;
}
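The lighting math in this pixel shader can be mirrored in a few lines of Python, which makes it easy to experiment with values. This is only an illustration with hypothetical inputs, not part of the renderer:

```python
import math

# Python mirror of the shader's diffuse/specular math, with hypothetical
# inputs. Vectors are plain 3-element lists.
def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return [c / length for c in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(i, n):
    # Same formula as HLSL's reflect(): i - 2 * dot(i, n) * n
    d = dot(i, n)
    return [i[k] - 2.0 * d * n[k] for k in range(3)]

def shade(normal, light_direction, light_color,
          to_camera, specular_intensity, specular_power):
    # surface-to-light vector (negated light direction, as in the shader)
    light_vector = normalize([-c for c in light_direction])
    n_dot_l = max(0.0, dot(normal, light_vector))
    diffuse = [n_dot_l * c for c in light_color]
    reflection = normalize(reflect(light_vector, normal))
    specular = specular_intensity * max(
        0.0, dot(reflection, normalize(to_camera))) ** specular_power
    return diffuse, specular

# White light shining straight down on an upward-facing surface:
# full diffuse contribution
diffuse, specular = shade([0.0, 1.0, 0.0], [0.0, -1.0, 0.0],
                          [1.0, 1.0, 1.0], [0.0, 1.0, 0.0], 0.5, 16)
print(diffuse)  # [1.0, 1.0, 1.0]
```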
technique Technique0
{
pass Pass0
{
VertexShader = compile vs_2_0 VertexShaderFunction();
PixelShader = compile ps_2_0 PixelShaderFunction();
}
}
To use this shader, let's go into the DeferredRenderer class, and add a new member to hold our effect. This is then initialized in LoadContent.
private Effect directionalLightEffect;
[...]
protected override void LoadContent()
{
[...]
directionalLightEffect = Game.Content.Load<Effect>("DirectionalLight");
}
We will add a new function to the DeferredRenderer class that draws a directional light. The parameters of this function are the direction and color of the light. Inside this function, we just set up the effect and render a full-screen quad.
private void DrawDirectionalLight(Vector3 lightDirection, Color color)
{
//set all parameters
directionalLightEffect.Parameters["colorMap"].SetValue(colorRT.GetTexture());
directionalLightEffect.Parameters["normalMap"].SetValue(normalRT.GetTexture());
directionalLightEffect.Parameters["depthMap"].SetValue(depthRT.GetTexture());
directionalLightEffect.Parameters["lightDirection"].SetValue(lightDirection);
directionalLightEffect.Parameters["Color"].SetValue(color.ToVector3());
directionalLightEffect.Parameters["cameraPosition"].SetValue(camera.Position);
directionalLightEffect.Parameters["InvertViewProjection"].SetValue(Matrix.Invert(camera.View * camera.Projection));
directionalLightEffect.Parameters["halfPixel"].SetValue(halfPixel);
directionalLightEffect.Begin();
directionalLightEffect.Techniques[0].Passes[0].Begin();
//draw a full-screen quad
quadRenderer.Render(Vector2.One * -1, Vector2.One);
directionalLightEffect.Techniques[0].Passes[0].End();
directionalLightEffect.End();
}
At this point, if we add a call to this function in the Draw function, we should see the lighting of the object on the screen, without colors. We will only see the diffuse lighting, because the specular light is stored in the alpha channel.
public override void Draw(GameTime gameTime)
{
SetGBuffer();
GraphicsDevice.Clear(Color.Gray);
scene.DrawScene(camera,gameTime);
ResolveGBuffer();
GraphicsDevice.Clear(Color.Black);
DrawDirectionalLight(new Vector3(1,-1,0),Color.White);
base.Draw(gameTime);
}
At this point, we have a few problems:
- If we try to call DrawDirectionalLight more than once, only the last light will be drawn. We will fix this next.
- We want to combine this lightmap with the object colors. We will also deal with this very soon.
- Some parts of the objects (the wings) are not lit correctly. This is because the model has some normals set wrong. In a later chapter, when we add normal mapping and other objects, this will go away.
Drawing more lights
Even though we saw that a deferred renderer should not use too many directional lights, we will modify the code to support multiple lights, because the same mechanism will be needed for point lights and spot lights. We will move all the code for drawing lights into a new function, called DrawLights, which will be called from the Draw function.
public override void Draw(GameTime gameTime)
{
SetGBuffer();
ClearGBuffer();
scene.DrawScene(camera, gameTime);
ResolveGBuffer();
DrawLights(gameTime);
base.Draw(gameTime);
}
In order to have multiple lights, we will draw them using alpha blending. Since lighting an object with two lights results in a brighter object, we will use additive blending, so that all lights are added together to create the final light. Our alpha channel also contains useful data, so we need to apply the same blend operation to the alpha channel. After adding three directional lights, the DrawLights code looks like this.
private void DrawLights(GameTime gameTime)
{
//clear all components to 0
GraphicsDevice.Clear(Color.TransparentBlack);
GraphicsDevice.RenderState.AlphaBlendEnable = true;
//use additive blending, and make sure the blending factors are as we need them
GraphicsDevice.RenderState.AlphaBlendOperation = BlendFunction.Add;
GraphicsDevice.RenderState.SourceBlend = Blend.One;
GraphicsDevice.RenderState.DestinationBlend = Blend.One;
//use the same operation on the alpha channel
GraphicsDevice.RenderState.SeparateAlphaBlendEnabled = false;
//draw some lights
DrawDirectionalLight(new Vector3(0, -1, 0), Color.White);
DrawDirectionalLight(new Vector3(-1, 0, 0), Color.Crimson);
DrawDirectionalLight(new Vector3(1, 0, 0), Color.SkyBlue);
GraphicsDevice.RenderState.AlphaBlendEnable = false;
}
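What additive blending does to the lightmap can be sketched in Python: each light's output is simply summed per channel, and the render target clamps the result to [0,1]. The per-light contributions below are hypothetical values for a single pixel:

```python
# Additive blending: Source * 1 + Destination * 1, per channel,
# clamped to [0,1] as the Color surface format would clamp it.
def blend_add(dest, src):
    return [min(1.0, d + s) for d, s in zip(dest, src)]

# Hypothetical per-pixel outputs of the three directional lights above
# (rgb = diffuse light, a = specular light)
white    = [0.75, 0.75, 0.75, 0.125]
crimson  = [0.25, 0.0,  0.125, 0.0]
sky_blue = [0.25, 0.25, 0.5,  0.0625]

light_map = [0.0, 0.0, 0.0, 0.0]
for contribution in (white, crimson, sky_blue):
    light_map = blend_add(light_map, contribution)
print(light_map)  # [1.0, 1.0, 1.0, 0.1875]
```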
Running the code now should show the scene lit by the three colored directional lights.
Composing the final image
Now that we have the lightmap, we need to combine it with the colors to obtain the final image. To do this, we will render the lights into another RenderTarget, which will then be used as an input to an Effect that composes the final image. We add the new render target, initialize it, and set it as the target when drawing the lights.
private RenderTarget2D lightRT;
[...]
protected override void LoadContent()
{
[...]
lightRT = new RenderTarget2D(GraphicsDevice, backBufferWidth,
backBufferHeight, 1, SurfaceFormat.Color);
[...]
}
private void DrawLights(GameTime gameTime)
{
GraphicsDevice.SetRenderTarget(0, lightRT);
//code to setup alpha blending and draw lights
[...]
GraphicsDevice.SetRenderTarget(0, null);
}
Next, we will add a new Effect file to the Content project, named CombineFinal.fx. In this effect file, we need the color map and the light map. The vertex shader inputs and outputs are the position and texture coordinates. As in DirectionalLight.fx, the vertex shader passes the positions through unchanged and aligns the texture coordinates by half a pixel.
texture colorMap;
texture lightMap;
sampler colorSampler = sampler_state
{
Texture = (colorMap);
AddressU = CLAMP;
AddressV = CLAMP;
MagFilter = LINEAR;
MinFilter = LINEAR;
Mipfilter = LINEAR;
};
sampler lightSampler = sampler_state
{
Texture = (lightMap);
AddressU = CLAMP;
AddressV = CLAMP;
MagFilter = LINEAR;
MinFilter = LINEAR;
Mipfilter = LINEAR;
};
struct VertexShaderInput
{
float3 Position : POSITION0;
float2 TexCoord : TEXCOORD0;
};
struct VertexShaderOutput
{
float4 Position : POSITION0;
float2 TexCoord : TEXCOORD0;
};
float2 halfPixel;
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
VertexShaderOutput output;
output.Position = float4(input.Position,1);
output.TexCoord = input.TexCoord - halfPixel;
return output;
}
In the pixel shader, we use the formula mentioned earlier to obtain the final color: FinalColor = DiffuseColor * DiffuseLight + SpecularLight.
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
float3 diffuseColor = tex2D(colorSampler,input.TexCoord).rgb;
float4 light = tex2D(lightSampler,input.TexCoord);
float3 diffuseLight = light.rgb;
float specularLight = light.a;
return float4((diffuseColor * diffuseLight + specularLight),1);
}
technique Technique1
{
pass Pass1
{
VertexShader = compile vs_2_0 VertexShaderFunction();
PixelShader = compile ps_2_0 PixelShaderFunction();
}
}
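The combine formula is simple enough to check by hand. Here is a small Python sketch of what this pixel shader computes for one pixel, using hypothetical sample values:

```python
# FinalColor = DiffuseColor * DiffuseLight + SpecularLight, per channel,
# with hypothetical values sampled from the color map and light map.
def combine(diffuse_color, light):
    diffuse_light, specular_light = light[:3], light[3]
    return [min(1.0, c * l + specular_light)
            for c, l in zip(diffuse_color, diffuse_light)]

diffuse_color = [0.5, 0.25, 0.125]     # from the G-Buffer color map
light         = [1.0, 0.5, 0.5, 0.25]  # rgb diffuse light, a specular
print(combine(diffuse_color, light))   # [0.75, 0.375, 0.3125]
```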
Back in the DeferredRenderer class, we'll need to load this effect, and draw a full-screen quad using it.
private Effect finalCombineEffect;
[...]
protected override void LoadContent()
{
[...]
finalCombineEffect = Game.Content.Load<Effect>("CombineFinal");
}
private void DrawLights(GameTime gameTime)
{
GraphicsDevice.SetRenderTarget(0, lightRT);
//draw lights to the lightMap
[...]
GraphicsDevice.SetRenderTarget(0, null);
//set the effect parameters
finalCombineEffect.Parameters["colorMap"].SetValue(colorRT.GetTexture());
finalCombineEffect.Parameters["lightMap"].SetValue(lightRT.GetTexture());
finalCombineEffect.Parameters["halfPixel"].SetValue(halfPixel);
finalCombineEffect.Begin();
finalCombineEffect.Techniques[0].Passes[0].Begin();
//render a full-screen quad
quadRenderer.Render(Vector2.One * -1, Vector2.One);
finalCombineEffect.Techniques[0].Passes[0].End();
finalCombineEffect.End();
}
Running the code now shows the final image, with colors and lighting combined.
In this chapter, we saw how to add directional lights to the renderer, how to support multiple lights through alpha blending, and how to obtain the final image. Next we will see how we can add point lights, and spot lights. You can download the code for this chapter here: Chapter3.zip