In this chapter, we will see how to add two other types of lights to our renderer. Because each light is a post-processing effect, we want to minimize the number of pixels processed for each light. We couldn’t do this with directional lights, because they affect all objects in the scene. But a point light only affects objects close to it, and a spot light only objects in front of it. The area affected by each light can be contained inside a volume: for a point light, this volume is a sphere, and for a spot light, a cone. Given this volume in 3D, it is easy to determine which pixels in the 2D projection are potentially affected by the light; we only need to project that volume onto the screen.

**Point Lights**

A point light is represented by a point source. It has a position in space and radiates light in all directions. The differences between a point light and a directional light are these: a directional light lights all objects from the same direction, while a point light lights an object depending on the object’s position relative to the light. Also, a point light only illuminates objects that are close to it; the further away an object is, the less illuminated it becomes. This gives us a great advantage. Because only objects close enough to the light are lit, we only need to apply the lighting computations to a certain area of the screen, instead of applying a full-screen pass. This means that if the lights do not overlap too much in screen space, many small point lights will, on the whole, be about as expensive as one directional light (which is applied to the whole screen).

The first problem is determining the area of the screen affected by the light. As briefly mentioned earlier, this is done by using a volume that encompasses the light, and projecting it onto the screen. Only the pixels inside the area covered by this projection can possibly be affected by the light. The volume for a point light is a sphere whose radius equals the radius of the point light. We simply draw a sphere in 3D world space, centered on the light’s position. In the vertex shader, we transform the vertices normally, which projects them onto the screen, and then pass data to the pixel shader so it can determine the screen position of each pixel. So we are effectively doing post-processing on the screen area covered by the projection of the light volume.

We begin by writing a new effect file for this, named **PointLight.fx**. We need all the parameters from *DirectionalLight.fx*, and since we process actual geometry here, we also need the World, View and Projection matrices. For the point light itself, we need the light position, the light radius, and a coefficient for the light intensity, so we can better control the brightness of the light. The textures and samplers remain the same.

```hlsl
float4x4 World;
float4x4 View;
float4x4 Projection;

//color of the light
float3 Color;

//position of the camera, for specular light
float3 cameraPosition;

//this is used to compute the world-position
float4x4 InvertViewProjection;

//this is the position of the light
float3 lightPosition;

//how far does this light reach
float lightRadius;

//control the brightness of the light
float lightIntensity = 1.0f;

// diffuse color, and specularIntensity in the alpha channel
texture colorMap;
// normals, and specularPower in the alpha channel
texture normalMap;
//depth
texture depthMap;

sampler colorSampler = sampler_state
{
    Texture = (colorMap);
    AddressU = CLAMP;
    AddressV = CLAMP;
    MagFilter = LINEAR;
    MinFilter = LINEAR;
    Mipfilter = LINEAR;
};

sampler depthSampler = sampler_state
{
    Texture = (depthMap);
    AddressU = CLAMP;
    AddressV = CLAMP;
    MagFilter = POINT;
    MinFilter = POINT;
    Mipfilter = POINT;
};

sampler normalSampler = sampler_state
{
    Texture = (normalMap);
    AddressU = CLAMP;
    AddressV = CLAMP;
    MagFilter = POINT;
    MinFilter = POINT;
    Mipfilter = POINT;
};
```

The vertex shader only requires the position as input, and it outputs two things: the position in screen space, which is required by the GPU pipeline, and a copy of that position, which needs to be available in the pixel shader. You may remember that in *DirectionalLight.fx* we used the texture coordinates of a quad covering the screen, and from those we computed a position in the screen-space coordinate system. Here, we directly pass the screen-space coordinates to the pixel shader, and convert them to texture coordinates when needed. The vertex shader looks like this:

```hlsl
struct VertexShaderInput
{
    float3 Position : POSITION0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float4 ScreenPosition : TEXCOORD0;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    //process geometry coordinates
    float4 worldPosition = mul(float4(input.Position,1), World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    output.ScreenPosition = output.Position;
    return output;
}
```

In the pixel shader, we first obtain the screen position. After this, we compute the corresponding texture coordinates. From this point on, most of the calculations are similar to the directional light. Using the dot product, we compute the diffuse and specular light. But for point lights, the light vector is computed per pixel, as the vector between the light’s position and the surface. For attenuation, we use linear attenuation, based on the distance from the light. Finally, the attenuation is multiplied by the light intensity, and then by the diffuse and specular light components. This attenuation makes objects further from the light less and less lit. The code for the pixel shader is:

```hlsl
float2 halfPixel;

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    //obtain screen position
    input.ScreenPosition.xy /= input.ScreenPosition.w;

    //obtain textureCoordinates corresponding to the current pixel
    //the screen coordinates are in [-1,1]*[1,-1]
    //the texture coordinates need to be in [0,1]*[0,1]
    float2 texCoord = 0.5f * (float2(input.ScreenPosition.x,-input.ScreenPosition.y) + 1);
    //align texels to pixels
    texCoord -= halfPixel;

    //get normal data from the normalMap
    float4 normalData = tex2D(normalSampler,texCoord);
    //transform normal back into [-1,1] range
    float3 normal = 2.0f * normalData.xyz - 1.0f;
    //get specular power
    float specularPower = normalData.a * 255;
    //get specular intensity from the colorMap
    float specularIntensity = tex2D(colorSampler, texCoord).a;

    //read depth
    float depthVal = tex2D(depthSampler,texCoord).r;

    //compute screen-space position
    float4 position;
    position.xy = input.ScreenPosition.xy;
    position.z = depthVal;
    position.w = 1.0f;
    //transform to world space
    position = mul(position, InvertViewProjection);
    position /= position.w;

    //surface-to-light vector
    float3 lightVector = lightPosition - position.xyz;

    //compute attenuation based on distance - linear attenuation
    float attenuation = saturate(1.0f - length(lightVector)/lightRadius);

    //normalize light vector
    lightVector = normalize(lightVector);

    //compute diffuse light
    float NdL = max(0, dot(normal, lightVector));
    float3 diffuseLight = NdL * Color.rgb;

    //reflection vector
    float3 reflectionVector = normalize(reflect(-lightVector, normal));
    //camera-to-surface vector
    float3 directionToCamera = normalize(cameraPosition - position.xyz);
    //compute specular light
    float specularLight = specularIntensity * pow(saturate(dot(reflectionVector, directionToCamera)), specularPower);

    //take into account attenuation and lightIntensity
    return attenuation * lightIntensity * float4(diffuseLight.rgb, specularLight);
}
```
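To see why the coordinate mapping and the attenuation formula behave as intended, here is a small Python sketch (my own illustration, independent of the shader code) that mirrors the two pieces of math: NDC x in [-1,1] maps to u in [0,1] with the y axis flipped, and the attenuation falls linearly from 1 at the light’s position to 0 at lightRadius.

```python
import math

def ndc_to_texcoord(x, y):
    """Mirror of the shader mapping: NDC [-1,1]x[1,-1] -> texture [0,1]x[0,1]."""
    return (0.5 * (x + 1.0), 0.5 * (-y + 1.0))

def linear_attenuation(light_pos, surface_pos, light_radius):
    """saturate(1 - distance/radius), as in the pixel shader."""
    d = math.dist(light_pos, surface_pos)
    return min(max(1.0 - d / light_radius, 0.0), 1.0)

# NDC corners map to texture corners (note the y flip)
print(ndc_to_texcoord(-1.0, 1.0))   # top-left     -> (0.0, 0.0)
print(ndc_to_texcoord(1.0, -1.0))   # bottom-right -> (1.0, 1.0)

# attenuation: 1 at the light, 0 at (and beyond) the radius
print(linear_attenuation((0, 0, 0), (0, 0, 0), 10.0))   # 1.0
print(linear_attenuation((0, 0, 0), (5, 0, 0), 10.0))   # 0.5
print(linear_attenuation((0, 0, 0), (15, 0, 0), 10.0))  # 0.0
```

Note that `saturate` is what clamps the attenuation to 0 beyond the light radius; without it, pixels outside the radius would receive negative light.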

Finally, we end this effect file by writing the technique.

```hlsl
technique Technique1
{
    pass Pass1
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}
```

Now that the effect file is written, we need to go back to the **DeferredRenderer** class, add a member to hold this effect, and load it.

```csharp
private Effect pointLightEffect;

[...]

protected override void LoadContent()
{
    [...]
    pointLightEffect = Game.Content.Load<Effect>("PointLight");
}
```

Since we are going to use geometry to approximate the volumes of the lights, we need to add a model for this. Go ahead and add **sphere.x** (from the archive) to your project, in the **Models** directory. This model contains a single sphere with radius 1. We need to add a variable to hold the model in **DeferredRenderer.cs**.

```csharp
private Model sphereModel;

[...]

protected override void LoadContent()
{
    [...]
    sphereModel = Game.Content.Load<Model>("Models\\sphere");
}
```

For each light we draw, we will position this sphere at the light’s position and scale it by the light radius. This way, the sphere will encompass the whole volume that might be lit by that light. We will add a new function, called **DrawPointLight**, which receives several parameters that describe the light: the position, the radius, the color and the light intensity. This function sets up the shader parameters for each light drawn. In a real-game situation, you would probably have a class that holds all information related to a light, set up the common effect parameters once, and then go through each light and draw it; this saves some calls to EffectParameter.SetValue and Effect.Begin. But for the purpose of this tutorial, we will use a simple function, for clarity. The optimizations are left as an exercise for the reader. In the **DrawPointLight** function, we first set the effect parameters.

```csharp
private void DrawPointLight(Vector3 lightPosition, Color color, float lightRadius, float lightIntensity)
{
    //set the G-Buffer parameters
    pointLightEffect.Parameters["colorMap"].SetValue(colorRT.GetTexture());
    pointLightEffect.Parameters["normalMap"].SetValue(normalRT.GetTexture());
    pointLightEffect.Parameters["depthMap"].SetValue(depthRT.GetTexture());

    //compute the light world matrix
    //scale according to light radius, and translate it to light position
    Matrix sphereWorldMatrix = Matrix.CreateScale(lightRadius) * Matrix.CreateTranslation(lightPosition);
    pointLightEffect.Parameters["World"].SetValue(sphereWorldMatrix);
    pointLightEffect.Parameters["View"].SetValue(camera.View);
    pointLightEffect.Parameters["Projection"].SetValue(camera.Projection);
    //light position
    pointLightEffect.Parameters["lightPosition"].SetValue(lightPosition);
    //set the color, radius and intensity
    pointLightEffect.Parameters["Color"].SetValue(color.ToVector3());
    pointLightEffect.Parameters["lightRadius"].SetValue(lightRadius);
    pointLightEffect.Parameters["lightIntensity"].SetValue(lightIntensity);
    //parameters for specular computations
    pointLightEffect.Parameters["cameraPosition"].SetValue(camera.Position);
    pointLightEffect.Parameters["InvertViewProjection"].SetValue(Matrix.Invert(camera.View * camera.Projection));
    //size of a half-pixel, for texture coordinate alignment
    pointLightEffect.Parameters["halfPixel"].SetValue(halfPixel);
```

After setting the parameters, we draw the sphere model using the effect file. But before we do this, we must set the desired culling mode. If the camera is outside the sphere, we want to draw the exterior of the sphere. Otherwise, when the camera is inside the light volume, we need to draw the inner side of the sphere. Using CullMode.None would apply the lighting calculations twice when the camera is outside the light volume, since both the front and back faces of the sphere would be drawn, which is not desirable. By switching the culling mode, we make sure that the light is always applied exactly once.

```csharp
    //calculate the distance between the camera and light center
    float cameraToCenter = Vector3.Distance(camera.Position, lightPosition);
    //if we are inside the light volume, draw the sphere's inside face
    if (cameraToCenter < lightRadius)
        GraphicsDevice.RenderState.CullMode = CullMode.CullClockwiseFace;
    else
        GraphicsDevice.RenderState.CullMode = CullMode.CullCounterClockwiseFace;
```

Now we can draw the sphere model, and in the end, set the culling mode back to the default value.

```csharp
    pointLightEffect.Begin();
    pointLightEffect.Techniques[0].Passes[0].Begin();
    foreach (ModelMesh mesh in sphereModel.Meshes)
    {
        foreach (ModelMeshPart meshPart in mesh.MeshParts)
        {
            GraphicsDevice.VertexDeclaration = meshPart.VertexDeclaration;
            GraphicsDevice.Vertices[0].SetSource(mesh.VertexBuffer, meshPart.StreamOffset, meshPart.VertexStride);
            GraphicsDevice.Indices = mesh.IndexBuffer;
            GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, meshPart.BaseVertex, 0, meshPart.NumVertices, meshPart.StartIndex, meshPart.PrimitiveCount);
        }
    }
    pointLightEffect.Techniques[0].Passes[0].End();
    pointLightEffect.End();

    GraphicsDevice.RenderState.CullMode = CullMode.CullCounterClockwiseFace;
}
```

To see the result of our work, go to **DrawLights** and replace the existing calls to **DrawDirectionalLight** with the following, which adds a call to **DrawPointLight**.

```csharp
DrawDirectionalLight(new Vector3(0, -1f, 1), Color.DimGray);
DrawPointLight(new Vector3(50 * (float)Math.Sin(gameTime.TotalGameTime.TotalSeconds), 10, 50 * (float)Math.Cos(gameTime.TotalGameTime.TotalSeconds)), Color.White, 100, 4);
```

This will draw a light that moves around the ship and lights it; the result looks even better in motion.

Before going further, let’s play with the point lights a little. Add the following code inside **DrawLights**, and run it.

```csharp
DrawDirectionalLight(new Vector3(0, -1f, 1), Color.DimGray);

Color[] colors = new Color[10];
colors[0] = Color.ForestGreen;
colors[1] = Color.Blue;
colors[2] = Color.Pink;
colors[3] = Color.Yellow;
colors[4] = Color.Orange;
colors[5] = Color.Green;
colors[6] = Color.Crimson;
colors[7] = Color.CornflowerBlue;
colors[8] = Color.Gold;
colors[9] = Color.Honeydew;

float angle = (float)gameTime.TotalGameTime.TotalSeconds;
for (int i = 0; i < 10; i++)
{
    Vector3 pos = new Vector3((float)Math.Sin(i * MathHelper.TwoPi / 10 + angle), 0.3f,
                              (float)Math.Cos(i * MathHelper.TwoPi / 10 + angle));
    DrawPointLight(pos * 20, colors[i], 12, 2);
    pos = new Vector3((float)Math.Cos(i * MathHelper.TwoPi / 10 + angle), -0.6f,
                      (float)Math.Sin(i * MathHelper.TwoPi / 10 + angle));
    DrawPointLight(pos * 20, colors[i], 20, 1);
}

DrawPointLight(new Vector3(0, (float)Math.Sin(gameTime.TotalGameTime.TotalSeconds * 0.8) * 40, 0), Color.Red, 30, 5);
DrawPointLight(new Vector3(0, 0, 70), Color.Wheat, 55 + 10 * (float)Math.Sin(5 * gameTime.TotalGameTime.TotalSeconds), 3);
```

You now have 1 directional light and 22 point lights lighting the object. Pretty neat, huh?

**Spot Lights**

Spot lights are very similar to point lights. The main difference is that while a point light emits light in all directions, a spot light’s rays are restricted to a cone. The extra properties of spot lights include:

- a direction, which is the axis of the cone. Let’s call this spotDirection
- a cone angle, which specifies how wide the cone is. The cosine of this angle is what we actually use. Let’s call this spotLightAngleCosine
- a rate of decay, which measures how the light intensity decreases from the center of the cone towards its walls. We’ll call this spotDecayExponent

When computing the light, we only light a point if the angle between the surface-to-light vector and the cone direction is smaller than the cone angle. This can be tested using the dot product. After that, using spotDecayExponent, we can compute the intensity of illumination coming from the spot light. A rough sketch of the needed computations is shown below.

```hlsl
float3 lightVector = lightPosition - position;
float attenuation = saturate(1.0f - length(lightVector)/lightRadius);
//normalize light vector
lightVector = normalize(lightVector);

//SdL = cosine of the angle between spotDirection and lightVector
float SdL = dot(spotDirection, -lightVector);
if (SdL > spotLightAngleCosine)
{
    float spotIntensity = pow(SdL, spotDecayExponent);
    //rest of computations from the point light go here
    [...]
    //multiply the attenuation by spotIntensity before applying it to the light
}
```
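The cone test above can be checked numerically. The following Python sketch (my own illustration, not part of the tutorial’s shader code) reproduces the dot-product test: a point is inside the cone when the cosine of the angle between the cone axis and the light-to-surface direction exceeds the cosine of the cone half-angle, and the decay exponent sharpens the falloff towards the cone walls.

```python
import math

def spot_intensity(spot_direction, to_surface, angle_cosine, decay_exponent):
    """Mirror of the HLSL sketch: 0 outside the cone, pow(SdL, decay) inside.
    Both direction vectors must be normalized."""
    sdl = sum(a * b for a, b in zip(spot_direction, to_surface))
    if sdl <= angle_cosine:
        return 0.0
    return sdl ** decay_exponent

down = (0.0, -1.0, 0.0)                       # spot pointing straight down
half_angle_cos = math.cos(math.radians(30))   # 30-degree cone half-angle

# directly below the light: SdL = 1, full intensity regardless of decay
print(spot_intensity(down, (0.0, -1.0, 0.0), half_angle_cos, 8))  # 1.0

# 45 degrees off-axis: outside the 30-degree cone, so no light
off = (math.sin(math.radians(45)), -math.cos(math.radians(45)), 0.0)
print(spot_intensity(down, off, half_angle_cos, 8))  # 0.0
```

Note that `to_surface` corresponds to `-lightVector` in the HLSL sketch, since lightVector points from the surface towards the light.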

I’ll leave the rest of the spot light implementation as an __exercise to the reader__. For the light volume, use **cone.x**, from the archive. Before drawing it, you’ll have to compute its world matrix, composed of: a scaling on the Y axis, based on the radius of the light; a scaling in the XZ plane, based on the cone angle; a rotation based on the cone direction; and finally, a translation based on the light position.
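To make the scale-rotate-translate composition concrete, here is a hedged Python sketch of the same transform applied to a single point. The orientation and size of **cone.x** are assumptions here (apex at the origin, opening along +Y, unit height and unit base radius); adjust the scales if the actual model differs. In the XNA code you would build the equivalent Matrix instead.

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def transform_cone_point(p, light_pos, direction, radius, half_angle):
    """Scale a unit cone, orient its axis along `direction`, and move its
    apex to `light_pos`. `half_angle` is the cone half-angle in radians."""
    x, y, z = p
    # scale: Y by the light radius, XZ by the base radius of the cone
    x *= radius * math.tan(half_angle)
    z *= radius * math.tan(half_angle)
    y *= radius
    # rotation: build an orthonormal basis whose Y axis is `direction`
    d = normalize(direction)
    helper = (1.0, 0.0, 0.0) if abs(d[0]) < 0.99 else (0.0, 0.0, 1.0)
    right = normalize((helper[1]*d[2] - helper[2]*d[1],
                       helper[2]*d[0] - helper[0]*d[2],
                       helper[0]*d[1] - helper[1]*d[0]))
    fwd = (d[1]*right[2] - d[2]*right[1],
           d[2]*right[0] - d[0]*right[2],
           d[0]*right[1] - d[1]*right[0])
    rx = x*right[0] + y*d[0] + z*fwd[0]
    ry = x*right[1] + y*d[1] + z*fwd[1]
    rz = x*right[2] + y*d[2] + z*fwd[2]
    # translation to the light position
    return (rx + light_pos[0], ry + light_pos[1], rz + light_pos[2])

# the cone apex stays at the light position...
print(transform_cone_point((0, 0, 0), (5, 5, 5), (0, 0, 1), 10, math.radians(30)))
# (5.0, 5.0, 5.0)
# ...and the point one unit down the axis lands `radius` units along the direction
print(transform_cone_point((0, 1, 0), (5, 5, 5), (0, 0, 1), 10, math.radians(30)))
```

The order matters: scaling must happen in the cone’s local space, before the rotation and translation, just as the sphere’s world matrix above multiplies CreateScale before CreateTranslation.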

In this chapter, we added point lights to the renderer, and saw how the different aspects of point lights are handled. We also looked at what adding spot lights would require, and left that as an exercise. So far, we’ve covered most of what deferred rendering means, and what needs to be done differently from a forward rendering approach. The full source code for this chapter can be found here: Chapter4.zip

In the next chapters, we will explore some aspects of actually integrating the deferred renderer in a real game. We will first look at writing a custom Content Processor that will prepare models for our renderer, assigning them the desired Effect, and adding normal maps and specular maps.
