Point Lights

    In this chapter, we will see how to add two other types of lights to our renderer. Because each light is a post-processing effect, we are interested in minimizing the number of pixels processed for each light. We couldn’t do this with directional lights, because they affect all objects in the scene. But a point light only affects objects close to it, and a spot light only objects in front of it. The area affected by each light can be contained inside a volume: for a point light, this volume is a sphere, and for a spot light, a cone. Having this volume in 3D, it’s an easy job to determine which pixels in the 2D projection are potentially affected by the light; we only need to project this volume to the screen.

    Point Lights

    A point light is represented by a point source: it has a position in space and radiates light in all directions. The differences from a directional light are that a directional light lights all objects from the same direction, while a point light lights an object depending on the object’s position relative to the light, and that a point light only illuminates objects that are close to it: the further away an object is, the less illuminated it becomes. This gives us a great advantage. Because only objects close enough to a light are lit, we only need to apply the lighting computations to a certain area of the screen, instead of applying a full-screen pass. This means that as long as the lights do not overlap too much in screen space, many small point lights will, on the whole, be about as expensive as one directional light (which is applied to the whole screen).

    The first problem is determining the area of the screen affected by the light. As briefly mentioned earlier, this is done by taking a volume that encompasses the light and projecting it to the screen. Only the pixels in the area covered by this projection can possibly be affected by the light. The volume for a point light is a sphere whose radius equals the radius of the point light. We simply draw a sphere in 3D world space, centered on the light’s position. In the vertex shader, we transform the vertices normally, which projects them onto the screen, and we pass data to the pixel shader so it can determine the screen position of each pixel. So we are effectively doing post-processing on the screen area covered by the projection of the light volume.
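To make this concrete, here is a rough, purely illustrative sketch of what projecting the volume means. The renderer never computes this rectangle explicitly (drawing the sphere mesh achieves the same result); the snippet only assumes the `camera` object and the `lightPosition`/`lightRadius` values used later in this chapter.

```csharp
//illustrative only: estimate the screen-space area a point light can touch
Viewport vp = GraphicsDevice.Viewport;
//the camera's right axis in world space is the first column of the view matrix
Vector3 cameraRight = new Vector3(camera.View.M11, camera.View.M21, camera.View.M31);
//project the sphere's center and a point on its silhouette to screen space
Vector3 screenCenter = vp.Project(lightPosition, camera.Projection, camera.View, Matrix.Identity);
Vector3 screenEdge = vp.Project(lightPosition + cameraRight * lightRadius,
                                camera.Projection, camera.View, Matrix.Identity);
//approximate radius, in pixels, of the affected screen area
float screenRadius = Vector2.Distance(new Vector2(screenCenter.X, screenCenter.Y),
                                      new Vector2(screenEdge.X, screenEdge.Y));
```

Only pixels within roughly `screenRadius` pixels of `screenCenter` can receive light; everything the sphere mesh does in the rest of this chapter is an efficient way of touching only those pixels.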

    We begin by writing a new effect file for this, named PointLight.fx. We need all the parameters from DirectionalLight.fx, and since we process actual geometry here, we also need the World, View and Projection matrices. For the point light, we need the light position, the light radius, and a coefficient for the light intensity, so we can better control the brightness of the light. The textures and samplers remain the same.

float4x4 World;
float4x4 View;
float4x4 Projection;
//color of the light 
float3 Color; 
//position of the camera, for specular light
float3 cameraPosition; 
//this is used to compute the world-position
float4x4 InvertViewProjection; 
//this is the position of the light
float3 lightPosition;
//how far does this light reach
float lightRadius;
//control the brightness of the light
float lightIntensity = 1.0f;
// diffuse color, and specularIntensity in the alpha channel
texture colorMap; 
// normals, and specularPower in the alpha channel
texture normalMap;
texture depthMap;
sampler colorSampler = sampler_state
{
    Texture = (colorMap);
    AddressU = CLAMP;
    AddressV = CLAMP;
    MagFilter = LINEAR;
    MinFilter = LINEAR;
    Mipfilter = LINEAR;
};
sampler depthSampler = sampler_state
{
    Texture = (depthMap);
    AddressU = CLAMP;
    AddressV = CLAMP;
    MagFilter = POINT;
    MinFilter = POINT;
    Mipfilter = POINT;
};
sampler normalSampler = sampler_state
{
    Texture = (normalMap);
    AddressU = CLAMP;
    AddressV = CLAMP;
    MagFilter = POINT;
    MinFilter = POINT;
    Mipfilter = POINT;
};

     The vertex shader only requires the position as input, and it outputs two things: the position in screen space, which is required by the GPU pipeline, and a copy of that position, which needs to be available in the pixel shader. You may remember that in DirectionalLight.fx we used the texture coordinates of a quad covering the screen, and from those we computed a position in the screen-space coordinate system. Here, we will pass the screen-space coordinates directly to the pixel shader, and convert them to texture coordinates when needed. The vertex shader looks like this.

struct VertexShaderInput
{
    float3 Position : POSITION0;
};
struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float4 ScreenPosition : TEXCOORD0;
};
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    //process geometry coordinates
    float4 worldPosition = mul(float4(input.Position,1), World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    output.ScreenPosition = output.Position;
    return output;
}

    In the pixel shader, we first obtain the screen position, and from it compute the corresponding texture coordinates. From this point on, most of the calculations are similar to the directional light: using the dot product, we compute the diffuse and specular light. But for point lights, the light vector is computed per pixel, as the vector between the light’s position and the surface. For attenuation, we use a linear attenuation, based on the distance from the light. Finally, the attenuation is multiplied by the light intensity, and then by the diffuse and specular light components, so objects further away from the light receive less and less light. The code for the pixel shader is:

float2 halfPixel;
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    //obtain screen position
    input.ScreenPosition.xy /= input.ScreenPosition.w;
    //obtain textureCoordinates corresponding to the current pixel
    //the screen coordinates are in [-1,1]*[1,-1]
    //the texture coordinates need to be in [0,1]*[0,1]
    float2 texCoord = 0.5f * (float2(input.ScreenPosition.x,-input.ScreenPosition.y) + 1);
    //align texels to pixels
    texCoord -= halfPixel;
    //get normal data from the normalMap
    float4 normalData = tex2D(normalSampler,texCoord);
    //transform normal back into [-1,1] range
    float3 normal = 2.0f * normalData.xyz - 1.0f;
    //get specular power
    float specularPower = normalData.a * 255;
    //get specular intensity from the colorMap
    float specularIntensity = tex2D(colorSampler, texCoord).a;
    //read depth
    float depthVal = tex2D(depthSampler,texCoord).r;
    //compute screen-space position
    float4 position;
    position.xy = input.ScreenPosition.xy;
    position.z = depthVal;
    position.w = 1.0f;
    //transform to world space
    position = mul(position, InvertViewProjection);
    position /= position.w;
    //surface-to-light vector
    float3 lightVector = lightPosition - position.xyz;
    //compute attenuation based on distance - linear attenuation
    float attenuation = saturate(1.0f - length(lightVector)/lightRadius);
    //normalize light vector
    lightVector = normalize(lightVector);
    //compute diffuse light
    float NdL = max(0,dot(normal,lightVector));
    float3 diffuseLight = NdL * Color.rgb;
    //reflection vector
    float3 reflectionVector = normalize(reflect(-lightVector, normal));
    //camera-to-surface vector
    float3 directionToCamera = normalize(cameraPosition - position.xyz);
    //compute specular light
    float specularLight = specularIntensity * pow(saturate(dot(reflectionVector, directionToCamera)), specularPower);
    //take into account attenuation and lightIntensity
    return attenuation * lightIntensity * float4(diffuseLight.rgb, specularLight);
}

    Finally, we end this effect file by writing the technique.

technique Technique1
{
    pass Pass1
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}

    Now that the effect file is written, we need to go back to the DeferredRenderer class, add a member to hold this effect, and load it.

private Effect pointLightEffect;
protected override void LoadContent()
{
    //...
    pointLightEffect = Game.Content.Load<Effect>("PointLight");
}

    Since we are going to use geometry to approximate the volumes of lights, we need to add a model for this. Go ahead and add sphere.x (from the archive) to your project, in the Models directory. This model only contains a single sphere with radius 1. We need to add a variable to hold the model in DeferredRenderer.cs

private Model sphereModel;
protected override void LoadContent()
{
    //...
    sphereModel = Game.Content.Load<Model>("Models\\sphere");
}

    For each light we draw, we will position this sphere at the light’s position, and scale it by the light radius. This way, the sphere encompasses the whole volume that might be lit by that light. We will add a new function, called DrawPointLight, which receives several parameters that describe the light: the position, the radius, the color and the light intensity. This function sets up the shader parameters for each light drawn. In a real game, you’d probably have a class that holds all the information related to a light, set up the common effect parameters once, and then loop through the lights, drawing each one; this saves some calls to EffectParameter.SetValue and Effect.Begin. But for the purpose of this tutorial, we will use a simple function, for clarity. The optimizations are left as an exercise for the reader. In the DrawPointLight function, we first set the effect parameters.

private void DrawPointLight(Vector3 lightPosition, Color color, float lightRadius, float lightIntensity)
{
    //set the G-Buffer parameters
    pointLightEffect.Parameters["colorMap"].SetValue(colorRT.GetTexture());
    pointLightEffect.Parameters["normalMap"].SetValue(normalRT.GetTexture());
    pointLightEffect.Parameters["depthMap"].SetValue(depthRT.GetTexture());
    //compute the light world matrix
    //scale according to light radius, and translate it to light position
    Matrix sphereWorldMatrix = Matrix.CreateScale(lightRadius) * Matrix.CreateTranslation(lightPosition);
    pointLightEffect.Parameters["World"].SetValue(sphereWorldMatrix);
    pointLightEffect.Parameters["View"].SetValue(camera.View);
    pointLightEffect.Parameters["Projection"].SetValue(camera.Projection);
    //light position
    pointLightEffect.Parameters["lightPosition"].SetValue(lightPosition);
    //set the color, radius and intensity
    pointLightEffect.Parameters["Color"].SetValue(color.ToVector3());
    pointLightEffect.Parameters["lightRadius"].SetValue(lightRadius);
    pointLightEffect.Parameters["lightIntensity"].SetValue(lightIntensity);
    //parameters for specular computations
    pointLightEffect.Parameters["cameraPosition"].SetValue(camera.Position);
    pointLightEffect.Parameters["InvertViewProjection"].SetValue(Matrix.Invert(camera.View * camera.Projection));
    //size of a halfpixel, for texture coordinates alignment
    pointLightEffect.Parameters["halfPixel"].SetValue(halfPixel);

    After setting the parameters, we draw the sphere model using the effect file. But before we do this, we must set the desired culling mode. If the camera is outside the sphere, we want to draw the exterior of the sphere; when the camera is inside the light volume, we need to draw the inner side of the sphere instead. Using CullMode.None would apply the lighting calculations twice whenever the camera is outside the light volume, which is not desirable. By switching the culling mode, we make sure each light is always applied exactly once.

    //calculate the distance between the camera and light center
    float cameraToCenter = Vector3.Distance(camera.Position, lightPosition);
    //if we are inside the light volume, draw the sphere's inside face
    if (cameraToCenter < lightRadius)
        GraphicsDevice.RenderState.CullMode = CullMode.CullClockwiseFace;
    else
        GraphicsDevice.RenderState.CullMode = CullMode.CullCounterClockwiseFace;

    Now we can draw the sphere model, and in the end, set the culling mode back to its default value.

    pointLightEffect.Begin();
    pointLightEffect.CurrentTechnique.Passes[0].Begin();
    foreach (ModelMesh mesh in sphereModel.Meshes)
    {
        foreach (ModelMeshPart meshPart in mesh.MeshParts)
        {
            GraphicsDevice.VertexDeclaration = meshPart.VertexDeclaration;
            GraphicsDevice.Vertices[0].SetSource(mesh.VertexBuffer, meshPart.StreamOffset, meshPart.VertexStride);
            GraphicsDevice.Indices = mesh.IndexBuffer;
            GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, meshPart.BaseVertex, 0, meshPart.NumVertices, meshPart.StartIndex, meshPart.PrimitiveCount);
        }
    }
    pointLightEffect.CurrentTechnique.Passes[0].End();
    pointLightEffect.End();
    GraphicsDevice.RenderState.CullMode = CullMode.CullCounterClockwiseFace;
}
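The batching optimization mentioned earlier might be sketched as follows. The PointLight class and the DrawPointLights function are hypothetical, but the effect parameter names match PointLight.fx; only the per-light parameters change inside the loop.

```csharp
//hypothetical container for everything that describes one point light
class PointLight
{
    public Vector3 Position;
    public Color Color;
    public float Radius;
    public float Intensity;
}

private void DrawPointLights(List<PointLight> lights)
{
    //set the parameters shared by all lights only once
    pointLightEffect.Parameters["View"].SetValue(camera.View);
    pointLightEffect.Parameters["Projection"].SetValue(camera.Projection);
    pointLightEffect.Parameters["cameraPosition"].SetValue(camera.Position);
    pointLightEffect.Parameters["InvertViewProjection"].SetValue(Matrix.Invert(camera.View * camera.Projection));
    foreach (PointLight light in lights)
    {
        //only the per-light parameters are updated inside the loop
        pointLightEffect.Parameters["World"].SetValue(
            Matrix.CreateScale(light.Radius) * Matrix.CreateTranslation(light.Position));
        pointLightEffect.Parameters["lightPosition"].SetValue(light.Position);
        pointLightEffect.Parameters["Color"].SetValue(light.Color.ToVector3());
        pointLightEffect.Parameters["lightRadius"].SetValue(light.Radius);
        pointLightEffect.Parameters["lightIntensity"].SetValue(light.Intensity);
        //...then pick the culling mode and draw the sphere, as in DrawPointLight
    }
}
```

This avoids re-sending the G-Buffer textures and camera matrices once per light, which is where most of the redundant SetValue calls go.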

    To see the result of our work, go to DrawLights, and replace the existing calls to DrawDirectionalLight with the following code.

DrawDirectionalLight(new Vector3(0, -1f, 1), Color.DimGray);
   DrawPointLight(new Vector3(50 * (float)Math.Sin(gameTime.TotalGameTime.TotalSeconds),10, 50 * (float)Math.Cos(gameTime.TotalGameTime.TotalSeconds)), Color.White, 100, 4);

    This will draw a light that moves around the ship and lights it. It looks even better in motion.


    Before going further, let’s play with the point lights a little. Add the following code inside DrawLights, and run it.

DrawDirectionalLight(new Vector3(0, -1f, 1), Color.DimGray);
Color[] colors = new Color[10];
colors[0] = Color.ForestGreen;
colors[1] = Color.Blue;
colors[2] = Color.Pink;
colors[3] = Color.Yellow;
colors[4] = Color.Orange;
colors[5] = Color.Green;
colors[6] = Color.Crimson;
colors[7] = Color.CornflowerBlue;
colors[8] = Color.Gold;
colors[9] = Color.Honeydew;
float angle = (float)gameTime.TotalGameTime.TotalSeconds;
for (int i = 0; i < 10; i++)
{
    Vector3 pos = new Vector3((float)Math.Sin(i * MathHelper.TwoPi / 10 + angle), 0.3f,
                              (float)Math.Cos(i * MathHelper.TwoPi / 10 + angle));
    DrawPointLight(pos * 20, colors[i], 12, 2);
    pos = new Vector3((float)Math.Cos(i * MathHelper.TwoPi / 10 + angle), -0.6f,
                      (float)Math.Sin(i * MathHelper.TwoPi / 10 + angle));
    DrawPointLight(pos * 20, colors[i], 20, 1);
}
DrawPointLight(new Vector3(0, (float)Math.Sin(angle * 0.8) * 40, 0), Color.Red, 30, 5);
DrawPointLight(new Vector3(0, 0, 70), Color.Wheat, 55 + 10 * (float)Math.Sin(5 * angle), 3);


    You now have one directional light and 22 point lights lighting the scene. Pretty neat, huh?

    Spot Lights

    Spot lights are very similar to point lights. The main difference is that while a point light emits light in all directions, a spot light’s rays are restricted to a cone. The extra properties of spot lights include:

  • direction, which is the axis of the cone. Let’s call this spotDirection
  • cone angle, which specifies how large the cone is. The cosine of this angle is what we actually use in the shader. Let’s call this spotLightAngleCosine
  • a rate of decay, which measures how the light intensity decreases from the center of the cone towards its walls. We’ll call this spotDecayExponent

    When computing the light, we only light a point if the angle between the surface-to-light vector and the cone direction is smaller than the cone angle. This can be tested using the dot product, since the dot product of two normalized vectors is the cosine of the angle between them. After that, using spotDecayExponent, you can compute the intensity of the illumination coming from the spot light. A rough sketch of the needed computations is shown below.

float3 lightVector = lightPosition - position.xyz;
float attenuation = saturate(1.0f - length(lightVector)/lightRadius);
//normalize light vector
lightVector = normalize(lightVector);
//SdL = cosine of the angle between the spot direction and the light vector
float SdL = dot(spotDirection, -lightVector);
if (SdL > spotLightAngleCosine)
{
    float spotIntensity = pow(SdL, spotDecayExponent);
    //rest of the computations from the point light go here
    //multiply the attenuation by spotIntensity before applying it to the light
}

    I’ll leave the rest of the spot light implementation as an exercise to the reader. For the light volume, use cone.x, from the archive. Before drawing it, you’ll have to compute its world matrix, made up of: a scaling on the Y axis based on the radius of the light, a scaling on the XZ plane based on the cone angle, a rotation based on the cone direction, and finally a translation to the light position.
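As a starting point for that exercise, the cone’s world matrix might be assembled like this. This is a sketch under the assumption that cone.x points along the +Y axis with unit height and unit base radius (adjust the scales if the actual model differs); spotDirection, coneAngle, lightRadius and lightPosition are the light’s properties.

```csharp
//sketch: world matrix for the spot light's cone volume
Vector3 dir = Vector3.Normalize(spotDirection);
float coneHeight = lightRadius;
float coneBaseRadius = lightRadius * (float)Math.Tan(coneAngle);
//rotation that maps the model's +Y axis onto the spot direction
Vector3 axis = Vector3.Cross(Vector3.Up, dir);
Matrix rotation;
if (axis.LengthSquared() < 1e-6f)
    //direction is parallel to Y: either no rotation, or flip the cone over
    rotation = dir.Y > 0 ? Matrix.Identity : Matrix.CreateRotationX(MathHelper.Pi);
else
    rotation = Matrix.CreateFromAxisAngle(Vector3.Normalize(axis),
        (float)Math.Acos(MathHelper.Clamp(Vector3.Dot(Vector3.Up, dir), -1f, 1f)));
Matrix coneWorldMatrix = Matrix.CreateScale(coneBaseRadius, coneHeight, coneBaseRadius)
                         * rotation
                         * Matrix.CreateTranslation(lightPosition);
```

The XZ scale widens the cone’s base to match the cone angle, the Y scale stretches it to the light’s range, and the rotation and translation place it in the scene, mirroring the order described above.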

    In this chapter, we added point lights to the renderer, and saw how the different aspects of point lights are handled. We also looked at what adding spot lights would require, and left that as an exercise. By now, we’ve covered most of what deferred rendering means, and what needs to be done differently from a forward rendering approach. The full source code for this chapter can be found here: Chapter4.zip

    In the next chapters, we will explore some aspects of actually integrating the deferred renderer in a real game. We will first look at writing a custom Content Processor that will prepare models for our renderer, assigning them the desired Effect, and adding normal maps and specular maps.

  • Flylio

    Thank you very very very much for this article.

  • Dave Butler

    I had a weird bug with the depth render target and have a solution for anyone else that encounters this. I have an nVidia 8800GT in my desktop, and after it suspends and resumes, it can no longer create an R32F render target (SurfaceFormat.Single). I’m on the latest drivers, etc. My guess is that the bug is somewhere deep in XNA itself, since I had no problem running a non-XNA demo that uses R32F surfaces. The solution I used is to pack the depth data into a SurfaceFormat.Color target instead.

    In RenderGBuffer.fx, add:

    half4 packFloatToHalf4(float value)
    {
        float4 bitSh = float4(256.0*256.0*256.0, 256.0*256.0, 256.0, 1.0);
        float4 bitMask = float4(0.0, 1.0/256.0, 1.0/256.0, 1.0/256.0);
        float4 res = frac(value * bitSh);
        res -= res.xxyz * bitMask;
        return res;
    }

    And change the depth storage line to:

    output.Depth = packFloatToHalf4(input.Depth.x / input.Depth.y);

    Wherever you need to use the data, such as in PointLight.fx, add:

    float unpackFloatFromHalf4(half4 value)
    {
        float4 bitSh = float4(1.0/(256.0*256.0*256.0), 1.0/(256.0*256.0), 1.0/256.0, 1.0);
        return dot(value, bitSh);
    }

    And change the depth read line to:

    float depthVal = unpackFloatFromHalf4(tex2D(depthSampler,texCoord));

    Once I made that change, I no longer encountered the weird bug. I didn’t experience the bug on my laptop (ATI x1400), but performance was horrible. Framerate cut in half with one light, down to almost nothing after 3-4 lights. Anyone know any reason why the code in this tutorial would perform terribly on that GPU? I know it’s not that great, but I was hoping for more than 2-4 fps.

  • Dave Butler

    Oh, and don’t forget to change the creation of the depthRT to use SurfaceFormat.Color if you use the above fix.

  • http://three.homeip.net Tom

    As I posted on ziggyware, this is an awesome tutorial. However, creating a spotlight is not exactly trivial. I really wish you’d included that as part of the tutorial. Do you happen to have a spotlight shader written, and are you willing to share it?

  • http://www.catalinzima.com Catalin Zima

    Sorry, I don’t have a spotlight shader available.

  • John

    Would you know of any reason why the point lights wouldn’t work correctly using DirectX 9.0c with a sphere made from D3DXCreateSphere? I’ve done the rest of the tutorial fine translating everything over but for the point lights they are being rendered on top of everything else instead of being occluded by something in the way.

    They are also not attenuating correctly (the light is lessened by attenuation but it is lessened by a constant amount across the sphere instead of brighter at the center and dimmer at the edges).

    I think it is because the point lights are being rendered to the same light rendertarget as the directional lights, so they are treated as showing up over everything else when combined in the last step. Maybe I’m missing something obvious.. I know this was meant for XNA so if you don’t have any advice for this case that’s fine, but I’m stuck.

  • http://www.catalinzima.com Catalin Zima

    Try rendering the point lights first, and only at the end draw the directional light.

    Also, make sure the sphere created with D3DXCreateSphere has the normals facing in the right direction (towards the outside or inside of the sphere, depending on whether the camera is outside or inside it).

  • John

    Thank you for the quick response. I tried both orders and it didn’t change. Isn’t the point light only being rendered into the light target and not the depth/color/normals though? I’m not seeing how the point light gets properly occluded. Also, am I thinking about attenuation the wrong way or is it supposed to be brightest at the center and darker near the edges of the sphere? I have basic forward rendering lighting and it looks a lot more natural than what I’m getting with this, but I’m sure that’s because I’m doing something wrong.

    It kind of works if I render the point lights into the color, then switch to the light target for the directional. But when I do this the point lights only show up if there is a directional light present also, since the point lights are now being rendered as if they were normal scene geometry and have to be lit by the directional.

    I don’t think it’s the sphere normals facing the wrong way because I am seeing a light, just not correctly.

  • http://www.catalinzima.com Catalin Zima

    yes, the point light does get rendered into the light target, but if you draw the directional light first, which is a full-screen quad, it will fill the empty z-buffer, and might cause the point lights to not be drawn anymore, since they are actually positioned in space in the scene.

    And yes, the light is brightest in the center and darkest at the edges of the sphere. The sphere represents the area lit by a light that sits in the center of the sphere.

  • http://web70.login-82.hoststar.ch/files/deferred/ Lucas

    Hi, first of all thanks for your great work..
    I’m trying to build a small engine based on these articles here and I now encountered a small bug somewhere in the point light code (I don’t know what it is, so maybe you know the solution). The problem can be seen at the outside of the point light sphere where it “cuts” the light in an ugly way. It’s hard to explain, so best look at the pictures that can be found on my attached website.
    I hope you can help because it doesn’t look very nice and i really have no idea what went wrong.
    (The screenshots are actually from chapter 5, the bug can be seen better there. I set the number of point light passes to one and the angle to 0f so you should be able to look at the very same result as seen in my pictures.)
    Thanks in advance 🙂

  • Sean

    this is a great tutorial, and like many others I’ve been trying to convert it to DirectX, DirectX9 in specific. Although I’ve hit a problem with the point light. When I try to render a point light, I get this:


    This is my C++ code

    and this is how I call it
    DeferredPointLight(D3DXVECTOR3(0.0f, 2.0f, 0.0f), vColour3, 10.0f, 2.0f, pBackBuffer)

    I’m really lost, thanks =]

  • http://medsgames.com Meds

    Hi Catalin,

    First of all I must say that you’ve done a terrific job with these tutorials!

    I’m slowly porting the XNA code to unmanaged C++ DirectX, so far I’ve gotten as far as implementing point lights (here).

    The reason I’m porting it is to learn as I code however I’m sort of stuck.

    Where you calculate the position in world space:

    //compute screen-space position
    float4 position;
    position.xy = input.ScreenPosition.xy;
    position.z = depthVal;
    position.w = 1.0f;
    //transform to world space
    position = mul(position, InvertViewProjection);
    position /= position.w;

    The position created does not seem to be what I think. My assumption is that here you’re getting the position of the point in world space, as the comment says. However, if I simply output the position (that is, avoid all lighting calculations) right after it is calculated (after position /= position.w;), I do: return position;

    The colours returned do not match up with what I’d expect if it was a position map, because I thought a position map would essentially replace each pixel’s colour with the colour of its position in terms of r, g, b, where r is x, g is y and b is z. However, the colours output keep changing based on both the camera’s position and the camera’s orientation.

    So I was wondering why this is and also was wondering if there was a way to get a position map as I’ve described 🙂

    Again, I really like your tutorial! Has helped me more than any other deffered rendering tutorial on the internet!

  • czaurel

    In reply to Lucas:

    I found that this problem (or at least my problem, which was utterly similar) was related to parts of the sphere model rendering with wrong culling.
    This can be solved if you test the camera position against the light radius taking the near plane distance into account. This is because the rendering “starts” from the near plane, not from the actual camera position.
    So the code for determining the cull mode should be something like this:

    if (cameraToCenter < lightRadius + camera.NearDistance)
        GraphicsDevice.RenderState.CullMode = CullMode.CullClockwiseFace;
    else
        GraphicsDevice.RenderState.CullMode = CullMode.CullCounterClockwiseFace;

  • JinJin


    For anyone who experiences horizontal lines or similar strange artifacts, this solution I “found” on http://www.sjbaker.org/steve/omniv/love_your_z_buffer.html (well, not exactly) worked on my GeForce 6600GT:

    private float cameraDistance = 70;

    projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, aspectRatio, 10, 100);

    Thanks for the great tutorial!

  • csisy


    Thanks for this article, very cool! 🙂

    I’ve some problem with point lights:
    – if the camera is “inside” the sphere’s triangle (== camera eye position intersect sphere triangle), i can’t see anything. What can i do?
    – Here is an image: http://yscik.com/jf/pfs.php?m=view&v=2044-screenshot_5.jpg
    The 4th square from left is the light render target’s texture. The final image isn’t as contrast as the light RT’s texture. Why?
    I cheated a little: in final combine effect, i do this:

    float3 diffuse = tex2D(colorSampler, In.TexCoord).rgb;
    float3 light = tex2D(lightSampler, In.TexCoord).rgb * 2;

    return float4(saturate(diffuse * light), 1.0f);

    PS: Sorry for my bad english

  • Alex


    Great tutorial, thanks alot for the effort! 🙂

    Just a hint for anyone having problems with the point lights not showing. I had to disable depth write for the directional full screen pass. Otherwise, the spheres won’t get rendered.

    So in DrawDirectionalLight() do this :

    GraphicsDevice.RenderState.DepthBufferWriteEnable = false;
    quadRenderer.Render(Vector2.One * -1, Vector2.One);
    GraphicsDevice.RenderState.DepthBufferWriteEnable = true;

  • Pingback: The Building Blocks of Deferred Rendering

  • 62316e

    My point light is rendered as a half sphere. What did I do wrong?

  • 62316e

    There is my problem with point light.

    Directional light is disabled.

  • http://Website Hubert

    Actually checking the distance of light from camera and setting different culling modes based on that is not needed. You can set back face culling and it should just work, lighting every pixel exactly once for convex closed shape like sphere – more details on this page: http://www.beyond3d.com/content/articles/19/6

  • necro

    in chrome the formatting of the source code is almost impossible to read – maybe you could fix that