Multiple Materials

    One disadvantage of deferred shading is the limited number of material types it can represent. The Phong model we currently use allows some customization through the specular intensity and specular power. Using these, we can have surfaces with different degrees of shininess, as seen in the previous chapters. However, this lighting model cannot represent every type of surface we might want. For example, a velvety surface, which has a subtle sheen when the viewing angle is low, cannot be drawn using Phong. Fortunately, there are solutions.

    One solution is to have a very large G-Buffer, with lots of parameters which, when combined, could model any kind of surface. This would be ideal, but definitely not practical: such a large G-Buffer would cancel out any advantages gained through deferred shading.

    Another solution would be to store a material ID, and then, based on this ID, apply different lighting shaders to model different types of surfaces. This would be similar to forward rendering, where we have lots and lots of shaders, one for each type of object in the world. However, how would we determine which shader to use for each light? Drawing each light multiple times, once with every possible shader, is very slow. Alternatively, we could merge all the shaders into a single uber-shader, and choose which equation to apply using a lot of branching. In this case, branch coherence would be quite acceptable, since pixels that are close on the screen tend to share the same material. But modeling a large number of lighting models would still require a lot of branching, which is not a good idea on today’s hardware. Still, this method is worth keeping in mind for the future: as graphics cards improve, branching becomes less and less expensive, so this solution will grow more and more attractive. For now, though, there is another solution which is both cheap and flexible.
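    The uber-shader idea can be sketched on the CPU side like this (Python, purely for illustration; a real implementation would branch inside a pixel shader, and the function names `phong` and `velvet` are hypothetical, not from this tutorial):

    ```python
    def phong(n_dot_l, n_dot_h, shininess=32):
        # Classic diffuse term plus a Blinn-style specular term.
        return max(n_dot_l, 0.0) + max(n_dot_h, 0.0) ** shininess

    def velvet(n_dot_l, n_dot_h):
        # A stand-in response that grows at grazing angles (not a real BRDF).
        return max(n_dot_l, 0.0) * (1.0 - max(n_dot_h, 0.0))

    def uber_shade(material_id, n_dot_l, n_dot_h):
        # On the GPU, this branch is only cheap when neighbouring pixels
        # take the same path -- which pixels of the same material usually do.
        if material_id == 0:
            return phong(n_dot_l, n_dot_h)
        elif material_id == 1:
            return velvet(n_dot_l, n_dot_h)
        return 0.0
    ```

    The dispatch itself is trivial; the cost concern in the paragraph above is entirely about how GPUs execute divergent branches, not about the branch logic.
    
    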

    The idea is to store the light response in a texture. We can then read from this texture using parameters such as N*L and N*H (H is the half vector, which substitutes for the reflection vector in the Blinn lighting model), and the result is an intensity for the light. NVIDIA has an example of this technique on their site, here.
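    The lookup can be sketched like this (Python on the CPU, purely for illustration; in practice this is a `tex2D` fetch in the pixel shader, and the `lookup` list stands in for the texture):

    ```python
    import math

    def normalize(v):
        """Return v scaled to unit length."""
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def half_vector(light_dir, view_dir):
        """Blinn half vector: normalized sum of light and view directions."""
        return normalize(tuple(l + v for l, v in zip(light_dir, view_dir)))

    def light_response(lookup, normal, light_dir, view_dir):
        """Sample a light-response table using (N*L, N*H) as coordinates.
        Nearest-neighbour sampling keeps the sketch simple; real hardware
        would filter the texture bilinearly."""
        n = normalize(normal)
        h = half_vector(light_dir, view_dir)
        # Clamp to [0, 1], as a saturated dot product would on the GPU.
        u = max(0.0, min(1.0, dot(n, normalize(light_dir))))
        v = max(0.0, min(1.0, dot(n, h)))
        size = len(lookup) - 1
        return lookup[round(v * size)][round(u * size)]
    ```

    The key point is that the texture replaces the lighting equation entirely: whatever curve the artist paints into it becomes the surface's response.
    
    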

    They use the following texture to model the light response (the left side is the color channel, which defines the diffuse response, and the right side is the alpha channel, which defines the specular response):


    Applying it on a model results in the following image.


    Using a similar technique applied to the deferred renderer yielded a velvety look on the lizard.


    As you can see, this resulted in an interesting lighting effect, which is not achievable using only the Phong model. If we have a light-response texture for each lighting model we want to implement, then we only need to choose between these textures using a material ID. Furthermore, if we store these textures as slices of a 3D texture, we can use the material ID as the third texture coordinate. So, to obtain the light response of a certain surface, we would sample the 3D texture using (N*L, N*H, ID), where N is the surface normal, H is the half vector and ID is the material ID. Unfortunately, I can’t provide a sample for this technique, since I’ve only just experimented with it, and the code is messed up. Besides, this topic alone deserves an article of its own, and is beyond the scope of this tutorial.
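    A minimal sketch of the 3D lookup, again in CPU-side Python (in a shader this would be a single `tex3D` fetch; the `volume` data below is invented purely to make the example runnable):

    ```python
    def sample_response_3d(volume, n_dot_l, n_dot_h, material_id):
        """Look up the light response in a 3D table indexed by (N*L, N*H, ID).
        `volume` is a list of 2D slices, one per material; the material ID
        selects the slice, so adding a lighting model just adds a slice."""
        table = volume[material_id]
        size = len(table) - 1
        u = max(0.0, min(1.0, n_dot_l))
        v = max(0.0, min(1.0, n_dot_h))
        return table[round(v * size)][round(u * size)]

    # One slice per material (illustrative 2x2 data, not real responses):
    volume = [
        [[0.0, 0.5], [0.0, 0.5]],   # material 0: varies only with N*L
        [[0.0, 0.0], [1.0, 1.0]],   # material 1: varies only with N*H
    ]
    ```

    Note that in a real shader the ID would need to land in the middle of its slice to avoid the hardware filtering between two different materials' responses.
    
    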

    This technique of using a material ID to index into a 3D texture for the light response has already been used in a commercial game (S.T.A.L.K.E.R.), and it is a nice area for further experimentation if you’re interested in deferred renderers. It might also be extended further using BRDFs, which look up light information in two or more textures. More examples of texture-based BRDFs can be found in the NVIDIA shader library, here.

  • Julian

    Very interesting post! A quick question though. Most of the models for specular highlights on this page seem to require more information than just N.L and N.H:

    For example, the Cook-Terrance model requires E.N and E.L. Also, effects like rim lighting require E.N as well.

    One way I can see getting around this is to encode the R channel as E.N = E.L = 0, G channel as E.N = 0, E.L = 1, and the B channel as E.N = 1, E.L = 0. The only problem is, this would only work if the light response function was linear with respect to these variables, and I’m pretty sure that’s not the case.

    Am I missing something here? Is there another, better way to do this? Perhaps using two textures per ID?

  • Julian

    Hmm….now that I’m thinking about it, perhaps the texture coordinates could be E.N / E.L or N.L / N.H depending on the color channel! Do you think this could work?

  • Julian

    Hmm I don’t actually think that would work. Also, Cook-*Torrance. Where’s an edit button when you need one? 😉
