Introduction

This tutorial covers four ways in which you can use vertex textures in your XNA game. We will start with an introduction to vertex textures: what they are and how they are used. Then we will continue with the four effects. These begin with basic operations and gradually increase in difficulty, touching on different techniques that can be used together with vertex textures. At every step, the text will try to explain why certain design choices were made, and how the different techniques can be applied in other situations, outside of this tutorial. Some chapters contain a Bonus section, in which I briefly explain how to apply certain techniques to the effects, even though they are not related to VTF. These sections move faster and contain some hard-coded values, but their meaning will be explained, so they can be reused in whatever context you wish.

Vertex Textures

Ever since the appearance of programmable GPUs, there has been a significant difference between the capabilities of vertex and pixel shaders. Shader Model 3.0 took the first steps in providing common functionality between the two, and DirectX 10 took the final step by unifying the instruction set across all types of shaders (Vertex, Pixel, and the new Geometry Shaders). The main focus of this text is one feature of Shader Model 3.0 that builds towards this unification: Vertex Texture Fetch (VTF).

As you probably know by now, Vertex Shaders traditionally deal with transforming vertices and manipulating or setting properties such as position, color, normal, and texture coordinates. After this, the vertices define triangles, which are rasterized into pixels. At this step, Pixel Shaders come into play, and can be used for texturing, bump mapping, normal mapping, and lots of other effects we all love. For a short introduction to HLSL, see this tutorial, and the Shader Series articles on creators.xna.com.

Vertex Texture Fetch allows us to read data from textures inside a Vertex Shader, almost like Pixel Shaders do. To use VTF, you will need either an Xbox 360 (which has a unified shader architecture) or an NVIDIA graphics card from the GeForce 6 series or newer. For those with ATI GPUs there is another technique, called Render To Vertex Buffer (R2VB), which can be used to achieve similar results, but to my knowledge it is not usable through XNA, because it is an ATI-only hack.

The GPU assembly instruction used to read textures in a Vertex Shader is texldl. In HLSL, the corresponding intrinsic is tex2Dlod. The usage is tex2Dlod(s, t), where s is a 2D texture sampler and t is a four-component vector. The instruction reads from the texture using a user-defined mipmap level, which is specified through the fourth component of the texture coordinate vector (t.w). In most cases I use mipmap level 0 (the full-resolution texture), so my usual call looks like tex2Dlod(textureSampler, float4(uv.xy, 0, 0)).
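To make this concrete, here is a minimal sketch of how such a sampler and fetch might look in an XNA effect file. The names (HeightMap, HeightMapSampler, input.TexCoord) are placeholders for illustration, not taken from the sample code:

// A texture and sampler for an R32F heightmap. Filtering is set to Point,
// because vertex textures are not filtered in hardware (see the restrictions below).
texture HeightMap;
sampler HeightMapSampler = sampler_state
{
    Texture = <HeightMap>;
    MinFilter = Point;
    MagFilter = Point;
    MipFilter = Point;
    AddressU = Clamp;
    AddressV = Clamp;
};

// Inside a vertex shader: read the height stored in the red channel,
// explicitly requesting mipmap level 0 through the w component.
float height = tex2Dlod(HeightMapSampler, float4(input.TexCoord.xy, 0, 0)).r;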

Vertex Textures behave like the textures read in Pixel Shaders, except for the following restrictions:

    • Bilinear and trilinear filtering are not supported directly in hardware; only point sampling is available. We will see how bilinear filtering can be implemented manually in the vertex shader (a sketch follows this list).
    • Anisotropic filtering is not supported in hardware.
    • Automatic level-of-detail (mipmap level) selection is not available; the desired mipmap level has to be computed manually if needed.
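Because only point sampling is available, bilinear filtering has to be written by hand when it is needed. The sketch below shows the general idea: fetch the four texels around the sampling point and blend them with lerp. It is only an illustration (it ignores the half-texel offset of texel centers), the function name is hypothetical, and texelSize is assumed to be 1 / textureSize for a square texture:

// Hypothetical helper: manual bilinear filtering of a vertex texture.
float4 tex2DlodBilinear(sampler texSampler, float2 uv, float texelSize)
{
    // Fetch the four texels surrounding the sampling point.
    float4 tl = tex2Dlod(texSampler, float4(uv, 0, 0));
    float4 tr = tex2Dlod(texSampler, float4(uv + float2(texelSize, 0), 0, 0));
    float4 bl = tex2Dlod(texSampler, float4(uv + float2(0, texelSize), 0, 0));
    float4 br = tex2Dlod(texSampler, float4(uv + float2(texelSize, texelSize), 0, 0));

    // Blend factors: the fractional position of uv inside the current texel.
    float2 f = frac(uv / texelSize);

    // Blend horizontally, then vertically.
    float4 top    = lerp(tl, tr, f.x);
    float4 bottom = lerp(bl, br, f.x);
    return lerp(top, bottom, f.y);
}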

The base code

Before we begin, please download the resources.zip file. It contains:

    • Two heightmaps, in .dds R32F format, which means each pixel is a 32-bit floating-point number that specifies a height.
      Note: on NVIDIA cards of the GeForce 6 and 7 generations, Vertex Texture Fetch is only supported for the R32F and A32B32G32R32F formats. The GeForce 8 series or the Xbox 360 might support VTF for other texture formats, but I didn't have the opportunity to test this.
    • Several textures, taken from www.turbosquid.com or from other samples and tutorials.
    • Camera.cs is a GameComponent that handles the camera. It is taken from the Skinned Model Sample.
      • use the Triggers or the Z and X keys to zoom out / in
      • use the Right Stick or the WASD keys to move the camera
    • Grid.cs is a class for rendering a grid of triangles covering the XZ plane.

These resources will be used throughout the tutorials, and will be referred to by their filenames.

Grid.cs

This class provides the geometry for a grid, and will be used in Heightmap Rendering, Terrain Morphing, and Steps in Snow. The CellSize property controls how big each cell of the grid is, and the Dimension property controls how many cells there are in each row and column. Each vertex has a position, a normal, and a corresponding set of texture coordinates. Thus, each vertex maps to a certain position in the texture, and this is what the vertex shader will use to manipulate the vertex. This association between a vertex and a texel is the key mechanism for using vertex textures. The height of every vertex is initially set to 0, so we start with a completely flat plane. This can be seen in the GenerateStructures function, at line 46 of Grid.cs:

// horizontal plane of size cellSize * dimension, centered at Vector3.Zero
vert.Position = new Vector3((i - dimension / 2.0f) * cellSize, 0, (j - dimension / 2.0f) * cellSize);
// associate texture coordinates to each vertex
vert.TextureCoordinate = new Vector2((float)i / dimension, (float)j / dimension);
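As a preview of how this association will be used, here is a rough sketch of a vertex shader that displaces the flat grid using a heightmap. It reuses the hypothetical HeightMapSampler declared in the earlier sketch, and the structures and names are only illustrative, not the exact shaders developed later in the tutorial:

// Hypothetical vertex shader sketch: each grid vertex uses its texture
// coordinate to fetch a height from the heightmap and displace itself.
float4x4 WorldViewProjection;

struct VS_INPUT
{
    float4 Position : POSITION0;
    float2 TexCoord : TEXCOORD0;
};

struct VS_OUTPUT
{
    float4 Position : POSITION0;
    float2 TexCoord : TEXCOORD0;
};

VS_OUTPUT VS_Heightmap(VS_INPUT input)
{
    VS_OUTPUT output;

    // Read the height for this vertex from the R32F heightmap (mipmap level 0).
    float height = tex2Dlod(HeightMapSampler, float4(input.TexCoord, 0, 0)).r;

    // Displace the flat grid vertex along the Y axis (a scale factor could be applied here).
    float4 displaced = input.Position;
    displaced.y = height;

    output.Position = mul(displaced, WorldViewProjection);
    output.TexCoord = input.TexCoord;
    return output;
}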