Working on a sample regarding Orientation in a Direct3D app on Windows Phone 8, I came upon the need to clarify why I’m writing some math code in a specific way.
Coming from XNA to C++/Direct3D 11 can occasionally be confusing. One source of that confusion is ‘when’ and ‘how’ matrices are used, namely row-major vs column-major representations and row-vector vs column-vector mathematics. In short, in a row-major representation the matrix is stored as an array of rows in memory, while in a column-major representation it is stored as an array of columns. There’s also the difference between row-vector math and column-vector math, which affects how transformation matrices are represented, and the correct order in which various operations should be applied. For more details on the mathematical aspects, see this and this. A nice analysis of all four aspects, which I totally recommend anyone to read, can be found here: Row major vs. column major, row vectors vs. column vectors.
For OpenGL and GLSL users, it’s been quite clear for some time now: column major representation and column vectors.
For DirectX, it’s not always that simple, and here are some reasons why:
- The old D3D9 fixed function pipeline worked with row-major matrices and row-vector math
- HLSL by default uses column-major matrix packing and row-vector math (to be fair, if you use mul(M, v), v is treated as a column vector, so you could end up using column-major packing with column-vector math if you wanted to; it’s fairer to say that HLSL works with whatever vector math you want it to).
- DirectXMath uses row-major matrices and row-vector math.
- The old D3DX Effects framework (and the XNA Effects framework) took care of transforming the row-major matrices used by D3DXMath and Xna.Framework.Math into column-major matrices before passing them to shaders. This was done by transposing the matrices silently, behind the scenes, so you didn’t have to worry about it.
- BasicMath.h (commonly found in the Windows 8 and Windows Phone 8 samples) tries to mimic HLSL in CPU code, and thus uses column-major matrices (so it’s easy to copy them directly to HLSL without transposition). But on the math side, I see it as a royal pain in the ass, as it seems to require me to think in column-vector math with regard to the order in which transforms are multiplied. So either I’m too thick to properly understand how I should work with BasicMath, or something in its implementation is not quite right.
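The mul() point above is worth seeing in code: the argument order is what picks the convention. A small illustration (worldViewProj and pos are placeholder names, not from any particular sample):

```hlsl
// Row-vector math: the vector goes on the left (XNA / D3D9 style).
float4 posRow = mul(float4(pos, 1.0f), worldViewProj);

// Column-vector math: the vector goes on the right (GLSL style).
// For these two to produce the same result, the two worldViewProj
// matrices would have to be transposes of each other.
float4 posCol = mul(worldViewProj, float4(pos, 1.0f));
```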
So, what is the way to deal with all of this? Thankfully, HLSL gives us a helping hand, since it knows how to handle both row-major and column-major matrix packing. The only place this matters is the constant buffers through which you pass matrices from the CPU to the shader. By default, HLSL assumes you provide these matrices in column-major order. To change this behavior, you have three options:
- Use the /Zpc or /Zpr switches when compiling the shaders using fxc.exe, to tell the compiler how to pack matrices
- use the #pragma pack_matrix directive
- use the row_major or column_major type modifiers to specify packing for individual matrix parameters
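The last two options look like this in shader code (the cbuffer and member names below are made up for illustration; option 1 is just /Zpr or /Zpc on the fxc.exe command line):

```hlsl
// Option 2: one directive changes the default packing for the whole file.
#pragma pack_matrix(row_major)

// Option 3: or annotate individual matrices instead.
cbuffer PerObject : register(b0)
{
    row_major float4x4 world;
};
```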
Making a choice
To make your life easier, and to avoid confusion, you should make a choice as early as possible and stick with it. There’s no right or wrong choice, but I’ll make some comments about the advantages and disadvantages of each option, to justify my own choice. Let’s look at the options:
- BasicMath.h – Found in almost any Win8/WP8 sample from MSDN. It mimics HLSL in syntax, but it’s really basic. You don’t need to transpose the matrices before sending them to HLSL, but it requires you to work with a different multiplication order than HLSL.
- OpenGL Mathematics – Works with column-major representation and column-vector math. You’ll need to either transpose matrices before sending them to HLSL, or use the column-vector form of mul() (matrix as the first argument) in your HLSL code. But otherwise, it is a full-fledged math library that has all the functionality you’ll need.
- DirectXMath + transpose matrices – Part of the Platform SDK. It’s a great math library, with SIMD support. It works with row-vector math and row-major representation, so all you need to do is transpose the matrices before sending them to the shader.
- DirectXMath + change HLSL matrix layout – Just use DirectXMath normally, but instead of transposing all the matrices you send to the shaders, mark the matrix parameters as row_major, using one of the methods explained previously.
I’m sure there are many other options, but these are the ones that I focused on. At AmusedSloth, we decided to use glm (OpenGL Mathematics), since it’s portable, and works out of the box on all platforms we want to target. But for my samples/tutorials/spare-time projects, I wanted to use something as close as possible to how things worked in XNA (row-vector math), which is why I’ll be using one of the last two options in my samples from now on. For simplicity and clarity, I’ll likely change the matrix layout in HLSL instead of transposing all matrices most of the time, so my choice for future samples on this site is: DirectXMath + change HLSL matrix layout.
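With that choice, the shader side only needs the layout modifier on the constant buffer members. A sketch of what that looks like (the cbuffer and member names here are my own, not prescribed by anything):

```hlsl
cbuffer ModelViewProjectionConstantBuffer : register(b0)
{
    // row_major matches DirectXMath's in-memory layout, so the
    // matrix data can be copied into the buffer without a transpose.
    row_major float4x4 model;
    row_major float4x4 view;
    row_major float4x4 projection;
};
```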
Meanwhile, I found out (thanks, Travis) that the updated Effects framework for Direct3D 11 was recently released as shared source. However, it is meant for Win32 desktop applications, and was provided to help with porting apps. So it will (just like the previous Effects frameworks) transpose the matrices automatically for you when needed, but it won’t be usable on WinRT or Windows Phone.