The Dark Side

Published on 7/20/2011

Tricky shadow mapping bits coming up...

The principle behind this is quite simple. There is no shortage of tutorials out there on this topic, and plenty of discussions about the best method to use. The one I have opted for is the custom shadow mapping option. I suppose at some point I will try to figure out proper hardware-supported stencil buffering etc., but there are well documented pros and cons attached to each method. See here for details. It is a very, very old post, but it serves to illustrate my point.

So anyway, moving on to what I'm trying to accomplish... Here's the depth buffer I'm generating from the viewpoint of the sun. This bit is pretty straightforward, using its own (small) shader and a render target. One note on this: most native DirectX implementations opt for the R32 surface format here to retain accuracy. For XNA, I initialize the render target to the Rgba1010102 format. This is memory overkill at the moment, since I'm only using the red channel for depth information. The problem is that formats like Single don't support blending (or require Point filtering instead of Linear), so passing the buffer to the real scene render shader isn't supported while the BlendState is set for alpha. So, to make the best of it, I'll probably change it to a normal 32-bit RGBA format and pack the float depth value into the four components instead. Should make for some pretty psychedelic depth buffers :)
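For what it's worth, packing a float depth into the four 8-bit channels of a standard Color target usually looks something like the sketch below. This is a common technique rather than anything from my code, and the function names are my own:

```hlsl
// Hypothetical helpers -- names are mine, not from the project.
// Pack a depth value in [0,1) into the four 8-bit channels of a
// standard Color (RGBA) render target, and unpack it on the other side.
float4 PackDepth(float depth)
{
    const float4 shift = float4(1.0, 255.0, 255.0 * 255.0, 255.0 * 255.0 * 255.0);
    float4 packed = frac(depth * shift);
    // Subtract what the next channel already stores so the
    // channels don't double-count the same bits.
    packed -= packed.yzww * float4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
    return packed;
}

float UnpackDepth(float4 packed)
{
    const float4 shift = float4(1.0, 1.0 / 255.0,
                                1.0 / (255.0 * 255.0),
                                1.0 / (255.0 * 255.0 * 255.0));
    return dot(packed, shift);
}
```

Each channel effectively holds the next 8 bits of the fraction, which is exactly why the raw buffer ends up looking psychedelic.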


Currently the actual generated shadows are all over the place. I suspect an error in the projection from the camera viewpoint to the light viewpoint for the depth comparison. No luck so far, but I'll keep at it. After that, there's blurring to be done as an additional step. The only problem with all this is that the frame rate is cut in half... At this point I'm not discriminating about what I render into the depth buffer, but that said, there isn't a lot on the test level to exclude either.


I've managed to solve the problems with the projection from the camera view to the depth buffer. The HLSL function tex2Dproj doesn't actually do what it says on the tin. Basically, once you've transformed the pixel to the depth buffer space using the appropriate projection, simply calling tex2Dproj with the resulting projected vector isn't going to work. Even though the function does the projection divide, it doesn't remap the result into the 0.0 to 1.0 space used for UV lookup. A simple matrix multiplication can do that for you, though, after which the function returns the appropriate texel.
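That remap can be folded into a scale/bias matrix applied after the light's view-projection transform. A sketch of the idea, assuming XNA's row-vector mul convention; the matrix and function names here are mine:

```hlsl
// Hypothetical names. lightViewProj takes world space to the light's clip space.
// This extra matrix remaps post-divide x,y from [-1,1] to the [0,1] UV range
// (flipping y, since texture v runs top-down).
static const float4x4 TexScaleBias =
{
    0.5,  0.0, 0.0, 0.0,
    0.0, -0.5, 0.0, 0.0,
    0.0,  0.0, 1.0, 0.0,
    0.5,  0.5, 0.0, 1.0
};

float4 ShadowCoord(float4 worldPos, float4x4 lightViewProj)
{
    float4 clipPos = mul(worldPos, lightViewProj);
    // tex2Dproj still does the divide by w; this matrix only handles
    // the [-1,1] -> [0,1] remap, so it's applied before the divide.
    return mul(clipPos, TexScaleBias);
}
```

Because the bias terms sit in the w row, they survive the perspective divide: (0.5x + 0.5w) / w comes out as 0.5(x/w) + 0.5, which is exactly the UV-space coordinate tex2Dproj needs.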


This is what I ended up with. Very blocky and very low fidelity, but at least it's accurate. There is still a problem with the clamp filtering or something. The far tiles are dark when they shouldn't be. But all in good time...
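One thing worth checking for those dark far tiles (an assumption on my part, not a confirmed fix): pixels that project outside the shadow map's [0,1] UV range get clamped to the edge texels and can end up shadowed by whatever happens to sit at the border. A common workaround is to treat out-of-range samples as fully lit, roughly like this hypothetical sketch:

```hlsl
// Hypothetical sketch: treat anything outside the shadow map as lit.
// uv and depth are the projected shadow coordinates after the perspective divide.
float ShadowFactor(sampler2D shadowMap, float2 uv, float depth)
{
    if (any(uv < 0.0) || any(uv > 1.0))
        return 1.0; // outside the light's frustum -> assume lit

    // Depth currently lives in the red channel only.
    float storedDepth = tex2D(shadowMap, uv).r;
    const float bias = 0.0015; // small offset to fight shadow acne
    return (depth - bias <= storedDepth) ? 1.0 : 0.0;
}
```

The alternative is to size the light's frustum so the whole visible scene always fits inside the shadow map, but the explicit bounds check is cheap insurance either way.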