Thomas Wrote:2. Render the shadow casting light (using light geometry: sphere, cone, etc) to scene and use shadow map from #1 to calculate occlusion.
I guess what I don't quite understand is your #2 step. You're saying you render the light geometry into the scene, but obviously you have to re-draw the entire scene geometry to get the projection coordinates and the distance from the light.
From what you just said, it sounds as if you're testing the depth map against the scene without that data, so I'm obviously missing something 0_o. Do you mean that you redraw the scene geometry with the shadow shader and use the cone geometry to cull it (via the stencil or depth buffer)?
Anyway though, from the sounds of it, you're doing shadowing as a completely forward pass. This allows the possibility of overdraw, and on my graphics card there is a noticeable speed drop even without overdraw (probably due to vertex recalculation between pixel sets or something like that).
EDIT: I misunderstood what you were saying, fixed in the post below.
Quote:you render the shadow map data and all world geometry again to a new 128 bit render target and then use that render target when rendering the light.
Yes, this is the step we are miscommunicating on. The 128-bit render target makes absolutely no reference to the depth map previously drawn; it could actually be in an entirely different shader altogether. If you're like me, you probably understand code better than description, so let me pseudo-code the algorithm.
setup 2 render targets:
light depth map: R32F (or could be a shadow map format)
coordinate map: A32B32G32R32F (the 128-bit four-float format; I didn't type the name exactly right before)
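If it helps, setting those two targets up would be something like this in D3D9-style pseudo-code (a sketch only; the exact creation call and format/flag names depend on your API):

device.CreateTexture(width, height, 1, USAGE_RENDERTARGET, FMT_R32F, &lightDepthMap);
device.CreateTexture(width, height, 1, USAGE_RENDERTARGET, FMT_A32B32G32R32F, &coordinateMap);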
#1.
VS() //vertex shader
{
out.position = in.position * (lightMatWVP);
}
PS()
{
out.depth = in.position.z / in.position.w; //normalized depth as seen from the light
}
render target = light.depthMap
#2.
VS()
{
out.position = in.position * (cameraMatWVP);
worldPosition = in.position * MatW;
out.lightPos = worldPosition * (lightProjectionMatrix); //the light's view*projection, same light matrices as #1
}
PS
{
float2 coord = in.lightPos.xy / in.lightPos.w * float2(0.5, -0.5) + 0.5; //project into shadow map space (ranging from 0-1 in 2 directions)
out.color.r = coord.x;
out.color.g = coord.y;
out.color.b = in.lightPos.z / in.lightPos.w; //(the depth away from the light according to the lightProjection matrix)
}
render target = coordinate map
//just if you're curious: displayed normally, this texture should look like a rainbow-colored palette
#3.
To be clear, in this step there is NO scene geometry rendered; there is only a single cone in the shape of the shadow caster. The out.cameraPos variable is unnecessary, but I put it there for the sake of clarity that we are projecting a texture onto the geometry from the camera. Technically you could render this as a full-screen quad and not even need a vertex shader, but that would not take advantage of what the algorithm offers. The "jittered disk" is simply an array of float2s used to create a random sampling pattern (it replaces banding with noise).
shader.SetTexture("coordMap", coordinate map);
shader.SetTexture("depthMap", light.depthMap);
VS() //just transforming the cone
{
out.position = in.position * (cameraMatWVP);
worldPosition = in.position * MatW;
out.cameraPos = worldPosition * (cameraProjectionMatrix);
}
PS()
{
float2 screenCoord = in.cameraPos.xy / in.cameraPos.w * float2(0.5, -0.5) + 0.5; //transform camera projection coordinates into 0-1 screen space
float2 fSMcoords = tex2D(coordMap, screenCoord).rg;
float depth = tex2D(coordMap, screenCoord).b;
float result = 0;
for (int i = 0; i < numSamples; i++)
{
result += (depth < tex2D(depthMap, fSMcoords + jitteredDisk[i]).r) ? 0 : 1;
}
result /= numSamples; //fraction of samples occluded
out.color = result;
}
render target = mainSceneTarget
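In case it's useful, here's roughly how I'd build the jittered disk once at load time (pseudo-code again; numSamples, filterRadius, and random() are placeholders you'd supply yourself):

for (int i = 0; i < numSamples; i++)
{
float angle = random(0, 2 * PI);
float radius = sqrt(random(0, 1)); //sqrt gives a uniform distribution over the disk area
jitteredDisk[i] = float2(cos(angle), sin(angle)) * radius * filterRadius;
}

filterRadius is in shadow map texture coordinates (so a few texels wide), and it controls how soft the shadow edges look.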