Imagine a bat signal has been turned on somewhere in Gotham City. Light bursts forth from a lamp. Some of the light hits a filter in the shape of a bat and proceeds no further. Other light escapes into the cityscape, landing on nearby buildings, clouds, trees, and so on.
Use your mouse to broadcast the signal in different directions in this renderer:
How do you think the renderer is doing this?
The bat signal is a texture that is being projected onto the scene. Pretend you are the spotlight broadcasting the signal. Your hand is the texture. When your hand is right in front of you, the image is small. As you move your hand away, the image gets bigger, just as it does with a digital projector. When the signal lands on a fragment in the scene, it contributes its color to that fragment.
Given the way GPUs work, you don't actively project the texture. Rather, you figure out how the vertices and fragments receive it. Each vertex must be assigned texture coordinates that locate it within the texture. Since the texture moves around, the texture coordinates cannot possibly be fixed in a VBO. Instead, the texture coordinates are determined dynamically in the vertex shader.
Somehow you must find where a vertex lands on the projected image. Good news. You've done this before. You performed a very similar operation when trying to figure out where a vertex lands on the image plane in a perspective projection. Everything you learned back then applies to projective texturing.
Back then, you moved the vertex from model space into the larger world, and then from the world into a space where the eye was at the origin, and then from eye space into the normalized unit cube that WebGL expects. The end result was a set of coordinates that positioned the vertex on the image plane. This is the matrix gauntlet that carried you through these spaces:
clipPosition = clipFromEye * eyeFromWorld * worldFromModel *
    vec4(position, 1.0);
In projective texturing, you treat the light source exactly like an eye. But instead of going into eye space, where the eye is at the origin, you go into light space, where the light is at the origin. The modified gauntlet looks like this:
texPosition = clipFromLight * lightFromWorld * worldFromModel *
    vec4(position, 1.0);
The lightFromWorld matrix is constructed with the aid of a Camera instance. The clipFromLight matrix is a perspective matrix that shapes the aperture of the spotlight.
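To see that step in isolation, here's a sketch of how these two matrices might be built, using the same Camera and Matrix4 helpers and the same parameters that appear in the complete pipeline below:
// Sketch: build the two light-related matrices on their own.
const lightCamera = Camera.lookAt(lightPosition, lightTarget, new Vector3(0, 1, 0));
const lightFromWorld = lightCamera.matrix;                       // world space -> light space
const clipFromLight = Matrix4.fovPerspective(45, 1, 0.1, 1000);  // shapes the spotlight's aperture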
This gauntlet lands you in the [-1, 1] interval of the unit cube, but you want to be in the [0, 1] interval of texture coordinates. So, you need to prepend a couple of extra matrices that do some range-mapping. Scaling by 0.5 squeezes [-1, 1] into [-0.5, 0.5], and translating by 0.5 then shifts it into [0, 1]:
texPosition = Matrix4.translate(0.5, 0.5, 0) *
    Matrix4.scale(0.5, 0.5, 1) *
    clipFromLight * lightFromWorld * worldFromModel *
    vec4(position, 1.0);
That's a lot of matrices to be multiplying for every vertex. You should avoid this cost by multiplying all these matrices together once in JavaScript. Since JavaScript doesn't allow you to overload builtin operators like *, multiplying five matrices together is ugly. However, if you put all the matrices in an array, JavaScript's reduce function will iterate through them and accumulate their product:
const lightCamera = Camera.lookAt(lightPosition, lightTarget, new Vector3(0, 1, 0));
const matrices = [
  Matrix4.translate(0.5, 0.5, 0),
  Matrix4.scale(0.5, 0.5, 1),
  Matrix4.fovPerspective(45, 1, 0.1, 1000),
  lightCamera.matrix,
  worldFromModel,
];
const textureFromModel = matrices.reduce((accum, transform) => accum.multiplyMatrix(transform));
Like your other matrices, this one must be uploaded as a uniform:
shaderProgram.setUniformMatrix4('textureFromModel', textureFromModel);
The vertex shader receives the matrix and transforms the vertex position into the texture coordinates that locate the vertex on the projected texture:
uniform mat4 textureFromModel;
in vec3 position;
out vec4 mixTexPosition;
void main() {
  // ...
  mixTexPosition = textureFromModel * vec4(position, 1.0);
}
Note that the texture coordinates are a vec4. The coordinates are in clip space, which means they haven't yet been divided by their w-component. You perform the perspective divide in the fragment shader and then look up the projected color like you look up a color from any texture:
uniform sampler2D signal;
in vec4 mixTexPosition;
out vec4 fragmentColor;

void main() {
  vec2 texPosition = mixTexPosition.xy / mixTexPosition.w;
  vec3 signalColor = texture(signal, texPosition).rgb;
  fragmentColor = vec4(signalColor, 1.0);
}
Alternatively, GLSL provides a lookup function that will perform the perspective divide for you. It's called textureProj:
vec3 signalColor = textureProj(signal, mixTexPosition).rgb;
This example code computes the color using only the projected texture and doesn't perform any other lighting. In the bat signal renderer, the projected color is added onto a darkened diffuse term.
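A minimal sketch of that combination might look like this; the normal, lightDirection, and albedo values and the 0.2 darkening factor are assumptions for illustration, not the renderer's exact terms:
// Sketch: add the projected signal onto a darkened diffuse term.
// normal, lightDirection, albedo, and the 0.2 factor are illustrative assumptions.
vec3 diffuse = max(dot(normal, lightDirection), 0.0) * albedo;
vec3 signalColor = textureProj(signal, mixTexPosition).rgb;
fragmentColor = vec4(0.2 * diffuse + signalColor, 1.0);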
You can't tell in the bat signal renderer, but if you were to look behind the light source, you would find a second instance of the projected texture. The projection works in both directions. In the backward projection, the w-component is negative. You can cancel out the unwanted second instance with a conditional expression:
vec3 signalColor = mixTexPosition.w > 0.0
  ? textureProj(signal, mixTexPosition).rgb
  : vec3(0.0);
You may not have many occasions where you need to project an image onto a scene. However, projective texturing is commonly used to add shadows, as you'll soon learn.