In which you sort out which coordinate system you should use when shading.
When this renderer first loads, the shading indicates that the light source is positioned above the torus:
However, when you rotate just the torus so that it flips over, the shading suggests that the light source is below the torus, even though the light source's yellow orb hasn't moved.
The problem is these lines in the fragment shader:
const vec3 lightPosition = vec3(0.0, 10.0, 0.0);
in vec3 mixPosition;
What space are these in? Clip space? Eye space? World space? Model space? Are they even in the same space? When you don't consider the spaces of your coordinates, you will almost certainly end up with strange behaviors.
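To see how mixing spaces goes wrong, consider this sketch in plain JavaScript rather than the renderer's actual code. The names and the particular 180-degree rotation are made up for illustration:

```javascript
// Hypothetical setup: world space is model space flipped 180 degrees
// about the x-axis, as when the torus is rotated over.
function modelToWorld([x, y, z]) {
  return [x, -y, -z];
}

// The same physical point has different coordinates in each space.
const tipModel = [0, 1, 0];              // model space
const tipWorld = modelToWorld(tipModel); // world space: [0, -1, 0]

// Subtracting a model-space position from a world-space light position
// produces a direction in no space at all.
const lightWorld = [0, 10, 0];
const mixedSpaces = lightWorld.map((c, i) => c - tipModel[i]);
const consistent = lightWorld.map((c, i) => c - tipWorld[i]);
console.log(mixedSpaces); // computed from mismatched spaces
console.log(consistent);  // both operands in world space
```

The two differences disagree, and only the second one means anything: a subtraction is a direction only when both operands live in the same space.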
The vertex shader in this renderer determines the space:
in vec3 position;
in vec3 normal;
out vec3 mixPosition;
out vec3 mixNormal;
void main() {
// ...
mixPosition = position;
mixNormal = normal;
}
There are no transformations in the assignments to mixPosition and mixNormal. That means the interpolated values are in model space.
The fragment shader has this line:
vec3 lightDirection = normalize(lightPosition - mixPosition);
The subtraction implies that lightPosition is also in model space. Since lighting is being performed in the untransformed model space, rotating the torus has no effect on the shading.
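The rotation-blindness can be reproduced outside the shader. This sketch in plain JavaScript (illustrative names, not the renderer's code) computes the diffuse term the same way the fragment shader does, before and after flipping the torus over:

```javascript
// Rotate a 3-vector 180 degrees about the x-axis, flipping the torus over.
function flipOverX([x, y, z]) {
  return [x, -y, -z];
}

function dot(a, b) {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

function normalize(v) {
  const length = Math.hypot(...v);
  return v.map(component => component / length);
}

// Lambertian term, mirroring the fragment shader's arithmetic.
function litness(position, normal, lightPosition) {
  const lightDirection = normalize(lightPosition.map((c, i) => c - position[i]));
  return Math.max(0, dot(normalize(normal), lightDirection));
}

const lightPosition = [0, 10, 0];
const position = [0, 1, 0]; // a point on top of the torus
const normal = [0, 1, 0];   // its upward-facing normal

// Model space: the rotation never enters the computation, so the
// inputs — and therefore the shading — are identical after the flip.
const before = litness(position, normal, lightPosition);
const after = litness(position, normal, lightPosition);

// Transform the position and normal first, and the flip shows up:
// the point now faces away from the light and goes dark.
const afterFlip = litness(flipOverX(position), flipOverX(normal), lightPosition);
console.log(before, after, afterFlip);
```

In model space the rotation simply never reaches the lighting math; only when the position and normal are transformed does the flip darken the surface.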
Rarely do you want to perform lighting in model space. Lights are usually defined in either world space or eye space. Lamp posts, wall-mounted torches, and fixtures are world space lights. Headlamps and flashlights in the viewer's hands are eye space lights.
Whether your lights are defined in world space or eye space, lighting itself tends to be performed in eye space. This is because some lighting terms involve the position of the eye. In eye space, the position of the eye is predictably at \(\begin{bmatrix}0&0&0\end{bmatrix}\).
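One payoff of the eye sitting at the origin: the direction from a fragment back to the eye, which specular terms need, collapses to a simple negation. A sketch in plain JavaScript (illustrative names):

```javascript
function normalize(v) {
  const length = Math.hypot(...v);
  return v.map(component => component / length);
}

const eye = [0, 0, 0];                // the eye, in eye space
const surfacePosition = [3, 0, -4];   // some fragment's eye-space position

// General form: eye position minus surface position...
const toEyeGeneral = normalize(surfacePosition.map((c, i) => eye[i] - c));

// ...which in eye space collapses to a negation, the way a shader
// would write it: normalize(-mixPosition).
const toEyeSimplified = normalize(surfacePosition.map(c => -c));
console.log(toEyeGeneral, toEyeSimplified);
```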
The fragment position and normal must both be transformed to eye space. This is done in the vertex shader:
in vec3 position;
in vec3 normal;
out vec3 mixPosition;
out vec3 mixNormal;
uniform mat4 eyeFromModel;
void main() {
// ...
mixPosition = (eyeFromModel * vec4(position, 1.0)).xyz;
mixNormal = (eyeFromModel * vec4(normal, 0.0)).xyz;
}
Note that the homogeneous coordinate for the normal is 0 instead of 1. The homogeneous coordinate was added to make translation work. Vectors are mere directions; they do not translate. Up is up wherever you are. (Transforming the normal by eyeFromModel directly is safe only as long as the matrix applies rotations, translations, and uniform scaling; a matrix with non-uniform scaling requires the inverse transpose of eyeFromModel instead.)
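The effect of that 0 can be checked directly. This sketch (plain JavaScript, with a 4×4 matrix stored column-major as WebGL expects; the names are illustrative) multiplies a translation matrix against a point and a direction:

```javascript
// Multiply a column-major 4x4 matrix by a 4-vector.
function multiply(matrix, [x, y, z, w]) {
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    out[row] =
      matrix[row] * x +
      matrix[4 + row] * y +
      matrix[8 + row] * z +
      matrix[12 + row] * w;
  }
  return out;
}

// Translate by (5, 0, 0). In column-major order, the translation
// occupies the last column.
const translation = [
  1, 0, 0, 0,
  0, 1, 0, 0,
  0, 0, 1, 0,
  5, 0, 0, 1,
];

const point = multiply(translation, [0, 1, 0, 1]);     // w = 1: moves
const direction = multiply(translation, [0, 1, 0, 0]); // w = 0: doesn't
console.log(point, direction);
```

The point slides 5 units along x, while the direction with w = 0 passes through the translation untouched.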
The light position must also be in eye space. This fragment shader is almost identical to the one that performed lighting in model space, except lightPosition has been renamed lightEyePosition and turned into a uniform:
uniform vec3 lightEyePosition;
in vec3 mixPosition;
in vec3 mixNormal;
out vec4 fragmentColor;
void main() {
vec3 lightDirection = normalize(lightEyePosition - mixPosition);
vec3 normal = normalize(mixNormal);
float litness = max(0.0, dot(normal, lightDirection));
fragmentColor = vec4(vec3(litness), 1.0);
}
In this renderer, the light source is fixed at \(\begin{bmatrix}2&2&-10\end{bmatrix}\) in eye space:
As you rotate the world, the light source stays put since it is anchored to the viewer and not the spinning world. The shading on the torus changes as different faces become illuminated.