# Diffuse Term

In which you shade a surface according to how much it faces a light source.

To shade a fragment, you must know its normal. You must also know in which direction the light source lies in relation to the fragment. The angle $$a$$ between these two vectors gives you a measure of the "litness" of the surface. When they form an angle of 0 degrees, the vectors are perfectly aligned, and the fragment is fully illuminated. When the angle between them is 90 degrees or more, the fragment receives no light.

These facts establish two endpoints of the litness map:

You must also figure out the litness between these two endpoints. Perhaps a straight line would do. Or you could turn to the work of the physicist Johann Heinrich Lambert, who found that the cosine function gives a reasonable measure of illumination for a certain class of surfaces:

When the angle reaches 90 degrees, the litness goes negative. Only black holes should suck light out of a scene. To prevent negative litness for fragments facing away from the light source, you clamp the litness to 0:

Hence you have the following definition of litness:

$$\mathrm{litness} = \mathrm{max}(0, \cos a)$$
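The clamped cosine is easy to sketch on the CPU. This short Python illustration (not shader code) evaluates the litness at a few angles; the function name `litness` is just for this example:

```python
import math

def litness(angle_degrees):
    # Lambert's cosine, clamped so facing-away fragments get 0, not negative light.
    return max(0.0, math.cos(math.radians(angle_degrees)))

print(litness(0))    # perfectly aligned: 1.0
print(litness(60))   # partially lit: about 0.5
print(litness(90))   # grazing: about 0
print(litness(120))  # facing away: clamped to 0.0
```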

Compared to basic arithmetic, trigonometric functions are expensive to compute. Since the GPU will be processing a lot of fragments, you'd like to find a way to compute the cosine very quickly. Good news: when two vectors have unit length, the cosine of the angle between them is their dot product.
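You can check this claim numerically. In this Python sketch, two unit vectors are built 40 degrees apart, and their dot product is compared against the cosine of that angle (the helper names are illustrative):

```python
import math

def unit(angle_degrees):
    # Unit 2-vector pointing at the given angle from the x-axis.
    r = math.radians(angle_degrees)
    return (math.cos(r), math.sin(r))

def dot(p, q):
    return p[0] * q[0] + p[1] * q[1]

# Two unit vectors 40 degrees apart: the dot product and the
# cosine of the angle between them agree up to float error.
p, q = unit(75), unit(35)
print(dot(p, q))
print(math.cos(math.radians(40)))
```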

You may want to know why the dot product gives the cosine. Consider the dot product between two 2-vectors $$\mathbf{p}$$ and $$\mathbf{q}$$:

$$\mathbf{p} \cdot \mathbf{q} = p_x \times q_x + p_y \times q_y$$

Recall that vectors can be represented in polar coordinates by their radius and angle:

$$\begin{aligned} \mathbf{p} &= (p_r, p_a) \\ \mathbf{q} &= (q_r, q_a) \end{aligned}$$

The polar coordinates are turned into Cartesian coordinates using sine and cosine:

$$\begin{aligned} p_x &= p_r \cos p_a \\ p_y &= p_r \sin p_a \\ q_x &= q_r \cos q_a \\ q_y &= q_r \sin q_a \end{aligned}$$

The angle between $$\mathbf{p}$$ and $$\mathbf{q}$$ is $$p_a - q_a$$. You want the cosine of this angle to determine the fragment's shading. Starting from the dot product, you work your way back to the cosine by substituting in the values above and applying a trigonometric identity:

$$\begin{aligned} \mathbf{p} \cdot \mathbf{q} &= p_x \times q_x + p_y \times q_y \\ &= p_r \cos p_a \times q_r \cos q_a + p_r \sin p_a \times q_r \sin q_a \\ &= p_r q_r (\cos p_a \times \cos q_a + \sin p_a \times \sin q_a) \\ &= p_r q_r \cos (p_a - q_a) \end{aligned}$$

When a vector has unit length, its radius is 1, which allows you to simplify even further:

$$\begin{aligned} \mathbf{p} \cdot \mathbf{q} &= \cos (p_a - q_a) \end{aligned}$$
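The derivation above also holds for vectors of any length, not just unit vectors. This Python sketch verifies the identity $$\mathbf{p} \cdot \mathbf{q} = p_r q_r \cos (p_a - q_a)$$ for a pair of made-up radii and angles:

```python
import math

# Arbitrary (non-unit) polar vectors; the radii and angles are made-up values.
p_r, p_a = 2.0, 1.1
q_r, q_a = 3.0, 0.4

# Convert polar coordinates to Cartesian coordinates.
p = (p_r * math.cos(p_a), p_r * math.sin(p_a))
q = (q_r * math.cos(q_a), q_r * math.sin(q_a))

dot = p[0] * q[0] + p[1] * q[1]
identity = p_r * q_r * math.cos(p_a - q_a)
# dot and identity agree up to floating-point error
print(dot, identity)
```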

This simplification is why normals are expected to have unit length. Litness may now be expressed as a fast dot product:

$$\mathrm{litness} = \mathrm{max}(0, \mathrm{normal} \cdot \mathrm{lightDirection})$$

The light direction must be a vector leading from the fragment to the light source. To compute this direction, you need to know the positions of the fragment and the light source. The light source position is something you send in as a uniform or hardcode in the shader. The fragment position must come from the vertex shader.

To hardcode the light 10 units up from origin and receive the interpolated fragment position, you'd write these lines in the fragment shader:

```glsl
const vec3 lightPosition = vec3(0.0, 10.0, 0.0);
in vec3 mixPosition;
```

In `main`, you compute the light direction by subtracting the fragment position from the light position:

```glsl
void main() {
  vec3 lightDirection = normalize(lightPosition - mixPosition);
  // ...
}
```

Next you throw in the normal from the vertex shader and compute the litness:

```glsl
const vec3 lightPosition = vec3(0.0, 10.0, 0.0);

in vec3 mixPosition;
in vec3 mixNormal;

out vec4 fragmentColor;

void main() {
  vec3 lightDirection = normalize(lightPosition - mixPosition);
  vec3 normal = normalize(mixNormal);
  float litness = max(0.0, dot(normal, lightDirection));
  fragmentColor = vec4(vec3(litness), 1.0);
}
```
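To see the shader's arithmetic outside the GPU, here is a CPU sketch in Python that mirrors the fragment shader's litness computation. The vector helpers stand in for GLSL's built-in `normalize`, `dot`, and `max`; the function and parameter names are illustrative:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(p, q):
    return sum(a * b for a, b in zip(p, q))

def shade(fragment_position, normal, light_position=(0.0, 10.0, 0.0)):
    # Mirror of the fragment shader: direction from fragment to light,
    # then the clamped dot product against the (re-normalized) normal.
    light_direction = normalize(
        tuple(l - f for l, f in zip(light_position, fragment_position)))
    return max(0.0, dot(normalize(normal), light_direction))

# A fragment at the origin with an upward normal faces the light head-on:
print(shade((0.0, 0.0, 0.0), (0.0, 1.0, 0.0)))  # → 1.0
```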

The result is a grayscale shading of your surface, as seen in this torus renderer:

Lambert's model provides a reasonable approximation of how light behaves when a surface has a rough finish. You see no glossy highlights like you'd encounter on a very smooth surface. Instead, light bounces off and diffuses in all directions. Such light is called diffuse light. You will soon expand this lighting model with other terms.

If you rotate just the torus in this renderer, you should observe a problem. The shading sticks to the surface instead of adapting to the new orientation. The problem is that this renderer isn't taking the spaces of the graphics pipeline into account.