Billboarding

Compare this first-person terrain walker with a stroll you've taken through a park:

What's different about the two walks? Probably a few things, but the right answer is plants. The physical world has plants, while this renderer is barren. Virtual worlds need plants. They feed and shelter living things. They block erosion. They regulate the environment. Human eyes are sensitive to more shades of green than any other color. You simply must put plants in your renderers.

Constructing plant meshes in a modeling tool is onerous. Plants are small, intricate, and flat. Populating an entire field with 3D flora adds a lot of vertices to a scene. For these reasons, many game developers prefer to render plants as simple quadrilaterals textured with flat images like this one:

Plant texture

These images usually have an alpha channel, which means that you can use alpha mapping to discard the fragments of the quadrilaterals that fall on the background.
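
If you need a refresher on alpha mapping, the key move happens in the fragment shader: any fragment whose alpha falls below a threshold is discarded. Here's a minimal sketch; the sampler and output names are placeholders, not names from this renderer:

uniform sampler2D plantTexture;
in vec2 mixTexPosition;
out vec4 fragmentColor;

void main() {
  vec4 rgba = texture(plantTexture, mixTexPosition);

  // Throw away fragments that land on the transparent background.
  if (rgba.a < 0.5) {
    discard;
  }

  fragmentColor = rgba;
}

The 0.5 threshold is a judgment call; tighten or loosen it to suit your textures.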

In the renderer below, 1000 quadrilaterals have been randomly positioned around the terrain. Walk around and look for some disappointing behavior:

The disappointment occurs when you look at a plant quadrilateral from its side. The plant effectively disappears. It is, after all, only a quadrilateral. But objects in a scene shouldn't disappear just because you're looking at them from the wrong angle.

To cure the disappointment, you must orient the quadrilaterals so they are always facing the viewer. Embedding 2D shapes in a 3D world so that they are always facing the viewer is called billboarding. The name is a nod to the billboards you see along a highway, which are turned to face traffic.

Orienting the quadrilaterals so they face the camera could be done by applying a rotation matrix, but you'd need a different matrix for each one. A cheaper solution is to send each quadrilateral's four vertices condensed down to a single location at the billboard's base. Then in the vertex shader, you expand the vertex positions horizontally and vertically along the viewer's right and up vectors. Step through this breakdown of the expansion to see how it's done:
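
In summary, every vertex of a billboard arrives in the vertex shader at the same base position \(\mathbf{p}\). The shader then pushes each vertex out to its corner by computing \(\mathbf{p} + f_x \cdot \mathit{right} + f_y \cdot \mathit{up}\), where \((f_x, f_y)\) is \((-1, 0)\), \((1, 0)\), \((-1, 2)\), or \((1, 2)\) depending on the corner. As you'll see below, these factors are derived from the texture coordinates, which are the only attribute that distinguishes the four vertices.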

In the following pseudocode, 1000 plants are randomly scattered around a terrain. Their x- and z-coordinates are chosen at random, and their y-coordinates are determined by blerping within the terrain. The positions and texture coordinates are generated as described above.

n = 1000
positions = []
texPositions = []
indices = []

for i to n
  // Compute the base position p.
  x = random in [0, terrain.width)
  z = random in [0, terrain.depth)
  y = blerp(x, z) in terrain

  // Make all four vertex positions be p.
  positions.push(x, y, z)
  positions.push(x, y, z)
  positions.push(x, y, z)
  positions.push(x, y, z)

  // Give each corner its own texture coordinates.
  texPositions.push(0, 0)
  texPositions.push(1, 0)
  texPositions.push(0, 1)
  texPositions.push(1, 1)

  // Two counterclockwise triangles per quadrilateral.
  indices.push(i * 4, i * 4 + 1, i * 4 + 3)
  indices.push(i * 4, i * 4 + 3, i * 4 + 2)

In the vertex shader, the texture coordinates are range-mapped and used to expand the incoming position out to the desired corner. This movement happens along the viewer's right vector and up vector in world space, which must be sent in as uniforms:

uniform mat4 eyeFromModel;
uniform mat4 clipFromEye;
uniform vec3 worldRight;
uniform vec3 worldUp;

in vec3 position;
in vec2 texPosition;
out vec2 mixTexPosition;

void main() {
  // Range-map the texture coordinates into expansion factors: [0, 1]
  // becomes [-1, 1] horizontally and [0, 2] vertically.
  vec2 factors = vec2(texPosition.x * 2.0 - 1.0, texPosition.y * 2.0);

  // Push the shared base position out to this vertex's corner.
  vec3 expandedPosition = position + factors.x * worldRight + factors.y * worldUp;

  gl_Position = clipFromEye * eyeFromModel * vec4(expandedPosition, 1.0);
  mixTexPosition = texPosition;
}

Where do you get these vectors? You probably already have them. The viewer's right vector is part of your camera state. The up vector is probably the world's vertical axis \(\begin{bmatrix}0&1&0\end{bmatrix}\), which keeps the plants standing upright instead of tilting with the camera.
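
If your camera doesn't hand you the right vector directly, you can recover it from your eye-from-world matrix: the rows of its upper-left 3×3 are the camera's axes expressed in world coordinates, assuming that 3×3 is a pure rotation. Here's a sketch of the recovery in GLSL, with eyeFromWorld as an assumed uniform name; you could just as well do the same arithmetic on the CPU before setting the worldRight uniform:

uniform mat4 eyeFromWorld;  // assumed uniform, not part of the shader above

void main() {
  // GLSL indexes matrices column-first, so row 0 of the rotation is
  // gathered entry by entry. That row is the camera's right vector
  // expressed in world space.
  vec3 right = vec3(eyeFromWorld[0][0], eyeFromWorld[1][0], eyeFromWorld[2][0]);

  // ... expand the billboard along right as before ...
}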

This renderer uses billboarding to keep the plant quadrilaterals always visible:

If you look closely as you turn the camera, you can see the plants rotating to follow you. However, the rotation is less jarring than a disappearing plant.