*In which you discover the many coordinate systems through which vertices and fragments pass.*

Suppose you head to the library to look for a book. You have a particular title in mind, and you ask the librarian where to find it. The librarian's answer might take one of the following forms:

- It's 35 feet above you and 56 feet to your right.
- That book's on the third floor.
- Five minutes ago, it reported its location as 38° 26' 19.4784" N and 78° 52' 20.1684" W.

The book only exists in one place, but there are many ways to identify that one place. Each of the librarian's answers locates the book relative to a different frame of reference. In the first answer, the book is located relative to you. In the second answer, to the building. In the third answer, to the globe.

So too may your 3D objects be situated in different frames of reference or spaces. As you render an object, you will situate it in a series of spaces, performing different operations in each.

The coordinate system that you use when modeling a 3D object is model space. You choose the coordinate system that will ease the modeling process. For example, you might model a character so its feet straddle the origin. Then its y-coordinates will represent a distance from the ground.

When you export the model from your modeling program, you get a file full of model space coordinates. Consider this magic carpet exported in the OBJ file format:

```
v 3 0.5 -1
v 3 0.5 1
v -3 0.5 -1
v -3 0.5 1
f 1 4 3
f 1 2 4
```

The carpet is a rectangle floating half a unit above the origin. Each `v` line describes a vertex position. Each `f` line lists the 1-based indices of a triangle. You can see that the carpet is oriented along the x-axis.
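Reading such a file takes only a few lines. Here's a sketch of a minimal loader; the name `parseObj` is hypothetical, and a real loader would also handle normals, texture coordinates, and the other OBJ record types:

```
// Parse the v and f lines of an OBJ file into positions and 0-based
// triangle indices.
function parseObj(text) {
  const positions = [];
  const indices = [];
  for (const line of text.split('\n')) {
    const tokens = line.trim().split(/\s+/);
    if (tokens[0] === 'v') {
      // A vertex position: three numbers after the v.
      positions.push(tokens.slice(1, 4).map(Number));
    } else if (tokens[0] === 'f') {
      // OBJ indices are 1-based; WebGL expects 0-based.
      indices.push(...tokens.slice(1).map(token => parseInt(token) - 1));
    }
  }
  return {positions, indices};
}
```

Feeding the carpet file above through this function yields four positions and the index list `[0, 3, 2, 0, 1, 3]`.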

Your renderers read in the model space coordinates and store them in VBOs.

Suppose you are digitally recreating the neighborhood in which you grew up. You have several tree models and want them to line the avenue, so you read them in and ship them to the GPU in VBOs. Sadly, you find that all the trees render on top of each other.

The artist who created the models rightly put each tree in a convenient model space with the origin at its roots and the trunk climbing the y-axis. It is you who must situate the trees at different locations in world space. You do this by transforming the model. For example, you might rotate one of the trees 31 degrees and then shift it along the x- and z-axes with this transformation matrix:

```
const rotater = Matrix.rotateAroundY(31);
const translater = Matrix.translate(4, 0, 7);
const worldFromModel = translater.multiplyMatrix(rotater);
```
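To see what this transformation does to a single point, here is a sketch that applies the same rotate-then-translate sequence on the CPU, with the matrix math unrolled into plain functions rather than the `Matrix` class:

```
// Rotate a point counterclockwise about the y-axis, as seen from above.
function rotateAroundY(degrees, [x, y, z]) {
  const radians = degrees * Math.PI / 180;
  const cosine = Math.cos(radians);
  const sine = Math.sin(radians);
  return [x * cosine + z * sine, y, -x * sine + z * cosine];
}

// Shift a point by a fixed offset.
function translate(dx, dy, dz, [x, y, z]) {
  return [x + dx, y + dy, z + dz];
}

// worldFromModel = translater * rotater, so the rotation is applied first.
function worldFromModel(position) {
  return translate(4, 0, 7, rotateAroundY(31, position));
}
```

The model space origin lands at `[4, 0, 7]` in world space, since rotating the origin has no effect and the translation then moves it.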

The vertex shader takes the position from model space into world space by applying this matrix:

```
uniform mat4 worldFromModel;
in vec3 position; // which is in model space

void main() {
  vec4 worldPosition = worldFromModel * vec4(position, 1.0);
  gl_Position = worldPosition;
}
```

World space is the coordinate system in which you arrange all of your models, but also non-geometric entities like light sources and the viewer.

When you start to add lighting to your renderers, you will need to consider your object's location relative to the viewer. This is done by reframing your scene in eye space, in which the viewer is positioned at the origin and all the objects' coordinates are offsets from the viewer. Eventually you will write a `Camera` class that allows you to place the viewer anywhere in the world and look in any direction. For the time being, however, you will assume that eye space and world space are the same. This means that the viewer is situated at \(\begin{bmatrix}0&0&0\end{bmatrix}\) in world space and looking down the negative z-axis.

Once you do introduce `Camera`, you will transform your objects from world space to eye space by way of another matrix. This vertex shader illustrates how a vertex goes from model space to eye space through matrix multiplication:

```
uniform mat4 worldFromModel;
uniform mat4 eyeFromWorld;
in vec3 position; // which is in model space

void main() {
  vec4 eyePosition = eyeFromWorld * worldFromModel * vec4(position, 1.0);
  gl_Position = eyePosition;
}
```

The matrices are intentionally named `eyeFromWorld` and `worldFromModel` to help you connect them in logical sequence. Additionally, the space of the resulting vector is marked by naming it `eyePosition`. You are encouraged to use these naming practices.

Many graphics developers call the product of `eyeFromWorld` and `worldFromModel` the "modelview" matrix. This term confuses the order in which the spaces are visited. A more descriptive name for the combined transformation is `eyeFromModel`.
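If you want to spare the GPU one multiplication per vertex, you might compute that product once per object on the CPU and upload it as a single uniform. Here is a sketch of 4×4 matrix multiplication, assuming matrices stored as column-major `Float32Array`s, the layout WebGL expects; the standalone function is illustrative, as your `Matrix` class would wrap this logic:

```
// Multiply two 4x4 column-major matrices: product = a * b.
function multiplyMatrix(a, b) {
  const product = new Float32Array(16);
  for (let column = 0; column < 4; column++) {
    for (let row = 0; row < 4; row++) {
      let sum = 0;
      for (let k = 0; k < 4; k++) {
        // Dot the row of a with the column of b.
        sum += a[k * 4 + row] * b[column * 4 + k];
      }
      product[column * 4 + row] = sum;
    }
  }
  return product;
}
```

With this in hand, `eyeFromModel` is `multiplyMatrix(eyeFromWorld, worldFromModel)`, read right to left just like the shader expression.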

Suppose you are designing a retro American football game, and you'd like to put all your player models between 0 and 100 on the x-axis. When you start up your application, you find that only the players at one of the endzones are visible.

The problem is that WebGL expects the world to fit in a very small box. That box ranges from -1 to 1 on the x-axis, -1 to 1 on the y-axis, and -1 to 1 on the z-axis. If you want something to be visible, it must be in this box.

You fix your problem by reframing your world in normalized space. In particular, you scale and translate the coordinates so that the chunk of the world that you want to be visible fits in the unit box. That scaling and translating adds one last matrix to your transformation pipeline in the vertex shader:

```
uniform mat4 worldFromModel;
uniform mat4 eyeFromWorld;
uniform mat4 normalizedFromEye;
in vec3 position; // which is in model space

void main() {
  vec4 normalizedPosition = normalizedFromEye *
    eyeFromWorld *
    worldFromModel *
    vec4(position, 1.0);
  gl_Position = normalizedPosition;
}
```

Normalized space is called normalized because its coordinates are confined to a standard range: every visible coordinate lies between -1 and 1, no matter the scale of your world. Each coordinate in normalized space is effectively a proportion of the screen.
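One common way to build `normalizedFromEye` is an orthographic projection, which scales and translates a box of eye space onto the [-1, 1] cube. Here is a sketch, assuming column-major `Float32Array` storage; the helper names are hypothetical:

```
// Build a column-major 4x4 matrix that maps the box
// [left,right] x [bottom,top] x [near,far] onto the [-1,1] cube.
function orthographic(left, right, bottom, top, near, far) {
  return new Float32Array([
    2 / (right - left), 0, 0, 0,
    0, 2 / (top - bottom), 0, 0,
    0, 0, -2 / (far - near), 0,
    -(right + left) / (right - left),
    -(top + bottom) / (top - bottom),
    -(far + near) / (far - near),
    1,
  ]);
}

// Apply a column-major 4x4 matrix to a point, ignoring the w-coordinate.
function transform(matrix, [x, y, z]) {
  return [
    matrix[0] * x + matrix[4] * y + matrix[8] * z + matrix[12],
    matrix[1] * x + matrix[5] * y + matrix[9] * z + matrix[13],
    matrix[2] * x + matrix[6] * y + matrix[10] * z + matrix[14],
  ];
}
```

For the football field, `orthographic(0, 100, 0, 50, -1, 1)` sends x-coordinate 0 to -1, 50 to 0, and 100 to 1, so both endzones become visible.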

After a vertex has gone from model space to world space to eye space to normalized space, it then lands on a particular pixel in the framebuffer. WebGL decides which pixel by transforming the vertex's normalized coordinates into pixel space. The transformation applies the vertex's proportional coordinates to the bounding box of the viewport that you defined with `gl.viewport`.

Unlike the spaces above, you as a developer don't actively transform your vertices into pixel space. WebGL does it for you.
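Still, you can predict where a vertex will land. Here is a sketch of the mapping WebGL performs, assuming a hypothetical viewport object holding the same `x`, `y`, `width`, and `height` values you passed to `gl.viewport`:

```
// Map normalized device coordinates in [-1, 1] to the pixel rectangle
// established by gl.viewport(x, y, width, height).
function pixelFromNormalized([ndcX, ndcY], viewport) {
  const {x, y, width, height} = viewport;
  return [
    // Shift [-1, 1] to the proportion [0, 1], then scale to pixels.
    x + (ndcX * 0.5 + 0.5) * width,
    y + (ndcY * 0.5 + 0.5) * height,
  ];
}
```

In an 800×600 viewport anchored at the origin, the normalized coordinates (0, 0) map to pixel (400, 300), the center of the screen.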