The Fourth Wall

In which you learn how the physical world communicates with the virtual world.

Your 3D scenes are an aquarium teeming with colorful creatures that streak through space. Now and again, a user comes along and taps on the glass, expecting a response. As a graphics developer, you must figure out what the user wants and maybe which creature is supposed to respond. This is an interesting problem. The tap happens on a flat 2D surface, but the world behind the tap is 3D.

Many users tap with a mouse. In the early days of JavaScript running in a web browser, you listened for mouse events by registering callbacks, like this:

window.addEventListener('mousedown', event => {
  // handle down events
});

window.addEventListener('mouseup', event => {
  // handle up events
});

But then devices without mice appeared, like phones and tablets. Users interact with these devices using their fingers or a stylus. Browsers got some new event types to support touch screens:

window.addEventListener('touchstart', event => {
  // handle down events
});

window.addEventListener('touchend', event => {
  // handle up events
});

If you were writing an application and wanted to support both mice and touch screens, you registered similar listeners for both event families. Browser developers eventually eliminated this redundancy by introducing a third, unifying event type:

window.addEventListener('pointerdown', event => {
  // handle down events
});

window.addEventListener('pointerup', event => {
  // handle up events
});

You should generally favor the pointer event family if you want your renderers to work on both desktop computers and mobile devices.
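Pointer events also tell you which kind of device produced them through the event's pointerType property, so a single listener can still react differently to a mouse, a finger, or a stylus when it needs to. Here's a minimal sketch; the describePointer helper is a hypothetical name, not a browser API:

```javascript
// Turn a pointer event into a human-readable description of its source.
// pointerType is "mouse", "touch", or "pen" in the Pointer Events spec.
function describePointer(event) {
  switch (event.pointerType) {
    case 'mouse': return `mouse button ${event.button}`;
    case 'touch': return 'finger';
    case 'pen': return 'stylus';
    default: return 'unknown pointer';
  }
}

// Register once; the browser routes mouse, touch, and pen input here.
// The guard lets this sketch load outside a browser without throwing.
if (typeof window !== 'undefined') {
  window.addEventListener('pointerdown', event => {
    console.log(`${describePointer(event)} down at (${event.clientX}, ${event.clientY})`);
  });
}
```

One listener now covers every input device, and you only branch on pointerType in the rare cases where the device actually matters.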

Knowing that the user has tapped on the glass is an important first step, but what do you do with that tap? Read on to learn about several different ways that graphics developers interpret these taps to manipulate the 3D scene.