Shadows are an important visual cue that helps humans perceive proximity and shape. When there are no shadows, the visual system cannot form an accurate mental model of how a scene is arranged. Watch this old video on some psychology experiments to observe how shadows influence your perception.
Your renderers do not have shadows. That's because when you calculate the lighting terms for a fragment, you only consider a few inputs:
Nowhere in this list are the other objects in the scene. Yet other objects are what cause shadows: they block, or occlude, light from reaching the fragment. If you want shadows, you must find a way to determine a fragment's occluders.
Your intuition may tell you to cast a ray from the fragment to the light source and see if it hits any occluders. That's a great idea that is used in a different approach to 3D rendering called raytracing. Sadly, casting rays doesn't really fit the WebGL model, which processes scenes one shape at a time. It also requires many ray intersection tests, which are expensive to run per fragment.
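To see why per-fragment ray casting is expensive, consider a rough CPU sketch of the idea, with spheres standing in for occluders. The function names and the sphere representation here are illustrative, not anything WebGL provides; the point is that every fragment must intersect a ray against every occluder.

```javascript
// Hypothetical sketch: decide whether a fragment is shadowed by casting a ray
// from the fragment toward the light and testing it against every occluder.
// Spheres stand in for arbitrary scene geometry.

// Small vector helpers.
const subtract = (a, b) => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const length = a => Math.sqrt(dot(a, a));
const normalize = a => {
  const l = length(a);
  return [a[0] / l, a[1] / l, a[2] / l];
};

// Returns the smallest non-negative t where origin + t * dir hits the sphere,
// or null if the ray misses. Solves |origin + t * dir - center|^2 = radius^2.
function raySphere(origin, dir, sphere) {
  const oc = subtract(origin, sphere.center);
  const b = dot(oc, dir);
  const c = dot(oc, oc) - sphere.radius * sphere.radius;
  const disc = b * b - c;
  if (disc < 0) return null;
  const t = -b - Math.sqrt(disc);
  return t >= 0 ? t : null;
}

// A fragment is occluded if any sphere blocks the ray before it reaches the
// light. Note the loop: this test runs against EVERY occluder, for EVERY
// fragment, which is what makes the approach so costly.
function isOccluded(fragmentPosition, lightPosition, spheres) {
  const toLight = subtract(lightPosition, fragmentPosition);
  const dir = normalize(toLight);
  const lightDistance = length(toLight);
  for (const sphere of spheres) {
    const t = raySphere(fragmentPosition, dir, sphere);
    if (t !== null && t < lightDistance) return true;
  }
  return false;
}
```

Even this toy version does work proportional to the number of occluders for each fragment, and real scenes have triangles rather than a handful of spheres. Shadow mapping, described next, sidesteps the per-fragment intersection loop entirely.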
Lance Williams, the inventor of mipmapping, also devised a faster algorithm for computing shadows called shadow mapping. By now you've probably figured out that the word "mapping" means throwing information into textures. True to form, the shadow mapping algorithm stores information about occluders in a texture.
The complete shadow mapping algorithm follows these steps:
Read on to learn about each of these steps in detail.
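Before digging into the details, the core comparison at the heart of the algorithm can be sketched on the CPU. This is a simplified model, not WebGL code: in a real renderer the first pass renders depths into a texture from the light's point of view, and the second pass samples that texture in the fragment shader. The names `buildShadowMap`, `isLit`, and the flat texel array are all illustrative assumptions.

```javascript
// Hypothetical sketch of the two passes of shadow mapping, with the shadow
// map modeled as a flat array of depths indexed by texel.

// Pass 1: from the light's point of view, record for each texel the depth of
// the surface nearest the light. Each fragment carries the texel it projects
// to and its distance from the light.
function buildShadowMap(fragments, size) {
  const map = new Array(size).fill(Infinity);
  for (const f of fragments) {
    if (f.lightDepth < map[f.texel]) map[f.texel] = f.lightDepth;
  }
  return map;
}

// Pass 2: from the camera's point of view, a fragment is lit only if nothing
// nearer the light occupies its texel. The small bias guards against a
// fragment shadowing itself due to limited depth precision.
function isLit(fragment, shadowMap, bias = 0.005) {
  return fragment.lightDepth <= shadowMap[fragment.texel] + bias;
}
```

The key trick is that occlusion is resolved with a single texture lookup and one comparison per fragment, instead of a loop over every object in the scene.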