Dynamic Range is a measure of how much “light” is captured by the digital sensor and the resulting image. It’s the range from the darkest black to the lightest white; on an 8-bit scale, 0 is darkest and 255 is brightest. If the actual scene has a greater range of light intensity, then some parts of the image will be “clipped” to pure black (0), pure white (255), or both. Enough of the techno-babble, though there’s a whole lot of interesting technical stuff related to this, often mixed into the discussion of high dynamic range (HDR) processing.
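To make that clipping idea concrete, here’s a tiny illustrative sketch in Python. The scene values are made up for demonstration; the point is just that anything outside the 8-bit range gets forced to pure black or pure white.

```python
def clip_to_8bit(value):
    """Clamp a raw luminance value to the 0-255 range of an 8-bit image."""
    return max(0, min(255, round(value)))

# A scene whose brightness range exceeds what 8 bits can hold.
# Negative and >255 values stand in for detail the sensor couldn't keep.
scene = [-40, 0, 128, 255, 310]
image = [clip_to_8bit(v) for v in scene]
print(image)  # -> [0, 0, 128, 255, 255]: out-of-range detail is lost
```

Once a value is clipped, the detail it carried is gone for good, which is why exposure choices matter so much.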
The sensor’s dynamic range is a physical limitation. Each photosite on the sensor can only “hold” so much light (so many photons) before it becomes full, so you can “expose for the shadows” and risk clipping the highlights, or expose for the highlights and lose detail in the shadows. (More on this at E is for Exposure.) The luminance range of the scene can be measured in units called EV (exposure values), where each EV step doubles the amount of light. The human eye is believed to have about a 10 to 14 EV range, whereas many camera sensors have an 8 to 10 EV range. The newest Sony sensors in Nikon and Sony cameras might have a 14 EV range, and seem to show a much greater dynamic range than any Canon sensor. (Check out this Sony A7R review with its many comparisons of shadow detail between Sony and Canon.) Whether this is essential, or critical, or even important, is another matter, and the subject of much debate, especially considering that most amateur photography is expressive rather than documentary.
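Since each EV doubles the light, an N-EV range corresponds to a contrast ratio of 2 to the power N. A quick sketch using the rough figures mentioned above:

```python
def contrast_ratio(ev_range):
    """Each EV (stop) doubles the light, so N EV spans a 2**N : 1 ratio."""
    return 2 ** ev_range

print(contrast_ratio(10))  # a 10 EV sensor: 1024:1
print(contrast_ratio(14))  # a 14 EV sensor or the eye's upper estimate: 16384:1
```

That gap between 1,024:1 and 16,384:1 is why a scene that looks fine to the eye can still overwhelm a sensor.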
Recently, interest in high dynamic range (HDR) processing has increased, and some cameras even come with HDR built in. Essentially it’s a simple process: make multiple images at various exposures, some under-exposed and some over-exposed (relative to what the camera determines is the normal exposure), then merge them in software to capture the scene’s wider range of luminance in one image. I like to think this is similar to the process Ansel Adams describes in chapter four of his book “The Negative.” He called it the Zone System, and it relates subject luminance range to the scale of grays (white to black) on the final print. For Adams, the method by which the visualization of the final print was realized from the exposed negative was both a technical and a creative/artistic process. Although Adams worked with film, paper, and chemicals, the idea that the scene has some dynamic range, and that one of the photographer’s choices is how to represent that dynamic range in the final image, remains relevant today.
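The merging step can be sketched very simply. This is a toy illustration, not any specific HDR software’s algorithm: each bracketed frame contributes most where its pixels are well exposed (near mid-gray) and least where it is clipped near 0 or 255.

```python
def weight(pixel):
    """Higher weight for well-exposed pixels, near zero at the extremes."""
    return 1.0 - abs(pixel - 127.5) / 127.5 + 1e-6  # epsilon avoids divide-by-zero

def merge_exposures(exposures):
    """Merge aligned single-channel images (lists of 0-255 values)."""
    merged = []
    for pixels in zip(*exposures):
        total_w = sum(weight(p) for p in pixels)
        merged.append(sum(p * weight(p) for p in pixels) / total_w)
    return merged

under  = [0, 10, 120]    # dark frame: holds highlight detail
normal = [5, 128, 250]
over   = [60, 245, 255]  # bright frame: holds shadow detail
print([round(v) for v in merge_exposures([under, normal, over])])
```

Real HDR tools (exposure fusion, tone mapping) are far more sophisticated, but the core idea is the same: let each frame speak where it has usable detail.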
If the camera sensor has a greater capacity to record detail in both shadows and highlights, that’s a boon! But even without that super sensor, there are many ways to achieve a particular artistic outcome, including giving up shadow detail for more highlight detail, making multiple exposures and processing them with HDR software, or using other software manipulations. (Photoshop has a “shadow and highlight” tool that can expand or compress these areas of the image after the fact, even though it sometimes makes the image look flat.) A photographer can learn how the camera sensor “reads” light, and experiment to develop processes that achieve the desired image.
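As a final sketch, here’s one simple way such a shadow-lifting adjustment can work (this is a generic gamma curve, not Photoshop’s actual algorithm): a gamma below 1 brightens the shadows while leaving pure black and pure white anchored in place.

```python
def lift_shadows(pixel, gamma=0.7):
    """Map an 8-bit value through a gamma curve; gamma < 1 brightens shadows."""
    return round(255 * (pixel / 255) ** gamma)

print(lift_shadows(0))    # black stays black: 0
print(lift_shadows(30))   # a deep shadow gets lifted noticeably
print(lift_shadows(255))  # white stays white: 255
```

Because the curve raises the midtones more than the extremes, overdoing it compresses the tonal range, which is exactly the “flat” look the tool can produce.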