Dynamic Range And Images

What is dynamic range and why is it important in imagery?

Dynamic range is a measure of the ability to record both large and small changes simultaneously. An image with high dynamic range can represent small and large changes at the same time. To keep the discussion simple and clear, we will restrict it to gray-scale images.

If an image has a 1-bit dynamic range, then each pixel is represented by a single bit, which can be 0 or 1. As you can imagine, such an image will not look very good. As the number of bits per pixel increases, the number of gray-scale shades that can be represented increases as well. Most monitors today have an 8-bit dynamic range, meaning they can display 2^8 = 256 shades of gray.
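As a quick illustration, here is a minimal sketch (using numpy; the array name img and the helper requantize are just names invented for this example) that keeps only the top few bits of an 8-bit gray-scale image:

    import numpy as np

    def requantize(img, bits):
        # Keep only the top 'bits' bits of each 8-bit pixel, leaving
        # 2**bits distinct gray levels.
        shift = 8 - bits
        return ((img >> shift) << shift).astype(np.uint8)

    # requantize(img, 1) leaves only 2 gray levels;
    # requantize(img, 8) leaves the image unchanged.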

Why would we ever need or want more than 8 bits of dynamic range, when most of today’s monitors will not display more than 8 bits?

The answer is: when we want to capture both small and large changes. An ideal sensor responds linearly to light: the number it produces doubles when the amount of light hitting the sensor doubles, and so on. Now imagine a picture of a tree on a very bright sunny day, with the tree's shadow and the bright light outside the shadow. A high dynamic range image captures the fine details within the bright and dark regions in its least significant bits, and it captures the large change between the shadow and the bright background in its most significant bits. Naturally, the next question becomes:

How do we map this higher dynamic range image, which contains more than 8 bits per pixel, into the 8-bit dynamic range of our monitors?

The answer is tone mapping. This can be a global function that produces an output, y, based solely on the value of the input pixel, x:

y = g(x)

Or it can be a locally adaptive function that takes into consideration not only the input value, but also its location within the image, and in particular the values of the neighboring pixels:

y = f(x,position) = f(x, neighboring values of x)

The locally adaptive functions tend to be more meaningful than the global mapping functions. Let's assume that we want to map a 16-bit dynamic range image into the 8-bit dynamic range of the standard monitor. A global function g(x) that will perform this mapping is a right shift by 8 bits (which is equivalent to a division by 256):

y = g(x) = x>>8 = x/256
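In code, this global mapping is a one-liner. A minimal sketch, assuming a 16-bit gray-scale image stored in a numpy array named hdr:

    import numpy as np

    def global_tone_map(hdr):
        # Right shift by 8 bits (integer division by 256): 16-bit in, 8-bit out.
        return (hdr >> 8).astype(np.uint8)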

Under the global mapping, g(x), all of the local details that were captured in the least significant bits are gone; the division wipes them out. A more meaningful approach is to first extract the local details and map them independently of the coarse details. To that end, let us define the local average as:

average(x) = (sum of all neighboring pixels) divided by (number of pixels)
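A sketch of this local average, again assuming numpy; the window size and the edge padding are arbitrary choices made for illustration:

    import numpy as np

    def local_average(img, size=3):
        # Mean of the size-by-size neighborhood around each pixel (a box filter),
        # computed by summing shifted copies of an edge-padded image.
        img = img.astype(np.float64)
        pad = size // 2
        padded = np.pad(img, pad, mode='edge')
        out = np.zeros_like(img)
        for dy in range(size):
            for dx in range(size):
                out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (size * size)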

Then, the coarse details and the fine details at x are defined as:

coarse(x) = average(x)
fine(x) = x - average(x)

(Notice that these would be the Haar wavelet coefficients if the window size is 2×2.) The fine(x) values should have mostly zeros in their most significant bits, since their values are small. One locally adaptive mapping would be:

y = coarse(x)>>8 + fine(x)

or, if the details are too strong, fine(x) can be divided (reduced) by some constant.
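Putting the pieces together, here is a sketch of this locally adaptive mapping for a 16-bit image, reusing the local_average helper above; the detail_gain parameter plays the role of that constant (1.0 keeps the details at full strength, smaller values reduce them):

    import numpy as np

    def local_tone_map(hdr, detail_gain=1.0):
        coarse = local_average(hdr)                 # slowly varying component
        fine = hdr.astype(np.float64) - coarse      # local detail around each pixel
        y = coarse / 256.0 + detail_gain * fine     # compress coarse, keep fine
        return np.clip(y, 0, 255).astype(np.uint8)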

It's immediately clear from this discussion that wavelets are good candidates for such tone maps. The coarse and fine wavelet coefficients can be mapped independently from the high dynamic range to the lower dynamic range, preserving more energy in the fine wavelet coefficients. The inverse wavelet transform of the range-compressed coefficients then returns a lower dynamic range image that captures both coarse and fine details.
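As one possible sketch of this wavelet approach (assuming the PyWavelets package and a 16-bit image hdr; the number of levels and the detail_gain value are illustrative choices, not part of the discussion above):

    import numpy as np
    import pywt

    def wavelet_tone_map(hdr, levels=4, detail_gain=1.0):
        # Haar decomposition: coeffs[0] is the coarse approximation, the rest
        # are (horizontal, vertical, diagonal) detail bands per level.
        coeffs = pywt.wavedec2(hdr.astype(np.float64), 'haar', level=levels)
        coeffs[0] = coeffs[0] / 256.0               # compress only the coarse band
        coeffs[1:] = [tuple(detail_gain * band for band in detail)
                      for detail in coeffs[1:]]     # preserve (or scale) the fine bands
        y = pywt.waverec2(coeffs, 'haar')           # inverse transform back to an image
        return np.clip(y, 0, 255).astype(np.uint8)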

– Darian Muresan
