Gamma correction

Applet: Katie Dektar
Text: Marc Levoy
Technical assistance: Andrew Adams

Do your digital images look too dark when you print them? Or maybe the colors look washed out when you upload the images to a photo sharing site and view them using a web browser? If so, then you're probably suffering from a breakdown in the color management "ecosystem", in particular in its handling of the non-linear transformation of pixel values called gamma correction.

The historical reason for gamma

For most of the last half of the 20th century, television was displayed on a cathode ray tube (CRT), rather than today's liquid crystal display (LCD) or plasma panel. On a CRT the displayed luminance L is related to the voltage V fed to the electron guns (three of them in the case of a color display) by the relationship L ∝ V^γ, where γ is about 2.5. To compensate for this physical behavior of CRTs, television cameras were designed to output a voltage V that is proportional to L^(1/2.5), in other words L^0.4.
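
As a rough illustration (not part of the applet), the following Python sketch applies these two power laws to a scene luminance normalized to the range 0 to 1; the function names and the value 2.5 are ours, chosen only for illustration.

```python
# Sketch (not from the applet): the CRT's power law and the camera's
# compensating power law, applied to luminances normalized to [0, 1].

CRT_GAMMA = 2.5  # typical CRT exponent cited above

def camera_encode(scene_luminance, gamma=CRT_GAMMA):
    """Camera output voltage, proportional to L^(1/gamma) (here L^0.4)."""
    return scene_luminance ** (1.0 / gamma)

def crt_display(voltage, gamma=CRT_GAMMA):
    """Luminance emitted by the CRT, proportional to V^gamma."""
    return voltage ** gamma

L = 0.25                      # a mid-dark scene luminance (relative units)
V = camera_encode(L)          # ~0.574
print(crt_display(V))         # ~0.25: the two power laws cancel
```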

Implicit in this compensation is the assumption that the luminance emanating from the CRT should match the luminance emanating from the original scene. In practice the original scene may be too bright to reproduce. Fortunately, the human visual system responds to relative brightnesses, not absolute brightness. Thus, it suffices to present scaled-down luminances on a CRT. We represent this scaling by using the proportional-to symbol ("∝") in our formulas.

The Zen of system gamma

The applet above represents the flow of imagery from an original scene to a displayed image. Start by clicking on "NTSC television". In this configuration display gamma γ_disp is greater than unity, leading to an upwards-concave plot of output luminance versus input voltage, and camera gamma γ_cam is less than unity, leading to a downwards-concave plot. (OS correction is effectively disabled by setting γ_corr to 1.0, because in the days of analog television there was no operating system involved.) But while the display gamma shown is 2.5 as explained earlier, camera gamma is 0.5, not 0.4. Why is this?

One of the fundamental results of human psychophysics is that subjective brightness B is non-linearly related to luminance L entering the eye. More specifically, if the scene is bright, and the eye is bright-adapted, then this relationship is roughly B ∝ L^0.5. Similar relationships were derived in the 1800s by psychophysicists Weber and Fechner, who found a roughly logarithmic relationship. However, if you watch television while sitting in a dark living room, then your eye is dark-adapted. In this case, subjective brightness follows the relationship B ∝ L^0.4 or thereabouts. Thus, if the camera exactly compensated for the gamma of the CRT, your perception of the scene would not match (up to a linear scaling) what you would see if you were looking at the original bright scene.

To compensate for this difference in adaptation, the National Television Standards Committee (NTSC) recommended setting the gamma of the camera to 0.5. This produces B ∝ L^(0.5 × 2.5 × 0.4), where the three exponents (in sequence) represent the camera, the display, and the dark-adapted eye. Performing the indicated multiplications yields B ∝ L^0.5, which matches what you would see looking directly at the bright scene. The product of the first two terms, 0.5 × 2.5 = 1.25, is called system gamma. As display screens get brighter, and for computer screens used in brightly lit office environments, this special correction is omitted, i.e. system gamma is set to 1.0.
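
Since applying one power law after another just multiplies the exponents ((L^a)^b = L^(ab)), this bookkeeping is easy to check in a couple of lines of Python; the function names below are ours, chosen only for illustration.

```python
# Sketch: composing the stages of the NTSC pipeline just multiplies their
# exponents, since (L**a)**b == L**(a*b).

def system_gamma(camera, display):
    """Net exponent from scene luminance to displayed luminance."""
    return camera * display

def perceived_exponent(camera, display, eye):
    """Net exponent from scene luminance to subjective brightness."""
    return camera * display * eye

print(system_gamma(0.5, 2.5))             # 1.25, the NTSC system gamma
print(perceived_exponent(0.5, 2.5, 0.4))  # 0.5, matching direct viewing of the bright scene
```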

The computer graphics pipeline

Click on "Silicon Graphics workstation". This represents a (slightly old-school) computer graphics (CG) pipeline. When rendering a 3D computer model to generate a digital image, most CG algorithms assume that pixels in the frame buffer represent (or are proportional to) the luminance of the synthetic scene. In the applet we represent this situation by setting camera gamma to 1.0. In this case we must correct explicitly for the CRT's natural gamma. This correction, performed by the IRIX operating system in the case of Silicon Graphics (SGI) workstations, was implemented using a hardware lookup table called a colormap. Note that SGI used a high system gamma (2.41). After all, computer graphics, like astronomy, is best practiced at night (or at least in a dark room)!

The digital photography pipeline

Now click on "sRGB digital camera". (We're skipping over "AppleRGB", which is seldom used anymore.) Let's first review how a digital camera works. The electron charge accumulated at each pixel site in a CCD or CMOS image sensor is proportional to the number of photons striking it, scaled by the quantum efficiency of the device, which is wavelength-dependent. In sensors with R,G,B color filter arrays, the filters are designed to pass photons of each visible wavelength in the same proportion as the human visual system's rho, gamma, and beta receptor systems, respectively. In other words, the transmissivities of these filters are designed to match the sensitivities of the three types of cones in our retina. (See our first color applet for a more detailed description of human color vision.) Some cameras allow you to store the resulting R, G, and B pixel values directly, as a RAW image file. If we add together the R, G, and B pixel values produced by an adjacent set of pixels from this file, the sum should be proportional to the sum of our rho, gamma, and beta responses to the same stimulus, i.e. to our total response. We call this total response luminance, and we refer to such human-centric measures of light as photometric rather than radiometric.
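
A minimal sketch of this summation, assuming the R, G, and B values have already been demosaicked to one triplet per pixel and remain linear (no channel weighting, which a real pipeline would include):

```python
# Sketch: summing linear (RAW-style) R, G, and B values at a pixel to get a
# number proportional to luminance, following the "add them together"
# description above. A production pipeline would weight the channels.

def relative_luminance(r, g, b):
    """r, g, b are linear sensor values, not gamma-encoded ones."""
    return r + g + b

# Two pixels receiving the same total response in different spectral mixes:
print(relative_luminance(0.2, 0.5, 0.1))   # ~0.8
print(relative_luminance(0.1, 0.3, 0.4))   # ~0.8
```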

RAW files are big, partly because each R, G, and B value is typically represented as a 16-bit number. To save space, image compression formats such as JPEG were invented. One of the ways JPEG saves space is to reduce the number of bits, for example to 8 bits for each of three components. To avoid quantization artifacts such as contouring, this reduction in bit depth must be done carefully. In particular, since the human visual system is non-linearly sensitive to luminance, it makes sense to transform RGB coordinates into a space in which coordinates are proportional to subjective brightness, then to re-quantize them to fewer bits in that space. One such space, widely used in modern digital cameras, is sRGB. In this space stored luminance is made proportional to sensed luminance raised to the power 1/2.2. (To be more specific, JPEG calls for a conversion from R, G, and B to three new coordinates called Y', Cb, and Cr, where Y' represents sensed luminance^(1/2.2). However, a full description of these conversions is beyond the scope of this applet.) We represent this gamma transformation in the applet above by setting camera gamma to 1/2.2 = 0.45. It is interesting to note that while this gamma is similar to those used in the cameras of analog television systems, the reason is entirely different: optimal quantization rather than compensation for the physics of a cathode ray tube.
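
The benefit of quantizing after the gamma transform can be seen in a few lines of Python. The sketch below uses a plain 1/2.2 power law (the real sRGB curve adds a short linear segment near black, which we ignore here) and compares 8-bit quantization of very dark values in linear space versus gamma space:

```python
# Sketch: quantize a linear sensor value to 8 bits directly vs. after an
# sRGB-like 1/2.2 gamma transform. (The real sRGB curve is piecewise, with
# a linear segment near zero; we use a plain power law for simplicity.)

GAMMA = 2.2

def quantize_linear(l):
    return round(l * 255)

def quantize_gamma(l):
    return round((l ** (1.0 / GAMMA)) * 255)

for l in (0.001, 0.002, 0.004):          # three very dark luminances
    print(quantize_linear(l), quantize_gamma(l))
# Linearly they all collapse to 0 or 1, but the gamma-encoded values stay
# distinguishable (roughly 11, 15, and 21), which suppresses contouring.
```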

To finish up this pipeline, we assume that the JPEG file is being displayed on a liquid crystal display (LCD) rather than a CRT. LCDs have a gamma of 1.0. We also assume the Macintosh operating system (before Snow Leopard). In this case the system gamma was chosen (strangely enough) to be 0.82, leading to a gamma correction in the operating system of 1.80, as shown in the applet.
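
As a quick consistency check, the stated numbers multiply out under the same exponent-multiplication rule used earlier (a trivial sketch, with names of our choosing):

```python
# Sketch: the pre-Snow-Leopard Macintosh pipeline, stage exponents multiplied.
camera, os_correction, lcd = 1 / 2.2, 1.80, 1.0
print(camera * os_correction * lcd)   # ~0.82, the assumed system gamma
```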

The Great Darkening

To many people the colors on Macintoshes have always looked washed out. Partly for this reason, and partly because Windows computers assume a system gamma of 1.0, and they currently dominate the world, Apple decided with Snow Leopard (Mac OS 10.6) to change their assumed system gamma to 1.0, and hence their OS correction from 1.8 to 2.2. Since this change caused images created under previous Mac OS's to look darker (click on the "Snow Leopard" button in the applet), despairing Apple engineers privately call this switchover "The Great Darkening".
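
The darkening itself is easy to quantify: a pixel tuned to look right under the old 1.8 correction picks up, in effect, an extra exponent of 2.2/1.8 ≈ 1.22 when shown through the new 2.2 pipeline, which pushes midtones down. A rough sketch (the values and names are ours):

```python
# Sketch: why images prepared for the old gamma-1.8 Mac pipeline look darker
# when displayed through the new gamma-2.2 pipeline.

OLD, NEW = 1.8, 2.2

def displayed_luminance(intended, old=OLD, new=NEW):
    """Luminance actually shown for a pixel meant to display as 'intended'
    (both normalized to 0..1) on the old pipeline."""
    return intended ** (new / old)

print(displayed_luminance(0.5))   # ~0.43: a midtone intended as 0.5 appears darker
```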

As Macintosh users gradually upgrade to Snow Leopard, photographs captured using an sRGB camera will look the same whether displayed on Macintoshes or Windows PCs. In this new world, the RGB values in any image file that contains no ICC color profile are assumed to be in sRGB space, whose gamma is 1/2.2. Although all modern cameras output files with ICC profiles, some photo sharing sites strip out these profiles. (Hopefully, this silliness will stop soon.) Conversely, if the file contains an ICC color profile but the software reading it is not color managed, i.e. it ignores the color profile, then the graphics cards in both Macintoshes and PCs now institute a default gamma correction of 2.2, and the images will look fine. Examples of "unmanaged" software systems are Windows operating systems before Vista and Firefox browsers before version 3.5. Thus, as users upgrade to the latest operating systems and web browsers, there will be fewer instances of your images looking too dark or washed out because of gamma correction problems.


© 2010 Marc Levoy
Last update: March 1, 2012 12:59:45 AM