Archived posting to the Leica Users Group, 2006/11/15


Subject: [Leica] On camera gamuts and the M8
From: davison_m at msn.com (MARK DAVISON)
Date: Wed Nov 15 10:39:14 2006

I have been reading through the technical information at www.color.org (the 
home page of the International Color Consortium, which sets the standards for 
profiles used in color management), describing the contents of camera 
profiles, and I have been inspecting some camera profiles with the ICC 
Profile Inspector available at http://www.color.org/profileview2.html.

My conclusion is that you cannot tell anything about how a camera responds 
to the spectrum of light from looking at a "camera gamut" which is derived 
from a camera profile. For a graphical example of such a "camera gamut" look 
at:

http://www.luminous-landscape.com/reviews/cameras/leica-m8.shtml

(See Figure 1 on that page.)  This diagram shows a "camera gamut" for the 
Leica M8 which extends outside of the gamut of visible colors plotted in 
CIE Lab space.

Here's the problem with camera gamuts.  A profile for the camera describes a 
function which maps camera device R,G, B values (the ones you would 
encounter in a linear RAW file) to CIE Lab values. The profile can define 
this mapping in (roughly) two ways.  The first way is by giving the 
entries of a 3 x 3 matrix which you would multiply by the column vector 
of (R, G, B) values to get CIE X, Y, Z values (which can then easily be 
converted to CIE Lab). The second method is by giving a color lookup 
table where you can simply take a device R, G, B triple and look up the 
closest L, a, b output.
(This is a slight oversimplification:  the standard allows the profile 
to specify separate non-linear functions to be applied before and after 
the matrix transform or the color lookup table transform.)
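
To make the matrix flavor concrete, here is a minimal Python sketch of 
such a mapping.  The 3 x 3 entries are made up for illustration (they 
are not taken from any real profile, M8 or otherwise); the XYZ-to-Lab 
formulas and the D65 white point are the standard CIE ones.

    import numpy as np

    # Illustrative (made-up) 3 x 3 profile matrix mapping linear
    # camera R, G, B to CIE X, Y, Z.  A real profile would also carry
    # per-channel tone curves applied before this multiply.
    M = np.array([[0.60, 0.25, 0.10],
                  [0.30, 0.65, 0.05],
                  [0.05, 0.10, 0.90]])

    # D65 reference white, used by the XYZ -> Lab formulas.
    WHITE = np.array([0.95047, 1.00000, 1.08883])

    def f(t):
        # CIE Lab nonlinearity: cube root above a small-t cutoff,
        # linear segment below it.
        d = 6.0 / 29.0
        return np.where(t > d**3, np.cbrt(t), t / (3 * d**2) + 4.0 / 29.0)

    def camera_rgb_to_lab(rgb):
        xyz = M @ np.asarray(rgb)       # the matrix half of the profile
        fx, fy, fz = f(xyz / WHITE)     # normalize to the white point
        return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

    print(camera_rgb_to_lab([0.5, 0.5, 0.5]))  # a neutral-ish gray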

Now here is the conceptual difficulty:  it is straightforward to write a 
piece of software which examines a profile and determines the range of 
output values in CIE Lab space which will be attained as the input varies 
over all possible combinations of device R, G and B.  But the problem is 
that not all combinations of R, G and B will ever come out of the camera.  
The camera does have a perfectly well-defined device gamut which is a subset 
of R, G, B space. (The intuitive reason for this is that the camera response 
is limited by its responses to monochromatic light--you can't get more 
saturated than monochromatic. You could determine the camera gamut via the 
following experiment:  take a tunable source of monochromatic light.  For a 
set of evenly spaced frequencies over a range which includes the visible 
spectrum but extends into UV and IR, take a monochromatic beam and shine it 
into the camera.  Record the R, G, B values attained at that frequency.  
Plot the percentage of R and the percentage of G relative to the total 
(R + G + B).  If you look at the resulting plot you will see a curve in 
a two-dimensional space--the camera's spectrum locus.  The camera 
device gamut is the set of all points in the convex hull of this 
curve.  Alternatively, if you have the camera's spectral sensitivity 
curves for the three channels, you can calculate the spectrum locus 
mathematically.)
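
Here is a rough Python sketch of that second, mathematical route.  The 
Gaussian sensitivity curves are invented stand-ins (Leica has not 
published the M8's actual curves), so only the procedure, not the 
numbers, should be taken seriously.

    import numpy as np

    # Wavelength grid running from UV through IR, in nm.
    wl = np.arange(350, 801, 5)

    def gaussian(center, width):
        return np.exp(-0.5 * ((wl - center) / width) ** 2)

    # Made-up spectral sensitivity curves for the three channels; a
    # camera's measured curves would replace these.
    R = gaussian(600, 40)
    G = gaussian(540, 40)
    B = gaussian(460, 40)

    # The response to monochromatic light at wl[i] is just
    # (R[i], G[i], B[i]).  Plot the R and G shares of the total to
    # trace the camera's spectrum locus in two dimensions.
    total = R + G + B
    r, g = R / total, G / total
    for i in range(0, len(wl), 20):
        print("%d nm: r=%.3f g=%.3f" % (wl[i], r[i], g[i]))
    # The device gamut is the convex hull of these (r, g) points.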

What we want to know is the image of the device gamut under the 
transformation defined by the profile, not the image of all of R, G, B 
space. Unfortunately the camera profile does not contain any information 
about the device gamut, so instead we show the range of all of R, G, B 
space. Not surprisingly, this larger range often includes points outside the 
CIE Lab gamut. Unfortunately this is a purely mathematical artifact and 
tells us nothing at all about the camera or its spectral sensitivity curves.

An example will make this clear.  Suppose some clever camera company was 
able to construct a camera whose spectral sensitivity functions exactly 
matched the color matching functions specified by CIE.  (This camera would 
be able to exactly predict the colors seen by human observers with ordinary 
color vision. Note that the camera would have no sensitivity outside of the 
visible range of wavelengths.)   For this camera the R, G, B responses to 
the spectrum of light coming from an object would exactly match the CIE X, 
Y, Z values.  The mappings recorded in the profile would be a 3 x 3 identity 
matrix (1's down the diagonal and 0's elsewhere), and a color lookup table 
approximation to the standard mapping from X, Y, Z to L, a, b. If we ran 
gamut display software on the resulting profile, it would show that the 
camera gamut occupied all of CIE X, Y, Z space, spilling out well beyond the 
CIE gamut limits.  Would we then conclude that our ideal camera has extended 
IR response?  That it can see unreal colors?
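
You can watch this artifact appear numerically.  Here is a sketch that 
sweeps the whole R, G, B cube through that identity matrix, the way 
gamut display software in effect does (the 0.735 figure below is the 
red end of the standard CIE 1931 spectrum locus):

    import numpy as np

    # For the ideal camera the profile matrix is the identity, so
    # device (R, G, B) = (X, Y, Z).  Sweep the whole unit cube.
    steps = np.linspace(0.01, 1.0, 5)
    worst_x = 0.0
    for X in steps:
        for Y in steps:
            for Z in steps:
                x = X / (X + Y + Z)    # CIE chromaticity x
                worst_x = max(worst_x, x)

    # No real color has chromaticity x above about 0.735, yet the
    # cube sweep reaches:
    print("max x over the RGB cube: %.3f" % worst_x)   # ~0.980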

There is a further complication:  the mappings defined by camera profiles do 
not have to represent the result of physical profiling of the camera.  They 
can be "renderings", i.e. mappings chosen to create a pleasant appearance in 
the resulting image.  Thus the profile can be a matter of taste rather than 
scientific calibration.

A sidebar for those unfamiliar with CIE X, Y, Z coordinates.  One way to 
think of these coordinates is that they are the raw device coordinates of an 
abstract camera whose spectral sensitivities have been chosen so that if two 
input spectra yield the same X, Y, Z coordinates, then human observers with 
normal color vision will identify the two spectra as having the same color. 
Think of it as a color camera that can always identify matching colors.  (My 
wife wishes I had one when I pick socks.)

The whole point of color management (roughly) is to define every 
device you use in terms of these CIE X, Y, Z coordinates.  (I say 
roughly because there are further 
complications to account for the human ability to perceive neutral gray as a 
constant color under different illumination, even though the X, Y, Z 
coordinates of a totally gray object will slide around in X, Y, Z space as 
the illumination changes.  This is the dread white balance phenomenon.)
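
For the curious, here is a crude sketch of the kind of correction 
involved: a diagonal, von Kries-style scaling applied directly in 
X, Y, Z for simplicity (real systems usually adapt in a cone-response 
space instead).  The illuminant white points are the standard CIE 
values.

    import numpy as np

    # X, Y, Z of the two illuminants' whites (standard CIE values).
    A_WHITE   = np.array([1.09850, 1.00000, 0.35585])   # tungsten
    D65_WHITE = np.array([0.95047, 1.00000, 1.08883])   # daylight

    def adapt_a_to_d65(xyz_under_a):
        # Scale each channel by the ratio of the destination white
        # to the source white, so gray under A lands on gray under
        # D65.
        return np.asarray(xyz_under_a) * (D65_WHITE / A_WHITE)

    # A perfectly gray object under illuminant A reflects a scaled
    # copy of A's white; after adaptation it sits on D65's white:
    print(adapt_a_to_d65(A_WHITE * 0.5))   # -> 0.5 * D65_WHITE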

For an output device like a printer the characterization problem is 
easy:  you run through the possible input values (in practice a 
representative grid of R, G, B triples for a typical home inkjet 
printer), print a little patch for each triple, and then use a 
colorimeter to measure the CIE X, Y, Z values (under a specified 
illumination.)  The range of all possible X, Y, Z values that you can obtain 
is literally the gamut of colors that the printer can achieve.  The printer 
gamut will always lie inside the CIE X, Y, Z gamut, because these gamut 
values are the result of direct physical measurement.  A printer can never 
create a color you can't see.
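
In outline the whole characterization is just a loop.  In this Python 
sketch, measure_patch is a hypothetical stand-in for printing a patch 
and reading it with a colorimeter (no real instrument API is implied), 
and the grid density is arbitrary.

    # Sketch of printer characterization.  measure_patch(r, g, b) is
    # a hypothetical callable that prints a patch and returns its
    # measured (X, Y, Z); no real instrument API is implied.
    def characterize_printer(measure_patch, steps=6):
        table = {}
        levels = [i / (steps - 1) for i in range(steps)]
        for r in levels:
            for g in levels:
                for b in levels:
                    # Every entry is a physical measurement, so every
                    # entry is, by construction, a real visible color.
                    table[(r, g, b)] = measure_patch(r, g, b)
        return table  # the printer's gamut is the range of this table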

For a camera the problem is harder.  The mapping from raw camera R, G, B 
values to X, Y, Z values has to be chosen by a human for a particular 
purpose.  Do you want the mapping to produce pleasing images for an 
unrestricted set of photographic situations?  Do you want the mapping to 
produce the most accurate possible X, Y, Z values for a limited range of 
input spectra under a fixed lighting type?  Even an accurate camera profile 
gamut (where you only look at the range of X, Y, Z values when the input R, 
G, B values are restricted to the camera's device gamut) may tell you more 
about the profile maker than the camera.

Cameras don't create colors. Human profile makers create colors. There can 
be no camera color gamut without a profile.  (Of course if you set the 
camera to create .jpg files, the camera dutifully creates R, G, B values in 
a well-defined color space which has a clearly specified way of mapping from 
color space R, G, B to X, Y, Z.  Here the camera firmware is choosing and 
applying a mapping from device R, G, B to color space R, G, B. So in that 
sense, cameras can create colors, but it is the human camera firmware writer 
who is deciding on the colors.)
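
For sRGB, the color space most cameras write .jpg files into, that 
clearly specified mapping is public (IEC 61966-2-1).  A minimal 
sketch:

    import numpy as np

    # Linear sRGB -> XYZ matrix as published in the sRGB standard.
    M_SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                              [0.2126, 0.7152, 0.0722],
                              [0.0193, 0.1192, 0.9505]])

    def srgb_to_xyz(rgb):
        rgb = np.asarray(rgb, dtype=float)
        # Undo the sRGB transfer curve to get linear R, G, B.
        linear = np.where(rgb <= 0.04045,
                          rgb / 12.92,
                          ((rgb + 0.055) / 1.055) ** 2.4)
        return M_SRGB_TO_XYZ @ linear

    print(srgb_to_xyz([1.0, 1.0, 1.0]))  # D65 white: ~[0.9505 1. 1.089]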

References:

For more than you want to know about human color vision and colorimetry see

The Science of Color, Steven K. Shevell, editor, Optical Society of America

This book is unique for relating psychophysical experiments (color matching) 
to the anatomy of the eye.  For example:  did you know that there are no 
short wavelength cones in the very center of the retina? If you thought 
de-mosaicing an R, G, B image from a Bayer array is tough, wait till you see 
the pictures of the distribution of S, M and L cones in the human retina--it 
just looks like random sprinkles.  The book also defines CIE X, Y, Z space 
exactly in terms of physical measurements. The introductory chapter on the 
history of color science is also extremely illuminating. It took a long time 
for scientists to realize that the color of an object is not an independent 
attribute of that object, but rather a human sensation derived from light 
being reflected from or emitted from the object.

For an introduction to color management and profiles see:
http://www.color.org/slidepres.html

Proviso:  I am not a color scientist myself, but I have a Ph.D. in 
Mathematics and have worked as a software engineer for many years, so I can 
read and understand the technical descriptions of color science.


Mark Davison


