Copyright Michael Karbo, Denmark, Europe.




    Chapter 27. Colours from tinted glasses ...

    How do we produce a colour image from the signals of small light meters that can only distinguish grey tones? The solution is a clever little colour filter mounted on top of the image sensor.

    The filter is designed so that every cell wears "sunglasses" for one specific colour. One cell gets a red filter, another a green filter, and so on. In this way some of the sensor's cells register green light, while others register blue and the rest capture red light.

    Figure 103. The colour filter is placed on top of the image sensor's light-sensitive surface. The result is, in effect, a grey-tone sensor wearing sunglasses.

    The filter ensures that only light of a certain wavelength reaches each photocell. Light of other wavelengths is absorbed by the filter and discarded. The individual cells can still only register luminosity (and hence grey tones), but with the help of the filter the cells can be divided into three groups, each covering one of the primary colours.

    Green is important

    Most cameras use an RGB filter, which consists of the three primary colours red, green and blue. Experience has shown that the human eye is most sensitive to green, so half of the sensor's area is used to capture the image's green nuances. The other half is shared between red and blue cells, as seen in the following figure. This colour filter is called a Bayer grid.

    Figure 104. In the so-called Bayer grid there are twice as many green pixels as red or blue ones.
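    The layout described above can be sketched in a few lines of code. This is a minimal illustration of the repeating Green-Red / Blue-Green tile (the function names are my own, not from any camera's firmware), showing that green sites cover half the sensor:

```python
# Sketch of the Bayer grid: each photosite records only one primary
# colour, laid out as a repeating 2x2 tile Green-Red / Blue-Green.

def bayer_channel(row, col):
    """Return which primary colour the photosite at (row, col) records."""
    if row % 2 == 0:
        return "G" if col % 2 == 0 else "R"
    return "B" if col % 2 == 0 else "G"

def count_channels(rows, cols):
    """Count how many sites of each colour a rows x cols patch contains."""
    counts = {"R": 0, "G": 0, "B": 0}
    for r in range(rows):
        for c in range(cols):
            counts[bayer_channel(r, c)] += 1
    return counts

# On a 4x4 patch: 8 green sites, 4 red, 4 blue - half the area is green.
print(count_channels(4, 4))  # {'R': 4, 'G': 8, 'B': 4}
```

The two-to-one ratio in favour of green is exactly the point of the Bayer grid: the channel the eye is most sensitive to is sampled most densely.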

    So the image sensor registers lots of "raw" pixels, each of which is red, green or blue at some luminosity. But real-life pictures don't look like that: the colours of nature are mixtures of the three primary colours.

    Raw data is interpolated

    The image's colour nuances have to be created from the filter's RGB mosaic. This operation is called de-mosaicing and takes place in the camera's image computer.

    The image sensor’s raw data has to be processed, and this is done with very advanced software in the camera’s image computer. This processing consists mostly of interpolation.
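    The simplest form of this interpolation can be sketched as follows. Real cameras use far more advanced, proprietary algorithms; this is only an illustrative bilinear-style estimate, where a missing channel at a photosite is taken as the average of the nearest neighbours that did measure it:

```python
# Simplified sketch of the interpolation step in de-mosaicing.
# Each photosite knows only one channel; the two missing channels
# are estimated by averaging the nearest neighbours that recorded them.

def bayer_channel(row, col):
    # Green-Red / Blue-Green repeating 2x2 tile, as in Figure 104.
    if row % 2 == 0:
        return "G" if col % 2 == 0 else "R"
    return "B" if col % 2 == 0 else "G"

def demosaic_pixel(raw, row, col, channel):
    """Estimate one colour channel at (row, col) from raw sensor values."""
    if bayer_channel(row, col) == channel:
        return raw[(row, col)]  # this channel was measured directly
    # Otherwise average the surrounding sites that did measure it.
    neighbours = [
        raw[(r, c)]
        for r in (row - 1, row, row + 1)
        for c in (col - 1, col, col + 1)
        if (r, c) != (row, col)
        and (r, c) in raw
        and bayer_channel(r, c) == channel
    ]
    return sum(neighbours) / len(neighbours)

# Tiny 2x2 raw mosaic: G=100, R=200, B=50, G=120.
raw = {(0, 0): 100, (0, 1): 200, (1, 0): 50, (1, 1): 120}
# The red value at the green site (0, 0) is borrowed from its red neighbour:
print(demosaic_pixel(raw, 0, 0, "R"))  # 200.0
```

Done for every pixel and every channel, this turns the one-colour-per-site mosaic into a full RGB image, which is why two thirds of every finished pixel's colour information is interpolated rather than measured.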

    Figure 105. The light's colours are gathered in millions of small dots in the primary colours red, green and blue and are reproduced in the camera as an image file.

    The image computer, and the software with which it is programmed, is an extremely important component in a camera's construction. It is here that the image's all-important colour balance is created, turning the raw data from the image sensor into finished colour photographs.

    There is quite a big difference in this area between the various camera manufacturers. The "old" photo suppliers like Canon, Fujifilm and Nikon have traditionally had the strongest technology, but development is rapid, and many very fine cameras of many different brands are to be found today.

    But there is still a difference in the colours of the different brands. Each manufacturer has its own opinion of what "correct colours" look like. It is exactly the same situation as with analog film, where Kodak, Agfa and Fujifilm clearly produce different colours.

    Colours are not just colours

    Colour assessment is very individual, because colour is something that originates in our heads; probably no two people see colour in exactly the same way. Some will clearly prefer Canon's colours, which others will find too reddish and oversaturated. Some will like Minolta's colours, while others will think they have too much yellow and green. Nikon's and Fujifilm's colour interpretation is probably the most neutral, but not everyone likes that either.

    The companies Sony, Sharp and Panasonic produce most of the image sensors found in cameras from various other manufacturers such as, for example, Nikon. These sensors are largely comparable: their size and resolution vary, but otherwise they are much alike. The software used to process the image sensor's raw pixels, however, varies a lot from camera to camera.

    But image sensors are also still developing. In 2004 Sony introduced an 8 MP image sensor that uses a colour filter with two nuances of green (green and emerald). So while most cameras use a filter with the formula Green-Red-Green-Blue, Sony has replaced it with Emerald-Red-Green-Blue.
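    The difference between the two filter formulas can be sketched side by side. The exact positions in the tiles below are illustrative, not taken from a sensor datasheet; the point is simply that Sony's variant replaces one of the two green sites with emerald:

```python
# Classic Bayer tile versus Sony's RGBE variant (illustrative layouts).

BAYER_TILE = [["G", "R"],
              ["B", "G"]]

RGBE_TILE = [["G", "R"],
             ["B", "E"]]  # "E" = emerald replaces one of the two greens

def tile_counts(tile):
    """Count the colour channels in one repeating filter tile."""
    counts = {}
    for row in tile:
        for ch in row:
            counts[ch] = counts.get(ch, 0) + 1
    return counts

print(tile_counts(BAYER_TILE))  # {'G': 2, 'R': 1, 'B': 1}
print(tile_counts(RGBE_TILE))   # {'G': 1, 'R': 1, 'B': 1, 'E': 1}
```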

    In this way colour reproduction is still being developed with new technologies and “inventions”.

    Figure 106. The colour filter in Sony's 8 MP image sensor uses two green nuances as well as red and blue.

    So colours are mostly a matter of taste, and this is obvious in the various camera products. The big differences in price, quality and features between cameras that are otherwise very alike are due first and foremost to the camera's image computer and the software used to process the raw pixels; after all, the sensor itself has done nothing more than register lots of red, green and blue dots.

    The construction of the image itself is pure manipulation, where the software guesses “the right colours”.

    These conditions are mirrored in many cameras' software, where the menu system offers various user options for colour processing. Because the exposures are "raw", there are many ways of processing and varying them immediately after they have been taken. You can even configure the camera's software yourself to determine the colours and the interpolation.

    Figure 107. Modern cameras give users a lot of options, which in fact amount to programming the image computer.



