Imaging Systems
David Mitchell

Contents

Chapter 1: Fundamentals
Chapter 2: Input
Chapter 3: Processing
Chapter 4: Output

Chapter 1 Fundamentals


Resolution

There are four basic types of resolution in imaging. The first three, spatial, tonal, and spectral, apply to color still images, while the fourth, temporal, applies only to moving images. The following pages will discuss each of these resolutions and their effect on the quality of an image.

Spatial resolution is based upon the number of pixels in an image and thus determines its dimensions. The more pixels an image has, the higher its spatial resolution; a lower spatial resolution results in a more pixelated image.

When resizing an image, different interpolation methods are used to infer what pixels are added or to determine what pixels are removed. Three of the most common interpolation methods are bilinear, bicubic, and nearest neighbor.

Figure 1.1 - Original Image


Spatial Resolution

Bicubic interpolation generally gives the best results, with smooth transitions between pixels. This comes at the cost of speed, since this method is the slowest. Resampling this image took 1.2 seconds.

Bilinear interpolation gives results very similar to bicubic, although at high magnifications slightly harder edges can be seen. It is slightly faster, taking only 1 second to resample.

Nearest neighbor interpolation gives the worst quality of all the resampling methods, although it is the fastest. When magnified, square-shaped groups of pixels are clearly visible. Resampling this image took 0.8 seconds.

Figure 1.2 - Resized images at 400%. Bicubic, Bilinear, Nearest Neighbor
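The nearest neighbor and bilinear methods described above can be sketched in a few lines of Python. This is an illustrative pure-Python version operating on a 2-D list of grey values, not the optimized code an image editor would use; bicubic follows the same pattern but weights a 4x4 neighborhood with cubic functions.

```python
def nearest_neighbor(src, new_w, new_h):
    """Copy the closest source pixel: fast, but produces the
    square blocks visible at high magnification."""
    h, w = len(src), len(src[0])
    return [[src[y * h // new_h][x * w // new_w]
             for x in range(new_w)] for y in range(new_h)]

def bilinear(src, new_w, new_h):
    """Weight the four surrounding source pixels by distance,
    giving smoother transitions at the cost of extra arithmetic."""
    h, w = len(src), len(src[0])
    out = []
    for y in range(new_h):
        fy = y * (h - 1) / (new_h - 1) if new_h > 1 else 0.0
        y0 = int(fy)
        y1 = min(y0 + 1, h - 1)
        wy = fy - y0
        row = []
        for x in range(new_w):
            fx = x * (w - 1) / (new_w - 1) if new_w > 1 else 0.0
            x0 = int(fx)
            x1 = min(x0 + 1, w - 1)
            wx = fx - x0
            top = src[y0][x0] * (1 - wx) + src[y0][x1] * wx
            bot = src[y1][x0] * (1 - wx) + src[y1][x1] * wx
            row.append(top * (1 - wy) + bot * wy)
        out.append(row)
    return out

ramp = [[0, 100], [0, 100]]
print(nearest_neighbor(ramp, 4, 2))  # hard-edged duplicate columns
print(bilinear(ramp, 3, 2))          # an interpolated midpoint appears
```

Upscaling a hard edge shows the difference directly: nearest neighbor only duplicates existing values, while bilinear invents intermediate tones between them.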


Tonal Resolution

Figure 1.3 - Image represented at different bit depths: 8 bit (256 levels), 6 bit (64 levels), 4 bit (16 levels), and 2 bit (4 levels)

Tonal resolution is the number of distinct levels from brightest to darkest that an image contains. This is often referred to as dynamic range. The greater the number of levels in an image (the higher its dynamic range), the smoother the gradations between tones will appear.

The image on the left is broken up into 4 sections, with an accompanying grey scale, to illustrate this. The 2 bit image has very clear posterization. In the 4 bit image, highly detailed areas are becoming better defined, but open areas such as the sky are still heavily posterized. At 6 bits most of the image looks normal, but upon close inspection there are still some sharp edges in areas of tonal gradation. Past 7 bits it is nearly impossible for the human eye to perceive a difference. Also, printers are not able to print an image at more than 7 bits. The usefulness of a bit depth greater than 7 is that it gives a greater dynamic range and more exposure latitude during post-production.

Bits/Channel: 2, 3, 4, 5, 6, 7, 8, 16
Levels: 4, 8, 16, 32, 64, 128, 256, 65,536
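The relationship in the table is simply levels = 2^bits, and reducing bit depth amounts to integer quantization. A minimal sketch (the function names are my own):

```python
def levels(bits):
    """Distinct tonal levels available at a given bit depth."""
    return 2 ** bits

def quantize(value, bits):
    """Reduce an 8 bit value (0-255) to a lower bit depth, then map
    it back to the 0-255 range; large steps cause posterization."""
    step = 256 // levels(bits)
    return (value // step) * step

print(levels(8))         # 256
print(quantize(130, 2))  # 128: snapped to one of only 4 levels
```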

Figure 1.4 - 7 bit greyscale (top half), 8 bit greyscale (bottom half)


Spectral Resolution

Figure 1.5 - Normal image (right) compared to image with RGB channels shifted (left)

The number of color channels a particular camera can capture determines its spectral resolution, also called color resolution. In general, digital cameras have three channels: red, green, and blue. In addition, there are more specialized types of cameras that capture parts of the light spectrum not visible to the human eye.


Examples of these kinds of cameras are infrared, thermal, and X-ray. In order to gain spectral resolution, spatial and tonal resolution must be sacrificed. Thus a 16 megapixel color sensor has an effective resolution of 4 megapixels, whereas a black and white 16 megapixel camera has an effective resolution of 16 megapixels.

There are two different mixing modes of color: additive and subtractive. Each of these mixing modes is relevant to different aspects of the imaging process.

Additive Color

Additive color involves the mixing of light using red, green, and blue. Most screens and monitors utilize additive color. When equal parts of red, green, and blue light are mixed they create white, while the complete absence of these colors creates black. Images in RGB color space are divided into 3 color channels: red, green, and blue. Each of these channels holds one color's information, and when combined they create a full color image.

Subtractive Color

Subtractive color is the mixing of pigments using cyan, magenta, yellow, and black. This color process is used in printing and is the opposite of additive color. When cyan, magenta, and yellow are mixed they create black, while the absence of any pigment is white.

Figure 1.6 - Left, RGB channels. Right, CMY channels
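The relationship between the two mixing modes can be shown with a small sketch: each subtractive primary is the complement of an additive one. The function name is illustrative, and the simple 255 - x conversion ignores real-world ink behavior (which is why printers add a separate black channel):

```python
def rgb_to_cmy(r, g, b):
    """Additive RGB (0-255) to subtractive CMY: each pigment absorbs
    the light its complementary channel would emit."""
    return (255 - r, 255 - g, 255 - b)

print(rgb_to_cmy(255, 255, 255))  # white light -> no pigment: (0, 0, 0)
print(rgb_to_cmy(0, 0, 0))        # no light -> full pigment: (255, 255, 255)
```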


File Formats & Size

File Size Calculations

In order to determine the size of a file, you multiply the applicable resolutions together. For a normal color photograph the equation is:

File size = Spatial x Tonal x Spectral

Assuming that Spatial Resolution = 1024x1024 pixels, Tonal Resolution = 8 bits/pixel, and Spectral Resolution = 3 channels:

(1024x1024) x 8 x 3 = 25,165,824 bits = 3,145,728 bytes = 3,072 KB = 3 MB
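The calculation above can be wrapped in a short helper (names are my own), keeping the bits-to-bytes division explicit:

```python
def file_size_bytes(width, height, bits_per_channel, channels):
    """Uncompressed image size: spatial x tonal x spectral gives
    bits; divide by 8 for bytes."""
    return width * height * bits_per_channel * channels // 8

size = file_size_bytes(1024, 1024, 8, 3)
print(size)                # 3145728 bytes
print(size / 1024 / 1024)  # 3.0 MB
```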


File Formats

All digital information must be stored in a file. There are numerous different file formats which serve this purpose. Different formats save images with different algorithms, which results in varied image quality. The following are a number of commonly used file formats.

RAW
• 12-14 bits/channel
• Minimally processed sensor data
• Lossless, non-destructive

JFIF (JPEG)
• 8 bits/channel (24 bit color image)
• Best for continuous tone images
• Lossy compression

GIF
• 8 bits total (256 colors maximum)
• Lossless compression
• Supports transparency and interlacing

TIFF
• 8 bits/channel (24 bit color images)
• Lossless image compression
• RGB color space


Chapter 2 Input


Image Processing Pipeline The overall digital imaging system is outlined in the following pipeline, which is separated into the three stages of the system: Input, Processing and Output. This gives a general sense of the path an image takes from capture to print. Many of the individual steps in the pipeline will be discussed in greater detail in the subsequent chapters.


Sensors

An image sensor is the device that converts an optical image into an electrical signal. The photons that enter the lens are read by many photosites on the sensor, each photosite translating the light into a voltage for that individual pixel. While all sensors serve this function, there are many different types of sensor technology. Three of the primary types of digital sensors are CCD, CMOS, and Foveon.

CCD

Figure 2.3 - CCD Sensor.

Charge-coupled devices (CCDs) record electrons at each photosite and then transfer each row of information individually, generally one photosite at a time. In order to be read, the signal must go through an amplifier. Because the architecture of the sensor has very few wires, the images captured will have relatively low levels of noise. CCD sensors are primarily used in medium format and some full frame 35mm cameras. They are very expensive to produce and work somewhat slower than CMOS sensors.

Advantages: low noise, higher light sensitivity, higher quality image
Disadvantages: expensive to produce, use a lot of power


CMOS

Figure 2.4 - CMOS Sensor

Complementary metal-oxide semiconductors (CMOS) read every photosite individually. This is done by a wire connected to each site. This allows for much faster reading times, but because of the distance the signal must travel through wires it becomes very noisy. In passive CMOS sensors the amplifier is at one end of the sensor, which means that the noise created is amplified along with the signal. Active CMOS sensors were created to minimize this problem, with amplifiers located at each photosite.


CMOS sensors are very cheap and easy to make, using a process similar to that used for computer chips. Because of this they are the most widely used type of sensor in consumer cameras as well as cell phones.

Advantages: cheap to produce, use little power, smaller
Disadvantages: high noise level, low light sensitivity, lower quality image

Foveon

Figure 2.5 - Foveon Sensor

Foveon sensors gather light in a similar manner to film. They use an array of photosites that each contain three vertically stacked photodiodes. Each of these responds to different wavelengths of light. In contrast to other digital sensors, color values are assigned at each photosite, so no demosaicing is required. While they have greater color accuracy, Foveon sensors have not achieved prominent commercial use. Currently Sigma is the only camera manufacturer utilizing this technology.

Advantages: greater color accuracy, greater light sensitivity, sharper images
Disadvantages: more noise, expensive, slow


Color Filter Arrays

Color Filter Array (CFA)

While a camera's sensor collects light, it is not able to differentiate between different wavelengths of that light. Therefore it cannot separate color information, only the intensity of light. To correct for this, tiny color filters are placed on top of each pixel. With the filters only allowing certain wavelengths of light through, color information is separated and recorded.


Demosaicing

This raw data, however, is still not readable as an image. It must be converted into a full color image by a demosaicing algorithm; these algorithms are built to match different CFA patterns. Demosaicing algorithms aim to create full color images with as low computational complexity as possible in order to function efficiently in camera software.

Bayer Pattern

The most common type of color filter array is called the Bayer pattern. The Bayer pattern utilizes red, green, and blue filters. Because human eyesight is more sensitive to green light, the Bayer filter replicates this by using twice the number of green filters. Bayer filters can be found in the vast majority of both consumer and professional grade digital cameras.
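The Bayer layout can be described as a repeating 2x2 tile. A minimal sketch (this shows the RGGB ordering; real sensors use one of several equivalent orderings):

```python
# One RGGB Bayer tile; the full color filter array repeats it.
BAYER_TILE = [["R", "G"],
              ["G", "B"]]

def filter_color(row, col):
    """Which color filter sits over the photosite at (row, col)."""
    return BAYER_TILE[row % 2][col % 2]

# Over any even-sized region, half the filters are green,
# matching the eye's greater sensitivity to green light.
counts = {"R": 0, "G": 0, "B": 0}
for r in range(4):
    for c in range(4):
        counts[filter_color(r, c)] += 1
print(counts)  # twice as many G sites as R or B
```

Demosaicing then fills in each photosite's two missing channels, typically by interpolating from the nearest neighbors that carry them.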

Foveon

The Foveon X3 sensor uses a different structure for its color filter array. It uses three stacked photodiodes at each photosite, each picking up one color. Thus no demosaicing is required, because each photosite has red, green, and blue information.

RGBW

A category of CFA patterns based upon the Bayer pattern is RGBW, the "W" standing for white. In these filters, many of which have been designed by Kodak, some color filters are replaced with transparent ones in order to create panchromatic pixels. This allows for greater low light sensitivity, but generally increases low light noise.


Chapter 3 Processing


Color Correction

Figure 3.1 - Original Image, left. Neutral Balance, middle. Warm Tone, right

Neutral Balance

One of the first steps that must be accomplished when opening a raw image in processing software is to correct the colors so the image is neutral balanced. Depending on the lighting conditions, the color temperature of light varies considerably. Our eyes adjust for these discrepancies but camera sensors do not. Thus images may appear to have an overall color cast, either warm yellow or cool blue.

The process of neutral balancing, also referred to as white or grey balancing, involves defining a point in the scene which should be a grey tone with no color value. This point is then adjusted to be completely neutral and in the process the rest of the image is also adjusted to the same specifications.
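Grey balancing can be sketched as per-channel gain: measure the sampled grey point, compute the gains that make its channels equal, and apply those gains to every pixel. An illustrative version (function names are my own, and real raw converters perform this on linear sensor data before gamma encoding):

```python
def grey_balance_gains(grey_sample):
    """Per-channel gains that make the sampled grey point neutral."""
    r, g, b = grey_sample
    target = (r + g + b) / 3
    return (target / r, target / g, target / b)

def apply_gains(pixel, gains):
    """Scale one RGB pixel by the gains, clamping to the 0-255 range."""
    return tuple(min(255, round(c * k)) for c, k in zip(pixel, gains))

# A grey card photographed under warm light reads (220, 200, 160).
gains = grey_balance_gains((220, 200, 160))
print(apply_gains((220, 200, 160), gains))  # the grey point becomes neutral
```

Applying the same gains to every pixel is what shifts the rest of the image along with the chosen grey point.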

In many cases the only color adjustment an image requires is neutral balancing. However, if a scene has light sources of different temperatures, more localized color adjustments must take place. There are also situations in which a completely neutral image is not visually appealing. In these cases, once neutral is defined, the color can be adjusted away from neutral.

Color Modes

A color mode is a way of defining the range of producible colors under given circumstances, known as the color gamut. There are many different types of color modes, based on various methods of defining colors. When reproducing images it is important to be aware of what color mode you are using, because moving back and forth between smaller and larger color spaces introduces color issues which could easily be avoided.

Adobe 1998 & sRGB

Adobe 1998 is an RGB color space and the most commonly used color space in professional contexts. Its gamut encompasses most of the colors achievable in print. While it is a large color space, it only encompasses about 50% of the colors visible to the human eye. The sRGB color space is smaller than Adobe 1998 but still encompasses most colors used in print. It was designed as an easy way to view images across mediums with little color error.

LAB Space

LAB space is a coordinate-based and device-independent system, with a gamut encompassing all visible colors. When editing an image in LAB space it is broken up into three channels: L, a*, and b*. The L channel deals only with luminance, allowing for brightness/contrast control independent of color. The a* and b* channels carry the color information of the image.


Exposure Correction

Histograms

A histogram is a graphical representation of the tonal distribution in an image, showing the frequency of pixels at each light level. There are 256 levels in an 8 bit image. The x axis represents image tone, with blacks on the far left and whites on the far right. The y axis represents the number of pixels at each value.
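Computing a histogram is a single counting pass. A minimal sketch for an 8 bit greyscale image stored as a flat list of pixel values:

```python
def histogram(pixels, levels=256):
    """Count how many pixels fall at each tonal level
    (index 0 = pure black, index 255 = pure white)."""
    counts = [0] * levels
    for p in pixels:
        counts[p] += 1
    return counts

image = [0, 0, 64, 128, 128, 128, 255]
h = histogram(image)
print(h[0], h[128], h[255])  # 2 3 1
```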

Brightness

The brightness of an image refers to the overall concentration of tones along the histogram. To adjust brightness, a mathematical function adds or subtracts a given number to or from every pixel in the image. Adding makes the image brighter; subtracting makes it darker.

Contrast

The contrast of an image refers to the difference between the brightest collection of pixels and the darkest collection of pixels. The greater this difference, the greater the contrast. To adjust contrast, a mathematical function multiplies or divides all of the pixel values by a given number. Multiplying increases contrast; dividing reduces it.
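The add/subtract and multiply/divide operations described above can be sketched directly, with clamping to keep results in the 0-255 range. This follows the document's plain formulation; a real editor typically pivots the contrast multiplication around mid-grey so overall brightness is preserved:

```python
def adjust_brightness(pixels, offset):
    """Add a constant to every pixel: positive offsets brighten,
    negative offsets darken. Values are clamped to 0-255."""
    return [max(0, min(255, p + offset)) for p in pixels]

def adjust_contrast(pixels, factor):
    """Multiply every pixel by a factor: > 1 increases contrast,
    < 1 (i.e. dividing) reduces it. Values are clamped to 0-255."""
    return [max(0, min(255, round(p * factor))) for p in pixels]

print(adjust_brightness([10, 100, 250], 20))  # [30, 120, 255]
print(adjust_contrast([64, 128, 192], 1.5))   # values spread further apart
```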


Exposing to the Right (ETTR)

Generally speaking, a digital image can be manipulated more when it is slightly overexposed, assuming no white information is clipped. This is because there are a greater number of light levels in the bright tones of an image, and these tones also have less noise. Overexposing with this in mind is referred to as exposing to the right.

Low Contrast

A low contrast image has a small difference between the brightest concentration of pixels and the darkest. A low contrast image may be low key, high key, or normally exposed. Lowering the contrast of an image in processing divides all of the pixel values by a number (the larger the number, the lower the contrast) in order to bring all of the values closer together.

High Contrast

A high contrast image has a large difference between the brightest concentration of pixels and the darkest. Increasing contrast in an image processor multiplies all of the pixel values by a number (the greater the number, the greater the contrast) in order to increase the difference between these values.


Sharpening

Almost all digital images require some sharpening, either simply to optimize them for output or for more creative reasons. While cameras use built-in sharpening algorithms, these do not provide much control and generally lead to inferior results. When shooting in RAW it is best to turn off the output sharpening your camera provides and use one of the many output sharpening techniques in Photoshop or similar software. One of the most popular forms of sharpening, used in both film and digital photography, is unsharp masking.

The process of unsharp masking requires making a blurred copy of your image and overlaying it on top of the original. After doing this you subtract the blurred image from the original, which results in a mask. This mask shows only the high frequency sections of the image, which are normally the edge areas. The mask can then be adjusted to control the amount of sharpening and which areas of the image are affected. Finally the mask is added to the original, resulting in a perceptually sharper image.

Blurring the Image

To blur this image, the blur filter in Photoshop was used. The filter was applied at a relatively low intensity so that the resulting mask would consist of finer lines. The more an image is blurred, the greater the difference between it and the original, and thus the sharper the final image will be.

The Mask

Subtracting the blurred image from the original results in a mask. For this image, Calculations were used in Photoshop to achieve this. After the mask is created, the brightness/contrast of the mask can be adjusted to control the level of sharpening in the image.

Sharpened Image

Using Calculations in Photoshop, the mask is added onto the original image. Once this is done the image will appear sharper, since all of the high frequency edges have been outlined.
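The blur / subtract / add sequence can be sketched in one dimension. Here a simple box blur stands in for Photoshop's blur filter, and the amount parameter plays the role of the mask's brightness/contrast adjustment (both names are illustrative):

```python
def box_blur(signal, radius=1):
    """Average each sample with its neighbours: a stand-in
    for the low-intensity blur step."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def unsharp_mask(signal, amount=1.0):
    """Subtract the blurred copy to isolate the high-frequency mask,
    scale it, then add it back to the original."""
    blurred = box_blur(signal)
    mask = [s - b for s, b in zip(signal, blurred)]
    return [s + amount * m for s, m in zip(signal, mask)]

edge = [0, 0, 0, 100, 100, 100]
print(unsharp_mask(edge))  # values overshoot on both sides of the edge
```

The overshoot on either side of the edge is exactly the outlined high-frequency boundary that reads as increased sharpness.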


Chapter 4 Output


Dots and Pixels

Digital images are comprised of either dots or pixels. Dots are the smallest binary element that a non-continuous tone device can generate. Laser and inkjet printers are common examples of non-continuous tone devices. Pixels are the smallest elements of an image which are capable of displaying grey levels. Pixels can be found in continuous tone devices such as monitors and scanners.

DPI, PPI, and LPI

Dots per inch (DPI) and pixels per inch (PPI) are the two systems of measurement used when discussing an output device's resolution. Lines per inch (LPI) is also used for this and is equivalent to PPI. When manufacturers write spec sheets for their monitors, cameras, and printers, DPI and PPI are often confused. For example, when an inkjet printer is said to have a resolution of 600 PPI, what is actually meant is 600 DPI. This tendency to use these very different terms synonymously requires us to be especially aware of their differences. If a device is continuous tone, such as a camera or monitor, use PPI. If a device is non-continuous tone, like an inkjet printer, use DPI.

Rule of 16

In order to convert between DPI and PPI/LPI, the following equation is used:


DPI / √(dots per pixel) = PPI (LPI)

In general you can assume that an image is 8 bit (256 levels, i.e. a 16 × 16 grid of dots per pixel), and the formula simplifies to DPI / 16 = PPI. This is known as the Rule of 16.
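The conversion is a one-line calculation: each pixel needs a square halftone cell with as many dots as there are grey levels, so the effective PPI is the DPI divided by the square root of the level count. A small sketch (function name is my own):

```python
import math

def ppi_from_dpi(dpi, levels=256):
    """Effective pixels per inch for a non-continuous tone printer:
    each pixel needs a sqrt(levels) x sqrt(levels) cell of dots."""
    return dpi / math.sqrt(levels)

print(ppi_from_dpi(2400))  # 2400 / 16 = 150.0 PPI
print(ppi_from_dpi(1440))  # 1440 / 16 = 90.0 PPI
```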

Halftoning

Figure 4.1 - Color Halftoning Simulation

Color Halftoning

In order to simulate continuous tones on a non-continuous tone device, halftoning must be used. Halftoning simulates continuous tones by using dots and varying their size, shape, and frequency. In general halftoning uses cyan, magenta, yellow, and black inks in combination. To create a halftone image, a continuous tone image is projected through a screen of dots. Areas with fewer dots will appear lighter while areas with more dots will appear darker. In color images this process is repeated for each of the color channels. In doing this, spatial resolution is sacrificed to gain tonal resolution.
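The trade of spatial for tonal resolution can be illustrated with ordered dithering, a close digital relative of screen-based halftoning: each pixel is compared against a tiled threshold matrix, so darker areas switch on more dots. This sketch uses the smallest useful (2x2) matrix and a single channel; a color halftone would repeat it per channel:

```python
# Smallest ordered-dither threshold matrix.
DITHER_2X2 = [[0, 2],
              [3, 1]]

def halftone(gray, width, height):
    """Binarize a greyscale image (0-255): 1 means a printed dot.
    Darker pixels fall below more thresholds and so get more dots."""
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            threshold = (DITHER_2X2[y % 2][x % 2] + 0.5) * 255 / 4
            row.append(1 if gray[y][x] < threshold else 0)
        out.append(row)
    return out

black = [[0, 0], [0, 0]]
white = [[255, 255], [255, 255]]
mid = [[128, 128], [128, 128]]
print(sum(map(sum, halftone(black, 2, 2))))  # 4 dots: solid ink
print(sum(map(sum, halftone(white, 2, 2))))  # 0 dots: paper white
print(sum(map(sum, halftone(mid, 2, 2))))    # 2 dots: mid grey
```

A flat mid-grey becomes a checkerboard of dots: four binary positions stand in for one of five tone levels, which is precisely the spatial-for-tonal exchange described above.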


Printing Technology

Inkjet

Inkjet printing is one of the most common and widely used types of printing. It is a non-continuous tone process which can be divided into two main categories: continuous and drop on demand. Continuous printing shoots tiny drops of ink regardless of whether ink is needed or not. This process is messy and can result in some waste, even though the excess ink is recycled back into the ink reservoir. Drop on demand printers only drop ink where it is required on a print. Drop on demand printers work in various ways, but the two primary ones are a piezoelectric process and a thermal process.


Laser

Laser printers are primarily used in office settings because they are fast, relatively inexpensive, and streamlined. While they are best at printing text, they can produce acceptable quality photographic images. Some of the disadvantages of laser printers are that their resolution is limited and they are prone to paper jams.

Dye Sublimation

Dye sublimation printers can produce some of the best photographic prints. This is because instead of using pigments like inkjet and laser printers, this process uses dyes. Because dyes have a much wider gamut of colors, these prints are continuous tone. Some of the downsides to this process are that it requires special types of paper that absorb the dyes; these papers are considerably more expensive than normal inkjet papers. Also, the printing process can be very slow.


David Mitchell