
imaging systems by kristen mcnicholas


table of contents

fundamentals
input
processing
output




the fundamentals



Spatial Resolution Spatial resolution is the sampling of pixels on a two-dimensional grid, using an x and y plane. An image's spatial resolution can be represented numerically in pixels as the long edge by the short edge (e.g., 1901 x 1800). Typically, higher spatial resolution gives an image more detail. A common misunderstanding is that a camera simply needs more pixels; really, smaller pixels let more of them fit on the sensor and so collect more detail, whereas bigger pixels cannot resolve as much detail in an image. Interpolation is a resizing operation by which new pixels are created in an image to change its size. The three interpolation processes are Bilinear, Bicubic and Nearest Neighbor. The accompanying images have been magnified 150% and show the different effects of each interpolation process; a small code sketch follows the descriptions below.


Bilinear Bilinear interpolation is performed by first interpolating along the columns of an image, followed by linear interpolation of the resulting values in the other dimension. It is slower than nearest neighbor and produces smoother results.

Bicubic Bicubic interpolation fits a smooth surface to a pixel and its neighbors and is most commonly used for images where a smooth gradation is desired, like a portrait or a landscape with a lot of sky tones. This interpolation process is the slowest and produces the smoothest results.

Nearest Neighbor Nearest neighbor interpolation is performed along a line or column of an image, and an interpolated pixel's value is set to that of its nearest neighbor. This interpolation process is the fastest and produces the crudest results.
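To see these trade-offs directly, here is a minimal sketch using the Pillow library (assuming Pillow 9.1 or newer for the Resampling names); the file name photo.jpg and the 150% scale factor are only illustrative values.

```python
# Resize the same image with each of the three interpolation methods.
from PIL import Image

img = Image.open("photo.jpg")                      # illustrative input
new_size = (int(img.width * 1.5), int(img.height * 1.5))

# Each filter fills in the newly created pixels differently.
for name, resample in [
    ("nearest", Image.Resampling.NEAREST),         # fastest, crudest
    ("bilinear", Image.Resampling.BILINEAR),       # slower, smoother
    ("bicubic", Image.Resampling.BICUBIC),         # slowest, smoothest
]:
    img.resize(new_size, resample=resample).save(f"photo_{name}.jpg")
```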



Tonal Resolution Tonal resolution represents the number of gray levels in an image. Bit depth determines that number of gray levels, as shown in Equation 1 below. The dynamic range of an image describes the relationship between its highlights and shadows. The number of tones a human eye can detect is limited to 256 levels, or 8 bits per pixel. Posterization is the result of undersampling the brightness levels and is visually evident below 7 bits per pixel. Tonal resolution can be expressed in several different units, such as f/stops, density units, bits and decibels (dB), as shown in Equation 2 below.

Equation 1: 2^x = number of gray levels (x in bits), e.g. 2^8 = 256 levels

Equation 2: 1 f/stop = 1 bit = 6 dB = 0.3 density units


[Figure: the same image posterized at 8 bits/256 levels, 6 bits/64 levels, 4 bits/16 levels, 3 bits/8 levels and 2 bits/4 levels]
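The banding shown above can be reproduced numerically. Below is a small posterization sketch with NumPy; the 16x16 gradient array is a stand-in for a real image.

```python
# Requantize an 8-bit grayscale array to fewer brightness levels.
import numpy as np

gray = np.arange(256, dtype=np.uint8).reshape(16, 16)  # stand-in image

def posterize(image, bits):
    levels = 2 ** bits                   # Equation 1: 2^bits gray levels
    step = 256 // levels
    return (image // step) * step        # collapse tones into bands

for bits in (8, 6, 4, 3, 2):
    banded = posterize(gray, bits)
    print(bits, "bits ->", len(np.unique(banded)), "levels")
```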



Spectral Resolution

Spectral resolution is defined by the three monochrome signals that represent the red, green and blue channels of an image. When the three monochromatic signals are combined, a full color image is produced that matches the response of the cones of the human eye. Each channel has a certain bit depth, most commonly 8 bits per channel, which yields a 24-bit color image across the three channels. The two different types of color mixing processes are additive (red, green and blue) and subtractive (cyan, magenta and yellow), each with its own uses. When fully combined, additive primaries create white while subtractive primaries create black (known as K in printing). Additive mixing is used for monitors and screens whereas subtractive mixing is used for printing.
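As a quick illustration of additive mixing and 24-bit color, the NumPy sketch below stacks three 8-bit monochrome channels; the uniform arrays are placeholder data, not an image from this book.

```python
# Stack three 8-bit monochrome channels into one 24-bit RGB image.
import numpy as np

h, w = 4, 4
r = np.full((h, w), 255, dtype=np.uint8)    # full red signal
g = np.full((h, w), 255, dtype=np.uint8)    # full green signal
b = np.full((h, w), 255, dtype=np.uint8)    # full blue signal

rgb = np.dstack([r, g, b])                  # shape (h, w, 3): 24 bits/pixel
print(rgb[0, 0])                            # [255 255 255] -> additive white
print(f"{2 ** 24:,} representable colors")  # 16,777,216
```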




Temporal Resolution

Temporal resolution is defined by the refresh rate of a monitor and pertains only to motion pictures and video. A monitor must be refreshed at periodic intervals. In interlaced video, alternating rows of the frame are refreshed on each pass at a set rate. Frames per second (fps) refers to the number of still frames shown in one second of video. With more frames per second, the motion appears more fluid. Movies are often shot at 24 fps, which creates a cinematic effect, while television is often shown at 30 fps. Slow motion is shot at a very high frame rate to ensure that detail is not lost in the motion.


File Formats & Sizes File formats are a standard container for storing data. Each "container" format saves the information with a different algorithm and results in a different quality image, each best suited to a particular medium. Commonly known formats are RAW, JPEG, PNG and TIFF. To calculate file size, all of the resolutions are multiplied together; temporal resolution is included only when calculating the size of a video file.

File Size = Spatial x Tonal x Spectral x Temporal*
(*video only)
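Here is the formula as a small calculator; the 3000 x 2000 still and the ten-second 1080p clip are made-up example values, and no compression is assumed.

```python
# Uncompressed file size = spatial x tonal x spectral (x temporal for video).
def file_size_bytes(width, height, bits_per_channel, channels, frames=1):
    bits = width * height * bits_per_channel * channels * frames
    return bits / 8

still = file_size_bytes(3000, 2000, 8, 3)           # spatial x tonal x spectral
video = file_size_bytes(1920, 1080, 8, 3, 24 * 10)  # ...x temporal (240 frames)
print(f"still: {still / 1e6:.1f} MB")               # 18.0 MB
print(f"video: {video / 1e6:.1f} MB")               # ~1493 MB
```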



Histograms Histograms offer a convenient way to visualize the brightness distribution in an image. The shape of a histogram represents the tonal range of an image, in other words the range from shadows to highlights. The histogram graphs the pixel count at each gray level on a scale of 0 to 255, shadows to highlights respectively. When taking a picture it is common practice to "expose to the right," which means the photographer exposes for the highlights. This is based simply on light levels and gives the photographer more continuous levels and pixels to work with during post-processing. Clipping is the term for when an image is underexposed or overexposed to the point that information is lost in the shadows or highlights. A properly exposed image will have a fairly even histogram, with frequencies in both the highlights and shadows.



An overexposed image will result in clipping in the highlights, which can be seen in a histogram when the levels pile up against the corresponding edge (255); the loss of detail will be evident in the whites and bright colors. An underexposed image will also result in clipping, but in the shadows, where the loss of detail will be evident in the blacks and dark colors. In either case the graph itself is pushed up against the side of whichever tone is clipped (0 or 255), and the clipped information is irreversibly lost.
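A histogram also makes clipping easy to detect programmatically. The sketch below uses NumPy on a placeholder 8-bit grayscale array; the 1% spike threshold is an arbitrary choice for illustration.

```python
# Flag large spikes at levels 0 or 255, which suggest clipping.
import numpy as np

gray = np.clip(np.random.normal(200, 60, (100, 100)), 0, 255).astype(np.uint8)

hist, _ = np.histogram(gray, bins=256, range=(0, 256))
total = gray.size

if hist[0] / total > 0.01:
    print("possible clipping in the shadows (level 0)")
if hist[255] / total > 0.01:
    print("possible clipping in the highlights (level 255)")
```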





input



Color Filter Array Interpolation A color filter array (CFA) is used to apply color to the information gathered by the photosites located on the sensor. The CFA creates three images, one red, one blue and one green, which combine to make a full color image. Light will only pass through its respective color filter to create that color's image. Bryce Bayer, of the Eastman Kodak Company, constructed the CFA interpolation pattern that mimics the human eye's heightened sensitivity to green light, and called it the Bayer Pattern. The RGB distribution of the pattern is 50% green, 25% blue and 25% red, as demonstrated by the illustration below.

[Figure: Bayer CFA cross-section showing incoming light passing through the filter layer to the sensor array, and the resulting pattern. Illustration from Google Images]
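The 50/25/25 distribution can be verified with a few lines of NumPy; the sketch below tiles the common RGGB arrangement (one of several possible orientations of the pattern).

```python
# Build a small RGGB Bayer tiling and count each filter color.
import numpy as np

h, w = 4, 4
pattern = np.empty((h, w), dtype="<U1")
pattern[0::2, 0::2] = "R"   # red on even rows, even columns
pattern[0::2, 1::2] = "G"   # green shares every row...
pattern[1::2, 0::2] = "G"   # ...and every column
pattern[1::2, 1::2] = "B"   # blue on odd rows, odd columns

print(pattern)
for c in "RGB":
    print(c, f"{(pattern == c).mean():.0%}")   # R 25%, G 50%, B 25%
```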



[Figure: Bayer Pattern. Illustration by Emily Shriver]



Sensors Sensors are a two-dimensional array of photosites whose output is the pixels that form an image. It's important to understand that sensors are only sensitive to light, and are therefore colorblind; this is why we need color filter array interpolation to account for the colorblindness. There are three major types of sensors: Charge Coupled Device (CCD), Complementary Metal Oxide Semiconductor (CMOS) and the Foveon sensor.

CCD This sensor works like a conveyor belt, reading the information from each photosite and then moving on to the next. CCDs are more light sensitive than CMOS sensors and are used in astronomy photography and compact cameras.

CMOS This sensor includes an analog-to-digital converter at each photosite. The photosites are also smaller, so each carries a microlens that focuses the light onto it. Having an analog-to-digital converter at each site introduces more noise into the image.

Foveon X3 This sensor uses a semiconductor process similar to CMOS. It is the first commercial sensor to eliminate the CFA interpolation process because it contains three stacked silicon layers that filter red, green and blue.

[Figure: CCD sensor diagram, in which photons generate electric charge (e) at each photosite; the charge is shifted through vertical registers to a horizontal register and an amplifier that produces the output signal. CMOS sensor diagram, in which each photodiode (pixel) has its own amplifier and metal wiring producing the output signal. Illustration by Emily Shriver]


[Figure: the image processing pipeline for a RAW capture, from sensor to rendered output. Illustration by Meghan Connor]


Image Processing Pipeline

1. ANALOG SYSTEM The analog system captures a black and white, one-channel image. After photons, or light, pass through the lens and strike the sensor, they are converted into electrons. The sensor detects the amount of light captured at each individual pixel and sends that information along to the next step in the pipeline, the Analog to Digital Converter (ADC).

2. ANALOG TO DIGITAL CONVERTER (ADC) The Analog to Digital Converter, or "ADC," is responsible for turning the analog signal from the previous step into a digital signal. The ADC converts the voltage provided by the analog system into numerical values. The image remains black and white, but because numbers have been assigned to the pixels, it is now an 8-bit grayscale image.

3. RAW DATA The raw numerical data, in the camera's capture file format (NEF, CR2, etc.), is stored within a "container." The image is still a black and white mosaiced file.

4. COLOR FILTER ARRAY (CFA) INTERPOLATION CFA interpolation, otherwise known as "demosaicing," constructs a full color image from the data provided by the previous step. Here, missing pixel values are estimated using one of the three interpolation methods covered in the fundamentals chapter. Because each pixel lies behind one of the three filters (red, green and blue) of the Bayer color filter array, an algorithm is used to estimate the color levels for each pixel. As seen in the pipeline illustration, the black and white image is separated into these three color channels, so the image triples in size.



5. NEUTRAL BALANCE Neutral white balancing eliminates color casts, making the neutrals appear truly neutral. This step can take place within the camera, by adjusting the white balance to match an environment's color temperature prior to capture, or in post-processing.

6. GAMMA CORRECTION Gamma correction adjusts an image for monitor display, re-encoding the sensor's linear values to match the display's nonlinear response (a small numeric sketch follows this list).

7. COLOR SPACE At this step, tristimulus values (XYZ) are translated into the working LAB/RGB space. Here, color spaces or "profiles" (sRGB, Adobe RGB) can be applied for output, prior to making any subjective changes.

8. SUBJECTIVE CORRECTIONS The way subjective corrections are made is the one step within the pipeline that differentiates RAW files from JPEG files. For JPEGs these corrections are referred to as "manufacturing processes" because they occur within the camera after the image is demosaiced, and they include increased contrast and detail. RAW files are much larger and lossless, meaning they can withstand significantly more adjustment in post-processing than a JPEG can without suffering in overall quality. Because of this, there are many subjective corrections that can be made, and professional photographers use the RAW format for this flexibility. These adjustments are truly subjective, as they are the ones made in Photoshop or Lightroom after import. They include, but are not limited to: exposure correction (guided by the histogram), noise reduction (luminance), lens distortion correction (correcting for vignetting and aberrations), brightness and contrast adjustments, and sharpening. After these adjustments are finalized, the image is exported as a rendered output file.
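The gamma step mentioned above is compact enough to show numerically. This sketch assumes linear sensor values normalized to 0..1 and the common display gamma of 2.2; real color spaces use slightly different curves.

```python
# Gamma-encode linear values for display: out = in^(1/2.2).
import numpy as np

linear = np.linspace(0, 1, 5)          # linear sensor values
encoded = linear ** (1 / 2.2)          # brighten midtones for display

for lin, enc in zip(linear, encoded):
    print(f"{lin:.2f} -> {enc:.2f}")   # e.g. 0.25 -> 0.53
```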

Page Design by Meghan Connor





processing



Neutral Balance Unlike the human eye, cameras cannot adjust to the different color temperatures of light. The human eye performs an adjustment known as chromatic adaptation; at its core, this process makes the color white appear white no matter what light is present. Since a camera does not have this capability, the photographer must set the white balance in the camera, either by identifying what light source is present in the environment or by performing a custom white balance against a neutral surface like a gray card. Alternatively, one can make a picture neutral in post-processing with software. Either way, some adjustment must be made to achieve a neutral tone, because the camera cannot make these decisions on its own. In some cases the artist may deliberately choose not to be neutral in order to achieve a certain effect.

[Figure: the same image rendered with a cool tone, a neutral tone and a warm tone]

28 processing


This figure shows the different symbols that represent white balance presets on a camera. The numbers give the color temperature in kelvin (K). A camera can also produce a custom white balance by using a neutral surface to adjust to the color temperature of the environment.

Illustration from Google Images
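Software neutral balancing can be done many ways; one simple illustration is the "gray world" method sketched below, which assumes the scene averages to neutral gray. This is an example approach, not the method any particular camera or editor uses.

```python
# Gray-world white balance: scale each channel toward a common mean.
import numpy as np

rgb = np.random.randint(0, 256, (100, 100, 3)).astype(np.float64)

means = rgb.reshape(-1, 3).mean(axis=0)   # average of each channel
gains = means.mean() / means              # per-channel scale to gray
balanced = np.clip(rgb * gains, 0, 255).astype(np.uint8)
```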



Sharpening & Blurring Sharpening and blurring are the most common image processing methods. Sharpening enhances the high frequencies in an image, which usually lie along the edges within the image; it is a differentiation process because it makes neighboring pixels at an edge look more different from one another. Blurring, however, is an averaging process because it suppresses the high frequencies in an image to make the edges less apparent.

[Figure: original image with only color toning applied]


Convolution kernels can be adjusted to sharpen or blur an image, usually over a 3x3 pixel group. The surrounding pixels are used to alter the center pixel's value, which changes the amount of contrast between frequencies.


[Figure: example 3x3 convolution kernels]

Blurring Positive values surrounding the center pixel value will result in blurring of the high frequencies in the image.

Sharpening Negative values surrounding the center pixel will result in sharpening of the high frequencies in the image, as in the kernel below:

-1 -1 -1
-1  9 -1
-1 -1 -1


Unsharp Masking Unsharp masking kernels are characterized by a positive center pixel with all surrounding pixels negative, like the kernel above. Conceptually, the process blurs the image, creates a mask from the blur, and then adds the mask back to the original to sharpen it.
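The kernel shown earlier can be applied with a few lines of SciPy. This is a sketch on a placeholder grayscale array, not a full unsharp masking implementation.

```python
# Apply a 3x3 sharpening kernel by convolution.
import numpy as np
from scipy import ndimage

gray = np.random.randint(0, 256, (64, 64)).astype(np.float64)

kernel = np.array([[-1, -1, -1],
                   [-1,  9, -1],
                   [-1, -1, -1]])        # weights sum to 1

sharpened = ndimage.convolve(gray, kernel, mode="reflect")
sharpened = np.clip(sharpened, 0, 255).astype(np.uint8)
```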




output



Dots per inch vs. Pixels per inch Dots and pixels are the ingredients that make up an image. Dots are binary objects, meaning they have only two levels, and belong to non-continuous tone printing processes like inkjet and laser printing. Pixels, on the other hand, are continuous objects with multiple gray levels; scanners and monitors all rely on the continuous capabilities of pixels. To convert from dots to pixels, or vice versa, we use the equation below, known as the Rule of 16.

Rule of 16: dpi / 16 = ppi

DPI (dots per inch): the smallest binary component a device can generate; used by inkjet and laser printers.

PPI (pixels per inch): the smallest component of an image that can display gray levels; used by continuous tone devices like scanners, monitors and dye-sublimation printers.
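The Rule of 16 in code, with made-up example values:

```python
# Convert between binary dots and continuous-tone pixels.
def dpi_to_ppi(dpi):
    return dpi / 16     # Rule of 16: dpi / 16 = ppi

def ppi_to_dpi(ppi):
    return ppi * 16

print(dpi_to_ppi(1440))  # 90.0 effective pixels per inch
print(ppi_to_dpi(300))   # 4800 dots per inch
```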


Half-toning

[Figure: original image beside an exaggerated color half-tone image]

Half-toning is a non-continuous tone printing process that utilizes pure inks such as cyan, magenta, yellow and black (CMYK). The technique simulates continuous tone with dots of varying shape, size and frequency. To make a half-tone image out of a continuous tone image, spatial resolution must be sacrificed to gain proper tonal resolution. The continuous image is projected through a screen of dots: darker areas get bigger dots whereas lighter areas get smaller, sparser dots. The black and white areas in combination create shades of gray. With color, the same process is repeated for each color channel.
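The core trade of spatial for tonal resolution can be sketched in a toy halftone: each 4x4 block of gray pixels becomes one "dot" whose ink coverage tracks the block's darkness. A real halftone screen also varies dot shape and angle; this is only the idea.

```python
# Toy halftone: one variable-size ink dot per 4x4 block.
import numpy as np

gray = np.random.randint(0, 256, (64, 64))
cell = 4                                        # block size per dot

out = np.ones_like(gray)                        # 1 = paper, 0 = ink
for y in range(0, gray.shape[0], cell):
    for x in range(0, gray.shape[1], cell):
        darkness = 1 - gray[y:y+cell, x:x+cell].mean() / 255
        n_ink = round(darkness * cell * cell)   # darker block -> bigger dot
        dot = np.ones((cell, cell), dtype=gray.dtype)
        dot.reshape(-1)[:n_ink] = 0             # fill n_ink pixels with ink
        out[y:y+cell, x:x+cell] = dot
```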


Printing Technology There are several types of printing, but the most common and well known are inkjet, laser and dye-sublimation. Each has a significantly different process and produces very different final prints.

Laser Laser printers are most commonly found in offices and work best for high quality text documents and acceptable photographs. Each print is quite inexpensive; however, printer prices range from as low as $15 up to $20,000 for industrial machines.

Dye-sublimation Dye-sublimation printers are best for printing photographs. It is the only continuous tone printing process among inkjet, laser and dye-sub. The printing can be slow and expensive because only certain types of paper can be used.



Inkjet

Inkjet printers are the most popular and widely used printers. There are two drop-on-demand processes within inkjet printing: piezoelectric and thermal. A piezoelectric head uses voltage to alter the shape of a piezo element, creating pressure that forces the ink out of the cartridge. In thermal printers, ink is stored in chambers that each have a heater; to eject a droplet, heat is applied to the ink to vaporize it and create a bubble that forces ink out of the cartridge.

All images from Google images


