Imaging Systems
Nicole Leclair

Contents

Fundamentals: Resolution, Spatial Resolution, Tonal Resolution, Spectral Resolution, Temporal Resolution, Calculating Image Size, Histograms
Capture: Color Filter Array; CCD, CMOS, Foveon
Process: Image Processing Pipeline, Sharpening
Print: dpi / ppi / lpi, Halftoning, Analog Printers, Digital Printers
Conservation Imaging: Lighting Techniques, Imaging Techniques
Still images are composed of three resolutions: spatial, tonal, and spectral. These three resolutions are multiplied to give us the image size. Moving images (videos) must also factor in temporal resolution:

Image Size = Spatial x Tonal x Spectral x (Temporal)
Spatial Resolution

Spatial resolution describes the number of pixels in an image in the X and Y dimensions. For example, a high-definition image has 1920 by 1080 pixels. In other words, it describes the amount of detail the image can display clearly. The X and Y dimensions in pixels are multiplied to give us the first number in the image size equation.

Spatial resolution is what is affected when we resize an image. Interpolation is the process by which the computer samples existing pixels in order to create new ones. The three methods of resizing are shown below.
Nearest Neighbor interpolation is the fastest technique. However, because this method simply copies the closest existing pixel, hard edges become jagged and the results show noticeable artifacts.
Bilinear interpolation provides better image quality than nearest neighbor without adding too much time for resizing.
Bicubic interpolation yields the highest quality image, but requires significantly more time than other methods.
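As a rough illustration of the fastest method, here is a minimal nearest-neighbor resize sketch in Python (the function name and the list-of-lists image format are assumptions for illustration, not from the course):

```python
def resize_nearest(img, new_w, new_h):
    """Nearest-neighbor resize: each output pixel simply copies the
    closest input pixel. Fast, but hard edges come out jagged."""
    old_h, old_w = len(img), len(img[0])
    return [[img[y * old_h // new_h][x * old_w // new_w]
             for x in range(new_w)]
            for y in range(new_h)]

# Doubling a 2x2 image repeats each pixel as a 2x2 block
small = [[1, 2],
         [3, 4]]
big = resize_nearest(small, 4, 4)
```

Bilinear and bicubic interpolation instead blend the values of 4 or 16 surrounding pixels, trading speed for smoother results.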
Tonal Resolution
An image's tonal resolution expresses the number of brightness levels in the photograph. This number is often given as bit depth, the number of bits per pixel. Humans can differentiate between up to about 128 levels (7 bits/pixel). In the image size equation, multiply by the number of bits per pixel.
To determine the number of light levels from bit depth, use the equation:

number of levels = 2^(number of bits per pixel)
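The formula can be checked with a one-line Python function (the function name is my own):

```python
def levels(bits_per_pixel):
    """Number of distinguishable brightness levels: 2 ** bits."""
    return 2 ** bits_per_pixel

# 8 bits/pixel gives 256 levels; 7 bits/pixel gives the roughly
# 128 levels a human observer can distinguish.
eight_bit = levels(8)
seven_bit = levels(7)
```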
Spectral Resolution

An image's spectral resolution describes the number of color channels in the image file. Typically, this number is 3, representing the Red, Green, and Blue (RGB) additive colors. This number is not applicable to black & white cameras or to cameras that shoot through three separate color filters.
Each of the three RGB filters transmits its own color and absorbs the other two. Combining two of the additive primaries produces a subtractive color. The subtractive colors, used in print, are Cyan, Magenta, Yellow, and Black (the result of combining Cyan, Magenta, and Yellow).
Temporal Resolution

Temporal resolution only applies to time-based works. It describes the frames per second (fps) of a video. The standard in the U.S. for televisions and other electronics is 30 fps, which refreshes the image often enough that the human eye will not perceive flicker (picture the flickering of a silent film from the early days of cinematography, shot at a much lower frame rate). In the image size equation, we multiply by the frames per second.
Calculating Image Size

Image Size = Spatial x Tonal x Spectral x (Temporal)

I have a 16GB memory card in my Nikon D7000. I will be shooting a series of still images at 1024 x 1024 pixels, at a bit depth of 8 bits per pixel. This camera makes color photographs without separate color filters, so the spectral resolution is 3. How many images will fit on this memory card? To solve this problem, we begin by multiplying:

(1024 x 1024 pixels) x (8 bits/pixel) x 3 = 25,165,824 bits

The two pixel units cancel each other, so we know our resulting answer is expressed in bits. Now we need to convert bits to the units of my memory card, the capacity of which is expressed in gigabytes:

1 Byte = 8 Bits
1 Kilobyte = 1024 Bytes
1 Megabyte = 1024 Kilobytes

With these units defined, we first divide our answer by 8, then by 1024 three times:

25,165,824 / 8 = 3,145,728 Bytes
3,145,728 / 1024 = 3,072 Kilobytes
3,072 / 1024 = 3 Megabytes
3 / 1024 = 0.00292969 Gigabytes

This is the size, in gigabytes, of one image file. Dividing 16 Gigabytes by the size of one image file tells us how many photographs I can take:

16 / 0.00292969 = 5,461 images!
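The whole worked example can be reproduced in a few lines of Python (the function names are my own):

```python
def image_size_bits(width, height, bits_per_pixel, channels=3):
    """Image Size = Spatial x Tonal x Spectral, in bits."""
    return width * height * bits_per_pixel * channels

def images_per_card(card_gb, size_bits):
    """How many images of a given size fit on a memory card."""
    size_gb = size_bits / 8 / 1024 / 1024 / 1024   # bits -> GB
    return int(card_gb / size_gb)

# The D7000 example above: 1024 x 1024 px, 8 bits/pixel, 3 channels
size = image_size_bits(1024, 1024, 8)    # 25,165,824 bits (3 MB)
count = images_per_card(16, size)        # 5,461 images
```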
Histograms

Histograms show the distribution of highlights, midtones, and shadows in a photograph. We describe images as high-contrast, continuous tone, high-key, or low-key:

A high-contrast image is dominated by extreme highlights and shadows, with few midtones

A continuous tone image has a good range of highlights, midtones, and shadows

A high-key image is composed largely of highlights

A low-key image is composed largely of shadows
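A histogram is computed by binning pixel values, shadows on the left and highlights on the right. A minimal Python sketch (the function name and bin count are assumptions for illustration):

```python
def histogram(pixels, bins=8, levels=256):
    """Count how many pixel values fall into each brightness bin;
    bin 0 holds the deepest shadows, the last bin the highlights."""
    counts = [0] * bins
    for p in pixels:
        counts[p * bins // levels] += 1
    return counts

# A high-key image concentrates its counts in the upper bins
high_key = histogram([200, 220, 240, 250, 30])
```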
Color Filter Array
The Color Filter Array, often referred to as the Bayer Pattern after the inventor of the most popular iteration, describes the distribution of red, green, and blue filters on a sensor. Because the human eye is twice as sensitive to the color green, there are twice as many green filters on a sensor as there are red or blue.
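The standard tiling can be sketched in a few lines of Python (the `bayer_filter` helper and the RGGB tile orientation are illustrative assumptions):

```python
def bayer_filter(row, col):
    """Color of the Bayer-pattern filter at a sensor site, using
    the common RGGB 2x2 tile repeated across the sensor."""
    tile = [["R", "G"],
            ["G", "B"]]
    return tile[row % 2][col % 2]

# Every 2x2 tile contains two green filters for every one red and
# one blue, matching the eye's greater sensitivity to green.
colors = [bayer_filter(r, c) for r in range(2) for c in range(2)]
```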
A recent development is an RGBW sensor. The "white" areas are actually transparent, giving the sensor more light-gathering capability and increasing the light sensitivity by about one stop. These transparent filters require a sacrifice in color resolution, and they currently suffer increased noise as well as clipping in the highlights. Because this is a new technology, interpolation remains a challenge, and more storage space is needed.
Sensors Sensors have replaced film as the objects that receive light in a camera. Sensors generate analog electrical signals from incoming light, which will later be converted to digital values via the Analog to Digital Converter.
Passive Pixel CMOS (Complementary Metal-Oxide Semiconductor) sensors use wires to transfer charges to the amplifier, which is faster, but generates noise.

Charged Coupled Device (CCD) sensors read one detector at a time across each horizontal line. CCDs have the cleanest signal because the charge passes through no wires or dead space in which noise can accumulate before it reaches the amplifier.

Active Pixel CMOS sensors include an amplifier with each detector, reducing noise by shortening the wires and the distance between detector and amplifier.
Foveon CMOS sensors stack red, green, and blue detector layers at every pixel site, creating full-color pixels everywhere on the chip. This gives them effectively three times the color resolution of sensors that use a color filter array, which trade spatial resolution for color information on a single layer.
Image Processing Pipeline

[Diagram: analog signals from the sensor pass through the Analog to Digital Converter, producing "raw" sensor data in 3 color channels.]
The following steps happen automatically and are packaged into a .jpeg file if you choose to shoot in the .jpeg format. You have more control over these steps in post-processing if you choose to shoot and process a raw format:

1. The chip receives analog electrical signals (volts) after light enters the camera
2. The Analog to Digital Converter translates analog signals to digital values
3. Each color channel has a set of digital values
4. This data is packaged into a raw file format (.nef, .crw, .ptx, etc.)
5. Neutral Balance eliminates color casts and makes the neutrals truly neutral; it can take place in-camera or in post-processing
6. CFA (Color Filter Array) Interpolation, also called demosaicing, translates CFA data into the actual image
7. Gamma Correction visually adjusts the image for display on a monitor
8. Exposure Correction attempts to correct over- or under-exposure
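Gamma correction, step 7 above, can be sketched numerically. A minimal illustration using the common display gamma of 2.2 (the function name and rounding choice are my own):

```python
def gamma_correct(value, gamma=2.2, levels=256):
    """Remap a linear sensor value for display: normalize, raise
    to 1/gamma, and rescale. Gamma 2.2 is typical for monitors."""
    normalized = value / (levels - 1)
    return round(normalized ** (1 / gamma) * (levels - 1))

# Black and white are unchanged; midtones are brightened
# substantially, which is why un-corrected raw data looks dark.
mid = gamma_correct(64)
```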
Artistic adjustments to brightness, sharpness, contrast, or color are all optional adjustments made by the photographer after the above steps are completed in the raw processor:
Sharpening

Convolution kernels apply to a small group of pixels at a time, often a 3x3 or 5x5 group. The values of the surrounding pixels are used to alter the center pixel value. Convolution kernels affect the amount of contrast between neighboring pixels.
Kernels with all-positive values average neighboring pixels together and blur the image

Kernels with negative surrounding values exaggerate differences between neighbors and sharpen the image
Unsharp masking blurs the image, subtracts the blur from the original to create a mask of the fine detail, and adds that mask back to the original image in order to sharpen it
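The convolution idea can be sketched in plain Python. A minimal 3x3 version that leaves the image borders untouched (the function name and border handling are my own choices):

```python
def convolve3x3(img, kernel):
    """Apply a 3x3 convolution kernel: each interior pixel becomes
    the weighted sum of itself and its 8 neighbors."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]        # borders left unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(kernel[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

# All-positive weights average neighbors (blur); negative
# surrounding weights exaggerate differences (sharpen).
blur    = [[1/9] * 3 for _ in range(3)]
sharpen = [[ 0, -1,  0],
           [-1,  5, -1],
           [ 0, -1,  0]]
```

Note that both kernels sum to 1, so flat areas of the image pass through unchanged; only edges and detail are affected.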
dpi / ppi / lpi

Output is described in dots per inch, pixels per inch, or lines per inch. While these terms are often used interchangeably, that usage is not accurate. Pixels per inch are indeed the same as lines per inch, but neither can be exchanged for dots per inch.

ppi/lpi: These units describe an object capable of expressing continuous tone, or multiple light levels. If an object can capture or display continuous tone, its output can be expressed in pixels per inch. This includes scanners, cameras, and monitors.

dpi: This unit describes a binary object: one that cannot express continuous tone. This includes inkjet and laser printers. These objects simulate continuous tone through a process called halftoning.
dpi can be converted to ppi/lpi by dividing the dpi value by the square root of the number of dots per pixel, or the number of gray levels the printer can express.
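As a worked example of this conversion (the 2400 dpi printer simulating 256 gray levels is hypothetical):

```python
import math

def dpi_to_ppi(dpi, gray_levels):
    """Effective ppi of a binary printer: divide the dpi by the
    square root of the number of dots used to build one pixel."""
    return dpi / math.sqrt(gray_levels)

# A 2400 dpi printer simulating 256 gray levels groups its dots
# into 16 x 16 cells, for an effective 150 ppi.
effective = dpi_to_ppi(2400, 256)
```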
Halftoning

Digital images are captured and stored as pixels, but to print these images on an inkjet or laser printer, they must be halftoned so that a binary object can express them. Halftoning converts pixels into dots of ink that can be placed on paper. In an exaggerated halftone effect, linear moiré patterns can be seen.
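The simplest possible halftone, a fixed threshold, can be sketched in Python; real printers use dither matrices or error diffusion for smoother tone (the function name and threshold value are assumptions):

```python
def halftone(pixels, threshold=128):
    """Threshold each continuous-tone value to a binary dot:
    1 = place ink (dark pixel), 0 = leave paper blank."""
    return [1 if p < threshold else 0 for p in pixels]

# Dark values become ink dots; light values stay paper-white
dots = halftone([10, 100, 130, 250])
```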
Analog Printing

Analog printing uses a metal plate in a mechanical press. This process is very fast, but because each plate is expensive to make, it is only feasible for long runs such as newspapers, magazines, or books printed by the thousands. Paper can be loaded "roll to roll," with long rolls of paper fed through very quickly, or sheet fed using pre-cut sheets of paper, a slightly slower method.

Rotogravure is the method of analog printing that comes closest to expressing continuous tone, because the plate is engraved at various depths, allowing the page to receive varying amounts of ink from a single plate. This method is considered the best for reproducing photographs, and is still used by National Geographic to rapidly print high-quality images in large volumes.
[Diagram: rotogravure press, showing the impression roller and doctor blade.]
Digital Printing

Digital printing is slower, but much more flexible than analog printing. The digital file and software become the "plate" that dictates what appears on paper. This process is much more cost-efficient for small jobs since a custom plate does not need to be made. Digital printers are composed of their marking engines (the method of ink application) and a Raster Image Processor (RIP). The RIP may be third party or manufacturer-specific. The RIP is responsible for:

Rasterizing (vector to raster translation)
Color management (RGB to CMYK)
Halftoning (continuous tone to binary dots)
Job queuing and nesting
Inkjet
Inkjet printers are the most popular digital printers. These devices apply liquid ink to the paper surface in a few different methods.
Drop On-Demand inkjet printers can apply ink with vibrations or heat:

Piezoelectric: vibrations from a piezo ceramic element push ink onto paper. Capable of multiple dot sizes, but slow.

Thermal (Bubble Jet): heat pushes ink onto paper. Uniform dot size, and fewer paper jams.
Continuous inkjet printers emit a continuous stream of ink droplets: a pump pushes ink onto paper, with the droplets steered by a high-voltage deflection plate. High speed printing, but slow drying.
Laser

Laser printers fuse a powder onto the paper surface. These printers are best used for text documents, and are less successful at reproducing photographs. For laser printers, the RIP is housed in the printer instead of in the computer.
[Diagram: laser printer components — laser scanning unit, toner hopper, developer roller, photoreceptor drum assembly, corona wires, fuser, and paper path.]
Dye-Sublimation

Dye-sublimation printers produce continuous tone images with 256 levels, so no halftoning is needed. These printers are especially expensive, and certainly not worth the investment for printing text.

[Diagram: dye-sublimation printer — a thermal head transfers cyan, magenta, yellow, and clear panels from a ribbon onto the paper.]
Conservation Imaging

Lighting Techniques

Conservationists want to learn as much about a painting as possible without risking its structural integrity. A variety of lighting and imaging techniques allow them to discern the amount of damage, date of creation, techniques used, and other valuable information from an oil painting with minimal physical contact or lasting effects.
First the painting is lit with white light at 90 degrees (perpendicular) to the surface, under reflected light. Next the white light is placed almost parallel to the surface to create a raking light. These lights reveal variations in gloss and application as well as damage such as flaking or distortion. Unusually raised areas could indicate re-working or an underpainting.
Illuminating a painting with ultraviolet light causes aged varnishes and some pigments to fluoresce. The conservationist can detect differences in age and chemical composition, which would indicate prior restorations. UV light will also reveal any faded ink inscriptions on the back of a work.
Imaging Techniques

Infrared and X-ray imaging require dedicated imaging devices or machines. These techniques are more costly and are used more sparingly, especially X-ray, which requires a dedicated facility and a trained operator. They are, however, very useful for looking beneath the pigments of a painting to reveal underdrawings or underpaintings that are otherwise concealed.
Infrared light allows the conservationist to look beneath the paint to seek out any graphite, charcoal, or ink underdrawings. The light is absorbed by these dark materials and reflected by white surfaces. Infrared Reflectography uses an IR-sensitive CCD sensor.

[Diagram: the electromagnetic spectrum]
Radiography involves placing sheet film against the paint layer and exposing the painting and film to a small amount of electromagnetic radiation. X-rays can penetrate the top layers of a work to reveal underpaintings and structural elements. Through this technique, conservationists discovered that about 15% of the paintings at the Van Gogh Museum have some sort of underpainting.
[Figure: a section of a painting's surface corresponding to an underpainting (d), alongside X-ray synchrotron images of the same section in the Fe (c), Hg (f), and Sb (g) channels.]
X-Ray Fluorescence reveals the chemical composition of a painting. Different elements fluoresce differently under X-rays, which allows this process to separate specific chemical channels. This can aid in differentiating between the visible painting and an underpainting. With knowledge of which elements are in which pigments, an underpainting could be digitally recreated with fairly accurate color.
Notes

All content is based on the curriculum of the Imaging Systems course taught by Nitin Sampat at the Rochester Institute of Technology in the fall of 2013. All photographs are by Nicole Leclair unless otherwise indicated.

Conservation Imaging Photographs:
http://www.ecs.umass.edu/~mduarte/images/Underpainting-EUSIPCO11.pdf
http://www.williamstownart.org/techbulletins/images/WACC%20Imaging%20of%20Paintings.pdf
http://www.themorgan.org/collections/works/RomeAfterRaphael/raphaelPainting.asp