Institute of Management & Technical Studies
ENGINEERING GRAPHICS
500
Civil Engineering
www.imtsinstitute.com
IMTS (ISO 9001:2008 Internationally Certified)
ENGINEERING GRAPHICS
FOR MORE DETAILS VISIT US ON WWW.IMTSINSTITUTE.COM OR CALL ON +919999554621
CONTENTS

UNIT I
A Survey of Computer Graphics – Overview of Graphics Systems: Video Display Devices – Raster-Scan Systems – Random-Scan Systems – Input Devices – Hard-Copy Devices

UNIT II
Output Primitives: Points and Lines – Line-Drawing Algorithms – Loading the Frame Buffer – Circle-Generating Algorithms – Ellipse-Generating Algorithms – Other Curves – Parallel Curve Algorithms – Pixel Addressing and Object Geometry – Filled-Area Primitives – Character Generation – Attributes of Output Primitives: Line Attributes – Curve Attributes – Color and Grayscale Levels – Area-Fill Attributes – Character Attributes – Antialiasing.
UNIT III
Two-Dimensional Geometric Transformations: Basic Transformations – Matrix Representations and Homogeneous Coordinates – Composite Transformations – Other Transformations – Two-Dimensional Viewing: The Viewing Pipeline – Viewing Coordinate Reference Frame – Window-to-Viewport Coordinate Transformation – Clipping Operations – Point Clipping – Line Clipping: Cohen-Sutherland Line Clipping – Liang-Barsky Line Clipping – Polygon Clipping: Sutherland-Hodgman Polygon Clipping – Weiler-Atherton Polygon Clipping – Curve Clipping – Text Clipping – Exterior Clipping.
UNIT IV
Three-Dimensional Concepts: Three-Dimensional Display Methods – Three-Dimensional Geometric and Modeling Transformations: Translation – Rotation – Scaling – Other Transformations – Composite Transformations – Three-Dimensional Viewing: Viewing Pipeline – Viewing Coordinates – Projections – Clipping.
UNIT V
Graphical User Interfaces and Interactive Input Methods: The User Dialogue – Input of Graphical Data – Input Functions – Interactive Picture-Construction Techniques – Visible-Surface Detection Methods: Classification of Visible-Surface Detection Algorithms – Back-Face Detection – Depth-Buffer Method – Basic Illumination Models – Color Models and Color Applications: Properties of Light – Standard Primaries and the Chromaticity Diagram – Intuitive Color Concepts – RGB Color Model – YIQ Color Model – CMY Color Model – HSV Color Model.

UNIT QUESTIONS
UNIT I

UNIT STRUCTURE
1.0 Introduction
1.1 Objectives
1.2 A Survey of Computer Graphics
1.2.1 Computer-Aided Design
1.2.2 Presentation Graphics
1.2.3 Computer Art
1.2.4 Entertainment
1.2.5 Education and Training
1.2.6 Visualization
1.2.7 Image Processing
1.2.8 Graphical User Interfaces
1.3 Video Display Devices
1.3.1 Refresh Cathode-Ray Tubes
1.3.2 Raster-Scan Displays
1.3.3 Random-Scan Displays
1.3.4 Color CRT Monitors
1.3.5 Direct-View Storage Tubes
1.3.6 Flat-Panel Displays
1.3.7 Three-Dimensional Viewing Devices
1.3.8 Stereoscopic and Virtual-Reality Systems
1.4 Raster-Scan Systems
1.4.1 Video Controller
1.4.2 Raster-Scan Display Processor
1.5 Random-Scan Systems
1.6 Input Devices
1.6.1 Keyboards
1.6.2 Mouse
1.6.3 Trackball and Spaceball
1.6.4 Joystick
1.6.5 Data Glove
1.6.6 Digitizer
1.6.7 Image Scanner
1.6.8 Touch Panel
1.6.9 Light Pen
1.6.10 Voice Systems
1.7 Hard-Copy Devices
1.8 Summary
1.0 INTRODUCTION

Computers have become a powerful tool for the rapid and economical production of pictures. Graphics capabilities for both two-dimensional and three-dimensional applications are now common on general-purpose computers. Various devices are available for data input on graphics workstations.
1.1 OBJECTIVES

At the end of this unit, you should be able to:
Know the introductory concepts and the major application areas of computer graphics
Be familiar with the various video display devices
Understand the working of raster-scan and random-scan systems
Describe the various graphical input devices.
1.2 A SURVEY OF COMPUTER GRAPHICS

Computers have become a powerful tool for the rapid and economical production of pictures. Computer graphics is used routinely in science, engineering, medicine, business, industry, government, art, entertainment, advertising, education, and training.
1.2.1 Computer-Aided Design

Computer-aided design (CAD) methods are now routinely used in the design of buildings, automobiles, aircraft, watercraft, spacecraft, computers, and textiles. Objects are first displayed in a wireframe outline form that shows the overall shape and internal features of the objects. Wireframe displays also allow designers to quickly see the effects of interactive adjustments to design shapes.

Animations are often used in CAD applications. Real-time animations using wireframe displays on a video monitor are useful for testing the performance of a vehicle or system. Wireframe displays allow the designer to see into the interior of the vehicle and to watch the behavior of inner components during motion. Animations in virtual-reality environments are used to determine how vehicle operators are affected by certain motions.

The manufacturing process is also tied in to the computer description of designed objects to automate the construction of the product. A circuit-board layout, for example, can be transformed into a description of the individual processes needed to construct the layout.

Architects use interactive graphics methods to lay out floor plans that specify the positioning of rooms, doors, windows, stairs, shelves, counters, and other building features. Working from the display of a building layout on a video monitor, an electrical designer can try out arrangements for wiring, electrical outlets, and fire warning systems. With virtual-reality systems, designers can even go for a simulated "walk" through the rooms or around the outsides of buildings.
1.2.2 Presentation Graphics

Another major application area is presentation graphics, used to produce illustrations for reports or to generate 35-mm slides or transparencies for use with projectors. Presentation graphics is commonly used to summarize financial, statistical, mathematical, scientific, and economic data for research reports, managerial reports, consumer information bulletins, and other types of reports. Typical examples of presentation graphics are bar charts, line graphs, surface graphs, pie charts, and other displays showing relationships between multiple parameters.
1.2.3 Computer Art

Artists use a variety of computer methods, including special-purpose hardware, artist's paintbrush programs (such as Lumena), other paint packages (such as PixelPaint and SuperPaint), specially developed software, symbolic mathematics packages (such as Mathematica), CAD packages, desktop publishing software, and animation packages that provide facilities for designing object shapes and specifying object motions. Paintbrush programs allow artists to "paint" pictures on the screen of a video monitor. The picture is usually painted electronically on a graphics tablet (digitizer) using a stylus, which can simulate different brush strokes, brush widths, and colors. To create pictures the artist uses a combination of three-dimensional modeling packages, texture mapping, drawing programs, and CAD software. For "mathematical" art the artist uses a combination of mathematical functions, fractal procedures, Mathematica software, ink-jet printers, and other systems to create a variety of three-dimensional and two-dimensional shapes and stereoscopic image pairs.
1.2.4 Entertainment

Computer graphics methods are now commonly used in making motion pictures, music videos, and television shows. Graphics objects can be combined with live action, or graphics and image-processing techniques can be used to produce a transformation of one person or object into another (morphing).
1.2.5 Education and Training

Models of physical systems, physiological systems, population trends, or equipment can serve as educational aids; a color-coded diagram, for example, helps trainees to understand the operation of the system.
Examples of specialized systems are the simulators for practice sessions or training of ship captains, aircraft pilots, heavy-equipment operators, and air-traffic-control personnel. Some simulators have no video screens; for example, a flight simulator with only a control panel for instrument flying.
1.2.6 Visualization

Producing graphical representations for scientific, engineering, and medical data sets and processes is generally referred to as scientific visualization. Business visualization is used in connection with data sets related to commerce, industry, and other nonscientific areas.
1.2.7 Image Processing

In computer graphics, a computer is used to create a picture. Image processing, on the other hand, applies techniques to modify or interpret existing pictures, such as photographs and TV scans. Two principal applications of image processing are (1) improving picture quality and (2) machine perception of visual information, as used in robotics.

Medical applications also make extensive use of image-processing techniques for picture enhancement, in tomography, and in simulations of operations. Tomography is a technique of X-ray photography that allows cross-sectional views of physiological systems to be displayed. Both computed X-ray tomography (CT) and positron emission tomography (PET) use projection methods to reconstruct cross sections from digital data. These techniques are also used to monitor internal functions and show cross sections during surgery. Other medical imaging techniques include ultrasonics and nuclear medicine scanners. With ultrasonics, high-frequency sound waves, instead of X-rays, are used to generate digital data. Nuclear medicine scanners collect digital data from the radiation emitted from ingested radionuclides and plot color-coded images.

The last application is generally referred to as computer-aided surgery. Two-dimensional cross sections of the body are obtained using imaging techniques. Then the slices are viewed and manipulated using graphics methods to simulate actual surgical procedures and to try out different surgical cuts.

1.2.8 Graphical User Interfaces

It is common now for software packages to provide a graphical interface. A major component of a graphical interface is a window manager that allows a user to display multiple-window areas. Each window can contain a different process, with graphical or nongraphical displays. Interfaces also display menus and icons for fast selection of processing operations or parameter values. An icon is a graphical symbol that is designed to look like the processing option it represents.
1.3 VIDEO DISPLAY DEVICES

Typically, the primary output device in a graphics system is a video monitor. The operation of most video monitors is based on the standard cathode-ray tube (CRT) design.
1.3.1 Refresh Cathode-Ray Tubes

A beam of electrons (cathode rays) emitted by an electron gun passes through focusing and deflection systems that direct the beam toward specified positions on the phosphor-coated screen. The phosphor then emits a small spot of light at each position contacted by the electron beam. Because the light emitted by the phosphor fades very rapidly, some method is needed for maintaining the screen picture. One way to keep the phosphor glowing is to redraw the picture repeatedly by quickly directing the electron beam back over the same points. This type of display is called a refresh CRT.

Persistence is defined as the time it takes the emitted light from the screen to decay to one-tenth of its original intensity. Lower-persistence phosphors require higher refresh rates to maintain a picture on the screen without flicker. A phosphor with low persistence is useful for animation; a high-persistence phosphor is useful for displaying highly complex, static pictures.

The maximum number of points that can be displayed without overlap on a CRT is referred to as the resolution. A more precise definition of resolution is the number of points per centimeter that can be plotted horizontally and vertically, although it is often simply stated as the total number of points in each direction. High-resolution systems are often referred to as high-definition systems. Another property of video monitors is the aspect ratio. This number gives the ratio of vertical points to horizontal points. The following diagram shows the basic design of a magnetic deflection CRT.
Figure 1: Basic design of a magnetic deflection CRT
1.3.2 Raster-Scan Displays

The most common type of graphics monitor employing a CRT is the raster-scan display, based on television technology. In a raster-scan system, the electron beam is swept across the screen, one row at a time from top to bottom. As the electron beam moves across each row, the beam intensity is turned on and off to create a pattern of illuminated spots. Picture definition is stored in a memory area called the refresh buffer or frame buffer. This memory area holds the set of intensity values for all the screen points. Stored intensity values are then retrieved from the refresh buffer and "painted" on the screen one row (scan line) at a time. Each screen point is referred to as a pixel or pel (shortened forms of picture element). Home television sets and printers are examples of other systems using raster-scan methods.

In a simple black-and-white system, each screen point is either on or off, so only one bit per pixel is needed to control the intensity of screen positions. For a bilevel system, a bit value of 1 indicates that the electron beam is to be turned on at that position, and a value of 0 indicates that the beam intensity is to be off. On a black-and-white system with one bit per pixel, the frame buffer is commonly called a bitmap. For systems with multiple bits per pixel, the frame buffer is often referred to as a pixmap.

Refreshing on raster-scan displays is carried out at the rate of 60 to 80 frames per second, although some systems are designed for higher refresh rates. Sometimes, refresh rates are described in units of cycles per second, or Hertz (Hz), where a cycle corresponds to one frame. At the end of each scan line, the electron beam returns to the left side of the screen to begin displaying the next scan line. The return to the left of the screen, after refreshing each scan line, is called the horizontal retrace of the electron beam. And at the end of each frame (displayed in 1/80th to 1/60th of a second), the electron beam returns (vertical retrace) to the top left corner of the screen to begin the next frame. The following figure shows the raster-scan system that displays an object as a set of discrete points across each scan line.
Figure 2: Raster scan system that displays an object as a set of discrete points across each scan line
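The memory cost of a frame buffer follows directly from the resolution and the number of bits per pixel discussed above. A minimal sketch (the 640 x 480 resolution is an illustrative assumption, not a figure from the text):

```python
def frame_buffer_bytes(width, height, bits_per_pixel):
    """Memory needed to store one full frame, in bytes."""
    return (width * height * bits_per_pixel) // 8

# A 640 x 480 bilevel system (one bit per pixel) needs a bitmap of:
print(frame_buffer_bytes(640, 480, 1))   # 38400 bytes
# The same screen as a 24-bit full-color pixmap needs:
print(frame_buffer_bytes(640, 480, 24))  # 921600 bytes
```

Note how the 24-bit pixmap needs 24 times the storage of the bilevel bitmap for the same screen.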
1.3.3 Random-Scan Displays

When operated as a random-scan display unit, a CRT has the electron beam directed only to the parts of the screen where a picture is to be drawn. Random-scan monitors draw a picture one line at a time and for this reason are also referred to as vector displays (or stroke-writing or calligraphic displays). A pen plotter operates in a similar way and is an example of a random-scan, hard-copy device. The refresh rate on a random-scan system depends on the number of lines to be displayed. Picture definition is now stored as a set of line-drawing commands in an area of memory referred to as the refresh display file. Sometimes the refresh display file is called the display list, display program, or simply the refresh buffer. Random-scan systems are designed for line-drawing applications and cannot display realistic shaded scenes.
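The refresh display file described above can be pictured as a list of line-drawing commands that the display processor executes once per refresh cycle. A hypothetical sketch (the command format and the triangle data are invented for illustration):

```python
# Hypothetical refresh display file: each entry is one line-drawing command.
display_file = [
    ("line", (0, 0), (100, 0)),
    ("line", (100, 0), (50, 80)),
    ("line", (50, 80), (0, 0)),
]

def refresh_cycle(display_file, draw_line):
    """One refresh cycle: execute every command in the display file."""
    for command, start, end in display_file:
        if command == "line":
            draw_line(start, end)

# Count the strokes drawn in one cycle instead of driving real hardware.
segments = []
refresh_cycle(display_file, lambda a, b: segments.append((a, b)))
print(len(segments))  # 3
```

Because the refresh rate depends on the number of commands, a display file with many lines takes longer to redraw each cycle.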
1.3.4 Color CRT Monitors

A CRT monitor displays color pictures by using a combination of phosphors that emit different-colored light. The two basic techniques for producing color displays with a CRT are the beam-penetration method and the shadow-mask method.

The beam-penetration method for displaying color pictures has been used with random-scan monitors. Two layers of phosphor, usually red and green, are coated onto the inside of the CRT screen, and the displayed color depends on how far the electron beam penetrates into the phosphor layers. A beam of slow electrons excites only the outer red layer. A beam of very fast electrons penetrates through the red layer and excites the inner green layer. At intermediate beam speeds, a combination of red and green light is emitted to show two additional colors, orange and yellow.

Shadow-mask methods are commonly used in raster-scan systems (including color TV) because they produce a much wider range of colors than the beam-penetration method. A shadow-mask CRT has three phosphor color dots at each pixel position. One phosphor dot emits a red light, another emits a green light, and the third emits a blue light. This type of CRT has three electron guns, one for each color dot, and a shadow-mask grid just behind the phosphor-coated screen. The three electron beams are deflected and focused as a group onto the shadow mask, which contains a series of holes aligned with the phosphor-dot patterns. When the three beams pass through a hole in the shadow mask, they activate a dot triangle, which appears as a small color spot on the screen. The phosphor dots in the triangles are arranged so that each electron beam can activate only its corresponding color dot when it passes through the shadow mask.

Color CRTs in graphics systems are designed as RGB monitors. These monitors use shadow-mask methods and take the intensity level for each electron gun (red, green, and blue) directly from the computer system without any intermediate processing. An RGB color system with 24 bits of storage per pixel is generally referred to as a full-color system or a true-color system. The following figure shows a random-scan system that draws the component lines of an object in any order specified.
Figure 3: A random scan system that draws the component lines of an object in any order specified
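A 24-bit full-color pixel simply stores one 8-bit intensity per electron gun. The bit layout below is one common convention, used here only to illustrate the idea:

```python
def pack_rgb(r, g, b):
    """Pack three 8-bit gun intensities (red, green, blue) into 24 bits."""
    return (r << 16) | (g << 8) | b

def unpack_rgb(pixel):
    """Recover the three gun intensities from a packed 24-bit value."""
    return (pixel >> 16) & 0xFF, (pixel >> 8) & 0xFF, pixel & 0xFF

white = pack_rgb(255, 255, 255)
print(hex(white))         # 0xffffff
print(unpack_rgb(white))  # (255, 255, 255)
```

With 8 bits per gun, each gun can take 256 intensity levels, giving the roughly 16.7 million colors of a true-color system.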
1.3.5 Direct-View Storage Tubes

An alternative method for maintaining a screen image is to store the picture information inside the CRT instead of refreshing the screen. A direct-view storage tube (DVST) stores the picture information as a charge distribution just behind the phosphor-coated screen. Two electron guns are used in a DVST. One, the primary gun, is used to store the picture pattern; the second, the flood gun, maintains the picture display.

A DVST monitor has both disadvantages and advantages compared to the refresh CRT. Because no refreshing is needed, very complex pictures can be displayed at very high resolutions without flicker. Disadvantages of DVST systems are that they ordinarily do not display color and that selected parts of a picture cannot be erased. The following figure shows the operation of a delta-delta shadow-mask CRT.
Figure 4: Operation of a delta-delta shadow-mask CRT
1.3.6 Flat-Panel Displays

The term flat-panel display refers to a class of video devices that have reduced volume, weight, and power requirements compared to a CRT. A significant feature of flat-panel displays is that they are thinner than CRTs, and we can hang them on walls or wear them on our wrists. We can separate flat-panel displays into two categories: emissive displays and nonemissive displays. The emissive displays (or emitters) are devices that convert electrical energy into light. Plasma panels, thin-film electroluminescent displays, and light-emitting diodes are examples of emissive displays. Nonemissive displays (or nonemitters) use optical effects to convert sunlight or light from some other source into graphics patterns. The most important example of a nonemissive flat-panel display is a liquid-crystal device. Plasma panels, also called gas-discharge displays, are constructed by filling the region between two glass plates with a mixture of gases that usually includes neon.
Thin-film electroluminescent displays are similar in construction to a plasma panel. The difference is that the region between the glass plates is filled with a phosphor, such as zinc sulfide doped with manganese, instead of a gas. A third type of emissive device is the light-emitting diode (LED). A matrix of diodes is arranged to form the pixel positions in the display, and picture definition is stored in a refresh buffer. As in scan-line refreshing of a CRT, information is read from the refresh buffer and converted to voltage levels that are applied to the diodes to produce the light patterns in the display. Liquid-crystal displays (LCDs) are commonly used in small systems, such as calculators and portable laptop computers. The term liquid crystal refers to the fact that these compounds have a crystalline arrangement of molecules, yet they flow like a liquid.
1.3.7 Three-Dimensional Viewing Devices

Graphics monitors for the display of three-dimensional scenes have been devised using a technique that reflects a CRT image from a vibrating, flexible mirror. The Genisco SpaceGraph system, for example, uses a vibrating mirror to project three-dimensional objects into a 25-cm by 25-cm by 25-cm volume.
1.3.8 Stereoscopic and Virtual-Reality Systems

Stereoscopic views provide a three-dimensional effect by presenting a different view to each eye of an observer so that scenes appear to have depth. One way to produce a stereoscopic effect is to display each of the two views with a raster system on alternate refresh cycles. Stereoscopic viewing is also a component in virtual-reality systems. An interactive virtual-reality environment can also be viewed with stereoscopic glasses and a video monitor instead of a headset.
1.4 RASTER-SCAN SYSTEMS

Interactive raster graphics systems typically employ several processing units. In addition to the central processing unit, or CPU, a special-purpose processor, called the video controller or display controller, is used to control the operation of the display device.
1.4.1 Video Controller

A fixed area of the system memory is reserved for the frame buffer, and the video controller is given direct access to the frame-buffer memory. Frame-buffer locations, and the corresponding screen positions, are referenced in Cartesian coordinates. The coordinate origin is defined at the lower left screen corner. The screen surface is then represented as the first quadrant of a two-dimensional system, with positive x values increasing to the right and positive y values increasing from bottom to top. (On some personal computers, the coordinate origin is referenced at the upper left corner of the screen, so the y values are inverted.) Scan lines are then labeled from ymax at the top of the screen to 0 at the bottom. Along each scan line, screen pixel positions are labeled from 0 to xmax.

Two registers are used to store the coordinates of the screen pixels. Initially, the x register is set to 0 and the y register is set to ymax. The value stored in the frame buffer for this pixel position is then retrieved and used to set the intensity of the CRT beam. Then the x register is incremented by 1, and the process is repeated for the next pixel on the top scan line. This procedure is repeated for each pixel along the scan line.

A number of other operations can be performed by the video controller, besides the basic refreshing operations. For various applications, the video controller can retrieve pixel intensities from different memory areas on different refresh cycles. In high-quality systems, for example, two frame buffers are often provided so that one buffer can be used for refreshing while the other is being filled with intensity values. Then the two buffers can switch roles. This provides a fast mechanism for generating real-time animations, since different views of moving objects can be successively loaded into the refresh buffers. Finally, some systems are designed to allow the video controller to mix the frame-buffer image with an input image from a television camera or other input device. The following figure shows the architecture of a simple raster graphics system.
Figure 5: Architecture of a simple raster graphics system
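The two-register refresh procedure described above amounts to a pair of nested loops: the y register starts at ymax and counts down, and along each scan line the x register runs from 0 to xmax. A sketch (the callback standing in for the CRT beam is an assumption of this illustration):

```python
def refresh_frame(frame_buffer, set_intensity, xmax, ymax):
    """One refresh cycle: top scan line (ymax) down to 0, left to right."""
    y = ymax
    while y >= 0:
        x = 0
        while x <= xmax:
            # Fetch the stored intensity and drive the beam with it.
            set_intensity(x, y, frame_buffer[y][x])
            x += 1
        y -= 1

# Record the order in which pixels would be refreshed on a tiny 3 x 2 screen.
visited = []
buffer_3x2 = [[0] * 3 for _ in range(2)]  # xmax = 2, ymax = 1
refresh_frame(buffer_3x2, lambda x, y, i: visited.append((x, y)), xmax=2, ymax=1)
print(visited[0], visited[-1])  # (0, 1) (2, 0)
```

The first pixel refreshed is at the top left of the screen and the last is at the bottom right, matching the scan order in the text.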
1.4.2 Raster-Scan Display Processor

A raster system may contain a separate display processor, sometimes referred to as a graphics controller or a display coprocessor. The purpose of the display processor is to free the CPU from the graphics chores. In addition to the system memory, a separate display-processor memory area can also be provided.

A major task of the display processor is digitizing a picture definition given in an application program into a set of pixel-intensity values for storage in the frame buffer. This digitization process is called scan conversion. One way to reduce the storage requirements is to store each scan line as a set of integer pairs. One number of each pair indicates an intensity value, and the second number specifies the number of adjacent pixels on the scan line that are to have that intensity. This technique, called run-length encoding, can result in a considerable saving in storage space if a picture is to be constructed mostly with long runs of a single color. The disadvantages of encoding runs are that intensity changes are difficult to make and storage requirements actually increase as the length of the runs decreases.
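The run-length scheme just described can be sketched in a few lines; each scan line becomes a list of (intensity, run-length) pairs:

```python
def rle_encode(scan_line):
    """Encode a scan line as (intensity, run-length) pairs."""
    pairs = []
    for value in scan_line:
        if pairs and pairs[-1][0] == value:
            pairs[-1][1] += 1          # extend the current run
        else:
            pairs.append([value, 1])   # start a new run
    return [tuple(p) for p in pairs]

def rle_decode(pairs):
    """Expand (intensity, run-length) pairs back into pixel values."""
    line = []
    for value, count in pairs:
        line.extend([value] * count)
    return line

scan_line = [0, 0, 0, 0, 1, 1, 0]
encoded = rle_encode(scan_line)
print(encoded)  # [(0, 4), (1, 2), (0, 1)]
print(rle_decode(encoded) == scan_line)  # True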
1.5 RANDOM-SCAN SYSTEMS

Graphics commands in the application program are translated by the graphics package into a display file stored in the system memory. This display file is accessed by the display processor to refresh the screen. The display processor cycles through each command in the display file program once during every refresh cycle. Sometimes the display processor in a random-scan system is referred to as a display processing unit or a graphics controller. The following figure shows the architecture of a random-scan system with a display processor.
Figure 6: Architecture of a random scan graphics system with a display processor
1.6 INPUT DEVICES
1.6.1 Keyboards

An alphanumeric keyboard on a graphics system is used primarily as a device for entering text strings. The keyboard is an efficient device for inputting such nongraphic data as picture labels associated with a graphics display. Cursor-control keys and function keys are common features on general-purpose keyboards. Function keys allow users to enter frequently used operations in a single keystroke, and cursor-control keys can be used to select displayed objects or coordinate positions by positioning the screen cursor. Other types of cursor-positioning devices, such as a trackball or joystick, are included on some keyboards.
1.6.2 Mouse

A mouse is a small hand-held box used to position the screen cursor. Wheels or rollers on the bottom of the mouse can be used to record the amount and direction of movement. Another method for detecting mouse motion is with an optical sensor. The mouse is moved over a special mouse pad that has a grid of horizontal and vertical lines. The optical sensor detects movement across the lines in the grid.
1.6.3 Trackball and Spaceball

As the name implies, a trackball is a ball that can be rotated with the fingers or palm of the hand. While a trackball is a two-dimensional positioning device, a spaceball provides six degrees of freedom. Unlike the trackball, a spaceball does not actually move.
1.6.4 Joystick

A joystick consists of a small, vertical lever (called the stick) mounted on a base that is used to steer the screen cursor around. Most joysticks select screen positions with actual stick movement; others respond to pressure on the stick. The distance that the stick is moved in any direction from its center position corresponds to screen-cursor movement in that direction.
1.6.5 Data Glove

A data glove can be used to grasp a "virtual" object. The glove is constructed with a series of sensors that detect hand and finger motions.
1.6.6 Digitizer

A common device for drawing, painting, or interactively selecting coordinate positions on an object is a digitizer. These devices can be used to input coordinate values in either a two-dimensional or a three-dimensional space. One type of digitizer is the graphics tablet (also referred to as a data tablet), which is used to input two-dimensional coordinates by activating a hand cursor or stylus at selected positions on a flat surface. A hand cursor contains cross hairs for sighting positions, while a stylus is a pencil-shaped device that is pointed at positions on the tablet.

Acoustic (or sonic) tablets use sound waves to detect a stylus position. Either strip microphones or point microphones can be used to detect the sound emitted by an electrical spark from a stylus tip. The position of the stylus is calculated by timing the arrival of the generated sound at the different microphone positions. An advantage of two-dimensional acoustic tablets is that the microphones can be placed on any surface to form the "tablet" work area.
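For an acoustic tablet, the timing calculation reduces to distance = speed of sound x arrival time. A simplified sketch assuming one strip microphone along the left edge and one along the bottom edge of the tablet (the geometry and the timing values are illustrative assumptions, not specifications from the text):

```python
SPEED_OF_SOUND_CM_PER_S = 34300  # approximate speed of sound in air

def stylus_position(time_to_left_strip, time_to_bottom_strip):
    """Each strip microphone's arrival time gives the stylus's
    perpendicular distance (in cm) from that edge of the tablet."""
    x = SPEED_OF_SOUND_CM_PER_S * time_to_left_strip
    y = SPEED_OF_SOUND_CM_PER_S * time_to_bottom_strip
    return x, y

# A spark heard 1.0 ms after emission by the left strip and
# 0.5 ms after by the bottom strip:
x, y = stylus_position(0.001, 0.0005)
print(round(x, 2), round(y, 2))  # 34.3 17.15
```

Because sound travels about 34.3 cm per millisecond, sub-millisecond timing resolution is needed for useful positional accuracy.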
1.6.7 Image Scanner

Drawings, graphs, color and black-and-white photos, or text can be stored for computer processing with an image scanner by passing an optical scanning mechanism over the information to be stored.
1.6.8 Touch Panel

As the name implies, touch panels allow displayed objects or screen positions to be selected with the touch of a finger. A typical application of touch panels is for the selection of processing options that are represented with graphical icons. Optical touch panels employ a line of infrared light-emitting diodes (LEDs) along one vertical edge and along one horizontal edge of the frame.
1.6.9 Light Pen

Light pens are pencil-shaped devices used to select screen positions by detecting the light coming from points on the CRT screen. They are sensitive to the short burst of light emitted from the phosphor coating at the instant the electron beam strikes a particular point. An activated light pen, pointed at a spot on the screen as the electron beam lights up that spot, generates an electrical pulse that causes the coordinate position of the electron beam to be recorded. Although light pens are still with us, they are not as popular as they once were, since they have several disadvantages compared to other input devices that have been developed; for example, prolonged use of the light pen can cause arm fatigue.
1.6.10 Voice Systems

Speech recognizers are used in some graphics workstations as input devices to accept voice commands. The voice-system input can be used to initiate graphics operations or to enter data. These systems operate by matching an input against a predefined dictionary of words and phrases. A dictionary is set up for a particular operator by having the operator speak the command words to be used into the system.
1.7 HARD-COPY DEVICES
We can obtain hard-copy output for our images in several formats. The quality of the pictures obtained from a device depends on dot size and the number of dots per inch, or lines per inch, that can be displayed. To produce smooth characters in printed text strings, higher-quality printers shift dot positions so that adjacent dots overlap. Impact printers press formed character faces against an inked ribbon onto the paper; a line printer is an example of an impact device. Nonimpact printers and plotters use laser techniques, ink-jet sprays, xerographic processes (as used in photocopying machines), electrostatic methods, and electrothermal methods to get images onto paper.
Character impact printers often have a dot-matrix print head containing a rectangular array of protruding wire pins, with the number of pins depending on the quality of the printer. In a laser device, a laser beam creates a charge distribution on a rotating drum coated with a photoelectric material, such as selenium. Toner is applied to the drum and then transferred to paper. Ink-jet methods produce output by squirting ink in horizontal rows across a roll of paper wrapped on a drum. The electrically charged ink stream is deflected by an electric field to produce dot-matrix patterns. An electrostatic device places a negative charge on the paper, one complete row at a time along the length of the paper. Then the paper is exposed to a toner. The toner is positively charged and so is attracted to the negatively charged areas. Electrothermal methods use heat in a dot-matrix print head to output patterns on heat-sensitive paper.
1.8 SUMMARY
In this unit the major hardware and software features of computer graphics systems are surveyed. The predominant graphics display device is the raster refresh monitor. Flat-panel display technology is developing rapidly and may replace raster displays in the near future. For graphical input, keyboards, button boxes, and dials are used to enter text and data values. Hard-copy devices for graphics workstations include standard printers and plotters.
1.9 SELF ASSESSMENT QUESTIONS
Answer the following questions:
1. Define the term scientific visualization.
2. ________________ are commonly used in calculators and laptop computers.
3. Define the term resolution.
4. What is a trackball?
5. Name some of the hard-copy devices.

1.10 ANSWERS TO SELF ASSESSMENT QUESTIONS
1. The process of producing graphical representations for scientific, engineering, and medical data sets and processes is referred to as scientific visualization.
2. Liquid crystal displays.
3. The maximum number of points that can be displayed without overlap on a CRT is referred to as the resolution.
4. A trackball is a ball that can be rotated with the fingers or the palm of the hand; trackballs are often mounted on keyboards.
5. Dot-matrix, laser, ink-jet, electrostatic, and electrothermal printers, along with pen plotters, are some of the hard-copy devices.
UNIT II
OUTPUT PRIMITIVES AND ATTRIBUTES OF OUTPUT PRIMITIVES

UNIT STRUCTURE
2.0 Introduction
2.1 Objectives
2.2 Output Primitives
    2.2.1 Points and Lines
    2.2.2 Line Drawing Algorithms
    2.2.3 Loading the Frame Buffer
    2.2.4 Circle Generating Algorithms
    2.2.5 Ellipse Generating Algorithms
    2.2.6 Other Curves
    2.2.7 Parallel Curve Algorithms
    2.2.8 Pixel Addressing and Object Geometry
    2.2.9 Filled Area Primitives
    2.2.10 Character Generation
2.3 Attributes of Output Primitives
    2.3.1 Line Attributes
    2.3.2 Curve Attributes
    2.3.3 Color and Grayscale Levels
    2.3.4 Area Fill Attributes
    2.3.5 Character Attributes
    2.3.6 Antialiasing
2.4 Summary
2.5 Self Assessment Questions
2.6 Answers to Self Assessment Questions
2.0 INTRODUCTION
Each output primitive is specified with input coordinate data and other information about the way that objects are to be displayed. This unit examines device-level algorithms for displaying two-dimensional output primitives, with particular emphasis on scan-conversion methods for raster graphics systems. It describes the output primitives and the attributes of the output primitives.
2.1 OBJECTIVES
At the end of this unit, you should be able to
- know the line-drawing, circle-drawing, and ellipse-drawing algorithms
- have an overview of the attributes of the output primitives
- have a thorough study of the antialiasing techniques
- study the process of character generation.
2.2 OUTPUT PRIMITIVES
Graphics programming packages provide functions to describe a scene in terms of a set of basic geometric structures, referred to as output primitives. Each output primitive is specified with input coordinate data and other information about the way that objects are to be displayed. Additional output primitives that can be used to construct a picture include circles and other conic sections, quadric surfaces, spline curves and surfaces, polygon color areas, and character strings.
2.2.1 Points and Lines
Point plotting is accomplished by converting a single coordinate position furnished by an application program into appropriate operations for the output device in use. With a CRT monitor, for example, the electron beam is turned on to illuminate the screen phosphor at the selected location. For a black-and-white raster system, on the other hand, a point is plotted by setting the bit value corresponding to a specified screen position within the frame buffer to 1. Line drawing is accomplished by calculating intermediate positions along the line path between two specified endpoint positions. An output device is then directed to fill in these positions between the endpoints. For analog devices, such as a vector pen plotter or a random-scan display, a straight line can be drawn smoothly from one endpoint to the other. To load a specified color into the frame buffer at a position corresponding to column x along scan line y, we will assume we have available a low-level procedure of the form

   setPixel (x, y)

To retrieve the current frame-buffer intensity setting for a specified location, we use the low-level function

   getPixel (x, y)
2.2.2 Line Drawing Algorithms
The Cartesian slope-intercept equation for a straight line is

   y = m·x + b

with m representing the slope of the line and b the y intercept. Given that the two endpoints of a line segment are specified at positions (x1, y1) and (x2, y2), we can determine values for the slope m and y intercept b with the following calculations:

   m = (y2 − y1) / (x2 − x1)
   b = y1 − m·x1

For any given x interval ∆x along a line, we can compute the corresponding y interval ∆y as

   ∆y = m·∆x

Similarly, we can obtain the x interval ∆x corresponding to a specified ∆y as

   ∆x = ∆y / m
DDA Algorithm
The digital differential analyzer (DDA) is a scan-conversion line algorithm based on calculating either ∆y or ∆x.

   #include "device.h"
   #define ROUND(a) ((int) (a + 0.5))

   void lineDDA (int xa, int ya, int xb, int yb)
   {
      int dx = xb - xa, dy = yb - ya, steps, k;
      float xIncrement, yIncrement, x = xa, y = ya;

      if (abs (dx) > abs (dy)) steps = abs (dx);
      else steps = abs (dy);
      xIncrement = dx / (float) steps;
      yIncrement = dy / (float) steps;

      setPixel (ROUND (x), ROUND (y));
      for (k = 0; k < steps; k++) {
         x += xIncrement;
         y += yIncrement;
         setPixel (ROUND (x), ROUND (y));
      }
   }
The DDA algorithm is a faster method for calculating pixel positions than evaluating the line equation y = m·x + b directly at each x position.
Bresenham's Line Algorithm
An accurate and efficient raster line-generating algorithm, developed by Bresenham, scan converts lines using only incremental integer calculations, and it can be adapted to display circles and other curves. The vertical axes show scan-line positions, and the horizontal axes identify pixel columns. The steps for |m| < 1 are:
1. Input the two line endpoints and store the left endpoint in (x0, y0).
2. Load (x0, y0) into the frame buffer; that is, plot the first point.
3. Calculate constants ∆x, ∆y, 2∆y, and 2∆y − 2∆x, and obtain the starting value for the decision parameter as
      p0 = 2∆y − ∆x
4. At each xk along the line, starting at k = 0, perform the following test:
   If pk < 0, the next point to plot is (xk + 1, yk) and
      pk+1 = pk + 2∆y
   Otherwise, the next point to plot is (xk + 1, yk + 1) and
      pk+1 = pk + 2∆y − 2∆x
5. Repeat step 4 ∆x times.
2.2.3 Loading the Frame Buffer When straight line segments and other objects are scan converted for display with a raster system, framebuffer positions must be calculated. Scanconversion algorithms generate pixel positions at successive unit intervals. This allows us to use incremental methods to calculate framebuffer addresses.
As a specific example, suppose the frame-buffer array is addressed in row-major order and that pixel positions vary from (0, 0) at the lower left screen corner to (xmax, ymax) at the top right corner. For a bilevel system (1 bit per pixel), the frame-buffer bit address for pixel position (x, y) is calculated as

   addr(x, y) = addr(0, 0) + y(xmax + 1) + x

Moving across a scan line, we can calculate the frame-buffer address for the pixel at (x + 1, y) as the following offset from the address for position (x, y):

   addr(x + 1, y) = addr(x, y) + 1

Stepping diagonally up to the next scan line from (x, y), we get to the frame-buffer address of (x + 1, y + 1) with the calculation

   addr(x + 1, y + 1) = addr(x, y) + xmax + 2

where the constant xmax + 2 is precomputed once for all line segments.

2.2.4 Circle Generating Algorithms
Since the circle is a frequently used component in pictures and graphs, a procedure for generating either full circles or circular arcs is included in most graphics packages.
Properties of Circles
A circle is defined as the set of points that are all at a given distance r from a center position (xc, yc). This distance relationship is expressed by the Pythagorean theorem in Cartesian coordinates as

   (x − xc)² + (y − yc)² = r²

Expressing the circle equation in parametric polar form yields the pair of equations

   x = xc + r cos θ
   y = yc + r sin θ

Midpoint Circle Algorithm
To apply the midpoint method, we define a circle function:

   fcircle(x, y) = x² + y² − r²
Any point (x, y) on the boundary of the circle with radius r satisfies the equation fcircle(x, y) = 0. If the point is in the interior of the circle, the circle function is negative; if the point is outside the circle, the circle function is positive. To summarize, the relative position of any point (x, y) can be determined by checking the sign of the circle function:

   fcircle(x, y) < 0, if (x, y) is inside the circle boundary
   fcircle(x, y) = 0, if (x, y) is on the circle boundary
   fcircle(x, y) > 0, if (x, y) is outside the circle boundary

1. Input radius r and circle center (xc, yc), and obtain the first point on the circumference of a circle centered on the origin as
      (x0, y0) = (0, r)
2. Calculate the initial value of the decision parameter as
      p0 = 5/4 − r
3. At each xk position, starting at k = 0, perform the following test:
   If pk < 0, the next point along the circle centered on (0, 0) is (xk + 1, yk) and
      pk+1 = pk + 2xk+1 + 1
   Otherwise, the next point along the circle is (xk + 1, yk − 1) and
      pk+1 = pk + 2xk+1 + 1 − 2yk+1
   where 2xk+1 = 2xk + 2 and 2yk+1 = 2yk − 2.
4. Determine symmetry points in the other seven octants.
5. Move each calculated pixel position (x, y) onto the circular path centered on (xc, yc) and plot the coordinate values:
      x = x + xc,  y = y + yc
6. Repeat steps 3 through 5 until x ≥ y.
2.2.5 Ellipse Generating Algorithms An ellipse is an elongated circle.
Properties of Ellipses
An ellipse is defined as the set of points such that the sum of the distances from two fixed positions (foci) is the same for all points. If the distances to the two foci from any point P = (x, y) on the ellipse are labeled d1 and d2, then the general equation of an ellipse can be stated as

   d1 + d2 = constant

Midpoint Ellipse Algorithm
We define an ellipse function from ((x − xc)/rx)² + ((y − yc)/ry)² = 1 with (xc, yc) = (0, 0) as

   fellipse(x, y) = ry²x² + rx²y² − rx²ry²

which has the following properties:

   fellipse(x, y) < 0, if (x, y) is inside the ellipse boundary
   fellipse(x, y) = 0, if (x, y) is on the ellipse boundary
   fellipse(x, y) > 0, if (x, y) is outside the ellipse boundary

1. Input rx, ry, and ellipse center (xc, yc), and obtain the first point on an ellipse centered on the origin as
      (x0, y0) = (0, ry)
2. Calculate the initial value of the decision parameter in region 1 as
      p10 = ry² − rx²ry + ¼rx²
3. At each xk position in region 1, starting at k = 0, perform the following test:
   If p1k < 0, the next point along the ellipse centered on (0, 0) is (xk + 1, yk) and
      p1k+1 = p1k + 2ry²xk+1 + ry²
   Otherwise, the next point along the ellipse is (xk + 1, yk − 1) and
      p1k+1 = p1k + 2ry²xk+1 − 2rx²yk+1 + ry²
   with
      2ry²xk+1 = 2ry²xk + 2ry²,  2rx²yk+1 = 2rx²yk − 2rx²
   and continue until 2ry²x ≥ 2rx²y.
4. Calculate the initial value of the decision parameter in region 2 using the last point (x0, y0) calculated in region 1 as
      p20 = ry²(x0 + ½)² + rx²(y0 − 1)² − rx²ry²
5. At each yk position in region 2, starting at k = 0, perform the following test:
   If p2k > 0, the next point along the ellipse centered on (0, 0) is (xk, yk − 1) and
      p2k+1 = p2k − 2rx²yk+1 + rx²
   Otherwise, the next point along the ellipse is (xk + 1, yk − 1) and
      p2k+1 = p2k + 2ry²xk+1 − 2rx²yk+1 + rx²
   using the same incremental calculations for x and y as in region 1.
6. Determine symmetry points in the other three quadrants.
7. Move each calculated pixel position (x, y) onto the elliptical path centered on (xc, yc) and plot the coordinate values:
      x = x + xc,  y = y + yc
8. Repeat the steps for region 1 until 2ry²x ≥ 2rx²y.

2.2.6 Other Curves
Various curve functions are useful in object modeling, animation path specifications, data and function graphing, and other graphics applications. Commonly encountered curves include conics, trigonometric and exponential functions, probability distributions, general polynomials, and spline functions. A straightforward method for displaying a specified curve function is to approximate it with straight-line segments. Straight-line or curve approximations are also used to graph a data set of discrete coordinate points.
Conic Sections
In general, we can describe a conic section (or conic) with the second-degree equation

   Ax² + Bxy + Cy² + Dx + Ey + F = 0

where the values of the parameters A, B, C, D, E, and F determine the kind of curve we are to display. Given this set of coefficients, we can determine the particular conic that will be generated by evaluating the discriminant B² − 4AC:

   B² − 4AC < 0, generates an ellipse (or circle)
   B² − 4AC = 0, generates a parabola
   B² − 4AC > 0, generates a hyperbola
Polynomials and Spline Curves
A polynomial function of nth degree in x is defined as

   y = Σ ak x^k = a0 + a1x + … + an−1 x^(n−1) + an x^n   (sum over k = 0 to n)

where n is a nonnegative integer and the ak are constants, with an ≠ 0. We get a quadratic when n = 2, a cubic polynomial when n = 3, a quartic when n = 4, and a straight line when n = 1.
2.2.7 Parallel Curve Algorithms
We can adapt a sequential algorithm by allocating processors according to curve partitions. A parallel midpoint method for displaying circles is to divide the circular arc from 90° to 45° into equal subarcs and assign a separate processor to each subarc. Pixel positions are then calculated throughout each subarc, and positions in the other circle octants are obtained by symmetry. A parallel ellipse midpoint method divides the elliptical arc over the first quadrant into equal subarcs and parcels these out to separate processors. Pixel positions in the other quadrants are determined by symmetry. Each processor uses the circle or ellipse equation to calculate curve-intersection coordinates.
2.2.8 Pixel Addressing and Object Geometry
Several coordinate references are associated with the specification and generation of a picture. Object descriptions are given in a world-reference frame, chosen to suit a particular application, and input world coordinates are ultimately converted to screen display positions. Another approach is to map world coordinates onto screen positions between pixels, so that we align object boundaries with pixel boundaries instead of pixel centers.
Screen Grid Coordinates
A screen coordinate position is then the pair of integer values identifying a grid intersection position between two pixels. With the coordinate origin at the lower left of the screen, each pixel area can be referenced by the integer grid coordinates of its lower left corner. We identify the area occupied by a pixel with screen coordinates (x, y) as the unit square with diagonally opposite corners at (x, y) and (x + 1, y + 1). This pixel-addressing scheme has several advantages: it avoids half-integer pixel boundaries, it facilitates precise object representations, and it simplifies the processing involved in many scan-conversion algorithms and in other raster procedures.
Maintaining Geometric Properties of Displayed Objects When we convert geometric descriptions of objects into pixel representations, we transform mathematical points and lines into finite screen areas. For an enclosed area, input geometric properties are maintained by displaying the area only with those pixels that are interior to the object boundaries. The rectangle defined with the screen coordinate vertices, for example, is larger when we display it filled with pixels up to and including the border pixel lines joining the specified vertices.
2.2.9 Filled Area Primitives A standard output primitive in general graphics packages is a solidcolor or patterned polygon area. Other kinds of area primitives are sometimes available, but polygons are easier to process since they have linear boundaries. There are two basic approaches to area filling on raster systems. One way to fill an area is to determine the overlap intervals for scan lines that cross the area. Another method for area filling is to start from a given interior position and paint outward from this point until we encounter the specified boundary conditions. The scanline approach is typically used in general graphics packages to fill polygons, circles, ellipses, and other simple curves.
ScanLine Polygon Fill Algorithm For each scan line crossing a polygon, the areafill algorithm locates the intersection points of the scan line with the polygon edges. These intersection points are then sorted from left to right, and the corresponding framebuffer positions between each intersection pair are set to the specified fill color. A scan line passing through a vertex intersects two polygon edges at that position, adding two points to the list of intersections for the scan line. The topological difference between scan line y and scan line y' is identified by noting the position of the intersecting edges relative to the scan line. For scan line y, the two intersecting edges sharing a vertex are on opposite sides of the scan line. But for scan line y', the two intersecting edges are both above the scan line. Thus, the vertices that require additional processing are those that have connecting edges on opposite sides of the scan line.
One way to resolve the question as to whether we should count a vertex as one intersection or two is to shorten some polygon edges to split those vertices that should be counted as one intersection.
Inside-Outside Tests
We apply the odd-even rule, also called the odd-parity rule or the even-odd rule, by conceptually drawing a line from any position P to a distant point outside the coordinate extents of the object and counting the number of edge crossings along the line. If the number of polygon edges crossed by this line is odd, then P is an interior point; otherwise, P is an exterior point. Another method for defining interior regions is the nonzero winding-number rule, which counts the number of times the polygon edges wind around a particular point in the counterclockwise direction. This count is called the winding number, and the interior points of a two-dimensional object are defined to be those that have a nonzero value for the winding number.
ScanLine Fill of Curved Boundary Areas In general, scanline fill of regions with curved boundaries requires more work than polygon filling, since intersection calculations now involve nonlinear boundaries. We only need to calculate the two scanline intersections on opposite sides of the curve. Symmetries between quadrants (and between octants for circles) are used to reduce the boundary calculations. Similar methods can be used to generate a fill area for a curve section. The interior region is bounded by the ellipse section and a straightline segment that closes the curve by joining the beginning and ending positions of the arc.
BoundaryFill Algorithm Another approach to area filling is to start at a point inside a region and paint the interior outward toward the boundary. If the boundary is specified in a single color, the fill algorithm proceeds outward pixel by pixel until the boundary color is encountered. This method, called the boundaryfill algorithm, is particularly useful in interactive painting packages, where interior points are easily selected. A boundaryfill procedure accepts as input the coordinates of an interior point (x, y), a fill color, and a boundary color. Starting from (x, y), the procedure tests neighboring positions to determine whether they are of the boundary color. If not, they are painted with fill color, and their neighbors are tested. This process continues until all pixels up to the boundary color for the area have been tested.
FloodFill Algorithm
Sometimes we want to fill in (or recolor) an area that is not defined within a single color boundary. We can paint such areas by replacing a specified interior color instead of searching for a boundary color value. This approach is called a floodfill algorithm. We start from a specified interior point (x, y) and reassign all pixel values that are currently set to a given interior color with the desired fill color. If the area we want to paint has more than one interior color, we can first reassign pixel values so that all interior points have the same color.
2.2.10 Character Generation
Letters, numbers, and other characters can be displayed in a variety of sizes and styles. The overall design style for a set (or family) of characters is called a typeface. Examples of a few common typefaces are Courier, Helvetica, New York, Palatino, and Zapf Chancery. Originally, the term font referred to a set of cast metal character forms in a particular size and format, such as 10-point Courier Italic or 12-point Palatino Bold; today the terms font and typeface are often used interchangeably. Typefaces (or fonts) can be divided into two broad groups: serif and sans serif. Serif type has small lines or accents at the ends of the main character strokes, while sans-serif type does not have accents. Serif type is generally more readable; that is, it is easier to read in longer blocks of text. Sans-serif type is said to be more legible. Since sans-serif characters can be quickly recognized, this typeface is good for labeling and short headings. A simple method for representing the character shapes in a particular typeface is to use rectangular grid patterns; the set of characters is then referred to as a bitmap font (or bitmapped font). Another, more flexible, scheme is to describe character shapes using straight-line and curve sections, as in PostScript, for example. In this case, the set of characters is called an outline font. Bitmap fonts are the simplest to define and display: the character grid only needs to be mapped to a frame-buffer position.
2.3 ATTRIBUTES OF OUTPUT PRIMITIVES Any parameter that affects the way a primitive is to be displayed is referred to as an attribute parameter. Some attribute parameters, such as color and size, determine the fundamental characteristics of a primitive. Others specify how the primitive is to be displayed under special conditions. Examples of attributes in this class include depth information for threedimensional viewing and visibility or detectability options for interactive objectselection programs.
2.3.1 Line Attributes Basic attributes of a straight line segment are its type, its width, and its color. In some graphics packages, line can also be displayed using selected pen or brush options.
Line Type
Possible selections for the line-type attribute include solid lines, dashed lines, and dotted lines. We modify a line-drawing algorithm to generate such lines by setting the length and spacing of displayed solid sections along the line path. A dashed line could be displayed by generating an interdash spacing that is equal to the length of the solid sections. A dotted line can be displayed by generating very short dashes with the spacing equal to or greater than the dash size. Pixel counts for the span length and interspan spacing can be specified in a pixel mask, which is a string containing the digits 1 and 0 to indicate which positions to plot along the line path.
Line Width A heavy line on a video monitor could be displayed as adjacent parallel lines, while a pen plotter might require pen changes. We set the linewidth attribute with the command: setLinewidthScaleFactor (lw)
Linewidth parameter lw is assigned a positive number to indicate the relative width of the line to be displayed. A value of 1 specifies a standard-width line, and values greater than 1 produce lines thicker than the standard. We can adjust the shape of the line ends to give them a better appearance by adding line caps. One kind of line cap is the butt cap, obtained by adjusting the end positions of the component parallel lines so that the thick line is displayed with square ends that are perpendicular to the line path. Another line cap is the round cap, obtained by adding a filled semicircle to each butt cap; the circular arcs are centered on the line endpoints and have a diameter equal to the line thickness. A third type of line cap is the projecting square cap. Here, we simply extend the line and add butt caps that are positioned one-half of the line width beyond the specified endpoints. A miter join is accomplished by extending the outer boundaries of each of the two lines until they meet. A round join is produced by capping the connection between the two segments with a circular boundary whose diameter is equal to the line width. A bevel join is generated by displaying the line segments with butt caps and filling in the triangular gap where the segments meet.
Pen and Brush Options With some packages, lines can be displayed with pen or brush selections. Operations in this category include shape, size, and pattern. Lines generated with pen (or brush) shapes can be displayed in various widths by changing the size of the mask.
Line Color
When a system provides color (or intensity) options, a parameter giving the current color index is included in the list of systemattribute values. We set the line color value in PHIGS with the function
setPolylineColourIndex (lc)
Nonnegative integer values, corresponding to allowed color choices, are assigned to the line color parameter lc. A line drawn in the background color is invisible.
2.3.2 Curve Attributes Parameters for curve attributes are the same as those for line segments. We can display curves with varying colors, widths, dotdash patterns, and available pen or brush options. Pixel masks display dashes and interdash spaces that vary in length according to the slope of the curve. Raster curves of various widths can be displayed using the method of horizontal or vertical pixel spans. Where the magnitude of the curve slope is less than 1, we plot vertical spans; where the slope magnitude is greater than 1, we plot horizontal spans. Another method for displaying thick curves is to fill in the area between two parallel curve paths, whose separation distance is equal to the desired width.
2.3.3 Color and Grayscale Levels Generalpurpose rasterscan systems, for example, usually provide a wide range of colors. For CRT monitors, these color codes are then converted to intensitylevel settings for the electron beams. In a color raster system, the number of color choices available depends on the amount of storage provided per pixel in the frame buffer. Also, color information can be stored in the frame buffer in two ways: We can store color codes directly in the frame buffer, or we can put the color codes in a separate table and use pixel values as an index into this table.
Color Tables A user can set colortable entries in a PHIGS applications program with the function
setColourRepresentation (ws, ci, colorptr)
Parameter ws identifies the workstation output device; parameter ci specifies the color index, which is the colortable position number (0 to 255 for the example); and parameter colorptr points to a trio of RGB color values (r, g, b) each specified in the range from 0 to 1.
There are several advantages in storing color codes in a lookup table. Use of a color table can provide a “reasonable” number of simultaneous colors without requiring large frame buffers.
Grayscale With monitors that have no color capability, color functions can be used in an application program to set the shades of gray, or grayscale, for displayed primitives. Numeric values over the range from 0 to 1 can be used to specify grayscale levels, which are then converted to appropriate binary codes for storage in the raster. This allows the intensity settings to be easily adapted to systems with differing grayscale capabilities. An alternative scheme for storing the intensity information is to convert each intensity code directly to the voltage value that produces this grayscale level on the output device in use. When multiple output devices are available at an installation, the same colortable interface may be used for all monitors.
2.3.4 Area Fill Attributes Options for filling a defined region include a choice between a solid color or a patterned fill and choices for the particular colors and patterns. These fill options can be applied to polygon regions or to areas defined with curved boundaries.
Fill Styles Areas are displayed with three basic fill styles: hollow with a color border, filled with a solid color, or filled with a specified pattern or design. A basic fill style is selected in a PHIGS program with the function
setInteriorStyle (fs)
Values for the fill-style parameter fs include hollow, solid, and pattern. Another value for fill style is hatch, which is used to fill an area with selected hatch patterns, such as parallel lines or crossed lines. Other fill options include specifications for the edge type, edge width, and edge color of a region.
Pattern Fill

We select fill patterns with
setInteriorStyleIndex (pi)
where pattern index parameter pi specifies a table position.
For fill style pattern, table entries can be created on individual output devices with
setPatternRepresentation (ws, pi, nx, ny, cp)
Parameter pi sets the pattern index number for workstation code ws, and cp is a twodimensional array of color codes with nx columns and ny rows. A reference position for starting a pattern fill is assigned with the statement
setPatternReferencePoint (position)
Parameter position is a pointer to coordinates (xp, yp) that fix the lower left corner of the rectangular pattern. The process of filling an area with a rectangular pattern is called tiling and rectangular fill patterns are sometimes referred to as tiling patterns.
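The tiling process can be sketched as follows, assuming the pattern is stored as a list of rows of color codes and the color for any pixel is found by modular indexing relative to the reference point (xp, yp) (the function and parameter names are hypothetical):

```python
def pattern_color(x, y, pattern, xp=0, yp=0):
    """Color code for pixel (x, y) when tiling `pattern` over the plane.

    `pattern` is a list of ny rows, each holding nx color codes; the
    pattern's first cell is anchored at the reference point (xp, yp).
    """
    ny = len(pattern)        # number of rows
    nx = len(pattern[0])     # number of columns
    return pattern[(y - yp) % ny][(x - xp) % nx]
```

Because the indices wrap with the modulus, the nx-by-ny pattern repeats across the whole filled region, which is exactly the tiling behavior described above.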
Soft Fill

Modified boundary-fill and flood-fill procedures that are applied to repaint areas so that the fill color is combined with the background colors are referred to as soft-fill or tint-fill algorithms. As an example of this type of fill, the linear soft-fill algorithm repaints an area that was originally painted by merging a foreground color F with a single background color B, where F ≠ B.
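A minimal sketch of the linear soft-fill blend: with t (0 ≤ t ≤ 1) giving the fraction of foreground color at a pixel, each RGB component of the repainted color is the linear combination P = t·F + (1 − t)·B (the function name is an illustrative assumption):

```python
def soft_fill_color(t, F, B):
    """Blend foreground color F with background color B componentwise.

    t = 1 gives pure foreground, t = 0 pure background; F and B are
    (r, g, b) triples with components in [0, 1].
    """
    return tuple(t * f + (1.0 - t) * b for f, b in zip(F, B))
```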
2.3.5 Character Attributes

The appearance of displayed characters is controlled by attributes such as font, size, color, and orientation.
Text Attributes

There is the choice of font (or typeface), which is a set of characters with a particular design style, such as New York, Courier, Helvetica, London, Times Roman, and various special symbol groups. The characters in a selected font can also be displayed with assorted underlining styles (solid, dotted, double), in boldface, in italics, and in outline or shadow styles. Character height is defined as the distance between the baseline and the capline of characters. Text size can be adjusted without changing the width-to-height ratio of characters with
setCharacterHeight (ch)
Parameter ch is assigned a real value greater than 0 to set the coordinate height of capital letters: the distance between baseline and capline in user coordinates. The width only of text can be set with the function

setCharacterExpansionFactor (cw)
where the characterwidth parameter cw is set to a positive real value that scales the body width of characters. Spacing between characters is controlled separately with
setCharacterSpacing (cs)
where the characterspacing parameter cs can be assigned any real value. The orientation for a displayed character string is set according to the direction of the character up vector:
setCharacterUpVector (upvect)
Parameter upvect in this function is assigned two values that specify the x and y vector components. A precision specification for text display is given with
setTextPrecision (tpr)
where text precision parameter tpr is assigned one of the values: string, char, or stroke.
Marker Attributes

A marker symbol is a single character that can be displayed in different colors and in different sizes. We select a particular character to be the marker symbol with
setMarkerType (mt)
where marker type parameter mt is set to an integer code. Typically, codes for marker type are the integers 1 through 5, specifying, respectively, a dot (·), a vertical cross (+), an asterisk (*), a circle (o), and a diagonal cross (X).
2.3.6 Antialiasing

Displayed primitives generated by the raster algorithms have a jagged, or stairstep, appearance because the sampling process digitizes coordinate points on an object to discrete
integer pixel positions. This distortion of information due to low-frequency sampling (undersampling) is called aliasing. We can improve the appearance of displayed raster lines by applying antialiasing methods that compensate for the undersampling process.
2.4 SUMMARY
In this unit the various attributes that control the appearance of displayed primitives are discussed. The basic line attributes are line type, line color, and line width. To reduce the size of the frame buffer, some raster systems use a separate color lookup table. Characters can be displayed in different colors, sizes, and orientations. Marker symbols can be displayed using selected characters of various sizes and colors. The appearance of raster primitives can be improved by applying antialiasing procedures that adjust pixel intensities.
UNIT III TWO-DIMENSIONAL GEOMETRIC TRANSFORMATIONS

UNIT STRUCTURE
3.0 Introduction
3.1 Objectives
3.2 Basic Transformations
3.2.1 Translation
3.2.2 Rotation
3.2.3 Scaling
3.3 Matrix Representations and Homogeneous Coordinates
3.4 Composite Transformations
3.4.1 Translations
3.4.2 Rotations
3.4.3 Scalings
3.4.4 General Pivot-Point Rotation
3.4.5 General Fixed-Point Scaling
3.5 Other Transformations
3.5.1 Reflection
3.5.2 Shear
3.6 The Viewing Pipeline
3.7 Viewing Coordinate Reference Frame
3.8 Window-to-Viewport Coordinate Transformation
3.9 Clipping Operations
3.9.1 Types of Clipping
3.10 Point Clipping
3.11 Line Clipping
3.11.1 Cohen-Sutherland Line Clipping
3.11.2 Liang-Barsky Line Clipping
3.12 Polygon Clipping
3.12.1 Sutherland-Hodgman Polygon Clipping
3.12.2 Weiler-Atherton Polygon Clipping
3.13 Curve Clipping
3.14 Text Clipping
3.15 Exterior Clipping
3.16 Summary
3.17 Self Assessment Questions
3.18 Answers to Self Assessment Questions
3.0 INTRODUCTION

Design applications and facility layouts are created by arranging the orientations and sizes of the component parts of a scene. Animations are produced by moving the objects in a scene along animation paths. Changes in orientation, size, and shape are accomplished with geometric transformations that alter the coordinate descriptions of objects. The basic geometric transformations are translation, rotation, and scaling. Other transformations include reflection and shear.
3.1 OBJECTIVES At the end of this unit, you should be able to
Understand the basic transformations namely translation, rotation and scaling
Know the other transformations namely reflection and shear
Be familiar with the window and viewport transformations
Have a thorough study about line clipping and polygon clipping algorithms
Study the text clipping, curve clipping and exterior clipping.
3.2 BASIC TRANSFORMATIONS

Changes in orientation, size, and shape of an object are accomplished with geometric transformations that alter the coordinate descriptions of objects.
3.2.1 Translation
A translation is applied to an object by repositioning it along a straight path from one coordinate location to another.
We translate a two-dimensional point by adding translation distances tx and ty to the original coordinate position (x, y) to move the point to a new position (x', y'):

x' = x + tx
y' = y + ty
The translation distance pair (tx, ty) is called a translation vector or shift vector. Representing coordinate positions and the translation vector as column vectors,

P = [x1]    P' = [x1']    T = [tx]
    [x2]         [x2']        [ty]
This allows us to write the twodimensional translational equations in the matrix form:
P' = P + T

Translation is a rigid-body transformation that moves objects without deformation. Every point on the object is translated by the same amount. The following figure shows the translation of an object.
Figure 7: Translation of an object
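The translation equations above can be sketched directly in Python (a minimal illustration; the function name is an assumption):

```python
def translate(points, tx, ty):
    """Translate each (x, y) point by the translation vector (tx, ty)."""
    return [(x + tx, y + ty) for x, y in points]
```

For example, translating the triangle [(0, 0), (2, 0), (1, 2)] by the vector (3, -1) moves every vertex by the same amount, which is why the shape is not deformed.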
3.2.2 Rotation

A two-dimensional rotation is applied to an object by repositioning it along a circular path in the xy plane. To generate a rotation, we specify a rotation angle θ and the position (xr, yr) of the rotation point (or pivot point) about which the object is to be rotated. Positive values for the rotation angle define counterclockwise rotations about the pivot point; negative values rotate objects in the clockwise direction. This transformation can also be described as a rotation about a rotation axis that is perpendicular to the xy plane and passes through the pivot point. The rotation equation is expressed in the matrix form:
P' = R.P

where the rotation matrix is

R = [cosθ  -sinθ]
    [sinθ   cosθ]
The following figure shows the rotation of an object.
Figure 8: Rotation of an object
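The rotation equations can be sketched as follows, generalized to an arbitrary pivot point (xr, yr) as described above (names are illustrative):

```python
import math

def rotate(points, theta, xr=0.0, yr=0.0):
    """Rotate (x, y) points counterclockwise by theta radians about (xr, yr)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(xr + (x - xr) * c - (y - yr) * s,
             yr + (x - xr) * s + (y - yr) * c)
            for x, y in points]
```

With the default pivot (0, 0) this reduces to the matrix form P' = R.P shown above.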
3.2.3 Scaling

A scaling transformation alters the size of an object. This operation can be carried out for polygons by multiplying the coordinate values (x, y) of each vertex by scaling factors sx and sy to produce the transformed coordinates (x', y'):

x' = x.sx,  y' = y.sy

Scaling factor sx scales objects in the x direction, while sy scales in the y direction. The transformation equation can also be written in the matrix form:

[x']   [sx   0] [x]
[y'] = [ 0  sy] [y]
or P' = S.P
When sx and sy are assigned the same value, a uniform scaling is produced that maintains relative object proportions.
Unequal values for sx and sy result in a differential scaling that is often used in design applications. The following figure shows the scaling operation where a square is converted to a rectangle.
Figure 9: Scaling operation
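A corresponding sketch for scaling relative to the coordinate origin (function name assumed):

```python
def scale(points, sx, sy):
    """Scale (x, y) points by sx in x and sy in y, relative to the origin."""
    return [(x * sx, y * sy) for x, y in points]
```

Note that because scaling is relative to the origin, a square not centered at the origin is both resized and moved; the fixed-point scaling of Section 3.4.5 removes that side effect.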
3.3 MATRIX REPRESENTATIONS AND HOMOGENEOUS COORDINATES
Many graphics applications involve sequence of geometric transformations. An animation, for example, might require an object to be translated and rotated at each increment of the motion. Each of the basic transformations can be expressed in the general matrix form: P’=M1.P + M2 With coordinate positions p and p’ represented as column vectors.
To produce a sequence of transformations such as scaling followed by rotation then translation, we must calculate the transformed coordinates one step at a time.
First coordinate positions are scaled, then these scaled coordinates are rotated, and finally the rotated coordinates are translated.
To express any two-dimensional transformation as a matrix multiplication, we represent each Cartesian coordinate position (x, y) with the homogeneous coordinate triple (xh, yh, h), where

x = xh / h,  y = yh / h

A general homogeneous coordinate representation can also be written as (h.x, h.y, h).
The term homogeneous coordinates is used in mathematics to refer to the effect of this representation on Cartesian equations.
When a Cartesian point (x, y) is converted to a homogeneous representation (xh, yh, h), equations containing x and y, such as f(x, y) = 0, become homogeneous equations in the three parameters xh, yh, and h. This just means that if each of the three parameters is replaced by any value v times that parameter, the value v can be factored out of the equations.

For translation we have

P' = T(tx, ty).P

Rotation transformation equations about the coordinate origin are written as

P' = R(θ).P

A scaling transformation relative to the coordinate origin is

P' = S(sx, sy).P

3.4 COMPOSITE TRANSFORMATIONS
With the matrix representations, we can set up a matrix for any sequence of transformations as a composite transformation matrix by calculating the matrix product of the individual transformations. Forming products of transformation matrices is often referred to as a concatenation, or composition, of matrices.
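Matrix concatenation can be sketched with 3x3 homogeneous-coordinate matrices in plain Python (a minimal illustration of the idea; the helper names are assumptions, and a real package would use a matrix library):

```python
import math

def T(tx, ty):
    """Homogeneous translation matrix."""
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def R(theta):
    """Homogeneous rotation matrix (counterclockwise, about the origin)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def S(sx, sy):
    """Homogeneous scaling matrix (relative to the origin)."""
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def concat(A, B):
    """Matrix product A.B: apply B first, then A."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(M, x, y):
    """Transform the point (x, y), represented as the column (x, y, 1)."""
    xh = M[0][0] * x + M[0][1] * y + M[0][2]
    yh = M[1][0] * x + M[1][1] * y + M[1][2]
    h = M[2][0] * x + M[2][1] * y + M[2][2]
    return xh / h, yh / h
```

Because matrix multiplication is applied right to left, concat(T(1, 2), S(2, 2)) scales a point first and then translates it.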
3.4.1 Translation

If two successive translation vectors (tx1, ty1) and (tx2, ty2) are applied to a coordinate position P, the final transformed location P' is calculated as
P' = T(tx2, ty2).{T(tx1, ty1).P}
   = {T(tx2, ty2).T(tx1, ty1)}.P

where P and P' are represented as homogeneous-coordinate column vectors. The composite transformation matrix for this sequence is

T(tx2, ty2).T(tx1, ty1) = T(tx1 + tx2, ty1 + ty2)

which demonstrates that two successive translations are additive.

3.4.2 Rotation

Two successive rotations applied to a point P produce the transformed position

P' = R(θ2).{R(θ1).P} = {R(θ2).R(θ1)}.P

By multiplying the two rotation matrices, we can verify that two successive rotations are additive:

R(θ2).R(θ1) = R(θ1 + θ2)

so that the final rotated coordinates can be calculated with the composite rotation matrix as

P' = R(θ1 + θ2).P

3.4.3 Scaling
Concatenating transformation matrices for two successive scaling operations produces the following composite scaling matrix:

[sx2   0  0]   [sx1   0  0]   [sx1.sx2        0  0]
[  0 sy2  0] . [  0 sy1  0] = [      0  sy1.sy2  0]
[  0   0  1]   [  0   0  1]   [      0        0  1]

or

S(sx2, sy2).S(sx1, sy1) = S(sx1.sx2, sy1.sy2)
3.4.4 General Pivot Point Rotation
We can generate rotations about any selected pivot point (xr, yr) by performing the following sequence of translate-rotate-translate operations:

Translate the object so that the pivot-point position is moved to the coordinate origin.
Rotate the object about the coordinate origin.
Translate the object so that the pivot point is returned to its original position.

The composite transformation matrix for this sequence is

T(xr, yr).R(θ).T(-xr, -yr) = R(xr, yr, θ)

The following figure shows a transformation sequence for rotating an object about a specified pivot point using the rotation matrix R of transformation.
Figure 10 : Transformation sequence for rotating an object about a specified pivot point using the rotation matrix R of transformation
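The translate-rotate-translate sequence can be sketched without explicit matrices (a hypothetical helper illustrating the three steps):

```python
import math

def rotate_about_pivot(x, y, theta, xr, yr):
    """Rotate (x, y) by theta radians about the pivot point (xr, yr).

    Equivalent to the composite matrix T(xr, yr).R(theta).T(-xr, -yr).
    """
    c, s = math.cos(theta), math.sin(theta)
    xt, yt = x - xr, y - yr                        # 1. translate pivot to origin
    xn, yn = xt * c - yt * s, xt * s + yt * c      # 2. rotate about the origin
    return xn + xr, yn + yr                        # 3. translate back
```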
3.4.5 General Fixed Point Scaling
The following transformation sequence produces scaling with respect to a selected fixed position (xf, yf), using a scaling function that can only scale relative to the coordinate origin.
Translate the object so that the fixed point coincides with the coordinate origin.
Scale the object with respect to the coordinate origin.
Use the inverse translation of step 1 to return the object to its original position.

The composite transformation matrix for this sequence is

T(xf, yf).S(sx, sy).T(-xf, -yf) = S(xf, yf, sx, sy)

The following figure shows a transformation sequence for scaling an object with respect to a specified fixed position using the scaling matrix.
Figure 11 : Transformation sequence for scaling an object with respect to a specified fixed position using the scaling matrix
The following figure shows how a square is converted to a parallelogram using the composite transformation matrix.
Figure 12 : A square is converted to a parallelogram using the composite transformation matrix
3.5 OTHER TRANSFORMATIONS

Basic transformations such as translation, rotation, and scaling are included in most graphics packages. Some packages provide a few additional transformations that are useful in certain applications. Two such transformations are reflection and shear.
3.5.1 Reflection
A reflection is a transformation that produces a mirror image of an object. The mirror image for a two-dimensional reflection is generated relative to an axis of reflection by rotating the object 180° about the reflection axis.
Reflection about the line y = 0, the x axis, is accomplished with the transformation matrix

[1   0  0]
[0  -1  0]
[0   0  1]

This transformation keeps x values the same, but "flips" the y values of coordinate positions.
A reflection about the y axis flips x coordinates while keeping y coordinates the same. The matrix for this transformation is

[-1  0  0]
[ 0  1  0]
[ 0  0  1]
We flip both the x and y coordinates of a point by reflecting relative to an axis that is perpendicular to the xy plane and that passes through the coordinate origin. This transformation, referred to as a reflection relative to the coordinate origin, has the matrix representation:

[-1   0  0]
[ 0  -1  0]
[ 0   0  1]
If we choose the reflection axis as the diagonal line y = x, the reflection matrix is

[0  1  0]
[1  0  0]
[0  0  1]
We can derive this matrix by concatenating a sequence of rotation and coordinateaxis reflection matrices.
To obtain a transformation matrix for reflection about the diagonal y = x, we could concatenate matrices for the transformation sequence:

Clockwise rotation by 45°.
Reflection about the x axis.
Counterclockwise rotation by 45°.
The resulting transformation matrix is

[0  1  0]
[1  0  0]
[0  0  1]
Reflection about any line y = mx + b in the xy plane can be accomplished with a combination of translate-rotate-reflect transformations. The following figure shows the reflection transformation.
Figure 13: Reflection of an object about x axis
Figure 14: Reflection of an object about y axis
3.5.2 Shear

A transformation that distorts the shape of an object such that the transformed shape appears as if the object were composed of internal layers that had been caused to slide over each other is called a shear. Two common shearing transformations are those that shift coordinate x values and those that shift y values.
An xdirection shear relative to the xaxis is produced with the transformation matrix
1 shx 0 0 1
0
0 0
1
This transforms coordinate positions as

x' = x + shx.y,  y' = y

We can generate x-direction shears relative to other reference lines with
[1  shx  -shx.yref]
[0    1          0]
[0    0          1]

with coordinate positions transformed as

x' = x + shx(y - yref),  y' = y

A y-direction shear relative to the line x = xref is generated with the transformation matrix
[  1  0          0]
[shy  1  -shy.xref]
[  0  0          1]

This generates transformed coordinate positions

x' = x,  y' = shy(x - xref) + y

This transformation shifts a coordinate position vertically by an amount proportional to its distance from the reference line x = xref. The following figure shows how a unit square is transformed to a shifted parallelogram using shearing.
Figure 15: Unit square is transformed to a shifted parallelogram using shearing
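The two shear transformations can be sketched as (function names assumed):

```python
def x_shear(x, y, shx, yref=0.0):
    """x-direction shear relative to the line y = yref."""
    return x + shx * (y - yref), y

def y_shear(x, y, shy, xref=0.0):
    """y-direction shear relative to the line x = xref."""
    return x, shy * (x - xref) + y
```

With shx = 2 and yref = 0, the unit-square corner (1, 1) maps to (3, 1), producing a shifted parallelogram like the one in the figure.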
3.6 THE VIEWING PIPELINE
A worldcoordinate area selected for display is called a window. An area on a display device to which a window is mapped is called a viewport.
The window defines what is to be viewed; the viewport defines where it is to be displayed.
The mapping of a part of a world-coordinate scene to device coordinates is referred to as a viewing transformation. Sometimes the two-dimensional viewing transformation is simply referred to as the window-to-viewport transformation or the windowing transformation.

The term window originally referred to an area of a picture that is selected for viewing. Window-manager systems, however, use the term to refer to any rectangular screen area that can be moved about, resized, and made active or inactive.

We set up a two-dimensional viewing-coordinate system in the world-coordinate plane and define a window in the viewing-coordinate system.
The viewing coordinate reference frame is used to provide a method for setting up arbitrary orientations for rectangular windows.
By changing the position of the viewport, we can view objects at different positions on the display area of an output device. Panning effects are produced by moving a fixedsize window across the various objects in a scene.
Viewports are typically defined within the unit square(normalized coordinates). This provides a means for separating the viewing and other transformations from specific outputdevice requirements, so that the graphics package is largely deviceindependent. The following figure shows the two dimensional viewing transformation pipeline.
Figure 16: Two dimensional viewing transformation pipeline
3.7 VIEWING COORDINATE REFERENCE FRAME
This coordinate system provides the reference frame for specifying the world-coordinate window. A viewing-coordinate origin is selected at some world position:

P0 = (x0, y0)

We then specify a world vector V that defines the viewing yv direction; vector V is called the view up vector. Given V, we can calculate the components of unit vectors v = (vx, vy) and u = (ux, uy) for the viewing yv and xv axes, respectively.
These unit vectors are used to form the first and second rows of the rotation matrix R that aligns the viewing xvyv axes with the world xwyw axes.
The composite twodimensional transformation to convert world coordinates to viewing coordinates is
MWC,VC=R.T
Where T is the translation matrix that takes the viewing origin point P0 to the world origin and R is the rotation matrix that aligns the axes of the two reference frames.
3.8 WINDOWTOVIEWPORT COORDINATE TRANSFORMATION
A point at position (xw, yw) in the window is mapped into position (xv, yv) in the associated viewport. To maintain the same relative placement in the viewport as in the window, we require

(xv - xvmin) / (xvmax - xvmin) = (xw - xwmin) / (xwmax - xwmin)
(yv - yvmin) / (yvmax - yvmin) = (yw - ywmin) / (ywmax - ywmin)
Solving these expressions for the viewport position (xv,yv),we have
xv = xvmin + (xw - xwmin).sx
yv = yvmin + (yw - ywmin).sy

where the scaling factors are

sx = (xvmax - xvmin) / (xwmax - xwmin)
sy = (yvmax - yvmin) / (ywmax - ywmin)
A set of transformations that converts the window area into the viewport area:

Perform a scaling transformation using a fixed-point position of (xwmin, ywmin) that scales the window area to the size of the viewport.
Translate the scaled window area to the position of the viewport.
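The window-to-viewport equations can be sketched as follows (a minimal illustration; the tuple layout for the window and viewport bounds is an assumption):

```python
def window_to_viewport(xw, yw, window, viewport):
    """Map the window point (xw, yw) into the associated viewport.

    `window` is (xwmin, ywmin, xwmax, ywmax) in world coordinates;
    `viewport` is (xvmin, yvmin, xvmax, yvmax), e.g. in normalized coordinates.
    """
    xwmin, ywmin, xwmax, ywmax = window
    xvmin, yvmin, xvmax, yvmax = viewport
    sx = (xvmax - xvmin) / (xwmax - xwmin)   # x scaling factor
    sy = (yvmax - yvmin) / (ywmax - ywmin)   # y scaling factor
    return xvmin + (xw - xwmin) * sx, yvmin + (yw - ywmin) * sy
```

If sx and sy differ, objects are stretched or compressed as they are mapped to the viewport.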
Any number of output devices can be open in a particular application, and another window-to-viewport transformation can be performed for each open output device. This mapping, called the workstation transformation, is accomplished by selecting a window area in normalized space and a viewport area in the coordinates of the display device.
3.9 CLIPPING OPERATIONS
Any procedure that identifies those portions of a picture that are either inside or outside of a specified region of space is referred to as a clipping algorithm, or simply clipping.
The region against which an object is clipped is called a clip window.
3.9.1 Types of Clipping

o Point clipping
o Line clipping (straight-line segments)
o Area clipping (polygons)
o Curve clipping
o Text clipping
3.10 POINT CLIPPING
Assuming that the clip window is a rectangle in standard position, we save a point P = (x, y) for display if the following inequalities are satisfied:

xwmin ≤ x ≤ xwmax
ywmin ≤ y ≤ ywmax

where the edges of the clip window (xwmin, xwmax, ywmin, ywmax) can be either the world-coordinate window boundaries or viewport boundaries.
Point clipping can be applied to scenes involving explosions or sea foam that are modeled with particles (points) distributed in some region of the scene.
3.11 LINE CLIPPING
A line with both endpoints inside all clipping boundaries, such as the line from P1 to P2, is saved. A line with both endpoints outside any one of the clip boundaries is outside the window.

All other lines cross one or more clipping boundaries, and may require calculation of multiple intersection points. For a line segment with endpoints (x1, y1) and (x2, y2), with one or both endpoints outside the clipping rectangle, the parametric representation is

x = x1 + u(x2 - x1)
y = y1 + u(y2 - y1),  0 ≤ u ≤ 1

If the value of u for an intersection with a rectangle boundary edge is outside the range 0 to 1, the line does not enter the interior of the window at that boundary. If the value of u is within the range from 0 to 1, the line segment does indeed cross into the clipping area.
3.11.1 Cohen-Sutherland Line Clipping
Every line endpoint in a picture is assigned a four-digit binary code, called a region code, that identifies the location of the point relative to the boundaries of the clipping rectangle. Each bit position in the region code is used to indicate one of the four relative coordinate positions of the point with respect to the clip window: to the left, right, top, or bottom.

Bit 1 : Left
Bit 2 : Right
Bit 3 : Below
Bit 4 : Above

A value of 1 in any bit position indicates that the point is in that relative position; otherwise, the bit position is set to 0. If a point is within the clipping rectangle, the region code is 0000. Region-code bit values can be determined with the following two steps:

o Calculate differences between endpoint coordinates and clipping boundaries.
o Use the resultant sign bit of each difference calculation to set the corresponding value in the region code.

Bit 1 : sign bit of x - xwmin
Bit 2 : sign bit of xwmax - x
Bit 3 : sign bit of y - ywmin
Bit 4 : sign bit of ywmax - y

A method that can be used to test lines for total clipping is to perform the logical and operation on both region codes. If the result is not 0000, the line is completely outside the clipping region. For a line with endpoint coordinates (x1, y1) and (x2, y2), the y coordinate of the intersection point with a vertical boundary can be obtained with the calculation

y = y1 + m(x - x1), where m = (y2 - y1) / (x2 - x1)

Similarly, for the intersection with a horizontal boundary, the x coordinate can be calculated as

x = x1 + (y - y1) / m
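The region-code scheme and the trivial accept/reject tests can be sketched as follows (the bit constants and function names are illustrative):

```python
LEFT, RIGHT, BELOW, ABOVE = 1, 2, 4, 8    # bits 1-4 of the region code

def region_code(x, y, xwmin, ywmin, xwmax, ywmax):
    """Four-bit code locating (x, y) relative to the clip window."""
    code = 0
    if x < xwmin:
        code |= LEFT
    elif x > xwmax:
        code |= RIGHT
    if y < ywmin:
        code |= BELOW
    elif y > ywmax:
        code |= ABOVE
    return code

def trivial_test(code1, code2):
    """'accept' if both endpoints are inside (codes 0000), 'reject' if the
    logical AND of the codes is nonzero, otherwise 'clip'."""
    if code1 == 0 and code2 == 0:
        return "accept"
    if code1 & code2:
        return "reject"
    return "clip"
```

Lines classified as "clip" are then cut at the boundary intersections computed with the slope equations above, and the surviving piece is retested.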
3.11.2 Liang-Barsky Line Clipping
The parametric equations of a line segment are of the form

x = x1 + u∆x
y = y1 + u∆y,  0 ≤ u ≤ 1

where ∆x = x2 - x1 and ∆y = y2 - y1. In the Liang-Barsky algorithm, the point-clipping conditions are written in this parametric form:

xwmin ≤ x1 + u∆x ≤ xwmax
ywmin ≤ y1 + u∆y ≤ ywmax

Each of these four inequalities can be expressed as

u.pk ≤ qk,  k = 1, 2, 3, 4

where parameters p and q are defined as

p1 = -∆x,  q1 = x1 - xwmin
p2 =  ∆x,  q2 = xwmax - x1
p3 = -∆y,  q3 = y1 - ywmin
p4 =  ∆y,  q4 = ywmax - y1

A line parallel to a clipping boundary has pk = 0 for the value of k corresponding to that boundary. For that boundary,

qk < 0 : the line lies completely outside the boundary
qk ≥ 0 : the line lies inside (or on) the boundary

When pk < 0, the infinite extension of the line proceeds from the outside to the inside of the corresponding boundary; when pk > 0, the line proceeds from inside to outside. For a nonzero value of pk, we can calculate the value of u that corresponds to the point where the infinitely extended line intersects the extension of boundary k as

u = qk / pk
The following figure shows line clipping against a rectangular clip window.
Figure 17: line clipping against a rectangular clip window.
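A sketch of the complete Liang-Barsky test built from the p, q parameters above (a minimal implementation under the stated conventions; it returns the clipped endpoints, or None for a fully rejected line):

```python
def liang_barsky(x1, y1, x2, y2, xwmin, ywmin, xwmax, ywmax):
    """Clip the segment (x1, y1)-(x2, y2) against a rectangular window."""
    dx, dy = x2 - x1, y2 - y1
    p = (-dx, dx, -dy, dy)                        # left, right, bottom, top
    q = (x1 - xwmin, xwmax - x1, y1 - ywmin, ywmax - y1)
    u1, u2 = 0.0, 1.0                             # parametric clip range
    for pk, qk in zip(p, q):
        if pk == 0:                               # parallel to this boundary
            if qk < 0:
                return None                       # outside the boundary
        elif pk < 0:                              # outside-to-inside crossing
            u1 = max(u1, qk / pk)
        else:                                     # inside-to-outside crossing
            u2 = min(u2, qk / pk)
    if u1 > u2:
        return None                               # line misses the window
    return (x1 + u1 * dx, y1 + u1 * dy, x1 + u2 * dx, y1 + u2 * dy)
```

Each boundary tightens the parametric interval [u1, u2]; the clipped endpoints are computed only once, after all four boundaries are processed.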
3.12 POLYGON CLIPPING

o A polygon boundary processed with a line clipper may be displayed as a series of unconnected line segments, depending on the orientation of the polygon to the clipping window.
o Polygon clipping requires an algorithm that will generate one or more closed areas that are then scan converted for the appropriate area fill. The output of a polygon clipper should be a sequence of vertices that defines the clipped polygon boundaries.
3.12.1 Sutherland-Hodgman Polygon Clipping
We can correctly clip a polygon by processing the polygon boundary as a whole against each window edge. This could be accomplished by processing all polygon vertices against each clip-rectangle boundary in turn. The four possible cases when processing vertices in sequence around the perimeter of a polygon are:

If the first vertex is outside the window boundary and the second vertex is inside, both the intersection point of the polygon edge with the boundary and the second vertex are added to the output vertex list.
If both input vertices are inside the window boundary, only the second vertex is added to the output vertex list.
If the first vertex is inside and the second vertex is outside, only the edge intersection is added to the output vertex list.
If both input vertices are outside the window boundary, nothing is added to the output list.
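The four cases can be sketched as a single clipping pass against one window edge; clipping against all four edges applies this pass four times. Here the boundary is abstracted as an `inside` predicate plus an `intersect` routine (a hypothetical interface chosen for illustration):

```python
def clip_against_edge(vertices, inside, intersect):
    """One Sutherland-Hodgman pass: clip a vertex list against one boundary.

    `inside(v)` tests whether vertex v is on the visible side;
    `intersect(v1, v2)` returns where edge v1-v2 crosses the boundary.
    """
    output = []
    for i, v2 in enumerate(vertices):
        v1 = vertices[i - 1]                  # previous vertex (wraps around)
        if inside(v2):
            if not inside(v1):                # outside -> inside
                output.append(intersect(v1, v2))
            output.append(v2)                 # inside vertex is always kept
        elif inside(v1):                      # inside -> outside
            output.append(intersect(v1, v2))
    return output
```

For example, clipping a triangle with one vertex left of the boundary x = 0 produces a quadrilateral: the crossing edges each contribute an intersection point.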
3.12.2 Weiler-Atherton Polygon Clipping

This clipping procedure was developed as a method for identifying visible surfaces, and so it can be applied with arbitrary polygon-clipping regions. The basic idea in this algorithm is that instead of always proceeding around the polygon edges as vertices are processed, we sometimes want to follow the window boundaries. For clockwise processing of polygon vertices, the following rules are applied:

For an outside-to-inside pair of vertices, follow the polygon boundary.
For an inside-to-outside pair of vertices, follow the window boundary in a clockwise direction.

The following figure shows polygon clipping.
Figure 18: Polygon Clipping
3.13 CURVE CLIPPING
Curve-clipping procedures involve nonlinear equations and require more processing than clipping objects with linear boundaries.
The bounding rectangle for a circle or other curved object can be used to test for overlap with a rectangular clip window.
If the bounding rectangle for the object is completely inside the window, we save the object. If the rectangle is determined to be completely outside the window, we discard the object. If both tests fail, for a circle we can test the coordinate extents of individual quadrants before calculating curve-window intersections.
3.14 TEXT CLIPPING
This clipping technique involves character strings. If the entire string is inside a clip window, we keep it; otherwise, we discard it. This is called all-or-none string clipping.

An alternative for an entire character string that overlaps a boundary is to use all-or-none character clipping: we discard only those characters that are not completely inside the window.
A final method for text clipping is to clip the components of individual characters.
If an individual character overlaps a clip window boundary, clip off the parts of the character that are outside the window. The following figure shows the process of text clipping.
Figure 19: Text Clipping
3.15 EXTERIOR CLIPPING
If we want to clip a picture to the exterior of a specified region, the picture parts to be saved are those that are outside the region. This is referred to as exterior clipping. A typical example of the application of exterior clipping is in multiple-window systems.
Exterior clipping is also used in other applications that require overlapping pictures.
This technique can also be used for combining graphs, maps or schematics.
Procedures for clipping objects to the interior of concave polygon windows can also make use of external clipping.
3.16 SUMMARY
The basic geometric transformations are translation, rotation, and scaling. Other transformations include reflection and shear. Transformations between Cartesian coordinate systems are accomplished with a sequence of translate-rotate transformations. The viewing-transformation pipeline includes constructing the world-coordinate scene using modeling transformations, transferring world coordinates to viewing coordinates, mapping the viewing
coordinate descriptions of objects to normalized device coordinates and finally mapping to device coordinates. Line clipping, polygon clipping, curve clipping, exterior clipping and text clipping are also discussed.
UNIT IV THREE-DIMENSIONAL CONCEPTS

4.0 Introduction
4.1 Objectives
4.2 Three-Dimensional Display Methods
4.2.1 Parallel Projection
4.2.2 Perspective Projection
4.2.3 Depth Cueing
4.2.4 Visible Line and Surface Identification
4.2.5 Surface Rendering
4.2.6 Exploded and Cutaway Views
4.2.7 Three-Dimensional and Stereoscopic Views
4.3 Three-Dimensional Geometric and Modeling Transformations
4.3.1 Translation
4.3.2 Rotation
4.3.3 Scaling
4.4 Other Transformations
4.4.1 Reflection
4.4.2 Shear
4.5 Composite Transformations
4.6 Viewing Pipeline
4.7 Viewing Coordinates
4.8 Projections
4.9 Clipping
4.10 Summary
4.11 Self Assessment Questions
4.12 Answers to Self Assessment Questions
4.0 INTRODUCTION

Graphics packages often provide routines for displaying internal components or cross-sectional views of solid objects. Some geometric transformations are more involved in three-dimensional space than in two dimensions. Two-dimensional rotations are always around an axis that is perpendicular to the xy plane. Viewing transformations in three dimensions are much more complicated because we have many more parameters to select when specifying how a three-dimensional scene is to be mapped to a display device. Methods for geometric transformations and object modeling in three dimensions are extended from two-dimensional methods by including the z coordinate.
4.1 OBJECTIVES
At the end of this unit, you should be able to
Understand the three-dimensional display methods
Know the three-dimensional transformations, namely translation, rotation, scaling, reflection, and shear
Be familiar with the viewing pipeline and viewing coordinates
Gain a thorough understanding of clipping and projections
4.2 THREE DIMENSIONAL DISPLAY METHODS
To obtain a display of a three dimensional scene that has been modeled in world coordinates, we must first set up a coordinate reference for the camera.
This coordinate reference defines the position and orientation for the plane of the camera film.
Object descriptions are then transferred to the camera reference coordinates and projected onto the selected display plane.
4.2.1 Parallel projection
One method for generating a view of a solid object is to project points on the object surface along parallel lines onto the display plane.
In a parallel projection, parallel lines in the world-coordinate scene project into parallel lines on the two-dimensional display plane.
This technique is used in engineering and architectural drawings to represent an object with a set of views.
4.2.2 Perspective projection
Another method for generating a view of a three-dimensional scene is to project points to the display plane along converging paths.
In a perspective projection, parallel lines in a scene that are not parallel to the display plane are projected into converging lines.
Scenes displayed using perspective projections appear more realistic.
Parallel lines appear to converge to a distant point in the background, and distant objects appear smaller than objects closer to the viewing position.
4.2.3 Depth Cueing
Depth information is important so that we can easily identify, for a particular viewing direction, which is the front and which is the back of displayed objects.
A simple method for generating depth with wire frame displays is to vary the intensity of objects according to their distance from the viewing position.
The lines closest to the viewing position are displayed with the highest intensities, and lines farther away are displayed with decreasing intensities.
Another application of depth cueing is modeling the effect of the atmosphere on the perceived intensity of objects.
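The intensity variation described above can be sketched with a simple linear depth-cueing function. This is a hedged illustration: the function name and the linear falloff between assumed near and far depths d_min and d_max are our own choices, not prescribed by the text.

```python
import numpy as np

def depth_cue(intensity, depth, d_min=0.0, d_max=1.0):
    """Scale an intensity by a linear depth-cueing factor:
    objects at d_min keep full intensity, objects at d_max fade to zero."""
    f = (d_max - depth) / (d_max - d_min)   # 1.0 at d_min, 0.0 at d_max
    return intensity * np.clip(f, 0.0, 1.0)

# the nearest line keeps full intensity, the farthest fades out
near = depth_cue(1.0, 0.0)
mid = depth_cue(1.0, 0.5)
far = depth_cue(1.0, 1.0)
```

Lines at intermediate depths receive proportionally reduced intensities, matching the wireframe depth-cueing effect described above.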
4.2.4 Visible Line and Surface Identification
There are several methods for clarifying depth relationships in a wireframe display.
The simplest method is to highlight the visible lines or to display them in a different color.
Another technique, commonly used for engineering drawings, is to display the nonvisible lines as dashed lines.
Another approach is to simply remove the nonvisible lines.
Some visible-surface algorithms establish visibility pixel by pixel across the viewing plane; other algorithms determine visibility for object surfaces as a whole.
4.2.5 Surface Rendering
Surface properties of objects include degree of transparency and how rough or smooth the surfaces are to be.
Procedures can then be applied to generate the correct illumination and shadow regions for the scene.
Surfacerendering methods are combined with perspective and visiblesurface identification to generate a degree of realism in a displayed scene.
4.2.6 Exploded and Cutaway Views
Exploded and cutaway views of objects can be used to show the internal structure and relationship of the object parts.
An alternative to exploding an object into its component parts is the cutaway view, which removes part of the visible surfaces to show internal structure.
4.2.7 Three Dimensional and Stereoscopic Views
Another method for adding a sense of realism to a computergenerated scene is to display objects using either threedimensional or stereoscopic views.
Stereoscopic devices present two views of a scene: one for the left eye and one for the right eye.
The two views are generated by selecting viewing positions that correspond to the two eye positions of a single viewer.
4.3 THREE-DIMENSIONAL GEOMETRIC AND MODELING TRANSFORMATIONS

Methods for geometric transformations and object modeling in three dimensions are extended from two-dimensional methods by including considerations for the z coordinate.
4.3.1 Translation
In a three-dimensional homogeneous coordinate representation, a point is translated from position P = (x, y, z) to position P' = (x', y', z') with the matrix operation

[x']   [1  0  0  tx] [x]
[y'] = [0  1  0  ty] [y]      (1)
[z']   [0  0  1  tz] [z]
[1 ]   [0  0  0  1 ] [1]
P' = T.P
Parameters tx, ty, tz specifying translation distances for the coordinate directions x,y,z are assigned any real values.
The matrix representation in Eq.1 is equivalent to the three equations
x’= x + tx;
y’= y + ty;
z’= z + tz;
An object is translated in three dimensions by transforming each of the defining points of the object.
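As a minimal sketch of Eq. (1), using NumPy with illustrative function names of our own, the homogeneous translation can be written as:

```python
import numpy as np

def translation_matrix(tx, ty, tz):
    """4x4 homogeneous translation matrix T of Eq. (1)."""
    T = np.identity(4)
    T[:3, 3] = [tx, ty, tz]
    return T

# translate P = (1, 2, 3), expressed in homogeneous coordinates
P = np.array([1.0, 2.0, 3.0, 1.0])
T = translation_matrix(5.0, -1.0, 2.0)
P_new = T @ P          # P' = T.P  ->  (6, 1, 5)
```

Each defining point of an object would be transformed by the same matrix, as the text notes.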
4.3.2 Rotation
To generate a rotation transformation for an object we must designate an axis of rotation (about which the object is to be rotated) and the amount of angular rotation.
The easiest rotation axes to handle are those that are parallel to the coordinate axes.
Positive rotation angles produce counterclockwise rotations about a coordinate axis.
CoordinateAxes Rotations
The two-dimensional z-axis rotation equations are easily extended to three dimensions:

x' = x cos a - y sin a
y' = x sin a + y cos a
z' = z

Parameter a specifies the rotation angle.
We get equations for an xaxis rotation:
y' = y cos a - z sin a
z' = y sin a + z cos a
x' = x
which can be written in the homogeneous coordinate form
[x']   [1    0       0     0] [x]
[y'] = [0  cos a  -sin a   0] [y]
[z']   [0  sin a   cos a   0] [z]
[1 ]   [0    0       0     1] [1]
P’=Rx (a).P
The transformation equations for a y-axis rotation are

z' = z cos a - x sin a
x' = z sin a + x cos a
y' = y

The matrix representation for a y-axis rotation is

[x']   [ cos a  0  sin a  0] [x]
[y'] = [   0    1    0    0] [y]
[z']   [-sin a  0  cos a  0] [z]
[1 ]   [   0    0    0    1] [1]
P’=Ry (a).P
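The coordinate-axes rotation matrices translate directly into code. The following NumPy sketch (function names are our own) builds Rx, Ry, and Rz and checks one rotation:

```python
import numpy as np

def rot_x(a):
    """Homogeneous rotation by angle a about the x axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0,  0, 0],
                     [0, c, -s, 0],
                     [0, s,  c, 0],
                     [0, 0,  0, 1]])

def rot_y(a):
    """Homogeneous rotation by angle a about the y axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[ c, 0, s, 0],
                     [ 0, 1, 0, 0],
                     [-s, 0, c, 0],
                     [ 0, 0, 0, 1]])

def rot_z(a):
    """Homogeneous rotation by angle a about the z axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

# a positive 90-degree x-axis rotation carries the y axis onto the z axis
P = np.array([0.0, 1.0, 0.0, 1.0])
P_new = rot_x(np.pi / 2) @ P
```

Positive angles produce counterclockwise rotations about each axis, as stated above.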
General ThreeDimensional Rotations
In the special case where an object is to be rotated about an axis that is parallel to one of the coordinate axes, we can attain the desired rotation with the following transformation sequence.
1. Translate the object so that the rotation axis coincides with the parallel coordinate axis.
2. Perform the specified rotation about that axis.
3. Translate the object so that the rotation axis is moved back to its original position.
When an object is to be rotated about an axis that is not parallel to one of the coordinate axes, we need to perform some additional transformations.
Given the specifications for the rotation axis and the rotation angle, we can accomplish the required rotation in five steps:

1. Translate the object so that the rotation axis passes through the coordinate origin.
2. Rotate the object so that the axis of rotation coincides with one of the coordinate axes.
3. Perform the specified rotation about that coordinate axis.
4. Apply the inverse rotation to bring the rotation axis back to its original orientation.
5. Apply the inverse translation to bring the rotation axis back to its original position.
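The five steps can be combined into a single composite matrix. This illustrative NumPy implementation (names and the axis-alignment details are our own, using the common convention of aligning the rotation axis with the z axis) follows the translate, align, rotate, unalign, untranslate sequence:

```python
import numpy as np

def rotate_about_axis(p1, p2, theta):
    """Rotation by theta about the axis through points p1 and p2:
    translate the axis to the origin, align it with the z axis,
    rotate about z, then undo the alignment and the translation."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    u = p2 - p1
    u = u / np.linalg.norm(u)            # unit vector along the rotation axis
    ux, uy, uz = u
    d = np.hypot(uy, uz)

    def mat(m):                          # embed a 3x3 block in a 4x4 matrix
        M = np.identity(4)
        M[:3, :3] = m
        return M

    T = np.identity(4); T[:3, 3] = -p1   # step 1: axis through the origin
    if d > 1e-12:                        # step 2a: rotate axis into the xz plane
        Rx = mat([[1, 0, 0], [0, uz / d, -uy / d], [0, uy / d, uz / d]])
    else:
        Rx = np.identity(4)              # axis already lies along x
    Ry = mat([[d, 0, -ux], [0, 1, 0], [ux, 0, d]])   # step 2b: align with z
    c, s = np.cos(theta), np.sin(theta)
    Rz = mat([[c, -s, 0], [s, c, 0], [0, 0, 1]])     # step 3: rotate about z
    Tinv = np.identity(4); Tinv[:3, 3] = p1

    # steps 4-5: inverse alignment and inverse translation (right to left)
    return Tinv @ Rx.T @ Ry.T @ Rz @ Ry @ Rx @ T

# sanity check: a 90-degree rotation about the z axis through the origin
M = rotate_about_axis((0, 0, 0), (0, 0, 1), np.pi / 2)
```

Because the alignment matrices are rotations, their inverses in steps 4 and 5 are simply their transposes.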
4.3.3 Scaling
The matrix expression for the scaling transformation of a position P = (x, y, z) relative to the coordinate origin can be written as

[x']   [Sx  0   0  0] [x]
[y'] = [ 0 Sy   0  0] [y]      (1)
[z']   [ 0  0  Sz  0] [z]
[1 ]   [ 0  0   0  1] [1]

P' = S.P

where the scaling parameters Sx, Sy, and Sz are assigned any positive values.
Explicit expressions for the coordinate transformations for scaling relative to the origin are

x' = x.Sx
y' = y.Sy
z' = z.Sz
Scaling an object with transformation (1) changes the size of the object and repositions the object relative to the coordinate origin.
We preserve the original shape of an object with a uniform scaling (Sx=Sy=Sz).
Scaling with respect to a selected fixed position (Xf,Yf,Zf) can be represented with the following transformation sequence:
1. Translate the fixed point to the origin.
2. Scale the object relative to the coordinate origin using Eq. (1).
3. Translate the fixed point back to its original position.
T(Xf, Yf, Zf).S(Sx, Sy, Sz).T(-Xf, -Yf, -Zf) =

[Sx  0   0  (1-Sx)Xf]
[ 0 Sy   0  (1-Sy)Yf]      (2)
[ 0  0  Sz  (1-Sz)Zf]
[ 0  0   0      1   ]
We form the inverse scaling matrix for either Eq. (1) or Eq. (2) by replacing the scaling parameters Sx, Sy, and Sz with their reciprocals.
The inverse matrix generates an opposite scaling transformation, so the concatenation of any scaling matrix and its inverse produces the identity matrix. The following figure shows the process of scaling an object relative to a selected fixed point which is equivalent to the sequence of transformations.
Figure 20: Scaling an object relative to a selected fixed point
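The three-step fixed-point scaling sequence can be sketched by concatenating the matrices right to left, as in Eq. (2). A NumPy illustration (helper names are our own):

```python
import numpy as np

def translate(tx, ty, tz):
    """Homogeneous translation matrix."""
    T = np.identity(4)
    T[:3, 3] = [tx, ty, tz]
    return T

def scale(sx, sy, sz):
    """Homogeneous scaling matrix relative to the origin, Eq. (1)."""
    return np.diag([sx, sy, sz, 1.0])

def scale_about_point(sx, sy, sz, xf, yf, zf):
    """Eq. (2): T(xf,yf,zf) . S(sx,sy,sz) . T(-xf,-yf,-zf)."""
    return translate(xf, yf, zf) @ scale(sx, sy, sz) @ translate(-xf, -yf, -zf)

# double the object size about the fixed point (1, 1, 1)
M = scale_about_point(2.0, 2.0, 2.0, 1.0, 1.0, 1.0)
fixed = M @ np.array([1.0, 1.0, 1.0, 1.0])   # the fixed point does not move
moved = M @ np.array([2.0, 1.0, 1.0, 1.0])   # (2,1,1) -> (3,1,1)
```

Note that the composite translation term in M matches the (1 - Sx)Xf form of Eq. (2).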
4.4 OTHER TRANSFORMATIONS
In addition to translation, rotation, and scaling, there are various additional transformations that are often useful in three-dimensional graphics applications, namely reflection and shear.
4.4.1 Reflection

A three-dimensional reflection can be performed relative to a selected reflection axis or with respect to a selected reflection plane. Reflections relative to a given axis are equivalent to 180-degree rotations about that axis. Reflections with respect to a plane are equivalent to 180-degree rotations in four-dimensional space. Reflection about the xy plane changes the sign of the z coordinate, leaving the x- and y-coordinate values unchanged. The matrix representation for this reflection of points relative to the xy plane is
      [1  0   0  0]
RFz = [0  1   0  0]
      [0  0  -1  0]
      [0  0   0  1]
4.4.2 Shear
Shearing transformations can be used to modify object shapes.
They are also useful in three-dimensional viewing for obtaining general projection transformations.
In three dimensions, we can also generate shears relative to the zaxis.
As an example of three-dimensional shearing, the following transformation produces a z-axis shear:
      [1  0  a  0]
SHz = [0  1  b  0]
      [0  0  1  0]
      [0  0  0  1]
Parameters a and b can be assigned any real values.
The effect of this transformation matrix is to alter x- and y-coordinate values by an amount that is proportional to the z value, while leaving the z coordinate unchanged. The following figure shows how a unit cube is sheared by this transformation matrix with a = 1 and b = 1.
Figure 21: unit cube is sheared by transformation matrix with a = 1 and b = 1
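The z-axis shear above can be sketched as follows; applying it with a = b = 1 to the unit-cube vertices reproduces the effect shown in the figure (a NumPy sketch with names of our own):

```python
import numpy as np

def shear_z(a, b):
    """z-axis shear SHz from the text: x' = x + a*z, y' = y + b*z, z' = z."""
    return np.array([[1.0, 0.0,   a, 0.0],
                     [0.0, 1.0,   b, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

# unit-cube vertices in homogeneous coordinates, one per column
cube = np.array([[x, y, z, 1.0]
                 for x in (0, 1) for y in (0, 1) for z in (0, 1)]).T
sheared = shear_z(1.0, 1.0) @ cube   # the z = 1 face slides by (1, 1) in x, y
```

The z = 0 face of the cube is left in place, while the z = 1 face is offset, exactly as the figure depicts.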
4.5 COMPOSITE TRANSFORMATIONS

We form a composite three-dimensional transformation by multiplying the matrix representations for the individual operations in the transformation sequence. This concatenation is carried out from right to left, where the rightmost matrix is the first transformation to be applied to an object and the leftmost matrix is the last. A sequence of basic three-dimensional geometric transformations is combined to produce a single composite transformation, which is then applied to the coordinate definition of an object.
4.6 VIEWING PIPELINE
The steps for computer generation of a view of a three-dimensional scene are analogous to the processes involved in taking a photograph. To take a snapshot, we first need to position the camera at a particular point in space. Then we need to decide on the camera orientation. When we snap the shutter, the scene is cropped to the size of the window of the camera and light from the visible surfaces is projected onto the camera film. Once the scene has been modeled, world-coordinate positions are converted to viewing coordinates. Next, projection operations are performed to convert the viewing-coordinate description of the scene to coordinate positions on the projection plane, which will then be mapped to the output device. Objects outside the specified viewing limits are clipped from further consideration, and the remaining objects are processed through visible-surface identification and surface-rendering procedures to produce the display within the device viewport. The following figure shows the general three-dimensional transformation pipeline from modeling coordinates to final device coordinates.
Figure 22: General three dimensional transformation pipeline from modeling coordinates to final device coordinates.
4.7 VIEWING COORDINATES
Generating a view of an object in three dimensions is similar to photographing the object. We can walk around and take its picture from any angle, at various distances and with varying camera orientations. The type and size of the camera lens determines which parts of the scene appear in the final picture.
Specifying the View Plane
We choose a particular view for a scene by first establishing the viewing-coordinate system, also called the view reference coordinate system. A view plane, or projection plane, is then set up perpendicular to the viewing zv axis. World-coordinate positions in the scene are transformed to viewing coordinates, and then viewing coordinates are projected onto the view plane. To establish the viewing-coordinate reference frame, we first pick a world-coordinate position called the view reference point. This point is the origin of the viewing-coordinate system. The view reference point is often chosen to be close to or on the surface of some object in a scene. Next, we select the positive direction for the viewing zv axis, and the orientation of the view plane, by specifying the view-plane normal vector N. N is simply specified as a world-coordinate vector. We choose the up direction for the view by specifying a vector V, called the view-up vector. This vector is used to establish the positive direction for the yv axis. Vector V can also be defined as a world-coordinate vector, or in some packages it is specified with a twist angle about the zv axis. Using vectors N and V, the graphics package can compute a third vector U, perpendicular to both N and V, to define the direction for the xv axis. The direction of V can then be adjusted so that it is perpendicular to both N and U to establish the viewing yv direction. The view plane is always parallel to the xv yv plane, and the projections of objects to the view plane correspond to the view of the scene that will be displayed on the output device.
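The construction of the u, v, n viewing axes from the view-plane normal N and the view-up vector V can be sketched as follows (an illustrative NumPy helper; the cross-product order assumes a right-handed u, v, n system):

```python
import numpy as np

def viewing_basis(n, view_up):
    """Build orthonormal viewing axes from the view-plane normal N and
    the view-up vector V: u = V x n normalized, then v = n x u, so the
    adjusted v is perpendicular to both n and u."""
    n = np.asarray(n, float) / np.linalg.norm(n)
    u = np.cross(view_up, n)
    u = u / np.linalg.norm(u)
    v = np.cross(n, u)          # adjusted up direction, already unit length
    return u, v, n

# a camera looking down -z with y as the up direction
u, v, n = viewing_basis((0.0, 0.0, 1.0), (0.0, 1.0, 0.0))
```

As the text notes, V need only be approximately up: the final v is recomputed from n and u so that the three axes are mutually perpendicular.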
4.8 PROJECTIONS
Once world-coordinate descriptions of the objects in a scene are converted to viewing coordinates, we can project the three-dimensional objects onto the two-dimensional view plane. There are two basic projection methods. In a parallel projection, coordinate positions are transformed to the view plane along parallel lines. For a perspective projection, object positions are transformed to the view plane along lines that converge to a point called the projection reference point, or center of projection. The projected view of an object is determined by calculating the intersection of the projection lines with the view plane. A parallel projection preserves relative proportions of objects, and this is the method used in drafting to produce scale drawings of three-dimensional objects.
4.8.1 Parallel Projection
We can specify a parallel projection with a projection vector that defines the direction for the projection lines. When the projection is perpendicular to the view plane, we have an
orthographic parallel projection. Otherwise, we have an oblique parallel projection. Orthographic projections are most often used to produce the front, side, and top views of an object. Front, side, and rear orthographic projections of an object are called elevations, and a top orthographic projection is called a plan view. Engineering and architectural drawings commonly employ these orthographic projections. We can also form orthographic projections that display more than one face of an object. Such views are called axonometric orthographic projections. The most commonly used axonometric projection is the isometric projection. We generate an isometric projection by aligning the projection plane so that it intersects each coordinate axis in which the object is defined (called the principal axes) at the same distance from the origin. The following figure shows the parallel projection of an object to the view plane.
Figure 23: Parallel projection of an object to the view plane.
4.8.2 Perspective Projection
To obtain a perspective projection of a three-dimensional object, we transform points along projection lines that meet at the projection reference point. We can write equations describing coordinate positions along this perspective projection line in parametric form as

x' = x - xu
y' = y - yu
z' = z - (z - zprp)u

Parameter u takes values from 0 to 1, and coordinate position (x', y', z') represents any point along the projection line. When a three-dimensional object is projected onto a view plane using perspective transformation equations, any set of parallel lines in the object that are not parallel to the plane are projected into converging lines. Parallel lines that are parallel to the view plane will be projected as parallel lines. The point at which a set of projected parallel lines appears to converge is called a vanishing point. The vanishing point for any set of lines that are parallel to one of the principal axes of an object is referred to as a principal vanishing point. The number of principal vanishing points in a projection is determined by the number of principal axes intersecting the view plane. The following figure shows the perspective projection of equal-sized objects at different distances from the view plane.

Figure 24: Perspective projection of equal-sized objects at different distances from the view plane.

The following figure shows the isometric projection of a cube. The isometric projection is obtained by aligning the projection vector with the cube diagonal.
Figure 25: Isometric projection of a cube.
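The parametric perspective-projection equations can be sketched in code. This illustration assumes, for the example only, that the projection reference point lies on the z axis at z_prp and that the view plane sits at z = z_vp:

```python
import numpy as np

def perspective_project(p, z_prp, z_vp=0.0):
    """Project point p along the line toward the projection reference
    point (0, 0, z_prp), onto the view plane z = z_vp.  Solves
    z - (z - z_prp)*u = z_vp for u, then evaluates x' and y'."""
    x, y, z = p
    u = (z - z_vp) / (z - z_prp)
    return np.array([x * (1 - u), y * (1 - u), z_vp])

# equal-sized objects farther from the view plane project smaller
near = perspective_project((1.0, 1.0, -1.0), z_prp=2.0)
far = perspective_project((1.0, 1.0, -4.0), z_prp=2.0)
```

The farther point yields smaller projected coordinates, matching the foreshortening effect shown in Figure 24.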
The following figure shows the orthographic projection of a point onto a viewing plane.
Figure 26: Orthographic Projection of a point onto a viewing plane.
Figure 27: Oblique Projection of coordinate position (x,y,z)
4.9 CLIPPING
To clip a line segment against the view volume, we would need to test the relative position of the line using the view volume's boundary-plane equations. By substituting the line
endpoint coordinates into the plane equation of each boundary in turn, we can determine whether the endpoint is inside or outside that boundary. Lines with both endpoints outside a boundary plane are discarded, and those with both endpoints inside all boundary planes are saved. To clip a polygon surface, we can clip the individual polygon edges. First, we could test the coordinate extents against each boundary of the view volume to determine whether the object is completely inside or completely outside that boundary. If the coordinate extents of the object are inside all boundaries, we save it. If the coordinate extents are outside all boundaries, we discard it. Clipping in two dimensions is generally performed against an upright rectangle; that is, the clip window is aligned with the x and y axes.
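The endpoint tests described above can be sketched with boundary planes written as (A, B, C, D) coefficients; the inside-on-nonnegative convention and the function name are our own choices for this illustration:

```python
import numpy as np

def classify_line(p1, p2, planes):
    """Test a line segment against view-volume boundary planes given as
    rows (A, B, C, D); a point is inside a plane when Ax + By + Cz + D >= 0.
    Returns 'inside' (save), 'outside' (trivially discard), or 'clip'."""
    planes = np.asarray(planes, float)
    h1 = np.append(p1, 1.0)                  # homogeneous endpoints
    h2 = np.append(p2, 1.0)
    s1, s2 = planes @ h1, planes @ h2        # signed values, one per plane
    if np.all(s1 >= 0) and np.all(s2 >= 0):
        return "inside"
    # both endpoints outside the *same* boundary plane -> whole segment out
    if np.any((s1 < 0) & (s2 < 0)):
        return "outside"
    return "clip"

# a unit-cube view volume: x, y, z each between 0 and 1 (six half-spaces)
unit_cube = [( 1, 0, 0, 0), (-1, 0, 0, 1),
             ( 0, 1, 0, 0), ( 0, -1, 0, 1),
             ( 0, 0, 1, 0), ( 0, 0, -1, 1)]
```

Segments classified as "clip" would then be passed to an intersection routine to find where they cross the boundary planes.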
Normalized View Volumes
At the first step, a scene is constructed by transforming object descriptions from modeling coordinates to world coordinates. A view mapping converts the world descriptions to viewing coordinates. At the projection stage, the viewing coordinates are transformed to projection coordinates, which effectively converts the view volume into a rectangular parallelepiped. Then, the parallelepiped is mapped into the unit cube, a normalized view volume called the normalized projection coordinate system. The mapping to normalized projection coordinates is accomplished by transforming points within the rectangular parallelepiped into positions within a specified three-dimensional viewport, which occupies part or all of the unit cube. Normalized projection coordinates are converted to device coordinates for display. The normalized view volume is a region defined by the planes

x = 0, x = 1, y = 0, y = 1, z = 0, z = 1
First, the normalized view volume provides a standard shape for representing any sized view volume. The unit cube can then be mapped to a workstation of any size. Clipping procedures are simplified and standardized with unit clipping planes or the viewport planes, and additional clipping planes can be specified within the normalized space before transforming to device coordinates. Depth cueing and visible-surface determination are simplified, since the z axis always points toward the viewer. The following figure shows the expanded transformation pipeline.
Figure 28: Expanded Transformation Pipeline

Mapping positions within a rectangular view volume to a three-dimensional rectangular viewport is accomplished with a combination of scaling and translation, similar to the operations needed for a two-dimensional window-to-viewport mapping. We can express the three-dimensional transformation matrix for these operations in the form

[Dx  0   0  Kx]
[ 0 Dy   0  Ky]
[ 0  0  Dz  Kz]
[ 0  0   0   1]
Factors Dx, Dy, and Dz are the ratios of the dimensions of the viewport and regular parallelepiped view volume in the x, y, and z directions:

Dx = (xvmax - xvmin) / (xwmax - xwmin)
Dy = (yvmax - yvmin) / (ywmax - ywmin)
Dz = (zvmax - zvmin) / (zback - zfront)

where the view-volume boundaries are established by the window limits (xwmin, xwmax, ywmin, ywmax) and the positions zfront and zback of the front and back planes. Viewport boundaries
are set with the coordinate values xvmin, xvmax, yvmin, yvmax, zvmin, zvmax. The additive translation factors Kx, Ky, and Kz in the transformation are

Kx = xvmin - xwmin.Dx
Ky = yvmin - ywmin.Dy
Kz = zvmin - zfront.Dz
Figure 29: Dimensions of the view volume and three dimensional viewport
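The mapping factors and translation terms can be sketched as a single matrix builder (an illustrative NumPy function; the parameter packing is our own):

```python
import numpy as np

def view_mapping(win, vp):
    """Build the scale/translate matrix of the text from window limits
    win = (xwmin, xwmax, ywmin, ywmax, zfront, zback) and viewport limits
    vp = (xvmin, xvmax, yvmin, yvmax, zvmin, zvmax)."""
    xwmin, xwmax, ywmin, ywmax, zf, zb = win
    xvmin, xvmax, yvmin, yvmax, zvmin, zvmax = vp
    Dx = (xvmax - xvmin) / (xwmax - xwmin)
    Dy = (yvmax - yvmin) / (ywmax - ywmin)
    Dz = (zvmax - zvmin) / (zb - zf)
    Kx = xvmin - xwmin * Dx
    Ky = yvmin - ywmin * Dy
    Kz = zvmin - zf * Dz
    return np.array([[Dx, 0, 0, Kx],
                     [0, Dy, 0, Ky],
                     [0, 0, Dz, Kz],
                     [0, 0, 0, 1.0]])

# map a 2x2x2 window region onto the unit-cube viewport
M = view_mapping((0, 2, 0, 2, 0, 2), (0, 1, 0, 1, 0, 1))
corner = M @ np.array([2.0, 2.0, 2.0, 1.0])   # a window corner maps to (1,1,1)
```

Each corner of the view volume lands on the matching corner of the viewport, confirming the Dx and Kx expressions above.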
Viewport Clipping
Lines and polygon surfaces in a scene can be clipped against the viewport boundaries. The two-dimensional concept of region codes can be extended to three dimensions by considering positions in front of and in back of the three-dimensional viewport, as well as positions that are left, right, below, or above the volume. For three-dimensional points, we need to expand the region code to six bits. Each point in the description of a scene is then assigned a six-bit region code that identifies the relative position of the point with respect to the viewport. For a line endpoint at position (x, y, z), we assign the bit positions in the region code from right to left as
bit 1 = 1, if x < xvmin (left)
bit 2 = 1, if x > xvmax (right)
bit 3 = 1, if y < yvmin (below)
bit 4 = 1, if y > yvmax (above)
bit 5 = 1, if z < zvmin (front)
bit 6 = 1, if z > zvmax (back)

A line segment can be immediately identified as completely within the viewport if both endpoints have a region code of 000000. If either endpoint of a line segment does not have a region code of 000000, we perform the logical AND operation on the two endpoint codes. The result of this AND operation will be nonzero for any line segment that has both endpoints in one of the six outside regions. For example, a nonzero value will be generated if both endpoints are behind the viewport, or both endpoints are above the viewport. The two-dimensional parametric clipping methods of Cyrus-Beck or Liang-Barsky can be extended to three-dimensional scenes. For a line segment with endpoints P1 = (x1, y1, z1) and P2 = (x2, y2, z2), we can write the parametric line equations as
x = x1 + (x2 - x1)u,   0 <= u <= 1
y = y1 + (y2 - y1)u
z = z1 + (z2 - z1)u

Coordinates (x, y, z) represent any point on the line between the two endpoints. At u = 0, we have the point P1, and u = 1 puts us at P2.

4.10 SUMMARY
Three dimensional transformations useful in computer graphics applications include geometric transformations within a single coordinate system and transformations between different coordinate systems. The basic geometric transformations are translation, rotation and scaling. Two additional object transformations are reflections and shears. Viewing procedures for three dimensional scenes follow the general approach used in two dimensional viewing. Three dimensional viewing requires projection routines to transform object descriptions to a viewing plane before the transformation to device coordinates. Parallel projections are either orthographic or oblique and can be specified with a projection vector. Objects in three dimensional scenes are clipped against a view volume. The top, bottom, and sides of the view volume are formed with planes that are parallel to the projection lines and that pass through the view plane window edges.
UNIT V
GRAPHICAL USER INTERFACES AND INTERACTIVE INPUT METHODS

5.0 Introduction
5.1 Objectives
5.2 The User Dialogue
5.2.1 Windows and Icons
5.2.2 Accommodating Multiple Skill Levels
5.2.3 Consistency
5.2.4 Minimizing Memorization
5.2.5 Backup and Error Handling
5.2.6 Feedback
5.3 Input of Graphical Data
5.3.1 Logical Classification of Input Devices
5.3.2 Locator Devices
5.3.3 Stroke Devices
5.3.4 String Devices
5.3.5 Valuator Devices
5.3.6 Choice Devices
5.3.7 Pick Devices
5.4 Input Functions
5.4.1 Input Modes
5.4.2 Request Mode
5.4.3 Locator and Stroke Input in Request Mode
5.4.4 String Input in Request Mode
5.4.5 Valuator Input in Request Mode
5.4.6 Choice Input in Request Mode
5.4.7 Pick Input in Request Mode
5.4.8 Sample Mode
5.4.9 Event Mode
5.5 Interactive Picture Construction Techniques
5.5.1 Basic Positioning Methods
5.5.2 Constraints
5.5.3 Grids
5.5.4 Gravity Field
5.5.5 Rubber-Band Methods
5.5.6 Dragging
5.5.7 Painting and Drawing
5.6 Visible-Surface Detection Methods
5.6.1 Back-Face Detection
5.6.2 Depth-Buffer Method
5.7 Basic Illumination Models
5.7.1 Ambient Light
5.7.2 Diffuse Reflection
5.7.3 Specular Reflection and the Phong Model
5.7.4 Combined Diffuse and Specular Reflections with Multiple Light Sources
5.7.5 Warn Model
5.7.6 Intensity Attenuation
5.7.7 Color Considerations
5.7.8 Transparency
5.7.9 Shadows
5.8 Color Models and Color Applications
5.8.1 Properties of Light
5.8.2 Standard Primaries and the Chromaticity Diagram
5.8.3 Intuitive Color Concepts
5.8.4 RGB Color Model
5.8.5 YIQ Color Model
5.8.6 CMY Color Model
5.8.7 HSV Color Model
5.9 Summary
5.10 Self Assessment Questions
5.11 Answers to Self Assessment Questions
5.0 INTRODUCTION
The human computer interface for most systems involves extensive graphics, regardless of the application. In this unit we take a look at the basic elements of graphical user interfaces. A variety of input devices exists and general graphics packages can be designed to interface with various devices and to provide extensive dialogue capabilities. A major consideration in the generation of realistic graphics displays is identifying those parts of a scene that are visible from a chosen viewing position. Numerous algorithms are devised for efficient identification of visible objects for different types of applications. These algorithms are referred to as visible surface detection methods. Realistic displays of a scene are obtained by generating perspective projections of objects and by applying natural lighting effects to the visible surfaces. An illumination model or a lighting model or a shading model is used to calculate the intensity of light that we could see at a given point on the surface of an object.
5.1 OBJECTIVES
At the end of this unit, you should be able to
Know the graphical user interfaces and the interactive input methods
Be familiar with the various visible-surface detection methods
Gain a thorough understanding of the basic illumination models
Study the various color models and color applications
5.2 THE USER DIALOGUE
The user’s model serves as the basis for the design of the dialogue.
The user’s model describes what the system is designed to accomplish and what graphics operations are available.
It states the type of objects that can be displayed and how the objects can be manipulated.
All information in the user dialogue is then presented in the language of the application.
5.2.1 Windows and icons
Visual representations are used both for objects to be manipulated in an application and for the actions to be performed on the application objects.
A window system provides a windowmanager interface for the user and functions for handling the display and manipulation of the windows.
Common functions for the window system are opening and closing windows, repositioning windows, resizing windows and display routines that provide interior and exterior clipping and other graphics functions.
Normally windows are displayed with sliders, buttons, and menu icons for selecting various window options.
Icons representing actions such as rotate, magnify, scale, clip and paste are called control icons or command icons.
5.2.2 Accommodating Multiple Skill Levels
Usually interactive graphical interfaces provide several methods for selecting actions.
For example, options could be selected by pointing at an icon and clicking different mouse buttons, by accessing pull-down or pop-up menus, or by typing keyboard commands.
This allows a package to accommodate users that have different skill levels.
For a less experienced user, an interface with a few easily understood operations and detailed prompting is more effective than one with a large, comprehensive operation set.
A simplified set of menus and options is easy to learn and remember and the user can concentrate on the application instead on the details of the interface. Experienced users typically want speed. This means fewer prompts and more input from the keyboard or with multiple mouse button clicks.
Help facilities can be designed on several levels so that beginners can carry on a detailed dialogue, while more experienced users can reduce or eliminate prompts and messages.
5.2.3 Consistency
An important design consideration in an interface is consistency.
A complicated, inconsistent model is difficult for a user to understand and to work with in an effective way.
The objects and operations provided should be designed to form a minimum and consistent set so that the system is easy to learn, but not oversimplified to the point where it is difficult to apply.
Other examples of consistency are always placing menus in the same relative positions so that a user does not have to hunt for a particular option, and always using a particular combination of keyboard keys for the same action.
5.2.4 Minimizing Memorization
Operations in an interface should also be structured so that they are easy to understand and to remember.
Complicated, inconsistent, and abbreviated command formats lead to confusion and reduce the effectiveness of the package.
Icons are used to reduce memorization by displaying easily recognizable shapes for various objects and actions.
5.2.5 Backup and Error Handling
A mechanism for backing up, or aborting, during a sequence of operations is another common feature of an interface.
Backup can be provided in many forms. A standard undo key or command is used to cancel a single operation. A system can be backed up through several operations, allowing us to reset the system to some specified point.
Error messages are designed to help determine the cause of error. Interfaces attempt to minimize error possibilities by anticipating certain actions that could lead to an error.
5.2.6 Feedback
As each input is received, the system normally provides some type of response.
An object is highlighted, an icon appears, or a message is displayed.
If processing cannot be completed within a few seconds, several feedback messages might be displayed to keep us informed of the progress of the system.
With function keys, feedback can be given as an audible click or by lighting up the key that has been pressed. Other feedback methods include highlighting, blinking and color changes.
Audio feedback has the advantage that it does not use up screen space and we do not need to take attention from the work area to receive the message.
5.3 INPUT OF GRAPHICAL DATA
5.3.1 Logical Classification of Input Devices
LOCATOR: a device for specifying a coordinate position (x, y).
STROKE: a device for specifying a series of coordinate positions.
STRING: a device for specifying text input.
VALUATOR: a device for specifying scalar values.
CHOICE: a device for selecting menu options.
PICK: a device for selecting picture components.
5.3.2 Locator Devices
A standard method for interactive selection of a coordinate point is by positioning the screen cursor.
When the screen cursor is at the desired location, a button is activated to store the coordinates of that screen point.
Keyboards can be used for locator input in several ways. A keyboard has four control-cursor keys that move the screen cursor up, down, left, and right.
5.3.3 Stroke Devices
Stroke-device input is equivalent to multiple calls to a locator device.
The set of input points is often used to display line sections.
The graphics tablet is one of the more common stroke devices.
5.3.4 String Devices
The primary physical device used for string input is the keyboard.
Input character strings are typically used for picture or graph labels.
Other physical devices can be used for generating character patterns in a text writing mode.
A pattern recognition program then interprets the characters using a stored dictionary of predefined patterns.
5.3.5 Valuator Devices
This logical class of devices is employed in graphics systems to input scalar values.
Valuator devices are used for setting graphics parameters, such as rotation angle and scale factors, and for setting physical parameters associated with a particular application.
Any keyboard with a set of numeric keys can be used as a valuator device.
5.3.6 Choice Devices
A choice device is defined as one that enters a selection from a list of alternatives.
Commonly used choice devices include a set of buttons; a cursor-positioning device, such as a mouse, trackball, or keyboard cursor keys; and a touch panel.
Alternate methods for choice input include keyboard and voice entry.
A standard keyboard can be used to type in commands or menu options.
5.3.7 Pick Devices
Pick devices are used to select parts of a scene that are to be transformed or edited in some way.
With a mouse or joystick, we can position the cursor over the primitives in a displayed structure and press the selection button.
The position of the cursor is then recorded, and several levels of search may be necessary to locate the particular object that is to be selected.
5.4 INPUT FUNCTIONS
Graphics input functions can be set up to allow users to specify the following options.
(i) Which physical devices are to provide input within a particular logical classification.
(ii) How the graphics program and devices are to interact.
(iii) When the data are to be input, and which device is to be used at that time to deliver a particular input type to the specified data variables.
5.4.1 Input Modes
Input modes specify how the program and input devices interact. There are three types of modes.
Request mode
Sample mode
Event mode
In request mode, the application program initiates data entry. In sample mode, the application program and input devices operate independently. In event mode, the input devices initiate data input to the application program.
The general form is

setMode (ws, deviceCode, inputMode, echoFlag)

where deviceCode is a positive integer; inputMode is assigned one of the values request, sample, or event; and parameter echoFlag is assigned either the value echo or noecho.
5.4.2 Request Mode
When input is requested in request mode, other processing is suspended until the input is received. The general form is

request (ws, deviceCode, status)

Returned values are assigned to parameter status and to the data parameters corresponding to the requested logical class.
5.4.3 Locator and Stroke Input in Request Mode
The general forms are

requestLocator (ws, devCode, status, viewIndex, pt)
requestStroke (ws, devCode, nMax, status, viewIndex, n, pts)
5.4.4 String Input in Request Mode
The general form is

requestString (ws, devCode, status, nChars, str)

Parameter str in this function is assigned an input string.
5.4.5 Valuator Input in Request Mode
The general form is

requestValuator (ws, devCode, status, value)
5.4.6 Choice Input in Request Mode
We make a menu selection with the following request function:

requestChoice (ws, devCode, status, itemNum)
5.4.7 Pick Input in Request Mode
For pick input, we obtain a structure identifier number with the function

requestPick (ws, devCode, maxPathDepth, status, pathDepth, pickPath)
5.4.8 Sample Mode
Once sample mode has been set for one or more physical devices, data input begins without waiting for program direction. The general form is

sample (ws, deviceCode)
5.4.9 Event Mode
When an input device is placed in event mode, the program and device operate simultaneously. The general form is

awaitEvent (time, ws, deviceClass, deviceCode)
5.5 INTERACTIVE PICTURE CONSTRUCTION TECHNIQUES
5.5.1 Basic positioning Methods
Coordinate values supplied by locator input are often used with positioning methods to specify a location for displaying an object or a character string.
5.5.2 Constraints
A constraint is a rule for altering input-coordinate values to produce a specified orientation or alignment of the displayed coordinates. The most common constraint is horizontal or vertical alignment of straight lines.
5.5.3 Grids
Another kind of constraint is a grid of rectangular lines displayed in some part of the screen area. When a grid is used, any input coordinate position is rounded to the nearest intersection of two grid lines.
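The grid rounding rule above can be sketched in a few lines of Python. The function and parameter names here are illustrative, not part of any graphics standard, and a uniform grid spacing is assumed:

```python
def snap_to_grid(x, y, spacing):
    """Round an input coordinate to the nearest intersection of two grid lines."""
    gx = round(x / spacing) * spacing
    gy = round(y / spacing) * spacing
    return gx, gy

# A click at (13.2, 27.8) on a grid with spacing 10 snaps to the
# nearest grid intersection.
print(snap_to_grid(13.2, 27.8, 10))  # -> (10, 30)
```

Each coordinate is divided by the grid spacing, rounded to the nearest whole number of grid steps, and scaled back, which is exactly the "round to the nearest intersection" behavior described above.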
5.5.4 Gravity Field
Any selected position within the gravity field of a line is moved to the nearest position on the line. Areas around the endpoints are enlarged to make it easier for us to connect lines at their endpoints.
5.5.5 RubberBand Methods
Straight lines can be constructed and positioned using rubber-band methods, which stretch a line from a starting position as the screen cursor is moved. Rubber-band methods are also used to construct and position other objects besides straight lines.
5.5.6 Dragging
This technique is often used in interactive picture construction to move objects into position by dragging them with the screen cursor. Dragging objects to various positions in a scene is useful in applications where we might want to explore different possibilities before selecting a final location.
5.5.7 Painting and Drawing
Curve-drawing options can be provided using standard curve shapes, such as circular arcs and splines, or with freehand sketching procedures. In freehand drawing, curves are
generated by following the path of a stylus on a graphics tablet or the path of the screen cursor on a video monitor. Once a curve is displayed, the designer can alter the curve shape by adjusting the positions of selected points along the curve path.
5.6 VISIBLE SURFACE DETECTION METHODS
A major consideration in the generation of realistic graphics displays is identifying those parts of a scene that are visible from a chosen viewing position.
Deciding upon a method for a particular application can depend on such factors as the complexity of the scene, type of objects to be displayed, available equipment, and whether static or animated displays are to be generated.
The various algorithms are referred to as visible-surface detection methods. Sometimes these methods are also referred to as hidden-surface elimination methods.
5.6.1 Classification of VisibleSurface Detection Algorithms
Visible-surface detection algorithms are broadly classified according to whether they deal with object definitions directly or with their projected images.
The two approaches are called object-space methods and image-space methods, respectively.
An object-space method compares objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, we should label as visible.
In an image-space algorithm, visibility is decided point by point at each pixel position on the projection plane.
Although there are major differences in the basic approaches taken by the various visible-surface detection algorithms, most use sorting and coherence methods to improve performance.
Sorting is used to facilitate depth comparisons by ordering the individual surfaces in a scene according to their distance from the view plane.
Coherence methods are used to take advantage of regularities in a scene.
Visible-surface detection methods are classified as follows:
o Back-Face Detection
o Depth-Buffer Method
o A-Buffer Method
o Scan-Line Method
o Depth-Sorting Method
o BSP-Tree Method
o Area-Subdivision Method
o Octree Methods
o Ray-Casting Method
Among the above-mentioned visible-surface detection methods, the back-face detection method and the depth-buffer method are included in the syllabus.
5.6.2 Back Face Detection
A fast and simple object-space method for identifying the back faces of a polyhedron is based on "inside-outside" tests.
A point (x, y, z) is "inside" a polygon surface with plane parameters A, B, C, and D if

Ax + By + Cz + D < 0
When an inside point is along the line of sight to the surface, the polygon must be a back face.
In general, if V is a vector in the viewing direction from the eye (camera) position, then this polygon is a back face if

V · N > 0
If object descriptions have been converted to projection coordinates and our viewing direction is parallel to the viewing z_v axis, then V = (0, 0, V_z) and

V · N = V_z C
So that we only need to consider the sign of C, the z component of the vector N.
In a right-handed viewing system with viewing direction along the negative z_v axis, the polygon is a back face if C < 0.
Thus, in general, we can label any polygon as a back face if its normal vector has a z component value

C <= 0
Similar methods can be used in packages that employ a lefthanded viewing system.
In these packages, plane parameters A,B,C, and D can be calculated from polygon vertex coordinates specified in a clockwise direction.
Also, back faces have normal vectors that point away from the viewing position and are identified by C >= 0 when the viewing direction is along the positive z_v axis.
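The dot-product test described above can be sketched in Python. The function names are our own, and the example assumes a right-handed system with the viewer looking along the negative z_v axis, so V = (0, 0, -1):

```python
def dot(a, b):
    """Dot product of two 3D vectors given as tuples."""
    return sum(x * y for x, y in zip(a, b))

def is_back_face(normal, view_dir):
    """A polygon is a back face when V . N > 0, i.e. its normal
    points away from the viewer."""
    return dot(view_dir, normal) > 0

# Viewing along the negative z_v axis, V = (0, 0, -1), so the test
# reduces to the sign of C, the z component of the normal N.
print(is_back_face((0.0, 0.0, -1.0), (0.0, 0.0, -1.0)))  # normal points away -> True
print(is_back_face((0.0, 0.0, 1.0), (0.0, 0.0, -1.0)))   # normal faces viewer -> False
```

With V = (0, 0, -1), the product V · N equals -C, so the test agrees with the rule above: C < 0 identifies a back face.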
5.6.3 Depth Buffer Method
A commonly used image-space approach to detecting visible surfaces is the depth-buffer method, which compares surface depths at each pixel position on the projection plane.
This procedure is also referred to as the z-buffer method, since object depth is usually measured from the view plane along the z axis of the viewing system.
The depth-buffer algorithm is as follows:

1. Initialize the depth buffer and refresh buffer so that for all buffer positions (x, y),

   depth(x, y) = 0,  refresh(x, y) = I_backgnd

2. For each position on each polygon surface, compare depth values to previously stored values in the depth buffer to determine visibility. Calculate the depth z for each (x, y) position on the polygon. If z > depth(x, y), then set

   depth(x, y) = z,  refresh(x, y) = I_surf(x, y)

where I_backgnd is the value for the background intensity, and I_surf(x, y) is the projected intensity value for the surface at pixel position (x, y). After all surfaces have been processed, the depth buffer contains depth values for the visible surfaces and the refresh buffer contains the corresponding intensity values for those surfaces.
Depth values for a surface position (x, y) are calculated from the plane equation for each surface:

z = (-Ax - By - D) / C

The depth z' of the next position (x + 1, y) along the scan line is

z' = (-A(x + 1) - By - D) / C = z - A/C

Depth values down an edge are obtained recursively as

z' = z + (A/m + B) / C

where m is the edge slope. If we are processing down a vertical edge, the slope is infinite and the recursive calculation reduces to

z' = z + B/C
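The two steps of the depth-buffer algorithm can be sketched directly in Python. This is a minimal illustration, not a full rasterizer: each polygon is assumed to supply already-computed (x, y, z, intensity) samples, with larger z meaning closer to the view plane, matching the z > depth(x, y) test above:

```python
def render(polygons, width, height, background=0.0):
    """Minimal z-buffer sketch. 'polygons' is a list of sample lists;
    each sample is (x, y, z, intensity) with 0 <= z <= 1 and larger z
    meaning closer to the view plane."""
    # Step 1: initialize depth(x, y) = 0 and refresh(x, y) = I_backgnd.
    depth = [[0.0] * width for _ in range(height)]
    frame = [[background] * width for _ in range(height)]
    # Step 2: keep a sample only if it is closer than what is stored.
    for poly in polygons:
        for x, y, z, intensity in poly:
            if z > depth[y][x]:
                depth[y][x] = z
                frame[y][x] = intensity
    return frame

# A near sample (z = 0.9) hides a far one (z = 0.4) at the same pixel,
# regardless of drawing order.
frame = render([[(1, 1, 0.4, 2.0)], [(1, 1, 0.9, 5.0)]], 3, 3)
print(frame[1][1])  # -> 5.0
```

Because visibility is resolved per pixel, the result is independent of the order in which the polygons are processed, which is the key property of the method.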
5.7 BASIC ILLUMINATION MODELS
Lighting calculations are based on the optical properties of surfaces, the background lighting conditions, and the light-source specifications. Optical parameters are used to set surface properties, such as glossy, matte, opaque, and transparent; these control the amount of reflection and absorption of incident light. All light sources are considered to be point sources, specified with a coordinate position and an intensity value (color).
5.7.1 Ambient Light
A surface that is not exposed directly to a light source will still be visible if nearby objects are illuminated. In our basic illumination model, we can set a general level of brightness for a scene. This is a simple way to model the combination of light reflections from various surfaces that produces a uniform illumination called the ambient light, or background light. Ambient light has no spatial or directional characteristics. The amount of ambient light incident on each object is a constant for all surfaces and over all directions. We can set the level for the ambient light in a scene with parameter Ia, and each surface is then illuminated with this constant value. The resulting reflected light is a constant for each surface, independent of the viewing direction and the spatial orientation of the surface.
Figure 30: An illuminated area projected perpendicular to the path of the incoming light rays
5.7.2 Diffuse Reflection

Ambient-light reflection is an approximation of global diffuse lighting effects. Diffuse reflections are constant over each surface in a scene, independent of the viewing direction. The fractional amount of the incident light that is diffusely reflected can be set for each surface with parameter Kd, the diffuse-reflection coefficient, or diffuse reflectivity. Parameter Kd is assigned a constant value in the interval 0 to 1, according to the reflecting properties we
want the surface to have. If we want a highly reflective surface, we set the value of Kd near that of the incident light. If a surface is exposed only to ambient light, we can express the intensity of the diffuse reflection at any point on the surface as

I_ambdiff = Kd Ia
5.7.3 Specular Reflection and the Phong Model

When we look at an illuminated shiny surface, such as polished metal, an apple, or a person's forehead, we see a highlight, or bright spot, at certain viewing directions. This phenomenon, called specular reflection, is the result of total, or near total, reflection of the incident light in a concentrated region around the specular-reflection angle. An empirical model for calculating the specular-reflection range, developed by Phong Bui-Tuong, sets the intensity of specular reflection proportional to cos^ns φ. Angle φ can be assigned values in the range 0° to 90°, so that cos φ varies from 0 to 1. The value assigned to specular-reflection parameter ns is determined by the type of surface we want to display: a shiny surface is modeled with a large value for ns (say, 100 or more), and smaller values (down to 1) are used for duller surfaces. For a perfect reflector, ns is infinite. For a rough surface, such as chalk or cinderblock, ns would be assigned a value near 1. We can approximately model monochromatic specular-intensity variations using a specular-reflection coefficient, W(θ), for each surface.
5.7.4 Combined Diffuse and Specular Reflections with Multiple Light Sources
For a single point light source, we can model the combined diffuse and specular reflections from a point on an illuminated surface as

I = I_diff + I_spec = Ka Ia + Kd Il (N · L) + Ks Il (N · H)^ns

If we place more than one point source in a scene, we obtain the light reflection at any surface point by summing the contributions from the individual sources:

I = Ka Ia + Σ (i = 1 to n) Ili [ Kd (N · Li) + Ks (N · Hi)^ns ]
To ensure that any pixel intensity does not exceed the maximum allowable value, we can apply some type of normalization procedure. A simple approach is to set a maximum magnitude for each term in the intensity equation. If any calculated term exceeds the maximum, we simply set it to the maximum value. Another way to compensate for intensity
overflow is to normalize the individual terms by dividing each by the magnitude of the largest term.
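The combined illumination equation above can be sketched in Python. This is an illustrative implementation, not a standard API: the function name is our own, all vectors are assumed to be unit vectors, negative dot products are clamped to zero (a surface facing away from a light receives no contribution), and the simple "set a maximum magnitude" normalization described above is applied by clamping the final intensity to 1.0:

```python
import math

def normalize(v):
    """Scale a 3D vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_intensity(Ka, Kd, Ks, ns, Ia, lights, N, V):
    """I = Ka*Ia + sum_i Ili [ Kd (N.Li) + Ks (N.Hi)^ns ].
    'lights' is a list of (Il, L) pairs: source intensity and unit
    vector toward the source. H is the halfway vector between L and V."""
    I = Ka * Ia
    for Il, L in lights:
        H = normalize(tuple(l + v for l, v in zip(L, V)))
        diffuse = max(dot(N, L), 0.0)
        specular = max(dot(N, H), 0.0) ** ns
        I += Il * (Kd * diffuse + Ks * specular)
    # Prevent intensity overflow by clamping to the maximum value.
    return min(I, 1.0)
```

With the light, the normal, and the viewer all aligned (N = L = V = H), the diffuse and specular terms both reach their maxima, so for example phong_intensity(0.1, 0.5, 0.4, 10, 1.0, [(1.0, (0.0, 0.0, 1.0))], (0.0, 0.0, 1.0), (0.0, 0.0, 1.0)) saturates at 1.0.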
5.7.5 Warn Model
The Warn model provides a method for simulating studio lighting effects by controlling light intensity in different directions. In addition, light controls such as "barn doors" and spotlighting, used by studio photographers, can be simulated in the Warn model. Flaps are used to control the amount of light emitted by a source in various directions; two flaps are provided for each of the x, y, and z directions. Spotlights are used to control the amount of light emitted within a cone with its apex at a point-source position.
5.7.6 Intensity Attenuation
As radiant energy from a point light source travels through space, its amplitude is attenuated by the factor 1/d², where d is the distance that the light has traveled. This means that a surface close to the light source (small d) receives a higher incident intensity from the source than a distant surface (large d) does. Without attenuation, if two parallel surfaces with the same optical parameters overlapped, they would be indistinguishable from each other. A general inverse quadratic attenuation function can be set up as

f(d) = 1 / (a0 + a1 d + a2 d²)

A user can then fiddle with the coefficients a0, a1, and a2 to obtain a variety of lighting effects for a scene. The value of the constant term a0 can be adjusted to prevent f(d) from becoming too large when d is very small.
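The attenuation function above is a one-liner in code. The coefficient values here are illustrative defaults, not standard values; note how a0 = 1 keeps f(d) bounded by 1 as d approaches zero:

```python
def attenuation(d, a0=1.0, a1=0.1, a2=0.01):
    """Inverse-quadratic radial attenuation f(d) = 1/(a0 + a1*d + a2*d^2).
    The constant term a0 prevents the factor from blowing up at small d."""
    return 1.0 / (a0 + a1 * d + a2 * d * d)

print(attenuation(0.0))   # -> 1.0 (bounded at the source)
print(attenuation(10.0))  # smaller than at d = 1, as expected
```

Multiplying each light source's contribution in the illumination equation by f(d) makes nearer surfaces brighter than farther ones, which is the effect the text describes.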
5.7.7 Color Considerations
For an RGB description, each color in a scene is expressed in terms of red, green, and blue components. We then specify the RGB components of light source intensities and surface colors, and the illumination model calculates the RGB components of the reflected light. One way to set surface colors is by specifying the reflectivity coefficients as three element vectors. The diffuse reflection coefficient vector, for example, would then have RGB components (KdR, KdG, KdB). If we want an object to have a blue surface, we select a nonzero value in the range from 0 to 1 for the blue reflectivity component, KdB, while the red and green reflectivity components are set to zero (KdR = KdG = 0). Any nonzero red or green
components in the incident light are absorbed, and only the blue component is reflected. The intensity calculation for this example reduces to the single expression

I_B = KaB IaB + Σ (i = 1 to n) fi(d) IlBi [ KdB (N · Li) + KsB (N · Hi)^ns ]
5.7.8 Transparency
A transparent surface, in general, produces both reflected and transmitted light. The relative contribution of the transmitted light depends on the degree of transparency of the surface and whether any light sources or illuminated surfaces are behind the transparent surface. When a transparent surface is to be modeled, the intensity equations must be modified to include contributions from light passing through the surface. Realistic transparency effects are modeled by considering light refraction. When light is incident upon a transparent surface, part of it is reflected and part is refracted. The direction of the refracted light, specified by the angle of refraction, is a function of the index of refraction of each material and the direction of the incident light. The index of refraction for a material is defined as the ratio of the speed of light in a vacuum to the speed of light in the material. The angle of refraction θr is calculated from the angle of incidence θi, the index of refraction ηi of the "incident" material (usually air), and the index of refraction ηr of the refracting material according to Snell's law:

sin θr = (ηi / ηr) sin θi
5.7.9 Shadows

Hidden-surface methods can be used to locate areas where light sources produce shadows. By applying a hidden-surface method with the viewing position at a light source, we can determine which surface sections cannot be "seen" from the light source; these are the shadow areas. Once we have determined the shadow areas for all light sources, the shadows can be treated as surface patterns and stored in pattern arrays. Surfaces that are visible from the view position are shaded according to the lighting model, which can be combined with texture patterns. We can display shadow areas with
ambient light intensity only or we can combine the ambient light with specified surface textures.
5.8 COLOR MODELS AND COLOR APPLICATIONS
A Color model is a method for explaining the properties or behavior of color within some particular context.
No single color model can explain all aspects of color, so we make use of different models to help describe the different perceived characteristics of color.
There are two types of color models namely additive and subtractive color models.
Additive color models use light to display color.
Subtractive color models use printing inks.
Colors perceived in additive models are the result of transmitted light.
Colors perceived in subtractive models are the result of reflected light.
Colors in additive systems are created by adding colors to black to create new colors.
Additive color environments are selfluminous.
Color on monitors is additive.
In a subtractive system, primary colors are subtracted from white to create new colors.
Any color image reproduced on paper is an example of the use of a subtractive color system.
Color management is a term that describes a technology that translates the colors of an object from their current color space to the color space of the output devices like monitors, printers etc.
Color space is a more specific term for a certain combination of a color model plus a color mapping function.
There are several color models used in computer graphics but the two most common are the RGB for computer display and the CMYK for printing.
5.8.1 Properties of Light

What we perceive as "light", or different colors, is a narrow frequency band within the electromagnetic spectrum. A few of the other frequency bands within this spectrum are called radio waves, microwaves, infrared waves, and X-rays.
Each frequency value within the visible band corresponds to a distinct color. At the low-frequency end is red and at the high-frequency end is violet. The wavelength and frequency of a monochromatic wave are inversely proportional to each other, with the proportionality constant being the speed of light c:

c = λ f
If low frequencies are predominant in the reflected light, the object is described as red. In this case, we say the perceived light has a dominant frequency (or dominant wavelength) at the red end of the spectrum. The dominant frequency is also called the hue, or simply the color, of the light.
Brightness is the perceived intensity of the light. Intensity is the radiant energy emitted per unit time, per unit solid angle, and per unit projected area of the source. Radiant energy is related to the luminance of the source. Purity, or saturation, describes how washed out or how "pure" the color of the light appears.
These three characteristics dominant frequency, brightness and purity are commonly used to describe the different properties we perceive in a source of light.
The term chromaticity is used to refer collectively to the two properties describing color characteristics: purity and dominant frequency.
If two color sources combine to produce white light, they are referred to as complementary colors.
Examples of complementary colors are red and cyan, green and magenta and blue and yellow.
Typically, color models that are used to describe combinations of light in terms of dominant frequency (hue) use three colors to obtain a reasonably wide range of colors, called the Color gamut for that model.
Two different color light sources with suitably chosen intensities can be used to produce a range of other colors.
The two or three colors used to produce other colors in such a color model are referred to as Primary colors.
5.8.2 Standard Primaries and the Chromaticity Diagram
Since no finite set of color light sources can be combined to display all possible colors, three standard primaries were defined in 1931 by the International Commission on Illumination, referred to as the CIE (Commission Internationale de l'Éclairage).
The three standard colors are imaginary colors. They are defined mathematically with positive color matching functions that specify the amount of each primary needed to describe any spectral color.
XYZ Color Model
The set of CIE primaries is generally referred to as the XYZ, or (X, Y, Z), color model, where X, Y, Z represent vectors in a three-dimensional, additive color space. Any color Cλ is then expressed as

Cλ = X X + Y Y + Z Z

where X, Y, Z designate the amounts of the standard primaries needed to match Cλ. Normalized amounts are thus calculated as

x = X / (X + Y + Z),  y = Y / (X + Y + Z),  z = Z / (X + Y + Z)

with x + y + z = 1. Thus any color can be represented with just the x and y amounts. Since we have normalized against luminance, parameters x and y are called the chromaticity values because they depend only on hue and purity.
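The normalization can be shown in a short sketch (the function name is our own):

```python
def chromaticity(X, Y, Z):
    """Normalize CIE tristimulus values. Since x + y + z = 1,
    the pair (x, y) suffices to specify the chromaticity."""
    s = X + Y + Z
    return X / s, Y / s, Z / s

x, y, z = chromaticity(3.0, 4.0, 3.0)
print((x, y))         # -> (0.3, 0.4)
print(x + y + z)      # always 1, up to rounding
```

Because the normalization divides out the overall magnitude, scaling X, Y, Z by any common factor (i.e., changing only the luminance) leaves (x, y) unchanged, which is why they capture only hue and purity.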
CIE Chromaticity Diagram
When we plot the normalized amounts x and y for colors in the visible spectrum, we obtain the tongueshaped curve. This curve is called the CIE Chromaticity Diagram. The chromaticity diagram is useful for the following:
Comparing color gamuts for different sets of primaries.
Identifying complementary colors.
Determining dominant wavelength and purity of a given color.
Color gamuts are represented on the chromaticity diagram as straight line segments or as polygons.
Since the color gamut for two primaries is a straight line segment, complementary colors must be represented on the chromaticity diagram as two points situated on opposite sides of the white point C and connected with a straight line.
5.8.3 Intuitive Color Concepts

Starting with the pigment for a "pure color", an artist adds a black pigment to produce different shades of that color. Different tints of the color are obtained by adding a white pigment to the original color, making it lighter as more white is added.
Tones of the color are produced by adding both black and white pigments.
5.8.4 RGB Color Model
Based on the tristimulus theory of vision, our eyes perceive color through the stimulation of three visual pigments in the cones of the retina. These visual pigments have peak sensitivities at wavelengths of about 630 nm (red), 530 nm (green), and 450 nm (blue). This theory of vision is the basis for displaying color output on a video monitor using the three color primaries red, green, and blue, referred to as the RGB color model. In the RGB color cube, the origin represents black and the vertex with coordinates (1, 1, 1) is white. Each color point within the bounds of the cube can be represented as a triple (R, G, B), where values for R, G, B are assigned in the range from 0 to 1. Thus, a color Cλ is expressed in RGB components as

Cλ = R R + G G + B B

The magenta vertex is obtained by adding red and blue to produce the triple (1, 0, 1), and white at (1, 1, 1) is the sum of the red, green, and blue vertices. Some important features of the RGB color model are:
1. It is an additive color model.
2. It is used for computer displays.
3. It uses light to display color.
4. Colors result from transmitted light.
5. Red + Green + Blue = White.
RGB (x, y) chromaticity coordinates:

        NTSC standard      CIE model          Approx. color-monitor values
  R     (0.670, 0.330)     (0.735, 0.265)     (0.628, 0.346)
  G     (0.210, 0.710)     (0.274, 0.717)     (0.268, 0.588)
  B     (0.140, 0.080)     (0.167, 0.009)     (0.150, 0.070)
5.8.5 YIQ Color Model
In the YIQ model, parameter Y is the same as in the XYZ model. Luminance information is contained in the Y parameter, while chromaticity information is incorporated into the I and Q parameters. A combination of red, green, and blue intensities is chosen for the Y parameter to yield the standard luminosity curve. An RGB signal can be converted to a television signal using an NTSC encoder, which converts RGB values to YIQ values and then modulates and superimposes the I and Q information on the Y signal. The conversion from RGB values to YIQ values is accomplished with the transformation:
[ Y ]   [ 0.299   0.587   0.114 ] [ R ]
[ I ] = [ 0.596  -0.275  -0.321 ] [ G ]
[ Q ]   [ 0.212  -0.528   0.311 ] [ B ]
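The matrix multiplication amounts to three weighted sums, which can be written out directly (the function name is illustrative):

```python
def rgb_to_yiq(r, g, b):
    """Apply the NTSC RGB-to-YIQ transformation matrix row by row."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.275 * g - 0.321 * b
    q = 0.212 * r - 0.528 * g + 0.311 * b
    return y, i, q

# White (1, 1, 1) maps to full luminance with (almost) no chromaticity:
# the Y-row coefficients sum to 1 and the I-row coefficients sum to 0.
print(rgb_to_yiq(1.0, 1.0, 1.0))
```

Note that for gray inputs (R = G = B) the I and Q terms nearly cancel, which is exactly why black-and-white television sets could display the Y signal alone.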
5.8.6 CMY Color Model
A color model defined with the primary colors cyan, magenta, and yellow is useful for describing color output to hard-copy devices. The CMY color model forms its gamut from the primary subtractive colors cyan, magenta, and yellow. When cyan, magenta, and yellow inks are combined, they produce black.
In the CMY model, the point (1, 1, 1) represents black, because all components of the incident light are subtracted.
The matrix transformation for the conversion from an RGB representation to a CMY representation is

[ C ]   [ 1 ]   [ R ]
[ M ] = [ 1 ] - [ G ]
[ Y ]   [ 1 ]   [ B ]

Here white is represented in the RGB system as the unit column vector.
Some of the important features of the CMY color model are:
1. It is a subtractive color model.
2. It is used for printed material.
3. It uses ink to display color.
4. Colors result from reflected light.
5. Cyan + Magenta + Yellow = Black.
5.8.7 HSV Color Model
The HSV model uses color descriptions that have a more intuitive appeal to a user. Color parameters in this model are hue (H), saturation (S), and value (V). Hue specifies the basic color. Saturation determines the amount of white light mixed with the original color, and so describes its purity. Value gives the intensity (brightness) of the color.
The three dimensional representation of the HSV model is derived from the RGB cube.
Hue is represented as an angle about the vertical axis, ranging from 0° at red through 360°. Vertices of the hexagon are separated by 60° intervals: yellow is at 60°, green at 120° and cyan, opposite red, at H = 180°. Saturation S varies from 0 to 1. Value V varies from 0 at the apex of the hexcone to 1 at the top.
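The hexcone geometry described above can be sketched as a conversion routine (a standard RGB-to-HSV derivation, not taken from the text; the function name and the 0..1 input range are assumptions):

```python
def rgb_to_hsv(r, g, b):
    """Convert r, g, b in [0, 1] to (hue in degrees, saturation, value)."""
    mx, mn = max(r, g, b), min(r, g, b)
    delta = mx - mn
    v = mx                                  # value: height up the hexcone axis
    s = 0.0 if mx == 0.0 else delta / mx    # saturation: distance from the axis
    if delta == 0.0:                        # a gray: hue is undefined, use 0
        h = 0.0
    elif mx == r:
        h = (60.0 * (g - b) / delta) % 360.0  # sector between magenta and yellow
    elif mx == g:
        h = 60.0 * (b - r) / delta + 120.0    # sector between yellow and cyan
    else:
        h = 60.0 * (r - g) / delta + 240.0    # sector between cyan and magenta
    return h, s, v

# Pure red sits at H = 0°, yellow at 60°, cyan opposite red at 180°.
print(rgb_to_hsv(1.0, 0.0, 0.0))  # → (0.0, 1.0, 1.0)
print(rgb_to_hsv(1.0, 1.0, 0.0))  # → (60.0, 1.0, 1.0)
```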
5.9 SUMMARY
In this unit we have discussed the basic properties of light and the concept of a color model. Light sources are described in terms of hue, brightness and saturation. One method of defining a color model is to specify a set of two or more primary colors that are combined to produce various other colors. Common color models defined with three primary colors are the RGB and CMY models. Other color models based on the specification of luminance and purity include the YIQ, HSV and HLS color models. A dialogue for an application package can be designed from the user's model, which describes the functions of the application package.
UNIT I QUESTIONS
UNIT QUESTIONS
1. Define computer graphics. Give a survey of the various graphics systems.
2. Explain briefly the video display devices.
3. Explain the process involved in the raster-scan system.
4. Explain the tasks involved in the random-scan system.
5. Describe the various input devices in detail.
6. Write a note on the hard-copy devices.
UNIT II QUESTIONS
SELF ASSESSMENT QUESTIONS
Answer the following questions:
1. Define the term output primitive.
2. ________________ is a faster method for calculating pixel positions.
3. Give the types of fonts.
4. List the various character attributes.
5. What do you mean by antialiasing?

2.5 ANSWERS TO SELF ASSESSMENT QUESTIONS
1. Graphics programming packages provide functions to describe a scene in terms of the basic geometric structures referred to as output primitives.
2. DDA algorithm.
3. Serif and sans serif are the types of fonts.
4. The various character attributes are font, size, color and orientation.
5. Displayed primitives generated by the raster algorithms have a jagged appearance due to low-frequency sampling; this is called aliasing. The appearance of displayed raster lines can be improved by applying antialiasing.
UNIT QUESTIONS
1. Define the output primitive. Explain the DDA algorithm in detail.
2. Explain briefly the ellipse-generating algorithm.
3. Explain the circle-generating algorithm.
4. Explain the tasks involved in antialiasing.
5. Describe the various line attributes.
6. Write a note on the area-fill attributes.
7. Explain the various character attributes.
UNIT III QUESTIONS
3.17 SELF ASSESSMENT QUESTIONS
Answer the following questions:
1. Define the term transformation.
2. ________________ alters the size of an object.
3. What is meant by concatenation of matrices?
4. ________________ produces a mirror image of an object.
5. Define a window and a viewport.

3.18 ANSWERS TO SELF ASSESSMENT QUESTIONS
1. Changes in the orientation, size and shape of an object are accomplished with geometric transformations that alter the coordinate descriptions of objects.
2. Scaling transformation.
3. Forming the product of transformation matrices is referred to as concatenation or composition of matrices.
4. Reflection.
5. A world-coordinate area selected for display is called a window. An area on a display device to which a window is mapped is called a viewport.
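The window-to-viewport mapping defined in answer 5 can be sketched as follows (a minimal sketch of the standard mapping; the function name and tuple layout are assumptions):

```python
def window_to_viewport(xw, yw, window, viewport):
    """Map world point (xw, yw) from the window to the viewport.

    window and viewport are (xmin, ymin, xmax, ymax) rectangles.
    """
    xwmin, ywmin, xwmax, ywmax = window
    xvmin, yvmin, xvmax, yvmax = viewport
    sx = (xvmax - xvmin) / (xwmax - xwmin)  # horizontal scale factor
    sy = (yvmax - yvmin) / (ywmax - ywmin)  # vertical scale factor
    return xvmin + (xw - xwmin) * sx, yvmin + (yw - ywmin) * sy

# The window center maps to the viewport center.
print(window_to_viewport(5.0, 5.0, (0.0, 0.0, 10.0, 10.0),
                         (100.0, 200.0, 300.0, 400.0)))  # → (200.0, 300.0)
```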
UNIT QUESTIONS
1. Define the term transformation. Explain briefly the various transformations with a neat diagram.
2. Explain briefly reflection and shear.
3. Explain the line clipping algorithms.
4. Explain the tasks involved in polygon clipping.
5. Describe the process of text clipping.
6. Write a note on curve clipping.
UNIT IV QUESTIONS
SELF ASSESSMENT QUESTIONS
Answer the following questions:
1. Define the term three-dimensional transformation.
2. ________________ can be used to modify object shapes.
3. What are the types of projection?
4. When the projection is perpendicular to the view plane we have an ________________.
5. Define principal vanishing point.

4.11 ANSWERS TO SELF ASSESSMENT QUESTIONS
1. Methods for geometric transformations and object modeling in three dimensions, extended from two-dimensional methods by including considerations for the z coordinate, are known as three-dimensional transformations.
2. Shearing transformation.
3. The types of projection are parallel and perspective projection.
4. Orthographic parallel projection.
5. The vanishing point for any set of lines that are parallel to one of the principal axes of an object is referred to as a principal vanishing point.
UNIT QUESTIONS
1. Define the term three-dimensional transformation. Explain briefly the various 3D transformations with a neat diagram.
2. Explain briefly 3D reflection and shear.
3. Explain the process of projection with a neat diagram.
4. Explain the tasks involved in 3D clipping.
5. Describe the process of the viewing pipeline.
UNIT V QUESTIONS
SELF ASSESSMENT QUESTIONS
Answer the following questions:
1. The primary physical device used for string input is _____________.
2. In _________ mode, the input devices initiate data input to the application program.
3. What are the types of visible-surface detection methods?
4. A commonly used image-space approach to detect visible surfaces is the _____________.
5. Give the various color models.

5.11 ANSWERS TO SELF ASSESSMENT QUESTIONS
1. Keyboard.
2. Event mode.
3. The types of visible-surface detection methods are object-space methods and image-space methods.
4. Depth-buffer method.
5. The various color models are RGB, CMY, YIQ, HSV, HLS and XYZ.
UNIT QUESTIONS
1. Explain briefly the various interactive input methods.
2. Explain briefly the various input functions.
3. Explain the basic illumination models.
4. Explain the tasks involved in the RGB color model.
5. Describe the various color models in detail.
6. What do you mean by hue, saturation and intensity? How are they related to dominant color, purity and luminance respectively?
ENGINEERING GRAPHICS
Published by
Institute of Management & Technical Studies
Address: E-41, Sector-3, Noida (U.P)
www.imtsinstitute.com
Contact: 91+9210989898