
GEOMETRY AND LINE GENERATION

[Introduction, Lines, Line Segments, Perpendicular Lines, Distance between a Point and a Line, Vectors, Pixels and Frame Buffers.]

The computer is an information-processing machine, a tool for storing, manipulating, and correlating data. Data tables can easily be generated, but the time and effort required to understand them grows with the amount of data. Computer graphics (CG) addresses this problem directly: it is the study of techniques to improve communication between human and machine. A graphical presentation of the data eases the problem; this is the passive form of CG. Communication can also be a two-way process, in which a dialogue is established through the graphics medium for interaction. This is termed interactive computer graphics.

Applications of CG: Management information may be displayed as bars and charts. Scientific theories and models may be described in pictorial form. CG can be used in CAD to display machine components, layouts, blueprints, PCBs, etc. Maps can be created for all kinds of geographic information. It also serves as a tool for animation and video games.

Standardization: the CORE system, GKS (Graphics Kernel System), and PHIGS (Programmer's Hierarchical Interactive Graphics Standard). CGM (Computer Graphics Metafile) is a file format for picture information that allows device-independent capture, storage, and transfer. CGI (Computer Graphics Interface) is a companion standard that provides an interface for the CGM primitives.

Lines: A point (a position in a plane) can be specified with an ordered pair of numbers (x, y), where x is the horizontal distance and y is the vertical distance from the origin. If two points (x1, y1) and (x2, y2) are specified, then an equation for the line through them is given by:

(y - y1) / (x - x1) = (y2 - y1) / (x2 - x1)    (1)

Multiplying by the denominators gives the form:

(y - y1) (x2 - x1) = (y2 - y1) (x - x1)    (2)

Solving for y gives:

y = [(y2 - y1) / (x2 - x1)] (x - x1) + y1    (3)

This may be written as

y = mx + b    (4)

where m = (y2 - y1) / (x2 - x1) and b = y1 - m x1.

Equation 4 is called the slope-intercept form of the line. The slope m is the change in height divided by the change in width for two points on the line (the rise over the run). The intercept b is the height at which the line crosses the y-axis. Multiplying out the factors in equation 2 and collecting them on one side of the equal sign yields a different form of the line equation, called the general form:

(y2 - y1) x - (x2 - x1) y + x2 y1 - x1 y2 = 0    (5)

or:

r x + s y + t = 0    (6)

where:

r = (y2 - y1)
s = -(x2 - x1)
t = x2 y1 - x1 y2

The values of r, s, and t are not unique: multiplying them by any common factor produces a new set of values which still satisfies equation 6 and describes the same line. The values of r, s, and t are sometimes chosen so that

r^2 + s^2 = 1    (7)

Comparing equations 4 and 6 we see that:

m = -r / s and b = -t / s    (8)

Point of intersection: If two lines cross each other, the point of intersection P1 can be evaluated. This point P1 lies on both lines, so solving the equations of the lines for the common point yields P1.

Proof: Consider the slope-intercept forms of the two lines:

Line 1: y = m1 x + b1 and Line 2: y = m2 x + b2    (9)

Let the common point be P1 (x1, y1). Then:

y1 = m1 x1 + b1 and y1 = m2 x1 + b2    (10)

Equating over y1 gives:

m1 x1 + b1 = m2 x1 + b2    (11)

Solving for x1 yields:

x1 = (b2 - b1) / (m1 - m2)    (12)

Substituting x1 in line 1 or line 2 gives:

y1 = (b2 m1 - b1 m2) / (m1 - m2)    (13)

The point of intersection P1 is therefore:

[(b2 - b1) / (m1 - m2), (b2 m1 - b1 m2) / (m1 - m2)]    (14)

{Note: Evaluating P1 for parallel lines (m1 = m2) results in a division by zero.}

Similarly, P1 can be evaluated for the general-form case and is given by:

[(s1 t2 - s2 t1) / (s2 r1 - s1 r2), (t1 r2 - t2 r1) / (s2 r1 - s1 r2)]    (15)
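As an illustration of equations 4, 12, and 13, here is a minimal sketch in Python (not part of the original text; the function names are our own) that converts two endpoints to slope-intercept form and intersects two lines, returning None for parallel lines:

def slope_intercept(x1, y1, x2, y2):
    # Slope and y-intercept of the line through (x1, y1) and (x2, y2).
    # Undefined for vertical lines (x1 == x2).
    m = (y2 - y1) / (x2 - x1)
    b = y1 - m * x1
    return m, b

def intersect(m1, b1, m2, b2):
    # Point of intersection of y = m1*x + b1 and y = m2*x + b2 (equations 12 and 13).
    if m1 == m2:
        return None                      # parallel lines: no unique intersection
    x = (b2 - b1) / (m1 - m2)
    y = (b2 * m1 - b1 * m2) / (m1 - m2)
    return x, y

# The line through (0, 0) and (2, 2) meets the line through (0, 2) and (2, 0) at (1, 1).
m1, b1 = slope_intercept(0, 0, 2, 2)
m2, b2 = slope_intercept(0, 2, 2, 0)
print(intersect(m1, b1, m2, b2))         # (1.0, 1.0)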

Line Segments: A line segment is specified by its two endpoints, and generally in computer graphics a line will be specified this way. The slope m and y-intercept b are often calculated as intermediate results for displaying the points that lie on the segment. If the endpoints are P1 (x1, y1) and P2 (x2, y2), then the equation y = mx + b can be formulated. Another point P3 (x3, y3) lies on the line segment if the following three conditions hold:

1. y3 = m x3 + b
2. min(x1, x2) ≤ x3 ≤ max(x1, x2)
3. min(y1, y2) ≤ y3 ≤ max(y1, y2)

As a parameter u varies from 0 to 1, the x coordinate changes uniformly from x1 to x2, and the y coordinate changes uniformly from y1 to y2. This can be expressed by the following equations:

x = x1 + (x2 - x1) u    (16)
y = y1 + (y2 - y1) u    (17)

The two equations together describe a straight line. This is known as the parametric form, because the x and y values on the line are given in terms of a parameter u.

The length of the segment can be calculated using the Pythagorean theorem:

L^2 = (x2 - x1)^2 + (y2 - y1)^2    (18)
L = sqrt[(x2 - x1)^2 + (y2 - y1)^2]    (19)

The midpoint, which lies halfway between the two endpoints, can be evaluated easily:

(xm, ym) = [(x1 + x2) / 2, (y1 + y2) / 2]    (20)

Perpendicular lines: Whether two given lines are perpendicular can be determined by examining their slopes. If two lines are perpendicular, the slope of one will be the negative reciprocal of the other (m1 = -1/m2).

Proof: Consider two lines

Line 1: y = m1 x + b1    (21.1)
Line 2: y = m2 x + b2    (21.2)

If one line is perpendicular to the second, then a line parallel to the first will also be perpendicular to the second. The work can be simplified by taking parallel lines which pass through the origin:

Parallel to Line 1: y = m1 x    (21.3)
Parallel to Line 2: y = m2 x    (21.4)

Consider a point P1 (x1, y1) on Line 1, so that y1 = m1 x1, and another point P2 (x2, y2) on Line 2, so that y2 = m2 x2. The points P1 (x1, y1), P2 (x2, y2), and the origin P0 (0, 0) form a triangle, and the line segments L (P1, P2), L1 (P0, P1), and L2 (P0, P2) form its sides. The segment lengths are:

L^2 = (x2 - x1)^2 + (y2 - y1)^2
L1^2 = x1^2 + y1^2
L2^2 = x2^2 + y2^2

The segments L, L1, and L2 form the sides of a right triangle if the lines are perpendicular. The condition can be obtained by applying the Pythagorean theorem to the segment lengths:

L^2 = L1^2 + L2^2

Substitution gives:

(x2 - x1)^2 + (y2 - y1)^2 = x1^2 + y1^2 + x2^2 + y2^2    (22)

Simplification gives:

0 = 2 x1 x2 + 2 y1 y2    (23.1)

or:

y1 / x1 = -x2 / y2    (23.2)

Since y1 = m1 x1 and y2 = m2 x2 (that is, m1 = y1 / x1 and m2 = y2 / x2), this gives:

m1 = -1 / m2    (24)

Therefore, if two lines are perpendicular, the slope of one is the negative reciprocal of the other (m1 = -1/m2).

Distance between a Point and a Line: The distance between a point (x0, y0) and a line rx + sy + t = 0 (where r, s, and t are chosen to satisfy r^2 + s^2 = 1) can be evaluated as follows. Consider the line which is perpendicular to the original line and contains the point:

s x - r y + (r y0 - s x0) = 0    (27)

The point of intersection (by equation 15) of these two lines is:

[s (s x0 - r y0) - r t, -s t - r (s x0 - r y0)]    (28)

The distance between the given point and this intersection point (by equation 19) is:

L = sqrt({x0 - [s (s x0 - r y0) - r t]}^2 + {y0 - [-s t - r (s x0 - r y0)]}^2)    (29)

This reduces to:

L = | r x0 + s y0 + t |    (30)

Note: When the expression is zero the point lies on the line, while other values give the perpendicular distance of the point from the line. This simple relationship is the motivation for choosing r^2 + s^2 = 1.

Vectors: A vector has a direction and a length. It can be denoted by V = (Dx, Dy), where Dx indicates how far to move along the x-axis direction and Dy how far to move along the y-axis direction. Vectors do not have a fixed position in space; they specify only direction and magnitude. The idea of a vector is useful for drawing lines on a CRT with an electron beam, which can simply be commanded to move so far from its current position in a given direction. Two vectors can be added by adding their respective components:

V1 + V2 = (Dx1, Dy1) + (Dx2, Dy2) = (Dx1 + Dx2, Dy1 + Dy2)    (31)

A vector can be multiplied by a number by multiplying each of its components. This preserves the direction but changes the magnitude:

n V = n (Dx, Dy) = (n Dx, n Dy)    (32)

The magnitude of a vector V = (Dx, Dy) is given by:

|V| = sqrt(Dx^2 + Dy^2)    (33)

A unit vector can be obtained by multiplying a vector by the reciprocal of its magnitude; thus the magnitude of a unit vector is 1. Unit vectors conveniently capture direction information. The parametric equation for a line in vector form can be written as:

V = V1 + u (V2 - V1)    (34)

where V = (x, y), V1 = (x1, y1), and V2 = (x2, y2).
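The vector operations and the point-to-line distance above translate directly into code. A minimal sketch in Python (the helper names are ours, not from the text):

import math

def add(v1, v2):
    # Equation 31: component-wise vector addition.
    return (v1[0] + v2[0], v1[1] + v2[1])

def scale(n, v):
    # Equation 32: scaling changes the magnitude but not the direction.
    return (n * v[0], n * v[1])

def magnitude(v):
    # Equation 33.
    return math.sqrt(v[0] ** 2 + v[1] ** 2)

def unit(v):
    # Multiply by the reciprocal of the magnitude to obtain a unit vector.
    return scale(1.0 / magnitude(v), v)

def point_line_distance(x0, y0, r, s, t):
    # Equation 30, for the line r*x + s*y + t = 0; the coefficients are
    # normalized here in case the caller has not arranged r^2 + s^2 = 1.
    norm = math.sqrt(r * r + s * s)
    r, s, t = r / norm, s / norm, t / norm
    return abs(r * x0 + s * y0 + t)

# Example: the distance from (3, 4) to the x-axis (0*x + 1*y + 0 = 0) is 4.
print(point_line_distance(3, 4, 0, 1, 0))    # 4.0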

Pixels and Frame Buffers: A pixel (picture element) is the smallest addressable screen element. Each pixel has a name, or address, and the names correspond to coordinates which identify points. The display screen is treated as a grid (or array) of pixels; each row and column can be numbered to give the screen coordinates, so that the coordinate (i, j) gives the column and row of a pixel. Each pixel is centered at its coordinates. Line segments are drawn by setting the intensities (brightness) of a string of pixels between a starting pixel and an ending pixel. The maximum number of distinguishable points a line may have depends on the resolution of the display device.

The frame buffer is an array, stored in the computer's memory, which contains an internal representation of the image to be displayed. It collects and stores the pixel values for use by the display device; the graphics display device accesses this array to determine the intensity with which each pixel should be illuminated (displayed).

Vector Generation: The process of turning on the pixels for a line segment is called vector generation. The problem is to select pixels which lie near the line segment. It is not easy to select all the pixels that lie on the line, and another problem is that the apparent thickness of the line would change with slope and position. An alternative is to step along the columns of pixels and ask which row is closest to the line; the pixel in that row and column is then turned on. This works well for gentle-slope lines (-1 < m < 1), but for steep lines this method leaves gaps. Steep lines can be handled by stepping along the rows of pixels and solving for the columns. (Algorithms are discussed in later sections.)

Antialiasing of lines: Lines may have a jagged or stair-step appearance when they step from one row or column to the next. This is one aspect of a phenomenon called aliasing. Aliasing produces defects which occur when the scene to be displayed changes faster (or more smoothly) than every two pixels. Setting pixels to gray levels between black and white provides a means of reducing this effect. The technique is called antialiasing, and it uses the gray levels to gradually turn off the pixels in one row as it gradually turns on the pixels in the next. If a line segment lies between two pixels, normally the closest one is selected. Suppose that instead of picking the closest, both are turned on, with the intensity values set so that the pixel closest to the line receives most of the intensity; the sum of the intensity values for the two pixels should match the total intensity value for the line. The function used may be a simple or a complex expression based on intensity patterns, pixel shapes, and how lines cover them. Antialiasing with complicated functions can still be done efficiently by storing the function values in a lookup table.

Thick line segments: These are line segments with thickness greater than one pixel. Two vector-generation algorithms are run in parallel to turn on the pixels along the edges (i.e., the top and bottom boundaries), and the pixels that lie between the boundaries are also turned on, to produce a thick line segment. For a gentle-slope line between (x1, y1) and (x2, y2) with thickness w, the boundary lines are:

Top boundary line: (x1, y1 + wy) and (x2, y2 + wy)
Bottom boundary line: (x1, y1 - wy) and (x2, y2 - wy)

Where: wy = [(w - 1) / 2] * sqrt[(x2 - x1)^2 + (y2 - y1)^2] / |x2 - x1|

The factor containing the x and y values gives the amount to shift up and down in order to achieve the proper width w as measured perpendicular to the line direction. Sharply sloping lines can be handled similarly, with the x and y roles reversed.

Character generation: Strings of characters are often displayed to label and annotate drawings and to give instructions or information to the user. Character-generation routines are often built into the graphics display device (usually as hardware, but sometimes in software). There are two primary methods of character generation: the stroke method and the dot-matrix (or bitmap) method. The stroke method creates characters out of a series of line segments, like strokes of a pen. It lends itself to changes of scale: the characters may be made twice as large simply by doubling the length of each segment. The dot-matrix method represents characters by an array of dots (5x7 arrays are often used, but 7x9 and 9x13 are also found). This array is like a small frame buffer, just big enough to hold a character; the dots are the pixels of this small array. Placing the character on the screen becomes a matter of copying pixel values from the small character array into some portion of the screen's frame buffer. The memory containing the character dot-matrix array is often a hardware device called a character-generator chip, but RAM may also be used when many fonts are desired. This method does not lend itself to variable-sized characters, since the size of a dot is fixed. Antialiasing techniques can be applied to characters to improve their appearance, particularly for very small fonts, where the finite resolution of the display interferes with their smooth curved shapes.
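As an illustration of the dot-matrix method, the sketch below copies a 5x7 character bitmap into a frame buffer held as a two-dimensional array. The array layout and the names are our own assumptions, not the format of any particular character-generator chip:

# Frame buffer as rows of pixel values (0 = off, 1 = on), row 0 at the bottom.
WIDTH, HEIGHT = 64, 32
frame_buffer = [[0] * WIDTH for _ in range(HEIGHT)]

# A 5x7 bitmap for the letter 'T', listed top row first.
CHAR_T = [
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]

def draw_char(bitmap, col, row):
    # Copy the character's dots into the frame buffer, with the lower-left
    # corner of the character cell at pixel (col, row).
    height = len(bitmap)
    for i, bits in enumerate(bitmap):
        for j, bit in enumerate(bits):
            if bit:
                frame_buffer[row + (height - 1 - i)][col + j] = 1

draw_char(CHAR_T, 10, 5)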

GRAPHICS PRIMITIVES
[Introduction, Display Devices, Primitive Operations, The Display-File Interpreter, Normalized Device Coordinates, Display-File Structures.]

The purpose of a graphics system is to make programming easier. It consists of special hardware for the output and input of pictorial information and software routines for performing the basic graphics operations. It supplies operations such as drawing a line, writing a character or a string of text, changing the line style, and moving the pen, together with constructs suited to the creation of graphical images, to ease the development of a graphics program. Some systems take the form of special high-level graphics languages, while others are extensions to general-purpose high-level languages.

Display Devices: These are the devices which output the images. In a refresh CRT an electron beam is directed at the surface of the screen. The position of the beam is controlled by electric or magnetic fields within the tube. Once the screen phosphors of this tube are illuminated by the electron beam, they emit light, but the light fades very rapidly. The display screen is therefore refreshed again and again, drawing the image repeatedly by directing the electron beam back over the same points quickly.

In a direct-view storage tube (DVST) an electron beam is likewise directed at the surface of the screen and positioned by electric or magnetic fields within the tube, but once the screen phosphors of this special tube are illuminated, they stay lit. Two electron guns (the primary and flood guns) are used. The primary gun is used to store the picture pattern on a fine wire grid, coated with dielectric and mounted just behind the screen. The flood gun is used to maintain the picture display. DVSTs are useful for displaying highly complex, static images without flicker. The displayed image cannot be altered selectively: the entire screen must be erased and drawn afresh. This is time-consuming (it can take several seconds) and makes the DVST less suitable for real-time animation.

A plasma panel stores the image and allows selective erasing. It contains a gas (at low pressure) sandwiched between horizontal and vertical grids of fine wires. A large voltage difference applied between these wires causes the gas to glow. To illuminate a pixel, the voltage is increased momentarily on the wires that intersect at the desired point; to extinguish a pixel, the voltage on the corresponding wires is reduced until the glow cannot be maintained. Plasma panels are very durable and are often used for military applications.

In a liquid crystal display (LCD) the light is either transmitted or blocked depending upon the orientation of the molecules in the liquid crystal. The material is sandwiched between horizontal and vertical grids of electrodes, and an electrical signal is used to change the orientation of the molecules to turn pixels on or off. This is a flat-panel display technology, which makes it light in weight and gives it low voltage and power requirements.

Hard-copy devices: These are the raster printers and plotters that produce the image in a single sweep across the page.

A number of technologies fall into this category (viz., film printers, laser printers, electrostatic plotters, thermal and thermal-transfer printers, ink-jet printers, and impact dot-matrix printers). The devices range in resolution from about 100 dpi to over 1000 dpi (dpi = dots per inch). Dot-matrix printers have an array of wires which can be individually triggered to press an inked ribbon and make a dot on the paper; by sweeping the array across the paper, images can be formed. Ink-jet printers form tiny droplets of ink, guided to the paper through nozzles, to form dots. Laser printers are built on top of copier technology: a laser supplies the light pattern, a rotating mirror sweeps the laser in a raster pattern, and a light valve turns the beam on or off to form the image. Film printers use a laser scanning system in which the laser is focused directly on photographic film to form the picture. Thermal printers have a print head which can burn tiny dots on heat-sensitive paper. Thermal-transfer printers have a similar print head, but it is used to melt dots of wax-based pigment onto the paper. Electrostatic printers use an array of wires to which a voltage may be selectively applied; as the paper passes across the wires, it is given a pattern of electrostatic charges. The paper then passes through a liquid toner, which is attracted to the charged pattern and makes a permanent impression upon drying.

Primitive Operations: Most graphics systems offer a similar set of graphics commands for the different operations (drawing a line segment, displaying text, changing line style, etc.). The first primitive command is that for drawing a line segment. A line segment is specified by its two endpoints. It is often the case that segments will be connected end to end, so that the final point of one segment becomes the first point of the next. To avoid specifying this point twice, the system can keep track of the current pen or electron-beam position. Line segments may be drawn with an absolute line command {LINE-ABS-2 (X, Y), where X and Y are the coordinates of the final position} or with a relative line command {LINE-REL-2 (DX, DY), where DX and DY are the distances to move along the x-axis and y-axis from the current position}. Often, however, line segments are disconnected from one another; the pen must then be moved from the current position to another position without drawing, so that the interconnecting segment is not shown. Again there are both absolute and relative moves: {MOVE-ABS-2 (X, Y), where X and Y are the coordinates of the final position} and {MOVE-REL-2 (DX, DY), where DX and DY are the distances to move along the x-axis and y-axis from the current position}.

Example (Line Drawing of a House): This can be constructed using either absolute or relative commands. If absolute commands are used, the image will always be located at the same position on the screen. If only relative commands are used, the position of the image will depend upon the current position. Generally such a series of commands is stored in a subprogram to avoid repetitive coding. Subprograms may be used to construct pictures made of repeated instances of basic components; the subprogram for each type of component should then be written using only relative commands. Drawing the entire picture is reduced to positioning the beam and calling subprograms.

Subprogram House
BEGIN

    LINE-REL-2 (0, 0.2);
    LINE-REL-2 (0.1, 0.2);
    LINE-REL-2 (0.1, -0.2);
    LINE-REL-2 (0, -0.2);
    LINE-REL-2 (-0.2, 0);
END;

The above subprogram uses relative commands. It starts at the current position, which becomes the lower-left corner of the drawing, and draws the left wall, the roof, the right wall, and finally the floor. Since only relative commands are used, more instances can be displayed simply by calling the subprogram at different starting positions. Three calls of the subprogram at different initial pen positions are shown below:

BEGIN
    MOVE-ABS-2 (0.1, 0.2); HOUSE;
    MOVE-ABS-2 (0.4, 0.2); HOUSE;
    MOVE-ABS-2 (0.7, 0.2); HOUSE;
END;

Display-File Interpreter: It serves as an interface between the graphics program and the display device. The display file contains the information necessary to construct the picture. The information is in the form of instructions such as "draw a line" or "move the pen". Saving instructions such as these takes much less storage than saving the picture itself. The display-file instructions may also be saved in a file, either for display at a later time or for transfer to another machine; such files containing imaging instructions are called metafiles. The display-file interpreter converts these instructions into actual images. Some graphics systems are equipped with special hardware (a graphics processor) to do this; in other systems, the behavior of a display processor is simulated in software.

Normalized Device Coordinates: Different display devices have different screen sizes as measured in pixels. To make programs device-independent, coordinates should be specified in some units other than pixels. These device-independent coordinates are called normalized device coordinates. In these units the screen is 1 unit high and 1 unit wide. The lower-left corner of the screen is the origin, and the upper-right corner is the point (1, 1); the point (0.5, 0.5) is at the center of the screen no matter what the physical dimensions or resolution of the actual display device may be. The interpreter uses a simple linear formula to convert from normalized device coordinates to actual device coordinates. Suppose that for the actual display the index of the leftmost pixel is WIDTH-START and that there are WIDTH pixels in the horizontal direction; similarly, the bottommost pixel is HEIGHT-START and the number of pixels in the vertical direction is HEIGHT. The conversion formulae are:

Xs = WIDTH * Xn + WIDTH-START
Ys = HEIGHT * Yn + HEIGHT-START

where Xs, Ys are screen coordinates and Xn, Yn are normalized coordinates.

Display devices are often not square. The ratio of the height to the width is called the display's aspect ratio. If the full dimensions are used, the image will be stretched or squashed. If a square area of the display is used, the image is correctly proportioned but some of the display area is wasted. If a square area larger than the display is used, the image may not fit entirely on the display.

Display-File Structure: The display file contains a series of instructions (commands) to construct the image. Each command contains two parts: an operation code (opcode), which indicates what kind of command it is (e.g., LINE or MOVE), and operands, which are the coordinates of a point (x, y). One possible method of storing these instructions is to use three separate arrays: one for the operation code (DF-OP), one for the x coordinate (DF-X), and one for the y coordinate (DF-Y). The seventh instruction is then held in DF-OP[7], DF-X[7], and DF-Y[7]. The display file must be large enough to hold all the commands needed to create the image. At this point there are only two instructions to consider, MOVE and LINE; relative commands can be converted to absolute commands before they are entered into the display file. If an opcode of 1 means a MOVE command and an opcode of 2 means a LINE command, a command to move to position x = 0.3 and y = 0.7 would look like 1, 0.3, 0.7. The statements

DF-OP[3] ← 1;  DF-X[3] ← 0.3;  DF-Y[3] ← 0.7;

would store this instruction in the third position of the display file. If DF-OP[4] = 2, DF-X[4] = 0.5, and DF-Y[4] = 0.8, the display would show a line segment from (0.3, 0.7) to (0.5, 0.8).

Display-file algorithms are used to enter instructions into the display file. The interpreter reads instructions from a portion of the display file and carries out the appropriate commands. To display the picture described in the display file, three steps are carried out: 1. The current display may have to be cleared. 2. The display file must be interpreted. 3. In some cases an explicit action is needed to show the contents of the frame buffer.

Text: Another primitive operation is the output of text (a character or a string of characters). The characters may be drawn by the stroke or the dot-matrix method. Their patterns are often copied from memory into the frame buffer or created by special character-generation hardware; the advantage is speed and a saving of display-file memory. In sophisticated displays the size, spacing, orientation, and font can be selected. The interpreter can be extended to include the output of text. This is done by extending the set of operation codes to include one code for each character; these opcodes range between 32 and 126 inclusive. The operands determine the position, and text can also be displayed vertically or diagonally. Graphics systems (CORE, GKS) have rich selections of text-formatting operations; they allow changing the size, orientation, spacing, and font of the characters and the direction of the line of text.


Line Style: Lines may be continuous, dashed, or dotted. Color, intensity, or thickness of the lines can also be selected. Special opcodes are used to change line style (or color or intensity); such a command requires no operands. Opcodes between 0 and 15 inclusive are reserved for change-of-line-style commands. Line style 0 is the normal solid line; this is the default line style when the system is initialized. Other line styles correspond to codes 1, 2, and so on. For a line printer, changing the line style is a matter of changing the character that is placed in the frame buffer.
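A minimal sketch of the display-file idea described above, combining the three parallel arrays, the MOVE and LINE opcodes, and the normalized-to-screen conversion (Python; the array sizes, names, and the draw_line stub are our own assumptions, not part of any standard):

WIDTH, HEIGHT = 640, 480             # actual device size in pixels
WIDTH_START, HEIGHT_START = 0, 0     # index of the leftmost / bottommost pixel

MOVE, LINE = 1, 2                    # opcodes, as in the text

DF_OP, DF_X, DF_Y = [], [], []       # the three display-file arrays

def put_command(op, xn, yn):
    # Enter one instruction (in normalized coordinates) into the display file.
    DF_OP.append(op)
    DF_X.append(xn)
    DF_Y.append(yn)

def to_screen(xn, yn):
    # Normalized device coordinates (0..1) to actual screen coordinates.
    return WIDTH * xn + WIDTH_START, HEIGHT * yn + HEIGHT_START

def draw_line(p1, p2):
    # Stand-in for the real vector generator.
    print("line from", p1, "to", p2)

def interpret():
    # Read the display file and carry out each command in turn.
    cx, cy = 0.0, 0.0                # current pen position
    for op, xn, yn in zip(DF_OP, DF_X, DF_Y):
        if op == LINE:
            draw_line(to_screen(cx, cy), to_screen(xn, yn))
        cx, cy = xn, yn              # a MOVE just updates the position

put_command(MOVE, 0.3, 0.7)
put_command(LINE, 0.5, 0.8)
interpret()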


POINT PLOTTING TECHNIQUES


[Coordinate System, Incremental Methods, Line Drawing Algorithms, Circle Generators.]

Point-plotting techniques are essential in programming frame-buffer displays, where the intensity of each dot must be computed separately. They are based on the Cartesian coordinate system: points are addressed by their x and y coordinates, with x increasing from left to right and y from bottom to top. Points are plotted in response to digital signals from the computer; how finely they can be placed depends on the coordinate precision and the resolution of the display screen. With 10 bits of x and y coordinate precision, a 1024 x 1024 array of positions is possible at which a dot can be displayed (a 10-bit binary number gives 2^10 = 1024 distinct values). Resolution is the number of visible dots that can be displayed in a given area of the screen; it is usually expressed in dots per inch (dpi). Display screens generally measure about 10 inches square, with about 100 dpi. The value 1024 is popular because it makes full use of 10-bit integer coordinates; displays of other sizes, such as 4096 x 4096 (12-bit) and 256 x 256 (8-bit), also exist.

Incremental methods are a form of iterative computation in which each iterative step is simplified by maintaining a small amount of state, or memory, about the progress of the computation. These methods are useful in generating lines on point-plotting displays and also simplify both scan conversion and shading of computer-generated pictures of solid objects. For example, to draw a line, incremental techniques generate the intermediate points that lie on the line between the start and end points.

Straight-line segments are used heavily in computer-generated pictures (block diagrams, bar charts, graphs, drawings, architectural plans, etc.). Moreover, curves can be approximated quite effectively by sequences of short straight-line segments. A good line-drawing algorithm should fulfill the following requirements:
1. Lines should appear straight.
2. Lines should terminate accurately.
3. Lines should have constant density.
4. Line density should be independent of length and angle.
5. Lines should be drawn rapidly.

The vector-generation algorithms (i.e., the algorithms for turning on the pixels of a line segment), which step along the line to determine which pixels should be turned on, are sometimes called digital differential analyzers (DDAs). A DDA generates a line from its differential equation, working on the principle of incrementing x and y by small steps proportional to the first derivatives of x and y. [A digital differential analyzer is a mechanical device for integrating differential equations by incrementing the values of x and y in small steps proportional to dx and dy.] In the real world of limited-precision displays, a line must be generated within the available addressable points. This can be done by rounding to the nearest integer after each incremental step and displaying a point at the resulting x and y. The following line-generating methods are all incremental. Each method basically generates two sets of signals that are used to step the x and y coordinates of the point that traces out the line.


Symmetrical DDA: The symmetrical DDA works on the principle that the x and y values are simultaneously incremented in small steps proportional to the first derivatives of x and y. For a straight line the first derivatives are constant and proportional to the total displacements dx and dy. Thus, in the case of an infinite-precision display, a straight line can be generated by incrementing the x and y values by e*dx and e*dy, where e is some small quantity. In the real world of limited-precision displays, addressable points are needed; this can be done by rounding off the values after each incremental step, after which a dot can be displayed at the resulting x and y coordinate values.

An alternative to rounding is the use of arithmetic overflow. The x and y values are kept in registers that have two parts (integer and fractional). The incrementing values, which are both less than unity, are repeatedly added to the fractional parts, and whenever a result overflows, the corresponding integer part is incremented. The integer parts of the x and y registers are used in plotting the line. This would normally have the effect of truncating rather than rounding, so to achieve true rounding the DDA is initialized with the value 0.5 in each of the fractional parts. One advantage of this arrangement is that changes in x and y can be detected, so the same point is never plotted twice: the overflow indicator provides this detection and produces the signals that reposition the point tracing out the line. The precision of the incrementing values and of the fractional parts of the registers should be greater than the coordinate precision of the display; otherwise accuracy will be lost on long lines.

The appearance of the lines depends on the value chosen for the increment e. The value of e is chosen so that the spacing of consecutive points never exceeds one screen unit (e = 2^-n, where 2^(n-1) <= max(|dx|, |dy|) < 2^n). In effect, e is the reciprocal of a line-length estimate; here the estimate is 2^n. The symmetrical DDA generates accurate lines, since the displacement of a displayed point from the true line is less than one-half of a screen unit. Logically the symmetrical DDA is simple. The use of a negative power of 2 for e means that the incrementing values can be determined by shifting the dx and dy registers rather than by a division, and each step is computed with just two additions.

Simple DDA: The simple DDA is as accurate as its symmetrical counterpart but generates a different sequence of dots because of its different method of estimating line length. The line-length estimate is chosen to be the larger of the magnitudes of dx and dy, so that either e*dx or e*dy is of unit magnitude. This allows one of the DDA's adders to be replaced by a simple counter; the algorithm therefore generates unit steps in the direction of greatest motion. The simple DDA is an ideal basis for a software line generator, but the need for division logic makes it less suitable for hardware implementation.
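A sketch of the simple DDA described above, for integer endpoints (Python; the roles of the variables follow the text, but the code itself is ours). The number of steps is the larger of |dx| and |dy|, so the increment along the axis of greatest motion is exactly one unit:

def simple_dda(x1, y1, x2, y2):
    # Returns the list of pixels (integer points) approximating the segment.
    dx, dy = x2 - x1, y2 - y1
    steps = max(abs(dx), abs(dy))          # line-length estimate
    if steps == 0:
        return [(round(x1), round(y1))]
    ex, ey = dx / steps, dy / steps        # one of these has unit magnitude
    points = []
    x, y = float(x1), float(y1)
    for _ in range(steps + 1):
        points.append((round(x), round(y)))    # round to the nearest pixel
        x += ex
        y += ey
    return points

print(simple_dda(0, 0, 5, 2))
# [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 2)]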


Bresenham's algorithm: Like the simple DDA, it is designed so that each successive iteration changes one of the coordinate values by 1. The other coordinate may or may not change, depending on the value of an error term maintained by the algorithm. This error term records the distance, measured perpendicular to the axis of greatest movement, between the exact path of the line and the actual dots generated. At each iteration the slope of the line is added to the error term, but before this is done, the sign of the error term e is used to determine whether to increment the y coordinate of the current point. A positive value of e indicates that the exact path of the line lies above the current point; the y coordinate is therefore incremented and 1 is subtracted from e. If e is negative, the y coordinate is left unchanged. Bresenham's algorithm avoids generating duplicate points, and because it also avoids multiplications and divisions it is well suited to hardware implementation.

Circle generators: Circles and circular arcs are frequently displayed (e.g., in mechanical-engineering drawings). A number of incremental methods are available to plot circles and arcs, capable of generating closely spaced dots suitable for point-plotting devices. The principle of the DDA can be extended to curves; one such curve is the circular arc. The differential equation of a circle with center at the origin is dy/dx = -x/y. A circle-plotting DDA can be implemented by using e*x and e*y as the incrementing values:

x(n+1) = x(n) + e * y(n)
y(n+1) = y(n) - e * x(n)

The value of e is chosen so that the spacing of consecutive points does not exceed one screen unit (e = 2^-n, where 2^(n-1) <= r < 2^n, r being the radius). Unfortunately these equations plot a spiral, not a circular arc: each step is made in a direction perpendicular to a radius of the circle, so each successive point is slightly farther from the center. This problem is solved by using x(n+1) rather than x(n) in computing y(n+1):

x(n+1) = x(n) + e * y(n)
y(n+1) = y(n) - e * x(n+1)

As a matter of fact, these equations generate points on an ellipse rather than a circle. The eccentricity of the resulting curve may be quite noticeable when e is relatively large; it becomes negligible if e is kept small, but that increases the computation considerably. The circles (ellipses!) drawn by the DDA need not be centered on the origin: displacements in x and y can be added to reposition the center. This algorithm is well suited to hardware implementation. Finally, it is possible to construct a DDA that draws an exact circle, using the trigonometric equations

x(n+1) = x(n) cos θ + y(n) sin θ
y(n+1) = y(n) cos θ - x(n) sin θ

where θ is the constant angular step. The trigonometric values need be computed only once for a given circle, so this pair of equations can be used to advantage if multiplications are inexpensive.
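A sketch of Bresenham's algorithm for the gentle-slope case (0 <= dy <= dx), following the error-term description above. The rendering is ours, including the usual initialization of the error term to m - 1/2; production versions normally scale the error by 2*dx so that only integer arithmetic is needed:

def bresenham_gentle(x1, y1, x2, y2):
    # Gentle-slope case only: 0 <= dy <= dx. The other octants are handled
    # by exchanging the roles of x and y and/or the signs of the steps.
    dx, dy = x2 - x1, y2 - y1
    m = dy / dx                   # slope, between 0 and 1
    e = m - 0.5                   # error term
    x, y = x1, y1
    points = []
    for _ in range(dx + 1):
        points.append((x, y))
        if e >= 0:                # exact line lies above the current point
            y += 1
            e -= 1.0
        e += m                    # add the slope for the next column
        x += 1
    return points

print(bresenham_gentle(0, 0, 5, 2))
# [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 2)]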


LINE DRAWING DISPLAYS


[The CRT, Inherent-Memory Devices, The Storage-Tube Display, The Refresh Line-Drawing Display.]

Computer-generated figures may be divided into two classes: line drawings and continuous-tone images. Line drawings are easier to create because the algorithms for their creation are simple (line segments can be generated using incremental methods), the amount of information required to represent them is small, and they can be displayed on equipment that is readily available.

The purpose of a display device is to convert electrical signals into visible images. The display controller sits between the computer and the display device. It receives information from the computer and converts it into signals acceptable to the device. The functions of the display controller are: voltage-level conversion between the computer and the display device, buffering to compensate for differences in speed of operation, and generation of line segments and text characters. Many display controllers are furnished with additional hardware to perform functions such as scaling and rotation and to improve the speed of response.

In most applications of CG the quality of the displayed image is very important, and a great deal of effort has been directed toward the development of high-quality computer display devices. In the 1950s the CRT was the only available device. Along with the continuous development of the CRT there has been an intensive search for alternatives, which has led to a number of new techniques for converting electrical signals into images (e.g., the DVST, plasma panel, laser-scan display, and LCD). None of them has so far displaced the CRT.

The CRT: It is an evacuated, sealed, conical glass tube fitted with an electron gun at the narrow end. A yoke (a system of electromagnetic coils) is mounted on the outside of the tube at the base of the neck. The other end is coated on the inside with phosphor. The electron gun emits a high-velocity, finely focused beam of electrons. The beam of electrons (cathode rays) passes through focusing and deflection systems that direct it toward specified positions on the phosphor-coated screen, and the phosphor emits a small spot of light at each position contacted by the beam. The beam of electrons should fulfill the following requirements:
1. It must be accurately focused so that it produces a sharp spot where it strikes the phosphor.
2. It must have high velocity, since the brightness of the image depends on the velocity of the electron beam.
3. Means must be provided to control the flow of electrons, so that the intensity of the beam can be controlled.


The electron gun contains a number of separate parts. A cathode heated by an electric filament generates electrons. Surrounding the cathode is a cylindrical metal control grid with a hole at one end that allows electrons to escape. The control grid is kept at a lower potential than the cathode, creating an electrostatic field that directs the electrons through a point source; this simplifies the subsequent focusing. By altering the control-grid potential, the rate of flow of electrons (the beam current) can be varied, and thus the brightness of the image can be controlled.

Focusing is achieved by a focusing structure containing two or more cylindrical metal plates at different potentials. These set up a toroidal electrostatic field that effectively catches straying electrons and deflects them back toward the axis of the beam. The result is a beam that is extremely finely focused and highly concentrated at the precise moment at which it strikes the phosphor. An accelerating structure is generally combined with the focusing structure. It consists of two metal plates mounted perpendicular to the beam axis, with holes at their centers through which the beam can pass. The two plates are maintained at a sufficiently high relative potential to accelerate the beam to the necessary velocity; accelerating potentials of several thousand volts are used. The resulting electron-gun structure has the advantage that it can be built as a single physical unit and mounted inside the CRT envelope. Other types of gun exist whose focusing is performed by a coil mounted outside the tube. This is called electromagnetic focusing, to distinguish it from the more common electrostatic method described above. The electromagnetic technique can give finer focusing, but the electrostatic method is generally preferred in graphic displays because it leads to cheaper gun construction.

The Deflection System: A set of coils, or yoke, mounted at the neck of the tube forms part of the deflection system responsible for addressing in the CRT. Two pairs of coils are used, one to control horizontal deflection, the other vertical. A primary requirement of the deflection system is that it deflect rapidly, since the speed of deflection determines how much information can be displayed without flicker. To achieve fast deflection, large-amplitude currents must be used in the yoke; an important part of the deflection system is therefore the set of amplifiers that convert the small voltages received from the display controller into currents of the appropriate magnitude. The voltages used for deflection are generated by the display controller from digital values provided by the computer. These values normally represent coordinates, which are converted into voltages by digital-to-analog (D/A) conversion. To draw a vector, a pair of gradually changing voltages must be generated for the horizontal and vertical deflection coils. Several methods have been used, including the following:

1. Integrators: An integrator is a circuit which, if provided with a constant voltage input, generates a linearly increasing or decreasing voltage as output. Thus if the x and y values defining a vector are converted into voltages and used as inputs to a pair of integrators, the appropriate deflection signals will be generated.


2. Digital Methods: A fast digital vector generator, such as a DDA, can be constructed in hardware and used together with a pair of D/A converters. Every time a fresh x or y coordinate is generated, the coordinate value is converted to a deflection voltage and a dot is displayed.

Phosphors: The phosphors used in a graphic display are normally chosen for their color characteristics and persistence. Ideally the persistence, measured as the time for the brightness to drop to one-tenth of its initial value, should last about 100 milliseconds or less, allowing refresh at 30-hertz rates without noticeable smearing as the image moves. The color should preferably be white, particularly for applications where dark information appears on a light background. The phosphor should also possess a number of other attributes: small grain size for added resolution, high efficiency in terms of electrical energy converted to light, and resistance to burning under prolonged excitation. In attempts to improve performance in one or another of these respects, many different phosphors have been produced, using various compounds of calcium, cadmium, and zinc, together with traces of rare-earth elements. These phosphors are identified by a numbering system, with names like P1, P4, and P7. The most popular phosphors for graphic displays are P7, a fairly long-persistence blue phosphor that leaves a green afterglow, and P31. Phosphors with much longer persistence exist but are rarely used because of smearing problems.

The Beam-Penetration CRT: The normal CRT can generate images of only a single color, due to the limitations of its phosphor. A color CRT device for line-drawing displays has been developed, however; it uses a multilayer phosphor and achieves color control by modulating a normally constant parameter, namely the beam-accelerating potential. The arrangement of the beam-penetration CRT is similar to that of normal CRTs; the only unusual component is the multilayer phosphor, in which a layer of red phosphor is deposited behind the initial layer of green phosphor. If a fairly low-potential electron beam strikes the tube face, it excites only the red phosphor and therefore produces a red trace. When the accelerating potential is increased, the velocity of the beam striking the phosphor is greater, and as a result the beam penetrates into the green phosphor, increasing the green component of the light output. A limited range of colors, including red, orange, yellow, and green, can be generated in this way.

The principal problem with the beam-penetration CRT is the need to change the beam-accelerating potential by a significant amount in order to switch colors. When the accelerating potential changes, the deflection system must react to compensate, and the hardware or software must be designed to introduce adequate delays between changes of color so that there is time for the voltages to settle. In order to prevent frequent delays and consequent flicker, it is necessary to display all the red elements of the picture consecutively, then change the accelerating potential and display the yellow elements, and so on through all the different colors.


The Shadow-Mask CRT: The shadow-mask color CRT can display a much wider range of colors than the beam-penetration CRT and is used in the majority of color televisions and monitors. Just behind the phosphor-coated face of the CRT is a metal plate, the shadow mask, pierced with small round holes in a triangular pattern. In place of a single electron gun, the shadow-mask tube uses three guns, grouped in a triangle or delta. These three guns are responsible for the red, green, and blue components of the light output of the CRT. The deflection system operates on all three electron beams simultaneously, bringing all three to the same point of focus on the shadow mask. Where the three beams encounter holes in the mask, they pass through and strike the phosphor. Since they originate at three different points, however, they strike the phosphor in three slightly different spots. The phosphor of the shadow-mask tube is therefore laid down very carefully in groups of three spots (red, green, and blue) under each hole in the mask, in such a way that each spot is struck only by electrons from the appropriate gun. The effect of the mask is thus to shadow the spots of red phosphor from all but the red beam, and likewise for the green and blue phosphor spots. By modulating the current of each beam, the light output in each of the three component colors can be controlled.

Great improvements have been made in the performance of the shadow-mask tube. Nevertheless it has remained relatively expensive compared with the monochrome CRT, and it still has relatively poor performance in all respects except color range. The shadow-mask CRT compares particularly unfavorably in resolution and in efficiency of light output; both these effects are caused by the shadow mask itself. The grain of the triangular pattern of holes sets a limit on the attainable resolution, and the mask tends to block a large portion of the available beam energy, reducing the total brightness. With the use of very high accelerating potentials it is, however, possible to match the brightness of monochrome images. Another problem is convergence: it is extremely difficult to adjust the three guns and the beam-deflection system so that the electron beams travel exactly together, all three converging on the same hole in the shadow mask. When they fail to converge, a poorly colored image results, and often adequate convergence can be achieved over only a limited area of the screen. The convergence problem, together with the relatively poor resolution and light output of the shadow-mask CRT, has tended to discourage its use in line-drawing displays. It is, however, a very popular device for use with a frame buffer.


Inherent memory devices


Refresh line-drawing displays based on the CRT have the disadvantages of high cost and a tendency to flicker when the displayed picture is complex. These two problems have led to the development of display devices with inherent image-storage capability. The most widely used of these devices is the direct-view storage tube (DVST); others include the plasma panel and the laser-scan display.

The Direct-View Storage Tube (DVST): The DVST behaves like a CRT with an extremely long-persistence phosphor: a line drawn on the screen will remain visible for up to an hour. DVSTs are therefore useful for displaying highly complex, static images without flicker. The tube has two electron guns, the primary gun and the flood gun. The primary gun is used to store the picture pattern on a fine wire grid (the storage mesh), coated with dielectric and mounted just behind the screen; a pattern of positive charge is deposited on this grid. The pattern is transferred to the phosphor by a continuous flood of electrons issuing from the separate flood gun, which maintains the picture display. Just behind the storage mesh, another grid, the collector, is placed to smooth out the flow of flood electrons. These electrons pass through the collector at low velocity and are attracted to the positively charged portions of the storage mesh but repelled by the rest. Electrons not repelled by the storage mesh pass right through it and strike the phosphor. To increase the energy of these relatively slow-moving electrons and thus create a bright picture, the screen is maintained at a high positive potential, applied by means of a thin aluminum coating placed between the phosphor and the tube face. Because the flood electrons move slowly until they pass the storage mesh, they do not affect the stored charge on the mesh.

The image displayed cannot be altered selectively; the entire screen must be erased and drawn afresh. The normal erasing method is to apply a positive voltage to the storage mesh. This removes the charge but also generates an unpleasant flash over the entire screen. Erasing is time-consuming (it can take several seconds), which makes the DVST unsuitable for real-time animation. Another problem is the gradual degradation of picture quality as background glow accumulates; this glow is caused by the small amounts of charge deposited on the storage mesh by the repelled flood electrons. Overall performance is somewhat inferior to the refresh CRT: only a single level of line intensity can be displayed, and only green-phosphor tubes are available.


The Plasma Panel:


The plasma panel stores the image and allows selective erasing. It contains a gas (at low pressure) sandwiched between horizontal and vertical grids of fine wires; a large voltage difference applied between these wires causes the gas to glow. To illuminate a pixel, the voltage is increased momentarily on the wires that intersect at the desired point; to extinguish a pixel, the voltage on the corresponding wires is reduced until the glow cannot be maintained. Plasma panels are very durable and are often used for military applications.

The panel consists of two sheets of glass with thin, closely spaced gold electrodes attached to the inner faces and covered with a dielectric material. The sheets are placed a few thousandths of an inch apart, and the intervening space is filled with a neon-based gas and sealed. By applying voltages between the electrodes, the gas within the panel is made to behave as if it were divided into tiny cells, each one independent of its neighbors. A cell is made to glow by placing a firing voltage across it by means of the electrodes: the gas within the cell begins to discharge, and this develops very rapidly into a glow. The glow can be sustained by maintaining a high-frequency alternating voltage across the cell; furthermore, if the signal amplitude is chosen correctly, cells that have not been fired will not be affected. In other words, each cell is bistable: it has two stable states. Cells can be switched on by momentarily increasing the sustaining voltage; this can be done selectively by modifying the signal only on the two conductors that intersect at the desired cell. Similarly, if the sustaining signal is lowered, the glow is extinguished. Thus the plasma panel allows both selective writing and selective erasure, at speeds of about 20 microseconds per cell; this speed can be increased by writing or erasing several cells in parallel.

The plasma panel produces a very steady image, totally free of flicker, and is a less bulky device than a CRT of comparable screen size. Its main disadvantages are its relatively poor resolution, of about 60 dots per inch, and its complex addressing and writing requirements. Its inherent memory is useful but is not as flexible as a frame-buffer memory. Digital memories are now so inexpensive that a raster-scan display can cost less than a plasma panel; as a result, plasma panels are not used in very many of today's displays.


The Laser-Scan Display: The laser-scan display is one of the few high-resolution, large-screen display devices. It is capable of displaying an image measuring 3 by 4 feet while maintaining a relatively small spot size of about 1/100 inch. It has been used for displaying maps, high-quality text, and elaborate circuit diagrams. The principle of this display is very simple: a laser beam is deflected by a pair of mirrors and traces out the desired image on a sheet of photochromic film. This material is usually transparent, but the light from the laser leaves a dark trace on it. A light-projection system is used to project the image onto a large screen. To produce a fresh image, the display simply winds the roll of film to bring a blank region under the laser. The deflecting mirrors are extremely small and are controlled by electrical signals received from the display controller; a complex correction mechanism compensates for the inertia of these mirrors. For interactive purposes, a second laser displays the cursor nondestructively on the screen.


The Storage-Tube Display

The typical storage-tube display incorporates a 7-by-10-inch DVST and a built-in alphanumeric keyboard. The screen coordinate system is divided into 1024 positions horizontally and 760 positions vertically. The display controller receives a series of instructions from the computer, each of which specifies a single element of the picture; the controller converts these instructions into the voltages and signals that drive the DVST. To reduce the computational overhead, DVSTs are designed to plot vectors (line segments) instead of individual points. Vector-drawing instructions make use of the current beam position (the position reached by the beam after completing the previous instruction): every vector starts at the current beam position and finishes at the specified endpoint. Disjoint vectors can be drawn by preceding the vector with a move instruction to its starting point.

A storage-tube display that uses 10-bit positive integer coordinates must supply 20 bits of data with every instruction, plus one additional bit to represent the operation; thus an instruction contains 21 bits. The instructions are broken down into 7-bit instruction bytes and are transmitted to the display in serial asynchronous fashion (like alphanumeric text). Two commands (ENTER GRAPHICS MODE, LEAVE GRAPHICS MODE) permit the display to act either as a graphics terminal or as an alphanumeric text terminal. When the display is in graphics mode, the instructions are interpreted as vector-plotting commands; when it is in text mode, the instructions are interpreted as ASCII character codes, and the corresponding character is displayed on the screen at the current position. Some displays (e.g., the Tektronix 4006-1) permit a compact representation of pictures containing short vectors: if two successive endpoints have coordinates that differ only in their lower-order five bits, it is unnecessary to transmit the higher-order bytes.
[Figure: A Simple Picture of Two Triangles, drawn on a grid with axis marks at 50.]

Vertices of Triangle 1: (40, 40), (60, 60), (40, 80)
Vertices of Triangle 2: (80, 30), (100, 50), (80, 70)

Basic Instruction Repertoire (Tektronix Storage-Tube Display)

Operation               Value
ENTER GRAPHICS MODE     0011101
LEAVE GRAPHICS MODE     0011111

In graphics mode each endpoint is transmitted as four tagged coordinate bytes: high-order Y (tag 01), low-order Y (tag 11), high-order X (tag 01), and low-order X (tag 10).
1. The first vector command simply sets the current position.
2. The higher-order bytes can be omitted for small changes.

[Encoding examples in the original notes give the byte sequences for the two-triangle picture, once with the higher-order bytes omitted and once in full.]


The Refresh Line-Drawing Display

The refresh line-drawing display is still one of the most popular types of display for interactive computer graphics, mainly because of its ability to display dynamically changing pictures; in many applications such pictures are extremely effective in presenting results. The refresh line-drawing display contains a display controller that converts the computer's output signals into deflection voltages for the yoke of the CRT. It accepts instructions to plot vectors and uses the current position to define their starting points. The requirements are:

1. The controller must operate at high speed to avoid flicker. The CRT can maintain a steady, flicker-free picture only if it is fed with a fresh description of the picture about 30 times a second. The picture may contain as many as 5000 vectors, so the controller must be able to process 150,000 (= 30 * 5000) vector instructions per second. (This is not possible with serial asynchronous transmission.)
2. It must be flexible enough to display changing pictures and to support interactive feedback.

The speed of refresh can be increased in two ways: a wider data path (a 16-bit memory word) and more efficient access to memory (direct memory access, DMA). The instructions are therefore passed to the display in 16-bit parallel fashion. This could be handled by a program executing in the CPU that transmits instructions to the display upon request, but to avoid taxing the CPU the display controller normally uses DMA: a memory cycle is used to read data from memory whenever the controller needs an instruction. The instructions are stored in a contiguous sequence of memory locations and are collectively known as the display file. The display controller contains an address register to locate the next instruction; it is updated after each instruction has been fetched.

Line-drawing displays accept instructions for point plotting as well as for vector drawing. Relative coordinates are used for endpoints, and intensity is specified by a bit: unintensified points are used to reset the current beam position, and unintensified vectors move the beam position through the specified distance without producing a visible trace.

To display images that change dynamically (to show changing state and provide interactive feedback), flexibility is essential. Changes to the picture are effected by means of corresponding changes to the display file. It is difficult to make rapid changes to one large contiguous block of display instructions, so the display file is broken into a number of disjoint sequences. Various instructions (jump, subroutine-jump, return-jump, push-jump, pop-jump) are provided to navigate through the display file. A jump instruction resets the contents of the address register; this is analogous to the computer's jump or branch instruction. For example, the starting address of the display file can be loaded into the address register whenever the end of the file is reached. This puts the controller in an endless loop, requiring no further attention from the computer.


loop, requiring no further attention from the computer. It is even possible to make changes to the display file without stopping the display. A display controller that can function entirely independently of the CPU is known as a display processor. Almost all line-drawing displays have this capability.
A subroutine-jump instruction sets the display processor's address register to the specified address. A return-jump instruction restores the register's previous contents, which results in a return from the subroutine. To allow subroutines to call other subroutines, a pushdown stack is used. This is an array of memory locations with a pointer addressing the top of the stack. When a subroutine-jump instruction is executed, the return address is pushed onto the stack; the subroutine-jump instruction is therefore often called a push-jump. When a return-jump instruction is executed, the return address is popped off the stack and transferred to the address register; the return-jump instruction is therefore often called a pop-jump.
Display subroutines offer a way of eliminating repetitive sequences of instructions (such as symbols and characters), and thus reduce the size of the display file. For displaying text, each character is represented as a subroutine, and strings of characters are included in the display file as subroutine jumps. This method is inefficient, as each subroutine jump needs a full 16-bit word. A more efficient approach provides a text-display instruction and uses a hardware character generator, rather than a set of subroutines, to generate characters. The text-display instruction allows each displayed character to be encoded in a single 8-bit byte. The character generator receives these bytes from the display processor and generates either sequences of short strokes or matrices of dots from a small read-only memory.
Most line-drawing displays provide additional instructions to allow control of brightness, display of short vectors or curves, and so on. These instructions are easily accommodated within the instruction repertoire.

Instruction set for Refresh Display
  Position X (point plotting):  opcode 2, I, X coordinate
  Position Y (point plotting):  opcode 3, I, Y coordinate
  X Vector (vector drawing):    opcode 4, I, D, X
  Y Vector (vector drawing):    opcode 5, I, D, Y
(I = 0 for unintensified, I = 1 for intensified. D = 1 to draw a vector; in a two-word vector, D = 0 in the first word and D = 1 in the second. Horizontal or vertical vectors may be drawn with a single instruction with D = 1.)


Display File Representation
A simple picture of two triangles.
Vertices of Triangle 1: (40, 40), (60, 60), (40, 80)
Vertices of Triangle 2: (80, 30), (100, 50), (80, 70)
[Note: Single Instructions for Vector Drawing]

Instructions for Triangle 1:
  2 0 40
  3 0 40
  4 0 0 20
  5 1 1 20
  4 0 0 -20
  5 1 1 20
  5 1 1 -40

Instructions for Triangle 2:
  2 0 80
  3 0 30
  4 0 0 20
  5 1 1 20
  4 0 0 -20
  5 1 1 20
  5 1 1 -40
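As a rough illustration only (this is not the text's interpreter), the following Python sketch decodes instructions in the format above and recovers the visible segments of Triangle 1; the tuple layouts (opcode, I, value) for positioning and (opcode, I, D, value) for vector words are assumptions made for the example.

def interpret(display_file):
    x = y = 0
    pending_dx = 0          # first word of a two-word vector (D = 0)
    segments = []           # visible line segments as ((x1, y1), (x2, y2))
    for instr in display_file:
        op = instr[0]
        if op == 2:                          # Position X
            x = instr[2]
        elif op == 3:                        # Position Y
            y = instr[2]
        elif op == 4:                        # X vector word
            _, i, d, dx = instr
            if d == 0:
                pending_dx = dx              # wait for the Y word
            else:
                if i:
                    segments.append(((x, y), (x + dx, y)))
                x += dx
        elif op == 5:                        # Y vector word
            _, i, d, dy = instr
            dx, pending_dx = pending_dx, 0
            if i:
                segments.append(((x, y), (x + dx, y + dy)))
            x, y = x + dx, y + dy
    return segments

triangle1 = [(2, 0, 40), (3, 0, 40),
             (4, 0, 0, 20), (5, 1, 1, 20),
             (4, 0, 0, -20), (5, 1, 1, 20),
             (5, 1, 1, -40)]
print(interpret(triangle1))   # three segments tracing (40,40)-(60,60)-(40,80)-(40,40)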


POLYGONS
[Introduction To Polygons, Polygon Representation, Polygon Interfacing Algorithms, Filling Polygons, Filling With A Pattern, Initializing, Antialiasing] Raster displays can display solid patterns and objects; coloring and shading are possible with raster technology. The representation of surface objects is important even for line-drawing displays. The polygon is a graphics primitive used for representing surfaces. A polygon is represented as a number of line segments connected end to end to form a closed figure. Alternatively it may be represented by the points where the sides of the polygon are connected. The line segments that make up the polygon boundary are called sides or edges, and the end points are called vertices. The simplest polygon is the triangle, having three edges and three vertices. Polygons are divided into two classes: convex and concave. A convex polygon is a polygon such that for any two points inside the polygon, all points on the line segment connecting them are also inside the polygon. All regular polygons (triangle, square, pentagon, hexagon, etc.) are convex. A concave polygon is one that is not convex; in some cases the edges of a concave polygon may even intersect one another.
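The definition can be checked mechanically. One common test, not described in the text, classifies a polygon by the sign of the cross product at each vertex: if the turn direction never changes while walking the boundary, the polygon is convex. A minimal Python sketch, assuming the vertices are listed in order around the boundary:

def is_convex(vertices):
    n = len(vertices)
    sign = 0
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        x2, y2 = vertices[(i + 2) % n]
        cross = (x1 - x0) * (y2 - y1) - (y1 - y0) * (x2 - x1)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False          # turn direction changed: concave
    return True

print(is_convex([(40, 40), (60, 60), (40, 80)]))            # True (triangle)
print(is_convex([(0, 0), (4, 0), (4, 4), (2, 1), (0, 4)]))  # False (concave)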

[Figure: a convex polygon and a concave polygon]

Polygon Representation: Some graphics devices supply a polygon drawing primitive to image polygon shapes. Other devices provide a trapezoid primitive. Trapezoids are formed from two scan lines and two line segments; they are drawn by stepping down the line segments with two vector generators and, for each scan line, filling in all the pixels between them. Every polygon can be broken up into trapezoids, so in such a system polygons are represented as a series of trapezoids. Many other graphics devices do not provide any polygon support, and it is left to the software to break the polygon up into lines or points, which can be imaged. The value of the opcode (3 or greater) represents the number of sides and is limited by the maximum possible opcode. The X and Y operands of the polygon command are the coordinates of the point where the first side to be drawn begins. Since polygons are closed curves, this point also becomes the final endpoint of the last side to be drawn. Upon execution, the polygon instruction (opcode 3 or greater) signals that the following instructions belong to a polygon, but otherwise behaves as a line command.


Entering the Polygons: The information required to specify the polygon is the number of sides and the coordinates of the vertex points. Absolute and relative commands can be used for entering polygons into the display file: POLYGON-ABS-2 (AX, AY, N) and POLYGON-REL-2 (AX, AY, N), where the arrays AX and AY are the operands containing the vertices of the polygon, and N is the operation code (3 or greater) of the polygon command, indicating the number of vertices. For the absolute command the edges can be drawn easily; but for relative commands the starting point must be stored temporarily, so that it may be used in the final instruction, which closes the figure.

Inside Test
A polygon can be shown as a solid object by setting the pixels inside the polygon as well as those on the boundary. The pixels that are inside can be determined by an inside test. Two methods are available for carrying out the inside test, viz., the even-odd method and the winding-number method. In the even-odd method of determining the polygon interior points, the intersections made by a line segment (constructed between the point in question and a point known to be outside the polygon) with the polygon boundary are counted. If the number is odd, then the point in question lies inside the polygon; an even number indicates that it is outside. While counting the intersections for concave polygons, care should be exercised when an intersection happens to fall on a vertex. The other end points of the two edges that meet at the vertex are examined: if they lie on the same side of the constructed line, then the vertex counts as an even number of intersections; if they lie on opposite sides, then it is counted as a single intersection.
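A minimal Python sketch of the even-odd test as described above, assuming the polygon is given as an ordered list of (x, y) vertices; a horizontal ray is cast to the right of the test point and boundary crossings are counted:

def inside_even_odd(px, py, vertices):
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        if (y1 > py) != (y2 > py):                       # edge straddles the ray
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:                             # crossing to the right
                inside = not inside
    return inside

square = [(1, 1), (5, 1), (5, 5), (1, 5)]
print(inside_even_odd(3, 3, square))   # True
print(inside_even_odd(6, 3, square))   # False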

Even-Odd Inside Test


In the winding-number method of determining the polygon interior points, a piece of conceptual elastic is stretched between the point in question and a point on the polygon boundary. The end attached to the polygon is slid along the boundary until it has made one complete circuit. It is then examined to see how many times the elastic has wound around the point in question. If it has wound at least once, then the point is inside; if there is no net winding, then the point is outside. When a boundary is crossed, instead of just counting the number of intersections, a direction number is given. The direction number indicates the direction in which the polygon edge was drawn relative to the line segment constructed for the test. The sum of the direction numbers yields the winding number for the point in question. For example, to test a point (xa, ya), consider a horizontal line segment (y = ya). The direction numbers for the edges that cross the line segment are assigned as follows: if the edge starts below the line segment, crosses it, and ends above, then a value of 1 is assigned; if it starts above the line segment, crosses it, and ends below, then a value of -1 is assigned. The winding-number method yields different results from the even-odd method when a polygon is allowed to overlap itself.
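A corresponding sketch of the winding-number test, again using a horizontal test line through the point: +1 is added when an edge crosses the line going upward and -1 when it crosses going downward.

def winding_number(px, py, vertices):
    w = 0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        if y1 <= py < y2 or y2 <= py < y1:               # edge crosses y = py
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:                             # crossing right of the point
                w += 1 if y2 > y1 else -1                # upward = +1, downward = -1
    return w

square = [(1, 1), (5, 1), (5, 5), (1, 5)]
print(winding_number(3, 3, square))   # 1, so the point is inside
print(winding_number(6, 3, square))   # 0, so the point is outside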

[Figure: the even-odd rule and the winding-number rule applied to a self-overlapping polygon]

Polygon Interfacing Algorithms

Polygons can be shown either filled or in outline, and the user is provided with a method of indicating this preference. This can be done by setting a global flag, which is checked at the time the polygon is actually drawn; SET-FILL (ON-OFF) is the user routine for this. Fill styles may be implemented as different colors, shades of gray, or filling patterns; SET-FILL (STYLE) is the user routine for this. Style values from 1 to 16, inclusive, are used. These values are mapped to opcodes between -16 and -31: the user specifies a positive integer for the interior style, which is converted to a negative number between -16 and -31 before it is stored in the display file.


Filling of polygons
One way of filling polygons is to first draw the edges of the polygon in a blank frame buffer. Then, starting with some seed point known to be inside the polygon, the intensity is set to the interior style and the neighboring pixels are examined. This is continued over an increasing area until the boundary pixels are encountered. This method is called flood-fill because the color flows from the seed pixel like water flooding the interior of a container. It is quite useful and works for any closed shape in the frame buffer, no matter how that shape originated. However, it requires a frame buffer free of pixels with the polygon interior style, and it requires a seed pixel.
Applying the inside test to every pixel and setting those that pass avoids the need for a seed pixel, but it is expensive because all the pixels must be examined. Many pixels can be eliminated by comparing them with the minimum and maximum boundary points: the boundary can be taken as the smallest rectangle that contains the polygon. Scanning is done from the largest y value down to the smallest, and the constructed test lines are horizontal lines at the current y scanning value. This is called the scan-line method. Care should be exercised when two polygon edges overlap each other; this can be handled by determining the polygon boundary values directly from the polygon instruction, instead of from the frame buffer.
To test a point, only the edges whose end points straddle the test line are considered, so it is convenient to sort the edges in order of their maximum y value. During the scan, if the current y position is lower than the lowest y position of an edge, then that edge may be removed from the set of sides being considered. A list of sides is maintained so that all the sides currently being considered are grouped together. Two pointers (viz., START-EDGE and END-EDGE) mark the boundaries of the group: all edges stored with list indices greater than or equal to START-EDGE and less than END-EDGE should be considered. An edge in the list before START-EDGE has yet to be encountered, and those behind END-EDGE have already been passed.
It is not necessary to examine every pixel on the scan line. The polygon breaks the scan line into pieces, and the end points of these pieces (i.e., the points where the scan line crosses the polygon's sides) are evaluated according to the even-odd method. A vector generator can be used to fill in the pieces that lie within the polygon. For example, the computed x values for all intersections are sorted. The smallest x value is the left polygon boundary: at this point the polygon begins. The next x value indicates where the polygon ends. Therefore, a line segment drawn between these values fills in this portion of the polygon. The sorted x values are paired in this manner and passed to the line-drawing routine.
In summary, an algorithm for filling a solid polygon should begin by ordering the polygon sides on their largest y value. It should start at the largest y value and scan down the polygon. For each y, it should determine which sides can be intersected and find the x values of these intersection points. The x values are sorted, paired, and passed to a line-drawing routine. The algorithm that performs this yx scan and fills in the polygon is called FILL-POLYGON.
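A compact sketch of the scan-line idea (not the text's FILL-POLYGON routine): for each scan line the x intersections with the polygon edges are computed, sorted, and paired, and the spans between the pairs are handed to a hypothetical draw_span routine.

def scan_line_fill(vertices, draw_span):
    ys = [y for _, y in vertices]
    n = len(vertices)
    for y in range(int(max(ys)), int(min(ys)) - 1, -1):   # scan from the top down
        xs = []
        for i in range(n):
            x1, y1 = vertices[i]
            x2, y2 = vertices[(i + 1) % n]
            if y1 <= y < y2 or y2 <= y < y1:               # edge straddles the scan line
                xs.append(x1 + (y - y1) * (x2 - x1) / (y2 - y1))
        xs.sort()
        for left, right in zip(xs[0::2], xs[1::2]):        # pair the intersections
            draw_span(y, left, right)

scan_line_fill([(40, 40), (60, 60), (40, 80)],
               lambda y, xl, xr: print(f"y={y}: fill x from {xl:.1f} to {xr:.1f}"))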


Filling with a pattern
A pattern is a grid of pixel values which is replicated like tiles to cover the polygon area. Patterns are most easily implemented on raster displays. A pattern is often fixed, or registered, to the imaging surface. A table of patterns is added to the graphics system and the fill style is used to select one. The pattern table can be composed of three arrays: PATTERN-X, PATTERN-Y and PATTERNS. The first two are arrays of numbers which specify the size of each pattern, and the last is an array of two-dimensional arrays representing the patterns themselves. On many devices there are only two pixel states, on or off. For these displays, the frame buffers and pattern tables can be compactly implemented by using individual bits to describe the pixel states. Pattern dimensions can be chosen to lie on word boundaries, and the FILLIN algorithm can be made more efficient by dealing with entire words of pixel values. [Word = 2 bytes = 16 bits = 4x4 array of pixel values.]
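A small sketch of a surface-registered pattern lookup: the value written at pixel (x, y) is taken from the pattern cell (x mod width, y mod height), so the pattern tiles the imaging surface regardless of where the polygon lies. The 4 x 4 pattern used here is only an example.

PATTERN = [[1, 0, 1, 0],       # a 4 x 4 two-state pattern (1 = on, 0 = off)
           [0, 1, 0, 1],
           [1, 0, 1, 0],
           [0, 1, 0, 1]]

def pattern_value(x, y, pattern=PATTERN):
    h, w = len(pattern), len(pattern[0])
    return pattern[y % h][x % w]

# Filling a span with the pattern simply samples it at each pixel position.
print([pattern_value(x, 5) for x in range(8)])   # row 5 of the tiled surface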

Some 4 x 4 Patterns

Initialization
Initialization is used to set the default values for the filling and fill-style parameters. If filling with patterns is possible, then the pattern table should be initialized to a default set of patterns.
Antialiasing
Aliasing is a problem for the edges of polygons just as it is for lines. Several antialiasing techniques, which use the shading of gray-level displays, have been developed. One technique is to calculate the fraction of a pixel's area covered by the polygon and display the pixel with an intensity in proportion to that fraction. Another approach is to generate the scene at a higher resolution than that which is actually used, and then to average the intensity values of neighboring pixels to determine the intensity to be displayed. Increasing the resolution between four and eight times gives good results. Note that antialiasing need only be applied to the points on the edge (interior points have full intensity). It can be incorporated as part of the polygon-filling algorithm and carried out scan line by scan line to minimize the memory requirement.


TRANSFORMATIONS [Introduction, Scaling Transformations, Rotation, Homogeneous Coordinates and Translations, Coordinate Transformations, Rotation about an arbitrary point, Inverse Transformations.]
Transformations provide the ease with which certain operations can be performed to alter images. The cartographer can change the size of a chart (scaling). The animator can change the position of a character (translation). The architect can view a building from a different angle (rotation). These changes are easy to perform because the graphic image is coded as numbers and stored in the computer. The numbers may be modified by mathematical operations called transformations, which are expressed in terms of matrix multiplications. In CG, images are generated from a series of line segments, which are represented by the coordinates of their end points; by performing matrix multiplications on these coordinates, alterations can be made.
Matrix multiplication involves sums of simple products of elements. Two matrices can be multiplied only if the number of columns of the first matrix is equal to the number of rows of the second. Consider two matrices A and B. The elements of the product matrix C are given by:
C(i, k) = Σj A(i, j) B(j, k)   {summed over j}
The element C(1, 1) is found by multiplying each element of the first row of A by the corresponding element of the first column of B and adding these products together. For the matrices shown below:
C(1, 1) = A(1, 1) B(1, 1) + A(1, 2) B(2, 1) + A(1, 3) B(3, 1) = (1)(1) + (2)(-1) + (3)(0) = -1
The element C(3, 2) is:
C(3, 2) = A(3, 1) B(1, 2) + A(3, 2) B(2, 2) + A(3, 3) B(3, 2) = (7)(0) + (8)(2) + (9)(1) = 25

Matrix Multiplication
A = | 1  2  3 |      B = |  1  0 |      C = AB = | -1   7 |
    | 4  5  6 |          | -1  2 |               | -1  16 |
    | 7  8  9 |          |  0  1 |               | -1  25 |
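The worked example can be checked with a few lines of Python:

def matmul(A, B):
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][j] * B[j][k] for j in range(inner)) for k in range(cols)]
            for i in range(rows)]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
B = [[1, 0], [-1, 2], [0, 1]]
print(matmul(A, B))   # [[-1, 7], [-1, 16], [-1, 25]]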

Multiplication is associative: A(BC) = (AB)C. This is a very useful property; it allows several graphics transformations to be combined into a single transformation and makes the calculations more efficient. Identity matrices are square matrices (same number of rows and columns) with all elements 0 except those on the main diagonal, which are all 1. When they multiply another matrix, they reproduce that matrix: AI = A, where I is the identity matrix.


Scaling Transformations
Consider a point P1 (x1, y1) as a 1x2 matrix. If it is multiplied by some 2x2 matrix, another 1x2 matrix is obtained, which can be interpreted as another point P2 (x2, y2): P2 = P1 S. Thus the matrix S gives a mapping between an original point P1 and a new point P2. Since the image is stored as a list of end points, the effect of the transformation depends on the elements of S.
Suppose S is taken to be the identity matrix T: P2 = P1 T, so (x2, y2) = (x1, y1). The values are unaffected and the image does not change. If however T1 is chosen, then P2 = P1 T1 gives (x2, y2) = (2x1, y1). Every new x coordinate is twice as large as the old value: horizontal lines are twice as long on the new image, and the image appears stretched to twice the width. The transformation matrix T2 shrinks all the x coordinates to one-half of their original value. Now, if the image is first stretched to twice the width and then compressed to half the width, P2 = (P1 T1) T2 = P1 (T1 T2) = P1, since T1 T2 = T, the identity matrix.
On the other hand, the image can be made twice as tall with the same width by using transformation matrix T3: P2 = P1 T3 gives (x2, y2) = (x1, 2y1); this just multiplies the y coordinate by 2. By applying both transformations T1 and T3 (that is, T4), the image can be made twice as wide and twice as tall: P2 = P1 T1 T3 gives (x2, y2) = (2x1, 2y1). The image is similar, but twice as big.
In general, transformations of the form S change the size and proportion of the image; every point except the origin changes, which means that not only the size of the image changes but also its position. A scale in x by a factor greater than 1 causes the image to shift to the right as well as become wider. A scale in y shifts the image up or down as well as changing its height.

Transformation Matrices
T (= I), normal:      | 1  0 |
                      | 0  1 |
T1, 2*X:              | 2  0 |
                      | 0  1 |
T2, 0.5*X:            | 0.5  0 |
                      | 0    1 |
T3, 2*Y:              | 1  0 |
                      | 0  2 |
T4, 2*X and 2*Y:      | 2  0 |
                      | 0  2 |
S, general form:      | Sx  0  |
                      | 0   Sy |
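A tiny sketch of applying a 2x2 scaling matrix S = [[Sx, 0], [0, Sy]] to a point written as a row vector, matching the P2 = P1 S convention used above:

def scale_point(point, sx, sy):
    x, y = point
    # Row vector times matrix: (x, y) . [[sx, 0], [0, sy]] = (sx * x, sy * y)
    return (x * sx, y * sy)

print(scale_point((3, 5), 2, 1))    # (6, 5): T1, twice as wide
print(scale_point((3, 5), 2, 2))    # (6, 10): T4, twice as wide and twice as tall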


Rotation Transformation
To rotate a point P1 (x1, y1) through an angle θ about the origin to get a new point P2 (x2, y2), the trigonometric functions sine and cosine are used. Consider a line segment drawn from the origin at an angle θ counter-clockwise from the x-axis. The line segment has end points (0, 0) and (x, y) and length L = sqrt(x^2 + y^2). The sine and cosine can be expressed as:
sin θ = y / L and cos θ = x / L
If L = 1, then sin θ = y and cos θ = x.
Consider a transformation matrix T for rotation. Its elements can be evaluated as follows. When the point P1 (1, 0) is rotated by an angle θ counter-clockwise, it becomes (cos θ, sin θ); thus (cos θ, sin θ) = P1 T = (a, b). When the point P2 (0, 1) is rotated by an angle θ counter-clockwise, it becomes (-sin θ, cos θ); thus (-sin θ, cos θ) = P2 T = (c, d). From these the values of a, b, c, and d can be read off, and the transformation matrix for counter-clockwise rotation can be expressed as R. The transformation matrix for clockwise rotation is obtained by substituting a negative angle (-θ) and simplifying with trigonometric identities. [Note: cos(-θ) = cos θ; sin(-θ) = -sin θ.]

Rotation Transformation Matrices
T (general form):       | a  b |
                        | c  d |
R (counter-clockwise):  |  cos θ   sin θ |
                        | -sin θ   cos θ |
Clockwise:              | cos θ  -sin θ |
                        | sin θ   cos θ |

For example, to rotate the point P (2, 3) counter-clockwise by an angle θ = π/6, the rotation matrix R is used and the new point P' is evaluated:
P' = P R = (2, 3) R = (0.232, 3.598)

Counter-clockwise rotation of a point (θ = π/6):
P (before rotation) = (2, 3)
R = |  0.866  0.5   |
    | -0.5    0.866 |
P' = P R = (0.232, 3.598)

Counter-clockwise rotation of a point (θ = π/4):
P (before rotation) = (2, 2)
R = |  0.707  0.707 |
    | -0.707  0.707 |
P' = P R = (0, 2.828)
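A short check of the row-vector rotation convention above, where P' = P R and R = [[cos θ, sin θ], [-sin θ, cos θ]] rotates counter-clockwise:

import math

def rotate_ccw(point, theta):
    x, y = point
    c, s = math.cos(theta), math.sin(theta)
    return (x * c - y * s, x * s + y * c)

print(rotate_ccw((2, 3), math.pi / 6))   # approximately (0.232, 3.598)
print(rotate_ccw((2, 2), math.pi / 4))   # approximately (0.0, 2.828)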


Homogeneous Coordinates and Translation
Moving the image is called translation. It is easily accomplished by adding to each point the amount by which the picture is to be shifted. Suppose the image is to be shifted 2 units to the right; then 2 is added to the x-coordinate of every point. Similarly, to move it down 1 unit, 1 is subtracted from every y-coordinate. In general, to translate the image to the right and up by (Tx, Ty), every point (X1, Y1) is replaced by a new point (X2, Y2) where X2 = X1 + Tx and Y2 = Y1 + Ty. Unfortunately, this way of describing translation does not use a matrix, so it cannot be combined with other transformations. One way of achieving this is to use homogeneous coordinates. In homogeneous coordinates, 3x3 matrices are used instead of 2x2, introducing an additional dummy coordinate w; points are specified by three numbers instead of two. The first homogeneous coordinate is the product of x and w, the second is the product of y and w, and the third is just w. A coordinate point (X, Y) is therefore represented by the triple (XW, YW, W). The x and y coordinates can easily be recovered by dividing the first and second numbers by the third. The third number w is needed for 3-D perspective transformations; in 2-D its value is kept at 1 for simplicity. The transformation matrices in homogeneous coordinates for scaling, rotation, and translation can be formulated as follows.

Homogeneous Transformation Matrices
S = | Sx  0   0 |     R = |  cos θ   sin θ   0 |     T = | 1    0    0 |
    | 0   Sy  0 |         | -sin θ   cos θ   0 |         | 0    1    0 |
    | 0   0   1 |         |  0       0       1 |         | Tx   Ty   1 |

Examples:
Point: A point P (X, Y) becomes P (XW, YW, W) in homogeneous coordinates.
Scaling transformation: P S = (Sx X W, Sy Y W, W); in 2-D it becomes (Sx X, Sy Y).
Rotation transformation: P R = [(XW cos θ - YW sin θ), (XW sin θ + YW cos θ), W]; in 2-D it becomes [(X cos θ - Y sin θ), (X sin θ + Y cos θ)].
Translation transformation: P T = [(XW + Tx W), (YW + Ty W), W]; in 2-D it becomes [(X + Tx), (Y + Ty)].
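A small sketch of 2-D homogeneous transformations with w = 1: a point (x, y) is written as the row vector [x, y, 1] and multiplied by a 3x3 matrix, so translation composes with scaling and rotation by ordinary matrix multiplication.

def apply(point, M):
    x, y = point
    row = [x, y, 1.0]
    xw, yw, w = [sum(row[j] * M[j][k] for j in range(3)) for k in range(3)]
    return (xw / w, yw / w)          # recover x and y by dividing by w

def translation(tx, ty):
    return [[1, 0, 0], [0, 1, 0], [tx, ty, 1]]

def scaling(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

print(apply((2, 3), translation(5, -1)))   # (7.0, 2.0)
print(apply((2, 3), scaling(2, 2)))        # (4.0, 6.0)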


Coordinate Transformations
Transformations may also be used to change coordinate systems. For example, a distance measured in inches can be converted to centimeters by means of a scale; the actual operation is the same, only the interpretation changes. Translations are useful coordinate transformations when the two systems place their origins at different points. For example, the lower-left corner of the display can be considered the origin (0, 0), but in some display systems this point may actually correspond to pixel (1, 1). Another example is a display that places the (0, 0) pixel in the upper-left corner and numbers the scan lines from top to bottom. This is done on alphanumeric printers because it is the order in which lines are printed, and on raster displays because it is the order in which lines are scanned. To convert between these coordinates requires a scale of 1 in X but a scale of -1 in Y (to reverse the scan-line order), together with a translation in Y by the vertical screen dimension to move the origin to the proper corner.
Rotations may also be used in coordinate transformations, but usually only for angles of π/2 (90 degrees). For example, a printer using 8.5 by 11 inch paper may have the Y-axis along the long edge and the X-axis along the short edge; this is called portrait mode. At times it is preferred to orient the Y-axis along the short edge and the X-axis along the long edge; this is termed landscape mode. It can be achieved by a rotation of π/2 and a translation to reposition the origin in the lower-left corner.
The following example shows the transformation from normalized device coordinates to actual device coordinates. The arithmetic used to do the conversion is:
X1 = X * WIDTH + WIDTH-START
Y1 = Y * HEIGHT + HEIGHT-START
This is a scale by WIDTH in X and by HEIGHT in Y, followed by a translation by WIDTH-START and HEIGHT-START. The full transformation matrix D for this change of coordinates is:
D = | WIDTH         0              0 |
    | 0             HEIGHT         0 |
    | WIDTH-START   HEIGHT-START   1 |
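A sketch of the normalized-device-coordinate conversion above; WIDTH, HEIGHT, WIDTH-START and HEIGHT-START are taken here as example device parameters (a 640 x 480 area starting at pixel (0, 0)).

WIDTH, HEIGHT = 640, 480
WIDTH_START, HEIGHT_START = 0, 0

def to_device(x, y):
    # Scale by the device size, then translate to the device origin.
    return (x * WIDTH + WIDTH_START, y * HEIGHT + HEIGHT_START)

print(to_device(0.5, 0.25))   # (320.0, 120.0)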


Rotation About an Arbitrary Point
The transformation matrix for a counter-clockwise rotation about a point P (Xc, Yc) can be obtained in three steps:
1. Translate the point to the origin (T1).
2. Rotate about the origin (R).
3. Translate the center of rotation back where it belongs (T2).
Matrix multiplication is not commutative (AB ≠ BA), so one should be careful to order the matrices so that they correspond to the order of the transformations on the image. The coordinates of the point are placed on the left and the transformation matrix on the right. With this ordering, if an additional matrix is introduced on the right (post-multiplication), then the corresponding transformation is carried out after the original transformation. Post-multiplication is used in the construction of the general rotation: (((P T1) R) T2). Since multiplication is associative, the overall transformation matrix can be obtained by multiplying all the transformation matrices together: P (T1 (R T2)) = P (T1 R T2).

Transformation Matrices for Rotation about a Point
T1 = | 1    0    0 |    R = |  cos θ   sin θ   0 |    T2 = | 1   0   0 |
     | 0    1    0 |        | -sin θ   cos θ   0 |         | 0   1   0 |
     | -Xc  -Yc  1 |        |  0       0       1 |         | Xc  Yc  1 |

Overall Transformation Matrix for Rotation about a Point
T1 R T2 = |  cos θ                       sin θ                       0 |
          | -sin θ                       cos θ                       0 |
          | -Xc cos θ + Yc sin θ + Xc    -Xc sin θ - Yc cos θ + Yc   1 |
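A short numeric check of the three-step construction, composing the 3x3 homogeneous matrices by post-multiplication:

import math

def matmul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in range(3)) for k in range(3)]
            for i in range(3)]

def rotate_about(theta, xc, yc):
    c, s = math.cos(theta), math.sin(theta)
    T1 = [[1, 0, 0], [0, 1, 0], [-xc, -yc, 1]]     # move the centre to the origin
    R  = [[c, s, 0], [-s, c, 0], [0, 0, 1]]        # rotate about the origin
    T2 = [[1, 0, 0], [0, 1, 0], [xc, yc, 1]]       # move the centre back
    return matmul(matmul(T1, R), T2)

M = rotate_about(math.pi / 2, 1, 1)
x, y, w = [sum([2, 1, 1][j] * M[j][k] for j in range(3)) for k in range(3)]
print((round(x, 6), round(y, 6)))   # rotating (2, 1) by 90 degrees about (1, 1) gives (1, 2)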


Other Transformations (Reflection, Shear)
The three transformations of scaling, rotating, and translating are the most useful and the most common, but other transformations are also possible. Any 2x2 transformation matrix can be converted into a 3x3 homogeneous coordinate matrix as follows:

Conversion
| a  b |    becomes    | a  b  0 |
| c  d |               | c  d  0 |
                       | 0  0  1 |

2x2 Form of Some Transformations (Reflection, Shear)

Reflections
Y axis:   | -1  0 |    X axis:   | 1   0 |    Origin:   | -1   0 |
          |  0  1 |              | 0  -1 |              |  0  -1 |
Y = X:    | 0  1 |     Y = -X:   |  0  -1 |
          | 1  0 |               | -1   0 |

Shears
Y shear:  | 1  a |     X shear:  | 1  0 |
          | 0  1 |               | b  1 |

Reflection is a transformation that produces a mirror image of an object. The mirror image for a 2-D reflection is generated relative to an axis of reflection, equivalent to rotating the object 180° about the reflection axis. The first three reflections are just scales with negative scale factors. The reflections in the lines y = x and y = -x can be done by a scale followed by a rotation.
The shear transformations cause the image to slant. The y shear preserves all the x-coordinate values but shifts the y value; the amount of change in y depends upon the x position, which causes horizontal lines to transform into lines that slope up or down. The x shear maintains the y coordinates but changes the x values, which causes vertical lines to tilt right or left. It is possible to form the shear transformations out of sequences of rotations and scales, although it is much easier to just form the matrix directly. It is also possible to build rotation and some scaling transformations out of shear transformations.

[Figure: reflections in the Y-axis and X-axis, and X- and Y-shears applied to a normal image]


Inverse Transformations
Sometimes it is necessary to undo the effect of a transformation; that is, the transformed point (x', y') is to be transformed back to the original point (x, y). This is called an inverse transformation. For example, the user may indicate a particular position on a displayed image of an object that has undergone a transformation; to find the corresponding point on the original object, the transformation must be undone. Undoing a transformation is itself a transformation, and it can be determined by matrix inversion. The inverse of a matrix T is another matrix T^-1 such that when the two are multiplied together, the identity matrix results (T T^-1 = I). Consider a transformation matrix T that transforms point P1 to yield point P2, so that P2 = P1 T. If both sides are multiplied by the inverse transformation matrix, then P2 T^-1 = P1 T T^-1 = P1 I = P1. This shows that the inverse of T transforms P2 back to P1.
Transformation Routines
The routines that create the transformations (translating, rotating, and scaling) modify a homogeneous coordinate transformation matrix. This matrix can be applied to any point to obtain the corresponding transformed point. The third coordinate w of the homogeneous coordinates is taken as 1, and for the same reason the last column (0, 0, 1) of the transformation matrix is not stored. The 3x3 transformation matrix is thus stored in a 3x2 array named H.
Transformations and Patterns
Patterns registered to the imaging surface are used to give gray levels or simple textures such as stripes or weaves. But there is an alternative use of patterns: it is sometimes better to represent a picture directly as a pixel pattern, especially when the shapes in the image are curved (characters) or do not contain lines at all (photographs). Such patterns can be moved with respect to the imaging surface, and at the same time they can be scaled and/or rotated. Suppose a polygon is subjected to scaling (S), rotation (R), and translation (T); the transformation matrix H is given by H = SRT. The pattern that is used to fill the polygon should then be subjected to the same transformation. To undo the transformation caused by H, the inverse transformation is used. Its inverse H^-1 can be built from the inverses of its components as shown below:
H^-1 = (SRT)^-1 = T^-1 R^-1 S^-1   {note that the components appear in reverse order}
Transformations on patterns should be used for pictures, but should be avoided for simple shading patterns and repetitive designs, for two reasons: 1. the transformation of every pixel is time consuming, and 2. transforming a pattern introduces aliasing, and antialiasing is expensive.
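A quick numeric check that undoing a composite transformation means applying the component inverses in reverse order, (SRT)^-1 = T^-1 R^-1 S^-1:

import math

def matmul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in range(3)) for k in range(3)]
            for i in range(3)]

c, s = math.cos(0.3), math.sin(0.3)
S  = [[2, 0, 0], [0, 3, 0], [0, 0, 1]]
R  = [[c, s, 0], [-s, c, 0], [0, 0, 1]]
T  = [[1, 0, 0], [0, 1, 0], [4, 5, 1]]
Si = [[0.5, 0, 0], [0, 1/3, 0], [0, 0, 1]]     # inverse scale
Ri = [[c, -s, 0], [s, c, 0], [0, 0, 1]]        # inverse (clockwise) rotation
Ti = [[1, 0, 0], [0, 1, 0], [-4, -5, 1]]       # inverse translation

H  = matmul(matmul(S, R), T)
Hi = matmul(matmul(Ti, Ri), Si)
I  = matmul(H, Hi)
print([[round(v, 9) for v in row] for row in I])   # approximately the 3x3 identity matrix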


Display Procedures
A transformation matrix designed and created by the user may be as complex as necessary, involving many component transformations. Routines written according to this prescription transform point values before they are entered into the display file. The transformation algorithms, on the other hand, operate on points as they are read from the display file. Transformations may be carried out at both stages of processing.
Multiple transformations are useful when a picture is made of a few basic components combined according to some hierarchical structure. For example, a routine that draws a petal can be used to create a garden as follows: by combining petals with different positions and orientations a flower may be created; by combining flowers with different sizes and positions a flower bush may be created; and transformations of several bushes can form a garden. Pictures with this structure lend themselves to a subprogram organization. A program to draw a garden could do so by several calls on a subprogram that draws a flower bush; that subprogram could make several calls on a subprogram that draws a flower; and that one, in turn, could make several calls on a subprogram that draws a petal.
These subprogram calls are a little more complicated, as they involve the establishment of a transformation matrix. An ordinary subprogram call (CALL PETAL) would always produce the image with the same size and orientation. What is needed is a call that sets up a transformation before the points are entered into the display file, for example:
CALL PETAL WITH SIZE (SX, SY), ANGLE (A), TRANSLATION (TX, TY)
Such calls, which involve the establishment of a transformation, are named display procedure calls, and the subprograms that draw sub-pictures are known as display procedures. Display procedures can be nested, and there can be multiple transformations. Each display procedure call must save the current overall transformation matrix; because of the nested calls, several transformation matrices may have to be stored simultaneously. One possible data structure for storing these matrices is a stack, as it matches the LIFO nature of the subroutine calls. (A sketch of this call and return discipline follows the lists below.)
A display procedure call involves the following:
1. Saving the overall transformation matrix.
2. Multiplying the overall transformation matrix on the left by the transformation in the call to form a new overall transformation matrix.
3. Transferring control to the display procedure.
A return from a display procedure involves the following:
1. Restoring the overall transformation matrix from the saved value.
2. Returning control to the calling program.
The user's LINE and MOVE commands within the body of a display procedure should do the following:
1. Multiply the point coordinates by the current overall transformation matrix to get the transformed point.
2. Enter the transformed values into the display file via the system LINE-ABS-2 or MOVE-ABS-2 commands.
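A minimal sketch (not the text's routines) of the display-procedure call discipline: a stack saves the overall transformation matrix, the call's own transformation is multiplied onto it on the left, and a return pops the saved matrix back.

def matmul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in range(3)) for k in range(3)]
            for i in range(3)]

IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
overall = IDENTITY
stack = []

def call_display_procedure(local_transform):
    global overall
    stack.append(overall)                        # 1. save the overall matrix
    overall = matmul(local_transform, overall)   # 2. multiply on the left by the call's transform
    # 3. ...the body of the display procedure issues its LINE/MOVE commands here...

def return_from_display_procedure():
    global overall
    overall = stack.pop()                        # restore the saved matrix

call_display_procedure([[1, 0, 0], [0, 1, 0], [10, 20, 1]])   # e.g. a translated petal
print(overall[2][:2])            # current translation part: [10, 20]
return_from_display_procedure()
print(overall == IDENTITY)       # True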


SEGMENTS (Algorithmic Approach only)


[Introduction, The Segment table, Segment creation, Closing a Segment, Deleting a Segment, Renaming a Segment]
Dynamically changing pictures appeal to everyone involved in computer graphics. Applications sometimes require that a picture be redrawn at each stage; however, only a small part of the picture changes and the rest remains unchanged. Dynamic graphics therefore require speed in regenerating successive pictures and the ability to make selective modifications (i.e., to add new parts, move them around, and delete them without disturbing the rest). A single image may contain several views of an object: it may have a picture of the overall object and a close-up of a particular component, and it may also contain instructions and error information.
For example, consider an animated display of a spaceship in motion. This can be done either by showing the spaceship at different positions, or by moving the background while the spaceship is centered on the screen. This means that the transformations are applied to a portion of the scene (either the ship or the background, but not both). This requires organizing the display file into sub-pictures called segments. Each segment corresponds to a component of the overall display.
A segment is a logical unit: a collection of display-file instructions representing the graphics primitives that can be manipulated as a single unit. A set of attributes (visibility, image-transformation) is associated with each segment. A visible segment will be displayed, whereas an invisible segment will not be shown. By varying the settings of the visibility attribute, a picture can be built out of selected sub-pictures. The image-transformation attribute allows the independent scaling, rotation, and translation of each segment. For the spaceship, two segments can be used: one to store the ship and the other for the background. Either can be shifted using its image transformation while leaving the other unchanged.
The segments are stored in a table called the segment table, which is used to distinguish one segment from another. A number is used as the name of the segment. Simple arrays serve to hold the segment properties, and the segment name is used as the index into these arrays: one array contains the display-file starting locations, a second the sizes, a third the visibility, and so on. To display the image, the segment table is consulted to determine which segments are visible. For each visible segment, the properties are looked up and passed to the display-file interpreter, which therefore interprets only the visible segments.

The Segment Table (SX, SY: scale factors; A: angle; TX, TY: translation amounts)
SEGMENT-NAME | SEGMENT-START | SEGMENT-SIZE | VISIBILITY | SX | SY | A | TX | TY
      0      |               |              |            |    |    |   |    |
      1      |               |              |            |    |    |   |    |
      2      |               |              |            |    |    |   |    |
      3      |               |              |            |    |    |   |    |


A special unnamed segment is used to make the graphics system compatible with segments. Information such as the display-file starting position and segment size must be stored for the unnamed segment just as for named segments. The unnamed segment is associated with table index 0, and the segment table arrays are dimensioned with lower bound 0 instead of 1. Thus SEGMENT-SIZE [0] is the number of instructions in the unnamed segment and SEGMENT-SIZE [1] is still the number of instructions in the segment named 1.
Segment Creation
The first step in creating a segment is to check whether some other segment is still open; if so, it should be closed, as two open segments lead to ambiguity about which segment new instructions belong to. The second step is to check for a valid name for the segment (0 < SEGMENT-NAME < NUMBER-OF-SEGMENTS). The next step is to see whether a segment with the same name already exists; if it does, an error should be generated. Otherwise the items in the segment table are initialized for the new segment: the first instruction belonging to it will be located at the next free storage area in the display file, the current size of the segment is initialized to zero, and the attributes are initialized to those of the unnamed segment, which provide the default values. Finally, it is recorded that the created segment is open. [Ref: 5.1 Algorithm CREATE-SEGMENT (SEGMENT-NAME): User routine to create a named segment.]
Closing a Segment
After creating the segment, instructions can be entered into the display file. After all the instructions are entered, the segment should be closed. At this point, all that is necessary is to change the value of the NOW-OPEN variable that contains the name of the open segment; it is set to 0 to make the unnamed segment the one that is open. To conserve storage, the earlier unnamed segment is deleted: all the instructions, if any, of the new unnamed segment are removed and its attributes are initialized (SEGMENT-START = FREE, SEGMENT-SIZE = 0). The unnamed segment is then ready to receive instructions in the next free display-file location. [Ref: 5.2 Algorithm CLOSE-SEGMENT: User routine to close the currently open segment.]
Deleting a Segment
If a segment is no longer needed, the display-file storage occupied by its instructions should be recovered so that it can be used for some other segment. The method of deleting a segment depends upon the data structure used for the display file; here arrays are used. Recovery of a block of storage in an array is simple and straightforward, but it is not very efficient. All the instructions entered after the segment to be deleted are moved up in the display file so that they lie on top of the deleted segment. Thus the gap left by the deleted segment is filled, and an equivalent amount of storage is recovered at the end of the display file. [Ref: 5.3 Algorithm DELETE-SEGMENT (SEGMENT-NAME): User routine to delete a segment.]
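A condensed, self-contained Python sketch of the CREATE-SEGMENT and CLOSE-SEGMENT steps above; NUMBER-OF-SEGMENTS, FREE (the next unused display-file cell) and NOW-OPEN (the name of the open segment, 0 for the unnamed segment) are modelled as module-level variables, and two parallel arrays stand in for the segment table.

NUMBER_OF_SEGMENTS = 16
SEGMENT_START = [0] * NUMBER_OF_SEGMENTS
SEGMENT_SIZE = [0] * NUMBER_OF_SEGMENTS
FREE = 0          # next free display-file cell
NOW_OPEN = 0      # 0 means the unnamed segment is open

def close_segment():
    global NOW_OPEN
    NOW_OPEN = 0
    SEGMENT_START[0] = FREE      # the unnamed segment restarts at the free cell
    SEGMENT_SIZE[0] = 0

def create_segment(name):
    global NOW_OPEN
    if NOW_OPEN != 0:
        close_segment()                          # at most one segment may be open
    if not 0 < name < NUMBER_OF_SEGMENTS:
        raise ValueError("invalid segment name")
    if SEGMENT_SIZE[name] > 0:
        raise ValueError("segment already exists")
    SEGMENT_START[name] = FREE                   # it starts at the next free cell
    SEGMENT_SIZE[name] = 0
    NOW_OPEN = name

create_segment(3)
print(NOW_OPEN, SEGMENT_START[3], SEGMENT_SIZE[3])   # 3 0 0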


Deleting All Segments
At times it is necessary to delete all the segments, especially in order to draw a new picture; this is also useful for initialization. One method is to delete all the segments individually, an approach that is independent of the data structure used for the display file. A somewhat more efficient approach (for the array data structure) is simply to set the size of every segment to 0 and initialize the free-cell index FREE to the first cell in the display file. At the same time all the starting positions are initialized to 1, so that there will not be any garbage in these locations which might upset the DELETE-SEGMENT routine. [Ref: 5.4 Algorithm DELETE-ALL-SEGMENTS: User routine to delete all segments.]
Renaming a Segment
Consider a display device with a display processor. The display processor continuously reads the display file and shows its current contents. Animated characters can be displayed by presenting a sequence of images, each with a slightly different drawing of the character. This can be done by deleting the segment, recreating it with the altered character, and showing the result. The problem is that during the time after the first image is deleted, but before the second image is completed, only a partially completed character can be seen; the same situation prevails for all subsequent modifications. This can be avoided by not deleting the segment until a replacement for it is completed. The new, invisible image is built under some temporary segment name. When it is completed, the original image is deleted and the replacement image is made visible and renamed to become the old segment. These steps can be repeated to achieve apparent motion. The method of maintaining two images, one to show and one to build or alter, is called double buffering. The renaming is carried out by the RENAME-SEGMENT algorithm. It checks that the segment names are valid and that they are not open, and it also checks against using the name of an already existing segment. If these conditions are met, the segment table entries for the old name are copied into the new name's position, the size of the old segment is set back to 0, and the old segment is deleted. [Ref: 5.5 Algorithm RENAME-SEGMENT (SEGMENT-NAME-OLD, SEGMENT-NAME-NEW): User routine to rename SEGMENT-NAME-OLD to be SEGMENT-NAME-NEW.]
Visibility
Each segment is assigned a visibility attribute. The segment's visibility is stored in an array as part of the segment table; by checking this array it can be determined whether or not the segment should be displayed. The user routine to set the visibility attribute is given by the algorithm SET-VISIBILITY (SEGMENT-NAME, ON-OFF). If the visibility is being turned off, then a new-frame action is needed. [Ref: 5.6 Algorithm SET-VISIBILITY (SEGMENT-NAME, ON-OFF): User routine to set the visibility attribute.]
Image Transformation
Image transformation is carried out on the contents of the display file. It determines how the picture is displayed and is often supported by hardware, which reads the display file and generates the image. Each segment is provided with its own image transformation attributes. There are five of them (x and y scale factors, rotation


angle, x and y translation amounts). An array for each parameter is used as part of the segment table. The user routine to set the image translation for a segment is given by the algorithm SET-IMAGE-TRANSLATION (SEGMENT-NAME, TX, TY). This algorithm saves the translation amounts for the SEGMENT-NAME segment; a new-frame action is called only if the segment being modified happens to be visible. A single routine to set all the image transformation parameters of a segment is given by the algorithm SET-IMAGE-TRANSFORMATION (SEGMENT-NAME, SX, SY, A, TX, TY). [Ref: 5.8 Algorithm SET-IMAGE-TRANSFORMATION (SEGMENT-NAME, SX, SY, A, TX, TY): User routine to set the image transformation parameters of a segment.]
Saving and Showing Segments
To attach the segment structure to the routines for saving and showing the display-file instructions, two routines, viz. PUT-POINT and MAKE-PICTURE-CURRENT, are altered. A statement is added to the routine PUT-POINT to increment the size of the segment currently open every time a new instruction is added to the display file. The second routine, MAKE-PICTURE-CURRENT, contains a statement to display the entire display file; this statement is replaced by a loop that steps through the segment table, examining each segment for a valid name and visibility. If both conditions are met, the image transformation matrix is formed and the segment is interpreted. The segment table provides the information about the number of instructions to be interpreted, where to begin, and so on; this information is passed to the INTERPRET routine as its arguments. [Ref: 5.14 Algorithm PUT-POINT (OP, X, Y): Extension of algorithm 2.1 to include updating the segment table.] [Ref: 5.15 Algorithm MAKE-PICTURE-CURRENT: (Revision of algorithm 4.10).]
Other Display-File Data Structures
There are many data structure schemes for storing the display file (viz., arrays, linked lists, and paging schemes). While insertion, selection, and deletion are easy for an array scheme, deletion is not very efficient: to remove an instruction at the beginning of the display file, all the instructions after it must be moved up. If the display file is large, this could mean a lot of processing to recover only a small amount of storage. One alternative data structure is the linked list. In a linked list the instructions are not stored in order; rather, a new field is added to each instruction. This field, called a link or pointer, gives the location of the next instruction. The instruction cells which have not yet been used are also linked, to form a list of available space. When a new instruction is added to the display file, a cell is obtained from the list of available space, the correct instruction operation code and operands are stored, and the cell is linked into the display-file list. Deletion is very easy and does not require moving instructions: to remove a cell, the pointer that points to it is changed so that it points to the succeeding cell, and the removed cell is added to the list of available space. The disadvantages of the linked-list scheme are that it requires more storage to hold the links and that it is costly to locate arbitrary cells.
A third scheme, which lies between the array and linked-list methods, is a paging scheme. In this method the display file is organized into a number of small arrays called pages. The pages are linked to form a linked list of pages. Each segment begins at the


beginning of a page. If a segment ends at some point other than a page boundary, then the remainder of the page is not used. In this scheme, display-file instructions can be accessed within a page just as they were accessed in an array; when the end of the page is reached, a link is followed to find the next page. By grouping the instructions into pages, the number of links can be minimized. A segment can be deleted by altering the links. A list of unused or available pages provides a source of new pages when the display file is extended, and deleted pages are added to this list for reuse. Some disadvantages of this scheme are that storage is lost at the end of a page if a segment does not completely fill it, and that accessing is a bit more complex. Once a segment is closed, it can no longer be altered or extended. [Note: when editing is allowed, the linked-list scheme is preferred.]
Some Raster Techniques
For storage tube and vector refresh displays, the only method available for altering an image is to change the display file or image transformation and reinterpret the entire picture. This is quite inefficient for raster displays, as clearing and recomputing pixel values for the entire frame buffer is costly. The principle behind fast modification of a raster display is to change as little as possible. Special techniques have been developed for raster displays which allow altering a portion of the display while leaving the remainder of the frame buffer intact. One technique is to redraw the segment that is to be made invisible with pixel values set to those of the background. This in effect erases all lines drawn by the segment, but leaves gaps in lines belonging to other segments; the damage can be repaired by reinterpreting the segments which are still visible. Another technique provides efficient translation of an image: the image can be moved by copying only those pixels that are involved. If the image is confined to a box, then only the pixels within that box need be copied; pixels outside the box can be left unchanged.
Another operation, called RasterOp or bit block-transfer (BITBLT), is quite useful in working with raster displays. It performs logical operations on sub-arrays of pixels, such as turning all pixels on or off, shifting all values by a row or a column, and copying values from another sub-array. For one-bit-per-pixel frame buffers, it performs the logical AND, OR, or XOR of the pixel values in two sub-arrays (say A and B). BITBLT operations can be implemented in hardware so that they are very fast.

[Figure: BITBLT operations for one-bit-per-pixel frame buffers, showing sub-arrays A and B and the results of AND, OR, and XOR]
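A small sketch of RasterOp/BITBLT-style logical operations on one-bit-per-pixel sub-arrays, here represented as lists of integers whose bits are the pixels of one row:

def bitblt(dest, src, op):
    ops = {"AND": lambda d, s: d & s,
           "OR":  lambda d, s: d | s,
           "XOR": lambda d, s: d ^ s}
    return [ops[op](d, s) for d, s in zip(dest, src)]

A = [0b11110000, 0b11110000]      # two rows of an 8-pixel-wide sub-array
B = [0b10101010, 0b01010101]

print([format(r, "08b") for r in bitblt(A, B, "AND")])   # ['10100000', '01010000']
print([format(r, "08b") for r in bitblt(A, B, "XOR")])   # ['01011010', '10100101']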


WINDOWING AND CLIPPING [Introduction, The Viewing Transformation, Viewing transformation implementation, Clipping, The Cohen-Sutherland Algorithm, Clipping of Polygons.]
Windowing is the process of extracting a portion of a database by clipping the database to the boundaries of a window. Clipping is the process of selecting the appropriate portion of the entire database of a picture. Computers are used in design applications because they can easily and accurately create, store, and modify very complex drawings. Complex drawings, however, are too difficult to read in their entirety. In such situations it is useful to display only those portions of the drawing that are of immediate interest, and it is desirable to enlarge these portions to take full advantage of the available display surface. The method of selecting and enlarging portions of a drawing is called windowing, and the technique of not showing the parts of the drawing in which one is not interested is called clipping.
It is useful to think of two models of the item displayed, viz., the object model and the image model. The object model resides in object space and represents the object using physical units of length; lengths may be measured in any units, from light years to angstroms. The image model resides in image space and represents the image using screen coordinates. The conversion of object-space units to those of image space (say, normalized device coordinates) is done by scaling: the dimensions of large objects are reduced and those of small objects are enlarged, until the physical dimensions of the object are suitable for display.

[Figure: a 10 m object in object space, where position is measured in physical units, and its image in image space, where position is given in normalized screen coordinates such as (.2, .2), (.4, .2), and (.2, .4)]


The Viewing Transformation: In most cases it is desired to display only a portion of a large object (the window) at some suitable location on the screen (the viewport). A window is a rectangle in object space and a viewport is a rectangle in image space; together they specify a transformation between object and image space. When the window is changed, a different part of the object is shown at the same position on the display. On the other hand, if the viewport is changed, the same part of the object is drawn at a different place on the display. The overall transformation (translation, scaling, and translation) which creates a particular view of the object is called the viewing transformation. It is carried out in three steps:
1. The object together with its window is translated until the lower-left corner of the window is at the origin (translation).
2. The object and window are scaled until the window has the dimensions of the viewport (scaling). This converts the object and window into the image and viewport.
3. The viewport is translated to its correct position on the screen (translation).
If the aspect ratios of the window and viewport are different, then the viewing transformation will cause some distortion of the image, since the window is scaled to the size of the viewport.

[Figure: the viewing transformation as a translate, a scale, and a translate]

The Viewing Transformation Implementation: This is done with the following steps:
1. Specifying the size of the viewport. The viewport is confined to a rectangular shape parallel with the x and y axes. [Ref: 6.1 Algorithm SET-VIEWPORT (XL, XH, YL, YH): User routine for specifying the viewport.]
2. Specifying the size of the window. The window is confined to a rectangular shape parallel with the x and y axes. [Ref: 6.2 Algorithm SET-WINDOW (XL, XH, YL, YH): User routine for specifying the window.]
3. Copying the user's specifications into the system's viewing parameters. This also sets the window-to-viewport scale factors. [Ref: 6.3 Algorithm NEW-VIEW-2: Set the clipping and viewing parameters from the current window and viewport specifications.]


4. Modifying the segment-creation routine to reset the viewing transformation to match the latest user request. [Ref: 6.4 CREATE-SEGMENT (SEGMENT-NAME): User routine to create a named segment (modification of algorithm 5.1).]
5. Performing the transformations: first, translate by the lower x and y boundaries of the window; this moves the lower-left corner of the window to the origin. Second, scale by the size of the viewport divided by the size of the window; this changes the dimensions of the window to those of the viewport. Finally, translate by the lower x and y boundary values of the viewport; this moves the lower-left corner from the origin to the correct viewport position. The overall transformation is obtained by multiplying the individual transformation matrices together, and it can be applied to a general point to get the transformed point. [Ref: 6.4 Algorithm VIEWING-TRANSFORM (OP, X, Y): Viewing transformation of a point.] [Note: this not only performs the viewing transformation but also enters the resulting instruction into the display file; the display file holds the image-space model.]
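A sketch of the window-to-viewport arithmetic described in step 5, assuming a window (WXL, WXH, WYL, WYH) in object space and a viewport (VXL, VXH, VYL, VYH) in normalized image space:

def make_viewing_transform(wxl, wxh, wyl, wyh, vxl, vxh, vyl, vyh):
    sx = (vxh - vxl) / (wxh - wxl)     # window-to-viewport scale factors
    sy = (vyh - vyl) / (wyh - wyl)
    def transform(x, y):
        # translate the window corner to the origin, scale, translate to the viewport
        return ((x - wxl) * sx + vxl, (y - wyl) * sy + vyl)
    return transform

view = make_viewing_transform(0, 10, 0, 10, 0.2, 0.4, 0.2, 0.4)
print(view(0, 0))     # (0.2, 0.2): the window's lower-left corner maps to the viewport's
print(view(10, 10))   # (0.4, 0.4)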


Clipping: Clipping is the process of selecting the appropriate portion of the entire database of a picture. The lines which are outside the window are cut off, so that only the lines within the window are displayed. In clipping, each line is examined to determine whether it lies completely inside the window, lies completely outside, or crosses a window boundary. If it is inside, it is displayed; if it is outside, nothing is drawn; if it crosses the boundary, the point of intersection is determined and the portion which lies inside the boundary is drawn. Different graphic elements may require different clipping techniques. A character may be either entirely included or entirely omitted, depending on whether or not its center lies within the window. This technique will not work for lines, and some methods used for lines will not work for polygons.

[Figure: a picture before and after clipping against a window]

The Cohen-Sutherland Outcode Algorithm: This is a popular algorithm for line clipping. The algorithm quickly removes lines which lie entirely to one side of the clipping region (both end points above, below, right of, or left of the window). It makes clever use of bit operations (OutCodes) to perform this test efficiently. Segment end points are each given 4-bit binary codes. The high-order bit is set to 1 if the end point is above the window; the next bit is set to 1 if the point is below the window; the third and fourth bits indicate right and left of the window, respectively. The lines which form the window boundary divide the plane into nine regions, each with its corresponding OutCode. If the line is entirely within the window, then both endpoints will have OutCodes of 0000; segments with this property are accepted. If the line segment lies entirely on one side of the window, then both endpoints will have a 1 in the OutCode bit position for that side. This can be checked by taking the logical AND of the OutCodes of the two points: if the result is non-zero, then the segment may be rejected. Thus one test decides whether the segment is entirely above, below, right, or left of the window. If the line crosses the window boundary, the points of intersection between the line and the clipping boundary lines may be used to break up the line segment, and the resulting pieces may be tested for acceptance or rejection. A brief outline of the Cohen-Sutherland algorithm is given below:
1. Compute the OutCodes of the two endpoints (P1 and P2).
2. Enter a loop and check the OutCodes:


a. If both OutCodes are zero, then enter the segment into the display file, exit the loop, and return.
b. If both OutCodes are non-zero, then perform the logical AND and check for a non-zero result. If the result is non-zero, then reject the segment, exit the loop, and return.
c. If the OutCode for P1 is zero, then swap the points P1 and P2 along with their OutCodes.
3. Find a non-zero bit in the OutCode of P1.
a. If it is the high-order bit, then find the intersection of the line with the top boundary of the window.
b. If it is the next bit position, then find the intersection of the line with the bottom boundary of the window.
c. If it is the third bit position, then find the intersection of the line with the right boundary of the window.
d. If it is the fourth bit position, then find the intersection of the line with the left boundary of the window.
4. Replace the point P1 with the intersection point and calculate its OutCode.
5. Repeat the loop.
(A Python sketch of this procedure is given after the OutCode table below.)

OutCodes for the plane (the window boundaries divide the plane into nine regions)

           Left of window | Over window | Right of window
  Above          1001     |    1000     |     1010
  Within         0001     |    0000     |     0010
  Below          0101     |    0100     |     0110

[Figure: testing and dividing line segments]


The Sutherland-Hodgman Algorithm: This is a popular algorithm for polygon clipping. It unbundles the clipping test so that the figure is clipped against each of the four boundaries individually. The idea behind the algorithm is that it is easy to clip a line segment against any one of the window boundaries; complete clipping can therefore be performed by clipping against each of the four boundaries in succession. In this method, all line-segment endpoints lying within a boundary, and all points where lines intersect the boundary, are passed on, while points lying outside are filtered out. It is possible to begin clipping against a second boundary before the clipping of the entire figure against the first boundary is completed. In fact, each point may be run through all four clipping routines and entered into the display-file before the next point is considered. The routines for clipping a figure against each of the four window boundaries are:
1. CLIP-LEFT (OP, X, Y): Routine for clipping against the left boundary.
2. CLIP-RIGHT (OP, X, Y): Routine for clipping against the right boundary.
3. CLIP-BOTTOM (OP, X, Y): Routine for clipping against the bottom boundary.
4. CLIP-TOP (OP, X, Y): Routine for clipping against the top boundary.
For each of the first three routines, the next routine in the chain is the algorithm for clipping against the next boundary; the last clipping routine actually enters commands into the display-file by calling the SAVE-CLIPPED-POINT routine. They all follow the same outline. They first check whether the new point is the first point of a polygon, since this point is used in closing the polygon. They then examine the new point and the last point to see whether the line segment with these endpoints crosses the boundary. The routines are called for each new point.
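A single stage of this pipeline might look like the following C sketch. It clips against the left boundary only, remembers the last point it saw, and forwards surviving points and boundary intersections to the next stage through a function pointer. The names (clip_left, Stage, WXL) are assumptions, and the sketch ignores move/line op-codes that the CLIP-LEFT routine referenced above would also have to handle.

```c
#include <stdio.h>

static double WXL = 1.0;                  /* left window boundary (illustrative) */

typedef void (*Stage)(double x, double y);

static void next_stage(double x, double y) /* stand-in for the CLIP-RIGHT stage */
{
    printf("passed on: (%g, %g)\n", x, y);
}

static double lastx, lasty;               /* the stage's remembered last point */
static int    have_last = 0;              /* becomes 1 after the first call    */

static void clip_left(double x, double y, Stage next)
{
    if (have_last && (lastx < WXL) != (x < WXL)) {
        /* The segment from the last point to (x, y) crosses the boundary:
           pass the intersection point along first. */
        double yi = lasty + (y - lasty) * (WXL - lastx) / (x - lastx);
        next(WXL, yi);
    }
    lastx = x; lasty = y; have_last = 1;  /* remember the new point            */
    if (x >= WXL)                         /* inside the boundary: pass it along */
        next(x, y);
}

int main(void)
{
    clip_left(0.0, 0.0, next_stage);   /* outside: only remembered               */
    clip_left(2.0, 2.0, next_stage);   /* crossing: passes (1, 1), then (2, 2)   */
    return 0;
}
```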

The Clipping Process

[Figure: points flow from the clipping entry through Clip Left, Clip Right, Clip Bottom, and Clip Top, and then to the Viewing Transformation.]


Example: Consider the window [(1, 1), (3, 1), (3, 3), (1, 3)] and the sequence of line segments [(2, 2), (4, 2), (4, 4), (2, 4)]. Starting with the pen at (2, 2) to draw the lines, the XS and YS arrays are initialized to the current position, XS[i] = 2 and YS[i] = 2 for every stage i.

The CLIP-LEFT routine is entered with the point X = 4, Y = 2, and the work begins. CLIP-LEFT compares the segment from (2, 2) to (4, 2) against the window boundary WXL = 1. The segment does not require clipping at this boundary. XS[1] is set to 4 and YS[1] is again set to 2. The point is then passed to the CLIP-RIGHT routine, which compares the segment against the window boundary WXH = 3. The third IF statement in this routine discovers that clipping is required and passes the point X = 3, Y = 2 to the CLIP-BOTTOM routine. The XS[2] and YS[2] values are set to 4 and 2 respectively. The CLIP-BOTTOM and CLIP-TOP routines do not have to clip; they just pass along the point and remember the (3, 2) position in their XS and YS array elements. The command to draw the line from (2, 2) to (3, 2) is entered into the display-file by SAVE-CLIPPED-POINT.

The next line segment is seen by CLIP-LEFT as going from (4, 2) to (4, 4). Since this does not cross the left boundary, the point (4, 4) is passed along and remembered. The CLIP-RIGHT routine also considers the line from (4, 2) to (4, 4). Since both points are outside the right window boundary, this routine does not pass the point along to CLIP-BOTTOM; it only remembers the point (4, 4) as its current pen position.

The next point is (2, 4). Again the CLIP-LEFT routine remembers the (2, 4) position in its XS[1], YS[1] array elements and passes the point to the CLIP-RIGHT routine. In CLIP-RIGHT, the second IF statement recognizes that this line crosses from outside the right boundary to inside. It sends a command to MOVE to the point (3, 4) to the CLIP-BOTTOM routine, remembers the point (2, 4), and finally passes a LINE command to this point. The first of these two calls to CLIP-BOTTOM is for the move from (3, 2) to (3, 4). This does not cross the bottom boundary, so the routine remembers the current position (3, 4) and passes the move command to CLIP-TOP. The CLIP-TOP routine clips this command at (3, 3), enters the move into the display-file, and sets its current position to (3, 4). Now back to the second call to CLIP-BOTTOM by CLIP-RIGHT: this is a LINE command to the point (2, 4). CLIP-BOTTOM remembers this point and passes the command to CLIP-TOP. CLIP-TOP also remembers the point but does not pass the command any further, because the line from (3, 4) to (2, 4) is above the window.

Finally, the point (2, 2) is given to the CLIP-LEFT routine. It passes the command to CLIP-RIGHT, which passes it to CLIP-BOTTOM, which passes it to CLIP-TOP. The CLIP-TOP routine sees the segment from (2, 4) to (2, 2). The second IF statement in the routine forwards a command to move to the point (2, 3) to the display-file, and the final IF statement sends a command to draw a line to the point (2, 2). The net result has been a line from (2, 2) to (3, 2), a move to (3, 3), a move to (2, 3), and a line back to (2, 2).
[Figure: the window with corners (1, 1), (3, 1), (3, 3), (1, 3) and the square (2, 2), (4, 2), (4, 4), (2, 4) being clipped against it.]
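The clipped region of this example can also be reproduced with a short program. The sketch below uses the polygon (array-based) formulation of Sutherland-Hodgman rather than the streaming pen-command pipeline of the walkthrough, so it illustrates the result rather than the exact routines above; clipping the square against the window prints the vertices (2, 3), (2, 2), (3, 2), (3, 3), the same region the example arrives at.

```c
#include <stdio.h>

/* Clip a polygon against one boundary of an axis-aligned window.
   axis = 0 clips on x, axis = 1 clips on y; keep_less selects whether
   points with coordinate <= limit (1) or >= limit (0) are kept. */
static int clip_edge(int n, double in[][2], double out[][2],
                     int axis, double limit, int keep_less)
{
    int i, m = 0;
    for (i = 0; i < n; i++) {
        double *s = in[(i + n - 1) % n];         /* previous vertex */
        double *p = in[i];                       /* current vertex  */
        int s_in = keep_less ? s[axis] <= limit : s[axis] >= limit;
        int p_in = keep_less ? p[axis] <= limit : p[axis] >= limit;
        if (s_in != p_in) {                      /* edge crosses: add intersection */
            double t = (limit - s[axis]) / (p[axis] - s[axis]);
            out[m][axis]     = limit;
            out[m][1 - axis] = s[1 - axis] + t * (p[1 - axis] - s[1 - axis]);
            m++;
        }
        if (p_in) { out[m][0] = p[0]; out[m][1] = p[1]; m++; }
    }
    return m;
}

int main(void)
{
    double a[16][2] = {{2,2},{4,2},{4,4},{2,4}}, b[16][2];
    int n = 4, i;
    n = clip_edge(n, a, b, 0, 1.0, 0);   /* left:   keep x >= 1 */
    n = clip_edge(n, b, a, 0, 3.0, 1);   /* right:  keep x <= 3 */
    n = clip_edge(n, a, b, 1, 1.0, 0);   /* bottom: keep y >= 1 */
    n = clip_edge(n, b, a, 1, 3.0, 1);   /* top:    keep y <= 3 */
    for (i = 0; i < n; i++)              /* prints (2,3) (2,2) (3,2) (3,3) */
        printf("(%g, %g)\n", a[i][0], a[i][1]);
    return 0;
}
```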


The clipping of polygons: The clipping routine should be able to handle polygons as well as line segments. The routine will remove some of the sides and will insert a move command along the window boundary; this changes the number of sides of the polygon. Because of this, the instructions are stored in a temporary area. When all the sides have been clipped, the sides are counted and the polygon instructions are entered into the display-file. The instructions which survive the clipping routines are treated in two different ways: instructions which do not belong to a polygon are given a viewing transformation and placed in the display-file, while instructions which are part of a polygon are placed in a temporary storage buffer. This decision is made in the algorithm SAVE-CLIPPED-POINT based on a flag PFLAG that indicates polygon processing. [Ref: 6.10 Algorithm SAVE-CLIPPED-POINT (OP, X, Y): Saves clipped polygons in the T buffer and sends lines and characters to the display file.] This algorithm also keeps track of the number of sides that are saved. Temporary storage for polygons uses the arrays IT, XT, and YT, which must be large enough to hold the maximum number of polygon sides. The PUT-IN-T routine is used to save instructions in these arrays. [Ref: 6.11 Algorithm PUT-IN-T (OP, X, Y, INDEX): Save an instruction in the T buffer.]
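A minimal sketch of this temporary buffer might look like the following; the array names IT, XT, YT follow the text, while the buffer size, the integer op-codes, and the side counter are assumptions made for illustration.

```c
#include <stdio.h>

#define TMAX 100                  /* maximum number of buffered polygon sides (assumed) */

static int    IT[TMAX];           /* op-codes of the buffered instructions */
static double XT[TMAX], YT[TMAX]; /* corresponding x and y coordinates     */

/* PUT-IN-T: save one instruction in the T buffer at the given index. */
static void put_in_t(int op, double x, double y, int index)
{
    IT[index] = op;
    XT[index] = x;
    YT[index] = y;
}

int main(void)
{
    int count_out = 0;                       /* number of sides saved so far */
    put_in_t(2, 3.0, 2.0, count_out++);      /* e.g. a LINE instruction      */
    put_in_t(1, 3.0, 3.0, count_out++);      /* e.g. a MOVE instruction      */
    printf("%d instructions buffered, first: op %d to (%g, %g)\n",
           count_out, IT[0], XT[0], YT[0]);
    return 0;
}
```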

Closing the Polygon

[Figure: a clipped polygon shown unclosed after clipping and closed after the closing step.]

When all the sides have been clipped, it must be ensured that the polygon is closed. For example, when an edge crosses a boundary, clipping calculates the intersection point, and the result can be a polygon which is not closed. To fix this problem, each clipping stage should close its own version of the polygon. Each stage stores the first instruction it receives for the polygon; a flag (NEEDFIRST) is used to identify this first instruction. After all instructions have been processed, the CLOSING variable is set and the saved instruction is sent back through each clipping stage. This causes each stage to check the edge between the last point and the first point for intersection with the clipping boundary. If it does intersect, the intersection point is entered to complete the polygon. After closing the polygon, the final number of sides is checked. If it is less than 3, then the polygon has collapsed or has been clipped away, and no entry is made. Otherwise the polygon command is updated to reflect the number of vertices and their corresponding x and y coordinate values. The algorithm CLIP-POLYGON-EDGE does all of this. [Ref: 6.12 Algorithm CLIP-POLYGON-EDGE (OP, X, Y): Close and enter a clipped polygon into the display file.]
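The closure idea can be sketched for a single stage as follows. The stage saves the first point it receives (the NEEDFIRST idea), and when the polygon ends it re-sends that point with a CLOSING-style flag set, so the edge from the last point back to the first is clipped like any other edge, while the first point itself is not emitted a second time. The names and the single left-boundary stage are illustrative assumptions, not the CLIP-POLYGON-EDGE algorithm itself.

```c
#include <stdio.h>

static double WXL = 1.0;                 /* left window boundary (illustrative)        */
static double firstx, firsty;            /* first polygon point seen (NEEDFIRST idea)  */
static double lastx,  lasty;             /* most recent point for this stage           */
static int    need_first = 1;
static int    closing    = 0;

static void emit(double x, double y)     /* stand-in for passing to the next stage     */
{
    printf("(%g, %g)\n", x, y);
}

static void polygon_point(double x, double y)
{
    if (need_first) {
        firstx = x; firsty = y; need_first = 0;
    } else if ((lastx < WXL) != (x < WXL)) {
        /* The edge from the last point to (x, y) crosses the boundary:
           pass the intersection point on. */
        emit(WXL, lasty + (y - lasty) * (WXL - lastx) / (x - lastx));
    }
    if (!closing && x >= WXL)            /* during closure the first point was already */
        emit(x, y);                      /* emitted, so it is not repeated             */
    lastx = x; lasty = y;
}

static void close_polygon(void)
{
    closing = 1;
    polygon_point(firstx, firsty);       /* clip the closing edge back to the first point */
    closing = 0;
    need_first = 1;                      /* ready for the next polygon                    */
}

int main(void)
{
    polygon_point(0.0, 0.0);   /* outside the boundary: nothing emitted yet     */
    polygon_point(2.0, 1.0);   /* emits the intersection (1, 0.5), then (2, 1)  */
    polygon_point(2.0, 3.0);   /* emits (2, 3)                                  */
    close_polygon();           /* closing edge emits the intersection (1, 1.5)  */
    return 0;
}
```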


Polygon commands are handled by the algorithm CLIP [Ref: 6.13 Algorithm CLIP (OP, X, Y): Top-level clipping routine.]. It decides between the handling of polygons and other graphics primitives. Counters (COUNT-IN and COUNT-OUT) hold the number of sides expected and the number of sides in the result. When a polygon command is encountered, it is entered into the temporary buffer, the last-point variables for each of the clipping routines are initialized, and a flag is set so that subsequent calls to the clipping routines will be recognized as polygon sides. The CLIP algorithm will clip, transform, and save drawing instructions in the display-file. This can be included as part of the display-file instruction storage process by modifying the DISPLAY-FILE-ENTER routine. [Ref: 6.14 Algorithm DISPLAY-FILE-ENTER (OP): (Modification of algorithm 2.23) Combine operation and position to form an instruction and save it in the display file.] Finally, the initialization routine sets the boundaries of the viewport and window to be the same as the normalized device coordinates (i.e., from 0 to 1 in both the x and y directions). This makes the window and viewport transformations transparent to the user. [Ref: 6.15 Algorithm INITIALIZE-6]
Adding Windowing to the System

[Figure: the user program sends instructions through the windowing and clipping stage into the display file; the interpreter reads the display file to drive the display.]

Clipping can be generalized by using recursion. The four clipping routines can be combined and written in a more general form, so that information about the boundary is passed to the routine through parameters. This routine would be entered four times recursively, each time with a different boundary specified by its parameters. Further generalization is possible to clip along any arbitrary line (not just horizontal or vertical boundaries), so the window can have more than four sides; the generalized recursive algorithm can therefore clip against any arbitrary convex polygon. Some systems allow multiple windowing; i.e., a first image is created by one or more window transformations on the object, and windows are then applied to this first image to create a second image. Further windowing transformations may be done until the picture is created. Every application of a window transformation allows the user to slice up a portion of the picture and reposition it on the screen. Thus multiple windowing gives the user the freedom to rearrange components of the picture. The same effect may be achieved, however, by applying a number of single window transformations to the object.
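As a sketch of this generalization, the routine below clips a polygon against the half-plane to the left of an arbitrary directed line A-B, using a sign (cross-product) test instead of a comparison against a horizontal or vertical boundary. Calling it once for each edge of a convex clipping polygon clips against that polygon. The names and the left-of-line convention are assumptions made for illustration; the demonstration clips the earlier square against the line x + y = 5 and prints the resulting triangle (2, 3), (2, 2), (3, 2).

```c
#include <stdio.h>

typedef struct { double x, y; } Pt;

/* Signed test: positive when p lies to the left of the directed line A->B. */
static double side(Pt a, Pt b, Pt p)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

/* Intersection of segment s-p with the line through A and B. */
static Pt intersect(Pt a, Pt b, Pt s, Pt p)
{
    double t = side(a, b, s) / (side(a, b, s) - side(a, b, p));
    Pt r = { s.x + t * (p.x - s.x), s.y + t * (p.y - s.y) };
    return r;
}

/* One generalized Sutherland-Hodgman stage: keep the part of the polygon
   lying to the left of A->B. */
static int clip_half_plane(int n, const Pt in[], Pt out[], Pt a, Pt b)
{
    int i, m = 0;
    for (i = 0; i < n; i++) {
        Pt s = in[(i + n - 1) % n], p = in[i];
        int s_in = side(a, b, s) >= 0, p_in = side(a, b, p) >= 0;
        if (s_in != p_in) out[m++] = intersect(a, b, s, p);
        if (p_in)         out[m++] = p;
    }
    return m;
}

int main(void)
{
    /* Clip a square against the line x + y = 5, keeping the side of the origin. */
    Pt poly[8] = {{2,2},{4,2},{4,4},{2,4}}, res[16];
    Pt a = {5, 0}, b = {0, 5};     /* directed so the origin lies to the left */
    int n = clip_half_plane(4, poly, res, a, b), i;
    for (i = 0; i < n; i++)        /* prints (2,3) (2,2) (3,2) */
        printf("(%g, %g)\n", res[i].x, res[i].y);
    return 0;
}
```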

