TRANSFORMATIONS
Many of the editing features involve transformations of the graphics elements or cells composed of elements or
even the entire model. In this section we discuss the mathematics of these transformations. Two-dimensional
transformations are considered first to illustrate concepts. Then we deal with three dimensions.
Two-dimensional transformations
To locate a point in a two-axis Cartesian system, the x and y coordinates are specified. These coordinates can be
treated together as a 1 x 2 matrix: (x, y). For example, the matrix (2, 5) would be interpreted as a point which is
2 units from the origin in the x-direction and 5 units from the origin in the y-direction. This method of
representation can be conveniently extended to define a line as a 2 x 2 matrix by giving the x and y coordinates of
the two end points of the line.
Using the rules of matrix algebra, a point or line (or other geometric element represented in matrix notation) can
be operated on by a transformation matrix to yield a new element. There are several common transformations
used in computer graphics. We will discuss three transformations: translation, scaling, and rotation.
TRANSLATION
Translation involves moving the element from one location to another. In the case of a point, the operation would be
x' = x + m
y' = y + n
where
x', y' = coordinates of the translated point
x, y = coordinates of the original point
m, n = movements in the x and y directions, respectively
In matrix notation this can be represented as (x', y') = (x, y) + T, where T = (m, n).
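As a minimal sketch of this operation (using NumPy with a row-vector convention; the `translate` helper name is illustrative, not from the text):

```python
import numpy as np

# Translation of a 2D point as row-vector addition: (x', y') = (x, y) + T,
# where T = (m, n) holds the movements in the x and y directions.
def translate(point, m, n):
    """Translate a 2D point (length-2 array) by m along x and n along y."""
    T = np.array([m, n])
    return point + T

p = np.array([2.0, 5.0])
print(translate(p, 3.0, -1.0))  # point moves to (5, 4)
```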
The geometry traditionally followed is Euclidean geometry. In the traditional sense we follow the Cartesian coordinate system specified by the X, Y and Z coordinate directions. The three axes are mutually perpendicular and follow the right-hand system.
In handling geometric information, it often becomes necessary to transform the geometry. Transformations convert the geometry from one coordinate system to another. The main types of pure transformations we are likely to come across are the following, shown symbolically in Fig.:
Translation
Scaling
Reflection or Mirror
Rotation
In order to understand the system easily, we first look at the transformations in two-dimensional systems for the sake of easy understanding. The same concepts are then extended to three-dimensional viewing.
Translation
Translation is the most common and most easily understood transformation in CAD. It moves a geometric entity in space in such a way that the new entity is parallel, at all points, to the old entity. A representation is shown in Fig. for an object. Consider a point on the object, represented by P, which is translated along the X and Y axes by dX and dY to a new position P'. The new coordinates after the transformation are given by the following equation.
This is normally the operation provided in CAD systems as the MOVE command.
Scaling
Since the scaling factors can be applied individually, differential scaling is possible when Sx is not equal to Sy. Normally, CAD systems allow uniform scaling for object manipulation. In the case of the zoom facility in graphics systems, uniform scaling is applied; zooming is only a display attribute and is applied only to the display, not to the actual geometric database.
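The scaling operation can be sketched as follows (NumPy row-vector convention; the helper name is illustrative, not from the text):

```python
import numpy as np

# Scaling a 2D point by factors Sx and Sy via a 2x2 matrix: (x', y') = (x, y) S.
# Differential scaling occurs when Sx != Sy; uniform scaling when they are equal.
def scale(point, sx, sy):
    S = np.array([[sx, 0.0],
                  [0.0, sy]])
    return point @ S

p = np.array([2.0, 5.0])
print(scale(p, 2.0, 2.0))  # uniform scaling: both coordinates doubled
print(scale(p, 2.0, 0.5))  # differential scaling: x doubled, y halved
```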
Reflection or Mirror
Reflection or mirror is a transformation which allows a copy of the object to be displayed while the object is reflected about a line or a plane. Typical examples are shown in Fig., where (a) shows reflection about the X axis, (b) shows reflection about the Y axis, and (c) shows reflection about both the X and Y axes, i.e. about the origin.
Here -1 in the first position refers to reflection about the Y axis, where all the X coordinate values get negated. When the second term becomes -1, the reflection is about the X axis, with all Y coordinate values getting reversed. Both values are -1 for reflection about both the X and Y axes.
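The three reflection matrices described above can be written out explicitly (an illustrative NumPy sketch, row-vector convention):

```python
import numpy as np

# Reflection matrices for a 2D row vector (x, y):
#   about the Y axis:  negate x  -> diag(-1,  1)
#   about the X axis:  negate y  -> diag( 1, -1)
#   about the origin:  negate both -> diag(-1, -1)
MIRROR_Y      = np.diag([-1.0,  1.0])
MIRROR_X      = np.diag([ 1.0, -1.0])
MIRROR_ORIGIN = np.diag([-1.0, -1.0])

p = np.array([2.0, 5.0])
print(p @ MIRROR_Y)       # x negated
print(p @ MIRROR_X)       # y negated
print(p @ MIRROR_ORIGIN)  # both negated
```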
Rotation: Rotation is another important geometric transformation. The final position and orientation of a geometric entity are decided by the angle of rotation (θ) and the base point about which the rotation is to be done (Fig.).
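A rotation about the origin can be sketched as follows (row-vector convention; the `rotate` name is illustrative):

```python
import numpy as np

# Rotation of a 2D point about the origin by angle theta (counter-clockwise).
# For a row vector (x, y): x' = x cos(t) - y sin(t), y' = x sin(t) + y cos(t).
def rotate(point, theta):
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[ c, s],
                  [-s, c]])   # row-vector convention: p' = p @ R
    return point @ R

p = np.array([1.0, 0.0])
print(rotate(p, np.pi / 2))  # approximately (0, 1)
```

Rotation about an arbitrary base point is obtained by translating the base point to the origin, rotating, and translating back, which is an instance of the concatenation of transformations discussed next.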
Concatenation of Transformation
It often becomes necessary to combine the aforementioned individual transformations in order to achieve the required result. In such cases the combined transformation matrix can be obtained by multiplying the respective transformation matrices. However, care should be taken that the matrix multiplications are carried out in the same order as the transformations, as follows.
Homogeneous Representation
In order to concatenate transformations as shown in Fig., all the transformation matrices should be of the multiplicative type. However, as seen earlier, translation is vector additive while all the others are matrix multiplications. The following form can be used to convert translation into multiplicative form.
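A minimal sketch of homogeneous 3x3 matrices, which make translation multiplicative so that it can be concatenated with scaling and rotation (helper names are illustrative, not from the text):

```python
import numpy as np

# Homogeneous 3x3 matrices let translation join scaling and rotation in a
# single chain of matrix multiplications. Row-vector convention: (x, y, 1) @ M.
def translation(m, n):
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [m,   n,   1.0]])

def scaling(sx, sy):
    return np.array([[sx,  0.0, 0.0],
                     [0.0, sy,  0.0],
                     [0.0, 0.0, 1.0]])

# Concatenation: scale first, then translate. Order matters.
M = scaling(2.0, 2.0) @ translation(3.0, -1.0)
p = np.array([2.0, 5.0, 1.0])
print(p @ M)  # (2,5) scaled to (4,10), then translated to (7, 9)
```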
Similarly, there are times when the reflection is to be taken about an arbitrary line, as shown in Fig. This is done as follows:
1. Translate the mirror line along the Y axis such that the line passes through the origin O.
2. Rotate the mirror line such that it coincides with the X axis.
3. Mirror the object through the X axis.
4. Rotate the mirror line back to its original angle with the X axis.
5. Translate the mirror line along the Y axis back to its original position.
The transformation matrices for the above operations, in the given sequence, are as follows.
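The five-step sequence can be concatenated into a single matrix. Here is a sketch for a hypothetical line y = a·x + c; the slope/intercept parametrisation and the helper names are assumptions for illustration, not from the text:

```python
import numpy as np

# Mirror about an arbitrary line y = a*x + c by concatenating the five steps:
# translate the line to the origin, rotate it onto the X axis, mirror about X,
# rotate back, translate back. Row-vector convention: p' = p @ M.
def T(m, n):
    M = np.eye(3)
    M[2, 0], M[2, 1] = m, n
    return M

def R(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

MX = np.diag([1.0, -1.0, 1.0])  # mirror about the X axis

def mirror_about_line(a, c):
    th = np.arctan(a)  # angle the line makes with the X axis
    return T(0, -c) @ R(-th) @ MX @ R(th) @ T(0, c)

# Mirror the origin about the horizontal line y = 1 (a = 0, c = 1):
p = np.array([0.0, 0.0, 1.0])
print(p @ mirror_about_line(0.0, 1.0))  # reflected point (0, 2)
```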
Projections
A projection is 'formed' on the view plane (planar geometric projection): rays (projectors) projected from the centre of projection pass through each point of the model and intersect the projection plane. Since everything is synthetic, the projection plane can be in front of the model, inside the model, or behind the model.
Parallel:
o center of projection infinitely far from view plane
o projectors will be parallel to each other
o need to define the direction of projection (vector)
o 2 sub-types
orthographic - direction of projection is normal to view plane
oblique - direction of projection not normal to view plane
o better for drafting / CAD applications
Perspective:
o center of projection at a finite distance from the view plane
o projectors converge at the center of projection
o objects farther from the view plane appear smaller (foreshortening)
o better for realistic visualization
Orthographic Projection
The most common form of projection used in engineering drawing is the orthographic projection. This means that the projecting lines, or projectors, are all perpendicular (orthogonal) to the projection plane. As a result, if a feature of the object happens to be parallel to the projection plane, then its true shape and true dimensions are visible in the orthographic projection.
The orthographic projection system can include a total of six projection planes in any direction required for a complete description. A typical example is shown in Fig., where the object is enclosed in a box such that there are six mutually perpendicular projection planes on which all six possible views of the object can be projected. This helps in obtaining all the details of the object, as shown in Fig. Visible lines are shown with continuous lines, while those that are not visible are shown by means of broken lines.
The purpose of orthographic projections is to represent the object accurately. Accurately means to make a drawing from which it is possible to manufacture or reproduce the object using only the drawing as a guide.
For obtaining the front view, y = 0, and the resulting coordinates (x, z) are rotated by 90° such that the Z axis coincides with the Y axis. The transformation matrix will then be
Similarly, for obtaining the right side view, x = 0, and the coordinate system is rotated such that the Y axis coincides with the X axis and the Z axis coincides with the Y axis. The transformation matrix will then be
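One way to realise the front-view extraction as a single matrix is sketched below (row-vector convention; the matrix name is illustrative and the exact sign conventions depend on the figure, which is not reproduced here):

```python
import numpy as np

# Front view as one matrix operation: drop the y coordinate (project onto the
# XZ plane), then map the Z axis onto the vertical screen axis, leaving the
# screen coordinates (x, z) in the first two components.
FRONT = np.array([[1.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0],   # y is discarded (y -> 0)
                  [0.0, 1.0, 0.0]])  # z becomes the vertical screen coordinate

p = np.array([2.0, 7.0, 3.0])        # model point (x, y, z)
print(p @ FRONT)                     # screen coordinates (2, 3)
```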
Clipping
Clipping is a very important element in displaying graphical images. It helps in discarding the part of the geometry outside the viewing window, so that all the transformations to be carried out for zooming and panning of the image on the screen are applied only to the necessary geometry. This improves the response of the system. For example, in Fig., the image shown inside the window with dark lines is the only part that will be visible. All the geometry outside this window will be clipped (for display purposes only).
In addition to extracting part of a scene, clipping is used for identifying visible surfaces in three-dimensional views, displaying multi-window environments, and selecting objects to which geometric transformations such as rotation and scaling are to be applied.
Clipping Lines
In order to carry out the clipping operation, it is necessary to know whether a line is completely inside the clipping rectangle, completely outside the rectangle, or partially inside it, as shown in Fig. To know whether a line is completely inside or outside the clipping rectangle, the end points of the line can be compared with the clipping boundaries. For example, the line P1P2 is completely inside the clipping rectangle. Similarly, lines P3P4 and P9P10 are completely outside it. When a line such as P5P6 crosses the clipping boundary, it is necessary to evaluate the intersection point of the line with the clipping boundary (P'6) to determine which part of the line is inside the clipping rectangle. The resultant display after clipping is shown in Fig.
The parametric representation of a line with end points (x1, y1) and (x2, y2), given below, can be used to find the intersection of the line with the clipping boundaries:
x = x1 + u(x2 - x1)
y = y1 + u(y2 - y1), where 0 ≤ u ≤ 1.
If the value of u for an intersection with a clipping boundary is outside the range 0 to 1, then the intersection point does not lie on the line segment; if it is between 0 and 1, the intersection point lies on the segment. This test has to be applied to each of the edges of the clipping rectangle to identify the position of the lines. This requires a large amount of computation, and hence a number of more efficient line-clipping algorithms have been developed.
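The parametric test can be sketched for one (vertical) clipping boundary as follows (the function name is illustrative):

```python
# Parametric intersection of the segment (x1,y1)-(x2,y2) with a vertical
# clipping boundary x = xb: solve xb = x1 + u*(x2 - x1) for u; the intersection
# lies on the segment only when 0 <= u <= 1.
def intersect_vertical(x1, y1, x2, y2, xb):
    if x2 == x1:
        return None                     # segment parallel to the boundary
    u = (xb - x1) / (x2 - x1)
    if not 0.0 <= u <= 1.0:
        return None                     # boundary crossed outside the segment
    return (x1 + u * (x2 - x1), y1 + u * (y2 - y1))

print(intersect_vertical(0.0, 0.0, 4.0, 2.0, 2.0))  # (2.0, 1.0)
print(intersect_vertical(0.0, 0.0, 4.0, 2.0, 6.0))  # None (u = 1.5)
```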
In this method (the Cohen-Sutherland approach), all the lines are classified as in, out, or partially in the window by doing an edge test. The end points of each line are classified according to where they lie with reference to the window by means of a 4-digit binary code, as shown in Fig. The code is given as TBRL (top, bottom, right, left). The code is identified as follows:
The full 4-digit codes of the line end points with reference to the window are shown in Fig. Having assigned the 4-digit codes, the system first examines whether the line is fully in or out of the window by the following conditions: the line is completely inside the window if both end-point codes are equal to 0000; the line is completely outside the window if the end-point codes are not both 0000 and have a 1 in the same bit position for both ends (i.e., their logical AND is not 0000).
Lines which are partly inside the window are split at the window edges, and a line may cross two regions, as shown in Fig. For the line P1P2, starting from the lower edge, the intersection point P'1 is found and the segment P1P'1 is discarded.
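The TBRL end-point coding and the trivial accept/reject conditions can be sketched as follows (the particular bit assignments are one common convention, an assumption here):

```python
# 4-bit TBRL outcode for a point against a rectangular clipping window.
TOP, BOTTOM, RIGHT, LEFT = 8, 4, 2, 1

def outcode(x, y, xmin, ymin, xmax, ymax):
    code = 0
    if y > ymax: code |= TOP
    if y < ymin: code |= BOTTOM
    if x > xmax: code |= RIGHT
    if x < xmin: code |= LEFT
    return code

def classify(p1, p2, win):
    c1 = outcode(*p1, *win)
    c2 = outcode(*p2, *win)
    if c1 == 0 and c2 == 0:
        return "inside"        # both end points coded 0000: trivially accept
    if c1 & c2 != 0:
        return "outside"       # a shared 1 bit: trivially reject
    return "partial"           # boundary intersections must be computed

win = (0.0, 0.0, 10.0, 10.0)   # xmin, ymin, xmax, ymax
print(classify((1, 1), (9, 9), win))    # inside
print(classify((12, 1), (15, 9), win))  # outside (both right of the window)
print(classify((-2, 5), (5, 5), win))   # partial
```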
Clipping Polygon.
The line-clipping algorithm discussed earlier can be modified to obtain polygon clipping. However, as can be seen in Fig., simply extending the line-clipping procedure described above can produce a result in which more than one disjoint geometry exists. This ambiguity is removed by the use of the polygon-clipping algorithm developed by Sutherland and Hodgman.
The basic idea used in polygon clipping is that an n-sided polygon is represented by n vertices. The polygon is clipped against one window edge at a time. For each polygon edge, two tests are applied: if the edge crosses the window edge, the intersection point is added to the output list; if the edge's end vertex lies on the inside of the window edge, that vertex is also added to the output list. This process is repeated for all the edges of the polygon and for each window edge in turn. The resulting output is an m-sided polygon, which can be displayed as shown in Fig. The main advantage of this algorithm is that the clipping window need not be a rectangle. Further, it can easily be extended to 3D.
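A compact sketch of the Sutherland-Hodgman procedure for a rectangular window (helper names are illustrative; the four per-edge cases are encoded in `clip_edge`):

```python
# Sutherland-Hodgman polygon clipping against a rectangular window (a sketch).
# The polygon is clipped against one window edge at a time; for each directed
# polygon edge, crossing points and inside end-vertices join the output list.
def clip_edge(polygon, inside, intersect):
    out = []
    for i, cur in enumerate(polygon):
        prev = polygon[i - 1]
        if inside(cur):
            if not inside(prev):
                out.append(intersect(prev, cur))  # entering: add crossing point
            out.append(cur)                       # end vertex is inside: keep it
        elif inside(prev):
            out.append(intersect(prev, cur))      # leaving: add crossing point
    return out

def clip_polygon(polygon, xmin, ymin, xmax, ymax):
    def x_cross(p, q, xb):
        u = (xb - p[0]) / (q[0] - p[0])
        return (xb, p[1] + u * (q[1] - p[1]))
    def y_cross(p, q, yb):
        u = (yb - p[1]) / (q[1] - p[1])
        return (p[0] + u * (q[0] - p[0]), yb)
    for inside, intersect in [
        (lambda p: p[0] >= xmin, lambda p, q: x_cross(p, q, xmin)),
        (lambda p: p[0] <= xmax, lambda p, q: x_cross(p, q, xmax)),
        (lambda p: p[1] >= ymin, lambda p, q: y_cross(p, q, ymin)),
        (lambda p: p[1] <= ymax, lambda p, q: y_cross(p, q, ymax)),
    ]:
        polygon = clip_edge(polygon, inside, intersect)
    return polygon

# A triangle poking out of the left edge of a unit-square window becomes a
# four-sided clipped polygon.
tri = [(-0.5, 0.5), (0.5, 0.0), (0.5, 1.0)]
print(clip_polygon(tri, 0.0, 0.0, 1.0, 1.0))
```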
Hidden-line and hidden-surface removal methods fall into two broad classes:
1. Object Space
Compares objects and parts of objects to each other to determine which surfaces and lines should be labelled as invisible.
2. Image Space
Visibility is determined point by point at each pixel position on the projection plane. It is further divided into vector and raster methods depending upon the type of display used.
There are many approaches to hidden surface removal, some basic approaches are highlighted here.
Back-face Removal
The basic concept used in this algorithm is to retain only those faces that are facing the camera (centre of projection). The normal of a polygon face indicates the direction in which it is facing. Thus, a face can be seen if some component of the normal N (Fig.) lies along the direction of the projector ray P.
If an object can be approximated by a solid polyhedron, then its polygonal faces completely enclose its volume. It is then possible to define all the polygons such that their surface normals point out of the polyhedron, as shown in Fig. for a polygon slice. If the interior of the polyhedron is not exposed by the front clipping plane, then those polygons whose surface normals point away from the camera (observer) lie on a part of the polyhedron which is completely blocked by other polygons that are closer (Fig. 2).
Such invisible faces, called back faces, can be eliminated from processing, leaving only the front faces.
As can be seen, not all the front faces are completely visible. Some may be completely obscured by other faces (such as E) or only partially visible (such as C). This method identifies the invisible faces of individual objects only. However, in the majority of cases it removes almost 50% of the surfaces from the database, which can then be processed faster by the other algorithms.
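The back-face test reduces to a single dot product. A sketch, assuming the projector ray points from the camera into the scene (the sign convention is an assumption; it flips if the ray is taken toward the camera):

```python
import numpy as np

# Back-face test: a polygon whose outward surface normal N has a positive
# component along the viewing direction V (pointing into the scene) faces away
# from the camera and can be culled.
def is_back_face(normal, view_dir):
    return float(np.dot(normal, view_dir)) > 0.0

view = np.array([0.0, 0.0, -1.0])  # camera looking down the -Z axis
print(is_back_face(np.array([0.0, 0.0, 1.0]), view))   # faces the camera: kept
print(is_back_face(np.array([0.0, 0.0, -1.0]), view))  # back face: culled
```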
The Z-buffer Algorithm
The Z-buffer is a separate depth buffer used to store the z-coordinate (depth) of each pixel in the geometric model. This method uses the principle that, at each pixel location, only the point with the smallest z-depth is displayed. The figure shows two surfaces S1 and S2 at varying distances along the position (x, y) in a view plane. Surface S1 is closest at this position, so its surface depth value is saved at the (x, y) position.
For this purpose, it constructs two arrays:
Z(x, y), the dynamically updated nearest z-depth of any polygon face examined so far, corresponding to the (x, y) pixel coordinates.
I(x, y), the final output colour intensity for each pixel, which gets modified as the algorithm scans through all the faces that have been retained after the back-face removal algorithm.
The first face is projected onto the viewing plane and the Z(x, y) and I(x, y) arrays are filled with the z-depth and colour of the face. The next polygon is projected, and its z-depth at each pixel location is compared with the corresponding value stored in Z(x, y). If the new z-depth is smaller, it replaces the existing value, and its colour is stored in the corresponding position in I(x, y). This process is repeated for all the faces. Thus, the image stored in I(x, y) is the correct image, accurate to the nearest pixel, with all the hidden surfaces removed.
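The Z(x, y) / I(x, y) bookkeeping can be sketched as follows (a toy 4x4 buffer; integer colour ids stand in for real intensities):

```python
import numpy as np

# Z-buffer sketch: per-pixel arrays Z(x, y) and I(x, y). Each face is rasterized
# into (x, y, depth, colour) samples; a sample wins a pixel only if its depth is
# smaller than the depth currently stored there.
W, H = 4, 4
Z = np.full((W, H), np.inf)       # nearest depth seen so far (infinity = empty)
I = np.zeros((W, H), dtype=int)   # colour id per pixel (0 = background)

def raster(samples):
    for x, y, depth, colour in samples:
        if depth < Z[x, y]:
            Z[x, y] = depth
            I[x, y] = colour

raster([(1, 1, 5.0, 1), (2, 2, 5.0, 1)])  # far face, colour 1
raster([(1, 1, 2.0, 2)])                  # nearer face overwrites pixel (1, 1)
print(I[1, 1], I[2, 2])                   # 2 1
```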
The main advantages of the algorithm are its simplicity and the predictable amount of storage required. The disadvantage of the method is the difficulty of implementing anti-aliasing, transparency, and translucency effects. The reason for this is that the algorithm writes the pixels to the frame buffer in an arbitrary order, so the information necessary for pre-filtering anti-aliasing techniques is not easily available. Similarly, for transparency and translucency effects, pixels may be written to the frame buffer in the incorrect order, leading to local errors.
This method is often called the painter's algorithm, since it imitates, in the depth direction, the procedure followed by artists in oil painting. The artist first paints the background colours, then adds the most distant objects, and later adds the nearer objects in order of decreasing depth. Finally, the foreground objects are added to the canvas over the background and the other objects that have already been painted. Each new layer of paint covers the paint already present on the canvas.
This process is carried out in a number of steps. In the first pass, all the surfaces (polygons) are ordered according to the smallest z-value on each surface. The surface with the greatest depth (z-value) is then compared with all the other surfaces in the list to check whether there is any overlap in the z-direction. If there is no overlap, the surface is scan-converted to fill the frame buffer, and the same procedure is applied to the remaining surfaces in the list. If an overlap is detected, further tests need to be done on that surface to examine its visibility.
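The depth-ordering first pass can be sketched as follows (assuming larger z means greater depth from the viewer; the field names are illustrative):

```python
# Painter's-algorithm first pass (a sketch): order the surfaces by decreasing
# depth so that the farthest surface is scan-converted (painted) first and
# nearer surfaces overwrite it later.
surfaces = [
    {"name": "S1", "depth": 3.0},
    {"name": "S3", "depth": 9.0},
    {"name": "S2", "depth": 6.0},
]
paint_order = sorted(surfaces, key=lambda s: s["depth"], reverse=True)
print([s["name"] for s in paint_order])  # ['S3', 'S2', 'S1']
```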
The following tests are conducted to see whether reordering of the surfaces is necessary. If any of these tests is true, no reordering is needed and we proceed to the next surface:
The bounding rectangles in the xy plane of the two surfaces do not overlap.
One surface is completely behind the overlapping surface relative to the viewing position.
The overlapping surface is completely in front of the other surface relative to the viewing position.
The projections of the two surfaces onto the viewing plane do not overlap.
Some examples are shown in Fig. There is an overlap in the z-direction between the surfaces S1 and S2. However, there is no overlap in the x-direction; we then also check the y-direction, and if there is no overlap there either, S2 cannot obscure S1. The surface S4 is completely in front of the surface S3, but the surface S3 is not completely inside S4. When all the first three tests fail, the intersection between the bounding edges of the two surfaces is checked, as shown in Fig. The two surfaces may or may not interfere, but the test still fails because of the intersection of the bounding edges.
Computer Integrated Manufacturing Dept of IEM, SIT
Sub: Computer Aided Design Unit 2 Transformations and Graphic Standards
When all the tests fail, the order of the surfaces in the list is interchanged and the procedure repeated. There is no guarantee that, even after interchanging, we will not come across situations where the surfaces get into an infinite loop, with the same surfaces needing to be continuously reordered during processing. In such cases, some of these surfaces are flagged and reordered to a further depth position so that they cannot be moved again. Alternatively, a surface that has been moved more than twice may be divided into two surfaces and the processing continued.
Colour
The human visual system can distinguish only a few shades of grey, while it has much more discriminating power
with respect to colour shades. Thus, colour provides a lot of information about the object displayed.
Our perception of colour is determined by the colour of the light source and the reflectance properties of the object, since only those rays that are reflected are seen, while the others are absorbed. The use of colour enhances the presentation of information in CAD/CAM in a number of ways. Using different colours for different types of geometric entities during the construction stage helps the engineer follow the modelling process with more clarity. For example, in a wireframe, surface, or solid modelling process, the entities can be assigned different colours to distinguish them. In addition, colour becomes very important for obtaining a realistic appearance of the object in the shaded images produced by shading algorithms, as seen later. In finite element analysis, colours can be used effectively to display contour images such as stress or heat-flux contours.
Colour Models
RGB Model
CMY Model
HSI Model
YIQ Model
RGB Model: In the RGB model, an image consists of three independent image planes: red, green and blue. This is an additive model, i.e., the colours present in the light add to form new colours, as shown in Fig. The other colours obtained are yellow (red + green), cyan (blue + green), magenta (red + blue) and white (red + green + blue). A particular colour is specified by the amount of each primary colour needed to produce it. This model is appropriate for the mixing of coloured light and is used for colour monitors and most video cameras.
CMY Model: Unlike the RGB model, the CMY (cyan-magenta-yellow) model is a subtractive model (Fig.). The three primary colours are cyan (C), magenta (M) and yellow (Y). The other colours obtained are red (yellow + magenta), green (cyan + yellow), blue (cyan + magenta) and black (cyan + magenta + yellow).
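Since RGB and CMY are complementary, conversion between them is a simple subtraction from unity (a sketch with channels normalised to the range [0, 1]):

```python
# RGB (additive) and CMY (subtractive) are complements: with each channel in
# [0, 1], C = 1 - R, M = 1 - G, Y = 1 - B.
def rgb_to_cmy(r, g, b):
    return (1.0 - r, 1.0 - g, 1.0 - b)

print(rgb_to_cmy(1.0, 0.0, 0.0))  # red   -> (0.0, 1.0, 1.0)
print(rgb_to_cmy(1.0, 1.0, 1.0))  # white -> (0.0, 0.0, 0.0), i.e. no ink
```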
Graphics Standard
Learning Objectives:
Need for CAD standard
Understand the graphic kernel system and its extensions for developing the graphic software systems
Requirement of graphic data exchange formats and their details such as IGES, DXF and STEP
Dimensional measurement interface specification for communication between coordinate measuring
machine and the CAD data
Introduction
The purpose of a CAD standard is that the CAD software should be device-independent: it should connect to any input device via a device driver and to any graphics display via a device driver.
The graphics system is divided into two parts: the kernel system, which is hardware-independent, and the device driver, which is hardware-dependent. The kernel system acts as a buffer that provides device independence and portability for the program. At interface 'X', the application program calls the standard functions and subroutines provided by the kernel system through what are called language bindings. These functions and subroutines in turn call the device-driver functions and subroutines at interface 'Y' to complete the task required by the application program (Fig.).
Standardization in graphics
It is necessary to standardise certain elements at each stage so that a company's investment in software and hardware can be carried over to newer and different systems without much modification.
This means that there should be compatibility between various software elements as also between the hardware
and software.
Following are some of the interface standards at various levels;
GKS (Graphical Kernel System): provides a set of drawing features for two-dimensional vector graphics
suitable for charting and similar duties.
PHIGS (Programmer's Hierarchical Interactive Graphics System): The PHIGS standard defines a set of functions and data structures to be used by a programmer to manipulate and display 3D graphical objects.
CORE (ACM-SIGGRAPH)
GKS-3D
IGES (Initial Graphics Exchange Specification): It is an ANSI standard that enables the exchange of model databases among CAD systems.
DXF (Drawing Exchange Format): This file format was meant to provide an exact representation of the data in the native CAD file format.
STEP (Standard for the Exchange of Product Model Data)
DMIS (Dimensional Measurement Interface Specification)
VDI (Virtual Device Interface): Lies between the kernel system (such as GKS) and the device driver. VDI is now called CGI (Computer Graphics Interface).
VDM (Virtual Device Metafile): Defines the functions needed to describe a picture, which can be stored or transmitted from one graphics device to another. VDM is now called CGM (Computer Graphics Metafile).
GKSM (GKS Metafile)
NAPLPS (North American Presentation Level Protocol Syntax) describes text and graphics in the form of
sequences of bytes in ASCII code.
Features of GKS
GKS was designed in accordance with the following six requirements:
First, GKS has to provide all the capabilities that are important for the whole range of graphics, from simple passive output to highly interactive applications.
Second, different types of graphic devices, such as vector and raster devices, microfilm recorders, storage-tube displays, refresh displays and colour displays, must be controlled by GKS in a consistent manner.
Third, GKS must include all the capabilities required by a majority of applications without becoming excessively large.
Fourth, a complete suite of display management functions, cursor control and other features must be provided.
Fifth, the graphic functions are defined in 2D or 3D.
Sixth, all text or annotations are in a natural language such as English.
GKS-3D
The objective of GKS-3D is to enhance GKS to 3D by introducing
• The definition and the display of 3D graphical primitives
• Mechanisms to control viewing transformations and associated parameters
Data can be interchanged between two different CAD systems using a neutral data format such as IGES or STEP.
IGES translators
IGES Testing
An IGES test library has been prepared by the committee to allow testing of the basic implementation of an entity. The library does not allow checking of the variations that occur in production data due to numerical and computational errors. These variations must be tested by implementers and users.
The common methods of testing are:
1. Reflection test
2. Transmission test
3. Loop back test
1. Reflection test
During the reflection test, an IGES file created by a system's preprocessor is read by its own postprocessor to create another model. It is used to verify that a system's processors can read and write common entities, making them symmetric.
2. Transmission test
During this test, an IGES file of a model created by the source system's preprocessor is transferred to a target system, whose postprocessor is used to recreate the model on the target system. The transmission test determines the capabilities of the preprocessor of the source system and the postprocessor of the target system.
Transmission test
3. Loopback test
In the loopback test, an IGES file created by the source system is read by the target system, which creates another IGES file and transfers this file back to the source system to read. The loopback test checks the pre- and postprocessors of both the source and the target systems.
Loopback test
Advantages and Limitation of IGES
Advantages of IGES:
i) Saves drawing data in an ASCII format, which can then be transferred between various users.
ii) IGES initially supported drawings and wireframes, and was later expanded to support surfaces and solids.
iii) Permits viewing and editing of geometry using any CAD tool capable of interpreting IGES geometry, bridging the gap between CAD systems and product definition.
iv) Useful for grouping mechanical elements in a given view.
v) Can be included in an assembly plan containing all connector elements.
vi) Simple derivation of the hidden information content.
Limitations of IGES:
i) Does not have a formal data model.
ii) The lack of a formal data model causes issues during file handling and makes the file format hard to understand.
iii) If there is a mistake in the IGES file, it is very complicated to find and correct it.
iv) Incomplete exchanges can occur due to the variety of 'flavors' added by CAD vendors.
v) IGES does not maintain lifecycle data, which may be applicable to engineering applications other than design.
Example of STEP file generation: Sheet Metal Die Planning and Design