
Computer Graphics

UNIT-1
Graphics Systems and Models
Computer graphics is concerned with all aspects of producing pictures or images using a
computer. The field began with the display of a few lines on a cathode-ray tube (CRT); today it
can generate images that are nearly indistinguishable from photographs.
1.1 Applications of Computer Graphics:
Four major areas of application are:
1. Display of information:
- Computer-based drafting systems produce the information needed by architects, mechanical designers, and draftspeople.
- Computer plotting packages provide a variety of plotting techniques and color tools and can handle multiple large data sets.
- Medical imaging systems such as computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, and positron-emission tomography (PET) generate three-dimensional data that must be subjected to algorithmic manipulation to provide useful information.
- Graphical tools provided by the field of scientific visualization help researchers interpret the vast quantities of data that supercomputers generate while solving previously intractable problems.
- Data can be converted to geometric entities from which images are produced. This capability has yielded new insights into complex processes in fields such as fluid flow, molecular biology, and mathematics.
- Maps relevant to geographic information systems can be developed and manipulated in real time over the Internet.

2. Design:
- Professions such as engineering and architecture are concerned with design. Design starts with a set of specifications and ends with a cost-effective and aesthetic solution that satisfies the specifications.
- The design process is iterative: the designer generates a possible design, tests it, and then uses the results as the basis for exploring other solutions.
- The solutions are not unique. Moreover, design problems are either over-determined, such that they possess no optimal solution, or under-determined, such that they have multiple solutions.
- Computer graphics aids this iterative process and helps the designer converge on a good solution.
- Computer-aided design (CAD) uses interactive graphical tools. CAD is used in architecture and in the design of mechanical parts and of very-large-scale integrated (VLSI) circuits. In many such applications, the graphics is used in a number of distinct ways. For example, in a VLSI design, the graphics provides an interactive interface between the user and the design package, usually via tools such as menus and icons. In addition, after the user evolves a possible design, other tools analyze the design and display the analysis graphically.

3. Simulation
Graphics systems are capable of generating sophisticated images in real time, and engineers and
researchers use them as simulators. Uses of simulation include:
- Graphical flight simulators: have proved cost-effective and safe for training pilots.


- Arcade games: are as sophisticated as flight simulators.
- Games and educational software for home computers are almost as impressive.
- Robot design: planning a robot's path and behavior in complex environments.
- The television, motion-picture, and advertising industries use computer graphics to generate photorealistic images.
- Entire animated movies can be made by computer at a cost comparable to that of movies made with traditional hand-animation techniques.
- In the field of virtual reality (VR), a human viewer is equipped with a display headset that presents separate images to the right and left eyes, producing the effect of stereoscopic vision.
- In addition, the body location and position of the viewer, possibly including head and finger positions, are tracked by the computer. The viewer, interacting through devices such as force-sensing gloves and sound, can then act as part of a computer-generated scene, limited only by the image-generation ability of the computer.
- For example, a surgical intern might be trained to do an operation in this way, or an astronaut might be trained to work in a weightless environment.

4. User interfaces
- Most computer applications have user interfaces that rely on desktop window systems to manage multiple simultaneous activities, and on point-and-click facilities that allow users to select menu items, icons, dialogue boxes, and objects on the screen.
- The visual paradigm includes windows, icons, menus, and a pointing device, such as a mouse. This paradigm has greatly improved human-computer interaction through windowing systems such as the X Window System, Microsoft Windows, and the Macintosh operating systems.
- Graphical network browsers, such as Netscape and Internet Explorer, brought millions of users to the Internet.
- There are graphical interfaces other than graphical user interfaces.

1.2 A Graphics System
A computer graphics system is a computer system that has, in addition to the common
components of a general-purpose computer system, a special component called the frame buffer.
The five major components are:
1. Processor
2. Memory
3. Frame buffer
4. Output devices
5. Input devices
(Figure: block diagram showing the five major components of a graphics system.)


1. Processor (CPU & GPU):
- In a simple system, there may be only one processor, the central processing unit (CPU) of the system, which must do both the normal processing and the graphical processing.
- The main graphical function of the processor is to take specifications of graphical primitives (such as lines, circles, and polygons) generated by application programs and to assign values to the pixels in the frame buffer that best represent these entities.
- For example, a triangle is specified by its three vertices, but to display its outline as the three line segments connecting the vertices, the graphics system must generate a set of pixels that appear as line segments to the viewer.
- The conversion of geometric entities to pixel colors and locations in the frame buffer is known as rasterization, or scan conversion (a small sketch follows at the end of this list).
- Sophisticated graphics systems are characterized by special-purpose graphics processing units (GPUs), custom-tailored to carry out specific graphics functions.
- The GPU can be either on the motherboard of the system or on a graphics card.
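To make scan conversion concrete, here is a minimal sketch (not from the text) of a DDA line rasterizer in C. The frame buffer is modeled as a plain array; the names WIDTH, HEIGHT, set_pixel, and dda_line are illustrative assumptions, not part of any real API.

/* A minimal DDA scan-conversion sketch; dimensions are assumed. */
#include <math.h>

#define WIDTH  640
#define HEIGHT 480

static unsigned char framebuffer[HEIGHT][WIDTH];   /* 1 byte per pixel */

static void set_pixel(int x, int y, unsigned char color)
{
    if (x >= 0 && x < WIDTH && y >= 0 && y < HEIGHT)
        framebuffer[y][x] = color;                  /* write into the frame buffer */
}

/* Rasterize the line segment from (x0, y0) to (x1, y1). */
void dda_line(float x0, float y0, float x1, float y1, unsigned char color)
{
    float dx = x1 - x0, dy = y1 - y0;
    int steps = (int)fmaxf(fabsf(dx), fabsf(dy));   /* roughly one step per pixel */
    if (steps == 0) {                               /* degenerate segment: a point */
        set_pixel((int)(x0 + 0.5f), (int)(y0 + 0.5f), color);
        return;
    }
    float xinc = dx / steps, yinc = dy / steps;
    float x = x0, y = y0;
    for (int i = 0; i <= steps; i++) {
        set_pixel((int)(x + 0.5f), (int)(y + 0.5f), color);  /* round to nearest pixel */
        x += xinc;
        y += yinc;
    }
}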

2. Frame Buffer:
Raster: A matrix (arranged in rows and columns) of discrete cells (pixels) that can be
illuminated in a raster-graphics display is called a raster.
Pixel: Each discrete cell in a raster is called a pixel (picture element).
The part of memory used by the graphics system to store pixels is called the frame buffer. The
frame buffer can be viewed as the core element of a graphics system.
In simpler systems, the frame buffer is part of standard memory.
In high-end systems, the frame buffer is implemented with special types of memory chips, such
as video random-access memory (VRAM) or dynamic random-access memory (DRAM), that
enable fast redisplay of the contents of the frame buffer.
Resolution of the frame buffer: The number of pixels in the frame buffer, which determines the
detail of an image, is called the resolution.
Depth of the frame buffer: The number of bits used for each pixel, which determines the
properties of each pixel (say, its color), is called the depth of the frame buffer.
E.g.
A 1-bit-deep frame buffer allows only two colors.
An 8-bit-deep frame buffer allows 2^8 = 256 colors.


Full-color / true-color / RGB-color systems: Display systems with a frame-buffer depth of
24 bits per pixel, in which three groups of 8 bits each are assigned to the three primary colors
(red, green, and blue) used in most displays. Such systems can display sufficient colors to
represent most images realistically.
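As a quick worked example (the resolution here is an assumed, illustrative value, not taken from the text): a true-color frame buffer for a 1280 x 1024 display needs 1280 x 1024 pixels x 3 bytes per pixel = 3,932,160 bytes, or about 3.75 MB of memory.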

3. Output Devices, e.g., the Cathode-Ray Tube (CRT)
For many years the dominant type of display (or monitor) was the cathode-ray tube (CRT). A
simplified picture of a CRT is shown in Figure 1.3.
- An electron gun produces a beam of electrons.
- The direction of the beam is controlled by two pairs of deflection plates.
- The output of the computer is converted, by digital-to-analog converters, to voltages across the x and y deflection plates, which control the direction of the electron beam.
- When electrons strike the phosphor coating on the tube, light is emitted.
- Light appears on the surface of the CRT when a sufficiently intense beam of electrons is directed at the phosphor.

TYPES OF CRTs

Random-scan or calligraphic CRT:
The CRTs used in early graphics systems, in which the electron beam can be moved directly from
any position to any other position. If the voltages steering the beam change at a constant rate, the
beam traces a straight line, visible to a viewer. If the intensity of the beam is turned off, the beam
can be moved to a new position without changing the visible display.
A typical CRT emits light for only a short time, usually a few milliseconds, after the phosphor is
excited by the electron beam. For a human to see a steady image on most CRT displays, the same
path must be retraced, or refreshed, by the beam at least 50 times per second.

Raster system: The CRTs in present graphics systems, in which the pixels are taken from
the frame buffer and displayed as points on the surface of the display.
Refresh rate: The rate at which the entire contents of the frame buffer are redisplayed on the
CRT; it must be high enough to avoid flicker.
Two types of raster systems, corresponding to the two ways of displaying pixels:
Interlaced display: Odd rows and even rows are refreshed alternately. Interlaced
displays are used in commercial television. In an interlaced display operating at 60 Hz,
the screen is redrawn in its entirety only 30 times per second, although the visual system
is tricked into thinking the refresh rate is 60 Hz rather than 30 Hz.
Non-interlaced display: The pixels are displayed row by row, or scan line by scan
line, at the refresh rate, which is usually 50 to 85 times per second, or 50 to 85 hertz (Hz).
Non-interlaced displays are becoming more widespread, even though they must process pixels at
twice the rate of an interlaced display. Viewers located near the screen can tell the difference
between interlaced and non-interlaced displays.

Color CRTs:
They have three different colored phosphors (red, green, and blue), arranged in small groups.
One common style arranges the phosphors in triangular groups called triads, each triad
consisting of three phosphors, one of each primary. Most color CRTs have three electron beams,
corresponding to the three types of phosphors.
Shadow-mask CRT: A metal screen with small holes, the shadow mask, ensures that an electron
beam excites only phosphors of the proper color.
Other output devices, such as liquid-crystal displays (LCDs), must also be refreshed, whereas
hard-copy devices, such as printers, do not need to be refreshed, although both are raster-based.

Although CRTs are still common display devices, they are rapidly being replaced by flat-screen
technologies.
Flat-panel monitors are inherently raster based. There are multiple technologies available,
including light-emitting diodes (LEDs), liquid-crystal displays (LCDs), and plasma panels, all of
which use a two-dimensional grid to address individual light-emitting elements.
Figure 1.5 shows a generic flat-panel monitor.
- The two outside plates each contain parallel grids of wires that are oriented perpendicular to each other.
- By sending electrical signals to the proper wire in each grid, the electrical field at a location, determined by the intersection of two wires, can be made strong enough to control the corresponding element in the middle plate.
- The middle plate in an LED panel contains light-emitting diodes that can be turned on and off by the electrical signals sent to the grid.


- In an LCD, the electrical field controls the polarization of the liquid crystals in the middle panel, thus turning on and off the light passing through the panel.
- A plasma panel uses the voltages on the grids to energize gases embedded between the glass panels holding the grids. The energized gas becomes a glowing plasma.

4. Input Devices
- Positional input devices: Provide positional information to the system and are usually equipped with one or more buttons to send signals to the processor. E.g., mouse, joystick, and data tablet.
- Pointing devices: Allow a user to indicate a particular location on the display. E.g., light pen.
- Others: e.g., keyboard.

1.3 Images: Physical and Synthetic (Artificial) Images


Computer-generated images are synthetic or artificial, in the sense that the objects being imaged
need not exist physically. They can nevertheless be formed in a manner similar to that of
traditional imaging methods, such as optical systems (cameras) and the human visual system.
Hence, to understand and develop computer-generated imaging systems, the following sequence
of study is needed:
- Traditional imaging systems.
- A model (paradigm) of the image-formation process, based on these traditional imaging methods.
- A computer architecture for implementing that model (covered in subsequent chapters, with the relevant equations).

1.3.1 Basic entities of image formation:

Objects and Viewers

Object: The object exists in space independent of any image-formation process and of any
viewer.
In computer graphics, graphic objects (various geometric primitives, such as points, lines, and
polygons) are synthetic and are specified, defined, or approximated by their positions (locations)
in space and, sometimes, by relationships among those positions.
E.g.
- A line can be defined by two vertices.
- A polygon can be defined by an ordered list of vertices.
- A sphere can be specified by two vertices: one at its center and one at any point on its circumference.

Viewer: It forms the image of the objects. The viewer may be a human, a camera, or a digitizer.
It is easy to confuse images and objects. Usually an object is seen from an individual's single
perspective, forgetting that other viewers, located in other places, will see the same object
differently.
In a camera system viewing a building, both the object (the building) and the viewer exist in a
three-dimensional world. However, the image that they define, found on the film plane, is
two-dimensional.


Figure 1.13(a) shows two viewers observing the same building. This image is what is seen by an
observer A who is far enough away from the building to see both the building and the two other
viewers, B and C. From A's perspective, B and C appear as objects, just as the building does.
Figures 1.13(b) and (c) show the images seen by B and C, respectively. All three images contain
the same building, but the image of the building is different in each.
Thus the process by which the specification of the object is combined with the specification of
the viewer to produce a two-dimensional image is the essence of image formation.
1.3.2 Other entities of image formation:
- Light source: It makes the objects visible in the image; without it the objects would be dark and there would be nothing visible in the image.
- Color: The way color enters the picture.
- Surfaces: The different kinds of surfaces on objects, which affect the image.

A simple physical imaging system, a camera with a light source, taking a more physical
approach:
It consists of a physical object, a viewer (the camera), and a light source in the scene. Light from
the source strikes various surfaces of the object, and a portion of the reflected light enters the
camera through the lens. The details of the interaction between the light and the surfaces of the
object determine how much light enters the camera.
Light sources emit light energy at a fixed rate, or intensity. Light travels in straight lines from the
sources to those objects with which it interacts. A particular light source is characterized by the
intensity of light that it emits at each frequency and by that light's directionality.
- An ideal point source emits energy from a single location, at one or more frequencies, equally in all directions.
- More complex sources, such as a light bulb, can be characterized as emitting light over an area and as emitting more light in one direction than another. Such sources often can be modeled by a number of carefully placed point sources (Chapter 6).

Note: Here only monochromatic (single-frequency) point sources are considered, for simplicity.
This is analogous to discussing black-and-white television before examining color television.
1.3.3 Image formation models:
Ray Tracing
An imaging model can be built by following light from a source.

Consider the scene in the figure. It is illuminated by a single point source. The viewer is included
because only the light that reaches the eye of the viewer is of interest. The viewer can also be a
camera, as shown below.

Ray: A semi-infinite line that emanates from a point and travels to infinity in a particular
direction; rays are the natural abstraction because light travels in straight lines. A portion of these
infinite rays contributes to the image on the film plane of the camera. E.g., if the source is visible
from the camera, some of the rays go directly from the source through the lens of the camera and
strike the film plane. Most rays go off to infinity, neither entering the camera directly nor striking
any of the objects. These rays contribute nothing to the image, although they may be seen by
some other viewer. The remaining rays strike and illuminate objects. These rays can interact with
the objects' surfaces in a variety of ways.
E.g.
- Mirror surface: If the surface is a mirror, a reflected ray might, depending on the orientation of the surface, enter the lens of the camera and contribute to the image.
- Diffuse surfaces: They scatter light in all directions.
- Transparent surfaces: They allow the light ray from the source to pass through, perhaps being bent or refracted; the ray may then interact with other objects, enter the camera, or travel to infinity without striking another surface.

Ray tracing is an image-formation technique based on these ideas: tracing rays of light to form
an image. This paradigm is useful in understanding the interaction between light and materials
that is essential to physical image formation. Only a small fraction of all the rays leaving a source
enter the imaging system, so the time spent tracing most rays is wasted.
Ray tracing is an alternative way to develop a computer graphics system. It can simulate even
complex physical effects, at the expense of the requisite computing. It is a close approximation to
the physical world, but it is not well suited for fast computation.
However, by further simplification it is possible to reduce the computational burden.
E.g.
- By assuming that all objects are uniformly bright: from the perspective of the viewer, a red triangle then appears to have the same shade of red at every point and is indistinguishable from a uniformly red emitter of light. Given this assumption, sources can be neglected, and simple trigonometric methods can be used to calculate the image.
- From a physical perspective, if an object appears to emit light, one cannot tell whether the object is reflecting light or emitting light from internal energy sources. This ambiguity also reduces the computation required.
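To make the idea concrete, below is a minimal, self-contained ray-casting sketch in C (an illustration only, not the book's algorithm): one ray is cast from a pinhole eye through each pixel of a tiny image plane and tested against a single sphere. The scene values and the ASCII "frame buffer" are assumptions for this example.

#include <math.h>
#include <stdio.h>

typedef struct { double x, y, z; } vec3;

static double dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Does the ray origin + t*dir, t > 0, hit the sphere (center c, radius r)? */
static int hits_sphere(vec3 origin, vec3 dir, vec3 c, double r)
{
    vec3 oc = { origin.x - c.x, origin.y - c.y, origin.z - c.z };
    double a = dot(dir, dir);
    double b = 2.0 * dot(oc, dir);
    double disc = b*b - 4.0*a*(dot(oc, oc) - r*r);
    return disc >= 0.0 && (-b + sqrt(disc)) > 0.0;   /* real intersection in front */
}

int main(void)
{
    const int W = 40, H = 20;              /* a tiny ASCII "frame buffer"  */
    vec3 eye    = { 0.0, 0.0,  0.0 };      /* pinhole at the origin        */
    vec3 sphere = { 0.0, 0.0, -3.0 };      /* assumed scene: one sphere    */
    for (int row = 0; row < H; row++) {
        for (int col = 0; col < W; col++) {
            /* ray through the pixel center on the z = -1 image plane */
            vec3 dir = { (col + 0.5)/W - 0.5, 0.5 - (row + 0.5)/H, -1.0 };
            putchar(hits_sphere(eye, dir, sphere, 1.0) ? '#' : '.');
        }
        putchar('\n');
    }
    return 0;
}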

Photon Mapping (refer to the textbook)


1.4 Imaging Systems
1.4.1 The Pinhole Camera
The pinhole camera in Figure 1.19 provides an example of image formation. A pinhole camera
is a box with a small hole in the center of one side; the film is placed inside the box on the side
opposite the pinhole. The hole is so small that only a single ray of light, emanating from a point,
can enter it.

The film plane is located a distance d from the pinhole, at z = -d (the minus signs below follow
the standard convention that the camera looks along the negative z-axis). A side view
(Figure 1.20) is used to calculate where the image of the point (x, y, z) falls on the film plane
z = -d.

Using the fact that the two triangles in Figure 1.20 are similar, the y coordinate of the image is at
yp, where

    yp = -y / (z/d) = -yd / z.

A similar calculation, using a top view, yields

    xp = -x / (z/d) = -xd / z.

The point (xp, yp, -d) is called the projection of the point (x, y, z). The field, or angle, of view
of the camera is the angle made by the largest object that the camera can image on its film plane.
It can be calculated with the aid of Figure 1.21. If h is the height of the camera, the angle of view
theta is

    theta = 2 arctan(h / (2d)).
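As a quick numerical check (values assumed for illustration): for a camera of height h = 2 with the film plane at distance d = 1, the angle of view is theta = 2 arctan(2 / (2 * 1)) = 2 arctan(1) = 90 degrees.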
The ideal pinhole camera has an infinite depth of field: Every point within its field of view is in
focus.
The pinhole camera has two disadvantages.


- Because the pinhole is so small (it admits only a single ray from a point source), almost no light enters the camera.
- The camera cannot be adjusted to have a different angle of view.

1.4.2 The Human Visual System


The major components of the visual system are shown in Figure 1.22.

Light enters the eye through the lens and cornea, a transparent structure that protects the eye.
The iris opens and closes to adjust the amount of light entering the eye. The lens forms an
image on a two-dimensional structure called the retina at the back of the eye.

The rods and cones (so named because of their appearance when magnified) are light
sensors and are located on the retina. They are excited by electromagnetic energy in the range
of 350 to 780 nm.

The rods are low-level-light sensors that account for our night vision and are not color
sensitive;

The cones are responsible for our color vision.

The sizes of the rods and cones, coupled with the optical properties of the lens and cornea,
determine the resolution of our visual systems, or our visual acuity.

Resolution is a measure of what size objects we can see. It is a measure of how close we can
place two points and still recognize that there are two distinct points.

The sensors in the human eye do not react uniformly to light energy at different wavelengths.

There are three types of cones and a single type of rod.

Brightness is a measure of how intense we perceive the light emitted from an object to be.
The human visual system does not have the same response to a monochromatic
(single-frequency) red light as to a monochromatic green light. If these two lights were to emit
the same energy, they would appear to us to have different brightness, because of the unequal
response of the cones to red and green light.

The human eye is most sensitive to green light, and least sensitive to red and blue.

Human color-vision capabilities are due to the different sensitivities of the three types of
cones. The major consequence of having three types of cones is that, instead of having to
work with all visible wavelengths individually, three standard primaries can be used to
approximate any color that we can perceive. Consequently, most image-production systems,
including film and video, work with just three basic, or primary, colors.


The initial processing of light in the human visual system is based on the same principles
used by most optical systems.

The human visual system has a back end much more complex than that of a camera or
telescope.

The optic nerves are connected to the rods and cones in an extremely complex arrangement
that has many of the characteristics of a sophisticated signal processor.

The final processing is done in a part of the brain called the visual cortex, where high-level
functions, such as object recognition, are carried out.

1.5 The Synthetic-Camera Model

It is the model underlying modern three-dimensional computer graphics, in which creating a
computer-generated image is treated as similar to forming an image using an optical system.

Consider the imaging system shown in Figure 1.23, containing objects and a viewer. The viewer
is a bellows camera.

In a bellows camera, the lens is located at the front plane and the film plane is located at the back
of the camera. The two are connected by flexible sides. Thus, the back of the camera can be
moved independently of the front, introducing additional flexibility into the image-formation
process.

The image is formed on the film plane at the back of the camera; the synthetic camera emulates
this process to create artificial images.
Basic principles:
1. The specification of the objects is independent of the specification of the viewer. Hence,
within a graphics library, there will be separate functions for specifying the objects and
the viewer.
2. The image can be computed using simple trigonometric calculations in a straightforward
manner, as follows.


Consider the side view of the camera and a simple object in Figure 1.24. The view in part (a)
of the figure is similar to that of the pinhole camera. Whereas with a real camera we would
simply flip the film to regain the original orientation of the object, with the synthetic camera
the flipping can be avoided by a simple trick: another plane is drawn in front of the lens
(Figure 1.24(b)), and we work in three dimensions, as shown in Figure 1.25. The image of a
point on the object is found on this virtual image plane by drawing a line, called a projector,
from the point to the center of the lens, or center of projection (COP).

3. The image size is limited; not all objects can be imaged onto the film plane. A clipping
rectangle, or clipping window, in the projection plane (Figure 1.26) expresses this
limitation. This rectangle acts as a window through which a viewer, located at the center
of projection, sees the world. Given the location of the center of projection, the location
and orientation of the projection plane, and the size of the clipping rectangle, it is
possible to determine which objects will appear in the image.


4. The synthetic-camera model leads to the notion of a pipeline architecture, in which each of
the various stages in the pipeline performs distinct operations on geometric entities and
then passes the transformed objects on to the next stage.
1.6 The Programmer's Interface
An interface provides the ways a user can interact with a graphics system. With completely
self-contained packages, using a mouse and keyboard, menus and icons representing possible
actions can be selected, and the user can guide the software and produce images without having
to write programs.
Application programmer's interface (API): The set of functions that resides in a graphics library
and specifies the interface between an application program and a graphics system is called the
API.

The application programmer sees only the API and is thus shielded from the details of both the
hardware and the software implementation of the graphics library. From the perspective of the
writer of an application program, the functions available through the API should match the
conceptual model that the user wishes to employ to specify images.
1.6.1 The Pen-Plotter Model
Most early graphics systems were 2-D systems. The conceptual model that they used is now
referred to as the pen-plotter model, referencing the output device that was available on these
systems.


A pen plotter produces images by moving a pen held by a gantry, a structure that can move the
pen in two orthogonal directions across the paper. The plotter can raise and lower the pen as
required to create the desired image. Pen plotters are still in use; they are well suited for drawing
large diagrams, such as blueprints.
The process of creating an image with a pen plotter is similar to drawing on a pad of paper: the
user works on a 2-D surface of some size and moves a pen around on this surface, leaving an
image on the paper.
Such a graphics system can be described with two drawing functions:
moveto(x, y);  moves the pen to the location (x, y) on the paper without leaving a mark.
lineto(x, y);  moves the pen to (x, y) and draws a line from the old to the new location of the pen.
Example 1: The following fragment draws a unit square.
moveto(0, 0);
lineto(1, 0);
lineto(1, 1);
lineto(0, 1);
lineto(0, 0);

Example 2: The following fragment repeats the square and adds further segments; the output is
the outline of a cube drawn in an oblique projection.
moveto(0, 0);
lineto(1, 0);
lineto(1, 1);
lineto(0, 1);
lineto(0, 0);
moveto(0, 1);    /* added code */
lineto(0.5, 1.866);
lineto(1.5, 1.866);
lineto(1.5, 0.866);
lineto(1, 0);
moveto(1, 1);
lineto(1.5, 1.866);
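As a sketch of how these two functions might be implemented on a raster system (an assumption for illustration, not the plotter's actual behavior), the only state needed is the current pen position; draw_line stands in for any line rasterizer, such as the DDA routine sketched in Section 1.2:

/* Pen-plotter state: the current pen position. */
static float pen_x = 0.0f, pen_y = 0.0f;

void moveto(float x, float y)      /* reposition the pen without drawing */
{
    pen_x = x;
    pen_y = y;
}

void lineto(float x, float y)      /* draw from the old position, then move */
{
    draw_line(pen_x, pen_y, x, y); /* assumed line-drawing helper */
    pen_x = x;
    pen_y = y;
}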

Drawback of the pen-plotter model: It does not extend well to three-dimensional graphics
systems. E.g., to produce the image of a 3-D object on the 2-D pad of the pen-plotter model, the
positions of the 2-D points corresponding to points on the 3-D object must be specified. These
two-dimensional points are the projections of points in three-dimensional space, and the
mathematical process of determining the projections is an application of trigonometry.

1.6.2 Three-Dimensional APIs

The synthetic-camera model is the basis for a number of popular APIs, including OpenGL.
There are functions to specify:
1. Objects:
Objects are usually defined by sets of vertices. For simple geometric objects, such as line
segments, rectangles, and polygons, there is a simple relationship between a list of vertices and
the object. For more complex objects, there may be multiple ways of defining the object from a
set of vertices. A circle, for example, can be defined by three points on its circumference, or by
its center and one point on the circumference.
Most APIs provide similar sets of primitive objects for the user. These primitives are usually
those that can be displayed rapidly on the hardware. The usual sets include points, line segments,
polygons, and, sometimes, text. OpenGL defines primitives through lists of vertices.
E.g., a triangular polygon can be defined in OpenGL through five function calls:
glBegin(GL_POLYGON);
glVertex3f(0.0, 0.0, 0.0);
glVertex3f(0.0, 1.0, 0.0);
glVertex3f(0.0, 0.0, 1.0);
glEnd();
Note:
- By adding additional vertices, an arbitrary polygon can be defined.
- The same vertices can be used to define a different geometric primitive simply by changing the type parameter GL_POLYGON: the type GL_LINE_STRIP uses the vertices to define two connected line segments, whereas the type GL_POINTS uses the same vertices to define three points (see the snippet after this note).
- Some APIs let the user work directly in the frame buffer by providing functions that read and write pixels.
- Some APIs provide curves and surfaces as primitives; often, however, these types are approximated by a series of simpler primitives within the application program. OpenGL provides access to the frame buffer, curves, and surfaces.
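For instance, reusing the three vertices above with a different type parameter (same legacy immediate-mode style as the fragment above):

glBegin(GL_LINE_STRIP);      /* same vertices, now two connected line segments */
glVertex3f(0.0, 0.0, 0.0);
glVertex3f(0.0, 1.0, 0.0);
glVertex3f(0.0, 0.0, 1.0);
glEnd();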
2. Viewer:
The viewer, or camera, can be defined in a variety of ways. Available APIs differ both in how
much flexibility they provide in camera selection and in how many different methods they allow.
There are four types of necessary specifications:
1. Position: The camera location, usually given by the position of the center of the lens (the
center of projection).
2. Orientation: Once the camera is positioned, a camera coordinate system can be placed
with its origin at the center of projection. The camera can then be rotated independently
around the three axes of this system.

3. Focal length: The focal length of the lens determines the size of the image on the film plane
or, equivalently, the portion of the world the camera sees.
4. Film plane: The back of the camera has a height and a width. On the bellows camera, and in
some APIs, the orientation of the back of the camera can be adjusted independently of the
orientation of the lens.

These specifications can be satisfied in various ways:

1. Developing the specifications for the camera location and orientation uses a series of
coordinate-system transformations. These transformations convert object positions
represented in the coordinate system that specifies object vertices to object positions in a
coordinate system centered at the center of projection. This approach is useful both for
implementation and for obtaining the full set of views that a flexible camera can
provide. (Chapter 5)
2. The synthetic-camera model emphasizes that the objects are specified independently of the
view, whereas the classical viewing techniques stress the relationship between the objects
and the viewer. Thus, the classical two-point perspective of a cube shown below is a
two-point perspective only because of a particular relationship between the viewer and the
planes of the cube.
3. In the OpenGL API, all transformations can be set with complete freedom. In addition,
OpenGL provides helpful convenience functions (a combined usage sketch follows this list).
E.g., the function call
gluLookAt(cop_x, cop_y, cop_z, at_x, at_y, at_z, ...);
points the camera from a center of projection toward a desired point, and the function call
gluPerspective(field_of_view, ...);
selects a lens for a perspective view.
4. However, none of the APIs built on the synthetic-camera model, such as OpenGL, provides
functions for specifying desired relationships between the camera and an object.
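A combined usage sketch in legacy fixed-function OpenGL/GLU (the numeric values here are assumed for illustration): place the camera at (0, 0, 5), aim it at the origin with +y up, and select a 60-degree vertical field of view.

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, 1.0, 0.1, 100.0);  /* fovy, aspect, near, far       */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.0, 0.0, 5.0,                /* eye: the center of projection */
          0.0, 0.0, 0.0,                /* the "at" point being viewed   */
          0.0, 1.0, 0.0);               /* the up direction              */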
Light sources:
Light sources can be defined by their location, strength, color, and directionality. APIs provide a
set of functions to specify these parameters for each source.

Material Properties

Material properties are characteristics, or attributes, of the objects, and such properties are
usually specified through a series of function calls at the time that each object is defined. Both
light sources and material properties depend on the models of light-material interactions
supported by the API.
1.6.3 The Modeling-Rendering Paradigm
It treats image formation as a two-step process:
1. Modeling of the scene: This step designs and positions the objects of the scene. It is highly
interactive, and the details of the images of the objects need not be specified at this stage;
hence this step can be carried out on a graphical workstation.
2. Rendering (production) of the scene: This step renders the designed scene by adding light
sources, material properties, and a variety of other detailed effects to form a
production-quality image. It requires a tremendous amount of computation, and hence a
number-crunching machine.
These two steps differ not only in the optimal hardware they require but also in their required
software.
The interface between the modeler and the renderer can be as simple as a file, produced by the
modeler, that describes the objects and contains the additional information important only to the
renderer, such as light sources, viewer location, and material properties.
Pixar's RenderMan Interface follows this approach and uses a file format that allows modelers to
pass models to the renderer in text format.
The Modeling-Rendering Pipeline

The paradigm suggests that the modeler and the renderer can be implemented with different
software and hardware.
Advantages:
- It allows the development of modelers that, although they use the same renderer, are custom-tailored to particular applications.
- Likewise, different renderers can take the same interface file as input.
- It is even possible, at least in principle, to dispense with the modeler completely and to use a standard text editor to generate an interface file.

Disadvantage: For complex scenes, it is difficult for users to edit lists of information for a
renderer by hand. Hence an interactive modeler is used; such modelers are based on the simple
synthetic-camera model.
Applications: CAD and the development of realistic images, such as for movies.

1.7 Graphics Architectures

A model of early graphics systems:

Early systems used general-purpose computers with the standard von Neumann architecture.
Such computers are characterized by a single processing unit that processes a single instruction
at a time. The display was a CRT that included the necessary circuitry to generate a line segment
connecting two points. The job of the host computer was to run the application program and to
compute the endpoints of the line segments in the image (in units of the display). This
information had to be sent to the display at a rate high enough to avoid flicker. Computers were
so slow that refreshing even simple images, containing a few hundred line segments, would
burden an expensive computer.
Display-processor architecture: It relieves the general-purpose computer of the task of
refreshing the display continuously by incorporating a special display processor. These display
processors had conventional architectures, similar to those of general-purpose computers, but
included instructions to display primitives on the CRT.

The main advantage of the display processor was that the instructions to generate the image
could be assembled once in the host and sent to the display processor, where they were stored in
the display processor's own memory as a display list, or display file. The display processor
would then execute the program in the display list repetitively, at a rate sufficient to avoid
flicker, independently of the host, thus freeing the host for other tasks. This arrangement is
similar to a client-server architecture.

Pipeline Architectures

The major advances in graphics architectures closely parallel the advances in workstations. In
both cases, the ability to create special-purpose VLSI circuits was the key enabling technology.
In addition, the availability of cheap solid-state memory led to the universality of raster displays.
For computer graphics applications, the most important use of custom VLSI circuits has been in
creating pipeline architectures. The concept of pipelining is illustrated in the figure below for a
simple arithmetic calculation.

In this pipeline, there is an adder and a multiplier. If this configuration is used to compute
a + (b * c), the calculation takes one multiplication and one addition, the same amount of work
required if a single processor carries out both operations. However, suppose that the same
computation is to be performed for many values of a, b, and c. The multiplier can pass the result
of its calculation on to the adder and start its next multiplication while the adder
carries out the second step of the calculation on the first set of data. Here, the rate at which data
flows through the system, the throughput of the system, has been doubled.
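As a concrete illustration (the cycle counts are assumed, not from the text): if the multiplier and the adder each take one clock cycle, then computing a + (b * c) for n independent data sets takes about 2n cycles on a single processor, but only about n + 1 cycles in the two-stage pipeline, because the two units work on different data sets at the same time.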
Pipelines can be constructed for more complex arithmetic calculations that afford even greater
increases in throughput. There is no point in building a pipeline, however, unless the same
operation is to be performed on many data sets.
Pipeline architecture suits computer graphics precisely because, in computer graphics, large sets
of vertices must be processed in the same manner.

Geometric Pipeline and the Four Major Steps in Image Processing:

Suppose a set of geometric primitives is defined by a set of vertices. The set of primitive types
and vertices can be referred to as the geometry of the data. In a complex scene there may be
thousands, even millions, of vertices that define the objects. All these vertices must be processed
in a similar manner to form an image in the frame buffer. Processing the geometry of objects to
obtain an image can be pipelined as follows.
(Figure: the geometric pipeline: vertex processing, clipping and primitive assembly,
rasterization, fragment processing.)
This pipeline comprises four major steps in the imaging process:


1. Vertex Processing:
Each vertex is processed independently. This block performs two major functions:

Transformation:
Representing the same object in different coordinate systems requires transformations. Each
change of coordinate systems can be represented by a matrix, and successive changes can be
represented by multiplying, or concatenating, the individual matrices into a single matrix (a
small sketch of this step follows below).
Because multiplying one matrix by another yields a third matrix, a sequence of transformations
is an obvious candidate for a pipeline architecture. In addition, because the matrices used in
computer graphics are always small (4 x 4), there is an opportunity to use parallelism within the
transformation blocks in the pipeline.
Eventually, after multiple stages of transformation, the geometry is transformed by a projection
transformation. This step can also be implemented using 4 x 4 matrices, and thus projection fits
in the pipeline.
In general, 3-D information should be kept as long as possible as objects pass through the
pipeline. Further, there is a variety of projections that can be implemented.
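A minimal sketch of the concatenation step in C (row-major storage; the type and function names are illustrative assumptions):

/* Concatenate two 4 x 4 transformation matrices: r = a * b. */
typedef struct { float m[4][4]; } mat4;

mat4 mat4_multiply(const mat4 *a, const mat4 *b)
{
    mat4 r;
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++) {
            r.m[i][j] = 0.0f;
            for (int k = 0; k < 4; k++)
                r.m[i][j] += a->m[i][k] * b->m[k][j];  /* dot of row i and column j */
        }
    return r;
}

Applying the single concatenated matrix to every vertex is what makes this stage such a good fit for a pipeline: the per-vertex work is identical and independent.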

Assignment of vertex color:
The assignment of vertex colors can be as simple as the program specifying a color, or as
complex as the computation of a color from a physically realistic lighting model that
incorporates the surface properties of the object and the characteristic light sources in the scene.
2. Primitive Assembly and Clipping:
Clipping is done because of the limitation that no imaging system can see the whole world at
once. E.g., cameras have film of limited size, and their fields of view can be adjusted by selecting
different lenses. The equivalent property is obtained in the synthetic-camera model by
considering a clipping volume, such as the pyramid in front of the lens. The projections of
objects in this volume appear in the image; those outside do not, and are said to be clipped out.
Objects that straddle the edges of the clipping volume are partly visible in the image.
Clipping must be done on a primitive-by-primitive basis rather than on a vertex-by-vertex basis.
Thus, sets of vertices must be assembled into primitives, such as line segments and polygons,
before clipping can take place within this stage of the pipeline. Consequently, the output of this
stage is a set of primitives whose projections can appear in the image.
Clipping can occur at various stages in the imaging process. For simple geometric objects,
whether or not an object is clipped out can be determined from its vertices. Because clippers
work with vertices, clippers can be inserted with transformers into the pipeline. Clipping can
even be subdivided further into a sequence of pipelined clippers.
3. Rasterization, or Scan Conversion:
The primitives that emerge from the clipper are still represented in terms of their vertices and
must be processed further to generate pixels in the frame buffer. E.g., if three vertices specify a
triangle filled with a solid color, the rasterizer must determine which pixels in the frame buffer
are inside the polygon. (Chapter 8 discusses rasterization for line segments and polygons.) The
output of the rasterizer is a set of fragments for each primitive. A fragment can be thought of as a
potential pixel that carries with it information, including its color and location, that is used to
update the corresponding pixel in the frame buffer. Fragments can also carry depth information
that allows later stages to determine whether a particular fragment lies behind other previously
rasterized fragments for a given pixel.
4. Fragment Processing:
This step updates the pixels in the frame buffer for the fragments generated by the rasterizer. If
the application generated 3-D data, some fragments may not be visible because the surfaces that
they define lie behind other surfaces. The color of a fragment may be altered by texture mapping
or bump mapping. The color of the pixel that corresponds to a fragment can also be read from
the frame buffer and blended with the fragment's color to create translucent effects.
1.9 Performance Characteristics
There are two fundamentally different types of processing in a graphics architecture:

Front end: geometric processing, based on processing vertices through the various clippers and
transformers. This processing is ideally suited for pipelining and usually involves floating-point
calculations.
- The geometry engine developed by Silicon Graphics was a VLSI implementation of many of these operations in a special-purpose chip that became the basis for a series of fast graphics workstations.
- Later, floating-point accelerator chips, such as the Intel i860, put 4 x 4 matrix-transformation units on the chip, reducing a matrix multiplication to a single instruction.
- Graphics workstations and add-on graphics boards use application-specific integrated circuits (ASICs) that perform many of the graphics operations at the chip level.
Pipeline architectures are the dominant type of high-performance graphics system. As more
boxes are added to the pipeline, however, it takes more time for a single datum to pass through
the system.
This time is called the latency of the system; latency must be balanced against increased
throughput in evaluating the performance of a pipeline.

Back end: direct manipulation of bits in the frame buffer. Beginning with rasterization, and
including many other features, this processing works directly on the bits in the frame buffer. It is
fundamentally different from front-end processing and can be implemented most effectively
using architectures that can move blocks of bits quickly.
The overall performance of a system is characterized by how fast geometric entities are moved
through the pipeline and by how many pixels per second can be altered in the frame buffer.

Chapter 2

Graphics Programming
- A programming-oriented approach is used.
- A minimal application programmer's interface (API) is used, which allows many interesting two- and three-dimensional problems to be programmed and familiarizes the reader with the basic graphics concepts.
- 2-D graphics is regarded as a special case of 3-D graphics. Hence the 2-D code will execute without modification on a 3-D system.
- A simple but informative problem is used: the Sierpinski gasket.
- 2-D programs that do not require user interaction can be written with the knowledge presented here.
- The chapter concludes with an example of a 3-D application.

2.1 The Sierpinski Gasket

It is an interesting shape that has a long history and is of interest in areas such as fractal
geometry. The Sierpinski gasket is an object that can be defined recursively and randomly; in the
limit, however, it has properties that are not at all random.

Consider three vertices in the plane. Assume that their locations, as specified in some convenient
coordinate system, are (x1, y1), (x2, y2), and (x3, y3). The construction proceeds as follows:
1. Pick an initial point at random inside the triangle.
2. Select one of the three vertices at random.
3. Find the point halfway between the initial point and the randomly selected vertex.
4. Display this new point by putting some sort of marker, such as a small circle, at its location.
5. Replace the initial point with this new point.
6. Return to step 2.
Thus, each time a point is generated, it is displayed on the output device. In the figure, p0 is the
initial point, and p1 and p2 are the first two points generated by the algorithm. A code sketch
follows below.
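A minimal sketch of the inner loop in legacy OpenGL (the triangle vertices, the starting point, and the surrounding windowing setup are assumptions for illustration):

#include <GL/gl.h>
#include <stdlib.h>   /* rand() */

typedef struct { float x, y; } point2;

/* Generate and display npoints points of the gasket. */
void sierpinski(point2 v[3], int npoints)
{
    point2 p = { 0.25f, 0.50f };          /* assumed to lie inside the triangle */
    glBegin(GL_POINTS);
    for (int i = 0; i < npoints; i++) {
        int j = rand() % 3;               /* step 2: pick a vertex at random    */
        p.x = (p.x + v[j].x) / 2.0f;      /* step 3: halfway point              */
        p.y = (p.y + v[j].y) / 2.0f;
        glVertex2f(p.x, p.y);             /* step 4: display the new point      */
    }
    glEnd();
    glFlush();
}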

