

Applications of Computer Graphics
Graphics System
Image Representation
Output Primitives
2D Transformations
2D Viewing

Dot Matrix Printers
Ink-Jet Printers
Laser Printers
3D Printers

Dot Matrix Printers: use a print head with 7 to 24 pins that strike an inked ribbon (single or multiple colors).

Ink-Jet Printers: printers in which the characters are formed by minute jets of ink (small droplets of colored ink fired at the paper).

Laser Printers: printers linked to a computer that produce printed material by using a laser to form a pattern of electrostatically charged dots on a light-sensitive drum, which attract toner (dry ink powder). The toner is transferred to a piece of paper and fixed by a heating process.

3D Printers: machines that create a physical object from a three-dimensional digital model, typically by laying down many thin layers of a material in succession.
In a raster system, the graphics system
takes pixels from the frame buffer and
displays them as points on the surface of
the display.
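The frame-buffer idea above can be sketched as a small 2D array of RGB values; the raster display simply reads these values out and shows each one as a point. This is a hypothetical minimal example, not any real graphics API:

```python
# Minimal frame-buffer sketch: the raster system reads pixel colors
# from a 2D array and displays each one as a point on the screen.
WIDTH, HEIGHT = 8, 4

# One (r, g, b) triple per pixel; start with an all-black buffer.
frame_buffer = [[(0, 0, 0) for _ in range(WIDTH)] for _ in range(HEIGHT)]

def set_pixel(x, y, color):
    """Write a color into the frame buffer at column x, row y."""
    frame_buffer[y][x] = color

set_pixel(3, 1, (255, 0, 0))   # a red pixel
print(frame_buffer[1][3])      # -> (255, 0, 0)
```

A real frame buffer lives in dedicated video memory and is scanned out by the display hardware, but the indexing idea is the same.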

Cathode Ray Tube (CRT)
Liquid Crystal Display (LCD)
Light Emitting Diode (LED) Display
* When electrons strike the phosphor coating on the tube, light is emitted.

* The direction of the beam is controlled by two pairs of deflection plates.

* Light appears on the surface of the CRT when a sufficiently intense beam
of electrons is directed at the phosphor.
* The screen is coated with phosphor; a color CRT uses three phosphor colors.

* For a color monitor, three electron guns light up the red, green, and blue phosphors.
* Liquid crystal displays use small flat chips which change their
transparency properties when a voltage is applied.

* LCD elements are arranged in an n x m array called the LCD matrix

* LCD elements do not emit light, but use backlights behind the
LCD matrix

* Color is obtained by placing filters in front of each LCD element

* Image quality is dependent on viewing angle.

An LCD is also divided into pixels, but instead of an electron gun firing
at a screen, it has cells that either allow light to pass through or
block it.

* There are two primary types of input devices:
* Pointing devices and Keyboard devices.
* The pointing device allows the user to indicate a position on the
screen and almost always incorporates one or more buttons to
allow the user to send signals to the computer (mouse, joystick,
touch screen and spaceballs).
* The keyboard device is almost always a physical keyboard but
can be generalized to include any device that returns character
input to the program.
* Special Purpose
  * Office packages, e.g., Word, Excel
  * Animation and Simulation Packages, e.g., Maya
  * Visualization Packages, e.g., GraphViz
  * Painting Packages, e.g., MSPaint
* General Purpose
  * Programming APIs (Application Program Interfaces)
  * Java2D and Java3D
* Programmer sees the graphics system through an
interface: the Application Programmer Interface (API)

Application
High-Level API (Java3D)
Low-Level API (OpenGL)
Hardware and software
Output Devices / Input Devices

* Functions that specify what we need to form an image

* Objects: are usually defined by sets of vertices.

* For simple geometric objects such as line segments, rectangles, and
polygons, there is a simple relationship between a list of vertices, or
positions in space, and the object.

* For more complex objects, there may be multiple ways of defining the
object from a set of vertices. A circle, for example, can be defined by
three points on its circumference, or by its centre and one point on the
circumference.
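The centre-plus-point definition of a circle can be made concrete: the radius is just the distance between the two given positions. A small sketch (hypothetical function name):

```python
import math

def circle_from_centre_and_point(cx, cy, px, py):
    """Derive a circle's radius from its centre (cx, cy) and one
    point (px, py) on its circumference."""
    return math.hypot(px - cx, py - cy)

r = circle_from_centre_and_point(0.0, 0.0, 3.0, 4.0)
print(r)  # -> 5.0
```

The three-points-on-the-circumference definition requires solving for the circumcentre instead, which illustrates the point that different vertex sets can describe the same object.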
* Viewer/Camera: can be defined by giving the necessary specifications below.

* Position: The camera location is usually given by the position of the
center of the lens, which is the center of projection (COP).

* Orientation: Once we have positioned the camera, we can place a
camera coordinate system with its origin at the center of projection. We
can then rotate the camera independently around the three axes of this
coordinate system.

* Focal length: The focal length of the lens determines the size of the
image on the film plane or, equivalently, the portion of the world the
camera sees.
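The camera specifications above can be gathered into a single record. A minimal sketch, assuming a hypothetical `Camera` container (not part of any real graphics library):

```python
from dataclasses import dataclass

@dataclass
class Camera:
    """Hypothetical container for the camera specifications above."""
    position: tuple       # centre of projection (COP), in world coordinates
    orientation: tuple    # rotation angles about the camera's three axes
    focal_length: float   # controls how much of the world is imaged

cam = Camera(position=(0.0, 0.0, 5.0),
             orientation=(0.0, 0.0, 0.0),
             focal_length=50.0)
```

A rendering API would turn these values into the viewing transformation applied to every vertex.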
* Much of the work in the pipeline is in converting object
representations from one coordinate system to another
* World coordinates
* Camera coordinates
* Screen coordinates
* Every change of coordinates is equivalent to a matrix multiplication
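The matrix view of a coordinate change can be sketched in homogeneous coordinates. Here a world-space point is moved into the frame of a camera sitting at (0, 0, 5) in world coordinates (illustrative values only):

```python
# Each change of coordinates is a matrix multiplication in homogeneous
# coordinates: a 4x4 matrix applied to a 4-component point.

def mat_vec(m, v):
    """Multiply a 4x4 matrix (list of rows) by a 4-component vector."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

# World -> camera: translate by the negated camera position (0, 0, 5).
world_to_camera = [
    [1, 0, 0,  0],
    [0, 1, 0,  0],
    [0, 0, 1, -5],
    [0, 0, 0,  1],
]

p_world = [1, 2, 5, 1]            # homogeneous point in world coordinates
p_camera = mat_vec(world_to_camera, p_world)
print(p_camera)                   # -> [1, 2, 0, 1]
```

Camera-to-screen is just another such matrix, so the whole pipeline composes into a product of matrices.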
* Just as a real camera cannot see the whole world, the
virtual camera can only see part of the world space
* Objects that are not within this volume are said to be
clipped out of the scene
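The clipping test above reduces, for a single point, to checking whether it lies inside the view volume. A sketch assuming an axis-aligned volume from -1 to 1 on each axis (a common normalized convention, used here only for illustration):

```python
def inside_view_volume(p, lo=(-1, -1, -1), hi=(1, 1, 1)):
    """True if point p = (x, y, z) lies inside the axis-aligned
    view volume bounded by lo and hi on each axis."""
    return all(lo[i] <= p[i] <= hi[i] for i in range(3))

print(inside_view_volume((0.5, 0.0, -0.5)))  # -> True  (kept)
print(inside_view_volume((2.0, 0.0, 0.0)))   # -> False (clipped out)
```

Real clippers also handle primitives that straddle the boundary, splitting a line or polygon at the volume's faces rather than discarding it whole.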

* Must carry out the process that combines the 3D viewer
with the 3D objects to produce the 2D image
* If an object is visible in the image, the appropriate pixels
in the frame buffer must be assigned colors
* Vertices assembled into objects
* Effects of lights and materials must be determined
* Polygons filled with interior colors/shades
* Must also determine which objects are in front (hidden
surface removal)
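Hidden surface removal is commonly done with a depth (z) buffer alongside the color buffer: a pixel's color is overwritten only when the incoming fragment is closer to the viewer. A minimal sketch of that rule:

```python
# Depth-buffer hidden surface removal: keep, per pixel, the color of
# the nearest fragment seen so far.
INF = float("inf")
W, H = 4, 3
color_buffer = [[(0, 0, 0)] * W for _ in range(H)]
depth_buffer = [[INF] * W for _ in range(H)]

def write_fragment(x, y, z, color):
    """Accept the fragment only if it is nearer than the stored depth."""
    if z < depth_buffer[y][x]:
        depth_buffer[y][x] = z
        color_buffer[y][x] = color

write_fragment(1, 1, 5.0, (255, 0, 0))   # far red fragment
write_fragment(1, 1, 2.0, (0, 255, 0))   # nearer green fragment wins
write_fragment(1, 1, 9.0, (0, 0, 255))   # farther blue fragment is hidden
print(color_buffer[1][1])                # -> (0, 255, 0)
```

Because the test is per pixel, objects can be drawn in any order and the front-most surface still ends up in the frame buffer.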