
WHITE PAPER:

Real-Time Rendering: An Introduction

2017 © Ventuz Technology AG



An Introduction to Real-Time Rendering

The world of real-time 3D graphics is a daunting one, both for beginners and 3D professionals. It has its unique rules and processes that at first seem to be the same as in 3D movie production or still rendering, but the differences are vast and affect everything from planning a presentation up to the final rendering result.

While this document does not contain any Ventuz-specific information and cannot deal with 3D rendering in all its complexity, knowledge of the inner workings of a modern graphics card will help to understand what the possibilities and limitations of a 3D real-time presentation are.

For the sake of conciseness, various oversimplifications have been made and, while some statements may even seem utterly wrong to the experienced user, ignoring exceptions to the rule will help new users not to be overburdened by a flood of information. The interested reader is referred to classic books such as Computer Graphics: Principles and Practice by Foley, van Dam et al. or Real-Time Rendering by Möller and Haines.



In 3D graphics, especially
in real-time 3D graphics,
everything is about
perception. What an
object appears to be
is more important than
what it really is. Plus, in
the end, everything boils
down to a bunch of
mathematics anyway.

Vertices and Triangles


All modern graphics cards create 3D images in basically the same fashion. Objects are specified by describing their surfaces using points in 3D space and triangles connecting those points.

There is no volume, weight or other physical property to an object in 3D graphics. Objects may appear solid, but in the end it is all just surfaces. Since everything is constructed out of triangles, there also simply is no "smooth" surface. However, a sphere can appear to be smooth if enough tiny triangles are used such that the user cannot perceive where one triangle ends and the next begins.

The purpose of a graphics card is to create a 2D image that can be shown on a screen, and that of course as fast as possible. Over the years, two major techniques have been established: Raytracing and Rasterization. The former is primarily used in movie production or anywhere else where the time to create an image is less important than its quality.

The name Raytracing comes from the process of casting rays through the 3D scene in order to calculate what the user sees at a given pixel from his point of view. In its simplest form, such a viewing ray may bounce between various objects until it hits a light source, at which point the algorithm can calculate what color the pixel that cast the ray should be. While this can be quite computationally intense, raytracing can easily deal with shadows, reflections, transparencies and many other real-world phenomena.

Although many optimizations have been made over the years and processor power has increased tremendously, raytracing is still too performance-intensive to create 60 or more frames per second from a scene that constantly changes.

2017 © Ventuz Technology AG 3


Each triangle is processed independently of all other triangles in a scene. This reduces the required processing power per triangle to an almost trivial amount, but it also ignores all global effects.

While Raytracing tries to mimic real-world physics, Rasterization takes the completely opposite approach: it projects triangles to the screen instead of searching through the 3D space. The vertices of a triangle run through a series of matrices to simulate the effects of a camera such as perspective shortening, thus transforming them from 3D space to a 2D space.

The main part of this projection process is split into three matrices:

World: Used to position an object in the scene. Most objects are modeled to have their own local coordinate system with the origin, for example, set to the center of gravity. Using the world matrix, the object can be translated, rotated and scaled without having to change the vertex coordinates stored in the mesh.

View: Contains the position and orientation of the camera in the scene. Instead of moving objects in the world to have a specific position in the rendered image, the world stays fixed and the camera is moved, much like in a real-life photo shoot.

Projection: Simulates the effect of perspective shortening (i.e. objects farther away appear smaller). This is often defined by specifying a view angle (also called field of view). When a more schematic look is required, this can also be an orthogonal instead of a perspective projection, which does not have any perspective shortening at all.

Once the 2D vertex positions have been calculated, the triangle can be converted to pixels by connecting the vertices and then filling the interior of the triangle. This step of going from a continuous 2D space to pixels is what gives the whole process the name Rasterization, as the 2D triangle is discretized to the 2D raster of pixels.

Rasterization will generate a number of Fragments, which can be thought of as potential pixel changes. A fragment contains the 2D coordinates of the pixel it belongs to as well as the color of the triangle at that point. Before assigning that color to the pixel, the fragment has to run through a number of tests, each of which can reject the fragment and thus prevent the triangle from changing that particular pixel. This can be used to mask off certain parts of the screen or to prevent overdraw to simulate occlusion effects, as will be described later.

This process is very efficient, and today graphics cards are capable of doing this millions or even more times each second. The downside is that any relation to the physical world is lost and has to be created artificially. Or to be more precise, the perception of reality has to be created.

During the evolution of Rasterization, more and more complex phenomena have been adapted to somehow fit into its world. But they have been integrated as more of an afterthought to the pre-existing technique.
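The world → view → projection chain can be sketched with 4×4 matrices. This is a simplified illustration: the perspective formula below follows a common Direct3D-style convention and is an assumption, not a description of any specific engine's internals.

```python
import math

def mat_mul(a, b):
    """4x4 matrix product (row-major nested lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(m, v):
    """Apply a 4x4 matrix to a homogeneous (x, y, z, w) vertex."""
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def perspective(fov_deg, aspect, near, far):
    """Maps the view frustum to the unit cube; creates perspective shortening."""
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    q = far / (far - near)
    return [[f / aspect, 0, 0, 0],
            [0, f, 0, 0],
            [0, 0, q, -near * q],
            [0, 0, 1, 0]]

world = translation(0.0, 0.0, 5.0)   # place the object 5 units into the scene
view  = translation(0.0, 0.0, 0.0)   # camera at the origin (identity view)
proj  = perspective(90.0, 16 / 9, 0.1, 100.0)

wvp = mat_mul(proj, mat_mul(view, world))      # combined once per object
x, y, z, w = transform(wvp, [1.0, 1.0, 0.0, 1.0])
ndc = (x / w, y / w, z / w)                    # perspective divide
print(ndc)   # 2D screen position (x, y) plus the depth value z
```

Note that the three matrices are multiplied together once and then applied to every vertex, which is part of why the per-vertex work is so cheap.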



Visibility and Shadows
One of the earliest problems of Rasterization was that of occlusion.
When viewing two objects in space, one will occlude the other if it is
in front from the viewer's perspective. This is one of the most basic
and most important principles of how humans grasp the spatial
relationship between objects.
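One answer, detailed in the Z-Buffer subsection that follows, is a per-pixel depth test. A minimal sketch (hypothetical data structures; real hardware uses fixed-size buffers, not dictionaries):

```python
# Per-pixel occlusion without sorting: keep the nearest depth seen so far.
depth_buffer = {}          # (x, y) -> nearest depth rendered so far
color_buffer = {}          # (x, y) -> color of the surviving fragment

def write_fragment(x, y, depth, color):
    """Accept the fragment only if it is closer than what is already there."""
    if depth < depth_buffer.get((x, y), float("inf")):
        depth_buffer[(x, y)] = depth
        color_buffer[(x, y)] = color

# Rendering order no longer matters: draw the near triangle first...
write_fragment(10, 20, depth=2.0, color="near/red")
# ...then a farther fragment for the same pixel, which gets rejected.
write_fragment(10, 20, depth=7.0, color="far/blue")
print(color_buffer[(10, 20)])   # the nearer fragment wins either way
```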
Z-Buffer

As Rasterization works on a per-triangle basis, the spatial relationship is completely ignored by the Rasterization itself. If two triangles occupy the same 2D pixel, the one that is drawn later will overwrite the result of the other. As a result, correct occlusion depends on the rendering order: if the object farther away from the viewer is drawn first, everything ends up correct; if it is drawn last, it will overdraw objects that are in front of it. Sorting all triangles by their depth is performance-intensive, therefore a so-called Depth Buffer was introduced and is now standard everywhere.

The Depth Buffer stores an additional value for each pixel, specifying the depth (from the viewer's perspective) of the pixel that has last been rendered to this pixel (this is called the Z-Value). Before changing the color of a pixel, Rasterization checks the depth value of the pixel as calculated as part of the triangle against the value already stored in the Depth Buffer and skips the pixel if it is farther away than the memorized value. This is easy to compute, creates correct occlusion results, and the rendering order of triangles becomes irrelevant again.

It does, however, create a completely new type of problem: Z-fighting. Since the Z-Buffer has only limited floating-point precision (usually 24 bits), the viewing field of a camera has to be artificially restricted to a finite interval that can be mapped to a depth value between zero and one. This is done by specifying the near and far plane, which restrict the viewing field to a so-called view frustum. All triangles closer than the near plane or farther than the far plane will not be rendered.

The obvious solution is to choose a near plane very close to the camera and a far plane very, very far away. This however increases the likeliness of vertices at different Z coordinates being mapped to the same depth value (if the precision of the Z-buffer does not suffice to



distinguish between the two values). This can cause visual artifacts where parts of an object shine through another more or less randomly. This is called Z-fighting: the decision which triangle occupies the pixel changes based on the camera position (and not their Z positions) as various rounding errors kick in. For this reason, a compromise has to be made between a depth range that does not omit usually visible or important objects but also does not cause visual artifacts.

Shadows

Casting shadows, which is basically the same problem as occlusion but from the viewpoint of the light source, is to this day still a hard problem in Rasterization. While there are dozens if not hundreds of algorithms to add shadows to the Rasterization process, the amount of computation to create a visually pleasing shadow is immense. Every object, even the ones that are not visible from the camera's point of view, can potentially cast a shadow.

To make matters worse, what we know as shadows in the real world is not a simple "light is blocked or not blocked" question but the result of light bouncing off surfaces multiple times, reaching areas which are not directly visible from the light source. This is what Raytracing tries to simulate and what makes it so complex. It is also a far stretch from splatting triangles to the screen the way Rasterization does it.

Engineers have thought of several ways to solve these issues. One of them is using an Ambient Light that applies a certain color to every triangle's face no matter its position, rotation or the light sources. This way, when a source's light does not reach a triangle, it is not completely black but still has a small amount of brightness, simulating the indirect rays reflected from other objects. Another, newer approach is using Global Illumination algorithms.

Material and Lighting

Per-Pixel vs Per-Vertex

When Rasterization was first developed, lighting did not really play any role. At that time, computer screens only had two colors and actually seeing a 3D line on a display was revolutionary. Even when processing power improved, shading a triangle meant assuming a light source is an infinitely small point, and computing a pixel color from that information was only done for the vertices of a triangle and then interpolated over its face (per-vertex lighting).

Today it is finally possible to do this computation per pixel on the screen (per-pixel lighting) so that the lighting is computed as precisely as needed. Still, it must be taken into account that this is very performance-intensive, and sometimes one could still be better off using per-vertex lighting.

Lighting computation in Ventuz is done per pixel, not per vertex. It requires each vertex to have a so-called normal, a vector that is orthogonal to the surface and of unit length. Most 3D modeling programs calculate these automatically during creation of an object. Instead of calculating the lighting value per vertex and then interpolating it, the normals are interpolated and then the lighting model is applied to each pixel filled by the triangle.

Lighting Models

The other information required is the position and type of one or more light sources. The most common light sources are point lights (the light emits from an infinitely small "lamp" at a specific position in 3D space) and directional lights (without a position but with a constant direction, as from a far-away light source like the sun).

The graphics card uses some simple vector math to change the brightness and color based on the relation of the vector from the light to the object, the normal, and the vector from the viewer to the object.

There are different approaches to calculating the color of a pixel depending on this information, each one having different kinds of adjustable parameters. The most common ones are Gouraud and Phong, which have been used in the industry for many years. Rather new, but no less common today, is Physically Based Rendering.
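That vector math can be sketched for the two common light types with a Lambert-style diffuse term (an illustrative sketch; the function and field names are made up, not an engine API):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def light_dir(surface_pos, light):
    """Direction from the surface point towards the light."""
    if light["type"] == "directional":
        # No position: the direction is the same everywhere (sun-like).
        return normalize(tuple(-c for c in light["direction"]))
    # Point light: the direction depends on where the surface point is.
    return normalize(tuple(l - p for l, p in zip(light["position"], surface_pos)))

def diffuse_brightness(normal, surface_pos, light):
    """Brighter facing the light, darker on the back side (clamped at zero)."""
    return max(0.0, dot(normalize(normal), light_dir(surface_pos, light)))

sun  = {"type": "directional", "direction": (0.0, -1.0, 0.0)}  # shining straight down
lamp = {"type": "point", "position": (0.0, 5.0, 0.0)}

print(diffuse_brightness((0.0, 1.0, 0.0), (0.0, 0.0, 0.0), sun))   # facing the sun: fully lit
print(diffuse_brightness((0.0, -1.0, 0.0), (0.0, 0.0, 0.0), sun))  # back side: zero
```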



Ventuz uses a very similar approach to PBR. Although there are a lot of differences in the behavior and inputs to a Ventuz Material, in the end most real-life materials can be built with the help of the Ventuz Engine.

While the Gouraud and Phong lighting models have parameters that would never be used to describe materials in the real world, PBR's approach is to do exactly this, making the work of an artist a lot easier: if he wants to create a stone surface, he just has to look up its physical values and type them in. No feeling of faking everything anymore.

Phong and Gouraud

Phong uses four common terms that are combined into the final color:

Ambient: This term is independent of the light source and the viewer position. It simulates the ambient light (the amount of background light) that comes from bouncing off other surfaces and is usually rather weak.

Diffuse: Varies the brightness of the surface based on the light direction and the surface normal. This gives an object its characteristic shape, brighter towards the light source and darker on the back side.

Specular: Simulates the bright highlight that occurs where the reflection of the light source is visible.

Emissive: Simulates light that comes from the object itself, not a light source. Since this can neither simulate a real-world glow nor the color bleeding onto other objects that a real-world emissive light source would have, this term is rarely used.

All terms combined create a very characteristic and somewhat cheap-looking plastic look and feel. Therefore most objects are usually textured to increase their realism.

Gouraud uses a similar approach but does not regard specular highlights. As a consequence, all parameters regarding specularity are omitted in this lighting model.

Physically Based Rendering

PBR uses a more realistic approach to render materials. Every engine uses slightly different algorithms, so the parameters may also differ from engine to engine; this is because the model is still in a relatively early stage of development.

Commonly, a physical material is described using four parameters:

Base Color: Defines the basic color of the material, similar to the Diffuse parameter in Phong shading (but not the same!). Usually a three-dimensional vector (RGB).

Roughness: Determines how rough a material is, affecting the sharpness of reflections and specular highlights. Usually a floating point number between 0 and 1.

Specularity: Defines the color of specular highlights on the surface.

Metalness: Although this parameter is not mandatory for the model to work, many systems implement it as a direct input. It affects how metal-like the material looks in the end: metals tend to be more reflective, have tinted reflections and no base color (black). Insulators in contrast are usually less reflective (plastic is an exception) and have untinted reflections and a base color. Ventuz does not use this parameter in its lighting model but, as stated before, it is not mandatory in order to create all kinds of materials, since reflectivity, specular color and base color are all controllable inputs as well.

Often, parameters like an Ambient or Emissive Color are used as well, since they can be added to this model easily and offer very direct ways to affect the rendering of a material.
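The four Phong terms can be combined as sketched below. This computes a single scalar brightness with made-up parameter values; real implementations work per color channel and clamp the result:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(c / n for c in v)

def phong(normal, to_light, to_viewer, ambient=0.1, diffuse=0.7,
          specular=0.4, shininess=32, emissive=0.0):
    n, l, v = normalize(normal), normalize(to_light), normalize(to_viewer)
    # Reflect the light direction at the normal for the highlight term.
    r = tuple(2 * dot(n, l) * nc - lc for nc, lc in zip(n, l))
    diff = diffuse * max(0.0, dot(n, l))                  # facing the light?
    spec = specular * max(0.0, dot(r, v)) ** shininess    # mirror-like highlight
    return ambient + diff + spec + emissive

# Surface facing up, light and viewer directly above: full diffuse plus highlight.
print(phong((0, 1, 0), (0, 1, 0), (0, 1, 0)))
# Light behind the surface: only the weak ambient term remains.
print(phong((0, 1, 0), (0, -1, 0), (0, 1, 0)))
```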



Textures

One development which had a profound impact on 3D graphics, and real-time rendering in particular, was the introduction of Textures. A texture is a 2D image which is mapped to the object's surface and used during Rasterization as an input property for one of the lighting model's parameters to generate the final color of a fragment. Instead of modeling a wall of bricks and assigning it a proper color, one can use a simple rectangle with an image of a wall of bricks.

The first and still dominant application of textures is to give a surface the look of a certain material (leather, water, clouds, etc.), reducing the number of actual triangles required to achieve the same effect. For example, the stitches on a car seat do not have to be modeled but can be "overlaid".

The basic principle of a texture is very simple. Each vertex of a triangle is assigned a new set of 2D coordinates (called U/V to distinguish them from X/Y/Z) to describe which part of the texture should be mapped to the triangle. Usually, the upper left corner of an image is assigned 0/0 and the lower right 1/1, but it is also common to use mapping matrices to modify (scale, rotate and translate) the original U/V-coordinates during Rasterization. It is the responsibility of the designer who creates the original model to assign proper U/V-coordinates, the same way he decides the position of vertices in 3D space.

Over the years, various other useful applications of textures have been developed, the most crucial ones for real-time rendering being Shadow/Highlight Baking, Ambient Occlusion and Normal Maps.

Since the pixels in a texture can have any color, the designer is not limited to using them to indicate a material. He can also brighten or darken individual pixels to put highlights or shadows onto the surface. As long as an object does not move relative to the light source or deform, the lighting information will be plausible.

The great benefit of this is not only that the lighting does not have to be computed during rendering, but also that a more complex approach (such as raytracing) can be used to pre-compute high-quality shadows that would not be possible at all with Rasterization. Another use that greatly increases the visual quality is baking so-called Ambient Occlusion: the shadows caused by the object shadowing itself, usually where there will be less light regardless of the light source (such as cracks, indentations and joins), are baked into the model as a texture.

Lastly, Normal Maps can be used to change the normals applied to the vertices of a geometry pixel-wise. The resulting normals will then be used by the lighting model instead of the interpolated ones. This way it is possible to make details on the surface (like the stitches on a car seat) look even more realistic, since the lighting can still be affected by the structure of the detail.

A similar idea is used for user interface elements. Why model a button, set proper light sources and assign correct materials when an image of a button can be created in Photoshop and then mapped to a rectangle? Making the button glow takes two clicks in Photoshop, where it takes quite sophisticated techniques to integrate it into the Rasterization process. If it looks right and good, why bother with complex techniques?
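The U/V lookup into the image can be sketched as a nearest-pixel fetch (a hypothetical helper; real hardware adds filtering, mipmapping and configurable wrap modes):

```python
# A tiny 2x2 "texture": rows of RGB tuples, row 0 at the top.
texture = [
    [(255, 0, 0), (0, 255, 0)],      # top-left red, top-right green
    [(0, 0, 255), (255, 255, 255)],  # bottom-left blue, bottom-right white
]

def sample_nearest(tex, u, v):
    """Map u/v in [0, 1] (0/0 = upper left, 1/1 = lower right) to a texel."""
    height, width = len(tex), len(tex[0])
    x = min(int(u * width), width - 1)    # clamp so u == 1.0 stays inside
    y = min(int(v * height), height - 1)
    return tex[y][x]

print(sample_nearest(texture, 0.0, 0.0))   # upper left texel
print(sample_nearest(texture, 1.0, 1.0))   # lower right texel
```

During Rasterization the same lookup happens per fragment, with the u/v values interpolated across the triangle from its three vertices.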

Going further, artists even use textures to "bake geometry". If an object is far enough away, the difference between a still image mapped to a rectangle and a rendered object will be negligible. This is often used to render trees, clouds, grass or similar objects which are part of the background but would take thousands of triangles to model.

Nowadays, the use of textures is so common that scenes are often limited by the texture memory available on the graphics card rather than the number of triangles that can be rendered each frame.
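Why texture memory runs out quickly is simple arithmetic. The sketch below uses uncompressed RGBA figures and the common rule of thumb that a full mipmap chain adds about one third; actual consumption depends on the formats and compression the engine uses:

```python
def texture_bytes(width, height, bytes_per_pixel=4, mipmaps=True):
    """Uncompressed size of one texture; a full mipmap chain adds ~1/3."""
    base = width * height * bytes_per_pixel
    return base * 4 // 3 if mipmaps else base

# A single 4096x4096 RGBA texture already costs roughly 85 MiB with mipmaps...
print(texture_bytes(4096, 4096) / 2**20, "MiB")
# ...so a couple of dozen of them fill 2 GiB of graphics memory on their own.
print(texture_bytes(4096, 4096) * 24 / 2**30, "GiB")
```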



Transparency

At one point or another, every scene will contain some (partially) transparent objects. Whether it be the front window of a sports car or the surface of a pool of water, transparency is an important part of the physical world... and again, Rasterization is not really capable of handling it. As with visibility, transparency is the effect of light bouncing off a surface and traveling through a medium until it reaches the viewer. This interaction of multiple objects does not fit the per-triangle fashion of processing that Rasterization uses.

What was introduced to address this problem is a way to influence how the contributions of multiple fragments to the same pixel are mixed. The alpha value of a pixel in a texture or of the color of a material describes the opacity of the object, zero being completely transparent and one being completely opaque. When a triangle is drawn to a pixel, the graphics card computes a weighted sum of the existing and new color values based on their alpha values and a blending function. The most common use is to increase the contribution of the new color the larger its alpha value is, completely overwriting the existing color if alpha is one. There is however one problem with this approach: the rendering order. Imagine three sheets of glass, each in a different color. Let's say they are, from front to back, red, green and blue. Each has an alpha value of 0.5.

In the first configuration, with depth testing enabled and the sheets drawn front to back (red, then green, then blue), the red glass doesn't look transparent at all. Why is that so? When red is rendered, the Z-values in the depth buffer are set, and no fragment from another triangle with a farther-away Z-value will be rendered at all. So there is no chance to blend green and blue, because their fragments are discarded before blending.

In the second configuration, depth testing has been artificially turned off. First red is rendered, with nothing to blend it with since it is the first thing rendered. Then green is rendered and blended, and then blue is rendered and blended. Each time, the new object contributes 50% of itself and 50% of the already rendered objects, so at the time blue is rendered, only 25% of red is left.

In the third configuration, depth testing has been re-enabled but the rendering order has been changed to back to front: first blue, then green, then red. When red is rendered, it takes 50% of itself and 50% of the already rendered objects, which creates the correct effect.

As opposed to occlusion testing, there is no simple way around having to sort the objects back to front. Note, however, that the amount of work can be reduced by first rendering all non-transparent objects in any order and afterwards rendering all transparent objects back to front.
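The three glass sheets can be replayed numerically with the common "over" blend (new·α + old·(1−α)), a sketch over a black background:

```python
def blend(old, new, alpha):
    """Weighted sum of existing and incoming color, per channel."""
    return tuple(n * alpha + o * (1 - alpha) for o, n in zip(old, new))

RED, GREEN, BLUE = (1, 0, 0), (0, 1, 0), (0, 0, 1)
BLACK = (0.0, 0.0, 0.0)

# Back to front (blue, green, red), each sheet 50% opaque: correct result,
# the front (red) sheet dominates while the others shine through.
pixel = BLACK
for color in (BLUE, GREEN, RED):
    pixel = blend(pixel, color, 0.5)
print(pixel)

# Front to back with depth testing off: red is diluted by every later sheet,
# leaving only 12.5% of it once green and blue have been blended on top.
pixel = BLACK
for color in (RED, GREEN, BLUE):
    pixel = blend(pixel, color, 0.5)
print(pixel)
```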



Reflection

Another classic effect is reflection. And yet again, it is based on light bouncing off a surface and requires interaction between different objects, therefore it is not part of standard Rasterization. There are two main cases of reflections in use in real-time rendering: environment reflection and mirror reflection.

Environment Reflection

Environment reflection describes effects where reflections of the surrounding environment can be seen on the surface of an object, for example clouds in the lacquer of a car. For this type of reflection it is usually less important what exactly can be seen in the reflection, as long as it roughly matches the surroundings. The most common way to achieve this effect is to render the surroundings into a texture (usually a cube map) and do the texture mapping not based on the U/V-coordinates of the mesh but by computing a reflection vector. Basically, the cube map encodes the color for each direction of a reflection vector, and thus producing the reflection requires little computation during rendering.

A related practical note: a lot of 3D modeling programs use two-sided lighting, which means they render triangles independent of their vertex orientation. If some triangles of an object are missing when importing it, make sure the triangles are oriented consistently and that the culling render option in Ventuz matches that of the modeling program.
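The reflection vector used for the cube-map lookup is plain vector math: for a view direction D and a unit-length surface normal N, the mirrored direction is R = D − 2(D·N)N. A sketch:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(direction, normal):
    """Mirror an incoming direction at a unit-length surface normal:
    R = D - 2 * (D . N) * N"""
    d = 2 * dot(direction, normal)
    return tuple(di - d * ni for di, ni in zip(direction, normal))

# Looking straight down onto a floor facing up: the ray bounces straight back.
print(reflect((0, -1, 0), (0, 1, 0)))
# A 45-degree view direction glances off into the mirrored 45-degree direction.
print(reflect((1, -1, 0), (0, 1, 0)))
```

The resulting direction indexes directly into the cube map, which is why the lookup is so cheap at render time.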

Mirror Reflection

Mirror reflection describes effects which mimic a perfect mirror. In general this is only possible for a planar surface; otherwise it would require raytracing. The idea to fake this effect is to render the scene twice, once mirrored along the reflection plane and once correctly. Think of a pool in a hotel: the hotel can be seen as expected everywhere except in the pool, where it appears upside down.

Performance

The definition of real-time is pretty vague. For some "real-time raytracers", real-time means that 2-3 frames per second can be rendered. Most people however agree that real-time graphics require a minimum of 30 FPS (frames per second) to be acceptable and 60 FPS to have a good quality of animation. This of course puts tremendous stress on the graphics card as well as the computer in general. There are a number of different optimization strategies that are employed to achieve such high frame rates.

Culling

If you cannot see it, there is no need to render it. That is the motto of culling techniques, which try to avoid having to render an object or triangle at all. For example, all triangles that are completely outside of the camera's viewing area do not have to be rendered, as they will not contribute to the final image. This technique is called Frustum Culling and is something that has to be done by the user or on the application level, and therefore will not be discussed further here.

There is however a common culling technique that is directly supported by the graphics card: Back-face Culling. The idea is that all triangles belonging to the part of an object's surface that faces away from the user do not have to be rendered if the object is solid (i.e. there are no holes in the surface). The reason for this is that if the object is solid, there will be some front-facing part of the surface in front of the back-facing triangles anyway.

The graphics card uses the order in which the three vertices of a triangle are specified to decide if a triangle is front- or back-facing. Based on the render options set by the application, either all clockwise or all counterclockwise triangles will be dropped. In general, back-face culling simply looks right. The only problems are one-sided objects (i.e. a rectangle that has no thickness), which will "disappear" when rotated so that they face away from the user, and geometry imported from other programs.

Pre-Calculation

All work that does not necessarily have to be re-done every frame should be pre-calculated. For example, if the lighting situation does not change and the object does not change either, consider baking shadows into textures. If the relative ordering of transparent objects always stays the same, do not sort them every frame but build the presentation such that they are rendered in the correct order in the first place.

Reduce Complexity

The easiest way for an artist to improve render performance is to reduce the complexity of a scene. The fewer triangles and objects and the smaller the textures used, the faster it all can be rendered. It is not uncommon to remodel objects specifically for the purpose of real-time rendering. Most of the time, only the outer shell of an object is seen, so the chips and cables inside a mobile phone can be removed without any visual difference. Even on the outer shell, triangles can be saved if the object is only viewed in the distance. If an object is never viewed from the back, remove the back as well.

As an example, have a look at a model of a car used to render an advertisement and a model of the same car used for a console game. The number of triangles used can differ by a factor of a thousand or more.

For the use of textures, the number of textures is usually more crucial than their actual size. This goes up to a point where multiple independent textures are sometimes combined into one larger Texture Atlas to reduce the number of times the graphics card has to switch between different textures. However, reducing texture resolution can also optimize rendering times, as less memory has to be managed and accessed. It can even improve the visual quality if the texture resolution is adjusted so that it matches the size of the object as it appears in the rendered image.

Conclusion

This concludes this very brief introduction to real-time 3D rendering. There are a vast number of textbooks and papers out there on this topic for further reading. However, they are pretty much all written for developers and not artists. Since real-time rendering is such a demanding application when it comes to computer resources, the reader is encouraged to at least take a glimpse into the underlying programming techniques and algorithms to get the most out of his 3D presentation.

Never expect a mesh that is modeled for another purpose to be suited for real-time rendering. For models created for raytracing or simulation work, assume that the number of triangles has to be reduced, as well as the size and number of textures.



Ventuz Technology AG © 2017
Lutterothstr. 16a
20255 Hamburg
Germany

info@ventuz.com
www.ventuz.com

