
SIX TUTS ON LIGHT AND SHADE

by Florian Wild
part 1

SUNNY AFTERNOON
Fig. 1

This tutorial series is intended to be used with mental ray for Autodesk Maya 8.5.

“Happiness is like the sun: There must be a little shade if man is to be comfortable.” - Let's start our exercise with
this little quote by Otto Ludwig.

Welcome to the first part of this six-part tutorial series, discussing possibly the most challenging kind of 3D environment:
interiors. mental ray (for Maya) users typically get cold feet and sweating fingers when it comes to this “close
combat”; the royal league of environment lighting. There’s no reason to, though, as all you need for the battle is a simple
field manual (this tutorial) and just a little bit of patience...

So what is it all about? Let’s have a look at our object for this demonstration (Fig. 1) ...

As you can see, we have a closed room; you can tell by the porthole and the characteristic door that it is a room
inside a ship. Let’s imagine that it’s a tween deck of the ferry “MS No-Frills”, used as a lounge, and the staircase leads
to its upper deck.

From a lighting artist’s point of view, we can deduce from this analysis that there is light coming in from a) the opening in the
ceiling where the staircase leads outside, and b) the porthole and the window beside it. That’s not much, and if
you have ever taken a photograph under such conditions you will know that, even with nice equipment, you would have a
hard time catching the right moment (the “magic hour”) to illustrate the beauty of this particular atmosphere.
(Atmosphere is also defined, besides by the lighting condition itself, by things like the time of day, the architecture,
the weather, and occasionally also the vegetation.)

So, for our first tutorial part, we will choose the following scenario: our ship, the MS No-Frills, is anchored somewhere
along the shore of Tunisia (North Africa) in the Mediterranean Sea; it’s summer, the time is around early afternoon,
and the weather is nice and clear. That’s all we need to know at this stage to get us started...
Fig. 2

If you open up the scene, you will see that there’s no proper point of view defined yet. Feel free to either choose your
own perspective or use one of the bookmarks I have set in the default perspective camera (Fig. 2). By clicking on one
of the bookmarks, all relevant camera attributes (position, orientation, focal length, etc.) are changed to the condition
stored in the bookmark. This greatly helps when trying out different views without committing oneself, and without
creating an unnecessary mess of different cameras.

Before we start lighting and rendering the scene, we should have a little introduction to the actual shading of the
scene and to a few technical aspects such as color spaces. If you find this too boring then you
might want to skip the next two paragraphs as they are not essential; they nonetheless explain how
to achieve the result at the end of this tutorial.

A Note on Shading.

All the shaders you see are built on the new mia_material that ships with Maya 8.5. This shader was intended as a
monolithic (from the Greek words “mono”, meaning single, and “lithos”, meaning stone) approach for architectural
purposes, and it can practically be used to simulate the majority of the common materials that we see every day.
Unlike the regular Maya shaders, and most of the custom mental ray shaders, it implements physical accuracy, greatly
optimized glossy reflections, transparency and translucency, built-in ambient occlusion for detail enhancement of final
gather solutions, automatic shadow and photon shading, and many optimizations and performance enhancers - and,
most importantly, it’s really easy to use. It’s all in one - thus “monolithic”. I therefore decided to use it
in our tutorial...
Fig. 3

Fig. 4

Fig. 5

A Note on Color Space.

As you may already know, virtually all of the photographs and pictures that you look at on your computer are in the sRGB
color space. The reason is perceptual: a color value of RGB 200, 200, 200 is not twice as bright as a color of RGB 100, 100, 100,
as you might expect. It is of course mathematically twice the value, but perceptually it is not. Unlike plain mathematics
(2 x 100 = 200), our eyes do not respond in such a linear way. And here’s where sRGB comes in: this color space “maps” the
values so that they appear perceptually even. This is why most photographs are visually pleasing and look natural - which would
not be the case in a truly, mathematically linear color space.

However, almost every renderer spits out truly linear images (because this is simply how computers work - mathematically
linear), unless we tell the renderer to do otherwise. Most people are not aware of this, and instead of rendering in the right
color space they unnecessarily add lights and ambient components to unwittingly compensate for this error.

In Fig. 3 and Fig. 4, you can see two photographic examples illustrating the difference between a true linear (left) and an sRGB
color space (right). In Fig. 5, you can see the same for a CG rendering; you’ll notice that the true linear one looks a lot more
“CGish” and unnatural. Even if you brightened it up and added/reduced contrast, you still couldn’t compensate for the fact that
it’s in the wrong color space, especially if you carelessly used textures from an sRGB reference (i.e. from almost any digital
picture you can find), which adds even more to the whole mess. Getting this right is essential in order to create visually
pleasing and natural-looking computer graphics.

If you have followed me up to here, and you think you understand the need for a correct color space, then go take a break and
get yourself some coffee or delicious green tea and enjoy life for a while - you’ve earned it! This is all tricky yet fundamental
knowledge. How this theory is practically applied in mental ray will be shown later on...
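If you like to see the numbers behind this, here is a tiny Python sketch (my own illustration; the tutorial itself needs no scripting) that approximates sRGB with a plain 2.2 gamma curve:

```python
def linear_to_display(value, gamma=2.2):
    """Encode a linear [0, 1] intensity for display (simple sRGB approximation)."""
    return value ** (1.0 / gamma)

# Two linear intensities; one is mathematically twice the other.
dark = 100 / 255.0
bright = 200 / 255.0

encoded_dark = linear_to_display(dark)      # ~0.654
encoded_bright = linear_to_display(bright)  # ~0.895

# After encoding, "twice the value" is nowhere near twice the display
# brightness - matching the nonlinear way our eyes respond.
print(round(encoded_bright / encoded_dark, 2))  # ~1.37, not 2.0
```

The real sRGB curve has a small linear toe near black, but for our purposes the 2.2 power approximation is what mental ray’s gamma settings use anyway.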
So, let’s get started with lighting the scene... Maya 8.5
introduces, along with the mia package, a handy physical sun
and sky system. This makes it easy to set up a natural
looking environment and we can then focus more on the
aesthetic part of the lighting process, instead of tweaking
odd-looking colors. The sky system is created from the
Render Globals’ Environment tab (Fig. 6).

Fig. 6

By clicking on the button, you practically create:

a) a directional light which acts as the sun’s direction;

b) the corresponding light shader mia_physicalsun;

c) the mia_physicalsky, an environment shader that connects
to the renderable camera’s mental ray environment (Fig. 7);

d) a tone mapping lens shader called mia_exposure_simple,
which also connects to the camera’s mental ray lens slot.

It’s also worth mentioning here that this button also turns
Final Gathering ON.

Fig. 7
Fig. 8

Now that we have a default sun and sky system set up, we are almost ready to render. Before we do the first test render, let’s
make sure we are in the right color space, as mentioned. By default, we are rendering in true linear space (for an explanation
please refer to the previous notes on color space), which is - for our needs right now - not correct. The lens shader we created,
however, brings us into a color space which closely approximates sRGB by applying a 2.2 gamma curve (see its Gamma attribute)
globally to the whole rendered image as it is calculated. Generally, this is a good thing and is desirable. But if we apply the
gamma correction in this way, then we would have to “un-gamma” every single texture file in our scene. This is due to the fact
that the textures already have the “right” gamma (this is usually true for any 8-bit or 16-bit image file), and adding a gamma
correction on top of that would double the gamma and could potentially wash out the textures’ colors. What a bummer!

So, we either have to “un-gamma” every texture file (boring and tedious), or instead of the lens shader’s gamma correction, we
can use mental ray’s internal gamma correction (still boring, but less tedious).

As you can see from Fig. 8, we set the Gamma value in the Render Globals’ primary framebuffer menu to the desired value, which
is - simply because mental ray works this way - 1 divided by the gamma (2.2 for approximating sRGB in our case), which equals
0.455. At the same time, we also need to remove the gamma correction of our lens shader, so we set its Gamma attribute to
1.0 (linear equals no correction; you can select these shaders from the hypershade’s Utilities tab). Thus we completely hand over
the gamma correction to mental ray’s internal mechanism, which automatically applies the right “un-gamma” value to every one
of our textures. No more worries for our color textures now. If we use “value” textures however (like bump maps,
displacement maps, or anywhere a texture feeds a value rather than an actual color), we have to disable this mechanism
for that particular “value” texture by inserting a gammaCorrect node in front of it, with the desired gamma compensation (2.2 in
our case) filled into the gammaCorrect’s Gamma attribute (note: this attribute does not hold the exponent of the actual
color^gamma function; it rather indicates the desired compensation, i.e. the “inverse”, or reciprocal, of the gamma
function - no one would ever tell you about that, but now you know better). That was a long-winded theory, but now we’re ready
to go!
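Here is the whole workflow above condensed into numbers - a plain Python sketch (an illustration only, not mental ray code):

```python
GAMMA = 2.2
framebuffer_gamma = 1.0 / GAMMA      # the value for the primary framebuffer
print(round(framebuffer_gamma, 3))   # 0.455

texture_srgb = 0.5                   # a mid-grey pixel from an sRGB texture

# Correct pipeline: linearize ("un-gamma") once, render linearly,
# then encode once for display - the texture comes back unchanged.
linear = texture_srgb ** GAMMA
displayed = linear ** (1.0 / GAMMA)
print(round(displayed, 3))           # 0.5

# Wrong pipeline: skipping the linearization applies the display gamma
# on top of the texture's built-in gamma, doubling it.
washed_out = texture_srgb ** (1.0 / GAMMA)
print(round(washed_out, 3))          # ~0.73 - visibly brighter, washed out
```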

I tweaked the Final Gathering settings (Fig. 9) so
that we will get a relatively fast-converging, yet
meaningful, result. I also turned down the
mia_physicalsun’s Samples to 2.

Fig. 9
Let’s do a first test render. It’s kind of dark and has a few
errors (Fig. 10), mainly because of insufficient ray tracing settings.

Fig. 10

Let’s now increase the general ray depths (Fig. 11) and the
Final Gathering ray depths (Fig. 12). We’re also turning the
Secondary Diffuse Bounces on. However, the Secondary
Bounces button in the Render Globals only sets their Bounce
Depth to 1; we want it to bounce twice so we’re selecting
the actual node where all the mental ray settings are stored,
which is called “miDefaultOptions”.

Fig. 11

Fig. 12
You can do this by typing “miDef*” into the input line, with
the input line set to Select by Name (the asterisk is a wildcard for lazy
people like me, see Fig. 13).
Fig. 13

Once we select the miDefaultOptions, all of the more or less
hidden mental ray settings are exposed in the Attribute Editor.
There’s also some stuff in the mentalrayGlobals node, but
we’re focusing on the Final Gather tab in the
miDefaultOptions right now. Let’s set the FG Diffuse Bounces
attribute to 2 (Fig. 14). These ray depth settings should
suffice to get the result at the end of this tutorial.

Fig. 14
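As an aside, the value of that second bounce can be illustrated with a little arithmetic. This sketch is my own illustration - the 0.5 albedo is an assumed average, not a value from the scene:

```python
albedo = 0.5  # hypothetical average surface reflectivity (an assumption)

def indirect_energy(bounces, a=albedo):
    """Indirect energy gathered after N diffuse bounces, with direct = 1.0.
    Each bounce n contributes a**n of the direct energy."""
    return sum(a ** n for n in range(1, bounces + 1))

total = albedo / (1.0 - albedo)  # limit of infinitely many bounces

for bounces in (1, 2, 3):
    share = indirect_energy(bounces) / total
    print(bounces, round(share, 3))  # 1 -> 0.5, 2 -> 0.75, 3 -> 0.875
# Two bounces already capture 75% of all indirect energy with this albedo -
# a good trade-off between realism and render time.
```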

Let’s re-render (Fig. 15). It is still pretty dark, but you can
tell that the indirect light contribution is sufficient (don’t
worry about detailed shadowing, we’ll get to that later on),
so we need to actually raise the exposure level of our piece,
somehow.

Fig. 15
Remember, we’re all still on the very basic default settings for
everything. One setting used to tweak the exposure is the
Gain attribute in the mia_exposure_simple, which is
connected as a lens shader to our camera. Let’s increase the
Gain value to 0.5 (Fig. 16).

Fig. 16

That’s much better, and gives a more natural feeling (Fig. 17).

Fig. 17
Now we can start to actually make decisions on the lighting
and aesthetic accentuations. For this part, please don’t feel
constrained to the settings and colors that I choose - feel
free to follow your own ideas! I’m rotating the sunDirection
to X -70, Y 175, Z 0 to accentuate certain elements by direct
sunlight, and I’m setting the attributes of the
mia_physicalsky to the values you can see in Fig. 18. I
increased the Haze value to 0.5 (note that this attribute
takes values up to 15, so 0.5 is rather low). Then I set the
Red/Blue Shift to 0.1, which basically means a white-balance
correction towards reddish (towards blue-ish would be a
negative value, like -0.1). I also raised the Saturation
attribute to 2.0, which is its maximum value. I then made
slight adjustments to the horizon, which does not have much
effect on the global look but I experimented with what we
could see through the porthole and the window.

Fig. 18
The last thing I changed was the Ground color. I gave it a
greenish tint because I thought this gave it a more lagoon-
like feeling, and I think it gives the whole piece a more
interesting touch (Fig. 19). From my own point of view, this
is a good base for what we intended to accomplish with the
early afternoon in the Mediterranean Sea scenario.

Fig. 19

If we’re satisfied with the general look, we can then go about
setting up the scene for a final render. Firstly, let’s increase
the Final Gathering quality, because we can reuse the Final
Gathering solution later on. As you can see from Fig. 20, I
raised the Accuracy to 64, but more importantly, and
especially for the shadow details, the Point Density is now at
2.0. With a denser Final Gathering solution we can also raise
the Point Interpolation without losing too much shadowing
contrast. I also set the Rebuild setting to Off, because the
lighting condition is not changing from now on and we can
therefore re-use existing Final Gather points.

Fig. 20

Let’s have a look (Fig. 21). As you can see, there is still a
lack of detail in the shadowed areas, especially in the door
region. We can easily get around this with the new
mia_materials, which implement a special Ambient Occlusion
mode. You only need to turn Ambient Occlusion on in
the shaders, as everything else is already set up fairly well by
default (all I did was set the Distance to a reasonable value
and darken the Dark color a little).

Fig. 21
The main trick is the Details button in the mia_material
(leaving the Ambient at full black). By turning on the Details
mode, the Ambient Occlusion only darkens the indirect
illumination in problem-areas, avoiding the traditional global
and unpleasant Ambient Occlusion look. See Fig. 22 with the
enhanced details.

Fig. 22

Note: to adjust the shaders all at once, select all
mia_materials from the hypershade and set the Ao_on
attribute in the attribute spread sheet to 1 (Fig. 23); the
attribute spread sheet can be found under Window >
General Editors > Attribute Spread Sheet. Also note that
switching on the Ambient Occlusion in the shader scraps the
Final Gathering solution; it will be recalculated from scratch.
If you find the Final Gathering taking too long, turn the Point
Density down to 1.0 or 0.5; this still gives you nice results,
but the lighting details will suffer.
Now let’s increase the general sampling quality (Fig. 24). The
sample level is now at Min 0 and Max 2, with contrast at 0.05
and the Filter set to Mitchell for a sharp image.

Last but not least, if you are having problems with artifacts
caused by the glossy reflections, raise the mia_material’s
Reflection Gloss Samples (Refl_gloss_samples) up to 8 for
superior quality. You can do this with the attribute spread
sheet, as well.

Fig. 24

For the final render, I chose to render to a 32bit floating
point framebuffer, with a square 1024px resolution. This can
be set in the Render Globals (Fig. 25).

Fig. 25

If I want to have the 32bit framebuffer right out of the GUI
(without batch rendering), I need to turn the Preview
Convert Tiles option On and turn the Preview Tonemap Tiles
option Off, in the Preview tab of the Render Globals (Fig.
26).

Fig. 26
Important: I also need to choose an appropriate image
format. OpenEXR is capable of floating point formats and it’s
widely used nowadays, so let’s go for that (Fig. 27).

When rendering to the 32bit image, you will get some funky
colors in your render view, but the resulting image will be
alright - don’t worry. After rendering, you can find it in your
project’s images\tmp folder. Fig. 28 shows my final result: a
pretty good base for the post production work.
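If you are wondering why the floating point framebuffer is worth the extra storage, this little Python sketch (an illustration, not renderer code) shows what happens to a super-bright highlight when the exposure is pulled down in post:

```python
def to_8bit(value):
    """Quantize a linear intensity to an 8-bit display value (clamped)."""
    return min(max(int(value * 255 + 0.5), 0), 255)

highlight = 3.2        # a super-bright pixel, e.g. sunlight on the floor
exposure_down = 0.25   # exposure pulled down two stops in post

# Float framebuffer: values above 1.0 survive, so the highlight still
# holds detail after the exposure change.
print(to_8bit(highlight * exposure_down))   # 204

# 8-bit framebuffer: the highlight was clamped to 1.0 at render time,
# so the same exposure change leaves only a flat grey.
clamped = to_8bit(highlight) / 255.0        # clipped to 1.0
print(to_8bit(clamped * exposure_down))     # 64
```

This extra headroom is exactly the “great freedom” for color enhancement mentioned below.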
Fig. 27

Fig. 28
Since we rendered to a true 32bit image, we have great freedom in post production. In my final interpretation there
is no additional painting, only color enhancement. Try it for yourself!

I hope you have enjoyed following this tutorial as much as I enjoyed writing it!
part 2

TWILIGHT
Welcome back aboard for the second part of the environment
lighting series for Autodesk Maya 8.5. Again, we will be using
mental ray for Maya for this challenging interior illumination,
so all you need is to get your CPU up to operating
temperature and the basic Maya scene of our ship's interior.

Before we can start, we need to properly set the project (Fig.
1). If you're not familiar with the use of projects, you might
want to know that (one of) the main reasons for doing this is
because of the relative texture paths Maya uses. These
relative paths ensure that we can port the scene from one
file location (e.g. my computer) to another (your computer)
without any hassle, as opposed to absolute paths which
would always point to a static location that might differ from
system to system.

Fig. 1

So we're back aboard the MS No-Frills, still anchored
somewhere in the Mediterranean Sea (Fig. 2). For this
second tutorial, we will set our goals for accomplishing a
twilight atmosphere, which would usually occur at either
dusk or dawn.

Before we actually look at the scene, let's take a few
moments to think about this very special situation (you might
want to skip or come back later to this paragraph if you want
to go straight to the execution). Twilight, from a technical
point of view, is the time (usually around half an hour)
before sunrise or after sunset. In this condition the sun itself
is not visible; the sun's light is, however, scattered towards
the observer in the high layers of the atmosphere, either by
the air itself (Rayleigh scattering) or by aerosols. This scattering
effect causes the beautiful and different colors that we enjoy
every dusk or dawn. From an artistic point of view, twilight
may happen in a variety of occasions, for example in stormy
weather, or when natural and artificial light sources meet -
typically whenever two (thus “twi-”) light sources or light
conditions compete for predominance (imagine two wrestlers
intensely fighting on the floor, and it's absolutely impossible
to tell who's going to win the fight). Twilight always has this
dramatic sense to it, and often the dramatic colors as well. In
case of a storm, they might even range from greenish to
deep blue. Usually, in the case of dusk and dawn, colors
range from blue to purple, and from yellow to orange and
red. The crux is that these colors are mostly equally
dominant (and therefore leave us with great artistic and
interpretational freedom) - as opposed to any other lighting
condition, where there is usually one light source which is
predominant. With this in mind, we are now ready to
simulate the very particular case of twilight.

We will use the same base scene as used for part 1 of this
tutorial (the sunny afternoon), so all shaders and textures
are ready to rumble. All surface shaders are made from the
mia_material that ships with Maya 8.5 (you might want to
read back to the “note on shading” in part 1 - sunny
afternoon - which explains its basic functionality).
Again, we are using the newly introduced physical sun and
sky system, which can easily be created from the render
globals (Fig. 3). This button saves us time setting up all the
nodes and connections to make the system work properly
(it also turns Final Gathering ON). It basically consists of
three things:

Fig. 3
The sun, whose direction we control using the directional light (called sunDirection by default) with its light shader
mia_physicalsun; the sky, which consists of an environment shader (mia_physicalsky) connected to the camera; and a
simple, yet effective, so-called tonemapper (mia_exposure_simple), used as a lens shader on the camera (Fig. 4).

Fig. 4
Fig. 5

Before we start rendering, let's firstly think about a reasonable sun direction that would fit our needs for twilight. It is very
tempting to actually use an angle that leaves the sun below the horizon line, however this would yield a diffuse, not very dramatic
lighting. You might want to experiment with this a little, but I have decided to have a more visible indication of where the sun
actually is. I rotated the sun on X -12.0 Y 267.0 Z 0.0; this makes the direct sunlight shine through the back windows, still
providing a very flat angle (Fig. 5).

There's still one important point that we should consider before pushing the render button: the color space. As already explained
in the “note on color space” in the first tutorial (sunny afternoon), we should make sure we work in a correct space, which is
sRGB - or, in our case, a 2.2 gamma curve that closely approximates sRGB.
Fig. 6

The mia_exposure_simple already puts us into this space by default (its Gamma attribute defaults to 2.2), but by doing it this
way we double the gamma on our file textures, which by default are already in sRGB - a big secret no one may ever have told
you before, but trust me, it's like that. So we either need to remove the gamma from our textures (“linearize” them) before
rendering, which can be done with a gammaCorrect node placed in front of them in the shader chain with its Gamma set to
1/2.2, which is 0.455 rounded (important: the gammaCorrect node works inversely - the value we put in there is the desired
gamma compensation value, not the exponent of the actual gamma function!), OR we can use mental ray's internal gamma
correction mechanism - which I prefer. So we abandon the mia_exposure_simple's gamma correction simply by setting its
Gamma attribute to 1.0, and enable mental ray's mechanism by setting the primary framebuffer's Gamma to 1/2.2 = 0.455 in
the render globals (Fig. 6).
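To see the gammaCorrect node's “inverse” behaviour in numbers, here is a small plain-Python sketch of the math (an illustration, not Maya code; only the node name is Maya's, the function here is ours):

```python
def gamma_correct_node(color, gamma_attr):
    """Mimic the gammaCorrect utility's math: output = input ** (1 / gamma)."""
    return tuple(c ** (1.0 / gamma_attr) for c in color)

srgb_texel = (0.5, 0.5, 0.5)

# To linearize an sRGB texture, we enter 1/2.2 (0.455) into the node;
# because of the internal reciprocal, this applies a 2.2 power and
# removes the texture's built-in gamma.
linearized = gamma_correct_node(srgb_texel, 1.0 / 2.2)
print(tuple(round(c, 3) for c in linearized))  # (0.218, 0.218, 0.218)
```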

So we're ready to go and do the first test rendering (Fig. 7).
As you can see, the scene is pretty dark and has a few errors
caused by the insufficient ray depths. However, we are still
using the render globals' default Draft quality preset...

Fig. 7
Let's now increase the raytracing depths to a reasonable
amount (Fig. 8). The values you see in Fig. 8 should satisfy
our requirements; we might increase the reflection depth
later on...

Fig. 8

I also tweaked the final gathering settings to a lower quality
(Fig. 9). This way, we get a fast-converging - yet meaningful
- indirect illumination for our preview renders. But besides
lowering the general final gathering quality, I increased its
trace depths, and, more importantly, turned the Secondary
Diffuse Bounces button on. This button, however, only gives
us a single bounce of diffuse light - that's how the render
globals were designed. As I'm not satisfied with that, let's go
under the hood of the mental ray settings...

Fig. 9
We are selecting the miDefaultOptions node (for example by
typing “select miDefaultOptions” without the quote marks in
the MEL command line) (Fig. 10). This node is basically
responsible for the export of all the settings to mental ray.
The regular render globals are practically a more user
friendly “front-end” to the miDefaultOptions. There's also
some stuff in the mentalrayGlobals node, but this does not
affect us right now.

As you can see, the FG Diffuse Bounces attribute is actually
exposed; we set it to our desired depth, which is 2 for now.

Fig. 10
It looks better, but still appears to be seriously
underexposed (Fig. 11). There are several ways to adjust the
general exposure level in mental ray for Maya, but let's
choose the easiest one: raising the Gain attribute of our
mia_exposure_simple...

Fig. 11

You can navigate to the mia_exposure_simple either by
selecting your camera (to which it is connected), or by
opening the hypershade and selecting it from the Utilities
tab. I gave it a serious punch and boosted the Gain to 4.0
(Fig. 12).

Fig. 12
Now it's much better from an exposure point of view, but it
looks very cold and not very twilightish (Fig. 13). You might
want to experiment with the sun's direction, but if we overdo
this then we will lose the nice light which is playing on the
floor. I therefore decided to solve the problem using the
mia_physicalsky - the environment shader which is
responsible for pretty much the entire lighting situation.

Fig. 13
I upped the Haze parameter to 2.0, which gives us a nice
“equalization” between the direct light coming from the sun and
the light intensity of the sky (Fig. 14). At lower haziness, the
sunlight would be too dominant for our twilight atmosphere.
I then shifted the Red/Blue attribute towards reddish, to
achieve a warmer look (if I wanted to shift it towards
blueish, i.e. doing a white balance towards a cooler
temperature, I would have to use a negative value for the
Red/Blue shift). I also slightly increased the Saturation,
which is pretty much self explanatory. Now, for an interesting
little trick to make the whole lighting situation more
sunset/sunrise-like, whilst still maintaining the direct light on
the floor (i.e. the actual light angle), I increased the Horizon
Height to 0.5. This not only shifts the horizon line but also
makes the whole sky system think that we have a higher
horizon, and thus provides a more accentuated
sunset/sunrise situation. Remember this does not have too
much of an effect, yet it's still an interesting way to tune the
general look. The last two things I changed were the Horizon
Blur and the Sun Glow Intensity; however, both of these
attributes don't have much of a visible effect on the general
illumination of our interior.

Fig. 14
Once we're finished setting up the basic look, we can go
about configuring the render globals for the final quality (Fig.
15). First of all, let's increase the final gathering quality, since
we can reuse the final gathering solution later on. In Fig. 15
you can see the values I used - 64 for accuracy, which
means each final gather point shoots - in a random manner -
64 rays above this point's hemisphere (less accuracy would
give us a higher chance of a blotchy final gathering solution).
To work against the blotchiness we could also increase the
Point Interpolation to really high values, like 100+, but this
would most likely wash out the whole contrast and detail of
our indirect illumination if we don't have a sufficient Point
Density value. The Point Density - in conjunction with a
reasonable Point Interpolation - plays the biggest part in
achieving nicely detailed shadowing, and so we have to
find a good correlation between these two. In our case, I
found it sufficient to have a Point Density of 2.0 and a Point
Interpolation of 50. You might want to try a density of 1.0
(or even 0.5) if you think the former settings take too long to
calculate, but you'll surely notice the lack of detail in the
indirect illumination. Note that increasing/decreasing the
interpolation does not affect the final gathering calculation
time at all. It also does not hurt the actual rendering time
too much. The crucial value is the point density which adds
to calculation time, as well as the accuracy. Also note that
you might be able to comfortably experiment with the Point
Interpolation if you freeze the final gathering solution (set
Rebuild to Freeze).

Fig. 15
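The accuracy-versus-blotchiness trade-off can be illustrated with a toy Monte Carlo estimator (my own sketch, not mental ray's actual sampler): each final gather point averages a number of random samples, and fewer rays per point means the per-point estimates disagree more - which is exactly the blotchiness we see.

```python
import random

def fg_point_estimate(rays, rng):
    """Average 'rays' random samples of a made-up incoming-light function."""
    return sum(rng.random() ** 2 for _ in range(rays)) / rays

def spread(rays, points=200, seed=42):
    """How much the estimates disagree across many final gather points."""
    rng = random.Random(seed)
    estimates = [fg_point_estimate(rays, rng) for _ in range(points)]
    return max(estimates) - min(estimates)

# Quadrupling the rays per point roughly halves the noise, so the
# per-point estimates agree much more closely (less blotchiness) -
# at the cost of four times the sampling work per point.
print(round(spread(16), 3))
print(round(spread(64), 3))
```

This is also why Point Interpolation is cheap by comparison: it only averages already-computed points, while Accuracy multiplies the per-point ray count.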

It looks much better now, but there are still some areas that
seriously lack detail, such as the door region (Fig. 16). To
reveal these details we could render a simple ambient
occlusion pass and multiply it over in post production. This
would accentuate the problem areas, but at the same time it
would add that typical, omnipresent, physically incorrect and
visually displeasing ambience. To overcome this, while still
using the advantages of ambient occlusion, we can use the
mia_material's internal ambient occlusion mode...

Fig. 16
We simply need to enable it in the shader, and set the Detail
attribute to ON (which it is by default) (Fig. 17). This special
ambient occlusion mode is intended to enhance the problem
areas' details, where the point density might still not suffice.

Fig. 17

Fig. 18

To enable the ambient occlusion in all shaders, we simply select them all from the hypershade and open the attribute spread
sheet, from Window > General Editors > Attribute Spread Sheet (Fig. 18). There we navigate to the attribute called Ao_on and set
its value to 1 (ON).
Although it still might be physically incorrect, it reveals all the
details that the final gathering was not able to cover (Fig.
19). Of course, it still looks very coarse, and this is mainly
because the general sampling settings are still at extremely
low values.

Fig. 19

To ensure nice edge antialiasing, as well as better shadow and
glossy sampling, we set the min/max sample levels to 0/2
and the contrast values each to 0.05 (Fig. 20). The filter
should be changed, too; I chose Mitchell for a nicely sharp
image. I'm also raising the Reflection Gloss Samples
(Refl_gloss_samples) up to 8 in the mia_materials. Note that
this happens on a per shader basis, and we can use the
attribute spread sheet again to do this all at once for all
shaders.

Fig. 20
Last time we rendered to a full 32bit floating point
framebuffer. This time, for my final render, I chose to render
to a 16bit half floating point framebuffer (Fig. 21). The 16bit
half takes less storage (and bandwidth) but still provides the
increased dynamic range of floating point buffers. If we want
to render the floating point buffer right out of the GUI,
without batch rendering, we need to make sure the data
written into the buffer actually is floating point; thus the
Preview Convert Tiles in the Preview tab of the render
globals needs to be switched ON, and the Preview Tonemap
Tiles option needs to be switched OFF. This will produce
funky colors in your render view preview, but the image
written to disk (typically in your project's images\tmp folder)
should be alright.

Fig. 21

The use of a 16bit half framebuffer forces us to use ILM's
OpenEXR format, as it is the only supported format right now
for this particular kind of framebuffer (Fig. 22). That's not
actually bad, since OpenEXR is a very good and nowadays
widely used format.

Fig. 22
Fig. 23

Here's the final rendered, raw image (Fig. 23) - a good base for the post production work.
Fig. 24

In my final interpretation I decided to exaggerate the colors that make a dramatic twilight atmosphere (Fig. 24). Again, there is no
painting happening, only color enhancement, which was done using Adobe Lightroom 1.0.

I hope you enjoyed following this second part of the series as much as I have enjoyed writing it. Stay tuned for part 3 where we
will be covering an extremely interesting and no less challenging lighting situation: moonlight.
part 3

MOONLIGHT
Hello and welcome to the third part of the environment
lighting series for Autodesk Maya 8.5, where we will be
discussing a very interesting lighting situation: natural
moonlight. So let’s wait for full moon and a cloudless sky,
then we can turn off the lights and get started...

If you followed the preceding two tutorials (which I
recommend), you will already be familiar with the scene (Fig.
1). Before we start placing lights and tuning parameters, we
should take some time to think about what ‘moonlight’
actually is. If you are not interested in this concept then you
might want to skip or come back later to the next two
paragraphs, as they are not essential. They are however
valuable for the understanding of why certain methods have
been used in the execution of this moonlight setup.

So what is moonlight? First of all, by moonlight we mean a
nighttime situation, and for the sake of convenience let’s say
we have a full-moon/nighttime situation. There are several
sources and components of illumination in this setting (in
descending order of energy): the moon itself (by
scattering sunlight from its surface in all directions), the sun
(by scattering light around the edge of the earth), planets
and stars, zodiacal light (dust particles in the solar system
that scatter sunlight), airglow (photochemical luminescence
from atoms and molecules in the ionosphere), and diffuse
galactic and cosmic light from galaxies other than the milky
way. All of these illumination sources have their
characteristics, and in order to super-realistically simulate
such a night-sky, we would have to account for all of them.
But please bear with me, we will only be concentrating on
the moon itself, and an atmospheric ‘soup’ including all the
other ingredients.
Besides, and this is very interesting, even if we did that super-realistic night-sky simulation, we would perhaps get a very photo-realistic rendering, but I am sure many people would be disappointed by it. This is for the simple fact that seeing a night-sky/moonlit photograph is fundamentally different from actually viewing such a scene with our own eyes. The photograph might be physically correct, but also completely different from what we are used to physiologically perceiving. In the end, we would most likely shift the photograph's white balance heavily towards blue, because this is what we are used to seeing: as opposed to how a camera sensor works at dim lighting levels, the sensitivity of human light perception is shifted towards blue. The color-sensitive 'cones' in the eye's retina are most sensitive to yellow light, while the more light-sensitive 'rods' are most sensitive to green/blueish light. At low light intensities the rods take over perception, and eventually we become almost completely color blind in the dark; hence it appears that the colors shift towards the rods' peak sensitivity: green and blue. This physiological effect is called the "Purkinje" effect, and it is the reason why blue-tinted images give a better feeling of night - even though it's not correct from a photographic point of view.

So we will rely on a hint of artistic freedom, rather than strict photo-realism, for this tutorial. To simulate the moon's light I chose a simple directional light with the rotation X -47.0, Y -123.0, Z 0.0 (Fig. 2).
For the light color I decided to use mental ray's mib_cie_d shader (Fig. 3). Its Temperature attribute defaults to 6500 K (Kelvin), which corresponds to sRGB 'white' for this so-called D65 standard illuminant, commonly used for daylight illumination. In other words: every temperature above 6500 K will appear blueish, and every temperature below 6500 K will appear reddish. The valid range is from 4000 K to 25000 K. Although the moon actually has a color temperature of around 4300 K, I chose a temperature of 7500 K. This is not necessarily correct from a physical point of view, for various reasons. Firstly, the moon is not a black body radiator, so its color cannot precisely (only approximately) be expressed on the Kelvin scale. Second, the moon's actual color is mainly a result of the sunlight (with a temperature of around 5700 K - still lower than the white point of our D65 illuminant, or in other words more reddish when expressed with it), the slightly reddish albedo of the moon's surface, and the reddening effect of Rayleigh scattering (blue light, i.e. shorter wavelengths, is more likely to scatter than red light with longer wavelengths, so a higher amount of blue light gets scattered in the atmosphere, leaving more red light from our perspective here on Earth). In photo-reality this would, surprisingly, yield a quite reddish moonlight, even if we chose a very low white balance for our photograph at maybe around 3200 K (which is considered 'tungsten film'). However, for the physiological reasons described previously, I went for 7500 K on the D65 illuminant, as this gives a pleasing - not too saturated but still very natural - blueish light.

To cut a long story short, if you wanted to go for photo-realism you would
have to use a reddish light color, but you would most likely white balance
everything towards blue afterwards to achieve the cool night feeling! And
that’s basically what I did - only in a rush...
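To put a rough number on the Rayleigh argument above: scattering strength falls off with approximately the fourth power of wavelength, so blue light is scattered several times more strongly than red light. A minimal sketch (the two wavelengths are illustrative choices, not values from this tutorial):

```python
# Rayleigh scattering strength is proportional to 1/lambda^4.
def rayleigh_strength(wavelength_nm):
    return 1.0 / wavelength_nm ** 4

blue = rayleigh_strength(450.0)  # short wavelength (blueish)
red = rayleigh_strength(650.0)   # long wavelength (reddish)

# Blue light scatters roughly 4x more strongly than red light,
# which is why the directly transmitted moonlight is reddened.
print(round(blue / red, 2))  # -> 4.35
```

This is why, strictly photographically, the light reaching us has lost proportionally more of its blue component.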
For the same reasons I chose a turquoise (blue-greenish) color for the surrounding environment, which was simply applied as the camera's background color. Although this will only have a subtle effect, it makes sense for completeness, and after all we will see this color through our back windows. Note that what we see on the actual Background Color swatch will be (deliberately) gamma corrected later on. To compensate for this, and to ensure that the color I choose is the color I will see later on in the rendering, I use a simple gammaCorrect node with the inverse gamma applied. The gammaCorrect node is connected onto the 'Background Color' slot via middle-mouse-button drag & drop.
Before we push the render button, let’s make sure we have
something that takes care of our indirect illumination, and
that we are rendering in an appropriate color space. For the
sake of simplicity I chose final gathering with Secondary
Diffuse Bounces for the indirect light contribution. This is
easy to set up, yet effective. As you can see I set low quality
values, but since we are only doing a preview this will
suffice.

Because there is a little shortcoming with the Secondary Diffuse Bounces setting, I'm selecting the miDefaultOptions node, which is basically the back-end of the render globals. There I set the FG Diffuse Bounces to 2, which is my desired value for the indirect illumination bounces. To select the miDefaultOptions node, simply type "select miDefaultOptions" (without the quote marks) in the MEL command line, and then hit Enter.
I'm also setting the Ray Tracing depths to reasonable values - they seem very low, but are absolutely sufficient for our needs.

To take care of the desired color space (sRGB) we simply need to set a gamma curve in the Primary Framebuffer tab of the render globals. Since a gamma curve of value 2.2 is similar to the actual sRGB definition, we only need to set the Gamma attribute to 1/2.2 = 0.455, as this is how mental ray's gamma mechanism works. For a basic understanding of why we should render in sRGB, I greatly encourage you to go through the "Note on Color Space" in the first tutorial of this series (Sunny Afternoon), if you haven't already. As a general note, it has to do with the non-linearity of human light perception versus rendering in a true linear space (gamma = 1.0), as any renderer usually does by default, which is the main reason for CG looking "CG-ish" (which we don't want). Spread this knowledge to your buddies, and with this understanding you'll be the cool dude at every party, trust me!
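The relationship between the plain 2.2 gamma curve used here and the actual sRGB transfer function can be sketched in a few lines (the piecewise sRGB constants come from the published sRGB definition, not from this tutorial; the mid-grey value 0.18 is just an illustrative probe):

```python
# mental ray expects the inverse exponent: 1/2.2 ~= 0.455.
MR_GAMMA = 1.0 / 2.2
print(round(MR_GAMMA, 3))  # 0.455

def encode_gamma22(linear):
    """Simple 2.2 gamma curve, as set in the render globals."""
    return linear ** (1.0 / 2.2)

def encode_srgb(linear):
    """The exact piecewise sRGB transfer function, for comparison."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1.0 / 2.4) - 0.055

# For mid-range values the two curves are very close, which is why
# a plain 2.2 curve is 'similar to the actual sRGB definition'.
mid = 0.18  # a typical mid-grey reflectance
print(round(encode_gamma22(mid), 3), round(encode_srgb(mid), 3))
```

The two printed encodings differ only in the third decimal place for mid-range input, so for our purposes the single Gamma attribute is a perfectly adequate stand-in for the real sRGB curve.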

So here is our first test render. It looks a bit dark, and since we want a full moon, the shadow seems a bit too sharp.
To soften the shadow, let's increase the Light Angle of our directional light. Because widening the light angle introduces artifacts, we should also increase the number of shadow rays to yield a smooth and pleasing shadow. I'm also increasing the intensity of the mib_cie_d a little.

This is a good base, and all we need to do now is increase the general quality settings for our final render.
For better anti-aliasing and smoother glossy reflections we should crank up the global sampling rates (Fig. 12). A min/max value of 0/2 and a contrast threshold of 0.05 should suffice. I used a Gauss 2.0/2.0 filter for a sharp image.

For the final gathering this time I chose a fairly unorthodox method... Remember, the last couple of times we used the automatic mode, which in most cases does a really good job. Well, in automatic mode all we need to worry about are the Point Density and Point Interpolation values. However, sometimes in this mode the interpolation becomes quite obvious and displeasing, especially in corners, where you can usually spot a darker line where the interpolation happens to be very dull. For a sharper interpolation, I decided to use the scene-unit dependent Radius Quality Control (Fig. 13). It generally takes a little time to estimate the proper min/max values (in scene units), but as a guideline you might want to render a diagnostic automatic final gathering solution (see Diagnostics in the render globals) as a base, to see its point densities. Then, step by step, approximate this density with the scene-unit Max Radius control. Note that the density is determined only by the Max Radius (the lower the Max Radius, the more final gathering points are generated); the Min Radius only controls the interpolation extents. Once you are satisfied with this general density, you will usually want to raise the Point Density value. This Point Density is added to the density we estimated with the min/max radii; however, the interpolation extents do not change, so we are basically only adding points to the interpolation, which is similar to raising the Point Interpolation in automatic mode (only more rigid, and somehow it puts the cart before the horse this way). It's always good to know how and why things are happening, and this knowledge is useful if you ever want to use the Optimize for Animations feature. It's also a bit easier if the View radii are used, since the min and max radii can then be generalised (min/max 25/25 or 15/15 in pixel units is a good starting point).
As a little trick to enhance details in our scene, I turned on Ambient Occlusion in the mia_material shaders, in Details mode. Simply select them all and switch the Ao_on attribute to 1 (On) using the Attribute Spread Sheet.
The Details flag, in combination with final gathering, ensures that we don't get that rather unpleasant dark-cornered-and-strange ambient occlusion.
To prepare for the final render, I set the framebuffer to half floating point and the image format to OpenEXR (Fig. 15). Floating point means the image gets stored with a high dynamic range, as opposed to 8bit or 16bit integer images, which are clipped at RGB values greater than 1.0 ('white'). With a floating point image we can map values greater than 1.0 back into the visible range in post-production (i.e. we will be able to eliminate completely burnt-out areas). Half floating point means floating point at half precision, taking less memory and bandwidth. To be able to render a floating point image right out of the GUI we need to set Preview Tonemap Tiles to Off, but keep Preview Convert Tiles at On. The preview in the render view might look very dark and psychedelic, but the OpenEXR image written to disk in the images\tmp folder will be alright, and that's the one we will be processing later on in Photoshop (or any other HDRI editor of your choice). Mind that floating point images are stored without gamma correction (i.e. linearly), and e.g. Photoshop (hopefully) applies the proper correction by itself. If the image looks incorrect when imported into Photoshop or anywhere else, you most likely have to apply the gamma correction yourself there. This does not relieve us from setting the proper gamma value in the render globals' framebuffer menu, however, as the textures still need to be linearized before rendering!
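The clipping difference between integer and floating point storage can be demonstrated with Python's struct module, which supports the same 16-bit 'half' binary format that OpenEXR uses. This is a generic illustration of the storage behavior, not part of the Maya workflow itself:

```python
import struct

def store_8bit_int(value):
    """8bit integer channel: anything above 1.0 is clipped to white."""
    return min(max(int(round(value * 255.0)), 0), 255) / 255.0

def store_half_float(value):
    """16bit half float channel: round-trip through struct's 'e' format."""
    return struct.unpack('<e', struct.pack('<e', value))[0]

hot_pixel = 3.5  # an over-bright value, e.g. a burnt-out highlight

print(store_8bit_int(hot_pixel))    # 1.0 - the detail is lost forever
print(store_half_float(hot_pixel))  # 3.5 - recoverable in post
```

The integer channel collapses every over-bright value to the same 'white', while the half float keeps the actual value, which is exactly what lets us pull burnt areas back in post-production.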
Here's my final render without post processing.
As with any photograph, we shouldn't judge the raw shot; instead let's take it into the 'darkroom' and apply some color and contrast improvements here and there.

I hope you've enjoyed following this little exercise as much as I have enjoyed writing it! Sadly this is the last part concerning natural exterior lighting, but the upcoming electric light tutorial will be no less challenging and just as much fun, I'm sure!
part 4

ELECTRICAL
Hello and welcome back aboard! This time, following up on our last tutorial about natural moonlight, we will be discussing a very 'CGI-traditional' fashion of illumination: electrical lighting. Although this kind of light is considered 'artificial', we will learn later on that it has a very natural background (at least as long as we stay with a tungsten light, which is what we propose in this tutorial).
So, why 'CGI-traditional', you might ask? Well, ever since there has been CGI (computer generated imaging), tungsten bulbs have been a very 'easy' to simulate type of light source, for mathematical reasons. The classic tungsten bulb has a relatively limited area of light emission, which, in the 3d/simulation world, can believably be simplified down to an infinitesimal point - the classic point light (as a side note, its little brother, the spot light, is nothing but a point light with more sophisticated features). In the past of CGI this infinitesimal (infinitely small) point made it possible to render 3d images effectively and fast, for a logical reason: to simulate a light source, we basically need three points for the math, i.e. the position of the 'eye' of the observer, the point on the surface that's being lit, called the 'intersection point', and the position of the light source - together these mathematically make up the rendering, and since an infinitesimal point is obviously the simplest element in 3d space, it can be computed with very little expense in this context. Even more important, it converges noise-free per se, since the point is strictly determined. Back in the times when computers weren't as high-clocked as today this was crucial, and point-light based lighting was mandatory, along with closely related techniques such as spot lights and directional lights (which use an infinitely far away point instead).
So for CGI the point light was pretty much as important as Edison's light bulb for real life. Computer light sources have evolved since then, however, just as the real bulb did, and still (for both!) the principles have stayed the same. And still the most believable deployment of a point light is in the simulation of a tungsten bulb.
Enough with the history though; let's have a closer look at how tungsten bulbs actually work and why they look the way they look. This is, as always, the essential starting point when trying to simulate a specific case.
The operation of a usual incandescent bulb is quite simple: an electric current is passed through a tungsten (also called wolfram) filament, which is enclosed by a glass bulb containing a low pressure inert gas, to avoid oxidation of the electrically heated filament. Depending on the type of the filament, the operating heat is typically between 2000 and 3300 K (around 1727 to 3027 degrees Celsius, or 3140 to 5480 degrees Fahrenheit). This thermal increase induces radiation (also, but not only) in the human-visible light spectrum, in the form of a so-called 'black body'.
The interesting thing about this black body (which is actually an idealized physical model of a radiating/light emitting body) is that its emitted spectrum, i.e. its color, can be estimated solely from the (absolute) temperature of the black body, according to Planck's law. Conversely, one application of this is in astrophysics, where scientists can measure the temperature of a star by analyzing its spectrum. Furthermore, the movement of stars and galaxies can be determined this way, if the estimated spectrum is shifted either towards blue (getting closer) or red (moving away), due to the electromagnetic equivalent of the acoustic Doppler effect (the cosmological case of this redshift is known as the Hubble effect).
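Planck's law itself fits in a few lines, and evaluating it at a blue and a red wavelength shows why hotter filaments look whiter/bluer. The particular wavelengths and temperatures below are illustrative choices, not values prescribed by the tutorial:

```python
import math

# Physical constants (SI units).
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
K_B = 1.380649e-23   # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance of a black body (Planck's law)."""
    a = 2.0 * H * C ** 2 / wavelength_m ** 5
    b = math.exp(H * C / (wavelength_m * K_B * temp_k)) - 1.0
    return a / b

BLUE, RED = 450e-9, 650e-9  # wavelengths in meters

# The blue-to-red emission ratio grows with temperature, so a
# 3300 K filament looks whiter/bluer than a 2000 K one.
cool = planck_radiance(BLUE, 2000.0) / planck_radiance(RED, 2000.0)
hot = planck_radiance(BLUE, 3300.0) / planck_radiance(RED, 3300.0)
print(cool < hot)  # True
```

This is exactly the relationship the mib_blackbody shader evaluates for us when we type in a temperature.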
Well, this all means we have an (at least theoretically) strictly defined spectrum, or color in our case, for a glowing tungsten bulb. This color lies on the so-called Planckian locus, a curve in a particular color space, and ranges, for our needs, from visible red over white to blue. There are several black-body-Kelvin-temperature-to-color converters on the internet, but fortunately there is a standard tool that ships with mental ray, which makes our life a bit easier.
It's called, guess what, mib_blackbody, and can be found in Maya under the 'mental ray lights' tab in the Hypershade. This utility outputs the desired color, according to the temperature we feed it.

So let's model the actual light. To deliberately break with tradition, I decided to use a spherical area light (instead of the good ol' point light), placed close to the center of the actual bulb geometry, so that it's encompassed by it (Fig. 3).
Obviously, if we rendered it this way, we would face trouble due to the occlusion caused by the bulb geometry. There are several ways to get around this - we could adjust the bulb's glass shader so it handles the transparency, though we would have to increase the ray depths accordingly. Or, and that's a bit smarter in this case because we wouldn't have to mess with the ray depths, we simply exclude the bulb from shadow and reflection/refraction tracing by setting some flags in the object's shape node. Since the bulb is 'incandescent' anyway, we can neglect its shadow.

To give our light the desired color, I simply create the mib_blackbody node and connect it to the area light's color slot.
I also set the light's decay rate to 'Quadratic' - this is very important to give it a natural falloff and to obey physical rules. The intensity is left at 1.0; I completely hand this over to the mib_blackbody, where I also set a reasonable temperature for our tungsten filament (something between 2000 and 3300 K; I decided on 3000 K).
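The 'Quadratic' decay follows the physical inverse-square law: doubling the distance to a point-like source quarters the received intensity. A quick sketch (the source intensity value is arbitrary):

```python
def received_intensity(source_intensity, distance):
    """Quadratic (inverse-square) light falloff."""
    return source_intensity / distance ** 2

# Doubling the distance quarters the received intensity.
near = received_intensity(100.0, 1.0)  # 100.0
far = received_intensity(100.0, 2.0)   # 25.0
print(near / far)  # 4.0
```

This is why quadratic decay is the physically correct choice here, whereas linear or no decay would make the bulb's reach look artificial.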

I repeat all these steps for the second bulb, except that I used the same mib_blackbody node for its color, just to speed up the workflow a bit, as we assume that both bulbs are of the same type.
Because the final gathering diffuse bounces setting has a little shortcoming in Maya 8.5, I set it in the actually controlling node, which is called miDefaultOptions (type 'select miDefaultOptions', without the quotes, into the MEL command line to bring it up in the attribute editor).
Last but most important, we put ourselves into the right color space, which is sRGB, the commonly used space for things like photographs. Although we cannot precisely apply this color profile right away (at least not easily in mental ray for Maya 8.5), we simply apply a so-called gamma correction curve of value 2.2 to our image, which is usually sufficient. This implies some caution: because the textures we usually use are already in sRGB, and hence gamma corrected, we need to un-gamma them before we correct the whole image again. That seems awkward and unnecessary, but it makes total sense for a reason - if we want the (gamma corrected/sRGB) texture to look the way we are used to seeing it, we need to remove its gamma correction first, before we RE-apply it to the whole image. Odd stuff, but it makes our picture look pretty and more natural.

Thankfully mental ray has this remove-texture-gamma-and-re-apply-it thing built in already, and we simply set the desired gamma correction value in the Framebuffer > Primary Framebuffer tab of the render globals. However, mental ray wants us to specify the inverted function, which is 1/2.2 = 0.455 in our case. For more information on the gamma issue, I encourage you to read the 'Note on Color Space' in the very first part of this tutorial series.

Well, here's our first test rendering with the settings above. Straaange things are happening, I know.
The reason for this is the very close proximity of geometry to our area light - final gathering usually goes nuts on this. There's a cheap solution: we simply set the final gathering filter to greater than 0; I decided on 1, which usually does a good enough job (Fig. 12). Normally it is desirable to avoid this filter completely (i.e. leave it at 0), because it introduces a strange bias in some situations, e.g. if we lit our scene completely by HDRIs. So use it wisely, or only if you are forced to, as in our case. If you are still encountering artifacts, exclude the lamp guard and base from the reflection/refraction tracing as well.

Let's see if it helped - and yep, that looks much better.

I'm preparing for the final rendering now, by upping the general anti-aliasing quality. The final gathering needs some lifting too.
Here we go.
The last thing I added was the mia_material's built-in detail ambient occlusion, by selecting all the mia_materials and changing the Ao_on attribute to 1 (ON) in the attribute spread sheet (Fig. 16). This reveals little details without hammering the well-known and usually way too strong ambient occlusion corner-darkness onto our image.
Also, I decided to render to a higher fidelity fancy super duper 32bit framebuffer - simply because everyone does..! No seriously, at least for stills it's better of course to render to a floating point format. After all, this gives us a more peaceful sleep while the renderer works over night. However, for reasons of efficiency I decided on a 16bit half framebuffer, which is still a floating point format but less space- and bandwidth-eating. To use this, the only possible file format for now is OpenEXR - that's not a bad thing, since OpenEXR is quite fancy (for real!).
After touching up contrast and color here and there, I came up with my final interpretation.

I hope you enjoyed following this little tutorial about electric light, and join us next time for the candle light session!
part 5

CANDLE LIGHT
Ahoy, and welcome back to the fifth part of our lighting tutorial series! Interestingly, the general matter this time will technically be the same as last time, when we discussed the behavior of electric light bulbs; however, the result will be considerably different. So let's turn off the lamps and fetch the matches, to get our candle light tutorial started.

In the last tutorial we already learnt the technical aspects of heated bodies, like a tungsten filament or a wick. It became clear that, in a simplified yet meaningful way, the emitted color always has a very determined type, depending only on the temperature of the heated body. And curiously, this special rule does not depend at all on the material of the heated body. So we can pick up where we left off, and simply translate these rules to our new topic.

Let's recall the behavior of a heated 'black body'. Whenever matter is heated, it emits photons with certain intensities at distinct frequencies. This 'fingerprint' of the radiation is called a spectrum. Now, a black body is an 'ideal physical model' which absorbs all radiation and does not reflect any at all. The interesting thing about this is that the spectrum ('color') of such a body is strictly defined by physical law, and is solely dependent on the actual temperature of the body. Of course this is somewhat simplified, as the actual emission spectrum of our heated material (i.e. carbon and hydrogen, bound in, let's say, the paraffin of our candle) is neglected this way. Still, this 'ideal model' does a good job at simulating our situation.

Now that we have an idea of how to model the color of our candle light, we can start to give it shape. According to gravity and buoyancy laws (hot gases move upwards due to their lower density), the candle flame has its well-known 'drop shape'. If you ever wondered how a candle would burn at zero gravity, see the picture on the right - the hot, 'lighter' gas does not circulate ('convect') as well as down here on Earth; instead it spreads uniformly, and no oxygen (although available!) rises after it, so the flame is likely to extinguish soon.
For the sake of simplicity I decided to use a simple photograph of a candle flame as a so-called sprite, or billboard object. I already adapted the image's hue to the temperature we will be using later on, which you might want to consider too, but more on this shortly.

The billboard is then placed close to the wick, to model the flame. This is a simple and popular method of representing things that are rather complex to shape or simulate, be it flames, snow, leaves, grass, pylons, and probably an arbitrarily huge bunch of other things one could think of.
It obviously makes sense to take care of certain factors when dealing with such 'tricks', so I adjusted all the necessary render flags of this sprite, to avoid render artifacts. For example, it of course does not make sense to let this helper object cast shadows (after all, it is replacing a light emitting entity), or to leave it visible to reflections or refractions (the actual light will handle this later on with the 'highlights').
The next rational step in our abstraction of the candle light is to build the actual light emitting 3d representative. I chose a spherical area light for this job, with a little scale in the 'up' direction. I placed it close to where our 'fake' flame is, right above the wick. Since we took care of the sprite's render flags, it does not interfere with the light at all.

Now that we have our light source constructed, we shall give it life with an appropriate color. As described earlier, we have robust guidelines on how to deal with this, in order to create natural looking candle light. We only need to know the approximate average temperature at which a candle flame burns. The sources on this, however, seem to diverge quite a bit; some state a temperature of around 1300 K (~1000 degrees Celsius, or ~1800 degrees Fahrenheit) and some state it at around 2300 K (~2000 °C, or ~3600 °F). I went for the middle of these values and decided on a temperature of 1800 K, which equals around 1500 °C or 2800 °F. This is the temperature (color) we should align our candle sprite texture to, in order to yield a convincing congruence in the rendering.

There are many Kelvin-to-color converters on the internet which we could use to obtain the desired color, but luckily there is also a built-in tool that ships with mental ray for Maya. It is called mib_blackbody and can be found under the mental ray lights tab of the 'Create Render Node' menu in the Hypershade.
This node has only two attributes we need to feed: the temperature (in Kelvin, i.e. 'absolute' temperature), and an intensity value. If we wanted to really (really!) exactly simulate a candle light, or any light at all, we would have to actually know its luminous power, also called luminous flux (measured in lumens), and then we would have to convert this value into the Maya/mental ray world with some effort on both the emitting (light) and receiving (camera) side. Maya 2008 has some built-in improvements on this; however, since we are not doing a radio-/photometric scientific simulation, we simply GUESSTIMATE the intensity. I went for a value of 2500. To finally make use of this little tool, I connect it to the light's color slot - the light's intensity is left at 1.0 (this is handled by the mib_blackbody), and I also make sure the decay rate is set to 'Quadratic'.
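As a taste of what a proper photometric conversion would involve: an idealized point source that radiates uniformly in all directions spreads its luminous flux over the full sphere of 4π steradians, so luminous intensity (candela) equals flux (lumens) divided by 4π. A minimal sketch; the candle flux figure is a rough textbook value, not something from this tutorial:

```python
import math

def luminous_intensity(flux_lumens):
    """Candela of an isotropic point source: flux over 4*pi steradians."""
    return flux_lumens / (4.0 * math.pi)

# A single candle emits very roughly 12.57 lm, i.e. about 1 candela -
# the unit was historically defined by just such a 'standard candle'.
print(round(luminous_intensity(12.57), 3))  # ~1.0
```

Mapping such photometric values onto mental ray's unitless intensity is exactly the extra effort we sidestep by guesstimating.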

That's pretty much it for the scene part; let's head over to the rendering department.

We prepare the final gathering settings for quick yet meaningful convergence of the indirect illumination. We only need a few rays (32) and a coarse point density (0.5) for our preview. Of course we will refine everything for our final image. I left the final gather 'mode' at automatic, i.e. 'Optimize for Animations' and 'Use Radius Quality Control' are kept OFF.
The trace depths, however, need to be increased, along with the general raytracing settings; I decided on 2 'bounces', as well as for the diffuse contribution, which I revise in the miDefaultOptions node: although I turn the diffuse bounces ON in the render globals, they are stuck at 1 bounce due to a little bug. I want them to be of depth 2, so I adjust them 'under the hood' in the miDefaultOptions.
Before we actually render, we must take care of the color space, so it's time for our little gamma mantra (since we don't want odd and CG-ish looking, grungy true-linear shadings). Thus we put ourselves into the right color space, which is sRGB, the commonly used space for things like photographs. Although we cannot precisely apply this color profile right away (at least not easily in mental ray for Maya 8.5), we simply apply a so-called gamma correction curve of value 2.2 to our image, which is usually sufficient. This implies some caution: because the textures we usually use are already in sRGB, and hence gamma corrected, we need to un-gamma them before we correct the whole image again. That seems awkward and unnecessary, but it makes total sense for a reason - if we want the (gamma corrected/sRGB) texture to look the way we are used to seeing it, we need to remove its gamma correction first, before we RE-apply it to the whole image. Odd stuff, but it makes our picture look pretty and more natural.

Thankfully mental ray has this remove-texture-gamma-and-re-apply-it thing built in already, and we simply set the desired gamma correction value in the Framebuffer > Primary Framebuffer tab of the render globals. However, mental ray wants us to specify the inverted function, which is 1/2.2 = 0.455 in our case. For more information on the gamma issue, I encourage you to read the 'Note on Color Space' in the very first part of this tutorial series.

A quick test render yields some strange, blotchy artifacts though. This is due to the close proximity of certain objects to the area light - we would have to either move them (or the light) a little farther away, or somehow exclude them from the final gathering and reflection/refraction computation. Since we obviously have a great need to keep the light close to the candle, we are forced to take the latter solution. We simply switch OFF the corresponding render flags in the candle's and wick's shape nodes. This basically cures the bright-blotches problem. To further suppress this kind of blotches, I decided to use a final gathering filter of 1. This filter should be handled with care, and only be used as a last resort.
Another test rendering verifies this, and we have takeoff clearance. Let's raise the quality to something more usable (which basically means we are extending the flaps, to stay with the metaphor).

First, let's raise the general sampling settings. The minimum level is kept at 0, and the maximum level is set to 2, which means a maximum of 4^2, or 16, samples per pixel (the rule is 4^n, where n is the sampling level). The contrast is lowered to 0.05 for each channel. I usually use a narrowed Gauss filter of width 2.0 (the default is 3.0!) both in x and y, which gives sharp, fast and nice sample filtering.
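The 4^n rule above fits in one line; with min/max levels of 0/2, the adaptive sampler spends between 1 and 16 samples per pixel:

```python
def samples_per_pixel(level):
    """mental ray's adaptive sampling rule: 4^n samples at level n."""
    return 4 ** level

MIN_LEVEL, MAX_LEVEL = 0, 2
print(samples_per_pixel(MIN_LEVEL))  # 1  - flat areas get one sample
print(samples_per_pixel(MAX_LEVEL))  # 16 - contrasty areas get up to 16
```

The contrast threshold is what decides, per pixel, how far between those two extremes the sampler actually goes.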
I also turned on the 'detail ambient occlusion' mode of the mia shaders. All you need to do is select all the mia shaders in the Hypershade, open up the attribute spread sheet, and set the Ao_on attribute to 1 (ON). This ensures we see all the little details that are too small to be captured properly by a rather coarse final gathering solution.
Last but not least, we could go for a floating point framebuffer if we liked. To do so from within the render view, without going to a batch render, we simply have to switch the framebuffer to either RGBA (Float) or RGBA (Half), turn 'Preview Convert Tiles' ON and 'Preview Tonemap Tiles' OFF, and use an appropriate file format, like OpenEXR.
That's it. I came up with this final interpretation after going through a few color, white-balance and contrast image operations.

I hope you enjoyed following our little candle light exercise as much as I enjoyed writing it! And I'd be glad to welcome you next time to our final part, which is probably the most challenging and most definitely the eeriest one: underwater lighting!
part 6

UNDERWATER
Hello and welcome to the sixth and last part of our environment lighting tutorial series! In the preceding parts we discovered the world of natural environmental lighting, artificial lighting, and mixtures of the two. In our last feature we will be discussing a rather special case: an underwater environment. This implies a more or less 'unusual' prerequisite. More precisely, we will need a truly visible 'medium'; let's call it a volume, or ether. Most often people tend to fake such a volume by simply using so-called 'volume shadows' on their 3d lights, i.e. lights casting a visible 'light ray' into an apparent (though nonexistent) volume. This is not the real deal, but it is a favored method of both professionals (because it renders fast, which is essential especially for animations) and beginners (because it's rather easy to set up and... well, I don't know. But it's like the No.1 thing people wish to do when getting their hands on a 3d program). Anyhow, we will be going the way of the cowboy, or cowgal, and do it the tough style. Since this is all about rendering stills, we can afford this extra nuance of 'bought' prettiness.

Well. So we're back aboard... though that might be a rather inappropriate description - we are sunk! The ship's body is below the
waterline and filled with seawater. Believably illustrating this situation shall be the challenge of our tutorial. We will also be
creating an eerie, unfamiliar, uncommon lighting to support the feeling of being in a different world.

Before we start to do anything, we need to have a few thoughts on this different world, because this time we actually have a
whole different (or let's say: a more exaggerated) situation than usual. There are mainly two things we need to consider: first,
WHAT makes underwater look underwater, and second, HOW can we achieve/simulate it. These might sound trivial - and in fact the
circumstances are so trivial that most people seem to forget about them.

Let's begin by comparing our usual situation (land / more or less dry air) with our new situation (under the sea). In our habitual
environment - the office, the living room, or wherever inside a building - we usually do not have much of a visible 'volume',
except if we romp around and raise some dust. When this dust gets into the air it naturally, like any matter, reflects light. Thus it
becomes 'visible'. The more dust we raise into the air, the 'thicker' the apparent volume gets, and the light rays seem to become
actually visible - although all we see is the dust reflecting them. There is a nice (albeit philosophical) quote by André Gide that
aptly says: "Without the dust, in which it flashes up, the sunray would not be visible".
Now there are more 'things' than plain dust in the air we
breathe; in fact there are tons of gases and particles, which all
make up what is commonly called the 'aerosol'. This rather
invisible mixture of microscopic solid particles and liquid
droplets has the same reflecting, or essentially scattering,
impact on incident light as the regular (substantially larger)
airborne dust.

As a reference I like to use
http://www.underwatersculpture.com by Jason Taylor, which
has various and no less beautiful photographs on the
day-to-day-things-underwater subject.

This has an interesting effect: when light gets scattered (i.e.
forced to diffusely deviate from its naturally straight
trajectory) by particles much smaller than its wavelength (like
the aerosol ingredients), the so-called 'Rayleigh scattering'
occurs. Named after the physicist Lord Rayleigh, this general
approximation rule says that the scattering 'probability' of a
light ray depends on its wavelength - the shorter
wavelengths (bluish, ultraviolet domain) have a higher
chance of getting scattered than the longer wavelengths
(reddish, infrared domain) (Fig. 1). Have you ever asked
yourself why the sky is blue? THIS is the answer. The rather
neutral, virgin and 'white' sunlight enters the earth's
atmosphere, and distinct portions of it get scattered by the
aerosol - since the blue part of the light has a largely higher
probability of getting scattered, we seem to be surrounded
by a diffuse blue environment. As opposed to a sunset or
dawn, where mostly unscattered light from the direction of
the sun reaches the observer - and appears red, because the
shorter, bluish wavelengths have already been scattered
away along the light's long path through the atmosphere.
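The wavelength dependence can be sketched in a few lines of plain
Python. The 1/wavelength^4 proportionality is the textbook Rayleigh
approximation; the two wavelengths are merely illustrative picks for
'blue' and 'red' light:

```python
def rayleigh_strength(wavelength_nm):
    # Rayleigh scattering strength is proportional to
    # 1 / wavelength^4 (relative units are enough here)
    return 1.0 / wavelength_nm ** 4

blue = rayleigh_strength(450.0)   # bluish light
red = rayleigh_strength(650.0)    # reddish light
ratio = blue / red
# blue gets scattered roughly (650/450)^4, i.e. about 4.35
# times as strongly as red - hence the blue sky
```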

Fair enough. Much pondering about the air, but what about
our concrete underwater situation? Well, it's basically the
same story! The ocean IS blue. Not only because it reflects
the sky, but also because of the Rayleigh rules explained
above. These scattering rules basically apply to anything at
any time. In CGI we usually neglect them, or often fake them
based on observational facts. And after all, computing true
wavelength-based Rayleigh scattering is a seriously complex
task, and it's questionable whether the effort can be justified,
since its mostly rather marginal effect would 'steal' the
rendering time we could spend on other things that make
our image pretty.

Have you ever asked yourself why e.g. Maxwell Render
outdoor images look faint, whilst the indoor ones look pimp?
Because they neglect this light scattering (at least at this
point in time)! The scattering effect is not as evident in
indoor/interior renderings, but has a large impact on the
'naturalness' of outdoor, larger scale situations. The Rayleigh
rule is omnipresent, unless you're in a complete vacuum.

And it is even more evident in 'thicker' mediums, or volumes,
like ocean water, which is full of more or less tiny
particles. The only difference here is that the light gets
scattered and absorbed earlier, which is often referred to as
a higher 'extinction'. A light ray entering such a volume has a
certain probability to either get scattered forwards (along its
original trajectory), backwards (the direction it came from),
something in between, or to get completely absorbed by
some particle. Every volume has its own characteristics for
how much of each of the former criteria applies, not to
forget that the wavelength of the light ray looms largely
over all this...

This behavior can be modelled, or simulated, by a so-called
ray marching shader. We are not going to obey the
wavelength-dependent rules strictly (it'll be more of a
guesstimation), but let's finally get our hands on our actual
scenery.
To build up our medium, I decided to simply create a large
surrounding cube as a 'container' for our volume. This is the
simplest and most fail-safe way to set up this kind of stuff.
We could alternatively build our volume through our
camera's volume shader slot, which would have basically the
same effect, except that when a ray hits 'nothing' this
second way would simply return the un-approximated
environment color. Besides, this alternative way could
take longer to render, because the ray marcher could
possibly take some more and unnecessary steps further into
the depth (not in our case, however).

The ray marching utility we will be using is the rather ancient
though still nicely working mental ray 'parti_volume' shader,
which can be found under the 'mental ray Volumetric
Materials' tab in the hypershade. It is not to be confused
with the parti_volume_photon shader, which is used for volume
photon tracing - but we won't use photons to obtain indirect
illumination in this tutorial anyway. Our method will be a bit
less accurate, but still nice and fast enough to create our
desired look and feel.
Let's have a look at the volume shader. Foremost, we assign
a new 'black' surface shader to our cube container, and
connect the parti_volume to its shading group's 'Volume
Shader' slot. That's pretty much it for the set-up part, and we
can have a closer look at the parti_volume's diverse
attributes.
Most important for our needs right now are the scattering part
(Scatter, Extinction), the so-called scatter lobes (R, G1, G2 -
more on these later), and the ray marching quality settings
(Min_- and Max_step_len). The other attributes, which we will
neglect, are for filling the volume only partially
(Mode - 1 means 'do it' - and Height), for adding a noise, or
rather density variation (Nonuniform, where 0.0 means 'no
noise'), and stuff we really don't need (Light_dist, Min_level,
No_globil_where_direct). As you can see, there's lots of
techy stuff, but we'll concentrate on the essential parts of it
(Fig. 4).

First the scattering factors, Scatter and Extinction. Scatter
basically controls the color of the medium and is closely
related to Extinction, which controls the density of the
medium. Both go hand in hand, and the hassle about this is
that to work with half-way rational values we need a
quite dark Scatter color and a quite low Extinction factor - if
either of the two goes into higher extremes we'll typically end
up with undesired results. So I decided on a value of RGB
0.035, 0.082, 0.133 for the Scatter color, which is a natural
bluish tint. Since we don't do wavelength-dependent
calculations, I chose this predominant color to mimic
and support the Rayleigh rules explained above. For the
Extinction I used a seemingly low value of 0.004, but keep in
mind that this is all correlated with the Scatter color, and
very sensitive. This value will give us an extinction that
swallows almost all of the light in the rear corners, and that's
more than enough.
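To get a feeling for how sensitive that Extinction value is, here is a
small plain-Python sketch of the underlying exponential falloff (the
Beer-Lambert law). The distances are illustrative, assuming centimeter
scene units as used in this scene:

```python
import math

def transmittance(extinction, distance):
    # Beer-Lambert law: fraction of light that survives straight
    # travel through a homogeneous medium
    return math.exp(-extinction * distance)

# extinction 0.004 per scene unit (centimeters):
near = transmittance(0.004, 100.0)    # 1 m  -> about 67% survives
far = transmittance(0.004, 1000.0)    # 10 m -> about 2% survives
```

So a seemingly tiny 0.004 already eats nearly all of the light over the
depth of a room, which matches the 'rear corners' observation above.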

Now about the scattering lobes. That's a bit more difficult at
first glance. Basically, a negative value for G (either G1 or
G2) means a backscattering lobe (back into the direction the
light ray came from) and a positive value means a forward
scattering lobe (forward along the original trajectory of the
light ray) - and R simply controls the mixture between G1 and
G2. So you typically choose one backward scattering lobe (i.e.
a negative value for G1) and one forward scattering lobe (i.e.
a positive value for G2), and weight both with the R
attribute - whereas 1.0 for R means 'use only G1', 0.0
means 'use only G2', and 0.5 would weight both equally... I
know - there must have been some really funny guy at
mental images who wrote this shader, and I'm pretty sure
he's still laughing up his sleeve.

Anyhow. I chose a rather forward scattering volume, but I
encourage you to experiment with the values. The forwardish
scattering creates these nice glow-like appearing light
sources when the light points towards the camera (it's vice
versa if the light is e.g. behind the camera, of course). So I
used R 0.1, G1 -0.65, G2 0.95 for my final image.
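The lobe behavior can be illustrated with the classic Henyey-Greenstein
phase function. Whether parti_volume uses exactly this formula internally
is an assumption on my part, but the G convention matches (negative =
backward, positive = forward), so it gives a good feel for the R/G1/G2
interplay:

```python
import math

def henyey_greenstein(cos_theta, g):
    # classic Henyey-Greenstein phase function: g < 0 favors
    # backscattering, g > 0 favors forward scattering
    return (1.0 - g * g) / (4.0 * math.pi *
                            (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5)

def two_lobe_phase(cos_theta, r, g1, g2):
    # R weights the two lobes: 1.0 -> only G1, 0.0 -> only G2
    return (r * henyey_greenstein(cos_theta, g1) +
            (1.0 - r) * henyey_greenstein(cos_theta, g2))

# the values used for the final image (R 0.1, G1 -0.65, G2 0.95):
forward = two_lobe_phase(1.0, 0.1, -0.65, 0.95)    # straight ahead
backward = two_lobe_phase(-1.0, 0.1, -0.65, 0.95)  # straight back
# with these settings, forward scattering clearly dominates
```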

Last but not least I trimmed the Min_- and Max_step_len to
50.0 each. These attributes decide at which distances (step
lengths) the rays stop to look up a volume sample - hence
the rays 'march' through the medium, and the lower the step
lengths, the more samples will be taken, the better (less
noisy) the image quality gets, and the longer it'll take to
render. If you think it takes too long to render, boost these
values up. On the other hand, if you think you get too much
noise and artifacts in your image, reduce them. Generally
however, the manual proposes to use a value of about 10
percent of the Max_step_len for the Min_step_len, so you
might want to try this as well (5.0 min / 50.0 max). It is worth
mentioning that the step length values are in actual scene
units, so in our case a volume sample is looked up every 50
centimeters.
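A quick back-of-the-envelope sketch of what the step length means for
the ray marcher (plain Python; the 20 m room depth is just an
illustrative figure, not measured from the scene):

```python
def max_volume_samples(ray_length, min_step_len):
    # upper bound on volume lookups the ray marcher can perform
    # along a single ray of the given length
    return int(ray_length / min_step_len)

# a ray crossing 20 m (2000 cm scene units) of medium:
coarse = max_volume_samples(2000.0, 50.0)  # step 50.0 -> up to 40 lookups
fine = max_volume_samples(2000.0, 5.0)     # step 5.0  -> up to 400 lookups
```

This is why lowering the step lengths by a factor of ten can blow up the
render time by roughly the same factor.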
Ok, we have our medium set up and running (almost); now
let's create some lights to make it shine. Since our volume
shader relies more on direct than on indirect light, we
cannot rely much on the later final gathering for the 'diffuse'
incoming illumination. That's why I created two area lights
for this job, one above the hatch, and one right behind the
rear windows. For the main light source, however, I used two
spot lights shining in from outside.
For these main lights I used a mib_blackbody helper utility at
2200 Kelvin to obtain a rather warm and diver-flashlight-like
color (the method of using a blackbody temperature as a color
source has been explained more extensively in the two
preceding tutorials!). Though one could also imagine it's
the sun shining in through the windows - you decide, and feel
free to play around with it (to put it with Bob Ross: there are
no failures, only happy accidents!).
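If you're curious why 2200 Kelvin comes out so warm, a small
plain-Python evaluation of Planck's law (which is what a blackbody
helper like mib_blackbody is ultimately based on) shows the red end of
the spectrum dominating at that temperature. The two wavelengths are
again just illustrative stand-ins for 'red' and 'blue':

```python
import math

H = 6.62607015e-34   # Planck constant (J*s)
C = 2.99792458e8     # speed of light (m/s)
K = 1.380649e-23     # Boltzmann constant (J/K)

def planck_radiance(wavelength_m, temperature_k):
    # spectral radiance of an ideal blackbody (Planck's law)
    a = 2.0 * H * C * C / wavelength_m ** 5
    b = math.exp(H * C / (wavelength_m * K * temperature_k)) - 1.0
    return a / b

red = planck_radiance(650e-9, 2200.0)
blue = planck_radiance(450e-9, 2200.0)
# at 2200 K the red wavelengths radiate far more strongly than
# the blue ones -> warm, torch-like light
```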
The two area lights need a mixture of natural blue (due to
Lord Rayleigh's stuff) and green (due to the many small greenish
micro organisms floating in the sea, like plankton or algae).
This mixture is commonly referred to as cyan, turquoise,
mint or cobalt, depending on which color is weighted - or,
most felicitous: aquamarine.
So far so good? Uhm... there's one last very important thing
we need to consider. Remember the funny shader
programmer? He decided to omit every light that is NOT on
his list. That's a strange attitude, but not stranger than the
other stuff in the parti_volume, no? So we need to link every
light on its light list. You can either type in the (case
sensitive!) name of the light, or MMB drag and drop the light
transform from the outliner onto a spare field (you need to
re-select the parti_volume each time you connect one light,
so the mechanism can add another open slot).
Now that we have this part running, let's think about adding
a few details that would add more to the underwater
impression. In Maya we fortunately have the Paint Effects
system, which is easy to use and even has some built-in
'underwater' brushes. I used some sea urchins here and
there, a hint of shells, and a few starfishes all around. I also
added a little seaweed to some corners.

To be able to render the Paint Effects with mental ray we
need to convert them to regular polygons. I also converted
their Maya shaders to mental ray mia_materials, which is
always a good idea to obtain a consistent shading behavior
across the scene, since in our case everything else is built
with them as well. This needs to be done manually, however.
That's it - we're finally ready to render. I used a fixed sample
rate of 2/2 this time. This is quite a brute-force approach, and
you might consider using an adaptive sampling of 0/2, but be
advised to tune up the sampling of the area lights along with
it, since they are all left at 1/1 right now. You should also
consider lowering the parti_volume step lengths if you
encounter artifacts with the adaptive sampling. It is also
worth mentioning that to actually 'cast' a shadow into the
volume, we need a shadow (and general max-) ray
trace depth of at least 4.

For the indirect illumination I chose a rather low-quality-looking
final gathering with diffuse bounces. This time, due to the
volume stuff, the final gathering will not add all too much to
the image, but it still makes a nice contribution to the
general look of our piece.
Before we push the render button we need to chant the
gamma mantra though, as always. Since we want our image
to look nice, natural and appealing, instead of dark, smudgy
and CG-ish, we need to pull it from its default color space,
i.e. mathematically linear, into the one we are used to seeing,
i.e. gamma corrected sRGB. There's a deeper explanation of
this matter in the very first of the tutorials, the one about the
sunny afternoon. To recall the essential basics however, let's
repeat why we need to care about the gamma issue BEFORE
we render out our image. As mentioned, any renderer
does its internal calculations in a mathematically linear
manner, which foremostly is a good thing. We could take this
truly linear result into our post application and gamma
correct it there (because gamma correction - putting things
into the sRGB color space - is desirable in almost any case:
probably almost everything you see, i.e. photographs and
pictures, is in this sense already gamma corrected, without
your knowledge). IF - and as you can see, that's a big IF - we
didn't use image textures, which are ALREADY gamma
corrected from the outset. When using regular image files,
which usually have the sRGB/gamma correction 'baked' into
them a priori, we need to remove this gamma correction
before we RE-apply it to the whole image. Makes sense, no? I
know it's confusing, but unless you want to have
double-gamma-washed-out-looking textures, we need to obey
this little rule. Applying the right gamma to the whole image
afterwards isn't enough if we want the textures to look as
they should (i.e. as we are used to seeing them, in their sRGB
color space). Now, many people don't care about this whole
issue and thus render in plain mathematically linear space.
And wonder why their images look strange and unnatural,
and have this strangely dark and smudgy look and blown out
highlights and overbright areas all over. Especially realtime
3D has yet to 'learn' that mathematically linear rendering is
not what the eye is used to seeing in nature (the human
brain arrives at a 'gamma corrected', or rather
logarithmically corrected, image too, if you will - although
human perception is far more complex, of course).

So we want to have it gamma corrected/sRGB. Our renderer,
mental ray, has a built-in function to automatically 'remove'
the gamma from the textures before rendering, and apply
the inverse of this gamma to the rendered pixels/image. To
do so, we go to the Primary Framebuffer tab in the render
globals and put the appropriate gamma value, which is 1/2.2
or 0.455, into the Gamma field.
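The whole de-gamma/re-gamma trip boils down to two tiny functions. This
plain-Python sketch uses the simple power-law approximation with gamma
2.2, not the exact piecewise sRGB curve:

```python
def linear_to_srgb_simple(value):
    # encode: apply gamma 1/2.2 (about 0.455) to a linear value
    return value ** (1.0 / 2.2)

def srgb_to_linear_simple(value):
    # decode: remove the 'baked-in' gamma from a texture value
    return value ** 2.2

# a linear mid-grey of 0.5 displays much brighter once encoded
encoded = linear_to_srgb_simple(0.5)   # about 0.73
# decoding again restores the original linear value
decoded = srgb_to_linear_simple(encoded)
```

Note how skipping the decode step on a texture means it gets the 2.2
encode twice - exactly the washed-out look described above.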
As a last enhancement, let's turn on the 'detail ambient
occlusion' mode of our mia_materials. It should all be set up
already by default; we simply need to switch it on by
selecting the mia_materials and raising the Ao_on value from
0 (off) to 1 (on). We can do this easily for all selected
shaders at once by using the attribute spread sheet, from
the Window > General Editors > Attribute Spread Sheet
menu.
We should come up with a render similar to what I got. I
rendered to a regular 16-bit image format, and took it into
Photoshop for some contrast and color adjustments. That's
the most fun part of it.
After playing around with the white balance, crushing the
blacks, enhancing certain color elements (i.e. the blues and
aquamarines), and after having fun with the 'liquify' function
in Photoshop, I came up with my final interpretation. I also
put a 'dust/grime' image on top, to support the feeling of a
thick medium. I hope you like it.

And I hope you enjoyed following our environment lighting
tutorial series, as it is time to say goodbye for the time
being. I have had a great time sorting out my guesses on all
the subject matters, and most definitely learned a lot along
the way - as you hopefully have as well. If you have any
questions, criticism, comments, additions or whatever input
on the tutorials or on me, don't hesitate to contact me in one
of the variously available ways.
Florian Wild http://www.floze.org/
