Chapter Eighteen

Effect Files
Introduction
As our 3D worlds begin to use a larger variety of rendering techniques, developing code that draws all
of the polygons in the scene correctly can become a complex web of conditional code paths. Previously
we have been dealing with very simple demonstration scenes which require only a few rendering
techniques to be employed and, as such, we were able to get away with hardwiring this logic into our
rendering functions. However, you can certainly imagine how this approach to rendering quickly
becomes unmanageable when dealing with the advanced visuals we see in today’s top titles. It is very
common to deal with scenarios that can require hundreds or even thousands of different device state
configurations during the rendering of a single frame.

When we view the entire process of rendering a scene from a high level it all looks rather simple. At
load time, the polygons will be grouped into attribute groups -- essentially just batches of polygons that
all share the same device state and thus can be rendered together with a single draw call. When it comes
time to render the scene, we iterate the list of attribute groups and render them one batch at a time. The
rendering of a given attribute group is normally the core rendering functionality in our engine and
previously we employed special case code paths to handle certain attributes. For example, some
attributes might require lighting to be disabled or alpha blending to be enabled. Some might require
multiple textures to be set while others might render using multiple draw passes. Ultimately, rendering
any attribute group, regardless of the complexity of the technique we wish to employ, boils down to a
two step process.

1. Set device states

2. Draw primitives

In the first step we must configure the device with all of the states required to render the polygons
correctly. The states that we configure in the first step tell the device things like how to transform from
model space to screen space, how to light and shade our primitives, how textures should be applied, etc.
In the second step we issue the DrawPrimitive call that sends our primitives through the pipeline, using
the device states that we have just set to influence the final output of those primitives on screen.

There will certainly be times when a given attribute may have to be rendered using multiple passes to
achieve the desired effect, so we might imagine that steps 1 and 2 could be repeated for a given attribute
group for each unique pass it needed to employ:

For each pass:

1. Set device states

2. Draw primitives

Of course, since device render states remain set until otherwise changed by us, we typically add some
housekeeping code to restore device state after a given attribute group has been rendered:

For each attribute:

For each pass:

1. Cache current states

2. Set new states

3. Draw primitives

4. Restore previous states
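
In C++ terms, this housekeeping might look something like the following sketch (the particular states
cached and the variable names are purely illustrative):

DWORD OldLighting, OldBlend;

// 1. Cache current states
pDevice->GetRenderState( D3DRS_LIGHTING, &OldLighting );
pDevice->GetRenderState( D3DRS_ALPHABLENDENABLE, &OldBlend );

// 2. Set new states required by this batch
pDevice->SetRenderState( D3DRS_LIGHTING, FALSE );
pDevice->SetRenderState( D3DRS_ALPHABLENDENABLE, TRUE );

// 3. Draw primitives
pDevice->DrawPrimitive( D3DPT_TRIANGLELIST, BaseVertex, PrimitiveCount );

// 4. Restore previous states
pDevice->SetRenderState( D3DRS_LIGHTING, OldLighting );
pDevice->SetRenderState( D3DRS_ALPHABLENDENABLE, OldBlend );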

The goal here is to make sure that we don’t trample on other primitives’ states as we transition between
different batches. As we have seen in prior lab projects, while this is not necessarily difficult code to
write, it can become a bit laborious to manage when renderable objects use lots of different states. In the
simpler demos that we wrote in Module I we saw how this process can often degenerate into ‘spaghetti
code’ consisting of multiple conditional code paths to handle each specialized rendering case. Going a
step further, this code can get even more complex as we are forced to factor in the wide variety of end-
user hardware capabilities that can exist at runtime. We often have to implement multiple code paths to
render a given attribute group in a variety of ways so that our software still runs on low-end systems, but
high-end users are able to enjoy the game in its full visual glory.

In Module I, our CTerrain class had two code paths in its rendering function to draw the terrain in
multiple passes for hardware that did not support two simultaneous texture stages. We could say in this
instance that our CTerrain class had to implement two different rendering techniques. When two stages
were present, the diffuse map and detail map were assigned to the first and second stages and a texture
operation was performed to blend the two maps together in a single pass (i.e., drawing all terrain
polygons one time). If two stages were not supported, a fallback technique was provided that rendered
the terrain in two passes. In the first pass, the terrain was rendered with only its diffuse map and in the
second pass the single texture stage was assigned the detail map. During the second pass, the alpha
blending pipeline was used to blend the result with the color data in the frame buffer from the previous
pass. Although the results were quite similar, the techniques were implemented very differently. In the
first case the blending happened in our texture stages, but for the fallback technique the blending
happened via the alpha blending pipeline.

Given the above example, you can understand that as we integrate more cutting-edge techniques that
require more modern hardware features, we will also require an increasing number of fallback
techniques. Imagine trying to render a polygon soup comprised of hundreds of different attribute
groups, all requiring different rendering techniques and multiple fallbacks. Hardcoding everything we
need to accommodate this data directly into our application is certainly not impossible, but it will
definitely make for some very complex and untidy code that is hard to maintain and upgrade.

Of course, there are other reasons why hardwiring per-attribute states directly into our application’s
rendering code is usually a bad idea. Whenever you wish to add a new rendering technique, it means you
have to write new code and recompile the application. This approach is very prone to introducing error
as the code becomes more complex. It also means that whenever the artist is constructing a game level
and decides he or she would like to use a new technique or set of device states to render a given
polygon, a report needs to be made to a programmer who then has to add yet more code paths to the
engine to fulfil that request.

Finally, implementing device setting logic in this manner also means that the moment your application
ships, its functionality is essentially fixed. If the design team decides to release additional levels for the
game, they will be limited to constructing levels in which only the very specific device configurations
and rendering techniques supported by the engine are used.

So the problems with hardcoding device state into our application are essentially:

• Complex conditional code that is hard to maintain.

• Additional rendering techniques require recompilation of the application.

• Decreased productivity, since artists and designers must wait on a programmer for every new rendering technique.

• Static engine functionality after application ships.

• Having to write robust housekeeping code for state saving/restoration between batches.

In order to remove these limitations from our applications, we can write a generic rendering subsystem
that can manage all of our attributes without the need for special case code paths. In order to implement
such a system we can turn to a method we have actually utilized several times throughout this series --
scripting.

You may recall in Module II that we built a reasonably comprehensive generic animation "action"
system that allowed us (or an artist) to define various different animation sets and transition properties in a
data driven fashion. This is an example of a generic code component that was ultimately reusable
because it used a form of scripting. In this case, it was the definition of collections of animation sets that
would collectively form the action that we wanted the character to perform in response to a given event.
This was scripted via the use of an .ini file that contained such things as the animation set name,
transition mechanism, blending details, and so on. This data-driven approach meant that the same code
could be used to apply an infinite number of character actions simply by plugging in a new action
definition script (i.e., the ini file).

It stands to reason then that this methodology can be applied to general rendering of our scenes as well.
Instead of placing all of the required state setting code in our rendering functions, we could simply have
our artist assign a script to each polygon as part of the scene building process just as they might assign a
texture or material. This script might be nothing more than a simple text file describing all of the various
device states that need to be set to render the polygon correctly. When the scene is loaded, polygons
could be batched into attribute groups based on the script file they have been assigned. This works
reasonably well because we know that many polygons in the scene will share the same device settings,
and can thus use the same script file.

Were we to develop such a system, we would first have to decide on a language to use when writing the
scripts. It would need to be simple enough so that using it isn't nightmarish, but comprehensive enough
to support all of the transform states, render states, texture stage states, etc. supported by the device. For
example, here is how a simple script that turns on alpha blending, enables lighting and sets the source
and destination blend modes of the pipeline might look:

AlphaBlending = TRUE
LightingEnabled = TRUE
SrcBlendMode = SrcAlpha
DstBlendMode = InvSrcAlpha

Our rendering engine would of course need a new code module that understands how to load, parse, and
execute such scripts. Loading such scripts as above would be trivial since they could be simple text-
based files parsed one line at a time. Executing those scripts would also be pretty simple, since
instructions within the script would just be mapped to specific device state calls. Our script parser,
assuming it was to execute on the fly (versus some form of pre-compilation into explicit instructions),
when encountering a line such as:
AlphaBlending = TRUE

would simply call:

pDevice->SetRenderState( D3DRS_ALPHABLENDENABLE, TRUE );

Hopefully you get the idea. Essentially, each command in the script would map to code that calls
SetRenderState, SetTransform, SetSamplerState, SetTextureStageState, etc. With such a system in place,
our rendering code would now instruct the script engine to execute the script for the current attribute
group about to be rendered and the device would receive all appropriate states for the polygon batch
about to be drawn.
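
As a rough illustration (the ExecuteCommand helper and its signature are hypothetical, not part of any
real library), the heart of such a parser might be little more than a lookup that maps each command
name to the corresponding device call:

void ExecuteCommand( IDirect3DDevice9 * pDevice, const std::string & Name,
                     const std::string & Value )
{
    // Convert the textual value into something the device understands
    BOOL bValue = ( Value == "TRUE" );

    // Dispatch the command to the matching device state call
    if ( Name == "AlphaBlending" )
        pDevice->SetRenderState( D3DRS_ALPHABLENDENABLE, bValue );
    else if ( Name == "LightingEnabled" )
        pDevice->SetRenderState( D3DRS_LIGHTING, bValue );
    // ... one case per command supported by our script language
}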

The following pseudo-code shows how our rendering function might look, if we had such a device
scripting module in place.

for each ( AttributeGroup pAttribute )
{
    ScriptingComponent->Begin( /* Cache Current Device State */ )
    ScriptingComponent->ParseScript( pAttribute->Script )
    DrawPrimitives( /* ...that match current attribute */ )
    ScriptingComponent->End( /* Restore Previous Device State */ )
}

In this example, the Begin method of our scripting module records all states currently set on the device
and the End method restores those states after the script has been executed and the polygons have been
drawn. This way we ensure that no stray states remain set that could affect the rendering of attribute
groups which are to be visited subsequently. It is also worth paying special attention to the fact that the
scripting component’s ParseScript method in this example would not draw the polygons but would only
configure the device state based on the data in the input script. You can see that after the script has
executed, we assume that all device states for the current attribute group have been set and that the
device is ready for us to render our primitives in the usual fashion (using DrawPrimitive, etc.).

This is starting to look quite elegant as a generic solution, but as mentioned earlier, we may require a
script to describe the rendering of a batch of polygons over some number of passes. Furthermore, we
know that when we do wish to render polygons using a technique that requires multiple passes, the
device state is going to be different in some way for each pass. So, our script needs a way to specify that
a given attribute group needs to be rendered with multiple passes and the different device configurations
per pass. This could be handled with an extension of our current approach to support specifying a
number of passes and providing the per pass device state configurations inside the script.

[General]
NumberOfPasses = 2

[Pass 1]
AlphaBlending = FALSE
LightingEnabled = TRUE

[Pass 2]
AlphaBlending = TRUE
LightingEnabled = FALSE
SrcBlendMode = SrcAlpha
DstBlendMode = InvSrcAlpha

With this type of script, the scripting component would obviously have to understand the concept of n
passes and the rendering code would be updated to look something like this:

for each ( AttributeGroup pAttribute )
{
    ScriptingComponent->Begin( /* Cache Current Device State */ )
    ScriptingComponent->SetCurrentScript( pAttribute->Script )
    ScriptingComponent->GetNumPasses( &NumPasses )

    for ( i = 0; i < NumPasses; i++ )
    {
        ScriptingComponent->Pass( i )
        DrawPrimitives( )
    }

    ScriptingComponent->End( /* Restore Previous Device State */ )
}

This is a slight shift, but a logical one. We still use the Begin and End methods to cache and restore
device state respectively and inform the scripting system of the current script we wish to use -- the one
assigned to the attribute group we are about to render. Note however that now we fetch the number of
passes defined in the script file and set up a loop. Each pass through the loop is an additional drawing
pass for the current polygon batch. During each iteration we tell the scripting system which pass we are
currently executing so that it can set the device states specified in the script for that pass. Then we can
just draw our polygons as usual.

So now we have sketched out a pretty minimal design for a device state scripting system to bring order
to the chaos of our rendering code. Although writing a data-driven scripting component would not be
terribly difficult (at least not the very limited one outlined above!), fortunately for us we don’t have to
worry about doing it at all.

18.1 Introducing Effect Files


Microsoft® introduced Effect Files and the Effect File Framework in DirectX 8 to address device state
management issues and to provide a way to describe rendering techniques to the D3D pipeline from
outside of the compiled application. At their simplest, effect files are external scripts that can be written
by the programmer, artist, or level designer in any basic text editor (e.g., Windows Notepad). They
describe which states should be set in order to correctly render the polygons that have had that particular
effect (script) assigned to them. Effect files contain the rendering state management logic outside of the
main application so that they can be easily updated without having to recompile the entire program when
changes are made. When developing a rendering architecture that uses effect files, the effect file itself
usually becomes one of the primary sorting keys for polygons (along with textures and materials) and
code becomes greatly simplified. The rendering system essentially becomes data-driven via these scripts
so a technical artist can quickly create and experiment with new ideas immediately without having to
ask the engine programmer to build specialized functionality.

In practice, programmers (often collaboratively with artists) will generate a pool of effect files
representing a variety of different visual effects making them available for assignment to scene polygons
in the level editor tool. For example, an artist might have a handful of polygons in the level that are to be
multi-textured and rendered transparently. In such a case, an effect file would be written that describes
the texture stage states and the render states that need to be set in the Direct3D pipeline for this to
happen using the effect system’s simplified scripting language. This effect file, which we might call
“MultiTexture.fx” would then be assigned to those polygons. We will examine the contents of an effect
file in a moment, but for now just know that it is a simple text file that specifies a series of rendering and
transformation states that should be set by the rendering engine prior to drawing the polygons that were
assigned the effect. Also notice that we named the file with a .fx extension. This is the common practice
for files that contain DirectX effect scripts.

When loading the scene, each polygon (or attribute) will provide the application with the name of the
effect file that contains its rendering technique. The engine can then load that effect file (if it hasn’t done
so already) and make sure that the effect is used when rendering any such polygons when the time
comes. Where we have previously batched polygons into attribute groups via their common texture and
material, now we will also store the effect script that should be used to configure the device for all the
polygons in that group.

However, effects in this context are much more than just text files that we can leverage to remove the
conditional state setting logic from our rendering code. DirectX 9 provides an interface (ID3DXEffect) that
represents each loaded effect file and whose methods we can use to load, compile, and execute the effect
prior to rendering the polygons that use it. By using just a few simple methods in the ID3DXEffect
interface, we can inform the D3D pipeline to read the effect file and set all the appropriate render states
and texture states contained within that file. Regardless of whether a particular effect represents a single
texture opaque render or a multi-texture transparent render, our rendering code handles each group of
polygons in the same way.

Even if this represented all of the benefits that the effect system gives us, we would be pretty happy with
it and consider integrating it into our projects. But this is not the end of the story. With effect files we
can not only set different fixed-function device states, we can also completely override the fixed-
function transformation, lighting, and texture blending code in DirectX 9 with just as much ease. That is,
we can supply our own programs to accomplish those tasks (to be executed on the GPU), giving us
nearly total control over every step of the pipeline. These mini-programs that we will write are called
shaders, and we will be talking about them in the next chapter and throughout the rest of the course. For
now though, just know that we can embed these custom shader programs right inside our scripts.

One of the more useful and practical features offered by the effect system is support for the development
of multiple techniques suitable for different hardware configurations. When the effect file is first loaded
and parsed (via D3DX interfaces/functions), the available techniques are examined and the one that
works best on the current system can be selected. Many techniques can be defined in a single effect file
allowing you to make the effect as backwardly compatible with older systems as possible whilst still
providing cutting-edge techniques for more modern hardware. Our core rendering code does not have to
be concerned with matters such as whether the end user’s system supports multi-texturing or not -- we
will have already determined which rendering technique in the effect file should be used on the current
hardware during the effect file loading stage. As a result, not only do we get compatibility across a
variety of hardware, we are also providing some degree of future-proofing to our code. When a new
graphics card comes to market down the road that provides a new feature that we want our engine to
support, in many cases we will simply be able to add a new technique to the effect file and, assuming we
have properly integrated effects into our rendering engine design, DirectX will select it when the user
has capable hardware installed.

Building an effect file driven rendering system means we will be moving our code in a direction that
will make the introduction of shaders fairly trivial. After learning the material in this chapter, your
rendering system will be pretty much ‘shader ready’ and then future-proofing becomes even more
realistic. You could release new effects later on down the road that contain not only more advanced
fixed-function state configurations, but completely overhauled transformation and lighting code which
upgrades your visuals dramatically. By separating the programming of visual effects from the
application code, effect files become an integral part of the art/asset production pipeline, accompanying
incoming geometry and assets with the specific information about how they are to be rendered.

In summary, effect files provide us with the following advantages:

• Rendering techniques and state setting logic are decoupled from the rendering code. This means
we have less hard-coded rendering logic and much tidier/simpler rendering code.

• Rendering techniques can be upgraded or added by updating and distributing new effect files for
the application. The application normally does not have to be recompiled to use them.

• An effect file can contain multiple techniques suitable for different hardware configurations.
When the effect file is loaded, the available techniques are examined in order, and the first
(generally, the best) one that is found to work on the current system can be selected. It is quite
common for a given effect script to contain various fallback techniques so that the game visuals
degrade gracefully across the range of legacy hardware that might be encountered.

• Effect files can have shader programs embedded directly within them making the integration of
custom rendering behaviors (including shaders, as we will see later) considerably easier.

In order to demonstrate even a small amount of the potential power offered to us by effect files and other
data driven rendering techniques, it might be an ideal time to take a look at an example.

Lab Project 18.1 included with this chapter is intended to demonstrate some of the basic principles we've
discussed to this point. We will not be exploring the C++ code for this project just yet -- we'll look at
how effects are integrated into the application side a little bit later on. The most important aspects of this
lab project are in fact the included binary executable and the example effect script named 'Scene.fx'
which is found in the lab project "Data" directory.

When launching the included binary executable for the first time, we should be presented with a
rendered scene similar to the figure shown below.

Figure 18.1

While this scene may look fairly complex, in reality it consists of nothing more than a single vertex and
index buffer housing the scene geometry, a single base texture that contains a representation of the
lighting for the scene (commonly referred to as a light map), and the aforementioned 'Scene.fx' effect
file that describes how the scene geometry is to be rendered by the fixed-function pipeline in this case.

The example effect file 'Scene.fx' can be opened in any standard text editor such as Windows' Notepad
and edited directly. There are various states defined in the effect file, but just as a little experiment let's
modify just one of them to demonstrate the principles we've been discussing to this point.

Within the example effect you should see a line similar to the following, intended to set the value of the
FillMode device state to "Solid" when rendering the scene geometry.

FillMode = Solid;

In terms of our prior experience setting states directly in C++, this would be equivalent to the following
render state setting code:

pDevice->SetRenderState( D3DRS_FILLMODE, D3DFILL_SOLID );

Now, let's change the value of this state to 'Wireframe' and save the effect file.

FillMode = Wireframe;

Without having to modify our C++ code, or recompile our binary executable in any way, we can now
simply re-execute the application to find the main scene geometry rendered in wireframe form as
depicted below:

Figure 18.2

While this example is fairly simplistic in nature, hopefully you are convinced that a rendering system
that utilizes device scripting is potentially a very good design choice. Using effect files and the D3DX
based effect system to accomplish this task is going to provide a significant improvement to our current
engine design and will allow us to start to achieve visuals that rival modern commercial titles. The next
step then is to begin to more closely examine what an effect file is and what it actually contains.

18.1.1 What Comprises an Effect File?

The D3DX effect system is essentially a collection of code that knows how to load, compile, and
interpret state scripts called effect files. As such, effect files contain state information laid out using a
simple language that the D3DX effect compiler can understand and interpret both offline and at runtime.
Although effect files can contain custom shader code as well, for the moment we will keep things simple
and use effect files only as a means for conveniently setting the device state of the fixed-function
pipeline. We will return to the topic of shaders in the next chapter.

Fixed-function effect files are written using a simple scripting language that explains to the D3DX effect
system which states to set over n rendering passes. The language is so simple that you will instantly
recognize most of the states because they are the same as their SetRenderState, SetSamplerState,
SetTransform, and SetTextureStageState device counterparts. Effect files contain four different types of
information:

1. Parameter Declarations
Parameters are our primary means for communicating information from the game engine to the effect
script. While some of the states that a technique will configure can be defined by value at the time the
effect file is authored, directly within the technique passes themselves, in order for a technique or any of
its passes to be truly useful it will often be necessary to configure at least some of the device states
dynamically, based on settings or values that may not be known by the application until runtime. For
example, any effect script that is going to bind a texture to a texture stage or set the current world and
view matrices will need to somehow get that information from the application. That is what parameters
allow us to do. By defining parameters in our effect file we essentially create variables that can be set by
the application prior to the effect being executed. The effect will then use these parameter values to
configure the device with meaningful runtime data. Thus, parameters allow the effect file to be written at
development time using placeholder names for values that are not yet known (just like the parameters to
any function call).

Parameters are declared in much the same way as we would declare global variables in a C++ module as
shown below:

int parameter0;
float parameter1;
bool parameter2;

We'll be examining the concept of parameters in a little more detail in the next section, but for now the
key point to remember is that parameters are the primary means of communication between the
application and the effect file script.

2. Techniques
A technique represents a way of grouping device state settings by name inside the effect file. When you
render a polygon or batch of primitives assigned the effect script, you will choose which technique you
would like to use to configure the device as discussed in our overview. Multiple techniques can be
supplied and validated at load time so the best technique that is supported by the current hardware on
which the application is running can be selected. For example, a technique might contain a device
configuration that requires the use of four texture stages but a fallback technique that requires only two
stages and two passes may also be supplied. At runtime, the effect file techniques would be validated
against the hardware and, if supported, the superior technique would be chosen; if not, the lesser
technique would be used.

3. Passes
While techniques provide a form of high-level grouping of the rendering behaviors we're designing, we
also need a way to communicate device state data not on a per-technique basis but on a per-pass basis.
This is necessary largely due to the fact that some techniques might require that polygons be rendered
multiple times, each time with different device states. For this reason, a technique is actually a container
for one or more passes. It is within each pass block that the individual device state instructions are
specified. If a technique describes a single pass render, it will contain just one pass block.

Below we see a pseudocode example of how a two pass technique would be laid out in an effect file.

Technique MyTechnique
{
    Pass MyFirstPass
    {
        // Device state instructions for first pass
    }

    Pass MySecondPass
    {
        // Device state instructions for second pass
    }
}

4. Functions
Functions (i.e., shaders) are used to transform and light vertices and to shade pixels using custom code
that replaces some or all elements of the default fixed-function transformation and lighting module
provided by D3D. These days, functions will usually be written using a high level language (e.g., HLSL,
GLSL, or Cg) but can also be written using an assembly style language in DirectX 9. We will ignore
functions for the time being and remain focused on fixed-function effect files in this chapter.

18.1.2 A Simple Effect File Example

The effect file example we will look at in this section is assumed to have been assigned to polygons that
need to be rendered using multi-texturing with two textures. It will contain two possible techniques --
the first technique will be used on hardware which supports single pass multi-texturing (two stages) and
the second will be for hardware where multi-texturing is not available in a single pass. The second
technique will define two passes so that the effect can be instructed to set the correct states for each pass.
The fallback technique would be used by the application to render the polygons in such a situation
where two simultaneous texture stages were not supported on the end-user’s machine. Remember that
the following effect file could be authored in any text editor. We will cover the example effect file a
section at a time. The first section is shown below:

texture t0;    // first texture to blend
texture t1;    // second texture to blend
matrix world;  // world matrix
matrix view;   // view matrix

It is common practice to define any parameters that the effect file will use towards the top of the file,
although this is not a requirement. Here you can see that we have defined four parameters. The first two
are used to instantiate texture variables which we have decided to call t0 and t1 in this example. The
variables are of type texture which is a recognized parameter type in the D3DX effect system. It
identifies these variables as representing IDirect3DTexture9 surfaces. We can think of the t0 and t1
parameters as being abstract representations of D3D texture surfaces that will be present at runtime even
though we don’t know what they will actually contain. These parameters will be set by the application
and will be assigned the two textures necessary for rendering primitives with this effect -- whatever
they may be. That is, before rendering the attribute group, the rendering engine will use the
ID3DXEffect interface to assign t0 and t1 real texture pointers prior to invoking the effect.

The second two parameters are of type matrix, which is another parameter type recognized by the D3DX
effect system. We use a variable of this type to represent 4x4 float matrices. In this example, we define
two matrices that will be used by the application to provide the effect with the current world and view
transform matrices. You will see in a moment that we are going to make the effect techniques
responsible for binding the current world and view matrices to the device. Before a given attribute group
is rendered, the application would load the current world and view matrices into these effect parameters
prior to applying the effect.

Note: We will see later how the application loads values into effect parameters.
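
In the meantime, as a brief preview, the application side might look something like the following sketch.
SetTexture and SetMatrix are genuine ID3DXEffect methods; the texture pointers and matrix variables
are assumed to exist in the application:

pEffect->SetTexture( "t0", pBaseTexture );   // bind first texture parameter
pEffect->SetTexture( "t1", pDetailTexture ); // bind second texture parameter
pEffect->SetMatrix ( "world", &mtxWorld );   // supply current world matrix
pEffect->SetMatrix ( "view",  &mtxView );    // supply current view matrix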

There are many intrinsic parameter types in the effect file language including the standard types like
bool, int, float, and double. Furthermore, each of these types can be instantiated as 2D, 3D, or 4D
vectors by appending the dimension to the end of the base type. For example, a variable declared to be
of type int2 is a 2-component vector whose x and y components are integers. Alternatively, a variable
declared to be of type float3 is a 3D vector comprised of an x, y, and z component of type float. The
components can be accessed using standard ".x", ".y", and ".z" syntax, although accessing the individual
components of such types in this way is largely only necessary when developing the types of shader
programs we'll be looking at in the next chapter. We can also create matrices for all of our base types by
appending the base type with the mxn suffix (a 4x4 maximum is supported with respect to dimensions).
For example, a variable declared as type int2x2 would be a matrix comprised of two rows and two
columns of integers. A variable of type float4x4 would create the most common type of matrix used to
represent our world, view, and projection matrices (i.e., four rows by four columns with each element
being a float).
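
For example, the following hypothetical parameter declarations illustrate some of these types:

int2     screenSize;  // 2-component integer vector
float3   lightDir;    // 3-component float vector
int2x2   weights;     // 2x2 integer matrix
float4x4 projection;  // 4x4 float matrix (equivalent to type 'matrix')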

Note: A float4x4 type is so commonly used in effect files and shader programs that it has also been type
defined with the word matrix for convenience. As you saw in the above code snippet, we declared our
matrices to be of type matrix, which is exactly the same as defining them as type float4x4.

For matrices, structure syntax allows us to access values in the same way that we access the elements of
a D3DXMATRIX. If you wish to use a zero-based indexing scheme into the matrix elements then you
could use the syntax mtx._m33 (where mtx is a variable of type float4x4) to fetch the bottom right element
of the matrix (last row and last column). Alternatively, if you want to access the same element using a
one-based indexing scheme, you would use mtx._44 syntax. That is, an effect file matrix has both _mab
and _ab members defined depending on whether you wish to access the elements from a zero base or a
one base, respectively.

So, back to our example. We now know that the effect file is expecting to be provided with two textures
and two matrices by the application via the parameter declarations we saw earlier. Let’s take a look at
the first of the two techniques which will be used when single pass multi-texturing is available. As such,
it will consist of only a single pass requiring two texture stages.

A technique is defined with the keyword "technique" (case-insensitive) followed by the name of the
technique (in this example, it is called “singlepass”). You can name the technique whatever you like as
long as there are no two techniques within the same effect file that share the same name. Of course,
assigning a technique a meaningful name is recommended since it will allow others to better understand
the intent and will also allow you to access information about the technique, by name, from within the
application -- we'll see this in action later. Curly braces are used to encapsulate the technique code
block.

technique singlepass
{
    pass p0
    {
        Texture[0]   = <t0>;
        ColorOp[0]   = SelectArg1;
        ColorArg1[0] = Texture;

        Texture[1]   = <t1>;
        ColorOp[1]   = Add;
        ColorArg1[1] = Texture;
        ColorArg2[1] = Current;

        WorldTransform[0] = <world>;
        ViewTransform     = <view>;
    }
}

In this example we are applying two textures to our polygon(s) using an additive blend.

Since each pass of the technique may require different device configurations, the state settings are not
stored directly inside the technique but are nested inside one or more pass blocks. A pass is defined
using the keyword "pass" (also case-insensitive) followed by a name for that pass. Although this
technique will use only a single pass to accomplish its task, multiple passes can exist (e.g., p0, p1, etc.)
that are defined consecutively inside the technique braces. We will see multiple passes being
implemented inside an effect script in a moment when we look at the second technique stored in this
effect file.

As with the technique keyword, we can assign the pass block any name of our choosing just so long as
two passes within the same technique do not share the same name (duplicate pass names within the same
effect file as a whole are fine). As with the technique name, the effect system does not use the name for
anything functional, although pass names can be queried and used by the application for debugging or
any other tasks that might be needed. In this example, we have called the first pass of our "singlepass"
technique “p0”. As we saw for techniques, a pass block also uses curly braces to encapsulate its data
(simple fixed-function device states in our current example).

As you can see in the above example technique, the names associated with the various device states and
their possible values are most commonly just shortened versions of the names we use when setting
them manually in our C++ code. The naming conventions used for the effect state assignments are
almost identical to the naming conventions used in the equivalent SetTextureStageState,
SetSamplerState, SetTransform, and SetRenderState device calls. The only difference is that the names
have been shortened from their enumerated type counterparts by stripping off the leading prefixes and
underscores (e.g. D3DTSS_, D3DTA_, etc.).

For example, the line...

ColorOp[1]=Add;

...is just the scripted way of providing the following state assignment, in code, within our render
function for example:

pDevice->SetTextureStageState( 1 , D3DTSS_COLOROP , D3DTOP_ADD );

So, we can see that in the first technique we assign whatever texture the application has bound to the
texture variable t0 to texture stage 0 and likewise for texture t1 to texture stage 1. Notice that when
assigning the value of any parameter to a state, we use angle brackets < > (alternatively, parenthesis can
also be used for the same purpose). This will be the same syntax that we must use when setting any
device state based on the value of an effect parameter. Also note that texture stage slots and their
corresponding color operations, states, etc., are defined using array notation. Each stage is indexed by
the stage number we would normally use in a standard device call to SetTextureStageState.

Let us step through the above example technique line by line and examine the language syntax more
closely before we move on.

pass p0
{
    Texture[0]   = <t0>;

First we assign the texture t0 to texture stage zero using the Texture state with array syntax.
Alternatively, if we wanted to assign texture t0 to texture stage 4 (relative to zero), we would write
Texture[4]=<t0> instead. Remember that the texture surface pointer represented by t0 will be set by the
application prior to applying this effect. All we are basically saying is, “Whatever texture pointer the
application has assigned to parameter t0, bind it to texture stage zero.”

Now that we have the first texture assigned to stage 0 we need to configure the texture stage so that it
samples the texture and passes the resulting color into the next stage. We do this by setting the color
operation for stage 0 to "SelectArg1" and color argument one to "Texture". Once again notice that all
these states and their assignments use names that are just shortened versions of their C++ code
counterparts.

    ColorOp[0]   = SelectArg1;
    ColorArg1[0] = Texture;

For completeness, these two lines are equivalent to the following two state setting operations when
performed directly within our C++ based application:

pDevice->SetTextureStageState( 0, D3DTSS_COLOROP,   D3DTOP_SELECTARG1 );
pDevice->SetTextureStageState( 0, D3DTSS_COLORARG1, D3DTA_TEXTURE );

With texture stage 0 configured, we will now assign the second texture to stage 1. We set the two color
arguments for this stage to the texture assigned to the stage and the color passed in from the previous
stage, respectively. We then set the color operation to perform an additive blend between the two
sampled colors.

    Texture[1]   = <t1>;
    ColorOp[1]   = Add;
    ColorArg1[1] = Texture;
    ColorArg2[1] = Current;
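
Again for completeness, these four lines correspond to the following device calls (pSecondTexture
standing in for whatever texture pointer the application bound to t1):

pDevice->SetTexture( 1, pSecondTexture );
pDevice->SetTextureStageState( 1, D3DTSS_COLOROP,   D3DTOP_ADD );
pDevice->SetTextureStageState( 1, D3DTSS_COLORARG1, D3DTA_TEXTURE );
pDevice->SetTextureStageState( 1, D3DTSS_COLORARG2, D3DTA_CURRENT );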

With our texture stages now configured, the final section of our example technique is where we set the
world and view matrices. Rather than the rendering code calling the device’s SetTransform method
directly, it might instead pass the matrices into the effect by setting the two matrix effect parameters
defined earlier (how this is achieved is discussed a little later on). As when assigning the textures above,
we assume that the application will have set the world and view matrices to something meaningful
before applying this effect and, as such, we can simply assign the world matrix and the view matrix to
the device using the WorldTransform and ViewTransform effect states within the effect pass as shown
below.

    WorldTransform[0] = <world>;
    ViewTransform     = <view>;
}

Notice that we assign our standard world matrix using the [0] array syntax. This is because the
WorldTransform state actually incorporates a potential palette of 256 world matrices. Just as we
discussed in Module II of this series, when we are not using vertex blending, the very first matrix in the
matrix palette is used by the transformation pipeline.
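
In terms of direct device calls, these two assignments are equivalent to the following (with mtxWorld
and mtxView standing in for the matrices the application placed in the parameters):

pDevice->SetTransform( D3DTS_WORLDMATRIX( 0 ), &mtxWorld ); // same as D3DTS_WORLD
pDevice->SetTransform( D3DTS_VIEW, &mtxView );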

18.1.3 Mixing Effects with Manual Device State Setting

You might be wondering why the effect file we have just discussed did not contain a parameter for the
projection matrix in addition to the world and view matrices. You might also be considering the
whereabouts of all of the other data that we did not pass into the effect. For example, what about the
states for lights and materials? Leaving such states out of this example was deliberate because we are
assuming that the application is sending those items to the device as it normally would – via calls to
SetLight, SetTransform, etc. There is no problem with mixing and matching these concepts because the
effect is essentially just wrapping these calls too. We are free to pass in only the data that we need for
the effect and just set the rest of these items in the normal way if we so choose.

In this example we are assuming that the projection matrix was set at application startup and never
touched again. Alternatively, we could have decided to remove the matrix assignments from the effect
file altogether and just let the application send them to the device manually just as we have always done
in the past. You will also see that all other states, such as those for the assignments of materials and
lights, are also scriptable from within the effect file should we choose to use them.
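
For instance, a perfectly valid arrangement might look like the following sketch, where only some of the
state ends up inside the effect (the variable names are illustrative):

// Set once at startup, outside the effect
pDevice->SetTransform( D3DTS_PROJECTION, &mtxProj );

// Lights and materials set manually, just as we always have
pDevice->SetLight( 0, &Light );
pDevice->LightEnable( 0, TRUE );
pDevice->SetMaterial( &Material );

// ...while the effect takes care of the world/view matrices and texture stages.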

Although we have made the point in this section that we can mix and match the setting of states in code
with the additional setting of states in our effect file, it is not uncommon to choose to offload as much
work to the effect file as possible in an effort to clean up the rendering code design and make it as
flexible and as modular as possible. Once we start working with the shader programs (also commonly
referred to as "the programmable pipeline") later in the course, we are going to be taking over the
processing of vertices and pixels from the fixed-function pipeline and thus will not have access to the
fixed-function storage areas for matrices, lights, etc. Thus, offloading the data to parameters will become
even more important and our renderer's design will have to consider how best to do that.

We have now seen how to specify basic states for a given technique in an effect file. Although we have
not covered how these effect files are handled on the application side, just know for now that at runtime
an effect file would be loaded and represented by an ID3DXEffect interface. The rendering code uses the
methods of this interface to validate its techniques and to apply the states described in the effect file
prior to rendering any polygons that have been assigned it. The rendering code will still be responsible
for drawing the primitives via the usual device DrawPrimitive calls; the ID3DXEffect interface is simply
used to configure the pipeline prior to rendering.
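
To give a flavor of what this looks like (a minimal sketch using genuine ID3DXEffect methods that we
will cover properly later):

UINT nPasses = 0;
pEffect->Begin( &nPasses, 0 );  // apply the chosen technique; caches device state by default
for ( UINT i = 0; i < nPasses; ++i )
{
    pEffect->BeginPass( i );    // set the device states described for pass i
    pDevice->DrawPrimitive( D3DPT_TRIANGLELIST, BaseVertex, PrimitiveCount );
    pEffect->EndPass();
}
pEffect->End();                 // restore the cached device state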

Multiple Techniques
The only technique we have added to our example effect file so far requires support for two textures to
be set and blended simultaneously in the texture stage cascade. However, we are in trouble if we are
running on (admittedly old) hardware where only one texture stage is supported. As things currently
stand, our application would probably have no choice other than to inform the user that he or she should
upgrade their hardware and then exit.

Fortunately, one of the more useful features with effect files and the D3DX effect framework is that by
specifying multiple techniques of varying complexity, we get automatic scalability without cluttering
our rendering code with conditional code blocks. When the effect file is loaded, D3DX provides us with
methods that allow us to step through the effect file and locate the best technique that works on the
current system. An effect file might contain dozens of techniques from which we can choose, so we
know that we can always use the best supported rendering techniques in all situations -- assuming of
course that we are willing to spend the time writing the fallback techniques.
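
A minimal sketch of that selection process, using the real D3DX entry points (the effect file name here is
hypothetical):

ID3DXEffect * pEffect = NULL;
D3DXCreateEffectFromFile( pDevice, "MultiTexture.fx", NULL, NULL,
                          0, NULL, &pEffect, NULL );

// Ask D3DX for the first technique that validates on this hardware.
// Techniques are tried in the order in which they appear in the file.
D3DXHANDLE hTechnique = NULL;
pEffect->FindNextValidTechnique( NULL, &hTechnique );
pEffect->SetTechnique( hTechnique );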

Continuing our example, we will now address this issue by defining a second technique within the same
file that will be used in the cases where multi-texturing is not supported in a single pass. In this second
technique, which we will call “multipass”, we will try to achieve the same visual results as the first, but
confine ourselves to the use of only a single texture stage. We saw in Module I that this can be achieved
by enabling the pipeline’s alpha blending functionality and performing color blending with the frame
buffer in a second draw pass. Thus, any polygons assigned this technique will need to be rendered twice.
In the first pass, the first texture will be rendered to the frame buffer as usual. In the second pass, the
polygons will be rendered again using the second texture, but blended on top of the existing pixels in the
frame buffer which contain the colors previously written to it in the first rendering pass (the polygons'
first texture). Since the application must draw the geometry twice, the technique must contain two
passes so that the effect can configure the pipeline with the correct states for each pass.

In the first pass, alpha blending is disabled and texture t0 is assigned to stage 0. The color operation for
stage 0 will be configured to sample directly from the texture and simply output the result. We also must
set the world and view matrices in at least the first pass so that each time the application renders the
geometry it is located in the correct place in the frame buffer.

technique multipass
{
    pass p0
    {
        AlphaBlendEnable = False;
        Texture[0]       = <t0>;
        ColorOp[0]       = SelectArg1;
        ColorArg1[0]     = Texture;
        ColorOp[1]       = Disable;
        WorldTransform[0] = <world>;
        ViewTransform     = <view>;
    }
    pass p1
    {
        AlphaBlendEnable = True;
        SrcBlend         = One;
        DestBlend        = One;
        Texture[0]       = <t1>;
    }
}

In the second pass (pass p1), alpha blending is enabled and both blend modes are set to One (i.e.,
resulting in the addition of the new pixel value output by texture stage 0 to the current frame buffer pixel
value). This time the second texture (t1) is assigned to stage 0 and once again the color operation for
stage zero is configured to simply output the color sampled from the texture. Alpha blending will thus be
responsible for the additive combination of colors.

In this example, prior to rendering any geometry the application would instruct the effect to set the states
for the first pass. The polygons would then be rendered, resulting in a frame buffer image with just a
base texture (or whatever texture is desired). The application would then instruct the effect to set the
states for the second pass and render the geometry again to complete the equation.

Our example effect file is shown below in its entirety. Of course, we could take it a step further and
implement a third technique for hardware that does not support alpha blending. In such a case, we would
have a single pass technique that only renders the base texture. As it would be terribly hard to find a
system that contains a DirectX compatible graphics card that does not support alpha blending (or that
fails to support at least two texture stages, for that matter), a third technique would be overkill in this case.
Hopefully however you see the more important point being made using these admittedly simplistic
examples -- the key idea is that we can implement a series of fallback techniques with varying system
requirements so that the application can better tailor its rendering according to the hardware on which it
is running.

texture  t0;     // first texture to blend
texture  t1;     // second texture to blend
float4x4 world;  // world matrix
float4x4 view;   // view matrix

technique singlepass
{
    pass p0
    {
        Texture[0]   = <t0>;
        ColorOp[0]   = SelectArg1;
        ColorArg1[0] = Texture;
        Texture[1]   = <t1>;
        ColorOp[1]   = Add;
        ColorArg1[1] = Texture;
        ColorArg2[1] = Current;
        ColorOp[2]   = Disable;
        WorldTransform[0] = <world>;
        ViewTransform     = <view>;
    }
}

technique multipass
{
    pass p0
    {
        AlphaBlendEnable = False;
        Texture[0]   = <t0>;
        ColorOp[0]   = SelectArg1;
        ColorArg1[0] = Texture;
        ColorOp[1]   = Disable;
        WorldTransform[0] = <world>;
        ViewTransform     = <view>;
    }
    pass p1
    {
        AlphaBlendEnable = True;
        SrcBlend   = One;
        DestBlend  = One;
        Texture[0] = <t1>;
    }
}

Note: Notice that variable declarations and assignments end with a semi-colon, just as in C++. Also
note that the C++ style double forward slashes can be used to add comments to lines within the effect
file (C-style /* . . . */ can also be used for block comments). Technique states are not case sensitive,
so feel free to use capital letters where you think readability will be improved.

Now we have seen an example of an effect file that scripts the setting of some of the pipeline’s texture
and render states. We have also seen how multiple techniques can be defined so that the effect is
supported on machines that both do and do not support multi-texturing in a single pass. The code in our
rendering engine requires no knowledge about any of this and is free to proceed with a very generic
rendering implementation (as we will see shortly).

18.1.4 Effect Fixed-Function States

Below is a complete list of all of the device state names and their potential assignments that are
understood as part of the effect file language with respect to the setting of fixed-function states. As you
will see, all states that could be set using the device in our C++ code can also be represented in an effect
file.

Render States
This table contains the effect file render states and their appropriate types and assignments. Notice that
most of the assignments are the same as their C++ counterparts but with the enumeration prefixes
removed.

Type Render state Values

DWORD ZEnable Same values as D3DZBUFFERTYPE without the D3DZB_ prefix.

DWORD FillMode Same values as D3DFILLMODE without the D3DFILL_ prefix.

DWORD ShadeMode Same values as D3DSHADEMODE without the D3DSHADE_ prefix.

DWORD LinePattern Same values as D3DLINEPATTERN.

DWORD ZWriteEnable Same values as D3DRS_ZWRITEENABLE.

DWORD AlphaTestEnable Same values as D3DRS_ALPHATESTENABLE.

DWORD LastPixel Same values as D3DRS_LASTPIXEL.

DWORD SrcBlend Same values as D3DBLEND without the D3DBLEND_ prefix.

DWORD DestBlend Same values as D3DBLEND without the D3DBLEND_ prefix.

DWORD CullMode Same values as D3DCULL without the D3DCULL_ prefix.

DWORD ZFunc Same values as D3DCMPFUNC without the D3DCMP_ prefix.

DWORD AlphaRef Same values as D3DRS_ALPHAREF.

DWORD AlphaFunc Same values as D3DCMPFUNC without the D3DCMP_ prefix.

DWORD DitherEnable Same values as D3DRS_DITHERENABLE.

DWORD AlphaBlendEnable Same values as D3DRS_ALPHABLENDENABLE.

DWORD FogEnable Same values as D3DRS_FOGENABLE.

DWORD SpecularEnable Same values as D3DRS_SPECULARENABLE.

DWORD ZVisible This value is not supported.

DWORD FogColor Same values as D3DRS_FOGCOLOR.

DWORD FogTableMode Same values as D3DRS_FOGTABLEMODE.

FLOAT FogStart Same values as D3DRS_FOGSTART.

FLOAT FogEnd Same values as D3DRS_FOGEND.

FLOAT FogDensity Same values as D3DRS_FOGDENSITY.

DWORD EdgeAntialias Same values as D3DRS_EDGEANTIALIAS.

INT Zbias Same values as D3DRS_ZBIAS.

DWORD RangeFogEnable Same values as D3DRS_RANGEFOGENABLE.

DWORD StencilEnable Same values as D3DRS_STENCILENABLE.

DWORD StencilFail Same values as D3DSTENCILOP without the D3DSTENCILOP_ prefix.

DWORD StencilZFail Same values as D3DSTENCILOP without the D3DSTENCILOP_ prefix.

DWORD StencilPass Same values as D3DSTENCILOP without the D3DSTENCILOP_ prefix.

DWORD StencilFunc Same values as D3DCMPFUNC without the D3DCMP_ prefix.

INT StencilRef Same values as D3DRS_STENCILREF.

DWORD StencilMask Same values as D3DRS_STENCILMASK.

DWORD StencilWriteMask Same values as D3DRS_STENCILWRITEMASK.

DWORD TextureFactor Same values as D3DRS_TEXTUREFACTOR.

DWORD Wrap0 - Wrap15 Bitwise combinations of the same values used by D3DRS_WRAP0:
• COORD0 (which corresponds to D3DWRAPCOORD_0)
• COORD1 (which corresponds to D3DWRAPCOORD_1)
• COORD2 (which corresponds to D3DWRAPCOORD_2)
• COORD3 (which corresponds to D3DWRAPCOORD_3)
• U (which corresponds to D3DWRAP_U)
• V (which corresponds to D3DWRAP_V)
• W (which corresponds to D3DWRAP_W)

BOOL Clipping Same values as D3DRS_CLIPPING.

BOOL Lighting Same values as D3DRS_LIGHTING.

D3DCOLORVALUE Ambient Same values as D3DRS_AMBIENT.

DWORD FogVertexMode Same values as D3DFOGMODE without the D3DFOG_ prefix.

BOOL ColorVertex Same values as D3DRS_COLORVERTEX.

BOOL LocalViewer Same values as D3DRS_LOCALVIEWER.

BOOL NormalizeNormals Same values as D3DRS_NORMALIZENORMALS.

DWORD DiffuseMaterialSource Same values as D3DRS_DIFFUSEMATERIALSOURCE without the D3DMCS_ prefix.

DWORD SpecularMaterialSource Same values as D3DRS_SPECULARMATERIALSOURCE without the D3DMCS_ prefix.

DWORD AmbientMaterialSource Same values as D3DRS_AMBIENTMATERIALSOURCE without the D3DMCS_ prefix.

DWORD EmissiveMaterialSource Same values as D3DRS_EMISSIVEMATERIALSOURCE without the D3DMCS_ prefix.

DWORD VertexBlend Same values as D3DRS_VERTEXBLEND without the D3DVBF_ prefix.

DWORD ClipPlaneEnable Bitwise combination of CLIPPLANE0 to CLIPPLANE5

FLOAT PointSize Same values as D3DRS_POINTSIZE.

FLOAT PointSize_Min Same values as D3DRS_POINTSIZE_MIN.

FLOAT PointSize_Max Same values as D3DRS_POINTSIZE_MAX.

BOOL PointSpriteEnable Same values as D3DRS_POINTSPRITEENABLE.

BOOL PointScaleEnable Same values as D3DRS_POINTSCALEENABLE.

FLOAT PointScale_A Same values as D3DRS_POINTSCALE_A.

FLOAT PointScale_B Same values as D3DRS_POINTSCALE_B.

FLOAT PointScale_C Same values as D3DRS_POINTSCALE_C.

BOOL MultiSampleAntialias Same values as D3DRS_MULTISAMPLEANTIALIAS.

DWORD MultiSampleMask Same values as D3DRS_MULTISAMPLEMASK.

FLOAT PatchSegments Same values as D3DRS_PATCHSEGMENTS.

BOOL IndexedVertexBlendEnable Same values as D3DRS_INDEXEDVERTEXBLENDENABLE.

UINT ColorWriteEnable Bitwise combination of RED|GREEN|BLUE|ALPHA.

FLOAT TweenFactor Same values as D3DRS_TWEENFACTOR.

DWORD BlendOp Same values as D3DBLENDOP without the D3DBLENDOP_ prefix.

All of these states should hopefully be instantly recognizable to you since they are identical to those we
used when setting states via the IDirect3DDevice9::SetRenderState method. Below are some
examples of how we might set some of these states inside a technique.

Example: Setting Render States


AlphaBlendEnable = True;        // Enable alpha blending
FillMode = Wireframe;           // Set fill mode to wireframe
ShadeMode = Gouraud;            // Enable Gouraud shading
DiffuseMaterialSource = Color1; // Use first vertex color as diffuse material
FogVertexMode = Linear;         // Set vertex fog mode to fall off linearly
Lighting = True;                // Enable D3D lighting pipeline

Texture Stage States


The following table shows the effect file version of the texture stage states and their relative assignment
types. Notice that there are eight texture stages and the per-stage states are assigned within the effect file
using array syntax. Also notice that the first state is the Texture state which we have already seen being
used in our example effect file. It should be assigned a value by way of an effect parameter of type
texture, containing a valid IDirect3DBaseTexture9 object pointer that has been set by the application
prior to the effect being applied.

Type Texture stage state Values

texture Texture[8] NULL or a texture parameter/variable

DWORD ColorOp[8] Array of up to 8 D3DTSS_COLOROP values without the D3DTOP_ prefix.

DWORD ColorArg0[8] Array of up to 8 D3DTSS_COLORARG0 values without the D3DTA_ prefix.

DWORD ColorArg1[8] Array of up to 8 D3DTSS_COLORARG1 values without the D3DTA_ prefix.

DWORD ColorArg2[8] Array of up to 8 D3DTSS_COLORARG2 values without the D3DTA_ prefix.

DWORD AlphaOp[8] Array of up to 8 D3DTSS_ALPHAOP values without the D3DTOP_ prefix.

DWORD AlphaArg0[8] Array of up to 8 D3DTSS_ALPHAARG0 values without the D3DTA_ prefix.

DWORD AlphaArg1[8] Array of up to 8 D3DTSS_ALPHAARG1 values without the D3DTA_ prefix.

DWORD AlphaArg2[8] Array of up to 8 D3DTSS_ALPHAARG2 values without the D3DTA_ prefix.

DWORD ResultArg[8] Array of up to 8 D3DTSS_RESULTARG values without the D3DTA_ prefix.

FLOAT BumpEnvMat00[8] Array of up to 8 D3DTSS_BUMPENVMAT00 values.

FLOAT BumpEnvMat01[8] Array of up to 8 D3DTSS_BUMPENVMAT01 values.

FLOAT BumpEnvMat10[8] Array of up to 8 D3DTSS_BUMPENVMAT10 values.

FLOAT BumpEnvMat11[8] Array of up to 8 D3DTSS_BUMPENVMAT11 values.

DWORD TexCoordIndex[8] Array of up to 8 D3DTSS_TEXCOORDINDEX values without the D3DTSS_TCI prefix.

FLOAT BumpEnvLScale[8] Array of up to 8 D3DTSS_BUMPENVLSCALE values.

FLOAT BumpEnvLOffset[8] Array of up to 8 D3DTSS_BUMPENVLOFFSET values.

DWORD TextureTransformFlags[8] Array of up to 8 D3DTEXTURETRANSFORMFLAGS values without the D3DTTFF_ prefix.

Below we see some examples of how we might set some of these states inside a technique.

Example: Setting Texture Stage States


Texture[0] = <Tex1>;    // Bind texture variable Tex1 to stage 0
ColorOp[2] = Modulate;  // Perform modulate in stage 2
ColorArg1[1] = Diffuse; // Set 1st color arg to diffuse in stage 1
AlphaOp[0] = AddSigned; // Set alpha op in stage 0 to perform
                        // signed add

Note: Remember, when assigning a state by reading the value in an effect parameter we must use the
angle brackets. This is demonstrated above where we show an effect parameter of type texture
named Tex1 being assigned to the first texture stage.

Sampler States
Below are the effect file names and assignments associated with the device's sampler states. Notice how
sampler states are declared as 16 element arrays. This may seem strange because, based on our
experience using the fixed-function pipeline, we know that we normally have access to only one sampler
per texture stage (of which there are a total of eight) and as such, we only have access to eight samplers
within the texture stage cascade. However, later, when we start to introduce the concept of pixel shaders
into our effect scripts, we will see that the fixed-function texture stages become obsolete for the most
part -- we will access the samplers directly within the shader. Since the textures themselves are bound to
a sampler instead of the traditional ‘texture stage’ in that case, with the sampler being the means used to
load a color from the texture using high level shader language function calls, the limits imposed by the
texture stage cascade do not apply.

Type Sampler State Values


DWORD AddressU[16] Same values as D3DTEXTUREADDRESS without the D3DTADDRESS_ prefix. See D3DSAMP_ADDRESSU.

DWORD AddressV[16] Same values as D3DTEXTUREADDRESS without the D3DTADDRESS_ prefix. See D3DSAMP_ADDRESSV.

DWORD AddressW[16] Same values as D3DTEXTUREADDRESS without the D3DTADDRESS_ prefix. See D3DSAMP_ADDRESSW.

FLOAT4 BorderColor[16] A D3DCOLOR value. See D3DSAMP_BORDERCOLOR.

DWORD MagFilter[16] Same values as D3DTEXTUREFILTERTYPE without the D3DTEXF_ prefix. See D3DSAMP_MAGFILTER.

DWORD MaxAnisotropy[16] Same values as D3DSAMP_MAXANISOTROPY.

INT MaxMipLevel[16] Same values as D3DSAMP_MAXMIPLEVEL.

DWORD MinFilter[16] Same values as D3DTEXTUREFILTERTYPE without the D3DTEXF_ prefix. See D3DSAMP_MINFILTER.

DWORD MipFilter[16] Same values as D3DTEXTUREFILTERTYPE without the D3DTEXF_ prefix. See D3DSAMP_MIPFILTER.

FLOAT MipMapLodBias[16] Same values as D3DSAMP_MIPMAPLODBIAS.

FLOAT SRGBTexture Same values as D3DSAMP_SRGBTEXTURE.

The following diagram shows the mapping of texture stages to sampler units.

Prior to DirectX 9, the sampler functionality was actually part of the texture stage and the texture
stages formed the entire fixed-function
texture/color blender. However, when pixel shaders
were introduced, the texture stages and samplers
became decoupled as the former is used only during
fixed-function sampling operations. When pixel
shaders are in use (and thus the texture stage
cascade is disabled), we still need the ability to
sample textures from within the shader code and
thus, in DirectX 9 the sampling functionality of the
texture stages was stripped away and represented
separately as sampler objects. This separation
meant that sampler units could then be accessed
from, and configured for use with, both the fixed-
function and programmable pipelines.

When using the fixed-function color blender (i.e., the texture stages), the first eight sampler units are
mapped to the eight texture stages. That is, Texture
Stage[4] would use Sampler[4] to access the texture
assigned to the stage. As there are only eight stages
(maximum) when using the fixed-function blender,
no additional samplers will be accessible.

Figure 18.3: The mapping of texture stages to sampler units.

When using the programmable pipeline, the fixed-function blender is replaced by our own color/texture blending code. In this case, the pixel shader has
access to sixteen sampler units which it can use to sample up to sixteen textures simultaneously. Thus,
all the sampler states (which are applicable to both fixed-function and shader based effect files alike) are
accessed and set using sixteen element arrays. However, assigning states for the latter half of the
sampler units (8 – 15) will have no meaning for fixed-function effect files. When we cover pixel
shaders later in this course, you will see how we access these sampler units from within the shader code.
In our current fixed-function example, where the texture stage cascade is still in use, there are only eight
sampler units available to sample the textures assigned to our stages.

MipFilter[0] = Point;  // Perform point filtering between MIP levels
                       // of the texture in stage 0.
MinFilter[4] = Linear; // Apply a minification filter when
                       // sampling the texture in stage 4

MagFilter[7] = Linear; // Apply a magnification filter when sampling
                       // the texture in stage 7
AddressU[0] = Wrap;    // Set U addressing mode to Wrap for
                       // the texture assigned to stage 0
AddressV[1] = Clamp;   // Set V addressing mode to Clamp for
                       // the texture assigned to stage 1

The sampler_state Keyword


An alternative way to configure a sampler is to declare an object of type sampler, which is another
intrinsic object type available to us in the effect system. There are 1D, 2D, and 3D variations of the
sampler object -- with typenames sampler1D, sampler2D, sampler3D -- that can be used depending on
whether we are using 1D, 2D, or 3D texture coordinate lookups. There is also another variant called
samplerCUBE for use when we want to sample from a cube-map texture. Most of the time, we will be
performing 2D texture lookups using two texture coordinates <u, v> and as such will most commonly
use the generic sampler or sampler2D types.

We can think of the sampler as a dynamic structure/object that can contain members associated with all
of the properties/states/settings for a single sampler unit (i.e., all of the sampler states listed above). With
this sampler object defined and configured, we can then apply all of the states that we have set inside
that object in a single step by assigning it to the special case "Sampler" effect state within a technique
pass.

For example, we might create a sampler object called MySampler for 2D sampling of a texture:

sampler2D MySampler;

Declaring a sampler in this way -- with no additional configuration -- is fine if we do not wish to alter
any of the sampler settings. In this case, when assigning the sampler to the device, all settings will
remain at their defaults. However, the effect language also provides the sampler_state keyword, which lets us initialize the values of a sampler object at the point we instantiate it. To demonstrate what all this means, let’s look at some code.

In this next example, we will instantiate a sampler object called MySampler and configure it to use
wrapped texture addressing and linear mip-map filtering. Further, we will set its Texture property to
directly pair it with the texture we intend to be sampled. This is something that the fixed-function texture
blender would normally do on our behalf when we assign the texture to the stage.

texture texture0;
sampler2D MySampler = sampler_state
{
Texture = <texture0>;
AddressU = Wrap;
AddressV = Wrap;
MipFilter = Linear;
};

The sampler_state keyword instructs the effect system to create a new sampler object with the states
configured as described between the curly braces. Any states not included within the braces will remain
at their default. Thus we now have a packet of information that tells D3D how a sampler unit will need
to be set up to sample the texture we have assigned to it.

Now that we have defined a sampler object (often towards the top of our effect file along with other
parameter data), further down in the file -- within a technique pass -- we assign it to one of the sampler
units using the Sampler[n] effect state. This will take the settings in our sampler object and apply them
to the sampler unit of our choosing. If the fixed-function pipeline is being used and the sampler object
had a texture assigned to it (as in our above example), it is equivalent to setting the texture for the
matching stage. For example, we might have the following code that assigns our sampler object to the
sampler unit used by texture stage zero (i.e., sampler unit zero):

Sampler[0] = (MySampler);

Based on the states supplied when creating the sampler object, this would be equivalent to individually
setting the following states manually in the technique pass as we saw earlier:

Texture[0] = <texture0>;
AddressU[0] = Wrap;
AddressV[0] = Wrap;
MipFilter[0] = Linear;

As you can see, the approaches are very similar. Using sampler objects, we get to configure the sampler
states at initialization time and simply apply all of those states in one go by using the Sampler[n]=X
syntax throughout our techniques. Otherwise, we would set each texture and sampler state in the body of
each technique pass individually. When we introduce shaders later on, the sampler object will be the
preferred mechanism and as such, you will see us using this method almost exclusively for configuring
the sampler units even in our fixed-function effect file examples, although it is not required.

At this stage, it might be useful to clarify with a more concrete, real-world example. To that end, let us
take a look at a fixed-function effect file that we might use to render the detail mapped terrain developed
originally in Lab Project 6.2 of the first module in the Graphics Programming series. We will implement
two techniques in this example – one for ideal single pass rendering and one multi-pass fallback
technique for hardware that supports only a single stage. You will see a little later that before rendering
each terrain block using this effect, we will pass the world, view, and projection matrices into the effect
along with the base and detail map textures.

#ifndef _TERRAIN_FX_
#define _TERRAIN_FX_

// Matrices
matrix matrix_world;
matrix matrix_view;
matrix matrix_projection;

// Textures/Samplers
texture texture0;
texture texture1;

sampler2D TerrainBaseMapSampler = sampler_state
{
    Texture   = <texture0>;
    AddressU  = Wrap;
    AddressV  = Wrap;

    MinFilter = Linear;
    MagFilter = Linear;
    MipFilter = Linear;
};

sampler2D TerrainDetailMapSampler = sampler_state
{
    Texture   = <texture1>;
    AddressU  = Wrap;
    AddressV  = Wrap;

    MinFilter = Linear;
    MagFilter = Linear;
    MipFilter = Linear;
};

////////////////////////////
// Techniques
////////////////////////////
Technique TerrainSinglePassRender
{
    pass p0
    {
        Lighting         = false;
        SpecularEnable   = false;
        NormalizeNormals = false;
        DitherEnable     = true;
        FillMode         = solid;
        ShadeMode        = gouraud;
        CullMode         = CCW;
        ZEnable          = true;
        ZWriteEnable     = true;
        AlphaBlendEnable = false;
        AlphaTestEnable  = false;

        WorldTransform[0]   = <matrix_world>;
        ViewTransform       = <matrix_view>;
        ProjectionTransform = <matrix_projection>;

        Sampler[0]       = (TerrainBaseMapSampler);
        TexCoordIndex[0] = 0;
        ColorArg1[0]     = Texture;
        ColorOp[0]       = SelectArg1;

        Sampler[1]       = (TerrainDetailMapSampler);
        TexCoordIndex[1] = 1;
        ColorArg1[1]     = Texture;
        ColorArg2[1]     = Current;
        ColorOp[1]       = AddSigned;
    }

}

Technique TerrainMultiPassRender
{
    pass p0
    {
        Lighting         = false;
        SpecularEnable   = false;
        NormalizeNormals = false;
        DitherEnable     = true;
        FillMode         = solid;
        ShadeMode        = gouraud;
        CullMode         = CCW;
        ZEnable          = true;
        ZWriteEnable     = true;
        AlphaBlendEnable = false;
        AlphaTestEnable  = false;

        WorldTransform[0]   = <matrix_world>;
        ViewTransform       = <matrix_view>;
        ProjectionTransform = <matrix_projection>;

        Sampler[0]       = (TerrainBaseMapSampler);
        TexCoordIndex[0] = 0;
        ColorArg1[0]     = Texture;
        ColorOp[0]       = SelectArg1;
        AlphaOp[0]       = Disable;
    }

    pass p1
    {
        AlphaBlendEnable = true;
        SrcBlend         = DestColor;
        DestBlend        = SrcColor;

        Sampler[0]       = (TerrainDetailMapSampler);
        TexCoordIndex[0] = 1;
        ColorArg1[0]     = Texture;
        ColorOp[0]       = SelectArg1;
        AlphaOp[0]       = Disable;
    }
}

#endif

Note: Later we will learn that effect files must be compiled when they are loaded so that they can be
more efficiently executed at runtime. The effect compiler includes a fairly robust pre-processor much like
the one we have become used to when writing C++. This means we can use #include and #define
macros, etc. and include conditional code blocks at the pre-processor level. In all of our effect files we
always define a unique name at the head of the file and wrap the entire code in a preprocessor
conditional. Just as with C++ header files, we may choose to place useful techniques or reusable
parameter declarations in separate files and include them in each effect file that we build. The #ifndef,
#define, and #endif directives shown above are traditional include guards that prevent duplication errors
when the same effect file is included in multiple effects.

As you can see in the above example, even when using Sampler objects, the texture stage color
operations (and other texture stage states) are still set up as normal to instruct the fixed-function blender
how to utilize the sampled texture fragments.

In the first technique we set the first sampler to sample from the base map and the second to sample
from the detail map. In the first stage, we configure the stage to sample the texture using the first set of
texture coordinates (and the first sampler) and pass the result on to the second stage. In the second stage,
the detail map is sampled using the second set of texture coordinates (and the second sampler) and is
blended with the results from the first stage using a signed addition. This also goes to demonstrate, once
again, how the sampler units are used by the texture stages in the fixed-function pipeline and how
closely the two work together.

In the second technique we define two passes. In the first pass the base map is bound to stage 0. In the
second pass, the detail map is bound to stage 0 and alpha blending with the frame buffer is enabled. The
blend modes are set up to perform a modulation of the base and detail maps.

Transform States
We have seen in our previous examples how effect files help manage the device’s fixed-function
transformation pipeline. This includes the setting of world matrices, vertex blending matrices, the view
matrix, the projection matrix, and even the texture transformation matrices of the texture stages. Below
is a list of the names and assignments for fixed-function transform states in effect files. Notice that the
TextureTransform state is an array of eight matrices providing a possible texture transformation matrix
for each of the eight texture stages. The WorldTransform array represents a possible 256 entry matrix
palette on the device for use in multi-matrix techniques like vertex blending/skinning. When matrix
blending is not being used and you only wish to set the standard fixed-function world matrix, simply
assign the matrix to array index zero as shown in the previous examples.

Type Transform State Values

Float4x4 ProjectionTransform A 4x4 matrix of floats. Same values as D3DTS_PROJECTION without the D3DTS_ prefix.

Float4x4 TextureTransform[8] A 4x4 matrix of floats. Same values as D3DTS_TEXTURE0 – D3DTS_TEXTURE7 without the D3DTS_ prefix.

Float4x4 ViewTransform A 4x4 matrix of floats. Same values as D3DTS_VIEW without the D3DTS_ prefix.

Float4x4 WorldTransform[256] A 4x4 matrix of floats. World Matrix = WorldTransform[0].

Below we see some examples of how the fixed-function transformation states can be set from within the
effect file. It is assumed that the names between the angle brackets are valid effect parameter variables
of the correct type, initialized elsewhere in the effect file. As we will see a little later in this chapter, the
application will pass these values to the effect prior to applying it and rendering the polygons that use it.

WorldTransform[0] = <World_Matrix>;  // Set world matrix
ViewTransform = <View_Matrix>;       // Set view matrix
ProjectionTransform = <Proj_Matrix>; // Set projection matrix
TextureTransform[5] = <TexMatrix>;   // Set texture matrix for 6th stage

Light States
In Chapter 5 of Module I, we learned that Direct3D provides a fixed-function vertex lighting pipeline.
All we had to do to utilize this system was to include normals in our vertices and enable the device’s
appropriate lighting states. We would then bind our lights to the device using the
IDirect3DDevice9::SetLight method and the lighting results were computed by the hardware (or
D3D) automatically. You should recall that when binding lights to the device we would pass the
SetLight method a D3DLIGHT9 structure which contained our light’s properties. We would also pass an
index describing the location within the device’s light palette (i.e., 'light slot') where the light should be
placed.

Effect files can also be used to set the fixed-function light properties so that SetLight calls can be
removed from our rendering code if necessary / desired. When setting lights in effect files, we set each
property of a light with a separate state assignment. That is, if we wish to set up light slot [0] on the
device, we would need to set LightAmbient[0] = value, LightDiffuse[0] = value, LightDirection[0] =
value, and so on. Each of the light properties that we are familiar with in the D3DLIGHT9 structure has
a matching state that can be individually set for any light slot.

Below we see the effect file states and assignments for setting fixed-function lighting parameters. For
the best performance, the effect file should set all the parameters for a light that it intends to use. Behind
the scenes Direct3D still requires a complete D3DLIGHT9 structure, so any properties that you do not
specify will need to be populated with default values (Direct3D cannot set individual states for a given
light). Notice that the states are accessed via array syntax allowing us to set any property of any light
slot in the device’s light palette.

Type Light State Values

float4 LightAmbient[n] See the Ambient member of D3DLIGHT9.

Float LightAttenuation0[n] See the Attenuation0 member of D3DLIGHT9.

Float LightAttenuation1[n] See the Attenuation1 member of D3DLIGHT9.

Float LightAttenuation2[n] See the Attenuation2 member of D3DLIGHT9.

float4 LightDiffuse[n] See the Diffuse member of D3DLIGHT9.

float3 LightDirection[n] See the Direction member of D3DLIGHT9.

Bool LightEnable[n] True or False. See the bEnable argument in LightEnable.

float LightFalloff[n] See the Falloff member of D3DLIGHT9.

float LightPhi[n] See the Phi member of D3DLIGHT9.

float3 LightPosition[n] See the Position member of D3DLIGHT9.

float LightRange[n] See the Range member of D3DLIGHT9.

float4 LightSpecular[n] See the Specular member of D3DLIGHT9.

float LightTheta[n] See the Theta member of D3DLIGHT9.

dword LightType[n] Array of up to n D3DLIGHTTYPE values without the D3DLIGHT_ prefix.

We can see here for example that the LightAmbient[n] state sets the ambient color for light slot[n]. It is
assigned a variable of type float4 which is a 4D vector containing the A, R, G, and B components of the
color we wish to assign. LightDirection[n] should be assigned a 3D vector whose <x, y, z> components
describe the direction that the light is facing in world space.

In the following example we’ll see how you might set up a point light in light slot[0] and enable it. We
also assign it a position and an ambient light color. Notice that we create the float3 and float4 variable
types inline during the assignment in this example, although they could also be assigned the value of any
effect parameters defined in the effect file and set by the application prior to the effect being invoked.
You will also see that the use of angle brackets in such cases invokes the type constructor, allowing us to
assign values during instantiation.

LightEnable[0] = TRUE;
LightType[0] = POINT;
LightPosition[0] = float3<10.0f, 1.0f, 23.0f>;
LightAmbient[0] = float4<0.7f, 0.0f, 0.0f, 1.0f>;

Of course, hardcoding your light settings is not what you would normally want to do in your effect
scripts. Rather, you will define some light property variables in your effect script so that the application
can set the values of these variables prior to applying the effect. We might imagine for example that if
we were using our light grouping system from Module I (recall that the scene was broken into attribute
groups that were influenced by a maximum of n lights), all of our effect files would also declare effect
parameters designed to describe the properties of those n lights. Before rendering a given light group,
the application would fetch the lights from the light group and use their properties to set the lighting
variables inside the effect. The effect would then be applied, which would bind all light states to the
device, and the light group’s geometry would be rendered.

The following example is a skeleton of how we might write an effect file that is designed to be applied
to polygons that are influenced by up to four lights. The application would be responsible for passing the
effect file the properties of the four lights prior to applying this effect for a given batch of polygons.

bool   Light_Active [4];   // Allocate an array of four booleans
int    Light_Type [4];     // Allocate an array of four ints
                           // (Point, Spot, Directional flags)
float3 Light_Position [4]; // Allocate room for storing positions of 4 lights
float4 Light_Ambient [4];  // Allocate room for storing ambient
                           // color of 4 lights

Technique LightExample
{
    pass p0
    {
        ...

        // Set Up 4 Possible Lights Here
        LightEnable[0] = <Light_Active[0]>;
        LightEnable[1] = <Light_Active[1]>;
        LightEnable[2] = <Light_Active[2]>;
        LightEnable[3] = <Light_Active[3]>;

        LightType[0] = <Light_Type[0]>;
        LightType[1] = <Light_Type[1]>;
        LightType[2] = <Light_Type[2]>;
        LightType[3] = <Light_Type[3]>;

        LightPosition[0] = <Light_Position[0]>;
        LightPosition[1] = <Light_Position[1]>;
        LightPosition[2] = <Light_Position[2]>;
        LightPosition[3] = <Light_Position[3]>;

        LightAmbient[0] = <Light_Ambient[0]>;
        LightAmbient[1] = <Light_Ambient[1]>;
        LightAmbient[2] = <Light_Ambient[2]>;
        LightAmbient[3] = <Light_Ambient[3]>;

        ...
    }
}

In the above example you can see that we create four arrays to contain the properties of up to four lights
that the application can set to influence the polygons currently being rendered. If the application wishes
to render a batch of polygons which has only two lights affecting it, the application would simply set the
values of user-defined array elements Light_Active[2] and Light_Active[3] to false. This would disable
the third and fourth light slots when the effect is applied. This means such a technique could be used for
all light groups regardless of whether a given light group is influenced by 0, 1, 2, 3, or 4 lights.
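To make the application side of this arrangement a little more concrete, below is a minimal sketch of how a render function might feed a light group's data into the parameters declared above. The helper function, and the idea that the light manager supplies an array of D3DLIGHT9 structures, are assumptions for illustration only. Note that D3DX accepts string handles using array-index syntax (e.g., "Light_Active[2]"), although caching real D3DXHANDLEs, as discussed later in this chapter, would be faster in production code.

// Sketch: populate the effect's four light parameter arrays from the
// light group's data before the effect is applied (assumed helper).
void SetEffectLights( ID3DXEffect * pEffect, const D3DLIGHT9 * pLights, UINT LightCount )
{
    char Name[32];
    for ( UINT i = 0; i < 4; ++i )
    {
        // Enable only as many slots as this light group actually uses
        BOOL bActive = ( i < LightCount );
        sprintf( Name, "Light_Active[%u]", i );
        pEffect->SetBool( Name, bActive );
        if ( !bActive ) continue;

        // D3DLIGHTTYPE values map directly onto the POINT / SPOT /
        // DIRECTIONAL tokens expected by the LightType effect state
        sprintf( Name, "Light_Type[%u]", i );
        pEffect->SetInt( Name, (INT)pLights[i].Type );

        // Position (float3) and ambient color (float4) are raw copies
        sprintf( Name, "Light_Position[%u]", i );
        pEffect->SetValue( Name, &pLights[i].Position, sizeof(pLights[i].Position) );

        sprintf( Name, "Light_Ambient[%u]", i );
        pEffect->SetValue( Name, &pLights[i].Ambient, sizeof(pLights[i].Ambient) );
    }
}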

Whether you choose to bind the lights to the device within an effect file or leave such management in
your render code is totally up to you and depends very much on what you are trying to achieve.
However, what this does allow an artist to do is hardwire a light and its properties in a very specific way
on a per-polygon basis. For example, an artist might add an object to the game world that he wants to be
lit in a very special way. In order for the effect to look correct, he might need the light to be hitting the
surface at precisely the correct angle. In such a situation, the artist could encode the exact lighting
parameters directly into the effect file to ensure that this is always the case. If nothing else, this can
certainly be useful for prototyping a given visual effect. Alternatively, a much more generic system of
communication can exist between the application and the effect, such that the light parameters inside the
effect are set by the application’s light manager. In the above example, the same effect file would work
for all polygons that share the same effect even if they are influenced by different light groups.

Material States
Effect files can also be used to configure the device material settings when the fixed-function lighting
pipeline is enabled. Below we see the names for material state assignments. As with lights, it is more
efficient to set all the states of the material from within the effect file, otherwise the D3DX effect
framework will need to assign default values to missing elements, causing some minor overhead.

Type Material State Values

float4 MaterialAmbient Same value as the Ambient member of D3DMATERIAL9.

float4 MaterialDiffuse Same value as the Diffuse member of D3DMATERIAL9.

float4 MaterialEmissive Same value as the Emissive member of D3DMATERIAL9.

float MaterialPower Same value as the Power member of D3DMATERIAL9.

float4 MaterialSpecular Same value as the Specular member of D3DMATERIAL9.

There are many other variable modifiers, features, and semantics that we have not yet covered which make effect files even more efficient and easy to use; we will cover them later in this chapter.
However, now that we have a preliminary understanding of the effect file language, let us move ahead
and see how these scripts can be loaded and used by our engine.

18.2 Integrating Effect Files – The Basics


The mechanism used to link effect files to the various polygons/meshes in your scene will differ
depending on the modeling/level editing application you choose. Most editors allow you to assign
custom properties to polygons or materials which could be used for storing the name of the effect file.
Many of the newer versions of commercial modeling packages (e.g., 3DS MAX) have effect file support
built in. With effect files, the artist has an enormous amount of control over how the scene he or she is
creating gets rendered. If the artist decides that he would like a particular group of polygons to have a
white material, a diffuse map, a light map and a bump map, he can simply write a new effect to
configure the Direct3D states accordingly and assign that effect filename to the material used by those
polygons. Because the effect file is only externally referenced by the material, many polygons that use
different textures and reflectance properties (i.e., different materials) can still use the same effect for
rendering. In such a case, each batch of polygons would be assigned to unique attribute groups, but each
will call upon the same effect file during rendering.

In our previous applications, we read in the textures and materials used by a polygon and tried to find
out if an attribute structure already existed in the scene’s attribute array that contained that combination.
If it did then the polygon was assigned the index of that attribute and thus became part of that attribute
group. Otherwise, a new attribute was created and added to the master attribute array. After loading the
entire scene, all polygons stored an attribute ID -- an index into the master attribute array that contained
the structures describing its texture(s) and material. Any polygons sharing the same textures and

materials would also be assigned the same attribute ID and could be rendered in a single draw call
(provided different transformation matrices were not needed).

It is quite easy to imagine how this process can now be extended to cope with effect scripts. Each
polygon that is loaded can have data assigned to it which contains an index into a script table. This table
entry can contain the filename of the effect file that should be used to render all polygons assigned that
script. The effect file can be loaded and compiled and stored in the scene’s master ID3DXEffect array.
Each attribute structure would now store not only a texture index and a material index into the texture
and material master arrays, but also an effect index into a master effect array.

For example, we might add a new structure to our application called EFFECT_ITEM to help manage our
effects. A master array of these structures can be maintained, where each item in the array represents an
effect file that has been loaded and compiled into an ID3DXEffect -- just as we have previously with
textures and materials.

typedef struct _EFFECT_ITEM
{
    LPSTR        FileName;                    // File used to create the effect
    LPD3DXEFFECT Effect;                      // The effect pointer
    D3DXHANDLE   Techniques[ NUMTECHNIQUES ]; // Various techniques
} EFFECT_ITEM;

Looking at the above structure we can imagine that for each polygon we load, we will examine its
attributes to see if it ultimately contains an effect file string. If it does, we can create a new
EFFECT_ITEM structure and add it to a master effect array. The structure is then populated with the
following information.

LPSTR FileName
This is the name of the effect file that has been loaded, so we always know where our data came from.
More importantly, it can be used to avoid redundant loading and compilation of the same effect.
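For instance, a loading routine might search the master array before creating anything new. The following is just a sketch; the CScene method name and the m_pEffectList / m_nEffectCount members are assumed here for illustration:

// Return the index of an already-loaded effect, or -1 if the file must
// be loaded and compiled for the first time (hypothetical CScene members).
long CScene::FindEffect( LPCSTR FileName ) const
{
    for ( long i = 0; i < m_nEffectCount; ++i )
    {
        // Case-insensitive compare, since the same file may be referenced
        // with differing case by the art assets
        if ( _stricmp( m_pEffectList[i]->FileName, FileName ) == 0 ) return i;
    } // Next effect item
    return -1;
}

A matching entry means the polygon's attribute can simply reference the existing ID3DXEffect rather than compiling a second copy of the same file.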

LPD3DXEFFECT Effect
We will see in a moment how when an effect file is loaded it must be compiled into a usable form.
When we compile an effect we get back an ID3DXEffect interface which encapsulates the effect
commands. The ID3DXEffect interface has many member functions that we can use to validate
techniques, set the values of parameters inside the effect, and of course, invoke the effect at render time.

D3DXHANDLE Techniques[ NUMTECHNIQUES ]
A handle is a unique identifier that is used to alias some arbitrary piece of data. Not unlike pointers and
references in C++, handles allow us to connect with our data in an efficient way. For example, when we
create a window in Win32, we get back a window handle (an HWND) which we use to send commands
to that window. A D3DXHANDLE is a very similar concept. We can cache handles to all sorts of
information inside the effect such as parameters and techniques so that if we need to refer to that
information later we can do so quickly. In our EFFECT_ITEM structure we cache handles to all the
techniques within the effect file so that we can use them later to specify/change the technique we wish to
use within that effect. We will talk more about handles later in the chapter.

Of course, we would also need to adjust our attribute structure to accommodate this new information.
Previously, our attribute structure stored two pieces of information -- an index into our texture list and
an index into our material list. A new attribute group had to be created during scene loading whenever
we found a polygon that used a texture/material combination that did not yet have an attribute structure
created for it in the master attribute array. In keeping with our prior design approach, we could imagine
upgrading our attribute structure so that it can now store n texture indices, a material index, and of
course, an index into our master EFFECT_ITEM array:

D3DMATERIAL9  m_pMaterialList[MAX_MATERIALS];
TEXTURE_ITEM *m_pTextureList[MAX_TEXTURES];
EFFECT_ITEM  *m_pEffectList[MAX_EFFECTS];

. . .

typedef struct _ATTRIBUTE_ITEM
{
    long TextureIndex[ 8 ]; // Indices into the texture array (8 for example)
    long MaterialIndex;     // Index into the material array
    long EffectIndex;       // Index into the effect array

    _ATTRIBUTE_ITEM( )
    {
        MaterialIndex = -1;
        EffectIndex   = -1;
        for ( int i = 0; i < 8; i++ ) { TextureIndex[ i ] = -1; }
    }

} ATTRIBUTE_ITEM;

An attribute group (i.e., a subset) can now be defined as a group of polygons that share the same
material, all of the same textures (up to eight in the current example), and the same effect file.
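As a rough sketch of how a loader might apply this definition during scene import (the CScene method and member names here are assumed for illustration, mirroring the rendering example later in this section):

// Return the attribute ID for a texture/material/effect combination,
// appending a new entry to the master array when no match exists.
// memcmp is safe here because ATTRIBUTE_ITEM contains only index values.
long CScene::FindOrAddAttribute( const ATTRIBUTE_ITEM & Item )
{
    for ( long i = 0; i < m_nAttributeCount; ++i )
    {
        if ( memcmp( m_pAttribCombo[i], &Item, sizeof(ATTRIBUTE_ITEM) ) == 0 )
            return i; // Polygon joins this existing attribute group
    } // Next attribute

    // No match found; add a new item (allocation / bounds checks omitted)
    m_pAttribCombo[ m_nAttributeCount ] = new ATTRIBUTE_ITEM( Item );
    return m_nAttributeCount++;
}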

Note: This section was not intended to be a thorough description of how our lab project framework has
been upgraded to handle effect files, but rather to give you some high level ideas about how effects and
attributes are going to be related. We will discuss some of the actual code used to integrate effects into
our demonstration projects when we get a little further on in the discussion.

18.2.1 Loading Effect Files and Creating Effects

Normally when an effect is first loaded from disk, the data contained within it will be in simple text
form. That is, we use text strings to define all of our parameters and techniques and states, just as we
have seen in our previous examples. While this is great for us, since we can read, understand, and edit its
contents with minimal effort, such information has little direct relevance to DirectX. Just like any high
level programming language (e.g., C++ or modern Basic implementations), humans will write the
“code” using some form of text editor, but ultimately it will need to be processed by a compiler or
interpreter in order to convert that information into a binary representation that a machine can
understand and execute efficiently.

In this particular case, it should be pretty clear that our text will need to be compiled into something that
the effect system can interpret for the purposes of knowing which states to set, which texture stages to
assign textures, what transform states will receive matrices, etc. For standard fixed-function states,
which are all our effect files have contained up to this point, this is little trouble as it is essentially
nothing more than a remapping of our “effect file instructions” into specific calls to the device driver
simply to set states. But when an effect file contains programmable pipeline shader programs, it is quite
a different story as they need to be converted into binary instructions and data so that the GPU can
execute them quickly (much like a C++ compiler generates machine code instructions and data from
human readable code for execution by the CPU). While we will not address shaders until the next
chapter, we are still going to have to contend with this compilation step to get our text-based effect files
into binary form.

DirectX includes an effect compiler tool that does the job of converting our text-based effect files into a
binary form useful to the machine. During development, the effect compiler can be invoked using the
command line (more on this later) or it can be triggered from within your Visual Studio™ IDE
whenever .fx files are encountered during the building of the solution. In both of these cases, we are
essentially compiling the effect file into byte code at development time and will be shipping the binary
version of the effect with our application. This is often preferable, especially if your effect files contain
proprietary shader code that you do not wish to distribute in human readable form. We will examine
how the command line effect compiler tool (fxc.exe) can be invoked at development time a little later,
but for now we will turn our attention to how effect files can be loaded and compiled at application
startup. This is quite useful during the debugging phase of development, or as in our case, when
shipping projects that are intended for academic purposes. It allows other people to examine your .fx
files, make changes and immediately see the results the next time the application is run without having
to first pump them through a separate compiler application.

There are a number of ways that we can compile effect files from within our engine if we are willing to
defer effect compilation until runtime. We will take a look at the most common methods in detail shortly
and examine how and where each might be used. However, as a brief introduction to using effects
within your application, we will begin with some simple examples to get you started.

The easiest way to load and compile effect files at runtime is to use the D3DXCreateEffectFromFile
function supplied by the D3DX library. This is one of many functions which wrap the behavior of the
underlying D3DX effect compiler which we will discuss a little later. Behind the scenes, this function
loads in the effect file specified in its second parameter and sends it to the effect compiler for
processing. Provided the effect is compiled successfully and does not contain any errors, we are returned
a pointer to an interface of type ID3DXEffect (via its penultimate parameter). This is the interface
through which we will access the effect's functionality. Its methods allow us to set and retrieve
parameters within the effect, query and validate techniques, and of course, apply the states it contains to
the device during rendering.

LPD3DXEFFECT pEffect;
D3DXCreateEffectFromFile(pDevice, "test.fx", NULL, NULL, 0, NULL, &pEffect, NULL);

This function can be used to load effects which are both in text-based and pre-compiled binary formats.
If the effect file you are trying to load is in binary form (i.e., it was compiled at an earlier stage), the
function will skip the invocation of the effect compiler and simply create an effect object directly from
the supplied binary. From the application’s perspective this keeps things simple -- we call this function
and get back our ID3DXEffect interface, regardless of effect type.

Note: Don’t worry about all of those NULL parameter values for the moment. We will look at a
complete reference of the D3DX effect functions shortly and discuss them in more detail. This section is
intended to be a quick introduction to application usage of effect files only.
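One practical detail worth previewing right away is the final parameter, which can return a buffer of compiler error messages when compilation fails. The sketch below shows one possible pattern; the MessageBox reporting is purely illustrative:

// Sketch: capture and display compiler errors via the last parameter
LPD3DXEFFECT pEffect = NULL;
LPD3DXBUFFER pErrors = NULL;
HRESULT hRet = D3DXCreateEffectFromFile( pDevice, "Terrain.fx", NULL, NULL,
                                         0, NULL, &pEffect, &pErrors );
if ( FAILED( hRet ) && pErrors )
{
    // The buffer holds a human-readable, null-terminated error string
    MessageBox( NULL, (LPCSTR)pErrors->GetBufferPointer(),
                "Effect Compile Error", MB_OK );
}
if ( pErrors ) pErrors->Release();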

Having loaded our effect file, we now have an ID3DXEffect interface which exposes methods that allow
us to communicate with the effect file from within our engine code. One of the first things we will
usually want to do is step through the techniques defined in the effect and find the most suitable one to
use on the current machine on which the application is running. There may be many techniques defined
in this effect file so that the effect is scalable to different hardware configurations. Indeed, depending on
the current hardware, there may be several techniques defined which are not supported. We generally
need to find the best technique we can use for this effect and then set it as the active technique for use
during rendering.

Fortunately, the ID3DXEffect interface provides the ValidateTechnique and FindNextValidTechnique methods to help perform this task. These functions provide us with a simple way to enumerate and test
the various techniques defined in the effect file. One approach would be to step through each technique
in the effect and use the ValidateTechnique method to test each one. When we find a technique that
works (i.e., the current machine supports all of the required states), we simply store the returned handle
to this technique so that we can set it as the active technique during rendering. In the following example
you can see that we first validate the ‘singlepass’ technique as it is the most desirable. If it fails, we store
the handle to the ‘multipass’ technique instead.

D3DXHANDLE hTech = pEffect->GetTechniqueByName( "singlepass" );

if ( FAILED( pEffect->ValidateTechnique( hTech ) ) )
    hTech = pEffect->GetTechniqueByName( "multipass" );

Although this approach is perfectly valid, it is generally not ideal. First, the effect file might contain
many techniques and the code necessary to find the valid technique for each effect file could soon
become unwieldy. Second, and more importantly, this code relies on the application knowing the names
of each technique and which ones are preferable. This undermines a key advantage of using effect files
because our C++ code now has to be aware of the contents of each effect file that might be used. If we
decided to change the names of the techniques in our effect file, our code would break, so this is really
not a good design.

An alternative approach where the application does not have to know anything about the various
techniques in the effect file and does not have to be hard-coded in any way is shown next. This more
flexible method uses the FindNextValidTechnique function which does not require such explicit code
and will allow for any number of techniques in the effect file:

D3DXHANDLE hTech;
pEffect->FindNextValidTechnique( NULL, &hTech );

This method will iterate the techniques, starting with the first one defined in the effect, and return a

handle to the first one it finds that will work on the current system. If we arrange our techniques in the
effect file with our preferred techniques first, followed by fallback techniques, then this function will
always return the best technique that will run on the current system. This is an important point and is
worth repeating. As long as the effect developer puts the most demanding and cutting-edge techniques at
the top of the effect file and the simpler, legacy techniques towards the bottom, this single function call
will always return a handle to the best technique in the effect that is supported on the current hardware.
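In fact, the same call can be used in a loop to gather every technique that validates, in file order. Below is a minimal sketch of how the Techniques array of our earlier EFFECT_ITEM structure might be filled this way; we assume here that the search ends with a NULL handle once no further technique validates:

// Collect every technique that validates on the current device, in file
// order, so Techniques[0] holds the best supported technique (assuming
// the most demanding techniques are listed first in the effect file).
UINT Count = 0;
D3DXHANDLE hTech = NULL;
while ( Count < NUMTECHNIQUES )
{
    if ( FAILED( pEffect->FindNextValidTechnique( hTech, &hTech ) ) || !hTech )
        break; // No further valid techniques
    pEffectItem->Techniques[ Count++ ] = hTech;
}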

18.2.2 Rendering with Effect Files

Effects simplify scene rendering code under most situations. For example, a scene that may have been
comprised of many different attribute groups, each requiring special case rendering paths, could now
have more generic rendering functionality that consists of code that iterates through each attribute group,
sends any appropriate parameter values into its effect (texture, materials, matrices, etc.) and then fetches
the number of passes in the effect. We would then set up a loop for that many passes and apply that pass
of the effect before rendering our polygons.

Since rendering with effects introduces state change overhead (just as our application would were it
changing state manually), we will want to render as many primitives within each effect application /
pass as possible. By storing effects in our attribute structure, they can automatically become a primary
key via which we batch our polygons into renderable subsets. The setting of most states is now removed
from the main code, although as previously touched upon, we will still need to setup the effect
parameters before we use an effect. Generally speaking, parameter data is fairly similar across objects
(matrices, textures, etc.) so we should be able to build a fairly generic rendering system to handle all of
this. Later on, we will look at methods to manage and provide custom data as well, which can all be
performed behind the scenes without the main rendering logic being aware of the details.

For any given attribute group we know the effect it uses, so instead of having dedicated code paths for
specific attributes, we can now use its associated ID3DXEffect interface to instruct the effect to setup the
device states accordingly. To do so, we will first inform the effect about the technique we wish to use
(theoretically determined earlier during the effect validation process) by passing the technique handle to
the ID3DXEffect::SetTechnique method. Then we initialize the effect for rendering using the
ID3DXEffect::Begin method.

To give you a general idea of how code designed to render with effect files might look, let us have a
look at some example code snippets that might be used to render our scene attributes using the new
structures we have discussed. Remembering that in our theoretical design we will now maintain a list of
EFFECT_ITEM structures to represent the loaded effects and that our attribute structure can now store
up to eight texture indices and an effect index, the code to render a simple scene with effects might look
similar to the example that follows.

For the time being we will forget about spatial trees and other rendering optimizations and imagine that
we are dealing simply with an array of objects that each need rendering. So, first we will set up a loop to
iterate through all objects in the scene. Inside the object loop, we first retrieve a pointer to the object’s
mesh.

UINT passes;
ATTRIBUTE_ITEM *pAttr;
CObject *pObject;
CMesh *pMesh;
EFFECT_ITEM *pEffectItem;
IDirect3DTexture9 *pTexture;

for ( int k = 0; k < m_nObjectCount; k++ )
{
    // Fetch next object to render
    pObject = &m_Objects[k];

    // Retrieve its mesh
    pMesh = pObject->pMesh;

In the next section we iterate through each of our attributes and instruct the mesh to draw any subset it
manages that uses that attribute. Since we are iterating through a global list of attributes and only a few
may be in use for a given mesh, this example assumes that the DrawSubset method will efficiently
ignore attribute IDs for which no matching subset is defined.

    for ( int i = 0; i < m_nAttributeCount; i++ )
    {
        // Fetch current attribute to render
        pAttr = m_pAttribCombo[ i ];

        // Fetch the effect item container used by this subset
        pEffectItem = m_pEffectList[ pAttr->EffectIndex ];

        // Get the D3DX effect interface
        ID3DXEffect * pEffect = pEffectItem->Effect;

In the above snippet of code we see that for each attribute we fetch a pointer to its ATTRIBUTE_ITEM
structure, which gives us access to up to eight possible texture indices and the index of the effect used by
this attribute group. The effect index is used to perform a lookup into our effect item table from which
we can grab a pointer to the ID3DXEffect used by this attribute. This is obviously the compiled effect we
will use to configure the device for this subset.

Since our attribute can contain up to eight textures, we will assume that all of our effect files will have
eight texture parameters defined so that we can bind the physical textures to matching parameters. Since
we may be using lots of different effect files throughout our scene, a standard naming convention for our
parameters will allow us to use generic code to handle most state setting.

Here we will assume that our effect files have eight texture parameters which may be used and that they
are named Tex0, Tex1, … Tex7. While it is not guaranteed that all effect files will need to use all eight
parameters, we at least ensure a consistent interface on the application side.

In the next section of code we set up a loop to iterate through the texture index array stored in the
attribute structure. We will use the loop variable to build the parameter name for the texture we are
processing and then, if a texture exists in the attribute item at that slot, we send it into the appropriate

parameter in the effect. If a texture is not assigned via the attribute, we can simply pass along a pointer
to a global texture (e.g., a 1x1 white texture) or NULL if desired.

        char strTexture[ 8 ];
        for ( int t = 0; t < 8; t++ )
        {
            // Build string containing variable name
            sprintf( strTexture, "Tex%d", t );

            // Send any textures into effect
            if ( pAttr->TextureIndex[t] > -1 )
            {
                pTexture = pScene->m_pTextureList[ pAttr->TextureIndex[t] ]->Texture;
                pEffect->SetTexture( strTexture, pTexture );
            }
            else
                pEffect->SetTexture( strTexture, g_pSystemDefaultTexture );
        }

Here we see our first use of the ID3DXEffect interface for transfer of data from the application into the
effect. In this case, ID3DXEffect::SetTexture binds our texture to a parameter inside the effect. The
first function parameter contains the name of the variable we would like to set the value of (Tex0, Tex1,
etc.) and the second parameter contains the pointer to the texture object.

We are also going to assume that our effects will be responsible for setting the world and view matrices
and as such, every effect file will have two matrix parameters defined called World and View. The
ID3DXEffect::SetMatrix method will allow us to populate the relevant effect parameters as follows:

pEffect->SetMatrix ("World" ,&pObject->mtxWorld);


pEffect->SetMatrix ("View" ,&ViewMatrix);

There are various SetXX style methods in addition to the ones we see here and we will look at most of
them in detail later. For now, you can see that each function takes two parameters. The first is the name
of the variable to which we would like to bind a value and the second is the value we would like bind.
Once again, we are assuming that we have enforced a particular naming convention on our effect file
developers such that the world matrix inside the effect file is stored in a variable called “World” and the
view matrix is called “View”.

You might expect to see a call to set the technique that we would like to use at this point, but in simple
scenarios this is often best performed when the effect file is first loaded, compiled, and validated on the
current device. As such, in this example we will assume that each effect has already been informed of
which technique to use during the startup phase. Later on we will develop a more sophisticated effect
management system that changes techniques on the fly according to the current type of rendering we are
doing, but that is beyond the scope of this discussion.
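Assuming the validation step described earlier cached the winning technique handle in our EFFECT_ITEM structure, that startup-time call might simply have been:

// Select the best validated technique once, at load time
pEffect->SetTechnique( pEffectItem->Techniques[0] );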

All that is left to do now is apply the effect (our states) and render the polygons for the current subset.
Our first step is to call the ID3DXEffect::Begin method which, as we will discuss in more detail a little
later, (optionally) records the current device settings so that the matching ID3DXEffect::End call can

restore the device state after the effect has been applied and the polygons have been rendered. The
ID3DXEffect::Begin method also performs another important task -- it tells us the number of rendering
passes required for the active technique we are about to use.

        // Inform effect we are about to begin rendering so we can retrieve
        // the number of passes the current technique requires
        pEffect->Begin( &passes, 0 );

Note: Don’t worry about that second parameter for now. We will drill down on the ID3DXEffect interface
methods in more detail a bit later.

With the required number of passes now known and the device state saved, it is time to iterate through
each pass of the effect and render the polygon subset (per pass). To achieve this, we set up a loop to
iterate through each pass in the technique and then call the ID3DXEffect::BeginPass method,
specifying the pass we would like to execute. This method will instruct the effect framework to apply all
of the scripted device states inside the current pass block. Below we see the final section of our example
code.

        // Loop through each pass (BeginPass applies the states)
        for ( UINT j = 0; j < passes; j++ )
        {
            pEffect->BeginPass( j );
            pMesh->DrawSubset( i );
            pEffect->EndPass();
        }

        pEffect->End();
    }
}

You may be thinking at this point that we have damaged our ability to decouple our rendering
techniques from our rendering code because of the need for the application code to set effect file
parameters. Since the application had to know the names of those parameters, we introduced hard-coded
parameter names into our engine code, which is hardly fully generic. While this is true at a certain level,
the problem is not nearly as significant as it first seems. For example, you will often use the same names
across all your effect files, thus defining a common language amongst your scripts. The application can
then know (for example) that the world matrix will always be called ‘World’ in all of your effect files and that the first five texture parameters will always be called Tex0, Tex1, Tex2, Tex3, and Tex4, and so on.

Regardless of the number of effects used by the various objects in our scene, and regardless of how
differently each attribute’s effect may need to configure the device, this single section of code will
handle rendering for all of our standard meshes. All special case code is removed from the rendering
function and our state logic and assignments are tucked away inside the effect making the system far
more robust and customizable.

18.2.3 Setting Effect Variables Using Handles

One problem with the example rendering code we saw in the last section is that it could fairly be
described as inefficient. Typically, many effect parameters will need to be set during scene rendering so
we will want this process to be as quick as possible. Although there are other improvements that could
be made, the first simple improvement relates to the method used to assign values to effect parameters,
which in the above example was achieved in the following way:

pEffect->SetTexture( "Tex0", pTexture );

In this particular example, we are specifying the name of a parameter known to be in the effect file
(“Tex0”) so that we can associate it with the texture pointer specified as the second parameter. However,
when we use the function in this way, it means the effect will need to search internally for a parameter
with that name before the assignment can be done. String lookups are generally best avoided in a time
critical situation if possible, especially when they will need to be performed many times over.
Fortunately, the ID3DXEffect interface allows you to search for this parameter, by name, once and then
cache a handle to that parameter for subsequent use. Although we will defer our more detailed
discussions of the numerous methods that can be used to fetch handles until a little later, we will briefly
examine one such method now as it will serve as a useful basis for some of the discussions to come.

The ID3DXEffect::GetParameterByName method is one of many handle retrieval functions that can be
used to fetch the handle for a given parameter based on its name, and is likely the one you will
encounter the most in the initial stages of our exploration of effect files. As outlined above, similar to
handles designed to reference techniques, this method can be used to retrieve a handle to a parameter
that is guaranteed never to change for the lifetime of the effect object. In our current example then, we
might want to retrieve the handle to the effect parameter with the name "Tex0" as follows:

D3DXHANDLE hTexture = pEffect->GetParameterByName( NULL, "Tex0" );

As you can see above, the GetParameterByName method returns a value of type D3DXHANDLE which
can be subsequently passed to the various ID3DXEffect::Set... methods in place of the string based
name we were using earlier in order to set the parameter value each time.

pEffect->SetTexture( hTexture, pTexture );

In practice, much like referencing a variable or data structure directly through a pointer, this approach is
more efficient. Since we don't necessarily want to retrieve the handle by name each frame -- doing so
would negate the benefit of using handle based references entirely -- we will usually want to implement
a system that caches parameter handles during startup and then use these handles to set the parameters
inside the time critical render loop. This is the approach we will be using in the next section in which
we'll be taking a look at a more practical example project.

18.2.4 CTerrain Effect Experiment - Lab Project 18.2

At this point we should have enough general information about effect files under our belt such that we
can perhaps look at effects being used in the context of some code that is more familiar to us. In this
section we will examine some modifications to the CTerrain class developed in earlier lab projects
(specifically Lab Project 6.2 from Module I) to make it more effect file friendly. This is a good code
module for you to experiment with because it is both simple and self-contained (i.e., it is not wired into a
larger scene’s attribute system used in other lab projects). The terrain class also has the luxury of
knowing that all of its polygons can use the same effect. In fact, we will assume that the Terrain.fx file
we studied earlier is the one we will be using here.

The changes to the class declaration should be quite simple. Here we have added several members to the
CTerrain class:

Excerpt from CTerrain.h

LPD3DXEFFECT m_pEffect;       // The effect file used to render the terrain.
D3DXHANDLE   m_hMatrixView;   // Parameter handle for effect view matrix
D3DXHANDLE   m_hMatrixWorld;  // Parameter handle for effect world matrix
D3DXHANDLE   m_hMatrixProj;   // Parameter handle for effect projection matrix
D3DXHANDLE   m_hTextures[8];  // Parameter handles for effect textures

The first member is a pointer to the ID3DXEffect object that we will use for device configuration during
the rendering of terrain blocks. The following three members are handles for our three key
transformation matrix parameters. Our effects will assume in nearly all cases that the world, view, and
projection matrices are passed in by the application, and it will be the responsibility of the effect to bind
them to the device. Finally, the last new member is an array where we can store handles to the effect
file’s texture parameters (two in our simple terrain case – diffuse and detail textures). We define an
array of eight handles since you may wish to extend this experiment later and add additional textures.

You will recall from previous discussions of the CTerrain class that the CTerrain::LoadHeightMap
method is responsible for constructing the terrain and loading the textures that will be applied to it. This
function will now be updated so that it also loads and compiles the "Terrain.fx" effect file (found in the
lab project's "Data" directory) and validates its techniques. It will also retrieve all matrix and texture
parameter handles needed to configure the effect at render time and store them in the new member
variables outlined above. The complete LoadHeightMap method is shown below, with the changes
noted as appropriate. The new additions are mostly towards the bottom of the function.

Recall that the first section of the method is designed to compute certain necessary values, such as the
scale of the terrain on each axis, as well as reading the contents of the heightmap specified by the calling
function in order to populate an appropriately sized byte array that will represent the pre-scale height of
each vertex in the terrain.

bool CTerrain::LoadHeightMap( LPCTSTR FileName, ULONG Width, ULONG Height )
{
    HRESULT       hRet;
    FILE        * pFile     = NULL;
    LPD3DXBUFFER  pErrorLog = NULL; // Receives effect compiler output (see below)

    // Cannot load if already allocated (must be explicitly released for reuse)
    if ( m_pMesh )
        return false;

    // Must have an already set D3D Device
    if ( !m_pD3DDevice )
        return false;

    // First of all store the information passed
    m_nHeightMapWidth  = Width;
    m_nHeightMapHeight = Height;

    // A scale of 4 is roughly the best size for a 512 x 512 quad terrain.
    // Using the following formula, lowering the size of the terrain
    // simply lowers the vertex resolution but maintains the map size.
    m_vecScale.x = 4.0f * (512 / (m_nHeightMapWidth - 1));
    m_vecScale.y = 2.0f;
    m_vecScale.z = 4.0f * (512 / (m_nHeightMapHeight - 1));

    // Attempt to allocate space for this heightmap information
    m_pHeightMap = new UCHAR[Width * Height];
    if (!m_pHeightMap)
        return false;

    // Open up the heightmap file
    pFile = _tfopen( FileName, _T("rb") );
    if (!pFile)
        return false;

    // Read the heightmap data (grayscale)
    fread( m_pHeightMap, Width * Height, 1, pFile );

    // Finish up
    fclose( pFile );

No changes so far from earlier lab projects. This is also true of the next section which, if you recall, is
responsible for loading the base and detail textures.

    // Load in the textures used for rendering the terrain
    hRet = D3DXCreateTextureFromFile( m_pD3DDevice, BaseTextureName,
                                      &m_pBaseTexture );
    if ( FAILED(hRet) )
        return false;

    hRet = D3DXCreateTextureFromFile( m_pD3DDevice, DetailTextureName,
                                      &m_pDetailTexture );
    if ( FAILED(hRet) )
        return false;

At this point the heightmap is loaded and the two textures to be applied to the terrain have been created
and are ready for use. The following section is where our new code comes into play.

First, our CTerrain class will expect to find its rendering techniques in an effect file called Terrain.fx.
The new code below demonstrates how we can load this effect using the D3DXCreateEffectFromFile
function provided by D3DX.

    // Load and compile the effect file
    hRet = D3DXCreateEffectFromFile( m_pD3DDevice,
                                     _T("Data\\Terrain.fx"),
                                     NULL, NULL, 0, NULL,
                                     &m_pEffect,
                                     &pErrorLog );

As you can see in the above code, we pass in a pointer to the device and the name of the effect file we wish
to load. We also pass the address of our new m_pEffect member variable so that, on successful function
completion, it will be assigned a valid ID3DXEffect interface pointer.

The D3DXCreateEffectFromFile function will load our text-based effect script and invoke the effect
compiler behind the scenes to compile the effect. If the function does not return successfully then it
means that our effect script caused compilation errors, most likely due to bad syntax, or some other error
occurred. As the final parameter to this function we can pass the address of a variable of type
'ID3DXBuffer*' (or 'LPD3DXBUFFER') which, on function return, will contain a pointer to a D3DX
buffer containing a string with any output generated by the effect compiler. If an effect does fail to
compile, we can retrieve the contents of this buffer and output the compilation errors to help us identify
the problem areas.

    if ( FAILED(hRet) )
    {
        // Print the build error information
        if ( pErrorLog )
        {
            _tprintf( _T("Effect Build Error for file 'Terrain.fx': %s\n"),
                      pErrorLog->GetBufferPointer());
            pErrorLog->Release();

        } // End if error log exists
        else
        {
            _tprintf( _T("Effect Build Error for file 'Terrain.fx'.\n"));

        } // End if no error log

        return false;

    } // End if failed to load effect from file.

    // Error log may still have been created even if the compilation
    // did not fail (it may contain warnings in this case).
    if ( pErrorLog )
        pErrorLog->Release();

If the compilation step succeeds, and the above code is skipped, then we know that the effect file was
successfully loaded and compiled so next we must validate its techniques. In this case we will want to
select the best technique that is supported on the current hardware. You will recall from our earlier
examination of the Terrain.fx file that it contains two techniques. The first was a single pass technique
that performed the color blending between the base map and detail map in the texture stages. A multi-
pass fallback technique was also provided in case two stages were not supported.

Because we placed the techniques inside our effect file in order of most desired (top to bottom), we can
simply use the ID3DXEffect::FindNextValidTechnique method to find the best technique. Recall that
it will choose the first technique that works (i.e., is validated), starting with the first technique in the file.
If that one fails, it will naturally move down to the second technique, and so on.

    // Find first valid technique in file
    D3DXHANDLE hFirstValidTechnique;
    if ( FAILED( m_pEffect->FindNextValidTechnique( NULL,
                                                    &hFirstValidTechnique ) ) )
    {
        _tprintf( _T("Could not find a valid technique in file 'Terrain.fx'.\n") );
        m_pEffect->Release();
        return false;

    } // End if failed

    // If we have a valid technique handle, then we have selected the first
    // valid technique from the effect file. Set the selected technique in the
    // effect so that from this point on we will render using that technique.
    m_pEffect->SetTechnique( hFirstValidTechnique );

The above code is hopefully pretty self-explanatory. Because we have a valid ID3DXEffect, we can use
its FindNextValidTechnique method to find the first (and theoretically best) technique that works. As
the first parameter to this function we pass NULL, which tells D3DX to begin the validation process
starting at the first technique defined at the top of the effect file. Alternatively, we could pass the handle
of a specific technique from which we wish to start the search, bypassing earlier entries. As the second
parameter, we pass the address of a D3DXHANDLE. On function return it will reference the technique
that was ultimately selected for use on the current hardware. Once we have this handle we can
immediately call the ID3DXEffect::SetTechnique method to inform the effect that this is the
technique we would like to use during rendering.

Since it is much more efficient to set parameter values using handles as discussed in the previous
section, in the next section of the method code we use the ID3DXEffect::GetParameterByName
method to fetch the parameter handles for the world, view, and projection matrices. We saw earlier that
in our Terrain.fx script the parameter names for these matrices are matrix_world, matrix_view and
matrix_projection, respectively.

// Retrieve matrix parameter handles from effect file
m_hMatrixView = m_pEffect->GetParameterByName( NULL, "matrix_view" );
m_hMatrixWorld = m_pEffect->GetParameterByName( NULL, "matrix_world" );
m_hMatrixProj = m_pEffect->GetParameterByName( NULL, "matrix_projection" );

As you can see, the three matrix parameter handles are stored in their respective member variables so
that they can be efficiently accessed at render time. For now we'll ignore the first parameter accepted by
the GetParameterByName method (we'll examine what this can be used for a little later) and simply pass
NULL in each case. The second input however is where we pass the name of the parameter whose
handle we wish to retrieve.

Using an identical approach, we also loop through (up to) eight textures and try to fetch handles for
them. Just as with our matrices, we can assume that our effect files will use a known naming convention
for texture parameters. Up to eight texture parameters might be present in our experimental effect file
with names in the format of texturen where n is [0, 7].

    // Retrieve parameter handles for our textures
    char TexName[32];
    for ( int i = 0; i < 8; ++i )
    {
        sprintf( TexName, "texture%i", i );
        m_hTextures[i] = m_pEffect->GetParameterByName( NULL, TexName );

    } // Next texture handle

The final few lines, which allocate and build the terrain blocks, are unchanged from prior projects.

    // Allocate enough meshes to store the separate blocks of this terrain
    if ( AddMesh( ((Width - 1) / QuadsWide) * ((Height - 1) / QuadsHigh) ) < 0 )
        return false;

    // Build the mesh data itself
    return BuildMeshes( );
}

In our Terrain.fx file you may have noticed that we only defined two texture parameters, but in the
above example code we attempt to retrieve handles for eight. In the cases where a parameter with the
specified name is not found in the effect file script, the ID3DXEffect::GetParameterByName method
will fail gracefully and simply return NULL. So this works perfectly well for our experiment.
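
Since some of these handles may therefore be NULL, render code that uses them can simply skip any
parameter that was not found. The following is a minimal sketch of that idea (the m_pTextures array is a
hypothetical stand-in for whatever texture pointers the application maintains, not a member of the actual
CTerrain class):

// Skip any texture parameters that were not found in the effect.
// m_pTextures is a hypothetical array of application texture pointers.
for ( int i = 0; i < 8; ++i )
{
    if ( !m_hTextures[i] )
        continue; // No such parameter in this effect; nothing to set.

    m_pEffect->SetTexture( m_hTextures[i], m_pTextures[i] );
}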

So, with the effect file now loaded, and all handles cached, let’s next look at the steps that might be
necessary in order to adjust our render function to make use of this information. Since we will now be
using effects to set device states, our old terrain rendering code should be a lot simpler since we no
longer have to worry about the conditional multi-pass/single-pass logic as we did in the past. These
separate rendering techniques are now defined in the effect script, so a single unified render function
should definitely be possible. Let’s see if that is true.

Note: If you happen to be following along with some code from earlier lab projects (Module I or
II), you might want to consider trying to code the Render function yourself before reading the
next section. You should have enough knowledge right now to do so.

In the first section we will add some code to send the world, view, and projection matrices to the
terrain’s effect. The same will be true for our base and detail texture parameters. Notice that we can now
pass the D3DXHANDLEs that we cached earlier into the ID3DXEffect::SetMatrix and
ID3DXEffect::SetTexture methods instead of the parameter names. This makes setting these
parameters much more efficient.

void CTerrain::Render( CCamera * pCamera )
{
    ULONG i;
    UINT  nPassCount, nPass;

    // Validate parameters
    if( !m_pD3DDevice || !m_pEffect )
        return;

    // Set the FVF code for the terrain meshes.
    if ( m_nMeshCount > 0 )
        m_pD3DDevice->SetFVF( m_pMesh[0]->m_nFVFCode );

    // Pass matrices to the effect
    D3DXMATRIX mtxIdentity;
    D3DXMatrixIdentity( &mtxIdentity );
    m_pEffect->SetMatrix( m_hMatrixView, &pCamera->GetViewMatrix() );
    m_pEffect->SetMatrix( m_hMatrixProj, &pCamera->GetProjMatrix() );
    m_pEffect->SetMatrix( m_hMatrixWorld, &mtxIdentity );

    // Pass textures to the effect
    m_pEffect->SetTexture( m_hTextures[0], m_pBaseTexture );
    m_pEffect->SetTexture( m_hTextures[1], m_pDetailTexture );

Next we need to adjust the code that actually draws our terrain.

To start, we call ID3DXEffect::Begin and retrieve the number of passes required for the technique that
was ultimately selected for the terrain just as we did previously in our original theoretical example. If the
primary technique was selected, then the following code will perform only a single render pass. If only
the fallback technique validated, then it will perform two passes. We set up a loop to iterate through the
required number of passes and inside that loop we will call BeginPass for each pass. Between the
BeginPass and EndPass methods we will render the terrain as we always have with the knowledge now
that our effect already has all the information it needs to utilize either technique.

    // Render each visible block
    if ( SUCCEEDED( m_pEffect->Begin( &nPassCount, 0 ) ) )
    {
        // Render for each pass required by the current technique
        for ( nPass = 0; nPass < nPassCount; ++nPass )
        {
            // Begin this render pass
            if ( FAILED( m_pEffect->BeginPass( nPass ) ) )
                break;

            // Render each block
            for ( i = 0; i < m_nMeshCount; i++ )
            {
                // Skip if mesh is not within the viewing frustum
                if ( pCamera &&
                     !pCamera->BoundsInFrustum( m_pMesh[i]->m_BoundsMin,
                                                m_pMesh[i]->m_BoundsMax ))
                    continue;

                // Set the stream sources
                m_pD3DDevice->SetStreamSource( 0, m_pMesh[i]->m_pVertexBuffer, 0,
                                               m_pMesh[i]->m_nStride );
                m_pD3DDevice->SetIndices( m_pMesh[i]->m_pIndexBuffer );

                // Render the stream
                m_pD3DDevice->DrawIndexedPrimitive( D3DPT_TRIANGLESTRIP,
                                                    0, 0,
                                                    BlockWidth * BlockHeight,
                                                    0, m_nPrimitiveCount );

            } // Next Mesh

            // Finish this pass
            m_pEffect->EndPass();

        } // Next pass

        // Finish rendering with effect
        m_pEffect->End();

    } // End if succeeded
}

There is one other important step we must take in order to complete our integration of effect files in this
relatively simple demo. As we will come to see later in this chapter, the effect file system internally
maintains certain additional D3D resources which must be released prior to the device being reset (i.e.,
in response to a lost device or a window resize) and restored once the device reset is complete. As a
result, we must also inform the effect file object of the occurrence of these two events. This is achieved
by making a call to the ID3DXEffect::OnLostDevice and ID3DXEffect::OnResetDevice methods at the
appropriate time. In this lab project, two new methods have been added to the CTerrain class as shown
below:

void CTerrain::OnDevicePreReset( )
{
if ( m_pEffect )
m_pEffect->OnLostDevice();
}

void CTerrain::OnDevicePostReset( )
{
if ( m_pEffect )
m_pEffect->OnResetDevice();
}

These two methods are called by the main application object (CGameApp) where necessary in order to
allow us to pass these messages on to the effect file object.
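
As a rough sketch of that flow (the member names used here -- m_Terrain, m_pD3DDevice, and
m_D3DPresentParams -- are illustrative assumptions rather than the actual lab project code), the
application side might look something like this:

HRESULT CGameApp::ResetDevice()
{
    // Allow dependent objects to release their device-dependent
    // resources before the device is reset.
    m_Terrain.OnDevicePreReset();

    // Attempt the actual device reset.
    HRESULT hRet = m_pD3DDevice->Reset( &m_D3DPresentParams );
    if ( FAILED( hRet ) )
        return hRet;

    // The device is available again; allow objects to restore resources.
    m_Terrain.OnDevicePostReset();
    return S_OK;
}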

That pretty much wraps up the implementation of our first fixed-function effect file integration project.
Hopefully this example has demonstrated how easy it can be to introduce effect files and take advantage
of the benefits they provide. Converting our lab project rendering framework over to use effect files as a
foundation will be a bit more involved, because we have lots of brand new features to support, but it will
not differ greatly from the key ideas we just saw here.

18.2.5 Parameter Blocks

When we consider that each effect will probably need to have many parameters set before it can be
applied, we can imagine a situation where our rendering code could once again grow quite large with the
number of SetXX calls needed for our effects. It would be a lot cleaner and more efficient if we could
assign all parameter values for a given attribute group’s effect without requiring myriad set calls. As it
happens, we can do this via the effect framework’s support for parameter blocks. These are basically
blocks of parameter data that can be (pre)recorded and later applied to a given effect to set all static
parameter data with a single function call.

There are two steps to using parameter blocks. The first creates and populates the parameter block and is
usually done at initialization time. That is, the parameter block will be built once when the effect is first
compiled and its best valid technique is chosen. The following snippet of code shows how a parameter
block is created and some application provided values are assigned to its parameters. As you can see,
informing the parameter block of the values to record for a given effect is simply a case of calling the
ID3DXEffect::Set… methods sandwiched between calls to the parameter block recording method pair.

// Start recording a parameter block
pEffect->BeginParameterBlock();

// Bind variables…
pEffect->SetTexture( "Tex0"     , pTexture[0] );
pEffect->SetFloat  ( "fReflect" , 20.0f );
pEffect->SetMatrix ( "Scale"    , &ViewMatrix );
pEffect->SetInt    ( "Score"    , 10 );

// Finish recording and retrieve parameter block handle
D3DXHANDLE m_hParameters = pEffect->EndParameterBlock();

As illustrated, once we have created an effect we can use ID3DXEffect::BeginParameterBlock to
start recording a parameter block. A new parameter block will be created by the effect and any
parameter assignments we make between that call and a call to ID3DXEffect::EndParameterBlock
will be recorded in that block. When the ID3DXEffect::EndParameterBlock method is called, the
handle to the parameter block is returned to us. This handle can be stored and used later in our rendering
procedure to apply all values in the parameter block to the effect with a single call.

In the above example we are recording the values of four parameters in the parameter block. The code
assumes that the effect has a texture parameter called “Tex0”, a float parameter called “fReflect”, an
integer parameter called “Score” and a matrix called “Scale”. The parameter block will record the four
values passed in as the second parameter to each Set method and will assign these values to the relevant
effect parameters when the parameter block is later applied.

It is very important to understand that the parameter block does not store pointers to the variables we
provide (textures being the exception), and instead a copy of the value will generally be made. As a
result, they can really only be used to record values that we do not expect to dynamically change
throughout the life of the application. Thus, a world matrix would usually be a bad choice to record in a
parameter block because the parameter block would only store the values of the matrix at the time the
parameter block was created. If the values inside the matrix were to change, the world matrix values
stored in the parameter block would be out of date when we applied it. Indeed you will notice that for
the SetInt and SetFloat examples, literal values are passed. Not factoring in the static nature of the data
is a bug-inducing mistake to make, so it's worthwhile bearing this in mind. Any effect parameters whose
values will change dynamically and frequently over the life of the application will still need to be set
before the effect is used and should not be recorded in a parameter block. This limitation makes
parameter blocks somewhat less useful to be sure, but they can still be beneficial if you choose to
support them for reducing function call overhead for static parameter data.
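
To illustrate the copy semantics, consider the following small sketch (the 'World' parameter name is
purely illustrative):

D3DXMATRIX mtxWorld;
D3DXMatrixIdentity( &mtxWorld );

// The value of the matrix is copied at record time...
pEffect->BeginParameterBlock();
pEffect->SetMatrix( "World", &mtxWorld );
D3DXHANDLE hBlock = pEffect->EndParameterBlock();

// ...so later changes to the application-side variable are not seen.
D3DXMatrixTranslation( &mtxWorld, 10.0f, 0.0f, 0.0f );
pEffect->ApplyParameterBlock( hBlock ); // Still applies the identity matrix.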

Note: Textures are the obvious exception, where the effect variables themselves are actually pointers to
texture surfaces. Thus, there is no problem with recording a texture assignment inside a parameter block
even if the texture surface will be locked and altered or used as a render target. In such cases, the
texture pointer does not change and thus the parameter block binds the texture to a texture stage as
normal and any changes that have been made to the texture surface will be reflected during rendering.

Once a parameter block has been created and its handle cached, we can apply the values we recorded
using the ID3DXEffect::ApplyParameterBlock method. We will demonstrate this in the next example
by assuming that we have an array of objects to render. In this case, each object will store a pointer to its
effect and a handle to the parameter block that was created during compilation.

if( SUCCEEDED( pDevice->BeginScene() ) )
{
    // Render the mesh objects
    for( int i = 0; i < NUM_OBJS; ++i )
    {
        // Grab effect of current object and apply the parameters
        ID3DXEffect *pEffect = Objects[i].m_pEffect;
        pEffect->ApplyParameterBlock( Objects[i].m_hParameters );

        ...

        pEffect->Begin( &Passes, 0 );
        for( iPass = 0; iPass < Passes; iPass++ )
        {
            ...
        }
        pEffect->End();
    }

    ...
    pDevice->EndScene();
}

18.2.6 Standardizing Communication I – Semantics

In our previous examples, the host application had to be aware of the names of all the various
parameters in our effect files. While enforcing a strict variable naming convention minimizes the impact
of this requirement, a potentially more attractive alternative is to use semantics and develop a common
labeling scheme for our parameters which describe their intent to the host application.

Semantics are basically descriptive labels that we can assign to effect parameters. Normally their
purpose is to tell the application how a variable is intended to be used. By defining a set of semantics,
you essentially define a language of communication between the host application and the various effect
files. If semantics are being used, we can use the ID3DXEffect::GetParameterBySemantic method to
search for parameters for the purpose of caching their handles or querying information. This means we
are no longer searching for parameters based on their name but instead based on their usage (which
makes a lot of sense for generic applications such as scene editors and other tools). As long as the
developer documents the various semantics that will be understood by the engine, an effect file writer
can use whatever parameter naming conventions they prefer for the variables (e.g., Hungarian notation).
This also dramatically reduces the risk of variable naming conflicts, since effect authors are no longer
forced to use specific names that may already be in use elsewhere.

Let us have a look at how we might define some parameters in an effect file using semantics. You are
reminded that a semantic can be any identifier you choose and part of your job as an engine developer
will be choosing a set of semantics that your engine will recognize and support. A single effect
parameter can only ever have one semantic assigned to it. As you will see in the following code, we
assign a semantic to an effect parameter by following its name with a colon ‘:’, the semantic itself, and
finally ending with the traditional semi-colon.

Below we show our CTerrain effect file example adapted to use semantics. The only difference here is
that the five effect parameters/variables defined at the top of the file have now been assigned semantics.
The rest of the effect file is shown merely to demonstrate that the use of semantics does not change the
way in which those parameters are used throughout the various techniques. The semantics exist only to
inform the host application about our intentions for those parameters.

////////////////////////////
// Parameter Definitions
////////////////////////////
texture texture0 : TEX0;
texture texture1 : TEX1;

matrix matrix_world      : WORLD;
matrix matrix_view       : VIEW;
matrix matrix_projection : PROJECTION;

////////////////////////////
// Samplers
////////////////////////////
sampler2D ColorMapSampler = sampler_state
{
Texture = <texture0>;
AddressU = Wrap;
AddressV = Wrap;

MinFilter = Linear;
MagFilter = Linear;
MipFilter = Linear;
};

sampler2D DetailMapSampler = sampler_state
{
Texture = <texture1>;
AddressU = Wrap;
AddressV = Wrap;

MinFilter = Linear;
MagFilter = Linear;
MipFilter = Linear;
};

////////////////////////////
// Techniques
////////////////////////////
Technique TerrainSinglePassRender
{
pass p0
{
WorldTransform[0] = <matrix_world>;
ViewTransform = <matrix_view>;

ProjectionTransform = <matrix_projection>;

Lighting = false;
SpecularEnable = false;
NormalizeNormals = false;
DitherEnable = true;
FillMode = solid;
ShadeMode = gouraud;
CullMode = CCW;
ZEnable = true;
ZWriteEnable = true;
AlphaBlendEnable = false;
AlphaTestEnable = false;

Sampler[0] = (ColorMapSampler);
TexCoordIndex[0] = 0;
ColorArg1[0] = Texture;
ColorOp[0] = SelectArg1;

Sampler[1] = (DetailMapSampler);
TexCoordIndex[1] = 1;
ColorArg1[1] = Texture;
ColorArg2[1] = Current;
ColorOp[1] = AddSigned;
}
}

Technique TerrainMultiPassRender
{
pass p0
{
WorldTransform[0] = <matrix_world>;
ViewTransform = <matrix_view>;
ProjectionTransform = <matrix_projection>;

Lighting = false;
SpecularEnable = false;
NormalizeNormals = false;
DitherEnable = true;
FillMode = solid;
ShadeMode = gouraud;
CullMode = CCW;
ZEnable = true;
ZWriteEnable = true;
AlphaBlendEnable = false;
AlphaTestEnable = false;

Sampler[0] = (ColorMapSampler);
TexCoordIndex[0] = 0;
ColorArg1[0] = Texture;
ColorOp[0] = SelectArg1;
AlphaOp[0] = Disable;
}

pass p1
{

AlphaBlendEnable = true;
SrcBlend = DestColor;
DestBlend = SrcColor;

Sampler[0] = (DetailMapSampler);
TexCoordIndex[0] = 1;
ColorArg1[0] = Texture;
ColorOp[0] = SelectArg1;
AlphaOp[0] = Disable;
}
}

Here we have used a semantic naming system for our textures in the format of TEXn (where n is a value
between 0 and 7). In this particular effect file we use only two textures and label them with the
semantics TEX0 and TEX1. You are reminded that you can use whatever semantics you want and that,
as an engine developer, it will generally be your job to inform the artists/effect developers about the
semantics your engine will search for and support.

Using the above example, when building the terrain object the application would know in advance that it
requires a base map and a detail map and would therefore know to search for two texture parameters in
its effect. The engine no longer has to know what the parameters are called, only that the semantic
TEXn format is being used by all effect files to label parameters as textures. The search for the texture
parameters would now be by semantic and the returned handles cached as before.
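
As a sketch, the texture handle caching loop we wrote earlier for CTerrain might now be expressed
along the following lines (assuming the same member variables as before):

// Retrieve texture parameter handles by semantic rather than by name.
char Semantic[32];
for ( int i = 0; i < 8; ++i )
{
    sprintf( Semantic, "TEX%i", i );
    m_hTextures[i] = m_pEffect->GetParameterBySemantic( NULL, Semantic );

} // Next texture handle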

You can also see that our engine would need to support the WORLD, VIEW, and PROJECTION
semantics for indicating that the effect would like to be provided with the corresponding matrices.

From a design perspective, for each effect file the engine loads, it can search for parameters using its list
of supported semantics. If a matching parameter handle is returned from the effect for a semantic, then
the engine knows that the effect should be passed the data that the semantic represents. As a simple
example of this type of scripting, when an effect is loaded we might have the following line:

m_hMatrixWorld = m_pEffect->GetParameterBySemantic( NULL, "WORLD" );

If m_hMatrixWorld is not NULL on function return, then the effect in question includes a parameter
marked with the WORLD semantic and as such is perhaps requesting that our application provide it with
the current world matrix prior to applying the effect. As with the other parameter enumeration methods
we have looked at so far, the returned handle can be used to set the value of the effect parameter
efficiently later on in the render loop.

Note: If you decide to use semantics, you can name parameters whatever you want, but you are now
enforcing a semantic labeling convention instead. The application still has to be aware of the semantics
you have decided to use, so in either case there is no workaround that permits the application to remain
totally unaware of the data requirements for effects.

With what we know about semantics and how they can be used to define a common language of data
interchange between the effects and the engine, you can imagine how a very simple set of semantics
could be defined for common parameters, as shown below. The following code could be defined in a
standalone effect file and used as a header file for other effect files. This means that you, as the engine
developer, could define a number of common parameters that all effects may wish to use marked with
semantics that the engine will understand. This file could be distributed to each effect developer and
included at the top of their effect files so that their effects too will have access to these parameters and
the data that will be supplied through them by the engine.

////////////////////////////
// Materials
////////////////////////////
float4 material_diffuse  : DIFFUSE;
float4 material_ambient  : AMBIENT;
float4 material_emissive : EMISSIVE;
float4 material_specular : SPECULAR;
float  material_power    : SPECULARPOWER;

////////////////////////////
// Transforms
////////////////////////////
matrix matrix_world : WORLD;
matrix matrix_view : VIEW;
matrix matrix_projection : PROJECTION;

////////////////////////////
// Textures
////////////////////////////
texture texture0 : TEX0;
texture texture1 : TEX1;
texture texture2 : TEX2;
texture texture3 : TEX3;
texture texture4 : TEX4;
texture texture5 : TEX5;
texture texture6 : TEX6;
texture texture7 : TEX7;

If we assume that we have defined the above parameters and semantics in a file called “common.fx”
then any other effect can have access to these parameters (and their semantics) simply by placing the
following pre-processor instruction at the top of their effect files:

#include "common.fx"

Just as in C++, the effect compiler also has a pre-processor that can be used to declare #defines,
conditional code inclusion/exclusion blocks and of course, #includes.

Note: The semantics used in the above code snippets are just examples of ones you might choose to
use. You can use whatever semantics you want so long as your engine code knows what labels it is
searching for.

Whether you choose to use semantics or not, it can be a good idea to define your commonly used
parameter declarations in a separate header file and include them with each effect file you write as we
see above.

Semantics can also be useful for ideas beyond just setting state. Using the D3DX effect framework you
can possibly script other parts of your game engine as well. After all, effect files allow us to define
parameters and read back those parameters, which is not much different from how we used INI files in
Modules I and II of this series. For example, you might decide that in addition to the techniques and
passes required by a given batch of polygons, you will also define the vertex buffer format that should
be used. We have seen in the past that we often require many different vertex formats for the various
objects in our game world. A terrain object for example might use an unlit, untransformed vertex format
with space for two sets of texture coordinates. A skybox on the other hand might use a pre-lit and
untransformed vertex but with only space for one set of texture coordinates. A skinned mesh will need to
have weights, and so on. Thus, it is not out of the question to use the effect file to store a structure that
describes to the engine the vertex format that should be used for objects that are assigned the effect. In
this case, when the engine loads a given effect file it can read in
the vertex elements defined in the file and construct a vertex structure on the fly that contains the
requested components. Assuming that we limit ourselves to the use of FVF-compliant vertex structures
for the simplicity of this example, perhaps somewhere towards the top of our effect files we would
define a structure that describes the vertex format as follows:

#include "common.fx"

////////////////////////////
// Vertex structure
////////////////////////////
struct VERTEXSTRUCTURE
{
float3 Position : POSITION;
float3 Normal : NORMAL;
float2 DiffuseUV : TEXCOORD0;
float2 LightmapUV : TEXCOORD1;
};

VERTEXSTRUCTURE VertexFormat;

Notice how we can define such a structure in an identical way to that of a structure definition in C or
C++. In this example we have defined a structure called VERTEXSTRUCTURE and have instantiated a
parameter of that type called VertexFormat. The parameter has four members used to describe the
position, normal, and two sets of texture coordinates. Our engine could then search the effect file for a
parameter called VertexFormat and examine each of the child members of this structure. Their
semantics would help in determining which vertex pool the polygons assigned to this effect should
belong to, or whether a new vertex buffer needs to be built that supports these components.

Note: In the above example we could have called the structure and its members anything that we liked.
The structure and its members will not be used by the effect in any way -- this is simply a way to
communicate a desired vertex structure to the engine in a way that is familiar to C/C++ programmers.

The following code shows how the host application might retrieve the details of this vertex structure and
then build an FVF code to describe it. This FVF could then be passed along to the object in question so
that it constructs a vertex buffer using the correct format. There are a few ID3DXEffect methods being
used in this next example which we have not yet discussed. The ID3DXEffect::GetParameterDesc

method for example allows us to retrieve detailed information about an effect parameter, such as its
type, semantic and size. When the type of a parameter in question is a structure, the descriptor returned
will outline the number of child members that it has (amongst other things), so that we can then iterate
through each member and fetch a handle to it. You will see that in order to fetch the handle to a child
parameter in a structure, we will pass the handle of the parent parameter as the first parameter to the
GetXX series of functions (recall from our earlier discussion of the GetParameterByName method we
were previously explicitly passing a value of NULL to indicate that we wanted to search within the
global scope).

// Get the handle of the vertex format parameter
D3DXHANDLE hVertex = pEffect->GetParameterByName( NULL, "VertexFormat" );

// Retrieve variable description
D3DXPARAMETER_DESC Description;
pEffect->GetParameterDesc( hVertex, &Description );

Here we searched the effect for a parameter named ‘VertexFormat’. We have already established that
this is a known parameter that we have pre-defined in our effect files, whose type is a structure that
describes the vertex format layout for polygons assigned this effect. If it is found, then its handle is
returned. We then used the ID3DXEffect::GetParameterDesc method to retrieve a structure containing
detailed information about the parameter and its type.

We know that this parameter is a structure, so the StructMembers member of the returned descriptor will
describe how many child members the vertex structure has. We then set up a loop to parse each one.
Inside the loop we use the ID3DXEffect::GetParameter function to retrieve a handle to each child
member of the structure based on its integer index:

// Used to record FVF flags for vertex
ULONG FVF = 0;

// Loop through each member of the structure
for( ULONG i = 0; i < Description.StructMembers; i++ )
{
    // Get the parameter handle
    D3DXHANDLE pHandle = pEffect->GetParameter( hVertex, i );

As the first parameter to the ID3DXEffect::GetParameter function we pass the handle to the parent
parameter and as the second parameter we pass in the integer index of the child member we are currently
enumerating.

At this point we now have access to the handle of the child member of the vertex structure that we are
currently processing in this iteration of the loop. With it, we will retrieve a descriptor for this child
member parameter so that we can retrieve information about any semantic that might be attached to it.
Once again, we use the ID3DXEffect::GetParameterDesc method to fetch the descriptor for the child
member:

    // Get the current parameter description
    D3DXPARAMETER_DESC ParamDesc;
    pEffect->GetParameterDesc( pHandle, &ParamDesc );

Finally, we examine the semantic assigned to that member and adjust our FVF flag accordingly.

    // Is it a position member?
    if ( _stricmp( ParamDesc.Semantic, "POSITION" ) == 0 )
        FVF |= D3DFVF_XYZ;

    // Is it a normal member?
    else if ( _stricmp( ParamDesc.Semantic, "NORMAL" ) == 0 )
        FVF |= D3DFVF_NORMAL;

    // Is it a texcoord member? (the size macro is chosen by the number of
    // floats in the member; the macro argument is the coordinate set index)
    else if ( _stricmp( ParamDesc.Semantic, "TEXCOORD0" ) == 0 )
    {
        int NumFloats = ParamDesc.Bytes / 4;
        if ( NumFloats == 2 ) FVF |= D3DFVF_TEXCOORDSIZE2( 0 );
        if ( NumFloats == 3 ) FVF |= D3DFVF_TEXCOORDSIZE3( 0 );
    }
    else if ( ... ) and so on
}

This is of course only one example of how you might consider using the effect framework to configure
parts of your engine. Although this discussion is only hypothetical at the moment, later in this chapter
(and in subsequent chapters) we will see this type of parameter parsing regularly in our lab project
applications.

18.2.7 Standardizing Communication II – Annotations

Annotations expand the possibilities for communication between the host application and the effect even
further by allowing user-defined data to be attached to any pass, technique, or parameter. This allows the
effect developer to provide hints to the engine about how a parameter should be used or initialized. With
annotations you can assign any supported data type and value to a parameter that can be read by the
engine. They can be used by the artist to inform the engine of a particular texture that should be used for
a given texture parameter for example, or to provide other default values, light settings, messages or
descriptions to the application. Both the ID3DXEffect interface and the ID3DXEffectCompiler interface
(covered later) contain methods that allow us to search for annotations by name or index within an effect
and fetch the value assigned to that annotation. The application can do whatever it wants with this
information and that will be largely down to the communication protocol that is defined between the
engine developer and the effect script developers.

An annotation looks just like a normal parameter declaration and assignment except that it must exist
between open and closed chevrons at the end of a top level parameter, pass, or technique declaration.
Annotations must be assigned default values inside the effect file and they cannot be referenced (read)
from within the effect file’s techniques. They exist purely to provide custom information to the engine
from the effect file author.

In the following example effect file one of our texture parameters has been assigned both a semantic and
an annotation (of type string). The string annotation contains the texture filename that should be used as
the detail map in this example.

texture BaseTex   : TEX0;                                  // Base Texture
texture DetailTex : TEX1 < string Name = "Rough.bmp"; >;   // Detail Texture

matrix world;   // world matrix
matrix view;    // view matrix

You do not need to use semantics in order to use annotations. In this case we see both being used, but
you can also use just one or the other.

In the above example the artist is giving a hint to the engine that the image file “Rough.bmp” should be
used as the second texture (the detail map) for polygons that have this effect mapped to them. Usually,
you don’t want to hard-code texture names like this into your effect files because the same effect might
be used for multiple attribute groups or materials which each use different textures and detail maps.
However, there may be times when the artist will write a specialized effect for a particular object in the
scene where perhaps he/she would like a specific texture used and this could be one way to tackle that
requirement. It is important to realize that the above example does not automatically cause the
“Rough.bmp” texture to be loaded into the DetailTex parameter -- the annotation does nothing unless the
application decides to read it and take some action of its own.

The ID3DXEffect interface provides methods that can be used to search for and read annotation values.
There are two methods that allow the engine to search for an annotation, either by name or integer index.
These methods return the annotation’s D3DXHANDLE which you can then pass into the
ID3DXEffect::GetValue method (and other such GetXX functions) to retrieve the value assigned to it.

For example, the following code would search for an annotation that has been assigned to a top level
parameter whose handle we have already retrieved. Let us assume that we have the handle to a texture
parameter stored in a variable named TexHandle of type D3DXHANDLE. We could test to see if this
texture parameter has been assigned an annotation called ‘Filename’ with the following code:

D3DXHANDLE FileNameHandle = pEffect->GetAnnotationByName( TexHandle, "Filename" );

If the call to ID3DXEffect::GetAnnotationByName above returned a non-NULL handle, we could then
retrieve the string value assigned to this annotation using the ID3DXEffect::GetString method as
demonstrated below. Note that GetString does not copy the string into a buffer we supply; it simply
returns a pointer to the string data managed by the effect:

LPCSTR Filename = NULL;
if ( FileNameHandle ) pEffect->GetString( FileNameHandle, &Filename );

Annotations of different types such as integers, floats, booleans, and more can be specified in a similar
fashion by simply altering the annotation declaration contained in the effect file, and using the
appropriate GetXX method exposed by the effect interface. For example, if instead we wanted to

describe the suggested dimensions of a texture to be dynamically created by our application instead of a
filename, we might now attach two integer annotations similar to the following:

texture CustomTex : TEX0 < int Width = 256; int Height = 128; >;

We could then retrieve this data in a similar fashion using the following code:

int Width, Height;
D3DXHANDLE DataHandle = pEffect->GetAnnotationByName( TexHandle, "Width" );
if ( DataHandle ) pEffect->GetInt( DataHandle, &Width );

DataHandle = pEffect->GetAnnotationByName( TexHandle, "Height" );
if ( DataHandle ) pEffect->GetInt( DataHandle, &Height );

As you might imagine, this can be an incredibly powerful tool in cases where we want to use a more
data-driven model within our engine or application design.
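
As a brief hedged sketch of that data-driven idea, an application might automatically load any texture
whose parameter carries a 'Name' annotation, along the following lines (assuming a pDevice pointer to
the Direct3D device; error handling is kept to a minimum, and the 'Name' annotation is simply the
convention we adopted above):

// If the texture parameter carries a 'Name' annotation, load that file.
D3DXHANDLE NameHandle = pEffect->GetAnnotationByName( TexHandle, "Name" );
if ( NameHandle )
{
    LPCSTR             pFileName = NULL;
    LPDIRECT3DTEXTURE9 pTexture  = NULL;
    if ( SUCCEEDED( pEffect->GetString( NameHandle, &pFileName ) ) &&
         SUCCEEDED( D3DXCreateTextureFromFileA( pDevice, pFileName,
                                                &pTexture ) ) )
    {
        // The effect holds its own reference once the texture is set.
        pEffect->SetTexture( TexHandle, pTexture );
        pTexture->Release();
    }
}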

18.2.8 Sharing Effect Parameters – Effect Pools

In practice, even in scenarios where our scene consists of many different effect files, there is a tendancy
for many of these effects to need access to much of the same parameter data. For example, view and
projection matrices will often not change between the rendering of multiple effects. If we had three
effects that all required the view matrix, then it would potentially be wasteful to have to set this value
three times (once per effect). Figure 18.4 demonstrates the situation.

Figure 18.4

It would be far more efficient if we could just set the matrix once and allow all effects that require it to
share access to it. Fortunately, this can be achieved very easily through the use of effect pools.

An effect pool is a memory buffer that is used to share parameter value data across effect boundaries. In
essence, it is simply a container for the value of every parameter that we wish to participate in data
sharing, from which the effect will read instead of using its own isolated, local parameter value storage.

Figure 18.5

Once we have identified that a specific parameter is a good candidate for utilizing the shared memory
pool concept, the first thing we must do is to ensure that each instance of that parameter is declared with
the shared modifier, and that they share exactly the same name and type in each effect file in which they
occur. If semantics are being used, the shared parameters must also have matching semantics.

To elaborate slightly with an example: imagine that we had two effect files that each initially declared a
variable intended to be used by our application for supplying an object's world matrix. In this first
example, we'll assume that the application was accessing these parameters using their semantic alone,
and due to this fact each effect file was free to use a different name to represent the same basic
parameter.

// A.fx
matrix ObjectWorldMatrix : WORLDMATRIX;

// B.fx
matrix WorldMatrix : WORLDMATRIX;

In order for us to make use of the shared parameter memory pool in this case, the parameter declaration
in each effect file should first be prefixed with the shared modifier keyword as follows:

// A.fx
shared matrix ObjectWorldMatrix : WORLDMATRIX;

// B.fx
shared matrix WorldMatrix : WORLDMATRIX;

With each of these parameters now marked for storage in the shared parameter pool, the effect system
will -- at this point -- still consider them to be separate parameters and will currently allocate space for
two matrices in the shared memory pool instead of one shared matrix as we intend. The reason for this is
that the effect system identifies which parameter declarations are intended to represent the same physical
parameter in the effect pool based, in part, on the name of the variable. By ensuring that both variables
use the same parameter name in both effect files, only then will the effect system consider them to
represent the same physical parameter whose value is to be shared.

// A.fx
shared matrix ObjectWorldMatrix : WORLDMATRIX;

// B.fx
shared matrix ObjectWorldMatrix : WORLDMATRIX;

With this change made, the effect system will now allocate space for only a single matrix in the shared
memory pool, the data for which will be shared by both parameters. Thus, when we update that single
matrix value within the shared memory pool, both effect files will have access to the same data.

There are often many parameters that we will require most or all of our effects to have access to.
Matrices, textures, and materials are certainly among the most common, so let’s have a look at some
examples.

#ifndef _SYSTEM_FX_
#define _SYSTEM_FX_

. . .

////////////////////////////
// Matrices
////////////////////////////
shared matrix WorldMatrix;
shared matrix ViewMatrix;
shared matrix ProjectionMatrix;

////////////////////////////
// Materials
////////////////////////////
shared float4 MaterialDiffuse = {1.0f, 1.0f, 1.0f, 1.0f};
shared float4 MaterialAmbient = {1.0f, 1.0f, 1.0f, 1.0f};
shared float4 MaterialEmissive = {0.0f, 0.0f, 0.0f, 0.0f};
shared float4 MaterialSpecular = {1.0f, 1.0f, 1.0f, 1.0f};
shared float MaterialPower = 1.0f;

////////////////////////////
// Textures
////////////////////////////
shared texture texture0;
shared texture texture1;
shared texture texture2;
shared texture texture3;
shared texture texture4;
shared texture texture5;
shared texture texture6;
shared texture texture7;

////////////////////////////
// Samplers
////////////////////////////
sampler2D BaseMapSampler = sampler_state
{
Texture = <texture0>;
AddressU = Wrap;
AddressV = Wrap;
MinFilter = Linear;
MagFilter = Linear;
MipFilter = Linear;
};

. . .

#endif

These are some fairly common parameters that many effects will want to have access to, so by declaring
them in a ‘header file’ and then including it in all effects we create, we both guarantee access and
maintain a standard naming convention at the same time.

It is important to remember that when we include the above .fx file in any other effect files, each effect
would usually get its own local copy of the parameters. However, when we specify the shared modifier,
we are stating that we would like to share this parameter with any other effect that has a parameter of the
same name and type, provided the effects are linked to the same memory pool.

This latter point is important because it is also the responsibility of our application to indicate to the
effect framework which effects will be grouped together to share data. This is achieved by creating one
or more memory pool objects that are supplied to each effect at the point it is created, as we will see
below.

Before we load and compile any effects that we wish to share variables, we must first create an effect
memory pool. We do this using the D3DXCreateEffectPool function, which returns an object with an
ID3DXEffectPool interface:

ID3DXEffectPool *pPool;
D3DXCreateEffectPool( &pPool );

It doesn’t get much simpler than that. If the function is successful and an effect pool could be allocated,
then an interface to that object is returned in the passed pointer. The ID3DXEffectPool interface contains
no methods -- it is just an empty buffer that will eventually be used by the effect framework to store and
manage shared effect parameters. You can even create multiple effect pools if needed so that you
have different batches of effects that share their parameters.

In order for an effect to know about the effect pool we just created, we pass the effect pool to the
D3DXCreateEffectFromFile function as one of its input parameters. In previous examples of this
function we simply passed NULL for the effect pool parameter; under those circumstances, each effect
gets its own unique pool in which only its own parameters are stored. However, if we do
supply an effect pool object, then any parameters that are marked with the shared modifier in the effect
will be registered with the effect pool. If we then pass this same pool into the creation function for all of
our other effect files, any shared parameters in those effect files which match the names of any
parameters already registered with the pool will be linked. In the following code we look at how to
create an effect pool and use it during the creation of three separate effects.

ID3DXEffectPool * pPool;
ID3DXEffect     * pTerrainEffect, * pWaterEffect, * pBuildingEffect;

// Create the pool that will be used to share parameters
D3DXCreateEffectPool( &pPool );

// Create first effect and specify shared pool
D3DXCreateEffectFromFile( m_pD3DDevice,
                          "Terrain.fx",
                          NULL,
                          NULL,
                          0,
                          pPool,
                          &pTerrainEffect,
                          NULL );

// Create second effect and specify the same shared pool
D3DXCreateEffectFromFile( m_pD3DDevice,
                          "Water.fx",
                          NULL,
                          NULL,
                          0,
                          pPool,
                          &pWaterEffect,
                          NULL );

// Create third effect and specify the same shared pool
D3DXCreateEffectFromFile( m_pD3DDevice,
                          "Building.fx",
                          NULL,
                          NULL,
                          0,
                          pPool,
                          &pBuildingEffect,
                          NULL );

That is pretty much all there is to sharing effect parameters from an implementation perspective. No
action has to be taken with the pool itself other than creating it and destroying it when it is no longer
needed. Any parameters in the three effects above that have the same name and the shared modifier will
automatically be stored in the pool and shared amongst the three effects. This means that when we come
to render these effects, we only need to set the matrices (for example) for the first effect using the usual
methods. As the matrices in this example are shared, setting the values for (or prior to using) the first
effect will actually set the values in the shared pool. Thus, all three effects can now use them freely.

In the following example, the matrix parameters are assumed to be shared but the textures are not.
However, note that even if a parameter is shared, you can still set it on a per-effect basis if you need to.

// Set shared variables only once for all effects in the pool
pTerrainEffect->SetMatrix( hProjMatrix, &matProj );
pTerrainEffect->SetMatrix( hViewMatrix, &matView );

// Set terrain non-shared values
pTerrainEffect->SetTexture( hTex0, pTexture[0] );
pTerrainEffect->SetTexture( hTex1, pTexture[1] );

// Process first effect and render
pTerrainEffect->Begin( &Passes, 0 );

… apply techniques and passes here

pTerrainEffect->End();

// Set water non-shared values
pWaterEffect->SetTexture( hTex0, pTexture[0] );
pWaterEffect->SetTexture( hTex1, pTexture[1] );

// Process second effect and render
pWaterEffect->Begin( &Passes, 0 );

… apply techniques and passes here

pWaterEffect->End();

// Set building non-shared values
pBuildingEffect->SetTexture( hTex0, pTexture[0] );
pBuildingEffect->SetTexture( hTex1, pTexture[1] );

// Process third effect and render
pBuildingEffect->Begin( &Passes, 0 );

… apply techniques and passes here

pBuildingEffect->End();

As you can see, because the matrices in all three effect files have the shared modifier and the same pool
was used to create each effect, the matrix data is shared across all three effects. This allowed us to set
our matrices once prior to doing any specific effect level processing.

Note: Parameters can only be shared across effects if the same effect pool is used during the creation of
each one. If no pool is specified, then the shared modifiers will be ignored. You can use multiple effect
pools in your application so that you have different groups of effects that share parameters.
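
As a small sketch of that last point (the pool names here are illustrative), two independent groups of
effects could be established as follows:

ID3DXEffectPool * pScenePool = NULL, * pSkyPool = NULL;
D3DXCreateEffectPool( &pScenePool );
D3DXCreateEffectPool( &pSkyPool );

// Effects created with pScenePool share their 'shared' parameters only
// with one another; effects created with pSkyPool form a second,
// entirely separate group.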

Lab Project 18.3 is included to help demonstrate some of the principles outlined above. In this demo we
revisit another of our terrain-based lab projects from the first module in the series (Lab Project 7.1), in
which two separate rendering techniques were employed. The first related to the rendering of the terrain
geometry, which we looked at earlier in the chapter. The second was necessary in order to render an
alpha blended plane intended to represent water in the scene.

In this updated version, the core state and resource setting logic has been replaced with two separate
effect files named 'Terrain.fx' and 'Water.fx' which utilize parameter value sharing in order to share their
transformation matrices. In this way, the application need only set the matrices once (see
CTerrain::Render) in order to render both batches of geometry through two separate effect files.

To summarize, here are a few points that you should bear in mind when using effect pools and shared
parameters.

• Shared parameters can be any non-static variables defined in an effect. This includes global
variables and annotations.

• For a parameter to be shared across multiple effects, the parameter must have the same name,
type, and semantic in each of the shared effects.

• Parameters can only be shared across effects that share the same device. This is enforced to
prevent us from trying to share device-dependent parameters across effects (e.g., shader programs or
textures).

• A shared parameter will be deleted from the effect pool when every effect that shares that
parameter has been released.

• If you do not wish to share any parameters for a given effect, simply pass in NULL as the effect
pool parameter to the D3DXCreateEffectX family of functions.

• As we have seen with many other interfaces and objects in D3DX, it is possible to clone an
effect. Cloning an effect makes an exact copy of that effect including all global variables,
techniques, passes, and annotations. The cloned effect will always use the same effect pool as the
effect from which it was cloned and thus will share those same parameters (we will discuss the
cloning function later).

• A parameter is added to the pool the first time an effect containing that shared parameter is
added to the pool. Since only one copy of this parameter data is needed, this keeps effect
memory usage lower than otherwise would be the case. The default value of a shared parameter
in that pool will be taken from the first effect file that registered that parameter. Any effects that
are registered with the pool will subsequently get their values from the pool.

18.3 Effect File Creation Reference
Now that we have an overall understanding of what effects are and how they can be used, let’s flesh out
some more of the specifics and see the various ways that we can create an effect. In the example code
we have seen so far we have been using the D3DXCreateEffectFromFile function but we have yet to
examine all of the parameters that function accepts. We will also want to look at some other variations
of this function that allow us to load an effect from a file stored as a resource or from a buffer already
stored in memory. Additionally, there is an interface called ID3DXEffectCompiler which provides more
control and feedback during effect compilation, so we will want to make good use of that feature. There
is even a command line tool (called fxc.exe) that ships with the DirectX SDK which loads text-based
effect files and saves out compiled binary versions, and we will want to spend some time examining that
as well.

It should be noted that all of the functions we are about to discuss will be fully supported when our
effect files begin to contain vertex and pixel shaders in the next chapter. So we will be able to create,
compile, and use effects in exactly the same way we have been doing thus far, even when our effects
contain shader programs. In fact, it is when our effects include embedded shader programs that the effect
framework’s true power and convenience is realized. More to come on this front later.

18.3.1 D3DX Effect Creation Functions

The D3DX library exposes several global functions that can be used to create an ID3DXEffect interface.
There are also global functions to create effect pools and to disassemble an already compiled effect.

The D3DXCreateEffectFromFile Function

We begin by examining what is certainly the most straightforward way to create an effect from an effect
file -- the D3DXCreateEffectFromFile function, which we have already seen used in several of our
code examples up to this point:

HRESULT WINAPI D3DXCreateEffectFromFile
(
    LPDIRECT3DDEVICE9  pDevice,
    LPCTSTR            pSrcFile,
    const D3DXMACRO   *pDefines,
    LPD3DXINCLUDE      pInclude,
    DWORD              Flags,
    LPD3DXEFFECTPOOL   pPool,
    LPD3DXEFFECT      *ppEffect,
    LPD3DXBUFFER      *ppCompilationErrors
)

This function can be used to create an ID3DXEffect from either an ASCII or pre-compiled binary effect
file. In the first case, the text-based effect file is loaded and the effect compiler is invoked behind the
scenes to create a binary representation of the effect code. The binary representation is then used to
create an effect object whose interface is then returned by the function. For pre-compiled binary
representations, the compiler invocation is skipped and an effect is created directly from the loaded byte
code.

While there are lots of parameters, and we will look at each in turn, in many cases you will need only
the basic functionality and many of the parameters can be set to NULL. In the simplest case, to load an
effect file using this function we can use code similar to the following:

LPD3DXEFFECT pEffect;
D3DXCreateEffectFromFile(pDevice,"myeffect.fx",NULL,NULL,0,NULL,&pEffect,NULL);

If the function call is successful, then pEffect will represent a valid instance of an ID3DXEffect, which is
our main effect interface. Through it we will be able to provide input to our effect in the form of
parameters or retrieve information about parameters or techniques that are stored in the effect.

Let us now look at each of the input parameters to D3DXCreateEffectFromFile:

LPDIRECT3DDEVICE9 pDevice
This is a pointer to an IDirect3DDevice9 interface that will manage the effect and its resources.

LPCTSTR pSrcFile
This parameter accepts a string that contains the path and filename of the effect file we wish to load
from disk. This can be either a text-based effect file or a binary file containing the pre-compiled effect
byte code.

const D3DXMACRO *pDefines
This parameter allows us to provide a null-terminated array of pre-processor macros. We will look at
pre-processor macros in more detail when we study shaders later in the course, since it is only then that
this feature really becomes useful. It is especially handy when the host application needs to supply
effect compilation directives based on hardware features known only at runtime (e.g., maximum shader
model supported). We will use a variety of pre-processor macros throughout our demo code and will see
this feature in use a bit later on. For now, we can just set this optional parameter to NULL.

The elements in this array should be of type D3DXMACRO, with each element defining a single macro.
The D3DXMACRO structure is shown below:
typedef struct D3DXMACRO
{
    LPCSTR Name;
    LPCSTR Definition;
} D3DXMACRO;

LPCSTR Name;
This string should contain the name of the macro that this structure will define. This is the
name that will be used to invoke the macro inside the shader code.

LPCSTR Definition;
This string should contain the actual macro code.

If you wish to store a multi-line macro in the Definition member, prefix each new line
character with a backslash (just like a #define in the C language). For example, the following
code snippet creates a macro called “Gary_Macro” and stores it in a D3DXMACRO array. This
array would later be passed into the D3DXCreateEffect… family of functions to make it
available to the effect during compilation.

D3DXMACRO Macros[2];
Macros[0].Name       = "Gary_Macro";
Macros[0].Definition = "/* here is a block of code */\\\n"
                       "{ do something ... }\\\n";

// The macro array itself must be null-terminated
Macros[1].Name       = NULL;
Macros[1].Definition = NULL;

Notice the three backslash characters at the end of the line. The first two are required to output
a single '\', followed by the newline character "\n".

If you don’t wish to send any macros/defines to the effect then you can simply set this parameter to
NULL.
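As a quick hypothetical usage example, the array defined above would simply be passed as the third argument when the effect is created:

// Make the macros visible to the effect during compilation
D3DXCreateEffectFromFile( pDevice, "myeffect.fx", Macros, NULL, 0,
                          NULL, &pEffect, NULL );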

LPD3DXINCLUDE pInclude

This parameter allows you to customize the file open/close behaviors performed by the compiler and its
pre-processor. If this parameter is set to NULL, the effect compiler will automatically open and include
any files whenever a “#include” is encountered in the effect. However, there are times when you may
need to override this behavior and manage #include directives in a custom fashion. One example might
be when compiling an effect from a resource or from a memory buffer, which we will discuss in a
moment. The default behavior for the compiler’s pre-processor is to assume that our #include refers to a
file on disk, but that is not so in these cases, so the compiler has no idea where to look. Is it a file on
disk? Is it in memory, or sitting in a resource? If you are using the resource or memory versions of this
function, compilation will fail if your effect file contains ‘#include’ directives and you have supplied
NULL for this parameter.

There may even be times when you wish to override the behavior even though you are loading an effect
from disk. For example, the application may have loaded in all the common header files for your effects
into memory to save them from having to be continually loaded from disk by the pre-processor for each
effect that is compiled (potentially many in a big application). In such a case, you would want #include
directives to be interpreted by the pre-processor such that it gets back a buffer from memory containing
the text that is to be included in the effect being compiled. Another reason might be if you are loading
your include files from disk but they are stored in some custom binary format to keep them away from
prying eyes. In such a case, the default processing which assumes the #includes are text-based files
would not suffice and custom loading and parsing code would have to be implemented.

As you can see, this parameter accepts a pointer to an interface called ID3DXInclude, which we have not
encountered before. As it turns out, ID3DXInclude is very much like the ID3DXAllocateHierarchy
interface we used when loading meshes from X files back in Module II of this series. That is, it is a
purely abstract base interface that you must derive from to create an object consisting of application-
defined callback methods that will be invoked by the effect compiler pre-processor when #include
directives are encountered during the (pre-)compile pass.

The ID3DXInclude interface declaration is shown below. As you can see, any class that inherits from it
must implement its two member functions. As we might expect from any interface that allows for the
overriding of file access operations, the two methods are called Open and Close.

#define INTERFACE ID3DXInclude

DECLARE_INTERFACE(ID3DXInclude)
{
    STDMETHOD(Open)(THIS_
        D3DXINCLUDE_TYPE IncludeType,
        LPCSTR pFileName,
        LPCVOID pParentData,
        LPCVOID *ppData,
        UINT *pBytes) PURE;

    STDMETHOD(Close)(THIS_
        LPCVOID pData) PURE;
};

When an application-defined object derived from ID3DXInclude is provided to the
D3DXCreateEffectFromX family of functions, the effect framework will call its Open and Close
methods whenever an include directive is encountered. This grants the application the ability to load
and/or generate any data it likes for the given include directive. The data is then formatted as text data
and returned to the effect framework in the supplied buffer. The Close function will later be called when
the effect framework has finished integrating the returned include buffer into the effect script, giving the
application a chance to release any memory it allocated during the Open method.
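As a rough illustration, an application-defined handler might be declared along these lines (the class name CMyInclude is hypothetical; example implementations of the two methods appear later in the chapter):

class CMyInclude : public ID3DXInclude
{
public:
    // Invoked by the compiler whenever script data must be loaded
    STDMETHOD(Open) ( D3DXINCLUDE_TYPE IncludeType, LPCSTR pFileName,
                      LPCVOID pParentData, LPCVOID *ppData, UINT *pBytes );

    // Invoked when the compiler has finished with a returned buffer
    STDMETHOD(Close)( LPCVOID pData );
};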

Note: We will discuss the ID3DXInclude interface in its own dedicated section a little later.

The pInclude parameter discussed above is optional and can be set to NULL under most circumstances.

DWORD Flags
This parameter accepts a bitmask that consists of one or more flags that govern the behavior of shader
compilation. Since we are not going to learn about shader compilation until a bit later in the course, we
will defer this discussion until then and simply set this parameter to 0 (i.e., no flag bits provided).

LPD3DXEFFECTPOOL pPool
This parameter accepts a pointer to an effect pool (ID3DXEffectPool), which as we now know, allows us
to share parameters amongst multiple effects. A NULL value indicates that no parameters will be
shared.

LPD3DXEFFECT *ppEffect
This is the address of an ID3DXEffect interface variable that will point to a valid, fully-compiled, effect
if the function call is successful. This interface will be our primary means for setting techniques, passing
in parameter data, and setting states during rendering. We will look at the effect interface in more detail
very shortly.

LPD3DXBUFFER *ppCompilationErrors
If the effect compiler experienced any difficulties during the loading and/or compilation of the specified
effect file, this output parameter is the means by which the effect compiler returns information to us
about any errors that it encountered. It does so in the form of an ID3DXBuffer object that contains a
string representation of the compilation errors, including file names, line numbers and error description,
all of which can be an incredibly useful source of information when attempting to fix the problems with
our effect code. In the following example we call the D3DXCreateEffectFromFile function and if
unsuccessful, print out any failure information to the current debug window.

ID3DXBuffer *pCompilerErrors = NULL;
ID3DXEffect *pEffect = NULL;

if ( FAILED( D3DXCreateEffectFromFile( pDevice,
                                       strFileName,
                                       NULL,
                                       NULL,
                                       0,
                                       NULL,
                                       &pEffect,
                                       &pCompilerErrors )))
{
    // Print error and return
    printf( "Compilation error for file '%s': %s\n",
            strFileName,
            (char*)pCompilerErrors->GetBufferPointer());

    return false;

} // End if failed

That was quite a lengthy discussion for a function which is apparently so easy to use. Yet, as the above
example code illustrates, most of the time when using simple fixed-function effects, we can largely omit
the majority of the parameters and specify zero or NULL.

As we have come to expect from D3DX, additional functions are also provided that allow us to load
effect files from either a resource or from an effect file in memory. These functions are virtually
identical to the one shown above, so we will only briefly touch on them.

The D3DXCreateEffectFromResource Function

Just as we saw with the texture loading functions in Chapter 6 of Module I, there are similar non-file
based functions that we can use to load effects. For example, we can create effects from information
stored in resources using the function D3DXCreateEffectFromResource:

HRESULT WINAPI D3DXCreateEffectFromResource
(
    LPDIRECT3DDEVICE9  pDevice,
    HMODULE            hSrcModule,
    LPCTSTR            pSrcResource,
    const D3DXMACRO   *pDefines,
    LPD3DXINCLUDE      pInclude,
    DWORD              Flags,
    LPD3DXEFFECTPOOL   pPool,
    LPD3DXEFFECT      *ppEffect,
    LPD3DXBUFFER      *ppCompilationErrors
)

As you can see, all of the parameters are the same as those we encountered in the file-based version of
the function with the exception of the second and third (hSrcModule and pSrcResource). These two
parameters represent the module handle for the resource and its string name identifier, respectively.

The resource can be either an ASCII text script representing the effect in un-compiled form or a binary
pre-compiled effect script. In the first case the effect compiler is invoked behind the scenes and used to
compile the resource script into binary data. This binary data is then used to create an effect object
whose interface is then returned to the caller.
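As a brief sketch (the resource identifier IDR_TERRAIN_FX is hypothetical and is assumed to have been added to the application's resource script), creating an effect from a resource embedded in the current module might look like this:

D3DXCreateEffectFromResource( pDevice,
                              GetModuleHandle( NULL ),
                              MAKEINTRESOURCE( IDR_TERRAIN_FX ),
                              NULL, NULL, 0, NULL,
                              &pEffect, NULL );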

Note: It is important to remember that if your effect script is stored as a resource and it includes
#include preprocessor directives, you must pass in a pointer to a valid ID3DXInclude derived object
which will be responsible for interpreting those include requests.

The D3DXCreateEffect Function

There is also a version of the effect creation function that creates an effect from a buffer in memory. The
buffer can contain either an ASCII script (such as the contents of an effect file) or an already compiled
binary version of the effect script. As before, this function will compile the script behind the scenes if it
is not already compiled; otherwise compiler invocation is skipped. Either way, we end up with a
compiled effect script that is encapsulated and can be utilized via the returned ID3DXEffect interface.

HRESULT WINAPI D3DXCreateEffect
(
    LPDIRECT3DDEVICE9  pDevice,
    LPCVOID            pSrcData,
    UINT               SrcDataLen,
    const D3DXMACRO   *pDefines,
    LPD3DXINCLUDE      pInclude,
    DWORD              Flags,
    LPD3DXEFFECTPOOL   pPool,
    LPD3DXEFFECT      *ppEffect,
    LPD3DXBUFFER      *ppCompilationErrors
)

There should be no surprises in the above parameter list. Instead of taking a file name or a resource
handle, this version of the function takes a void pointer to the data buffer for the script that is to be
compiled. As the third parameter you must also pass the size of the data buffer in bytes. The rest of the
function parameters are identical to the other versions of the effect creation functions.

This function will be particularly handy when we cover the ID3DXEffectCompiler interface. This
interface allows you to compile an ASCII effect script into its binary representation and get detailed
information about the compile procedure. The output from the effect compiler is not an ID3DXEffect
object in that case, but a memory buffer containing the compiled binary byte code. This is very useful
since it allows you to compile an effect script without tying it to any given device. Later, when the
device is known, you can use the D3DXCreateEffect function to create a device-dependent
ID3DXEffect interface from the previously compiled binary buffer (more on this later). This pre-
compiled buffer can also be saved off to disk and loaded back later, so it will serve as a means, in
conjunction with this function, for managing the effect compilation process. As we will see later, when
we introduce shaders, our compilation times will begin to dramatically increase. Pre-compiling effects
and storing them to disk is going to make application startup times much easier to bear if we are simply
able to load them back in and immediately bind them to a device.
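The following sketch illustrates the loading half of that process (the file name is hypothetical and error handling is omitted for brevity); because the data is already byte code, no compiler invocation occurs:

// Read the pre-compiled effect into a memory buffer
FILE *fp = fopen( "Terrain.fxo", "rb" );
fseek( fp, 0, SEEK_END );
long nSize = ftell( fp );
fseek( fp, 0, SEEK_SET );
char *pByteCode = new char[ nSize ];
fread( pByteCode, 1, nSize, fp );
fclose( fp );

// Create the effect directly from the binary data
D3DXCreateEffect( pDevice, pByteCode, (UINT)nSize,
                  NULL, NULL, 0, NULL, &pEffect, NULL );
delete [] pByteCode;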

The D3DXCreateEffectPool Function

Although we discussed effect pools earlier in the chapter, the function used to create effect pools also
belongs in this section of the textbook since it can serve as a reference for you down the road. An effect
pool provides a means for sharing parameters like textures, matrices, materials, etc. amongst multiple
effects. This can be very handy for parameters that are constantly being reused across lots of effects,
which is quite common.

To create an effect pool we will use the following D3DX function call:

HRESULT WINAPI D3DXCreateEffectPool
(
    LPD3DXEFFECTPOOL *ppPool
)

This global D3DX function takes a single parameter: the address of an ID3DXEffectPool interface
pointer. If the function is successful, a new effect pool will be allocated and its interface returned via
this parameter.

Marking parameters for sharing in our effects is a simple matter of using the “shared” type modifier
before the parameter type in the effect script. When an effect pool is passed into any of the
D3DXCreateEffectX functions, any shared parameters in the effect will be registered with the pool and
automatically shared with any other effects that were created with the same effect pool parameter.
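Inside the effect scripts themselves, the declarations might look as follows (parameter names hypothetical); each participating effect must declare the parameter identically:

// Declared in every effect file that should share these values
shared matrix ViewMatrix;
shared matrix ProjMatrix;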

The ID3DXInclude Interface

We touched on the ID3DXInclude interface earlier and discovered that it is a pure abstract base class
that provides the interface for an application-defined file handler. You will derive your own class from
this interface and provide implementations for its two virtual methods. The Open and Close methods
will be called by the effect compiler to handle the inclusion of header files. If an object of this type is not
provided to the D3DXCreateEffectFromFile function, the #include directives will still be honored
using the default file inclusion behavior. However, if you are creating an effect from a resource and your
effect contains #include directives, you must implement and pass an object derived from ID3DXInclude

so that your application can handle the inclusion of this data. If the resource script contains no #include
directives, then NULL can be specified to the resource loading function.

#define INTERFACE ID3DXInclude

DECLARE_INTERFACE(ID3DXInclude)
{
    STDMETHOD(Open)(THIS_
        D3DXINCLUDE_TYPE IncludeType,
        LPCSTR pFileName,
        LPCVOID pParentData,
        LPCVOID *ppData,
        UINT *pBytes) PURE;

    STDMETHOD(Close)(THIS_
        LPCVOID pData) PURE;
};

When you derive your own object from this interface, you must implement the Open and Close methods
shown above. Since these are callback functions, their parameters are the transport mechanism by which
D3DX makes the include request to your object and by which your object returns the included ASCII
script data back to the application. Let us quickly have a look at the methods and their parameters.

The ID3DXInclude::Open Method

The Open function is called by D3DX whenever data needs to be included in the currently compiling
script. Perhaps surprisingly, when an ID3DXInclude derived object is passed into a D3DXCreateEffectX
function, the Open method is called even to load the top-most file. For example, imagine you had the
following lines in your application:

CMyInclude m_Include;
if ( FAILED( D3DXCreateEffectFromFile( m_pDevice,
                                       "Terrain.fx",
                                       NULL,
                                       &m_Include,
                                       0,
                                       NULL,
                                       &pEffect,
                                       &pCompilerErrors )))

In this instance, CMyInclude::Open would be called upon to load the root .fx file “Terrain.fx”. Thus,
you are fully overriding file access and loading behavior when an include interface is provided. This
allows you to parse and supply scripts to the loading and compilation process that might be stored in
your own proprietary, non-ASCII, format.

We will now discuss the parameters to the Open function. Remember, these are parameters that will be
passed to your implementation of the Open method by the D3DX compiler/loader.

D3DXINCLUDE_TYPE IncludeType
This is an input parameter that informs you about the type of include that has occurred. This information
is provided via a member of the D3DXINCLUDE_TYPE enumeration:
typedef enum D3DXINCLUDE_TYPE
{
    D3DXINC_LOCAL = 0,
    D3DXINC_SYSTEM = 1,
    D3DXINC_FORCE_DWORD = 0x7fffffff,
} D3DXINCLUDE_TYPE, *LPD3DXINCLUDE_TYPE;

In cases where D3DXINC_LOCAL is supplied to this parameter, similar to C++ include directives, this
means that the effect file author intended your include handler to search local paths for the include file
and that the #include directive that spawned the call to the Open method was in the form:

#include "Common.fx"

If the type specified is D3DXINC_SYSTEM then it means the effect author intended your include
handler to search system paths as well, and was given in the form:

#include <Common.fx>

LPCSTR pFileName
The second parameter to the Open method is a string containing the name of the file that was referenced
by the include directive. Remembering that this function will also be called to load the root file, the first
time this function is called during the loading process it would contain “Terrain.fx” in our example. This
function would then be responsible for reading the file into a memory buffer and returning that buffer
back to the compiler. Further calls to the Open method will then be made by the compiler for each
include directive it subsequently encounters in that buffer (the root file).

LPCVOID pParentData
When the Open method is first called to load the root file, this pointer will be set to NULL. Otherwise, it
will point to the memory buffer containing the parent script that contained the #include directive that
spawned the current call to the Open method.

For example, imagine we passed a valid ID3DXInclude derived object into the
D3DXCreateEffectFromFile function to load a file called “Terrain.fx”. Imagine also that this effect had
a directive to include a file called “Common.fx”. Furthermore, imagine that “Common.fx” itself
contained an include directive to include a file called “Core.fx”.

In this instance, the Open method would be called the first time by the framework to load the root file.
This file would then be loaded into a memory buffer by the Open method which would then return the
buffer to the caller. (The buffer is returned via the final two parameters to this function which we will
discuss next.) In this first invocation of the function, the pParentData parameter would contain NULL
because there is no parent script that spawned the root load.

When the buffer containing the root file is returned to the compiler, the pre-processor will process it and
will encounter the #include “Common.fx” directive, causing another invocation of the Open method. In
this second instance, the filename passed in as the second parameter would contain the string
“Common.fx” and the pParentData pointer would point to the parent buffer that the first invocation of
the Open method returned in the first call. In our example then, it would contain the contents of the
“Terrain.fx” root script. The Open function in this instance would load the file “Common.fx” into a new
buffer and would return that second buffer back to the compiler, where its contents will automatically be
incorporated into the parent script -- you do not have to do this text insertion yourself. The
pParentData pointer is theoretically only useful if you have stored something there that should influence
the way you process any child include directives or if, for whatever reason, you are building up your
own version of the text file. If this is not the case, then you can usually just ignore this parameter.

The buffer containing the “Common.fx” script would be returned to the compiler where it would be
parsed, causing the third and final invocation of the Open method. This is because the “Common.fx”
script in this example contained its own #include directive to include the file “Core.fx”. When the Open
method is called for the third time, the string passed via the second parameter would contain “Core.fx”
and the pParentData pointer would point at the buffer returned by the second invocation of the function,
which contains the contents of the “Common.fx” script which spawned the third function call. Finally, in
this invocation, a new buffer would be allocated and in it we would store the contents of “Core.fx”
which would then be returned to the compiler.

LPCVOID *ppData
This output parameter is the address of a pointer to the data buffer that will be allocated within the
function. If the Open method was passed a filename of “Common.fx”, the Open method would allocate
the buffer, fill it with the contents of the effect script “Common.fx”, and then assign this pointer to point
to the buffer. On return, the compiler would then have access to your buffer via this pointer.

UINT *pBytes
With this output parameter we return the size (in bytes) of the buffer pointed to by ppData. This allows
the calling frame to know how big the effect script buffer is and/or how much it should read/copy.

To clarify these various ideas, below we see an example implementation of an Open method that could
be used to process, load, and return scripts from disk. If we imagine this code being used in our previous
example, this Open method would be called three times. Once to load the root file “Terrain.fx”, again to
process the include within that file “Common.fx”, and once more to include the file that was included by
“Common.fx”:

Note: In this example we are showing the version of the function that simply loads the specified files
from disk and returns them in a buffer. In practice, the location from which you actually fetch the script
data is totally up to you. Your effect include directives might describe a script stored as a resource or
might even be dynamically generated by a support function.

HRESULT WINAPI CMyInclude::Open( D3DXINCLUDE_TYPE IncludeType,
                                 LPCSTR pFileName,
                                 LPCVOID pParentData,
                                 LPCVOID *ppData,
                                 UINT *pBytes)
{
    // Open the referenced script file in binary mode
    FILE * fp = fopen( pFileName, "rb" );
    if ( !fp ) return E_FAIL;

    // Determine the file size and read the entire script into a new buffer
    ULONG  nNumBytes     = _filelength( fp->_file );
    char * strEffectFile = new char[ nNumBytes ];
    fread( strEffectFile, nNumBytes, 1, fp );
    fclose( fp );

    // Hand the buffer and its size back to the compiler
    *ppData = strEffectFile;
    *pBytes = nNumBytes;
    return S_OK;
}

Each time this function is called, we open the file and compute its length. We then allocate a buffer of
that size and read the contents of file into it. Finally we close the file and return our newly allocated
script buffer pointer via the ppData output parameter and output the size of the buffer via the pBytes
parameter. Notice that in this example we had no need to examine the contents of the buffer
referenced by the pParentData parameter.

Note: Make sure that you do not append a null terminator to the returned buffer. If you do, it will be
interpreted as an end of file marker when embedded in the parent document.

The ID3DXInclude::Close Method

Every call to the Open method results in an eventual call to the Close method to give your application a
chance to release the memory buffer that was allocated during the corresponding Open request. You are
passed a single parameter -- a pointer to the buffer that you returned from the corresponding call to Open.

HRESULT Close
(
LPCVOID pData
);

Now you can cast the supplied pointer back to the correct data type and delete it. For example, the code
for the appropriate Close method to match the prior Open method would likely be defined as follows:

HRESULT WINAPI CMyInclude::Close( LPCVOID pData )
{
    // Release the buffer we allocated in the matching Open call
    delete [] (char*)pData;
    return S_OK;
}

18.3.2 Creating a D3DX Effect Compiler

We have seen that when we call the D3DXCreateEffectX functions we get back a compiled effect
represented by an ID3DXEffect object. If we load from a file for example, then two stages occur behind
the scenes. First, the script is loaded and compiled into byte code which is stored in a temporary buffer.
Then in a second step, this buffer is used to create a device-dependent ID3DXEffect object. In practice, it
is not uncommon to want to separate these two processes. That is, you may wish to compile the script
into a device independent binary representation first and only later create an ID3DXEffect object from
that binary form.

The D3DX effect API allows us to create an effect compiler as a standalone entity for a given effect file,
completely separate from the device itself. We can then use this effect compiler interface
(ID3DXEffectCompiler) to compile the desired effect and to read and write data (semantics, annotations,
global variables, etc.) to/from the effect. This can be useful if you want to compile an effect and check it
for errors and then save the binary form for subsequent use. This is also helpful if you want to set up
your application’s communication pipeline for the global variables in the effect by reading out semantics
or annotations during application initialization without having to associate the effect with any particular
rendering device.

In large scale applications that use many effects (especially those with lots of complex shaders), effect
compilation time can start to become a real problem, particularly if it is all performed at startup. In such
cases, it will be beneficial to run effect compilation as an offline process, save the binary results to a file
(or resource), and then during application startup load the binary form directly into a memory buffer and
create the effect from it (without having to recompile). A side benefit is that it allows developers to
avoid distributing their proprietary effect code in human readable form.
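As a rough sketch of the offline half of that process (the output file name is hypothetical, error handling omitted), the compiled buffer can simply be written to disk verbatim; here pEffectBuffer is assumed to hold the ID3DXBuffer produced by the compiler, as demonstrated later in this section:

// Persist the compiled byte code for later loading via D3DXCreateEffect
FILE *fp = fopen( "Terrain.fxo", "wb" );
fwrite( pEffectBuffer->GetBufferPointer(), 1,
        pEffectBuffer->GetBufferSize(), fp );
fclose( fp );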

Although we can compile effects offline using the fxc.exe command line tool that ships with the SDK,
the ID3DXEffectCompiler interface allows us to easily build the functionality directly into one of our
development tools (or even the runtime component if desired).

Much of the functionality provided by ID3DXEffectCompiler is actually inherited from a very important
base interface named ID3DXBaseEffect. This base interface provides all of the common methods that
might be supported by any effect class derived from it regardless of whether it is a development time
tool or runtime effect. Such methods include a means to set and retrieve the values of effect parameters
and to retrieve descriptors of the various techniques, parameters, and annotations included in the file.
Interestingly, this base interface is also found within the inheritance chain of the ID3DXEffect interface
we have been looking at to this point. Thus, these methods are available through both the ID3DXEffect
and ID3DXEffectCompiler interfaces. The methods to actually use the effect at runtime, such as
validating and activating techniques or executing a technique's passes, have no meaning in the context
of the effect compiler, however, and as such are not found in the base class; they are only available
through ID3DXEffect.

Note: Although the effect compiler derives from the same base interface as the effect (ID3DXBaseEffect)
and thus has access to all of the same Get/Set functions, keep in mind that while the Set functions will
work as expected, your data will not be preserved after you create the effect. That is, you will need to
call the Set functions to pass data into the effect once it is associated with a device if you expect to have
access to those values during effect execution. This is logical since the device would not know about
anything that you did before you gave it the fully finalized effect.

We create an effect compiler using the D3DXCreateEffectCompilerX family of functions. However,
what might not be immediately obvious is that we have to create an effect compiler for each effect file
we wish to compile. This is quite unlike how we would normally think of a ‘compiler’ since we
generally assume that it is a tool that could be reused for processing multiple scripts.

The process for creating a standalone effect compiler in your application is very similar to what we
learned earlier when creating effects directly for use on the device. We can create an effect compiler
from a file by passing the filename of the effect script, or we can create an ID3DXEffectCompiler

interface from an effect stored as a resource. In fact, the parameters are almost identical to the
D3DXCreateEffectX functions with only a few minor differences. Those differences essentially boil
down to not passing in a rendering device and not passing in an effect pool, and of course, getting back a
different interface (ID3DXEffectCompiler versus ID3DXEffect).

Note: The reason we do not have to pass in an effect pool is that the concept of shared parameters does
not enter into the picture until the binary scripts become ID3DXEffect objects resident on the device.
The ID3DXEffect interface encapsulates a binary compiled effect script, whereas the
ID3DXEffectCompiler interface encapsulates an underlying text-based script and the means with which to
compile it into a binary object.

Three global D3DX functions exist for the purpose of being able to create an effect compiler for a given
script: D3DXCreateEffectCompilerFromFile, D3DXCreateEffectCompilerFromResource, and
D3DXCreateEffectCompiler.

The D3DXCreateEffectCompilerFromFile Function

This version of the function should be used if you have a text-based effect script stored on disk and you
wish to load it and create an effect compiler interface with which to control it. The filename is provided
as the first parameter and, if the script exists and could be successfully loaded, an ID3DXEffectCompiler
interface will be returned in the fifth parameter. Notice that the final parameter represents the address of
an ID3DXBuffer interface. On function return, if the text-based script contained any errors, this will be
used to return a buffer containing a string describing the various errors that were found as the script was
being parsed.

HRESULT WINAPI D3DXCreateEffectCompilerFromFile
(
    LPCTSTR                pSrcFile,
    CONST D3DXMACRO       *pDefines,
    LPD3DXINCLUDE          pInclude,
    DWORD                  Flags,
    LPD3DXEFFECTCOMPILER  *ppEffectCompiler,
    LPD3DXBUFFER          *ppParseErrors
)

It is important to understand that this function, just like its sibling functions which we will look at next,
does not create an ID3DXEffect and does not compile the script. It simply loads the script into a buffer
and returns an ID3DXEffectCompiler interface which can be used subsequently to control the
compilation of that script into its binary counterpart. We will look at the methods of the
ID3DXEffectCompiler interface in a moment.

The D3DXCreateEffectCompiler Function

This is the compiler equivalent of the D3DXCreateEffect function we looked at earlier in that it creates
a compiler object from a text-based script that is stored in a memory buffer. This function might be

useful if you were building the effect script procedurally in memory by piecing together several smaller
scripts, or wanted to load data from files in your own proprietary data format.

HRESULT WINAPI D3DXCreateEffectCompiler
(
    LPCSTR                 pSrcData,
    UINT                   SrcDataLen,
    const D3DXMACRO       *pDefines,
    LPD3DXINCLUDE          pInclude,
    DWORD                  Flags,
    LPD3DXEFFECTCOMPILER  *ppEffectCompiler,
    LPD3DXBUFFER          *ppParseErrors
)

The first parameter is where we pass the pointer to the memory buffer containing the text-based effect
script. The second parameter is used to inform D3DX as to the size of that buffer.

The D3DXCreateEffectCompilerFromResource Function

This version of the function is responsible for creating an ID3DXEffectCompiler interface from a text-
based script stored as a resource. Remember that in resource based functions such as this, we are
required to provide the handle of the module that contains the resource and the resource ID.

HRESULT WINAPI D3DXCreateEffectCompilerFromResource
(
    HMODULE                hSrcModule,
    LPCTSTR                pSrcResource,
    const D3DXMACRO       *pDefines,
    LPD3DXINCLUDE          pInclude,
    DWORD                  Flags,
    LPD3DXEFFECTCOMPILER  *ppEffectCompiler,
    LPD3DXBUFFER          *ppParseErrors
)

18.3.3 The ID3DXEffectCompiler Interface

The ID3DXEffectCompiler interface exposes a collection of Get/Set methods since, like ID3DXEffect, it
is also derived from the ID3DXBaseEffect interface. This is useful because we may want to use the
effect compiler to examine semantics, variables, number of techniques, etc., without having to worry
about committing our effect to any particular device. We will not review all of the Get/Set methods in
this section as we will have a more detailed examination of them in the next section.

Note: Lab Project 18.4 included with this chapter contains a more complete practical demonstration of
the ID3DXEffectCompiler interface. This project demonstrates pre-compilation of an example effect file
into its byte-code form (without the need for a device) in addition to parsing and enumerating the
contents of the effect file including techniques, passes, parameters and annotations.

The first method of consequence is one that allows us to compile the underlying text-based effect script
into a binary representation -- the CompileEffect method.

The CompileEffect Method

This method, exposed by the ID3DXEffectCompiler interface, is our means for instructing the compiler
to compile its underlying effect script into byte code. It is important to realize that this function does not
return an ID3DXEffect object (which would require a valid device to be available), but instead returns an
ID3DXBuffer containing the compiled byte code result. This buffer can then be used later to create an
ID3DXEffect by way of the D3DXCreateEffect function that we discussed earlier. The function
prototype is shown below.

HRESULT CompileEffect
(
DWORD Flags,
LPD3DXBUFFER *ppEffect,
LPD3DXBUFFER *ppErrorMsgs
)

DWORD Flags
The Flags parameter allows us to control certain aspects of the way the compiler generates its final
output. Like the flags seen in previous function calls, the possible values that can be used correspond to
various D3DXSHADER_* compile options. We will examine these compiler options in more detail
when we discuss shaders in the next chapter.

LPD3DXBUFFER *ppEffect
This buffer will contain the compiled effect if the function call is successful -- i.e., the compiler
experiences no difficulties compiling the effect source script into its binary form.

LPD3DXBUFFER *ppErrorMsgs
In the event of any problems during compilation, this buffer will contain a string that lists very detailed
error messages that can be used to help fix problems that were encountered with the effect source code.

Once we have access to the compiled effect buffer (returned via the ppEffect output parameter), we can
simply pass the contents of the buffer into the D3DXCreateEffect function when we are ready to bind it
to a particular device.

A simple example of creating an effect compiler, compiling the effect, and then creating the effect on a
device follows:

LPD3DXEFFECTCOMPILER pCompiler     = NULL;
LPD3DXBUFFER         pEffectBuffer = NULL;
LPD3DXBUFFER         pErrorBuffer  = NULL;
LPD3DXEFFECT         pEffect       = NULL;

// Create the effect compiler for the file-based effect
HRESULT hr = D3DXCreateEffectCompilerFromFile( "terrain.fx", NULL, NULL, 0,
                                               &pCompiler, &pErrorBuffer );
if( SUCCEEDED(hr) )
{
    SAFE_RELEASE(pErrorBuffer);

    // Compile the effect and check for errors
    hr = pCompiler->CompileEffect( 0, &pEffectBuffer, &pErrorBuffer );
    if( SUCCEEDED(hr) )
    {
        SAFE_RELEASE(pErrorBuffer);

        // Create the final effect on our device
        hr = D3DXCreateEffect( m_pD3DDevice,
                               pEffectBuffer->GetBufferPointer(),
                               pEffectBuffer->GetBufferSize(),
                               NULL, NULL,
                               0,
                               NULL,
                               &pEffect,
                               &pErrorBuffer );
    }
}

// Clean up
SAFE_RELEASE( pErrorBuffer );
SAFE_RELEASE( pEffectBuffer );
SAFE_RELEASE( pCompiler );

The second important method exposed by the effect compiler is one that we will actually defer
discussion on until a bit later in the course. Its purpose is to handle the compilation of a shader that is
contained within the effect.

HRESULT CompileShader
(
D3DXHANDLE hFunction,
LPCSTR pTarget,
DWORD Flags,
LPD3DXBUFFER* ppShader,
LPD3DXBUFFER* ppErrorMsgs,
LPD3DXCONSTANTTABLE* ppConstantTable
)

Setting/Retrieving the Literal Status of Parameters

Unlike the ID3DXEffect interface, which we will discuss again shortly, the ID3DXEffectCompiler allows
us to set the status of script parameters to type ‘literal’. Marking a parameter as literal indicates to the

compiler that its value will never change throughout the lifetime of the effect. This means that it will not
be changed within the effect or shader code and will not be altered by the application. This enables the
effect compiler to do some optimization because it can essentially treat such parameters as #defines,
which can often be compiled into a more efficient form.

Only non-shared top-level parameters can be marked as literal, and this can only be done through the
ID3DXEffectCompiler interface -- another reason you might want to compile your effects using
ID3DXEffectCompiler instead of simply calling D3DXCreateEffectFromFile. Note as well that this
means we must set the literal status of any effect parameters prior to calling the CompileEffect method.

It should be noted that effect parameters and variables can be defined inside the effect file with the const
modifier much like we can mark a variable as constant in C++.

const matrix WorldMatrix;

Using the "const" modifier in your effect files is not the same as making the variable literal however.
Making a variable constant in this way means its value cannot be altered from within the effect and/or
any of its shader functions. That is, it is only constant within the scope of the effect file. However, it
cannot be treated as a literal because the value of such parameters can still be altered by the application
via the ID3DXEffect::Set... methods.

The declaration for the ID3DXEffectCompiler::SetLiteral method is shown below.

HRESULT SetLiteral
(
D3DXHANDLE hParameter,
BOOL Literal
)

The first input parameter is a handle to the effect parameter we are interested in adjusting. As we saw
earlier, we can retrieve the handle of an effect variable using one of the GetParameter style methods
exposed by the effect compiler (via the ID3DXBaseEffect base interface). For example, we used the
ID3DXEffect::GetParameterByName method to fetch the handle of an effect parameter based on its
name. The ID3DXEffectCompiler interface supports all of these same parameter retrieval functions as
well since they are present in the base interface. The second parameter is a boolean flag indicating
whether we want the parameter to be literal (TRUE) or not (FALSE). Remember that shared parameters
cannot be marked as literal.

If we want to know whether a particular parameter is set as a literal or not, we can query into the effect
using the following method:

HRESULT GetLiteral
(
D3DXHANDLE hParameter,
BOOL* pLiteral
)

Like the prior function, the first input is the handle of the effect parameter we are interested in. The
second parameter accepts the address of a boolean variable that will be filled with the literal status of the
item in question.
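Putting these pieces together, a short hypothetical example (the parameter name "LightCount" is purely illustrative) might look like this:

// Literal status must be set before CompileEffect is called
D3DXHANDLE hParam = pCompiler->GetParameterByName( NULL, "LightCount" );
pCompiler->SetLiteral( hParam, TRUE );

// Optionally confirm the new status
BOOL bIsLiteral = FALSE;
pCompiler->GetLiteral( hParam, &bIsLiteral );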

As you have seen so far, there are many ways to create an ID3DXEffect from a text or binary effect
script. All of these same techniques can be used whether or not your effect file contains the code for
vertex and pixel shaders. Thus, in the next chapter you will see that everything we learn about effects in
this chapter can be carried over and, for the most part, will not change much on the application side.
Once we have our application set up to render using effects, we get virtually automatic shader support
simply by including shader programs in those effect files. The effect compiler built into the D3DX effect
framework (and the command line version of said compiler) understands shader code and is able to
compile it along with the effect. From the application’s perspective, we are still just creating an effect
and using it as normal, but now it can perform more advanced rendering operations with the inclusion of
shader programs.

Note: Shader programs do not have to be embedded inside effect files in order to make use of them.
The DirectX Graphics API provides interfaces and functions that allow you to compile standalone vertex
and/or pixel shaders that can be bound to the device and used independent of any effect script.
However, using effect files can be very convenient, certainly when getting started, because it allows you
to place multiple shaders in a given file mixed with fixed-function fallback techniques. The D3DX effect
framework also provides an easy mechanism for rendering with shaders due to its use of techniques and
passes and it exposes an interface to allow us to easily send parameter values into the effect (and hence
into shader functions within that effect). With all the groundwork behind us in this chapter, you will be
very pleased with the ease of shader integration in the next chapter.

18.3.4 The Extended Effect Creation Functions

Although their use will not be apparent until we start to study shaders, there are three other effect
creation functions that we have not yet discussed. For the most part, the functions are the same as their
siblings that we have already seen -- they are simply extended versions of them. Such functions have
their names appended with “Ex” which you will no doubt have noticed is the common method used by
Microsoft APIs to denote extended versions of pre-existing functions. In this particular case, the three
functions are D3DXCreateEffectEx, D3DXCreateEffectFromFileEx, and
D3DXCreateEffectFromResourceEx, and each allows for a single additional parameter to be passed,
versus their un-extended counterparts. The extra parameter in all three cases is a string called
pSkipConstants.

HRESULT D3DXCreateEffectFromFileEx(
    LPDIRECT3DDEVICE9  pDevice,
    LPCTSTR            pSrcFile,
    CONST D3DXMACRO   *pDefines,
    LPD3DXINCLUDE      pInclude,
    LPCSTR             pSkipConstants,
    DWORD              Flags,
    LPD3DXEFFECTPOOL   pPool,
    LPD3DXEFFECT      *ppEffect,
    LPD3DXBUFFER      *ppCompilationErrors
);

HRESULT D3DXCreateEffectFromResourceEx(
    LPDIRECT3DDEVICE9  pDevice,
    HMODULE            hSrcModule,
    LPCTSTR            pSrcResource,
    CONST D3DXMACRO   *pDefines,
    LPD3DXINCLUDE      pInclude,
    LPCSTR             pSkipConstants,
    DWORD              Flags,
    LPD3DXEFFECTPOOL   pPool,
    LPD3DXEFFECT      *ppEffect,
    LPD3DXBUFFER      *ppCompilationErrors
);

HRESULT D3DXCreateEffectEx(
    LPDIRECT3DDEVICE9  pDevice,
    LPCVOID            pSrcData,
    UINT               SrcDataLen,
    CONST D3DXMACRO   *pDefines,
    LPD3DXINCLUDE      pInclude,
    LPCSTR             pSkipConstants,
    DWORD              Flags,
    LPD3DXEFFECTPOOL   pPool,
    LPD3DXEFFECT      *ppEffect,
    LPD3DXBUFFER      *ppCompilationErrors
);

This additional parameter provides a means to optimize an effect’s internal parameter management,
which can become very important when using effects that contain shaders. To understand the
pSkipConstants parameter we need to briefly discuss how effect parameters are made accessible to
shader programs by the effect framework. For this next discussion, try not to focus too much on the
details of how and why shaders work or how we can embed them in an effect file; that is what we will
do in the following chapter. For now just know that the parameters defined in an effect file will be
accessible to any shader programs stored within that effect file.

Shaders are programs that can be written to replace the vertex transformation/lighting and pixel shading
modules of the fixed-function pipeline in DirectX 9. This allows developers the ultimate in flexibility
since they are no longer bound by the functionality offered via a limited set of fixed-function states.
With the inception of shader programming, we recapture many of the freedoms that graphics
developers had in the days of the software rendering engine. You can control how each pixel gets
shaded, how lights affect vertices and pixels, and which lighting model will be used. Shader programs
have one huge advantage over the old days of the homegrown software engines however -- they run on
optimized graphics hardware dedicated to and designed for this task alone. When an effect file contains
shaders, those shaders are compiled along with the rest of the effect script and when the effect is applied,
the shader programs are uploaded to the GPU so that they can be executed in hardware with maximum
efficiency.

You will see in the following chapter(s) that shaders will almost always need access to certain input
variables or parameters in order to do their jobs. A vertex shader for example is responsible for

transforming each vertex into clip space and, as such, it will need access to the world, view, and
projection matrices of the object for which the shader is being invoked. Depending on the ‘shader
model’ being used, a certain number of registers will exist on the graphics hardware to provide storage
for us to pass this type of parameter data from the host application to the shader. In order for the shader
to access any variables/parameters, they obviously must exist on the hardware with the shader program.

When we use a D3DX effect, this data transfer is handled for us behind the scenes and we are isolated
from the underlying mechanics (to a certain degree). Whenever we define matrices, vectors, integers,
etc. in an effect file that contains shaders, these variables will need to be uploaded into the constant
registers of the hardware before we render any polygons that use that effect (and its shaders).

When we create an ID3DXEffect object, we know that any parameters defined inside the effect script
will be accessible and configurable via Get/Set methods. Basically, the ID3DXEffect object will own and
manage system memory copies of all parameters that were defined in the effect script. Whenever we set
the value of an effect’s parameter, we are simply changing the value of these system memory variables
that are contained inside the effect's parameter value pool (shared or otherwise). Although we must
sandwich the render code for a given effect inside an ID3DXEffect::BeginPass and
ID3DXEffect::EndPass pair, it is usually prior to this that we use the ID3DXEffect::Set... methods to
configure the parameters for the effect that is about to be applied. At this point we are simply altering
the values of system memory variables, not the values of any registers on the hardware. After we have
configured all variables to be exactly how we want them for a given instance, and we've called the
ID3DXEffect::Begin method to inform the framework that we are about to start rendering with this
effect (and optionally cache device state), if any shaders are present in the effect, any variables accessed
by the shader functions will have their values uploaded into the constant registers of the GPU per
ID3DXEffect::BeginPass call, just like any other state.

Note: Not all effect data used by a shader is uploaded into the constant registers, as we will see in the
next chapter. Texture parameters, for example, are handled differently and are used to configure the
hardware’s texture samplers. In addition, only those parameters actually used by the shader program in
question will be uploaded.

It is very important that system memory copies of these variables are maintained by the effect object
because there are a limited number of constant registers available (particularly in early shader models)
and they can and often will be reused -- even between passes in the same technique.

Whenever ID3DXEffect::BeginPass is called, relevant parameters will be uploaded into the constant
registers, overwriting any values currently stored there. While it is certainly convenient that this call will
handle the data upload for us automatically, this process does take time. Ultimately, there may be cases
when we need to take manual control of certain parameters so that they are not uploaded needlessly.
Such parameters are those that may need to have their values set between the BeginPass and EndPass
calls because their initial values are only established after the fact, or those which we wish to set once
and then leave alone. To improve performance, the extended versions of the creation functions exist.
The new parameter pSkipConstants allows you to inform the effect about any variables/parameters that
you wish it not to manage because the application will do so itself -- including uploading their values to
the hardware.

LPCSTR pSkipConstants
This string contains the names of all the constant parameters in the effect file that you wish the
application, not the effect, to manage. Each parameter name should be separated by a semi-colon and the
string should be null terminated. Essentially these parameters will be ignored by the effect system when
the other parameters are being uploaded to the constant registers.
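For example (the parameter names are hypothetical), an application that wanted to take over management of the "WorldMatrix" and "Time" parameters might create its effect like so:

D3DXCreateEffectFromFileEx( pDevice, "myeffect.fx",
                            NULL, NULL,
                            "WorldMatrix;Time",  // constants we will manage ourselves
                            0, NULL,
                            &pEffect, &pErrors );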

Note: The extended effect creation functions and the idea of skipping constants will make more sense in
the next chapter when we start to use shaders.

18.3.5 The Effect Compiler Command Line Tool

Somebody new to the topic of effect files and shaders might be taken aback by the sheer number of
creation methods and strategies and be forgiven for thinking that compiling an effect is a complex task.
Of course, we have seen that this is not the case and that effects can be compiled with a single function
call in many cases; it is just that the many different ways that we can compile effects are available to suit
various situations and scenarios that commonly arise. In this final section on effect file creation and
compilation, we will discuss the standalone compiler tool that ships with the SDK. This tool allows us to
compile our effects at development time so that only the binaries are shipped with the finished product.
The effect command line compiler is called fxc.exe and can be found in the following folder:

C:\DX9SDK\Utilities\Bin\x86

Note: You should change ‘C:\DX9SDK’ to the path and folder in which you installed the DirectX 9 SDK on
your system.

This tool is also referred to as the High Level Shader Language (HLSL) compiler and can be used to
compile effects that both do and do not contain shaders. High Level Shader Language is a C-like
programming language that we will be using throughout this course to write our shaders. The HLSL
compiler can be used to compile binaries for effect scripts at development time and the results can be
quickly loaded and used to create an ID3DXEffect interface at runtime using a function such as
D3DXCreateEffectFromFile.

You can find details on all of the various command line parameters that can be passed into this tool to
control the compilation process by simply executing fxc.exe with the "/?" or "/help" command line
argument. The following box shows the output we might see from the command prompt when we do so.

C:\dx9sdk\Utilities\Bin\x86>fxc.exe /?

Microsoft (R) Direct3D Shader Compiler 9.29.952.3111
Copyright (C) Microsoft Corporation 2002-2009. All rights reserved.

Usage: fxc <options> <file>

   /?, /help           print this message

   /T<profile>         target profile
   /E<name>            entrypoint name
   /I<include>         additional include path
   /Vi                 display details about the include process

   /Od                 disable optimizations
   /Op                 disable preshaders
   /O{0,1,2,3}         optimization level 0..3. 1 is default
   /WX                 treat warnings as errors
   /Vd                 disable validation
   /Zi                 enable debugging information
   /Zpr                pack matrices in row-major order
   /Zpc                pack matrices in column-major order

   /Gpp                force partial precision
   /Gfa                avoid flow control constructs
   /Gfp                prefer flow control constructs
   /Gdp                disable effect performance mode
   /Ges                enable strict mode
   /Gec                enable backwards compatibility mode
   /Gis                force IEEE strictness
   /Gch                compile as a child effect for FX 4.x targets

   /Fo<file>           output object file
   /Fc<file>           output assembly code listing file
   /Fx<file>           output assembly code and hex listing file
   /Fh<file>           output header file containing object code
   /Fe<file>           output warnings and errors to a specific file
   /Vn<name>           use <name> as variable name in header file
   /Cc                 output color coded assembly listings
   /Ni                 output instruction numbers in assembly listings

   /P<file>            preprocess to file (must be used alone)

   @<file>             options response file

   /dumpbin            load a binary file rather than compiling
   /Qstrip_reflect     strip reflection data from 4_0+ shader bytecode
   /Qstrip_debug       strip debug information from 4_0+ shader bytecode

   /compress           compress DX10 shader bytecode from files
   /decompress         decompress bytecode from first file, output files should
                       be listed in the order they were in during compression

   /D<id>=<text>       define macro
   /LD                 Load d3dx9_31.dll
   /nologo             suppress copyright message

   <profile>: cs_4_0 cs_4_1 cs_5_0 ds_5_0 fx_2_0 fx_4_0 fx_4_1 fx_5_0 gs_4_0
      gs_4_1 gs_5_0 hs_5_0 ps_2_0 ps_2_a ps_2_b ps_2_sw ps_3_0 ps_3_sw ps_4_0
      ps_4_0_level_9_1 ps_4_0_level_9_3 ps_4_0_level_9_0 ps_4_1 ps_5_0 tx_1_0
      vs_1_1 vs_2_0 vs_2_a vs_2_sw vs_3_0 vs_3_sw vs_4_0 vs_4_0_level_9_1
      vs_4_0_level_9_3 vs_4_0_level_9_0 vs_4_1 vs_5_0

Since this compiler has the ability to compile shaders, there are many command line options that we won’t understand until we cover shader development. However, the first important one from our perspective is "/T", which allows us to specify the target profile of the compiled binary. At the bottom of the output shown above, you can see all of the various compile targets that are supported by the compiler. For example, if you specify "/T vs_1_1" then you are instructing the compiler to compile any shader code included in the file such that it is compatible with vertex shader model 1.1. If any shaders exist in the file which contain instructions or operations that are not supported by the 1.1 vertex shader model, then compilation will fail and a list of errors will be returned. You can see that the compiler above supports vertex shader compile targets from 1.1 right up to 5.0, and similarly for the pixel shader (ps_x_x) models. When compiling shaders with this tool, you can test whether your code is compatible with your required vertex and pixel shader target platforms and make adjustments where necessary.

Although your effect files will usually contain shaders, we have not yet covered them, so for now we want to learn how to use this tool to compile fixed-function-only effects. If no compile target is specified, the default vertex and pixel shader targets (1.1) will be chosen. The compiler will then search for a 'main' shader entry point function and fail to locate it, as shown below.

C:\dx9sdk\Utilities\Bin\x86>fxc.exe Terrain.fx /Fo Terrain.fxo

Microsoft (R) Direct3D Shader Compiler 9.29.952.3111
Copyright (C) Microsoft Corporation 2002-2009. All rights reserved.

error X3501: 'main': entrypoint not found

compilation failed; no code produced

Here we invoked the compiler on a text-based effect script called “Terrain.fx”. The "/Fo" command line switch allows us to specify the name of the file in which the final compiled binary should be stored -- in this case “Terrain.fxo”. It is this .fxo file, containing the script in binary form, that we would ship with our application. As we can see however, compilation failed because the compiler could not find any shader function.

In order to compile effects we must use one of the "fx_x_x" target profiles. For DirectX 9 applications,
we would use "fx_2_0". Below we see the output when using this target profile in conjunction with the
"/T" profile switch.

C:\dx9sdk\Utilities\Bin\x86>fxc.exe Terrain.fx /T fx_2_0 /Fo Terrain.fxo

Microsoft (R) Direct3D Shader Compiler 9.29.952.3111
Copyright (C) Microsoft Corporation 2002-2009. All rights reserved.

compilation succeeded; see Terrain.fxo

C:\dx9sdk\Utilities\Bin\x86>

As you can see, this has fixed the problem and we successfully compiled a pure fixed-function effect
script that contains no shader code. The output binary file produced by the fxc.exe tool in this case
(Terrain.fxo) can now be loaded and bound to the device directly using D3DXCreateEffect, and no
longer requires any kind of runtime compilation.
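
For example, a minimal sketch of loading the compiled binary at runtime (error handling omitted) might look like the following:

LPD3DXEFFECT pEffect = NULL;

// The pre-compiled .fxo is loaded exactly like a text-based script; no
// runtime compilation takes place because the file already contains binary.
D3DXCreateEffectFromFile( pDevice, "Terrain.fxo",
                          NULL, NULL, 0, NULL, &pEffect, NULL );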

Compiling Effects from within the Microsoft Visual C++ IDE

Because the effect compiler is a command line tool, we can optionally choose to integrate it into the larger compilation process within the Visual C++™ IDE. That is, we can add the .fx scripts to the project in the Solution Explorer and then set up a custom build step for each one that invokes the effect compiler during the build. When we build the solution, the .fx files will all be compiled along with the rest of the project files. Furthermore, any output from the compiler will be reflected in the main output window of the IDE, along with our other compiler messages. This is handy when your .fx files contain shaders since you can open them up like .cpp or .h files, make any changes to the effect code from within the IDE, and then rebuild the solution. Everything now happens in the same place, unifying the development and build processes of both the main game code and the effects themselves.

Figure 18.6

In Figure 18.6 we see how one could edit effect files from within the IDE. In this example we have created a folder in the Solution Explorer called “Effect Files” and added a single .fx file. Notice that if we double click on this effect file, it will open up in the main window just like a regular .cpp or .h file. We can also right click on the file in the Solution Explorer and alter its properties.

As things currently stand, the C++ compiler will not know how to compile this file when we request that the solution be built. That is where we need to assign a custom build step to the .fx file so that we can instruct the IDE to invoke a different compiler for it. We do this by right clicking on the .fx file we wish to set up a build step for and selecting its Properties (see Figure 18.6).

Upon selecting Properties from the context menu, a property sheet will open that shows a number of compilation and configuration settings for the file in question. We are interested in setting up a custom build for this file that will invoke fxc.exe, so we select the ‘Custom Build Step’ folder from the tree view on the left side. This will display a set of text boxes to fill in as shown in Figure 18.7.

Figure 18.7

In this example you can see that we have filled in the command line to invoke the effect compiler for the Terrain.fx file and output the final binary to the Terrain.fxo file. All of the switches from our earlier command line example are used here as well. We can also enter some text in the “Description” field that will be output to the IDE output window as the effect compiler is invoked.
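
As a rough guide (the SDK path and the $(InputDir)/$(InputPath) build macros are illustrative and will vary with your installation and IDE version), the command line entered in Figure 18.7 might read:

"C:\DX9SDK\Utilities\Bin\x86\fxc.exe" /T fx_2_0 /Fo "$(InputDir)Terrain.fxo" "$(InputPath)"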

With these settings in place, we can simply build the project and the effect compiler will be invoked to compile the effect files into their binary versions automatically. You can then ship the binary versions with your product and get the dual benefits of fast application initialization and keeping your proprietary human-readable code under wraps.

Note: Later in the course we will talk about an alternative approach to effect file compilation management that operates on similar principles to what we see here, but without the need to manually configure all of our compilation options, which can be laborious when you have lots of effect files. Also, as the number of permutations of even a single shader (more on this later) gets larger, compilation can begin to take a fair amount of time. Undoubtedly, this unified approach has a certain high-level appeal, but in practice it may turn out to be too unwieldy and time-consuming versus alternative designs.

18.4 Using Effects – Reference


Earlier in this chapter we discovered how easy it is to validate and use effects at runtime. However, in
our preliminary discussions we glossed over the parameters and flags that can be passed to many of the
ID3DXEffect methods and even then, we only examined a small subset of the methods exposed by the
D3DX effect framework. In this section, we will take a look at the ID3DXEffect interface in more detail
and examine the methods that are available at runtime to validate, manage, and apply an effect.

In the previous section we learned about a variety of ways in which we can create an ID3DXEffect from
a pre-compiled binary or text-based script. Of course, once the effect has been loaded and compiled and
we have been given back a valid ID3DXEffect interface, this does not mean it is ready to be used
immediately. Our next step will usually be to validate the various techniques contained within the effect
and select the best one suitable for the current hardware.

18.4.1 Techniques

After an application creates an ID3DXEffect, its next job will usually be to examine the internals of the
effect to understand what techniques are available and what parameters exist that will require the
provision of data. In this section we will examine the various methods that exist to facilitate the
validation and selection of techniques within a given effect file. Only after a technique has been selected
can the effect be used, so this is a very important step. We also need to make sure that we do not blindly
select the first technique in the effect because it may not be the best choice or even supported on the
current hardware.

In our earlier examples we used the ID3DXEffect::FindNextValidTechnique method to find the first
technique in the file that is supported on the current hardware. Furthermore, we discussed the fact that if
the original effect script was organized such that the effects at the top of the script were the more
demanding ones (with regards to needing more recent hardware) and the ones at the bottom the least
demanding, this single function call would always locate the best technique in the effect that will work
on the current hardware. The handle to the technique is returned and can be used to inform the effect
about which technique should be used.

While this is certainly the easiest and best option to use in many situations, there will be times when you
wish to know more about the techniques in the effect and perhaps make a more informed decision. We
might imagine, for example, that if you were creating a modeling tool you might want to give the user
detailed information about each of the techniques in the file and allow the artist to select each one to try
it out. Alternatively, you might have devised a level of detail (LOD) system in your game that alternates
between expensive and inexpensive techniques based on proximity to the viewer. Such cases cannot
necessarily rely solely upon top-to-bottom file order to choose a technique; the selection will need to be
managed via other means.

In addition to the handy, but perhaps too limiting, FindNextValidTechnique method, the ID3DXEffect
interface exposes other functions that allow you to individually examine and validate techniques as well
as set the current technique and retrieve information about the currently set technique.

Both the ID3DXEffect interface and the ID3DXEffectCompiler interface are derived from
ID3DXBaseEffect and as such many of the methods that we will cover are defined in the base interface
and are accessible to both of the derived classes. For each function that we cover, we will clearly mark
whether or not the method is particular to any one of the derived interfaces or available to both through
the base interface.

Note: The base interface provides all of the common methods that might be supported by any effect
class derived from it regardless of whether it is a development time tool or runtime effect. Such methods
include functions to set and retrieve the values of effect parameters, methods to retrieve descriptors of
the various techniques, parameters, and annotations included in the file. Thus, these methods are
available through both the ID3DXEffect and ID3DXEffectCompiler interfaces via the base interface. The
methods to actually use the effect at runtime, such as validating and activating techniques, executing a
technique or a given pass, have no context in the case of the effect compiler and as such, these methods
are not available in the base class and are only available through ID3DXEffect.

The GetTechnique Method – ID3DXBaseEffect

We know that an effect may contain multiple techniques, so in order to use that effect we must tell it
which of its techniques we would like it to use (via ID3DXEffect::SetTechnique). In order to do this
we must know the handle of the technique and, as such, ID3DXBaseEffect exposes several methods for
retrieving technique handles. The first is the method GetTechnique, which allows you to fetch a
technique handle by passing in the index of the technique. This is the index indicating where in the file it
is defined with respect to other techniques.

This method is useful if you wish to step through and examine all techniques in a given effect. For example, you could first call the GetDesc method (covered below) to ascertain how many techniques are in the file. You could then set up a loop, calling GetTechnique to retrieve and store the handle to each one, and use those handles to ask D3DX for specific information about each technique (e.g., its number of passes or whether it can be validated on the current device).

D3DXHANDLE GetTechnique
(
UINT Index
);

The method takes a single parameter: the integer index of the technique’s position as defined within the effect script.

The GetTechniqueByName Method – ID3DXBaseEffect

This method is useful if your effect files have been authored in a way such that you know which
techniques to select based on the name assigned to each technique within the effect script. For example,
you might have a technique called “Single_Pass_Multi_Texture” and a second technique called
“Multi_Pass_Multi_Texture”. Your application may prefer to retrieve the handles to its techniques using
their names, so in such a case we might imagine how the application would first fetch the handle to the
single pass technique and then try to validate it. If validation failed, it would then search for the multi-
pass technique and use that one instead.

D3DXHANDLE GetTechniqueByName
(
LPCSTR pName
);

Its only parameter is a string containing the name of the technique whose handle you would like returned.
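
As a brief sketch of the fallback strategy just described (the technique names are taken from the hypothetical example above):

// Prefer the single pass technique; fall back to the multi-pass version.
D3DXHANDLE hTech = pEffect->GetTechniqueByName( "Single_Pass_Multi_Texture" );
if ( !hTech || FAILED( pEffect->ValidateTechnique( hTech ) ) )
    hTech = pEffect->GetTechniqueByName( "Multi_Pass_Multi_Texture" );
if ( hTech ) pEffect->SetTechnique( hTech );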

The FindNextValidTechnique Method – ID3DXEffect

When you just want a no-hassle, quick and easy way to get a handle to a validated technique that is
guaranteed to work on the current system, this is the method to use. It will search the effect for
techniques, validating each one it finds. As soon as it finds a technique that passes the validation test it
will stop the search and immediately return a handle to that technique. We saw this function used earlier
in the chapter in some of our examples.

HRESULT FindNextValidTechnique
(
D3DXHANDLE hTechnique,
D3DXHANDLE* pTechnique
);

D3DXHANDLE hTechnique
If this parameter is set to NULL then the search will begin with the very first technique defined. However, if you pass in the handle of an existing technique, the search will begin at that technique and return the next valid one -- i.e., only techniques defined after it in the effect script will be considered. This is quite useful if you wish to compile a handle array of all valid techniques (ignoring any that are not supported on the current hardware). You can simply call this function repeatedly, each time passing in the handle returned by the previous call, until all techniques have been enumerated (see below). As discussed, this would be useful when searching for valid techniques to drive a level of detail system. If the techniques are arranged in the effect from most to least intensive, we would end up with an array of valid techniques in which the handles stored later in the array could be used for polygons in the far distance.

D3DXHANDLE LastTechnique  = NULL;
D3DXHANDLE ValidTechnique = NULL;
D3DXHANDLE MyTechniqueHandleArray[ Max_Techniques ];
int        ValidTechniqueCount = 0;

// Keep extracting valid technique handles until we have found them all.
// FindNextValidTechnique reports success but returns a NULL handle once
// the search is exhausted, so we also test the returned handle.
while ( SUCCEEDED( pEffect->FindNextValidTechnique( LastTechnique, &ValidTechnique ) )
        && ValidTechnique != NULL )
{
    // Store this valid technique handle in the array
    MyTechniqueHandleArray[ ValidTechniqueCount++ ] = ValidTechnique;

    // Start the search from this technique next time
    LastTechnique = ValidTechnique;
}

D3DXHANDLE* pTechnique
As the above code demonstrates, for this parameter we pass in the address of a variable of type
D3DXHANDLE into which the handle of the next valid technique that was found in the search will be
placed. If your effect files are arranged such that the most cutting-edge techniques are defined before the
legacy techniques, a single call to this function will return the handle for the best technique supported on
the current system on which the application is running.

The GetDesc Method – ID3DXBaseEffect

This method can be used to retrieve information about a given effect. We pass in a single parameter --
the address of a D3DXEFFECT_DESC structure.
HRESULT GetDesc
(
D3DXEFFECT_DESC* pDesc
);

The D3DXEFFECT_DESC structure contains four members that describe the creator of the effect and the number of parameters, techniques, and functions defined within it. These counts can then be used to further enumerate each of these properties by index.

typedef struct D3DXEFFECT_DESC
{
    LPCSTR Creator;
    UINT Parameters;
    UINT Techniques;
    UINT Functions;
} D3DXEFFECT_DESC, *LPD3DXEFFECT_DESC;

LPCSTR Creator
This member contains a pointer to a string that may contain author information about the effect. For
example, if you use the command line effect compiler, that tool will embed its own signature inside the
binary effect so that it can be returned within this member. Likewise, if you compile your effect at
runtime using one of the D3DX functions, the author string embedded in the effect will read “D3DX
Effect Compiler”. So as you can see, this is a means for us to determine which tool was used to create or
compile the effect.

UINT Parameters
This member will contain the number of top level parameters defined in the effect. With this count we
can later enumerate all of the variables and cache handles to them for later data input into the effect. Of
course, usually you will not walk through all the parameters in the effect and examine them one by one,
but will more often just search for any that your engine understands (probably by either name or
semantic). However, if we do know the total number of parameters, we at least have the ability to set up
a loop to fetch the details of each one for examination.

It should be noted that this member contains the number of top-level parameters and therefore, a
parameter whose type is a structure with multiple members would count as one parameter. If we wanted
to examine the members of a structure, we would first have to fetch the descriptor for that top-level
parameter and then ask it for information about each of its child members separately. We saw an
example of this earlier in the chapter when we looked at building an FVF code based on a vertex
structure defined in the effect.

UINT Techniques
This member is important if you wish to examine the various techniques in the effect file. It contains the
number of techniques defined (including both validated and non-validated techniques) and as such can
be used to set up a loop so that we can fetch each technique and its details by index.

UINT Functions
This member will contain a value of 0 when enumerating fixed-function effects and will be used only
when shaders are present.

You will see in the following chapter that a vertex or pixel shader is very similar to a C function defined
inside the effect file. When a vertex is about to be transformed, the Direct3D pipeline will call that
shader function to transform the vertices instead of using its own fixed-function vertex transformation
module (and likewise for pixel shaders, only they are dealt with later in the pipeline). In turn, a shader function can also call other helper functions (not unlike functions in C/C++), so to keep your code orderly and easy to read, your effect files may contain many helper functions that are called by the vertex or pixel shader main functions. As a result, an effect file may contain many functions that can be examined through the ID3DXEffect and ID3DXEffectCompiler interfaces, even if there is only one "shader" function.

The GetTechniqueDesc Method – ID3DXBaseEffect

When parsing the various techniques in a file, it is often necessary to examine the details of that
technique closely. This method allows you to pass in the handle of a technique and get back a
D3DXTECHNIQUE_DESC structure containing information about that technique. While this has
obvious uses for debugging and logging purposes, it also provides the means for enumerating any
annotations attached to the technique. Additionally, it allows us to examine the number of passes that the
technique uses, which might also be a factor in terms of whether or not our engine chooses to use it.

HRESULT GetTechniqueDesc
(
D3DXHANDLE hTechnique,
D3DXTECHNIQUE_DESC* pDesc
);

The first parameter is the handle of the technique for which we would like to retrieve the descriptor.
This handle would have been previously retrieved using any of the technique location methods discussed
above. As the second parameter the address of a variable of type D3DXTECHNIQUE_DESC should be
supplied, which the method will use to return information back to the caller.

The D3DXTECHNIQUE_DESC structure is shown below with a description of its three members. These
members will be populated by the GetTechniqueDesc method, allowing your application to further study
the technique.

typedef struct D3DXTECHNIQUE_DESC
{
    LPCSTR Name;
    UINT Passes;
    UINT Annotations;
} D3DXTECHNIQUE_DESC, *LPD3DXTECHNIQUE_DESC;

LPCSTR Name
The name assigned to the technique inside the effect file.

UINT Passes
The number of passes implemented by this technique.

UINT Annotations
The number of annotations associated with the technique. Once we know how many annotations are
associated with a given technique, we can use the ID3DXBaseEffect::GetAnnotation method
(discussed later) to fetch each annotation by index for the purpose of enumerating them all. If you are
only after a very specific annotation, you can use the ID3DXBaseEffect::GetAnnotationByName
method (also discussed later).

The following snippet of code shows some of the methods we have discussed so far being used to iterate
through the techniques of an effect and print out detailed debug information about them, including
whether or not it will validate on the current machine on which the application is being run. In this
example we assume that m_pEffect is a valid ID3DXEffect interface that has been previously created.

// Retrieve the effect description
D3DXEFFECT_DESC EffectDesc;
m_pEffect->GetDesc( &EffectDesc );

printf( "Effect Creator : %s\n\n", EffectDesc.Creator );

// Find the first valid technique in the effect file
D3DXHANDLE hTechnique = NULL;
for ( UINT i = 0; i < EffectDesc.Techniques; ++i )
{
    // Retrieve the indexed technique
    hTechnique = m_pEffect->GetTechnique( i );
    if ( !hTechnique ) continue;

    // Retrieve the technique description
    D3DXTECHNIQUE_DESC TechniqueDesc;
    if ( FAILED( m_pEffect->GetTechniqueDesc( hTechnique, &TechniqueDesc ) ) )
    {
        hTechnique = NULL;
        continue;

    } // End if description unavailable

    // Print initial information for technique
    printf( "Validating technique '%s'....", TechniqueDesc.Name );

    // Determine if this technique is valid
    if ( FAILED( m_pEffect->ValidateTechnique( hTechnique ) ) )
    {
        // Reset and move on to next technique
        printf( "Failed\n" );
        hTechnique = NULL;
        continue;

    } // End if not a valid technique
    else
    {
        // We have found a valid technique and can break
        printf( "Success\n" );
        break;

    } // End if valid technique

} // Next Technique

The above example shows how to fetch the effect descriptor and output the name of the effect’s creator. It uses the returned descriptor to determine how many techniques are defined inside the file and then loops through each one. On each iteration of the technique loop, we use the GetTechnique method to fetch the handle of the technique at the current integer index and then use that handle to retrieve the technique’s descriptor. With the descriptor in hand, we output the name of the technique we are about to validate and then print whether validation succeeded or failed on the current hardware.

The ValidateTechnique Method – ID3DXEffect

As shown in the previous code snippet, this method can be used to manually validate an individual
technique. This would not be required if you were using the FindNextValidTechnique method since that
method only returns handles to already validated techniques. However, if you do wish to manually test
techniques yourself, then this method can be used.

HRESULT ValidateTechnique
(
    D3DXHANDLE hTechnique
);

It takes a single parameter containing the handle of the technique you wish to validate. You would
typically use one of the ID3DXEffect::GetTechnique... methods to fetch the handle first which would
then be passed into this method to test for success or failure.

The SetTechnique Method – ID3DXEffect

An effect can contain multiple techniques but only one can ever be active at any given moment in time
during rendering. Generally speaking, it is not uncommon to have multiple techniques in an effect file --
if for no other reason than to provide fallback techniques for a given rendering effect. In such an
instance, we would validate the techniques inside the effect file at runtime and then select the best one
for the job. As shown earlier, once we have compiled the effect and located the technique we wish to use
on the current system, we would call the SetTechnique method to inform the effect about the technique
we have chosen to use during rendering. This can be done just once at effect load/compile time or
changed on the fly when needed.

HRESULT SetTechnique
(
D3DXHANDLE hTechnique
);

Its only parameter is the handle of the technique you would like to activate.

The GetCurrentTechnique Method – ID3DXEffect

Just as there is a method to set the active technique, there is also a function to retrieve the handle of the
technique that is currently active. The returned handle might then be passed into the
ID3DXEffect::GetTechniqueDesc function to retrieve more information about the technique currently
being used.

D3DXHANDLE GetCurrentTechnique( );

The function takes no parameters and returns the handle of the currently active technique.
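
For instance, a short sketch that reports on the active technique might read:

// Query the active technique and print its name and pass count.
D3DXHANDLE hCurrent = pEffect->GetCurrentTechnique();
D3DXTECHNIQUE_DESC Desc;
if ( hCurrent && SUCCEEDED( pEffect->GetTechniqueDesc( hCurrent, &Desc ) ) )
    printf( "Active technique '%s' (%u pass(es))\n", Desc.Name, Desc.Passes );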

18.4.2 Communication with Effect Parameters

An effect file may have several parameters defined that allow the application to pass in runtime values.
Without parameters, the effect framework would lose a large portion of its utility since many states can
only be set based on dynamic information that changes from frame to frame (e.g., lighting parameters,
animated transformation data, etc.). To be sure, the usefulness of parameters in fixed-function effect files is dwarfed by how powerful a concept they become in effect files that have embedded shader programs, as we will see in the next chapter and in those that follow.

The ID3DXBaseEffect interface provides dozens of methods for setting and retrieving the values for all
of the supported parameter types that can exist in an effect file. When we list them in a moment, you
will see methods to set/retrieve parameter types that we have not yet seen any examples of. For now
however, just bear in mind that when we move into shaders we will begin to use all of these various
variable types, so many of the parameter setting/retrieval functions we will see will have much more
relevance once we introduce the programmable pipeline.

The numerous methods available to communicate values to and from effect parameters all follow the
same format. For example, in order to set the value of a boolean parameter inside an effect we use the
ID3DXBaseEffect::SetBool method shown below.

HRESULT SetBool( D3DXHANDLE hParameter, BOOL b );

As the first input we pass the handle of the parameter whose value we wish to set. In this example,
hParameter would be the handle to a parameter of type BOOL. As the second input we pass in the new
value for the parameter.

Likewise, each method in this format also has the corresponding method that allows you to retrieve the
current contents of a parameter value. For example, below we see the ID3DXBaseEffect::GetBool
method.

HRESULT GetBool( D3DXHANDLE hParameter, BOOL *pb );

Once again, we pass the handle of the parameter whose value we wish to retrieve from the effect. As the
second input, we pass the address of a boolean variable that will receive the result on function return.

There are similar functions for the other supported parameter types -- matrices, integers, floats, vectors,
and strings -- and we will list these in a moment. We saw earlier how even arrays of the standard
parameter types can be defined. Continuing our demonstration using the Boolean type:

HRESULT SetBoolArray( D3DXHANDLE hParameter, CONST BOOL *pB, UINT Count );

In this case, we have replaced our single boolean value with a pointer to the first element in an array of
boolean values which we would like copied into the effect’s matching parameter array. This version of
the method takes a third input value informing the effect framework as to the number of elements in the
passed array that we would like to copy.

And of course there are corresponding fetch methods for arrays of any given type. The boolean version
of the array retrieval method is:

HRESULT GetBoolArray( D3DXHANDLE hParameter, BOOL *pB, UINT Count );

The boolean pointer passed as the second parameter should now point to a buffer that will be filled with
the values extracted from the boolean parameter array inside the effect. The third parameter tells the
effect how many booleans you would like to copy from the effect array into the memory buffer.

Note: Although these functions all take a D3DXHANDLE to identify the parameter in question, the
functions are overloaded. This means you can also pass the name of the parameter if the handle is not
known. As discussed, this is less efficient because of the need for an internal name search and it is a
good rule of thumb to cache handles for any parameters that are going to be updated frequently.

Below are the methods of the ID3DXBaseEffect interface that can be used to set simple parameter types
such as integers, bools, floats, strings and matrices.

Parameter Value Assignment Methods – ID3DXBaseEffect

HRESULT SetBool    ( D3DXHANDLE hParameter, BOOL b );
HRESULT SetFloat   ( D3DXHANDLE hParameter, FLOAT f );
HRESULT SetInt     ( D3DXHANDLE hParameter, INT n );
HRESULT SetMatrix  ( D3DXHANDLE hParameter, CONST D3DXMATRIX* pMatrix );
HRESULT SetString  ( D3DXHANDLE hParameter, LPCSTR pString );
HRESULT SetVector  ( D3DXHANDLE hParameter, CONST D3DXVECTOR4* pVector );
HRESULT SetTexture ( D3DXHANDLE hParameter, LPDIRECT3DBASETEXTURE9 pTexture );
HRESULT SetValue   ( D3DXHANDLE hParameter, LPCVOID pData, UINT Bytes );

Notice that there is a method called SetVector which allows you to send in a 4D vector. 4D vectors are of particular importance when working with shaders since, as we will discover, four-float registers are pretty much how the hardware arranges almost all of our data.

We mentioned earlier that we can define custom structures in our effect file, but we see no function
above to set their values. This is one case where the last method in the above list (SetValue) comes in
handy. It allows you to set the value of any type of parameter, child member, or even annotation. All
you need is the handle of the parameter (or child parameter in a structure or annotation) that you wish to
set. Then you will pass the address of a variable/buffer containing the value(s) to assign along with the
size of that variable/buffer. This is necessary of course because the second parameter could point to any
parameter type (matrix, bool, float, etc.) and the method will need to know how much data you wish to
set. We actually use this method quite a bit when we pass along vector information that has fewer than
four floats (e.g., a D3DXVECTOR2) since it is often simpler than having to copy data over into four
float vectors just to pass information along via SetVector.
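
For example, here is a sketch of the two-component case just mentioned (the parameter handle hUVOffset is a hypothetical name, assumed to have been cached earlier):

// Pass a float2 parameter without padding it out to a full 4D vector.
D3DXVECTOR2 vOffset( 0.5f, 0.25f );
pEffect->SetValue( hUVOffset, &vOffset, sizeof( D3DXVECTOR2 ) );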

We learned a few moments ago that there are also methods to set the values of parameter arrays inside
the effect. The following methods allow the application to populate an array with a single call. We might
imagine for example, how the SetMatrixArray method could be used to pass a palette of bone matrices
for use in character skinning into an array declared within the effect.

Parameter Array Assignment Methods – ID3DXBaseEffect

HRESULT SetBoolArray   ( D3DXHANDLE hParameter, CONST BOOL* pB, UINT Count );
HRESULT SetFloatArray  ( D3DXHANDLE hParameter, CONST FLOAT* pf, UINT Count );
HRESULT SetIntArray    ( D3DXHANDLE hParameter, CONST INT* pn, UINT Count );
HRESULT SetMatrixArray ( D3DXHANDLE hParameter, CONST D3DXMATRIX* pMatrix, UINT Count );
HRESULT SetVectorArray ( D3DXHANDLE hParameter, CONST D3DXVECTOR4* pVector, UINT Count );
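
For instance, the bone palette upload mentioned above might be sketched as follows (the parameter name "BoneMatrices" and the variables pBonePalette/NumBones are assumptions for illustration):

// Upload a contiguous palette of bone matrices with a single call.
pEffect->SetMatrixArray( "BoneMatrices", pBonePalette, NumBones );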

Of course, the mirrors of all these methods have also been implemented to facilitate the flow of
parameter values in the opposite direction (i.e., from the effect back to the application):

Parameter Value Retrieval Methods – ID3DXBaseEffect

HRESULT GetBool    ( D3DXHANDLE hParameter, BOOL* pb );
HRESULT GetFloat   ( D3DXHANDLE hParameter, FLOAT* pf );
HRESULT GetInt     ( D3DXHANDLE hParameter, INT* pn );
HRESULT GetMatrix  ( D3DXHANDLE hParameter, D3DXMATRIX* pMatrix );
HRESULT GetString  ( D3DXHANDLE hParameter, LPCSTR* ppString );
HRESULT GetVector  ( D3DXHANDLE hParameter, D3DXVECTOR4* pVector );
HRESULT GetTexture ( D3DXHANDLE hParameter, LPDIRECT3DBASETEXTURE9* ppTexture );
HRESULT GetValue   ( D3DXHANDLE hParameter, LPVOID pData, UINT Bytes );

Parameter Array Retrieval Methods – ID3DXBaseEffect

HRESULT GetBoolArray   ( D3DXHANDLE hParameter, BOOL* pB, UINT Count );
HRESULT GetFloatArray  ( D3DXHANDLE hParameter, FLOAT* pf, UINT Count );
HRESULT GetIntArray    ( D3DXHANDLE hParameter, INT* pn, UINT Count );
HRESULT GetMatrixArray ( D3DXHANDLE hParameter, D3DXMATRIX* pMatrix, UINT Count );
HRESULT GetVectorArray ( D3DXHANDLE hParameter, D3DXVECTOR4* pVector, UINT Count );

Alternative Matrix Parameter Array Methods

One potential problem with our earlier example of using the 'SetMatrixArray' method to set, for
example, a complete palette of bone matrices for use in skinning, is that it may not always be the case
that our matrices are contained in one contiguous block of memory in the way that method expects.
Thinking back to the development of our CActor class for instance, we know that there were cases
where it was far more convenient to work with an array of pointers to matrices that actually existed
elsewhere in our application. Perhaps the matrices we want to set to the effect are housed in separate,
individual D3DXFRAME objects within an existing transformation hierarchy.

Rather than forcing us to resolve all of these references and build a contiguous array of matrix data by
hand, ID3DXBaseEffect also exposes two additional methods that allow us to work with this type of
data in a much more convenient fashion. These are:

HRESULT SetMatrixPointerArray( D3DXHANDLE hParameter,
                               CONST D3DXMATRIX** ppMatrix,
                               UINT Count );

HRESULT GetMatrixPointerArray( D3DXHANDLE hParameter,
                               D3DXMATRIX** ppMatrix,
                               UINT Count );

These two methods allow us to work directly with arrays of pointers to matrices. When setting matrices
in this form for example, instead of performing a simple block transfer, the effect will de-reference each
element and read the contents of each individual matrix separately when populating its matching local
system-memory array behind the scenes. This allows for a more streamlined approach to setting matrix
data when this type of referencing is necessary.
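
A rough sketch of this usage, assuming the matrices live behind a hypothetical array of frame pointers (MAX_BONES, NumBones, pFrames, and the parameter name are all assumptions), might be:

// Gather pointers to matrices scattered throughout a hierarchy, then hand
// the pointer array to the effect in one call.
CONST D3DXMATRIX* ppBones[ MAX_BONES ];
for ( UINT i = 0; i < NumBones; ++i )
    ppBones[ i ] = &pFrames[ i ]->TransformationMatrix;
pEffect->SetMatrixPointerArray( "BoneMatrices", ppBones, NumBones );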

Setting Raw Parameter Data

The ID3DXEffect interface implements an additional method for setting parameter data in a more
efficient manner. Its efficiency derives from the use of a straight memcpy from the pointer you pass it
into the parameter memory area. No validation, type checking, or bounds checking are done. It also
performs no conversion from one type to another (e.g., converting a row-major matrix into the correct
format for storage in a column-major parameter).

We can use SetRawValue to set a series of contiguous effect parameters in a single block. For instance, we could set an array of twenty matrices either with twenty calls to SetMatrix or with a single SetRawValue.

HRESULT SetRawValue( D3DXHANDLE Handle,
                     void* pData,
                     DWORD OffsetInBytes,
                     DWORD Bytes );

Note: All values are expected to be either matrix4x4s or float4s. Additionally, all matrices are expected to
be in column-major order by default. Int or float values are cast to float4, so it is highly recommended
that you use SetRawValue with only float4 or matrix4x4 data.

D3DXHANDLE Handle
The handle to the parameter whose data we would like to set.

void * pData
A pointer to a buffer containing the data we would like to set.

DWORD OffsetInBytes
Number of bytes between the beginning of the effect data and the beginning of the effect constants you
are going to set. This is essentially the count, in bytes, of all the data declared before the effect
parameter you wish to start setting data from.

DWORD Bytes
The size of the buffer to be set, in bytes.

Please note that this function is not available via ID3DXBaseEffect, and is available only to the
ID3DXEffect interface.
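
As a sketch (the parameter name and the MatrixData buffer are assumed), the twenty-matrix example from above could be written:

// Upload 20 contiguous matrices in one memcpy-style transfer, starting at
// the beginning of the parameter's data. The matrices are expected to be
// column-major, per the note above.
pEffect->SetRawValue( "BoneMatrices", MatrixData, 0, 20 * sizeof( D3DXMATRIX ) );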

Checking for Parameter Usage

Ideally our application would not waste time setting parameter values that are not used. An effect file
might contain multiple techniques and, as such, some of the parameters declared in the effect might only
be used when more advanced techniques are being utilized. This is especially true when we start placing
shaders in our effect files. We can test parameters to see if they are being used by the technique we have
ultimately chosen with the following method:

BOOL IsParameterUsed( D3DXHANDLE hParameter,
D3DXHANDLE hTechnique );

D3DXHANDLE hParameter
This is the handle of a parameter whose usage within a given technique we would like to ascertain.

D3DXHANDLE hTechnique
This is the handle of the technique that we have chosen to inquire about.
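
A short sketch of how this might be used to avoid redundant uploads (the handles are assumed to have been cached earlier):

// Only set the matrix if the chosen technique actually references it.
if ( pEffect->IsParameterUsed( hWorldMatrix, hTechnique ) )
    pEffect->SetMatrix( hWorldMatrix, &WorldMatrix );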

18.4.3 Parameter Blocks Reference

Earlier in the lesson we mentioned that we can potentially speed up the input of static parameter data to
the effect using parameter blocks. We saw an example of a parameter block being recorded during a
runtime initialization procedure and later applied during rendering. The parameter block methods are not
part of ID3DXBaseEffect and are exposed only by ID3DXEffect. Let us have a look at the methods.

The BeginParameterBlock Method – ID3DXEffect

This method is the first step in recording a parameter block. It takes no parameters and instructs the effect to allocate memory for a parameter block and to start recording the values of any effect parameter that is modified until the EndParameterBlock method is called.

HRESULT BeginParameterBlock();

The EndParameterBlock Method – ID3DXEffect

This method tells the effect that parameter value recording is complete and that no other changes should
be captured in the active parameter block. This method should only be called if you have previously
made a call to the BeginParameterBlock method.

D3DXHANDLE EndParameterBlock();

It will return a handle to the newly recorded parameter block so that you can apply it at render time.

Example: Recording a Parameter Block

// Start recording a parameter block
pEffect->BeginParameterBlock();

// Set the parameter values we want captured in the block
pEffect->SetTexture( "Tex0"     , pTexture[0] );
pEffect->SetFloat  ( "fReflect" , 20.0f );
pEffect->SetMatrix ( "Scale"    , &ViewMatrix );
pEffect->SetInt    ( "Score"    , 10 );

// Finish recording and retrieve the parameter block handle
D3DXHANDLE hParameters = pEffect->EndParameterBlock();

In the above example, the four lines of code sandwiched between the BeginParameterBlock and
EndParameterBlock calls set values for four parameters which are recorded inside the parameter block.
When EndParameterBlock is called, the handle to the newly created parameter block is returned and
stored for later use during rendering.

Remember that parameter blocks should only be used to record static data -- a block records values, not
variables. In the above example, any changes made to the application's ViewMatrix variable after it had
been recorded in the parameter block would not be reflected inside the parameter block. The parameter
block will have captured the values stored inside the matrix at the time it was recorded and will not be
updated if ViewMatrix changes. This makes most matrices a poor choice for recording in a parameter
block. Textures are the exception to the rule. A pointer to the texture surface is recorded (reference
counted), so changes to the texture surface can occur dynamically.

The ApplyParameterBlock Method – ID3DXEffect

The ApplyParameterBlock method is our means for applying an effect’s previously recorded parameter
block at render time. We pass this method a single parameter -- the handle of the parameter block that
was returned from an earlier call to EndParameterBlock. This method will take all the values recorded in
the parameter block and apply them to the appropriate effect parameters with a single call.

HRESULT ApplyParameterBlock( D3DXHANDLE hParameterBlock );

Parameter blocks can provide a performance benefit by reducing the need for separate
ID3DXEffect::Set... calls when using effects that require lots of static data.

Example: Using parameter blocks

if ( SUCCEEDED( pDevice->BeginScene() ) )
{
    // Render the mesh objects
    for ( int i = 0; i < NUM_OBJS; ++i )
    {
        // Grab the effect of the current object
        ID3DXEffect *pEffect = Objects[i].m_pEffect;

        // Apply the recorded parameters
        pEffect->ApplyParameterBlock( Objects[i].m_hParameters );

        ...

        pEffect->Begin( &Passes, 0 );
        for ( UINT iPass = 0; iPass < Passes; iPass++ )
        {
            ...
            pMesh->RenderSubset();
            ...
        }
        pEffect->End();
    }

    ...
    pDevice->EndScene();
}

The DeleteParameterBlock Method – ID3DXEffect

When you no longer need a parameter block, you can free its memory with a call to
DeleteParameterBlock. This method takes a single parameter -- the handle of the parameter block you
wish to free.

HRESULT DeleteParameterBlock( D3DXHANDLE hParameterBlock );

18.4.4 Retrieving Effect Parameter Handles/Descriptors

In order for the application to be able to send values into parameters, the parameter names, handles, or
semantics must be known. As discussed, it is more efficient to set a parameter value using its handle, so
usually we will want to fetch the handles for an effect’s parameters when it is first loaded and compiled.

There are several methods in ID3DXBaseEffect that allow you to retrieve the handle of a parameter or annotation by searching based on the name of the parameter, its attached semantic, or the integer position of its declaration with respect to other parameters within the effect or structure. Let us have a look at these methods one at a time.

D3DXHANDLE GetParameter( D3DXHANDLE hParameter, UINT Index );

This method allows us to fetch the handle of a parameter by passing in the integer position of its
declaration either from the top of the effect file (if it is a top-level parameter we are after) or within a
parent structure.

D3DXHANDLE hParameter
If NULL is passed then the second parameter is interpreted as an integer index starting from the top of
the effect file. You will pass NULL for this parameter when it is a top-level parameter or array that you
are trying to retrieve. If however you wish to fetch the handle of a child member of a structure, you will
pass the handle to the top-level parameter (of which your chosen parameter is a child member).

UINT Index
If NULL is passed as the first parameter, this value represents the integer index of the effect parameter
declaration beginning at the top of the effect file. If the handle supplied to the first parameter is not
NULL, this value represents the integer index of a child member of that referenced effect parameter.

We saw this method used in one of our earlier examples where we built an FVF code based on a custom
vertex structure defined inside the effect. In that example, we first got the handle to a parameter declared
with a type matching our vertex structure and then used this method to fetch handles for each of the
individual child members of that structure.

D3DXHANDLE GetParameterByName( D3DXHANDLE hParameter, LPCSTR pName );

We have seen this method used in many of our earlier examples. It is most commonly employed to fetch
the handles of each parameter (or specific known parameters) when the effect is first loaded and
compiled. The application passes the name of the parameter it would like to fetch the handle of and, if it
is found, that handle is returned.

D3DXHANDLE hParameter
If NULL is passed, the function will search all top-level parameter declarations defined in the effect file.
However, you may also use this function to search for a child member of a structure by name. In this
case, we would pass the handle of the top-level parameter here and the name of one of its child members
in the second parameter.

LPCSTR pName
A string containing the name of the parameter you would like to retrieve a handle for. If the first
parameter to this method is NULL then this should be the name of a top-level parameter declared within
the effect. Otherwise, it should contain the name of a child member of a parent structure whose handle
was passed in the first parameter.
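
For example, typical load-time handle caching might be sketched as follows (the parameter names are assumptions):

// Cache handles once at load time to avoid name searches every frame.
D3DXHANDLE hWorldMatrix = pEffect->GetParameterByName( NULL, "WorldMatrix" );
D3DXHANDLE hDiffuseTex  = pEffect->GetParameterByName( NULL, "DiffuseTex" );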

D3DXHANDLE GetParameterBySemantic( D3DXHANDLE hParameter, LPCSTR pSemantic );

If your parameters have been labeled with semantics then your application can choose to acquire
parameter handles using semantics for the search instead of parameter names. The method is identical in
its use to the previous method except that the second parameter is a string containing the semantic you
are searching for instead of the parameter name. Just like the other methods, this function allows you to
pass the handle of a parent parameter so that the semantic search can be used to search for child
members of custom structures.

D3DXHANDLE hParameter
The handle to the parent parameter whose child members are to be searched by semantic. Passing NULL
means the search will be carried out over all top-level parameters declared in the effect.

LPCSTR pSemantic
A string containing the semantic you would like to search for.

D3DXHANDLE GetParameterElement( D3DXHANDLE hParameter, UINT ElementIndex );

As we move on to study shaders in the next chapter, you will see that there will often be times when we
wish to declare arrays of parameters. A classic example is an array of matrices that might be used for
skinning or an array of custom structures containing lighting information. Suffice to say, there may be
times when your application will need to retrieve handles for the individual parameters within such
arrays for filling in values. The GetParameterElement method provides for this by allowing you to
supply the handle to the parent array in its first parameter and an integer array index as its second. The
handle of the parameter stored at that location in the array will be returned.

D3DXHANDLE hParameter
The handle of the parent array from which you wish to fetch an element handle.

UINT ElementIndex
The integer index into the array of the parameter whose handle you wish to retrieve.
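
As a quick sketch (the array name and index are illustrative):

// Fetch the handle of the third element of a matrix array and set its value.
D3DXHANDLE hBones = pEffect->GetParameterByName( NULL, "BoneMatrices" );
D3DXHANDLE hBone  = pEffect->GetParameterElement( hBones, 2 );
pEffect->SetMatrix( hBone, &BoneTransform );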

HRESULT GetParameterDesc( D3DXHANDLE hParameter, D3DXPARAMETER_DESC* pDesc );

There will certainly be times when the application will wish to know a little more about a particular
parameter. This is especially true when custom structures are being used in the effect similar to the
custom vertex structure we saw earlier.

The GetParameterDesc method returns a structure of type D3DXPARAMETER_DESC containing descriptive effect parameter information. The parameters for this function are shown below.

D3DXHANDLE hParameter
The handle of the effect parameter for which you would like a descriptor structure returned.

D3DXPARAMETER_DESC *pDesc
This parameter should be supplied with the address of a variable of type D3DXPARAMETER_DESC
that you would like filled with the information pertaining to the referenced effect parameter. This
structure has quite a few members because it is required to be generic enough to return information
about arrays and structures, in addition to the standard parameter types. Let us have a look at the
D3DXPARAMETER_DESC structure now and discuss its members.

typedef struct D3DXPARAMETER_DESC
{
LPCSTR Name;
LPCSTR Semantic;
D3DXPARAMETER_CLASS Class;
D3DXPARAMETER_TYPE Type;
UINT Rows;
UINT Columns;
UINT Elements;
UINT Annotations;
UINT StructMembers;
DWORD Flags;
UINT Bytes;
} D3DXPARAMETER_DESC, *LPD3DXPARAMETER_DESC;

LPCSTR Name
The name of the parameter as it is declared in the effect file.

LPCSTR Semantic
If the parameter has a semantic defined in the effect file, this string will contain that semantic.

D3DXPARAMETER_CLASS Class
This member is used to identify the class of the parameter. There are several different types of
parameters, some of which we have already seen. For example, a parameter might be a vector,
a matrix, a string, or a texture. These are all different classes of parameter. The class identifier
is returned using a member of the D3DXPARAMETER_CLASS enumeration:

typedef enum D3DXPARAMETER_CLASS
{
D3DXPC_SCALAR,
D3DXPC_VECTOR,
D3DXPC_MATRIX_ROWS,
D3DXPC_MATRIX_COLUMNS,
D3DXPC_OBJECT,
D3DXPC_STRUCT,
D3DXPC_FORCE_DWORD = 0x7fffffff,
} D3DXPARAMETER_CLASS, *LPD3DXPARAMETER_CLASS;

These members are pretty self-explanatory so we won’t spend too long on them. To
give some quick examples, a parameter defined as type float4 would be described
as belonging to the D3DXPC_VECTOR class because it is a 4D vector. A standard
float parameter would belong to the D3DXPC_SCALAR class.

When we cover shaders in the next chapter we will discuss how matrices can be stored in row-major or column-major format. The default is column-major, which might surprise you given that Direct3D uses row-major matrices. However, column-major matrices are more efficient for performing vector/matrix math inside a shader running on the GPU. If we wish our matrices to be interpreted as row-major instead of column-major, we can specify flags to the compiler during effect compilation to make this so. Alternatively, if we wish for only some of our matrices to be interpreted as row or column major, we can declare the matrix parameter inside the effect file using the "row_major" and "column_major" modifiers. Thus, a matrix parameter will belong to either the D3DXPC_MATRIX_ROWS class or the D3DXPC_MATRIX_COLUMNS class depending on whether it was defined inside the effect as a row-major or column-major matrix, respectively.

A custom parameter type (a structure) will belong to the D3DXPC_STRUCT class, as we might expect.

What parameters would belong to the D3DXPC_OBJECT class? Well, there are
numerous other types of entities in an effect file that do not fit neatly into the
aforementioned classes -- textures, shaders, samplers, and strings all belong to this
class.

D3DXPARAMETER_TYPE Type
Each effect parameter not only belongs to a class, but also has a specific type within that class.
For example, we know that a parameter can be a vector belonging to the D3DXPC_VECTOR
class but that information alone doesn’t tell us the specific type of vector. For example, we
might declare a vector with three integer components (int3) or a vector with two floating point
components (float2). In such cases, the parameter class tells us that it is a vector, but the
parameter type tells us that it is an integer vector or a floating point vector, and so on. The
same is true for matrices, which might contain integer elements or floating point elements. A parameter might also be a texture, in which case it would belong to the D3DXPC_OBJECT class, but that alone does not tell us whether it is a 1D, 2D, 3D, or cube texture.

This member will contain an entry from the D3DXPARAMETER_TYPE enumeration shown
below. There are quite a few members in this enumeration that we won’t understand until we
start working with shaders in the next chapter, but hopefully you get the idea -- collectively,
the parameter class and type tell us exactly what type of parameter we are dealing with.

typedef enum D3DXPARAMETER_TYPE
{
    D3DXPT_VOID,
    D3DXPT_BOOL,
    D3DXPT_INT,
    D3DXPT_FLOAT,
    D3DXPT_STRING,
    D3DXPT_TEXTURE,
    D3DXPT_TEXTURE1D,
    D3DXPT_TEXTURE2D,
    D3DXPT_TEXTURE3D,
    D3DXPT_TEXTURECUBE,
    D3DXPT_SAMPLER,
    D3DXPT_SAMPLER1D,
    D3DXPT_SAMPLER2D,
    D3DXPT_SAMPLER3D,
    D3DXPT_SAMPLERCUBE,
    D3DXPT_PIXELSHADER,
    D3DXPT_VERTEXSHADER,
    D3DXPT_PIXELFRAGMENT,
    D3DXPT_VERTEXFRAGMENT,
    D3DXPT_FORCE_DWORD = 0x7fffffff,
} D3DXPARAMETER_TYPE, *LPD3DXPARAMETER_TYPE;

The remaining members of the D3DXPARAMETER_DESC structure are not applicable to all
effect parameter types. Let us continue our explanation of the remaining members.

UINT Rows
UINT Columns
These values are only applicable to matrices and vectors (or arrays of them) and will contain
the number of rows and columns for that type. For example, a parameter declared as type
matrix, which is really just a typedef for a float4x4 parameter type, would return values of 4 for
both of these members.

UINT Elements
This member is applicable only to array parameters. It will contain the total number of
elements in the array.

UINT Annotations
As discussed, an effect parameter may have one or more annotations attached to it to aid the
application in determining how best to use or initialize that parameter. This member will
describe the number of annotations defined for the effect parameter in question. This will be
zero if the parameter has no annotations defined.

UINT StructMembers
If the effect parameter about which we are inquiring is a structure, this member will contain the
number of child members it contains.

DWORD Flags
This member will contain a combination of zero or more of the following three
D3DX_PARAMETER flags that are used to provide additional information:

D3DX_PARAMETER_ANNOTATION
If this flag is set then it means the parameter is an annotation. This is very different from
the meaning of the Annotations member described above which will only be set when a
parameter is not an annotation, but has annotations defined for it. This flag tells us that
what we are inquiring about is not a parameter in the strictest sense, but is in fact an
annotation.

D3DX_PARAMETER_LITERAL
We discussed a little earlier how the ID3DXEffectCompiler interface gives us the ability
to optimize our effect by marking certain parameters that are not expected to change after
compile time as literals. Recall that when an effect parameter has been marked as a literal
its value can never change, either inside the effect or via actions of the external
application. This flag indicates that the parameter in question has been set as a literal
value. It is worth noting that shared parameters cannot be marked as literal.

D3DX_PARAMETER_SHARED
If this flag is returned then it means the parameter has been declared using the shared
modifier and can be shared amongst multiple effects and shaders using the effect pool
system.

UINT Bytes
Since all of the different parameter types require different quantities of memory (and constant
register storage), this member returns the total size of the parameter, in bytes. This can often be
useful when used in conjunction with ID3DXBaseEffect::SetValue, which requires explicit
details about the amount of data being supplied when assigning a value to a parameter.
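
As a brief illustration, the following sketch queries a parameter's description and uses the returned
Bytes member with SetValue. The parameter name "WorldMatrix" is purely illustrative, and pEffect
is assumed to be a valid ID3DXEffect:

D3DXHANDLE hParam = pEffect->GetParameterByName( NULL, "WorldMatrix" );
if ( hParam )
{
    D3DXPARAMETER_DESC Desc;
    if ( SUCCEEDED( pEffect->GetParameterDesc( hParam, &Desc ) ) )
    {
        // A 'matrix' (float4x4) parameter would typically report:
        //   Desc.Class == D3DXPC_MATRIX_ROWS (or _COLUMNS)
        //   Desc.Type  == D3DXPT_FLOAT
        //   Desc.Rows  == 4, Desc.Columns == 4
        //   Desc.Bytes == 64 (16 floats * 4 bytes)
        if ( Desc.Class == D3DXPC_MATRIX_ROWS && Desc.Type == D3DXPT_FLOAT )
        {
            D3DXMATRIX mtxWorld;
            D3DXMatrixIdentity( &mtxWorld );
            pEffect->SetValue( hParam, &mtxWorld, Desc.Bytes );
        }
    }
}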

18.4.5 Rendering with Effects – ID3DXEffect

Prior to rendering with an effect we must first make sure that its parameters have been setup correctly.
The previous sections detailed all of the various methods we have at our disposal to do this. Once the
parameter values have been set, it is time to begin the process of invoking the effect for the purposes of
rendering our attribute groups. We have seen in earlier examples that we must sandwich our geometry
rendering between the ID3DXEffect::Begin and ID3DXEffect::End calls. In fact, all rendering is
further encapsulated between ID3DXEffect::BeginPass and ID3DXEffect::EndPass calls to
accommodate both single and multi-pass approaches. We will want to talk about these functions just a
bit more before moving on. Methods also exist that allow us to handle effects correctly when a device is
lost/reset, so we will need to be aware of those. And finally, there is even a method that allows us to
notify the effect when we change parameter values between the BeginPass and EndPass calls so that it
can re-upload data to the appropriate registers. In this section we will talk about these various methods.

The Begin Method – ID3DXEffect

The Begin method informs the effect that we are about to apply it, and to request that the effect perform
a couple of important tasks as a result. The first of these is to provide us with the number of passes
required for the technique we have chosen (i.e., the active technique). Obviously for this reason it is very
important that you set the technique prior to calling this method or the number of passes that it returns
will potentially be incorrect.

The second task that the Begin method is responsible for is to (optionally) save the current state of the
device (within a state block) so that it can be restored when the ID3DXEffect::End method is later
called. This allows our effect to change as many states as it requires within each of its passes safe in the
knowledge that the device will be left in a known state upon completion (i.e. the execution of the
technique will not interfere with the application's current understanding of the state of the device).

The Begin method is shown below with a description of its parameters.

HRESULT Begin( UINT *pPasses, DWORD Flags )

UINT* pPasses

As shown in earlier code examples, this is where we pass the address of an integer variable into which
we would like the Begin method to place the number of passes that are required to render the current
technique. This will allow us to correctly iterate through the technique and render our geometry n times,
as defined by the effect author.

DWORD Flags
We can specify a combination of zero or more D3DXFX flags that instruct the Begin method how the
device state should be saved. Under most circumstances we would simply pass 0, in which case all
device states that the effect will manipulate will be automatically saved (and restored when the matching
End call is encountered). Below is a list of some of the flags we can pass here; some will have
little context until we discuss shaders.

D3DXFX_DONOTSAVESTATE
Instructs the effect not to save and restore the device state during Begin and End calls. This is
useful if you wish to employ your own state management system or if you wish the states set
by an effect to remain set for other effects. This is obviously a fairly dangerous setting to use
because the device will be left in the state in which it was configured during the last pass of the
technique. If rendering code situated elsewhere in your application (perhaps code not using
effects) expects certain device states to remain unchanged, the results could be unexpected
depending on what your technique passes decided to do along the way.

D3DXFX_DONOTSAVESAMPLERSTATE
This informs the effect that we wish it to save all device states with the exception of our
sampler units. This is useful if each effect file that you use automatically sets up its samplers,
which is often the case. To save and restore all of the sampler states in this instance would be
wasteful since each effect is ultimately going to configure all of the sampler settings anyway.

D3DXFX_DONOTSAVESHADERSTATE
This flag instructs the effect not to save the state of any shaders assigned to the device or any
values currently stored in the shader constant registers (potentially the biggest memory chunk
and thus most expensive to save/restore with respect to bandwidth costs).

If no flags are specified (0), all state is saved and restored automatically (the slowest but safest option).
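
As a brief illustration, an application whose effect files always configure their own samplers and
shaders in full might skip saving that state by combining flags as follows (a sketch; pEffect is
assumed to be a valid ID3DXEffect):

UINT NumPasses;

// Save and restore everything except sampler and shader state, since
// each of our effects configures those in full anyway.
pEffect->Begin( &NumPasses, D3DXFX_DONOTSAVESAMPLERSTATE |
                            D3DXFX_DONOTSAVESHADERSTATE );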

The BeginPass Method – ID3DXEffect

Once inside a Begin/End code block, we should now have access to the number of passes required to
render the given technique. Our next step is to activate each pass of the effect in turn, rendering the
polygons assigned this effect for each pass. We activate a pass by calling the BeginPass method, which
informs the effect that we are about to start rendering our primitives for a given pass and as such, it is
time for the states for the specified pass to be committed to the device. It is the BeginPass method that
actually configures the device with the states specified in the matching pass block inside the effect file.

As with fixed-function states, BeginPass also performs the task of uploading parameter data used by the
current pass into constant registers on the graphics hardware. This is usually where the bulk of any
performance issues will occur due to the cost of transferring potentially significant amounts of data from
system memory up to the GPU.

As discussed previously, every parameter in the effect has a system memory copy maintained and
managed inside the ID3DXEffect or its associated effect pool. When we set the value of a parameter, we
are not setting the value in the constant register(s) but are instead setting the value of its system memory
copy. Thus, before calling the BeginPass method we will have provided the values for all parameters
used by the effect (or at least for the current pass). After such a procedure, our effect will contain the
correct values for all of its self-managed system memory parameters and, when BeginPass is called,
will automatically upload this data to the hardware for use.

Note: We will talk more about registers when we discuss shaders. In the case of fixed-function effects,
parameters do not need to be uploaded into constant registers -- they are simply passed through to the
relevant SetRenderState, SetTextureStageState, etc. device calls behind the scenes. However, when our
effects have shader programs, those programs will run on the GPU and will require access to our
parameter data. When this is the case, a copy of that parameter data must be available on the hardware
(in the constant registers) so that our shaders can access it. Fortunately, the same effect system can be
used for both fixed-function and shader driven effects -- the ID3DXEffect is intelligent enough to know
when data needs to be uploaded and/or when it can just be routed straight to the device.

HRESULT BeginPass( UINT Pass );

As you can see, the ID3DXEffect::BeginPass method takes a single parameter describing the integer
index of the pass we wish to execute. Once we have called BeginPass, the device is now configured
correctly for us to start rendering our primitives for the given pass. After we have rendered our
primitives, we must signify that the current pass is over by calling the ID3DXEffect::EndPass method.

The general strategy for rendering with effects is shown below. Notice how each Begin call must have a
matching End call and that each BeginPass call must have a matching EndPass call as well.

Example: Rendering a Subset with Effects


// Set parameters here
...

UINT NumPasses;

// Fetch #passes and save device state
pEffect->Begin( &NumPasses, 0 );

// Loop for each pass
for ( UINT i = 0; i < NumPasses; i++ )
{
    // Configure device with per pass states, parameters
    pEffect->BeginPass( i );

    // Render the attribute group
    DrawPrimitive( /* All primitives assigned this effect */ );

    // We have finished with this pass
    pEffect->EndPass();
}

// Restore device state
pEffect->End();

The CommitChanges Method – ID3DXEffect

As discussed, parameter data is uploaded to the graphics hardware and/or the device state is changed
when the BeginPass method is encountered. But what happens if we need to change the value of a
parameter after the BeginPass call? For example, imagine a situation where we were using a shader-
driven technique to render a subset of a mesh hierarchy (e.g., CActor) and, inside the BeginPass and
EndPass block, we are traversing the hierarchy and changing only certain states, like world matrices, as
we go along. We've already called BeginPass to trigger the upload of the bulk of our other required
shader parameters, and we certainly do not want to waste bandwidth continuing to resend that data, but
the matrix registers do need refreshing. Of course, as we update our matrix parameters via Set calls, all
we are doing is changing the values of the system memory variables stored inside ID3DXEffect or its
associated effect pool, not the values in the constant registers that the shaders need. The constant
registers will not be updated until the next time BeginPass is called and that is no good to us under the
current conditions -- we need the shader to have access to that data immediately when we draw the
associated polygons. To remedy this, the ID3DXEffect interface has a method that can force a data
upload at our discretion:

HRESULT CommitChanges()

This method should be called whenever you have changed the value of one or more effect parameters
between the BeginPass/EndPass calls (via Set functions). It forces the effect to examine its system
memory copies and re-upload any parameters that have been flagged as dirty. It is vital that you call this
method before issuing any DrawPrimitive calls under these circumstances for the desired effect.
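
To illustrate, the following sketch mirrors the hierarchy scenario described above. The parameter
name "WorldMatrix" and the frame arrays (FrameCount, pFrameMatrices, pFrameMeshes) are
hypothetical stand-ins for your own traversal code:

UINT NumPasses;
pEffect->Begin( &NumPasses, 0 );

for ( UINT i = 0; i < NumPasses; ++i )
{
    pEffect->BeginPass( i );

    // Traverse the hierarchy; only the world matrix changes per frame.
    for ( ULONG n = 0; n < FrameCount; ++n )
    {
        pEffect->SetMatrix( "WorldMatrix", &pFrameMatrices[ n ] );

        // Re-upload any parameters dirtied since BeginPass.
        pEffect->CommitChanges();

        pFrameMeshes[ n ]->DrawSubset( 0 );
    }

    pEffect->EndPass();
}

pEffect->End();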

The End and EndPass Methods – ID3DXEffect

These methods require no major explanation other than that they must be called to terminate a given
effect or pass. The EndPass method informs the effect that the current pass is over and must match up
with a prior call to BeginPass. The End method similarly must correspond with a prior call to the Begin
method and signifies that we have finished rendering with the effect, triggering the required state
restoration (assuming the Begin call captured state). Please be aware that the EndPass call does not do
state restoration, so any states you set during a multi-pass technique situation will carry over between
those passes if you do not manually do something about it in the passes themselves. This is generally a
good thing as it reduces redundantly setting states that are the same amongst the passes, which is very
common.

HRESULT EndPass();
HRESULT End();

18.4.6 Miscellaneous Methods – ID3DXEffect

The ID3DXEffect interface has a few other methods that do not fit into any of the categories we have
discussed thus far. Let us have a look at them one at a time to complete our coverage.

The GetDevice Method – ID3DXEffect

Every ID3DXEffect object is bound to a particular device and thus is considered a resource of that
device. We must pass a device pointer into the effect creation functions as it is against a given device
that the validation of techniques must take place. It is ultimately the device that either will or will not
support a given effect technique so this should make sense to you. This method allows you to retrieve a
pointer to the device which currently owns/manages the ID3DXEffect.

Note: One of the nice things about the ID3DXEffectCompiler interface is that it allows us to compile an
effect file into a binary/compiled effect buffer in a way that is not dependent on device limitations. This
means we can use it to compile techniques even if a device is not currently available to support them.
This is ideal for compiling effects at development time when the end-user specifications are not known.
At runtime, that compiled buffer (output from the compiler) will be loaded and ultimately have to be
turned into an ID3DXEffect, at which point it will become device dependent.

HRESULT GetDevice( LPDIRECT3DDEVICE9* ppDevice )

As its single parameter, it accepts the address of a variable of type IDirect3DDevice9* (or
LPDIRECT3DDEVICE9) which, on function return, will point to the effect’s associated device.
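
Remember that, as with the other D3DX GetDevice style methods we have encountered, the returned
device has had its reference count incremented, so we are responsible for releasing it when done:

LPDIRECT3DDEVICE9 pDevice = NULL;
if ( SUCCEEDED( pEffect->GetDevice( &pDevice ) ) )
{
    // ... use the device here ...
    pDevice->Release(); // GetDevice incremented the reference count
}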

The GetPool Method – ID3DXEffect

We have seen in previous examples that when we create an effect, we can also specify an optional
ID3DXEffectPool interface. This effect pool will have been previously created and represents a memory
buffer in which the values of any shared parameters will be stored. Any effects that have been created
with the same effect pool have the ability to share parameters between them.

This method allows you to retrieve the ID3DXEffectPool object currently being used by this effect to
store the values for its shared parameters. If it returns NULL then it means this effect was created
without a shared memory pool and as such has no parameters shared with other effects.

HRESULT GetPool( LPD3DXEFFECTPOOL* ppPool )

In order to share parameters between effects, the effects must have been created with the same effect
pool. Furthermore, each effect must have those parameters named identically and declared using the
shared keyword modifier. If the parameters you wish to share have semantics, they must also match.
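
As a quick sketch of the pool mechanism in action (the file names and the "ViewMatrix" parameter
are purely illustrative, and both files are assumed to declare that parameter with the shared modifier):

LPD3DXEFFECTPOOL pPool = NULL;
LPD3DXEFFECT pFirstEffect = NULL, pSecondEffect = NULL;

// Create the pool and hand it to both effects at creation time.
D3DXCreateEffectPool( &pPool );
D3DXCreateEffectFromFile( pDevice, "first.fx", NULL, NULL, 0,
                          pPool, &pFirstEffect, NULL );
D3DXCreateEffectFromFile( pDevice, "second.fx", NULL, NULL, 0,
                          pPool, &pSecondEffect, NULL );

// Setting the shared parameter through either effect updates both.
pFirstEffect->SetMatrix( "ViewMatrix", &mtxView );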

The OnLostDevice Method - ID3DXEffect

Behind the scenes, an ID3DXEffect uses state blocks to efficiently deploy your scripted states to the
device. State blocks are a bit like parameter blocks, only they contain the recordings of device state
changes instead of parameter value changes. State blocks do not survive the device entering a lost state
so must be released when the device becomes lost and rebuilt again when the device is reset. In fact, a
device will fail to reset if we have a single outstanding state block that has not been released.

This method takes no parameters but should be called to inform the effect when the device has been put
into a lost state. Behind the scenes, it will release all of its state blocks so that the device can be reset at
the earliest opportunity.

HRESULT OnLostDevice()

It is vitally important that you call this method prior to trying to reset the device. The device will fail to
reset until this method has been called as DirectX will detect surviving invalid state blocks which have
not yet been released.

The OnResetDevice Method – ID3DXEffect

When a device is lost we must call the OnLostDevice method so that all state blocks currently being
used internally by the ID3DXEffect are destroyed. However, this is only half the story. When the device
is eventually reset and the application can resume rendering, we must then let the ID3DXEffect know
that the device is once again in a valid state and, as such, it can rebuild its state blocks.

HRESULT OnResetDevice()

This function must be called when the device is reset, prior to trying to use the effect.
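
Putting the two methods together, a typical device loss recovery path might look something like the
following sketch (d3dpp is assumed to hold your original presentation parameters):

// Called each frame before rendering...
HRESULT hr = pDevice->TestCooperativeLevel();
if ( hr == D3DERR_DEVICENOTRESET )
{
    pEffect->OnLostDevice();          // release internal state blocks
    if ( SUCCEEDED( pDevice->Reset( &d3dpp ) ) )
        pEffect->OnResetDevice();     // rebuild state blocks
}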

The SetStateManager Method – ID3DXEffect

The D3DX effect framework does a reasonably good job of managing your states in the general case.
Redundant states will typically be filtered by the effect's internal state manager and/or the lower level
device driver and this all makes for pretty efficient management of the device. However, Microsoft
recognized that there may be times when you wish to have more control over state management, as it is
so critical to performance, and thus provided an abstract state manager base class from which you can
implement your own approach. This state manager can be plugged into the D3DX effect framework and
will be used -- instead of its own state manager -- for any states that need to be set on the device.

This method takes as its single parameter a pointer to an ID3DXEffectStateManager interface. This is an
abstract base class from which your own classes must be derived. The object we supply must implement
a series of callback functions that will be called by the effect framework whenever a state needs to be
set.

We have seen this type of system employed many times throughout this series.
ID3DXAllocateHierarchy is a great example, where we had to derive our own class from that base
interface and pass it into the D3DXLoadMeshHierarchyFromX function, and earlier in this lesson we
examined the ID3DXInclude interface that allows us to essentially override the default effect compiler's
file handler. The former interface includes methods that are called by the D3DX mesh loading process
whenever frame or mesh memory has to be allocated or destroyed, and the latter's methods are called
whenever effect files need to be opened and closed. In both cases, the application is thus totally in
control of how the respective resources are populated with data.

In this particular case, the ID3DXEffectStateManager interface contains a list of callbacks that will be
called whenever a state needs to be set on the device. For example, it requires that you implement a
SetTexture method so that when the effect framework is executing one of your effects and it finds a
texture assignment instruction in your script, instead of binding that texture to the stage itself, it will call
your SetTexture method override and let you do it. This allows you to implement your own state
management that, for example, might apply redundancy filtering hierarchically based on application-
side requirements or data structures known only to you.

If you want an effect to use your proprietary state manager instead of the default, you have to call the
SetStateManager method and pass the address of your ID3DXEffectStateManager derived object.

HRESULT SetStateManager( LPD3DXEFFECTSTATEMANAGER pManager )

We will take a quick look at the ID3DXEffectStateManager interface in a moment, but we will not dwell
too long on it. Its design is fairly straightforward and you should have no trouble with it if you decide to
go down that road.

The GetStateManager Method – ID3DXEffect

This method allows you to retrieve a pointer to the interface of any state manager an effect may be using
(set using the prior function). To its single parameter you need to pass the address of a variable of type
ID3DXEffectStateManager* into which you would like this method to place the state manager object
that was previously assigned to the effect. If you have not assigned a custom state manager then NULL
will be returned, indicating that the effect is using the default D3DX effect state manager.

HRESULT GetStateManager( LPD3DXEFFECTSTATEMANAGER* ppManager )

We have now looked at nearly all of the methods of the ID3DXEffect interface and its base interface.
Remember that, where indicated in this reference, many of the methods are exposed by
ID3DXBaseEffect, from which both ID3DXEffect and ID3DXEffectCompiler are derived. Before
finishing off our effect file framework reference, we will take a brief look at the
ID3DXEffectStateManager interface in case you decide to provide your own custom class.

18.5 Custom State Managers


Although the D3DX effect system does a pretty good job of managing device state, plugging in your
own system and overriding the state management provided by default may be desirable. In practice, this
is going to be done via an abstract interface from which you must derive. The interface exposes 18
methods, all of which must be implemented by your derived class.

As an example, the ID3DXEffectStateManager interface has a SetRenderState method which you must
implement. Whenever the effect script specifies that a render state has to be set, the effect system will no
longer set that state automatically itself, but will instead call the SetRenderState method of your supplied
state manager object. Of course, the effect system will provide the callback with all the information it
needs to set the render state, so this does allow you to perform any custom processing you might need.
You might for example, record within your state manager exactly which states are set. Should a call
come into the state manager to set a state that is already set, your state manager can reject the call and
thus filter out these redundant device calls.
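
To make this idea concrete, below is a fragmentary sketch of such a class. The class name
CMyStateManager is our own invention; only the IUnknown plumbing and a redundancy-filtering
SetRenderState are shown, and the remaining ID3DXEffectStateManager callbacks (discussed in the
next section) must also be implemented, typically as straight forwards to the device, before the class
will compile:

#include <d3dx9.h>
#include <map>

class CMyStateManager : public ID3DXEffectStateManager
{
    LPDIRECT3DDEVICE9       m_pDevice;       // device we forward calls to
    std::map<DWORD,DWORD>   m_RenderStates;  // last value set per render state
    ULONG                   m_RefCount;

public:
    CMyStateManager( LPDIRECT3DDEVICE9 pDevice )
      : m_pDevice( pDevice ), m_RefCount( 1 ) {}

    // IUnknown housekeeping
    STDMETHOD(QueryInterface)( REFIID iid, LPVOID *ppv )
    {
        if ( iid == IID_IUnknown ) { *ppv = this; AddRef(); return S_OK; }
        *ppv = NULL; return E_NOINTERFACE;
    }
    STDMETHOD_(ULONG, AddRef)()  { return ++m_RefCount; }
    STDMETHOD_(ULONG, Release)() { ULONG c = --m_RefCount; if ( !c ) delete this; return c; }

    // Reject any request that would set a render state to its current value.
    STDMETHOD(SetRenderState)( D3DRENDERSTATETYPE State, DWORD Value )
    {
        std::map<DWORD,DWORD>::iterator it = m_RenderStates.find( State );
        if ( it != m_RenderStates.end() && it->second == Value )
            return S_OK; // redundant -- filtered out
        m_RenderStates[ State ] = Value;
        return m_pDevice->SetRenderState( State, Value );
    }

    // ... the remaining callbacks (SetTexture, SetTransform, etc.) follow
    // the same forwarding pattern and are omitted here for brevity ...
};

Once instantiated, such an object would be handed to each effect via the SetStateManager method
described earlier.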

Creating your own state manager is actually going to be fairly simple since it uses a design we have seen
time and again throughout the series. Your derived object is ultimately passed to an effect, using the
ID3DXEffect::SetStateManager method, and from that point on, the methods provided by your state
manager object serve as a collection of callbacks which the effect will invoke whenever any device state
needs to be changed.

There is one very important point to note -- while most of your typical state management will get
triggered during a BeginPass invocation, you will also be responsible for saving/restoring device state
on Begin/End calls because all D3DX provided management behavior is completely bypassed when you
drop in your own state management class.

18.5.1 The ID3DXEffectStateManager Interface

Below we discuss the methods of the ID3DXEffectStateManager interface that must all be implemented
in your derived class. Most of these methods work exactly as you would expect and mirror the similarly
named IDirect3DDevice9 methods.

Note: When you implement these methods you should return S_OK to indicate a successful result. If the
callback fails when setting device state, the effect will either fail during BeginPass, or a dynamic effect
state change will fail to be applied.

There are essentially two situations when these functions will be called by the effect framework. The
first and most common is when BeginPass is executed in your effect. As we know, this method is
responsible for taking all states specified in the pass block and applying them to the device. When a
custom state manager has been supplied, this method will parse the scripted state assignments but, instead
of setting them itself, will call the appropriate callbacks to allow you to set them.
dynamic state change occurs while a given pass is still active.

Let us briefly look at the methods one at a time.

HRESULT LightEnable( DWORD Index, BOOL Enable )

This is called by the effect framework whenever a light is enabled inside the effect file. For example, the
following state assignment would cause this method to be called during BeginPass:

LightEnable[0] = TRUE;

The function takes two parameters. The first is the index of the light slot on the device that we would
like to enable/disable and the second is a boolean informing us of whether the light should be enabled
(true) or disabled (false).

Inside your callback function you can call the IDirect3DDevice9::LightEnable method to actually
carry out the request of enabling/disabling the light, in addition to performing any additional tasks you
may require.
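
For instance, continuing our hypothetical CMyStateManager sketch from earlier, a minimal
implementation might simply forward the request:

STDMETHOD(LightEnable)( DWORD Index, BOOL Enable )
{
    // Forward straight to the device; custom filtering could be added here.
    return m_pDevice->LightEnable( Index, Enable );
}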

Note: Generally speaking, all the methods of this interface will essentially be wrappers around calls to
IDirect3DDevice9 state setting functions with, perhaps, some additional state filtering logic. That is pretty
much exactly what the default effect state manager does too.

HRESULT SetFVF( DWORD FVF )

This method is called whenever the FVF of the device is changed. You are supplied with the single
DWORD value containing the new FVF code. Your implementation should ultimately call the
IDirect3DDevice9::SetFVF method to make those changes as requested.

HRESULT SetLight( DWORD Index, CONST D3DLIGHT9* pLight )

This method is called whenever an instruction is encountered in the effect that assigns light settings to a
light slot. It is quite interesting that this method is passed a single D3DLIGHT9 structure when we
discovered earlier that inside the effect script, we must set the light properties with individual
instructions.

LightType[0] = POINT;
LightPosition[0] = float3<10.0f, 1.0f, 23.0f>;
LightAmbient[0] = float4<0.7f, 0.0f, 0.0f, 1.0f>;

This demonstrates that, behind the scenes, these individual state assignments and their values are
collected into a D3DLIGHT9 structure. We will be supplied with this single structure containing all the
lighting information for the light that is to be configured. Ultimately, this method must call the
IDirect3DDevice9::SetLight method to configure the device with this data.

HRESULT SetMaterial( CONST D3DMATERIAL9* pMaterial )

Just like the previous method, this callback is invoked when the effect script sets a material on the
device. Although we specify the various material properties using separate assignments inside the effect
script, they are collected into a D3DMATERIAL9 structure behind the scenes. We are sent this structure
so that our function can ultimately give it to the device (calling IDirect3DDevice9::SetMaterial).

HRESULT SetNPatchMode( FLOAT nSegments )

This method is called whenever the patch mode of the device has been changed. Patches are essentially
ways to represent curved surfaces in hardware. Although patch functionality has been available in
DirectX for many years now, it was never really adopted by mainstream developers and as such is
considered mostly a legacy system in DX9. We will not discuss patches in this course but feel free to do
your own research if the subject interests you.

HRESULT SetPixelShader( LPDIRECT3DPIXELSHADER9 pShader )
HRESULT SetVertexShader( LPDIRECT3DVERTEXSHADER9 pShader )

We can define vertex shader and pixel shader functions inside our effect to override the functionality of
the fixed-function pipeline. We override the fixed-function vertex and pixel processing by assigning our
own shader functions to the VertexShader and PixelShader effect states:

VertexShader = compile vs_3_0 MyVertexShaderFunction();
PixelShader = compile ps_3_0 MyPixelShaderFunction();

Setting either of the above states to NULL will invoke fixed-function processing for that section of the
pipeline. For example, if we specify NULL as the pixel shader (the default), the fixed-function color
blender (i.e., the texture stages) will be used instead.

We currently know very little about placing shaders in our effect files and we will dedicate the next
chapter (and, in a sense, all that follow) to shader programming. However, while it is very convenient to
store shaders in our effects, this does not mean that we must use effect files in order to use shaders.
DirectX ships with tools that allow you to compile standalone shader code as needed. That shader can
then be loaded into an IDirect3DVertexShader9 or IDirect3DPixelShader9 object at runtime. These
interfaces have very few methods and are simply a way to represent shaders so that they can be bound to
the device when needed, much like we do textures. For example, if we are not using an effect file, we
will be responsible for making sure that the device uses the correct vertex and pixel shaders for each
subset of polygons we wish to render, just like any other state. The Direct3D device has the following
two methods that allow you to set a vertex or pixel shader on the device. Passing NULL will essentially
cancel out any shader processing and any rendering thereafter will be carried out using the fixed-
function pipeline.

IDirect3DDevice9 Shader Setting Methods:

HRESULT SetVertexShader( IDirect3DVertexShader9* pShader );
HRESULT SetPixelShader( IDirect3DPixelShader9* pShader );

These methods look familiar because they are identical to the ID3DXEffectStateManager callbacks we
are currently discussing. This is because ultimately the effect system is just wrapping calls to the device.
Thus, even when an effect file contains embedded shader code, behind the scenes, when the effect is
compiled, the shader code is compiled into an IDirect3DVertexShader9 or IDirect3DPixelShader9
object. When the effect system encounters assignments such as the following in our effect file…

VertexShader = compile vs_3_0 MyFunction();

…it will compile MyFunction into an IDirect3DVertexShader9 object and during a call to BeginPass
will simply pass the shader object to the device using the following device method:

pDevice->SetVertexShader( pMyFunctionShader );

So as you can see, regardless of whether we are using shaders standalone or are embedding them inside
our effect files for ease of use, under the hood our shaders are compiled and treated as standalone
objects. Thus, when a vertex or pixel shader is assigned during the pass of a given effect, the shader in
question has already been compiled into an IDirect3DVertexShader9 or IDirect3DPixelShader9 object.
This is the interface that will be sent to our callback which would then call the device’s SetVertexShader
or SetPixelShader methods to bind the passed shader to the device, ready for rendering.

Note: This may seem a little daunting right now as we have been forced to discuss shaders a few times
throughout our coverage of the effect framework. This is unavoidable since the two are so tightly
interconnected in practice (or at least they can be). However, the shader topics we have touched on in
this lesson will make much more sense after the following chapter which introduces shader programming.

HRESULT SetRenderState( D3DRENDERSTATETYPE State, DWORD Value )

Whenever a render state assignment is made within the effect script, a call to this callback method will
take place (remember that they all happen during BeginPass). Behind the scenes, the effect system will
usually just call the IDirect3DDevice9::SetRenderState method on our behalf, but when a custom
state manager is supplied, it will send the request to this callback instead.

As the first parameter you are supplied a member of the familiar D3DRENDERSTATETYPE
enumeration which we have been using to set render states on the device since the first module of this
series. The second parameter contains the value of the state that should be set. Your implementation of
this method should ultimately call the IDirect3DDevice9::SetRenderState method.

HRESULT SetSamplerState( DWORD Sampler,
                         D3DSAMPLERSTATETYPE Type,
                         DWORD Value )

As its name suggests, this method is called whenever a sampler state is changed. With no custom state
manager supplied, the effect framework would simply call the IDirect3DDevice9::SetSamplerState
method for us. However, when a state manager is supplied, this method will be called, placing the
ultimate responsibility of setting sampler state in our hands.

The first parameter contains the sampler unit for which to set the state. When dealing with fixed-
function effects this is essentially the same as the texture stage index. As the second parameter we are
supplied a member of the familiar D3DSAMPLERSTATETYPE enumeration, which we have been using
for quite some time now. Finally, the third parameter contains the value to which the sampler state
should be set.

HRESULT SetTextureStageState( DWORD Stage,
                              D3DTEXTURESTAGESTATETYPE Type,
                              DWORD Value )

This method is called whenever the effect issues texture stage state assignments. Your implementation
of this method will be responsible for calling the IDirect3DDevice9::SetTextureStageState
method. The first parameter contains the stage to configure and the second parameter will contain a
member of the now familiar D3DTEXTURESTAGESTATETYPE enumeration.

HRESULT SetTexture( DWORD Stage, LPDIRECT3DBASETEXTURE9 pTexture )

This method is called during BeginPass whenever a texture assignment is encountered. With no custom
state manager supplied, the effect framework would simply call the IDirect3DDevice9::SetTexture
method for us behind the scenes. However, when a state manager is supplied, it calls this method instead
which gives us the ability to intervene in the process. Your method will ultimately, at some point, have
to call the IDirect3DDevice9::SetTexture method to bind the texture to the device.

The first parameter is the stage to which the texture should be bound and the second parameter contains
the texture to bind.

HRESULT SetTransform( D3DTRANSFORMSTATETYPE State,
                      CONST D3DMATRIX* pMatrix )

This method is called by the effect framework whenever a device matrix needs to be set. The function is
just like the IDirect3DDevice9::SetTransform function which would usually be called for us
automatically by the effect framework (when no state manager is supplied). When a state manager is
supplied, this method is called instead, placing the responsibility of setting the device matrices in your
hands.

The first parameter contains a member of the familiar D3DTRANSFORMSTATETYPE enumeration
which we have been using to set device matrices since the first module in the series. The second
parameter contains a pointer to the matrix that is to be bound to the device’s world, view, or projection
matrices (or to an entry in the device matrix palette).

Shader Constant Callbacks

In order to finish off our coverage of this interface we once again find ourselves prematurely drawn into
a shader discussion. The following six methods must be implemented to facilitate the uploading of
parameter data into the constant registers of the graphics hardware. For a system using only fixed-
function effects, we can simply implement no-op methods for these.

We have mentioned a few times in this lesson that when using shaders there are a number of constant
registers on the graphics hardware that can be used for the storage of parameter data from the
application. In short, a constant register is just a block of memory (4 floats in size) on the graphics
hardware that can be efficiently accessed by shader programs. The number of constant registers
available varies according to the hardware and depends on the shader model under which you are
compiling your shader code. From vertex shader model 2.0 on, we are guaranteed a minimum of 256 of
these registers in which to store our parameter data, and 32 for pixel shaders, but earlier models had
considerably fewer.

When using effects with shaders, the effect framework can automatically take care of uploading our
parameter data into constant registers when they are needed. This means we don’t have to worry too
much about them at all, just as long as we don’t use an insane number of parameter inputs which would
collectively overflow the number of constant registers available for storing parameter data (not generally
an issue these days). These constant registers will need to be reused by each shader and as such the data
in these registers can be dynamically updated whenever we begin a new pass.

When a custom state manager is supplied, it will be responsible for uploading the parameters into the
constant registers itself. Why would we want to do this? Well, perhaps you have designed an efficient
system that records the registers in which data is stored that can remain set across multiple effects and
thus reduce bandwidth requirements between system and video memory. It takes time to keep setting
constant registers so these callbacks afford you control over exactly which constant registers get
overwritten and when.

Below we show the six remaining methods of the ID3DXEffectStateManager interface which provide
callbacks to the effect system for the uploading of different types of parameter data into the constant
registers of the graphics hardware. There are boolean, float, and integer versions for setting both vertex
and pixel shader constants.

HRESULT SetPixelShaderConstantB( UINT StartRegister,
                                 CONST BOOL* pConstantData,
                                 UINT RegisterCount );

HRESULT SetPixelShaderConstantF( UINT StartRegister,
                                 CONST FLOAT* pConstantData,
                                 UINT RegisterCount );

HRESULT SetPixelShaderConstantI( UINT StartRegister,
                                 CONST INT* pConstantData,
                                 UINT RegisterCount );

HRESULT SetVertexShaderConstantB( UINT StartRegister,
                                  CONST BOOL* pConstantData,
                                  UINT RegisterCount );

HRESULT SetVertexShaderConstantF( UINT StartRegister,
                                  CONST FLOAT* pConstantData,
                                  UINT RegisterCount );

HRESULT SetVertexShaderConstantI( UINT StartRegister,
                                  CONST INT* pConstantData,
                                  UINT RegisterCount );

Notice that each method is supplied with an array of data to set in the constant registers. For example, if
your effect contained two bool parameters, the boolean version of the function would be called and
supplied an array of these two bools as the second parameter and the number of items in the array in the
third parameter (matrices would be passed as an array of floats to the float version of the function, etc.).
As the first parameter, your method will be passed the integer index of the constant register at
which the data should start being stored.

For example, remembering that a floating point constant register has enough storage for four floats, a
4x4 matrix would be supplied as a total of 16 floats. If the StartRegister specified contained the number
6, this means we are required to pass these 16 floats to the constant registers starting with constant
register 6. As 16 floats would consume four constant registers, the effect is requesting that the data be
assigned to registers 6, 7, 8, and 9.

So what do we do in our implementations of these functions? How do we transfer this data up to the
constant registers? It just so happens that the IDirect3DDevice9 interface also exposes these exact same
methods, which are normally used for the manual assignment of parameter data into the constant
registers for shader access when we are not using effects. When using effect files with default state
management, this is hidden from us when we simply use the ID3DXEffect::Set... methods to set our
parameter values. However, you also have the option to avoid the use of effects in this regard and thus
handle all shader state manually. In such a case, the shader and its parameters are still defined in a
similar way and we will discuss how to manually load and compile standalone shaders in the following
chapter.

When we are writing shaders that will be compiled and used standalone (i.e., not embedded in effect
files), we can still define parameters inside our shader file, but we no longer have the ID3DXEffect
interface to encapsulate the management and population of these parameters from the application side.
Again, bear in mind that these parameters are ultimately just human readable names that represent one
or more constant registers. When we declare variables in an effect file or shader program we can, for
instance, also specify the registers that should be used to store them. For example, below we see how a
matrix might be declared for a standalone high level shader such that the matrix parameter will be stored
inside constant registers 6, 7, 8, and 9 (it is only necessary to indicate the starting register).

matrix WorldMatrix : register( c6 );

In this example, the application would know that it must store the world matrix in constant registers c6
through c9 before invoking the shader. This is not necessary (although still possible) when using effect
files because the effect system provides automatic constant register assignment and data management.
An effect will know precisely which constant registers were chosen for each parameter so will know
exactly in which registers to upload the underlying data when ID3DXEffect::BeginPass is called.

Although we will generally be using effect files to manage our shaders in the lab projects for this course,
the following code shows how an application that is not using effect files, but is instead using a
standalone vertex shader, would set the world matrix for a vertex shader that it had previously compiled.

pDevice->SetVertexShaderConstantF( 6, (FLOAT*)&WorldMatrix, 4 );

As you can see, this is exactly the same IDirect3DDevice9 method that we would call from within our
callback methods when a request is made for an upload to the constant registers. In the case of our effect
state manager callbacks however, the effect has already decided in which constants to store our
parameter data and in that case we are simply told which registers to upload to and the amount of data to
upload. We can then call the above IDirect3DDevice9 method to actually set the constants correctly.
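
For example, continuing our hypothetical CMyStateManager sketch from earlier, the float version of
the vertex shader constant callback might look like this:

STDMETHOD(SetVertexShaderConstantF)( UINT StartRegister,
                                     CONST FLOAT* pConstantData,
                                     UINT RegisterCount )
{
    // The effect has already chosen the registers; we simply pass the
    // data through (custom register caching could be inserted here).
    return m_pDevice->SetVertexShaderConstantF( StartRegister,
                                                pConstantData,
                                                RegisterCount );
}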

If this final section has been a little confusing, don’t worry too much about it for now. This should all
make more sense when we discuss shaders in the next chapter. Just know that such callbacks will most
likely just perform direct calls into the matching methods exposed by IDirect3DDevice9.

Conclusion
There were likely a lot of new ideas for you to absorb in this chapter. While the subject matter was not
necessarily very difficult, the introduction of an entirely new way to organize our rendering techniques
is something that can be daunting at first glance. That being said, while we’ve still got a ways to go,
hopefully the benefits of using effect scripts are becoming obvious, even at this early stage.

While we have laid down a solid foundation for almost all of the important methods in the effect file
framework, we really need to start moving beyond the reference stage. Certainly you are going to
become a lot more comfortable with these concepts as we move into the remaining chapters in this
course because from this point forward, pretty much all of our rendering will be effect-based. This will
be true for the new components we introduce in later chapters (per-pixel lighting/shadows, reflections,
etc.) as well as for ideas we might have encountered in the past (spatial tree rendering, terrain rendering,
etc.), which will be modified to use an effect-centric rendering design.

In the next chapter we will introduce the programmable pipeline and finally start writing our own vertex
and pixel shaders. With our effect file foundation in place, adding shaders to our rendering pipeline will
not be very difficult at all -- we can literally just code them right inside our effect files. In practice we
will see that there is actually a bit more to effect and shader integration than simply retrofitting in some
new pieces here and there, but the details will fall into place as we progress.
