
Tivoli Provisioning Manager

for OS Deployment

IBM Systems Director Edition:

Migration Guide
for
Remote Deployment Manager Users

A User’s Guide for Remote Deployment Manager V4.40.1 users migrating to Tivoli Provisioning Manager for OS Deployment – IBM Systems Director Edition V7.1
© Copyright International Business Machines Corporation 2009. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by
GSA ADP Schedule Contract with IBM Corp.

Contents

Introduction
  Product versions
  RDM Data migration: general considerations
Installing TPM for OSd - ISDE 7.1
  Installing TPM for OSd server with Director
    Migrating data from stand-alone TPM for OSd 7.1
  Using TPM for OSd server with Director
  Discovering TPM for OSd servers with Director
    Step-by-step procedure
    Customizing config.csv file
    Discovering a multi-server environment
  Post-installation
  Upgrades
TPM for OSd DE 7.1 uninstallation
RDM 4.40.1 data migration
  General considerations
  Data migration: specifics
    Migrating RDM 4.40.1 multi-server environments
  Migrating RDM images and tasks: details and limitations
    Windows Clone Install and Linux Clone Install: Network settings
    Linux Clone Install: General settings
    Windows Clone Install: Licensing
    Windows Clone Install: Sysprep type
    Windows Clone Install: Creating a local account
    Software applications
  The RDM data migration tool
    Prerequisites
    Warnings
    Preliminary steps on stand-alone TPM for OSd 7.1
    Running the migration tool
    Checking the replication activity
    Incomplete object replication
  PXE server coexistence
    “Use alternate PXE server” option
    DHCP options 60 and 43
Concept mappings
  OS and application images
  Tasks
  Hardware configuration
  Applications and drivers
  Deployment Servers
  Target model checks
  Replication
Use cases
  How to discover new targets
  How to run a deployment task
  How to import and export targets
  How to capture images
  How to restore images
  How to create images from installation media
  How to create drivers
    Drivers from IBM ServerGuide
    Drivers from UpdateXpress System Pack
  Working with Hardware Configuration tasks
    Hardware environment
    Hardware configuration: Discovery
    Hardware configuration: RAID
  How to run full OS deployment tasks
Appendix A: Web Interface Extension
  Working with the Web Interface Extension
Appendix B: Configure network settings with Software Modules
Known issues and workarounds
  DHCP options needed during Linux deployments
  Parent boot server installation fails with “TPMOSD333E”
  Child boot server installation or discovery fails on Windows machine running Cygwin
Links
Trademarks

Introduction

This guide provides support and information for Remote Deployment Manager (RDM) V4.40.1
users who are migrating to Tivoli Provisioning Manager for OS Deployment - IBM® Systems
Director Edition (TPM for OSd - ISDE) V7.1.
Note: In this guide, the following abbreviations are used to aid readability:
 ISDE – IBM Systems Director Edition
 LCI – Linux® Clone Install
 MDS – Master Deployment Server
 RDM – Remote Deployment Manager
 RDS – Remote Deployment Server
 TPM for OSd – Tivoli® Provisioning Manager for OS Deployment
 WCI – Windows® Clone Install
 WNI – Windows Native Install
This guide describes step-by-step procedures to build from scratch new deployment systems with
TPM for OSd - ISDE 7.1 and to migrate legacy RDM 4.40.1 data. RDM 4.40.1 concepts, terms, and
use cases are mapped against TPM for OSd – ISDE 7.1 ones to facilitate and speed up the
replacement of RDM 4.40.1 environments. The main aim is to facilitate the migration for RDM
4.40.1 users.
This guide is a supplement to the official TPM for OSd 7.1 and TPM for OSd – ISDE 7.1
documentation.

Product versions
TPM for OSd 7.1 is available as a stand-alone product and as an IBM Systems Director 6.1 plug-in
(called TPM for OSd – ISDE). The stand-alone version is installed using a standard MSI file (or
.tar.gz for UNIX® platforms) and the main user interface is available with a common Web browser
at URL http://<IP>:8080. The plug-in version is provided in a tcdriver format that must be plugged
into the Director 6.1 environment; the user interface for the integrated version is provided through
the IBM Systems Director 6.1 console.

The two versions share the same engine and they have very similar features and capabilities: the
main difference being the interaction with users because TPM for OSd – ISDE 7.1 is usually
accessed from the Director console.
Moreover, a Free Edition is available for Director 6.1 customers: it is provided in the same tcdriver
format as TPM for OSd – ISDE 7.1, and it has the following limitations:
- OS deployment stops working after the 10th target (IBM xSeries deployment only)
- The only supported operating systems for target deployments are Windows and Linux
- No multi-server infrastructure: one single OS deployment server (parent boot server) on
Windows or Linux x86 only, without replication
- It is not supported (no PMRs), but it can be used for testing purposes
To check whether the Free Edition is running, look for a line in the logs similar to this:
“IBM Systems Director Edition unsupported free copy”
If activities fail because of the above limitations, you see log messages like:
"Maximum number of targets exceeded"
"This free version is limited to deploy up to 10 servers only"
"This free version is limited to deploy x86 servers only"
Director users can easily migrate to the supported TPM for OSd – ISDE 7.1 version with the
tcdriver upgrade (see “Upgrades” in the chapter “Installing TPM for OSd - ISDE 7.1”).

RDM Data migration: general considerations


RDM 4.40.1 and TPM for OSd – ISDE 7.1 are two different products, but a migration path has been
provided to let RDM users continue to use their legacy data in the new TPM for OSd 7.1
environments. The general idea is to have a coexistence time frame where both RDM and TPM for
OSd work in the same subnet until all the migrated RDM tasks can be performed from the new
TPM for OSd 7.1 engine.
The data migration consists of moving RDM data (basically Windows Clone Install (WCI) tasks,
Linux Clone Install (LCI) tasks, and software applications) into the new TPM for OSd 7.1 server
with a scripted tool. Note that the migration tool can work with both the stand-alone TPM for OSd
7.1 and TPM for OSd – ISDE 7.1 products.
Further details about the data migration are provided in the chapters “RDM 4.40.1 data migration”
and “Concept mappings”. The chapter “Use cases” describes the new TPM for OSd User Interface
for Remote Deployment Manager users.

Installing TPM for OSd - ISDE 7.1

TPM for OSd - ISDE 7.1 is an IBM Director 6.1 plug-in provided as a tcdriver: its name is usually
tpmfosd_director.tcdriver. You need a working IBM Director 6.1 environment in which to install
this package before you can start building your OS deployment environment. A detailed installation
procedure is described in the TPM for OSd - ISDE 7.1 Installation Guide.
When the plug-in has been correctly installed, IBM Director 6.1 has TPM for OSd capabilities. The
next step is to install the first OS deployment server (called parent boot server) locally on the IBM
Director 6.1 server: at least one boot server is needed and it must be installed on the same machine
where the IBM Director 6.1 server service is running.
Alternatively you can manually install TPM for OSd 7.1 as a stand-alone product, configure it, and
then discover it with the IBM Director 6.1 boot server discovery.
When the TPM for OSd tcdriver is plugged into IBM Director and you have your parent boot server
installed, you can start using the product.

Note that:

- The RDM 4.40 Master server (aka MDS) is now called parent boot server

- The RDM 4.40 Slave servers (aka RDS) are now called child boot servers

- In RDM 4.40, during RDM plug-in installation, the MDS was automatically installed on the
IBM Director server. With TPM for OSd - ISDE 7.1, no boot servers are installed with the
tcdriver; after the TPM for OSd – ISDE 7.1 plug-in installation completes, the IBM Director
6.1 administrator must manually install the parent boot server using the Install boot server
wizard.

When you install the TPM for OSd – ISDE 7.1 tcdriver in IBM Director 6.1 and the plug-in
installation completes successfully, you can see TPM for OSd entries added into the Director Web
UI. The following screen capture shows the OS Deployment page accessible from the Release
Management section:

From the Welcome page, under the Manage panel, the TPM for OSd blue icon indicates that the
plug-in is installed, but that there is no parent boot server installed on the Director server: the
message “Error communicating with Parent OS deployment server” asks you to install the parent
boot server or to discover it if it has already been installed.

If you cannot see the above links after the tcdriver installation, you might need to restart the IBM
Director server service and open the Director Web UI again.

When the above icons are available, you can start installing OS Deployment servers: they are also
called boot servers because the product acts as a PXE server providing network boot capabilities to
clients.

In a multi-server hierarchy you can have one master server (installed on the Director server) called
parent OS Deployment server and (optionally) its direct subordinates called child OS Deployment
servers.
As in RDM 4.40 for MDS and RDS, each child OS Deployment server is synchronized with the
parent OS Deployment server.

By default, child OS Deployment servers are configured to use the same database as the parent OS
Deployment server: using multiple databases (one for each OS deployment server) is not supported
in TPM for OSd – ISDE 7.1.
Note that, in the multi-server infrastructure, TPM for OSd objects (System Profiles, Software
Modules, and so on) created on the parent OS Deployment server are automatically replicated to the
child OS Deployment servers. The opposite is not true: if an OS clone image is created on a child
server, replication to the parent is not automatic.
Currently it is not possible to replicate just a single object between TPM for OSd servers: the
replication synchronizes the whole boot server repository. The only way to replicate single objects
is through TPM for OSd Java API. Consult the TPM for OSd documentation and IBM support
technotes for more details, for example, a useful technote “How to replicate a system profile
created on a child PXE server?” can be found at:
http://www-01.ibm.com/support/docview.wss?rs=3176&context=SS3HLM&q1=1370072&uid=swg21370072&loc=en_US&cs=utf-8&lang=en

The suggested usage scenario consists of the following steps:


- Build a separate test infrastructure to create and test images
- Export data from the test infrastructure into RAD files
- Import RAD files at the necessary hierarchy level of the TPM for OSd – ISDE 7.1
production infrastructure (usually into the parent boot server)
- Let the automatic replication copy images to child boot servers

Be careful when working with RAD files because exporting and re-importing images on OSd
servers sharing the same database will create duplicate objects.

For more details, see “Discovering a multi-server environment”.

You can install TPM for OSd boot servers using Director Console or by discovering manually
installed ones. It is suggested that you:
 Use Director console to install TPM for OSd boot servers
 Install and discover first the parent OS Deployment server and then any child OS
deployment servers
It is advisable to have the parent OS deployment server on the same machine as the Director server:
this is the default behaviour. It is also possible to have the parent boot server on another machine,
but a local OS deployment server is then mandatory for bare metal discovery: this specific
configuration is out of the scope of this document.

Installing TPM for OSd server with Director


To install the parent OS Deployment server, start the wizard from Director Web UI and select
Release Management -> OS Deployment -> Boot servers -> Install boot server:

The wizard shows a message indicating that the parent OS Deployment server is going to be
installed locally on the Director machine. To proceed with the installation you must define the
following parameters:

 Administrator name: this is the TPM for OSd super-user name (default was rdmadmin in
RDM 4.40). This user can log in to the stand-alone TPM for OSd Web UI with all
privileges. This name can be chosen freely: it does not need either to be a Windows or a
UNIX account or to match the Director administrator name.
 Password: super-user password. This must be the same as that for the TPM for OSd 5.1
engine embedded in RDM 4.40 if you want to migrate RDM images.
 HTTP/HTTPS ports: ports needed by TPM for OSd for HTTP and HTTPS protocols
 Data Directory: TPM for OSd directory where logs, configuration, and data will be
stored.

Refer to the TPM for OSd 7.1 documentation for additional details about installation parameters.

Note that:
 RDM 4.40 did not have its own super-user as TPM for OSd versions 5.1 and 7.1 have.
The provided super-user credentials are now used by Director to start TPM for OSd –
ISDE 7.1 activities. They are also used to log in to the stand-alone TPM for OSd Web UI.
 RDM 4.40 had its repository folder at path <RDM_ROOT>\repository and the
embedded TPM for OSd 5.1 at path <RDM_ROOT>\EE_files. Now the TPM for OSd –
ISDE 7.1 repository is located under the Data Directory parameter provided at
installation time.

When the OS Deployment server installation has completed, you can see the related activity
showing no errors:

If your installation activity contains an error, check whether it is a known error that you can work
around. Refer to “Known issues and workarounds”.

Go to the Manage section and click the Refresh button to see the green check as shown below:

The parent boot server is listed as an OS Deployment server:

Click on its name to see TPM for OSd 7.1 administrative pages:

The software is now installed, but you must restart Director services or reboot the machine to
unlock any features that are disabled by default. This step is needed only after installing the first
parent boot server.

Migrating data from stand-alone TPM for OSd 7.1
If you have a running stand-alone TPM for OSd 7.1 server and you want to migrate to TPM for
OSd – ISDE 7.1, but also preserve your data, you can use the following procedure:
- Export data from stand-alone TPM for OSd 7.1 into RAD files
- Check that the exported RAD files are correct and that they are not corrupted
- Completely remove stand-alone TPM for OSd 7.1 (from Add/Remove programs ->
Change -> clean all data): note that all TPM for OSd 7.1 data will be removed from the
machine
- Install the OS deployment server automatically from IBM Director Console
- Import the RAD files into the boot server
After these steps, your OS deployment server integrated with IBM Director will have all the
exported data.

Using TPM for OSd server with Director

Any TPM for OSd activity can be started and configured from Director Web UI -> Release
Management -> OS Deployment, as described in the following four sections:

1) Hardware Configuration: tasks involving RAID, BIOS, and general hardware
configurations
2) OS Configuration: for operating system provisioning (RDM images containing operating
systems are now called System Profiles)

3) Software Modules: software installation and custom actions to be performed after the OS is
installed (RDM images of software applications are now called Software Modules).
Software Modules also include device drivers.

4) Deployment Schemes: templates defining deployment activity behaviours

OS deployment tasks that involve targets can be found by selecting the resource from the Navigate
Resources menu and choosing the OS Deployment actions as described in the picture below:

In general, if you want to configure the deployment data (images, applications, and so on) you have
to browse the OS Deployment section under the Release Management menu; if you need to start an
OS deployment task, you have to select the target and its OS Deployment action.

These are all TPM for OSd 7.1 concepts so refer to its documentation for further information.

TPM for OSd 7.1 administrative pages are available by selecting the OS deployment server under
the Boot Servers section:

Here you can change the debug level, disable the PXE server component, and configure the product.

To find another administrative point of control, select the TPM for OS Deployment icon under the
Manage panel:

Here you can access additional useful pages:

Discovering TPM for OSd servers with Director
Although it is strongly suggested to install OS deployment servers using the Director wizard, it is
also possible to manually install stand-alone TPM for OSd servers and add them later into the
Director environment. Performing an automated installation using the IBM Director console as
described in the previous chapter is the recommended installation method that will give your OS
Deployment servers all the designed capabilities; the manual installation of stand-alone boot servers
followed by their discovery into the Director environment is described in this section: it will give
you running OS Deployment servers with some limited features.
Consider that the manual installation of OS deployment servers must be performed only when
specific configurations are needed and you are advised to check this customization with IBM
Software Support before starting.
If you manually discover an installed OS deployment server you can:

- Add TPM for OSd boot servers in your Director 6.1 environment
- Perform basic deployment tasks
These discovered boot servers will have limited functions, including:
- Child boot server cannot be promoted to parent role
- Boot server upgrade does not work from Director console
Before starting a manual installation, check that your requirements can be met; then you can start
your OS deployment server discovery by following these steps as reference.

Step-by-step procedure
In this scenario you:
1- Ensure that you have installed the tpmfosd_director.tcdriver on the Director server
2- Enable TPM for OSd credentials authentication in Director
3- For each OS deployment server:
a. Use the TPM for OSd installer under
%DIRECTOR_HOME%\tpm\repository\tpmfosd\<OSd_build_number>
to install the product in stand-alone mode
b. Place a custom config.csv file under <TPMfOSd_DataDir>\global\rad and
restart the TPM for OSd server
4- Configure TPM for OSd for launch-in context
5- Run a TPM for OSd server discovery from Director Web UI

These steps are now described in detail.

1. Install TPM for OSd – ISDE 7.1


To install the tcdriver, follow the procedure described in the TPM for OSd – ISDE 7.1 Installation
Guide.

2. Enable TPM for OSd credentials authentication in IBM Director


An important step is to let TPM for OSd interact with Director. Edit the user-factory.xml
configuration file to let TPM for OSd use the credentials stored in the dcm.xml file and then access
the Director server. To accomplish this, update the following line in the file
<director_home>/tpm/config/user-factory.xml:

<authentication-realm>com.ibm.tivoli.tpmfosd.security.TpmfosdDirectorSecurityRealm</authentication-realm>

When changing this file you need to restart the Director server service.
After this step, OS deployment servers can interact with Director using the credentials stored in the
%DIRECTOR_HOME%\tpm\config\dcm.xml file. Step 3 explains how to provide these credentials
to the TPM for OSd servers when invoking Director.
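The restart can also be done from the command line. A minimal sketch, assuming the default service key (dirserver) and the smstop/smstart scripts that IBM Systems Director provides; verify the names on your installation:

rem On Windows (assumed service key):
net stop dirserver
net start dirserver

# On UNIX systems (assumed default path):
/opt/ibm/director/bin/smstop
/opt/ibm/director/bin/smstart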

3. Install and configure TPM for OSd


Installing TPM for OSd in stand-alone mode is described in the TPM for OSd 7.1 Installation
Guide. If you want to migrate RDM 4.40 images, when installing the product in stand-alone mode,
remember to use the same super-user password of the rdmadmin user in TPM for OSd 5.1 (provided
with RDM 4.40). The super-user name (default in TPM for OSd 7.1 is admin) is not important.
To be sure that you are using the correct password:
- Open the browser at http://<RDM_440_Server_IP>:8080
- Insert the credentials provided at RDM 4.40 installation time for TPM for OSd 5.1.1
(default user is rdmadmin)
- When installing TPM for OSd 7.1, provide this same password for the super-user

When the stand-alone TPM for OSd 7.1 is installed, you have to create a specific config.csv file
under <TPMfOSd_DataDir>\global\rad (where <TPMfOSd_DataDir> is the Data Directory
selected at installation time): this file contains the needed parameters to let the OS Deployment
server work with Director. To help you when creating and editing this config.csv file, please refer to
“Customizing config.csv file”.

An additional step is required for TPM for OSd servers installed on UNIX systems: you need to
open the file /etc/sysconfig/tpmfosdvars and add the <Director_home>/tpm/tools value to the
PATH environment variable. For example, the file /etc/sysconfig/tpmfosdvars should contain a line
like the following:
PATH=$PATH:/opt/ibm/director/tpm/tools

4. Configure TPM for OSd 7.1 launch-in context


Before discovering each OS deployment server (parent or child boot server), you need to add the
RPC access role to the manually-installed TPM for OSd server: this role is responsible for the
launch-in context inside the IBM Director Console. To accomplish this, on each TPM for OSd
server you are going to discover (parent or child boot server), follow this procedure:
- Go to the TPM for OSd server
- Stop TPM for OSd server service (net stop remboserver on Windows)
- Edit the file C:\Program Files\Common Files\IBM Tivoli\rembo.conf by adding the
following section

HTTPRoles RPC access {
  Members "RPC access"
  AllowPages "*"
  AllowGroups "*"
  Policies "RADTOPO_RO"
}

- From a DOS shell, navigate to C:\Program Files\Common Files\IBM Tivoli and run the
command

rembo.exe -d -c rembo.conf -exit

- Start the TPM for OSd server service (net start remboserver on Windows)
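If you have several servers to prepare, the same sequence can be scripted. A minimal sketch for Windows, using only the commands shown above and assuming the default installation path:

net stop remboserver
cd /d "C:\Program Files\Common Files\IBM Tivoli"
rem Edit rembo.conf here to add the HTTPRoles section shown above
rembo.exe -d -c rembo.conf -exit
net start remboserver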

Now you can discover your TPM for OSd servers.

5. Discover OS deployment boot servers

When you have installed and configured the TPM for OSd servers, you can discover them into your
IBM Director environment.
First of all check under Navigate Resources -> All systems that you have two entries for each
system where you want to discover the TPM for OSd 7.1 installation:
- An Operating system entry with the hostname of the machine
- A Server entry (or virtual server for virtual guest machines)

Check that both entries have Access = OK


If there is no Server resource related to your target system:
- Inventory -> System Discovery, enter the IP address of the target system to be discovered
and click "Discover".
- If the discovery is successful, a Server resource is created with accessibility of "No
Access"
To enable access to the system you need to provide an OS credential on the target system. This
allows workflows to be run on the target system.
To enable access:
- Go to Navigate Resources -> All Systems -> select your system
- Click on Actions -> Security -> Request Access, enter an OS user name and password for
the target system
- If Director is able to access the system with the provided credentials, Director creates
another object of type "Operating System" with the access set to "OK".

If you are installing the parent OS Deployment server, these resources refer to the Director server
itself so they are already there. In this case, check ACCESS = OK for both of them:

To create a discovery configuration for the TPM for OSd installation on the newly discovered target
system:
- Inventory -> Advanced System Discovery -> Create
- In the Profile Properties step, enter a unique profile name, and under profile type select
"Boot Server"
The following example shows only the parent OS Deployment server being discovered.
Start creating the Director System Discovery:

In the Protocol Selection step, select "TPM for OSD Boot Server Discovery"

In the Boot Server Information step, click "Browse" to select the target system.
Note: if there is more than one, select the Operating System resource type, not the Server.
An error message is displayed when you try to proceed to the next panel if you selected the wrong
resource type. Next provide the HTTP port and Java API credential used to connect to the TPM for
OSD using the Java API. The credentials must match the APISecret parameter in the config.csv file
on the TPM for OSD server you are discovering. HTTP Port is 8080 by default: this is the HTTP
Port parameter inserted when installing stand-alone TPM for OSd 7.1:

Proceed to the final step of the wizard and then click Finish.
Select the discovery configuration created and click "Run". When it completes, under Boot Server,
you can see the following:

If you inserted only the HostName and APISecret parameters, you can now customize the config.csv
file to add more settings. Restart the TPM for OSd server and you can then use the TPM for OSd
plug-in from the Director console.

Customizing config.csv file


If you are working with a single parent OSd server (on a Windows machine where the default
Microsoft Access database is used) you need these column headers:

HostName;APISecret;TPMReporting;TPMUser;TPMPass;TPMPort;TPMBinDir

No database information is needed in this case because the default information is used as shown
below:

Otherwise, if you are using a single parent OSd server with another database (usually in this
scenario the ‘AutoDeploy’ ODBC datasource is manually created before running the TPM for OSd
stand-alone installer), you need these columns:

HostName;APISecret;MasterIP;DbName;DbUser;DbPass;AutoSync;TPMUser;TPMPass;TPMBinDir;TPMPort;TPMReporting

If you are working with a multi-server infrastructure, you need to use:


- On your parent OS Deployment server:

"HostName";"APISecret";"MasterIP";"DbName";"DbUser";"DbPass";"AutoSync";"TPMUser";"TPMPass";"TPMBinDir";"TPMPort";"TPMReportin
g"

- On your child OS deployment servers:

"HostName";"APISecret";"MasterIP";"DbName";"DbUser";"DbPass";"AutoSync";

An example config.csv file if you use a single parent OS Deployment server (on a 32-bit Windows
machine where the default Microsoft Access database is used) is:

HostName;APISecret;TPMReporting;TPMUser;TPMPass;TPMPort;TPMBinDir
director61;myApiSecret;b;tioadmin;myTpmPass;8421;c:/program files/ibm/director/tpm/tools

An example config.csv file if you use a single parent OS Deployment server with a MS SQL
database is:

HostName;MasterIP;DbName;DbUser;DbPass;AutoSync;TPMUser;TPMPass;TPMBinDir;TPMPort;TPMReporting;APISecret
raptor;SELF;AutoDeploy;tivoli;XXXXXXX;f;tioadmin;XXXXXXXX;c:/program files/ibm/director/tpm/tools;8421;b;A7460F4A8FAFE97F7731579210233951

You need the related ODBC datasource as follows:

The following example shows a complete config.csv file to be used when working with parent/child
OS deployment servers:

"HostName";"APISecret";"MasterIP";"DbName";"DbUser";"DbPass";"AutoSync";"TPMUser";"TPMPass";"TPMBinDir";"TPMPort
";"TPMReporting"
"dir61srv";"A740034E80A0B07B7C31579210233951";"SELF";"derby://127.0.0.1:1527/C:/Program
Files/IBM/Director/tpm/tpmfosd;create=true";"tpmfosd";"A7460F4A8FAFE97F7731579210233951";"f";"tioadmin";"A740034E80A0B07
B7C31579210233951";"C:/Program Files/IBM/Director/tpm/tools";"8421";"b"

As already described, on child OS Deployment servers, not all the columns are needed.

Note that each TPM for OSd server must have its own config.csv file.

You must customize the following parameters:

- HostName (needed for Director boot server discovery): hostname of the machine where
TPM for OSd is installed. The name must be written in lowercase characters.

- APISecret (needed for Director boot server discovery): a password that you can choose
as you prefer. Provide it to the TPM for OSd server discovery wizard. Remember to
write it encrypted using the rbagent rad-hidepassword command.

- MasterIP (needed for parent/child architecture): the IP address of the OS deployment
server with parent role (use the value “SELF” for the parent server).

- DbName, DbUser, DbPass (needed for parent/child architecture): JDBC connection
parameters to use a specific database instance instead of the default one. Not needed
when using only a parent OS deployment server because the default is used.

- AutoSync (needed for parent/child architecture): default is "f". It indicates that new
objects are automatically replicated through TPM for OSd servers.

- TPMUser (needed to run OS deployment tasks): this credential lets TPM for OSd 7.1
interact with Director: set it to the username parameter present in the
%DIRECTOR_HOME%\tpm\config\dcm.xml file. You enabled Director to accept this
credential in the previous step when changing the user-factory.xml file. The following
example shows the username tioadmin contained in the dcm.xml file.

- TPMPass (needed to run OS deployment tasks): the password for the user TPMUser is
the same password as the Director console password. It is the password of the Operating
System user provided during IBM Director 6.1 installation. If you want to encrypt this
value use the rad-hidepassword command.

- TPMBinDir (needed to run OS deployment tasks): set it to the
%DIRECTOR_HOME%\tpm\tools path.

- TPMPort (needed to run OS deployment tasks): set it to the
com.ibm.pvc.webcontainer.port parameter contained in the
%DIRECTOR_HOME%\lwi\conf\overrides\USMi.properties file. The following
example shows the default value.

- TPMReporting (needed to run OS deployment tasks): set value "b" only for the TPM for
OSd server that runs on the same machine as the Director (the parent OS deployment
server).

For more details, refer to the IBM Tivoli Technote “How to configure Rembo Auto-Deploy (TPM
for OS deployment) with a text file ?” available at this link:
http://www-01.ibm.com/support/docview.wss?uid=swg21247013

If you do not want to insert plaintext passwords, that is, you want to encrypt them, use the rbagent
rad-hidepassword command as follows:

C:\Program Files\Common Files\IBM Tivoli>rbagent rad-hidepassword mypassword

IBM Tivoli Provisioning Manager for OS Deployment Web extension v.7.1.0.0 (071.64)

Starting Rembo Agent
A <NOT> Connected to OS deployment server 1 (dir61srv), using database tpmfosd;create=true on 127.0.0.1
Result: A55F1E5380B7AE657D63139210233951
Stopping Rembo Agent

To check that your file is correct, open the <TPMfOSd_DataDir>\vm.log file and check that there
are no errors and that the following lines are there:

[2008/10/24 12:16:11] <INF> Verifying deployment database structure
[2008/10/24 12:16:12] <INF> Interfaces registered: 344
[2008/10/24 12:16:21] <INF> RADReadConfig: text file without Database
[2008/10/24 12:16:21] <INF> A multi-server configuration file is present and contains an entry for this OS deployment server
[2008/10/24 12:16:28] <INF> Waiting for tasks on BomID X
[2008/10/24 12:16:29] <INF> Preloading console page templates
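To spot these lines without scrolling through the whole file, you can filter the log. A small sketch using the Windows findstr utility and the data directory chosen at installation time:

findstr /C:"A multi-server configuration file" /C:"RADReadConfig" "<TPMfOSd_DataDir>\vm.log"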

Remember to restart the TPM for OSd service when you edit config.csv.

If you are working on a Windows machine where the default Microsoft Access database is used, it
is possible to start by using only the HostName and APISecret parameters, starting the Director boot
server discovery, and then adding other parameters later. Otherwise, if another database is used by
stand-alone TPM for OSd 7.1, the starting parameters must be at least HostName, APISecret,
MasterIP, DbName, DbUser, DbPass, AutoSync.
If you did not insert all the needed parameters at discovery time, after the OS deployment server has
been correctly added into the Director environment, remember to set all the required headers in the
config.csv file as described in this section (and restart the TPM for OSd service).

By default all boot servers installed with IBM Director Console have AutoSync = "f" so objects are
automatically replicated to child boot servers.
When you manually install and discover OS deployment servers you can also customize this
behaviour: refer to the IBM Tivoli Technote “How to configure Rembo Auto-Deploy (TPM for OS
deployment) with a text file ?” available at:
http://www-01.ibm.com/support/docview.wss?uid=swg21247013

Discovering a multi-server environment

This chapter describes a manual installation of a multi-server TPM for OSd environment and its
discovery into a running IBM Director 6.1 server.

The multi-server environment consists of the following architecture:


- TPM for OSd database is a Microsoft SQL Server database running on the parent boot
server
- TPM for OSd database is published as an ODBC datasource pointing to the Microsoft SQL
Server database:
o An ODBC datasource called AutoDeploy is created on the parent boot server and
it points to the local Microsoft SQL Server database on the parent boot server
o An ODBC datasource called AutoDeployMaster is created on the child boot
server and it points to the remote Microsoft SQL Server database on the parent
boot server
- TPM for OSd server (parent boot server) uses the local Microsoft SQL Server database
- TPM for OSd server (child boot server) uses the parent database
Note that both ODBC datasources point to the same database: TPM for OSd servers are using a
single shared database.
Moreover, another important requirement must be satisfied: the TPM for OSd Administrator
passwords must be the same on each OSd server (parent and child).

First, install the TPM for OSd server on the parent and on the child machine. When the installation
is complete, you can configure their config.csv files as described above.
In our example, the config.csv file used on the parent OSd server is the following:

HostName;MasterIP;DbName;DbUser;DbPass;AutoSync;TPMUser;TPMPass;TPMBinDir;TPMPort;TPMReporting;APISecret
raptor;SELF;AutoDeploy;tivoli;password;f;tivoli;password;"C:\Program Files (x86)\IBM\Director\tpm\tools";8421;b;A7460F4A8FAFE97F7731579210233951

The config.csv file on the child is:

HostName;MasterIP;DbName;DbUser;DbPass;AutoSync;APISecret
skywarp;"10.0.0.17";AutoDeployMaster;tivoli;password;f;A7460F4A8FAFE97F7731579210233951

Note the DbName parameter pointing to the ODBC datasource:


- AutoDeploy on the parent boot server
- AutoDeployMaster on the child boot server

When you edit the config.csv file, remember to restart the TPM for OSd server service.
To check that the config.csv settings have been correctly applied on each TPM for OSd server, you
can look at the vm.trc log file (set debug level 4 to be sure). You will find the config.csv parameters
as parsed by TPM for OSd and the line “A multi-server configuration file…”, meaning that
config.csv has been parsed correctly:

If config.csv is correct, then in the stand-alone TPM for OSd Web UI -> Server Parameters ->
Server Replication, the multi-server environment is correctly described as shown in our example:
- raptor is the parent boot server
- skywarp is the child boot server

If there are errors in your config.csv files, this page might show both OSd servers, but not in the
master-child relation. The example below shows two wrongly configured OSd servers that are
unable to recognize each other in the master-child relation:

As you can see, both OSd servers are listed but the multi-server environment is not correctly set.

If this occurs, as a possible solution, you can try directly from this page to manually promote the
Child OSd server and see if it solves the problem:
- Try to connect to the parent stand-alone TPM for OSd Web UI
- Navigate to Server Configuration -> Server Replication page and promote the child by
clicking on the link “Make this OSd server a child OSd server”

The steps are described below:

If it works, you can now see the correct architecture:

The yellow triangle on the child indicates that a replication is needed to synchronize the OSd
servers; you can now start the replication as shown below:

When the replication is completed, the green check is shown:

To confirm that the correct multi-server environment is configured: every time you log in to the
child OSd server stand-alone Web UI, a pop-up indicates that you are on the child server.

Before discovering parent and child OS deployment servers you need to follow this procedure on
each TPM for OSd server to add the RPC access role:
- Go to the TPM for OSd server
- Stop the TPM for OSd server service (net stop remboserver on Windows)
- Edit the file C:\Program Files\Common Files\IBM Tivoli\rembo.conf by adding the
following section

HTTPRoles RPC access {
  Members "RPC access"
  AllowPages "*"
  AllowGroups "*"
  Policies "RADTOPO_RO"
}

- From a DOS shell, navigate to C:\Program Files\Common Files\IBM Tivoli and run the
command

rembo.exe -d -c rembo.conf -exit

- Start the TPM for OSd server service (net start remboserver on Windows)

You must run this procedure on each OS deployment server.

To discover the parent boot server, you can proceed as if you were working with a single parent, so
follow the steps described in the previous section:
- Edit the file <director_home>/tpm/config/user-factory.xml as needed
- Create a boot server discovery and run it
The parent boot server is added.

For the child OS deployment server, you need no changes in the xml files: changes were needed on
the parent boot server because it is the only OS deployment server that interacts with the Director.
Before starting the child boot server discovery, check if the machine where the child OSd server is
installed is shown correctly as a Director resource.
Go to Director console -> Navigate Resources -> All Systems and find your machine: if the
machine is not listed, or if it is listed as a resource with Type=Operating System and it has partial
access in Director as shown below:

then you can rediscover it as follows:


1. Remove (if possible) the resource related to your child OSd server
2. Run a system discovery with child OSd server IP address

3. Wait for the OS resource to be shown
4. Request access on the child OSd server with resource Type=Operating System (OK appears)

5. Then also the resource Type=Server is shown

When the child OSd server is correctly listed as a resource with both type Operating System and
Server, then you can discover it. Create the OSd discovery by adding the child server as Target
System (the Server resource just discovered for the child machine); remember to insert the same
APISecret parameter in both config.csv files and insert that value in the Java API password field.

You can now start the discovery:

The parent and the child boot servers are shown:

As a final step, restart the Director service and check under the Manage panel that the following is
shown:

Now you can use parent and child OS deployment servers to discover your targets and run
deployment tasks from IBM Director console.

Post-installation
When the OS deployment server installation and discovery have completed, you can test that the
environment is correctly configured. You can simply boot a target from the network and wait until
it loads the TPM for OSd deployment engine as shown below:

It is possible to configure the behaviour of newly discovered systems to show the locked panel and
stay in the TPM for OSd deployment engine:

Otherwise the default behaviour is to let the targets boot from their Hard Disk just after the
deployment engine has been loaded and the resource has been discovered.
In both cases the hosts are shown in the Director console:

If you plan to work with:
- Windows WIM images
- Windows Vista
- Windows 2008
- Some specific hardware configuration tasks
then you need the Microsoft WAIK product to be installed either on your Director Server or on
another system running the Web Interface Extension.
Download and install it according to your operating system. The available links are:
- http://www.microsoft.com/downloads/details.aspx?FamilyID=C7D4BC6D-15F3-4284-9123-679830D629F2&displaylang=en
- http://www.microsoft.com/downloads/details.aspx?familyid=94BB6E34-D890-4932-81A5-5B50C657DE08&displaylang=en
Refer to the TPM for OSd 7.1 documentation for further information.
For details about hardware configuration tasks, see “Working with Hardware Configuration
tasks”.

Now that TPM for OSd - ISDE 7.1 is ready to work, you can start migrating RDM 4.40 data and
performing deployment activities.

Upgrades
The recommended upgrade procedure for TPM for OSd – ISDE 7.1 is to upgrade the TPM for OSd
plug-in installed in the IBM Director 6.1 environment and then use the Director Console to upgrade
all the TPM for OSd boot servers.
The upgrade procedure can be summarized as follows:
1) In the IBM Director 6.1 environment, install a new version of the TPM for OSd 7.1
tcdriver on top of the existing one
2) From the IBM Director Console, upgrade all the TPM for OSd boot servers
All the details about the upgrade procedure are provided in the TPM for OSd – ISDE 7.1
Installation Guide: read it carefully before starting your upgrade.
This chapter shows a simple upgrade scenario (from TPM for OSd – ISDE 7.1 build 080.03 to build
080.20) with a single parent boot server.

First of all, check your TPM for OSd tcdriver version from the Director Welcome page:

Then check that your OS deployment servers are listed under the Release Management -> OS
Deployment -> Boot Servers page:

To check that an OS deployment server is correctly running you can also:


1) Connect to its stand-alone Web UI

2) Check that the build number is the same as the tcdriver one (shown in the Director
Welcome page)
3) Log in with the credentials provided at boot server installation time

To start the upgrade procedure, you need to install the TPM for OSd tcdriver on top of the existing
one, so under the path c:\Program Files\IBM\Director\tpm\drivers:
- Rename the old installed tcdriver to a name of your choice (here called
tpmfosd_director.tcdriver.old)
- Copy the new tcdriver and rename it as tpmfosd_director.tcdriver
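A sketch of these two steps from a Windows command prompt, assuming the new build was downloaded to C:\downloads (path and file names are examples):

cd /d "C:\Program Files\IBM\Director\tpm\drivers"
ren tpmfosd_director.tcdriver tpmfosd_director.tcdriver.old
copy "C:\downloads\tpmfosd_director.tcdriver" tpmfosd_director.tcdriver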

Then you can install the new tcdriver build as shown below (follow the Installation Guide for TPM
for OSd – ISDE 7.1 installation steps):

After restarting the IBM Director 6.1 Server service, use the Director console and upgrade your
boot servers:

You can check the OS deployment server upgrade activity from the Active and Scheduled Jobs
page:

When the boot server upgrade activity has completed, from the stand-alone TPM for OSd Web UI
you can see the new build number:

As the last step, you can restart the Director service and check that the new build is shown under the
IBM Director Welcome page:

Now the TPM for OSd – ISDE 7.1 upgrade has completed and you can start using the upgraded
product.

TPM for OSd DE 7.1 uninstallation

To uninstall TPM for OSd - ISDE 7.1 you must:


- Delete the OS deployment server from Director console
- Uninstall the TPM for OSd 7.1 product

To delete the OS deployment server, from Director Web UI go to Release Management -> OS
Deployment -> Boot servers, select the OS deployment server, and click Delete.
Then you must uninstall the stand-alone TPM for OSd 7.1 product: refer to the TPM for OSd 7.1
documentation for how to do this.

RDM 4.40.1 data migration
The migration from RDM 4.40.1 to TPM for OSd – ISDE 7.1 has been improved with a scripted
tool (called RDM2OSdMigration) that moves the RDM data to a TPM for OSd 7.1 server. The tool
was developed to let RDM 4.40.1 users continue to use their images and tasks in the new TPM for
OSd 7.1 environment. The migration tool RDM2OSdMigration can migrate RDM 4.40.1 data to
both stand-alone TPM for OSd 7.1 and TPM for OSd – ISDE 7.1. This chapter describes specifics,
limitations, details, and step-by-step procedures to be used during the migration.

General considerations
If you want to migrate your RDM 4.40.1 environment to a TPM for OSd 7.1 (ISDE or stand-alone
versions) environment, the suggested procedure is to:
1) Install the TPM for OSd 7.1 server in the same subnet as your running RDM 4.40 MDS
server
2) Configure your DHCP server to allow both OS deployment servers to coexist
3) Start the RDM data migration towards the TPM for OSd - ISDE 7.1 server
4) Check that the migration has completed and that your data has been correctly migrated
5) Leave the RDM 4.40.1 environment intact until your tests on TPM for OSd - ISDE 7.1 have
been completed.

This chapter describes a test environment being migrated from RDM 4.40.1 to TPM for OSd –
ISDE 7.1. The test environment is composed of:
 An ISC DHCP 3.0 server
 2 target machines
 The RDM 4.40.1 server to be migrated
 The TPM for OSd – ISDE 7.1 server
The test environment has been set up with virtual machines and is described by the following
picture:

The RDM 4.40.1 server has been configured with the following data:

After the migration the TPM for OSd – ISDE 7.1 server will have:
- RDM Images (OS and applications) migrated respectively into TPM for OSd 7.1 System
Profiles and Software Modules
- RDM Tasks (LCI and WCI) migrated into the corresponding TPM for OSd 7.1 OSd
Configurations

As an example, the picture below shows a TPM for OSd 7.1 System Profile named wXP_vm_sys
migrated from an RDM Image with the same name:

Note the OSd Configurations (starting with the text “RDM_”) that represent the RDM Tasks
containing the OS image.

In general terms, the migration can be summarized as follows:


 RDM Images (captured with GetDonor / PutDonor and usable for WCI / LCI; no native
images) are migrated into TPM for OSd System Profiles with the default empty OSd
Configuration
 RDM Tasks (WCI and LCI) are migrated into OSd Configurations
 RDM Applications are migrated into TPM for OSd Software Modules
 VMWare ESX 3.5 System Profiles and OSd Configurations created directly from the TPM
for OSd 5.1 embedded engine are migrated into the same TPM for OSd 7.1 objects

Data migration: specifics
The following RDM 4.40.1 objects are migrated:
- Operating system cloned images (Windows, Linux)
- Application images (Windows, Linux)
- VMWare ESX 3.5 native (unattended) images created directly in the embedded TPM for
OSd 5.1 engine
- RDM Windows Clone Install Tasks parameters
- RDM Linux Clone Install Tasks parameters

The following RDM data is not migrated:


- Windows Native images
- VMWare ESX Native images (except the ones created directly on the TPM for OSd 5.1.1
embedded engine)
- BIOS update images
- Drivers for Windows Native installations
- All other task information (except WCI and LCI tasks)
- All System/Task Configurations (Target-specific parameters)
- All RDM-specific topology information (D-Servers, Managed Subnets, and so on)

The data migration mainly consists of two steps:


1. RDM Images contained in the TPM for OSd 5.1.1 embedded engine are migrated to
TPM for OSd 7.1 (System Profiles with the default empty OSd Configuration and
Software Modules)
2. RDM WCI/LCI Task information contained in Director database is used to populate
and configure the migrated System Profiles with corresponding OSd configurations

Note that:
- RDM Images are migrated as TPM for OSd System Profiles using the same name
- RDM Tasks are migrated as OSd Configurations using the prefix
“RDM_<rdm_task_name>”
- OSd Configurations are created in the corresponding System Profile

To better understand the mapping between RDM and TPM for OSd objects, consider the RDM task
w2003 built on the RDM image w2003_vm_sys:

The RDM image w2003_vm_sys is migrated into a TPM for OSd System Profile with the same
name w2003_vm_sys:

The RDM task w2003 is migrated as an OSd Configuration of the System Profile w2003_vm_sys
and it is called RDM_w2003:

The data migration is summarized below (source RDM object -> migrated OSd object, with the
migration step in parentheses):

- Operating system cloned images (Windows, Linux) -> System Profile (step 1)
  Example: RDM image wXP_vm_sys -> System Profile wXP_vm_sys
- Application images (Windows, Linux) -> Software Modules (step 1)
  Example: RDM application OSd 71 for Windows -> Software Module OSd 71 for Windows
- RDM Linux / Windows Clone Install Tasks parameters -> OSd Configuration renamed as RDM_<taskname> (step 2)
  Example: RDM Task winXP with RDM Image wXP_vm_sys -> OSd Configuration RDM_winXP inside System Profile wXP_vm_sys
Details and important limitations are described in the section “Migrating RDM images and tasks:
details and limitations”.

Migrating RDM 4.40.1 multi-server environments


Typically, RDM 4.40.1 multi-server environments have all images stored on the RDM Master
Deployment Server (MDS): this means that the TPM for OSd 5.1.1 embedded engine on the MDS
has all the images contained in each child. To ensure that an RDM image is correctly replicated to
the TPM for OSd 7.1 server, it must be on the MDS and it must be visible in the Director Console
-> RDM tools -> Image Management panel.
However, it can occur that an image captured from a target under an RDS has not been replicated
because some errors occurred during the replication triggered immediately after the RDM GetDonor
task. In this case, run the RDM GetDonor task again, towards the same target, with the
SKIPCAPTURE parameter specified, to retry uploading the image to the MDS. When you can see
the image on the Director Console, then you know that the image has been uploaded to the MDS
and it is ready to be replicated to the TPM for OSd 7.1 server.

Migrating RDM images and tasks: details and limitations


As a general rule, all RDM Images are migrated into System Profiles with a default empty OSd
Configuration.
For example, consider the RDM Image w2003_vm_nosys without any RDM WCI Task associated
to it:

The image is migrated into an OSd System Profile with the same name (w2003_vm_nosys)
containing a default empty OSd Configuration as follows:

For each RDM Task (WCI / LCI) using that RDM Image, an OSd Configuration is created in the
corresponding System Profile.
As an example, consider the RDM Task w2003_noSys using the RDM Image w2003_vm_nosys:

An OSd Configuration RDM_w2003_noSys is created inside the System Profile w2003_vm_nosys:

To better describe the mapping performed during the migration, a CSV file is created by the
migration tool to help you to understand how the RDM Task has been migrated.
The CSV file called RdmTasks.csv looks as follows:

Consider the line:

654, w2003, w2003_vm_sys, WCI, RDM_w2003, w2003_vm_sys

where:
- TOID = 654
- RDM_TASK_NAME = w2003
- RDM_IMAGE_NAME = w2003_vm_sys
- RDM_TEMPLATE = WCI
- OSD_CONFIGURATION = RDM_w2003
- OSD_SYSTEM_PROFILE = w2003_vm_sys

This means that the RDM WCI task named w2003 containing the RDM image w2003_vm_sys has
been migrated to the OSd Configuration RDM_w2003 inside the System Profile w2003_vm_sys.
Note that the RDM images that are not used in any WCI / LCI task are not included in the above
report.

Windows Clone Install and Linux Clone Install: Network settings

In TPM for OSd 7.1 there are some important behaviours that you need to know to handle network
settings correctly.
In TPM for OSd 7.1, network setting configuration can be performed at both OSd Configuration
and Target Details level as shown below:

The following rules are followed during an OS deployment:

- When Target Details are in Advanced Mode, OSd Configuration settings are not used
- When Target Details are in Basic Mode, then OSd Configuration settings have higher
priority and the following rules are applied:
o OSd Configuration settings are used only for the first target NIC
o OSd Configuration settings cannot statically configure the IP address; so the first
network interface card (NIC) can be statically configured with DNS, WINS, and
Gateway settings, but the IP is always provided by DHCP (see the section
“Appendix B: Configure network settings with Software Modules” where a
workaround is provided by using software modules)
o Other NICs are configured to work with DHCP settings
This means that when Target Details are in Basic Mode:
- If OSd configurations have DHCP settings, then all NICs have DHCP settings
- If OSd configurations have Static settings, then the first NIC has statically-defined DNS,
WINS, and Gateway, but dynamically-defined IP. All other NICs have DHCP settings.

Because RDM Tasks (LCI / WCI) are migrated into OSd Configurations, the following rules are
applied when migrating network settings contained in RDM tasks:
RDM Task - Network Settings | OSd Configuration – Fixed Target Properties (settings are applied to the first target NIC)
First NIC is DHCP | DHCP
First NIC is STATIC | STATIC information for: Gateway, Primary WINS IP (Windows only), Secondary WINS IP (Windows only), Domain Name, Primary DNS Server IP, Alternate DNS Server IP. IP address and Subnet Mask will be DHCP.

For example, if you have this WCI with the first NIC configured with DHCP:

then the migrated OSd Configuration is dynamic as follows:

If you then deploy the above OSd Configuration on a target whose settings are in Basic Mode, only its
first NIC is configured (other NICs are always DHCP) and it is dynamic, as the OSd Configuration
specifies.
Now consider this setting in RDM, where the first NIC is configured in static mode:

The above data is migrated into the OSd Configuration as follows:

Note that the IP address is not migrated because it is dynamic.

If you deploy the above OSd Configuration (supposing target settings are in Basic mode), the first
target NIC has:
- A dynamic IP address
- Gateway and WINS server statically defined
Any other target NICs are configured as dynamic.

So pay particular attention when migrating RDM Tasks with static NIC settings, because OSd
Configurations contain static configurations only for Gateway, DNS, and WINS: a static IP address
cannot be provided at the OSd Configuration level, so users who need one must work at the Target
Details level with the “Advanced settings” panel in the TPM for OSd Web UI.
“Appendix B: configure network settings with Software Modules” shows a workaround to statically
configure target NICs using Software Modules to overcome this limitation.
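As a preview of that workaround, one way such a software module could statically configure a NIC on Windows is with a command like the following (the interface name and addresses are illustrative examples, not values produced by the migration):

rem Statically configure the first NIC (example values: address, mask, gateway, metric)
netsh interface ip set address name="Local Area Connection" static 192.168.1.50 255.255.255.0 192.168.1.1 1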

Linux Clone Install: General settings

When migrating RDM LCI tasks, the related OSd Configurations in TPM for OSd 7.1 have empty
settings for these parameters:
- Timezone
- Language
- User Name
- Organization

The User Name and Organization parameters are not always required for Linux deployments, while
particular attention is needed with the Timezone and Language parameters.
The migrated Linux images are all cloned ones (because they come from RDM LCI tasks), so note
that the following rule is valid in TPM for OSd 7.1: if these settings (Timezone and Language) are
left empty both in the OSd configuration and in the Target Details settings, then the deployed
machine inherits the settings from the donor (the machine from which you captured the image).
If you need to change them, set them from your Target Details or edit the OSd Configuration, and
your settings then replace the donor ones.

Windows Clone Install: Licensing

In RDM 4.40.1 you can set the Windows licensing type as follows:

Note that this feature is not available in TPM for OSd 7.1: you can only set the Windows License
key as follows:

For Windows Vista images you can also specify whether or not they have a Volume License:

Windows Clone Install: Sysprep type

RDM WCI tasks containing images sysprepped in Reseal mode are migrated into OSd
Configurations so that they can be used for common deployment tasks (with the “Deploy Now”
wizard). RDM images sysprepped in Factory mode, or not sysprepped at all, behave differently
during the migration:
 RDM WCI tasks containing images with Sysprep = Factory are not migrated into OSd
Configurations. After the migration you will only see the OSd System Profile containing the
default OSd configuration, but there will be no OSd configurations migrated from RDM
tasks
 RDM WCI tasks containing non-sysprepped images are migrated into OSd Configurations.
Even if the OSd configurations are filled with RDM Task values, the common usage for non-
sysprepped images is “Restore” (from Additional Feature) instead of “Deploy”.

For example, consider the following RDM tasks:


 w2003_factorySYSPREP contains an image with Sysprep run in Factory mode

 w2003_noSYSPREP contains an image where Sysprep has not been run

The non-sysprepped WCI task is migrated into the OSd Configuration:

OSd Configuration values are migrated from the RDM task:

But if you run the “Deploy Now” wizard on a target, it will probably fail: the image contains
OSd Configuration parameters that cannot be applied because Sysprep was not run:

For these OSd Configurations, run the Restore a profile wizard (from Additional Feature):

The check on the “disable SysPrep mini-setup” option is not needed in this case because non-
sysprepped images never run the mini-setup phase during the restoration: the profile restoration
always skips the mini-setup if the Sysprep tool has not been run on the donor before capturing the
image. The “disable SysPrep mini-setup” option is used to skip the mini-setup phase when restoring
sysprepped images.

Note that with the “Restore” task the donor settings are used.

RDM tasks with images where Sysprep has been run in Factory mode are not migrated into any
OSd Configuration: the OSd System Profile corresponding to this image only has the default empty
OSd configuration, but no RDM parameters are migrated:

Windows Clone Install: Creating a local account

In an RDM WCI task you can have a local account created on the newly deployed image:

This setting is also available in TPM for OSd 7.1 so all RDM Tasks containing that option are
migrated into OSd Configurations having the same behaviour. When this feature is enabled in TPM
for OSd 7.1, the option OSd configuration details > Windows > System customization > Create a
local account for the user must be set to Yes:

The user name of the local account that will be created is taken from OSd configuration details >
Users > User login:

After the TPM for OSd 7.1 image deployment, the user is created as follows:

Software applications
RDM application images are migrated into TPM for OSd 7.1 Software Modules as shown in the
picture below:

In RDM, each task also contains a list of the applications to be deployed after the OS image: this
means that each RDM task contains a list of RDM applications linked to it.
To keep the link between RDM applications and the RDM task, TPM for OSd Binding Rules are
used to automatically deploy software modules with the OSd Configuration. Every time an RDM
application is contained in an RDM task, the migrated software module is bound with binding rules
to the corresponding OSd configuration: this allows the application to be added to the deployment
task automatically.

For example, the RDM task wXP_withApplications contains three applications to be deployed after
the image wXP_vm_sys is installed:
 OSd 71 for Windows
 TRC windows MSI
 Perl MSI

The RDM task is migrated into the OSd Configuration called RDM_wXP_withApplications.
The three applications are migrated into software modules, each one containing a binding rule that
links it to the OSd Configuration RDM_wXP_withApplications. The binding rule, highlighted in red
below, shows the link between the application and the OSd Configuration
RDM_wXP_withApplications:

This means that when deploying the OSd Configuration RDM_wXP_withApplications, users do not
need to explicitly select the software modules during the “Deploy Now” wizard:

Even if no software modules are selected, the created binding rules will add the software modules to
the deployment.

An RDM 4.40.1 additional feature (not available for Linux Clone Install tasks) lets users insert
reboots during application installations: this information is contained at task level, meaning that each
task has its own list of reboots between applications.

In TPM for OSd 7.1 this information is mapped to the concept of Software Stages: after the OS
image has been installed you can configure several phases and assign software application
installations to them.

Note that installation stages in TPM for OSd 7.1 are defined at global level so each software module
can have a single and global installation phase: this means that it is installed at the same stage for all
the deployments. The migration tool tries to overcome this limitation by creating a global ordering
in TPM for OSd 7.1.
It is suggested that you check the Software Stages after the migration has completed: the migration tool
tries to recreate the same RDM installation order, but you might need to apply your own customizations.

The RDM data migration tool


The migration tool RDM2OSdMigration can migrate the RDM 4.40.1 data contained in the
embedded TPM for OSd 5.1 engine to both TPM for OSd – ISDE 7.1 and TPM for OSd 7.1 stand-
alone servers.
Note that these steps refer to a development version of the product: the final release of the tool
might have some differences.
The RDM migration tool requires that:
- TPM for OSd 7.1 and TPM for OSd 5.1 embedded in RDM 4.40.1 have the same
administrator password
- A Java API connection is available towards the TPM for OSd 7.1 server where the data
will be migrated
The TPM for OSd – ISDE 7.1 servers installed from the Director console (with the parent boot
server installation wizard) are automatically configured to accept a Java connection by using as
connection token the same TPM for OSd administrator password value. Conversely, TPM for OSd
7.1 stand-alone servers have no Java connection configured by default, so you need to enable it as
described in “Preliminary steps on stand-alone TPM for OSd 7.1”.

Prerequisites
Code level:
Minimum code level requirements are:
 RDM fixes 2.25
 TPM for OSd 5.1.1.0 build 54.36
 TPM for OSd 7.1.0.0
o Build 73.06 if stand-alone version
o GA build if ISDE version
The requirements on code levels are checked when the migration tool is started and the user is
informed if any requirement is not met. Note that these minimum requirements are subject to
change.

Java Virtual Machine:


A Java Virtual Machine 1.5 must be available on the RDM 4.40.1 server and the environment
variable JAVA_HOME must be set to its root path. Because our RDM 4.40.1 server does not have
a suitable JVM level (in particular, it uses the Director-provided JVM), we installed the following
JRE:

We also set the JAVA_HOME environment variable as follows:

To double-check you can open a new shell and see its value:
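Assuming JAVA_HOME has been set as a system environment variable (the JRE path below is an example; use the root of your own 1.5 JRE), a new shell shows something like:

C:\> echo %JAVA_HOME%
C:\Program Files\Java\jre1.5.0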

Other requirements:
 TPM for OSd 7.1 must be installed on a separate machine with respect to RDM; a TCP
connection must exist between the two
 TPM for OSd 7.1 and RDM (TPM for OSd 5.1.1 embedded engine) must have the same
network password

Warnings

 All information previously present on the TPM for OSd 7.1 server is deleted when you run
this migration tool. If you need to save your TPM for OSd 7.1 data, export it into RAD files
prior to running the tool. If you import RAD files containing objects already present on the
TPM for OSd server, then duplicate entries are created. Check that you have no pending
activities on the RDM and TPM for OSd servers before starting the migration
 If you have customized RDM 4.40.1 preboot environment objects (dos71f, dos71discovery,
and so on) from the TPM for OSd 5.1.1 embedded boot server engine and their original
names have been changed, note that they are migrated to TPM for OSd 7.1 and you must
delete them manually from the TPM for OSd 7.1 Web UI.

Preliminary steps on stand-alone TPM for OSd 7.1


A TPM for OSd – ISDE 7.1 server installed with Director Console is automatically enabled to
accept Java API calls. The screen captures below show the Director-generated config.csv with the
encrypted Java connection token (the column is APISecret): it is the same as the TPM for OSd
administrator password provided to the Director parent boot server installation wizard.

Conversely, for a stand-alone TPM for OSd 7.1 you must manually enable the Java API access
before starting the tool; this step is needed because the RDM migration tool works using TPM for
OSd Java API.
To allow your stand-alone TPM for OSd 7.1 server to work with the TPM for OSd Java API you
must create a specific config.csv file and provide the encrypted password to be used during Java
API communications.

Here we show a stand-alone TPM for OSd 7.1 server running with IP address 192.168.1.170 where
we will enable Java API and test the connection by starting the RDM migration tool.
Note that the TPM for OSd 7.1 server to which you are migrating the RDM 4.40.1 data must have the
same administrator password as the TPM for OSd 5.1 embedded engine in RDM 4.40.1; this
password can be different from the Java API connection token that you provide inside the
config.csv file.
To set up your own Java API password on your local stand-alone TPM for OSd 7.1 server and
enable the connection through the Java API, connect to your TPM for OSd 7.1 server and complete
the following steps:
1. Disable the SSL encryption and restart the server

2. Open a command prompt in the same directory that contains the rbagent.exe executable
(on Windows operating systems, the path is typically C:\Program Files\Common Files\IBM
Tivoli\rbagent.exe)

3. Run the rbagent command to encrypt your chosen password:

rbagent.exe -d -s <ip_of_tpmfosd_server>:<tpmfosd_web_password> rad-hidepassword <your_new_password>

Your new encrypted password is generated and can be found in the Result string.
For example:
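A sketch using the example server 192.168.1.170 and the two passwords described in the note below:

cd "C:\Program Files\Common Files\IBM Tivoli"
rbagent.exe -d -s 192.168.1.170:osdpwd rad-hidepassword apipwd

The encrypted form of apipwd is then printed in the Result string of the command output.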

Note that:
- osdpwd is the TPM for OSd administrator password provided at installation time:
this password must be the same as that of the TPM for OSd 5.1 engine embedded in
RDM 4.40.1
- apipwd is the password we use as the Java API connection token

4. Create a file called config.csv that looks like this:

HostName;APISecret
<your_hostname>;<your_APISecret>

where <your_hostname> is your Tivoli Provisioning Manager for OS Deployment
hostname and <your_APISecret> is your encrypted password.
This is an example of a config.csv file:
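For instance (the hostname tpmosd71 is an illustrative example; it must respect the rules in the notes below, and the APISecret can be either the plaintext password or the encrypted Result string from step 3):

HostName;APISecret
tpmosd71;apipwd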

Additional notes:
1) <your_hostname> must be either a short hostname or a fully qualified hostname; it
cannot be an IP address
2) <your_hostname> must be written in lowercase characters only
3) <your_hostname> must be immediately followed by the semicolon (;) without any
intervening space
4) Inside config.csv, the APISecret can either be in plaintext or encrypted format

5. Copy the config.csv file to the <TPM_for_OSd_DATADIR>/global/rad directory and
restart the TPM for OSd server. The full path of the directory is typically
C:\TPMfOS Files\global\rad.

Remember to restart the TPM for OSd server after every config.csv change. To be sure that your
changes have been successfully applied, it is strongly suggested you look in the vm.trc log file for
the line:

This confirms that the settings inside config.csv were parsed at server startup.
Now that you have enabled the Java connection on the stand-alone TPM for OSd 7.1 server, you
can use the Java encryption token to start the RDM migration tool.
For example, the migration tool is started towards a stand-alone TPM for OSd 7.1 server by
providing:
- TPM for OSd 7.1 server where the data will be migrated
- TPM for OSd 7.1 HTTP connection port
- TPM for OSd 7.1 Java API token (this is the APISecret value previously placed in
the config.csv file and here you have to provide it in plaintext)

Note:
- apipwd is the plaintext of the encrypted token written in the config.csv file
- This API password might be different from the TPM for OSd 7.1 administrator password
- The TPM for OSd 7.1 administrator password must be the same as the TPM for OSd 5.1
server one where the data will be collected and migrated

Running the migration tool
After checking that all the requirements are met (see section “Prerequisites”), you can start the
RDM migration tool. The tool is provided in a compressed format that you must save on your RDM
4.40.1 server and then unpack.
Remember that TPM for OSd 7.1 data will be lost when running this script, so export data into RAD
files if needed. The migration can be run more than once, and every time the TPM for OSd 7.1
database is cleaned up before starting. Make sure that there are no pending activities before starting
the migration.
To start the tool, run the following command.
On Windows:
TaskMigration.bat <7.1_IP_address> <7.1_port> <7.1_APISecret>

On Linux:

TaskMigration.sh <7.1_IP_address> <7.1_port> <7.1_APISecret>

You must provide the following parameters:


 <7.1_IP_address> is the IP address of the TPM for OSd 7.1 server where the data will be
migrated. This server must accept Java API calls: the Director integrated version is
automatically enabled; for the stand-alone version, see “Preliminary steps on stand-alone
TPM for OSd 7.1”.
 <7.1_port> is the HTTP listening port where the TPM for OSd 7.1 server is providing
Web UI access
 <7.1_APISecret> is the connection token used by the RDM migration tool to connect to the
TPM for OSd 7.1 server through the Java API. It is written encrypted in the config.csv file.
Note that the RDM migration tool requires the Java API password in plaintext.
If you installed a stand-alone TPM for OSd 7.1 server, the Java API connection token is the
value you encrypted in the config.csv file for the APISecret column.
If you installed TPM for OSd – ISDE 7.1 with Director console this value is the same as the
TPM for OSd administrator password provided during the parent boot server installation
wizard: the administrator password provided to the Director wizard is used for the TPM for
OSd 7.1 administrator credentials and for the Java API token encrypted in the auto-
generated config.csv.

Log information is written to:

 %TEMP%\TaskMigration.log on Windows
 /tmp/TaskMigration.log on Linux

In the example below, the tool is started as follows:


 TPM for OSd 7.1 server IP is 192.168.1.10
 TPM for OSd 7.1 server port is 8080
 TPM for OSd 7.1 server APISecret value in plaintext is adminpwd
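On Windows, this corresponds to the command:

TaskMigration.bat 192.168.1.10 8080 adminpwd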

Note that the Java API connection token must be provided in plaintext.

The tool starts by providing some basic information:

Then the migration tool checks the provided Java API token and verifies that the prerequisites are
met; the picture below shows the check performed against the RDM fixes, the TPM for OSd 5.1.1
embedded engine, and the TPM for OSd 7.1 server:

If the requirements are met, then the data replication is started from the TPM for OSd 5.1 embedded
engine towards the TPM for OSd 7.1 server.

To accomplish this, a replication activity called “ReplicateAllFromRemote(/<IP_RDM_Server>)”
is scheduled on the TPM for OSd 7.1 server: this task populates the TPM for OSd 7.1 repository
with all the data contained in the TPM for OSd 5.1.1 embedded engine.

When the TPM for OSd repositories are synchronized (see the COMPLETED status below), the
RDM 4.40.1 and the IBM Director services are stopped:

The RDM 4.40.1 database is now ready to collect all task information to be migrated into OSd
Configurations:

For more details, see the full logging provided in the TaskMigration.log file:

Note the useful file called RdmTasks.csv: it contains the mapping between the source RDM Tasks
and the destination TPM for OSd Configurations.

Checking the replication activity


To check that the repository replication from TPM for OSd 5.1.1 to TPM for OSd – ISDE 7.1 is
proceeding, run the following steps:
a) Log in to the IBM Director 6.1 Console
b) Click the plug-in link of TPM for OSd – ISDE (bottom of the welcome page)

c) Click the link in the red circle

d) Look for the “ReplicateAllFromRemote(/<IP_5.1.1_server>)” activity and check the percentage
and the current status
e) Replication is successful if the percentage is 100% and the icon status is

Moreover, any errors during the running of the migration tool are logged on the command prompt
and in the log files.
For example, the migration tool informs you about any replication activity failure by providing
details about how to troubleshoot the problem encountered:

The problem was simulated with a destination TPM for OSd 7.1 server not having enough disk
space:

In this case you can fix the problem and retry the migration.

Incomplete object replication


As the last step, the migration tool performs a consistency check on the migrated System Profiles:

Consider that this test was added to warn you about possible problems encountered during the
migration: it is not a reliable check for future deployments of migrated images. It is suggested that
you deploy the migrated images to ensure that they work successfully in the new TPM for OSd 7.1
environment. If a migrated TPM for OSd 7.1 System Profile is detected as inconsistent on the TPM
for OSd 7.1 server, you are shown a warning together with a possible explanation.
For example, a real customer scenario in an RDM environment is the image capture (GetDonor
task) on a target managed by an RDM Remote Deployment Server (RDS).
In this case the image is first saved into the shared repository of the TPM for OSd 5.1.1 child
server bound to the RDS and then, during the same RDM GetDonor task, it is replicated to the
RDM MDS.
The image information is stored in the RDM database only after a successful replication. If, for any
reason, this replication fails, the image appears with a yellow triangle on the TPM for OSd 5.1.1
Web UI of the master server, to signal its incompleteness (see picture below). The profile does not
appear among the list of available RDM images.

When you run the tool, due to the full replication of the TPM for OSd 5.1.1 shared repository, any
data about this incomplete RDM image is also migrated to the 7.1 server, but the System Profile is
not usable.

This is indicated in the migration tool log file, as shown below, and you must decide how to
proceed.
2009-02-19 10:44:58 - Sanity check on System Profile: 4367_sles10sp1_x64
2009-02-19 10:44:58 - System Profile 4367_sles10sp1_x64 is incomplete on 7.1
server
2009-02-19 10:44:58 - The migrated System Profile 4367_sles10sp1_x64 is not
known to RDM server:
2009-02-19 10:44:58 - If this is a VMWare ESX 3.5 native image, check its
sanity directly on TPM for OSD 5.1.1 Web UI.
2009-02-19 10:44:58 - If this is an RDM image captured from a RDS, this means
that replication to the MDS failed.
2009-02-19 10:44:58 - Please:
2009-02-19 10:44:58 - - Run again the GetDonor with the SKIPCAPTURE
parameter to complete the image on the MDS
2009-02-19 10:44:58 - - Upon this RDM task completed successfully, run this
tool again
2009-02-19 10:44:58 - Sanity check on System Profile completed

You can now proceed in either of the following ways:


1) Complete the GetDonor, launching it again to the same target, with the SKIPCAPTURE
option. This starts the replication again. The target need not be physically attached to the
network. When the replication completes, you can launch the migration tool again. The
yellow triangle on 7.1 disappears.
2) Delete the profile from TPM for OSd 7.1 Web UI.

PXE server coexistence


The RDM migration scenario has been designed to let both the RDM 4.40.1 and TPM for OSd –
ISDE 7.1 servers coexist in the same subnet until the migration has completed: this allows users to
continue to use RDM for legacy tasks and begin to use the TPM for OSd server.
To perform deployment tasks, both RDM 4.40.1 and TPM for OSd – ISDE 7.1 act as PXE servers:
this protocol is used to take control of target machines performing a network boot. Having more
than one PXE server per subnet is a supported scenario, but some configuration is needed to
prevent unpredictable behaviours.
In such a coexistence scenario, you need some targets to join your RDM server and some others to
PXE boot into TPM for OSd: here we explain how to handle this behaviour with the virtual
environment described above.

If you have two PXE servers running in the same subnet (the old RDM 4.40.1 and the new TPM for
OSd – ISDE 7.1) and an ISC DHCP server not configured with options 60 and 43, it is
unpredictable which server will control the network boot of a machine: a target PXE-booting might
join both PXE servers and you cannot predict which one will be used until you stop one of them.
In such a scenario, both PXE servers answer the target PXE boot and try to contact the PXE client.
The following example shows a network capture in which a target receives 3 DHCP OFFER packets from:
- The DHCP server with IP address 192.168.1.99
- The RDM server with IP address 192.168.1.1
- The TPM for OSd server with IP address 192.168.1.10
It replies with a DHCP REQUEST packet to:
- The DHCP server to get the IP address
- The RDM server to continue the PXE boot
Without any configuration, you cannot be sure which PXE server is addressed by the last DHCP
REQUEST packet (sent to continue the network boot on the selected PXE server).

Note that in this network capture both PXE servers offer the PXE boot (each provides its
DHCP OFFER packet) and the target chooses the RDM server (with a DHCP REQUEST packet
to the IP address 192.168.1.1).

To control the PXE process and be sure which server is selected during the target network boot, you
can use either of the following options:
- Use the options 60 and 43 on the DHCP server
- Use the option “Use alternate PXE server” on RDM 4.40.1 without adding any option to
your DHCP server.

“Use alternate PXE server” option


RDM 4.40.1 embeds a TPM for OSd 5.1 engine to implement the PXE protocol. To allow PXE
server coexistence, the “Use alternate PXE server” option is provided (on both the embedded
TPM for OSd 5.1 engine and TPM for OSd 7.1). By enabling this option on the TPM for OSd
5.1 embedded engine, during a target network boot the RDM 4.40.1 server does not reply with a
DHCP OFFER packet and the PXE process continues with the other listening PXE server.

To activate this option for a specific target follow this procedure:
1. On your RDM 4.40.1 server, open the TPM for OSd 5.1 GUI
a. Open the browser at http://<RDM_server_ip>:8080
b. Insert Bootserver GUI login credentials provided at RDM 4.40.1 installation time
(the default username is rdmadmin)
2. From OS Deployment -> Host Monitor -> double-click your target

3. Scroll down the page and click Edit in the Boot Settings section

4. Under PXE boot options, check “Use alternate PXE server”

To select this behaviour for unknown targets that will later PXE boot, but that are currently not
listed under the Host Monitor page:
1. On your RDM 4.40.1 server, open the TPM for OSd 5.1 GUI
a. Open the browser at http://<RDM_server_ip>:8080
b. Insert Bootserver GUI login credentials provided at RDM 4.40.1 installation time
(the default username is rdmadmin)
2. From OS Deployment -> Host Monitor -> select the Default group

3. On the pane below, select the option “Configure handling of unknown hosts”.

4. On the pop-up that opens, check the option “Let unknown computer contact another PXE
server”

If you enable this option (for a specific target or for unknown targets), even if the PXE server is
running, it does not reply with a DHCP OFFER packet and the other PXE server is contacted
during the target network boot.
If you set this option on a specific known target of the RDM 4.40.1 server, at the next network boot
this target joins the TPM for OSd – ISDE 7.1 server. The following example shows a network
capture in which the RDM server does not reply to the PXE process and the TPM for OSd 7.1 server
continues the bootstrap for the target.
As you can see, the target receives 2 DHCP OFFER packets from:
- The DHCP server with IP address 192.168.1.99
- The TPM for OSd server with IP address 192.168.1.10
It replies with a DHCP REQUEST packet to:
- The DHCP server to get the IP address
- The TPM for OSd server to continue the PXE boot
Note that the “Use alternate PXE server” option prevented the RDM server from sending a DHCP
OFFER packet and redirected the PXE boot to the TPM for OSd 7.1 server.

The following line appears in the RDM server log file (<RDM_ROOT>\EE_Files\logs\boot.log),
meaning that the PXE boot has been redirected to another available PXE server:

A common scenario in a real environment might be to redirect the PXE boot process from the RDM
server to the TPM for OSd 7.1 for all the unknown targets: in this way all RDM 4.40.1 known
targets continue to work with the RDM server, while all new machines (considered by RDM as
unknown computers) are redirected to the TPM for OSd – ISDE 7.1 server.

DHCP options 60 and 43


The PXE boot process relies on a running DHCP server: as shown in the previous network
captures, the first step for a PXE client is to obtain an IP address, and this is performed with the first
DHCP OFFER – DHCP REQUEST packet exchange; the second pair is then used by the PXE
client to start the communication with the PXE server.
It is also possible to configure the DHCP server to customize several behaviours of the PXE
process; here we show how an ISC DHCP 3.0 server can be configured to assign a specific PXE
server to each specific target: this is done with the DHCP options 60 and 43.
The PXE protocol specification defines in detail all the options available on DHCP servers
supporting the PXE process: read the PXE protocol specification for further details.

Suppose that, in the virtual environment described above, we want to assign the RDM 4.40.1 server to
the virtual target RDMTarget1 and the TPM for OSd – ISDE 7.1 server to the virtual target
RDMTarget2; in such a scenario every network boot performed by the machine RDMTarget1
joins the RDM 4.40.1 server, while the other machine RDMTarget2 always PXE boots to the TPM
for OSd – ISDE 7.1 server.

As a reference, you can use the following dhcpd.conf file, which was tested in this scenario:
# dhcpd.conf

option domain-name "example.org";


option domain-name-servers ns1.example.org, ns2.example.org;
default-lease-time 600;
max-lease-time 7200;
option subnet-mask 255.255.255.0;

# defining option 43
option space PXE;
option PXE.discovery-control code 6 = unsigned integer 8;
option PXE.boot-server code 8 = { unsigned integer 16, unsigned integer 8, ip-address};
option PXE.boot-menu code 9 = { unsigned integer 16, unsigned integer 8, text};
option PXE.menu-prompt code 10 = {unsigned integer 8, text};

ddns-update-style none;
ddns-updates off;
log-facility local7;

allow booting;
allow bootp;

# subnet declaration
subnet 192.168.1.0 netmask 255.255.255.0 {

# these options are also needed for Linux deployments


option subnet-mask 255.255.255.0;
option domain-name-servers 192.168.1.99;
option domain-name "site";

option broadcast-address 192.168.1.255;


default-lease-time 6000;
max-lease-time 6000;

# this target will PXE boot to RDM 4.40.1 server at IP 192.168.1.1


host RDMTarget1 {
hardware ethernet 00:0c:29:c4:73:20;
fixed-address 192.168.1.180;

# option 60
option vendor-class-identifier "PXEClient";

# option 43
vendor-option-space PXE;
option PXE.discovery-control 7;
# providing PXE server IP address
option PXE.boot-server 15 1 192.168.1.1;
option PXE.boot-menu 15 10 "RDM 4.40.1";
option PXE.menu-prompt 0 "RDM 4.40.1";
}

# this target will PXE boot to TPM for OSd - ISDE 7.1 server at IP 192.168.1.10
host RDMTarget2 {

hardware ethernet 00:0c:29:ef:c9:b7;
fixed-address 192.168.1.179;

# option 60
option vendor-class-identifier "PXEClient";

# option 43
vendor-option-space PXE;
option PXE.discovery-control 7;
# providing PXE server IP address
option PXE.boot-server 15 1 192.168.1.10;
option PXE.boot-menu 15 15 "TPM for OSd 7.1";
option PXE.menu-prompt 0 "TPM for OSd 7.1";
}

range 192.168.1.150 192.168.1.180;
}
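Before restarting the DHCP service, you can sanity-check the edited configuration; with ISC dhcpd 3.x this can be done with a command like the following (the configuration file path is an example):

dhcpd -t -cf /etc/dhcpd.conf

If the file parses cleanly, dhcpd exits without reporting errors.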

Such a DHCP configuration can be summarized as follows:


Virtual machine name | Virtual machine MAC address | Virtual machine IP address | PXE server IP address | PXE server name
RDMTarget1 | 00:0c:29:c4:73:20 | 192.168.1.180 | 192.168.1.1 | RDM 4.40.1
RDMTarget2 | 00:0c:29:ef:c9:b7 | 192.168.1.179 | 192.168.1.10 | TPM for OSd 7.1

If we perform a network boot with machine RDMTarget1, it will contact the RDM 4.40.1 server:

RDMTarget2 will be routed to the TPM for OSd – ISDE 7.1 server:

Concept mappings

This section explains how RDM 4.40 concepts and terms are mapped to TPM for OSd - ISDE
7.1 objects; you are led through the TPM for OSd 7.1 object model to become familiar with its
terminology. The same mapping is valid for stand-alone TPM for OSd 7.1.

OS and application images


An RDM image can be an operating system or a software application and consists of all the files
needed to install the given operating system or application.
In RDM, images are physically stored as:
- A compressed zip file inside the RDM repository at path
<RDM_Root>\repository\image\<image_internal_name> (this is the legacy format)
- A TPM for OSd 5.1.1 image (this is the new format from RDM 4.40)
The RDM images are any of the following types:
- Windows Native Install (WNI) - Application
- Windows Native Install (WNI) - Operating System
- Linux Clone Install (LCI) - Application
- Linux Clone Install (LCI) - Operating System
- Windows Clone Install (WCI) - Operating System
To check the RDM image type, refer to the following table:
RDM Image type | Description | How to check
WNI Application | Windows application images used for WNI and WCI Tasks | Director Console -> Tasks -> RDM tools -> Image Management -> Create and Modify Images -> Edit image -> Setup panel -> “image type” is Application
WNI OS | Windows OS images used for WNI Tasks | Director Console -> Tasks -> RDM tools -> Image Management -> Create and Modify Images -> Edit image -> Setup panel -> “image type” is Operating System
LCI Application | Linux application images used for LCI Tasks | Director Console -> Tasks -> RDM tools -> Image Management -> Create and Modify Images -> Type LCI and Internal Name ending with .zip
LCI OS | Linux OS images used for LCI Tasks | Director Console -> Tasks -> RDM tools -> Image Management -> Create and Modify Images -> Type LCI and Internal Name not ending with .zip
WCI OS | Windows OS images used for WCI tasks | WCI are always Operating System images

In TPM for OSd 7.1, images are grouped into:


- System Profiles: they refer to OS images, both for unattended setup and for deployment
of cloned profiles
- Software Modules: they refer to applications, drivers, and custom scripts run after an OS
installation
TPM for OSd 7.1 images are indexed with an internal method and are stored at path
<TPMfOSd_DataDir>\shared

RDM images can be mapped against TPM for OSd System Profiles and Software Modules as
shown in this table:

RDM 4.40.1 Image | TPM for OSd 7.1 equivalent object | Can be migrated to 7.1
WNI Application | Software Modules | Yes
WNI OS | System Profile | No
LCI Application | Software Modules | Yes
LCI OS | System Profile | Yes
WCI OS | System Profile | Yes

Tasks
RDM WNI / LCI / WCI Tasks contain information used at deployment time to customize the image
installation with parameters such as network configuration, user details, product-key, and so on.
This RDM concept is mapped to the TPM for OSd OS Configuration object.
In TPM for OSd, each System Profile contains at least one OS Configuration describing the
parameters to be used at deployment time. Just as you can have several WNI / LCI / WCI Tasks
referring to the same RDM image, you can have several OS Configurations for each System Profile.
You find the same parameters in an RDM image:

as you find in a TPM for OSd 7.1 OS configuration:

The table below shows the mapping between some RDM Task parameters and TPM for OSd OS
Configuration parameters:

RDM Task Parameter | TPM for OSd OS Configuration Parameter | Additional details
Disk Configuration | Partition Layout | This is both a System Profile and an OS Configuration parameter. The first one can override the last one.
Images | Binding Rule (for the Software Modules) | In RDM, this lists the additional applications to install at deployment time. In TPM for OSd this can be done by creating a Binding Rule for the Software Module; otherwise you can manually add an application when running a deployment directly with the “Deploy Now” wizard.
Personal, Password, Licensing, Regional, Network | Similar names for OS Configurations | Viewing / editing TPM for OSd OS Configurations, you will access several panels showing these parameters.

The RDM GetDonor task is now (in TPM for OSd – ISDE 7.1) performed by capturing an image as
follows (see the “how to capture an image” section):
- OS Deployment -> OS Configuration -> New Profile -> Cloning from a reference machine

In TPM for OSd 7.1, if the Microsoft Sysprep tool has not been run on the reference machine, you
are prompted with a warning message.

The RDM PutDonor task requires that you (see “how to restore an image” for more details):
- Select your target from the Director console -> Actions -> OS Deployment -> Additional
Features -> Restore a profile

This wizard also asks you if you want to run the Sysprep mini-setup phase after the restoration:

If you are restoring an image captured without running Microsoft Sysprep, then this panel is not
important because the mini-setup step is always skipped during the Restore a profile task. It is
suggested that you check the option “disable SysPrep mini-setup” when you are using sysprepped
images and you need to skip the mini-setup. Skipping the mini-setup phase during a profile
restoration allows users to have the same behaviour as that for an RDM PutDonor task.

Note that the default behaviour in TPM for OSd – ISDE 7.1 after a deployment task is completed is
to boot the target from its local hard disk, while in RDM the default behaviour was to shut down the
system.

Hardware configuration
In RDM 4.40, the tasks involving hardware were:
- CMOS update
- RAID clone
- RAID configuration
- Remote Storage configuration
- Secure Data Disposal
- System Firmware Flash
As shown by the picture below they were listed together with other tasks:

Some of them required specific RDM images to be run while others required some configuration
parameters.
In TPM for OSd 7.1, the hardware configuration tasks you can perform are:
- RAID configuration
- BIOS update / settings
- Hardware discovery / Capture hardware parameters
- Hardware custom task
- Destroy Hard-Disk content
The hardware tasks are mapped as described in the following table:

RDM HW task | OSd – ISDE HW task | Director 6.1 Web UI path
CMOS update | Bios Settings | OS Deployment -> HW Configurations -> New Hardware configuration
RAID clone | Capture hardware parameters | Select the target -> Actions -> OS Deployment -> Additional features
RAID configuration | RAID configuration | OS Deployment -> HW Configurations -> New Hardware configuration
Remote Storage configuration | * | *
Secure Data Disposal | Destroy HD contents | Select the target -> Actions -> OS Deployment -> Additional features
System Firmware Flash | Bios Update and Hardware Custom Configuration | OS Deployment -> HW Configurations -> New Hardware configuration

* This task is not explicitly available. TPM for OSd – ISDE 7.1 can discover the WorldWide Port
Name (WWPN) of IBM Fibre Channel adapters (using the Hardware Discovery / Capture
Hardware Parameters task) and can be used to start any command line needed to do host-local Fibre
Channel adapter configuration (using the Hardware Custom Configuration task).

As you can see, RDM hardware configuration tasks are also available in TPM for OSd – ISDE 7.1
except for the Remote Storage Configuration which is limited as described above.

The new concept in TPM for OSd 7.1 is the notion of hardware environment: it is the engine where
the hardware tasks are run. Basically, it is composed of an operating system running in a ramdisk
(for example, WinPE 2.x) and the vendor-specific scripting toolkit to access the specific hardware
devices. This means that you can create hardware environments only by using vendor-specific
binaries.
When the hardware environment is ready, you can create the hardware configurations object as you
did with RDM tasks. In TPM for OSd 7.1 each hardware configuration task (except for Secure Data
Disposal / Destroy Hard-disk content) always needs to know which hardware environment to use
according to the target model type. This means that a hardware environment is a prerequisite for
most of the hardware configuration tasks. For more information, see “How to work with Hardware
Configuration tasks” or refer to the TPM for OSd 7.1 documentation.

Applications and drivers


RDM applications were considered as RDM images installed at deployment time after the operating
system setup. It was possible to select which applications to install with a specific operating system
image by using the RDM Task. Together with operating system parameter configuration, it was
possible to add RDM applications to be installed. Moreover, for each RDM task, you could separate
each application installation by an optional reboot. The image below shows a specific WCI Task
containing a Windows application winApp installed after a Windows cloned image 2665_sysrepped:

In TPM for OSd 7.1, you can link an application installation to an operating system setup in two
ways:
- Explicitly when starting a deployment, during the Deploy now wizard
- Automatically with Binding Rules

The first way is the simplest one: when you start an OS deployment selecting one target, the Deploy
now wizard asks you which applications to deploy together with the image.

The screen capture below shows a list of the available applications (Software Module in TPM for
OSd 7.1) that can be explicitly deployed with the previously selected profile:

Note that if a software module (for example, a driver) is specific to an OS version that differs from
the one you selected during this wizard, it is not displayed in the section above.

The second, automatic method is to create Binding Rules. These rules are TPM for OSd objects that
can be assigned to system profiles and software modules. Every time a deployment is started on a
target, all binding rules are checked to verify whether a software module must be bound to the
current deployment.
In this scenario, we use binding rules to link each software module to a specific system profile so
that when the OS image is deployed, the linked application is bound to the same task. In this way
we obtain the same behaviour as RDM tasks where there was a matching between OS image and
linked application images.

To create Binding Rules, select your application and edit a new Rule

Then choose as matching parameter the “System Profile” and as matching value the OS image you
want this application to be bound to:

In the above image we are linking a Windows Application to a Windows XP profile: now this
application will be automatically deployed when deploying that OS image.

You can customize similar behaviour by using several other parameters: target model, target
architecture, and so on.
Note that when creating software modules containing Windows drivers, the wizard by default
creates binding rules using PCI definitions. This means that TPM for OSd adds the Windows driver
using the PCI definition of target system devices.
If you created a driver using default parameters, you see all the binding rules created using PCI
definition:

For more details about TPM for OSd drivers, see “How to create drivers”.

An important difference from RDM is the order in which applications are installed at deployment
time, because it is no longer defined at RDM Task level. Software modules are now handled with a broader
scope by what are now called Software Stages.
A software stage is a static property of a software module that you cannot change for each
deployment as you could in RDM where the reboots between applications were related to each task.
From Director Console -> OS Deployment -> Software Modules -> Reorder software it is now
possible to choose the execution stage for application installation, because the time when the
application is installed is no longer a task property but a global parameter of each specific
application.

Deployment Servers
In RDM, the component that handled the deployment activities was called DServer; in TPM for
OSd – ISDE 7.1 it is called the OS Deployment Server. In IBM Director 6.1 you install a boot server
object that is the OS deployment server implemented by TPM for OSd 7.1.

Note that:
- IBM Director 5.20 had Boot servers implemented by the RDM plug-in with the TPM for
OSd 5.1.1 engine
- IBM Director 6.1 has Boot servers implemented by TPM for OSd – ISDE 7.1.

Moreover, in the multi-server infrastructure:


- RDM DServers were either Master or Slave according to their role in the hierarchy
- TPM for OSd – ISDE 7.1 now has parent and child OS Deployment servers

- In both cases only one Master or parent was allowed.

Target model checks


In RDM and TPM for OSd – ISDE 7.1 you can deploy a previously captured image on a target with
a model type different from the original one (but consider that the image might not work properly).
If you try a PutDonor, WCI, or LCI task on a different target model, RDM 4.40.1 informs you with a
pop-up message like the following:

In TPM for OSd – ISDE 7.1 you can choose how the product is to behave when a deployment is
started on a different model type; you have three possible choices:
- No checks: do not check model
- Softly: the target UI informs you with a warning and allows you to bypass it
- Strictly: do not allow deployment on different models

This behaviour is handled by a TPM for OSd object called Deployment Scheme (listed in the Task
Templates list):

The “Default” deployment scheme has the softly behaviour, but you can change it as shown below:

Refer to the TPM for OSd 7.1 documentation for more details.

Replication
Image replication now behaves differently:
- RDM images captured on RDSs were automatically uploaded to the MDS at the end of
the GetDonor tasks. RDM images captured on the MDS were not replicated on all RDSs
but downloaded on the involved RDSs only when a deployment was started
- By default, TPM for OSd – ISDE images on the parent OS deployment server are
automatically replicated to all the child OS deployment servers. Images on the child
servers are not automatically uploaded to the parent server.
The suggested procedure for TPM for OSd 7.1 multi-server environments is to have a separate
infrastructure to create and test images, export them to RAD files, and then import them at the
necessary hierarchy level of the TPM for OSd – ISDE 7.1 production infrastructure.

Use cases

The RDM plug-in has two entries in the Director 5.20 console:
- Remote Deployment Manager
- Remote Deployment Manager Tools

In IBM Director 6.1 you can access TPM for OSd – ISDE 7.1 activities from:
1. Welcome -> Manage panel
2. Release Management -> OS Deployment section
3. By selecting an OS Deployment Action on a resource

How to discover new targets


TPM for OSd – ISDE 7.1 acts as a boot server in Director 6.1: this means that you can discover
unknown targets performing a PXE boot. If you have a working OSd server you just need to boot
your target from the network and it is shown in the Director console.
TPM for OSd 7.1 behaviour is handled by a Task Template of type Idle Layout. This kind of
template configures the behaviour of the OS deployment server for unknown targets that have
just performed their first PXE boot.

From OS Deployment -> Deployment Schemes -> Idle Layout select the default Idle State

Customize the parameters as needed:

Pay particular attention to the following parameters:
- Perform inventory systematically: a basic HW inventory is performed so that the newly
discovered targets are shown in the Director console with some basic information about
their H/W
- Completely ignore unknown targets: if set to yes, unknown computers never contact OS
deployment servers during PXE boot and are never shown in the Director console
- Make unknown computer boot on their Hard Disk: if set to yes, unknown computers are
added to the Director console and will boot from their hard disk
- Make unknown computer boot on their Hard Disk when there is no pending task: if set
to yes, unknown computers are added to the Director console and boot from their hard
disk if there is no deployment activity scheduled.

The default behaviour for targets that are just being discovered (unknown targets for TPM for
OSd 7.1) is to boot from their hard disk just after the basic inventory scan. This means that you just
need to boot them from the network: the new resources are shown in the Director console and then start
the operating system (if any) installed on their hard disk.
Consider that the default behaviour for known hosts is to boot from the hard disk if there is no pending
activity. In this way it is possible to set the network as the first boot device: TPM for OSd redirects
the boot to the hard disk if no deployment activity is scheduled.

If you power on your target to boot from the network, it PXE boots, joins TPM for OSd, and is redirected
to its hard disk. It is then added as a resource in the Director console.

Because a basic hardware inventory has been performed by the OS deployment server, you can
view some information in two ways:
- Select the resource and view the Inventory panel
or
- Check your host -> Actions -> View and Collect Inventory

The above page only shows general inventory details; if your target was discovered with a TPM for
OSd network discovery (as in this scenario), you can see a detailed inventory report if you select
your resource -> Deployment Properties -> Inventory:

If you need additional details during an inventory scan performed at discovery time (RAID, Fibre
Channel), you have to modify the Idle Layout template to run a Hardware discovery capture task:
see “Hardware configuration: Discovery” for more details.

Note that at every network boot of unknown hosts, TPM for OSd – ISDE 7.1 notifies Director 6.1
of newly discovered resources. This causes the target to be shown in the Director console; for
troubleshooting purposes, refer to the <TPMforOSd_DataDir>\logs\event.log file which contains
notifications sent by TPM for OSd to the Director.
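For a quick look at the most recent notifications on a Windows server, assuming the default data directory shown earlier (C:\TPMfOS Files), you can use a command like:

type "C:\TPMfOS Files\logs\event.log"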

How to run a deployment task


Here we explain in general terms how to start an OS deployment activity: to create OS images to be
deployed you might need to read sections “How to capture images”, “How to restore images”, and
“How to create images from installation medias”.
First of all you must have your target under the Director console.
You can do this with:
- A network discovery, which is the suggested way when using OS deployment server, see
“How to discover new targets”
- An import, see “How to import/export targets”
- Other Director discoveries

The suggested way is to perform a network boot for an unknown target and wait until it joins the TPM
for OSd Deployment Engine and is shown under the Director console -> Navigate Resources ->
All Systems as a Server resource type, as shown below:

To start a deployment task, select the target (check that you selected the resource listed with type
Server and not Operating System) -> Actions -> OS Deployment -> Deploy now:

When running OS Deployment tasks, ensure you always use resources listed with type “Server”.
A wizard starts and leads you through configuring this deployment. You are prompted to choose, not the
System Profile, but the OS configuration: the first is only the image while the second contains the
configuration needed to deploy the OS.

The deployment starts. Click the link below to see the task activity status:

To check the OS deployment activity you can always use this path:
- Director Console -> Welcome -> TPM for OSd icon -> History -> OS Deployment task

How to import and export targets
To export and import targets, you must directly use the stand-alone TPM for OSd Web UI:
- Open a browser
- Go to http://<serverIPaddress>:8080 where <serverIPaddress> is the IP address of
your parent OS deployment server
- Log in with TPM for OS Deployment Administrator credentials provided during OS
deployment server installation
- Go to OS deployment -> Target Monitor and click the “Export Targets” / “Import
Targets” buttons

When you export targets, a CSV file containing all the hosts known to your TPM for OSd server is
created:

If you need to import targets, you have to use a similar CSV, but the following requirements must
also be met:
- Description column must not exist or it must be empty
- UUID column must not be empty
- MAC column must not be empty

For example, you can use the following CSV file:
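A minimal sketch of such a file (the UUID is an illustrative example; keep the same column layout and separator as an exported file):

UUID;MAC;IP
6f3c91a2-0000-0000-0000-000000000001;00:0c:29:ef:c9:b7;192.168.1.179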

and the target is shown in the Director Console:

If you do not use the IP column in the CSV file:

then the machine is shown in the Director console with its MAC address.

How to capture images


To capture an image from a target, the suggested way is to discover your donor with a network boot
and check that the resource object is then listed under the “Navigate resources” menu with the Server
type. Otherwise, if you discovered your target with a Director discovery you have to:
1. From “Navigate resources”, check that your host is listed with a Server resource type. If
not, then find your host listed with the Operating system resource type and select
“Request Access”: this creates the related Server resource type. Director discovery only
creates hosts with Operating System resource type; to be able to use these resources for
OS Deployment tasks, you need to create the Server resource type as described.
2. Select your donor object from the “Navigate resource” menu (make sure you select the
Server resource type and not the Operating System resource) and click the “Deployment
Properties” tab. Check that the page is correctly displayed and check Serial Number,
UUID, MAC, IP values (not all fields are necessarily filled). This second step is optional
and just verifies that the resource is ready to be captured.

When your donor is listed as a Server resource type, then you can start the image capture with the
following steps.

First of all consider that the OS capture task needs to know the IP address which identifies the target
in the TPM for OSd host list. If you discovered your target with a TPM for OSd network discovery,
it is the IP address obtained by the target during the last discovery: this means that you have to
know the IP address provided by the DHCP server during the last PXE boot.
Note that when you start a capture task, your target might have a different IP address from the one
obtained at discovery time, so be sure that you are using the IP address as known by TPM for OSd –
ISDE 7.1.

To know the target IP address, if you just discovered your machine with a network boot, you can
find it on the Director console:

Alternatively, for a further check, you can open the stand-alone TPM for OSd console and browse
the Target Monitor page looking for your target:
- Open a browser
- Go to http://<serverIPaddress>:8080 where <serverIPaddress> is the IP address of
your parent OS deployment server
- Log in with TPM for OS Deployment Administrator credentials provided during OS
deployment server installation
- Go to OS deployment -> Target Monitor and find your machine

When you know the IP address of the target as listed in TPM for OSd, you can start the capture
task.

From the Director console, go to OS Deployment -> OS Configuration -> New Profile to start the
capture from a reference machine:

Then insert the IP address that you registered from the “Navigate Resources” page:

If the machine is running its operating system or is powered off, you have to force a network boot
when you see the following image.

Note that if you change the Idle State template, you can have previously discovered hosts load the TPM
for OSd Deployment engine (locked screen) and wait in this state for scheduled OS deployment
tasks: if you see your target as in the picture below, you do not need to reboot it to capture its image.

When the target loads the TPM for OSd deployment engine, the OS capture starts:

Because this is handled as a normal profile, the first needed OS Configuration is created with the
prompted parameters:

Then the image capture activity starts as follows:

If you have not run Sysprep on a Windows image, you are informed and prompted for confirmation
before the wizard completes.
Note that if you captured a non-sysprepped image, it might not work properly when deployed on a
target using the "Deploy Now" wizard (a deployment activity; see "How to run a deployment task").
Instead you have to run a "Restore" task for that image (a restore profile activity; see
"How to restore an image").

To map these scenarios with RDM 4.40 you can consider (just as a simplified reference) the
following table:
RDM Task     OSd Task
GetDonor     Capture an image, with or without Sysprep
PutDonor     Restore a non-sysprepped image
WCI          Deploy a sysprepped image

When completed, the captured image is shown as a System Profile under OS Deployment -> System
Profiles:

with its parameters:

and its first default OS Configuration, which you can now edit, change, or duplicate:

Note that the above configuration parameter does not work on images captured without running the
Microsoft Sysprep tool. If you want to restore a non-sysprepped image, follow the Restore Profile
task (see "How to restore an image").

To summarize, if you want to capture an image:


1. Discover your target with a network boot
2. Start the profile creation wizard from OS Deployment -> System Profiles -> New Profile
and force the target to boot from the network

If the Idle State template is configured in this way:

The early-discovered target automatically starts the capture without needing a reboot from the
network (step 2). Otherwise it boots from its hard disk and you must manually reboot it from the
network just after you start the image capture activity from the Director console.

It might happen that you discover a machine with a given IP address and, by the time you start a
capture (or a deployment) task, that IP address lease has expired and a new one is provided by your
DHCP server: TPM for OSd tasks will work without problems. If the IP address used at discovery
time changes during the network boot that starts the OS deployment activity (capture or deployment),
the product handles this and allows the task to run without any manual intervention.
For example, the following scenario can happen:
- With a network boot you discover a machine with IP = 192.168.1.1
- After one week that IP is being used by another machine
- You start an image capture or a deployment on this target, which is registered and known
with IP 192.168.1.1
- At network boot time a new IP is provided: 192.168.1.2
- The capture or the deployment starts without problems because TPM for OSd can use the
MAC, UUID, and serial number to detect the IP address change at PXE boot time

How to restore images


This scenario can be mapped to the RDM PutDonor task: usually this task is performed if you want
to install OS images that were captured without running Sysprep on them first. Be sure that the image
you captured without Sysprep can be deployed on the selected target: to check this, choose your
System Profile and verify the H/W model, partitions, and so on:

Then select the target and go to Actions -> OS Deployment -> Additional features:

Restore a profile:

All the available OS Configurations that can be restored on that target are shown.

Note the "disable SysPrep mini-setup" flag. If the image you are restoring was captured without
running Sysprep on the donor, this flag does not matter, because the mini-setup phase is automatically
skipped. If you are restoring a sysprepped image and you need to skip the mini-setup step, then you
must enable the flag.
Here we checked the option, but because we are restoring an image that was not sysprepped, it would
also work without checking it: for this kind of image the mini-setup step is never run.

If you are working with images where Sysprep has been run on the donor, then the common scenario
is not the restore but the System Profile deployment (see "How to run a deployment task"), which is
more similar to the RDM WCI / LCI task.

How to create images from installation media


In RDM you created images (from installation media or with RDM GetDonor tasks) and used them
inside RDM tasks: with tasks it was possible to configure image parameters. In TPM for OSd –
ISDE 7.1, OS images are now called System Profiles and their customizations (product key and other
parameters) are called OS Configurations.
The section "How to capture images" explains how to create images (here called system profiles)
by capturing them from a reference machine (RDM GetDonor task). Here you can see how to create
images from installation media; in TPM for OSd 7.1, this is also called unattended setup.
Basically these steps replace what you did when using the RDM “Create and Modify images”
panel:

When creating the unattended system profile (from installation media), the wizard leads you to
create an initial OS configuration. You can then edit it or create new configurations as you prefer.
This is an important difference from RDM, where an image was created independently of its
parameters. In TPM for OSd each system profile is created together with its first initial OS
configuration.
To start this task, first check that the Web Interface Extension is running:

For additional details about the Web Interface Extension, refer to “Appendix A: Web Interface
Extension”.
From OS Deployment -> OS Configuration -> New Profile -> Unattended setup, you can start the
wizard to create an image from installation media:

The wizard leads you to provide the image parameters for the needed initial OS configuration until
the System Profile creation process completes:

In TPM for OSd, system profiles have general parameters related to the OS image while each
configuration contains specific details.

As you can see, system profile parameters are related to disk partitions and HW model,

while each OS configuration contains more specific parameters:

At the bottom of the System Profile page, its OS configurations are listed. The first initially created
OS configuration is shown below:

Right-click the configuration to duplicate it and edit the new one:

In our example we create a new OS configuration to be used for a specific HW model:

To link the duplicated configuration to a specific HW model, you have to create a Binding Rule that
automatically binds this image to that model. An example is shown below:

In an RDM task you selected which applications were installed with a specific image: here you can
do the same by linking applications to this profile using binding rules. You just have to create
rules for the specific application you want to bind and select, as the matching parameter, the profile
your application will be deployed with.
For example, to link our test "Windows Application" to a Windows XP system profile, first edit
your TPM for OSd 7.1 software module from OS Deployment -> Software Modules -> Edit:

From the Software binding rules section -> New Rule, insert the matching parameter that you need:

Here we created the “Binding to Xp profile” rule to allow this application to be always deployed
with the specified system profile.

Note that the disk configuration parameter is both a system profile property and an OS
configuration property (the latter has higher priority):

We also created a RHEL 4 system profile:

With its OS configuration and specific parameters:

How to create drivers


In RDM 4.40, drivers were part of the WNI template as shown below:

In TPM for OSd 7.1, drivers are considered a type of Software Module: they are not bound to
any OS image, but you can add them to a deployment activity as with any other application:
- Explicitly, during the "Deploy Now" wizard
- With Binding Rules
You can deploy a Windows driver (during text-mode setup, GUI setup, or later) with both Windows
cloned and unattended images by creating a software module for your driver and linking it to a
deployment activity.

With TPM for OSd you can create a single driver by providing the folder where it is located, or you
can create several drivers at the same time by placing each driver in its own subfolder and giving
the wizard the parent directory as the source path. To import a single driver, refer to the TPM for
OSd 7.1 documentation.

Drivers from IBM ServerGuide


To import drivers from the IBM ServerGuide CD, create the software module by following the New
Software Module wizard.
From Release Management -> Software Module, select New Software and “A Windows Driver to
include in a deployment”

Supposing we want to load all the text-mode drivers for Windows 2003, this is the path from the
ServerGuide CD:

Here each driver is contained in its own subfolder. Because TPM for OSd expects each driver
to have a specific subfolder, you need to use this path from the ServerGuide CD:
\ServerGuide\SG_8.1.0\SGUIDE\W2003DRV\$OEM$\$1\DRV. The driver creation wizard then
automatically recognizes each subfolder and creates a specific driver for each of them, as sketched below:
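
As an illustration (the subfolder names here are hypothetical), the layout the wizard expects is one
subfolder per driver under the source path:

\ServerGuide\SG_8.1.0\SGUIDE\W2003DRV\$OEM$\$1\DRV
   \ADAPTEC     <- one text-mode driver per subfolder (.inf, .sys, and related files)
   \LSI
   \SERVERAID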

Not all subfolders contain valid drivers: some of them contain particular .exe files that are not
recognized by TPM for OSd. You can now select the checkboxes of the drivers you want to import.
It is also strongly suggested that you group your drivers using folders as shown below:

Note that TPM for OSd asks you to create Binding Rules (see the TPM for OSd 7.1 documentation for
more details) to automatically link drivers to OS deployments according to the PCI devices of the
target machine. This is one of the best ways to ensure that the correct drivers are added to the
correct machine without explicitly selecting the driver when starting the deployment. If you choose
to create binding rules based on PCI ID, when you start a deployment activity on a target, the
inventory scan performed on it tells TPM for OSd which drivers are needed, and they are
automatically added to your OS image.

Be careful when selecting the OS version for a specific driver: inserting wrong values here will
prevent the software module from being bound during the deployment. Only drivers for compatible
OSs are added to the deployment.

Your drivers are now created:

Check each driver and modify it if necessary:

Drivers from UpdateXpress System Pack
A useful tool for handling drivers is UpdateXpress System Pack: it assists you in downloading the
correct drivers according to the IBM machine type. If you download and start the UpdateXpress
System Pack Installer (a single executable on Windows), a graphical wizard leads you through
downloading the correct drivers for your machine type. Drivers are usually provided in a
self-extracting format: you run the executable, unpack it, and load the driver files as Software
Modules in TPM for OSd.

Supposing you want to deploy Windows 2003 on an IBM x3200 - 436720z, you will need at least
the text-mode storage drivers, which prevent the blue screen error (BSOD) during Windows setup.
First you have to download the driver: it is an .exe file that you have to extract to your hard disk:

Then you can create the software module containing the driver:

Select the folder where you unpacked the driver and choose the correct OS type:

This driver is bound only when deploying Windows 2003 profiles:

The binding rules created by default using PCI IDs link this driver to target machines matching
the driver's PCI ID, to ensure that the driver is installed only on targets containing that specific
device. To check this, select your system under Director resources -> Deployment Properties -> Bindings
and you see the bound driver: "by generic rule" means that the software has not been linked
manually, but by binding rules (PCI ID in this case):
Working with Hardware Configuration tasks
In TPM for OSD – ISDE 7.1 you can perform the following hardware configuration tasks:
- RAID configuration
- BIOS update/settings
- Hardware discovery and Capture hardware parameters
- Hardware custom task
- Destroy Hard Disk content (Secure Data Disposal in RDM)
Note that these tasks can be performed:
- Only on x86 and x86-64 platforms
- With IBM and non-IBM machine models.
TPM for OSd 7.1 hardware configuration features are all available under two separate paths in the
Director console:
- OS Deployment tasks
- Additional features
OS Deployment tasks are available from Director Web UI -> Release Management -> OS
Deployment -> Hardware Configurations as shown below:

Additional features are shown when you select your resource from Director Web UI -> Actions ->
OS deployment -> Additional features:

The following wizard starts:

Similarly, in RDM 4.40 the following hardware configuration tasks were listed:
- CMOS update
- RAID clone
- RAID configuration
- Remote Storage configuration
- Secure Data Disposal
- System Firmware Flash
This is also shown by the picture below:

These tasks are now mapped to TPM for OSd – ISDE 7.1 as described in the following table:

RDM HW task                    OSd – ISDE HW task             Director 6.1 Web UI path
CMOS update                    BIOS Settings                  OS Deployment -> HW Configurations -> New Hardware configuration
RAID clone                     Capture hardware parameters    Select the target -> Actions -> OS Deployment -> Additional features
RAID configuration             RAID configuration             OS Deployment -> HW Configurations -> New Hardware configuration
Remote Storage configuration   *                              *
Secure Data Disposal           Destroy HD contents            Select the target -> Actions -> OS Deployment -> Additional features
System Firmware Flash          BIOS Update and Hardware       OS Deployment -> HW Configurations -> New Hardware configuration
                               Custom Configuration

* This task is not explicitly available. TPM for OSd – ISDE 7.1 can discover the WorldWide Port
Name (WWPN) of IBM Fibre Channel adapters (using the Hardware Discovery / Capture
Hardware Parameters task) and can be used to start any command line needed to do host-local Fibre
Channel adapter configuration (using the Hardware Custom Configuration task).

In TPM for OSd – ISDE 7.1, all hardware configuration tasks (except Secure Data Disposal)
require a hardware environment linked to them. A hardware environment is the place where the
hardware configuration is run; it consists of an operating system and the vendor-specific toolkit
script, both running in a ramdisk, used to access the target machine. Without a hardware environment it
is not possible to work with the specific devices installed on a machine. The hardware configuration
task uses this environment for the specific operation you want to perform. You can think of it as
updating the BIOS on your laptop from a running operating system: you have to download the
vendor-specific executable and run it. The same happens with TPM for OSd – ISDE 7.1: the
hardware environment is the operating system where the hardware configuration task (the
executable updating the BIOS) is run. Read the TPM for OSd 7.1 documentation for more details.
So basically for every hardware configuration task you must:
- Create the specific HW environment if not yet created
- Create the HW configuration task specifying the HW environment to use
“Hardware Environment” contains a scenario describing how to create a Hardware Environment
based on the IBM WinPE 2.x. “Hardware configuration” describes some scenarios involving
hardware configuration.

Hardware environment
Following the steps in the TPM for OSd 7.1 documentation from Working with Hardware
Configuration -> Creating an environment, you can create the HW environment needed to run HW
configuration tasks. The suggested steps for creating these environments can be performed on the
Director server itself or on another machine running the Web Interface Extension.
The scenario below shows you how to create the HW environment named “IBM ServerGuide
Scripting Toolkit WinPE 2.x based” from a machine that is not the Director server: we will use this
environment on our IBM x3200 target.
First you have to check that Web Interface Extension is running:

If it is not, then click the red cross and follow the steps:

For additional details about the Web Interface Extension, refer to “Appendix A: Web Interface
Extension” of this document.
As a prerequisite we:
- Downloaded and installed Microsoft WAIK
- Downloaded and unpacked the IBM ServerGuide Scripting Toolkit
(ibm_sw_sgtkw_2_1_windows_i386.zip) at path “C:\IBM-SGTSK-WinPE2.x”
Then we followed the steps in the TPM for OSd 7.1 documentation:

In the example below the IBM ServerGuide Scripting Toolkit has been unpacked and the
SGTKWinPE.cmd command has been run as documented:
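
(A sketch of the invocation, assuming the toolkit was unpacked at C:\IBM-SGTSK-WinPE2.x; the
scenario .ini name is an assumption inferred from the output folder name mentioned below:)

cd C:\IBM-SGTSK-WinPE2.x
SGTKWinPE.cmd ScenarioINIs\Local\Raid_Config_Only.ini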

As described in the last line of this procedure:
“A directory .\sgdeploy\WinPE_ScenariosOutput\Local\RAID_Config_Only\ISO is created and
contains the environment tools”,
this folder contains both:
- The vendor specific scripting toolkit
- The WinPE binaries
So remember the name of this folder because you need to provide it as input when creating the HW
environment.

You might need a reboot after the SGTKWinPE.cmd command and before starting the HW
environment wizard in the TPM for OSd – ISDE Web UI.

Open the browser to Director Web UI -> OS Deployment -> HW configurations -> New
environment and follow the wizard:

As an example, suppose you followed the above steps and unpacked the IBM ServerGuide
Scripting Toolkit WinPE 2.x in the path "C:\IBM-SGTSK-WinPE2.x"; you should see something similar
to the picture below:

When you create the hardware environment IBM ServerGuide Scripting Toolkit WinPE 2.x, you
must fill in some parameters. The first panel asks you for the path where the ServerGuide Scripting
Toolkit (SGST) is located:

From the previous picture, the path to be inserted is:
"C:\IBM-SGTSK-WinPE2.x\sgdeploy\WinPE_ScenarioOutput\Local_Raid_Config_Only\ISO"

The next step asks for the WinPE material location:

In this case, too, the path to be inserted is:
"C:\IBM-SGTSK-WinPE2.x\sgdeploy\WinPE_ScenarioOutput\Local_Raid_Config_Only\ISO"

Click Next to finish creating the hardware environment.

If your targets need other hardware environments such as:


 Dell WinPE 1.x
 HP WinPE 1.x
 IBM WinPE 1.x
 IBM DOS

refer to the TPM for OSd 7.1 information center as shown below for further details:

Hardware configuration: Discovery
In TPM for OSd 7.1, the Capture hardware parameters task lets you capture the target
configuration for the following devices:
- RAID
- Fibre Channel
- Fiber channel
Consider that, as default behavior, on every target loading the TPM for OSd deployment engine, the
inventory for CPU, memory, Logical Disks, PCI devices, motherboard, and so on is automatically
performed and all information is immediately sent to the Director console.
To complete the hardware target inventory with RAID and Fiber Channel information, you need to:
1- Create the specific HW environment -> see “Hardware Environment” section
2- Create your Capture hardware parameters configuration
Then you can run the Capture hardware parameters task explicitly, when needed, from Actions ->
OS Deployment -> Additional features:

Alternatively, you can configure this information to be discovered as the default behaviour from
the Idle State template:

Note that a hardware configuration object of type Capture hardware parameters must exist if you
want to run a Capture hardware parameters task.
In this scenario we describe all the above details about how to capture the RAID configuration for
an IBM x3200.

First of all, check that you have the correct HW environment for your specific target model,
because it contains the WinPE environment and the vendor-specific toolkit script needed to work with
your target. According to our target model type, we need the IBM ServerGuide Scripting Toolkit
WinPE 2.x as the hardware environment. In our example we therefore use the WinPE 2.x from IBM
(see "Hardware environment" for how to create it).

Then you can start the creation of the Capture hardware parameters object:

Note that you can create only a single Capture hardware parameters object; once created, it is no
longer available in the wizard, and you need to select and edit it from the "Available hardware
configurations" section as shown below:

Follow the wizard and you are prompted to match your existing HW environments to your target
models. This is necessary because each target model needs a specific environment and vendor-
specific toolkit: this information is contained in the HW environment you previously created. Once
linked, the Capture hardware parameters task uses the selected HW environment to capture the
hardware configuration from the target.
Below we bind the IBM WinPE 2.x HW environment previously created to the target model
(IBM x3200) from which we want to capture RAID information.

You can add more than one match in this wizard:

When the hardware configuration object has been created:

It is no longer available in the wizard:

But you have to select it from the Hardware configuration tasks:

You can now view and edit it by adding additional <Machine model, HW Environment> pairs to
discover hardware configuration from other target models.

Because we have two similar IBM x3200 models:
- 436730z
- 436720z
and we created the HW configuration only for model 436730z, we can now add the same HW
environment for the 436720z:

Note that if you start a Capture hardware parameters task on a target without having linked the
HW environment to that model, the target PXE-boots, loads the TPM for OSd deployment engine,
and shows a red screen saying that no HW environment is available for that target.

Now that we have for our specific target IBM x3200:


- The hardware environment
- The hardware configuration object Capture hardware parameters
we can start a hardware discovery task on it. We have two ways to do this:
- Perform it once manually
- Set this capture as the default behaviour
To start it manually, select your resource from the Actions menu -> OS Deployment ->
Additional features and follow the wizard. In the example below we already discovered our IBM
x3200 with IP 74.0.0.203 and started the RAID capture on it:

You must reboot the target from the network to load the TPM for OSd deployment engine; then the
linked HW environment is loaded and the hardware configuration task (the RAID discovery in
this scenario) starts. At the end, by default, the machine reboots from its hard disk because there are
no more pending tasks (you can change this behaviour from the Idle State template).

After running a discovery, go to Manage panel -> TPM for OSd -> OS Deployment Task to view the
status of the discovery task.

To run a RAID capture for every host that performs a network boot without any OSd activity
bound to it, go to Deployment Schemes -> Idle Layout -> Idle state: here you can configure the
behaviour for unknown hosts or targets that PXE-boot without any scheduled OSd activity.
Below we checked the RAID option to start the RAID and Fibre Channel capture (see the note below)
whenever a machine that is unknown or has no pending activity PXE-boots.
Note: In version 7.1, selecting RAID indicates both RAID and Fibre Channel capture. Be aware that this behaviour might
change in future releases and that a new Fibre Channel check box might appear.

As described before, you can run Capture hardware parameters tasks only on target models for which
you have previously linked the correct HW environment: your Capture hardware
parameters object must therefore contain the correct <Machine Model, HW Environment> pairs. If you
change the Idle State template to perform a Capture hardware parameters task for unknown hosts,
you might encounter a new target model to be scanned that has no linked hardware environment.
To prevent this, remember to edit the Hardware Discovery configuration by adding the correct HW
environment for your specific target model, or by using wildcards for future, not-yet-known
machine models.
For example, consider this scenario where we have two target models:
- IBM x3200 -436730z
- IBM x3200 -436720z
From Hardware Configurations -> Hardware Configuration Discovery -> Capture hardware
parameters you can see that the HW environment matches only the known target model IBM x3200
– 436730z:

If you want to add another match, remember that known target models are already listed, so you can
easily find them in the menu. If the machine model IBM x3200 - 436720z is known, we can find
it in the drop-down menu as shown below:

and add this model to match the HW environment:

Supposing we do not know the target model, or the model has not yet been discovered, the only
way to proceed is to use a wildcard as follows: the Model Pattern field is a select box that you can
also edit. The second match below allows the unknown model 436720z to load the correct HW
environment, because its model name matches the inserted pattern "IBM System x3200 -4367*":

Note that you need to match the correct HW environment with the correct model, so use wildcards
carefully.

When the RAID scan has completed, select your resource and then View -> Deployment properties ->
Inventory to see the details for the target from the TPM for OSd – ISDE point of view. As you can
see, the RAID field in the Inventory tab is empty before running the capture.

After the Capture hardware parameters task we can see the RAID information:

This is the RAID scan on another machine with three physical disks of 70 GB configured to appear
as a single logical disk of 200 GB:

Hardware configuration: RAID
This section describes a RAID configuration scenario.
In our scenario we have not yet discovered the machine so we:
- Discover the target machine IBM x3200 436720Z with RAID inventory
- Start a HW RAID configuration task
First check that the Capture hardware parameters object has the environment linked to your model
type. The example below shows that it is missing because only 436730Z has this match while we
want to work on 436720Z.

If the model is already known, from the drop-down menu you can easily select it:

Otherwise you can edit the model pattern field with wildcards:

Now we have the HW environment ready to run a HW capture task.


From Deployment Scheme -> IDLE Layout -> IDLE state we now add the RAID scan as default
behaviour:

We boot the target from the network and it is discovered with the RAID details shown below:

As you can see, the machine has three physical disks (listed in the RAID section of the inventory)
configured to appear as a single logical disk (listed in the Disks section of the inventory).
Now we can create a RAID configuration task:

As with the Capture hardware parameters object, we need to link every model to its HW
environment: this is necessary because each HW configuration task needs the specific HW
environment previously created.

The target machine 436720Z has three physical disks of 70 GB and we can create a single RAID
array with RAID-1 of 60 GB using only the first two physical disks.

As shown below, we set that we want a single RAID array without explicitly defining how many
hot-spare disks we want.

Make sure that you insert all the physical disks that you want to include in this RAID configuration,
separating them by commas.
For example, a value of “1,2” refers to physical disks 1 and 2.
The comma-separated list inserted in the physical drives parameter is valid for all controllers,
both IBM and non-IBM.
We do not include physical disk 3, which will be seen as a single separate disk.

Consider that the "Auto" values in this wizard are vendor-specific and that their behaviour
is described in the related RAID documentation. For example, for IBM, using the "Auto" value for
the parameter "How many disk arrays do you want?" has the following behaviour:

"Creates arrays using drives that have the same size in MB. This is the default. Each set of drives with the same size
will be combined into a single array. The maximum number of drives allowed per array is determined by the limits of
the RAID controller. Only drives in a Ready state after resetting the controller to factory-default settings are used in
arrays. "

Be careful when creating RAID tasks: the parameters chosen here must be consistent with a
possible RAID configuration, otherwise there will be an error when the configuration is applied to the
RAID controller, or the RAID configuration tool might add or exclude a physical disk to obtain a
coherent configuration. This depends mostly on how the vendor-specific RAID tool manages the
input parameters.
If you have three physical disks and choose to create a single RAID array, the number of drives
provided for that array must be consistent with the RAID level chosen.
For example:
- If you set RAID Level 0, all three disks can be used and the data is spread across the three
disks (good for speed).
- If you set RAID Level 1, the data is mirrored on the three disks in parallel (good for
redundancy).
- If you set RAID Level 01, you need at least four disks.

If necessary you can later modify this HW configuration task:

As shown by the RAID Inventory we ran before, the machine was originally configured with
RAID-0 so that all three physical disks appeared as a single logical disk of 203 GB:

Then we started the task by selecting the resource from the Director console -> Actions -> OS
deployment-> Deploy Now:

We chose to run the hardware configuration only, without installing any operating system:

When the deployment completes, the target machine has:
- One logical disk of 60 GB with RAID Level 1, created on the first two physical disks ->
Array A
- One physical disk without any RAID configuration, because the third physical disk was
not configured

From the Navigate Resource -> select system -> Deployment Properties -> Inventory we can see
what happened.
Two logical disks are seen:
- One disk of 68 GB -> mapped on physical disk 3 and not configured with any RAID
level
- One disk of 60 GB -> mapped with RAID Level 1 on physical disks 1,2

Under the Disks section, the details of the two logical disks are displayed:

Each physical disk is listed under the RAID section.
The first two physical disks are mapped in the Array A:

The third physical disk has no RAID details because it is not included in the Disk Array A:

When the target machine is powered on and the RAID controller is initialized you will see:
- One physical disk of 68 GB
- One logical disk of 60 GB
This is shown in the picture below:

How to run full OS deployment tasks
This section describes a complete OS deployment scenario performing HW configuration, OS
deployment, and additional application installations.
Our target machine is an IBM x3200 - 436720z, which we previously configured to have one logical
disk (200 GB) mapped on all three physical disks (70 GB each).
If you run a RAID capture task you see the following:

We now run a previously-created HW configuration task to configure the RAID controller and
create:
- One logical disk of 60 GB. It is a RAID-1 60 GB on two physical disks
- One logical disk mapped on one physical disk of 70 GB

Because the target will have two logical disks (one of 60 GB and one of 70 GB, as explained above)
and we want to create one partition on each disk, we need to change the partition layout of the system
profile we are deploying. We edit the system profile to add one disk with one partition: otherwise
the second disk will not be formatted and you will have to do it manually after the OS installation.

First, from Navigate Resources, select your target and start the "Deploy Now" wizard:

Then select all the steps you want to run:

We will use our HW configuration tasks:

We will deploy the configuration shown below, which we previously set up to create two disks with
one partition each (it is a Windows 2003 EE profile).

We will not explicitly link a driver because automatically created binding rules are added at
deployment time by the product.

From Navigate Resources -> Deployment Properties -> Bindings, you can find our driver included
in the list “by generic rule”, meaning that the matching has been performed automatically by the
product using a defined rule (PCI ID in our case).

After deployment completes, we will see that Windows correctly recognizes our disk
configurations:
- One logical disk mapped on the physical disk of 70 GB
- One logical disk of 60 GB mapped with RAID-1 on two physical disks

Appendix A: Web Interface Extension

The Web Interface Extension is a TPM for OSd component, which is used to perform several
deployment tasks. It is installed automatically on parent and child boot servers; it is however
possible to install only the Web Interface Extension on a computer which is not an OS deployment
server.
When the Web Interface Extension is running on a target, it can be used by the OS deployment
server to perform actions on the target and to gather information from it. When browsing the Web
Interface Extension from a computer which is not an OS deployment server, the Web Interface
Extension allows the computer on which the Web Interface Extension is running to exchange
information with the OS deployment server. Without the Web Interface Extension, several features
of the Web interface are disabled.

Working with the Web Interface Extension


Currently the Web Interface Extension cannot be managed from the IBM Director console: to
install, uninstall, and check its status, you need to use TPM for OSd – ISDE 7.1 as in the stand-alone
version. Refer to the TPM for OSd – ISDE 7.1 Installation Guide, Appendix A, for detailed
instructions about how to do this.

Appendix B: Configure network settings with Software
Modules

Target network settings can be easily configured from TPM for OSd 7.1 Web UI by setting the
“Advanced IP settings mode” from the “View Target Details” page as shown below:

Using the “Advanced IP settings mode” you can customize advanced network settings (DNS,
WINS, Gateway) for each target’s NIC.

The workaround described here allows you to work with target details in “Basic IP Settings Mode”
and configure the network settings with two simple software modules.

Before starting, check that your target is in the “Basic IP Settings Mode” by looking in the
“Common networking info” section; it should look as follows:

Then you must create two software modules:


1- The first package copies a text file containing OS-specific commands to configure the
network settings on your target after the deployment. This script can contain parameters
dynamically resolved by TPM for OSd at deployment time.
2- The second package runs the text file containing the OS-specific commands

On Windows, an example command to statically configure your NIC might look as follows:

netsh interface ip set address name="Local Area Connection" source=static addr="192.168.1.101"
mask="255.255.255.0"

Create a text file with the above command and save it with a .bat extension so that it is run as a batch
executable on your target.
Note that the IP address and the subnet mask values are hard-coded in this script; if you need these
parameters to be automatically resolved during the deployment, you can use these TPM for OSd
variables:
- {$User.UserCateg0$}
- {$User.UserCateg1$}
- {$User.UserCategX$} where X ranges from 0 to 9

that are dynamically resolved with the User Details values shown below in the Target Details page:

Your script, here called static.bat, should look as follows:
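
A minimal sketch of static.bat, assuming User category 0 holds the IP address and User category 1
the subnet mask (as in the target details below):

netsh interface ip set address name="Local Area Connection" source=static addr="{$User.UserCateg0$}" mask="{$User.UserCateg1$}"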

and the target details might have these settings:
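
(hypothetically, matching the values resolved in the activity log at the end of this appendix):

User category 0: 192.168.1.101
User category 1: 255.255.255.0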

To create the first Software Module, navigate the Web UI -> Software Modules -> New Software
-> Windows 2003-> A custom action on the target computer -> A configuration change to perform
on the target computer -> Copy a single text file (Activate keyword substitution) -> provide the
path to your text file containing the OS specific commands and where it will be saved on the target
machine

Do not forget to check the "Activate keyword substitution" option, because it resolves the
{$User.UserCategX$} variables in your script.
This is how the software module should look: in this example the script is saved at the path
c:\static.bat on the target machine:

When the text file has been copied to the target and the variables have been resolved with the
target’s values, then you need another package to run it. Navigate the Web UI -> Software Modules
-> New Software -> Windows 2003-> A custom action on the target computer -> A configuration
change to perform on the target computer -> Execute a single command -> provide the path where
the text file is saved on the target machine
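
For example, assuming the first module copied the script to c:\static.bat as above, the single
command to execute is simply:

c:\static.bat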

In this case you do not need to “Activate keyword substitution”.


Here is an example of how the package should look:

Note that the execution order of the software modules is important: first the script is copied, and
then you configure the second package to run it at the next reboot.

Ensure that changing the network settings is the last operation of your deployment to prevent any
problems with the new configuration.

When starting the deployment, remember to manually bind the two created software packages (if
you are not using binding rules):

For example, if you use these settings:

Activity logs confirm that the script has been correctly run and that the variables have been
correctly resolved:

[2009/01/29 20:21:02] A Install c:\static.bat


[2009/01/29 20:21:02] A Running [c:\static.bat]
[2009/01/29 20:21:08] *** Output of command [c:\static.bat]
C:\WINDOWS\system32>netsh interface ip set address name="Local Area Connection"
source=static addr="192.168.1.101" mask="255.255.255.0"
Ok.

*** End of output for [c:\static.bat]

After the deployment, “ipconfig /all” confirms that the NIC configuration completed successfully:

Known issues and workarounds

DHCP options needed during Linux deployments


When working with Linux images you might encounter problems if your DHCP server is not
configured correctly. During the task you might see messages like the following on your target:

The root cause is that some options required on your DHCP server are missing; they are described in
the Installation Guide, chapter "Additional Linux cloning options".

The solution is to add these lines to your dhcpd.conf file (if you are using DHCP ISC 3.0):
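
As an illustration of the dhcpd.conf syntax only (the authoritative list of required options is in
the Installation Guide chapter cited above; the option names and values below are examples, not
the product's required set):

subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  option routers 192.168.1.1;               # default gateway
  option domain-name-servers 192.168.1.2;   # DNS server
  option domain-name "example.com";
}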

For a Windows DHCP server, a working configuration contains:

Parent boot server installation fails with “TPMOSD333E”
When installing the parent boot server with the IBM Director console, the activity might give you
the following error message:

“TPMOSD333E Failed to find the TPM for OS Deployment software module Tivoli Provisioning
Manager for OS Deployment."

If you look at the TPM for OSd – ISDE tc-driver installation logs, you might see the line: "Import
skipped".
On Windows systems, to work around this problem, you must:
- Extract the tpmfosd-software.xml file from the tpmfosd_director.tcdriver file using a
common decompression program
- Place the extracted tpmfosd-software.xml file under the
<DIRECTOR_HOME>\tpm\tools directory
- Open a DOS shell and navigate to the <DIRECTOR_HOME>\tpm\tools directory
- Run the following command

xmlimport.cmd file:tpmfosd-software.xml

When the file is imported, you can repeat the parent boot server installation task: this import
operation does not require a restart of the IBM Director service.
A similar procedure can be applied to UNIX and Linux systems.

Child boot server installation or discovery fails on Windows machines running Cygwin
When IBM Director is installed on a Windows machine, installing child boot servers or discovering
existing OSd servers on computers running Cygwin might fail with this error message when
running the task:

com.ibm.tivoli.orchestrator.de.engine.InvokeJavaException: COPDEX040E
An unexpected deployment engine exception occurred: COPDEX040E
An unexpected deployment engine exception occurred:
scp: c:DOCUME~1SSHD_S~1LOCALS~1Temp/rb7.1.00-072.05.msi: No such file or directory

This is caused by Cygwin dependencies in IBM Director 6.1 (installed on a Windows machine) and
there is no known workaround.

Links

 IBM Tivoli Provisioning Manager for OS Deployment – IBM Systems Director Edition
information center:
http://publib.boulder.ibm.com/infocenter/tivihelp/v3r1/topic/com.ibm.tivoli.osdisde.doc/welcome/osdisdehome.htm

 IBM Tivoli Provisioning Manager for OS Deployment – IBM Systems Director Edition forum:
http://www.ibm.com/developerworks/forums/forum.jspa?forumID=1522

 IBM Tivoli Provisioning Manager for OS Deployment information center:
http://publib.boulder.ibm.com/infocenter/tivihelp/v3r1/topic/com.ibm.tivoli.tpm.osd.doc/welcome/osdhome.htm

 IBM Tivoli Provisioning Manager for OS Deployment support:
http://www-01.ibm.com/software/sysmgmt/products/support/IBMTivoliProvisioningManagerforOSDeployment.html

 IBM Tivoli Provisioning Manager for OS Deployment forum:
http://www.ibm.com/developerworks/forums/forum.jspa?forumID=1065

 IBM Tivoli Provisioning Manager for OS Deployment technotes:
http://www-1.ibm.com/support/search.wss?rs=3176&tc=SS3HLM&q=

 IBM Systems Director documentation:
http://www-304.ibm.com/jct03004c/systems/management/director/resources/#2subsec

Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer
the products, services, or features discussed in this publication in other countries. Consult your local
IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that
IBM product, program, or service may be used. Any functionally equivalent product, program, or
service that does not infringe any IBM intellectual property right may be used instead. However, it
is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or
service.

IBM may have patents or pending patent applications covering subject matter described in this
publication. The furnishing of this publication does not give you any license to these patents. You
can send license inquiries, in writing, to:

IBM Director of Licensing


IBM Corporation
North Castle Drive
Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law:

INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS


PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-
INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Some states do not allow disclaimer of express or implied warranties in certain transactions,
therefore, this statement might not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are
periodically made to the information herein; these changes will be incorporated in new editions of
the publication. IBM may make improvements and/or changes in the product(s) and/or the
program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do
not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are
not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate
without incurring any obligation to you.

Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corporation in the United States, other countries, or both. If these and other IBM
trademarked terms are marked on their first occurrence in this information with a trademark symbol
(® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the
time this information was published. Such trademarks may also be registered or common law
trademarks in other countries. A current list of IBM trademarks is available on the Web at
“Copyright and trademark information” at: www.ibm.com/legal/copytrade.shtml.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun
Microsystems, Inc. in the United States, other countries, or both.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Microsoft and Windows are registered trademarks of Microsoft Corporation in the United States,
other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Other company, product, and service names may be trademarks or service marks of others.
