WebSphere MQ Integrator
Deployment and
Migration
Migration possibilities for brokers and
NEON-based message flows
Multi-broker and multi-platform
configuration scenarios
New features, including
enhanced MRM and Java
nodes
ibm.com/redbooks
SG24-6509-00
Take Note! Before using this information and the product it supports, be sure to read the
general information in Notices on page ix.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
The team that wrote this redbook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Comments welcome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Chapter 1. WebSphere MQ Integrator features overview . . . . . . . . . . . . . . 1
1.1 WebSphere MQ Integrator basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.1 Architecture of WebSphere MQ Integrator . . . . . . . . . . . . . . . . . . . . . 4
1.1.2 The role of the Configuration Manager . . . . . . . . . . . . . . . . . . . . . . . . 5
1.1.3 The functionality of the broker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.1.4 Publish/subscribe and the User Name Server . . . . . . . . . . . . . . . . . 10
1.1.5 The user interface: the Control Center . . . . . . . . . . . . . . . . . . . . . . . 11
1.2 Anatomy of a message flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.3 Enhanced Message Repository Manager . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.4 XML support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.5 Database support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.6 New ESQL features. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.7 Summary of new features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Chapter 2. Migration considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.1 Considerations overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.2 Uninstalling the older version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.3 Message repository changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.4 Moving components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.5 Enhancement and schedule considerations . . . . . . . . . . . . . . . . . . . . . . . 31
2.6 DB2 database upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.7 Recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
Abbreviations and acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that
does not infringe any IBM intellectual property right may be used instead. However, it is the user's
responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document.
The furnishing of this document does not give you any license to these patents. You can send license
inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer
of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may
make improvements and/or changes in the product(s) and/or the program(s) described in this publication at
any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm
the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on
the capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the
sample programs are written. These examples have not been thoroughly tested under all conditions. IBM,
therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy,
modify, and distribute these sample programs in any form without payment to IBM for the purposes of
developing, using, marketing, or distributing application programs conforming to IBM's application
programming interfaces.
Trademarks
The following terms are trademarks of the International Business Machines Corporation in the
United States, other countries, or both:
Redbooks(logo)
e (logo)
IBM
AIX
AS/400
CICS
Database 2
DB2
DB2 Universal Database
Everyplace
IMS
iSeries
MQSeries
RAA
Redbooks
RETAIN
RS/6000
OS/390
ServicePac
SP
SP2
SupportPac
Tivoli
VisualAge
WebSphere
xSeries
z/OS
Preface
The release of WebSphere MQ Integrator Version 2.1 has introduced quite a few
changes; these changes make it more appealing for message flow developers
and message set designers to migrate to the new version so that they can exploit
this functionality.
After a brief overview of the product, we look at several migration issues that
might arise; we also describe the migration process for MQSeries Integrator
Version 2.0.1 and Version 2.0.2 on Windows NT and AIX platforms. The migrated
environment is further extended with a broker running on Solaris using Oracle,
and a broker running on Windows using SQL Server. Besides the migration of
brokers and the Configuration Manager, this redbook also investigates the
migration of message flows that use NEON functionality. We discuss several
scenarios explaining how to take advantage of the tighter integration between
NEON-based functionality and base WebSphere MQ Integrator functionality.
Finally, we introduce the major new features of this release. The extended
functionality of the Message Repository Manager and the extended support for
XML messages is explored using several examples. The support for Java-based
plug-in nodes and input nodes is demonstrated by developing a simple input
node that reads files from a directory to initiate a message flow. The new
aggregation nodes are also demonstrated, combining several messages of
different formats into a single XML message.
Comments welcome
Your comments are important to us!
We want our Redbooks to be as helpful as possible. Send us your comments
about this or other Redbooks in one of the following ways:
Use the online "Contact us" review redbook form found at:
ibm.com/redbooks
Chapter 1.
WebSphere MQ Integrator
features overview
While this redbook is intended for more experienced users of MQSeries
Integrator, we will start this first chapter with a quick summary of the basic
structure of the WebSphere MQ Integrator product and the roles of each
component. We will then introduce and briefly discuss the major new features
therein.
[Figure: MQSeries family traits -- MQSeries Workflow provides workflow and
process flow, application services, and tools; MQSeries Integrator provides
messaging services, standard formats, and tools]

[Figure: WebSphere MQ Integrator components -- the Control Center GUI manages
message formats and message flows through the Configuration Manager and its
message repository; the broker routes MQ messages between client applications]

[Figure: Inside the broker -- an administrative agent and one or more message
flow engines run message flows built from input, filter, compute, warehousing,
and output nodes, moving messages from source applications to target
applications and an RDBMS]

[Figure: A broker domain -- the Control Center connects through MQSeries and
JDBC to the Configuration Manager, which holds the local and shared/deployed
configuration repositories and the message repository and uses ODBC to reach
its database; each broker, the User Name Server, and the applications attach to
their own queue managers]
A set of fixed named queues is defined on the MQSeries queue manager that
hosts the Configuration Manager. This MQSeries queue manager must exist on
the same physical system as the Configuration Manager, and is identified at the
time the Configuration Manager is created.
These queues are created when the mqsicreateconfigmgr command (with its
associated parameters) is executed, through either the graphical command
assistant or the command prompt. The administrator does not need to add any
of the required definitions manually.
A server connection channel is defined to the MQSeries queue manager that
hosts the Configuration Manager. This connection is used by all instances of the
Control Center that communicate with the Configuration Manager.
Sender and receiver channels need to be defined to each broker in the broker
domain, except for a broker that shares its queue manager with the Configuration
Manager.
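As an illustration, the MQSeries definitions described above might look like the
following MQSC commands. All channel, queue manager, and host names here are
invented examples, not product defaults; your naming conventions will differ.

* Server connection channel used by Control Center instances
DEFINE CHANNEL('CONFIGMGR.SVRCONN') CHLTYPE(SVRCONN) TRPTYPE(TCP)
* Sender channel from the Configuration Manager's queue manager to a broker
DEFINE CHANNEL('CM_QMGR.TO.BK_QMGR') CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('brokerhost(1414)') XMITQ('BK_QMGR')
* Receiver channel for messages coming back from that broker
DEFINE CHANNEL('BK_QMGR.TO.CM_QMGR') CHLTYPE(RCVR) TRPTYPE(TCP)

The matching receiver and sender channels would be defined on the broker's
queue manager.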
[Figure: Broker components -- the broker's queue manager and broker database,
with an execution group containing message flows and a message dictionary,
accessed through an ODBC connection]
Each broker has a related local MQSeries queue manager, which cannot be
shared with another broker, and a set of database tables to hold the broker
definitions; this set of tables is accessed through an ODBC connection.
These tables can be created using a number of database products:
IBM DB2
Microsoft SQL Server (Windows only)
Oracle
Sybase
Because the broker uses a set of predetermined queue names, each broker
requires its own MQSeries queue manager.
For operation, a broker requires both configuration and initialization data.
Configuration and initialization data are logically separate and stored in
different physical repositories.
Each broker instance needs an assigned, permanent, fixed name: the broker
instance name. This name, which is similar to the static identifier assigned
to a database before it is created, is used to distinguish the tables belonging
to one broker when multiple brokers have been set up to use the same database.
Creating a broker on the target execution platform does not itself update the
configuration repository; you need to create a reference to it using the Control
Center. Once this is done, and the broker is deployed (to initialize it), then
message flows and sets can be assigned to the broker. Deploying the flows
starts their execution.
Several different message flows can be assigned to an execution group,
although using different execution groups provides better application isolation. A
particular message flow can also be configured to run more than one instance of
itself inside the execution group.
The execution group environment is also known as a data flow (or message flow)
engine. The engine is responsible for loading any plug-in nodes which can be
developed, or provided, to extend the function beyond the supplied IBM nodes.
These are known as loadable implementation libraries (or .lil files).
Brokers provide information, in the form of published event messages, in
response to changes; these can be used by system management tools to update
their management agents as to the status of the broker components.
[Figure: Publish/subscribe example -- applications publish messages on topics
STOCK/FORD and STOCK/IBM to a broker; subscribers register for topics such as
STOCK/*, STOCK/IBM, and */FORD, optionally with content filters such as
WHERE number>1000 or WHERE value>10000, and each receives only the selected
data]
[Table: Control Center roles -- one role can create execution groups within
brokers, then assign and deploy message flows and message sets to brokers, and
can access the Assignments view only; another can create topics and their
access control lists and deploy them, and can access the Topics view and the
Topology view; some views are available to all roles]
The Control Center allows the export and import of message flows to and from a
flat file in XML format. Message sets can be imported and exported using the
mrmimpexpmsgset command.
There are many other IBM primitive nodes available; these are described in the
manual Using the Control Center, SC34-5602-04. Each node has properties
that can be set to configure its behavior. For example, the queue manager name
is an optional property of the MQOutput node.
Some nodes have their function controlled by ESQL statements. ESQL is a free
format, high-level programming language derived from SQL version 3. By coding
ESQL, it is possible to develop complex logic for manipulating either message or
database contents (or both), without using excessive nodes in a flow.
ESQL, like most programming languages, can be used to develop business
logic, but is generally coded to select and analyze message and/or database
contents, and to perform the more complicated conversions of data values.
Details about ESQL can be found in the manual ESQL Reference,
SC34-5923-01.
Although ESQL does not provide subroutines to reuse common code, it is
possible to include smaller message flows as sub-flows; such flows can be used
for generic parts of message flows such as error handling routines. In addition,
the use of sub-flows improves readability of the logic, regardless of reuse.
Each node has properties that must be set; for example, with the MQInput node,
you would specify the input queue name (the queue manager is always the
same as the broker's), conversion and other MQGET options, the transaction
mode, the message domain, and so on. Generally, more of these values must be
specified when handling messages that do not contain an MQRFH2 header.
Figure 1-8 shows the default page of the properties on an MQInput node; this
message flow is able to handle messages that do not have an MQRFH2 header.
In most cases, the parsed message is passed unchanged from node to node,
except in the case of the Compute node, which can modify it; the output
message of the Compute node then becomes the input to the next node in the
flow. Some nodes provide a drag-and-drop graphical interface to assist with
element selection; for example, the DataUpdate node does so for its output
database table, as shown in Figure 1-9.
ESQL statements are added using the ESQL tab window on the Properties page
of WebSphere MQ Integrator nodes such as the Compute and Filter node. In
addition to simple assignments, ESQL includes branching and looping
capabilities.
Example 1-1 ESQL statements
SET OutputRoot = InputRoot;
DECLARE TotalItemQuantity INTEGER;
SET TotalItemQuantity = (SELECT SUM(CAST(T.itemquantity AS INT))
FROM InputBody.Message.receiptmsg.transactionlog.purchaseselement[] AS T
WHERE CAST(T.itemname AS CHAR) = 'Shampoo');
SET OutputRoot.XML.Message.receiptmsg.transactionlog.totalselement.totalitemquantity =
TotalItemQuantity;
The visual message flow builder in the Control Center can be used to assemble
sequences of nodes (or primitives) that provide logic similar to that of a
programming language. Loops and branches can be performed using nodes.
In Figure 1-10, the Filter node tests a counter stored in the message tree; if the
comparison proves false, the flow passes to a DataInsert node. This is followed
by a Compute node, which increments the counter. When the filter comparison
test eventually proves true, control passes to the MQOutput node.
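As a sketch of the ESQL behind such a loop (the element name loopcount is
invented for illustration), the Filter node evaluates a boolean expression that
selects the true or false terminal, and the Compute node increments the counter:

-- Filter node: route to the true terminal once the counter reaches 10
Body.loopcount >= 10

-- Compute node: copy the message and increment the counter
SET OutputRoot = InputRoot;
SET OutputRoot.XML.Message.loopcount = InputBody.Message.loopcount + 1;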
Message parsers
Run-time dictionaries
Physical format descriptors
The MRM message model consists of meta-data representing logical message
definitions that are platform- and language-independent. It has layers of
additional data used for mapping to physical formats. Definitions are made up of
reusable components or objects and organized into message sets. Figure 1-11
shows the definition model for a message.
[Figure 1-11: Message definition model -- a message has a structure based on a
user-defined compound type, or is made up of groups of elements; each element's
format is defined by a type, which may be a pre-defined simple type with an
element value]
A message set is a group of messages that are related in some way. A message
defines the format of a single unit of information to be exchanged between
applications. A message can contain, or embed, other messages (so-called
multi-part messages). An element defines the format of a single unit of
information within a message. A type defines the format of an element and can
be a simple or compound type. A category groups messages and transactions
within a message set. A transaction (in this sense) is a logical grouping of
messages within a message set.
Figure 1-12 shows what simple types are available in the MRM. These simple
types are the building blocks for constructing compound types.
[Figure 1-12: MRM simple types -- Binary, Boolean, Datetime, Decimal, Float,
Integer, String]
Value constraints can be defined for elements for such things as minimum
length, scale, default value, etc. This is primarily for documentation purposes, as
currently the broker does not perform run-time validation.
A message in the MRM domain can have one of the following wire formats:
CWF (Custom Wire Format or Legacy messages)
TDWF (Tagged / Delimited Wire Format, such as SWIFT)
XML (defined, as opposed to self-defining or generic)
PDF (not Adobe, but a specialized financial format)
Tagged/Delimited messages:
Consist of text strings
Are optionally preceded by fixed text
Are followed by child elements
The example SWIFT message includes two headers and a message body. Each
is wrapped within curly braces ({...}):
{ is the Group Indicator.
: is the Tag Data Separator.
}{ is the Delimiter.
} is the Group Terminator.
1, 2, 4 are Tags.
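For illustration only, a skeleton of such a message (with invented, abbreviated
field contents, not a valid SWIFT message) might look like this:

{1:F01BANKBEBBAXXX0000000000}{2:I100BANKDEFFXXXXN}{4:
:20:PAYMENTREF1
:32A:020131EUR1000,
-}

Here {1:...} and {2:...} are the headers, and {4:...} is the message body,
whose fields are introduced by tags such as :20: and :32A:.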
In Chapter 6, "New MRM features explored" on page 163, some more examples
of tagged data formats are discussed.
You can now import a DTD into the Control Center. This can be a DTD generated
from a WebSphere MQ Integrator message repository, or a DTD from another
source. You can then use the MRM to model these XML messages.
Several examples of this extended support for XML messages are given in
Chapter 7, "Exploring the new XML features" on page 205.
Some of the new ESQL features in WebSphere MQ Integrator Version 2.1 are:
Environment: passes user-defined variable information between nodes.
ESQL UUID function: creates a universally unique identifier.
Field reference variables: these act as variable pointers.
Binary and date/time casting.
SQLCODE and SQLSTATE database indicators.
ESQL node rationalization for the Compute, Filter and Database nodes:
Each can modify both DBMS data and environment variables.
Performance is improved by avoiding the need to create new message
tree elements for passing data between nodes in a flow.
ESQL code can be dragged and dropped from a palette.
Improved support for NULLs.
PROPAGATE keyword to send messages to the output terminal.
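To sketch two of these features together, the following Compute node ESQL
(with invented element names, in the style of Example 1-1) places a generated
identifier in the Environment tree for later nodes in the flow to read, and
uses PROPAGATE to emit an extra copy of the message before the node completes:

SET OutputRoot = InputRoot;
-- Store a unique identifier where nodes later in the flow can read it
SET Environment.Variables.TransferId = UUIDASCHAR;
-- Send a first copy of the message to the output terminal now
PROPAGATE;
-- Whatever is left in OutputRoot when the node ends is sent as usual
SET OutputRoot.XML.Message.duplicate = 'TRUE';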
Figure 1-13 shows the new ESQL palette on the left. From there, you can drag
and drop ESQL constructs to easily create valid ESQL statements.
Performance improvements
Support for MQSeries 5.2 and Windows 2000
Support for the HP-UX 11 platform
AS/400 support via the integrated Windows NT environment
Usability improvements and a plug-in node development wizard
A new manual: ESQL Reference
Chapter 2.
Migration considerations
This chapter discusses a variety of items that need to be considered when
planning an upgrade to WebSphere MQ Integrator V2.1. We do not attempt to
cover every possible consideration; many are environment-specific and will
depend on your individual installation. The major points we cover should,
however, give you plenty of insight to plan your upgrade successfully.
Some of the things that need to be factored into an upgrade plan are based on
the current environment. Some are based on schedules. Others, and perhaps
the most often overlooked, are the requirements imposed by the software. Many
users fail to fully read about and understand the differences between the release
they are implementing and the one being replaced. We cannot stress enough the
need to read all documentation and any readme files shipped with the software.
Readme files contain the most current information available on what you need to
know about the release being installed. For WebSphere MQ Integrator V2.1, this
information is also available on the Web:
http://www-3.ibm.com/software/ts/mqseries/support/readme/
This is especially helpful if you received the software some time before
installation.
On a Windows platform, the installer automatically places the upgrade in the
directories and folders where the current version is installed; it gives you no
choice.
This can be quite confusing for some. Imagine that you have WebSphere MQ
Integrator V2.1 installed, but all folders and directories have 2.0 in their name. Is
this confusing to you or your clients? You may want to correct this during the
upgrade. Either create all folders and directories without any release or version
information in the name, or use the current release and version in the names.
The most important thing to remember when performing an uninstall is that the
uninstall deletes the broker. Therefore, if you plan to uninstall, be sure to
undeploy everything from the brokers on the machine you are upgrading before
you begin the uninstall.
actually working, it will probably not work in a future patch, upgrade, or new
release. This is good advice for any product at any time. Nothing is as
frustrating as having to rewrite an application because holes have been closed in
the software you have been using.
The changes to the message repository in V2.1 make it necessary to carefully
choose when the upgrade is to be performed. Once the upgrade has begun on
any environment that will have components promoted to another environment, all
promotions from the upgraded environment that involve MRM messages must be
postponed until the environment promoted to is also upgraded to V2.1.
This can have a significant impact on the schedule, especially if there are many
environments involved in an application's promotion path. A great deal of time
can pass between the first environment upgrade and the production environment
upgrade. This is where application problems or enhancements must be taken
into consideration. It is very frustrating to begin an upgrade expecting it to
take a fixed amount of time, only to find that an application change is
scheduled to begin after you have started, but is due in production before you
will finish. This affects both the personnel performing the upgrade and the
client, who must now wait for changes until the upgrade is complete.
When performing an upgrade, it is common for the upgrade to begin at the lowest
common denominator. In the case of WebSphere MQ Integrator, that would be
the broker. The WebSphere MQ Integrator V2.1 Administration Guide documents
that you can have a V2.1 broker with a V2.0.1 or V2.0.2 Configuration Manager if
you install the compatibility modules on the broker. It also strongly recommends
that the Configuration Manager be upgraded first. The brokers should be
upgraded immediately following the Configuration Manager upgrade and before
anything is deployed.
The best scenario is to upgrade both the Configuration Manager and all of its
brokers at the same time. If there is a broker on a UNIX machine, the software
must be uninstalled before the new release can be installed. Because of this and
the changes in the message repository, it is best if all flows and message sets
are undeployed, the software upgraded and flows and message sets redeployed.
This provides the surest method of ensuring that all components are upgraded
and that all deployed flows and message sets match exactly what is in the
Configuration Manager.
If there is no UNIX machine attached to the Configuration Manager, then
undeploying the message flows is not required. However, because of the
changes in the message repository, the currently deployed message sets should
be undeployed and redeployed after the upgrade. Though this is not absolutely
required at this point, it will be required before the message sets can be deployed
again after the upgrade. As documented in 2.3, Message repository changes
on page 29, the UUID of the message set will change when it is reimported after
the upgrade. When the message set is later deployed, a duplicate message set
name error will occur. This is because the message set is already deployed with
the UUID created previously and the upgraded message set has a new UUID but
the same message set name. The previously deployed message set must be
undeployed before the new one can be deployed.
In an ideal world, there would be only one application environment per machine.
That is, if there were Development, Test, QA and Production environments, the
brokers for each of the environments would be on separate machines. When
dealing with UNIX environments, it is very common to find multiple environments
on one machine. Production environments are almost always separated from the
others, but all of the other environments can often be found on the same
machine.
This poses another problem. There is only one version of the software running
multiple environments. When the software is upgraded, all environments must
also be upgraded. When upgrading UNIX machines to WebSphere MQ
Integrator V2.1, the prior release must be uninstalled. The brokers should be
deleted before the uninstall is executed, as documented in 2.4, Moving
components on page 30.
Once the uninstall and upgrade are complete for the brokers on the UNIX machine,
the choice is now whether to upgrade all of the associated Configuration
Managers at once and redeploy, or to upgrade them one at a time. If you choose
to upgrade them one at a time, it must also be decided whether or not to deploy
anything from those not yet upgraded. The availability of compatibility modules
makes this a viable alternative.
The safest method would be to upgrade the Configuration Managers one at a
time, but not redeploy anything from a Configuration Manager that has not yet
been upgraded. This is, again, where scheduling is critical. Is there time for this
method? How much time is there to test each message flow after it has been
deployed following the upgrade? How many environments are involved? Which
applications will this affect? These are all critical factors in completing the
upgrade successfully without disrupting ongoing daily business.
Many organizations have dedicated database servers and the database software
is maintained separately from the MQSeries Integrator software. The databases
themselves reside on servers apart from those where MQSeries Integrator
resides. Also, the group responsible for the databases and the database software
is most often a separate group from the group responsible for MQSeries
Integrator.
Other organizations may not require that the databases be on designated
database servers. The databases are on the same machines as the MQSeries
Integrator components. The group responsible for the MQSeries Integrator
software and the database software may be one and the same.
The reality is that a database upgrade is more complicated than simply installing
the new version of the software. It can be a large undertaking of its own.
Upgrading the database software can greatly impact your schedule.
Therefore, it is recommended that the database upgrade be made a separate
project from the upgrade of MQSeries Integrator when:
The databases are on dedicated database servers.
The databases are not on the same machine as MQSeries Integrator.
The database software is not dedicated to only the MQSeries Integrator
databases.
The databases cannot or should not be deleted.
2.7 Recommendations
Below are several recommendations that should be considered when planning
the upgrade to WebSphere MQ Integrator V2.1.
If upgrading from MQSeries Integrator V2.0.1, bypass upgrading to MQSeries
Integrator V2.0.2.
Do not complicate the upgrade by also upgrading the version of the database
software used.
Chapter 3.
Migration of an MQSeries
Integrator environment
This chapter describes the evolution of a typical MQSeries Integrator
environment. This environment consisted first of a single broker using MQSeries
Integrator. Over time, new applications and message flows were added. An
application was added on Windows and AIX using MQSeries Integrator V2.0.2.
This environment used its own Configuration Manager. When a third application
was added, the latest version of WebSphere MQ Integrator was implemented.
Over time, the described environment has come to consist of three different
Configuration Managers using three different versions of the product. We will first
investigate the migration of each Configuration Manager and its brokers and then
consolidate the brokers on a single Configuration Manager.
Not only did the environment use different product levels, it also changed
functionally:
Brokers were added on different Windows machines, creating multi-version
environments.
Brokers were added on multiple operating systems and database products.
Multiple Configuration Managers were created and had to be managed.
Existing Configuration Managers and brokers were upgraded to V2.1.
All the applications were consolidated in a single Configuration Manager
running at version 2.1.
The Windows 2000 machines used in this environment are referred to as NT1,
NT2, NT3, NT4, NT5 and NT6. The AIX machine is referred to as AIX1 and the
Sun Solaris machine is referred to as SUN1. The hardware and software
configurations of these machines are listed in Appendix A, Hardware/software
configuration on page 411.
Whenever the environment changes, we describe the new setup and the steps
performed. The initial MQSeries Integrator V2.0.1 environment on NT1 consists
of the following components:
Configuration Manager
Configuration Manager queue manager NT1_CM_QMGR
Broker NT1_BK1
Broker queue manager NT1_BK1_QMGR
The V2.0.1 Control Center is installed on NT5. User ID DEVUSER is defined for
use by the MQSeries Integrator developer. This user ID is defined on both the
Control Center machine and on the Configuration Manager machine. On NT1,
where the Configuration Manager is running, DEVUSER is made a member of
the mqbrdevt, mqbrasgn, and mqbrops groups for MQSeries Integrator and the
mqm group for MQSeries. In our scenario, there is only one developer; he is
responsible for creating flows and getting them deployed. The roles that are
defined by MQSeries Integrator can be distributed among your staff as desired
by your organization. These roles correspond to the groups:
mqbrkrs: group of user IDs used by brokers and the Configuration Manager
mqbrasgn: message set and message flow assigners
mqbrdevt: message set and message flow developers
mqbrops: operational controllers
mqbrtpic: publish/subscribe topic security administrator
These roles are documented in the manuals MQSeries Integrator Using the
Control Center, SC34-5602, and MQSeries Integrator Administration Guide,
SC34-5602.
The service user ID on NT1, used to run the Configuration Manager, is NT1CM.
This user ID is a member of the mqm group for MQSeries and a member of the
mqbrkrs group for MQSeries Integrator. This user ID is also a member of the
Administrators group, as this is required to allow the Configuration Manager
service complete access to the registry. This is required on Windows 2000
machines because of changes to the access rules for the registry. Alternatively,
the registry permissions can be modified to allow the user IDs of the
Configuration Manager, the broker and the User Name Server required access.
You can use the Windows utility regedt32 to alter the security settings on keys in
the registry.
The service user ID on NT1, used to run the broker, is NT1BK1. This user ID is a
member of the mqm group for MQSeries and a member of the mqbrkrs group for
MQSeries Integrator. This user ID is also a member of the Administrators group,
for the same reason as described for user ID NT1CM.
Graphically, the environment is shown in Figure 3-1.
Figure 3-1 Initial environment: the Control Center (user DEVUSER) on the NT5
machine; the Configuration Manager (service ID NT1CM, queue manager
NT1_CM_QMGR) and broker NT1_BK1 (service ID NT1BK1, queue manager
NT1_BK1_QMGR) on the NT1 machine, all at MQSeries Integrator V2.0.1
In addition to the required software for MQSeries and DB2, we also installed
SupportPac MA88, MQSeries classes for Java on NT5. Java is used by the
Control Center on NT5 to connect to the Configuration Manager on NT1. This
SupportPac is required when using MQSeries V5.2 with a V2.0.1 Control
Center.
On NT1, we have an MQSeries queue manager for the Configuration Manager
(NT1_CM_QMGR) and a second queue manager for the broker NT1_BK1
(NT1_BK1_QMGR). It is not necessary to create two separate queue managers
when running just one broker and a Configuration Manager on one machine. The
Configuration Manager can share a queue manager with one broker. The queue
managers were defined using MQSeries Explorer prior to creating the
Configuration Manager and broker.
Sender and receiver channels are defined between the two queue managers on
NT1.
The message flows used to validate this environment are the retail flows,
provided with the MQSeries Integrator - Business Scenarios SupportPac IC03.
The message flows were constructed to run on NT1, but were modified and
deployed to broker NT1_BK1 on NT1 using the remote Control Center on NT5.
These message flows illustrate the following features:
Publish/Subscribe
Warehousing and writing to databases
Subflows
Controlling the flow of a message using FlowOrder and Filter nodes
Manipulating the content of a message or message header using Compute
and Extract nodes
Data conversion
Tracing message flow activity
The SupportPac includes files that support two versions of this scenario:
1. A message flow that uses a message set and a message defined using the
Message Repository Manager (MRM). The MRM is a component of the
Configuration Manager that manages message definitions and maintains the
message repository in which they are stored. You use the Control Center to
define messages to the MRM.
2. A message flow that uses a self-defining XML message.
The two versions illustrate the use of different nodes to handle the different
messages.
To connect to the Configuration Manager from the Control Center (Figure 3-2):
1. Click File -> Connection.
2. Enter the Host Name of the computer where the Configuration Manager
resides.
3. Enter the listener Port for the Configuration Manager Queue Manager.
4. Enter the Queue Manager Name for the Configuration Manager.
5. Once connected to the Configuration Manager, click the Topology tab in the
Control Center.
6. Right-click the Topology document in the left pane and select Check Out.
7. Right-click the Topology document again and select Create -> Broker. A
new window comes up (Figure 3-3), in which you need to specify the name of
the broker and the name of the queue manager that is hosting the broker.
Click Finish to complete this step.
8. Right-click the Topology document again and select Check In to store the
changes in the database of the Configuration Manager.
9. Right-click the Topology document once more and select Deploy ->
Complete Topology Configuration.
10. The main message flow of the retail scenario is shown in Figure 3-4. To
assign this message flow and the corresponding message set, select the
Assignments tab in the Control Center.
11. In the left pane, expand the folder representing the broker NT1_BK1 and
check out the broker and the default execution group.
12. Drag the message set from the middle pane to the broker.
13. Drag the message flow from the middle pane to the execution group. The
boxes in the right pane should now display the assigned message set and
message flow.
14. Check in the execution group and the broker and deploy the topology.
For more information about the message flow and the message sets and how to
use them, refer to the documentation that is part of the SupportPac.
The environment now also contains broker NT2_BK2 (service ID NT2BK2,
queue manager NT2_BK2_QMGR) on the NT2 machine, alongside the V2.0.1
Configuration Manager and broker NT1_BK1 on the NT1 machine and the
Control Center on the NT5 machine.
User ID NT2BK2 was created to be used as the service user ID for the broker
NT2_BK2. This user ID was made a member of the mqm MQSeries group and
the mqbrkrs group for MQSeries Integrator. This user ID was also made a
member of the Administrators group as this is required on Windows 2000 to
allow the broker service process complete access to the registry. Alternatively,
you could change the access rights for the MQSeries Integrator related registry
keys.
10. Fill out the appropriate values and click Next, which will bring you to the
window shown in Figure 3-7.
11. Provide the name of the database, MQSIBK02, and the user ID that was
granted the required authorities for this database.
12. Click Next again; this will bring you to the window shown in Figure 3-8.
Review the options and click Finish to start the creation process.
Prior to adding the new broker to the topology for the Configuration Manager, you
must make sure that the Configuration Manager can communicate with the
broker. To do that, the queue manager for the Configuration Manager and the
queue manager for the broker need to have a sender/receiver channel pair
defined. Along with the channels, transmission queues need to be defined. The
name of the transmission queue should be the same as the queue manager
name.
It is important to remember that MQSeries Integrator uses XML messages
between components. These XML messages can be extremely large, especially
when a full deploy is performed. Therefore, we have chosen to set the maximum
message length to the maximum allowed for the transmission queues, sender
channels and receiver channels. All three should be set to the same value
because MQSeries will choose the lowest common denominator when
determining the allowed message size.
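Because MQSeries negotiates downward to the lowest of these settings, it is
easiest to drive all the definitions from a single value. The following shell
sketch (our own helper, not part of the product; the CONNAME host is a
placeholder) generates matched MQSC definitions in the style used later in this
chapter:

```shell
#!/bin/sh
# Sketch: generate MQSC definitions in which the transmission queue and
# the sender channel share one MAXMSGL value, so MQSeries never
# negotiates the allowed message size down unexpectedly.
# REMOTE_QMGR and the CONNAME host are illustrative values.
REMOTE_QMGR=NT1_CM_QMGR
CHANNEL="TO.$REMOTE_QMGR"
MAXMSGL=100000000   # the maximum allowed, 100 MB

cat > channel_objects.txt <<EOF
DEFINE QLOCAL ('$REMOTE_QMGR') +
       USAGE(XMITQ) +
       MAXMSGL($MAXMSGL) +
       TRIGGER +
       TRIGTYPE(FIRST) +
       TRIGDATA('$CHANNEL') +
       INITQ('SYSTEM.CHANNEL.INITQ')
DEFINE CHANNEL ('$CHANNEL') CHLTYPE(SDR) +
       TRPTYPE(TCP) +
       MAXMSGL($MAXMSGL) +
       XMITQ('$REMOTE_QMGR') +
       CONNAME('hostname(1414)')
EOF
echo "wrote $(grep -c MAXMSGL channel_objects.txt) MAXMSGL settings"
```

The generated file can then be fed to runmqsc in a single batch run.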
1. Start MQSeries Explorer on the NT2 machine and expand the folder for
queue manager NT2_BK2_QMGR. Right-click the folder Queues and select
New -> Local Queue. You will see a window as shown in Figure 3-9.
2. Set the name of the queue to NT1_CM_QMGR, which is the name of the queue
manager that is used by the Configuration Manager. Make sure to select
Transmission as the value for the Usage field.
3. Click the Extended tab in Figure 3-9 and you will see the window shown in
Figure 3-10. Increase the value for Maximum Message Length.
4. Click the Triggering tab to automate the start-up of the channel from queue
manager NT2_BK2_QMGR to the queue manager used by the Configuration
Manager.
5. Figure 3-11 shows the values to set for channel triggering to work. Set Trigger
Control to On and Trigger Type to First. The value for Trigger Data is the
name of the sender channel that will be created starting with Figure 3-12. The
value of Initiation Queue Name is normally SYSTEM.CHANNEL.INITQ.
6. The next step is to create a sender channel. From the MQSeries Explorer,
expand the folder NT2_BK2_QMGR and select Advanced -> Channels.
Right-click the Channels folder and select New -> Sender Channel. A
window will appear as shown in Figure 3-12.
7. The Channel Name field should contain the same name as was entered in the
field Trigger Data in Figure 3-11. The drop-down menu Transmission Queue
should list the transmission queue that was defined in Figure 3-9 on page 51
and onward. Choose the appropriate Transmission Protocol and provide the
correct Connection Name for the machine hosting the Configuration Manager.
8. Click the Extended tab (Figure 3-13) to change the value of Maximum
Message Length from its default value of 4194304 to the value that was also
set during the definition of the transmission queue (see Figure 3-11).
10.Provide a name for the receiver channel that is in line with established naming
conventions in your environment. Select the Extended tab to change the
value for the Maximum Message Length field (see Figure 3-15).
11. Instead of using the MQSeries Explorer, you could also use the
command-line tool runmqsc. The definitions for the objects could be stored in a
text file, NT2_BK2_QMGR_objects.txt. Open a command prompt, change the
current directory to the directory where this file is stored, and type:
runmqsc NT2_BK2_QMGR < NT2_BK2_QMGR_objects.txt > output.txt
DEFINE QLOCAL ('NT1_CM_QMGR') +
MAXMSGL(100000000) +
USAGE(XMITQ) +
TRIGGER +
TRIGTYPE(FIRST) +
TRIGDATA('TO.NT1_CM_QMGR') +
INITQ('SYSTEM.CHANNEL.INITQ')
DEFINE CHANNEL ('TO.NT1_CM_QMGR') CHLTYPE(SDR) +
TRPTYPE(TCP) +
MAXMSGL(100000000) +
XMITQ('NT1_CM_QMGR') +
CONNAME('m23bk64b(1414)')
DEFINE CHANNEL ('TO.NT2_BK2_QMGR') CHLTYPE(RCVR) +
TRPTYPE(TCP) +
MAXMSGL(100000000)
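After such a batch run it is worth checking output.txt rather than assuming
success; a clean run ends with a summary stating that no commands had a syntax
error. A small sketch of that check follows (the sample summary is typical
runmqsc output, written inline here so the sketch is self-contained):

```shell
#!/bin/sh
# Sketch: verify a runmqsc report. The sample output.txt is written
# inline for illustration; on a real machine it would be the file
# produced by the runmqsc redirection shown above.
cat > output.txt <<'EOF'
3 MQSC commands read.
No commands have a syntax error.
All valid MQSC commands were processed.
EOF

if grep -q "No commands have a syntax error" output.txt; then
    echo "definitions processed cleanly"
else
    echo "check output.txt for failing definitions" >&2
    exit 1
fi
```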
Finally, we can add the V2.0.2 broker NT2_BK2 on NT2 to the topology of the
Configuration Manager (which is using V2.0.1 of the product). The Control
Center is used by user ID DEVUSER on machine NT5.
To connect to the Configuration Manager from the Control Center:
1. Click File -> Connection (Figure 3-16).
2. Enter the Host Name of the computer where the Configuration Manager
resides.
3. Enter the listener Port for the Configuration Manager Queue Manager.
4. Enter the Queue Manager Name for the Configuration Manager.
7. Enter the name of the queue manager the broker uses on the machine where it
resides, NT2_BK2_QMGR.
8. Click Finish.
9. Click File -> Check In -> All (Save to shared).
10. Right-click the Topology folder in the left pane.
11. Click Deploy -> Delta Topology Configuration.
12. Click the Log tab.
13. Click View -> Refresh, or the Refresh button in the tool bar.
14. Verify that the deploy was successful.
The retail flow from the Business Scenarios SupportPac is now deployed to the
V2.0.2 broker NT2_BK2 on NT2. This is accomplished using the V2.0.1 Control
Center on NT5.
The retail flow was tested on broker NT2_BK2 on NT2. The sample data
supplied by the SupportPac was used as input and the proper queues contained
data after the flow processed the data. The data was input to the queue using
mqsiput.exe from SupportPac IH02.
At this point we have:
A V2.0.1 Configuration Manager on NT1.
A V2.0.1 broker on NT1, NT1_BK1.
A V2.0.2 broker on NT2, NT2_BK2.
The retail flow from the Business Scenarios SupportPac, running in broker
NT1_BK1 on NT1, deployed from the V2.0.1 Configuration Manager on NT1,
using the V2.0.1 Control Center on NT5.
The retail flow from the Business Scenarios SupportPac, running in broker
NT2_BK2 on NT2, deployed from the V2.0.1 Configuration Manager on NT1,
using the V2.0.1 Control Center on NT5.
Since machine NT2 already has MQSeries Integrator V2.0.2 installed, we decide
to build the V2.0.2 Configuration Manager on NT2. At this point, we just need to
create the required components, since we performed a full install of MQSeries
Integrator V2.0.2 when we installed it for the broker NT2_BK2 that was used in
3.3, Adding a new broker on page 44. The new environment is shown in
Figure 3-17.
Figure 3-17 Environment after adding the V2.0.2 Configuration Manager: the
Control Center (DEVUSER) on NT5; the V2.0.1 Configuration Manager (NT1CM,
NT1_CM_QMGR) and broker NT1_BK1 (NT1BK1, NT1_BK1_QMGR) on NT1; the V2.0.2
Configuration Manager (NT2CM, NT2_CM_QMGR) and brokers NT2_BK1 (NT2BK1,
NT2_BK1_QMGR) and NT2_BK2 (NT2BK2, NT2_BK2_QMGR) on NT2; and broker
AIX1_BK1 (AIX1BK1, AIX1_BK1_QMGR) on the AIX machine
The service user ID used to run the V2.0.2 Configuration Manager on NT2 is
NT2CM. This user ID is made a member of the mqm group for MQSeries and a
member of the mqbrkrs group for MQSeries Integrator. This user ID was also
made a member of the Administrators group, since this is required to allow the
Configuration Manager service complete access to the registry. This is required
on Windows 2000 machines due to changes to the access rules for the registry.
Alternatively, the registry permissions can be modified to allow the user IDs of
the Configuration Manager and the broker the required access.
User ID NT2BK1 was created on NT2 to be used as the service user ID for
broker NT2_BK1. This user ID was made a member of the mqm MQSeries group
and the mqbrkrs group for MQSeries Integrator. This user ID was also made a
member of the Administrators group.
The database administrator created the MQSICMDB and MQSIMRDB
databases on NT2, as documented in the MQSeries Integrator Installation Guide.
The service user ID for the Configuration Manager, NT2CM, is granted the
following authorities to these databases:
Database MQSIBKDB already exists on NT2, but the service user ID for the new
broker, NT2BK1, has to be granted the following authorities to this database:
3. In the next window (see Figure 3-19), enter the name of the configuration
database (MQSICMDB) and the name of the message repository (MQSIMRDB).
Since we have given the service user ID the proper database access rights,
there is no need to mention them again. For clarity, we have entered them
anyway.
4. Click Next again and a new window (Figure 3-20) will be shown that contains
all parameters that were entered in the steps illustrated in Figure 3-18 and
Figure 3-19.
5. Click Finish to start the creation process for the new Configuration Manager.
Now broker NT2_BK1 is created on NT2 using the Command Assistant for
MQSeries Integrator. Again, the user ID performing the creation must be in the
Administrators group, but we recommend using the service user ID itself,
NT2BK1 in our case.
1. Click Start -> Programs -> IBM MQSeries Integrator -> Command
Assistant -> Create Broker.
2. You will see a window like the one shown in Figure 3-6 on page 48 and
onward. Alternatively, you can execute the mqsicreatebroker command in a
command window, as shown in Example 3-3.
Example 3-3 mqsicreatebroker command for broker NT2_BK1
mqsicreatebroker NT2_BK1 -i NT2BK1 -a ******** -q NT2_BK1_QMGR -n MQSIBKDB -u
NT2BK1 -p ********
MQSeries Integrator V2.0.2 is installed on AIX1. Only the broker components are
installed. MQSeries and DB2 are already installed. The installation of MQSeries
Integrator is not documented here, as it is not an upgrade and the installation
procedures are documented in the MQSeries Integrator for AIX Installation
Guide, GC34-5841. The installations of MQSeries and DB2 are not documented
here either. Each product has an installation guide that describes the installation
procedures.
The user ID aix1bk1 is created to be used as the service user ID for broker
AIX1_BK1. This user ID must be a member of the mqbrkrs group for MQSeries
Integrator and the mqm group for MQSeries. This is accomplished on AIX using
smit or smitty, the interfaces provided by the AIX operating system to allow
maintenance and administration. The mqm group was created before MQSeries
was installed and the mqbrkrs group was created before MQSeries Integrator
was installed.
A broker uses the ODBC interface to access its database. To configure ODBC,
we need a file .odbc.ini. The broker will find the location of this file by referring to
the environment variable ODBCINI. This environment variable will be part of the
profile for the broker user ID aix1bk1.
The .odbc.ini file is updated as shown in Example 3-4. A line, starting with
MQSIBKDB, is added to the section [ODBC Data Sources]. A new section
[MQSIBKDB] is added to link the ODBC name to the DB2 alias and to refer to the
correct driver to load.
Example 3-4 ODBC configuration for the broker on AIX
[ODBC Data Sources]
MQSIBKDB=IBM DB2 ODBC Driver
MYDB=IBM DB2 ODBC Driver
[MQSIBKDB]
Driver=/u/db2inst1/sqllib/lib/db2.o
Description=MQSIBKDB DB2 ODBC Database
Database=MQSIBKDB
[MYDB]
Driver=/u/db2inst1/sqllib/lib/db2.o
Description=MYDB DB2 ODBC Database
Database=MYDB
[ODBC]
Trace=0
TraceFile=/var/mqsi/odbc/odbctrace.out
TraceDll=/usr/opt/mqsi/merant/lib/odbctrac.so
InstallDir=/usr/opt/mqsi/merant
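A broker that fails to start on UNIX is often the victim of a mistyped
.odbc.ini: the data source missing from the [ODBC Data Sources] list, or a
Driver path that does not exist on the machine. The following sketch (our own
check, written against a sample file so it is self-contained) verifies both
conditions:

```shell
#!/bin/sh
# Sketch: sanity-check an .odbc.ini before starting the broker. A sample
# file is written here so the check is self-contained; on the broker
# machine, ODBCINI would point at the real file named in aix1bk1's profile.
ODBCINI=./.odbc.ini
export ODBCINI
cat > "$ODBCINI" <<'EOF'
[ODBC Data Sources]
MQSIBKDB=IBM DB2 ODBC Driver
[MQSIBKDB]
Driver=/u/db2inst1/sqllib/lib/db2.o
Description=MQSIBKDB DB2 ODBC Database
Database=MQSIBKDB
EOF

# The data source must appear both in the list and as its own stanza.
grep -q '^MQSIBKDB=' "$ODBCINI" && grep -q '^\[MQSIBKDB\]' "$ODBCINI" \
    && echo "MQSIBKDB is registered"
# Warn if the driver library named in the stanza is missing:
DRIVER=$(sed -n 's/^Driver=//p' "$ODBCINI" | head -1)
[ -f "$DRIVER" ] || echo "warning: driver $DRIVER not found" >&2
```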
The user ID used to create the broker is aix1bk1 and must be a member of the
mqbrkrs group for MQSeries Integrator and the mqm group for MQSeries. The
.profile file for user ID aix1bk1 is modified to set all required environment
variables and all path entries for:
DB2
The correct DB2 database instance
MQSeries Integrator
# The following are required to use the NEON support within MQSI V2.0.2
NEON_ROOT=/usr/lpp/neonsoft
export NEON_ROOT
NEON_CATALOGUES=$NEON_ROOT/NEONCatalogues
export NEON_CATALOGUES
ICU_DATA=$NEON_ROOT/share/icu16/data
export ICU_DATA
LIBPATH=$LIBPATH:$NEON_ROOT/bin
PATH=$PATH:$NEON_ROOT/bin
#
MQSI_REGISTRY=/var/mqsi
export MQSI_REGISTRY
export LIBPATH
export PATH
As on the Windows machines, the queue manager for the broker was created
manually. Most likely, an existing MQSeries environment is in place, including
procedures for the creation, configuration, and start-up of queue managers.
The queue manager is defined by executing:
crtmqm -u SYSTEM.DEAD.LETTER.QUEUE -lp 5 -ls 3 -lf 2048 AIX1_BK1_QMGR
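The flags on this crtmqm command are easy to misread, so the sketch below
assembles the same invocation from named variables and only prints it for
review. The values mirror the text; log sizing should follow local standards.

```shell
#!/bin/sh
# Sketch: the crtmqm invocation with each flag named. The script only
# prints the command; run it by hand once the values are confirmed.
QMGR=AIX1_BK1_QMGR
DLQ=SYSTEM.DEAD.LETTER.QUEUE   # -u: dead-letter (undelivered message) queue
PRIMARY_LOGS=5                 # -lp: number of primary log files
SECONDARY_LOGS=3               # -ls: number of secondary log files
LOG_PAGES=2048                 # -lf: log file size, in units of 4 KB pages

echo "crtmqm -u $DLQ -lp $PRIMARY_LOGS -ls $SECONDARY_LOGS -lf $LOG_PAGES $QMGR" \
    | tee create_qmgr.cmd
```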
The sender channel from the NT2 Configuration Manager queue manager to this
broker is created on NT2. As always, the channel names must match. This
channel definition on NT2 can be done with either the command line, as we did
on AIX, or the MQSeries Explorer, as previously described.
For reasons of completeness, the definitions made for queue manager
NT2_CM_QMGR are listed in Example 3-6.
Example 3-6 MQSC definitions on NT2_CM_QMGR
DEFINE QLOCAL ('AIX1_BK1_QMGR') +
DESCR('xmitq to AIX1 Broker AIX1_BK1') +
MAXMSGL(100000000) +
USAGE(XMITQ) +
TRIGGER +
TRIGTYPE(FIRST) +
TRIGDATA('TO.AIX1_BK1_QMGR') +
INITQ('SYSTEM.CHANNEL.INITQ')
DEFINE CHANNEL ('TO.AIX1_BK1_QMGR') CHLTYPE(SDR) +
TRPTYPE(TCP) +
MAXMSGL(100000000) +
XMITQ('AIX1_BK1_QMGR') +
CONNAME('rs617002(1415)')
DEFINE CHANNEL ('TO.NT2_CM_QMGR') CHLTYPE(RCVR) +
TRPTYPE(TCP) +
MAXMSGL(100000000)
For both channels and the transmission queue, the maximum message length
has been set to the maximum value, that is 100 MB. The transmission queues
are defined to support channel triggering.
To support inbound MQSeries communication on the AIX machine, two system
files need to be updated.
The /etc/services file contains a list of all network services. For each queue
manager, a line needs to be added. The following is added to the /etc/services
file for the queue manager AIX1_BK1_QMGR:
AIX1BK1    1415/tcp
The value AIX1BK1 should be unique and is used to search in another file for the
corresponding program to start when an incoming communication request is
intercepted by the operating system. We keep all entries in port number
sequence. 1415 is the port number for our queue manager to receive
communications (listen). As discussed before, port numbers need to be unique
and the network administrator should be involved to avoid conflicts.
The file /etc/inetd.conf contains the name and path of the actual program to start
for an incoming request. For our situation, the following is added to the
/etc/inetd.conf file:
AIX1BK1 stream tcp nowait mqm /usr/mqm/bin/amqcrsta amqcrsta -m
AIX1_BK1_QMGR
After these files are modified, the inetd service must be refreshed for the
changes to be in effect. This refresh is accomplished by entering the following
command:
refresh -s inetd
The entries in these two files are connected by the identifier entered first in
both files, AIX1BK1 in our case. When inetd detects an incoming connection on
port 1415, it starts the program amqcrsta with the supplied parameters to
service queue manager AIX1_BK1_QMGR.
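Because port numbers must be unique, a duplicate /etc/services entry is an easy
mistake on a machine hosting several queue managers. The following sketch (our
own check, run against a sample copy with an invented second entry so the real
system file stays untouched) reports any port listed twice:

```shell
#!/bin/sh
# Sketch: detect duplicate TCP ports in an /etc/services-style file.
# A sample copy is written here; the otherqmgr entry is invented.
cat > services.sample <<'EOF'
AIX1BK1         1415/tcp
otherqmgr       1416/tcp
EOF

# Extract the port/protocol field and report anything listed twice.
DUPES=$(awk '{print $2}' services.sample | sort | uniq -d)
if [ -z "$DUPES" ]; then
    echo "no duplicate ports"
else
    echo "duplicate port entries: $DUPES" >&2
    exit 1
fi
```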
At this point, it is recommended that the two sender channels be verified. This
is best accomplished by issuing a PING CHANNEL command, either from MQSeries
Explorer or from runmqsc. Channel connection problems are best detected and
resolved prior to attempting a deploy of MQSeries Integrator objects.
At this time, we can execute the mqsicreatebroker command. Note that there is
no command assistant available on AIX. The command needs to be executed by
the user ID aix1bk1 for which all the required configuration steps have been
performed. The parameters and the actual execution are shown in Figure 3-22.
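Based on the Windows invocation in Example 3-3, the AIX command would take the
following shape. This sketch only prints the command for review; the masked
passwords are placeholders, and the exact options should be verified against
the AIX product documentation.

```shell
#!/bin/sh
# Sketch: probable mqsicreatebroker invocation for AIX1_BK1, modeled on
# Example 3-3. Passwords are masked placeholders; the command is only
# printed here, not executed.
echo "mqsicreatebroker AIX1_BK1 -i aix1bk1 -a ******** -q AIX1_BK1_QMGR -n MQSIBKDB -u aix1bk1 -p ********" \
    | tee create_broker.cmd
```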
The Control Center on NT5 was used to interact with the Configuration Manager
on NT1. Both systems were using version 2.0.1 of the product. To manage the
new Configuration Manager running on NT2 with product code V2.0.2, we will
need to add a separate machine (NT4) for the user ID DEVUSER. The Control
Center has to be at the same level as the Configuration Manager. Managing a
multi-version broker environment is possible using a single version of the Control
Center, as shown in 3.3, Adding a new broker on page 44. Managing an
environment with multiple Configuration Managers at different software levels
can be more difficult for this reason.
Now the V2.0.2 broker, NT2_BK1 on NT2, and the V2.0.2 broker on AIX1 are
defined to the MQSeries Integrator V2.0.2 Configuration Manager on NT2. The
Control Center is started by clicking Start -> Programs -> IBM MQSeries
Integrator -> Control Center.
Then, to define the brokers to the topology,
1. Click the Topology tab.
2. Right-click the Topology folder in the left pane.
3. Click Check Out.
4. Right-click the Topology folder in the left pane.
The flow is modified to use the proper database on NT2, as described in the
SupportPac documentation. We use db2admin for our schema identifier.
The loan flow and message set are now deployed to the V2.0.2 broker,
NT2_BK1, running on NT2 (see Figure 3-24). All other setup required for the
execution of these flows from the SupportPac is done prior to testing. This
includes the application database setup and initialization and the creation of the
required MQSeries objects. This is described in the SupportPac documentation.
Figure 3-24 Deployed message set and message flow for the loan application
This flow was tested using the sample input provided by the SupportPac. The
test was a success when data was propagated to the proper output queues after
the flows had processed the input data. The data was put to the flow input queue
using mqsiput.exe from SupportPac IH02.
The loan flow and the message set are now deployed to the V2.0.2 Broker
AIX1_BK1 on AIX1. Any other setup required for these flows from the
SupportPac was performed prior to testing. This includes all other database
setup and MQSeries component definitions.
The flow is tested the same way on AIX1 as it is on NT2. The sample input data
provided by the SupportPac was used. The test was a success when data was
propagated to the proper output queues. Since the mqsiput.exe facility only
existed on NT, a remote queue definition was defined on an NT broker to route
the data to the proper queue on the AIX machine.
At this point we have:
A V2.0.1 Configuration Manager on NT1
A V2.0.1 broker on NT1: NT1_BK1
A V2.0.2 broker on NT2: NT2_BK2
The retail flow from the Business Scenarios SupportPac, running in broker
NT1_BK1 on NT1, deployed from the V2.0.1 Configuration Manager on NT1,
using the V2.0.1 Control Center on NT5
The retail flow from the Business Scenarios SupportPac, running in broker
NT2_BK2 on NT2, deployed from the V2.0.1 Configuration Manager on NT1,
using the V2.0.1 Control Center on NT5
A V2.0.2 Configuration Manager on NT2
The V2.1.0 product must be upgraded to fixpack level U200167, which can be
downloaded from:
ftp://ftp.software.ibm.com/software/mqseries/fixes/wmqiv21/winnt/
Message sets cannot be deployed from a V2.1 Configuration Manager to a
V2.0.2 broker without first applying the above fixes available on the Web.
Message sets that contain XML messages with an MRM message domain
are not supported when deploying from a V2.0.2 Configuration Manager to a
V2.1 broker, even after applying the fixes.
Message sets that contain XML messages with an MRM message domain
are not supported when deploying from a V2.1 Configuration Manager to a
V2.0.1 broker, even after applying the fixes.
13. Click Next.
14. Click Create listener configured for TCP/IP.
15. Enter the port number for your queue manager to use. Remember to use a
unique port number for the machine running the queue manager.
16. Click Finish.
User ID NT3BK2 is created on NT3 to be used as the service user ID for the
broker NT3_BK2. This user ID is made a member of the mqm MQSeries group
and the mqbrkrs group for MQSeries Integrator. This user ID was also made a
member of the Administrators group as this is required to enable the broker
service complete access to the registry.
The database administrator created the MQSIBKDB database on NT3. The
service user ID for the broker, NT3BK2, is granted the following authorities to this
database:
1. To define the broker, click Start -> Programs -> IBM WebSphere MQ
Integrator -> Command Assistant -> Create Broker.
2. Enter the Broker Name, Service User ID, Service Password and the Queue
Manager Name. Click Next.
3. Enter the Broker ODBC Data Source Name, User ID to access Database
(should be the broker service user ID) and the Broker Data Source Password.
Click Next.
Prior to adding the new broker to the topology for the Configuration Manager,
you must make sure that the Configuration Manager can communicate with the
broker. To do that, a sender/receiver channel pair is defined between the queue
manager for the Configuration Manager and the queue manager for the broker.
Along with the channels, the transmission queues are also defined. The name of
each transmission queue is the same as the name of the target queue manager.
DEFINE CHANNEL ('TO.NT3_BK2_QMGR') CHLTYPE(SDR) +
TRPTYPE(TCP) +
MAXMSGL(100000000) +
XMITQ('NT3_BK2_QMGR') +
CONNAME('m23cabxk(1416)')
DEFINE CHANNEL ('TO.NT2_CM_QMGR') CHLTYPE(RCVR) +
TRPTYPE(TCP) +
MAXMSGL(100000000)
The V2.1 broker, NT3_BK2 on NT3, is then defined to the MQSeries Integrator
V2.0.2 Configuration Manager on NT2. This is accomplished by logging onto the
NT4 Control Center as DEVUSER. The Control Center is started by clicking
Start -> Programs -> IBM WebSphere MQ Integrator -> Control Center.
Connect to the Configuration Manager by clicking Connect.
To define the V2.1 broker to the V2.0.2 Configuration Manager topology, follow
these steps:
1. Click the Topology tab.
2. Right-click the Topology folder in the left pane.
3. Click Check Out.
4. Right-click the Topology folder in the left pane.
5. Click Create -> Broker.
6. Enter the Broker Name in the Name field: NT3_BK2.
7. Enter the name of the queue manager the broker uses on the machine where it
resides: NT3_BK2_QMGR.
8. Click Finish.
9. Click File -> Check In -> All (Save to shared).
10. Right-click the Topology folder in the left pane.
11. Click Deploy -> Delta Topology Configuration.
According to the readme file, fixes must be applied before deploying across a
multiple-version environment. If the fixes are not applied, the deploy fails
with the messages shown below.
Figure 3-30 shows the message that is indicative of the problem. The following
two messages are standard messages that occur when these types of problems
are encountered.
In this case, the broker was running without any fixes applied. The Configuration
Manager was not using the fix IC32427.
Figure 3-31 and Figure 3-32 show the text associated with the two other error
messages.
After upgrading the V2.1 product on machine NT3, the deploy still fails. The
errors (messages BIP5145 and BIP5347) are shown in Figure 3-33 and
Figure 3-34.
After applying the fix IC32427 on machine NT2, the deploy was successful
(messages BIP4040 and BIP2056 in Figure 3-34).
Figure 3-33 Messages during deploy with and without fix IC32427
Figure 3-34 Messages during deploy with and without fix IC32427
The flow is tested the same way on NT3 as it was on NT2. The sample input data
provided by the SupportPac was used and mqsiput.exe was used to put the data
to the input queue. The test was a success when data was propagated to the
proper output queues.
The resulting environment is shown in Figure 3-35.
Figure 3-35 WebSphere MQ Integrator environment with three different product
versions: the V2.0.1 Configuration Manager (NT1CM, NT1_CM_QMGR) and broker
NT1_BK1 on NT1; the V2.0.2 Configuration Manager (NT2CM, NT2_CM_QMGR) and
brokers NT2_BK1 and NT2_BK2 on NT2; broker AIX1_BK1 on AIX1; the V2.1 broker
NT3_BK2 (NT3BK2, NT3_BK2_QMGR) on NT3; and Control Centers (DEVUSER) on NT5
and NT4
The new version 2.1 Configuration Manager will be used first for a new set of
message flows. These flows will be hosted on a new broker.
Once this new Configuration Manager is in place, we would like to consolidate all
Configuration Managers. There are two approaches to this. First, we could create
a new broker on NT1 and NT2 and connect these brokers to the new
Configuration Manager. This means that the Configuration Manager would be
used to manage brokers at three different levels of the product:
A version 2.0.1 broker on NT1
A version 2.0.2 broker on NT2
A version 2.0.2 broker on AIX1
A version 2.1.0 broker on NT3, the same machine that is used for the
Configuration Manager.
A second approach is to first upgrade the product and the components on NT1
and NT2. This would result in three Configuration Managers, on NT1, NT2, and
NT3, but all at the same level of the product. All the brokers would be upgraded
at the same time. A new broker would then be defined on each machine and
connected to the Configuration Manager on NT3. The existing brokers would no
longer be needed and could be deleted, along with the Configuration Managers
on machines NT1 and NT2.
The first approach has several limitations. Fixes need to be installed to allow the
coexistence of a version 2.0.1 broker, a version 2.0.2 broker and a version 2.1.0
Configuration Manager. Even when the fixes are applied, there is always a risk
that developers will use new features of the product that are not available on the
brokers. An example of this approach is provided in 3.7, Using a V2.0.2 broker
with a V2.1.0 Configuration Manager on page 105.
The second approach will be started in the remainder of this section. A new
Configuration Manager and broker are defined on NT3. Section 3.8, Product
upgrade on AIX1, NT1 and NT2 on page 108 describes the upgrade of
machines NT1, NT2 and AIX1. Finally, Section 3.9, Consolidating Configuration
Managers on page 128 describes the consolidation to a single Configuration
Manager.
NT3 has WebSphere MQ Integrator V2.1 already installed, since it was used to
create broker NT3_BK2 that was connected to a version 2.0.2 Configuration
Manager (see 3.5, Adding a new broker using WebSphere MQ Integrator V2.1
on page 77). Since we did a full install of the product, we simply need to create
the required components: databases and the Configuration Manager itself.
As with the procedures that were used previously, we create a queue manager
NT3_CM_QMGR and a queue manager NT3_BK1_QMGR on machine NT3.
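If the MQSeries defaults are acceptable, the two queue managers can be created
and started from the command line. This is a sketch using the names from this
section; logging options are omitted here:

```
crtmqm NT3_CM_QMGR
strmqm NT3_CM_QMGR
crtmqm NT3_BK1_QMGR
strmqm NT3_BK1_QMGR
```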
The service user ID used to run the Configuration Manager on NT3 is NT3CM.
This user ID is made a member of the mqm group for MQSeries and a member of
the mqbrkrs group for MQSeries Integrator. This user ID is also made a member
of the Administrators group, since this is required to allow the Configuration
Manager service complete access to the registry.
User ID NT3BK1 is created on NT3 to be used as the service user ID for broker
NT3_BK1. This user ID is made a member of the mqm MQSeries group and the
mqbrkrs group for MQSeries Integrator. This user ID is also made a member of
the Administrators group, since this is required to allow the broker service
complete access to the registry.
Machine NT6 is used to host an additional Control Center, to be used by the user
ID DEVUSER.
Database MQSIBKDB already exists on NT3, but the service user ID for the new
broker, NT3BK1, has to be granted the following authorities to this database:
2. Figure 3-37 shows the first window of the Command Assistant. Provide the
name of the queue manager (NT3_CM_QMGR) and the service user ID and
password.
3. Figure 3-38 shows the second window of the Command Assistant where
database access parameters are provided. Figure 3-39 shows the complete
command. Select Finish to start the creation.
Now broker NT3_BK1 is created on NT3 using the Command Assistant for
WebSphere MQ Integrator. We logged on as user NT3BK1 to create the broker.
Again, the only requirement is membership in the Administrators group, but it is
recommended to log on with the service user ID, NT3BK1 in our case. Click
Start -> Programs -> IBM WebSphere MQ Integrator -> Command Assistant
-> Create Broker.
The interface of the Command Assistant to create a broker was shown previously
in 3.5, Adding a new broker using WebSphere MQ Integrator V2.1 on page 77.
Here, we will only show the final command.
mqsicreatebroker NT3_BK1 -i NT3BK1 -a nt3bk1 -q NT3_BK1_QMGR -n MQSIBKDB
-u NT3BK1 -p nt3bk1
Prior to adding the new broker to the topology for the Configuration Manager, you
must make sure that the Configuration Manager can communicate with the
broker. To do that, a sender/receiver channel pair is defined to the queue
manager for the Configuration Manager and the queue manager for the broker.
Along with the channels, transmission queues need to be defined.
Example 3-9 shows the definitions required on queue manager
NT3_BK1_QMGR to connect to the queue manager used by the Configuration
Manager.
Example 3-9 MQSC commands on NT3_BK1_QMGR
DEFINE QLOCAL ('NT3_CM_QMGR') +
DESCR('xmitq to NT3 Config Manager') +
MAXMSGL(100000000) +
USAGE(XMITQ) +
TRIGGER +
TRIGTYPE(FIRST) +
TRIGDATA('TO.NT3_CM_QMGR') +
INITQ('SYSTEM.CHANNEL.INITQ')
DEFINE CHANNEL ('TO.NT3_CM_QMGR') CHLTYPE(SDR) +
TRPTYPE(TCP) +
MAXMSGL(100000000) +
XMITQ('NT3_CM_QMGR') +
CONNAME('m23cabxk(1415)')
DEFINE CHANNEL ('TO.NT3_BK1_QMGR') CHLTYPE(RCVR) +
TRPTYPE(TCP) +
MAXMSGL(100000000)
The matching definitions on queue manager NT3_CM_QMGR are shown below.
DEFINE QLOCAL ('NT3_BK1_QMGR') +
DESCR('xmitq to broker NT3_BK1') +
MAXMSGL(100000000) +
USAGE(XMITQ) +
TRIGGER +
TRIGTYPE(FIRST) +
TRIGDATA('TO.NT3_BK1_QMGR') +
INITQ('SYSTEM.CHANNEL.INITQ')
DEFINE CHANNEL ('TO.NT3_BK1_QMGR') CHLTYPE(SDR) +
TRPTYPE(TCP) +
MAXMSGL(100000000) +
XMITQ('NT3_BK1_QMGR') +
CONNAME('m23cabxk(1414)')
DEFINE CHANNEL ('TO.NT3_CM_QMGR') CHLTYPE(RCVR) +
TRPTYPE(TCP) +
MAXMSGL(100000000)
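With both channel pairs defined, the sender channels can be started and checked
from runmqsc on each queue manager. For example, on NT3_BK1_QMGR:

```
START CHANNEL('TO.NT3_CM_QMGR')
DISPLAY CHSTATUS('TO.NT3_CM_QMGR')
```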
The RouteToLabel flow and message set from the Business Scenarios
SupportPac are used as validation message flow and message set. The flow and
message set are imported to the V2.1 Configuration Manager on NT3.
Prior to importing the flow and message set, additional setup is required as
documented in the SupportPac documentation. MQSeries queues must be
defined. An application database must be created and tables created and
populated with data.
The message set for this flow is imported by:
1. Opening a command prompt window.
2. Stopping the Configuration Manager using the mqsistop configmgr
command.
3. Changing to the directory containing the RouteToLabel.mrp file.
4. Importing the message set by typing:
mqsiimpexpmsgset -i -u NT3CM -p nt3cm -n MQSIMRDB -f RouteToLabel.mrp -x XML
where -u and -p are the user ID and password that are used for message
repository access, and -n specifies the name of the message repository (you
specified these parameters using the -n, -u, and -p flags on the
mqsicreateconfigmgr command).
5. Restart the Configuration Manager using the command mqsistart configmgr.
6. Restart the Control Center from the Start menu.
7. In the Control Center Message Sets view, right-click the Message Sets folder
and select Add to Workspace. Select the message set RouteToLabel, and
click Finish.
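The command-line portion of the steps above, collected into one sequence (the
directory path is a placeholder):

```
mqsistop configmgr
cd <directory containing RouteToLabel.mrp>
mqsiimpexpmsgset -i -u NT3CM -p nt3cm -n MQSIMRDB -f RouteToLabel.mrp -x XML
mqsistart configmgr
```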
The -x XML flag on the mqsiimpexpmsgset command is a new feature in
WebSphere MQ Integrator V2.1. Multiple message types can now be defined for a
given message, and the -x parameter provides an identifier for the XML
message type. It is required for our exercise because our input message is XML.
The sample input data supplied by the SupportPac contains the name XML for
the message type. Therefore we use XML as our message type identifier
following the -x parameter.
The import of the flow to the workspace is accomplished as follows:
1. Select the Message Flows tab in the Control Center.
2. Import the dynamic routing message flow definition:
a. Select Import to Workspace... from the File menu.
11. Click OK.
12. Click the Log tab.
13. Click the Refresh button on the menu bar.
14. Verify that the deploy was successful.
Figure 3-42 Operations tab view showing deployed message set and flow
This flow was tested using the sample input provided by the SupportPac. The
test was considered successful when the data was propagated to the proper
output queues after the flows had processed the input data. The data was put to
the flow input queue using mqsiput.exe from SupportPac IH02:
mqsiput LABELIN NT3_BK1_QMGR <trademsg.xml
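To inspect the results, the standard MQSeries amqsget sample can be used to
get the messages from an output queue. The queue name below is a placeholder;
use the output queue names defined by the SupportPac:

```
amqsget OUTPUT.QUEUE NT3_BK1_QMGR
```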
This same technique could be used for the environment on AIX1 and NT1.
Figure 3-44 shows the broker domain using a V2.1 Configuration Manager next
to the broker domain using a V2.0.2 Configuration Manager.
According to the readme file, fixes must be applied before deploying across a
multiple version environment when MRM message sets are involved. At this
time, the fixes have not been applied and therefore the deploy fails. The
messages that appear are shown below.
This is the message that is indicative of the problem. The following two
messages are standard messages that occur when these types of problems are
encountered.
We have several options to fix this problem:
We could apply the aforementioned fixes.
We could ensure that we do not have a mixed-release environment.
We could change the message flow to use generic XML instead of
MRM-defined XML. That means that all Compute and other nodes containing
ESQL that references message fields must change.
We created a flow, genericxml, that uses generic XML. This flow is deployed and
tested, and works.
After applying the fix IC32427 on the NT2 installation of MQSeries Integrator
V2.0.2, we see that the deploy of the original message flow works also. Using the
sample data provided with the SupportPac, the behavior of the message flow
was again validated.
When checking the log, we received the message stating that the broker
successfully processed the configuration. However, we never received the
message saying that all references were deleted.
Regardless, we decided to proceed to see what would happen. We defined the
AIX1_BK1 broker in the topology of the Configuration Manager on NT3 (V2.1).
When the deploy was executed, the following messages were received:
7. Click OK.
8. Components are listed in the Software name field (Figure 3-50). Choose No
for Preview Only and for the other options.
The following script counts the rows in each of the broker tables:
SELECT COUNT(*) FROM AIX1BK1.BACLENTRIES;
SELECT COUNT(*) FROM AIX1BK1.BCLIENTUSER;
SELECT COUNT(*) FROM AIX1BK1.BGROUPNAME;
SELECT COUNT(*) FROM AIX1BK1.BLOGICALTOPHYSNAME;
SELECT COUNT(*) FROM AIX1BK1.BMQEPUBDEST;
SELECT COUNT(*) FROM AIX1BK1.BMQEPUBMSGIN;
SELECT COUNT(*) FROM AIX1BK1.BMQEPUBMSGOUT;
SELECT COUNT(*) FROM AIX1BK1.BMQESTDMSGIN;
SELECT COUNT(*) FROM AIX1BK1.BMQESTDMSGOUT;
SELECT COUNT(*) FROM AIX1BK1.BMQPSTOPOLOGY;
SELECT COUNT(*) FROM AIX1BK1.BNBRCONNECTIONS;
SELECT COUNT(*) FROM AIX1BK1.BPHYSICALFILE;
SELECT COUNT(*) FROM AIX1BK1.BPUBLISHERS;
SELECT COUNT(*) FROM AIX1BK1.BRETAINEDPUBS;
SELECT COUNT(*) FROM AIX1BK1.BRMCONFIG;
SELECT COUNT(*) FROM AIX1BK1.BROKERAA;
SELECT COUNT(*) FROM AIX1BK1.BROKERAAEG;
SELECT COUNT(*) FROM AIX1BK1.BROKERRESOURCES;
SELECT COUNT(*) FROM AIX1BK1.BSCADADEST;
SELECT COUNT(*) FROM AIX1BK1.BSCADAMSGIN;
SELECT COUNT(*) FROM AIX1BK1.BSCADAMSGOUT;
SELECT COUNT(*) FROM AIX1BK1.BSUBSCRIPTIONS;
SELECT COUNT(*) FROM AIX1BK1.BTOPOLOGY;
SELECT COUNT(*) FROM AIX1BK1.BUSERCONTEXT;
SELECT COUNT(*) FROM AIX1BK1.BUSERMEMBERSHIP;
SELECT COUNT(*) FROM AIX1BK1.BUSERNAME;
SELECT COUNT(*) FROM AIX1BK1.BWFFRELATIONSHIP;
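Such a script can be run from the DB2 command line. This is a sketch, assuming
the statements are saved in a file named counttables.sql and the broker database
on AIX1 is catalogued under the name MQSIBKDB, as on the other machines:

```
db2 connect to MQSIBKDB
db2 -tvf counttables.sql
db2 connect reset
```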
The same script was executed after the uninstall. This was to determine whether
the tables were indeed cleaned up.
After the uninstall of MQSeries Integrator V2.0.2 from AIX1, all the counts for
these tables were zero.
This proves that the uninstall successfully deletes the defined brokers. The
undeploy did not delete the records from the tables; it might have done so had it
finished properly, or it may simply have marked the rows as deleted. We believe
the rows should have been deleted.
At this point, the Configuration Manager was deleted without deleting the tables.
This is accomplished by entering the command mqsideleteconfigmgr, without
specifying any parameters.
Then WebSphere MQ Integrator V2.1 was installed. The install was executed
per the documented instructions, without first uninstalling the current version of
the product. Because we installed on top of the existing installation, the product
was placed in the old directories, whose names contain 2.0. This is a bit
confusing, but it is documented in the installation guide.
When the install reaches the point where it is going to install NEON, you can
choose to continue or cancel. Choosing Cancel displays a box warning that the
installation will be incomplete if you proceed without it. We needed support for
NEON messages anyway, so we chose to install it. The machine was then
rebooted.
The mqsidropmrmtables command was executed from a DB2 command line
window.
mqsidropmrmtables -n MQSIMRDB -u NT1CM -p NT1CM
From this same DB2 command window, we migrated the remaining tables using
the mqsimigratetables command.
mqsimigratetables -n MQSICMDB -u NT1CM -p NT1CM
Now the Configuration Manager and broker are started. This can be
accomplished from the command line with mqsistart configmgr and mqsistart
NT1_BK1 or from the Windows NT/2000 Services applet in the Control Panel.
When we imported the message set previously, we failed to include the new -x
parameter. Since our sample messages were defined in the MRM but have a
message type of XML, an XML layer had to be added to the new message set.
We started the Control Center and connected to the Configuration Manager. At
that point, we received a message that obsolete data was present in the
message set in the workspace. To correct this, we imported the workspace
element values. Strictly speaking, this is not required, since the message set is
correct in the database; if the values are not needed, they need not be added to
the workspace. We then checked out the message set and added the XML layer
with type name XML. This was checked in and deployed to the broker.
To understand more about physical layers in the MRM and other new features of
the MRM, refer to Chapter 6, New MRM features explored on page 163.
To add the XML layer, right-click the message set. Then click Add -> Physical
Format -> XML Format (see Figure 3-52).
A new window will appear to name the new format (Figure 3-53).
The changed message set is now assigned to the broker and deployed
successfully (Figure 3-55).
The message flow is tested as before. We use mqsiput.exe to put the sample
data file, provided in the SupportPac, to the input queue. Messages appear on
the appropriate output queue to prove that the flow still works.
Figure 3-56
At this point, the Configuration Manager was deleted without deleting the tables.
This was accomplished by entering the command mqsideleteconfigmgr, without
specifying any parameters.
mqsideleteconfigmgr
On NT1, we had chosen to install the new version on top of the existing one. The
result was that the new product code was stored in a folder with a name that
included the old version number. Because this is quite confusing, we decided to
uninstall the software on NT2.
Because we have saved the message sets and message flows, we can uninstall
the product completely, including the data.
MQSeries Integrator V2.0.2 was uninstalled as follows:
1. Click Start -> Programs -> MQSeries Integrator -> Uninstall...
2. Click the radio button Uninstall MQSI completely, including data.
3. Click Next.
4. Click Finish.
Choosing the option to delete the data and files is necessary to clean up all
registry entries.
From this same DB2 command window, we migrated the remaining tables using
the mqsimigratetables command.
mqsimigratetables -n MQSICMDB -u NT2CM -p NT2CM
When we imported the message set, we included the new -x parameter. This
parameter allows you to enter an XML type name at import time and therefore
create the XML layer. Since our sample messages were defined in the MRM
domain and have a message type of XML, adding this XML layer is required.
We added the loan request message set, and all components that are part of it,
to the workspace. This was done to verify that the import worked properly and
that the XML layer was added. It is not strictly required, since the message set is
correct in the database; if the values are not needed, they need not be added to
the workspace.
Click the XML tab to verify the XML layer (Figure 3-59).
Figure 3-59 Imported Loan Request Message Set - XML Wire Format
The XML layer was added to the message set when it was imported. This was
accomplished by using the new -x parameter. Notice that the XML Wire Format
Identifier is XML, which is the name we supplied with the -x parameter.
The Configuration Manager still thinks it has three deployed brokers. This is not
really a problem. All that is required to make things work is to check out the
brokers. The message sets previously deployed to these brokers have been
removed from each broker, so you need to drag each message set to the
appropriate broker again. This is necessary because the imported message sets
have new UUIDs. Check in everything by clicking File -> Check In -> All (Save
to Shared). Now a complete deploy of all types is performed by clicking File ->
Deploy -> Complete Configuration (all types) -> Normal.
Now the message flows and message sets for brokers NT2_BK2 and NT2_BK3
can be redeployed. We connect to each Configuration Manager, NT1_CM and
NT3_CM and perform a complete deploy to the NT2 broker attached to each of
these Configuration Managers.
Having performed the upgrade from 2.0.1 to 2.1 in place on NT1, without
uninstalling the 2.0.1 version, and the upgrade from 2.0.2 on NT2, with
uninstallation of the 2.0.2 version, we now have a rather complex environment.
Since a consolidation of Configuration Managers is already planned, we decided
to proceed to that step.
Chapter 4. Deploying a broker on Sun Solaris using Oracle
4.1 Introduction
One of the platforms that WebSphere MQ Integrator supports for running a
broker is Sun Solaris. With Solaris, there is also support for running a broker
using DB2, Sybase and Oracle. The installation and configuration options for all
of these data sources can be found in the documentation that accompanies the
product: WebSphere MQ Integrator for Sun Solaris Installation Guide,
GC34-5842, and the WebSphere MQ Integrator Introduction and Planning
Guide, GC34-5599. Since configurations utilizing DB2 and SQL Server were
used in other chapters of this book, we chose to take advantage of the
opportunity to conduct our broker installation using Oracle 8i while working with
Solaris.
On this platform and with these databases, there is full support for running a
message broker and a User Name Server on the same queue manager, as on all
of the other supported platforms. WebSphere MQ Integrator also supports the
MQSeries ability to implement an XA configuration with a compliant database,
such as DB2 or Oracle.
While this chapter will discuss these features, you should also consult all of the
supporting documentation for Solaris, Oracle and MQSeries to ensure proper
configuration for your requirements.
MQSeries installation
1. While logged in as root or super user, create the user and group mqm using
the admintool utility. Install the MQSeries software package using the pkgadd
utility.
2. Create a queue manager, to be used by our broker, called SUN1_BK1_QMGR
with the command:
crtmqm SUN1_BK1_QMGR
Consider increasing the values for logging. You can update mqs.ini in
/var/mqm before running the crtmqm command. Or, if you do not want to
change the logging defaults, use the parameters -lp and -lf to increase the
size of the logging space. Remember also that you will need linear logging in
MQSeries to be able to use MQSeries built-in backup and restore facilities.
The next command creates a queue manager that has 20 MB of logging
space.
crtmqm -lp 5 -lf 1024 SUN1_BK1_QMGR
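The 20 MB figure follows from the MQSeries log geometry: -lf sets the size of
each log file in 4 KB pages, and -lp sets the number of primary log files. A quick
check of the arithmetic:

```shell
# 5 primary log files, each 1024 pages of 4 KB:
# 5 * 1024 * 4 KB = 20480 KB = 20 MB
lp_files=5
lf_pages=1024
page_kb=4
total_mb=$(( lp_files * lf_pages * page_kb / 1024 ))
echo "${total_mb} MB of primary log space"
```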
3. You also need to create a listener by editing the /etc/services file to add an
entry similar to the following:
sun1bk1         1415/tcp
4. Finally, you will need to make one additional entry to the /etc/inetd.conf file:
sun1bk1 stream tcp nowait mqm /export/home/mqm/software/bin/amqcrsta
amqcrsta -m SUN1_BK1_QMGR
This should be all on one line. The value sun1bk1 in inetd.conf needs to
match the value sun1bk1 in the file services. You can choose any name you
want, as long as it is the same in both files.
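After editing both files, inetd must re-read its configuration before the new
listener becomes active. On Solaris the classic approach is to send inetd a HUP
signal; a sketch, to be run as root:

```
grep sun1bk1 /etc/services /etc/inetd.conf
kill -HUP $(pgrep inetd)
```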
5. We did not choose to make this queue manager the default queue manager,
so to start the queue manager you will need to specify:
strmqm SUN1_BK1_QMGR
6. Verify that the queue manager is usable by starting the MQSC commands
with:
runmqsc SUN1_BK1_QMGR
If you execute runmqsc as the root user and root is not member of the mqm
group, the command may fail. Add root to the mqm group, or switch to the
mqm user by executing the command su - mqm.
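Inside runmqsc, a minimal command confirms that the queue manager responds,
for example:

```
DISPLAY QMGR DESCR
END
```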
WebSphere MQ Integrator installation
2. Edit the /etc/group file and add root to the end of the mqbrkrs entry:
mqbrkrs::103:root
3. Create the new broker user ID, such as mqbroker, using the commands:
useradd -G mqbrkrs,mqm -c "WMQI user" mqbroker
passwd mqbroker
We set the password for mqbroker to mq1broker.
4. Once you have mounted the CD-ROM to a directory such as /cdrom, you are
ready to proceed with the software installation from the CD, using the
command:
pkgadd -d /cdrom/wmqi_solaris/
Oracle customization
For the installation of the Oracle software packages, refer to the specific
installation guides provided by Oracle. For further help, you can access the
installation guidelines that we used in Appendix C, Oracle8i installation and
configuration on Solaris on page 429. This demonstrates the basic installation
and configuration that we used for this project so that your installation will follow
our configuration. Every enterprise environment will have different configuration
requirements for performance and scalability. It is recommended that the data
source be housed on the same physical machine as the broker for performance
reasons and that a new database instance be created for the broker for ease of
management.
Once the Oracle software is installed and you have created a database instance
with a name such as WMQIBKDB, there are some additional tasks that must be
performed to support ODBC and connectivity with the broker.
1. If you are using Oracle 8.1.6, as we did, you will need to issue the following
command to update the Java Runtime Environment (JRE) that Oracle uses:
mqsi_setupdatabase oracle <database_install_directory>
2. Though we will discuss XA support later in the chapter, you can execute the
following command at this time:
ln -s <db_directory>/lib/libclntsh.so /usr/lib/libclntsh.a
If you are not signed on as the oracle user, you may have to change to that
user (oracle) to execute these commands.
3. Make sure that your SQL*Net listener is running by issuing the command:
lsnrctl status
6. It is suggested that you create a new Oracle user ID to be used with your new
Oracle database to support the broker. Log in to SQLPlus with the
administrator user ID and password. If you just now created your new
Notice that we used the same user name and password for the database as
we did for the broker administrator. This was done mainly for the sake of
simplicity. You can create the Oracle user ID with whatever name you choose,
since it can be specified in the mqsicreatebroker command.
7. Exit SQLPlus and attempt to log back in using the newly created user name
and password:
sqlplus mqbroker/mq1broker@wmqibkdb
8. If you followed the instructions for installing and configuring the database
from Appendix C, Oracle8i installation and configuration on Solaris on
page 429, you should be able to see that the connection is using SQL*Net
and is not local. You can verify this by opening another terminal and logging in
as root. Now issue the command
ps -ef | grep mqbroker
9. Now you need to make some final adjustments for ODBC by editing the
/var/wmqi/odbc/.odbc.ini file as follows:
At the top of the file, add the entry:
[ODBC Data Sources]
WMQIBKDB=MERANT 3.70 Oracle 8 Driver
[WMQIBKDB]
Driver=/opt/wmqi/merant/lib/UKor816.so
WorkArounds=536870912
WorkArounds2=2
Description=Oracle8
ServerName=wmqibkdb
EnableDescribeParam=1
OptimizePrepare=1
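The [ODBC Data Sources] section names each data source, and the stanza that
follows supplies its driver settings, so every name listed at the top should have a
matching [name] stanza. A small sketch that checks this on a scratch copy of the
file (the path /tmp/odbc_check.ini and its contents are just this example):

```shell
# Write an example .odbc.ini and verify each declared data source has a stanza
cat > /tmp/odbc_check.ini <<'EOF'
[ODBC Data Sources]
WMQIBKDB=MERANT 3.70 Oracle 8 Driver
[WMQIBKDB]
Driver=/opt/wmqi/merant/lib/UKor816.so
ServerName=wmqibkdb
EOF
# Collect the names declared under [ODBC Data Sources]
dsn=$(awk -F= '/^\[ODBC Data Sources\]/ {f=1; next} /^\[/ {f=0} f && NF {print $1}' /tmp/odbc_check.ini)
for name in $dsn; do
  grep -q "^\[${name}\]" /tmp/odbc_check.ini && echo "stanza present for ${name}"
done
```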
Logging
After completing the software installations and creating the database, there are
still a few configuration requirements that must be completed before creating the
broker.
1. Add the line user.info /tmp/wmqisys.log to the /etc/syslog.conf file. This
enables logging for the broker.
2. Create the logging file by executing the command touch /tmp/wmqisys.log.
3. Edit the file .profile of the mqbroker user ID and add the lines shown in
Example 4-1.
Example 4-1 Sample additions to the .profile for the mqbroker
ORACLE_HOME=<oracle_home_directory>
export ORACLE_HOME
LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
LIBPATH=$LD_LIBRARY_PATH
export LIBPATH
NLS_LANG=AMERICAN_AMERICA.UTF8
export NLS_LANG
4. Save your changes. Rerun the .profile by logging off and back on, or by
executing it like so:
. ./.profile
5. At this point, you should be ready to create the broker. If, when issuing the
mqsicreatebroker command, you are presented with errors involving the data
source, ODBC, or the connection to the database, investigate the error
message closely. Most likely the problem will be with one of the configuration
settings in either the .odbc.ini file or the .profile file for the broker user.
6. If the broker was created successfully, you should be able to issue the
command mqsilist and see the output:
BIP8099I: SUN1_BK1
SUN1_BK1_QMGR
2. Copy the source file, the include file, and the make file for the switch module
into this directory.
cd /opt/mqm/samp/xatm
cp oraswit.c xa.h xaswito8.mak /var/mqm/exits/oracle
3. Compile the source module. You will need a compatible C compiler, such as
Forte C++ V6. The user ID executing the make command will need an
environment set up for compilation.
make -f xaswito8.mak
4. Depending upon your environment, you may need to create the following
links:
ln -s /usr/ucblib/libucb.so /usr/lib/libucb.so
ln -s /usr/ucblib/libucb.so.1 /usr/lib/libucb.so.1
When the compilation is finished, verify again that the queue manager user ID
has access to the shared library.
Changes to qm.ini
The queue manager's initialization file, qm.ini, has a special stanza for XA
coordination. Add an XAResourceManager: stanza to this file for each Oracle
database; you may add more than one database. Example 4-2 shows the
settings for the broker's own database. Add similar stanzas for any other
database that is accessed by the database nodes in your message flows.
Example 4-2 Settings in qm.ini for an Oracle database
XAResourceManager:
Name=Oracle WMQIBKDB
SwitchFile=/var/mqm/exits/oracle/ora8swit
XAOpenString=Oracle_XA+Acc=P/mqbroker/mq1broker+SesTm=35+logDir=/var/mqm/exits/oracle/ora.log+DB=WMQIBKDB
Note that XAOpenString= must be specified on a single line. The XA protocol also
allows for XACloseString, but this is not required for Oracle databases.
Changes to Oracle
The user ID that was included in XAOpenString needs to have select authority for
a specific Oracle table. To grant this authority, log on using the user ID oracle and
start the SQLPlus interface. Within SQLPlus, log on to Oracle using user ID sys.
Execute the following command:
grant select on DBA_PENDING_TRANSACTIONS to mqbroker;
Verify the log file /tmp/wmqisys.log (refer to Logging on page 137) and check
that no errors are reported.
When no errors are encountered, you can add the broker to the topology in the
Configuration Manager and deploy the topology. Once the deploy of the topology
is successful, you can assign message flows to the broker.
Chapter 5. Deploying a broker on Windows 2000 using SQL Server
This chapter describes the process of installing a WebSphere MQ Integrator
Broker on Windows 2000 with SQL Server 7.0.
With WebSphere MQ Integrator, it is now possible to configure and deploy a
broker using Microsoft SQL Server as the database. It is not possible to use MS
SQL Server as the data source for either the Configuration Manager or the
Message Repository.
The broker that we are going to create will run on its own stand-alone server,
with the Configuration Manager running on a separate machine. There is little
reason to run a SQL Server-based broker and the Configuration Manager on the
same machine, since the Configuration Manager would still require a separate
database product.
Important: Remember that you should conduct all of these installation steps
signed on as a user that is a member of the Administrator group.
5. Right-click the new user and select Properties. Select the Member Of tab
and add the user to the Administrators group.
6. Close the window and proceed to the SQL Server installation.
5. Here you can select a Typical or Custom installation. For our purposes, the
Typical installation is sufficient. Verify the installation directory and click Next.
6. The next step is to assign the user that will run the SQL Server services. Use
the user ID that we defined above and the correct password. Use the local
machine name as the domain to be specific for our example. Click Next.
7. This should be all of the information that you need to begin the install. When
the install completes, you will need to apply the Service Pack. If your
installation prompts you to reboot, then do so. If the system does not prompt
you to reboot, you should be able to conduct the service pack install
immediately.
8. We received Service Pack 2 from this URL:
http://www.microsoft.com/sql/downloads/sp2.asp
For this example, it was only necessary to use the Database Components
Service Pack 2, and not the OLAP Service Pack 2, since we did not install
those features. The file to download is called sql70sp2i.exe for Intel platforms.
Since the minimum requirement for running a WebSphere MQ Integrator
broker on SQL Server 7.0 is Service Pack 2, that is what we chose to use.
You may want to use a later service pack, if needed.
9. Save this file to a temporary directory and unzip it. Run the Setup.bat
command file. The installation should start and you should be able to select
Windows NT authentication. That should be all the information you need to
let the installation continue. Several scripts are run and it will take some time
for the service pack to finish the installation.
10. When the installation is completed, open the Windows Services menu and
locate MSSQLServer and SQLServerAgent. Set both of these services to
start up automatically (this may already have been done).
11. This would be a good time to reboot. Once the reboot is completed, open the
Services menu again and ensure that the two required services are running.
12. Open the Event Viewer and see whether there are any new messages in the
Application or System Event log that pertain to the SQL Server installation.
13. Go to Start -> Programs -> Microsoft SQL Server 7.0 -> Enterprise
Manager. Expand the nodes until Databases is visible. If the two services are
running and you still get a message similar to the one shown in Figure 5-4,
simply ignore it and click Yes.
14. Now we are going to create a database that will be used by the WebSphere
MQ Integrator broker. Right-click Databases and select New Database. The
name of the database should be WMQIBKDB. Accept the default values for the
other fields; these can be optimized later if necessary.
15. Now expand the Security folder. Right-click Logins and select New Login.
For the name, use the user that you created earlier, wmqi_db. Use the
Windows NT Authentication and set the Domain name as your local
machine name for this example. Use Grant Access and set the Default
Database to WMQIBKDB. The pane should look like Figure 5-5.
16. Under the Database Access tab (see Figure 5-6), click the Permit checkbox
for WMQIBKDB. Under Database Roles, you can also select public,
db_owner, db_accessadmin, db_securityadmin, db_ddladmin,
db_datareader and db_datawriter. When you are finished, click OK.
17. Close the SQL Server Enterprise Manager. We now have to define the
ODBC datasource.
18. Select Start -> Settings -> Control Panel -> Administrative Tools -> Data
Sources (ODBC). Select the System DSN tab and click Add.
19. Scroll down the driver list until you find SQL Server and double-click it. For
the datasource Name, just use the same name as the database,
WMQIBKDB. Use the same for the Description and select (local) from the
pull-down menu to choose the SQL server (see Figure 5-7).
20. Click Next. In the next window, select Windows NT authentication using
the network login ID. It is probably also a good idea to select the Connect to
SQL Server to obtain default settings for the additional configuration
options checkbox (see Figure 5-8).
21. Click Next. Change the default database to WMQIBKDB and accept the other
default values.
23.If you want to test the data source, you can do so in the next window by using
the button provided.
24.When finished, click OK. Close the data source window by clicking OK. Now
you are ready to install WebSphere MQ Integrator 2.1.
3. Click Next through the next several windows until the installation starts. The
installation may pause at the NNSYRules and NNSYFormatter Installation
window. Go ahead and install this support by clicking Next.
4. If you have already registered, you can bypass the registration window
without completing it again.
5. Most likely, WebSphere MQ Integrator will prompt you for a reboot. Close
down any other applications that you have running and proceed with the
reboot.
6. We are going to use a separate user to run the broker service. Therefore, you
need to go back to Start -> Settings Control Panel -> Administrative Tools
-> Computer Management.
7. Expand Local Users and Groups. Right-click the Users folder and add a
New User.
8. For the service user ID, use NT4_BK1 with the same password. Click Create
then Close.
9. Right-click the new user and select Properties. Click the Member Of tab and
add the user to the Administrators, mqbrkrs and the mqm groups.
10. The easiest way to create a broker on Windows 2000 is to first create the
queue manager. By creating the queue manager yourself, you can, for
example, specify appropriate values for logging. This procedure is well
described in other places. For a quick refresher, go to Start -> Programs ->
IBM MQSeries V5.2.1 -> MQSeries Services. Select Action -> New ->
Queue Manager. Name the queue manager NT4_BK1_QMGR (you do not have
to use this name, but it is suggested for consistency with our installation). Fill
out the other fields as needed for your environment. To be consistent with our
installation, set Listener to run on port 1415. Finish using the wizard and start
the new queue manager from the MQSeries Services applet by right-clicking
NT4_BK1_QMGR and selecting All Tasks -> Start.
11.At this point, you are ready to create the broker itself. Go to Start ->
Programs -> IBM WebSphere MQ Integrator 2.1 -> Command Assistant
-> Create Broker. From the GUI that opens, you will be able to run the
mqsicreatebroker command. If you prefer to use the command line instead,
just do it from any new command window.
12. Name the broker NT4_BK1 and specify the new user ID you just created,
NT4_BK1, with the correct password. Set the queue manager name to
NT4_BK1_QMGR, or to whatever name the local MQSeries queue manager you
are using has.
13. Click Next. Specify the data source to use, that is, the ODBC data source
that we previously created. The name that we used was WMQIBKDB. Use the
database service user ID and password that we specified previously. We used
wmqi_db. Leave the last two checkboxes empty for this installation.
14. Click Next and verify the command (the passwords are hidden).
15. If the command string is correct and complete, click Finish to create the
broker.
It is important to check the system and application error logs once the create
broker command finishes. If there are no error messages in the log files, then
you should be able to go to the Services applet and see a new entry that says
IBM MQSeries Broker NT4_BK1.
16. Set the service to start up automatically and then start it. Once again,
check the application and system logs to ensure that there are no error
messages. To verify which elements are installed in the local WebSphere MQ
Integrator environment, open a command prompt and issue the
command mqsilist.
BIP8099I: NT4_BK1
NT4_BK1_QMGR
17. You are ready to begin using your new SQL Server-based broker in your
WebSphere MQ Integrator network.
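For reference, the Command Assistant used in steps 11 through 15 builds an mqsicreatebroker invocation roughly like the following sketch (placeholder passwords shown; verify the exact flags against the mqsicreatebroker documentation for your service level before relying on them):

```
mqsicreatebroker NT4_BK1 -i NT4_BK1 -a <service_password>
                 -q NT4_BK1_QMGR -n WMQIBKDB -u wmqi_db -p <db_password>
```

Running the command directly from a command window is equivalent to finishing the Command Assistant wizard.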
We will configure the channels using the MQSeries Explorer as an example, but
you can use the command line version.
1. Click Start -> Programs -> IBM MQSeries V5.2.1 -> MQSeries Explorer.
2. The first step is to define a transmission queue. Under your queue manager
for your broker, right-click Queues -> New -> Local Queue.
3. For the Queue Name, use NT3_CM_QMGR and for the Usage, use Transmission
(see Figure 5-18).
4. Click the Extended tab and change the Maximum Message Length to
100000000.
5. Click the Triggering tab and set Trigger Control to On. Set Trigger Type to
First and set the Initiation Queue Name to SYSTEM.CHANNEL.INITQ. The field
Trigger Data should be set to TO.NT3_CM_QMGR, which is the name of the
sender channel that we will create here.
6. Click OK and verify that your queue is added to the display by refreshing the
screen.
7. Expand the Advanced folder and right-click the Channels folder. Select New
-> Sender Channel.
8. Under the General tab (see Figure 5-19), set the Channel Name to
TO.NT3_CM_QMGR, the Description to whatever you want, the Transmission
Protocol to TCP/IP, the Connection Name to the remote machine name with
the port in parentheses, and the Transmission Queue to NT3_CM_QMGR.
9. Now click the Extended tab. Change the Maximum Message Length to
100000000.
10.Click OK and verify that the channel appears in the right-hand pane. You may
need to refresh the screen for it to appear.
11. Now it is time to define a receiver channel. Right-click the Channels folder
again and select New -> Receiver Channel.
12.Under the General tab, set the Channel Name to TO.NT4_BK1_QMGR, the
Description to whatever you want and the Transmission Protocol to TCP/IP
(Figure 5-21).
13.Click the Extended tab and set the Maximum Message Length to 100000000
(Figure 5-22).
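The same queue and channel definitions made through MQSeries Explorer in steps 1 through 13 can be scripted in MQSC and fed to runmqsc. This is a sketch only; remotehost(1414) is a placeholder for the remote machine name and its listener port:

```
DEFINE QLOCAL('NT3_CM_QMGR') USAGE(XMITQ) MAXMSGL(100000000) +
       TRIGGER TRIGTYPE(FIRST) INITQ('SYSTEM.CHANNEL.INITQ') +
       TRIGDATA('TO.NT3_CM_QMGR')
DEFINE CHANNEL('TO.NT3_CM_QMGR') CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('remotehost(1414)') XMITQ('NT3_CM_QMGR') MAXMSGL(100000000)
DEFINE CHANNEL('TO.NT4_BK1_QMGR') CHLTYPE(RCVR) TRPTYPE(TCP) +
       MAXMSGL(100000000)
```

Save the definitions in a file and run, for example, runmqsc NT4_BK1_QMGR < channels.mqsc.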
Chapter 6.
The above steps are then repeated for a tagged message format; only the
differences are shown for clarity. We also show use of the message flow
debugger with a tagged message.
Give the new message set a name; this name needs to be unique within the
Configuration Manager. Leave the default value for the other fields in this window
and click Finish, as shown in Figure 6-2.
The message set will now be shown in your workspace; note especially the
identifier which has been created automatically: this is used to reference the
message set in the message flow. It does appear in a drop-down list for
completion in some Control Center fields, but it is vital that you make sure you
are using the correct one, because the identifiers for different message sets may
be similar.
If you click the Run Time tab, you will see that the Parser value is set to MRM, as
shown in Figure 6-4. This is the default and indicates that this message set will
use the MRM. It is important to make the distinction between MRM-defined XML
and generic (or self-defining) XML. This example uses defined XML, which
allows the MRM to perform most of the work required for message
transformation. Defining XML messages in the MRM has the advantage that you
can use the drag-and-drop features of a Compute node when building ESQL
statements for a transformed output message.
Give the format a name (in this example, TDS1). After it is created, you will see a
new tab appear with the name of the format. Select this tab, then set and check
the identifier. For ease of use, we recommend making the identifier the same as
the format name. This avoids the problems that can arise if the wrong value is
entered in another part of the message set or in an MQInput node of a message
flow.
At the same time, the delimiter character (a comma) can be entered, and the
changes applied. Note that the messaging standard defaults to UNKNOWN. This is
correct, because we are not using one of the pre-defined industry standard
formats (such as SWIFT) in this example.
Repeat the previous step to add an XML physical layer format. Remember to set
the identifier to the same value as the format name, in this example XML1, as
shown in Figure 6-7. Take care to choose appropriate format names, because
once they are added, formats cannot be deleted from the message set.
Note: You could name and identify your XML physical format as XML if you
wish; this is suggested as a good installation standard, so that if you have
existing MRM messages which specify XML as their RFH2 format, they will
pick up the new XML physical format. However, the use of XML1 here illustrates
that we are specifying an identifier (which we set) and not simply selecting
XML from a pre-defined Control Center list of formats.
When you are finished defining all four elements, your Elements folder in the
Control Center should look as shown in Figure 6-9. Do not be concerned about
the order in which the elements appear in the element list; you can re-order the
elements later, in the compound type definition.
Name the compound type; again, we recommend setting the identifier to the
same value, in this example LIST. Set the type content to Closed, as shown in
Figure 6-11. This means that all fields in the message must be defined in the
message type tree structure. An open type content can be useful if you only need
to parse the message partially. The remaining part of the message, for which no
elements are defined, is then considered a BLOB.
Next, we need to add the elements to the compound type. Right-click the
compound type LIST and select Add -> Element.
The elements previously created are displayed (see Figure 6-13). Select all of
them (hold the CTRL key down while selecting each row in turn) and click Finish.
At this point, your message set should look like that shown in Figure 6-14.
At this point, you should check in the elements and types of the message set.
Select any element, right-click it and select Check in. The Control Center will
check in the other elements for you at the same time. The check-in process will
store all definitions in the Configuration Manager's repository and link the types
to the physical formats that we had defined previously. When the check-in
process is complete, you will see tabs for the new formats when you select
elements or types.
To complete the definition of our message type and set the details of the physical
formats, check out the LIST type and click the TDS1 tab. Change the Data
Element Separation to Variable Length Elements Delimited, because in this
example we want to use variable length comma separated variable data, as
shown in Figure 6-16. Set the delimiter (to a comma) here also, because
otherwise you will get an error when deploying.
Re-order the element fields within the compound type by using the reorder
option in the context menu of the compound type LIST. A window will appear as
shown in Figure 6-17. Set the order to FIRST_NAME, LAST_NAME, EMAIL_ADDR and
LOCATION, in that order, by moving the fields up or down the list as needed, and
click Finish when they are correct.
Check in the type again, as well as the message set, if necessary. Remember to
note the message set identifier, because it needs to be set in the MQInput Node.
The next stage is to develop a message flow to transform a delimited message
into an XML message using the MRM.
Property          Value
Message Domain    MRM
Message Set       DNTBFL807M001
Message Type      TEST
Message Format    TDS1
Except for Message Type, the MQInput node presents a drop-down box for each
property, as shown in Figure 6-18. For the Message Format, if the name of the
physical format is not the same as the identifier, you should not select the
presented name, but instead type in the identifier.
Also, these default values are only used if the input message has no MQRFH2
header. If it does, the same fields in the MQRFH2 should have the values shown
in Figure 6-1.
With this configuration of the MQInput node, the MRM parses input messages
using the delimiter, and populates the MRM message tree, in this case with four
string elements which will be named FIRST_NAME, LAST_NAME,
EMAIL_ADDR and LOCATION.
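As an illustration (this sample data is not taken from the redbook itself), a delimited input message matching this definition could look like:

```
John,Doe,jdoe@nowhere.com,Earth
```

The MRM would parse this into the four string elements, in the order defined in the LIST compound type.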
The Output Message refers to the output of the Compute node and is not
necessarily the final output message from the flow. The actions of the Compute
node are performed by ESQL statements. These can be coded manually,
generated by selecting options in the properties window, or produced using a
combination of both methods.
If all the elements are to be transformed to XML and included in the output
message without any other transformation, then you can select Copy Entire
Message in the Compute node properties. Then click the ESQL tab and add this
one additional statement:
Set OutputRoot.Properties.MessageFormat = 'XML1';
This instructs the MRM (which is the default message domain) to construct the
output message (from the Compute node) using the message format XML1
(which we defined in our message set as an XML physical layer).
Set the MQOutput node to use a queue (such as WMQI.OUT); the queue
manager name can be left to default to the broker queue manager, and other
fields can be left to their default for now.
3. Select Use as Message Body on the output side. This generates ESQL to
copy the message set identifier and message identifier, but does not copy any
other fields. You copy those by dragging and dropping fields from the input
message to the output message in the Compute node properties. After doing this for the first
three fields (leaving out LOCATION in this example), the node properties look
like those shown in Figure 6-20.
4. Click the ESQL tab to manually add the two statements at the end of the
ESQL, as shown in bold in Example 6-2:
Example 6-2 Compute node ESQL to transform fields to XML with the MRM
DECLARE I INTEGER;
SET I = 1;
WHILE I < CARDINALITY(InputRoot.*[]) DO
SET OutputRoot.*[I] = InputRoot.*[I];
SET I=I+1;
END WHILE;
SET "OutputRoot"."MRM"."FIRST_NAME" = "InputBody"."FIRST_NAME";
SET "OutputRoot"."MRM"."LAST_NAME" = "InputBody"."LAST_NAME";
SET "OutputRoot"."MRM"."EMAIL_ADDR" = "InputBody"."EMAIL_ADDR";
SET OutputRoot.Properties.MessageSet = 'DNTBFL807M001';
SET OutputRoot.Properties.MessageType = 'TEST';
-- Enter SQL below this line. SQL above this line might be regenerated,
-- causing any modifications to be lost.
Set OutputRoot.Properties.MessageFormat = 'XML1';
Set OutputRoot.Properties.MessageDomain = 'MRM';
Message Format refers to the physical format in the message set, which we
called XML1. The message domain would default to MRM, but it is a good
practice to explicitly specify it to avoid confusion with the generic XML
domain.
5. Set the MQOutput node to use a queue such as WMQI.OUT (if you have not
already done this) and then check in your message flow.
2. Deploy the changes to the broker (using either a Delta or a Complete deploy).
The input count of 1 on the WMQI.IN queue is due to the execution group having
opened this queue for input.
You have now transformed a comma delimited message into XML using the
MRM. To format the XML message for readability, you could display the data in
Internet Explorer, or another XML-enabled program. Note that the names of the
XML tags are exactly the same as the names of the elements in the MRM. This
may not be what your applications expect, so let us have another look at the XML
physical format definition to control the generation of the XML message.
You can display XML messages easily using the program RFHUTIL, which is part
of SupportPac IH03. SupportPacs can be downloaded from the following Web
site:
http://www-4.ibm.com/software/ts/mqseries/txppacs/txpm1.html
The output XML tag names can be customized at the element level in the
message. Go to the Message Sets tab in the Control Center and select the
required element. Check out the element, select the XML1 tab, and change the
name, as shown in Figure 6-25.
You can also change the XML tags and attributes on the element properties. You
need to check out the LIST type before doing this, as shown in Figure 6-26.
In Chapter 7, "Exploring the new XML features" on page 205, we will take a
closer look at the functionality of the XML physical layer and the possible
customization of XML elements and tags.
The XML headers such as DOCTYPE and VERSION can be modified by
checking out the message set and selecting the XML1 tab, as shown in
Figure 6-27.
You can add further physical layers to the same message set; for example, if you
wanted to output the data as a fixed structure, you could add a Custom Wire
Format layer to the message set, configure it, and refer to this format name in a
Compute node.
You may want to create a new message set for each new pair of formats; this
avoids the constraints of one format definition affecting another. For example, a
delimited format expects an ordered set (meaning that the elements must be in
order in the message), whereas a tagged format could accept an unordered set
(since the tags can identify the values regardless of the order).
Note: It may seem that the ResetContentDescriptor node would transform a
message's format into another format, since you can set the values as
properties, but this is not the case. You must use a Compute node to
transform a message (or part of a message) from one MRM format to another.
The process is similar to the one shown in the previous example, so only the
differences are shown. You could also simply modify the message set and
message flow created for the CSV transformation, rather than create new ones.
Using the same procedures as in the previous example, perform these steps:
1. Create a message set (for example TAGGED_TEST).
2. Add physical formats for tagged/delimited and XML.
3. Create string elements.
4. Create a compound type.
5. Add elements to the compound type.
6. Create a message (using the compound type).
7. Reorder the elements.
Check in the message set, elements and types. We are now ready to configure
the message set to process the tagged data correctly.
The type definition should then look like the one shown in Figure 6-28. Note
that the tagged physical format has been called TAG, both for its name and
identifier, and the XML physical format (not visible in this window) has been
given the name and identifier of XML. Default values for these formats were
used up to this point.
4. Now click the TAG tab and set the Tag Data Separator to = (an equal sign),
the Data Element Separation to Tagged Delimited and the Delimiter to ; (a
semi-colon). Be careful not to add any spaces after the delimiter unless you
wish these to become part of the delimiter string. Apply the changes.
This configures the message set to process tagged data with a delimiter
(rather than fixed length fields) and with a separator between the tag and the
data. An equals sign separates each tag from its data. Each tagged field is
delimited with a semi-colon from the next field.
5. Now expand the LIST type to show the elements within it and check out each
element in turn. For each element, click the TAG tab to set the tag names.
6. Given the message structure shown in Example 6-5 on page 191, we set the
name of the tag for each element. Under the XML tab, you can then provide
the name of the XML tag that you want to assign to each element. Table 6-2
on page 194 provides an overview of all the names.
Element       Tag        XML name
FIRST_NAME    Given      First
LAST_NAME     Surname    Last
EMAIL_ADDR    Email      Email_Address
LOCATION      Location   Location
Figure 6-30 shows the TAG page for the element FIRST_NAME.
Figure 6-31 shows the XML page for the same element FIRST_NAME.
2. Configure the Compute node to Copy entire message and add the two
ESQL lines to set the message format and domain. In this example, the
output message format has an identifier of XML. The properties window
should look like the one shown in Figure 6-33.
3. Now wire the three nodes together and set the MQSeries input and output
queue names as for the previous example.
4. Assign the new (or modified) message set to the broker and the message
flow to an execution group. Make sure that you do not have two message
flows running from the same input queue (you can either stop one flow or
remove it from the execution group). Perform a deploy (a complete deploy
may be needed for a new message set).
5. Create a test tagged message on the input queue (for example using
MQSeries Explorer), which uses tags, as shown in Example 6-6.
Example 6-6 Tagged message
Given=John;Surname=Doe;Email=jdoe@nowhere.com;Location=Earth;
Note that we configured the message set XML tab to suppress the DOCTYPE
declaration and to set the XML root name to IBM (these changes are not required
for the example to work).
The simplest way to view XML messages in a parsed format is to use the
RFHUTIL program to read the queue; this program can also write messages to
queues from files, or from messages previously read. It is part of SupportPac
IH03 and can be downloaded free of charge from the IBM MQSeries Web site.
Try changing the order of the tags in the input message; you will see that it still
works correctly because we specified Unordered set. If we had specified an
ordered set, then changing the input tag order would generate an error.
However, the output message element order reflects the input order. If you want
to predetermine the output order, you can change the Compute node to select
fields individually and the order of assignment of these to the output message will
determine their order in the resulting XML message.
Figure 6-34 shows the Compute node properties that are required to generate
the XML tags in a fixed order regardless of their input order.
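Sketched as ESQL, the individual field assignments behind Figure 6-34 would look something like the following (a sketch only, using the element names and the XML format identifier from this example; the assignment order determines the element order in the generated XML):

```
SET "OutputRoot"."MRM"."FIRST_NAME" = "InputBody"."FIRST_NAME";
SET "OutputRoot"."MRM"."LAST_NAME"  = "InputBody"."LAST_NAME";
SET "OutputRoot"."MRM"."EMAIL_ADDR" = "InputBody"."EMAIL_ADDR";
SET "OutputRoot"."MRM"."LOCATION"   = "InputBody"."LOCATION";
SET OutputRoot.Properties.MessageFormat = 'XML';
```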
3. Expand the tree of brokers and execution groups until you can select your
flow (see Figure 6-36).
4. The message flow appears in the debugging window. You now set
breakpoints where you want the debugger to get control (so you can examine
the message). Do this by right-clicking the connections between nodes. In
this example, we have set breakpoints before and after the Compute node
(see Figure 6-37).
5. Select Debug Actions -> Start Debugging. This causes the Control Center
to deploy a modified version of the message flow to the broker. This contains
plug-in nodes for each breakpoint that you set. You will receive a pop-up
message confirming that deployment has been initiated. Check the log view
to confirm completion.
6. Put a test message in the input queue such as the one shown in Example 6-8.
Example 6-8 Tags in different order
Email=jdoe@nowhere.com;Location=Earth;Surname=Doe;Given=John;
Note that the tags have deliberately been set in a different order than desired
for the XML.
8. You can examine the message tree, including any headers, by expanding the
appropriate part of the message content display. You can also modify the field
values (for example changing the name Doe to Smith). To proceed to the next
breakpoint, select Debug Actions -> Step Into or click the Step Into icon.
9. The message flow stops at the next breakpoint and you can see the message
tree after the Compute node; notice the different order of the fields and the
effect of manually changing the data value for one of the fields. Select Step
Into once more and the message will be displayed at the end of the message
flow. You do not need to manually add a breakpoint here for this to work.
10.Notice that the destination list has been completed by the MQOutput node.
Select Run To Completion to commit the message to the queue.
11.Select Debug Actions -> Stop Debugging unless you want to continue with
another input message. Examine the output message with RFHUTIL; you
should see an XML message like the one shown in Example 6-9.
Example 6-9 Output message after debugging
<?xml version="1.0"?>
<IBM xmlns="www.mrmnames.net/DNUMQ4G07G001">
<TEST>
<First>John</First>
<Last>Smith</Last>
<Email_Address>jdoe@nowhere.com</Email_Address>
<Location>Earth</Location>
</TEST>
</IBM>
12.If you want to transform XML to tagged messages, you can demonstrate this
with the message set that you have created, using the XML format as the
input and the tagged format as the format set in the Compute node. Place an
XML message on the input queue and it will be generated as tagged on the
output.
To import the message flows, use the Control Center; select File -> Import to
Workspace, then specify the file names TAG_TO_XML.xml or
CSV_TO_XML.xml, which are the exported message flows.
The mqsiimpexpmsgset command has a new operand of -x <xml format name>.
This can be used to add an XML physical format when importing a message set.
This operand is not needed when the exported message set already contains an
XML format. Please see the readme file for WebSphere MQ Integrator 2.1 for
details of when to use the -x operand.
Chapter 7.
DTDs can be transported with the actual XML message, but may also be stored
separately. For the purpose of this example we created a small separate DTD:
Example 7-1 Sample DTD
<!ELEMENT people (person)+>
<!ELEMENT person (name, email*)>
<!ATTLIST person location CDATA #REQUIRED>
<!ATTLIST person age CDATA #IMPLIED>
<!ELEMENT name (first,last)>
<!ELEMENT first (#PCDATA)>
<!ELEMENT last (#PCDATA)>
<!ELEMENT email (#PCDATA)>
This DTD defines the people element as consisting of one or more persons.
Each person has a name and zero or more e-mail addresses. Location is an
attribute of person and is required. Age is an attribute of person and is optional.
Each name consists of first and last names (in that order). The first, last and
email elements are PCDATA (Parsed Character Data).
Attributes are associated with elements and appear in the same XML tag as the
element; they can be used for values that occur only once per element. They are
an alternative to using child elements, and XML designers must choose which
method to use. Example 7-2 shows a valid XML message that conforms to this
DTD.
Example 7-2 Sample XML message
<?xml version="1.0"?>
<people>
<person location="London" age="35">
<name>
<first>Fred</first>
<last>Bloggs</last>
</name>
<email>fred@nowhere.com</email>
</person>
</people>
3. Enter the fully qualified file name of the DTD file that you want to import. If you
prefer, you can browse the file directories to select the file. When you have
entered or selected the file, the name will be displayed as shown in
Figure 7-2:
You now have the imported definitions displayed in your message set. Note that
this behavior differs from the C or COBOL importer, where the import function
in the Control Center writes the definitions directly to the Configuration
Manager's repository; when the import of a C structure completes, you add the
already-defined message to the workspace, which also means that the message
has been verified. The DTD importer instead adds a message set to the
workspace of the Control Center, not to the Configuration Manager's
repository. As a result, some important validation is not performed until the
message definition is checked in. At that time, the Control Center
communicates with the Configuration Manager, which performs the final
validations.
3. Select all of these. You can select multiple rows by holding down the CTRL
key while selecting each row, then click Finish.
This action adds the types to your workspace so that you can examine and
modify them; it does not add them to the message set, because they are
already there.
You can also add the imported elements to the workspace in a similar way.
Right-click the Elements folder and select Add to Workspace -> Elements.
4. Select the elements; they then appear in your workspace. Again, they are
already part of the message set. You are simply making them accessible to
the Control Center.
3. Now expand the Types folder and check out person_attType, age and
location.
4. Click the XML tab and examine the XML names for these attributes. If the
XML name for the age attribute is person_age then change this to age. Do the
same for the location attribute, changing person_location to location.
Note: The MRM element names used in ESQL remain person_location
and person_age; we are amending only the external XML name.
Figure 7-11 shows the age attribute after correcting the XML name. Note that
the XML name appears and is changed in two places.
The other XML names should all be correct; you only need to amend the XML
names for the message type and any attributes, but it is a good idea to check
the others if you have problems with parsing or generating XML messages.
With later releases, these amendments should not be required.
Table 7-1 summarizes the values that were generated by the importer and the
values that should have been set.
Table 7-1 XML names

Type name          Generated XML name   Correct XML name
people (message)   m_people             people
location (type)    person_location      location
age (type)         person_age           age
Notice that two compound types have been generated for person; one
contains the attributes (person_attType) and the other contains the child
elements (person_cmType).
Attributes and child XML elements are grouped in two types, because they
have different sequencing requirements. Attributes can be in any order.
Therefore, the type person_attType has Simple Unordered Set as value for
the field Type Composition. XML elements should appear in a certain order.
Therefore, the type person_cmType has Sequence as value for the field Type
Composition.
5. Check in the types that were checked out. We now have a good message
type.
Property          Value
Message Domain    MRM
Message Set       DNUMQ4G07O001
Message Type      m_people
Message Format    XML
1. The message set name is CONTACTS and the message name is people.
Select the checkboxes Use as message body and Copy message headers.
The properties page now looks like that shown in Figure 7-12.
2. Expand the message tree on the input and output side. Drag and drop the
following elements in this sequence from the input side to the same
element/attribute name on the output side:
person_age
person_location
first
last
email
This sequence matches the order in which the elements were defined in the
DTD, and therefore the order of the elements in the message set. When
constructing the output message, the MRM expects to process the elements
in the same order; if they are assigned out of sequence, an error occurs, as
shown in Section 7.2.5, Some possible errors on page 227.
3. To generate the message field tree layout for the input and output message,
just click the Add button and select the CONTACTS message set and people
message.
4. After dragging and dropping the five fields, the mappings area should be
similar to the one shown in Figure 7-14. You can correct any mistakes by
deleting the mappings and re-creating them.
The generated ESQL, including the statements needed for copying the
message header, is shown in Example 7-3.
Example 7-3 Generated ESQL code
DECLARE I INTEGER;
SET I = 1;
WHILE I < CARDINALITY(InputRoot.*[]) DO
SET OutputRoot.*[I] = InputRoot.*[I];
SET I=I+1;
END WHILE;
SET "OutputRoot"."MRM"."person"."person_age" =
"InputBody"."person"."person_age";
SET "OutputRoot"."MRM"."person"."person_location" =
"InputBody"."person"."person_location";
SET "OutputRoot"."MRM"."person"."name"."first" =
"InputBody"."person"."name"."first";
SET "OutputRoot"."MRM"."person"."name"."last" =
"InputBody"."person"."name"."last";
SET "OutputRoot"."MRM"."person"."email" = "InputBody"."person"."email";
SET OutputRoot.Properties.MessageSet = 'DNUMQ4G07O001';
SET OutputRoot.Properties.MessageType = 'm_people';
-- Enter SQL below this line.
This message flow can now be checked in; as currently coded, it will not modify
the message contents, but it is advisable to test everything developed up to
now.
perform this. Make sure you do not have two flows using the same input
queue.
3. The easiest way to repeatedly test the message flow is to use the RFHUTIL
program from SupportPac IH03. This program can read a message from a file
and then put it to a queue. It can also get a message from a queue and
display it in various ways.
Use RFHUTIL to read the message file, then make sure that there are no
extraneous spaces outside of tags by viewing the data display tab (please
refer to the documentation that comes with the SupportPac for details on how
to use this program). Send the data to queue WMQI.IN (or whatever you have
configured the input queue name to be).
The message flow should process the message and generate an output XML
message on the queue WMQI.OUT; get the message from this queue using
RFHUTIL. It should look like that shown in Example 7-4:
Example 7-4 Output XML message
<?xml version="1.0"?>
<MRM xmlns="www.mrmnames.net/DNUMQ4G07O001">
<people>
<person age="35" location="London">
<name>
<first>Fred</first>
<last>Bloggs</last>
</name>
<email>fred@nowhere.com</email>
</person>
</people>
</MRM>
Notice that the MRM has inserted an MRM wrapper tag into the message, but that
it is otherwise identical to the input message. In the next section, we will
modify the message in the Compute node.
You can use the drag and drop statement and field assignment feature to help
construct these ESQL lines by dragging statements and element names into
the ESQL code where they are needed.
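The modification tested in the following steps (changing a London location to Boston) can be sketched, for example, as a conditional assignment appended in the user area of the Compute node ESQL; the field names come from the CONTACTS message set used throughout this chapter:

```
-- appended below the generated statements in the user ESQL area
IF "InputBody"."person"."person_location" = 'London' THEN
   SET "OutputRoot"."MRM"."person"."person_location" = 'Boston';
END IF;
```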
2. Check in the message flow and test it again with the example message.
3. The output message now has the location set to Boston; test it again with a
different location set in the input message to make sure the conditional logic
is working correctly.
Notice that the ESQL name used to refer to attribute fields is not the same as
the generated XML name in the tags, whereas the other elements have ESQL names
that match their XML tag names, as shown in Table 7-3.
Table 7-3 XML names versus MRM names

Parsed XML format                       MRM (ESQL) name
MRM.people.person.(XML.attr)location    "MRM"."person"."person_location"
MRM.people.person.name.first            "MRM"."person"."name"."first"
MRM.people.person.email                 "MRM"."person"."email"
We have now created a message set from a DTD and used it to process and
manipulate an XML message, with the MRM parsing the input and generating the
output message.
The remainder of this chapter looks at variations and caveats along with common
problems and how to diagnose them; this will help you in case you were not able
to make this example work the first time.
This writes the message tree and exception list to the user trace file. You could
instead choose to specify a file destination. However, the user trace allows us to
see WebSphere MQ Integrator errors as well as our message data in one place.
With Trace nodes and extra MQOutput nodes for failure queues, the message
flow now looks as shown in Figure 7-16.
User tracing for the CONTACT_FLOW message flow itself is activated with this
line command, issued on the broker system.
mqsichangetrace zpat -u -e default -f CONTACT_FLOW -l debug -r -c50000
Replace zpat with the name of the broker; default is the name of the execution
group to which the message flow is deployed. The message flow is then tested
and the trace log can be read and formatted with these commands (again
substituting your broker and execution group names). Delete any prior log and
.txt files before re-running the following commands.
mqsireadlog zpat -u -e default -f -o CONTACT_FLOW.log
mqsiformatlog -i CONTACT_FLOW.log -o CONTACT_FLOW.txt
notepad CONTACT_FLOW.txt
(0x3000000)PutDate        = DATE '2001-11-28'
(0x3000000)PutTime        = GMTTIME '19:42:51.310'
(0x3000000)ApplOriginData = '    '
(0x3000000)GroupId        = X'000000000000000000000000000000000000000000000000'
(0x3000000)MsgSeqNumber   = 1
(0x3000000)Offset         = 0
(0x3000000)MsgFlags       = 0
(0x3000000)OriginalLength = -1
)
(0x1000008)MRM = (
  (0x1000001)person = (
    (0x3000001)person_location = 'London'
    (0x3000001)person_age      = '35'
    (0x1000001)name = (
      (0x3000012)first = 'Fred'
      (0x3000012)last  = 'Bloggs'
    )
    (0x3000012)email = 'fred@nowhere.com'
  )
)
)
The important part of this user trace is the MRM tree at the end. If it does not
appear, the broker was unable to parse the input message, and the log file
should contain an error code indicating the reason. Conversely, if the input
message was parsed successfully but no output message is generated, examine the
user trace after the Compute node. The MRM tree at that point is used to
construct the output message, and any errors in it, including sequence errors,
can cause a failure.
Example 7-6 User trace after Compute node, MRM portion only
(0x1000008)MRM = (
  (0x1000000)person = (
    (0x3000001)person_age      = '35'
    (0x3000001)person_location = 'Boston'
    (0x1000000)name = (
      (0x3000012)first = 'Fred'
      (0x3000012)last  = 'Bloggs'
    )
    (0x3000012)email = 'fred@nowhere.com'
  )
)
This XML message would have been parsed easily by the generic XML parser.
This will cause a failure in the MQOutput node when the broker tries to build the
physical message from the message tree. Example 7-9 shows what error
messages you might see in the user trace.
Example 7-9 User trace with sequence error
2001-11-28 15:53:27.416999  1944  Error  BIP5286E: Message Translation
Interface Writing Errors have occurred.
Errors have occurred during writing.
Review further error messages for an indication to the cause of the errors.
2001-11-28 15:53:27.447000  1944  Error  BIP2628E: Exception condition
detected on input node 'CONTACT_FLOW.WMQI.IN'.
The input node 'CONTACT_FLOW.WMQI.IN' detected an error whilst processing a
message. The message flow has been rolled-back and, if the message was being
processed in a unit of work, it will remain on the input queue to be processed
again. Following messages will indicate the cause of this exception.
Check the error messages which follow to determine why the exception was
generated, and take action as described by those messages.
2001-11-28 15:53:27.447000  1944  ImbMqOutputNode::evaluate
2001-11-28 15:53:27.447000  1944  MtiImbParser::subSyncWithDictionary, DNUMQ4G07O001, first
<name>
<first>John</first>
<last>Smith</last>
</name>
<email>John@somewhere.com</email>
</person>
</people>
If left unchanged, the Compute node would only generate a single output person
element as before. To allow for more than one person, we can amend the ESQL
in the Compute node as follows:
1. Edit the ESQL and make a copy of the five automatically generated field
assignment statements that the mappings created.
2. Delete the mappings; this removes their generated ESQL.
3. Paste the statements back, but in the user modified ESQL area, below the
line, and then modify them to operate in a loop with an array subscript.
The code shown in Example 7-11 is the complete ESQL from the Compute node;
this version will iterate through multiple occurrences of the person element and
therefore the output message will also contain multiple person elements. You can
see where the variable J has been used as an array subscript.
The first loop is the one WebSphere MQ Integrator generates for the Copy
message headers option; it does not copy the body. The Use as message body
option generates the assignments for the message set identifier and message
type. We then set the output message format and domain, and iterate through the
person elements, performing our processing on each one in turn.
This iteration is implemented within a single message; the flow would be invoked
again if multiple MQSeries messages were involved.
Example 7-11 Processing multiple person elements
DECLARE I INTEGER;
SET I = 1;
WHILE I < CARDINALITY(InputRoot.*[]) DO
SET OutputRoot.*[I] = InputRoot.*[I];
SET I=I+1;
END WHILE;
SET OutputRoot.Properties.MessageSet = 'DNUMQ4G07O001';
SET OutputRoot.Properties.MessageType = 'm_people';
-- Enter SQL below this line. SQL above this line might be regenerated, causing any modifications to be lost.
Set OutputRoot.Properties.MessageFormat = 'XML';
Set OutputRoot.Properties.MessageDomain = 'MRM';
DECLARE J INTEGER;
SET J =1;
WHILE J <= CARDINALITY("InputBody".*[]) DO
SET "OutputRoot"."MRM"."person"[J]."person_age" =
"InputBody"."person"[J]."person_age";
SET "OutputRoot"."MRM"."person"[J]."person_location" =
"InputBody"."person"[J]."person_location";
SET "OutputRoot"."MRM"."person"[J]."name"."first" =
"InputBody"."person"[J]."name"."first";
SET "OutputRoot"."MRM"."person"[J]."name"."last" =
"InputBody"."person"[J]."name"."last";
SET "OutputRoot"."MRM"."person"[J]."email" = "InputBody"."person"[J]."email";
If "InputBody"."person"[J]."person_location" = 'London' then
Set "OutputRoot"."MRM"."person"[J]."person_location" = 'Boston';
end if;
SET J = J + 1;
END WHILE;
If you want to generate separate output messages for each loop of an input
person element, you can use the new PROPAGATE statement. This causes the
message tree to be immediately sent on the out terminal, without the node
terminating. The message tree is then cleared and so needs to be fully set up in
each loop. As per our test message from Example 7-10 on page 229, two
MQSeries messages are created, one for each person element.
When using PROPAGATE, you would normally end the node ESQL with RETURN FALSE;
to suppress the automatic send of the message tree at the end of the node. It
is good practice to explicitly code RETURN TRUE; at the end of the ESQL in all
Compute nodes where you do not code a PROPAGATE statement.
Example 7-12 Generating multiple messages
DECLARE J INTEGER;
SET J =1;
WHILE J <= CARDINALITY("InputBody".*[]) DO
DECLARE I INTEGER;
SET I = 1;
WHILE I < CARDINALITY(InputRoot.*[]) DO
SET OutputRoot.*[I] = InputRoot.*[I];
SET I=I+1;
END WHILE;
SET OutputRoot.Properties.MessageSet = 'DNUMQ4G07O001';
SET OutputRoot.Properties.MessageType = 'm_people';
Set OutputRoot.Properties.MessageFormat = 'XML';
Set OutputRoot.Properties.MessageDomain = 'MRM';
SET "OutputRoot"."MRM"."person"."person_age" =
"InputBody"."person"[J]."person_age";
SET "OutputRoot"."MRM"."person"."person_location" =
"InputBody"."person"[J]."person_location";
SET "OutputRoot"."MRM"."person"."name"."first" =
"InputBody"."person"[J]."name"."first";
SET "OutputRoot"."MRM"."person"."name"."last" =
"InputBody"."person"[J]."name"."last";
SET "OutputRoot"."MRM"."person"."email" = "InputBody"."person"[J]."email";
PROPAGATE;
SET J = J + 1;
END WHILE;
RETURN FALSE;
It is worth contrasting the way the XML attributes are generated in MRM XML
and generic XML.
With MRM-defined XML, an attribute is set with ESQL code like this:
SET "OutputRoot"."MRM"."person"[J]."person_age" =
"InputBody"."person"[J]."person_age";
The MRM knows that this field is an attribute and will generate the appropriate
XML automatically, whereas with generic XML, you need to be more explicit, like
this:
SET "OutputRoot"."XML"."people"."person"[J].(XML.attr)age =
"InputBody"."person"[J]."person_age";
Also, it is necessary to explicitly generate the XML header with the following
code:
SET OutputRoot.XML.(XML.XmlDecl)='';
SET OutputRoot.XML.(XML.XmlDecl).(XML.Version)='1.0';
Even with generic XML, the order of creating parts of the XML message is
important to avoid the generation of invalid XML.
It is also important to create any new message headers such as the MQMD or
MQRFH2 before creating body elements in the ESQL; otherwise, the message
parser may simply treat them as additional body elements.
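A minimal sketch of this ordering, creating the headers before any body elements (the field values are illustrative only):

```
-- headers first: copy the MQMD before touching the body
SET OutputRoot.MQMD = InputRoot.MQMD;
-- then the XML declaration
SET OutputRoot.XML.(XML.XmlDecl) = '';
SET OutputRoot.XML.(XML.XmlDecl).(XML.Version) = '1.0';
-- body elements last
SET OutputRoot.XML.people.person[1].(XML.attr)age = '35';
```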
For more information on generic XML and ESQL, refer to the IBM manual ESQL
Reference, SC34-5923. This manual is supplied with WebSphere MQ Integrator
V2.1.
Important: The following note is provided in the readme file that comes with
WebSphere MQ Integrator 2.1.
Changes were made to the syntax of the CREATE, MOVE and SET
statements primarily to avoid ambiguity of meaning. These changes were not
fully reflected in the manual ESQL Reference, SC34-5923-01.
The readme file goes on to document the correct syntax.
The code sample shown in Example 7-14 is the complete ESQL from the
Compute node after having been amended to use reference variables. Some
points to note:
Elements need to exist before they can be referred to, so use the CREATE or
SET statement to make sure they exist (but do so in sequence, if required).
MOVE is used to increment the reference to the next element, but you cannot
use MOVE unless the element exists, so use CREATE for any new element
first.
Avoid creating a new output element after processing the last input element
by testing the LASTMOVE operation result.
Use the user trace facility or the debugger to examine the MRM tree before
and after the Compute node to make sure that the results are as expected.
Example 7-14 Using reference variables
DECLARE I INTEGER;
SET I = 1;
WHILE I < CARDINALITY(InputRoot.*[]) DO
SET OutputRoot.*[I] = InputRoot.*[I];
SET I=I+1;
END WHILE;
SET OutputRoot.Properties.MessageSet = 'DNUMQ4G07O001';
SET OutputRoot.Properties.MessageType = 'm_people';
-- Enter SQL below this line. SQL above this line might be regenerated, causing any modifications to be lost.
Set OutputRoot.Properties.MessageFormat = 'XML';
Set OutputRoot.Properties.MessageDomain = 'MRM';
DECLARE ptrin REFERENCE to "InputBody"."person"[1];
CREATE FIELD "OutputRoot"."MRM"."person"[1] FROM ptrin;
DECLARE ptrout REFERENCE to "OutputRoot"."MRM"."person"[1];
WHILE LASTMOVE(ptrin) DO                     /* while ptrin is within tree */
  SET ptrout."person_age" = ptrin."person_age";
  SET ptrout."person_location" = 'Raleigh';
  SET ptrout."name"."first" = ptrin."name"."first";
  SET ptrout."name"."last" = ptrin."name"."last";
  SET ptrout."email" = ptrin."email";
  MOVE ptrin NEXTSIBLING;                    /* move ptrin to next input person */
  IF LASTMOVE(ptrin) THEN                    /* if move was still inside tree */
    CREATE NEXTSIBLING OF ptrout FROM ptrin; /* create next output person */
    MOVE ptrout NEXTSIBLING;                 /* refer to next output person */
  END IF;
END WHILE;
This code is only meant as an example. If you really want to amend only one
field, you should copy the input message at a higher level and then amend it.
SET ptrout = ptrin;
SET ptrout."person_location" = 'Raleigh';
Copying the entire structure all at once also has the advantage of ensuring that
the sequence of fields will be the same in the output message as it is in the input
message. As we have seen, it is essential to create fields in the correct sequence
when using a compound type defined as an ordered set (or sequence).
However, if you use the ESQL statement SET "OutputRoot" = "InputRoot"; and
then do not subsequently modify any of the message elements, the parser
directly copies the input bitstream to the output bitstream without regenerating it;
this means that attributes in the output wire format layer will not take effect.
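As a sketch (the field name is from our example message), copying the whole tree and then changing even one field invalidates the copied bitstream, so the MRM regenerates the output message on write and the wire format settings are applied:

```
SET OutputRoot = InputRoot;
-- modifying any field below forces the parser to regenerate
-- the output bitstream instead of copying the input bitstream
SET OutputRoot.MRM.person.person_location = 'Boston';
```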
Remember, when creating or modifying headers such as the MQMD, that a
message set definition for this (and other headers) is supplied with WebSphere
MQ Integrator and can be added to your workspace, then used with the drag and
drop feature of the Compute node.
It is possible for the Compute node to report an ESQL syntax error even when
the statement is valid. For example, the following statement is flagged, but is correct:
SET OutputRoot.MQRFH2.mcd.Set = OutputRoot.Properties.MessageSet;
To import the message flows, use the Control Center; select File -> Import to
Workspace, then specify the file name CONTACT_FLOW.xml which is the exported
message flow.
CONTACT_FLOW_basic, _generic, _propagate and _reference.xml are the other
variations of the same flow.
The example DTD and test message are also supplied.
Chapter 8.
[Figure: WebSphere MQ Integrator with New Era Of Networks support: NNSY GUI for
message flow development and the NNSY database, the Java-client Control Center,
the Configuration Manager with its configuration database (CMDB) and message
repository database (MRDB), and the runtime broker with its broker database
(BKDB), connected through ODBC]
The broker itself has, of course, the task of managing its broker database, but it
also has a connection to the New Era Of Networks database, which is required
when the broker needs to know about the New Era Of Networks rules and
formats.
As Figure 8-1 on page 238 shows, this architecture no longer includes the
Rules Engine or Daemon.
8.2.1 Configuration
Besides the installation of the New Era Of Networks product code and of the
database management system, setting up New Era Of Networks support requires a
few configuration steps, which are similar across all versions of MQSeries
Integrator/WebSphere MQ Integrator.
These steps are:
Creating a database for use with New Era Of Networks
Depending on the database vendor:
WebSphere MQ Integrator
As with the previous version 2.0.2, WebSphere MQ Integrator provides a more
comprehensive inst_db configuration application that covers most of the
configuration steps above. You only have to provide a few parameters; the
application then sets up a proper database environment for use with the New
Era Of Networks tools.
8.2.2 Environment
New Era Of Networks makes use of dedicated environment settings and a
database connection file in a development and runtime environment.
The database connection file includes information on how to access the New Era
Of Networks database. The New Era Of Networks applications (e.g. NNFie,
NNRie, Visual Tester, msgtest, apitest, ...) and the runtime environment (Rules
Engine, WebSphere MQ Integrator Broker) refer to the definitions in this file. The
name and the contents of the database connection file have changed with the
versions.
The environment variables used with New Era Of Networks have changed from
version to version. However, all MQSeries Integrator versions make use of the
environment variable NN_CONFIG_FILE_PATH, which points to the directory
where the database connection file is located.
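For example, on UNIX the variable could be set like this before running the New Era Of Networks utilities (the directory shown is only an illustration; use the directory that actually contains your connection file):

```shell
# Point the New Era Of Networks tools at the directory holding
# the database connection file (path is an example):
export NN_CONFIG_FILE_PATH=/var/nnsy/config
echo "$NN_CONFIG_FILE_PATH"
```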
This configuration file has to be used with MQSeries Integrator V2.0 and
V2.0.1 only. It is not necessary for later versions. Table 8-4 on page 348 lists
the different parameters of this configuration file and the equivalents in a
WebSphere MQ Integrator environment.
WebSphere MQ Integrator
WebSphere MQ Integrator introduces a new database configuration file:
nnsyreg.dat. The session definitions in this file differ from those in
previous versions.
Session.inst_db
NNOT_SHARED_LIBRARY   = dbt26db250
NNOT_FACTORY_FUNCTION = NNSesDB2Factory
NN_SES_SERVER         = NEON
NN_SES_USER_ID        = neonadmin
NN_SES_PASSWORD       = password

Session.nnfie
NNOT_SHARED_LIBRARY   = dbt26db250
NNOT_FACTORY_FUNCTION = NNSesDB2Factory
NN_SES_SERVER         = NEON
NN_SES_USER_ID        = neonadmin
NN_SES_PASSWORD       = password

Session.MQSI_PLUGIN
NNOT_SHARED_LIBRARY   = dbt26db250
NNOT_FACTORY_FUNCTION = NNSesDB2Factory
NN_SES_SERVER         = NEON
NN_SES_USER_ID        = neonadmin
NN_SES_PASSWORD       = password
NNSY_CATALOGUES:
This points to the directory where the New Era Of Networks catalogues files
are located.
Default on Windows
c:\Program Files\IBM\Websphere MQ Integrator 2.1\nnsy\NNSYCatalogues
Default on AIX
/usr/lpp/nnsy/NNSYCatalogues
8.2.3 Functionality
The implemented New Era Of Networks functionality has been significantly
enhanced with the later versions of MQSeries Integrator V2.0.2 and WebSphere
MQ Integrator. In particular, the introduction of the NEONMSG domain extends
the abilities to work with NEON messages in an MQSeries Integrator/WebSphere
MQ Integrator environment.
NeonFormatter:
This node takes as input a message defined using the NEONFormatter GUI
and reformats it to another message which is also defined in the
NEONFormatter GUI.
NeonRules:
This node takes as input a message defined using the NEONFormatter GUI
and applies the rules to it that are defined using the NEONRules GUI.
Possible actions are the reformat, putqueue, and propagate actions.
NEONMSG
The NEONMSG parser is able to construct two-dimensional logical message
trees from New Era Of Networks messages. Access to New Era Of Networks
messages is made available to other nodes used in a message flow.
Three new New Era Of Networks nodes were introduced to implement the
NEONMSG domain:
NEONTransform
This node has exactly the same role as the deprecated NeonFormatter node.
However, the NeonFormatter node produced a bit stream that you could not
handle in another message processing node. The new NEONTransform
produces a message tree, with which you can work in a Compute node, for
example.
NEONMap
This node has exactly the same role as the deprecated NeonRules node.
However, like the other new NEON nodes, it works on a message tree and
passes a message tree along its output terminals. It is this node that will allow
you to simulate the rules engine in a broker.
It is recommended that you use the new NEON nodes for new message flows.
However, for compatibility reasons and migration purposes, the previous nodes
(NeonFormatter and NeonRules) and the NEON domain are still available for
use.
Literals
Literals can now be binary values as well as string values. In earlier
versions, this distinction was not made and literals were uniformly
represented as binary values.
Important: You have to consider this when you import literals from an earlier
version into a V2.0.2 or later New Era Of Networks database, because there
can be code page translation issues.
WebSphere MQ Integrator
WebSphere MQ Integrator comes with changes to:
MQSeries Integrator V1.1 Rules Daemon
Log files
Graphical User Interface
Mqsireload
Documentation
Log files
With WebSphere MQ Integrator, the log files used with earlier versions have
been renamed to:
NNSYStatus.Log (previously NEONetStatus.Log)
NNSYMessageLog.nml (previously NEONMessageLog.nml)
mqsireload
The command mqsireload has been added to provide a refresh for brokers and
execution groups. It replaces mqsinrfreload (which reloaded New Era Of
Networks rules and formats) and which no longer has any function in WebSphere
MQ Integrator.
The command mqsireload enables you to request the execution groups in the
broker to reload the Rules and Formats database dynamically.
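A sketch of the invocation, using the broker and execution group names from earlier in this chapter (check the command reference for the exact flags of your version):

```
mqsireload zpat -e default
```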
If you run user exits with earlier versions of New Era Of Networks, you have to
recompile those for use with WebSphere MQ Integrator as multi-threaded
objects, which must be linked with multi-threaded (thread-safe) libraries. Refer to
New Era Of Networks Formatter Programming Reference V5.6, SC34-6086 for
instructions on how to do this.
The consistency checker lists objects that are out of synchronization for any
reason as invalid. It checks whether the records have corresponding features in
the database. All formats and rules in an inconsistent state generate a report
indicating the problem.
The results of the consistency check are documented in report files created in
the current directory. By default, the names of the log files refer to the
version of the consistency checker you are running, for example
formatcc110.log. You can redirect the output to a different file and path if
you like.
After running the consistency checker, review the log files (formatccxxx.log and
ruleccxxx.log) and check whether any problems are reported. A guide to
interpreting the results in the reports files can be found in MQSeries Integrator
System Management Guide, SC34-5505 shipped with the MQSeries
Integrator/WebSphere MQ Integrator version you currently use.
Most of the checks verify the internal structure of the rules to confirm that they
were properly created. However, some checks verify that user-typed data was
correctly entered.
The consistency checker depends on the version of the installed New Era Of
Networks database. Therefore, in step 1, you have to use the version that
matches the version of the New Era Of Networks database from which you are
exporting. You will use a different version of the consistency checker in step 8,
after you have imported the formats and rules in step 7.
Table 8-2 lists the versions of the consistency checker for checking formats and
rules used with the different MQSeries Integrator versions.
Table 8-2 Consistency checker versions
Consistency checker
formatcc40; rulecc40
formatcc110; rulecc110
formatcc520; rulecc520
formatcc560; rulecc560
The input parameters that you need to run the consistency checker and the
environment in which you run the command depend on:
The type of database you use with New Era Of Networks (DB2, Oracle,
Sybase, SQLServer)
The platform you run it on (Windows NT/2000, UNIX)
To check the consistency of formats and rules for a DB2 database used with a
V1.1, V2.0 and V2.0.1 MQSeries Integrator installation on Windows NT, you
need to perform the following steps:
1. Open a DB2 command window:
db2cmd
Refer to Table 8-2 on page 251 for running the commands with other MQSeries
Integrator versions.
For further details on using the consistency checker, refer to the MQSeries
Integrator System Management Guide, SC34-5505.
Important:
Note that there might be a problem when running the rulecc110 command.
You may see a message similar to the following:
The system cannot find the file specified...
Edit the file rulecc110.cmd or rulecc110.sh and correct the following entries:
@type rulecc410.sql >> temp.sql
to:
@type rulecc110.sql >> temp.sql
@db2 -f temp.sql -l rulecc410.log -t
to:
db2 -f temp.sql -l rulecc110.log -t
Also, the structure of the session definition has changed from earlier versions
of MQSeries Integrator to WebSphere MQ Integrator, as shown in Example 8-2 and
Example 8-3.
Example 8-2 NNFie session definition in MQSeries Integrator
nnfie:NEON:db2admin:password:
Example 8-3 NNFie session definition in WebSphere MQ Integrator
= dbt26db250
= dbt26db250
= NEON
= db2admin
= password
The utility NNFie, used for the import and export of formats, uses nnfie or
Session.nnfie as a key to search the database connection file. Similarly, the utility
NNRie used for an import and export of rules uses nnrmie/nnrie (DB2/other
databases) or Session.nnrmie/Session.nnrie respectively as a key.
Make sure that you have proper definitions for both sessions in the database
connection file, suitable for your database environment.
NNFie40.exe; NNFie40.ksh
NNRie40.exe; NNRie40.ksh
NNFie411.exe; NNFie411.ksh
NNRie411.exe; NNRie411.ksh
NNFie520.exe; NNFie520.ksh
NNRie520.exe; NNRie520.ksh
NNFie560.exe; NNFie560.ksh
NNRie560.exe; NNRie560.ksh
Since NNFie and NNRie are used for export and import, the direction of the
operation has to be defined by the following parameter:
-e (for export)
-i (for import)
According to Table 8-3 on page 254, to export formats and rules from a
V1.1/V2.0 and V2.0.1 database on any platform, you would type the following
commands:
NNFie411 -e <export file name>
NNRie411 -e <export file name>
Note that the user running the NNFie and NNRie commands must be allowed to
create new files in the current directory.
Important: Due to possible database problems, it is not recommended that
you run NNFie and NNRie simultaneously.
If you do not redirect the output and/or define your own export file names, NNFie
and NNRie create export files in the current directory called NNFie.exp and
NNRie.exp, which can be used for import. Since the export format is platform and
database neutral, the exported files can be used for import into any New Era Of
Networks target database (for example, export from DB2 and import into Oracle).
You can verify the contents of the export files by checking the NNFie.log and the
NNRie.log, created in the current directory. Any export failures are recorded in
these files. Additionally, you should see the NEONMessageLog.nml file (up to
MQSeries Integrator V2.0.2) or the NNSYMessageLog.nml file (WebSphere MQ
Integrator), in which you can also find more details about a possible failure.
If you experience problems with NNFie and NNRie, or you would like to know
how to use these commands for generating conflict reports or inventory export
files, refer to the New Era Of Networks Rules and Formatter Support for
WebSphere MQ Integrator System Management Guide, SC34-6083.
Hint: The utilities NNFie and NNRie might not always include the version
identifier with their names (for example NNFie for Version 411 is simply called
NNFie.exe).
If you have problems running the commands, or you are not sure if you have
the correct version available, you can use the utility nnident, which identifies
the version of the file that you specify as the input parameter.
For example, running the following command on a Windows platform:
c:\IBM\MQ Series Integrator 2.0\bin\nnident NNFie.exe
Refer to the MQSeries Integrator Installation Guide for the platform you use for
detailed instructions on how to uninstall the New Era Of Networks environment.
When you have removed the New Era Of Networks installation, make sure that
the environment variables have been removed properly too. On Windows
NT/2000, a reboot of the machine is recommended to make sure that registry
entries are cleaned up.
Important: Note that step 4 only refers to an uninstallation of the New Era Of
Networks components, which is performed separately from an uninstallation of
your MQSeries Integrator environment.
Before you uninstall your current MQSeries Integrator software, make sure
that you have followed the migration process for the MQSeries Integrator
components as well.
Windows NT/2000
In addition to the WebSphere MQ Integrator components Control Center and
Configuration Manager, the NEONFormatter GUI, the NEONRules GUI, and the
Visual Tester run on Windows NT/2000.
The installation of the New Era Of Networks product code on Windows NT/2000
is automatically launched when you install WebSphere MQ Integrator.
Alternatively, it is possible to install the New Era Of Networks product code
separately from WebSphere MQ Integrator, for setting up a New Era Of Networks
GUI front-end only, for example.
UNIX
On the supported UNIX platforms (AIX, Solaris, HP-UX), the installation of the
New Era Of Networks product code is a separate step from the installation of the
WebSphere MQ Integrator product code. You need to install the New Era Of
Networks code before installing the WebSphere MQ Integrator product itself.
Otherwise, you cannot select the New Era Of Networks integration package that
is part of the WebSphere MQ Integrator product.
Important: Note that, when upgrading, it is not recommended that you use the
old New Era Of Networks database with the new product code by simply
re-instantiating the identical database. You are strongly advised to create a
new New Era Of Networks database.
Running inst_db
If you use DB2, you need to create a user group NNSYGRP on your machine
prior to running the inst_db application. The application inst_db grants DB2ADM
privileges to this group when invoked.
For details, refer to our migration example in Section 8.3.2, Example migration
from MQSeries Integrator to WebSphere MQ Integrator on page 264.
The inst_db application creates a New Era Of Networks database and installs the
schema in the database. During the installation process, you specify the name of
the database, the name of the database user ID and some further connection
information. You have the opportunity to adjust the size of the database before it
is created. Run inst_db according to the database and platform you use.
When you invoke inst_db, it comes up with a sequence of dialogs, varying with
your database environment. You must specify:
The type of database you use: Oracle, DB2, Sybase, SQL Server.
What kind of installation you want to perform: Basic or Advanced.
Further parameters necessary to create the database, such as database user
ID, password, and database name.
This file must be called nnsyreg.dat. Any number of database sessions may be
defined in this file. The NNOT_SHARED_LIBRARY and
NNOT_FACTORY_FUNCTION parameters of each session must be set to the
appropriate values for the desired DBMS type.
As with previous versions of MQSeries Integrator, you need to add two further
sessions to the database connection file to be able to run the import utilities
NNFie and NNRie successfully in step 7. The sessions you need to add are
called:
Session.nnfie (for use with NNFie)
Session.nnrmie (for use with NNRie and DB2)
Session.nnrie (for use with NNRie and other databases)
Add these entries to the database connection file and save the file in the current
directory.
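As an illustration only, the two added sessions might look as follows for a DB2 setup. The shared library name, server, user ID and password shown here are the values from the worked example later in this chapter; treat them as placeholders for your own settings:

```
Session.nnfie
NNOT_SHARED_LIBRARY   = dbt26db250
NNOT_FACTORY_FUNCTION = NNSesDB2Factory
NN_SES_SERVER         = NEON
NN_SES_USER_ID        = neonadm
NN_SES_PASSWORD       = neonadm

Session.nnrmie
NNOT_SHARED_LIBRARY   = dbt26db250
NNOT_FACTORY_FUNCTION = NNSesDB2Factory
NN_SES_SERVER         = NEON
NN_SES_USER_ID        = neonadm
NN_SES_PASSWORD       = neonadm
```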
Tip: If you have problems running the import commands as shown, try to
explicitly specify one of the defined sessions in the database connection file
and a full qualified path to the database configuration file. Note that the
session parameter nnfie used with this command differs from the entry in the
database connection file, which is actually Session.nnfie.
For example:
NNFie -i <exportfile> -s "nnfie" -c "c:\mqsi21\nnsy\bin\nnsyreg.dat"
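By analogy, the NNRie import can be qualified in the same way. Note that the session name and option letters below simply mirror the NNFie example; verify them against the NNRie documentation for your release:

```
NNRie -i <exportfile> -s "nnrmie" -c "c:\mqsi21\nnsy\bin\nnsyreg.dat"
```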
After running the import commands, you should look for the NNFie.log, the
NNRie.log and the NNSYMessageLog.nml files if any problems occurred during
the import.
Note:
When using NNFie and NNRie with DB2, pre-existing user permissions are
not maintained when importing your New Era Of Networks database. To
maintain existing permissions, you can update the permissions table
before or after you import your formats.
New Era Of Networks support with WebSphere MQ Integrator treats literals
as binary or string values. With earlier versions, this distinction was not made
and literals were uniformly represented as binary values. Therefore, when
importing formats from earlier versions into WebSphere MQ Integrator, you
should run NNFie with option -p if you wish to override the default migration
for literals from binary to string. You can also change the literal values to
have a string or binary value by using the New Era Of Networks Formatter
GUI, shipped with WebSphere MQ Integrator after import.
NN_CONFIG_FILE_PATH
Regardless of the platform where you run the New Era Of Networks environment
(Windows or UNIX), New Era Of Networks support makes use of the
NN_CONFIG_FILE_PATH environment variable to locate the database
connection file nnsyreg.dat.
NNSY_ROOT
NNSY_ROOT has to point to the directory where the New Era Of Networks
component is installed. The defaults on the platforms are:
AIX:
export NNSY_ROOT=/usr/lpp/nnsy
Solaris and HP-UX:
export NNSY_ROOT=/opt/nnsy
NNSY_CATALOGUES
NNSY_CATALOGUES must point to the directory where the New Era Of
Networks catalogue files are located. The defaults on the platforms are:
AIX:
export NNSY_CATALOGUES=/usr/lpp/nnsy/NNSYCatalogues
Solaris and HP-UX:
export NNSY_CATALOGUES=/opt/nnsy/NNSYCatalogues
ICU_DATA
ICU_DATA is automatically set during the installation of the WebSphere MQ
Integrator product code and points to the following directory:
AIX:
export ICU_DATA=/usr/lpp/nnsy/share/icu/data
Solaris and HP-UX:
export ICU_DATA=/opt/nnsy/share/icu/data
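Taken together, a profile fragment for an AIX broker environment might look like the following sketch. The paths are the defaults quoted above; the NN_CONFIG_FILE_PATH value is the one used in the migration example later in this chapter and is site-specific:

```shell
# Sketch of New Era Of Networks environment settings for an AIX broker
# profile (default install paths; adjust to your installation).
export NNSY_ROOT=/usr/lpp/nnsy
export NNSY_CATALOGUES=${NNSY_ROOT}/NNSYCatalogues
export ICU_DATA=${NNSY_ROOT}/share/icu/data
# Directory containing the database connection file nnsyreg.dat;
# this value is taken from the worked example and will differ per site.
export NN_CONFIG_FILE_PATH=${NNSY_ROOT}/install.sql_rulfmt56/inst_db
echo "${NN_CONFIG_FILE_PATH}"
```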
PATH
The installation procedure on Windows adds the following entries to the PATH
environment variable:
c:\Program Files\IBM\WebSphere MQ Integrator 2.1\nnsy\gui
c:\Program Files\IBM\WebSphere MQ Integrator 2.1\nnsy\bin
Windows NT/2000
In addition to the Control Center and the Configuration Manager, you run the
Graphical User Interfaces (Formatter, Rules and Visual Tester) shipped with New
Era Of Networks on Windows.
Therefore, you need a database client connection to the platform where the New
Era Of Networks database is installed. This presumes, of course, that a suitable
database client software is installed on the Windows machine.
The database client connection you have to define depends on the database you
use. To connect to a DB2 database from a Windows client, for example, you need
to define an alias for the New Era Of Networks database on the Windows
machine, associated with an ODBC driver to connect to the database server.
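As an alternative to the graphical tools, the DB2 command line processor can define the same client connection. A hedged sketch follows; the node name (neonnode), host name (aixhost) and port are placeholders for your environment:

```
db2 catalog tcpip node neonnode remote aixhost server 50000
db2 catalog database NEON as NEON at node neonnode
db2 catalog system odbc data source NEON
```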
Note: If you intend to deploy message flows that use New Era Of Networks
nodes to a broker (NEONTransform, NEONMap, NEONRules Evaluation), you
should copy the New Era Of Networks message properties file
NEONMIF20.properties from your broker installation to your Control Center
platform. This will allow exceptions thrown by the New Era Of Networks nodes
at deploy time to be displayed in the Control Center log.
The property file can be found on the broker machine in the following
directory:
AIX:
/usr/opt/mqsi/messages
Example:
#include <INFR/NEONObjects.h> changes to:
#include <INFR/NNSYObjects.h>
2. Update the following global compiler names:

   Current Global Compiler Name    Updated Global Compiler Name
   ------------------------------  ----------------------------
   USING_NAMESPACE(NEON)           USING_NAMESPACE(NNSY)
   NEON_NAMESPACE                  NNSY_NAMESPACE
[Figure: migration environment for the example — the NEON GUI programs with MQSeries Integrator V2.0.1 and a DB2 client connection on Windows 2000, connecting to the MQSeries Integrator V2.0.1 broker, DB2 and the NEON database on AIX]
The consistency checker creates a log file in the current directory, called
formatcc110.log.
3. Assuming the consistency check for our formats yielded good results, or we
have solved the problems, we run the consistency checker for the rules in a
similar manner. Using DB2 in our example, we again have to specify the
database user, the password and the name of the New Era Of Networks
database with this command:
rulecc110.ksh neonadm neonadm NEON
The command creates a log file in the current directory, called rulecc110.log.
We again review the log file to check whether any problems were reported.
Important: Note that there might be a problem when running the rulecc110
command. You may see the message:
The system cannot find the file specified...
Edit the file rulecc110.cmd or rulecc110.ksh and correct the following entries:
@type rulecc410.sql >> temp.sql
to:
@type rulecc110.sql >> temp.sql
@db2 -f temp.sql -l rulecc410.log -t
to:
db2 -f temp.sql -l rulecc110.log -t
When the export has finished, we can verify the export by checking the
NNFie.log file in the current directory. Any export failures are recorded in this file
and/or in the NEONMessageLog.nml file.
After exporting the formats we export the rules using the NNRie utility in a similar
manner, by running the command NNRie with option -e.
NNRie411.ksh -e rules_20011114
When the export has finished, we again verify the export by checking the
NNRie.log file in the current directory. Any export failures are recorded in this file
and/or the NEONMessageLog.nml.
Running inst_db
Following the prerequisites, we have created a group, NNSYGRP, on our AIX
machine. We decide to install the new New Era Of Networks database into the
existing database instance.
This is the TCP/IP port on which the database server is listening for
connections. The default is 50000. Check the current setting in the file
/etc/services.
5. Database name: NEON
6. Install or refresh the database: 1 for install
Important: Note that, if you refresh the schema, it is an absolute requirement
that the schema be refreshed using the same installation mode (that is, Basic
or Advanced) that was used to originally install it.
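Pulling the dialog together, the responses for this example environment would run along the following lines. The installation mode shown is an assumption; the remaining values appear elsewhere in this chapter:

```
Database type       : DB2
Installation mode   : Basic   (assumed)
Database user ID    : neonadm
Password            : neonadm
TCP/IP port         : 50000
Database name       : NEON
Install or refresh  : 1 (install)
```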
Note: After the inst_db installation has run, the DBADM privilege can be
revoked from NNSYGRP if it is no longer required; it was granted
automatically for the installation.
Windows
On Windows NT/2000, we run the NEONFormatter GUI, the NEONRules GUI
and the Visual Tester.
Since the New Era Of Networks database is running on our AIX machine, we
have to set up a DB2 client connection to this new database.
We define the client connection using the Client Configuration Assistant started
by selecting:
Start->Programs->DB2 for Windows NT->Client Configuration Assistant
Once started, click the Add button and the Add Database SmartGuide window
appears. We select the following options:
Panel 1. Source:
Panel 2. Protocol:
Panel 3. TCP/IP:
Specify the name of the host machine where the database instance is running
and the port on which the instance is listening. We get the port number from
the file /etc/services on AIX.
Panel 4. Database:
Panel 5. ODBC:
Select Register this database for ODBC and select also the As a system
data source radio button.
When the test is successful, we are able to work with our New Era Of Networks
GUIs, giving the correct connection parameters at start-up (see Figure 8-16):
User ID: neonadm
Password: neonadm
DBMS: ODBC - DB2 (ODBC)
Driver: NEON
NNOT_SHARED_LIBRARY   = dbt26db250
NNOT_FACTORY_FUNCTION = NNSesDB2Factory
NN_SES_SERVER         = NEON
NN_SES_USER_ID        = neonadm
NN_SES_PASSWORD       = neonadm

Session.inst_db
NNOT_SHARED_LIBRARY   = dbt26db250
NNOT_FACTORY_FUNCTION = NNSesDB2Factory
NN_SES_SERVER         = NEON
NN_SES_USER_ID        = neonadm
NN_SES_PASSWORD       = neonadm

Session.nnfie
NNOT_SHARED_LIBRARY   = dbt26db250
NNOT_FACTORY_FUNCTION = NNSesDB2Factory
NN_SES_SERVER         = NEON
NN_SES_USER_ID        = neonadm
NN_SES_PASSWORD       = neonadm

Session.nnrmie
NNOT_SHARED_LIBRARY   = dbt26db250
NNOT_FACTORY_FUNCTION = NNSesDB2Factory
NN_SES_SERVER         = NEON
NN_SES_USER_ID        = neonadm
NN_SES_PASSWORD       = neonadm
We are now ready to import our formats and rules into the newly created New
Era Of Networks database.
2. We run the NNFie utility with options -i and -p (we want to import our literals
as string) and with the name of our import file as we called it in step 3:
NNFie -i formats_20011114 -p
This command imports our formats into the newly created New Era Of
Networks database. After importing the formats, we check the
NEONMessageLog.nml file for a report of the import procedure.
3. After successfully importing the formats, we are ready for an import of the
rules using the NNRie utility in a similar manner:
NNRie -i rules_20011114
The command creates a log file in the current directory, called NNRie.log.
We review the log file to check whether any problems were reported.
We need to add some further session entries to that file. To be prepared to run
New Era Of Networks utilities (apitest, NNFie, etc.) on AIX, we add the following
sessions:
Session.new_format_demo
NNOT_SHARED_LIBRARY   = dbt26db250
NNOT_FACTORY_FUNCTION = NNSesDB2Factory
NN_SES_SERVER         = NEON
NN_SES_USER_ID        = neonadm
NN_SES_PASSWORD       = neonadm

Session.nnrmie
NNOT_SHARED_LIBRARY   = dbt26db250
NNOT_FACTORY_FUNCTION = NNSesDB2Factory
NN_SES_SERVER         = NEON
NN_SES_USER_ID        = neonadm
NN_SES_PASSWORD       = neonadm

Session.nnfie
NNOT_SHARED_LIBRARY   = dbt26db250
NNOT_FACTORY_FUNCTION = NNSesDB2Factory
NN_SES_SERVER         = NEON
NN_SES_USER_ID        = neonadm
NN_SES_PASSWORD       = neonadm
Environment settings
We check or set the following environment variables. They also need to be
defined in the profile of the broker.
NNSY_ROOT:
This points to the directory where the New Era Of Networks product code
component is installed:
export NNSY_ROOT=/usr/lpp/nnsy
NN_CONFIG_FILE_PATH:
This points to the directory where the database configuration file nnsyreg.dat
is located:
export NN_CONFIG_FILE_PATH=/usr/lpp/nnsy/install.sql_rulfmt56/inst_db
NNSY_CATALOGUES:
This points to the directory where the New Era Of Networks catalogue files
are located:
export NNSY_CATALOGUES=/usr/lpp/nnsy/NNSYCatalogues
ICU_DATA:
This is set automatically during the installation of the WebSphere MQ
Integrator product code and points to /usr/lpp/nnsy/share/icu/data.
Development in Windows
To configure the development environment in Windows, we need a database
connection file and some additional environment variables.
NNOT_SHARED_LIBRARY   = dbt26db250
NNOT_FACTORY_FUNCTION = NNSesDB2Factory
NN_SES_SERVER         = NEON
NN_SES_USER_ID        = neonadm
NN_SES_PASSWORD       = neonadm

NNOT_SHARED_LIBRARY   = dbt26db250
NNOT_FACTORY_FUNCTION = NNSesDB2Factory
NN_SES_SERVER         = NEON
NN_SES_USER_ID        = neonadm
NN_SES_PASSWORD       = neonadm

NNOT_SHARED_LIBRARY   = dbt26db250
NNOT_FACTORY_FUNCTION = NNSesDB2Factory
NN_SES_SERVER         = NEON
NN_SES_USER_ID        = neonadm
NN_SES_PASSWORD       = neonadm

NNOT_SHARED_LIBRARY   = dbt26db250
NNOT_FACTORY_FUNCTION = NNSesDB2Factory
NN_SES_SERVER         = NEON
NN_SES_USER_ID        = neonadm
NN_SES_PASSWORD       = neonadm
Environment settings
As with our runtime environment, we have to check the settings of the following
environment variables:
NNSY_ROOT:
This points to the directory where the New Era Of Networks component is
installed. On Windows, this is done automatically during installation:
c:\Program Files\IBM\WebSphere MQ Integrator 2.1\nnsy
NN_CONFIG_FILE_PATH:
This points to the directory where the database configuration file nnsyreg.dat
is located.
c:\Program Files\IBM\WebSphere MQ Integrator
2.1\nnsy\install.sql_rulfmt56\inst_db
NNSY_CATALOGUES:
This points to the directory where the New Era Of Networks catalogue files
are located:
c:\Program Files\IBM\WebSphere MQ Integrator
2.1\nnsy\NNSYCatalogues
ICU_DATA:
This is set automatically during the installation of the WebSphere MQ
Integrator product code.
Sample message
In our scenarios, we work with a simple SWIFT message. According to the
SWIFT conventions, our message consists of the following logical blocks:
{1:
{2:
{3:
{4:
{5:
In the real world, blocks 2-5 and their elements are optional and thus do not
necessarily appear in a message. In our example, however, all blocks are
present in this message. The sample message we work on looks
like this:
{1:F01BANKBEBBAXXX2222123456}
{2:I100BANKDEFFXXXXU}
{3:{113:9601}{108:abcdefgh12345678}}
{4::20:PAYREF- TB54302:32A:910930USD2000,:50:EDWARDS BANK LILLIPUT POOL
EDORSET:59:/123-456-789BUGS BUNNY WABBIT ELMER FUDD -}
{5:{MAC:41720873}{CHK:123456789ABC}}
In both cases, input format and output format are predefined in the New Era Of
Networks environment using the integrated New Era Of Networks Formatter GUI.
The input format SWIFTDataIn reflects the logical structure and the physical
layout of the SWIFT message we receive. The output format GenericDataOut
removes tags and brackets from the input message and extracts the business
data, preparing the message for further use by any application in the following
steps.
Scenario 1a
The input format SWIFTDataIn and the output format GenericDataOut are
predefined in the New Era Of Networks environment using the New Era Of
Networks Formatter GUI (see Figure 8-20 on page 293), which you can invoke
with MQSeries Integrator V2.0.1 by clicking Start -> Programs -> IBM
MQSeries Integrator 2.0 -> Neon Support - NEONFormatter.
When working with SWIFT messages, special format conditions should be
considered when defining the input format:
All SWIFT messages conform to a defined block structure. Each block of a
message contains data of a particular type and is used for a particular
purpose.
Each block of a message begins and ends with a curly bracket (or brace)
character, "{" and "}" respectively. All main blocks are numbered, and the
block number followed by a colon (:) is always the first sequence of
characters within any block.
Only block 1 (the Basic Header block) is mandatory for all messages. Blocks
2-5 are optional and depend upon the nature of the message and the
application in which the message is being sent or received.
Blocks 1, 2 and 3 relate to header information, block 4 contains the text of the
message, and block 5 contains trailer information.
Blocks 3, 4 and 5 may contain sub-blocks (that is, blocks within blocks) or
fields delimited by field tags, depending on the nature of the message.
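The block structure described above can be teased apart mechanically by tracking brace depth. The following sketch (standard shell and awk, applied to the sample header blocks from this chapter) lists each top-level block and its contents; it is illustrative only — a real SWIFT parser must also validate block order and content:

```shell
# Split a SWIFT-style message into its numbered top-level blocks by
# tracking brace depth; nested sub-blocks (as in block 3) are kept
# intact inside their parent block.
msg='{1:F01BANKBEBBAXXX2222123456}{2:I100BANKDEFFXXXXU}{3:{113:9601}{108:abcdefgh12345678}}'
printf '%s\n' "$msg" | awk '{
  depth = 0
  for (i = 1; i <= length($0); i++) {
    c = substr($0, i, 1)
    if (c == "{") { depth++; if (depth == 1) start = i + 1 }
    else if (c == "}") {
      depth--
      if (depth == 0) {
        body = substr($0, start, i - start)
        n = index(body, ":")
        printf "block %s: %s\n", substr(body, 1, n - 1), substr(body, n + 1)
      }
    }
  }
}' > swift_blocks.txt
cat swift_blocks.txt
```

For the sample shown, this prints one line per block, with the nested {113:...}{108:...} sub-blocks preserved inside block 3.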
Figure 8-20 Scenario 1a: NEONFormatter GUI showing input format SWIFTDataIn and
output format GenericDataOut
Figure 8-21 Scenario 1a: a simple reformat procedure with the NEONFormatter node
The Message Type property of the MQInput node (Get SWIFT Message) is set to
the name of the input format (SWIFTDataIn) defined in the New Era Of Networks
environment (Figure 8-22 on page 295). Since we want to invoke the NEON
parser with this message, we additionally have to set the message domain to
NEON in this node.
Referencing the NEON domain causes the bitstream of the message body to be
passed to the integrated New Era Of Networks Formatter engine. The Formatter
engine uses the value of the Message Type to interpret the message according
to the predefined input format SWIFTDataIn and maps and transforms the
message to the output format defined in the TargetFormat attribute of the
NEONFormatter node (GenericDataOut).
Note: For a simple one step reformat process we do not need to define a rule
via the NEONRules GUI as in a native NEON environment and thus it is not
necessary to set a value for Message Set in the MQInput node.
After running the message flow, the reformatted message in the output queue is
ready for use with any application interpreting this data:
F01BANKBEBBAXXX2222123456I100BANKDEFFXXXXU9601abcdefgh12345678PAYREFTB54302910930USD2000,EDWARDS BANK LILLIPUT POOL EDORSET/123-456-789BUGS
BUNNY WABBIT ELMER FUDD -41720873123456789ABC
Scenario 1b
In Scenario 1b, we look at the same reformat procedure in a WebSphere MQ
Integrator V2.1 environment, where the NEONFormatter node and the NEON
domain are still available for compatibility reasons. Therefore, we run the
message flow from Scenario 1a in the WebSphere MQ Integrator V2.1
environment without any changes, which is sufficient for our simple
reformat process.
However, we could also decide to build this message flow by using either of the
new NEONMap (Figure 8-24 on page 297) or NEONTransform node
(Figure 8-25 on page 297) in the NEONMSG domain to be able to work with the
enhanced functionality given in this domain. Messages in the NEONMSG
domain can be used in message flows exactly like messages that belong to the
other supported domains. This is a significant improvement over messages in the
NEON domain. We will not make use of these enhancements in this scenario, but
will refer to those in Scenario 2.
Let us go back to the NEONMap and NEONTransform nodes we might choose
in this scenario. Both nodes offer the mapping stage of a reformat procedure able
to reformat an input format to an output format. However, the NEONMap node
does not perform any output operation defined with the output format (for
example, transforming lower case input data to upper case output data). Given
this limitation for NEONMap nodes, full data transformation is supported by the
NEONTransform node only.
Which of the nodes we use basically depends on the business case. In some
cases, it might be useful to have the data from an input message simply mapped
to an output message, keeping the attributes of the data as they are. In our
simple reformat scenario, we could use either of the New Era Of Networks
nodes, since we have no significant output operations defined with the message.
The message flow and the definitions of the nodes barely differ from those in
Scenario 1a.
Both the NEONMap and the NEONTransform node are defined to reformat this
message into the target format SWIFTDataOut and, since we still use the
predefined input format in the New Era Of Networks environment
(SWIFTDataIn), we define exactly the same input format (given as Message
Type attribute) for this message flow.
The difference here is the message domain, which is set to NEONMSG to invoke the
NEONMSG parser with this message. But again, as long as we simply reformat
the message as given in this scenario, there is no difference between using the
NEON domain or using the NEONMSG domain with this message.
Access to Map objects, defined by the NEONFormatter GUI, is given via the Map
object and Map version attributes of these nodes, with the limitation that output
operations are not supported with the NEONMap node. This functionality is not
supported with the NEONFormatter node.
Important: Map version is reserved for future use and is not operational in the
current release.
Scenario 2a
Assuming we have a New Era Of Networks message containing data already
prepared for interpretation, as given with the output of Scenario 1a shown in
Example 8-5, we can now easily transform this data into XML format data by
using either the NEONMap node or the NEONTransform node.
Example 8-5 Input message for scenario 2a
F01BANKBEBBAXXX2222123456I100BANKDEFFXXXXU9601abcdefgh12345678PAYREFTB54302910930USD2000,EDWARDS BANK LILLIPUT POOL EDORSET/123-456-789BUGS
BUNNY WABBIT ELMER FUDD -41720873123456789ABC
The next step is to define an adequate output format. We define a simple flat
output format SWIFTOUT, as shown in Figure 8-29 on page 302.
Now that we have defined the input and output formats in the New Era Of
Networks environment, we construct the message flow in the WebSphere MQ
Integrator environment, as shown in Figure 8-30.
The MQInput node (Figure 8-31) defines the input format SWIFT-IN3 that we use
with this message as Message Type attribute of the node. Since we use the
NEONMap node, we need to define the NEONMSG domain as domain attribute.
The NEONMap node (Transform SWIFT to XML) defines the target format to
which the input information must be mapped and defines the output domain as
XML.
After running the flow, the resulting message looks like the one shown in
Figure 8-33. The tool used to display and parse the message contents is the
MQSeries Integrator V2 Test Tool, which is available as a SupportPac and can
be downloaded from:
http://www-4.ibm.com/software/ts/mqseries/txppacs/ih03.html
Figure 8-33 Scenario 2a: XML output without considering output operations
Scenario 2b
A variation on using the NEONMap node is using the NEONTransform node with
this flow (Figure 8-34 on page 306). The properties of the participating nodes are
identical; the difference is only that we use a NEONTransform node instead of a
NEONMap node.
Scenario 3a
The problem with a SWIFT message is that the message contains tags, and we
want to extract the pure data from it.
The original message is shown in Example 8-6.
Example 8-6 Input message for Scenario 3a
{1:F01BANKBEBBAXXX2222123456}
{2:I100BANKDEFFXXXXU}
{3:{113:9601}{108:abcdefgh12345678}}
{4:
:20:PAYREF- TB54302
:32A:910930USD2000,
:50:EDWARDS BANK LILLIPUT POOL EDORSET
:59:/123-456-789BUGS BUNNY WABBIT ELMER FUDD
-}
We want to extract the pure data from this message, as shown in Example 8-7.
Example 8-7 Extracted data
F01BANKBEBBAXXX2222123456I100BANKDEFFXXXXU9601abcdefgh12345678PAYREFTB54302910930USD2000,/123-456-789EDWARDS BANK LILLIPUT POOL EDORSETBUGS
BUNNY WABBIT ELMER FUDD -41720873123456789ABC
The message, now split in its logical parts, is then mapped to a second
predefined output format XML-OUT. This output format predefines the
complete XML structure by combinations of XML tags and values gained from
the input message.
To define the input and output formats, we use the New Era Of Networks
Formatter:
Figure 8-36 Scenario 3a: NEON Formatter GUI showing input format SWIFT-IN1 and
output format SWIFT-OUT1
Since we need two reformat procedures, we need to define two input and two
output formats:
SWIFT-IN1 -> SWIFT-OUT1
SWIFT-IN2 -> XML-OUT
The definition of the XML output format implies an enormous definition effort.
Since the complete XML structure of the message is predefined, we need to
define a large number of:
Literals
Fields
Defaults
Output Controls
This is a very trying task if you do not have an XML adapter available.
Once we have built the input and output formats, we have to define the sequence
in which the formats have to be used for the reformat operations. This means we
have to define that SWIFT-IN1 has to be used first, followed by SWIFT-OUT1,
then SWIFT-IN2, then XML-OUT.
This is done again in the New Era Of Networks environment by using the New
Era Of Networks Rules GUI, where we define a combination of Application Group
and MessageType. A NEON Application Group is comparable to the Message
Set used with WebSphere MQ Integrator. We call this Application Group
SWIFTSampleScenario2. MessageType is the name of the initial input format
(SWIFT-IN1) with which we start our reformat operations.
In addition to Application Group and MessageType, we need to define a rule and
a subscription combined with this rule. The rule is a simple dummy rule that
checks whether a specific field (here: BasicHeaderData) is present in the
message; the subscription describes the sequence of operations that must be
performed on a message when the rule hits. The rule definition is shown
in Figure 8-37. The subscription definition is shown in Figure 8-38.
The settings for the nodes used in this scenario are shown in the next figures.
Note that Message Set refers to the name of the Application Group and that
Message Type refers to the name of the predefined input format with which we
start the reformat sequence as specified in the Formatter GUI.
The NEONRules node has no properties to be considered. It is just an
instantiation of the NEON RulesEngine.
After putting the reformatted message to the destination queue, the message is
as shown in Figure 8-42.
Another way to achieve the same result without using NEONRules is to define a
message flow with a sequence of two NEONFormatter nodes, as shown in
Figure 8-43.
The initial input format SWIFT-IN1 is defined in the MQInput node (Get SWIFT
Message) given as MessageType (see Figure 8-44 on page 315). The first
NEONFormatter node in the sequence (see Step1 of the SWIFT reformat,
Figure 8-45 on page 316) defines the intermediate output format (SWIFT-OUT1)
as target format and, additionally, defines the second input format (SWIFT-IN2)
to which the reformatted message from step1 is passed.
It is important to define the New Era Of Networks domain in this node, since the
message has to be parsed in the NEON domain. The second NEONFormatter
node (see Step 2 of the SWIFT reformat, Figure 8-46 on page 316) simply
defines the second output format (XML-OUT) to which the message has to be
reformatted before it is passed to the MQOutput node.
Scenario 3b
In Scenario 3b, we use a predefined XML output format in the New Era Of
Networks environment. This may be feasible with a short message, but it is too
problematic with a large message, where we would have to define hundreds of
literals, fields, output operations and output controls. With MQSeries Integrator
V2.0.1, an XML adapter may be a solution. With WebSphere MQ Integrator, we
can construct an output message using the new NEONTransform or NEONMap
nodes in the NEONMSG domain, without an XML adapter and without
predefining the XML output format.
Since we still work with a two step reformat procedure to extract our data, we use
a combination of NEONFormatter and NEONTransform/NEONMap node.
Working in both New Era Of Networks domains (NEON and NEONMSG), we
also need a ResetContentDescriptor node in this example.
The message flow is shown in Figure 8-47.
The MQInput node defines the initial input format SWIFT-IN1 in the NEON
domain, as shown in Figure 8-48.
The NEONMap node (we could use a NEONTransform node instead) defines the
second output format SWIFTOUT as the ultimate target format and the output
domain XML as the target domain.
Note that SWIFTOUT is a flat output format and defines only the output fields of
the message and not the XML tags, as in scenario 3a.
Scenario 4a
The message flow in scenario 4a is shown in Figure 8-53.
The NEONRules node does not contain any significant definitions. It is just an
instantiation of the NEON rules engine, as shown in Figure 8-56. The node
transforms the message from SWIFT-OUT1 to SWIFT-IN2, converting the message
to XML, and routes it to its destination.
Since we use Destination List as output, queue definitions are not required in
the basic tab of the MQOutput node, as shown in Figure 8-58.
The NEON Rules environment defines the routing conditions for the message as
rules. Figure 8-59 shows the definition of the rules.
Figure 8-59 Scenario 4a: Rule definitions in the NEON Rules environment
Code = "BE"
Code = "GE"
Code = "UK"
Code = "US"
Associated with the rules are a number of subscriptions. Figure 8-60 shows the
actions associated with the rule Country Code = "BE".
Figure 8-60 Scenario 4a: subscription definitions in the NEON Rules environment
All actions associated with the rules as subscriptions are listed in Example 8-9.
Example 8-9 Defined subscriptions and their actions
reformat SWIFT-IN2 -> XML-OUT + putqueue BELGIUM
reformat SWIFT-IN2 -> XML-OUT + putqueue GERMANY
reformat SWIFT-IN2 -> XML-OUT + putqueue UK
reformat SWIFT-IN2 -> XML-OUT + putqueue US
Scenario 4b
Within the NEONMSG domain, the NEONRules offer a broader functionality with
NEON messages. The putqueue action and the newly introduced route action
cause the LocalEnvironment (previously called Destination List) of the output
message to
be appropriately configured for processing by an MQOutput or a RouteToLabel
node.
Within WebSphere MQ Integrator, we can build an alternative to the flow built
with MQSeries Integrator V2.0.1 by using some of the new functionality
introduced with the NEONMSG domain.
The completed message flow is shown in Figure 8-61.
The MQInput node (Get SWIFT Message, Figure 8-62) parses the SWIFT
message in the NEONMSG domain with an input format SWIFT-IN4 (quite similar
to SWIFT-IN1). It specifies the MessageSet SWIFTSampleScenario4b, which
refers to the Application Group in the NEONRules environment.
The Label node (Figure 8-64) simply defines the label associated with this node.
The output terminal of the Label node is connected to an MQOutput node,
defining the queue BELGIUM as target queue of the message.
The rules are defined in exactly the same manner as in Scenario 4a, as shown in
Figure 8-65 and Figure 8-66.
Figure 8-67 Scenario 4b: subscription Step 1 (Transform to target format SWIFT-OUT)
In a second step, we route the message to Label BELGIUM and transform it from
NEONMSG to XML, as shown in Figure 8-68.
the NEON Message Set in a database node, where we extract the values of
these fields and store them in a database. In parallel, the message is
transformed into generic XML and is passed to an MQOutput node for further
processes.
The input format SWIFT-IN5 and the output format SWIFT-OUT5 are defined in
the NEON domain, using the NEONFormatter (see Figure 8-69):
Figure 8-69 Scenario 5: NEONFormatter showing input format SWIFT-IN5 and output
format SWIFT-OUT5
The MQInput node (Get SWIFT Message, as shown in Figure 8-71) receives the
SWIFT message in the NEONMSG domain and instructs the NEONMSG parser
to parse the message according to the definitions, given with input format
SWIFT-IN5.
To be able to work with the NEON Message Set in the Database node we use in
this message flow, we need to define the following session entry in our database
connection file (nnsyreg.dat) for the NEON database. Note that this entry is
necessary only on the machine where you run the Configuration Manager. The
values in the session definition depend on the database you use.
NNOT_SHARED_LIBRARY   = dbt26db250
NNOT_FACTORY_FUNCTION = NNSesDB2Factory
NN_SES_SERVER         = <dbms instance>
NN_SES_USER_ID        = <user>
NN_SES_PASSWORD       = <password>
After adding this session entry to our database connection file, we have to restart
the Configuration Manager and the Control Center for the information to be
picked up by the Configuration Manager.
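On the Configuration Manager machine this restart can be done from the command line; a sketch follows, assuming the default component name — verify it against your installation:

```
mqsistop ConfigMgr
mqsistart ConfigMgr
```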
In the Control Center we go to the Message Sets tab, right-click Message Sets
and add the NEON Message Set to our workspace by selecting:
Message Sets -> Add to Workspace -> Message Set -> NEON Message Set
When the NEON Message set has been added to the current workspace, we add
the NEON formats we want to work on, as Elements:
Elements -> Add to Workspace -> Element
When the output format SWIFT-OUT5 has been added to the workspace, we are
able to work on this message set in other nodes, such as Compute nodes.
Note: You can only view NEON Message Sets. You cannot edit, change or
re-arrange the components in the Control Center. This still has to be done in
the NEON environment, using the NEONFormatter GUI.
In the Message field, we enter a name for our message. In this case, it is SWIFT.
Figure 8-78 Scenario 5: constructing the input side by defining the message (SWIFT)
Figure 8-79 Scenario 5: resulting entry in the database node after defining the message
name
The button Add element allows you to select elements (fields) from the NEON
Message Set (Figure 8-80).
Figure 8-80 Scenario 5: constructing the input side by selecting elements from the NEON
Message Set
Figure 8-83 Scenario 5: constructing the output to a database by defining the columns in
the SAMPLE database
When all columns are added, the Database node should look like the one shown
in Figure 8-84.
Finally, we code the ESQL statement (Example 8-11 ) that puts the values of the
message into the database (Figure 8-85).
Example 8-11 ESQL insert statement
INSERT INTO DATABASE.DB2ADMIN.SWIFT (BANKCODE, COUNTRYCODE,
BRANCHCODE, LOCATIONCODE, CUSTOMERINFO) VALUES
("Body"."BankCode", "Body"."CountryCode", "Body"."BranchCode",
"Body"."LocationCode", "Body"."Text4");
Figure 8-85 Scenario 5: ESQL code to input selected data from the NEON message into
the SAMPLE database
When a message has been passed through this message flow, the XML
message is put to the defined output queue and parts of the message are written
into the database (Figure 8-86).
= dbt26db250
= NNSesDB2Factory
= <dbms instance>
= <user>
= <password>
After adding this session entry to your database connection file, you need to
restart the broker for the information to be picked up.
The Default tab provides information on how to process the message if it does
not have an MQRFH (or the newer MQRFH2) header. The Message Set field
equates to DefaultAppGroup and the Message Type field equates to
DefaultMsgType, which were formerly in the rules engine (RULENG) parameter file.
The Message Domain should be defined as NEONMSG from the pull down menu
(Figure 8-89).
NEONRulesEvaluation1 node
This is a NEONRulesEvaluation node, which has no properties set. It has five
output terminals which are discussed in NEONRulesEvaluation1 node on
page 345. In the message flow shown in Figure 8-87 on page 343, we have
connected up the failure, nohit and putqueue terminals.
NNSY.PUTQUEUE.ACTION
This node is also an MQOutput node, but it is configured differently so that your
rules and subscriptions can decide where your output message goes.
The Basic tab is left blank with no values for the Queue Name or Queue Manager
Name properties.
The Advanced tab is configured with the Destination Mode property set to
Destination List from the drop-down menu. When a rule results in a putqueue
action, WebSphere MQ Integrator passes the queue name internally in the
message (in the LocalEnvironment folder of the message tree) for the MQOutput
node to process. Refer to Figure 8-90 for details.
Figure 8-90 MQOutput node properties for New Era Of Networks support
Table 8-4 Rules Engine (RULENG) parameter file values and their WebSphere MQ
Integrator configuration equivalents

CredentialsEnabled
QueueManagerName
MaxBackoutCount
InputQueueName
NoHitQueueName
FailureQueueName
DefaultAppGroup
DefaultMsgType
MaxHandles
PurgeInterval
LogFileName
LogLevel
ServerName
UserId
Password
DatabaseInstance
DatabaseType
Additional brokers
If you can justify the additional overhead, you can run more than one broker on a
single machine. Each broker requires its own queue manager and can be
configured to use a separate rules and formats database schema.
Chapter 9.
directory from the <install_dir>\classes directory. You should find several new
sub-directories forming the path /com/ibm/broker/plugin/. In the plugin directory,
you will see several classes that should match the classes found in the Java API
documentation, found in:
<install_dir>\docs\JavaAPI
<install_dir>/docs/JavaAPI
There are two important interfaces that should be reviewed before attempting to
write a new plug-in node. In the javadoc, you will see a listing for
MbNodeInterface and MbInputNodeInterface. These interfaces are for standard
Java plug-in nodes and Java input nodes, respectively. The implementation and
skeleton code are described in the JavaAPI javadoc for reference.
9.1.2 MbInputNodeInterface
The input node interface is designed to shield the programmer from the internal
operations of WebSphere MQ Integrator input features. Though most of the
communication and specification has been encapsulated, you still need to follow
several procedures, such as making your new node extend MbInputNode,
creating the output terminal definitions (there is no need for an input terminal)
and assembling the new message using MbMessageAssembly. If an error occurs
and you want WebSphere MQ Integrator to be aware of it for error handling,
throw an MbException. All other processing logic can be built to suit your
needs, for example, to monitor a directory, FTP, HTTP or a telnet session. In
Section 9.3, Implementing an input node on page 355, we will provide some
specific examples.
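As a plain-Java illustration of the polling pattern such an input node's run() method might use, consider the sketch below. It deliberately avoids the broker classes; the class and method names are our own, not part of the product API.

```java
import java.io.File;

// Hypothetical sketch of the polling pattern an input node could use.
// The real node extends MbInputNode and builds an MbMessageAssembly
// from the bytes it reads; this stand-alone class only shows the poll.
public class DirectoryPoller {
    private final File sourceDirectory;

    public DirectoryPoller(String path) {
        this.sourceDirectory = new File(path);
    }

    // Returns the first file found, or null when the directory is empty.
    public File pollOnce() {
        File[] files = sourceDirectory.listFiles();
        if (files != null && files.length > 0) {
            return files[0];
        }
        return null;
    }
}
```

In the broker, this check would sit inside the run() loop, and each non-null result would be turned into a message and propagated to the out terminal.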
9.1.3 MbNodeInterface
The node interface is the standard interface definition for all plug-in nodes other
than input nodes. As with the MbInputNodeInterface, the developer has been
shielded from implementation details but must adhere to a general framework.
The important aspects to note are that your plug-in node must extend MbNode.
Unlike MbInputNode, you must define your input terminal at the same time you
define your output terminals. MbMessageAssembly and MbException are used
just like the previous interface. All other processing logic can be built to suit your
needs. You could use a plug-in node to process specific data formats or
implement specific application logic. Based on the content of the message, you
could route the message to one of multiple output terminals or even translate it
into entirely new content taken from multiple data sources. In Section 9.4,
Implementing a plug-in node on page 371, we will provide some specific
examples.
<install_dir>\classes\jplugin.jar
In the Services control panel, set your broker to run with the local system account
(assuming you are logged in as a user that is a member of the Administrators
group) and select Interact with Desktop. When you restart the broker, you will
see two command windows open up. One of these windows will display standard
output sent from your Java plug-in nodes when executed. If you include such
lines as:
System.out.println("This is my debug message");
then you can see how your program is doing. This is only a temporary debugging
feature for development. You should not rely on these messages in a production
environment because the system standard output settings will most likely be
unusable or impractical.
For a more permanent tracing and logging feature, you should instead have the
messages that you need written to a log file. By using the attributes (discussed
later) options in WebSphere MQ Integrator, you can set a tracing level such as 0,
1 or 2 and the path to where a file should be written. This way, you should be able
to write a custom node program once in Java, but generate a logging file on
many platforms if your dynamic path settings are correct.
Version 1.4 of Java (currently available in Beta at the time of this writing) provides
new Logging APIs. When it is available for wide release, you could interface to
these APIs to provide more robust logging capability, or continue to use your own
code.
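A minimal sketch of such a file-based tracing helper is shown below. The class name, the attribute-driven constructor arguments and the level scheme are illustrative assumptions, not part of the WebSphere MQ Integrator API; in a real node the trace level and log path would be read from the node attributes discussed later.

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

// Hypothetical file-based tracing helper for a custom node.
// traceLevel: 0 = off, 1 = errors only, 2 = verbose.
public class NodeTrace {
    private final int traceLevel;
    private final String logPath;

    public NodeTrace(int traceLevel, String logPath) {
        this.traceLevel = traceLevel;
        this.logPath = logPath;
    }

    public void trace(int level, String text) {
        if (level > traceLevel) {
            return; // message is more verbose than the configured level
        }
        try (PrintWriter out = new PrintWriter(new FileWriter(logPath, true))) {
            // Append a timestamped line to the configured log file.
            out.println(System.currentTimeMillis() + " " + text);
        } catch (IOException e) {
            // Tracing must never break message processing; swallow the error.
        }
    }
}
```

Because the path is an attribute, the same compiled class can write its log to a platform-appropriate location on each broker where the node is deployed.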
The main class for an input node should be declared like this:
Example 9-1 Extending MbInputNode and implementing MbInputNodeInterface
public class SocketNode extends MbInputNode implements MbInputNodeInterface
You will also need to declare variables that will correspond to attributes defined in
the WebSphere MQ Integrator node installation. The values of these attributes
can be set when the node is used in a message flow.
Example 9-2 Setting attribute variables
String _filePath = "";
String _dataSource = "";
String _portNumber = "";
Example 9-3 shows the function calls to tell a message flow what the names of
the output terminals should be. On a standard plug-in node, you also need to
define input terminals.
Example 9-3 Creating terminals
public SocketNode() throws MbException
{
// create terminals here
createOutputTerminal("out");
createOutputTerminal("failure");
}
For every attribute that you can set in the message flow, you must use similar
statements, as in Example 9-4, to extract the values at runtime in the broker. A
description of how to configure the attributes in WebSphere MQ Integrator will be
discussed in 9.3.3, Simple message flow to test the input node on page 360.
Simply stated, dataSource is the name of the attribute you will declare in a
message flow while _dataSource is the temporary variable that you will use for
internal processing in this program.
Example 9-4 Get and Set attributes
public String getDataSource()
{
return _dataSource;
}
public void setDataSource(String dataSource)
{
_dataSource = dataSource;
}
If you do want to have messages or errors sent to the system or event log file
through WebSphere MQ Integrator, utilize the MbService as follows:
Example 9-5 MbService
private void logError(String methodName, String messageID, String traceText)
    throws MbException {
    MbService.logWarning(
        this, methodName, "com.ibm.samples.SocketNodeMessages", messageID,
        traceText, new Object[] {});
}
The bulk of the input node program is reading data, building and passing the
message. Example 9-7 shows how we build the message.
Example 9-7 MbMessageAssembly
public int run(MbMessageAssembly assembly) throws MbException {
    byte[] generatedMessageBytes = null;
    if (_dataSource.equalsIgnoreCase("file")) {
        try {
            generatedMessageBytes = useFilePath();
        }
        catch (Exception e) {
            // some catch code here
        }
    }
Furthermore, you need to include some code to look for the file in a directory,
then read it. The DirFilter is loaded as a separate class and can be found in
Appendix B.1.4, DirFilter.java on page 421.
Example 9-8 Reading the file based on the file path
private byte[] useFilePath() throws Exception {
    // sourceDirectory is a File instance variable built from _filePath
    String[] inputFileList = sourceDirectory.list(new DirFilter());
    if (inputFileList.length > 0) {
        File sourceFile = new File(
            _filePath + File.separator + inputFileList[0]);
        File newSourceFile = new File(
            _filePath + File.separator + inputFileList[0] + ".inuse");
        sourceFile.renameTo(newSourceFile); // mark the file as in use
        byte[] inputBytes = new byte[1024];
        InputStream in = new FileInputStream(newSourceFile);
        ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
        int len;
        while ((len = in.read(inputBytes)) > 0) {
            byteStream.write(inputBytes, 0, len);
        }
        in.close();
        newSourceFile.delete();
        return byteStream.toByteArray();
    }
    return null; // no file found on this poll
}
This is the basic structure of the program that you can use to retrieve a file from a
directory and pass it to WebSphere MQ Integrator as a message input rather
than from a queue. It is important to note here that we did not show any error
handling or exception catching scenarios in these snippets, but the full source
code in Appendix B.1, Java input node getting started example on page 416
shows you how to do this.
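The DirFilter class itself is in the appendix; a minimal sketch consistent with the behaviour described in the testing section later in this chapter (only files ending in .xml are picked up from the directory) might look like this:

```java
import java.io.File;
import java.io.FilenameFilter;

// Hypothetical sketch of the DirFilter class referenced above (the real
// one is in Appendix B.1.4). It accepts only files with a .xml extension,
// matching the behaviour described in the testing section.
public class DirFilter implements FilenameFilter {
    public boolean accept(File dir, String name) {
        return name.toLowerCase().endsWith(".xml");
    }
}
```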
2. If you used the same package path, then create a sub-directory structure
below a self-created working directory like javasource, such as:
<install_dir>\javasource\com\ibm\samples\
then copy all of your Java source files to that samples directory.
3. You must ensure that in your classpath, there is a reference to the jplugin.jar.
This file has the packages for the Mb*.class files that you will need for a
successful compilation.
4. Now you are ready to compile your source files. Open up a command window
and change to your working directory that has \com\ibm\samples\ below it,
such as:
cd <install_dir>\javasource\
If there are no errors, you should see the compiled *.class files in the same
directory as the source.
<install_dir>\javasource\com\ibm\samples\*.class
6. Now you need to create the JAR file. Change to your working directory using
cd <install_dir>\javasource\
This should create a SocketNode.jar file in the directory that you are currently
in. By issuing the command:
jar tvf SocketNode.jar
you should see output that lists all of the classes that were added. Ensure that
all of the desired classes were added. The output looks like that shown in
Example 9-9 on page 360 if you followed the source that was in the appendix.
10.You can copy the new JAR file by issuing the command:
copy SocketNode.jar <install_dir>\jplugin\SocketNode.jar
3. Click Next so that we can specify the attributes. These attributes correspond
to the fields that we specified in the various set methods of our Java program,
for example, when we declared:
Example 9-10 Attribute Extract from Declare
public void setDataSource(String dataSource)
{
_dataSource = dataSource;
}
The dataSource represents the attribute that we specify in this window. Click
InputNode in the Hierarchy then click Create Attribute. You must enter the
attribute names exactly as you did in your program. This is case sensitive. For
the first one we entered dataSource. Repeat the procedure until all of the
attributes that you declared have been added. For this example, we were able
to accept the defaults for all the other fields in this screen. Add the attribute
portNumber as well, so that you can use it later in this chapter.
4. Click Next. In the next window, you can declare default values for the
attributes that you just specified. We left these fields empty so that there
would be no accidental parameter settings when using this node. By clicking
the Description tab, you can add a short and long description. It is
recommended that you do this here because once you click Finish, you
cannot edit this parameter further.
5. Click Next. In this window, there are other parameters that can be set. You
can have WebSphere MQ Integrator create some templates for files, create a
properties file (a file that contains start-up/run-time parameters) or a stub for
a customizer. Please refer to the Programming Guide for an explanation of
these special features. Click Finish to create the node.
6. Now under IBMPrimitives you should see your new custom input node. The
green box implies that this is a brand new node in a checked out state. You
can right-click it to check it in if you so choose.
7. You are now ready to create a test message flow. Right-click the Message
Flows folder and select Create -> Message Flow. Give it a test name such
as InputNodeTestFlow.
8. Drag and drop your new InputNode into the window to the right. Right-click
the new node and select Properties. In the first window, use the dataSource
of file. For the filePath, use something meaningful to you such as
c:\temp\XML. This is where you are going to place test messages. If you want
to add another description, please do so now.
9. Click OK. If you wish, you can change the name of the InputNode1 to
something else by right-clicking it and selecting Rename.
10. Now drag and drop one Compute node from the IBMPrimitives to the
Message Flow on the right. Since our message is coming from a directory
and not from an MQSeries queue, we need to attach a header to the
message. This is done by right-clicking the Compute1 node and selecting
Properties. Click the ESQL tab without selecting either Copy message
headers or Copy entire message. In the ESQL window, insert the code like
this:
Example 9-11 ESQL creating MQMD header
SET OutputRoot.MQMD.MsgType=MQMT_DATAGRAM;
SET OutputRoot.XML=InputBody;
11. Ensure that there is no message that says Syntax Error and click OK. You
can rename the node to MakeMQMD.
12. Now we need to add three output terminals. Drag and drop three MQOutput
nodes to the far right of the message flow. Rename each one of the output
terminals to Out, MQMDFailure, and Failure respectively.
13.For the Out terminal, right-click and go to Properties. Click the Basic tab. You
can leave the Queue Manager Name field blank. For the Queue Name, use
OUT_Q.
14.Click the Advanced tab. For Message Context, select Default. If you do not
want to add a description, click OK.
15. Now set the Queue Names for the other output terminals. For MQMDFailure,
use MQMD_FAILURE_Q. For Failure, use FAILURE_Q. Make sure to remember to
set the Message Context to Default for each of them. Do not worry that you
have not created the queues yet, we will do it in another step.
16.Now that each of the output terminals has been defined, we need to connect
the nodes together. Right-click the Input Node and select Connect -> Out.
Attach the out connection to the MakeMQMD node. Connect the failure
connection to the Failure node (this is really just a dummy node; since there
is no MQMD attached, it is not accessible).
17.For the MakeMQMD, attach the out connection to the Out node and the
failure connection to the Failure node.
18. Once you have set up your flow, click File -> Check In -> All (Saved to
Shared).
19. Before you deploy the flow, you need to create the queues on the respective
brokers queue manager. Do this by opening the MQSeries Explorer and
adding the queues with the exact names that we used above. If you used
different names than in our example, make sure that they are consistent with
the message flow. You can make simple queue definitions and accept the
defaults for this example.
20. Now that you have added the necessary queues, it is time to deploy your
message flow to the broker. Return to the Control Center and select the tab
Assignments. Check out the default execution group of your broker (if that is
what you are using). Move the new message flow, InputNodeTestFlow, to the
correct broker. Check in the default execution group.
21. You may now deploy the changes to your broker by right-clicking the broker
name in the left pane and selecting Deploy -> Complete Assignments
Configuration.
22. Now click the Log tab and, after waiting several seconds for the
processes to finish, click the Refresh button; you should see two messages
returned to the display. If the deploy was successful, then there should be
two messages with the blue "i" to the left. If the deploy was not successful,
it is important to read all the error messages associated with deploying a
flow with a custom-built node, because the problem could have several
sources, from your Java code to the message flow to the broker. Read the
messages carefully and you should easily be able to determine what was
wrong with your configuration.
1. To test your message, create a simple piece of similar XML data using a text
editor such as Notepad:
Example 9-13 Simple XML test file
<data><action>add</action></data>
2. Ensure that there are no extraneous white spaces and save the file as
TestMessage.xml, but do not save it into the c:\temp\XML directory. You do
not have to use a true XML file for this example, but the file must end with the
.xml extension for it to be pulled from the directory. We are using a true XML
file here so that it can be reused if required in a future example.
3. Place a copy of the message into the c:\temp\XML directory; it should
immediately disappear from the directory. In the command window, you
should see a message that says File Closed. Message created.
Example 9-14 Sample standard output when file is created
File closed. Message created.
Searching for Incoming File...none found.
Searching for Incoming File...none found.
Searching for Incoming File...
4. Once you have tested your new node, you may want to stop the flow from the
Control Center using the Operations tab; right-click the flow and stop it. This
will keep the messages from displaying on the screen constantly and the
process from taking up system resources while you are developing the
application.
5. Now for the final assurance, open up the MQSeries Explorer. Double-click the
OUT_Q; if your message was created, it should be in the list. To view the
contents of your message, double-click the message then click the Data tab.
It may only let you select a hexadecimal format, but you should be able to see
the text of the message for verification.
6. That is all there is to testing the input node's directory-reading ability. If
you want to change the location of the directory that is being read, you will
have to change the filePath setting in the message flow and redeploy.
Necessary changes
1. If you have not yet saved your workspace in your Control Center, do so now
with a name like InputNodeTestWorkspace.xml. Also, this would be a good
time to stop running your InputNodeTestFlow if you have not done so
according to the instructions above.
2. Make a copy of your current message flow, by right-clicking the message flow
and selecting Copy. Then click the top-level Message Flow folder, right-click
and select Paste. The new copy will be given a default name. Change the
name of the new message flow to InputNodeTelnetTestFlow or something that
helps you remember the difference between the two.
3. Now open the new message flow so that we can make some changes to the
custom node definition. Right-click the Input node and select Properties. For
the dataSource, change it to, for example, port. For the portNumber, you can
use any valid port number that is not already being used by this machine. We
are going to specify 3500, but the Java program will default to 3000 if you do
not specify a port.
4. Click OK. Check-in All (Save to Shared). Check-out the default execution
group of your broker and assign the new message flow to the broker. Check in
the default execution group. Deploy the Delta Assignments Configuration
for the broker.
5. Switch to the log view and wait for the confirmation messages to appear by
clicking the Refresh button.
6. Once the new flow begins, you should see in your standard output terminal
messages stating that it is waiting for incoming data. This is because the Java
program is monitoring the port constantly until it detects a new session being
started.
Example 9-15 Telnet session waiting for incoming data
Waiting for Incoming Data...
Waiting for Incoming Data...
Waiting for Incoming Data...
Waiting for Incoming Data...
Waiting for Incoming Data...
3. Now use the Escape keystroke for your telnet session. On Windows 2000 it is
CTRL+]. In your command window, you should see messages confirming that
a new message was created.
4. Check the OUT_Q to see if the XML data that you entered appears in the
queue.
Tip: In this chapter, we explain how to deploy a custom Java input node on a
Windows 2000 broker installation. We did, however, perform nearly these
exact installation steps to deploy the same JAR file on Solaris and z/OS
successfully, without even recompiling the .java files. If you choose to try
this as well, please ensure that you specify a correct path setting as your
filePath attribute.
The source code for both programs can be found in Appendix B.2, Java plug-in
node getting started example on page 422 and also in the Web materials. You
can learn how to access the Web materials from the appendix, Locating the
Web material on page 445.
Otherwise, the structure of the program is very much like that of the input node.
There are a few value changes, such as the string of getNodeName():
Example 9-19 getNodeName ComIbmSwitchNode
public static String getNodeName()
{
return "ComIbmSwitchNode";
}
There are only two sections of code that provide the functionality that we require.
We must parse the XML file and then determine where to route based on the
value found. Parsing the file is accomplished through only two lines of code since
MbElement is already provided for us.
Example 9-20 Parsing the XML content
MbElement rootElement = assembly.getMessage().getRootElement();
MbElement switchElement =
rootElement.getLastChild().getFirstChild().getFirstChild();
This way, the parser will be able to locate the appropriate tag. Depending on
the XML structure you are required to parse, you can alter the order of the
getLastChild() and getFirstChild() calls.
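As a plain-JDK analogy of this navigation (using the standard DOM parser rather than the broker's MbElement API), the same value can be extracted from the TestMessage.xml content used earlier:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// JDK DOM analogy of the MbElement navigation shown above, run against
// the TestMessage.xml content (<data><action>add</action></data>).
public class SwitchValue {
    public static String extract(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        return doc.getDocumentElement()  // <data>
                  .getFirstChild()       // <action>
                  .getFirstChild()       // text node holding the value
                  .getNodeValue();
    }
}
```

As in the MbElement version, each step simply walks one level deeper in the tree, which is why a different document shape requires a different order of child calls.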
Now that the parser has placed the value of the action in switchElement, we
can evaluate its contents. First, we have to cast the switchElement to a string:
Example 9-21 Casting switchElement to a string
String terminalName;
String elementValue = (String)switchElement.getValue();
Now that the value is housed in elementValue, we can evaluate it with simple if
-> then -> else logic:
Example 9-22 Evaluating elementValue
if(elementValue.equals("add"))
terminalName = "add";
else if(elementValue.equals("change"))
terminalName = "change";
else if(elementValue.equals("delete"))
terminalName = "delete";
else if(elementValue.equals("hold"))
terminalName = "hold";
else
terminalName = "failure";
Notice that if the elementValue does not have a match, the message will be
routed to the failure output terminal.
The last step is to let WebSphere MQ Integrator know which terminal we want to
send the message to. This is accomplished by defining the MbOutputTerminal
value and propagating it:
Example 9-23 Setting the MbOutputTerminal value and propagating
MbOutputTerminal out = getOutputTerminal(terminalName);
out.propagate(assembly);
There are several features that you may want to add to a routing program, such
as dynamic error handling or creating multiple input terminals. Review the API
javadoc for more ideas on how to use the other jplugin.jar features.
6. Click Next. There are no attributes that we need to define for this example.
7. Click Next. Add a description here.
8. Click Next; right-click the InputNodeTestFlow and click Check Out.
9. Right-click the Failure node and click Delete.
10.Right-click the Out node and click Delete.
11. Drag and drop a new SwitchNode from the IBMPrimitives to the place where
the Out node was.
12. Right-click SwitchNode1 and rename it to SwitchNode since it is the only one
that we are going to use for this flow.
13. Drag and drop five MQOutput nodes from the IBMPrimitives to the far right of
your message flow.
14. Rename each of the new output nodes to Add, Change, Delete, Hold, and
Failure.
15. Right-click the Add output terminal and click Properties.
16. Click the Basic tab and change the name of the Queue to ADD_Q.
17.Click the Advanced tab and change the Message Context to Default. If you
do not want to change the description, click OK.
18.For each of the terminals, name the queues CHANGE_Q, DELETE_Q, HOLD_Q and
FAILURE_Q, respectively. Make sure that you remember to change the
Message Context to Default for each.
19.Right-click the MakeMQMD node and select Connect -> Out, then join the
connection to the SwitchNode.
20.Right-click the SwitchNode and select Connect -> Add, then join the
connection to the Add terminal node.
21. Complete the last task for each of the output terminals.
26. Return to the Control Center and redeploy your broker from the Assignments
tab by right-clicking the broker with the existing message flow and clicking
Deploy -> Complete Assignments Configuration.
27. Switch to the Log tab and refresh the screen to ensure that the message flow
deployed successfully.
28. To test the message flow, simply copy and paste a version of the file
TestMessage.xml that you created for the input node test into the
c:\temp\XML directory. It should immediately be removed by the InputNode. If
your <action> tags contained the value change, then you should see your
complete message appear in the CHANGE_Q.
9.5 Summary
By now you should have a fairly good understanding of what it takes to create
and deploy a custom built Java node with WebSphere MQ Integrator. At this
point, you should probably review the programming guide and the API javadoc
once more to make sure the points we made here are perfectly clear to you. Also
be sure to take a look at the Web materials for any last minute updates to these
code examples or additional information.
There are several other custom features and classes that have been built into the
jplugin API and that you can take advantage of. Likewise, there is a limitless
number of possibilities stemming from the functionality within Java itself, so you
may want to consider implementing a custom Java node. All the reasons that
make Java a good programming language (platform independence, ease of use
and a robust toolset) also explain why you may choose to exploit this functionality
for your own purposes.
Chapter 10.
The folder name becomes part of the compound (or aggregated) message tree
built by the AggregateReply node from the reply messages. You can specify any
value for the folder name and it does not have to be unique. Bear in mind that a
long name will make your ESQL statements harder to code.
However, if you use the same folder name for more than one request message in
a single aggregation, then the replies become an array in the compound
message under the folder name and need to be accessed with an array
subscript.
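This behaviour can be pictured with a simple data structure: replies that share a folder name accumulate into a list and must be retrieved by subscript. The class below is an illustration only, not broker code.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustration of why reusing a folder name turns the replies into an
// array in the compound message: entries sharing a key accumulate into
// a list and are accessed with an index.
public class CompoundMessage {
    private final Map<String, List<String>> folders = new LinkedHashMap<>();

    public void addReply(String folderName, String reply) {
        folders.computeIfAbsent(folderName, k -> new ArrayList<>()).add(reply);
    }

    public String getReply(String folderName, int index) {
        return folders.get(folderName).get(index);
    }
}
```

With a unique folder name per request, index 0 is the only entry; with a shared name, the ESQL that reads the compound message needs an array subscript for each reply.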
The AggregateRequest node does not itself generate the request message. You
generate the actual request message with a MQOutput (or MQeOutput) node
before using this node.
As well as using the LocalEnvironment information placed in the message by the
AggregateControl node, the AggregateRequest node uses information in the
WrittenDestinationList that is placed there by the output node. You must therefore
preserve this information if you manipulate the message tree in a Compute node.
This node waits for all the replies to arrive (or for the time-out interval to expire)
and then generates a compound (or aggregated) message built from all the
replies on the out terminal. The compound message includes the top level tree
name of ComIbmAggregateReplyBody, then the folder names and then the
messages, including everything below the Root of each message such as the
MQMD.
This node reads the broker database to retrieve information stored there by the
AggregateRequest nodes. In the event of a time-out (that is, if one or more
messages fail to arrive in the specified interval), the incomplete compound
message is sent to the timeout terminal.
You can use different time-out values for different requests by configuring more
than one AggregateControl node (but with the same aggregate name) in the
FAN_OUT flow. If messages arrive that cannot be identified as part of this
aggregation, they are sent to the unknown terminal. The other terminals are the
failure and catch terminals. Exceptions can be caught at either the MQInput node
The above message can be cut and pasted into Notepad to create a test file to
read with RFHUTIL, but remember to remove all blanks outside of XML tags and
remove any carriage returns before saving it. Failure to do so will result in the
MRM not parsing the message. A correct example is supplied as file
Atesting.xml.
If you have read Chapter 7, Exploring the new XML features on page 205, you
will recognize this message, as we defined it to the MRM in the message set
named CONTACTS.
Note that the contents of this XML message are relevant, because we will be
accessing the DB2 sample database using the name shown above. Make sure
that the message data for the name is correct and in upper-case format.
At this time, you should ensure that you have installed the SAMPLE database
that is supplied with DB2, and registered it as an ODBC data source of the same
name. You should also find out the schema (or owner) name for this database
(usually the user ID that created it). In our case it is ZPAT.
We use the EMPLOYEE table from the SAMPLE database; if you run the DB2
Control Center, the database should be displayed as shown. If you do not want to
install this database, you can amend the ESQL statements in the processing
flows to use some hard-coded values, and remove the database SELECT
statements, since this database access is not required to demonstrate message
aggregation.
These defaults are needed if the input message does not have an MQRFH2
header specifying them. You could choose to specify the MQRFH2 header
information when using the RFHUTIL program to put the message in the queue.
The input queue is named WMQI.IN in this example.
In the FAN_OUT flow, the failure node of the MQInput node is wired to an
MQOutput node which has a queue name of WMQI.FAIL. The message flows
also have Trace nodes connected between most of the important nodes; these
have been left in so that you can run a trace if you want to.
See Section 7.2.4, Problem solving with this flow on page 223 for details on
running user traces. We will show some extracts from the trace files later on.
The out terminal propagates the input message and is wired up to the flow logic
used to create request messages; in our case, it is wired up twice. The logic of
the request generating parts of the flow will be described shortly. If the order of
execution of these flow paths is important, you can use a FlowOrder node to
control it.
This is the actual fan-out logic, where the execution path divides into as many
parts as needed to generate the number of request messages required. We have
two parts, but you could have more than two, or you could have a loop in the
message flow generating messages (using a Filter node to control the loop
iteration), or send multiple messages from ESQL using the PROPAGATE
statement.
Each request message needs to pass through an MQOutput node, followed by an
AggregateRequest node, to be included in the reply aggregation. You can also
generate a message that does not form part of the aggregation, or decide not to
generate a message for a particular branch of the fan-out logic.
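The fan-out step described above can be sketched in a few lines. This is an illustrative model only, not the product's API: the function names, the dictionary message shape, and the branch-building lambdas are all our own assumptions. Each branch is tagged with the folder name its AggregateRequest node would use, and a control message records how many replies the fan-in side should expect.

```python
# Hypothetical sketch of the fan-out logic: one input request is split into
# several tagged request messages, plus a control message for the fan-in side.
def fan_out(input_msg, branches):
    """branches maps a folder name to a function that builds a request body."""
    requests = []
    for folder, build in branches.items():
        # each request remembers the folder name used for reply aggregation
        requests.append({"folder": folder, "body": build(input_msg)})
    # the control message tells the fan-in flow how many replies to expect
    control = {"expected": len(requests)}
    return requests, control

requests, control = fan_out(
    {"surname": "WALKER"},
    {
        "ALPHA": lambda m: {"staff": {"surname": m["surname"]}},
        "BRAVO": lambda m: {"LAST_NAME": m["surname"]},
    },
)
```

In the flows described here the two branches are wired explicitly, but the same idea extends to a loop or to ESQL PROPAGATE, as the text notes.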
The control terminal generates the aggregation control message; this needs to
be sent to a queue that can be read by the FAN_IN flow. However, the message,
as generated by the AggregateControl node, does not have an MQMD, so if
MQSeries is used to transport the message, we need to add one of these.
For this reason, a Compute node is wired between the control terminal and an
MQOutput node rather than connecting the two directly. This Compute node
simply adds an MQMD. This is done by setting a single field in the MQMD
(ensure that this header is created before the message body). The other default
values will be used when the message is generated.
The above two lines of ESQL are all that is needed to generate an MQMD and
copy the control message XML to the output message. The output of the
Compute node is then wired to an MQOutput node and the queue name property
is set to WMQI.CONTROL (other values can be left to default). This will place the
control message into the named queue.
The output terminal of the Compute node is wired to an MQOutput node which
places the message in the WMQI.REQUEST_A queue.
In this node, set a reply queue name of WMQI.REPLY (and use your queue
manager name in place of ZPAT for the reply-to queue manager).
In this case, we call the folder ALPHA. This terminates our first request flow,
although you do not have to make this the last node.
Again, this is an arbitrary example and is not required for aggregation, but it
serves to demonstrate the aggregation of a generic XML reply message with an
MRM-defined one. You can aggregate messages from different domains or from
the same domain, since the complete message tree of each reply message is
added to the message tree under ComIbmAggregateReplyBody.
The syntax warning about the reserved word Set can be ignored, or the
offending word can be placed in double quotes. Notice that the message
sections are created in order.
Note: An MQRFH2 header has been added to the message. This is because
in the FAN_IN flow we receive all the messages on a single MQInput node
and therefore need to ensure that they are correctly parsed. We do this by
coding the message domain, set, type and format in the MQRFH2 header for
this request and defaulting to generic XML for the other one.
Use your message set identifier in the ESQL if it is different from the example.
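The Note above says the message domain, set, type and format are coded in the MQRFH2 header. Those four values travel in the header's mcd folder as an XML-like name-value fragment. The helper below is our own sketch of assembling that fragment, not broker code; the values are the ones used in this example, so substitute your own message set identifier.

```python
# Sketch (ours, not the product's API) of building the MQRFH2 mcd folder
# that tells the broker which parser and message definition to use.
def build_mcd(domain, mset=None, mtype=None, fmt=None):
    parts = [f"<Msd>{domain}</Msd>"]          # message domain, e.g. MRM or XML
    if mset:
        parts.append(f"<Set>{mset}</Set>")    # message set identifier
    if mtype:
        parts.append(f"<Type>{mtype}</Type>") # message type
    if fmt:
        parts.append(f"<Fmt>{fmt}</Fmt>")     # wire format, e.g. TDS1
    return "<mcd>" + "".join(parts) + "</mcd>"

mcd = build_mcd("MRM", "DNUMQ4G07I001", "TEST", "TDS1")
```

For the generic XML request, only the domain entry would be present, which is why that request can default to XML parsing.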
The output from the Compute node is wired to an MQOutput node just as in the
previous request path. This time, the output queue name is set to
WMQI.REQUEST_B. The reply-to queue remains set to WMQI.REPLY.
The output from the MQOutput node is again finally wired to an AggregateRequest
node, but this time the property folder name is set to BRAVO.
This completes the FAN_OUT flow; the messages are summarized in Table 10-1.
Table 10-1  The messages

Function          Queue name        Reply-to queue
Input (in)        WMQI.IN
Control (out)     WMQI.CONTROL
Request A (out)   WMQI.REQUEST_A    WMQI.REPLY
Request B (out)   WMQI.REQUEST_B    WMQI.REPLY
The request messages are in whatever format you used to create them; in our
example, the generic XML request message (Request A) is as shown below:
Example 10-5 Request A
<?xml version="1.0"?>
<staff>
<given>JAMES</given>
<surname>WALKER</surname>
</staff>
Check in, deploy and test the FAN_OUT flow by placing the original test XML
message onto WMQI.IN and you should see these three messages appear in the
appropriate queues (do not forget to create the queues as local queues in
MQSeries Explorer). Test the FAN_OUT flow in isolation until it is successful.
Remember to deploy the DLM_TEST and CONTACTS message sets as well.
If you experience problems, you can import the working FAN_OUT flow available
as Web material, or run traces until you have debugged your flow. You might
also use the Control Center debugger to help track down the problem.
Figure 10-14 shows the entire FAN_OUT flow along with Trace nodes (the
MQInput node also has a failure queue wired up to it).
The default message domain is set to XML and the other default fields are left
blank. In all these examples, the MQInput node's transaction mode is set to the
default of Yes, but you may prefer to set it to No, or to wire the failure terminal
to a failure queue, to avoid message rollback and looping during initial testing.
A Compute node is next in the flow. We access the SAMPLE database (supplied
with DB2), so you need to add the database table to the inputs side in the
Compute node properties using a Data Source value of SAMPLE and a Table
Name value of EMPLOYEE.
Select Copy entire message and then add the following line of ESQL at the
bottom.
Example 10-7 ESQL needed to get database record
Set "OutputRoot"."XML"."staff"."record" = THE
(SELECT T.* FROM Database.ZPAT.EMPLOYEE AS T WHERE T.LASTNAME =
"InputBody"."staff"."surname");
Note that the schema name here is ZPAT; this will be different for your database
and you should change it to the value for your schema. Use the DB2 Control
Center to find out the correct value for your installation of DB2.
This ESQL statement reads a single row from the database where the last name
matches the one from the input message. All columns are returned (because we
used T.* in the SELECT statement) and will be added to the message tree. This
has the effect of adding the entire employee record to the message. This is just
an example to illustrate the power of XML; you could select certain columns or
use fixed format messages if you so chose.
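The ESQL THE(SELECT T.* ...) above returns the single matching row with all of its columns. The same lookup can be sketched outside the broker; here an in-memory SQLite table (a stand-in we created for the DB2 SAMPLE EMPLOYEE table, with only a few of its columns) plays the database role.

```python
# Illustrative stand-in for the ESQL database SELECT: fetch the one row
# whose LASTNAME matches, returning all columns as a record.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMPLOYEE (EMPNO TEXT, LASTNAME TEXT, WORKDEPT TEXT)")
conn.execute("INSERT INTO EMPLOYEE VALUES ('000190', 'WALKER', 'D11')")

def lookup(surname):
    cur = conn.execute("SELECT * FROM EMPLOYEE WHERE LASTNAME = ?", (surname,))
    row = cur.fetchone()  # like THE(...), keep just the single matching row
    if row is None:
        return None
    # all selected columns become fields of the record, as with T.*
    return dict(zip([c[0] for c in cur.description], row))

record = lookup("WALKER")
```

In the message flow, this record lands under OutputRoot.XML.staff.record, so every column automatically becomes an XML element.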
The Compute node is wired to an MQReply node. The use of MQReply is
important, since we want to ensure that the incoming MQSeries message ID is
moved to the correlation ID of the reply message. The reply-to queue name is
stored in the MQMD and so no parameters are needed for this node.
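The correlation behavior that makes MQReply important can be shown in miniature. The sketch below is our own model (plain dictionaries, invented IDs), but the field names follow the MQMD: the request's message ID becomes the reply's correlation ID, which is exactly what lets the fan-in side match replies to requests.

```python
# Sketch of the MQReply correlation rule: MsgId of the request is copied
# into CorrelId of the reply.
def build_reply(request_mqmd, payload):
    return {
        "MQMD": {
            "MsgId": "reply-msg-id",               # a fresh ID for the reply
            "CorrelId": request_mqmd["MsgId"],     # copied from the request
            "DestQ": request_mqmd["ReplyToQ"],     # reply goes to the reply-to queue
        },
        "body": payload,
    }

request_mqmd = {"MsgId": "ID:0001", "ReplyToQ": "WMQI.REPLY"}
reply = build_reply(request_mqmd, {"loc": "D11"})
```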
However, to verify that the message is the one we expected, we connect a Check
node to the flow as the next node. This is configured as shown in Figure 10-18:
Copy entire message is selected (which generates the first line) and then the
WORKDEPT column is extracted from the table and used to set the LOCATION
field in the MRM message tree. Remember to change ZPAT to the name of your
schema.
The database table has again been added to the inputs side; you can manually
add some column names such as EMPNO and WORKDEPT to the input display
if you want. The MRM message TEST has also been added to the display.
The output of this Compute node is an MRM message wired to an MQReply
node (again, this logic is important to ensure that the MsgId is moved to the
CorrelId). The reply-to queue name is already in the message MQMD
header. Be sure to preserve the MQMD header in these processing flows by
copying the headers, or the entire message, in any Compute node.
The output message from this flow looks like the one shown in Example 10-10.
Example 10-10 Delimited message output
JAMES,WALKER,,D11
If you look at this message with MQSeries Explorer, you should also see the
MQRFH2 header preceding the payload data in the message data when you
browse it.
Check in, deploy and test the PROC_A and PROC_B flows (one at a time is the
suggested method) along with the working FAN_OUT flow. Once all of these are
working properly (use RFHUTIL to examine the various messages' content in
detail), clear all the test queues of messages before testing the
FAN_IN flow.
Tip: You can remove unwanted messages from a queue (for example the
control messages) during testing by using the RFHUTIL program to read
them. This program will also display the message contents in different ways,
such as parsed XML. To put messages to a queue, use RFHUTIL to read the
message data from a file first, and then to put it into the queue.
When the number of messages expected (determined from the broker database
state information) is received, the AggregateReply node generates the
compound output message. If messages that are not part of this aggregation are
received, they are passed to the unknown terminal. If the expected messages fail
to arrive within the time-out interval, the incomplete compound message is
passed to the timeout terminal. A time-out value of 0 means that the node waits
indefinitely.
Failure and catch terminals exist and can be wired up as required. We should
now have a compound message being produced which can be processed in the
remainder of this flow to generate a response to the original request.
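The fan-in bookkeeping just described can be modelled in a few lines. Everything here is our own sketch (the function, the reply dictionaries, the status strings are invented for illustration): collect replies until the expected count is reached, divert strays to an "unknown" list, and give up with the partial compound message on time-out.

```python
# Rough model of the AggregateReply behavior: expected count, unknown
# terminal for strays, timeout terminal for incomplete aggregations.
import time

def fan_in(expected, get_reply, timeout_s):
    compound, unknown = {}, []
    deadline = time.monotonic() + timeout_s
    while len(compound) < expected:
        if time.monotonic() > deadline:
            return compound, unknown, "timeout"   # incomplete compound message
        reply = get_reply()
        if reply is None:
            continue
        folder = reply.get("folder")
        if folder is None:
            unknown.append(reply)                 # not part of this aggregation
        else:
            compound[folder] = reply["body"]      # one folder per reply
    return compound, unknown, "complete"

replies = iter([
    {"folder": "ALPHA", "body": {"EMPNO": "000190"}},
    {"stray": True},                              # no folder: goes to unknown
    {"folder": "BRAVO", "body": {"LAST_NAME": "WALKER"}},
])
result = fan_in(2, lambda: next(replies, None), timeout_s=5)
```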
To construct a final reply, a Compute node is used which has access to the
compound message. Unfortunately, because the compound message is not in
the MRM, you cannot directly drag and drop message fields. However, you can
still drag the MRM field names and then insert the extra levels.
In this example, the compound message has two folders, one for each message,
so the PROC_B message fields have the following prefix (or stem):
InputRoot.ComIbmAggregateReplyBody."BRAVO"
BRAVO is the folder name associated with this request by the AggregateRequest
node in FAN_OUT. The remainder of the structure is the MRM message, so the
fully qualified field name for LAST_NAME is:
InputRoot.ComIbmAggregateReplyBody."BRAVO"."MRM"."LAST_NAME";
Similarly, the PROC_A generic XML fields are available as follows:
InputRoot.ComIbmAggregateReplyBody."ALPHA"."XML"."staff"."record"."EMPNO";
You can access any combination of the two (or more) message structures in the
Compute node to build a new output message. There will be as many folders as
there are different folder names from the aggregation requests for TESTAGG.
If more than one request message has used the same folder name in a single
aggregation instance, then each message for the same folder becomes an array
element within the folder (which is not the case in our example).
You could have several different message domain types in a compound reply, or
a number of XML messages, or a number of MRM messages, etc. If you
compound several XML messages, then the results are not merged into a single
XML tree, unless you choose to do so using appropriate Compute node logic.
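The folder rules above (one folder per distinct name, repeated names becoming array elements) can be sketched with an ordinary dictionary of lists. This is our own data model, not the broker's internal representation; it only illustrates the shape of the tree under ComIbmAggregateReplyBody.

```python
# Sketch of how replies accumulate in the compound message tree:
# distinct folder names become separate folders, repeated names become
# array elements within one folder.
def add_reply(tree, folder, body):
    replies = tree.setdefault("ComIbmAggregateReplyBody", {}).setdefault(folder, [])
    replies.append(body)
    return tree

tree = {}
add_reply(tree, "ALPHA", {"XML": {"staff": {"record": {"EMPNO": "000190"}}}})
add_reply(tree, "BRAVO", {"MRM": {"LAST_NAME": "WALKER"}})
add_reply(tree, "BRAVO", {"MRM": {"LAST_NAME": "JONES"}})  # same folder: array element
```

In our example each folder name is used once, so each folder holds a single message.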
Example 10-11 ESQL to produce a final response message
Set OutputRoot.MQMD.Version = 2;
Set OutputRoot.MQMD.Format = 'MQSTR';
Set OutputRoot.Properties.MessageDomain = 'XML';
Set OutputRoot.XML."REP"."name" =
InputRoot.ComIbmAggregateReplyBody."BRAVO"."MRM"."LAST_NAME";
Set OutputRoot.XML."REP"."loc" =
InputRoot.ComIbmAggregateReplyBody."BRAVO"."MRM"."LOCATION";
Set OutputRoot.XML."REP"."emp" =
InputRoot.ComIbmAggregateReplyBody."ALPHA"."XML"."staff"."record"."EMPNO";
Set OutputRoot.XML."REP"."tel" =
InputRoot.ComIbmAggregateReplyBody."ALPHA"."XML"."staff"."record"."PHONENO";
The Compute node does not select Copy message headers (since there are
none produced by the AggregateReply node), nor does it use Copy entire
message. A new message header must be constructed for the new output
message, although you can copy fields from one of the included messages'
headers.
The properties, MQMD and other headers of each included message are
available under their folder name (for example using the element name
InputRoot.ComIbmAggregateReplyBody.BRAVO.MQMD.Format).
When building a new message, be sure to create at least one field in each
header section, in the correct order (for example, create the MQMD before the
MQRFH2, and the MQRFH2 before the XML body); otherwise the parser may
generate the output message incorrectly, or not at all, at MQOutput time.
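The ordering constraint can be made concrete by modelling the output message as an ordered sequence of parts, with headers created before the body. The list-of-parts representation below is purely our own illustration of the rule.

```python
# Sketch of the header-ordering rule: build the MQMD first, then any
# further headers, then the body, so serialization happens in sequence.
def build_message():
    parts = []
    parts.append(("MQMD", {"Version": 2, "Format": "MQSTR"}))  # header first
    parts.append(("XML", {"REP": {"name": "WALKER"}}))         # body last
    return parts

order = [name for name, _ in build_message()]
```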
The output of the Compute node is connected to an MQOutput node which uses
a queue name of WMQI.OUT. The final response message looks like this:
Example 10-12 Final response message
<REP>
<name>WALKER</name>
<loc>D11</loc>
<emp>000190</emp>
<tel>2986</tel>
</REP>
This example message shows how the two processing flows have extracted
different data (although, in our test case, from the same database), and how the
two reply messages have been aggregated and a final response built from the
data available in the compound message.
UserTrace BIP4060I: Data 'TRACE AFTER AGGREGATION
----------------------(
(0x1000000)Properties = (
  (0x3000000)MessageSet      = NULL
  (0x3000000)MessageType     = NULL
  (0x3000000)MessageFormat   = NULL
  (0x3000000)Encoding        = NULL
  (0x3000000)CodedCharSetId  = NULL
  (0x3000000)Transactional   = UNKNOWN
  (0x3000000)Persistence     = UNKNOWN
  (0x3000000)CreationTime    = NULL
  (0x3000000)ExpirationTime  = NULL
  (0x3000000)Priority        = NULL
  (0x3000000)ReplyIdentifier = NULL
  (0x3000000)ReplyProtocol   = 'MQ'
  (0x3000000)Topic           = NULL
)
(0x1000000)ComIbmAggregateReplyBody = (
  (0x1000000)BRAVO = (
    (0x1000000)Properties = (
      (0x3000000)MessageSet      = 'DNUMQ4G07I001'
      (0x3000000)MessageType     = 'TEST'
      (0x3000000)MessageFormat   = 'TDS1'
      (0x3000000)Encoding        = 546
      (0x3000000)CodedCharSetId  = 437
      (0x3000000)Transactional   = TRUE
      (0x3000000)Persistence     = FALSE
      (0x3000000)CreationTime    = NULL
      (0x3000000)ExpirationTime  = -1
      (0x3000000)Priority        = 0
      (0x3000000)ReplyIdentifier = X'414d51207a7061742020202020202020f46a0f3ca2400100'
      (0x3000000)ReplyProtocol   = 'MQ'
      (0x3000000)Topic           = NULL
    )
    (0x1000000)MQMD = (
      (0x3000000)SourceQueue      = 'WMQI.REPLY'
      (0x3000000)Transactional    = TRUE
      (0x3000000)Encoding         = 546
      (0x3000000)CodedCharSetId   = 437
      (0x3000000)Format           = 'MQHRF2  '
      (0x3000000)Version          = 2
      (0x3000000)Report           = 0
      (0x3000000)MsgType          = 2
      (0x3000000)Expiry           = -1
      (0x3000000)Feedback         = 0
      (0x3000000)Priority         = 0
      (0x3000000)Persistence      = 0
      (0x3000000)MsgId            = X'414d51207a7061742020202020202020f46a0f3c42f00000'
      (0x3000000)CorrelId         = X'414d51207a7061742020202020202020f46a0f3ca2400100'
      (0x3000000)BackoutCount     = 0
      (0x3000000)ReplyToQ         = ' '
      (0x3000000)ReplyToQMgr      = 'zpat'
      (0x3000000)UserIdentifier   = 'zpat'
      (0x3000000)AccountingToken  = X'0000000000000000000000000000000000000000000000000000000000000000'
      (0x3000000)ApplIdentityData = ' '
      (0x3000000)PutApplType      = 26
      (0x3000000)PutApplName      = 'zpat'
      (0x3000000)PutDate          = DATE '2001-12-06'
      (0x3000000)PutTime          = GMTTIME '21:56:01.120'
      (0x3000000)ApplOriginData   = ' '
      (0x3000000)GroupId          = X'000000000000000000000000000000000000000000000000'
      (0x3000000)MsgSeqNumber     = 1
      (0x3000000)Offset           = 0
      (0x3000000)MsgFlags         = 0
      (0x3000000)OriginalLength   = -1
    )
    (0x1000000)MQRFH2 = (
      (0x3000000)Version        = 2
      (0x3000000)Format         = '        '
      (0x3000000)Encoding       = 546
      (0x3000000)CodedCharSetId = 437
      (0x3000000)Flags          = 0
      (0x3000000)NameValueCCSID = 1208
      (0x1000000)mcd = (
        (0x1000000)Msd = (
          (0x2000000) = 'MRM'
        )
        (0x1000000)Set = (
          (0x2000000) = 'DNUMQ4G07I001'
        )
        (0x1000000)Type = (
          (0x2000000) = 'TEST'
        )
        (0x1000000)Fmt = (
          (0x2000000) = 'TDS1'
        )
      )
    )
    (0x1000008)MRM = (
      (0x3000001)FIRST_NAME = 'JAMES'
      (0x3000001)LAST_NAME  = 'WALKER'
      (0x3000001)LOCATION   = 'D11'
    )
  )
  (0x1000000)ALPHA = (
    (0x1000000)Properties = (
      (0x3000000)MessageSet      = ''
      (0x3000000)MessageType     = ''
      (0x3000000)MessageFormat   = ''
      (0x3000000)Encoding        = 546
      (0x3000000)CodedCharSetId  = 437
      (0x3000000)Transactional   = TRUE
      (0x3000000)Persistence     = FALSE
      (0x3000000)CreationTime    = NULL
      (0x3000000)ExpirationTime  = -1
      (0x3000000)Priority        = 0
      (0x3000000)ReplyIdentifier = X'414d51207a7061742020202020202020f46a0f3cb2400100'
      (0x3000000)ReplyProtocol   = 'MQ'
      (0x3000000)Topic           = NULL
    )
    (0x1000000)MQMD = (
      (0x3000000)SourceQueue      = 'WMQI.REPLY'
      (0x3000000)Transactional    = TRUE
      (0x3000000)Encoding         = 546
      (0x3000000)CodedCharSetId   = 437
      (0x3000000)Format           = 'MQSTR   '
      (0x3000000)Version          = 2
      (0x3000000)Report           = 0
      (0x3000000)MsgType          = 2
      (0x3000000)Expiry           = -1
      (0x3000000)Feedback         = 0
      (0x3000000)Priority         = 0
      (0x3000000)Persistence      = 0
      (0x3000000)MsgId            = X'414d51207a7061742020202020202020f46a0f3c42e00000'
      (0x3000000)CorrelId         = X'414d51207a7061742020202020202020f46a0f3cb2400100'
      (0x3000000)BackoutCount     = 0
      (0x3000000)ReplyToQ         = ' '
      (0x3000000)ReplyToQMgr      = 'zpat'
      (0x3000000)UserIdentifier   = 'zpat'
      (0x3000000)AccountingToken  = X'0000000000000000000000000000000000000000000000000000000000000000'
      (0x3000000)ApplIdentityData = ' '
      (0x3000000)PutApplType      = 26
      (0x3000000)PutApplName      = 'zpat'
      (0x3000000)PutDate          = DATE '2001-12-06'
      (0x3000000)PutTime          = GMTTIME '21:56:01.180'
      (0x3000000)ApplOriginData   = ' '
      (0x3000000)GroupId          = X'000000000000000000000000000000000000000000000000'
      (0x3000000)MsgSeqNumber     = 1
      (0x3000000)Offset           = 0
      (0x3000000)MsgFlags         = 0
      (0x3000000)OriginalLength   = -1
    )
    (0x1000010)XML = (
      (0x5000018)XML = (
        (0x6000011) = '1.0'
      )
      (0x1000000)staff = (
        (0x1000000)given = (
          (0x2000000) = 'JAMES'
        )
        (0x1000000)surname = (
          (0x2000000) = 'WALKER'
        )
        (0x1000000)record = (
          (0x1000000)EMPNO = (
            (0x2000000) = '000190'
          )
          (0x1000000)FIRSTNME = (
            (0x2000000) = 'JAMES'
          )
          (0x1000000)MIDINIT = (
            (0x2000000) = 'H'
          )
          (0x1000000)LASTNAME = (
            (0x2000000) = 'WALKER'
          )
          (0x1000000)WORKDEPT = (
            (0x2000000) = 'D11'
          )
          (0x1000000)PHONENO = (
            (0x2000000) = '2986'
          )
          (0x1000000)HIREDATE = (
            (0x2000000) = '1974-07-26'
          )
          (0x1000000)JOB = (
            (0x2000000) = 'DESIGNER'
          )
          (0x1000000)EDLEVEL = (
            (0x2000000) = '16'
          )
          (0x1000000)SEX = (
            (0x2000000) = 'M'
          )
          (0x1000000)BIRTHDATE = (
            (0x2000000) = '1952-06-25'
          )
          (0x1000000)SALARY = (
            (0x2000000) = '20450.00'
          )
          (0x1000000)BONUS = (
            (0x2000000) = '400.00'
          )
          (0x1000000)COMM = (
            (0x2000000) = '1636.00'
          )
        )
      )
    )
  )
)
Tip: Please refer to Chapter 6 of the IBM manual Using the Control Center,
SC34-5602, which also covers the use of aggregation nodes, and, in
particular, read the section dealing with exception handling. The online help in
the Control Center also explains the node properties and terminals in more
detail.
Web materials
Instructions for importing message sets and flows can be found in Section 6.7.1,
Importing the Web materials message sets and flows on page 203.
The following exported flows are supplied:
FAN_OUT.xml
PROC_A.xml
PROC_B.xml
FAN_IN.xml
The original input test message is supplied as file Atesting.xml. The message
sets are CONTACTS.mrp and DLM_TEST.mrp.
A formatted trace file for a complete fan-out and fan-in transaction using the
sample flows is supplied as a file named AGGREGATION.txt.
Appendix A.
Hardware/software
configuration
This appendix provides details about the hardware and software configuration of
the computers that were used to build the examples and to perform the migration
exercise.
Operating system
Windows 2000 Professional + ServicePac 2
Software details
DB2 Enterprise Edition V6.1 + Fixpak 7
MQSeries for Windows NT and Windows 2000 V5.2.1 with CSD U200151
MQSeries for Java MA88
Operating system
AIX V4.3.3
Software details
DB2 Enterprise Edition V7.1
MQSeries for AIX V5.2 + CSD2 (U474779, U474840, U478788)
MQSeries for Java MA88
Operating system
Solaris 8
Software details
Oracle 8.1.6
MQSeries for Solaris V5.2 + CSD2 (U474789)
WebSphere MQ Integrator V2.1 + GA CSD (U481172).
Appendix B.
}
public int run (MbMessageAssembly assembly) throws MbException {
byte [] generatedMessageBytes = null;
// select correct routine
if (_dataSource.equalsIgnoreCase ("port")) {
try {
generatedMessageBytes = usePortNumber ();
}
catch (InterruptedIOException e) {
System.out.println ("Timed out");
return TIMEOUT;
}
catch (SocketNodeException e) {
return FAILURE;
}
catch (Exception e) {
logMessage ("run", "4E", e.getMessage ());
e.printStackTrace ();
return FAILURE;
}
}
else {
if (_dataSource.equalsIgnoreCase ("file")) {
try {
generatedMessageBytes = useFilePath ();
}
catch (SocketNodeException e) {
return FAILURE;
}
catch (Exception e) {
logMessage ("run", "4E", e.getMessage ());
e.printStackTrace ();
return FAILURE;
}
}
else {
logMessage ("run", "0W", "");
System.out.println ("Invalid datasource specified.");
// use defaults and run port version.
_portNumber = "3000";
_dataSource = "port";
try {
generatedMessageBytes = usePortNumber ();
}
catch (InterruptedIOException e) {
System.out.println ("Timed out");
return TIMEOUT;
}
catch (SocketNodeException e) {
return FAILURE;
}
catch (Exception e) {
logMessage ("run", "4E", e.getMessage ());
e.printStackTrace ();
return FAILURE;
}
File newSourceFile = new File (_filePath + File.separator + inputFileList[0] +
".inuse");
if (sourceFile.renameTo (newSourceFile) == false) {
logMessage ("useFilePath", "0E", "Source '" + _filePath + File.separator +
inputFileList[0] + "' Dest '" + _filePath + File.separator + inputFileList[0] +
".inuse'");
throw new SocketNodeException ();
}
System.out.println ("\rFile found. Reading message ...");
// copy file into byte array for propagation
byte[] inputBytes = new byte[1024];
InputStream in = new FileInputStream (newSourceFile);
ByteArrayOutputStream byteStream = new ByteArrayOutputStream ();
int len;
while ((len = in.read (inputBytes)) > 0) {
//append to input;
byteStream.write (inputBytes,0,len);
}
in.close ();
newSourceFile.delete ();
System.out.println ("\rFile closed. Message created.\n");
return byteStream.toByteArray ();
}
}
System.out.println ("none found.\n");
return null;
}
private byte[] usePortNumber () throws Exception {
System.out.println ("Waiting for Incoming Data...");
// open server port if not already open
if (server == null) {
try {
server = new ServerSocket (Integer.parseInt (_portNumber,10));
}
catch (Throwable t){
logMessage ("usePortNumber", "1E", "Port '" + _portNumber + "'\r\nError:\r\n" +
t.getMessage ());
throw new SocketNodeException ();
}
}
server.setSoTimeout (10000);
Socket sock = server.accept ();
System.out.println ("\rConnection established. Reading message ...");
byte[] inputBytes = new byte[1024];
InputStream in = sock.getInputStream ();
ByteArrayOutputStream byteStream = new ByteArrayOutputStream ();
int len;
while ((len = in.read (inputBytes)) > 0) {
//append to input;
byteStream.write (inputBytes,0,len);
}
in.close ();
System.out.println ("\rConnection closed. Message created.\n");
return byteStream.toByteArray ();
}
}
//EOF
B.1.2 SocketNodeException.java
Source code
package com.ibm.samples;
public class SocketNodeException extends Exception {
public SocketNodeException () {
super();
}
public SocketNodeException (String s) {
super(s);
}
}
B.1.3 SocketNodeMessages.java
Source code
package com.ibm.samples;
import java.util.*;
public class SocketNodeMessages extends ListResourceBundle {
/**
* <p>static multi-dimensional array used as a catalog structure to store
* messages containing a varying number of inserts, accessed via a msg
* key</p>
*/
private static final Object[][] contents = {
{"0E", "\r\nUnable to rename file.\r\n"},
{"1E", "\r\nUnable to open port."},
{"2E", "\r\nThe directory path specified is invalid.\r\nAborting read."},
{"3E", "\r\nUnable to read from file.\r\n"},
{"4E", "\r\nA general critical error occurred.\r\nThe stack trace is as follows:\r\n"},
{"5E", "\r\nThe path specified is not a directory.\r\n"},
{"0W", "\r\nThe data source specified is invalid.\r\nDefaulting to Socket mode, Port
3000.\r\n"},
};
public Object[][] getContents () {
return contents;
}
}
B.1.4 DirFilter.java
Source code
package com.ibm.samples;
public class DirFilter implements java.io.FilenameFilter {
public DirFilter () {
super();
}
public boolean accept (java.io.File dir, String name) {
// accept if filename ends in .xml
if (name.endsWith (".xml") == true) {
return true;
} else {
return false;
}
}
}
(where <install_dir> is the drive and the directory where the WebSphere
MQ Integrator installation is located)
if the setting is not already in your classpath.
2. Write the programs shown above in a text editor, or copy them from the Web site.
3. Save programs to <install dir>\javasource\com\ibm\samples\. Ensure that you
save the files with a .java extension and that a .txt or other extension is not
attached.
4. From a command prompt, type cd <install dir>\javasource\
5. Run the command javac com\ibm\samples\*.java
6. Create the JAR using jar cvf SocketNode.jar com\ibm\samples\*.class
7. Stop the broker with mqsistop <broker_name>
8. If the configuration manager is running on the same machine, issue the
command mqsistop ConfigMgr
out.propagate (assembly);
}
/* Attributes are defined for a node by supplying get/set methods.
* The following two methods define an attribute 'nodeTraceSetting'.
* The capitalisation follows the usual JavaBean property convention.
*/
public String getNodeTraceSetting () {
return _nodeTraceSetting;
}
public void setNodeTraceSetting (String nodeTraceSetting) {
_nodeTraceSetting = nodeTraceSetting;
}
}
B.2.2 TransformNode
Source code
/*
* Licensed Materials - Property of IBM
* 5648-C63
* (C) Copyright IBM Corp. 1999, 2001
*/
package com.ibm.samples;
import com.ibm.broker.plugin.*;
/**
* Sample plugin node.
* This node alters the content of the incoming message before passing it to
* its output terminal.
* A minimal test message for this node would be:
* <data><action>add</action></data>
*/
public class TransformNode extends MbNode implements MbNodeInterface {
String _nodeTraceSetting;
/**
* Transform node constructor.
* This is where input and output terminal are created.
*/
public TransformNode () throws MbException {
createInputTerminal ("in");
createOutputTerminal ("out");
createOutputTerminal ("failure");
}
/**
* This static method is called by the framework to identify this node.
elementValue);
// Add an attribute to this new tag
tag.createElementAsFirstChild (MbElement.TYPE_NAME_VALUE,
"NewValue",
switchElement.getValue ());
MbOutputTerminal out = getOutputTerminal ("out");
// Now propagate the message assembly.
// If the terminal is not attached, an exception will be thrown. The user
// can choose to handle this exception, but it is not necessary since
// the framework will catch it and propagate the message to the failure
// terminal or, if it is not attached, rethrow the exception back upstream.
out.propagate (newAssembly);
}
public String toString () {
return getName ();
}
/* Attributes are defined for a node by supplying get/set methods.
* The following two methods define an attribute 'nodeTraceSetting'.
* The capitalisation follows the usual JavaBean property convention.
*/
public String getNodeTraceSetting () {
return _nodeTraceSetting;
}
public void setNodeTraceSetting (String nodeTraceSetting) {
_nodeTraceSetting = nodeTraceSetting;
}
}
(where <install_dir> is the drive and the directory where the WebSphere
MQ Integrator installation is located)
if the setting is not already in your classpath.
2. Write the programs shown above in a text editor, or copy them from the Web site.
Appendix C.
Getting started
During the installation, except when noted otherwise, you have to be logged in as
root.
These are the basic steps that need to be completed:
1. Configure Solaris and apply the recommended fixes.
2. Change the parameters of the operating system, especially semmsl and
semmns. Otherwise, Oracle will not have enough processes and memory
available and will not let you create databases.
3. Install Oracle.
4. Create a database for the broker. You can use any name for the SID (the user
that owns the database), but preferably limit it to four characters or fewer
(an Oracle recommendation).
5. Prepare the database for an ODBC connection to be used by the WebSphere
MQ Integrator broker.
We allowed for this space when we installed Solaris 7 by allocating space to the
/opt file system.
3. Add or modify the Solaris kernel configuration parameters. In our case, for
1024 MB of physical memory, the kernel parameters are:
set msgsys:msginfo_msgmax=65535
set msgsys:msginfo_msgmnb=65535
set msgsys:msginfo_msgmap=258
set msgsys:msginfo_msgmni=256
set msgsys:msginfo_msgssz=16
set msgsys:msginfo_msgtql=1024
set msgsys:msginfo_msgseg=32768
set semsys:seminfo_semmni=1024
set semsys:seminfo_semmap=1026
set semsys:seminfo_semmns=2048
set semsys:seminfo_semmnu=2048
set semsys:seminfo_semume=200
set semsys:seminfo_semopm=200
4. Also add these extra kernel configuration parameters needed for Oracle:
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set semsys:seminfo_semmsl=2048
set semsys:seminfo_semvmx=32767
(If you have trouble with this command, try issuing the command reboot.)
6. Create groups by typing the following:
# groupadd dba
# groupadd oinstall
where:
If you will be working on the Solaris machine, you can use :0.0 without the host
name.
Important: A SID is the owner of a database, a user inside the Oracle
environment; unlike DB2, the owners of the databases are not Solaris users.
Whatever you type here, we will create new SIDs for each database in
subsequent steps. It is necessary to provide a SID; we found that the
installation produces an error if one is not supplied at this stage.
3. Change the owner of the directories to user oracle and group dba by typing
the following command:
# chown -R oracle:dba /opt/oracle8
2. Insert the Oracle8i Enterprise Edition Release 2 (8.1.6) for Sun SPARC
Solaris CD-ROM on the database server.
3. To start the Oracle8i install, type the following:
# xhost +
access control disabled, clients can connect as any host
# su - oracle
$ /cdrom/oracle8i/runInstaller
6. When the UNIX Group Name window appears, enter the group oinstall
defined above, and then click Next.
7. When the Oracle Universal Installer window appears, you will be prompted to
run a script as user root in another Solaris Console window.
a. Start a Solaris Console window and log in as root.
b. Execute the script by typing the following:
# /tmp/OraInstall/orainstRoot.sh
8. Click the Oracle Universal Install window to bring it back to the foreground,
and then click Retry.
9. When the Available Products window appears, select Oracle8i Enterprise
Edition 8.1.6.0.0, and then click Next.
10. When the Installation Types window appears, select Typical or Standard,
and then click Next.
11. For the next windows, you can accept the defaults by clicking Next. When the
Create Database window appears, you can create a default database if you
want, but it is not necessary, because we are going to create the specific
database required in later steps. To avoid creating the default database,
select No and click Next.
12. When the Summary window appears, review all the information. You can
click Back to change any selection. When everything looks correct, click
Install.
The installation will begin. It cannot be left unattended because it requires
some scripts to be run as root during the process.
13.When the Setup Privileges window appears, a message will be displayed to
run the following script:
a. Start a Solaris Console window and log in as root.
b. Execute the script by typing:
# /opt/oracle8/u01/app/oracle/product/8.1.6/root.sh
c. You will be prompted to enter the full path to the local bin directory. We
entered /usr/bin. Press Enter.
d. Click the Setup Privileges window to bring it to the foreground, then click
OK.
14.When the End of Installation window appears, click Exit.
The Oracle8i Enterprise Edition server installation is now complete.
In this section, we will add Oracle8i required environment variables to the oracle
user .profile that we created by completing the following steps:
1. Start a Solaris Console window.
2. Log in as the oracle user.
# su - oracle
3. Add entries to the end of the oracle user .profile as seen in Example 10-14.
Example 10-14 oracle user .profile entries required for Oracle8i
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$CLASSPATH
CLASSPATH=$ORACLE_HOME/product/jlib:$CLASSPATH
export CLASSPATH
ORACLE_SID=oracle
ORAENV_ASK=NO
. /usr/bin/oraenv
4. The oraenv script sourced at the end of the .profile overrides the
ORACLE_HOME defined in the oracle user .profile. Modify /usr/bin/oraenv to
correct the problem. Find the following text and add a # at the beginning of
each line to make it a comment:
ORAHOME=`dbhome "$ORACLE_SID"`
case $? in
0)ORACLE_HOME=$ORAHOME ;;
*)echo $N "ORACLE_HOME = [$ORAHOME] ? $C"
read NEWHOME
case "$NEWHOME" in
"")ORACLE_HOME=$ORAHOME ;;
*)ORACLE_HOME=$NEWHOME ;;
esac ;;
esac
export ORACLE_HOME
Net8 configuration
This section describes the required configuration for Net8 after the Oracle8i
installation.
1. Start a Solaris Console window and log in as root.
2. Update the /etc/services file with the listener. The listener name and port were
defined during the Oracle8i installation (Net8).
We added the following line to our services file:
LISTENER 1521/tcp
where LISTENER is the name of your listener (in our case, the name is
LISTENER).
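The services entry can also be checked mechanically. A minimal sketch, run against a scratch copy of the file so it is safe on any system:

```shell
# Verify that a services-style file contains the listener entry from
# step 2 (service LISTENER on port 1521/tcp). A temp copy stands in
# for /etc/services so the check can run anywhere.
svc=$(mktemp)
printf 'LISTENER\t\t1521/tcp\n' > "$svc"
if grep -q '^LISTENER[[:space:]].*1521/tcp' "$svc"; then
  found=yes
else
  found=no
fi
echo "listener entry found: $found"
rm -f "$svc"
```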
5. You can start the listener with:
$ lsnrctl start LISTENER
15. When the Review the SYSTEM tablespace information window appears,
check the directories where the database data will be saved, and correct
them if necessary.
16. When the Review the redo log file parameter information window appears,
carefully review the paths and click Next.
17. When the Review the logging parameter information window appears, review
it carefully and click Next.
18. When the Review the SGA information window appears, check the number of
processes this database will use.
If you have followed our instructions and set the kernel parameters semmsl
and semmns, the default setting of 150 processes will work.
If you have not set high values for semmsl and semmns, decrease the
processes value. Decreasing this value will not prevent the creation of the
database, and the value can be changed later. If the value is too high for
your system, the database will not be created.
19. Click Next (more tuning will be done in later steps).
20. When the Review the directory path information window appears, accept the
defaults and click Next.
21. Select Create database now, and click Finish.
22. When the Oracle Database Configuration Assistant alert appears, asking if
you want to proceed, click Yes.
23. The database creation progress indicator should be visible. This process
takes approximately 20 minutes.
If the username/password combination does not work, use the default value
that was displayed during database creation. The Oracle user system has a
default password of manager and has access to all databases. Later we will
create users for each database.
Database tuning
1. Start a Solaris Console window and log in as user oracle.
# su - oracle
2. Update the following parameters in the database initialization parameter file:
open_cursors = 200
db_block_buffers = 2000
shared_pool_size = 20000000
processes = 100
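As a sketch of the edit, the tuning values can be appended to a scratch copy of the parameter file. The real file name depends on your SID (init<SID>.ora under $ORACLE_HOME/dbs is common, but treat that path as an assumption):

```shell
# Append the tuning values to a scratch file standing in for the
# initialization parameter file; the real target is installation-specific.
init=$(mktemp)
cat >> "$init" <<'EOF'
open_cursors = 200
db_block_buffers = 2000
shared_pool_size = 20000000
processes = 100
EOF
count=$(grep -c '=' "$init")
echo "$count parameters written"
rm -f "$init"
```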
3. Stop and start the Oracle8i Server for the database tuning changes to take
effect:
# su - oracle
$ dbshut
$ dbstart
Note: The Oracle8i dbshut and dbstart scripts will stop and start all
databases. This behavior is desired in this example, but may not be
appropriate in other situations.
Using svrmgrl is another option for stopping and starting the databases.
4. If your Oracle8i Server database instances do not start, check the tuning
parameters entered for errors. Changing the tuning values beyond the
physical memory available on the system will result in the database instance
not starting.
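A rough way to relate the tuning values to physical memory is to estimate the buffer cache and shared pool they imply. The 8 KB block size below is an assumption; check db_block_size for your database:

```shell
# Back-of-the-envelope SGA estimate from the tuning values above.
# db_block_size=8192 is an assumed value, not read from the database.
db_block_buffers=2000
db_block_size=8192
shared_pool_size=20000000
buffer_cache=$((db_block_buffers * db_block_size))
approx_sga=$((buffer_cache + shared_pool_size))
echo "approx SGA: $approx_sga bytes"
```

If the estimate approaches the machine's physical memory, reduce db_block_buffers or shared_pool_size before restarting the instance.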
a. Edit the oratab file. Each entry has the form
sid:oracle_home_directory:Y|N
where Y or N specifies whether you want the dbstart and dbshut scripts to
start up and shut down the database. Find the entries for all the databases
that you want to start up; they are identified by the SID in the first field.
Change the last field for each to Y to enable the database to start when the
dbstart command is run.
For example:
oracle:/opt/oracle8/u01/app/oracle/product/8.1.6:Y
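Flipping the last field can also be scripted; a sketch against a temp copy of oratab (the real file location varies by platform):

```shell
# Change the autostart flag of an oratab entry from N to Y.
# Works on a temp copy; point it at your real oratab to use it.
tab=$(mktemp)
echo 'oracle:/opt/oracle8/u01/app/oracle/product/8.1.6:N' > "$tab"
sed 's/:N$/:Y/' "$tab" > "$tab.new" && mv "$tab.new" "$tab"
entry=$(cat "$tab")
echo "$entry"
rm -f "$tab"
```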
b. Create a file called dbora. Example 10-15 provides a sample dbora file.
We changed the ORA_HOME to reflect the directory specified during the
Oracle8i server installation.
Example 10-15 Sample dbora
#!/bin/sh
# Set ORA_HOME to be equivalent to the ORACLE_HOME
# from which you wish to execute dbstart and
# dbshut
# set ORA_OWNER to the user id of the owner of the
# Oracle database in ORA_HOME
ORA_HOME=/opt/oracle8/u01/app/oracle/product/8.1.6
ORA_OWNER=oracle
case "$1" in
start)
# Start the Oracle databases:
# The following command assumes that the oracle login will not prompt the
# user for any values
su - $ORA_OWNER -c $ORA_HOME/bin/dbstart &
;;
stop)
# Stop the Oracle databases:
su - $ORA_OWNER -c $ORA_HOME/bin/dbshut &
;;
esac
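For dbora to run at boot and shutdown, it is typically installed under /etc/init.d and linked into the Solaris run-control directories. The link names and run levels below are assumptions, and the sketch uses a temp root so it can run without privileges:

```shell
# Install a dbora stub and create run-control links, demonstrated
# against a temp directory tree instead of the real /etc.
root=$(mktemp -d)
mkdir -p "$root/etc/init.d" "$root/etc/rc0.d" "$root/etc/rc2.d"
printf '#!/bin/sh\n' > "$root/etc/init.d/dbora"
chmod 755 "$root/etc/init.d/dbora"
ln -s ../init.d/dbora "$root/etc/rc2.d/S99dbora"   # start at run level 2
ln -s ../init.d/dbora "$root/etc/rc0.d/K10dbora"   # stop at shutdown
start_link=$(readlink "$root/etc/rc2.d/S99dbora")
echo "rc2 start link -> $start_link"
rm -rf "$root"
```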
Appendix D.
Additional material
This redbook refers to additional material that can be downloaded from the
Internet as described below.
Select the Additional materials and open the directory that corresponds with
the redbook form number, SG24-6509.
CM202
CM210
Description
Contains exported message flows and message sets
used in an MQSeries Integrator V2.0.1 environment. A
sample input file for the utility mqsiput is also included.
Contains exported message flows and message sets
used in an MQSeries Integrator V2.0.2 environment.
Sample input files for the utility mqsiput are also included.
Contains exported message flows and message sets
used in a WebSphere MQ Integrator V2.1 environment.
Sample input files for the utility mqsiput are also included.
These files should be used in an environment that has the required level of
software. The level of MQSeries Integrator and WebSphere MQ Integrator that
we have used to create these files is documented in Appendix A,
Hardware/software configuration on page 411.
TAGGED_TEST.mrp: Exported message set.
DLM_TEST.mrp: Exported message set.
TAG_TO_XML.xml: Exported message flow.
CSV_TO_XML.xml: Exported message flow.
CONTACTS.mrp: Exported message set.
CONTACTS_flows.zip: Zip file containing five exported message flows.
Example1.xml: Sample XML message to test the message flows.
Example2.xml: Sample XML message to test the message flows.
Example.dtd: Document Type Definition for the sample messages.
Description
Sample SWIFT message.
Sample message flows for SWIFT messages.
Collection of rules for import in a New Era Of Networks
environment.
formatsfromNEONB
formatsfromNEONA
Description
Message flow that uses the Java input node and plug-in
node.
Source code for the SwitchNode plug-in.
Compiled code for the SwitchNode plug-in.
All of the source code for the SwitchNode input node.
Compiled code for the SocketNode input node.
Source code for the TransformNode plug-in.
Description
User trace output that shows the aggregation of
messages.
Exported message flow FAN_OUT.
Exported message flow FAN_IN.
Exported message flow PROC_A.
Exported message flow PROC_B.
Related publications
The publications listed in this section are considered particularly suitable for a
more detailed discussion of the topics covered in this redbook.
IBM Redbooks
For information on ordering these publications, see How to get IBM Redbooks
on page 450.
MQSeries Publish/Subscribe Applications, SG24-6282
Intra-Enterprise Business Process Management, SG24-6173
WebSphere MQ Integrator for z/OS V2.1 Implementation Guide, SG24-6528
Other resources
These publications are also relevant as further information sources:
WebSphere MQ Integrator Introduction and Planning, GC34-5599
WebSphere MQ Integrator Using the Control Center, GC34-5602
WebSphere MQ Integrator Programming Guide, SC34-5603
WebSphere MQ Integrator Administration Guide, SC34-5792
WebSphere MQ Integrator ESQL Reference, SC34-5923
WebSphere MQ Integrator Problem Determination Guide, GC34-5920
WebSphere MQ Integrator Working with Messages, SC34-6039
WebSphere MQ Integrator Messages, GC34-5601
Referenced Web sites
These Web sites are also relevant as further information sources:
W3C consortium
http://www.w3.org
Java documentation
http://docs.sun.com
Glossary
Access Control List (ACL). The list of
principals that have explicit permission (to
publish, to subscribe to, and to request
persistent delivery of a publication message)
against a topic in the topic tree. The ACLs
define the implementation of topic-based
security.
Abbreviations and acronyms
AMI  Application Messaging Interface
API  Application Programming Interface
BLOB  Binary Large Object
BOD  Business Object Document
CSV  Comma Separated Values
CWF  Custom Wire Format
DB2  Database 2
DTD  Document Type Definition
EAI  Enterprise Application Integration
EDI  Electronic Data Interchange
ESQL  Extended SQL
GUI  Graphical User Interface
HTML  Hypertext Markup Language
HTTP  Hypertext Transfer Protocol
IBM  International Business Machines Corporation
IDE  Integrated Development Environment
ITSO  International Technical Support Organization
JAR  Java Archive
JDBC  Java Database Connectivity
JDK  Java Development Kit
JMS  Java Message Service
JRE  Java Runtime Environment
lil  loadable implementation library
MMC  Microsoft Management Console
MQI  Message Queue Interface
MRM  Message Repository Manager
NEON  New Era of Networks
OAG  Open Applications Group
ODBC  Open Database Connectivity
PCDATA  Parsed Character Data
RRS  Resource Recovery Services
SQL  Structured Query Language
SWIFT  Society for Worldwide Interbank Financial Telecommunication
TDWF  Tagged/Delimited Wire Format
UOW  Unit of Work
UR  Unit of Recovery
URI  Uniform Resource Identifier
URL  Uniform Resource Locator
XML  Extensible Markup Language
Index
A
add
a breakpoint 200
a delimiter 168
a plug-in node to Control Center 360
broker to topology 42, 59, 74, 84, 99
element to workspace 214, 334
elements to a compound type 174
message set to workspace 100
NEON message set to workspace 334
new group
AIX 67
Solaris 134
new user ID
AIX 67
Oracle 136
Solaris 134
Windows 145
physical format 168, 208
after import 118
during import 100, 124
types to workspace 212, 214
aggregation 380
AIX administration 67, 112
application communication protocol
MQSeries Everyplace 3
point to point 3
publish/subscribe 3
request/reply 380
SCADA 3
application group
specifying in MQInput node 313
versus message set 313
application isolation 7
applying maintenance 89, 107
SQL Server 147
assignment
message flow 9, 43, 85, 102, 184, 366
message set 43, 85, 102, 184
role of assigner 11
security 39
automate
channel start-up
B
broker
See message broker
build time 18
C
channel
create receiver
AIX 71
Solaris 138
Windows 55, 161
create sender
AIX 71
Solaris 138
Windows 53, 160
for brokers 7, 57, 138
for Configuration Manager 58
for Control Center users 7
triggering
AIX 71
Solaris 138
Windows 52, 158
Check node 398
check-in process 177
classpath for plug-in nodes 359
coexistence of multiple versions 87, 107
collective 7, 10
command
add group
Solaris 134
add user
Solaris 134
change password
Solaris 134
consistency check 251, 265, 284
create broker
AIX 72
Solaris 138
Windows 47, 50, 66, 82, 98, 105
message broker
AIX 72
dummy 117
Solaris 138
Windows 47, 50, 66, 80, 98, 105, 155
message element 170
message flow 179
message set 165
new user ID
AIX 67
Oracle 136
Solaris 134
Windows 154
plug-in node in Control Center 360
queue 185
queue manager
AIX 70
Solaris 133
Windows 45, 62, 78, 155
receiver channel
AIX 71
Solaris 138
Windows 55, 161
sender channel
AIX 71
Solaris 138
Windows 53, 159
transmission queue
AIX 70
Solaris 138
Windows 50, 158
user group
AIX 67
Solaris 134
create a JAR file 359
D
database
authorities
for broker and DB2 47, 63, 68, 79, 95
for broker and Oracle 136, 140
for broker and SQL Server 148
for Configuration Manager 63, 94
broker and NNSY 242
client configuration 274
create
for NNSY support 269
SQL Server 148
NNSY
access for broker 262, 286
access for Configuration Manager 262, 288
database configuration
DB2 68
Oracle 135
database connection file
add development entries 287
add runtime entries 285
create
V2.0.x 243
V2.1.0 245, 272
entries for export and import 253
entry for broker 343
entry for Configuration Manager 262, 288, 334
name
V1.x 253
V2.0.x 244
V2.1.0 253
updating initial file 258
database listener
start 135
status 135
Database node 333
database schema 243
DB2 command window 117, 252
debug
Compute node 198
message flow 200
define
attributes of a node
in Control Center 362
in source code 356
queue manager 40
terminals of a node
in Control Center 361
in source code 356, 372
definition model 18
delete
broker
during uninstall 124-125
broker from topology 109
Configuration Manager 117, 123
message set from broker 127
delimiter 168, 177, 192
deployment
debug message flow 200
deleted broker 109
errors in a multi-version environment 86, 106
E
element
See message element
environment variable
after uninstall 256
ICU_DATA 261, 287-288
MQSI_PARAMETERS_FILE 244
NEON_ROOT 244
NN_CONFIG_FILE_PATH 243, 245, 253, 261,
282, 286, 288
NNSY_CATALOGUES 245, 261, 286, 288
NNSY_ROOT 245, 261, 286, 288
ODBCINI
AIX 68
Solaris 134
ORACLE_HOME 140
PATH 262, 288
error
invalid input XML 227
wrong order of elements 228
ESQL
code palette 17, 25
create statement 234
features 24
for XML attributes 233
loops and branches 16
move statement 234
reference variables 233
using in a Compute node 181
ESQL attribute names versus XML attribute names
223
execution group
See message broker
export
DTD 209
message flow 13
F
features of
broker 7
Configuration Manager 5
ESQL 24
Filter node 388
flow
See message flow
FlowOrder node 388
G
generate
DTD 209
generic XML 232
multiple messages 231
XML header 233
grant database authorities
for broker and DB2 47, 79, 94-95
AIX 68
for broker and Oracle 136, 140
for broker and SQL Server 148
for Configuration Manager and DB2 63
NNSYGRP 258, 268
I
import
C and COBOL 211
Cobol copybook 74
DTD 206, 208
message flow 13, 100, 203, 236
error 101
message set 21, 100, 117, 124, 203, 236
physical format 203
NNSY format 248, 259, 283
NNSY rule 259, 284
inbound MQSeries communication
AIX 71
Solaris 133
input format
versus message type 313
input node 352
definition of an input node class 356
install after uninstall 117, 123
invalid order of elements 227
J
Java Development Kit 354
Java interface for nodes 353
L
Label node 327
language versions of log messages 58
log file
deployment 185, 229
queue manager 46, 62, 78, 133
renamed NNSY log files 249
syslog
Solaris 137
writing log records from a plug-in 357
writing to the log file 354
log messages 58
logging requirements 46, 62, 78, 133
loop
in a message flow 17, 388
in ESQL 16, 230
M
message
create 176
examples of tagged and delimited 239
logical model 19
sample SWIFT message 291
wire format 21
message broker 2
access to NNSY database 262, 286, 343
add to topology 42, 59, 74, 84, 99
assign message flow 9, 43, 85, 102, 184, 366
assign message set 43, 85, 102, 184
collective 7, 10
communication 6, 50, 71
create 80
AIX 72
Solaris 138
Windows 47, 50, 66, 98, 105, 155
create dummy 117
database access
AIX 68
database authorizations 63, 68, 79, 136,
148-149
database connectivity 242
database security 47
delete 29
delete during uninstall 109, 124
stop 367
sub-flow 14, 17
transactional 23
message length 50
message processing model 290
message processing node
See node
message repository
drop tables 117, 124
message routing
in NEON domain 320
in NEONMSG domain 327
message set
add to workspace 100
assignment 43, 85, 102, 184
create 165
deployment 18
export 116, 123
identifier 166
import 117, 124, 203, 236
name 165
NEON message set 334
properties 166
repository 6, 29
security 39
unassign 127
UUID 32
UUID after import 127
versus application group 313
message transformation 181
from NEONMSG to XML 300, 306
NEON domain 292
NEONMSG domain 296
message tree 383
copy 196
copying 181
copying individual elements 181
examine contents 201
message type
versus input format 313
messaging standard 168, 206
migrate
broker database 117
configuration repository 117, 124
migration steps
for NNSY 250
MQInput node 196, 218
messages in NEON domain 293, 313
messages in NEONMSG domain 298
N
NEON Application Group 310
NEON message set 330
NEONFormatter node
definition 246
example 294, 297, 308, 314, 318
NEONMap node 298
definition 247
example 297, 304, 319
NEONMap versus NEONTransform nodes 296
NEONRules node
definition 246
example 308, 314, 322
subscription 311
NEONRulesEvaluation node
definition 247
example 327, 343
NEONTransform node
definition 247
example 306
network requirements 46
network services 72
New Era of Networks
using an MQRFH or MQRFH2 header 345
NNFie
session definition
V2.0.1 253, 266
V2.1.0 281
NNRie
session definition
V2.0.1 253, 266
V2.1.0 281
NNSY format
check consistency 252, 284
consistency check 265
export 254, 267
import 259, 283
reload 249
NNSY message properties file 289
NNSY rules
check consistency 252
consistency check 266, 284
export 254, 267
import 259, 284
reload 249
sample conditions 324
sample subscriptions 325
NNSY version identification 255
node
add plug-in to Control Center 360
AggregateControl 381
AggregateReply 381
AggregateRequest 382
Check 398
Compute 180, 182, 196, 218, 222
create JAR file 359
Database 333
define attributes
in Control Center 362
in source code 356
define terminals
in Control Center 361
in source code 356
definition of a plug-in node class 372
duplicate name 101
ESQL 14
FlowOrder 388
identifier 360, 374
input nodes 352
Label 327
label of node 360, 374
MQInput 179, 196, 218
messages in NEON domain 293, 313
messages in NEONMSG domain 298
MQReply 396
NEONFormatter
definition 246
example 294, 308, 314, 318
NEONMap
definition 247
example 304
NEONMap versus NEONTransform 296
NEONRules
definition 246
example 308, 314, 322
NEONRulesEvaluation
definition 247
example 327, 343
NEONTransform 247
example 306
output node 352
plug-in node 352
primitive nodes 13
property 13
ResetContentDescriptor 317
Trace 223, 387
node development
software requirements 354
node interface 353
O
ODBC
configuration 63, 135, 150
AIX 68
Solaris 136
environment variable 68
ini file 136
operational security 39
order
of assignments in ESQL 219, 227
of attributes 218
of elements in output message 197
of message elements 178
of tags of input message 197
of XML elements 218
output
of trace 225
output message
generate 231
generate multiple 231
output node 352
P
parameter file for rules engine 244
equivalent in V2.1.0 348
parser 18, 167
custom-written 14
NEON 246
NEON and NEONMSG 289
NEON and NEONMSG example 317
NEONMSG 247
parser, built-in 2
partially parsing 173
physical format 18, 184, 192, 208
add 168
after import 118
during import 100, 124
changing 181
identifier 168-169, 179
name 168-169
properties 169-170
specifying during import 203
ping channel 72
ping database listener 135
point to point 3
port number 46, 72, 79
properties
MQInput node 179
publish/subscribe 3
criteria 10
security 39
Q
queue manager
create
AIX 70
Solaris 133
Windows 45, 62, 78, 155
definition requirements 45
logging requirements 46, 62, 78, 133
requirements for WMQI components 40
start 70
R
Redbooks Web site 450
Contact us xii
refresh
rules and formats 249
TCP/IP configuration
AIX 72
registry security 39, 46, 62, 79, 94
repository 56
request/reply 380
required maintenance 87, 107
ResetContentDescriptor node 317
resource
saved states 13
resource control 13
resource coordinator 139
resource manager 23, 139
roles 1112
rules engine 248
parameter file 244
simulation 343
run-time 18
S
SCADA 3
script for defining MQSeries objects 57
security 4, 6, 10, 26
MQSeries 39
roles based 12, 39
Windows NT 12
select
individual fields 181
software installation
AIX 256
HP-UX 256
Solaris 256
Windows 256
T
TCP/IP port number 46, 72, 79
topology
add broker 42, 59, 73, 84, 99
delete broker 109
deploy 42, 60, 74, 84, 99
full versus delta deploy 109
trace
activating 224
analyzing output 226
example output 225
format log 225
format output 225
read log 225
to a file 224
Trace node 224, 387
transaction 23
transaction coordination 23, 139
transaction mode 223, 384
triggering
AIX 71
Solaris 138
Windows 52, 158
type
link to a physical format 177
type composition 191
type content 173, 191
U
uninstall
AIX 112, 256, 268
delete broker 109
HP-UX 256
Solaris 256
Windows 117, 123, 256, 268
user group 12, 39, 67
for NNSY support 258
user ID authorization 4, 6
user profile 69, 134, 140
user roles 11
user trace file 224
UUID
ESQL function 24
message set 32
V
validation of imported MRM definitions 211
variables in ESQL 233
version of NNSY utilities 255
W
WebSphere MQ family overview 3
WebSphere MQ Integrator
components 4
extending functionality 2, 14
folder name 29
installation 134, 153
NNSY support 257
moving components 30
positioning 3
Windows administration 145, 154
Windows NT registry 39, 46
wire formats 21
See also physical formats
workspace
add message set 100
import message flow 100
X
XML
attributes 206, 216
MRM versus XML 233
change XML name 215216
import a DTD 206
message 50
names 216
physical format 208
processing blanks and carriage returns 384
Back cover
WebSphere MQ Integrator
Deployment and
Migration
Migration
possibilities for
brokers and
NEON-based
message flows
Multi-broker and
multi-platform
configuration
scenarios
New features,
including enhanced
MRM and Java nodes
INTERNATIONAL
TECHNICAL
SUPPORT
ORGANIZATION
BUILDING TECHNICAL
INFORMATION BASED ON
PRACTICAL EXPERIENCE
IBM Redbooks are developed by
the IBM International Technical
Support Organization. Experts
from IBM, Customers and
Partners from around the world
create timely technical
information based on realistic
scenarios. Specific
recommendations are provided
to help you implement IT
solutions more effectively in
your environment.
ISBN 0738424293