http://www.redbooks.ibm.com
This book was printed at 240 dpi (dots per inch). The final production redbook with the RED cover will
be printed at 1200 dpi and will provide superior graphics resolution. Please see “How to Get ITSO
Redbooks” at the back of this book for ordering instructions.
SG24-4693-01
International Technical Support Organization
March 1998
Take Note!
Before using this information and the product it supports, be sure to read the general information in
Appendix D, “Special Notices” on page 435.
This edition applies to DB2 for MVS/ESA Version 4.1, DB2 Server for OS/390 Version 5, DB2 for OS/2 Version 2.1.1,
DDCS for OS/2 V2.3.1, DB2 for AIX Version 2.1.1, DDCS for AIX Version 2.3.1, DB2 for OS/2 Version 5, DB2 for AIX
Version 5, DB2 for Windows NT Version 5, DB2 for Windows 95 Version 5, and other current versions and releases
of IBM products. Consult the latest edition of the applicable IBM bibliography for current information on products.
When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in any
way it believes appropriate without incurring any obligation to you.
Copyright International Business Machines Corporation 1996, 1998. All rights reserved.
Note to U.S. Government Users — Documentation related to restricted rights — Use, duplication or disclosure is
subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp.
Contents
Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
The Team That Wrote This Redbook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Comments Welcome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Chapter 2. DB2 for MVS/ESA, DB2 for OS/390 and Stored Procedures . . . . . . . . . . . . . . . . . . 7
2.1 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Installation Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2.1 DB2-Established Stored Procedures Address Space . . . . . . . . . . . . . . . . . . . . . . . 10
2.2.2 RACF Considerations for DB2-Established Address Space . . . . . . . . . . . . . . . . . . . 11
2.2.3 Serializing Access to Non-DB2 Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.4 Updating Installation Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3 Administration Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.3.1 Defining Stored Procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.3.2 DB2 Commands Related to Stored Procedures . . . . . . . . . . . . . . . . . . . . . . . . . . 19
9.2.4 DESCRIBE CURSOR Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
9.2.5 Summary of Coding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
9.2.6 Example of Client Program Processing a Single Result Set . . . . . . . . . . . . . . . . . . . 164
9.2.7 Example of Client Program Processing Multiple Result Sets . . . . . . . . . . . . . . . . . . 166
9.2.8 Reasons Why Cursors May Not Be Returned to Client . . . . . . . . . . . . . . . . . . . . . . 170
9.2.9 Example of Client Program Using the DESCRIBE CURSOR SQL Statement . . . . . . . . . 170
9.2.10 SQLCODEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
9.3 Blocking Rows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
9.3.1 The TR0C2CC2 Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
9.3.2 The TR0C2S Stored Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
14.5 Embedded SQL and CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
14.6 The KEEPDARI Indicator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
14.7 Keeping Stored Procedures in Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
14.8 Performance Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
How Customers Can Get ITSO Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
IBM Redbook Order Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
Figures
161. Global Monitor List Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
162. Debug Tool Command Log Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
163. Selecting the Stored Procedures Catalog Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
164. Stored Procedure Catalog Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
165. IBM VAB Window: Salary Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
166. The VAB Code Editor Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
167. Stored Procedure Definition Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
168. Local Call Using a Subprocedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
169. Remote Call Using a Stored Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
170. Setting Breakpoints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
171. The Inspector Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
172. Using the Immediate Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
173. The qtText2 QuickTest Dialog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
174. The qtArray QuickTest Dialog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
175. The QuickTest SmartGuide . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
176. Client Program Naming Convention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
177. Stored Procedure Naming Convention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
178. Sample JCL sampbld.jcl to Rebuild Sample Data Sets . . . . . . . . . . . . . . . . . . . . . . . . 365
179. Configuration with DB2 for MVS/ESA and DDCS for OS/2 . . . . . . . . . . . . . . . . . . . . . . 378
180. Configuration with DB2 for MVS/ESA and DDCS for AIX . . . . . . . . . . . . . . . . . . . . . . . 379
181. CPU Utilization in Relation to Throughput: DDCS for OS/2 . . . . . . . . . . . . . . . . . . . . . 385
182. Configuration with 3174: DDCS for OS/2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
183. Aggregate Response Time: DB2 for MVS/ESA and DDCS for OS/2 . . . . . . . . . . . . . . . . 390
184. Response Time for Transaction Tx1: DDCS for OS/2 . . . . . . . . . . . . . . . . . . . . . . . . . 391
185. Response Time for Transaction Tx2: DDCS for OS/2 . . . . . . . . . . . . . . . . . . . . . . . . . 392
186. Response Time for Transaction Tx3: DDCS for OS/2 . . . . . . . . . . . . . . . . . . . . . . . . . 393
187. Response Time for Transaction Tx4: DDCS for OS/2 . . . . . . . . . . . . . . . . . . . . . . . . . 394
188. Response Time for Transaction Tx5: DDCS for OS/2 . . . . . . . . . . . . . . . . . . . . . . . . . 395
189. Response Time for Transaction Tx6: DDCS for OS/2 . . . . . . . . . . . . . . . . . . . . . . . . . 396
190. Response Time for Transaction Tx7: DDCS for OS/2 . . . . . . . . . . . . . . . . . . . . . . . . . 397
191. CPU Utilization in Relation to Throughput: DDCS for AIX . . . . . . . . . . . . . . . . . . . . . . 400
192. Configuration with 3174: DDCS for AIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
193. Aggregate Response Time: DDCS for AIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
194. Response Time for Transaction Tx1: DDCS for AIX . . . . . . . . . . . . . . . . . . . . . . . . . . 405
195. Response Time for Transaction Tx2: DDCS for AIX . . . . . . . . . . . . . . . . . . . . . . . . . . 406
196. Response Time for Transaction Tx3: DDCS for AIX . . . . . . . . . . . . . . . . . . . . . . . . . . 407
197. Response Time for Transaction Tx4: DDCS for AIX . . . . . . . . . . . . . . . . . . . . . . . . . . 408
198. Response Time for Transaction Tx5: DDCS for AIX . . . . . . . . . . . . . . . . . . . . . . . . . . 409
199. Response Time for Transaction Tx6: DDCS for AIX . . . . . . . . . . . . . . . . . . . . . . . . . . 410
200. Response Time for Transaction Tx7: DDCS for AIX . . . . . . . . . . . . . . . . . . . . . . . . . . 411
201. Configuration with DB2 Connect for NT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
202. CPU Utilization in Relation to Throughput: DB2 Connect for NT . . . . . . . . . . . . . . . . . . 421
203. Response Time for Transaction FEWROWS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
204. Response Time for Transaction MANYROWS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
205. Response Time for Transaction Qry1_2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
206. Response Time for Transaction Qry1_9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
207. Response Time for Transaction Qry1_17 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
208. Response Time Comparison (Qry1_2, Qry1_9, Qry1_17) . . . . . . . . . . . . . . . . . . . . . . . 428
This redbook is the result of a residency project conducted at the IBM International Technical Support
Organization (ITSO) San Jose Center. The project tested and implemented stored procedures
executing in DB2 for MVS/ESA, DB2 for OS/390, DB2 for AIX, DB2 for OS/2, and DB2 for Windows NT
servers. Information is provided to guide the reader through the steps required to implement stored
procedures. The material is based on the actual tests performed and experience gained during the
project.
This redbook is written for IBM technical professionals and customer technical personnel responsible
for implementing stored procedures locally or in a client/server environment. A background in
programming, Distributed Relational Database Architecture (DRDA) connectivity among IBM relational
database products, and IBM relational database products and their associated operating systems is
assumed.
This redbook was produced by a team of specialists from around the world working at the International
Technical Support Organization San Jose Center.
Silvio Podcameni, a Data Management Specialist for DB2 and DRDA, has been on assignment at the
International Technical Support Organization San Jose Center since August 1994. During his
assignment, he has led projects to produce DRDA and DB2 for MVS/ESA redbooks on such topics as
security, stored procedures, connectivity, and data sharing. Silvio has also conducted workshops
worldwide on DB2 for MVS/ESA data sharing recovery, client/server features of DB2 for OS/390 Server
Version 5, and connectivity. Before his assignment he worked for the Open Systems Center in Brazil
as a DB2 and DRDA specialist.
Mark Leung is a Systems Specialist working for IBM Global Services Australia in Sydney. He joined
IBM Australia in 1989, first working on application development and maintenance, and subsequently as
a DB2 database administrator. During 1996 and 1997, Mark worked at the IBM Toronto Laboratory on
system test for DB2 Universal Database. He is also the co-author of the redbook Visual PL/I for OS/2.
Michael J. Fischer is a DB2 Technical Specialist working for the Santa Teresa Laboratory in San Jose,
California. He joined IBM in 1996 and since then has been responsible for supporting DB2 Data Sharing
customers in the database and application development areas in Minneapolis, Minnesota. Michael
has aided the implementation of several DB2 Data Sharing systems for customers in Minnesota.
Gavin Letham is a Staff Software Analyst, working at the IBM Toronto Laboratory in Toronto, Ontario,
Canada. He joined IBM in 1993 working in Customer Support for DB2. He has spent the last year
testing DRDA connectivity for DB2 UDB from various workstation platforms.
Silvio Podcameni
International Technical Support Organization, San Jose Center
Boudewijn de Bliek
IBM Belgium
The technical report presented in Appendix B, “Performance Benchmark and Capacity Planning” on
page 377 is the result of a project conducted at the IBM Santa Teresa Laboratory. This report was
written by:
• Robert Gilles
• Todd Munk
• Hugh Smith
The authors of the technical report thank the following people for the invaluable advice and guidance
provided in the production of the report: Jerry Heglar, IBM Santa Teresa Laboratory; Akira Shibamiya,
IBM Santa Teresa Laboratory; Mark Ryan, IBM Application Business Systems; Serge Limoges, IBM
Toronto Laboratory; and Bill Wilkins, IBM Toronto Laboratory.
The technical report presented in Appendix C, “DB2 Connect Result Set Study” on page 415 is the
result of a project conducted at the IBM Santa Teresa Laboratory. This report was written by:
• Robert Gilles
• Todd Munk
The authors of the technical report thank the following people for the invaluable advice and guidance
provided in the production of the report: Jerry Heglar, IBM Santa Teresa Laboratory; Serge Limoges,
IBM Toronto Laboratory; Hugh Smith, IBM Santa Teresa Laboratory; and Peter Shum, IBM Toronto Laboratory.
Thanks to the following people for the invaluable advice and guidance provided in the production of
this document:
Robert Begg
Nuzio Ruffolo
Frankie Sun
John F. Betz
Thanks also to Maggie Cutler, Shirley Weinland Hentzell, and Gail Wojton for their editorial assistance.
Comments Welcome
We want our redbooks to be as helpful as possible. Please send us your comments about this or other
redbooks in one of the following ways:
• Fax the evaluation form found in “ITSO Redbook Evaluation” on page 449 to the fax number shown
on the form.
• Use the electronic evaluation form found on the Redbooks Web sites:
For Internet users http://www.redbooks.ibm.com
For IBM Intranet users http://w3.itso.ibm.com
• Send us a note at the following address:
redbook@vnet.ibm.com
Getting Started with DB2 Stored Procedures
Chapter 1. Stored Procedures Overview
In this chapter, we describe how stored procedures can be used for applications running in a
client/server environment and explain the advantages of using this technique.
Stored procedures are user-written structured query language (SQL) programs that are stored at the
DB2 server and can be invoked by client applications. A stored procedure can contain most
statements that an application program usually contains. Stored procedures can execute SQL
statements at the server as well as application logic for a specific function.
A stored procedure can be written in many different languages, such as COBOL, OO COBOL, C, C++,
PL/I, FORTRAN, Assembler, and REXX. The language in which stored procedures are written depends
on the platform where the DB2 server is installed.
Local client applications, remote Distributed Relational Database Architecture (DRDA), or remote data
services (private protocol) can invoke the stored procedure by issuing the SQL CALL statement. The
SQL CALL statement is part of the International Organization for Standardization/American National
Standards Institute (ISO/ANSI) proposal for SQL3, an open solution for invoking stored procedures
among database management system vendors that support the SQL ISO/ANSI standard.
The client program can pass parameters to the stored procedure and receive parameters from the
stored procedure.
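As an illustration (the procedure name, parameters, and host variables here are hypothetical, not taken from our test environment), a client invocation passing one input parameter and receiving one output parameter might look like this:

```sql
-- Hypothetical call: pass an employee number in, receive the salary back.
-- :SALIND is an indicator variable in case the returned value is null.
CALL EMPSAL (:EMPNO, :SALARY:SALIND)
```

In an embedded SQL program, the statement is coded with the usual delimiters, for example EXEC SQL CALL EMPSAL (:EMPNO, :SALARY:SALIND) END-EXEC in COBOL.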
The DRDA architecture allows SQL CALL statements to use static or dynamic SQL. The versions of the
products used during this project support the SQL CALL statement only as static SQL. Nevertheless,
parameters in the CALL statement, including the stored procedure name, can be supplied at execution
time. Thus, you can use the SQL CALL statement to dynamically invoke any procedure supported by
DB2.
The client program and the stored procedure do not have to be written in the same programming
language. For example, a C client program can invoke a COBOL stored procedure.
In previous releases of DRDA, the client system performed all application logic. The server was
responsible only for SQL processing on behalf of the client. In such an environment, all database
accesses must go across the network, resulting in poor performance in some cases. Figure 1 shows
an example of the processing for a client/server application without using stored procedures.
This is a relatively simple model, which makes the application program easy to design and implement.
Because all application code resides at the client, a single application programmer can take
responsibility for the entire application. However, there are some disadvantages to using this
approach.
Because the application logic runs only on the client workstations, additional network input/output (I/O)
operations are required for most SQL requests. These additional operations can result in poor
performance. This approach also requires the client program to have detailed knowledge of the
server's database design. Thus, every change in the database design at the server requires a
corresponding change in all client programs accessing the database. Also, because the programs run
at the client workstations, it is often complicated to manage and maintain the copies there.
Stored procedures enable you to encapsulate many of your application's SQL statements into a
program that is stored at the DB2 server. The client can invoke the stored procedure by using only one
SQL statement, thus reducing the network traffic to a single send and receive operation for a series of
SQL statements. It is also easier to manage and maintain programs that run at the server than it is to
manage and maintain many copies at the client machines.
Stored procedures enable you to split the application logic between the client and the server. You can
use this technique to prevent the client application from manipulating the contents of sensitive server
data. You can also use it to encapsulate business logic into programs at the server. Figure 2 on
page 3 shows an example of the processing for a client/server application with stored procedures.
The stored procedure can issue static or dynamic SQL statements. Data definition language (DDL),
most data manipulation language (DML), and data control language (DCL) statements can be coded in
a stored procedure.
Stored procedures also enable access to features that exist only on the database server. These
features include commands that run only on the server, software installed only on the server that can
be accessed by the stored procedure, and the computing resources of the server, such as memory and
disk space.
Because stored procedures are defined in DRDA, they also take advantage of DRDA features, such as
data transformation between platforms, database security and accounting, and two-phase commit
support.
The minimum software requirements for using the SQL CALL statement to invoke stored procedures
with DB2 servers are:
• MVS environment
− DB2 for MVS/ESA Version 4 Release 1
− IBM High Level Assembler/MVS Version 1 Release 1
− IBM SAA AD/Cycle C/370 Version 1 Release 2
− IBM C/C++ for MVS/ESA Version 3 Release 1
− IBM COBOL for MVS and VM Version 1 Release 1
− IBM PL/I for MVS and VM Version 1 Release 1
− IBM SAA AD/Cycle Language Environment/370 Version 1 Release 1
• AIX Environment
− DB2 for AIX Version 2
− IBM XL C Version 1 Release 2.1
− IBM C for AIX Version 3 Release 1
During this project, we tested many different scenarios with stored procedures in our environment at
the International Technical Support Organization (ITSO) San Jose Center.
We used three PS/2s, one RISC System/6000, and one ES/9000 mainframe. Table 1 on page 5 shows
the hardware and software versions used for the first edition of this book.
Table 2 shows the hardware and software versions used for this second edition.
Some machines were used as both a database client and a database server; others, such as the
Windows 95 client, were used only as a database client.
Our PS/2s and the RISC System/6000 were attached to a 16 Mbps local area network (LAN) at the ITSO
San Jose Center. The connections between these machines can be made through TCP/IP, APPC,
NetBIOS, or IPX. In our tests we used TCP/IP.
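As a sketch of the client-side TCP/IP setup (the node, host, and database names are hypothetical, and the exact syntax depends on the client product level), the workstation directories might be cataloged with DB2 command line processor commands such as:

```
CATALOG TCPIP NODE MVSNODE REMOTE mvshost.sanjose.ibm.com SERVER 446
CATALOG DCS DATABASE DB2A AS DB2A
CATALOG DATABASE DB2A AT NODE MVSNODE AUTHENTICATION DCS
```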
For details about connecting the different DRDA platforms using APPC, refer to the ITSO redbook
Distributed Relational Database Cross Platform Connectivity and Application; for TCP/IP, refer to
the ITSO redbook WOW! DRDA Supports TCP/IP: DB2 Server for OS/390 and DB2 Universal Database.
For this book we used sample programs that are provided with the different products and a number of
programs we developed ourselves. Most of the programs used during this project are on the diskette
shipped with this book. The diskette contains the SG244693.EXE file that, when executed, produces the
source for the samples. The SG244693.EXE file can also be downloaded from the Internet at:
ftp://www.redbooks.ibm.com/redbooks/sg244693
Refer to Appendix A, “Sample Programs” on page 347 for the content of the diskette.
Chapter 2. DB2 for MVS/ESA, DB2 for OS/390 and Stored Procedures
In this chapter, we describe the support for stored procedures introduced in DB2 for MVS/ESA Version
4.1. Some of the topics we focus on are installation considerations, administration tasks, and new DB2
commands related to the use of stored procedures.
For details on how to code and generate stored procedures for the MVS platform, please refer to
Chapter 6, “Coding Stored Procedures in DB2 on MVS” on page 85. Unless explicitly stated, in this
chapter all references to DB2 refer to DB2 for MVS/ESA (DB2 Version 4) and DB2 for OS/390 (DB2
Version 5).
2.1 Architecture
In this section, we describe the processing flow of the SQL CALL statement. Refer to Figure 4 on
page 8. The flow follows this sequence:
1. A thread must be created for each application that needs DB2 services. If the application is local,
the thread is created when the first SQL statement is executed. If the request comes from a
remote client, the thread is created when the client application issues the SQL CONNECT
statement. After the thread is created, SQL statements can be executed.
2. When a client application issues an SQL CALL statement, the stored procedure name and the I/O
parameters are passed to DB2.
3. When DB2 receives the SQL CALL statement, it searches in the SYSIBM.SYSPROCEDURES catalog
table for a row associated with the stored procedure name. From this table, DB2 obtains the load
module associated with the stored procedure and related information.
4. Stored procedures are executed in address spaces. For DB2 Version 4, only one address space is
available, called the DB2-established address space. For DB2 Version 5, in addition to the
DB2-established stored procedures address space, you can have several workload manager
(WLM)-established address spaces. For DB2-established or WLM-established address spaces, you
can specify the number of task control blocks (TCBs) available in the address space for stored
procedures. Each stored procedure is executed under one TCB. After searching the
SYSIBM.SYSPROCEDURES table, DB2 searches for an available TCB to be used by the stored
procedure and notifies the stored procedure address space to execute the stored procedure.
5. When DB2 notifies the stored procedures address space to execute a stored procedure, the thread
that was created for the client application is reused for the execution. This has the following
implications:
• CPU cost is low because DB2 does not create a new thread.
• Accounting is on behalf of the client application.
• For static SQL, the OWNER of the client program must have execute privilege on the stored
procedure package. For dynamic SQL issued by the stored procedure, security is checked
against the user of the client program, unless the DYNAMICRULES(BIND) option was specified
when binding the package for the stored procedure. No sign-on or connection processing is
required.
• Any processing done by the stored procedure is considered a logical continuation of the client
application's unit of work. Thus, locks acquired by the stored procedure are released when the
client application commits or rolls back.
6. The stored procedures address space uses the LE/370 product libraries to load and execute the
stored procedure. Through the SYSIBM.SYSPROCEDURES table, you can pass run-time information for
LE/370 when the stored procedure is executed.
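The DYNAMICRULES behavior mentioned in step 5 is selected when the package for the stored procedure is bound. As a sketch (the collection, member, and library names are hypothetical):

```
BIND PACKAGE(SPCOLL) MEMBER(PROC1PKG) -
     LIBRARY('DSN410.DBRMLIB.DATA') -
     DYNAMICRULES(BIND) VALIDATE(BIND) ISOLATION(CS)
```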
Although stored procedures are supported from DRDA remote clients, they are also supported locally.
If a local application issues the SQL CALL statement, the distributed data facility (DDF) is not involved
and need not be started.
To enable your DB2 subsystem to use stored procedures, you must specify some parameters during
the installation or migration process.
The DSNTINST CLIST includes a new panel, the stored procedures parameters panel (DSNTIPX; see
Figure 5 on page 9 for DB2 Version 4 and Figure 6 on page 10 for DB2 Version 5). The values on this panel are described below.
DSNTIPX INSTALL DB2 - STORED PROCEDURES PARAMETERS
===>
1 ACCEPT SQL CALL ===> YES Accept SQL CALL statements (stored
procedure requests)? YES or NO.
* 2 MVS/ESA PROC NAME ===> DB41SPAS Stored procedure JCL PROC name
3 NUMBER OF TCBS ===> 8 Number of concurrent TCBs (1-1000)
4 MAX ABEND COUNT ===> 0 Allowable ABENDs for a procedure (0-255)
5 TIMEOUT VALUE ===> 180 Seconds to wait before SQL CALL fails
5-1800 or NOLIMIT (no timeout occurs)
6 LE/370 RUNTIME ===>
5 TIMEOUT VALUE
The value specified is the number of seconds DB2 waits for a stored procedure to complete; when the
interval expires, the SQL CALL statement fails. Refer to 2.3.2.3, “DISPLAY
PROCEDURE Command” on page 21.
6 LE/370 RUNTIME
The value specified is for the LE/370 library name. It is placed in the JCL procedure
generated for the stored procedures address space.
After you have selected your options for the stored procedures parameters, the installation process
generates the procedure to start the stored procedures address space.
Figure 7 on page 11 shows an example of the JCL that starts the DB2-established stored procedures
address space generated by the installation process and later customized for our environment.
You can change this procedure to fit your installation's needs. As shown in 1, set the REGION size
to REGION=0 to obtain the largest possible amount of virtual storage below and above 16 MB. DB2
Version 4 stored procedures require that you have installed LE/370 Version 1 Release 1 or later. The
LE/370 run-time libraries data set 2 must be available for the stored procedures address space. You
must enter the LE/370 run-time data set name on the DSNTIPX installation panel. You can also include
additional load libraries that contain your stored procedures 3.
To access non-DB2 resources, you also may have to change this procedure to specify the required DD
JCL statements. 4 Also include DD statements for debugging such as CEEDUMP (from LE) and
SYSPRINT (used by C and PL/I) and files specified by the LE run-time option MSGFILE.
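For example, the DD statements added to the procedure might look like this (the data set name is hypothetical; the debugging DDs simply route output to SYSOUT):

```
//* Hypothetical DD for a non-DB2 resource accessed by a stored procedure
//CUSTFILE DD DISP=SHR,DSN=PROD.CUSTOMER.VSAM
//* Debugging output: LE dump and C/PL/I SYSPRINT
//CEEDUMP  DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
```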
However, be aware that there is no commit coordination between the stored procedure and any
recoverable resource you may be accessing, such as CICS or IMS resources. This support is provided
only when you use a WLM-established address space (refer to Chapter 12, “Recoverable Resource
Manager Services Attachment Facility” on page 255).
DB2 automatically issues an MVS START command to activate the DB2-established stored procedures
address space during DB2 startup. This address space runs as a DB2 allied address space, providing
an isolated execution environment for the stored procedures. Therefore, you can stop and start the
stored procedures address space without restarting DB2. In addition, because DB2 is isolated from
user program errors, you can test and install new versions of stored procedures without stopping DB2.
The RACF ID and group for the DB2-established stored procedures address space do not have to
match the RACF ID and group name used for the other DB2 address spaces. However, the RACF ID
and group name that you select must have authority to run DB2 call attachment facility (CAF)
application programs.
If you access non-DB2 resources such as VSAM files and flat files in your stored procedure, you must
ensure that the RACF ID associated with the stored procedures address space has the privileges
needed for the access. The RACF ID associated with the client application is not checked for privileges
to access non-DB2 resources for stored procedures that run in the DB2-established address space.
Another solution is to use a WLM-established address space. You can set up an application
environment that can serialize the execution of the stored procedure.
If, for DB2 Version 4, you choose not to use stored procedures during the installation process but
decide to use them later, you cannot use the DSNTINST CLIST. You must update the DSNTIJUZ
member.
DSN6SYSP AUDITST=NO,
CONDBAT=64,
CTHREAD=70,
.
.
.
STORMXAB=0,
STORPROC=DB41SPAS, <-- (1)
STORTIME=180,
TRACSTR=NO,
TRACTBL=16
Figure 8. Changing DSNTIJUZ Parameters
By changing the STORPROC parameter (1) to the name of the stored procedures address space JCL
procedure, support for stored procedures becomes available. In addition to updating DSNTIJUZ, you
have to manually create the JCL procedure that starts the stored procedures address space.
You cannot use the DSNTINST CLIST to change the number of concurrent TCBs. Instead, you must
update the JCL procedure for the stored procedures address space. Figure 9 shows how to change
the number of concurrent TCBs.
//DB41SPAS PROC RGN=0K,TME=1440,SUBSYS=DB41,NUMTCB=8 <-- (1)
//IEFPROC EXEC PGM=DSNX9STP,REGION=&RGN,TIME=&TME,
// PARM='&SUBSYS,&NUMTCB'
//STEPLIB DD DISP=SHR,DSN=DSN410.RUNLIB.LOAD
.
.
.
Figure 9. Changing the Number of Concurrent TCBs
Update the NUMTCB parameter (1) in the JCL procedure to increase the number of concurrent TCBs
available for stored procedures. However, when you code your stored procedures, do not rely on the
existence of multiple TCBs in the stored procedures address space.
To use stored procedures in a DB2 environment, you have to define the stored procedures to DB2 by
updating the SYSIBM.SYSPROCEDURES catalog table, and use commands to control the execution of
the stored procedures.
You can use the INSERT statement or the LOAD utility to insert rows in the SYSIBM.SYSPROCEDURES
table. It is also possible to use the DELETE or UPDATE statement to delete or change any value
previously specified in the SYSIBM.SYSPROCEDURES table.
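As a sketch (the procedure, load module, and parameter names are hypothetical, and only a subset of the SYSIBM.SYSPROCEDURES columns is shown; see the catalog description for the complete column list):

```sql
-- Hypothetical definition: PROC1, implemented by COBOL load module PROG1,
-- available to any AUTHID and any LUNAME (both columns left blank).
INSERT INTO SYSIBM.SYSPROCEDURES
       (PROCEDURE, AUTHID, LUNAME, LOADMOD, LANGUAGE, PARMLIST)
VALUES ('PROC1', ' ', ' ', 'PROG1', 'COBOL',
        'EMPNO CHAR(6) IN, SALARY DECIMAL(9,2) OUT');
```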
To reduce I/O operations, DB2 caches data read from the SYSIBM.SYSPROCEDURES table. If you
update any column in the SYSPROCEDURES table, you may have to refresh the DB2 buffers related to
the stored procedure you updated. To refresh the buffers, issue the START PROCEDURE command. Refer
to 2.3.2, “DB2 Commands Related to Stored Procedures” on page 19 for more information about DB2
commands.
Note that in Table 3, rows 1 and 2 refer to the PROC1 stored procedure. By creating multiple rows in
the SYSIBM.SYSPROCEDURES table with the same value for the PROCEDURE column, you can
indicate that specified users have access to different versions of the stored procedure. In our case, in
row 1, the AUTHID and LUNAME columns are blank. Any user or location without a specific entry can
use row 1. Row 2 applies only to SQL CALL requests coming from AUTHID BO and LUNAME
LUXEMBRG. When this user invokes the PROC1 stored procedure, a different load module (PROG2) is
loaded. The load module can be a test version of the stored procedure or a version that is specific for
that user.
Row 3 applies to stored procedure PROC2 and AUTHID SILVIO. Because there is no other row for
stored procedure PROC2, user SILVIO is the only one who can call this stored procedure. Because the
LUNAME column is blank, user SILVIO can call this stored procedure from any client program, either
local or remote.
Row 4 applies to stored procedure PROC3 and LUNAME LUTEST. Because there is no other row for
stored procedure PROC3, only client programs running LUNAME LUTEST can call stored procedure
PROC3. Because the AUTHID column is blank, any user from LUNAME LUTEST can call stored
procedure PROC3.
As shown in Table 3 on page 15, it is possible to have more than one row in the
SYSIBM.SYSPROCEDURES table for a given stored procedure. DB2 has a search precedence for
determining which row it selects for a specific client. The following is the search precedence that DB2
uses to select the stored procedure:
1. A row with AUTHID and LUNAME matching the caller's AUTHID and LUNAME
2. A row with AUTHID matching the caller's AUTHID and LUNAME blank
3. A row with AUTHID blank and LUNAME matching the caller's LUNAME
4. A row with AUTHID and LUNAME columns blank
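This precedence can be illustrated with the two PROC1 rows described above. The text names only PROG2; PROG1 as the default load module for row 1 is our assumption.

```sql
-- Row 1: default version, used by any caller without a specific entry
-- (AUTHID and LUNAME blank). PROG1 is a hypothetical module name.
INSERT INTO SYSIBM.SYSPROCEDURES (PROCEDURE, AUTHID, LUNAME, LOADMOD)
VALUES ('PROC1', ' ', ' ', 'PROG1');

-- Row 2: version loaded only for AUTHID BO calling from LUNAME LUXEMBRG.
INSERT INTO SYSIBM.SYSPROCEDURES (PROCEDURE, AUTHID, LUNAME, LOADMOD)
VALUES ('PROC1', 'BO', 'LUXEMBRG', 'PROG2');
```

When user BO calls PROC1 from LUXEMBRG, row 2 matches at precedence level 1 and PROG2 is loaded; any other caller falls through to row 1 at level 4.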
Caution
The support for AUTHID and LUNAME may be suppressed in future versions of DB2.
where:
parm-name Is a one- through eight-character string defining the name of the parameter for use in
messages. If you do not specify a name, the position of the parameter in the
parameter list is used in the DB2 messages.
INTEGER or INT
Large integer parameter
SMALLINT Small integer parameter
REAL Single precision floating-point parameter
PARM1, PARM2, and PARM3 are identifiers for error messages. You can specify any name you want.
The stored procedure associated with this PARMLIST string would expect three parameters:
• An input character parameter of length 10
• An integer parameter for both input and output
• An integer parameter for output only
If the number of parameters in the SQL CALL statement does not match the number of parameters
specified in the PARMLIST column for the stored procedure, an SQLCODE -440 is returned to the
client application.
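Based on this description, the PARMLIST column value for such a procedure might be set as follows. This is a sketch: PROC1 is carried over from the earlier example, and you should verify the exact PARMLIST string syntax against the DB2 reference for your release.

```sql
-- Hedged example: three parameters matching the description above
-- (CHAR(10) input, integer input/output, integer output).
UPDATE SYSIBM.SYSPROCEDURES
   SET PARMLIST = 'PARM1 CHAR(10) IN, PARM2 INT INOUT, PARM3 INT OUT'
 WHERE PROCEDURE = 'PROC1';
```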
If you specify N for this column, all updates performed by the unit of work are committed when the
client application commits. This is the default.
If you specify Y for this column, all updates performed by the unit of work are committed when control
returns to the client application, provided the SQLCODE for the CALL statement is zero or positive.
This implies that updates performed before the stored procedure is invoked are also committed.
Those updates can be done by the stored procedure locally or remotely (through a three-part name or
using RRSAF), or done locally or remotely by the client application before invoking the stored
procedure. Any updates performed by the client application after the execution of the stored
procedure are part of another unit of work. The client application does not need to code the COMMIT
statement. This specification is valid for DB2-established or WLM-established address spaces. This
specification is valid regardless of whether the client application is using CONNECT TYPE 1 or
CONNECT TYPE 2. If the application updated other DRDA servers before invoking the stored
procedures, those updates are also committed upon successful execution of the stored procedure.
The advantage of committing automatically is that all locks acquired previously are released, except
for locks acquired for opened cursors declared with both the WITH HOLD and WITH RETURN options.
If the client application is running under a TP monitor that does not support this function, such as CICS,
IMS, RRS, or Encina, a -925 SQLCODE is returned to the client application. If a ROLLBACK is
requested by the DB2 server, for example because a ROLLBACK statement was executed by the
stored procedure, a -926 SQLCODE is returned to the client application. The client application can use
this information to either commit or roll back using whatever facilities are provided by the TP monitor.
If the level of the DRDA application requester does not support the commit request from the DRDA
application server, the specification of Y is ignored and no SQLCODE is returned to the client
application. In this case, the client application should explicitly commit or roll back.
The DDCS product does not support the commit request. The DB2 Connect product supports the
commit request with maintenance. Client applications running on DB2 Version 4 need APAR PQ11161
to invoke a stored procedure that has COMMIT_ON_RETURN=Y specified. If APAR PQ11161 is not
applied, the COMMIT_ON_RETURN specification is ignored and no SQLCODE indicating this is returned
to the client application.
If the stored procedure is coded to return multiple result sets, you must declare the cursors for the
result sets with the WITH RETURN and WITH HOLD options; otherwise the cursors are closed when
control returns to the client application.
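A cursor intended as a result set could, for example, be declared like this in the stored procedure program. The cursor, table, and column names are illustrative.

```sql
-- Declared WITH RETURN so the result set is passed back to the caller,
-- and WITH HOLD so the cursor survives a commit on return.
DECLARE C1 CURSOR WITH HOLD WITH RETURN FOR
  SELECT EMPNO, LASTNAME
    FROM EMP
   ORDER BY EMPNO;
```

The stored procedure opens C1 and leaves it open when control returns to the client application.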
The DISPLAY THREAD command output also provides more information about stored procedures.
2.3.2.1 START PROCEDURE Command: The START PROCEDURE command activates the
definition of stored procedures. It reads and validates information from the SYSIBM.SYSPROCEDURES
table. The START PROCEDURE command also refreshes the DB2 buffers with information from the
SYSIBM.SYSPROCEDURES table.
If the DB2-established stored procedures address space has not been started, it is automatically
started after the START PROCEDURE command is executed.
To execute a START PROCEDURE command, you must have one of the following privileges:
• SYSOPR authority
• SYSCTRL authority
• SYSADM authority
Figure 11 shows the syntax of the START PROCEDURE command.
procedure-name
Specifies the name of the stored procedure to be started. The information stored in
the SYSIBM.SYSPROCEDURES table for the stored procedure is read and cached.
If you do not specify a value for the procedure-name argument, or you specify (*), all
stored procedures are started.
Example:
-START PROCEDURE(*)
DSNX946I @ DSNX9ST2 START PROCEDURE SUCCESSFUL FOR *
partial-name * Starts a set of stored procedures. The names of the stored procedures in the set start
with partial-name and can end with any string.
Example:
-START PROCEDURE(BBMMS*)
DSNX946I @ DSNX9ST2 START PROCEDURE SUCCESSFUL FOR BBMMS*
2.3.2.2 STOP PROCEDURE Command: The STOP PROCEDURE command stops access to one
or more stored procedures. Depending on the arguments you specify, new requests to stopped stored
procedures can be queued or rejected.
If a stored procedure is not running correctly, you can stop the stored procedure and replace or add a
load module associated with a stored procedure.
For the DB2-established stored procedures address space, the STOP PROCEDURE command also
enables you to stop the stored procedures address space.
To execute a STOP PROCEDURE command, you must have one of the following privileges:
• SYSOPR authority
• SYSCTRL authority
• SYSADM authority
Figure 12 shows the syntax of the STOP PROCEDURE command.
procedure-name
Specifies the name of the stored procedure to be stopped.
If you do not specify a value for the procedure-name argument, or you specify (*), all
stored procedures are stopped, and for the DB2-established stored procedures
address space, the address space is terminated. The STOP PROCEDURE command
does not check whether the procedure-name is in the SYSIBM.SYSPROCEDURES
table.
Example:
-STOP PROCEDURE(*)
DSNX947I @ DSNX9SP2 STOP PROCEDURE SUCCESSFUL FOR *
-STOP PROCEDURE(WRONGNAME)
DSNX947I @ DSNX9SP2 STOP PROCEDURE SUCCESSFUL FOR WRONGNAME
partial-name * Stops a set of stored procedures. The names of stored procedures in the set start
with partial-name and can end with any string.
Example:
-STOP PROCEDURE(BBMMS*)
DSNX947I @ DSNX9SP2 STOP PROCEDURE SUCCESSFUL FOR BBMMS*
DB2 automatically issues the STOP PROCEDURE ACTION(REJECT) command for any stored procedure
that exceeds the maximum abnormal termination (abend) count. That count is set on the DSNTIPX
panel during DB2 installation. See 2.2, “Installation Considerations” on page 8 for more information.
The effects of the STOP PROCEDURE command do not persist if DB2 is restarted. If you want to
permanently disable a stored procedure, you can:
• Delete the row in the SYSIBM.SYSPROCEDURES table that defines the stored procedure.
• Update the row in the SYSIBM.SYSPROCEDURES table so that the LOADMOD column points to a
nonexistent MVS load module.
• Rename or delete the MVS load module.
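The second option can be sketched as follows. PROC1 is carried over from the earlier examples, and NOSUCHMD is a hypothetical module name chosen not to exist in any STEPLIB.

```sql
-- Point LOADMOD at a module that does not exist, so any CALL fails.
UPDATE SYSIBM.SYSPROCEDURES
   SET LOADMOD = 'NOSUCHMD'
 WHERE PROCEDURE = 'PROC1';
```

Follow the update with a START PROCEDURE command so that DB2 refreshes the cached definition.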
To execute a DISPLAY PROCEDURE command, you must have one of the following privileges:
• DISPLAY privilege
• SYSOPR authority
• SYSCTRL authority
• SYSADM authority
Figure 13 shows the syntax of the DISPLAY PROCEDURE command.
procedure-name
Specifies the name of the stored procedure to display.
If you do not specify a value for the procedure-name argument, or you specify (*), all
stored procedures that have been accessed by a DB2 application are displayed.
partial-name * Displays a set of stored procedures. The names of stored procedures in the set start
with partial-name and can end with any string.
Examples:
-DISPLAY PROCEDURE(*)
DSNX940I @ DSNX9DIS DISPLAY PROCEDURE REPORT FOLLOWS -
PROCEDURE MODULE STATUS ACTIVE MAXACT QUEUED MAXQUE TIMEOUT
BBMMSPR0 STOPREJ 0 0 0 1 1
PPMMSSM1 STOPQUE 0 0 0 1 0
TS0BMS TS0BMS STARTED 0 1 0 1 0
PPMMSSM0 PPMMSSM0 STARTED 0 1 0 0 0
DSNX9DIS DISPLAY PROCEDURE REPORT COMPLETE
-DISPLAY PROCEDURE(BBMMSTS1,WRONGNAME)
DSNX940I @ DSNX9DIS DISPLAY PROCEDURE REPORT FOLLOWS -
PROCEDURE MODULE STATUS ACTIVE MAXACT QUEUED MAXQUE TIMEOUT
BBMMSTS1 STOPQUE 0 0 0 0 0
DSNX9DIS PROCEDURE WRONGNAME HAS NOT BEEN ACCESSED
DSNX9DIS DISPLAY PROCEDURE REPORT COMPLETE
-DISPLAY PROCEDURE(BBMMS*)
DSNX940I @ DSNX9DIS DISPLAY PROCEDURE REPORT FOLLOWS -
PROCEDURE MODULE STATUS ACTIVE MAXACT QUEUED MAXQUE TIMEOUT
BBMMSPR0 STOPREJ 0 0 0 1 1
TS0BMS TS0BMS STARTED 0 1 0 1 0
DSNX9DIS DISPLAY PROCEDURE REPORT COMPLETE
If you issue a DISPLAY PROCEDURE command when a STOP PROCEDURE (*) is in effect, the following
output line is also displayed:
DSNX943I @ DSNX9DIS PROCEDURES A THROUGH
Z99999999999999999 HAVE BEEN STOPPED WITH ACTION(QUEUE)
A new message (DSNV429I) is also included to provide the stored procedure name and the load
module name when a thread is executing a stored procedure.
Example:
-DISPLAY THREAD(*)
DSNV401I @ DISPLAY THREAD REPORT FOLLOWS -
DSNV402I @ ACTIVE THREADS -
NAME ST A REQ ID AUTHID PLAN ASID TOKEN
SERVER SP * 3 SM0PMCC2.EXE STDRD2A DISTSERV 0065 98
V429 CALLING STORED PROCEDURE PPMMSSM0, LOAD MODULE PPMMSSM0
V445-USIBMSC.SC02130I.AC5D2A9757DC=98 ACCESSING DATA FOR <SC02130I>
SERVER SW * 2 BB2MCPR0.EXE STDRD2A DISTSERV 0065 96
V429 CALLING STORED PROCEDURE BBMMSPR0, LOAD MODULE
V445-USIBMSC.SC02130I.AC5D2A740EA8=96 ACCESSING DATA FOR <SC02130I>
DISPLAY ACTIVE REPORT COMPLETE
DSN9022I @ DSNVDT '-DISPLAY THREAD' NORMAL COMPLETION
In this example, the second thread is waiting for stored procedure BBMMSPR0. By issuing a DISPLAY
PROCEDURE command, you can get more information to find out why the thread is waiting:
-DISPLAY PROCEDURE(BBMMSPR0)
DSNX940I @ DSNX9DIS DISPLAY PROCEDURE REPORT FOLLOWS -
PROCEDURE MODULE STATUS ACTIVE MAXACT QUEUED MAXQUE TIMEOUT
BBMMSPR0 STOPQUE 0 0 1 1 3
DSNX9DIS DISPLAY PROCEDURE REPORT COMPLETE
In this case, the thread was in a waiting state because the stored procedure was stopped. This wait
period is limited by the time you specified for the TIMEOUT VALUE parameter during installation.
24 Getting Started with DB2 Stored Procedures
Chapter 3. WLM-Established Stored Procedures Address Spaces
DB2 Version 5 supports WLM-established stored procedures address spaces in addition to the
DB2-established stored procedures address space, which was already supported in DB2 Version 4.
DB2 for OS/390 Version 5 (with OS/390 Release 3 or above) enables you to run multiple WLM-managed
stored procedures address spaces. With WLM-established stored procedures address spaces, you can:
• Access non-DB2 resources with two-phase commit.
• Execute stored procedures based on individual transaction priorities.
• Achieve workload balancing across multiple stored procedures address spaces.
• Have more flexibility to group or isolate applications.
• Perform RACF checking of access to non-DB2 resources based on the client's authorization.
In this chapter, we present an introduction to WLM and how it interfaces with stored procedures. We
also describe how you can implement WLM-established stored procedures address spaces. In
summary, these steps are:
1. Setting up RRS (refer to 3.14, “Implementing OS/390 Resource Recovery Services (RRS) Support”
on page 59 for this task)
2. Setting up WLM
3. Placing the JCL for the WLM-established stored procedures address space in a system procedure
library such as SYS1.PROCLIB
4. Preparing the stored procedure
5. Updating SYSIBM.SYSPROCEDURES
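As a sketch of step 3, a JCL procedure for a WLM-established stored procedures address space might look like the following. The procedure name, application environment name, subsystem ID, and data set name are examples; DSNX9WLM is the DB2-supplied program run in WLM-established address spaces.

```jcl
//WLMENV1  PROC RGN=0K,APPLENV=WLMENV1,DB2SSN=DB51,NUMTCB=8
//IEFPROC  EXEC PGM=DSNX9WLM,REGION=&RGN,TIME=NOLIMIT,
//         PARM='&DB2SSN,&NUMTCB,&APPLENV'
//STEPLIB  DD DISP=SHR,DSN=DSN510.RUNLIB.LOAD
```

The APPLENV value must match the application environment name defined in the WLM service definition, so that WLM knows which JCL procedure to start for that environment.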
Throughout this section, any reference to DB2 refers to DB2 for OS/390.
WLM provides two modes of operation. The previous method of performance management, using the
installation performance specification (IPS) and the installation control specification (ICS), is called
compatibility mode. The new goal-oriented performance management with the service definition (see
3.1.2, “WLM Definitions Relationships” on page 26) is called goal mode.
The goal mode of operation became possible with the enclave implementation available in MVS 5.2.
The enclave implementation allows a transaction to span multiple dispatchable SRBs or TCBs in one
or more address spaces. OS/390 MVS Workload Management Services, GC28-1773 contains
information about enclaves. Without using an enclave, work can be managed on an address-space
basis only.
You can have several service definitions stored in user data sets, although only one service definition
can be installed (active) in the WLM couple data set for the entire Sysplex environment.
As illustrated in Figure 14, a service definition applies to several supported types of work (such as
DB2, DDF, and CICS), and consists of one or more service policies.
You can have more than one service policy defined in your service definition, although you can have
only one service policy active in the Sysplex. However, not all WLM commands are Sysplex-wide. For
example, you can switch each system in and out of goal mode independently.
For WLM, a workload is a group of related work meaningful for the installation. When defining a
workload to WLM, the name of the workload does not have to match any keyword defined in the
supported products. It is a symbolic name to refer to a group of related work. A workload can be, for
example, all the work created by a development group, all the work started by a set of applications, or
a grouping of DB2 and CICS transactions. As illustrated in Figure 15 on page 27, you associate a
workload with one or more service classes.
You group work that has the same performance characteristics in one service class. In turn, a service
class can have one or more service class periods. The concept of a service class period is that a
piece of work can consume up to a certain limit of resources (service units) with a certain priority.
When the limit is reached, the work switches to the next period, where the priority is lower than that of
the previous period.
A service class period has performance objectives that can be expressed in terms of importance and
goals.
There are five levels of importance, ranging from lowest to highest. When there is not sufficient
capacity for all work in the system to meet its goals, WLM uses importance to determine which work
should give up resources and which work should receive more resources.
As illustrated in Figure 17 on page 29, stored procedures generally inherit the performance goals of
the calling client application.
For example, if the stored procedure is invoked by a CICS transaction, IMS transaction, TSO
attachment, CAF, or RRSAF, the stored procedure inherits the performance goals of the client
transaction (in Figure 17, PERFORMANCE GOAL X). For stored procedures invoked through DDF, you
can assign performance goals for the stored procedure that can be independent of the performance
goals assigned to the client, in the following way:
• If the first SQL statement is not an SQL CALL statement, the performance goals used for the unit of
work executed on this DB2 server are those assigned to the client (in Figure 17, PERFORMANCE
GOAL Y). If the unit of work later executes an SQL CALL statement on this DB2 server, the
performance goals used for the stored procedure are those defined for the client.
• If the first SQL statement executed on the DB2 server is an SQL CALL statement, the whole unit of
work uses the performance goals of the stored procedure (in Figure 17, PERFORMANCE GOAL W).
This implies that even after the stored procedure finishes, any new SQL statement executed in this
unit of work has the same performance goals as those defined for the stored procedure.
Up to this point, we have summarized some of the concepts deployed by WLM related to performance.
For implementation and scheduling of stored procedures, the important WLM definition is the
application environment definition.
Each application environment definition represents one or more stored procedures. You can group
stored procedures that access the same resources and have similar characteristics in one WLM
application environment definition.
To use a WLM-established stored procedures address space, you must define one or more application
environments in the WLM service definition.
If you request that the stored procedures address spaces be automatically managed, WLM starts and
stops stored procedures address spaces as needed. For example, when an SQL CALL statement for a
stored procedure is received by DB2, DB2 informs WLM, which determines whether there is a server
address space to execute the stored procedure. If an address space is available to execute the stored
procedure, the stored procedure is executed in this address space. If there is no address space
available to execute the stored procedure, WLM can create one. Application environments can be
used in either goal mode or compatibility mode. In compatibility mode, the server address space
cannot be automatically created by WLM. In this case, you have to start the address space manually
or through some automation tool.
(Panel: Create an Application Environment)
Using application environments in compatibility mode involves manually starting and stopping stored
procedures address spaces. In this case, you decide how many address spaces should be available at
any point in time. You have to use the MVS operator START command to start the stored procedures
address space and the MVS operator CANCEL command to stop it. Alternatively, you can use an
automation tool such as System Automation for OS/390.
You can issue the following MODIFY command (short form of the command is F) to change from
compatibility mode to goal mode:
F WLM,MODE=GOAL
You can issue the following MODIFY command to change from goal mode to compatibility mode.
F WLM,MODE=COMPAT
When you issue the command to change modes you get the following message:
IWM007I SYSTEM SC62 NOW IN WORKLOAD MANAGEMENT COMPATIBILITY|GOAL MODE
If WLM is already in the specified mode, you get the following message:
IWM008I MODIFY WLM REJECTED, SYSTEM SC62 ALREADY IN WORKLOAD
MANAGEMENT GOAL|COMPATIBILITY MODE
This command takes effect across the whole Sysplex environment. However, the VARY
WLM,APPLENV= command (explained in 3.5, “Managing Application Environments” on page 34) has
no effect on address spaces that were started on MVS systems running in compatibility mode. This
means that if you issue the quiesce or refresh options of the VARY WLM,APPLENV command on a
Sysplex where some systems are running in compatibility mode, the application environment remains
in the QUIESCING or REFRESHING state until all address spaces for the application environment on
the compatibility mode systems are manually terminated. For more information on the VARY
WLM,APPLENV= command, refer to 3.5, “Managing Application Environments” on page 34.
In goal mode, you can manually start and stop stored procedures address spaces, or you can let WLM
automatically start and stop stored procedures address spaces. If you want WLM to automatically start
stored procedures address spaces, you must define the JCL procedure name associated with the
application environment. This is called automatic control. Under automatic control, WLM creates the
stored procedures address spaces as started tasks. The startup parameters can be contained in either
the JCL procedure or the application environment. The parameters specified in the application
environment definition override those of the JCL procedure.
When the address spaces are no longer needed, WLM deletes them after a certain time period.
Under automatic control, the quantity of stored procedures address spaces is controlled by WLM. If an
operator starts or cancels an address space under automatic control, WLM:
• Uses address spaces not started by WLM as if they were started by WLM. This means that if an
address space is available, WLM uses it regardless of whether it was started by WLM or by an
operator.
• Terminates address spaces not started by WLM. This means that if an address space is no longer
needed, WLM deletes it regardless of whether it was started by WLM or by an operator.
• Terminates all address spaces if you issue the VARY command with the QUIESCE option.
You can use operator commands to manage application environments. Options on the VARY
WLM,APPLENV command allow you to quiesce, resume, or refresh application environments. These
options allow you, for example, to change the JCL procedure, the start parameters, or the application
libraries. The resume option also allows you to recover from error conditions that have caused WLM
to stop an application environment.
The action taken for an application environment is saved in the WLM couple data set and is not
discarded across an IPL.
You can query the current state of an application environment using the DISPLAY WLM,APPLENV=
command.
The scope of both the VARY and the DISPLAY commands is Sysplex-wide, regardless of whether you
use DB2 data sharing.
An application environment initially enters the AVAILABLE state when the service policy that contains
the application environment is activated. The AVAILABLE state indicates that the application
environment is available for use and address spaces can be started. You can change the state of an
application environment using the VARY WLM,APPLENV command. This is the format of the command:
VARY WLM,APPLENV=xxxxx,option
where:
xxxxx is the application environment name
option can be QUIESCE, RESUME, or REFRESH
You can issue the QUIESCE option for an application environment that is in the AVAILABLE state.
When you specify the QUIESCE option, the application environment first enters a QUIESCING state until
all stored procedures address spaces for this application environment terminate. It then enters the
QUIESCED state.
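For example, to quiesce an application environment, check its state, and later resume it, an operator could enter the following sequence. WLMENV1 is a hypothetical application environment name.

```text
VARY WLM,APPLENV=WLMENV1,QUIESCE
DISPLAY WLM,APPLENV=WLMENV1
VARY WLM,APPLENV=WLMENV1,RESUME
```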
Use the REFRESH option to pick up a new copy of the load module containing the stored procedure
program.
You can specify the REFRESH option for an application environment that is in the AVAILABLE state.
When you specify the REFRESH option, it first enters the REFRESHING state until all address spaces
terminate. It then enters the AVAILABLE state.
When the application environment is in STOPPED state, you can make changes to libraries, JCL
procedure, or any other changes needed to repair the condition that caused WLM to stop address
space creation. After you solve the problem, use the RESUME option of the VARY WLM command.
Chapter 5, “Assigning Stored Procedure to WLM Application Environments” in DB2 for OS/390
Administration Guide outlines the steps to create an application environment in a service definition for
the WLM-managed stored procedures address spaces.
If your installation is already running WLM, you already have an active service policy within a service
definition. In this case, you have only to define an application environment for your stored procedure
or for each group of stored procedures. Refer to 3.8, “Existing Service Definition” on page 50 for a
description of how to perform this task.
If you don't have WLM installed, ask your systems programmer to install it for you. After the
installation is complete, you have to perform the tasks described in this section. Figure 20 on page 36
shows the relationship among the WLM definitions.
When using the WLM ISPF application, you have to enter most of the information manually. If some
information for a definition was already specified on one panel and the same type of information is
required on other panels, you can enter a question mark to list the available choices.
(WLM logo splash panel)
ENTER to continue
Figure 21. WLM First Panel
A Sysplex installation may run different levels of OS/390 among its members. If you get an
IWMAM052 message because of a mismatch between the level of the WLM service definition
functionality and the WLM ISPF application, contact your systems programmer to ensure you are
using the appropriate data sets.
2. Press Enter to get the panel displayed in Figure 22.
File Help
--------------------------------------------------------------------------
_______________________________________________
| Choose Service Definition |
| |
| Select one of the following options. |
| 3_ 1. Read saved definition |
| 2. Extract definition from WLM |
| couple data set |
| 3. Create new definition |
| |
| |
| |
_______________________________________________
ENTER to continue
Figure 22. Choose Service Definition
3. Select option 3 to create a new definition. The panel shown in Figure 23 on page 38 is displayed.
4. Fill in the definition name, which can be any name, and the description.
5. Select option 1 to define a service policy. The panel shown in Figure 24 is displayed.
Service-Policy Notes Options Help
--------------------------------------------------------------------------
Create a Service Policy
Command ===> ______________________________________________________________
_____________________________________________________________
| Selection List empty. Define a service policy. (IWMAM100) |
_____________________________________________________________
Figure 24. Create a Service Policy
6. Give the policy a name and, optionally, a description, then press END (PF3) to get to the service
policy selection list. The panel shown in Figure 25 on page 39 is displayed.
7. Press END (PF3) to get back to the Definition Menu panel shown in Figure 26.
File Utilities Notes Options Help
--------------------------------------------------------------------------
Functionality LEVEL003 Definition Menu WLM Appl LEVEL004
Command ===> ______________________________________________________________
8. Select option 2 to create a workload, and the Create Workload panel shown in Figure 27 on
page 40 is displayed.
_______________________________________________________
| Selection List empty. Define a workload. (IWMAM200) |
_______________________________________________________
Figure 27. Create a Workload
9. Give the workload a name, which can be any name, and optionally a description. Press END (PF3),
and the Workload Selection List panel is displayed, as shown in Figure 28.
Workload View Notes Options Help
--------------------------------------------------------------------------
Workload Selection List Row 1 to 1 of 1
Command ===> ______________________________________________________________
10. Press END (PF3) to get back to the definition menu and select option 4 to add a service class as
shown in Figure 29 on page 41.
Service-Class Notes Options Help
--------------------------------------------------------------------------
Create a Service Class Row 1 to 1 of 1
Command ===> ______________________________________________________________
---Period--- ---------------------Goal---------------------
Action # Duration Imp. Description
i_
******************************* Bottom of data ********************************
Figure 30. Create a Service Class
13. Choose a goal type according to your performance goals. For this example, we chose 1 for
Average response time. Press Enter and the Average response time goal pop-up window shown in
Figure 32 is displayed.
Service-Class Notes Options Help
- ___________________________________________ ----------------------------
| Choose a goal type for period 1 | ss Row 1 to 1 of 1
C | | _____________________________
| |
S | 1 1. Average response time | ired)
D ____________________________________________________________________
W | Average response time goal |
B | |
| Enter a response time of up to 24 hours for period 1 |
S | |
E | Hours . . . . . 0__ (0-24) |
| Minutes . . . . 0__ (0-99) |
| Seconds . . . . 3_____ (0-9999) |
A | |
| Importance . . 1 (1=highest, 5=lowest) |
* | Duration . . . 10000____ (1-999,999,999, or | ********
| none for last period) |
| |
| |
| F1=Help F2=Split F5=KeysHelp F9=Swap F12=Cancel |
____________________________________________________________________
Figure 32. Average Response Time Goal Pop-Up Window
14. On this pop-up window, enter the values of your choice for the response time, importance, and
duration, for the first service class period. When you press Enter, the panel displayed in Figure 33
on page 43 is displayed.
---Period--- ---------------------Goal---------------------
Action # Duration Imp. Description
__
i_ 1 10000 1 Average response time of 00:00:03.000
******************************* Bottom of data ********************************
_________________________________________________________________________
| Press EXIT to save your changes or CANCEL to discard them. (IWMAM970) |
_________________________________________________________________________
Figure 33. Create a Service Class Panel - Period 1
15. To create another service class period, enter “I” under Action, and the Choose a goal type for
period 2 pop-up window, shown in Figure 34, is displayed.
Service-Class Notes Options Help
- ___________________________________________ ----------------------------
| Choose a goal type for period 2 | ss Row 1 to 2 of 2
C | | _____________________________
| |
S | _1 1. Average response time | ired)
D | 2. Response time with percentile | es from DDF
W | 3. Execution velocity | or ?)
B | 4. Discretionary | or ?)
| |
S | F1=Help F2=Split F5=KeysHelp | I=Insert new period,
E | F9=Swap F12=Cancel |
___________________________________________
---Period--- ---------------------Goal---------------------
Action # Duration Imp. Description
__
i 1 10000 1 Average response time of 00:00:03.000
******************************* Bottom of data *******************************
Figure 34. Choose a Goal Type for Period 2 Pop-Up Window
16. For this definition, we also chose 1 for Average response time. After you press Enter, the Average
response time goal pop-up window shown in Figure 35 on page 44 is displayed.
17. On this pop-up window, enter the values of your choice for the response time, importance, and
duration for service class period 2. If this is the last service class period, you do not specify a
duration. When you press Enter, the panel shown in Figure 36 is displayed.
Service-Class Notes Options Help
--------------------------------------------------------------------------
Create a Service Class Row 1 to 3 of 3
Command ===> ______________________________________________________________
---Period--- ---------------------Goal---------------------
Action # Duration Imp. Description
__
__ 1 10000 1 Average response time of 00:00:03.000
__ 2 2 Average response time of 00:00:50.000
******************************* Bottom of data *******************************
Figure 36. Create a Service Class Panel - 2 Service Class Periods
18. Press PF3 to save changes and exit. The panel shown in Figure 37 on page 45 is displayed.
19. Press PF3 again to get back to the definition menu panel (Figure 29 on page 41).
You can now associate a stored procedure with classification rules. Choose option 6 as shown in
Figure 38.
File Utilities Notes Options Help
------------------------------------------------------------------------
Functionality LEVEL001 Definition Menu WLM Appl LEVEL004
Command ===> ___________________________________________________________
The panel shown in Figure 39 on page 46 is displayed. Some subsystem types already come
predefined.
------Class-------
Action Type Description Service Report
__ ASCH Use Modify to enter YOUR rules
__ CICS Use Modify to enter YOUR rules
__ DB2 Use Modify to enter YOUR rules
3_ DDF Use Modify to enter YOUR rules
__ IMS Use Modify to enter YOUR rules
__ IWEB Use Modify to enter YOUR rules
__ JES Use Modify to enter YOUR rules
__ LSFM Use Modify to enter YOUR rules
__ OMVS Use Modify to enter YOUR rules
__ SOM Use Modify to enter YOUR rules
__ STC Use Modify to enter YOUR rules
__ TSO Use Modify to enter YOUR rules
******************************* Bottom of data *************************
Figure 39. Subsystem Type Selection List for Rules
20. On this selection list, select the subsystem type through which your client program invokes the
stored procedure. Just as an example, we chose DDF. For an explanation of how the
performance policies apply to stored procedures, refer to 3.1.3, “Classification Rules” on page 28.
Enter 3 under action to modify the rules of the DDF subsystem type. The panel in Figure 40 is
displayed.
Subsystem-Type Xref Notes Options Help
--------------------------------------------------------------------------
Modify Rules for the Subsystem Type Row 1 to 1 of 1
Command ===> ____________________________________________ SCROLL ===> PAGE
Subsystem-Type Xref Notes Options Help
--------------------------------------------------------------------------
Modify Rules for the Subsystem Type Row 1 to 2 of 2
Command ===> ____________________________________________ SCROLL ===> PAGE
__________________________________________
| Qualifier name is required. (IWMAM724) |
F1=Help F2 __________________________________________ F8=Down
Figure 41. Modify Rules for the Subsystem Type
23. Choose option 9 to define the application environment for a stored procedure (or a group of stored
procedures) and the Create an Application Environment panel (Figure 43) is displayed.
Application-Environment Notes Options Help
--------------------------------------------------------------------------
Create an Application Environment
Command ===> ______________________________________________________________
24. Specify details for the application environment. Refer to 3.2.2, “Specifying Application
Environments to WLM” on page 30 for an explanation of each entry field. Press END (PF3) to get
back to the application environment selection list, and the Application Environment Selection List
panel in Figure 44 on page 49 is displayed.
File Utilities Notes Options Help
----- ___________________________________________________ ----------------
Funct | 1 1. Install definition | Appl LEVEL004
Comma | 2. Extract definition | _________________
| 3. Activate service policy |
Defin | 4. Allocate couple data set |
| 5. Allocate couple data set using CDS values |
Defin ___________________________________________________
Description . . . . . . . Sysplex at ITSO San Jose
26. Select option 1 to install the definition created. When the definition is installed, the following
message appears on the definition menu panel:
Service definition was installed. (IWMAM038)
27. Place the cursor at the Utilities pull-down and press Enter. Select option 3 to activate a service
policy and the Policy Selection List panel shown in Figure 46 on page 50 is displayed.
28. Select the service policy (we have only one) and press Enter.
To work with an already defined WLM service definition, invoke the WLM ISPF application and select
option 2 Extract definition from the Choose Service Definition panel shown in Figure 47.
--------------------------------------------------------------------------
_______________________________________________
| Choose Service Definition |
| |
| Select one of the following options. |
| 2_ 1. Read saved definition |
| 2. Extract definition from WLM |
| couple data set |
| 3. Create new definition |
| |
| |
| |
_______________________________________________
ENTER to continue
Figure 47. Choose Service Definition Panel
If you are using an existing stored procedure that used the DB2-established address space, you have
to update the WLM_ENV column with the value of the application environment. You also have to
re-link-edit the stored procedure with the DSNRLI module, which is the RRSAF language interface. You
should add the following to the link-edit step:
INCLUDE SYSLIB(DSNRLI)
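As a sketch only, a complete link-edit step might look as follows. The load library name (DSN510.SDSNLOAD), the user data set names, and the load module name MYPROC are assumptions for illustration; substitute your installation's names:

```jcl
//LINKRRS  EXEC PGM=IEWL,PARM='LIST,XREF,RENT'
//SYSLIB   DD DSN=DSN510.SDSNLOAD,DISP=SHR        assumed DB2 load library
//SYSLMOD  DD DSN=USER.STORPROC.LOAD,DISP=SHR     assumed target library
//OBJ      DD DSN=USER.STORPROC.OBJ,DISP=SHR      assumed object library
//SYSIN    DD *
  INCLUDE OBJ(MYPROC)
  INCLUDE SYSLIB(DSNRLI)
  NAME MYPROC(R)
/*
```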
If you want the stored procedure load module to execute on both the DB2-established and the
WLM-established address spaces, you have to link-edit it with both language interfaces: DSNALI and
DSNRLI. In this case, you must have different names for the stored procedure, that is, two entries in
the SYSIBM.SYSPROCEDURES table, both with the same value for the LOADMOD column.
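As a sketch, the two catalog rows could be created as follows. The procedure names MYPROCD and MYPROCW and the load module name MYPROC0 are hypothetical, and the column list is abbreviated; whether the remaining SYSPROCEDURES columns can be omitted depends on their defaults at your installation:

```sql
-- Two entries for the same load module (hypothetical names)
INSERT INTO SYSIBM.SYSPROCEDURES
       (PROCEDURE, LOADMOD, WLM_ENV)
VALUES ('MYPROCD', 'MYPROC0', '');          -- runs DB2-established
INSERT INTO SYSIBM.SYSPROCEDURES
       (PROCEDURE, LOADMOD, WLM_ENV)
VALUES ('MYPROCW', 'MYPROC0', 'WLMENV2');   -- runs WLM-established
```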
Table 4. Sample Setup to Test the Same Program in DB2-Established and WLM-Established Address Spaces
stored procedure address space DB2-established WLM-established
The RACF ID and group name that you select must have authority to run the Recoverable Resource
Manager attachment facility (RRSAF) application programs.
When you access non-DB2 resources such as VSAM files and flat files in your stored procedure, you
must ensure that the RACF ID associated with the client application has the privileges needed for the
access.
Note that when you access non-DB2 resources such as VSAM or QSAM files, DB2 does not provide
serialization for stored procedures that run in the WLM-established address space; you must provide
the serialization in the stored procedure code.
If you try to run a stored procedure with EXTERNAL_SECURITY=Y in a DB2-established address space,
you get a -471 SQLCODE with reason code 00E79009.
If you define the symbolic parameter APPLENV with some of the characters in lowercase, the WLM
address space is not started and you get the following message:
IEF403I WLMENV3 - STARTED - TIME=22.14.44
+DSNX981E DSNX9WLM THE PARAMETER APPLEN CONTAINS AN INVALID VALUE
2,WLM_ENVIrONMENT_03 PROC= WLMENV3
IEA995I SYMPTOM DUMP OUTPUT
SYSTEM COMPLETION CODE=0C4 REASON CODE=00000011
When you display the WLM environment, the application environment is in a STOPPED state:
RESPONSE=SC62
IWM029I 22.20.04 WLM DISPLAY 170
APPLICATION ENVIRONMENT NAME STATE STATE DATA
WLM_ENVIRONMENT_03 STOPPED
ATTRIBUTES: PROC=WLMENV3 SUBSYSTEM TYPE: DB2
After you modify the APPLENV parameter in the application environment definition and activate the
policy, the address space can be started again.
You can use the following display command to check the active policy name:
D WLM
The result of the command is the following:
D WLM
IWM025I 13.50.04 WLM DISPLAY 806
ACTIVE WORKLOAD MANAGEMENT SERVICE POLICY NAME: DAYTIME
ACTIVATED: 1997/11/02 AT: 23:52:04 BY: DB2RES1 FROM: SC62
DESCRIPTION: Policy from 9:00 am to 5:00 pm
RELATED SERVICE DEFINITION NAME: ITSO_SJ
INSTALLED: 1997/11/02 AT: 23:51:57 BY: DB2RES1 FROM: SC62
WLM VERSION LEVEL: LEVEL004
Use the following display command to check which application environments are defined:
D WLM,APPLENV=*
The result of the command is the following:
D WLM,APPLENV=*
IWM029I 15.45.12 WLM DISPLAY 360
APPLICATION ENVIRONMENT NAME STATE STATE DATA
APENVIRON AVAILABLE
WLMENV1 AVAILABLE
WLMENV2 STOPPED
You can activate another policy by issuing the following command:
V WLM,POLICY=policyname,RESUME
If your client program is hanging, waiting for a response from a stored procedure on OS/390, then
perform the following:
1. Display all threads at the server using -DIS THD(*) LOC(*) and look for the PROC= field.
DSNV401I =DBC1 DISPLAY THREAD REPORT FOLLOWS -
DSNV402I =DBC1 ACTIVE THREADS -
NAME ST A REQ ID AUTHID PLAN ASID TOKEN
SERVER SW * 2 CMD.EXE DB2V5 DISTSERV 0204 141
V429 CALLING PROCEDURE=PPMMSSMW3 , LOAD MODULE=PPMMSSM0,
PROC= , ASID=0000, WLM_ENV=WLMENV3
V445-09019639.0461.AF813BF41AD8=141 ACCESSING DATA FOR 9.1.150.57
DISPLAY ACTIVE REPORT COMPLETE
DSN9022I =DBC1 DSNVDT '-DIS THD' NORMAL COMPLETION
If the PROC= field is blank, there can be a problem with the application environment.
2. Issue the display command to check the state of the application environment:
D WLM,APPLENV=WLM_ENV4
IWM029I 23.40.55 WLM DISPLAY 514
APPLICATION ENVIRONMENT NAME STATE STATE DATA
WLM_ENV4 STOPPED
ATTRIBUTES: PROC=WLMENV4 SUBSYSTEM TYPE: DB2
3. If it is in the STOPPED state, issue the VARY command with the RESUME option:
V WLM,APPLENV=WLM_ENV4,RESUME
For our DB2 subsystem, DBC1, STORTIME is set at 180, indicating that a stored procedure times out
after 180 seconds. Queued requests time out when the STORTIME value is exceeded. If the client
program that invokes the stored procedure is running on a client workstation and the client workstation
cannot interpret the SQLCODE, you get the following message:
SQL0969N There is no message text corresponding to SQL error "-471" in the
message file on this workstation. The error was returned from module
"DSNX9WCA" with original tokens "PPMMSSMW 00E79002". SQLSTATE=
The explanation for this error is the following:
Explanation: DB2 received an SQL CALL statement for a stored procedure. The CALL statement
was not accepted because of DB2 reason code 00E79002. SQLSTATE=55023
If you don't specify the APPLENV symbolic parameter either in the JCL procedure or in the start
parameters of the application environment definition, then when the address space is started, you get
the following messages:
IWM034I PROCEDURE DBC1WLM2 STARTED FOR SUBSYSTEM DBC1 599
APPLICATION ENVIRONMENT WLMENV2
PARAMETERS DB2SSN=DBC1,NUMTCB=1
$HASP100 DBC1WLM2 ON STCINRDR
$HASP373 DBC1WLM2 STARTED
IEF403I DBC1WLM2 - STARTED - TIME=18.10.35
+DSNX981E DSNX9WLM THE PARAMETER APPLEN CONTAINS AN INVALID VALUE 603
PROC= DBC1WLM2
IEA995I SYMPTOM DUMP OUTPUT 604
SYSTEM COMPLETION CODE=0C4 REASON CODE=00000004
WLM tries to start the address space three times. After three failures, WLM places the application
environment in the STOPPED state. You can verify this by issuing the DISPLAY WLM command:
D WLM,APPLENV=WLMENV2
IWM029I 18.13.43 WLM DISPLAY 640
APPLICATION ENVIRONMENT NAME STATE STATE DATA
WLMENV2 STOPPED
ATTRIBUTES: PROC=DBC1WLM2 SUBSYSTEM TYPE: DB2
If you want to check WLM data sets and usage in a Sysplex environment, you can issue the following
command:
D XCF,COUPLE,TYPE=WLM
You get the following information:
IXC358I 18.45.14 DISPLAY XCF 384
WLM COUPLE DATA SETS
PRIMARY DSN: SYS1.WLMR4.CDS01
VOLSER: TOTDS0 DEVN: 0CEE
FORMAT TOD MAXSYSTEM
05/29/1997 18:39:02 32
ALTERNATE DSN: SYS1.WLMR4.CDS02
VOLSER: TOTDS1 DEVN: 0FEE
FORMAT TOD MAXSYSTEM
05/29/1997 18:39:04 32
WLM IN USE BY ALL SYSTEMS
In this section, we show how WLM-established stored procedures behave under goal and compatibility
modes. We also tested automatic control (where WLM knows the name of the JCL procedure to start
the WLM-established stored procedure) and manual control (where WLM does not know the name of
the JCL procedure). We highlight the differences in starting and stopping stored procedures address
spaces using MVS operator commands and WLM commands.
Base goal:
For the second version, the DISPLAY statement is with REPLY (DISPLAY and ACCEPT).
dcl response char(1) init('R');
display ('respond with "' || response || '"') reply (response);
The program waits for operator response. This way, we could easily start a number of client programs
and control how they are dispatched by entering the required responses through the MVS console.
We invoke this stored procedure by using the sm0pmcr2.cmd REXX command file from an OS/2 client.
This sample stored procedure executes commands passed by the client program.
To display the status of the WLMENV2 application environment, we issued the following command:
DISPLAY WLM,APPLENV=WLMENV2
We got the following output:
IWM029I 14.15.07 WLM DISPLAY 998
APPLICATION ENVIRONMENT NAME STATE STATE DATA
WLMENV2 AVAILABLE
ATTRIBUTES: PROC=DBC1WLM2 SUBSYSTEM TYPE: DB2
3.13.3.1 Test WLM Management of Multiple Address Spaces: The following are the tests
performed to check WLM capabilities of managing multiple address spaces:
1. We set the maximum number of TCBs (NUMTCB) for the WLM-established stored procedure to
one. The stored procedure is classified under the general service class, SCDB2STP. When we
started the client program, WLM started up one WLM-established stored procedure. WLM issues
the following messages when the address space is started:
IWM034I PROCEDURE DBC1WLM2 STARTED FOR SUBSYSTEM DBC1
APPLICATION ENVIRONMENT WLMENV2
PARAMETERS DB2SSN=DBC1,NUMTCB=1,APPLENV=WLMENV2
2. We invoked 30 identical client programs using the first version of the stored procedure, which uses
a very small amount of resource and completes in a very short time. Because the performance
goals were met, WLM did not start another address space.
3. We changed the WLM classification so that the stored procedure now uses the high priority service
class, HIGHPRT. We invoked 30 client programs, but now calling the second version of the stored
procedure. We did not reply immediately to the pending message on the MVS console, which
3.13.3.2 Operator Commands: The following are the tests we performed using operator
commands, with WLM in goal mode and automatic control:
1. We succeeded in canceling a stored procedures address space by using an operator command
(instead of QUIESCE). The address space came down with the following command:
C DBC1WLM2,APPLENV=WLMENV2
If WLM decides it needs the address space, WLM starts it again. If WLM decides it no longer
needs the address space, it does not restart it.
2. We issued the following command with the QUIESCE option:
VARY WLM,APPLENV=WLMENV2,QUIESCE
WLM brought down all address spaces for this application environment and issued the following
message:
IWM032I VARY QUIESCE FOR WLMENV2 COMPLETED
To use WLM-established stored procedures address spaces, you have to implement OS/390 resource
recovery services (RRS). You must be running your system in one of two Sysplex environments.
Check the PLEXCFG parameter of IEASYSnn member in SYS1.PARMLIB:
• If the parameter specification is PLEXCFG=(MULTISYSTEM,OPI=NO), it indicates that the system
is to be part of a Sysplex consisting of one or more MVS systems that reside on one or more
processors and share the same Sysplex couple data sets.
• If the parameter specification is PLEXCFG=(MONOPLEX,OPI=NO), it indicates that the system is
to be part of a single-system Sysplex that must use a Sysplex couple data set. In this case, you
don't need a coupling facility, nor do you need to set up the whole Sysplex environment.
DB2 requires that RRS be active, because WLM-established stored procedure address spaces use the
new RRS attachment facility (RRSAF), not the call attachment facility (CAF) used for the
DB2-established stored procedure address space. You cannot use the CAF in WLM-established
stored procedure address spaces.
As with the implementation of DB2-established stored procedure address spaces, for WLM-established
address spaces, you cannot explicitly code any call to DSNRLI.
RRS is an MVS system logger application that records events related to protected resources. RRS
records these events in five log streams. In a Sysplex environment, these log streams are shared by
the systems of the Sysplex. Before you can start RRS, you must:
1. Define RRS's log streams. The log streams can be placed on DASD or in the coupling facility. If
the log streams are placed in the coupling facility, you must:
a. Add definitions for the structure in the CFRM policy.
b. Define the log streams.
c. Activate the new definitions.
2. Set up the RRS procedure in SYS1.PROCLIB.
3. Define the RRS subsystem to MVS.
The five log stream names used by RRS are (where gname can be your Sysplex name or any name in
a non-Sysplex environment):
• Main unit-of-recovery state log stream:
ATR.gname.MAIN.UR
• Delayed unit-of-recovery state log stream:
ATR.gname.DELAYED.UR
• Resource manager data log stream:
ATR.gname.RM.DATA
• Restart log stream:
ATR.gname.RESTART
• Archive log stream (This log is recommended but optional.)
ATR.gname.ARCHIVE
3.14.1.1 Defining the RRS Log Streams to DASD: In a MONOPLEX environment, you must
allocate your log streams to DASD. Here is an example of the JCL to map each RRS log stream to
DASD:
//STEP1 EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
DATA TYPE(LOGR)
DEFINE LOGSTREAM NAME(ATR.gname.RM.DATA)
DASDONLY(YES)
STG_DATACLAS(vsamls)
LS_DATACLAS(vsamls)
DEFINE LOGSTREAM NAME(ATR.gname.MAIN.UR)
DASDONLY(YES)
STG_DATACLAS(vsamls)
LS_DATACLAS(vsamls)
DEFINE LOGSTREAM NAME(ATR.gname.DELAYED.UR)
DASDONLY(YES)
STG_DATACLAS(vsamls)
LS_DATACLAS(vsamls)
DEFINE LOGSTREAM NAME(ATR.gname.RESTART)
DASDONLY(YES)
STG_DATACLAS(vsamls)
LS_DATACLAS(vsamls)
DEFINE LOGSTREAM NAME(ATR.gname.ARCHIVE)
DASDONLY(YES)
STG_DATACLAS(vsamls)
LS_DATACLAS(vsamls)
/*
Note that:
• gname can be any name of your choice. When you start RRS, you must specify for the gname
parameter of the JCL procedure the same gname specified when you created your log streams. If
you do not specify the name when starting RRS, the default is the Sysplex name.
• vsamls is an SMS class defined for linear VSAM files. You can set up a new SMS class or use an
existing SMS class for VSAM linear data sets.
To verify the data classes already defined in SMS, you can invoke the SMS ISPF application,
choose option 4, and list all defined SMS classes.
The log stream (LS) VSAM data sets will be allocated at the time the RRS log streams are defined.
Each data set is prefixed with IXGLOGR and suffixed with A0000000. They will be named as
follows:
IXGLOGR.ATR.gname.ARCHIVE.A0000000
IXGLOGR.ATR.gname.ARCHIVE.A0000000.DATA
The staging (STG) VSAM data sets are allocated at RRS startup. When RRS is canceled, it deletes
the STG data sets. Each data set is prefixed with IXGLOGR and suffixed with the Sysplex name.
They are named as follows:
IXGLOGR.ATR.gname.ARCHIVE.Sysplexn
IXGLOGR.ATR.gname.ARCHIVE.Sysplexn.DATA
If you need to delete the RRS log streams and VSAM data sets generated, you can use the following
example:
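The deletion job was not captured here; a sketch modeled on the coupling-facility delete job shown in 3.14.1.2 (without the structure deletions, which do not apply to DASD-only log streams) might look as follows:

```jcl
//STEP1    EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
    DATA TYPE(LOGR)
    DELETE LOGSTREAM NAME(ATR.gname.RM.DATA)
    DELETE LOGSTREAM NAME(ATR.gname.MAIN.UR)
    DELETE LOGSTREAM NAME(ATR.gname.DELAYED.UR)
    DELETE LOGSTREAM NAME(ATR.gname.RESTART)
    DELETE LOGSTREAM NAME(ATR.gname.ARCHIVE)
/*
```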
3.14.1.2 Defining the RRS Log Streams to Use the Coupling Facility: If the RRS log
streams use the coupling facility, you have to update the CFRM policy to add the RRS structures. Here
is an example of the JCL to update the CFRM policy:
//DEFCFRM1 JOB MSGCLASS=X,TIME=10,MSGLEVEL=(1,1),NOTIFY=&SYSUID
//STEP1 EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSABEND DD SYSOUT=*
//SYSIN DD *
DATA TYPE(CFRM) REPORT(YES)
DEFINE POLICY NAME(CFRM18) REPLACE(YES)
CF NAME(CF01)
TYPE(009672)
MFG(IBM)
PLANT(02)
SEQUENCE(000000040104)
PARTITION(1)
CPCID(00)
DUMPSPACE(2048)
CF NAME(CF02)
TYPE(009672)
MFG(IBM)
PLANT(02)
................
................
................
STRUCTURE NAME(RRS_RM_DATA)
INITSIZE(8000)
SIZE(16000)
PREFLIST(CF02,CF01)
REBUILDPERCENT(5)
STRUCTURE NAME(RRS_MAIN_UR)
INITSIZE(8000)
SIZE(16000)
PREFLIST(CF02,CF01)
REBUILDPERCENT(5)
STRUCTURE NAME(RRS_DELAYED_UR)
INITSIZE(8000)
SIZE(16000)
PREFLIST(CF02,CF01)
REBUILDPERCENT(5)
STRUCTURE NAME(RRS_RESTART)
INITSIZE(8000)
SIZE(16000)
STRUCTURE NAME(RRS_ARCHIVE)
INITSIZE(8000)
SIZE(16000)
PREFLIST(CF02)
REBUILDPERCENT(5)
You can map each log stream to a single structure or you can map log streams of like data types to
the same structure. Here is an example of the JCL to map each RRS log stream to a structure:
//STEP1 EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
DATA TYPE(LOGR)
DEFINE STRUCTURE NAME(RRS_RM_DATA) LOGSNUM(5)
DEFINE STRUCTURE NAME(RRS_MAIN_UR) LOGSNUM(5)
DEFINE STRUCTURE NAME(RRS_DELAYED_UR) LOGSNUM(5)
DEFINE STRUCTURE NAME(RRS_RESTART) LOGSNUM(5)
DEFINE STRUCTURE NAME(RRS_ARCHIVE) LOGSNUM(5)
DEFINE LOGSTREAM NAME(ATR.gname.RM.DATA)
STRUCTNAME(RRS_RM_DATA)
LS_DATACLAS(vsamls)
DEFINE LOGSTREAM NAME(ATR.gname.MAIN.UR)
STRUCTNAME(RRS_MAIN_UR)
LS_DATACLAS(vsamls)
DEFINE LOGSTREAM NAME(ATR.gname.DELAYED.UR)
STRUCTNAME(RRS_DELAYED_UR)
LS_DATACLAS(vsamls)
DEFINE LOGSTREAM NAME(ATR.gname.RESTART)
STRUCTNAME(RRS_RESTART)
LS_DATACLAS(vsamls)
DEFINE LOGSTREAM NAME(ATR.gname.ARCHIVE)
STRUCTNAME(RRS_ARCHIVE)
/*
If you need to delete the log streams and the structures from the coupling facility, use the following
example:
//STEP1 EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
DATA TYPE(LOGR)
DELETE LOGSTREAM NAME(ATR.gname.RM.DATA)
DELETE LOGSTREAM NAME(ATR.gname.MAIN.UR)
DELETE LOGSTREAM NAME(ATR.gname.DELAYED.UR)
DELETE LOGSTREAM NAME(ATR.gname.RESTART)
DELETE LOGSTREAM NAME(ATR.gname.ARCHIVE)
DELETE STRUCTURE NAME(RRS_RM_DATA)
DELETE STRUCTURE NAME(RRS_MAIN_UR)
DELETE STRUCTURE NAME(RRS_DELAYED_UR)
DELETE STRUCTURE NAME(RRS_RESTART)
DELETE STRUCTURE NAME(RRS_ARCHIVE)
/*
In this chapter, we provide an overview of DB2 Common Servers and DB2 UDB and explain their
relationship to stored procedures.
“DB2 Common Servers” is a name that groups six DB2 server products sharing the same DB2 code.
You can think of it as one database system running on different platforms with either an Intel or a UNIX
architecture. The six Common Server products are:
• DB2 for OS/2
• DB2 for AIX
• DB2 for Windows NT
• DB2 for HP-UX
• DB2 for Sun/Solaris
• DB2 for Sinix (Siemens Nixdorf)
The DB2 Universal Database Version 5 (DB2 UDB), announced in December 1996, is the follow-on
product to DB2 Common Servers. DB2 UDB combines the online transaction processing (OLTP)
performance, advanced functions, and relational features of DB2 Common Servers with the parallel
processing and clustering, query performance, and very large database support of DB2 Parallel Edition.
DB2 UDB enhances the integration of an enterprise's total data resources through various forms of
support for the DB2 family (DB2 for OS/390, DB2 for OS/400, and DB2 for VM and VSE). This support
ranges from new Transmission Control Protocol/Internet Protocol (TCP/IP) support and new Distributed
Relational Database Architecture (DRDA) application server (AS) support to direct desktop client access
with DB2 Connect, Web enablement, and middleware (data replication support and centralized database
systems management). With the use of DataJoiner, it is possible to access nonrelational data, such as
IMS and VSAM, and non-IBM databases, such as Oracle, Informix, and Sybase.
DB2 Connect supports the Advanced Program-to-Program Communication (APPC) and TCP/IP
protocols for connections with all DRDA servers. For the current releases, however, only DB2 for
OS/390 and UDB products support the TCP/IP protocol.
DB2 Connect runs on different platforms including Windows 3.11, Windows 95, Windows NT, OS/2, AIX,
HP, Sun, SINIX, and SCO. The Windows 3.11 and Windows 95 products are available only in
single-user versions.
DB2 Connect is the DRDA application requester for DB2 UDB. DB2 Connect Version 5.0 supports the
two-phase commit processing over APPC on OS/2 and AIX only, but supports the two-phase commit
processing over TCP/IP on all platforms.
4.3.3 Net.Data
Net.Data is an application that runs on a Web server and enables you to create Web pages with access
to DB2 data. Net.Data offers tools for building two-tier and three-tier applications that can access DB2
data by using standard HTML and SQL. A common gateway interface (CGI) processes input from the
HTML pages and sends SQL commands to a DB2 database specified in the application you create with
Net.Data.
Applications can access DB2 data on the Internet server or on other servers, using DB2 Client
Application Enablers (CAEs) and DB2 Connect. With the use of DataJoiner, the application can also
access non-DB2 servers.
Net.Data builds on the database access and reporting capabilities of the previous DB2 WWW
Connection product. However, its function has been enhanced to become a comprehensive Web
development tool for the creation of either simple dynamic Web pages or complex Web-based
applications. In addition, it also supports Internet server application program interfaces (APIs) on the
following servers:
• Netscape Server (NSAPI)
• Microsoft Internet Server (ISAPI)
• IBM Internet Connection Server (ICAPI)
The Control Center is a graphical tool that enables you to manage the local server or remote servers
from a single point of control. With the Control Center you can:
• Create, alter, and drop DB2 objects
• Back up and recover databases and table spaces
• Define replication policies
• Configure databases and communication protocols
The Control Center provides SmartGuide to help you perform complex tasks. As an example, when
you are tuning the performance of your system, SmartGuide can guide you through the process.
In DB2 V1 Common Servers, the stored procedures SQL CALL statement was not supported, but there
was an interface with similar functions to stored procedures called Database Application Remote
Interface (DARI). This interface is still supported in DB2 Common Servers V2 and DB2 UDB, and it is
possible to use a DARI call to invoke a stored procedure. For this reason, the name DARI is still used
in DB2 Common Servers V2 and DB2 UDB, for example, in the MAXDARI and KEEPDARI parameters.
In addition, the reference manuals use DARI and stored procedures synonymously. Please keep in
mind that DARI and stored procedures are two separate interfaces, even though they are sometimes
referred to by the same name.
In the sections that follow, we describe the two parameters in the database manager configuration file
that can affect DB2 common-server stored procedures: KEEPDARI and MAXDARI.
If KEEPDARI is set to no (Figure 52 on page 73), a new process is created and destroyed for each
invocation of a stored procedure. Thus only the minimum amount of resources is consumed by the
stored procedure process as the resources are released after each call. However, for each invocation
of a stored procedure, a new process has to be created, thus reducing performance.
If KEEPDARI is set to yes (Figure 53 on page 74), a stored procedure process remains active after the
first call and can be reused for subsequent stored procedure calls. Setting this parameter to yes
results in additional system resources being consumed by the database manager for each process that
is activated, up to the value of the MAXDARI parameter. When KEEPDARI is set to yes, a subsequent
call to a stored procedure first tries to reuse one of the existing processes. If all processes are in use,
an additional stored procedure process is started to handle the call.
In an environment where stored procedures are used intensively relative to the number of
non-stored-procedure database requests, and system resources are not constrained, we recommend
setting the KEEPDARI parameter to yes. This improves the performance of stored procedure calls.
The default value for the MAXDARI parameter is -1, which means that the actual value used for
MAXDARI is the current value of the MAXAGENTS parameter. If you find that too many system
resources are being used for stored procedure processes, you can reduce the MAXDARI parameter.
As a starting point for tuning the MAXDARI parameter, we recommend setting it equal to the number of
applications allowed to issue stored procedure calls at one time.
If you prefer to use the DB2 command line processor, the syntax for viewing the actual settings is:
db2 get database manager configuration
You can also use the DB2 command line processor to set parameters. Here is the syntax for setting
the KEEPDARI parameter to yes and the MAXDARI parameter to 100:
db2 update database manager configuration using MAXDARI 100 KEEPDARI yes
Note that changes to the database manager configuration file become effective only after they have
been loaded into memory. For these server configuration parameters, changes are loaded into
memory during execution of a db2start command.
Fenced stored procedures run in a separate process from the database agent processes. This
separation requires that the stored procedure process and the database agent processes communicate
through a router.
In cases where you want the best possible performance, it is also possible to run a stored procedure
directly in the database agent process, thus eliminating the overhead of the communication through
the router.
Fenced is the default option for running stored procedures (Figure 55). Performance is slightly inferior
to that of unfenced stored procedures, but there is an important advantage: As the stored procedure
operates isolated from the database control structures used by the database agent, an erroneous
stored procedure cannot accidentally damage the database manager control structures.
When nothing separates the stored procedure from the database control structures used by the
database agent, we refer to stored procedures as being unfenced, trusted, or not-fenced (Figure 56 on
page 77). As an administrator, you are confident that the stored procedure does not accidentally or
maliciously damage the database control structures. You “trust” it to operate in a fashion that does
not jeopardize your database control structures.
Because of the risk of damaging your database control structures, use unfenced stored procedures
only when you have to obtain the maximum performance benefits. In addition, ensure that the
procedure is well coded and has been thoroughly tested before allowing it to run as an unfenced stored
procedure.
To identify a stored procedure as being unfenced, you have to place the procedure in a special
directory, usually the \SQLLIB\FUNCTION\UNFENCED directory. Unfenced stored procedures must be
precompiled with the WCHARTYPE NOCONVERT option. Please refer to 7.3, “Stored Procedure
Preparation” on page 111 for more information.
Unlike DB2 on the MVS platform, there is no table or view in the DB2 system catalog for registering
stored procedures in DB2 Common Server V2. However, you can define a pseudo-catalog table,
DB2CLI.PROCEDURES, to DB2 to register stored procedures.
Updating and maintaining the DB2CLI.PROCEDURES table is the responsibility of the database
administrator. An example of how to insert some rows in the table is provided with the
STORPROC.XMP sample file. To run this sample file once the table is created, do the following:
1. Connect to the database.
2. Go to the SQLLIB\MISC directory (on AIX, sqllib/misc).
3. Issue the following command (except on Windows):
db2 -f STORPROC.XMP -z STORPROC.LOG -t
On Windows, the syntax is:
db2cliw -f STORPROC.XMP -z STORPROC.LOG -t
You can check the STORPROC.DDL and STORPROC.XMP files for more details about the
DB2CLI.PROCEDURES table. Initially, all users have SELECT authority, and only users with DBADM
authority can insert, delete, or update rows in this table. A user with DBADM authority can grant
privileges to other users.
You have to create and load DB2CLI.PROCEDURES and build the samples to run PROCS and
PROCCOLS. To invoke PROCS, enter procs at the command prompt. PROCS interactively asks you to
enter the database name, user ID, password, and a “Procedure Schema Name Search Pattern.” The
procedure schema name search pattern is case sensitive. For the sample schema you can enter:
ExampleSchema
PROCS produces the following output:
In this chapter, we consider the interfaces that you can use to code the stored procedure and the client
application.
DB2 on the workstation and DB2 on MVS support two interfaces for application development:
embedded SQL statements and CLI.
Note that DB2 for MVS/ESA Version 4 does not support CLI natively.
Embedded SQL statements are written within the application programming language; programs written
using embedded SQL statements have to be precompiled by an SQL preprocessor or precompiler
before the actual compilation of the program. There are two types of embedded SQL: static and
dynamic.
strcat(insert_stmt, table_name);
strcat(insert_stmt, " VALUES (?)");
CLI, originally defined by Microsoft, the X/Open Company, and the SQL Access Group, has been
adopted as an international standard (ISO/IEC 9075-3: "SQL - Call-Level Interface").
The CLI is a facility within DB2 for processing dynamic SQL statements.
CLI uses function calls to pass dynamic SQL statements as function arguments. It does not require
host variables or a precompiler. Programs that use CLI must be written in C. The CLI relies on a set
of function calls that can be embedded in a C program and compiled by a conventional C compiler. By
linking the program to the CLI library, you have access to all the SQL facilities of the system. The CLI
functions conform to a standard that is widely implemented, and therefore usable from a variety of
database products.
The DB2 implementation of CLI is compatible with the widely used Microsoft version called Open
Database Connectivity (ODBC). Various levels of DB2 implement different levels of ODBC. For
example, DB2 Common Server Version 2 supports ODBC Level 1 and most of the functions in ODBC
Level 2. DB2 UDB Version 5 supports ODBC Level 3. DB2 CLI incorporates both ODBC and X/Open
CLI functions, both of which are accepted industry standards.
Although you can use both embedded SQL statements and CLI in the same program, usually you have
to choose between embedded SQL and CLI. Both interfaces have their advantages.
For the workstation platform, you also can access result sets in this way because CLI is used as the
interface for the client applications. On the MVS platform you can use CLI or a supported host
language.
In this chapter, we explain how to code stored procedures for use with DB2 Version 4 or Version 5
servers. We focus on rules for writing stored procedures, differences among languages, dealing with
parameters, and preparing a stored procedure program.
Planning to write a stored procedure for DB2 is similar to planning to write any DB2 application. You
can use most of the statements that you typically use in an application program.
The sections that follow contain information to help you in the process of coding stored procedures.
6.1.1 Languages
You can write your stored procedures in many different languages. In DB2 on MVS, you can use C,
C++, COBOL, OO COBOL, PL/I, Assembler, or FORTRAN. You can also use VisualGen to generate
COBOL code for your stored procedure.
To use C++ and OO COBOL, you must ensure that you have APAR PN78797-PTF UN86554 installed in
your DB2 on MVS system. C++ requires that you install LE/370 Version 1 Release 4. OO COBOL
requires LE/370 Version 1 Release 5.
Although you can use the VS COBOL II compiler for DB2 client applications, you cannot use it to
compile stored procedures. To compile stored procedures written in COBOL, you must use the IBM
SAA AD/Cycle COBOL/370 Version 1 Release 1 compiler, or a later version. We used IBM COBOL for
OS/390 and VM 2.1.0 (5648-A25) for our tests.
To take advantage of certain stored procedures features, such as the subprograms feature in DB2 V5,
you need LE/370 Version 1 Release 7. Also, some important stored procedure features in future
versions of DB2 require LE/370 Version 1 Release 7, so we recommend you install LE/370 at this or a
higher level.
Refer to 1.3, “Software Prerequisites for Stored Procedures” on page 3 for compiler levels required to
compile stored procedures.
You can write your client applications using any language that supports SQL statements or CLI
application programming interfaces (APIs). The client program does not have to be written in the
same language as the stored procedure. For more information about coding client applications, see
Chapter 8, “Coding Client Applications” on page 117.
6.1.2 LE/370
LE/370 establishes a common run-time environment for different programming languages. It combines
essential run-time services, such as condition handling and storage management. All of these
services are available through a set of interfaces that are consistent across programming languages.
With LE/370, you can use one run-time environment for your applications, regardless of the
application′s programming language or system resource needs.
DB2 on MVS uses LE/370 to provide a run-time environment for the stored procedure programs. You
can have stored procedures written in different languages. All of these stored procedures can execute
in the same stored procedures address space. Thus, using LE/370, you do not have to set up a
separate run-time environment for each programming language.
LE/370 performs several functions for DB2. It hides the differences among programming languages,
provides the ability to make a stored procedure resident in the stored procedures address space, and
supports a large number of run-time options, including the possibility to debug your stored procedures.
You can use LE/370 run-time options to invoke the CODE/370 debugger or use the VisualDebugger.
If you are using LE/370 Releases 1 or 2, the value of the STAYRESIDENT column in the
SYSIBM.SYSPROCEDURES table is not recognized. In this case, the load module always remains
resident after the first call to the stored procedure.
As other programming languages become supported by LE/370, it will be possible to add support for
them in DB2 on MVS.
Coding a stored procedure is similar to coding any DB2 application. However, there are some
differences and some rules that you must follow when coding a stored procedure.
Although a stored procedure is invoked through an SQL CALL statement, it is not a subroutine. It can
contain or invoke subroutines, but it must be the main program in the DB2-established address space.
For the WLM-established address spaces, the stored procedure can be executed as a main program or
as a subprogram.
6.2.1.1 Contents of Stored Procedures: A stored procedure can contain both static and
dynamic SQL statements. It can contain DDL, DML, or DCL SQL statements. You can use
instrumentation facility interface (IFI) calls to issue DB2 commands from a stored procedure. You can
reference aliases or three-part names in a stored procedure. However, for DB2 Version 4 and DB2
Version 5, the following SQL statements cannot be used in a stored procedure:
• CALL
• COMMIT
• CONNECT
• RELEASE
• ROLLBACK
• SET CONNECTION
• SET CURRENT SQLID
Future versions of DB2 may support the CALL, CONNECT, SET CONNECTION, and RELEASE SQL
statements.
Although for DB2 Version 4 and DB2 Version 5 you cannot issue an SQL CALL statement in your stored
procedure, you can call other programs or routines from a stored procedure by using statements of the
programming language. You can even call a REXX procedure.
If you try to use any of the unsupported SQL statements, your stored procedure and the client program
receive a -751 SQLCODE. This is an exceptional case in which the client program directly receives the
failing SQLCODE that the stored procedure receives. When your stored procedure receives a -751
SQLCODE, the DB2 thread associated with it is placed in a “must rollback” state.
If, because of an error condition, you want to ensure a rollback in the unit of work, you can code the
following ROLLBACK statement in your stored procedure:
IF error-condition THEN
EXEC SQL ROLLBACK END-EXEC.
Because ROLLBACK is an invalid SQL statement for stored procedures, your unit of work is placed in
a “must rollback” state.
6.2.1.2 Call Attachment Facility Calls: In the DB2-established address space, stored
procedures use CAF calls implicitly; therefore, the stored procedure must be link-edited with CAF.
For WLM-established address spaces, DB2 uses the RRSAF attachment, not CAF.
If you do not link-edit the stored procedure with CAF or RRSAF, you must call a stub program to load
and branch to CAF or RRSAF. The implementation of the stub program is described in the DB2 for
MVS/ESA Application Programming and SQL Guide . The advantage of using the stub program is that
your application remains isolated from DB2 code. Therefore, you do not have to link-edit your stored
procedure again if maintenance must be applied in the CAF or RRSAF code. If you try to use explicit
CAF calls (such as CALL DSNALI CONNECT, OPEN, CLOSE, or DISCONNECT) or RRSAF calls (such as
IDENTIFY, SIGNON, CREATE, TERMINATE, or TRANSLATE) in your stored procedure, DB2 rejects the
CALL.
Figure 58 shows the use of three-part names in DB2 on MVS stored procedures.
System-directed accesses are treated as dynamic SQL statements. You do not have to create a
package for the stored procedure at the remote DB2 server that you are accessing with this method.
The stored procedure from which you are using system-directed access can be precompiled using
CONNECT TYPE 1 or 2.
6.2.2.1 Client Program in MVS: If your client program is running under MVS and using
CONNECT TYPE 2, you cannot use the SQL CONNECT statement to connect to the same remote
location that the stored procedure is accessing. If the client program connects to a remote location
through the SQL CONNECT statement and then calls a stored procedure that uses system-directed
access to the same remote location, the stored procedure access fails with an SQLCODE -842.
Figure 59 shows this scenario.
If the client program calls a stored procedure that uses system-directed access to a remote location
and then tries to connect to the same remote location through the SQL CONNECT statement, this
connection fails with an SQLCODE -842. Figure 60 shows this scenario.
If you must access the same remote location from a client program that uses CONNECT TYPE 2, you
can issue an SQL RELEASE statement for the remote location, followed by an SQL COMMIT statement.
These statements must be executed before or after the call to the stored procedure, depending on
when you want to connect to the remote location. If you use these statements, you terminate your unit
of work. If you do not want to terminate your unit of work but still want to access tables at the same
remote location, you can use system-directed access in your client program to the same remote
location that the stored procedure is accessing. In this case, use a three-part name and do not issue
the SQL CONNECT statement.
6.2.2.2 Client Program in DB2 for OS/2 or AIX: As shown in Figure 61, unlike DB2 on MVS, if
your client program is running on the workstation platform using CONNECT TYPE 2, you can connect in
the same unit of work to the same remote location that is accessed by the stored procedure.
Note that a thread is created in DB2B when you issue the CONNECT statement to connect to DB2B.
When the stored procedure executes the SQL statement with the three-part name specification,
another thread is created in DB2B for the same unit of work. In this scenario, be careful when updates
are made from the client program and the stored procedure on the same data. You may get into a
lock problem situation because the threads are different.
If you have precompiled your program with CONNECT TYPE 1, you must issue an SQL COMMIT
statement before or after the call to the stored procedure that is using system-directed access. If you
do not commit, your client program receives an SQLCODE -30090 when trying to call the stored
procedure or in the SQL CONNECT statement.
The PARMLIST column of the SYSIBM.SYSPROCEDURES table defines the parameters used by the
stored procedure. It also defines the data type, size, and purpose (input, output, or both) of the
parameters. The syntax for defining the PARMLIST column is explained in 2.3.1.2, “Defining the
PARMLIST Column” on page 16.
One output parameter you should consider including in your stored procedure is information in the
SQLCA. The SQLCA of the stored procedure is not automatically sent to the client program. Including
some of the SQLCA parameters, such as the SQLCODE, as an output parameter enables you to check
the successful execution of SQL statements in the stored procedure. You can code your stored
procedure to copy information from the current SQLCA to an output parameter when an SQL
statement fails. If you want information about all SQL statements in your stored procedure, you must
have one parameter for each SQL statement to send information to the client program.
6.2.3.2 Passing Nulls to Stored Procedures: Another issue you must consider when defining
stored procedure parameters is whether the stored procedure accepts nulls as input parameters and
can nullify output parameters. This is defined in the SYSIBM.SYSPROCEDURES table in the LINKAGE
column. There are two possible linkage conventions for the parameters: SIMPLE and SIMPLE WITH
NULLS.
SIMPLE Linkage Convention: If you use the SIMPLE linkage convention, the client application can
pass null only for output parameters; nulls for input parameters are not accepted, and the stored
procedure cannot return nulls in the output parameters.
The stored procedure must have a parameter defined for each parameter passed in the SQL CALL
statement. Figure 62 on page 91 shows the parameter list when you use the SIMPLE linkage
convention.
SIMPLE WITH NULLS Linkage Convention: If you use the SIMPLE WITH NULLS linkage convention, the
input parameters can be null. You can use indicator variables in the client program or the NULL
keyword of the SQL CALL statement. The stored procedure can also return null values in output
parameters by using the indicator variables.
The indicator variables are passed to the stored procedure as a single parameter, containing an array
of SMALLINT variables, one for each of the stored procedure parameters. Figure 63 shows the
parameter list when you use the SIMPLE WITH NULLS linkage convention.
The stored procedure must contain a parameter for each parameter passed in the SQL CALL
statement and a structure of indicator variables containing one indicator variable for each parameter,
even if you code your stored procedure to receive only one parameter.
The stored procedure must determine which input parameters are null by examining the indicator
variables array. The stored procedure must also assign values to the indicator variables when
returning the output parameters to the client program.
Using Nulls to Reduce Network Traffic: When a client program issues an SQL CALL statement, all
specified parameters are sent to the server, regardless of their definition as input or output
parameters.
The stored procedure does not examine values sent by the client application that map to output
parameters. When you have large output parameters, you may want to avoid the transmission of the
output parameters to the server.
You can use indicator variables for the output parameters to avoid sending large amounts of data
through the network. If the indicator variable associated with the output parameter contains a negative
value, the value of the parameter is not sent to the server, and only a null indicator flows through the
network. This technique can be used with the SIMPLE or the SIMPLE WITH NULLS linkage convention.
You can also prevent large parameters that are defined as INOUT but are only being used in one
direction from being sent in the other direction. All you have to do in the stored procedure or the
client program is set the indicator variables associated with the parameters to a negative value. In
this case, you must be using the SIMPLE WITH NULLS linkage convention. Table 6 shows which
parameters can be nullified by the client application and the stored procedure when the SIMPLE
linkage convention is used.
Table 7 shows which parameters can be nullified by the client application and the stored procedure
when the SIMPLE WITH NULLS linkage convention is used.
One reason why DB2 on MVS requires the specification of a parameter as input or output is that the
specification reduces the flow of the parameters as follows:
• The values (nulls or some value) of input parameters are not transmitted through the network to
the client application when the stored procedure ends.
• When the stored procedure is called or ends, output (or input/output) parameter values are
transmitted through the network only if the associated indicator variable has a positive value. If
the indicator variable has a negative value, the parameter value is not transmitted through the
network.
When the stored procedure executes, output parameters always contain X'00', regardless of
whether they were transmitted through the network.
6.2.3.3 Receiving Parameters in the Stored Procedure: DB2 on MVS uses the stored
procedure parameters you define to receive the parameters passed in the SQL CALL statement and
send parameters back to the client program. For DB2 Version 4, it is not possible to use (send or
receive) sets of values (result sets) as parameters. DB2 Version 5 supports multiple result sets.
The client program passes the parameters in the SQL CALL statement, using host variables, constants
(DB2 on MVS only), or an SQLDA. However, the stored procedure in DB2 on MVS always receives the
parameters in program variables. Unlike with DB2 Common Servers or DB2 UDB, you cannot use an
SQLDA to receive parameters. You must consider this difference when porting stored procedures from
one server to the other.
Code examples on how to receive the parameters in a stored procedure are presented below.
Assume that two parameters (PARM1 and PARM2) are being passed. PARM1 is an input parameter
defined as SMALLINT in the SYSIBM.SYSPROCEDURES table, and PARM2 is an output parameter
defined as CHAR(10) in the SYSIBM.SYSPROCEDURES table. The linkage convention for the samples
is SIMPLE WITH NULLS.
You must test the indicator variable for the input parameter to check whether it is null. You must
assign a value to the output parameter and its corresponding indicator variable.
PL/I Stored Procedure: For PL/I stored procedures, you must specify the compile option,
SYSTEM(MVS), and the run-time option, NOEXECOPS.
*PROCESS SYSTEM(MVS);
SP1: PROCEDURE (PARM1,PARM2,INDSTRUC) OPTIONS(MAIN NOEXECOPS REENTRANT);
/* In the PROCEDURE statement you must specify the two parameters */
/* and the indicator variables structure. */
/* */
DCL V1 BIN FIXED(15),
V2 CHAR (10);
DCL 01 INDSTRUC,
02 INDVAR1 BIN FIXED(15),
02 INDVAR2 BIN FIXED(15);
.
.
.
/* You must test the indicator variable for the input parameter */
/* in order to check if it is null */
IF INDVAR1 < 0 THEN
CALL NULLPROC;
.
.
/* You must assign a value to the output parameter and its */
/* corresponding indicator variable */
PARM2 = 'ALINE';
INDVAR2 = 0;
/* If you want to return a null value to the output parameter */
/* you must move a negative value to the indicator variable */
C Stored Procedure:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
EXEC SQL BEGIN DECLARE SECTION;
short parm1;                              /* Declarations of the stored   */
char parm2[11];                           /* procedure parameters and the */
struct ind {                              /* indicator array. The size of */
short indvar1;                            /* parm2 is 11: character       */
short indvar2;                            /* strings in C are terminated  */
} indstruc;                               /* with a null character.       */
EXEC SQL END DECLARE SECTION;
/* argc contains the number of parameters being passed */
/* argv is an array of strings containing the parameter values */
void main(int argc, char *argv[])
{
memcpy(&indstruc, (struct ind *) argv[3],  /* Get contents of the        */
       sizeof(indstruc));                  /* indicator variables array  */
if (indstruc.indvar1 < 0)                  /* Test the input parameter for nulls */
{
.
.
.
}
else {
If you are using C/370 Version 1 Release 1, your C program may need the following run-time option in
order to receive the parameters correctly:
#pragma runopts(PLIST(MVS))
If the program was ported from another platform and the [ and ] symbols do not display correctly, you
have to edit the source code and enter the [ and ] symbols using the hexadecimal values X'AD' and
X'BD'.
6.2.3.4 Rules for Assigning Parameters: To send input parameters from the client program to
the stored procedure and output parameters from the stored procedure to the client program, DB2
uses a parameter list intermediate area. This intermediate area is based on the data type and size
defined in the PARMLIST column of the SYSIBM.SYSPROCEDURES table.
When passing input variables, DB2 copies the values of the client program variables to the
intermediate area, using the “store assignment” rules of the SQL ISO/ANSI standard. Then DB2
copies the values from the intermediate area to the stored procedures variables, using DB2 rules for
assigning values to host variables. Figure 64 shows the assignment of input parameters.
The “store assignment” rules of the SQL ISO/ANSI standard are the same as the DB2 rules for
assigning values to host variables, with one exception: When the input value is a string that is longer
than the target, an error occurs if the excess characters are not blanks. If the excess characters are
blanks, they are discarded, and the truncated string is assigned to the target. When the store
assignment uses DB2 rules, the string is truncated even if the excess characters are not blank.
6.2.3.5 Data Type Conversion: When passing parameters to a stored procedure, use the same
definitions both for the variables related to the parameters in the client program and those in the
stored procedure. These variables should also be compatible with the definition of the parameters in
the PARMLIST column of the SYSIBM.SYSPROCEDURES table.
In some cases, when you are assigning values to parameters, DB2 may have to use data type
conversion. Data type conversion is performed automatically by DRDA. However, check the
compatibility between SQL data types and the programming language data types.
Character data types are compatible with each other. Variable-length or fixed-length definitions are
compatible with both CHAR and VARCHAR. However, for assignments where SQL ISO/ANSI standard
rules are in effect, it may not be possible to move the values between character strings of different
sizes. In this case, the client program receives an SQLCODE -302 after the SQL CALL statement.
Character data types are not compatible with numeric data types. If your client program tries to pass
character data to numeric stored procedure parameters, an SQLCODE -301 is sent to the client
program after the SQL CALL statement.
Numeric data types are compatible with each other. You can use any numeric data type definition to
assign values to SMALLINT, INTEGER, DECIMAL, REAL, or FLOAT parameter types. However, if the
target parameter cannot receive the value (for example, if a number greater than 32767 is assigned to
a SMALLINT parameter), an SQLCODE -406 is sent to the client program after the SQL CALL
statement.
Graphic data types are compatible with each other. A GRAPHIC or VARGRAPHIC parameter is
compatible with fixed-length or variable-length graphical programming language data types.
DATE, TIME, and TIMESTAMP SQL data types are not supported as stored-procedure parameter data
types. These data types must be defined as character data types in both the client program and the
PARMLIST column. In this case, no value checking occurs while assigning the parameter values.
The called programs can be used for specific functions, such as validation of parameters or complex
calculations, and can contain any SQL statements that are valid in a stored procedure.
All called programs that contain SQL statements require a DB2 package. The packages of the called
programs do not have to belong to the same collection as the package of the stored procedure. If you
specify a different collection for the called program, you must code a SET CURRENT PACKAGESET to
the called program collection, before calling the program. After calling the program, you may have to
issue the SET CURRENT PACKAGESET statement to the stored procedure collection, if the stored
procedure has SQL statements after the call. If you do not execute a SET CURRENT PACKAGESET, the
called program package must belong to the same collection as the stored procedure package.
When the stored procedure ends, the CURRENT PACKAGESET special register is restored to the value
it had before the SQL CALL statement. As shown in Figure 66, the collection used for the client
program is COLL1. The package for the stored procedure (PKGSTPC) is in collection COLL2. The
stored procedure invokes a program (ROUTINE3) whose package is in collection COLL3. Therefore,
before invoking ROUTINE3, the stored procedure must use the SET CURRENT PACKAGESET command
to set the collection to COLL3. After execution of ROUTINE3, any SQL statement executed by the
stored procedure is searched in PKGSTPC in collection COLL3. If PKGSTPC was not also bound in
collection COLL3, the SQL statement is not found. If this is the case, you must code in your stored
procedure the SET CURRENT PACKAGESET command to reset the collection to COLL2.
On completion of the stored procedure, the CURRENT PACKAGESET register is reset to COLL1.
Therefore, the client program does not have to issue the SET CURRENT PACKAGESET command.
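The collection switching described above can be outlined as the following embedded SQL sequence. This is a sketch, not a complete program; the collection and program names are those of Figure 66:

```
EXEC SQL SET CURRENT PACKAGESET = 'COLL3';   /* before invoking ROUTINE3 */
   ... invoke ROUTINE3 with a language CALL (not an SQL CALL) ...
EXEC SQL SET CURRENT PACKAGESET = 'COLL2';   /* back to the stored
                                                procedure's collection  */
EXEC SQL UPDATE ... ;                        /* found in PKGSTPC, COLL2 */
```

On completion of the stored procedure, DB2 restores CURRENT PACKAGESET to the client program's collection automatically.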
If you are calling the stored procedure from a local DB2 client application, the packages of the stored
procedure and of all called programs must be bound into the client application plan.
The called program does not need to use LE/370. You can even call a REXX procedure:
FETCH IRXJCL;
CALL IRXJCL (PARM_STRUCT);
PUT SKIP EDIT ('RETURN CODE FROM IRXJCL WAS : ', PLIRETV) (A, F(4));
Although you can pass parameters to the REXX procedure, the only way for REXX to pass data back to
the stored procedure is through ISPF variables, so the usefulness of calling a REXX procedure from a
PL/I stored procedure can be limited.
If you want to use a reentrant environment, you must explicitly call the initialization routine, IRXINIT, to
initialize the environment. TSO/E REXX automatically initializes nonreentrant environments only.
When you invoke IRXINIT to initialize a reentrant environment, you must set the RENTRANT flag on.
Refer to OS/390: TSO/E REXX Reference for more information.
The tasks involved in preparing a stored procedure to run in a DB2 on the MVS environment are
basically the same as the tasks for any DB2 program. You must precompile, compile, and link-edit the
stored procedure. You must also bind a package for the stored procedure. An additional task to be
performed when preparing a stored procedure is the definition of the stored procedure in the
SYSIBM.SYSPROCEDURES table.
The client program package and the stored procedure package do not have to belong to the same
collection. The COLLID column of SYSIBM.SYSPROCEDURES specifies the collection that contains the
stored procedure package. If the COLLID value is blank, DB2 uses the collection name of the client
program package.
A stored procedure may have more than one package. For example, when you call other programs
that access DB2 resources, each program must have its own package.
The client program requires a plan if it runs in an MVS environment. If it is a remote client, the client
program plan does not have to include the stored procedure package. Up to DB2 Version 5, if the
client program runs in the same location as the stored procedure, the client program plan must include
the stored procedure package and the packages of the programs that the stored procedure invokes.
If your stored procedure does not contain SQL statements (for example, a stored procedure that uses
only IFI calls), it does not require a package to be created.
Additional privileges are required on each package used by the stored procedure during its execution.
The application server determines the privileges that are required and the authorization ID that must
have the privileges. If the server is DB2 on MVS, the privileges and authorization ID depend on the
syntax for specifying the stored procedure to be invoked in the SQL CALL statement, as we explain in
6.3.3.1, “Specifying a Procedure Name” and 6.3.3.2, “Specifying a Host Variable.”
6.3.3.1 Specifying a Procedure Name: For programs containing an SQL CALL statement that
specifies the procedure name, the owner of the package or plan containing the SQL CALL statement
must have at least one of the following privileges on each package used by the stored procedure
during its execution:
• EXECUTE privilege on the package
• OWNER of the package
• PACKADM authority for the package's collection
• SYSADM authority
6.3.3.2 Specifying a Host Variable: For programs containing an SQL CALL statement that
specifies a host variable for the procedure name, the privilege set (see below) must include at least
one of the following on each package used by the stored procedure during its execution:
• EXECUTE privilege on the package
• OWNER of the package
• PACKADM authority for the package's collection
• SYSADM authority
The privilege set is the union of the privileges held by:
• The OWNER of the package or plan containing the SQL CALL statement. In the case of ODBC or
CLI applications, this is the OWNER of the package or plan associated with the ODBC or CLI driver.
• The primary SQL authorization ID of the application process
• The secondary SQL authorization IDs associated with the application process
The difference between specifying a procedure name and a host variable is this: For a procedure
name, the authorization ID of the package or plan OWNER must have one (or more) of the required
privileges, while for a host variable, the authorization ID that must have one (or more) of the privileges
is:
• The package or plan OWNER
• The primary authorization ID executing the SQL CALL
• One of the primary authorization ID's secondary authorization IDs
To prepare your stored procedure as reentrant, you must compile it as reentrant and link-edit it as
reentrant and reusable. To compile the program as reentrant, you must use the appropriate compiler
option:
• For COBOL, use the RENT compiler option.
• For C, use the RENT compiler option and invoke the C/370 prelink utility. Refer to the C/370
manuals for more information on the prelink utility.
• For PL/I, use PROC OPTIONS(REENTRANT).
Besides compiling the program as reentrant, you must specify the RENT and REUS options for the
linkage editor. This specification is also necessary to produce reentrant and reusable load modules.
Here is sample JCL for compiling and link-editing a reentrant stored procedure:
//PREPS01 EXEC DSNHCOB2, ....
// PARM.COB=(RENT, ....
// PARM.LKED=(RENT, REUS, ...
LE/370 is responsible for handling resident stored procedures. The minimum level of LE/370 to have
stored procedures resident is LE/370 Version 1 Release 3.
If your stored procedure is not reentrant, it cannot stay resident. For nonreentrant stored procedures
you must specify N in the STAYRESIDENT column of the SYSIBM.SYSPROCEDURES table.
We recommend that you implement all production stored procedures as reentrant and reusable and
specify Y in the STAYRESIDENT column of SYSIBM.SYSPROCEDURES.
If you implement your stored procedure with the nonreusable attribute, you must specify N for the
STAYRESIDENT column of SYSIBM.SYSPROCEDURES.
Any other combination can lead to problems. For example, if you implement your stored procedure
with the nonreusable attribute and specify Y in the STAYRESIDENT column of
SYSIBM.SYSPROCEDURES, DB2 on MVS loads one copy of the stored procedure's load module for
every SQL CALL statement issued for the stored procedure. Eventually, the stored procedures address
space runs out of virtual storage.
If you compile your stored procedure as nonreentrant, you must link-edit it with the NOREUS and
NORENT attributes. If you do not follow this rule, the results will be unpredictable.
Every stored procedure must have at least one row in the SYSIBM.SYSPROCEDURES table. You can
populate the table by using SQL statements or the load utility. Listed below is an example of an SQL
INSERT statement to insert data in the SYSIBM.SYSPROCEDURES table of DB2 Version 4 for a stored
procedure with the following characteristics:
• Stored procedure name is TS0BMS.
• Any user at any location can execute it.
• The load module name for the stored procedure is TS0BMS.
• The input parameters cannot be null.
• The collection name for the stored procedure package is TS0BMS.
• The stored procedure is written in COBOL.
• There is no service units limit for the stored procedure.
• It does not stay resident after execution.
• There is only one LE/370 run-time option: TEST(,,,SC02130I).
• It uses two parameters. Each can be used for input and output. The first is a character variable of
length 10, and the second, an integer variable.
INSERT INTO SYSIBM.SYSPROCEDURES
       (PROCEDURE, AUTHID, LUNAME, LOADMOD,
        LINKAGE, COLLID, LANGUAGE, ASUTIME,
        STAYRESIDENT, IBMREQD, RUNOPTS,
        PARMLIST)
VALUES ('TS0BMS',                         --> PROCEDURE
        ' ',                              --> AUTHID
        ' ',                              --> LUNAME
        'TS0BMS',                         --> LOADMOD
        ' ',                              --> LINKAGE
        'TS0BMS',                         --> COLLID
        'COBOL',                          --> LANGUAGE
        0,                                --> ASUTIME
        ' ',                              --> STAYRESIDENT
        'N',                              --> IBMREQD
        'TEST(,,,SC02130I)',              --> RUNOPTS
        'CHAR(10) INOUT, INTEGER INOUT'); --> PARMLIST
You can also use the DSNTIAD sample program to run this insert in your preparation JCL.
For this project we made heavy use of the Database 2 Administration Tool for MVS/ESA (DB2 Admin)
program product (5688-515). We installed a beta version that runs under DB2 for OS/390 V5. This is
equivalent to having applied the PTF UQ12144. Its panels simplify many administration tasks, such as:
• Adding and updating entries to the communications database (CDB)
• Adding, updating, and deleting entries in SYSIBM.SYSPROCEDURES
• Starting, stopping, and displaying stored procedures
Figure 69. Using the DB2 Administration Tool to Manage Stored Procedures
If you are changing information in the SYSIBM.SYSPROCEDURES table for a stored procedure already
defined, you may have to issue a START PROCEDURE command to update the information in the DB2
on MVS buffers. If you do not issue the START PROCEDURE command, the old information will still be
used, even after you change the SYSIBM.SYSPROCEDURES table.
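For example, assuming a procedure named TS0BMS, the refresh command issued from the DB2 console would be:

```
-START PROCEDURE(TS0BMS)
```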
With this technique, you can grant write privileges to programmer SILVIO for this view only.
In this chapter, we discuss the coding of stored procedures that execute in a workstation environment.
Here, any reference to DB2 on the workstation refers to DB2 Common Server or DB2 UDB. As with
any other DB2 on the workstation application, one of the first things you have to decide is which
language and interface to use.
7.1 Languages
DB2 on the workstation supports the C, C++, COBOL, and FORTRAN programming languages
through their precompilers. In addition, DB2 on the workstation supports the REXX language through a
dynamic interpreter.
Note: Although DB2 on the AIX platform supports IBM AIX REXX/6000, it is not possible to code stored
procedures in REXX for DB2 on the AIX platform.
If the stored procedure is written with embedded SQL in one of the supported host languages, it must
be precompiled, compiled, and link-edited. Special compile options must be used (refer to 7.3, “Stored
Procedure Preparation” on page 111 for details), a library (or a function in a library) must be
produced, and the stored procedure must be bound to the database on the server before it can run.
The stored procedure cannot be executed directly from an executable (.EXE file).
If the stored procedure is written with CLI, you do not have to precompile it and you do not have to
bind it to the server, but you do have to compile and link-edit it according to the above rules.
If the stored procedure is written in REXX, there are no precompile, compile, link-edit, or bind tasks.
The stored procedure code resides in a .CMD file.
Note that for all these steps the stored procedure must use the same code page as the database.
For testing, it is usually easier to test first on a single machine. Stored procedures can also be
developed and tested on a stand-alone DB2. Once the stored procedure performs without errors, it
can be moved to a remote server. Remember, when you move a stored procedure to another server,
you have to rebind it to create a package at the new server database.
The stored procedure executes when called by the client application. When the stored procedure is
invoked, the following occurs on the server:
1. The database manager generates an SQLDA data structure when the SQL CALL statement is
executed. Host variables are always passed through this SQLDA data structure.
2. The stored procedure accepts the SQLDA data structure from the client application.
3. The stored procedure executes on the database server under the same transaction as the client
application.
4. If output data is to be sent back from the stored procedure to the client, the output variables of the
SQLDA are used. The same variables can be used for both input and output.
5. The stored procedure copies the SQLCA information explicitly to the SQLCA parameter of the
stored procedure so that the calling program can check whether the stored procedure executed
correctly.
6. The stored procedure finishes processing and returns control to the client.
The parameters of the SQL CALL statement are treated as both input and output parameters and
converted to the following format for the stored procedure:
struct sqlda *inoutsqlda;
struct sqlca *sqlca;
void *reserved1, *reserved2;
Unlike DB2 on the MVS platform, when the stored procedure is invoked in DB2 on the workstation, the
database manager provides the SQLDA to the stored procedure, regardless of how you code the CALL
statement in the client application. In other words, an SQLDA is passed to the stored procedure
regardless of whether you issue the CALL statement using host variables or the SQLDA.
Because the database manager automatically allocates the SQLDA structure at the database server,
do not allocate it, and do not alter any storage pointer for the input and output parameters. If you
attempt to replace a pointer with a locally created storage pointer, you receive an error with an
SQLCODE -1133.
Chapter 7. Coding Stored Procedures for DB2 Common Servers and DB2 UDB 107
7.2.3.1 C Example: In this example, we transfer a character string and a small integer through
the SQLDA from the client application to the stored procedure. First, the stored procedure is declared,
with the declaration accepting pointers to the SQLDA and SQLCA structures:
.....
SQL_API_RC SQL_API_FN cc22sbo1(void *reserved1,
void *reserved2,
struct sqlda *io_da,
struct sqlca *ca)
.....
We also have to declare working variables to facilitate the handling of information in the SQLDA in the
stored procedure. For our example we use:
.....
/* Declare Miscellaneous Variables */
.....
char *data_items[1];
short data_items_length[1];
short *data_itemy;
.....
The first SQLVAR in the SQLDA points to a character string; we can, for example, transfer it to the
data_items pointer:
.....
data_items[0] = io_da->sqlvar[0].sqldata;
data_items_length[0] = io_da->sqlvar[0].sqllen;
.....
The second SQLVAR in the SQLDA points to a small integer, which we can assign, for example, to the
data_itemy pointer:
.....
data_itemy = (short *) io_da->sqlvar[1].sqldata;
.....
7.2.3.2 REXX Example: In the REXX stored procedure, the values passed through the SQLDA are
received in the SQLRIDA stem variable. Output variables are sent back in the SQLRODA stem
variable. In the following list, n is a numeric value indicating a specific SQLVAR element in the
SQLDA:
sqlrida.sqld The number of variables in the SQLDA
sqlrida.n.sqldata Contains the data of the variable
sqlrida.n.sqltype The type of the variable
sqlrida.n.sqllen Contains the length of the variable
sqlrida.n.sqlind Contains the indicator variable
In the example below, a REXX stored procedure is called from a client application, receiving, as in our
C example, a character string and a small integer through the SQLDA. The database manager
retrieves the values from the SQLDA and adds them to the REXX variable pool. Adding the SQLDA
values to the REXX variable pool makes coding quite simple as the variables are immediately available
in the SQLRIDA stem variable.
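A minimal sketch of what such a procedure might look like follows; the variable names, values, and the computation are illustrative assumptions, not the sample shipped with DB2:

```rexx
/* Sketch of a REXX stored procedure: input arrives in the   */
/* SQLRIDA stem; output is returned through the SQLRODA stem */
numvars = sqlrida.sqld              /* number of parameters  */
instr   = sqlrida.1.sqldata         /* CHAR(10) input value  */
innum   = sqlrida.2.sqldata         /* SMALLINT input value  */

sqlroda.1.sqldata = 'REPLY'         /* character output      */
sqlroda.2.sqldata = innum + 1       /* integer output        */
sqlroda.1.sqlind  = 0               /* outputs are not null  */
sqlroda.2.sqlind  = 0
return 0
```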
If you want the stored procedure's library to be deleted from memory, specify the following return
code:
SQLZ_DISCONNECT_PROC
Figure 70 shows this process.
If the stored procedure is invoked only once, SQLZ_DISCONNECT_PROC should be returned to free
main memory.
If you want to implement a stored procedure to remain in memory after its execution, code the
following as the return statement in your stored procedure program (example for a C stored
procedure):
.....
return(SQLZ_HOLD_PROC);
}
Here is the example for a COBOL program:
.....
move SQLZ_HOLD_PROC to return-code
goback.
In this case, the database manager keeps the stored procedure's library in memory, so the library
does not have to be loaded when the stored procedure is invoked. This option may lead to better
performance. Figure 71 shows this process.
If the client application issues multiple calls to invoke the same stored procedure, it is better to code
SQLZ_HOLD_PROC as the return value of the stored procedure. If you code SQLZ_HOLD_PROC, the
stored procedure's library is not deleted from memory, and subsequent calls to this procedure result in
better performance. The last invocation of the stored procedure should return with
SQLZ_DISCONNECT_PROC to free main memory from the server. Otherwise, the library remains in
memory until the database manager is stopped. We recommend that, in the last call to the stored
procedure, the client application send a parameter indicating the final call, at which point the stored
procedure returns SQLZ_DISCONNECT_PROC.
If you have an application that is infrequently used but consists of several programs executed
sequentially that invoke the same stored procedure, pass a parameter to the stored procedure so that
it can specify SQLZ_HOLD_PROC for all programs except the last.
Refer to Chapter 14, “DB2 Common Server Performance Considerations” on page 299.
The values of the indicator variables are not reset after the transfer. If, for example, the client program
sets the value of the indicator variable of an output-only host variable to a negative value, the stored
procedure must set the value to zero or a positive value if the parameter is to be passed back to the
client program.
If the stored procedure sets the value of the SQLVAR indicator variable of an input-only variable to a
negative value, the next time the client program invokes the stored procedure it should reset the value
to zero or a positive value for the parameter to be passed to the stored procedure.
For example, consider a stored procedure that calculates the highest salary and returns that value to
the client. In the client application that calls the stored procedure, the maximum salary host variable
(maxsal) has no value before the call. To avoid network traffic, its indicator variable value should be
set to a negative value. The indicator variable value and the client CALL statement would be:
maxsalind = -1;
EXEC SQL CALL storproc(:maxsal:maxsalind);
When the stored procedure calculates and sets the maxsal value, it should also reset the value of the
maxsalind indicator variable to a nonnegative value so that the result in maxsal is transferred back to
the client.
Stored procedures are stored either as functions inside a dynamic link library (DLL) on OS/2 or as a
library on AIX. REXX stored procedures are stored as a command file (extension .CMD). Remember,
you cannot create REXX stored procedures on a DB2 for AIX server. For DB2 for OS/2, you can place
your DLLs in any directory that is in the LIBPATH of the server operating system. The convention,
however, is to place DLLs in the \sqllib\function directory of the server. You can add a COPY
statement at the end of the makefile to automatically place them in that directory after you have built
the stored procedure.
As REXX stored procedures are command files, not libraries, they must be stored in a directory that is
in the PATH environment variable of the server operating system. The \sqllib\function directory is not
in the PATH environment variable, so you have to add the \sqllib\function directory to your PATH
environment variable or place the REXX stored procedures in another directory that is specified in the
PATH environment variable. Another approach is to qualify the path where the stored procedure is in
the CALL statement. Instead of coding, for example, INPSRV.CMD as the procedure name, you would
code D:\SQLLIB\FUNCTION\INPSRV.CMD as your procedure name in the CALL statement.
In the OS/2 environment, the PATH and LIBPATH statements are in your CONFIG.SYS file. If you
change something in that file, you must reboot your OS/2 to activate the changes. In AIX the LIBPATH
statement can be found in the .profile of the user who starts the database manager on the server.
If you have several copies of the same stored procedure in different directories, the copy found first in
the LIBPATH (PATH for REXX) sequence is executed.
You should place the sqllib/function/unfenced subdirectory in the LIBPATH to enable execution of
unfenced stored procedures, unless you will always call them by specifying the full path. If you have
two copies of a stored procedure, one unfenced and the other fenced, the copy that is executed is the
copy found first in the LIBPATH sequence.
For AIX the name of a stored procedure is always case-sensitive. For OS/2, if the procedure name is
in uppercase, the client program can invoke it using uppercase or lowercase. If the function is
exported in lowercase, it can only be accessed by using lowercase. However, if you want to map
uppercase to lowercase, add the following statement in the EXPORTS section of the module definition
file when linking your stored procedure:
EXPORTS
functioname
FUNCTIONAME=functioname
This statement declares the internal function as functioname, with two entry names: FUNCTIONAME
and functioname.
If the case is not correct, you get one of the following SQL error codes:
• SQL1106N The specified DLL dll_name module was loaded, but the stor_prc function could not be
executed.
• SQL1109N The specified DLL dll_name could not be loaded.
We recommend deciding whether you want your stored procedure names to be in uppercase or in
lowercase and then always following that rule.
7.3.2 Makefiles
To build your stored procedures, you can either add them to the makefile or create separate, small
makefiles for each stored procedure. Here is the pr2c2s.mak file used to build the pr2c2s OS/2 stored
procedure:
DATASOURCE=sampos2
TESTUID=userid
TESTPWD=password
DB2INSTANCE=db2
CC=icc
LINK=link386
CFLAGS=-Ti+ -c+ -Ge- -Gm+ -W2
LINKFLAGS=/ST:64000 /NOI /PM:VIO
COPY=copy
# Library directories
DB2LIB=$(DB2PATH)\lib\db2api
pr2c2s.obj : pr2c2s.c;
$(CC) $(CFLAGS) pr2c2s.c
pr2c2s.c : pr2c2s.sqc;
embprep pr2c2s $(DATASOURCE) $(TESTUID) $(TESTPWD)
The -Ti+ option, found in some of the makefiles, generates debugger information and can be omitted.
Here is the inpsrv.def file, a sample module definition file that has six module statements:
LIBRARY INPSRV INITINSTANCE TERMINSTANCE
DESCRIPTION 'Library for DB2 Stored Procedure INPSRV'
PROTMODE
DATA
MULTIPLE
NONSHARED
CODE
LOADONCALL
SHARED
EXPORTS
inpsrv
The LIBRARY statement assigns the name INPSRV to the DLL and specifies that the library's
initialization and termination routines run each time a new process gains access (INITINSTANCE) or
relinquishes access (TERMINSTANCE).
The DESCRIPTION statement inserts the text after the DESCRIPTION keyword in the library. If you
browse the library, you can find this description embedded at the end of the file.
The PROTMODE statement is specific for OS/2. It specifies that the module run only in protected mode
and not in Windows or dual mode.
The DATA statement defines the default attributes for data segments within the library. MULTIPLE
indicates that the automatic data segment is to be copied for each instance of the module.
NONSHARED indicates that the READWRITE data segment cannot be shared and must be loaded
separately for each process.
The CODE statement defines the default attributes for code segments in the library. LOADONCALL
indicates that a code segment is not to be loaded until accessed. SHARED indicates that it can be
shared.
The EXPORTS statement defines the name of the functions exported to other modules. The term
export refers to the process of making a function available to other run-time modules. You could also
add the following to this statement:
INPSRV=inpsrv
to ensure that the SQL CALL statement can specify the function name in uppercase or lowercase.
As a stored procedure cannot write output to a display, it is useful to know how to write some of the
stored procedure's information to a file.
If you are debugging a stored procedure written in CLI, you should consider using the CLI application
trace. On AIX, you may consider using xldb to debug your stored procedure.
If you do not specify the path for the file name, the file is written in the root directory of your DB2 for
OS/2 drive.
7.4.2 C Example 1
In this example, we write data received in the SQLDA to a file. First we have to add one more include
file that will contain the information being traced:
.....
#include <stdio.h> /* Added for writing to file Bo.1 */
.....
We also have to declare one more variable containing the file name of our output file:
.....
/* Declare Miscellaneous Variables */
.....
FILE *stream; /* Added for writing to file Bo.2*/
.....
Now let us take the character and small integer variables passed through the SQLDA from our
example in 7.2.3.1, “C Example” on page 108. First we take SQLTYPE and SQLDATA from the
character string:
.....
data_items[0] = io_da->sqlvar[0].sqldata;
data_items_length[0] = io_da->sqlvar[0].sqllen;
fprintf(stream, "Type of sqlvar0 '%i'\n", io_da->sqlvar[0].sqltype);
fprintf(stream, "Value of sqlvar0 '%s'\n", data_items[0]);
.....
When you run a stored procedure, a file named stproc.dat is created in the root directory of your
server containing the SQLLIB directory. The stproc.dat file contains the output created by the stored
procedure.
7.4.3 C Example 2
In this example, we write some of the data in the SQLCA to a file if an error occurs. Again we must
include the stdio.h file as in Example 1. We declare two variables: the stream variable, exactly as in
Example 1, and a new variable, my_errmsg:
.....
/* Declare Miscellaneous Variables */
.....
FILE *stream;
char my_errmsg[71];
.....
We add some statements to the error_exit function to write the SQLCODE, SQLERRMC, and SQLSTATE
to file:
error_exit:
/* An Error has occurred - ROLLBACK and return to Calling Program */
memcpy( my_errmsg, sqlca.sqlerrmc, sizeof( sqlca.sqlerrmc ) );
my_errmsg[70] = '\0';
fprintf( stream, "SQLCODE  %ld\n", sqlca.sqlcode );
fprintf( stream, "SQLERRMC %s\n", my_errmsg );
fprintf( stream, "SQLSTATE %.5s\n", sqlca.sqlstate );
memcpy( ca, &sqlca, sizeof( struct sqlca ) );
return(SQLZ_DISCONNECT_PROC);
}
It is good programming practice to always interpret the SQLDA and verify that it is what you expected.
The SENDDA client program and SHOWDA stored procedure that come with the CLI samples enable
you to display and verify the SQLDA. You can reuse both code examples in your own programs.
In this chapter, we describe coding techniques and the program preparation process to create client
applications that call stored procedures on workstation and on MVS platforms.
You can code a client application that invokes a stored procedure in the same way you code any other
application that uses SQL statements. The SQL CALL statement is used to invoke stored procedures
from local or remote client applications.
Figure 72 shows the client application functions for invoking a stored procedure with the SQL CALL
statement.
The client application must connect to the DB2 server before issuing the SQL CALL statement. The
SQL CALL statement cannot be used dynamically, although in a CLI or ODBC application you can use
SQLPrepare to prepare a CALL statement.
The SQL CALL statement must include the stored procedure name and, optionally, the parameters to
be passed to the stored procedure through the SQLDA, host variables, or constants (for DB2 on the
MVS platform clients only). See 8.3, “SQL CALL Statement in DB2 on the Workstation” on page 123,
and 8.2, “SQL CALL Statement in DB2 on the MVS Platform” on page 119 for an explanation of the
SQL CALL syntax.
The SQL CALL statement enables the client application to pass parameters to the stored procedure
and receive parameters from the stored procedure. Therefore, the client application must declare host
variables for the parameters it exchanges with the stored procedure.
Parameters can be used to exchange data in both directions. Indicator variables can be used to nullify
parameters. When a parameter is null, only the indicator variable is sent through the network. The
client application can nullify parameters that are used only for output, and the stored procedure can
nullify parameters that are used only for input, thus reducing the sending and receiving of unnecessary
data.
For DB2 on the MVS platform, the specification of whether a parameter is used only for input, only for
output, or for both is dependent on the information registered in the SYSIBM.SYSPROCEDURES table.
Refer to 2.3.1.1, “SYSIBM.SYSPROCEDURES Table Columns” on page 14.
For DB2 on the workstation, all parameters are considered to be used for input and output, except
when using ODBC and CLI. When using ODBC and CLI you can specify which parameters are used for
input, for output, or for both.
Because stored procedures in DB2 on the MVS platform cannot execute commit or rollback processing,
the processing must be performed in the client application. For DB2 Version 5, if you specify
COMMIT_ON_RETURN=YES, the unit of work is committed when control returns to the client
application.
Stored procedures in DB2 on the workstation can execute commit or rollback processing if the client
application is precompiled with CONNECT TYPE 1. In this case, commit and rollback processing can
be performed from either the client application or the stored procedure. If a COMMIT is issued from
the stored procedure, the current unit of work is terminated and a new unit of work is initiated.
In Figure 73 STOPROCA and STOPROCB are two stored procedures that are to be invoked by a client
application. STOPROCA uses three parameters: A1, A2, and A3. STOPROCB uses two parameters: B1
and B2.
The client application moves STOPROCA to a variable called procname, moves parameters A1, A2, and
A3 to an SQLDA structure called sqlda, and then executes the CALL statement. This CALL statement
invokes stored procedure STOPROCA and sends parameters A1, A2, and A3 to it.
The client application could also move STOPROCB to a variable called procname, move parameters B1
and B2 to an SQLDA structure called sqlda, and then execute the CALL statement. In this case, the
CALL statement invokes stored procedure STOPROCB, sending parameters B1 and B2 to it.
In the example, the CALL statement is coded twice, but both statements are exactly the same. The
CALL statement could be coded only once in a routine. By changing the content of the procname
variable and the SQLDA structure before performing the routine, you could invoke the appropriate
stored procedure.
Note: Using an SQLDA to pass parameters requires that the client application provide information
about the number of parameters, data type, data length, and indicator of each parameter.
Figure 74 on page 120 shows the syntax of the SQL CALL statement in DB2 for MVS/ESA.
8.2.1.1 Using Procedure Name: The procedure name is a qualified or unqualified name. You
can specify the procedure name in any of the following ways:
• A fully qualified procedure name with three parts: the first part is a location name. The second
part depends on the application server. In the current releases of DB2, the second part must
contain the value SYSPROC. The third part identifies the stored procedure.
• A two-part name. The location name of the current server is implicitly used to qualify the
procedure name. In the current releases of DB2, the first part must be SYSPROC, and the second
part identifies the stored procedure.
• An unqualified name identifying the stored procedure. The name is implicitly qualified by the
location name of the current server and by the value SYSPROC.
If the server is DB2 on MVS, the last part of the procedure name must match an entry in the
PROCEDURE column of the SYSIBM.SYSPROCEDURES table.
Currently, if the stored procedure is located in DB2 on MVS, you can use qualified or unqualified
procedure names. If using fully qualified names, you must connect to the DB2 on MVS where the
stored procedure is located before issuing the SQL CALL statement. If the stored procedure is located
in a DB2 on the workstation, you can only use unqualified names.
8.2.1.2 Using a Host Variable: The name of the stored procedure is passed through a host
variable, which must be a character-string variable with a length attribute that is not greater than 254
bytes, and it must not include an indicator variable. The actual value of the host variable can include
special characters.
When calling stored procedures in DB2 on the workstation, you may want to specify the full path where
the stored procedure resides. In this case, you must use a host variable and move the stored
procedure name with the full path to that variable. If you try to specify a full path as a constant, the
DB2 on MVS precompiler sends an error message.
Host Variable: You can use host variables, separated by commas, as parameters for the stored
procedure. The assignment of these values to the stored procedure parameters is positional, so the
first host variable is assigned to the first stored procedure parameter, the second host variable to the
second parameter, and so on. You must ensure that you are passing the number of parameters that
the stored procedure is expecting.
The host variable being used as a parameter cannot be defined as a structure or an array, and it must
be defined with a data type compatible with the data type of the corresponding stored procedure
parameter.
You can use indicator variables. However, if the server is DB2 for MVS, you must ensure that the
stored procedure is defined to accept nulls in the SYSIBM.SYSPROCEDURES table or the parameter is
defined as an output parameter.
Constant: You can use constant values as parameters. The constant value must be compatible with
the data type of the stored procedure parameter. If the server is DB2 on MVS, to be able to pass
constants as parameters, the corresponding stored procedure parameter must be defined as input
only.
NULL: You can use the NULL string as a parameter in the SQL CALL statement. In this case, a null
value is passed to the stored procedure. If the server is DB2 on MVS, the corresponding stored
procedure parameter must be defined as input only, and the stored procedure must be defined to
accept nulls, in the SYSIBM.SYSPROCEDURES table.
8.2.2.2 Parameters Using an SQLDA: You can use an SQLDA structure to pass the
parameters to the stored procedure by specifying:
USING DESCRIPTOR descriptor-name
The descriptor-name is the name of the structure containing the SQLDA definition.
Before the SQL CALL statement is processed, the application must set the following fields in the
SQLDA:
• SQLD to indicate the number of variables used in the SQLDA when processing the statement. This
number must be the same as the number of parameters of the stored procedure.
• SQLN to indicate the number of SQLVAR occurrences provided in the SQLDA. This value must not
be less than the value of SQLD.
• SQLDABC to indicate the number of bytes of storage allocated for the SQLDA. This value must be
SQLN*44+16.
• SQLVAR is a structural array. Each SQLVAR element is associated with a stored procedure
parameter. The assignment is positional, so the first SQLVAR element is assigned to the first
stored procedure parameter, and so on. The following fields of each base SQLVAR element
passed must be initialized:
− SQLTYPE
− SQLLEN
− SQLDATA
− SQLIND
8.2.3.1 Passing Parameters with Host Variables: Here is an example of a COBOL program
using an SQL CALL statement with host variables to pass parameters.
Identification Division.
Program-ID. "TS0BMCBM".
Data Division.
Working-Storage Section.
EXEC SQL INCLUDE SQLCA END-EXEC.
01 PROG-NAME PIC X(12) VALUE "BB22STS0".
01 PARM1 PIC X(10) VALUE " ".
01 PARM2 PIC S9(9) COMP VALUE 0.
Procedure Division.
accept parm1.
EXEC SQL CONNECT TO SJ2SMPL END-EXEC.
EXEC SQL CALL :prog-name (:PARM1,:PARM2) END-EXEC.
.
.
.
8.2.3.2 Passing Parameters with an SQLDA: Here is an example of a COBOL program using
an SQL CALL statement with an SQLDA to pass parameters:
Identification Division.
Program-ID. "TS2BMCB2".
Data Division.
Working-Storage Section.
EXEC SQL INCLUDE SQLCA END-EXEC.
01 PROG PIC X(10) VALUE "TS0BMS".
01 IO-SQLDA.
05 IO-SQLDAID PIC X(8) VALUE "SQLDA   ".
05 IO-SQLDABC PIC S9(9) COMP.
05 IO-SQLN PIC S9(4) COMP.
05 IO-SQLD PIC S9(4) COMP.
05 IO-SQLVAR-ENTRIES OCCURS 0 TO 1489 TIMES
DEPENDING ON IO-SQLD.
10 IO-SQLVAR.
15 IO-SQLTYPE PIC S9(4) COMP.
15 IO-SQLLEN PIC S9(4) COMP.
15 IO-SQLDATA USAGE IS POINTER.
15 IO-SQLIND USAGE IS POINTER.
15 IO-SQLNAME.
20 IO-SQLNAMEL PIC S9(4) COMP.
20 IO-SQLNAMEC PIC X(30).
Linkage Section.
01 PARM-STRUCT.
DB2 on the workstation provides two syntax specifications for the SQL CALL statement to support
different coding techniques. One specification is for embedded SQL and REXX applications, and one is
for DB2 CLI and ODBC applications. See 8.3.1, “Embedded SQL Applications” and 8.4, “CLI and ODBC
Applications” on page 127 for a description of the syntax of the two specifications of the SQL CALL
statement.
Even though it is possible to invoke stored procedures in DB2 on the workstation by using the DARI
SQLeproc function call, we focus on using the SQL CALL statement as a common way of invoking
stored procedures in DB2 on the workstation. See 8.3.4, “Invoking Stored Procedures with DARI” on
page 127 for further explanation of the SQLeproc function call.
In the sections that follow, we explain the parameters associated with the SQL CALL statement.
8.3.1.1 Specifying the Stored Procedure Name: The procedure name can be specified
through a constant, represented as procedure name in Figure 75 on page 123, or within a host
variable.
Using Procedure Name: The name of the stored procedure is passed as a constant, which cannot
contain blanks or special characters.
Using a Host Variable: The name of the stored procedure is passed through a host variable, which
must be a character-string variable with a length attribute that is not greater than 254 bytes, and it
must not include an indicator variable. The actual value of the host variable can include special
characters.
The procedure name can take one of several forms, as explained in 8.5, “Stored Procedure Name
Considerations” on page 128. These forms may include special characters, which can be specified
only by using a host variable.
8.3.1.2 Specifying the Arguments: Embedded SQL applications have two options for passing
parameters to a stored procedure. They can use host variables (constants or the NULL specification
are not valid arguments) or explicitly use the SQLDA by specifying the USING DESCRIPTOR clause.
(host-variable, ...): Each specification of a host variable is a parameter of the CALL statement, where
the nth parameter of the CALL corresponds to the nth parameter of the server's stored procedure.
Each host variable is assumed to be used for exchanging data in both directions between the client
and the server, that is, each host variable is considered to be used for input and for output.
To avoid sending unnecessary data between the client and the server, the client application should
provide an indicator variable with each parameter and set the indicator to -1 if the parameter is not
used to transmit data to the stored procedure.
The stored procedure should set the indicator variable to -128 for any parameter that is not used to
return data to the client application.
If the server is a DB2 on the workstation, the parameters can have compatible data types in both the
client and server program, although we strongly recommend using matching data types in both
programs.
Both the DB2 on MVS and DB2 for OS/400 servers support conversion between compatible data types
when their stored procedures are invoked by any client. For example, if the client program uses the
INTEGER data type and the stored procedure expects FLOAT, the server converts the INTEGER value to
FLOAT before invoking the procedure.
USING DESCRIPTOR descriptor-name: Identifies an SQLDA that must contain a valid description of
host variables. The nth SQLVAR element corresponds to the nth parameter of the server's stored
procedure.
Before the SQL CALL statement is processed, the application must set the following fields in the
SQLDA:
• SQLN to indicate the number of SQLVAR occurrences provided in the SQLDA
• SQLDABC to indicate the number of bytes of storage allocated for the SQLDA
As with host variables, the SQLDA is assumed to be used for exchanging data in both directions
between the client and the server. To avoid sending unnecessary data between the client and the
server, the client application should set the SQLIND field to -1 if the parameter is not used to transmit
data to the stored procedure. The stored procedure should set the SQLIND field to -128 for any
parameter that is not used to return data to the client application.
In 8.3.2, “Examples of Coding the CALL Statement” we provide examples of how to code the CALL
statement to send parameters by using host variables or by explicitly using the SQLDA structure.
8.3.2.1 Passing Parameters with Host Variables: Here is an example of an OS/2 REXX SQL
CALL statement using host variables:
procname = 'SM0PMS'
dataitem.1 = '-DISPLAY THREAD(*)'
dataitem.2 = 0
dataitem.3 = 0
dataitem.4 = 0
dataitem.5 = substr(' ', 1, 8320, ' ')
dataitem.5.ind = 0
8.3.2.2 Passing Parameters with SQLDA: Here is an example of a REXX OS/2 SQL CALL
statement using SQLDA to pass the parameters:
io_sqlda.sqld = 5
io_sqlda.1.sqltype = 453
io_sqlda.1.sqldata = dataitem.1
io_sqlda.1.sqllen = 20
io_sqlda.1.sqlind = dataitem.1.ind
io_sqlda.2.sqltype = 497
io_sqlda.2.sqldata = dataitem.2
io_sqlda.2.sqllen = 20
io_sqlda.2.sqlind = dataitem.2.ind
io_sqlda.3.sqltype = 497
io_sqlda.3.sqldata = dataitem.3
io_sqlda.3.sqllen = 20
io_sqlda.3.sqlind = dataitem.3.ind
io_sqlda.4.sqltype = 497
io_sqlda.4.sqldata = dataitem.4
io_sqlda.4.sqllen = 20
io_sqlda.4.sqlind = dataitem.4.ind
io_sqlda.5.sqltype = 453
io_sqlda.5.sqldata = dataitem.5
io_sqlda.5.sqllen = 8320
io_sqlda.5.sqlind = dataitem.5.ind
For DB2 for OS/2 servers, the stored procedure is searched for in the directories specified by the
LIBPATH environment variable of the CONFIG.SYS file. If you are invoking a stored procedure locally,
it is searched for first in the current directory. If not found in the current directory, it is searched for in
the directories specified in the LIBPATH environment variable.
For DB2 for AIX servers, fenced stored procedures are searched for in the sqllib/function directory, and
unfenced stored procedures are searched for in the sqllib/function/unfenced directory.
New client applications should be written using the SQL CALL statement to invoke stored procedures,
as applications using DARI can access stored procedures only on DB2 on the workstation. Other
DRDA database servers cannot be accessed by using DARI as DARI is not portable to other platforms.
DARI is not a part of System Application Architecture (SAA), and the DARI API is primarily kept for
backward compatibility with older applications.
Figure 77 shows the syntax of the SQL CALL statement used in CLI and ODBC applications. This
syntax is applicable to DB2 Common Server V2, DB2 Universal Database V5 and DB2 for OS/390 V5.
The name of the stored procedure is passed as a constant. The constant can contain special
characters to support the different forms adopted by the stored procedure name, as explained in 8.5,
“Stored Procedure Name Considerations” on page 128.
The question mark represents a parameter marker. Each question mark in an SQL CALL statement
denotes an argument to be passed to the stored procedure. The parameter markers in the SQL CALL
statement are bound to application variables by means of the SQLBindParameter function. If you have
parameters that are input only or output only, specifying the type of parameter in an
SQLBindParameter() call (SQL_PARAM_INPUT and SQL_PARAM_OUTPUT) can avoid unnecessary
data transfer. See 10.3.7, “Calling the Stored Procedure” on page 196 for a description of this function.
In a CLI or ODBC application, the SQL CALL statement can be executed with either of the following
function sequences:
• SQLExecDirect()
• SQLPrepare() followed by SQLExecute()
Although the CALL statement itself cannot be prepared dynamically, DB2 CLI accepts the CALL
statement as if it could be dynamically prepared.
The procedure name can take one of several forms. The forms supported vary according to the server
where the procedure is stored.
Note that even if you specify the location in the procedure name, you must issue a connect to that
location before the SQL CALL statement. In the current DB2 implementation of DRDA, system-directed
access is not supported.
For portability, procedure-name should be specified as a single token no longer than 8 bytes.
There are some considerations regarding the use of lowercase and uppercase for the name of a stored
procedure in a CALL statement. These considerations depend on the platform where the stored
procedure is located and the fact that the CALL statement may fold the name of the stored procedure if
provided as a constant.
Table 8. How DB2 for MVS V4 and DB2 for OS/390 V5 Treat Lowercase Names

Call Method                                            Behavior
CALL <literal> where <literal> is delimited ("")       Can find the lowercase procedure names.
CALL <literal> where <literal> is not delimited        Causes <literal> to be folded to uppercase.
  and has lowercase characters
CALL :hv where :hv is a host variable whose value      Fails with SQLCODE -113.
  is in lowercase
For CLI (DB2 for OS/390 V5) this implies that the stored procedure name on OS/390 must be in
uppercase. In practice, we tend to stick to using uppercase for stored procedures on MVS and OS/390.
If you are using the same name for both the library and the function, but the library name is in
uppercase, you have to explicitly specify the name of the library and the function in the SQL CALL
statement.
For example, if the name of the function is proc1 and the name of the library is PROC1, the name of the
stored procedure, as used in the SQL CALL statement should look like this: PROC1!proc1.
In DB2 for OS/2 the name of the stored procedure function is case sensitive if it was exported in
lowercase. If the function was exported in uppercase, both lowercase and uppercase can be used to
invoke the stored procedure.
Note that DB2 on MVS does not do any name folding if you pass an SQL delimited identifier. If, for
example, you code:
CALL "my_proc" (:hv);
the procedure name my_proc is not folded to uppercase.
The program preparation requirements for a client application that invokes a stored procedure are
identical to those of other DB2 applications. Therefore, besides compiling and linking, you must
precompile and create a package for your client application. The package for the client application
must be bound to the database server location where the stored procedure executes.
As with other DB2 applications, if the client application accesses more than one database server, the
package must be bound to every database server being accessed.
The program preparation process applies to client applications written in a compiled host language
such as COBOL, C, or PL/I and is valid for both DB2 on the workstation and DB2 on MVS.
In the DB2 on the workstation environment, the SQL CALL statement can be used in languages such
as REXX and SQL interfaces such as CLI and ODBC, which generally do not support static SQL. This
is possible because REXX, CLI, and ODBC recognize the SQL CALL, implement it as static, but give it
an appearance of dynamic SQL as explained in 8.1, “Calling Stored Procedures” on page 117. REXX,
CLI, and ODBC packages are bound when you create the DB2 on the workstation database or when
Client programs do not have to use LE/370 libraries, and they can be written in any language DB2 on
MVS supports.
8.7.1.1 Package and Plan: To call a stored procedure, the client program must have a package
on the DB2 system where the stored procedure executes. As seen in the preparation JCL sample, you
can use the remote bind capability of DRDA to bind the package on the remote system. You only have
to specify the location name in your bind command. In our example, SJ2SMPL is the location name of
the remote system.
The client program must have a plan. The plan resides only on the client system and must include the
remote package for the client program. When the client program and the stored procedure execute in
different DB2 locations, the plan for the client program does not have to include the stored procedure
package.
When the client program executes at the same location as the stored procedure, the plan for the client
program must also include the stored procedure's package and all the packages associated with the
stored procedure. For example, if the stored procedure calls another program that also accesses DB2
tables, the called program has a separate package, and that package must also be included in the
client program plan.
You can develop client applications using any of the above techniques. The program preparation
process differs among the techniques.
The precompilation step is performed to change the SQL statements into language recognized by the
host language compiler. The development environment of DB2 on the workstation provides
precompilers for the following host languages: C, C++, COBOL, and FORTRAN.
Compilation and link steps are performed to create an executable file for your application. This
executable file contains the appropriate links to enable your application to interface with the host
language APIs and with the database manager APIs required for your operating environment.
The binding step is performed to create an executable control structure, called a package, required to run
the SQL statements in your program. The package is bound to the database where the SQL
statements will be executed. A connection to one of the databases that the application uses is
required to execute this step.
8.7.2.2 Programs Using the REXX Interpreter: REXX is an interpretive language. It uses the
following two APIs:
SQLEXEC This API supports the SQL language.
SQLDBS This API supports DB2 commands.
SQL statements in a REXX application are processed by a dynamic SQL handler; therefore, a REXX
application does not require precompilation. No program preparation process is required for a REXX
application.
8.7.2.3 CLI and ODBC: CLI is a C or C++ API for relational database access that uses
function calls to pass dynamic SQL statements as arguments.
CLI does not require host variables or a precompiler, so program preparation requires only compiling
and linking as with a regular C or C++ program.
CLI is based on the Microsoft ODBC and X/Open specifications. DB2 Common Server V2 supports all
core and level 1 functions of ODBC. Currently, it supports all level 2 functions of ODBC with the
exception of SQLBrowseConnect(), SQLDescribeParam(), and SQLSetPos().
DB2 Universal Database V5 supports the majority of ODBC 3.0 functions. Refer to DB2 UDB V5 Call
Level Interface Guide and Reference for a detailed list and description.
Figure 78 illustrates the relationship between the IBM ODBC driver and the ODBC Driver Manager and
compares DB2 CLI and the IBM ODBC driver.
To develop ODBC applications that access DB2 database servers, you must have the IBM ODBC driver
and an ODBC Software Development Kit. ODBC applications pass dynamic SQL as function arguments,
so they do not have to be precompiled.
8.7.2.5 Authorizations: The authorization issues covered in this section are related to the
privileges required by an authorization ID to run a stored procedure.
Executing a stored procedure implies running a client application, which in turn invokes the stored
procedure application. When the client program runs on DB2 on the workstation, privileges to execute
the packages of both the stored procedure application and the client application are required.
If the SQL CALL statement is the only SQL statement in the client application, you are not required to
have the execute privilege on the client package; you need the execute privilege only on the stored
procedure package. However, it is possible that neither your client application nor your stored
procedure requires a package to run. Therefore, the user running the application must have the
privileges needed to issue each SQL request.
If the stored procedure application contains dynamic SQL, the authorization rules depend on the
database server. For DB2 on the workstation and DB2 on MVS, the authorization ID of the user
running the client application must be granted the required privileges on the objects specified in each
dynamic SQL statement. For DB2 on MVS, if you bind the stored procedure with the
DYNAMICRULES(BIND) option, the authorization ID of the user who bound the stored procedure is
checked against the dynamic SQL statement executed by the stored procedure.
VisualGen is an application generator that provides you with a graphic interface to interactively
develop and test applications. VisualGen can generate C applications for the AIX environment or
COBOL applications for the MVS and OS/2 environment.
The interactive test facility of VisualGen is a useful debugging tool that dynamically prepares the SQL
statements embedded in the body of your application.
You can use VisualGen to generate client applications on the OS/2, AIX, and MVS platforms.
You can develop client applications with VisualGen, but you cannot test them with the VisualGen
interactive test facility because the SQL CALL statement cannot be dynamically prepared.
Nevertheless the advantage of developing one application and being able to port it to three different
platforms should not be ignored.
You can generate stored procedures for DB2 on MVS, using VisualGen. You cannot generate DB2 on
the workstation stored procedures through VisualGen because VisualGen-generated code does not use
the SQLDA to receive parameters. A development tool called IBM VisualAge for Basic enables you to
build, run, debug, test, register, and distribute stored procedures for the DB2 for AIX and DB2 for OS/2
environments.
An escape clause is a syntactic mechanism for implementing vendor-specific SQL extensions in the
framework of standardized SQL. Both DB2 CLI and ODBC support vendor escape clauses as defined by
X/Open. For details please refer to the Call Level Interface Guide and Reference for Common Servers.
Whenever a client program invokes a stored procedure, the stored procedure can return the results
through host variables (DB2 for MVS/ESA and DB2 for OS/390) or information in the SQLDA (DB2 UDB,
DB2 Common Servers, DB2 for MVS/ESA, and DB2 for OS/390). If you use a cursor in the stored
procedure, the result of the FETCH statement, which is one row, can be passed back to the client
program. If the client program requires more than one row to be returned, you can use one of two
methods to transfer the rows: blocking them, or using the support available for multiple result sets
(MRSP).
Note that MRSP is not available for DB2 for MVS/ESA, and is not supported by DDCS. MRSP is
supported by:
• DB2 Common Servers if DRDA is not being used
• DB2 for OS/390
• DB2 Connect
If you are using DB2 CLI as the interface for your client in the workstation, or if you are using DB2 for
OS/390, use MRSP. Using MRSP functions enables you to use stored procedures to return one or
more result sets in a simpler way. For the client workstation environment, you can also have a client
application that uses mostly embedded SQL, except for the SQL CALL statement and the SQL FETCH,
which are coded in DB2 CLI. In the next sections we illustrate both methods with some examples.
If you code the client program with CLI or ODBC, DB2 on the workstation can return result sets directly
to the client program (see Figure 79).
Figure 79. Multiple Result Sets with CLI. Renamed client to MR3C2CC2 / server MR3C2S
To transfer one or more result sets to a client application, the following requirements must be met:
For additional information refer to the Application Programming Guide for Common Servers and the
Call Level Interface Guide and Reference for Common Servers.
9.1.1.1 Retrieving One Result Set: This example illustrates how to code a stored procedure
with MRSP. The client program reads an SQL statement from the terminal and passes it to be
executed by the stored procedure. The stored procedure opens a cursor for the received SELECT
statement and ends, returning control to the client program. The client issues a FETCH statement for
each resulting row and displays it on the screen. The flow is displayed in Figure 80.
We analyze this process below, dividing it into three parts: the client part, the stored procedure, and
the print_results function with the FETCH processing used by the client program. The client program
(MR3C2CO2.C) must be written with the CLI. The stored procedure (MR3C2S.SQC) is written in C with
embedded dynamic SQL, but it can be written with any supported language.
The screen displays a connect message, a message indicating that the stored procedure has ended,
and the requested information. The program ends with a disconnect message:
>Connected to loopsamp
Server Procedure Complete.
NAME ID SALARY
Molinare 160 22959.20
Jones 260 21234.00
Fraye 140 21150.00
Graham 310 21000.00
Hanes 50 20659.80
Lu 210 20010.00
Quill 290 19818.00
.....
.....
Gafney 350 13030.50
Naughton 120 12954.75
Ngan 110 12508.20
Kermisch 170 12258.50
Abrahams 180 12009.75
Scoutten 200 11508.60
Burke 330 10988.00
Yamaguchi 130 10505.90
Disconnecting .....
9.1.1.2 Client Program MR3C2CO2.C: After initializing and declaring the variables, the
program obtains the server name, user ID, and password from the arguments entered when invoking
the program. The program uses the INIT_UID_PWD macro defined in the samputil.h header file of the
samples\cli directory. Next, the program prompts the user for a SELECT statement and stores it in a
variable called string:
....
....
int
main( int argc, char * argv[] )
{
SQLHENV henv;
SQLHDBC hdbc;
SQLRETURN rc;
SQLHSTMT hstmt;
SQLCHAR stmt[] = "CALL mr3c2s( ? )";
The program then calls the DBconnect function coded in samputil.c, grouping together the statements
required for the connection. The DBconnect function issues:
1. SQLAllocConnect: Allocates a connection handle and associated resources within the environment
identified by the previously allocated environment handle.
2. SQLSetConnectOption: Enables you to set a range of connection attributes for a particular
connection. In the DBconnect function only one option is used; the default, AUTOCOMMIT, is set to
OFF. Refer to Chapter 5, “Functions,” of the Call Level Interface Guide and Reference for Common
Servers for a detailed description of the different options.
3. SQLConnect: The program establishes a connection to the target database with the supplied user
ID and password.
After the DBconnect, an SQLAllocStmt call is issued. This call allocates a statement handle and
associates it with the connection specified by the connection handle. DB2 CLI uses each statement
handle to relate SQL statements to the current database connection (referred to by hdbc). There is no
defined limit on the number of statement handles active at any one time.
rc = SQLAllocStmt(hdbc, &hstmt);
CHECK_DBC(hdbc, rc);
....
....
SQLPrepare associates the SQL CALL statement in the stmt variable with the previously allocated
statement handle, hstmt, and sends it to the database management system to be prepared.
Next, we associate the parameter marker “?” in our SQL CALL statement
stmt[] = "CALL mr3c2s( ? )",
with the application variable “string,” using the SQLBindParameter function. SQLExecute executes the
prepared statement and calls the MR3C2S stored procedure, transferring the SELECT statement to the
server:
rc = SQLExecute(hstmt);
/* Ignore Warnings */
if (rc != SQL_SUCCESS && rc != SQL_SUCCESS_WITH_INFO)
    CHECK_STMT(hstmt, rc);
....
....
When the stored procedure returns control, the client program fetches and displays the result set,
using the print_results function. This function is discussed in 9.1.1.4, “The Print_results Function” on
page 144.
On return from the print_results function, the client program ends by issuing the following DB2 CLI
calls:
1. SQLFreeStmt: Used with the SQL_DROP option, ends the processing on the referenced statement
handle. The SQL_DROP option frees all resources associated with the statement handle,
invalidates the handle, closes an open cursor (if any), and discards pending results.
2. SQLTransact: In our example with SQL_COMMIT as the argument, a commit is issued for the
current transaction in the specified connection.
3. SQLDisconnect: Closes the connection associated with the database connection handle.
4. SQLFreeConnect: Invalidates and frees the connection handle. All DB2 CLI resources associated
with the connection handle are freed.
5. SQLFreeEnv: Invalidates and frees the environment handle. All DB2 CLI resources associated with
this handle are freed:
....
....
/* Display result set */
rc = print_results(hstmt);
CHECK_STMT(hstmt, rc);
rc = SQLFreeStmt(hstmt, SQL_DROP);
CHECK_STMT(hstmt, rc);
printf("Disconnecting .....\n");
rc = SQLDisconnect(hdbc);
CHECK_DBC(hdbc, rc);
rc = SQLFreeConnect(hdbc);
CHECK_DBC(hdbc, rc);
rc = SQLFreeEnv(henv);
if (rc != SQL_SUCCESS)
9.1.1.3 Stored Procedure MR3C2S.SQC: The stored procedure accepts the SQLDA and
SQLCA structures passed by the client application, and declares the host and other required variables:
....
....
SQL_API_RC SQL_API_FN mr3c2s (
void *reserved1,
void *reserved2,
struct sqlda *inout_sqlda,
struct sqlca *ca) {
The stored procedure declares a cursor, C1. The WITH HOLD option is required for MRSP. Next, the
stored procedure copies the SQL SELECT statement, transferred by the client program, from the
SQLDA into the stmt variable. We coded the stored procedure assuming that the structure of the SQLDA
is as the stored procedure expected. In a production application, you should check the structure of the
SQLDA. The statement is prepared and the cursor is opened:
....
....
EXEC SQL DECLARE c1 CURSOR WITH HOLD FOR s1;
Basically, this is it. After the OPEN CURSOR, the stored procedure copies the sqlca information into
the SQLCA structure and ends, returning control to the client program:
return(SQLZ_DISCONNECT_PROC);
....
....
9.1.1.4 The Print_results Function: When the stored procedure returns, the client calls the
print_results function. This function is coded in the samputil.c file, which contains useful functions used
by most CLI samples. You can find the samputil.c file in the sqllib\samples\cli subdirectory
(sqllib/samples/cli for the AIX platform). We found the print_results function extremely useful when
coding with MRSP.
In the paragraphs that follow, we analyze the code. First, the program declares the different working
variables. Next, it issues the DB2 CLI SQLNumResultCols statement. This statement returns the
number of columns in the result set associated with the input statement handle in the nresultcols
variable:
print_results(SQLHSTMT hstmt)
{
SQLCHAR colname[32];
SQLSMALLINT coltype;
SQLSMALLINT colnamelen;
SQLSMALLINT nullable;
SQLUINTEGER collen[MAXCOLS];
SQLSMALLINT scale;
SQLINTEGER outlen[MAXCOLS];
SQLCHAR *data[MAXCOLS];
SQLCHAR errmsg[256];
SQLRETURN rc;
SQLSMALLINT nresultcols;
int i;
SQLINTEGER displaysize;
rc = SQLNumResultCols(hstmt, &nresultcols);
CHECK_STMT(hstmt, rc);
....
....
Next, the program starts a loop, going through the different columns. SQLDescribeCol returns the
following information about each column:
• Column name
• Column name length
• Column SQL data type
SQLColAttributes is used to get column attributes. In our example it is used with the
SQL_COLUMN_DISPLAY_SIZE argument to retrieve the maximum number of bytes needed to display
the data in character form.
....
....
for (i = 0; i < nresultcols; i++) {
SQLDescribeCol(hstmt, i + 1, colname, sizeof(colname),
&colnamelen, &coltype, &collen[i], &scale, NULL);
Next, the program displays the column name on the screen, through the printf command, and allocates
memory to bind the column. The column is bound to application variables through the SQLBindCol
statement. This ends the loop for each column:
....
....
/*
* set column length to max of display length, and column name
* length. Plus one byte for null terminator
*/
collen[i] = max(displaysize, strlen((char *) colname)) + 1;
Now the program can fetch the rows in a “while” loop until it gets the SQL_NO_DATA_FOUND return
code. Note that the SQLFetch function specifies the statement handle related to the SQL CALL
statement.
For each row fetched, the program checks whether there is NULL data. If there is NULL data, the
program prints the character string NULL. If there is data, the program verifies that the length of the
data fits in the previously defined column width. If the data does not fit, it is truncated, and a message
is displayed. If the data fits, it is displayed on the screen through the printf function:
....
....
printf("\n");
/* display result rows */
while ((rc = SQLFetch(hstmt)) != SQL_NO_DATA_FOUND) {
errmsg[0] = '\0';
for (i = 0; i < nresultcols; i++) {
/* Check for NULL data */
To end the print_results function, the program frees the data buffers and returns control to the client
application:
....
....
/* free data buffers */
for (i = 0; i < nresultcols; i++) {
free(data[i]);
}
return(SQL_SUCCESS);
} /* end print_results */
....
....
9.1.1.5 Retrieving Multiple Result Sets: This example with DB2 CLI and MRSP retrieves three
result sets. Programs mr4c2co2 and mr4c2s are modifications of the mr3c2co2 and mr3c2s. To invoke
the client program, enter the following:
mr4c2co2 loopsamp userid password
This displays the three result sets in succession on your screen.
Client Program MR4C2CO2.C: In this example, we have three string variables to hold the SQL SELECT
statements. For this application, the user is not prompted to enter the SELECT statements. Instead,
the SELECT statements are hard-coded in the client program:
int
main( int argc, char * argv[] )
{
SQLHENV henv;
SQLHDBC hdbc;
SQLRETURN rc;
SQLHSTMT hstmt;
SQLCHAR stmt[] = "CALL mr4c2s( ?,?,? )";
After connecting to the database and binding the parameters, the client program calls the stored
procedure:
....
....
rc = SQLAllocEnv(&henv);
if (rc == SQL_ERROR)
return (terminate(henv, rc));
CHECK_STMT(hstmt, rc);
rc = SQLExecute(hstmt);
/* Ignore Warnings */
if (rc != SQL_SUCCESS && rc != SQL_SUCCESS_WITH_INFO)
    CHECK_STMT(hstmt, rc);
....
....
printf("Disconnecting .....\n");
rc = SQLDisconnect(hdbc);
CHECK_DBC(hdbc, rc);
rc = SQLFreeConnect(hdbc);
CHECK_DBC(hdbc, rc);
rc = SQLFreeEnv(henv);
if (rc != SQL_SUCCESS)
terminate(henv, rc);
return (SQL_SUCCESS);
}; /* end main */
Stored Procedure MR4C2S.SQC: This stored procedure is similar to mr3c2s except that we have three
host variables instead of one:
SQL_API_RC SQL_API_FN mr4c2s (
void *reserved1,
void *reserved2,
struct sqlda *inout_sqlda,
struct sqlca *ca) {
We copy the three SELECT statements from the SQLDA, declare the cursors WITH HOLD, prepare the
statements, open the three cursors, and return:
....
....
/* Copy the three SQL statements from the sqlda */
strncpy(stmt1, inout_sqlda->sqlvar[0].sqldata,
inout_sqlda->sqlvar[0].sqllen);
strncpy(stmt2, inout_sqlda->sqlvar[1].sqldata,
inout_sqlda->sqlvar[1].sqllen);
strncpy(stmt3, inout_sqlda->sqlvar[2].sqldata,
inout_sqlda->sqlvar[2].sqllen);
return(SQLZ_DISCONNECT_PROC);
If you connect to sample, it is a local connection; if you connect to loopsamp, it is a remote connection
and you can use DB2 CLI MRSP.
DB2 for OS/390 has implemented support for MRSP as a client and as a server.
If your stored procedure is executing on DB2 for OS/390, your stored procedure can return MRSP to a
local or remote client application. If the client is a remote DRDA client, it must support the DRDA
code points used to return MRSP. DB2 Connect supports these code points, but DDCS Common
Server does not.
If the stored procedure is executing on a remote DRDA application server, the remote DRDA
application server must support the DRDA code points used to return MRSP.
DB2 Common Servers and the current release of DB2 UDB do not support these DRDA code points.
Note also that DB2 for MVS/ESA does not support MRSP, either as a client or as a server.
When the stored procedure ends, DB2 returns the rows in the query result when the client issues a
FETCH statement.
The RESULT_SETS column of the row associated with your stored procedure in the catalog table
SYSIBM.SYSPROCEDURES must contain the number of result sets that the stored procedure passes to
the client application. If the RESULT_SETS column value is less than the number of cursors left open
in the stored procedure, only the number specified in the RESULT_SETS column are passed to the
client application. In this case, you get a +464 SQLCODE. If the RESULT_SETS column value is greater
than the number of cursors left open in the stored procedure, all result sets are passed to the client
application.
For example, if you want to return a result set that contains entries for all employees in department
D11, you should code your stored procedure as follows:
1. The stored procedure declares a cursor that describes this subset of employees:
EXEC SQL DECLARE CURSOR-EMP CURSOR WITH RETURN FOR
   SELECT EMPNO, FIRSTNAME, MIDINIT, LASTNAME, PHONENO
   FROM DSN8510.EMP
   WHERE WORKDEPT = 'D11'
END-EXEC.
2. The stored procedure opens the cursor:
EXEC SQL OPEN CURSOR-EMP END-EXEC.
3. The procedure ends without closing the cursor or fetching rows from this cursor.
You should use meaningful cursor names for returning result sets because the name of the cursor that
is used to return result sets is available to the client application through extensions to the DESCRIBE
statement explained in 9.2.4, “DESCRIBE CURSOR Statement” on page 163. See “Writing a DB2 for
OS/390 Client Program to Receive Result Sets” in DB2 for OS/390 Version 5 Application Programming
and SQL Guide for more information.
9.2.1.1 Objects from Which You Can Return Result Sets: You can use any of these objects in
the SELECT statement associated with the cursor for a result set:
• Tables, synonyms, views, temporary tables, and aliases defined at the local DB2 system
• Tables, synonyms, views, temporary tables, and aliases defined at remote DB2 systems on MVS
that are accessible through DB2 private protocol access
9.2.1.2 Returning a Subset of Rows to the Client: If you execute FETCH statements with a
result set cursor within the stored procedure, DB2 does not return the fetched rows to the client
program. For example, if you declare a cursor WITH RETURN and execute the statements OPEN,
FETCH, FETCH, the client receives data beginning with the third row in the result set. As with DB2 for
the workstation, we explain this aspect of DB2 for completeness, but we don't think you will have any
reason to exploit this feature. In most applications, you will not want to fetch any of the rows before
returning the result set to the stored procedure caller.
9.2.1.3 Using a Temporary Table to Return Result Sets: You can use a temporary table to
return result sets from a stored procedure. This capability can be used to return nonrelational data,
such as IMS, VSAM, or QSAM data, to a DRDA client. For example, you can access IMS data from a
stored procedure as follows:
• Use the ways described in 13.3, “Accessing IMS Databases” on page 272 to access IMS
databases.
• Receive the IMS reply message, which contains data that should be returned to the client.
• Insert the data from the reply message into a global temporary table.
• Open a cursor against the temporary table. When the stored procedure ends, the rows from the
temporary table are available to the client.
Using this approach, you can join the information obtained from a nonrelational database and stored in
the global temporary table to your DB2 data and return it to the client.
9.2.1.4 Supported Environment: You can code your MRSP stored procedure in any of the
languages supported for coding stored procedures. In addition, the stored procedure can be written
using the DB2 for OS/390 ODBC/CLI interface. Like a normal stored procedure, it must use LE/370.
MRSP stored procedures are supported in the DB2-established address space and in the
WLM-established address spaces.
Although result sets are available to the client application only when the stored procedure ends, the
locking considerations are the same as if the client application had connected to the server and
declared and opened a cursor at the server. If the COMMIT_ON_RETURN column of
SYSIBM.SYSPROCEDURES is set to YES, you have to declare the cursor with the WITH HOLD option.
In this case, locks are held on the base table when the stored procedure ends.
Besides returning result sets, the stored procedure can also return parameters passed in the SQL CALL
statement.
In addition, your client application can determine how many result sets are returned by using the
DESCRIBE PROCEDURE statement, and determine the contents of each result set by using the new
DESCRIBE CURSOR statement. If you know the number and contents of the result sets that a stored
procedure returns, you can simplify your program. However, if you write code for the more general
case, in which the number and contents of result sets can vary, you do not have to make major
modifications to your client program if the stored procedure changes.
Result sets are available only to read-only client applications. You cannot use UPDATE or DELETE
statements against MRSP result sets in your client application. Stored procedure result sets are always
marked "read only," so update or delete operations fail if you issue them. This implies that UPDATE or
DELETE WHERE CURRENT OF is not allowed in the stored procedure or in the client application.
When result sets are returned to the client program, you get a +466 SQLCODE for the SQL CALL
statement.
If the client application is not using ODBC/CLI, there are these extensions in the SQL language:
• A new result set locator SQL data type
• A new ASSOCIATE LOCATORS SQL statement
• A new ALLOCATE CURSOR SQL statement
• A new DESCRIBE PROCEDURE SQL statement
• A new DESCRIBE CURSOR SQL statement
These extensions are compliant with the SQL92 Entry Level.
If you know how many result sets are returned and the characteristics of each result set, you need to:
1. Declare as many result-set locator variables as the number of result sets returned by the stored
procedure.
2. Invoke the stored procedure using the SQL CALL statement.
3. Issue the ASSOCIATE LOCATORS statement once.
4. Issue one ALLOCATE CURSOR statement for each result set returned by the stored procedure.
Figure 81 on page 153 shows the relationship among the new SQL statements and the new data type.
After the SQL CALL statement is executed, you issue the ASSOCIATE LOCATORS statement. The
ASSOCIATE LOCATORS statement associates the result sets returned by the stored procedure with the
result-set locator variables declared previously and specified in the ASSOCIATE LOCATORS statement.
For each result set returned, issue the ALLOCATE CURSOR statement to assign a local cursor name to
the result set locator variable. You can then process the rows of each result set by using the FETCH
statement specifying the local cursor name.
Note that the order of the association of result sets and result set locator variables is the order in which
the stored procedure opened the cursors: the first open cursor issued by the stored procedure is
associated with the first result set locator variable, the second open cursor issued by the stored
procedure is associated with the second result set locator variable, and so on.
Unlike with DB2 UDB and DB2 Common Servers, you can process the result sets in parallel. For
example, you can process the first row of the first result set, process the first row of the second result
set, then process the second row of the first result set.
The DESCRIBE PROCEDURE statement should be used when you do not know how many result sets
the stored procedure is returning. The DESCRIBE PROCEDURE returns the number of result sets
returned from the stored procedure and places information about the result sets in an SQLDA. Make
this SQLDA large enough to hold the maximum number of result sets that the stored procedure may
return. When the DESCRIBE PROCEDURE statement completes, the fields in the SQLDA contain the
following values:
• SQLD contains the number of result sets returned by the stored procedure.
• Each SQLVAR entry gives information about a result set. In an SQLVAR entry:
− The SQLNAME field contains the name of the SQL cursor used in the stored procedure to
return the result set.
− The SQLIND field contains the value -1. This indicates that no estimate of the number of rows
in the result set is available.
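As an illustration of how a client might walk this descriptor, the following C sketch iterates a simplified, mocked-up structure. The struct layout and the function name list_result_sets are our own for illustration only; the real DB2 SQLDA layout differs (see the SQLDA declarations shipped with DB2):

```c
#include <stdio.h>

/* Simplified stand-in for the SQLDA fields that DESCRIBE PROCEDURE
   fills in; illustrative layout only, not the real DB2 SQLDA. */
struct rsvar  { char sqlname[19]; int sqlind; };
struct rsdesc { int sqld; struct rsvar sqlvar[5]; };

/* List each returned result set by the cursor name the stored procedure
   used; an SQLIND of -1 means no row-count estimate is available. */
static int list_result_sets(const struct rsdesc *da) {
    for (int i = 0; i < da->sqld; i++)
        printf("result set %d: cursor %s (rows: %s)\n", i + 1,
               da->sqlvar[i].sqlname,
               da->sqlvar[i].sqlind == -1 ? "unknown" : "estimated");
    return da->sqld;   /* SQLD = number of result sets */
}
```

The loop mirrors the COBOL samples below, which scan the SQLNAMEC entries after the DESCRIBE PROCEDURE statement completes.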
To use the SQLDATA field from the DESCRIBE PROCEDURE statement, you need to set up a result set
locator variable, because a subscript variable is not valid in an SQL ALLOCATE CURSOR statement.
The following shows what is required to use the SQLDATA variable in a COBOL program:
* Redefine the SQLDATA pointer as PIC S9(8) comp.
03 SQLDATA POINTER.
03 SQLDATANUM REDEFINES SQLDATA PIC S9(8) COMP.
* You can now allocate the cursor for the result set.
EXEC SQL ALLOCATE CURSOR1 CURSOR FOR RESULT SET
:LOCPTR
END-EXEC.
Two sample programs that use the SQL statement DESCRIBE PROCEDURE are provided. The first
program uses the SQL statement ASSOCIATE LOCATORS to assign the result set locator variables.
The second program uses the SQLDATA variable from the SQLDA. Both are coded to handle five known
result sets from a stored procedure. The client program assumes that any result set or sets can be
returned in any order or any number from one to five cursors.
Sample program MR1BMCBM uses the SQL statement ASSOCIATE LOCATORS to assign the result set
locator variables after the DESCRIBE PROCEDURE statement:
WORKING-STORAGE SECTION.
***********************
* SQL INCLUDE FOR SQLCA
***********************
EXEC SQL INCLUDE SQLCA END-EXEC.
***********************
* SQL DESCRIPTION AREA IN COBOL SQLDA
* SEE "APPLICATION PROGRAMMING AND SQL GUIDE" SC26-8958
* APPENDIX C. PROGRAMMING EXAMPLES, PAGE X-23
***********************
01 SQLDA.
02 SQLDAID PIC X(8) VALUE 'SQLDA '.
02 SQLDABC PIC S9(8) COMP VALUE 236.
02 SQLN PIC S9(4) COMP VALUE 5.
PROCESS-OUTPUT.
**************************
*
* 3 DETERMINE HOW MANY RESULT SETS THE STORED PROCEDURE IS
* RETURNING.
*
* SQLD WILL HAVE THE NUMBER OF RESULT SETS.
* SQLNAMEC() WILL HAVE THE STORED PROCEDURE CURSOR NAMES.
*
**************************
MOVE "DESCRIBE" TO LINE-EXEC.
EXEC SQL DESCRIBE PROCEDURE :STOREPROC INTO :SQLDA
END-EXEC.
**************************
*
* 4 LINK RESULT SET LOCATORS TO RESULT SETS
*
**************************
EXEC SQL ASSOCIATE LOCATORS
(:LOC-1, :LOC-2, :LOC-3, :LOC-4, :LOC-5)
WITH PROCEDURE :STOREPROC
END-EXEC.
**************************
*
* CHECK TO SEE IF PROCEDURE RETURNED MORE RESULT SETS THAN YOU
* HAD LOCATORS FOR. IF SO YOU WILL GET A +494 SQLCODE FROM THE
* SQL ASSOCIATE LOCATOR STATEMENT.
*
* 494, WARNING: PROCEDURE MR1BMS RETURNED 00000006 QUERY RESULT
* SETS
* THE OTHER LOCATORS WILL HAVE VALID ADDRESSES THAT YOU CAN
* PROCESS.
ALLOCATE-CURSOR.
EVALUATE IDX
WHEN 1
EXEC SQL ALLOCATE CURSOR1 CURSOR FOR RESULT SET
:LOC-1
END-EXEC,
WHEN 2
EXEC SQL ALLOCATE CURSOR1 CURSOR FOR RESULT SET
:LOC-2
END-EXEC,
WHEN 3
EXEC SQL ALLOCATE CURSOR1 CURSOR FOR RESULT SET
:LOC-3
END-EXEC,
WHEN 4
EXEC SQL ALLOCATE CURSOR1 CURSOR FOR RESULT SET
:LOC-4
END-EXEC,
WHEN OTHER
EXEC SQL ALLOCATE CURSOR1 CURSOR FOR RESULT SET
:LOC-5
END-EXEC,
END-EVALUATE.
PROCESS-E.
PERFORM ALLOCATE-CURSOR.
PERFORM FETCH-ROWS-1 VARYING F FROM 1 BY 1 UNTIL
SQLCODE = +100.
EXEC SQL CLOSE CURSOR1 END-EXEC.
PROCESS-TB.
PERFORM ALLOCATE-CURSOR.
PROCESS-E-NF.
PERFORM ALLOCATE-CURSOR.
PERFORM FETCH-ROWS-3 VARYING F FROM 1 BY 1 UNTIL
SQLCODE = +100.
EXEC SQL CLOSE CURSOR1 END-EXEC.
PROCESS-TB-NF.
PERFORM ALLOCATE-CURSOR.
PERFORM FETCH-ROWS-4 VARYING F FROM 1 BY 1 UNTIL
SQLCODE = +100.
EXEC SQL CLOSE CURSOR1 END-EXEC.
PROCESS-TB-WH.
PERFORM ALLOCATE-CURSOR.
PERFORM FETCH-ROWS-5 VARYING F FROM 1 BY 1 UNTIL
SQLCODE = +100.
EXEC SQL CLOSE CURSOR1 END-EXEC.
FETCH-ROWS-1.
EXEC SQL FETCH CURSOR1 INTO
:TESTE-COL1 :ICOL1E,
:TESTE-COL2 :ICOL2E
END-EXEC.
IF SQLCODE = 0
.... process row
END-IF.
FETCH-ROWS-2.
MOVE ″FETCH CUR 2″ TO LINE-EXEC.
EXEC SQL FETCH CURSOR1 INTO
:TEST-TB-COL1,
:TEST-TB-COL2 :ICOL2T,
:TEST-TB-COL3 :ICOL3T
END-EXEC.
IF SQLCODE = 0
.... process row
END-IF.
FETCH-ROWS-3.
EXEC SQL FETCH CURSOR1 INTO
:TESTE-COL1 :ICOL1E,
:TESTE-COL2 :ICOL2E
END-EXEC.
IF SQLCODE = 0
.... process row
END-IF.
FETCH-ROWS-4.
EXEC SQL FETCH CURSOR1 INTO
:TEST-TB-COL1,
:TEST-TB-COL2 :ICOL2T
END-EXEC.
IF SQLCODE = 0
.... process row
END-IF.
FETCH-ROWS-5.
EXEC SQL FETCH CURSOR1 INTO
:TEST-TB-COL1,
:TEST-TB-COL3 :ICOL3T
Sample program MR2BMCBM uses the SQLDA variable SQLDATA to allocate the cursors to the result
sets after the DESCRIBE PROCEDURE statement:
***********************
* SQL DESCRIPTION AREA IN COBOL SQLDA
* SEE "APPLICATION PROGRAMMING AND SQL GUIDE" SC26-8958
* APPENDIX C. PROGRAMMING EXAMPLES, PAGE X-23
***********************
01 SQLDA.
02 SQLDAID PIC X(8) VALUE 'SQLDA '.
02 SQLDABC PIC S9(8) COMP VALUE 236.
02 SQLN PIC S9(4) COMP VALUE 5.
02 SQLD PIC S9(4) COMP VALUE 0.
02 SQLVAR OCCURS 1 TO 5 TIMES
DEPENDING ON SQLN.
03 SQLTYPE PIC S9(4) COMP.
03 SQLLEN PIC S9(4) COMP.
03 SQLDATA POINTER.
03 SQLDATANUM REDEFINES SQLDATA PIC S9(8) COMP.
03 SQLIND POINTER.
03 SQLNAME.
49 SQLNAMEL PIC S9(4) COMP.
49 SQLNAMEC PIC X(30).
PROCEDURE DIVISION.
CALL-STORED-PROCEDURE.
**************************
*
* 2 CALL THE STORED PROCEDURE MR2BMS AND CHECK THE SQL
* RETURN CODE FOR +466
*
**************************
EXEC SQL CALL :STOREPROC END-EXEC.
IF SQLCODE = +466
PERFORM PROCESS-OUTPUT
ELSE
DISPLAY "NO RESULT SETS RETURNED CALL 1 " SQLCODE.
PROG-END.
GOBACK.
PROCESS-OUTPUT.
**************************
*
* 3 DETERMINE HOW MANY RESULT SETS THE STORED PROCEDURE IS
PROCESS-E.
MOVE SQLDATANUM(IDX) TO LOCNUM.
EXEC SQL ALLOCATE CURSOR1 CURSOR FOR RESULT SET
:LOCPTR
END-EXEC,
PERFORM FETCH-ROWS-1 VARYING F FROM 1 BY 1 UNTIL
SQLCODE = +100.
EXEC SQL CLOSE CURSOR1 END-EXEC.
PROCESS-TB.
MOVE SQLDATANUM(IDX) TO LOCNUM.
EXEC SQL ALLOCATE CURSOR2 CURSOR FOR RESULT SET
:LOCPTR
END-EXEC,
PERFORM FETCH-ROWS-2 VARYING F FROM 1 BY 1 UNTIL
SQLCODE = +100.
EXEC SQL CLOSE CURSOR2 END-EXEC.
PROCESS-E-NF.
MOVE SQLDATANUM(IDX) TO LOCNUM.
EXEC SQL ALLOCATE CURSOR3 CURSOR FOR RESULT SET
:LOCPTR
END-EXEC,
PERFORM FETCH-ROWS-3 VARYING F FROM 1 BY 1 UNTIL
SQLCODE = +100.
EXEC SQL CLOSE CURSOR3 END-EXEC.
PROCESS-TB-WH.
MOVE SQLDATANUM(IDX) TO LOCNUM.
EXEC SQL ALLOCATE CURSOR5 CURSOR FOR RESULT SET
:LOCPTR
END-EXEC,
PERFORM FETCH-ROWS-5 VARYING F FROM 1 BY 1 UNTIL
SQLCODE = +100.
EXEC SQL CLOSE CURSOR5 END-EXEC.
FETCH-ROWS-1.
EXEC SQL FETCH CURSOR1 INTO
:TESTE-COL1 :ICOL1E,
:TESTE-COL2 :ICOL2E
END-EXEC.
IF SQLCODE = 0
.... process row
END-IF.
FETCH-ROWS-2.
EXEC SQL FETCH CURSOR2 INTO
:TEST-TB-COL1,
:TEST-TB-COL2 :ICOL2T,
:TEST-TB-COL3 :ICOL3T
END-EXEC.
IF SQLCODE = 0
.... process row
END-IF.
FETCH-ROWS-3.
MOVE "FETCH CUR 3" TO LINE-EXEC.
EXEC SQL FETCH CURSOR3 INTO
:TESTE-COL1 :ICOL1E,
:TESTE-COL2 :ICOL2E
END-EXEC.
IF SQLCODE = 0
.... process row
END-IF.
FETCH-ROWS-4.
EXEC SQL FETCH CURSOR4 INTO
:TEST-TB-COL1,
:TEST-TB-COL2 :ICOL2T
END-EXEC.
IF SQLCODE = 0
.... process row
END-IF.
FETCH-ROWS-5.
EXEC SQL FETCH CURSOR5 INTO
:TEST-TB-COL1,
:TEST-TB-COL3 :ICOL3T
END-EXEC.
IF SQLCODE = 0
.... process row
END-IF.
The DESCRIBE CURSOR statement should be used when you do not know the column names and data
types of a particular result set. After execution of the DESCRIBE CURSOR statement, the contents of
the SQLDA are similar to those after a DESCRIBE of a prepared SELECT statement:
• The first 5 bytes of the SQLDAID are set to 'SQLRS'.
• SQLD contains the number of columns for this result set.
• Each SQLVAR entry gives information about a column.
In an SQLVAR entry:
• The SQLTYPE field contains the data type of the column.
• The SQLLEN field contains the length attribute of the column.
• The SQLNAME field contains the name of the column.
The following is the format of the DESCRIBE CURSOR statement:
DESCRIBE CURSOR cursorname INTO descriptor
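To illustrate what the client can do with this SQLDA, the following C sketch checks the 'SQLRS' eye-catcher and reports each column. The struct layout and the function name describe_columns are our own for illustration; the real DB2 SQLDA layout differs:

```c
#include <stdio.h>
#include <string.h>

/* Simplified stand-in for the SQLDA after DESCRIBE CURSOR; illustrative
   layout only, not the real DB2 SQLDA. */
struct colvar  { short sqltype; short sqllen; char sqlname[19]; };
struct coldesc { char sqldaid[8]; short sqld; struct colvar sqlvar[8]; };

/* Verify the 'SQLRS' eye-catcher, then report each column's name,
   data type code (SQLTYPE), and length attribute (SQLLEN). */
static int describe_columns(const struct coldesc *da) {
    if (strncmp(da->sqldaid, "SQLRS", 5) != 0)
        return -1;                 /* not filled by DESCRIBE CURSOR */
    for (int i = 0; i < da->sqld; i++)
        printf("column %s: type %d, length %d\n",
               da->sqlvar[i].sqlname,
               (int)da->sqlvar[i].sqltype,
               (int)da->sqlvar[i].sqllen);
    return da->sqld;               /* SQLD = number of columns */
}
```

A client written for the general case would use this column information to set up the SQLDATA buffers before fetching, as the COLADDR paragraph does later in this section.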
Note that information about the columns is placed in the SQLDA only when the statement that
generated the result set is dynamic. For static statements coded in the stored procedure, the SQLDA
information is returned only if you specified the DESCSTAT(YES) bind option when binding the package
for the stored procedure.
For a detailed explanation of these steps, see “Writing a DB2 for OS/390 Client Program to Receive
Result Sets” in DB2 for OS/390 Version 5 Application Programming and SQL Guide.
Here is an example of how you receive a single result set in sample COBOL client program
SRSBMCBM (no parameters are passed to or received from the stored procedure):
*
* PARMS TO FETCH ROWS INTO WITH NULL INDICATORS
*
01 COL1 PIC X(10).
01 COL2 PIC X(20).
01 INDARRAY.
02 ICOL1 PIC S9(4) COMP.
02 ICOL2 PIC S9(4) COMP.
PROCEDURE DIVISION.
*
* 2. CALL THE STORED PROCEDURE AND CHECK THE SQL RETURN CODE.
* RETURN CODE FROM CALLED STORED PROCEDURE IS SET TO +466
* IF A RESULT SET IS RETURNED.
*
EXEC SQL CALL SRSBMS END-EXEC.
IF SQLCODE = +466
PERFORM PROCESS-OUTPUT
ELSE
DISPLAY "NO RESULT SETS RETURNED".
PROG-END.
STOP RUN.
EXIT.
PROCESS-OUTPUT.
*
* 4. LINK RESULT SET LOCATORS TO RESULT SETS.
* ASSOCIATE LOCATOR WITH PROCEDURE
*
EXEC SQL ASSOCIATE LOCATORS (:LOC-TESTE)
WITH PROCEDURE SRSBMS
END-EXEC.
*
* 5. ALLOCATE CURSORS FOR FETCHING ROWS FROM THE RESULT SETS
*
EXEC SQL ALLOCATE TESTE-CURSOR CURSOR FOR RESULT SET
:LOC-TESTE
END-EXEC.
GET-ROWS.
*
* 7. FETCH ROWS FROM THE RESULT SET INTO HOST VARIABLES.
* ALLOW FOR NULLS.
*
EXEC SQL FETCH TESTE-CURSOR INTO :COL1 :ICOL1,
:COL2 :ICOL2
END-EXEC.
IF SQLCODE = 0
.... process row
END-IF.
Here is an example of the COBOL stored procedure SRSBMS called by sample COBOL client program
SRSBMCBM:
LINKAGE SECTION.
PROCEDURE DIVISION.
*
* 1 Declare a cursor with the option WITH RETURN
*
EXEC SQL DECLARE TESTE-CURSOR CURSOR WITH RETURN FOR
SELECT COL1, COL2 FROM TESTE
END-EXEC.
*
* 2 Open the cursor
*
EXEC SQL OPEN TESTE-CURSOR END-EXEC.
*
* 3 Leave the cursor open
*
ERROR-EXIT.
GOBACK.
Here is an example of how you receive multiple result sets in the sample COBOL client program
MRSBMCBM:
* SQL INCLUDE FOR SQLCA
EXEC SQL INCLUDE SQLCA END-EXEC.
* PARMS FETCH ROWS INTO FROM TESTE
01 COL1 PIC X(10).
01 COL2 PIC X(20).
01 INDARRAY.
02 ICOL1 PIC S9(4) COMP.
02 ICOL2 PIC S9(4) COMP.
*
* 1. DECLARE A RESULT SET LOCATOR FOR THE RESULT SETS THAT
* ARE RETURNED.
*
01 LOC-TESTE USAGE SQL TYPE IS
RESULT-SET-LOCATOR VARYING.
01 LOC-TEST-TB USAGE SQL TYPE IS
RESULT-SET-LOCATOR VARYING.
PROCEDURE DIVISION.
CALL-STORED-PROCEDURE.
*
* PROCESS CURSOR 1 FIRST AND THEN PROCESS CURSOR 2
*
*
* 2. CALL THE STORED PROCEDURE AND CHECK THE SQL RETURN CODE.
*
EXEC SQL CALL MRSBMS END-EXEC.
IF SQLCODE = +466
PERFORM PROCESS-OUTPUT
ELSE
DISPLAY "NO RESULT SETS RETURNED CALL 1 ".
*
* PROCESS CURSOR 1 AND CURSOR 2 AT THE SAME TIME.
*
*
* 2. CALL THE STORED PROCEDURE AND CHECK THE SQL RETURN CODE.
*
EXEC SQL CALL MRSBMS (:SQLC) END-EXEC.
IF SQLCODE = +466
PERFORM PROCESS-BOTH
ELSE
DISPLAY "NO RESULT SETS RETURNED CALL 2 ".
PROG-END.
STOP RUN.
EXIT.
PROCESS-BOTH.
PERFORM ASSOCIATE-LINK.
*
* PROCESS DATA FROM TABLES TESTE AND TEST_TB
*
PERFORM GET-ROWS-B VARYING I FROM 1 BY 1 UNTIL
FETCH-END = +2.
PROCESS-OUTPUT.
PERFORM ASSOCIATE-LINK.
*
* PROCESS DATA FROM TABLE TESTE
*
PERFORM GET-ROWS-E VARYING I FROM 1 BY 1 UNTIL
SQLCODE = +100.
GET-ROWS-TAB.
*
* 7. FETCH ROWS FROM THE RESULT SET INTO HOST VARIABLES.
*
EXEC SQL FETCH TEST-TB-CURSOR INTO :FLD1 END-EXEC.
IF SQLCODE = 0
...... process row logic here
END-IF
GET-ROWS-E.
*
* 7. FETCH ROWS FROM THE RESULT SET INTO HOST VARIABLES.
* ALLOW FOR NULLS
*
EXEC SQL FETCH TESTE-CURSOR INTO :COL1 :ICOL1,
:COL2 :ICOL2
END-EXEC.
IF SQLCODE = 0
...... process row logic here
END-IF
FETCH-TEST-E.
*
* 7. FETCH ROWS FROM THE RESULT SET INTO HOST VARIABLES.
* ALLOW FOR NULLS
*
EXEC SQL FETCH TESTE-CURSOR INTO :COL1 :ICOL1,
:COL2 :ICOL2
END-EXEC.
FETCH-TEST-TB.
*
* 7. FETCH ROWS FROM THE RESULT SET INTO HOST VARIABLES.
*
EXEC SQL FETCH TEST-TB-CURSOR INTO :FLD1 END-EXEC .
GET-ROWS-B.
IF FETCH-END < 2 THEN
IF CUR-END1 < 100 THEN
PERFORM FETCH-TEST-TB
IF SQLCODE = 100 THEN
MOVE SQLCODE TO CUR-END1
ADD 1 TO FETCH-END
MOVE SPACE TO FLD1
END-IF
ELSE
MOVE SPACE TO FLD1
END-IF
IF CUR-END2 < 100 THEN
PERFORM FETCH-TEST-E
IF SQLCODE = 100 THEN
MOVE SQLCODE TO CUR-END2
ADD 1 TO FETCH-END
MOVE SPACES TO COL1
MOVE SPACES TO COL2
ELSE
.... insert null logic for columns
ASSOCIATE-LINK.
*
* 4. LINK RESULT SET LOCATORS TO RESULT SETS.
*
* ASSOCIATE LOCATORS WITH PROCEDURE MRSBMS
EXEC SQL ASSOCIATE LOCATORS (:LOC-TESTE, :LOC-TEST-TB)
WITH PROCEDURE MRSBMS
END-EXEC.
*
* 5. ALLOCATE CURSORS FOR FETCHING ROWS FROM THE RESULT SETS
*
* LINK THE RESULT SET TO THE LOCATOR
EXEC SQL ALLOCATE TESTE-CURSOR CURSOR FOR RESULT SET
:LOC-TESTE
END-EXEC.
* LINK THE RESULT SET TO THE LOCATOR
EXEC SQL ALLOCATE TEST-TB-CURSOR CURSOR FOR RESULT SET
:LOC-TEST-TB
END-EXEC.
Here is an example of the COBOL stored procedure MRSBMS called by the sample COBOL client
program MRSBMCBM:
WORKING-STORAGE SECTION.
EXEC SQL INCLUDE SQLCA END-EXEC.
LINKAGE SECTION.
PROCEDURE DIVISION.
*
* 1 Declare a cursor with the option WITH RETURN
*
EXEC SQL DECLARE TESTE-CURSOR CURSOR WITH RETURN FOR
SELECT COL1, COL2 FROM TESTE
END-EXEC.
*
* 2 Open the cursor
*
EXEC SQL OPEN TESTE-CURSOR END-EXEC.
*
* 1 Declare a cursor with the option WITH RETURN
*
EXEC SQL DECLARE TEST-TB-CURSOR CURSOR WITH RETURN FOR
SELECT COL1 FROM TEST_TB
END-EXEC.
*
* 2 Open the cursor
*
EXEC SQL OPEN TEST-TB-CURSOR END-EXEC.
*
* 3 Leave the cursors open.
*
ERROR-EXIT.
GOBACK.
There are two reasons why a cursor will not be passed back to a client program:
• The stored procedure closes the cursor before it terminates.
• The OPEN CURSOR statement fails.
The following excerpts are from a sample client program that uses the SQLDA from the DESCRIBE
PROCEDURE statement and the DESCRIBE CURSOR statement to process result sets whose number
and contents are not known in advance:
01 RECPTR POINTER.
01 RECNUM REDEFINES RECPTR PIC S9(8) COMP.
01 IRECPTR POINTER.
01 IRECNUM REDEFINES IRECPTR PIC S9(8) COMP.
01 I PIC S9(4) COMP.
01 F PIC S9(4) COMP.
01 J PIC S9(4) COMP.
01 DUMMY PIC S9(4) COMP.
01 MYTYPE PIC S9(4) COMP.
01 COLUMN-IND PIC S9(4) COMP.
01 COLUMN-LEN PIC S9(4) COMP.
01 COLUMN-PREC PIC S9(4) COMP.
01 COLUMN-SCALE PIC S9(4) COMP.
01 INDCOUNT PIC S9(4) COMP.
01 ROWCOUNT PIC S9(4) COMP.
01 WORKAREA2.
02 WORKINDPTR POINTER OCCURS 750 TIMES.
77 ONE PIC S9(4) COMP VALUE +1.
77 TWO PIC S9(4) COMP VALUE +2.
77 FOUR PIC S9(4) COMP VALUE +4.
77 QMARK PIC X(1) VALUE '?'.
* PARM TO RECEIVE THE SQLCODE FROM STORED PROCEDURE
01 SQLC PIC S9(9) COMP.
01 CURSOR-NM PIC X(8).
01 CURSOR-N PIC S9(4) COMP.
01 IDX-1 PIC S9(4) COMP.
01 IDX-2 PIC S9(4) COMP.
01 CURSOR-NAMEX.
02 CURSOR-NAMES OCCURS 5 TIMES.
04 CURNAMEL PIC S9(4) COMP.
04 CURNAMEC PIC X(30).
01 CURNAME1 PIC X(30).
01 CURNAME2 REDEFINES CURNAME1.
02 CURNAME3 OCCURS 30 TIMES.
04 CURNAME0 PIC X(1).
*
* 1 DECLARE A RESULT SET LOCATOR VARIABLE FOR EACH RESULT
* SET THAT MIGHT BE RETURNED.
*
01 LOC-1 USAGE SQL TYPE IS
RESULT-SET-LOCATOR VARYING.
*
* PASSED BY MR0BMCBM
LINKAGE SECTION.
01 LINKAREA-IND.
02 IND PIC S9(4) COMP OCCURS 750 TIMES.
01 LINKAREA-REC.
02 REC1-LEN PIC S9(8) COMP.
02 REC1-CHAR PIC X(1) OCCURS 1 TO 32700 TIMES
DEPENDING ON REC1-LEN.
01 LINKAREA-QMARK.
02 INDREC PIC X(1).
CALL-STORED-PROCEDURE.
**************************
*
* 2 CALL THE STORED PROCEDURE MR5BMS AND CHECK THE SQL
* RETURN CODE FOR +466
*
PROCESS-OUTPUT.
**************************
*
* 3 DETERMINE HOW MANY RESULT SETS THE STORED PROCEDURE IS
* RETURNING.
*
* SQLD WILL HAVE THE NUMBER OF RESULT SETS.
* SQLNAMEC() WILL HAVE THE STORED PROCEDURE CURSOR NAMES.
*
**************************
EXEC SQL DESCRIBE PROCEDURE :STOREPROC INTO :SQLDA
END-EXEC.
**************************
*
* Loop through the cursor names so you can see what was
* passed back from the DESCRIBE PROCEDURE SQL statement.
*
**************************
MOVE SQLD TO CURSOR-N.
PERFORM SAVE-CUR-NAMES VARYING IDX-1
FROM 1 BY 1 UNTIL IDX-1 GREATER THAN SQLD.
**************************
*
* 4 LINK RESULT SET LOCATORS TO RESULT SETS
*
**************************
EXEC SQL ASSOCIATE LOCATORS
(:LOC-1, :LOC-2, :LOC-3, :LOC-4, :LOC-5)
WITH PROCEDURE :STOREPROC
END-EXEC.
**************************
*
* CHECK TO SEE IF PROCEDURE RETURNED MORE RESULT SETS THAN YOU
* HAD LOCATORS FOR. IF SO YOU WILL GET A +494.
* 494, WARNING: PROCEDURE MR5BMS RETURNED 00000006 QUERY RESULT
* SETS
* THE OTHER LOCATORS WILL HAVE VALID ADDRESSES THAT YOU CAN
* PROCESS.
*
**************************
IF SQLCODE > 0 PERFORM DBERROR.
**************************
*
* 5 ALLOCATE CURSORS FOR EACH RESULT SET RETURNED FOR
* FETCHING ROWS.
*
**************************
*
* 7 PERFORM PARAGRAPH TO FETCH THE ROWS FOR CURSOR ONE
*
**************************
FETCH-ROWS-1.
EXEC SQL FETCH CURSOR1 USING DESCRIPTOR :SQLDA END-EXEC.
IF SQLCODE = 0
PERFORM DISPLAY-RECORD.
FETCH-ROWS-2.
EXEC SQL FETCH CURSOR2 USING DESCRIPTOR :SQLDA END-EXEC.
DISPLAY "END FETCH ROWS 2" "+++++"
IF SQLCODE = 0
PERFORM DISPLAY-RECORD.
FETCH-ROWS-3.
EXEC SQL FETCH CURSOR3 USING DESCRIPTOR :SQLDA END-EXEC.
IF SQLCODE = 0
PERFORM DISPLAY-RECORD.
FETCH-ROWS-4.
EXEC SQL FETCH CURSOR4 USING DESCRIPTOR :SQLDA END-EXEC.
IF SQLCODE = 0
PERFORM DISPLAY-RECORD.
FETCH-ROWS-5.
EXEC SQL FETCH CURSOR5 USING DESCRIPTOR :SQLDA END-EXEC.
IF SQLCODE = 0
PERFORM DISPLAY-RECORD.
**************************
* PERFORM PARAGRAPH TO DISPLAY THE RECORD RETURNED.
**************************
DISPLAY-RECORD.
* ADD IN MARKERS TO DENOTE NULLS.
MOVE ONE TO INDCOUNT.
DISPLAY "PERFORM NULLCHK" "+++++" SQLD
PERFORM NULLCHK UNTIL INDCOUNT > SQLD.
MOVE REC1-LEN TO REC01-LEN.
**************************
* PERFORM PARAGRAPH TO DENOTE NULLS
**************************
NULLCHK.
IF IND(INDCOUNT) < 0 THEN
SET ADDRESS OF LINKAREA-QMARK TO WORKINDPTR(INDCOUNT)
MOVE QMARK TO INDREC.
ADD ONE TO INDCOUNT.
**************************
* PERFORM PARAGRAPH TO BLANK LINES
**************************
BLANK-REC.
MOVE ONE TO J.
PERFORM BLANK-MORE UNTIL J > REC1-LEN.
BLANK-MORE.
MOVE SPACE TO REC1-CHAR(J).
ADD 1 TO J.
* ONLY PRINT THE FIRST 127 CHARS.
MOVE-REC1.
MOVE REC1-CHAR(J) TO REC01-CHAR(J)
IF J = 127
MOVE REC1-LEN TO J.
**************************
* PERFORM PARAGRAPH TO MOVE CURSOR NAME TO REC1.
**************************
PRINT-CUR-NAME.
MOVE SPACE TO REC01.
MOVE ONE TO J.
PERFORM PRINT-CUR-NAME-MOVE UNTIL J > CURNAMEL(IDX-2).
MOVE CURNAMEL(IDX-2) TO REC01-LEN.
WRITE REC01 AFTER ADVANCING 1 LINE.
MOVE SPACE TO REC01.
PRINT-CUR-NAME-MOVE.
DISPLAY '+++++ J' CURNAME0(J) J.
MOVE CURNAME0(J) TO REC01-CHAR(J).
ADD 1 TO J.
**************************
*
* PERFORM PARAGRAPH TO DESCRIBE THE CURSOR FOR RESULT SET,
* AND SET UP THE ADDRESSES IN THE SQLDA FOR DATA.
*
**************************
DESCRIBE-CURSOR.
**************************
*
* 6 DETERMINE THE CONTENTS OF THE RESULT SETS.
* USE THE DESCRIBE CURSOR TO GET INFORMATION ON THE
* NUMBER OF COLUMNS, DATA TYPE.
**************************
*
* SET UP THE ADDRESSES IN THE SQLDA FOR DATA.
*
**************************
MOVE ZERO TO ROWCOUNT.
MOVE ZERO TO REC1-LEN.
SET RECPTR TO IRECPTR.
MOVE ONE TO I.
PERFORM COLADDR UNTIL I > SQLD.
**************************
*
* PERFORM PARAGRAPH TO CALCULATE COLUMN LENGTH
*
* DETERMINE THE LENGTH OF THIS COLUMN (COLUMN-LEN)
* THIS DEPENDS UPON THE DATA TYPE. MOST DATA TYPES HAVE
* THE LENGTH SET, BUT VARCHAR, GRAPHIC, VARGRAPHIC, AND
* DECIMAL DATA NEED TO HAVE THE BYTES CALCULATED.
* THE NULL ATTRIBUTE MUST BE SEPARATED TO SIMPLIFY MATTERS.
*
**************************
COLADDR.
SET SQLDATA(I) TO RECPTR.
MOVE SQLLEN(I) TO COLUMN-LEN.
* COLUMN-IND IS 0 FOR NO NULLS AND 1 FOR NULLS
DIVIDE SQLTYPE(I) BY TWO GIVING DUMMY REMAINDER COLUMN-IND.
* MYTYPE IS JUST THE SQLTYPE WITHOUT THE NULL BIT
MOVE SQLTYPE(I) TO MYTYPE.
SUBTRACT COLUMN-IND FROM MYTYPE.
* SET THE COLUMN LENGTH, DEPENDENT UPON DATA TYPE
EVALUATE MYTYPE
WHEN CHARTYPE CONTINUE,
WHEN DATETYP CONTINUE,
WHEN TIMETYP CONTINUE,
WHEN TIMESTMP CONTINUE,
WHEN FLOATYPE CONTINUE,
WHEN VARCTYPE
ADD TWO TO COLUMN-LEN,
WHEN VARLTYPE
ADD TWO TO COLUMN-LEN,
WHEN GTYPE
MULTIPLY COLUMN-LEN BY TWO GIVING COLUMN-LEN,
WHEN VARGTYPE
PERFORM CALC-VARG-LEN,
WHEN LVARGTYP
PERFORM CALC-VARG-LEN,
WHEN HWTYPE
MOVE TWO TO COLUMN-LEN,
WHEN INTTYPE
MOVE FOUR TO COLUMN-LEN,
**************************
* PERFORM PARAGRAPH TO CALCULATE COLUMN LENGTH
* FOR A DECIMAL DATA TYPE COLUMN
**************************
CALC-DECIMAL-LEN.
DIVIDE COLUMN-LEN BY 256 GIVING COLUMN-PREC
REMAINDER COLUMN-SCALE.
MOVE COLUMN-PREC TO COLUMN-LEN.
ADD ONE TO COLUMN-LEN.
DIVIDE COLUMN-LEN BY TWO GIVING COLUMN-LEN.
**************************
* PERFORM PARAGRAPH TO CALCULATE COLUMN LENGTH
* FOR A VARGRAPHIC DATA TYPE COLUMN
**************************
CALC-VARG-LEN.
MULTIPLY COLUMN-LEN BY TWO GIVING COLUMN-LEN.
ADD TWO TO COLUMN-LEN.
**************************
* PERFORM PARAGRAPH TO NOTE AN UNRECOGNIZED
* DATA TYPE COLUMN
**************************
DISPLAY 'UNRECOGNIZED DATA TYPE FOR COLUMN'.
DISPLAY 'SQLTYPE ' SQLTYPE(I).
DISPLAY 'SQLLEN ' SQLLEN(I).
GO TO PROG-END.
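The SQLTYPE and SQLLEN conventions that the COLADDR and CALC-DECIMAL-LEN paragraphs above rely on can be summarized in plain C. This is a sketch of the arithmetic only, with our own function names, not DB2 code:

```c
/* The low-order bit of SQLTYPE flags a nullable column (this is what
   the COBOL's DIVIDE ... REMAINDER computes). */
static int nullable(int sqltype)  { return sqltype & 1; }

/* Stripping that bit leaves the base data type code. */
static int base_type(int sqltype) { return sqltype & ~1; }

/* For DECIMAL columns, SQLLEN packs precision*256 + scale.  The stored
   byte length is (precision + 1) / 2, matching CALC-DECIMAL-LEN:
   packed decimal holds two digits per byte plus a sign nibble. */
static int decimal_bytes(int sqllen) {
    int precision = sqllen / 256;   /* high-order byte */
    int scale     = sqllen % 256;   /* low-order byte, unused here */
    (void)scale;
    return (precision + 1) / 2;
}
```

For example, a DECIMAL(7,2) column carries SQLLEN = 7*256 + 2 and occupies 4 bytes.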
9.2.10 SQLCODEs
In this section, we provide some of the SQLCODEs that you may get. Some are specific to MRSP, and
some can also be received even if you are not using MRSP.
+494 The number of result set locators specified on the ASSOCIATE LOCATORS statement is less
than the number of result sets returned by the stored procedure. The first "n" result set
locator values are returned, where "n" is the number of result set locator variables specified
on the SQL statement.
In this section, we explain how to transfer multiple rows to the calling program by blocking them (see
Figure 84).
Although we show here examples for the workstation, the concept is also valid for DB2 for MVS/ESA
and DB2 for OS/390.
The programs developed during this project to transfer multiple rows are TR0C2CC2 and TR0C2S. Our
sample program calls the stored procedure by using an SQLDA. The stored procedure fetches 10 rows
from the STAFF table. Columns NAME, ID, and SALARY are fetched. To avoid having to transfer 30
different items (10 rows x 3 columns), for each row the three columns are grouped as a single
parameter. Thus only 10 parameters are sent back to the client program and are displayed on screen.
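Grouping the three columns amounts to formatting each row into one fixed-layout character string, which the C sketch below illustrates (pack_row is our own illustrative name; the format string is the one the TR0C2S sample uses):

```c
#include <stdio.h>

/* Format one row's three columns (NAME, ID, SALARY) into a single
   character parameter, as the TR0C2S sample does with its outline
   variable, so 10 parameters travel back instead of 30 items. */
static void pack_row(char *outline, size_t size,
                     const char *name, int id, double salary) {
    snprintf(outline, size,
             "Name : %-10s Id : %-5i Salary : %-10.2f",
             name, id, salary);
}
```

Each packed string fits comfortably in the 100-character data items that the client's SQLDA describes.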
/* Declare Variables */
struct sqlca sqlca;
struct sqlda *inout_sqlda = NULL;
int cntr;
....
....
After obtaining the database alias, user ID, and password, the program connects to the database
server. Next, the program prepares the SQLDA to receive 10 rows, each with a data length of 100
characters. The data length of 100 was chosen arbitrarily; any data length sufficient to hold the three
columns would do. The preparation proceeds as follows:
....
....
/* Allocate and Initialize Output SQLDA */
inout_sqlda = (struct sqlda *)malloc( SQLDASIZE(10) );
inout_sqlda->sqln = 10;
inout_sqlda->sqld = 10;
inout_sqlda->sqlvar[0].sqltype = SQL_TYP_NCSTR;
inout_sqlda->sqlvar[0].sqldata = data_item0;
inout_sqlda->sqlvar[0].sqllen = 101;
inout_sqlda->sqlvar[0].sqlind = &dataind0;
inout_sqlda->sqlvar[1].sqltype = SQL_TYP_NCSTR;
inout_sqlda->sqlvar[1].sqldata = data_item1;
inout_sqlda->sqlvar[1].sqllen = 101;
inout_sqlda->sqlvar[1].sqlind = &dataind1;
....
....
inout_sqlda->sqlvar[9].sqltype = SQL_TYP_NCSTR;
inout_sqlda->sqlvar[9].sqldata = data_item9;
inout_sqlda->sqlvar[9].sqllen = 101;
inout_sqlda->sqlvar[9].sqlind = &dataind9;
....
....
Once the SQLDA is initialized, the client program can call the TR0C2S stored procedure. When the
stored procedure ends, the 10 rows are transferred from the stored procedure to the client in the
SQLDA. To see the results on the screen, use the printf function:
....
....
EXEC SQL CALL :procname USING DESCRIPTOR :*inout_sqlda;
CHECKERR ("CALL WITH SQLDA");
The stored procedure declares and opens a cursor. Next, it starts fetching data into the host variables:
....
....
EXEC SQL DECLARE C1 CURSOR FOR select name, id, salary from staff;
/*******************************************************************/
/* Fetch the data */
For each fetched row, the stored procedure does the following:
1. Checks whether the SQLCODE is 0.
2. Checks whether the received data is NULL.
3. Copies the host variables to working variables.
4. Groups the three columns into a single character string in the outline variable.
5. Copies the outline variable to the SQLDA.
sprintf(outline,
"Name : %-10s Id : %-5i Salary : %-10.2f ",
sname, sid, ssalary);
break;
} /* End switch */
} /* End For */
....
....
To end, the stored procedure closes the cursor and copies the SQLCA information to the SQLCA
structure. Control is returned to the client program:
....
....
EXEC SQL CLOSE C1;
There are a number of choices for developing Windows applications that access DB2 database servers:
• Embedded SQL - Using this choice you can develop applications in the following programming
languages:
− C or C++
- Microsoft Visual C++ Version 1.5
- Borland C++ Version 4.0 or 4.5
− COBOL
- Micro Focus COBOL Version 3.1 or later.
• DB2 CLI
• IBM ODBC driver - Through the IBM ODBC driver, you can use applications that support the ODBC
specifications to access DB2 database servers. For example, you can develop Visual Basic
applications to access DB2 databases.
You can also use the following products, among others:
• IBM VisualAge C++ for Windows (Version 3 Release 5 is the current release)
• IBM VisualAge for COBOL for OS/2 and Windows (Version 2.0 and later)
• IBM JDBC driver (for Java applications).
Before an ODBC application can access a DB2 database server, you must perform the following steps:
1. Install the IBM ODBC driver.
Start a Windows session and run the following command:
x:\sqllib\win\bin\db2odbc
where x is the drive where CAE for Windows is installed.
In native Windows, this command accomplishes the following tasks:
• Installs the IBM ODBC driver
• Installs an icon to trigger the ODBC Administrator functions from the control panel of Windows.
(The Control Panel icon is on the Windows main panel.)
In Windows under OS/2 (WIN-OS/2), this command installs only the IBM ODBC driver. Select the
ODBC Installer icon from the IBM DATABASE 2 panel to install an icon that triggers the ODBC
administrator functions from the OS/2 desktop.
2. Catalog databases and nodes, employing one of the following methods:
• Use the DB2 client setup tool.
• Issue catalog commands from the command line processor.
3. Bind the ODBC driver files for each database you want to access. We recommend using the
DB2CLI.LST list file to bind the ODBC driver to each database you want to access through ODBC.
4. Register the database as a data source, employing one of the following methods:
• Use the DB2 client setup tool.
• Use the ODBC administration tool.
[SAMP2COZ]
Driver=C:\WINDOWS\SYSTEM\db2cliw.dll
Description=
[DB41POK]
Driver=C:\WINDOWS\SYSTEM\db2cliw.dll
Description=
[SAMP6000]
Driver=C:\WINDOWS\SYSTEM\db2cliw.dll
Description=
Refer to Installing and Using DB2 Client for Windows for more information.
We used Visual Basic to develop our ODBC client application samples. Our Visual Basic sample
applications issue calls to ODBC APIs, which are routed by the ODBC driver manager to the IBM
ODBC driver, and then through CAE for Windows to a DB2 database server. If the DB2 database
server is a DRDA server, DDCS is required. See Figure 86 on page 189 for a description of our ODBC
working environment.
The benefit of using the direct API approach is that the user has complete control over which ODBC
functions are called. This allows greater flexibility and maximizes the performance of the application.
The drawbacks are that the user has to code the ODBC calls and, with this approach, is forced to
handle Unicode conversions.
The main benefits of using RDO are that the ODBC function calls are not hard coded in the application
but are issued by Visual Basic, and that Unicode conversions are performed automatically by Visual Basic.
While this may seem like the way to go, the drawback is that RDO issues a large number of ODBC
calls over which you have no control. Many of these calls can be unnecessary and their sheer
numbers can degrade application performance. To see all of the ODBC calls that Visual Basic issues
during an RDO session, turn on the CLI trace before running the program.
For diagnosing distributed data problems in applications that use IBM's ODBC/CLI driver, turn on the
CLI trace just prior to running the application. This trace is highly readable. It shows the sequence of
ODBC calls issued by the application.
pcb1 = 4
pcb2 = SQL_NTS
pcb3 = SQL_NTS
Sub
StringByte(Data As String, ByteLen As Integer, return_buffer() As Byte)
Dim StrLen As Integer, Count As Integer
For Count = 0 To Len(Data) - 1
return_buffer(Count) = Asc(Mid(Data, Count + 1, 1))
Next Count
For Count = Len(Data) To ByteLen
return_buffer(Count) = 0
Next Count
End Sub
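The same padding logic can be sketched in C. The function below is our own illustration of what StringByte does, with an added bounds check: copy the string's characters into a byte buffer and zero-pad the remainder, as required when passing fixed-length CHAR parameters through the ODBC APIs.

```c
#include <string.h>

/* C sketch of the Visual Basic StringByte helper above.
   The function name and signature are ours, for illustration only. */
static void string_to_bytes(const char *data, unsigned char *buf, size_t buf_len)
{
    size_t n = strlen(data);
    if (n > buf_len)
        n = buf_len;                 /* never write past the buffer */
    memcpy(buf, data, n);
    memset(buf + n, 0, buf_len - n); /* zero-pad the rest */
}
```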
• After issuing an SQLPrepare and SQLExecute to execute the SQL call stored procedure statement,
you may want to display the output values in those same parameters. Before displaying these
If the SELECT statements inside of the stored procedure are dynamic, the result-set column names
should be returned automatically.
Note that string-data conversion does not apply to Visual Basic Version 3.
For more information about Visual Basic and stored procedures, check the following Web page:
http://www.software.ibm.com/data/db2/os390/cstips.html
You can download working examples for Visual Basic Version 4 and Version 5 from this Web page.
In this section, we do not explain all of the commands that we coded in our Visual Basic applications.
Instead, we describe the commands and functions related to invoking a stored procedure in DB2
servers.
On the diskette, the code for VBWMSTS0 and VBW2STS0 is at Version 3 of Visual Basic and will not
work with Visual Basic Version 4 and Version 5.
The stored procedures and the parameters required to invoke the stored procedures on MVS and on
OS/2 are similar. Table 12 on page 193 shows the characteristics of the parameters used for both
client applications, for Visual Basic Version 4 and Version 5.
For Visual Basic Version 3, the data length of the parameter P1 is 10.
Our client applications prompt users to enter the name of the database server to which they want to
connect. The only difference between the applications is the procedure name specified in the SQL
CALL statement.
In the paragraphs that follow, we describe the structure of our ODBC program coded with Visual Basic.
In this example, we are using the direct API coding method.
10.3.5.1 Parameters Used by the Stored Procedure: We defined and initialized parameters
P1 and P2 as shown in Figure 87.
Dim P1 As String
Dim P2 As Long
P1 = "W000D1C000"
P2 = 0
Figure 87. Define and Initialize Parameters Used by the Stored Procedure
10.3.5.2 Handle Variables: A typical ODBC application uses a set of handle variables to control
the execution of the ODBC functions invoked by the application. Using these variables, the application
can request the completion status information of each ODBC call as it is executed. These variables
were defined as shown in Figure 88.
10.3.5.3 Return Code Variable: Every ODBC function returns basic diagnostic information by
using a return code that can be received in a variable defined in your application. Figure 89 on
page 195 shows the return code variable we defined in our Visual Basic applications.
10.3.5.4 CONNECT Statement Variables: The CONNECT statement issued from our
application requires three parameters: database name, user ID, and password. We defined three
variables for the values of these parameters as shown in Figure 90.
Note: Our Visual Basic applications prompt the user to enter data for these parameters.
10.3.5.5 Setting a Variable for the CALL Statement: The CALL statement issued in our
Visual Basic applications is passed to the SQLPrepare function by means of a variable we named
Query. This variable was defined as follows:
Dim Query as String
See 10.3.7, “Calling the Stored Procedure” on page 196 for an explanation of the SQLPrepare function.
The content of the Query variable is initialized according to the stored procedure that the application
invokes and should follow certain rules according to the platform where the stored procedure is
located:
Calling the DB2 for MVS/ESA Stored Procedure: Application VBWMSTS0 invokes a DB2 for MVS/ESA
stored procedure named TS0BMS. The initialization of the Query variable is the following:
Query = "CALL TS0BMS(?,?)"
Note that the stored procedure name must be in uppercase; otherwise DB2 for MVS/ESA returns an
SQLCODE -113.
Calling the DB2 for OS/2 Stored Procedure: Application VBW2STS0 invokes a DB2 for OS/2 stored
procedure named BB22STS0. The initialization of the Query variable is:
Query = "CALL bb22sts0(?,?)"
Note that the stored procedure name is in lowercase. It can also be specified in uppercase, because
the name of the function that executes this stored procedure was exported to a DLL in uppercase. See
7.3, “Stored Procedure Preparation” on page 111 for more information about preparing a stored
procedure in DB2 for OS/2.
Each question mark represents a parameter marker, that is, an argument to be passed to the stored
procedure. Our Visual Basic applications pass two parameters to the stored procedure they invoke.
Refer to 8.4, “CLI and ODBC Applications” on page 127 for details on the SQL CALL statement.
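Because each parameter marker later needs a matching SQLBindParameter call, it can be useful to check how many markers a CALL string contains. The helper below is our own sketch, not an ODBC API; it treats a ? inside a quoted literal as ordinary text:

```c
#include <stddef.h>

/* Counts the parameter markers (unquoted question marks) in an SQL CALL
   statement such as "CALL TS0BMS(?,?)". This helper is our own sketch. */
static int count_parameter_markers(const char *sql)
{
    int count = 0;
    int in_quote = 0;
    const char *p;
    for (p = sql; *p != '\0'; p++) {
        if (*p == '\'')
            in_quote = !in_quote;       /* toggle on quoted literals */
        else if (*p == '?' && !in_quote)
            count++;
    }
    return count;
}
```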
To allocate these handles and then perform the CONNECT statement, our Visual Basic applications
invoked the ODBC functions shown in Figure 91.
ret=SQLAllocEnv(a_henv)
ret=SQLAllocConnect(a_henv, a_hdbc)
ret=SQLConnect(a_hdbc,DataSource,SQL_NTS,User,SQL_NTS,Password,SQL_NTS)
• SQLAllocEnv allocates memory for the environment handle. This handle is used to control the
valid connection handles and the current active connection handles. This handle must be
requested before connecting to a data source.
In our Visual Basic applications, the name of this handle is a_henv.
• SQLAllocConnect allocates memory to control a particular connection. A connection handle must
be related to one environment handle. However, an environment handle can control multiple
connection handles. This handle must be requested before connecting to a data source.
In our Visual Basic applications, the name of this handle is a_hdbc.
• SQLConnect loads the driver required to pass ODBC calls to a specific database manager system
and establishes the connection to the data source. A reference to a connection handle is required
to keep track of the status, transaction state, and error information of this connection.
Database alias name, user ID, and password values are required to establish the connection.
These values are provided through the DataSource, User, and Password variables, respectively.
With the support for ODBC 3.0 APIs in DB2 Universal Database V5, some of the ODBC 2.0 APIs have
been removed. For a comprehensive list, refer to Appendix B, “Migrating Applications,” in the DB2 UDB
V5 Call Level Interface Guide and Reference, which has a table of CLI functions that should not be used
with UDB Version 5.
Figure 92 on page 197 shows the statements we coded for these tasks.
cbn=SQL_NTS
cb4=4
ret=SQLAllocStmt(a_hdbc,s_storedproc)
ret=SQLPrepare(s_storedproc,Query,SQL_NTS)
ret=SQLBindParameter(s_storedproc,1,SQL_PARAM_INPUT,SQL_C_CHAR,SQL_CHAR,
10,0,P1,11,cbn)
ret=SQLBindParameter(s_storedproc,2,SQL_PARAM_OUTPUT,SQL_C_LONG,
SQL_INTEGER,0,0,P2,4,cb4)
ret=SQLExecute(s_storedproc)
Every call to ODBC has specific return code values that you should always check to control the
execution of your application. A failure at execution time can be caused by an earlier error when
binding parameters, which is difficult to diagnose if you did not check the return code of the bind call.
For example, to check the return code of the SQLExecute call, we coded the Visual Basic instruction
shown in Figure 93 on page 199.
This conditional instruction checks whether the completion code of the SQLExecute returned a
function-failed state. In this case, the program calls the error routine shown in Figure 94.
SQLError returns the diagnostic information about both errors and warnings associated with the most
recently invoked ODBC function for a particular statement, connection, or environment handle.
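The return codes that these diagnostic checks revolve around are the standard ODBC 2.0 values. The two classifier helpers below are our own sketch of the checks an application typically makes; only the #define values come from the ODBC specification:

```c
/* Standard ODBC 2.0 return code values. */
#define SQL_SUCCESS            0
#define SQL_SUCCESS_WITH_INFO  1
#define SQL_NO_DATA_FOUND    100
#define SQL_ERROR             (-1)
#define SQL_INVALID_HANDLE    (-2)

/* Nonzero when the application should call SQLError for diagnostics. */
static int needs_diagnostics(int rc)
{
    return rc == SQL_ERROR || rc == SQL_SUCCESS_WITH_INFO;
}

/* Nonzero when the call failed outright. */
static int call_failed(int rc)
{
    return rc == SQL_ERROR || rc == SQL_INVALID_HANDLE;
}
```

Note that SQL_NO_DATA_FOUND is not an error; it simply ends a fetch loop.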
Every connection in an ODBC application can be set to perform commit processing manually or
automatically. Manual processing implies that the scope of a transaction must be controlled by the
application; that is, the application must explicitly issue the commit or rollback operations required.
Automatic commit processing treats each SQL statement as a single, complete transaction. Thus, the
scope of a transaction is a single SQL statement automatically committed after its completion.
Automatic commit is recommended for read-only applications.
Because our stored procedures perform write operations, we decided to use manual commit in order
to have explicit control over the transaction.
We used SQLSetConnectOption to set the commit mode of our connection. This function was coded
as follows:
ret=SQLSetConnectOption(a_hdbc, SQL_AUTOCOMMIT, 0)
where
• a_hdbc is the name of the connection handle.
• SQL_AUTOCOMMIT is the name of the commit mode attribute.
Having set the commit mode to off, we were able to explicitly commit or roll back. We coded the
commit operation in the following way:
ret=SQLTransact(a_henv,a_hdbc, SQL_COMMIT)
This call executes commit processing for all the changes to the database since connect time or the
previous call to SQLTransact, whichever is the most recent event. Here, a_henv is the environment
handle for this statement, and a_hdbc is its connection handle. It associates the commit operation with
the connection where commit processing should occur.
This call executes rollback processing for all changes to the database since connect time or the
previous call to SQLTransact, whichever is the most recent event. Again, a_henv is the environment
handle for this statement, and a_hdbc is its connection handle. It associates the rollback operation
with the connection where rollback processing should occur.
Figure 95 shows the functions we used to free our allocated handles and disconnect from the database
server.
ret=SQLFreeStmt(s_storedproc, SQL_DROP)
ret=SQLDisconnect(a_hdbc)
ret=SQLFreeConnect(a_hdbc)
ret=SQLFreeEnv(a_henv)
SQLFreeStmt stops processing associated with a specific statement, closes any open cursors, discards
pending results, and optionally frees all resources associated with the statement handle. In this
example, we free the s_storedproc statement handle. SQL_DROP indicates that all resources
associated with s_storedproc should be freed.
SQLDisconnect closes the connection associated with a specific connection handle. In this example,
we close the connection associated with the a_hdbc connection handle.
SQLFreeConnect releases a connection handle and frees all memory associated with the handle. In
this example, we release the a_hdbc connection handle.
SQLFreeEnv frees the environment handle and releases all memory associated with the handle. You
should issue SQLDisconnect with error checking to close all active connections before issuing
SQLFreeEnv. If there is an active connection, SQLFreeEnv fails. In this example, we free the a_henv
environment handle.
With the support for ODBC 3.0 APIs in DB2 Universal Database V5, some of the ODBC 2.0 APIs have
been removed. For a comprehensive list, refer to Appendix B, “Migrating Applications,” in the DB2 UDB
V5 Call Level Interface Guide and Reference.
Figure 96 on page 202 shows the C source code of our sample CLI application.
[TSTCLI1X]
uid=userid
pwd=password
autocommit=0
TableType="'TABLE','VIEW','SYSTEM TABLE'"
[TSTCLI2X]
; Assuming dbalias2 is a database in DB2 for MVS.
SchemaList="'OWNER1','OWNER2',CURRENT SQLID"
[MYVERYLONGDBALIASNAME]
dbalias=dbalias3
SysSchema=MYSCHEMA
[SAMP6000]
SYSSCHEMA=SYSIBM
Description=
[SAMP2COZ]
Description=
[DB41POK]
AUTOCOMMIT=0
10.5 PowerBuilder
Different types of DB2 stored procedures call for different techniques from a PowerBuilder client. We
describe three different approaches for three different scenarios:
• No result sets (using output parameters, available in DB2 V4 and later)
• Single result sets (DB2 for OS/390 V5 or later)
• Multiple result sets (DB2 for OS/390 V5 or later)
Our application requester (AR) runs DB2 Connect Personal Edition on Windows 95. We did the
following:
1. Installed PowerBuilder Enterprise 5.0 using the Custom setup option to request ODBC drivers:
a. From the setup options, click on the Custom check box.
b. On the Products Available window, select PowerBuilder and click on the Detail button.
c. Click on the ODBC drivers check box to select it as shown in Figure 98 on page 206
Our application server (AS) runs DB2 for OS/390 V5 on OS/390 R4. Communication uses TCP/IP.
We first write our stored procedure on OS/390 running DB2 for OS/390 V5. We insert a row in
SYSIBM.SYSPROCEDURES for each stored procedure with COMMIT_ON_RETURN set to 'Y'.
We then code the client programs, allowing for the COMMIT_ON_RETURN settings of the stored
procedure. Familiarity with creating an application using PowerBuilder is assumed.
5. Click on Settings.
6. Click on No to connect (we are updating the local CLI/ODBC configuration file and therefore do not
need a connection for this).
7. Click on the Advanced push button in CLI/ODBC Settings as shown in Figure 101 on page 208.
8. Select the Service tab in the CLI/ODBC Settings - Advanced Settings window.
9. From here you can choose the different tabs (such as Service, where you can specify CLI/ODBC
trace options). Refer to Chapter 4, “DB2 CLI/ODBC Configuration Keyword Listing,” in the IBM DB2
UDB Call Level Interface Guide and Reference.
If a CLI/ODBC application is accessing DB2, these changes will not be in effect in that application
until it is restarted.
If you work with more than one DB2 database or data source (in PowerBuilder terms), you will need
to perform the same actions for each one.
10. Click on the OK push button to get back to the CLI/ODBC Settings window.
11. Click on the OK push button to return to Database Properties.
12. You should get a message box saying that the database list has been successfully updated. Click
on the OK push button.
Your db2cli.ini file has now been updated.
13. Edit the pbibm050.ini file. Change PBSupportDBBind='YES'.
14. Edit the db2cli.ini file to add the following line for the DB2 data source:
[DSGCT]
..
.
PATCH2=1
PATCH2 specifies using work-arounds for known problems with CLI/ODBC applications.
4. Select Standard for Class in the New User Object window and click on the OK push button as
shown in Figure 103.
5. Select transaction in the Types list box on the Select Standard Class Type window and click on the
OK push button as shown in Figure 104 on page 210.
8. You are prompted to log on if a connection to your database (location) is not already established.
9. Click on the Procedures push button in the Declare Local External Functions window.
10. Select the stored procedure of your choice from the scroll-down list and click on OK on the Remote
Stored Procedure(s) window as shown in Figure 106 on page 211.
11. PowerBuilder shows you how the transaction is declared. You will see something like:
subroutine ORDSTATX (string P1,ref string P2,ref string P3,ref long P4,
ref string P5) RPCFUNC ALIAS FOR "ORDSTATX"
12. Click on File and save this User Object. For example, we save it as irww_procedures. The
message “User Object - irww_procedures inherited from transaction” appears in the title of the
User Object Painter.
13. To use this User Object, associate it with the Variable Type SQLCA:
a. Click on Appl on the PowerBar.
b. Position mouse pointer on the Appl object and click mouse button 2.
c. Select Properties.
d. Select the Variable Types tab.
e. Currently “transaction” is associated with SQLCA.
f. Overtype the entry field with irww_procedures to associate it with SQLCA.
Your program can now call irww_procedures.
Check it!
In previous releases of PowerBuilder, it was necessary to modify the declaration by adding
alias proc-name for proc-name
where proc-name is the name of the stored procedure you selected.
The script shown in Figure 107 on page 212 is for the clicked event tied to a CommandButton which
we labeled RUN.
P2 = SPACE(79)
P3 = SPACE(435)
P4 = 0
P5 = SPACE(78)
P1 = sle_warehouse.text+sle_district.text+sle_cid.text+sle_clast.text 1
If P4 >= 0 THEN
// Process Parameter P2 4
sle_cid.text=mid(P2,1,4)
sle_cfirst.text=mid(P2,5,16)
sle_cmiddle.text=mid(P2,21,2)
sle_clast.text=mid(P2,23,16)
sle_balance.text=mid(P2,39,12)
sle_date.text=mid(P2,51,19)
sle_orderid.text=mid(P2,70,8)
sle_carid.text=mid(P2,78,2)
// Process Parameter P3
sle_ordid1.text=mid(P3,1,6)
sle_swid1.text=mid(P3,7,4)
sle_quantity1.text=mid(P3,11,2)
sle_amount1.text=mid(P3,13,7)
sle_delivery1.text=mid(P3,20,10)
..
.
sle_ordid15.text=mid(P3,407,6)
sle_swid15.text=mid(P3,413,4)
sle_quantity15.text=mid(P3,417,2)
sle_amount15.text=mid(P3,419,7)
sle_delivery15.text=mid(P3,426,10)
mle_errormsg.text=P5
ELSE
mle_errormsg.text=P5
ROLLBACK;
END IF
RETURN
Figure 107. PowerScript Sample: Process Stored Procedures Using Parameters
Other parameters (P2 to P5) are for output and are initialized just before this line.
3 and 4 For SQLCODE indicating success, the returned parameters are “unpacked” into the various
SingleLineEdit and MultiLineEdit controls. We test for a nonnegative SQLCODE because stored
procedures with result sets will come back with SQLCODE=+466. We do not code any commit in the
client code because the stored procedure is set up to COMMIT_ON_RETURN.
If we did not have COMMIT_ON_RETURN we would have to code a COMMIT for a successful CALL.
IF (SQLCA.sqlcode < 0) THEN
mle_errormsg.text = SQLCA.sqlerrtext
ROLLBACK;
RETURN
ELSE
COMMIT;
END IF;
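The fixed-offset unpacking done with mid() in the script above can be sketched in C. The helper below is our own illustration; like mid(), it takes a 1-based start position and a character count:

```c
#include <string.h>

/* C sketch of mid(packed, start, len): copy `len` characters starting at
   1-based position `start` from a packed output parameter into a field
   buffer, NUL-terminating the result. Our helper, for illustration only. */
static void mid_copy(const char *packed, int start, int len,
                     char *field, size_t field_size)
{
    if ((size_t)len >= field_size)
        len = (int)field_size - 1;      /* clamp to the field buffer */
    memcpy(field, packed + start - 1, (size_t)len);
    field[len] = '\0';
}
```

The offsets passed to mid_copy would follow the stored procedure's packing convention shown in the script (for example, customer ID at position 1 for 4 characters).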
To code for a single result set client, we create a DataWindow control for our stored procedure:
1. Create a new DataWindow Object.
2. Choose Stored Procedures as data source.
3. Choose a Presentation Style (for example, tabular).
4. Select the appropriate stored procedure.
Our example in query3.pbl is actually inherited from another window, as shown in Figure 108.
Figure 109 on page 215 shows the fields of the DataWindow for the execute_query12.
To see how a PowerBuilder Script is used (for example, for the Execute command button of the
“execute_query12” of our application window), follow these steps:
1. Click on the Application Painter button on the PowerBar.
2. Click on File on the menu bar.
3. Select Open and open our sample file, query3.pbl.
4. In the Select Application dialog box, choose query3 and click on OK.
5. Click the Window button on the PowerBar.
6. From the Select Window dialog, scroll down the list box and select w_main_window_1 (because
w_qry12 is inherited from w_main_window_1) and click on OK. w_main_window_1 appears in the
Workspace.
7. Place the cursor over the Execute button in the Workspace and click on mouse button 2
8. Select Script to look at the PowerScript that drives this application. Figure 110 shows our
PowerScript.
string input_values
int numrows
input_values = mle_input1.text
dw_1.SetTransObject(SQLCA) // 1
sle_numrows.text = string(dw_1.Retrieve(input_values)) // 2
IF (SQLCA.sqlcode < 0) THEN
mle_errormsg.text = SQLCA.sqlerrtext
sle_sqlcode.text = string(SQLCA.SQLDBCode) + " / " + string(SQLCA.sqlcode)
ROLLBACK;
RETURN
END IF;
sle_sqlcode.text = string(SQLCA.SQLDBCode) + " / " + string(SQLCA.sqlcode)
Figure 110. Code Fragment for Single Result Set
2 Use the Retrieve function in PowerScript to obtain the number of rows retrieved.
To see how DataWindow d_qry12 relates to the stored procedure, we look at the sample provided in
the accompanying diskette:
1. Select DataWindow d_qry12.
2. Click on Design from the menu bar.
3. Select Data Source.
4. Click on the More >> button. Figure 111 shows this DataWindow using stored procedure
USRT001.QRY12.
Before opening any new file, make sure all files are closed. Then:
1. Click on the Application Painter button on the PowerBar.
2. Click on File on the menu bar.
3. Select Open and open our sample file, query3.pbl.
4. In the Select Application dialog box, choose query3 and click on OK.
5. Click on the Window button on the PowerBar.
6. From the Select Window dialog, scroll down the list box and select w_qry22 and click on OK.
w_qry22 appears in the Workspace.
7. Place the cursor over the Execute button in the Workspace and click on mouse button 2.
8. Select Script to look at the PowerScript that drives this application. Figure 113 on page 218 shows
our PowerScript.
RC = 0
mle_errormsg.text = ""
sle_sqlcode.text = ""
ERRMSG = space(78)
input_values = mle_input1.text
execute QRY22;
2 Assigning results to output areas in a DataWindow. With a single result set, PowerBuilder handles
the scrolling of data. With multiple result sets, the scrolling has to be done in the code. Both
DataWindows are using the same DataWindow object, so the field names (compute_0001 and
compute_0002) are identical.
To see how DataWindow d_qry22 relates to the stored procedure, follow these steps:
1. Select DataWindow d_qry22.
2. Click on Design from the menu bar.
3. Select Data Source.
4. Click on the More >> button.
5. Figure 114 shows this DataWindow is using stored procedure USRT001.QRY22.
6. Click on Arguments to specify retrieve arguments; the window shown in Figure 115 appears.
Developing a client application with C does not differ from the development of other database C
applications.
In this section, we describe some considerations related to the platform where the stored procedures
are located:
• DB2 for MVS/ESA
− The length of the string variable used to hold the stored procedure name must be 254
characters or less; otherwise the precompilation step fails with a -312 SQLCODE.
− If the client application is going to send a null indicator, the stored procedure linkage
characteristic in the LINKAGE column of SYSIBM.SYSPROCEDURES must be set to N to indicate
that the stored procedure can accept nulls. Otherwise the stored procedure fails with a -470
SQLCODE.
See 2.3.1.1, “SYSIBM.SYSPROCEDURES Table Columns” on page 14 for a description of the
SYSIBM.SYSPROCEDURES catalog table.
• DB2 Common Servers
− The length of the string variable used to hold the stored procedure name can be longer than 254
characters.
− No special action is required to receive nulls in the stored procedures of DB2 Common
Servers.
In this chapter, we show you how to implement a CLI application and present some techniques for
problem determination. For the sample presented in this chapter, we use OS/390 C/C++ V2R4. The
JCL samples provided by the DB2 for OS/390 V5 Call Level Interface Guide and Reference contain
detailed descriptions and instructions regarding DB2 CLI. The JCL examples in this chapter relate to
OS/390 and C/C++ Optional Features V2R4. They differ from those in current DB2 publications, which
document IBM C/C++ for MVS/ESA Version 3 Release 1.
ODBC application programs written for DB2 Common Server V2, DB2 Universal Database V5, or other
RDBMSs that support ODBC can be ported to the OS/390 platform. You can write and execute
applications across different platforms, accessing the various members of the DB2 family with fewer
coding differences. You can also port existing ODBC applications written for the workstation to OS/390
and exploit powerful features found only on OS/390, such as its robustness and security.
To use CLI, you have to write your application in the C or C++ programming languages, which support DLLs.
To use CLI, make sure the following APARs/PTFs are applied in your DB2 for OS/390 V5 environment:
• PQ02582 fixes missing SDSNSAMP member DSNTIJCL
• PQ07001/UQ08548
• PQ06894/UQ11231 enables stored procedures to read CLI INI file
Before using CLI, you have to bind the CLI packages at the server (DB2 for OS/390 V5 in our case).
During the bind process of CLI on DB2 for OS/390 V5, you can ignore the BIND PACKAGE warnings for
DSNCLINC related to ISOLATION(NC). DB2 for OS/390 V5 provides this package because some DBMS
support isolation NC. Similarly, if you are binding to a DB2 for OS/390 V5 subsystem, warnings related
to SYSIBM.SYSLOCATIONS can be ignored because this table has been renamed to
SYSIBM.LOCATIONS in Version 5. The SYSIBM.SYSLOCATIONS table is referenced in the CLI packages
because a remote DB2 Version 4 can act as a DRDA application server. “Configuring CLI and Running
Sample Applications” in the DB2 for OS/390 V5 Call Level Interface Guide and Reference lists in detail
each step required to set up CLI.
To invoke a stored procedure using CLI, we prepare the CALL statement with parameter markers and
bind the markers with SQLBindParameter(). 8.4, “CLI and ODBC Applications” on page 127 describes
the syntax for the CALL statement.
For reference, a copy of mrspcli.c from the DB2 UDB CLI samples is included on the diskette to
illustrate how to obtain rows from a result set.
Use SQLMoreResults() to determine whether there are more result sets associated with a handle.
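The control flow for draining multiple result sets can be sketched as follows. Because no CLI driver is linked in this sketch, two stubs stand in for SQLFetch() and SQLMoreResults(); in a real CLI program both calls take the statement handle and are routed to the driver:

```c
/* Standard ODBC return codes used by the loop. */
#define SQL_SUCCESS        0
#define SQL_NO_DATA_FOUND  100

/* Stub state: two result sets with 2 and 3 rows. One-shot, for the demo. */
static int fetches_left[] = {2, 3};
static int current_set = 0;

static int SQLFetch_stub(void)            /* stands in for SQLFetch() */
{
    if (fetches_left[current_set] == 0)
        return SQL_NO_DATA_FOUND;
    fetches_left[current_set]--;
    return SQL_SUCCESS;
}

static int SQLMoreResults_stub(void)      /* stands in for SQLMoreResults() */
{
    if (current_set + 1 >= 2)
        return SQL_NO_DATA_FOUND;         /* no more result sets */
    current_set++;
    return SQL_SUCCESS;
}

/* Drains every row of every result set; returns total rows fetched. */
static int drain_all_result_sets(void)
{
    int rows = 0;
    do {
        while (SQLFetch_stub() == SQL_SUCCESS)
            rows++;                       /* process the row here */
    } while (SQLMoreResults_stub() == SQL_SUCCESS);
    return rows;
}
```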
This does not apply to stored procedures because you cannot use in-stream data in the JCL procedure
to start the stored procedures address space.
Refer to 11.12.3, “CLI Stored Procedure Coding Considerations” on page 240 for a detailed description
of a main CLI stored procedure that returns a single result set and passes a parameter with indicator.
11.6 Compiling
You need to use the DB2 precompiler only if your CLI program uses embedded SQL.
The DLL code can EXPORT or IMPORT functions and external variables. Specifying DLL defaults to
DLL(NOCBA) and requires the RENT and LONGNAME compile options. The OS/390 C/C++ compiler
enables RENT and LONGNAME automatically when you specify DLL. Table 13 shows the C and C++
compiler options.
compiler options.
Table 13. C/C++ Compiler Options
Option C C++
Default NODLL(NOCBA) DLL(NOCBA)
Unlike C++, C compilations default to compile option NODLL(NOCBA). So you have to explicitly
specify the DLL compile option for C programs in order to IMPORT the CLI DLLs.
You can either use the existing C and C++ compile JCL procedures with overrides and symbolics,
or customize your own JCL procedure based on the JCL procedures shipped with the C and C++
compilers. These are found in members EDCC (for C) and CBCC (for C++) of the CBC.SCBCPRC
library.
In the above example, we direct the C compiler to read the CCOPT DD statement 2 using the
OPTFILE option 1. This file can be in-stream or it can refer to a data set.
The SEARCH option 4 directs the preprocessor to look for system-include files in the specified
libraries. System-include files are those files associated with the #include <filename> format of the
#include C/C++ preprocessor directive. We include the DB2 header file library.
The LSEARCH option 5 directs the preprocessor to look for the user-include files (#include
″filename″) in the specified libraries.
For more information on C/C++ compile options, refer to OS/390 C/C++ User's Guide, SC09-2361.
CLI programs need to be prelinked because they are compiled with the DLL, RENT and LONGNAME
options.
In Figure 116 we made a copy of the procedure EDCC to pass the desired parameters for the compiler.
We put the options file into a data set, simplifying the invoking JCL significantly because there is no
need to specify overrides.
..
.
// CREGSIZ='4M', < COMPILER REGION SIZE
// CRUN=, < compiler run-time options
// CPARM='OPTFILE(DD:CCOPT)' 1
// CPARM2='NOMARGINS SOURCE', < compiler options
// CPARM3=, < COMPILER OPTIONS
..
.
//C EXEC PGM=CBCDRVR,COND=(4,LT,PC),REGION=&CREGSIZ.,
// PARM=('&CRUN/&CPARM &CPARM2 &CPARM3')
//CCOPT DD DISP=SHR,DSN=MY.OWN.OPTIONS 2
..
.
Figure 116. C/CLI Compile: JCL Fragment Showing Parameters
You can also avoid using OPTFILE by specifying the USERLIB (replacing LSEARCH) and SYSLIB
(replacing SEARCH) DD statements. Figure 117 on page 225 is an example of a JCL procedure that
can be used to compile a CLI program:
The diskette provides a sample JCL procedure DB2HCCLI for compiling C using CLI. This procedure
invokes the DB2 precompiler, but because there is no embedded SQL it returns a condition code of 4
and continues.
For object modules containing DLL code (C++ code, or C code compiled with the DLL compiler
option), the prelinker:
• Generates a function descriptor (linkage section) in writable static for each DLL-referenced
function.
• Generates a variable descriptor (linkage section) for each unresolved DLL-referenced variable.
• Generates an IMPORT control statement in the SYSDEFSD data set for each exported function and
variable.
• Generates internal information for the load module that describes which symbols are exported and
which symbols are imported from other load modules.
• Combines static DLL initialization information.
Refer to OS/390 C/C++ User's Guide for more details. For prelink and link-edit, we customized the
JCL procedure based on the member CBCL of the CBC.SCBCPRC library as shown below:
//********************************************************************
//* *
//* PRELINK AND LINK A C CLI PROGRAM, BASED ON CBC.SCBCPRC(CBCL) *
//* *
//* OS/390 C/C++ *
//* *
//* RELEASE LEVEL: 02.04.00 (VERSION.RELEASE.MODIFICATION LEVEL) *
//* *
//********************************************************************
//*
//CLIL PROC MEM=, 1
// LIBPRFX='CEE', < PREFIX FOR LIBRARY DSN
// CLBPRFX='CBC', < PREFIX FOR CLASS LIBRARIES
// PLANG='EDCPMSGE', < PRE-LINKER MESSAGE NAME
// PREGSIZ='2048K', < PRE-LINKER REGION SIZE
// PPARM='MAP,NOER', < PRE-LINKER OPTIONS
// LPARM='AMODE=31,MAP,RENT', < LINKAGE EDITOR OPTIONS
// TUNIT='VIO', < UNIT FOR TEMPORARY FILES
//* 2
// DB2MACS=DSN510.SDSNMACS, * DB2 macros etc
// LOADLIB=SG244693.SAMPLES.LOAD.SPAS, Appl load lib - SPAS
// OBJLIB=SG244693.SAMPLES.OBJ * Object
Notes: 1 and 2 are symbolics we have added to simplify the invoking JCL. The object 3 is the
input to prelink. Notice that as of OS/390 R3 ASCCOLL 4 replaces APPSUPP and COLLECT. The four
class libraries were:
• The I/O stream class library
• The complex mathematics class library
• The application support class library (replaced by ASCCOLL)
• The collection class library (replaced by ASCCOLL)
Refer to OS/390 C/C++ IBM Open Class Library User's Guide, SC09-2363, for more information.
For a CLI program, the prelink step needs DSN510.SDSNMACS(DSNAOCLI) 5 to provide the IMPORT
statements used for CLI during prelinking. If the application uses other DLLs (such as our sample SR1OMS), you
can use the SYSIN2 DD statement 6 / 8 to specify where the IMPORT statements are kept.
7 is a DUMMY for the definition side-deck (SYSDEFSD). The prelinker generates a definition
side-deck if you are prelinking an application that exports external symbols for functions and variables.
Our sample utility V2SUTIL exports symbols, and we kept the side-deck for SR1OMS, which uses
V2SUTIL. These IMPORTs are required because we ported the sample from AIX to the S/390 platform.
We used the following JCL to invoke the compiler and the prelink/link:
// JCLLIB ORDER=SG244693.SAMPLES.JCL
//*
//COMPILE EXEC CLIC,MEM=SR1OMS
//PRELINK EXEC CLIL,MEM=SR1OMS,COND=(4,LT)
//PLKED.SYSIN2 DD *
IMPORT CODE 'V2SUTIL' check_error
IMPORT CODE 'V2SUTIL' print_connect_info
IMPORT CODE 'V2SUTIL' print_error
IMPORT CODE 'V2SUTIL' print_results
IMPORT CODE 'V2SUTIL' terminate
IMPORT DATA 'V2SUTIL' DSNCNM
IMPORT DATA 'V2SUTIL' DSNPNM
IMPORT DATA 'V2SUTIL' SQLTEMP
//
Note that the prelink/link is executed only if the compiler returns a condition code less than 4.
11.8 SYSIBM.SYSPROCEDURES
For the stored procedure to use the correct CLI packages, the COLLID column in the
SYSIBM.SYSPROCEDURES table must be the same as the collection ID of the CLI packages.
Otherwise, DB2 returns a -805 SQLCODE. The default collection ID is DSNAOCLI. Note also that, as
for other local client applications, the CLI package must be included in the plan of the client
application.
Since a CLI stored procedure is written in either C or C++, the LANGUAGE column in
SYSPROCEDURES must be C.
The rest of the columns obey the same rules as for non-CLI stored procedures described in 2.3.1.1,
“SYSIBM.SYSPROCEDURES Table Columns” on page 14.
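As an illustration of those two requirements (a sketch only: SR1OMS and the collection ID come from our samples, and your installation may maintain this catalog table through its own procedures), the relevant columns could be set with SQL such as:

```sql
-- COLLID must match the collection ID of the CLI packages
-- (DSNAOCLI by default); LANGUAGE must be C for a CLI stored procedure.
UPDATE SYSIBM.SYSPROCEDURES
   SET COLLID   = 'DSNAOCLI',
       LANGUAGE = 'C'
 WHERE PROCEDURE = 'SR1OMS';
```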
Besides the usual stored procedure address space requirements, we need to include the CLI
initialization (INI) file. Figure 118 on page 229 is a JCL example for a DB2-established stored
procedure address space.
1 To test our application, we limit the number of TCBs to 1. This eliminates the danger of multiple
stored procedures running concurrently within the same address space and writing to the same file.
2.2.3, “Serializing Access to Non-DB2 Resources” on page 12 explains this in more detail.
2 For a WLM-established stored procedure address space, we replace the program name with
PGM=DSNX9WLM. CLI stored procedures that require the same INI file can be grouped into one
application environment. In other words, you should define an application environment for each
variation of the INI file.
For stored procedures using CLI, we need to update the stored procedure address space JCL to
include:
3 The library where the CLI DLL code resides. This code is contained in the DB2 system library
SDSNLOAD, which should already be in the STEPLIB DD statement, so there is nothing to add.
4 As with any stored procedure address space, the LE runtime is needed.
6 The library where the application modules reside.
7 DSNAOINI DD statement for the DB2 CLI initialization (INI) file. See Figure 119 on page 230 for
a sample.
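Taken together, the additions amount to a few DD statements, sketched below. The library names are those used in this redbook's samples; the INI data set name is a hypothetical placeholder, so substitute your own:

```jcl
//* 3 CLI DLL code is in SDSNLOAD, already in STEPLIB
//STEPLIB  DD DISP=SHR,DSN=DSN510.SDSNLOAD
//* 4 LE runtime
//         DD DISP=SHR,DSN=CEE.SCEERUN
//* 6 application load modules
//         DD DISP=SHR,DSN=SG244693.SAMPLES.LOAD.SPAS
//* 7 DB2 CLI initialization (INI) file
//DSNAOINI DD DISP=SHR,DSN=SG244693.SAMPLES.CLIINI
```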
We can obtain additional information on how a CLI stored procedure is behaving using:
• The CLI application trace, which traces every DB2 CLI call from the application. It also displays the
invoking parameters.
• The CLI diagnosis trace, which the IBM Support Center uses for service support. It is sometimes
known as the CLI serviceability trace. It is not intended to assist in debugging application code.
; See also sample DSN510.SDSNSAMP(DSNAOINI) for CLI ini file
; COMMON stanza
[COMMON]
MVSDEFAULTSSID=DSGC
; be sure the trace is not started before the INI file is read,
; or these settings are missed
; put this INI file in place just before the job you want to
; trace; if you have a setup insert job, run that first
; turn diagnosis trace on and increase trace buffer size
; no wrapping around and increase trace buffer size from 32KB
TRACE=1 1
TRACE_NO_WRAP=1 2
TRACE_BUFFER_SIZE=2000000 3
; turn user application trace on and direct to DD name CLITRACE
CLITRACE=1 4
TRACEFILENAME=DD:CLITRACE 5
Figure 119. Using CLI INI File to Enable Diagnosis and Application Trace
You must also include a DD statement for the trace output file, which must be allocated as a
fixed-block (RECFM=FB, LRECL=80) sequential data set.
//DSNAOTRC DD DISP=OLD,DSN=SG244693.FB80
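If the trace data set does not exist yet, a one-step IEFBR14 job can pre-allocate it. Only RECFM=FB and LRECL=80 are required; the unit, space, and block size values below are assumptions:

```jcl
//ALLOC   EXEC PGM=IEFBR14
//TRCDS   DD DSN=SG244693.FB80,DISP=(NEW,CATLG),
//           UNIT=SYSDA,SPACE=(TRK,(15,15)),
//           DCB=(RECFM=FB,LRECL=80,BLKSIZE=8000)
```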
Figure 120 on page 231 shows how to produce the Formatted Detail Report (FMT) and Formatted Flow
Report (FLW).
3. Free the trace file by bringing down the stored procedure address space with the DB2 -STOP PROC(*)
command. If you are using WLM-established address spaces, you must issue the VARY WLM command
with the QUIESCE or REFRESH option instead of -STOP PROC(*).
4. Process the CLI diagnosis trace file using DSNAOTRC to obtain a formatted detail report and a
formatted flow report. Note that the CLI application trace in the data set is formatted when written
to the output data set. The CLI diagnosis trace must be formatted with DSNAOTRC using the FMT
or FLW option.
Figure 122 on page 233 is an example of the formatted flow report for the diagnosis trace.
Figure 123 is an example of the formatted detail report for diagnosis trace.
..
.
803 DB2 fnc_entry call_level_interface SQLExecute (1.30.42.12) +
pid 0x007d7be0; tid 1; cpid 0; sec 0; nsec 0; tpoint 0
804 DB2 fnc_entry call_level_interface CLI_dstGetInfo (1.30.42.70)
pid 0x007d7be0; tid 1; cpid 0; sec 0; nsec 0; tpoint 0
805 DB2 fnc_retcode call_level_interface CLI_dstGetInfo (1.33.42.70)
pid 0x007d7be0; tid 1; cpid 0; sec 0; nsec 0; tpoint 254
return_code = 0x00000000 = 0
..
.
Figure 123. Formatted Detail Report (FMT) for CLI Diagnosis Trace Example
Notice that the collection ID DSNAOCLI matches the collection ID where you bound CLI packages.
We use MSGFILE to redirect dump output to a different DD name, OUTDUMP2. RPTSTG(ON) requests
a report on storage usage, which can help you choose appropriate LE run-time parameters.
Figure 125 is a sample output from the RPTSTG option.
RPTOPTS(ON) lets you know which run-time options are used. You can also request HEAP size and
trace.
HEAP statistics:
Initial size: 614400
Increment size: 32768
Total heap storage used (sugg. initial size): 838896
Successful Get Heap requests: 214
Successful Free Heap requests: 90
Number of segments allocated: 9
Number of segments freed: 0
Figure 125. Excerpt from Specifying LE Run-Time Option RPTSTG(ON)
Now we go through setting up a sample CLI stored procedure and demonstrate how to use these
techniques. The base for this sample is contained in Appendix F of the DB2 for OS/390 V5 Call Level
Interface Guide and Reference . In this redbook, we provide a modified version of the stored procedure
and the client program.
Table 14 summarizes the steps to implement the sample client program written in CLI that invokes a
stored procedure written in CLI.
We have included the members listed in Table 14 in the JCL library of the sample diskette.
These sample programs use printf() to write output. If we do not define SYSPRINT in the stored
procedure during execution, SYSOUT data sets are dynamically allocated with DD names such as
SYS00001, SYS00002, and SYS00003. You will see these DD names in the stored procedure address
space output as follows:
DDNAME STEPNAME
JESMSGLG JES2
JESJCL JES2
JESYSMSG JES2
CLITRACE DBC1SPAS
SYS00001 DBC1SPAS
SYS00002 DBC1SPAS
SYS00003 DBC1SPAS
In this section, we illustrate some of the considerations we came across while porting existing
CLI/ODBC code from AIX to OS/390. Although what we came across is by no means a comprehensive
list of tasks, it is a starting point for porting code.
There are differences in how stored procedures are supported between the workstation DB2 products
and the host DB2 products. Understanding these differences gives you an idea of the effort involved in
porting existing applications between platforms. It also helps you design more portable code if you are
writing applications using DB2 stored procedures. Table 15 outlines some of these similarities and
differences.
Note that DB2 Common Server V2 and DB2 Universal Database V5 support result sets only when using
CLI.
Result sets are supported both as a client and server by DB2 Universal Database V5 and DB2 Common
Server V2 using the private protocol called DB2RA. Using the DRDA protocol, only the application
requester is supported: DRDA application server on DB2 for the workstation does not currently
support result sets.
11.12.1 Porting a Sample Result Set CLI Application from AIX to OS/390
To demonstrate the similarities and differences between stored procedures on AIX (or other
workstation platforms) and OS/390, we chose the sample stored procedure mrspsrv from DB2
Universal Database V5. This exercise illustrates:
• Restrictions on stored procedures returning result sets
• How to pass a parameter with indicator to and from a stored procedure
We created an AIX client CLI program on UDB based on mrspcli so that we can specify which stored
procedure to CALL at invocation. This program, sr1, is provided as part of the samples. (See
Figure 126.) We ran it with the CLI application trace on at the client so that we can examine each CLI
call. We updated the CLI INI file as follows:
trace=1
; do a physical write after each trace call
traceflush=1
; ------------------------------------------
; use a full subdirectory name
; ------------------------------------------
tracepathname=/home/svtdbm4/sqllib/clitrace
Note that for the column name to be returned, you must specify DESCSTAT=YES in the ZPARM
module. If your stored procedure already exists, you have to rebind it after changing the ZPARM
module.
11.12.2.1 UDB to UDB Result Sets Using Private Protocol: We catalog a remote database
on the workstation platform and run our version sr1 of the sample result sets program mrspcli
between an AIX client and an OS/2 server. Figure 127 shows the results:
$ sr1 sample2 db2v5 db2v5
>Enter stored procedure name
mrspsrv
>Connected to sample2
Use CALL with Host Variable to invoke the Server Procedure named >mrspsrv<
Server Procedure Complete.
Median Salary = 17654.50 A
Going from AIX to OS/2 using the DB2 UDB private protocol (DB2RA), the parameters/indicators A
and the column names B are passed back to the client program with the result set.
11.12.2.2 UDB to UDB Result Sets Using DRDA Protocol: Going from AIX to OS/2 using
DRDA, the median salary comes back because the stored procedure passes it as a parameter. The
result set does not work because on the workstation platform, DRDA AS does not support result sets.
We see an error in our client program (invalid cursor state). Figure 128 on page 239 shows the
results.
11.12.2.3 UDB to OS/390 Result Sets Using DRDA Protocol: We rewrote the UDB sample
program mrspsrv with minor modifications and renamed it sr1oms. We invoked it using the modified
AIX client sr1.
The different tasks and considerations in porting the CLI source code from AIX to OS/390 are described
in 11.12.3, “CLI Stored Procedure Coding Considerations” on page 240. Figure 129 shows the results.
$ sr1 dsgct db2v5 db2v5
>Enter stored procedure name
SR1OMS
>Connected to dsgct
Use CALL with Host Variable to invoke the Server Procedure named >SR1OMS<
Server Procedure Complete.
Median Salary = 17654.50 A
1 2 3 B
340 Edwards 17844.00
90 Koonitz 18001.75
40 O'Brien 18006.00
20 Pernal 18171.25
100 Plotz 18352.80
..
.
Figure 129. UDB to OS/390 (CLI) Using DRDA Protocol
The parameter/indicator A is passed back, as is the result set. However, the column names B are
not passed back from the stored procedure. Instead, the ordinal position of the column is passed.
We also rewrote the UDB sample program mrspsrv using embedded SQL and other host languages.
The results are identical to those of the OS/390 CLI stored procedure.
Although the column names for the result sets are not passed back to the client that called the stored
procedure, the stored procedure can obtain each column name using SQLDescribeCol(). The result
can then be passed back to the client program either using parameters, or as another result set using
a global temporary table.
Figure 130 (Part 1 of 8). SR1OMS Source Code: CLI Stored Procedure in C
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
/* #include <sqlda.h> */ /* 03 */
#include <sqlca.h>
#include <sqlcli1.h> /* 04 */
#include <decimal.h>
/* #include "samputil.h" */ /* Header file for CLI sample code */
#include "v2sutil.h" /* ODBC 2.0 CLI sample utilities 05 */
Figure 130 (Part 2 of 8). SR1OMS Source Code: CLI Stored Procedure in C
/*--> SQLL1X58.SCRIPT */
/*
int SQL_API_FN mrspsrv(
void *reserved1,
void *reserved2,
struct sqlda *output_sqlda,
struct sqlca *sqlca)
*/
/*here */
/* local variables */
SQLSMALLINT num_records; /* 07 */
SQLINTEGER indicator; /* 08 */
Figure 130 (Part 3 of 8). SR1OMS Source Code: CLI Stored Procedure in C
/* 12 */
int
main(SQLINTEGER argc, SQLCHAR *argv[] )
/*<-- */
{
/* Declare CLI Variables */ /* 13 */
/* SQLHANDLE henv, hdbc, hstmt1, hstmt2 ; */
SQLHENV henv ;
SQLHDBC hdbc ;
SQLHSTMT hstmt1, hstmt2 ;
SQLRETURN rc ;
/*--> */
SQLINTEGER counter = 0;
/*-----------------------------------------------------------------*/
/* Setup CLI required environment */
/* */
/*ODBC3.0 Split SQLAllocHandle into SQLAllocEnv, SQLAllocConnect */
/*ODBC3.0 and SQLAllocStmt for OS/390 which is mainly ODBC 2.0 */
/*-----------------------------------------------------------------*/
/* 15 */
/*rc = SQLAllocHandle( SQL_HANDLE_ENV, SQL_NULL_HANDLE, &henv ) ; */
rc = SQLAllocEnv( &henv ) ;
if ( rc != SQL_SUCCESS ) return( terminate( henv, rc ) ) ;
Figure 130 (Part 4 of 8). SR1OMS Source Code: CLI Stored Procedure in C
/* 17 */
rc = SQLSetConnectOption(hdbc,SQL_AUTOCOMMIT,SQL_AUTOCOMMIT_OFF);
if( rc != SQL_SUCCESS ) goto ext;
/*-----------------------------------------------------------------*/
/* Issue NULL Connect, since in CLI we need a statement handle */
/* and thus a connection handle and environment handle. */
/* A connection is not established, rather the current */
/* connection from the calling application is used */
/*-----------------------------------------------------------------*/
/*
SQLConnect( hdbc, NULL, SQL_NTS, NULL, SQL_NTS, NULL, SQL_NTS ) ;
*/
rc=SQLConnect(hdbc,
NULL,
0,
NULL,
0,
NULL,
0); /* 18 */
/*---------------------------------------------------------------*/
/* Examine the columns related to the SQL statement 20 */
/*---------------------------------------------------------------*/
rc = SQLNumResultCols(hstmt1, &nresultcols);
Figure 130 (Part 5 of 8). SR1OMS Source Code: CLI Stored Procedure in C
/* Execute a Statement to */
/* determine the Total Number of Records */
rc = SQLExecDirect(hstmt2, stmt2, SQL_NTS);
if (rc != SQL_SUCCESS) goto ext;
rc = SQLFetch(hstmt2);
if (rc != SQL_SUCCESS) goto ext;
Figure 130 (Part 6 of 8). SR1OMS Source Code: CLI Stored Procedure in C
/*-----------------------------------------------------------------*/
/* Return to caller */
/*-----------------------------------------------------------------*/
ext:
/* ml */
Figure 130 (Part 7 of 8). SR1OMS Source Code: CLI Stored Procedure in C
Figure 130 (Part 8 of 8). SR1OMS Source Code: CLI Stored Procedure in C
11.12.3.1 Code Specific to the OS/390 Implementation of CLI: 01 This pragma tells the
compiler that the code is to be reentrant. In our case this is not necessary, since we have decided to
rewrite this stored procedure as a main() program and we do not request that the module be kept
resident (specified in the SYSIBM.SYSPROCEDURES table).
04 We must include the CLI header file (# include<sqlcli1.h>). This is the same as for DB2 in the
workstation environments.
We have decided to use data conversion in this example. Instead of using decimal, the SALARY
column from the STAFF table is handled as SQLDOUBLE ( 14 and 21), which is easier to manipulate
in C. Refer to “Data Types and Data Conversion” in Chapter 3 of DB2 for OS/390 V5 Call Level
Interface Guide and Reference for more details.
DB2 Universal Database V5 provides a sample utility that incorporates a number of routines to handle
standard tasks such as error checking. There is also a version written for ODBC 2.0. Unlike the
original UDB version of mrspsrv.c, we are using the DB2 Common Server V2 routines because
OS/390's implementation of CLI conforms mostly to ODBC 2.0, and v2sutil.c is written in ODBC 2.0 (the
DB2 Universal Database V5 version uses ODBC 3.0). Therefore, we include the v2sutil.h header 05.
When porting from AIX to OS/390, we need to revert back to some of the ODBC 2.0 functions:
ODBC 3.0: SQLFreeHandle( SQL_HANDLE_STMT, hstmt ) ;
ODBC 2.0: SQLFreeStmt(hstmt, SQL_DROP);
For a comprehensive list, refer to Appendix B “Migrating Applications” in DB2 UDB V5 Call Level
Interface Guide and Reference . It has a table of CLI functions that should not be used for UDB Version
5.
We revert to the ODBC 2.0 version of samputil.c routines (v2sutil.c) in a similar fashion, for example,
ODBC 3.0: CHECK_HANDLE( SQL_HANDLE_STMT, hstmt, rc ) ;
ODBC 2.0: CHECK_STMT(hstmt, rc);
These requirements account for the modifications in 13, 15, 16, 19, 24, 25, 26 and 27.
11.12.3.3 Code Related to Stored Procedures: 02 This pragma is specific to OS/390 for C
stored procedures so that the correct linkage is used when the stored procedure is invoked. On the
workstation, stored procedures are implemented as DLLs. For our example, we have decided to use a
main program (not a subprogram).
When we exit the stored procedure, DB2 for OS/390 V5 decides whether to remove the module from
memory or not by referencing the STAYRESIDENT column of the SYSIBM.SYSPROCEDURES table.
This decision lies outside the stored procedure itself.
The workstation implementation differs in that the stored procedure passes back a return code
indicating whether to disconnect the library or hold it. 28
03 The header file for SQLDA is commented out here because unlike the UDB sample, we are
handling parameters as variables ( 07, 08 and 14) passed to a main() program 12.
Similarly we pass the parameter 22 and indicator 23 back without explicit reference to the SQLDA.
SQLCAs are used differently on OS/390 than in the UDB sample 09.
Among the SQL statements that are not allowed for stored procedures for DB2 for MVS/ESA V4 and
DB2 for OS/390 V5 are COMMIT and CONNECT.
A Null Connection: 18 is a null connection. NULL CONNECT is an ODBC term. It is a valid client or
stored procedure connect technique where the pointer to the data source is NULL. NULL CONNECT
means “default” connect.
Note that a NULL CONNECT does not establish the database connection handle. The connection
handle was established at SQLAllocConnect time.
Do Not COMMIT from a Stored Procedure: We cannot COMMIT from a stored procedure and therefore
the SQLTransact() 24 has to be taken out from the AIX code, or a -751 SQLCODE is returned as
follows:
DSNT408I SQLCODE = -751, ERROR: A STORED PROCEDURE HAS BEEN
PLACED IN MUST_ROLLBACK STATE DUE TO SQL OPERATION COMMIT
DSNT418I SQLSTATE = 42987 SQLSTATE RETURN CODE
DSNT415I SQLERRP = DSNXEEZ SQL PROCEDURE DETECTING ERROR
17 To turn autocommit off, use the SQLSetConnectOption() API. This prevents an implicit COMMIT,
which is invalid.
Notice also that result sets in CLI stored procedures do not require the WITH RETURN clause in a
SELECT statement.
11.12.4.2 SQL Errors: At the early stages of application development (such as prototyping), in the
absence of an error handling routine, use SQLError() and SQLGetSQLCA() together with the CLI
application trace to find the causes of errors. Variables related to SQLError() are declared in 11,
and SQLError() is called in 20.
11.12.4.3 Invalid Conversions (SQLSTATE 07006): Ensure that you are using the correct
data types. By coding SQLDescribeCol() or SQLColAttributes() directly after an SQLExecDirect() or
SQLPrepare(), as in 20, you can obtain the data types and attributes for each column from the CLI
application trace data set:
0SQLDescribeCol( hStmt=1, iCol=1, pszColName=&9e8f318, cbColNameMax=32,
pcbColName=&9e8f04c, pfSQLType=&9e8f048, pcbColDef=&9e8f600,
pibScale=&9e8f054, pfNullable=NULL )
Once you have identified the attributes, you can take these statements out of your source code. Refer
to “Data Types and Data Conversion” in Chapter 3 of DB2 for OS/390 V5 Call Level Interface Guide and
Reference for more details.
11.12.4.4 Code Page Translation Between Platforms: If you are porting C/C++ programs,
beware of code page translations. The C/C++ compiler requires coded character set (code page)
IBM-1047. The CLI initialization file also requires IBM-1047.
The iconv() utility and the JCL procedure EDCICONV (usually found in the LE sample JCL procedure
library CEE.SCEEPROC) can convert the square brackets [ ] (X'BA' and X'BB') into the appropriate
IBM-1047 representations (X'AD' and X'BD'). Refer to OS/390 C/C++ User's Guide, SC09-2361, for
information about EDCICONV. Refer to OS/390: OpenEdition Command Reference, SC28-1892, for more
information about iconv().
If you upload a source program from a platform that uses a different code page, you need to convert
it before compiling the application. For our OS/390 system, we use code page IBM-037. Figure 131 is
a sample JCL to convert the source from our coded character set IBM-037 to IBM-1047.
//... JOB ...
// JCLLIB ORDER=CEE.SCEEPROC 1
//ICONV EXEC PROC=EDCICONV,
// INFILE=, 2
// OUTFILE=, 3
// FROMC='IBM-037', 4
// TOC='IBM-1047' 5
Figure 131. Sample JCL to Convert Coded Character Set
1 Optional statement to point to a JCL procedures library where EDCICONV is located.
2 and 3 are input and output data set names.
4 The name of the code set in which the input data is encoded, for example, IBM-037
(En_US.IBM-037) for the US, or IBM-285 (En_GB.IBM-285) for the United Kingdom.
5 The name of the code set to which the output data is to be converted.
We can use utilities on OS/390 or AIX to translate the source code between code pages. Figure 132 on
page 251 shows how to invoke EDCICONV to translate code pages.
If you need to find out what code page you are using on AIX, you can use smit to help you:
1. Find out the code page for AIX.
2. Find out the code page for OS/390.
3. Use iconv() or EDCICONV to translate code pages.
4. Use FTP to move source code.
To find out the code page for AIX using smit:
1. Invoke smit and the System Management panel shown in Figure 133 is displayed.
System Management
2. Place the cursor under System Environments, press Enter and the System Environments panel
shown in Figure 134 on page 252 is displayed.
3. Place the cursor under Manage Language Environment, press Enter and the Manage Language
Environment panel shown in Figure 135 is displayed.
Manage Language Environment
Move cursor to desired item and press Enter.
Change/Show Primary Language Environment
Add Additional Language Environments
Remove Language Environments
Change/Show Language Hierarchy
Set User Languages
Change/Show Applications for a Language
Convert System Messages and Flat Files
4. Place the cursor under Change/Show Primary Language Environment, press Enter and the
Change/Show Cultural Convention, Language, or Keyboard panel shown in Figure 136 on page 253
is displayed.
5. The value for the code page is in the Primary LANGUAGE translation field.
An application program can use the Recoverable Resource Manager Services attachment facility
(RRSAF) to connect to DB2 to process SQL statements, commands, or instrumentation facility interface
(IFI) calls. Programs that run in MVS batch, TSO foreground, and TSO background can use RRSAF.
RRSAF uses OS/390 Transaction Management and Recoverable Resource Manager Services (OS/390
RRS). With RRSAF, you can coordinate DB2 updates with updates made by all other resource
managers that also use OS/390 RRS in an MVS system.
Any task in an address space can establish a connection to DB2 through RRSAF. The following are
some related considerations:
• Number of connections to DB2 - You can have multiple connections to DB2 from one TCB, but only
one can be actively used at any one time. You can switch the TCB between the connections any
way you require.
• Specifying a plan for a task - Each connected task can run a plan.
• Providing attention processing exits and recovery routines - RRSAF does not generate task
structures, and it does not provide attention processing exits or functional recovery routines. You
can provide whatever attention handling and functional recovery your application needs, but you
must use ESTAE/ESTAI type recovery routines only.
RRSAF applications can be written in assembler language, C, COBOL, FORTRAN, and PL/I. If you use
MVS macros (ATTACH, WAIT, POST, and so on) you must choose a programming language that
supports them. The RRSAF TRANSLATE function is not available from FORTRAN. To use the function,
code it in a routine written in another language, and then call that routine from FORTRAN.
Plan for an additional 10 KB to 20 KB in your RRSAF program size estimate. The RRSAF code
requires about 10 KB of virtual storage for each address space and an additional 10 KB for each TCB
that uses RRSAF.
RRSAF uses MVS SVC LOAD to load a module as part of the initialization following your first service
request. The module is loaded into fetch-protected storage that has the job-step protection key. If
your local environment intercepts and replaces the LOAD SVC, then you must ensure that your version
of LOAD manages the load list element (LLE) and contents directory entry (CDE) chains like the
standard MVS LOAD macro
To commit work in RRSAF applications, use the CPIC SRRCMIT function or the DB2 COMMIT
statement. If you use the DB2 COMMIT statement, DB2 calls RRS to drive the two-phase commit
processing. We recommend using the CPIC function to commit your work across all other resource
managers that also use OS/390 RRS in an MVS system. The following is an example of the SRRCMIT
call in a COBOL program:
CALL 'SRRCMIT' USING CM-RETCODE.
To roll back work, use the CPIC SRRBACK function or the DB2 ROLLBACK statement. We recommend
that you use the CPIC SRRCMIT and SRRBACK functions. Using the CPIC SRRCMIT and SRRBACK
functions guarantees the synchronization of all work. The following is an example of the SRRBACK
call in a COBOL program:
CALL 'SRRBACK' USING CM-RETCODE.
Applications that request DB2 services must adhere to several run environment requirements. Those
requirements must be met regardless of the attachment facility you use. They are not unique to
RRSAF. For a detailed explanation of the RRSAF run environment, see 6.7.1.2.4, “Run Environment,” in
DB2 for OS/390 Version 5 Application Programming and SQL Guide .
To use RRSAF, you must first make available the RRSAF language interface load module, DSNRLI.
Add the following statement to your link-edit step:
INCLUDE DB2LIB(DSNRLI)
For information on loading or link-editing this module see 6.7.2.1. “Accessing the RRSAF Language
Interface” in DB2 for OS/390 Version 5 Application Programming and SQL Guide .
• STARTECB is the address of the application's startup ECB. If DB2 has not started when the
application issues the IDENTIFY call, DB2 posts the ECB when DB2 startup has completed. Enter a
value of zero if you do not want to use a startup ECB. DB2 posts a maximum of one startup ECB in
an address space. The ECB posted is associated with the most recent IDENTIFY call from that
address space. The application program must examine any nonzero RRSAF or DB2 reason codes
before issuing a WAIT on this ECB. If SSNM is a group attachment name, DB2 ignores the startup
ECB.
• RETCODE is a 4-byte area in which RRSAF places the return code. This parameter is optional. If
you do not specify this parameter, RRSAF places the return code in register 15 and the reason
code in register 0.
• RSNCODE is a 4-byte area in which RRSAF places a reason code. This parameter is optional. If
you do not specify this parameter, RRSAF places the reason code in register 0. If you specify this
parameter, you must also specify RETCODE.
Included in the sample programs is RRSAFCOB. RRSAFCOB uses RRSAF to coordinate the updates
between DB2 Version 5 and IMS Version 6. It uses APPC to start a conversation with an IMS Version 6
IVP transaction, and it updates a DB2 Version 5 table. The documentation includes the logic for
calling a DB2 Version 5 WLM-established stored procedure.
At task termination DB2 deallocates the plan, terminates the application's connection to DBC1, and
OS/390 RRS commits any changes made after the last commit point. This is because RRSAFCOB ends
with no TERMINATE THREAD or TERMINATE IDENTIFY functions to deallocate the plan during the
execution of the program.
01 SWITCHES.
05 RRSAF-SWITCH PIC X.
88 RRSAF-COMMIT VALUE SPACE.
88 RRSAF-ROLLBACK VALUE 'R'.
*
* INPUT FOR THE RRS/AF CALL
*
01 RRS-INPUT.
* RRSAF services constants
05 IDFYFN PIC X(18) VALUE 'IDENTIFY '.
05 SIGNONFN PIC X(18) VALUE 'SIGNON '.
05 CRTHRDFN PIC X(18) VALUE 'CREATE THREAD '.
05 TRMTHDFN PIC X(18) VALUE 'TERMINATE THREAD '.
05 TMIDFYFN PIC X(18) VALUE 'TERMINATE IDENTIFY'.
PROCEDURE DIVISION.
*************
* CALL RRS/AF FOR IDENTIFY FUNCTION
*************
DISPLAY '*** RRS/AF IDENTIFY'.
CALL 'DSNRLI' USING IDFYFN SSNM RIBPTR EIBPTR
TERMCB STARTECB
RETCODE RSNCODE.
PERFORM RRSAF-RETCODE-CHK THRU RRSAF-RETCODE-CHK-EXIT.
************
* CALL RRS/AF FOR SIGNON FUNCTION
************
DISPLAY '*** RRS/AF SIGNON '.
CALL 'DSNRLI' USING SIGNONFN CORRID
ACCTTKN ACCTINT
RETCODE RSNCODE.
PERFORM RRSAF-RETCODE-CHK THRU RRSAF-RETCODE-CHK-EXIT.
************
* CALL RRS/AF FOR CREATE THREAD FUNCTION
************
DISPLAY '*** RRS/AF CREATE THREAD'.
CALL 'DSNRLI' USING CRTHRDFN PLAN
COLLID REUSE
RETCODE RSNCODE.
PERFORM RRSAF-RETCODE-CHK THRU RRSAF-RETCODE-CHK-EXIT.
************
* PROCESS TRANSACTIONS
************
SET RRSAF-COMMIT TO TRUE.
............
............ update any resource managers that can use
............ the OS/390 RRS.
............
............ Update a DB2 V5 table.
............
............ EXEC SQL INSERT INTO TRAN_LOG
............ (IMSTRAN, IMSDATA)
............ VALUES(:IMSTRAN, :IMSDATA)
............ end-exec.
............
............ call a DB2 V5 WLM Stored Procedure
............
............ EXEC SQL CALL :STORED-PROCEDURE
............ (:IMS-TRAN, :IMSDATA, :STPC-RC)
............ END-EXEC.
*******************
* Rollback all resource managers' data
*******************
RRSAF-ROLLBACK.
CALL 'SRRBACK' USING CM-RETCODE.
*******************
* Commit all resource managers' data
*******************
RRSAF-COMMIT.
CALL 'SRRCMIT' USING CM-RETCODE.
DELIMITED BY SIZE INTO MSG-LINE-1.
The following are some common RRSAF error return codes. They cover the RRS address space not
running, an invalid DB2 subsystem name, and an invalid plan name.
You can access non-DB2 resources, such as VSAM files, flat files, and CICS transactions, from a stored
procedure. The non-DB2 resource must be available to the stored procedures address space. Thus if
you are accessing VSAM files or flat files, you need a JCL DD statement in the stored procedures
address space JCL procedure for every file that the stored procedure accesses.
Not all non-DB2 resources can be accessed concurrently by multiple tasks in the same address space.
Thus you may have to serialize access to the non-DB2 resource in the stored procedure. You can use a
WLM-established stored procedures address space just for this purpose by setting NUMTCB=1 in the
JCL that starts the stored procedures address space and specifying to WLM to start only one address
space.
For the DB2-established address space, when accessing non-DB2 resources that are RACF protected,
the user ID associated with the stored procedures address space is used to check authorizations. The
user ID associated with the client program is not used to check authorizations because the MVS
system is not aware that the stored procedures address space is processing work on behalf of the
client program.
For WLM-established address space, when accessing non-DB2 resources that are RACF protected, the
user ID associated with the stored procedures address space or the user ID associated with the client
application is used to check authorizations, depending on what you specify for the
EXTERNAL_SECURITY column of SYSIBM.SYSPROCEDURES.
Stored procedures can access CICS systems by using one of these methods:
• Message Queue Interface (MQI)
• External CICS Interface (EXCI)
• APPC
MQI calls are used for asynchronous execution of CICS transactions, and EXCI calls are used for
synchronous execution of CICS transactions.
When accessing CICS transactions, DB2 Version 4 does not coordinate commit and rollback activity.
The CICS transaction runs as a separate unit of work. For example, if the CICS transaction rolls back
its unit of work, the stored procedure unit of work is not automatically rolled back and can be
committed without problems.
The current release of CICS does not support RRS. DB2 Version 5 supports RRS, so you can get
coordination between a DB2 WLM-established stored procedure and CICS by using CICS's LU 6.2
support. This can be done by using an APPC transaction to invoke the CICS transaction. CICS supports
SYNCLVL=SYNCPT conversations, so when the outbound APPC transaction does the SRRCMIT, an LU
6.2 flow goes over the network to the inbound CICS transaction to commit the transaction. So while there are two
commit coordinators (RRS and CICS), it is still a coordinated two-phase commit transaction, because
APPC is the distributed synch point manager. You must use APPC as the distributed synch point
manager because CICS does not register with RRS.
During our project, we tested only the EXCI calls. Below we briefly describe how to code EXCI calls
within a stored procedure.
The EXCI is a programming interface that enables a non-CICS program (including stored procedures)
to call a program running in a CICS/ESA Version 4.1 region and pass and receive data by means of a
communication area (COMMAREA). The EXCI provides two types of programming interfaces: the EXCI
CALL interface and the EXEC CICS interface.
The EXCI CALL interface consists of six commands that are used to allocate and open sessions to a
CICS system, issue requests on these sessions, close, and deallocate the sessions.
The EXEC CICS interface provides a single command, the EXEC CICS LINK PROGRAM command, that
performs all six commands of the EXCI CALL interface in one invocation. You can use either the
EXEC CICS interface or the EXCI CALL interface to call CICS programs from a stored procedure.
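As a rough illustration, an EXEC CICS flavor of an EXCI request might look like the following COBOL sketch. The APPLID value CICSA, the field names, and the 20-byte COMMAREA layout are assumptions for illustration; BOUNCE is the sample program described in the text.

```cobol
      * Sketch only: a DPL request to the BOUNCE program through the
      * EXCI EXEC CICS interface. APPLID 'CICSA', WS-COMMAREA, and
      * WS-COMMLEN are illustrative assumptions.
       WORKING-STORAGE SECTION.
       01  WS-COMMAREA            PIC X(20) VALUE SPACES.
       01  WS-COMMLEN             PIC S9(4) USAGE BINARY VALUE 20.
       01  WS-EXCI-RETCODE        PIC X(20).
       PROCEDURE DIVISION.
           EXEC CICS LINK PROGRAM('BOUNCE')
                COMMAREA(WS-COMMAREA)
                LENGTH(WS-COMMLEN)
                APPLID('CICSA')
                RETCODE(WS-EXCI-RETCODE)
           END-EXEC.
      *    On return, WS-COMMAREA holds the data sent back by BOUNCE.
```

The RETCODE area is specific to the EXCI form of EXEC CICS LINK; the program must be translated with EXCI support and link-edited with the EXCI stubs as described below.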
For more information about EXCI, refer to the External CICS Interface manual.
The called CICS program is named BOUNCE. It is a simple test program that receives the
COMMAREA sent by the stored procedure and returns the value BOUNCE to the stored procedure.
We used the sample STLYXTVL procedure to prepare the batch program with EXCI calls. We included
steps for the DB2 precompiler and bind. For this JCL sample we used STLYXTVL as an in-stream
procedure.
You also have to include the CICS load library, CICS.V4R1M0.SDFHEXCI, in the STEPLIB DD statement
of the stored procedures address space JCL procedure.
We used these definitions for the connection, sessions, program, and transaction:
Connection
Sessions
Sessions : XCIGSESS
Group : EXCI
DEscription :
SESSION IDENTIFIERS
Connection : XCIG
SESSName :
NETnameq :
MOdename :
SESSION PROPERTIES
Protocol : Exci Appc | Lu61 | Exci
MAximum : 000 , 000 0-999
RECEIVEPfx : XG
RECEIVECount : 005 1-999
SENDPfx :
SENDCount : 1-999
SENDSize : 04096 1-30720
RECEIVESize : 04096 1-30720
SESSPriority : 000 0-255
Transaction :
OPERATOR DEFAULTS
OPERId :
OPERPriority : 000 0-255
OPERRsl : 0
OPERSecurity : 1
PRESET SECURITY
USERId :
Program
PROGram : BOUNCE
Group : HUGHPRG
DEscription :
Language : CObol | Assembler | Le370 | C | Pli
| Rpg
RELoad : No No | Yes
RESident : No No | Yes
USAge : Normal Normal | Transient
USElpacopy : No No | Yes
Status : Enabled Enabled | Disabled
RSl : 00 0-24 | Public
Cedf : Yes Yes | No
DAtalocation : Below Below | Any
EXECKey : User User | Cics
REMOTE ATTRIBUTES
REMOTESystem :
REMOTEName :
Transid :
EXECUtionset : Fullapi Fullapi | Dplsubset
Transaction
TRANSaction : EXCI
Group : DFH$EXCI
DEscription :
PROGram : DFHMIRS
TWasize : 00000 0-32767
PROFile : DFHCICSA
PArtitionset :
STAtus : Enabled Enabled | Disabled
PRIMedsize : 00000 0-65520
TASKDATALoc : Below Below | Any
TASKDATAKey : User User | Cics
STOrageclear : No No | Yes
RUnaway : System System | 0-2700000
SHutdown : Disabled Disabled | Enabled
ISolate : Yes Yes | No
REMOTE ATTRIBUTES
DYnamic : No No | Yes
REMOTESystem :
REMOTEName :
TRProf :
Localq : No | Yes
SCHEDULING
Because IMS does not support multiple TCBs for batch processing and requires that the IMS region
controller be the module in control, you cannot access IMS databases directly from the stored
procedures address spaces using the batch region facility.
There are several ways that you can access IMS databases:
• Using the new database resource adapter (DRA) callable interface.
• Including APPC code in the stored procedure. An IMS or CICS transaction can be invoked by
APPC.
• Using the MQI interface to access CICS transactions.
• Using the EXCI interface as described in 13.2, “Using EXCI in a Stored Procedure” on page 266.
• Using the MQI interface to place a transaction on the IMS message queue.
Depending on the method and the product level you use, you get different degrees of:
• Commitment control
• Complexity of the code
• Performance
Once you have accessed IMS databases, you can pass IMS information through output parameters, or
you can insert information obtained from IMS databases in a global temporary table, and pass this
information to the client application using multiple result sets.
As illustrated in Figure 137, which shows the architecture of DRA, the callable interface resides in the DB2 stored
procedures address space and is recognized by IMS as an application execution region (AER).
Using the DRA-callable interface, the stored procedure can access full-function DL/I databases and
Fast Path data entry databases (DEDBs). The DRA-callable interface allows DBCTL and OS/390
application programs, such as stored procedures, to be developed, installed, and maintained
independently of each other.
The IMS DRA-callable interface provides new modules packaged with the existing IMS Version 6 DRA
modules to support the new callable interface to DBCTL.
Although DRA uses a TCB in the stored procedures address space, you do not have to account for this
TCB when specifying the NUMTCB parameter passed to the stored procedures address space during
initialization.
You cannot execute a stored procedure as an IMS batch application, because the IMS batch does not
connect to DBCTL.
13.3.1.1 IMS Function Calls: You can code the stored procedure in any language supported by
stored procedures.
The available calls are those to the AERTDLI interface or the AIBTDLI interface. The formats of the calls are:
CALL AERTDLI parmcount,function,AIB,...
CALL AIBTDLI parmcount,function,AIB,...
where:
• parmcount specifies the address of a 4-byte field in the user-defined storage that contains the
number of parameters in the parameter list that follows parmcount.
• function specifies the address of a 4-byte field in the user-defined storage that contains the
function call. The function call must be left justified and padded with blanks, such as GUbb (where b represents a blank).
• AIB specifies the address of the application interface block.
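The call format above can be sketched in COBOL as follows. PARM-CT, FUNC-GU, IOAREA, and SSA are illustrative names; parmcount counts the parameters that follow it (here the function, the AIB, the I/O area, and one SSA, so 4):

```cobol
      * Sketch: issuing a GU call through the AERTDLI interface.
      * PARM-CT, FUNC-GU, IOAREA, and SSA are illustrative names
      * assumed to be defined elsewhere in the program.
       01  PARM-CT   PIC 9(9) USAGE BINARY VALUE 4.
      * The function code is left justified and padded with blanks.
       01  FUNC-GU   PIC X(4) VALUE 'GU  '.
       ...
           CALL 'AERTDLI' USING PARM-CT, FUNC-GU, AIB, IOAREA, SSA.
```

In COBOL the parmcount argument can usually be omitted, as the sample program later in this chapter does.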
The following function calls are supported by the DRA-callable interface:
APSB
CIMS
DEQ
DLET
DPSB
FLD
GU
GHU
GN
GHN
GMSG
ICMD
INIT
INQY
ISRT
LOG
POS
RCMD
REPL
ROLS
SETS
SETU
SNAP
STAT
OPEN
CLSE
The I/O PCB is used only for DL/I system service calls. The GU, GN, or ISRT calls to the I/O PCB are
not supported because no access is provided to the IMS message queue.
Table 19 maps the AIB used by the DRA-callable interface for processing DL/I.
13.3.1.2 Link-Edit Requirements: The only requirement is to link-edit the stored procedure with the
DSNRLI module. A program other than a stored procedure that intends to use DRA must be
link-edited with the DFSCDL10 module (or load it to use the AERTDLI call interface entry point).
This new module provides the AERTDLI and AIBTDLI application interfaces for an OS/390 AEE
application region.
13.3.1.3 The CIMS Call: The CIMS call is a new DL/I function call introduced by the DRA-callable
interface. It is available only for the IMS AER environment. It is used by the stored procedures
address space when it is initializing or terminating the environment.
A stored procedure can allocate (schedule) more than one PSB during its execution, although only one
PSB can be allocated at a time. To allocate a new PSB, the stored procedure must first deallocate the
current PSB through the DPSB call explained in 13.3.1.5, “DPSB Call.”
When you use a new APSB call, you can also change the DBCTL ID in the AIB by specifying a different
DBCTL ID as the resource name. This way you can process different sets of IMS databases if you have
different DBCTL control regions.
13.3.1.5 DPSB Call: The DPSB call deallocates a PSB and terminates the connection from a
stored procedure to DBCTL. The format of the DPSB call is the following:
CALL AERTDLI|AIBTDLI parmcount,DPSB,AIB
where
• parmcount is a 4-byte binary field with the value of 2.
• DPSB is the function call.
• AIB is the application interface block.
You must set the following fields in the AIB:
− AIBRSNM1 to the PSB name.
− AIBSFUNC, the subfunction, to the string PREP.
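A minimal COBOL sketch of the DPSB call follows; PARM-CT, FUNC-DPSB, and PSB-NAME are illustrative names, and the AIB is the application interface block shown later in the sample program:

```cobol
      * Sketch: deallocating the current PSB before allocating a new
      * one. PARM-CT, FUNC-DPSB, and PSB-NAME are illustrative names.
       01  PARM-CT    PIC 9(9) USAGE BINARY VALUE 2.
       01  FUNC-DPSB  PIC X(4) VALUE 'DPSB'.
       ...
      *    AIBSFUNC is PIC X(8), so 'PREP' is padded with blanks.
           MOVE PSB-NAME TO AIBRSNM1
           MOVE 'PREP'   TO AIBSFUNC
           CALL 'AERTDLI' USING PARM-CT, FUNC-DPSB, AIB.
```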
13.3.1.6 Synch Point Processing: IMS DBCTL Version 6.1 uses OS/390 Registration
Services, RRS, and Context Services when those services are available. Because DBCTL is a participant in
synch point processing, it uses a two-phase commit to record a synch point. In general, an AEE
application program uses RRS to process commit or rollback through the following interfaces:
• ATRCMT/ATRBACK
• SRRCMIT/SRRBACK
• OTS-COMMIT/OTS-ROLLBACK
For a stored procedure environment, these interfaces are not available. When the client application
commits, or when control is returned to the client application and the COMMIT_ON_RETURN is in
effect, DB2 initiates commit processing as a coordinator to RRS.
13.3.1.7 Security Considerations: Access to IMS resources from an AEE, and unauthorized
AEE connections to the DBCTL environment, are controlled by using the existing ISIS execution
parameter as follows:
• ISIS=0: No check is performed.
• ISIS=1: The connection and the PSB are checked. The USERID and application group name (AGN)
from the startup table must be authorized to access DBCTL. You must build RACF tables that
define valid USERID and AGN combinations. If the startup table values do not correspond to an
entry in the RACF tables, the stored procedure cannot connect to DBCTL.
Note that connections to different DBCTL systems from the same stored procedures address space
can have different USERID and AGN security, because the DBCTL ID specifies a different startup
table module.
• ISIS=2: The connection and the PSB are checked. You have to create the resource access security
exit routine DFSISIS0. The routine must determine whether the AGN passed to it is valid for the
attempted connection.
* MESSAGES
* VARIABLES
* CONSTANTS
* FLAGS
01 FLAGS.
02 SET-DATA-FLAG PIC X VALUE '0'.
88 NO-SET-DATA VALUE '1'.
02 TADD-FLAG PIC X VALUE '0'.
88 PROCESS-TADD VALUE '1'.
* COUNTERS
01 AIB.
02 AIBID PIC X(8).
02 AIBLEN PIC 9(9) USAGE BINARY.
02 AIBSFUNC PIC X(8).
02 AIBRSNM1 PIC X(8).
02 AIBRSNM2 PIC X(8).
02 AIBRESV1 PIC X(8).
02 AIBOALEN PIC 9(9) USAGE BINARY.
02 AIBOAUSE PIC 9(9) USAGE BINARY.
02 AIBRESV2 PIC X(12).
02 AIBRETRN PIC 9(9) USAGE BINARY.
02 AIBREASN PIC 9(9) USAGE BINARY.
02 AIBRESV3 PIC X(4).
02 AIBRESA1 USAGE POINTER.
02 AIBRESA2 USAGE POINTER.
02 AIBRESA3 USAGE POINTER.
02 AIBRESV4 PIC X(40).
02 AIBSAVE OCCURS 18 TIMES
USAGE POINTER.
02 AIBTOKN OCCURS 6 TIMES
USAGE POINTER.
02 AIBTOKC PIC X(16).
02 AIBTOKV PIC X(16).
02 AIBTOKA OCCURS 2 TIMES
PIC 9(9) USAGE BINARY.
01 IOAREA.
02 IO-BLANK PIC X(37) VALUE SPACES.
02 IO-DATA REDEFINES IO-BLANK.
03 IO-LAST-NAME PIC X(10).
03 IO-FIRST-NAME PIC X(10).
03 IO-EXTENSION PIC X(10).
03 IO-ZIP-CODE PIC X(7).
02 IO-FILLER PIC X(3) VALUE SPACES.
02 IO-COMMAND PIC X(8) VALUE SPACES.
01 DB2IN-COMMAND.
02 DB2IW-COMMAND PIC X(8).
02 DB2TEMP-COMMAND REDEFINES DB2IW-COMMAND.
03 DB2TEMP-IOCMD PIC X(3).
03 FILLER PIC X(5).
01 SSA.
02 SEGMENT-NAME PIC X(8) VALUE 'A1111111'.
02 SEG-KEY-NAME PIC X(11) VALUE '(A1111111 ='.
02 SSA-KEY PIC X(10).
02 FILLER PIC X VALUE ')'.
LINKAGE SECTION.
MAIN-RTN.
INITIALIZE AIB.
SET AIBRESA1 TO NULLS.
SET AIBRESA2 TO NULLS.
SET AIBRESA3 TO NULLS.
MOVE ZEROES to AIBRETRN.
MOVE ZEROES to AIBREASN.
MOVE VAIBID to AIBID.
MOVE LENGTH OF AIB to AIBLEN.
MOVE SPACES to IOAREA.
* MOVE SPACES TO DBPCB.
MOVE LENGTH OF IOAREA to AIBOALEN.
MOVE SPACES TO AIBSFUNC.
* PROCEDURE PROCESS-INPUT
PROCESS-INPUT.
* CHECK THE LEADING SPACE IN INPUT LAST NAME AND TRIM IT OFF
* CHECK THE LEADING SPACE IN INPUT FIRST NAME AND TRIM IT OFF
* CHECK THE LEADING SPACE IN INPUT ZIP CODE AND TRIM IT OFF
*
MOVE DB2IO-LAST-NAME TO IO-LAST-NAME.
MOVE DB2IO-COMMAND TO IO-COMMAND.
MOVE DB2IO-COMMAND TO DB2IN-COMMAND.
TO-ADD.
MOVE DB2IO-FIRST-NAME TO IO-FIRST-NAME.
MOVE DB2IO-EXTENSION TO IO-EXTENSION.
MOVE DB2IO-ZIP-CODE TO IO-ZIP-CODE.
* MOVE IO-DATA TO DB2OUT-DATA.
MOVE IO-COMMAND TO DB2IO-COMMAND.
IF DB2IO-FIRST-NAME EQUAL SPACES OR
DB2IO-EXTENSION EQUAL SPACES OR
DB2IO-ZIP-CODE EQUAL SPACES
THEN
MOVE APPERR TO DB2OUT-AIBRETRN
MOVE INVCMD TO DB2OUT-AIBREASN
ELSE
PERFORM ISRT-DB THRU ISRT-DB-END.
TO-ADD-END.
EXIT.
TO-UPD.
MOVE 0 TO SET-DATA-FLAG.
MOVE IO-LAST-NAME TO SSA-KEY.
PERFORM GET-HOLD-UNIQUE-DB THRU GET-HOLD-UNIQUE-DB-END.
IF AIBRETRN = ZEROES
THEN
IF DB2IO-FIRST-NAME NOT = SPACES
MOVE 1 TO SET-DATA-FLAG
MOVE DB2IO-FIRST-NAME TO IO-FIRST-NAME
END-IF
IF DB2IO-EXTENSION NOT = SPACES
MOVE 1 TO SET-DATA-FLAG
MOVE DB2IO-EXTENSION TO IO-EXTENSION
END-IF
IF DB2IO-ZIP-CODE NOT = SPACES
MOVE 1 TO SET-DATA-FLAG
MOVE DB2IO-ZIP-CODE TO IO-ZIP-CODE
END-IF
* MOVE IO-DATA TO DB2OUT-DATA.
MOVE IO-COMMAND TO DB2IO-COMMAND.
IF NO-SET-DATA
THEN
PERFORM REPL-DB THRU REPL-DB-END
ELSE
MOVE APPERR TO DB2OUT-AIBRETRN
MOVE INVCMD TO DB2OUT-AIBREASN.
TO-DEL.
MOVE IO-LAST-NAME TO SSA-KEY.
PERFORM GET-HOLD-UNIQUE-DB THRU GET-HOLD-UNIQUE-DB-END.
IF AIBRETRN = ZEROES
THEN
* MOVE IO-DATA TO DB2OUT-DATA
MOVE IO-COMMAND TO DB2IO-COMMAND
PERFORM DLET-DB THRU DLET-DB-END.
TO-DEL-END.
EXIT.
TO-DIS.
MOVE IO-LAST-NAME TO SSA-KEY.
PERFORM GET-UNIQUE-DB THRU GET-UNIQUE-DB-END.
IF AIBRETRN = ZEROES
THEN
* MOVE IO-DATA TO DB2OUT-DATA
MOVE IO-COMMAND TO DB2IO-COMMAND.
TO-DIS-END.
EXIT.
ISRT-DB.
MOVE DPCBNME to AIBRSNM1.
CALL 'AERTDLI' USING ISRT, AIB, IOAREA, SSA.
* CALL 'CEETDLI' USING ISRT, AIB, IOAREA, SSA.
IF AIBRETRN = ZEROES
THEN
IF PROCESS-TADD
DISPLAY 'INSERT IS DONE, REPLY' UPON CONSOLE
ACCEPT REPLY FROM CONSOLE
MOVE 0 TO TADD-FLAG
END-IF
ELSE
MOVE APPERR TO DB2OUT-AIBRETRN
MOVE INVCMD TO DB2OUT-AIBREASN
MOVE ISRT TO DC-ERROR-CALL.
* SET ADDRESS OF DBPCB TO AIBRESA1
* MOVE DBSTATUS TO DC-ERROR-STATCDE.
ISRT-DB-END.
EXIT.
* PROCEDURE GET-UNIQUE-DB
* DATA BASE SEGMENT GET-UNIQUE-DB REQUEST HANDLER
GET-UNIQUE-DB.
MOVE DPCBNME to AIBRSNM1.
CALL 'AERTDLI' USING GET-UNIQUE, AIB, IOAREA, SSA.
* CALL 'CEETDLI' USING GET-UNIQUE, AIB, IOAREA, SSA.
IF AIBRETRN NOT EQUAL ZEROES
THEN
* PROCEDURE GET-HOLD-UNIQUE-DB
* DATA BASE SEGMENT GET-HOLD-UNIQUE-DB REQUEST HANDLER
GET-HOLD-UNIQUE-DB.
MOVE DPCBNME to AIBRSNM1.
CALL 'AERTDLI' USING GET-HOLD-UNIQUE, AIB, IOAREA, SSA.
* CALL 'CEETDLI' USING GET-HOLD-UNIQUE, AIB, IOAREA, SSA.
IF AIBRETRN NOT EQUAL ZEROES
THEN
MOVE APPERR TO DB2OUT-AIBRETRN
MOVE INVCMD TO DB2OUT-AIBREASN
* SET ADDRESS OF DBPCB TO AIBRESA1
MOVE GET-HOLD-UNIQUE TO DC-ERROR-CALL.
* MOVE DBSTATUS TO DC-ERROR-STATCDE.
GET-HOLD-UNIQUE-DB-END.
EXIT.
REPL-DB.
MOVE DPCBNME to AIBRSNM1.
CALL 'AERTDLI' USING REPL, AIB, IOAREA.
* CALL 'CEETDLI' USING REPL, AIB, IOAREA.
IF AIBRETRN NOT EQUAL ZEROES
MOVE APPERR TO DB2OUT-AIBRETRN
MOVE INVCMD TO DB2OUT-AIBREASN
* SET ADDRESS OF DBPCB TO AIBRESA1
MOVE REPL TO DC-ERROR-CALL.
* MOVE DBSTATUS TO DC-ERROR-STATCDE.
REPL-DB-END.
EXIT.
DLET-DB.
MOVE DPCBNME to AIBRSNM1.
CALL 'AERTDLI' USING DLET, AIB, IOAREA.
* CALL 'CEETDLI' USING DLET, AIB, IOAREA.
IF AIBRETRN NOT EQUAL ZEROES
MOVE APPERR TO DB2OUT-AIBRETRN
MOVE INVCMD TO DB2OUT-AIBREASN
* SET ADDRESS OF DBPCB TO AIBRESA1
MOVE DLET TO DC-ERROR-CALL.
* MOVE DBSTATUS TO DC-ERROR-STATCDE.
DLET-DB-END.
EXIT.
//LKED.SYSIN DD *
INCLUDE SYSLIB(DSNRLI)
INCLUDE SYSLIB(DSNTIAR)
//********************************************************************
//* DFSTDB2P TEXT DECK ALREADY IN USER.CIMS.PGMLIB
//* FOLLOWING JUST FORCES A LINK (ALREADY DONE WITH IMSCOB2)
//********************************************************************
//*STEPPROC EXEC PROC=DSNHCOBM,DB2LEV=DB2A,MEM=DFSTDB2P
//*LKED.SYSIN DD *
//* NAME DFSTDB2P(R)
//********************************************************************
//* CALLING PROGRAM!
//********************************************************************
/* EXEC SQL */
/* SET CURRENT PACKAGESET = 'COL1'; */
/************************************************************/
/* Include Standard Language Procedures. */
/************************************************************/
/* EXEC SQL INCLUDE PLISUB; */
/******************************************************************/
END APPLDIS2;
//LKED.SYSIN DD *
INCLUDE SYSLIB(DSNELI)
INCLUDE SYSLIB(DSNTIAR)
NAME APPLDIS2(R)
//********************************************************************
//********************************************************************
//********************************************************************
//********************************************************************
//********************************************************************
//STEP14 EXEC TSOBATCH,DB2LEV=DB2A
//SYSTSIN DD *
DSN SYSTEM(V51A)
FREE PLAN(PLANADI2)
//**********************************************************************
//* DSNCMD STEP
//**********************************************************************
//STEP16 EXEC TSOBATCH,DB2LEV=DB2A
//SYSTSIN DD *
DSN SYSTEM(V51A)
13.3.1.9 Problem Determination: You should always check the IMS status code, the reason
code, and the return code. The reason code is contained in the AIBREASN field, and the return code is
contained in the AIBRETRN field. Check for nonzero reason and return codes, and for unexpected
nonblank status codes (for example, GE and GB can be expected status codes, indicating that no more
segments were found or that the end of the database was reached).
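The checks above can be sketched as follows. DBPCB and DBSTATUS mirror the commented-out lines in the sample program and are assumed to be LINKAGE SECTION definitions:

```cobol
      * Sketch: after a DL/I call, check the AIB return and reason
      * codes first, then locate the PCB through AIBRESA1 to inspect
      * the status code. DBPCB and DBSTATUS are assumed definitions.
           IF AIBRETRN NOT = ZEROES OR AIBREASN NOT = ZEROES
               DISPLAY 'DLI ERROR RETURN=' AIBRETRN
                       ' REASON=' AIBREASN
           ELSE
               SET ADDRESS OF DBPCB TO AIBRESA1
               IF DBSTATUS NOT = SPACES AND
                  DBSTATUS NOT = 'GE' AND DBSTATUS NOT = 'GB'
                   DISPLAY 'UNEXPECTED STATUS CODE ' DBSTATUS
               END-IF
           END-IF
```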
Table 21 shows some new AIB reason and return codes with possible causes.
Table 22 shows the existing AIB return codes and reason codes for the APSB and DPSB function calls
applicable to AERTDLI.
You can use the APPC code in the DB2-established address space or in the WLM-established address
space.
If you use the DB2-established address space, you should specify SYNCLEVEL NONE or CONFIRM. For the
DB2-established address space, you do not have two-phase commit support for updates done in the IMS
database. If the client application requests a rollback, or if one of the participants in the two-phase
commit processing cannot commit, updates done by the IMS transaction are not rolled back. In this
case, the application is responsible for removing the updates. Note that this is an exposure, because
by the time DB2 rolls back, the IMS applications may already have used the updated information.
If you use the WLM-established address space, you should specify SYNCLEVEL NONE or CONFIRM if
the IMS transaction does not perform any updates on IMS databases. If the IMS transaction performs
updates to IMS databases, you must specify SYNCLEVEL SYNCPOINT to ensure that IMS is a participant
in the two-phase commit process. In this case, you need IMS Version 6, which is RRS enabled. You
must also have OS/390 R3 APPC to achieve full two-phase commit processing.
For an example of how to use APPC to invoke IMS transactions from a stored procedure refer to 13.5,
“APPC to Access Transactions from a Stored Procedure” on page 292.
IMS has access to messages enqueued on MQSeries through a BMP or using the IMS Open
Transaction Manager Access (OTMA). Please refer to IMS publications for more information about
OTMA.
You can use this method using DB2-established stored procedures or WLM-established stored
procedures. If you need two-phase commit processing, then you have to use WLM-established
address spaces and the correct level of MQSeries, CICS, IMS, or the product that your stored
procedure is requesting information from.
We provide three sample COBOL stored procedures that use CPIC to invoke the IMS installation
verification procedure (IVP) transactions:
• IMSBMS stored procedure is called by client IMSBMCBM and executes the IMS IVP PART
transaction. It places information about PARTs received from IMS in a DB2 global temporary table,
which is returned to the client application in a result set.
• IMUBMS stored procedure is called by client IMUBMCBM and shows how to dynamically change
the transaction program name (TPN). It can invoke any of the IMS IVP transactions. It stores the
IMS data into a DB2 global temporary table and returns a result set back to the client.
• IMDBMS stored procedure is called by clients IMDBMCBM, IMDBMCBN, and IMDBMCB2. It has
the same function as IMUBMS except that the IMS output is returned in a parameter. It does not use a
DB2 global temporary table.
Here is an example of the side information file we used to set up the symbolic destination name:
//STEP01 EXEC PGM=ATBSDFMU
//SYSPRINT DD SYSOUT=*
//SYSSDLIB DD DSN=SYS1.APPCSI,DISP=SHR
//SYSSDOUT DD SYSOUT=*
//SYSIN DD *
SIADD
DESTNAME(IMSA)
TPNAME(PART)
MODENAME(LU62APPC)
PARTNER_LU(SC47IMSA)
/*
Note that:
• The DESTNAME specification can be any name. When you code your application, you pass this
name on the CPIC call if you want to use this side information.
• The TPNAME specification matches the IMS transaction code.
• The MODENAME specification must match a VTAM log mode available for IMS.
• The PARTNER_LU specification must match the IMS LU name.
Instead of using the ATBSDFMU utility, you can update the information in the data sets interactively,
using ISPF panels.
13.5.1.1 Program Preparation: APPC/MVS provides interface definition files (IDFs), also called
pseudonym files, which define variables and values for parameters of APPC/MVS services. IDFs are
available for different languages, and can be included or copied from a central library into programs
that invoke APPC/MVS callable services. The following IDFs are available on MVS:
For PL/I, include SYS1.SAMPLIB(ATBCMPLI) or CICS.V5R1M0.SDFHPL1(CMPLI).
The ATBPBI module from SYS1.CSSLIB must be link-edited with any program that issues APPC/MVS
CPIC calls or TP conversation services.
For our samples, we used CPIC calls to MVS TP services because CPIC is more portable. Like CPIC,
APPC/MVS's MVS-specific services let your programs communicate with other programs on the same
MVS system, other MVS systems, or other operating systems in an SNA network. Unlike CPIC,
programs using the MVS-specific services are not portable to other systems. MVS-specific services
make use of the MVS/ESA architecture, including data spaces and asynchronous processing, and
provide TP scheduling options, server functions, and test services.
The following is a summary of the IMSBMCBM client program and the IMSBMS stored procedure:
1. IMSBMCBM declares a result set locator for the result set that is returned from IMSBMS.
2. A working storage field is defined to hold the 100 characters returned from the result set.
3. IMSBMCBM calls the stored procedure IMSBMS and passes to it a part number using the SIMPLE
linkage convention.
4. IMSBMS initializes and allocates the conversation to the IMS transaction PART.
5. IMSBMS sends and flushes the data, which is the part number.
6. The partner transaction PART is scheduled by the IMS control region in a message processing
program (MPP) region.
7. The PART transaction reads information about a part number and sends the reply back. In the
case of an existing part number, the part-related information is returned. Otherwise, an error
message is returned.
8. IMSBMS loops to receive data and stores this data in a DB2 global temporary table, and continues
to loop until all data has been received.
9. When a deallocated normal message is received from IMS, IMSBMS ends the conversation.
10. IMSBMS declares a cursor WITH RETURN selecting all the rows from the DB2 global temporary
table:
EXEC SQL DECLARE TESTE-CURSOR CURSOR WITH RETURN FOR
SELECT COL1
FROM TEMPIMS
END-EXEC.
11. The cursor is opened and control is returned to the client program IMSBMCBM.
12. IMSBMCBM checks for an SQLCODE of +466 to see if a result set was returned. If a result set is
returned, IMSBMCBM associates the result locator variable with the result set by using the ASSOCIATE
LOCATORS SQL statement.
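The client-side result set handling in steps 11 and 12 can be sketched as follows; LOC1, C1, and WS-LINE are illustrative names:

```cobol
      * Sketch: associating and allocating the result set returned
      * by IMSBMS. LOC1, C1, and WS-LINE are illustrative names
      * assumed to be declared in the client program.
           IF SQLCODE = +466
               EXEC SQL ASSOCIATE LOCATORS (:LOC1)
                        WITH PROCEDURE IMSBMS
               END-EXEC
               EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :LOC1
               END-EXEC
               PERFORM UNTIL SQLCODE NOT = 0
                   EXEC SQL FETCH C1 INTO :WS-LINE END-EXEC
               END-PERFORM
           END-IF
```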
Next, the IMUBMS stored procedure must be defined in the SYSIBM.SYSPROCEDURES table. This
stored procedure is defined with COMMIT_ON_RETURN to show you how to return a result set when
using COMMIT_ON_RETURN and how COMMIT_ON_RETURN affects a global temporary table.
INSERT INTO SYSIBM.SYSPROCEDURES
(PROCEDURE, AUTHID, LUNAME, LOADMOD,
LINKAGE, COLLID, LANGUAGE, ASUTIME,
STAYRESIDENT, IBMREQD, RUNOPTS,
PARMLIST,
RESULT_SETS, WLM_ENV, PGM_TYPE, EXTERNAL_SECURITY,
COMMIT_ON_RETURN )
VALUES('IMUBMS', ' ', ' ', 'IMUBMS',
' ', 'SAMPLESP', 'COBOL', 0,
' ', 'N', ' ',
'CHAR(8) IN, CHAR(71) IN',
1, 'WLMENV2', 'M', 'N',
'Y') ;
COMMIT;
The following is a summary of the IMUBMCBM client program and the IMUBMS stored procedure:
1. IMUBMCBM declares a result locator for the result set that will be returned from IMUBMS.
2. A working storage field is defined to hold the 100 characters returned from the result set.
3. Two fields are defined to be passed to IMUBMS:
• An 8-character field to hold the IMS IVP transaction
• A 71-character field to hold the IMS IVP transaction data
4. IMUBMCBM reads an IMS IVP transaction, calls the stored procedure IMUBMS, passing to it the
IMS IVP transaction code and its data. It uses the SIMPLE linkage.
5. IMUBMS first deletes any rows in the global temporary table. Although we have defined this
stored procedure to use COMMIT_ON_RETURN, the rows are not deleted when we return to the
client. This is because the application process has an open cursor WITH HOLD option on this table
to pass the IMS rows back to IMUBMCBM. With COMMIT_ON_RETURN, any cursor that is opened
without the WITH HOLD option is closed when control is returned to the client application, and no
data is passed back to the client application. Because the client application can invoke the stored
procedure again using the same thread, the stored procedure must delete the rows inserted when
the stored procedure was invoked previously.
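The combination described above can be sketched as a cursor declaration. The cursor name is illustrative; TEMPIMS is the global temporary table used by the samples:

```cobol
      * Sketch: a cursor that survives the commit taken on return
      * when COMMIT_ON_RETURN is in effect, because it is declared
      * WITH HOLD in addition to WITH RETURN.
           EXEC SQL DECLARE HOLD-CURSOR CURSOR
                    WITH HOLD WITH RETURN FOR
                    SELECT COL1 FROM TEMPIMS
           END-EXEC.
```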
Next, the IMDBMS stored procedure must be defined in the SYSIBM.SYSPROCEDURES table. This
stored procedure is defined with SIMPLE WITH NULLS linkage:
DELETE FROM SYSIBM.SYSPROCEDURES
WHERE PROCEDURE='IMUBMS';
COMMIT;
INSERT INTO SYSIBM.SYSPROCEDURES
(PROCEDURE, AUTHID, LUNAME, LOADMOD,
LINKAGE, COLLID, LANGUAGE, ASUTIME,
STAYRESIDENT, IBMREQD, RUNOPTS,
PARMLIST,
RESULT_SETS, WLM_ENV, PGM_TYPE, EXTERNAL_SECURITY,
COMMIT_ON_RETURN )
VALUES('IMDBMS', ' ', ' ', 'IMDBMS',
'N', 'SAMPLESP', 'COBOL', 0,
' ', 'N', ' ',
'CHAR(8) IN, CHAR(71) IN',
0, 'WLMENV2', 'M', 'N',
'N') ;
COMMIT;
The following is a summary of the IMDBMCBM client program and the IMDBMS stored procedure:
1. IMDBMCBM sets the indicator variables for IMS-LINES and IMS-OUTPUT1 to -1. This stops
these two fields from being sent to the stored procedure on the CALL. Two fields are
passed to IMDBMS:
• An 8-character field that contains the IMS IVP transaction
• A 71-character field that contains the IMS IVP transaction data
2. IMDBMCBM reads an IMS IVP transaction and calls the stored procedure IMDBMS, passing the IMS
IVP transaction. It uses the SIMPLE WITH NULLS linkage.
3. IMDBMS initializes the conversation to the IMS region.
4. IMDBMS sets the TPN to the IMS IVP transaction code passed to it, using the CPIC CMSTPN
function.
5. IMDBMS allocates a conversation with this IMS IVP transaction.
6. The IVP transaction and its data are inserted into the transaction log table.
7. IMDBMS sends and flushes the data that contains the transaction code.
8. The partner IVP transaction is scheduled by the IMS control region, and the IVP transaction gets
the data, and sends the reply back. In the case of an existing transaction and good data, the
related information is returned. Otherwise, an error message is returned.
9. IMDBMS loops to receive data and stores this data into the field defined with an OCCURS clause.
10. When a deallocated normal message is received from IMS, IMDBMS ends the conversation.
11. IMDBMS then sets the length of the VARCHAR field with the number of lines returned from IMS
times the length of the lines.
For more information on the IMS IVP transactions PART, ADDPART, and DLETPART, which come with
the IMS installation verification procedure, refer to IMS/ESA Installation Volume 1: Installation and Verification,
SC26-8023.
In this chapter, we discuss several issues related to the performance of an application using stored
procedures. We used stored procedures based on the samples that come with DB2 Common Servers.
We encourage you to reproduce some of our tests in your own environment to verify the performance
improvements for each scenario.
Although the performance aspects of stored procedures are certainly important, we suggest that you
consider the other advantages of using stored procedures. Performance is one, but probably not the
only, reason to implement stored procedures. Refer to Chapter 1, “Stored Procedures Overview” on
page 1 for information about the advantages of stored procedures.
In this section, we compare using stored procedures with using traditional coding where each SQL
statement must be shipped to the server in order to be executed on the server. For the comparisons
we created two scenarios.
In the first scenario, we coded a traditional SQL program that did not use stored procedures. The
traditional application was written in REXX and executed on an OS/2 client connected to a DB2 for
OS/2 server database.
In the second scenario, a stored procedure issued 10 SQL PREPARE and 10 SQL INSERT statements.
These 20 statements were repeated a number of times according to a LOOP parameter defined in the
client program. The client program was written in REXX and ran on an OS/2 client. The stored
procedure was written in REXX and ran on an OS/2 server.
The execution of these programs was relatively slow for the following reasons:
• All three programs (the client, the stored procedure, and the traditional program) were coded in
REXX, which is an interpreted language.
• For each INSERT statement, the program issued an SQL PREPARE statement to simulate execution
of a command different from the SQL INSERT statement. In this test, the PREPARE statement was
not necessary because the program executed the same INSERT statement multiple times, but we
included it to measure the results if different SQL statements were executed.
This test was executed between two dedicated OS/2 PCs connected through a 16 Mbps LAN. As the
connection between the client and the server machine became slower, or as network traffic increased,
the performance advantage of stored procedures became more obvious, even with only a few SQL
statements. As with all performance tests, we recommend reproducing them in your own environment
to determine the potential benefits.
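The effect of link speed can be sketched with a simple cost model (ours, not taken from the test programs): if each SQL statement costs one network round trip, the traditional application pays that round trip once per statement, while the stored procedure pays it once per CALL. A rough illustration, with all timings hypothetical:

```python
# Hedged back-of-the-envelope model: compare the network cost of shipping
# each SQL statement to the server with one stored procedure CALL that
# runs the same statements server-side. All numbers are illustrative.

def traditional_cost(n_stmts, rtt_ms, server_ms_per_stmt):
    """One network round trip per SQL statement."""
    return n_stmts * (rtt_ms + server_ms_per_stmt)

def stored_procedure_cost(n_stmts, rtt_ms, server_ms_per_stmt):
    """A single round trip for the CALL; all statements run at the server."""
    return rtt_ms + n_stmts * server_ms_per_stmt

# 20 statements (10 PREPAREs + 10 INSERTs), fast LAN vs. slow link
for rtt in (2, 50):  # hypothetical round-trip times in milliseconds
    t = traditional_cost(20, rtt, 1)
    s = stored_procedure_cost(20, rtt, 1)
    print(f"rtt={rtt}ms: traditional={t}ms, stored procedure={s}ms")
```

With these illustrative numbers, the gap widens from roughly 3 to 1 on the fast link to roughly 15 to 1 on the slow one, which matches the qualitative behavior described above.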
You can find the following programs from this test on the diskette that accompanies this book: sp0r2s
for the stored procedure, sp0r2cr2 for the client program, and rr22xsp0 for the traditional program.
In most cases, static SQL performs better than dynamic SQL. To verify the effects of using static and
dynamic SQL in a stored procedure, we developed two C OS/2 stored procedures: one used static
SQL, and the other used dynamic SQL. When the stored procedure was invoked, it issued 10 SQL
INSERT statements. We measured the time that elapsed when the client program invoked the stored
procedure 10 times and 50 times. Table 25 shows the results.
Because the dynamic SQL version had to issue an SQL PREPARE and an SQL EXECUTE statement for each
INSERT, it is no surprise that the static SQL version, which issued just a single SQL statement per
INSERT, executed faster.
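The two coding styles can be sketched in embedded SQL as follows (the statement name, table, and host variables are hypothetical):

```sql
-- Dynamic SQL: a PREPARE and an EXECUTE for every INSERT
EXEC SQL PREPARE S1 FROM :insert_string;
EXEC SQL EXECUTE S1;

-- Static SQL: a single statement, bound before execution
EXEC SQL INSERT INTO STAFF2 (ID, NAME) VALUES (:id, :name);
```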
In our case, 10 PREPARE statements were not needed for the 10 INSERT statements as explained in
14.1, “Traditional Coding and Stored Procedures” on page 299. The client program was written in
REXX and ran on an OS/2 client; the stored procedure ran on the OS/2 server.
You can find the following programs from this test on the diskette that accompanies this book: pr1c2s
for the stored procedure and pr1c2cr2 for the client program when dynamic SQL is used, and pr2c2s
for the stored procedure and pr2c2cr2 for the client program when static SQL is used.
Using compound SQL is a way of reducing network traffic by grouping multiple SQL statements into a
single executable block.
As stored procedures reside on the server, network traffic is already reduced. Because stored
procedures usually run fenced, there is still interprocess communication overhead between the DB2
kernel processes and the process running the stored procedure. Compound SQL improves the overall
performance by reducing this interprocess communication. To measure the effect of using compound
SQL statements, we changed the static SQL statements described in 14.2, “Dynamic and Static SQL”
on page 300 to use compound SQL statements. Table 26 shows the results for compound static SQL.
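For reference, a compound SQL block in an embedded SQL program is coded roughly as in the following sketch (the table, columns, and values are hypothetical; check the exact ATOMIC/STATIC clause syntax against your DB2 release):

```sql
EXEC SQL BEGIN COMPOUND ATOMIC STATIC
   INSERT INTO STAFF2 (ID, NAME) VALUES (10, 'SANDERS');
   INSERT INTO STAFF2 (ID, NAME) VALUES (20, 'PERNAL');
   COMMIT;
END COMPOUND;
```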
You can find the following programs from this test on the diskette that accompanies this book: pr3c2s
for the stored procedure and pr3c2cr2 for the client program.
Although we do not recommend that you run stored procedures unfenced for reasons discussed in 4.5,
“Fenced and Unfenced Stored Procedures” on page 76, we measured the effect of using unfenced
stored procedures. Table 27 shows the results for unfenced stored procedures. For this test, we used
the same program we used for the static, noncompound SQL tests (program pr2c2s, here renamed to
pr4c2s).
You can find the following programs from this test on the diskette that accompanies this book: pr4c2s
for the stored procedure and pr4c2cr2 for the client program.
To compare embedded dynamic SQL with CLI, we wrote test programs that issued one PREPARE
statement followed by 10 INSERT statements. We ran the programs with 10 calls and 50 calls to the stored
procedure. Table 28 shows the results.
You can find the following programs from this test on the diskette that accompanies this book: cl2o2s
for the stored procedure and cl2o2cr2 for the client program using CLI, and dd1c2s for the stored
procedure and dd1c2cr2 for the client program using embedded dynamic SQL.
You cannot compare this test with the previous tests because in this test we ran one statement 10
times. In the previous tests, we basically ran 10 different statements, each requiring its own PREPARE
statement.
We also tested CLI with a PREPARE statement for each insert, but we found that the CLI SQLPrepare
call was more time consuming than the embedded dynamic SQL PREPARE statement. In a case
with many prepare statements, performance might therefore be better if you code the stored procedure in
embedded SQL and use CLI for the client only.
The KEEPDARI database manager configuration parameter has an important impact on the performance of stored procedures. If
KEEPDARI is set to YES, the process required for running a stored procedure remains active after the
stored procedure has ended. If KEEPDARI is set to NO, the db2dari process must be rebuilt each time
a stored procedure is called.
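KEEPDARI is set in the database manager configuration. A sketch of how this might be done from the command line processor (the syntax here is an assumption based on the DB2 Version 2/Version 5 command line processor; verify against your release):

```
db2 update database manager configuration using keepdari yes
db2stop
db2start
```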
We ran some tests with a stored procedure that issued 10 inserts. Subsequently we called this stored
procedure several times, comparing the elapsed time with KEEPDARI=YES and KEEPDARI=NO. As
expected, with KEEPDARI=YES the subsequent calls were processed much faster (see Table 29).
This test was done with a REXX OS/2 program running on an OS/2 client workstation and calling a C
OS/2 stored procedure written with embedded dynamic SQL statements running on an OS/2 server.
You can find the following programs from this test on the diskette that accompanies this book: pr1c2s
for the stored procedure and pr1c2cr2 for the client program.
We also recommend setting MAXDARI to the MAXAGENTS value and running unfenced stored
procedures on dedicated test machines, or only after they have been thoroughly tested.
For other performance measurements and capacity planning information, refer to Appendix B,
“Performance Benchmark and Capacity Planning” on page 377 and Appendix C, “DB2 Connect Result
Set Study” on page 415.
In this chapter, we describe how to debug DB2 on MVS stored procedures with the CoOperative
Development Environment/370 (CODE/370). We discuss installation considerations, preparation of your
stored procedure for debugging, and use of the CODE/370 debug tool.
DB2 on MVS stored procedures run in an environment where some commonly used debugging tools,
such as TSO TEST, are not available. CODE/370 provides a powerful alternative for testing and
debugging DB2 for MVS/ESA stored procedures.
CODE/370 is an integrated edit, compile, and debug tool for high-level language application
development and maintenance across different platforms. CODE/370 functions are available in the
OS/2 environment, operating cooperatively with the host. CODE/370 works with COBOL/370, C/370, or
PL/I compilers and with the LE/370 run-time environment to create the applications. Figure 138 is an
overview of the CODE/370 cooperative environment.
In this chapter, we focus on the debugging functions of CODE/370. For more information on other
CODE/370 functions, refer to the CODE/370 General Information Manual.
The CODE/370 debug tool is an interactive debugger with both a mainframe and an OS/2 interface. It
provides a very powerful set of debugging functions that can be used in CMS, CICS, IMS, TSO, and
MVS batch environments. CODE/370 supports DB2 for MVS/ESA stored procedures as an MVS batch
environment.
The functions above are available in both the mainframe and PWS debug tools.
The mainframe debug tool runs in interactive or batch mode. In interactive mode (MFI) you use panels
to issue debug commands and monitor the execution of the program. In batch mode, you must create
a data set with the debug commands to be used and a data set to receive the debug information.
The PWS debug tool (Figure 139 on page 307) runs on an OS/2 workstation in interactive mode. You
use OS/2 windows to issue debug commands and monitor the execution of the program. The PWS
must be connected to the mainframe through APPC.
To debug DB2 for MVS/ESA stored procedures you can use either the PWS debug tool or the
mainframe batch debug tool. Because stored procedures run in a separate address space, you cannot
use the MFI debug tool to debug them. To debug MVS client programs, you can use either the PWS
debug tool, the MFI, or the batch debug tool.
The workstation components of CODE/370 are downloaded from the host system during the process of
installing CODE/370 on the workstation. You must have a 3270 terminal emulation session defined on
your workstation before you can install CODE/370.
To debug DB2 for MVS/ESA stored procedures, you must ensure that you have the following PTFs
applied in your system:
• LE/370 Version 1.5
For a complete description of the installation steps, refer to Chapter 2, “Installing CODE/370 on Your
Workstation,” in the CODE/370 Installation Manual.
15.2.2.1 Configuring the Workstation: The CODE/370 PWS debug tool requires an APPC
connection between your workstation and the host system. The configuration required for the CODE/370
connection is similar to the configuration required for a DRDA connection. If your workstation is already
configured for a DRDA connection, most of the definitions for CODE/370 probably already exist.
In this section, we describe only those definitions required by the CODE/370 PWS debug tool that are
not defined when you create an APPC connection for DRDA. Refer to Chapter 3, “Configuring
Communications for CODE/370,” of the CODE/370 Installation Manual for a complete list of the
definitions required for CODE/370.
When running CODE/370 interactively to debug a DB2 for MVS/ESA stored procedure, the PWS debug
tool runs as a server to the CODE/370 debug functions on the host. Defining your workstation as a
PWS debug tool server requires that you create a transaction program name (TPN) definition in CM/2.
This TPN definition is similar to the TPN definitions you create when configuring DB2 for OS/2 as a
DRDA server. Follow these steps:
1. To configure the TPN for the PWS debug tool, select Transaction program definition in the SNA
Features List window and click on the Create push button, as shown in Figure 140 on page 309.
4. Select Presentation Manager for Presentation type and Non-queued, Attach Manager started for
Operation type.
5. Click on the OK push button.
15.2.2.2 Configuring the Host: CODE/370 requires some definitions on the host system. We
assume that your host system has been prepared for APPC connection with your workstation. In this
section, we discuss only the configuration of the APPC/MVS subsystem required by CODE/370.
APPC/MVS Definitions: APPC/MVS uses PARMLIB members APPCPMxx and ASCHPMxx to store its
definitions. You must update these members with CODE/370 information. The debug tool requires an
entry only in the APPCPMxx member. The CODE/370 edit and compile tools require an entry in the
ASCHPMxx member. If you plan to use CODE/370 edit and compile tools, refer to the CODE/370
Installation Manual for configuration requirements.
The SIDEINFO DATASET is a VSAM data set used to store the CPI-C side information. The CODE/370
debug tool uses information stored in this data set to establish the APPC connection to your
workstation. Refer to “CPI-C Side Information” on page 311 for information on how to load this data
set with CODE/370 information.
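As an illustration, the APPCPMxx entries for CODE/370 might be sketched as follows (the LU name SCC370 and side information data set SYS1.APPCSI follow the examples in this chapter; the TP profile data set name and the exact statement syntax are assumptions to verify against your MVS planning documentation):

```
LUADD ACBNAME(SCC370)
      SCHED(ASCH)
      BASE
      TPDATA(SYS1.APPCTP)
SIDEINFO DATASET(SYS1.APPCSI)
```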
You must define the CODE/370 LU to VTAM. Here is an example of the VTAM definition required by
CODE/370:
VBUILD TYPE=APPL
*
SCC370 APPL ACBNAME=SCC370,
APPC=YES,
AUTOSES=10,
DDRAINL=NALLOW,
DMINWNL=16,
DMINWNR=16,
Note that the LUNAME in the VTAM definition must match the ACBNAME in the APPCPMxx member.
CPI-C Side Information: To configure the CODE/370 PWS debug tool as a client in the host system,
you must add an entry in the CPI-C side information data set. This data set is defined to APPC/MVS in
the APPCPMxx member.
The CPI-C side information defines a symbolic destination name for your workstation. When you
invoke your stored procedure by using the TEST run-time option, this name indicates the name of the
workstation used to debug the stored procedure. You must have an entry in this data set for every
workstation that is going to be used as a CODE/370 PWS debug tool server. Here is the sample JCL to
create an entry in the CPI-C side information data set:
//SIADD0 JOB (999,POK),NOTIFY=&SYSUID,
// CLASS=A,MSGCLASS=T,MSGLEVEL=(1,1),TIME=1440
//STEP EXEC PGM=ATBSDFMU
//SYSPRINT DD SYSOUT=*
//SYSSDLIB DD DSN=SYS1.APPCSI,DISP=SHR
//SYSSDOUT DD SYSOUT=*
//SYSIN DD *
SIADD
DESTNAME(SC02130I)
TPNAME(CODE370DT)
MODENAME(IBMRDB)
PARTNER_LU(SC02130I)
/*
In this example, the TPNAME must match the transaction program name defined in the CM/2 profile.
(See Figure 141 on page 309 for an example of the CM/2 transaction program definition.) We used the
log mode IBMRDB. This log mode is the same as that used by the DRDA connection, and there is no
need to define a new log mode for the CODE/370 connection. The PARTNER_LU is the LUNAME of
your workstation.
Updating the Stored Procedures Address Space JCL Procedure: After you have completed the
CODE/370 installation and customization, you must update the stored procedures address space JCL
procedure before you can debug DB2 on MVS stored procedures. You must concatenate the CODE/370 load library
to the STEPLIB list. Here is an example of the stored procedures address space JCL procedure with
the CODE/370 library for DB2 Version 4:
//DB41SPAS PROC RGN=0K,TME=1440,SUBSYS=DB41,NUMTCB=8
//IEFPROC EXEC PGM=DSNX9STP,REGION=&RGN,TIME=&TME,
// PARM='&SUBSYS,&NUMTCB'
//STEPLIB DD DISP=SHR,DSN=DSN410.RUNLIB.LOAD
// DD DISP=SHR,DSN=SYS1.CEE.V1R5M0.SCEERUN
// DD DISP=SHR,DSN=DSN410.SDSNLOAD
// DD DISP=SHR,DSN=EQAW.V1R2M0.SEQAMOD <--------
// DD DISP=SHR,DSN=STDRD2A.STPROC.LOAD
To provide the CODE/370 debug tools with necessary debugging information, you must compile your
stored procedure with the TEST compiler option. This option causes the compiler to create the
dictionary tables that the debug tool uses to obtain information about program variables.
In addition, the compiler inserts program hooks at selected points in your program. Your source is not
modified. These points can be at entrances and exits of blocks, at statements, and at points in the
program where program flow may change between statement boundaries (called path points), such as
before and after a CALL statement. Using these hooks, you can set breakpoints that enable you to
gain control when you are debugging the stored procedure and issue CODE/370 debug tool commands.
Figure 144 shows the syntax of the TEST option for the COBOL and PL/I compilers.
For C compilers, the TEST option can have any number of suboptions, even if they are contradictory.
In this case, the last suboption specified in the list is used. For example,
TEST(BLOCK,NOBLOCK,SYM) is equivalent to TEST(NOBLOCK,SYM).
For COBOL and PL/I compilers, the TEST option can have a maximum of two suboptions specified.
The first suboption refers to the program hooks that are created in your stored procedure; the second
suboption refers to the creation of dictionary tables that enable the debug tool to access variables.
Here is a summary of the TEST suboptions. (For more detailed information about the TEST suboptions,
refer to the CODE/370 Debug Tool Manual.)
For COBOL programs, specifying the TEST option with no suboption is equivalent to specifying
TEST(ALL,SYM).
For PL/I programs, specifying the TEST option with no suboption is equivalent to specifying
TEST(NONE,SYM).
For C stored procedures, the data set containing the source code is used to display the source code in
the PWS debug tool. When you prepare stored procedures, usually the source data set used by the C
compiler is a temporary data set generated by the DB2 precompiler. To view the C source code, you
must ensure that you direct the output of the DB2 precompiler to a nontemporary data set. Here is an
example of the JCL DD statements required when you prepare a stored procedure in C:
//PC EXEC PGM=DSNHPC,PARM='HOST(C),MARGINS(1,80)',REGION=4096K
//SYSCIN DD DSN=DSN410.CODE.SOURCE,DISP=(NEW,CATLG),
// UNIT=SYSDA,SPACE=(TRK,(10,10))
.
.
//C EXEC PGM=EDCDC120,COND=(4,LT,PC),REGION=4096K,
// PARM=('RENT,NOMARGINS,SOURCE,NOSEQ,LIST,TEST(ALL)')
//SYSIN DD DSN=DSN410.CODE.SOURCE,DISP=SHR
.
.
.
For COBOL and PL/I stored procedures, the compiler listing is used to display the source code in the
PWS debug tool. You must ensure that your compiler listing is directed to a nontemporary file that is
available during the debugging session. Here is an example of the JCL DD statements required when
you prepare a stored procedure in COBOL:
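A sketch of what such statements might look like, modeled on the C example above (the COBOL/370 compiler program name IGYCRCTL and the listing data set name are assumptions; verify against your installation):

```jcl
//COB EXEC PGM=IGYCRCTL,REGION=4096K,
// PARM='TEST(ALL,SYM)'
//SYSPRINT DD DSN=DSN410.CODE.LISTING,DISP=(NEW,CATLG),
// UNIT=SYSDA,SPACE=(TRK,(10,10))
```

The key point is that SYSPRINT, which receives the compiler listing, is directed to a cataloged data set rather than to SYSOUT or a temporary file.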
You can use both sequential and partitioned data sets to store the source code used by the CODE/370
debug tools.
Because the data set name is stored in the load module, you do not have to specify it when you invoke
the debug tool.
In this section, we explain the use of the TEST run-time option. For information on the use of calls to
CEETEST, refer to the CODE/370 Debug Tool Manual.
The TEST run-time option has four suboptions, which are summarized below. For detailed information
about the suboptions, refer to the CODE/370 Debug Tool Manual.
The second suboption is the commands file suboption and determines the commands data set to be
used as a source of debug commands when you are running the debug tool in batch mode. This
option can also be used as an initial source of commands if you are running in interactive mode. The
possible values for this suboption are:
* or blank A commands file is not used, and the workstation is used to input debug tool
commands.
commands_file Specifies the ddname of the data set that contains the debug tool commands. If you
are using a command file, the ddname must be allocated in the stored procedures
address space JCL procedure.
The third suboption is the prompt level and can be used to send an initial set of debug commands to
be executed during program initialization. This suboption can also specify whether the debug tool
should gain control after LE/370 initialization. The possible values for this suboption are:
PROMPT or ; or blank Indicates that you want the debug tool to be invoked immediately after LE/370
initialization.
NOPROMPT or * Specifies that you do not want the debug tool to be invoked immediately after
the LE/370 initialization.
command Specifies that one or more debug tool commands are to be executed immediately after the
stored procedure initialization.
The fourth suboption consists of the session parameter and the preferences file, separated by a colon.
The session parameter provides information to the debug tool about the session characteristics
necessary to establish an MFI session or an APPC session. The preferences file indicates a file you
can use to specify default settings for your debugging environment. The possible values for this
suboption are:
MFI: or blank Specifies that the MFI debug tool should be invoked.
workstation_id Specifies your workstation APPC symbolic destination name. This value must match
the DESTNAME parameter defined when you created the CPI-C side information for your
workstation. Refer to “CPI-C Side Information” on page 311. An APPC session with
your workstation is automatically created and the PWS debug tool is automatically
started if you specify this option.
%session_id Specifies a unique identifier for each session, if multiple concurrent APPC debug
sessions are being conducted on your workstation. The default value is CODEDT.
INSPPREF or blank Specifies the ddname of the debug tool default preferences file.
preferences_file Specifies the ddname of the preferences file to be used. This file must be allocated
in the stored procedures address space JCL procedure if you plan to use a
preferences file.
* A preferences file is not supplied.
When working with the MFI debug tool in batch mode, we used the following TEST run-time option:
TEST(ALL,TESTIN,PROMPT,MFI:*)
The basic difference between this TEST run-time option and the previous TEST run-time option is that a
command data set with ddname TESTIN is used and the MFI debug tool is invoked.
Once you have compiled your stored procedure with the TEST compiler option and updated the
SYSIBM.SYSPROCEDURES table with a TEST run-time option that activates the PWS debug tool,
invoking the stored procedure causes the debug tool on the host to initiate a session with the
workstation. The windows shown in Figure 146 on page 317 are automatically displayed on the
workstation.
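The catalog update that sets the run-time option can be sketched as follows (the procedure name MYPROC is hypothetical; SC02130I is the workstation DESTNAME from the CPI-C side information example earlier in this chapter):

```sql
UPDATE SYSIBM.SYSPROCEDURES
   SET RUNOPTS = 'TEST(ALL,*,PROMPT,SC02130I:*)'
 WHERE PROCEDURE = 'MYPROC';
```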
The details of the windows are explained in 15.4.3, “PWS Debug Tool Windows” on page 323 and are
summarized below:
• The CODE-LISTING window displays the stored procedure source code. The data set where the
source code is located is automatically obtained because it is stored in the load module. You can
use this window to generate debug tool commands by clicking on specific areas.
• The Debug Tool Command Log window is divided into two areas: the bottom is for commands
entered manually, and the top is a history log of the commands manually entered or automatically
generated and the results of the commands.
• The Global Monitor List window is used to monitor and change the values of variables during
execution of the stored procedure.
• In the Step/Run window, you control the execution of the stored procedure by indicating when the
stored procedure should proceed.
To begin testing your stored procedure, click on the Step button on the Step/Run window, so that the
first statement of the stored procedure is highlighted, as shown in Figure 147 on page 318.
Note that the Debug Tool Command Log window shows the STEP debug command associated with the
Step push button.
To set breakpoints, use your mouse to scroll the CODE-LISTING window until you get to the statement
where you want the debug tool to gain control during execution. In our example, we want the
execution to stop at statement 85. Double-click on the prefix area of statement 85 in the CODE-LISTING
window. The prefix area for the statement is highlighted, as shown in Figure 148 on page 319.
Note that the Debug Tool Command Log window shows the AT debug command associated with
setting a breakpoint.
Now click on the Run push button in the Step/Run window to execute the stored procedure. The stored
procedure is executed until statement 85 where we set a breakpoint. At this breakpoint, control is
returned to the PWS debug tool, the statement is highlighted, and you can enter debug tool commands.
At this point, we want to display the values of the PARM1 variable. Type the LIST PARM1 command in
the Debug Tool Command Log window, as shown in Figure 149 on page 320.
Click on the Process push button to execute the command. Figure 150 shows the result of the LIST
PARM1 command.
Now you can check the new value of PARM1 by issuing the LIST command again. The LIST PARM1
command is logged in the Debug Tool Command Log window, so you do not have to type it again.
Just use your mouse to select the command, and press Alt+Ins. The LIST PARM1 command is copied
automatically to the command area, as shown in Figure 152.
Click on the Process push button, and check the new value of PARM1, as shown in Figure 153 on
page 322.
To resume execution of the stored procedure, click on the Run push button in the Step/Run window.
Because no other breakpoints were set and we compiled the stored procedure with the TEST compiler
suboption ALL, the stored procedure is executed and then terminates. At this point, you can enter
debug commands. Click on the Run push button again. The stored procedure finishes, and the PWS
debug tool enters a wait state, waiting for the next debug session, as shown in Figure 154.
15.4.3.1 CODE-LISTING Window: The CODE-LISTING window (Figure 155) shows the source
code of your stored procedure. As explained in 15.3.2, “Viewing the Source Code” on page 313, when
preparing your stored procedure you must ensure that the source code data set (for C language) or the
listing data set (for COBOL and PL/I languages) is not a temporary data set, so that you can access
the source code while using the debug tool.
In the CODE-LISTING window, you can monitor the execution of your stored procedures. The next
statement to be executed is highlighted in the source code. Using the CODE-LISTING window you can
also set breakpoints in your stored procedures, list the values of variables, add variables to the debug
tool monitors, run your stored procedure, and get help.
Setting Breakpoints: If you compile your stored procedure with the option TEST(ALL), you can set
breakpoints at most statements in your stored procedure. At these breakpoints, the execution of your
stored procedure is interrupted and the debug tool gains control. When the execution is interrupted,
you can issue debug commands, such as LIST, or change the values of variables.
In the CODE-LISTING window, you can set a breakpoint by double-clicking in the prefix area of the
source code. The prefix areas for statements with breakpoints are highlighted, as shown in Figure 156
on page 324. You can also set breakpoints by selecting Breakpoints from the source window action
bar.
Displaying the Value of a Variable: You can use the CODE-LISTING window to display and change the
values of variables. To display the value of a variable, double-click on any reference to the variable in
the source code. The variable value is then shown in a window as in Figure 157.
To change the value of the variable, type the new value in the variable value window and press Enter.
The new value is assigned to the variable.
By selecting Variables from the source window action bar, you can specify the variables to be
monitored. The variables are added to the Local or Global Monitor List windows, and you can monitor
the values of these variables during execution of the stored procedure.
15.4.3.2 Step/Run Window: The Step/Run window, shown in Figure 158 on page 325, enables
you to control the execution of your stored procedure. It contains four push buttons that issue different
STEP and RUN debug commands.
Use the Step push button to execute the program one statement at a time. When you click on the Step
button, the current statement is executed, and execution of the stored procedure is interrupted before
the next statement.
Use the Step over push button to step through the execution of the current statement, without stepping
through called programs, procedures, or functions. All statements of the program, procedure, or
function you want to step over are executed, and you are returned to the next statement after the call.
For example, Figure 159 shows that your stored procedure invokes another routine, PROG B, but you
do not want to monitor any of the statements of that routine.
Use the Step return push button when you are stepping through a called program, procedure, or
function and want to return to the point where the current program, procedure, or function was invoked
without stepping through the remaining statements. Clicking on Step return executes the remaining
statements of the current program, procedure, or function and returns to the next statement after the
call. For example, Figure 160 on page 326 shows that your stored procedure invokes a routine, PROG
B, and you are monitoring each statement of the routine. Then you decide that you do not want to
further monitor the statements of the routine. Instead, you want the routine to complete processing
without monitoring and get back to monitoring the stored procedure.
Use the Run push button to execute the stored procedure statements up to the next breakpoint or the
end of the procedure.
15.4.3.3 Local and Global Monitor List Windows: Use the Local and Global Monitor List
windows to monitor the changing values of variables or change the values of variables. The Global
Monitor List window (Figure 161) appears when you invoke the debug tool. You can use it to monitor
the variables of all programs invoked during one debug session. The Local Monitor List window is a
secondary window. Each Local Monitor List window is associated with a specific CODE-LISTING
window, and you can use it only to monitor the variables of the program in that window.
To specify the variables you want to monitor, select Variables from the source code window action bar
or issue the debug tool MONITOR command.
The Debug Tool Command Log window is divided into two sections. The upper section is the log
output area; the lower section is the command input area. To issue debug tool commands, type the
command in the command input area and click on the Process push button.
Here is a list of some useful debug commands. For a complete list of debug commands, refer to the
CODE/370 Debug Tool Manual.
LIST variable Displays a variable value.
MONITOR GLOBAL LIST variable
Adds a variable to the Global Monitor List window.
MONITOR LOCAL LIST variable
Adds a variable to the Local Monitor List window.
CLEAR MONITOR n
Removes the variable number n from the monitor list.
MOVE value TO variable
Changes the value of a variable (COBOL stored procedures only).
variable = value
Changes the value of a variable (PL/I and C stored procedures).
AT LINE xx Sets a breakpoint at a specific line.
SET SOURCE ON(spname) DSN410.CODE.LISTING(spname)
Displays the stored procedure source code. The process of displaying the source
code should be automatic if you use a nontemporary file for the source code or
compiler listing when preparing the stored procedure.
COMMENT text Adds a comment to the log output area.
STEP n Steps through n statements in the source code.
RUN Runs the stored procedure up to the next breakpoint or to the end of the stored
procedure.
QUIT Ends the debug session.
Here is an example of the contents of a CODE/370 command file for debugging a stored procedure
written in COBOL:
COMMENT SIMPLE TEST OF STORED PROCEDURE;
AT 82
LIST ( "AT THE BREAKPOINT FOR LINE", %LINE ) ;
GO ;
STEP 1 ;
LIST PARM1;
MOVE "RICARDO" TO PARM1;
77 TEMP PIC X(10);
MOVE PARM1 TO TEMP ;
LIST TEMP ;
AT EXIT *
LIST ( "EXITING ", %CU ) ;
GO ;
QUIT ;
The output of the commands in the commands file is directed to a log data set. The default ddname
for the log data set is INSPLOG. You must have a JCL DD statement defined for this log data set in
your stored procedures address space JCL procedure as follows:
//INSPLOG DD DSN=DSN410.CODE370.LOG,DISP=SHR
To direct your output to a different ddname, you must include in your command file the following
command:
SET LOG ON FILE ddname;
The ddname specified in this command must be allocated to the stored procedures address space JCL
procedure.
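For example, assuming a hypothetical ddname DEBUGLOG, the stored procedures address space JCL procedure would contain a DD statement such as:

```jcl
//DEBUGLOG DD DSN=DSN410.CODE370.MYLOG,DISP=SHR
```

and the commands file would contain:

```
SET LOG ON FILE DEBUGLOG;
```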
IBM VisualAge for Basic (VAB) enables you to develop, test, and maintain stored procedures
and UDFs. With VAB you can build, run, debug, test, register, and distribute server procedures by
using the client/server facilities of the VAB development environment.
Once you create the server procedures, they can be called from different client platforms just like other
stored procedures or UDFs. Any client program developed in any language that can access your
database server can access VAB server procedures. For clients written in Visual Basic, VAB provides
a Visual Basic custom control (VBX) that simplifies calling your stored procedure.
16.1.1.1 Development Environment: VAB supports the following platforms where you can
develop, test, and maintain stored procedures and UDFs:
• OS/2
• AIX
• Windows NT
• HP-UX
• Solaris
VAB comes with a Visual Basic custom control (VBX) that facilitates the integration of VAB stored
procedures with Visual Basic client applications. In addition, you can develop client applications using
VAB on the following platforms:
• Windows 95
• Windows NT
• OS/2
After creating the DB2CLI.PROCEDURES table, you must grant each user who builds stored
procedures the privileges to insert, update, and delete entries in the DB2CLI.PROCEDURES table of each
target database.
To browse the DB2CLI.PROCEDURES table, select Stored procedure catalog from the Window
pull-down menu of the IBM VAB window as shown in Figure 163 on page 333.
Figure 164 shows the resulting Stored Procedure Catalog window for the SAMPLE database.
In a VAB project, you create code modules, or .bas files, which contain the VAB language source code
for the stored procedures and UDFs. A small application may contain only a single code module, and
larger applications may contain several code modules. In the code modules, VAB instructions are
grouped into procedures. A code module may contain multiple procedures. A given code module can
be reused in multiple projects. By placing procedures in code modules, you can make your frequently
used code reusable.
VAB instructions are grouped into two kinds of procedures: subprocedures and function procedures.
Both accept parameters, but only function procedures return a value, enabling them to be used in
expressions such as
return_code = my_function(parm1, parm2)
By double-clicking on any of the modules in the project window, you open the code editor. For
example, double-click on the spsalary.bas stored procedure code to edit it. The code editor window
shown in Figure 166 on page 335 displays your code in different colors and fonts in an easy-to-read
format.
A stored procedure written in VAB does not declare input and output parameters as SQLDA structures.
Optionally, the parameters can be followed by an array of integers to hold corresponding null
indicators and by an instance of the SQLCA structure (sqlca_type). The following is an example of
declaring the sqlsproc entry point and the parameters passed to the stored procedure:
Sub sqlsproc (sqlstmt As String, _
empname As String, _
result As Double, _
nullArray() As Integer, _
sqlca As sqlca_type)
With VAB you can pass single or multiple arrays to a stored procedure; the parameters passed can be
of any type, including a UDT.
EXEC SQL
SELECT MAX(SALARY) INTO :Maxsal FROM USERID.STAFF
END EXEC
EXEC SQL
SELECT NAME INTO :Employee FROM USERID.STAFF
WHERE SALARY = :Maxsal
END EXEC
Salary = Maxsal
EmployeeName = Employee
End Sub
16.2.4.2 Building and Registering Stored Procedures: Building is the process of creating
the library for the stored procedure at the database server. The build process uses as input the .bas
file and the stored procedure definition file (.sp file).
VAB creates a new stored procedure definition file (also referred to as a stored procedure history file)
when you select New stored procedure from the File pull-down menu of the IBM VAB window shown in
Figure 165 on page 334 and save the definition.
To update the definition file, double-click on the spsalary.sp file shown in Figure 165 on page 334.
During the build process, you can request that the stored procedure be registered on the selected
servers by checking the Register stored procedure option on the Stored Procedure Definition window.
Registering a stored procedure is the process of updating the DB2CLI.PROCEDURES table with
information about the stored procedure. Any previously registered stored procedure using the same
schema and executable name is replaced by the new procedure.
After you have completed the Stored Procedure Definition window, click on the Build push button to
build the stored procedure at the selected servers. VAB sends the BASIC modules specified in the
stored procedure definition to the server, where they are used to build the library.
16.2.5.1 Local and Remote Calls: When you use VAB to develop a client program, you have
the option of executing a call to the VAB procedure as local or remote. Before you execute a local or
remote call, your client application must connect to the database server.
A local call to the VAB procedure means invoking the VAB subprocedure as a normal procedure call,
that is, the SQL CALL statement is not used. Instead, the subprocedure is executed locally, but the
SQL statements contained in the subprocedure are executed at the database server as shown in
Figure 168.
An example of a local call to the spsalary subprocedure with two parameters is:
spsalary salary, EmployeeName
A remote call to the VAB procedure means invoking the VAB subprocedure remotely as a stored
procedure, that is, the SQL CALL statement is used, as illustrated in Figure 169 on page 339.
An example of a remote call to the spsalary stored procedure with two parameters is:
EXEC SQL CALL userid.spsalary (:salary, :EmployeeName) END EXEC
16.2.5.2 Sample Client Program: We use the salary.bas client program from the sample salary
project as an example. This client program was developed with VAB. The embedded SQL CALL
statement is used to invoke the stored procedure.
Double-clicking on the salary.bas module in the IBM VAB salary project window (Figure 165 on
page 334) opens the code editor with the client code. The client program performs the following tasks:
1. Defines the variables.
2. Opens a window asking if the call is local or remote.
3. Connects to the database.
4. Calls the local subprocedure if the call is local.
5. Calls the stored procedure with the SQL CALL statement if the call is remote.
6. Displays the result in a message box after the subprocedure or the stored procedure has returned.
7. Disconnects from the database and ends.
Here is the salary.bas file:
' ****************************************************************
' File: salary.bas
' ****************************************************************
Sub Main()
Dim prompt As String ' Input prompt
Dim LRcall As String * 1 ' Local or Remote
Dim sppath As String * 70 ' Stored procedure path
Dim salary As Double ' salary and EmployeeName are
Dim EmployeeName As String * 50 ' returned from stored procedure
Case "L"
EXEC SQL CONNECT TO SAMPLE END EXEC ' connect to sample database
' Next : call local subprocedure spsalary
spsalary salary, EmployeeName
MsgBox "The Employee " & RTrim$(EmployeeName) & " has the " & _
"highest salary of $" & salary, 0, "Local Call"
EXEC SQL CONNECT RESET END EXEC ' reset connection to database
Case "R"
EXEC SQL CONNECT TO SAMPLE END EXEC ' connect to sample database
sppath="spsalary!spsalary" ' sppath = stored proc name
salary=0
EmployeeName = ""
' Next : call the stored procedure
EXEC SQL CALL userid.spsalary (:salary, :EmployeeName) END EXEC
End
End Sub
VAB offers you an advanced debugging facility that enables you to set breakpoints, view and edit the
values of variables, and debug stored procedures running on the server platform.
You can enter debug mode when the execution of your project is suspended by a run-time error or by
a breakpoint you have set.
If execution has stopped because of a breakpoint, you can step through your code in the Code Editor
window and examine the values of variables and expressions through the Inspector window. If
execution has stopped because of a run-time error, you cannot continue to step through your code, but
you can use the Inspector window to examine your data. In debug mode, you cannot edit your code in
the Code Editor Window.
When you run your application, VAB stops executing the code at the breakpoint, enabling you to step
through your code to inspect it and the values of your variables. Your source code is displayed, and a
little hand points to the line of code where the debugger is actually waiting. Once VAB encounters a
breakpoint, you can:
• Continue to the next breakpoint.
• Step to the next statement.
• Step over the next subprocedure call.
16.3.2.2 Watch: A watchpoint is a variable or expression that you want the system to monitor. To
add a watchpoint to the Watch area, click on the variable in your Code Editor window. Then click on
Add Watch on the Debug pull-down menu. Figure 171 shows that we added the Employee variable to
the watch area.
When the application enters a break mode, the system displays the values of the watchpoints in the
Value column of the Inspector window. At every debugging step, the values of the watchpoints are
refreshed. You can also use the Quickwatch window to examine the values of variables that are not
listed in the watch area. To do so, place your cursor on a variable in the Code Editor Window and
select Immediate Watch from the Debug pull-down menu.
16.3.2.3 Immediate Window: The Immediate window provides direct access to the VAB
interpreter. Thus you can enter commands and execute statements directly while the execution of
your code is suspended. You also have access to all of the variables in your current program. For
example, Figure 172 on page 343 shows how we used the Immediate window to assign values to the
Salary and EmployeeName variables by entering BASIC assignment statements.
To enable the remote debugger, check the Enable remote debug option on the Stored Procedure
Definition window shown in Figure 167 on page 337 when you are building the stored procedure.
Once the stored procedure is built with this option, the call to the stored procedure displays a Project
window and a Code Editor window to show the stored procedure code at the server. You can then
debug your server code as described in the previous sections. When your server code is bug free,
build it again without checking the Enable remote debug option, so the remote debugger is not invoked
the next time the server procedure is called.
To test your code with a GUI client application, use the VAB QuickTest dialogs. Figure 173 on
page 344 shows an example of a QuickTest dialog.
You can initialize QuickTest dialogs with your own parameters and modify them through your VAB
code. For each dialog you can specify:
• The number of buttons on the dialog (up to four)
• Which button is the default
• Code executed when a user clicks on a specific button
• Which bitmap, if any, to display
• A background bitmap
• The title, size, and position of the form
• Labels for each text box, grid, and button
• Whether the form is modal or nonmodal
Four QuickTest dialogs are available; the dialog shown in Figure 174 enables you to specify a fixed
number (from one to the limit your screen can hold) of input and output fields.
To facilitate your use of the QuickTest dialogs, VAB provides the QuickTest SmartGuide as shown in
Figure 175 on page 345. This tool guides you through the steps required to generate a test application
that uses QuickTest dialogs.
You can test your stored procedures without QuickTest dialogs by using the remote debugger,
temporarily inserting statements into the code to print values to a file, or using input boxes to gather
input and message boxes to display results. You can also test the stored procedures from your own
client application. However, the QuickTest dialogs offer the simplest method of testing your code with
a GUI.
For the latest news about VAB, go to the VAB home page on the Internet:
http://www.software.ibm.com/data/db2/databasic
This redbook ships with a diskette containing sample programs developed during this project. The
sample programs illustrate the theory discussed in this redbook and are useful for getting started with
stored procedures in your own environment and gaining some hands-on experience.
In the samples provided with this redbook, we use one naming convention for client programs and
another for stored procedures. Both are defined below.
where
L is the Language:
For a client program, the first L represents the language of the stored procedure that the client
program calls, and the second L identifies the language in which the client program was written. For a
stored procedure, there is only one L, and it represents the language in which the stored procedure
was coded.
E is the environment:
M - MVS and/or OS/390
N - Windows NT
W - WLM-established stored procedure address space
X - AIX
2 - OS/2
9 - Win95
The client program also has two environment identifiers: the first represents the environment of the
stored procedure that the client calls, and the second represents the environment in which the client
was coded. A stored procedure sample has one E, which represents the environment in which the
stored procedure executes.
For example, PR0C2CCN is a client, uniquely identified as PR0, that is written in C for Windows NT
and calls a stored procedure named PR0C2S, written in C for OS/2.
To use the debugger, xldb, specify compiler option -g. If you are using the makefile from DB2
Universal Database V5 samples, you can update:
# the required compiler flags
CFLAGS= -I$(DB2INSTANCEPATH)/sqllib/include -g
A.2.1.1 C Sample Clients on AIX: To use the sample programs from this redbook, first you
need to make sure bldxlc is available, or copy it to the working directory.
To build the client program for database sample2, use the following build script:
bldxlc pr0c2ccx sample2 userid password
A.2.1.2 C Sample Stored Procedures on AIX: Copy the file bldxlcsrv from the
sqllib/samples/c directory of your DB2 instance.
To execute the client, make sure the server is started and that you can connect to the server.
A.2.2.2 CLI Sample Stored Procedures on AIX: The AIX CLI sample clients call existing
stored procedures from this redbook; we do not provide additional sample stored procedures on
AIX. Figure 126 on page 237 shows how client sr1 interacts with other stored procedures.
For more information, consult the README file in sqllib/samples/cli that comes with DB2 Universal
Database V5.
On AIX, your application file can have any file extension. You can run your application using either of
the following two methods:
1. At the shell command prompt, enter rexx name where name is the name of your REXX program.
2. If the first line of your REXX program contains a “magic number” (#!) that identifies the directory
where the REXX/6000 interpreter resides, you can run your REXX program by entering its name at
the shell command prompt. For example, if the REXX/6000 interpreter file is in the /usr/bin
directory, include the following as the very first line of your REXX program:
#! /usr/bin/rexx
Then, make the program executable by entering the following command at the shell command
prompt:
chmod +x name
Run your REXX program by entering its file name at the shell command prompt.
REXX sample programs are in the directory sqllib/samples/rexx. To run the sample REXX program
updat.cmd, do one of the following:
For further information on REXX and DB2, refer to the Embedded SQL Programming Guide , Chapter 13,
“Programming in REXX.”
The OS/2 samples that come with this redbook mostly derive from the DB2 Common Server samples.
Please consult Building Applications for Windows and OS/2 Environments for further information.
The *.mak files that come with this redbook were written for CSet++. You can use nmake against
these *.mak files. For UDB we used the bld*.cmd files that come with the product instead of the *.mak
files.
To use the sample programs from this redbook, first you need to:
1. Copy all the files from \os2\c directory on the diskette provided with this book to a working
directory local to your machine. For example,
copy a:\os2\c\*.* c:\working
2. Copy the files util.h and util.c from the samples\c directory of your DB2 instance (typically
\sqllib\samples\c) to the working directory. These provide common utilities such as error
checking and printing SQLDA.
A.3.1.1 C Sample Clients on OS/2: Before a client program can call a stored procedure,
complete the following steps:
1. Copy the file bldvaemb.cmd from the %DB2PATH%\samples\c directory.
2. Catalog the database where the stored procedure resides.
Next, create the stored procedure dynamic link library, for example:
bldvastp pr0c2s sample db2v5 db2v5
Do not confuse our samputil.c and samputil.h with the UDB samputil.c and samputil.h. If you are
using the UDB version of samputil.c, you will get two unresolved externals: CHECK_DBC,
CHECK_STMT.
Our CLI samples derive from DB2 Common Server V2. We tested the samples using DB2 Universal
Database V5. To use the samples provided with this redbook, copy all the files from the \os2\cli
directory on the diskette provided with this book to a working directory local to your machine.
These samples are designed to show how to do certain tasks. For example, the error handling in mr3*
and mr4* needs to be strengthened; an invalid SQL statement causes looping. Before putting these
programs into a production environment, more work is needed to make them robust.
Note: For further information, please consult Building Applications for Windows and OS/2
Environments .
A.3.2.1 CLI Sample Clients on OS/2: If you are starting over from scratch and are reading the
book Building Applications for Windows and OS/2 Environments , ensure that you are using the example
in the book for VisualAge C/C++ and not the sample in the samples directory. The sample file
clibld.cmd in the samples directory is not for VisualAge C/C++.
Each of the sample clients comes with a *.mak file. To build the client program, update the *.mak file as
necessary to choose between ilink /NOFREE (for VisualAge C/C++) and link386 (C/Set). Use:
nmake /a /f program.mak
to create the client program.
As an alternative, we also added a new cmd file, bldvacli.cmd, to build CLI applications using
VisualAge C/C++. The source for this is:
/* rexx */
parse arg pgmname .
address cmd
"icc -c -Ls+ samputil.c"
"icc -C+ -O- -Ti+ -Ls+" pgmname".c"
"ilink /NOFREE /NOI /DEBUG /ST:128000 /PM:VIO" ,
pgmname".obj samputil.obj,,,db2cli;"
exit rc
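The same icc and ilink invocations could also be captured in an nmake description file. The following is a hypothetical sketch, not one of the shipped *.mak files; the program name myprog is illustrative, and the flags are copied from bldvacli.cmd above:

```
# Hypothetical nmake makefile for a CLI client built with VisualAge C/C++.
# Flags mirror bldvacli.cmd; substitute your own program name for myprog.
PROG = myprog

$(PROG).exe: $(PROG).obj samputil.obj
        ilink /NOFREE /NOI /DEBUG /ST:128000 /PM:VIO $(PROG).obj samputil.obj,,,db2cli;

$(PROG).obj: $(PROG).c
        icc -C+ -O- -Ti+ -Ls+ $(PROG).c

samputil.obj: samputil.c
        icc -c -Ls+ samputil.c
```

Invoke it with nmake /a /f myprog.mak, as for the shipped makefiles.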
A.3.2.2 CLI Sample Stored Procedures on OS/2: Each of the sample stored procedures
comes with a *.mak file. To build the stored procedure, update the *.mak file as necessary to choose
between ilink /NOFREE (for VisualAge C/C++) and link386 (C/Set). Use nmake /a /f program.mak to
create the stored procedure.
A.3.3.1 COBOL Sample Client Program on OS/2: Before a client program can call a stored
procedure, complete the following steps:
1. Copy all the files from \os2\cobol directory on the diskette provided with this book to a working
directory local to your machine.
For example,
copy a:\os2\cobol\*.* c:\working
2. Copy the files bldvaemb.cmd and checkerr.cbl from the samples\cobol directory of your DB2
instance to the working directory.
3. Catalog the database where the stored procedure resides.
Two files, *.sqp and *.def, are required for a stored procedure. To create the stored procedure
dynamic link library, enter the following:
bldicobs pr0b2s sample db2v5 db2v5
Table 32 (Page 1 of 3). Description of the Programs Used in the OS/2 Environment.
Legend: N=Simple with NULLS, S=Simple, D=SQLDA, H=Host Variables, R(n)=With n Result Sets
Sample Name Features Description Possible
partners
dd1c2cr2 HN Used as a simple test for measuring performance. See dd1c2s
Table 28 on page 302. Each loop inserts 10 presidents
into the PRESIDENTS table. Invoke this client program
using:
dd1c2cr2 dbname uid pwd #_of_loops
dd1c2s HN Used as a simple test for measuring performance. See dd1c2cr2
Table 28 on page 302. Each invocation inserts 10 rows
into a table. DLL is removed from memory after each
invocation. It also writes a small debug file, stproc.dat.
imdbmcb2 SH This is a sample MVS batch client program that invokes IMSBMS
the IMDBMS stored procedure, passing it two parameters,
the IMS IVP transaction and its data. The data is returned
in two parameters, a counter for the number of lines
returned and a varchar field with the lines. The IMSBMS
stored procedure initiates an IMS transaction through
APPC.
pr0cxcc2 DH Renamed from the sample inpcli shipped with DB2 pr0cxs
Common Server V2 and DB2 Universal Database V5. This
client calls pr0cxs to insert three presidents′ names twice.
The first time it passes the names using host variables;
the second time, it uses the SQLDA.
Stored Procedure: To define a sample stored procedure on a Windows system, please ensure that the
following steps have been performed:
1. Copy all the files from \windows\c directory on the diskette provided with this book to a working
directory local to your machine.
For example,
copy a:\windows\c\*.* c:\working
2. Copy the file bldvastp.bat from the samples\c directory of your DB2 instance to the working
directory.
3. Create the sample database.
To create the stored procedure dynamic link library, enter the following:
bldvastp XXXXXXX sample db2v5 db2v5
Note: Replace XXXXXXX in the example above with a valid sample name. Two files, a .sqc file and a
.def file, are required for a stored procedure. Both are in s:\sg244693\samples97\os2\c. These samples are untested.
For further information, please consult the Building Applications for Windows and OS/2 Environments ,
S10J-8160-0. In our testing we installed IBM VisualAge C++ for Windows Version 3.0.
For further information, please consult Building Applications for Windows and OS/2 Environments. The
readme.txt file located in the sqllib\samples\ole directory of the DB2 instance is another source of
information for building applications and stored procedures with Visual Basic.
In our testing we installed Microsoft Visual Basic Version 5.0. We used the DB2 OLE Automation
Stored Procedure provided with the DB2 Software Developers Kit.
A.4.2.1 Visual Basic Sample Clients on Windows: To create a client application using
Visual Basic, complete the following steps to build the required modules:
1. Copy all the files from \windows\vbasic directory on the diskette provided with this book to a
working directory local to your machine.
For example,
copy a:\windows\vbasic\*.* c:\working
2. Copy the file ODBC32.BAS from the samples\ole\msvb directory of your DB2 instance to the
working directory.
3. Catalog the database where the stored procedure resides.
4. Make the Visual Basic program. The result is an executable of the same filename (s02vncvn.exe).
For example,
vb5 /make s02vncvn.vbp
A.4.2.2 PowerBuilder Sample Clients: Two PowerBuilder Version 5 sample clients are
included for reference only:
• irww.pbl demonstrates how a client calls a stored procedure using parameters.
• query3.pbl demonstrates using single and multiple result sets.
Both are from working applications. The server portion (the stored procedures) is not included. Refer
to 10.5, “PowerBuilder” on page 205.
A.4.2.3 Visual Basic Sample Stored Procedures on Windows: To create a stored
procedure using Visual Basic, complete the following steps to build the required modules:
1. Copy all the files from \windows\vbasic directory on the diskette provided with this book to a
working directory local to your machine. For example,
copy a:\windows\vbasic\*.* c:\working
2. Copy the file ODBC32.BAS from the samples\ole\msvb directory of your DB2 instance to the
working directory.
3. If you have the Microsoft C++ compiler Version 4.2 or higher, you can build the DB2 OLE Automation
Stored Procedure controller. If not, use the db2oastp.dll supplied with the samples in the Software
Developers Kit for DB2. Copy the file db2oastp.dll to the function directory of your DB2
instance. For example,
copy %db2path%\samples\ole\stpcntr\db2oastp.dll %db2path%\function
4. Create the sample database.
5. Connect to the sample database and create the STAFF2 table using the DDL from
\sqllib\samples\ole\msvb\staff2.ddl.
Stored Procedure: To define a sample stored procedure on a Windows system, ensure that the
following steps have been performed:
1. Copy all the files from \windows\cobol directory on the diskette provided with this book to a
working directory local to your machine.
For example,
copy a:\nt\cobol\*.* c:\working
2. Copy the file bldvacbs.cmd from the samples\cobol directory of your DB2 instance to the working
directory.
3. Create the sample database.
To create the stored procedure dynamic link library, enter the following:
bldicobs XXXXXXXX sample db2v5 db2v5
Note: Replace XXXXXXXX in the example above with a valid sample name. An untested sample file is
located in s:\sg244693\samples97\windows\cobol.
For further information, please consult the Building Applications for Windows and OS/2 Environments ,
S10J-8160-0. In our testing we installed IBM VisualAge for COBOL Standard, Version 2.1.
3. Use the TSO RECEIVE INDA( ) command to create the PDSs. You can do it interactively or using
JCL. Here is sample JCL:
//SAMPBLD2 JOB (999,POK),'FTP',NOTIFY=&SYSUID.,
// CLASS=A,MSGCLASS=T,REGION=5000K,
// MSGLEVEL=(1,1)
//*-------------------------------------------------------------------
//*$ Sample JCL RECEIVE the SG24-4693 stored procedures samples
//*
//* Make a backup copy of the samples files on the PC, then tailor
//* this JCL and submit it.
//*
//* 1/ Transfer all the unzipped files as binary, sequential (dsorg=ps)
//* recfm(fb) lrecl(80) blksize(3200) using FTP or PComm (for
//* example) to the host from your PC. In this example, they
//* are uploaded to the host as 'DB2RES3.*.UNLBIN' files
//*
//* PComm will use
//* IND$FILE PUT JCL.UNLBIN RECFM(F) LRECL(80) BLKSIZE(3200)
//*
//* 2/ The PDSs that will be rebuilt are all prefixed with
//* userid.SAMPLES.OUT.*
//*-------------------------------------------------------------------
//REBUILD EXEC PGM=IKJEFT01,
//* COND=(00,NE).
// DYNAMNBR=25
//SYSTSPRT DD SYSOUT=*,DCB=(RECFM=VBA,LRECL=255,BLKSIZE=259)
//SYSTSIN DD *
/*
4. Recreate other necessary data sets. We used the following data sets in our test environment,
which you need to replicate:
our name Dsorg Recfm Lrecl Blksz
-------------------------------------------------------------
SG244693.SAMPLES.DBRMLIB PO-E FB 80 32720
SG244693.SAMPLES.H PO FB 80 27920 S
SG244693.SAMPLES.JCL PO FB 80 27920 S
SG244693.SAMPLES.LINK PO FB 80 27920 S
SG244693.SAMPLES.LOAD.SPAS PO U 0 6144
SG244693.SAMPLES.LOAD.WLM PO U 0 6144
SG244693.SAMPLES.OBJ PO-E FB 80 32720
SG244693.SAMPLES.SOURCE PO FB 80 27920 S
SG244693.SAMPLES.SQL PO FB 80 27920 S
S denotes PDSs that are delivered with this redbook.
5. Update JCL for stored procedure address spaces
For our project we had three stored procedure address spaces:
DBC1SPAS - DB2-established stored procedure address space
DBC1WLMM - WLM-established stored procedure address space
DBC1WLM2 - WLM-established stored procedure address space
The WLM-established stored procedure address spaces have the same JCL: they were set up for
convenience to isolate testers/programs. You may or may not want to follow how we set it up.
6. Tailor sample JCL.
Update SG244693.SAMPLES.JCL(@DEFAULT) with your installation defaults. A.6, “Using Sample
JCL Procedures” on page 368 describes this in greater detail.
Warning
If you recreate the JCL library with a name other than SG244693.SAMPLES.JCL, you must update the
JCLLIB statement in each run job
// JCLLIB ORDER=SG244693.SAMPLES.JCL
to point to your new JCL library.
/*
//
//*-------------------------------------------------------------------
//*$ REMOVE TEMPORARY FILES FROM PREVIOUS FTP/RECEIVE
//*-------------------------------------------------------------------
//DELETE EXEC PGM=IDCAMS,COND=(00,NE)
//AMSDUMP DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//LISTING DD SYSOUT=*
//SYSIN DD *
DELETE DB2RES3.SAMPLES.OUT.H
DELETE DB2RES3.SAMPLES.OUT.JCL
DELETE DB2RES3.SAMPLES.OUT.LINK
Figure 178 (Part 3 of 4). Sample JCL sampbld.jcl to Rebuild Sample Data Sets
We used sample JCL procedures from each of the products, such as DB2, C/C++, PL/I and COBOL.
The following JCL procedures are supplied as a starting point. (Their names are different from those
of the IBM-supplied sample JCL procedures to avoid clashes.)
1. We use JCLLIB to identify which JCL procedure libraries to search.
2. We use SET statements to set symbolic parameters used in sample JCL procedures.
3. Where symbolic parameters are related to the setup of the environment (such as data set
qualifiers) as opposed to the conditions (such as compile options), these parameters will be put
into a centralized defaults data set member. This is to simplify implementation.
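A centralized defaults member of this kind might look like the following hypothetical sketch. The data set names are patterned on the SG244693.SAMPLES.* names used in this appendix, and the symbolic parameter names (DB2HLQ, SRCLIB, and so on) are illustrative, not the ones shipped in @DEFAULT:

```
//*------------------------------------------------------------------
//*$ Hypothetical defaults member: installation-wide symbolics
//*------------------------------------------------------------------
// JCLLIB ORDER=SG244693.SAMPLES.JCL          search our procedure library
// SET DB2HLQ=DSN510                          DB2 data set qualifier
// SET SRCLIB=SG244693.SAMPLES.SOURCE         application source
// SET DBRMLIB=SG244693.SAMPLES.DBRMLIB       DBRMs from precompile
// SET LOADLIB=SG244693.SAMPLES.LOAD.SPAS     stored procedure load library
```

Keeping such SET statements in one member means each run job needs only a single INCLUDE-style tailoring point when data set qualifiers change.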
1 You must tailor the symbolics for the DB2 section. If you are not using CLI, you will not be running
the CLI-related JCL procedures and can therefore leave them out. The same applies to data set
names related to C.
2 Tailor this section to reflect the data set names for your application.
3 This section is compulsory because stored procedures must run under LE. We assume all languages
run under the same release of LE. If this is not true, you will probably use a variety of stored
procedure address spaces to separate the different environments.
4 This section is needed only if you have PL/I installed and you are going to use it.
5 This section is needed only if you have C/C++ installed and you are going to use it.
6 This section is needed only if you have COBOL installed and you are going to use it.
Table 34 shows the JCL procedures used from the samples in this redbook.
A.7 C/C++ Programs
OS/390 Version 2 Release 4 introduced interprocedural analysis (IPA) linking. If you plan
to use IPA linking, you will need the SCEELKEX data set in your link step.
The samples are classified into different “features” for ease of reference:
Refer to 11.11, “Running the CLI Sample Stored Procedure” on page 235 for a detailed description of
how to prepare and run the APD29/SPD29 sample.
Table 39. Description of the REXX Programs Used for APPC/IMS Verification.
Sample Name Features Description Possible
partners
SJXPART1 n/a APPC/IMS sample from Client/Server Computing with n/a
IMS/ESA Using APPC , which forms the basis for our APPC
work. Refer to 13.5, “APPC to Access Transactions from a
Stored Procedure” on page 292.
SJXPART2 n/a APPC/IMS sample from Client/Server Computing with n/a
IMS/ESA Using APPC , which forms the basis for our APPC
work. Refer to 13.5, “APPC to Access Transactions from a
Stored Procedure” on page 292.
This appendix is based on one of a series of technical reports dealing with performance prediction and
capacity planning issues for the DB2 family of products operating in a client/server configuration. The
goal of these technical reports is to help the customer:
• Assess proposed configurations for acceptable performance.
• Assess performance of current configurations.
• Choose proper hardware capacity for a proposed configuration.
• Choose the proper software programs for a proposed configuration.
• Tune a new or current configuration.
We present measurements taken in a DRDA environment where the application server is DB2 for
MVS/ESA and the gateway products used are DDCS for OS/2 and DDCS for AIX. This report is not
intended to compare gateway products and should not be used for that purpose. Also, the hardware
used for DDCS for OS/2 and the hardware used for DDCS for AIX are neither comparable nor
equivalent. We provide two configurations as examples. Further analysis is required when you use
configurations different from those used in this report.
We also include some general considerations that you can use to help plan, tune, and analyze your
configuration.
The information in this appendix is also available in a slightly different format in the following sources:
• IBM personnel can request the following packages by using the DB2INFO EXEC or through
MKTTOOLS:
− DD2MVS, the report using DDCS for OS/2 as the gateway product
− DD6MVS, the report using DDCS for AIX as the gateway product
A report using DDCS for OS/2 as the gateway and DB2 for OS/400 as the application server is also
available in the DD2400 package.
• Home page on the World Wide Web:
http://www.csc.ibm.com/advisor/library/
Performance measurement and capacity planning are ongoing projects of the authors of this appendix,
so you can expect new reports in the sources cited above.
Figure 179 on page 378 shows the configuration for the measurements when DDCS for OS/2 was used
as the gateway.
Figure 180 on page 379 shows the configuration for the measurements when DDCS for AIX was used
as the gateway.
The components in this configuration are monitored by means of the following programs:
• For the MVS platform
− MVS: Resource Measurement Facility (RMF)
− DB2 for MVS/ESA: DB2 Accounting and Statistics traces
• 3745: NetView Performance Monitor (NPM)
• OS/2: System Performance Monitor/2 (SPM/2)
• AIX: vmstat
• Clients: Internal tools to measure elapsed time and transactions per second
• LAN: DatagLANce Network Analyzer for Ethernet and Token-Ring for OS/2
The IBM Relational Warehouse Workload (IRWW) is used for the measurements. IRWW is a simulated
product warehouse, customer order, and order delivery system.
A series of measurements is performed, with each subsequent measurement providing an increased
workload to the configuration. All measurements include six PowerBuilder clients running a
predefined set of transactions in random order without modification for the entire series of
measurements. The PowerBuilder clients provide the end-user perspective of the effects of increasing
the workload in the configuration. The workload is increased each run by increasing the number of
sessions on the two OS/2 workload generators. The OS/2 workload generators run the exact same
transaction mix as the six PowerBuilder clients. The only difference is the management of screen I/O
at the client. The PowerBuilder clients simulate a real application, whereas the OS/2 workload
generators do not perform screen I/O.
These transactions are described in greater detail in B.3.5, “Transactions” on page 389.
For the configurations in this report, some tables and indexes overlap on the same disk drives. In some
cases, such as Tables T2 and T3, the tables are 100% preloaded into memory, so the I/O contention on
those drives is reduced.
This section describes the results of the measurements. The effects on each component in the
configuration are described as well as the performance of each transaction. The component results
can help you understand hardware capacity issues. The transaction results can help you understand
issues in predicting the performance that end users perceive.
The results are presented in relation to the overall throughput achieved at each workload level. This is
described as transactions per second (TPS). As the workload increases, the throughput increases.
The resultant effects on each component are then presented based on the current achieved
throughput. For example, when achieving 80 TPS:
• The 3745 running the network control program (NCP) is 81% utilized.
• The transfer rate through the 3745 running the NCP is 71,380 bytes per second.
• The PC500 running DDCS for OS/2 is 79% utilized.
The process of tuning the application and database is ongoing. These measurements achieved a
throughput rate of 89.34 TPS. At this point, several factors begin to show signs of stress. Primarily,
high levels of locking and latching occur. Also, the device activity rate and the average response time
accessing disks 36-43 increase. These disks are 3380-E, and they contain 8 of the 16 partitions for
Table T7 and Table T9. The other 8 partitions are on 3390-3 disks, which are faster, and the device
rate and average response time reflect this.
Multiple iterations with tuning occurred before these results were achieved. The results are probably
not the absolute best, but to continue tuning and pushing for the absolute limits is not the goal of this
project. Other benchmarking projects deal with those goals. The goal of this project is to provide a
reasonably tuned environment. An important point to note is that the tuning of DB2 for MVS/ESA in
this client/server configuration should be the same as tuning DB2 for MVS/ESA in a local configuration.
This is true because of the use of stored procedures.
Table 43 shows the number of clients activated to achieve the throughput defined in column 1.
Column 2 shows the number of PowerBuilder clients (always 6). Column 3 shows the number of OS/2
clients. This number is increased by 10 for each measurement run. Column 4 shows the actual
number of clients, which is the sum of columns 2 and 3. Column 5 shows the estimated number of
PowerBuilder clients. This column is derived by evaluating the number of equivalent PowerBuilder
clients that each OS/2 client represents in terms of throughput:
---- ----
| |
| OS/2 client tps |
Estimated PowerBuilder Clients = |-------------------------- * (# OS/2 clients)| + (# PowerBuilder clients)
| PowerBuilder client tps |
| |
---- ----
If
OS/2 client tps = 1.476 tps
PowerBuilder client tps = .128 tps
# OS/2 clients = 60
# PowerBuilder clients = 6
then
---- ----
| |
| 1.476 tps |
Estimated PowerBuilder Clients = |----------- * 60 clients | + (6 clients)
| .128 tps |
| |
---- ----
= 698 clients
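The calculation above is simple enough to script for your own client counts. The following sketch is ours, not part of the measurement tooling; the function and variable names are illustrative:

```python
# A sketch of the estimate above; the function and variable names are ours.
def estimated_powerbuilder_clients(os2_client_tps, pb_client_tps,
                                   n_os2_clients, n_pb_clients):
    """Express OS/2 workload-generator sessions as equivalent
    PowerBuilder clients, based on relative per-client throughput."""
    return round(os2_client_tps / pb_client_tps * n_os2_clients
                 + n_pb_clients)

# Worked example from the text: 60 OS/2 clients and 6 PowerBuilder clients.
print(estimated_powerbuilder_clients(1.476, 0.128, 60, 6))  # -> 698
```

The same function reproduces the worked example above; the per-client throughput rates vary from run to run, so use the rates measured for the run you are evaluating.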
Table 43. Number of Clients in Relation to Aggregate Throughput: Transactions per Second (DDCS for OS/2)
Throughput (TPS)   PowerBuilder Clients   OS/2 Clients   Actual Clients   Est. PowerBuilder Clients
34.67              6                      10             16               293
66.57              6                      20             26               563
79.74              6                      30             36               646
86.54              6                      40             46               657
88.55              6                      50             56               697
89.34              6                      60             66               698
The values obtained in these measurements are based on the actual number of clients as reflected in
column 4 of Table 43. When evaluating the estimated number of PowerBuilder clients, other capacity
planning factors must be addressed. For example, DB2 for MVS/ESA V4.1 can handle up to 25,000
connections, with up to 1,999 active threads, and each client that passes through DDCS for OS/2 should
be allowed approximately 300 KB of RAM on DDCS for OS/2. These factors must be taken into
consideration when designing a configuration. One last point on the number of clients: the numbers
are based on the clients running without think time or keystroke time. So, if you factor in these
aspects, you could increase the actual number of clients to fill the capacity left available due to think
time and keystroke time.
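As a quick sanity check when designing a configuration, the quoted limits can be encoded and tested against a planned client population. This is a minimal sketch using the DB2 for MVS/ESA V4.1 figures above; the function name and the notion of an "active fraction" are our own illustration:

```python
# A first-pass limits check for a planned client population, using the
# DB2 for MVS/ESA V4.1 figures quoted above. Names and the active-fraction
# parameter are our own illustration.
MAX_CONNECTIONS = 25_000        # DB2 for MVS/ESA V4.1 connections
MAX_ACTIVE_THREADS = 1_999      # concurrently active threads

def fits_db2_limits(n_clients, active_fraction=1.0):
    """True if n_clients connections, with the given fraction concurrently
    active, stay within the quoted limits. The measurements here ran with
    no think time, so active_fraction=1.0 is the worst case."""
    return (n_clients <= MAX_CONNECTIONS
            and n_clients * active_fraction <= MAX_ACTIVE_THREADS)

print(fits_db2_limits(66))           # actual clients at 89.34 TPS -> True
print(fits_db2_limits(5_000, 0.25))  # 1,250 active threads -> True
```

Factoring in think time and keystroke time lowers the active fraction, which is how a much larger client population can fit within the active-thread limit.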
The PC500 running DDCS for OS/2 reaches CPU utilization of 90.04% at a throughput rate of 89.34 TPS.
The measurements that support this report do not support any conclusions about the upper bounds of
CPU utilization for a PC500 or at what point degradation becomes noticeable.
At a throughput rate of 89.34 TPS, the 9121-742 processor running DB2 for MVS/ESA reaches 77.33% CPU
utilization. The 9121-742 is capable of scaling up to 98% CPU utilization; therefore, excess processor
power is available.
Table 45 on page 386 shows the increase in RAM utilization for the PC500 as the number of clients
increases. The initial value of 33.126 MB for 16 clients should not be used as an indication of RAM
usage. This number is a function of the types and numbers of applications activated on the PC500. Treat
33.126 as the base number, which increases as clients are added. The average amount of RAM used
by a client is approximately 200 KB. Conservatively, plan for 300 KB for each additional client.
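The base-plus-increment approach described above can be expressed as a small projection. This sketch assumes the Table 45 base value and the conservative 300 KB planning figure; the function name is ours:

```python
# Projecting DDCS for OS/2 gateway RAM from the measured base; the
# function name is ours, and the inputs come from Table 45 and the
# conservative planning figure above.
BASE_MB = 33.126          # RAM measured with 16 clients (Table 45)
BASE_CLIENTS = 16
PLAN_KB_PER_CLIENT = 300  # conservative; the measured average was ~200 KB

def projected_ram_mb(n_clients):
    """Estimated RAM on the PC500 running DDCS for OS/2."""
    extra_clients = max(0, n_clients - BASE_CLIENTS)
    return BASE_MB + extra_clients * PLAN_KB_PER_CLIENT / 1024

print(f"{projected_ram_mb(66):.1f} MB")  # 66 clients -> 47.8 MB
```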
B.3.3 Network
The network plays a large part in how well the client/server application performs and in how much the
client/server applications affect the performance of the database. Delay is the biggest enemy, and how
large a role delay plays depends on the application. Every time a trip across the network is
required, if locks are held, they are held for the time it takes for the messages to move between the
client and the database. For these measurements, stored procedures are used, which has the benefit
of decreasing the number of SQL calls that must flow across the network as messages. Locks are held
only for the time to process the stored procedure and for one network roundtrip to issue a commit or
rollback. So, although delay does play a role in these measurements, it could play a larger role in
other environments.
The next two sections describe the effects on the NCP and LAN during the measurements. For this mix
of transactions, the message lengths are never greater than 1,200 bytes, so the following values are
set:
• 4 KB RUSIZEs in VTAM
• 4 KB RQRIOBLK definition in DDCS for OS/2
• 4 KB RQRIOBLK definitions in the OS/2 clients
• 4 KB DOS_RQRIOBLK definitions in the DOS clients
B.3.3.1 Network Control Program: Table 46 on page 387 shows the effects of the workload on
the 3745 running the NCP. The 3745 reached 89% utilization at a throughput rate of 89.34 TPS. The
data throughput in the NCP reached 79,873 bytes per second.
The DELAY parameters in the NCP play an important role in the performance of the NCP as well as the
response time observed at the clients. The greater the DELAY parameter value on the PCCU, HOST,
or GROUP LNCTL=CA parameters, the lower the CCU utilization, but the greater the response time at
the clients, especially at lower throughput.
For these measurements, the GROUP LNCTL=CA DELAY parameter is set to 0.1, and the PCCU and
HOST DELAY parameters are set to zero. This has the beneficial effect of controlling the CCU
utilization with little effect on response time at the clients at the higher levels of throughput. The lower
levels of throughput experience mildly higher levels of delay than when the GROUP LNCTL=CA DELAY
parameter is set to 0.
The NCP for the measurements was dedicated to the IRWW workload activity. Typically, an NCP is
doing other work, and this work should be factored into the capacity requirements for the NCP.
These results can be used to analyze the line capacity requirements between the LAN and the DB2 for
MVS/ESA host system. For these measurements, the connection is through a 4.5 MB-per-second channel,
which is underutilized. If this channel is replaced by a line with a speed of 56 Kb per second or 256 Kb
per second, the line limits the throughput.
Additional measurements using a 3174-61R attached to the LAN with a 56 Kb per second connection to
the 3745 achieved a data throughput rate of approximately 4847 bytes per second (see Figure 182 on
page 388). This is a 69% utilization of the line. At this utilization, the throughput rate reached
approximately 5 TPS. Although the line has more capacity, the increase in transactions per second
begins to level off. For this report, there is not enough information to identify why 69% is the
threshold. It could be due to such factors as message size, hardware constraints, or protocol
constraints. Additional research or measurements are needed to identify the cause.
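The figures above imply a per-transaction byte cost that can be used for a rough line-capacity estimate. The sketch below is our own back-of-the-envelope model; the 69% usable-utilization ceiling is taken from the 3174 measurement and may not transfer to other hardware or protocols:

```python
# A back-of-the-envelope throughput ceiling for a slower line. The 69%
# usable-utilization figure is taken from the 3174 measurement above and
# is an assumption, not a general rule.
def tps_ceiling(line_bps, usable_utilization, bytes_per_txn):
    """Transactions per second a line can carry for this workload."""
    return line_bps / 8 * usable_utilization / bytes_per_txn

# Per-transaction byte cost implied by the NCP data: 79,873 B/s at 89.34 TPS.
bytes_per_txn = 79_873 / 89.34                 # about 894 bytes
print(round(tps_ceiling(56_000, 0.69, bytes_per_txn), 1))  # -> 5.4
```

The estimate of about 5.4 TPS is consistent with the roughly 5 TPS observed on the 56 Kb-per-second line.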
B.3.3.2 LAN: Table 47 on page 389 shows the percentage of LAN utilization and byte rate as the
workload is increased. For these measurements, LAN capacity is not a limiting factor. The 16 Mb LAN
is lightly utilized. Typically, a production environment has more activity on the LAN. Use these
numbers to analyze the additional load introduced on the LAN due to the mix of transactions used in
this configuration.
Monitor your LAN to make sure it is not being overutilized. Overutilization of the LAN could
introduce delays that degrade database performance.
Two network flows are involved in each transaction. The first flow calls the stored procedure. The
second flow commits or rolls back the transaction. Locks acquired during stored procedure processing
are held until the commit or rollback is received. These locks are held for the amount of time it takes
to send the stored procedure response back to the client and for the client to process the response
and issue the commit or rollback. This delay results in locking contention at DB2 for MVS/ESA. This
contention becomes a major factor at higher levels of utilization, such as 89.34 TPS.
Secondarily, the average response time accessing 8 of the 16 partitions for Table T7 and Table T9 is
reaching levels that cause a delay because those partitions reside on slower disk drives (3380-E).
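The lock-holding behavior described above can be summarized in a simple model. This is our own illustration, not a formula from the measurements, and the sample values are invented:

```python
# A simplified model of lock hold time under the two-flow design: locks
# span the stored procedure plus one network round trip for the commit.
# The sample values below are invented for illustration.
def lock_hold_time_ms(proc_time_ms, network_rtt_ms, client_commit_ms):
    """Time locks are held: procedure execution, plus the round trip for
    the reply and the COMMIT/ROLLBACK, plus client processing time."""
    return proc_time_ms + network_rtt_ms + client_commit_ms

print(lock_hold_time_ms(proc_time_ms=40, network_rtt_ms=25,
                        client_commit_ms=10))  # -> 75
```

The model makes the trade-off visible: anything that lengthens the round trip or the client's commit processing lengthens the lock hold time and, at high throughput, the contention.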
B.3.5 Transactions
This section describes the transactions and the achieved response time at the PowerBuilder clients in
relation to the aggregate throughput. The response time duration starts when the end user submits
the transaction request—after the network connection is established and the end user has entered the
input data. The duration ends after the commit or rollback has been issued and all response data is
available to the end user (visible on the computer monitor).
In this section, graphs depict the point where 90% of the transactions complete within the identified
response time for the identified throughput. Figure 183 on page 390 shows the aggregate response
time of the transactions for the identified throughput.
Each procedure described in the subsections that follow performs different operations, which creates
various degrees of contention on the tables. The SQL calls are described in the order they are called
for each procedure. Locking contention is the primary factor for response time degradation, with
access to slower disk drives also becoming a factor.
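For reference, the 90th-percentile points plotted in the graphs can be derived from raw per-transaction response times as follows. This sketch is ours; the sample data is invented:

```python
# Deriving the 90th-percentile response time from raw per-transaction
# samples; the sample data below is invented.
def percentile_90(response_times):
    """Smallest response time that 90% of transactions meet or beat."""
    ordered = sorted(response_times)
    index = -(-len(ordered) * 9 // 10) - 1   # ceil(0.9 * n) - 1
    return ordered[index]

samples = [1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.2, 2.4, 2.6]
print(percentile_90(samples))  # -> 2.4
```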
B.3.5.1 Transaction Tx1: Transaction Tx1 is the most involved of the seven transactions. The
transaction calls a stored procedure that performs write operations against Tables T3, T4, T6, T8, and
T9. The stored procedure performs read operations against Tables T2, T3, T5, T7, and T8.
Index-matching predicates in the WHERE clauses optimize access to all tables. The stored procedure
has two input parameters with a byte count between 22 and 190. There are nine output parameters
with a maximum byte count of 713.
Here is a summary of the SQL calls in the stored procedure and the order in which they occur:
Select
T7
T2
Fetch and Update
T3
Insert
T4
T6
Fetch
T5
Loop up to 15 times
Fetch and Update
T8
Insert
T9
End loop
Figure 184 on page 391 shows the achieved response time for transaction Tx1. The vertical axis
represents the response time as seen by the PowerBuilder clients. The horizontal axis represents the
overall throughput rate for the aggregate of transactions. The solid line represents the point where 90%
of the Tx1 transactions achieved this response time or better.
Figure 184. Response Time for Transaction Tx1: DDCS for OS/2
Most Tx1 transactions complete between 1.4 and 2.6 seconds. Transaction Tx1 shows a response time
degradation at 80 TPS, which is attributed to lock contention on Table T3. Both the Tx3 and Tx6
transactions update Table T3. Transaction Tx1 also inserts into Table T9, and one-half of those rows are
resident on slower disk drives.
Table 48 shows the transactions that have potential lock contention with transaction Tx1 on the defined
tables.
Table 48. Potential Lock Contention for Transaction Tx1: DDCS for OS/2
Table Tx1 Tx2 Tx3 Tx4 Tx5 Tx6 Tx7
T2 Yes
T3 Yes Yes
T4 Yes Yes
T5 Yes
T6 Yes Yes
T7 Yes Yes
T8 Yes
T9 Yes Yes
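A contention table such as Table 48 can be derived mechanically from each transaction's read and write sets: two transactions potentially contend on a table when at least one of them writes a table the other touches. The sketch below is our own illustration; only Tx1's read/write sets are taken from the text above, and the Tx2 entry is built from its description in B.3.5.2:

```python
# Deriving a potential-contention table from read/write sets. Tx1's sets
# come from the description above; Tx2's come from B.3.5.2. The function
# and dictionary layout are our own.
def potential_contention(txns, subject):
    """Tables on which `subject` may conflict with each other transaction:
    at least one side writes a table that the other side touches."""
    writes, reads = txns[subject]["write"], txns[subject]["read"]
    conflicts = {}
    for name, sets in txns.items():
        if name == subject:
            continue
        shared = (writes & (sets["read"] | sets["write"])) | \
                 (sets["write"] & (reads | writes))
        if shared:
            conflicts[name] = sorted(shared)
    return conflicts

txns = {
    "Tx1": {"write": {"T3", "T4", "T6", "T8", "T9"},
            "read":  {"T2", "T3", "T5", "T7", "T8"}},
    "Tx2": {"write": set(), "read": {"T6", "T7", "T9"}},
}
print(potential_contention(txns, "Tx1"))  # {'Tx2': ['T6', 'T9']}
```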
B.3.5.2 Transaction Tx2: Transaction Tx2 calls a stored procedure that performs read operations
against Tables T6, T7, and T9. Index-matching predicates in the WHERE clause optimize access to
these tables. The stored procedure has one input parameter with a byte count of 26 and four output
parameters with a maximum byte count of 516.
Here is a summary of the SQL calls in the stored procedure and the order in which they occur:
Figure 185 shows the achieved response time for transaction Tx2. The vertical axis represents the
response time as seen by the PowerBuilder clients. The horizontal axis represents the overall
throughput rate for the aggregate of transactions. The solid line represents the point where 90% of the
Tx2 transactions achieved this response time or better.
Figure 185. Response Time for Transaction Tx2: DDCS for OS/2
Most Tx2 transactions complete between 1.0 and 1.2 seconds. Transaction Tx2 shows the beginning of
response time degradation at 89 TPS, which can be attributed to the fetch from Table T9, which is 50%
contained on slower disk drives. These drives are beginning to show higher response time due to
higher utilization rates.
Table 49 shows the transactions that have potential lock contention with transaction Tx2 on the defined
tables.
Table 49. Potential Lock Contention for Transaction Tx2: DDCS for OS/2
Table Tx1 Tx2 Tx3 Tx4 Tx5 Tx6 Tx7
T6 Yes Yes
T7 Yes Yes
T9 Yes Yes
Here is a summary of the SQL calls in the stored procedure and the order in which they occur:
For 60% of the time
Select
T7
Loop up to 4 times
Fetch from
T7
End loop
End 60% of the time
Fetch and Update
T7
Conditionally Fetch and Update
T2
T3
Insert
T1
Figure 186 shows the achieved response time for transaction Tx3. The vertical axis represents the
response time as seen by the PowerBuilder clients. The horizontal axis represents the overall
throughput rate for the aggregate of transactions. The solid line represents the point where 90% of the
Tx3 transactions achieved this response time or better.
Figure 186. Response Time for Transaction Tx3: DDCS for OS/2
Most Tx3 transactions complete between 1 and 2.75 seconds. Transaction Tx3 shows a gradual
response time degradation at 70 TPS and a greater degradation at 89 TPS. This is attributed to
requiring exclusive locks on Tables T2, T3, and T7. Both Tables T2 and T3 are small. The Tx7
transaction updates Table T7. The Tx1 transaction updates Table T3 and reads Tables T2 and T7. The
Tx2 transaction reads Table T7.
Table 50 on page 394 shows the transactions that have potential lock contention with transaction Tx3
on the defined tables. Both Tables T2 and T3 are small, so the contention potential is great.
B.3.5.4 Transaction Tx4: Transaction Tx4 calls a stored procedure that performs a write
operation against T5. Index-matching predicates in the WHERE clause optimize access to this table.
The stored procedure has one input parameter with a byte count of 8 and three output parameters with
a maximum byte count of 99.
Figure 187 shows the achieved response time for this transaction. The vertical axis represents the
response time as seen by the PowerBuilder clients. The horizontal axis represents the overall
throughput rate for the aggregate of transactions. The solid line represents the point where 90% of the
Tx4 transactions achieved this response time or better.
Figure 187. Response Time for Transaction Tx4: DDCS for OS/2
Most Tx4 transactions complete in less than 0.8 second. The response time degradation is minimal for
the scope of the measurements. Tx4 updates Table T5, which is 100% preloaded into memory. The
disk drive that contains the log shows some high-level utilization, but the average access time is still
low because of the speed of the disk drive. The high level of utilization at the greater throughput rates
can account for the slight degradation in response time. Transaction Tx4 runs 1% of the time;
therefore, the sampling is low, which can result in sampling anomalies.
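The observation that the log drive's average access time stays low despite high utilization is consistent with the classic single-server queueing relationship, in which response time grows sharply only as utilization approaches saturation. The following is a simplified M/M/1-style sketch of our own, not a result from the measurements:

```python
# A simplified M/M/1-style response-time curve: response time stays near
# the raw service time until utilization approaches 100%. Our own model,
# with illustrative numbers.
def avg_response_ms(service_ms, utilization):
    """Average response time for a single-server queue."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_ms / (1 - utilization)

for u in (0.3, 0.6, 0.9):
    print(f"{u:.0%} busy: {avg_response_ms(5.0, u):.1f} ms")
```

On a fast drive the service time itself is small, so even fairly high utilization leaves the average access time low, as observed for the log drive.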
Table 51 on page 395 shows the transaction that has potential lock contention with transaction Tx4 on
the defined tables.
B.3.5.5 Transaction Tx5: Transaction Tx5 calls a stored procedure that performs read operations
against Table T5. Index-matching predicates in the WHERE clause optimize access to this table. The
stored procedure has one input parameter with a byte count between 8 and 120. It has seven output
parameters with a maximum byte count of 590.
Here is a summary of the SQL calls in the stored procedure and the order in which they occur:
Loop up to 15 times
Select
T5
End loop
Figure 188 shows the achieved response time for this transaction. The vertical axis represents the
response time as seen by the PowerBuilder clients. The horizontal axis represents the overall
throughput rate for the aggregate of transactions. The solid line represents the point where 90% of the
Tx5 transactions achieved this response time or better.
Figure 188. Response Time for Transaction Tx5: DDCS for OS/2
The Tx5 transactions complete in less than 1.0 second. Tx5 shows little change in response time.
Table T5 is the only table that Tx5 accesses. There is neither lock contention nor I/O contention
against this table because it is large and 100% loaded into memory.
Table 52 shows the transaction that has potential lock contention with transaction Tx5 on the defined
table.
Table 52. Potential Lock Contention for Transaction Tx5: DDCS for OS/2
Table Tx1 Tx2 Tx3 Tx4 Tx5 Tx6 Tx7
T5 Yes
Here is a summary of the SQL calls in the stored procedure and the order in which they occur:
Select
T3
Select
T8 joined with T9
Figure 189 shows the achieved response time for this transaction. The vertical axis represents the
response time as seen by the PowerBuilder clients. The horizontal axis represents the overall
throughput rate for the aggregate of transactions. The solid line represents the point where 90% of the
Tx6 transactions achieved this response time or better.
Figure 189. Response Time for Transaction Tx6: DDCS for OS/2
The Tx6 transactions complete between 0.65 and 1.5 seconds. Tx6 shows an increase in response time
at about 80 TPS. This can be attributed to locking contention on Table T3 and increased disk utilization
on 50% of Table T9. Transactions Tx1 and Tx7 perform updates on Table T3.
Table 53 shows the transactions that have potential lock contention with transaction Tx6 on the defined
tables.
Table 53. Potential Lock Contention for Transaction Tx6: DDCS for OS/2
Table Tx1 Tx2 Tx3 Tx4 Tx5 Tx6 Tx7
T3 Yes Yes
T8 Yes
T9 Yes Yes
Here is a summary of the SQL calls in the stored procedure and the order in which they occur:
Loop up to 10 times
Fetch and Delete
T4
Fetch and Update
T6
Update
T9
Select
T9
Update
T7
End loop
Figure 190 shows the achieved response time for this transaction. The vertical axis represents the
response time as seen by the PowerBuilder clients. The horizontal axis represents the overall
throughput rate for the aggregate of transactions. The solid line represents the point where 90% of the
Tx7 transactions achieved this response time or better.
Figure 190. Response Time for Transaction Tx7: DDCS for OS/2
The Tx7 transactions complete between 0.7 and 1.5 seconds. Tx7 shows a gradual response time
degradation throughout the scope of the measurements. Since transaction Tx7 needs exclusive locks
to Tables T4, T6, T7, and T9, as the throughput increases, the response time should degrade. These
are all large tables, so the contention should not be drastic.
Table 54 on page 398 shows the transactions that have potential lock contention with transaction Tx7
on the defined tables.
This section describes the results from the measurements. The effects on each component in the
configuration are described as well as the performance of each transaction. The component results
can help you understand hardware capacity issues. The transaction results can help you understand
issues in predicting the performance that end users perceive.
The results are presented in relation to the overall throughput achieved at each workload level. This is
described as TPS. As the workload increases, the throughput increases. The resultant effects on each
component are then presented based on the current achieved throughput. For example, when
achieving 57.1 TPS:
• The 3745 running the NCP is 74% utilized.
• The transfer rate through the 3745 running the NCP is 50,922 bytes per second.
• The C10 running DDCS for AIX is 97% utilized.
The process of tuning the application and database is ongoing. These measurements achieved a
throughput rate of 57.1 TPS. At this point, several factors are beginning to show signs of stress.
Primarily, there are CPU constraints on the C10 running DDCS for AIX. The CPU utilization on the C10
reaches 97%. Secondarily, high levels of locking and latching occur. Also, the device activity rate
and the average response time accessing disks 36-43 both increase. These disks are 3380-E, and they
contain 8 of the 16 partitions for Tables T7 and T9. The other 8 partitions are on 3390-3 disks,
which are faster, and the device rate and average response time reflect this.
Multiple iterations with tuning occurred before achieving these results. The results are probably not
the absolute best, but to continue tuning and pushing for the absolute limits is not the goal of this
project. Other benchmarking projects deal with those goals. The goal of this project is to provide a
reasonably tuned environment. An important point to note is that the tuning of DB2 for MVS/ESA in
this client/server configuration should be the same as tuning DB2 for MVS/ESA in a local configuration.
This is true because of the use of stored procedures.
Table 55 on page 399 shows the number of clients activated to achieve the throughput defined in
column 1. Column 2 shows the number of PowerBuilder clients (always 6). Column 3 shows the
number of OS/2 clients. This number is increased by 10 for each measurement run. Column 4 shows
the actual number of clients, which is the sum of columns 2 and 3. Column 5 shows the estimated
number of PowerBuilder clients. This column is derived by evaluating the number of equivalent
PowerBuilder clients that each OS/2 client represents in terms of throughput:
If
OS/2 client tps = 1.41 tps
PowerBuilder client tps = .128 tps
# OS/2 clients = 40
# PowerBuilder clients = 6
then
---- ----
| |
| 1.41 tps |
Estimated PowerBuilder Clients = |----------- * 40 clients | + (6 clients)
| .128 tps |
| |
---- ----
= 447 clients
Table 55. Number of Clients in Relation to Aggregate Throughput: Transactions per Second (DDCS for AIX)
Throughput (TPS)   PowerBuilder Clients   OS/2 Clients   Actual Clients   Est. PowerBuilder Clients
24.53              6                      10             16               179
43.67              6                      20             26               340
53.68              6                      30             36               402
57.10              6                      40             46               447
The values obtained in these measurements are based on the actual number of clients as reflected in
column 4. When evaluating the estimated number of PowerBuilder clients, other capacity planning
factors must be addressed. For example, DB2 for MVS/ESA V4.1 can handle up to 25,000 connections,
with up to 1,999 active threads, and each client that passes through DDCS for AIX should be allowed
approximately 500 KB of RAM on DDCS for AIX. These factors must be taken into consideration when
designing a configuration. One last point on the number of clients: the numbers are based on the
clients running without think time or keystroke time. So, if you factor in these aspects, you could
increase the actual number of clients to fill the capacity left available due to think time and keystroke
time.
The C10 running DDCS for AIX reaches CPU utilization of 97% at a throughput rate of 57.1 TPS. The
C10 is identified as the primary bottleneck that prevents additional throughput.
At a throughput rate of 57.1 TPS, the 9121-742 processor reaches 48.57% CPU utilization. The 9121-742 is
capable of scaling up to 98% CPU utilization; therefore, excess processor power is available.
Table 57 shows the increase in RAM utilization for the C10 as the number of clients increases. The
initial value of 75.54 MB for 16 clients should not be used as an indication of RAM usage. This number
is a function of the types and numbers of applications activated on the C10. Treat 75.54 as the base
number, which increases as clients are added. The average amount of RAM used per client is
approximately 450 KB. Conservatively, plan for 500 KB for each additional client.
The next two sections describe the effects on the NCP and LAN during the measurements. For this mix
of transactions, the message lengths are never greater than 1,200 bytes, so the following values are
set:
• 4 KB RUSIZEs in VTAM
• 4 KB RQRIOBLK definition in DDCS for AIX
• 4 KB RQRIOBLK definitions in the OS/2 clients
• 4 KB DOS_RQRIOBLK definitions in the DOS clients
B.4.3.1 Network Control Program: Table 58 shows the effects of the workload on the 3745
running the NCP. The 3745 reached 74% utilization at a throughput rate of 57.1 TPS. The data
throughput in the NCP reached 50,922 bytes per second.
The DELAY parameters in the NCP play an important role in the performance of the NCP as well as the
response time observed at the clients. The greater the DELAY parameter values on the PCCU, HOST,
or GROUP LNCTL=CA parameters, the lower the CCU utilization, but the greater the response time at
the clients, especially at lower throughput.
For these measurements, the GROUP LNCTL=CA DELAY parameter is set to 0.1 and the PCCU and
HOST DELAY parameters are set to zero. This has the beneficial effect of controlling the CCU
utilization with little response time impact at the clients at the higher levels of throughput. The lower
levels of throughput experience mildly higher levels of delay than when the GROUP LNCTL=CA
DELAY parameter is set to 0.
The NCP for the measurements was dedicated to the measurement workload activity. Typically, an
NCP is doing other work, and this work should be factored into the capacity requirements for the NCP.
These results can be used to analyze the line capacity requirements between the LAN and the DB2 for
MVS/ESA host system. For these measurements, the connection is through a 4.5 MB-per-second channel,
which is underutilized. If this channel is replaced by a line with a speed of 56 Kb per second or 256 Kb
per second, the line limits the throughput.
Monitor your LAN to make sure it is not being overutilized. Overutilization of the LAN could
introduce delays that degrade database performance.
Two network flows are involved in each transaction. The first flow calls the stored procedure. The
second flow commits or rolls back the transaction. Locks acquired during stored procedure processing
are held until the commit or rollback is received. These locks are held for the amount of time it takes
to send the stored procedure response back to the client and for the client to process the response
and issue the commit or rollback. DDCS for AIX running at maximum CPU utilization increases this
delay. Delays result in locking contention at DB2 for MVS/ESA.
Secondarily, the average response time accessing 8 of the 16 partitions for Tables T7 and T9 is
reaching levels that have a delaying effect because those partitions reside on slower disk drives
(3380-E).
In this section, graphs depict the point where 90% of the transactions complete within the identified
response time for the identified throughput. Figure 193 shows the aggregate response time of the
transactions for the identified throughput.
Each procedure described in the subsections that follow performs different operations, which creates
various degrees of contention on the tables. The SQL calls are described for each procedure in the
order they are called. Locking contention is the primary factor for response time degradation, with
access to slower disk drives also becoming a factor.
B.4.5.1 Transaction Tx1: Transaction Tx1 is the most involved of the seven transactions. The
transaction calls a stored procedure that performs write operations against Tables T3, T4, T6, T8, and
T9. The stored procedure performs read operations against Tables T2, T3, T5, T7, and T8.
Index-matching predicates in the WHERE clauses optimize access to all tables. The stored procedure
has two input parameters for a byte count between 22 and 190. There are nine output parameters with
a maximum byte count of 713.
Here is a summary of the SQL calls in the stored procedure and the order in which they occur:
Select
T7
T2
Fetch and Update
T3
Insert
T4
T6
Fetch
T5
Loop up to 15 times
Fetch and Update
T8
Insert
T9
End loop
Figure 194 shows the achieved response time for transaction Tx1. The vertical axis represents the
response time as seen by the PowerBuilder clients. The horizontal axis represents the overall
throughput rate for the aggregate of transactions. The solid line represents the point where 90% of the
Tx1 transactions achieved this response time or better.
Figure 194. Response Time for Transaction Tx1: DDCS for AIX
Most Tx1 transactions complete between 1.4 and 2.0 seconds. Transaction Tx1 shows a gradual
response time degradation as the workload increases. This can be attributed to lock contention on
Table T3. Both the Tx3 and Tx6 transactions update Table T3. Transaction Tx1 also inserts into Table T9,
and one-half of those rows are resident on slower disk drives. A greater response time degradation
begins at 54 TPS, and this is attributed to CPU constraints on the C10 running DDCS for AIX.
Table 60 shows the transactions that have potential lock contention with transaction Tx1 on the defined
tables.
Table 60. Potential Lock Contention for Transaction Tx1: DDCS for AIX
Table Tx1 Tx2 Tx3 Tx4 Tx5 Tx6 Tx7
T2 Yes
T3 Yes Yes
T4 Yes Yes
T5 Yes
T6 Yes Yes
T7 Yes Yes
T8 Yes
T9 Yes Yes
Here is a summary of the SQL calls in the stored procedure and the order in which they occur:
For 60% of the time:
Select
T7
Loop up to 4 times
Fetch from
T7
End loop
End 60% of the time
Select
T7
T6
Loop up to 15 times
Fetch
T9
End loop
Figure 195 shows the achieved response time for transaction Tx2. The vertical axis represents the
response time as seen by the PowerBuilder clients. The horizontal axis represents the overall
throughput rate for the aggregate of transactions. The solid line represents the point where 90% of the
Tx2 transactions achieved this response time or better.
Figure 195. Response Time for Transaction Tx2: DDCS for AIX
Most Tx2 transactions complete between 1.0 and 1.5 seconds. The response time improvement can be
attributed to the data becoming available in the buffers because of the increased workload.
Table 61 shows the transactions that have potential lock contention with transaction Tx2 on the defined
tables.
Table 61 (Page 1 of 2). Potential Lock Contention for Transaction Tx2: DDCS for AIX
Table Tx1 Tx2 Tx3 Tx4 Tx5 Tx6 Tx7
T6 Yes Yes
B.4.5.3 Transaction Tx3: Transaction Tx3 calls a stored procedure that performs write operations
against Tables T1, T2, T3, and T7. The stored procedure performs read operations against Tables T2,
T3, and T7. Index-matching predicates in the WHERE clauses optimize access to all tables except T1.
The stored procedure has one input parameter for a byte count of 39 and six output parameters with a
maximum byte count of 551.
Here is a summary of the SQL calls in the stored procedure and the order in which they occur:
   For 60% of the time
      Select
         T7
      Loop up to 4 times
         Fetch from
            T7
      End loop
   End 60% of the time
   Fetch and Update
      T7
   Conditionally Fetch and Update
      T2
      T3
   Insert
      T1
Figure 196 shows the achieved response time for transaction Tx3. The vertical axis represents the
response time as seen by the PowerBuilder clients. The horizontal axis represents the overall
throughput rate for the aggregate of transactions. The solid line represents the point where 90% of the
Tx3 transactions achieved this response time or better.
Figure 196. Response Time for Transaction Tx3: DDCS for AIX
Most Tx3 transactions complete between 1 and 1.75 seconds. Transaction Tx3 shows a response time
degradation at 54 TPS. This is attributable to delays introduced by CPU constraint on the C10 running
DDCS for AIX.
Table 62 shows the transactions that have potential lock contention with transaction Tx3 on the defined
tables.
Table 62. Potential Lock Contention for Transaction Tx3: DDCS for AIX
Table Tx1 Tx2 Tx3 Tx4 Tx5 Tx6 Tx7
T1 Yes
T2 Yes
T3 Yes Yes
T7 Yes Yes
B.4.5.4 Transaction Tx4: Transaction Tx4 calls a stored procedure that performs a write
operation against T5. Index-matching predicates in the WHERE clause optimize access to this table.
The stored procedure has one input parameter with a byte count of 8 and three output parameters with
a maximum byte count of 99.
Figure 197 shows the achieved response time for this transaction. The vertical axis represents the
response time as seen by the PowerBuilder clients. The horizontal axis represents the overall
throughput rate for the aggregate of transactions. The solid line represents the point where 90% of the
Tx4 transactions achieved this response time or better.
Figure 197. Response Time for Transaction Tx4: DDCS for AIX
Most Tx4 transactions complete in less than 1 second. The response time degradation is gradual for
the scope of the measurements. Tx4 updates Table T5, which is 100% preloaded into memory.
Transaction Tx4 runs 1% of the time; therefore, the sampling is low, which can result in sampling
anomalies.
Table 63 on page 409 shows the transaction that has potential lock contention with transaction Tx4 on
the defined tables.
B.4.5.5 Transaction Tx5: Transaction Tx5 calls a stored procedure that performs read operations
against Table T5. Index-matching predicates in the WHERE clause optimize access to this table. The
stored procedure has one input parameter with a byte count between 8 and 120. It has seven output
parameters with a maximum byte count of 590.
Here is a summary of the SQL calls in the stored procedure and the order in which they occur:
   Loop up to 15 times
      Select
         T5
   End loop
Figure 198 shows the achieved response time for this transaction. The vertical axis represents the
response time as seen by the PowerBuilder clients. The horizontal axis represents the overall
throughput rate for the aggregate of transactions. The solid line represents the point where 90% of the
Tx5 transactions achieved this response time or better.
Figure 198. Response Time for Transaction Tx5: DDCS for AIX
The Tx5 transactions complete in less than 1.25 seconds. Tx5 shows little change in response time.
Table T5 is the only table that Tx5 accesses. There is neither lock contention nor I/O contention
against this table because it is large and 100% loaded into memory.
Table 64 shows the transaction that has potential lock contention with transaction Tx5 on the defined
tables.
Table 64. Potential Lock Contention for Transaction Tx5: DDCS for AIX
Table Tx1 Tx2 Tx3 Tx4 Tx5 Tx6 Tx7
T5 Yes
Here is a summary of the SQL calls in the stored procedure and the order in which they occur:
   Select
      T3
   Select
      T8 joined with T9
Figure 199 shows the achieved response time for this transaction. The vertical axis represents the
response time as seen by the PowerBuilder clients. The horizontal axis represents the overall
throughput rate for the aggregate of transactions. The solid line represents the point where 90% of the
Tx6 transactions achieved this response time or better.
Figure 199. Response Time for Transaction Tx6: DDCS for AIX
The Tx6 transactions complete between 0.5 and 1.0 second. Tx6 shows a gradual improvement in
response time until 54 TPS. This is attributable to data becoming available in the buffers as the
workload increases. At 54 TPS the response time degrades. This can be attributed to CPU constraint
on the C10 running DDCS for AIX, which causes locking contention on Table T3. The Tx1 and Tx7
transactions perform updates on Table T3.
Table 65 shows the transactions that have potential lock contention with transaction Tx6 on the defined
tables.
Table 65. Potential Lock Contention for Transaction Tx6: DDCS for AIX
Table Tx1 Tx2 Tx3 Tx4 Tx5 Tx6 Tx7
T3 Yes Yes
T8 Yes
T9 Yes Yes
Here is a summary of the SQL calls in the stored procedure and the order in which they occur:
   Loop up to 10 times
      Fetch and Delete
         T4
      Fetch and Update
         T6
      Update
         T9
      Select
         T9
      Update
         T7
   End loop
Figure 200 shows the achieved response time for this transaction. The vertical axis represents the
response time as seen by the PowerBuilder clients. The horizontal axis represents the overall
throughput rate for the aggregate of transactions. The solid line represents the point where 90% of the
Tx7 transactions achieved this response time or better.
Figure 200. Response Time for Transaction Tx7: DDCS for AIX
The Tx7 transactions complete between 0.75 and 2.5 seconds. The spike in response time at 40 TPS
can be attributed to the random sampling that created a sample with lock contention. Because Tx7
runs only 2% of the time, a single sample with contention can have drastic effects on the final
numbers.
Note: Other measurements not included in this report show that the response time of transaction Tx7
gradually degrades as the workload increases. Spikes such as these are not uncommon for
small transaction samples such as those found with Tx7 and Tx4.
Table 66 on page 412 shows the transactions that have potential lock contention with transaction Tx7
on the defined tables.
B.5 Conclusions
This technical report shows the results and analysis of some performance measurements for a
client/server relational database configuration that includes DB2 for MVS/ESA, DDCS for OS/2, DDCS
for AIX, and DB2 CAE for Windows with PowerBuilder. The goal of these measurements was to
provide a sample environment and analysis that will help a customer:
• Assess proposed configurations for acceptable performance.
• Assess performance of current configurations.
• Choose appropriate hardware capacity for a proposed configuration.
• Choose appropriate software programs for a proposed configuration.
• Tune a new or current configuration.
Throughout this appendix, we identify general considerations: information to consider for tuning and
capacity planning analysis. This does not imply that all relevant information is summarized in these
general considerations; information throughout this appendix can be more or less useful depending on
your reference point.
We focus on the use of stored procedures for online transaction processing environments. Network
delays play an important role in how well the environment performs. The use of stored procedures
may reduce the importance of network delays, but does not eliminate them. For example, each
transaction consists of a stored procedure call and a commit (or rollback). This results in two
message roundtrips from the client to the database. After the stored procedure completes, any locks
created remain held until the commit or rollback flows across the wire. Without stored procedures,
locks may be held much longer because there are two roundtrip messages for every SQL statement.
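The roundtrip model described above can be expressed as a small calculation. This is a hypothetical helper illustrating the text's lock-holding model, not part of the measured workload:

```python
def lock_hold_roundtrips(n_sql_statements: int, uses_stored_procedure: bool) -> int:
    """Roundtrips during which locks may be held, per the model above."""
    if uses_stored_procedure:
        # One roundtrip for the stored procedure call, one for commit/rollback.
        return 2
    # Two roundtrip messages for every SQL statement.
    return 2 * n_sql_statements

# A 10-statement transaction: 2 roundtrips with a stored procedure, 20 without.
with_sp = lock_hold_roundtrips(10, True)
without_sp = lock_hold_roundtrips(10, False)
```

The gap widens with the number of SQL statements, which is why stored procedures matter most for multi-statement transactions.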
The burden of providing good throughput in a client/server environment falls on several different
people:
• Network designers, to ensure that the network has capacity and that an efficient path exists
between the client and target database.
• Database administrators, to ensure that the target database and gateway operate within capacity
and are tuned for client/server access.
• Application programmers, to ensure that the program is manipulating the database efficiently (for
example, making efficient use of indexes).
• Any administrators of other software programs between the client and the target database (for
example, gateways).
This appendix provides a reasonably tuned client/server database configuration that achieves a certain
level of throughput using a combination of components. Some questions to ask when analyzing this
configuration in terms of your environment are as follows:
• How does the data throughput compare to my environment or proposed environment?
− On the LAN?
− Through DDCS?
− Through the NCP?
This appendix is based on one of a series of technical reports dealing with performance prediction and
capacity planning issues for the DB2 family of products operating in a client/server configuration. The
goal of these technical reports is to help the customer:
• Assess proposed configurations for acceptable performance.
• Assess performance of current configurations.
• Choose proper hardware capacity for a proposed configuration.
• Choose the proper software programs for a proposed configuration.
• Tune a new or current configuration.
This appendix is unique because it describes the impacts on the numerous components involved in a
client/server environment as the workload increases in that environment. Use the configuration
reported here as an example that could be analyzed and potentially applied to other configurations.
We describe the throughput, CPU utilization, RAM utilization, LAN utilization and response times when
PowerBuilder clients and OS/2 clients call stored procedures on DB2 to perform Query type
transactions. Use the results from these measurements to predict performance and plan for capacity
in your environment. Particularly noteworthy items are highlighted as general considerations.
Figure 201 presents the configuration example discussed in the following sections.
The components in this configuration are monitored using the following programs:
• MVS: Resource Measurement Facility (RMF)
• DB2 for OS/390 Version 5: DB2 Accounting and Statistics traces
• NT-provided Performance Monitor program
• PowerBuilder clients: Mercury Winrunner and Loadrunner programs
• OS/2 clients: Internal tools
• LAN: DatagLANce Network Analyzer for Ethernet and Token-Ring for OS/2
The workload calls stored procedures that return query answer sets. These answer sets provide a
combination of various answer set characteristics, from a few rows to many rows, from a few columns
to many columns. The answer sets are designed to help understand the effects of various answer set
characteristics on DB2 Connect, the network, and the client.
A series of measurements are performed, with each subsequent measurement providing an increased
workload to the configuration. All measurements include six PowerBuilder clients running a
predefined set of transactions in random order without modification for the entire series of
measurements. The PowerBuilder clients provide the end-user perspective of the effects of increasing
the workload in the configuration. The workload is increased each run by increasing the number of
sessions on the two OS/2 workload generators. The OS/2 workload generators run the exact same
transaction mix as the six PowerBuilder clients. The only difference is the management of screen I/O
at the client. The PowerBuilder clients simulate a real application, whereas the OS/2 workload
generators do not perform screen I/O.
• Transaction Qry1_9
Transaction Qry1_9 returns 150 rows with nine columns per row. There are 105 character bytes
per row for a total of 15,750 bytes per query. Qry1_9 runs as 24% of the total transaction mix. The
SQL logic in the stored procedure is as follows:
C.3 Results
The next sections describe the results from the measurements. The results are presented in relation
to the overall throughput achieved at each workload level. This is described as transactions per
second (TPS). As the workload increases, the throughput increases. The effects on each component
are then presented, based on the current achieved throughput. For example, when achieving 24
transactions per second:
• The PC704 4X166 running DB2 Connect for NT is 19.90% utilized.
• The 16 Mbps token-ring LAN is 61.83% utilized at a rate of 1,236,541 bytes per second.
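The quoted LAN figure can be verified from the byte rate: utilization is the observed bit rate divided by the ring's 16 Mbps capacity. This is a back-of-the-envelope check (the helper name is ours) that ignores framing overhead:

```python
def lan_utilization_pct(bytes_per_second: int, lan_mbps: float = 16.0) -> float:
    """Percentage utilization of a LAN for an observed byte rate."""
    # Convert bytes/s to bits/s, divide by the LAN's nominal capacity.
    return 100.0 * bytes_per_second * 8 / (lan_mbps * 1_000_000)

# 1,236,541 bytes/s on a 16 Mbps token ring is about 61.8% utilization.
utilization = lan_utilization_pct(1_236_541)
```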
Table 69 on page 420 describes the number of clients activated to achieve the throughput defined in
column 1. Column 2 describes the number of PowerBuilder clients. This is always six. Column 3
describes the number of OS/2 clients. This is increased by five for each measurement run. Column 4
is the actual number of clients, which is the sum of columns 2 and 3. Column 5 is the estimated
number of PowerBuilder clients. This column is derived by evaluating the number of equivalent
PowerBuilder clients each OS/2 client represents in terms of throughput. The equation for evaluating
this is:
   Estimated PowerBuilder clients =
      (OS/2 client tps / PowerBuilder client tps) * (# OS/2 clients) + (# PowerBuilder clients)
If
OS/2 client tps = 2.630 tps
PowerBuilder client tps = 0.146 tps
# OS/2 clients = 5
# PowerBuilder clients = 6
then
   Estimated PowerBuilder clients = (2.630 / 0.146) * 5 + 6 ≈ 96 clients
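The same evaluation can be sketched in code; this is a hypothetical helper mirroring the equation above, with the measured rates plugged in:

```python
def estimated_powerbuilder_clients(os2_client_tps: float,
                                   pb_client_tps: float,
                                   n_os2_clients: int,
                                   n_pb_clients: int) -> float:
    """Convert OS/2 workload-generator clients into equivalent PowerBuilder clients."""
    return (os2_client_tps / pb_client_tps) * n_os2_clients + n_pb_clients

# With the measured rates: (2.630 / 0.146) * 5 + 6, approximately 96 clients.
estimate = estimated_powerbuilder_clients(2.630, 0.146, 5, 6)
```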
The values obtained in these measurements are obtained based on the actual number of clients as
reflected in column 4. When evaluating the estimated number of PowerBuilder clients, other capacity
planning factors need to be addressed. For example, DB2 for OS/390 Version 5 can handle up to
25,000 connected threads, with up to 1,999 simultaneously active threads, and each client that passes
through DB2 Connect for NT and OS/2 should be allowed approximately 200 KB of RAM on DB2
Connect. These factors need to be taken into consideration when designing a configuration. A final
point on the number of clients: these numbers are based on clients running without think time. If
think time is factored in, the actual number of clients could be increased to fill the capacity that
think time leaves available.
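Those capacity factors can be combined into a rough sizing check. The limits are taken from the text; the helper itself is illustrative:

```python
def db2_connect_sizing(n_clients: int) -> dict:
    """Rough capacity check for clients passing through DB2 Connect to DB2 for OS/390 V5."""
    KB_PER_CLIENT = 200      # approximate RAM per client on DB2 Connect (from the text)
    MAX_CONNECTED = 25_000   # DB2 for OS/390 V5 connected-thread limit
    MAX_ACTIVE = 1_999       # simultaneously active threads
    return {
        "ram_mb_on_db2_connect": n_clients * KB_PER_CLIENT / 1024,
        "within_connected_limit": n_clients <= MAX_CONNECTED,
        "within_active_limit": n_clients <= MAX_ACTIVE,
    }

# The 96 estimated clients need roughly 19 MB of RAM on DB2 Connect.
sizing = db2_connect_sizing(96)
```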
In Figure 202 on page 421, the solid line represents the CPU utilization of the PC704 four-processor
configuration. The dotted line represents the PC704 two-processor configuration. The dotted/dashed
line represents the PC704 single-processor configuration, and the dashed line represents the PC720
four-processor configuration. The 9672-R61 (not shown in Figure 202 on page 421) reached a 41.72%
utilization at the maximum throughput of 24.14 transactions per second.
Legend:
Solid line - PC704 4X166
Dot line - PC704 2X166
Dot/Dash line - PC704 1X166
Dash - PC720 4X100
The network plays a big part in how well the client/server application will perform, and how big an
effect the client/server applications will have on the performance of the database. Delay is the biggest
enemy. How big a role delay plays depends on the application. Every time a trip across the network
is required, locks (if locks are held) are held for the time it takes the messages to move between the
client and the database. If the transaction includes many SQL statements, then the client will wait for
every SQL statement to traverse the network, assuming this is not a stored procedure or compound
SQL. For these measurements, overall network bandwidth is very important because large amounts of
data are moving through the network. Bandwidth through the 3172 Model 1 is the factor that prevents
additional throughput.
This section describes the effects on the 3172 Model 1 and LAN during the measurements. For this
mix of transactions, the message lengths are large, so the following values are set:
• 4 KB RUSIZEs in VTAM
• 32 KB RQRIOBLK definition in DB2 Connect for NT
• 32 KB RQRIOBLK definitions in the OS/2 clients
• 32 KB RQRIOBLK definitions in the Win95 Clients
C.3.3.1 3172 Model 1: The 3172 Model 1 is fully utilized for these measurements. This model of
3172 uses older technology. Follow-on 3172 models will provide greater bandwidth and speed. The
3172 Model 1 has a delay parameter that must be turned off.
C.3.3.2 LAN: Table 71 shows the percentage of LAN utilization and byte rate as the workload is
increased. Notice that query answer set transactions impose a much greater demand on LAN capacity
than OLTP type transactions. The 16 Mbps LAN is over 50% utilized at the higher transaction rate.
Monitor your LAN to make sure it is not being overutilized. This is especially true for transactions
that return large answer sets. Overutilization of the LAN could introduce delays that degrade
database performance.
In this section, graphs depict the point where 90% of the transactions complete within the identified
maximum response time for the identified throughput. Since this is the 90th percentile, this is an
indication of the greatest response time when sampling the fastest 90% of the transactions. Most
transactions achieve a better response time.
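The 90th-percentile figures plotted in the graphs can be computed from raw samples along these lines. This is a sketch; the report does not specify its exact sampling method:

```python
def percentile_90(response_times: list) -> float:
    """Smallest response time that 90% of the sampled transactions meet or beat."""
    ordered = sorted(response_times)
    # Index of the sample at the 90% mark (nearest-rank style).
    index = max(0, int(round(0.9 * len(ordered))) - 1)
    return ordered[index]

# For 100 samples of 0.01 s .. 1.00 s, 90% of transactions complete in 0.90 s or better.
samples = [i / 100 for i in range(1, 101)]
p90 = percentile_90(samples)
```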
Figure 203 through Figure 208 on page 428 describe the achieved response time for the transactions.
The vertical axis represents the response time as seen by the PowerBuilder clients. The horizontal
axis represents the overall throughput rate for the aggregate of transactions. The solid line represents
the point where 90% of the specified transaction achieves this response time or better.
Legend:
Solid line - PC704 4X166
Dot line - PC704 2X166
Dot/Dash line - PC704 1X166
Dash - PC720 4X100
The maximum response time for 90% of the FEWROWS transactions is between 0.61 and 0.88 second
on the four-processor PC704 as the workload increases. The standard deviation is between 0.06 and
0.10. The transaction response times for the other multiprocessor configurations are moderately
degraded.
The maximum response time for 90% of the MANYROWS transactions is between 8.35 and 12.47
seconds on the four-processor PC704 as the workload increases. The standard deviation is between
0.28 and 0.95. The transaction response times for the other multiprocessor configurations are
moderately degraded, with the single-processor configuration achieving the longest response times
(9.45 to 13.29 seconds, a standard deviation of 0.34 to 1.32).
The maximum response time for 90% of the Qry1_2 transactions is between 0.72 and 0.99 second on
the four-processor PC704 as the workload increases. The standard deviation is between 0.07 and 0.10.
The transaction response times for the other multiprocessor configurations are moderately degraded,
with the single-processor configuration achieving the longest response times (0.82 to 1.10 seconds, a
standard deviation of 0.08 to 0.13).
The maximum response time for 90% of the Qry1_9 transactions is between 0.83 and 1.10 seconds on
the four-processor PC704 as the workload increases. The standard deviation is between 0.08 and 0.11.
The transaction response times for the other multiprocessor configurations are moderately degraded,
with the single-processor configuration achieving the longest response times (0.93 to 1.21 seconds, a
standard deviation of 0.09 to 0.12).
The maximum response time for 90% of the Qry1_17 transactions is between 0.93 and 1.21 seconds on
the four-processor PC704 as the workload increases. The standard deviation is between 0.07 and 0.11.
The transaction response times for the other multiprocessor configurations are moderately degraded,
with the single-processor configuration achieving the longest response times (0.99 to 1.31 seconds, a
standard deviation of 0.09 to 0.12).
Legend:
Solid line - Qry1_2
Dot line - Qry1_9
Dash - Qry1_17
Figure 208 displays the response time differences between transactions Qry1_2, Qry1_9, and Qry1_17.
These transactions return the same number of rows from the same table. The only difference is the
number of columns. As expected, the response time increases as the number of columns increases.
Many factors are involved in achieving a well-performing environment for your application. These
factors are:
• Client
• DB2 Connect
• Network
− LAN
− WAN
This section will describe these factors and what you can do to tune them. However, because the
potential variations in the environments are so numerous, not all the listed suggestions may be
appropriate for your environment, nor do we list all possible performance enhancements.
Hardware plays a large role in how well DB2 Connect will work. The following hardware features are
important:
• Processor Power and SPECint
Processor speed with high SPECint values is very important for DB2 Connect performance.
Consider an SMP system for the best performance. The greater the number of processors, the
greater the capacity. Enclosed below are some SPECint_base95 values obtained from the SPEC
Web page (http://www.specbench.org/osg/cpu95/results/cint95.html):
C.4.3 Network
Ignoring the run-time characteristics of the transaction, the network is where the majority of a
transaction's time is spent, so the biggest performance gains can be made here. In addition to the
network parameter values suggested for DB2 Connect and the client, the next sections describe
additional considerations.
C.4.3.1 LAN: The LAN portion of your network connectivity is generally faster than the WAN
portion. Overutilization of the LAN and its devices (bridges and routers) will have an effect.
Retransmissions resulting from LAN congestion or other causes are common, and they are very
expensive in terms of response time:
• Ethernet tends to have congestion problems when approaching 50% LAN utilization.
• Token-Ring can operate up to 75% to 100% LAN utilization, depending on the type of workload.
Bulk transfer transactions get the higher utilization. For best performance, enable early token
release.
• Bridges and Routers - Bridges generally operate at media speed and therefore create little latency
unless the bridge is overutilized. An overutilized bridge will result in delays. Routers provide
more logic, thus increasing the latency. If a client needs to cross many routers to access the
database, the latency will add up and increase the potential for retransmit conditions.
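The point about router hops accumulating can be illustrated with a small calculation. The hop latencies below are illustrative assumptions, not measured values:

```python
def round_trip_ms(hop_latencies_ms: list) -> float:
    """Round-trip network time across all bridge/router hops on the client-database path."""
    # Latency accumulates per hop in each direction.
    return 2 * sum(hop_latencies_ms)

# Three routers at an assumed 5 ms each versus one bridge at an assumed 1 ms:
many_hops = round_trip_ms([5.0, 5.0, 5.0])  # 30 ms per roundtrip
one_hop = round_trip_ms([1.0])              # 2 ms per roundtrip
```

Every SQL statement that crosses the network pays this roundtrip cost, so a long path multiplies quickly for chatty applications.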
C.4.3.2 WAN: The WAN portion of the network is generally where the majority of time is spent.
Performance can be affected by the type of devices as well as how those devices are configured. The
following are some typical connectivity solutions:
• 3745 - LAN attached (TIC) provides medium to high throughput, depending on the TIC type and 3745
model. 3745s are also used for connections over long distance by connecting between two 3745s.
The line speed of the connection between the 3745s should be as fast as possible to provide the
best response time.
• OSA - The mainframe is directly connected to the LAN, which provides very high throughput.
• 3172 - This can provide LAN/channel-attached capabilities as well as "3172 to 3172" attached
capabilities for long distance. The LAN/channel-attached setup provides very high throughput,
while throughput of the 3172 to 3172 configuration depends on the line speed connecting the two.
• ESCON - This is a channel-attached solution where DB2 Connect is directly channel attached to the
mainframe. It does not require DB2 Connect traffic to traverse the LAN, as does an OSA, 3172, or
3745 solution. This solution provides very high throughput, with multipath channel (MPC) support
providing the best throughput.
The configuration parameters for these devices, VTAM, and TCP/IP have large to small effects on
response time, dependent on the parameter. Listed below are some of these parameters. They are
not all valid in all configurations. These are parameters to look at in terms of your configuration:
• DELAY - Most devices have one or more DELAY parameters that can be set. This parameter has
severe performance implications because network messages can be held up en route to and from
the client. This has a direct effect on response time, so it should be set to zero in all cases.
C.5 Conclusions
This appendix shows the results and analysis of some performance measurements for a client/server
relational database configuration that includes DB2 for MVS/ESA, DB2 Connect for NT, and
DB2/CAE/Windows with PowerBuilder. The goal of these measurements is to provide an example
environment and analysis that will help a customer:
• Assess proposed configurations for acceptable performance.
• Assess performance of current configurations.
• Choose proper hardware capacity for a proposed configuration.
• Choose the proper software programs for a proposed configuration.
• Tune a new or current configuration.
Throughout this appendix, general considerations are identified: information for you to consider in
tuning and capacity planning analyses. This does not imply that all information is summarized in these
general considerations, because information throughout this appendix can be of use, depending on the
analyst's reference point.
This report focuses on the use of stored procedures for query-transaction processing environments.
Network delays, network bandwidth, and gateway speed play an important role in how well such an
environment performs. This is because of the large amount of data that is typically returned for these
transactions. The use of stored procedures helps speed up this process by eliminating some of the
line flows back and forth between the client and server.
The burden of providing good throughput in a client/server environment falls on several different
people:
• Network designers, to make sure the network has capacity and an efficient path exists between the
client and target database.
• Database administrators, to make sure the target database and gateway (DB2 Connect) is
operating within capacity and tuned for client/server access.
• Application programmers, to make sure the program is manipulating the database efficiently (for
example, use of indexes).
• Any other administrators of software programs between the client and the target database (for
example, gateways).
This publication is intended to help database administrators implement DB2 stored procedures in a
client/server environment. The information in this publication is not intended as the specification of
any programming interfaces that are provided by the DB2 family of products. See the PUBLICATIONS
section of the IBM Programming Announcement for the current level of the products listed below for
more information on the formal product documentation:
• ACF/VTAM
• AIX
• AIX/6000
• CICS/ESA
• CODE/370
• DB2 for MVS/ESA
• DB2 Server for OS/390
• DB2 for OS/2
• DB2 for OS/400
• DB2 for AIX
• DDCS for OS/2
• DDCS for AIX
• DB2 Connect for OS/2
• DB2 Connect for AIX
• DB2 Connect for Windows NT
• DB2 Connect for Windows 95
• IMS/ESA
• LE/370
• MVS/ESA
• OS/390
References in this publication to IBM products, programs or services do not imply that IBM intends to
make these available in all countries in which IBM operates. Any reference to an IBM product,
program, or service is not intended to state or imply that only IBM's product, program, or service may
be used. Any functionally equivalent program that does not infringe any of IBM's intellectual property
rights may be used instead of the IBM product, program or service.
Information in this book was developed in conjunction with use of the equipment specified, and is
limited in application to those specific hardware and software products and levels.
IBM may have patents or pending patent applications covering subject matter in this document. The
furnishing of this document does not give you any license to these patents. You can send license
inquiries, in writing, to the IBM Director of Licensing, IBM Corporation, 500 Columbus Avenue,
Thornwood, NY 10594 USA.
Licensees of this program who wish to have information about it for the purpose of enabling: (i) the
exchange of information between independently created programs and other programs (including this
one) and (ii) the mutual use of the information which has been exchanged, should contact IBM
Corporation, Dept. 600A, Mail Drop 1329, Somers, NY 10589 USA.
Such information may be available, subject to appropriate terms and conditions, including in some
cases, payment of a fee.
The information contained in this document has not been submitted to any formal IBM test and is
distributed AS IS. The information about non-IBM ("vendor") products in this manual has been
supplied by the vendor and IBM assumes no responsibility for its accuracy or completeness. The use
of this information or the implementation of any of these techniques is a customer responsibility and
depends on the customer's ability to evaluate and integrate them into the customer's operational
environment.
Any performance data contained in this document was determined in a controlled environment, and
therefore, the results that may be obtained in other operating environments may vary significantly.
Users of this document should verify the applicable data for their specific environment.
This document contains examples of data and reports used in daily business operations. To
illustrate them as completely as possible, the examples contain the names of individuals, companies,
brands, and products. All of these names are fictitious and any similarity to the names and addresses
used by an actual business enterprise is entirely coincidental.
Reference to PTF numbers that have not been released through the normal distribution process does
not imply general availability. The purpose of including these reference numbers is to alert IBM
customers to specific information relative to the implementation of the PTF when it becomes available
to each customer according to the normal IBM PTF distribution process.
The following terms are trademarks of the International Business Machines Corporation in the United
States and/or other countries:
DB2, MVS/ESA, OS/2, AIX, IBM, Distributed Relational Database Architecture, DRDA, ACF/VTAM,
AIX/6000, CICS/ESA, OS/400, IMS/ESA, SAA, AD/Cycle, C/370, Language Environment,
RISC System/6000, ES/9000, PowerPC, PS/2, CICS, IMS, RACF, SP, VisualGen, SP1, 400, VisualAge,
DATABASE 2, AT, COBOL/370, Presentation Manager, VTAM, LANStreamer, PowerPC 601,
Resource Measurement Facility, RMF, DatagLANce, PROFS
Microsoft, Windows, Windows NT, and the Windows 95 logo are trademarks
or registered trademarks of Microsoft Corporation.
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this redbook.
For information on ordering these ITSO publications see “How to Get ITSO Redbooks” on page 443.
• Distributed Relational Database Architecture Connectivity Guide, SC26-4783
• DRDA Security Considerations, GG24-2500-00
• WOW! DRDA Supports TCP/IP: DB2 Server for OS/390 and DB2 Universal Database, SG24-2212-00
Redbooks are also available on CD-ROMs. Order a subscription and receive updates 2-4 times a year
at significant savings.
This information was current at the time of publication, but is continually subject to change. The latest
information may be found at http://www.redbooks.ibm.com.
Redpieces
For information so current it is still in the process of being written, look at "Redpieces" on the Redbooks Web
Site (http://www.redbooks.ibm.com/redpieces.htm). Redpieces are redbooks in progress; not all redbooks
become redpieces, and sometimes just a few chapters will be published this way. The intent is to get the
information out much quicker than the formal publishing process allows.
• E-mail orders
In United States: usib6fpl at ibmmail (IBMMAIL) or usib6fpl@ibmmail.com (Internet)
In Canada: caibmbkz at ibmmail (IBMMAIL) or lmannix@vnet.ibm.com (Internet)
Outside North America: dkibmbsh at ibmmail (IBMMAIL) or bookshop@dk.ibm.com (Internet)
• Telephone orders
Company
Address
We accept American Express, Diners, Eurocard, MasterCard, and Visa. Payment by credit card is not
available in all countries, and a signature is mandatory for credit card payment.
Index 449
CLI (continued) CODE/370 (continued)
SYSIBM.SYSPROCEDURES 228 displaying variables 324
trace 189 edit tool 310
VAB 332, 337 installation 307
Visual Basic 189 LE/370 86
Windows 187 overview 305
CLI0109E message 190 PTF 308
clicall.c 139 source code 317
clicked event 211 step over 325
client program step return 325
CODE/370 307 step through 325
coding 117 using the debug tool 314
DB2 on the workstation 132 coded character set 250
MRSP 137 COLLECT option 227
preparation 130 collection
VAB 332, 337, 339 calling other programs 97
VisualGen 134 classification rules 28
client/server 1, 331, 377 client program 100
close 200 COLLID column 14
CLSE function call 274 RRSAF 259
CM/2 308, 311 SET CURRENT PACKAGESET statement 98
CMD file 106 WLM-established stored procedure 28
CMS 306 COLLID column 14, 100, 228
COBOL command
batch debug tool 328 bind 131
calling other programs 97 building stored procedures 113
client program 130 DB2 on MVS 19
COBOL/370 305 db2start 75, 107
COLLID column 14 debug 306, 312, 317, 327
data type 90 DISPLAY PROCEDURE 21
DB2 for AIX 105 DISPLAY THREAD 23
DB2 for OS/2 105 EXCI call 266
DB2 on MVS 85 file 315, 328
DB2 on the workstation 105 in a stored procedure 86
DB2CLI.PROCEDURES 79 SET CURRENT PACKAGESET 97
diskette 348, 355, 361, 372 START DB2-established address space 11
example 93, 99, 122, 266 START PROCEDURE 19
MRSP 164, 166 STOP PROCEDURE 20
precompiler 132 terminate current process 107
RENT compiler option 102 unsupported 106
SQLDATA 155 VAB 342
stored procedures overview 1 Visual Basic 194
TEST compiler option 312, 313 Windows 187
VAB 332, 335 COMMAREA 266
varying-length character 17 COMMENT debug command 327
VisualGen 134 commit
Windows 187 client program 118
COBOL for OS/390 and VM 85 coordination 11
code editor 331, 334 DB2 Connect result set study 429
code module 333, 334, 336 DB2 on MVS 7
code page 106, 250 implicit 222, 249
code points 150 lock 381, 386, 412
code segment 114 mode 199, 204
CODE statement 114 network flow 389, 403
CODE-LISTING window 323 non-DB2 resources 11, 265
CODE/370 ODBC 199
batch debug tool 328 transaction characteristics 380
compile tool 310 COMMIT statement
debug tool session example 316 CLI 204
cursor (continued) DataJoiner 68, 69
MRSP 137, 139, 143, 150 DATE data type 96
names 150 DB2 accounting and statistics traces 380
order of result sets 153 DB2 Administration Tool 103
reasons why not returned to client 170 DB2 Client Application Enabler
statement handle 197 See CAE
used in stored procedure 137 DB2 Common Servers
C application for Windows 220
coding considerations 105
D MRSP 150
D WLM command 54 performance 299
DARI VisualGen 134
CALL statement 123 DB2 Connect
DB2 Common Servers V2 71 COMMIT_ON_RETURN 18
DB2CLI.PROCEDURES 79 MRSP 137, 150
using to invoke stored procedures 127 PowerBuilder 205
VAB 337 product overview 68
data areas 200 result set study 415
data control language DB2 for AIX
See DCL client program 89
data conversion 13, 237, 247, 249 DRDA stored procedure support 1
data definition language REXX 111
See DDL searching stored procedures 126
data integrity 67 stored procedure name 130
data length 119 VisualGen 134
data manipulation language DB2 for HP-UX 1, 67
See DML DB2 for MVS/ESA
data replication 68 C application for Windows 220
data segments 113 connections 384, 399
data sharing 19 DRDA stored procedure support 1
data source line capacity 387, 401
commit mode 204 measurement conclusions 412
handle 196 measurements taken 377
register 187 MRSP 150
SQLConnect 196 network protocol 379
statement handle 197 stored procedures architecture 7
data transformation 3 thread 384, 399
data type tuning 398
assignment rules 95 utilization 389, 403
C 197 DB2 for OS/2
client program 118, 121 client program 89
conversion 96 DRDA stored procedure support 1
object extenders 68 LIBPATH 111
PARMLIST column 89 searching stored procedures 126
print_results function 144 stored procedure name 130
specifying arguments 124 VisualGen 134
SQLDA 119 DB2 for OS/390
data warehouse 68 DB2 Connect 68
database agent process 76 DB2 Connect result set study 416
database application remote interface MRSP 137, 150
See DARI stored procedures architecture 7
database control system DB2 for OS/400 1, 68, 124, 129
See DBCTL DB2 for Sinix 67
database request module DB2 for Solaris 1
See DBRM DB2 for Sun/Solaris 67
database resource adapter DB2 for VM and VSE 68
See DRA DB2 for Windows NT 1, 67
DatagLANce Network Analyzer for Ethernet and DB2 kernel 301
Token-Ring for OS/2 380
delay (continued) DOS 67
disk 389 DOS_RQRIOBLK 386, 401
line 402 DOUBLE parameter 17
network 412 DPSB call 274, 276, 290
role played 386, 401 DRA 272, 273
DELAY parameter 386, 388, 401, 402, 422, 431 DRDA
DELETE statement 13, 152 bind 131
delimited identifier 130 character conversion 17
DEQ function call 274 CODE/370 308
DESCRIBE CURSOR statement COMMIT_ON_RETURN 18
example 170 data type conversion 96
format 163 DB2 Connect 68
returned information 163 DB2 for MVS/ESA 8
SQL extension 152 DB2 for OS/390 8
DESCRIBE PROCEDURE statement DDCSMVS.LST 131
example 170 measurements taken 377
format 155 MRSP 150
MR2BMCBM sample program 160 SQLeproc 127
number of result sets 152 stored procedures overview 1, 3
returned information 154 system-directed access 87, 129
SQL extension 152 temporary table 151
DESCRIBE statement 150 Visual Basic 188
DESCRIPTION statement 113 DROP DATABASE command 106
DESCSTAT bind option 164, 192 DSN8EP2 member 57
DESTNAME parameter 293, 315 DSNALI module 52, 87, 100
DFSCDL10 module 275 DSNAOCLI collection 228
diagnostic information 189, 193, 199 DSNAOINI file 222
dictionary tables 312 DSNAOINI statement 229
differences between stored procedures and other DSNAOTRC statement 229
programs 106 DSNARLI module 275
directories search 126 DSNCLINC package 221
directory structure 347 DSNRLI module 52, 59, 100, 257, 260
discretionary goals 27 DSNRRSAF statement 260
DISPLAY privilege 21 DSNT408I message 249
DISPLAY PROCEDURE command 21 DSNTIAD 103
DISPLAY statement 57 DSNTIJUZ 12
DISPLAY THREAD command 23 DSNTINST CLIST 8, 12
DISPLAY WLM command 34, 57 DSNTIPG panel 10
Distributed Computing Environment DSNTIPX panel 8, 11, 21
See DCE DSNV429I message 23
distributed data facility DSNX968I message 58
See DDF DSNX981E message 53, 55
Distributed Database Connection Services DSNX982I message 53
See DDCS for Common Servers DSNX9WLM program 229
Distributed Relational Database Architecture dual mode 114
See DRDA dynamic breakpoints 306
distributed unit of work 67 dynamic SQL
see DUW authorizations 134
DLET function call 274 CALL statement 1, 118
DLL compound SQL 301
CLI 222, 229 considerations 81
compile option 223 DB2 on MVS 86
function name 195 DESCRIBE CURSOR statement 164
LINK386 113 performance 300, 303
OS/2 106, 111 privilege 7
prelink 224 system-directed access 87
side deck 228 VisualGen 134
DML 3, 86 DYNAMICRULES(BIND) 7, 134
function descriptor 226 host variables (continued)
FUNCTIONAME 112 stored procedure name 118, 120, 124
structure 121
VAB 332
G varying-length character 17
GHN function call 274 HP 68
GHU function call 274 HP-UX 331
global temporary table HTML 69
accessing IMS databases 151, 272
result sets 239
sample application 292, 294, 353, 373 I
GMSG function call 274 I/O PCB 275
GN function call 274 IBM AIX XL FORTRAN Version 2 Release 3 4
goal mode IBM AIX XL FORTRAN/6000 Version 2.3 105
application environment 30, 32 IBM C for AIX Version 3 Release 1 3
operation 33, 58 IBM C for AIX Version 3.1 105
Sysplex 26 IBM C Set++ Version 2 Release 1 4
testing 57 IBM C/C++ for MVS/ESA Version 3 Release 1 3
WLM-established stored procedure 25 IBM C/SET++ for AIX Version 2.1 or Version
GRAPHIC data type 96, 111 3.1 105
GRAPHIC parameter 17 IBM C/Set++ for OS/2 Version 2.1 105
GROUP parameter 386, 388, 401, 402 IBM COBOL for MVS and VM Version 1 Release 1 3
GU function call 274 IBM COBOL Set for AIX Version 1 Release 1 4
IBM COBOL Set for AIX Version 1.1 105
IBM COBOL VisualSet for OS/2 Version 1 Release
H 1 4
handler 132 IBM COBOL VisualSet for OS/2 Version 1.1 105
handles IBM High Level Assembler/MVS Version 1 Release
allocation 196 1 3
CLI sample 203 IBM Internet Connection Server 69
data areas 200 IBM PL/I for MVS and VM Version 1 Release 1 3
deallocation 204 IBM Procedures Language 2/REXX 4, 106
error 199 IBM SAA AD/Cycle C/370 Version 1 Release 2 3
program structure 193 IBM SAA AD/Cycle COBOL/370 85
SQLFreeEnv 200 IBM SAA AD/Cycle Language Environment/370
variables 194 Version 1 Release 1 3
header file 247 IBM VisualAge C++ for OS/2 Version 3 105
HEAP size 234 IBM VisualAge C++ for Windows 187
history file 336 IBM VisualAge for COBOL for OS/2 and
home page 345, 377 Windows 187
host language IBM XL C Compiler Version 1.2.1 or Version 1.3 105
client program 130 IBM XL C Version 1 Release 2.1 3
compound SQL 301 IBM XL FORTRAN for AIX Version 3.2 105
considerations 81 IBMREQD column 14
DB2 on the workstation 106 ICMD function call 274
package 134 IDENTIFY function call 87, 257, 261
HOST parameter 386, 388, 401, 402 IEASYSnn member 59
host variables IEFSSNxx member 63
array 121 IFI calls 86, 100, 255
CALL statement 117 ILINK 113
CLI 82, 132 IMDBMCB2 sample 292
DB2 on MVS 93 IMDBMCBM sample 292, 296
example 122 IMDBMCBN sample 292
full-path specification 120 IMDBMS sample 292, 296
MRSP 180 Immediate window 342
ODBC 132 implicit APPC support 291
passing parameters 125 IMPORT statement 223, 226
specifying arguments 120, 124 importance 27
SQLDA 107
LANGUAGE column 14, 79, 228 load module (continued)
language interface 52 DISPLAY PROCEDURE command 21
large objects DISPLAY THREAD command 23
See LOB LE/370 86
latching 383, 398 LOADMOD column 14
LE/370 nonreusable 102
calling other program 98 reentrant 102
client program 131 REFRESH option 35
CODE/370 305, 315 reusable 102
DB2 Version 4 85 STAYRESIDENT column 14
introduction 85 STOP PROCEDURE command 20, 21
library name 10 SYSIBM.SYSPROCEDURES 13
link-edit 100 test version 15
MRSP 151 WLM-established address space 52
PTF 307 LOAD utility 13
resident 102 LOADMOD column 14, 21, 52, 104
RUNOPTS 14 LOB 67, 197
RUNTIME parameter 10 Local and Global Monitor List windows 326
stored procedures address space 7 local application 8
SYSIBM.SYSPROCEDURES 13 local call 338
virtual storage 9 local client 1, 117
level functions 132 location 120, 129, 131
LIBPATH 111, 126 lock
library accessing IMS databases 277
case sensitive 112 CPU 403
DB2 on the workstation 106 DB2 Connect result set study 418
LE/370 86 DB2 on MVS 7
name 130 definitions for the measurements 381
path 128 delay 389, 401, 403
SQLZ_DISCONNECT_PROC 109 network 386, 401
SQLZ_HOLD_PROC 109 page level 381
STOPPING state 35 result sets 151
stored procedure name considerations 128 row level 381
stored procedure preparation 111 three-part names 89
VAB 334, 336 LOG function call 274
LIBRARY statement 113 log mode 293, 311
line 387, 388, 401 log streams 59, 60, 64
LINE TEST suboption 313 logical partitions 379
link-edit 52 LONG VARCHAR data type 17
accessing IMS databases 275, 287 LONGNAME option 223, 224
APPC 293 loopback connection 138, 149
CAF 87 lowercase
CFRM policy 63 DB2 for OS/2 130
CLI 223, 226 function name 195
CODE/370 306 stored procedure name 112, 129
DB2 on MVS 99 LSEARCH option 224
LE/370 100 LU 6.2 265, 379, 416
reentrant 102 LU name 13, 293, 310
RENT option 102 LUNAME column 14, 15, 311
REUS option 102
LINK386 113
LINKAGE column 14, 90 M
LIST debug command 319, 321, 327 Macintosh 67
load library 11, 100 mainframe interactive debug tool
load module See MFI
CALL flow 7 makefile
CLI 226 -Ti+ option 113
CODE/370 314, 317 build stored procedure 112
DB2CLI.PROCEDURES 79 COPY statement 111
NOBLOCK TEST suboption 313 ODBC.INI file 188
NOCONVERT option 76, 111 OLTP 68
NOEXECOPS run-time option 94 OO COBOL
NOLINE TEST suboption 313 COLLID column 14
non-DB2 resources DB2 on MVS 85
accessing from a stored procedure 8, 265 stored procedures overview 1
change the JCL procedure 11 open 138, 200
RACF 12, 53 OPEN CURSOR statement 170
WLM-established address space 53 Open Database Connectivity
NONE TEST suboption 313, 315 See ODBC
NOPATH TEST suboption 313 OPEN function call 274
NOSYM TEST suboption 313 OpenEdition 12
Not-fenced stored procedures operator commands 58
See unfenced stored procedure OPTFILE option 224
NPM 380 optimizer 67
null Oracle 68
blocking rows 184 OS/2
CALL statement 121 client 299, 384, 398
character string 144, 145 COBOL 134
client program 118 CODE/370 305
DB2 on MVS 90 DB2 Connect 68
DB2 on the workstation 111 debug 306
LINKAGE column 14 desktop 187
pointer 107 diskette 347, 348, 353
specifying arguments 120, 124 DLL 106, 111
terminator 198 function 128
VAB 335 IBM Procedures Language 2/REXX 4
Visual Basic 204 LIBPATH 112
NULL CONNECT term 249 PATH 112
null connection 222, 248 prerequisites 4
NULL keyword 91, 120 product overview 68
NUMBER OF TCBS parameter 9 server 299
NUMTCB parameter 12, 13, 57, 273 SPM/2 380
test 303
VAB 331
O Warp 5
object-relational extenders 68 OS/390 25, 54, 206, 221, 274, 275
ODBC OTS-COMMIT 277
administration tool 187 OTS-ROLLBACK 277
bind 187 OUT parameter 17
CALL statement 127 overhead 181
CLI 82 OWNER 7, 101
client program 130
DB2 Common Servers features 67
DB2 Connect result set study 429 P
error 198 PACKADM authority 101
escape clause 135 package
handle allocation 196 calling other programs 97
migration 237 classification rules 28
MRSP 137, 151, 152 CLI 221
parameters 118 client program 130, 131
PowerBuilder 205, 206 COLLID column 14
privilege 101 DB2 on MVS 99, 100
program structure 193 DB2CLI.PROCEDURES 79
sample application 192 embedded SQL 83
setting the environment 187 isolation level 381
Software Development Kit 133 privilege 7, 101
Visual Basic 189 RRSAF 259
Windows 187 system-directed access 87
pr3c2s 301
pr4c2cr2 302
pr4c2s 302
precision 145
precompile
   client program 130
   CODE/370 313
   DB2 on MVS 99
   embedded SQL statements 81
   precompiler 120
   VAB 332
preferences file 315
prelink 102, 224, 226
preload 381, 382, 394, 408, 418
PREPARE statement 81, 197
preprocessor 81
primary authorization 101
print_results function 144, 146
priority
   RRS 64
   WLM-established stored procedure 25, 56
private protocol 1, 151
privileges
   DB2 on MVS 101
   DB2CLI.PROCEDURES 332
   DISPLAY PROCEDURES command 21
   embedded SQL 83
   execute a stored procedure 101, 134
   non-DB2 resources 12
   START PROCEDURE command 19
   STOP PROCEDURE command 20
   SYSIBM.SYSPROCEDURES 104
PROC OPTIONS(REENTRANT) 102
PROC_LOCATION column 79
PROCCOLS sample 78
PROCEDURE column 14, 120, 129
procedure-library!function-name 128
process
   db2dari 303
   fenced stored procedure 76
   KEEPDARI parameter 72
   terminating 107
PROCNAME column 79
PROCS sample 78
PROCSCHEMA column 79
program hooks 312
program specification block
   See PSB
project manager 331
prompt level 315
protected mode 114
protected resources 59
PROTMODE statement 114
PS/2 4
PSB 276
pseudonym files 293
PTF 85, 307
Q
QSAM files 12, 53, 151
qualified name 120
querying 78
queuing 56, 58
QuickTest 331, 343
QUIESCE option 33, 34
QUIESCED state 34, 58
QUIESCING state 33, 34, 58
QUIT debug command 327
R
RACF
   accessing IMS databases 277
   DB2-established address space 11, 265
   RRSAF 255
   WLM-established stored procedures address space 25, 53, 265
RAM 385, 386, 399, 400
RCMD function call 274
RDO 189, 190, 191, 192
rdoDefaultLoginTimeout property 192
REAL data type 96
REAL parameter 16
reason code
   00E79002 55
   00E79009 53
   1592312 reason code 264
   15925250 reason code 263
   15925393 263
   1592554 reason code 264
receive operation 2
Recoverable Resource Manager attachment facility
   See RRSAF
recoverable resource manager services attachment facility
   See RRSAF 253
recovery 67, 70
reentrant 99, 102, 247
referential integrity 67
refresh 34
REFRESH option 35
REFRESHING state 33, 35
region controller 272
REGION parameter 11
registering stored procedures 77, 187, 332
RELEASE statement 86, 88, 106
REMARKS column 80
remote call 338
remote client 117
remote data objects
   See RDO 189
remote data services 1
RENT compiler option 102, 224
RENT link-edit option 102
REPL function call 274
samputil.h 140 SNA 293, 309, 416
SBCS 17 SNAP function call 274
scale 198 software prerequisites 3
SCHED specification 292 Solaris 331
scheduling 29 source code
SCHEMA 129, 337 .bas file (VAB) 333
SCO 68 CODE-LISTING window 317, 323
scripts 70 CODE/370 317
SDK for Common Servers 138 VAB 331
SEARCH option 224 viewing (CODE/370) 313
searching stored procedures 16, 126 sp file 334, 336
secondary authorization 101 sp0r2cr2 300
security 3, 7, 83, 277 sp0r2s 300
SELECT statement special characters 120, 124, 127
blocking rows technique 180 speed line 387, 401
CLI 141 SPM/2 380
DESCRIBE CURSOR statement 163 SQL_CLOSE 138
DESCSTAT bind option 192 SQL_COMMIT 142
mr3c2o2 sample program 140 SQL_DROP 138, 142, 200
MR4C2S.SQC program 149 SQL_NO_DATA_FOUND 145, 148, 192
MRSP 139, 143, 151 SQL_NTS 190, 204
WITH RETURN clause 249 SQL0969N message 55
WITH RETURN option 357 SQL1106N message 112
send operation 2 SQL1109 message 112
SENDDA sample 116 SQL3 1, 117
serialization 12, 32, 53, 265 SQL92 Entry Level 152
service class 26, 56, 57 SQLAllocConnect 141, 196, 203
service class period 27 SQLAllocEnv 140, 196
service definition 25, 30, 35 SQLAllocStmt 141, 197
WLM 26 SQLarrayCALL 337
service policy 26, 35, 38 SQLBindCol 145, 249
service units 14, 103 SQLBindParameter function call
serviceability trace 229 CLI 204, 221
SET CONNECTION statement 86, 106 parameter marker 141
SET CURRENT PACKAGESET statement 97 parameter markers 127, 197
SET CURRENT SQLID statement 86 rgbValue column 190
SET PATH statement 129 Visual Basic 190, 194
SET SOURCE ON debug command 327 SQLBrowseConnect 132
SETRRS CANCEL command 64 SQLCA
SETS function call 274 client program 89
setting variables 193, 203 DB2 on the workstation 107
SETU function call 274 MRSP 143
SETXCF command 63 porting CLI applications 248
SHOWDA sample 116 PowerBuilder 211, 216
side deck 227 RRSAF 260
side information 293 VAB 334, 335
SIDEINFO 310 sqlcli.h header file 223
sign on 7, 255 SQLCODE
SIGNON function call 87, 257, 258, 261 -113 129, 195
SIMPLE linkage convention 14, 90, 92 -1133 107
SIMPLE WITH NULLS linkage convention 14, 91, 92, -114 179
164 -204 179
simulate 149, 306 -30090 89
SINIX 68 -301 96
SMALLINT data type 96 -302 96
SMALLINT parameter 16 -303 374
SmartGuide 70, 331, 344 -312 220
SMS class 60 -406 96
-426 19
statement handle (continued) STORPROC parameter 12
ODBC 193 STORPROC.DLL 77
Visual Basic 196, 197 STORPROC.LOG 77
static SQL STORPROC.XMP 78
compound SQL 301 STORTIME 55
considerations 81 StrConv function 192
DB2 on MVS 86 string variable 193
DESCRIBE CURSOR statement 164 structure 121
performance 300 subprocedures 333, 338, 341, 342
privilege 7 subprogram 85
status 196, 197, 198, 336 subscript variable 155
STATUS field of DISPLAY PROCEDURE command 22 subsystem identifier 32
STAYRESIDENT column 14, 79, 86, 102, 248 subtask 255
STCB 25 Sun 68
STEP debug command 327 Sybase 68
step mode 306 SYM TEST suboption 313
step over 325, 341 symbol table 313
step return 325 symbolic destination 311, 315
step through synch point manager 265
breakpoint 341 synch point processing 277
Code Editor window 340 synchronous execution 265
CODE/370 306, 325 SYNCLEVEL 291
VAB 331 SYNCLVL specification 265
Step/Run window 324 synonyms 151
STEPLIB 311 SYS1.MIGLIB library 60
STMT TEST suboption 313 SYS1.SAMPLIB library 63
STOP PROCEDURE command 20, 34 SYSADM authority 19, 20, 21, 101
STOPPED state 35, 53, 54 SYSCTRL authority 19, 20, 21
STOPPING state 35 SYSDEFSD data set 226, 227
storage SYSEXEC statement 99
link-edit 100 SYSIBM.SYSLOCATIONS 221
management 85 SYSIBM.SYSPROCEDURES
measurements 379 application environment 30
pointer 107 CALL flow 7
print_results function 144 CLI 228
store assignment rules 95 columns 14
stored procedure name entries required to run in DB2 and
case sensitivity 130 WLM-established address space 52
considerations 128 indicator variables 92
DB2 on MVS 129 INSERT statement 103
DB2 on the MVS platform 120 MRSP 150
embedded SQL 124 passing nulls 90
folding 129, 130 restricting access 104
load module 7 search precedence 16
supplied at execution time 1 START PROCEDURE command 19
Visual Basic 195 updating 13
stored procedures address space SYSOPR 19, 20, 21
architecture 7 SYSOUT statement 57
batch debug tool 328 Sysplex
bringing down 58 application environment 30, 32
CODE/370 311 checking WLM data sets 56
installation 9 couple data sets 26
language-specific libraries 86 DISPLAY command 34
load library 100 log streams 59
multiple 57 MODIFY command 33
non-DB2 resources 265 MONOPLEX 60
resident 102 OS/390 level 37
START PROCEDURE command 19 RRS CFRM policy 63
STOP PROCEDURE command 20 RRS implementation 59
two-phase commit (continued) VAB (continued)
RRS 265 developing stored procedure 335
RRSAF 256 development environment 331
WLM-established stored procedure 25 diskette 348
type 2 indexes 381, 418 editing a project 334
functions and features 331
home page 345
U remote debugger 343
UDF 67, 129, 331, 333 testing using QuickTest 343
udf files 334 VARCHAR data type 96
UDT 67, 335 VARCHAR parameter 17, 296
undelimited constant 130 VARGRAPHIC data type 96
unfenced stored procedures VARGRAPHIC parameter 17
environment variables 107 variable descriptor 226
LIBPATH 112 variable pool 108
measurement 302 VARY WLM command 33, 34, 55, 58
MRSP 138 VBX 331, 332, 337
performance 302, 304 views 104, 151
placement 111 virtual storage 9, 11, 102
searching 126 Visual Basic
Unicode 189, 190, 191 coding considerations 189
unit of work coding the application 194
client program 118 diskette 359
COMMIT_ON_RETURN 18 ODBC driver 187
CONNECT TYPE 2 88, 89 VAB 331, 332, 337
DB2 for MVS/ESA 8 visual explain 67
DB2 for OS/390 8 VisualAge 113
DB2 on MVS 7 VisualAge for Basic
non-DB2 resources 265 See VAB 135
performance goals 29 VisualDebugger 86
rollback 87 VisualGen 85, 132, 134
RRSAF 258 vmstat 380
unit-of-recovery 59 VS COBOL II 85
UNIX 67, 128 VSAM
unqualified name 120 DataJoiner 68
UPDATE statement 13, 152 JCL requirement 265
uppercase RACF 12, 53
DB2 for AIX 130 RRS log streams 60
DB2 for OS/2 130 temporary table 151
delimited identifier 130 VTAM 310, 379, 386, 401, 416
function name 114, 195
porting applications 236
stored procedure name 112, 129, 195 W
undelimited constant 130 wait state 322
user-defined functions WARP 379
See UDF WATCOM FORTRAN 77 32 Version 9.5 4, 105
user-defined types WCHARTYPE option 76, 111
See UDT Web
USING DESCRIPTOR 121, 124 enablement 68
util.c 139 Net.Data 69
util.h 139 WIN-OS/2 187
Windows
client support 67
V coding considerations 187
V2SUTIL utility 228, 248 diskette 348
VAB protected mode 114
client 132, 337, 339 STORPROC.DLL 77
creating stored procedures 332 VAB 331
debugging and testing with VAB 340
X
X/Open 82, 132, 135
ITSO Redbook Evaluation
Getting Started with DB2 Stored Procedures: Give Them a Call through the Network
SG24-4693-01
Your feedback is very important to help us maintain the quality of ITSO redbooks. Please complete this
questionnaire and return it using one of the following methods:
• Use the online evaluation form found at http://www.redbooks.ibm.com
• Fax this form to: USA International Access Code + 1 914 432 8264
• Send your comments in an Internet note to redbook@vnet.ibm.com
Please rate your overall satisfaction with this book using the scale:
(1 = very good, 2 = good, 3 = average, 4 = poor, 5 = very poor)
Was this redbook published in time for your needs? Yes____ No____
_____________________________________________________________________________________________________