
Informix Product Family Informix Warehouse Accelerator

Version 11.70

IBM Informix Warehouse Accelerator Administration Guide

SC27-3851-00


Note: Before using this information and the product it supports, read the information in "Notices" on page E-1.

Edition

This document contains proprietary information of IBM. It is provided under a license agreement and is protected by copyright law. The information contained in this publication does not include any product warranties, and any statements provided in this manual should not be interpreted as such.

When you send information to IBM, you grant IBM a nonexclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you.

Copyright IBM Corporation 2010, 2011.

US Government Users Restricted Rights: Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents

Introduction  v
    About this publication  v
    Types of users  v
    Software dependencies  v
    Assumptions about your locale  v
    What's New in the Informix Warehouse Accelerator  vi
    Example code conventions  vii
    Additional documentation  vii
    Compliance with industry standards  viii
    Syntax diagrams  viii
        How to read a command-line syntax diagram  ix
        Keywords and punctuation  x
        Identifiers and names  x
    How to provide documentation feedback  xi

Chapter 1. Overview of Informix Warehouse Accelerator  1-1
    Accelerator architecture options  1-3
    Accelerated query considerations  1-6
        Queries that benefit from acceleration  1-6
        Types of queries that are not accelerated  1-8
    Supported data types  1-9
    Supported functions and expressions  1-9
    Supported and unsupported joins  1-10
    Software prerequisites  1-11
    Hardware prerequisites  1-12

Chapter 2. Accelerator installation  2-1
    Accelerator directory structure  2-1
    Preparing the Informix database server  2-2
    Installing the accelerator  2-3
    Installing the administration interface  2-4
    Uninstalling the accelerator  2-5

Chapter 3. Accelerator configuration  3-1
    Configuring the accelerator (non-cluster installation)  3-1
    Configuring the accelerator (cluster installation)  3-2
    dwainst.conf configuration file  3-3
    Connecting the database server to the accelerator  3-6
    Enabling and disabling query acceleration  3-7
    The ondwa utility  3-9
        Users who can run the ondwa commands  3-9
        ondwa setup command  3-10
        ondwa start command  3-11
        ondwa status command  3-12
        ondwa getpin command  3-13
        ondwa tasks command  3-13
        ondwa stop command  3-14
        ondwa reset command  3-14
        ondwa clean command  3-15

Chapter 4. Accelerator data marts and AQTs  4-1
    Creating data marts  4-1
        Designing effective data marts  4-2
        Creating data mart definitions by using the administration interface  4-8
        Creating data mart definitions by using workload analysis  4-10
    Deploying a data mart  4-20
    Loading data into data marts  4-21
    Refreshing the data in a data mart  4-22
    Updating the data in a data mart  4-23
    Drop a data mart  4-23
    Handling schema changes  4-23
    Removing probing data from the database  4-24
    Monitoring AQTs  4-24

Chapter 5. Reversion requirements for an Informix warehouse edition server and Informix Warehouse Accelerator  5-1

Chapter 6. Troubleshooting  6-1
    Missing sbspace  6-1
    Memory issues for the coordinator node and the worker nodes  6-1
    Ensuring a result set includes the most current data  6-1

Appendix A. Sample warehouse schema  A-1

Appendix B. Sysmaster interface (SMI) pseudo tables for query probing data  B-1

Appendix C. Supported locales  C-1

Appendix D. Accessibility  D-1
    Accessibility features for IBM Informix products  D-1
        Accessibility features  D-1
        Keyboard navigation  D-1
        Related accessibility information  D-1
        IBM and accessibility  D-1
    Dotted decimal syntax diagrams  D-1

Notices  E-1
    Trademarks  E-3

Index  X-1

Introduction
About this publication
This publication contains comprehensive information about using Informix Warehouse Accelerator to process data warehouse queries more quickly than the Informix database server alone can process them.

Types of users
This publication is written for the following users:
- Database administrators
- System administrators
- Performance engineers
- Application developers

This publication is written with the assumption that you have the following background:
- A working knowledge of your computer, your operating system, and the utilities that your operating system provides
- Some experience working with relational and dimensional databases, or exposure to database concepts
- Some experience with database server administration, operating-system administration, network administration, or application development

You can access the Informix information centers, as well as other technical information such as technotes, white papers, and IBM Redbooks publications, online at http://www.ibm.com/software/data/sw-library/.

Software dependencies
This publication is written with the assumption that you are using IBM Informix Version 11.70.xC2 or later as your database server.

Assumptions about your locale


IBM Informix products can support many languages, cultures, and code sets. All the information related to character set, collation, and representation of numeric data, currency, date, and time that is used by a language within a given territory and encoding is brought together in a single environment, called a Global Language Support (GLS) locale.

The IBM Informix OLE DB Provider follows the ISO string formats for date, time, and money, as defined by the Microsoft OLE DB standards. You can override that default by setting an Informix environment variable or registry entry, such as DBDATE.

The examples in this publication are written with the assumption that you are using one of these locales: en_us.8859-1 (ISO 8859-1) on UNIX platforms or en_us.1252 (Microsoft 1252) in Windows environments. These locales support U.S. English format conventions for displaying and entering date, time, number, and currency values. They also support the ISO 8859-1 code set (on UNIX and Linux) or the Microsoft 1252 code set (on Windows), which includes the ASCII code set plus many 8-bit characters such as é, è, and ñ.

You can specify another locale if you plan to use characters from other locales in your data or your SQL identifiers, or if you want to conform to other collation rules for character data.

What's New in the Informix Warehouse Accelerator


This publication includes information about new features and changes in existing functionality. The following changes and enhancements are relevant to this publication.
Table 1. What's New in IBM Informix Warehouse Accelerator Administration Guide for IBM Informix Version 11.70.xC4

- Install Informix Warehouse Accelerator on a cluster system: You can now install Informix Warehouse Accelerator on a cluster system. Using a cluster for Informix Warehouse Accelerator takes advantage of the parallel processing power of multiple cluster nodes. The accelerator coordinator node and each worker node can run on a different cluster node. The Informix Warehouse Accelerator software and data are shared in the cluster file system. You can administer Informix Warehouse Accelerator from any node in the cluster. See "Configuring the accelerator (cluster installation)" on page 3-2.

- Refresh data mart data during query acceleration: You can now refresh data in Informix Warehouse Accelerator while queries are accelerated. You no longer need to suspend query acceleration in order to drop and re-create the original data mart. Using the existing administrator tools, you create a new data mart that has the same data mart definition as the original data mart, but has a different name. You can then load the new data mart while query acceleration still uses the original data mart. After the new data mart is loaded, the accelerator automatically uses the new data mart, with the latest data, to accelerate queries. See "Refreshing the data in a data mart" on page 4-22.

- User informix can run the ondwa commands: The ondwa commands can now be run by user informix, provided that the shell limits are set correctly. See "Users who can run the ondwa commands" on page 3-9.

- Support for new functions: The DECODE, NVL, and TRUNC functions are now supported. See the list of scalar functions at "Supported functions and expressions" on page 1-9.

- New options for monitoring AQTs: New columns have been added to the output of the onstat -g aqt command that include counters for candidate AQTs. See "onstat -g aqt command."
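As a sketch of the newly supported scalar functions, the following query uses DECODE, NVL, and TRUNC; the column names here are assumptions for illustration, not values from this guide:

```sql
-- Illustrative use of the newly supported functions (hypothetical columns):
SELECT NVL(store_number, 'unknown'),           -- substitute a value for NULLs
       DECODE(category, 42, 'Video', 'Other'), -- map specific values to labels
       TRUNC(extended_price, 2)                -- truncate to two decimal places
FROM daily_sales;
```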


Table 2. What's New in IBM Informix Warehouse Accelerator Administration Guide for IBM Informix Version 11.70.xC3

- Create data mart definitions automatically: Creating data mart definitions is one of the more time-consuming tasks in setting up the accelerator. You can use a new capability in Informix Warehouse Accelerator to create the data mart definitions for you automatically. This capability is especially useful if there are a very large number of tables in your database and using the administration interface to create the data mart definitions is cumbersome. It is also useful when you are not intimately familiar with the table schemas in your database. See "Creating data mart definitions by using the administration interface" on page 4-8.

- Additional locales supported: Previously, Informix Warehouse Accelerator supported only the default locale en_us.8859-1. Additional locales are now supported. See Appendix C, "Supported locales," on page C-1.

Example code conventions


Examples of SQL code occur throughout this publication. Except as noted, the code is not specific to any single IBM Informix application development tool. If only SQL statements are listed in the example, they are not delimited by semicolons. For instance, you might see the code in the following example:
CONNECT TO stores_demo
...
DELETE FROM customer
WHERE customer_num = 121
...
COMMIT WORK
DISCONNECT CURRENT

To use this SQL code for a specific product, you must apply the syntax rules for that product. For example, if you are using an SQL API, you must use EXEC SQL at the start of each statement and a semicolon (or other appropriate delimiter) at the end of the statement. If you are using DB-Access, you must delimit multiple statements with semicolons.

Tip: Ellipsis points in a code example indicate that more code would be added in a full application, but it is not necessary to show it to describe the concept being discussed.

For detailed directions on using SQL statements for a particular application development tool or SQL API, see the documentation for your product.
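As an illustration of the DB-Access convention, the same example would be written with each statement delimited by a semicolon:

```sql
-- The earlier example, delimited for DB-Access:
-- each statement ends with a semicolon.
CONNECT TO stores_demo;
DELETE FROM customer WHERE customer_num = 121;
COMMIT WORK;
DISCONNECT CURRENT;
```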

Additional documentation
Documentation about this release of IBM Informix products is available in various formats. You can access or install the product documentation from the Quick Start CD that is shipped with Informix products. To get the most current information, see the Informix information centers at ibm.com. You can access the information centers and other Informix technical information such as technotes, white papers, and IBM Redbooks publications online at http://www.ibm.com/software/data/sw-library/.

Compliance with industry standards


IBM Informix products are compliant with various standards.

IBM Informix SQL-based products are fully compliant with SQL-92 Entry Level (published as ANSI X3.135-1992), which is identical to ISO 9075:1992. In addition, many features of IBM Informix database servers comply with the SQL-92 Intermediate and Full Level and X/Open SQL Common Applications Environment (CAE) standards.

The IBM Informix Geodetic DataBlade Module supports a subset of the data types from the Spatial Data Transfer Standard (SDTS), Federal Information Processing Standard 173, as referenced by the document Content Standard for Geospatial Metadata, Federal Geographic Data Committee, June 8, 1994 (FGDC Metadata Standard).

Syntax diagrams
Syntax diagrams use special components to describe the syntax for statements and commands.
Table 3. Syntax Diagram Components

Component                                    Meaning
>>---                                        Statement begins.
--->                                         Statement continues on the next line.
>---                                         Statement continues from the previous line.
---><                                        Statement ends.
Item on the main line                        Required item.
Item below the main line                     Optional item.
Stacked items, one on the main line          Required item with choice. Only one item must be present.
Stacked items below the main line            Optional items with choice, one of which you might specify.
Value above the main line, stacked           The values below the main line are optional, one of which
items below                                  you might specify. If you do not specify an item, the value
                                             above the line is used by default.
Repeat arrow with a comma                    Optional items. Several items are allowed; a comma must
                                             precede each repetition.
|-- segment name --|                         Reference to a syntax segment, and the expanded syntax
                                             segment itself.

How to read a command-line syntax diagram


Command-line syntax diagrams use similar elements to those of other syntax diagrams. Some of the elements are listed in the table in Syntax Diagrams.

Creating a no-conversion job:

onpladm create job job [-p project] -n -d device -D database -t table
        [-S server | -T target | Setting the Run Mode (1)] ...

Notes:
1  See page Z-1

This diagram has a segment named Setting the Run Mode, which according to the diagram footnote is on page Z-1. If this were an actual cross-reference, you would find this segment on the first page of Appendix Z. Instead, this segment is shown in the following segment diagram. Notice that the diagram uses segment start and end components.


Setting the run mode:

-f [d | p | a] [c] [l | u] [n | N]

To see how to construct a command correctly, start at the upper left of the main diagram. Follow the diagram to the right, including the elements that you want. The elements in this diagram are case-sensitive because they illustrate utility syntax. Other types of syntax, such as SQL, are not case-sensitive.

The Creating a No-Conversion Job diagram illustrates the following steps:
1. Type onpladm create job and then the name of the job.
2. Optionally, type -p and then the name of the project.
3. Type the following required elements:
   - -n
   - -d and the name of the device
   - -D and the name of the database
   - -t and the name of the table
4. Optionally, you can choose one or more of the following elements and repeat them an arbitrary number of times:
   - -S and the server name
   - -T and the target server name
   - The run mode. To set the run mode, follow the Setting the Run Mode segment diagram to type -f, optionally type d, p, or a, and then optionally type l or u.
5. Follow the diagram to the terminator.
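Following those steps, an assembled command might look like the sketch below; the job, project, device, and server names are hypothetical placeholders, not values from this guide:

```shell
# Hypothetical command assembled from the diagram: job "nightly_load",
# project "warehouse", device "/dev/rdsk/tape0", database "stores_demo",
# table "customer", server "ol_srv1", with run mode -fd as described in
# the Setting the Run Mode segment.
onpladm create job nightly_load -p warehouse -n \
    -d /dev/rdsk/tape0 -D stores_demo -t customer \
    -S ol_srv1 -fd
```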

Keywords and punctuation


Keywords are words reserved for statements and all commands except system-level commands. When a keyword appears in a syntax diagram, it is shown in uppercase letters. When you use a keyword in a command, you can write it in uppercase or lowercase letters, but you must spell the keyword exactly as it appears in the syntax diagram. You must also use any punctuation in your statements and commands exactly as shown in the syntax diagrams.

Identifiers and names


Variables serve as placeholders for identifiers and names in the syntax diagrams and examples. You can replace a variable with an arbitrary name, identifier, or literal, depending on the context. Variables are also used to represent complex syntax elements that are expanded in additional syntax diagrams. When a variable appears in a syntax diagram, an example, or text, it is shown in lowercase italic.


The following syntax diagram uses variables to illustrate the general form of a simple SELECT statement.
SELECT column_name FROM table_name

When you write a SELECT statement of this form, you replace the variables column_name and table_name with the name of a specific column and table.
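For example, substituting names from the customer table of the stores_demo demonstration database used earlier in this introduction:

```sql
-- Variables replaced with a specific column and table name.
SELECT customer_num FROM customer
```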

How to provide documentation feedback


You are encouraged to send your comments about IBM Informix user documentation. Use one of the following methods:
- Send email to docinf@us.ibm.com.
- In the Informix information center, which is available online at http://www.ibm.com/software/data/sw-library/, open the topic that you want to comment on. Click the feedback link at the bottom of the page, fill out the form, and submit your feedback.
- Add comments to topics directly in the information center and read comments that were added by other users. Share information about the product documentation, participate in discussions with other users, rate topics, and more!

Feedback from all methods is monitored by the team that maintains the user documentation. The feedback methods are reserved for reporting errors and omissions in the documentation. For immediate help with a technical problem, contact IBM Technical Support at http://www.ibm.com/planetwide/.

We appreciate your suggestions.


Chapter 1. Overview of Informix Warehouse Accelerator


Informix Warehouse Accelerator is a product that you use to improve the performance of your warehouse queries. Informix Warehouse Accelerator processes warehouse queries significantly more quickly than the Informix database server. The accelerator improves performance while at the same time reducing administration: there are only a few configuration parameters that you need to set to tune query performance.

You can install the accelerator on the same computer as the Informix database server, as shown in the following figure. You can also install Informix Warehouse Accelerator on a separate computer.

Figure 1-1. Informix Warehouse Accelerator installed on the same computer as the Informix database server. (Client applications connect through client connectivity to the Informix server, which communicates with the accelerator on the same computer.)

You can use Informix Warehouse Accelerator with an Informix database server that supports a mixed workload (online transactional processing (OLTP) database and a data warehouse database), or use it with a database server that supports only a data warehouse database. However, before you use Informix Warehouse Accelerator you must design and implement a dimensional database that uses a star or snowflake schema for your data warehouse. This design includes selecting the business subject areas that you want to model, determining the granularity of the fact tables, and identifying the dimensions and hierarchies for each fact table. Additionally, you must identify the measures for the fact tables and determine the attributes for each dimension table. The following figure shows a sample snowflake schema with a fact table and multiple dimension tables.


Figure 1-2. A sample snowflake schema that has the DAILY_SALES table as the fact table. (The fact table joins to the dimension tables PERIOD, PRODUCT, STORE, CUSTOMER, and PROMOTION, with outrigger tables such as MONTH, QUARTER, REGION, CITY, CONTACT, ADDRESS, DEMOGRAPHICS, BRAND, and PRODUCT_LINE.)
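A minimal DDL sketch of such a star schema is shown below. The table names echo the sample schema, and the column names echo the query examples later in this chapter, but the exact column list is an assumption for illustration; a real fact table carries many more keys and measures:

```sql
-- Illustrative star-schema sketch: one fact table and two dimension tables.
CREATE TABLE period (
    perkey        INTEGER PRIMARY KEY,
    calendar_date DATE
);

CREATE TABLE product (
    prodkey   INTEGER PRIMARY KEY,
    item_desc VARCHAR(60),
    category  INTEGER
);

-- Fact table: foreign keys to the dimensions plus the measures.
CREATE TABLE daily_sales (
    perkey         INTEGER REFERENCES period(perkey),
    prodkey        INTEGER REFERENCES product(prodkey),
    quantity_sold  INTEGER,
    extended_price DECIMAL(12,2),
    extended_cost  DECIMAL(12,2)
);
```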

Administration interface
Informix Warehouse Accelerator includes an Eclipse-based administration interface, IBM Smart Analytics Optimizer Studio. You use this interface to administer the accelerator, and the data contained within the accelerator. The administration tasks are performed using a set of stored procedures in the Informix database server. The stored procedures are called through the administration interface.

Accelerator utilities
You also use utilities that are supplied with Informix Warehouse Accelerator to create the files and subdirectories that are required to run an accelerator instance and start the accelerator nodes. By default, the Informix Warehouse Accelerator uses one coordinator node and one worker node. The coordinator node is a process that manages the query tasks, such as load query and run query. The Informix database server and the ondwa utility connect to the coordinator node. The worker node is a process that communicates only with the coordinator node. The worker node has all of the data in main memory, compresses the data, and processes the queries.
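For example, the basic life cycle of an accelerator instance with the ondwa utility uses the commands documented in Chapter 3; prerequisites and output depend on your installation:

```shell
# Sketch of a typical ondwa session.
ondwa setup    # create the files and subdirectories for the instance
ondwa start    # start the coordinator node and the worker node(s)
ondwa status   # verify that the nodes are up
ondwa stop     # shut the instance down when finished
```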

Accelerator samples
Informix Warehouse Accelerator also includes a sample set of Java classes that you can use from the command line or in an application. You use these classes to perform many of the same tasks that you can perform with the administration interface.

Tip: One reason that you might use these classes is to automate the steps required to refresh the data stored in the data marts. Instead of dropping and re-creating the data marts manually through the administration interface, you can create an application that runs whenever it is convenient for your organization.

See the dwa_java_reference.txt file in the dwa/example/cli/ directory for more information about these sample Java classes.

Accelerator architecture
The following figure shows that the database server communicates with the worker nodes through the coordinator node, and describes the roles of the database server, the coordinator node, and the worker nodes.

Informix database server: Performs the query parsing and AQT matching. The optimizer routes the query blocks.

Coordinator node: Manages the distribution tasks, such as loading data and query processing.

Worker nodes: Store the data in main memory, spread across all of the nodes. Perform the data compression and query processing.

Figure 1-3. A sample accelerator node architecture with one coordinator node and four worker nodes.

Informix Warehouse Accelerator must have its own copy of the data. The data is stored in logical collections of related data, or data marts. After you create a data mart, information about the data mart is sent to the database server in the form of a special view referred to as an accelerated query table, or AQT.

The architecture that is implemented with Informix Warehouse Accelerator is optimal for processing data warehouse queries. The demands of data warehouse queries are substantially different from OLTP queries. A typical data warehouse query requires the processing of a large set of data and returns a small result set. The overhead required to accelerate a query and return a result set is negligible compared with the benefits of using Informix Warehouse Accelerator:
- The accelerator uses a copy of the data.
- To expedite query processing, the copy of the data is kept in memory in a special compressed format. Advances in compression techniques, query processing on compressed data, and hybrid columnar organization of compressed data enable the accelerator to query the compressed data.
- The data is compressed and stored with the accelerator to maximize parallel query processing.

In addition to the performance gain of the warehouse query itself, the resources on the database server can be better utilized for other types of queries, such as OLTP queries, which perform more efficiently on the database server.

Accelerator architecture options


There are several approaches that you can use to integrate the accelerator into your current system architecture.

You can install Informix Warehouse Accelerator on the same computer as your Informix database server, on a separate computer, or on a cluster. There must be a TCP/IP connection between the database server and the accelerator. If the accelerator is installed on the same computer as the database server, the connection must be a local loopback TCP/IP connection.

The query optimizer on the database server identifies which data warehouse queries can be accelerated and sends those queries to the accelerator. The result set is sent back to the database server, which passes the result set back to the client. If a query cannot be processed by the accelerator, the query is processed on the database server.

You use an administration tool called IBM Smart Analytics Optimizer Studio to perform the administration tasks that are required on the accelerator. IBM Smart Analytics Optimizer Studio is commonly referred to as the administration interface. The administration tasks are implemented as a set of stored procedures that are called through the administration interface. You can install the administration interface on the same computer as the Informix database server or on a separate computer.

The accelerator and database server on the same computer


The following figure shows the accelerator and the database server installed on the same computer with the administration interface installed on a separate computer.

Figure 1-4. The accelerator and the database server installed on the same computer. (The optimizer on the database server decides whether to send each SQL query over a local loopback TCP/IP connection to the accelerator; the result set is returned to the client. The administration interface runs on a separate computer.)

The accelerator and database server on different computers


The following figure shows the accelerator and the database server installed on different computers, with the administration interface installed on a separate computer.

Figure 1-5. The accelerator and the database server installed on different computers. (The database server runs on a symmetric multiprocessing system and communicates over a TCP/IP connection with the accelerator, which runs on a Linux computer with 64-bit Intel processors. The administration interface runs on a separate computer.)

The accelerator installed on a cluster system


The following figure shows the accelerator installed on a cluster and the database server and administration interface installed on separate computers.

Figure 1-6. The accelerator installed on a cluster system and the database server and administration interface installed on separate computers.

Accelerated query considerations


The characteristics of a query determine if the query can be processed by the accelerator. There are some queries that cannot be accelerated. A query can be sent to the accelerator for processing only if:
- The query refers to the fact table
- The query refers to a subset of the tables in the data mart definition
- The table joins that are in the query match the table joins that are specified in the data mart definition

Queries that benefit from acceleration


The database server uses information about the existing data marts to determine which types of queries should be sent to the accelerator for processing. Only SELECT queries that refer to a fact table, or SELECT queries that join a fact table with one or more dimension tables, are processed by the accelerator.

The following types of queries will benefit the most from being sent to the accelerator:
- Complex, ad hoc queries that look for trends or exceptions to make workable business decisions
- Queries that access a large subset of the database, often by using sequential scans
- Queries that involve aggregation functions such as COUNT, SUM, AVG, MAX, MIN, and VARIANCE
- Queries that often create reports that group data by time, product, geography, customer set, or market
- Queries that involve star joins or snowflake joins of a large fact table with several dimension tables

Related concepts:
"Analyze queries for acceleration" on page 4-7
"Types of queries that are not accelerated" on page 1-8

Query Example: Quantity, revenue, and cost by item


This example shows, for a given category, all of the items including quantity sold, revenue, and cost.
SELECT ITEM_DESC, SUM(QUANTITY_SOLD), SUM(EXTENDED_PRICE),
       SUM(EXTENDED_COST)
FROM PERIOD, DAILY_SALES, PRODUCT, STORE, PROMOTION
WHERE PERIOD.PERKEY=DAILY_SALES.PERKEY
  AND PRODUCT.PRODKEY=DAILY_SALES.PRODKEY
  AND STORE.STOREKEY=DAILY_SALES.STOREKEY
  AND PROMOTION.PROMOKEY=DAILY_SALES.PROMOKEY
  AND CALENDAR_DATE BETWEEN '2011/01/01' AND '2011/01/31'
  AND STORE_NUMBER='01'
  AND PROMODESC IN ('Advertisement', 'Coupon', 'Weekly Special',
                    'Overstocked Items')
  AND CATEGORY=42
GROUP BY ITEM_DESC;

Query Example: Profit by store


This example shows the profit by store for a given category on a given day.
SELECT T1.STORE_NUMBER, T1.CITY, T1.DISTRICT, SUM(AMOUNT) AS SUM_PROFIT
FROM (SELECT STORE_NUMBER, STORE.CITY, DISTRICT,
             EXTENDED_PRICE-EXTENDED_COST
      FROM PERIOD, PRODUCT, STORE, DAILY_SALES
      WHERE PERIOD.CALENDAR_DATE='3/1/2011'
        AND PERIOD.PERKEY=DAILY_SALES.PERKEY
        AND PRODUCT.PRODKEY=DAILY_SALES.PRODKEY
        AND STORE.STOREKEY=DAILY_SALES.STOREKEY
        AND PRODUCT.CATEGORY=42
     ) AS T1(STORE_NUMBER, CITY, DISTRICT, AMOUNT)
GROUP BY DISTRICT, CITY, STORE_NUMBER
ORDER BY DISTRICT, CITY, STORE_NUMBER DESC;

Query Example: Revenue by store for each brand


This example takes the revenue for each brand and calculates the revenue by store. Products are grouped by store, current week, prior week, and prior month totals.
SELECT STORE_NUMBER,
       SUM(CASE WHEN ((CALENDAR_DATE >= '8/8/2010')
                  AND (CALENDAR_DATE < '8/14/2010'))
           THEN EXTENDED_PRICE ELSE 0 END) AS CURR_PERIOD,
       SUM(CASE WHEN ((CALENDAR_DATE >= '8/1/2010')
                  AND (CALENDAR_DATE <= '8/7/2010'))
           THEN EXTENDED_PRICE ELSE 0 END) AS PRIOR_WEEK,
       SUM(CASE WHEN ((CALENDAR_DATE >= '7/1/2010')
                  AND (CALENDAR_DATE <= '7/28/2010'))
           THEN EXTENDED_PRICE ELSE 0 END) AS PRIOR_MONTH
FROM PERIOD, PRODUCT, DAILY_SALES, STORE
WHERE PRODUCT.PRODKEY=DAILY_SALES.PRODKEY
  AND PERIOD.PERKEY=DAILY_SALES.PERKEY
  AND STORE.STOREKEY=DAILY_SALES.STOREKEY
  AND CALENDAR_DATE BETWEEN '7/1/2010' AND '8/14/2010'
  AND ITEM_DESC LIKE 'NESTLE%'
GROUP BY STORE_NUMBER
ORDER BY STORE_NUMBER;

Query Example: Week to day profits


This example shows the week to day profits for a given category within region.
SELECT FIRST 100 SUB_CATEGORY_DESC,
       SUM(CASE REGION WHEN 'North' THEN EXTENDED_PRICE-EXTENDED_COST
           ELSE 0 END) AS NORTHERN_REGION,
       SUM(CASE REGION WHEN 'South' THEN EXTENDED_PRICE-EXTENDED_COST
           ELSE 0 END) AS SOUTHERN_REGION,
       SUM(CASE REGION WHEN 'East' THEN EXTENDED_PRICE-EXTENDED_COST
           ELSE 0 END) AS EASTERN_REGION,
       SUM(CASE REGION WHEN 'West' THEN EXTENDED_PRICE-EXTENDED_COST
           ELSE 0 END) AS WESTERN_REGION,
       SUM(CASE WHEN REGION IN ('North', 'South', 'East', 'West')
           THEN EXTENDED_PRICE-EXTENDED_COST ELSE 0 END) AS ALL_REGIONS
FROM PERIOD per, PRODUCT prd, STORE st, DAILY_SALES s
WHERE per.PERKEY=s.PERKEY
  AND prd.PRODKEY=s.PRODKEY
  AND st.STOREKEY=s.STOREKEY
  AND per.CALENDAR_DATE BETWEEN '07/01/2010' AND '09/30/2010'
  AND CATEGORY<>88
GROUP BY SUB_CATEGORY_DESC
ORDER BY SUB_CATEGORY_DESC;

Types of queries that are not accelerated


Some queries will not benefit from being sent to the accelerator, and some queries are not considered for acceleration at all.

Queries that will not benefit from being accelerated


Queries that only refer to a single, small dimension table do not benefit from being sent to the accelerator for processing as much as queries that also refer to a fact table. Queries that return a large result set should be processed by the database server to avoid the overhead of sending the large result set from the accelerator over the connection to the database server. If a query returns millions of rows, the total response time of the query is influenced by the maximum transfer rate of the connection. For example, the following query might return a very large result set:
SELECT * FROM fact_table ORDER BY sales_date;

Queries that search only a small number of rows of data should be processed by the database server to avoid the overhead of sending the query to the accelerator.

Queries that are not considered for acceleration


There are some queries that will not be processed by the accelerator.


Queries that would change the data cannot be processed by the accelerator and must be processed by the database server. The data in the accelerator is a read-only snapshot of the source data. There is no mechanism to change the data in the data marts and replicate those changes back to the source database server. Queries that contain INSERT, UPDATE, or DELETE statements, queries that contain subqueries, and other OLTP queries are also not processed by the accelerator.
Related concepts:
Queries that benefit from acceleration on page 1-6

Supported data types


The Informix Warehouse Accelerator supports specific data types. The following data types are supported:
- BIGINT
- BIGSERIAL
- CHAR
- CHARACTER
- CHARACTER VARYING
- DATE
- DATETIME YEAR TO FRACTION
- DECIMAL
- DOUBLE PRECISION
- FLOAT
- INT
- INT8
- INTEGER
- LVARCHAR
- MONEY
- NUMERIC
- SMALLFLOAT
- SMALLINT
- SERIAL
- SERIAL8
- REAL
- VARCHAR

Supported functions and expressions


The accelerator supports specific functions and expressions.

Aggregate functions and expressions


The following aggregate functions are supported by the accelerator:
- AVG
- STDEV
- SUM
- VARIANCE

User-defined functions
User-defined functions are not supported.

Scalar functions
The following scalar functions are supported by the accelerator:
- ABS
- ADD_MONTHS
- CASE
- CEIL
- CONCAT
- COUNT
- DATE
- DAY
- DECODE
- FLOOR
- LAST_DAY
- LOWER
- LPAD
- LTRIM
- MAX
- MIN
- MOD
- MONTH
- MONTHS_BETWEEN
- NEXT_DAY
- NVL
- POW
- POWER
- RANGE
- ROUND
- RPAD
- RTRIM
- TRUNC
- UPPER
- WEEKDAY

Supported and unsupported joins


Specific join types, join predicates, and join combinations are supported by the accelerator.


Supported joins
Equality join predicates, INNER joins, and LEFT OUTER joins are the supported join types. The fact table referenced in the query must be on the left side of the LEFT OUTER join.

Unsupported joins
The following joins are not supported:
- RIGHT OUTER joins
- FULL OUTER joins
- Informix outer joins
- Joins that do not use an equality predicate
- Subqueries

Software prerequisites
There are separate software prerequisites for the Informix Warehouse Accelerator and the administration interface, IBM Smart Analytics Optimizer Studio. Informix Warehouse Accelerator must be installed on a computer that uses a Linux Intel x86 64-bit operating system. Informix Warehouse Accelerator can be installed on the same computer as the Informix database server, on a separate computer, or on a cluster. If you install the accelerator on a separate computer, the Informix database server must be installed on a computer that uses one of the following operating systems:
- AIX 64-bit
- HP IA 64-bit
- Solaris SPARC 64-bit
- Linux Intel x86 64-bit
For a detailed list of the operating systems supported by the current version of Informix and by other IBM Informix products, download the platform availability spreadsheet from http://www.ibm.com/software/data/informix/pubs/roadmaps.html. Search for the product name, or sort the spreadsheet by name.

The accelerator and database server on the same computer


You must have the following installed:
- IBM Informix 11.70.xC2, or later
- A Linux Intel x86 64-bit operating system
- One of the supported Linux distributions: Red Hat Enterprise Linux (RHEL 5 update 3 or above) or SUSE Linux Enterprise Server (SLES 11). You might need to adjust the SHMMAX Linux kernel parameter. SHMMAX is a shared memory parameter that controls the maximum size, in bytes, of a shared memory segment.
- Telnet client program
- Expect utility (expect-5)
- su command
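The SHMMAX note above can be checked from a shell before installation. The following is a minimal sketch, assuming bash on Linux; the 64 GB target is only an illustration and should be sized to the memory you plan to give the accelerator:

```shell
# Read the current SHMMAX limit in bytes (0 if /proc is not available).
current=$(cat /proc/sys/kernel/shmmax 2>/dev/null || echo 0)
# Hypothetical target for this sketch: 64 GB in bytes.
target=$((64 * 1024 * 1024 * 1024))
# Compare with awk so very large kernel defaults do not overflow the shell test.
below=$(awk -v c="$current" -v t="$target" 'BEGIN { print (c+0 < t+0) ? "yes" : "no" }')
echo "current SHMMAX: $current"
if [ "$below" = "yes" ]; then
  echo "SHMMAX is below the target; as root run: sysctl -w kernel.shmmax=$target"
fi
```

To make the change persistent, the usual approach is a kernel.shmmax entry in /etc/sysctl.conf.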

The accelerator and database server on different computers


You must have the following installed on the Linux computer where the accelerator is installed:
- A Linux x86 64-bit operating system
- One of the supported Linux distributions: Red Hat Enterprise Linux (RHEL) 5 or SUSE Linux Enterprise Server (SLES). You might need to adjust the SHMMAX Linux kernel parameter. SHMMAX is a shared memory parameter that controls the maximum size, in bytes, of a shared memory segment.
- Telnet client program
- Expect utility (expect-5)
- su command

The accelerator installed on a cluster system


The following are additional requirements for installing the accelerator on a cluster system:
- You must have a shared-disk cluster file system, for example, IBM General Parallel File System (GPFS).
- The user root must be able to connect to all cluster nodes by using the Secure Shell (SSH) network protocol without a password.
- If user informix is used for accelerator administration, user informix must also be able to connect to all cluster nodes by using the Secure Shell (SSH) network protocol without a password.

The administration interface


You can install the administration interface, IBM Smart Analytics Optimizer Studio, on the same computer as the Informix database server or on a separate computer. If you install the administration interface on a separate computer, that computer can use either a Linux or a Windows operating system.
Related concepts:
Hardware prerequisites

Hardware prerequisites
Make certain that you have the appropriate hardware to support the Informix Warehouse Accelerator and the administration interface. Informix Warehouse Accelerator can be installed on the same computer as the Informix database server, on a separate computer, or on a cluster. You can install the administration interface on the same computer as the Informix database server or on a separate computer. Important: The accelerator caches compressed data in memory. It is essential that the computer where the accelerator is installed is configured with a large amount of memory.


The computer on which you install Informix Warehouse Accelerator must have a CPU with the Streaming SIMD Extensions 3 (SSE3) instruction set. To verify what is installed on the computer, you can run the cat /proc/cpuinfo command and look at the flags that are returned. For example, you can use the configuration that is shown in the following table:
Component              Capacity / Size
System                 IBM System x3850 X5
Processor              Intel Xeon CPU X7560 @ 2.26GHz (8-core)
Number of processors   4
Memory                 512 GB

For additional information about the IBM System x3850 X5, see the specifications at http://www.ibm.com/systems/x/hardware/enterprise/x3850x5/specs.html.
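The cat /proc/cpuinfo check described above can be scripted. A minimal sketch, assuming a Linux /proc file system; note that /proc/cpuinfo usually reports the SSE3 flag as pni (Prescott New Instructions):

```shell
# Look for the SSE3 flag in the CPU flags line; "pni" is the usual spelling.
if grep -m1 '^flags' /proc/cpuinfo 2>/dev/null | grep -qwE 'pni|sse3'; then
  sse3="yes"
else
  sse3="no"
fi
echo "SSE3 instruction set present: $sse3"
```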

Hardware prerequisites for cluster configurations


The following are the hardware prerequisites if you install the accelerator on a cluster system:
- The number of CPUs and the amount of memory must be the same on each cluster node.
- You must have at least two cluster nodes. The maximum number of cluster nodes is 256.
Related concepts:
Software prerequisites on page 1-11


Chapter 2. Accelerator installation


You can install Informix Warehouse Accelerator on the same computer as the Informix database server, on a separate computer, or on a cluster.
Important: Only one instance of Informix Warehouse Accelerator can be installed on a computer.
Informix Warehouse Accelerator includes an Eclipse-based administration client called IBM Smart Analytics Optimizer Studio. You can install IBM Smart Analytics Optimizer Studio on the same computer as the accelerator or on a separate computer, including a Windows computer.
Before you install the accelerator, ensure that your computers meet the software and hardware prerequisites, and decide which architecture you want to implement.

Accelerator directory structure


When you install and set up Informix Warehouse Accelerator, several directories are needed.

Installation directory
The accelerator is installed in the directory that is specified by the INFORMIXDIR environment variable, if the variable is set in the environment in which the installer is launched. If the variable is not set, the default installation directory is /opt/IBM/informix. Whenever there is a reference to the file path for the accelerator installation directory, the file path appears as $IWA_INSTALL_DIR.

Storage directory
The accelerator instance resides in its own directory, referred to as the accelerator storage directory. This directory stores the accelerator catalog, data marts, logs, traces, and so forth. You create this directory when you configure the accelerator. The file path for this directory is stored in the DWADIR parameter in the dwainst.conf file.

Administration interface directory


You specify the path to this directory when you install the administration interface. For example you might use $IWA_INSTALL_DIR/dwa_gui as the administration interface directory.

Sample directory for Java classes


The Java classes that are included with the command line sample are located in the dwa/example/cli directory. Tip: Information about the Java classes is located in the dwa_java_reference.txt file in the dwa/example/cli directory.

Documentation directory
Before you install the accelerator, you can access the accelerator documentation in the following directories:
- The release notes file is in the $IWA_ROOT_DIR/doc directory
- The Quick Start Guide is in the $IWA_ROOT_DIR/quickstart directory
After you install the accelerator, you can access the accelerator documentation in the following directory:
- The release notes file is in the $IWA_INSTALL_DIR/release/en_us/0333/doc directory
Related tasks:
Configuring the accelerator (non-cluster installation) on page 3-1
Related reference:
dwainst.conf configuration file on page 3-3

Preparing the Informix database server


Before you install the accelerator, you need to configure the database server.
To configure the Informix database server:
1. Ensure that user informix has write access to the sqlhosts file and to the directory that the file is in.
2. To use the administration interface, you must define a SOCTCP network connection type in the sqlhosts file that the INFORMIXSQLHOSTS environment variable specifies, or in the ONCONFIG file by using the NETTYPE configuration parameter.
3. If you do not already have a default sbspace created and configured, create the default sbspace:
a. In the ONCONFIG file, set the SBSPACENAME parameter to the name of your default sbspace. For example, to name the default sbspace sbspace1:
SBSPACENAME sbspace1 # Default sbspace name

You must update the ONCONFIG file before you start the database server.
b. Use the onspaces command to create the sbspace. The following example creates an sbspace named sbspace1:
onspaces -c -S sbspace1 -p $INFORMIXDIR/tmp/sbspace1 -o 0 -s 30000

Note: The size of the sbspace can be relatively small, for example, between 30 and 50 MB.
c. Restart the Informix database server.
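Before onspaces can use a cooked file as the sbspace chunk, the file is commonly created ahead of time with permissions the database server can use. A hedged sketch follows; the path is hypothetical, and the chown to informix:informix, which requires root, is shown only as a comment:

```shell
# Create an empty chunk file for the sbspace (hypothetical location).
demo=${TMPDIR:-/tmp}/informix-demo
mkdir -p "$demo"
chunk=$demo/sbspace1
touch "$chunk"
chmod 660 "$chunk"          # the server expects mode 660
# As root: chown informix:informix "$chunk"
perms=$(stat -c %a "$chunk")
echo "chunk $chunk mode $perms"
# Then, as user informix:
#   onspaces -c -S sbspace1 -p $chunk -o 0 -s 30000
```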

2-2

IBM Informix Warehouse Accelerator Administration Guide

Related concepts: Missing sbspace on page 6-1 Related tasks: Setting up the sqlhosts file (UNIX) (Administrator's Guide) Related reference: NETTYPE Configuration Parameter (Administrator's Reference)
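Step 2 of this procedure requires a SOCTCP connection type. The entries below are a sketch only; the server name, host name, and port are hypothetical and must match your environment:

```
# sqlhosts entry: dbservername  nettype  hostname  servicename/port
demo_on  onsoctcp  myhost  21000

# ONCONFIG entry (NETTYPE protocol,poll threads,connections,vp class):
NETTYPE soctcp,1,150,NET
```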

Installing the accelerator


You can use the graphical mode, console mode, or silent mode to install Informix Warehouse Accelerator.
Prerequisites:
- You must have an Informix warehouse edition installed before you can install Informix Warehouse Accelerator.
- Only one instance of Informix Warehouse Accelerator can be installed on a computer.
- If you are installing on a cluster system, ensure that the file paths for the Informix Warehouse Accelerator installation files are the same on each cluster node.
You can install Informix Warehouse Accelerator on the same computer as your Informix database server, on a separate computer, or on a cluster system. Informix Warehouse Accelerator is included in any warehouse edition of the Informix database server. You can install the accelerator from the provided installation media, or you can install it after you download a warehouse edition of Informix from Passport Advantage.
1. On the computer where you want to run the Informix Warehouse Accelerator installation program, log in as user root.
2. From the product media or the download site, locate the IBM Informix warehouse edition bundle and unpack the iif.version.tar file.
3. Select the installation mode that you want to use:
- For the graphical or console mode:
a. Issue the install command to start the installation program:

   Installation mode    Installation command
   Graphical            ./iwa_install -i gui
   Console              ./iwa_install -i console

b. Read the license agreement and accept the terms.
c. Respond to the prompts in the installation program as the program guides you through the installation.
- For the silent mode:
a. Make a copy of the response file template that is located in the same directory as the Informix Warehouse Accelerator installation program. The name of the template is iwa.properties.
b. In the response file, change the value for the license from FALSE to TRUE to indicate that you accept the license terms. For example:
DLICENSE_ACCEPTED=TRUE

c. Issue the installation command for the silent mode. The command for the silent mode installation is:
./iwa_install -i silent -f "file_path"

Tip: Specify the absolute path for the response file. For example, to use the silent mode with a response file called installer.properties that is located in the /usr3/iwa/ directory, the command is:
./iwa_install -i silent -f "/usr3/iwa/installer.properties"

After you complete the installation, there are additional steps if you want to install the administration interface.
- The accelerator is installed in the directory that is specified by the INFORMIXDIR environment variable, if the INFORMIXDIR environment variable is set in the environment in which the installer is launched. Otherwise, the default installation directory is /opt/IBM/informix.
- The configuration file, $IWA_INSTALL_DIR/dwa/etc/dwainst.conf, is generated during the installation. This configuration file is required to start the accelerator.
- The installation log file, $IWA_INSTALL_DIR/IBM_Informix_Warehouse_Accelerator_InstallLog.log, is generated during the installation. This log file provides information about the actions performed during installation and the success or failure status of those actions.
You must configure and start the accelerator before you can use it.
Related tasks:
Installing the administration interface
Chapter 3, Accelerator configuration, on page 3-1
Uninstalling the accelerator on page 2-5
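The silent-mode steps above can be scripted. A sketch, assuming the template is copied under /tmp; the one-line file here merely stands in for the shipped iwa.properties, which contains more settings:

```shell
# Stand-in for the shipped template; the real iwa.properties has more settings.
printf 'DLICENSE_ACCEPTED=FALSE\n' > /tmp/iwa.properties
# Copy the template, then flip the license flag to accept the terms.
cp /tmp/iwa.properties /tmp/installer.properties
sed -i 's/^DLICENSE_ACCEPTED=FALSE/DLICENSE_ACCEPTED=TRUE/' /tmp/installer.properties
accepted=$(grep -c '^DLICENSE_ACCEPTED=TRUE' /tmp/installer.properties)
echo "license accepted lines: $accepted"
# Then run: ./iwa_install -i silent -f "/tmp/installer.properties"
```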

Installing the administration interface


You can install the administration interface, IBM Smart Analytics Optimizer Studio, on the same computer as the Informix database server or on a separate computer.
- You must have an Informix warehouse edition and Informix Warehouse Accelerator installed before you can install the administration interface.
- You must have write permission on the directory where you plan to install IBM Smart Analytics Optimizer Studio.
To install the administration interface:
1. Locate the IBM Smart Analytics Optimizer Studio installation programs. On the computer where you unpacked the IBM Informix warehouse edition .tar file, the installation programs are in the IBM_Smart_Analytics_Optimizer_Suite directory. There is one installation program for Linux and UNIX computers, and another installation program for Windows computers. The names of the installation programs are:
IBM_Smart_Analytics_Optimizer_Suite/install.bin
IBM_Smart_Analytics_Optimizer_Suite/install.exe

2. If you are installing the administration interface on a separate computer, you can insert the provided media or FTP the installation program to the separate computer. 3. Run the installation program to install the administration interface. The administration interface is installed in the path that you specify in the installer, for example $IWA_INSTALL_DIR/dwa_gui.


Tip: Linux and UNIX only - You can use the -i swing command for graphical installation or the -i silent command for silent installation. The -i console command is not supported.
4. Ensure that the interface opens without any errors.
After you complete the installation, open the directory where the administration interface is installed and run the ./datastudio command to start the administration interface. You must configure and start the accelerator before you can use it.
Related tasks:
Installing the accelerator on page 2-3
Chapter 3, Accelerator configuration, on page 3-1
Uninstalling the accelerator

Uninstalling the accelerator


If you need to reinstall the accelerator or if you no longer want to use the accelerator, you must uninstall the accelerator. You must be logged on as user root to run the uninstaller.
To uninstall the accelerator:
1. Stop the accelerator by using the ondwa stop command.
2. Uninstall the accelerator by running the uninstaller program, uninstall_iwa, which is located in the $INFORMIXDIR/uninstall/uninstall_iwa/ directory.
To reinstall the accelerator, install the accelerator again, and then start the accelerator by using the ondwa start command.
Related tasks:
Installing the accelerator on page 2-3
Installing the administration interface on page 2-4
Related reference:
ondwa start command on page 3-11
ondwa stop command on page 3-14


Chapter 3. Accelerator configuration


You must configure the accelerator to work with the Informix database server. After you install Informix Warehouse Accelerator, there are several configuration steps that you must complete before you can use the accelerator:
1. Configuring the accelerator (non-cluster installation) or Configuring the accelerator (cluster installation) on page 3-2
2. Connecting the database server to the accelerator on page 3-6
3. Enabling and disabling query acceleration on page 3-7
Related tasks:
Installing the accelerator on page 2-3
Installing the administration interface on page 2-4

Configuring the accelerator (non-cluster installation)


You must configure the Informix Warehouse Accelerator before you can enable query acceleration and set up the connection between the accelerator and the database server. Configure the accelerator by identifying the network interface, creating a storage directory, and editing the dwainst.conf configuration file.
Configuring the accelerator sets up the files and directories that you need to use the accelerator:
1. On the computer where the accelerator is installed, log on as user root.
2. Determine the correct network interface value to use for the connection from the Informix database server to the accelerator:
a. Run the Linux ifconfig system command to retrieve the information about the network devices on your system.
b. Review the output with your system administrator and network administrator and select the appropriate value to use. Examples of network interface values are eth0, lo, and peth0. The default value is lo.
Note: If the accelerator is installed on a separate computer from the Informix database server, you cannot use the local loopback value.
3. Create a directory to use as the accelerator storage directory. Create this directory with enough space to store the accelerator catalog, data marts, logs, traces, and so on. For example:
$ mkdir $IWA_INSTALL_DIR/dwa/demo

Because the amount of data in the accelerator storage directory might increase significantly, do not create the accelerator storage directory in the accelerator installation directory. You will specify the file path for this directory in the value for the DWADIR parameter in the dwainst.conf file.
4. Open the $IWA_INSTALL_DIR/dwa/etc/dwainst.conf configuration file. Review and edit the values in the dwainst.conf configuration file on page 3-3.
Important: Specify the network interface value for the DRDA_INTERFACE parameter in the dwainst.conf file.
Copyright IBM Corp. 2010, 2011

3-1

5. Run the ondwa setup command on page 3-10 to create the files and subdirectories that are required to run the accelerator instance.
6. Run the ondwa start command on page 3-11 to start all of the accelerator nodes.
Related concepts:
Accelerator directory structure on page 2-1
Related reference:
The ondwa utility on page 3-9
ondwa setup command on page 3-10
ondwa start command on page 3-11
dwainst.conf configuration file on page 3-3
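The dwainst.conf edits in this procedure can be checked with a short script. A sketch with hypothetical parameter values; the real file is generated by the installer in $IWA_INSTALL_DIR/dwa/etc:

```shell
# Write a minimal dwainst.conf with hypothetical values for this sketch.
conf=/tmp/dwainst.conf
cat > "$conf" <<'EOF'
DRDA_INTERFACE=eth0
DWADIR=/data/iwa/demo
NUM_NODES=2
START_PORT=21020
EOF
# Read back a parameter, for example the network interface for DRDA.
drda=$(awk -F= '$1 == "DRDA_INTERFACE" { print $2 }' "$conf")
echo "DRDA_INTERFACE is $drda"
```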

Configuring the accelerator (cluster installation)


You must configure the Informix Warehouse Accelerator before you can enable query acceleration and set up the connection between the accelerator and the database server. Configure the accelerator by identifying the network interface, creating a storage directory, editing the parameters in the dwainst.conf configuration file, and creating a cluster.conf file. In a cluster system, the coordinator node uses the first cluster node, and then each cluster node that you add to the cluster becomes a worker node.
Prerequisite: Test that user root or user informix can run the Secure Shell (SSH) network protocol without a password between all cluster nodes.
Configuring the accelerator sets up the files and directories that you need to use the accelerator:
1. On one of the cluster nodes where the accelerator is installed, log on as user root.
2. Determine the correct network interface value to use for the connection from the Informix database server to the accelerator:
a. Run the Linux ifconfig system command to retrieve the information about the network devices on your system.
b. Review the output with your system administrator and network administrator and select the appropriate value to use. Examples of network interface values are eth0 or peth0.
3. On the shared cluster file system, create a directory to use as the accelerator storage directory. The storage directory must be accessible with the same path on all cluster nodes. Create this directory with enough space to store the accelerator catalog, data marts, logs, traces, and so on. For example:
$ mkdir $IWA_INSTALL_DIR/dwa/demo

Because the amount of data in the accelerator storage directory might increase significantly, do not create the accelerator storage directory in the accelerator installation directory.
4. Open the $IWA_INSTALL_DIR/dwa/etc/dwainst.conf configuration file. Review and edit the values in the dwainst.conf configuration file on page 3-3:
a. For the DRDA_INTERFACE parameter, specify the network interface value that you identified in step 2.
b. For the DWADIR parameter, specify the file path for the storage directory that you created in step 3. On all cluster nodes, the DWADIR parameter must be the same file path.


c. For the CLUSTER_INTERFACE parameter, specify the network device name for the connection between the cluster nodes. For example, eth0.
d. If only one coordinator node or one worker node will run on each cluster node, add the following additional parameters and values:
CORES_FOR_SCAN_THREADS_PERCENTAGE=100
CORES_FOR_LOAD_THREADS_PERCENTAGE=100
CORES_FOR_REORG_THREADS_PERCENTAGE=25

5. In the $IWA_INSTALL_DIR/dwa/etc directory, create a file named cluster.conf to store the cluster nodes' host names or IP addresses. In the cluster.conf file, enter one cluster node per line. For example:

node0001
node0002
node0003
node0004

The order in which you list the cluster nodes' host names (or their IP addresses) is the order in which the cluster nodes are started or stopped with the ondwa start and ondwa stop commands.
6. Use the ondwa commands to set up and start the accelerator. You can run the ondwa commands from any node in the cluster. The ondwa commands apply to all the nodes listed in the cluster.conf file.
a. Run the ondwa setup command on page 3-10 to create the files and subdirectories that are required to run the accelerator instance. Example output:
Checking for DWA_CM_node0 on node0001: stopped
Checking for DWA_CM_node1 on node0002: stopped
Checking for DWA_CM_node2 on node0003: stopped
Checking for DWA_CM_node3 on node0004: stopped

b. Run the ondwa start command on page 3-11 to start all of the cluster nodes. Example output:
Starting DWA_CM_node0 on node0001: started
Starting DWA_CM_node1 on node0002: started
Starting DWA_CM_node2 on node0003: started
Starting DWA_CM_node3 on node0004: started

Related reference: dwainst.conf configuration file
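The cluster.conf handling described above can be sketched as follows; the node names are hypothetical, and the SSH check that you would run against each node is printed rather than executed:

```shell
# Build a sample cluster.conf: one node name per line, in start/stop order.
cat > /tmp/cluster.conf <<'EOF'
node0001
node0002
node0003
node0004
EOF
count=0
while read -r node; do
  count=$((count + 1))
  # A real check would run: ssh -o BatchMode=yes root@$node true
  echo "node $count: $node"
done < /tmp/cluster.conf
echo "total nodes: $count"
```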

dwainst.conf configuration file


The dwainst.conf configuration file contains the parameters that are used to configure the accelerator. The dwainst.conf configuration file is added to the $IWA_INSTALL_DIR/dwa/etc directory when you run the installation program for the Informix Warehouse Accelerator. Open the dwainst.conf configuration file and edit the parameter values before you run the ondwa setup command.
Important: After initial setup, if you update any of the parameters in the dwainst.conf configuration file, you must run the following ondwa commands again for the updated parameters to take effect:
- ondwa stop
- ondwa reset
- ondwa setup
- ondwa start
The dwainst.conf file contains the following parameters.


Table 3-1. Parameters in the dwainst.conf file

CLUSTER_INTERFACE
Description: For cluster installations: the network device name for the connection between the cluster nodes.
Guidance: Common examples are eth0, eth1, or eth2.

COORDINATOR_SHM
Description: The shared memory on the coordinator node. You can specify the value as a percentage, such as .70, or a value in megabytes.
Guidance: The total of the shared memory on the coordinator node and worker nodes should not exceed the free memory on the computer where the accelerator is installed. Tip: The coordinator node does not need as much shared memory as the worker nodes. A value between 5 and 10 percent of the total memory set aside for the accelerator is a good estimate for this parameter.

CORES_FOR_LOAD_THREADS_PERCENTAGE
Description: Used in cluster installations.
Guidance: If only one coordinator node or one worker node will run on each cluster node, set this value to 100.

CORES_FOR_REORG_THREADS_PERCENTAGE
Description: Used in cluster installations.
Guidance: If only one coordinator node or one worker node will run on each cluster node, set this value to 25.

CORES_FOR_SCAN_THREADS_PERCENTAGE
Description: Used in cluster installations.
Guidance: If only one coordinator node or one worker node will run on each cluster node, set this value to 100.

DRDA_INTERFACE
Description: The network device name that you will use for the connection from the Informix database server to the accelerator. The default value is lo.
Guidance: Ask your system administrator and network administrator which network interface to use for the DRDA_INTERFACE value. If the accelerator is installed on a separate computer than the Informix database server, you cannot use the local loopback value.

DWADIR
Description: The name and file path for the accelerator storage directory.
Guidance: Create the directory first and then specify the directory in the dwainst.conf file. Note: Specify the directory before you run the ondwa setup command.

3-4

IBM Informix Warehouse Accelerator Administration Guide

Table 3-1. Parameters in the dwainst.conf file (continued)

NUM_NODES
Description: The number of accelerator nodes (DWA_CM processes).
Guidance: The number of accelerator nodes should not overload the computer where the accelerator is installed. The number of worker nodes is the value of the NUM_NODES parameter - 1.

START_PORT
Description: The starting port number for the coordinator node and the worker nodes.
Guidance: The accelerator instance assigns the port numbers that immediately follow the starting port number to the coordinator node and the worker nodes. These port numbers should not already be used by other processes. Each accelerator node needs to be configured with four different port numbers. Beginning with the starting port number, the nodes are assigned incremental port numbers. For example, if your instance has five nodes and you specify the START_PORT number as 21020, the accelerator instance will use ports 21020 - 21039 because each node uses four port numbers.

WORKER_SHM
Description: The shared memory on the worker nodes. This value is the total shared memory, combined, on all the worker nodes. You can specify the value as a percentage, such as .70, or a value in megabytes.
Guidance: The combined total shared memory on the worker nodes and the coordinator node should not exceed the free memory on the computer where the accelerator is installed. Tip: The data marts, with all their data in the compressed format, must fit into the shared memory on the worker nodes. Plan on using approximately two-thirds of the total memory for the accelerator as worker node shared memory.
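The four-ports-per-node rule for START_PORT can be sketched as a small calculation. The helper function below is illustrative only and not part of the product; it simply applies the rule described in the table:

```python
# Sketch of the START_PORT rule: each accelerator node is assigned four
# consecutive port numbers, beginning at START_PORT. Node 0 is the
# coordinator node; the remaining nodes are worker nodes.
def port_ranges(start_port, num_nodes, ports_per_node=4):
    """Return {node_id: (first_port, last_port)} for each accelerator node."""
    ranges = {}
    for node in range(num_nodes):
        first = start_port + node * ports_per_node
        ranges[node] = (first, first + ports_per_node - 1)
    return ranges

# The example from the table: five nodes starting at port 21020
ranges = port_ranges(21020, 5)
print(ranges[0])  # coordinator node: (21020, 21023)
print(ranges[4])  # last worker node: (21036, 21039)
```

With these values the instance occupies ports 21020 - 21039, which matches the example in the table.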

The following example shows parameters and their values in a dwainst.conf configuration file:
DWADIR=$IWA_INSTALL_DIR/dwa/demo
START_PORT=21020
NUM_NODES=2
WORKER_SHM=500
COORDINATOR_SHM=250
DRDA_INTERFACE="eth0"
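Following the sizing guidance in Table 3-1 (about 5 - 10 percent of the accelerator memory for the coordinator node and about two-thirds for the worker nodes), a rough split can be computed as follows. The helper function and its default shares are illustrative assumptions, not part of the product:

```python
# Rough memory split for dwainst.conf, based on the guidance above:
# 5-10% of the accelerator memory for COORDINATOR_SHM and about
# two-thirds for WORKER_SHM (values in megabytes).
def suggest_shm(accelerator_mem_mb, coordinator_share=0.10, worker_share=2 / 3):
    return {
        "COORDINATOR_SHM": int(round(accelerator_mem_mb * coordinator_share)),
        "WORKER_SHM": int(round(accelerator_mem_mb * worker_share)),
    }

# For example, with 3000 MB set aside for the accelerator:
print(suggest_shm(3000))  # {'COORDINATOR_SHM': 300, 'WORKER_SHM': 2000}
```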
Chapter 3. Accelerator configuration

3-5

Related concepts:
Accelerator directory structure on page 2-1
Memory issues for the coordinator node and the worker nodes on page 6-1
Related tasks:
Configuring the accelerator (non-cluster installation) on page 3-1
Configuring the accelerator (cluster installation) on page 3-2
Related reference:
ondwa setup command on page 3-10

Connecting the database server to the accelerator


Set up the connection between the Informix database server and the accelerator by using the ondwa getpin command to retrieve the IP address, port number, and PIN from the accelerator. You then use IBM Smart Analytics Optimizer Studio to create the connection to the database server.
1. On the computer where the accelerator is installed, log on as user root.
2. Run the ondwa getpin command on page 3-13 to retrieve the IP address, port number, and PIN from the accelerator. This information is needed to establish the initial connection from the Informix database server to the accelerator.
3. On the computer where IBM Smart Analytics Optimizer Studio is installed, start the administration interface:
v On Linux and UNIX, open the directory where the administration interface is installed and run the ./datastudio command. When the administration interface is installed on the same computer as Informix, the directory is $IWA_INSTALL_DIR/dwa_gui.
v On Windows, select Start > Programs > IBM Smart Analytics Optimizer Studio 1.1.
4. In the Workspace Launcher window you can use the default workspace, select an existing workspace, or create a new workspace:
v The file path and name of the default workspace appear in the Workspace drop-down box. To accept the default workspace, click OK.
v To select an existing workspace, choose a workspace from the drop-down list and click OK.
v To create a new workspace, click Browse. Navigate to the directory location where you want to create the workspace and click Make New Folder. Type the name for the workspace and click OK. Then click OK again.
5. The first time a workspace is created, a Welcome screen appears. Close the Welcome screen. The newly created blank workspace appears.
6. Add the accelerator and connect to the Informix database server:
a. In the Data Source Explorer window, open the Database Connections folder.
b. Create a new connection. You must use the Informix JDBC driver for the connection.
Restriction: The Informix Warehouse Accelerator does not support using the IBM Data Server Driver for JDBC for the connection between Informix and the accelerator.
c. To add a new accelerator to the database, right-click the Database Connections folder. Right-click the Accelerators folder and choose Add Accelerator.


d. Using the information you gathered in Step 2 on page 3-6, type the name and pairing information for the accelerator. In the Pairing code field, type the PIN.
e. Click OK. A connection between the accelerator and the database server is established, and the sqlhosts file on the Informix database server is updated with the connection information. An example of the sqlhosts file is:
FLINS2 group - - c=1,a=4b3f3f457d5f552b613b4c587551362d2776496f226e714d75217e22614742677b424224
FLINS2_1 dwsoctcp 127.0.0.1 21022 g=FLINS2
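The generated per-node entry follows the usual sqlhosts layout of server name, protocol, host, port, and options. As a small parsing sketch (the helper function is illustrative only, not part of the product):

```python
# Parse the accelerator's per-node sqlhosts entry into its fields
# (illustrative only; handles the node entry, not the group entry).
def parse_sqlhosts_line(line):
    name, protocol, host, port, options = line.split()
    return {
        "server": name,        # accelerator server name
        "protocol": protocol,  # dwsoctcp for accelerator connections
        "host": host,
        "port": int(port),
        "options": dict(kv.split("=") for kv in options.split(",")),
    }

entry = parse_sqlhosts_line("FLINS2_1 dwsoctcp 127.0.0.1 21022 g=FLINS2")
print(entry["port"], entry["options"]["g"])  # 21022 FLINS2
```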

7. Use the SET EXPLAIN statement to see the query plan. If the query is accelerated, the Remote SQL Request section appears in the query plan. For example:
QUERY: (ISAO-Executed)(OPTIMIZATION TIMESTAMP: 02-20-2011 01:05:57)
------
select sum(units) from salesfact

Estimated Cost: 242522
Estimated # of Rows Returned: 1
Maximum Threads: 0

1) sk@FLINS2:dwa.aqt0f957100-cca1-406b-93cc-cae2117122ae: Remote SQL Request:
{QUERY {FROM dwa.aqt0f957100-cca1-406b-93cc-cae2117122ae} {SELECT {SUM {SYSCAST COL10 AS BIGINT} } } }

QUERY: (ISAO-FYI) (OPTIMIZATION TIMESTAMP: 02-20-2011 01:05:57)
------
select sum(units) from salesfact

Estimated Cost: 242522
Estimated # of Rows Returned: 1
Maximum Threads: 0

1) informix.salesfact: SEQUENTIAL SCAN

Query statistics:
-----------------
Table map :
----------------------------
Internal name   Table name
----------------------------
type     rows_prod   est_rows   time       est_cost
-------------------------------------------------
remote   1           0          00:00.00   0
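Because an accelerated query shows a Remote SQL Request section in its plan, a script can verify acceleration by scanning the explain output for that marker. A sketch, where the marker string is taken from the example above and the helper function itself is hypothetical:

```python
# Check whether a query was offloaded to the accelerator by scanning the
# SET EXPLAIN output for the "Remote SQL Request" marker (sketch only).
def query_was_accelerated(explain_text):
    return "Remote SQL Request" in explain_text

plan = """1) sk@FLINS2:dwa.aqt0f957100-cca1-406b-93cc-cae2117122ae:
   Remote SQL Request:
   {QUERY {FROM dwa.aqt0f957100-cca1-406b-93cc-cae2117122ae} ...}"""
print(query_was_accelerated(plan))  # True
```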

Enabling and disabling query acceleration


Before queries can be routed to the accelerator for processing, you must update the database server statistics and set several session variables.
1. On the Informix database server, run the UPDATE STATISTICS LOW statement. The Informix query optimizer uses the statistics when evaluating a query to determine the fact table in the query. If the fact table in the query is not the fact table in the AQT, the query is sent to Informix for processing.
Tip: Run UPDATE STATISTICS again if the distribution of the data in the database changes.
2. Set the PDQPRIORITY session variable. This variable must be set so that the optimizer considers a star-join plan and chooses the right fact table.
3. Set the use_dwa session variable. Setting this variable enables the Informix query optimizer to consider using the accelerator to process the query when the optimizer generates the query plans. If the variable is set to 0 or is not set at all, queries are not accelerated. You can specify that the variable is set automatically or you can set the variable manually:
v To have this variable set when you connect to the Informix database, add the use_dwa session variable to the sysdbopen() stored procedure on your Informix database server. Specifying the variable in the procedure avoids changing your applications or recompiling.
v To set the use_dwa variable manually, from the Informix database client set the use_dwa session variable. For example:
SET ENVIRONMENT use_dwa 1;

Valid values are:


Value 0
Action: Turns acceleration OFF.
Description: Queries are not sent to the accelerator even if the queries meet the required criteria. The default value for the use_dwa variable is 0.

Value 1
Action: Turns acceleration ON. This is the recommended setting.
Description: Queries that match one of the accelerated query tables (AQTs) are sent to the accelerator for processing.

Value 2
Action: Turns acceleration ON and sends debugging information to a log file.
Description: Queries that match one of the accelerated query tables (AQTs) are sent to the accelerator for processing. Debugging information is saved to the Informix message log file.

The following example shows the information in the log file:


14:08:54 Explain file for session 281 : /work3/andreasb/test.new/sqexplain.out
14:08:54 Identified one candidate AQT for matching.
14:08:54 Trying to match query with AQT aqta77d004f-7fc1-444e-87d6-380aef1617fd
14:08:54 select MIN(s_suppkey) from par
14:08:54 AQT aqta77d004f-7fc1-444e-87d6-380aef1617fd is a potential syntactical match for query.
14:08:54 Successfully generated query plan to offload query to AQT aqta77d004f-7fc1-444e-87d6-380aef1617fd
14:08:55 Matching succesful. Took 383 ms.
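When auditing which queries were offloaded, the use_dwa 2 log lines can be scanned for the matched AQT and the match time. A sketch based only on the log format shown above (the format is not a documented interface and may change):

```python
import re

# Extract the matched AQT name and the match time from the use_dwa=2
# debugging lines shown above (sketch only; log format taken from the
# example, not from a documented specification).
def parse_match_log(log_text):
    aqt = re.search(r"query plan to offload query to AQT (\S+)", log_text)
    took = re.search(r"Matching succes?sful\. Took (\d+) ms", log_text)
    return {
        "aqt": aqt.group(1) if aqt else None,
        "match_ms": int(took.group(1)) if took else None,
    }

log = """14:08:54 Successfully generated query plan to offload query to AQT aqta77d004f-7fc1-444e-87d6-380aef1617fd
14:08:55 Matching succesful. Took 383 ms."""
print(parse_match_log(log))
```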


Related concepts:
The query plan (Performance Guide)
Statistics held for the table and index (Performance Guide)
Related reference:
Configure session properties (Administrator's Guide)
PDQPRIORITY environment variable (SQL Reference)

The ondwa utility


Use the ondwa utility to set up and work with the Informix Warehouse Accelerator instance.

Prerequisites
The ondwa utility is a Bash shell script. Because the accelerator is supported only on Linux operating systems, the ability to run Bash shell scripts is built into the operating system. The following prerequisites must be installed on the machine where you installed the accelerator:
v Telnet client program
v Expect utility
v su command
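A quick way to confirm that the prerequisites are on the PATH before running ondwa (an illustrative check, not part of the product; package names vary by Linux distribution):

```python
import shutil

# Check that the programs the ondwa utility depends on are on the PATH.
def missing_prereqs(programs=("telnet", "expect", "su")):
    return [p for p in programs if shutil.which(p) is None]

missing = missing_prereqs()
if missing:
    print("Install these before running ondwa:", ", ".join(missing))
else:
    print("All ondwa prerequisites found.")
```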

The ondwa utility directory


The ondwa utility is located in the $IWA_INSTALL_DIR/bin directory.

Running the ondwa utility in a cluster system


If you have installed Informix Warehouse Accelerator on a cluster system, you can run the ondwa command from any cluster node. The ondwa command will run on all cluster nodes listed in the $IWA_INSTALL_DIR/dwa/etc/cluster.conf file. Related tasks: Configuring the accelerator (non-cluster installation) on page 3-1

Users who can run the ondwa commands


Either user root or user informix can run the ondwa commands. Running the ondwa commands as user informix requires setup steps. It is recommended that you determine which user will run the ondwa commands and use the same user consistently.

To run ondwa commands as user informix, the following restrictions apply:
v The directory specified in the DWADIR parameter in the dwainst.conf file must be owned by user informix.
v If you have a DWA_watchdog.log file, it must be writable for user informix. The DWA_watchdog.log file is in the directory specified by the DWADIR parameter.
v The following shell soft and hard limits must be set to unlimited before you run ondwa commands. For example, you can use the built-in Linux Bash shell ulimit command to change the resource availability.


Table 3-2. Resources that must be set to unlimited for user informix to run ondwa commands
Resource             Bash shell ulimit command
max locked memory    ulimit -l
max memory size      ulimit -m
virtual memory       ulimit -v

If you installed Informix Warehouse Accelerator on a cluster system with user informix, you can set the following equivalent parameters to unlimited in the /etc/security/limits.conf file on each cluster node:
memlock    max locked-in-memory address space
rss        max resident set size
as         address space limit

For example:
informix soft memlock unlimited
informix hard memlock unlimited
informix soft rss     unlimited
informix hard rss     unlimited
informix soft as      unlimited
informix hard as      unlimited
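The current shell's limits can also be verified from Python's resource module before starting ondwa. A sketch (run it as the user that will invoke the ondwa commands):

```python
import resource

# Check that the shell limits relevant for running ondwa as user informix
# are set to unlimited (an illustrative helper, not part of the product).
def is_unlimited(soft, hard):
    return soft == resource.RLIM_INFINITY and hard == resource.RLIM_INFINITY

CHECKS = {
    "max locked memory (ulimit -l)": resource.RLIMIT_MEMLOCK,
    "max memory size (ulimit -m)": resource.RLIMIT_RSS,
    "virtual memory (ulimit -v)": resource.RLIMIT_AS,
}

for name, rlim in CHECKS.items():
    soft, hard = resource.getrlimit(rlim)
    print(name, "OK" if is_unlimited(soft, hard) else "not unlimited")
```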

ondwa setup command


The ondwa setup command creates the files and subdirectories that are required to run an accelerator instance. To run this command, you must log on to the computer as either user root or as user informix. To run the command as user informix, your Linux administrator must make specific memory resources available. See Users who can run the ondwa commands on page 3-9. Usage:
$ ondwa setup

The ondwa setup command uses a file named dwainst.conf in the $IWA_INSTALL_DIR/dwa/etc directory to configure the accelerator instance. Using the dwainst.conf file, the ondwa setup command creates the following structure in the accelerator storage directory:
v The directory shared between the accelerator nodes (shared)
v The directory containing the accelerator node private directories (local)
v For each accelerator node:
  - The accelerator node private directory: local/node
  - The accelerator node configuration file: node.conf
  - A link to the DWA_CM executable file: DWA_CM_node

The ondwa utility automatically determines the role of each accelerator node. The first node is the coordinator node; the remaining nodes are worker nodes. The number of worker nodes is determined by the following calculation: NUM_NODES - 1.


Tip: If you have the accelerator installed on the same symmetric multiprocessing (SMP) system as your Informix database server, edit the dwainst.conf file and change NUM_NODES to 5. This generates one coordinator node and four worker nodes on the accelerator.

Each accelerator node must be configured with four different port numbers. These port numbers are listed in the configuration file for the accelerator node. The starting port number is taken from the dwainst.conf file and incremented in turn. For example, if your instance has five accelerator nodes and you specify the START_PORT number as 21020, the accelerator instance will use ports 21020 - 21039 because each accelerator node uses four port numbers.

When an accelerator node is started, the corresponding link to the accelerator binary DWA_CM_node is used. The ondwa setup command creates a symbolic link for each accelerator node to the DWA_CM binary. The link makes it easier to find the processes of your accelerator instance in a ps output because the ps command shows the symbolic links and not the DWA_CM binary itself.
Related tasks:
Configuring the accelerator (non-cluster installation) on page 3-1
Related reference:
ondwa start command
dwainst.conf configuration file on page 3-3

ondwa start command


The ondwa start command starts all of the accelerator nodes. If the accelerator is installed on a cluster system, and if some of the cluster nodes are stopped, the ondwa start command starts the offline cluster nodes. To run this command, you must log on to the computer as either user root or as user informix. To run the command as user informix, your Linux administrator must make specific memory resources available. See Users who can run the ondwa commands on page 3-9. Usage:
$ ondwa start

The output of an accelerator node is recorded in the log file for the node, for example: node0.log, node1.log, and so forth. These files are located in the accelerator storage directory. After the ondwa start command has finished, your accelerator instance is ready to use. After you run ondwa start, run the ondwa status command to check the status of the accelerator.


Related tasks:
Uninstalling the accelerator on page 2-5
Configuring the accelerator (non-cluster installation) on page 3-1
Related reference:
ondwa setup command on page 3-10
ondwa status command

ondwa status command


The ondwa status command shows information about the status of the accelerator nodes, the cluster status, and the expected node count. To run this command, you must log on to the computer as either user root or as user informix. To run the command as user informix, your Linux administrator must make specific memory resources available. See Users who can run the ondwa commands on page 3-9. Usage:
$ ondwa status
ID | Role        | Cat-Status | HB-Status | Hostname | System ID
---+-------------+------------+-----------+----------+-----------
 0 | COORDINATOR | ACTIVE     | Healthy   | leo      | 1
 1 | WORKER      | ACTIVE     | Healthy   | leo      | 2

Cluster is in state : Fully Operational
Expected node count : 1 coordinator and 1 worker nodes

If one accelerator node is shut down, the status shows:


$ ondwa status
ID | Role        | Cat-Status  | HB-Status | Hostname | System ID
---+-------------+-------------+-----------+----------+-----------
 0 | COORDINATOR | ACTIVE      | Healthy   | leo      | 1
 1 | WORKER      | DEACTIVATED | Shutdown  | leo      | 2

Cluster is in state : Recovering
Expected node count : 1 coordinator and 1 worker nodes

The following example output shows the status of a cluster system:


ID | Role        | Cat-Status | HB-Status | Hostname | System ID
---+-------------+------------+-----------+----------+-----------
 0 | COORDINATOR | ACTIVE     | Healthy   | node0001 | 1
 1 | WORKER      | ACTIVE     | Healthy   | node0002 | 2
 2 | WORKER      | ACTIVE     | Healthy   | node0003 | 3
 3 | WORKER      | ACTIVE     | Healthy   | node0004 | 4

Cluster is in state : Fully Operational
Expected node count : 1 coordinator and 3 worker nodes
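For monitoring scripts, the tabular part of the ondwa status output can be parsed into records. A sketch that assumes the column layout shown in the examples above:

```python
# Parse the tabular part of ondwa status output into records (sketch;
# the column layout follows the examples above and may change).
def parse_status(output):
    nodes = []
    for line in output.splitlines():
        fields = [f.strip() for f in line.split("|")]
        if len(fields) == 6 and fields[0].isdigit():
            nodes.append({
                "id": int(fields[0]),
                "role": fields[1],
                "cat_status": fields[2],
                "hb_status": fields[3],
                "hostname": fields[4],
                "system_id": int(fields[5]),
            })
    return nodes

sample = """ID | Role        | Cat-Status | HB-Status | Hostname | System ID
---+-------------+------------+-----------+----------+----------
 0 | COORDINATOR | ACTIVE     | Healthy   | leo      | 1
 1 | WORKER      | ACTIVE     | Healthy   | leo      | 2"""
for node in parse_status(sample):
    print(node["id"], node["role"], node["hb_status"])
```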

Columns in ondwa status command output


The Cat-Status column shows the status of all nodes in the accelerator catalog. The Cat-Status column can have one of the following values:
ACTIVE
  The accelerator node is active. For a cluster system, ACTIVE indicates that the cluster node participates in the cluster.
DEACTIVATED
  The accelerator node is shut down or the accelerator is not running.
FAILOVER
  A failed accelerator node that was set in ERROR state.


The HB-Status (heartbeat status) column can have one of the following values:
Initializing
  The accelerator node is initializing and loading data into memory.
Healthy
  The initialization is complete and the accelerator node is ready for queries.
QuiescePend
  The accelerator node is waiting to go into a quiesced state.
Quiesced
  The accelerator node is in a quiesced state.
Resuming
  The accelerator node is resuming after a quiesced or a maintenance state.
Maintenance
  The accelerator node is in maintenance state.
MaintPend
  The accelerator node is waiting to go into a maintenance state.
Missing
  The accelerator node has a missing heartbeat.
Shutdown
  The accelerator node is shutting down.
Related reference:
ondwa start command on page 3-11

ondwa getpin command


The ondwa getpin command retrieves the IP address, port number, and PIN that are required for Informix to connect to your accelerator instance. To run this command, you must log on to the computer as either user root or as user informix. To run the command as user informix, your Linux administrator must make specific memory resources available. See Users who can run the ondwa commands on page 3-9. Usage:
$ ondwa getpin

The following shows examples of the values that are returned:


127.0.0.1 21022 1234
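The three fields of the getpin output are the IP address, port number, and PIN that the administration interface asks for when pairing. A trivial parsing sketch (the helper function is illustrative only):

```python
# Split the ondwa getpin output into the IP address, port number, and
# PIN needed to pair Informix with the accelerator (illustrative only).
def parse_getpin(output):
    ip, port, pin = output.split()
    return {"ip": ip, "port": int(port), "pin": pin}

print(parse_getpin("127.0.0.1 21022 1234"))
# {'ip': '127.0.0.1', 'port': 21022, 'pin': '1234'}
```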

ondwa tasks command


The ondwa tasks command shows information about the tasks that are currently running, the memory used, and the resources. To run this command, you must log on to the computer as either user root or as user informix. To run the command as user informix, your Linux administrator must make specific memory resources available. See Users who can run the ondwa commands on page 3-9. Usage:
$ ondwa tasks
TaskManager tracking 2 task(s):

------------------+--------------------+---------+---------+--------+--------
Task 316822777484 (of type QUERY with name Query execution @ coordinator - 0:00)
------------------+--------------------+---------+---------+--------+--------
Location          | Status             | Progr.  | Upd. ms | Memory | Monitor
------------------+--------------------+---------+---------+--------+--------
Primary Node 0    | OPNQRY             | 0       | 10      | 17K    | Fine
-> Node 1         | Fact               | 0       | 3       | 22M    | Fine
(Total Memory)    |                    |         |         | 22M    |
------------------+--------------------+---------+---------+--------+--------
Used Resources    | Mart ID: 2 on node 0
------------------+--------------------+---------+---------+--------+--------
Task 437785070080 (of type DAEMON with name DRDADaemon - 4:48)
------------------+--------------------+---------+---------+--------+--------
Location          | Status             | Progr.  | Upd. ms | Memory | Monitor
------------------+--------------------+---------+---------+--------+--------
Primary Node 0    | Running            | 2       | 30      | 0      | Fine
(Total Memory)    |                    |         |         | 0      |
------------------+--------------------+---------+---------+--------+--------
Used Resources    | DRDA device: lo address: 127.0.0.1:21022 on node 0
                  | Unbound on node 0
------------------+--------------------+---------+---------+--------+--------
- End of Tasklist -

ondwa stop command


The ondwa stop command stops the accelerator instance. To run this command, you must log on to the computer as either user root or as user informix. To run the command as user informix, your Linux administrator must make specific memory resources available. See Users who can run the ondwa commands on page 3-9. Usage:
$ ondwa stop
$ ondwa stop -f

The ondwa stop command completes when all of the DWA_CM_* processes of your accelerator instance have ended. Use the -f option to stop the full set of DWA_CM processes. Use this option if the ondwa stop action is not able to shut down one or more of the processes.
Related tasks:
Uninstalling the accelerator on page 2-5

ondwa reset command


The ondwa reset command removes all of the files from the accelerator storage directory that were created by the accelerator instance under the shared and local/* subdirectories. To run this command, you must log on to the computer as either user root or as user informix. To run the command as user informix, your Linux administrator must make specific memory resources available. See Users who can run the ondwa commands on page 3-9. Important: Before using this command, you must drop all data marts on the accelerator. Usage:


$ ondwa reset

The ondwa reset command also removes all of the entries of your accelerator instance under the /dev/shm directory. This command does not remove the accelerator node log files or the files created by the ondwa setup command. After you run the ondwa reset command, you can initialize the accelerator instance by using the ondwa start command.

ondwa clean command


The ondwa clean command cleans the accelerator instance directory. To run this command, you must log on to the computer as either user root or as user informix. To run the command as user informix, your Linux administrator must make specific memory resources available. See Users who can run the ondwa commands on page 3-9. Important: Before using this command, all of the data marts must be dropped on the accelerator and the accelerator instance must be removed from the computer. Usage:
$ ondwa clean

The ondwa clean command removes the following files and directories:
v All of the files created by the ondwa setup command
v The complete shared and local directory trees in the accelerator storage directory
v The log files for the accelerator nodes

After you run the ondwa clean command, you can run the ondwa setup command to set up the accelerator instance again.


Chapter 4. Accelerator data marts and AQTs


For efficient query processing, the Informix Warehouse Accelerator must have its own copy of the data. The data is stored in logical collections of related data, or data marts. A data mart specifies the collection of tables that are loaded into an accelerator and the relationships, or references, between these tables. Before the data mart can be created, information about the tables used by your warehouse queries must be assembled into a data mart definition. A data mart definition is an XML file that contains information about the data mart. The information includes the tables and columns within the table that are included in the data mart. The information also specifies how the tables and columns are related to each other. The XML file that contains the data mart definition does not contain any actual user data. When a data mart is created, information about the data mart is sent to the Informix database server in the form of a special view referred to as an accelerated query table or AQT. The information in the AQTs is used by the database server to determine which queries can be processed by the accelerator. The database server attempts to match a query with an AQT.

Creating data marts


After you configure the accelerator instance and set up the connection between the accelerator and the database server, you need to create the data marts. A data mart is an object in the accelerator and is actually created when you complete the step to deploy the data mart. When you load the data mart, the data mart is filled with the actual user data. The query optimizer can then use the data mart when the optimizer generates the query access plans. Before the data mart can be created, information about the tables used by your warehouse queries must be assembled into a data mart definition.

The key tasks in creating data marts are:
1. Designing effective data marts
2. Creating data mart definitions. There are several methods you can use to create the data mart definitions:
v Creating data mart definitions by using the administration interface on page 4-8. This method is a good choice when you have detailed knowledge about the schema of the database and the queries that applications are sending to the database server.
v Creating data mart definitions by using workload analysis on page 4-10. This method is a good choice when you are not familiar with the database schema and the application queries, or when your database schema has a very large number of tables. The data mart definitions are created after you run a series of statements, stored procedures, and functions that analyze your schema and queries.
3. Deploying a data mart on page 4-20
4. Loading data into data marts on page 4-21
Related tasks:
Creating data mart definitions by using the administration interface on page 4-8
Creating data mart definitions by using workload analysis on page 4-10

Designing effective data marts


Planning is an essential part of creating data marts that are effective at accelerating queries. Before you create the data marts, you should learn more about data marts, AQTs, and the queries that are accelerated, and analyze your query workload.
1. Understand how data marts are created from your fact tables and dimension tables.
2. Understand how AQTs work with the optimizer to accelerate queries.
3. Determine the query workload that should be accelerated. If you do not know which queries you want to accelerate, you can identify the queries that have the longest elapsed time or cause the highest CPU cost by following these steps:
a. On the computer where the Informix database server is installed, log on as user informix.
b. Enable the SQLTRACE configuration parameter in the onconfig file in the $INFORMIXDIR/etc directory. For example:
SQLTRACE level=high,ntraces=1000,size=4,mode=global

Or you can activate SQLTRACE by using the SQL Administration API Functions. For example:
EXECUTE FUNCTION task("set sql tracing on","1500","4","high","global");

c. Restart the database server to activate the configuration parameters.
d. Run the query workload.
e. Review the results of the workload by using one of the following methods:
v Run the onstat -g his command.
v Use information from the syssqltrace table in the sysmaster database. For example, in dbaccess run this SQL statement:
SELECT sql_runtime, sql_statement
FROM syssqltrace
WHERE sql_stmtname matches "SELECT"
ORDER BY sql_runtime DESC

4. Analyze the queries that you want to accelerate.
5. From the query information, exclude the OLTP queries. Analyze the remaining queries to determine which columns in the dimension tables are being used by the queries.
6. If you have large dimension tables, identify specific columns in the tables to load into the data mart instead of loading all of the columns from each table.


Related reference:
Enable SQL tracing (Administrator's Guide)
set sql tracing argument: Set global SQL tracing (SQL administration API) (Administrator's Reference)
EXPLAIN_STAT Configuration Parameter (Administrator's Reference)
SQLTRACE Configuration Parameter (Administrator's Reference)

Data marts
For efficient query processing, the Informix Warehouse Accelerator must have its own copy of the data. The data is stored in logical collections of related data, or data marts. A data mart specifies the collection of tables that are loaded into an accelerator and the relationships, or references, between these tables. Typically, the data marts contain a subset of the tables in your database. The data marts can also contain a subset of the columns within a table. To improve query processing, limit the number of dimension tables, and columns within the dimension tables, in the data mart by identifying only those columns that are necessary to respond to your queries. However, the data marts do not need to be a duplication of the design of your warehouse fact and dimension tables. For example, you can designate a dimension table in your warehouse schema as a fact table in a data mart. When you create a data mart you use the administration interface to specify the fact table, the dimension tables, and the references between the tables. A newly created data mart has all of the necessary structures defined but is empty and must be filled with a snapshot of the data from the Informix database server. When the data from the database server is loaded in the data mart on the accelerator, the data is compressed. After the data is loaded in the data mart, the data mart becomes operational.

Data marts must be based on a snowflake or star schema


The following figure shows a sample schema with two fact tables, DAILY_SALES and DAILY_FORECAST. These fact tables are linked to several dimension tables: STORE, CUSTOMER, PROMOTION, PERIOD, and PRODUCT. There are several key references in the fact tables that are used to link to the dimension tables. For example in the DAILY_SALES fact table, the PRODKEY column is linked to the PRODKEY column in the PRODUCT dimension table.


Fact table DAILY_SALES: PERKEY, PRODKEY, STOREKEY, CUSTKEY, PROMOKEY, QUANTITY_SOLD, EXTENDED_PRICE, EXTENDED_COST, SHELF_LOCATION, SHELF_NUMBER, START_SHELF_DATE, SHELF_HEIGHT, SHELF_WIDTH
Fact table DAILY_FORECAST: PERKEY, STOREKEY, PRODKEY, QUANTITY_FORECAST, EXTENDED_PRICE_FORECAST, EXTENDED_COST_FORECAST
Dimension table STORE: STOREKEY, STORE_NUMBER, CITY, STATE, DISTRICT, REGION
Dimension table PERIOD: PERKEY, CALENDAR_DATE, WEEK, WEEK_ENDING_DATE, MONTH, PERIOD, YEAR, HOLIDAY_FLAG
Dimension table CUSTOMER: CUSTKEY, NAME, ADDRESS, C_CITY, C_STATE, ZIP, PHONE, AGE_LEVEL
Dimension table PRODUCT: PRODKEY, BRANDKEY, PRODLINEKEY, UPC_NUMBER, P_PRICE, P_COST, ITEM_DESC, PACKAGE_TYPE, CATEGORY, SUB_CATEGORY, PACKAGE_SIZE
Dimension table PROMOTION: PROMOKEY, PROMOTYPE, PROMODESC, PROMOVALUE, PROMOVALUE2, PROMO_COST

Figure 4-1. A sample star schema with two fact tables

Using the schema in Figure 4-1, you can create two data marts. The first data mart is based on the DAILY_SALES fact table and the dimension tables that it links to, as shown in Figure 4-2 on page 4-5. A second data mart is based on the DAILY_FORECAST fact table and the dimension tables that it links to, as shown in Figure 4-3 on page 4-6.


Fact table DAILY_SALES: PERKEY, PRODKEY, STOREKEY, CUSTKEY, PROMOKEY, QUANTITY_SOLD, EXTENDED_PRICE, EXTENDED_COST, SHELF_LOCATION, SHELF_NUMBER, START_SHELF_DATE, SHELF_HEIGHT, SHELF_WIDTH
Dimension tables: STORE, PERIOD, CUSTOMER, PRODUCT, PROMOTION

Figure 4-2. A data mart with the DAILY_SALES fact table


[Figure 4-3 shows the DAILY_FORECAST fact table linked to the STORE, PERIOD, and PRODUCT dimension tables.]

Figure 4-3. A data mart with the DAILY_FORECAST fact table

Summary or aggregate tables in data marts


To summarize the granular data in the fact tables and dimension tables, other tables are sometimes created that are referred to as summary tables or aggregate tables. For example, a summary table might contain sales information for an entire month or quarter that is consolidated from fact and dimension tables. One reason to use summary tables is to improve query performance by querying the summary table instead of the underlying fact and dimension tables.

Because Informix Warehouse Accelerator is designed to significantly speed up query processing, you do not need to use summary tables to improve query performance. With Informix Warehouse Accelerator you can query the fact table and dimension tables directly and still experience improvements in query performance.

Tip: If you want to query a summary table and use the accelerator, you must include the summary table in the data mart.
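As an illustration of querying the detail tables directly, the following sketch computes a monthly aggregate of the kind that a summary table would traditionally store. The table and column names come from the sample schema in Figure 4-1; the query itself is illustrative and is not part of the original guide:

```sql
-- Monthly revenue by region, computed directly from the fact and
-- dimension tables; with the accelerator, a pre-built summary table
-- is not required for this kind of aggregation.
SELECT p.month, s.region, SUM(d.extended_price) AS revenue
FROM daily_sales d, period p, store s
WHERE d.perkey = p.perkey
  AND d.storekey = s.storekey
GROUP BY p.month, s.region;
```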

Accelerated query tables


When a data mart is created, information about the data mart is sent to the database server in the form of a special view referred to as an accelerated query table or AQT.


The information stored in the catalog tables of the Informix database server is the same information that is stored in the catalog tables for other types of views. The database server uses the information in the AQTs to determine which queries can be processed by the accelerator. The database server attempts to match a query with an AQT. If a query matches an AQT, the query is sent to the accelerator. There can be many AQTs. If the first AQT is not able to satisfy a query, the search for a match continues until the query has been checked against all of the AQTs. If a match is found, the query is sent to the accelerator for processing. If no match is found, the query is processed by Informix.

To be sent to the accelerator, the query must meet the following criteria:
v The query must refer to a subset of the tables in the AQT.
v The table references, or joins, specified in the query must be the same as the references in the data mart definition.
v The query must include only one fact table.
v The query must have an INNER JOIN or a LEFT JOIN with the fact table on the left (dominant) side.
v The scalar and aggregate functions in the query must be supported by the accelerator.

A data mart can be in different states, such as LOAD PENDING, ENABLED, and DISABLED. To facilitate correct query matching, the associated AQTs reflect these basic states and have only two states: active and inactive. When the administrator drops a data mart from the accelerator, the associated AQTs are removed automatically from the database server.

Related concepts: Analyze queries for acceleration

Analyze queries for acceleration


By analyzing your queries, you gain a better understanding of which queries can be accelerated. If a query matches an AQT, the query is sent to the accelerator. This process is called acceleration. The results are then returned from the accelerator to Informix. Different types of queries are more or less suitable for acceleration. Knowing the characteristics of these query types helps you understand which queries are eligible for offloading and which are not. Although you do not need to redesign your queries to use the accelerator, understanding why some queries are not accelerated helps you design your queries to take advantage of the accelerator.
Consideration: Does the query reference only the tables and columns that are included in the data mart definition? Does the query reference a table that is marked as a fact table in the data mart definition?
Description: Only queries that include the supported data types are loaded into the accelerator.

Consideration: Is the query a long-running analytical query, and not a transactional query that should not be processed by the accelerator? For example, a query that returns only a few rows by using a selective condition on an indexed column.

Consideration: Does the query use only supported join types? Do the join sequence and the join predicates of the query match the definition of the accelerated query table (AQT)?
Description: Equality join predicates, INNER joins, and LEFT OUTER joins are the supported join types. The fact table referenced in the query must be on the left side of the LEFT OUTER join. You need to know the joins that are supported and unsupported.

Consideration: Does the query use only supported aggregate and scalar functions?
Description: Only queries that include the supported functions are sent to the accelerator for processing.
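For example, a query of the following shape satisfies these considerations: it references one fact table, joins it to its dimensions on the reference columns of the data mart definition, and uses only supported aggregate functions. This is a sketch against the sample DAILY_SALES data mart and is not taken from the original guide:

```sql
-- References a subset of the data mart tables, joins DAILY_SALES to
-- its dimensions on the defined reference columns, and aggregates.
SELECT s.region, p.year, SUM(d.quantity_sold)
FROM daily_sales d, store s, period p
WHERE d.storekey = s.storekey
  AND d.perkey = p.perkey
GROUP BY s.region, p.year;
-- By contrast, a query that joined DAILY_SALES to DAILY_FORECAST would
-- reference two fact tables and would be processed by Informix instead.
```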

Related concepts:
Accelerated query tables on page 4-6
Queries that benefit from acceleration on page 1-6

Creating data mart definitions by using the administration interface


Use the IBM Smart Analytics Optimizer Studio to create the data mart definitions when you are familiar with the database schema and the application queries that are sent to the database server. The administration interface estimates how large the data mart might become and determines whether there is sufficient space on the accelerator for the proposed data mart.

To create a data mart definition by using the administration interface, use the following steps:
1. On the computer where the IBM Smart Analytics Optimizer Studio is installed, start the administration interface:
v On UNIX, open the $IWA_INSTALL_DIR/dwa_gui directory and run the ./datastudio command.
v On Windows, select Start > Programs > IBM Smart Analytics Optimizer Studio 1.1.
2. Create a new accelerator project. Right-click the Accelerators folder and select New Accelerator, or choose File > New > Accelerator project.
3. Use the New Data Mart wizard to create a data mart definition. Choose File > New > Data Mart to start the New Data Mart wizard. The wizard uses the existing database connection to read the catalog tables in the database and to retrieve information about the objects in the database, such as the tables, constraints, and so on. The wizard uses this information to create resources in the workspace on your computer.
4. Add database tables to the data mart definition.
v A single data mart definition can have a maximum of 255 tables or 750 columns.


v If you add a table to your data mart definition that includes a column with a data type that is not supported, you must remove that column from the data mart definition.
5. Create references, or joins, between tables in the data mart definition (if necessary).
6. Designate the fact table for the data mart definition.
7. Specify the table columns to load into the accelerator. Select the table in the Canvas and look at the Properties view. Click the Columns page to view a list of the columns in that table. By default, all of the columns in the table are included in the data mart. Clear the check boxes of the columns that you do not want included.
8. Check the size of a data mart definition before it is deployed. You can compare a size estimate of the data mart definition with the memory that is available on your accelerator. If necessary, you can change the join type of table references or omit specific columns that are rarely or never needed to reduce the required memory.
9. Validate the integrity of the data mart definition. Ensure that the syntax and structure of the data mart definition are correct and that the data mart can be safely deployed to the accelerator.

After you create the data mart definition, you need to deploy and load data into the data mart.
Related tasks:
Creating data marts on page 4-1
Deploying a data mart on page 4-20

Specify references in data marts


A reference is a join between two database tables that indicates how the tables in the data mart are related to each other.

One-to-many joins
A one-to-many join connects the columns of a primary key, unique constraint, or unique, non-nullable index of the parent table with columns of the child table. Any row, or tuple, in the child table is related to a maximum of one row in the parent table. If the table has a primary key, the corresponding key columns are selected automatically. You can override this automatic selection by selecting another unique constraint or unique index. If the parent table does not have a primary key, select one of the unique keys. At least one unique constraint or unique non-nullable index is required; otherwise, the one-to-many reference cannot be created.

Important: One-to-many joins lead to better query performance than many-to-many joins. If one of the tables that you want to use has a unique constraint, unique index, or primary key on the join columns, use a one-to-many join.
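For illustration, if a parent table such as PRODUCT in the sample schema had no primary key, a unique, non-nullable index on the join column would satisfy the uniqueness requirement so that the one-to-many reference from DAILY_SALES.PRODKEY could be created. This is a sketch; the index name is hypothetical:

```sql
-- A unique index on the parent join column satisfies the uniqueness
-- requirement for a one-to-many reference (illustrative example).
CREATE UNIQUE INDEX ix_product_prodkey ON product (prodkey);
```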

Many-to-many joins
In a many-to-many join, one or more columns of the parent table are joined with an equal number of columns in the child table. The values of these columns do not have to be unique, and you do not have to enforce uniqueness through the selection of a constraint. This means that any row, or tuple, in the child table can relate to multiple tuples in the parent table.

Join tables at run time
When you specify that the tables are joined at run time, the tables are joined in the system memory of the accelerator when the query is run. Run-time joins require less system memory to hold the data mart.

Fact table for the data mart


Typical queries against a data mart join the fact table with the dimension tables and group information by specific criteria. When you specify the tables to use for the data mart, the administration interface automatically identifies the fact table in the warehouse on the Informix database server. The fact table is partitioned across the worker nodes of the data warehouse accelerator to achieve the maximum degree of parallel processing. The fact table for the data mart does not necessarily need to be the fact table in the warehouse that you created on the Informix database server. By default, the largest table that you identify for the data mart is designated as the fact table by the administration interface. However, you can specify any table that is part of a star or snowflake schema as the fact table for a data mart.

Creating data mart definitions by using workload analysis


Use this method to create data marts when you are not familiar with the database schema and the application queries, or when your database schema contains many tables.
v To use workload analysis, the database must be a local database of the server that you are connected to.
v A default smart blob space must exist.

Workload analysis involves two main steps: query probing and data analysis. Query probing gathers information about your query workload. The probing data and database schema are then analyzed, and the end result is a data mart definition. To create a data mart definition by using workload analysis, use the following steps:
1. Connect to the database that contains the data warehouse tables.
2. Update database statistics by using the LOW option to generate a minimum amount of statistics for the database. For example: UPDATE STATISTICS LOW. Query probing needs to determine the fact table of a query. If parallel database query (PDQ) is active, you can specify the fact table with the FACT optimizer directive. If the FACT optimizer directive is not set, and for inner join queries, the fact table is identified as the table with the largest number of rows.
3. Enable query probing. Run the following command to activate query probing for the current user session:
SET ENVIRONMENT use_dwa probe start;

You can run this command through an application or with the sysdbopen() procedure.


4. Optional. You can enable SQL tracing. With SQL tracing on, you can identify the probing data that resulted from a specific SQL statement because each query statement is assigned an individual number, a statement ID.
SQL Tracing: On
Results: You can identify specific statements, for example statements that took a certain length of time to process, or statements that accessed specific tables. With that information, you can include the probing data that resulted from only these statements in the mart definition.

SQL Tracing: Off (default setting)
Results: The probing data is collected into one single set of data. When you create the data mart definition, the entire set of probing data must be used.

To turn SQL tracing on, use the following steps:
a. On the computer where the Informix database server is installed, log on as user informix.
b. Enable the SQLTRACE configuration parameter in the onconfig file in the $INFORMIXDIR/etc directory. For example:
SQLTRACE level=low,ntraces=1000,size=4,mode=global

Or you can activate SQLTRACE by using the SQL Administration API Functions. For example:
EXECUTE FUNCTION task("set sql tracing on","1000","4","low","global");

c. If you changed the onconfig file, restart the database server to activate the configuration parameter.
5. Optional: To run the query probing more quickly, issue the SET EXPLAIN ON AVOID_EXECUTE statement before you run your query workload. When you issue this statement, the queries are optimized and the probing data is collected, but a result set is not determined or returned.
Important: If you want to process the probing data based on the runtime of the queries, turn on SQL tracing and do not use the AVOID_EXECUTE option of the SET EXPLAIN statement. If you avoid running the queries, you will not know how long the queries really take to run.
6. Run the query workload. The probing data is stored in memory.
7. If you want to view the SQL trace information about the workload, use one of the following methods:
v Run the onstat -g his command.
v Use information from the syssqltrace table in the sysmaster database. For example, in dbaccess run this SQL statement:
SELECT sql_runtime, sql_statement FROM syssqltrace WHERE sql_stmtname matches "SELECT" ORDER BY sql_runtime DESC

8. If you want to view the probing data, use one of the following methods:
v Run the onstat -g probe command.
v Query the system monitoring interface (SMI) pseudo tables that contain probing data.
9. Create a separate logging database. Even though your warehouse database might be configured for logging, create a separate database. This separate database is used to store the data mart definition.


10. Convert the probing data into a data mart definition by using the probe2mart() stored procedure. You can create a data mart definition from all of the probing data, or from the data of specific queries (if SQL tracing is on).
11. Use the genmartdef() function to generate the data mart definition. This function returns a CLOB that contains the data mart definition in XML format. Store the data mart definition in a file.
12. Import the file into the administration interface.
a. On the computer where the IBM Smart Analytics Optimizer Studio is installed, start the administration interface:
v On UNIX, open the $IWA_INSTALL_DIR/dwa_gui directory and run the ./datastudio command.
v On Windows, select Start > Programs > IBM Smart Analytics Optimizer Studio 1.1.
b. Create a new accelerator project. Right-click the Accelerators folder and select New Accelerator, or choose File > New > Accelerator project.
c. Right-click the new accelerator project in the Project Explorer window. Select Import. The value Data Mart Import is selected by default. Click Next.
d. Locate the generated file. Verify that Import into existing project and the name of the current project are selected. Click Finish.

After you create the data mart definition, you need to deploy and load data into the data mart.
Related tasks:
Creating data marts on page 4-1
Deploying a data mart on page 4-20
Related reference:
UPDATE STATISTICS statement (SQL Syntax)
Star-Join Directives (SQL Syntax)
Contents of query probing data on page 4-18
The probe2mart stored procedure on page 4-18
Appendix B, Sysmaster interface (SMI) pseudo tables for query probing data, on page B-1

Example: Create data mart definitions using workload analysis


Use this step-by-step example as a guide when you create data mart definitions using workload analysis. This example uses the stores_demo database that is created by the command dbaccessdemo. The workload is from the following query:
SELECT {+ FACT(orders)} first 5 fname, lname, sum(ship_weight)
FROM customer c, orders o
WHERE c.customer_num = o.customer_num and state = 'CA' and ship_date is not null
GROUP BY 1,2 ORDER BY 3 desc;

The query selects the names of the top five customers from the state CA and the total ship weight of their already shipped orders. The query is an inner join. The orders table is the fact table. The customer table is the dimension table. Because the orders table has fewer rows than the customer table, the {+ FACT(orders)} optimizer directive is required. Otherwise, the customer table would be considered the fact table. The following commands correspond to steps in the task Creating data mart definitions by using workload analysis on page 4-10. The SQL statements used in this example were executed by using dbaccess and are prompted by ">". Commands executed from the shell are prompted by "$".

Step 1: Connect to the database


Connect to the database. This example uses the stores_demo database:
> connect to stores_demo; Connected.

Step 2: Update statistics


Update the statistics on the database:
> update statistics low; Statistics updated.

Step 3: Start probing


Set the environment variable to activate probing:
> SET ENVIRONMENT use_dwa probe start; Environment set.

Step 4: Optional - Enable SQL tracing


In a separate session, connect as user informix to the sysadmin database and activate SQL tracing:
> execute function task("set sql tracing on","1500","4","low", "global"); (expression) SQL Tracing ON: ntraces=1500, size=4056, level=Low, mode=Global. 1 row(s) retrieved.

Step 5: Optional - Skip running the query workload


When you issue this statement, the queries are optimized and probed but a result set is not determined or returned. Important: If you want to process the probing data based on the runtime of the queries, then turn on SQL tracing and do not use the AVOID_EXECUTE option of the SET EXPLAIN statement. If you avoid running the queries, you will not know how long it takes to really run the queries.
> set explain on avoid_execute; Explain set.

Step 6: Run the query workload


Using this example, the SQL statements are:
> SELECT {+ FACT(orders)} first 5 fname, lname, sum(ship_weight)
FROM customer c, orders o
WHERE c.customer_num = o.customer_num and state = 'CA' and ship_date is not null


GROUP BY 1,2 ORDER BY 3 desc;

fname lname (sum)

No rows found.

The reason no rows are returned in this example is that the SET EXPLAIN ON AVOID_EXECUTE statement has been used.

Step 7: Optional - View the SQL trace information


You can use the onstat command or query the SMI tables to view the SQL trace information. To use the onstat command:
$ onstat -g his IBM Informix Dynamic Server Version 11.70.FC3 -- On-Line -- Up 00:53:06 -182532 Kbytes Statement history: Trace Level Trace Mode Number of traces Current Stmt ID Trace Buffer size Duration of buffer Trace Flags Control Block Low Global 1500 2 4056 42 Seconds 0x00001611 0x4dd51028

Statement # 2:

@ 0x4dd51058

Database: 0x100153 Statement text: select {+ FACT(orders)} first 5 fname,lname,sum(ship_weight) from customer c,orders o where c.customer_num=o.customer_num and state=CA and ship_date is not null group by 1,2 order by 3 desc Statement information: Sess_id User_id Stmt Type 51 29574 SELECT Statement Statistics: Page Buffer Read Read Read % Cache 0 0 0.00 Lock Requests 0 Lock Waits 0 LK Wait Time (S) 0.0000 Avg Time (S) 0.0000 Finish Time 10:39:09 Buffer IDX Read 0 Log Space 0.000 B Max Time (S) 0.0000 SQL Error 0 Page Write 0 Num Sorts 0 Avg IO Wait 0.000000 ISAM Error 0 Run Time 0.0000 Buffer Write 0 Disk Sorts 0 I/O Wait Time (S) 0.000000 Isolation Level NL TX Stamp 33f4e PDQ 0

Write % Cache 0.00 Memory Sorts 0 Avg Rows Per Sec 678122.8357 SQL Memory 25304

Total Total Executions Time (S) 1 0.0000

Estimated Estimated Actual Cost Rows Rows 10 2 0

To query the SMI tables to view the SQL trace information:


> SELECT sql_id, sql_runtime, sql_statement FROM sysmaster:syssqltrace
WHERE sql_stmtname = 'SELECT' ORDER BY sql_runtime desc;

sql_id 2
sql_runtime 1.47450285e-06
sql_statement select {+ FACT(orders)} first 5 fname,lname,sum(ship_weight)
from customer c,orders o where c.customer_num=o.customer_num and
state='CA' and ship_date is not null group by 1,2 order by 3 desc

Step 8: Optional - View the probing data


To see the data that was gathered from running the query workload, you can run an onstat command or query the SMI tables. To use the onstat -g probe command:
$ onstat -g probe

DWA probing data for database stores_demo:
statement 2:
columns: tabid[colno,...]
100[1,2,3,8]
102[3,7,8] f
joins: tabid[colno,...] = tabid[colno,...] (type) {u:unique}
102[3] = 100[1] (inner) u

Output description:
v Statement 2 accesses two tables: the customer table with tabid 100 and the orders table with tabid 102.
v In the customer table, columns 1, 2, 3, and 8 are accessed.
v In the orders table, columns 3, 7, and 8 are accessed.
v The orders table is the fact table.
v Column 3 in the orders table is joined with column 1 in the customer table.
v The join is an inner join.
v The customer table has a unique index on column 1.

You can verify that the probing data was gathered by connecting to the sysmaster database and querying the SMI tables:
> SELECT * FROM sysprobetables;

dbname stores_demo
sql_id 2
tabid 100
fact n

dbname stores_demo
sql_id 2
tabid 102
fact y

2 row(s) retrieved.

> SELECT * FROM sysprobecolumns;

dbname stores_demo
sql_id 2
tabid 100
colno 1

dbname stores_demo
sql_id 2
tabid 100
colno 2

dbname stores_demo
sql_id 2
tabid 100
colno 3

dbname stores_demo
sql_id 2
tabid 100
colno 8

dbname stores_demo
sql_id 2
tabid 102
colno 3

dbname stores_demo
sql_id 2
tabid 102
colno 7

dbname stores_demo
sql_id 2
tabid 102
colno 8

7 row(s) retrieved.

> SELECT * FROM sysprobejds;

dbname stores_demo
sql_id 2
jd 1
ctabid 102
ptabid 100
type i
uniq y

1 row(s) retrieved.

> SELECT * FROM sysprobejps;

dbname stores_demo
sql_id 2
jd 1
jp 1
ccolno 3
pcolno 1

1 row(s) retrieved.

Step 9: Create a logging database


The stores_demo database is not a logging database. A different database is required to store the data mart information generated by running the probe2mart() stored procedure:
> CREATE DATABASE stores_marts WITH LOG; Database created.


Step 10: Run the probe2mart() stored procedure


Connect to the logging database stores_marts and run the stored procedure:
$ dbaccess stores_marts
> execute procedure probe2mart('stores_demo', 'orders_customer_mart');

Routine executed.

Step 11: Run the genmartdef() function and store the result in a file
Connect to the logging database stores_marts, where the data mart definition that was generated by the probe2mart stored procedure is stored:
$ dbaccess stores_marts
> execute function lotofile(genmartdef('orders_customer_mart'), 'orders_customer_mart.xml!', 'client');

(expression) orders_customer_mart.xml

1 row(s) retrieved.

To view the contents of the data mart definition file:


$ cat orders_customer_mart.xml
<?xml version="1.0" encoding="UTF-8" ?>
<dwa:martModel xmlns:dwa="http://www.ibm.com/xmlns/prod/dwa" version="1.0">
  <mart name="orders_customer_mart">
    <table name="customer" schema="informix" isFactTable="false">
      <column name="customer_num"/>
      <column name="fname"/>
      <column name="lname"/>
      <column name="state"/>
    </table>
    <table name="orders" schema="informix" isFactTable="true">
      <column name="customer_num"/>
      <column name="ship_date"/>
      <column name="ship_weight"/>
    </table>
    <reference referenceType="LEFTOUTER" isRuntimeJoin="true"
        parentCardinality="1" dependentCardinality="n"
        dependentTableSchema="informix" dependentTableName="orders"
        parentTableSchema="informix" parentTableName="customer">
      <parentColumn name="customer_num"/>
      <dependentColumn name="customer_num"/>
    </reference>
  </mart>
</dwa:martModel>

Step 12: Import the file into a project in the administration interface
The detailed instructions for importing the file into the administration interface are documented in the task Creating data mart definitions by using workload analysis on page 4-10. Alternatively, you can use the Java classes that are included with the accelerator to deploy and load the data mart. To deploy the data mart:
$ java createMart <accelerator name> orders_customer_mart.xml

To load the data mart:


$ java loadMart <accelerator name> orders_customer_mart NONE

See the dwa_java_reference.txt file in the dwa/example/cli/ directory for more information about these sample Java classes.

Contents of query probing data


The query probing data contains a set of records when SQL tracing is used, or a single record when SQL tracing is not used. Each record is identified by a statement ID. A record contains:
v The name of the database that the user was connected to
v A list of tables, identified by their table ID (tabid)
v A list of columns for each table, identified by their column number (colno)
v A list of references between each pair of tables, also known as the join descriptors
v A list of column pairs for each join descriptor, also known as the join predicates

A join predicate in the probing data corresponds to an equality join predicate between columns of different tables in the query. For example:
table_1.col_a = table_2.col_x

A join descriptor is composed of:
v All of the join predicates of the same pair of tables. For example:
table_1.col_a = table_2.col_x and table_1.col_b = table_2.col_y and table_1.col_c = table_2.col_z

v The type of the join: an inner join or a left outer join.
v Information about any unique indexes on the dimension table of the join. The dimension table is the right table in the case of a left outer join. The unique index must be on the complete set of columns contained in this join descriptor.

Related tasks:
Creating data mart definitions by using workload analysis on page 4-10

The probe2mart stored procedure


The probe2mart stored procedure is used to convert the data that is gathered from probing into a data mart definition.

Syntax
The syntax of the stored procedure is:
probe2mart('database', 'mart_name', sqlid);

database
The name of the database that contains the data warehouse. This is the warehouse database on which the workload queries are run. See Usage.

mart_name
The name that you want to use for the data mart definition. The name is also the name of the data mart that is created later, based on the data mart definition. If the name you specify is an existing data mart definition, the probing data is merged into the existing data mart definition. If the data mart definition you specify does not exist, the data mart definition is created.

sqlid
Optional. The ID of the query SQL statement, which identifies the probing data from that query. If the sqlid is not provided, all of the probing data from the specified database is added to the data mart definition.

Usage
Run the probe2mart stored procedure from a different database than the warehouse database. This separate database must be a logging database. You can use a test database, if the test database is already a logging database, or you can create a different database that keeps these tables separated from your other tables.

Note: Create a separate logging database to use with the probe2mart stored procedure. Using a separate database makes it much easier to revert from Informix 11.70 to an earlier version of Informix.

When the stored procedure is run, the probing data is processed and stored in the logging database in a set of permanent tables. The tables keep the data mart definition in a relational format. The tables are automatically created when the probing data is processed into a data mart definition for the first time. The probe2mart stored procedure creates a data mart definition by converting the probing data and inserting rows into the following tables.
Table 4-1. Tables created by the probe2mart stored procedure

'informix'.iwa_marts - Names of the data mart definitions
'informix'.iwa_tables - All of the tables used in any data mart definition
'informix'.iwa_columns - All of the columns used in any data mart definition
'informix'.iwa_mtabs - Tables for a specific data mart definition
'informix'.iwa_mcols - Columns for a specific data mart definition
'informix'.iwa_mrefs - References (join descriptors) of a specific data mart definition
'informix'.iwa_mrefcols - Reference columns (join predicates) of a specific data mart definition
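Because these are ordinary tables in the logging database, you can inspect the generated definitions with SQL. The following sketch lists the existing definitions; the martname column is the one used in the genmartdef example later in this chapter:

```sql
-- Run from the logging database to list the data mart definitions
-- that probe2mart has created so far.
SELECT martname FROM 'informix'.iwa_marts;
```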

Examples
To convert, or merge, all of the probing data into a data mart definition, use this form of the syntax:
EXECUTE PROCEDURE probe2mart('database', 'mart_name');

For example, to generate a data mart definition named salesmart from all of probing data that is available for the database sales, use this statement:
EXECUTE PROCEDURE probe2mart('sales', 'salesmart');

You can also merge the probing data from a specific query into a data mart definition. You need to look up the SQL ID number of the query which was captured by SQL tracing. SQL tracing must be ON to designate data from specific queries. Queries are identified by a statement ID.


For example, to merge the probing data from SQL statement 8372 into the data mart definition salesmart, run this command:
EXECUTE PROCEDURE probe2mart('sales', 'salesmart', 8372);

To create a data mart definition from queries that run longer than 10 seconds, use this SQL statement:
SELECT probe2mart('sales', 'salesmart', sql_id) FROM sysmaster:syssqltrace WHERE sql_runtime > 10;

Related tasks:
Creating data mart definitions by using workload analysis on page 4-10
Related reference:
Chapter 5, Reversion requirements for an Informix warehouse edition server and Informix Warehouse Accelerator, on page 5-1

The genmartdef() function


The genmartdef() function returns a CLOB that contains the data mart definition in an XML format.

Syntax
The syntax of the function is:
genmartdef('mart_name');

You can either issue the genmartdef() function by itself, or incorporate it as a parameter within the LOTOFILE() function. Using the genmartdef() function with the LOTOFILE() function places the CLOB into an operating system file.

Examples
Use the genmartdef() function as a parameter within the LOTOFILE() function:
EXECUTE FUNCTION LOTOFILE(genmartdef('salesmart'), 'salesmart.xml!', 'client');

The following example generates the data mart definition for salesmart. The resulting CLOB is used as a parameter within the LOTOFILE() function. The LOTOFILE() function stores the resulting CLOB in an operating system file named salesmart.xml on the client computer.
SELECT lotofile(genmartdef('salesmart'), 'salesmart.xml!', 'client') FROM iwa_marts WHERE martname='salesmart';

Related reference: LOTOFILE Function (SQL Syntax)

Deploying a data mart


When a data mart is initially deployed, the data mart is in the LOAD PENDING state and is disabled until you load data into the data mart. As part of creating a data mart definition, IBM Smart Analytics Optimizer Studio estimates the amount of space that is required for the data mart, and the data mart definition is validated. When you use the administration interface to deploy the data mart, the data mart definition is imported from an XML file into an accelerator project in IBM Smart Analytics Optimizer Studio.


IBM Informix Warehouse Accelerator Administration Guide

1. On the computer where IBM Smart Analytics Optimizer Studio is installed, start the administration interface:
   v On UNIX, open the $IWA_INSTALL_DIR/dwa_gui directory and run the ./datastudio command.
   v On Windows, select Start > Programs > IBM Smart Analytics Optimizer Studio 1.1.
2. Deploy the data mart from either the Data Source Explorer or from the Properties view of the data mart.
Related tasks:
Loading data into data marts
Creating data mart definitions by using the administration interface on page 4-8
Creating data mart definitions by using workload analysis on page 4-10

Loading data into data marts


When you load a data mart, the related data is unloaded from the Informix tables and transferred to the accelerator. When data is loaded into the data mart, it is enabled automatically and is in the Active state. You can disable or enable a loaded data mart to switch the query acceleration for the data mart on or off. For most data marts, the data is gathered from several different Informix tables.

Important: Loading data into a data mart can take several hours, depending on the size and the amount of data that is contained in the tables.

To load the data into the data marts:
1. On the computer where IBM Smart Analytics Optimizer Studio is installed, start the administration interface:
   v On UNIX, open the $IWA_INSTALL_DIR/dwa_gui directory and run the ./datastudio command.
   v On Windows, select Start > Programs > IBM Smart Analytics Optimizer Studio 1.1.
2. Start the load process from either the Data Source Explorer or from the Properties view of the data mart.
3. Right-click the data mart and select Load.
4. Choose the level of consistency that you want for the loaded data by specifying a value for the locking parameter for the data load. The values that you can choose from are:

Option   Description
NONE     No locking is done during the load. The data is read from the different tables similar to a dirty read. Other user sessions can change the data during the load operation. As a result, the loaded data might be inconsistent. The data inconsistencies might be acceptable if the primary goal of the data mart is to create statistics and discover trends rather than to find the exact values of specific data rows.
TABLE    Each table is locked for the duration it takes to gather the load data from the table. The loaded data is consistent within each table, but not necessarily across different tables.
MART     All of the tables in the data mart are locked for the duration of the load. The loaded data is consistent across all of the tables. However, all other user sessions are blocked from changing the data in the tables that are involved in the load.

Related tasks: Deploying a data mart on page 4-20

Refreshing the data in a data mart


By creating a new data mart, you can refresh the data in the Informix Warehouse Accelerator while queries are being accelerated. You do not need to suspend query acceleration to drop and re-create the original data mart.

Prerequisite: Confirm that you have enough memory to create the new data mart. During the load phase, you need approximately twice as much memory as the original data mart uses. You can drop the original data mart after the new data mart has successfully loaded.

To refresh the data mart data that is used by the accelerator, create a new data mart. As soon as the new data mart is loaded, the database server uses the new data mart for matching new queries to AQTs, and new candidate queries that are sent to the accelerator use the new data mart. Use either IBM Smart Analytics Optimizer Studio or the Java classes that are included with the accelerator to create and load the new data mart. The new and old data marts must have the same data mart definition, but different names.

In this example, datamart_1 has been loaded and sent to the accelerator for processing. The data for datamart_1 has changed, and you want any new queries that are sent to the database server to use the new data.
1. Create a data mart that has the same definition as datamart_1, but name the data mart datamart_2.
2. Load datamart_2. When the data loading for datamart_2 completes, the accelerator uses datamart_2 with the refreshed data for incoming queries.
3. Disable datamart_1.
4. Optional: Drop datamart_1.
If multiple data marts use the same tables, Informix uses the latest data mart to accelerate the queries.


Related concepts: Updating the data in a data mart Related tasks: Creating data marts on page 4-1

Updating the data in a data mart


When you want to update the data in a data mart with the most recent information in your database, you can drop and deploy the same data mart again. You cannot update the data in a data mart directly. Drop and deploy the same data mart again, for example, if you do not have enough memory to refresh the data in the Informix Warehouse Accelerator while queries are being accelerated.

Fast path: Instead of using the administration interface to redeploy the data mart, you can deploy the data mart directly from an XML file. The advantage is that the entire deployment requires only one step, and you do not need a local copy of the data mart definition in your IBM Smart Analytics Optimizer Studio workspace.

When you deploy a data mart, the XML file is created, and you can save the XML file on your computer. If you do not save the XML file locally, you can retrieve the file from the Informix Warehouse Accelerator only if the data mart definition has not been deleted from the accelerator. You must retrieve a copy of the XML file before the data mart is dropped.

Related tasks: Refreshing the data in a data mart on page 4-22

Drop a data mart


You can drop a data mart if you no longer need the data mart or if you need to update the data in a data mart. The data mart can be dropped from either the Data Source Explorer folder or the Properties view in the administration interface.

Handling schema changes


When you modify the schema for the tables on the database server, the data marts that refer to those tables are affected, and errors are returned when you run queries that refer to those tables. Examples of schema modification operations include:
v Renaming tables
v Renaming columns
v Altering tables to add or drop columns
v Changing the data type of a column
v Adding or dropping constraints
Use the following steps to refresh the data marts:
1. Drop the data marts from the accelerator that are affected by the schema changes.
2. Re-create the data mart definitions that were affected by the schema changes.

3. Validate the data marts.
4. Deploy the data marts.
5. Load the data into the data marts again.
After you refresh the data marts, rerun the queries to confirm that no processing errors are returned.
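As a concrete illustration (a hypothetical change; the column name is invented, and daily_sales comes from the sample schema in Appendix A), a single added column is enough to require this refresh cycle for every data mart that references the table:

```sql
-- Hypothetical schema change: after this ALTER, data marts that
-- reference daily_sales must be dropped, re-created, deployed,
-- and loaded again before accelerated queries succeed.
ALTER TABLE daily_sales ADD return_flag CHAR(1);
```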

Removing probing data from the database


After you no longer need the probing data, you can use a SET ENVIRONMENT statement to remove it from the database that contains the warehousing data. To remove all of the probing data from the database:
1. Connect to the database where the warehousing data resides.
2. Run the SET ENVIRONMENT use_dwa 'probe cleanup' statement. For example:
$ dbaccess stores_demo
> set environment use_dwa 'probe cleanup';
Environment set.

To check whether the probing data is removed, run the onstat -g probe command.
Important: The probing data is stored in memory. The data is automatically removed when the database server is shut down.

Monitoring AQTs
You can use the onstat -g aqt command to view information about the data marts and the associated accelerated query tables (AQTs). Related reference: onstat -g aqt command: Print data mart and accelerated query table information (Administrator's Reference)


Chapter 5. Reversion requirements for an Informix warehouse edition server and Informix Warehouse Accelerator
If you need to revert from the Informix 11.70 instance to an earlier version, you must perform some reversion tasks.

Reversion of the database server that contains data marts that were created with Informix Warehouse Accelerator
Before you use the onmode -b command to revert the database server, you must drop all of the data marts that are associated with that database server. You can drop the data marts by using the administration interface, IBM Smart Analytics Optimizer Studio, or by using the command-line interface. If you do not drop the data marts and the reversion succeeds, the system databases might be rebuilt and the AQTs disappear. As a result, the data marts that are associated with the reverted databases are orphaned and consume space and memory.

Reversion of the database server that performed workload analysis to create data mart definitions
If you created the data mart definitions by using workload analysis, the tables that are created by the probe2mart stored procedure must be dropped before you start the reversion:
v If you created a separate logging database to use with the probe2mart stored procedure, all of the data mart definition information is in tables in that separate database. You can simply drop that database without affecting your other databases.
v If you used your warehousing database as the logging database with the probe2mart stored procedure, you must manually drop each of the permanent tables that were created by the probe2mart stored procedure.
If you do not drop the tables before you start reversion, the reversion fails.
Related reference: The probe2mart stored procedure on page 4-18
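As a sketch of the manual cleanup (it assumes the tables listed in Table 4-1 exist in your warehousing database and that nothing else depends on them; the drop order is a cautious guess that removes the detail tables first):

```sql
-- Drop the probe2mart catalog tables before reverting the server.
-- Dependent detail tables are dropped before the tables they describe.
DROP TABLE 'informix'.iwa_mrefcols;
DROP TABLE 'informix'.iwa_mrefs;
DROP TABLE 'informix'.iwa_mcols;
DROP TABLE 'informix'.iwa_mtabs;
DROP TABLE 'informix'.iwa_columns;
DROP TABLE 'informix'.iwa_tables;
DROP TABLE 'informix'.iwa_marts;
```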

Copyright IBM Corp. 2010, 2011


Chapter 6. Troubleshooting
Missing sbspace
If you do not have a default sbspace created in the Informix database server and you attempt to install the accelerator, an error is returned. You must create and configure a default sbspace in the Informix database server, set the name of the default sbspace in the SBSPACENAME configuration parameter, and restart the Informix database server. Related tasks: Preparing the Informix database server on page 2-2

Memory issues for the coordinator node and the worker nodes
If you do not have sufficient memory assigned to the coordinator node and to the worker nodes, you might receive errors when you load data from the database server or when you run queries. The more worker nodes that you designate, the faster the data is loaded from the database server. However, the more worker nodes you designate, the more memory you need, because each worker node stores a copy of the data in the dimension tables. You specify the memory in the dwainst.conf configuration file. Related reference: dwainst.conf configuration file on page 3-3

Ensuring a result set includes the most current data


Occasionally you might want a specific query to access the most current data, which is stored on the database server. You can turn off the accelerator temporarily to run the query. The data that is stored on the accelerator is a snapshot of the data on the database server.

If you need to ensure that a query result set includes the most current data, turn off the accelerator, run the query, and then turn the accelerator on again:
1. Specify the SET ENVIRONMENT use_dwa '0' statement at the beginning of the query to turn off acceleration. Setting this variable ensures that the query is processed by the Informix query optimizer and not processed by the accelerator.
2. Add the SET ENVIRONMENT use_dwa '1' statement at the end of the query to activate acceleration again.
3. Run the query.
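Put together, the session might look like the following sketch (the query itself is a placeholder against the sample daily_sales table from Appendix A; the quoting style follows the other SET ENVIRONMENT examples in this guide):

```sql
-- Temporarily bypass the accelerator so this query sees current data.
SET ENVIRONMENT use_dwa '0';

SELECT SUM(extended_price)    -- placeholder query
  FROM daily_sales;

-- Re-enable acceleration for subsequent queries in this session.
SET ENVIRONMENT use_dwa '1';
```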


Appendix A. Sample warehouse schema


The Informix Warehouse Accelerator examples use this sample warehouse schema.

SQL statements
The following SQL statements create the tables, indexes, and key constraints for the sample warehouse schema.
CREATE TABLE DAILY_FORECAST (
    PERKEY                  INTEGER NOT NULL,
    STOREKEY                INTEGER NOT NULL,
    PRODKEY                 INTEGER NOT NULL,
    QUANTITY_FORECAST       INTEGER,
    EXTENDED_PRICE_FORECAST DECIMAL(16,2),
    EXTENDED_COST_FORECAST  DECIMAL(16,2)
);

CREATE TABLE DAILY_SALES (
    PERKEY                 INTEGER NOT NULL,
    STOREKEY               INTEGER NOT NULL,
    CUSTKEY                INTEGER NOT NULL,
    PRODKEY                INTEGER NOT NULL,
    PROMOKEY               INTEGER NOT NULL,
    QUANTITY_SOLD          INTEGER,
    EXTENDED_PRICE         DECIMAL(16,2),
    EXTENDED_COST          DECIMAL(16,2),
    SHELF_LOCATION         INTEGER,
    SHELF_NUMBER           INTEGER,
    START_SHELF_DATE       INTEGER,
    SHELF_HEIGHT           INTEGER,
    SHELF_WIDTH            INTEGER,
    SHELF_DEPTH            INTEGER,
    SHELF_COST             DECIMAL(16,2),
    SHELF_COST_PCT_OF_SALE DECIMAL(16,2),
    BIN_NUMBER             INTEGER,
    PRODUCT_PER_BIN        INTEGER,
    START_BIN_DATE         INTEGER,
    BIN_HEIGHT             INTEGER,
    BIN_WIDTH              INTEGER,
    BIN_DEPTH              INTEGER,
    BIN_COST               DECIMAL(16,2),
    BIN_COST_PCT_OF_SALE   DECIMAL(16,2),
    TRANS_NUMBER           INTEGER,
    HANDLING_CHARGE        INTEGER,
    UPC                    INTEGER,
    SHIPPING               INTEGER,
    TAX                    INTEGER,
    PERCENT_DISCOUNT       INTEGER,
    TOTAL_DISPLAY_COST     DECIMAL(16,2),
    TOTAL_DISCOUNT         DECIMAL(16,2)
);

CREATE TABLE CUSTOMER (
    CUSTKEY           INTEGER NOT NULL,
    NAME              CHAR(30),
    ADDRESS           CHAR(40),
    C_CITY            CHAR(20),
    C_STATE           CHAR(5),
    ZIP               CHAR(5),
    PHONE             CHAR(10),
    AGE_LEVEL         SMALLINT,
    AGE_LEVEL_DESC    CHAR(20),
    INCOME_LEVEL      SMALLINT,
    INCOME_LEVEL_DESC CHAR(20),
    MARITAL_STATUS    CHAR(1),
    GENDER            CHAR(1),
    DISCOUNT          DECIMAL(16,2)
);

ALTER TABLE CUSTOMER ADD CONSTRAINT PRIMARY KEY ( CUSTKEY );

CREATE TABLE PERIOD (
    PERKEY           INTEGER NOT NULL,
    CALENDAR_DATE    DATE,
    DAY_OF_WEEK      SMALLINT,
    WEEK             SMALLINT,
    PERIOD           SMALLINT,
    YEAR             SMALLINT,
    HOLIDAY_FLAG     CHAR(1),
    WEEK_ENDING_DATE DATE,
    MONTH            CHAR(3)
);

ALTER TABLE PERIOD ADD CONSTRAINT PRIMARY KEY ( PERKEY );

CREATE UNIQUE INDEX PERX1 ON PERIOD ( CALENDAR_DATE ASC, PERKEY ASC );
CREATE UNIQUE INDEX PERX2 ON PERIOD ( WEEK_ENDING_DATE ASC, PERKEY ASC );

CREATE TABLE PRODUCT (
    PRODKEY           INTEGER NOT NULL,
    UPC_NUMBER        CHAR(11) NOT NULL,
    PACKAGE_TYPE      CHAR(20),
    FLAVOR            CHAR(20),
    FORM              CHAR(20),
    CATEGORY          INTEGER,
    SUB_CATEGORY      INTEGER,
    CASE_PACK         INTEGER,
    PACKAGE_SIZE      CHAR(6),
    ITEM_DESC         CHAR(30),
    P_PRICE           DECIMAL(16,2),
    CATEGORY_DESC     CHAR(30),
    P_COST            DECIMAL(16,2),
    SUB_CATEGORY_DESC CHAR(70)
);

ALTER TABLE PRODUCT ADD CONSTRAINT PRIMARY KEY ( PRODKEY );

CREATE UNIQUE INDEX PRODX2 ON PRODUCT ( CATEGORY ASC, PRODKEY ASC );
CREATE UNIQUE INDEX PRODX3 ON PRODUCT ( CATEGORY_DESC ASC, PRODKEY ASC );

CREATE TABLE PROMOTION (
    PROMOKEY    INTEGER NOT NULL,
    PROMOTYPE   INTEGER,
    PROMODESC   CHAR(30),
    PROMOVALUE  DECIMAL(16,2),
    PROMOVALUE2 DECIMAL(16,2),
    PROMO_COST  DECIMAL(16,2)
);

ALTER TABLE PROMOTION ADD CONSTRAINT PRIMARY KEY ( PROMOKEY );

CREATE UNIQUE INDEX PROMOX1 ON PROMOTION ( PROMODESC ASC, PROMOKEY ASC );

CREATE TABLE STORE (
    STOREKEY     INTEGER NOT NULL,
    STORE_NUMBER CHAR(2),
    CITY         CHAR(20),
    STATE        CHAR(5),
    DISTRICT     CHAR(14),
    REGION       CHAR(10)
);

ALTER TABLE STORE ADD CONSTRAINT PRIMARY KEY ( STOREKEY );

CREATE INDEX DFX1 ON DAILY_FORECAST ( PERKEY ASC );
CREATE INDEX DFX2 ON DAILY_FORECAST ( STOREKEY ASC );
CREATE INDEX DFX3 ON DAILY_FORECAST ( PRODKEY ASC );
CREATE INDEX DSX1 ON DAILY_SALES ( PERKEY ASC );
CREATE INDEX DSX2 ON DAILY_SALES ( STOREKEY ASC );
CREATE INDEX DSX3 ON DAILY_SALES ( CUSTKEY ASC );
CREATE INDEX DSX4 ON DAILY_SALES ( PRODKEY ASC );
CREATE INDEX DSX5 ON DAILY_SALES ( PROMOKEY ASC );

ALTER TABLE daily_sales ADD CONSTRAINT FOREIGN KEY (perkey)
    REFERENCES period(perkey);
ALTER TABLE daily_sales ADD CONSTRAINT FOREIGN KEY (prodkey)
    REFERENCES product(prodkey);
ALTER TABLE daily_sales ADD CONSTRAINT FOREIGN KEY (storekey)
    REFERENCES store(storekey);
ALTER TABLE daily_sales ADD CONSTRAINT FOREIGN KEY (custkey)
    REFERENCES customer(custkey);
ALTER TABLE daily_sales ADD CONSTRAINT FOREIGN KEY (promokey)
    REFERENCES promotion(promokey);

ALTER TABLE daily_forecast ADD CONSTRAINT FOREIGN KEY (perkey)
    REFERENCES period(perkey);
ALTER TABLE daily_forecast ADD CONSTRAINT FOREIGN KEY (prodkey)
    REFERENCES product(prodkey);
ALTER TABLE daily_forecast ADD CONSTRAINT FOREIGN KEY (storekey)
    REFERENCES store(storekey);

update statistics medium;
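The examples in this guide run against this schema. As an illustration (a hypothetical query, not taken from the guide; the 'Beverages' category value is invented), a typical star-join query that a data mart over this schema could accelerate looks like this:

```sql
-- Hypothetical star-join query over the sample schema: revenue by
-- region and month, joining the fact table to three dimension tables.
SELECT s.region, p.month, SUM(d.extended_price) AS revenue
  FROM daily_sales d, store s, period p, product pr
 WHERE d.storekey = s.storekey
   AND d.perkey = p.perkey
   AND d.prodkey = pr.prodkey
   AND pr.category_desc = 'Beverages'   -- invented filter value
 GROUP BY s.region, p.month;
```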

Appendix B. Sysmaster interface (SMI) pseudo tables for query probing data
The SMI tables provide a way to access probing data in a relational form, which is most convenient for further processing. The sysmaster database provides the following pseudo tables for accessing probing data:
v For tables: sysprobetables
v For columns: sysprobecolumns
v For join descriptors: sysprobejds
v For join predicates: sysprobejps

The sysprobetables table


The schema for the sysprobetables table is:
CREATE TABLE informix.sysprobetables
(
    dbname char(128),  { database name }
    sql_id int8,       { statement id in syssqltrace }
    tabid  integer,    { table id }
    fact   char(1)     { table is fact table (y/n) }
);
REVOKE ALL ON informix.sysprobetables FROM public AS informix;
GRANT SELECT ON informix.sysprobetables TO public AS informix;

The sysprobecolumns table


The schema for the sysprobecolumns table is:
CREATE TABLE informix.sysprobecolumns
(
    dbname char(128),  { database name }
    sql_id int8,       { statement id in syssqltrace }
    tabid  integer,    { table id }
    colno  smallint    { column number }
);
REVOKE ALL ON informix.sysprobecolumns FROM public AS informix;
GRANT SELECT ON informix.sysprobecolumns TO public AS informix;

The sysprobejds table


The schema for the sysprobejds table is:
CREATE TABLE informix.sysprobejds
(
    dbname char(128),  { database name }
    sql_id int8,       { statement id in syssqltrace }
    jd     integer,    { join descriptor sequence number }
    ctabid integer,    { child table id }
    ptabid integer,    { parent table id }
    type   char(1),    { join type: inner (i), left outer (l) }
    uniq   char(1)     { parent table has unique index (y/n) }
);
REVOKE ALL ON informix.sysprobejds FROM public AS informix;
GRANT SELECT ON informix.sysprobejds TO public AS informix;


The sysprobejps table


The schema for the sysprobejps table is:
CREATE TABLE informix.sysprobejps
(
    dbname char(128),  { database name }
    sql_id int8,       { statement id in syssqltrace }
    jd     integer,    { join descriptor sequence number }
    jp     integer,    { join predicate sequence number }
    ccolno smallint,   { child table column number }
    pcolno smallint    { parent table column number }
);
REVOKE ALL ON informix.sysprobejps FROM public AS informix;
GRANT SELECT ON informix.sysprobejps TO public AS informix;
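As an illustration of using these pseudo tables (a hypothetical query: it assumes query probing has populated them, that you are connected to the probed database so its systables catalog is in scope, and that the probed database is named sales as in the earlier examples):

```sql
-- Hypothetical: resolve the tabid values recorded by query probing
-- to table names, and show which tables were identified as fact tables.
SELECT p.sql_id, t.tabname, p.fact
  FROM sysmaster:sysprobetables AS p, systables AS t
 WHERE p.tabid = t.tabid
   AND p.dbname = 'sales';
```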

Related tasks: Creating data mart definitions by using workload analysis on page 4-10


Appendix C. Supported locales


Informix Warehouse Accelerator supports a subset of the locales that IBM Informix supports. Some of the locales are not supported by the JDBC environment that is needed to run administrative commands for the accelerator. For these locales, you must map the locale to a different locale or code set name, but only for the connection through JDBC; for example, in the Java connection URL. See User-defined locales (JDBC Driver Guide) for information about how to map the locale in the Java connection URL.

Locale 8859-1 and 8859-15


The following table lists the supported locales:
Table C-1. Information for locales 8859-1 and 8859-15

Locale 8859-1: da_dk.8859-1, de_at.8859-1, de_at.8859-1@bun1, de_at.8859-1@bund, de_at.8859-1@dud1, de_at.8859-1@dude, de_ch.8859-1, de_ch.8859-1@bun1, de_ch.8859-1@bund, de_ch.8859-1@dude, de_de.8859-1, de_de.8859-1@bun1, de_de.8859-1@bund, de_de.8859-1@dud1, de_de.8859-1@dude, en_au.8859-1, en_gb.8859-1, en_us.8859-1, en_us.8859-1@dict, en_us.8859-1@dres, en_us.8859-1@extn, es_es.8859-1, es_es.8859-1@ra94, es_es.8859-1@rae, fi_fi.8859-1, fr_be.8859-1, fr_ca.8859-1, fr_ch.8859-1, fr_fr.8859-1, is_is.8859-1, it_it.8859-1, nl_be.8859-1, nl_nl.8859-1, no_no.8859-1, pt_br.8859-1, pt_pt.8859-1, sv_se.8859-1

Locale 8859-15: da_dk.8859-15, de_at.8859-15, de_at.8859-15@bun1, de_at.8859-15@dud1, de_ch.8859-15, de_ch.8859-15@bun1, de_ch.8859-15@dud1, de_de.8859-15, de_de.8859-15@bun1, de_de.8859-15@dud1, en_gb.8859-15, en_us.8859-15, en_us.8859-15@dict, es_es.8859-15, es_es.8859-15@rae, fi_fi.8859-15, fr_be.8859-15, fr_ch.8859-15, fr_fr.8859-15, it_it.8859-15, nl_be.8859-15, nl_nl.8859-15, no_no.8859-15, pt_pt.8859-15, sv_se.8859-15


Locales 8859-2, 8859-5, and 8859-6


The following table lists the supported locales:
Table C-2. Information for locales 8859-2, 8859-5, and 8859-6

Locale 8859-2: cs_cz.8859-2, cs_cz.8859-2@sis, hr_hr.8859-2, hu_hu.8859-2, pl_pl.8859-2, ro_ro.8859-2, sh_hr.8859-2 (needs mapping to hr_hr.8859-2 for Java/JDBC), sk_sk.8859-2, sk_sk.8859-2@sn

Locale 8859-5: bg_bg.8859-5, ru_ru.8859-5, uk_ua.8859-5

Locale 8859-6: ar_ae.8859-6, ar_ae.8859-6@greg, ar_ae.8859-6@isla, ar_bh.8859-6, ar_bh.8859-6@greg, ar_bh.8859-6@isla, ar_kw.8859-6, ar_kw.8859-6@greg, ar_kw.8859-6@isla, ar_om.8859-6, ar_om.8859-6@greg, ar_om.8859-6@isla, ar_qa.8859-6, ar_qa.8859-6@greg, ar_qa.8859-6@isla, ar_sa.8859-6, ar_sa.8859-6@greg, ar_sa.8859-6@isla, ar_dz.8859-6, ar_eg.8859-6, ar_jo.8859-6, ar_lb.8859-6, ar_ma.8859-6, ar_sy.8859-6, ar_tn.8859-6, ar_ye.8859-6
The following locales are supported if mapped to the ar_ae.8859-6 locale for the Java/JDBC connection: ar_dz, ar_eg, ar_jo, ar_lb, ar_ma, ar_sy, ar_tn, and ar_ye.

Locales 8859-7, 8859-8, and 8859-9


The following table lists the supported locales:
Table C-3. Information for locales 8859-7, 8859-8, and 8859-9

Locale 8859-7: el_gr.8859-7
Locale 8859-8: iw_il.8859-8, iw_il.8859-8@greg, iw_il.8859-8@hebr
Locale 8859-9: tr_tr.8859-9@ifix


Locales utf-8
The following table lists the supported locales:
Table C-4. Information for locales utf-8

Locale utf-8: ar_ae.utf8, ar_bh.utf8, ar_kw.utf8, ar_om.utf8, ar_qa.utf8, ar_sa.utf8, cs_cz.utf8, da_dk.utf8, de_at.utf8, de_ch.utf8, de_de.utf8, en_au.utf8, en_gb.utf8, en_us.utf8, es_es.utf8, fi_fi.utf8, fr_be.utf8, fr_ca.utf8, fr_ch.utf8, fr_fr.utf8, hr_hr.utf8, hu_hu.utf8, ja_jp.utf8, ko_kr.utf8, nl_be.utf8, nl_nl.utf8, no_no.utf8, pl_pl.utf8, pt_br.utf8, pt_pt.utf8, ro_ro.utf8, ru_ru.utf8, sh_hr.utf8 (needs mapping to hr_hr.utf8 for Java/JDBC), sk_sk.utf8, sv_se.utf8, th_th.utf8, tr_tr.utf8, tr_tr.utf8@ifix, zh_cn.utf8, zh_hk.utf8 (needs mapping to zh_tw.utf8 for Java/JDBC), zh_tw.utf8

Locales PC-Latin-1, PC-Latin-2, and 858


The PC-Latin code sets need to be replaced with the following code sets:
v Replace PC-Latin-1 with 850
v Replace PC-Latin-2 with 852
v Replace PC-Latin-1 w_euro with 858
For the Java/JDBC connection, the code set name 858 then needs to be mapped to Latin9.
Note: The sh_hr locale must be mapped to hr_hr for the Java/JDBC connection.
The following table lists the supported locales:


Table C-5. Information for locales PC-Latin-1, PC-Latin-2, and 858

Locale PC-Latin-1: da_dk.PC-Latin-1, de_at.PC-Latin-1, de_at.PC-Latin-1@bund, de_at.PC-Latin-1@dude, de_ch.PC-Latin-1, de_ch.PC-Latin-1@bund, de_ch.PC-Latin-1@dude, de_de.PC-Latin-1, de_de.PC-Latin-1@bun1, de_de.PC-Latin-1@bund, de_de.PC-Latin-1@dud1, de_de.PC-Latin-1@dude, en_au.PC-Latin-1, en_gb.PC-Latin-1, en_us.PC-Latin-1, en_us.PC-Latin-1@dict, es_es.PC-Latin-1, es_es.PC-Latin-1@rae, fi_fi.PC-Latin-1, fr_be.PC-Latin-1, fr_ca.PC-Latin-1, fr_ch.PC-Latin-1, fr_fr.PC-Latin-1, is_is.PC-Latin-1, it_it.PC-Latin-1, nl_be.PC-Latin-1, nl_nl.PC-Latin-1, no_no.PC-Latin-1, pt_pt.PC-Latin-1, sv_se.PC-Latin-1

Locale PC-Latin-2: cs_cz.PC-Latin-2, hr_hr.PC-Latin-2, hu_hu.PC-Latin-2, pl_pl.PC-Latin-2, ro_ro.PC-Latin-2, sh_hr.PC-Latin-2, sk_sk.PC-Latin-2

Locale 858: da_dk.858, de_at.858, de_at.858@bund, de_at.858@dude, de_de.858, de_de.858@bun1, de_de.858@bund, de_de.858@dud1, de_de.858@dude, es_es.858, fi_fi.858, fr_be.858, fr_fr.858, it_it.858, nl_be.858, nl_nl.858, pt_pt.858


Locales CP1250, CP1251, CP1252, and CP1253


The following table lists the supported locales:
Table C-6. Information for locales CP1250, CP1251, CP1252, and CP1253

Locale CP1250: cs_cz.CP1250, hr_hr.CP1250, hu_hu.CP1250, pl_pl.CP1250, ro_ro.CP1250, sh_hr.CP1250 (needs mapping to hr_hr.CP1250 for Java/JDBC), sk_sk.CP1250

Locale CP1251: bg_bg.1251, ru_ru.1251, uk_ua.1251

Locale CP1252: da_dk.CP1252, da_dk.CP1252@euro, de_at.CP1252, de_at.CP1252@bun1, de_at.CP1252@bund, de_at.CP1252@dud1, de_at.CP1252@dude, de_at.CP1252@ebu1, de_at.CP1252@edu1, de_at.CP1252@euro, de_ch.CP1252, de_ch.CP1252@bun1, de_ch.CP1252@bund, de_ch.CP1252@dud1, de_ch.CP1252@dude, de_ch.CP1252@ebu1, de_ch.CP1252@edu1, de_ch.CP1252@euro, de_de.CP1252, de_de.CP1252@bun1, de_de.CP1252@bund, de_de.CP1252@dud1, de_de.CP1252@dude, de_de.CP1252@ebu1, de_de.CP1252@edu1, de_de.CP1252@euro, en_au.CP1252, en_gb.CP1252, en_gb.CP1252@euro, en_us.CP1252, en_us.CP1252@dict, en_us.CP1252@edic, en_us.CP1252@euro, es_es.CP1252, es_es.CP1252@euro, es_es.CP1252@rae, fi_fi.CP1252, fi_fi.CP1252@euro, fr_be.CP1252, fr_be.CP1252@euro, fr_ca.CP1252, fr_ch.CP1252, fr_ch.CP1252@euro, fr_fr.CP1252, fr_fr.CP1252@euro, is_is.CP1252, it_it.CP1252, it_it.CP1252@euro, nl_be.CP1252, nl_be.CP1252@euro, nl_nl.CP1252, nl_nl.CP1252@euro, no_no.CP1252, no_no.CP1252@euro, pt_br.CP1252, pt_pt.CP1252, pt_pt.CP1252@euro, sv_se.CP1252, sv_se.CP1252@euro

Locale CP1253: el_gr.1253


Locales CP1254, CP1255, and CP1256


The following table lists the supported locales:
Table C-7. Information for locales CP1254, CP1255, and CP1256

Locale CP1254: tr_tr.CP1254@ifix

Locale CP1255: iw_il.1255, iw_il.1255@greg, iw_il.1255@hebr

Locale CP1256: ar_ae.1256, ar_ae.1256@greg, ar_ae.1256@isla, ar_bh.1256, ar_bh.1256@greg, ar_bh.1256@isla, ar_kw.1256, ar_kw.1256@greg, ar_kw.1256@isla, ar_om.1256, ar_om.1256@greg, ar_om.1256@isla, ar_qa.1256, ar_qa.1256@greg, ar_qa.1256@isla, ar_sa.1256, ar_sa.1256@greg, ar_sa.1256@isla

Locales big5, gb, ksc, and cp949


The following table lists the supported locales:
Table C-8. Information for locales big5, gb, ksc, and cp949

Locale big5: zh_cn.big5, zh_tw.big5, zh_hk.big5-HKSCS (needs mapping to zh_tw.Big5-HKSCS for Java/JDBC)
Locale gb: zh_cn.gb, zh_tw.gb
Locales ksc and cp949: ko_kr.ksc, ko_kr.cp949 (needs mapping to ko_kr.windows-949 for Java/JDBC)

Locales 866, KOI-8, and thai620


The following table lists the supported locales:
Table C-9. Information for locales 866, KOI-8, and thai620

Locale 866: ru_ru.866
Locale KOI-8: ru_ru.KOI-8
Locale thai620: th_th.thai620


Appendix D. Accessibility
IBM strives to provide products with usable access for everyone, regardless of age or ability.

Accessibility features for IBM Informix products


Accessibility features help a user who has a physical disability, such as restricted mobility or limited vision, to use information technology products successfully.

Accessibility features
The following list includes the major accessibility features in IBM Informix products. These features support:
v Keyboard-only operation.
v Interfaces that are commonly used by screen readers.
v The attachment of alternative input and output devices.
Tip: The information center and its related publications are accessibility-enabled for the IBM Home Page Reader. You can operate all features by using the keyboard instead of the mouse.

Keyboard navigation
This product uses standard Microsoft Windows navigation keys.

Related accessibility information


IBM is committed to making our documentation accessible to persons with disabilities. Our publications are available in HTML format so that they can be accessed with assistive technology such as screen reader software. You can view the publications in Adobe Portable Document Format (PDF) by using the Adobe Acrobat Reader.

IBM and accessibility


See the IBM Accessibility Center at http://www.ibm.com/able for more information about the IBM commitment to accessibility.

Dotted decimal syntax diagrams


The syntax diagrams in our publications are available in dotted decimal format, which is an accessible format that is available only if you are using a screen reader.

In dotted decimal format, each syntax element is written on a separate line. If two or more syntax elements are always present together (or always absent together), the elements can appear on the same line, because they can be considered as a single compound syntax element.

Each line starts with a dotted decimal number; for example, 3 or 3.1 or 3.1.1. To hear these numbers correctly, make sure that your screen reader is set to read punctuation. All syntax elements that have the same dotted decimal number (for example, all syntax elements that have the number 3.1) are mutually exclusive alternatives. If you hear the lines 3.1 USERID and 3.1 SYSTEMID, your syntax can include either USERID or SYSTEMID, but not both.

The dotted decimal numbering level denotes the level of nesting. For example, if a syntax element with dotted decimal number 3 is followed by a series of syntax elements with dotted decimal number 3.1, all the syntax elements numbered 3.1 are subordinate to the syntax element numbered 3.

Certain words and symbols are used next to the dotted decimal numbers to add information about the syntax elements. Occasionally, these words and symbols might occur at the beginning of the element itself. For ease of identification, if the word or symbol is a part of the syntax element, the word or symbol is preceded by the backslash (\) character. The * symbol can be used next to a dotted decimal number to indicate that the syntax element repeats. For example, syntax element *FILE with dotted decimal number 3 is read as 3 \* FILE. Format 3* FILE indicates that syntax element FILE repeats. Format 3* \* FILE indicates that syntax element * FILE repeats.

Characters such as commas, which are used to separate a string of syntax elements, are shown in the syntax just before the items they separate. These characters can appear on the same line as each item, or on a separate line with the same dotted decimal number as the relevant items. The line can also show another symbol that provides information about the syntax elements. For example, the lines 5.1*, 5.1 LASTRUN, and 5.1 DELETE mean that if you use more than one of the LASTRUN and DELETE syntax elements, the elements must be separated by a comma. If no separator is given, assume that you use a blank to separate each syntax element.

If a syntax element is preceded by the % symbol, that element is defined elsewhere. The string following the % symbol is the name of a syntax fragment rather than a literal. For example, the line 2.1 %OP1 refers to a separate syntax fragment OP1.
The following words and symbols are used next to the dotted decimal numbers:

?   Specifies an optional syntax element. A dotted decimal number followed by the ? symbol indicates that all the syntax elements with a corresponding dotted decimal number, and any subordinate syntax elements, are optional. If there is only one syntax element with a dotted decimal number, the ? symbol is displayed on the same line as the syntax element (for example, 5? NOTIFY). If there is more than one syntax element with a dotted decimal number, the ? symbol is displayed on a line by itself, followed by the syntax elements that are optional. For example, if you hear the lines 5 ?, 5 NOTIFY, and 5 UPDATE, you know that syntax elements NOTIFY and UPDATE are optional; that is, you can choose one or none of them. The ? symbol is equivalent to a bypass line in a railroad diagram.

!   Specifies a default syntax element. A dotted decimal number followed by the ! symbol and a syntax element indicates that the syntax element is the default option for all syntax elements that share the same dotted decimal number. Only one of the syntax elements that share the same dotted decimal number can specify a ! symbol. For example, if you hear the lines 2? FILE, 2.1! (KEEP), and 2.1 (DELETE), you know that (KEEP) is the default option for the FILE keyword. In this example, if you include the FILE keyword but do not specify an option, default option KEEP is applied. A default option also applies to the next higher dotted decimal number. In this example, if the FILE keyword is omitted, default FILE(KEEP) is used.


However, if you hear the lines 2? FILE, 2.1, 2.1.1! (KEEP), and 2.1.1 (DELETE), the default option KEEP only applies to the next higher dotted decimal number, 2.1 (which does not have an associated keyword), and does not apply to 2? FILE. Nothing is used if the keyword FILE is omitted.

*
   Specifies a syntax element that can be repeated zero or more times. A dotted decimal number followed by the * symbol indicates that this syntax element can be used zero or more times; that is, it is optional and can be repeated. For example, if you hear the line 5.1* data-area, you know that you can include more than one data area or you can include none. If you hear the lines 3*, 3 HOST, and 3 STATE, you know that you can include HOST, STATE, both together, or nothing.

   Notes:
   1. If a dotted decimal number has an asterisk (*) next to it and there is only one item with that dotted decimal number, you can repeat that same item more than once.
   2. If a dotted decimal number has an asterisk next to it and several items have that dotted decimal number, you can use more than one item from the list, but you cannot use the items more than once each. In the previous example, you can write HOST STATE, but you cannot write HOST HOST.
   3. The * symbol is equivalent to a loop-back line in a railroad syntax diagram.

+
   Specifies a syntax element that must be included one or more times. A dotted decimal number followed by the + symbol indicates that this syntax element must be included one or more times. For example, if you hear the line 6.1+ data-area, you must include at least one data area. If you hear the lines 2+, 2 HOST, and 2 STATE, you know that you must include HOST, STATE, or both. As for the * symbol, you can repeat a particular item if it is the only item with that dotted decimal number. The + symbol, like the * symbol, is equivalent to a loop-back line in a railroad syntax diagram.
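As an illustration of how these conventions combine, consider a hypothetical BACKUP command (not an actual syntax from this product) that reuses the FILE example from above. Its dotted decimal listing might be:

```
1 BACKUP
2? FILE
2.1! (KEEP)
2.1 (DELETE)
3? NOTIFY
```

Read aloud, this says: BACKUP is required; the FILE keyword is optional, and if you include FILE without an option, the default (KEEP) applies; (KEEP) and (DELETE) are mutually exclusive alternatives; and NOTIFY is optional.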

Appendix D. Accessibility


Notices
This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.

For license inquiries regarding double-byte (DBCS) information, contact the IBM Intellectual Property Department in your country or send inquiries, in writing, to:

Intellectual Property Licensing
Legal and Intellectual Property Law
IBM Japan Ltd.
1623-14, Shimotsuruma, Yamato-shi
Kanagawa 242-8502 Japan

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.


IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact:

IBM Corporation J46A/G4
555 Bailey Avenue
San Jose, CA 95141-1003
U.S.A.

Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee.

The licensed program described in this document and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or any equivalent agreement between us.

Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

All statements regarding IBM's future direction or intent are subject to change or withdrawal without notice, and represent goals and objectives only.

All IBM prices shown are IBM's suggested retail prices, are current and are subject to change without notice. Dealer prices may vary.

This information is for planning purposes only. The information herein is subject to change before the products described become available.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are provided "AS IS", without warranty of any kind. IBM shall not be liable for any damages arising out of your use of the sample programs.

Each copy or any portion of these sample programs or any derivative work, must include a copyright notice as follows: (your company name) (year). Portions of this code are derived from IBM Corp. Sample Programs. Copyright IBM Corp. _enter the year or years_. All rights reserved.

If you are viewing this information softcopy, the photographs and color illustrations may not appear.

Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the web at "Copyright and trademark information" at http://www.ibm.com/legal/copytrade.shtml.

Adobe, the Adobe logo, and PostScript are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, and/or other countries.

Intel, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.

Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.

Microsoft, Windows, and Windows NT are trademarks of Microsoft Corporation in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Other company, product, or service names may be trademarks or service marks of others.


Index

A

Accelerated query tables
   described 1-1, 4-7
   states 4-7
Accelerator
   administrator interface 1-4
   architecture 1-4
   configuring 3-1, 3-2
   connecting to Informix 3-6
   installing 2-1, 2-3
   instance 2-1
   memory 1-12
   operating system 1-11
   reinstalling 2-5
   turning off acceleration 6-1
   uninstalling 2-5
Accessibility D-1
   dotted decimal format of syntax diagrams D-1
   keyboard D-1
   shortcut keys D-1
   syntax diagrams, reading in a screen reader D-1
Administration interface
   creating data marts 4-8
   installing 2-4
   operating system 1-11
   overview 1-4
Aggregate functions 1-9
Aggregate tables 4-3
AQTs
   See Accelerated query tables

C

CLUSTER_INTERFACE parameter 3-2, 3-3
cluster.conf file 3-2
compliance with standards viii
Configuring
   accelerator 3-1, 3-2
   dwainst.conf file 3-3
   overview 3-1
   sbspace 2-2
   SBSPACENAME configuration parameter 2-2
Coordinator node 1-1
   memory issues 6-1
COORDINATOR_SHM parameter 3-3

D

Data currency 4-22
Data mart definitions
   creating 4-10
   defined 4-1
Data marts
   aggregate tables 4-3
   creating 4-8
   defined 4-1
   deploying 4-20
   described 1-1, 4-3
   designing 4-2
   dimension table 4-3
   dropping 4-23
   example, creating 4-12
   fact table 4-3
   key tasks, creating 4-1
   loading data 4-21
   overview 4-3
   refreshing data 4-22
   star schema 4-3
   summary tables 4-3
   updating data 4-23
Data types
   supported 1-9
Database server
   schema changes 4-23
Dimension tables
   sample 4-3
Directories
   administration interface 2-1
   documentation 2-1
   installation 2-1
   instance 2-1
   samples 2-1
   storage 2-1, 3-1, 3-2
Disabilities, visual
   reading syntax diagrams D-1
Disability D-1
Dotted decimal format of syntax diagrams D-1
DRDA_INTERFACE parameter 3-3
DWA_CM binary 3-10
DWADIR parameter 3-3
dwainst.conf file
   ondwa setup command 3-10
   parameters 3-3

E

Eclipse tool 1-1
Environment settings 3-7
Examples
   genmartdef function 4-20
   probe2mart stored procedure 4-18
   workload analysis 4-12

F

Fact tables 4-10
   sample 4-3
Functions
   aggregate functions 1-9
   genmartdef 4-20
   scalar functions 1-9
   user-defined functions 1-9

G

Genmartdef function
   examples 4-20
   syntax 4-20

I

IBM Smart Analytics Optimizer Studio
   architecture 1-4
   described 1-1
   installing 2-1, 2-4
industry standards viii
Installing
   administration interface 2-4
   directory 2-1
   Informix Warehouse Accelerator 2-3
   overview 2-1
Instance directory 2-1

J

Join descriptors 4-18, B-1
Join predicates 4-18, B-1
Joins
   join combinations 1-11
   join predicates 1-11
   many-to-many 4-9
   one-to-many 4-9
   supported 1-11
   unsupported 1-11

L

Locales v, 1-1
   supported C-1

M

Many-to-many joins 4-9
Memory
   accelerator 1-12
   issues 6-1

N

NUM_NODES parameter 3-3

O

ondwa utility
   clean command 3-15
   dwainst.conf file 3-10
   getpin command 3-13
   overview 3-9
   reset command 3-14
   running a cluster system 3-9
   setup command 3-10
   start command 3-11
   status command 3-12
   stop command 3-14
   tasks command 3-13
   users who can run the command 3-9
One-to-many joins 4-9
onstat -g aqt command 4-24
Operating systems
   supported 1-11
Overview 1-1

P

PDQPRIORITY variable 3-7
Prerequisites
   hardware 1-12
   operating system 1-11
   software 1-11
Probe2mart
   reversion requirements 5-1
Probe2mart stored procedure
   examples 4-18
   syntax 4-18
   tables 4-18
Probing data
   removing 4-24

Q

Queries
   acceleration considerations 1-6
   analyze 4-7
   not accelerated 1-8
   types accelerated 1-6
Query probing
   creating data marts 4-10
   example 4-12
   removing data 4-24

R

Reinstalling Informix Warehouse Accelerator 2-5
Reversion
   workload analysis tables 5-1

S

Samples
   query
      profit by store 1-7
      revenue by item 1-7
      revenue by store 1-7
      week to day profits 1-8
   warehouse schema A-1
sbspace
   configuring 2-2
SBSPACENAME configuration parameter 2-2
Scalar functions 1-9
Schemas
   sample SQL A-1
Screen reader
   reading syntax diagrams D-1
SET ENVIRONMENT statement 4-24, 6-1
Shortcut keys
   keyboard D-1
Snowflake schema 1-1
standards viii
Star schema
   sample 4-3
START_PORT parameter 3-3
Statistics
   UPDATE STATISTICS LOW statement 3-7
Stored procedures
   probe2mart examples 4-18
   probe2mart syntax 4-18
Summary tables 4-3
Syntax
   genmartdef function 4-20
Syntax diagrams
   reading in a screen reader D-1
sysdbopen() procedure 3-7

T

Troubleshooting
   memory issues 6-1
   sbspace 6-1

U

Uninstalling Informix Warehouse Accelerator 2-5
UPDATE STATISTICS LOW statement 3-7
use_dwa variable 3-7, 6-1
User-defined functions 1-9

V

Variables
   PDQPRIORITY 3-7
   use_dwa 3-7
Visual disabilities
   reading syntax diagrams D-1

W

Worker nodes 1-1
   memory issues 6-1
WORKER_SHM parameter 3-3
Workload analysis
   creating data marts 4-10

Printed in USA

SC27-3851-00
