
Schema Improvements

Performance and Scalability Enhancements 1-1


Objectives

After this lesson, you should be able to:


• Use new features of index-organized tables (IOTs)
• Explain skip scanning of indexes
• Define improvements to the cost-based optimizer

Performance and Scalability Enhancements 1-2


Index-Organized Table Enhancements

• Bitmap indexes on IOT columns


– Use an intermediate mapping table
– Reduce bad guesses by refreshing the mapping
table
• Additional enhancements
– Online CREATE, REBUILD, and COALESCE of
secondary indexes
– Parallel DML on index-organized tables
– Online MOVE of IOT with overflow segment

Index-Organized Table Enhancements


Two of the key improvements to IOTs introduced in Oracle9i relate to
secondary indexes. Secondary indexes on an IOT consist of two parts: the
physical address of the block where the row was stored (the data block address)
and the primary key value of the row. This two-part entry is called a logical
ROWID.
When a secondary index is used to find a row, the data block address is used as
a guess and, if the row is found on the block, it is returned by the query as usual.
If the row is not on the block found by the guess, the primary key is used to
locate the record in the B*-tree structure of the IOT. The reason that the first
probe is called a guess is that the row may have moved to a new block as the
result of block splits or subsequent block coalesces. To maintain acceptable
performance as blocks split and coalesce, data block addresses in secondary
indexes are not updated. The data block address stored in the index may not,
therefore, reflect the actual location of the indexed row.
Prior to Oracle9i, the structure of secondary indexes precluded the use of a
bitmap index for a secondary index. A bitmap index consists of bits which
represent physical ROWIDs and the structure does not allow for a primary key
component. This restriction has been removed with the introduction of an
intermediate storage structure, called a mapping table, which relates the physical
ROWID from the bitmap index and the primary key value from the IOT.
Other restrictions on use of the IOT during the creation and rebuilding of
secondary indexes on IOTs have been removed in Oracle9i and these operations
can now be performed while DML continues on the IOT.
Further enhancements to IOTs, which simplify their management and use,
include:
• Use of parallel DML
• Online MOVE of an IOT with an overflow segment.

Performance and Scalability Enhancements 1-3


Create a Mapping Table

SQL> CREATE TABLE countries


2 ( country_id CHAR(2)
3 CONSTRAINT country_id_nn NOT NULL
4 , country_name VARCHAR2(40)
5 , currency_name VARCHAR2(25)
6 , currency_symbol VARCHAR2(3)
7 , region VARCHAR2(15)
8 , CONSTRAINT country_c_id_pk
9 PRIMARY KEY (country_id))
10 ORGANIZATION INDEX
11 MAPPING TABLE TABLESPACE tbs_1
12 OVERFLOW TABLESPACE tbs_2;

Create a Mapping Table


Bitmap indexes are useful for queries on columns with low cardinality, such as
region in the example. They are also useful for columns which are involved
in AND or OR operations through the predicate of a query. Bitmap indexes are
most commonly used on tables involved in queries rather than data manipulation
language (DML) operations. To use a bitmap index for the secondary index of
an IOT, you must identify a tablespace where the intermediate mapping table
will be stored.
The statement above creates the table countries as an IOT with a tablespace,
TBS_1, defined to hold the mapping tables. These tables will be created
automatically when a secondary index on countries is defined as a bitmap
index. The MAPPING TABLE clause is only required if you intend to use
bitmap indexes with the table.
The default name of mapping tables is sys_iot_map_sequence, where
sequence is a generated value that guarantees a unique name.
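
With the mapping table in place, a bitmap secondary index can be created on
the IOT just as it would be on a heap table; creating it triggers the automatic
creation of the mapping table. The index name below is illustrative:

SQL> CREATE BITMAP INDEX countries_region_bix
  2  ON countries (region);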

Performance and Scalability Enhancements 1-4


Update Guess ROWIDs

SQL> SELECT index_name,index_type,


2 pct_direct_access
3 FROM user_indexes
4 WHERE table_name = 'COUNTRIES'
5 AND pct_direct_access IS NOT NULL;

Update Guess ROWIDs


As blocks split or coalesce in an IOT, the data block addresses stored in
secondary indexes become obsolete. You should rebuild the index when too
many guesses are likely to be wrong. In the case of bitmap indexes, you need to
refresh the mapping table when it becomes stale. To do this, you drop and then
recreate the related bitmap index.
Use a query such as the one shown to determine if the index guesses are likely
to be accurate. A low value for PCT_DIRECT_ACCESS indicates that most
guesses will not find the required blocks. Such values should alert you to the
need to refresh the mapping table.
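
Following the text above, a B*-tree secondary index can be rebuilt online,
while a stale bitmap index is refreshed by dropping and re-creating it. The
index names are illustrative:

SQL> ALTER INDEX countries_name_ix REBUILD ONLINE;

SQL> DROP INDEX countries_region_bix;
SQL> CREATE BITMAP INDEX countries_region_bix
  2  ON countries (region);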

Performance and Scalability Enhancements 1-5


Additional Index-Organized Table
Enhancements

• Online CREATE, REBUILD, and COALESCE of


secondary indexes
• Parallel DML on index-organized tables
• Online MOVE of IOT with overflow segment

Online CREATE, REBUILD, and COALESCE of Secondary Indexes


Oracle9i allows normal activity to continue on an IOT while commands related
to secondary indexes, CREATE INDEX, ALTER INDEX… REBUILD, and
ALTER INDEX… COALESCE execute online.
Parallel DML on Index-Organized Tables
You can execute DML in parallel on partitioned IOTs. This is a new capability,
not available in previous releases.
Moving Index-Organized Tables
Online move of an index-organized table is a feature that was released with
Oracle8i. However, an IOT could not be moved online together with its
OVERFLOW segment. That restriction has now been lifted.
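
The statements below sketch these operations against the countries IOT; the
index name is illustrative:

SQL> CREATE INDEX countries_name_ix
  2  ON countries (country_name) ONLINE;

SQL> ALTER INDEX countries_name_ix REBUILD ONLINE;

SQL> ALTER INDEX countries_name_ix COALESCE;

SQL> ALTER TABLE countries MOVE ONLINE
  2  OVERFLOW TABLESPACE tbs_2;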

Performance and Scalability Enhancements 1-6


Skip Scanning of Indexes

• Improved index scan by non-prefixed columns


– Enabling an index scan through a composite
index when the prefix column is an unknown
value.
• Skip index scanning supports:
– Cluster indexes
– Descending scans
– Connect-by
• Skip index scanning does not support reverse key
indexes

Skip Scanning Definition


In prior releases, a composite index would only be used if the index prefix
(leading) column was included in the predicate of the statement. With Oracle9i,
the optimizer can use a composite index even if the prefix column value is not
known. The optimizer uses an algorithm called skip scanning to retrieve
ROWIDs for values that do not use the prefix column.
Skip scans reduce the need to add an index to support occasional queries which
do not reference the prefix column of an existing index. This can be useful when
high levels of DML activity would be degraded by the existence of too many
indexes used to support infrequent queries. The algorithm is also valuable in
cases where there are no clear advantages as to which column to use as the
prefix column in a composite index. The prefix column should be the most
discriminating, but also the most frequently referenced in queries. Sometimes,
these two requirements are met by two different columns in a composite index,
forcing a compromise or the use of multiple indexes.
During a skip scan, the B*-tree is probed for each distinct value in the prefix
column. Under each prefix column value, the normal search algorithm takes
over. The result is a series of searches through subsets of the index, each of
which appears to result from a query using a specific value of the prefix column.
However, with the skip scan, the value of the prefix column in each subset is
obtained from the initial index probe rather than from the command predicate.
In addition to standard B*-tree indexes, the optimizer can use skip scans for
processing
• cluster indexes
• descending scans
• CONNECT BY clauses.
Reverse key indexes do not support the skip scan algorithm.

Performance and Scalability Enhancements 1-7


Skip Scanning Example

Assume a composite index on NLS_LANGUAGE and


NLS_TERRITORY. The available combinations are:
NLS_LANGUAGE NLS_TERRITORY
ENGLISH AMERICA
ENGLISH CANADA
ENGLISH UNITED KINGDOM
FRENCH CANADA
FRENCH FRANCE
FRENCH SWITZERLAND
GERMAN GERMANY
GERMAN SWITZERLAND
PORTUGUESE BRAZIL
PORTUGUESE PORTUGAL

Skip Scanning Example


In the example, a composite index exists on the two columns,
NLS_LANGUAGE and NLS_TERRITORY, with the NLS_LANGUAGE as
the prefix column. The data values stored in the underlying table result in the
combinations of values shown in the table. Each combination may occur
multiple times in the table and the resulting index.
In previous releases without the skip scan algorithm, a query on a value in the
NLS_TERRITORY column would be forced to execute a full table scan. If such
a query were infrequent, this might be acceptable. If the query were more
common, then you might have to add a new index on the NLS_TERRITORY
column alone. This new index, however, could negatively impact the
performance of DML on the table. The skip scan solution provides an
improvement without the need for the second index. While not as fast as a direct
index lookup, the skip scan algorithm is faster than a full table scan in cases
where the number of distinct values in the prefix column is relatively low.
The optimizer uses statistics to determine whether a skip scan retrieval would be
more efficient than a full table scan, or other possible retrieval paths, when
parsing SQL statements.
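
For instance, with a composite index whose prefix column is NLS_LANGUAGE, a
query such as the following can be satisfied by a skip scan rather than a full
table scan. The table and index names are illustrative:

SQL> CREATE INDEX territories_lang_terr_ix
  2  ON territories (nls_language, nls_territory);

SQL> SELECT *
  2  FROM territories
  3  WHERE nls_territory = 'SWITZERLAND';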

Performance and Scalability Enhancements 1-8


Skip Scanning Example

Level 1 (root):     < FRENCH,FRANCE

Level 2, Branch 1:  < ENGLISH,CANADA
Level 2, Branch 2:  < GERMAN,SWITZERLAND

Level 3, Branch 1:  ENGLISH,AMERICA
Level 3, Branch 2:  ENGLISH,CANADA
                    ENGLISH,UNITED KINGDOM
                    FRENCH,CANADA
Level 3, Branch 3:  FRENCH,FRANCE
                    FRENCH,SWITZERLAND
                    GERMAN,GERMANY
Level 3, Branch 4:  GERMAN,SWITZERLAND
                    PORTUGUESE,BRAZIL
                    PORTUGUESE,PORTUGAL

Skip Scanning Example


Assuming the optimizer chooses a skip scan to locate the rows containing the
value of 'SWITZERLAND' for the NLS_TERRITORY column, the skip scan
algorithm searches for the distinct values of the prefix column. The steps are:
1. From the root, the search begins at Level 2. The algorithm finds two values,
ENGLISH (Branch 1) and GERMAN (Branch 2).
2. Following the ENGLISH link to Level 3, Branch 1, the algorithm finds that
the branch contains only ENGLISH, AMERICA. There are no values less
than ENGLISH in the prefix column in this branch, and obviously
ENGLISH, SWITZERLAND cannot be in this branch because there are no
values in the second column greater than AMERICA. So the skip scan ignores
(skips) Level 3, Branch 1.
3. Having skipped Level 3, Branch 1 in Step 2, the search continues with
Level 3, Branch 2. However, from Level 1, this branch cannot contain the
value FRENCH, SWITZERLAND because that would be greater than
FRENCH, FRANCE. Since FRENCH is the only other value in this branch,
it too is skipped.
4. The algorithm proceeds with the search down Branch 2 of Level 2. At Level
3, Branch 3, it finds the value FRENCH, SWITZERLAND. This is the
required value, so the set of data under this value is scanned; this is the first
subset of the index which is scanned beyond a branch level.
5. Having found all the entries under FRENCH, SWITZERLAND, and finding
only GERMAN, GERMANY remaining in the current branch, the algorithm
continues with Level 3, Branch 4. The subset of GERMAN,
SWITZERLAND is located and fully scanned, but no other values in this
branch meet the criteria, so the scan is complete.

Performance and Scalability Enhancements 1-9


Cost Based Optimizer Improvements

The cost based optimizer (CBO) is enhanced


to improve the accuracy of its cost and size estimates:
• Estimated CPU usage
– Cost is used in optimizer decisions
– Combined with cost of disk access
• Estimated network usage for query servers running
on different nodes
• The effect of caching with nested-loop joins
• Considers index pre-fetching
• Includes the cost of non-merged subqueries or
views

Performance and Scalability Enhancements 1-10


Cost Based Optimizer Improvements

• You can choose between two models


– The old model does not use CPU performance
statistics
– The new model uses the CPU statistics
• Set the model for the instance with the parameter
OPTIMIZER_SIZE_AND_COST
– Allowed values are OLD, NEW
– Default value OLD
• You can also change the model by issuing an
ALTER SYSTEM or an ALTER SESSION command

Cost Based Optimizer Improvements


You have the option to use the older model of the cost based optimizer. An
initialization parameter, OPTIMIZER_SIZE_AND_COST, specifies which cost
based optimizer model is desired. The OLD option invokes the old model which
does not include CPU speed statistics when choosing an execution plan. The
NEW option invokes the new model which includes any available CPU
statistics when optimizing execution plans.
For backward compatibility, the parameter defaults to OLD, using the cost
based optimizer with no CPU calculations. You can change this by issuing an
ALTER SYSTEM or an ALTER SESSION command, or by including the
parameter in the initialization parameter file and rebooting the database.
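The model could therefore be switched as shown below. The parameter name
and values are as given in this lesson:

SQL> ALTER SESSION SET optimizer_size_and_cost = NEW;

SQL> ALTER SYSTEM SET optimizer_size_and_cost = NEW;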

Performance and Scalability Enhancements 1-11


Summary

In this lesson, you should have learned about


improvements to:
• Index organized tables
• Skip scanning of indexes
• The cost based optimizer

Performance and Scalability Enhancements 1-12


System Improvements

Performance and Scalability Enhancements 2-1


Objectives

After this lesson, you should understand changes to:


• Literal replacement
• Improvements to redo log writer
• Redo log buffer allocation latch
• Other performance topics

Performance and Scalability Enhancements 2-2


Metadata Unload

• It is difficult to get metadata out of an Oracle8i


database
• Most popular methods query the data dictionary
• This involves many select statements

Metadata Unload
There are essentially three methods to extract metadata from the data dictionary
in Oracle8i.
The first (and most popular) method involves querying the data dictionary using
SQL statements. This is problematic due to high maintenance costs created by
new object definitions and DDL changes. Also in many cases more than one
select statement has to be written. This increases network traffic.
The second method is to run Export with ROWS=N and then to run Import with
SHOW=Y. This produces a text file from the binary dump file that can be edited
to create SQL scripts. This method can require substantial editing and is not
considered a convenient technique.
The third method involves the OCIDescribeAny interface. This is not widely
used due to drawbacks such as being incapable of retrieving a complete set of
metadata about all database objects. It also does not scale well.

Performance and Scalability Enhancements 2-3


Metadata Unload in Oracle9i

SQL> SELECT dbms_metadata.get_ddl


('TABLE', t.table_name)
2 FROM user_tables t;

• The query can have a WHERE clause


• Output is creation DDL; DBMS_METADATA can also
return XML
• Combine with a SPOOL statement to capture the
output in a file

Metadata Unload (continued)


To address the metadata unload problem, Oracle9i has a new option. This form
of extraction offers many advantages over using Export. Using Export gives
the user the following options: entire database, a user's schema, or a table. To
extract many tables it is necessary to perform multiple export commands, each
exporting a single object.
The above SELECT statement provides, in combination with a SPOOL
statement, an editable output of the required objects.
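
A minimal SQL*Plus session sketching this approach; the SET LONG setting
avoids truncating the returned output, and the WHERE clause mirrors the note
that the query can be restricted (table name pattern is illustrative):

SQL> SET LONG 100000
SQL> SPOOL table_ddl.sql
SQL> SELECT dbms_metadata.get_ddl
  2  ('TABLE', t.table_name)
  3  FROM user_tables t
  4  WHERE t.table_name LIKE 'COUNTR%';
SQL> SPOOL OFF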

Performance and Scalability Enhancements 2-4


Metadata Unload: Advantages

• Any combination of objects can be extracted


depending on the result of the select statement.
• Running spool while extracting will lead to a file that
can be edited immediately.

Performance and Scalability Enhancements 2-5


Safe and Unsafe Literal Replacement

• Literal replacement occurs when the optimizer replaces


a bind variable with a fixed value (literal) before
developing the execution plan for the statement
• Safe literal replacement implies that any value used for
literal replacement will cause the optimizer to develop
the same execution plan
• Unsafe literal replacement implies that one value used
for literal replacement could result in a different
execution than the choice of a different literal value

Literal Replacement
The term literal replacement refers to an operation performed by the optimizer
to reduce the overhead of parsing SQL statements containing bind variables.
Rather than executing a bind step each time the statement is executed, the
optimizer substitutes a “real” value (the literal in literal replacement) for the
bind variable when the statement is first parsed. The execution plan developed
using the substituted value is used for all subsequent executions of the
statement.
Safe Literal Replacement
Consider a query like this one, where ID is the primary key:
SELECT * from T where id = :variable;
The substitution of any value would produce exactly the same execution plan. It
would, therefore, be safe for the optimizer to use literal replacement before
generating its execution plan for the statement.
Unsafe Literal Replacement
The two following SQL statements could produce different execution plans,
depending on the distribution of values for ID, the availability of statistics,
including histograms, and the available access paths:
SELECT * from T where id <= 20;
SELECT * from T where id <= 5000;
If such a statement were written with a bind variable, the optimizer could not select
a good value for literal replacement. Depending on the value chosen, the
execution plan could be different. Only certain bind variable values would
benefit from the execution derived from a specific literal replacement value.
Therefore, this would be an unsafe literal replacement.

Performance and Scalability Enhancements 2-6


Literal Replacement

• CURSOR_SHARING parameter values:


– FORCE
– SIMILAR (new in Oracle9i)
– EXACT (default)
• CURSOR_SHARING can be changed using:
– ALTER SYSTEM
– ALTER SESSION
– INIT.ORA

Literal Replacement
The value of the initialization parameter CURSOR_SHARING determines how
the optimizer will process statements with bind variables.
• EXACT: Literal replacement disabled completely
• FORCE: Causes sharing for all literals
• SIMILAR: Causes sharing for safe literals only
In previous releases, only the EXACT and FORCE options were available to
you. The SIMILAR option is new in Oracle9i. It causes the optimizer to
examine the statement to ensure that replacement occurs only for safe literals.
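
For example, to enable replacement of safe literals only for the current
session, or to force sharing for all literals instance-wide:

SQL> ALTER SESSION SET cursor_sharing = SIMILAR;

SQL> ALTER SYSTEM SET cursor_sharing = FORCE;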

Performance and Scalability Enhancements 2-7


Oracle8i Latch Algorithm

• If a process failed to get a latch, it spun for a fixed


period of time
– After spinning, the process tried to get the latch again
– This continued until the process got the latch
• Limitations:
– A free latch waited for the process to complete spinning
– Time was wasted waiting for the process
– Another process could get the latch before the first process
had completed its spinning cycle

Spinning and Sleeps


In Oracle8i, after a set number of spins (spin_count) the process was put to sleep
for a period of time. The process then woke up, tested the latch, and if
unsuccessful, returned to a sleep state. Each time the process returned to a sleep
state, the period of the sleep became longer. There was overhead due to the
context switches in changing from a sleep state to an awake state, and back to
sleep again.

Performance and Scalability Enhancements 2-8


Oracle9i Latch Improvements

Latch algorithm for Oracle9i


• Try to get the latch
• If not successful, get added to a wait list
• Sleep on the wait list
• When latch is free the first process on the wait list is
posted

Latch Improvements
On systems that experience little or no latch contention, this advancement
offers no significant improvement. The benefit appears on systems
that currently experience high contention for latches.

Performance and Scalability Enhancements 2-9


Redo Buffer Latches

The algorithm for writing into the redo log buffer


has been improved
• Redo buffer latches held for less time
• Less contention for these latches
• Overall database throughput is improved, resulting in
better performance

Redo Buffer Latches


In Oracle9i, Redo Allocation Latches, used to allocate space in the redo buffer,
are held for less time than in previous releases. Due to the decrease in the time
the latch is held, there is a decrease in the contention for the latch. This decrease
in contention improves database performance.

Performance and Scalability Enhancements 2-10


Measuring Latch Contention

• If you calculate the ratio between the number of misses


and the number of gets:
– A ratio of 95% misses could have resulted from a small wait
time
– A ratio of 1% misses could have resulted from a long wait time
• Solution:
– New CWAIT_TIME column in V$LATCH measures cumulative
wait time

Wait Time
Wait time is defined as the amount of time a thread of execution has to wait
before it can acquire a latch. This is the elapsed time starting from the time the
first attempt to obtain the latch failed to the time the latch was actually acquired.
Contention
Contention is the ratio of number of misses to the number of latch gets (stated as
a percentage) on a particular latch. For example, 95% contention would mean
that 95 times out of every 100 latch gets we encountered a miss.
However, this percentage does not give information on how long the wait was.
Therefore, although there might have been only a couple of misses (which
would look good on contention), it might be that these few are responsible for
the majority of the wait time.
Cumulative Wait Time
Cumulative wait time stands for the cumulative sum of all the wait or hold times
incurred by all the threads of execution.
CWAIT_TIME is the time a thread of execution actually had to wait from the
time it tried to acquire a latch to the time it actually got the latch. This time
would include:
• The spin and sleep times, if any
• The overhead of context switches that might have occurred due to sleeps or
due to OS scheduler time slicing
• Page faults, interrupts, and other user and system idiosyncrasies that
account for overall system performance
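
A query in the style of this lesson, using the new column described above to
relate miss counts to actual wait time:

SQL> SELECT name, gets, misses, cwait_time
  2  FROM v$latch
  3  WHERE misses > 0
  4  ORDER BY cwait_time DESC;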

Performance and Scalability Enhancements 2-11


Performance Topics in Other Modules

“Manageability” focus area:


• Dynamic SGA
• Multiple block sizes
• Memory management
• System managed undo
• Persistent INIT.ORA parameters
• Self tuning direct I/O
• Bitmapped managed segments

Performance and Scalability Enhancements 2-12


Performance Topics in Other Modules

“Business Intelligence” focus area:


• Materialized views
• Grouping sets
• SQL*Loader
• External tables
• Inserts into multiple tables
• The MERGE command
• New OLAP functions

Performance and Scalability Enhancements 2-13


Performance Topics in Other Modules

“Development Platform” focus area:


• Java performance
• PL/SQL optimization and native compilation
• Better native compilation
• JDBC and SQLJ performance improvements

Performance and Scalability Enhancements 2-14


Summary

In this lesson, you should have learned about


the improvements to:
• Unloading metadata
• Replacement of literal values by bind variables
• Latches for writing into, and out of the redo log buffer

Performance and Scalability Enhancements 2-15


Performance and Scalability Enhancements 2-16
Scalable Session State Management

Performance and Scalability Enhancements 3-1


Objectives

After this lesson, you should be able to:


• Implement connection pooling with OCI
• Invoke dedicated external procedure agents
• Use transactional external procedure agents
• Manage multi-threaded heterogeneous service
agents
• Describe the Virtual Interface Protocol Adapter
• List the benefits of CORE library improvements

Performance and Scalability Enhancements 3-2


OCI Connection Pooling: Benefits

• Provides cost-effective and scalable connection


pooling
• Requires no additional database processes
• Removes the need for application servers to create
connection pooling solutions
• Is more efficient than user level connection pooling
• Requires users to learn very few API calls
• Provides simplified logon step using OCI

OCI Connection Pooling: Benefits


Dedicated connections for each user thread are very expensive and are not scalable. They
also increase the number of incoming connections and server processes required on the
database instance.
Connection pooling solves these problems by removing the requirement for additional
processes. This reduces the cost of creating and maintaining the connections which, in turn,
allows more scalability because fewer resources are needed.
Additionally, connection pooling with OCI removes the need for application servers to
create connection pooling solutions. Connection pooling provided and maintained by OCI
is more efficient than user level connection pooling. Users have very few new API calls to
learn in order to code OCI connection pooling. Logging on is a simple step when
connecting through OCI connection pooling.

Performance and Scalability Enhancements 3-3


OCI Connection Pooling: Features

• Allows the creation of a pool of database


connections for an application
• Provides an interface for the application to specify
the optimal, increment, and maximum number of
connections in the pool
• Provides a mechanism for any OCI call to pick up a
server connection from a connection pool
• Manages the pool of server connections
transparently

OCI Connection Pooling: Features


OCI Connection Pooling allows the creation of a pool of database connections
to support an application’s requests for data. It is primarily intended for middle
tier products developed by Oracle partner companies.
A sample usage of this feature would be in a Web Application Server
connected to a back-end Oracle database. A web application server gets several
concurrent requests for data from the database server. Typically, it would have
to explicitly manage the connections to the database. However, by using this
functionality, it can leave that task to OCI. This feature will be useful only for
multi-threaded applications. The application can create a pool (or set of pools)
per environment during initialization. Each thread will have its own service
handle, session handle, and server handle.

Performance and Scalability Enhancements 3-4


Usage Model

[Diagram: In the middle tier, incoming threads t1, t2, and t3 in the
application layer hold service handles s1, s2, and s3, which draw connections
c1 and c2 from Pool 1 in the OCI layer to reach database Server 1. Threads t4,
t5, and t6 hold service handles s4, s5, and s6, which draw connections c3 and
c4 from Pool 2 to reach Server 2.]

Usage Model
The pool creation and the association of the server handles to the pool handle are done in the
main thread. The threads then start using the connections in the pool via their respective
server handles, so the application need not bind each thread to a dedicated server handle
(connection), and fewer connections are needed. The connections are dynamically allotted
from the pool, balancing the load properly and thereby increasing connection utilization.
The OCI connection pools provide more control over the application than network
connection pooling. However, connection pooling does not work with transparent
application failover (TAF).

Performance and Scalability Enhancements 3-5


Steps Used for OCI Connection Pooling

Allocate pool handle Free pool handle

Create connection pool Destroy connection pool

Connection pooling logons Server detaches

Call executions
Multiple threads

Performance and Scalability Enhancements 3-6


New OCI APIs and Calls

• New handle type, OCI_HTYPE_CPOOL in


OCIHandleAlloc()
• New APIs
– OCIConnectionPoolCreate()
– OCIConnectionPoolDestroy()
• OCI_POOL mode added to OCIServerAttach()
• New attributes for the Server handle
– OCI_ATTR_CONN_TIMEOUT: Connection idle time
– OCI_ATTR_CONN_INCR_DELAY: Minimum delay
between two successive increments
– OCI_ATTR_CONN_NOWAIT: Do not block if
connection not available, flag error

Example
OCIHandleAlloc (envhp, (dvoid **)&poolhp,
OCI_HTYPE_CPOOL, (size_t)0, (dvoid **)0);
OCIConnectionPoolCreate ((dvoid *)envhp,
(dvoid *)errhp, (dvoid *)poolhp, (text *)database,
strlen ((char *)database), conMin, conMax, conIncr,
pusername, strlen ((char *)pusername), ppassword,
strlen ((char *)ppassword), &dblink, &dblink_len,
OCI_DEFAULT);
OCIServerAttach (svrhp[i], errhp, (text *)dblink,
(sb4)dblink_len, OCI_POOL);
/* The server handle is a logical connection, but
it works exactly the same as a server handle
obtained without using a connection pool. */
/* The pool attributes (min, max, incr) can be
changed dynamically by using the
OCI_CPOOL_REINITIALIZE mode with the above pool
handle in the OCIConnectionPoolCreate call. */
OCIConnectionPoolDestroy (poolhp, errhp,
OCI_DEFAULT);
OCIHandleFree (poolhp, OCI_HTYPE_CPOOL);

Performance and Scalability Enhancements 3-7


Usage Guidelines

• Apply the following guidelines in connection


pooling mode
• OCI_ATTR_NONBLOCKING_MODE must not be set on
the virtual server handle
• Reconfiguration of the pool is expensive so you
should not indiscriminately change the following
pool attributes:
– OCI_ATTR_CONN_MIN
– OCI_ATTR_CONN_MAX
– OCI_ATTR_CONN_INCR
• The dblink returned from
OCIConnectionPoolCreate is an internally
generated OCI string which should be used with
OCIServerAttach

Usage Guidelines
A few restrictions apply in connection pooling mode. An error is generated if these
conditions are not followed:
• Between OCIStmtExecute() and OCIStmtFetch() the service context
handle should not be changed or deallocated. But, it is valid to set the
OCI_ATTR_SERVER attribute to another server handle belonging to the same pool.
• If OCI_ATTR_NONBLOCKING_MODE is set on the pool handle, an error is
flagged.
• The service context handle should not be changed/deallocated between the
OCIDescribeAny() and OCIAttrGet() calls.
• Between continuing (piece-wise) calls, the attributes should not be changed for the
service context handle. The service context handle must be associated with the same
server handle and same session handle between these calls.
• The following pool attributes cannot be set at run-time:
– OCI_ATTR_CONN_MIN
– OCI_ATTR_CONN_MAX
– OCI_ATTR_CONN_INCR

Performance and Scalability Enhancements 3-8


Dedicated External Procedure Agents

• Dedicated external agents


– Allow distributed external procedure transactions
– Improve scalability and robustness
• Dedicated external agents allow
– PL/SQL to specify alternate or preferred agent
– Requested agent can be on any machine
– Different agents, on different machines, run the same
procedure concurrently
• External procedures are run through EXTPROC by default
– Runs only on the server
– Communicates between the agent and the database
– Uses a hard-wired alias so no TNS information has to be
passed to the server

Dedicated External Procedure Agents


Dedicated external procedure agents allow multiple sessions to run concurrently
without each one requiring its own process. This reduces the load on the
operating system, ultimately allowing more sessions.
Separate agents also increase system robustness. For example, company A is
developing a cartridge with external procedures. Also, Company B is
developing another cartridge with external procedures. The developers in
Company A know that, while they themselves write great robust code that
never fails, the programmers in Company B are famous for writing code that
almost always aborts and brings down the operating system. The programmers
in Company A, therefore, want their cartridge to run in a separate agent from
anyone else (in particular, Company B), so that their code will keep on running
even when other developers' code fails.

Performance and Scalability Enhancements 3-9


Uses for Dedicated External Agents

• Group for robustness, for example, “trusted”


cartridge applications
• Distributed transactional functionality
– MQ series
– APPC

Performance and Scalability Enhancements 3-10


Functionality

• Environment variable specifications in CREATE


LIBRARY statements allow agents to find external
procedures from library instances on different
machines
• A string passed to the CREATE LIBRARY
statement identifies the agent
• CREATE PROCEDURE/FUNCTION uses a string for
the agent name to be passed at run time
• Agent names are database link names

Functionality
The path name in the CREATE LIBRARY statement supports environment
variable specifications, to be expanded at runtime by the agent. The need for
this can be seen by considering the situation in which agents on multiple
machines are used to run external procedures from multiple instances of a
library. The libraries on the different machines can be in different absolute
directories, but as long as they are in the same directory relative to a common
root directory (specified by an environment variable), the agents can correctly
access the external procedures.
Agent information is passed in a string to the CREATE LIBRARY statement to
specify an agent. A string is passed to the CREATE PROCEDURE and
CREATE FUNCTION statements specifying the name of an argument in which
an agent name will be passed at runtime.
The name of the agent is actually the name of a database link. Since it is passed
as an argument rather than concatenated to the function name, the problem of
local versus remote processing is avoided.

Performance and Scalability Enhancements 3-11


Example

SQL> CREATE OR REPLACE DATABASE LINK


2 agent_link
3 USING 'agent_tns_alias';

SQL> CREATE OR REPLACE LIBRARY plib1 IS
2 '${EP_LIB_HOME}/plib1.so'
3 AGENT 'agent_link';

Example
In this example, the agent database link, agent_link, will run any external procedure in the
library plib1.
If no agent is specified, then the default agent (EXTPROC) is used. This ensures backward
compatibility with existing external procedure applications.
Upon first invocation of any procedure in the library, if the agent is not already running, it is
launched.
If the specified agent does not exist, an error is raised. Dedicated agent functionality can be
important enough (transaction support, for example) that silently falling back to a default
agent such as EXTPROC would not, in general, be acceptable.
The file specification, ${EP_LIB_HOME}/plib1.so, includes the environment variable
${EP_LIB_HOME}. Although this format uses UNIX syntax, you can use it on any platform. The
Oracle code expands the environment variable contained in the curly braces prior to passing it to
the operating system. This makes it possible to pass agent arguments to external procedures which
must run on different platforms, but with library security that is controlled at the server. The same
CREATE LIBRARY command could therefore be coded on Windows
SQL> CREATE OR REPLACE LIBRARY plib1 IS
2 '${EP_LIB_HOME}\plib1.so'
3 AGENT 'agent_link';
The only difference between the UNIX example, shown on the slide, and this Windows example,
is the direction of the slash between the environment variable and the library name.

Performance and Scalability Enhancements 3-12


Library Level Procedure Creation

SQL> CREATE LIBRARY plib1 IS


2 '${EP_LIB_HOME}/plib1.so'
3 AGENT 'agent1_link';

SQL> CREATE LIBRARY plib2 IS
2 '${EP_LIB_HOME}/plib2.so'
3 AGENT 'agent2_link';

SQL> CREATE PROCEDURE sum_rates(…) IS
2 LANGUAGE LIBRARY plib1;

SQL> CREATE PROCEDURE reset_rate(…) IS
2 LANGUAGE LIBRARY plib2;

Library Level Procedure Creation


The example is for a situation where the same external procedure needs to be invoked from
two different agents. The code shows the creation and references to the libraries required to
support these invocations.
The following PL/SQL block would invoke both external procedures created in the
example:
BEGIN
sum_rates(…);
reset_rate(…);
END;
/

Performance and Scalability Enhancements 3-13


Run Time Procedure Creation

SQL> CREATE LIBRARY plib1 IS


2 '${EP_LIB_HOME}/plib1.so';

SQL> CREATE LIBRARY plib2 IS
2 '${EP_LIB_HOME}/plib2.so';

SQL> CREATE PROCEDURE sum_rates(…) IS
2 LANGUAGE LIBRARY plib1
3 AGENT IN 'agent1_link';

SQL> CREATE PROCEDURE reset_rate(…) IS
2 LANGUAGE LIBRARY plib2
3 AGENT IN 'agent2_link';

Run Time Procedure Creation


This example shows an alternate run time solution for the same procedures shown in the
previous example. A routine to invoke these procedures could be
BEGIN
sum_rates (…, 'agent1_link');
reset_rate (…, 'agent2_link');
END;
/

Performance and Scalability Enhancements 3-14


Example

SQL> CREATE OR REPLACE PROCEDURE eproc1


2 (p1 VARCHAR2, p2 VARCHAR2) IS
3 LANGUAGE C NAME 'ep1'
4 LIBRARY lib AGENT IN(p2)
5 WITH CONTEXT PARAMETERS (context, p1, p2);

SQL> DECLARE
2 v_user VARCHAR2(30);
3 BEGIN
4 v_user := 'hr/hr';
5 eproc1(v_user, 'agent1');
6 eproc1(v_user, 'agent2');
7 END;
8 /

Example
The agent with the unique identifier specified in the second parameter is used to
run the external procedure.
Upon first invocation of this procedure, if the agent is not already running, it
will be launched.
If the agent does not exist, the procedure fails with an error.
The data type of the named agent argument is not predetermined. PL/SQL
checks only the syntactic validity, but back-end code checks that the argument
is allowable as an agent specification.
If the named agent argument is NULL, the default agent EXTPROC is invoked.
If the named agent argument is not in the formal argument list, an error is
generated at creation time.
The agent named in the agent argument will take precedence over the agent
named in the create library statement for the procedure’s specified library.
If no agent argument is specified, then the agent associated with the procedure’s
library is used, which, if not specified, is the default EXTPROC (compatible
with the implementation in earlier releases).
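The fallback behavior described above can be sketched with the eproc1 procedure from the
example; the literal argument value here is illustrative only:

SQL> BEGIN
2 eproc1('some_text', NULL); -- NULL agent argument: runs through the default EXTPROC agent
3 END;
4 /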

Performance and Scalability Enhancements 3-15


Transactional External Procedure Agents

• Transaction support for external procedures which


change state of non-Oracle databases
– Avoid inconsistent distributed transactions that
occur when non-Oracle systems and Oracle
databases roll back statements without each
other’s knowledge
– Provide notification of transaction events to
non-Oracle systems
• Replace distributed external tables
• Implemented with XA and supports MQ Series and
APPC

Transactional External Procedure Agents


Transaction Support for External Procedures will expand the functionality of
external procedures to a level at which they can replace procedural gateways.
Each transaction system (e.g. MQ Series or APPC) must have a transaction
server (XA) library, which will be linked with a dedicated agent.
For this release, any Oracle-internal group using this functionality must build
their own Dedicated Agent, with an XA interface to their transactional library
linked in. As a possible future enhancement, any agent can be transactional or
not, go to multiple transaction systems, and users will be able to register their
own transactional routines.
DBAs will configure Transaction Support for External Procedures using
configuration parameters. Once the configuration parameters are established, it
will be very easy to employ them in PL/SQL.
Application developers will code external procedures (say, in C), and code a
small bit of linkage in PL/SQL. The linkage will relate to the configuration
parameters set up by the DBA.
Note: This feature will not be exposed to end-users, at least initially.

Performance and Scalability Enhancements 3-16


Security

• Security for transactional external procedures is


required to prevent database owners from granting
themselves privileges to access them
• The logon for transactional external procedures is
through the database link
• Any number of database users connected to Oracle
will become one OS user executing C code, but will
end up as multiple users at the non-Oracle system
• To prevent access to all libraries on a non-Oracle
system, each access through a database link is
limited to one library per session

Security
When all external procedures were processed by an agent on the Oracle database server,
security was relatively simple. In the case of transactional external procedures, they can be
invoked from a different machine. Although the machine on which the listener and the
agent run can be highly secure, it would be possible to connect from a very insecure
machine to the agent and execute any of the external procedures. For instance, anyone
could install a database on their laptop, grant themselves all the necessary privileges and
then invoke any external procedure.
Therefore transaction support for external procedures requires a logon step.
The transaction server (XA) code that implements the transactional functionality logs on to
the non-Oracle system that is accessed by the external procedures. Since an external
procedure requires a database link to identify which agent is used to execute the external
procedures, it uses either the username/password that was used to connect to Oracle, or the
username/password that was specified in the CONNECT TO ... IDENTIFIED BY clause of
the CREATE DATABASE LINK statement.
Thus, although transactional external procedures can be invoked from a different (and
possibly insecure) machine, user verification is employed on the system that is accessed by
the external procedures. M database users connected to Oracle will become one OS user
executing C code, but will end up as N users at the non-Oracle system.
Another issue is that users granted access to a library on a non-Oracle system must be
prevented from invoking C routines in other libraries. This is achieved by allowing access
to only one library per database link per session (each database link gets its own transaction
branch and has its own logon step).
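As a sketch of the logon path described above (the link name, username, password, and TNS
alias are all illustrative), explicit credentials for the non-Oracle system can be fixed in the
database link, so that the XA logon does not depend on each user's Oracle password:

SQL> CREATE DATABASE LINK nos_link
2 CONNECT TO nos_user3 IDENTIFIED BY nos_pwd
3 USING 'nos_agent_alias';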

Performance and Scalability Enhancements 3-17


Sample Configuration
[Diagram: database users db_user1, db_user2, and db_user3 are logged
onto Oracle. Through a listener started by os_user1, they reach a
transactional agent, which can load only one library; of Lib_A, Lib_X,
Lib_Y, and Lib_Z, it has loaded Lib_A. Via Lib_A the database users
access the non-Oracle system (NOS) as the "connect as" users NOS_user3
and NOS_user4, while NOS_user1 and NOS_user2 are logged onto the NOS
by other means.]

Sample Configuration
In the sample configuration shown on the slide, multiple users (db_user1,
db_user2, and db_user3) are logged onto Oracle. Using the listener,
which has been started by os_user1, the db_users can connect to the
AGENT (also running on the system as os_user1). The (single allowed)
library LIB_A, loaded by the AGENT, allows the db_users to access the Non-
Oracle System (NOS), as NOS_user3 and NOS_user4 ("connect as"
users). NOS_user1 and NOS_user2 are logged onto the NOS via some other
means.

Performance and Scalability Enhancements 3-18


Library Creation Example

CREATE LIBRARY eproclib AS


'${ORACLE_HOME}/lib/eproc_lib.so'
AGENT 'txn_enabled_agent_dblink'
TRANSACTIONAL;

Library Creation Example


The dblink string is a database link that specifies TNS connection information
for a dedicated (and presumably transactional) agent.
Upon first invocation of any external procedure declared to reside in
eproclib, the transaction manager is notified that it is to add the named
agent to its list of whom to notify for transaction events.
If a library is declared to be transactional, but the agent does not have
transaction support built in, a runtime error is generated the first time that a call
is made to an external procedure in the library.
If the agent has transaction support, but the TRANSACTIONAL keyword is
not present in the CREATE LIBRARY statement, the transaction manager is not
notified upon any call to any procedure in the library.

Performance and Scalability Enhancements 3-19


Oracle Shared Server Improvements

• Oracle Net Services connection establishment:


direct handoff
• RDBMS/Network event model
• Oracle Net Services TCP adapter rewrite for
Microsoft’s OS

Performance and Scalability Enhancements 3-20


Connection Establishment: Direct Handoff

[Diagram: a client process sends a connection request to the listener
on the database server (1); the listener hands the request to a
dispatcher (2); the client then communicates with the dispatcher (3),
which works with a pool of shared server processes against the
Oracle9i database.]

1. Listener receives a client connection request


2. Listener hands connect request to dispatcher
3. Client communicates to the dispatcher

Performance and Scalability Enhancements 3-21


RDBMS/Network Event Model

• Improvement in the event model: no polling


between database and network
• Reduce CPU consumption and latency
• Handle network sessions more efficiently in
dispatcher, listener, and connection manager

Performance and Scalability Enhancements 3-22


Oracle Net Services TCP Adapter Rewrite for
Windows
• Using Microsoft’s proprietary socket API instead of
the standard socket API
• Significantly increase concurrent network
connections on Windows platform

Performance and Scalability Enhancements 3-23


Multi-Threaded HS Agent Concepts

• Agents in the Oracle8 Heterogeneous Services


(HS) architecture
– Started up on a per user-session and per
database link basis
– Consumed a large amount of system resources
• In Oracle9i, the HS agent is multi-threaded
– More efficient use of resources
– System supports a larger number of concurrent
user-sessions

Multi-Threaded HS Agent
In the Oracle8 Heterogeneous Services architecture, agents are started up on a
per user-session and per database link basis. When a user-session tries to access
a non-Oracle system via a particular database link, an agent process is started
up dedicated to that user-session and database link. The agent process
terminates only when the user-session ends or when the database link is closed.
Separate agent processes will be started when the same user-session uses two
different database links to connect to the same non-Oracle system or when two
different user sessions use the same database link to access the same non-
Oracle system.
In the case of the Oracle server, this problem can be solved by starting the
server in multi-threaded mode. The Oracle Shared Server architecture assumes
that even when there are several thousand user-sessions currently open, only a
small percentage of these connections will be active at any given time. The
server in Oracle Shared Server mode has a pool of shared server processes (the
number of these shared server processes is usually considerably less than the
number of user-sessions) and the tasks requested by the user-sessions are put
on a queue and are picked up by the first available shared server process.
However, even if the server is running in Oracle Shared Server mode, the agent
architecture remains one process per user-session. So, any advantages obtained
using the Oracle Shared Server mode could potentially be negated if a large
number of user-sessions use HS.
To solve this problem, the Oracle9i HS agent has multi-threaded abilities. The
architecture is similar to the Oracle Shared Server architecture, with a set of
dispatcher threads to receive requests from Oracle server processes and return
results to them, and a pool of task threads to process the requests and compute
results.

Performance and Scalability Enhancements 3-24


Multi-Threaded HS Agent

[Diagram: two Oracle servers connect through HS to two dispatcher
threads (Dispatcher1 and Dispatcher2) inside a single HS agent; the
dispatchers hand requests to task threads (Thread1, Thread2, Thread3),
which connect to the non-Oracle system.]

Multi-Threaded HS Agent
Each request issued by a user-session is shown with a different type of line.
Note the following:
• All requests from a user-session go through the same dispatcher thread
but they can be serviced by different task threads
• Several task threads could use the same connection to the non-Oracle
system

Performance and Scalability Enhancements 3-25


Multi-Threaded HS Agent Concepts

• There are three kinds of threads:


– a single monitor thread
– several dispatcher threads
– several task threads
• The monitor thread is responsible for:
– maintaining communication with the listener
– monitoring the load on the process
– starting and stopping threads when required
• The dispatcher threads communicate with the Oracle
server and pass task requests onto the task threads
• The task threads handle requests from the Oracle
processes

Multi-Threaded HS Agent Concepts


These three thread types roughly correspond to the Oracle Shared Server’s PMON,
dispatcher and shared server processes respectively.

Performance and Scalability Enhancements 3-26


Multi-Threaded HS Agent Features

• Pre-Oracle9i architecture is supported


• The agent administrator can start as many agent
processes as required
• Each agent process is configurable
• Agent processes can be started and stopped
through the listener
• Existing memory management service routines are
made thread-safe

Multi-Threaded HS Agent Features


Support for the Pre-Oracle9i architecture includes the following:
• Porting a driver from a platform that supports threads to a platform that
does not should require just recompilation and linking with the agent
libraries on the new platform.
• The same executable is runable in either single-process or multi-
threaded mode.

Performance and Scalability Enhancements 3-27


Starting Multi-Threaded HS Agents

• Multi-threaded agents are started on a per-SID basis


• For each SID, a separate agent process is started and
incoming connections for that SID will be handed over
by the listener to that process
• Like the Oracle Shared Server, the agent process will be
pre-started
• The process started first is the monitor process, which
can then create dispatcher and task threads as needed

Performance and Scalability Enhancements 3-28


Starting and Stopping HS Agents

• Use agent control utility


• Executes in
– Single command mode
– Shell mode

Agent Control Utility


The multi-threaded agent is started and stopped by an agent control utility, agtctl.
agtctl works like lsnrctl in that you can run commands from an operating system
prompt (single command mode) or type commands from within an agtctl shell (shell
mode). The following operations, and related operating system commands, are available in
single command mode:
1. Startup
agtctl startup <agent name> <agent sid>
2. Shutdown
agtctl shutdown <agent sid>
or
agtctl shutdown abort <agent sid>
3. Setting parameters
agtctl set <parameter> <value> <agent sid>
4. Unsetting parameters
agtctl unset <parameter> <agent sid>
5. Examining parameter values
agtctl show <parameter> <agent sid>
6. Deleting all settings for a particular agent SID
agtctl delete <agent sid>
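For example, assuming an agent SID of hsagt1 (the SID and the parameter value are
illustrative only), setting and examining a parameter in single command mode might
look like:

agtctl set max_task_threads 8 hsagt1
agtctl show max_task_threads hsagt1
agtctl shutdown hsagt1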

Performance and Scalability Enhancements 3-29


HS Agent Initialization Parameters

• max_dispatchers
• tcp_dispatchers
• max_task_threads
• listener_address
• shutdown_address

Shell Mode of the Agent Control Utility


You start a shell mode session by typing agtctl at an operating system prompt. When you
do that, you get an AGTCTL> prompt, and the first thing you need to do is to set the name of
the agent SID that you are working with as follows:
AGTCTL> set agent_sid <agent sid>
After that, all commands issued are assumed to be for this particular SID until the
agent_sid value is changed. Commands are the same as for single command mode,
except that you drop the agtctl keyword and the <agent sid> argument. For example, to set
an initialization parameter value, type
AGTCTL> set <parameter> <value>
The information about which program to start for a particular value of SID is extracted
from the SID_LIST field of listener.ora.
HS Agent Initialization Parameters
The initialization parameters for HS agents are:
• max_dispatchers: the maximum number of dispatchers
• tcp_dispatchers: the number of dispatchers listening on TCP; the rest use IPC
• max_task_threads: the number of task threads
• listener_address: the address on which the listener is listening; needed for
registration
• shutdown_address: the address on which the agent listens for shutdown
messages from agtctl
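A hypothetical shell-mode session that sets two of these parameters (the SID and the
values shown are invented for illustration):

AGTCTL> set agent_sid hsagt1
AGTCTL> set max_dispatchers 3
AGTCTL> set max_task_threads 8
AGTCTL> show max_dispatchers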

Performance and Scalability Enhancements 3-30


Configuring the Multi-Threaded HS Agent

Configuring the HS Agent:


• hsini_<sid>.ora in $ORACLE_HOME/dbs
• hsini.ora in $ORACLE_HOME/dbs

Configuring the Multi-Threaded HS Agent


If an agent is bequeathed by the listener, it does not check any initialization file but starts up
in single-process mode. If the agent is prespawned, it will first look for the initialization file
hsini_<sid>.ora (where <sid> is the Oracle SID associated with the agent) in the
$ORACLE_HOME/dbs directory. If no such file is found, then the agent looks for the file
hsini.ora in the $ORACLE_HOME/dbs directory. This file (which is created by the agent
administrator) contains defaults for all the agents running on a system. If this file is not
found either, the agent uses its internal defaults.
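Assuming the usual Oracle name=value parameter-file format and an illustrative SID of
hsagt1, a minimal hsini_hsagt1.ora might look like:

max_dispatchers=3
tcp_dispatchers=2
max_task_threads=8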

Performance and Scalability Enhancements 3-31


Virtual Interface (VI) Protocol Adapter

• 'Thin' communication protocol for clustered servers


• Place messaging burden on network hardware instead of CPU
• A scalable solution for connecting mid-tier cluster servers
• No application change required
• Emerging standard: http://www.viarch.org

Performance and Scalability Enhancements 3-32


Oracle9i VI Adapter

[Diagram: matching protocol stacks on the Oracle server and the
application server. Each stack consists of TTC support and the event
model (TTC, IIOP, HTTP), naming services (Oracle Names, LDAP, NDS),
security services (Kerberos, DCE, biometrics, RADIUS), networking
services (multi-protocol interchange, failover, load balancing,
connection pooling, multiplexing, proxying), network protocol
adapters, and the VI adapter at the base.]

Performance and Scalability Enhancements 3-33


Improvements

The database spends less time waiting for work, the application
servers spend less time waiting for responses, and the CPUs have
more cycles free for other work.
[Diagram: clients connect to an application server cluster, which
connects to the Oracle9i database over a VI fabric.]
• Number of users increases
• Response time decreases
• Price/performance benefit

Performance and Scalability Enhancements 3-34


Improvements

• Results of SAP-SD Benchmark


– 4900 users
– 1.85 second avg. response time
– 35% fewer processing resources than previous record
on Windows
• Oracle’s test comparing network connections using TCP/IP
versus VI shows VI has:
– 40% reduction in user CPU time
– 62% reduction in system CPU time
– 50% reduction in total CPU time

Performance and Scalability Enhancements 3-35


CORE Library Improvements

• CORE library is further modularized


• Reduced dependencies between modules
– applications pull in smaller chunks, improving
memory usage
– smart linkers pull objects into the executable
based upon their atomic level (object files) and
their dependencies

Performance and Scalability Enhancements 3-36


Summary

In this lesson, you should have learned how to:


• Use OCI for connection pooling
• Use dedicated external agents
• Create libraries, procedures, and C declarations for
dedicated external agents
• Configure transactional external procedure agents
• Manage multi-threaded heterogeneous service
agents
• Describe the Virtual Interface Protocol Adapter
• List the benefits of CORE library improvements

Performance and Scalability Enhancements 3-37


Performance and Scalability Enhancements 3-38
