
An Oracle White Paper
March 2010

Oracle Database: The Database of Choice for Deploying SAP Solutions

Executive Overview
Introduction
Database market share for Enterprise Applications
Oracle-SAP Technology Relationship
Oracle Advanced Compression
    OLTP Table Compression
        Minimal Performance Overhead
    SecureFile Compression
    Recovery Manager (RMAN) Compression
    Data Pump Compression
    Compression for Network Traffic
Real Application Testing
    Database Replay
        Faster deployment
    SQL Performance Analyzer
Online Patching
Direct NFS
SecureFile Performance
Deferred Segment Creation
Enhanced ADD COLUMN Functionality (Dictionary-Only Add Column)
Table Partitioning
SAP Standard Applications Benchmarks
Real Application Clusters for SAP (RAC for SAP)
    High Availability for SAP Resources (through SAPCTL)
Data Guard for SAP
Patching of Oracle Databases and Real Application Clusters
Oracle Advanced Security
    Tablespace Encryption
    RMAN Backup Encryption (Oracle Secure Backup)
    Data Guard Secure Transmission of Redo Data
    Secure Database Exports with Encryption
    SecureFile Encryption
    Database Vault
More new 11g features
    Data Guard Improvements
        Fast-Start Failover for Maximum Performance Mode in a Data Guard Configuration
        User Configurable Conditions to Initiate Fast-Start Failover in a Data Guard Configuration
        Data Guard Integration, Simplification, and Performance
        Support Up to 30 Standby Databases
    Integration, Simplification, and Performance of Availability Features
        Automatic Reporting of Corrupt Blocks
        Automatic Block Repair
        Block Media Recovery Performance Improvements
        Parallel Backup and Restore for Very Large Files
        Enhanced Tablespace Point-In-Time Recovery (TSPITR)
    Online Application Maintenance and Upgrade
        Invisible Indexes
        Online Index Creation and Rebuild Enhancements
    RMAN Integration, Simplification, and Performance
        Archive Log Management Improvements
        Fast Incremental Backups on Physical Standby Database
    Server Manageability
        Global Oracle RAC ASH Report + ADDM Backwards Compatibility
        ADDM for Oracle Real Application Clusters
Oracle Expertise in the SAP environment
Conclusion
Appendix
Executive Overview

Since 1988, Oracle has been the database of choice as the development platform for
SAP applications. In November 1999, Oracle and SAP signed a contract to ensure
future cooperation and maintain Oracle's position as a "tier one" database
platform for SAP.

SAP R/3 was originally developed on the Oracle database, and the two companies have a
long-standing technology relationship. Subsequent SAP products, such as SAP Business
Information Warehouse (BW), have also been developed using the Oracle database.
Oracle's assistance with the incorporation of new database features, performance
testing, bug fixing, and customer problem escalations has been invaluable to SAP and
to the large number of SAP customers running on the Oracle database. These customers
have always benefited from the close cooperation between the Oracle and SAP
development teams, which has resulted in the highest levels of one-stop service and
in Oracle database optimizations for SAP applications.

The Oracle Database has an established history as the industry leader for relational
databases. Today, many successful businesses use the Oracle Database to power their
mission-critical applications. By deploying Oracle Database 11g Release 2 within their IT
architecture, SAP customers can leverage the power of the world's leading database to
reduce their server and storage costs, eliminate idle redundancy, and improve quality of
service. SAP customers can dramatically reduce the cost and effort required to test
infrastructure changes, reduce downtime, and use the first truly self-managing
database, designed to monitor, diagnose, and heal itself.

Introduction

Oracle and SAP continue to serve the tens of thousands of mutual SAP-on-Oracle
database customers. The joint effort has always been characterized by a constant desire
to provide mutual customers with efficient service and support solutions for their SAP
application needs, in order to bring additional benefit to their businesses and to offer
optimum protection of their investments. The Oracle database is always optimized for
SAP applications, and each new database release provides many new features that help
customers cope with constant challenges such as reducing storage costs and minimizing
downtime.

This paper describes the most important features supported by SAP and shows the
main differentiators between the Oracle Database and DB2 and SQL Server. Many of
these features, such as RAC, Data Guard, Table Partitioning, and AWR, have been
available in earlier Oracle database versions (9i and 10g), but are now enhanced in the
current Oracle database version 11g Release 2. Some major new features, such as Advanced
Compression, Real Application Testing, and Online Patching, are available immediately with
11g Release 2 for SAP.

Database market share for Enterprise Applications


Analysts are unanimous in saying that the Oracle database enjoys dominant market share for
enterprise applications, including SAP.
More than 60% of SAP implementations are based on an Oracle database. Note that the larger
the system (i.e., more users and more data), the higher the requirements for storage savings,
performance, security, and high availability, and the higher the share of Oracle-based systems.
Very large systems are almost exclusively based on Oracle.
Oracle dominates the SAP database market share across operating system platforms, including
the various flavors of Unix and Linux as well as Windows.
Oracle's market position has real advantages for customers considering database choices for their
SAP system. A large installed base indicates that Oracle is able to meet the database needs of
SAP customers across many industries and geographies.


It also means that a large group of customers have tested the SAP-Oracle combination in
situations that no QA group at SAP could ever recreate. Both Oracle and SAP have learned from
this experience in the field, and both products have been enhanced as a result. Customers now
choosing Oracle for SAP will get the accumulated benefits of years of product testing in the real
world. The impressively large customer base translates into several advantages:
• Proven technology
• Widest choice of solutions and systems
• Highest consulting expertise on the market
• Best cooperation with hardware and tool vendors
• Largest labor pool of people with combined Oracle and SAP skills

Oracle-SAP Technology Relationship


Oracle has dedicated teams working with SAP in many different areas, including joint software
development, pre-sales and technology evangelism, customer technical support and professional
services. The long Oracle-SAP technology relationship started in 1988 when SAP R/3
development began.

Figure 1: Milestones in SAP-Oracle Technology Relationship

The Oracle development team working at SAP HQ in Walldorf, Germany assists SAP in:
• Performance testing of each release with the Oracle database to ensure there is no degradation
of response time, throughput, and scalability between SAP versions.


• Fixing database bugs found during SAP functional testing, and including SAP enhancement
requests in the database product roadmap
• Incorporating new Oracle features in SAP releases
• Optimizing each new release of the DBMS and new versions of SAP applications
• Responding to escalated customer problems, when related to database issues

Oracle Advanced Compression


Oracle has been a pioneer in database compression technology. Oracle Database 9i introduced
Basic Table Compression, which compressed data loaded using bulk load operations. SAP BW
customers have thus been able to compress data since Oracle 9i: PSA tables, historical cubes,
aggregates, and ODS objects. Since Oracle 10.2.0.2, SAP has certified the use of index
compression to save disk space for indexes and reduce total database size on disk. Customer
experience shows that key-compressed indexes need up to 75% less disk space. (Real-world
example: the size of the index 'GLPCA~1' was reduced from 18 GB to 4.5 GB.) Even after a full
database reorganization has taken place, an additional 20% reduction in total disk space for the
whole database can be achieved using index compression. Without a prior reorganization, the
total space savings for the complete database may be even higher than 20%, as index
compression implicitly reorganizes every index.
The Oracle Database 11g Advanced Compression Option introduces a comprehensive set of
compression capabilities to help customers maximize resource utilization and reduce costs. It
allows IT administrators to significantly reduce their overall database storage footprint by
enabling compression for all types of data – be it relational (table), unstructured (file), or backup
data. Although storage cost savings are often seen as the most tangible benefit of compression,
innovative technologies included in the Advanced Compression Option are designed to reduce
resource requirements and technology costs for all components of your IT infrastructure,
including memory and network bandwidth. Oracle 11g Advanced Compression includes:

OLTP Table Compression


Oracle’s OLTP Table Compression uses a unique compression algorithm specifically designed to
work with OLTP applications. The algorithm works by eliminating duplicate values within a
database block, even across multiple columns. Compressed blocks contain a structure called a
symbol table that maintains compression metadata. When a block is compressed, duplicate values
are eliminated by first adding a single copy of the duplicate value to the symbol table. Each
duplicate value is then replaced by a short reference to the appropriate entry in the symbol table.
Through this innovative design, compressed data is self-contained within the database block as
the metadata used to translate compressed data into its original state is stored in the block. When
compared with competing compression algorithms that maintain a global database symbol table,


Oracle’s unique approach offers significant performance benefits by not introducing additional
I/O when accessing compressed data.
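The per-block symbol-table scheme described above can be sketched in a few lines of Python. This is an illustrative model only; the function names and encoding are invented here and do not reflect Oracle's actual on-disk format:

```python
# Sketch: deduplicate repeated column values within a single "block" via
# a local symbol table, so the block stays self-contained and a read
# never needs a global lookup outside the block.

def compress_block(rows):
    """Replace cell values with short references into a per-block
    symbol table. Returns (symbol_table, encoded_rows)."""
    symbols = []   # one copy of each distinct value in this block
    index = {}     # value -> position in the symbol table
    encoded = []
    for row in rows:
        out = []
        for value in row:
            if value not in index:
                index[value] = len(symbols)
                symbols.append(value)
            out.append(index[value])   # short reference, not the value
        encoded.append(out)
    return symbols, encoded

def read_block(symbols, encoded):
    """Reads touch nothing outside the block: the symbol table (the
    compression metadata) travels with the encoded rows."""
    return [[symbols[ref] for ref in row] for row in encoded]

rows = [("DE", "OPEN", 100), ("DE", "OPEN", 200), ("DE", "CLOSED", 100)]
symbols, encoded = compress_block(rows)
assert read_block(symbols, encoded) == [list(r) for r in rows]
```

With the sample rows above, nine cells collapse to five symbol-table entries plus small references, which is where the storage saving comes from.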
In general, customers can expect to reduce their storage space consumption by a factor of 2 to 3
by using the OLTP Table Compression feature. That is, the amount of space consumed by
uncompressed data will be two to three times larger than that of the compressed data. The
benefits of OLTP Table Compression go beyond just on-disk storage savings. One significant
advantage is Oracle’s ability to read compressed blocks directly without having to first
uncompress the block. Therefore, there is no measurable performance degradation for accessing
compressed data. In fact, in many cases performance may improve due to the reduction in I/O
since Oracle will have to access fewer blocks. Further, the buffer cache will become more
efficient by storing more data without having to add memory.
The results achieved using OLTP compression at real-world SAP BW customers are depicted in
Figure 2, which shows space savings of up to 86% at the table level.

Figure 2: OLTP Table compression within SAP BW

Minimal Performance Overhead

As stated above, OLTP Table Compression has no adverse impact on read operations. There is
additional work performed while writing data, making it impossible to eliminate performance
overhead for write operations. However, Oracle has put in a significant amount of work to


minimize this overhead for OLTP Table Compression. Oracle compresses blocks in batch mode
rather than compressing data every time a write operation takes place. A newly initialized block
remains uncompressed until data in the block reaches an internally controlled threshold. When a
transaction causes the data in the block to reach this threshold, all contents of the block are
compressed. Subsequently, as more data is added to the block and the threshold is again reached,
the entire block is recompressed to achieve the highest level of compression. This process
repeats until Oracle determines that the block can no longer benefit from further compression.
Only transactions that trigger the compression of the block will experience the slight
compression overhead. Therefore, a majority of OLTP transactions on compressed blocks will
have the exact same performance as they would with uncompressed blocks.
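The batch behavior described above can be modeled roughly as follows. The threshold value and the use of zlib are illustrative choices for the sketch, not Oracle internals:

```python
# Sketch: a block accepts uncompressed writes until an internal fullness
# threshold is reached, then compresses all contents in one batch, so
# only the triggering transaction pays the compression cost.
import zlib

class Block:
    def __init__(self, capacity=1024, threshold=0.9):
        self.capacity = capacity
        self.threshold = threshold
        self.raw = b""          # newly written, still-uncompressed data
        self.compressed = b""   # previously compressed contents
        self.compressions = 0   # how many batch compressions ran

    def insert(self, row: bytes):
        self.raw += row
        if len(self.compressed) + len(self.raw) >= self.capacity * self.threshold:
            # batch step: recompress everything currently in the block
            data = zlib.decompress(self.compressed) if self.compressed else b""
            self.compressed = zlib.compress(data + self.raw)
            self.raw = b""
            self.compressions += 1

blk = Block()
for _ in range(100):
    blk.insert(b"status=OPEN;country=DE;")   # highly repetitive rows

# Far fewer compression runs than inserts: most writes paid no overhead,
# and no data was lost along the way.
assert 1 <= blk.compressions < 100
assert len(zlib.decompress(blk.compressed)) + len(blk.raw) == 100 * 23
```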

Figure 3: OLTP Table compression Process

SecureFile Compression
SecureFiles is a new feature in Oracle Database 11g that introduces a completely reengineered
large object (LOB) data type to dramatically improve performance, manageability, and ease of
application development.
SecureFiles data is compressed using industry standard compression algorithms. Compression
not only results in significant savings in storage but also improved performance by reducing IO,
buffer cache requirements, redo generation and encryption overhead. If the compression does
not yield any savings or if the data is already compressed, SecureFiles will automatically turn off
compression for such columns. Compression is performed on the server-side and allows for


random reads and writes to SecureFile data. SecureFile compression provides significant storage
savings for unstructured data depending on the degree of compression: LOW, MEDIUM
(default) and HIGH, which represent a tradeoff between storage savings and latency.
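The adaptive behavior mentioned above (automatically turning compression off when it yields no savings) can be sketched as follows; zlib stands in for the industry standard algorithm, and the two-value storage format is invented for illustration:

```python
# Sketch: compress server-side, but store raw when compression yields
# no savings (e.g. the data is already compressed).
import os
import zlib

def store_lob(data: bytes):
    """Return ('compressed', payload) or ('raw', payload)."""
    candidate = zlib.compress(data, 6)      # MEDIUM-ish level, illustrative
    if len(candidate) < len(data):
        return ("compressed", candidate)
    return ("raw", data)                    # no savings: keep uncompressed

text = b"the same line over and over\n" * 1000
fmt, stored = store_lob(text)
assert fmt == "compressed" and len(stored) < len(text)
assert zlib.decompress(stored) == text      # lossless, of course

random_blob = os.urandom(4096)              # incompressible, like a JPEG
fmt2, stored2 = store_lob(random_blob)
assert fmt2 == "raw" and stored2 == random_blob
```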
SecureFile compression handles both in-line and out-of-line LOB data, which is becoming more
and more important in SAP applications and is widely used in SAP products such as SAP CRM,
SAP XI, SAP NetWeaver Portal, and even SAP ERP. Almost all non-cluster tables in SAP
ERP use out-of-line LOBs, which are unique to the Oracle database.
Together, OLTP compression and SecureFile compression enable Oracle to compress every type
of data related to SAP applications: tables, indexes, and unstructured data. Using all 11g
space optimizations, the database size can be reduced by up to a factor of 3.

Recovery Manager (RMAN) Compression


The continuous growth in enterprise databases creates an enormous challenge to database
administrators. The storage requirements for maintaining database backups and the performance
of the backup procedures are directly impacted by database size. Oracle Advanced Compression
includes RMAN compression technology that can dramatically reduce the storage requirements
for backup data. Due to RMAN’s tight integration with Oracle Database, backup data is
compressed before it is written to disk or tape and doesn’t need to be uncompressed before
recovery – providing an enormous reduction in storage costs.
There are three levels of RMAN Compression: LOW, MEDIUM, and HIGH. The amount of
storage savings increases from LOW to HIGH, while potentially consuming more CPU
resources.

Data Pump Compression


The ability to compress the metadata associated with a Data Pump job was first provided in
Oracle Database 10g Release 2. In Oracle Database 11g, this compression capability has been
extended so that table data can be compressed on export. Data Pump compression is an inline
operation, so the reduced dump file size means a significant savings in disk space. Unlike
operating system or file system compression utilities, Data Pump compression is fully inline on
the import side as well, so there is no need to uncompress a dump file before importing it. The
compressed dump file sets are automatically decompressed during import without any additional
steps by the Database Administrator.
In the following compression example from the Oracle sample database, the OE and SH
schemas were exported while simultaneously compressing all data and metadata. The dump file
size was reduced by 74.67%.
Three versions of the gzip (GNU zip) utility and one UNIX compress utility were used to
compress the 6.0 MB dump file set. The reduction in dump file size was comparable to Data


Pump compression. Note that the reduction in dump file size will vary based on data types and
other factors.
Full Data Pump functionality is available using a compressed file. Any command that works on
a regular file will also work on a compressed file. Users have the following options to determine
which parts of a dump file set should be compressed:
• ALL enables compression for the entire export operation.
• DATA_ONLY results in all data being written to the dump file in compressed format.
• METADATA_ONLY results in all metadata being written to the dump file in compressed
format. This is the default.
• NONE disables compression for the entire export operation.
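The option matrix above can be sketched as a simple dispatch over the two sections of a dump. zlib stands in for Data Pump's inline compression, and the two-section dump layout is invented for illustration:

```python
# Sketch: which sections of a dump get compressed under each setting,
# and why import needs no separate decompression step (it is symmetric).
import zlib

def write_dump(metadata: bytes, table_data: bytes, compression="METADATA_ONLY"):
    do_meta = compression in ("ALL", "METADATA_ONLY")
    do_data = compression in ("ALL", "DATA_ONLY")
    meta_out = zlib.compress(metadata) if do_meta else metadata
    data_out = zlib.compress(table_data) if do_data else table_data
    return meta_out, data_out

meta = b"CREATE TABLE t (c NUMBER);" * 50
data = b"row,row,row\n" * 1000

m_all, d_all = write_dump(meta, data, "ALL")
assert len(m_all) < len(meta) and len(d_all) < len(data)   # smaller dump

m_none, d_none = write_dump(meta, data, "NONE")
assert (m_none, d_none) == (meta, data)                    # untouched

# "import side": decompression is transparent and lossless
assert zlib.decompress(d_all) == data
```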

Compression for Network Traffic


Data Guard provides the management, monitoring, and automation software infrastructure to
create, maintain, and monitor one or more standby databases to protect enterprise data from
failures, disasters, errors, and data corruptions. Data Guard maintains synchronization of primary
and standby databases using redo data (the information required to recover a transaction). As
transactions occur in the primary database, redo data is generated and written to the local redo
log files. Data Guard Redo Transport Services are used to transfer this redo data to the standby
site(s). With Advanced Compression, redo data may be transmitted in a compressed format to
reduce network bandwidth consumption and in some cases reduce transmission time of redo
data when the Oracle Data Guard configuration uses either synchronous redo transport (SYNC)
or asynchronous redo transport (ASYNC).

Real Application Testing


Today, enterprises have to make sizeable investments in hardware and software to roll out
infrastructure changes. For example, a data center may have an initiative to move databases to a
low cost computing platform, such as Linux. This would, traditionally, require the enterprise to
invest in duplicate hardware for the entire application stack, including web server, application
server and database, to test their production applications. Organizations therefore find it very
expensive to evaluate and implement changes to their data center infrastructure. In spite of the
extensive testing performed, unexpected problems are frequently encountered when a change is
finally made in the production system. This is because test workloads are typically simulated and
are not accurate or complete representations of true production workloads. Data center
managers are therefore reluctant to adopt new technologies and adapt their businesses to the
rapidly changing competitive pressures.


Oracle Database 11g’s Real Application Testing addresses these issues head-on with the
introduction of two new solutions, Database Replay and SQL Performance Analyzer.

Database Replay
Database Replay provides DBAs and system administrators with the ability to faithfully,
accurately, and realistically rerun actual production workloads, including online user and batch
workloads, in test environments. By capturing the full database workload from production
systems, including all concurrency, dependencies, and timing, Database Replay enables you to
realistically test system changes by essentially recreating production workloads on the test
system, something that a set of scripts can never duplicate. With Database Replay, DBAs and
system administrators can test:
• Database upgrades, patches, parameter, schema changes, etc.
• Configuration changes such as conversion from a single instance to RAC, ASM, etc.
• Storage, network, interconnect changes
• Operating system, hardware migrations, patches, upgrades, parameter changes

Faster deployment

Another major advantage of Database Replay is that it does not require the DBA to spend
months getting a functional knowledge of the application and developing test scripts. With a few
point and clicks, DBAs have a full production workload available at their fingertips to test and
rollout any change. This cuts down testing cycles from many months to days or weeks and brings
significant cost savings to businesses as a result.
Database Replay consists of three main steps:
• Capture workload in production including critical concurrency
• Replay workload in test with production timing
• Analyze and fix issues before production
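The capture/replay idea behind these steps can be sketched in miniature. Real Database Replay also preserves concurrency and timing; this hypothetical model keeps only call order:

```python
# Sketch: record the workload against the production system, then rerun
# the identical workload unchanged against a test system.

class Recorder:
    def __init__(self, target):
        self.target = target
        self.log = []            # captured workload, in order

    def execute(self, statement):
        self.log.append(statement)
        return self.target(statement)

def replay(log, target):
    """Rerun the captured workload unchanged on a different system."""
    for stmt in log:
        target(stmt)

prod_results, test_results = [], []
rec = Recorder(prod_results.append)          # "production" system
for stmt in ["INSERT ...", "UPDATE ...", "COMMIT"]:
    rec.execute(stmt)

replay(rec.log, test_results.append)         # "test" system
assert test_results == prod_results == ["INSERT ...", "UPDATE ...", "COMMIT"]
```

No test script had to be written: the captured log itself is the test workload.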

SQL Performance Analyzer


Changes that affect SQL execution plans can severely impact application performance and
availability. As a result, DBAs spend enormous amounts of time identifying and fixing SQL
statements that have regressed due to the system changes. SQL Performance Analyzer (SPA) can
predict and prevent SQL execution performance problems caused by environment changes. SQL
Performance Analyzer provides a granular view of the impact of environment changes on SQL
execution plans and statistics by running the SQL statements serially before and after the
changes. SQL Performance Analyzer generates a report outlining the net benefit on the workload


due to the system change, as well as the set of regressed SQL statements. For regressed SQL
statements, appropriate execution plan details are provided, along with recommendations for
tuning them. SQL Performance Analyzer is well integrated with existing SQL Tuning Set (STS),
SQL Tuning Advisor and SQL Plan Management functionality. SQL Performance Analyzer
completely automates and simplifies the manual and time-consuming process of assessing the
impact of change on extremely large SQL workloads (thousands of SQL statements). DBAs can
use SQL Tuning Advisor to fix the regressed SQL statements in test environments and generate
new plans. These plans are then seeded in SQL Plan Management baselines and exported back
into production. Thus, using SQL Performance Analyzer, businesses can validate with a high
degree of confidence that a system change to a production environment in fact results in net
positive improvement at a significantly lower cost. Examples of common system changes for
which you can use the SQL Performance Analyzer include:
• Database upgrade, patches, initialization parameter changes
• Configuration changes to the operating system, hardware, or database
• Schema changes such as adding new indexes, partitioning or materialized views
• Gathering optimizer statistics.
• SQL tuning actions, for example, creating SQL profiles
Using SQL Performance Analyzer involves the following main steps:
• Capture SQL workload in production including statistics and bind variables
• Re-execute SQL queries in test environment
• Tune regressed SQL and seed SQL plans for production
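The before/after comparison at the heart of SPA can be sketched as follows. The per-statement costs here are synthetic stand-ins for the execution statistics SPA actually collects:

```python
# Sketch: execute each statement before and after a change, compare
# per-statement cost, and report the net effect plus regressions.

def analyze(workload, cost_before, cost_after):
    """Return (net_change, regressed_ids) for a set of statement ids.
    Negative net_change means the workload got cheaper overall."""
    regressed = [s for s in workload if cost_after[s] > cost_before[s]]
    net = sum(cost_after[s] - cost_before[s] for s in workload)
    return net, regressed

workload = ["q1", "q2", "q3"]
before = {"q1": 100, "q2": 50, "q3": 80}
after  = {"q1": 40,  "q2": 70, "q3": 80}   # q1 improved, q2 regressed

net, regressed = analyze(workload, before, after)
assert net == -40           # net positive improvement overall
assert regressed == ["q2"]  # candidate for SQL Tuning Advisor
```

The report distinguishes the overall win from the individual regressions, which is exactly what lets a DBA tune only `q2` instead of retesting everything.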

Online Patching
A regular RDBMS patch is comprised of one or more object files and/or libraries. Installing a
regular patch requires shutting down the RDBMS instance, re-linking the oracle binary, and
restarting the instance; uninstalling a regular patch requires the same steps.
With Oracle Database 11g, it is possible to install single or bundle patches completely online,
without requiring the database instance to be shut down, and without requiring RAC or Data
Guard configurations. With online patching, which is integrated with OPatch, each process
associated with the instance checks for patched code at a safe execution point, and then copies
the code into its process space.
An online patch is a special kind of patch that can be applied to a live, running RDBMS instance.
An online patch contains a single shared library; installing an online patch does not require
shutting down the instance or relinking the oracle binary. An online patch can be installed or
uninstalled using OPatch (which uses oradebug commands to install or uninstall the patch). Online
patches are currently only supported for the RDBMS, i.e. the oracle binary.
How does online patching differ from traditional diagnostic patching?
• Online patches are applied to and removed from a running instance, whereas traditional
patches require the instance to be shut down.
• Online patches use the oradebug interface to install and enable the patches, whereas traditional
diagnostic patches are linked into the "oracle" binary.
• Online patches do not require the "oracle" binary to be relinked, whereas traditional diagnostic
patches do.
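The "safe execution point" mechanism described above can be sketched as a worker that checks for pending patched code between units of work and switches to it without restarting. The dispatch here is a hypothetical stand-in for the per-process code copy; OPatch/oradebug mechanics are not modeled:

```python
# Sketch: hot-swap a routine at a safe point in a running worker,
# with no shutdown and no restart.

class Worker:
    def __init__(self, code):
        self.code = code          # current version of the routine

    def run(self, work_items, patch_source):
        results = []
        for item in work_items:
            # safe point: between items, pick up any pending patch
            if "pending" in patch_source:
                self.code = patch_source.pop("pending")
            results.append(self.code(item))
        return results

patches = {}
w = Worker(lambda x: x + 1)                 # v1 of the routine
out = w.run([1, 2], patches)                # runs v1
patches["pending"] = (lambda x: x + 10)     # patch arrives while "live"
out += w.run([3, 4], patches)               # picks up v2 at the safe point
assert out == [2, 3, 13, 14]                # v1 results, then v2 results
```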

Direct NFS
Standard NFS client software, provided by the operating system, is not optimized for Oracle
Database file I/O access patterns. With Oracle Database 11g Release 2, you can configure Oracle
Database to access NAS devices directly using Oracle Direct NFS Client, rather than using the
operating system kernel NFS client. Oracle Database will access files stored on the NFS server
directly through the integrated Direct NFS Client eliminating the overhead imposed by the
operating system kernel NFS. These files are also accessible via the operating system kernel NFS
client thereby allowing seamless administration.
Direct NFS Client includes two fundamental I/O optimizations to increase throughput and
overall performance. First, Direct NFS Client is capable of performing concurrent direct I/O,
which bypasses any operating system level caches and eliminates any operating system write-
ordering locks. This decreases memory consumption by eliminating scenarios where Oracle data
is cached both in the SGA and in the operating system cache and eliminates the kernel mode
CPU cost of copying data from the operating system cache into the SGA. Second, Direct NFS
Client performs asynchronous I/O, which allows processing to continue while the I/O request is
submitted and processed.
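The second optimization, asynchronous I/O, amounts to keeping work in flight instead of blocking on each request. A thread pool stands in for the database's I/O machinery in this illustrative sketch:

```python
# Sketch: submit several I/O requests at once and overlap their latency,
# rather than paying each request's latency serially.
from concurrent.futures import ThreadPoolExecutor
import time

def slow_read(block_id):
    time.sleep(0.05)             # simulated network/disk latency
    return f"data-{block_id}"

with ThreadPoolExecutor(max_workers=4) as pool:
    start = time.monotonic()
    futures = [pool.submit(slow_read, i) for i in range(4)]  # all in flight
    # ... foreground processing could continue here ...
    results = [f.result() for f in futures]
    elapsed = time.monotonic() - start

assert results == ["data-0", "data-1", "data-2", "data-3"]
assert elapsed < 4 * 0.05        # overlapped, not serial
```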
SAP customers can benefit from Direct NFS in the following ways:
• Improved throughput of NAS solutions such as NetApp
• Up to 50% more database throughput in NAS environments with multiple NICs
• Up to 20% CPU savings on the database server
• Works for Single Instance and Real Application Clusters (RAC)
• Works on UNIX/Linux and Windows platforms
• Highly available network solution: failure of NICs will not impact access to data as long as a
single NIC survives, and up to four network cards can be used between the database server
and the NAS
• Faster, easier, and more available than any OS- or NAS-based bonding or trunking solution
• Direct NFS with NAS may provide higher throughput than traditional, more complex SAN
solutions

SecureFile Performance
SecureFiles offer the best solution for storing file content, such as images, audio, video, PDFs,
and spreadsheets. Traditionally, relational data is stored in a database, while unstructured
content—both semi-structured and unstructured—is stored as files in file systems. SecureFiles is
a major paradigm shift in the choice of files storage. SecureFiles is specifically engineered to
deliver high performance for file data comparable to that of traditional file systems, while
retaining the advantages of the Oracle Database. SecureFiles offers the best database and file
system architecture attributes for storing unstructured content.
SAP customers benefit from SecureFiles because of
• Significantly faster access times compared to LOBs in SAP environments
• Increased transaction throughput on SAP cluster tables especially with RAC
• Prerequisite for compression of SAP tables containing LOBs (e.g. cluster tables)
Overall transaction throughput increases when LOB data is stored in SecureFiles (see figure 4). LOB data stored in SecureFiles delivers equal or better performance compared with LOB data stored in LONG or BasicFile columns (the LOB implementation prior to 11g). SecureFiles dramatically improves the scalability of SAP applications running against Oracle Database 11g RAC, and Oracle Database 11g Single Instance also benefits substantially from SecureFiles. Therefore, the clear recommendation is to migrate all existing LONG and BasicFile LOB data to SecureFiles.
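The recommended migration can be sketched with standard DDL; the table, column, and tablespace names below are illustrative (in practice, SAP customers perform this migration with BR*Tools/BRSPACE):

```sql
-- Sketch: move a BasicFile LOB column to SecureFiles (illustrative names)
ALTER TABLE "SAPSR3"."VBDATA" MOVE
  LOB ("VDATA") STORE AS SECUREFILE (TABLESPACE psapsr3);

-- A table move invalidates indexes, so rebuild them afterwards
ALTER INDEX "SAPSR3"."VBDATA~0" REBUILD ONLINE;
```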
[Figure 4 chart: SAP VBDATA throughput (insert/read/delete) – performance improvement factor (0 to 2.5) by row size in KB (1 to 64), comparing LONGs (9.2, 10.2), LOBs (10.2, 11.2), and SecureFiles 11.2]

Figure 4: Performance Improvement with SecureFiles

Deferred Segment Creation

Beginning in Oracle Database 11g Release 2, when creating a table in a locally managed tablespace, table segment creation is deferred until the first row is inserted. In addition, creation of segments is deferred for any LOB columns of the table, any indexes created implicitly as part of table creation, and any indexes subsequently explicitly created on the table.
The advantages of this space allocation method for customers running the Oracle database underneath their SAP applications are the following:
• Empty database objects do not consume any disk space
• Very important for SAP environments, as 60-70% of all tables, LOBs, indexes, and partitions in an SAP installation are empty
• Makes database installation for SAP much faster, because the creation of empty tables, LOBs, and indexes is dramatically faster
• Oracle data dictionary space queries run substantially faster
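The behavior can be sketched as follows (the table name is illustrative; in 11.2 deferred creation is the default for locally managed tablespaces):

```sql
-- Sketch: deferred segment creation (illustrative object names)
CREATE TABLE ztest (id NUMBER, txt VARCHAR2(100))
  SEGMENT CREATION DEFERRED;

-- No segment has been allocated yet: this query returns no rows
SELECT segment_name FROM user_segments WHERE segment_name = 'ZTEST';

-- The first insert materializes the table segment
INSERT INTO ztest VALUES (1, 'first row');
```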

Enhanced ADD COLUMN Functionality (Dictionary-Only Add Column)
Before Oracle 11g, adding new columns with DEFAULT values and a NOT NULL constraint required both an exclusive lock on the table and the default value to be stored in all existing records.
Now in Oracle 11g, the database can optimize resource usage and storage requirements for this operation: default values of columns specified as NOT NULL are maintained in the data dictionary. Adding new columns with DEFAULT values and a NOT NULL constraint no longer requires the default value to be stored in all existing records. This not only enables a schema modification in sub-seconds, independent of the existing data volume, it also consumes no space. Especially for large tables, adding a column results in reduced execution time and space savings.
Because adding columns is very common in SAP BW applications and SAP upgrades, the enhanced ADD COLUMN functionality leads to:
• a factor 10-20 performance improvement for SAP BW during the add column process
• savings of large amounts of disk space
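A dictionary-only add column is plain DDL; a sketch with an illustrative table and column name:

```sql
-- Sketch: metadata-only column addition in 11g (illustrative names)
-- The default value is recorded once in the data dictionary;
-- existing rows are not physically updated.
ALTER TABLE "SAPSR3"."VBAK" ADD (zz_flag VARCHAR2(1) DEFAULT 'N' NOT NULL);
```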

Table Partitioning
Table partitioning has been supported since SAP Release 4.6C (using many of the available Oracle partitioning types) and SAP BW 2.0, where many SAP InfoCube tables are partitioned by default.
As of 11g Release 2 and SAP BASIS Release 700 (Support Package 22), composite partitioning
(or subpartitioning) and interval partitioning are also supported by SAP:
• With composite partitioning - a scheme introduced in Oracle8i Database - you can create subpartitions from partitions, allowing further granularity of the table. But in that release, you could subpartition range-partitioned tables only via hash subpartitioning. In Oracle9i, composite partitioning was expanded to include range-list subpartitioning.
• In Oracle Database 11g, you are not limited to range-hash and range-list composite
partitioning. Rather, your choices are virtually limitless; you can create composite partitions in
any combination. This means customers can create the following types of composite partitions
available in Oracle 11g: Range-range, Range-hash, Range-list, List-range, List-hash, List-list.
• Interval partitioning, new in 11g, is an extension of range partitioning which instructs the
database to automatically create partitions of a specified interval when data inserted into the
table exceeds all of the existing range partitions. You must specify at least one range partition.
The range partitioning key value determines the high value of the range partitions, which is
called the transition point, and the database creates interval partitions for data beyond that
transition point. The lower boundary of every interval partition is the non-inclusive upper
boundary of the previous range or interval partition.
• For example, if you create an interval-partitioned table with monthly intervals and the transition point at January 1, 2007, then the lower boundary for the January 2007 interval is January 1, 2007. The lower boundary for the July 2007 interval is July 1, 2007, regardless of whether the June 2007 partition was already created.
• The "SAP Partition Engine" provides a tool for SAP/Oracle systems that you can use to partition large application tables to optimize archiving. The Partition Engine offers a predefined set of approximately 30 application tables to be partitioned based on time-based criteria:
• Existing non-partitioned tables are converted through an ABAP/SAP BR*Tools task.
• Partition maintenance is fully automated through the internal SAP SM37 job and requires no DBA intervention.
For additional information regarding prerequisites and usage, see SAP Note 1333328.
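The interval partitioning example described above can be sketched in DDL (table, column, and partition names are illustrative):

```sql
-- Sketch: monthly interval partitioning with the transition point
-- at January 1, 2007 (illustrative names)
CREATE TABLE sales_hist (
  posting_date DATE,
  amount       NUMBER
)
PARTITION BY RANGE (posting_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
( PARTITION p_pre2007 VALUES LESS THAN (DATE '2007-01-01') );

-- This insert lies beyond the transition point, so the database
-- automatically creates the July 2007 interval partition.
INSERT INTO sales_hist VALUES (DATE '2007-07-15', 100);
```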
SAP Standard Application Benchmarks
SAP has created several standard benchmarks to compare the performance of various solutions
and components across different hardware platforms and technology stacks and to assist in sizing
customer systems. The most “popular” SAP standard application benchmarks are SAP Sales and
Distribution (SAP SD), Assemble-to-Order (ATO), SAP Business Information Warehouse (SAP
BW), SAP Business Intelligence Data Mart, and Advanced Planning and Optimization (SAP
APO). SAP SD comes in three possible configurations: 2-tier, where the database and SAP application are on the same server; 3-tier, where the database and SAP application are on separate servers; and parallel 3-tier, with a clustered database such as Oracle RAC. Finally, the ATO benchmark has two configurations: 2-tier and 3-tier.
In October 2009, Oracle announced a world-record result on the SAP® Business Intelligence-
Data Mart (BI-D) Standard Application Benchmark, SAP certification number 2009037. This
result surpasses the best IBM DB2 result running this benchmark, with more than triple the
performance, SAP certification number 2008063:
On a system comprising a two-node Fujitsu PRIMERGY RX300 cluster, each node equipped with two quad-core Intel Xeon X5570 2.93 GHz processors, Oracle Database and Oracle Real Application Clusters on Linux delivered a world-record 609,349 query navigation steps per hour.
When measured from a one-node to a two-node configuration, as documented and
certified by SAP, Oracle Database and Oracle Real Application Clusters showed 90
percent scalability by achieving 320,363 (SAP certification number 2009036) and
609,349 (SAP certification number 2009037) query navigation steps per hour,
respectively, while delivering unmatched performance and high availability.
Additionally, the one-node and two-node results delivered more than 67 percent higher
performance per core and 3.3 times more performance-per-processor, respectively, than
the highest IBM DB2 results on the SAP BI-D Standard Application Benchmark.
The data mart scenario is one use of the business intelligence capabilities of the SAP
NetWeaver® technology platform. The data mart contains a static snapshot of a huge
amount of operational data. Multiple users run queries on this data in 10 InfoCubes that
contain 2.5 billion (2,500,000,000) records. The key figure is the number of query
navigation steps per hour against an enormous amount of data.
Oracle has extended this series of benchmarks to three-node (SAP certification number 2009044) and then four-node (SAP certification number 2009045) RAC clusters to prove that scalability stays at the same high level whenever the number of nodes, and thus the resources, is doubled.
Figure 5: RAC scalability with SAP BI Data Mart
In November 2007, Oracle announced a world-record result on the SAP® Sales and
Distribution-Parallel (SD-Parallel) Standard Application Benchmark running on the SAP® ERP
6.0 application with 37,040 SD users, Certification Number 2008013. Based on these results an
IBM paper states “This document will demonstrate that the SAP software suite works and
scales very well utilizing multiple server nodes in a Oracle RAC cluster”.
A head-to-head comparison finds Oracle on top with 3,600 more SAP SD users than Microsoft SQL Server 2005 on identical Fujitsu hardware (Oracle/Linux certification number 2006071 and SQL Server/Windows certification number 2006068). Another benchmark comparison on identical HP hardware shows that the Oracle Database could serve 34% more SD users than SQL Server (Oracle/Linux certification number 2008064 and SQL Server/Windows certification number 2008026). This comparison shows that Oracle 10g outperformed SQL Server 2008.
"Oracle continues to prove that it is far ahead of the competition when it comes to meeting the high-performance, data-intensive computing demands of our customers," said Juan Loaiza, senior vice president, Systems Technology, Oracle. "This new world-record result and superior scalability proof points clearly distinguish Oracle Database and Real Application Clusters in demanding enterprise application environments."
A production SAP system sees fluctuating user loads, contention on frequently used tables, a mix
of reads and writes, and occasional large batch jobs. The database platform for the system needs
to be able to scale easily with this mixed workload without requiring frequent and extensive DBA
intervention. SAP standard benchmark results as well as customer experience show that the
Oracle RDBMS distinguishes itself through an optimal usage of available system resources. SAP
certified a series of benchmarks that demonstrate the impressive scalability of Oracle Real
Application Clusters (RAC): The throughput increased by a factor of 1.9 whenever the number
of nodes was doubled. This scalability was proven by Oracle in two of the best-known SAP Standard Application Benchmarks: SAP BI-D (figure 5) and SAP SD (figure 6).
Figure 6: RAC scalability with SAP SD
Real Application Clusters for SAP (RAC for SAP)
Oracle Database 10g comes with an integrated set of High Availability (HA) capabilities that help
organizations ensure business continuity by minimizing the various kinds of downtime that can
affect their businesses. These capabilities take care of most scenarios that might lead to data
unavailability, such as system failures, data failures, disasters, human errors, system maintenance operations, and data maintenance operations.
The cornerstone of Oracle’s high availability solutions that protects from system failures is
Oracle Real Application Clusters (RAC). Oracle RAC is a cluster database with a shared cache
architecture that overcomes the limitations of traditional shared-nothing and shared-disk
approaches, to provide a highly scalable and available database solution for SAP applications.
RAC supports the transparent deployment of a single database across a cluster of active servers,
providing fault tolerance against hardware failures or planned outages. RAC supports mainstream business applications of all kinds, including popular packaged products such as SAP as well as custom applications. RAC provides very high availability for these applications by removing the single point of failure that a single server represents. In a RAC configuration, all nodes are active and
serve production workload. If a node in the cluster fails, the Oracle Database continues running
on the remaining nodes. Individual nodes can also be shutdown for maintenance while
application users continue to work.
A RAC configuration can be built from standardized, commodity-priced processing, storage, and
network components. RAC also enables a flexible way to scale applications, using a simple scale-
out model. When more processing power is needed by a particular application service, another
server can be added easily and dynamically, without taking any of the active users offline. Based
on customer configurations, SAP Dialog instances and connected users can be routed to
dedicated nodes in the RAC cluster.
In contrast to a failover cluster, where every SAP instance is connected to a single database instance, with an Oracle RAC cluster one or more SAP instances can be connected to a dedicated Oracle RAC instance from among the available instances. If one RAC node crashes, the users connected to the other nodes will not be affected, since they are connected to a different database instance. The SAP dialog instances that were connected to the crashed database instance (node1) will be automatically reconnected to a surviving database instance (node2) within seconds. If more than one SAP instance was connected to the crashed database instance, the SAP instances concerned can be reconnected either to a single available RAC instance or to different RAC instances in order to split the workload.
Figure 7: SAP workload distribution with RAC

High Availability for SAP Resources (through SAPCTL)
Oracle Clusterware can provide high availability for SAP resources just as it does for Oracle
resources. Oracle has created an Oracle Clusterware tool, SAP Control (SAPCTL), to enable
customers to easily manage SAP high availability resources. SAPCTL provides an easy-to-use
interface to administer the resources, scripts, and dependencies of Oracle Clusterware and SAP
high availability components. SAPCTL consolidates the functionality of the Oracle command-
line tools by enabling SAP customers to easily manage the SAP Enqueue Service for ABAP and
JAVA, the SAP Replication Service for ABAP and JAVA, and the additional virtual IP addresses
used by the SAP Enqueue Service for ABAP and/or JAVA. In addition to the critical SAP high
availability components, namely the SAP Enqueue and SAP Replication Services, SAPCTL provides an interface for the protection of an arbitrary number of SAP application instances. The SAP Central Instance (CI) or SAP application instances (DV) are possible candidates to run under SAPCTL supervision.
Data Guard for SAP
Oracle Data Guard is the most effective and comprehensive data availability, data protection, and
disaster recovery solution for enterprise databases. It provides the management, monitoring, and
automation software infrastructure to create and maintain one or more synchronized standby
databases to protect data from failures, disasters, errors, and corruptions. Data Guard standby
databases deliver high return on investment when used for reports, backups, and testing.
Administrators can choose either manual or automatic failover of the production system to a standby system if the primary fails, in order to maintain high availability for mission-critical applications without downtime.
Data Guard standby databases can be located at remote disaster recovery sites thousands of miles
away from the production data center, or they may be located in the same city, same campus, or
even in the same building. If the production database becomes unavailable because of a planned
or an unplanned outage, Data Guard can switch any standby database to the production role,
thus minimizing downtime and preventing any data loss.
Oracle Data Guard 11g Release 2 redefines what users should expect from such solutions. Data
Guard is included with Oracle Database Enterprise Edition and provides the management,
monitoring, and automation software to create and maintain one or more synchronized standby
databases that protect data from failures, disasters, errors, and corruptions. It can address both High Availability and Disaster Recovery requirements and is the ideal complement to Oracle Real Application Clusters.
Data Guard functionalities for SAP customers:
• Snapshot Standby enables a physical standby database to be open read-write for testing or any
activity that requires a read-write replica of production data. A Snapshot Standby continues to
receive, but not apply, updates generated by the primary. These updates are applied to the
standby database automatically when the Snapshot Standby is converted back to a physical
standby database. Primary data is protected at all times.
• A physical standby database, because it is an exact replica of the primary database, can also be used to relieve the primary database of the overhead of performing backups.
• Automatic Gap Resolution: In cases where the primary and standby databases become
disconnected (network failures or standby server failures), and depending upon the protection
mode used, the primary database will continue to process transactions and accumulate a
backlog of redo that cannot be shipped to the standby until a new network connection can be
established. While in this state, Data Guard continually monitors standby database status,
detects when connection is re-established, and automatically resynchronizes the standby
database with the primary (step four in Figure 3). No administrative intervention is required as
long as the archive logs required to resynchronize the standby database are available on-disk at
the primary database. In the case of an extended outage where it is not practical to retain the
required archive logs, a physical standby can be resynchronized using an RMAN fast
incremental backup of the primary database.
• Oracle Data Validation: One of the significant advantages of Data Guard is its ability to use
Oracle processes to validate redo before it is applied to the standby database. Data Guard is a
loosely coupled architecture where standby databases are kept synchronized by applying redo
blocks, completely detached from possible data file corruptions that can occur at the primary
database. Redo is also shipped directly from memory (system global area), and thus is
completely detached from I/O corruptions on the primary. Corruption-detection checks occur
at a number of key interfaces during redo transport and apply.
• Managing a Data Guard Configuration: Primary and standby databases and their various
interactions may be managed by using SQL*Plus. Data Guard also offers a distributed
management framework called the Data Guard Broker, which automates and centralizes the
creation, maintenance, and monitoring of a Data Guard configuration. Administrators may
interact with the Broker using either Enterprise Manager Grid Control or the Broker’s
command-line interface (DGMGRL).
• Role Management Services: Data Guard Role Management Services quickly transition a
designated standby database to the primary role. A switchover is a planned operation used to
reduce downtime during planned maintenance, such as operating system or hardware
upgrades. Regardless of the transport service (SYNC or ASYNC) or protection mode utilized,
a switchover is always a zero data loss operation.
A failover brings a standby database online as the new primary database during an unplanned
outage of the primary database. A failover operation does not require the standby database to
be restarted in order to assume the primary role. Also, as long as the database files on the
original primary database are intact and the database can be mounted, the original primary can
be reinstated and resynchronized as a standby database for the new primary using Flashback
Database – it does not have to be restored from a backup.
• Fast-Start Failover: Fast-Start Failover allows Data Guard to automatically fail over to a
previously chosen, standby database without requiring manual intervention to invoke the
failover. A Data Guard Observer process continuously monitors the status of a Fast-Start
Failover configuration. If both the Observer and the standby database lose connectivity to the
primary database, the Observer attempts to reconnect to the primary database for a
configurable amount of time before initiating a fast-start failover. Fast-start failover is designed
to ensure that out of the three fast-start failover members - the primary, the standby and the
Observer - at least two members agree to major state transitions to prevent split-brain
scenarios from occurring. Once the failed primary is repaired and mounted, it must establish
connection with the Observer process before it can open. When it does, it will be informed
that a failover has already occurred and the original primary is automatically reinstated as a
standby of the new primary database. The simple, yet elegant architecture of fast-start failover
makes it excellent for use when both high availability and data protection are required.
• Automating Client Failover: The ability to quickly perform a database failover is only the first
requirement for high availability. SAP Applications must also be able to quickly drop their
connections from a failed primary database, and quickly reconnect to the new primary
database.
Effective SAP failover in a Data Guard context has three components:
• Fast database failover
• Fast start of database services on the new primary database
• Fast notification of clients and fast reconnection to the new primary database
In previous Oracle releases, one or more user-written database triggers were required to automate client failover, depending upon the configuration. Data Guard 11g Release 2 simplifies configuration significantly by eliminating the need for user-written triggers to automate client failover. Role transitions managed by the Data Guard Broker can automatically fail over the database, start the appropriate services on the new primary database, disconnect clients from the failed database, and redirect them to the new primary database – no manual intervention is required.
• Easy conversion of a physical standby database to a reporting database – A physical standby
database can be opened read/write for reporting purposes, and then flashed back to a point in
the past to be easily converted back to a physical standby database. At this point, Data Guard
automatically synchronizes the standby database with the primary database. This allows the
physical standby database to be utilized for read/write reporting activities for SAP applications
e.g. NetWeaver BI.
Many Oracle Data Guard 11g Release 2 functionalities are enhancements of features originally available with Oracle 10g, such as improved redo transmission, easy conversion of a physical standby database to a reporting database, and Real-Time Apply.
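Several of the capabilities above, such as Snapshot Standby, are driven through the Data Guard Broker command-line interface; a sketch (the database name is illustrative):

```text
-- Sketch: using a physical standby for read-write testing via DGMGRL
-- (database name 'sapstby' is illustrative)
DGMGRL> CONVERT DATABASE 'sapstby' TO SNAPSHOT STANDBY;

-- ... run tests against the read-write snapshot standby ...
-- Redo from the primary is still received, but not applied.

DGMGRL> CONVERT DATABASE 'sapstby' TO PHYSICAL STANDBY;
-- The accumulated redo is now applied and the standby resynchronizes.
```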
Patching of Oracle Databases and Real Application Clusters
MOPatch is a specially packaged wrapper utility around OPatch that simplifies the task of installing multiple Oracle database patches into an Oracle Home. MOPatch automates the process of unpacking the patches and calling OPatch for each of them. See SAP Notes 1027012 and 839182 for more details.
MOPatch is developed by Oracle’s SAP integration development team and is available for
download from SAP Service Marketplace. It is now integrated with the deployment procedures
of Enterprise Manager Grid Control to automate the orchestration of patching on Oracle
Databases. This automation significantly reduces time and effort involved in the manual patching
activity, see: http://www.oracle.com/newsletters/sap/products/database/oradb4sap_howto.html
Oracle Advanced Security
Oracle Advanced Security helps customers address regulatory compliance requirements by protecting sensitive data from unauthorized disclosure on the network, on backup media, and within the database. Oracle Advanced Security Transparent Data Encryption provides the industry's most advanced encryption capabilities for protecting sensitive information without requiring any changes to the existing application.
Prior to 11g Release 2 SAP customers could implement security features such as Column
Encryption through Transparent Data Encryption (TDE) and Client Server Network
Encryption, to secure the data transfer between SAP instances and the database server.
Unlike most database encryption solutions, TDE is completely transparent to existing applications, with no triggers, views, or other application changes required. Data is transparently encrypted when written to disk and transparently decrypted after an application user has successfully authenticated and passed all authorization checks.
Tablespace Encryption
Starting with Oracle Database 11g it is possible to encrypt entire tablespaces. This makes it much
easier to ensure that all relevant data is encrypted because everything stored in the tablespace gets
encrypted automatically. Tablespace encryption means entire application tables can be
transparently encrypted. Data blocks will be transparently decrypted as they are accessed by the
database.
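Creating an encrypted tablespace is a one-time DDL step; a sketch with illustrative names and file paths (an open Oracle Wallet holding a TDE master key is assumed):

```sql
-- Sketch: encrypted tablespace (illustrative names; requires a TDE wallet)
CREATE TABLESPACE psapsr3_enc
  DATAFILE '/oracle/C11/sapdata1/sr3enc_1/sr3enc.data1' SIZE 1G
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);
```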
RMAN Backup Encryption (Oracle Secure Backup)
Lost or stolen tapes are a frequent cause of sensitive data loss. Oracle Secure Backup encrypts tapes and provides centralized tape backup management for the entire Oracle environment, protecting the Oracle database.
Oracle Secure Backup protects heterogeneous UNIX, Linux, Windows, and Network Attached Storage (NAS) file system data as well as the Oracle database, providing tape backup management for the entire Oracle environment:
• Oracle database to tape through integration with Recovery Manager (RMAN) supporting
versions Oracle9i to Oracle Database 11g.
• Optimized Oracle database backups to tape provide unparalleled performance achieving 10-
25% faster backups than comparable media management utilities with up to 30% less CPU
utilization
• Backup encryption using AES128, AES192 or AES256 encryption algorithms
• File system data protection of local and distributed servers
• Policy-based tape backup management
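At the database level, backup encryption is enabled through RMAN configuration; a minimal sketch, not Oracle Secure Backup-specific setup (a configured wallet or a passphrase is assumed, and the commands run in the RMAN client):

```text
RMAN> CONFIGURE ENCRYPTION FOR DATABASE ON;
RMAN> CONFIGURE ENCRYPTION ALGORITHM 'AES256';
-- Password-based encryption as an alternative to the wallet:
RMAN> SET ENCRYPTION ON IDENTIFIED BY "MyBackupPass" ONLY;
```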
Data Guard Secure Transmission of Redo Data
Because a lack of security can directly affect availability, Data Guard provides a secure
environment and prevents tampering with redo data as it is being transferred to the standby
database. To enable secure transmission of redo data, set up every database in the Data Guard
configuration to use a password file, and set the password for the SYS user identically on every
system. The following is a summary of steps needed for each database in the Data Guard
configuration:
For each database in the Data Guard configuration, set the initialization parameter on each instance and create a password file:
• REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
• orapwd file=$ORACLE_HOME/dbs/orapw password=<passwd> entries=10
After you have performed these steps to set up security on every database in the Data Guard
configuration, Data Guard transmits redo data only after the appropriate authentication checks
using SYS credentials are successful. This authentication can be performed even if Oracle
Advanced Security is not installed and provides some level of security when shipping redo data.
Secure Database exports with Encryption
For years, Oracle customers have found the import/export utilities a convenient way to move small amounts of data from one database to another. Oracle Data Pump 11g provides the ability to encrypt data as it is written to the export file, providing additional protection for credit card numbers and other sensitive business data.
Oracle Data Pump can easily encrypt an entire export file using one of these three methods:
• Protected by the Transparent Data Encryption master encryption key
• Protected by a passphrase
• Protected by both passphrase and Oracle Transparent Data Encryption master encryption key
Using Oracle Transparent Data Encryption, Oracle Data Pump uses the Transparent Data
Encryption master encryption key either from the Oracle Wallet or a Hardware Security Module
(HSM).
Using a passphrase, Oracle Data Pump uses the passphrase supplied on the command line as the
key for the encryption algorithm. This is beneficial if the export file is to be imported into
another database, where the matching master encryption key is not available, but the temporary
passphrase can be shared with the receiving site.
If using both passphrase and TDE master encryption key, the export file can be decrypted
transparently if the TDE master encryption key is available, or by providing a passphrase. This is
convenient when export files are to be imported back into the source database, and shipped off
to other locations where the matching TDE master encryption key is not available, but the
temporary passphrase can be shared with the receiving site.
Oracle Data Pump supports the AES encryption algorithm with key sizes ranging from 128 to 256 bits.
Oracle Data Pump command line parameters can be used to specify the granularity of data
encryption in the export file. For example, Data Pump can be instructed to encrypt all
information or only those columns currently encrypted using Oracle Transparent Data
Encryption.
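On the command line, these options map to Data Pump parameters; a sketch with illustrative schema and file names (expdp prompts for the connecting user's password):

```text
# Dual mode: the dump can be decrypted with the TDE master key
# or with the temporary passphrase shared with the receiving site
expdp system schemas=SAPSR3 dumpfile=sapsr3.dmp \
  encryption=all encryption_mode=dual \
  encryption_password=TempPass123 encryption_algorithm=AES256

# Encrypt only the columns already protected by TDE
expdp system schemas=SAPSR3 dumpfile=sapsr3_cols.dmp \
  encryption=encrypted_columns_only
```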
SecureFile Encryption
In 11g, Oracle has extended the encryption capability to SecureFiles and uses the Transparent
Data Encryption (TDE) syntax. The database supports automatic key management for all
SecureFile columns within a table and transparently encrypts/decrypts data, backups and redo
log files. Applications require no changes and can take advantage of 11g SecureFiles using TDE
semantics. SecureFiles supports the following encryption algorithms:
• 3DES168: Triple Data Encryption Standard with a 168-bit key size
• AES128: Advanced Encryption Standard with a 128-bit key size
• AES192: Advanced Encryption Standard with a 192-bit key size (default)
• AES256: Advanced Encryption Standard with a 256-bit key size
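SecureFile encryption uses the familiar TDE column syntax; a sketch with illustrative table and column names (an open Oracle Wallet is assumed):

```sql
-- Sketch: encrypted SecureFile LOB (illustrative names; requires a TDE wallet)
CREATE TABLE docs (
  id  NUMBER PRIMARY KEY,
  doc BLOB
)
LOB (doc) STORE AS SECUREFILE (ENCRYPT USING 'AES256');
```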
Database Vault
Outsourcing, application consolidation, and increasing concerns over insider threats have
resulted in an almost mandatory requirement for strong controls on access to sensitive
application data. In addition, regulations such as Sarbanes-Oxley (SOX), Payment Card Industry
(PCI), and the Health Insurance Portability and Accountability Act (HIPAA) require strong
internal controls to protect sensitive information such as financial, healthcare, and credit card records. Oracle Database Vault enforces real-time preventive controls and separation-of-duty
the Oracle Database to secure the SAP application data.
Oracle Database Vault Protection for SAP enables SAP customers to prevent access to
application data by privileged database users, enforce separation-of-duty, and provide stronger
access control with multi-factor authorization (Oracle Database Vault is currently in controlled
availability as described in SAP Note 1355140). Database Vault enforces security controls even
when a database user bypasses the application and connects directly to the database. Database
Vault certification with SAP applications benefits customers by:
• Preventing privileged user access to application data using protection realms for the SAP
ABAP stack and the SAP Java stack
• Enforcing separation of duty in the Oracle Database while allowing SAP administrators to
perform their duties and protecting their SAP administration roles
• Providing SAP-specific Database Vault protection policies for SAP BR*Tools

24
Oracle Database: the Database of Choice for Deploying SAP Solutions

• Implementing all Database Vault protections transparently and without any change to the SAP
application code
Preventing Privileged User Access: Database administrators hold highly trusted positions within
the enterprise. With Database Vault realms, enterprises increase security by preventing access to
application data even if the request is coming from privileged users. This is especially important
when a privileged account is compromised or accessed outside normal business hours or from an
un-trusted IP address. The regular tools used by administrators to help manage and tune the
Oracle database continue to work as before, but they can no longer be used to access SAP
application data.
Enforcing Separation-of-Duty: Database Vault helps administrators manage operations more
securely by providing fine-grained controls on database operations such as creating accounts, and
granting privileges. For more information and White Paper see:
http://www.oracle.com/newsletters/sap/products/dbvault.html
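
For illustration only, the shape of a realm protecting an SAP schema can be sketched with the DBMS_MACADM package. The realm name and schema owner below are hypothetical; in practice, SAP customers would deploy the SAP-specific policies delivered under SAP Note 1355140 rather than hand-built realms:

```sql
-- Hypothetical realm and schema names, for illustration only.
BEGIN
  DVSYS.DBMS_MACADM.CREATE_REALM(
    realm_name    => 'SAP Application Realm',
    description   => 'Protects the SAP application schema',
    enabled       => DBMS_MACUTL.G_YES,
    audit_options => DBMS_MACUTL.G_REALM_AUDIT_FAIL);

  DVSYS.DBMS_MACADM.ADD_OBJECT_TO_REALM(
    realm_name   => 'SAP Application Realm',
    object_owner => 'SAPSR3',   -- typical SAP ABAP schema; adjust to your system
    object_name  => '%',
    object_type  => '%');
END;
/
```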

More new 11g features

Data Guard Improvements

Fast-Start Failover for Maximum Performance Mode in a Data Guard Configuration

This feature enables fast-start failover to be used in a Data Guard configuration that is set up in
the maximum performance protection mode. Since there is some possibility of data loss when a
Data Guard failover occurs in maximum performance mode, administrators can now choose not
to do a fast-start failover if the redo loss exposure exceeds a certain amount.
This enhancement allows a larger number of disaster recovery configurations to take advantage
of Data Guard's automatic failover feature.
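
As a sketch, the redo loss exposure is bounded through the broker's FastStartFailoverLagLimit property (in seconds). The value below is illustrative; fast-start failover will not be initiated if the standby lags the primary by more than this limit:

```
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MAXPERFORMANCE;
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverLagLimit = 30;
DGMGRL> ENABLE FAST_START FAILOVER;
```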

User Configurable Conditions to Initiate Fast-Start Failover in a Data Guard Configuration

For lights out administration, you can enable fast-start failover to allow the broker to determine
if a failover is necessary and to initiate a failover to a pre-specified target standby database, with
either no data loss or a configurable amount of data loss. In addition, you can specify under
which conditions or errors you want a failover to be initiated. Oracle also provides the DBMS_DG
PL/SQL package to allow an application to request a fast-start failover.
This feature enables the administrator to choose and configure a list of conditions which, if they
occur, will initiate fast-start failover and increases the flexibility and manageability of customers'
disaster recovery configurations.
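
A minimal sketch of both mechanisms follows; the broker condition chosen here and the message string passed to DBMS_DG are examples only:

```
-- In DGMGRL: trigger fast-start failover on a predefined condition.
DGMGRL> ENABLE FAST_START FAILOVER CONDITION "Datafile Offline";
DGMGRL> SHOW FAST_START FAILOVER;

-- From the application, via PL/SQL (condition string is illustrative):
DECLARE
  status BINARY_INTEGER;
BEGIN
  status := DBMS_DG.INITIATE_FS_FAILOVER('Application detected fatal condition');
END;
/
```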

Data Guard Integration, Simplification, and Performance

The new features in the following sections simplify the configuration and use of Oracle Data
Guard. For example, some features provide a smaller set of integrated parameters, a unified
SQL/Broker syntax, and better integration with other High Availability features like RMAN and
Oracle RAC. Other features enhance the performance of key Oracle Data Guard areas such as
redo transport, gap resolution, and switchover/failover times.
• Enhanced Data Guard Broker Based Management Framework: The enhancements for this
release include:

– Data Guard Broker improved logging and tracing
– Oracle Managed Files (OMF) support for Data Guard Broker configuration files
– Data Guard Broker integration with database startup
– Data Guard Broker support for advanced redo transport settings
– Data Guard Broker support of prepared switchovers for Logical Standby

These enhancements make it possible to use Data Guard Broker in a wider variety of disaster
recovery configurations.

Support Up to 30 Standby Databases

The number of standby databases that a primary database can support is increased from 9 to 30
in this release.
The capability to create 30 standby databases, combined with the functionality of the Oracle
Active Data Guard option, allows the creation of reader farms that can be used to offload large
scale read-only workloads from a production database.

Integration, Simplification, and Performance of Availability Features

Automatic Reporting of Corrupt Blocks

During instance recovery, if corrupt blocks are encountered, the DBA_CORRUPTION_LIST
is automatically populated. Block validation occurs at every level of backup, media recovery,
and instance recovery.

Automatic Block Repair

Automatic block repair allows corrupt blocks on the primary database or physical standby
database to be automatically repaired, as soon as they are detected, by transferring good blocks
from the other destination. In addition, RECOVER BLOCK is enhanced to restore blocks
from a physical standby database. The physical standby database must be in real-time query
mode.
This feature reduces time when production data cannot be accessed, due to block corruption, by
automatically repairing the corruptions as soon as they are detected in real-time using good
blocks from a physical standby database. This reduces block recovery time by using up-to-date
good blocks from a real-time, synchronized physical standby database as opposed to disk or tape
backups or flashback logs.
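
When manual intervention is wanted, the enhanced RMAN command can be sketched as follows (the datafile and block numbers are hypothetical):

```
RMAN> RECOVER DATAFILE 7 BLOCK 233;
RMAN> RECOVER CORRUPTION LIST;   -- repair all blocks currently recorded as corrupt
```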

Block Media Recovery Performance Improvements

In prior releases, block media recovery needed to restore original block images from disk or tape
backup before applying needed archived logs. In this release, if flashback logging is enabled and
the flashback logs contain older, uncorrupted images of the corrupt blocks in question, then
these images are used, speeding up the recovery operation.
The benefit is a reduction in the time it takes for block media recovery by restoring block images
from flashback logs instead of from disk or tape backups.

Parallel Backup and Restore for Very Large Files

Backups of large data files now use multiple parallel server processes to efficiently distribute the
workload for each file. This improves backup performance, especially for very large files.

Enhanced Tablespace Point-In-Time Recovery (TSPITR)

Tablespace point-in-time recovery (TSPITR) is enhanced as follows:


• You now have the ability to recover a dropped tablespace.
• TSPITR can be repeated multiple times for the same tablespace. Previously, once a tablespace
had been recovered to an earlier point-in-time, it could not be recovered to another earlier
point-in-time.
• DBMS_TTS.TRANSPORT_SET_CHECK is automatically run to ensure that TSPITR
is successful.
• AUXNAME is no longer used for recovery set data files.
This feature improves usability with TSPITR.
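
A hedged sketch of a TSPITR run in RMAN follows; the tablespace name, timestamp, and auxiliary destination are all illustrative:

```
RMAN> RECOVER TABLESPACE sap_data
        UNTIL TIME "TO_DATE('2010-03-01 09:00:00','YYYY-MM-DD HH24:MI:SS')"
        AUXILIARY DESTINATION '/u01/aux';
```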

Online Application Maintenance and Upgrade

Invisible Indexes

Beginning with Release 11g, you can create invisible indexes. An invisible index is an index that is
ignored by the optimizer unless you explicitly set the
OPTIMIZER_USE_INVISIBLE_INDEXES initialization parameter to TRUE at the session or
system level. Making an index invisible is an alternative to making it unusable or dropping it.
Using invisible indexes, you can do the following:
• Test the removal of an index before dropping it.

• Use temporary index structures for certain operations or modules of an application without
affecting the overall application.
Unlike unusable indexes, an invisible index is maintained during DML statements.
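
The two use cases above can be sketched with a few statements; the table and index names are illustrative:

```sql
-- Illustrative table and index names.
CREATE INDEX orders_ship_idx ON orders (ship_date) INVISIBLE;

-- Let only this session's optimizer consider invisible indexes:
ALTER SESSION SET OPTIMIZER_USE_INVISIBLE_INDEXES = TRUE;

-- Test removal of an existing index without dropping it:
ALTER INDEX orders_cust_idx INVISIBLE;
ALTER INDEX orders_cust_idx VISIBLE;   -- restore it if performance regresses
```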

Online Index Creation and Rebuild Enhancements

In highly concurrent environments, the requirement of acquiring a DML-blocking lock at the
beginning and end of an online index creation and rebuild could lead to spikes of waiting DML
operations and, therefore, a short drop and spike of system usage. While this is not an overall
problem for the database, this anomaly in system usage could trigger operating system alarm
levels. This feature eliminates the need for DML-blocking locks when creating or rebuilding an
online index.
Online index creation and rebuild prior to this release required a DML-blocking lock at the
beginning and end of the rebuild for a short period of time. This meant that there would be two
points at which DML activity came to a halt. This DML-blocking lock is no longer required,
making these online index operations fully transparent.
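
The statements involved are the familiar online variants (names below are illustrative); in 11g they proceed without the short DML-blocking locks required in earlier releases:

```sql
-- Illustrative names; both statements allow concurrent DML throughout.
CREATE INDEX sales_date_idx ON sales (order_date) ONLINE;
ALTER INDEX sales_date_idx REBUILD ONLINE;
```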

RMAN Integration, Simplification, and Performance

Archive Log Management Improvements

This feature provides the following enhancements:


• Ensure that archive logs are deleted only when not needed by required components (for
example, Data Guard, Streams, and Flashback).
• In a Data Guard environment, allow all standby destinations to be considered where logs are
applied (instead of just mandatory destinations), before marking archive logs to be deleted.
This configuration is specified using CONFIGURE ARCHIVELOG DELETION POLICY
TO APPLIED ON ALL STANDBY.

• Allow optional archive log destination to be utilized in the event that the flash recovery area is
inaccessible during backup. Archive logs in this optional destination can be deleted using
BACKUP .. DELETE INPUT or DELETE ARCHIVELOG.
This feature simplifies archive log management when used by multiple components. It also
increases availability when backing up archive logs, when an archive log in the flash recovery area
is missing or inaccessible. In this case, the backup will failover to an optional archive log
destination to continue backing up the archive logs.
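
The standby-aware deletion policy quoted above is set once in RMAN, for example:

```
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
```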

Fast Incremental Backups on Physical Standby Database

You can enable block change tracking on a physical standby database. RMAN uses the change
tracking file, on incremental backups, to quickly identify the changed blocks since the last
incremental backup and to read and write just those blocks.
This feature enables faster incremental backups on a physical standby database than in previous
releases.
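
A minimal sketch, run on the physical standby (the file path is illustrative, and block change tracking on a standby requires the Oracle Active Data Guard option):

```sql
-- Run on the physical standby; path is illustrative.
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/oradata/STBY/bct.chg';
```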

Server Manageability

Global Oracle RAC ASH Report + ADDM Backwards Compatibility

The Active Session History (ASH) report now includes cluster-wide information, greatly
enhancing its utility in identifying and troubleshooting performance issues that span nodes of a
cluster database.
Automatic Database Diagnostic Monitor (ADDM) has been enhanced to be backward
compatible allowing it to analyze archived data, or data preserved through database upgrades,
allowing a customer to do performance comparisons over a longer time frame.

ADDM for Oracle Real Application Clusters

ADDM has been enhanced to provide comprehensive cluster-wide performance diagnostic and
tuning advice. A special mode of ADDM analyzes an Oracle RAC database and reports on issues
that are affecting the entire cluster as well as those that are affecting individual instances.
This feature is particularly helpful in tuning global resources such as I/O and interconnect traffic
and makes the tuning of Oracle RAC databases easier and more precise.

Oracle Expertise in the SAP environment


The Solution Center SAP Support and Service offers SAP customers the following services:
• Advanced Customer Services (ACS)
• Performance analysis and tuning
• Development of concepts for backup/restore/recovery, high availability, and administration
• Security concepts
• Optimization of ABAP/4 programs (performance improvement)
• Migration services for customers who want to use Oracle as the database for SAP applications
(from Informix, MaxDB, DB2, or SQL Server to Oracle)
• Migration services from “Oracle to Oracle” (e.g., Tru64 to HP-UX)
• Integration products and services

Conclusion
Oracle has a large and growing share of the database market used to deploy SAP. This is not by
chance: both companies invest in making Oracle technology work well for SAP, and Oracle has a
long track record of delivering the de facto standard database for enterprise applications. SAP
customers continue to choose Oracle because of the scalability, high availability, manageability,
and security benefits they obtain.

Appendix
Certification Number 2009037: The SAP BI-D Standard Application Benchmark performed
on August 11, 2009 by Fujitsu in Walldorf, Germany has been certified with the following data:
Throughput/hour (query navigation steps): 609,349. CPU utilization of servers: 96% (Node 1
active: 96%. Node 2 active: 95%). Operating system all servers: SuSE Linux Enterprise Server 10.
RDBMS: Oracle 10g Real Application Clusters (RAC). Technology platform release: SAP
NetWeaver 7.0. Configuration: 2 servers (2 active nodes): Fujitsu Primergy RX300-S5, 2
processors / 8 cores / 16 threads, Intel Xeon Processor X5570, 2.93 GHz, 64 KB L1 cache and
256 KB L2 cache per core, 8 MB L3 cache per processor, 96 GB main memory
Certification Number 2009036: The SAP BI-D Standard Application Benchmark performed
on August 11, 2009 by Fujitsu in Walldorf, Germany has been certified with the following data:
Throughput/hour (query navigation steps): 320,363. CPU utilization of servers: 99% (one node
active: 99%). Operating system all servers: SuSE Linux Enterprise Server 10. RDBMS: Oracle
10g Real Application Clusters (RAC). Technology platform release: SAP NetWeaver 7.0.
Configuration: 1 server (1 active node): Fujitsu Primergy RX300-S5, 2 processors / 8 cores / 16
threads, Intel Xeon Processor X5570, 2.93 GHz, 64 KB L1 cache and 256 KB L2 cache per core,
8 MB L3 cache per processor, 96 GB main memory
Certification Number 2008063: The SAP BI-D Standard Application Benchmark performed
on October 17, 2008 by IBM in Rochester, MN, USA was certified on October 31, 2008 with the
following data: Throughput/hour: 182,112 query navigation steps. CPU utilization of central
system: 94%. Operating system, central server: i 6.1. RDBMS: DB2 for i 6.1. Platform release:
SAP NetWeaver 7.0 (2004s). Configuration: Central server: IBM Power System 570, 4 processors
/ 8 cores / 16 threads, POWER6, 5 GHz, 128 KB L1 cache and 4 MB L2 cache per core, 32 MB
L3 cache per processor, 128 GB main memory
Certification Number 2009044: The SAP BI-D Standard Application Benchmark performed
on August 11, 2009 by Fujitsu in Walldorf, Germany has been certified with the following data:
Throughput/hour (query navigation steps): 900,309. CPU utilization of servers: 93% (Node 1
active: 94%. Node 2 active: 93%. Node 3 active: 93%). Operating system all servers: SuSE Linux
Enterprise Server 10. RDBMS: Oracle 10g Real Application Clusters (RAC). Technology
platform release: SAP NetWeaver 7.0. Configuration: 3 servers (3 active nodes): Fujitsu Primergy
RX300-S5, 2 processors / 8 cores / 16 threads, Intel Xeon Processor X5570, 2.93 GHz, 64 KB
L1 cache and 256 KB L2 cache per core, 8 MB L3 cache per processor, 96 GB main memory
Certification Number 2009045: The SAP BI-D Standard Application Benchmark performed
on August 11, 2009 by Fujitsu in Walldorf, Germany has been certified with the following data:
Throughput/hour (query navigation steps): 1,165,742. CPU utilization of servers: 88% (Node 1
active: 89%. Node 2 active: 88%. Node 3 active: 88%. Node 4 active: 88%). Operating system all
servers: SuSE Linux Enterprise Server 10. RDBMS: Oracle 10g Real Application Clusters (RAC).
Technology platform release: SAP NetWeaver 7.0. Configuration: 4 servers (4 active nodes):
Fujitsu Primergy RX300-S5, 2 processors / 8 cores / 16 threads, Intel Xeon Processor X5570,
2.93 GHz, 64 KB L1 cache and 256 KB L2 cache per core, 8 MB L3 cache per processor, 96 GB
main memory
Certification Number 2006071: The SAP SD standard mySAP ERP 2004 application
benchmark performed on August 19, 2006 by Fujitsu in Paderborn, Germany was certified on
August 31, 2006 with the following data: 12,500 SD (Sales and Distribution) Benchmark users,
1.83 seconds average dialog response time, 1,268,000 fully processed order line items per hour,
3,804,000 dialog steps per hour, 63,400 SAPS, 0.014 seconds/0.046 seconds average database
request time (dia/upd), 85 percent CPU utilization of central server. Configuration of the central
server was as follows: Fujitsu PRIMEQUEST 580, 32 processors / 64 cores / 128 threads, Dual-
Core Intel Itanium 2 9050, 1.6 GHz, 32 KB(I) + 32 KB(D) L1 cache, 2 MB(I) + 512 KB(D) L2
cache, 24 MB L3 cache, 512 GB main memory. The server was running the SuSE Linux
Enterprise 9 operating system, Oracle Database 10g, and SAP ECC 5.0.
Certification Number 2006068: The SAP SD standard mySAP ERP 2004 application benchmark
performed on August 5, 2006 by Fujitsu in Paderborn, Germany was certified on August 31, 2006
with the following data: 8,900 SD (Sales and Distribution) Benchmark users, 1.95 seconds average
dialog response time, 893,670 fully processed order line items per hour, 2,681,000 dialog steps per
hour, 44,680 SAPS, 0.043 seconds/0.042 seconds average database request time (dia/upd), 90 percent
CPU utilization of central server. Configuration of the central server was as follows: Fujitsu
PRIMEQUEST 580, 32 processors / 64 cores / 128 threads, Dual-Core Intel Itanium 2 9050, 1.6
GHz, 32 KB(I) + 32 KB(D) L1 cache, 2 MB(I) + 512 KB(D) L2 cache, 24 MB L3 cache, 512 GB
main memory. The server was running Windows Server 2003 Datacenter Edition, SQL Server 2005
database, and SAP ECC 5.0.
Certification Number 2008064: The SAP SD standard SAP ERP 6.0 (2005) application benchmark
performed on November 05, 2008 by HP in Marlboro, MA, USA was certified on November 12,
2008 with the following data: 7,010 SD (Sales and Distribution) Benchmark users, 1.88 seconds
average dialog response time, 708,000 fully processed order line items per hour, 2,124,000 dialog
steps per hour, 35,400 SAPS, 0.016 seconds/0.022 seconds average database request time (dia/upd),
90 percent CPU utilization of central server. Configuration of the central server was as follows: HP
ProLiant DL785 G5, 8 processors / 32 cores / 32 threads, Quad-Core AMD Opteron Processor
8384, 2.7 GHz, 128 KB L1 cache and 512 KB L2 cache per core, 6 MB L3 cache per processor, 128
GB main memory. The server was running the SuSE Linux Enterprise Server 10 operating system,
Oracle Database 10g, and SAP ECC 6.0.
Certification Number 2008026: The SAP SD standard SAP ERP 6.0 (2005) application benchmark
performed on April 22, 2008 by HP in Houston, TX, USA was certified on May 5, 2008 with the
following data: 5,230 SD (Sales and Distribution) Benchmark users, 1.99 seconds average dialog
response time, 523,670 fully processed order line items per hour, 1,571,000 dialog steps per hour,
26,180 SAPS, 0.030 seconds/0.028 seconds average database request time (dia/upd), 92 percent CPU
utilization of central server. Configuration of the central server was as follows: HP ProLiant DL785, 8
processors / 32 cores / 32 threads, Quad-Core AMD Opteron processor 8360 SE, 2.5 GHz, 128 KB
L1 cache and 512 KB L2 cache per core, 2 MB L3 cache per processor, 128 GB main memory. The
server was running Windows Server 2003 Enterprise Edition, SQL Server 2008 database, and SAP
ECC 6.0.
Certification Number: 2008013: The SAP SD Parallel Standard Application benchmark performed
on November 26, 2007 by IBM in Beaverton, OR, USA, was certified by SAP on March 25, 2008
with the following data: 37,040 SAP SD-Parallel Benchmark users, 1.86 seconds average dialog
response time, 3,749,000 fully processed order line items per hour, 11,247,000 dialog steps per hour,
187,450 SAPS. Server configuration: 5X IBM System p 570, 8 processors/16 cores/32 threads,
POWER6, 4.7 GHz, 128 KB L1 cache and 4 MB L2 cache per core, 32 MB L3 cache per processor,
128 GB main memory, running AIX 5L version 5.3, Oracle 10g Real Application Clusters and SAP
ERP 6.0.
Certification Number: 2008012: The SAP SD Parallel Standard Application benchmark performed
on November 26, 2007 by IBM in Beaverton, OR, USA, was certified by SAP on March 25, 2008
with the following data: 30,016 SAP SD-Parallel Benchmark users, 1.86 seconds average dialog
response time, 3,036,000 fully processed order line items per hour, 9,108,000 dialog steps per hour,
151,800 SAPS. Server configuration: 4X IBM System p 570, 8 processors/16 cores/32 threads,
POWER6, 4.7 GHz, 128 KB L1 cache and 4 MB L2 cache per core, 32 MB L3 cache per processor,
128 GB main memory, running AIX 5L version 5.3, Oracle 10g Real Application Clusters and SAP
ERP 6.0.
Certification Number: 2008011: The SAP SD Parallel Standard Application benchmark performed
on November 26, 2007 by IBM in Beaverton, OR, USA, was certified by SAP on March 25, 2008
with the following data: 22,416 SAP SD-Parallel Benchmark users, 1.94 seconds average dialog
response time, 2,252,330 fully processed order line items per hour, 6,757,000 dialog steps per hour,
112,620 SAPS. Server configuration: 3X IBM System p 570, 8 processors/16 cores/32 threads,
POWER6, 4.7 GHz, 128 KB L1 cache and 4 MB L2 cache per core, 32 MB L3 cache per processor,
128 GB main memory, running AIX 5L version 5.3, Oracle 10g Real Application Clusters and SAP
ERP 6.0.
Certification Number: 2008010: The SAP SD Parallel Standard Application benchmark performed
on November 26, 2007 by IBM in Beaverton, OR, USA, was certified by SAP on March 25, 2008
with the following data: 15,520 SAP SD-Parallel Benchmark users, 1.94 seconds average dialog
response time, 1,559,330 fully processed order line items per hour, 4,678,000 dialog steps per hour,
77,970 SAPS. Server configuration: 2X IBM System p 570, 8 processors/16 cores/32 threads,
POWER6, 4.7 GHz, 128 KB L1 cache and 4 MB L2 cache per core, 32 MB L3 cache per processor,
128 GB main memory, running AIX 5L version 5.3, Oracle 10g Real Application Clusters and SAP
ERP 6.0.

Oracle Database: The Database of Choice for
Deploying SAP Solutions

March 2010

Author: Abdelrhani Boukachabine

Copyright © 2010, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only and
the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other
warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or
fitness for a particular purpose. We specifically disclaim any liability with respect to this document and no contractual obligations are
formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any
means, electronic or mechanical, for any purpose, without our prior written permission.

Oracle Corporation
World Headquarters
