
EUROPEAN SAP TECHNICAL EDUCATION CONFERENCE 2002
WORKSHOP
Sept. 30 – Oct. 2, 2002, Bremen, Germany

SAP liveCache Administration & Monitoring

Werner Thesing

Learning Objectives

As a result of this workshop, you will be able to:
„ Integrate your SAP liveCache into the APO system
„ Start, stop and initialize your SAP liveCache
„ Configure your SAP liveCache
„ Take backups and restore them
„ React to critical situations
„ Monitor the system regarding
‹ Consistent views and garbage collection
‹ Memory areas
‹ Task structure
‹ Performance

About the workshop

„ The workshop contains 12 units
„ Each unit consists of a lecture (15 min)
„ Most units also include exercises (10 min) and solutions (5 min)
„ Feel free to ask your questions during the exercises
„ Breaks every 2 hours for 15 min

Agenda

(1) liveCache concepts and architecture
(2) liveCache integration into R/3 via transaction LC10
(3) Basic administration (starting / stopping / initializing)
(4) Complete data backup
(5) Data storage
(6) Advanced administration (log backup / incremental data backup / add volume)
(7) Consistent views and garbage collection
(8) Memory areas
(9) Task structure
(10) Recovery
(11) Configuration
(12) Performance analysis
(13) Summary


z In this workshop you will learn the main tasks of a liveCache administrator. Moreover, the architecture and the concepts of the liveCache are introduced, which gives you an understanding of liveCache behavior and ideas on how to analyze and overcome performance bottlenecks.
z This workshop refers to liveCache release 7.4.


liveCache
Concepts and Architecture

z In this unit the concepts and the architecture of the liveCache are introduced.

Why the liveCache has been developed (1)

Disk-based approaches and data storage based on relational schemas are not suitable for high-performance processing and thus not for Advanced Planning and Optimization (APO)


z For the development of the Advanced Planning and Optimization (APO) component, a database system was needed that allows fast access to data organized in a complex network.
z Using conventional relational database management systems as data sources for APO showed poor performance, since disk I/O and the inappropriate data representation in the relational schema limited throughput.

Why the liveCache has been developed (2)

[Diagram: Traditional buffering and data access/transfer times across presentation client, application server and database server. Access to the application buffer takes about 0.1 ms, to the database buffer about 1 ms, and to the database disks (8 KB pages) about 10 ms. Annotations: good performance only if all data fit into the application buffer; data must be brought to the application; comprehensive computations trigger huge data traffic and disk I/O; buffered data is still relational.]


z Reading data from an application buffer, which is in the same address space as the application, takes about 0.1 ms. Reading data from a database takes about 1 ms if the corresponding record is already in the database buffer, and about 10 ms if the record must first be read from a hard disk.
z If the application buffer is too small to accommodate all required data, there is heavy data traffic between the application and the database server.
z An additional problem of traditional buffering is that, after being read into the application buffer, the data is still organized in a relational schema, which is not appropriate for describing complex networks.
z To achieve a good performance for applications which require access to a large amount of
data (i.e. APO) it is necessary to bring the application logic and the application data
together in one address space. One possible solution could be to shift the application
logic from the application server to the database server via stored procedures. However,
this impairs the scalability of R/3. On the other hand one could shift all required data to the
application server. But this requires that each server is equipped with very large main
memory. Furthermore, the synchronization of the data changed on each server with the
data stored in the database server is rather complicated.

Why the liveCache has been developed (3)

[Diagram: The liveCache as a dedicated planning server tier (Advanced Planner & Optimizer) next to the presentation client, application server and database server, with message-based synchronization between liveCache and database. Annotations: minor performance impact on transactional processing; concurrency and transactional semantics supported; application logic and data brought together; huge data traffic and disk I/O on comprehensive computations avoided; buffered data structures optimized for advanced business applications; dedicated hardware/software system.]

z To overcome these performance problems, the liveCache was introduced: a dedicated server tier for the main-memory-based temporary storage of volatile shared data.

What is the liveCache (1)

liveCache is an instance type of the relational DBMS SAP DB which was expanded by properties of an ODBMS

liveCache is an object management system for concurrent C++ programs which run in a single address space

liveCache provides an API to create, read, store and delete OMS objects

liveCache provides a transaction management for objects (commit, rollback)

liveCache ensures persistence of OMS objects including recovery


z liveCache is a program for high-performance management of objects used by APO application programs (COM routines). These objects - called OMS objects - contain application data, whose meaning is unknown to the liveCache. Ideally, all objects are located in the main memory - in the global data cache - of the liveCache, but they may be swapped out to disk in case of memory shortage.
z COM routines run as stored procedures in the address space of the liveCache and are called from APO ABAP programs which run on the APO application servers. Because COM routines run in the address space of the liveCache, they have direct access to OMS objects, and navigation over networks of OMS objects is very fast. The typical access time is less than 10 microseconds per object.
z liveCache provides classes and class methods to the COM routines to administer their objects. Technically: COM routines inherit class methods from the liveCache base classes to create, read, store and delete OMS objects.
z liveCache relieves the application programs of implementing their own transaction and lock management. The application program can either commit or roll back all changes made to several objects in a business transaction.
z liveCache ensures the existence of OMS objects beyond the lifetime of COM routines. That is why liveCache uses the term persistent OMS objects. When the liveCache is stopped or when a checkpoint is requested, all objects are stored on hard disks.

What is the liveCache (2)

liveCache provides suitable representations of complex data structures, like networks and trees, based on object references

liveCache is used for fast navigation in large and complex networks

liveCache offers consistent views to isolate navigation on data structures from simultaneous changes on these data structures

liveCache provides the complete functionality of an OLTP database which can be used in COM routines via a SQL interface


z The APO application uses a complex object-oriented application model. This model is easier to implement with object-oriented programming than with the relational structures of a relational database. Therefore, liveCache supports object-oriented programming by providing adequate C++ methods/functions.
z liveCache provides the application with the concept of consistent views to isolate the data of an application from simultaneous updates by other users (reader isolation).
z COM routines are implemented in the liveCache as stored procedures. Therefore, calling a COM routine from ABAP is quite simple using EXEC SQL.

liveCache objective

[Diagram: ABAP on the application server accesses the RDBMS on the database server in more than 1 ms; C++ routines on the liveCache server access the liveCache data storage in less than 10 µs.]

The main target of the liveCache is to optimize performance:
„ liveCache resides in main memory and therefore avoids disk I/O
„ Object orientation enables efficient programming techniques
„ C++ applications run in the address space of liveCache
„ Objects are referenced via logical pointers (= OID)


z In a standard SAP system, typical database request times are above 1 ms. For data-intensive applications, a new technology is required in order to achieve better response times. liveCache has been developed to reduce typical database request times to below 10 µs. Key factors in achieving these response times are:
y Accesses to liveCache data usually do not involve any disk I/O.
y The processes accessing the data are optimized C++ routines that run in the process context of the liveCache on the liveCache server.
y Object orientation enables the use of efficient programming techniques. Besides - compared to a relational database, where many related tables may have to be accessed to retrieve all requested information - one object contains all the relevant information, and the need to access numerous objects or tables is eliminated. In other words, the typical liveCache data structure is NOT a relational data table.
y Objects are referenced via logical pointers (OIDs). In contrast to referencing records via keys (as in standard SQL), no search in an index tree is required.
z APO is the first product to use liveCache technology.

liveCache architecture (1)

[Diagram: The application server communicates with the liveCache through the liveCache interface of the R/3 kernel (DBDS / native SQL), sending SQL packets. Inside the liveCache, COM objects (DLL) process the requests; the liveCache stores its data on devices.]


z ABAP Programs and the APO optimizers use native SQL for communicating through the
standard SAP DB interface to liveCache. liveCache has an SQL interface that is used to
communicate with the SAP instances. With native SQL, ABAP programs call stored
procedures in the liveCache that point to Component Object Model (COM) routines written
in C++. An SQL class provides SQL methods to access the SQL data through the COM
routines.
z The COM routines are part of a dynamic link library that runs in the process context of the
liveCache instance. In the Windows NT implementation of liveCache, COM routines and
their interface are registered in the Windows NT Registry. For the Unix implementation, a
registry file is provided by liveCache. A persistent C++ class provides the COM routines
with access to the corresponding Object Management System (OMS) data that is stored
in the liveCache.

z COM routines in APO are delivered as DLL libraries (SAPXXX.DLL and SAPXXX.LST) on NT or as shared libraries (SAPXXX.ISO and SAPXXX.LST) on UNIX. The application-specific knowledge is built into these COM routines based on the concept of object orientation.

liveCache architecture (2)

[Diagram: liveCache internal architecture. Requests from the application server arrive via the liveCache interface of the R/3 kernel (DBDS / native SQL) at the command analyzer. A framework for COM objects (DLL) with SQL class embedding connects the application code to the SQL basis (B* trees) and the OMS basis (page chains), which both rest on a common DBMS basis. SQL/object data devices and log devices hold the persistent data.]


z liveCache is a hybrid of a relational and an object-oriented database.
z The relational part of the liveCache is available as the open source database SAP DB (see www.sapdb.org).
z The SQL part as well as the OMS part of the liveCache are based on the same DBMS basis functionality, which supplies services such as transaction management, logging, device handling and caching mechanisms.
z Object and SQL data are stored on common devices.
z All liveCache data is stored in the caches as well as on disks in 8 KB blocks called pages.
z liveCache stores the OMS objects in page chains, the pages in a chain being linked by pointers. SQL table data is stored in B* trees. SQL and OMS data reside together in the data cache and the data devices of the liveCache.

liveCache administration tools

The liveCache can be administered by

„ Transaction LC10 in the SAPGUI


„ Database Manager CLI (DBMCLI) command line interface
„ Database Manager GUI (DBMGUI) graphical user interface
for Windows NT/2000 only
„ Web Database Manager (WEB DBM)


z liveCache, similar to the standard SAP RDBMS, can be administered from within the SAP system. The SAP transaction LC10 makes it possible to monitor, configure and administer the liveCache.
z LC10 uses the Database Manager CLI (DBMCLI) to administer the liveCache. Therefore, all of this functionality is also available without an SAP system and can be performed with the „native“ database administration tool DBMCLI.
z In addition to the DBMCLI, the administration tool DBMGUI is available, which is a graphical user interface to the liveCache management client tool DBMCLI.
z While the DBMGUI runs only on Windows NT/2000, running the WEB DBM requires only an internet browser and the DBM Web Server, which can be installed anywhere in the network.
z DBMCLI, DBMGUI and WEB DBM should not be used for starting or stopping the liveCache, even though LC10 itself calls DBMCLI for starting or stopping the liveCache. They should only be used for changing liveCache parameters, defining backup media and for liveCache monitoring. The reason is that LC10, in addition to starting, stopping and initializing, also runs application-specific reports. Moreover, it registers the COM routines each time the liveCache is started.


liveCache Integration
into R/3 via LC10

z In this unit you will learn how to integrate an existing liveCache into the CCMS.

Transaction LC10


z Transaction LC10 introduces liveCache-specific administration functions within R/3 (≥ 4.6D). It allows the administration of multiple liveCaches.
z Transaction LC10 identifies liveCaches via a connection name (in the example above it is LCA_LAPTOP), which need not be the physical name of the liveCache as it was installed on the liveCache server.
z Integration button
y creates and modifies liveCache connections
z Monitoring button
y leads to the main screen of LC10
y liveCache administration (stop, start and initialization of the liveCache)
y changing the liveCache configuration
y watching and analyzing liveCache performance
y backup and recovery of the liveCache
z Console button
y displays the status of liveCache tasks
z Alert Monitor button
y reports error situations (liveCache-specific part of transaction RZ20)

liveCache integration into LC10 (1)


z Choose ‘Integration’ on the initial screen of LC10 to reach the integration screen.
z The integration data are required for the multi-db-connection from an R/3 system to the
liveCache via NATIVE SQL. They are stored in tables DBCON and DBCONUSR on the
RDBMS.
z The ‘Name of the database connection’ is the name used in the R/3 system for the NATIVE SQL connection to the liveCache.
z The ‘liveCache name’ is the name of the liveCache database. It can be different from the
name of the database connection.
z The server name in the ‘liveCache server name’ is case sensitive. It must be the same as
the output from the command ‘hostname’ on a DOS prompt or UNIX shell.
z The default user/password combinations are control/control for the DBM operator and
sapr3/sap for the standard liveCache user.
z The APO application server has to be stopped and started again after changes to the liveCache connection information. This guarantees that the R/3 system connects to the correct liveCache instance.

liveCache integration into LC10 (2)


z There are two possibilities to authorize a connection to the liveCache:
decentralized authorization:
y you have to authorize access to the liveCache on each APO application server
y on each application server you have to run the dbmcli command (via SM49):
dbmcli –d <liveCache name> -n <lc-server> -us <DBM user>,<DBM password>
central authorization:
y the central authorization data is stored in the APO database in the table DBCONUSR
y this authorization is recommended and it is the default
y new as of release 4.6D (APO 3.1)
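z Example (a sketch using the defaults from this workshop setup - liveCache name LCA, DBM user control/control; replace <lc-server> with your liveCache host, and note that the exact option syntax may differ slightly between dbmcli releases):
   dbmcli -d LCA -n <lc-server> -us control,control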

liveCache integration into LC10 (3)


z Execution of application-specific functions:
y To run an ABAP report automatically prior to or after liveCache start, stop or initialization, you can specify the report names in this section.
y The same report cannot be used more than once, unless a new name is used.
y The report names are stored in the table LCINIT within the APO database.
y The report /SAPAPO/DELETE_LC_ANCHORS has to be executed each time the liveCache has been initialized. This report is responsible for the integrity of the APO and the liveCache data.

liveCache integration into LC10 (4)


z When the alert monitor is activated, a number of performance-critical data (e.g. heap and device usage, cache hit rates) are collected periodically and displayed in the alert monitor, which can be reached by pressing ’Alert monitor’ on the initial screen of LC10.
z The alert monitor is activated by default if the liveCache was installed with the standard installation tool (LCSETUP).


Basic
Administration

z At the end of this unit you will be able to start, stop and initialize a liveCache and you will
know where to find liveCache diagnosis files.

liveCache status

Basic status
information


z This is the main screen of LC10, which can be reached by pressing ’liveCache Monitoring’ on the initial screen of LC10. It offers all services and information needed to administer the liveCache.
z Before this window appears, the R/3 system sends a request to the liveCache about its
status. The liveCache name and liveCache server information are stored in the table
DBCON as described in previous slides. The remaining information displays the output
from the status request. If the connection to a liveCache is not available, an error
message is displayed.
z The left frame of the screen shows a tree which contains all information and services
needed to administer the liveCache. The tree branches with the most important
information and services are opened by default.
z The right frame displays the details which belong to the activated branch of the service
tree.
z Initially, the screen belonging to the ‘Properties’ icon is active.
z The ‘DBM server version’ displays the version of the database manager server which is
responsible for the dbmcli communication to the liveCache.
z The ‘liveCache version’ shows the liveCache kernel build version.
z The traffic light at ‘liveCache status’ illustrates the operation mode of the liveCache.

liveCache operation modes

Three possible operation modes of liveCache:

OFFLINE: liveCache kernel processes and caches do not exist

ADMIN: liveCache kernel active (processes started, caches initialized, but not synchronized)

ONLINE: liveCache kernel active and ready to work


z There are three liveCache operating modes:


y OFFLINE: No liveCache kernel processes are running, memory areas (caches) are
not allocated. No user can use liveCache .
y ADMIN: The liveCache kernel is active, but caches are not yet synchronized with the
volumes. Users cannot connect to the liveCache. Only the liveCache administration
user can connect and perform administrative tasks like restoring the database.
y ONLINE: The liveCache kernel is active and data and log information is
synchronized between caches and volumes. Users can connect to the liveCache.
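z The operation mode can also be queried and changed with dbmcli (a sketch; the exact command names may differ between DBM server releases, and - as explained on the next slides - LC10 should be preferred for actually starting and stopping the liveCache):
   dbmcli -d LCA -n <lc-server> -u control,control db_state      (show the current operation mode)
   dbmcli -d LCA -n <lc-server> -u control,control db_admin      (bring the kernel into ADMIN mode)
   dbmcli -d LCA -n <lc-server> -u control,control db_online     (bring the kernel into ONLINE mode)
   dbmcli -d LCA -n <lc-server> -u control,control db_offline    (stop the kernel, OFFLINE mode)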

Starting, initializing and stopping the liveCache

Starting,stopping and
initializing the liveCache


z To start, stop or initialize the liveCache choose ‘Administration->Operating’ in the ‘liveCache: Monitoring’ screen. There you can find three buttons with the following meanings:
y Start liveCache starts the liveCache into online mode. After the restart, all data committed before the last shutdown (or crash) is available again.
y Initialize liveCache deletes the complete contents of the liveCache. (The next pages describe the initialization process in more detail.)
y Stop liveCache shuts the liveCache down into offline mode.
z Although starting, stopping and initializing the liveCache is also possible with DBMGUI or DBMCLI, it is strongly recommended to use transaction LC10. First, LC10 calls up an APO-specific report after starting the liveCache instance. If this report does not run, accesses of work processes to the liveCache may cause errors. Second, when stopping the liveCache instance, LC10 informs all work processes, which causes them to automatically reconnect the next time they access the liveCache. If the liveCache instance was stopped using DBMGUI or DBMCLI, a short dump occurs as soon as work processes try to access the liveCache again after a restart.

Initialize liveCache (1)


z Initializing the liveCache always formats the log volumes. If the data volumes do not already exist, they are created and formatted too.
z All liveCache data is lost after an initialization and has to be loaded again via the APO system or via a recovery.
z The program LCINIT.BAT with the option init is used to initialize the liveCache.
z The initialization process is logged in the log file LCINIT.LOG, which is automatically displayed at the end of the initialization process.

Initialize liveCache (2)

INIT LIVECACHE LCA (init) #

• Start liveCache from OFFLINE into ADMIN mode


• Format the log volumes with initial configuration
• Activate liveCache into ONLINE mode
• Load liveCache system tables
• Create liveCache user SAPR3
• Activate liveCache monitoring
• Registration of COM routines
• Load liveCache procedures

liveCache LCA successfully initialized #

Run report /SAPAPO/DELETE_LC_ANCHORS


z This slide demonstrates the steps of a liveCache initialization process.
z Formatting the log volumes can take some time; it depends on the size of the log volumes.
z Loading the system tables is needed for liveCache error messages and liveCache monitoring.
z The user sapr3 is the owner of the liveCache content. This user is re-created each time the liveCache is initialized.
z The registration of COM routines registers all application-specific routines, e.g. sapapo.dll for APO.
z In an APO system, the report /SAPAPO/DELETE_LC_ANCHORS must be executed immediately after the liveCache has been initialized.

liveCache Message Files: lcinit.log

Log files for starting, stopping and initializing the liveCache


z Each time the liveCache is started, stopped or initialized, a log file (LCINIT.LOG ) is
written which can be viewed in the branch ‘Logs->Initialization->Currently’ of the service
tree.
z The log files of previous starts, stops or initializations are displayed in ‘Logs->Initialization->History’.
z The tab ‘Controlfile’ of the selection ‘Problem Analysis->Logs->Initialization’ displays the
script LCINIT.BAT which is used to start, stop and initialize the liveCache.
z Whenever the liveCache is started, stopped or initialized successfully you can find a
message
liveCache <connection name> successfully started/stopped/initialized
at the end of the log file.

liveCache message files: knldiag

liveCache system
message files


z The knldiag file logs messages about current liveCache activities. The actions logged
include liveCache start, user logons, writing of savepoints, errors and liveCache
shutdown. Therefore, this file is one of the most important diagnostic files to analyze
database problems or performance bottlenecks.
z The knldiag file is recreated at every liveCache start. The previous one is saved as ‘knldiag.old’ (‘Problem Analysis->Messages->Kernel->Previous’), which means that the content of a knldiag file is definitely lost after two consecutive restarts. To avoid losing the information about fatal errors that happened during two consecutive startup failures, errors are also appended to the file ‘knldiag.err’.
z To prevent the knldiag file from growing without limit while the database is in the online operation mode, it has a fixed length which can be set as a configuration parameter of the database. The system messages are written cyclically (wrap-around). Therefore, the knldiag file may not contain all system messages after a long operation time. This is another reason why all error messages are also written to the file ‘knldiag.err’.

liveCache message files: knldiag.err

liveCache system error message file


z In contrast to the knldiag file, knldiag.err is not overwritten cyclically or reinitialized during
a restart. It logs consecutively the starting time of the database and any serious errors.

z This file is required to analyze errors if the knldiag files, which originally contained the
error messages, are already overwritten.

liveCache directories

[Diagram: Directory structure]
sapdb
  programs: pgm, bin
  data: config/<SID>, wrk/<SID>
  <SID>: db (bin, pgm, env, etc, lib, incl, misc, sap)


z Installing a liveCache creates a number of directories which are in a standard installation


subdirectories of a common root directory called sapdb.
z You should not change the names of these directories.
z The default System ID (SID) of a liveCache is LCA. The standard connection names are
LCA for APO and LDA for ATP (Available To Promise).
z The directories <IndepPrograms> and <InstallationPath> contain all files which are
required for the database management system while the <IndepData> directory
accommodates all configuration and message files which belong to specific liveCache
instances.
z For each instance a new subdirectory is created in <IndepData>. The <Rundirectory> defines the name of the subdirectory where the message files of the currently monitored instance can be found. Usually you should have only one instance on your liveCache server.
z The <IndepPrograms> subdirectory stores those programs and scripts which do not depend on a particular liveCache release, such as the downward compatible network server program x_server that transfers data between any liveCache instance and a remote client.
z In contrast to the <IndepPrograms> directory, all files contained in the directory <InstallationPath> are release dependent.

liveCache directories: example (1)

Database configuration files

Installation log files

Database working directories


„ knldiag
„ knldiag.err
„ knldump
„ knltrace
„ rtedump

Administration log files

Saved diagnosis files after abnormal shutdown


z Sapdb/data/config: database configuration file for each installed database instance.

z Sapdb/data/config/install: log file for each installation of the SAPDB database


management system.

z Sapdb/data/wrk/Lca: working directory of a liveCache. The working directory contains the


message files knldiag, knldiag.old and knldiag.err, the liveCache trace file knltrace and
the dump file knldump. The dump file is created whenever the database crashes due to an
error. The file contains an image of all structures stored in the memory. Together with the
knldiag file this file is essential for the error analysis. The size of this file is about 10
percent larger than the size of the data cache. Make sure that there is always sufficient
space on the device accommodating the working directory to host the knldump file
in case of a crash.

z Sapdb/data/wrk/Lca/dbahist: detailed log files for each backup and restore of the
database

z Sapdb/data/wrk/Lca/DIAGHISTORY: All message, dump and trace files except knldiag.err are overwritten after a restart. To avoid the loss of the message files needed for error analysis, all files from the working directory are saved in a subdirectory of DIAGHISTORY when the liveCache detects that the previous shutdown was due to an error. The subdirectories are labeled with a time stamp.

liveCache files: example (2)

Programs specific for installed liveCache release

System programs, tools

Documentation files
Root directory for the SAP DB Web Server
Scripts for creation of system tables
List of installed files

Libraries for precompiler

System programs, e.g. kernel.exe
SAP-specific liveCache utilities
Map files of all system programs

Release-independent programs, e.g. DBMCLI


z Sapdb/Lca/db/pgm: executable programs in particular the program kernel.exe which


represents the database management system.

z Sapdb/Lca/db/sap: dynamic link libraries and shared object files respectively which
contain the application code to run via COM in the database. Here you can also find the
script LCINIT.BAT to start, stop and initialize the liveCache.


Complete
Data Backup

z At the end of this unit you will be able to perform a complete backup of the liveCache
using the database administration tool DBMGUI.

Complete data backup

[Diagram: A complete data backup saves the contents of the data volumes (Data 1 ... Data n) and the parameter file (/sapdb/data/config/<SID>) to a backup with label DAT_00001. The log volumes (Log 1, Log 2) are not part of a complete data backup.]


z A complete backup saves all occupied pages of the data volumes. In addition, the liveCache parameter file is written to the backup.
z The complete backup as well as the incremental backups (see later) are always consistent on the level of transactions, since the before images of running transactions are stored in the data area, i.e. they are included in the backup.
z Each backup gets a label reflecting the sequence of the backups. This label is used by the administrative tools to distinguish the backups. A map from the logical backup media name to the backup label can be found in the file dbm.mdf in the <Rundirectory> of the liveCache.
z For each backup, a log entry is written to the file dbm.knl in the <Rundirectory>.
z Backups are performed by the database process. Online backups of the


volumes with operating system tools (e.g. dd, copy) are useless.
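z The same complete data backup can also be performed with dbmcli instead of the DBMGUI (a sketch: the medium name CompleteData and the file location are examples, and the command syntax follows the general SAP DB/MaxDB Database Manager documentation, so details may differ slightly in release 7.4):
   dbmcli -d LCA -n <lc-server> -u control,control              (open a DBM session)
      medium_put CompleteData /backup/LCA_complete FILE DATA    (define a single file medium for complete data backups)
      util_connect control,control                              (open the utility session required for backups)
      backup_start CompleteData DATA                            (run the complete data backup)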

Backup (1) : Start the DBMGUI

Calling the DBMGUI


z To perform an initial backup of the liveCache we will use the DBMGUI, which can be called by choosing ‘Tools->Database Manager (GUI)’. After the selection you will be asked for the user name and password of the database manager operator, which are usually CONTROL/CONTROL.
z Since backup and restore procedures of a liveCache are identical to those for an OLTP instance of the SAP DB, these functions are not directly included in the liveCache-specific transaction LC10 but can be accessed via the general administration tool DBMGUI.
z To use the DBMGUI it has to be installed on the local PC.

Backup (2) : Create a backup media

Define parallel backup media

Define single backup media


z Appearance of the DBMGUI:


y On the left side you can see all possible actions and information grouped into six topics.
y On the right upper side the most important database information is displayed: the filling levels of data and log volumes and the cache hit rates.
y In the central window new information is shown when you click on one of the icons in the left window.
z To perform a backup, you first have to configure a backup medium, which can be done via the selection `Configuration -> Backup Media`.
z You will then see an overview of all defined backup media, divided into single and parallel media.
z At the lower border of the central window there are two icons which can be used to define either a parallel or a single backup medium.

Backup (3a) : Create a backup media


z You can choose nearly any name for the media name. There are only a few names reserved for
external backup tools: ADSM, NSR, BACK. If your media name begins with one of these strings,
an external backup tool is expected.
z Besides the media name you have to specify a location. You have to enter the complete path of the
media. If you specify only a file name this file will be created in the <Rundirectory> of the database.
z There are four backup types:

y Complete: full backup of the data.

y Incremental: incremental backup of the data, saves all pages changed since the last complete
data backup.
y Log: interactive backup of the full log area (in units of log segments).

y AutoLog: automatic log backup, when a log segment is completed, it will be written to the
defined media.
z For a complete or incremental data backup you can choose one of the three device types: file, tape
or pipe. For a log backup you can choose file or pipe. It is not possible to save log segments directly
to tape.
z After you have entered the necessary information, you have to press the button „OK“ (green tick).
z The media definition is stored in the file dbm.mmm in the <Rundirectory> of the database.
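z For reference, the same media definitions can also be written with the dbmcli command medium_put (a sketch: the names and file locations are examples, and the keyword used for the AutoLog backup type may differ by release):
   medium_put DatComplete /backup/LCA_dat_complete FILE DATA     (complete data backup)
   medium_put DatPages /backup/LCA_dat_pages FILE PAGES          (incremental data backup)
   medium_put LogSave /backup/LCA_log FILE LOG                   (interactive log backup)
   medium_put LogAuto /backup/LCA_autolog FILE AUTO              (automatic log backup)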

Backup (3b) : Create media for external backup tools


z The liveCache supports three kinds of external backup tools:

(1) Tivoli Storage Manager (ADSM)


(2) Networker (NSR)
(3) Tools which support the Interface BackInt for Oracle (BACK)
z To use one of these tools you have to choose the device type Pipe for your backup medium. Moreover, the name of the medium has to start with the letters ADSM, NSR or BACK. The DBMGUI needs these letters to decide which kind of external tool to use.
z On Windows NT the media location must have the form ’\\.\<PipeName>‘, where <PipeName> can be any name. On a UNIX platform the location can be any file name of a non-existing file.

Backup (4) : Start complete data backup

‘Next Step’ button to continue


z To create a complete data backup you have to select ‘Backup->Complete‘. In the central
window you are offered all media which are available for this operation.
z After you have chosen a medium you have to confirm your choice by pressing the ‘Next Step‘ button. The following window repeats your choice and asks you to confirm it. When this is done, the backup process starts and you can follow the progress in a progress bar.

Backup (5) : Final backup report


z When the backup is finished, a status message will be displayed.

z A complete backup is consistent, i.e. it is possible to restart the recovered database


without further log information.
z To continue working with the DBMGUI press the ‘Step back’ button.

Running a backup in the background

Report to perform a backup


z Use the report RSLVCBACKUP to perform a liveCache backup in the background.

z The report requires the following input parameters:

y liveCache connection name (usually LCA)

y backup type

• BUP_DATA : complete data backup


• BUP_PAGE : incremental data backup
• BUP_LOG : log backup
y a backup media name

z Before the report can be executed, the backup medium must be defined, which can be done with the DBMGUI as shown on the previous slides.


Data
Storage

z At the conclusion of this unit you will be able to monitor the data page usage of the
liveCache.

liveCache objects

class MyObj : public OmsKeyedObject<MyObj, unsigned char>
{
public:
    unsigned char UpdCnt;
    MyObj() { UpdCnt = 0; }
};

STDMETHODIMP TestComponent::OID_UPD_OBJ (int KeyNo)
{
    try {
        const MyObj* pMyObjKey = MyObj::omsKeyAccess(*this, KeyNo,
                                                     OMS_DEFAULT_SCHEMA_HANDLE, CONTAINER_NO);
        if (pMyObjKey)
        {
            MyObj* pUpdMyObj = pMyObjKey->omsForUpdPtr(*this, DO_LOCK);

            pUpdMyObj->UpdCnt++;          // 1st update
            pUpdMyObj->omsStore(*this);

            pUpdMyObj->UpdCnt++;          // 2nd update
        }
        else throw DbpError (100, "Object key not found");
    }
    catch (DbpError e) { omsExceptionHandler(e); }
    return S_OK;
}


z The liveCache was designed to store instances of C++ classes which are defined within
COM routines. At runtime a COM routine generates instances of classes. These instances
are called „persistent objects“ since they survive their creators (the COM routines). They
are stored in liveCache and on physical disks.
z The example above shows the definition of a class (MyObj) to generate persistent objects
in the liveCache and its usage by a COM-Object (TestComponent).
z By inheriting from the template OmsKeyedObject, all instances of the class MyObj gain the ability to be stored persistently in the liveCache. The template OmsKeyedObject belongs to the API supplied by the liveCache; it offers transaction control (commit, rollback), lock mechanisms, access methods and the ability to be stored persistently to all derived classes.

liveCache data storage

[Diagram: Object data (page chains) and SQL data (B* trees) are both stored in data volume pages.]


z SQL data is stored on SQL pages and is sorted using the B* tree algorithm. Access occurs via a key and requires a search for the record position in the index. In contrast, object data is stored in OMS pages, which are linked to build page chains. Objects are accessed via an OID. The OID already contains the object position; therefore, no further search is required.
z In the liveCache, all data is stored in data volume pages regardless of the data type (SQL data or object data).
z The size of a data page is 8 KB.

Object access: RDBMS approach

[Diagram: In an RDBMS, tables are logically linked by relational data; records are referenced logically via primary keys and located through a primary index before the data records are retrieved from buffers or disk. Navigation via a logical key using SQL takes more than 1 ms (data cache access).]

z Application data in APO is organized as a network of linked data records. Data records
contain application data and mostly one or more links to other records used for the
navigation over the data network.
z In a traditional relational database management system, data is stored in relational
tables. Tables containing related data are logically linked through one or more fields
(which may but do not have to carry the same names). Mostly the primary key of the tables is used as the link criterion.
z To retrieve data in a table, an index will be used – either the primary index containing the
primary key or a secondary index. Normally more than one access to index data is
necessary to navigate to the table data in the data pages.
z Navigation over a network of data, stored in one or several tables, is performed using
several communications between application program and database:
y The database reads the first record and returns it to the application program.
y The application program gets the primary key of the next record from data stored in
first record.
y The database reads the next record and returns it to the application program.
y Steps 1-3 are repeated until all data is read.
z If most of the pages accessed are buffered in the database’s RAM then no disk access
will be required, but if this is not the case the database software has to read information
stored on hard disks to fulfill the data request. Physical disk access slows down the
performance of the database.
Object access: liveCache approach

[Diagram: In the liveCache, the objects of a class are stored in class containers consisting of pages; objects reference each other via a physical reference, the OID (= page number + offset). Navigation via OID takes less than 10 µs.]


z In the liveCache the data (objects) are stored in class containers which consist of doubly linked page chains. Navigation between objects is very fast because objects are referenced using a physical reference - the object ID (OID) - which contains the page number and the page offset.
z Direct access to the body of an object, e.g. by searching for data in the body, is not possible. The only alternative is keyed objects, where the application may define a key on the object. Features like LIKE, GT, LT etc. are not supported; only a key range iterator is supplied.
z liveCache can also store data in relational tables and access it correspondingly, but this is only used for a minority of the data.

Class container for objects of fixed length

[Diagram: A class container for objects of fixed length consists of several page chains (Chain 1 ... Chain 4); each chain root points to the first free page, free pages are linked via "next free" pointers, and an optional index maps keys to OIDs.]


z The liveCache supplies two kinds of class containers to store objects: one for fixed-length objects and one for objects of variable length.
z Class containers for objects of fixed length contain only objects which are instances of one class and are thus all of the same length.
z Containers consist of chains of doubly linked pages. All pages which contain free space to accommodate further objects are additionally linked in a free chain.
z Since new objects are always inserted into the first page of a container's free chain, class containers can be partitioned into more than one chain to avoid bottlenecks during massively parallel inserts of objects.
z The root page of each chain includes administrative data, e.g. the pointer to the first page in the chain where there is still space for another object.
z An index (at most one) can be defined for a class container which maps a key of fixed length onto an OID. Objects of those containers can then also be accessed via a key. The index is organized as one or several B* trees.

Data page structure for objects of fixed length

[Diagram: Data page layout for objects of fixed length]
Page header (80 Byte): page number, check sum, pointer to first free object frame, number of free/occupied frames, pointers to next/previous pages
Object frame header (24 Byte): pointer to next free frame, object lock state, pointer to before image
Free object frames: linked via pointers to the next free object frame
Occupied object frames: hold the object data
Page trailer (12 Byte): page number, check sum

z Each page contains objects instantiated from the same class, i.e. all objects on a page
are of the same length. Therefore, they are stored in an array of object frames. With this
approach, there is no space fragmentation on a data page.
z The object frame consists of a 24 Bytes header with internal data and the data body that
is visible to the COM routines. The header stores for instance the lock state of the object,
the pointer to the next free object frame and the pointer to the before image of the object.
z The length of a data page is 8 KB. Each page has a header of 80 Bytes and a trailer of 12 Bytes. These parts of the page are not used for object frames but are filled with structural data such as the page number, the numbers of the previous and next pages in the page chain, a checksum to detect I/O errors, the number of occupied/free object frames on the page and the offset of the first free frame.
z The length of a fixed-length object is limited to the usable page size of slightly less than 8 KB.

Class container for objects with variable length

[Diagram: A class container for objects with variable length consists of one primary container and several continuation containers, each built from page chains (e.g. page 22, page 100, page 79).]


z Objects with variable length may be distributed over several pages and have a theoretical maximum length of 2 GB.
z To store the objects they are divided into pieces of less than 8 KB. The pieces are stored in class containers for objects of variable length. Each of those class containers consists of one primary container and six continuation containers. The primary container can accommodate objects smaller than 126 Bytes. The i-th continuation container contains object frames which can host objects with a length of ~126*(2^i) Bytes, with i = 1,...,6.
z To insert an object the liveCache chooses a free object frame from the primary container. If the object is smaller than 126 Bytes it is put into this free frame; otherwise the object is put into a frame of the continuation container which has the smallest object frames that can still accommodate the object. The OID of the frame where the object is actually stored is put into the chosen frame in the primary container.
z The OID which is used by the application to identify an object is always the OID from the primary container. This guarantees that the object can always be accessed by the same OID, even if its length changed and it was moved to another continuation container.
z The construction of the page chains and the pages of the continuation containers is similar to that of the fixed-length class containers, except that the object frames in the continuation containers are only 8 Bytes long.
z No index can be defined for objects of variable length.

z Accesses to objects with variable length are more expensive than accesses to ordinary objects if
they are longer than 126 Byte, since each access to those objects requires more than one page
access.
z Primary containers as well as continuation containers can be partitioned too.

Analysis of the class containers with the LC10

Detailed data about class containers


z The LC10 offers detailed data about all class containers stored in the liveCache. The class
container monitor can be reached by ‘Problem Analysis->Performance->Class container’
z The data in the class container monitor are:

y Class ID: unique internal number for each class container. The ID is assigned in the order of
the creation date of the container.
y Class name: name of the class whose instances are stored in the container.

y Object size: size of the stored objects in bytes.

y ContainerNo: external number of a class container. This number is used by the application to
identify a class container.
y Container size: Number of data pages which are occupied by the container.

y Free container pages: number of container pages which contain free object frames.

y Empty container pages: number of container pages which contain no occupied object frame.

y Key pages: number of pages which are occupied by the index.

y Container use: percent of usable space on the data pages which is used by occupied object
frames.
y Schema: name of the schema a class container is assigned to. Each container must be
assigned to a schema. A schema can be considered as a name space which can be dropped
with all its class containers at once.
y Class GUID: external unique identifier of the class.

COM routines

View registered
COM routines


z Objects stored in the class containers can be accessed and manipulated only via COM
routines which are methods of COM objects.
z The selection ´Current Status->Configuration->Database Procedures´ displays a list of
all COM objects and their methods which are currently registered at the database. For
each COM routine a detailed parameter description is available when the triangle left of
the routine name is pressed.
z The COM routines can be executed through stored procedure calls. For instance, the COM routine CREATE_SCHEMA from the example above can be executed by the SQL command “call CREATE_SCHEMA (‘MyFirstSchema’)”.
z The registration of the COM routines is done automatically when the liveCache is started by LC10.


Advanced
Administration

z At the conclusion of this unit you will be able to save the log, perform an incremental backup, add a data volume and configure the liveCache to save the log automatically.

Log full situation (1)

liveCache icon

Show the state of database tasks


z When performing the last exercise the liveCache ran into a log-full situation which caused a
standstill of the liveCache. All users trying to write any entry into the log were suspended.
However, users can still connect to the database and as long as they only read they can
continue to work on the database.
z The filling level of the data and log volumes can be observed with transaction LC10 or the DBMGUI. Within LC10 the selection ‘Current Status->Memory Areas->Devspaces’ displays a detailed list of the occupation of the data and log devices. However, it is more convenient to watch the bars at the upper side of the DBMGUI. By double-clicking the liveCache icon you can also get detailed information about data and log devices in the central screen. If the log filling level reaches critical values you will also find warning messages in the knldiag file.

Log full situation (2)

Suspended user task due to log full situation


z You can convince yourself that no database task, and in particular no user task, is active in the log-full situation by choosing ‘Check->Server’. By clicking on the selection ‘TASKS’ you get an overview of what each database task is currently doing.
z In case the log device is full, you will find the archive log writer task in the state ‘log-full’.
z User tasks which have tried to write entries into the archive log are in the state ‘LogIOwait’.
z Tasks which serve other users are not suspended and are in the state ‘Command wait’, i.e. these users can use the database for read accesses.
z Notice that a user task can be suspended even before the log filling level reaches 100%. This is because a small amount of the log is reserved and cannot be used by user tasks. This reserved part is required to guarantee that the liveCache can be shut down even in a log full situation.
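z The task states can also be displayed directly on the liveCache server with the database console (a sketch; the exact output columns vary by release):
   x_cons LCA show tasks       (state of all kernel tasks, e.g. ‘LogIOwait’ or ‘Command wait’)
   x_cons LCA show active      (only the currently active tasks)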

Solution to log full situation

[Diagram: Solution to a log full situation. (1) Log full: the log writer waits at the current write position on Dev 1. (2) Add log volume: a second log volume Dev 2 is added, but the log writer is still waiting. (3) Log backup: the saved log entries on Dev 1 are released. (4) Continue log: log writing continues on Dev 1 and Dev 2.]


z At first glance one could think that a log full situation could be overcome by simply adding another log volume. However, the liveCache/SAP DB writes the log cyclically onto the volumes as if they were a single device. This means that even if a new log volume is added, log writing has to be continued after the last written entry. Therefore, a log volume cannot be used immediately after it has been added; the log has to be backed up first (SAVE LOG - interactive log backup).
z Note: A prerequisite for a log backup is a data backup.
Interactive log backup (1)

[Diagram: An interactive log backup saves the occupied log segments from the log volumes (Log 1, Log 2) to a backup with label LOG_00001; the data volumes are not touched.]


z An interactive log backup (SAVE LOG) backs up all occupied log segments from the log volumes which have not been saved before.
z Only version files are supported as media.
z We recommend backing up the log into version files. One version file is created for each log segment. The version files get a number as an extension (e.g. L_BackUpFile.001, Al_BackUpFile.002, ...).
z The label versions are independent of the labels generated with complete data backups (SAVE DATA) and incremental data backups (SAVE PAGES).

Interactive log backup (2)

Interactive log backup

Define log backup media


z By choosing ‘Backup->Log’ in the DBMGUI you activate the central window which allows you to back up all log segments (interactive log backup - SAVE LOG). After activating ‘Backup->Log’ the central window displays a list of all log backup media defined so far which can be used to save the current log. If this window is empty or all defined media are already in use, you must first define a new log backup medium.

Interactive log backup (3)


z For the definition of the log backup media you have to enter a name and a location for the
media. By pressing the green tick the input can be confirmed. By following the footprint
icon you can now continue the log backup. No further input is required. At the end of the
backup you get a report about the save.
z You can define a log backup media as well as a data save media also by choosing
‘Configuration->Backup Media’
z The log is logically divided into a number of log segments. The size of these segments is a configuration parameter of the liveCache. After the first of these segments has been saved, all tasks which were suspended due to the log full situation are immediately resumed. That means suspended tasks already continue working during the backup of the log area if there is more than one log segment.

Autosave log mode

AutoLog mode status

AutoLog mode selection

AutoLog mode on/off


z To prevent the database from further standstills due to a full log area you can activate the autosave log mode (AutoLog mode). When the AutoLog mode is activated, the log is automatically written to files whenever a log segment is full. Each segment is saved in a new backup file. The backup files are named after the corresponding medium plus a suffix of a three-digit number. The numbers are assigned in ascending order according to the order of the saves.
z You can switch on the AutoLog mode by selecting ‘Backup->AutoLog on/off’. There you can select a medium which stores the automatically written log files. Alternatively, you can define a new medium by pressing the ‘Tape’ icon. After you have confirmed your medium selection with the AutoLog icon, the AutoLog mode is activated.
z By pressing the tape icon on the lower taskbar of the central window you can also create a new backup medium.
z You can easily find the current status of the AutoLog mode by checking the column AutoLog in the upper right window of the DBMGUI.
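z The AutoLog mode can presumably also be switched from the command line. This sketch assumes that the DBM commands autolog_on and autolog_off exist in your release and that a log backup medium named BackLog has already been defined (see the earlier example); verify the command names in your DBM documentation first.
    dbmcli -d <liveCache_name> -u control,<password> autolog_on BackLog
    dbmcli -d <liveCache_name> -u control,<password> autolog_off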

Add data volume (1)

List and add volumes

 SAP AG 2002, Title of Presentation, Speaker Name 60

z After your last exercise the database is nearly full. Therefore, another data volume should
be added to prevent the liveCache from a standstill due to a database full situation.
z In the LC10 you can add a data volume by selecting ‘Administration->Configuration-
>Devspaces’. After pressing the ‘Add Devspace’ button in the upper left corner a new
dialog window appears where you have to specify the size and the location of the new
volume.
z The new volume is immediately available after you have saved and confirmed the input
values.
z Data and log volumes can also be added using the DBMGUI (‘Configuration->Data
Volumes’).
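z A data volume can presumably also be added with a DBM command. This is only a sketch under assumptions: it presumes that the command db_addvolume with the argument order shown (type, path, format, size in pages) is available in your liveCache release; the path and size are merely examples.
    dbmcli -d <liveCache_name> -u control,<password> db_addvolume DATA /sapdb/LC1/data/DISKD0005 F 256000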

Add data volume (2)

Maximum number of data volumes

Show and change liveCache configuration parameters

 SAP AG 2002, Title of Presentation, Speaker Name 61

z Before a data volume can be added, the parameter MAXDATADEVSPACES of the liveCache configuration has to be checked. If an Nth volume shall be added, this parameter must be greater than or equal to N. The parameter can be changed in the LC10 via ‘Administration->Configuration->Parameters’. If you use the DBMGUI, select the menu path ‘Configuration->Parameters’. Note that new values of the database configuration parameters do not take effect until the database has been stopped and started again.
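z The current value of the parameter can presumably also be checked on the command line. This sketch assumes that the DBM command param_directget exists in your release; the liveCache name and user are placeholders.
    dbmcli -d <liveCache_name> -u control,<password> param_directget MAXDATADEVSPACES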

Incremental data backup (1)

[Diagram: data volumes (Data 1, Data 2, … Data n) and log volumes (Log 1, Log 2); the backup labels are numbered in one sequence across backup types, e.g. DAT_00001, PAG_00002, PAG_00003, DAT_00004, PAG_00005, PAG_00006]

 SAP AG 2002, Title of Presentation, Speaker Name 62

z In addition to a complete data backup, data pages can also be backed up with an incremental data backup.
z In contrast to a complete data backup, an incremental data backup stores only those pages which have changed since the last complete data backup.
z Notice that this differs from previous liveCache releases (<7.4), where an incremental backup contained all pages which had changed since the last incremental or complete data backup.
z The label version is increased with each complete and incremental data backup.
z To decide whether an incremental backup is preferable to a complete backup, check the number of pages which have changed since the last complete backup. You can find this number on the tab ‘Data area’ in the selection ‘Current Status->Memory Areas->Data Area’. An incremental backup is useful if the number of changed pages is small compared to the number of used pages (see the example below).
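z Example (hypothetical figures): if 5,000,000 pages are in use but only 200,000 pages have changed since the last complete data backup, an incremental backup has to write only about 4% of the pages of a complete backup and is clearly worthwhile; the closer the number of changed pages gets to the number of used pages, the less you gain compared to a complete data backup.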

Incremental data backup (2)

Incremental data backup

 SAP AG 2002, Title of Presentation, Speaker Name 63

z An incremental data backup can be performed in the DBMGUI by selecting ‘Backup->Incremental’. As for the complete data backup you have to choose a medium for the backup. Via the icons on the lower task bar of the central window you can also create and delete media or change the properties of existing media. The ‘Next Step’ button guides you through the rest of the backup process. At its end a backup report is shown.
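z On the command line an incremental data backup can presumably be started in the same way as the log backup sketched earlier, using backup type PAGES instead of LOG; this again assumes that backup_start exists in your release and that a data backup medium (here called BackPages) has been defined.
    dbmcli -d <liveCache_name> -u control,<password> backup_start BackPages PAGES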

SAP liveCache Administration & Monitoring

Consistent Views and Garbage Collection

z In this unit the concept and the consequences of consistent views are explained.

Consistent views

All read accesses provide the image of an object that was committed
at a certain time. This point in time is the same for all accesses
within one transaction.
Example of reading within implicit consistent views
[Timeline diagram: transactions T1 and T3 set s=3 and s=7 and commit; the readers T2, T4 and T5 each see the value of s that was committed at the time of their first read within the transaction – T4 reads s=3, T2 and T5 read s=7]

 SAP AG 2002, Title of Presentation, Speaker Name 65

z liveCache uses consistent views to isolate read accesses to objects from concurrent changes to the data by other applications.
z A consistent view sees all liveCache data as it was when the consistent view was created. Changes by other simultaneously running applications are invisible to the transaction.
z Databases like Oracle support similar concepts, but mostly only for single statements (consistent read).
z Transactions are always performed as consistent views. The point in time that determines which before image is read is the first access to a persistent object; the view ends with COMMIT or ROLLBACK (implicit consistent view).
z liveCache also knows the concept of named consistent views, called versions. These views do not end with commit or rollback but can span several transactions (see later) and may be active for several hours. Such named consistent views are used by APO for transactional simulations.
z Reading within consistent views provides only committed images without waiting for the end of any other transaction.

Why consistent views

Example of reading without consistent views

transaction T1: follows the path to element B; knows the path to C
transaction T2: deletes element C; inserts element X; updates the path B → X; commit
transaction T1 continued: knows the old image of B; wants to read element C; element C is deleted; element D is unreachable

[Diagram: object chain A → B → C → D; T2 replaces C with X (B becomes B'), so after T2's commit the old path from B to C no longer exists]

 SAP AG 2002, Title of Presentation, Speaker Name 66

z Consistent views are required to navigate through networks.
z The example above demonstrates one problem that can occur when reading without a consistent view.
z Example description:
1. Transaction T1 starts to read an object chain at object A. It wants to follow the path to object D in order to update D.
2. Unfortunately the scheduler interrupts transaction T1 after it has read object B.
3. Transaction T2 is started and replaces element C by X.
4. T2 commits.
5. Transaction T1 continues and follows the link to object C. However, C is deleted and D is therefore unreachable.
z If transaction T1 uses a consistent view of the chain, it can still access the deleted element C and follow the chain to element D.
z Therefore, a consistent view starts with the first object access of a transaction.

History files
OMS:  pObj->value = y;
      pObj->omsStore(*this);
      ...
      Commit

[Diagram: the update of OID 23.409 copies the old object value x(k1) from the class container into the history file of the open transaction (listed in the transaction list); the new value y is stored in the class container in the data cache; after the commit the history file is added to the history file list of committed transactions]

 SAP AG 2002, Title of Presentation, Speaker Name 67

z Read consistency requires that the old images of objects updated by a transaction T are kept not only until T commits, but until the last consistent view that was opened before T committed is closed.
z The storage of before images is realized with history files. When an object is updated, the old value of the object (the before image) is copied to a history file which exists for each transaction. Then the new object is copied to the data page and a pointer in the page points to the former object version in the history file.
z History files of open transactions are not only used for consistent reads; they are also used for the rollback of transactions.
z In case of a rollback, the old image is copied from the history file back to its original data page and the history file is destroyed. If the transaction ends with a commit, its history file survives the transaction end and is inserted into a history file list.

Consistent reading via history files

[Timeline diagram: transactions T2, T3, T5 and T6 successively set s=15, s=3, s=7 and s=8; reader T4 opened its consistent view after T2 committed and therefore reads s=15; the history files of T3, T5 and T6 link the before images s(15), s(3) and s(7) to the current value s(8) in the class container]

 SAP AG 2002, Title of Presentation, Speaker Name 68

z Several changes of an object made by different transactions are recorded in the history files. These different versions of an object are linked in the history files.
z Depending on the start time of the active consistent views (transactions or named consistent views) it may be necessary to keep several versions of an object.
z The before image of an object can be deleted when no consistent view may need to access it anymore.

Garbage Collection (1)

Problem
Objects are marked as deleted only but not removed

Changes to objects (before images) are recorded in history files

Solution
Garbage is collected by garbage collector tasks:
„ History files that cannot be accessed by consistent views anymore are
deleted
„ All deleted objects in the OMS pages that will not be accessed by
consistent views anymore are released
„ Free pages are released when all objects in the data or history page
are released
„ Garbage collectors are scheduled every 30 seconds and will start
working when data cache usage is higher than 80%

 SAP AG 2002, Title of Presentation, Speaker Name 69

z Due to the consistent read, a transaction that removes an object cannot remove it directly, since a consistent view of another transaction might still access this object or one of its before images. Therefore, objects are only marked as deleted when a transaction deletes them.
z Objects marked as deleted are actually removed by special server tasks called garbage collectors. Scanning the history pages, they remove objects once no consistent view can access them anymore.

Garbage Collection (2)

[Timeline diagram: T3 deletes s and commits, T4 commits, T6 deletes t and u and commits, while consistent view T5 is still open; during garbage collection only history files that are no longer visible to any open consistent view can be processed, so the scan stops at the newer history files and the objects deleted by T6 remain marked as deleted in the class container]

 SAP AG 2002, Title of Presentation, Speaker Name 70

z The garbage collectors periodically scan the history file list for history files of transactions which cannot be accessed anymore by open consistent views. When the garbage collector finds such a file it looks for all entries which point to deleted objects and finally removes these objects, i.e. afterwards the corresponding object frames in the class container file are free and can be reused. After all delete entries in the history file have been processed and the corresponding objects have been removed, the complete file is dropped.
z The garbage collectors also check whether the class containers contain too many empty pages. If more than 20% of the pages of a file are empty, the GC removes all empty pages. The GC finds the empty pages by following the chain of free pages which belongs to each container.

Garbage Collection (3)

The algorithm of garbage collection changes according to the filling level of the database

Start of garbage collector every 30 seconds
„ History files are removed which belong to committed transactions that are older than the oldest transaction that was open when one of the currently active consistent views started.

Database filling over 90%
„ To avoid a standstill of the database due to a ‘database full’ situation, object history files are removed even if their before images could still be accessed by an active consistent view.
„ The garbage collector removes the oldest history files until either the filling is again below 90% or there are no more history files of committed transactions.

 SAP AG 2002, Title of Presentation, Speaker Name 71

z As long as transactions are not committed or named consistent views are not dropped, the before images of objects stored in the history files cannot be released, because they may still be accessed by the consistent views. Remember that a consistent view wants to see the liveCache as it was when the consistent view started, so before images that were written after the start of the consistent view may be needed to reconstruct that state. As a result the history files may grow.
z When a transaction or a named consistent view is active for a long time, this may become a problem for liveCache performance and availability:
y When the data cache is too small to hold the history files, data is swapped to disk. When the data is accessed again (by the application or the garbage collectors), it must first be read back into the data cache. This leads to physical I/O, which has to be avoided for liveCache.
y When history files grow further, this may lead to a ‘database full’ situation. The result is a standstill of the application.
z liveCache tries to optimize garbage collection, because scanning large history files is CPU- and I/O-intensive.
z The total usage of the data cache as well as the occupation with history and data pages can be monitored with transaction ‘LC10 -> Current Status -> Memory Areas -> Data Cache’.

Loss of consistent views

[Timeline diagram: as in the previous example, T4 would have to read s=15 from the history chain; because the database filling exceeded 90%, the before image s(15) in the history file of T3 has been removed, so T4's read fails with ‘object history not found’]

 SAP AG 2002, Title of Presentation, Speaker Name 72

z If the data cache filling exceeds the limit of 95%, consistent views may become incomplete, since old object images which belong to the view are removed. The access to such a removed old image causes the error ‘too old OID’ or ‘object history not found’.
z When the data cache filling level is above 95%, before images which are not accessed by any consistent view are removed. However, since the before images are linked in a chain, the connection to older images which might still be visible in a consistent view is lost.
z When the database filling reaches the limit of 90%, before images are removed even if they are visible in consistent views.

SAP liveCache Administration & Monitoring

Memory Areas

z In this unit you will get to know the two main memory areas of the liveCache: the data
cache and the OMS heap.

Calling a liveCache method in ABAP

ABAP coding running on the APO application server:

  ...
  set connection: <liveCache>
  ...
  exec sql call OID_UPD_OBJ (:KeyNo);
  ...
  exec sql commit;
  ...

[Diagram: each APO work process is attached to a session context in the liveCache; each session context has a private cache in the OMS heap, while the liveCache basis holds the shared data cache]

 SAP AG 2002, Title of Presentation, Speaker Name 74

z A COM routine is called as a stored procedure in ABAP from the APO application server.
z Within a transaction (terminated by COMMIT or ROLLBACK), several COM routines can be called. All these routines work within the same session context in the liveCache. An important feature of a session context is that global data is copied into a private memory area (OMS heap) and that all following operations operate on these private copies. The access to private data is much faster than accessing global data in the data cache, leading to a considerable performance gain – at the cost of memory consumption. The changes on the private copies are transferred into the global memory after a COMMIT and the private memory is released (versions are an exception). The released memory is not returned to the operating system but is only freed to be reused for new private caches. Therefore, the OMS heap memory can never shrink.

Memory areas in the liveCache

Data cache (parameter CACHE_SIZE): OMS data pages, history pages, SQL pages
OMS heap (parameter OMS_HEAP_LIMIT): copied OMS objects, local COM memory

 SAP AG 2002, Title of Presentation, Speaker Name 75

z liveCache uses two main memory areas in the physical memory of the liveCache server:
data cache and OMS heap
z Data cache
y Data cache is allocated in full size when the liveCache is started. The size is configured by
liveCache parameter CACHE_SIZE
y data cache contains
• data pages with the persistent objects (OMS data pages)
• history pages with before images of changed or deleted objects (history pages)
• swapped named consistent views, keys for keyed objects and SQL pages (SQL pages). All pages which are
organized as B* trees are called SQL pages.

y all these pages may be swapped to data volumes if the data cache is too small to hold all data

z OMS heap
y liveCache heap grows when additional heap memory is requested. The maximum size is
configured by liveCache configuration parameter OMS_HEAP_LIMIT
y heap contains
• local copies of OMS objects (private cache for consistent views)
• local memory of a COM routine allocated by omsMalloc() and new()

y no swapping mechanism for heap memory is implemented except for inactive named
consistent views

Interaction of data cache and OMS heap (1)

Method: GET LIST

[Diagram: the method accesses objects A, B and C; an object is first looked up via its OID in the instance map of the session's private cache (OMS heap); object B with OID 13.1 is not found there, so it has to be fetched from the object pages in the data cache]

 SAP AG 2002, Title of Presentation, Speaker Name 76

z When an object is accessed via its OID, the object is searched in the private cache of the
session first. The OIDs of the private cache are stored in a hash table.
z When the object cannot be found in the private cache, the object is read from the global
data cache. The OID contains the physical page number of the page that contains the
object.
z If the page is not already in global data cache, it will be read from the data volumes.

Interaction of data cache and OMS heap (2)

Method: GET LIST

[Diagram: page 13 is found in the data cache; the offset 1 from OID 13.1 locates object B inside the page; B is copied into the private cache and the instance map of the session context is updated]

 SAP AG 2002, Title of Presentation, Speaker Name 77

z When the page that contains the searched object is located in the global data cache, the
page offset which is part of the OID is used to locate the object inside the page.
z The object is copied to private cache and the hash table of the private cache is updated.

Interaction of data cache and OMS heap (3)

Method: GET LIST

[Diagram: after the copy, all further accesses to object B (OID 13.1) are served from the private cache in the OMS heap; the data cache is no longer touched]

 SAP AG 2002, Title of Presentation, Speaker Name 78

z All further accesses to the object will be handled in the private cache.
z All changes on the object will be made on the local copy of the object.
z The global version of the object in data cache remains unchanged until the transaction
performs a commit. If the transaction ends with a rollback the private cache is released
without changing any global version of the object.
z The subtransactions are completely handled within the private cache.
z When the object is used by a version, the object will never be copied back to global
cache, but will be released when the version is dropped.

Monitoring interaction of data cache and OMS heap

Object accesses from OMS heap and data cache

 SAP AG 2002, Title of Presentation, Speaker Name 79

z The tab ‘Object accesses’ in the selection ‘Current Status->Problem Analysis->Performance->OMS Monitor’ lists for each COM routine the number of object accesses to the OMS heap and the data cache.
z The tab displays two kinds of columns. Columns named ‘OMS …’ describe accesses to the private OMS heap, while those named ‘Basis …’ count the various object accesses to the data cache. By comparing an OMS column with the corresponding basis column you can find out how effectively the private object cache works. Simply speaking: the larger the ratio between ‘OMS object acc.’ and ‘Basis object acc.’, the better the OMS caching works.
z Object accesses via keys and iterators are served only by the basis layer; therefore no columns for ‘OMS key accesses’ and ‘OMS iterator accesses’ exist.

Data cache and OMS heap configuration

Data cache (parameter CACHE_SIZE)


„ Is a static memory and allocated when the liveCache is started
„ Contains persistent OMS objects (OMS page chains)
„ Contains swapped inactive transactional simulations
„ Contains SQL data and keys for OMS objects (B* trees)
„ Contains the history files (before images)

OMS heap (parameter OMS_HEAP_LIMIT)


„ liveCache heap grows dynamically until OMS_HEAP_LIMIT is reached
„ Contains copies of objects in consistent views
‹ transactions
‹ named consistent views (versions/transactional simulations)

 SAP AG 2002, Title of Presentation, Speaker Name 80

z Memory administration in heap


y When local object copies are released at the end of a transaction or when a named consistent view is dropped, the freed heap memory is not returned to the operating system. So the physically allocated heap never shrinks. It can only grow – up to OMS_HEAP_LIMIT.
y Internally the liveCache heap is organized in 64kB blocks.
y The allocated heap memory is fully under control of the liveCache. liveCache
implements its own memory administration for OMS objects in private cache.
y Memory is only released to the liveCache and may be used for other liveCache
objects.
z When OMS_HEAP_LIMIT is reached, liveCache copies inactive named consistent views
to data cache and releases memory in heap.
z When no additional memory can be allocated for the heap, the COM routine that tries to
allocate memory gets an outOfMemory error and the transaction is rolled back by the
COM routine. All private data of this consistent view is freed. To handle the destruction of
objects an emergency memory area of 10MB is allocated at liveCache start.
z Heap usage can be monitored with report /SAPAPO/OM_LC_MEM_MEMORY.

Monitoring the OMS heap usage

Display OMS heap usage

 SAP AG 2002, Title of Presentation, Speaker Name 81

z The selection ‘Current Status->Memory Areas->Heap usage’ yields information about the usage of
OMS heap.
z ‘Available heap’ is the memory that was allocated for heap from the operating system. It reflects
the maximum heap size that was needed by the COM routines since start of liveCache .
z ‘Total Heap usage’ is the currently used heap. When additional memory is needed, liveCache uses
the already allocated heap until ‘Available’ is reached. Additional memory requests will result in
additional memory requests from operating system and the value of ‘Reserved’ will grow.
(‘Available heap’ > ‘Total Heap usage’ )
z It is important to monitor the maximum heap usage. When the ‘Available heap’ reaches
OMS_HEAP_LIMIT, errors in COM routines may occur due to insufficient memory. This
should be avoided.
z ‘OMS malloc usage’: memory currently in use that has been allocated via calls of method
'omsMalloc' (‘Total Heap usage’ > ‘OMS malloc usage’)
z 'Temp. heap at memory shortage‘: size of the emergency chunk. If a db-procedure runs out of memory, the emergency chunk is assigned to the corresponding session and subsequent memory requests are fulfilled from the emergency chunk. This ensures that the db-procedure can clean up correctly, even if no more memory is available. After the db-procedure call the emergency chunk is returned for general use.
z 'Temporary emergency reserve space‘: memory of emergency chunk currently in use.
('Temp. heap at memory shortage' >= 'Temporary emergency reserve space')
z 'Max. emergency reserve space used‘: maximal usage of emergency chunk.
('Temp. heap at memory shortage' >= 'Max. emergency reserve space used')

Monitoring the data cache usage

Display data cache usage

 SAP AG 2002, Title of Presentation, Speaker Name 82

z The menu path ‘Current Status->Memory Areas->Data cache’ leads to a screen which
displays all information about the liveCache data cache like data cache size, used data
cache and the usage and hit ratios for the different types of liveCache data.
z In an optimally configured system
y the data cache usage should be below 100%
y the data cache hit rate should be 100%
y if data cache usage is higher than 80%, the number of OMS data pages should be
higher than the number of OMS history pages
z Use the refresh button to monitor the failed accesses to the data cache. Each failed
access results in a physical disk I/O and should be avoided.
z More detailed information about the cache accesses can be found selecting ‘Problem
Analysis->Performance->Monitor->Caches’.
z Compare the size of OMS data with OMS history. If data cache usage is higher than 80 %
and OMS history has nearly the same size as OMS data, use the ‘Problem Analysis-
>Performance->Monitor->OMS Versions’ screen to find out if named consistent views
(versions) are open for a long time. Maximum age should be four hours.

Versions and named consistent views

Create Close
Version Reading s Reading t Set t=3 Version Commit Reading s Reading t Commit

T3 T4
s=3 t=1 s=2 t=1

Open Drop
Set s=2 Commit Reading s Reading t Commit
Version Version
T2 T5
s=3 t=3

A session can run within a version: enclosed in the API commands


omsCreateVersion-OMSDropVersion.

All transactions running in one version have the same consistent view. It was
started when the version was created. Such a consistent view is called a named
consistent view.

All updates, creations and deletions of objects performed within a version remain
in the private cache of the session.
Æ Complete detachment of a user from the action of other users.

Versions can be closed temporarily and re-opened. Closed versions are called
inactive.

 SAP AG 2002, Title of Presentation, Speaker Name 83

z For larger planning scenarios (implemented by so-called transactional simulations) APO required the ability to keep one consistent view over more than one transaction, for instance because Dynpro changes in such a scenario automatically trigger commit requests. For these scenarios the liveCache provides versions.
z After creating a version within a session all transactions in this session have the same
consistent view.
z After a commit no changed data is written into the global data cache but all data reside in
the private cache. Thus cached objects cannot be released from the private cache after a
commit or rollback. The consequence is that versions consume more and more OMS-
memory the longer they exist. Moreover, the garbage collector cannot release history
pages since the version could access an old image of an object.
z Versions can be closed temporarily and reopened in any other session. This is necessary
since after a commit an application may be connected to another work process and
therefore to another liveCache session.
z In case the heap consumption passes certain limits closed versions can be swapped into
the global data cache where they are stored in B*-trees on temporary pages.
z Since temporary pages as well as the states of the private session caches are not
recovered after a restart versions disappear automatically after stopping and starting the
liveCache.

Monitoring versions

Listing versions and their


heap consumptions

 SAP AG 2002, Title of Presentation, Speaker Name 84

z One reason for a large consumption of OMS heap and data cache could be a long running
version which cumulates heap memory and which prevents the garbage collector from
releasing old object images.
z With the selection ‘Problem Analysis->Performance->Monitor->OMS versions’ you can
monitor the memory usage by versions.
z The column ‘Memory usage’ displays the current usage of OMS heap memory. The columns ‘Time’ and ‘Age (hours)’ show the starting time of the version and the time elapsed since the start. Note that there should never be any version older than 4 hours. To avoid this situation, the report /SAPAPO/OM_REORG_DAILY must be scheduled at least once a day.
z Versions can be closed and re-opened in another session. To gain heap memory, versions can be rolled out into the global data cache, where they are stored on temporary pages. The column ‘Rolled out’ displays whether the version cache was rolled out into the data cache. In the column ‘Rolled out pages’ you find the number of temporary pages in the data cache which are occupied by the rolled out version cache.
z Long running transactions can cause the same memory lack as versions. To display
starting time of all open transactions use ‘Problem Analysis->Performance-
>Transactions’.

Controlling the OMS heap consumption

Configurable control parameters:


OMS_VERS_THRESHOLD [KB]
OMS_HEAP_THRESHOLD [%]

After each COMMIT the liveCache checks whether the active version in
the current session consumes more than OMS_VERS_THRESHOLD KB
of the OMS heap or more than OMS_HEAP_THRESHOLD % of
OMS_HEAP_LIMIT are in use.
If YES:
- Unchanged objects are removed from the cache of the current version
- The current version cache is rolled out into the data cache.

 SAP AG 2002, Title of Presentation, Speaker Name 85

z The consumption of OMS heap by versions can be controlled by the two configuration parameters OMS_VERS_THRESHOLD and OMS_HEAP_THRESHOLD. Both parameters allow limiting the heap consumption at the cost of object access time.
z OMS_VERS_THRESHOLD:
At the end of the transaction, unchanged data from versions of a session are deleted from the
version cache and the version cache is rolled out into the data cache if the version occupies
more than OMS_VERS_THRESHOLD KB of memory. If the stored object is accessed again
at a later stage within the version, the object must be copied again from the data cache into the
heap. You do not have to do this if you set the OMS_VERS_THRESHOLD higher and there is
enough memory available.
z OMS_HEAP_THRESHOLD:
If more than this percentage of the available heap (defined by the parameter OMS_HEAP_LIMIT) is occupied, objects that were read but not changed within a version are removed from the heap at the end of the transaction and the version cache is rolled out to the data cache. The default value is 100. In case of memory bottlenecks it may be wise to choose a smaller value.

SAP liveCache Administration & Monitoring

Task Structure

z At the conclusion of this unit you will be able to monitor the tasks running inside your
liveCache server.

Process, thread and task structure

[Diagram: the liveCache process is divided into threads; single-task threads (Coordinator, Requestor, Console, Clock, Dev 0-n, Asdev 0-n, IOWorker 0-n) run beside user kernel threads (UKTs) that bundle liveCache tasks such as user tasks, server tasks, DataWriter, ALogWriter, TraceWriter, Utility, Timer, Dcom, Event and Garbage Collector tasks]

 SAP AG 2002, Title of Presentation, Speaker Name 87

z The operating system sees the liveCache as one single OS process. The process is
divided into several OS threads (Windows and UNIX). liveCache calls these threads
UKTs (user kernel threads).
z Some threads contain different specialized liveCache tasks whose dispatching is under
control of liveCache.
z Other threads contain just one single task.
z The tasks that perform the application requests are called user tasks. User tasks are
contained in UKTs which contain exclusively user tasks.
z Each APO work process is connected to one or two user tasks.
z Starting with liveCache 7.2.5.4 (APO SP 13) the number of CPUs used by user tasks can
be limited by parameter MAXCPU. MAXCPU defines the number of UKTs which
accommodate user tasks. Since the usertasks consume the majority of the CPU
performance MAXCPU defines approximately how many CPUs of the liveCache server
are occupied by the liveCache.

Coordinator: Initialization / UKT coordination
Requestor: Connect processing
Console: Diagnosis
Timer: Time monitoring
Dev0 thread: Master for I/O on volumes; Dev<i> slave threads
Async0 thread: Master for backup I/O; AsDev<i> threads

[Diagram: a user kernel thread (UKT) containing several user tasks]
 SAP AG 2002, Title of Presentation, Speaker Name 88

Task description

User: Executes commands from applications and interactive components
Server: Performs I/O during backups
ALogWriter: Writes the logs to the log volumes
DataWriter: Writes dirty pages from the data cache to disk
TraceWriter: Flushes the kernel trace to the kernel trace file
Utility: Handles liveCache administration
Timer: Monitors LOCK and REQUEST TIMEOUTs
GarbageCollector: Removes outdated history files and object data

 SAP AG 2002, Title of Presentation, Speaker Name 89

z Each UKT makes various tasks available, including:


y user tasks, i.e. tasks that users connect to in order to work with the liveCache
y tasks with specific internal functions
z The total number of tasks is determined at start-up time and they are then distributed
dynamically over the configured UKTs according to defined rules. Task distribution is
controlled by parameters like e.g. _TASKCLUSTER_02.
z UKT tasks allow a more effective synchronization of actions involving several
components of the liveCache, and minimize expensive process switching.
z The user tasks execute all commands from the applications. COM routines for instance
run within the user tasks.
z Server tasks are used for various purposes, e.g. for I/O during backups, for the creation of indexes and for read ahead.
z On NT, tasks are implemented as fibers. On UNIX, tasks are realized as OS threads; the threads of one UKT form a group in which only one thread can be active at a time.

Task distribution

Show task distribution

 SAP AG 2002, Title of Presentation, Speaker Name 90

z The task distribution of the liveCache can be viewed within the LC10 through the
selection ‘Current Status->Kernel threads->Thread Overview’.
z liveCache configuration: All garbage collector tasks always run in one thread.
z In the example above all user tasks run in one thread. Accordingly, the configuration parameter MAXCPU is set to one.

Current task state

Show task state

 SAP AG 2002, Title of Presentation, Speaker Name 91

z The screen ‘Current Status->Kernel threads->Task manager’ displays information about


the status of liveCache tasks which are currently working for an APO work process.
z In a running system, possible status are
y Running: task is in kernel code of liveCache and uses CPU
y Command Wait: user tasks wait for another command to execute
y DcomObjCalled: task is in COM routine code and uses CPU
y IO Wait (R) or IO Wait (W): task waits for I/O completion
y Vbegexl, Vsuspend: task waits for an internal lock in liveCache
y Vwait: task waits until a lock is released which is held by another APO application. Locks are released after commit or rollback.
y No-Work: task is suspended since there is nothing to do
z If the sum of tasks in the states ‘Running’ and ‘DcomObjCalled’ is higher than the number of CPUs on the liveCache server for a longer time, liveCache likely faces a CPU bottleneck. Before the number of CPUs is increased, a detailed analysis of the COM routines may be necessary.
z The ‘Application Pid’ is the process ID of the connected APO work process which can
be identified in transaction SM50 and SM51 respectively.
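z The same task states can also be sampled outside the LC10 with the runtime-environment commands mentioned elsewhere in this document (liveCache name and user are placeholders); repeat the call a few times to see whether tasks stay in ‘Running’ or ‘DcomObjCalled’.
    dbmcli -d <liveCache_name> -u control,<password> show act
    x_cons <liveCache_name> show all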

liveCache Console

liveCache: Console

 SAP AG 2002, Title of Presentation, Speaker Name 92

z The ‘liveCache: Console‘ window displays information about the liveCache status that is largely also shown by the selection ‘Current Status->Kernel threads‘ in the ‘liveCache: Monitoring’ window (see previous slide). However, while the output of the ‘liveCache: Monitoring‘ window is always based on SQL queries to the liveCache, the ‘liveCache: Console‘ gets its results directly from the runtime environment of the liveCache. That means that in situations where you can no longer connect to the liveCache you can still use the liveCache console to investigate the liveCache status.
z All data shown in the various selections of the console screen can also be obtained by calling the command ‘x_cons <liveCache name> show all‘ on a command line.

Cumulative task state

 SAP AG 2002, Title of Presentation, Speaker Name 93

z A comprehensive description of all objects of the liveCache run time environment (RTE) is displayed on the ‘liveCache: Console‘ screen. RTE objects are tasks, disks, memory, semaphores (synchronization objects, here called regions) and waiting queues.
z In addition to the information about the current task states, which can also be displayed as shown on the previous slides, the selection ‘Task activities‘ displays cumulated information about the task activities. In particular the dispatcher count is given, which counts how often a task was dispatched by the task scheduler. As long as this number stays constant the task is inactive.
Other important parameters are:
y command_cnt: counts the number of application commands executed by the task.
y exclusive_cnt: number of accesses to regions (synchronization objects)
y state_vwait: counts the cases where the task had to wait for objects locked by another task
z Among the other information which can be displayed by the liveCache console, the number of disk accesses, the accesses to critical regions (see the slides in the unit ‘Performance analysis’) and the PSE data are most important. Everything else is intended to be used only by liveCache developers. Therefore the displayed values may sometimes seem a little cryptic.

SAP liveCache Administration & Monitoring

Recovery

z At the conclusion of this unit you will be able to restore your liveCache.

Restart

[Timeline diagram: three savepoints are written before a crash; T1 commits before the last savepoint; T2 and T3 are open at the last savepoint and not committed at crash time; T4 and T6 commit after the last savepoint; T5 starts after the last savepoint and is rolled back]
C: Commit, R: Rollback
No recovery for transactions T1 and T5 – Redo (read archive log): T4, T6 – Undo (read undo file): T2, T3
 SAP AG 2002, Title of Presentation, Speaker Name 95

z Automatic recovery takes place at restart.
z The restart redoes transactions which were committed at crash time but whose changes are not yet completely contained in the last savepoint. Transactions which were still open at crash time are only rolled back if they were already open at the time of the last savepoint.
z Starting point for the redo/undo is the last savepoint. All data written to the data volumes after the last savepoint is not considered.
z Our example:
y Transactions 1 and 5 are not relevant for redo/undo. Transaction 1 was committed at the time of the last savepoint; its modifications were already written to the data volumes. The modifications of transaction 5 are not in the data area of the last savepoint.
y Transactions 2, 3 and 4 were not completed at the time of the last savepoint. The liveCache will redo transaction 4 ➜ REDO
y Transactions 2 and 3 will be rolled back, beginning at the time of the last savepoint ➜ UNDO
y The restart will completely redo transaction 6. Its modifications are not in the data area of the last savepoint ➜ REDO

Recovery process

[Diagram: recovery sequence – Restore DAT_00004 → Restore PAG_00006 → Restore LOG_00010 → Restore LOG_00011 → Restart (remaining entries are applied from the archive log automatically) → Restart ready; the restored pages are written back to the data volumes Data 1 … Data n]

 SAP AG 2002, Title of Presentation, Speaker Name 96

z Recovery always starts with a RESTORE DATA in the operating mode ADMIN. During the restore, pages are written back to the volumes.
z RESTORE PAGES overwrites the pages in the volumes with the modified images.
z Log recovery is based on the last savepoint executed with SAVE DATA/PAGES. After the last RESTORE DATA/PAGES the database immediately performs a restart if the log entries belonging to that savepoint are still present in the archive log. The restart reapplies the log entries.
z RESTORE LOG must be run if the savepoint belonging to the complete/incremental backup has already been overwritten in the archive log.
z The database reads the log entries from the backup media until it finds the next entry in the archive log.
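z The same recovery can presumably be driven from the command line. A minimal sketch under assumptions: it presumes the DBM commands db_admin, recover_start and db_online exist in your release and that the backup media Data_Compl and BackLog have been defined; the exact command set and argument order may differ in liveCache 7.4.
    dbmcli -d <liveCache_name> -u control,<password> db_admin
    dbmcli -d <liveCache_name> -u control,<password> recover_start Data_Compl DATA
    dbmcli -d <liveCache_name> -u control,<password> recover_start BackLog LOG
    dbmcli -d <liveCache_name> -u control,<password> db_online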

Recovery (1) : Start recovery process

Switch into
OFFLINE/ADMIN/ONLINE mode

Recover database

 SAP AG 2002, Title of Presentation, Speaker Name 97

z To perform a recovery it is necessary to bring the database into the ADMIN mode, which can be done in the DBMGUI by pressing the yellow light of the traffic light symbol in the upper left corner.
z To start the recovery you have to change to the selection ‘Recovery->Database‘. In the central window you can then choose which complete backup should be the basis for the recovery of the database. You can take the last complete backup (uppermost radio button) but also any other complete backup (middle radio button). With the ‘Next Step‘ icon you continue the recovery process.

Recovery (2) : Choose backup to start with

 SAP AG 2002, Title of Presentation, Speaker Name 98

z All previously made complete data backups are shown in this list. To continue the
recovery mark the backup which you want to use as the basis for the recovery and
press the button ‘Next Step‘.

Recovery (3) : Choose strategy

Recovery strategies

 SAP AG 2002, Title of Presentation, Speaker Name 99

z The simplest recovery strategy is shown. In the example above it is to restore the incremental backup after the complete backup. No further log backups are required since all needed log information is still on the log volumes.
z Instead of restoring the incremental backup you could restore the log backups. To do so, mark one of the log backups; all further required backups are then marked automatically.

Recovery (4) : Start physical recovery

Start recovery

 SAP AG 2002, Title of Presentation, Speaker Name 100

z To start the recovery you have to press the ‘Start‘ button.
z Each time a backup medium has been restored, the DBMGUI asks for the next backup. If the backup medium were a tape instead of a file, you would have to change the tape now. To continue the recovery press the ‘Start‘ button again.

Recovery (5) : Restart liveCache

Restart

 SAP AG 2002, Title of Presentation, Speaker Name 101

z After the recovery from the backup media is finished the DBMGUI informs you that it is
possible to restart the liveCache. Then the log entries from the log volumes will be
redone.
z When the restart is finished the liveCache is in ONLINE mode and all its data and
functionalities are available again.

SAP liveCache Administration & Monitoring

Configuration

z This unit introduces the key parameter of the liveCache configuration and demonstrates
how they can be manipulated.

Displaying configuration parameters

Display parameters
and their history

 SAP AG 2002, Title of Presentation, Speaker Name 103

z Each time a liveCache is started it is configured according to a parameter set stored in the liveCache parameter file.
z The parameter file is named after the <SID> of the liveCache and stored in the directory <IndepData>/config, which is usually ‘/sapdb/data/config‘. Changes of the parameter file are logged in the file <SID>.pah located in the same directory.
z The parameter file is not human-readable and must not be changed directly, since the parameters are not independent and have to fulfil certain constraints. To change the parameters you have to use one of the administration tools like DBMCLI, DBMGUI, LC10 or WEBGUI.
z Within the LC10 the configuration parameters can be shown via the selection ‘Current Status->Configuration->Parameters->Currently‘. The history of each parameter can be accessed by pressing the triangle in front of it.
z According to their relevance for the administrator the parameters are divided into three groups:
y General: These parameters can be changed by the liveCache administrator.
y Extended, Support: Changes should be performed only in cooperation with the SAP support.

Change configuration parameters

Store changes

Change parameters

 SAP AG 2002, Title of Presentation, Speaker Name 104

z To change the configuration parameters go to the selection ‘Administration->Configuration->Parameters‘. Here you find a column ‘New value‘ which is highlighted for all parameters which you are allowed to change. The other parameters are either fixed after the initialization or they are determined by other parameters.
z By pressing the ‘Check Input‘ button you can check whether your new parameter values fulfil all required constraints. To store your updated values press the disk icon. The file which contains the constraints, rules and descriptions of the parameters is called cserv.pcf and can be found in <InstallationPath>/env.
z Notice that the configuration parameters are only read when the liveCache is started, which means that parameter changes do not take effect until the liveCache has been stopped and started again.
z In principle all parameters should have proper values after the installation and no further reconfiguration should be necessary.
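z Individual parameters can presumably also be read and changed with DBM commands. This sketch assumes that param_directget and param_directput exist in your release; CACHE_SIZE and the value are only examples, and the new value still takes effect only after the liveCache has been restarted.
    dbmcli -d <liveCache_name> -u control,<password> param_directget CACHE_SIZE
    dbmcli -d <liveCache_name> -u control,<password> param_directput CACHE_SIZE 150000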

SAP liveCache Administration & Monitoring

Performance Analysis

z At the conclusion of this unit you will be able to use the LC10 and the DBMGUI to find out
if the performance of your liveCache is limited by a bottleneck. Moreover, you will be given
ideas of how to improve the performance.

Monitoring APO and liveCache performance

[Screenshot callouts: APO performance and liveCache share of the response time · Monitoring APO-specific transactions · Monitoring the liveCache server]

 SAP AG 2002, Title of Presentation, Speaker Name 106

z Analyzing an APO system for liveCache workload and bottlenecks, three different areas
must be covered:
y Estimate the liveCache share of the total APO response time and identify the APO transactions which cause the high liveCache workload.
y Monitor the liveCache server and identify bottlenecks.
y Detailed analysis of specific APO transactions which are identified as performance critical.
z These three areas are covered by different sets of SAP monitoring transactions
y Workload analysis transaction ST03N
y liveCache monitor transaction LC10
y A combination of runtime analysis transaction SE30, SQL trace transaction LC10
and liveCache monitoring transaction LC10
z This workshop is focused on monitoring the liveCache server. However, a complete
performance analysis has always to include all three parts shown above.

Reasons for poor liveCache performance

z High rate of I/O operations


z Serialization on synchronization objects
z Insufficient CPU performance
z Algorithmic errors in the COM routines
z Algorithmic errors in the liveCache code

 SAP AG 2002, Title of Presentation, Speaker Name 107

z There exist several causes for a poor liveCache performance. The most important are:
y A high rate of I/O operations performed by the user tasks.
y Serialization on liveCache synchronization objects. These objects are used to
synchronize the parallel access to shared liveCache resources, such as the data
cache.
y Too many users run COM routines at the same time.
y Both the COM routines and the liveCache itself can cause poor performance due to algorithmic errors.

How to increase performance

z Optimize setting of configuration parameters


z Extend main memory
z Increase number of CPUs
z Call APO/liveCache support

 SAP AG 2002, Title of Presentation, Speaker Name 108

z The most important measure to improve the liveCache performance is to optimize the setting of the liveCache configuration parameters.
z If a shortage of main memory or CPU performance is detected (see the next slides) you should enlarge the main memory or increase the number of CPUs.
z Whenever the performance is poor for unclear reasons you should contact the APO/liveCache support.

Prerequisite for performance analysis

Must be larger than 50000 for representative data

Show SQL statistics

 SAP AG 2002, Title of Presentation, Speaker Name 109

z A reliable analysis of the liveCache of a productive system is only possible if a sufficient number of COM routines has already been executed. If fewer than about 50000 COM routines have been executed, the monitored data may not reflect a representative workload of a productive APO system.
z To get an impression of how many commands (DB procedures / COM routines) have been executed, choose the tab ‘SQL statistics’ in the selection ‘Problem Analysis->Performance->Monitor’. The tab displays for each SQL action, such as reading, inserting or deleting a record, how often it was executed. To find the number of executed COM routines look for the row ‘External DBPROC calls’. For a liveCache this number corresponds to the number of COM routines executed.

Performance parameters: I/O

Show data cache filling and accesses

! Should be 100% !

 SAP AG 2002, Title of Presentation, Speaker Name 110

z Although the liveCache is designed to keep all data in the data cache when it is in ONLINE mode, the liveCache can accommodate more data than fits into the data cache. If this happens, the liveCache performance can suffer heavily from the I/O operations needed to swap pages between the data cache and the data volumes.
z To detect bottlenecks due to I/O operations use the selection ‘Current Status->Memory Areas->Data cache’. There you can find information about the data cache filling level as well as about the data cache accesses.
z For optimal liveCache performance (i.e. to avoid I/O operations when accessing data and history pages) the data cache usage should be below 100%.
z Whether the performance is significantly affected by I/O operations can be seen from the number of failed cache accesses. The average data cache hit rate should be above 99.9%. A lower rate is a hint that the data cache is too small. The situation shown above indicates rather poor performance.
z With the SQL command ‘monitor init’ you can reset the access counters to zero. This allows you to display the current hit rates (see the example below).
z Notice that after the start of the liveCache the data cache is empty and it takes some time until the hit rate reaches a stable value that is relevant for an analysis.
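z The counter reset can presumably also be scripted from the command line. This sketch assumes that the DBM command sql_execute is available and can issue the ‘monitor init’ statement against the liveCache; depending on the release you may have to open an SQL session first (e.g. with sql_connect).
    dbmcli -d <liveCache_name> -u control,<password> sql_execute monitor init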

Possible reasons for poor data cache hit rates

z Insufficient size of data cache


z Long-running version
z Long-running transaction

 SAP AG 2002, Title of Presentation, Speaker Name 111

z The main reason for a poor cache hit rate is a data cache which is configured too small.
However, sometimes the hit rate is poor due to long running versions or transactions. To
keep the consistent view of the versions or transactions the liveCache is forced to store
a large number of history pages which fill the cache and lead to a roll out of data and
history pages to the data devices.
z To find out if a bad hit rate is caused by versions or transactions check the selections
‘Problem Analysis->Performance->Monitor->OMS versions’ and ‘Problem Analysis-
>Performance->Transactions’. There should be no version older than four hours.

How to configure the data cache

CACHE_SIZE ≈ 0.4 * FREE_MEMORY

FREE_MEMORY = min [ physical memory - memory for OS and other applications, MAX_VIRTUAL_MEMORY ]
              - SHOW_STORAGE
              - MAXUSERTASK * _MAXTASK_STACK
              - 100 MB

physical memory     : physical memory of the liveCache server
MAXUSERTASK         : parameter from the liveCache configuration file
_MAXTASK_STACK      : parameter from the liveCache configuration file
MAX_VIRTUAL_MEMORY  : NT: see ‘MAX virtual memory‘ in the knldiag file; UNIX: call ulimit -a
100 MB              : upper limit of memory for task stacks of non-user tasks + memory for COM routine DLLs + memory for the liveCache program code
SHOW_STORAGE        : result of the command
                      dbmcli -d <liveCache_name> -u control,control show storage

 SAP AG 2002, Title of Presentation, Speaker Name 112

z The above formula gives a suggestion for the configuration parameter CACHE_SIZE, which determines the size of the data cache. Depending on your particular profile, it may be necessary to deviate from this suggestion.
z If your cache hit rate is below 100% although CACHE_SIZE is set as shown above, the physical memory of your liveCache server should be enlarged.
z MAX_VIRTUAL_MEMORY describes the maximum memory that can be addressed by the liveCache. On NT this limit is displayed in the knldiag file; on UNIX use the command ‘ulimit -a’.
z On Windows NT you should use the Enterprise Edition to increase MAX_VIRTUAL_MEMORY from 2 GB to 3 GB.
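z Worked example with purely hypothetical figures: on a server with 16 GB of physical memory, assume 2 GB are reserved for the OS and other software, the show storage result plus the 100 MB allowance amount to about 0.5 GB, and 50 user tasks with a 1 MB stack each need roughly 0.05 GB. FREE_MEMORY is then about 16 - 2 - 0.5 - 0.05 ≈ 13.4 GB (provided MAX_VIRTUAL_MEMORY is not the limiting factor), which suggests CACHE_SIZE ≈ 0.4 * 13.4 GB ≈ 5.4 GB; if CACHE_SIZE is specified in 8 KB pages, this corresponds to roughly 675,000 pages.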

How to configure the OMS heap

OMS_HEAP_LIMIT ≈ 0.6 * FREE_MEMORY

The heap size is all right if no OutOfMemory exceptions occur.

If (#OutOfMemoryExceptions > 0)
    increase the OMS heap, if necessary at the cost of the data cache.

 SAP AG 2002, Title of Presentation, Speaker Name 113

z The free memory available for the data cache and the OMS heap should be divided in a ratio of 40/60, where the OMS heap gets the larger part of the memory.
z In contrast to the data cache, the OMS heap is not allocated at the start of the liveCache, and thus there is no strict need to define OMS_HEAP_LIMIT in the configuration file. By setting the OMS heap limit to 0 you allow the liveCache to allocate as much heap memory as it can get from the operating system. However, on Windows NT and AIX the liveCache could crash if the OS cannot allocate any more memory; therefore you should set OMS_HEAP_LIMIT to the value suggested above. If OMS_HEAP_LIMIT is not zero, the liveCache stops requesting heap memory from the OS when the limit is reached; instead, all COM routines requesting further memory are aborted.
z The heap memory is of sufficient size if no OutOfMemory exceptions occur. They must be avoided since they cause a COM routine to abort. The occurrence of OutOfMemory exceptions can be checked by executing the SQL command
select sum (OutOfMemoryExceptions) from Monitor_OMS
or by checking the column ‘OutOfMemory excpt.‘ on the tab ‘Transaction counter‘ of the selection ‘Problem Analysis->Performance->OMS monitor‘ in the LC10.
z If you find the number of OutOfMemory exceptions growing, you should increase the OMS heap (if necessary by making the data cache smaller).
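z Continuing the hypothetical example from the CACHE_SIZE slide: with FREE_MEMORY ≈ 13.4 GB, the 40/60 split suggests OMS_HEAP_LIMIT ≈ 0.6 * 13.4 GB ≈ 8 GB, next to a data cache of about 5.4 GB.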

Performance parameters: regions (1)

Example: Data cache

The data cache is striped into _DATA_CACHE_RGNS regions. To access a page with page number PNO, a task must enter the data region (PNO mod _DATA_CACHE_RGNS)+1. In each region at most one task can search for a page.

[Diagram: a data cache striped into four regions (Data1 … Data4), each holding a set of page numbers; user tasks u1 … u6 competing for page accesses must each enter the region responsible for the requested page]

 SAP AG 2002, Title of Presentation, Speaker Name 114

z When monitoring the liveCache task activities in 'liveCache: Console ->Active Tasks'
(or by executing dbmcli -d <liveCache_name> -u control,control show act) the user
tasks should ideally be in the state 'Running' or 'DcomObjCalled'. If user tasks are
instead often in the state 'Vbegexcl', your performance may suffer from serialized
access to internal liveCache locks. The liveCache calls these internal locks
regions (they correspond to latches in Oracle). Regions are used to synchronize
parallel access to shared resources. For instance, searching for a page in the data cache
is protected by regions. In each region at most one task can search for a page.
z If a task requests a region which is already occupied by another task, the requesting task
is suspended until it can enter the region. This situation is displayed by the
status 'Vbegexcl' in the task monitor 'liveCache: Console ->Active Tasks'.
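z To make the striping rule concrete, here is a small Python sketch (our own illustration;
_DATA_CACHE_RGNS is taken as 4 to match the figure above, and the page numbers are
those shown there):

      # Illustration only: map page numbers (PNO) to data cache regions.
      DATA_CACHE_RGNS = 4  # corresponds to the parameter _DATA_CACHE_RGNS

      def region_for_page(pno: int) -> int:
          # Region numbers are 1-based: (PNO mod _DATA_CACHE_RGNS) + 1
          return (pno % DATA_CACHE_RGNS) + 1

      for pno in (16, 61, 42, 83):
          print(f"page {pno} -> Data{region_for_page(pno)} region")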

Performance parameters: regions (2)

Collision rate

Show region access

 SAP AG 2002, Title of Presentation, Speaker Name 115

z The number of collisions, i.e., situations where a task must be suspended because it
requested an occupied region, is displayed in the 'liveCache: Console' screen for each
region.
z The collision rates of frequently used regions should not exceed 10%. Otherwise the
liveCache performance is at risk.
z To reduce critical collision rates, the configuration parameter defining the number of
regions used to stripe the corresponding resource can be increased. However, since a high
collision rate could be an indicator of algorithmic errors, this should be done only in
collaboration with the liveCache support.
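z For orientation, a tiny Python sketch of how such a rate can be computed from the
collision and access counters of a region (the counter values below are made up; the
console screen already shows the rate, so this only makes the 10% threshold tangible):

      # Illustration only: collision rate of a region in percent.
      def collision_rate(collisions: int, accesses: int) -> float:
          return 0.0 if accesses == 0 else 100.0 * collisions / accesses

      # Hypothetical counters as they might be read from the console screen:
      rate = collision_rate(collisions=1200, accesses=50000)
      print(f"collision rate: {rate:.1f}% -> {'critical' if rate > 10 else 'ok'}")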

Performance parameters: MAXCPU

Guideline for servers used exclusively for a liveCache

if (# CPUs of liveCache server < 8)
    MAXCPU = # CPUs of liveCache server
else
    MAXCPU = # CPUs of liveCache server - 1

 SAP AG 2002, Title of Presentation, Speaker Name 116

z If a liveCache server possesses fewer than 8 CPUs, the configuration parameter
MAXCPU should be set to the exact number of CPUs. If there are 8 or more CPUs,
MAXCPU should be the number of CPUs reduced by one. This reserves one CPU for
non-user tasks. In particular, the garbage collector can use this processor to remove
deleted objects.
z A good choice for the number of garbage collectors (GC) is to set
_MAX_GARBAGE_COLL to twice the number of data devices. This choice has no
influence on the CPU usage of the GCs, since all GCs run in one thread, but it results in
a good I/O performance of the GCs.
z If more user tasks are in the states 'Running' and 'DcomObjCalled' than the liveCache
server has CPUs, the liveCache performance is CPU bound.
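z The guideline and the GC suggestion can be written down as a small sketch (our own
illustration in Python; the resulting numbers still have to be entered as the configuration
parameters MAXCPU and _MAX_GARBAGE_COLL):

      # Illustration only: MAXCPU guideline for a dedicated liveCache server
      # and the suggested number of garbage collectors.
      def suggest_maxcpu(cpus: int) -> int:
          # Leave one CPU for non-user tasks on machines with 8 or more CPUs.
          return cpus if cpus < 8 else cpus - 1

      def suggest_garbage_collectors(data_devices: int) -> int:
          # _MAX_GARBAGE_COLL: twice the number of data devices for good GC I/O.
          return 2 * data_devices

      print(suggest_maxcpu(4), suggest_maxcpu(16))       # -> 4 15
      print(suggest_garbage_collectors(data_devices=6))  # -> 12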

Analysis of COM routine performance

COM routine monitor

 SAP AG 2002, Title of Presentation, Speaker Name 117

z Even if the liveCache itself works fine, the COM routines can cause poor performance due
to algorithmic errors. To analyze such problems the liveCache supplies an expert tool
for investigating the performance of COM routines. It lists the runtime, memory
consumption and number of object accesses for each COM routine. All these data give
hints as to which COM routine could be problematic. However, since the analysis is not
simple, this monitor should be used only by the APO support.
z Tab explanation:
y 'Runtime': total and average runtime of each COM routine.
y 'Object Accesses': number of object accesses from the private cache and from the
data cache for each routine (see also the corresponding slide).
y 'Transaction counter': number of exceptions thrown within the routine, number of
commits and rollbacks for subtransactions.
y 'Cost summary': summary of the previous four tabs.

Tracing internal liveCache activities

liveCache tracing

Create readable trace file

Activate/deactivate tracing

Flush trace

 SAP AG 2002, Title of Presentation, Speaker Name 118

z To analyze its internal activities, the liveCache can write a trace file. This file is very
helpful for finding the reasons for bad performance, which may be due to algorithmic or
programming errors within the liveCache. The file should be interpreted only by the
liveCache support.
z The trace is not written automatically but must be activated using the DBMGUI. In the
selection 'Check->Tracing' you can choose which operations should be traced. After
activating the trace it is written into a main memory structure to avoid a slowdown of the
system due to trace I/O operations. To actually write the trace to a file it must be flushed.
The resulting file is not yet readable but still an image of the memory structure. A
readable file can be created in the tab 'Protocol'.

Summary

(1) liveCache concepts and architecture
(2) liveCache integration into R/3 via transaction lc10
(3) Basic administration (starting / stopping / initializing)

(4) Complete data backup

(5) Data storage

(6) Advanced administration
    (log backup / incremental data backup / add volume)

(7) Consistent views and garbage collection
(8) Memory areas
(9) Task structure

(10) Recovery
(11) Configuration

(12) Performance analysis

 SAP AG 2002, Title of Presentation, Speaker Name 119

Further Information

-> Public Web:
   www.sap.com -> Solutions -> Supply Chain Management
   www.sapdb.org
-> Service Marketplace:
   http://service.sap.com -> mySAP SCM Technology

-> Related Workshop at TechEd 2002
   SAP DB Administration Made Easy,
   September 30th / 4:00 pm,
   Hall 5 / Room L

-> Related Lectures at TechEd 2002
   liveCache: The Engine of APO,
   October 2nd / 3:00 pm – 4:00 pm,
   Kaisen Saal

 SAP AG 2002, Title of Presentation, Speaker Name 120

Q&A

 SAP AG 2002, Title of Presentation, Speaker Name 121

Feedback

http://www.sap.com/teched/bremen/

-> Conference Activities

 SAP AG 2002, Title of Presentation, Speaker Name 122

Copyright 2002 SAP AG. All Rights Reserved

No part of this publication may be reproduced or transmitted in any form or for any purpose without the express
permission of SAP AG. The information contained herein may be changed without prior notice.
Some software products marketed by SAP AG and its distributors contain proprietary software components of other
software vendors.
Microsoft®, WINDOWS®, NT®, EXCEL®, Word®, PowerPoint® and SQL Server® are registered trademarks of
Microsoft Corporation.
IBM®, DB2®, DB2 Universal Database, OS/2®, Parallel Sysplex®, MVS/ESA, AIX®, S/390®, AS/400®, OS/390®,
OS/400®, iSeries, pSeries, xSeries, zSeries, z/OS, AFP, Intelligent Miner, WebSphere®, Netfinity®, Tivoli®, Informix
and Informix® Dynamic ServerTM are trademarks of IBM Corporation in USA and/or other countries.
ORACLE® is a registered trademark of ORACLE Corporation.
UNIX®, X/Open®, OSF/1®, and Motif® are registered trademarks of the Open Group.
Citrix®, the Citrix logo, ICA®, Program Neighborhood®, MetaFrame®, WinFrame®, VideoFrame®, MultiWin® and
other Citrix product names referenced herein are trademarks of Citrix Systems, Inc.
HTML, DHTML, XML, XHTML are trademarks or registered trademarks of W3C®, World Wide Web Consortium,
Massachusetts Institute of Technology.
JAVA® is a registered trademark of Sun Microsystems, Inc.
JAVASCRIPT® is a registered trademark of Sun Microsystems, Inc., used under license for technology invented and
implemented by Netscape.
MarketSet and Enterprise Buyer are jointly owned trademarks of SAP Markets and Commerce One.
SAP, SAP Logo, R/2, R/3, mySAP, mySAP.com and other SAP products and services mentioned herein as well as
their respective logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries
all over the world. All other product and service names mentioned are trademarks of their respective companies.

 SAP AG 2002, Title of Presentation, Speaker Name 123

EUROPEAN
SAP TECHNICAL
EDUCATION
CONFERENCE 2002

WORKSHOP

Sept. 30 – Oct. 2, 02 Bremen, Germany

