
Oracle Internal & Oracle Academy Use Only

Oracle Database 18c: New Features


for Administrators

Student Guide
D101882GC10
Edition 1.0 | April 2018 | D103824

Learn more from Oracle University at education.oracle.com


Copyright © 2018, Oracle and/or its affiliates. All rights reserved.
Author
Dominique Jeunot

Technical Contributor and Reviewer
Jean-Francois Verrier

Graphic Designer
Kavya Bellur

Editors
Smita Kommini
Nikita Abraham

Publishers
Jobi Varghese
Joseph Fernandez
Giri Venugopal

Disclaimer

This document contains proprietary information and is protected by copyright and other intellectual property laws. You may copy and print this document solely for your own use in an Oracle training course. The document may not be modified or altered in any way. Except where your use constitutes "fair use" under copyright law, you may not use, share, download, upload, copy, print, display, perform, reproduce, publish, license, post, transmit, or distribute this document in whole or in part without the express authorization of Oracle.

The information contained in this document is subject to change without notice. If you find any problems in the document, please report them in writing to: Oracle University, 500 Oracle Parkway, Redwood Shores, California 94065 USA. This document is not warranted to be error-free.

Restricted Rights Notice

If this documentation is delivered to the United States Government or anyone using the documentation on behalf of the United States Government, the following notice is applicable:

U.S. GOVERNMENT RIGHTS
The U.S. Government's rights to use, modify, reproduce, release, perform, display, or disclose these training materials are restricted by the terms of the applicable Oracle license agreement and/or the applicable U.S. Government contract.

Trademark Notice

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
Contents

1 Introduction
Overview 1-2
New Release Model 1-3
New Version Numbering for Oracle Database 1-4
Practice 1: Overview 1-5

2 Using Multitenant Enhancements



Objectives 2-2
CDB Fleet 2-3
CDB Lead and CDB Members 2-4
Use Cases 2-5
PDB Snapshot Carousel 2-6
Creating PDB Snapshot 2-7
Creating PDBs Using PDB Snapshots 2-8
Dropping PDB Snapshots 2-9
Flashing Back PDBs Using PDB Snapshots 2-10
Container Map 2-11
Container Map: Example 2-12
Query Routed Appropriately 2-13
Dynamic Container Map 2-14
Restricting Operations with Lockdown Profile 2-15
Lockdown Profiles Inheritance 2-16
Static and Dynamic Lockdown Profiles 2-17
Refreshable Cloned PDB 2-18
Switching Over a Refreshable Cloned PDB 2-19
Unplanned Switchover 2-20
Instantiating a PDB on a Standby 2-21
PDB-Level Parallel Statement Queuing 2-23
PDB-Level Parallel Statement Queuing: CPU_COUNT 2-25
Using DBCA to Clone PDBs 2-26
Summary 2-27
Practice 2: Overview 2-28

3 Managing Security
Objectives 3-2
Schema-Only Account 3-3

Encrypting Data Using Transparent Data Encryption 3-4
Managing Keystore in CDB and PDBs 3-6
Keystore Management Changes for PDB 3-8
Defining the Keystore Type 3-9
Isolating a PDB Keystore 3-10
Converting a PDB to Run in Isolated Mode 3-11
Converting a PDB to Run in United Mode 3-12
Migrating a PDB Between Keystore Types 3-13
Creating Your Own TDE Master Encryption Key 3-14
Protecting Fixed-User Database Links Obfuscated Passwords 3-15
Importing Fixed-User Database Links Encrypted Passwords 3-16
DB Replay: The Big Picture 3-17



Encryption of Sensitive Data in Database Replay Files 3-18
Capture Setup for DB Replay 3-19
Process and Replay Setup for DB Replay – Phase 1 3-20
Process and Replay Setup for DB Replay – Phase 2 3-21
Oracle Database Vault: Privileged User Controls 3-22
Database Vault: Access Control Components 3-23
DB Replay Capture and Replay with Database Vault 3-25
Authenticating and Authorizing Users with External Directories 3-26
Architecture 3-27
EUS and AD 3-28
CMU and AD 3-29
Choosing Between EUS and CMU 3-31
Summary 3-32
Practice 3: Overview 3-33

4 Using RMAN Enhancements


Objectives 4-2
Migrating a Non-CDB to a CDB 4-3
Migrating a Non-CDB and Transporting Non-CDB Backups to a CDB 4-4
Relocating/Plugging a PDB into Another CDB 4-5
Plugging a PDB and Transporting PDB Backups to a CDB - 1 4-6
Plugging a PDB and Transporting PDB Backups to a CDB - 2 4-7
Using PrePlugin Backups 4-8
To Be Aware Of 4-9
Example 4-10
Cloning Active PDB into Another CDB Using DUPLICATE 4-11
Example: 1 4-12
Example: 2 4-13
Duplicating On-Premise CDB as Cloud Encrypted CDB 4-14

Duplicating On-Premise Encrypted CDB as Cloud Encrypted CDB 4-16
Duplicating Cloud Encrypted CDB as On-Premise CDB 4-17
Automated Standby Synchronization from Primary 4-18
Summary 4-19
Practice 4: Overview 4-20

5 Using General Database Enhancements


Objectives 5-2
Global Temporary Tables 5-3
Private Temporary Tables 5-4
Import with the CONTINUE_LOAD_ON_FORMAT_ERROR option 5-5
Online Partition and Subpartition Maintenance Operations 5-6



Online Modification of Partitioning and Subpartitioning Strategy 5-8
Online Modification of Subpartitioning Strategy: Example 5-9
Online MERGE Partition and Subpartition: Example 5-10
Batched DDL from DBMS_METADATA Package 5-11
Unicode 9.0 Support 5-12
Summary 5-13
Practice 5: Overview 5-14

6 Improving Performance
Objectives 6-2
In-Memory Column Store: Dual Format of Segments in SGA 6-3
Deploying the In-Memory Column Store 6-4
Setting In-Memory Object Attributes 6-5
Managing Heat Map and Automatic Data Optimization Policies 6-6
Creating ADO In-Memory Policies 6-7
Automatic In-Memory: Overview 6-8
AIM Action 6-9
Configuring Automatic In-Memory 6-10
Diagnostic Views 6-11
Populating In-Memory Expression Results 6-12
Populating In-Memory Expression Results Within a Window 6-13
Memoptimized Rowstore 6-14
In-Memory Hash Index 6-15
DBMS_SQLTUNE Versus DBMS_SQLSET Package 6-16
SQL Tuning Sets: Manipulation 6-17
SQL Performance Analyzer 6-18
Using SQL Performance Analyzer 6-19
Steps 6-7: Comparing / Analyzing Performance and Tuning Regressed SQL 6-20
SQL Performance Analyzer: PL/SQL Example 6-21

SQL Exadata-Aware Profile 6-23
Summary 6-24
Practice 6: Overview 6-25

7 Handling Enhancements in Big Data and Data Warehousing


Objectives 7-2
Querying External Tables 7-3
Querying Inlined External Tables 7-4
Database In-Memory Support for External Tables 7-5
Analytic Views 7-6
Visual Totals Versus Non-Visual Totals 7-7
Filter-Before Aggregate Predicates and Calculated Measures 7-8



Query-Scoped Calculations Using Hierarchy-Based Predicates 7-9
Using Multiple Hierarchy-Based Predicates 7-10
Query-Scoped Calculations Using Calculated Measures 7-11
Using Hierarchy-Based Predicates and Calculated Measures 7-12
Polymorphic Table Functions 7-14
Row-Semantics and Table-Semantics PTFs 7-15
Components to Create PTFs 7-16
Steps to Create PTFs 7-17
DBMS_TF Routines 7-18
How Does RDBMS Compile and Execute PTFs? 7-19
Exact Top-N Query Processing: SQL Row-Limiting Clause 7-20
Exact Top-N Query Processing: Rank Window Function 7-21
Approximate Top-N Query Processing 7-22
Approximate Top-N Query Processing: Example 1 7-24
Approximate Top-N Query Processing: Example 2 7-25
Approximate Top-N Query Processing: Example 3 7-26
Approximate Top-N Query Processing: Example 4 7-27
Summary 7-28
Practice 7: Overview 7-29

8 Sharding Enhancements
Objectives 8-2
System-Managed and Composite Sharding Methods 8-3
User-Defined Sharding Method 8-5
Support for PDBs as Shards 8-7
Improved Oracle GoldenGate Support 8-8
Query System Objects Across Shards 8-9
Consistency Levels for Multi-Shard Queries 8-10
Sharding Support for JSON, LOBs, and Spatial Objects 8-11

Improved Multi-Shard Query Support 8-14
Oracle Sharding Documentation 8-15
Summary 8-16
Practice 8: Overview 8-17

A Database Sharding
Objectives A-2
What Is Database Sharding? A-3
Sharding: Benefits A-4
Oracle Sharding: Advantages A-5
Application Considerations for Sharding A-6
Components of Database Sharding A-8



Shard Catalog A-9
Shard Directors A-10
Complete Deployment of a System-Managed SDB A-11
Creating Sharded Tables A-12
Sharded Table Family A-13
Partitions, Tablespaces, and Chunks A-14
Sharding Methods: System-Managed Sharding A-15
Sharding Methods: Composite Sharding A-16
Duplicated Tables A-17
Routing in an Oracle Sharded Environment A-18
Direct Routing via Sharding Key A-19
Connection Pool as Shard Director A-20
Proxy Routing: Limited to System Managed in 12.2.0.1 A-21
Lifecycle Management of SDB A-22
Sharding Deployment Outline: DBA Steps A-23
Summary A-26

1

Introduction

Overview

• This course focuses on the new features and enhancements of Oracle Database 18c.
• This course complements the topics covered in the following courses:
– The 5-day Oracle Database 12c: 12.2 New Features for 12.1 Administrators Ed 1
course
– Or the 10-day Oracle Database 12c R2: New Features for Administrators course
— Oracle Database 12c R2: New Features for Administrators Part 1 Ed 1
— Oracle Database 12c R2: New Features for Administrators Part 2 Ed 1
• Previous experience with Oracle Database 12c, in particular Release 2 (12.2), is
required for a full understanding of many of the new features and enhancements.



• Hands-on practices emphasize functionality rather than test knowledge.

Refer to the Oracle Database New Features Guide 18c.


This three-day course follows either the 5-day Oracle Database 12c: 12.2 New Features for 12.1
Administrators Ed 1 course or the 10-day Oracle Database 12c Release 2 (12.2) course. Both
courses are designed to introduce the new features and enhancements of Oracle Database 12c
Release 2 (12.2.0.1) that are applicable to the work usually performed by database
administrators and related personnel.
This course is designed to introduce the major new features and enhancements of
Oracle Database 18c.
You should not expect to discover all the new features in this course without supplemental
reading, especially the Oracle Database New Features Guide documentation for 12c Release 2
(12.2) and for 18c.
The course consists of instructor-led lessons, hands-on labs, and Oracle By Example (OBE)
tutorials that enable you to see how certain new features behave.

Oracle Database 18c: New Features for Administrators 1 - 2


New Release Model

Oracle delivers annual releases and quarterly release updates.


• Oracle delivers releases yearly instead of on a multi-year cycle.
• Yearly releases improve database quality by reducing the number of software changes
released at one time.
• Quarterly Release Update (RU) + Release Update Revision (RUR) improve the quality
and experience of proactive maintenance. This model:
– Combines the best of PSUs with the best of Bundle Patches
– Allows customers to update using RUs when they need fixes, and then switch to field
proven RURs when their environment becomes stable



– Enables customers to switch back and forth between RUs and RURs unlike PSUs
and BPs
– Contains all important security fixes
– Eliminates tradeoff between security and stability
– Is shipped in January, April, July, and October, following the same schedule as PSUs and BPs

Traditional Multi-Year Release Cycle

With the traditional multi-year release cycle, it has become difficult to support releases and to
address the one-off patch proliferation problem. It is also not possible to switch back and forth
between the various maintenance deliverable types.
Yearly Release
• Agility
- Delivers new features more quickly and incrementally
- Eliminates multiple year wait for new functionality
- Allows customer-requested enhancements to be delivered more rapidly
• Quality
- Incremental changes avoid massive changes that are difficult to test and stabilize.
- This removes the pressure to add new features late in the release cycle to satisfy
customer requests that cannot wait multiple years.
Oracle Database 12c Release 1 (12.1) and Oracle Database 11g Release 1 continue to use
previous PSU/BP process and version numbering.
Quarterly Release Update and Release Update Revision
Size and content restrictions of current Patch Set Updates (PSUs) result in greater risk of
encountering a known problem, and create a proliferation of risky one-off backports. Bundle Patches
have more proactive fixes to address issues with PSUs, but increase risk of regressions.
The Release Update (RU) plus Release Update Revision (RUR) model provides the stability benefits
of PSUs, with the proactive maintenance benefits of Bundle Patches.



New Version Numbering for Oracle Database

Version numbers use a three-digit format consisting of: Year.Update.Revision


• Year is the last two digits of the year a release is delivered:
– “18” stands for year 2018: Oracle Database 18c
• Update tracks release update (RU) or Beta releases.
• Revision tracks the associated release update revision (RUR) level (0, 1, 2).
– From 18.1.0 onward, the new version numbering applies to all RUs and RURs.
• Production databases start by using latest RU for fastest stabilization.
• When production DB stability is achieved, switch over to RURs seamlessly.



Q1 2018: 18.1.0
Q2 2018: 18.2.0
Q3 2018: 18.3.0, 18.2.1
Q4 2018: 18.4.0, 18.3.1, 18.2.2
Q1 2019: 18.5.0, 18.4.1, 18.3.2
Q2 2019: 18.6.0, 18.5.1, 18.4.2


RUs consolidate fixes for common issues encountered by customers and are highly tested as a
bundle by Oracle before shipping.
RURs primarily contain RU content, with sufficient time allowed for that content to be proven in
the field by thousands of customer deployments and for any regressions to be resolved.
The table in the slide is an example.
For more information about the availability of Oracle Database 18c for on-prem platforms, refer to the
My Oracle Support (MOS) Release Schedule of Current Database Releases (Doc ID 742060.1).



Practice 1: Overview

• 1-1: Discovering the practice environment



2

Using Multitenant Enhancements

Objectives

After completing this module, you should be able to:


• Manage a CDB fleet
• Manage PDB snapshots
• Use a dynamic container map
• Explain lockdown profile inheritance
• Describe refreshable copy PDB switchover
• Identify the parameters used when instantiating a PDB on a standby



• Enable parallel statement queuing at PDB level
• Use DBCA to clone PDBs



CDB Fleet
18c

A CDB fleet is a collection of different CDBs that can be managed as one logical CDB:
• To provide the underlying infrastructure for massive scalability and centralized
management of many CDBs
• To provision more than the maximum number of PDBs for an application
[Diagram: CDB fleet High_Speed with cdb1, cdb2, and cdb3; CDB fleet Low_Speed with cdb4 and cdb5; each CDB contains its own PDBs]

• To manage appropriate server resources for PDBs, such as CPU, memory, I/O rate, and
storage systems


Oracle Database 18c introduces the CDB Fleet feature. CDB Fleet aims to provide the underlying
infrastructure for massive scalability and centralized management of many CDBs.
• The maximum number of PDBs in a CDB is 4096 PDBs. A CDB fleet can hold more than
4096 PDBs.
• Different PDBs in a single configuration require different types of servers to function
optimally. Some PDBs might process a large transaction load, whereas other PDBs are used
mainly for monitoring. You want the appropriate server resources for these PDBs, such as
CPU, memory, I/O rate, and storage systems.
• Each CDB can use all the usual database features for high availability, scalability, and
recovery of the PDBs in the CDB, such as Real Application Clusters (RAC), Data Guard,
RMAN, PITR, and Flashback.
• PDB names must be unique across all CDBs in the fleet. PDBs can be created in any CDB in
the fleet, but can be opened only in the CDB where they physically exist.



CDB Lead and CDB Members

• The CDB lead in a fleet is the CDB from which you perform operations across the fleet.
• The CDB members of the fleet link to the CDB lead through a database link.
[Diagram: CDB fleet High_Speed with CDB lead cdb1 and CDB members cdb2 and cdb3, each containing its own PDBs]

On the CDB lead (cdb1), from the CDB root:

1. CONNECT / AS SYSDBA
   ALTER DATABASE SET lead_cdb = TRUE;

2. GRANT sysoper, … TO system CONTAINER = ALL;

On each CDB member (cdb2 and cdb3), from the CDB root, run the same sequence of statements:

3./4. CONNECT / AS SYSDBA
      CREATE PUBLIC DATABASE LINK lcdb1
        CONNECT TO system IDENTIFIED BY pass
        USING 'cdb1';
      ALTER DATABASE SET lead_cdb_uri = 'dblink:lcdb1';

A CDB fleet contains a CDB lead and CDB members. PDB information from the individual CDBs is
synchronized with the CDB lead.
The CDB lead is, from the CDB root, able to:
• Monitor all PDBs of all CDBs in the fleet
• Report information and collect diagnostic information from all PDBs of all CDBs in the fleet
through a cross-container query
• Query Oracle-supplied objects from all PDBs of all CDBs in the fleet
To configure a CDB fleet, define the lead and then the members.
1. To define a CDB as the CDB lead in a CDB fleet, from the CDB root, set the LEAD_CDB
database property to TRUE.
2. Still in the CDB root of the CDB lead, use a common user and grant appropriate privileges.
3. To define other CDBs as members of the CDB fleet:
a) Connect to the CDB root of another CDB.
b) Use a common user identical to the common user used in the lead CDB because you
have to create a public database link using a fixed user.
c) Set the LEAD_CDB_URI database property to the name of the database link to the
CDB lead.
It assumes that the network is configured so that the current CDB can connect to the CDB lead
using the connect descriptor defined in the link.



Use Cases

• Monitoring and collecting diagnostic information across CDBs from the lead CDB
• Querying Oracle-supplied objects, such as DBA views, in different PDBs across the CDB
fleet

[Diagram: CDB fleet High_Speed with CDB lead cdb1 (PDB1) and fleet members cdb2 (PDB2, PDB22) and cdb3 (PDB3, PDB33, PDB333)]

SELECT … con$name, cdb$name FROM CONTAINERS (dba_users) GROUP BY …;

• Serving as a central location where you can view information about and the status of all
the PDBs across multiple CDBs


• The CDB lead in the CDB fleet can monitor PDBs across the CDBs in the CDB fleet. You can
install a monitoring application in one container and use CDB views and GV$ views to
monitor and process diagnostic data for the entire CDB fleet. A cross-container query issued
in the lead CDB can automatically execute in all PDBs across the CDB fleet through the
Oracle-supplied objects.
• Using Oracle-supplied or even common application schema objects in different PDBs (or
application PDBs) across the CDB fleet, you can use the CONTAINERS clause or
CONTAINER_MAP to run queries across all of the PDBs of the multiple CDBs in the fleet.
This enables the aggregation of data from PDBs in different CDBs across the fleet. The
application can be installed in an application root and each CDB in the fleet can have an
application root clone to enable the common application schema across the CDBs.
• The CDB lead can serve as a central location where you can view information about and the
status of all the PDBs across multiple CDBs.
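The cross-container query shown on the slide can be fleshed out as follows. This is only a sketch, run from the CDB root of the CDB lead; CON$NAME and CDB$NAME are the implicit columns that CONTAINERS() exposes for fleet queries, and the aggregation shown is an illustrative choice:

```sql
-- Count accounts per PDB across every CDB in the fleet
SELECT con$name, cdb$name, COUNT(*) AS user_count
FROM   CONTAINERS(dba_users)
GROUP  BY con$name, cdb$name
ORDER  BY cdb$name, con$name;
```

A monitoring application installed in the lead CDB could run queries of this shape on a schedule to track all PDBs in the fleet from one place.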



PDB Snapshot Carousel
18c

A PDB snapshot is a named copy of a PDB at a specific point in time.


[Diagram: PDB snapshot carousel for PDB1 in cdb1, with snapshots PDB1_snapW (Wed-00:01) and PDB1_snapT (Thu-00:01)]

• Recovery extended beyond the flashback retention period
• Reporting on historical data kept in snapshots
• Storage-efficient snapshot clones taken on a periodic basis
• Maximum of eight snapshots for the CDB and each PDB

Example: On Friday, you need to recover back to Wednesday: restore PDB1_snapW.


In Oracle Database 18c, when you create a PDB, you can specify whether it is enabled for PDB
snapshots. A PDB snapshot is an archive file (.pdb) containing the contents of the copy of the PDB
at snapshot creation.
PDB snapshots allow the recovery of PDBs back to the oldest PDB snapshot available for a PDB.
This feature extends recovery beyond the flashback retention period, which requires flashback to
be enabled for the database.
The example in the slide shows a situation where you have to restore PDB1 back to Wednesday.
A use case of PDB snapshots is reporting on historical data. You might create a snapshot of a
sales PDB at the end of the financial quarter. You could then create a PDB based on this snapshot
so as to generate reports from the historical data.
Every PDB snapshot is associated with a snapshot name and the SCN and timestamp at snapshot
creation. The MAX_PDB_SNAPSHOTS database property sets the maximum number of PDB
snapshots for each PDB. The default and allowed maximum is 8. When the maximum number is
reached for a PDB, and an attempt is made to create a new PDB snapshot, the oldest PDB snapshot
is purged. If the oldest PDB snapshot cannot be dropped because it is open, an error is raised. You
can decrease this limit for a given PDB by issuing an ALTER DATABASE statement specifying a
max number of snapshots. If you want to drop all PDB snapshots, you can set the limit to 0.
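The limit described above is adjusted with an ALTER PLUGGABLE DATABASE statement; a hedged sketch, assuming the SET MAX_PDB_SNAPSHOTS syntax and a session connected to the PDB:

```sql
-- Lower the carousel limit for this PDB from the default of 8 to 4
ALTER PLUGGABLE DATABASE SET MAX_PDB_SNAPSHOTS = 4;

-- Setting the limit to 0 drops all PDB snapshots of this PDB
ALTER PLUGGABLE DATABASE SET MAX_PDB_SNAPSHOTS = 0;
```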



Creating PDB Snapshot
18c

DATABASE_PROPERTIES:
  PROPERTY_NAME  = MAX_PDB_SNAPSHOTS
  PROPERTY_VALUE = 8

Related views: DBA_PDB_SNAPSHOTS, DBA_PDBS (SNAPSHOT_MODE column)

To create PDB snapshots for a PDB:

1. Enable a PDB for PDB snapshots:

   SQL> CREATE PLUGGABLE DATABASE pdb1 … SNAPSHOT MODE MANUAL;
   SQL> ALTER PLUGGABLE DATABASE pdb1 SNAPSHOT MODE EVERY 24 HOURS;

2. Create multiple manual PDB snapshots of the PDB:

   SQL> ALTER PLUGGABLE DATABASE pdb1 SNAPSHOT pdb1_first_snap;
   SQL> ALTER PLUGGABLE DATABASE pdb1 SNAPSHOT pdb1_second_snap;

3. Disable snapshot creation for the PDB:

   SQL> ALTER PLUGGABLE DATABASE pdb1 SNAPSHOT MODE NONE;

[Diagram: PDB snapshot carousel for PDB1 with archive log files and snapshots PDB1_snapW (Wednesday) and PDB1_snapT (Thursday)]

By default, a PDB is enabled for PDB snapshots. There are two ways to define PDBs enabled for
PDB snapshot creation:
• Manually: The first example in the slide uses the SNAPSHOT MODE MANUAL clause of the
CREATE PLUGGABLE DATABASE or ALTER PLUGGABLE DATABASE statement. Omitting the
clause gives the same result, because manual mode is the default.
• Automatically after a given interval of time: The second example in the slide uses the
SNAPSHOT MODE EVERY <snapshot_interval> [MINUTES|HOURS] clause of the
CREATE PLUGGABLE DATABASE or ALTER PLUGGABLE DATABASE statement. When the
amount of time is expressed in minutes, it must be less than 3000. When the amount of time
is expressed in hours, it must be less than 2000. In the second example in the slide, the
SNAPSHOT MODE clause specifies that a PDB snapshot is created automatically every 24
hours.
Every PDB snapshot is associated with a snapshot name and the SCN and timestamp at snapshot
creation. You can specify a name for a PDB snapshot. The third and fourth examples in the slide
show how to create PDB snapshots manually, even if the PDB is set to have PDB snapshots created
automatically. If PDB snapshots are created automatically, the system generates a name.



Creating PDBs Using PDB Snapshots
[Diagram: PDB snapshot carousel for PDB1 in cdb1, with snapshots PDB1_snap1 (day-1) and PDB1_snap2 (day-2); PDB1_day_1 is created from a snapshot]

After a PDB snapshot is created, you can create a new PDB from it:

SQL> CREATE PLUGGABLE DATABASE pdb1_day_1 FROM pdb1
     USING SNAPSHOT <snapshot_name>;

SQL> CREATE PLUGGABLE DATABASE pdb1_day_2 FROM pdb1
     USING SNAPSHOT AT SCN <snapshot_SCN>;

You can create a new PDB from an existing PDB snapshot by using the USING SNAPSHOT clause.
Provide any of the following:
• The snapshot name
• The snapshot SCN at which the snapshot was created
• The snapshot time at which the snapshot was created
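As a concrete instance of the first variant, the historical-reporting use case mentioned earlier in this lesson could create a reporting PDB from the named snapshot built on the previous slide (pdb1_first_snap is the snapshot name used there):

```sql
-- Create a reporting PDB from a named PDB snapshot of PDB1
CREATE PLUGGABLE DATABASE pdb1_report FROM pdb1
  USING SNAPSHOT pdb1_first_snap;
```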



Dropping PDB Snapshots

• Automatic PDB snapshot deletion when MAX_PDB_SNAPSHOTS is reached:

[Diagram: PDB1 in cdb1 with MAX_PDB_SNAPSHOTS = 4; snapshots PDB1_snapA (day -8) through PDB1_snapF (day -3); the oldest snapshots are purged as new ones are created]

• Manual PDB snapshot deletion:

SQL> ALTER PLUGGABLE DATABASE pdb1 DROP SNAPSHOT pdb1_first_snap;

When the carousel reaches eight PDB snapshots, or the maximum number of PDB snapshots
defined, the oldest PDB snapshot is deleted automatically, whether or not it is in use. There is no
need to materialize a PDB snapshot in the carousel, because PDB snapshots are all full clones.
Be aware that if the SNAPSHOT COPY clause is used with the USING SNAPSHOT clause, the
SNAPSHOT COPY clause is simply ignored.
You can manually drop PDB snapshots by altering the PDB for which the PDB snapshots were
created and using the DROP SNAPSHOT clause.



Flashing Back PDBs Using PDB Snapshots

[Diagram: Recovering PDB1 in cdb1 after a user error: close PDB1, create PDB1b from PDB1_snapW (Wednesday), drop PDB1, rename PDB1b to PDB1, then take a new snapshot (PDB1_snapS on Saturday)]

PDB snapshots enable the recovery of PDBs back to the oldest PDB snapshot available for a PDB.
The example in the slide shows a situation where you detect an error that happened between
PDB1_SNAPW and PDB1_SNAPT creation. To recover the situation, perform the following steps:
1. Close PDB1.
2. Create PDB1b from the PDB1_SNAPW snapshot created before the user error.
3. Drop PDB1.
4. Rename PDB1b to PDB1.
5. Open PDB1 and create a new snapshot.
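The five recovery steps above can be sketched in SQL from the CDB root. The statement forms are assumptions based on standard PDB management syntax (in particular, renaming a PDB is assumed to require restricted mode), so treat this as an outline rather than a verified script:

```sql
-- 1. Close the damaged PDB
ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;

-- 2. Re-create the PDB as of the Wednesday snapshot
CREATE PLUGGABLE DATABASE pdb1b FROM pdb1 USING SNAPSHOT pdb1_snapw;

-- 3. Remove the damaged PDB
DROP PLUGGABLE DATABASE pdb1 INCLUDING DATAFILES;

-- 4. Rename the clone back to the original name (restricted mode assumed)
ALTER PLUGGABLE DATABASE pdb1b OPEN RESTRICTED;
ALTER PLUGGABLE DATABASE pdb1b RENAME GLOBAL_NAME TO pdb1;

-- 5. Reopen normally and take a fresh snapshot
ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE pdb1 OPEN;
ALTER PLUGGABLE DATABASE pdb1 SNAPSHOT pdb1_snaps;
```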



Container Map
12c

• Define a PDB-based partition strategy based on the values stored in a column.


• Select a column that is commonly used and never updated:
  – for example, a time identifier (rather than creation_date) or a region name
• Set the database property CONTAINER_MAP in the application root.
• Each PDB corresponds to the data of a particular partition of the map table.

[Diagram: MAP table with partitions N_AMER, APAC, and EMEA mapped to the application PDBs NA, APAC, and EMEA]

DATABASE_PROPERTIES:
  PROPERTY_NAME  = CONTAINER_MAP
  PROPERTY_VALUE = app.tabapp
  DESCRIPTION    = value of container mapping table

In Oracle Database 12c, the CONTAINERS clause (table or view) in a query in the CDB root
accesses a table or view in the CDB root and in each of the opened PDBs, and returns a UNION ALL
of the rows from the table or view. This concept is extended to work in an application container.
CONTAINERS (table or view) queried in an application root accesses the table or view in the
application root and in each of the opened application PDBs of the application container.
CONTAINERS (table or view) can be restricted to access a subset of PDBs by using a predicate on
CON_ID. CON_ID is an implicitly generated column of CONTAINERS (table or view).
SELECT fname, lname FROM CONTAINERS(emp) WHERE CON_ID IN (44,56,79);
One drawback of CONTAINERS() is that queries need to be changed to add a WHERE clause on
CON_ID if only certain PDBs should be accessed. Often, rows of tables or views are horizontally
partitioned across PDBs based on a user-defined column.
The CONTAINER_MAP database property provides a declarative way to indicate how rows in
metadata-linked tables or views are partitioned across PDBs.
The CONTAINER_MAP database property is set in the application root. Its value is the name of a
partitioned table (the map object). The names of the partitions of the map object match the names of
the PDBs in the application container. The columns that are used in partitioning the map object
should match the columns in the metadata-linked object that is being queried. The partitioning
schemes that are supported for a CONTAINER_MAP map object are LIST, HASH, and RANGE.
Note: Container maps can be created in CDB root, but the best practice is to create them in
application roots.



Container Map: Example
CREATE TABLE tab1 (region …, …);
CREATE TABLE tab2 (…, region …);

CREATE TABLE app1.app_map ( columns …, region VARCHAR2(20))


PARTITION BY LIST (region)
(PARTITION N_AMER VALUES ('TEXAS','CALIFORNIA','MEXICO','CANADA'),
PARTITION EMEA VALUES ('UK', 'FRANCE', 'GERMANY'),
PARTITION APAC VALUES ('INDIA', 'CHINA', 'JAPAN'));

ALTER PLUGGABLE DATABASE SET CONTAINER_MAP = 'app1.app_map';

ALTER TABLE tab1 ENABLE CONTAINER_MAP;

DBA_TABLES:
  CONTAINER_MAP_OBJECT = YES

[Diagram: application container with PDB$SEED, application root, and application PDBs N_AMER, APAC, and EMEA]

In a hybrid model, you can create common partitioned tables in the application root, mapping a
partition of the table to an application PDB of the application container. For example, the
TENANT_GRP1 partition would store data for customers of group1 in the Tenant_GRP1 application
PDB. The TENANT_GRP2 partition would store data for customers of group2 in the Tenant_GRP2
application PDB.
In a data warehouse model, you can create common partitioned tables in the application root, which
are partitioned on a column, such as REGION in the example in the slide, where data is segregated
into separate application PDBs of the application container.
In the example in the slide, the N_AMER partition stores data for TEXAS, CALIFORNIA, MEXICO,
and CANADA, as defined in the list, in the N_AMER application PDB. The EMEA partition stores
data for UK, FRANCE, and GERMANY, as defined in the list, in the EMEA application PDB.
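The earlier notes mention that RANGE and HASH partitioning are also supported for map objects, in addition to the LIST scheme used above. A hypothetical RANGE-based map might look like the following sketch; the table and partition names are illustrative, and the partition names must match the application PDB names:

```sql
-- Hypothetical map object routing rows by year to per-year application PDBs
CREATE TABLE app1.year_map (year NUMBER)
  PARTITION BY RANGE (year)
  (PARTITION pdb_2016 VALUES LESS THAN (2017),
   PARTITION pdb_2017 VALUES LESS THAN (2018),
   PARTITION pdb_2018 VALUES LESS THAN (2019));

ALTER PLUGGABLE DATABASE SET CONTAINER_MAP = 'app1.year_map';
```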



Query Routed Appropriately

SELECT … FROM some_table


WHERE region IN ('CANADA', 'GERMANY', 'INDIA');

- Use CONTAINERS to implicitly AGGREGATE data-

SELECT .. FROM fact_tab UPDATE fact_tab2 SET COLUMN


WHERE region = ‘N_AMER'; WHERE region = 'FRANCE';

[Figure: application container with the application root, PDB$SEED, and application PDBs N_AMER, APAC, and EMEA]


Because data is segregated into separate application PDBs of the application container, querying a
container map table, for example for N_AMER data, automatically retrieves data from the N_AMER
application PDB. The query is appropriately routed to the relevant partition and therefore to the
relevant application PDB.
If you need to retrieve data from a table that is spread over several application PDBs within an
application container, use the CONTAINERS clause to aggregate rows from partitions from several
application PDBs.
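As a sketch of the CONTAINERS aggregation described above (a hedged example: fact_tab is the table name used in the slide, and the query assumes the table exists in each application PDB and is run from the application root):

```sql
-- Aggregate rows of fact_tab across all application PDBs of the
-- application container; CON_ID identifies the contributing container.
SELECT con_id, region, COUNT(*) AS row_count
FROM   CONTAINERS(fact_tab)
GROUP  BY con_id, region;
```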



Dynamic Container Map
18c

CREATE PLUGGABLE DATABASE s_amer …
CONTAINER_MAP UPDATE (ADD PARTITION s_amer VALUES ('PERU','ARGENTINA'));

SELECT .. FROM fact_tab WHERE region = 'S_AMER';

[Figure: application container with the application root, PDB$SEED, and application PDBs N_AMER, S_AMER, APAC, and EMEA]



CREATE PLUGGABLE DATABASE s_amer_peru …
CONTAINER_MAP UPDATE (SPLIT PARTITION s_amer
INTO (PARTITION s_amer ('ARGENTINA'), PARTITION s_amer_peru));


In Oracle Database 18c, when a PDB is created, dropped, or renamed, the container map defined in
the CDB root, the application root, or both can be dynamically updated to reflect the change.
The CREATE PLUGGABLE DATABASE statement takes an optional clause that describes the key
values affiliated with the new PDB.



Restricting Operations with Lockdown Profile
12c

Operations, features, and options used by users connected to a given PDB can be disallowed.
1. You can create PDB lockdown profiles from the CDB root only.
2. You can define restrictions through enabled and disabled:
   – Statements and clauses
   – Features
   – Options
3. Setting the PDB_LOCKDOWN parameter to a PDB lockdown profile sets it for all PDBs.
4. Optionally, set the PDB_LOCKDOWN parameter to another PDB lockdown profile for a PDB.
5. Restart the PDBs.

[Figure: CDB1 with lockdown profiles lock_profile1 and lock_profile2 created in the CDB root (CDB_LOCKDOWN_PROFILES), restricting operations such as ALTER SYSTEM SET and Partitioning; PDB_OE sets PDB_LOCKDOWN = lock_profile1, while PDB_SALES and PDB_HR set PDB_LOCKDOWN = lock_profile2]

In Oracle Database 12c, a PDB lockdown profile whose name is stored in the PDB_LOCKDOWN
parameter determines the operations that can be performed in a given PDB. If the PDB_LOCKDOWN
parameter is set to a PDB lockdown profile at the CDB root level, and no PDB_LOCKDOWN
parameter is set at the PDB level, then the PDB lockdown profile defined at the CDB root level
determines the operations that can be performed in all the PDBs.
After the PDB_LOCKDOWN parameter is set, the PDB must be bounced before the lockdown profile
can take effect.
A created PDB lockdown profile cannot derive any restriction rule from another PDB lockdown
profile.
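The steps above can be sketched as follows; this is a hedged example, and the specific DISABLE rule and PDB name are illustrative rather than taken from the course:

```sql
-- In the CDB root: create a lockdown profile and disable a statement clause
CREATE LOCKDOWN PROFILE lock_profile1;
ALTER LOCKDOWN PROFILE lock_profile1
  DISABLE STATEMENT = ('ALTER SYSTEM') CLAUSE = ('SET');

-- Point a PDB at the profile, then restart the PDB so it takes effect
ALTER SESSION SET CONTAINER = pdb_oe;
ALTER SYSTEM SET pdb_lockdown = lock_profile1;
ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE OPEN;
```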



Lockdown Profiles Inheritance
18c

[Figure: The CDB root defines CDB_prof1 (Rule1, disabled rule R2) and CDB_prof2 (Rule3, disabled rule R4). Application root APP2 defines App_prof4 (Rule7, disabled R8), App_prof5 (Rule9, disabled R10), and App_prof6 (Rule11, disabled R12); APP2 sets PDB_LOCKDOWN = CDB_prof1 and inherits from CDB_prof1, and App_prof4 affects all application PDBs in the application container. App PDB app2_1 sets PDB_LOCKDOWN = App_prof5: the rules of App_prof5 are in effect, and it inherits the lockdown profile rules set in its nearest ancestor, CDB_prof1. App PDB app2_2 inherits the lockdown profile rules set in its nearest ancestor, the App_prof4 profile, and in addition inherits the rules from CDB_prof1. A regular PDB inherits from CDB_prof1. Application root APP1 defines App_prof3 (Rule5, Rule6) and inherits from CDB_prof1; app PDB app1_1 inherits from CDB_prof1.]


In Oracle Database 18c, you can create lockdown profiles in application roots, and not only in the
CDB root.
If the PDB_LOCKDOWN parameter in a PDB is set to a name of a lockdown profile different from that
in its ancestor, the CDB root or application root for application PDBs, the following governs
interaction between restrictions imposed by these lockdown profiles:
• If the PDB_LOCKDOWN parameter in a regular or application PDB is set to a CDB lockdown
profile, lockdown profiles specified by the PDB_LOCKDOWN parameter respectively in the CDB
root or application root are ignored.
• If the PDB_LOCKDOWN parameter in an application PDB is set to an application lockdown
profile while the PDB_LOCKDOWN parameter in the application root or CDB root is set to a
lockdown profile, in addition to rules stipulated in the application lockdown profile, the PDB
lockdown profile inherits the DISABLE rules from the lockdown profile set in its nearest
ancestor, the CDB root.
• If there are conflicts between rules comprising the CDB lockdown profile and the application
lockdown profile, the rules in the CDB lockdown profile take precedence. For example, an
OPTION_VALUE clause of a CDB lockdown profile takes precedence over the OPTION_VALUE
clause of an application lockdown profile.



Static and Dynamic Lockdown Profiles
18c

CDB root / Application root

There are two ways to create lockdown profiles by using an existing profile:
• Static lockdown profiles:
SQL> CREATE LOCKDOWN PROFILE prof3 FROM base_lock_prof1;
• Dynamic lockdown profiles:
SQL> CREATE LOCKDOWN PROFILE prof4 INCLUDING base_lock_prof2;

[Figure: In the CDB root, Static_lock_from_prof1 copies the rules of Base_lock_prof1 at creation time, so rules later added to Base_lock_prof1 are not picked up; Dynamic_lock_from_prof2 includes the rules of Base_lock_prof2, and rules later added to Base_lock_prof2 are automatically added. In CDB1, PDB_OE sets PDB_LOCKDOWN = static_lock_from_prof1, PDB_SALES sets PDB_LOCKDOWN = base_lock_prof2, and PDB_HR sets PDB_LOCKDOWN = dynamic_lock_from_prof2]


When a PDB lockdown profile is created, it can derive rules from a “base” lockdown profile.
There are two ways of creating lockdown profiles using existing profiles:
• Static lockdown profiles: When the lockdown profile is being created with the FROM clause,
rules comprising the base profile at the time are copied to the static lockdown profile. Any
subsequent changes to the base profile do not affect the newly created static lockdown
profile.
• Dynamic lockdown profiles: When the lockdown profile is created with the INCLUDING
clause, the dynamic lockdown profile inherits the disabled rules comprising the base profile,
as well as any subsequent changes to the base profile. If rules explicitly added to the newly
created dynamic lockdown profile come into conflict with rules comprising the base profile,
the latter take precedence.



Refreshable Cloned PDB
12c

Remote source PDB still up and fully functional:
1. Connect to the target CDB2 root to create the database link to CDB1.
2. Switch the shared UNDO mode to local UNDO mode in both CDBs.
3. Clone the remote PDB1 to PDB1_REF_CLONE.
4. Open PDB1_REF_CLONE in read/write mode.

Incremental refreshing => open PDB1_REF_CLONE in RO mode:
• Manual
• Automatic (predefined interval)

SQL> CREATE PLUGGABLE DATABASE pdb1_ref_clone
     FROM pdb1@link_pdb_source_for_clone
     REFRESH MODE EVERY 2 MINUTES;

[Figure: CDB1 contains the remote source PDB1 (SYSTEM, SYSAUX, and USERS data files, plus local UNDO1); over a database link from CDB2, PDB1 is hot cloned into CDB2 as PDB1_REF_CLONE, which is then refreshed from PDB1]


Refreshable Copy
The Oracle Database 12c cloning technique copies a remote source PDB into a CDB while the
remote source PDB is still up and fully functional.
Hot remote cloning requires both CDBs to switch from shared UNDO mode to local UNDO mode,
which means that each PDB uses its own local UNDO tablespace.
In addition, hot cloning allows incremental refreshing in that the cloned copy of the production
database can be refreshed at regular intervals. Incremental refreshing means refreshing an existing
clone from a source PDB at a point in time that is more recent than the original clone creation to
provide fresh data. A refreshable copy PDB can be opened only in read-only mode.
Propagating changes from the source PDB can be performed in two ways:
• Manually (on demand)
• Automatically at predefined time intervals
If the source PDB is not accessible at the moment the refreshable copy needs to be updated, archive
logs are read from the directory specified by the REMOTE_RECOVERY_FILE_DEST parameter to
refresh the cloned PDB.
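A manual (on-demand) refresh can be sketched as follows, assuming the clone from the slide; a refreshable clone must be closed while it is refreshed:

```sql
-- In the root of CDB2: refresh the clone on demand
ALTER PLUGGABLE DATABASE pdb1_ref_clone CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE pdb1_ref_clone REFRESH;
ALTER PLUGGABLE DATABASE pdb1_ref_clone OPEN READ ONLY;
```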



Switching Over a Refreshable Cloned PDB
18c

Switchover at the PDB level:
1. A user creates a refreshable clone of a PDB.

   PDB1 (primary role, R/W)  ->  PDB1_REF_CLONE (refreshable copy role, Read Only)

2. The roles can be reversed: the refreshable clone can be made the primary PDB.
   – The new primary PDB can be opened in read/write mode.
   – The primary PDB becomes the refreshable clone.

SQL> CONNECT sys@PDB1 AS SYSDBA
SQL> ALTER PLUGGABLE DATABASE REFRESH MODE EVERY 6 HOURS
     FROM pdb1_ref_clone@link_cdb_source_for_clone SWITCHOVER;

   PDB1 (refreshable copy role, Read Only)  <-  PDB1_REF_CLONE (primary role, R/W)


In Oracle Database 18c, after a user creates a refreshable clone of a PDB, the roles can be
reversed. The refreshable clone can be made the primary PDB which can be opened in read/write
mode while the primary PDB becomes the refreshable clone.
The ALTER PLUGGABLE DATABASE command with the SWITCHOVER clause must be executed
from the primary PDB. The refresh mode can be either MANUAL or EVERY <refresh interval>
[MINUTES | HOURS]. REFRESH MODE NONE cannot be specified when issuing this statement.
After the switchover operation, the primary PDB becomes the refreshable clone and can only be
opened in READ ONLY mode. During the operation, the source is quiesced and any redo generated
from the time of the last refresh is applied to the destination to bring it current.
The database link user also has to exist in the primary PDB if the refreshable clone exists in another
CDB.



Unplanned Switchover

When a PDB with an associated refreshable clone encounters an issue, complete an
unplanned switchover:
1. Close the primary PDB.
2. Archive the current redo log file.
3. Drop the primary PDB.
4. Copy the archive redo log files to a new folder.
5. Set the destination for the archive redo log files.
6. Refresh the refreshable clone PDB.
7. Disable the refresh mode of the refreshable clone PDB.
8. Open the refreshed PDB that became the new primary PDB.
9. Optionally, create a new refreshable clone.

[Figure: the archive log files of the primary PDB1 are copied and applied to the refreshable copy PDB1_REF_CLONE, which becomes the new primary; optionally, a new refreshable copy named PDB1 is created from it]


In the example in the previous slide, the roles can be reversed at any time because neither the
primary PDB nor the refreshable cloned PDB is damaged.
In an unplanned switchover, the primary encounters an issue. To refresh the refreshable cloned
PDB, first archive the current redo log file and copy the archive log files to a new directory that you
define as the REMOTE_RECOVERY_FILE_DEST. Once the PDB is refreshed, disable the refresh
capability on the former refreshable cloned PDB, which becomes the primary PDB, and open it. You
can finally re-create a new refreshable cloned PDB from the former refreshable cloned PDB.
Because the original primary PDB is dropped, you can give the new refreshable cloned PDB the
same name as the former primary PDB.
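The nine steps above could be sketched as follows; this is a hedged sketch, the archive-log directory /u01/arch_copy is hypothetical, step 4 is an OS-level file copy, and REMOTE_RECOVERY_FILE_DEST is assumed to be settable in the clone PDB as described in the notes:

```sql
-- Steps 1-3, in the root of the CDB hosting the damaged primary PDB1
ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
ALTER SYSTEM ARCHIVE LOG CURRENT;
DROP PLUGGABLE DATABASE pdb1 INCLUDING DATAFILES;

-- Step 4: copy the archive redo log files to /u01/arch_copy at the OS level

-- Steps 5-8, connected to the refreshable clone
ALTER SESSION SET CONTAINER = pdb1_ref_clone;
ALTER SYSTEM SET remote_recovery_file_dest = '/u01/arch_copy';
ALTER PLUGGABLE DATABASE REFRESH;
ALTER PLUGGABLE DATABASE REFRESH MODE NONE;
ALTER PLUGGABLE DATABASE OPEN;
```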



Instantiating a PDB on a Standby

Creating a PDB on a primary CDB:

12c: From an XML file: Copy the data files specified in the XML file to the standby database.
18c: Use the STANDBY_PDB_SOURCE_FILE_DIRECTORY parameter to specify a directory
location on the standby where source data files for instantiating the PDB may be found.
=> Data files are automatically copied.

12c: As a clone from another PDB: Copy the data files belonging to the source PDB to the
standby database.
18c: Use the STANDBY_PDB_SOURCE_FILE_DBLINK parameter to specify the name of a
database link which is used to copy the data files from the source PDB to which the
database link points.
=> The file copy is automatically done only if the database link points to the source
PDB, and the source PDB is open in read-only mode.


In Oracle Database 12c, when you create a PDB in the primary CDB with a standby CDB, you must
first copy your data files to the standby. Do one of the following, as appropriate:
• If you plan to create a PDB from an XML file, the data files on the standby are expected to be
found in the PDB's OMF directory. If this is not the case, then copy the data files specified in
the XML file to the standby database.
• If you plan to create a PDB as a clone, then copy the data files belonging to the source PDB
to the standby database.
The path name of the data files on the standby database must be the same as the path name that
will result when you create the PDB on the primary in the next step, unless the
DB_FILE_NAME_CONVERT database initialization parameter has been configured on the standby. In
that case, the path name of the data files on the standby database should be the path name on the
primary with DB_FILE_NAME_CONVERT applied.
In Oracle Database 18c, you can use initialization parameters to automatically copy the data files to
the standby.
• If you plan to create a PDB from an XML file, the
STANDBY_PDB_SOURCE_FILE_DIRECTORY parameter can be used to specify a directory
location on the standby where source data files for instantiating the PDB may be found. If
they are not found there, files are still expected to be found in the PDB's OMF directory on
the standby.



• If you plan to create a PDB as a clone, the STANDBY_PDB_SOURCE_FILE_DBLINK
parameter specifies the name of a database link, which is used to copy the data files
from the source PDB to which the database link points. The file copy is done only if
the database link points to the source PDB, and the source PDB is open in read-only
mode. Otherwise, the user is still responsible for copying datafiles to the OMF location
on the standby.
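As a sketch, the two parameters could be set on the standby as follows; the directory path and database link name are hypothetical:

```sql
-- Directory on the standby searched for source data files of a PDB
-- created from an XML file
ALTER SYSTEM SET standby_pdb_source_file_directory = '/u01/stby_src_files';

-- Database link used to copy data files when the PDB is created as a
-- clone (the link must point to the source PDB, open in read-only mode)
ALTER SYSTEM SET standby_pdb_source_file_dblink = 'clone_link';
```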



PDB-Level Parallel Statement Queuing
18c

• Possible issues of parallel statement queuing in a PDB
  (PARALLEL_DEGREE_POLICY = AUTO | ADAPTIVE):
  – Insufficient parallel servers available
  – Parallel statements queued for a long time
• A PDB DBA can make parallel statement queuing work just as it does in a non-CDB.
– Disable parallel statement queuing at CDB level: PARALLEL_SERVERS_TARGET = 0.
– Set the PARALLEL_SERVERS_TARGET initialization parameter for individual PDBs.
– Kill a runaway SQL operation:

Oracle Internal & Oracle Academy Use Only


SQL> ALTER SYSTEM CANCEL SQL '272,31460';

– Dequeue a parallel statement:


SQL> EXEC dbms_resource_manager.dequeue_parallel_statement()

– Define the action when dequeuing: PQ_TIMEOUT_ACTION plan directive

Copyright © 2018, Oracle and/or its affiliates. All rights reserved.

• In Oracle Database 12c, a DBA can set the PARALLEL_SERVERS_TARGET initialization


parameter at the CDB level.
In Oracle Database 18c, the DBA can set PARALLEL_SERVERS_TARGET at the PDB level
once parallel statement queuing is disabled at the CDB level. When
PARALLEL_SERVERS_TARGET is set to 0 at the CDB level, parallel statements queue at the
PDB level, based on the number of active parallel servers used by that PDB and the PDB's
PARALLEL_SERVERS_TARGET.
• In Oracle Database 18c, the DBA can use the ALTER SYSTEM CANCEL SQL command to
kill a SQL operation in another session that is consuming excessive resources, including
parallel servers, either in the CDB root or PDB.
- Sid, Serial: The session ID and serial number of the session that runs the SQL
statement
- Inst_id: The instance ID if the session is connected to an instance of a RAC
database
- Sql id: Optionally the SQL ID
The session that consumed excessive resources receives an ORA-01013: user
requested cancel of current operation message.
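The identifier string passed to ALTER SYSTEM CANCEL SQL combines these values; a hedged sketch, in which the session values and SQL ID shown are illustrative:

```sql
-- Identify the target session (run as a DBA)
SELECT sid, serial#, sql_id FROM v$session WHERE username = 'APPUSER';

-- Cancel by session ID and serial number only
ALTER SYSTEM CANCEL SQL '272, 31460';

-- Optionally add the instance ID (RAC) and the SQL ID
ALTER SYSTEM CANCEL SQL '272, 31460, @1, 7ujay4u33g337';
```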



• In Oracle Database 12c, if a parallel statement has been queued for a long time, the DBA can
dequeue the statement by using the
DBMS_RESOURCE_MANAGER.DEQUEUE_PARALLEL_STATEMENT procedure. The DBA can
also set the PARALLEL_STMT_CRITICAL plan directive to BYPASS_QUEUE. Parallel
statements from this consumer group are not queued. The default is FALSE, which means
that parallel statements are eligible for queuing.
• In Oracle Database 18c, you can specify the timeout behavior by using the
PQ_TIMEOUT_ACTION resource manager plan directive. The values are:
- ERROR
- RUN
- DOWNGRADE: In this case, a downgrade percentage can also be specified.



PDB-Level Parallel Statement Queuing: CPU_COUNT
18c

• If CPU_COUNT is set at the PDB level, the maximum DOP generated by AutoDOP
for queries is the PDB's CPU_COUNT.
• Similarly, the default values of PARALLEL_MAX_SERVERS and
PARALLEL_SERVERS_TARGET are computed based on the PDB's CPU_COUNT.


If CPU_COUNT is set at the PDB level, the maximum DOP generated by AutoDOP for queries is the
PDB's CPU_COUNT. Similarly, the default values of PARALLEL_MAX_SERVERS and
PARALLEL_SERVERS_TARGET are computed based on the PDB's CPU_COUNT.



Using DBCA to Clone PDBs
18c

• Clones a PDB in hot mode
• Creates the data files directory for the new PDB
• Opens the new PDB


In Oracle Database 12c, DBCA enables you to create a new PDB. The new PDB is created as a
clone of the CDB seed.
In Oracle Database 18c, DBCA enables you to create a new PDB as a clone of an existing PDB, and
not necessarily from the CDB seed.



Summary

In this lesson, you should have learned how to:


• Manage a CDB fleet
• Manage PDB snapshots
• Use a dynamic container map
• Explain lockdown profile inheritance
• Describe refreshable copy PDB switchover
• Identify the parameters used when instantiating a PDB on a standby


• Enable parallel statement queuing at PDB level
• Use DBCA to clone PDBs




Practice 2: Overview

• 2-1: Managing a CDB fleet


• 2-2: Managing and using PDB snapshots
• 2-3: Using a dynamic container map
• 2-4: Using static and dynamic lockdown profiles
• 2-5: Switching over refreshable cloned PDBs




3

Managing Security

Objectives

After completing this module, you should be able to:


• Create schema-only accounts
• Isolate a new PDB keystore
• Convert a PDB to run in isolated or united mode
• Migrate PDB keystore between keystore types
• Create user-defined TDE master keys
• Protect fixed-user database link passwords

• Export and import fixed-user database links with encrypted passwords
• Configure encryption of sensitive data in Database Replay files
• Perform Database Replay capture and replay in a database with
Database Vault
• Explain Enterprise users integration with Active Directory



Schema-Only Account

Ensure that a user cannot log in to the instance:


• Enforce data access through the application.
• Secure schema objects.
– Prevent objects from being dropped by the connected schema.
• Use the NO AUTHENTICATION clause.
– Can be replaced by IDENTIFIED BY VALUES
• A schema-only account cannot be:
– Granted system administrative privileges

– Used in database links

DBA_USERS
AUTHENTICATION_TYPE = NONE | PASSWORD

Copyright © 2018, Oracle and/or its affiliates. All rights reserved.

Application designers may want to create accounts that contain the application data dictionary, but
are not allowed to log in to the instance. This can be used to enforce data access through the
application, separation of duties at the application level, and other security mechanisms.
In addition, utility accounts can be created, but remain inaccessible by denying the ability to log in
except under controlled situations.
Until Oracle Database 12c, DBAs created accounts that do not need to log in to the instance, or do
so only rarely. Nevertheless, all of these accounts have default passwords, and there are also
requirements to rotate those passwords.
In Oracle Database 18c, an account can be created with the NO AUTHENTICATION clause to
ensure that the account is not permitted to log in to the instance. Removing the password and
removing the ability to log in essentially just leaves a schema. The schema account can be altered to
allow login, but can then have the password removed. The ALTER USER statement can be used to
disable or re-enable the login capability.
The DBA_USERS view has a new column, AUTHENTICATION_TYPE, which displays NONE when NO
AUTHENTICATION is set, and PASSWORD when a password is set.
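The behavior above can be sketched as follows; the appowner account name and the tablespace are hypothetical:

```sql
-- Create a schema-only account: no password, no ability to log in
CREATE USER appowner NO AUTHENTICATION
  DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;

-- Verify: AUTHENTICATION_TYPE shows NONE
SELECT username, authentication_type
FROM   dba_users
WHERE  username = 'APPOWNER';

-- Temporarily allow login, then remove the password again
ALTER USER appowner IDENTIFIED BY "St4y#Safe";
ALTER USER appowner NO AUTHENTICATION;
```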



Encrypting Data Using Transparent Data Encryption
12c

[Figure: the master encryption key, stored in the keystore outside the database, encrypts and decrypts the table keys (EMP: K1, DEPT: K2, TAB1: K3, held in the data dictionary) and the tablespace keys of TBS_HR and TBS_APPS; the CCN column of table EMP is encrypted into cipher text in the data blocks and decrypted into clear text for users]


Transparent Data Encryption (TDE) provides easy-to-use protection for your data without requiring
changes to your applications. TDE allows you to encrypt sensitive data in individual columns or
entire tablespaces without having to manage encryption keys. TDE does not affect access controls,
which are configured using database roles, secure application roles, system and object privileges,
views, Virtual Private Database (VPD), Oracle Database Vault, and Oracle Label Security. Any
application or user that previously had access to a table will still have access to an identical,
encrypted table.
TDE is designed to protect data in storage, but does not replace proper access control.
TDE is transparent to existing applications. Encryption and decryption occurs at different levels
depending on whether it is tablespace or column level, but in either case, encrypted values are not
displayed and are not handled by the application. TDE eliminates the ability of anyone who has
direct access to the data files to gain access to the data by circumventing the database access
control mechanisms. Even users with access to the data file at the operating system level cannot see
the data unencrypted.
TDE stores the master key outside the database in an external security module, thereby minimizing
the possibility of both personally identifiable information (PII) and encryption keys being
compromised. TDE decrypts the data only after database access mechanisms have been satisfied.
TDE enables encryption for sensitive data in columns without requiring users or applications to
manage the encryption key. The default external security module is a software keystore.

Oracle Database 18c: New Features for Administrators 3 - 4


TDE creates a key for each table that uses encrypted columns and each encrypted
tablespace. The table key is stored in the data dictionary and the tablespace keys are stored
in the tablespace data files. Both tablespace and table keys are encrypted with a master key.
There is one master key for the database. The master key is stored in a PKCS#12 software
keystore or a PKCS#11-based HSM, outside the database.
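For reference, this key hierarchy is driven by standard DDL; a hedged sketch, with table and tablespace names following the figure:

```sql
-- Column-level TDE: the CCN column of EMP is protected by a table key
CREATE TABLE emp (
  id   NUMBER,
  name VARCHAR2(30),
  ccn  VARCHAR2(19) ENCRYPT USING 'AES192'
);

-- Tablespace-level TDE: every object created in TBS_HR is encrypted
-- with the tablespace key
CREATE TABLESPACE tbs_hr
  DATAFILE SIZE 100M
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);
```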



Managing Keystore in CDB and PDBs
12c

• There is one single keystore for CDB and all PDBs.


• There is one master key per PDB to encrypt PDB data.
• The master key must be transported from the source database keystore to the target
database keystore when a PDB is moved from one host to another.

[Figure: sqlnet.ora defines the keystore location; the single CDB keystore holds the master root key of the CDB root and one master PDB key each for PDBA, PDBB, and PDBC]


In a multitenant container database (CDB), the root container and each pluggable database (PDB)
have their own master key used to encrypt data in the PDB, all of them stored in the common single
keystore. The master key must be transported from the source database keystore to the target
database keystore when a PDB is moved from one host to another.
The master keys are stored in a PKCS#12 software keystore or a PKCS#11-based HSM, outside the
database. For the database to use TDE, the keystore must exist.
To create a software keystore and a master key, perform the following steps:
1. Create a directory to hold the keystore, as defined by default in
V$ENCRYPTION_WALLET.WRL_PARAMETER, which is accessible to the Oracle software
owner.
2. If you plan to define another location for the keystore, specify an entry in the
$ORACLE_HOME/network/admin/sqlnet.ora file and create the appropriate directory.
ENCRYPTION_WALLET_LOCATION =
(SOURCE =
(METHOD = FILE)
(METHOD_DATA =
(DIRECTORY = /u01/app/oracle/other_admin_dir/orcl/wallet)))
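Once the location is configured, the keystore and master key are created with ADMINISTER KEY MANAGEMENT; a hedged sketch, in which the password is illustrative:

```sql
-- Create the password-protected software keystore in the configured location
ADMINISTER KEY MANAGEMENT CREATE KEYSTORE
  '/u01/app/oracle/other_admin_dir/orcl/wallet'
  IDENTIFIED BY "Keystore#Pwd1";

-- Open the keystore, then create and activate the master key (with backup)
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "Keystore#Pwd1";
ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY "Keystore#Pwd1" WITH BACKUP;
```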



Managing Keystore in CDB and PDBs
18c

• There is still one single keystore for CDB and optionally one keystore per PDB.
• There is still one master key per PDB to encrypt PDB data, stored in the PDB keystore.
• Modes of operation
– United mode: PDB keys are stored in the unique CDB root keystore.
– Isolated mode: PDBs keys are stored in their own keystore.
– Mix mode: Some PDBs use united mode, some use isolated mode.

[Figure: the CDB root keystore holds the CDB TDE master key and the TDE PDB key of PDBA (united mode); PDBB and PDBC each have their own PDB keystore holding their TDE PDB key (isolated mode)]


In an Oracle Database 12c multitenant container database (CDB), the CDB root and each pluggable
database (PDB) have their own master key used to encrypt data in the PDB, all of them stored in the
common single keystore. The master key must be transported from the source database keystore to
the target database keystore when a PDB is moved from one host to another.
The master keys are stored in a PKCS#12 software keystore or a PKCS#11-based HSM, outside the
database. For the database to use TDE, the keystore must exist in a directory defined by the
ENCRYPTION_WALLET_LOCATION location in the sqlnet.ora file.
In Oracle Database 12c, the Multitenant architecture was mainly focused on providing support for
database consolidation.
In Oracle Database 18c, the Multitenant architecture continues to provide support for database
consolidation; however, the focus is on independent, isolated PDB administration. To support this,
separate keystores for each PDB are now supported. Providing the PDBs with their own keystore is
called "isolated mode." Having independent keystores allows PDBs to be managed independently of
each other. The shared keystore mode provided with Oracle Database 12c is now called "united
mode." Both modes can be used at the same time, in a single multitenant environment, with some
PDBs sharing a common keystore in united mode and some having their own independent
keystores in isolated mode.
This feature allows each PDB running in isolated mode within a CDB to manage its own keystore.
Isolated mode allows a tenant to manage its TDE keys independently, and supports the requirement
for a PDB to be able to use its own independent keystore password. The aim is to let the customer
decide how the keys of a given PDB are protected: either with the independent password of an
isolated keystore, or with the password of the united keystore.



Keystore Management Changes for PDB
V$ENCRYPTION_WALLET: KEYSTORE_MODE = NONE

PDBs can optionally have their own keystore, allowing tenants to manage their own keys.
1. Define the shared location for the CDB root and PDB keystores:
SQL> ALTER SYSTEM SET wallet_root = '/u01/app/oracle/admin/ORCL/tde_wallet' SCOPE=SPFILE;

2. Define the default PDB keystore type for each future isolated PDB, and then define a
different file type in each isolated PDB if necessary:
SQL> ALTER SYSTEM SET tde_configuration = 'KEYSTORE_CONFIGURATION=FILE';

– United: WALLET_ROOT/<component>/ewallet.p12
  CDB root and PDBA: /u01/app/oracle/admin/ORCL/tde_wallet/tde/ewallet.p12
– Isolated: WALLET_ROOT/<pdb_guid>/<component>/ewallet.p12
  PDBB: /u01/app/oracle/admin/ORCL/tde_wallet/51FE2A4899472AE6/tde/ewallet.p12
  PDBC: /u01/app/oracle/admin/ORCL/tde_wallet/7893AB8994724ZC8/tde/ewallet.p12


In Oracle Database 12c, the ENCRYPTION_WALLET_LOCATION parameter in the


$ORACLE_HOME/network/admin/sqlnet.ora file defines the path to the united keystore.
In Oracle Database 18c, no isolated keystore can be used unless the new initialization
WALLET_ROOT parameter is set, replacing ENCRYPTION_WALLET_LOCATION. The WALLET_ROOT
initialization parameter specifies the path to the root of a directory tree containing a subdirectory for
each PDB GUID, under which a directory structure is used to store the various keystores associated
with features such as TDE, EUS, and SSL. A new column, KEYSTORE_MODE, is added to the
V$ENCRYPTION_WALLET view with values NONE, ISOLATED, and UNITED. If all PDBs use the
united mode, you can still create the CDB keystore by using the command without requiring the
WALLET_ROOT parameter to be set:
SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE
'/u01/app/oracle/admin/ORCL/tde_wallet/' IDENTIFIED BY <password>;
All of the existing commands allowed only in the CDB root until Oracle Database 12c are now
supported in Oracle Database 18c at the PDB level, with the understanding that the ADMINISTER
KEY MANAGEMENT privilege first needs to be granted to a newly created security officer for the
PDB.



Defining the Keystore Type

Values of keystore types allowed:
• FILE
• OKV (Oracle Key Vault)
• HSM (Hardware Security Module)
• FILE|OKV: Reverse-migration from OKV to FILE has occurred
• FILE|HSM: Reverse-migration from HSM to FILE has occurred
• OKV|FILE: Migration from FILE to OKV has occurred
• HSM|FILE: Migration from FILE to HSM has occurred

In isolated mode, when the CDB is in mounted state:

SQL> STARTUP MOUNT
SQL> ALTER SYSTEM SET tde_configuration='CONTAINER=pdb1; KEYSTORE_CONFIGURATION=FILE';


A per-PDB dynamic instance initialization parameter, TDE_CONFIGURATION, is added, which takes
an attribute-value list.
TDE_CONFIGURATION has only two attributes:
• KEYSTORE_CONFIGURATION: Can take the values FILE and those defined in the slide
• CONTAINER: Specifies the PDB. This attribute can be specified only when setting the
parameter in the CDB root when performing crash recovery or media recovery and the
database is in the MOUNTED state. If the control file is lost, it may be necessary to run an
ALTER SYSTEM command in the CDB root to set the TDE_CONFIGURATION parameter
appropriately for each PDB. Because the command needs to be run in the CDB root, the
PDB name is provided via the additional attribute CONTAINER, for example as follows:
ALTER SYSTEM SET
TDE_CONFIGURATION='CONTAINER=CDB1_PDB1;KEYSTORE_CONFIGURATION=FILE'
SCOPE=MEMORY;
• The command configures CDB1_PDB1 to run in isolated mode using its own keystore.
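When the CDB is open (rather than mounted for recovery), the same parameter can be set from
inside the PDB itself, in which case the CONTAINER attribute is not needed. A sketch, using the
PDB name from the example above:

```sql
-- Switch into the PDB and set its own keystore configuration.
ALTER SESSION SET CONTAINER = CDB1_PDB1;
ALTER SYSTEM SET tde_configuration = 'KEYSTORE_CONFIGURATION=FILE' SCOPE = BOTH;
```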



Isolating a PDB Keystore                            V$ENCRYPTION_WALLET
                                                    KEYSTORE_MODE = ISOLATED

1. Create/open the CDB root keystore:
SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE
IDENTIFIED BY <united_keystore_pass>;
2. Connect as the PDB security admin to the newly created PDB to:
a. Create the PDB keystore (WALLET_ROOT/<pdb_guid>/tde/ewallet.p12, which holds the
TDE PDB key):
SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE
IDENTIFIED BY <isolated_keystore_pass>;
b. Open the PDB keystore:
SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN
IDENTIFIED BY <isolated_keystore_pass>;
c. Create the TDE PDB key in the PDB keystore:
SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY <isolated_keystore_pass>
WITH BACKUP;


In the case of a newly-created PDB, the ADMINISTER KEY MANAGEMENT privilege needs to be
granted to a newly-created local user in the PDB, who acts as the security officer for the new PDB. It
is assumed that this security officer is provided with the password of the united keystore, because
that password is required to gain access to the TDE master key. Note that knowledge of this
password does not allow the user to perform an ADMINISTER KEY MANAGEMENT UNITE
KEYSTORE operation. Additional privilege scope is needed for the unite keystore operation.
The PDB security officer is then allowed to invoke the ADMINISTER KEY MANAGEMENT CREATE
KEYSTORE command, which creates an isolated keystore for the PDB and automatically configures
the keystore of the PDB to run in isolated mode.
Note: Observe that the ADMINISTER KEY MANAGEMENT CREATE KEYSTORE command does
not use the definition of the keystore location. The keystore location is defined in the WALLET_ROOT
parameter.
In the V$ENCRYPTION_WALLET view, the KEYSTORE_MODE column shows NONE for the CDB root
container. For the isolated PDB, the KEYSTORE_MODE column shows ISOLATED.
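The mode can be verified with a query along these lines (a sketch; run in the CDB root to see
one row per container):

```sql
-- KEYSTORE_MODE shows NONE, UNITED, or ISOLATED per container.
SELECT con_id, wrl_parameter, status, keystore_mode
FROM   v$encryption_wallet;
```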



Converting a PDB to Run in Isolated Mode            V$ENCRYPTION_WALLET
                                                    KEYSTORE_MODE = ISOLATED

1. In the CDB root:
a. Create a common user to act as the security officer.
b. Grant the ADMINISTER KEY MANAGEMENT privilege commonly to that user.
2. Connect as the security officer to the PDB and create the keystore in the PDB.

SQL> ADMINISTER KEY MANAGEMENT ISOLATE KEYSTORE
IDENTIFIED BY <isolated_keystore_password>
FROM ROOT KEYSTORE IDENTIFIED BY [EXTERNAL STORE | <united_keystore_password>]
WITH BACKUP;

The TDE PDB key is moved from the united keystore (WALLET_ROOT/tde/ewallet.p12, which
keeps the TDE master key) to the isolated keystore
(WALLET_ROOT/51FE2A4899472AE6/tde/ewallet.p12).


If you want to convert a PDB to run in isolated mode, the ADMINISTER KEY MANAGEMENT
privilege needs to be commonly granted to a newly-created common user who will act as the
security officer for the PDB. The security officer for each PDB will now be managing their own
keystore.
Then after logging in to the PDB as the security officer, the ADMINISTER KEY MANAGEMENT
ISOLATE KEYSTORE command must be executed to isolate the key of the PDB into a separate
isolated keystore. The isolated keystore is created by this command, with its own password.
All of the previously active (historical) master keys associated with the PDB are moved to the
isolated keystore.
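Step 1 can be sketched as follows in the CDB root (the user name is illustrative):

```sql
-- 1a. Create a common user to act as the PDB security officer.
CREATE USER c##sec_officer IDENTIFIED BY <password> CONTAINER = ALL;
-- 1b. Grant the ADMINISTER KEY MANAGEMENT privilege commonly.
GRANT CREATE SESSION, ADMINISTER KEY MANAGEMENT
  TO c##sec_officer CONTAINER = ALL;
```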



Converting a PDB to Run in United Mode              V$ENCRYPTION_WALLET
                                                    KEYSTORE_MODE = UNITED

1. In the CDB root:
a. The security officer of the CDB exists.
b. The security officer of the CDB is granted the ADMINISTER KEY
MANAGEMENT privilege commonly.
2. Connect as the security officer to the PDB and unite the TDE PDB key with those of
the CDB root.

SQL> ADMINISTER KEY MANAGEMENT UNITE KEYSTORE
IDENTIFIED BY <isolated_keystore_password>
WITH ROOT KEYSTORE IDENTIFIED BY [EXTERNAL STORE | <united_keystore_password>]
[WITH BACKUP [USING <backup_id>]];

The TDE PDB keys are moved from the isolated keystore
(WALLET_ROOT/51FE2A4899472AE6/tde/ewallet.p12) to the united keystore
(WALLET_ROOT/tde/ewallet.p12, which keeps the TDE master key).


If a PDB no longer wants to manage its own separate keystore in isolated mode, the security officer
can decide to unite its keystore with that of the CDB root, and allow the security officer of the CDB
root to administer its keys.
The PDB security officer who is a common user with the ADMINISTER KEY MANAGEMENT privilege
granted commonly logs in to the PDB and issues the ADMINISTER KEY MANAGEMENT UNITE
KEYSTORE command to unite the keys of the PDB with those of the CDB root.
When the keystore of a PDB is being united with that of the CDB root, all of the previously active
(historical) master keys associated with the PDB are also moved to the keystore of the CDB root.
When V$ENCRYPTION_WALLET is queried from the united PDB (the PDB now being configured to
use the CDB root keystore), the KEYSTORE_MODE column shows UNITED.



Migrating a PDB Between Keystore Types

To migrate a PDB from using wallet as the keystore to using Oracle Key Vault if the PDB is
running in isolated mode:
1. Upload the TDE encryption keys from the isolated keystore to Oracle Key Vault using a
utility.
2. Set the TDE_CONFIGURATION parameter of the PDB to the appropriate value:
SQL> ALTER SYSTEM SET tde_configuration = 'KEYSTORE_CONFIGURATION=OKV';




To migrate a PDB from using wallet as the keystore to using Oracle Key Vault if the PDB is running
in isolated mode, the TDE encryption keys from the isolated keystore need to be uploaded to Oracle
Key Vault by using a utility such as the okvutil upload command to migrate an existing TDE
wallet to Oracle Key Vault. Then the TDE_CONFIGURATION parameter of the PDB needs to be
changed to KEYSTORE_CONFIGURATION=OKV.
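Step 1 might look like the following okvutil invocation. This is a sketch only: the wallet path and
virtual wallet name are illustrative, and the exact options should be checked against the Oracle Key
Vault documentation for your release.

```
$ okvutil upload -t WALLET \
    -l /u01/app/oracle/admin/ORCL/tde_wallet/<pdb_guid>/tde \
    -g "ORCL_PDB_TDE_Keys" -v 2
```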
Refer to the following Oracle documentation:
• Oracle Database Advanced Security Guide 18c – Chapter Managing the Keystore and the
Master Encryption Key - Migration of Keystores to and from Oracle Key Vault.
• Oracle Key Vault Administrator's Guide 12c Release 2 (12.2) – Chapter Migrating an Existing
TDE Wallet to Oracle Key Vault – Oracle Key Vault Use Case Scenarios
• Oracle Key Vault Administrator's Guide 12c Release 2 (12.2) – Chapter Enrolling Endpoints
for Oracle Key Vault – okvutil upload Command



Creating Your Own TDE Master Encryption Key

• Create and then use your own TDE master encryption key by providing raw binary data:
SQL> ADMINISTER KEY MANAGEMENT CREATE KEY
'10203040506032F88967A5419662A6F4E460E892318E307F017BA048707B402493C'
USING ALGORITHM 'SEED128' FORCE KEYSTORE
IDENTIFIED BY "WELcome_12" WITH BACKUP;

SQL> ADMINISTER KEY MANAGEMENT USE KEY


'ARAgMEBQYHCAERITFBUWFxgAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
IDENTIFIED BY "WELcome_12" WITH BACKUP;



• Or, create and activate your TDE master encryption key:
SQL> ADMINISTER KEY MANAGEMENT SET KEY
'102030406070801112131415161718:3D432109DF1967062A6F4E460E892318c'
USING ALGORITHM 'SEED128'
IDENTIFIED BY "WELcome_12" WITH BACKUP;


This capability is needed by Database Cloud Services to support the integration with the Key
Management service. Instead of requiring that TDE master encryption keys always be generated in
the database, this supports using master keys generated elsewhere.
The ADMINISTER KEY MANAGEMENT command allows you to either SET your own TDE master
encryption key if you want to create and activate the TDE master encryption key within a single
statement, or CREATE if you want to create the key for later use, without activating it yet. To activate
the generated key, first find the key in the V$ENCRYPTION_KEYS view and then use the USE KEY
clause of the same command.
Define the value for the key:
• MKID: The master encryption key ID is a 16-byte hex-encoded value that you can create or
have Oracle Database generate. If you omit this value, Oracle Database generates it.
• MK: The master encryption key is a hex-encoded value that you can create or have Oracle
Database generate, either 32 bytes for the AES256, ARIA256, and GOST256 algorithms or
16 bytes for the SEED128 algorithm. The default algorithm is AES256.
If you omit both the MKID and MK values, then Oracle Database generates both of the values for
you.
To complete the operation, the keystore must be opened. The keystore can be temporarily opened
by using the FORCE KEYSTORE clause.
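To locate a key created with CREATE KEY so that its key ID can be passed to the USE KEY
clause, a query along these lines can be used (a sketch):

```sql
-- A created-but-not-activated key has a NULL activation_time.
SELECT key_id, creation_time, activation_time
FROM   v$encryption_keys
ORDER  BY creation_time;
```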



Protecting Fixed-User Database Links Obfuscated Passwords

How to prevent an intruder from decrypting an obfuscated database link password?


12c
Passwords for DB links are stored obfuscated in the database.
18c
Passwords for DB links are not exported, being replaced with ‘x’.
– Set the COMPATIBLE initialization parameter to 18.1.0.0.
– Open the CDB root keystore and PDB isolated keystores if necessary.
– Enable credentials encryption in the dictionary: at CDB root or PDB level:
SQL> ALTER DATABASE DICTIONARY ENCRYPT CREDENTIALS;

– Display the enforcement status of the credentials encryption:



DICTIONARY_CREDENTIALS_ENCRYPT
ENFORCEMENT = ENABLED | DISABLED

– Display the usability of encrypted passwords of database links:


DBA_DB_LINKS
VALID = YES | NO


In Oracle Database 12c, fixed-user database link passwords are obfuscated in the database.
Hackers find ways to deobfuscate the passwords.
In Oracle Database 18c, you can have the DB link passwords be replaced with 'x' in the dump file
by enabling the credentials encryption in the dictionary of the CDB root and PDBs.
If the operation is performed in the CDB root, the CDB root keystore is required and opened. If the
operation is performed in a PDB and if the PDB works in isolated mode, the PDB keystore is
required and opened.
To perform this operation, the SYSKM privilege is required.
If you need to disable the credentials encryption, use the following statement:
SQL> ALTER DATABASE DICTIONARY DELETE CREDENTIALS KEY;
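The enforcement status and the effect on existing database links can then be checked with the
views named in the slide (a sketch):

```sql
-- ENFORCEMENT is ENABLED or DISABLED.
SELECT enforcement FROM dictionary_credentials_encrypt;
-- VALID shows whether each link's stored password is still usable.
SELECT db_link, valid FROM dba_db_links;
```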



Importing Fixed-User Database Links Encrypted Passwords

In the dump file of Data Pump export:


12c
 Obfuscated values for DB link passwords
– Passwords not protected
– Unless exported with ENCRYPTION_PWD_PROMPT=YES
18c
If credentials encryption enabled in the dictionary:
 Invalid value for DB link password in the dump file
 Warning message during export and import
$ expdp …



ORA-39395: Warning: object SYSTEM.TEST requires password reset after import

 Reset password for the DB link after import


SQL> ALTER DATABASE LINK lk1 CONNECT TO test IDENTIFIED BY <reset_password>;


In Oracle Database 12c, because fixed-user database link passwords are obfuscated in the
database, Data Pump export exports the database link passwords with the obfuscated value. In this
case, Oracle recommends that you set the ENCRYPTION_PASSWORD parameter on the expdp
command so that the obfuscated database link passwords are encrypted in the dump file. To further
increase security, Oracle recommends that you set the ENCRYPTION_PWD_PROMPT parameter to
YES so that the password can be entered interactively from a prompt, instead of being echoed on the
screen with the ENCRYPTION_PASSWORD parameter.
In Oracle Database 18c, if you enabled the encryption of credentials in the database, a Data Pump
export operation stores an invalid password for the database link password in the dump file. A
warning message during the export and import operations tells you that after the import, the
password for the database link has to be reset.
If you do not reset the password of the imported database link, the following error appears when you
attempt to use it during a query:
SELECT * FROM system.test@test
*
ERROR at line 1:
ORA-28449: cannot use an invalidated database link



DB Replay: The Big Picture
12c

[Figure: DB Replay workflow across a prechange production system and a postchange test system]
1. Capture the workload on the prechange production system:
DBMS_WORKLOAD_CAPTURE.START_CAPTURE
DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE
Shadow capture files in the capture directory hold clear byte strings for:
• Connection strings
• SQL text
• Bind values
2. Process the capture files on the postchange test system (the test system with changes):
DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE
3. Restore the database on the replay system from a production database backup, then replay:
DBMS_WORKLOAD_REPLAY.START_REPLAY
$ wrc userid=system password=oracle replaydir=/dbreplay


Because Oracle Database has managed system changes with record-and-replay since Oracle
Database 11g, a significant benefit is the added confidence to the business in the success of
performing changes. The record-and-replay functionality offers an accurate method to test the
impact of a variety of system changes including database upgrade, configuration changes, and
hardware upgrade.
A useful application of Database Replay is to test the performance of a new server configuration.
Consider a customer who is utilizing a single instance database and wants to move to a RAC setup.
The customer records the workload of an interesting period and then sets up a RAC test system for
replay. During replay, the customer is able to monitor the performance benefit of the new
configuration by comparing the performance to the recorded system. This can also help convince the
customer to move to a RAC configuration after seeing the benefits of using the Database Replay
functionality.
Another application is debugging. You can record and replay sessions emulating an environment to
make bugs more reproducible. Manageability feature testing is another benefit. Self-managing and
self-healing systems need to implement this advice automatically (“autonomic computing model”).
Multiple replay iterations allow testing and fine-tuning of the control strategies’ effectiveness and
stability. The database administrator, or a user with special privileges granted by the DBA, initiates
the record-and-replay cycle and has full control over the entire procedure.



Encryption of Sensitive Data in Database Replay Files
18c

• Protect data in capture files with TDE encryption.
• The keystore is used to:
– Store the oracle.rat.database_replay.encryption user password
– Retrieve the oracle.rat.database_replay.encryption user password during
workload capture, process, and replay

[Figure: encryption workflow]
1. Set the password for the Database Replay user
(oracle.rat.database_replay.encryption) in the keystore; the user password is
mapped with a first-level encryption key.
2. CAPTURE retrieves the user password from the keystore.
3. A second-level encryption key is generated for each capture file.
4. The second-level key encrypts the sensitive data in each capture file (for example,
scott/tiger, hr/xxyyy, SELECT c1 FROM t1;, SELECT * FROM t2 WHERE c1=:b1;).
5. The capture file key is stored in the capture file header.
6. For PROCESS/REPLAY, the wrc client retrieves the password from the cwallet.sso wallet.
7. The data is decrypted.

On the server side, the DB Replay user password is set before the workload capture in the keystore.
On the client side, the DB Replay client password is set in the SSO wallet. The two passwords must
match.
• Before the whole process starts, the DBA must set the password for the DB Replay user
(oracle.rat.database_replay.encryption identifier) in the keystore. The DB
Replay user password is then mapped to an encryption key and stored in the keystore.
• During the capture, the oracle.rat.database_replay.encryption password is
retrieved from the keystore and used to encrypt the sensitive fields. This encryption key is the
first-level encryption key used to generate a second-level encryption key for each capture file.
The second-level encryption key is encrypted by the first-level encryption key and saved in
the capture file header. The second-level encryption key is applied to all sensitive fields in
capture files, such as database connection strings, SQL text, and SQL bind values.
• During the process and replay, the data encrypted in the capture files is decrypted.
a) During the process and replay, the oracle.rat.database_replay.encryption
password is verified against the keystore to see if it matches the one used during the
workload capture.
b) If the verification is successful, the password is mapped to the first-level encryption
key, which subsequently is used to recover the second-level encryption key for each
capture.
c) Once the second-level encryption key is available, all sensitive fields are decrypted.



Capture Setup for DB Replay DBA_WORKLOAD_CAPTURES
ENCRYPTION = AES256

1. Set a password for the Database Replay user


oracle.rat.database_replay.encryption in the keystore.
SQL> ADMINISTER KEY MANAGEMENT
ADD SECRET '<replaypass>'
FOR CLIENT 'oracle.rat.database_replay.encryption'
IDENTIFIED BY <pass> WITH BACKUP;

2. Start the capture by using an encryption algorithm.


SQL> exec DBMS_WORKLOAD_CAPTURE.START_CAPTURE( NAME => 'OLTP_peak', -
DIR => 'OLTP', -
ENCRYPTION => 'AES256')

3. Stop the capture.


SQL> exec DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE()


1. Before the whole process can encrypt sensitive data, set the password for the DB Replay
user (oracle.rat.database_replay.encryption identifier) in the keystore.
To delete the secret password from the keystore, use the ADMINISTER KEY MANAGEMENT
DELETE SECRET FOR CLIENT 'oracle.rat.database_replay.encryption'
command.
2. Then, when starting the capture, specify the encryption algorithm used to encrypt the data
in the workload capture files by using the new ENCRYPTION parameter of the
DBMS_WORKLOAD_CAPTURE.START_CAPTURE procedure:
- NULL: Capture files are not encrypted (default).
- AES128: Capture files are encrypted using AES128.
- AES192: Capture files are encrypted using AES192.
- AES256: Capture files are encrypted using AES256.
3. Stop the capture when workload is sufficient for testing.
You can encrypt data in existing capture files by using the
DBMS_WORKLOAD_CAPTURE.ENCRYPT_CAPTURE procedure:
SQL> exec DBMS_WORKLOAD_CAPTURE.ENCRYPT_CAPTURE(-
SRC_DIR => 'OLTP', DST_DIR => 'OLTP_ENCRYPTED')
You can also decrypt data in capture files by using the
DBMS_WORKLOAD_CAPTURE.DECRYPT_CAPTURE procedure:
SQL> exec DBMS_WORKLOAD_CAPTURE.DECRYPT_CAPTURE(-
SRC_DIR => 'OLTP_ENCRYPTED', DST_DIR => 'OLTP_DECRYPTED')



Process and Replay Setup for DB Replay – Phase 1

4. Process the capture after moving the capture files to the testing server environment.
SQL> exec DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE(capture_dir => 'OLTP')

5. Initialize the replay after setting various replay parameters.


SQL> exec DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY(replay_name => 'R',
replay_dir => 'OLTP')

6. Prepare the replay to start.


SQL> exec DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY ()




4. Process the capture.


5. Initialize the replay.
6. Prepare the replay to start.
During the three steps, the keystore must be open. The password for the DB Replay user
(oracle.rat.database_replay.encryption identifier) is retrieved and verified in the
keystore.



Process and Replay Setup for DB Replay – Phase 2

7. On the client side, set up a client-side keystore including the


oracle.rat.database_replay.encryption client password.
$ mkdir /tmp/replay_encrypt_cwallet
$ mkstore -wrl /tmp/replay_encrypt_cwallet -create
$ mkstore -wrl /tmp/replay_encrypt_cwallet -createEntry
'oracle.rat.database_replay.encryption' "replaypass"

8. Start the replay clients.


$ wrc REPLAYDIR=/tmp/dbreplay USERID=system WALLETDIR=/tmp/replay_encrypt_cwallet
Workload Replay Client: …
Wait for the replay to start (21:47:01)
$ wrc REPLAYDIR=/tmp/dbreplay USERID=system WALLETDIR=/tmp/replay_encrypt_cwallet
Workload Replay Client: …
Wait for the replay to start (21:47:01)

9. Start replaying for the clients waiting in step 8.


SQL> exec DBMS_WORKLOAD_REPLAY.START_REPLAY ()


7. On the client side, the password for the encrypted capture is retrieved from a client-side SSO
wallet on disk. The wrc replay client retrieves the password using the identifier
oracle.rat.database_replay.encryption. This ensures the safety of the user password
without compromising automation.
Set up a client-side wallet including the same
oracle.rat.database_replay.encryption client password defined in the keystore of
the production database where the capture was executed.
8. Start as many wrc replay clients as required to replay the capture workload. The client
retrieves the password for the oracle.rat.database_replay.encryption from the
client-side SSO wallet created in step 7. In the wrc command line, specify the directory
where the client-side SSO wallet was created by using the WALLETDIR parameter.
9. Start the replay.



Oracle Database Vault: Privileged User Controls
12c

• The database DBA is blocked from viewing HR data (SELECT * FROM HR.EMP):
– Compliance and protection from insiders
• The HR App owner is blocked from viewing FIN data:
– Eliminates security risks from server consolidation

[Figure: the DBA and the HR App are stopped at the realm boundaries — the HR Realm
protects the HR schema, and the FIN Realm protects the FIN schema and the FIN App.]


Oracle Database Vault (DB Vault) helps customers control privileged user access to sensitive
application data stored in the database.
The slide shows how DB Vault realms prevent privileged database users and even privileged
applications from accessing data outside their authorization. Realms can be placed around a single
table or an entire application. Once in place, they prevent users with privileges such as the DBA role
from accessing data protected by the realm. Interestingly enough, many applications today also have
privileged accounts. When applications are consolidated, it can be advantageous to put realms
around the individual applications to prevent, as an example, an application owner from “peeking
over the fence” at the contents of another application, perhaps containing sensitive financial data.



Database Vault: Access Control Components

The following components of Database Vault provide highly configurable access control:
• Realms and authorization types
– Participant
– Owner
• Command rules
• Rule sets
• Secure application roles
• Factors

Example:
1. The DBA can view the ORDERS table data.
SQL> SELECT order_total FROM oe.orders
     WHERE customer_id = 101;

ORDER_TOTAL
-----------
    78279.6
2. The security manager protects the OE.ORDERS table with a realm.
3. The DBA can no longer view the ORDERS table data.
SQL> SELECT order_total FROM oe.orders
     WHERE customer_id = 101;
SELECT order_total FROM oe.orders
*
ERROR at line 1:
ORA-01031: insufficient privileges


• Realms: A boundary defined around a set of objects in a schema, a whole schema, multiple
schemas, or roles. A realm protects the objects in it, such as tables, roles, and packages
from users exercising system or object privileges, such as SELECT ANY TABLE or even from
the schema owner. A realm may also have authorizations given to users or roles as
participants or owners. The security manager can define which users are able to access the
objects that are secured by the realm via realm authorizations and under which conditions
(rule sets).
In Oracle Database Vault 12c, there are two authorization types within a realm:
- Participant: The grantee is able to access the realm-secured objects.
- Owner: The grantee has all the access rights that a participant has, and can also
grant privileges (if they have either GRANT ANY ROLE or were granted that privilege
with the WITH ADMIN option) to others on the objects in the realm.
• Command rules: An ability to block the specified SQL command under a set of specific
conditions (rule sets)
• Rule sets: A collection of rules that are evaluated for the purpose of granting access. Rule
sets work with both realms and command rules to create a trusted path. Rule sets can
incorporate factors to create this trusted path to allow fine grained control on realms and
command rules. Examples of realms and command rules configured with rule sets:
- Realms can be restricted to accept only SQL from authorized users from a specific set
of IP addresses.
- Command rules can limit sensitive commands to certain DBAs from local workstations
during business hours.



• Secure application roles: Activated by a session only under the condition of passing a
rule set
• Factors: Attributes of a user or the system at any given time. Factors contribute to the
decision process of granting access, and combinations of several factors may be
considered at once. Factors can include parameters in SYS_CONTEXT (IP address, client
name, user name, time of day, and so on). Some or all may have been assigned a name,
which is an identity.



DB Replay Capture and Replay with Database Vault
18c

New realm authorization types to allow users to run DB Replay capture and replay:
• DBCAPTURE authorization type
• DBREPLAY authorization type
• Managed using Database Vault admin procedures (require the DV_OWNER or DV_ADMIN role):
– DVSYS.DBMS_MACADM.AUTHORIZE_DBCAPTURE
– DVSYS.DBMS_MACADM.UNAUTHORIZE_DBCAPTURE
– DVSYS.DBMS_MACADM.AUTHORIZE_DBREPLAY
– DVSYS.DBMS_MACADM.UNAUTHORIZE_DBREPLAY
• Authorizations are listed in these views:
– DVSYS.DBA_DV_DBCAPTURE_AUTH (GRANTEE = name of the granted user)
– DVSYS.DBA_DV_DBREPLAY_AUTH (GRANTEE = name of the granted user)


You may want to use Database Replay to evaluate the functionality and performance of any mission
critical system with Database Vault.
However, the execution of Database Replay on a database with Database Vault configured requires
that the capture or replay user is granted appropriate access controls required by Database Vault to
access data in the database. Database Vault does not rely on system and object privileges to grant
access to data to users. Database Vault relies on realms with authorizations, rule sets, command
rules and secure application roles to allow or deny users access to application data.
In Oracle Database 18c, two new authorization types can be defined for a realm to allow capture or
replay users to run captures or replays.
• A user is allowed to run a capture only if the user is authorized for DBCAPTURE authorization
type by the Database Vault.
• A user is allowed to run a replay only if the user is authorized for DBREPLAY authorization
type by the Database Vault.
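As a sketch, a user holding the DV_OWNER or DV_ADMIN role could authorize and then verify a
capture user and a replay user as follows. The user names are illustrative, and the single
user-name parameter is an assumption based on the procedure descriptions above.

```sql
-- Authorize the users (run as DV_OWNER or DV_ADMIN):
EXEC DVSYS.DBMS_MACADM.AUTHORIZE_DBCAPTURE('CAPTURE_ADMIN')
EXEC DVSYS.DBMS_MACADM.AUTHORIZE_DBREPLAY('REPLAY_ADMIN')
-- Verify the authorizations:
SELECT grantee FROM dvsys.dba_dv_dbcapture_auth;
SELECT grantee FROM dvsys.dba_dv_dbreplay_auth;
```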



Authenticating and Authorizing Users with External Directories
12c

External directories store user credentials and authorizations in a central location (an LDAP-
compliant directory, such as OID, OUD, and OVD).
• Eases administration through centralization
• Enables single-point authentication
• Eliminates the need for client-side wallets

[Figure: the PROD and ORCL databases share one Oracle external directory entry —
DN: Paul; Authentication Method: Password; Password: pass_paul;
Authorizations: role_mgr, sales.]

Example:
• User changes job roles.
• Security administrator changes the user roles in Oracle Virtual Directory.
• No changes are made to the services that the user accesses.

Authenticating and authorizing users with external directories is an important feature of Oracle
Database 12c Enterprise Edition, which allows users to be defined in an external directory rather
than within the databases. Their identities remain constant throughout the enterprise.
Authenticating and authorizing users with external directories addresses the user, the administrative,
and the security challenges by centralizing storage and management of user-related information with
Enterprise User Security (EUS) relying on Oracle Directory Services such as Oracle Internet
Directory (OID), Oracle Unified Directory (OUD), and Oracle Virtual Directory (OVD).
When an employee changes jobs in such an environment, the administrator needs to modify
information only in one location (the directory) to make changes effective in multiple databases and
systems. This centralization can substantially lower administrative costs while materially improving
enterprise security.



Architecture
12c

[Figure: EUS architecture]
ODS/EUS directory metadata repository entries (AD requests are routed through ODS/EUS):
– DN: Ann; Authentication: Password; Password: pass_ann; Database: ORCL;
Mapping schema: user_global
– DN: Tom; Authentication: Certificate; Certificate: DN_tom; …

Authentication flow:
1. The client connects: CONNECT ann/pass_ann@orcl
2. The directory checks Ann's authentication and authorizations for ORCL.
3. ORCL verifies the user and applies roles (schema user_global: IDENTIFIED GLOBALLY).
4. Connected.

Configuration files:
ldap.ora:   DIRECTORY_SERVERS=(oidhost:13060:13130)
            DIRECTORY_SERVER_TYPE = OID
spfile.ora: LDAP_DIRECTORY_ACCESS=PASSWORD | SSL
            LDAP_DIRECTORY_SYSAUTH=yes

Copyright © 2018, Oracle and/or its affiliates. All rights reserved.

In the example in the slide, a client can submit the same connect command, whether connecting as
a database user or as an enterprise user. The enterprise user has the additional benefit of allowing
the use of a shared schema.
The authentication process is as follows:
1. The user presents a username and password (or other credentials).
2. The directory returns the authorization token to the database.
3. The schema is mapped from the directory information. The directory supplies the global roles
for the user. Enterprise roles are defined in the directory and global roles are defined in the
database (non-CDB or PDB). The mapping from enterprise roles to global roles is in the
directory. The directory can supply the application context. An application context supplied
from the directory is called a global context.
4. Finally, the user is connected.
If authentication and authorization must be completed with Active Directory (AD), the request must
first go through Oracle Directory Services (ODS), which retrieves the user’s authentication and
authorization data from AD.
Note: Each PDB has its own metadata, such as global users, global roles, and so on. Each PDB
should have its own identity in the directory.

Oracle Database 18c: New Features for Administrators 3 - 27


EUS and AD

ODS / EUS

Create exclusive global schemas authenticated by:
• PKI certificates
• Passwords
• Kerberos tickets

Create shared global schemas authenticated by:
• PKI certificates
• Passwords
• Kerberos tickets

Directory entries and schema mappings in ORCL:
DN: CN=analyst …   Authentication: Certificate   Certificate: DN_ann
  => user_ann exclusive schema in ORCL
DN: CN=trainer …   Authentication: Password      Password: pass_tom
  => user_tom exclusive schema in ORCL
DN: CN=manager …   Authentication: Password      Password: pass_paul
DN: CN=director …  Authentication: Password      Password: pass_jean
  => GLOBAL_U shared schema in ORCL

> CREATE USER user_ann
  IDENTIFIED GLOBALLY AS
  'CN=analyst,OU=div1,O=oracle,C=US';

> CREATE USER user_tom
  IDENTIFIED GLOBALLY AS
  'CN=trainer,OU=div2,O=oracle,C=US';

> CREATE USER global_u
  IDENTIFIED GLOBALLY;

Authentication methods and certificates of users can be centralized in the directory. A user can
connect to the database in two different ways:
• Through a global exclusive schema in the database that has a one-to-one schema mapping in the
directory. This method requires that the user be created in every database where the end
user requires access. The following command creates a database user identified by a
distinguished name. The DN is the distinguished name in the user’s PKI certificate in the
user’s wallet.
CREATE USER global_ann
IDENTIFIED GLOBALLY AS 'CN=analyst,OU=division1, O=oracle, C=US';
• Through a global shared schema in the database that has a shared schema mapping in the
directory. Any user known to the directory can be mapped to the shared global schema in
the database. The mapped user will be authenticated by the directory and the schema
mapping will provide the privileges. The following command creates the global shared
schema:
CREATE USER global_u IDENTIFIED GLOBALLY;
No one connects directly to the GLOBAL_U schema. Any user that is mapped to the
GLOBAL_U schema in the directory can connect to this schema.
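Privileges reach users mapped to a shared schema through global roles. A minimal sketch of how such a role could be defined in the database (the role and object names are illustrative, not part of the course environment):

```sql
-- Global role: the database defines its privileges, but only the directory
-- can authorize it for a user, through an enterprise-role mapping.
CREATE ROLE hr_reporting IDENTIFIED GLOBALLY;

-- Privileges attached to the global role in this database (illustrative).
GRANT CREATE SESSION TO hr_reporting;
GRANT SELECT ON hr.employees TO hr_reporting;
```

A global role cannot be granted directly to a database user or enabled with SET ROLE; a session mapped to the GLOBAL_U schema receives hr_reporting only if the directory maps one of the user’s enterprise roles to it.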

Oracle Database 18c: New Features for Administrators 3 - 28


CMU and AD

12c: Deploy and synchronize database user credentials and authorizations with ODS/EUS first.

18c: Deploy database user credentials and authorizations directly in Active Directory with
Centrally Managed Users (CMU):
– Centralized database user authentication
– Centralized database access authorization

ldap.ora:
DIRECTORY_SERVERS=(oidhost:13060:13130)
DIRECTORY_SERVER_TYPE = AD

spfile.ora:
LDAP_DIRECTORY_ACCESS= PASSWORD | SSL | PASSWORD_XS | SSL_XS
LDAP_DIRECTORY_SYSAUTH=yes

AD mappings to ORCL:
Users mapping:  Ann => user_ann (exclusive schema in ORCL)
Groups mapping: G-ORCL : g_AD_u => g_AD_u (shared schema in ORCL, granted mgr_role)
                MGR-ORCL : mgr_role => mgr_role (global role in ORCL)

Global shared or exclusive schemas authenticated by:
• PKI certificates
• Passwords
• Kerberos tickets

The EXTERNAL_NAME column of DBA_USERS and DBA_ROLES displays the mappings.

With Oracle Database 18c, Centrally Managed Users (CMU) allows sites to manage database user
credentials and authorizations in Active Directory directly without the need to deploy and synchronize
them with EUS relying on Oracle Directory Services. Active Directory stores authentication and
authorization data that is used by the database to authenticate users.
CMU provides the following capabilities:
• Supports passwords, Kerberos, and PKI certificates
- AD stores user database password verifiers: Passwords use verifiers as they did
before. The only difference is that the verifier for a user is not stored in the database,
but in AD.
- AD includes Kerberos Key Distribution Center: The difference is that the user is now a
global user (not authenticated externally).
- AD verifies client Distinguished Name (DN) and may act as Certificate Authority
• Supports AD account policies:
- For passwords: Expiration, complexity, and history
- For lockout: Threshold, duration and reset account lockout counter
- For Kerberos: Ticket timeout, clock skew between server and KDC

Oracle Database 18c: New Features for Administrators 3 - 29


• Supports AD users and groups mapping to:
- Oracle global schemas as exclusive schema, Oracle global schemas as
shared schema
- Oracle global roles
- Oracle administrative users
Just as the EXTERNAL_NAME column in the DBA_USERS view displays the proper mapping for
global users, a new similar column in the DBA_ROLES view displays the mapping for global roles.
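The CMU mappings shown in the slide can be expressed with the IDENTIFIED GLOBALLY syntax pointing at AD users and groups. A hedged sketch (all distinguished names are illustrative):

```sql
-- Exclusive schema: one AD user mapped to one database schema.
CREATE USER user_ann IDENTIFIED GLOBALLY AS
  'CN=Ann,OU=Users,DC=example,DC=com';

-- Shared schema: every member of the AD group G-ORCL maps to G_AD_U.
CREATE USER g_ad_u IDENTIFIED GLOBALLY AS
  'CN=G-ORCL,OU=Groups,DC=example,DC=com';

-- Global role: authorized for sessions whose AD user belongs to MGR-ORCL.
CREATE ROLE mgr_role IDENTIFIED GLOBALLY AS
  'CN=MGR-ORCL,OU=Groups,DC=example,DC=com';

-- The directory mappings are visible in the data dictionary.
SELECT username, external_name FROM dba_users WHERE username = 'G_AD_U';
SELECT role, external_name FROM dba_roles WHERE role = 'MGR_ROLE';
```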


Oracle Database 18c: New Features for Administrators 3 - 30


Choosing Between EUS and CMU

Feature                                               EUS   CMU
Simplified implementation                                    ✓
Authentication: Password, Kerberos, PKI certificates   ✓     ✓
Enforce directory account policies                     ✓     ✓
Authorization: role authorization                      ✓     ✓
Administrative users                                   ✓     ✓
Shared DB schema mapping                               ✓     ✓
Exclusive schema mapping                               ✓     ✓
Enterprise domains                                     ✓
Current user trusted DB link                           ✓
Integrated with Oracle Label Security, XDB             ✓
Consolidated reporting and management of data access   ✓

The following key advantages will lead you to use CMU rather than EUS:
• Simplified centralized directory services integration with less cost and complexity
- Authentication in AD for password, Kerberos and PKI
- Map AD groups to shared database accounts and roles
- Map exclusive AD user to database user
- Support AD account policies
• No client update required
• Supports all Oracle Database clients 10g and later
EUS and Oracle Directory Services authentication and authorization work as before.

Oracle Database 18c: New Features for Administrators 3 - 31


Summary

In this lesson, you should have learned how to:


• Create schema-only accounts
• Isolate a new PDB keystore
• Convert a PDB to run in isolated or united mode
• Migrate PDB keystore between keystore types
• Create user-defined TDE master keys
• Protect fixed-user database link passwords



• Export and import fixed-user database links with encrypted passwords
• Configure encryption of sensitive data in Database Replay files
• Perform Database Replay capture and replay in a database with
Database Vault
• Explain Enterprise users integration with Active Directory

Oracle Database 18c: New Features for Administrators 3 - 32


Practice 3: Overview

• 3-1: Creating schema-only accounts


• 3-2: Managing PDB keystores
• 3-3: Creating user-defined TDE master keys
• 3-4: Exporting and importing fixed-user database links
• 3-5: Encrypting sensitive data in DB Replay files


Oracle Database 18c: New Features for Administrators 3 - 33


4

Using RMAN Enhancements

Objectives

After completing this lesson, you should be able to:


• Reuse preplugin backups after conversion of a non-CDB to a PDB
• Reuse preplugin backups after plugging/relocating a PDB into another CDB
• Duplicate an active PDB into an existing CDB
• Duplicate a CDB as encrypted
• Duplicate a CDB as decrypted
• Recover a standby database from primary


Oracle Database 18c: New Features for Administrators 4 - 2


Migrating a Non-CDB to a CDB
12c

Possible methods:
• Data Pump (TTS or TDB or full export/import)
• Plugging (XML file definition with DBMS_PDB)
• Cloning
• Replication

After conversion:
• Is it possible to recover the PDB back in time before the non-CDB was converted?
• Are the non-CDB backups transported with the non-CDB?

Diagram: the non-CDB ORCL (datafiles, control files, redo log files) becomes PDB2 in CDB1,
either by Data Pump export/import (dump file, TTS), unplug/plug using DBMS_PDB (XML file),
cloning (data files), or replication.

In Oracle Database 12c, there are different possible methods to migrate a non-CDB to a CDB.
Whichever method is used, are the non-CDB backups transported with the non-CDB during the
migration?
• Does Oracle Data Pump transport the non-CDB backups?
- You either use transportable tablespace (TTS) or full conventional export/import or full
transportable database (TDB) provided that in the last one any user-defined object
resides in a single user-defined tablespace.
- Using any of these Data Pump methods, Data Pump transports objects definition and
data, but not backups.
• Does the plugging method transport the non-CDB backups? Generating an XML metadata
file from the non-CDB to use it during the plugging step into the CDB only describes the
non-CDB data files, but it does not describe the list of backups associated to the non-CDB.
• Does the cloning method transport the non-CDB backups? Cloning non-CDBs in a CDB
requires copying the files of the non-CDB to a new location. It does not copy the backups to a
recovery location.
• Does replication transport the non-CDB backups? The replication method replicates the data
from a non-CDB to a PDB. When the PDB catches up with the non-CDB, you fail over to the
PDB. Backups are not associated with the replication.
Because no backups are transported with the non-CDB into the target CDB, neither restore nor
recovery using the old backups is possible. Even if the non-CDB backups were manually
transported/copied to the target CDB, users cannot perform restore/recover operations using these
backups. You had to create a new baseline backup for the non-CDB converted to a PDB.

Oracle Database 18c: New Features for Administrators 4 - 3


Migrating a Non-CDB and Transporting Non-CDB Backups to a CDB
18c

1. Export backups metadata with DBMS_PDB.exportRmanBackup.
2. Unplug the non-CDB using DBMS_PDB.describe.
3. Archive the current redo log file.
4. Transfer data files including backups to the target CDB.
5. Plug using the XML file.
6. Execute the noncdb_to_pdb.sql script.
7. Open PDB. This automatically imports backups metadata into the CDB dictionary.

Restore/recover the PDB with preplugin backups:
1. Catalog the archived redo log file.
2. Restore PDB using preplugin backups.
3. Recover PDB using preplugin backups.

Diagram: the non-CDB ORCL, its backups, and its archive log files are transferred and plugged
into CDB1 as PDB2 by using the XML file containing the backup metadata.

In Oracle Database 18c, you can transport the existing backups and backed up archive log files of
the non-CDB and reuse them to restore and recover the new PDB.
The backups transported from the non-CDB into the PDB are called preplugin backups.
Transporting the backups and backed up archive log files associated to a non-CDB before migration
requires the following steps:
1. The new DBMS_PDB.exportRmanBackup procedure must be executed in the
non-CDB opened in read/write mode. This is a mandatory step for non-CDB migration.
The procedure exports all RMAN backup metadata that belongs to the non-CDB into its own
dictionary. The metadata is transported along with the non-CDB during the migration.
2. Use dbms_pdb.describe to generate an XML metadata file from the non-CDB describing
the structure of the non-CDB with the list of datafiles.
3. Archive the current redo log file required for a potential restore/recover using preplugin
backups.
4. Transfer the data files, backups, and archive log files to the target CDB.
5. Use the XML metadata file during the plugging step to create the new PDB into the CDB.
6. Run the ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql script to delete unnecessary
metadata from the PDB SYSTEM tablespace.
7. Open the PDB. When the PDB is open, the exported backup metadata is automatically
copied into the CDB dictionary, except the current redo log file archived in step 3. Catalog the
archived redo log file as one of the preplugin archived logs.
Because the backups for the PDB are now available in the CDB, they can be reused to recover the
PDB.
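The seven steps above can be sketched as follows. Paths, file names, and the PDB name are illustrative, and the non-CDB must be reopened read-only before DBMS_PDB.DESCRIBE can run:

```sql
-- 1. In the non-CDB ORCL, open read/write: export the RMAN backup metadata.
EXEC DBMS_PDB.EXPORTRMANBACKUP();

-- 2. Reopen the non-CDB read-only, then generate the XML manifest.
EXEC DBMS_PDB.DESCRIBE(pdb_descr_file => '/u01/stage/orcl.xml');

-- 3. Archive the current redo log file for a potential preplugin recovery.
ALTER SYSTEM ARCHIVE LOG CURRENT;

-- 5.-7. In the target CDB root, after transferring the datafiles, backups,
-- and archive log files (step 4): plug in, clean up, and open.
CREATE PLUGGABLE DATABASE pdb2 USING '/u01/stage/orcl.xml' NOCOPY TEMPFILE REUSE;
ALTER SESSION SET CONTAINER = pdb2;
@?/rdbms/admin/noncdb_to_pdb.sql
ALTER PLUGGABLE DATABASE pdb2 OPEN;
```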

Oracle Database 18c: New Features for Administrators 4 - 4


Relocating/Plugging a PDB into Another CDB
12c

1. Close PDB1 in CDB1.
2. Unplug using DBMS_PDB (XML file).
3. Plug the datafiles into CDB2.
4. Open PDB2.

After relocating/plugging the PDB into another CDB:
• Is it possible to recover the PDB back in time before it was relocated/unplugged?
• Are the PDB backups transported with the relocated/unplugged PDB?

The backups and archive log files of PDB1 remain in CDB1.

In Oracle Database 12c, PDBs can be hot cloned from one CDB into another CDB by using local
UNDO tablespaces.
Are the PDB backups transported with the PDB during the cloning?
• Cloning a PDB into another CDB requires copying the files of the PDB to a new location. It
does not copy the backups to a recovery location.
Because no backups are transported with the PDB into the target CDB, neither restore nor recovery
using the old backups is possible. Even if the PDB backups were manually transported/copied to the
target CDB, users cannot perform restore/recover operations using these backups. You had to
create a new baseline backup for relocated or plugged PDBs.

Oracle Database 18c: New Features for Administrators 4 - 5


Plugging a PDB and Transporting PDB Backups to a CDB - 1
18c

1. Export backups metadata by using DBMS_PDB.exportRmanBackup.
2. Unplug the PDB by using DBMS_PDB.describe.
3. Transfer the data files including backups to the target CDB.
4. Plug using the XML file.
5. Open PDB. This automatically imports backups metadata into the CDB dictionary.

Then you can restore/recover the PDB by using the transported backups:
1. Restore PDB using preplugin backups.
2. Recover PDB using preplugin backups.

Diagram: PDB1, its backups, and its archive log files are transferred from CDB1 and plugged
into CDB2 as PDB2 by using the XML file containing the PDB backup metadata.

In Oracle Database 18c, you can transport the existing backups and backed up archive log files of
the PDB and reuse them to restore and recover the new plugged PDB.
To transport the backups and backed up archive log files associated to a PDB before replugging the
PDB, perform the following steps:
1. The following new DBMS_PDB.exportRmanBackup procedure can be executed in the PDB
opened in read/write mode. This is not a mandatory step before unplugging the PDB:
SQL> EXEC dbms_pdb.exportRmanBackup ('<pdb_name>')
The procedure exports all RMAN backup metadata that belongs to the PDB into its own
dictionary. The metadata is transported along with the PDB during the unplug operation.
2. Unplug the PDB.
3. Transfer the data files, backups, and archive log files to the target CDB.
4. Plug the PDB with the COPY clause to copy the data files, backups, and backed up archive
log files of the source PDB into a new directory.
5. Open the new PDB. When the PDB is open, the exported backup metadata is automatically
copied into the CDB dictionary.
Because the backups for the PDB are now available in the CDB, they can be reused to recover the
PDB.
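Step 4, plugging with the COPY clause, might look like the following sketch; the XML file name and the directory paths are illustrative:

```sql
-- In the target CDB root: plug in PDB1, copying its files to a new directory.
CREATE PLUGGABLE DATABASE pdb1 USING '/u01/stage/pdb1.xml'
  COPY
  FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdb1/', '/u02/oradata/cdb2/pdb1/');

-- Opening the PDB imports the exported backup metadata into the CDB dictionary.
ALTER PLUGGABLE DATABASE pdb1 OPEN;
```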

Oracle Database 18c: New Features for Administrators 4 - 6


Plugging a PDB and Transporting PDB Backups to a CDB - 2
18c

1. Unplug the PDB with DBMS_PDB.describe.
2. Transfer the datafiles including backups to the target CDB.
3. Plug using the XML file.
4. Open PDB.
5. Catalog preplugin backups into CDB.

Then you can restore/recover the PDB using the transported backups:
1. Restore PDB using preplugin backups.
2. Recover PDB using preplugin backups.

Diagram: PDB1, its backups, and its archive log files are transferred from CDB1 and plugged
into CDB2 as PDB2 by using the XML file; the preplugin backups are then cataloged.

If you forgot to execute the DBMS_PDB.exportRmanBackup procedure before unplugging the PDB,
you can still catalog the existing backups and backed up archive log files of the plugged PDB after
the plugging operation and reuse them to restore and recover the plugged PDB.
If preplugin backups and archive log files are moved, or new backups and archive log files were
created on the source CDB after the PDB was transported, then the target CDB does not know about
them. You can catalog those preplugin files:
RMAN> SET PREPLUGIN CONTAINER=<pdb_name>;
RMAN> CATALOG PREPLUGIN ARCHIVELOG '<archivelog>';
RMAN> CATALOG PREPLUGIN BACKUP '<backup_name>';

Oracle Database 18c: New Features for Administrators 4 - 7


Using PrePlugin Backups

Use the PrePlugin option to perform RMAN operations using preplugin backups.
• Restore a PDB from its preplugin backups cataloged in the target CDB.
RMAN> RESTORE PLUGGABLE DATABASE pdb_noncdb FROM PREPLUGIN;

• Recover a PDB from its preplugin backups until the datafile was plugged in.
RMAN> RECOVER PLUGGABLE DATABASE pdb_noncdb FROM PREPLUGIN;

• Check whether preplugin backups and archive log files are cataloged in the target CDB.
RMAN> SET PREPLUGIN CONTAINER pdb1;



RMAN> LIST PREPLUGIN BACKUP;
RMAN> LIST PREPLUGIN ARCHIVELOG ALL;
RMAN> LIST PREPLUGIN COPY;
• Verify that cataloged preplugin backups are available on disk.
RMAN> CROSSCHECK PREPLUGIN BACKUP;
RMAN> DELETE PREPLUGIN BACKUP;


• A restore operation from preplugin backups restores the datafiles taken before the PDB was
plugged in.
• A recover operation using preplugin backups uses preplugin incremental backups and
archived logs to recover the datafile up to the point when the datafile was plugged in. At the
end of the recover operation, the datafile is checkpointed as of the plugin SCN.
The preplugin archivelogs are restored to the target archivelog destination by default as long
as the target archivelog destination is not a fast recovery area (FRA). If the target archivelog
destination is the FRA, then the user has to provide an explicit archivelog destination using
the SET ARCHIVELOG DESTINATION command before executing RECOVER FROM
PREPLUGIN.
• If preplugin metadata belongs to more than one PDB, a command that does not specify
which PDB it refers to errors out, indicating that the user should scope the PDB. Scoping
the PDB is done by using the SET PREPLUGIN CONTAINER command. Scoping is not
necessary if you connected to the PDB as the target; the SET PREPLUGIN CONTAINER
command is necessary if you connected to the CDB root as the target.
• CROSSCHECK, DELETE, and CHANGE commands can use the PREPLUGIN option. The
CROSSCHECK command can validate the existence of preplugin backups, archived log files,
and image copies. The DELETE command can delete preplugin backups, archived log files
and image copies, and also expired preplugin backups.
RMAN> DELETE EXPIRED PREPLUGIN BACKUP;
RMAN> CHANGE PREPLUGIN archivelog all unavailable;
RMAN> CHANGE PREPLUGIN backup available;
RMAN> CHANGE PREPLUGIN copy unavailable;

Oracle Database 18c: New Features for Administrators 4 - 8


To Be Aware Of

• The source and destination CDBs must have COMPATIBLE set to 18.1 or higher to
create/restore/recover preplugin backups.
• In case of plugging in a non-CDB, the non-CDB must use ARCHIVELOG mode.
• The target CDB does not manage preplugin backups.
– Use CROSSCHECK and DELETE commands to manage the preplugin backups.
• A RESTORE using preplugin backups can restore datafiles from one PDB only.
• Backups taken by the source cdb1 are visible in target cdb2 only.



cdb1: PDB1 (backups, archive log files)
  -- unplug PDB1, plug as PDB2 -->
cdb2: PDB2 (backups, archive log files)
  -- unplug PDB2, plug as PDB3 -->
cdb3: PDB3 (backups, archive log files)

• The target CDB does not manage the source database backups. However, there are
commands to delete and crosscheck the source database backups.
• In one RMAN command, you cannot specify datafiles belonging to different PDBs when using
preplugin backups.
• The CDB root must be opened to make use of preplugin backups.
• Backups taken by the source cdb1 are visible in target cdb2 only. For instance, a PDB can
migrate from cdb1 to cdb2 and then to cdb3. The backups of the PDB taken at cdb1 are
accessible by cdb2. They are not accessible by cdb3. The cdb3 can only see backups of
the PDB taken by cdb2.

Oracle Database 18c: New Features for Administrators 4 - 9


Example

RMAN> SET PREPLUGIN CONTAINER pdb1;


RMAN> CATALOG PREPLUGIN ARCHIVELOG '/u03/app/…/o1_mf_1_8_dnqwm59v_.arc';
RMAN> RUN { RESTORE PLUGGABLE DATABASE pdb1 FROM PREPLUGIN;
RECOVER PLUGGABLE DATABASE pdb1 FROM PREPLUGIN;
}
RMAN> RECOVER PLUGGABLE DATABASE pdb1;


In this example, you first avoid any ambiguity about which PDB the backups belong to by scoping
the PDB.
Then you catalog the last archive log file created after the PDB was unplugged and the metadata
exported.
Then you restore and recover the PDB using preplugin backups.
And finally, you run a normal media recovery after recovering from preplugin backups.

Oracle Database 18c: New Features for Administrators 4 - 10


Cloning Active PDB into Another CDB Using DUPLICATE

Use the DUPLICATE command to create a copy of a PDB or subset of a PDB.


12c
Duplicate a CDB or PDBs or PDB tablespaces in active mode to a fresh auxiliary
instance.
RMAN> DUPLICATE TARGET DATABASE TO cdb1 PLUGGABLE DATABASE pdb1b;

18c Duplicate a PDB or PDB tablespaces in active mode to an existing opened CDB.
– Set the COMPATIBLE initialization parameter to 18.1.
– Clone only one PDB at a time.



– Set the destination CDB in RW mode.
– Set the REMOTE_RECOVERY_FILE_DEST initialization parameter in the destination
CDB to the location where to restore foreign archive log files.
RMAN> DUPLICATE PLUGGABLE DATABASE pdb1 AS pdb2 FROM ACTIVE DATABASE
DB_FILE_NAME_CONVERT ('cdb1', 'cdb2');


In Oracle Database 12c, to duplicate PDBs, you must create the auxiliary instance as a CDB. To do
so, start the instance with the declaration enable_pluggable_database=TRUE in the
initialization parameter file. When you duplicate one or more PDBs, RMAN also duplicates the CDB
root and the CDB seed. The resulting duplicate database is a fully new, functional CDB that contains
the CDB root, the CDB seed, and the duplicated PDBs.
In Oracle Database 18c, the destination instance acts as the auxiliary instance.
• An active PDB can be duplicated directly into an open CDB.
• The passwords for target and auxiliary connections must be the same when using active
duplicate.
• In the auxiliary instance, define the location where to restore the foreign archive log files via
the new initialization parameter, REMOTE_RECOVERY_FILE_DEST.
RMAN should be connected to the CDB root of the target and auxiliary instances.
Limitations
• Non-CDB to PDB duplication is not supported.
• Encryption is not supported for PDB cloning.
• SPFILE, NO STANDBY, FARSYNC STANDBY, LOG_FILE_NAME_CONVERT keywords are not
supported.
• NORESUME, DB_FILE_NAME_CONVERT, SECTION SIZE, USING COMPRESSED
BACKUPSET keywords are supported.

Oracle Database 18c: New Features for Administrators 4 - 11


Example: 1

To duplicate pdb1 from CDB1 into CDB2:

1. Set the REMOTE_RECOVERY_FILE_DEST initialization parameter in CDB2.

SQL> ALTER SYSTEM SET REMOTE_RECOVERY_FILE_DEST='/dir_to_restore_archivelogs';

2. Connect to the source (TARGET for DUPLICATE command): CDB1
3. Connect to the existing CDB2 that acts as the auxiliary instance:

rman TARGET sys/password@cdb1 AUXILIARY sys/password@cdb2

   CDB1 (pdb1)  --DUPLICATE pdb1-->  CDB2 (pdb1)

4. Start duplicate.

RMAN> DUPLICATE PLUGGABLE DATABASE pdb1 TO cdb2 FROM ACTIVE DATABASE;

The example shows a duplication of pdb1 from cdb1 into the existing cdb2 as pdb1.
To perform this operation, connections to the source (TARGET) cdb1 and to the destination
(AUXILIARY) cdb2 are required.
The location where to restore the foreign archive log files in the auxiliary instance is defined via the
new initialization parameter, REMOTE_RECOVERY_FILE_DEST.
Then the DUPLICATE command defines that the operation is performed while the source pdb1 is
opened.
• cdb2 needs to be opened in read/write.
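Because the destination CDB must be open read/write, a quick sanity check before starting the duplicate could be the following sketch, run while connected to cdb2:

```sql
-- Confirm cdb2 is open read/write and the restore destination is set.
SELECT name, open_mode FROM v$database;
SELECT value FROM v$parameter WHERE name = 'remote_recovery_file_dest';
```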

Oracle Database 18c: New Features for Administrators 4 - 12


Example: 2

To duplicate pdb1 from CDB1 into CDB2 as pdb2:

1. Set the REMOTE_RECOVERY_FILE_DEST initialization parameter in CDB2.

SQL> ALTER SYSTEM SET REMOTE_RECOVERY_FILE_DEST='/dir_to_restore_archivelogs';

2. Connect to the source (TARGET for DUPLICATE command): CDB1
3. Connect to the existing CDB2 that acts as the auxiliary instance:

rman TARGET sys/password@cdb1 AUXILIARY sys/password@cdb2

   CDB1 (pdb1)  --DUPLICATE pdb1-->  CDB2 (pdb2)

4. Start duplicate.

RMAN> DUPLICATE PLUGGABLE DATABASE pdb1 AS pdb2 TO cdb2 FROM ACTIVE DATABASE;

The example shows a duplication of pdb1 from cdb1 into the existing cdb2 as pdb2.
To perform this operation, connections to the source (TARGET) cdb1 and to the destination
(AUXILIARY) cdb2 are required.
The location where to restore the foreign archive log files in the auxiliary instance is still defined via
the new initialization parameter, REMOTE_RECOVERY_FILE_DEST.
Then the DUPLICATE command defines that the operation is performed while the source pdb1 is
opened.
• cdb2 needs to be opened read/write.

Oracle Database 18c: New Features for Administrators 4 - 13


Duplicating On-Premise CDB as Cloud Encrypted CDB

Duplicating an on-premise CDB to the Cloud:

• Any newly created tablespace is encrypted in the Cloud CDB.

  ENCRYPT_NEW_TABLESPACES = CLOUD_ONLY
  SQL> CREATE TABLESPACE …

  On-premise database ORCL          -->   Database Cloud Service database ORCL
  (no encrypted tablespaces,              (mandatory encryption)
  no ENCRYPTION clause)

• The Cloud CDB holds a keystore because this is the default behavior on Cloud.
• All forms of normal duplication are compatible:
  – Active duplication
  – Backup-based duplication
  – Targetless duplicate

If you decide to migrate an on-premise CDB to the Cloud, any tablespace created in the Cloud CDB
will be encrypted, even if no encryption clause is declared.
Oracle Database 12c allows encryption of new user-defined tablespaces via a new
ENCRYPT_NEW_TABLESPACES instance parameter.
• A user-defined tablespace that is created in a CDB in the Cloud is transparently encrypted
with Advanced Encryption Standard 128 (AES 128) even if the ENCRYPTION clause of the
SQL CREATE TABLESPACE statement is not specified, because the
ENCRYPT_NEW_TABLESPACES instance parameter is set to CLOUD_ONLY by default.
• A user-defined tablespace that is created in an on-premise database is not transparently
encrypted. Only the ENCRYPTION clause of the CREATE TABLESPACE statement determines
if the tablespace is encrypted.
All forms of duplication are compatible except for standby duplicate.
• Active duplication connects as TARGET to the source database and as AUXILIARY to the
Cloud instance.
• Backup-based duplication without a target connection connects as CATALOG to the recovery
catalog database and as AUXILIARY to the Cloud instance. RMAN uses the metadata in the
recovery catalog to determine which backups or copies are required to perform the
duplication.

Oracle Database 18c: New Features for Administrators 4 - 14


• Targetless duplication without connections to the target nor to the recovery catalog
database instance connects only to the Cloud instance and uses backups or copies of
the source database that are stored in a disk location on the destination host. RMAN
obtains metadata about where the backups and copies reside from the BACKUP
LOCATION clause of the DUPLICATE command. A disk backup location containing all
the backups or copies required for duplication must be available to the destination
host, the compute node for the Cloud CDB.
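A targetless, backup-based duplication to the Cloud instance could then be invoked as sketched below; the connect string and backup path are illustrative, and the auxiliary instance is assumed to be started NOMOUNT with a suitable parameter file:

```
$ rman AUXILIARY sys/password@cloudcdb

RMAN> DUPLICATE DATABASE TO cloudcdb
        BACKUP LOCATION '/u01/stage/backups'
        NOFILENAMECHECK;
```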


Oracle Database 18c: New Features for Administrators 4 - 15


Duplicating On-Premise Encrypted CDB as Cloud Encrypted CDB

Duplicating an on-premise CDB with encrypted tablespaces to the Cloud:

1. Tablespaces of the source CDB need to be decrypted.

  On-premise database ORCL   --copy keystore-->   Database Cloud Service database ORCL
  (encrypted tablespaces)                         (mandatory encryption)

2. Restored tablespaces are re-encrypted in the Cloud CDB.
   – Requires the master TDE key from the source CDB keystore
   – Requires the source keystore to be copied and opened at the destination CDB

RMAN> SET DECRYPTION WALLET OPEN IDENTIFIED BY password;

RMAN> DUPLICATE DATABASE TO orcl FROM ACTIVE DATABASE AS ENCRYPTED;

If the source database already contains encrypted tablespaces, the DUPLICATE must have access
to the TDE master key of the source (TARGET) database because the clone instance needs to
decrypt the datafiles before re-encrypting them during the restore operation. In this case, the
keystore has to be copied from the on-premise CDB to the clone instance before starting the
DUPLICATE and must be opened.
The DUPLICATE command allows the new ‘AS ENCRYPTED’ clause to restore the CDB with
encryption.
For more information about duplicating databases to Oracle Cloud Infrastructure, refer to
“Duplicating Databases to Oracle Cloud” in Oracle Database Backup and Recovery User’s Guide
18c, and also to RMAN Duplicate from an Active Database (https://docs.us-phoenix-
1.oraclecloud.com/Content/Database/Tasks/mig-rman-duplicate-active-
database.htm#RMANDUPLICATEfromanActiveDatabase)

Oracle Database 18c: New Features for Administrators 4 - 16


Duplicating Cloud Encrypted CDB as On-Premise CDB

1. Tablespaces of the source CDB are necessarily encrypted.

(Figure: The keystore is copied from the Cloud Database Service database ORCL to the
on-premise database ORCL, where encryption of tablespaces is optional.)

2. Restored tablespaces need to be decrypted to be created:


– Requires the TDE master key from the source CDB keystore



– Requires the source keystore to be copied and opened at the destination CDB

RMAN> SET DECRYPTION WALLET OPEN IDENTIFIED BY password;


RMAN> DUPLICATE DATABASE TO orcl FROM ACTIVE DATABASE AS DECRYPTED;


The source database already contains encrypted tablespaces; therefore, DUPLICATE must have
access to the master key of the source (TARGET) database because the clone instance needs to
decrypt the datafiles before the restore operation. In this case, the keystore has to be copied from
the Cloud CDB to the clone instance before starting DUPLICATE and must be opened by using the
SET DECRYPTION WALLET OPEN IDENTIFIED BY 'password' command.
The DUPLICATE command uses the ‘AS DECRYPTED’ clause to restore the CDB without
encryption.
If the user does not have an Advanced Security Option (ASO) license on the on-premise side, the
on-premise database cannot have TDE-encrypted tablespaces. The DUPLICATE command with the
‘AS DECRYPTED’ clause provides a way to move encrypted tablespaces and databases from the
Cloud to on-premise servers without encryption. It is important to note that it does not decrypt
tablespaces that were explicitly created with encryption, using the ENCRYPTION USING clause.
For more information, refer to "Duplicating an Oracle Cloud Database as an On-premise Database"
in Oracle Database Backup and Recovery User’s Guide 18c.

Oracle Database 18c: New Features for Administrators 4 - 17


Automated Standby Synchronization from Primary
12c
A standby database might lag behind the primary for various reasons like:
• Unavailability or insufficient network bandwidth between primary and standby database
• Unavailability of standby database
• Corruption/accidental deletion of archive redo data on primary
• Manually restore the primary control file on the standby after using RECOVER FROM
SERVICE.
18c RECOVER FROM SERVICE automatically rolls a standby forward:
1. Remember all datafile names on the standby.
2. Restart standby in nomount.



3. Restore controlfile from primary.
4. Mount standby database.
5. Rename datafiles from stored standby names.
6. Restore new datafiles to new names.
7. Recover standby.


In Oracle Database 12c, the RECOVER … FROM SERVICE command refreshes the standby data
files and rolls them forward to the same point-in-time as the primary. However, the standby control
file still contains old SCN values, which are lower than the SCN values in the standby data files.
Therefore, to complete the synchronization of the physical standby database, you must refresh the
standby control file to update the SCN values: you have to place the physical standby database in
NOMOUNT mode and restore the control file from the primary database to the standby.
The automation in Oracle Database 18c performs the following steps:
1. Remember all datafile names on the standby.
2. Restart standby in nomount.
3. Restore controlfile from primary.
4. Mount standby database.
5. Rename datafiles from stored standby names.
6. Restore new datafiles to new names.
7. Recover standby.
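
In Oracle Database 18c, the whole sequence above is driven by a single RMAN command, run while connected to the standby instance as TARGET. As a sketch, assuming a net service name primary_svc that points to the primary database:

RMAN> RECOVER STANDBY DATABASE FROM SERVICE primary_svc;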

Oracle Database 18c: New Features for Administrators 4 - 18


Summary

In this lesson, you should have learned how to:


• Reuse preplugin backups after conversion of a non-CDB to a PDB
• Reuse preplugin backups after plugging a PDB into another CDB
• Duplicate an active PDB into an existing CDB
• Duplicate a CDB as encrypted
• Duplicate a CDB as decrypted
• Recover a standby database from primary




Oracle Database 18c: New Features for Administrators 4 - 19


Practice 4: Overview

• 4-1: Recovering a plugged non-CDB using preplugin backups


• 4-2: Recovering a plugged PDB using preplugin backups
• 4-3: Duplicating a PDB into an existing CDB
• 4-4: Duplicating an on-premise CDB for Cloud




Oracle Database 18c: New Features for Administrators 4 - 20


5
Using General Database
Enhancements



Objectives

After completing this lesson, you should be able to:


• Manage private temporary tables
• Use the Data Pump Import CONTINUE_LOAD_ON_FORMAT_ERROR option of the
DATA_OPTIONS parameter
• Perform online modification of partitioning and subpartitioning strategy
• Perform online MERGE partition and subpartition
• Generate batched DDL by using the DBMS_METADATA package
• Benefit from Unicode 9.0 support




Oracle Database 18c: New Features for Administrators 5 - 2


Global Temporary Tables
12c

• The definition of global temporary tables is visible to all sessions.


ACC_TMP

• Each session can see and modify only its own data.

ACC_TMP ACC_TMP

SQL> CREATE GLOBAL TEMPORARY TABLE hr.employees_temp
AS SELECT * FROM hr.employees;



• Global temporary tables retain data only for the duration of a transaction or session.
• DML locks are not acquired on the data.
• You can create indexes, views, and triggers on global temporary tables.
• Global temporary tables are created by using the GLOBAL TEMPORARY clause.


Temporary tables can be created to hold session-private data that exists only for the duration of a
transaction or session.
The CREATE GLOBAL TEMPORARY TABLE command creates a temporary table that can be
transaction specific or session specific. For transaction-specific temporary tables, data exists for the
duration of the transaction, whereas for session-specific temporary tables, data exists for the
duration of the session. Data in a session is private to the session. Each session can see and modify
only its own data. DML locks are not acquired on the data of the temporary tables. The clauses that
control the duration of the rows are:
• ON COMMIT DELETE ROWS: To specify that rows are visible only within the transaction. This
is the default.
• ON COMMIT PRESERVE ROWS: To specify that rows are visible for the entire session
You can create indexes, views, and triggers on temporary tables and you can also use the Export
and Import utilities to export and import the definition of a temporary table. However, no data is
exported, even if you use the ROWS option. The definition of a temporary table is visible to all
sessions.
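
As a sketch, the two duration clauses described above behave as follows; the table names are illustrative:

SQL> CREATE GLOBAL TEMPORARY TABLE hr.txn_tmp (c1 NUMBER)
     ON COMMIT DELETE ROWS;
SQL> CREATE GLOBAL TEMPORARY TABLE hr.sess_tmp (c1 NUMBER)
     ON COMMIT PRESERVE ROWS;

Rows inserted into hr.txn_tmp disappear at the next COMMIT or ROLLBACK, whereas rows inserted into hr.sess_tmp remain visible until the session ends.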

Oracle Database 18c: New Features for Administrators 5 - 3


Private Temporary Tables
18c

USER_PRIVATE_TEMP_TABLES

Private Temporary Tables (PTTs) exist only for the session that creates them.
• You can create a PTT with the CREATE PRIVATE TEMPORARY TABLE statement.
• Table name must start with ORA$PTT_ : PRIVATE_TEMP_TABLE_PREFIX = ORA$PTT_

SQL> CREATE PRIVATE TEMPORARY TABLE ORA$PTT_mine (c1 DATE, … c3 NUMBER(10,2));

• The CREATE PRIVATE TEMPORARY TABLE statement does not commit a transaction.
• Two concurrent sessions may have a PTT with the same name but different shape.
ORA$PTT_mine ORA$PTT_mine



• PTT definition and contents are automatically dropped at the end of a session or transaction.
SQL> CREATE PRIVATE TEMPORARY TABLE ORA$PTT_mine (c1 DATE …)
ON COMMIT PRESERVE DEFINITION;

SQL> DROP TABLE ORA$PTT_mine;


Private Temporary Tables (PTTs) are local to a specific session. In contrast with Global Temporary
Tables, the definition and contents are local to the creating session only and are not visible to other
sessions.
There are two types of duration for the created PTTs.
• Transaction: The PTT is automatically dropped when the transaction in which it was created
ends with either a ROLLBACK or COMMIT. This is the default behavior if no ON COMMIT
clause is defined at PTT creation.
• Session: The PTT is automatically dropped when the session that created it ends. This is the
behavior if the ON COMMIT PRESERVE DEFINITION clause is defined at the PTT creation.
A PTT must be named with a prefix 'ORA$PTT_'. The prefix is defined by default by the
PRIVATE_TEMP_TABLE_PREFIX initialization parameter, modifiable at the instance level only.
Creating a PTT does not commit the current transaction. Since it is local to the current session, a
concurrent user may also create a PTT with the same name but having a different shape.
At this time, PTTs cannot include User Defined Types, constraints, column default values, object
types or XML types, or an identity clause.
PTTs must be created in the user schema. Creating a PTT in another schema, using the ALTER
SESSION SET CURRENT SCHEMA command, is not allowed.
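
The default transaction duration can be sketched as follows; the name after the ORA$PTT_ prefix is illustrative, and ON COMMIT DROP DEFINITION spells out the default behavior:

SQL> CREATE PRIVATE TEMPORARY TABLE ora$ptt_sales (c1 DATE, c2 NUMBER)
     ON COMMIT DROP DEFINITION;
SQL> INSERT INTO ora$ptt_sales VALUES (SYSDATE, 1);
SQL> SELECT * FROM user_private_temp_tables;
SQL> COMMIT;        -- the PTT definition and contents are dropped here

The USER_PRIVATE_TEMP_TABLES view lists the PTTs of the current session only; after the COMMIT, it no longer returns a row for ORA$PTT_SALES.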

Oracle Database 18c: New Features for Administrators 5 - 4


Import with the CONTINUE_LOAD_ON_FORMAT_ERROR option

12c
When import detects a format error in the data stream, it aborts the load.
• All table data for the current operation is rolled back.
• Solution: Either re-export and re-import or recover as much of the data as possible
from the file with this corruption.
18c
Importing with the CONTINUE_LOAD_ON_FORMAT_ERROR option:
• Detects a format error in the data stream while importing data
• Instead of aborting the import operation, resumes loading data at the next granule
boundary



• Recovers at least some data from the dump file
• Is ignored for network mode import
$ impdp hr TABLES = employees DUMPFILE = dpemp DIRECTORY = dirhr
DATA_OPTIONS = CONTINUE_LOAD_ON_FORMAT_ERROR


In Oracle Database 12c, when a stream format error is detected, Data Pump import aborts and all
the rows already loaded are rolled back.
Oracle Database 18c introduces a new value for the DATA_OPTIONS parameter of impdp. When a
stream format error is detected and the CONTINUE_LOAD_ON_FORMAT_ERROR option is specified
for the DATA_OPTIONS parameter, Data Pump jumps ahead and continues loading from the next
granule. Oracle Data Pump maintains a directory of granules for the data stream of a table or
partition. Each granule has a complete set of rows; data for a row does not cross granule
boundaries. The directory is a list of offsets into the stream where a new granule, and therefore a
new row, begins. Any number of stream format errors may occur; each time, loading resumes at the
next granule.
Using this parameter for a table or partition that has stream format errors means that some rows
from the export database will not be loaded, possibly hundreds or thousands of them. Nevertheless,
all rows that are free of stream format errors are loaded.
The DATA_OPTIONS parameter for DBMS_DATAPUMP.SET_PARAMETER has a new flag to enable
this behavior: KU$_DATAOPT_CONT_LD_ON_FMT_ERR.
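
As a sketch of the PL/SQL path, the flag is passed through DBMS_DATAPUMP.SET_PARAMETER; the handle setup is abbreviated and the job details are assumptions for illustration:

SQL> DECLARE
       h NUMBER;
     BEGIN
       h := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'TABLE');
       -- DBMS_DATAPUMP.ADD_FILE and filter calls omitted
       DBMS_DATAPUMP.SET_PARAMETER(h, 'DATA_OPTIONS',
           DBMS_DATAPUMP.KU$_DATAOPT_CONT_LD_ON_FMT_ERR);
       DBMS_DATAPUMP.START_JOB(h);
     END;
     /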

Oracle Database 18c: New Features for Administrators 5 - 5


Online Partition and Subpartition Maintenance Operations

Improve the high availability of data by supporting an online implementation of a number of
frequently used DDLs:
• 11g: CREATE INDEX / ALTER TABLE ADD COLUMN | ADD CONSTRAINT
• 12c: DROP INDEX / ALTER INDEX UNUSABLE / ALTER TABLE DROP CONSTRAINT
  | SET COLUMN UNUSED | MOVE | MOVE PARTITION | SPLIT PARTITION
  | MODIFY nonpartitioned to partitioned | MOVE PARTITION INCLUDING ROWS
SQL> ALTER INDEX hr.i_emp_ix UNUSABLE ONLINE;



SQL> ALTER TABLE sales MODIFY PARTITION BY RANGE (c1) INTERVAL (100)
(PARTITION p1 …, PARTITION p2 …) ONLINE UPDATE INDEXES;

• 18c: ALTER TABLE MODIFY to allow repartitioning a table, add or remove subpartitioning
• 18c: ALTER TABLE MERGE PARTITION


Beginning with Oracle Database 12c, you can use the new ONLINE keyword to allow the execution
of DML statements during the following DDL operations:
• DROP INDEX
• ALTER TABLE DROP CONSTRAINT
• ALTER INDEX UNUSABLE
• ALTER TABLE SET COLUMN UNUSED
• ALTER TABLE MOVE
• ALTER TABLE MODIFY/SPLIT PARTITION
This enhancement enables simpler application development, especially for application migrations.
There are no application disruptions for schema maintenance operations.
To change the partitioning method of a table, you previously had to either use DBMS_REDEFINITION
procedures or do it manually with CTAS.
In Oracle Database 18c, you can change the partitioning method online, for example, convert the
HASH method to the RANGE method, or add or remove subpartitioning on a partitioned table to reflect
a new workload and make the data more manageable. Repartitioning a table can lead to better
performance, for example, changing the partitioning key to get more partition pruning. This avoids a
long downtime during the conversion of large partitioned tables. The ALTER TABLE MODIFY
command supports a completely nonblocking DDL to repartition a table.

Oracle Database 18c: New Features for Administrators 5 - 6


You can also complete an online partition maintenance operation such as MERGE, which is
often used in rolling up all old partitions into a single partition and then archiving them. Until
Oracle Database 18c release, the MERGE operation required an exclusive lock on the
relevant partitions for the entire duration of the operation. This prevented DMLs on these
specific partitions in the meantime.


Oracle Database 18c: New Features for Administrators 5 - 7


Online Modification of Partitioning and Subpartitioning Strategy

• Prevents concurrent DDLs on the affected table, until the operation completes
• ONLINE clause: Does not hold a blocking X DML lock on the table being modified
• No tablespace defined for the partitions; defaults to the original table’s tablespace
• The UPDATE INDEXES clause:
– Changes the partitioning state of indexes and storage properties of the indexes being
converted
– Cannot change the columns on which the original list of indexes are defined
– Cannot change the uniqueness property of the index or any other index property



— No tablespace defined for indexes:
— Local indexes after the conversion collocate with the table partition.
— Global indexes after the conversion reside in the same tablespace of the original global index
on the nonpartitioned table.


The ONLINE modification of a partitioned table prevents concurrent DDLs on the affected table, until
the operation completes, but it does not hold an exclusive blocking DML lock on the table. If the
ONLINE clause is not mentioned, the DDL operation holds a blocking exclusive DML lock on the
table being modified.
If the user does not specify the tablespace defaults for the partitions, the partitions of the
repartitioned table default to the original table’s tablespace.
The UPDATE INDEXES clause can be used to change the partitioning state of indexes and storage
properties of the indexes being converted. The columns on which the original list of indexes are
defined cannot be changed. This clause cannot change the uniqueness property of the index or any
other index property.
If no partitioning is defined for existing indexes of the original table by using the UPDATE INDEXES
clause, the following defaulting behavior applies for all unspecified indexes:
• Global indexes that are prefixed by the partitioning keys are converted to local partitioned
indexes.
• Local indexes are retained as local partitioned indexes if they are prefixed by the partitioning
keys in either the partitioning or subpartitioning dimension.
• All indexes that are nonprefixed by the partitioning keys are converted to global indexes.
• Because partitioned bitmap indexes can only be local, bitmap indexes are always local
irrespective of their prefixed column behavior.
All auxiliary structures, such as triggers, constraints, and Virtual Private Database (VPD) predicates
associated to the table, are retained exactly on the partitioned table as well.
This modification operation is not supported for IOTs, nor on tables in presence of domain indexes.

Oracle Database 18c: New Features for Administrators 5 - 8


Online Modification of Subpartitioning Strategy: Example

Before online conversion:


SQL> CREATE TABLE sales (prodno NUMBER NOT NULL, custno NUMBER, time_id DATE,
… qty_sold NUMBER(10,2), amt_sold NUMBER(10,2))
PARTITION BY RANGE (time_id)
(PARTITION s_q1_17 VALUES LESS THAN (TO_DATE('01-APR-2017','dd-MON-yyyy')),
PARTITION s_q2_17 VALUES LESS THAN (TO_DATE('01-JUL-2017','dd-MON-yyyy')), …);

SQL> CREATE INDEX i1_custno ON sales (custno) LOCAL;


SQL> CREATE UNIQUE INDEX i2_time_id ON sales (time_id);
SQL> CREATE INDEX i3_prodno ON sales (prodno);

Online conversion:



SQL> ALTER TABLE sales MODIFY PARTITION BY RANGE (time_id)
SUBPARTITION BY HASH (custno) SUBPARTITIONS 8
(PARTITION s_q1_17 VALUES LESS THAN (TO_DATE('01-APR-2017','dd-MON-yyyy')),
PARTITION s_q2_17 VALUES LESS THAN (TO_DATE('01-JUL-2017','dd-MON-yyyy')), …)
ONLINE UPDATE INDEXES (i1_custno LOCAL, i2_time_id GLOBAL PARTITION BY
RANGE (time_id) ( PARTITION ip1 VALUES LESS THAN (MAXVALUE)));


In the example in the slide, the user changes the partitioning method of the range-partitioned SALES
table into a range table subpartitioned by hash and also the state of the existing indexes on the
table. This modification operation is completely nonblocking because the ONLINE keyword is
specified.
The operation subpartitions each partition of the SALES table into eight hash partitions set on the
new subpartitioning key, CUSTNO.
Each partition of the range local partitioned index I1_CUSTNO is hash subpartitioned into eight
subpartitions. The unique index I2_TIME_ID is maintained as a global range partitioned unique
index with no subpartitioning.
All unspecified indexes whose index columns are a prefix of the new subpartitioning key are
automatically converted to a local partitioned index. Other indexes are kept as global nonpartitioned
indexes, such as I3_PRODNO.
All auxiliary structures on the table being modified, such as triggers, constraints, VPDs and others,
are retained on the partitioned table as well.
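
After the conversion, the new strategy can be checked in the data dictionary; the following query is a sketch:

SQL> SELECT partitioning_type, subpartitioning_type
     FROM user_part_tables WHERE table_name = 'SALES';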

Oracle Database 18c: New Features for Administrators 5 - 9


Online MERGE Partition and Subpartition: Example

Before partition merging operation:


SQL> CREATE TABLE sales
(prod_id NUMBER, cust_id NUMBER, time_id DATE, channel_id NUMBER,
promo_id NUMBER, quantity_sold NUMBER(10,2), amount_sold NUMBER(10,2))
PARTITION BY RANGE (time_id) INTERVAL (100)
( PARTITION p1 VALUES LESS THAN (100),
PARTITION p2 VALUES LESS THAN (500));

SQL> CREATE INDEX i1_time_id ON sales (time_id) LOCAL TABLESPACE tbs_2;


SQL> CREATE INDEX i2_promo_id ON sales (promo_id) GLOBAL TABLESPACE tbs_2;



Online partition merging operation:
SQL> ALTER TABLE sales MERGE PARTITIONS jan17,feb17,mar17 INTO PARTITION q1_17
COMPRESS UPDATE INDEXES ONLINE;

• The online operation can also be performed on subpartitions.


The ONLINE partition maintenance operation, such as merging partitions of a partitioned table,
prevents concurrent DDLs on the affected (sub) partitions, until the operation completes. It also does
not acquire/hold a blocking exclusive DML lock on the (sub) partitions being merged, even if it is only
for a short duration. If the ONLINE clause is not mentioned, the DDL operation holds a blocking
exclusive DML lock on the table being modified.
In the example in the slide, the user merges three partitions (January 2017, February 2017, and
March 2017) of the partitioned SALES table into the q1_17 (first quarter of 2017) partition. This
operation is completely nonblocking because the ONLINE keyword is specified.
The I1_TIME_ID index is maintained as a local partitioned index. The I2_PROMO_ID index is
maintained as a global index.
The same online merging operation can be executed on subpartitions.
This maintenance operation is not supported for IOTs, nor on tables in presence of domain indexes.

Oracle Database 18c: New Features for Administrators 5 - 10


Batched DDL from DBMS_METADATA Package

The DBMS_METADATA.SET_TRANSFORM_PARAM procedure identifies differences between
two tables.
• 12c: Generates one ALTER TABLE statement for every difference found
• 18c: Generates a single ALTER TABLE statement for all the differences related to scalar columns
  – No change in behavior for LOB and complex types
  – Simplified ALTER TABLE patches

DECLARE
  …
BEGIN
  …
  DBMS_METADATA.SET_TRANSFORM_PARAM (th, 'BATCH_ALTER_DDL', TRUE);
  …
END;
/
ALTER TABLE "APP1"."TEST1" ADD ("Y" VARCHAR2(40), "T" VARCHAR2(30), "Z" DATE)
ALTER TABLE "APP1"."TEST1" RENAME TO "TEST2"


Enterprise class application tables are typically very large and have a large number of columns. In
Oracle Database 12c, when comparing two application tables with different column lists,
DBMS_METADATA detects columns that need to be added or dropped and subsequently generates
one ALTER TABLE for each ADD or DROP column. When there is a large number of columns to be
added, there is a significant performance impact when executing a large number of ALTER TABLE
statements.
The performance impact resulting from executing ALTER TABLE ADD | DROP column statements
can be mitigated by batching the commands and collectively adding or dropping the new columns
with a single ALTER TABLE ADD | DROP column DDL statement.
Previous behavior:
ALTER TABLE app1.big_tab1 ADD (COL3 VARCHAR2(20) COLLATE POLISH_CI);
ALTER TABLE app1.big_tab1 ADD (COL4 CHAR(32) COLLATE BINARY);
ALTER TABLE app1.big_tab1 ADD (COL5 NUMBER);
New behavior:
ALTER TABLE app1.big_tab1 ADD
( COL3 VARCHAR2(20) COLLATE POLISH_CI,
COL4 CHAR(32) COLLATE BINARY,
COL5 NUMBER);

Oracle Database 18c: New Features for Administrators 5 - 11


Unicode 9.0 Support

• Unicode is an evolving standard.


• Unicode 9.0 was recently released:
– 11 new code blocks
– 7,500 new characters
– 6 new language scripts
– 72 new emoji characters
• Oracle Database 12c R2 (12.2.0.2) has been updated to use the 9.0.0 standard.
– Both AL32UTF8 and AL16UTF16 are updated.



– Oracle Globalization Development Kit (GDK) for Java is updated.


Unicode 9.0 adds a total of 7500 characters. It also includes a few other important updates on the
core specification as well as standard annexes and technical standards. For a complete list of the
changes, refer to the Unicode Consortium website at:
http://unicode.org/versions/Unicode9.0.0/
The new language scripts and characters add support for lesser-used languages worldwide:
• Osage, a Native American language
• Nepal Bhasa, a language of Nepal
• Fulani and other African languages
• The Bravanese dialect of Swahili, used in Somalia
• The Warsh orthography for Arabic, used in North and West Africa
• Tangut, a major historic script of China
Note: An emoji is a small digital image or icon used to express an idea or emotion. The origin of the
word is Japanese, where ‘e’ stands for ‘picture’ and moji for ‘letter, character’.

Oracle Database 18c: New Features for Administrators 5 - 12


Summary

In this lesson, you should have learned how to:


• Manage private temporary tables
• Use the Data Pump Import CONTINUE_LOAD_ON_FORMAT_ERROR option of the
DATA_OPTIONS parameter
• Perform online modification of partitioning and subpartitioning strategy
• Perform online MERGE partition and subpartition
• Generate batched DDL by using the DBMS_METADATA package
• Benefit from Unicode 9.0 support




Oracle Database 18c: New Features for Administrators 5 - 13


Practice 5: Overview

• 5-1: Managing private temporary tables


• 5-2: Using the Data Pump Import CONTINUE_LOAD_ON_FORMAT_ERROR option
• 5-3: Converting a HASH partitioned table to a RANGE partitioned table, online
• 5-4: Converting a LIST partitioned table on two keys to a LIST AUTOMATIC partitioned
table on one key, online
• 5-5: Converting a LIST AUTOMATIC partitioned table to a LIST AUTOMATIC partitioned
table with SUBPARTITIONING, online
• 5-6: Merging partitions of a partitioned table, online



• 5-7: Using batched DDL


Oracle Database 18c: New Features for Administrators 5 - 14


6

Improving Performance



Objectives

After completing this lesson, you should be able to:


• Configure and use Automatic In-Memory
• Configure the window capture of In-Memory expressions
• Describe the Memoptimized Rowstore feature and use in-memory hash index structures
• Describe the new SQL Tuning Set package
• Describe the concurrency of SQL execution of SQL Performance Analyzer tasks
• Describe SQL Performance Analyzer result set validation



• Describe a SQL Exadata-aware profile


Oracle Database 18c: New Features for Administrators 6 - 2


In-Memory Column Store: Dual Format of Segments in SGA
12c

(Figure: Dual format of segments in the SGA. Updates go through the buffer cache in row format,
while queries read the IM column store, sized by INMEMORY_SIZE, with a transaction journal
tracking changed rows. The IMCO and Wnnn background processes populate and repopulate IMCUs
at first data access or at instance startup, and when an internal threshold is reached or the journal
runs low on memory. Tables opt in with CREATE TABLE … INMEMORY or ALTER TABLE … INMEMORY.)


Oracle Database 12c introduces the In-Memory Database option.


The option allows the DBA to allocate space to the IM column store in SGA. The DBA turns on the
INMEMORY attribute at object creation time or when altering an existing object to convert it to an in-
memory candidate. An in-memory table gets in-memory column units (IMCUs) allocated in the IM
column store at first table data access or at database startup. An in-memory copy of the table is
made by performing a conversion from the on-disk format to the new in-memory columnar format.
This conversion is done each time the instance restarts because the IM column store copy resides
only in memory. When this conversion is done, the in-memory version of the table gradually
becomes available for queries. If a table is partially converted, queries are able to use the partial in-
memory version and go to disk for the rest, rather than waiting for the entire table to be converted.
There is a new background process, IMCO, which creates and refreshes IMCUs to populate and
repopulate the IM column store. IMCO is the background coordinator process, which schedules the
objects to be populated or repopulated in the IM column store. Wnnn is a background process that
actually populates the objects in memory.
When rows in the table are updated, the corresponding entries in the IMCUs are marked stale. The
row version that is recorded in the journal is constructed from the buffer cache, and is unaffected by
the subsets of columns that are in the IMCU. IMCU synchronization is performed by the IMCO/Wnnn
background processes with the updated rows populated in the transaction journal based on events:
• An internal threshold, including a number of invalidations to the rows per IMCU
• Transaction journal running low on memory
• RAC invalidations

Oracle Database 18c: New Features for Administrators 6 - 3


Deploying the In-Memory Column Store
12c

1. Verify the database compatibility value.


COMPATIBLE = 12.2.0.0.0

2. Configure the IM column store size.


INMEMORY_SIZE = 100G
• You can dynamically increase the IM column store size.
SQL> ALTER SYSTEM SET inmemory_size = 110g scope=both;




No special installation is required to set up the feature because it is shipped with
Oracle Database 12c.
1. The database compatibility must be set to 12.1.0.2, 12.2.0.0, or later.
2. Configure the IM column store size by setting the INMEMORY_SIZE instance parameter.
Use the K, M, or G letters to define the amount unit. Since Oracle Database 12c Release 2,
the size of the in-memory area can be dynamically increased after instance startup but not
decreased. The memory allocated to the area is deducted from the total available memory for
SGA_TARGET. There is no LRU algorithm to manage the in-memory objects. In-memory
objects may be partially populated into the IM column store if there is not enough space to
accommodate the entire object. When this object is queried, as much of the data from the
column store is retrieved and the rest is retrieved either from the buffer cache, flash cache, or
disk. The DBA can set priorities on objects to define which in-memory objects should be
populated in the IM column store.
Set the INMEMORY_SIZE parameter to a minimum of 128M and logically to at least the sum
of the estimated size of in-memory tables.
The parameter can be set per-PDB to limit the maximum size used by each PDB. Note that the sum
of the per-PDB values does not necessarily have to be equal to the CDB value. It may even be
greater.
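
For example, to cap the IM column store usage of a single PDB, while connected to that PDB; the PDB name is illustrative:

SQL> ALTER SESSION SET CONTAINER = pdb1;
SQL> ALTER SYSTEM SET inmemory_size = 10G;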

Oracle Database 18c: New Features for Administrators 6 - 4


Setting In-Memory Object Attributes

3. Enable or disable objects to be populated into the IM column store.


– IMCUs are initialized and populated at query access time only.
SQL> CREATE TABLE large_tab (c1 …) INMEMORY; Dual format

SQL> ALTER TABLE t1 INMEMORY ; Dual format

SQL> ALTER TABLE sales NO INMEMORY; Row format only


– IMCUs can be initialized when the database is opened.
SQL> CREATE TABLE test (…) INMEMORY PRIORITY CRITICAL;



– Use MEMCOMPRESS to define the compression level.
SQL> ALTER TABLE t1 INMEMORY MEMCOMPRESS FOR CAPACITY HIGH;


3. Turn on the INMEMORY attribute at object creation or when you alter it, to convert the object
into a columnar representation in the IM column store. All columns of the in-memory table are
populated into memory unless some columns are disabled by using the NO INMEMORY
clause. It is recommended to specify all columns simultaneously rather than having an
ALTER TABLE for each column, because it is more efficient.
Two INMEMORY subattributes define the following behaviors:
- The loading priority of the object data in the IM column store: The INMEMORY clause
can have the PRIORITY subclause. By default, an in-memory table is populated into
memory at first data access; this is the “on demand” behavior.
Using different priority levels, table data can be populated into the IM column
store soon after the database starts up.
- The degree of compression of the columns of an object in the IM column store: The
INMEMORY clause can have the MEMCOMPRESS subclause.
• The segments that are compatible with the INMEMORY attribute are tables, partitions,
subpartitions, inline LOBs, materialized views, materialized join views, and materialized view
logs.
• Clustered tables and IOTs are not supported with the INMEMORY clause.
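As an illustration of combining both subclauses in a single statement, and of excluding an individual column, consider the following sketch (the table and column names are hypothetical):

SQL> CREATE TABLE app.orders (
       id     NUMBER,
       txt    VARCHAR2(4000),
       amount NUMBER)
     INMEMORY MEMCOMPRESS FOR QUERY LOW PRIORITY HIGH
     NO INMEMORY (txt);

All columns except TXT are populated with QUERY LOW compression soon after instance startup, because of the HIGH priority.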
Managing Heat Map and Automatic Data Optimization Policies
12c
1. Enable Heat Map in the PDB: HEAT_MAP = ON
2. Heat Map statistics are collected on segments in the PDB: first in real memory
   (V$HEAT_MAP_SEGMENT), then flushed over time by MMON to the HEAT_MAP_STAT$ table
   (visible through the DBA_HEAT_MAP_SEG_HISTOGRAM view)
3. Create ADO policies on table EMP:
   – COMPRESS if no access during 3 days (pol1)
   – Move EMP to another tablespace if tablespace TBSEMP is FULL (pol2)
4. ADO policies evaluated (maintenance window):
   – No access since 3 days: COMPRESS (pol1)
   – TBSEMP not FULL yet: no movement (pol2)
5. ADO action executed: EMP compressed
6. View ADO results: COMPRESSION_STAT$ table

Oracle Database 12c enables the automation of Information Lifecycle Management (ILM) actions by:
• Collecting heat map statistics that track segment and block data usage and segment-level
usage frequencies in addition to daily aggregate usage statistics.
• Creating Automatic Data Optimization (ADO) policies that define conditions when segments
should be moved to other tablespaces and/or when segments/blocks can be compressed.
1. The first operation for the DBA is to enable heat map at the PDB level, tracking activity on
blocks and segments. The heat map activates system-generated statistics collection, such as
segment access and row and segment modification.
2. Real-time statistics are collected in memory (V$HEAT_MAP_SEGMENT view) and regularly
flushed by scheduled DBMS_SCHEDULER jobs to the persistent table HEAT_MAP_STAT$.
Persistent data is visible by using the DBA_HEAT_MAP_SEG_HISTOGRAM view.
3. The next step is to create ADO policies in the PDB on segments or groups of segments or as
default ADO behavior on tablespaces.
4. The next step is to schedule when ADO policy evaluation must happen if the default
scheduling does not match business requirements. ADO policy evaluation relies on heat map
statistics. MMON evaluates row-level policies periodically and starts jobs to compress
whichever blocks qualify. Segment-level policies are evaluated and executed only during the
maintenance window.
5. The DBA can then view ADO execution results by using the DBA_ILMEVALUATIONDETAILS
and DBA_ILMRESULTS views in the PDB.
6. Finally, the DBA can verify if the segment in the PDB is moved and stored on the tablespace
that is defined in the ADO policy and/or if blocks or the segment was compressed, by viewing
the COMPRESSION_STAT$ table.
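The workflow above can be sketched inside the PDB as follows. The schema name and target tablespace are illustrative, and the compression clause shown is one possible choice:

SQL> ALTER SYSTEM SET heat_map = ON;
SQL> ALTER TABLE scott.emp ILM ADD POLICY
       ROW STORE COMPRESS ADVANCED SEGMENT
       AFTER 3 DAYS OF NO ACCESS;
SQL> ALTER TABLE scott.emp ILM ADD POLICY
       TIER TO tbsemp2;

The first policy corresponds to pol1 (compression after inactivity) and the second to pol2 (movement when the source tablespace fills up).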
Creating ADO In-Memory Policies
12c
• Types of ADO In-Memory policies based on heat map statistics (HEAT_MAP = ON):
  – Define a policy to set the IM attribute.
  – Define a policy to unset the IM attribute: eviction from the IM column store.
  – Define a policy to modify IM compression.
• Define an anticipated time for evicting IM segments from the IM column store:
  SQL> CREATE TABLE app.emp (c number) INMEMORY;
  SQL> ALTER TABLE app.emp ILM
       ADD POLICY NO INMEMORY SEGMENT AFTER 10 DAYS OF NO ACCESS;
• Workflow: create the ADO policy on table EMP (eviction from the IM column store,
  pol1); the policy is evaluated (no access since 10 days); the ADO action is
  executed (EMP evicted).
• Related views: DBA_ILMDATAMOVEMENTPOLICIES, V$IM_ADOTASKS, V$IM_ADOTASKDETAILS

In Oracle Database 12c, without any Automatic Data Optimization (ADO) policies defined on an in-
memory segment, a segment that is populated in the IM column store is removed only if the segment
is dropped, moved, or the INMEMORY attribute on the segment is removed. This behavior can result
in memory pressure if the size of the data to be loaded into memory is more than the free space
available in the IM column store. The performance of the user workload would be optimal if the IM
column store contains the most frequently queried segments.
Oracle Database 12c later introduced three types of ADO In-Memory policies:
• An ADO policy to set the INMEMORY attribute on an object. This type of policy allows
specification of an IM clause as part of the ADO policy clause and annotates the table or
partition with this IM clause when the policy condition is satisfied. It does not populate the
segment to the IM store; the segment gets populated based on the priority in the IM clause.
SQL> ALTER TABLE t1 ILM ADD POLICY SET INMEMORY
AFTER 5 days OF creation;
• An ADO policy for the anticipated length of inactivity (NO ACCESS or MODIFICATION) that
would indicate eviction of the object from the IM column store. The ADO policy considers
heat map statistics. The object is kept in the IM column store as long as the activity does not
subside. Eviction unsets the INMEMORY attribute on the object.
• An ADO policy to modify IM compression: Change the compression level of an object from a
lower level of compression to a higher level.
SQL> ALTER TABLE t1 ILM ADD POLICY MODIFY INMEMORY
MEMCOMPRESS FOR QUERY HIGH AFTER 10 days OF no access;
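The policies defined this way can be checked from the data dictionary; the exact column list below is an assumption consistent with the view's purpose, so adjust it to the columns reported by DESCRIBE on your release:

SQL> SELECT policy_name, action_type, condition_type, condition_days
     FROM   dba_ilmdatamovementpolicies;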
Automatic In-Memory: Overview
18c
Before Automatic In-Memory was introduced, the DBA had to:
• Define when in-memory segments should be populated into the IM column store
• Define ADO IM policies to evict / populate IM segments from or into the IM column store
AIM automates the management of the IM column store by using heat map statistics:
• Ensures that the “working data set” is in the IM column store at all times
• Moves IM segments in and out of the IM column store
Benefits:
• Automatic actions: Makes the management of the IM column store easier
• Automatic eviction: Increases effective IM column store capacity
• Improved performance: Keeps as much of the working data set in memory as possible

Oracle Database 18c introduces the Automatic In-Memory (AIM) feature. The benefits of
configuring AIM are:
• Ease of management of the IM store: Management of the IM column store for reducing
memory pressure by eviction of cold IM segments involves significant user intervention. AIM
addresses these issues with minimal user intervention.
• Improved performance: AIM ensures that the “working data set” is in the IM column store at
all times. The working data set is a subset of all the IM enabled segments that is actively
queried at any time. The working data set is expected to change with time for many
applications. The working data set (or actively queried IM segments) contains a hot portion
that is active and a cold portion that is not active. For data ageing applications, the action
would be to remove cold IMCUs from the IM column store.
With AIM, the DBA need not define IM priority attributes or ADO IM policies on IM segments.
AIM automatically reconfigures the IM column store by evicting cold data out of the IM column store
and populating the hot data. The unit of data eviction and population is an on-disk segment. AIM
uses the heat map statistics of IM-enabled segments together with user-specified configurations to
decide the set of objects to evict under memory pressure.
AIM Action
• Increase the effective capacity of the IM column store by evicting inactive IM segments
with priority NONE from the IM column store under memory pressure.
• Evict at segment level:
– According to the amount of time that an IM segment has been inactive
– According to the window of time used by AIM to determine the statistics for decision-
making
• Populate hot data.
Note: ADO IM policies override AIM considerations.

AIM automatically performs segment-level actions.
• Segment-level population prioritizes the population of active data segments to the IM column
store under memory pressure based on workload patterns.
• Segment-level eviction increases the effective capacity of the IM column store by evicting
inactive IM segments with priority NONE from the IM column store under memory pressure.
The eviction relies on the amount of time that an IM segment has been inactive in order for
AIM to consider the segment for eviction. Eviction also relies on the time window used by
AIM to determine the statistics for decision-making.
In case ADO IM policies exist, they override AIM actions.
Configuring Automatic In-Memory

• Activate heat map statistics: SQL> ALTER SYSTEM SET heat_map = ON;
• Set the initialization parameter:
SQL> ALTER SYSTEM SET INMEMORY_AUTOMATIC_LEVEL = MEDIUM SCOPE = BOTH;
• Use the DBMS_INMEMORY_ADMIN.AIM_SET_PARAMETER procedure to configure the
  sliding stats window in days:
SQL> EXEC dbms_inmemory_admin.aim_set_parameter( -
       parameter => dbms_inmemory_admin.AIM_STATWINDOW_DAYS, -
       value => 1)
• Use the DBMS_INMEMORY_ADMIN.AIM_GET_PARAMETER procedure to get the current
values of the AIM parameters.

The new INMEMORY_AUTOMATIC_LEVEL initialization parameter makes the IM column store
largely self-managing. Limited controls are still available to modify the behavior of this
feature and to disable it if necessary. You can turn the automatic management of the IM
column store on or off by setting one of the following values:
• LOW: When under memory pressure, the database evicts cold segments from the IM column
store. This is the default value.
• MEDIUM: In Oracle Database 12c, an in-memory table is populated into memory at first data
access by default. This default behavior is the “on demand” behavior. Using different priority
levels, table data can be populated into the IM column store soon after the database starts
up. In Oracle Database 18c, this AIM level includes an additional optimization that prioritizes
population of segments under memory pressure rather than allowing on-demand population.
This level ensures that any hot segment that was not populated because of memory pressure
is populated first.
• OFF: This option disables AIM, returning the IM column store to its Oracle Database
12c Release 2 behavior.
Oracle recommends that you provision enough memory for the working data set to fit in the IM
column store. As a general rule, AIM requires an additional 5 KB multiplied by the number
of INMEMORY segments of SGA memory. For example, if 10,000 segments have
the INMEMORY attribute, then reserve 50 MB of the IM column store for AIM.
AIM uses the new DBMS_INMEMORY_ADMIN.AIM_SET_PARAMETER procedure to set the duration
to filter heat map statistics for IM-enabled objects as part of its decision algorithms. The constants
are used to populate the SYS.ADO_IMPARAM$ table. The default value for the sliding stats window
in days is: AIM_STATWINDOW_DAYS_DEFAULT := 31
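The current window setting can be read back with AIM_GET_PARAMETER. The sketch below assumes the procedure returns the value through an OUT parameter, mirroring the AIM_SET_PARAMETER signature:

SQL> VARIABLE days NUMBER
SQL> EXEC dbms_inmemory_admin.aim_get_parameter( -
       parameter => dbms_inmemory_admin.AIM_STATWINDOW_DAYS, -
       value => :days)
SQL> PRINT days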
Diagnostic Views
18c

What are the decisions and actions made by AIM?

• V$IM_ADOTASKS / DBA_INMEMORY_AIMTASKS: Track decisions made by AIM at a point
  in time (STATUS / STATE = RUNNING | UNKNOWN | DONE)
• V$IM_ADOTASKDETAILS / DBA_INMEMORY_AIMTASKDETAILS: Provide information about
  the options considered and the decisions made
  (ACTION = EVICT | NO ACTION | PARTIAL POPULATE)

• V$IM_ADOTASKS: This view provides information about AIM tasks. An AIM IM task provides
a way to track decisions made by AIM at a point in time. The STATUS column describes the
current state of the task, RUNNING, UNKNOWN, or DONE.
• DBA_INMEMORY_AIMTASKS: This view provides information on AIM IM tasks to database
administrators. The view columns are identical to the V$IM_ADOTASKS view with an extra
column, IM_SIZE, which corresponds to the in-memory size at the time of task creation.
• V$IM_ADOTASKDETAILS: The database investigates various possible actions as part of an
AIM task. This view provides information about the options considered and the decisions
made (ACTION column).
• DBA_INMEMORY_AIMTASKDETAILS: This view provides the database administrator with
details related to the AIM task actions, in particular the AIM action decided for each
object.
For details about these new Oracle Database 18c views, refer to the Oracle Database In-Memory
Guide 18c.
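For example, completed tasks and their per-object decisions could be listed together. The join column and the detail column names below are assumptions consistent with the view descriptions above; verify them with DESCRIBE before relying on this query:

SQL> SELECT t.task_id, d.object_owner, d.object_name, d.action
     FROM   dba_inmemory_aimtasks t
     JOIN   dba_inmemory_aimtaskdetails d ON d.task_id = t.task_id
     WHERE  t.state = 'DONE'
     ORDER  BY t.task_id;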
Populating In-Memory Expression Results

Two types of IMEs: optimizer-defined expressions and user-defined virtual columns
• Query performance improved by caching:
  – Results of frequently evaluated query expressions
    (INMEMORY_EXPRESSIONS_USAGE = ENABLE)
  – Results of user-defined virtual columns
    (INMEMORY_VIRTUAL_COLUMNS = ENABLE)
• Results stored in IM expression units (IMEUs) for subsequent reuse; each IMEU
  (for example, IMEU1 and IMEU2 in the slide figure) holds expression results such
  as a*b, k-4, and e/f for the rows of its IMCU
• Candidates detected by and eligible according to expression statistics
  (DBA_EXPRESSION_STATISTICS, SNAPSHOT = LATEST | CUMULATIVE | WINDOW)
  12c: SQL> exec DBMS_INMEMORY_ADMIN.IME_CAPTURE_EXPRESSIONS ('CURRENT')
  18c: SQL> exec DBMS_INMEMORY_ADMIN.IME_CAPTURE_EXPRESSIONS ('WINDOW')

Automatically identifying frequently used complex expressions or calculations, and then storing their
results in the IM column store can improve query performance. Storing precomputed virtual column
results can also significantly improve query performance by avoiding repeated evaluations.
The cached results can range from function evaluations on columns used in application, scan, or join
expressions, to bit-vectors derived during predicate evaluation for in-memory scans. Caching can
also address other internal computations that are not explicitly recited in a database query, such as
hash value computations for join operations.
Where are the in-memory expressions and virtual column results (IMEs) stored?
An IMCU is a basic unit of the in-memory copy of the table data. Each IMCU has its own in-memory
expression unit (IMEU), which contains expression results corresponding to the rows stored in that
IMCU.
Why are expressions and virtual columns considered good IME candidates?
Statistics such as frequency of execution and cost of evaluation on a per-segment basis are
regularly maintained by the optimizer and stored in the Expression Statistics Store (ESS). ESS uses
an LRU algorithm to automatically track which expressions are most frequently used.
In Oracle Database 12c, the DBMS_INMEMORY_ADMIN.IME_CAPTURE_EXPRESSIONS procedure
identifies the most frequently accessed (hottest) expressions in the database in the specified time
range, materializes them as hidden virtual columns, and adds them to their respective tables during
the next repopulation. The time range can be defined as:
• CUMULATIVE: The database considers all expression statistics since the creation of the
database.
• CURRENT: The database considers only expression statistics from the past 24 hours.
Oracle Database 18c introduces the new WINDOW time range.
Populating In-Memory Expression Results Within a Window

1. Open a window: SQL> exec DBMS_INMEMORY_ADMIN.IME_OPEN_CAPTURE_WINDOW()
2. Let the workload run.
   Optionally, get the current capture state of the expression capture window and
   the time stamp of the most recent modification:
   SQL> exec DBMS_INMEMORY_ADMIN.IME_GET_CAPTURE_STATE( P_CAPTURE_STATE, -
        P_LAST_MODIFIED)
3. Close the window: SQL> exec DBMS_INMEMORY_ADMIN.IME_CLOSE_CAPTURE_WINDOW()
4. Populate all the hot expressions captured in the window into the IM column store:
   SQL> exec DBMS_INMEMORY_ADMIN.IME_CAPTURE_EXPRESSIONS ('WINDOW')

You can define an expression capture window of an arbitrary length, which ensures that only the
expressions occurring within this window are considered for in-memory materialization. This
mechanism is especially useful when you know of a small interval that is representative of the entire
workload. For example, during the trading window, a brokerage firm can gather the set of
expressions and materialize them in the IM column store to speed up future query
processing for
To populate expressions tracked in the most recent user-specified expression capture window,
perform the following steps:
1. Open a window by invoking the DBMS_INMEMORY_ADMIN.IME_OPEN_CAPTURE_WINDOW
procedure.
2. Let the workload run until you think you have collected enough expressions.
3. Close the window by invoking the
DBMS_INMEMORY_ADMIN.IME_CLOSE_CAPTURE_WINDOW procedure.
4. Add all the hot expressions captured in the previous window into the IM column store by
invoking the DBMS_INMEMORY_ADMIN.IME_CAPTURE_EXPRESSIONS('WINDOW')
procedure.
You can get the current capture state of the expression capture window and the time stamp of the
most recent modification by invoking the DBMS_INMEMORY_ADMIN.IME_GET_CAPTURE_STATE
procedure.
You can still invoke the DBMS_INMEMORY_ADMIN.IME_CAPTURE_EXPRESSIONS('CURRENT')
procedure to add all the hot expressions captured in the past 24 hours, which includes WINDOW as
well, and the DBMS_INMEMORY_ADMIN.IME_CAPTURE_EXPRESSIONS('CUMULATIVE')
procedure to add all the hot expressions captured since the creation of the database.
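The four steps can also be driven from a single PL/SQL block. This is a sketch only; the OUT-parameter types for IME_GET_CAPTURE_STATE (a state string and a time stamp) are assumptions:

SQL> SET SERVEROUTPUT ON
SQL> DECLARE
       v_state    VARCHAR2(30);
       v_modified TIMESTAMP;
     BEGIN
       DBMS_INMEMORY_ADMIN.IME_OPEN_CAPTURE_WINDOW();
       -- ... let the representative workload run here ...
       DBMS_INMEMORY_ADMIN.IME_GET_CAPTURE_STATE(v_state, v_modified);
       DBMS_OUTPUT.PUT_LINE('Capture state: ' || v_state);
       DBMS_INMEMORY_ADMIN.IME_CLOSE_CAPTURE_WINDOW();
       DBMS_INMEMORY_ADMIN.IME_CAPTURE_EXPRESSIONS('WINDOW');
     END;
     /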
Memoptimized Rowstore
18c
Fast ingest and query rates for thousands of devices from the Internet requires:
• High-speed streaming of single-row inserts
• Very fast lookups to key-value type data in the database buffer cache
– Querying data with the PRIMARY KEY integrity constraint enabled
– Using a new in-memory hash index structure
– Accessing table rows permanently pinned in the buffer cache
• Aggregated and streamed data to the database through the trusted clients

Smart devices connected to the Internet that have the ability to send and receive data require
support for fast ingest and query rates for thousands of devices. The Memoptimized Rowstore
feature is meant to provide high-speed streaming of single-row inserts and very fast lookups to key-
value type data. The feature works only on tables that have a PRIMARY KEY integrity
constraint enabled.
To provide the speed necessary to service thousands of devices, the data is aggregated and
streamed to the database through the trusted clients.
The fast query part of the Memoptimized Rowstore feature allows access to existing rows through a
new hash index structure and pinned database blocks.
Oracle Database supports ingest and access of row-based data in a fraction of the time that it takes
for conventional SQL transactions. With the ability to ingest high-speed streaming of input data and
the use of innovative protocols and hash indexing of key-value pairs for lookups, the Memoptimized
Rowstore feature significantly reduces transaction latency and overhead, and enables businesses to
deploy thousands of devices to monitor and control all aspects of their business.

In-Memory Hash Index
Hash index maps a given key to the address of rows in the database buffer cache:
1. Gets the address of the row in the buffer cache
2. Reads the row from the buffer cache
• Additional memory reserved in the SGA: MEMOPTIMIZE_POOL_SIZE = 100M
  (the hash index maps key values, such as x4, x2, and x6 in the slide figure, to
  the addresses of rows pinned in database buffer cache blocks)
• Enable tables for MEMOPTIMIZE FOR READ:
  SQL> ALTER TABLE oe.t MEMOPTIMIZE FOR READ;
  – Does not change table on-disk structures
  – Does not require application code change
  – Shown as MEMOPTIMIZE_READ = ENABLED and MEMOPTIMIZE_WRITE = DISABLED in
    DBA_TABLES, DBA_TAB_PARTITIONS, DBA_TAB_SUBPARTITIONS, DBA_OBJECT_TABLES
• Populate the hash index for an object with the DBMS_MEMOPTIMIZE.POPULATE
  procedure

An in-memory hash table mapping a given key to the location of corresponding rows enables quick
access of the Oracle data block storing the row.
The in-memory hash table is indexed with a user-specified primary key, very similar to hash clusters
containing tables with PRIMARY KEY constraint enabled. This in-memory structure is called a hash
index, although the underlying data structure is a hash table. The data structure resides in the
instance memory, requiring additional space in SGA. You can set the MEMOPTIMIZE_POOL_SIZE
initialization parameter to reserve static SGA allocation at instance startup.
To build a fast code path, an in-memory hash index data structure alone is not sufficient.
Rows of tables are stored in disk blocks and, when row data is queried, the database
buffer cache caches table blocks in the SGA of the database instance. Because blocks are
aged out according to the replacement policy used by the buffer cache, the blocks have to
be permanently pinned in the buffer cache to avoid disk I/O. This is what setting the
MEMOPTIMIZE FOR READ attribute on a table achieves, without changing the on-disk structure.
Using the DDL command ALTER TABLE t MEMOPTIMIZE FOR READ cascades the attribute to all
existing partitions (and sub-partitions). Use the NO MEMOPTIMIZE FOR READ clause to disable the
feature on an object. By default, tables are MEMOPTIMIZE FOR READ disabled.
Setting the MEMOPTIMIZE FOR WRITE attribute on a table with a primary key inserts the key
and the corresponding data and metadata into the hash index structure during row inserts.
The hash index structure is also updated during other write operations, such as delete
and update.
Use the DBMS_MEMOPTIMIZE.POPULATE procedure to populate the hash index for an object, table,
partition or subpartition.
SQL> exec DBMS_MEMOPTIMIZE.POPULATE (SCHEMA_NAME => 'SH',
table_name => 'SALES', PARTITION_NAME => 'SALES_Q3_2003')
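Putting the pieces together, a minimal setup might look as follows. The pool size, schema, and table names are illustrative; because MEMOPTIMIZE_POOL_SIZE is a static SGA allocation, an instance restart is required after changing it:

SQL> ALTER SYSTEM SET memoptimize_pool_size = 100M SCOPE = SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
SQL> CREATE TABLE oe.t (
       id  NUMBER PRIMARY KEY,
       val VARCHAR2(100))
     MEMOPTIMIZE FOR READ;
SQL> exec DBMS_MEMOPTIMIZE.POPULATE(schema_name => 'OE', table_name => 'T')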
DBMS_SQLTUNE Versus DBMS_SQLSET Package
Package functionality:
• SQL tuning task management: create / drop tuning task, execute tuning task,
  display advisor recommendations
• SQL Profile management: accept SQL profile, drop / alter SQL profile,
  manipulate staging tables
• STS management: create / drop STS, populate STS, query STS content,
  manipulate staging tables
12c: DBMS_SQLTUNE provides all three areas of functionality.
18c: DBMS_SQLSET provides the STS management functionality.

In Oracle Database 12c, to perform manual and automatic tuning of statements, and management of
SQL profiles and SQL Tuning Sets (STS), you can use the DBMS_SQLTUNE package that contains
the necessary APIs.
In Oracle Database 18c, the DBMS_SQLSET package is the new package that contains SQL Tuning
Set functionality.
SQL Tuning Sets: Manipulation
12c
SQL Tuning Set functionality is available only if one of the following conditions exist:
• Tuning Pack is enabled.
• Real Application Testing (RAT) option is installed.
18c
SQL Tuning Set functionality is available for free with Oracle DB Enterprise Edition.
• A new DBMS_SQLSET package is available to create, edit, drop, populate, and query
STS and manipulate staging tables.
SQL> EXEC dbms_sqlset.create_sqlset | delete_sqlset | update_sqlset |
drop_sqlset
SQL> EXEC dbms_sqlset.capture_cursor_cache | load_sqlset

SQL> EXEC dbms_sqlset.create_stgtab | pack_stgtab | unpack_stgtab | remap_stgtab
• The new package is not part of the tuning pack or RAT option.

In Oracle Database 12c, the package containing the SQL Tuning Set functionality is DBMS_SQLTUNE
that is part of the tuning pack or Real Application Testing option.
In Oracle Database 18c, the DBMS_SQLSET package is the new package to contain the SQL Tuning
Set functionality.
• Create and drop STS: CREATE_SQLSET, DROP_SQLSET
• Populate STS: CAPTURE_CURSOR_CACHE, LOAD_SQLSET
• Query STS content: SELECT_SQLSET function
• Manipulate staging tables: CREATE_STGTAB, PACK_STGTAB, UNPACK_STGTAB,
REMAP_STGTAB
The new package is part of neither the Tuning Pack nor the Real Application Testing
option. It is available for free with Oracle Database Enterprise Edition.
Most of the functions and procedures of the DBMS_SQLTUNE package can be found in the new
DBMS_SQLSET package, except the procedures related to profiles (ACCEPT_SQL_PROFILE), tuning
tasks, and baselines.
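Because DBMS_SQLSET mirrors the STS API of DBMS_SQLTUNE, a typical create-and-load sequence might look like the sketch below. The STS name and filter are illustrative, and it is assumed that the SELECT_CURSOR_CACHE table function and the SQLSET_CURSOR type carry over from DBMS_SQLTUNE:

SQL> exec DBMS_SQLSET.CREATE_SQLSET(sqlset_name => 'MYSTS')
SQL> DECLARE
       cur DBMS_SQLSET.SQLSET_CURSOR;
     BEGIN
       OPEN cur FOR
         SELECT VALUE(p)
         FROM   TABLE(DBMS_SQLSET.SELECT_CURSOR_CACHE(
                  basic_filter => 'parsing_schema_name = ''APP''')) p;
       DBMS_SQLSET.LOAD_SQLSET(sqlset_name => 'MYSTS', populate_cursor => cur);
     END;
     /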
SQL Performance Analyzer
• Targeted users: DBAs, QAs, application developers
• Helps predict the impact of system changes on SQL workload response time:
– Database upgrades
– Implementation of tuning recommendations
– Schema / database parameter changes
– Statistics gathering
– OS and hardware changes
• Builds different versions of SQL workload performance (SQL execution plans and
  execution statistics)
• Re-executes SQL statements serially (12c) or concurrently (18c)
• Analyzes performance differences
• Offers fine-grained performance analysis on individual SQL statements

The Oracle Real Application Testing option in Oracle Database 12c includes SQL Performance
Analyzer (SQLPA), which gives you an accurate assessment of the impact of change on the SQL
statements that make up the workload.
SQLPA helps you forecast the impact of a potential change on the performance of a SQL query
workload. SQLPA is used to predict and prevent potential performance problems for any database
environment change like database upgrades, schema or parameter changes, or statistics gathering
change that affects the structure of the SQL execution plans. This capability provides DBAs with
detailed information about the performance of SQL statements, such as before-and-after execution
statistics, and statements with performance improvement or degradation. This enables you to make
changes in a test environment to determine whether the workload performance will be improved
through a database upgrade.
In Oracle Database 12c, when a SQLPA task is executed for analysis, each statement in the SQL
Tuning Set (STS) is executed one after the other, sequentially. Depending on the number of
statements stored in the STS and their complexity, the execution might experience long running
times.
In an STS, each statement is independent of each other. This makes it possible to concurrently
execute the statements in an STS. Oracle Database 18c allows the concurrent execution of
statements in an STS. You can choose the execution mode for an SPA task to concurrently execute
STS statements and define the degree of parallelism (DOP) to be used during SPA task execution.
Using SQL Performance Analyzer
1. Capture SQL workload on production.
2. Transport the SQL workload to a test system.
3. Build “before-change” performance data.
4. Make changes.
5. Build “after-change” performance data.
6. Compare results from steps 3 and 5.
7. Tune regressed SQL.

Using SQL Performance Analyzer
1. Gather SQL: In this phase, you collect the set of SQL statements that represent your SQL
workload on the production system.
2. Transport: You must transport the resultant workload to the test system. The STS is
exported from the production system and the STS is imported into the test system.
3. Compute “before-version” performance: Before any changes take place, you execute the
SQL statements, collecting baseline information that is needed to assess the impact that a
future change might have on the performance of the workload.
4. Make a change: When you have the before-version data, you can implement your planned
change and start viewing the impact on performance.
5. Compute “after-version” performance: This step takes place after the change is made in
the database environment. Each statement of the SQL workload runs under a mock
execution (collecting statistics only), collecting the same information as captured in step 3.
6. Compare and analyze SQL Performance: When you have both versions of the SQL
workload performance data, you can carry out the performance analysis by comparing the
after-version data with the before-version data.
7. Tune regressed SQL.
Steps 6-7: Comparing / Analyzing Performance
and Tuning Regressed SQL
• Rely on user-specified metrics to compare SQL performance.
• Calculate the impact of the change on individual SQLs and on the SQL workload.
• Use SQL execution frequency to define a weight of importance.
• Detect improvements, regressions, and unchanged performance.
• Detect changes in execution plans.
• 18c: Validate that the same result set was returned during the initial SPA test
  and during subsequent tests:
  SQL> EXEC dbms_sqlpa.set_analysis_task_parameter(:atname,-
       'COMPARE_RESULTSET', 'FALSE')
• Recommend running SQL Tuning Advisor to tune regressed SQLs.

After re-executing the SQL statements, you compare and analyze before and after performance,
based on the execution statistics, such as elapsed time, CPU time, and buffer gets.
In Oracle Database 18c, SQL Performance Analyzer (SPA) result set validation allows users to
validate that the same result set is returned during the initial SPA test-execute and during
subsequent test-executes. It assures you that repeated SQL queries are executing as expected and
is required in certain regulatory environments. If the result set returned by a query is different before
and after the change, it is most likely due to a bug in the SQL execution layer. Because this can have
a severe impact on SQL, it is desirable for SPA to be able to detect such issues and report them.
You can ensure the result set validation by setting the COMPARE_RESULTSET parameter.
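To enable the validation for a given SPA task, set the parameter before the test-execute. The task name bind variable follows the slide's example, and TRUE is presumably the enabling value:

SQL> EXEC dbms_sqlpa.set_analysis_task_parameter(:atname, -
       'COMPARE_RESULTSET', 'TRUE')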
SQL Performance Analyzer: PL/SQL Example

1. Create the tuning task:


EXEC :tname:= dbms_sqlpa.create_analysis_task( -
sqlset_name => 'MYSTS', task_name => 'MYSPA')

2. Set task execution parameters:


EXEC dbms_sqlpa.set_analysis_task_parameter( :tname, 'TEST_EXECUTE_DOP', 4)

3. Execute the task to build the before-change performance data:


EXEC dbms_sqlpa.execute_analysis_task(task_name => :tname, -



execution_type => 'TEST EXECUTE', execution_name => 'before')

4. Produce the before-change report:


SELECT dbms_sqlpa.report_analysis_task(task_name => :tname,
type=>'text', section=>'summary') FROM dual;


The example in the slide shows you how to use the DBMS_SQLPA package to invoke SQL
Performance Analyzer to assess the SQL performance impact of some changes.
1. Create the tuning task to run SQL Performance Analyzer.
2. Set the degree of concurrency for re-executing the statements with the TEST_EXECUTE_DOP
parameter.
3. Execute the task once to build before-change performance data. You can specify various
parameters, for example, the EXECUTION_TYPE parameter as follows:
- EXPLAIN PLAN to generate explain plans for all SQL statements in the SQL
workload
- TEST EXECUTE to execute all SQL statements in the SQL workload. The procedure
executes only the query part of the DML statements to prevent side-effects to the
database or user data. When TEST EXECUTE is specified, the procedure generates
execution plans and execution statistics.
- COMPARE [PERFORMANCE] to analyze and compare two versions of SQL performance
data
- CONVERT SQLSET to read the statistics captured in a SQL Tuning Set and model
them as a task execution
4. Produce the before-change report (special settings for report: set long 100000,
longchunksize 100000, and linesize 90).
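The report settings mentioned above can be applied in SQL*Plus before producing the text report, for example:

```sql
SET LONG 100000 LONGCHUNKSIZE 100000 LINESIZE 90
SELECT dbms_sqlpa.report_analysis_task(task_name => :tname,
                                       type      => 'text',
                                       section   => 'summary')
FROM dual;
```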



SQL Performance Analyzer: PL/SQL Example

After making your changes:


5. Create the after-change performance data:
EXEC dbms_sqlpa.execute_analysis_task(task_name => :tname, -
execution_type => 'TEST EXECUTE', execution_name => 'after')

6. Generate the after-change report:


SELECT dbms_sqlpa.report_analysis_task(task_name => :tname,
type=>'text', section=>'summary') FROM dual;

7. Set task comparison parameters and compare the task executions:


EXEC dbms_sqlpa.set_analysis_task_parameter(:tname, 'COMPARE_RESULTSET', 'TRUE')



EXEC dbms_sqlpa.execute_analysis_task(task_name => :tname,
execution_type => 'COMPARE PERFORMANCE')

8. Generate the analysis report:


SELECT dbms_sqlpa.report_analysis_task(task_name => :tname,
type=>'text', section=>'summary') FROM dual;


5. After making your changes, execute the task again to build the after-change performance data.
6. Generate the after-changes report.
7. Compare the two executions. You can set the COMPARE_RESULTSET parameter to TRUE to
validate that the result set returned during the initial SPA test-execute is identical to the result
during subsequent test-executes.
8. Generate the analysis report.
Note: For more information about the DBMS_SQLPA package, see the Oracle Database PL/SQL
Packages and Types Reference Guide.
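Before running the comparison, you can check which executions exist for the task by querying the advisor dictionary views. This is a sketch only: the task name MYSPA comes from step 1 of the example, and the full-report options shown are standard DBMS_SQLPA parameters.

```sql
-- List the executions recorded for the SPA task
SELECT execution_name, execution_type, status
FROM   dba_advisor_executions
WHERE  task_name = 'MYSPA'
ORDER  BY execution_start;

-- Produce the full comparison report rather than only the summary section
SELECT dbms_sqlpa.report_analysis_task(task_name => 'MYSPA',
                                       type      => 'text',
                                       level     => 'ALL',
                                       section   => 'ALL')
FROM dual;
```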



SQL Exadata-Aware Profile
18c

SQL Tuning Advisor provides better execution plans, faster SQL, less resource usage, and
enhanced performance of Exadata systems by using new algorithms.
• SQL Tuning Advisor executes a new analysis to determine if any of these system
statistics are not up to date:
– I/O Seek Time
– Multi-Block Read Count (MBRC)
– I/O Transfer Speed
• If any of the system statistics are found to be stale and gathering them improves the
performance of the SQL being tuned, SQL Tuning Advisor recommends an Exadata-aware
SQL profile.
• Accepting such a profile impacts performance of only the SQL being tuned and not any
of the other SQLs.


On Exadata systems, the cost of smart scans is dependent on three system statistics:
• I/O Seek Time
• Multi-Block Read Count (MBRC)
• I/O Transfer Speed
The values of these system statistics are usually different on Exadata as compared to non-Exadata
and can influence which execution plan would be optimal.
In Oracle Database 18c, SQL Tuning Advisor executes a new analysis to determine if any of these
system statistics are not up to date. If any of the system statistics are found to be stale and gathering
them improves the performance of the SQL being tuned, this will be recommended via a SQL profile
called an Exadata-aware SQL profile.
Accepting such a profile impacts performance of only the SQL being tuned and not any of the other
SQLs. This is consistent with the existing behavior of a SQL profile.
The SQL Tuning Advisor report in this case displays:
1- SQL Profile Finding (see explain plans section below)
--------------------------------------------------------
A potentially better execution plan was found for this statement.
Recommendation (better benefit)
-------------------------------
- Consider accepting the recommended SQL profile. It is an Exadata-aware
SQL profile.
execute dbms_sqltune.accept_sql_profile(task_name => 'TASK_XXXXX',
task_owner => 'EXAUSR', replace => TRUE);
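As an alternative to accepting the profile, you can refresh the stale system statistics directly with DBMS_STATS. This is a sketch only; note that, unlike an Exadata-aware SQL profile, gathering system statistics affects optimizer costing for all SQL statements.

```sql
-- Gather Exadata-specific system statistics
-- (I/O Seek Time, MBRC, I/O Transfer Speed):
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('EXADATA')

-- Inspect the resulting values:
SELECT sname, pname, pval1 FROM sys.aux_stats$;
```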



Summary

In this lesson, you should have learned how to:


• Configure and use Automatic In-Memory
• Configure the window capture of in-memory expressions
• Describe the Memoptimized Rowstore feature and use in-memory hash index structures
• Describe the new SQL Tuning Set package
• Describe the concurrency of SQL execution of SQL Performance Analyzer tasks
• Describe the SQL Performance Analyzer result set validation



• Describe a SQL Exadata-aware profile




Practice 6: Overview

• 6-1: Configuring and Using AIM


• 6-2: Tracking IM Expressions Within a Capture Window






7
Handling Enhancements in Big Data
and Data Warehousing



Objectives

After completing this lesson, you should be able to:


• Query inlined external tables
• Manage in-memory external tables
• Use the new query capabilities of the Analytics view
• Create and use polymorphic table functions
• Use new functions for approximate Top-N queries






Querying External Tables
12c

1. Create the external table before querying against the external table.
CREATE TABLE ext_emp (id NUMBER, …, email VARCHAR2(25))
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER DEFAULT DIRECTORY ext_dir
  ACCESS PARAMETERS
  ( records delimited by newline
    badfile ext_dir:'empxt%a_%p.bad'
    logfile ext_dir:'empxt%a_%p.log'
    fields terminated by ',' missing field values are null
    (emp_id, first_name, last_name, job_id) )
  LOCATION ('empext1.dat') )
REJECT LIMIT UNLIMITED;

Clauses that can later be modified at query time:
• DEFAULT DIRECTORY
• ACCESS PARAMETERS (DISCARDFILE, BADFILE, LOGFILE)
• LOCATION
• REJECT LIMIT

SQL> SELECT * FROM ext_emp;

2. Modify parameters during a query: no need to alter the external table definition.
SQL> SELECT * FROM ext_emp EXTERNAL MODIFY
     ( ACCESS PARAMETERS ( BADFILE ext_dir:'empxt2%a_%p.bad',
                           LOGFILE ext_dir:'empxt2%a_%p.log')
       LOCATION ('empext2.dat'));


In Oracle Database 12c, querying an external table requires a persistent object for the external table
to be created in the data dictionary.
You can define the DEFAULT DIRECTORY, ACCESS PARAMETERS, LOCATION, and REJECT
LIMIT values. When querying the external table, these parameters can be modified.
Modifications in ACCESS PARAMETERS are limited to DISCARDFILE, BADFILE, and LOGFILE.
If the second query in the slide is running concurrently with the first one that reads the data from
‘empext1.dat,’ there is no need to create a separate external table for the second query. The
second query overrides the default LOCATION to fetch external data from another location. The
queries share the external metadata, which was created for a single EXT_EMP table in the data
dictionary.



Querying Inlined External Tables
18c

Increasing the flexibility and ease of SQL access, inlined external tables:
• Are similar to inline views
• Allow the runtime definition of an external table as part of a SQL query statement
• Transparently access data outside the Oracle database
• Simplify the access of external data by a simpler and more efficient code
SQL> SELECT ext_emp.id FROM EXTERNAL (
       (id NUMBER, …, email VARCHAR2(25))
       TYPE ORACLE_LOADER
       DEFAULT DIRECTORY ext_dir
       ACCESS PARAMETERS
       ( records delimited by newline
         badfile ext_dir:'empxt%a_%p.bad'
         logfile ext_dir:'empxt%a_%p.log'
         fields terminated by ',' missing field values are null
         (id, …, email)
       )
       LOCATION ('empext1.dat')
       REJECT LIMIT UNLIMITED
     ) ext_emp;


Oracle Database 18c offers the possibility to query an external table without creating a persistent
object for the external table in the data dictionary. Compared to the example in the previous slide,
the first step can be skipped.
In this case, the query inlines the external table. To inline an external table, you must provide the
EXTERNAL keyword along with the same information that the CREATE TABLE syntax requires: the list
of external table columns defining the table, the access driver type, and the external table
parameters. A REJECT limit can be specified as an option. Note that the MODIFY keyword must be
omitted when inlining an external table because the external table is not referenced in the data
dictionary.
External table metadata exists only for the duration of the query. It is created during query
compilation and purged when the query has been aged out of the cursor cache.
The user querying the inlined external table must have the READ privilege on the directory object
containing the external data, and the WRITE privilege on the directory objects containing the bad, log
and discard files.
In the example in the slide, the external table is aliased as EXT_EMP. This allows inlined external
tables to be joined.
There are restrictions:
• Partitioning and LONG, BFILE, or ADT external table columns are not supported.
• Creating a materialized view or materialized zone map that includes an inline external table
clause in the definition query raises an error.
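Because an inline external table is aliased (EXT_EMP above), it can be joined like any other row source. The following sketch is illustrative only: it assumes a DEPARTMENTS table and a dept_id column in the external file.

```sql
SELECT d.department_name, COUNT(*) AS emp_count
FROM   departments d
JOIN   EXTERNAL (
         (id NUMBER, dept_id NUMBER)
         TYPE ORACLE_LOADER
         DEFAULT DIRECTORY ext_dir
         ACCESS PARAMETERS
         ( records delimited by newline
           fields terminated by ',' missing field values are null
           (id, dept_id) )
         LOCATION ('empext1.dat')
         REJECT LIMIT UNLIMITED
       ) ext_emp
ON     ext_emp.dept_id = d.department_id
GROUP  BY d.department_name;
```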



Database In-Memory Support for External Tables
18c

• Uses the features of Database In-Memory when the external data must be queried
repeatedly as multiple accesses of the external storage.
– Set the INMEMORY attribute on the external table / all external table partitions.
– Set the MEMCOMPRESS attribute.
SQL> CREATE TABLE test (…) ORGANIZATION EXTERNAL (TYPE ORACLE_LOADER … )
INMEMORY MEMCOMPRESS FOR CAPACITY HIGH;
• Enables populating data from external tables into the in-memory column store.
SQL> EXEC DBMS_INMEMORY.POPULATE ('HR', 'test')



• Allows advanced analytics on external data with Database In-Memory.
• Allows advanced analytics on a much larger data domain than just what resides in Oracle
databases.
SQL> ALTER SESSION SET query_rewrite_integrity=stale_tolerated;
SQL> SELECT * FROM test;
DBA_EXTERNAL_TABLES columns:
  INMEMORY = ENABLED | DISABLED
  INMEMORY_COMPRESSION = FOR QUERY LOW | HIGH | FOR CAPACITY LOW | HIGH


Oracle Database 18c enables the population of data from external tables into the in-memory column
store.
This allows population of data that is not stored in Oracle Database. This can be valuable when you
have other external data stores and you want to perform advanced analytics on that data with
Database In-Memory. This can be particularly valuable when the external data needs to be queried
repeatedly. You can avoid multiple accesses of the external storage and the queries can use the
features of Database In-Memory multiple times.
Data from external sources with ORACLE_LOADER and ORACLE_DATAPUMP access types can be
summarized and populated into the in-memory column store where repeated, ad-hoc analytic
queries can be run that might be too expensive to run on the source data.
The in-memory external tables also benefit from in-memory expressions.
You can set the INMEMORY attribute and its correlated MEMCOMPRESS attribute when creating and
altering an external table. If the external table is partitioned, all individual partitions are defined as in-
memory segments. The ability to exclude certain columns is not yet implemented.
Querying an in-memory external table requires the QUERY_REWRITE_INTEGRITY parameter in the
session to be set to STALE_TOLERATED. If the external file is updated, you must either repopulate
the in-memory segment with the DBMS_INMEMORY.REPOPULATE procedure, or alter the table to NO
INMEMORY and then set it back to INMEMORY.
Statistics are collected for in-memory external tables just as they are for in-memory heap tables,
for example, IM populate external table read time (ms).
Although Oracle Database 18c allows heat maps and makes Automatic Data Optimization (ADO)
manage eviction of cooling segments from the in-memory area, this is not the case for external
tables and external table partitions.
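A minimal sketch of the two refresh options described above, using the HR.TEST table from the earlier example:

```sql
-- Option 1: repopulate the in-memory segment in place
EXEC DBMS_INMEMORY.REPOPULATE('HR', 'TEST')

-- Option 2: take the table out of the IM column store and put it back
ALTER TABLE test NO INMEMORY;
ALTER TABLE test INMEMORY MEMCOMPRESS FOR CAPACITY HIGH;
EXEC DBMS_INMEMORY.POPULATE('HR', 'TEST')
```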
Analytic Views

Analytic views (AVs) are metadata-only objects defined over standard tables or views:
• They provide a hierarchical organization and analytical hierarchy-aware calculations.
• They are queried via SQL or the DBMS_MDX_ODBO package.
12c
Microsoft’s Multidimensional Expression (MDX) provides features that are not
accessible in SQL query.
18c
Enhancing the AV SQL query capabilities adds support for two such features:
– Query-scoped calculations
– Filter-before aggregate
Both dynamically modify an AV at query time, producing a query-scoped AV.


Oracle Database 12c introduced Analytic views (AVs). Along with their associated objects—attribute
dimensions and hierarchies—AVs are metadata-only objects defined over standard tables or views.
They provide a hierarchical organization and analytical hierarchy-aware calculations, helping Oracle
to compete in the business analytics space with players such as SAP’s HANA.
An AV encapsulates aggregations, calculations, and joins of fact data that are specified by attribute
dimensions and hierarchies and by measures.
AVs may be queried either directly in SQL or via Microsoft’s Multidimensional Expression (MDX)
language. The MDX interface is provided by the PL/SQL package DBMS_MDX_ODBO, and is called by
the Oracle OLE DB for OLAP Provider (ODBO) or an XML for Analysis (XMLA) configuration.
MDX provides features that are not accessible in SQL query.
When you create an AV with the CREATE ANALYTIC VIEW statement, the calculations are burnt
into the persisted AV and the user cannot add additional computations at query time.
Oracle Database 18c addresses that shortfall in SQL by enhancing the AV SQL query capabilities to
add support for two features:
• Query-scoped calculations
• Filter-before aggregate
Oracle Database 18c offers the ability to dynamically modify an AV at query time, allowing utilization
of the two preceding features on a query-by-query basis.



Visual Totals Versus Non-Visual Totals
12c

Table FACT_SALES:
  TIME     SALES
  Q1-2016  10
  Q2-2016  20
  Q3-2016  30
  Q4-2016  40

Totals of sales with MDX (selection: 2016, Q1-2016, Q2-2016):
  TIME     Visual SALES  Non-Visual SALES
  2016     30            100
  Q1-2016  10            10
  Q2-2016  20            20

SQL equivalent of non-visual totals: predicates specified in the WHERE clause simply reduce
the rows returned, but do not impact the aggregated measure data.
Visual totals have no SQL equivalent: if a node has any descendants in the selection, only
those descendants are used to aggregate up to that node.

• SQL querying an AV always produces the non-visual result.
• Visual totals with MDX:
– If the selection includes the year 2016 but none of its descendants, the data for 2016 is
aggregated from all of its leaves.
– If the selection includes year 2016 and in addition includes Q1-2016 and Q2-2016, the
data for 2016 using only those quarters is aggregated.

An AV is defined on top of a fact table (or view), which provides the data for the leaves of the
associated hierarchies. A query on an AV produces rows for all members of its hierarchies. Measure
data for rows representing non-leaves are aggregated according to the function specified in the AV
metadata, typically a SUM function. Any predicates specified in the WHERE clause simply reduce the
rows returned, but do not impact the aggregated measured data.
MDX uses a feature called visual totals that can be toggled on or off via a check box in Excel to
impact the aggregated measured data.
• When the feature is disabled, the data returned via an MDX AV query is identical to the data
returned for the equivalent SQL AV query. In the example in the slide, where the selection
includes the year 2016 but none of its descendants, the data for 2016 is aggregated from all
of its leaves.
• When the feature is enabled, however, they can diverge. When visual totals is enabled, if a
node has any descendants in the selection only those descendants are used to aggregate up
to that node. For example, if the selection includes the year 2016 but none of its
descendants, the data for 2016 is aggregated from all of its leaves. If the selection includes
2016 and, in addition includes Q1-2016 and Q2-2016, data for 2016 using only those
quarters is aggregated.



Filter-Before Aggregate Predicates and Calculated Measures
18c

How can SQL produce visual totals in queries?


• Use filter-before aggregate predicates based on hierarchy navigation:
1. Each hierarchy may specify a filter-before aggregate predicate, which serves to filter
the leaves of that hierarchy before aggregating the measures (here, it keeps the
Q1-2016 and Q2-2016 leaves).
2. The predicate for a given hierarchy specifies some set of hierarchy members.
3. The fact rows are then filtered to include only the leaf descendants of those members.

  Non-visual totals      Visual totals (with SQL filter-before aggregate predicate)
  TIME     SALES         TIME     SALES
  2016     100           2016     30
  Q1-2016  10            Q1-2016  10
  Q2-2016  20            Q2-2016  20

• Define calculated measures within a SELECT statement.
• Optionally, use a WHERE clause to further filter the output.


A user can produce either non-visual or visual results when using SQL to query an AV.
Each hierarchy may specify a filter-before aggregate predicate, which serves to filter the leaves of
that hierarchy before aggregating the measures. The predicate for a given hierarchy specifies some
set of hierarchy members. The fact rows are then filtered to include only the leaf descendants of
those members.
If the example in the slide had facts at the month level rather than quarter level, a filter-before
predicate that included (Q1-2016, Q2-2016) would therefore filter the fact rows to only (JAN-2016,
FEB-2016, MAR-2016, APR-2016, MAY-2016, JUN-2016). This does not impact the sales data for
the quarter rows because each quarter includes all months within that quarter, but causes the year
row to aggregate only over the first six months. The resulting query includes rows for the non-filtered
leaves and their ancestors.
In addition to filtering leaves based on hierarchy navigation, you can also define calculated
measures that are not declared in the AV, like the percentage change of sales from a current period
and a previous period. The temporary calculated measures apply to the rows resulting from the filter-
before aggregation.
A WHERE clause can be added to simply reduce the rows returned after the filter-before aggregation.
The WHERE clause does not impact the aggregated measure data.



Query-Scoped Calculations Using Hierarchy-Based Predicates

Set the COMPATIBLE initialization parameter to at least 18.0.0.


COMPATIBLE = 18.0.0.0.0

• Without filter-before aggregate predicates:


SELECT time_hier.member_name, sales FROM av.sales_av HIERARCHIES(time_hier)
WHERE time_hier.level_name IN ('ALL','YEAR','QUARTER')
ORDER BY time_hier.hier_order;

• Using filter-before aggregate predicates:


SELECT time_hier.member_name, sales FROM ANALYTIC VIEW (



USING sales_av HIERARCHIES(time_hier)
FILTER FACT (time_hier TO level_name = 'MONTH'
AND TO_CHAR(month_end_date,'Q') IN (1, 2)
)
)
WHERE time_hier.level_name IN ('ALL','YEAR','QUARTER')
ORDER BY time_hier.hier_order;


How to define hierarchy-based predicates


• Set the COMPATIBLE initialization parameter to 18.0.0 or higher.
• Use the FROM ANALYTIC VIEW (USING … FILTER FACT …) clause in the query
statement.
In the first example of the slide, without any filter-before aggregate predicate, the query reports all
sums of sales aggregated per existing quarter of every year, for every year, and for all years
together.
In the second example, the filter-before aggregate predicate stipulates to report sales of all years,
but only for the first half of each year, and finally for all years, taking into account only the first two
quarters for each year.
You can see the difference between the first and second query results:
• In the first query result, because there are three quarters for year 2011, the sum for CY2011
is higher than the result from the second query because the filter-before aggregate predicate
filtered out the fourth quarter.
• This is the same behavior for all years. Therefore, the aggregated sum of sales for all years is
smaller in the second query than in the first query.



Using Multiple Hierarchy-Based Predicates
SELECT time_hier.member_name AS time,
geography_hier.member_name AS geography, sales
FROM ANALYTIC VIEW (
USING sales_av HIERARCHIES(time_hier, geography_hier)
FILTER FACT ( time_hier TO level_name = 'QUARTER' AND (quarter_name
like 'Q1%' OR quarter_name like 'Q2%'),
geography_hier TO level_name = 'COUNTRY'
AND country_name in ('Mexico','Canada'))
)
WHERE time_hier.level_name IN ('YEAR') AND geography_hier.level_name = 'REGION'
ORDER BY time_hier.hier_order;

SELECT time_hier.member_name AS time,


geography_hier.member_name AS geography, sales
FROM ANALYTIC VIEW (



USING sales_av HIERARCHIES(time_hier, geography_hier)
FILTER FACT ( time_hier TO level_name = 'QUARTER' AND (quarter_name
like 'Q1%' OR quarter_name like 'Q2%'),
geography_hier TO level_name = 'COUNTRY'
AND country_name in ('Mexico', 'Canada', 'Chile'))
)
WHERE time_hier.level_name IN ('YEAR') AND geography_hier.level_name = 'REGION'
ORDER BY time_hier.hier_order;


You can define multiple hierarchy-based predicates.


The first example of the slide is based on two filter-before aggregate predicates. The first filter-before
aggregate predicate stipulates to report per year the sum of sales for the first half of each year. The
second filter-before aggregate predicate stipulates to report per region the sum of sales for only two
countries, Mexico and Canada. Because both Mexico and Canada belong to the same region, North
America, the query reports for each year the sum of sales for this region for the first half of each
year.
In the second example, the first filter-before aggregate predicate is the same as in the first query.
The second filter-before aggregate predicate stipulates to report per region the sum of sales for
three countries: Mexico, Canada, and Chile. In this case, because Chile belongs to a different
region, South America, the query reports for each year the sum of sales for both regions for the
first half of each year.



Query-Scoped Calculations Using Calculated Measures

• To define calculated measures:


SELECT time_hier.member_name, sales, sales_prior_period,
ROUND(sales_prior_period_pct_change,3) AS percent_change_sales
FROM ANALYTIC VIEW (
USING sales_av HIERARCHIES(time_hier)
ADD MEASURES (
sales_prior_period AS (LAG(sales) OVER (HIERARCHY time_hier OFFSET 1)),
sales_prior_period_pct_change AS (LAG_DIFF_PERCENT(sales) OVER
(HIERARCHY time_hier OFFSET 1))
)
)
WHERE time_hier.level_name = 'YEAR'
ORDER BY time_hier.hier_order;




Use the USING and ADD MEASURES clauses within a SELECT statement to define a calculated
measure.
The query in the example in the slide adds two calculated measures that are not burnt into the AV:
the sales value of the previous period, and the percent change of sales between the previous
period and the current period.



Using Hierarchy-Based Predicates and Calculated Measures

Combine FILTER FACT and ADD MEASURES:


SELECT time_hier.member_name AS time, geography_hier.member_name AS geography, sales,
sales_prior_period,
ROUND(sales_prior_period_pct_change,3) AS percent_change_sales
FROM ANALYTIC VIEW (
USING sales_av HIERARCHIES(time_hier, geography_hier)
FILTER FACT ( time_hier TO level_name = 'QUARTER' AND (quarter_name
like 'Q1%' OR quarter_name like 'Q2%'),
geography_hier TO level_name = 'COUNTRY'
AND country_name in ('Mexico','Canada'))
ADD MEASURES (
sales_prior_period AS (LAG(sales) OVER (HIERARCHY time_hier OFFSET 1)),
sales_prior_period_pct_change AS (LAG_DIFF_PERCENT(sales) OVER



(HIERARCHY time_hier OFFSET 1)))
)
WHERE time_hier.level_name = 'YEAR' AND geography_hier.level_name = 'REGION'
ORDER BY time_hier.hier_order;


The query in the slide combines FILTER FACT and ADD MEASURES to return sales, sales prior
period, and percent change sales prior period for the first half of years in Mexico and Canada.



Using Hierarchy-Based Predicates and Calculated Measures

Combine FILTER FACT, ADD MEASURES and WHERE clause:


SELECT time_hier.member_name AS time, geography_hier.member_name AS geography, sales,
sales_prior_period,
ROUND(sales_prior_period_pct_change,3) AS percent_change_sales
FROM ANALYTIC VIEW (
USING sales_av HIERARCHIES(time_hier, geography_hier)
FILTER FACT ( time_hier TO level_name = 'QUARTER' AND (quarter_name
like 'Q1%' OR quarter_name like 'Q2%'),
geography_hier TO level_name = 'COUNTRY'
AND country_name in ('Mexico','Canada'))
ADD MEASURES (
sales_prior_period AS (LAG(sales) OVER (HIERARCHY time_hier OFFSET 1)),
sales_prior_period_pct_change AS (LAG_DIFF_PERCENT(sales) OVER



(HIERARCHY time_hier OFFSET 1)))
)
WHERE time_hier.level_name = 'YEAR' AND geography_hier.level_name = 'REGION'
AND sales > 30000000
ORDER BY time_hier.hier_order;


The query in the slide combines FILTER FACT, ADD MEASURES, and a WHERE clause to return sales,
sales prior period, and percent change sales prior period for the first half of years in Mexico and
Canada. The added WHERE clause simply reduces the rows returned after the filter-before
aggregation and the measure calculations are applied. If you compare the resulting rows with the
resulting rows from the previous slide, you observe that it impacts neither the filter-before
aggregated values nor the aggregated measure data; it restricts only the resulting aggregated rows
to those whose sales are greater than 30000000.



Polymorphic Table Functions

• A table function (TF) is a function that returns a collection of rows that can be called from
the from-clause of a SQL query block.
SQL> SELECT * FROM TF_NOOP(emp);

• A polymorphic table function (PTF) is a TF whose return type is determined by the


arguments to the PTF.
SQL> SELECT * FROM PTF_NOOP(emp);

Goal
• Provide a framework for DBAs and developers for writing PTFs that are simple to use
and have an efficient and scalable implementation inside the RDBMS.
1. The DBA or developer creates the PTF.
2. A SQL writer invokes the PTF in a query.
3. RDBMS compiles and executes the PTF.


In Oracle Database 12c, you can quickly write a very simple table function that will accept a range of
table shapes but will be quite slow because it cannot be parallelized and data is returned only after
all source rows have been processed.
In Oracle Database 18c, a polymorphic table function (PTF) provides an efficient and scalable
mechanism to extend the analytical capabilities of the RDBMS to be used by SQL and PL/SQL
developers who require a simpler, more flexible and performant table function. A table function will
act like a normal Oracle row-source object, accepting a wide range of table shapes where the
RDBMS manages the execution plan.
The SQL writer is able to invoke table functions (TF) without knowing the details of the
implementation of the PTF, and the PTF does not need to know about the details or how the function
is being executed by the RDBMS (for example in serial or parallel), and whether the input rows to the
PTF were partitioned and/or ordered.



Row-Semantics and Table-Semantics PTFs

PTFs take a table as an argument.

[Figure: PTFs are categorized into Row-Semantics PTFs and Table-Semantics PTFs]

• Use a Row-Semantics (RS) PTF when the new columns can be determined by just looking
at any single row.
• Use a Table-Semantics (TS) PTF when the new columns can be determined by looking at
not only the current row but also some state that summarizes previously processed rows.
• A PTF is invoked from the from-clause of a SQL query block like existing TFs:
SQL> SELECT * FROM PTF_NOOP(emp);
• The table-argument can be one of the following:
– A schema-level object that is allowed in a from-clause
– A with-clause query
• A new EXPLAIN operation appears in plans: POLYMORPHIC TABLE FUNCTION

DBA_PROCEDURES: POLYMORPHIC = NULL | TABLE | ROW


PTFs (also called non-leaf PTFs) are categorized as Row-Semantics or Table-Semantics depending
on whether the result rows they produce depend on a single row or on a set of rows.
The input table to a Table-Semantics PTF can optionally be partitioned into subtables, and the input
table or partitions can also be ordered. This ordering and partitioning is specified in the query.
A PTF is invoked from the from-clause of a SQL query block like existing table functions. Table
functions (PTF or otherwise) no longer require wrapping the invocation in the TABLE() operator; the
TABLE() operator is now optional.
PTF arguments can be standard scalar arguments that can be passed to a regular table function, but
PTFs can additionally take a table-argument. A table-argument is either a with-clause query or a
schema-level object that is allowed in a from-clause (for example, tables, views, or table functions).
The result of a PTF can be cached in the query result cache in the SGA if the query block uses the
RESULT_CACHE hint:
SELECT * FROM (SELECT /*+ result_cache */ * FROM PTF_NOOP(emp));



Components to Create PTFs

A PTF is composed of two parts:


• The PL/SQL package that contains functions/procedures for the PTF implementation
– Required function DESCRIBE: Returns the new table “shape”
– Required procedure FETCH_ROWS: For a given subset of rows, produces the new
associated column values
• The PL/SQL function that names the PTF and then associates it with the implementation
package


A PTF is composed of two parts.


• The PL/SQL package that contains the API for the PTF implementation
• The PL/SQL function that names the PTF and then associates it with the implementation
package. This function can itself be either a schema-level function or a package-level
function.
The DESCRIBE function is called by the RDBMS to determine the type of rows produced by the PTF
(the “row shape”). This function is invoked during SQL cursor compilation, when the SQL query
under compilation refers to a PTF. The SQL compiler locates the implementation package of the
invoked PTF and then finds the DESCRIBE function inside that package.
All the argument values from the query that called the PTF are passed into the DESCRIBE function.
Like any PL/SQL function, the arguments of the PTF function and of the DESCRIBE function must
match, but with the type of any TABLE argument replaced with the DBMS_TF.Table_t descriptor type,
and the type of any COLUMNS argument replaced with the DBMS_TF.Columns_t descriptor type.
The DESCRIBE function returns the list of any new columns that the PTF will create (or null if no new
columns are being produced) using the DBMS_TF.columns_new_t descriptor.
The FETCH_ROWS procedure is responsible for consuming the rows in the input stream and producing
the corresponding new columns.



Steps to Create PTFs

1. Create the package containing the DESCRIBE function and FETCH_ROWS procedure.
CREATE OR REPLACE PACKAGE change_case_p AS
function Describe(tab IN OUT DBMS_TF.Table_t, new_case varchar2)
return DBMS_TF.describe_t;
procedure Fetch_Rows(new_case varchar2);
END change_case_p;
/
CREATE OR REPLACE PACKAGE BODY change_case_p AS …

2. Create the function.


CREATE OR REPLACE FUNCTION change_case (tab TABLE, new_case varchar2) return TABLE
pipelined ROW | TABLE POLYMORPHIC USING change_case_p;



• Specify exactly one formal argument of type TABLE.
• Specify the return type of the PTF as TABLE.
• Specify the type of the PTF function (ROW or TABLE POLYMORPHIC).
• Indicate which package contains the actual PTF implementation.


The DESCRIBE function returns a descriptor containing the type information of the new columns that
the PTF produces.
Additionally, DESCRIBE marks the columns of the input table with two kinds of non-mutually
exclusive annotations:
• Read Columns: The columns that are going to be read during execution. By default, none of
the input table columns are marked “read”.
• Pass-through Columns: The columns that should be passed (unmodified) from the PTF
input to the output. By default, all of the input table columns are marked “pass-through”.
The input to a PTF is a single stream of rows that is divided into arbitrary sized chunks of rows
(including, possibly, zero rows). Each of these chunks is called a rowset.
The FETCH_ROWS procedure is responsible for consuming the rows in the input stream, one rowset at
a time, and producing the corresponding new columns. Only one rowset is active at any time.
• Each call to FETCH_ROWS must act on the active rowset; after processing it, the procedure can
either return or remain inside FETCH_ROWS to request and process another rowset.



DBMS_TF Routines

• DBMS_TF.Table_t: Descriptor for the PTF input table


• DBMS_TF.Columns_New_t: Descriptor for the table produced by the PTF
• DBMS_TF.Row_Set_t: Input rowset
• DBMS_TF.Row_Set_t: Output rowset
• DBMS_TF.Get_Row_Set: Reads the input rowset
• DBMS_TF.Put_Row_Set: Writes back the output rowset


The execution procedure FETCH_ROWS is called by the RDBMS during query execution. Associated
with each call to FETCH_ROWS is an input rowset (a collection of rows from the underlying table or
query) that the PTF is expected to process.
The FETCH_ROWS procedure uses the PTF server API (either DBMS_TF.Get_Row_Set or
DBMS_TF.Get_Col) to read the input rowset. Typically, this rowset is used to produce an output
rowset (a collection of rows to be returned as output), which is then written back to the RDBMS
using the PTF server API (either DBMS_TF.Put_Row_Set or DBMS_TF.Put_Col).



How Does RDBMS Compile and Execute PTFs?

• The PTF query is a single cursor.


• Only columns of interest are passed to the PTF. The PTF row-source does the “row
stitching”.
1. The PTF row-source keeps the input row-set in memory and passes the relevant
columns to the PTF.
2. The PTF processes the input columns to produce the output columns.
3. The PTF row-source finally “stitches” them to produce the new rows that are returned to
the parent row-source.



• Predicates, projections, or partitioning are pushed into the underlying table/query (where
semantically possible).
• Parallelism is based on type of PTF and query-specified partitioning (if any).


Conceptually, a TS PTF operates on an entire table (or a logical partition of a table), while an RS
PTF can produce a new row exclusively from a single input row.
A TS PTF is designed to be used for implementing analytic functions that can act like aggregation
functions.
The query can optionally partition the TS PTF input and optionally order it. This is not allowed for an
RS PTF.



Exact Top-N Query Processing: SQL Row-Limiting Clause
12c

In data analysis applications, users need to find the most frequent values.
Example: Find the top three job titles contributing to the most payroll expenses.
• Use a row-limiting clause to limit the rows returned by the query.
– Specify the number of rows to return with the FETCH FIRST/NEXT keywords.
– Specify the percentage of rows to return with the PERCENT keyword.

SQL> SELECT job, SUM(sal) sum_sal FROM emp


GROUP BY job
ORDER BY sum_sal DESC



FETCH FIRST 3 ROWS ONLY;

• Queries that order data and limit row output are referred to as Top-N queries.
• The Top-N queries return exact results.
• FETCH FIRST n ROWS ONLY applies to global ordering.


In Oracle Database 12c, SQL SELECT syntax has been enhanced to allow a row-limiting clause,
which limits the number of rows that are returned in the result set.
Limiting the number of rows returned can be valuable for reporting, analysis, data browsing, and
other tasks. Queries that order data and then limit row output are widely used and are often referred
to as Top-N queries.
You can specify the number of rows or percentage of rows to return with the FETCH FIRST/NEXT
keywords. You can use the OFFSET keyword to specify that the returned rows begin with a row after
the first row of the full result set.
You specify the row-limiting clause in the SQL SELECT statement by placing it after the ORDER BY
clause. Note that an ORDER BY clause is not required.
• OFFSET: Use this clause to specify the number of rows to skip before row limiting begins.
The value for offset must be a number. If you specify a negative number, offset is treated as
0. If you specify NULL or a number greater than or equal to the number of rows that are
returned by the query, 0 rows are returned.
• ROW | ROWS: Use these keywords interchangeably. They are provided for semantic clarity.
• FETCH: Use this clause to specify the number of rows or percentage of rows to return.
- FIRST | NEXT: Use these keywords interchangeably. They are provided for
semantic clarity.
- row_count | percent PERCENT: Use row_count to specify the number of rows
to return. Use percent PERCENT to specify the percentage of the total number of
selected rows to return. The value for percent must be a number.
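As an illustration of the clauses described above, the following queries run against the standard EMP demo table (a sketch; the row counts are illustrative):

```sql
-- Skip the 3 highest-paid employees, then return the next 3
SELECT ename, sal
FROM   emp
ORDER  BY sal DESC
OFFSET 3 ROWS FETCH NEXT 3 ROWS ONLY;

-- Return the top 10 percent of rows by salary
SELECT ename, sal
FROM   emp
ORDER  BY sal DESC
FETCH FIRST 10 PERCENT ROWS ONLY;
```

Combining OFFSET with FETCH NEXT in this way is the usual pattern for paging through an ordered result set.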



Exact Top-N Query Processing: Rank Window Function
12c

Example: Find the top three job titles contributing to the most payroll expenses within each
department.
• Use a rank window function and a nested query.
SQL> SELECT deptno, job, sum_sal
FROM (SELECT deptno, job, SUM(sal) sum_sal, RANK() OVER
(PARTITION BY deptno ORDER BY sum(sal) DESC)
sum_sal_rank
FROM emp
GROUP BY deptno, job)
WHERE sum_sal_rank <= 3;



• The rank window queries return exact results.
• The rank window function applies to groups of rows
ordering.


Analytic functions compute an aggregate value based on a group of rows. They differ from aggregate
functions in that they return multiple rows for each group.
The group of rows is called a window and is defined by the analytic clause. For each row, a sliding
window of rows is defined. The window determines the range of rows used to perform the
calculations for the current row. Window sizes can be based on either a physical number of rows or
a logical interval such as time.
Analytic functions are the last set of operations performed in a query except for the final ORDER BY
clause. All joins and all WHERE, GROUP BY, and HAVING clauses are completed before the analytic
functions are processed. Therefore, analytic functions can appear only in the select list or ORDER BY
clause.
Use the PARTITION BY clause to partition the query result set into groups based on one or more
values. If you omit this clause, then the function treats all rows of the query result set as a single
group.
You can specify multiple analytic functions in the same query, each with the same or different
PARTITION BY keys.
Use the ORDER BY clause to specify how data is ordered within a partition.




Approximate Top-N Query Processing


12c

When aggregation functions and analytic functions sort large volumes of data, exact Top-N queries
require large amounts of memory and are time consuming.
• Approximate query processing is much faster.
• It is useful for situations where a tolerable amount of error is acceptable.
• APPROX_FOR_AGGREGATION = true: Automatically replaces exact query processing for
aggregation queries with approximate query processing.
• APPROX_FOR_COUNT_DISTINCT = true: Automatically replaces COUNT(DISTINCT expr) queries
with APPROX_COUNT_DISTINCT queries.



• APPROX_FOR_PERCENTILE = NONE | PERCENTILE_CONT | PERCENTILE_CONT DETERMINISTIC |
PERCENTILE_DISC | PERCENTILE_DISC DETERMINISTIC | ALL | ALL DETERMINISTIC

• APPROX_FOR_PERCENTILE converts exact percentile functions to their approximate


percentile function counterparts.


The APPROX_FOR_AGGREGATION initialization parameter replaces exact query processing for
aggregation queries with approximate query processing. The default value is FALSE, which means
that query processing is exact. Setting it to TRUE is equivalent to setting
APPROX_FOR_COUNT_DISTINCT to TRUE and APPROX_FOR_PERCENTILE to ALL.
The APPROX_FOR_COUNT_DISTINCT initialization parameter automatically replaces COUNT
(DISTINCT expr) queries with APPROX_COUNT_DISTINCT queries. Query results for
APPROX_COUNT_DISTINCT queries are returned faster than the equivalent COUNT (DISTINCT
expr) queries: a tolerable amount of error is acceptable in order to obtain faster query results.
APPROX_FOR_PERCENTILE converts exact percentile functions to their approximate percentile
function counterparts. Approximate percentile function queries are faster than their exact percentile
function query counterparts, so they can be useful in situations where a tolerable amount of error is
acceptable in order to obtain faster query results.
If you set a 10053 event for a COUNT(DISTINCT) query and the APPROX_FOR_AGGREGATION
initialization parameter to TRUE, the trace file would display the “transformed final query”:
*******************************************
Approximate Aggregate Transformation (AAT)
*******************************************
AAT: transformed final query
******* UNPARSED QUERY IS *******
SELECT APPROX_COUNT_DISTINCT("EMP"."DEPTNO") "COUNT(DISTINCTDEPTNO)" FROM
"SCOTT"."EMP" "EMP"
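To reproduce this behavior for a single session, the rewrite can be switched on before running the query, as in this sketch (the parameter is session-modifiable):

```sql
ALTER SESSION SET approx_for_aggregation = TRUE;

-- Transparently rewritten to APPROX_COUNT_DISTINCT(deptno)
SELECT COUNT(DISTINCT deptno) FROM emp;

-- Restore exact processing for the session
ALTER SESSION SET approx_for_aggregation = FALSE;
```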



Approximate Top-N Query Processing
18c

• Use the new approximate functions, APPROX_COUNT and APPROX_SUM, to replace their exact
counterparts, COUNT and SUM.
• For each APPROX_COUNT / APPROX_SUM that appears in the SELECT list, a
corresponding APPROX_RANK function in the HAVING clause is required.

SQL> SELECT job, 0.9 * APPROX_SUM(sal)


FROM emp
GROUP BY job
HAVING APPROX_RANK(ORDER BY APPROX_SUM(sal) desc) <= 3;


Oracle Database 18c introduces the APPROX_COUNT and APPROX_SUM approximate functions to
replace their exact counterparts, COUNT and SUM, in approximate Top-N queries.
To use either approximate function in a SELECT list, there must be a corresponding APPROX_RANK
function in the HAVING clause.
In the example in the slide, the basic syntax returns the Top-N rows globally.
Compared to the basic syntax for exact aggregate queries, the extended syntax lifts the following
restriction: it is legal to apply an expression on top of an approximate function.
SELECT job, 0.9 * APPROX_SUM(sal) sum_sal
FROM emp
GROUP BY job
HAVING APPROX_RANK(ORDER BY APPROX_SUM(sal) DESC) <= 10;



Approximate Top-N Query Processing: Example 1

• Find the jobs that are among the top 10 in terms of total salary per department.
SQL> SELECT deptno, job, APPROX_SUM(sal),
APPROX_RANK(partition by deptno ORDER BY APPROX_SUM(sal) desc) Rk
FROM emp
GROUP BY deptno, job
HAVING APPROX_RANK(partition by deptno ORDER BY APPROX_SUM(sal) desc)
<= 10;


If you want to partition the table and return the Top-N rows per partition (called “intra-partition
Top-N”), the basic syntax must use an inline view and a rank window function.
SELECT deptno, job, sum_sal
FROM (SELECT deptno, job, sum(sal) sum_sal,
RANK() OVER (PARTITION BY deptno ORDER BY sum(sal) DESC)
sum_sal_rank
FROM emp
GROUP BY deptno, job)
WHERE sum_sal_rank <= 10;
There is an extended syntax for approximate Top-N queries to handle such cases as in the example
in the slide. The detailed processing is as follows:
1. GROUP BY expr_1, …, expr_j first. Each output row of GROUP BY contains the group-by keys
and the approximate aggregated values.
2. Partition the GROUP BY outputs. The partition key is specified in APPROX_RANK(PARTITION
BY … ORDER BY … DESC). Within each partition, the approximate aggregated values are
ranked by the specified order. Each output of PARTITION BY contains the group by keys,
the approximate aggregated values, and their corresponding ranks. The PARTITION BY
keys must be a subset of the GROUP BY keys. When PARTITION BY keys are equivalent to
GROUP BY keys, the output PARTITION BY is simply the output of GROUP BY
concatenated with ranks that are all one because there is only one row per partition.
3. Filters are applied on the PARTITION BY outputs. A HAVING clause is mandatory. The
HAVING clause can contain only ANDed predicates. Each predicate must be in the format of
APPROX_RANK(PARTITION BY … ORDER BY … DESC)<= N.



Approximate Top-N Query Processing: Example 2

• Find the jobs that are among the top 2 in terms of total salary, and among the top 3 in
terms of number of employees holding the job titles per department.
SQL> SELECT deptno, job, APPROX_SUM(sal), APPROX_COUNT(*)
FROM emp GROUP BY deptno, job
HAVING APPROX_RANK(partition by deptno order by APPROX_SUM(sal) desc)
<= 2
AND APPROX_RANK(partition by deptno order by APPROX_COUNT(*) desc) <= 3;

• There can be multiple approximate functions in the SELECT list. For each approximate
function, there must be a corresponding predicate in the HAVING clause.


To use both approximate functions in a SELECT list, you must define a HAVING clause for the first
approximate function and an ANDed predicate for the other approximate function.
The query asks to return the jobs that are among the top 2 in terms of total salary, and among the
top 3 in terms of number of employees holding the job titles per department. A job title ‘CEO’ might
satisfy the first filter condition (CEO has the highest pay) but not satisfy the second filter condition
(one CEO in a company only).
The corresponding execution plan shows:
-----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 8 | 120 | 4 (25)| 00:00:01 |
|* 1 | SORT GROUP BY APPROX| | 8 | 120 | 4 (25)| 00:00:01 |
| 2 | TABLE ACCESS FULL | EMP | 42 | 630 | 3 (0)| 00:00:01 |
-----------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(APPROX_RANK(PARTITION BY "DEPTNO" ORDER BY
APPROX_SUM("SAL") DESC)<=2 AND APPROX_RANK(PARTITION BY "DEPTNO" ORDER
BY APPROX_COUNT(0) DESC)<=3)
APPROX is the option of the SORT GROUP BY row source to indicate that the row source contains
approximate aggregates.



Approximate Top-N Query Processing: Example 3

• Find the jobs that are among the top 3 in terms of total salary, and among the top 2 in
terms of number of employees holding the job titles per department.
SQL> SELECT deptno, job, APPROX_SUM(sal), APPROX_COUNT(*)
FROM emp
GROUP BY deptno, job
HAVING APPROX_RANK(partition by deptno order by APPROX_SUM(sal) desc)
<= 3
AND APPROX_RANK(partition by deptno order by APPROX_COUNT(*) desc)
<= 2;


The query asks to return the jobs that are among the top 3 in terms of total salary, and among the
top 2 in terms of number of employees holding the job titles per department.
Comparing the query in the slide with the query in the previous slide, the results are different. Both
filter conditions must be satisfied to produce appropriate groups.



Approximate Top-N Query Processing: Example 4

• Report the accuracy of the approximate aggregate by using the MAX_ERROR attribute.

SQL> SELECT deptno, job, APPROX_SUM(sal) sum_sal,


APPROX_SUM(sal, 'MAX_ERROR') sum_sal_err
FROM emp
GROUP BY deptno, job
HAVING APPROX_RANK(partition by deptno order by APPROX_SUM(sal) desc)
<= 2;


You can get a report of the accuracy of the approximate aggregate.


The report displays the error bound defined as the maximum possible value of estimated aggregated
value minus actual aggregated value.
The first output row in the example in the slide indicates that the estimated sum of manager’s salary
is 11350 and the maximum error is 10. In other words, the actual sum falls between 11340 and
11350. The estimated value is always an over-estimate.



Summary

In this lesson, you should have learned how to:


• Query inlined external tables
• Manage in-memory external tables
• Use the new query capabilities of the Analytic view
• Create and use polymorphic table functions
• Use new functions for approximate Top-N queries




Practice 7: Overview

• 7-1: Querying Inlined External Tables


• 7-2: Populating External Tables in an In-Memory Column Store
• 7-3: Using Hierarchy-Based Predicates and Calculated Measures on Analytic Views
• 7-4: Using Polymorphic Table Functions





Sharding Enhancements

Objectives

After completing this lesson, you should be able to describe:


• User-defined sharding method
• Support for PDBs as shards
• Oracle GoldenGate enhancements for Oracle Sharding support
• Querying system objects across shards
• Setting multi-shard query data consistency level
• Sharding support for JSON, LOBs, and spatial objects



• Improved multi-shard query enhancements
• Where to find Oracle Sharding documentation in Oracle Database 18c




System-Managed and Composite Sharding Methods
12c

Only two methods are supported in 12.2:


System-Managed Sharding:
Data is automatically distributed across
shards using partitioning by consistent hash.

Composite Sharding:
Data is first partitioned by list or range across
multiple shardspaces, and then further
partitioned by consistent hash across multiple



shards in each shardspace.


Introduced in Oracle Database 12c release 12.2.0.1, Oracle Sharding provided two methods of
sharding data:
• System-managed sharding
• Composite Sharding
System-managed sharding is a sharding method that does not require the user to specify a
mapping of data to shards. Data is automatically distributed across shards using partitioning by
consistent hash. The partitioning algorithm evenly and randomly distributes data across shards. The
distribution used in system-managed sharding is intended to eliminate hot spots and provide uniform
performance across shards. Oracle Sharding automatically maintains balanced distribution of data
when shards are added to or removed from an SDB.
Consistent hash is a partitioning strategy that is commonly used in scalable distributed systems. It is
different from traditional hash partitioning. With traditional hashing, the bucket number is calculated
as HF(key) % N, where HF is a hash function and N is the number of buckets. This approach works
well if N is constant, but requires reshuffling of all data when N changes. More advanced algorithms,
such as linear hashing, do not require rehashing of the entire table to add a hash bucket, but they
impose restrictions on the number of buckets (such as it can only be a power of 2), and on the order
in which the buckets can be split.
The implementation of consistent hashing that is used in Oracle Sharding avoids these limitations by
dividing the possible range of values of the hash function (for example, from 0 to 2^32) into a set of N
adjacent intervals, and assigning each interval to a chunk. In this example, the SDB contains 1024
chunks, and each chunk gets assigned a range of 2^22 hash values. Therefore, partitioning by
consistent hash is essentially partitioning by the range of hash values.
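As a sketch of the syntax this strategy implies, a system-managed sharded table is declared with PARTITION BY CONSISTENT HASH and a tablespace set (the table and tablespace set names here are illustrative, not from the course material):

```sql
-- Rows are spread across chunks by a hash of the sharding key (cust_id)
CREATE SHARDED TABLE customers
( cust_id  NUMBER NOT NULL,
  name     VARCHAR2(50),
  region   VARCHAR2(20),
  CONSTRAINT pk_customers PRIMARY KEY (cust_id)
)
PARTITION BY CONSISTENT HASH (cust_id)
PARTITIONS AUTO
TABLESPACE SET ts1;
```

PARTITIONS AUTO lets the system create one partition per chunk, so chunk migration after adding a shard needs no DDL from the user.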



Composite sharding is a method that allows you to create multiple shardspaces for different
subsets of data in a table partitioned by consistent hash. A shardspace is a set of shards that
stores data that corresponds to a range or list of key values. System-managed sharding does
not give you any control over the assignment of data to shards. Load balancing is maintained
across shards in each shardspace.
When sharding by consistent hash on a primary key, there may be requirements such as: to
differentiate subsets of data within an SDB in order to store them in different geographic
locations, to allocate different hardware resources to them, or to configure high availability
and disaster recovery differently. Usually this differentiation is done based on the value of
another (non-primary) column, for example, customer location or a class of service.
With composite sharding, data is first partitioned by list or range across multiple shardspaces,
and then further partitioned by consistent hash across multiple shards in each shardspace.
The two levels of sharding make it possible to automatically maintain a balanced distribution



of data across shards in each shardspace and, at the same time, partition data across
shardspaces. The example in the slide shows two tablespace sets: tbs1 at the top and tbs2
at the bottom.
• Tablespace set tbs1 is labeled “Shardspace for GOLD customers - shspace1” and
contains three shards, each of which contains a range of tablespaces and their
respective partitions.
• Tablespace set tbs2 is labeled “Shardspace for SILVER customers - shspace2” and
contains four shards, each of which contains a range of tablespaces and their
respective partitions.



User-Defined Sharding Method
18c

This method enables users to define LIST- or RANGE-based sharding.



SQL> CREATE SHARDED TABLE accounts (id NUMBER, account_nb NUMBER, cust_id NUMBER,
branch_id NUMBER, state VARCHAR(2), status VARCHAR2(1))
PARTITION BY LIST (state)
( PARTITION p_northwest VALUES ('OR', 'WA') TABLESPACE ts1,

PARTITION p_northeast VALUES ('NY', 'VM', 'NJ') TABLESPACE ts5,
PARTITION p_southeast VALUES ('FL', 'GA') TABLESPACE ts6 );


Oracle Database 18c introduces the user-defined sharding method that lets you explicitly specify the
mapping of data to individual shards. It is used when, because of performance, regulatory, or other
reasons, certain data needs to be stored on a particular shard, and the administrator must have full
control over moving data between shards.
Another advantage of user-defined sharding is that, in case of planned or unplanned outage of a
shard, you know exactly what data is not available. The disadvantage of user-defined sharding is the
need for the database administrator to monitor and maintain balanced distribution of data and
workload across shards.
With user-defined sharding, a sharded table can be partitioned by range or list. There is no
tablespace set defined for user-defined sharding. Each tablespace has to be created individually and
explicitly associated with a shardspace. A shardspace is a set of shards that store data that
corresponds to a range or list of key values.
As with system-managed sharding, tablespaces created for user-defined sharding are assigned to
chunks. However, no chunk migration is automatically started when a shard is added to the SDB.
The user needs to execute the MOVE CHUNK command for each chunk that needs to be migrated.
GDSCTL CREATE SHARDCATALOG supports user-defined sharding with the value USER in the
-sharding option.
The SPLIT CHUNK command, which is used to split a chunk in the middle of the hash range for
system-managed sharding, is not supported for user-defined sharding. You must use the ALTER
TABLE SPLIT PARTITION statement to split a chunk.
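For example, splitting one state value out of a list partition of the accounts table shown earlier could look like the following sketch (the new partition and tablespace names are illustrative):

```sql
-- Move the 'NY' rows of p_northeast into their own partition (and chunk);
-- the remaining list values stay in the second partition
ALTER TABLE accounts SPLIT PARTITION p_northeast VALUES ('NY')
  INTO (PARTITION p_ny        TABLESPACE ts5,
        PARTITION p_northeast TABLESPACE ts7);
```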



In user-defined sharding, a shardspace consists of a shard, or a set of fully replicated shards.
For ease of management, regions can be defined for locations of replicas. Replication can be
done with Oracle Data Guard; Oracle GoldenGate does not support user-defined sharding.



Support for PDBs as Shards

12c
Sharded databases must consist of a shard catalog and shards that can be:
• Single-instance databases
• Oracle RAC–enabled stand-alone databases
• CDBs are not supported.

18c
• A shard and shard catalog can be a single PDB in a CDB.
• GDSCTL ADD SHARD command includes the -cdb option.



• New GDSCTL commands: ADD CDB, MODIFY CDB, REMOVE CDB, CONFIG CDB

GDSCTL> ADD CDB -connect db11 -pwd GSMUSER_password

GDSCTL> ADD SHARD -cdb db11 -connect connect_string -shardgroup shgrp1 -deploy_as active_standby -pwd GSMUSER_password


To support consolidation of databases on under-utilized hardware, for ease of management, or to
meet geographical business requirements, you can use single PDBs in CDBs as shard databases.
The GDSCTL command ADD SHARD is extended with the -cdb option, and new commands ADD
CDB, MODIFY CDB, CONFIG CDB, and REMOVE CDB are implemented so that Oracle Sharding can
support a multitenant architecture. The GDSCTL command ADD CDB is used to add a pre-created
CDB to the shard catalog. The GDSCTL ADD SHARD command, extended with the -cdb option in
18c, is used to add shards, which are PDBs contained within a CDB to the sharded database upon
deployment.
• Use the MODIFY CDB command to change the metadata of the CDB in the shard catalog.
• Use the REMOVE CDB command to remove a CDB from the shard catalog. Removing a CDB
does not destroy it.
• Use the CONFIG CDB command to display information about the CDB in the shard catalog.
Oracle Data Guard supports replication only at the CDB level. The existing sharding architecture
allows replicated copies of the sharded data for high availability, and you can optionally configure
and use Data Guard to create and maintain these copies. Data Guard does not currently support
replication at the PDB level; it can only replicate an entire container.
Information about migrating single-instance shards to PDBs can be found in the Oracle Database
Using Oracle Sharding guide for 18c in Oracle Help Center.



Improved Oracle GoldenGate Support

12c
• Split chunks not supported

18c
• Split chunk support
• Automatic CDR support of tables with unique indexes/constraints




Enhancements in Oracle GoldenGate 13c provided support for Oracle Sharding high availability,
but with some limitations. In 18c, GoldenGate supports the GDSCTL SPLIT CHUNK command.
Automatic conflict detection and resolution (Auto CDR) was introduced in Oracle Database 12.2
(and Oracle GoldenGate 12.3) to automate the conflict detection and resolution configuration in
active-active GoldenGate replication setups; however, Auto CDR was allowed only on tables with
primary keys. In Oracle Database 18c, this restriction is relaxed, and Auto CDR is supported on
tables that have unique keys or indexes but no primary key.
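As a hedged sketch of how this is used (the SH.CUSTOMERS schema and table names are assumptions for illustration), Auto CDR is typically configured per table with the DBMS_GOLDENGATE_ADM package:

```sql
-- Illustrative only: enable automatic conflict detection and resolution
-- for a table that has a unique key but no primary key (allowed in 18c).
BEGIN
  DBMS_GOLDENGATE_ADM.ADD_AUTO_CDR(
    schema_name => 'SH',
    table_name  => 'CUSTOMERS');
END;
/
```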



Query System Objects Across Shards

12c
• Shards managed individually
• No aggregate views from all shards

18c
SHARDS() clause and shard_id
SQL> SELECT sql_text, shard_id FROM SHARDS(sys.v$sql) a WHERE a.sql_id = '1234';

• Query performance views


SQL> SELECT shard_id, callspersec FROM SHARDS(v$servicemetric)
WHERE service_name LIKE 'oltp%' AND group_id = 10;



• Statistics collection
SQL> SELECT table_name, partition_name, blocks, num_rows
FROM SHARDS(dba_tab_partitions) p WHERE p.table_owner = :1;


In Oracle Database 12c Release 2, to perform maintenance operations, you had to go to each
database individually. Easy, centralized diagnostics collection from all of the shards was not
available.
With Oracle Database 18c, you can use the SHARDS() clause to query Oracle-supplied tables to
gather performance, diagnostic, and audit data from V$ views and DBA_* views. The shard catalog
database can be used as the entry point for centralized diagnostic operations using the SQL
SHARDS() clause. The SHARDS() clause allows you to query the same Oracle supplied objects,
such as V$, DBA/USER/ALL views and dictionary objects and tables, on all of the shards and return
the aggregated results.
As shown in the examples, an object in the FROM part of the SELECT statement is wrapped in the
SHARDS() clause to specify that this is not a query to a local object, but to objects on all shards in
the sharded database configuration. A virtual column called SHARD_ID is automatically added to a
SHARDS()-wrapped object during execution of a multi-shard query to indicate the source of every
row in the result. The same column can be used in a predicate to prune the query to specific shards.
A query with the SHARDS() clause can be executed only on the shard catalog database.
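For instance, a sketch of pruning with the virtual column (the literal value 1 is an assumption; actual SHARD_ID values depend on your configuration):

```sql
-- Run on the shard catalog: the SHARD_ID predicate restricts this
-- multi-shard query to a single shard instead of fanning out to all.
SELECT shard_id, sql_id, executions
FROM   SHARDS(v$sql)
WHERE  shard_id = 1;
```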



Consistency Levels for Multi-Shard Queries

12c
Multi-shard queries always used SCN synchronization and were resource intensive

18c
New initialization parameter: MULTISHARD_QUERY_DATA_CONSISTENCY

SQL> ALTER SYSTEM SET MULTISHARD_QUERY_DATA_CONSISTENCY = delayed_standby_allowed
     SCOPE=SPFILE;




You may want to specify different data consistency levels for some multi-shard queries. For
example, it may be desirable for some queries to avoid the cost of SCN synchronization across
multiple shards, which could be globally distributed. Another use case is when you use standbys
for replication and slightly stale data is acceptable for multi-shard queries; the results can then
be fetched from the primary or its standbys.
A new user-visible database parameter, MULTISHARD_QUERY_DATA_CONSISTENCY, has been
added in Oracle Database 18c to specify the consistency level for multi-shard queries. The
parameter can have one of the following values:
• strong (default): SCN synchronization is performed across all shards, and data is
consistent across all shards. This setting provides global consistent read capability.
• shard_local: SCN synchronization is not performed across all shards. Data is consistent
within each shard. This setting provides the most current data.
• delayed_standby_allowed: SCN synchronization is not performed across all shards.
Data is consistent within each shard. This setting allows data to be fetched from Data Guard
standby databases when possible (for example, depending on load balancing), and may
return stale data from standby databases.
This parameter can be set either at the system level or at the session level.
See the Oracle Database Reference Guide for more information about
MULTISHARD_QUERY_DATA_CONSISTENCY usage.
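Because the parameter is also session-settable, a report that tolerates stale data can opt in for a single session. A minimal sketch:

```sql
-- Illustrative only: allow this session's multi-shard queries to be
-- served from Data Guard standbys (may return slightly stale data).
ALTER SESSION SET MULTISHARD_QUERY_DATA_CONSISTENCY = 'DELAYED_STANDBY_ALLOWED';
```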



Sharding Support for JSON, LOBs, and Spatial Objects
18c

System-managed sharded databases:


• Create tablespace set for LOBs

SQL> CREATE TABLESPACE SET lobtss1;

• Include tablespace set for LOBs in parent table CREATE

SQL> CREATE SHARDED TABLE customers (CustId VARCHAR2(60) NOT NULL, …
       image BLOB,
       CONSTRAINT pk_customers PRIMARY KEY (CustId),
       CONSTRAINT json_customers CHECK (CustProfile IS JSON))
     TABLESPACE SET TSP_SET_1
     LOB(image) STORE AS (TABLESPACE SET LOBTSS1)
     PARTITION BY CONSISTENT HASH (CustId) PARTITIONS AUTO;


LOB is a widely used, first-class data type in Oracle Database. Release 18c enables the use of
LOBs, JSON, and spatial objects in an Oracle Sharding environment, which is useful for applications
that use these data types where storage in sharded tables would facilitate business requirements.
JSON operators that generate temporary LOBs, large JSON documents (those that require LOB
storage), spatial objects, indexes and operators, and persistent LOBs can now be used in an
Oracle Sharding environment.
In a system-managed sharded database, you must specify a tablespace set for the LOBs, and then
include it in the CREATE SHARDED TABLE statement for the parent table as shown in the examples
here.



Sharding Support for JSON, LOBs, and Spatial Objects
18c

Composite sharded databases:


• Create tablespace sets for LOBs
SQL> CREATE TABLESPACE SET LOBTSS1 IN SHARDSPACE cust_america ... ;
SQL> CREATE TABLESPACE SET LOBTSS2 IN SHARDSPACE cust_europe ... ;

• Include tablespace sets for LOBs in parent table CREATE

SQL> CREATE SHARDED TABLE customers (CustId VARCHAR2(60) NOT NULL, …
       image BLOB,
       CONSTRAINT pk_customers PRIMARY KEY (CustId),
       CONSTRAINT json_customers CHECK (CustProfile IS JSON))
     PARTITIONSET BY LIST (GEO) PARTITION BY CONSISTENT HASH (CustId)
     PARTITIONS AUTO (PARTITIONSET america VALUES ('AMERICA')
       TABLESPACE SET tsp_set_1
       LOB(image) STORE AS (TABLESPACE SET LOBTSS1),
     PARTITIONSET europe VALUES ('EUROPE')
       TABLESPACE SET tsp_set_2
       LOB(image) STORE AS (TABLESPACE SET LOBTSS2));


In a composite sharded database, you must specify a tablespace set for each shardspace for the
LOBs, and then include them in the CREATE SHARDED TABLE statement for the parent table as
shown in the examples in the slide.



Sharding Support for JSON, LOBs, and Spatial Objects
18c

User-defined sharded databases:


• Create tablespaces for LOBs
SQL> CREATE TABLESPACE lobts1 … IN SHARDSPACE shspace1;
SQL> CREATE TABLESPACE lobts2 … IN SHARDSPACE shspace2;

• Include tablespaces for LOBs in parent table CREATE

SQL> CREATE SHARDED TABLE customers (CustId VARCHAR2(60) NOT NULL, …
       image BLOB,
       CONSTRAINT pk_customers PRIMARY KEY (CustId),
       CONSTRAINT json_customers CHECK (CustProfile IS JSON))
     PARTITION BY RANGE (CustId)
     ( PARTITION ck1 VALUES LESS THAN ('m') TABLESPACE ck1_tsp
         LOB(image) STORE AS (TABLESPACE LOBTS1),
       PARTITION ck2 VALUES LESS THAN (MAXVALUE) TABLESPACE ck2_tsp
         LOB(image) STORE AS (TABLESPACE LOBTS2));


In a user-defined sharded database, you must specify a tablespace, not a tablespace set, for each
shardspace for the LOBs, and then include them in the CREATE SHARDED TABLE statement for the
parent table as shown in the examples in the slide.



Improved Multi-Shard Query Support

12c
• There are restrictions on query shapes.
• Only system-managed sharding is supported.

18c
• All query shapes supported
• System-managed, user-defined, and composite sharding methods supported
• Centralized execution plan display available
• Oracle supplied objects in queries



• Multi-column sharding keys supported
• SET operators supported


In Oracle Database 12.2, there were several restrictions on the query shapes that could be used on
queries over multiple shards, and multi-shard queries were supported only in sharded databases
using the system-managed sharding method.
The restrictions lifted in Oracle Database 18c are:
• Support for composite and user-defined sharding
• Multi-shard query execution plan display
• Support for all query shapes, such as views, subqueries, and joins on non-sharding columns
• Support for Oracle-supplied tables/views (using the SHARDS() clause and SHARD_ID) and
PL/SQL functions
• Support for multi-column sharding keys
• Use of SET operators
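To illustrate one of the newly supported shapes, here is a sketch of a cross-shard join and aggregation, using the Customers/Orders table family defined later in this guide (the date filter is an assumption for illustration):

```sql
-- Illustrative multi-shard query, run on the shard catalog (the query
-- coordinator): a join on the sharding key plus a GROUP BY that spans
-- all shards.
SELECT c.Name, COUNT(*) AS order_cnt
FROM   Customers c JOIN Orders o ON o.CustNo = c.CustNo
WHERE  o.OrderDate > DATE '2018-01-01'
GROUP BY c.Name;
```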



Oracle Sharding Documentation

12c

Oracle Sharding documentation is contained in the Oracle Database Administrator’s Guide, Part
VII, “Sharded Database Management.”
18c

Oracle Sharding documentation has its own book, Oracle Database Using Oracle Sharding,
included in the Oracle Database documentation library in Oracle Help Center.




In Oracle Database 18c, the Oracle Sharding documentation has been moved from Part VII of the
Oracle Database Administrator’s Guide to its own new book, called Oracle Database Using Oracle
Sharding, in the Oracle Database documentation library in Oracle Help Center.



Summary

In this lesson, you should have learned how to describe:


• User-defined sharding method
• Support for PDBs as shards
• Oracle GoldenGate enhancements for Oracle Sharding support
• Querying system objects across shards
• Setting multi-shard query data consistency level
• Sharding support for JSON, LOBs, and spatial objects



• Multi-shard query enhancements
• Where to find Oracle Sharding documentation in Oracle Database 18c



Practice 8: Overview

There are no practices for this lesson.





A

Database Sharding

Objectives

After completing this lesson, you should be able to:


• Describe the challenges and benefits of a sharded database
• Describe sharded database architecture
• Configure a sharded database (SDB)




To get detailed information about how to perform any of the operations covered in this lesson, refer to
the following guides in the Oracle documentation:
• Oracle Database SQL Language Reference 12c Release 2 (12.2)
• Oracle Database Administrator’s Guide 12c Release 2 (12.2), Part VII, “Sharded Database
Management”



What Is Database Sharding?

• Shared-nothing architecture for scalability and availability


• Horizontally partitioned data across independent databases
• Loosely coupled data tier without clusterware

[Figure: an unsharded table stored in one database, compared with the same table horizontally
partitioned across three databases on Servers A, B, and C, which together form a sharded
database (SDB)]


Sharding is a data tier architecture where data is horizontally partitioned across independent
databases. Each database in such a configuration is called a shard. All shards together make up a
single logical database, which is known as a sharded database or SDB.
Horizontal partitioning involves splitting a database table across shards so that each shard contains
the table with the same columns but a different subset of rows. The diagram in the slide shows an
unsharded table on the left with the rows represented by different colors. On the right, the same table
data is shown horizontally partitioned across three shards or independent databases. Each partition
of the logical table resides in a specific shard. Such a table is referred to as a sharded table.
Sharding is a shared-nothing database architecture because shards do not share physical resources
such as CPU, memory, or storage devices. Shards are also loosely coupled in terms of software;
they do not run clusterware.
From a database administrator’s perspective, an SDB consists of multiple databases that can be
managed either collectively or individually. However, from an application developer’s perspective, an
SDB looks like a single database: the number of shards and the distribution of data across them are
completely transparent to database applications.



Sharding: Benefits

• Extreme scalability by adding shards (independent databases)


• Fault containment by eliminating single points of failure
• Global data distribution with the ability to store particular data in a specific shard
• Rolling upgrades with independent availability of shards
• Simplicity of cloud deployment with different sized shards




Sharding eliminates performance bottlenecks and makes it possible to linearly scale performance
and capacity by adding shards. Sharding is a shared-nothing architecture that eliminates single
points of failure—such as shared disks, SAN, and clusterware—and provides strong fault isolation.
The failure or slowdown of one shard does not affect the performance or availability of other shards.
Sharding enables storing particular data close to its consumers and satisfying regulatory
requirements when data must be located in a particular jurisdiction. Applying configuration changes
on one shard at a time does not affect other shards, and allows administrators to first test changes
on a small subset of data. Sharding is well suited to deployment in the cloud. Shards may be sized
as required to accommodate whatever cloud infrastructure is available and still achieve required
service levels. A sharded database (logical representation) supports up to 1,000 shards
(independent databases).



Oracle Sharding: Advantages

• Relational schemas
• Database partitioning
• ACID properties and read consistency
• SQL and other programmatic interfaces
• Complex data types
• Online schema changes
• Multicore scalability



• Advanced security
• Compression
• High availability features
• Enterprise-scale backup and recovery


Oracle Sharding provides the benefits of sharding without sacrificing the capabilities of an enterprise
RDBMS.



Application Considerations for Sharding

• Available only in new database creations


• Intended for OLTP applications that:
– Have a well-defined data model and data distribution strategy
– Have a hierarchical tree structure data model with a single root table
– Primarily access data by using a sharding key that is stable and with high cardinality
– Generally access data associated with a single value for the sharding key
– Use Oracle integrated connection pools (UCP, OCI, ODP.NET, and JDBC) to connect
to the sharded database




Oracle Sharding is for OLTP applications that are suitable for a sharded database.
Existing applications that were never intended to be sharded require some level of redesign to
achieve the benefits of a sharded architecture. In some cases, it may be as simple as providing the
sharding key; in other cases, it may be impossible to horizontally partition the data and workload as
required by a sharded database.
Many customer-facing web applications, such as e-commerce, mobile, and social media are well-
suited for sharding. Such applications have a well-defined data model and data distribution strategy
(hash, range, list, or composite) and primarily access data by using a sharding key. Examples of
sharding keys include customer_ID, account_number, and country_id. Applications also
usually require partial denormalization of data to perform well with sharding.
OLTP transactions that access data associated with a single value of the sharding key are the
primary use cases for a sharded database—for example, lookup and update of a customer’s records,
subscriber documents, financial transactions, e-commerce transactions, and so on. Because all the
rows that have the same value of the sharding key are guaranteed to be on the same shard, such
transactions are always single-shard and executed with the highest performance and provide the
highest level of consistency. Multi-shard operations are supported, but with a reduced level of
performance and consistency. Such transactions include simple aggregations, reporting, and so on,
and play a minor role in a sharded application relative to workloads dominated by single-shard OLTP
transactions.



Sharding is intended for OLTP applications that are suitable for a sharded database architecture.
Specifically:
• The data model should be a hierarchical tree structure with a single root table. Oracle
Sharding supports any number of levels within the hierarchy.
• The sharding key must be based on a column that has high cardinality; the number of unique
values in this column must be much bigger than the number of shards. Customer ID, for
example, is a good candidate for the sharding key, whereas a United States state name is
not.
• The sharding key should be very stable; its value should almost never change.
• The sharding key must be present in all the sharded tables. This allows the creation of a
family of equi-partitioned tables based on the sharding key. The sharding key must be the
leading column of the primary key of the root table.
• Joins between tables in a table family should be performed by using the sharding key.



• Composite sharding enables two levels of sharding: one by hash and another by list or range.
This is accomplished by the application providing two keys: a super sharding key and a
sharding key.
• All database requests that require high performance and fault isolation must access data
associated with only a single value of the sharding key. The application must provide the
sharding key when establishing a database connection. If this is the case, the request is
routed directly to the appropriate shard.
• Multiple requests can be executed in the same session as long as they all are related to the
same sharding key. Such transactions typically access 10s or 100s of rows. Examples of
single-shard transactions include order entry, lookup and update of a customer’s billing
record, and lookup and update of a subscriber’s documents.
• Database requests that must access data associated with multiple values of the sharding
key, or for which the value of the sharding key is unknown, are routed to the query
coordinator, which orchestrates parallel execution of the query across multiple shards.
• Applications use Oracle integrated connection pools (UCP, OCI, ODP.NET, or JDBC) to
connect to a sharded database.
The deployment of a sharded database creates all the shards and enables Oracle Data Guard or
Oracle GoldenGate. There is no solution to enable sharding on an existing Oracle database.



Components of Database Sharding

• Sharded database (SDB)


• Shards
• Global service
• Shard catalog
• Shard directors
• Connection pools
• Management tools
– GDSCTL



– EMCC 13c


Shards are independent Oracle databases that are hosted on database servers that have their own
local resources: CPU, memory, and disk. No shared storage is required across the shards. A
sharded database is a collection of shards. Shards can all be placed in one region or can be placed
in different regions. A region in the context of Oracle Sharding represents a data center or multiple
data centers that are in close network proximity. All shards of an SDB always have the same
database schema and contain the same schema objects.
A global service is an extension to the notion of a traditional database service. All the properties of
traditional database services are supported for global services. For sharded databases, additional
properties are set for global services, for example, database role, replication lag tolerance, region
affinity between clients and shards, and so on. For a read/write transactional workload, a single
global service is created to access data from any primary shard in an SDB.
The shard catalog is an enhanced Global Data Services (GDS) catalog to support Oracle Sharding.
A shard director is a specific implementation of a global service manager that acts as a regional
listener for clients that connect to an SDB, and maintains a current topology map of the SDB. Oracle
supports connection pooling in data access drivers such as OCI, JDBC, ODP.NET, and so on. In
Oracle 12c Release 2, these drivers can recognize sharding keys that are specified as part of a
connection request. The diagram in the slide shows the typical components of Oracle Sharding.



Shard Catalog

• Is an enhanced Global Data Services (GDS) catalog containing persistent sharding


configuration data
• Is used to initiate all configuration changes
• Is used for connections for all DDL commands
• Contains the master copy of all duplicated tables
• Replicates changes to duplicated tables by using materialized views
• Acts as a query coordinator to process multi-shard queries




The shard catalog is a special-purpose Oracle Database that is a persistent store for SDB
configuration data, and plays a key role in centralized management of a sharded database. All
configuration changes, such as adding and removing shards and global services, are initiated on the
shard catalog. All DDLs in an SDB are executed by connecting to the shard catalog.
The shard catalog also contains the master copy of all duplicated tables in an SDB. It uses
materialized views to automatically replicate changes to duplicated tables in all shards. The shard
catalog database also acts as a query coordinator that is used to process multi-shard queries and
queries that do not specify a sharding key.
High availability for the shard catalog can be implemented by using Oracle Data Guard. The
availability of the shard catalog has no impact on the availability of the SDB. An outage of the shard
catalog affects only the ability to perform maintenance operations or multi-shard queries during the
brief period required to complete an automatic failover to a standby shard catalog. OLTP
transactions continue to be routed and executed by the SDB, and are unaffected by a catalog
outage.
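Because the shard catalog holds the master copy of duplicated tables, a reference table is created there and propagated to the shards. A minimal sketch (the Products table and its columns are assumptions for illustration):

```sql
-- Illustrative only: DDL is issued while connected to the shard catalog;
-- the duplicated table is replicated to every shard via materialized views.
CREATE DUPLICATED TABLE Products
( ProductId NUMBER PRIMARY KEY
, Name      VARCHAR2(100)
, Price     NUMBER(10,2)
);
```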



Shard Directors

The following are the key capabilities of shard directors:


• Maintaining runtime data about SDB configuration and availability of shards
• Measuring network latency between its own and other regions
• Acting as a regional listener for clients to connect to an SDB
• Managing global services
• Performing connection load balancing




The global service manager was introduced in Oracle Database 12c to route connections based on
database role, load, replication lag, and locality. In support of Oracle Sharding, global service
managers have been enhanced to support the routing of connections based on the location of data.
A global service manager, in the context of Oracle Sharding, is known as a shard director.
A shard director is a specific implementation of a global service manager that acts as a regional
listener for clients that connect to an SDB, and maintains a current topology map of the SDB. Based
on the sharding key that is passed during a connection request, it routes the connections to the
appropriate shard.
For a typical SDB, a set of shard directors is installed on dedicated low-end commodity servers in
each region. Multiple shard directors should be deployed for high availability. In Oracle Database
12c Release 2, up to five shard directors can be deployed in a given region.



Complete Deployment of a System-Managed SDB

[Figure: a system-managed SDB deployed across two regions (Availability_Domain1 and
Availability_Domain2). The first region contains shard directors Shdir1,2, the primary shard
catalog shardcat, and shardgroup shgrp1 of primary shards; the second region contains shard
directors Shdir3,4, the standby catalog shardcat_stdby, and shardgroup shgrp2 of HA standby
shards. Clients connect through connection pools, and Data Guard fast-start failover links the
two regions.]


Oracle Sharding is built on the Global Data Services (GDS) architecture. GDS is the Oracle
scalability, availability, and manageability framework for multidatabase environments. GDS presents
a multi-database configuration to database clients as a single logical database by transparently
providing failover, load balancing, and centralized management for database services.
GDS routes a client request to an appropriate database based on availability, load, network latency,
replication lag, and other parameters. In Oracle Database 12c Release 1, GDS supports only fully
replicated databases: it assumes that when a global database service is enabled on multiple
databases, all of them contain a full set of data provided by the service.
Oracle Database 12c Release 2 extends the concept of a GDS pool to a Sharded GDS pool. Unlike
the regular GDS pool that contains a set of fully replicated databases, the sharded GDS pool
contains all shards of an SDB and their replicas. For database clients, the sharded GDS pool creates
an illusion of a single sharded database, the same way as the regular GDS pool creates an illusion
of a single non-sharded database.
The diagram in the slide illustrates a typical GDS architecture that has two data centers (APAC,
EMEA) and two sets of replicated databases (SALES, HR). The GDS catalog is using Oracle Data
Guard between the two regions for high availability. The SALES database is replicated with Active
Data Guard. The HR database is replicated with Oracle GoldenGate.



Creating Sharded Tables

• Use a sharding key (partition key) to distribute partitions across shards at the tablespace
level.
• The NUMBER, INTEGER, SMALLINT, RAW, (N)VARCHAR, (N)CHAR, DATE, and
TIMESTAMP data types are supported for the sharding key.

SQL> CREATE SHARDED TABLE customers
     ( CustNo  NUMBER NOT NULL
     , Name    VARCHAR2(50)
     , Address VARCHAR2(250)
     , CONSTRAINT RootPK PRIMARY KEY(CustNo) )
     PARTITION BY CONSISTENT HASH (CustNo)
     PARTITIONS AUTO TABLESPACE SET ts1;


A sharded table is a table that is partitioned into smaller and more manageable pieces among
multiple database instances, called shards. Oracle Sharding is implemented based on the Oracle
Database partitioning feature. It is essentially distributed partitioning because it extends partitioning
by supporting the distribution of table partitions across shards.
Partitions are distributed across shards at the tablespace level, based on a sharding key. Each
partition of a sharded table resides in a separate tablespace, and each tablespace is associated with
a specific shard. Depending on the sharding method, the association can be established
automatically or defined by the administrator. Even though the partitions of a sharded table reside in
multiple shards, to the application, the table looks and behaves exactly the same as a partitioned
table in a single database. The SQL statements that are issued by an application need not refer to
shards or depend on the number of shards and their configuration.
The slide syntax shows a table that is partitioned by consistent hash, which is a special type of hash
partitioning that is commonly used in scalable distributed systems. This technique automatically
spreads tablespaces across shards to provide an even distribution of data and workload. The
database creates and manages tablespaces as a unit, called a tablespace set. The PARTITIONS
AUTO clause specifies that the number of partitions should be automatically determined. This type of
hashing provides more flexibility and efficiency in migrating data between shards, which is important
for elastic scalability.
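The tablespace set referenced above (ts1) is created as a unit before the sharded table. A minimal sketch; the sizing values in the template are assumptions for illustration:

```sql
-- Illustrative only: issued on the shard catalog; the individual
-- tablespaces in the set are created automatically across the shards.
CREATE TABLESPACE SET ts1
USING TEMPLATE
( DATAFILE SIZE 100M AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED );
```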



Sharded Table Family

• A set of tables sharded in the same way


• Only a single root table (table with no parent) per family
• Only a single table family per SDB
• Only a single sharding method (partitioning method) per SDB, which cannot be changed
after creation
SQL> CREATE SHARDED TABLE Orders
( OrderNo NUMBER NOT NULL
, CustNo NUMBER NOT NULL

, OrderDate DATE
, CONSTRAINT OrderPK PRIMARY KEY (CustNo, OrderNo)
, CONSTRAINT CustFK FOREIGN KEY (CustNo)
REFERENCES Customers(CustNo) )
PARTITION BY REFERENCE (CustFK);


A sharded table family is a set of tables that are sharded in the same way. Parent-child relationships
between database tables with a referential constraint in a child table (foreign key) that refers to the
primary key of the parent table form a tree-like structure where every child has a single parent. Such
a set of tables is referred to as a table family. A table in a table family that has no parent is called the
root table. There can be only one root table in a table family. In Oracle Database 12c Release 2, only
a single table family is supported in an SDB.
Reference partitioning is the recommended way to create a sharded table family. The corresponding
partitions of all the tables in the family are stored in the same tablespace set. Partitioning by
reference simplifies the syntax because the partitioning scheme is specified only for the root table.
Also, partition management operations that are performed on the root table are automatically
propagated to its descendants. For example, when adding a partition to the root table, a new
partition is created on all its descendants. The partitioning column is present in all tables in the
family. This is despite the fact that reference partitioning, in general, allows a child table to be equi-
partitioned with the parent table without having to duplicate the key columns in the child table. The
reason for this is that reference partitioning requires a primary key in a parent table because the
primary key must be specified in the foreign key constraint of a child table that is used to link the
child to its parent. However, a primary key on a sharded table must either be the same as the
sharding key or contain the sharding key as the leading column. This makes it possible to enforce
global uniqueness of a primary key without coordination with other shards, a critical requirement for
linear scalability.



Partitions, Tablespaces, and Chunks

• Each partition of a sharded table is stored in a separate tablespace.


• The corresponding data value partitions of all the tables in a table family are always
stored in the same shard.
– Guaranteed when the tables in a table family are created in the same tablespace set
• The child tables of a table family can be stored in separate tablespace sets.
– Uses chunks, groups of tablespaces that each contain a single partition from every
table in the family, keeping the corresponding partitions together


Distribution of partitions across shards is achieved by creating partitions in tablespaces that reside
on different shards. Each partition of a sharded table is stored in a separate tablespace, making the
tablespace the unit of data distribution in an SDB. To minimize the number of multi-shard joins, the
corresponding partitions of all the tables in a table family are always stored in the same shard. This
is guaranteed when the tables in a table family are created in the same set of distributed tablespaces
as shown in the syntax examples for this lesson, where the tablespace set ts1 is used for all tables.
However, it is possible to create different tables from a table family in different sets of tablespaces,
for example, the Customers table in the tablespace set ts1 and Orders in the tablespace set ts2. In
this case, it must be guaranteed that the tablespace that stores partition 1 of Customers always
resides in the same shard as the tablespace that stores partition 1 of Orders. To support this
functionality, a set of corresponding partitions from all the tables in a table family, called a chunk, is
formed. A chunk contains a single partition from each table of a table family.
The illustration in the slide shows a chunk that contains corresponding partitions from the tables of
the Customers-Orders-LineItems schema.
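As a conceptual sketch (plain Python, not Oracle internals), a chunk can be modeled as the same-numbered partition taken from every table in the family. The table names below reuse the Customers-Orders-LineItems example from this lesson; the partition-naming scheme is purely illustrative:

```python
# Toy model of a chunk: the same-numbered partition from every table in a
# table family, which Oracle Sharding always keeps together on one shard.
# Table names follow the lesson's example; partition names are made up.

TABLE_FAMILY = ["Customers", "Orders", "LineItems"]

def chunk(n):
    """Return chunk n: one partition from each table of the family."""
    return {table: f"{table}_P{n}" for table in TABLE_FAMILY}

# Moving chunk 1 to another shard relocates Customers_P1, Orders_P1,
# and LineItems_P1 together, so joins within the chunk stay single-shard.
```

Because a chunk moves as a unit, corresponding rows of a customer and that customer's orders never end up on different shards.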



Sharding Methods: System-Managed Sharding

Data is automatically distributed across shards using partitioning by consistent hash.


System-managed sharding is a sharding method that does not require the user to specify a mapping
of data to shards. Data is automatically distributed across shards using partitioning by consistent
hash. The partitioning algorithm evenly and randomly distributes data across shards. The distribution
used in system-managed sharding is intended to eliminate hot spots and provide uniform
performance across shards. Oracle Sharding automatically maintains balanced distribution of data
when shards are added to or removed from an SDB.
Consistent hash is a partitioning strategy that is commonly used in scalable distributed systems. It is
different from traditional hash partitioning. With traditional hashing, the bucket number is calculated
as HF(key) % N where HF is a hash function and N is the number of buckets. This approach works
fine if N is constant, but requires reshuffling of all data when N changes. More advanced algorithms,
such as linear hashing, do not require rehashing of the entire table to add a hash bucket, but they
impose restrictions on the number of buckets, such as allowing only a power of 2, and on the
order in which the buckets can be split.
The implementation of consistent hashing that is used in Oracle Sharding avoids these limitations by
dividing the possible range of values of the hash function (for example, from 0 to 2^32) into a set of N
adjacent intervals, and assigning each interval to a chunk. In this example, the SDB contains 1024
chunks, and each chunk gets assigned a range of 2^22 hash values. Therefore, partitioning by
consistent hash is essentially partitioning by the range of hash values.
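The arithmetic above can be sketched in a few lines of Python. This is an illustration only; Oracle's actual hash function and chunk bookkeeping are internal, and crc32 here is just a stand-in:

```python
import zlib

# Partitioning by consistent hash = partitioning by range of hash values:
# a 32-bit hash space split into 1024 adjacent intervals, one per chunk.
HASH_SPACE = 2**32
NUM_CHUNKS = 1024
CHUNK_SIZE = HASH_SPACE // NUM_CHUNKS    # 2**22 hash values per chunk

def chunk_for_key(sharding_key):
    """Map a sharding key to the chunk whose hash-value range contains it."""
    hash_value = zlib.crc32(str(sharding_key).encode())  # stand-in hash function
    return hash_value // CHUNK_SIZE

# Resharding moves whole chunks between shards; unlike HF(key) % N,
# changing the number of shards never forces rehashing of every row.
```

Because chunk boundaries are fixed ranges of hash values, adding a shard only reassigns chunks to shards; it never changes which chunk a key belongs to.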



Sharding Methods: Composite Sharding

Data is first partitioned by list or range across multiple shardspaces, and then further
partitioned by consistent hash across multiple shards in each shardspace.


The composite sharding method allows you to create multiple shardspaces for different subsets of
data in a table partitioned by consistent hash. A shardspace is a set of shards that stores data that
corresponds to a range or list of key values. System-managed sharding does not give you any
control over the assignment of data to shards.
When sharding by consistent hash on a primary key, there is often a requirement to differentiate
subsets of data within an SDB in order to store them in different geographic locations, allocate to
them different hardware resources, or configure high availability and disaster recovery differently.
Usually this differentiation is done based on the value of another (non-primary) column, for example,
customer location or a class of service.
With composite sharding, data is first partitioned by list or range across multiple shardspaces, and
then further partitioned by consistent hash across multiple shards in each shardspace. The two
levels of sharding make it possible to automatically maintain a balanced distribution of data across
shards in each shardspace, and at the same time, partition data across shardspaces. The slide
illustration shows two tablespace sets: tbs1 at the top and tbs2 at the bottom. Tablespace set tbs1 is
labeled “Shardspace for GOLD customers - shspace1” and contains three shards, each of which
contains a range of tablespaces and their respective partitions. Tablespace set tbs2 is labeled
“Shardspace for SILVER customers - shspace2” and contains four shards, each of which contains a
range of tablespaces and their respective partitions.
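The two-level mapping can be pictured with a short Python sketch (conceptual only: the shardspace names echo the slide, while the hash function and chunk count are assumptions made for the demo):

```python
import zlib

# Level 1: partition by list -- a class-of-service value picks a shardspace.
SHARDSPACES = {"GOLD": "shspace1", "SILVER": "shspace2"}
CHUNKS_PER_SPACE = 8          # assumed chunk count, for illustration only

def locate(service_class, cust_no):
    """Return (shardspace, chunk) for a row under composite sharding."""
    shardspace = SHARDSPACES[service_class]          # partition by list
    h = zlib.crc32(str(cust_no).encode())            # stand-in hash function
    chunk = h // (2**32 // CHUNKS_PER_SPACE)         # consistent hash in space
    return shardspace, chunk
```

The list key decides where a customer's data may live (hardware, location, HA policy); the consistent hash keeps the data balanced inside that shardspace.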



Duplicated Tables

• Are nonsharded tables that duplicate data on all shards


• Help eliminate cross-shard queries
• Are created in the shard catalog
• Use materialized view replication
• Can be refreshed by using a refresh frequency (default 60 seconds) that is set with the
SHRD_DUPL_TABLE_REFRESH_RATE initialization parameter
• Cannot be stored in tablespaces used for sharded tables

SQL> CREATE DUPLICATED TABLE Products
( StockNo NUMBER PRIMARY KEY
, Description VARCHAR2(20)
, Price NUMBER(6,2));


In addition to sharded tables, an SDB can contain tables that are duplicated on all shards. For many
applications, the number of database requests handled by a single shard can be maximized by
duplicating read-only or read-mostly tables across all shards. This strategy is a good choice for
relatively small tables that are often accessed together with sharded tables. A table with the same
contents in each shard is called a duplicated table.
Oracle Sharding synchronizes the contents of duplicated tables by using Materialized View
Replication. A duplicated table on each shard is represented by a read-only materialized view. The
master table for the materialized views is located in the shard catalog. The CREATE DUPLICATED
TABLE statement automatically creates the master table, materialized views, and other objects
required for materialized view replication. The materialized views on all the shards are automatically
refreshed at a configurable frequency. The refresh frequency of all duplicated tables is controlled by
the SHRD_DUPL_TABLE_REFRESH_RATE database initialization parameter. The default value for the
parameter is 60 seconds.



Routing in an Oracle Sharded Environment

• Direct Routing based on sharding_key


– For OLTP workloads that specify sharding_key (for example, customer_id)
during connect
– Enabled by enhancements to mid-tier connection pools and drivers
• Proxy Routing via a coordinator (shard catalog)
– For workloads that cannot specify sharding_key (as part of a connection)
– For reporting, batch jobs
– Queries spanning one, several, or all shards


• Direct Routing: In the first case (the first bullet point), a transaction happens on a single
shard. In the second case (second bullet point), JDBC/UCP, OCI, and ODP.NET recognize
the sharding keys.
• Proxy Routing: In the last case (last bullet point), the queries perform in parallel across
shards (for example, aggregates on sales data).



Direct Routing via Sharding Key

App Tier
Connection
Pool

Routing Tier

Shard
Directors

Data Tier


• The sharding keys are provided by the applications at connection checkout.


• The client specifies the sharding key (for example, customer_id).
• The shard director looks up the key, and redirects the client to the shard database that
contains the data.
• The client executes the SQL directly on the shard.



Connection Pool as Shard Director

App Tier
Connection
Pool

Routing Tier

Shard
Directors

Data Tier


Fast Path for Key Access


• Upon first connection to a shard:
- The connection pool retrieves all the key ranges in the shard
- The connection pool caches the key range mappings
• The database request for a key that is in any of the cached key ranges goes directly to the
shard (that is, bypasses the shard director).
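The fast path can be sketched as a small client-side cache. This is hypothetical Python for illustration only; real UCP/OCI/ODP.NET pools do this internally, and the class and method names below are invented:

```python
# Hypothetical sketch of the pool-side fast path: cache key-range -> shard
# mappings learned on first connect, then route later requests directly.

class PoolRouter:
    def __init__(self):
        self.cached_ranges = []              # (low, high, shard_name) triples

    def learn(self, low, high, shard):
        """Cache a key-range mapping retrieved on first connection to a shard."""
        self.cached_ranges.append((low, high, shard))

    def route(self, hash_value):
        """Fast path on a cache hit; None means fall back to the shard director."""
        for low, high, shard in self.cached_ranges:
            if low <= hash_value < high:
                return shard                 # bypasses the shard director
        return None                          # cache miss: ask the shard director
```

A request for a key inside a cached range goes straight to the owning shard; only uncached keys pay the extra hop through the shard director.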



Proxy Routing: Limited to System Managed in 12.2.0.1

App Tier
Connection
Pool

Routing Tier Coordinator


Shard (shard catalog)
Directors

Data Tier


Non-Sharding Key Access and Multi-Shard Queries


• A connection is made to the coordinator: Applications connect to the catalog service via a
separate connection pool.
• The coordinator parses the SQL and will proxy/route the request to the correct shard.
• The same flow is used for multi-shard queries:
- The coordinator acts as the SQL proxy/router.
- Shard pruning and scatter-gather are supported.
• This feature is for developer convenience and not for high performance.
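Scatter-gather with shard pruning can be pictured with a toy coordinator (purely conceptual Python; the real coordinator parses SQL and routes the work to shards itself):

```python
# Toy coordinator: prune shards that cannot hold matching rows, scatter the
# query to the rest, then gather and merge the partial results.

def multi_shard_sum(shard_data, keep_shard=None):
    """Sum values across shards; keep_shard is an optional pruning predicate."""
    targets = [s for s in shard_data
               if keep_shard is None or keep_shard(s)]    # shard pruning
    partials = [sum(shard_data[s]) for s in targets]      # scatter
    return sum(partials)                                  # gather

shard_data = {"sh1": [10, 20], "sh2": [30], "sh3": [5, 5]}
```

With no predicate, every shard is queried; a predicate that identifies the relevant shards prunes the rest, which is why single-key queries are far cheaper when routed directly.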



Lifecycle Management of SDB

• The DBA can manually move or split a chunk from one shard to another.
• When a new shard is added, chunks are automatically rebalanced.
• Before a shard is removed, chunks must be manually moved.
• Connection pools are notified (via ONS) about a split, a move, addition or removal of
shards, auto-resharding, and read-only access operations.
• All shards can be patched with one command via opatchauto.
• EM supports monitoring and management of SDB.


In the second case (the second bullet point), RMAN incremental backup and transportable
tablespace are used.
In the fourth case (the fourth bullet point), the application can either reconnect or access read-only.
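The automatic rebalancing in the second bullet can be sketched as follows. This is a simplified model of the bookkeeping only; the actual data movement uses RMAN incremental backup and transportable tablespaces, as noted above:

```python
# Simplified model of chunk rebalancing when a shard is added: move chunks
# from existing shards until every shard holds roughly the same number.

def rebalance(shards, new_shard):
    """Even out chunk counts after adding new_shard (modifies shards in place)."""
    shards[new_shard] = []
    total = sum(len(chunks) for chunks in shards.values())
    target = total // len(shards)                   # ideal chunks per shard
    for name, chunks in shards.items():
        while (name != new_shard and len(chunks) > target
               and len(shards[new_shard]) < target):
            shards[new_shard].append(chunks.pop())  # one chunk moves wholesale
    return shards
```

Only whole chunks move, so each relocation affects a small, bounded slice of the data and the rest of the SDB keeps serving requests.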



Sharding Deployment Outline: DBA Steps

1. Create users and groups on all host servers.


2. Perform an Oracle-database-software-only installation on the shard catalog server and
save a response file.
3. Perform silent installations of the Oracle database software only on all the shard hosts
and the additional shard catalog host.
4. Install the Oracle Global Service manager software on all the shard director hosts.
5. Create a non-container database by using DBCA on the shard catalog host with Oracle
Managed Files (required).

6. Configure the remote scheduler on the shard catalog host.
7. Register the remote scheduler on each shard host with the shard catalog host.


Oracle Sharding architecture uses separate server hosts for the shard catalog, shard directors, and
shards. The number of shards supported in a given sharded database (SDB) is 1,000. Deploying a
sharded database can be a lengthy process because the Oracle software is installed separately on
each server host.
The slide presents a very high-level overview of the steps that are necessary to deploy
Oracle Sharding. For detailed information, see Oracle Database Administrator’s Guide 12c Release
2 (12.2).



Sharding Deployment Outline: DBA Steps

8. Use GDSCTL on the shard catalog host to create a shard catalog.


9. Use GDSCTL on the shard catalog host to create and start the shard directors.
10. Create additional shard catalogs in a different region for high availability.
11. Define the primary shardgroup (region) by using GDSCTL connected to the shard
director host.
12. Define the Active Data Guard standby shardgroup by using GDSCTL connected to the
shard director host.
13. Define each shard host as belonging to the primary or standby shardgroup.


The slide continues with the very high-level overview of the steps that are necessary to
deploy Oracle Sharding. For detailed information, see Oracle Database Administrator’s Guide 12c
Release 2 (12.2).



Sharding Deployment Outline: DBA Steps

14. Use GDSCTL connected to the shard director host to run the DEPLOY command, which:
– Creates all primary and standby shard databases using DBCA
– Enables archiving and flashback for all shards
– Configures Data Guard Broker with Fast-Start Failover enabled
– Starts observers on the standby group’s shard director
15. Use GDSCTL to add and start a global service that runs on all primary shards.
16. Use GDSCTL to add and start a global service for read-only workloads on all standby
shards.

17. Use SQL*Plus connected to the shard catalog database to design the sharded schema
model (developer steps).


The slide continues with the very high-level overview of the steps that are necessary to
deploy Oracle Sharding. For detailed information, see Oracle Database Administrator’s Guide 12c
Release 2 (12.2).



Summary

In this lesson, you should have learned how to:


• Describe the challenges and benefits of a sharded database
• Describe sharded database architecture
• Configure a sharded database (SDB)
