
Database Recovery

Techniques

Silberschatz Chapter 16
Database Recovery
 Basic Storage Structure
– Storage structure: stable Storage
– Data Access, Buffering and DBMS caching

 What is recovery?
– Types Of Failure
– Write-Ahead Logging, Steal/No-Steal, and Force/No-Force
– Checkpoints in the System Log
– Transaction Rollback

 How to do recovery?
– Log based recovery: Deferred Update (no UNDO/REDO)
– Log based recovery: Immediate Update (UNDO/REDO)
– Shadow Paging recovery (no UNDO/no REDO)
– ARIES

 How is it handled in Oracle?


– Redo log, Undo segment, Archive log
– Recovery structure
– Backup structure

Slide -2
Database Recovery
 Basic Storage Structure
 Storage structure: stable Storage
 Data Access, Buffering and DBMS caching

 What is recovery?
– Types Of Failure
– Write-Ahead Logging, Steal/No-Steal, and Force/No-Force
– Checkpoints in the System Log
– Transaction Rollback

 How to do recovery?
– Log based recovery: Deferred Update (no UNDO/REDO)
– Log based recovery: Immediate Update (UNDO/REDO)
– Shadow Paging recovery (no UNDO/no REDO)
– ARIES

 How is it handled in Oracle?


– Redo log, Undo segment, Archive log
– Recovery structure
– Backup structure

Slide -3
Storage Structure
 Volatile storage:
– does not survive system crashes.
– examples: main memory, cache memory.

 Nonvolatile storage:
– survives system crashes.
– examples: disk, tape, flash memory,
non-volatile (battery backed up) RAM.

 Stable storage:
– a theoretical form of storage that survives all failures.
– approximated by maintaining multiple copies on
distinct nonvolatile media (e.g. combination of RAID
and archive tape backups, copy block to remote site).
Slide -4
Example of Data Access
[Figure: blocks A and B on disk; buffer blocks A and B in main memory;
input(A) and output(B) transfer blocks between disk and buffer; read(X)
and write(Y) move items between the buffer blocks and the private work
areas of T1 and T2.]

Slide -5
Data Access
 Physical blocks are those blocks residing on the disk.

 Buffer blocks are the blocks residing temporarily in main memory.

 Block movements between disk and main memory are initiated


through the following two operations:
– input(A) transfers the physical block A to main memory.
– output(B) transfers the buffer block B to the disk, and replaces the
appropriate physical block there.

 Each transaction Ti has its private work-area in which local copies of


all data items accessed and updated by it are kept.
– Ti's local copy of a data item X is called xi.

 We assume, for simplicity, that each data item fits in, and is stored
inside, a single block.

Slide -6
Data Access (Cont.)
 Transaction transfers data items between system buffer blocks and its
private work-area using the following operations :
– read(X) assigns the value of data item X to the local variable xi.
– write(X) assigns the value of local variable xi to data item X in the
buffer block.
– both these commands may necessitate the issue of an input(BX)
instruction before the assignment, if the block BX in which X resides is
not already in memory.

 Transactions
– Perform read(X) while accessing X for the first time;
– All subsequent accesses are to the local copy.
– After last access, transaction executes write(X).

 output(BX) need not immediately follow write(X). The block BX may


contain other data items being accessed. The system can perform the
output operation when it deems fit.

 Problem: if the system crashes after write(X) but before output(BX), the
new value of X is never written to disk and is thus lost.
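The read/write/input/output model above can be sketched in a few lines of Python (the names and the one-item-per-block simplification are illustrative, not any real DBMS API):

```python
# Minimal sketch of the data-access model: one data item per block,
# a buffer pool in memory, and a "disk" dict. All names are illustrative.
disk = {"A": 100, "B": 200}   # physical blocks on disk
buffer = {}                   # buffer blocks in main memory
work_area = {}                # transaction-local copies (xi)

def input_block(x):
    """input(X): transfer the physical block holding X into memory."""
    buffer[x] = disk[x]

def output_block(x):
    """output(X): write the buffer block back, replacing the disk copy."""
    disk[x] = buffer[x]

def read(x):
    """read(X): copy the buffered value of X into the local variable xi."""
    if x not in buffer:       # may trigger input(BX) first
        input_block(x)
    work_area[x] = buffer[x]

def write(x):
    """write(X): copy the local variable xi back into the buffer block.
    Note: output(BX) need not follow immediately."""
    if x not in buffer:
        input_block(x)
    buffer[x] = work_area[x]

read("A")
work_area["A"] += 50          # local update in the work area of T1
write("A")                    # buffer updated; disk still holds 100
# a crash here would lose the new value: disk["A"] is still 100
output_block("A")             # now the change has reached the disk
```

A crash between `write` and `output_block` is exactly the lost-update problem described above.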
Slide -7
DBMS caching
 DBMS cache (collection of in-memory buffers) holds the cached database
disk blocks from the disk.
– data blocks
– index blocks
– log blocks

 DBMS maintains a directory for the cache to keep track of which database
items are in the buffers. It also maintains a number of lists:
– active transactions (started but not yet committed)
– all transactions committed since the last checkpoint
– all transactions aborted since the last checkpoint

 DBMS first checks the cache directory for the required disk page. If the
page is not in the cache, the DBMS reads it from disk and copies the page
containing the item into the cache.

 It may be necessary to replace (or flush) the cache buffers to make space
available using LRU (least recently used) or FIFO (first in first out) buffer
replacement strategy.

Slide -8
Oracle SGA

Source: Oracle 10g DB Administrator: Implementation and Administration Slide -9


Oracle SGA Internal
 Shared pool
– Library cache stores parsed SQL and results of Optimizer’s query access path.
New query request will be first checked against old requests already available in
the shared pool, and the new request will be parsed if no reusable old requests
available.
– Dictionary cache stores metadata, tables, indexes, mviews, etc.

 Database buffer cache


– Stores data blocks read from or destined for disk, until flushed so the
space can be reused.
– Buffer cache can consist of multiple buffer caches of different sizes: 2K, 4K, 8K,
16K and 32K, with default being 8K.
– Keep pool retains data in the buffer for longer.
– Recycle pool removes data from the buffer more rapidly.

 Redo log buffer


– Stores copies of the changed data and of the original data.

 Large pool
– To support occasional processing that requires large chunks, e.g. backups and
parallel processing.

Slide -10
DBMS caching
 When performing an action on an item, the DBMS first checks
the DBMS cache (in-memory buffers) to determine if the disk
page containing the item is in the cache.

 Each cache buffer contains special bits included in the directory


entry to indicate if the cache buffer has been modified.
– dirty bit = 0 when disk page is read from database into cache
buffer
– dirty bit = 1 when the buffer is modified and ready for
database update
– Pin-unpin bit, where pin = 1 means the buffer cannot yet be written back
to the database

 The old value is called the before image (BFIM).

 The new value is called the after image (AFIM).
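A minimal Python sketch of a cache directory entry with dirty and pin-unpin bits; the class and function names are hypothetical, not a real DBMS interface:

```python
# Sketch of a cache directory entry with dirty and pin-unpin bits.
class CacheEntry:
    def __init__(self, page_id, data):
        self.page_id = page_id
        self.data = data      # BFIM when first read in
        self.dirty = 0        # 0: matches disk copy; 1: modified (AFIM pending)
        self.pin = 0          # 1: must not be written back to disk yet

cache = {}

def fetch(page_id, disk):
    """Read a disk page into the cache; dirty bit starts at 0."""
    cache[page_id] = CacheEntry(page_id, disk[page_id])

def modify(page_id, new_value):
    """Update the buffer: dirty bit set to 1, AFIM held in memory."""
    entry = cache[page_id]
    entry.data = new_value
    entry.dirty = 1

def flush(page_id, disk):
    """Write a dirty, unpinned buffer back to disk and clear the dirty bit."""
    entry = cache[page_id]
    if entry.dirty and not entry.pin:
        disk[page_id] = entry.data
        entry.dirty = 0

disk = {"P1": "BFIM"}
fetch("P1", disk)         # dirty = 0: page matches disk
modify("P1", "AFIM")      # dirty = 1: ready for database update
flush("P1", disk)         # disk now holds the AFIM
```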

Slide -11
DBMS caching

 Two strategies for flushing a modified buffer back to disk:


– In-place update: write buffer back to same original disk
location. A single copy of each database disk block is
maintained, and old value in disk is overwritten.
A log, a sequential (append-only) disk file, is required.

– Shadowing: write an updated buffer at a different disk


location, thus keeping multiple versions of data items.
A log is not necessary since both BFIM and AFIM can
be kept on disk

Slide -12
Database Recovery
 Basic Storage Structure
– Storage structure: stable Storage
– Data Access, Buffering and DBMS caching

 What is recovery?
 Types Of Failure
 Write-Ahead Logging, Steal/No-Steal, and Force/No-Force
 Checkpoints in the System Log
 Transaction Rollback

 How to do recovery?
– Log based recovery: Deferred Update (no UNDO/REDO)
– Log based recovery: Immediate Update (UNDO/REDO)
– Shadow Paging recovery (no UNDO/no REDO)
– ARIES

 How is it handled in Oracle?


– Redo log, Undo segment, Archive log
– Recovery structure
– Backup structure

Slide -13
Types Of Failure

 SQL transaction statement failure

 User process failure

 Network failure

 User error /mistake

 Instance failure

 Media failure

Source: Oracle 10g DB Administrator: Implementation and Administration Slide -14


Examples of SQL Transaction Statement Failure

Source: Oracle Database 10g: Administration Workshop I Slide -15


Examples of User Process Failure

Source: Oracle Database 10g: Administration Workshop I Slide -16


Examples of Network Failure

Source: Oracle Database 10g: Administration Workshop I Slide -17


Examples of User Error / Mistake

 If data is not committed, simply roll back to recover.


 If data is committed, recovery can be made using the available undo log.
 Oracle example: using a flashback query to find the previous salary value
of employee id = 100:
SELECT SALARY FROM EMPLOYEE
AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' MINUTE)
WHERE EMPLOYEE_ID = 100;

Source: Oracle Database 10g: Administration Workshop I Slide -18


Examples of User Error / Mistake

 The before value may not be available in the undo log, for example
because the undo data was overwritten once the retention period was
exceeded.

 Oracle LogMiner can be used to


query online redo logs and
archived redo logs. Thus, it is
important to ensure redo logs are
available to support the
necessary recovery.

Source: Oracle Database 10g: Administration Workshop I Slide -19


Examples of Instance Failure

 Typically occurs when a database instance is shut down, due to hardware
or software failure, before the database files are synchronized.
Source: Oracle Database 10g: Administration Workshop I Slide -20
Examples of Media Failure

 Media failure typically occurs when there is a loss or corruption of


one or more database files (e.g. datafile, control file or redo log).
 It is good practice to configure the database for maximum
recoverability, i.e. schedule regular backups, multiplex control files,
multiplex redo log groups, retain archived copies of redo logs.
Source: Oracle Database 10g: Administration Workshop I Slide -21
Failures

 Catastrophic failures (usually related to physical damage):
– restore an archived copy of the database and reconstruct it by redoing
the operations of committed transactions from the backed-up log, up to
the time of failure.

 Non-catastrophic failures (usually related to non-physical damage):
– Reverse any changes that caused the inconsistency by
undoing some operations and redoing some operations
to restore a consistent state using the online system log.

Slide -22
Recovery for Non-catastrophic failures

 Usually attempts to restore the database to the most recent


consistent state just before failure.

 Recovery algorithms have two parts:


1. Actions taken during normal transaction processing to
ensure enough information exists to recover from failures.

2. Actions taken after a failure to recover the database


contents to a state that ensures atomicity, consistency and
durability.

 Intertwined with OS functions – buffering and caching.

Slide -23
Recovery Concepts: Write-Ahead Logging (WAL)

 Write-ahead logging
– This is used by in-place update.

– BFIM (old value) is recorded in log entry and the entry is flushed
(force-written) to disk before AFIM (new value) replaces BFIM.

– In an update, the log blocks (containing the redo-type and undo-type log
entries) must first be flushed (force-written) to disk before the committed
transaction's actual data blocks are written to disk.

 A REDO-type log entry includes the AFIM (new value) so recovery can redo
the operation and set the database item to the new value.
– Redo should be idempotent, i.e. executing it over and over is equivalent
to executing it just once.

 An UNDO-type log entry includes the BFIM (old value) so recovery can undo
the operation and set the database item back to the old value.
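The WAL rule can be illustrated with a small sketch; the record layout [transaction, item, BFIM, AFIM] is a simplification of real log formats:

```python
# Sketch of the write-ahead logging rule: the undo/redo log record
# (carrying BFIM and AFIM) is force-written to disk BEFORE the data page.
log_on_disk = []
data_on_disk = {"X": 5}
page_cache = {"X": 5}

def wal_update(txn, item, new_value):
    # 1. Append an undo/redo log record: (txn, item, BFIM, AFIM),
    #    and force-write the log first -- this is the WAL rule.
    record = (txn, item, page_cache[item], new_value)
    log_on_disk.append(record)
    # 2. Only now may the modified page be written to disk.
    page_cache[item] = new_value
    data_on_disk[item] = new_value

wal_update("T1", "X", 9)
# The log record carries BFIM = 5 (for undo) and AFIM = 9 (for redo),
# so whichever of undo or redo is needed after a crash is possible.
```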

Slide -24
Recovery Concepts: Steal/No-Steal, Force/No-Force
 Steal
– A cache page updated by a transaction may be written to disk before the
transaction commits.
– The buffer manager "steals" a page, e.g. when it needs to free buffer
frames for another transaction, so it writes the most recently updated
page to disk even before the transaction commits.
– Does not require a large buffer memory to hold all updated pages.

 No-steal
– A cache page updated by a transaction cannot be written to disk before
the transaction commits (e.g. the pin bit is set to 1).

 Force
– All pages updated by a transaction are immediately written to disk when
transaction is committed.

 No-force
– Pages updated by a transaction are not immediately written to disk when
transaction is committed.
– A deferred update approach.

Slide -25
Exercise:
Why do most DBMS use a steal/no-force strategy?

Slide -26
Recovery Concepts: Checkpoints

 Problems in recovery procedure:

1. searching the entire log is time-consuming.


2. we might unnecessarily redo transactions which
have already output their updates to the database.

 A method to overcome the above problem is to


periodically insert checkpoints in the log when the
system writes out to the database on the disk.

Slide -27
Checkpoints in System Log

 The DBMS writes out the latest modified items in


the buffers to disk and inserts a checkpoint in the
log.

 When checkpoint is taken:


1. Transactions are temporarily suspended.
2. Force-write all modified main memory
buffers to disk.
3. Write a checkpoint record to the log, force-
write the log to disk.
4. Resume transaction execution.
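The four steps can be sketched as follows (illustrative only; real systems checkpoint far more state):

```python
# Sketch of the four checkpoint steps: suspend, force-write dirty
# buffers, log the checkpoint record, resume.
def take_checkpoint(buffers, disk, log):
    suspended = True                     # 1. suspend transaction execution
    for page, entry in buffers.items():  # 2. force-write modified buffers
        if entry["dirty"]:
            disk[page] = entry["value"]
            entry["dirty"] = False
    log.append("<checkpoint>")           # 3. write checkpoint record,
                                         #    force-write the log to disk
    suspended = False                    # 4. resume transaction execution
    return suspended

buffers = {"P1": {"value": 42, "dirty": True}}
disk, log = {}, []
take_checkpoint(buffers, disk, log)
```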

Slide -28
Checkpoints (Cont.)
 All committed transactions in the log before a
checkpoint do not need to have their WRITE
operations REDONE in case of a system failure.

 The DBMS’s Recovery Manager decides the


checkpoint intervals to be taken:
– every m minutes, or;
– by t committed transactions since the last
checkpoint.

 Fuzzy checkpointing minimizes checkpoint delay by allowing steps 3 and 4
to proceed before step 2 has finished. A pointer to the last valid
checkpoint is kept in the log until step 2 completes.
Slide -29
Checkpoints (Cont.)
 During recovery we need to consider only the most recent
transaction Ti that started before the checkpoint, and
transactions that started after Ti.

1. Scan backwards from end of log to find the most recent <checkpoint>
record .
2. Continue scanning backwards till a record <Ti start> is found.
3. Need only consider the part of log following from the start record.
Earlier part of log can be ignored during recovery, and can be erased
whenever desired.
4. Recovery in case of immediate modification:
• For all transactions (starting from Ti or later) with no <Ti commit>,
execute undo(Ti).
• Scanning forward in the log, for all transactions starting from Ti or
later with a <Ti commit>, execute redo(Ti).
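The backward/forward scan can be sketched on a toy log; the record format is a simplification (real records also carry data items and values):

```python
# Sketch of checkpoint-based recovery for immediate modification:
# find the last checkpoint, then decide which transactions to undo/redo.
log = [
    ("start", "T1"), ("start", "T2"), ("commit", "T1"),
    ("checkpoint",),
    ("start", "T3"), ("commit", "T2"), ("start", "T4"),
]  # crash here: T2 committed after the checkpoint, T3 and T4 did not

def recover(log):
    # scan backward for the most recent <checkpoint> record
    cp = max(i for i, r in enumerate(log) if r[0] == "checkpoint")
    committed = {r[1] for r in log if r[0] == "commit"}
    # transactions with activity after the checkpoint must be considered
    active = {r[1] for r in log[cp + 1:]}
    undo = sorted(active - committed)   # no commit record: undo(Ti)
    redo = sorted(active & committed)   # committed after checkpoint: redo(Ti)
    return undo, redo

undo, redo = recover(log)
# T1's updates were flushed at the checkpoint, so it can be ignored.
```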

Slide -30
Example Of Checkpoints Recovery Using Immediate Update

[Figure: timeline of transactions T1–T4 relative to a checkpoint at time
Tc and a system failure at time Tf.]

Slide -31
Recovery Concepts: Transaction Rollback

 When a transaction T is rolled back, the data values that


have been changed by the transaction T and written to the
database are restored to the BFIM state. UNDO type log
entry is used to accomplish this.

 A cascading rollback will take place if another transaction S has read
and performed operations on data items written by transaction T. This can
be rather complex and time-consuming.

 To perform cascading rollback, the log must also record the read_item
operations.

 For cascadeless schedules there is no need for cascading rollback and no
need to record the read_item operations in the log.
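A sketch of how rolling back one transaction cascades to others through read-from dependencies (toy log format; a real log also records values and precise ordering):

```python
# Sketch of cascading rollback: rolling back T forces rolling back any
# transaction S that read a value T wrote, transitively.
log = [
    ("write", "T", "X"),
    ("read",  "S", "X"),   # S read a value written by T
    ("write", "S", "Y"),
]

def transactions_to_roll_back(log, failed):
    doomed = {failed}
    changed = True
    while changed:          # propagate through read-from dependencies
        changed = False
        written = {item for op, txn, item in log
                   if op == "write" and txn in doomed}
        for op, txn, item in log:
            if op == "read" and item in written and txn not in doomed:
                doomed.add(txn)
                changed = True
    return doomed

doomed = transactions_to_roll_back(log, "T")
```

Note that the read records are exactly what makes this computation possible; with a cascadeless schedule they are unnecessary.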
Slide -32
Example: Illustrating cascading rollback
(a) The read and write operations of three transactions.
(b) System log at point of crash
(c) Operations before the crash


Slide -33
Example: Illustrating cascading rollback
(a) The read and write operations of three transactions.
(b) System log at point of crash
(c) Operations before the crash 1. What recovery is necessary if system crash
before [read_item, T3, A]?


Slide -34
Example: Illustrating cascading rollback
(a) The read and write operations of three transactions.
(b) System log at point of crash
(c) Operations before the crash 2. What recovery is necessary if system
crashes before [write_item, T2,D, 25, 26]?


Slide -35
Database Recovery
 Basic Storage Structure
– Storage structure: stable Storage
– Data Access, Buffering and DBMS caching

 What is recovery?
– Types Of Failure
– Write-Ahead Logging, Steal/No-Steal, and Force/No-Force
– Checkpoints in the System Log
– Transaction Rollback

 How to do recovery?
 Log based recovery: Deferred Update (no UNDO/REDO)
 Log based recovery: Immediate Update (UNDO/REDO)
 Shadow Paging recovery (no UNDO/no REDO)
 ARIES

 How is it handled in Oracle?


– Redo log, Undo segment, Archive log
– Recovery structure
– Backup structure

Slide -36
Two techniques (approaches) in Recovery:

 Log-based recovery.
– Deferred update – RDU (No-Undo/Redo)
– Immediate update – RIU (Undo/Redo)

 Shadow-paging (NO-UNDO/NO-REDO)

Slide -37
Recovery based on deferred update techniques - RDU

 All transaction updates prior to the commit point are stored in


main memory buffer and in the log.

 Physical update of the database occurs after transaction passes


the commit point and after the log is force-written to disk.

 Since the database is not being affected prior to commit point,


no UNDO recovery is necessary and it only requires REDO the
operations of the committed transaction from the system log.

 This is suitable for short transactions with few item changes, which do
not take up excessive buffer space.

 This is similar to the WAL (write-ahead-logging) protocol.

 This is also referred to as the NO-UNDO/REDO algorithm.


Slide -38
Deferred Update and Recovery
 Transaction starts by writing <Ti start> record to log.
 Record updates first in the log and write the log to stable storage.
 Defer database updates until after partial commit, when the record <Ti
commit> is written to the log.
 Use the log information to execute deferred update.
 Once the physical update is completed, the transaction reaches the
"committed" state.
 Only AFIM (new value) is required in the log for recovery
purpose. A write(X) operation results in a log record <Ti, X, V>
being written, where V is the new value for X
– Note: Since undo is not necessary old value is not needed in the log
 If the log contains <start> and <commit>, recovery would only
require the redo operation.
 Redoing a transaction Ti (redo(Ti)) sets the value of all data items
updated by the transaction to the new values.
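A sketch of NO-UNDO/REDO recovery on a toy log; the record format is illustrative:

```python
# Sketch of deferred-update (NO-UNDO/REDO) recovery: only AFIM is
# logged, and only transactions with both <start> and <commit> are redone.
log = [
    ("start", "T1"), ("write", "T1", "X", 10), ("commit", "T1"),
    ("start", "T2"), ("write", "T2", "Y", 20),   # crash: T2 never committed
]
db = {"X": 0, "Y": 0}   # deferred update: disk untouched before commit

def redo_recover(log, db):
    committed = {r[1] for r in log if r[0] == "commit"}
    for r in log:                       # forward pass, AFIM only
        if r[0] == "write" and r[1] in committed:
            db[r[2]] = r[3]             # set item to its new value
    # uncommitted writes (T2) are simply ignored: no undo is ever needed

redo_recover(log, db)
```

Redo is idempotent here: running `redo_recover` again leaves `db` unchanged.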

Slide -39
Deferred Update and Recovery
Silberschatz Fig. 17.4: example of transaction logs

Slide -40
Recovery based on immediate update techniques - RIU

 The database may be physically updated before the transaction reaches
its commit point.

 Using the WAL write-ahead-logging protocol, the log (on disk) records the
update operations before the update is applied to the database.

 If all physical updates of the database occur prior to the commit point:
– use the UNDO/NO-REDO algorithm.

 If only some physical updates of the database occur prior to the commit
point:
– Transaction updates prior to the commit point are held in main-memory
buffers, with the corresponding log records force-written to disk.
– Apply UNDO (roll back) if the transaction fails before reaching its
commit point but has already begun physically updating the database.
– Use the UNDO/REDO algorithm (more complex).

Slide -41
Immediate Database Modification

 The immediate database modification scheme allows database


updates of an uncommitted transaction to be made as the writes are
issued.
– since undoing may be needed, update logs must have both old
value and new value.

 Update log record must be written before database item is written.


– We assume that the log record is output directly to stable storage.
– Can be extended to postpone log record output, so long as, prior to
execution of an output(B) operation for a data block B, all log records
corresponding to items in B have been flushed to stable storage.

 Output of updated blocks to disk can take place at any time before or
after transaction commit.

 Order in which blocks are output to disk can be different from the
order in which they are written.

Slide -42
Immediate Database Modification (Cont.)
 Recovery procedure has two operations instead of one:
– undo(Ti) restores the value of all data items updated by Ti to their
old values, going backwards from the last log record for Ti.
– redo(Ti) sets the value of all data items updated by Ti to the new
values, going forward from the first log record for Ti

 Both operations must be idempotent.


– That is, even if the operation is executed multiple times the effect is
the same as if it is executed once. This is necessary since operations
may get re-executed during recovery.

 When recovering after failure:


– Transaction Ti needs to be undone if the log contains the record
<Ti start>, but does not contain the record <Ti commit>.
– Transaction Ti needs to be redone if the log contains both the record
<Ti start> and the record <Ti commit>.

 Undo operations are performed first, then redo operations.
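A sketch of UNDO/REDO recovery on a toy log, undo pass first, then redo; the record format (txn, item, BFIM, AFIM) is illustrative:

```python
# Sketch of immediate-update (UNDO/REDO) recovery: log records carry
# both BFIM and AFIM; undo runs backward first, then redo runs forward.
log = [
    ("start", "T1"), ("write", "T1", "X", 5, 10), ("commit", "T1"),
    ("start", "T2"), ("write", "T2", "Y", 8, 20),  # crash: T2 uncommitted
]
db = {"X": 10, "Y": 20}   # immediate update: T2's write already on disk

def undo_redo_recover(log, db):
    committed = {r[1] for r in log if r[0] == "commit"}
    # undo pass: backward through the log, restore BFIM for
    # transactions with <start> but no <commit>
    for r in reversed(log):
        if r[0] == "write" and r[1] not in committed:
            db[r[2]] = r[3]            # old value (BFIM)
    # redo pass: forward through the log, reapply AFIM for
    # transactions with both <start> and <commit>
    for r in log:
        if r[0] == "write" and r[1] in committed:
            db[r[2]] = r[4]            # new value (AFIM)

undo_redo_recover(log, db)
```

Both passes are idempotent: re-running `undo_redo_recover` after a crash during recovery yields the same final state.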


Slide -43
Immediate Update Recovery Example
Consider previous example under immediate update with a system crash before
completion of the transactions.

Silberschatz Fig. 17.7

Slide -44
Summary Of Recovery On Concurrent Transactions

                              Immediate Update     Deferred Update
Record in log                 <Ti, Xi, V1, V2>     <Ti, Xi, V2>
<Ti, start> only              UNDO                 No recovery action
<Ti, start> and <Ti, commit>  REDO                 REDO

1. UNDO – scan backward from the end of the log to the <checkpoint>

2. REDO – scan forward from the <checkpoint> to the end of the log

Slide -45
Exercise
What recovery is necessary for these concurrent transactions using
deferred update?

(a) The READ and WRITE operations of 4 transactions.
(b) System log at the point of crash.

Slide -46
Shadow Paging

 Shadow paging is an alternative to log-based recovery; this scheme is
useful if transactions execute serially, i.e. in a single-user
environment.

 Idea: maintain two page tables during the lifetime of a transaction –
the current page table and the shadow page table.

 Store the shadow page table in nonvolatile storage, such that


state of the database prior to transaction execution may be
recovered.
– Shadow page table is never modified during execution

Slide -47
Shadow Paging

 At beginning both page tables are identical. Only current page


table is used for data accesses during transaction execution.

 The database is viewed as a number of fixed-size disk pages (blocks). A
directory is constructed in main memory, and all read and write
operations go through the directory to the database pages.

 The current directory is copied into a shadow directory when a


transaction begins executing.

 Only the current directory is modified by the transaction. A write_item
operation creates a new modified database disk page while keeping the old
disk page. The shadow directory is not modified and continues to point to
the old, unmodified disk page.
Slide -48
Shadow Paging

 The old version is referenced by the shadow directory and the


new version by the current directory.

 When the transaction is ready to commit, the current directory is made
persistent and the shadow directory is discarded.

 Recovery is thus made simple, by reinstating the shadow


directory (saved on disk) and discarding the current directory
and the modified database pages.

 This is regarded as NO-UNDO/NO-REDO technique.

 No log is required in a single-user environment. A log and checkpoints
may still be needed for concurrency control in a multi-user environment.
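The mechanism can be sketched as follows (page and directory layout are illustrative):

```python
# Sketch of shadow paging: writes go to fresh pages via the current
# directory; the shadow directory keeps pointing at the old pages.
pages = {0: "old-A", 1: "old-B"}        # fixed-size disk pages
shadow = {"A": 0, "B": 1}               # saved on disk, never modified
current = dict(shadow)                  # copied at transaction start
next_page = 2

def write_item(item, value):
    """Write to a NEW page; the old page stays for the shadow directory."""
    global next_page
    pages[next_page] = value
    current[item] = next_page           # only current directory changes
    next_page += 1

write_item("A", "new-A")
# Crash before commit: recovery just reinstates the shadow directory --
# no undo, no redo -- and the modified pages are discarded (garbage).
recovered = {item: pages[pid] for item, pid in shadow.items()}
# Commit instead would make `current` persistent and discard the shadow.
```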

Slide -49
An example of shadow paging.

Challenges and complexities:
– tracking location changes of updated database pages (data fragmentation)
– high overhead of writing the shadow directory if the directory is large
– must be able to release old data pages and free space (garbage collection)

Slide -50
The ARIES Recovery Algorithm

 ARIES uses a number of techniques to reduce recovery time,


reduce checkpoint overhead and reduce logging information.

 ARIES uses the Steal/No-force approach and is based on the


following 3 concepts:
1. Write-ahead logging (WAL)
2. Repeating history during redo (retrace all actions of the database
prior to the crash to reconstruct its state, then undo all active,
uncommitted transactions)
3. Logging changes during undo, in case of failures during the recovery
process

Slide -51
ARIES Recovery Algorithm

ARIES recovery involves three passes:


 Analysis pass: Determines
– Which transactions to undo
– Dirty pages (those updated in memory whose disk version is not up to
date) at the time of the crash
– RedoLSN: LSN (log sequence number) from which redo should start

 Redo pass:
– Repeats history, redoing all actions from RedoLSN
 RecLSN and PageLSNs are used to avoid redoing actions already
reflected on page

 Undo pass:
– Rolls back all incomplete transactions
 Transactions whose abort was complete earlier are not undone
– Key idea: no need to undo these transactions: earlier undo
actions were logged, and are redone as required
Silberschatz 16.8.6.2
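A highly simplified sketch of the three passes (real ARIES also tracks PageLSNs, a dirty page table, and compensation log records; the record layout here is illustrative):

```python
# Toy sketch of ARIES's three passes over a simplified log.
log = [
    {"lsn": 1, "txn": "T1", "op": "update", "page": "P1", "redo": 10, "undo": 5},
    {"lsn": 2, "txn": "T2", "op": "update", "page": "P2", "redo": 20, "undo": 7},
    {"lsn": 3, "txn": "T1", "op": "commit"},
]  # crash: T2 is an incomplete ("loser") transaction

def aries_recover(log, pages):
    # Analysis pass: find losers and the LSN from which redo should start.
    committed = {r["txn"] for r in log if r["op"] == "commit"}
    losers = {r["txn"] for r in log if r["op"] == "update"} - committed
    redo_lsn = min(r["lsn"] for r in log if r["op"] == "update")
    # Redo pass: repeat history -- replay ALL updates from redo_lsn,
    # including those of losers, to reconstruct the pre-crash state.
    for r in log:
        if r["op"] == "update" and r["lsn"] >= redo_lsn:
            pages[r["page"]] = r["redo"]
    # Undo pass: roll back the losers, scanning backward through the log.
    for r in reversed(log):
        if r["op"] == "update" and r["txn"] in losers:
            pages[r["page"]] = r["undo"]
    return losers

pages = {}
losers = aries_recover(log, pages)
```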
Slide -52
Database Recovery
 Basic Storage Structure
– Storage structure: stable Storage
– Data Access, Buffering and DBMS caching

 What is recovery?
– Types Of Failure
– Write-Ahead Logging, Steal/No-Steal, and Force/No-Force
– Checkpoints in the System Log
– Transaction Rollback

 How to do recovery?
– Log based recovery: Deferred Update (no UNDO/REDO)
– Log based recovery: Immediate Update (UNDO/REDO)
– Shadow Paging recovery (no UNDO/no REDO)
– ARIES

 How is it handled in Oracle?


 Redo log, Undo segment, Archive log
 Recovery structure
 Backup structure

Slide -53
Oracle Recovery Structure
 Controlfiles contain pointers to datafiles, dictating where datafiles
should be in relation to redo log entries. Controlfiles are used during
database mount.
– This file is typically multiplexed and stored as control01.ctl,
control02.ctl and control03.ctl.

 Redo logs and archive logs consist of records of all transactions made
to a database.
– This file is multiplexed in three groups and stored as redo01.log,
redo02.log and redo03.log.
– Restoring a recovered backup is a simple process of applying redo log
entries to the datafile until the datafile "catches up" to the time
indicated by the control file.

 Undo log segments are used for transaction rollback (before commit)
– stored in the undo tablespace datafile undotbs01.dbf

Slide -54
LGWR Process

 Each redo log group consists of a redo log file (member) and its multiplexed
copies.
 LGWR writes redo records to all members of a redo log group until the file is
filled or a log switch is requested, in which case it writes to the next group.
Source: Oracle Database 10g: Administration Workshop I Slide -55
ARCn Process

 ARCn is essential for backup recovery, especially for disk failure.

Source: Oracle Database 10g: Administration Workshop I Slide -56


ARCn Process
 ARCn initiates backup (archive) of the filled inactive log group at every log
switch before the filled redo logfile can be reused.

 If NOARCHIVELOG mode is set (i.e. the default), the redo log file will be
overwritten. To check: SQL> ARCHIVE LOG LIST;

 To set database in ARCHIVELOG mode, the database must be in MOUNT


state. It is essential to restart the instance after setting ARCHIVELOG mode.
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;

 It is good practice to back up the database after enabling ARCHIVELOG
mode, so any future recovery can start from this latest backup copy and
the archived redo log files created from this point on. Note: DBA
privilege is required to do this.
Source: Oracle Database 10g: Administration Workshop I Slide -57
CKPT Process

 Typically in 10g, the checkpoint information is updated every 3 seconds


or less in the control file.
 Checkpoint information is updated in the datafile header at log switch.
Source: Oracle Database 10g: Administration Workshop I Slide -58
Instance Recovery

 Instance recovery is done automatically. It is necessary because the
database cannot open when the SCN (system change number) in the control
file and in the datafile headers are not synchronized.

 The recovery uses information stored in the redo log groups to


redo the transactions to synchronize the datafiles to the control
file.
– Roll forward: datafiles are restored to their state before the instance failed
– Roll backward: uncommitted changes are returned to their original state.
Thus the datafile will only contain committed data.

 During the instance recovery, the transactions between the


checkpoint position and the end of redo log must be applied to the
datafiles.
Source: Oracle 10g DB Administrator: Implementation and Administration Slide -59
Instance Recovery

Source: Oracle Database 10g: Administration Workshop I Slide -60


Instance Recovery

 Typically the distance between the checkpoint position and the end of
the redo log group can never be more than 90% of the smallest redo log
group.
Source: Oracle Database 10g: Administration Workshop I Slide -61
Using Advisor (MTTR – Mean Time To Recovery)

 Aim is to reduce MTTR and to increase MTBF (Mean Time Between
Failures).
Source: Oracle Database 10g: Administration Workshop I Slide -62
Secure Redo Log Files

 Good practice to give each archived redo log file a unique name, to
avoid overwriting older redo log files.

Source: Oracle Database 10g: Administration Workshop I Slide -63


Unique Naming Of Archive Redo Log Files
 Good practice to give each archived redo log file a unique name, to
avoid overwriting older redo log files.

 Naming format: %s, %t, %r with optional %d


%s : log sequence number
%t : thread number
%r : resetlogs ID
%d : database ID (optional for single database)

 Archive redo log files can be written to as many as 10


different destinations, with default location being the flash
recovery area: DB_RECOVERY_FILE_DEST.

Source: Oracle Database 10g: Administration Workshop I Slide -64


Checkpoints, Redo Logs, Archive Logs

 The combination of archive logs and redo logs allows you to recover
your DB to a point in time.

 Checkpoint: a point in time where all buffers are flushed to disk


– Writes all pending redo log buffer and database buffer cache
changes to disk, writing the redo log buffer first, followed by
the database buffer cache
– By default, executed automatically when the log buffer is
one-third full of pending changes, every three seconds, when
a log switch occurs, or when COMMIT or ROLLBACK
commands are executed
– To alter the configuration, set the initialization parameters:
LOG_CHECKPOINT_INTERVAL
LOG_CHECKPOINT_TIMEOUT

Slide -65
UNDO DATA vs REDO DATA

UNDO data stores before image and is stored in UNDO segment.


REDO data stores old and new values in REDO log files in
persistent storage typically multiplexed.
Slide -66
DML - COMMIT vs ROLLBACK

 COMMIT statement
– Making pending changes in existing transaction in current session
permanent
– Permanently stores changes to a database

 ROLLBACK statement
– Removing pending changes in existing transaction in current session
– Reverses changes made since the last COMMIT

 Recovery before COMMIT


– UNDO transaction using Undo log info

 Recovery after COMMIT


– REDO transaction using Redo log

Slide -67
Transaction Prior To Commit

 Recovery before COMMIT


– UNDO transaction using Undo log info
Source: Oracle 10g DB Administrator: Implementation and Administration Slide -68
Transaction At Commit

 COMMIT statement
– Making pending changes in existing transaction in current session
permanent
– Permanently stores changes to a database
Source: Oracle 10g DB Administrator: Implementation and Administration Slide -69
Transaction Rollback

 Recovery after COMMIT


– REDO transaction using Redo log
Source: Oracle 10g DB Administrator: Implementation and Administration Slide -70
UNDO DATA

Source: Oracle Database 10g: Administration Workshop I Slide -71


Transaction and UNDO

Slide -72
Oracle Backup Structure

Slide -73
What is Oracle Backup?

 Backup is the process of making some kind of copies of parts


of a database, or an entire database.

Source: Oracle 10g DB Administrator: Implementation and Administration Slide -74


What is Oracle Restoration?

 Restoration is the process of copying files from a backup.

Source: Oracle 10g DB Administrator: Implementation and Administration Slide -75


What is Oracle Recovery?
 The process of rebuilding a database after some part of a database has been lost.
 Recovery is the process of executing procedures in Oracle Database to update
the recovered backup files to an up-to-date state

Source: Oracle 10g DB Administrator: Implementation and Administration Slide -76


Slide 77
Copying backup

Applying redo

Slide 78
Use of Undo Data

Slide 79
Use of Undo Data

Slide 80
Redo log changes are applied to the data files until the current online
log is reached and the most recent transactions have been re-entered.
Undo is then used to roll back uncommitted changes.

Slide 81
Flashback Technology

Slide -82
Flashback Drop and Recycle Bin

 Using FLASHBACK TABLE to undo a DROP TABLE


 RECYCLEBIN = ON is the default initialization parameter
 The Recycle Bin was introduced as a 10g feature
Slide -83
Flashback Drop From Recycle Bin

 Oracle will always choose to restore the most recently dropped


version of the table (if there are two identically named tables)

SQL> FLASHBACK TABLE <table_name> TO BEFORE DROP
     [RENAME TO new_table_name];

Slide -84
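The "most recently dropped version wins" rule above can be modeled as a stack of dropped objects: FLASHBACK TABLE pops the latest entry with the matching name. A small Python sketch (illustration only; Oracle's Recycle Bin actually keeps the dropped tables under renamed BIN$... objects):

```python
# Sketch of the Recycle Bin restore rule: FLASHBACK TABLE ... TO BEFORE
# DROP restores the MOST RECENTLY dropped table of that name.

class RecycleBin:
    def __init__(self):
        self._dropped = []                     # oldest drop first

    def drop(self, name, data):
        self._dropped.append((name, data))

    def flashback_to_before_drop(self, name):
        # Scan from the most recent drop backwards.
        for i in range(len(self._dropped) - 1, -1, -1):
            if self._dropped[i][0] == name:
                return self._dropped.pop(i)[1]
        raise LookupError(f"{name} not in recycle bin")

bin_ = RecycleBin()
bin_.drop("EMP", {"rows": 10})   # older version
bin_.drop("EMP", {"rows": 25})   # dropped later
print(bin_.flashback_to_before_drop("EMP"))  # {'rows': 25}
```

A second FLASHBACK of the same name would then recover the older version, mirroring how repeated flashbacks peel back through identically named entries.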
Recycle Bin

The Recycle Bin is like a data dictionary table; it has no
tablespace of its own, and dropped objects remain in their
original tablespace.

Slide -85
Flashback Query Data

Slide -86
Methods of Backup and Recovery

Two basic methods of backup and recovery:


– Cold backups
– Hot backups

Source: Oracle 10g DB Administrator: Implementation and Administration Slide -87


Cold Backups

 For a cold backup, shut down the database completely and


then copy all the files
– All datafiles
– All redo log files
– All archive log files
– All controlfiles
– Optionally you can also back up parameter files and
any networking configuration files

 Restore at least all the datafiles and controlfiles


– Optionally use more current redo log files, archive log
files, and controlfiles—allowing a recovery by applying
redo log entries to datafiles

Slide -88
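The cold-backup steps above amount to a straight file copy while the database is down. A hedged Python sketch (the directory layout and file extensions are invented for illustration; a real cold backup would copy Oracle's actual datafiles, logs, and controlfiles):

```python
# Sketch of a cold backup: with the database shut down, copy every
# datafile, redo log, archive log, and controlfile into a backup
# directory. File layout and extensions here are invented.
import shutil
import tempfile
from pathlib import Path

def cold_backup(db_dir: Path, backup_dir: Path) -> list:
    backup_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in db_dir.iterdir():
        if f.suffix in {".dbf", ".log", ".arc", ".ctl"}:  # DB files only
            shutil.copy2(f, backup_dir / f.name)          # keeps timestamps
            copied.append(f.name)
    return sorted(copied)

tmp = Path(tempfile.mkdtemp())
db, bak = tmp / "db", tmp / "backup"
db.mkdir()
for name in ["users01.dbf", "redo01.log", "control01.ctl", "notes.txt"]:
    (db / name).write_text("data")
print(cold_backup(db, bak))  # ['control01.ctl', 'redo01.log', 'users01.dbf']
```

Note that non-database files (here `notes.txt`) are skipped; parameter and networking files would be added to the extension set if desired.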
Cold Backup
 Since the database is shut down, a cold backup provides a consistent snapshot of the
database to be copied.

Source: Oracle 10g DB Administrator: Implementation and Administration Slide -89


What is a Hot Backup?

 The objective of a hot backup is to obtain a snapshot of all data in


the database
– Can create a backup that is reconstructable by applying log
entries to datafiles, based on SCN matches between datafiles,
controlfiles, and redo log entries

 Hot backup: performed when DB is online, active, and available


for use
– Many tools and methods for performing hot backups in Oracle
Export and Import Utilities
Backup Mode Tablespace Copies
RMAN (Recovery Manager)
Oracle Enterprise Manager and the Database Control

Source: Oracle 10g DB Administrator: Implementation and Administration Slide -90


More On Hot Backup

– Takes a snapshot of a database one file or type of file


at a time
Not necessarily consistent across all files in
backup

– Made of pieces of a DB; the files making up a
complete DB backup are not recoverable to a working
database as a group
Individual files can be slotted into a running DB
and can be recovered individually, or as a group

Source: Oracle 10g DB Administrator: Implementation and Administration Slide -91


Backup Strategy

 A backup strategy is required to plan ahead:


– What types of backups should you use?

– Which tools should you use to back up, and what


tools will you use in the event of failure?

– How often should you back up your database?

Source: Oracle 10g DB Administrator: Implementation and Administration Slide -92


Backup Strategy

 Establish a plan before implementing backups

 Establish a strategy to allow for a better selection of


options when implementing backups

 Backup strategy is dependent on factors such as the


type of DB, how much data can be lost, and available
equipment

Source: Oracle 10g DB Administrator: Implementation and Administration Slide -93


Type of Database

 An OLTP database can be large and active


– Performing regular cold backups is generally
unacceptable as it requires a complete DB shutdown
– OLTP DBs must often be available 24 hours a day
– OLTP DBs tend to change rapidly, in small chunks, in
many different parts of the database, or all at once
– Incremental backups using RMAN are useful (they only
copy what has changed since previous backup)
The same rule applies to data warehouses because the
amount of regular updating is small in relation to the
physical size of the entire data warehouse
– Large parts of data warehouses are often static,
and even read-only, so they need only a single backup

Slide -94
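The incremental-backup idea mentioned above (copy only what changed since the previous backup) can be sketched in a few lines. This version keys "changed" on file modification time; real RMAN incrementals track changed blocks via SCNs, so treat this as a simplification with invented file names:

```python
# Sketch of an incremental backup: copy only files modified since the
# previous backup, keyed on mtime (a simplification of RMAN's
# block-level change tracking).
import os
import shutil
import tempfile
from pathlib import Path

def incremental_backup(db_dir: Path, backup_dir: Path, last_backup_time: float):
    backup_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in db_dir.iterdir():
        if f.is_file() and f.stat().st_mtime > last_backup_time:
            shutil.copy2(f, backup_dir / f.name)
            copied.append(f.name)
    return sorted(copied)

tmp = Path(tempfile.mkdtemp())
db, bak = tmp / "db", tmp / "bak"
db.mkdir()
(db / "old.dbf").write_text("x")
os.utime(db / "old.dbf", (100, 100))   # unchanged since last backup
(db / "new.dbf").write_text("y")
os.utime(db / "new.dbf", (200, 200))   # modified after last backup
print(incremental_backup(db, bak, last_backup_time=150))  # ['new.dbf']
```

Only the file touched after the last backup time is copied, which is why incrementals stay small for mostly static data warehouses.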
Database Availability (24x7x365)

 A database must be available globally without


interruption
– Hot backups are essential
– Additionally, hot backups, especially in the
case of using RMAN, can allow for recovery of
failure due to partial errors such as a single disk
failure in a collection of disks, or the loss of a
single table
– Essential requirement is availability
– More likely to apply to high-concurrency
OLTP data, rather than to data warehouse data
Slide -95
Data Change Speed
 OLTP DBs and data warehouses can change rapidly

 A data warehouse has new data appended at regular


intervals

 An OLTP DB has small amounts of data changed in


all parts of DB, around the clock

 In terms of recoverability, a data warehouse could


simply have batch processing re-executed
– An OLTP DB has to recall all transactions from log
files and essentially re-execute them on recovery
Slide -96
Backup Strategy

 Devise a backup strategy based on how long it will


take to recover the database to an acceptable point,
and perform the recovery as fast as possible

Slide -97
Acceptable Loss Upon Failure

 Acceptable loss: how much data can a company lose while


maintaining usability/availability of data
– The smaller the acceptable loss, the more complex
backups become and the longer they take to execute (and
the more time needed if restoration/recovery is required)
– Examples:
An OLTP database requires zero loss
A data warehouse can often be rebuilt by re-
executing batch processing
– Factors include: amount of storage capacity available
for backups, type of media used for backups, and minor
factors like network bandwidth

Slide -98
Available Equipment

 Backup to disk is much faster than backup to tape


– However, if you need to retain backups for a number of
years, using tape backups is much easier
– Typically, many database installations will use a
combination of both disk and tape backup storage
Recent backups will be stored on disk, allowing for
rapid and specific recovery scenarios
After a while, disk backups could be transferred to a
sequential media such as tape, where recovery
would be naturally slower and more cumbersome,
but less expensive and easier to manage

Slide -99
Planning for Potential Disaster and Recovery

 Always plan for a potential disaster!

 Backup and recovery should be at the top of the list of priorities

 Automate the backup process if possible


– Use scripting and scheduling to perform backups
periodically and automatically
– RMAN allows full automation of a backup strategy,
supports embedded scripting, and can execute backup
processing in parallel or on a specific node in a
clustered environment
– Test existing backup implementation if possible and
always test anything you construct as new, preferably
off the production server environment

Slide -100
Physical Standby Database

Source: Oracle 10g DB Administrator: Implementation and Administration Slide -101


Logical Standby Database

Source: Oracle 10g DB Administrator: Implementation and Administration Slide -102


Replications
 Master-To-Slave: Any changes made to the master database are replicated to all
slave databases.
 Master-To-Master: Any changes made to any one of the databases are replicated
to all other databases.

Source: Oracle 10g DB Administrator: Implementation and Administration Slide -103


Remote Backup Systems

 Remote backup systems provide high availability by


allowing transaction processing to continue even if the
primary site is destroyed.

Silberschatz 16.9

Slide -104
Four main issues concerning remote backup

1. Detection of failure:
 Backup site must detect when primary site has failed
– To distinguish primary site failure from link failure, maintain several
communication links between the primary and the remote backup.

2. Transfer of control:
– To take over control, the backup site first performs recovery using its copy
of the database and all the log records it has received from the
primary. Thus, completed transactions are redone and incomplete
transactions are rolled back.
– When the backup site takes over processing, it becomes the new
primary.
– To transfer control back to the old primary when it recovers, the old
primary must receive redo logs from the old backup and apply all updates
locally.

Slide -105
Remote Backup Systems (Cont.)

3. Time to recover: To reduce delay in takeover, backup site periodically


processes the redo log records (in effect, performing recovery from the previous
database state), performs a checkpoint, and can then delete earlier parts of
the log.

– Hot-Spare configuration permits very fast takeover:


 Backup continually processes redo log records as they arrive,
applying the updates locally.
 When failure of the primary is detected the backup rolls back
incomplete transactions, and is ready to process new transactions.

– Alternative to remote backup: distributed database with replicated data


 Remote backup is faster and cheaper, but less tolerant to failure
– more on this in Silberschatz text Chapter 22 (distributed
databases)

Slide -106
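The hot-spare configuration above can be sketched as a backup site that applies redo records the moment they arrive, keeping only an undo list per open transaction; takeover then just rolls back whatever never committed. A minimal Python model (record format invented, as in the earlier recovery sketch):

```python
# Sketch of a hot-spare backup site: redo is applied as it arrives,
# so takeover only needs to undo incomplete transactions.

class HotSpare:
    def __init__(self):
        self.db = {}
        self.pending = {}            # txn -> list of (item, before_image)

    def receive(self, rec):
        if rec[0] == "write":
            _, txn, item, before, after = rec
            self.pending.setdefault(txn, []).append((item, before))
            self.db[item] = after    # apply the update immediately
        elif rec[0] == "commit":
            self.pending.pop(rec[1], None)   # nothing left to undo

    def take_over(self):
        # Primary failed: undo every transaction that never committed.
        for undo_list in self.pending.values():
            for item, before in reversed(undo_list):
                self.db[item] = before
        self.pending.clear()
        return self.db

spare = HotSpare()
spare.receive(("write", "T1", "A", 1, 2))
spare.receive(("commit", "T1"))
spare.receive(("write", "T2", "B", 5, 9))   # T2 incomplete at failure
print(spare.take_over())  # {'A': 2, 'B': 5}
```

Because redo is already applied, takeover time is dominated by the (usually short) undo of in-flight transactions rather than by replaying the whole log.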
Remote Backup Systems (Cont.)

4. Time to commit
– Ensure durability of updates by delaying transaction commit until update is
logged at backup; avoid this delay by permitting lower degrees of durability.

– One-safe: commit as soon as transaction’s commit log record is written at


primary
 Problem: updates may not arrive at backup before it takes over.

– Two-very-safe: commit when transaction’s commit log record is written at


primary and backup
 Reduces availability since transactions cannot commit if either site fails.

– Two-safe: proceed as in two-very-safe if both primary and backup are active.


If only the primary is active, the transaction commits as soon as commit log
record is written at the primary.
 Better availability than two-very-safe; avoids problem of lost transactions
in one-safe.

Slide -107
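The three durability levels above reduce to a small decision rule about when the primary may acknowledge COMMIT. A hedged Python sketch of that rule (the function and flag names are invented for illustration):

```python
# Sketch of the commit-durability levels for a remote backup system.
# Returns True when the transaction may report COMMIT to the client.

def can_commit(mode, logged_at_primary, logged_at_backup, backup_alive):
    if mode == "one-safe":
        # Commit as soon as the commit record is written at the primary.
        return logged_at_primary
    if mode == "two-very-safe":
        # Commit only when the record is written at BOTH sites.
        return logged_at_primary and logged_at_backup
    if mode == "two-safe":
        # Like two-very-safe while both sites are up; degrade to
        # one-safe behavior if the backup is down.
        if backup_alive:
            return logged_at_primary and logged_at_backup
        return logged_at_primary
    raise ValueError(mode)

# Backup site is down, commit record written only at the primary:
print(can_commit("one-safe", True, False, False))       # True
print(can_commit("two-very-safe", True, False, False))  # False
print(can_commit("two-safe", True, False, False))       # True
```

The example shows why two-very-safe hurts availability (no commits while either site is down) while two-safe keeps committing yet avoids one-safe's lost-transaction problem whenever the backup is reachable.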