
Excess redo log generation during Hot Backup Oracle 9i

24 Comments Posted by advait on May 27, 2008


Many of you must have heard or experienced that while taking a hot backup of a database, the LGWR process writes aggressively. That is, more redo data gets written to the redo log files, and consequently more archive logs get generated.
Here is the common misconception. If someone asks why excessive redo logs and archive logs get generated when we start a hot backup of a database, we quickly answer: it's simple, when we put a tablespace in hot backup mode, Oracle takes a checkpoint of the tablespace and the datafiles belonging to it are frozen. Any user activity on objects in this tablespace won't write data to these datafiles; instead it will write to the redo log files, so obviously there will be more redo log generation.
Well, this is COMPLETELY WRONG !!!
I will come straight to the point and explain what happens when we put a tablespace in hot backup mode.
The first assumption, that the datafiles belonging to a tablespace in hot backup mode are frozen, is wrong. The datafiles are not frozen; only the datafile headers are frozen !! So when you put the tablespace in backup mode, Oracle takes a checkpoint and updates the datafile headers with the checkpoint SCN, and from then on the headers stay frozen until we take the tablespace out of backup mode.
The rest of each datafile (everything other than the header) behaves as normal, and data changes are written to it continuously.
Now you may want to ask: do I mean that the datafiles get updated continuously even while we are copying them to the backup location? The answer is YES. Never think that the datafile you are copying is consistent. No, the datafile changes continuously !!!
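You can watch this for yourself. A minimal sketch in SQL*Plus (the tablespace name USERS is just an example):

-- Checkpoint the tablespace and freeze its datafile headers
ALTER TABLESPACE users BEGIN BACKUP;

-- The affected files now show as ACTIVE in V$BACKUP
SELECT file#, status, change# FROM v$backup;

-- The header checkpoint SCN stays frozen at the BEGIN BACKUP value,
-- even though DBWR keeps writing changed blocks into the file
SELECT file#, checkpoint_change# FROM v$datafile_header;

-- Unfreeze the headers
ALTER TABLESPACE users END BACKUP;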
You might then want to ask a couple more questions.
1) If the backup file is not consistent and changes continuously, how is Oracle able to recover the database when we restore that datafile?
2) If data changes are happening continuously on the datafiles anyway, why is there excess redo log generation?
That's it !! Don't ask me more than this. Let me explain the answers to these questions.
Consider a typical case, where an Oracle database is installed on a Linux platform. The standard Oracle block size is 8K, and let's say the OS-level block size is 512 bytes. Now when we put the tablespace in begin backup mode, a checkpoint has happened and the datafile header is frozen.
You find the files belonging to this tablespace and start copying them using an OS command. The OS command copies as per the OS block size. Say the OS hands you 8 blocks to copy; that means you are copying 4K (512 bytes X 8) to the backup location, i.e. half of an Oracle block. Now this copy process can be preempted depending on CPU load. Suppose that after copying those 8 blocks (4K, half of an Oracle block) your process gets preempted and the CPU is given to some other process. Meanwhile DBWR changes the very block you have copied halfway (since the datafile is not frozen, only the header, continuous updates can happen to the datafile). After a while your process gets the CPU back and copies the next 8 blocks (the other half of the Oracle block). Now here is the problem !!! We copied half of the Oracle block as it was at time T0 and the other half as it was at time T1, and in between the block changed. Does this sound consistent? Not to me !! Such a block is called a fractured block.
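For completeness, here is what a minimal user-managed hot backup of one tablespace looks like as a SQL*Plus script (paths and names are illustrative; HOST runs an OS command):

-- Freeze the datafile headers of the tablespace
ALTER TABLESPACE users BEGIN BACKUP;

-- Copy the still-changing datafile with an OS command; this is
-- exactly where fractured blocks can sneak into the copy
HOST cp /u01/oradata/ORCL/users01.dbf /backup/users01.dbf

-- Unfreeze the headers; full-block logging stops here
ALTER TABLESPACE users END BACKUP;

-- Archive the current redo so the logs covering the copy window
-- are available to recover the backup copy later
ALTER SYSTEM ARCHIVE LOG CURRENT;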
Well, since OS commands copy files like this, Oracle has to do something so that recovery won't run into problems.
Usually, for a tablespace that is not in begin backup mode, when a transaction happens Oracle generates redo information and puts it in the redo log file. This is the bare minimum information Oracle needs in order to redo the change; it does not include the complete block. In begin backup mode, however, when a transaction changes any block FOR THE FIRST TIME, Oracle copies the complete block image to the redo log file. This happens only the first time. If a subsequent transaction updates the same block again, Oracle does not copy the complete block to redo; it generates only the minimum information needed to redo the change. Because Oracle has to log the complete block the first time it changes in begin backup mode, we say that excess redo gets generated when we put a tablespace in begin backup mode.
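You can measure this for yourself with the session statistic 'redo size' (a sketch; run the same small UPDATE once in normal mode and once in backup mode, snapshotting before and after each run):

-- Redo bytes generated by this session so far
SELECT m.value AS redo_bytes
FROM v$mystat m, v$statname n
WHERE n.statistic# = m.statistic#
AND n.name = 'redo size';

-- Run an UPDATE touching a handful of blocks and re-check the
-- delta. After ALTER TABLESPACE ... BEGIN BACKUP, the first change
-- to each block logs the whole block image, so the delta is
-- visibly larger.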
The question arises: why does Oracle have to copy the complete block to the redo log files? As you have seen above, during the copy of a datafile there can be many fractured blocks, and during restore and recovery Oracle is going to put those blocks back and try to recover them. Now assume a block is fractured and all Oracle has is the minimal redo information it normally generates. Under such a condition it won't be possible for Oracle to recover that block. So instead, during the recovery process Oracle simply copies the entire block image back from the redo log files to the datafile. This makes the datafile consistent. The recovery process is thus what takes care of all the fractured blocks and makes it possible to recover the database.
I hope this explains the 2 questions above.
Now you can also easily explain why hot backup is not possible if the database is in NOARCHIVELOG mode.
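You can see Oracle enforce this directly; in NOARCHIVELOG mode an attempt to begin a hot backup fails roughly like this:

SQL> ALTER TABLESPACE users BEGIN BACKUP;
ALTER TABLESPACE users BEGIN BACKUP
*
ERROR at line 1:
ORA-01123: cannot start online backup; media recovery not enabled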
When you take a backup using RMAN, it does not generate excessive redo. The reason is simple: RMAN is intelligent. It does not copy OS blocks; it copies datafiles Oracle block by Oracle block, so each block in the backup is consistent.
An RMAN hot copy of datafile 1, for example, will have Oracle blocks from all possible times from when the backup started to when it finished. It will, internally, be a complete mess and quite unusable in a recovery scenario on its own.
However, none of the Oracle blocks within that datafile copy will be *internally* inconsistent in their own right. And that is why we don't need a consistent copy of those blocks in the redo logs, as we would if we'd left it to the operating system to copy the file.
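For comparison, the RMAN equivalent needs no BEGIN/END BACKUP at all. A minimal sketch, using datafile 1 as an example:

RMAN> BACKUP DATAFILE 1;
RMAN> # or take an image copy of the file:
RMAN> COPY DATAFILE 1 TO '/backup/users01.dbf';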
Hope this helps !!

24 Responses to Excess redo log generation during Hot Backup Oracle 9i

1.

dizwell May 27, 2008 at 10:37 pm


Sort of.
Your last sentence ends wrongly, however. Even with RMAN, **files** are very much INconsistent. It is the *blocks* which are consistent in an RMAN backup, not the files.
An RMAN hot copy of datafile 1, for example, will have Oracle blocks from all possible times from when the backup started to when it finished. It will, internally, be a complete mess and quite unusable in a recovery scenario on its own.
However, none of the Oracle blocks within that datafile copy will be *internally* inconsistent in their own right. And that is why we don't need a consistent copy of those blocks in the redo logs as we would if we'd left it to the operating system to copy the file.
Reply

2.

advait May 28, 2008 at 10:15 am


Yes, that's right. I meant to say that the block will be consistent and not the datafile, since with RMAN the smallest unit of copy is the Oracle block. Sorry for the confusion.
Thanks for correcting !!
Reply

3.

Prateek Mahajan June 9, 2008 at 9:27 am


Nice article, clearing a very common doubt. However, in the typical case example that you have given, the O/S block must be 512 bytes and not 512 KB. Similarly, the figure in parentheses written as (512K X 4) must be corrected to (512 bytes X 8), i.e. eight O/S blocks of 512 bytes each, totalling 4K.
Reply

4.

Gaurav June 12, 2008 at 11:09 pm


Hi,
I have a question. We have an Oracle database sitting on a Windows environment. We are using RMAN to back up the database. I have observed that on some days the archive logs generated total as much as 14GB and on other days 2.5GB. I am wondering how this happens. The database activity is more or less consistent throughout the week, as ours is a 24*7 environment. Any idea why this happens?
Reply

5.

advait June 13, 2008 at 11:46 am


Hi Gaurav,

Usually the size of an archive log file cannot exceed the size of the online redo log files. So if your online redo log file is 100MB, then the max size of your archive log files cannot exceed 100MB. It can be less than that as well, in case we use the ALTER SYSTEM SWITCH LOGFILE or ALTER SYSTEM ARCHIVE LOG ALL commands.
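You can compare the day-by-day generation with a query like this (a sketch against v$archived_log):

-- Total archived redo per day, in MB
SELECT TRUNC(completion_time) AS day,
       ROUND(SUM(blocks * block_size)/1024/1024) AS mb
FROM v$archived_log
GROUP BY TRUNC(completion_time)
ORDER BY 1;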
Could you please tell me what is the size of your redo log file?
Regards,
Advait Deo
Reply

6.

Gaurav June 22, 2008 at 10:15 pm


Hi Advait,
Thanks a lot for your reply. I think I confused you with my question, so I will rewrite it. Our database size is about 130GB and the size of each redo log file is 102400KB. We have RMAN backups, and the retention policy for the redo logs and the backups is 4 days. We do not have incremental backups; a full backup of the database is taken every night. We have a standby database to which the logs are transferred with the help of Data Guard. Now for some time I have observed that there is an excess amount of archive log generation on certain days, while on other days it is just below normal. For example, on the 16th the total size of the archive logs generated was about 19GB, and on Thursday it was 2GB. The database activity was more or less equivalent, as we have 24*7 operations running on the database. I want to know how this is happening?
Reply

7.

stalin August 30, 2008 at 5:46 am


backup
Reply

8.

Pingback: Oracle 9i Undo Management Mehrajhussain's Weblog

9.

N.Vasantha Kumar September 23, 2008 at 7:27 am


Sir,
I am not clear on how, during hot backup, data is written to the datafiles. Since we keep the tablespace in begin backup mode, the headers of the tablespace's datafiles are frozen.
We take the backup and data is being sent to the backup location; on the other hand data is being written to the datafiles, but the headers are not updated. Which data is being backed up, and how does Oracle know about the new data (newly inserted data)?
Reply

10.

advait September 23, 2008 at 11:46 am


The answer to your question is the archive log files. All the information about the newly inserted data is available in the archive log files. That is how Oracle knows about it.
Reply

11.

rajendra prasad September 26, 2008 at 3:29 pm


Yes, this helped me to understand what happens in hot backup mode.

Thanks,
Rajendra Prasad.
Reply

12.

raj September 27, 2008 at 7:34 pm


Thanks a lot for this extensive information.
Reply

13.

Shiv Mohan October 3, 2008 at 11:09 am


I have another query. During hot backup lots of transactions are going on, and so lots of redo logs and finally lots of archive logs get generated.
Then how do the datafiles get synced?
Reply

14.

advait October 6, 2008 at 7:30 am


Data gets written to the datafile during hot backup, so only the header gets synced up when we do end backup. The SCN number in the datafile header gets updated to the current SCN.
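A small sketch to watch this happen (USERS as an example tablespace):

-- During backup mode the file shows ACTIVE and its header SCN is frozen
SELECT file#, status, change# FROM v$backup WHERE status = 'ACTIVE';

ALTER TABLESPACE users END BACKUP;

-- After END BACKUP the header checkpoint SCN moves forward
SELECT file#, checkpoint_change# FROM v$datafile_header;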
Reply

15.

prthvira October 30, 2008 at 1:32 pm


Great information, but one query: once we end hot backup, the SCN in the datafile header is updated to the current SCN. At that time, are any changes or inserts that happened during the backup written to the datafile from the archived redo logs? If not, then why is it compulsory to put the database in archive log mode during hot backups? (I am not concerned about restore and recover, only about the hot backup process.)
Reply

16.

sridhar February 3, 2009 at 2:56 pm


Hi,
I want to know why we apply archive logs before media recovery.
Thanks,
kola.sreedhar
Reply

17.

Nirnay Jaiswal March 23, 2009 at 8:38 am


Hi Prthvira,
Nothing is written to the datafiles from the archived redo logs after the hot backup ends. Only the datafile header is updated with the current SCN.

Now, why is it compulsory to put the database in archive log mode during hot backup? Because when you start copying the datafile through O/S commands, it takes some time to copy the file. Say at time t1 you start to copy the datafile dat01.dbf to dat01_bak.dbf and the SCN was SCN1, and when the copying finished at time t2 the SCN had changed to SCN1 + N. So you have blocks in your copied file dat01_bak.dbf with SCNs ranging from SCN1 to SCN1 + N. But all the blocks have to be at the same SCN to be consistent and usable. So to make the backup copy dat01_bak.dbf consistent, you need to have the archive logs covering SCN1 to SCN1 + N.
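To illustrate (a sketch; :scn1 and :scn1_plus_n stand in for the SCNs above), these would be the archived logs recovery needs to make the copy consistent:

-- Archived logs whose SCN range overlaps the backup copy window
SELECT sequence#, first_change#, next_change#
FROM v$archived_log
WHERE next_change# > :scn1
AND first_change# <= :scn1_plus_n
ORDER BY sequence#;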
Hope this helps. Nirnay Jaiswal
Reply

18.

jeevan May 30, 2009 at 6:00 pm


Hi all masters,
I have a doubt: since data gets written into the datafiles anyway, why should complete blocks get written into the redo logs? Is there any use for that?
Why can't the redo logging go on as usual?
Regards,
Jeevan
Reply

19.

akhil mahajan July 27, 2009 at 7:17 am


Originally posted by the article writer:
The question arises: why does Oracle have to copy the complete block to the redo log files? As you have seen above, during the copy of a datafile there can be many fractured blocks, and during restore and recovery Oracle is going to put those blocks back and try to recover them. Now assume a block is fractured and all Oracle has is the minimal redo information. Under such a condition it won't be possible for Oracle to recover that block. So instead, during the recovery process Oracle simply copies the entire block image back from the redo log files to the datafile. This makes the datafile consistent.
The question is still unanswered. Why can't Oracle just go on and apply the changed entries in the block, and why is redo for the whole block generated? I understand that the redo generated is normally minimal, but do we actually need the redo of the whole block?
I have never come across the right explanation for this question. I hope you'll help me.
Reply

20.

Sagar B V February 1, 2010 at 3:40 pm


Hi everyone,
Really good explanation for junior DBAs!
Thanks a lot
Reply

21.

ramesh July 20, 2010 at 6:17 pm


Hi,
I have one doubt: when a tablespace or datafile is taken offline, which background process handles it, and how can I see that?

Reply

22.

Sreenivas August 7, 2010 at 10:01 am


Nice explanation. Thanks.
Reply

23.

asha singh October 21, 2010 at 11:16 am


Difference between full backup, whole backup & partial backup?
Reply

24.

sanjeev July 21, 2011 at 11:18 am


That is great Advait, it's a really wonderful description of excessive redo generation.
Reply
