Monday, June 15, 2009

Identify and fix table fragmentation in Oracle 10g - how?

Table fragmentation – when?


If a table is only subject to inserts, there will not be any fragmentation.
Fragmentation creeps in when we update or delete data in a table.
The space freed up during non-insert DML operations is not immediately re-used (and sometimes may never be reused at all). This leaves holes in the table, which results in table fragmentation.

To understand this more clearly, we need to be clear on how Oracle manages space for tables.

The “high water mark” (HWM) of a table defines the border line between ever-used and never-used space. While performing a full table scan, Oracle will always read the data up to the HWM. If there is a lot of free space within the HWM, that is read too, degrading the performance of the FTS.

Now let's see how to identify the HWM, the unused (never used) space, and the free space (used, but since deleted/updated), and then decide whether the table is a candidate for reorganization or not.


SQL> create table test as select * from dba_tables; -- Create a table

Table created.

SQL> analyze table test compute statistics; -- Analyze it

Table analyzed.


SQL> select blocks "Ever Used", empty_blocks "Never Used", num_rows "Total rows"
2 from user_tables where table_name='TEST'; -- The number of blocks used/free

Ever Used Never Used Total rows
---------- ---------- ----------
49 6 1680



SQL> delete from test where owner='SYS'; -- I'm deleting almost half the rows.

764 rows deleted.

SQL> commit;

Commit complete.


SQL> analyze table test compute statistics; -- Analyze it again

Table analyzed.

SQL> select blocks "Ever Used", empty_blocks "Never Used", num_rows "Total rows"

2 from user_tables where table_name='TEST'; -- No difference in blocks usage

Ever Used Never Used Total rows
---------- ---------- ----------
49 6 916


Even though you deleted almost half the rows, the output above shows that the table's HWM is still at 49 blocks, and to perform any FTS, Oracle will still read up to 49 blocks to search the data. If your application is written such that there are many FTS on this table, you may consider reorganizing it.
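To put a number on the wasted space, you can compare the size the table occupies up to the HWM with the size its rows actually need. A rough sketch (it assumes an 8 KB block size and freshly gathered statistics; adjust 8192 to your DB_BLOCK_SIZE):

```sql
SELECT table_name,
       ROUND(blocks * 8192 / 1024 / 1024, 2)          AS hwm_mb,    -- space read by a FTS
       ROUND(num_rows * avg_row_len / 1024 / 1024, 2) AS actual_mb, -- space the rows need
       ROUND((blocks * 8192 - num_rows * avg_row_len)
             / 1024 / 1024, 2)                        AS wasted_mb
FROM   user_tables
WHERE  table_name = 'TEST';
```

A large gap between hwm_mb and actual_mb on a table that is frequently full-scanned is a reorganization candidate.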



Reasons to reorganize



a) Slower response time (from that table)
b) High number of chained (actually migrated) rows
c) Table has grown many-fold and the old space is not getting reused

Note: Index-based queries may not benefit from a reorg as much as queries that do full table scans.

How to reorganize?

Before Oracle 10g, there were mainly 2 ways to do the reorganization of the table
a) Export, drop, and import.
b) Alter table move (to another tablespace, or same tablespace).

Oracle 10g provides us a new way of reorganizing the data.

Shrink command: This command is only applicable to tables in tablespaces with automatic segment space management (ASSM).

Before using this command, you should have row movement enabled.

SQL> alter table test enable row movement;

Table altered.

There are 2 ways of using this command.


1. Break it into two parts: in the first part, rearrange the rows; in the second part, reset the HWM.

Part 1: Rearrange (all DML can happen during this time)

SQL> alter table test shrink space compact;

Table altered.

Part 2: Reset HWM (No DML can happen, but this is fairly quick; in fact it usually goes unnoticed.)

SQL> alter table test shrink space;

Table altered.

2. Do it in one go:

SQL> alter table test shrink space; (Both rearranging rows and resetting the HWM happen in one statement)

Table altered.
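Putting the steps together, a minimal end-to-end sketch (the CASCADE clause, which also shrinks dependent index segments, is an extra shown here for reference; verify its behavior on your version before relying on it):

```sql
alter table test enable row movement;    -- prerequisite: shrink relocates rows
alter table test shrink space compact;   -- step 1: rearrange rows, DML still allowed
alter table test shrink space;           -- step 2: reset the HWM (brief lock, very quick)
-- alternative single statement that also shrinks dependent index segments:
-- alter table test shrink space cascade;
```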



A few advantages over the conventional methods

1. Unlike "alter table move ..", indexes are not left in an UNUSABLE state. After the shrink command, indexes are updated as well.

2. It's an online operation, so you don't need downtime to do this reorg.

3. It doesn't require any extra space for the process to complete.


Conclusion

It's a new 10g feature to shrink (reorg) tables (almost) online, which can be used with automatic segment space management.

Size your Undo tablespace

What should the size of the UNDO tablespace be? Generally this question is vaguely answered by saying that sizing comes with (DBA) experience, by looking at the load on the server, or sometimes even by counting the ORA-1555 or out-of-space errors.

This paper is meant to help DBAs calculate the size of the UNDO tablespace using a simple formula.

While designing an application, it is generally tough to know the number of transactions, and subsequently the number of rows changed per second.
So I suggest starting with a “big undo tablespace” and then, based on load, doing some calculations and resizing your UNDO tablespace.

In my case, one of the applications was going to production (live), and I had no idea how many transactions would happen against this database. All I was told was that there would be “optimum” (transactional) activity on this database. The word “optimum” itself is vague.

So I started with an UNDO tablespace of 2 GB, with datafiles set to autoextend "on".

Note:
In production, you must be very careful in using this (autoextend on), as the space may grow to infinity very fast. So my advice is either don't use this option, or use it with "maxsize", or continuously monitor space (which is tough).


A month later, I checked the activity from V$UNDOSTAT.

Here is the step by step approach:


Step 1: Longest running query.


SQL> select max(maxquerylen) from v$undostat;


MAX(MAXQUERYLEN)
----------------
1793

This gives you the ideal value for UNDO_RETENTION. To be on the safer side, you should add a few more seconds to get the right value. So in my case, the undo retention should be, say, 2000 secs.

Step 2: Size of UNDO tablespace.


Size of UNDO needed = UNDO_RETENTION x [UNDO blocks generated per sec x DB_BLOCK_SIZE] + overhead (30 x DB_BLOCK_SIZE)

Of these, we already know UNDO_RETENTION and DB_BLOCK_SIZE.

All we need is to find the “UNDO blocks per second”,

which can easily be fetched from V$UNDOSTAT:

SQL> SELECT (SUM(undoblks))/ SUM ((end_time - begin_time) * 24*60*60) "UPS"
2 FROM v$undostat;


UPS
------------------------------
8.11985583

V$UNDOSTAT stores data for every 10-minute interval; begin/end times are the start/end times of those intervals. We multiplied by 24*60*60 because the difference between two dates is in days, and to convert to seconds we need to multiply by 24 hrs * 60 mins * 60 secs.

So now we have all the values needed.

Undo size needed = [8.12 x 2000 x 8192] + [30 x 8192] = 133283840 bytes = 127.11 MB
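The whole calculation can be pushed into a single query against V$UNDOSTAT and V$PARAMETER. A sketch (it assumes UNDO_RETENTION has already been set to the value derived in step 1):

```sql
SELECT ROUND((ur.value * ups.blks_per_sec * bs.value + 30 * bs.value)
             / 1024 / 1024, 2) AS undo_mb_needed
FROM   (SELECT value FROM v$parameter WHERE name = 'undo_retention') ur,
       (SELECT SUM(undoblks) / SUM((end_time - begin_time) * 86400)
               AS blks_per_sec
        FROM   v$undostat)                                           ups,
       (SELECT value FROM v$parameter WHERE name = 'db_block_size')  bs;
```

With the figures above (retention 2000, 8.12 blocks/sec, 8K blocks) this returns roughly 127 MB.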

TOP SQL

One of the important tasks of a DBA is to know what the high-CPU-consuming processes on the database server are.
In my last organization, we used to get a number of requests saying that the DB server was running slow.
Now the problem is that this server hosts 86 databases, and finding the culprit process and database sounds like a daunting task (but it isn't).

See this:

First find out the top CPU processes on your system:

You may use top (or ps aux) or any other utility to find the top CPU-consuming processes.

Here is a sample top output:


bash-3.00$ top
PID USERNAME LWP PRI NICE SIZE RES STATE TIME CPU COMMAND
17480 oracle 11 59 0 1576M 1486M sleep 0:09 23.51% oracle
9172 oracle 258 59 2 1576M 1476M sleep 0:43 1.33% oracle
9176 oracle 14 59 2 1582M 1472M sleep 0:52 0.43% oracle
17550 oracle 1 59 0 3188K 1580K cpu/1 0:00 0.04% top
9178 oracle 13 59 2 1571M 1472M sleep 2:30 0.03% oracle

Look at the first line of the output: process 17480 is consuming 23.51% CPU.

Now, this process can belong to any of the many instances on this server.
To find out which instance this process belongs to:



bash-3.00$ ps -ef | grep 17480

oracle 17480 17479 0 03:02:03 ? 0:48 oracleorcl (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))


The instance name is embedded in the process name: “oracleorcl” means the process belongs to the instance “orcl”.

Now you know which instance is holding that session.
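The extraction can be scripted. A hedged sketch that parses a sample ps line (the field positions assume the ps -ef layout shown above; Oracle shadow processes are named oracle followed by the SID):

```shell
# sample line, as produced by "ps -ef | grep 17480" above
line='oracle 17480 17479 0 03:02:03 ? 0:48 oracleorcl (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))'

# the 8th field is the command name "oracle<SID>"; strip the "oracle" prefix
sid=$(echo "$line" | awk '{print $8}' | sed 's/^oracle//')
echo "$sid"
```

On a real server you would replace the sample line with the live ps output for the PID you found in top.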

Change your environment settings (ORACLE_SID, ORACLE_HOME) to point to this database,
and connect to the database as SYSDBA:

bash-3.00$ sqlplus "/ as sysdba"

SQL*Plus: Release 10.2.0.2.0 - Production on Thu Dec 21 04:03:44 2006

Copyright (c) 1982, 2005, Oracle. All Rights Reserved.


Connected to:
Oracle Database 10g Release 10.2.0.2.0 - Production

SQL> select ses.sid SID,sqa.SQL_TEXT SQL from
2 v$session ses, v$sqlarea sqa, v$process proc
3 where ses.paddr=proc.addr
4 and ses.sql_hash_value=sqa.hash_value
5 and proc.spid=17480;

SID SQL
--------- -----------------
67 delete from test


Now you have the SQL responsible for the process using 23% of the CPU.
In my case it was a deliberate DELETE statement to set up this test, but in your case it could be a query worth tuning.


Often, knowing what the problem is, is most of the solution (at least you know what the issue is).
Whether the issue should be addressed right away or handed over to the development team is a subjective call, on which I don't want to comment.
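If you prefer to start the hunt from inside the database rather than from the OS, a similar list can be pulled per session. A sketch using V$SESSTAT (on most platforms this statistic is in centiseconds, hence the division by 100):

```sql
SELECT ss.sid, ss.serial#, st.value / 100 AS cpu_seconds
FROM   v$session  ss,
       v$sesstat  st,
       v$statname sn
WHERE  sn.name       = 'CPU used by this session'
AND    st.statistic# = sn.statistic#
AND    st.sid        = ss.sid
ORDER  BY st.value DESC;
```

Note this is cumulative CPU since session start, so a long-lived session can rank high without being the current culprit; sample it twice and diff for an accurate picture.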

INDEXES (when to rebuild and why)

When should I rebuild my indexes?


Every change should be driven by a need; I hope all agree with this. So why do many DBAs (not all) rebuild indexes on a periodic basis without knowing the impact?

Let’s revisit the facts stated by many Oracle experts:


- B-tree indexes can never be unbalanced
- The distance of any leaf block (which contains the index value) from the root block is always the same
- B-tree performance is good for both small and large tables and does not degrade with the growth of the table

When will rebuild help?

When the data in the index is sparse (lots of holes in the index, due to deletes or updates) and your queries are usually range based.

If your SQLs use an “=” predicate, rebuilding the index may not help you at all, unless DML (updates/deletes) on the table has caused the height of the index to increase. Due to heavy updates/deletes on a table, space in the index can be left unused (unless it is reused), which causes block splits; if the splitting goes beyond the level of accommodation, the height of the index increases by 1.
In simpler terms, unless you reduce the height (BLEVEL) of the index, the time taken to reach a leaf block from the root block will not change.


As explained above, the height of the index can be decreased by a rebuild (only if the height was increased due to updates/deletes).

Only in the case of range scans may a rebuild help (consider the coalesce option as well):

Select * from emp where empno between 1000 and 2000;

In the above case, there could be a lot of deleted/updated data which will also be read while scanning between 1000 and 2000. Rebuilding will reclaim the unused space, and hence the select could be faster.

• Index fast full scan / index full scan (rare case):
If your SQLs are designed in a way that data is mostly selected from the index (it doesn't go to the table). In this case the whole index is read for the data itself, not for redirecting to the table.

Ex:
Select count(empno) from emp; -- you have an index on empno
Select empno from emp; -- you have an index on empno

Both of the above SELECTs fetch data from the index only. Since the whole index is read, the space used by deleted rows is also read during the SELECT operation.



How to find indexes which are candidates for rebuild?


I know it’s a very arguable question, but let me put my words and then you can comment.

I follow different approaches to find indexes that are candidates for rebuild:

- Find indexes having height (blevel+1) > 4, i.e. indexes having BLEVEL > 3.
How:
SQL> select owner,index_name,table_name,blevel from dba_indexes where blevel > 3;

- Analyze indexes and find those where the ratio (DEL_LF_ROWS/LF_ROWS)*100 is > 20.
How:
First analyze the index with the VALIDATE STRUCTURE option, and then:

SQL> SELECT name,height,lf_rows,del_lf_rows,(del_lf_rows/lf_rows)*100 as ratio FROM INDEX_STATS;


But (a big but), the reason to rebuild should be poor performance of the queries using the index. You should not rebuild an index merely because both of the above conditions are true, if they are not coupled with poor SQL performance.


See this example:

SQL> analyze index TEST_INDX validate structure; -- First analyze the suspect index

Index analyzed.

SQL> SELECT name,height,lf_rows,lf_blks,del_lf_rows FROM INDEX_STATS;

NAME HEIGHT LF_ROWS LF_BLKS DEL_LF_ROWS
------------ ---------- ---------- ----------- -------------
TEST_INDX 8 938752 29575 73342


You can see the height of the index is 8, along with a high number of DEL_LF_ROWS.


SQL> set autotrace on
SQL> set timing on
SQL>
SQL> select count(*) from TEST_TABLE where TEST_COL like 'http://www.hots%';

COUNT(*)
----------
39700


Elapsed: 00:00:27.25

Execution Plan
----------------------------------------------------------
Plan hash value: 870163320


--------------------------------------------------------------------------------
| Id | Operation          | Name      | Rows | Bytes | Cost (%CPU) | Time     |
--------------------------------------------------------------------------------
|  0 | SELECT STATEMENT   |           |    1 |   117 |     10  (0) | 00:00:01 |
|  1 |  SORT AGGREGATE    |           |    1 |   117 |             |          |
|* 2 |   INDEX RANGE SCAN | TEST_INDX |  115 | 13455 |     10  (0) | 00:00:01 |
--------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
764 consistent gets
757 physical reads

0 redo size
516 bytes sent via SQL*Net to client
468 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

Now rebuild the index:

SQL> alter index TEST_INDX rebuild;

Index altered.

SQL> select count(*) from TEST_TABLE where TEST_COL like 'http://www.hots%';

COUNT(*)
----------
39700

Elapsed: 00:00:06.18

Execution Plan
----------------------------------------------------------
Plan hash value: 870163320 -- note: same plan as before, yet faster


--------------------------------------------------------------------------------
| Id | Operation          | Name      | Rows | Bytes | Cost (%CPU) | Time     |
--------------------------------------------------------------------------------
|  0 | SELECT STATEMENT   |           |    1 |   117 |      6  (0) | 00:00:01 |
|  1 |  SORT AGGREGATE    |           |    1 |   117 |             |          |
|* 2 |   INDEX RANGE SCAN | TEST_INDX |  115 | 13455 |      6  (0) | 00:00:01 |
--------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
15 recursive calls
0 db block gets
592 consistent gets
588 physical reads

0 redo size
516 bytes sent via SQL*Net to client
468 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)



SQL> SELECT name,height,lf_rows,lf_blks,del_lf_rows,distinct_keys,used_space FROM INDEX_STATS;

NAME HEIGHT LF_ROWS LF_BLKS DEL_LF_ROWS
------------------------------ ---------- ---------- ---------- -----------
TEST_INDX 4 865410 15434 0



This clearly indicates the rebuild helped my query's performance. The height of the index is reduced to 4, and DEL_LF_ROWS is 0.


Now, coming to the second part of Harvinder's comment:

Possible ways of Rebuilding.

- Online/Offline.

ONLINE Rebuild (8i onwards)

SQL> alter index TEST_INDX rebuild online;

This allows parallel DML to go on while the index is being rebuilt. Remember, an online rebuild requires more space, as it creates the new index before dropping the old one.

Index rebuild is primarily a 3-step process:


Prepare phase: Oracle locks the table for a fraction of a second (hardly noticeable) to build the index structure and populate the data dictionary.

Build phase: Most of the work is done in this phase. Oracle populates the new index from the table while allowing parallel DML; the parallel DML uses a temporary journal table (a B-tree-like structure) to host entries while the new index is being populated.

Merge phase: In the final stage, Oracle merges the new index with the journal table and drops the old index. Even during the merge phase, any changes to the table are recorded in the journal table, and they get merged toward the end of this phase.

From 9i onwards, the online index rebuild feature also covers reverse key, function-based, and key-compressed indexes.

Offline Index rebuild.

SQL> alter index TEST_INDX rebuild;

This is the conventional rebuild, which was used before 8i (and is still available). In this process, Oracle drops the old index and creates a new one. No extra space is required, but parallel DML is not supported.






Should I rebuild or coalesce my indexes ?




There is one more critical aspect I wanted to discuss: the cost/effort related to rebuilding indexes.

Rebuilding an index is quite a costly operation, and you must evaluate the benefit vs. the effort before rebuilding an index.

Rebuilding an index (online) requires additional resources: space, CPU, and time.

Here is one more option, which is less used and probably less popular: “coalesce”.


Rebuild Vs Coalesce


Rebuild


* Can move an index to a different tablespace

* Resource consuming process

* Takes more time

* Creates a new tree

* Shortens the height of an index if it was increased due to DML activities

* Rebuilding can affect future DML, because the index becomes compact and has to extend dynamically for future DML


Coalesce


* Cannot move an index to a different tablespace

* Comparatively less resource consuming

* Takes relatively less time

* Only merges the adjacent free space between leaf blocks within a branch

* Doesn’t shorten the height of the index

* Since coalesce doesn’t affect the total size and only frees up the unused space, it doesn’t affect future DML



Coalescing the index frees up the space of adjacent leaf blocks within a branch block. The number of blocks or extents the index uses will not change, but there will be more free blocks, which can be used for future inserts or updates.

In an OLTP environment, where data is highly volatile, coalesce is better because it doesn’t shrink the index, and the free space remains within the index segment.
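For reference, coalescing the index from the earlier example is a one-line statement (like shrink, it is an online operation):

```sql
alter index TEST_INDX coalesce;  -- merges adjacent free space in leaf blocks; no new segment is built
```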

Oracle10G RMAN Database Duplication

[ Creating a duplicate database using RMAN ]

The V10gREL5 database is in NOARCHIVELOG mode. The duplication is made from
closed-database consistent backups. With ARCHIVELOG-mode databases, you can
use the "set until time | scn | sequence" clause to tell RMAN where to end
the incomplete recovery.

TARGET : V10gREL5
AUXILIARY : RMAN10

[ SQLNET setup for connectivity ]
########################################################################################
# We must use a dedicated listener configuration for RMAN. Using instance registration
# requires the database to be mounted to register with the listener. Since the AUX is
# where we'll connect using SQL*Net, we require SYSDBA access to the NOMOUNT instance.
# This example uses the same server for duplication, but it can be used for remote
# duplication as well. If you are using a disk backup solution and duplicate to a
# remote node, you must first copy the backupsets from the original host's backup
# location to the same mount and path on the remote server. Because duplication
# uses auxiliary channels, the files must be where the I/O pipe is allocated. So the
# I/O will take place on the remote node, and disk backups must be locally available.
#########################################################################################

Listener.ora:
=============

LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = stsmesun1)(PORT = 1521))
)
)
)


SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /database/10i/app/oracle/product/10g_Prod_REL5)
(PROGRAM = extproc)
)
(SID_DESC =
(GLOBAL_DBNAME = V10gREL5)
(ORACLE_HOME = /database/10i/app/oracle/product/10g_Prod_REL5)
(SID_NAME = V10gREL5)
)
(SID_DESC =
(GLOBAL_DBNAME = RMAN10)
(ORACLE_HOME = /database/10i/app/oracle/product/10g_Prod_REL5)
(SID_NAME = RMAN10)
)
)

tnsnames.ora:
=============

RMAN10 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = stsmesun1)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = RMAN10)
)
)

V10R5 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = stsmesun1)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = V10gREL5)
)
)


# Target Database Files:

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/database/10i/app/oracle/oradata/V10gREL5/system01.dbf
/database/10i/app/oracle/oradata/V10gREL5/undotbs01.dbf
/database/10i/app/oracle/oradata/V10gREL5/sysaux01.dbf
/database/10i/app/oracle/oradata/V10gREL5/users01.dbf


[ Create the Auxiliary Database directories needed ]

% cd $ORACLE_HOME/dbs

# Note: I am using a single directory for my datafiles. You can use the
# parameters db_file_name_convert and log_file_name_convert and they will
# generate the "set newname for datafile" syntax needed to change the
# controlfile. These parameters accommodate up to 5 different file placement
# locations using:

db_file_name_convert = ('/old/path1', '/new/path1',
'/old/path2', '/new/path2',
'/old/path3', '/new/path3')
# up to 5 locations from Oracle9i onwards.
log_file_name_convert = ('/old/path1', '/new/path1',
'/old/path2', '/new/path2',
'/old/path3', '/new/path3')
# up to 5 locations from Oracle9i onwards.

# You can also create your own "set newname for datafile" syntax for all your
# files, skipping these parameters altogether. You can still use the
# log_file_name_convert parameter, or you can use the 'logfile' clause to
# specify where the logs will be created and at what size.
#
# Auxiliary datafile directory for this exercise:

"/database/10i/marrocha/RMAN10/oradata"

[ Create the Auxiliary init.ora ]

# the controlfile is created during duplication with the path and name given here
#
# initRMAN10.ora

db_name = RMAN10
db_block_size = 8192
compatible = 10.1.0.1.0
remote_login_passwordfile = exclusive
control_files = ('/database/10i/marrocha/RMAN10/oradata/control01.ctl',
'/database/10i/marrocha/RMAN10/oradata/control02.ctl')
db_file_name_convert = ('/database/10i/app/oracle/oradata/V10gREL5',
'/database/10i/marrocha/RMAN10/oradata')
log_file_name_convert = ('/database/10i/app/oracle/oradata/V10gREL5',
'/database/10i/marrocha/RMAN10/oradata')


# create a passwordfile for remote connections as sysdba

% orapwd password=<password> file=orapwRMAN10

% sqlplus /nolog
SQL> connect / as sysdba
SQL> startup nomount pfile=$ORACLE_HOME/dbs/initRMAN10.ora
SQL> exit


[ RMAN Duplication from tape backup using the Oracle disk API ]

# This is to provide an example of simulating writing to tape using RMAN and
# the Oracle disk API. RMAN Target Configuration:

% rman
RMAN> connect target

connected to target database: V10GREL5 (DBID=2510891965)

# Use the RMAN configure command to create an optimal configuration for your
# business. The channels are configured to allocate from device type SBT by
# default. Since the channels are configured here, we do not need to allocate
# any channels in the RMAN scripts.
# I/O only takes place over the allocated channel. If an AUX channel is allocated,
# only the auxiliary will perform I/O, so there is no worry about overwriting the
# target database's production files. As an added safeguard, an error is raised on
# any attempt to overwrite a target file while it is online.

RMAN> show all;

using target database controlfile instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 3;
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE 'SBT_TAPE' TO 'V10GREL5_%F';
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=oracle.disksbt,
ENV=(BACKUP_DIR=/backup/rman/tape)';
CONFIGURE MAXSETSIZE TO UNLIMITED;
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/backup/rman/snapcf_V10gREL5.f';
RMAN> exit


[ Start the Duplication ]

# set the env to the auxiliary to "startup nomount"

setenv ORACLE_SID RMAN10 # csh
or
ORACLE_SID=RMAN10 export ORACLE_SID # ksh

% sqlplus /nolog
SQL> connect / as sysdba
Connected to an idle instance
SQL> startup nomount pfile=$ORACLE_HOME/dbs/initRMAN10.ora
SQL> exit

# Set your SID back to the TARGET for duplication. The information in the target
# is used by the duplicate to fill in missing information like maxopenfiles,
# maxlogfiles and other values which are only changed when recreating the control
# file. Execute RMAN from the target home. The auxiliary should be a sqlnet
# connection.

[stsmesun1]/home/beta/marrocha> rman trace dup_rman10.log

Recovery Manager: Release 10.1.0.2.0 - Production

Copyright (c) 1995, 2003, Oracle. All rights reserved.

RMAN> connect target

connected to target database: V10GREL5 (DBID=2510891965)

RMAN> connect auxiliary sys/rman10@RMAN10

connected to auxiliary database: RMAN10 (not mounted)

RMAN> duplicate target database to RMAN10 device type sbt;

... snip from output...
contents of Memory Script:
{
Alter clone database open resetlogs;
}
executing Memory Script

database opened
Finished Duplicate Db at 09-DEC-03


To complete the process, remove or comment out the db_file_name_convert and log_file_name_convert parameters,
then restart the database. Your duplicate database is ready for use.

RMAN 'Duplicate Database' Feature in Oracle9i / Oracle 10G

Applies to:
Oracle Server - Enterprise Edition - Version: 9.0.1.4
Oracle Server - Personal Edition - Version: 9.0
Oracle Server - Standard Edition - Version: 9.0
Information in this document applies to any platform.
"Checked for relevance on 20-March-2006"

Purpose
The purpose of this document is to introduce the RMAN 'duplicate database' feature of Oracle9i and Oracle 10G.

Scope and Application
This note is intended for DBAs and Support Personnel.

RMAN 'Duplicate Database' Feature in Oracle9i / Oracle 10G


You can use the RMAN DUPLICATE command to create a duplicate database from target database backups while still retaining the original target database. The duplicate database can be either identical to the original database or contain only a subset of the original tablespaces.

A RAC TARGET database can be duplicated as well; the procedure is the same as below. If the auxiliary instance needs to be a RAC database as well, then run the duplication to a single instance first and convert the auxiliary to RAC after the duplication has
succeeded.

To prepare for database duplication, you must first create an auxiliary instance. For the duplication to work, you must connect RMAN to both the target (primary) database and an auxiliary instance started in NOMOUNT mode.

You must have at least one auxiliary channel allocated on the auxiliary instance. The principal work of the duplication is performed by the auxiliary channel, which starts a server session on the auxiliary host. This channel then restores the necessary backups of the primary database, uses them to create the duplicate database, and initiates recovery.

As part of the duplicating operation, RMAN manages the following:

1. Restores the target datafiles to the duplicate database and performs incomplete recovery by using all available incremental backups and archived logs.

2. Shuts down and starts the auxiliary database.

3. Opens the duplicate database with the RESETLOGS option after incomplete recovery to create the online redo logs.

4. Generates a new, unique DBID for the auxiliary database.


During duplication, RMAN must perform incomplete recovery, because the online redo logs in the target are not backed up and cannot be applied to the auxiliary database. The farthest that RMAN can go in recovering the duplicate database is the most recent redo log archived by the target database.

When duplicating a database, you can do the following:

1. Run the DUPLICATE command with or without a recovery catalog

2. Skip read-only tablespaces with the SKIP READONLY clause. Read-only tablespaces are included by default. If you omit them, then you can add them later.

3. Exclude tablespaces from the auxiliary database with the SKIP TABLESPACE clause. You cannot skip the SYSTEM tablespace or tablespaces containing rollback or undo segments.

4. Create the auxiliary database in a new host. If the directory structure is the same on the new host, then you can specify the NOFILENAMECHECK option and reuse the target datafile filenames for the auxiliary datafiles.

5. Use the SET UNTIL command or DUPLICATE command with the UNTIL clause when creating the auxiliary database to recover it to a noncurrent time. By default, the DUPLICATE command creates the database by using the most recent backups of the target database and then performs recovery to the most recent consistent point contained in the incremental backups and archived logs.


You can register the auxiliary database in the same recovery catalog as the target database. This is possible because RMAN gives the duplicate database a new DBID during duplication.

Preparing the Auxiliary Instance for Duplication:
The instance name used for the AUXILIARY instance in this example is AUX, but it can be any other valid instance name.

Basic Steps



Perform these tasks before performing RMAN duplication:

Task 1: setenv ORACLE_SID AUX
Create an Oracle Password File for the Auxiliary Instance

Task 2: Ensure SQL*Net Connectivity to the Auxiliary Instance.
A SQL*Net connection to the auxiliary instance is not required, but it is recommended, as the SQL*Net setup is easier to do for the auxiliary instance than for the TARGET. RMAN connects to the target and auxiliary instances with SYSDBA privileges; a SQL*Net connection using SYSDBA privileges requires a password file.

Task 3: Create an Initialization Parameter File for the Auxiliary Instance
Mandatory initialization parameter settings for the auxiliary database:


db_block_size =
DB_NAME=AUX
compatible = 9.2.0.0              # should be the same as the target
CONTROL_FILES=(/dup/oracle/oradata/trgt/control01.ctl,
/dup/oracle/oradata/trgt/control02.ctl)
#DB_FILE_NAME_CONVERT=('/oracle/oradata/trgt/','/dup/oracle/oradata/trgt/')
#LOG_FILE_NAME_CONVERT=('/oracle/oradata/trgt/redo','/dup/oracle/oradata/trgt/redo')


Task 4:
Start the Auxiliary Instance NOMOUNT

SQL> CONNECT SYS/oracle@aux AS SYSDBA
STARTUP FORCE NOMOUNT


Task 5: Mount or Open the Target Database
Task 6: Make Sure You Have the Necessary Backups and Archived Redo Logs
Task 7: Allocate Auxiliary Channels if Automatic Channels Are Not Configured

Start RMAN with a connection to the target database, the auxiliary database, and (if you use one) the recovery catalog database. You can start the RMAN executable on any host so long as it can connect to all the instances.

Note: If the auxiliary instance requires a client-side initialization parameter file, then this file must exist on the same host that runs the RMAN executable.

If automatic channels are not configured, then before issuing the DUPLICATE command, manually allocate at least one auxiliary channel within the same RUN block. If the backups reside on disk, then the more channels you allocate, the faster the duplication will be. For tape backups, limit the number of channels to the number of devices available for the operation.

This example assumes the following:


* Using a recovery catalog.
* The target database is on host1 and contains eight datafiles.
* Duplicate the target to database aux on the same host.
* Store the datafiles for AUX under /export/home/oracle/AUX/ subdirectory.
* Exclude tablespace INDX from the duplicate database, but keep all of the other tablespaces.
* Restore the duplicate database to a noncurrent time.
* Two online redo logs groups, each with two members of size 10 Mb.
* Configured the default device to disk.
* The auxiliary instance AUX has initialization parameter file in the default location (so the PFILE parameter is not necessary on the DUPLICATE command).
* Start RMAN from the AUX site.


RMAN> CONNECT TARGET system/system@R920.ca.oracle.com;
CONNECT CATALOG rman/rman@T920.ca.oracle.com;
CONNECT AUXILIARY SYS/oracle;

CONFIGURE CHANNEL DEVICE TYPE disk CLEAR;
CONFIGURE DEFAULT DEVICE TYPE TO disk;
CONFIGURE DEVICE TYPE disk PARALLELISM 3;

# note that a RUN command is necessary because you can only execute
# SET NEWNAME within a RUN command
RUN {
# the DUPLICATE command uses an automatic disk channel
set until time "to_date('Jan 29 2003 10:50:00','Mon DD YYYY HH24:MI:SS')";
SET NEWNAME FOR DATAFILE 1 TO '/export/home/oracle/AUX/system01.dbf';
SET NEWNAME FOR DATAFILE 2 TO '/export/home/oracle/AUX/undotbs01.dbf';
SET NEWNAME FOR DATAFILE 3 TO '/export/home/oracle/AUX/example01.dbf';

SET NEWNAME FOR DATAFILE 5 TO '/export/home/oracle/AUX/tools01.dbf';
SET NEWNAME FOR DATAFILE 6 TO '/export/home/oracle/AUX/users01.dbf';

SET NEWNAME FOR DATAFILE 7 TO '/export/home/oracle/AUX/logmnrts.dbf';
SET NEWNAME FOR DATAFILE 8 TO '/export/home/oracle/AUX/marius01.dbf';


DUPLICATE TARGET DATABASE TO AUX
SKIP TABLESPACE indx
LOGFILE
GROUP 1 ('/export/home/oracle/AUX/redo01a.log',
'/export/home/oracle/AUX/redo01b.log') SIZE 10M REUSE,
GROUP 2 ('/export/home/oracle/AUX/redo02a.log',
'/export/home/oracle/AUX/redo02b.log') SIZE 10M REUSE;
}



And this is what is going on:

executing command: SET until clause
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME

Starting Duplicate Db at 31-JAN-03
using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_2
using channel ORA_AUX_DISK_3
Datafile 4 skipped by request

printing stored script: Memory Script
{
set until scn 5968565;
set newname for datafile 1 to
"/export/home/oracle/AUX/system01.dbf";
set newname for datafile 2 to
"/export/home/oracle/AUX/undotbs01.dbf";
set newname for datafile 3 to
"/export/home/oracle/AUX/example01.dbf";
set newname for datafile 5 to
"/export/home/oracle/AUX/tools01.dbf";
set newname for datafile 6 to
"/export/home/oracle/AUX/users01.dbf";
set newname for datafile 7 to
"/export/home/oracle/AUX/logmnrts.dbf";
set newname for datafile 8 to
"/export/home/oracle/AUX/marius01.dbf";
restore
check readonly
clone database
skip tablespace INDX ;
}
executing script: Memory Script

executing command: SET until clause
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME

Starting restore at 31-JAN-03

using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_2
using channel ORA_AUX_DISK_3
channel ORA_AUX_DISK_1: starting datafile backupset restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to /export/home/oracle/AUX/system01.dbf
restoring datafile 00003 to /export/home/oracle/AUX/example01.dbf
restoring datafile 00007 to /export/home/oracle/AUX/logmnrts.dbf
channel ORA_AUX_DISK_2: starting datafile backupset restore
channel ORA_AUX_DISK_2: specifying datafile(s) to restore from backup set
restoring datafile 00002 to /export/home/oracle/AUX/undotbs01.dbf
restoring datafile 00005 to /export/home/oracle/AUX/tools01.dbf
restoring datafile 00006 to /export/home/oracle/AUX/users01.dbf
restoring datafile 00008 to /export/home/oracle/AUX/marius01.dbf
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=/u01/app/oracle/admin/R920/backup/DB_R920_01ee3suk_1_1 tag=TAG2003129T104747 params=NULL
channel ORA_AUX_DISK_1: restore complete
channel ORA_AUX_DISK_2: restored backup piece 1
piece handle=/u01/app/oracle/admin/R920/backup/DB_R920_02ee3suk_1_1 tag=TAG2003129T104747 params=NULL
channel ORA_AUX_DISK_2: restore complete
Finished restore at 31-JAN-03
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "AUX" RESETLOGS ARCHIVELOG
MAXLOGFILES 5
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 1
MAXLOGHISTORY 226
LOGFILE
GROUP 1 ( '/export/home/oracle/AUX/redo01a.log', '/export/home/oracle/AUX/redo01b.log' ) SIZE 10m REUSE,
GROUP 2 ( '/export/home/oracle/AUX/redo02a.log', '/export/home/oracle/AUX/redo02b.log' ) SIZE 10m REUSE
DATAFILE
'/export/home/oracle/AUX/system01.dbf'
CHARACTER SET WE8ISO8859P1


printing stored script: Memory Script
{
switch clone datafile all;
}
executing script: Memory Script

datafile 2 switched to datafile copy
input datafilecopy recid=1 stamp=484764999 filename=/export/home/oracle/AUX/undotbs01.dbf
datafile 3 switched to datafile copy
input datafilecopy recid=2 stamp=484764999 filename=/export/home/oracle/AUX/example01.dbf
datafile 5 switched to datafile copy
input datafilecopy recid=3 stamp=484764999 filename=/export/home/oracle/AUX/tools01.dbf
datafile 6 switched to datafile copy
input datafilecopy recid=4 stamp=484764999 filename=/export/home/oracle/AUX/users01.dbf
datafile 7 switched to datafile copy
input datafilecopy recid=5 stamp=484764999 filename=/export/home/oracle/AUX/logmnrts.dbf
datafile 8 switched to datafile copy
input datafilecopy recid=6 stamp=484764999 filename=/export/home/oracle/AUX/marius01.dbf

printing stored script: Memory Script
{
set until time "to_date('Jan 29 2003 10:50:00','Mon DD YYYY HH24:MI:SS')";
recover
clone database
delete archivelog
;
}
executing script: Memory Script

executing command: SET until clause

Starting recover at 31-JAN-03
using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_2
using channel ORA_AUX_DISK_3
datafile 4 not processed because file is offline

starting media recovery

channel ORA_AUX_DISK_1: starting archive log restore to default destination
channel ORA_AUX_DISK_1: restoring archive log
archive log thread=1 sequence=43
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=/u01/app/oracle/admin/R920/backup/AL_R920_03ee4185_1_1 tag=TAG2003129T120109 params=NULL
channel ORA_AUX_DISK_1: restore complete
archive log filename=/u01/app/oracle/product/9.2.0/dbs/arch1_43.dbf thread=1 sequence=43
channel clone_default: deleting archive log(s)
archive log filename=/u01/app/oracle/product/9.2.0/dbs/arch1_43.dbf recid=1 stamp=484765017
media recovery complete
Finished recover at 31-JAN-03

printing stored script: Memory Script
{
shutdown clone;
startup clone nomount ;
}
executing script: Memory Script

database dismounted
Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area 52167600 bytes

Fixed Size 730032 bytes
Variable Size 50331648 bytes
Database Buffers 819200 bytes
Redo Buffers 286720 bytes
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "AUX" RESETLOGS ARCHIVELOG
MAXLOGFILES 5
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 1
MAXLOGHISTORY 226
LOGFILE
GROUP 1 ( '/export/home/oracle/AUX/redo01a.log', '/export/home/oracle/AUX/redo01b.log' ) SIZE 10m REUSE,
GROUP 2 ( '/export/home/oracle/AUX/redo02a.log', '/export/home/oracle/AUX/redo02b.log' ) SIZE 10m REUSE
DATAFILE
'/export/home/oracle/AUX/system01.dbf'
CHARACTER SET WE8ISO8859P1


printing stored script: Memory Script
{
catalog clone datafilecopy "/export/home/oracle/AUX/undotbs01.dbf";
catalog clone datafilecopy "/export/home/oracle/AUX/example01.dbf";
catalog clone datafilecopy "/export/home/oracle/AUX/tools01.dbf";
catalog clone datafilecopy "/export/home/oracle/AUX/users01.dbf";
catalog clone datafilecopy "/export/home/oracle/AUX/logmnrts.dbf";
catalog clone datafilecopy "/export/home/oracle/AUX/marius01.dbf";
switch clone datafile all;
}
executing script: Memory Script

cataloged datafile copy
datafile copy filename=/export/home/oracle/AUX/undotbs01.dbf recid=1 stamp=484765037
cataloged datafile copy
datafile copy filename=/export/home/oracle/AUX/example01.dbf recid=2 stamp=484765038
cataloged datafile copy
datafile copy filename=/export/home/oracle/AUX/tools01.dbf recid=3 stamp=484765039
cataloged datafile copy
datafile copy filename=/export/home/oracle/AUX/users01.dbf recid=4 stamp=484765039
cataloged datafile copy
datafile copy filename=/export/home/oracle/AUX/logmnrts.dbf recid=5 stamp=484765039
cataloged datafile copy
datafile copy filename=/export/home/oracle/AUX/marius01.dbf recid=6 stamp=484765039
datafile 2 switched to datafile copy
input datafilecopy recid=1 stamp=484765037 filename=/export/home/oracle/AUX/undotbs01.dbf
datafile 3 switched to datafile copy
input datafilecopy recid=2 stamp=484765038 filename=/export/home/oracle/AUX/example01.dbf
datafile 5 switched to datafile copy
input datafilecopy recid=3 stamp=484765039 filename=/export/home/oracle/AUX/tools01.dbf
datafile 6 switched to datafile copy
input datafilecopy recid=4 stamp=484765039 filename=/export/home/oracle/AUX/users01.dbf
datafile 7 switched to datafile copy
input datafilecopy recid=5 stamp=484765039 filename=/export/home/oracle/AUX/logmnrts.dbf
datafile 8 switched to datafile copy
input datafilecopy recid=6 stamp=484765039 filename=/export/home/oracle/AUX/marius01.dbf

printing stored script: Memory Script
{
Alter clone database open resetlogs;
}
executing script: Memory Script

database opened

printing stored script: Memory Script
{
# drop offline and skipped tablespaces
sql clone "drop tablespace INDX including contents";
}
executing script: Memory Script

sql statement: drop tablespace INDX including contents
Finished Duplicate Db at 31-JAN-03

RMAN>

Oracle 10G new background processes

Oracle 10g introduced several new background processes. This note highlights them.

MMAN

The Memory Manager (MMAN) process coordinates the sizing of the different memory components within the SGA. MMAN keeps track of the size of each component and allocates/de-allocates memory based on usage. This process runs only when Automatic Shared Memory Management is enabled.



RVWR

The RVWR process is responsible for writing flashback logs, which store pre-images of data blocks.

These logs can be used to flash the database back to a past point in time. This proves very handy in situations of logical data corruption, such as an accidental drop/truncate of a table or the commit of an unwanted transaction.
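As a sketch of how those flashback logs get used (the timestamp here is purely illustrative, and flashback logging must already have been enabled for the database), recovering from an accidental change might look like:

-- illustrative only: assumes flashback logging was enabled before the mistake
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> FLASHBACK DATABASE TO TIMESTAMP
  2  TO_TIMESTAMP('2009-06-15 09:00:00', 'YYYY-MM-DD HH24:MI:SS');
SQL> ALTER DATABASE OPEN RESETLOGS;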



CTWR

The Change Tracking Writer (CTWR) process, introduced in 10g, tracks changes to data blocks to assist fast RMAN incremental backups.
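CTWR only runs once block change tracking is enabled. A sketch of enabling it (the file path below is an illustrative assumption):

-- the tracking file path here is hypothetical; pick a location of your own
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  2  USING FILE '/u01/app/oracle/oradata/trgt/change_tracking.f';

SQL> SELECT status, filename FROM v$block_change_tracking;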

MMNL

The Memory Monitor Light (MMNL) process, introduced in 10g, is responsible for writing the statistics buffer to disk as needed. It works with the Automatic Workload Repository (AWR).




MMON

The Memory Monitor (MMON) process, introduced in 10g, is responsible for gathering statistics for the Automatic Workload Repository. This helps Oracle perform automatic problem detection and self-tuning.


M000

MMON background slave (m000) processes, spawned as needed to carry out work on behalf of MMON.


RBAL

RBAL (Re-balancer) is the ASM-related process that coordinates the rebalancing of disk resources controlled by ASM.

ARBx

These processes are managed by the RBAL process and are used to do the actual rebalancing of ASM controlled disk resources. The number of ARBx processes invoked is directly influenced by the asm_power_limit parameter.
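A sketch of how the number of ARBx processes is influenced (the diskgroup name and power values below are illustrative assumptions):

-- instance-wide default rebalance power (hypothetical value)
SQL> ALTER SYSTEM SET asm_power_limit = 5;

-- or override the power for one rebalance operation on a hypothetical diskgroup
SQL> ALTER DISKGROUP data REBALANCE POWER 8;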

ASMB

The ASMB process is used to provide information to and from the Cluster Synchronization Services used by ASM to manage the disk resources. It is also used to update statistics and provide a heartbeat mechanism.
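You can check which of the above background processes are actually running on your instance with a query along these lines (the output naturally varies with your configuration; ASM-related processes will only appear on an ASM-enabled setup):

SQL> SELECT name, description
  2  FROM v$bgprocess
  3  WHERE paddr <> '00'
  4  AND name IN ('MMAN','MMON','MMNL','CTWR','RVWR','RBAL','ASMB');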

Friday, June 5, 2009

TKPROF And Oracle Trace



The TKPROF program converts Oracle trace files into a more readable form. If you have a problem query, you can use TKPROF to get more information about it. To get the most out of the utility you must enable timed statistics, either by setting the TIMED_STATISTICS init.ora parameter or by issuing the following command:

ALTER SYSTEM SET TIMED_STATISTICS = TRUE;
If a suitable plan table is not present, one can be created by running the following as the SYS user:

@ORACLE_HOME\rdbms\admin\utlxplan.sql
CREATE PUBLIC SYNONYM PLAN_TABLE FOR SYS.PLAN_TABLE;
GRANT SELECT, INSERT, UPDATE, DELETE ON SYS.PLAN_TABLE TO PUBLIC;
With this done we can trace a statement:

ALTER SESSION SET SQL_TRACE = TRUE;

SELECT COUNT(*)
FROM dual;

ALTER SESSION SET SQL_TRACE = FALSE;
The resulting trace file will be located in the USER_DUMP_DEST directory. It can then be interpreted using TKPROF at the command prompt as follows:

TKPROF <trace_file> <output_file> explain=user/password@service table=sys.plan_table
The resulting output file contains the following type of information for all SQL statements processed, including the ALTER SESSION commands:

********************************************************************************
count===== number of times OCI procedure was executed
cpu======= cpu time in seconds executing
elapsed=== elapsed time in seconds executing
disk====== number of physical reads of buffers from disk
query===== number of buffers gotten for consistent read
current=== number of buffers gotten in current mode (usually for update)
rows====== number of rows processed by the fetch or execute call
********************************************************************************

SELECT COUNT(*)
FROM dual

call=======count==cpu=====elapsed=====disk=====query====current=====rows
-------===-----===-----===-------=====-------==-------===-------====----
Parse=======1=====0.02=====0.02========0========0========0===========0
Execute=====1=====0.00=====0.00========0========0========0===========0
Fetch=======2=====0.00=====0.00========0========1========4===========1
-----------------------------------------------------------------------
total-------4-----0.02-----0.02--------0--------1--------4-----------1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 121

Rows-------Row Source Operation
------------------------------------------------------------
1 SORT AGGREGATE
1 TABLE ACCESS FULL DUAL
Things to look out for in the trace include:

When tracing lots of statements at once, such as batch processes, you can quickly discard those statements that have acceptable CPU times. It's often better to focus on the statements that are taking most of the CPU time.
Inefficient statements are mostly associated with a high number of block visits. The query column indicates block visits for read consistency, including all query and subquery processing. The current column indicates visits not related to read consistency, including segment headers and blocks that are going to be updated.
The number of blocks read from disk is shown in the disk column. Since disk reads are slower than memory reads you would expect this value to be significantly lower than the sum of the query and current columns. If it is not you may have an issue with your buffer cache.
Locking problems and inefficient PL/SQL loops may lead to high cpu/elapsed values even when block visits are low.
Multiple parse calls for a single statement imply a library cache issue.
Once you've identified your problem statements, you can check the execution plan to see why the statement is performing badly.
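One way to look at an execution plan (assuming a plan table like the one created earlier; the query itself is just an illustration) is:

SQL> EXPLAIN PLAN FOR
  2  SELECT COUNT(*) FROM dual;

SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);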

Wednesday, June 3, 2009

DBA_SCHEDULER data dictionary tables


Table Name ---------------------------------------------- Description

DBA_OBJECTS ---------------------------------------- All objects in the database
DBA_SCHEDULER_PROGRAMS -------------------- All scheduler programs in the database
DBA_SCHEDULER_JOBS ----------------------------- All scheduler jobs in the database
DBA_SCHEDULER_JOB_CLASSES ----------------- All scheduler job classes in the database
DBA_SCHEDULER_WINDOWS --------------------- All scheduler windows in the database
DBA_SCHEDULER_PROGRAM_ARGS ------------ All arguments of all scheduler programs in the database
DBA_SCHEDULER_JOB_ARGS --------------------- All arguments with set values of all scheduler jobs in the database
DBA_SCHEDULER_JOB_LOG ---------------------- Logged information for all scheduler jobs
DBA_SCHEDULER_JOB_RUN_DETAILS --------- The details of a job run
DBA_SCHEDULER_WINDOW_LOG --------------- Logged information for all scheduler windows
DBA_SCHEDULER_WINDOW_DETAILS ---------- The details of a window
DBA_SCHEDULER_WINDOW_GROUPS ---------- All scheduler window groups in the database
DBA_SCHEDULER_WINGROUP_MEMBERS ----- Members of all scheduler window groups in the database
DBA_SCHEDULER_SCHEDULES ------------------- All schedules in the database
DBA_SCHEDULER_GLOBAL_ATTRIBUTE -------- All scheduler global attributes
DBA_SCHEDULER_CHAINS ------------------------ All scheduler chains in the database
DBA_SCHEDULER_CHAIN_RULES ---------------- All rules from scheduler chains in the database
DBA_SCHEDULER_CHAIN_STEPS ---------------- All steps of scheduler chains in the database
DBA_SCHEDULER_RUNNING_CHAINS ---------- All steps of all running chains in the database
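As a quick illustration of using these views, a query like the following shows the current state of scheduler jobs (column selection here is just an example):

SQL> SELECT job_name, state, last_start_date
  2  FROM dba_scheduler_jobs
  3  ORDER BY job_name;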
