Friday, July 17, 2009

While installing the 10.2.0.3 patchset software, the OUI was core dumping with Java error messages.

The Java bundled with the software appeared to be corrupted and not working.

Running the plain runInstaller command, which uses the Java bundled with the software, kept core dumping, so the installer was pointed at an external JRE instead.


Using that Java, the OUI was able to get past the bootstrap phase and the installation went on
successfully.

SOLUTION / ACTION PLAN
======================
-- To implement the solution, please execute the following steps:
Use the following command to invoke the installer:

./runInstaller -jreLoc /opt/java1.4
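
Before pointing the OUI at an external JRE, it is worth confirming that the JRE itself runs (the /opt/java1.4 path is simply where a working JRE was installed in this case):

/opt/java1.4/bin/java -version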

RMAN-08512: waiting for snapshot controlfile enqueue

Under normal circumstances, a job that must wait for the control file enqueue
waits for a brief interval and then successfully retrieves the enqueue.
Recovery Manager makes up to five attempts to get the enqueue and then fails the
job. The conflict is usually caused when two jobs are both backing up the control
file, and the job that first starts backing up the control file waits for service
from the media manager.

To determine which job is holding the conflicting enqueue:

1. After you see the first RMAN-08512: waiting for snapshot controlfile enqueue
message, start a new SQL*Plus session on the target database:

% sqlplus sys/sys_pwd@prod1


2. Execute the following query to determine which job is causing the wait:

SELECT s.sid, username AS "User", program, module, action, logon_time "Logon", l.*
FROM v$session s, v$enqueue_lock l
WHERE l.sid = s.sid and l.type = 'CF' AND l.id1 = 0 and l.id2 = 2;


You should see output similar to the following (the output in this example has
been truncated):

SID User Program              Module                    Action          Logon
--- ---- -------------------- ------------------------- --------------- ---------
  9 SYS  rman@h13 (TNS V1-V3) backup full datafile: c1  0000210 STARTED 21-JUN-99


After you have determined which job is creating the enqueue, you can do one of the
following:

- Wait until the job creating the enqueue completes
- Cancel the current job and restart it once the job creating the enqueue completes
- Cancel the job creating the enqueue

Commonly, enqueue situations occur when a job is writing to a tape drive, but the tape
drive is waiting for a new cassette to be inserted. If you start a new job in this
situation, you will probably receive the enqueue message because the first job cannot
complete until the new tape is loaded.
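
If you decide to cancel the job creating the enqueue, a minimal sketch (the SID comes from the query above; the serial# must be looked up, and the values here are placeholders):

SQL> SELECT sid, serial# FROM v$session WHERE sid = 9;
SQL> ALTER SYSTEM KILL SESSION '<sid>,<serial#>';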

Tuesday, July 14, 2009

Change DBID of database

After cloning a database to another machine with the same database name, or after creating a new database name by recreating the controlfile, you need to change the DBID, especially if the new database is to be registered in the same RMAN catalog.

Only one database per DBID can be registered in a catalog, so two databases with the same DBID, even on different machines, cannot both be registered in the same catalog. Changing the DBID ensures the cloned database can be registered.

To change the DBID, follow these steps:

shutdown immediate;

startup mount;

Make sure the sys password is correct:

sqlplus "sys/password@test as sysdba" -- should connect without errors.

Then, from the command line:

cd $ORACLE_HOME/bin

oracle(DATABASE)@tmpu020:./nid target=sys/password@test
DBNEWID: Release 9.2.0.8.0 - 64bit Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

Connected to database TEST (DBID=2764499561)

Control Files in database:
/u01/oradata/test/control01.ctl
/u02/oradata/test/control02.ctl

Change database ID of database TEST? (Y/[N]) => Y

Proceeding with operation
Changing database ID from 2764499561 to 3112540754
Control File /u01/oradata/test/control01.ctl - modified
Control File /u02/oradata/test/control02.ctl - modified
Datafile /u02/oradata/test/system01.dbf - dbid changed
Datafile /u02/oradata/test/users01.dbf - dbid changed
Datafile /u02/oradata/test/patrol01.dbf - dbid changed
Datafile /u02/oradata/test/temp01.dbf - dbid changed
Datafile /u02/oradata/test/temp02.dbf - dbid changed
Control File /u01/oradata/test/control01.ctl - dbid changed
Control File /u02/oradata/test/control02.ctl - dbid changed

Database ID for database TEST changed to 3112540754.
All previous backups and archived redo logs for this database are unusable.
Shut down database and open with RESETLOGS option.
Succesfully changed database ID.
DBNEWID - Completed succesfully.

oracle(DATABASE)@tmpu020:

Shutdown immediate;

startup mount;

alter database open resetlogs;

To check the new DBID:

oracle(DATABASE)@tmpu020:rman target /

connected to target database: TEST (DBID=3112540754)

RMAN>
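
You can also confirm the new DBID from SQL*Plus:

SQL> select dbid, name from v$database;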

Trim a file on HP-UX

Suppose you want to remove the first 10 lines of a file named mail.log.
First take a backup of the original file.
Check the line count:
# cat mail.log | wc -l
155
# sed '1,10d' mail.log > /tmp/mail_new.log
Now check the line count of the newly created file mail_new.log:
# cat mail_new.log | wc -l
145
Now you can replace mail.log with mail_new.log (the backup of the original mail.log was taken in the first step).
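
Putting it together (a sketch using the same file names as above):

# cp mail.log mail.log.bak
# sed '1,10d' mail.log > /tmp/mail_new.log
# mv /tmp/mail_new.log mail.log

The first command keeps a backup of the original, and the last replaces it with the trimmed copy.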

Monday, July 13, 2009

Installing CRS (Oracle Clusterware)

1. Configure SSH
To configure SSH you need to perform the following steps on each node in the cluster.
$ cd $HOME
$ mkdir .ssh
$ chmod 700 .ssh
$ cd .ssh
$ /usr/bin/ssh-keygen -t rsa
Accept the default location for the key file, then enter and confirm a passphrase (you can also press Enter twice for an empty passphrase).

$ /usr/bin/ssh-keygen -t dsa

Again accept the default location for the key file, then enter and confirm a passphrase (you can also press Enter twice).

$ cat *.pub > authorized_keys.nodeX (where nodeX is the node name, used to differentiate the files later)

Now do the same steps on the other nodes in the cluster. When all those steps are done on the other nodes, copy the authorized_keys.nodeX files from every node into $HOME/.ssh/ on each node.
For example, if you have 4 nodes, after the copy you will have 4 files named authorized_keys.node* in .ssh.
Then on EACH node continue the configuration of SSH by doing the following:

$ cd $HOME/.ssh
$ cat *.node* > authorized_keys
$ ssh node1 cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys
oracle@node1's password:
$ ssh node1 cat /home/oracle/.ssh/id_dsa.pub >> authorized_keys
To test that everything is working correctly, execute the date command over SSH against each node:
$ ssh node1 date
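
A quick loop (node names are hypothetical) to verify passwordless SSH from the current node to every node:

$ for host in node1 node2 node3 node4; do ssh $host date; done

Each node should print its date without prompting for a password.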

2. Source the following commands in the session from which you are going to start the OUI:
$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add

3. After reinstallation, proceed with the new install in the path /CRS/app/crs

4. The Oracle Cluster Registry value is /dev/raw/raw3

5. The Voting Disk is /dev/raw/raw1

6. Edit /etc/init.d/init.crs to make the following changes,
chown oracle:dba /dev/raw/raw1
chown root:dba /dev/raw/raw3
chmod 644 /dev/raw/*

7. Check the CRS status using,
$ ps -ef | grep crs
$ $CRS_HOME/bin/crsctl check crs
$ $CRS_HOME/bin/crsctl check crs

8. Private interconnect - 192.168.xx.xx

9. Public connect - 10.80.xx.xx

Sunday, July 5, 2009

When and why to perform a full database export.

* When there is a requirement to clone a database onto another test or development server, take a full database mode export dump of the source database, create a database on the target server with settings similar to the source, and import the dump into it.

Simple Steps: how to perform a full database export using export utility.

* Use either the system user or any other database user who has the EXP_FULL_DATABASE privilege.
* Set the NLS_LANG environment variable according to the database character set and language details.
SQL> select * from nls_database_parameters
2 where parameter in ('NLS_LANGUAGE','NLS_TERRITORY','NLS_CHARACTERSET');


PARAMETER VALUE
------------------------------ ----------------------------
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AMERICA
NLS_CHARACTERSET WE8ISO8859P1

Windows (Dos Prompt):
C:\> set NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1

Unix/Linux:
$ export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1

* Start the export with the following command and options.

exp system/password@mydb file=c:\exportdmp\exp_fulldb_MYDB_27Aug08.dmp
full=y log=c:\exportdmp\exp_fulldb_MYDB_27Aug08.log
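
On the destination database, the corresponding import would look something like this (a sketch; the connection string and log name are placeholders):

imp system/password@newdb file=c:\exportdmp\exp_fulldb_MYDB_27Aug08.dmp
full=y log=c:\exportdmp\imp_fulldb_NEWDB.log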


Note: This is just a simple command to perform a full database export. Refer to the Oracle documentation on export/import and their options for more detail, or see the help output below.

Help on Export and Import:

Windows:
C:\> exp help=y
C:\> imp help=y

Super fast Database Copying/Cloning

A database cloning procedure is especially useful for the DBA who wants to give his developers a full-sized TEST and DEV instance by cloning the PROD instance into the development server areas.

This Oracle clone procedure can be used to quickly migrate a system from one UNIX server to another. It clones the Oracle database, and this cloning procedure is often the fastest way to copy an Oracle database.

STEP 1: On the old system, go into SQL*Plus, sign on as SYSDBA and issue: "alter database backup controlfile to trace". This will put the create database syntax in the trace file directory. The trace keyword tells Oracle to generate a script containing a create controlfile command and store it in the trace directory identified by the user_dump_dest parameter in the init.ora file. It will look something like this:

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "OLDLSQ" NORESETLOGS
NOARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 240
MAXINSTANCES 1
MAXLOGHISTORY 113
LOGFILE
GROUP 1 ('/u03/oradata/oldlsq/log1a.dbf',
'/u03/oradata/olslsq/log1b.dbf') SIZE 30M,
GROUP 2 ('/u04/oradata/oldlsq/log2a.dbf',
'/u04/oradata/oldlsq/log2b.dbf') SIZE 30M
DATAFILE
'/u01/oradata/oldlsq/system01.dbf',
'/u01/oradata/oldlsq/mydatabase.dbf'
;
# Recovery is required if any of the datafiles are restored
# backups, or if the last shutdown was not normal or immediate.
RECOVER DATABASE
# Database can now be opened normally.
ALTER DATABASE OPEN;


STEP 2: Shutdown the old database

STEP 3: Copy all data files into the new directories on the new server. You may change the file names if you want, but you must edit the controlfile to reflect the new data files names on the new server.

rcp /u01/oradata/oldlsq/* newhost:/u01/oradata/newlsq
rcp /u03/oradata/oldlsq/* newhost:/u03/oradata/newlsq
rcp /u04/oradata/oldlsq/* newhost:/u04/oradata/newlsq


STEP 4: Copy and Edit the Control file - Using the output syntax from STEP 1, modify the controlfile creation script by changing the following:

Old:

CREATE CONTROLFILE REUSE DATABASE "OLDLSQ" NORESETLOGS

New:

CREATE CONTROLFILE SET DATABASE "NEWLSQ" RESETLOGS

STEP 5: Remove the "recover database" and "alter database open" syntax

# Recovery is required if any of the datafiles are restored
# backups, or if the last shutdown was not normal or immediate.
RECOVER DATABASE
# Database can now be opened normally.
ALTER DATABASE OPEN;


STEP 6: Rename any data files whose names have changed.

Save the edited script as db_create_controlfile.sql.

Old:

DATAFILE
'/u01/oradata/oldlsq/system01.dbf',
'/u01/oradata/oldlsq/mydatabase.dbf'

New:

DATAFILE
'/u01/oradata/newlsq/system01.dbf',
'/u01/oradata/newlsq/mydatabase.dbf'

STEP 7: Create the bdump, udump and cdump directories

cd $DBA/admin
mkdir newlsq
cd newlsq
mkdir bdump
mkdir udump
mkdir cdump
mkdir pfile


STEP 8: Copy-over the old init.ora file

rcp $DBA/admin/olslsq/pfile/*.ora newhost:/u01/oracle/admin/newlsq/pfile

STEP 9: Start the new database by running the edited script from SQL*Plus as SYSDBA:

SQL> @db_create_controlfile.sql

STEP 10: Place the new database in archivelog mode
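
A minimal sketch for STEP 10 (the database must be mounted, not open, when switching the log mode):

SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;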

Tuning Oracle's Buffer Cache

Introduction

Oracle maintains its own buffer cache inside the system global area (SGA) for each instance. A properly sized buffer cache can usually yield a cache hit ratio over 90%, meaning that nine requests out of ten are satisfied without going to disk.

If a buffer cache is too small, the cache hit ratio will be small and more physical disk I/O will result. If a buffer cache is too big, then parts of the buffer cache will be under-utilized and memory resources will be wasted.

Checking The Cache Hit Ratio

Oracle maintains statistics of buffer cache hits and misses. The following query will show you the overall buffer cache hit ratio for the entire instance since it was started:

SELECT (P1.value + P2.value - P3.value) / (P1.value + P2.value)
FROM v$sysstat P1, v$sysstat P2, v$sysstat P3
WHERE P1.name = 'db block gets'
AND P2.name = 'consistent gets'
AND P3.name = 'physical reads'

You can also see the buffer cache hit ratio for one specific session since that session started:

SELECT (P1.value + P2.value - P3.value) / (P1.value + P2.value)
FROM v$sesstat P1, v$statname N1, v$sesstat P2, v$statname N2,
v$sesstat P3, v$statname N3
WHERE N1.name = 'db block gets'
AND P1.statistic# = N1.statistic#
AND P1.sid = :sid -- supply the SID of the session of interest
AND N2.name = 'consistent gets'
AND P2.statistic# = N2.statistic#
AND P2.sid = P1.sid
AND N3.name = 'physical reads'
AND P3.statistic# = N3.statistic#
AND P3.sid = P1.sid

You can also measure the buffer cache hit ratio between time X and time Y by collecting statistics at times X and Y and computing the deltas.
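
A minimal sketch of the delta approach (the snapshot table name is made up for this example):

-- At time X, snapshot the three counters
CREATE TABLE cache_stats_snap AS
SELECT name, value FROM v$sysstat
WHERE name IN ('db block gets', 'consistent gets', 'physical reads');

-- At time Y, apply the hit ratio formula to the deltas
SELECT (g.delta + c.delta - p.delta) / (g.delta + c.delta) "Hit Ratio (X to Y)"
FROM (SELECT s.value - t.value delta FROM v$sysstat s, cache_stats_snap t
      WHERE s.name = 'db block gets' AND t.name = s.name) g,
     (SELECT s.value - t.value delta FROM v$sysstat s, cache_stats_snap t
      WHERE s.name = 'consistent gets' AND t.name = s.name) c,
     (SELECT s.value - t.value delta FROM v$sysstat s, cache_stats_snap t
      WHERE s.name = 'physical reads' AND t.name = s.name) p;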

Adjusting The Size Of The Buffer Cache

The db_block_buffers parameter in the parameter file determines the size of the buffer cache for the instance. The size of the buffer cache (in bytes) is equal to the value of the db_block_buffers parameter multiplied by the data block size. For example, 10,000 buffers with an 8 KB block size yield a cache of roughly 80 MB.

You can change the size of the buffer cache by editing the db_block_buffers parameter in the parameter file and restarting the instance.

Determining If The Buffer Cache Should Be Enlarged

If you set the db_block_lru_extended_statistics parameter to a positive number in the parameter file for an instance and restart the instance, Oracle will populate a dynamic performance view called v$recent_bucket. This view will contain the same number of rows as the setting of the db_block_lru_extended_statistics parameter. Each row will indicate how many additional buffer cache hits there might have been if the buffer cache were that much bigger.

For example, if you set db_block_lru_extended_statistics to 1000 and restart the instance, you can see how the buffer cache hit ratio would have improved if the buffer cache were one buffer bigger, two buffers bigger, and so on up to 1000 buffers bigger than its current size. Following is a query you can use, along with a sample result:

SELECT 250 * TRUNC (rownum / 250) + 1 || ' to ' ||
250 * (TRUNC (rownum / 250) + 1) "Interval",
SUM (count) "Buffer Cache Hits"
FROM v$recent_bucket
GROUP BY TRUNC (rownum / 250)

Interval Buffer Cache Hits
--------------- --------------------
1 to 250 16083
251 to 500 11422
501 to 750 683
751 to 1000 177

This result set shows that enlarging the buffer cache by 250 buffers would have resulted in 16,083 more hits. If there were about 30,000 hits in the buffer cache at the time this query was performed, then it would appear that adding 500 buffers to the buffer cache might be worthwhile. Adding more than 500 buffers might lead to under-utilized buffers and therefore wasted memory.

There is overhead involved in collecting extended LRU statistics. Therefore you should set the db_block_lru_extended_statistics parameter back to zero as soon as your analysis is complete.

In Oracle7, the v$recent_bucket view was named X$KCBRBH. Only the SYS user can query X$KCBRBH. Also note that in X$KCBRBH the columns are called indx and count, instead of rownum and count.

Determining If The Buffer Cache Is Bigger Than Necessary

If you set the db_block_lru_statistics parameter to true in the parameter file for an instance and restart the instance, Oracle will populate a dynamic performance view called v$current_bucket. This view will contain one row for each buffer in the buffer cache, and each row will indicate how many of the overall cache hits have been attributable to that particular buffer.

By querying v$current_bucket with a GROUP BY clause, you can get an idea of how well the buffer cache would perform if it were smaller. Following is a query you can use, along with a sample result:

SELECT 1000 * TRUNC (rownum / 1000) + 1 || ' to ' ||
1000 * (TRUNC (rownum / 1000) + 1) "Interval",
SUM (count) "Buffer Cache Hits"
FROM v$current_bucket
WHERE rownum > 0
GROUP BY TRUNC (rownum / 1000)

Interval Buffer Cache Hits
------------ -----------------
1 to 1000 668415
1001 to 2000 281760
2001 to 3000 166940
3001 to 4000 14770
4001 to 5000 7030
5001 to 6000 959

This result set shows that the first 3000 buffers are responsible for over 98% of the hits in the buffer cache. This suggests that the buffer cache would be almost as effective if it were half the size; memory is being wasted on an oversized buffer cache.

There is overhead involved in collecting LRU statistics. Therefore you should set the db_block_lru_statistics parameter back to false as soon as your analysis is complete.

In Oracle7, the v$current_bucket view was named X$KCBCBH. Only the SYS user can query X$KCBCBH. Also note that in X$KCBCBH the columns are called indx and count, instead of rownum and count.

Full Table Scans

When Oracle performs a full table scan of a large table, the blocks are read into the buffer cache but placed at the least recently used end of the LRU list. This causes the blocks to be aged out quickly, and prevents one large full table scan from wiping out the entire buffer cache.

Full table scans of large tables usually result in physical disk reads and a lower buffer cache hit ratio. You can get an idea of full table scan activity at the data file level by querying v$filestat and joining to SYS.dba_data_files. Following is a query you can use and sample results:

SELECT A.file_name, B.phyrds, B.phyblkrd
FROM SYS.dba_data_files A, v$filestat B
WHERE B.file# = A.file_id
ORDER BY A.file_id

FILE_NAME PHYRDS PHYBLKRD
-------------------------------- ---------- ----------
/u01/oradata/PROD/system01.dbf 92832 130721
/u02/oradata/PROD/temp01.dbf 1136 7825
/u01/oradata/PROD/tools01.dbf 7994 8002
/u01/oradata/PROD/users01.dbf 214 214
/u03/oradata/PROD/rbs01.dbf 20518 20518
/u04/oradata/PROD/data01.dbf 593336 9441037
/u05/oradata/PROD/data02.dbf 4638037 4703454
/u06/oradata/PROD/index01.dbf 1007638 1007638
/u07/oradata/PROD/index02.dbf 1408270 1408270

PHYRDS shows the number of reads from the data file since the instance was started. PHYBLKRD shows the actual number of data blocks read. Usually blocks are requested one at a time. However, Oracle requests blocks in batches when performing full table scans. (The db_file_multiblock_read_count parameter controls this batch size.)
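
You can check the current batch size with:

SQL> show parameter db_file_multiblock_read_count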

In the sample result set above, there appears to be quite a bit of full table scan activity in the data01.dbf data file, since 593,336 read requests have resulted in 9,441,037 actual blocks read.

Spotting I/O Intensive SQL Statements

The v$sqlarea dynamic performance view contains one row for each SQL statement currently in the shared SQL area of the SGA for the instance. v$sqlarea shows the first 1000 bytes of each SQL statement, along with various statistics. Following is a query you can use:

SELECT executions, buffer_gets, disk_reads,
first_load_time, sql_text
FROM v$sqlarea
ORDER BY disk_reads

EXECUTIONS indicates the number of times the SQL statement has been executed since it entered the shared SQL area. BUFFER_GETS indicates the collective number of logical reads issued by all executions of the statement. DISK_READS shows the collective number of physical reads issued by all executions of the statement. (A logical read is a read that resulted in a cache hit or a physical disk read. A physical read is a read that resulted in a physical disk read.)

You can review the results of this query to find SQL statements that perform lots of reads, both logical and physical. Consider how many times a SQL statement has been executed when evaluating the number of reads.

Conclusion

This brief document gives you the basic information you need in order to optimize the buffer cache size for your Oracle database. Also, you can zero in on SQL statements that cause a lot of I/O, and data files that experience a lot of full table scans.

Monday, June 15, 2009

Identify and fix table fragmentation in Oracle 10g - how?

Table fragmentation – when?


If a table is only ever subject to inserts, there will not be any fragmentation.
Fragmentation comes in when we update/delete data in the table.
The space that gets freed up during non-insert DML operations is not immediately re-used (and sometimes may never be reused). This leaves holes in the table, which results in table fragmentation.

To understand it more clearly, we need to be clear on how Oracle manages space for tables.

The "high water mark" (HWM) of a table defines the border line between (ever) used and (never) unused space. While performing a full table scan, Oracle will always read the data up to the HWM. If there is a lot of free space within the HWM, that is read too, which degrades the performance of the FTS.

Now let's see how to identify the HWM, the unused (never used) space and the free space (used but deleted/updated), and then take a call on whether the concerned table is a candidate for reorganization or not.


SQL> create table test as select * from dba_tables; -- Create a table

Table created.

SQL> analyze table test compute statistics; -- Analyze it

Table analyzed.


SQL> select blocks "Ever Used", empty_blocks "Never Used", num_rows "Total rows"
2 from user_tables where table_name='TEST'; -- The number of blocks used/free

Ever Used Never Used Total rows
---------- ---------- ----------
49 6 1680



SQL> delete from test where owner='SYS'; -- I'm deleting almost half the rows.

764 rows deleted.

SQL> commit;

Commit complete.


SQL> analyze table test compute statistics; -- Analyze it again

Table analyzed.

SQL> select blocks "Ever Used", empty_blocks "Never Used", num_rows "Total rows"

2 from user_tables where table_name='TEST'; -- No difference in blocks usage

Ever Used Never Used Total rows
---------- ---------- ----------
49 6 916


Even though you deleted almost half the rows, the above shows that the table's HWM is still at 49 blocks, and for any FTS Oracle will read up to 49 blocks to search the data. If your application is written such that there are many FTSs on this table, you may consider reorganizing it.



Reasons for reorganization



a) Slower response time (from that table)
b) High number of chained (actually migrated) rows.
c) Table has grown many folds and the old space is not getting reused.

Note: Index-based queries may not benefit as much from a reorg as queries which do full table scans.

How to reorganize?

Before Oracle 10g, there were mainly 2 ways to reorganize a table:
a) Export, drop, and import.
b) Alter table move (to another tablespace, or the same tablespace).

Oracle 10g provides us a new way of reorganizing the data.

Shrink command: This command is only applicable to tables in tablespaces with automatic segment space management (ASSM).

Before using this command, you should have row movement enabled.

SQL> alter table test enable row movement;

Table altered.

There are 2 ways of using this command.


1. Do it in two parts: in the first part rearrange the rows, and in the second part reset the HWM.

Part 1: Rearrange (all DMLs can happen during this time)

SQL> alter table test shrink space compact;

Table altered.

Part 2: Reset the HWM (no DML can happen, but this is fairly quick, in fact it goes unnoticed).

SQL> alter table test shrink space;

Table altered.

2. Do it in one go:

SQL> alter table test shrink space; (both the rearranging and the resetting of the HWM happen in one statement)

Table altered.
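
To confirm the HWM actually came down, re-run the earlier check (block counts will vary on your system):

SQL> analyze table test compute statistics;
SQL> select blocks "Ever Used", empty_blocks "Never Used", num_rows "Total rows"
2 from user_tables where table_name='TEST';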



A few advantages over the conventional methods

1. Unlike "alter table move ...", indexes are not left in an UNUSABLE state. After the shrink command, indexes are updated as well.

2. It's an online operation, so you don't need downtime to do this reorg.

3. It does not require any extra space for the process to complete.


Conclusion

It's a new 10g feature to shrink (reorg) tables (almost) online, which can be used with automatic segment space management.

Size your Undo tablespace

What should be the size of the UNDO tablespace? Generally this question is vaguely answered: sizing comes with (DBA) experience, or from looking at the load on the server, or sometimes even from the number of ORA-1555 or out-of-space errors.

This paper is to help DBAs calculate the size of the UNDO tablespace using a simple formula.

While designing an application, it is generally tough to know the number of transactions and, subsequently, the number of rows changed per second.
So I suggest starting with a "big undo tablespace" and, based on the load, doing some calculations and then resizing your UNDO tablespace.

In my case, one of the applications was going to production (live), and I had no idea how many transactions would happen against this database. All I was told was that there would be optimum (transactional) activity on this database. The word "optimum" itself is vague.

So I started with an UNDO tablespace of 2 GB and datafiles with autoextend on.

Note:
In production, you must be very careful in using this (autoextend on) as the space may grow very fast. So my advice is either don't use this option, use it with "maxsize", or continuously monitor the space (which is tough).


A month later, I looked at the activity in V$UNDOSTAT.

Here is the step by step approach:


Step 1: Longest running query.


SQL> select max(maxquerylen) from v$undostat;


MAX(MAXQUERYLEN)
----------------
1793

This gives you an ideal value for UNDO_RETENTION. To be on the safer side you should add a few more seconds to get the right value. So in my case, the undo retention should be, say, 2000 seconds.

Step 2: Size of UNDO tablespace.


Size of UNDO needed = UNDO_RETENTION x [UNDO blocks generated per second x DB_BLOCK_SIZE] + Overhead (30 x DB_BLOCK_SIZE)

Of these, we already know UNDO_RETENTION and DB_BLOCK_SIZE.

All we need is to find the "UNDO blocks per second",

which can easily be fetched from v$undostat:

SQL> SELECT (SUM(undoblks))/ SUM ((end_time - begin_time) * 24*60*60) "UPS"
2 FROM v$undostat;


UPS
------------------------------
8.11985583

V$UNDOSTAT stores data for every 10 minutes, and begin/end times are the start/end times of those intervals. We multiply by 24*60*60 because the difference between two dates is in days, and to get to seconds we need to multiply by 24 hrs * 60 mins * 60 secs.

So now we have all the values needed.

Undo size needed = [8.12 x 2000 x 8192] + [30 x 8192] = 133283840 bytes = 127.11 MB
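
Applying the numbers (a sketch; the datafile path is hypothetical and the size is rounded up from the 127 MB computed above):

SQL> alter system set undo_retention = 2000;
SQL> alter database datafile '/u01/oradata/mydb/undotbs01.dbf' resize 150m;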

TOP SQL

One of the important tasks of a DBA is to know which are the high-CPU-consuming processes on the database server.
In my last organization, we used to get a number of requests saying that the DB server was running slow.
Now the problem is that this server was hosting 86 databases, and finding out the culprit process and database sounds like a daunting task (but it isn't).

See this:

First find out the top CPU processes on your system:

You may use TOP (or ps aux) or any other utility to find the top cpu consuming process.

Here is a sample top output:


bash-3.00$ top
PID USERNAME LWP PRI NICE SIZE RES STATE TIME CPU COMMAND
17480 oracle 11 59 0 1576M 1486M sleep 0:09 23.51% oracle
9172 oracle 258 59 2 1576M 1476M sleep 0:43 1.33% oracle
9176 oracle 14 59 2 1582M 1472M sleep 0:52 0.43% oracle
17550 oracle 1 59 0 3188K 1580K cpu/1 0:00 0.04% top
9178 oracle 13 59 2 1571M 1472M sleep 2:30 0.03% oracle

Look at the first row of the output: process 17480 is consuming 23.51% CPU.

Now this process can belong to any one of the many instances on this server.
To find out which instance this process belongs to:



bash-3.00$ ps -ef | grep 17480

oracle 17480 17479 0 03:02:03 ? 0:48 oracleorcl (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))


The instance name is embedded in the process name: oracleorcl means the process belongs to the instance orcl.

Now you know which instance is holding that session.

Change your environment settings (ORACLE_SID, ORACLE_HOME) to point to this database,
and connect to the database as SYSDBA.

bash-3.00$ sqlplus "/ as sysdba"

SQL*Plus: Release 10.2.0.2.0 - Production on Thu Dec 21 04:03:44 2006

Copyright (c) 1982, 2005, Oracle. All Rights Reserved.


Connected to:
Oracle Database 10g Release 10.2.0.2.0 - Production

SQL> select ses.sid SID,sqa.SQL_TEXT SQL from
2 v$session ses, v$sqlarea sqa, v$process proc
3 where ses.paddr=proc.addr
4 and ses.sql_hash_value=sqa.hash_value
5 and proc.spid=17480;

SID SQL
--------- -----------------
67 delete from test


Now you have the SQL responsible for the process using 23% of the CPU.
In my case it was a deliberate DELETE statement used to set up this test, but in your case it could be a query worth tuning.


Often, knowing what the problem is is most of the solution (at least you know what the issue is).
Whether the issue should be addressed right away or handed to the development team is a subjective matter on which I don't want to comment.

INDEXES (When to rebuild and why ) Information

When should I rebuild my indexes?


Every change should be driven by a need; I hope all agree on this. So why do many DBAs (not all) rebuild indexes on a periodical basis without knowing the impact of it?

Let's revisit the facts stated by many Oracle experts:


- B-tree indexes can never be unbalanced.
- The distance of any leaf block (which contains the index value) from the root block is always the same.
- B-tree performance is good for both small and large tables and does not degrade with the growth of the table.

When will rebuild help?

When the data in the index is sparse (lots of holes in the index, due to deletes or updates) and your queries are usually range based.

If your SQL uses the "=" predicate, rebuilding the index may not help you at all, unless the DMLs (updates/deletes) on the table have increased the height of the index. Due to heavy updates/deletes on a table, space in the index can be left unused (unless it is reused), which causes block splits, and if the splitting goes beyond the level of accommodation, the height of the index increases by 1.
In simpler terms, unless you reduce the height (BLEVEL) of the index, the time taken to reach a leaf block from the root block will not change.


As explained above, the height of the index can be decreased by a rebuild (only if the height was increased due to updates/deletes).

Only in the case of a range scan may a rebuild (consider the coalesce option also) help:

Select * from emp where empno between 1000 and 2000;

In the above case there could be a lot of deleted/updated data which will also be read while scanning between 1000 and 2000. Rebuilding will reclaim the unused space, and hence the select could be faster.

• Index fast full scan / index full scan (rare case):
If your SQL is designed such that data is mostly selected from the index (it doesn't go to the table). In this case the whole index is read for data, not for redirecting to the table.

Ex:
Select count(empno) from emp; -- you have an index on empno
Select empno from emp; -- you have an index on empno

Both the above SELECTs will fetch the data from the index only. Since the whole index is read, the space used by deleted rows is also read during the SELECT operation.



How to find indexes which are candidate for rebuild?


I know it's a very arguable question, but let me put my words down and then you can comment.

I follow different approaches to find indexes for rebuild:

- Find indexes having height (blevel+1) > 4, i.e. indexes having BLEVEL > 3.
How:
SQL> select owner, index_name, table_name, blevel from dba_indexes where blevel > 3;

- Analyze indexes and find those whose ratio of (DEL_LF_ROWS/LF_ROWS)*100 is > 20.
How:
First "analyze the index with the validate structure option" and then:

SQL> SELECT name, height, lf_rows, del_lf_rows, (del_lf_rows/lf_rows)*100 as ratio FROM INDEX_STATS;


But (a big but): the reason to rebuild should be poor performance of your queries using the index. You should not rebuild an index just because both of the above conditions are true, unless that is coupled with poor SQL performance.


See this example:

SQL> analyze index TEST_INDX validate structure; -- First analyze the suspect index

Index analyzed.

SQL> SELECT name,height,lf_rows,lf_blks,del_lf_rows FROM INDEX_STATS;

NAME HEIGHT LF_ROWS LF_BLKS DEL_LF_ROWS
------------ ---------- ---------- ----------- -------------
TEST_INDX 8 938752 29575 73342


You can see the height of the index is 8, and there is also a high number of DEL_LF_ROWS.


SQL> set autotrace on
SQL> set timing on
SQL>
SQL> select count(*) from TEST_TABLE where TEST_COL like 'http://www.hots%';

COUNT(*)
----------
39700


Elapsed: 00:00:27.25

Execution Plan
----------------------------------------------------------
Plan hash value: 870163320

--------------------------------------------------------------------------------
| Id | Operation         | Name      | Rows | Bytes | Cost (%CPU) | Time     |
--------------------------------------------------------------------------------
|  0 | SELECT STATEMENT  |           |    1 |   117 |     10  (0) | 00:00:01 |
|  1 |  SORT AGGREGATE   |           |    1 |   117 |             |          |
|* 2 |   INDEX RANGE SCAN| TEST_INDX |  115 | 13455 |     10  (0) | 00:00:01 |
--------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
764 consistent gets
757 physical reads

0 redo size
516 bytes sent via SQL*Net to client
468 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

Now you rebuild the indexes

SQL> alter index TEST_INDX rebuild;

Index altered.

SQL> select count(*) from TEST_TABLE where TEST_COL like 'http://www.hots%';

COUNT(*)
----------
39700

Elapsed: 00:00:06.18

Execution Plan
----------------------------------------------------------
Plan hash value: 870163320  (same plan as before, but faster now)

--------------------------------------------------------------------------------
| Id | Operation         | Name      | Rows | Bytes | Cost (%CPU) | Time     |
--------------------------------------------------------------------------------
|  0 | SELECT STATEMENT  |           |    1 |   117 |      6  (0) | 00:00:01 |
|  1 |  SORT AGGREGATE   |           |    1 |   117 |             |          |
|* 2 |   INDEX RANGE SCAN| TEST_INDX |  115 | 13455 |      6  (0) | 00:00:01 |
--------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
15 recursive calls
0 db block gets
592 consistent gets
588 physical reads

0 redo size
516 bytes sent via SQL*Net to client
468 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)



SQL> SELECT name,height,lf_rows,lf_blks,del_lf_rows,distinct_keys,used_space FROM INDEX_STATS;

NAME HEIGHT LF_ROWS LF_BLKS DEL_LF_ROWS
------------------------------ ---------- ---------- ---------- -----------
TEST_INDX 4 865410 15434 0



This clearly indicates the rebuild helped my query's performance. The height of the index is reduced to 4 and DEL_LF_ROWS is 0.


Now coming to the second part of Harvinder's comment.

Possible ways of Rebuilding.

- Online/Offline.

ONLINE rebuild (8i onwards):

SQL> alter index <index_name> rebuild online;

This allows parallel DML to go on while the index is being rebuilt. Remember, an online rebuild requires more space, as it creates a new index before dropping the old one.

An index rebuild is primarily a 3-step process:


Prepare phase: Oracle locks the table for a fraction of a second (not actually felt) to build the index structure and populate the data dictionary.

Build phase: Most of the work is done in this phase. Oracle populates the index from the table while allowing parallel DML; the parallel DML uses a temporary journal table (a B-tree-index-like structure) to host the entries while the new index is being populated.

Merge phase: Now the final stage: Oracle merges the new index with the journal table and drops the old index. Even during the merge phase, any changes to the table are recorded in the journal table, and they get merged towards the end of this phase.

From 9i onwards, the online index rebuild feature also covers reverse key, function-based and key-compressed indexes.

Offline index rebuild:

SQL> alter index <index_name> rebuild;

This is the conventional rebuild which was used (and is still available) before 8i. In this process, Oracle drops the old index and creates a new one; no extra space is required, but parallel DML is not supported.






Should I rebuild or coalesce my indexes ?




There is one more critical aspect I wanted to discuss: the cost/effort of rebuilding indexes.

Rebuilding an index is quite a costly operation, and you must evaluate the benefit vs. the effort before rebuilding an index.

Rebuilding an index (online) requires additional resources: space, CPU usage, and time.

Here is one more option, which is less used, or probably less popular: "coalesce".

Rebuild vs. Coalesce


Rebuild


* Can move an index to a different tablespace

* Resource-consuming process

* Takes more time

* Creates a new tree

* Shortens the height of an index if it was increased due to DML activity

* Can affect future DMLs, because the index becomes compact and has to extend dynamically for future DMLs


Coalesce


* Cannot move an index to a different tablespace

* Comparatively less resource-consuming

* Takes relatively less time

* Only merges the adjacent free space between leaf blocks within a branch

* Doesn't shorten the height of the index

* Since coalesce doesn't affect the total size and only frees up the unused space, it doesn't affect future DMLs



Coalescing the index frees up space in adjacent leaf blocks within a branch block. This way the number of blocks or extents the index is using will not change, but there will be more free blocks, which can be used for future inserts or updates.

In an OLTP environment, where data is highly volatile, coalesce is better because it doesn't shrink the index and the free space remains with the index segment.
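
The syntax, using the index from the earlier example:

SQL> alter index TEST_INDX coalesce;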

Oracle10G RMAN Database Duplication

[ Creating a duplicate database using RMAN ]

The V10gREL5 database is in noarchivelog mode. The duplication is being made from
closed-database consistent backups. With archivelog mode databases you can
use the "set until time | scn | sequence" clause to tell RMAN where to end
the incomplete recovery.

TARGET : V10gREL5
AUXILIARY : RMAN10

[ SQLNET setup for connectivity ]
########################################################################################
# We must use a dedicated listener configuration for RMAN. Using instance registration
# requires the database to be mounted to register with the listener. Since the AUX is
# where we'll connect using SQL*Net, we require SYSDBA access to the nomount instance.
# This example uses the same server for duplication, but the procedure can be used for
# remote duplication as well. If you are using a disk backup solution and duplicate to a
# remote node, you must first copy the backupsets from the original host's backup
# location to the same mount and path on the remote server. Because duplication
# uses auxiliary channels, the files must be where the I/O pipe is allocated. So the
# I/O will take place on the remote node and disk backups must be locally available.
#########################################################################################

Listener.ora:
=============

LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = stsmesun1)(PORT = 1521))
)
)
)


SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /database/10i/app/oracle/product/10g_Prod_REL5)
(PROGRAM = extproc)
)
(SID_DESC =
(GLOBAL_DBNAME = V10gREL5)
(ORACLE_HOME = /database/10i/app/oracle/product/10g_Prod_REL5)
(SID_NAME = V10gREL5)
)
(SID_DESC =
(GLOBAL_DBNAME = RMAN10)
(ORACLE_HOME = /database/10i/app/oracle/product/10g_Prod_REL5)
(SID_NAME = RMAN10)
)
)

tnsnames.ora:
=============

RMAN10 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = stsmesun1)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = RMAN10)
)
)

V10R5 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = stsmesun1)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = V10gREL5)
)
)


# Target Database Files:

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/database/10i/app/oracle/oradata/V10gREL5/system01.dbf
/database/10i/app/oracle/oradata/V10gREL5/undotbs01.dbf
/database/10i/app/oracle/oradata/V10gREL5/sysaux01.dbf
/database/10i/app/oracle/oradata/V10gREL5/users01.dbf


[ Create the Auxiliary Database directories needed ]

% cd $ORACLE_HOME/dbs

# Note: I am using a single directory for my datafiles. You can use the
# parameters db_file_name_convert and log_file_name_convert and they will
# generate the "set newname for datafile" syntax needed to change the
# controlfile. These parameters accommodate up to 5 different file placement
# locations using:

db_file_name_convert = ('/old/path1', '/new/path1',
'/old/path2', '/new/path2',
'/old/path3', '/new/path3')
# (up to 5 location pairs from Oracle9i onwards)
log_file_name_convert = ('/old/path1', '/new/path1',
'/old/path2', '/new/path2',
'/old/path3', '/new/path3')
# (up to 5 location pairs from Oracle9i onwards)

# You can also create your own "set newname for datafile" syntax for all your
# files, skipping these parameters altogether. You can still use the
# log_file_name_convert parameter, or you can use the 'logfile' clause to
# specify where the logs will be created and at what size.
#
# Auxiliary datafile directory for this exercise:

"/database/10i/marrocha/RMAN10/oradata"

[ Create the Auxiliary init.ora ]

# the controlfile is created during duplication with the path and name given here
#
# initRMAN10.ora

db_name = RMAN10
db_block_size = 8192
compatible = 10.1.0.1.0
remote_login_passwordfile = exclusive
control_files = ('/database/10i/marrocha/RMAN10/oradata/control01.ctl',
'/database/10i/marrocha/RMAN10/oradata/control02.ctl')
db_file_name_convert = ('/database/10i/app/oracle/oradata/V10gREL5',
'/database/10i/marrocha/RMAN10/oradata')
log_file_name_convert = ('/database/10i/app/oracle/oradata/V10gREL5',
'/database/10i/marrocha/RMAN10/oradata')


# create a passwordfile for remote connections as sysdba

% orapwd file=orapwRMAN10 password=<password>

% sqlplus /nolog
SQL> connect / as sysdba
SQL> startup nomount pfile=$ORACLE_HOME/dbs/initRMAN10.ora
SQL> exit


[ RMAN Duplication from tape backup using the Oracle disk API ]

# This is to provide an example of simulating writing to tape using RMAN and
# the Oracle disk API. RMAN Target Configuration:

% rman
RMAN> connect target

connected to target database: V10GREL5 (DBID=2510891965)

# Use the RMAN configure command to create an optimal configuration for your
# business. The channels are configured to allocate from device type SBT by
# default. Since the channels are configured here, we do not need to allocate
# any channels in the RMAN scripts.
# I/O only takes place over the allocated channel. If an AUX channel is allocated,
# only the auxiliary will perform I/O, so there is no worry about overwriting the
# target database's production files. As an added safeguard, an error is raised on
# any attempt to overwrite a target file while it is online.

RMAN> show all;

using target database controlfile instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 3;
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE 'SBT_TAPE' TO 'V10GREL5_%F';
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=oracle.disksbt,
ENV=(BACKUP_DIR=/backup/rman/tape)';
CONFIGURE MAXSETSIZE TO UNLIMITED;
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/backup/rman/snapcf_V10gREL5.f';
RMAN> exit


[ Start the Duplication ]

# set the env to the auxiliary to "startup nomount"

setenv ORACLE_SID RMAN10 # csh
or
ORACLE_SID=RMAN10 export ORACLE_SID # ksh

% sqlplus /nolog
SQL> connect / as sysdba
Connected to an idle instance
SQL> startup nomount pfile=$ORACLE_HOME/dbs/initRMAN10.ora
SQL> exit

# Set your SID back to the TARGET for duplication. The information in the target
# is used by the duplicate to fill in missing information like maxopenfiles,
# maxlogfiles and other values which are only changed when recreating the control
# file. Execute RMAN from the target home. The auxiliary should be a sqlnet
# connection.

[stsmesun1]/home/beta/marrocha> rman trace dup_rman10.log

Recovery Manager: Release 10.1.0.2.0 - Production

Copyright (c) 1995, 2003, Oracle. All rights reserved.

RMAN> connect target

connected to target database: V10GREL5 (DBID=2510891965)

RMAN> connect auxiliary sys/rman10@RMAN10

connected to auxiliary database: RMAN10 (not mounted)

RMAN> duplicate target database to RMAN10 device type sbt;

... snip from output...
contents of Memory Script:
{
Alter clone database open resetlogs;
}
executing Memory Script

database opened
Finished Duplicate Db at 09-DEC-03


To complete the process, remove or comment out the db_file_name_convert and log_file_name_convert parameters and restart the database. Your duplicate database is ready for use.

RMAN 'Duplicate Database' Feature in Oracle9i / Oracle 10G

Applies to:
Oracle Server - Enterprise Edition - Version: 9.0.1.4
Oracle Server - Personal Edition - Version: 9.0
Oracle Server - Standard Edition - Version: 9.0
Information in this document applies to any platform.
"Checked for relevance on 20-March-2006"

Purpose
The purpose of this document is to introduce the RMAN 'duplicate database' feature of Oracle9i and Oracle 10G.

Scope and Application
This note is intended for DBAs and Support Personnel.

RMAN 'Duplicate Database' Feature in Oracle9i / Oracle 10G


You can use the RMAN DUPLICATE command to create a duplicate database from target database backups while still retaining the original target database. The duplicate database can be either identical to the original database or contain only a subset of the original tablespaces.

A RAC TARGET database can be duplicated as well; the procedure is the same as below. If the auxiliary instance needs to be a RAC database as well, then run the duplication to a single instance first and convert the auxiliary to RAC after the duplication has succeeded.

To prepare for database duplication, you must first create an auxiliary instance. For the duplication to work, you must connect RMAN to both the target (primary) database and an auxiliary instance started in NOMOUNT mode.

You must have at least one auxiliary channel allocated on the auxiliary instance. The principal work of the duplication is performed by the auxiliary channel, which starts a server session on the auxiliary host. This channel then restores the necessary backups of the primary database, uses them to create the duplicate database, and initiates recovery.

As part of the duplicating operation, RMAN manages the following:

1. Restores the target datafiles to the duplicate database and performs incomplete recovery by using all available incremental backups and archived logs.

2. Shuts down and starts the auxiliary database.

3. Opens the duplicate database with the RESETLOGS option after incomplete recovery to create the online redo logs.

4. Generates a new, unique DBID for the auxiliary database.


During duplication, RMAN must perform incomplete recovery because the online redo logs in the target are not backed up and cannot be applied to the auxiliary database. The furthest that RMAN can go in recovery of the duplicate database is the most recent redo log archived by the target database.

When duplicating a database, you can do the following:

1. Run the DUPLICATE command with or without a recovery catalog

2. Skip read-only tablespaces with the SKIP READONLY clause. Read-only tablespaces are included by default. If you omit them, then you can add them later.

3. Exclude tablespaces from the auxiliary database with the SKIP TABLESPACE clause. You cannot skip the SYSTEM tablespace or tablespaces containing rollback or undo segments.

4. Create the auxiliary database in a new host. If the directory structure is the same on the new host, then you can specify the NOFILENAMECHECK option and reuse the target datafile filenames for the auxiliary datafiles.

5. Use the SET UNTIL command or DUPLICATE command with the UNTIL clause when creating the auxiliary database to recover it to a noncurrent time. By default, the DUPLICATE command creates the database by using the most recent backups of the target database and then performs recovery to the most recent consistent point contained in the incremental backups and archived logs.


You can register the auxiliary database in the same recovery catalog as the target database. This is possible because RMAN gives the duplicate database a new DBID during duplication.

Preparing the Auxiliary Instance for Duplication:
The instance name used for the AUXILIARY instance in this example is AUX, but it can be any other valid instance name.

Basic Steps



Perform these tasks before performing RMAN duplication:

Task 1: Create an Oracle Password File for the Auxiliary Instance
(first set the environment: setenv ORACLE_SID AUX)

Task 2: Ensure SQL*Net Connectivity to the Auxiliary Instance.
A SQL*Net connection to the auxiliary instance is not strictly required, but it is recommended, as the SQL*Net setup is easier for the auxiliary instance than for the target. RMAN connects to the target and auxiliary instances with SYSDBA privileges, and a SQL*Net connection using SYSDBA privileges requires a password file.

Task 3: Create an Initialization Parameter File for the Auxiliary Instance
Mandatory initialization parameter settings for the auxiliary database:


db_block_size = <same as the target database>
DB_NAME=AUX
compatible = 9.2.0.0 # should be the same as the target
CONTROL_FILES=(/dup/oracle/oradata/trgt/control01.ctl,
/dup/oracle/oradata/trgt/control02.ctl)
#DB_FILE_NAME_CONVERT=('/oracle/oradata/trgt/','/dup/oracle/oradata/trgt/')
#LOG_FILE_NAME_CONVERT=('/oracle/oradata/trgt/redo','/dup/oracle/oradata/trgt/redo')


Task 4:
Start the Auxiliary Instance NOMOUNT

SQL> CONNECT SYS/oracle@aux AS SYSDBA
STARTUP FORCE NOMOUNT


Task 5: Mount or Open the Target Database
Task 6: Make Sure You Have the Necessary Backups and Archived Redo Logs
Task 7: Allocate Auxiliary Channels if Automatic Channels Are Not Configured

Start RMAN with a connection to the target database, the auxiliary instance, and (if you use one) the recovery catalog database. You can start the RMAN executable on any host as long as it can connect to all the instances.

Note: If the auxiliary instance requires a client-side initialization parameter file, then this file must exist on the same host that runs the RMAN executable.

If automatic channels are not configured, then before issuing the DUPLICATE command, manually allocate at least one auxiliary channel within the same RUN block. If the backups reside on disk, then the more channels you allocate, the faster the duplication will be. For tape backups, limit the number of channels to the number of devices available for the operation.

This example assumes the following:


* Using a recovery catalog.
* The target database is on host1 and contains eight datafiles.
* Duplicate the target to database aux on the same host.
* Store the datafiles for AUX under /export/home/oracle/AUX/ subdirectory.
* Exclude tablespace INDX from the duplicate database, but keep all of the other tablespaces.
* Restore the duplicate db to a noncurrent time.
* Two online redo logs groups, each with two members of size 10 Mb.
* Configured the default device to disk.
* The auxiliary instance AUX has initialization parameter file in the default location (so the PFILE parameter is not necessary on the DUPLICATE command).
* Start RMAN from the AUX site.


RMAN> CONNECT TARGET system/system@R920.ca.oracle.com;
CONNECT CATALOG rman/rman@T920.ca.oracle.com;
CONNECT AUXILIARY SYS/oracle;

CONFIGURE CHANNEL DEVICE TYPE disk CLEAR;
CONFIGURE DEFAULT DEVICE TYPE TO disk;
CONFIGURE DEVICE TYPE disk PARALLELISM 3;

# note that a RUN command is necessary because you can only execute
# SET NEWNAME within a RUN command
RUN {
# the DUPLICATE command uses an automatic disk channel
set until time "to_date('Jan 29 2003 10:50:00','Mon DD YYYY HH24:MI:SS')";
SET NEWNAME FOR DATAFILE 1 TO '/export/home/oracle/AUX/system01.dbf';
SET NEWNAME FOR DATAFILE 2 TO '/export/home/oracle/AUX/undotbs01.dbf';
SET NEWNAME FOR DATAFILE 3 TO '/export/home/oracle/AUX/example01.dbf';

SET NEWNAME FOR DATAFILE 5 TO '/export/home/oracle/AUX/tools01.dbf';
SET NEWNAME FOR DATAFILE 6 TO '/export/home/oracle/AUX/users01.dbf';

SET NEWNAME FOR DATAFILE 7 TO '/export/home/oracle/AUX/logmnrts.dbf';
SET NEWNAME FOR DATAFILE 8 TO '/export/home/oracle/AUX/marius01.dbf';


DUPLICATE TARGET DATABASE TO AUX
SKIP TABLESPACE indx
LOGFILE
GROUP 1 ('/export/home/oracle/AUX/redo01a.log',

'/export/home/oracle/AUX/redo01b.log') SIZE 10M REUSE,
GROUP 2 ('/export/home/oracle/AUX/redo02a.log',
'/export/home/oracle/AUX/redo02b.log') SIZE 10M REUSE;
}



And this is what is going on:

executing command: SET until clause
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME

Starting Duplicate Db at 31-JAN-03
using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_2
using channel ORA_AUX_DISK_3
Datafile 4 skipped by request

printing stored script: Memory Script
{
set until scn 5968565;
set newname for datafile 1 to
"/export/home/oracle/AUX/system01.dbf";
set newname for datafile 2 to
"/export/home/oracle/AUX/undotbs01.dbf";
set newname for datafile 3 to
"/export/home/oracle/AUX/example01.dbf";
set newname for datafile 5 to
"/export/home/oracle/AUX/tools01.dbf";
set newname for datafile 6 to
"/export/home/oracle/AUX/users01.dbf";
set newname for datafile 7 to
"/export/home/oracle/AUX/logmnrts.dbf";
set newname for datafile 8 to
"/export/home/oracle/AUX/marius01.dbf";
restore
check readonly
clone database
skip tablespace INDX ;
}
executing script: Memory Script

executing command: SET until clause
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME

Starting restore at 31-JAN-03

using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_2
using channel ORA_AUX_DISK_3
channel ORA_AUX_DISK_1: starting datafile backupset restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to /export/home/oracle/AUX/system01.dbf
restoring datafile 00003 to /export/home/oracle/AUX/example01.dbf
restoring datafile 00007 to /export/home/oracle/AUX/logmnrts.dbf
channel ORA_AUX_DISK_2: starting datafile backupset restore
channel ORA_AUX_DISK_2: specifying datafile(s) to restore from backup set
restoring datafile 00002 to /export/home/oracle/AUX/undotbs01.dbf
restoring datafile 00005 to /export/home/oracle/AUX/tools01.dbf
restoring datafile 00006 to /export/home/oracle/AUX/users01.dbf
restoring datafile 00008 to /export/home/oracle/AUX/marius01.dbf
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=/u01/app/oracle/admin/R920/backup/DB_R920_01ee3suk_1_1 tag=TAG2003129T104747 params=NULL
channel ORA_AUX_DISK_1: restore complete
channel ORA_AUX_DISK_2: restored backup piece 1
piece handle=/u01/app/oracle/admin/R920/backup/DB_R920_02ee3suk_1_1 tag=TAG2003129T104747 params=NULL
channel ORA_AUX_DISK_2: restore complete
Finished restore at 31-JAN-03
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "AUX" RESETLOGS ARCHIVELOG
MAXLOGFILES 5
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 1
MAXLOGHISTORY 226
LOGFILE
GROUP 1 ( '/export/home/oracle/AUX/redo01a.log', '/export/home/oracle/AUX/redo01b.log' ) SIZE 10m REUSE,
GROUP 2 ( '/export/home/oracle/AUX/redo02a.log', '/export/home/oracle/AUX/redo02b.log' ) SIZE 10m REUSE
DATAFILE
'/export/home/oracle/AUX/system01.dbf'
CHARACTER SET WE8ISO8859P1

printing stored script: Memory Script
{
switch clone datafile all;
}
executing script: Memory Script

datafile 2 switched to datafile copy
input datafilecopy recid=1 stamp=484764999 filename=/export/home/oracle/AUX/undotbs01.dbf
datafile 3 switched to datafile copy
input datafilecopy recid=2 stamp=484764999 filename=/export/home/oracle/AUX/example01.dbf
datafile 5 switched to datafile copy
input datafilecopy recid=3 stamp=484764999 filename=/export/home/oracle/AUX/tools01.dbf
datafile 6 switched to datafile copy
input datafilecopy recid=4 stamp=484764999 filename=/export/home/oracle/AUX/users01.dbf
datafile 7 switched to datafile copy
input datafilecopy recid=5 stamp=484764999 filename=/export/home/oracle/AUX/logmnrts.dbf
datafile 8 switched to datafile copy
input datafilecopy recid=6 stamp=484764999 filename=/export/home/oracle/AUX/marius01.dbf

printing stored script: Memory Script
{
set until time "to_date('Jan 29 2003 10:50:00','Mon DD YYYY HH24:MI:SS')";
recover
clone database
delete archivelog
;
}
executing script: Memory Script

executing command: SET until clause

Starting recover at 31-JAN-03
using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_2
using channel ORA_AUX_DISK_3
datafile 4 not processed because file is offline

starting media recovery

channel ORA_AUX_DISK_1: starting archive log restore to default destination
channel ORA_AUX_DISK_1: restoring archive log
archive log thread=1 sequence=43
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=/u01/app/oracle/admin/R920/backup/AL_R920_03ee4185_1_1 tag=TAG2003129T120109 params=NULL
channel ORA_AUX_DISK_1: restore complete
archive log filename=/u01/app/oracle/product/9.2.0/dbs/arch1_43.dbf thread=1 sequence=43
channel clone_default: deleting archive log(s)
archive log filename=/u01/app/oracle/product/9.2.0/dbs/arch1_43.dbf recid=1 stamp=484765017
media recovery complete
Finished recover at 31-JAN-03

printing stored script: Memory Script
{
shutdown clone;
startup clone nomount ;
}
executing script: Memory Script

database dismounted
Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area   52167600 bytes
Fixed Size                   730032 bytes
Variable Size              50331648 bytes
Database Buffers             819200 bytes
Redo Buffers                 286720 bytes
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "AUX" RESETLOGS ARCHIVELOG
MAXLOGFILES 5
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 1
MAXLOGHISTORY 226
LOGFILE
GROUP 1 ( '/export/home/oracle/AUX/redo01a.log', '/export/home/oracle/AUX/redo01b.log' ) SIZE 10m REUSE,
GROUP 2 ( '/export/home/oracle/AUX/redo02a.log', '/export/home/oracle/AUX/redo02b.log' ) SIZE 10m REUSE
DATAFILE
'/export/home/oracle/AUX/system01.dbf'
CHARACTER SET WE8ISO8859P1


printing stored script: Memory Script
{
catalog clone datafilecopy "/export/home/oracle/AUX/undotbs01.dbf";
catalog clone datafilecopy "/export/home/oracle/AUX/example01.dbf";
catalog clone datafilecopy "/export/home/oracle/AUX/tools01.dbf";
catalog clone datafilecopy "/export/home/oracle/AUX/users01.dbf";
catalog clone datafilecopy "/export/home/oracle/AUX/logmnrts.dbf";
catalog clone datafilecopy "/export/home/oracle/AUX/marius01.dbf";
switch clone datafile all;
}
executing script: Memory Script

cataloged datafile copy
datafile copy filename=/export/home/oracle/AUX/undotbs01.dbf recid=1 stamp=484765037
cataloged datafile copy
datafile copy filename=/export/home/oracle/AUX/example01.dbf recid=2 stamp=484765038
cataloged datafile copy
datafile copy filename=/export/home/oracle/AUX/tools01.dbf recid=3 stamp=484765039
cataloged datafile copy
datafile copy filename=/export/home/oracle/AUX/users01.dbf recid=4 stamp=484765039
cataloged datafile copy
datafile copy filename=/export/home/oracle/AUX/logmnrts.dbf recid=5 stamp=484765039
cataloged datafile copy
datafile copy filename=/export/home/oracle/AUX/marius01.dbf recid=6 stamp=484765039
datafile 2 switched to datafile copy
input datafilecopy recid=1 stamp=484765037 filename=/export/home/oracle/AUX/undotbs01.dbf
datafile 3 switched to datafile copy
input datafilecopy recid=2 stamp=484765038 filename=/export/home/oracle/AUX/example01.dbf
datafile 5 switched to datafile copy
input datafilecopy recid=3 stamp=484765039 filename=/export/home/oracle/AUX/tools01.dbf
datafile 6 switched to datafile copy
input datafilecopy recid=4 stamp=484765039 filename=/export/home/oracle/AUX/users01.dbf
datafile 7 switched to datafile copy
input datafilecopy recid=5 stamp=484765039 filename=/export/home/oracle/AUX/logmnrts.dbf
datafile 8 switched to datafile copy
input datafilecopy recid=6 stamp=484765039 filename=/export/home/oracle/AUX/marius01.dbf

printing stored script: Memory Script
{
Alter clone database open resetlogs;
}
executing script: Memory Script

database opened

printing stored script: Memory Script
{
# drop offline and skipped tablespaces
sql clone "drop tablespace INDX including contents";
}
executing script: Memory Script

sql statement: drop tablespace INDX including contents
Finished Duplicate Db at 31-JAN-03

RMAN>

Oracle 10G new background processes

With 10g many new background processes were introduced.
This note highlights those.

MMAN

Memory Manager (MMAN) coordinates the sizing of the different memory components within the SGA. MMAN keeps track of the sizes of the components and allocates/de-allocates memory based on their usage. This process is used only when you enable Automatic Shared Memory Management.



RVWR

RVWR (Recovery Writer) is the process responsible for writing flashback logs, which store pre-images of data blocks.

These logs can be used to flash the database back to a past point in time. This proves very handy in situations of logical data corruption, such as an accidental drop/truncate of a table or the commit of an unwanted transaction.



CTWR

Change Tracking Writer (CTWR) is a process introduced in 10g to track changes to blocks, in order to assist fast RMAN incremental backups.

MMNL

The Memory Monitor Light (MMNL) process, introduced in 10g, is responsible for writing out the statistics buffer to disk as needed. It works with the Automatic Workload Repository (AWR).




MMON

The Memory Monitor (MMON) process, introduced in 10g, is responsible for gathering statistics for the Automatic Workload Repository. This helps Oracle perform automatic problem detection and self-tuning.
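As a quick way to confirm that MMON is populating AWR, you can list the recent snapshots from DBA_HIST_SNAPSHOT (a sketch, assuming default AWR settings):

SELECT snap_id, begin_interval_time, end_interval_time
FROM dba_hist_snapshot
ORDER BY snap_id;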


M000

These are the MMON background slave (m000) processes, spawned by MMON as needed to perform work on its behalf.


RBAL

RBAL (Re-balancer) is the ASM-related process that performs rebalancing of disk resources controlled by ASM.

ARBx

These processes are managed by the RBAL process and are used to do the actual rebalancing of ASM controlled disk resources. The number of ARBx processes invoked is directly influenced by the asm_power_limit parameter.

ASMB

The ASMB process is used to pass information to and from the Cluster Synchronization Services used by ASM to manage the disk resources. It is also used to update statistics and provide a heartbeat mechanism.
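To see which of these background processes are actually running in your instance, you can query V$BGPROCESS; a minimal sketch (the paddr <> '00' predicate is the usual idiom for filtering to started processes):

SELECT name, description
FROM v$bgprocess
WHERE paddr <> '00'
ORDER BY name;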

Friday, June 5, 2009

TKPROF And Oracle Trace



The TKPROF program converts Oracle trace files into a more readable form. If you have a problem query you can use TKPROF to get more information. To get the most out of the utility you must enable timed statistics, either by setting the TIMED_STATISTICS init.ora parameter or by issuing the following command:

ALTER SYSTEM SET TIMED_STATISTICS = TRUE;
If a suitable plan table is not present, one can be created by running the following as the SYS user:

@ORACLE_HOME\rdbms\admin\utlxplan.sql
CREATE PUBLIC SYNONYM PLAN_TABLE FOR SYS.PLAN_TABLE;
GRANT SELECT, INSERT, UPDATE, DELETE ON SYS.PLAN_TABLE TO PUBLIC;
With this done we can trace a statement:

ALTER SESSION SET SQL_TRACE = TRUE;

SELECT COUNT(*)
FROM dual;

ALTER SESSION SET SQL_TRACE = FALSE;
The resulting trace file will be located in the USER_DUMP_DEST directory. It can then be interpreted using TKPROF at the command prompt as follows:

TKPROF <trace_file> <output_file> explain=user/password@service table=sys.plan_table
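For example, with a hypothetical trace file prod_ora_1234.trc taken from USER_DUMP_DEST, a complete invocation might look like this (sys=no suppresses recursive SYS statements; sort=prsela orders statements by elapsed parse time):

tkprof prod_ora_1234.trc prod_ora_1234.txt explain=user/password@service table=sys.plan_table sys=no sort=prsela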
The resulting output file contains the following type of information for all SQL statements processed, including the ALTER SESSION commands:

********************************************************************************
count    = number of times OCI procedure was executed
cpu      = cpu time in seconds executing
elapsed  = elapsed time in seconds executing
disk     = number of physical reads of buffers from disk
query    = number of buffers gotten for consistent read
current  = number of buffers gotten in current mode (usually for update)
rows     = number of rows processed by the fetch or execute call
********************************************************************************

SELECT COUNT(*)
FROM dual

call     count    cpu    elapsed    disk    query    current    rows
-------  -----   -----   -------   -----   ------   --------   -----
Parse        1    0.02      0.02       0        0          0       0
Execute      1    0.00      0.00       0        0          0       0
Fetch        2    0.00      0.00       0        1          4       1
-------  -----   -----   -------   -----   ------   --------   -----
total        4    0.02      0.02       0        1          4       1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 121

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE
      1  TABLE ACCESS FULL DUAL
Things to look out for in the trace include:

When tracing lots of statements at once, such as batch processes, you can quickly discard those statements which have acceptable CPU times. It's often better to focus on the statements that are taking most of the CPU time.
Inefficient statements are mostly associated with a high number of block visits. The query column indicates block visits for read consistency, including all query and subquery processing. The current column indicates visits not related to read consistency, including segment headers and blocks that are going to be updated.
The number of blocks read from disk is shown in the disk column. Since disk reads are slower than memory reads you would expect this value to be significantly lower than the sum of the query and current columns. If it is not you may have an issue with your buffer cache.
Locking problems and inefficient PL/SQL loops may lead to high cpu/elapsed values even when block visits are low.
Multiple parse calls for a single statement imply a library cache issue.
Once you've identified your problem statements you can check the execution plan to see why the statement is performing badly.

Wednesday, June 3, 2009

DBA_SCHEDULER data dictionary tables


Table Name                        Description
--------------------------------  --------------------------------------------------------------
DBA_OBJECTS                       All objects in the database
DBA_SCHEDULER_PROGRAMS            All scheduler programs in the database
DBA_SCHEDULER_JOBS                All scheduler jobs in the database
DBA_SCHEDULER_JOB_CLASSES         All scheduler classes in the database
DBA_SCHEDULER_WINDOWS             All scheduler windows in the database
DBA_SCHEDULER_PROGRAM_ARGS        All arguments of all scheduler programs in the database
DBA_SCHEDULER_JOB_ARGS            All arguments with set values of all scheduler jobs in the database
DBA_SCHEDULER_JOB_LOG             Logged information for all scheduler jobs
DBA_SCHEDULER_JOB_RUN_DETAILS     The details of a job run
DBA_SCHEDULER_WINDOW_LOG          Logged information for all scheduler windows
DBA_SCHEDULER_WINDOW_DETAILS      The details of a window
DBA_SCHEDULER_WINDOW_GROUPS       All scheduler window groups in the database
DBA_SCHEDULER_WINGROUP_MEMBERS    Members of all scheduler window groups in the database
DBA_SCHEDULER_SCHEDULES           All schedules in the database
DBA_SCHEDULER_GLOBAL_ATTRIBUTE    All scheduler global attributes
DBA_SCHEDULER_CHAINS              All scheduler chains in the database
DBA_SCHEDULER_CHAIN_RULES         All rules from scheduler chains in the database
DBA_SCHEDULER_CHAIN_STEPS         All steps of scheduler chains in the database
DBA_SCHEDULER_RUNNING_CHAINS      All steps of all running chains in the database
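For example, a quick health check of scheduler jobs against the DBA_SCHEDULER_JOBS view might look like this (a sketch; the columns shown exist in 10g and later):

SELECT job_name, enabled, state, last_start_date, next_run_date
FROM dba_scheduler_jobs
ORDER BY job_name;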

Monday, May 18, 2009

ORACLE STATISTICS_LEVEL

STATISTICS_LEVEL


Property         Description
---------------  ---------------------------------------------
Parameter type   String
Syntax           STATISTICS_LEVEL = { ALL | TYPICAL | BASIC }
Default value    TYPICAL
Modifiable       ALTER SESSION, ALTER SYSTEM


STATISTICS_LEVEL specifies the level of collection for database and operating system statistics. The Oracle Database collects these statistics for a variety of purposes, including making self-management decisions.

The default setting of TYPICAL ensures collection of all major statistics required for database self-management functionality and provides best overall performance. The default value should be adequate for most environments.

When the STATISTICS_LEVEL parameter is set to ALL, additional statistics are added to the set of statistics collected with the TYPICAL setting. The additional statistics are timed OS statistics and plan execution statistics.

Setting the STATISTICS_LEVEL parameter to BASIC disables the collection of many of the important statistics required by Oracle Database features and functionality, including:

* Automatic Workload Repository (AWR) Snapshots
* Automatic Database Diagnostic Monitor (ADDM)
* All server-generated alerts
* Automatic SGA Memory Management
* Automatic optimizer statistics collection
* Object level statistics
* End to End Application Tracing (V$CLIENT_STATS)
* Database time distribution statistics (V$SESS_TIME_MODEL and V$SYS_TIME_MODEL)
* Service level statistics
* Buffer cache advisory
* MTTR advisory
* Shared pool sizing advisory
* Segment level statistics
* PGA Target advisory
* Timed statistics
* Monitoring of statistics

When the STATISTICS_LEVEL parameter is modified by ALTER SYSTEM, all advisories or statistics are dynamically turned on or off, depending on the new value of STATISTICS_LEVEL. When modified by ALTER SESSION, the following advisories or statistics are turned on or off in the local session only. Their system-wide state is not changed:

* Timed statistics
* Timed OS statistics
* Plan execution statistics

The V$STATISTICS_LEVEL view displays information about the status of the statistics or advisories controlled by the STATISTICS_LEVEL parameter.


The initialization parameter STATISTICS_LEVEL controls quite a number of functionalities in the Oracle 10g database.
Its default value is TYPICAL.

By setting it to BASIC you switch off functionalities like automatic statistics collection for performance baselines, statistics sampling for the active session history, table monitoring and many more. If the parameter is set to BASIC it is still possible to manually set the parameters DB_CACHE_ADVICE, TIMED_STATISTICS and TIMED_OS_STATISTICS.

The parameter can also be set to ALL, which causes the collection of additional timed operating system statistics and row source execution statistics.
Oracle strongly recommends collecting statistics at the TYPICAL level at a minimum.

STATISTICS_LEVEL can be set at system as well as at session level. If set at session level, the following advisories or statistics get turned on or off, but their systemwide setting is not changed (see the example after this list):
- Timed Statistics
- Timed OS Statistics
- Plan Execution Statistics
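For example, to capture plan execution statistics for a single troublesome session without changing the system-wide setting:

ALTER SESSION SET statistics_level = ALL;
-- run the statement of interest, then inspect V$SQL_PLAN_STATISTICS
ALTER SESSION SET statistics_level = TYPICAL;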

In order to find out which functionalities are controlled by STATISTICS_LEVEL you can query the dynamic performance view v$statistics_level which displays the status of the statistics/advisories controlled by STATISTICS_LEVEL:

SYS @10gR2 SQL > show parameter statistics_l

NAME                   TYPE        VALUE
---------------------- ----------- ----------
statistics_level       string      TYPICAL

SYS @10gR2 SQL > DESC v$statistics_level
Name Null? Type
----------------------------------------- -------- ----------------------------
STATISTICS_NAME VARCHAR2(64)
DESCRIPTION VARCHAR2(4000)
SESSION_STATUS VARCHAR2(8)
SYSTEM_STATUS VARCHAR2(8)
ACTIVATION_LEVEL VARCHAR2(7)
STATISTICS_VIEW_NAME VARCHAR2(64)
SESSION_SETTABLE VARCHAR2(3)

SYS @10gR2 SQL > SELECT STATISTICS_NAME, ACTIVATION_LEVEL, SYSTEM_STATUS, STATISTICS_VIEW_NAME, SESSION_SETTABLE
  2  FROM v$statistics_level;

STATISTICS_NAME                        ACTIVAT  SYSTEM_S  STATISTICS_VIEW_NAME       SES
-------------------------------------  -------  --------  -------------------------  ---
Buffer Cache Advice                    TYPICAL  ENABLED   V$DB_CACHE_ADVICE          NO
MTTR Advice                            TYPICAL  ENABLED   V$MTTR_TARGET_ADVICE       NO
Timed Statistics                       TYPICAL  ENABLED                              YES
Timed OS Statistics                    ALL      DISABLED                             YES
Segment Level Statistics               TYPICAL  ENABLED   V$SEGSTAT                  NO
PGA Advice                             TYPICAL  ENABLED   V$PGA_TARGET_ADVICE        NO
Plan Execution Statistics              ALL      DISABLED  V$SQL_PLAN_STATISTICS      YES
Shared Pool Advice                     TYPICAL  ENABLED   V$SHARED_POOL_ADVICE       NO
Modification Monitoring                TYPICAL  ENABLED                              NO
Longops Statistics                     TYPICAL  ENABLED   V$SESSION_LONGOPS          NO
Bind Data Capture                      TYPICAL  ENABLED   V$SQL_BIND_CAPTURE         NO
Ultrafast Latch Statistics             TYPICAL  ENABLED                              NO
Threshold-based Alerts                 TYPICAL  ENABLED                              NO
Global Cache Statistics                TYPICAL  ENABLED                              NO
Active Session History                 TYPICAL  ENABLED   V$ACTIVE_SESSION_HISTORY   NO
Undo Advisor, Alerts and Fast Ramp up  TYPICAL  ENABLED   V$UNDOSTAT                 NO

16 rows selected.

Sunday, May 17, 2009

RMAN Restore From New Catalog location...........

Question -----

Suppose I have taken an RMAN backup on the D:/ drive. As this mount point is 100% full, I have moved the backup files to the E:/ drive. Now I wish to recover my database. How do I let RMAN know the new backup location for the restore?


ANSWER -----------


You have to catalog the new location backup in the recovery catalog.

Try to use below steps:


crosscheck backup;
delete noprompt expired backup;

catalog backuppiece '<new_location>\<backup_piece_name>'; {repeat this for each backup file}
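For example (hypothetical path and piece name):

RMAN> catalog backuppiece 'E:\backup\DB_PROD_01ab2cde_1_1';

On 10g and later you can also catalog everything under a directory in one step with CATALOG START WITH:

RMAN> catalog start with 'E:\backup\';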

Note: Be sure you really moved the backup files to the new location; otherwise, after entering delete noprompt expired backup the repository records for those files are gone and you will not be able to execute a full restore.

Wednesday, May 13, 2009

SGA_TARGET vs SGA_MAX_SIZE

SGA_MAX_SIZE


sga_max_size sets the maximum value for sga_target.
If sga_max_size is less than the sum of db_cache_size + log_buffer + shared_pool_size + large_pool_size at initialization time, then the value of sga_max_size is ignored and the computed SGA size is used instead (for example, with hypothetical values of db_cache_size=400M, shared_pool_size=150M, large_pool_size=40M and a 10M log buffer, an sga_max_size of 500M would be ignored).


SGA_TARGET


This parameter is new with Oracle 10g. It specifies the total amount of SGA memory available to an instance. Setting this parameter makes Oracle distribute the available memory among the various components - such as the shared pool (for SQL and PL/SQL), Java pool, large pool and buffer cache - as required.
This new feature is called Automatic Shared Memory Management. With ASMM, the parameters java_pool_size, shared_pool_size, large_pool_size and db_cache_size need not be specified explicitly anymore.
sga_target cannot be higher than sga_max_size.


SGA_TARGET is a database initialization parameter (introduced in Oracle 10g) that can be used for automatic SGA memory sizing.
Parameter description:
SGA_TARGET

Property         Description
---------------  --------------------------------------
Parameter type   Big integer
Syntax           SGA_TARGET = integer [K | M | G]
Default value    0 (SGA autotuning is disabled)
Modifiable       ALTER SYSTEM
Range of values  64 MB to operating system-dependent
Basic            Yes



SGA_TARGET provides the following:


• Single parameter for total SGA size
• Automatically sizes SGA components
• Memory is transferred to where most needed
• Uses workload information
• Uses internal advisory predictions
• STATISTICS_LEVEL must be set to TYPICAL


By using this one parameter we no longer need to set the individual SGA parameters (see the query after this list):
• DB_CACHE_SIZE (DEFAULT buffer pool)
• SHARED_POOL_SIZE (Shared Pool)
• LARGE_POOL_SIZE (Large Pool)
• JAVA_POOL_SIZE (Java Pool)
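To watch how ASMM actually distributes memory among the auto-tuned components, you can query v$sga_dynamic_components (a quick sketch):

SELECT component, current_size/1024/1024 AS size_mb
FROM v$sga_dynamic_components;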



Enable SGA_TARGET


SQL> show parameter sga_target

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sga_target big integer 0
SQL> show parameter sga_max_size

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sga_max_size big integer 600M



As we can see, automatic SGA tuning is not enabled, so we can enable it by setting the SGA_TARGET parameter value.
SQL> alter system set sga_target=500m;
System altered.

SQL> show parameter sga_target

NAME TYPE VALUE
------------------------------------ ----------- -------
sga_target big integer 500M



Resize SGA_TARGET


• SGA_TARGET is dynamic
• Can be increased up to SGA_MAX_SIZE
• Can be reduced until some component reaches its minimum size
• A change in the value of SGA_TARGET affects only the automatically sized components
SQL> show parameter sga_max_size
NAME TYPE VALUE
------------------------------------ ----------- -----------
sga_max_size big integer 600M
SQL> show parameter sga_target
NAME TYPE VALUE
------------------------------------ ----------- -----------
sga_target big integer 500M
We can resize it only up to 600M; if we try to increase it beyond 600M we will get an error.
SQL> alter system set sga_target=605m;



alter system set sga_target=605m *
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-00823: Specified value of sga_target greater than sga_max_size
For that we must first increase the SGA_MAX_SIZE parameter value. But we must restart the instance, because it is a STATIC parameter.
SQL> alter system set sga_max_size=956m scope=spfile;
System altered.

SQL> startup force
ORACLE instance started.
Total System Global Area 1000189952 bytes
Fixed Size 1337492 bytes
Variable Size 624953196 bytes
Database Buffers 369098752 bytes
Redo Buffers 4800512 bytes
Database mounted.
Database opened.

SQL> show parameter sga_max_size
NAME TYPE VALUE
------------------------------------ ----------- --------
sga_max_size big integer 956M

SQL> alter system set sga_target=900m;
System altered.




Disable SGA_TARGET


We can disable automatic SGA tuning by setting the SGA_TARGET parameter to 0.
SQL> alter system set sga_target=0;

System altered.

Understanding Shared Memory and Semaphores

Shared memory and semaphores are two important resources for an Oracle instance on Unix. An instance cannot start if it is unable to allocate what it needs.


DEFINITIONS


Shared memory is exactly that - a memory region that can be shared between different processes. Oracle uses shared memory for implementing the SGA, which needs to be visible to all database sessions.
A semaphore is a mechanism of inter-process communication. Semaphores allow Oracle server processes to stop and wait, and then to be notified when they should resume processing.

The semaphore counter is initialized to the number of available resources; as resources are allocated the counter is decremented, and as resources are freed it is incremented. A count of zero tells that no resources are available.
Semaphores can be thought of as flags (hence their name, semaphores). They are either on or off. A process can turn the flag on or off. If the flag is already on, processes that try to turn on the flag will sleep until the flag is off. Upon awakening, the process will reattempt to turn the flag on, possibly succeeding or possibly sleeping again. Such behaviour allows semaphores to be used in implementing a post-wait driver - a system where processes can wait for events (i.e. wait on turning on a semaphore) and post events (i.e. turning off a semaphore). This mechanism is used by Oracle to maintain concurrency control over the SGA, since it is writeable by all attached processes.


ALLOCATION IN SIMPLE TERMS



Shared memory required by the Oracle instance: on instance startup, the first things the instance does are to read the init.ora, start the background processes, and allocate the shared memory and semaphores required. The size of the SGA is calculated from various init.ora parameters; this is the amount of shared memory required. The SGA is broken into four sections: the fixed portion, which is constant in size; the variable portion, which varies in size depending on init.ora parameters; the redo block buffer, whose size is controlled by log_buffers; and the db block buffer, whose size is controlled by db_block_buffers. The size of the SGA is the sum of the sizes of the four portions. There is unfortunately no simple formula for determining the size of the variable portion.
Generally, the shared pool dominates all other parts of the variable portion, so as a rule of thumb one can estimate its size as the value of shared_pool_size.
The number of semaphores required is much simpler to determine.
Oracle needs exactly as many semaphores as the value of the processes init.ora parameter.

SHARED MEMORY ALLOCATION


1. One-segment

2. Contiguous multi-segment

3. Non-contiguous multi-segment


When attempting to allocate and attach shared memory for the SGA, Oracle will attempt each model, in the above order, until one succeeds or raises an ORA error. On other, non-fatal, errors, Oracle simply cleans up and tries again using the next memory model. The entire SGA must fit into shared memory, so the total amount of shared memory allocated under any model will be equal to the size of the SGA (SGASIZE).
1. One-segment: the one-segment model is the simplest and the first tried. In this model the SGA resides in a single shared memory segment. Oracle attempts to allocate and attach one shared memory segment of size equal to the total size of the SGA. However, if SGASIZE is larger than the configured SHMMAX, this will obviously fail. In that case the SGA needs to be placed in multiple shared memory segments, and Oracle proceeds to the next memory model.
With multiple segments there are two possibilities: the segments can be attached contiguously, so that they appear to be one large shared memory segment, or non-contiguously, with gaps between the segments.
2. Contiguous multi-segment: in the contiguous model, Oracle simply divides the SGA into SGASIZE/SHMMAX (rounded down) segments of size SHMMAX plus another segment of size SGASIZE modulo SHMMAX.
3. Non-contiguous multi-segment: once the number of segments and their sizes are determined, Oracle allocates and attaches the segments one at a time; first the fixed and variable portion segment(s), then the redo block buffer segment(s), then the db block buffer segment(s). They are attached non-contiguously.
At this point we have either attached the entire SGA or returned an ORA error. The total size of the segments attached is exactly SGASIZE; no space is wasted. Once Oracle has the shared memory attached, it proceeds to allocate the semaphores it requires.
Recommended values of kernel parameters for shared memory in Oracle 8i:

SHMMAX = max size of a shared memory segment = 0.5 * size of physical memory

SHMMIN = min size of a shared memory segment = 1

SHMMNI = max number of shared memory identifiers on the system = 100

SHMSEG = max number of shared memory segments per process = 10

The maximum SGA that can be created under the one-segment model is SHMMAX*SHMSEG.
You can display the current kernel parameters by running "sysdef -i".


SEMAPHORE ALLOCATION


Oracle just needs to allocate a number of semaphores equal to the processes parameter in "init.ora".
SEMMSL = number of semaphores in a semaphore set

SEMMNI = maximum number of semaphore sets in the system

SEMMNS = number of semaphores in the system

SEMOPM = max number of operations per semop call = 100

SEMVMX = semaphore max value = 32767
When an Oracle instance is started, all required semaphores will be allocated. Semaphores are allocated in sets.
Since each Oracle process requires a semaphore, the number allocated is equal to the value of the init.ora parameter PROCESSES. The total # of semaphores required is the sum of all your instances' PROCESSES.
You can allocate all of your semaphores in one or more semaphore sets. If SEMMSL=PROCESSES, then only one semaphore set is required.
The maximum # of semaphores that can be allocated will be the lesser of (SEMMSL*SEMMNI) or SEMMNS.
If SEMMSL is not equal to PROCESSES, be sure that the total # of semaphores required (sum of PROCESSES) does not exceed the maximum (SEMMSL*SEMMNI, SEMMNS).
For example, if SEMMSL=25 and SEMMNI=10, total # of semaphores required (sum of PROCESSES) must not exceed 250 (10 semaphore sets * 25 semaphores/set).
Note: some Operating Systems have a maximum # of semaphore sets in the system.
If you have more than one instance and the values of PROCESSES are different, you may want to make SEMMSL equal to the lowest PROCESSES so that you don't allocate semaphores that will not be used. Otherwise, this could prevent you from being able to allocate all of your requirements.
For example:
Instance PROD has PROCESSES=100
Instance DEV has PROCESSES=50
If SEMMSL = 50, 3 semaphore sets will be allocated: 2 for PROD and 1 for DEV.
If SEMMSL = 100, 2 semaphore sets will be allocated: 1 for PROD and 1 for DEV. In this case, 100 semaphores will be allocated for DEV when it will only use 50. These unused 50 semaphores cannot be allocated for any other database.
To see what semaphores have been allocated, use the Unix command 'ipcs -b'.


For example:


Semaphores:

T ID KEY MODE OWNER GROUP NSEMS
s 0 0 --ra-r----- osupport dba 25
s 1 0 --ra-r----- osupport dba 25
s 18 0 --ra-r----- osupport dba 25
s 19 0 --ra-r----- osupport dba 25
s 4 0 --ra-r----- osupport dba 25
s 5 0 --ra-r----- osupport dba 25



NSEMS = the number of semaphores in each semaphore set.
Perform these steps for each instance that is up and running:

$ svrmgrl
SVRMGR>connect internal
SVRMGR>oradebug ipc

This will show the shared memory segment and semaphore that each instance has attached/in use.
Example output from oradebug ipc command:
-------------- Shared memory --------------
Seg Id Address Size
10250 c1eaf000 4591616
Total: # of segments = 1, size = 4591616
-------------- Semaphores ----------------
Total number of semaphores = 50
Number of semaphores per set = 50
Number of semaphore sets = 1
Semaphore identifiers:
188434


The Seg Id shows 10250 for the shared memory which is attached to the RUNNING instance. DO NOT REMOVE THAT ONE.
The Semaphore identifiers shows 188434 for the semaphore which is attached to the RUNNING instance. DO NOT REMOVE THAT ONE.
Once you have noted ALL of the identifiers for ALL of the instances which are up and running, compare these id numbers to those in the "ipcs -b" listing.
The entry that does not have a running instance to match is the orphaned entry. THAT ONE SHOULD BE REMOVED.
The command used to remove these entries is ipcrm.
NOTE: The option differs for shared memory and semaphores.
ipcrm -m <shmid> <== use for a Shared Memory entry

ipcrm -s <semid> <== use for a Semaphore entry
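For example, if the orphaned entries were a shared memory segment with id 4321 and a semaphore set with id 7654 (hypothetical ids taken from the "ipcs -b" listing), the cleanup would be:

ipcrm -m 4321
ipcrm -s 7654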
