Friday, July 17, 2009

While installing the 10.2.0.3 patchset software, the OUI was core dumping with Java error messages.

The Java bundled with the software appeared to be corrupted and not working: running the plain runInstaller command, which uses the bundled Java, kept core dumping.

Pointing the OUI at an external Java instead (see the solution below), the installer got past the bootstrap phase and the installation completed successfully.

SOLUTION / ACTION PLAN
======================
To implement the solution, use the following command to invoke the installer:

./runInstaller -jreLoc /opt/java1.4
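
Before running this, it is worth confirming that the external JRE itself works; a quick sanity check, assuming the same /opt/java1.4 location used above:

# Verify the external JRE runs before handing it to the OUI
/opt/java1.4/bin/java -version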

RMAN-08512: waiting for snapshot controlfile enqueue

Under normal circumstances, a job that must wait for the control file enqueue
waits for a brief interval and then successfully retrieves the enqueue.
Recovery Manager makes up to five attempts to get the enqueue and then fails the
job. The conflict is usually caused when two jobs are both backing up the control
file, and the job that first starts backing up the control file waits for service
from the media manager.

To determine which job is holding the conflicting enqueue:

1. After you see the first RMAN-08512: waiting for snapshot controlfile enqueue
message, start a new SQL*Plus session on the target database:

% sqlplus sys/sys_pwd@prod1


2. Execute the following query to determine which job is causing the wait:

SELECT s.sid, username AS "User", program, module, action, logon_time "Logon", l.*
FROM v$session s, v$enqueue_lock l
WHERE l.sid = s.sid and l.type = 'CF' AND l.id1 = 0 and l.id2 = 2;


You should see output similar to the following (the output in this example has
been truncated):

SID User Program              Module                    Action           Logon
--- ---- -------------------- ------------------------- ---------------- ---------
  9 SYS  rman@h13 (TNS V1-V3) backup full datafile: c1  0000210 STARTED  21-JUN-99


After you have determined which job is creating the enqueue, you can do one of the
following:

- Wait until the job creating the enqueue completes
- Cancel the current job and restart it once the job creating the enqueue completes
- Cancel the job creating the enqueue (see the sketch below)
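
If you decide to cancel the job holding the enqueue, a minimal sketch follows; it assumes OS authentication, takes the SID from the query above (SID 9 in the example output), and leaves the actual KILL commented out since the SERIAL# is a placeholder:

sqlplus -s / as sysdba <<'EOF'
-- Look up the SERIAL# for the session holding the CF enqueue
SELECT sid, serial# FROM v$session WHERE sid = 9;
-- If the job is truly stuck, kill it (substitute the real SID,SERIAL#):
-- ALTER SYSTEM KILL SESSION '9,1234';
EOF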

Commonly, enqueue situations occur when a job is writing to a tape drive, but the tape drive is waiting for a new cassette to be inserted. If you start a new job in this situation, you will probably receive the enqueue message because the first job cannot complete until the new tape is loaded.

Tuesday, July 14, 2009

Change DBID of database

After cloning a database to another machine with the same database name, or after creating a new database name by recreating the controlfile, the DBID needs to be regenerated, especially if the new database is to be registered in the same RMAN catalog.

This matters because a catalog allows each DBID to be registered only once: two databases with the same DBID, even on different machines, cannot both be registered in the same catalog.

To change dbid follow these steps -

Shutdown immediate;

startup mount;

Make sure the sys password is correct:

sqlplus "sys/password@test as sysdba" should connect without errors.

Then, from the command line:

cd $ORACLE_HOME/bin

oracle(DATABASE)@tmpu020:./nid target=sys/password@test
DBNEWID: Release 9.2.0.8.0 - 64bit Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

Connected to database TEST (DBID=2764499561)

Control Files in database:
/u01/oradata/test/control01.ctl
/u02/oradata/test/control02.ctl

Change database ID of database TEST? (Y/[N]) => Y

Proceeding with operation
Changing database ID from 2764499561 to 3112540754
Control File /u01/oradata/test/control01.ctl - modified
Control File /u02/oradata/test/control02.ctl - modified
Datafile /u02/oradata/test/system01.dbf - dbid changed
Datafile /u02/oradata/test/users01.dbf - dbid changed
Datafile /u02/oradata/test/patrol01.dbf - dbid changed
Datafile /u02/oradata/test/temp01.dbf - dbid changed
Datafile /u02/oradata/test/temp02.dbf - dbid changed
Control File /u01/oradata/test/control01.ctl - dbid changed
Control File /u02/oradata/test/control02.ctl - dbid changed

Database ID for database TEST changed to 3112540754.
All previous backups and archived redo logs for this database are unusable.
Shut down database and open with RESETLOGS option.
Succesfully changed database ID.
DBNEWID - Completed succesfully.

oracle(DATABASE)@tmpu020:

Shutdown immediate;

startup mount;

alter database open resetlogs;

To check new dbid-

oracle(DATABASE)@tmpu020:rman target /

connected to target database: TEST (DBID=3112540754)

RMAN>
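
With the new DBID in place, the database can now be registered in the shared RMAN catalog; a minimal sketch, assuming a recovery catalog owner rman_user in a database reachable via the alias rcat (both placeholders, substitute your own catalog credentials):

rman target / catalog rman_user/rman_pwd@rcat <<'EOF'
REGISTER DATABASE;
EOF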

Trim a file on HP-UX

Suppose you want to remove the first 10 lines of a file named mail.log.
First take a backup of the original file.
Check the line count:
# cat mail.log | wc -l
155
# sed '1,10d' mail.log > /tmp/mail_new.log
Now check the line count of the newly created file mail_new.log:
# cat mail_new.log | wc -l
145
Now you can rename mail_new.log to mail.log, keeping the backup of the original mail.log (see the sketch below).
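
The whole sequence can be scripted; a minimal sketch, using the same 10-line trim shown above:

cp mail.log mail.log.bak             # keep the original safe
sed '1,10d' mail.log > /tmp/mail_new.log
mv /tmp/mail_new.log mail.log        # swap the trimmed copy into place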

Monday, July 13, 2009

Installing CRS (Oracle Clusterware)

1. Configure SSH
To configure SSH you need to perform the following steps on each node in the cluster.
$ cd $HOME
$ mkdir .ssh
$ chmod 700 .ssh
$ cd .ssh
$ /usr/bin/ssh-keygen -t rsa
Accept the default location for the key file, then enter and confirm a passphrase (you can also press Enter twice for an empty passphrase).

$ /usr/bin/ssh-keygen -t dsa

Accept the default location for the key file, then enter and confirm a passphrase (you can also press Enter twice for an empty passphrase).

$ cat *.pub > authorized_keys.nodeX (where nodeX is the node name, so the files can be told apart later)

Now do the same steps on the other nodes in the cluster. When all those steps are done on the other nodes, copy each node's authorized_keys.nodeX file to $HOME/.ssh/ on all the nodes.
For example, if you have 4 nodes, after the copy each node's .ssh directory will contain 4 files named authorized_keys.nodeX.
Then on EACH node continue the configuration of SSH by doing the following:

$ cd $HOME/.ssh
$ cat *.node* > authorized_keys
$ ssh node1 cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys
oracle@node1’s password:
$ ssh node1 cat /home/oracle/.ssh/id_dsa.pub >> authorized_keys
To test that everything is working correctly, execute a command such as:
$ ssh node1 date
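
To check user equivalence from each node to every node in one pass, a small loop helps; a sketch assuming a four-node cluster named node1..node4 (adjust the list to your node names):

for node in node1 node2 node3 node4; do
  ssh $node date || echo "SSH to $node failed"
done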

2. Run the following commands in the session from which you are going to start the OUI:
$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add

3. When reinstalling, proceed with the new install in the path /CRS/app/crs.

4. The Oracle Cluster Registry value is /dev/raw/raw3

5. The Voting Disk is /dev/raw/raw1

6. Edit /etc/init.d/init.crs to make the following changes:
chown oracle:dba /dev/raw/raw1
chown root:dba /dev/raw/raw3
chmod 644 /dev/raw/*

7. Check the CRS status using:
$ ps -ef | grep crs
$ $CRS_HOME/bin/crsctl check crs

8. Private interconnect - 192.168.xx.xx

9. Public network - 10.80.xx.xx
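
These public and private addresses typically end up in /etc/hosts on every node, along with the virtual IPs used by Clusterware; a hypothetical layout matching the address ranges above (all hostnames and addresses are placeholders):

10.80.1.101     node1         # public
10.80.1.102     node2         # public
192.168.1.101   node1-priv    # private interconnect
192.168.1.102   node2-priv    # private interconnect
10.80.1.111     node1-vip     # virtual IP
10.80.1.112     node2-vip     # virtual IP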

Sunday, July 5, 2009

When and why to perform a full database export.

* When there is a requirement to clone a database onto another test or development server, take a full database mode export dump from the source, create a database with similar settings on the destination server, and import the dump into that new database.

Simple steps: how to perform a full database export using the export utility.

* Use either system user or any other database user who has the EXP_FULL_DATABASE privilege.
* Set the NLS_LANG environment variable according to the database character set and language details.
SQL> select * from nls_database_parameters
2 where parameter in ('NLS_LANGUAGE','NLS_TERRITORY','NLS_CHARACTERSET');


PARAMETER VALUE
------------------------------ ----------------------------
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AMERICA
NLS_CHARACTERSET WE8ISO8859P1

Windows (Dos Prompt):
C:\> set NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1

Unix/Linux:
$ export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1

* Start the export with the following command and options.

exp system/password@mydb file=c:\exportdmp\exp_fulldb_MYDB_27Aug08.dmp
full=y log=c:\exportdmp\exp_fulldb_MYDB_27Aug08.log
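
The equivalent on Unix/Linux, with NLS_LANG set in the same shell session, would look something like this (the dump and log paths are placeholders):

export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
exp system/password@mydb full=y \
    file=/u01/exports/exp_fulldb_MYDB.dmp \
    log=/u01/exports/exp_fulldb_MYDB.log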


Note: This is just a simple command to perform a full database export. Refer to the Oracle documentation on export/import for the full set of options.

Help on Export and Import:

Windows:
C:\> exp help=y
C:\> imp help=y

Super fast Database Copying/Cloning

A database cloning procedure is especially useful for the DBA who wants to give developers full-sized TEST and DEV instances by cloning the PROD instance into the development server areas.

This clone procedure can also be used to quickly migrate a system from one UNIX server to another, and it is often the fastest way to copy an Oracle database.

STEP 1: On the old system, go into SQL*Plus, sign on as SYSDBA and issue: "alter database backup controlfile to trace". This puts the create database syntax into a trace file: the trace keyword tells Oracle to generate a script containing a CREATE CONTROLFILE command and store it in the trace directory identified by the user_dump_dest parameter of the init.ora file. It will look something like this:

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "OLDLSQ" NORESETLOGS
NOARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 240
MAXINSTANCES 1
MAXLOGHISTORY 113
LOGFILE
GROUP 1 ('/u03/oradata/oldlsq/log1a.dbf',
'/u03/oradata/oldlsq/log1b.dbf') SIZE 30M,
GROUP 2 ('/u04/oradata/oldlsq/log2a.dbf',
'/u04/oradata/oldlsq/log2b.dbf') SIZE 30M
DATAFILE
'/u01/oradata/oldlsq/system01.dbf',
'/u01/oradata/oldlsq/mydatabase.dbf'
;
# Recovery is required if any of the datafiles are restored
# backups, or if the last shutdown was not normal or immediate.
RECOVER DATABASE
# Database can now be opened normally.
ALTER DATABASE OPEN;


STEP 2: Shutdown the old database

STEP 3: Copy all data files into the new directories on the new server. You may change the file names if you want, but you must edit the controlfile to reflect the new data file names on the new server.

rcp /u01/oradata/oldlsq/* newhost:/u01/oradata/newlsq
rcp /u03/oradata/oldlsq/* newhost:/u03/oradata/newlsq
rcp /u04/oradata/oldlsq/* newhost:/u04/oradata/newlsq


STEP 4: Copy and edit the control file. Using the output syntax from STEP 1, modify the controlfile creation script by changing the following:

Old:

CREATE CONTROLFILE REUSE DATABASE "OLDLSQ" NORESETLOGS

New:

CREATE CONTROLFILE SET DATABASE "NEWLSQ" RESETLOGS

STEP 5: Remove the "recover database" and "alter database open" syntax:

# Recovery is required if any of the datafiles are restored
# backups, or if the last shutdown was not normal or immediate.
RECOVER DATABASE
# Database can now be opened normally.
ALTER DATABASE OPEN;


STEP 6: Rename the data files whose names have changed.

Save as db_create_controlfile.sql.

Old:

DATAFILE
'/u01/oradata/oldlsq/system01.dbf',
'/u01/oradata/oldlsq/mydatabase.dbf'

New:

DATAFILE
'/u01/oradata/newlsq/system01.dbf',
'/u01/oradata/newlsq/mydatabase.dbf'

STEP 7: Create the bdump, udump and cdump directories

cd $DBA/admin
mkdir newlsq
cd newlsq
mkdir bdump
mkdir udump
mkdir cdump
mkdir pfile


STEP 8: Copy over the old init.ora file

rcp $DBA/admin/oldlsq/pfile/*.ora newhost:/u01/oracle/admin/newlsq/pfile

STEP 9: Start the new database

@db_create_controlfile.sql
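
The script begins with STARTUP NOMOUNT, so it must be run as SYSDBA on the new server; a minimal sketch:

sqlplus / as sysdba <<'EOF'
@db_create_controlfile.sql
EOF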

STEP 10: Place the new database in archivelog mode
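
A minimal sketch for this step (on 9i and earlier, also set log_archive_start=true in the init.ora so the archiver starts automatically):

sqlplus / as sysdba <<'EOF'
shutdown immediate
startup mount
alter database archivelog;
alter database open;
archive log list
EOF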

Tuning Oracle's Buffer Cache

Introduction

Oracle maintains its own buffer cache inside the system global area (SGA) for each instance. A properly sized buffer cache can usually yield a cache hit ratio over 90%, meaning that nine requests out of ten are satisfied without going to disk.

If a buffer cache is too small, the cache hit ratio will be small and more physical disk I/O will result. If a buffer cache is too big, then parts of the buffer cache will be under-utilized and memory resources will be wasted.

Checking The Cache Hit Ratio

Oracle maintains statistics of buffer cache hits and misses. The following query will show you the overall buffer cache hit ratio for the entire instance since it was started:

SELECT (P1.value + P2.value - P3.value) / (P1.value + P2.value)
FROM v$sysstat P1, v$sysstat P2, v$sysstat P3
WHERE P1.name = 'db block gets'
AND P2.name = 'consistent gets'
AND P3.name = 'physical reads'

You can also see the buffer cache hit ratio for one specific session since that session started:

SELECT (P1.value + P2.value - P3.value) / (P1.value + P2.value)
FROM v$sesstat P1, v$statname N1, v$sesstat P2, v$statname N2,
v$sesstat P3, v$statname N3
WHERE N1.name = 'db block gets'
AND P1.statistic# = N1.statistic#
AND P1.sid = &sid    -- the SID of the session of interest
AND N2.name = 'consistent gets'
AND P2.statistic# = N2.statistic#
AND P2.sid = P1.sid
AND N3.name = 'physical reads'
AND P3.statistic# = N3.statistic#
AND P3.sid = P1.sid

You can also measure the buffer cache hit ratio between time X and time Y by collecting statistics at times X and Y and computing the deltas.
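
A sketch of that interval measurement, assuming OS authentication and a 10-minute window (both are assumptions; adapt the connect string and the sleep to your environment):

#!/bin/sh
# Sample the three counters, wait, sample again, and compute the
# buffer cache hit ratio for just that interval.
snapshot() {
  sqlplus -s / as sysdba <<'EOF'
set heading off feedback off pagesize 0
SELECT value FROM v$sysstat WHERE name = 'db block gets';
SELECT value FROM v$sysstat WHERE name = 'consistent gets';
SELECT value FROM v$sysstat WHERE name = 'physical reads';
EOF
}
set -- $(snapshot); G1=$1 C1=$2 P1=$3
sleep 600
set -- $(snapshot); G2=$1 C2=$2 P2=$3
# hit ratio = (logical reads - physical reads) / logical reads, over the window
echo "scale=4; (($G2-$G1)+($C2-$C1)-($P2-$P1)) / (($G2-$G1)+($C2-$C1))" | bc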

Adjusting The Size Of The Buffer Cache

The db_block_buffers parameter in the parameter file determines the size of the buffer cache for the instance. The size of the buffer cache (in bytes) is equal to the value of the db_block_buffers parameter multiplied by the data block size.

You can change the size of the buffer cache by editing the db_block_buffers parameter in the parameter file and restarting the instance.

Determining If The Buffer Cache Should Be Enlarged

If you set the db_block_lru_extended_statistics parameter to a positive number in the parameter file for an instance and restart the instance, Oracle will populate a dynamic performance view called v$recent_bucket. This view will contain the same number of rows as the setting of the db_block_lru_extended_statistics parameter. Each row will indicate how many additional buffer cache hits there might have been if the buffer cache were that much bigger.

For example, if you set db_block_lru_extended_statistics to 1000 and restart the instance, you can see how the buffer cache hit ratio would have improved if the buffer cache were one buffer bigger, two buffers bigger, and so on up to 1000 buffers bigger than its current size. Following is a query you can use, along with a sample result:

SELECT 250 * TRUNC (rownum / 250) + 1 || ' to ' ||
250 * (TRUNC (rownum / 250) + 1) "Interval",
SUM (count) "Buffer Cache Hits"
FROM v$recent_bucket
GROUP BY TRUNC (rownum / 250)

Interval        Buffer Cache Hits
--------------- -----------------
1 to 250                    16083
251 to 500                  11422
501 to 750                    683
751 to 1000                   177

This result set shows that enlarging the buffer cache by 250 buffers would have resulted in 16,083 more hits. If there were about 30,000 hits in the buffer cache at the time this query was performed, then it would appear that adding 500 buffers to the buffer cache might be worthwhile. Adding more than 500 buffers might lead to under-utilized buffers and therefore wasted memory.

There is overhead involved in collecting extended LRU statistics. Therefore you should set the db_block_lru_extended_statistics parameter back to zero as soon as your analysis is complete.

In Oracle7, the v$recent_bucket view was named X$KCBRBH. Only the SYS user can query X$KCBRBH. Also note that in X$KCBRBH the columns are called indx and count, instead of rownum and count.

Determining If The Buffer Cache Is Bigger Than Necessary

If you set the db_block_lru_statistics parameter to true in the parameter file for an instance and restart the instance, Oracle will populate a dynamic performance view called v$current_bucket. This view will contain one row for each buffer in the buffer cache, and each row will indicate how many of the overall cache hits have been attributable to that particular buffer.

By querying v$current_bucket with a GROUP BY clause, you can get an idea of how well the buffer cache would perform if it were smaller. Following is a query you can use, along with a sample result:

SELECT 1000 * TRUNC (rownum / 1000) + 1 || ' to ' ||
1000 * (TRUNC (rownum / 1000) + 1) "Interval",
SUM (count) "Buffer Cache Hits"
FROM v$current_bucket
WHERE rownum > 0
GROUP BY TRUNC (rownum / 1000)

Interval     Buffer Cache Hits
------------ -----------------
1 to 1000               668415
1001 to 2000            281760
2001 to 3000            166940
3001 to 4000             14770
4001 to 5000              7030
5001 to 6000               959

This result set shows that the first 3000 buffers are responsible for over 98% of the hits in the buffer cache. This suggests that the buffer cache would be almost as effective if it were half the size; memory is being wasted on an oversized buffer cache.

There is overhead involved in collecting LRU statistics. Therefore you should set the db_block_lru_statistics parameter back to false as soon as your analysis is complete.

In Oracle7, the v$current_bucket view was named X$KCBCBH. Only the SYS user can query X$KCBCBH. Also note that in X$KCBCBH the columns are called indx and count, instead of rownum and count.

Full Table Scans

When Oracle performs a full table scan of a large table, the blocks are read into the buffer cache but placed at the least recently used end of the LRU list. This causes the blocks to be aged out quickly, and prevents one large full table scan from wiping out the entire buffer cache.

Full table scans of large tables usually result in physical disk reads and a lower buffer cache hit ratio. You can get an idea of full table scan activity at the data file level by querying v$filestat and joining to SYS.dba_data_files. Following is a query you can use and sample results:

SELECT A.file_name, B.phyrds, B.phyblkrd
FROM SYS.dba_data_files A, v$filestat B
WHERE B.file# = A.file_id
ORDER BY A.file_id

FILE_NAME PHYRDS PHYBLKRD
-------------------------------- ---------- ----------
/u01/oradata/PROD/system01.dbf 92832 130721
/u02/oradata/PROD/temp01.dbf 1136 7825
/u01/oradata/PROD/tools01.dbf 7994 8002
/u01/oradata/PROD/users01.dbf 214 214
/u03/oradata/PROD/rbs01.dbf 20518 20518
/u04/oradata/PROD/data01.dbf 593336 9441037
/u05/oradata/PROD/data02.dbf 4638037 4703454
/u06/oradata/PROD/index01.dbf 1007638 1007638
/u07/oradata/PROD/index02.dbf 1408270 1408270

PHYRDS shows the number of reads from the data file since the instance was started. PHYBLKRD shows the actual number of data blocks read. Usually blocks are requested one at a time. However, Oracle requests blocks in batches when performing full table scans. (The db_file_multiblock_read_count parameter controls this batch size.)

In the sample result set above, there appears to be quite a bit of full table scan activity in the data01.dbf data file, since 593,336 read requests have resulted in 9,441,037 actual blocks read.
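
To make such files easier to spot, the blocks-per-read ratio can be computed directly; a sketch (GREATEST guards against division by zero for unused files):

sqlplus -s / as sysdba <<'EOF'
-- Files with a high blocks-per-read average are likely seeing
-- multiblock (full table scan) reads.
SELECT A.file_name, B.phyrds, B.phyblkrd,
       ROUND(B.phyblkrd / GREATEST(B.phyrds, 1), 1) blocks_per_read
FROM   SYS.dba_data_files A, v$filestat B
WHERE  B.file# = A.file_id
ORDER  BY 4 DESC;
EOF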

Spotting I/O Intensive SQL Statements

The v$sqlarea dynamic performance view contains one row for each SQL statement currently in the shared SQL area of the SGA for the instance. v$sqlarea shows the first 1000 bytes of each SQL statement, along with various statistics. Following is a query you can use:

SELECT executions, buffer_gets, disk_reads,
first_load_time, sql_text
FROM v$sqlarea
ORDER BY disk_reads

EXECUTIONS indicates the number of times the SQL statement has been executed since it entered the shared SQL area. BUFFER_GETS indicates the collective number of logical reads issued by all executions of the statement. DISK_READS shows the collective number of physical reads issued by all executions of the statement. (A logical read is a read that resulted in a cache hit or a physical disk read. A physical read is a read that resulted in a physical disk read.)

You can review the results of this query to find SQL statements that perform lots of reads, both logical and physical. Consider how many times a SQL statement has been executed when evaluating the number of reads.
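
One way to factor in execution counts is to normalize the reads per execution; a sketch along those lines:

sqlplus -s / as sysdba <<'EOF'
-- Statements that do many reads per execution are better tuning
-- targets than statements that are merely executed very often.
SELECT executions,
       ROUND(disk_reads  / GREATEST(executions, 1)) reads_per_exec,
       ROUND(buffer_gets / GREATEST(executions, 1)) gets_per_exec,
       sql_text
FROM   v$sqlarea
ORDER  BY 2 DESC;
EOF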

Conclusion

This brief document gives you the basic information you need in order to optimize the buffer cache size for your Oracle database. Also, you can zero in on SQL statements that cause a lot of I/O, and data files that experience a lot of full table scans.
