Saturday, February 28, 2009

Flash Recovery Area

1. What is a Flash Recovery Area?

The flash recovery area is an Oracle-managed directory, file system, or Automatic Storage Management (ASM) disk group that provides a centralized disk location for backup and recovery files. All the files you need to completely recover a database from a media failure are part of the flash recovery area. Oracle creates archived logs and flashback logs in the flash recovery area. RMAN can store its backup sets and image copies there, and it uses the area when restoring files during media recovery. The flash recovery area also acts as a disk cache for tape.

The flash recovery area extends the functionality of Oracle Managed Files to all recovery-related files (backup sets, image copies, and archived logs). Oracle Database automatically manages this storage, deleting files that are no longer needed. The user specifies only the location of the flash recovery area and the amount of disk space Oracle is allowed to use for recovery-related files. This feature is integrated with the 10g MMON feature, so out-of-space conditions can be handled through the standard Oracle monitoring framework. Periodically copying backups to tape frees space in the flash recovery area for other files. Oracle recommends that you enable a recovery area to simplify backup management.

The following recovery-related files are stored in the flash recovery area:
-- Current control file
-- Online redo logs
-- Archived redo logs
-- Flashback logs
-- Control file autobackups
-- Datafile and control file copies
-- Backup pieces
-- Foreign archived redo logs (archived redo logs received by a logical standby database for a LogMiner session)

2. Why should we use a Flash Recovery Area?

Now that disk storage is cost-competitive with tape, disk-based backup is an optimal and preferable storage mechanism.
By using disks for storage you gain significant benefits in terms of mass storage, and you can randomly access your data in milliseconds rather than hours. The previously expensive downtime of traditional recovery is exchanged for rapid data access and recovery times using cheap disk space. The advantage over tape is that tape is a sequential-access device while disk is a random-access device, so the time needed to restore from tape is eliminated or reduced.

3. What should be the location of the Flash Recovery Area?

The flash recovery area should be placed on a separate disk from the working set of database files. Otherwise, the disk becomes a single point of failure for your database.

4. What should be the size of the Flash Recovery Area?

The larger the flash recovery area is, the more useful it becomes. Ideally, the flash recovery area should be large enough to contain the required files: a copy of all datafiles in the database plus the incremental backups used by your chosen backup strategy. If providing this much space is impractical, then it is best to create an area large enough to keep a backup of the most important tablespaces and all the archived logs not yet on tape. At an absolute minimum, the flash recovery area must be large enough to contain the archived redo logs not yet on tape. If the recovery area has insufficient space to store flashback logs and meet other backup retention requirements, then flashback logs may be deleted to make room.

Formulas for estimating a useful flash recovery area size depend on whether:
-- Your database has a small or large number of data blocks that change frequently.
-- You store backups only on disk, or on disk and tape.
-- You use a redundancy-based backup retention policy or a recovery window-based retention policy.
-- You plan to use Flashback Database or a guaranteed restore point as alternatives to point-in-time recovery.
If you plan to enable flashback logging, note that the volume of flashback log generation is approximately the same order of magnitude as redo log generation. For example, if you intend to set DB_FLASHBACK_RETENTION_TARGET to 24 hours and the database generates 20 GB of redo in a day, then a rule of thumb is to allow 20 to 30 GB of disk space for the flashback logs. The same rule applies to guaranteed restore points when flashback logging is enabled: if the database generates 20 GB of redo every day and the guaranteed restore point will be kept for a day, plan to allocate 20 to 30 GB.

For example, suppose you want to determine the size of the flash recovery area when the backup retention policy is set to REDUNDANCY 1 and you intend to follow the Oracle Suggested Strategy of using an incrementally updated backup. You can use the following formula to estimate the disk quota:

Disk Quota = Size of a copy of the database
           + Size of an incremental backup
           + Size of (n+1) days of archived redo logs
           + Size of (y+1) days of foreign archived redo logs (for logical standby)
           + Size of control file
           + Size of an online redo log member * number of log groups
           + Size of flashback logs (based on the DB_FLASHBACK_RETENTION_TARGET value)

where n is the interval in days between incremental updates and y is the delay in applying the foreign archived redo logs on a logical standby database. For the Oracle suggested backup, kindly refer to Note 303861.1 "Incrementally Updated Backup In 10G".

5. Can I have the same Flash Recovery Area for multiple databases?

Multiple databases can have the same value for DB_RECOVERY_FILE_DEST, but one of the following must be true:
-- No two databases for which the DB_UNIQUE_NAME initialization parameter is specified have the same value for DB_UNIQUE_NAME.
-- For those databases where no DB_UNIQUE_NAME is provided, no two databases have the same value for DB_NAME.
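As a hypothetical illustration of the disk quota formula (all figures are invented for the example, not recommendations), assume a 100 GB database, a 10 GB incremental backup, 20 GB of archived logs per day with n = 1 day between incremental updates, no logical standby, flashback logging disabled, and roughly 1 GB for the control file and online log members:

```sql
-- Hypothetical sizing: plugging invented numbers into the disk quota formula.
SELECT 100                 -- copy of the database (GB)
     + 10                  -- one incremental backup (GB)
     + (1 + 1) * 20        -- (n + 1) days of archived redo logs, n = 1 (GB)
     + 1                   -- control file and online redo log members (GB)
       AS disk_quota_gb
  FROM dual;
```

With these assumptions the quota works out to 151 GB; substitute your own measured sizes.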
When databases share a single recovery area in this way, the location should be large enough to hold the files for all databases. Add the values of DB_RECOVERY_FILE_DEST_SIZE for the databases, then allow for overhead such as mirroring or compression.

6. ASM (Automatic Storage Management) support for the Flash Recovery Area

The flash recovery area can be an ASM disk group. RMAN can back up and restore files that are stored in ASM disk groups. In fact, RMAN is the only way to back up and restore ASM files. Backup sets and image copies can be created in ASM, either by automatically creating them in the flash recovery area or by explicitly specifying an ASM disk group for their location.

7. OMF (Oracle Managed Files) support for the Flash Recovery Area

Backup and restore of OMF files is easier with RMAN, as RMAN does not use the long OMF file names in any commands. Backup sets and image copies can be created as OMF files by creating them in the flash recovery area. RMAN can also be used to migrate existing files to OMF.

8. Flash Recovery Area for RAC

For a RAC database the location and disk quota must be the same on all instances. To accomplish this, Oracle recommends that you place the flash recovery area on the shared ASM disks. In addition, you must set the DB_RECOVERY_FILE_DEST and DB_RECOVERY_FILE_DEST_SIZE parameters to the same values on all instances. To use the flash recovery feature, you must first configure the flash recovery area for each instance in your Oracle RAC cluster.

9. What type of files can be stored in the Flash Recovery Area?

The files in the flash recovery area are classified as permanent or transient.

-- Permanent files: the permanent files (assuming these are configured to be stored in the recovery area) are multiplexed copies of the current control file and online redo logs. These cannot be deleted without causing the instance to fail.
-- Transient files: transient files include archived redo logs, datafile copies, control file copies, control file autobackups, backup pieces, and flashback logs. Oracle manages these files automatically, deleting them whenever space is required in the flash recovery area. They are deleted once they become obsolete under the retention policy or have been backed up to tape. Once a transient file has been backed up to tape, it is internally placed on a "can be deleted" list even if it has not yet been deleted from disk. Until a file on disk has been backed up to a tertiary storage device, it cannot become obsolete.

10. Initialization parameters required for the Flash Recovery Area

To enable the flash recovery area, you must set two initialization parameters:

-- DB_RECOVERY_FILE_DEST_SIZE: the disk limit, i.e. the amount of space the flash recovery area is permitted to use. At a minimum, the flash recovery area should be at least large enough to contain the archived logs that have not yet been copied to tape.
Note: this value does not include certain kinds of disk overhead:
- Block 0, or the OS block header, of each Oracle file is not included in this size, so allow an extra 10% for this data when computing the actual disk usage required for the flash recovery area.
- DB_RECOVERY_FILE_DEST_SIZE does not indicate the real size occupied on disk when the underlying file system is mirrored, compressed, or otherwise affected by overhead not known to Oracle.

-- DB_RECOVERY_FILE_DEST: a valid destination in which to create the flash recovery area. The destination can be defined as a directory, file system, or ASM disk group.
Note 1: the flash recovery area cannot be stored on a raw file system.
Note 2: DB_RECOVERY_FILE_DEST_SIZE must be set before DB_RECOVERY_FILE_DEST.

11.
Restrictions on initialization parameters for the Flash Recovery Area

- You cannot use the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST parameters to specify redo log archive destinations; you must always use the LOG_ARCHIVE_DEST_n parameters when a flash recovery area is configured.
- LOG_ARCHIVE_DEST_10 is implicitly set to USE_DB_RECOVERY_FILE_DEST if you create a recovery area and do not set any other local archiving destinations.
- Multiple databases can have the same DB_RECOVERY_FILE_DEST only if their DB_NAME values differ; if DB_NAME is the same (for example, a primary and its standby database), then the DB_UNIQUE_NAME parameter must differ between the databases.
- For RAC, the flash recovery area must be located on a cluster file system, ASM, or a shared directory configured through NFS. The location and disk quota must be the same on all instances.
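Putting the two parameters together, a minimal sketch of enabling and then monitoring a flash recovery area might look like this (the size and path are illustrative only):

```sql
-- Set the quota first, then the destination (the required order):
ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = 10G SCOPE=BOTH;
ALTER SYSTEM SET DB_RECOVERY_FILE_DEST = '/u02/flash_recovery_area' SCOPE=BOTH;

-- Monitor space usage in the recovery area:
SELECT space_limit, space_used, space_reclaimable, number_of_files
  FROM v$recovery_file_dest;
```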

DBA Interview Questions and Answers

When a query is sent to the database and an index is not being used, what type of execution is taking place?

Full Table Scan (FTS)

How do you measure table fragmentation?

To find table-level fragmentation, first analyze the table using ANALYZE TABLE <table_name> COMPUTE STATISTICS. Then check the CHAIN_CNT column of DBA_TABLES: if CHAIN_CNT > 0, the table is fragmented. Keep in mind that this information is only available after the table has been analyzed with the ANALYZE command. Before analyzing, you can check when the table was last analyzed via the LAST_ANALYZED column of DBA_TABLES.
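A sketch of the check described above, using a hypothetical table EMP:

```sql
-- Refresh the chained-row count for the table:
ANALYZE TABLE emp COMPUTE STATISTICS;

-- Then inspect the result and the analysis timestamp:
SELECT table_name, chain_cnt, last_analyzed
  FROM dba_tables
 WHERE table_name = 'EMP';
-- CHAIN_CNT > 0 indicates chained or migrated rows, i.e. fragmentation.
```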


Difference between lock and latches?


Locks are used to protect data or resources from simultaneous use by multiple sessions, which might otherwise leave them in an inconsistent state. Locks are an external mechanism: users can also set locks on objects using various Oracle statements. Latches serve the same purpose but work at the internal level; they are used to protect and control access to internal data structures such as the various SGA buffers. Latches are handled and maintained by Oracle itself, and we cannot access or set them. This is the main difference.
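The external/internal distinction shows up directly in SQL: a lock can be requested explicitly, while a latch can only be observed (EMP is a hypothetical table):

```sql
-- Locks: a user can take one explicitly.
LOCK TABLE emp IN EXCLUSIVE MODE;

-- Latches: internal only; they can merely be observed, for example:
SELECT name, gets, misses
  FROM v$latch
 WHERE name = 'cache buffers chains';
```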


why do we switch from pfile to spfile and when do we use spfile ?

We switch from a pfile to an spfile when we want to change init.ora parameters dynamically (i.e., while the database is up and running). With a pfile, if we change the value of a parameter while the database is running, it only takes effect the next time the database starts.
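A minimal sketch of the switch (assuming the default pfile location; the parameter change is just an example):

```sql
-- Create an spfile from the existing init.ora, then restart the instance:
CREATE SPFILE FROM PFILE;

-- Once running on the spfile, a dynamic change can persist across restarts:
ALTER SYSTEM SET open_cursors = 500 SCOPE=BOTH;
```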

what is checkpoint?

At a checkpoint, the background process CKPT does the following:
1) Signals DBWR (background process) at checkpoints.
2) Updates the datafile headers with checkpoint information.
3) Updates the control files with checkpoint information.
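The checkpoint information that CKPT records in the datafile headers can be inspected, for example, with:

```sql
-- Checkpoint SCN and time as recorded in each datafile header:
SELECT file#, checkpoint_change#, checkpoint_time
  FROM v$datafile_header;
```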


What is the difference between TRUNCATE and DELETE?

The DELETE command logs the data changes in the log file, whereas TRUNCATE simply removes the data without logging individual row changes. Hence data removed by DELETE can be rolled back, but not data removed by TRUNCATE. TRUNCATE is a DDL statement, whereas DELETE is a DML statement.
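A sketch of the rollback difference, using a hypothetical table T:

```sql
DELETE FROM t;       -- DML: changes are logged row by row
ROLLBACK;            -- the deleted rows come back

TRUNCATE TABLE t;    -- DDL: implicit commit, cannot be rolled back
```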

What is written in Redo Log Files?

The redo log is written by the background process LGWR in the following circumstances:
1. At commits.
2. When the redo log buffer becomes one-third full.
3. When there is more than 1 MB of changed records in the redo log buffer.
4. When a timeout occurs (every 3 seconds).
5. Before DBWn writes modified blocks in the database buffer cache to the datafiles.

How many maximum Redo Logfiles one can have in a Database?

The maximum number of redo log groups a database can accommodate depends on the MAXLOGFILES parameter specified during database creation (MAXLOGMEMBERS limits the number of members per group).
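One way to see the limit actually recorded in the control file is the control file record section:

```sql
-- Slots reserved for redo log records in the control file:
SELECT type, records_total
  FROM v$controlfile_record_section
 WHERE type = 'REDO LOG';
-- RECORDS_TOTAL reflects the MAXLOGFILES setting from database creation.
```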

Tuesday, February 24, 2009

EXPLAIN PLAN on DDLs

Although it's mentioned in the official documentation of the EXPLAIN PLAN command, I believe it is a not-so-well-known fact that you can EXPLAIN the following DDL commands:

CREATE TABLE
CREATE INDEX
ALTER INDEX REBUILD

What is the benefit of doing so? First of all, in case of a complex CREATE TABLE AS SELECT (CTAS) statement you'll get the execution plan, and when using the Cost Based Optimizer this information will include the estimated number of rows that are going to be generated and the estimated time it takes (from 10g on; prior to 10g you need to convert the reported cost into time yourself if you like).

Of course usually this plan is supposed to correspond to the plan of the SELECT statement alone, provided that you're using the default ALL_ROWS optimizer mode, but I have already encountered situations where the plans were different although this usually was due to some buggy behaviour.

In case of a CREATE INDEX DDL you'll again get a cost and time indication, along with a row estimate. Unfortunately it doesn't show an estimation of the required TEMP space for sorting. Additionally it doesn't work very well for function-based indexes, where the number of rows actually indexed might be far less than the number of rows of the underlying table. The same applies to NULLABLE columns; it would be nice if this information were used to come up with the correct number of rows that the index will cover.

As you will see in the sample script provided below, the optimizer's cardinality estimates are not used to their full extent when generating indexes, in particular function-based indexes. The cardinality estimates of a query are significantly different from those of a corresponding index expression.

Starting with Oracle 10.2 you'll get an indication of the size of the index, based on the dictionary statistics, in the "Notes" section, so the estimate is only as good as your statistics allow for; in addition, the points above apply regarding the accuracy of the estimate in case of NULL values or function-based indexes. The size estimate is obviously based on the average column length recorded in the statistics.

Interestingly the CREATE TABLE AS SELECT estimation doesn't provide such a size estimation which could be quite handy, too.

Explaining an ALTER INDEX REBUILD shows similar information to that of a CREATE INDEX, but it doesn't show the size estimate, which is a pity because it could indicate the size reduction that might result from a rebuild. Of course you can help yourself by explaining the corresponding CREATE INDEX statement.

One potentially interesting piece of information is what is going to be used to perform the create or rebuild of the index; e.g. the database might be capable of reading the information from another index rather than performing a full table scan (although in this case you should check whether the index isn't redundant).

Other DDLs like ALTER INDEX COALESCE obviously are not supported and a simple CREATE TABLE doesn't add any value, so it's supported but useless.

The following script shows some test results of an EXPLAIN PLAN on DDLs in 11.1.0.7:


SQL> create table explain_ddl_test
2 as
3 select * from all_objects
4 where rownum <= 1000;

Table created.

SQL>
SQL> exec dbms_stats.gather_table_stats(null, 'explain_ddl_test')

PL/SQL procedure successfully completed.

SQL>
SQL> explain plan for
2 create table explain_ddl_test2
3 as
4 select * from explain_ddl_test;

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3524376278

--------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------
| 0 | CREATE TABLE STATEMENT | | 1000 | 84000 | 9 (0)| 00:00:01 |
| 1 | LOAD AS SELECT | EXPLAIN_DDL_TEST2 | | | | |
| 2 | TABLE ACCESS FULL | EXPLAIN_DDL_TEST | 1000 | 84000 | 5 (0)| 00:00:01 |
--------------------------------------------------------------------------------------------

9 rows selected.

SQL>
SQL> explain plan for
2 create index explain_ddl_test_idx
3 on explain_ddl_test (object_id);

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 932678384

-----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------
| 0 | CREATE INDEX STATEMENT | | 1000 | 4000 | 7 (0)| 00:00:01 |
| 1 | INDEX BUILD NON UNIQUE| EXPLAIN_DDL_TEST_IDX | | | | |
| 2 | SORT CREATE INDEX | | 1000 | 4000 | | |
| 3 | TABLE ACCESS FULL | EXPLAIN_DDL_TEST | 1000 | 4000 | 5 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Note
-----
- estimated index size: 65536 bytes

14 rows selected.

SQL>
SQL> exec dbms_stats.set_table_stats(null, 'explain_ddl_test', numblks=>10000000, numrows=>100000000)

PL/SQL procedure successfully completed.

SQL>
SQL> explain plan for
2 create index explain_ddl_test_idx
3 on explain_ddl_test (object_id);

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 932678384

-----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------
| 0 | CREATE INDEX STATEMENT | | 100M| 381M| 2803K (1)| 09:20:38 |
| 1 | INDEX BUILD NON UNIQUE| EXPLAIN_DDL_TEST_IDX | | | | |
| 2 | SORT CREATE INDEX | | 100M| 381M| | |
| 3 | TABLE ACCESS FULL | EXPLAIN_DDL_TEST | 100M| 381M| 2723K (1)| 09:04:47 |
-----------------------------------------------------------------------------------------------

Note
-----
- estimated index size: 2415M bytes

14 rows selected.

SQL>
SQL> explain plan for
2 select
3 *
4 from
5 explain_ddl_test
6 where
7 case when object_id = 1 then 1 else null end is not null;

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 119225828

--------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 5000K| 400M| 2728K (1)| 09:05:44 |
|* 1 | TABLE ACCESS FULL| EXPLAIN_DDL_TEST | 5000K| 400M| 2728K (1)| 09:05:44 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter(CASE "OBJECT_ID" WHEN 1 THEN 1 ELSE NULL END IS NOT NULL)

13 rows selected.

SQL>
SQL> explain plan for
2 create index explain_ddl_test_idx
3 on explain_ddl_test (case when object_id = 1 then 1 else null end);

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 932678384

-----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------
| 0 | CREATE INDEX STATEMENT | | 100M| 381M| 2803K (1)| 09:20:38 |
| 1 | INDEX BUILD NON UNIQUE| EXPLAIN_DDL_TEST_IDX | | | | |
| 2 | SORT CREATE INDEX | | 100M| 381M| | |
| 3 | TABLE ACCESS FULL | EXPLAIN_DDL_TEST | 100M| 381M| 2723K (1)| 09:04:47 |
-----------------------------------------------------------------------------------------------

Note
-----
- estimated index size: 2415M bytes

14 rows selected.

SQL>
SQL> explain plan for
2 select
3 *
4 from
5 explain_ddl_test
6 where
7 decode(object_id, 1, 1, null) is not null;

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 119225828

--------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 5000K| 400M| 2728K (1)| 09:05:44 |
|* 1 | TABLE ACCESS FULL| EXPLAIN_DDL_TEST | 5000K| 400M| 2728K (1)| 09:05:44 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter(DECODE("OBJECT_ID",1,1,NULL) IS NOT NULL)

13 rows selected.

SQL>
SQL> explain plan for
2 create index explain_ddl_test_idx
3 on explain_ddl_test (decode(object_id, 1, 1, null));

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 932678384

-----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------
| 0 | CREATE INDEX STATEMENT | | 100M| 381M| 2803K (1)| 09:20:38 |
| 1 | INDEX BUILD NON UNIQUE| EXPLAIN_DDL_TEST_IDX | | | | |
| 2 | SORT CREATE INDEX | | 100M| 381M| | |
| 3 | TABLE ACCESS FULL | EXPLAIN_DDL_TEST | 100M| 381M| 2723K (1)| 09:04:47 |
-----------------------------------------------------------------------------------------------

Note
-----
- estimated index size: 2415M bytes

14 rows selected.

SQL>
SQL> exec dbms_stats.gather_table_stats(null, 'explain_ddl_test')

PL/SQL procedure successfully completed.

SQL>
SQL> alter table explain_ddl_test modify object_id null;

Table altered.

SQL>
SQL> exec dbms_stats.set_column_stats(null, 'explain_ddl_test', 'object_id', nullcnt=>500)

PL/SQL procedure successfully completed.

SQL>
SQL> explain plan for
2 select count(*)
3 from explain_ddl_test
4 where object_id is not null;

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2446556867

---------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 4 | 5 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 4 | | |
|* 2 | TABLE ACCESS FULL| EXPLAIN_DDL_TEST | 500 | 2000 | 5 (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("OBJECT_ID" IS NOT NULL)

14 rows selected.

SQL>
SQL> explain plan for
2 create index explain_ddl_test_idx
3 on explain_ddl_test (object_id);

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 932678384

-----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------
| 0 | CREATE INDEX STATEMENT | | 1000 | 4000 | 7 (0)| 00:00:01 |
| 1 | INDEX BUILD NON UNIQUE| EXPLAIN_DDL_TEST_IDX | | | | |
| 2 | SORT CREATE INDEX | | 1000 | 4000 | | |
| 3 | TABLE ACCESS FULL | EXPLAIN_DDL_TEST | 1000 | 4000 | 5 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Note
-----
- estimated index size: 65536 bytes

14 rows selected.

SQL>
SQL> exec dbms_stats.set_column_stats(null, 'explain_ddl_test', 'object_id', avgclen=>1000)

PL/SQL procedure successfully completed.

SQL>
SQL> explain plan for
2 create index explain_ddl_test_idx
3 on explain_ddl_test (object_id);

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 932678384

-----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------
| 0 | CREATE INDEX STATEMENT | | 1000 | 84000 | 10 (0)| 00:00:01 |
| 1 | INDEX BUILD NON UNIQUE| EXPLAIN_DDL_TEST_IDX | | | | |
| 2 | SORT CREATE INDEX | | 1000 | 84000 | | |
| 3 | TABLE ACCESS FULL | EXPLAIN_DDL_TEST | 1000 | 84000 | 5 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Note
-----
- estimated index size: 131K bytes

14 rows selected.

SQL>
SQL> exec dbms_stats.gather_table_stats(null, 'explain_ddl_test')

PL/SQL procedure successfully completed.

SQL>
SQL> create index explain_ddl_test_idx on explain_ddl_test (object_id);

Index created.

SQL>
SQL> explain plan for alter index explain_ddl_test_idx rebuild;

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 346799211

-----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------
| 0 | ALTER INDEX STATEMENT | | 1000 | 4000 | 5 (0)| 00:00:01 |
| 1 | INDEX BUILD NON UNIQUE| EXPLAIN_DDL_TEST_IDX | | | | |
| 2 | SORT CREATE INDEX | | 1000 | 4000 | | |
| 3 | INDEX FAST FULL SCAN| EXPLAIN_DDL_TEST_IDX | | | | |
-----------------------------------------------------------------------------------------------

10 rows selected.

SQL>
SQL> explain plan for alter index explain_ddl_test_idx rebuild online;

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 932678384

-----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------
| 0 | ALTER INDEX STATEMENT | | 1000 | 4000 | 5 (0)| 00:00:01 |
| 1 | INDEX BUILD NON UNIQUE| EXPLAIN_DDL_TEST_IDX | | | | |
| 2 | SORT CREATE INDEX | | 1000 | 4000 | | |
| 3 | TABLE ACCESS FULL | EXPLAIN_DDL_TEST | 1000 | 4000 | 5 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

10 rows selected.

SQL>
SQL> explain plan for create table test (col1 number);

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------

----------------------------------------------------
| Id | Operation | Name | Cost (%CPU)|
----------------------------------------------------
| 0 | CREATE TABLE STATEMENT | | 0 (0)|
----------------------------------------------------

6 rows selected.

SQL>
SQL> explain plan for alter index explain_ddl_test_idx coalesce;
explain plan for alter index explain_ddl_test_idx coalesce
*
ERROR at line 1:
ORA-00900: invalid SQL statement

SQL>

RMAN in RAC environment




Configuring parameters in RMAN
==========================
RMAN> CONFIGURE BACKUP OPTIMIZATION ON;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE CHANNEL 1 DEVICE TYPE SBT_TAPE CONNECT 'SYS/@nplprd1'; # for RAC
RMAN> CONFIGURE CHANNEL 2 DEVICE TYPE SBT_TAPE CONNECT 'SYS/@nplprd2'; # for RAC
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
RMAN> CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 2;
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 84 DAYS;

The configurations are written into the control file and then synchronized to the catalog.
You can view the configured parameters via the command SHOW ALL

RMAN> show all;

RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE SBT_TAPE TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
CONFIGURE CHANNEL 1 DEVICE TYPE DISK CONNECT '*';
CONFIGURE CHANNEL 2 DEVICE TYPE DISK CONNECT '*';
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=nplprd,OB2BARLIST=nplprd_weekly)' FORMAT 'nplprd_weekly.dbf';
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/software/oracle/DB10gR2/dbs/snapcf_nplprd1.f'; # default

Parameters that have not been changed are marked with # default at the end of the line.
The parameters can also be queried from the target database using:

SQL> col CONF# for 9999
SQL> col NAME for a25
SQL> col VALUE for a60
SQL> set lines 100
SQL> select * from v$rman_configuration;
CONF# NAME VALUE
----- ------------------------- ------------------------------------------------------------
1 BACKUP OPTIMIZATION ON
2 CONTROLFILE AUTOBACKUP ON
3 CHANNEL 1 DEVICE TYPE DISK CONNECT 'SYS/@nplprd1'
4 CHANNEL 2 DEVICE TYPE DISK CONNECT 'SYS/@nplprd2'
5 CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'ENV=(OB2BARTYPE=Oracle8,OB2AP
PNAME=nplprd,OB2BARLIST=nplprd_weekly)' FORMAT 'nplprd_wee
kly.dbf'
7 DEFAULT DEVICE TYPE TO 'SBT_TAPE'
8 DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET
9 DEVICE TYPE 'SBT_TAPE' PARALLELISM 2 BACKUP TYPE TO BACKUPSET

CONF# 5 is there because of the CONFIGURE command issued in our last backup; see the backup command at the end of this post.
The same output can be queried from the rman catalog database with the following command:

SQL> select CONF#,NAME,VALUE from rc_rman_configuration;

The configuration is also written to the trace file generated by the command:

SQL> alter database backup controlfile to trace;

In the trace file we will see:

-- Configure RMAN configuration record 1
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('BACKUP OPTIMIZATION','ON');
-- Configure RMAN configuration record 2
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CONTROLFILE AUTOBACKUP','ON');
-- Configure RMAN configuration record 3
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CHANNEL','1 DEVICE TYPE DISK CONNECT ''SYS/@nplprd1''');
-- Configure RMAN configuration record 4
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CHANNEL','2 DEVICE TYPE DISK CONNECT ''SYS/@nplprd2''');
-- Configure RMAN configuration record 5
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CHANNEL','DEVICE TYPE ''SBT_TAPE'' PARMS ''ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=nplprd,OB2BARLIST=nplprd_weekly)'' FORMAT ''nplprd_weekly.dbf''');
-- Configure RMAN configuration record 7
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEFAULT DEVICE TYPE TO','''SBT_TAPE''');
-- Configure RMAN configuration record 8
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEVICE TYPE','DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET');
-- Configure RMAN configuration record 9
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEVICE TYPE','''SBT_TAPE'' PARALLELISM 2 BACKUP TYPE TO BACKUPSET');

Clearing configuration parameters
===========================
Configuration parameters can be cleared in RMAN by using the keyword CLEAR in place of the old value, for example:

RMAN> CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' CLEAR;

This can also be done at the target database using the package dbms_backup_restore. For example, to clear CONF# 5 we would run:

SQL> exec dbms_backup_restore.deleteconfig(5);

Debugging in RMAN
================
RMAN> spool trace to '/tmp/rman_debug';
RMAN> debug on;
RMAN> .
RMAN> .
RMAN> .
RMAN> debug off;
RMAN> spool trace off;

Enabling block change tracking
=========================
For faster incremental backups we can use the block change tracking feature introduced in 10g, which records changed blocks in an external file.
This file is not backed up, since it is relevant only for taking backups and is not used during restore;
so if the file is corrupted or deleted, nothing happens except that the next incremental backup will take more time than usual.
To enable this feature, run the following command on the target database.

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;

Since we are using OMF on ASM, there is no need to specify a file location. To identify the file's location, run the following command:
SQL> col FILENAME for a50
SQL> SELECT * FROM V$BLOCK_CHANGE_TRACKING;

STATUS FILENAME BYTES
---------- -------------------------------------------------- ----------
ENABLED +DATA/nplprd/changetracking/ctf.290.622221583 11599872
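
If the database does not use OMF, the tracking file location can be given explicitly. A sketch (the path below is only a placeholder):

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  2  USING FILE '/u01/oradata/nplprd/ctf.f' REUSE;

The feature can be switched off again with:

SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;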


Finally, the backup command
========================
RMAN> run {
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=nplprd,OB2BARLIST=nplprd_weekly)' FORMAT 'nplprd_weekly.dbf';
show all;
backup incremental level 1
database;
backup
archivelog all;
DELETE ARCHIVELOG ALL BACKED UP 2 TIMES TO DEVICE TYPE sbt_tape;
DELETE FORCE NOPROMPT OBSOLETE;
}
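
Note that an incremental level 1 backup needs a level 0 base; when none exists, RMAN simply backs up all blocks changed since file creation. It is usually cleaner to take the base explicitly, for example:

RMAN> backup incremental level 0 database;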

Wednesday, February 11, 2009

NID

DBNEWID is a database utility that can change the internal database identifier (DBID) and the database name (DBNAME) for an operational database.

Prior to the introduction of the DBNEWID utility, you could manually create a copy of a database and give it a new database name (DBNAME) by re-creating the control file. However, you could not give the database a new identifier (DBID). The DBID is an internal, unique identifier for a database. Because Recovery Manager (RMAN) distinguishes databases by DBID, you could not register a seed database and a manually copied database together in the same RMAN repository. The DBNEWID utility solves this problem by allowing you to change any of the following:

* Only the DBID of a database
* Only the DBNAME of a database
* Both the DBNAME and DBID of a database

Changing the DBID of a database is a serious procedure. When the DBID of a database is changed, all previous backups and archived logs of the database become unusable. After you change the DBID, you must open the database with the RESETLOGS option, which re-creates the online redo logs and resets their sequence to 1 (see the Oracle9i Database Administrator's Guide). Consequently, you should make a backup of the whole database immediately after changing the DBID.

Changing the DBNAME without changing the DBID does not require you to open with the RESETLOGS option, so database backups and archived logs are not invalidated. However, changing the DBNAME does have consequences. You must change the DB_NAME initialization parameter after a database name change to reflect the new name. Also, you may have to re-create the Oracle password file. If you restore an old backup of the control file (before the name change), then you should use the initialization parameter file and password file from before the database name change.

Syntax:

Keyword   Description                    Default
-------   ----------------------------   -------
TARGET    Username / Password            None
DBNAME    New database name              None
LOGFILE   Output log                     None
REVERT    Revert failed change           NO
SETNAME   Set a new database name only   NO
APPEND    Append to output log           NO
HELP      Displays help messages         NO

/*
Ensure that you have a recoverable whole database backup and ensure that the target database is mounted but not open, and that it was shut down consistently prior to mounting.
*/

SHUTDOWN IMMEDIATE
STARTUP MOUNT

/* Invoke the DBNEWID utility on the command line, specifying a valid user with the SYSDBA privilege.
*/

% nid TARGET=sys/oracle@test_db

/*
To change the database name in addition to the DBID, specify the DBNAME parameter.
This example changes the name to orabase:
*/

% nid TARGET=sys/oracle@test DBNAME=orabase

/*
The DBNEWID utility performs validations in the headers of the datafiles and control files before attempting I/O to the files. If validation is successful, then DBNEWID prompts you to confirm the operation (unless you specify a log file, in which case it does not prompt), changes the DBID for each datafile (including offline normal and read-only datafiles), and then exits. The database is left mounted but is not yet usable.
*/

DBNEWID: Release 10.2.0.1.0

(c) Copyright 2005 Oracle Corporation. All rights reserved.

Connected to database TEST_DB (DBID=3942195360)

Control Files in database:
/oracle/dbs/cf1.f
/oracle/dbs/cf2.f

Change database id of database TEST_DB? (Y/[N]) => y

Proceeding with operation
Datafile /oracle/dbs/tbs_01.f - changed
Datafile /oracle/dbs/tbs_02.f - changed
Datafile /oracle/dbs/tbs_11.f - changed
Datafile /oracle/dbs/tbs_12.f - changed
Datafile /oracle/dbs/tbs_21.f - changed

New DBID for database TEST_DB is 3942196782.
All previous backups and archived redo logs for this database are unusable.
Proceed to shutdown database and open with RESETLOGS option.
DBNEWID - Database changed.

/*
If validation is not successful, then DBNEWID terminates and leaves the target database intact. You can open the database, fix the error, and then either resume the DBNEWID operation or continue using the database without changing its DBID.
After DBNEWID successfully changes the DBID, shut down the database.
*/

SHUTDOWN IMMEDIATE

-- Mount the database

STARTUP MOUNT

-- Open the database in RESETLOGS mode and resume normal use

ALTER DATABASE OPEN RESETLOGS;

/*
Make a new database backup. Because you reset the online redo logs, the old backups and archived logs are no longer usable in the current incarnation of the database.
*/
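
The fresh backup can be taken with a plain RMAN command, for example (the channels and retention configured earlier apply):

RMAN> backup database;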
/*
The following steps describe how to change the database name without changing the DBID.

1. Ensure that you have a recoverable whole database backup.
2. Ensure that the target database is mounted but not open, and that it was shut down consistently prior to mounting.
*/

SHUTDOWN IMMEDIATE
STARTUP MOUNT

/*
3. Invoke the utility on the command line, specifying a valid user with the SYSDBA privilege. You must specify both the DBNAME and SETNAME parameters. This example changes the name to orabase:
*/

% nid TARGET=SYS/oracle@test_db DBNAME=orabase SETNAME=YES

/*
DBNEWID performs validations in the headers of the control files (not the datafiles) before attempting I/O to the files. If validation is successful, then DBNEWID prompts for confirmation, changes the database name in the control files, and exits. After DBNEWID completes successfully, the database is left mounted but is not yet usable.
*/

DBNEWID: Release 10.2.0.1.0

(c) Copyright 2005 Oracle Corporation. All rights reserved.

Connected to database TEST_DB (DBID=3942196782)

Control Files in database:
/oracle/dbs/cf1.f
/oracle/dbs/cf2.f

Change database name of database TEST_DB to ORABASE? (Y/[N]) => Y

Proceeding with operation

Database name changed from TEST_DB to ORABASE - database needs to be
shutdown.
Modify parameter file and generate a new password file before restarting.

DBNEWID - Successfully changed database name

/*
If validation is not successful, then DBNEWID terminates and leaves the target database intact. You can open the database, fix the error, and then either resume the DBNEWID operation or continue using the database without changing the database name.
4. Shut down the database.
*/

SHUTDOWN IMMEDIATE

/*
5. Set the DB_NAME initialization parameter in the initialization parameter file to the new database name.
6. Create a new password file.
7. Start up the database and resume normal use.
*/
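
Steps 5 and 6 might look like the following; the spfile assumption, password-file name, and password are placeholders (with a pfile, edit DB_NAME in init.ora instead):

SQL> ALTER SYSTEM SET DB_NAME=orabase SCOPE=SPFILE;

% orapwd FILE=$ORACLE_HOME/dbs/orapworabase PASSWORD=oracle ENTRIES=5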

STARTUP

/*
To revert a stalled DBID change operation, run the DBNEWID utility again, specifying the REVERT keyword.
*/

% nid TARGET=SYS/oracle REVERT=YES LOGFILE=$HOME/nid.log
/*
Connects with operating system authentication and changes only the DBID:
*/

% nid TARGET=/

/*
Changing the DBID and Database Name

The following example connects as user SYS and changes the DBID and also changes the database name to test2:
*/

% nid TARGET=SYS/oracle@test_db DBNAME=orabase

DBVERIFY
========

dbv USERID=username/password
    SEGMENT_ID='tablespace_id.segfile.segblock'
    LOGFILE='logging_file_name_and_path'
    FEEDBACK='integer'
    HELP='Y/N'
    PARFILE='parameter_file_name_and_path'
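
DBVERIFY can also be run directly against a datafile (file mode); a sketch, with the path and block size as placeholders:

dbv FILE=/oracle/dbs/tbs_01.f BLOCKSIZE=8192 LOGFILE=dbv.log FEEDBACK=100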


/*
The following example shows a sample use of the command-line interface to this mode of DBVERIFY.

SEGMENT_ID specifies the segment to be verified. It is composed of the tablespace ID number (tsn), segment header file number (segfile), and segment header block number (segblock). These can be obtained by querying TABLESPACE_ID, HEADER_FILE, and HEADER_BLOCK from sys_user_segs.
*/

SELECT tablespace_id, header_file, header_block
FROM sys_user_segs
ORDER BY 1,2,3;

dbv USERID=uwclass/uwclass SEGMENT_ID=2.3.38451

DBVERIFY - Verification starting : SEGMENT_ID = 2.3.38451

DBVERIFY - Verification complete

Total Pages Examined : 32
Total Pages Processed (Data) : 28
Total Pages Failing (Data) : 0
Total Pages Processed (Index): 0
Total Pages Failing (Index) : 0
Total Pages Processed (Other): 3
Total Pages Processed (Seg) : 1
Total Pages Failing (Seg) : 0
Total Pages Empty : 0
Total Pages Marked Corrupt : 0
Total Pages Influx : 0
Highest block SCN : 30027821 (0.30027821)
