Monday, May 18, 2009

ORACLE STATISTICS_LEVEL

STATISTICS_LEVEL


Property Description
Parameter type String
Syntax STATISTICS_LEVEL = { ALL | TYPICAL | BASIC }
Default value TYPICAL
Modifiable ALTER SESSION, ALTER SYSTEM


STATISTICS_LEVEL specifies the level of collection for database and operating system statistics. The Oracle Database collects these statistics for a variety of purposes, including making self-management decisions.

The default setting of TYPICAL ensures collection of all major statistics required for database self-management functionality and provides best overall performance. The default value should be adequate for most environments.

When the STATISTICS_LEVEL parameter is set to ALL, additional statistics are added to the set of statistics collected with the TYPICAL setting. The additional statistics are timed OS statistics and plan execution statistics.
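As a sketch of what ALL buys you in a single session (the table t and the query are placeholders; DBMS_XPLAN.DISPLAY_CURSOR is available from 10g onward):

```sql
-- Enable the extended statistics for this session only
ALTER SESSION SET statistics_level = ALL;

-- Run the statement of interest (t is a hypothetical table)
SELECT COUNT(*) FROM t;

-- Show the actual row counts and times collected per plan step
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```

Because the setting is session-local, the system-wide TYPICAL collection is unaffected.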

Setting the STATISTICS_LEVEL parameter to BASIC disables the collection of many of the important statistics required by Oracle Database features and functionality, including:

* Automatic Workload Repository (AWR) Snapshots
* Automatic Database Diagnostic Monitor (ADDM)
* All server-generated alerts
* Automatic SGA Memory Management
* Automatic optimizer statistics collection
* Object level statistics
* End to End Application Tracing (V$CLIENT_STATS)
* Database time distribution statistics (V$SESS_TIME_MODEL and V$SYS_TIME_MODEL)
* Service level statistics
* Buffer cache advisory
* MTTR advisory
* Shared pool sizing advisory
* Segment level statistics
* PGA Target advisory
* Timed statistics
* Monitoring of statistics

When the STATISTICS_LEVEL parameter is modified by ALTER SYSTEM, all advisories or statistics are dynamically turned on or off, depending on the new value of STATISTICS_LEVEL. When modified by ALTER SESSION, the following advisories or statistics are turned on or off in the local session only. Their system-wide state is not changed:

* Timed statistics
* Timed OS statistics
* Plan execution statistics

The V$STATISTICS_LEVEL view displays information about the status of the statistics or advisories controlled by the STATISTICS_LEVEL parameter.


The initialization parameter STATISTICS_LEVEL controls quite a number of functionalities in the Oracle 10g database.
Its default value is TYPICAL.

By setting it to BASIC you switch off functionality such as automatic statistics collection for performance baselines, statistics sampling for the Active Session History, table monitoring, and many more. If the parameter is set to BASIC, it is still possible to set the parameters DB_CACHE_ADVICE, TIMED_STATISTICS and TIMED_OS_STATISTICS manually.

The parameter can also be set to ALL, which causes the collection of additional timed operating system statistics and row source execution statistics.
Oracle strongly recommends collecting statistics at the TYPICAL level at a minimum.

STATISTICS_LEVEL can be set at system level as well as at session level. If set at session level, the following advisories or statistics are turned on or off in that session only; their system-wide setting is not changed:
- Timed Statistics
- Timed OS Statistics
- Plan Execution Statistics

In order to find out which functionalities are controlled by STATISTICS_LEVEL you can query the dynamic performance view v$statistics_level which displays the status of the statistics/advisories controlled by STATISTICS_LEVEL:

SYS @10gR2 SQL > show parameter statistics_l

NAME                                 TYPE        VALUE
------------------------------------ ----------- -------
statistics_level                     string      TYPICAL

SYS @10gR2 SQL > DESC v$statistics_level
Name Null? Type
----------------------------------------- -------- ----------------------------
STATISTICS_NAME VARCHAR2(64)
DESCRIPTION VARCHAR2(4000)
SESSION_STATUS VARCHAR2(8)
SYSTEM_STATUS VARCHAR2(8)
ACTIVATION_LEVEL VARCHAR2(7)
STATISTICS_VIEW_NAME VARCHAR2(64)
SESSION_SETTABLE VARCHAR2(3)

SYS @10gR2 SQL > SELECT STATISTICS_NAME, ACTIVATION_LEVEL, SYSTEM_STATUS, STATISTICS_VIEW_NAME, SESSION_SETTABLE
2 FROM v$statistics_level;
STATISTICS_NAME                       ACTIVAT SYSTEM_S STATISTICS_VIEW_NAME      SES
------------------------------------- ------- -------- ------------------------- ---
Buffer Cache Advice                   TYPICAL ENABLED  V$DB_CACHE_ADVICE         NO
MTTR Advice                           TYPICAL ENABLED  V$MTTR_TARGET_ADVICE      NO
Timed Statistics                      TYPICAL ENABLED                            YES
Timed OS Statistics                   ALL     DISABLED                           YES
Segment Level Statistics              TYPICAL ENABLED  V$SEGSTAT                 NO
PGA Advice                            TYPICAL ENABLED  V$PGA_TARGET_ADVICE       NO
Plan Execution Statistics             ALL     DISABLED V$SQL_PLAN_STATISTICS     YES
Shared Pool Advice                    TYPICAL ENABLED  V$SHARED_POOL_ADVICE      NO
Modification Monitoring               TYPICAL ENABLED                            NO
Longops Statistics                    TYPICAL ENABLED  V$SESSION_LONGOPS         NO
Bind Data Capture                     TYPICAL ENABLED  V$SQL_BIND_CAPTURE        NO
Ultrafast Latch Statistics            TYPICAL ENABLED                            NO
Threshold-based Alerts                TYPICAL ENABLED                            NO
Global Cache Statistics               TYPICAL ENABLED                            NO
Active Session History                TYPICAL ENABLED  V$ACTIVE_SESSION_HISTORY  NO
Undo Advisor, Alerts and Fast Ramp up TYPICAL ENABLED  V$UNDOSTAT                NO

16 rows selected.

Sunday, May 17, 2009

RMAN Restore From a New Backup Location

Question -----

Suppose I have taken an RMAN backup on the D:\ drive. Because this mount point is 100% full, I have moved the backup files to the E:\ drive. Now I wish to recover my database. How do I tell RMAN about the new backup location for the restore?


ANSWER -----------


You have to catalog the backups at their new location in the recovery catalog.

Use the following steps:


crosscheck backup;
delete noprompt expired backup;

catalog backuppiece '<path_of_moved_backup_piece>'; {repeat this for each backup file}

Note: Be sure you really moved the backup files to the new location before running DELETE NOPROMPT EXPIRED; otherwise the backup records are removed from the RMAN repository and you will not be able to execute a full restore.
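Putting the steps together, an RMAN session might look like the sketch below; the path under E:\ is only an illustrative example, so substitute your actual backup piece names:

```sql
RMAN> crosscheck backup;
RMAN> delete noprompt expired backup;
RMAN> catalog backuppiece 'E:\backup\example_piece_01.bkp';
RMAN> restore database;
RMAN> recover database;
```

Repeat the CATALOG BACKUPPIECE command once per moved file before starting the restore.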

Wednesday, May 13, 2009

SGA_TARGET vs SGA_MAX_SIZE

SGA_MAX_SIZE


sga_max_size sets the maximum value for sga_target
If sga_max_size is less than the sum of db_cache_size + log_buffer + shared_pool_size + large_pool_size at initialization time, then the value of sga_max_size is ignored.


SGA_TARGET


This parameter is new with Oracle 10g. It specifies the total amount of SGA memory available to an instance. Setting this parameter makes Oracle distribute the available memory among the various components - such as the shared pool (for SQL and PL/SQL), Java pool, large pool and buffer cache - as required.
This feature is called Automatic Shared Memory Management (ASMM). With ASMM, the parameters java_pool_size, shared_pool_size, large_pool_size and db_cache_size need not be specified explicitly anymore.
sga_target cannot be higher than sga_max_size.


SGA_TARGET is a database initialization parameter (introduced in Oracle 10g) that can be used for automatic SGA memory sizing.
Parameter description:
SGA_TARGET
Property Description
Parameter type Big integer
Syntax SGA_TARGET = integer [K | M | G]
Default value 0 (SGA autotuning is disabled)
Modifiable ALTER SYSTEM
Range of values 64 MB to operating system-dependent maximum
Basic Yes



SGA_TARGET provides the following:


• Single parameter for total SGA size
• Automatically sizes SGA components
• Memory is transferred to where most needed
• Uses workload information
• Uses internal advisory predictions
• STATISTICS_LEVEL must be set to TYPICAL


By using one parameter we don't need to use all other SGA parameters like.
• DB_CACHE_SIZE (DEFAULT buffer pool)
• SHARED_POOL_SIZE (Shared Pool)
• LARGE_POOL_SIZE (Large Pool)
• JAVA_POOL_SIZE (Java Pool)
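To see how ASMM is currently distributing memory among these components, you can query V$SGA_DYNAMIC_COMPONENTS (a sketch; the sizes will of course differ per instance):

```sql
SELECT component,
       current_size/1024/1024 AS current_mb
  FROM v$sga_dynamic_components
 WHERE current_size > 0
 ORDER BY current_size DESC;
```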



Enable SGA_TARGET


SQL> show parameter sga_target

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sga_target big integer 0
SQL> show parameter sga_max_size

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sga_max_size big integer 600M



As we can see, automatic SGA tuning is not enabled, so we can enable it by setting the SGA_TARGET parameter value.
SQL> alter system set sga_target=500m;
System altered.

SQL> show parameter sga_target

NAME TYPE VALUE
------------------------------------ ----------- -------
sga_target big integer 500M



Resize SGA_TARGET


• SGA_TARGET is dynamic
• Can be increased up to SGA_MAX_SIZE
• Can be reduced until some component reaches its minimum size
• A change in the value of SGA_TARGET affects only the automatically sized components
SQL> show parameter sga_max_size
NAME TYPE VALUE
------------------------------------ ----------- -----------
sga_max_size big integer 600M
SQL> show parameter sga_target
NAME TYPE VALUE
------------------------------------ ----------- -----------
sga_target big integer 500M
We can resize it only up to 600M; if we try to increase it beyond 600M, we will get an error.
SQL> alter system set sga_target=605m;



alter system set sga_target=605m *
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-00823: Specified value of sga_target greater than sga_max_size
To go beyond that, we must first increase the SGA_MAX_SIZE parameter value. But we must restart the instance, because it is a STATIC parameter.
SQL> alter system set sga_max_size=956m scope=spfile;
System altered.

SQL> startup force
ORACLE instance started.
Total System Global Area 1000189952 bytes
Fixed Size 1337492 bytes
Variable Size 624953196 bytes
Database Buffers 369098752 bytes
Redo Buffers 4800512 bytes
Database mounted.
Database opened.

SQL> show parameter sga_max_size
NAME TYPE VALUE
------------------------------------ ----------- --------
sga_max_size big integer 956M

SQL> alter system set sga_target=900m;
System altered.




Disable SGA_TARGET


We can disable automatic SGA tuning by setting the SGA_TARGET parameter to 0.
SQL> alter system set sga_target=0;

System altered.

Understanding Shared Memory and Semaphores

Shared memory and semaphores are two important resources for an Oracle instance on Unix. An instance cannot start if it is unable to allocate what it needs.


DEFINITIONS


Shared memory is exactly that: a memory region that can be shared between different processes. Oracle uses shared memory to implement the SGA, which needs to be visible to all database sessions.
A semaphore is a mechanism for inter-process communication. Semaphores allow Oracle server processes to stop and wait, and then to be notified when they should resume processing.

As resources are allocated, the semaphore counter is decremented; as resources are freed, the counter is incremented. A count of zero indicates that no resources are available.
Semaphores can be thought of as flags (hence their name). They are either on or off. A process can turn the flag on or turn it off. If the flag is already on, processes that try to turn on the flag will sleep until the flag is off. Upon awakening, the process will reattempt to turn the flag on, possibly succeeding or possibly sleeping again. Such behaviour allows semaphores to be used to implement a post-wait driver: a system where processes can wait for events (i.e. wait on turning on a semaphore) and post events (i.e. turn off a semaphore). This mechanism is used by Oracle to maintain concurrency control over the SGA, since it is writable by all attached processes.


ALLOCATION IN SIMPLE TERMS



Shared memory required by the Oracle instance: on instance startup, the first things the instance does are:

- read the "init.ora"
- start the background processes
- allocate the required shared memory and semaphores

The size of the SGA is calculated from various "init.ora" parameters; this is the amount of shared memory required. The SGA is broken into 4 sections: the fixed portion, which is constant in size; the variable portion, which varies in size depending on "init.ora" parameters; the redo block buffer, whose size is controlled by log_buffers; and the db block buffer, whose size is controlled by db_block_buffers. The size of the SGA is the sum of the sizes of the 4 portions. There is unfortunately no simple formula for determining the size of the variable portion.
Generally, the shared pool dominates all other parts of the variable portion, so as a rule of thumb one can estimate its size as the value of shared_pool_size.
The number of semaphores required is much simpler to determine: Oracle needs exactly as many semaphores as the value of the PROCESSES "init.ora" parameter.

SHARED MEMORY ALLOCATION


1. One-segment

2. Contiguous multi-segment

3. Non-contiguous multi-segment


When attempting to allocate and attach shared memory for the SGA, Oracle tries each model, in the above order, until one succeeds or raises an ORA error. On other, non-fatal, errors, Oracle simply cleans up and tries again using the next memory model. The entire SGA must fit into shared memory, so the total amount of shared memory allocated under any model will be equal to the size of the SGA (SGASIZE).
1. One-segment: The one-segment model is the simplest and the first model tried. In this model, the SGA resides in only one shared memory segment. Oracle attempts to allocate and attach one shared memory segment of size equal to the total size of the SGA. However, if SGASIZE is larger than the configured SHMMAX, this will obviously fail. In this case, the SGA needs to be placed in multiple shared memory segments, and Oracle proceeds to the next memory model for the SGA.
With multiple segments there are two possibilities. The segments can be attached contiguously, so that it appears to be one large shared memory segment, or non-contiguously, with gaps between the segments.
2. Contiguous multi-segment: In the contiguous-segment model, Oracle simply divides the SGA into SGASIZE/SHMMAX (rounded down) segments of size SHMMAX, plus another segment of size SGASIZE modulo SHMMAX.
3. Non-contiguous multi-segment: Once the number of segments and their sizes are determined, Oracle allocates and attaches the segments one at a time; first the fixed and variable portion segment(s), then the redo block buffer segment(s), then the db block buffer segment(s). They are attached non-contiguously.
At this point, we have either attached the entire SGA or returned an ORA error. The total size of segments attached is exactly SGASIZE; no space is wasted. Once Oracle has the shared memory attached, Oracle proceeds to allocating the semaphores it requires.
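To illustrate the contiguous-segment arithmetic with made-up numbers: if SGASIZE were 500 MB and SHMMAX 128 MB, Oracle would need TRUNC(500/128) = 3 segments of 128 MB plus a final segment of MOD(500,128) = 116 MB. The same arithmetic in SQL:

```sql
-- Hypothetical sizes in MB, purely for illustration
SELECT TRUNC(500/128) AS full_segments,
       MOD(500, 128)  AS last_segment_mb
  FROM dual;
```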
Recommended values of kernel parameters for shared memory in Oracle 8i:

SHMMAX = maximum size of a shared memory segment = 0.5 * size of physical memory

SHMMIN = minimum size of a shared memory segment = 1

SHMMNI = maximum number of shared memory identifiers on the system = 100

SHMSEG = maximum number of shared memory segments per process = 10

The maximum SGA that can be created by the one-segment model is SHMMAX * SHMSEG.
You can display the current kernel parameters with "sysdef -i".


SEMAPHORE ALLOCATION


Oracle needs to allocate a number of semaphores equal to the PROCESSES parameter in "init.ora".
SEMMSL = number of semaphores in a semaphore set

SEMMNI = maximum number of semaphore sets in the system

SEMMNS = number of semaphores in the system

SEMOPM = maximum number of operations per semop call = 100

SEMVMX = semaphore maximum value = 32767
When an Oracle instance is started, all required semaphores will be allocated. Semaphores are allocated in sets.
Since each Oracle process requires a semaphore, the number allocated is equal to the value of the init.ora parameter PROCESSES. The total # of semaphores required is the sum of all your instances' PROCESSES values.
You can allocate all of your semaphores in one or more semaphore sets. If SEMMSL=PROCESSES, then only one semaphore set is required.
The maximum # of semaphores that can be allocated will be the lesser of (SEMMSL*SEMMNI) or SEMMNS.
If SEMMSL is not equal to PROCESSES, be sure that the total # of semaphores required (sum of PROCESSES) does not exceed the maximum (the lesser of SEMMSL*SEMMNI and SEMMNS).
For example, if SEMMSL=25 and SEMMNI=10, total # of semaphores required (sum of PROCESSES) must not exceed 250 (10 semaphore sets * 25 semaphores/set).
Note: some Operating Systems have a maximum # of semaphore sets in the system.
If you have more than one instance and the values of PROCESSES are different, you may want to make SEMMSL equal to the lowest PROCESSES so that you don't allocate semaphores that will not be used. Otherwise, this could prevent you from being able to allocate all of your requirements.
For example:
Instance PROD has PROCESSES=100
Instance DEV has PROCESSES=50
If SEMMSL = 50, 3 semaphore sets will be allocated, 2 for PROD and 1 for DEV.
If SEMMSL = 100, 2 semaphore sets will be allocated, 1 for PROD
and 1 for DEV. In this case, 100 semaphores will be allocated for DEV when it will only use 50. These unused 50 semaphores cannot be allocated for any other database.
To see what semaphores have been allocated, use the Unix command 'ipcs -b'.


For example:


Semaphores:

T ID KEY MODE OWNER GROUP NSEMS
s 0 0 --ra-r----- osupport dba 25
s 1 0 --ra-r----- osupport dba 25
s 18 0 --ra-r----- osupport dba 25
s 19 0 --ra-r----- osupport dba 25
s 4 0 --ra-r----- osupport dba 25
s 5 0 --ra-r----- osupport dba 25



NSEMS = the number of semaphores in each semaphore set.
Perform these steps for each instance that is up and running:

$ svrmgrl
SVRMGR>connect internal
SVRMGR>oradebug ipc

This will show the shared memory segment and semaphore that each instance has attached/in use.
Example output from oradebug ipc command:
-------------- Shared memory --------------
Seg Id Address Size
10250 c1eaf000 4591616
Total: # of segments = 1, size = 4591616
-------------- Semaphores ----------------
Total number of semaphores = 50
Number of semaphores per set = 50
Number of semaphore sets = 1
Semaphore identifiers:
188434


The Seg Id value 10250 identifies the shared memory segment attached to the RUNNING instance. DO NOT REMOVE THAT ONE.
The semaphore identifier 188434 identifies the semaphore set attached to the RUNNING instance. DO NOT REMOVE THAT ONE.
Once you have noted ALL of the identifiers for ALL of the instances which are up and running, compare these id numbers to those in the "ipcs -b" listing.
The entry that does not have a running instance to match is the orphaned entry. THAT ONE SHOULD BE REMOVED.
The command used to remove these entries is ipcrm.
NOTE: The option differs for shared memory and semaphores:
ipcrm -m <shmid>   <== use for a shared memory entry

ipcrm -s <semid>   <== use for a semaphore entry

Oradebug

The oradebug utility falls into the “hidden” classification of utilities due to the lack of available documentation. The utility is invoked directly from SQL*Plus beginning in version 8.1.5 and Server Manager in releases prior to that. The utility can trace a user session as well as perform many other, more global, database tracing functions.

oradebug requires the SYSDBA privilege to execute (connect internal on older Oracle versions). The list of oradebug options can be viewed by typing oradebug help at the SQL*Plus prompt:

SQL> oradebug help
HELP [command] Describe one or all commands
SETMYPID Debug current process
SETOSPID Set OS pid of process to debug
SETORAPID ['force'] Set Oracle pid of process to debug
DUMP [addr] Invoke named dump
DUMPSGA [bytes] Dump fixed SGA
DUMPLIST Print a list of available dumps
EVENT Set trace event in process
SESSION_EVENT Set trace event in session
DUMPVAR [level] Print/dump a fixed PGA/SGA/UGA variable
SETVAR Modify a fixed PGA/SGA/UGA variable
PEEK [level] Print/Dump memory
POKE Modify memory
WAKEUP Wake up Oracle process
SUSPEND Suspend execution
RESUME Resume execution
FLUSH Flush pending writes to trace file
CLOSE_TRACE Close trace file
TRACEFILE_NAME Get name of trace file
LKDEBUG Invoke global enqueue service debugger
NSDBX Invoke CGS name-service debugger
-G Parallel oradebug command prefix
-R Parallel oradebug prefix (return output)
SETINST Set instance list in double quotes
SGATOFILE Dump SGA to file; dirname in double quotes
DMPCOWSGA Dump & map SGA as COW; dirname in double quotes
MAPCOWSGA Map SGA as COW; dirname in double quotes
HANGanalyze [level] Analyze system hang
FFBEGIN Flash Freeze the Instance
FFDEREGISTER FF deregister instance from cluster
FFTERMINST Call exit and terminate instance
FFRESUMEINST Resume the flash frozen instance
FFSTATUS Flash freeze status of instance
SKDSTTPCS Helps translate PCs to names
WATCH Watch a region of memory
DELETE watchpoint Delete a watchpoint
SHOW watchpoints Show watchpoints
CORE Dump core without crashing process
UNLIMIT Unlimit the size of the trace file
PROCSTAT Dump process statistics
CALL [arg1] ... [argn] Invoke function with arguments


Like tkprof, oradebug depends on trace files to store its output. These files are written to the location specified by the user_dump_dest initialization parameter. The trace files are named according to the SPID of the process in which the oradebug command is executed, the same naming scheme as described earlier.

Some of the oradebug options apply to a particular session and therefore require a system process id (SPID) as obtained from v$process. Other options are more global in nature and can be executed without attaching to any session.


ORADEBUG Session Trace


The oradebug utility provides the ability to trace a particular user session, and it provides details that are not available through other tracing mechanisms.

The following steps are required to trace a user session with oradebug:

1. Obtain the SPID from v$process.
SQL> select username, spid from v$process;

2. Start the debug session with the SPID of the process that needs to be traced.
SQL> oradebug setospid 2280

3. Select the appropriate trace level. There are four options when specifying a tracing level:


Level 1 – provides “base set” tracing information. Bind variables are displayed as placeholders, not as values.

Level 4 – provides Level 1 data plus the actual data values of bind variables.

Level 8 – provides Level 1 data plus information on wait events.

Level 12 – combines Levels 1, 4 and 8. A Level 12 trace contains the base set, bind variable values and wait events.



The oradebug command below will enable the maximum tracing possible:

SQL> oradebug event 10046 trace name context forever, level 12

4. Turn tracing off.
SQL> oradebug event 10046 trace name context off



5. Obtain the trace file name. The oradebug facility provides an easy way to obtain the file name:
SQL> oradebug tracefile_name
c:\oracle9i\admin\ORCL92\udump\mooracle_ora_2280.trc



6. Format the trace file with tkprof (as described in the earlier section on tkprof).

The result is a trace file that contains more detailed information. Viewing wait events and bind variable values can be critical to diagnosing performance issues.


ORADEBUG Dumps



The oradebug utility provides many options for dumping database information to trace files. To see what data can be dumped, the oradebug dumplist command returns the available options:

• EVENTS
• TRACE_BUFFER_ON
• TRACE_BUFFER_OFF
• HANGanalyze
• LATCHES
• PROCESSSTATE
• SYSTEMSTATE
• INSTANTIATIONSTATE
• REFRESH_OS_STATS
• CROSSIC
• CONTEXTAREA
• HEAPDUMP
• HEAPDUMP_ADDR
• POKE_ADDRESS
• POKE_LENGTH
• POKE_VALUE
• POKE_VALUE0
• GLOBAL_AREA
• MEMORY_LOG
• REALFREEDUMP
• ERRORSTACK
• HANGANALYZE_PROC
• TEST_STACK_DUMP
• BG_MESSAGES
• ENQUEUES
• SIMULATE_EOV
• KSFQP_LIMIT
• KSKDUMPTRACE
• DBSCHEDULER
• GRANULELIST
• GRANULELISTCHK
• SCOREBOARD
• GES_STATE
• ADJUST_SCN
• NEXT_SCN_WRAP
• CONTROLF
• FULL_DUMPS
• BUFFERS
• RECOVERY
• SET_TSN_P1
• BUFFER
• PIN_BLOCKS
• BC_SANITY_CHECK
• FLUSH_CACHE
• LOGHIST
• ARCHIVE_ERROR
• REDOHDR
• LOGERROR
• OPEN_FILES
• DATA_ERR_ON
• DATA_ERR_OFF
• BLK0_FMTCHG
• TR_SET_BLOCK
• TR_SET_ALL_BLOCKS
• TR_SET_SIDE
• TR_CRASH_AFTER_WRITE
• TR_READ_ONE_SIDE
• TR_CORRUPT_ONE_SIDE
• TR_RESET_NORMAL
• TEST_DB_ROBUSTNESS
• LOCKS
• GC_ELEMENTS
• FILE_HDRS
• KRB_CORRUPT_INTERVAL
• KRB_CORRUPT_SIZE
• KRB_PIECE_FAIL
• KRB_OPTIONS
• KRB_SIMULATE_NODE_AFFINITY
• KRB_TRACE
• KRB_BSET_DAYS
• DROP_SEGMENTS
• TREEDUMP
• LONGF_CREATE
• ROW_CACHE
• LIBRARY_CACHE
• SHARED_SERVER_STATE
• KXFPCLEARSTATS
• KXFPDUMPTRACE
• KXFPBLATCHTEST
• KXFXSLAVESTATE
• KXFXCURSORSTATE
• WORKAREATAB_DUMP
• OBJECT_CACHE
• SAVEPOINTS
• OLAP_DUMP

One scenario in which a dump may aid in diagnosing the problem is when dealing with a system hang. If no errors exist in the alert log and the database appears to be hung, connect as SYSDBA and execute the following:

SQL> oradebug setmypid
SQL> oradebug unlimit
SQL> oradebug dump systemstate 10

This creates a very large trace file from the system dump. For this reason, the oradebug unlimit option should be used to override the maximum trace file size as specified in init.ora.

oradebug also has the capability to only produce trace output if a particular error is encountered. This command is especially useful when certain errors are encountered inconsistently. Rather than generate enormous trace files, the trace data will only be generated when the particular error occurs. The following command will monitor a particular session and only generate trace data when the ORA-00942 error is hit:

SQL> oradebug event 942 trace name errorstack level 3

oradebug can also be used to suspend and resume any user database connection. When suspended, the user session will simply hang until the resume command is processed. During the suspension period, v$session_wait will indicate that the session is waiting on the debugger.
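A minimal suspend/resume sequence looks like this (2280 is just the example OS pid used in the tracing steps above):

```sql
SQL> oradebug setospid 2280
SQL> oradebug suspend
-- the target session now hangs; v$session_wait shows it waiting on the debugger
SQL> oradebug resume
```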

Only a few of the oradebug options were discussed. oradebug is a powerful utility to be used only by expert DBAs when diagnosing serious database issues. oradebug can be used in tandem with tkprof to get more detailed information in trace files. In addition, the tool can be used to generate system or process state dumps which can be vital when diagnosing system hang scenarios.

Monday, May 11, 2009

How To Generate EXPLAIN PLAN

EXPLAIN PLAN
SET STATEMENT_ID = 'Raise in Tokyo' {STATEMENT_ID can be any text string}
INTO plan_table
FOR UPDATE employees
SET salary = salary * 1.10
WHERE department_id =
(SELECT department_id FROM departments
WHERE location_id = 1200);
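Once the plan has been generated into PLAN_TABLE, it can be formatted with DBMS_XPLAN (available from 9iR2 onward); the statement ID must match the one given above:

```sql
SELECT * FROM TABLE(
  DBMS_XPLAN.DISPLAY('plan_table', 'Raise in Tokyo', 'TYPICAL'));
```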

Sunday, May 10, 2009

Analyzing Tables, Indexes, and Clusters

You analyze a schema object (table, index, or cluster) to:

* Collect and manage statistics for it
* Verify the validity of its storage format
* Identify migrated and chained rows of a table or cluster

Note:
Do not use the COMPUTE and ESTIMATE clauses of ANALYZE to collect optimizer statistics. These clauses are supported for backward compatibility. Instead, use the DBMS_STATS package, which lets you collect statistics in parallel, collect global statistics for partitioned objects, and fine tune your statistics collection in other ways. The cost-based optimizer, which depends upon statistics, will eventually use only statistics that have been collected by DBMS_STATS.

You must use the ANALYZE statement (rather than DBMS_STATS) for statistics collection not related to the cost-based optimizer, such as:

* To use the VALIDATE or LIST CHAINED ROWS clauses
* To collect information on freelist blocks


The following topics are discussed in this section:

* Using DBMS_STATS to Collect Table and Index Statistics

* Validating Tables, Indexes, Clusters, and Materialized Views

* Listing Chained Rows of Tables and Clusters


Using DBMS_STATS to Collect Table and Index Statistics

You can use the DBMS_STATS package or the ANALYZE statement to gather statistics about the physical storage characteristics of a table, index, or cluster. These statistics are stored in the data dictionary and can be used by the optimizer to choose the most efficient execution plan for SQL statements accessing analyzed objects.

Oracle recommends using the more versatile DBMS_STATS package for gathering optimizer statistics, but you must use the ANALYZE statement to collect statistics unrelated to the optimizer, such as empty blocks, average space, and so forth.

The DBMS_STATS package allows both the gathering of statistics, including utilizing parallel execution, and the external manipulation of statistics. Statistics can be stored in tables outside of the data dictionary, where they can be manipulated without affecting the optimizer. Statistics can be copied between databases or backup copies can be made.

The following DBMS_STATS procedures enable the gathering of optimizer statistics:

* GATHER_INDEX_STATS
* GATHER_TABLE_STATS
* GATHER_SCHEMA_STATS
* GATHER_DATABASE_STATS
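For example, a typical GATHER_TABLE_STATS call (the schema and table names here are only illustrative) might look like:

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SCOTT',
    tabname          => 'EMP',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);   -- gather statistics on the table's indexes as well
END;
/
```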


Validating Tables, Indexes, Clusters, and Materialized Views

To verify the integrity of the structure of a table, index, cluster, or materialized view, use the ANALYZE statement with the VALIDATE STRUCTURE option. If the structure is valid, no error is returned. However, if the structure is corrupt, you receive an error message.

For example, in rare cases such as hardware or other system failures, an index can become corrupted and not perform correctly. When validating the index, you can confirm that every entry in the index points to the correct row of the associated table. If the index is corrupt, you can drop and re-create it.

If a table, index, or cluster is corrupt, you should drop it and re-create it. If a materialized view is corrupt, perform a complete refresh and ensure that you have remedied the problem. If the problem is not corrected, drop and re-create the materialized view.

The following statement analyzes the emp table:

ANALYZE TABLE emp VALIDATE STRUCTURE;

You can validate an object and all dependent objects (for example, indexes) by including the CASCADE option. The following statement validates the emp table and all associated indexes:

ANALYZE TABLE emp VALIDATE STRUCTURE CASCADE;

By default the CASCADE option performs a complete validation. Because this operation can be resource intensive, you can perform a faster version of the validation by using the FAST clause. This version checks for the existence of corruptions using an optimized check algorithm, but does not report details about the corruption. If the FAST check finds a corruption, you can then use the CASCADE option without the FAST clause to locate it. The following statement performs a fast validation on the emp table and all associated indexes:

ANALYZE TABLE emp VALIDATE STRUCTURE CASCADE FAST;

You can specify that you want to perform structure validation online while DML is occurring against the object being validated. There can be a slight performance impact when validating with ongoing DML affecting the object, but this is offset by the flexibility of being able to perform ANALYZE online. The following statement validates the emp table and all associated indexes online:

ANALYZE TABLE emp VALIDATE STRUCTURE CASCADE ONLINE;





Listing Chained Rows of Tables and Clusters

You can look at the chained and migrated rows of a table or cluster using the ANALYZE statement with the LIST CHAINED ROWS clause. The results of this statement are stored in a specified table created explicitly to accept the information returned by the LIST CHAINED ROWS clause. These results are useful in determining whether you have enough room for updates to rows.

Creating a CHAINED_ROWS Table

To create the table to accept data returned by an ANALYZE...LIST CHAINED ROWS statement, execute the UTLCHAIN.SQL or UTLCHN1.SQL script. These scripts are provided by the database. They create a table named CHAINED_ROWS in the schema of the user submitting the script.



After a CHAINED_ROWS table is created, you specify it in the INTO clause of the ANALYZE statement. For example, the following statement inserts rows containing information about the chained rows in the emp_dept cluster into the CHAINED_ROWS table:

ANALYZE CLUSTER emp_dept LIST CHAINED ROWS INTO CHAINED_ROWS;




Eliminating Migrated or Chained Rows in a Table



You can use the information in the CHAINED_ROWS table to reduce or eliminate migrated and chained rows in an existing table. Use the following procedure.

1. Use the ANALYZE statement to collect information about migrated and chained rows.

ANALYZE TABLE order_hist LIST CHAINED ROWS;

2. Query the output table:

SELECT *
FROM CHAINED_ROWS
WHERE TABLE_NAME = 'ORDER_HIST';

OWNER_NAME TABLE_NAME CLUST... HEAD_ROWID TIMESTAMP
---------- ---------- -----... ------------------ ---------
SCOTT ORDER_HIST ... AAAAluAAHAAAAA1AAA 04-MAR-96
SCOTT ORDER_HIST ... AAAAluAAHAAAAA1AAB 04-MAR-96
SCOTT ORDER_HIST ... AAAAluAAHAAAAA1AAC 04-MAR-96

The output lists all rows that are either migrated or chained.

3. If the output table shows that you have many migrated or chained rows, then you can eliminate migrated rows by continuing through the following steps:


4. Create an intermediate table with the same columns as the existing table to hold the migrated and chained rows:

CREATE TABLE int_order_hist
AS SELECT *
FROM order_hist
WHERE ROWID IN
(SELECT HEAD_ROWID
FROM CHAINED_ROWS
WHERE TABLE_NAME = 'ORDER_HIST');


5. Delete the migrated and chained rows from the existing table:

DELETE FROM order_hist
WHERE ROWID IN
(SELECT HEAD_ROWID
FROM CHAINED_ROWS
WHERE TABLE_NAME = 'ORDER_HIST');


6. Insert the rows of the intermediate table into the existing table:

INSERT INTO order_hist
SELECT *
FROM int_order_hist;


7. Drop the intermediate table:

DROP TABLE int_order_hist;


8. Delete the information collected in step 1 from the output table:

DELETE FROM CHAINED_ROWS
WHERE TABLE_NAME = 'ORDER_HIST';


9. Use the ANALYZE statement again, and query the output table.

Any rows that appear in the output table are chained. You can eliminate chained rows only by increasing your data block size. It might not be possible to avoid chaining in all situations. Chaining is often unavoidable with tables that have a LONG column or large CHAR or VARCHAR2 columns.
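As a quick check between runs, the CHAIN_CNT column of USER_TABLES gives the combined count of chained and migrated rows. Note that it is populated by ANALYZE ... COMPUTE STATISTICS, not by DBMS_STATS:

```sql
-- Refresh CHAIN_CNT, then inspect it
ANALYZE TABLE order_hist COMPUTE STATISTICS;

SELECT table_name, chain_cnt
FROM   user_tables
WHERE  table_name = 'ORDER_HIST';
```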

Saturday, May 9, 2009

How to find the most resource-consuming statement in Oracle

1. SQL> select s.sid, s.serial# "Session", p.serial# "Process" from v$session s, v$process p where s.paddr = p.addr and p.spid = '&os_pid'; -- substitute the PID of the process found with top in UNIX

2. The above command gives you the SID of the long-running statement.

3. Next, find the wait event for that session:

4. SQL> select sid, event, seconds_in_wait, state from v$session_wait order by seconds_in_wait desc;

5. Next, find the hash value so you can retrieve the SQL text:

6. SQL> select sid, sql_hash_value from v$session where sid = &sid;

7. SQL> select sql_fulltext from v$sql where hash_value = &hash_value;
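Assuming the same &os_pid substitution variable, the steps above can be collapsed into a single join:

```sql
-- One query from OS PID to SQL text (V$PROCESS.SPID is stored as a string)
SELECT s.sid, s.serial#, s.sql_hash_value, q.sql_fulltext
FROM   v$session s, v$process p, v$sql q
WHERE  s.paddr          = p.addr
AND    s.sql_hash_value = q.hash_value
AND    p.spid           = '&os_pid';
```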

Tuesday, May 5, 2009

Virtual Private Database

Oracle 10g Virtual Private Database

Virtual Private Database (VPD) was first introduced in Oracle8i. It set a new standard in database security because the security is enforced in the database server itself rather than in each application that accesses the data. Security is no longer bypassed when a user accesses the database with an ad hoc query tool or a new application. Virtual Private Database is a key enabling technology for organizations building hosted, web-based applications that expose mission-critical resources to customers.

Virtual Private Database Overview

Virtual Private Database enables the database to perform query modification based on a security policy you have defined in a package. A security policy is a restriction associated with a table or view.

When a user directly or indirectly accesses a table or view associated with a VPD security policy, the server dynamically modifies the user's SQL statement. The modification is based on a where condition (a predicate) returned by a function which implements the security policy. The modification is transparent to the user.

Data access via Virtual Private Database involves the following five steps:

1. The user sends SQL to the database server.
2. The associated table triggers a pre-defined security policy.
3. The security policy returns a predicate.
4. The SQL statement is modified according to the predicate.
5. Secured data is returned to the user.


In Oracle8i, Virtual Private Database provided the following key features:

* Fine-grained access control
* Application context
* Row-level security
* VPD support for tables and views

Oracle9i expanded the Virtual Private Database features as follows:

* Oracle Policy Manager
* Partitioned fine-grained access control
* Global application context
* VPD support for synonyms



-------------------------------------------------------------------------------------



Newly Added Features in Oracle 10g


Five types of policies, column-relevant policies, and column masking make VPD an even more powerful tool in the DBA's security toolbox.


Virtual Private Database (VPD), also known as Fine Grained Access Control, provides powerful row-level security capabilities. Introduced in Oracle8i, it has become widely popular and can be found in a variety of applications ranging from education software to financial services.

VPD works by transparently modifying requests for data to present a partial view of the tables to the users based on a set of defined criteria. During runtime, predicates are appended to all the queries to filter any rows the user is supposed to see. For example, if the user is supposed to see only accounts of account manager SCOTT, the VPD setup automatically rewrites the query:

select * from accounts;

to:

select * from accounts
where am_name = 'SCOTT';

The DBA sets a security policy on the table ACCOUNTS. The policy has an associated function called policy function, which returns the string where am_name = 'SCOTT', which is applied as a predicate. If you are not familiar with the full functionality of the feature, I encourage you to read the Oracle Magazine article "Keeping Information Private with VPD."

Policy Types


The repeated parsing necessary to generate the predicate is overhead that you can trim in some situations. For example, in most real-life cases the predicate is not as static as where am_name = 'SCOTT'; it's probably more dynamic, based on who the user is, the authority level of the user, which account manager she reports to, and so on. The string created and returned by the policy function may become very dynamic, and to guarantee the outcome, Oracle must re-execute the policy function every time, wasting resources and reducing performance. This type of policy, where the predicate can potentially be very different each time it is executed, is known as a "dynamic" policy, and was the only type available in Oracle9i Database and earlier releases.

In addition to retaining the dynamic policy, Oracle Database 10g introduces several new types of policies based on how the predicate is constructed, providing better controls for improving performance: context_sensitive, shared_context_sensitive, shared_static, and static. Now, let's see what each policy type means and how to use it in appropriate situations.




Dynamic Policy. To retain backward compatibility, the default policy type in 10g is "dynamic"—just as it was in Oracle9i. In this case, the policy function is re-evaluated each time the table is accessed, for each row and for every user. Let's examine the policy predicate closely:

where am_name = 'SCOTT'

Ignoring the where clause, the predicate has two distinct parts: the portion before the equality operator (am_name) and the one after it ('SCOTT'). In most cases, the one after is more like a variable in that it is supplied from the user's data (if the user is SCOTT, the value would be 'SCOTT'). The part before the equality sign is static. So, even though the database does have to execute the policy function to generate the appropriate predicate, the knowledge about the static-ness of the before-part and the dynamic-ness of the after-part can be used to improve performance. This approach is possible in 10g using a policy of type "context_sensitive" as a parameter in the dbms_rls.add_policy call:

policy_type => dbms_rls.context_sensitive

In another example scenario, we have a table called ACCOUNTS with several columns, one of which is BALANCE, indicating the account balance. Let's assume that a user is allowed to view accounts below a certain balance that is determined by an application context. Instead of hard-coding this balance amount in a policy function, we can use an application context as in:

create or replace function vpd_pol_func
(
    p_schema in varchar2,
    p_table  in varchar2
)
return varchar2
is
begin
    return 'balance < sys_context(''vpdctx'', ''maxbal'')';
end;
/

The attribute MAXBAL of the application context VPDCTX can be set earlier in the session and the function can simply get the value at the runtime.

Note the example carefully here. The predicate has two parts: the one before the less-than sign and the other after it. The one before, the word "balance," is a literal. The one after is more or less static because the application context variable is constant until it is changed. If the application context attribute does not change, the entire predicate is constant, and hence the function need not be re-executed. Oracle Database 10g recognizes this fact for optimization if the policy type is defined as context sensitive. If no session context changes have occurred in the session, the function is not re-executed, significantly improving performance.
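Putting the pieces together, a minimal context-sensitive setup might look like the sketch below. The schema APPUSER, the package VPDCTX_PKG, and the policy name are hypothetical; note that DBMS_SESSION.SET_CONTEXT may only be called from the trusted package named in CREATE CONTEXT:

```sql
-- Application context bound to a trusted package (hypothetical names)
CREATE OR REPLACE CONTEXT vpdctx USING vpdctx_pkg;

CREATE OR REPLACE PACKAGE vpdctx_pkg AS
  PROCEDURE set_maxbal (p_maxbal IN NUMBER);
END;
/

CREATE OR REPLACE PACKAGE BODY vpdctx_pkg AS
  PROCEDURE set_maxbal (p_maxbal IN NUMBER) IS
  BEGIN
    DBMS_SESSION.SET_CONTEXT('vpdctx', 'maxbal', p_maxbal);
  END;
END;
/

-- Register the policy function as context sensitive
BEGIN
  DBMS_RLS.ADD_POLICY (
    object_schema   => 'APPUSER',
    object_name     => 'ACCOUNTS',
    policy_name     => 'ACCOUNTS_SEL_POL',
    function_schema => 'APPUSER',
    policy_function => 'VPD_POL_FUNC',
    statement_types => 'SELECT',
    policy_type     => DBMS_RLS.CONTEXT_SENSITIVE);
END;
/
```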


Static Policy. Sometimes a business operation may warrant a predicate that is more static. For instance, in the context-sensitive policy type example, we defined the maximum balance seen by a user as a variable. This approach is useful in the case of web applications where an Oracle userid is shared by many web users and based on their authority this variable (application context) is set by the application. Therefore web users TAO and KARTHIK, both connecting to the database as user APPUSER, may have two different values of the application context in their session. Here the value of MAXBAL is not tied to the Oracle userid, but rather to the individual session of TAO and KARTHIK.

In the static policy case the predicate is more predictable, as described below.

LORA and MICHELLE are account managers for Acme Bearings and Goldtone Bearings respectively. When they connect to the database, they use their own id and should only see the rows pertaining to them. In Lora's case, the predicate becomes where CUST_NAME = 'ACME'; for Michelle, where CUST_NAME = 'GOLDTONE'. Here the predicate is tied to their userids, and hence any session they create will always have the same value in the application context.

This fact can be exploited by 10g to cache the predicate in the SGA and reuse that in the session without ever re-executing the policy function. The policy function looks like this:

create or replace function vpd_pol_func
(
    p_schema in varchar2,
    p_table  in varchar2
)
return varchar2
is
begin
    return 'cust_name = sys_context(''vpdctx'', ''cust_name'')';
end;
/

And the policy is defined as:

policy_type => dbms_rls.static

This approach ensures that the policy function is executed only once. Even if the application contexts are changed in the session, the function is never re-executed, making this process extremely fast.

Static policies are recommended for hosted applications, where a single database holds data for several users or subscribers. When each subscriber logs in, an after-logon trigger can set the application context to a value that the policy function uses to generate a predicate very quickly.

However, defining a policy as static is also a double-edged sword. In the above example, we assumed that the value of the application context attribute VPDCTX.CUST_NAME does not change inside a session. What if that assumption is incorrect? If the value changes, the policy function will not be executed and therefore the new value will not be used in the predicate, returning wrong results! So, be very careful in defining a policy as static; you must be absolutely certain that the value will not change. If you can't make that assumption, better to define the policy as context sensitive instead.
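A static registration might look like the following sketch. All object and policy names are hypothetical, and the trigger assumes a trusted context package (here called vpdctx_pkg) exposes a setter procedure, since DBMS_SESSION.SET_CONTEXT may only be called from the package named in CREATE CONTEXT:

```sql
BEGIN
  DBMS_RLS.ADD_POLICY (
    object_schema   => 'APPUSER',
    object_name     => 'ACCOUNTS',
    policy_name     => 'ACCOUNTS_STATIC_POL',
    function_schema => 'APPUSER',
    policy_function => 'VPD_POL_FUNC',
    statement_types => 'SELECT',
    policy_type     => DBMS_RLS.STATIC);
END;
/

-- Hypothetical after-logon trigger that seeds the context once per session
CREATE OR REPLACE TRIGGER set_vpd_ctx
AFTER LOGON ON DATABASE
BEGIN
  vpdctx_pkg.set_cust_name(SYS_CONTEXT('userenv', 'session_user'));
END;
/
```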


Shared Policy Types. To reuse code and maximize the usage of parsed code, you might decide to use a common policy function for several tables. For instance, in the above example, we may have different tables for different types of accounts—SAVINGS and CHECKING—but the rule is still the same: users are restricted from seeing accounts with balances more than they are authorized for. This scenario calls for a single function used for policies on the CHECKING and SAVINGS tables. The policy is created as context_sensitive.

Suppose this is the sequence of events:

1. Session connected
2. Application context is set
3. select * from savings;
4. select * from checking;

Even though the application context does not change between steps 3 and 4, the policy function will be re-executed, simply because the tables selected are different now. This is not desirable, as the policy function is the same and there is no need to re-execute the function.

New in 10g is the ability to share a policy across objects. In the above example, you would define the policy type of these policies as:

policy_type => dbms_rls.shared_context_sensitive

Declaring the policies as "shared" improves performance by not executing the function again in the cases as shown above.
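Registering the same hypothetical function against both tables with the shared type might be sketched as:

```sql
-- One policy function reused across SAVINGS and CHECKING (hypothetical names)
BEGIN
  DBMS_RLS.ADD_POLICY (
    object_schema   => 'APPUSER',
    object_name     => 'SAVINGS',
    policy_name     => 'SAVINGS_POL',
    policy_function => 'VPD_POL_FUNC',
    statement_types => 'SELECT',
    policy_type     => DBMS_RLS.SHARED_CONTEXT_SENSITIVE);

  DBMS_RLS.ADD_POLICY (
    object_schema   => 'APPUSER',
    object_name     => 'CHECKING',
    policy_name     => 'CHECKING_POL',
    policy_function => 'VPD_POL_FUNC',
    statement_types => 'SELECT',
    policy_type     => DBMS_RLS.SHARED_CONTEXT_SENSITIVE);
END;
/
```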


Selective Columns


Now imagine a situation where the VPD policy should be applied only if certain columns are selected. In the above example with table ACCOUNTS, the rows are as follows:

ACCTNO ACCT_NAME BALANCE
------ ------------ -------
1 BILL CAMP 1000
2 TOM CONNOPHY 2000
3 ISRAEL D 1500

Michelle is not supposed to see accounts with balances over 1,600. When she issues a query like the following:

select * from accounts;

she sees:

ACCTNO ACCT_NAME BALANCE
------ ------------ -------
1 BILL CAMP 1000
3 ISRAEL D 1500

acctno 2, with balance more than 1,600, has been suppressed in the display. As far as Michelle is concerned, there are only two rows in the table, not three. When she issues a query such as:

select count(*) from accounts;

which simply counts the number of records from the table, the output is two, not three.

However, here we may decide to relax the security policy a bit. In this query Michelle can't view confidential data such as account balances; she merely counts all the records in the table. Consistent with the security policy, we may allow this query to count all the records whether or not she is allowed to see them. If this is the requirement, a new parameter in the 10g call to dbms_rls.add_policy enables exactly that:

sec_relevant_cols => 'BALANCE'

Now when the user selects the column BALANCE, either explicitly or implicitly as in select *, the VPD policy will kick in to restrict the rows. Otherwise all rows of the table will be selected, as in the query where the user has selected only the count of the total rows, not the column BALANCE. If the above parameter is set as shown, then the query

select count(*) from accounts;

will count all three rows, not two. But the query:

select * from accounts;

will still return only two records, as expected.
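In a full add_policy call the parameter sits alongside the others; object and policy names below are hypothetical:

```sql
-- Policy applies only when the BALANCE column is referenced
BEGIN
  DBMS_RLS.ADD_POLICY (
    object_schema     => 'APPUSER',
    object_name       => 'ACCOUNTS',
    policy_name       => 'ACCOUNTS_BAL_POL',
    policy_function   => 'VPD_POL_FUNC',
    statement_types   => 'SELECT',
    sec_relevant_cols => 'BALANCE');
END;
/
```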


Column Masking


Now let's add more requirements to our current example. Instead of suppressing the display of rows with a balance above the threshold, we may want to show all the rows while masking the balance column where the value is above the threshold. The security-relevant column is still BALANCE.

Michelle is not supposed to see accounts with balances over 1,600. When she issues a query like the following:

select * from accounts;

she would have seen only two rows, acctnos 1 and 3. But, instead, we may want her to see:

ACCTNO ACCT_NAME BALANCE
------ ------------ -------
1 BILL CAMP 1000
2 TOM CONNOPHY
3 ISRAEL D 1500

Note how all the rows are displayed, but the value of the column BALANCE is shown as null for acctno 2, where the balance is actually 2,000, more than the threshold of 1,600. This approach is called "column masking," and is enabled by specifying the following parameter in the call to dbms_rls.add_policy:

sec_relevant_cols_opt => dbms_rls.all_rows

This tactic can be very useful in cases where only values of certain columns are important, and requires no complicated custom code. It is also a great alternative to requiring stored data encryption.
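A column-masking registration combines both column-related parameters; as before, the object and policy names are hypothetical:

```sql
-- All rows returned, but BALANCE is nulled where the predicate fails
BEGIN
  DBMS_RLS.ADD_POLICY (
    object_schema         => 'APPUSER',
    object_name           => 'ACCOUNTS',
    policy_name           => 'ACCOUNTS_MASK_POL',
    policy_function       => 'VPD_POL_FUNC',
    statement_types       => 'SELECT',
    sec_relevant_cols     => 'BALANCE',
    sec_relevant_cols_opt => DBMS_RLS.ALL_ROWS);
END;
/
```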


Conclusion

In Oracle Database 10g, VPD has grown into a very powerful feature with the ability to support a variety of requirements, such as masking columns selectively based on the policy and applying the policy only when certain columns are accessed. The performance of the policy can also be increased through multiple types of policy by exploiting the nature of the application, making the feature applicable to multiple situations.

About Me
N.Delhi, Delhi, India
I am an Oracle Certified Professional, Oracle 9i/10G DBA, having 4+ years of core DBA experience.