
Monday, 30 January 2012

Materialized views

What is the difference between a Materialized View and a Materialized View Log?
  
Materialized views are schema objects that can be used to summarize, precompute, replicate, and distribute data, e.g. to build a data warehouse or to feed a multi-master replication environment.

A materialized view is a database object that contains the results of a query. The FROM clause of the query can name tables, views, and other materialized views. Collectively these objects are called master tables (a replication term) or detail tables (a data warehousing term). The databases containing the master tables are called the master databases.

Materialized Views are mainly used for two reasons:

1) Replication of data to separate remote databases.

2) For improving the performance of queries by computing and storing the results of complex aggregations of data.
 
In environments where complex SQL queries are executed repeatedly, mainly reporting or data warehouse systems, materialized views are really helpful in improving performance. Whenever a SQL query is executed, the Oracle database has to do a lot of work before it can return the data: it may have to sort (in memory or on disk), it has to decide on an execution plan for the statement (a full table scan or an index-based scan), and so on.

If such queries are performed repeatedly, all of that work is repeated every time and server performance suffers.

With materialized views the performance can be improved significantly, because when a materialized view is created it stores the results of its defining query. Even if the query is executed repeatedly, it will not eat up resources as it did before; it simply reads the precomputed data.
   
A materialized view can be created on the basis of tables, views, or other materialized views. When a materialized view is created, Oracle also creates a table with the same name as the materialized view, in addition to the materialized view object itself.
 
We will only cover two types of materialized views:

1) Complete-refresh Materialized Views
2) Fast-refresh Materialized Views

COMPLETE REFRESH MATERIALIZED VIEWS

In this type of materialized view the data is completely refreshed from the master table at periodic intervals.

SQL> alter user Scott identified by tiger account unlock;

User altered.

SQL> grant create materialized view to Scott;

Grant succeeded.

SQL> conn scott;
Enter password: 
Connected. 
 
SQL> create table sales(
  2  sales_id int,  
  3  sales_amt int,
  4  region_id int,
  5  sales_dtt timestamp,
  6  constraint sales_pk primary key (sales_id));

Table created.

SQL> insert into sales values(1,101,100,sysdate-50);
SQL> insert into sales values(2,511,200,sysdate-20);
SQL> insert into sales values(3,11,100,sysdate);
SQL> commit;

Now let's create a materialized view.

SQL> create materialized view sales_mv
  2  refresh
  3  complete
  4  next sysdate+1/1440
  5  as
  6  select sales_amt, sales_dtt from sales;

Materialized view created.

So above we have created a materialized view based on the sales table, which will completely refresh
itself every minute (sysdate+1/1440 is one minute, i.e. 1/1440th of a day).

SQL> select mview_name, refresh_method, refresh_mode, build_mode,
fast_refreshable from user_mviews
where mview_name = 'SALES_MV';

MVIEW_NAME         REFRESH_  REFRESH_MODE BUILD_MOD FAST_REFRESHABLE
-------------------- -------- ------------ --------- ----------------
SALES_MV             COMPLETE  DEMAND       IMMEDIATE               NO

Materialized views can also be refreshed ON DEMAND or ON COMMIT. Since I did not mention
either of these clauses, the default, on demand, applies, as seen above in the REFRESH_MODE column.
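
To make the choice explicit, the same view could have been declared with ON DEMAND spelled out, as in the sketch below; ON COMMIT, by contrast, refreshes the view as part of every commit against the master table (and is not allowed against remote masters).

SQL> create materialized view sales_mv
  2  refresh complete on demand
  3  as
  4  select sales_amt, sales_dtt from sales;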

If you query the USER_OBJECTS view you can see that several objects have been created.

SQL> col object_name format a20
SQL> select object_name, object_type from user_objects
           where object_name like 'SALES%' order by object_name;

OBJECT_NAME          OBJECT_TYPE
-------------------- -------------------
SALES                TABLE
SALES_MV             MATERIALIZED VIEW
SALES_MV             TABLE
SALES_PK             INDEX

The materialized view is basically a logical container that stores data in a regular table.

If you query the USER_SEGMENTS view you will find the base table, its primary key index, and the table
that stores the data returned by the materialized view.

SQL> select segment_name,segment_type from user_segments
  2  where segment_name like 'SALES%'
  3  order by segment_name;

SEGMENT_NAME    SEGMENT_TYPE
------------    ---------------
SALES           TABLE
SALES_MV        TABLE
SALES_PK        INDEX


Now let's look at the data already in the materialized view; after that we will add some more.

SQL> select sales_amt, to_char(sales_dtt,'dd-mon-yyyy') from sales_mv;


 SALES_AMT TO_CHAR(SAL
---------- -----------
       101 22-nov-2011
       511 22-dec-2011
        11 11-jan-2012


SQL> insert into sales values(4,99,200,sysdate);

1 row created.


SQL>insert into sales values(5,127,300,sysdate);

1 row created.

SQL> commit;

Commit complete.


After one minute the materialized view will get updated.

SQL> select sales_amt, to_char(sales_dtt,'dd-mon-yyyy') from sales_mv;

 SALES_AMT TO_CHAR(SAL
---------- -----------
       101 22-nov-2011
       511 22-dec-2011
        11 11-jan-2012
        99 11-jan-2012
       127 11-jan-2012

If you have set a longer refresh interval and you do not want to wait that long, you can trigger the
refresh manually with the following command.


SQL> exec dbms_mview.refresh('SALES_MV','C');

PL/SQL procedure successfully completed.

# Where "C" stands for complete refresh.
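
dbms_mview.refresh also accepts other method codes: 'F' requests a fast refresh and '?' a force refresh (fast if possible, otherwise complete). For example:

SQL> exec dbms_mview.refresh('SALES_MV','?');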


So the whole process can be summarized as follows.

1) User or application creates transactions.
2) Base table is updated because of the transactions.
3) A complete refresh occurs automatically or is run manually, and the data in the materialized view is deleted
and completely reloaded from the master table (SALES).
4) The user or application can query the materialized view, which contains a point-in-time snapshot
of the base table's data.

FAST REFRESH MATERIALIZED VIEW

Fast refreshable materialized views work a little bit differently. When a fast refresh materialized 
view is created it initially populates the materialized view table with data from the base or master table.

After the initial data is populated, only modified data is applied to the materialized view table on
each refresh, instead of the full reload performed by a complete-refresh materialized view.

Three basic steps are required to create a fast refresh materialized view.

1) Create a base or master table if it does not exist.
2) Create a Materialized view log on the base table.
3) Create a fast refresh materialized view.

Since I have already created a materialized view and base table, I am going to drop them and make
a fresh start.

SQL> drop materialized view sales_mv;

Materialized view dropped.

SQL> drop table sales purge;

Table dropped.


SQL> create table sales(
  2  sales_id int,
  3  sales_amt int,
  4  region_id int,
  5  sales_dtt timestamp
  6 );


SQL> alter table sales add constraint sales_pk primary key(sales_id);

Table altered.


SQL> desc sales;

 Name                 Null?    Type
 ------------------ -------- ---------------------
 SALES_ID           NOT NULL NUMBER(38)
 SALES_AMT                   NUMBER(38)
 REGION_ID                   NUMBER(38)
 SALES_DTT                   TIMESTAMP(6)



SQL> insert into sales values(1,101,100,sysdate-50);

1 row created.

SQL> insert into sales values(2,511,200,sysdate-20);

1 row created.

SQL> insert into sales values(3,11,100,sysdate);

1 row created.

SQL> commit;

Commit complete.

SQL> 

When creating a fast refreshable materialized view a materialized view log is required.

The reason behind this is that the log keeps track of all the changes made to the master table, so
when the materialized view is refreshed only the changed data is applied.

It is somewhat similar to the "block change tracking file" feature in RMAN.

Furthermore, a materialized view log can be based on the primary key or on ROWID.

If the master table has a primary key then the PRIMARY KEY clause can be used; otherwise use ROWID.

Now let's create a materialized view log on the master table.

SQL> create materialized view log on sales with primary key;

Materialized view log created.

If your base table does not have a primary key then the following error will occur.

ORA-12014: table does not contain primary key constraint

In that case create the materialized view log based on ROWID.

SQL> create materialized view log on sales with rowid;


Also, when creating a materialized view you have to mention whether
the data is refreshed via PRIMARY KEY or ROWID.
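
For the ROWID variant, the view itself would then be declared WITH ROWID; a sketch mirroring the primary key statement below:

SQL> create materialized view sales_mv
     refresh
     with rowid
     fast
     next sysdate+3/1440
     as
     select sales_id, sales_amt, sales_dtt from sales;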

We are creating a materialized view based on primary key refresh.

NOTE: The primary key columns must be part of the MV select query from  the base table.


SQL> create materialized view sales_mv
     refresh
     with primary key
     fast
     next sysdate+3/1440
     as
     select sales_id, sales_amt, sales_dtt from sales;


Materialized view created.

Now let's query the USER_OBJECTS view.

SQL> select object_name, object_type from user_objects
  2  where object_name like '%SALES%'
  3  order by object_name;

OBJECT_NAME          OBJECT_TYPE
-------------------- -------------------
MLOG$_SALES          TABLE
RUPD$_SALES          TABLE
SALES                TABLE
SALES_MV             MATERIALIZED VIEW
SALES_MV             TABLE
SALES_PK             INDEX
SALES_PK1            INDEX

7 rows selected.


Explanation:

MLOG$_SALES = This table is created by the CREATE MATERIALIZED VIEW LOG statement.
              It records the rows that have changed in the base table.

RUPD$_SALES = This table is created when a materialized view
              uses the primary key for fast refresh. It is used
              to support updatable materialized views. Since we
              are creating read-only MVs here, you can ignore it.

SALES_PK1 = This index is automatically created and is based on the
            primary key columns of the base table.



SQL> select sales_amt, to_char(sales_dtt,'dd-mon-yyyy') from sales_mv;

 SALES_AMT TO_CHAR(SAL
---------- -----------
       101 23-nov-2011
       511 23-dec-2011
        11 12-jan-2012

Let's add some records.

SQL> insert into sales values(4,99,200,sysdate);

1 row created.

SQL> insert into sales values(5,127,300,sysdate);

1 row created.

SQL> commit;

Commit complete.


Now, before the refresh, the MLOG$_SALES table contains information about the two changes that
have been made to the base table.


SQL> select count(*) from mlog$_sales;

  COUNT(*)
----------
         2

Wait for three minutes or refresh the view manually.

SQL> exec dbms_mview.refresh('SALES_MV','F');

Lets check the records.

SQL> select sales_amt, to_char(sales_dtt,'dd-mon-yyyy') from sales_mv;

 SALES_AMT TO_CHAR(SAL
---------- -----------
       101 23-nov-2011
       511 23-dec-2011
        11 12-jan-2012
        99 12-jan-2012
       127 12-jan-2012


After the refresh is complete, the MLOG$_SALES table contains no records.

SQL> select count(*) from mlog$_sales;

  COUNT(*)
----------
         0

You can also check when the last refresh happened and which type it was.

 SQL> select mview_name, last_refresh_type, last_refresh_date
     from user_mviews;

MVIEW_NAME                     LAST_REF LAST_REFR
------------------------------ -------- ---------
SALES_MV                       FAST     12-JAN-12
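
If you are unsure whether a given query can be fast refreshed at all, DBMS_MVIEW.EXPLAIN_MVIEW will tell you. It writes its analysis into MV_CAPABILITIES_TABLE, a table created by first running the utlxmv.sql script; a sketch:

SQL> @?/rdbms/admin/utlxmv.sql
SQL> exec dbms_mview.explain_mview('SALES_MV');
SQL> select capability_name, possible, msgtxt
  2  from mv_capabilities_table
  3  where capability_name like 'REFRESH%';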

The whole process above can be summarized as follows:

1) User or application creates transactions.
2) Data is committed in the base table.
3) The MV log table is populated with the changes.
4) A fast refresh occurs automatically or manually.
5) All the changes that have been made since the last refresh
   are applied to the materialized view, and rows that are no
   longer required are deleted from the MV log table.
6) The users can query the materialized view, which contains a
   point-in-time snapshot of the master table's data.
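
The data dictionary also describes the log itself; a quick way to see how a log tracks changes (a sketch):

SQL> select master, log_table, primary_key, rowids
  2  from user_mview_logs;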



Materialized View Log - When DML changes are made to master table data, Oracle Database stores rows describing those changes in the materialized view log and then uses the materialized view log to refresh materialized views based on the master table. This process is called incremental or fast refresh. Without a materialized view log, Oracle Database must reexecute the materialized view query to refresh the materialized view. This process is called a complete refresh. Usually, a fast refresh takes less time than a complete refresh.

Growing Materialized View (Snapshot) Logs

If you have a fast refresh materialized view that hasn't been refreshed for a considerable period of time, you will find that the materialized view log ("Snapshot Log" in earlier versions) keeps growing, because it patiently waits for the MV to come around for its next refresh. You can end up with very large MV logs, and with performance issues on inserts into the base tables, as each insert also has to update the MV log. Furthermore, other MVs that do refresh from the same base table may also suffer, because they have to contend with the larger MV log.

Here is a demonstration with two users, "ANOTHER_USR" and "LAZY_USR", each having created their own materialized view against my source table. If LAZY_USR does not execute a refresh of his MV, the snapshot log on my source table does not get purged.


23:39:21 SQL> --- create two users that will be creating MVs


23:39:21 SQL> create user another_usr identified by another_usr default 
tablespace users;

User created.

23:39:21 SQL> grant create session, create table, create materialized view 
to another_usr;

Grant succeeded.

23:39:21 SQL> alter user another_usr quota unlimited on users;

User altered.

23:39:22 SQL> create user lazy_usr identified by lazy_usr default tablespace 
users;

User created.

23:39:22 SQL> grant create session, create table, create materialized view to 
lazy_usr;

Grant succeeded.

23:39:22 SQL> alter user lazy_usr quota unlimited on users;

User altered.
 
23:39:22 SQL> --- create the source table and mv log on it

23:39:22 SQL> create table my_data_table
23:39:22   2  as select object_id as ID_Number, object_name as Data_Item, 
created as Crtn_date
23:39:22   3  from dba_objects where object_id is not null;

Table created.

23:39:22 SQL> alter table my_data_table modify (ID_Number not null);

Table altered.

23:39:22 SQL> alter table my_data_table add constraint my_data_table_pk 
primary key (ID_Number);

Table altered.

 
23:39:22 SQL> create materialized view log on my_data_table ;

Materialized view log created.

23:39:22 SQL> grant select on my_data_table to another_usr;

Grant succeeded.

23:39:22 SQL> grant select on mlog$_my_data_table to another_usr;

Grant succeeded.

23:39:22 SQL> grant select on my_data_table to lazy_usr;

Grant succeeded.

23:39:22 SQL> grant select on mlog$_my_data_table to lazy_usr;

Grant succeeded.


23:39:22 SQL> --- create the mv in the another_usr account
23:39:22 SQL> connect another_usr/another_usr
Connected.
23:39:22 SQL> create materialized view my_mv
23:39:22   2  refresh fast on demand
23:39:22   3  as
23:39:22   4  select rowid as row_identifier, id_number, data_item
23:39:22   5  from hemant.my_data_table
23:39:22   6  /

Materialized view created.

23:39:22 SQL> 
23:39:22 SQL> --- query the data dictionary
23:39:22 SQL> connect hemant/hemant
Connected.
23:39:22 SQL> col owner format a12
23:39:22 SQL> col name format a12
23:39:22 SQL> col snapshot_site format a12
23:39:22 SQL> col refresh_method format a12
23:39:22 SQL> select name from v$database;

NAME                                                                            
------------                                                                    
ORCL                                                                            

23:39:22 SQL> select v.owner, v.name, v.snapshot_site, v.refresh_method,
23:39:22   2        to_char(l.current_snapshots,'DD-MON HH24:MI:SS')
23:39:22   3  from dba_registered_snapshots v, dba_snapshot_logs l
23:39:22   4  where v.snapshot_id = l.snapshot_id
23:39:22   5  and l.log_owner = 'HEMANT'
23:39:22   6  and l.master = 'MY_DATA_TABLE'
23:39:22   7  order by 1,2
23:39:22   8  /

OWNER        NAME         SNAPSHOT_SIT REFRESH_METH TO_CHAR(L.CURRENT_SNAPSH    
------------ ------------ ------------ ------------ ------------------------    
ANOTHER_USR  MY_MV        ORCL         PRIMARY KEY  15-JAN 23:39:23             

23:39:22 SQL> -- insert into the source table
23:39:22 SQL> insert into my_data_table values (1000000,'A DUMMY',sysdate);

1 row created.

23:39:22 SQL> commit;

Commit complete.
 
23:39:22 SQL> select count(*) from mlog$_my_data_table;

  COUNT(*)                                                                      
----------                                                                      
         1                                                                      

23:39:22 SQL> exec dbms_lock.sleep(30);

PL/SQL procedure successfully completed.

23:39:52 SQL> --- check if the MV has been updated ?!
23:39:52 SQL> select v.owner, v.name, v.snapshot_site, v.refresh_method,
23:39:52   2        to_char(l.current_snapshots,'DD-MON HH24:MI:SS')
23:39:52   3  from dba_registered_snapshots v, dba_snapshot_logs l
23:39:52   4  where v.snapshot_id = l.snapshot_id
23:39:52   5  and l.log_owner = 'HEMANT'
23:39:52   6  and l.master = 'MY_DATA_TABLE'
23:39:52   7  order by 1,2
23:39:52   8  /

OWNER        NAME         SNAPSHOT_SIT REFRESH_METH TO_CHAR(L.CURRENT_SNAPSH    
------------ ------------ ------------ ------------ ------------------------    
ANOTHER_USR  MY_MV        ORCL         PRIMARY KEY  15-JAN 23:39:23             

 
23:39:52 SQL> exec dbms_lock.sleep(10);

PL/SQL procedure successfully completed.

23:40:02 SQL> 
23:40:02 SQL> -- refresh the MV
23:40:02 SQL> connect another_usr/another_usr
Connected.
23:40:03 SQL> exec dbms_mview.refresh('MY_MV');

PL/SQL procedure successfully completed.
 
23:40:03 SQL> --- re-query the data dictionary
23:40:03 SQL> connect hemant/hemant
Connected.
23:40:03 SQL> select v.owner, v.name, v.snapshot_site, v.refresh_method,
23:40:03   2        to_char(l.current_snapshots,'DD-MON HH24:MI:SS')
23:40:03   3  from dba_registered_snapshots v, dba_snapshot_logs l
23:40:03   4  where v.snapshot_id = l.snapshot_id
23:40:03   5  and l.log_owner = 'HEMANT'
23:40:03   6  and l.master = 'MY_DATA_TABLE'
23:40:03   7  order by 1,2
23:40:03   8  /

OWNER        NAME         SNAPSHOT_SIT REFRESH_METH TO_CHAR(L.CURRENT_SNAPSH    
------------ ------------ ------------ ------------ ------------------------    
ANOTHER_USR  MY_MV        ORCL         PRIMARY KEY  15-JAN 23:40:03             

23:40:03 SQL> select count(*) from mlog$_my_data_table;

  COUNT(*)                                                                      
----------                                                                      
         0                                                                      
 
23:40:03 SQL> --- create another mv in the lazy_usr account
23:40:03 SQL> connect lazy_usr/lazy_usr
Connected.
23:40:03 SQL> create materialized view lazy_mv
23:40:03   2  refresh fast on demand
23:40:03   3  as
23:40:03   4  select rowid as row_identifier, id_number, data_item
23:40:03   5  from hemant.my_data_table
23:40:03   6  /

Materialized view created.

 
23:40:03 SQL> --- insert new data
23:40:03 SQL> connect hemant/hemant
Connected.
23:40:03 SQL> insert into my_data_table values (2000000,'Another dummy',sysdate);

1 row created.

23:40:03 SQL> commit;

Commit complete.

23:40:03 SQL> select count(*) from mlog$_my_data_table;

  COUNT(*)                                                                      
----------                                                                      
         1                                                                      


23:40:03 SQL> exec dbms_lock.sleep(30);

PL/SQL procedure successfully completed.

23:40:33 SQL> -- refresh another_usr's mv only
23:40:33 SQL> connect another_usr/another_usr
Connected.
23:40:33 SQL> exec dbms_mview.refresh('MY_MV');

PL/SQL procedure successfully completed.


23:40:34 SQL> --- requery the data dictionary
23:40:34 SQL> connect hemant/hemant
Connected.
23:40:34 SQL> select v.owner, v.name, v.snapshot_site, v.refresh_method,
23:40:34   2        to_char(l.current_snapshots,'DD-MON HH24:MI:SS')
23:40:34   3  from dba_registered_snapshots v, dba_snapshot_logs l
23:40:34   4  where v.snapshot_id = l.snapshot_id
23:40:34   5  and l.log_owner = 'HEMANT'
23:40:34   6  and l.master = 'MY_DATA_TABLE'
23:40:34   7  order by 1,2
23:40:34   8  /

OWNER        NAME         SNAPSHOT_SIT REFRESH_METH TO_CHAR(L.CURRENT_SNAPSH    
------------ ------------ ------------ ------------ ------------------------    
ANOTHER_USR  MY_MV        ORCL         PRIMARY KEY  15-JAN 23:40:34             
LAZY_USR     LAZY_MV      ORCL         PRIMARY KEY  15-JAN 23:40:04             

23:40:34 SQL> select count(*) from mlog$_my_data_table;

  COUNT(*)                                                                      
----------                                                                      
         1                                                                      

 My data dictionary query, at the first execution at 23:39:22, showed 
that ANOTHER_USR had an MV called "MY_MV" against my table 
MY_DATA_TABLE.
  Inserting a single row in MY_DATA_TABLE caused a corresponding row to 
be inserted into the MV Log MLOG$_MY_DATA_TABLE.  However, as 
ANOTHER_USR had not refreshed his MV even as at 23:39:52, the Data 
Dictionary showed that the MV was still as of 23:39:23.
After ANOTHER_USR did refresh his MV (at 23:40:03), the MV Log MLOG$_MY_DATA_TABLE was purged of the record that was used to track the one row DML against MY_DATA_TABLE.

However, when LAZY_USR created an MV but did not refresh the MV, even as at 23:40:34, the one row in MLOG$_MY_DATA_TABLE  was not purged.  Until LAZY_USR executes a Refresh of his MV,  MLOG$_MY_DATA_TABLE will keep growing, even though ANOTHER_USR might be diligently refreshing his MV.
Over time, I will find MLOG$_MY_DATA_TABLE to have grown "excessively large!".

The solution is to identify LAZY_USR (which the query against the Data Dictionary does do) and then:
1. (optional) Drop LAZY_USR's MV "LAZY_MV"
2. Purge MLOG$_MY_DATA_TABLE (using either DBMS_MVIEW.PURGE_LOG or a TRUNCATE; see the sketch after this list)
3. Re-create or refresh LAZY_USR's MV "LAZY_MV"
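
A minimal sketch of step 2: with its default arguments, DBMS_MVIEW.PURGE_LOG deletes the rows needed only by the least recently refreshed MV, whereas TRUNCATE is the blunt instrument (any MV that has not yet consumed those rows will then need a complete refresh).

SQL> exec dbms_mview.purge_log('MY_DATA_TABLE');

SQL> -- or, bluntly:
SQL> truncate table mlog$_my_data_table;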

I have seen situations where LAZY_USR is an account in a "remote" database managed by another application / development / support / DBA team that doesn't bother to inform HEMANT that they are no longer refreshing the MV, even as HEMANT's schema continues to grow only to hold entries in MLOG$_MY_DATA_TABLE.
The example I have posted is of 1 source table and 1 row DML.  In real-life you may have a dozen such tables and tens of thousands of rows before you notice this (and you would notice it only if you are diligently monitoring all the schemas in your database).

See Note #1031924.6 on MetaLink.

Note that even if you do drop the non-refreshed MV as advised in the Note, it would still be preferable, if possible, to drop the Snapshot Logs on the base tables -- because you would want to rebuild the Snapshot Log afresh as a small log -- and rebuild other MVs that are still in use. (I say "if possible" because a complete rebuild of some MVs might also take too long to be acceptable "on-line"). 


 

Tuesday, 17 January 2012

Applying Patch Set Update or Critical Patch Update

What is a patch?
A patch is a program or set of instructions used to fix a particular problem or to
enhance/add a particular feature in an existing product/program/software.

Applying the patch
For this test I'll be applying the 10.2.0.4.4 PSU to my test environment, which includes a physical standby.
Patch 9352164 – 10.2.0.4.4 Patch Set Update
My current environment:
[PRIMARY]
[oracle@lnxdg01 ~]$ $ORACLE_HOME/OPatch/opatch lsinventory
Invoking OPatch 10.2.0.5.1

Oracle Interim Patch Installer version 10.2.0.5.1
Copyright (c) 2010, Oracle Corporation.  All rights reserved.

Oracle Home       : /opt/app/oracle/product/10.2.0/db_1
Central Inventory : /opt/app/oracle/oraInventory
   from           : /etc/oraInst.loc
OPatch version    : 10.2.0.5.1
OUI version       : 10.2.0.4.0
OUI location      : /opt/app/oracle/product/10.2.0/db_1/oui
Log file location : /opt/app/oracle/product/10.2.0/db_1/cfgtoollogs/opatch/
opatch2011-10-31_23-49-52PM.log

Patch history file: /opt/app/oracle/product/10.2.0/db_1/cfgtoollogs/opatch/
opatch_history.txt

Lsinventory Output file location : /opt/app/oracle/product/10.2.0/db_1/
cfgtoollogs/opatch/lsinv/lsinventory2011-10-31_23-49-52PM.txt

--------------------------------------------------------------------------
Installed Top-level Products (2): 

Oracle Database 10g                                          10.2.0.1.0
Oracle Database 10g Release 2 Patch Set 3                    10.2.0.4.0
There are 2 products installed in this Oracle Home.
There are no Interim patches installed in this Oracle Home.

--------------------------------------------------------------------------
OPatch succeeded.
[oracle@lnxdg01 ~]$ 
 
[STANDBY]
[oracle@lnxdg02 installer]$ $ORACLE_HOME/OPatch/opatch lsinventory
Invoking OPatch 10.2.0.5.1

Oracle Interim Patch Installer version 10.2.0.5.1
Copyright (c) 2010, Oracle Corporation. All rights reserved.
Oracle Home       : /opt/app/oracle/product/10.2.0/db_1
Central Inventory : /opt/app/oracle/oraInventory
   from           : /etc/oraInst.loc
OPatch version    : 10.2.0.5.1
OUI version       : 10.2.0.4.0
OUI location      : /opt/app/oracle/product/10.2.0/db_1/oui
Log file location : /opt/app/oracle/product/10.2.0/db_1/cfgtoollogs/opatch/
opatch2011-10-31_23-50-37PM.log

Patch history file: /opt/app/oracle/product/10.2.0/db_1/cfgtoollogs/opatch/
opatch_history.txt

Lsinventory Output file location : /opt/app/oracle/product/10.2.0/db_1/
cfgtoollogs/opatch/lsinv/lsinventory2011-10-31_23-50-37PM.txt

--------------------------------------------------------------------------
Installed Top-level Products (2):

Oracle Database 10g                                          10.2.0.1.0
Oracle Database 10g Release 2 Patch Set 3                    10.2.0.4.0
There are 2 products installed in this Oracle Home.
There are no Interim patches installed in this Oracle Home.

--------------------------------------------------------------------------
OPatch succeeded.

Shut down both the primary and standby databases and their listeners.
[oracle@lnxdg01 installer]$ ps -ef | grep pmon
oracle   16683 16832  0 00:14 pts/1    00:00:00 grep pmon
[oracle@lnxdg01 installer]$ ps -ef | grep lsnrctl
oracle   16685 16832  0 00:14 pts/1    00:00:00 grep lsnrctl

[oracle@lnxdg02 installer]$ ps -ef | grep pmon
oracle   16048  3311  1 00:14 pts/1    00:00:00 grep pmon
[oracle@lnxdg02 installer]$ ps -ef | grep lsnrctl
oracle   16051  3311  0 00:15 pts/1    00:00:00 grep lsnrctl 
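
The ps checks above confirm nothing is left running; the shutdown itself would have been along these lines on each host (a sketch):

SQL> shutdown immediate
[oracle@lnxdg01 ~]$ lsnrctl stop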
 
Download the patch from My Oracle Support:
Log on to My Oracle Support.
Click on Patches & Updates.
In the search field, enter the patch number and click Search.
In the results page the patch number and patch name will be displayed; click on it,
then select the platform and click Download.

unzip p9352164_10204_Linux-x86-64.zip

Use the command below to check for conflicts against the ORACLE_HOME, to avoid
running into problems:
[oracle@lnxdg01 installer]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ./9352164

Invoking OPatch 10.2.0.5.1

Oracle Interim Patch Installer version 10.2.0.5.1
Copyright (c) 2010, Oracle Corporation. All rights reserved.

PREREQ session

Oracle Home       : /opt/app/oracle/product/10.2.0/db_1
Central Inventory : /opt/app/oracle/oraInventory
   from           : /etc/oraInst.loc
OPatch version    : 10.2.0.5.1
OUI version       : 10.2.0.4.0
OUI location      : /opt/app/oracle/product/10.2.0/db_1/oui
Log file location : /opt/app/oracle/product/10.2.0/db_1/cfgtoollogs/opatch/
opatch2011-11-01_00-19-39AM.log
Patch history file: /opt/app/oracle/product/10.2.0/db_1/cfgtoollogs/opatch/opatch_history.txt

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.

Apply patch

[oracle@lnxdg01 9352164]$ $ORACLE_BASE/OPatch/opatch apply
Invoking OPatch 10.2.0.5.1

Oracle Interim Patch Installer version 10.2.0.5.1
Copyright (c) 2010, Oracle Corporation.  All rights reserved.

Oracle Home       : /opt/app/oracle/product/10.2.0/db_1
Central Inventory : /opt/app/oracle/oraInventory
   from           : /etc/oraInst.loc
OPatch version    : 10.2.0.5.1
OUI version       : 10.2.0.4.0
OUI location      : /opt/app/oracle/product/10.2.0/db_1/oui
Log file location : /opt/app/oracle/product/10.2.0/db_1/cfgtoollogs/opatch/
opatch2011-11-01_00-46-26AM.log

Patch history file: /opt/app/oracle/product/10.2.0/db_1/cfgtoollogs/opatch/
opatch_history.txt

ApplySession applying interim patch '9352164' to OH '/opt/app/oracle/
product/10.2.0/db_1'

Running prerequisite checks...
Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name:          

You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]:  Y

OPatch detected non-cluster Oracle Home from the inventory and will patch 
the local system only.

Please shutdown Oracle instances running out of this ORACLE_HOME on the 
local system.
(Oracle Home = '/opt/app/oracle/product/10.2.0/db_1')

Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files and inventory (not for auto-rollback) for the Oracle Home
Backing up files affected by the patch '9352164' for restore. This might take 
a while...
Backing up files affected by the patch '9352164' for rollback. This might 
take a while...
Execution of 'sh /u01/installer/9352164/custom/scripts/pre -apply 9352164':

Return Code = 0

Patching component oracle.rdbms.rsf, 10.2.0.4.0...
Updating archive file "/opt/app/oracle/product/10.2.0/db_1/lib/
libgeneric10.a" 
with "lib/libgeneric10.a/qcodfdef.o"
.
.
.
Updating archive file "/opt/app/oracle/product/10.2.0/db_1/lib32/
libnro10.a" 
with "lib32/libnro10.a/ncrfgp.o"

Patching component oracle.network.listener, 10.2.0.4.0...

Patching component oracle.ctx, 10.2.0.4.0...
Copying file to "/opt/app/oracle/product/10.2.0/db_1/ctx/admin/driutl.plb"

Patching component oracle.sdo.locator, 10.2.0.4.0...
Copying file to "/opt/app/oracle/product/10.2.0/db_1/md/admin/locdbmig.sql"

Patching component oracle.rdbms.plsql, 10.2.0.4.0...

Patching component oracle.sqlplus, 10.2.0.4.0...
Updating archive file "/opt/app/oracle/product/10.2.0/db_1/lib/
libsqlplus.a" 
with "lib/libsqlplus.a/aficon.o"
Updating archive file "/opt/app/oracle/product/10.2.0/db_1/lib/
libsqlplus.a" 
with "lib/libsqlplus.a/aficmx.o"
Updating archive file "/opt/app/oracle/product/10.2.0/db_1/lib32/
libsqlplus.a" 
with "lib32/libsqlplus.a/aficon.o"
Updating archive file "/opt/app/oracle/product/10.2.0/db_1/lib32/
libsqlplus.a" 
with "lib32/libsqlplus.a/aficmx.o"
Running make for target iextjob
Running make for target iextjobo
Running make for target client_sharedlib
Running make for target idgmgrl
Running make for target ioracle
Running make for target client_sharedlib
Running make for target itnslsnr
Running make for target iwrap
Running make for target genplusso
ApplySession adding interim patch '9352164' to inventory

Verifying the update...
Inventory check OK: Patch ID 9352164 is registered in Oracle Home inventory 
with proper meta-data.
Files check OK: Files from Patch ID 9352164 are present in Oracle Home.

--------------------------------------------------------------------------
**************************************************************************
**************************************************************************
**                                ATTENTION                               
**                                                                           
** Please note that the Patch Set Update Installation (PSU Deinstallation)    
** is not complete until all the Post Installation (Post Deinstallation)      
** instructions noted in the Readme accompanying this PSU, have been          
** successfully completed.                                                    
**                                                                            
***************************************************************************
**************************************************************************

--------------------------------------------------------------------------
Execution of 'sh /u01/installer/9352164/custom/scripts/post -apply 9352164':

Return Code = 0

The local system has been patched and can be restarted.

OPatch succeeded. 
 
Post Installation
The following steps load modified SQL files into the database. For a RAC environment, perform these steps on only one node.
For each database instance running on the Oracle home being patched, connect to the database using SQL*Plus. Connect as SYSDBA and run the catbundle.sql script as follows:
cd $ORACLE_HOME/rdbms/admin
SQL> startup

ORACLE instance started.

Total System Global Area  268435456 bytes
Fixed Size      2083304 bytes
Variable Size    205522456 bytes
Database Buffers    54525952 bytes
Redo Buffers      6303744 bytes
Database mounted.
Database opened.
SQL> @catbundle.sql psu apply

PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed

Generating apply and rollback scripts...
Check the following file for errors:
/opt/app/oracle/product/10.2.0/db_1/cfgtoollogs/catbundle/catbundle
_PSU_TEST01
_GENERATE_2011Nov01_01_08_46.log
Apply script: /opt/app/oracle/product/10.2.0/db_1/rdbms/admin/catbundle
_PSU_TEST01_APPLY.sql
Rollback script: /opt/app/oracle/product/10.2.0/db_1/rdbms/admin/
catbundle_PSU_TEST01_ROLLBACK.sql

PL/SQL procedure successfully completed.

Executing script file...
.
.
.
SQL> SPOOL off
SQL> SET echo off
Check the following log file for errors:
/opt/app/oracle/product/10.2.0/db_1/cfgtoollogs/catbundle/
catbundle_PSU_TEST01_APPLY_2011Nov01_01_08_56.log

Execute the utlrp script:
SQL>  @utlrp 
 
For information about the catbundle.sql script, see My Oracle Support Note 605795.1, "Introduction to Oracle Database catbundle.sql".
Check the log files in $ORACLE_HOME/cfgtoollogs/catbundle for any errors.

Checking the inventory again:

[oracle@lnxdg01 admin]$ $ORACLE_BASE/OPatch/opatch lsinventory

Invoking OPatch 10.2.0.5.1

Oracle Interim Patch Installer version 10.2.0.5.1
Copyright (c) 2010, Oracle Corporation. All rights reserved.

Oracle Home       : /opt/app/oracle/product/10.2.0/db_1
Central Inventory : /opt/app/oracle/oraInventory
   from           : /etc/oraInst.loc
OPatch version    : 10.2.0.5.1
OUI version       : 10.2.0.4.0
OUI location      : /opt/app/oracle/product/10.2.0/db_1/oui
Log file location : /opt/app/oracle/product/10.2.0/db_1/cfgtoollogs/opatch/
opatch2011-11-01_01-21-42AM.log
Patch history file: /opt/app/oracle/product/10.2.0/db_1/cfgtoollogs/opatch/opatch_history.txt
Lsinventory Output file location : /opt/app/oracle/product/10.2.0/db_1/
cfgtoollogs/opatch/lsinv/lsinventory2011-11-01_01-21-42AM.txt

--------------------------------------------------------------------------
Installed Top-level Products (2):

Oracle Database 10g                                          10.2.0.1.0
Oracle Database 10g Release 2 Patch Set 3                    10.2.0.4.0
There are 2 products installed in this Oracle Home.

Interim patches (1) :

Patch 9352164 : applied on Tue Nov 01 00:57:11 SGT 2011
Unique Patch ID: 12307477
Created on 1 Apr 2010, 11:33:14 hrs PST8PDT

Bugs fixed:

9377578, 6418420, 7835247, 7207654, 7592346, 6724797, 7936993, 7331867 9093300, 7535152, 7552067, 5879114, 8866013, 5457450, 8344348, 7272297 7136866, 7196894, 6258480, 7013124, 6512622, 6355586, 7196532, 8568395 8309587, 7557226, 5399699, 6509115, 8568397, 8568398, 7127618, 5701695 6052226, 7424804, 6817593, 7553884, 6741425, 7513673, 6783812, 8437213 6452766, 6469211, 7527650, 8309592, 8479537, 5991038, 5686407, 6945157 7639602, 9119226, 6403091, 7589862, 7552082, 6711853, 8304589, 6052169 8199266, 6327692, 5756769, 9352191, 7460818, 6268409, 8232056, 6687381 6972843, 8230457, 6800507, 7027551, 6778714, 6200820, 6645719, 7393804 6775231, 3934160, 6683178, 6650256, 5085288, 7528105, 7329252, 6378112 6151380, 6844866, 4723109, 8544896, 5126719, 5890312, 5934363, 7036453 7610362, 8426816, 8433026, 7270434, 7172531, 6451626, 8247855, 5497338 6324944, 6874522, 7175513, 6960489, 7341598, 8576156, 6797677, 8342923 5895190, 6756739, 7150470, 7593835, 7356443, 7044551, 8227106, 4695511 7298688, 5747462, 8556340, 7197445, 5348308, 7937113, 8341623, 7569205 8409848, 6053134, 6163771, 6851669, 6181488, 6375150, 6825866, 7210921 7295780, 6345573, 7033630, 6954722, 6824129, 7523475, 7457766, 7309458 8324577, 6840740, 6804746, 7375611, 8268054, 6981690, 6512811, 6988017 7375613, 8344399, 7340448, 8362683, 7375617, 8251247, 5933656, 6005347 9145204, 6599920, 7238230, 6379441, 6452375, 6352003, 6833965, 7136489 6610218, 7612639, 6392076, 7225204, 9119194, 5476236, 9442328, 7609057 7609058, 6605106, 6374297, 6193945, 4693355, 8217795, 7039896, 7432514 7330909, 6952701, 7190270, 8287155, 7587008, 7207932, 6802650, 7189447 8481935, 4598439, 9442331, 6615740, 7155655, 6749617, 9442335, 7159505 5868257, 5727166, 7173005, 6917874, 9442339, 7013768, 7691766, 7385253 7291739, 7225720, 7257770, 7363767, 7244238, 6941717, 8267348, 7710551 8354686, 7247217, 8328954, 7299153, 8909984, 6681695, 8702276, 9119284 8217011, 7661251, 6265559, 6823287, 6991626, 6954829, 5259835, 6500033 5923486, 7432601, 7022234, 8534387, 5147386, 7697802, 6653934, 7375644 6490140, 7662491, 8331466, 5623467, 6070225, 6635214, 7396409, 6638558 7038750, 6714608, 6838714, 6870937, 7219752, 7263842, 7278117, 6882739 5404871, 8836667, 8373286, 7393292, 6678845, 6903051, 7936793, 6600051 7155248, 4966512, 7155249, 7197637, 8836308, 8568402, 8568404, 8568405 8431487, 5704108, 6343150, 7280764, 6923450, 7643632, 6145177, 8836671 8310931, 6640411, 8347704, 8836675, 7155250, 7155251, 8836677, 7155252 8836678, 7155253, 8292378, 7155254, 6219529, 7411865, 8227091, 8340379 7276960, 6145687, 7659217, 5863926, 7022905, 6852598, 7123643, 6596564 6413089, 6851438, 8836681, 8836683, 8836684, 8836686, 7579469, 7494333 7315642, 8340383, 6786022, 8340387, 6926448, 7600026, 7462072, 6679303 8815639, 7197583, 7172752, 7326645, 7008262, 9173244, 9173248, 7573151 8490879, 7477934, 6725634, 6733655, 6799205, 6980597, 7499353, 6084232 6014513, 7140204, 7254987, 8833280, 6647480, 6120004, 7693128, 6760697 6051177, 8247215, 6858062, 7189645, 6844739, 6768251, 7196863, 5630796 7378661, 7378735, 5970301, 6705822, 8290506, 6658484, 7599944, 9173253 8309623, 7125408, 7257461, 6987790, 7568556, 6919819, 8886674, 5883691 6955744, 7801939, 6074620, 7149004, 6857917, 8283650, 6445948, 5929055 6110752, 7552042, 8210889, 8287504, 6506617, 7306915, 6271590, 5386204 6976005, 8330783, 7606362, 5377242, 7043989, 8309632, 7575925, 6870047 8309637, 5902053, 8309639, 7028176, 6827260, 7588384, 4726401, 6720712 5910650, 6752765, 6971433, 6024730, 8315482, 6628122, 8239142, 9352164 
5695562, 4637902, 7345904, 8309642, 6994160, 8556586, 6404447, 8220734 6919764, 7597354, 7523787, 6029179, 5231155, 6455659

--------------------------------------------------------------------------
OPatch succeeded.

Recompiling Views in the Database
Recompiling views in the database is a one-time action that is required to complete the installation of CPU fixes that were initially released in January, 2008. It requires that the database be in upgrade mode, so this step is not rolling RAC installable. You may defer this step to a later downtime. If you do this, your system will continue to work; however, the installation of the January 2008 CPU fixes will not be complete until the view recompilation is completed.
Skip this section if you have recompiled views for this database during the installation of a previous PSU or CPU.
The time required to recompile the views and related objects depends on the total number of objects and on your system configuration. In one internal Oracle test with approximately 2000 views and 4000 objects, the total execution time for view_recompile_jan2008cpu.sql and utlrp.sql was about 30 minutes.
If you want to check whether view recompilation has already been performed for the database, execute the following statement:

SELECT * FROM registry$history where ID = '6452863';
If the view recompilation has been performed, this statement returns
one or more rows. 
If the view recompilation has not been performed, this statement
returns no rows.
 
The following steps recompile the views in the database. For a RAC environment, perform these steps on only one node.
Run the pre-check script (so named because it was initially released in CPUJan2008), which reports the maximum number of views and objects that may be recompiled:
SQL> @recompile_precheck_jan2008cpu.sql

Running precheck.sql...

Number of views to be recompiled   : 2043
Number of objects to be recompiled : 4114

Please follow the README.txt instructions for running viewrecomp.sql

PL/SQL procedure successfully completed.
The purpose of this step is to help you determine whether view recompilation should be
done at the same time as the PSU install, or scheduled later.
 
Run the view recompilation script. Note that this script is run with the database in upgrade mode, which restricts connections as SYSDBA.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup upgrade;
ORACLE instance started.
Total System Global Area 268435456 bytes
Fixed Size 2083304 bytes
Variable Size 205522456 bytes
Database Buffers 54525952 bytes
Redo Buffers 6303744 bytes
Database mounted.
Database opened.
SQL> @view_recompile_jan2008cpu.sql
PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.

1 row created

Commit complete.

No. of Invalid Objects is :50
Please refer to README.html for instructions on validating these objects

PL/SQL procedure successfully completed.

Logfile for the current viewrecomp.sql session is : vcomp_TEST01_01Nov2011_01_44_49.log

SQL> @$ORACLE_HOME/rdbms/admin/utlrp

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN  2011-11-01 01:49:54

PL/SQL procedure successfully completed.

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END  2011-11-01 01:50:36

PL/SQL procedure successfully completed.

DOC> The following query reports the number of objects that have compiled
DOC> with errors (objects that compile with errors have status set to 3 in
DOC> obj$). If the number is higher than expected, please examine the error
DOC> messages reported with each object (using SHOW ERRORS) to see if they
DOC> point to system misconfiguration or resource constraints that must be
DOC> fixed before attempting to recompile these objects.
DOC>#

OBJECTS WITH ERRORS
-------------------
                  0

DOC> The following query reports the number of errors caught during
DOC> recompilation. If this number is non-zero, please query the error
DOC> messages in the table UTL_RECOMP_ERRORS to see if any of these errors
DOC> are due to misconfiguration or resource constraints that must be
DOC> fixed before objects can compile successfully.
DOC>#

ERRORS DURING RECOMPILATION
---------------------------
0

PL/SQL procedure successfully completed.

SQL>
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 268435456 bytes
Fixed Size 2083304 bytes
Variable Size 205522456 bytes
Database Buffers 54525952 bytes
Redo Buffers 6303744 bytes
Database mounted.
Database opened.
SQL>

Restart the database. Now start the database and listener on the standby side, assuming the patch has already been installed there as well.

SQL> startup nomount;
ORACLE instance started.

Total System Global Area  268435456 bytes
Fixed Size      2083304 bytes
Variable Size     92276248 bytes
Database Buffers   167772160 bytes
Redo Buffers      6303744 bytes
SQL> alter database mount standby database;

Database altered.

SQL> alter database recover managed standby database disconnect;

Database altered.

SQL>

Sunday, 15 January 2012

Claim free space using shrink space and shrink space compact

SHRINK SPACE COMPACT only compacts the rows within the segment, while SHRINK SPACE additionally moves the high water mark down and releases the freed extents back to the tablespace. The demonstration below makes the difference visible (show_space_table.sql is a helper script that prints segment space statistics).

SQL> @show_space_table.sql
Unformatted Blocks .....................             748
FS1 Blocks (0-25)  .....................               0
FS2 Blocks (25-50) .....................           7,742
FS3 Blocks (50-75) .....................             254
FS4 Blocks (75-100).....................              62
Full Blocks        .....................             271
Total Blocks............................           9,216
Total Bytes.............................      75,497,472
Total MBytes............................              72
Unused Blocks...........................               0
Unused Bytes............................               0
Last Used Ext FileId....................               4
Last Used Ext BlockId...................           2,688
Last Used Block.........................           1,024
 
PL/SQL procedure successfully completed.
SQL> select sum(bytes) from user_extents where segment_name='TEST_CHAINED_TABLE';
 
SUM(BYTES)
----------
  75497472
 
SQL> alter table test_chained_table shrink space compact;
alter table test_chained_table shrink space compact
*
ERROR at line 1:
ORA-10636: ROW MOVEMENT is not enabled
 
 
SQL> alter table test_chained_table enable row movement;
 
Table altered.
 
SQL> alter table test_chained_table shrink space compact;
 
Table altered.
 
SQL> select sum(bytes) from user_extents where segment_name='TEST_CHAINED_TABLE';
 
SUM(BYTES)
----------
  75497472
 
SQL> @show_space_table.sql
Unformatted Blocks .....................             748
FS1 Blocks (0-25)  .....................               0
FS2 Blocks (25-50) .....................               1
FS3 Blocks (50-75) .....................               0
FS4 Blocks (75-100).....................           3,169
Full Blocks        .....................           5,159
Total Blocks............................           9,216
Total Bytes.............................      75,497,472
Total MBytes............................              72
Unused Blocks...........................               0
Unused Bytes............................               0
Last Used Ext FileId....................               4
Last Used Ext BlockId...................           2,688
Last Used Block.........................           1,024
 
PL/SQL procedure successfully completed.
 
SQL> alter table test_chained_table shrink space;
 
Table altered.
 
SQL>  select sum(bytes) from user_extents where segment_name='TEST_CHAINED_TABLE';
 
SUM(BYTES)
----------
  43057152
 
SQL> @show_space_table.sql
Unformatted Blocks .....................               0
FS1 Blocks (0-25)  .....................               0
FS2 Blocks (25-50) .....................               1
FS3 Blocks (50-75) .....................               0
FS4 Blocks (75-100).....................               0
Full Blocks        .....................           5,159
Total Blocks............................           5,256
Total Bytes.............................      43,057,152
Total MBytes............................              41
Unused Blocks...........................               4
Unused Bytes............................          32,768
Last Used Ext FileId....................               4
Last Used Ext BlockId...................          11,648
Last Used Block.........................               4
 
PL/SQL procedure successfully completed.
 
SQL> alter table test_chained_table disable row movement;
 
Table altered.
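
Note that a plain table shrink does not touch dependent indexes; to compact those as well in the same pass, the CASCADE keyword can be used (a sketch; row movement must be re-enabled first):

SQL> alter table test_chained_table enable row movement;
SQL> alter table test_chained_table shrink space cascade;
SQL> alter table test_chained_table disable row movement;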

Data Guard lost archive logs; how to resynchronize?

One of my customers had a huge database (about 30 TB) with a disaster recovery solution based on a physical standby database using Data Guard. The version is 10.2.0.3. For some reason the log transport and apply services stopped. The archived logs created on the primary db filled up the available space, and to keep the db running the archived logs were deleted without having been shipped to the standby side. So Data Guard was no longer in sync with the primary side.
As we had lost archived logs, with no backup and never sent anywhere, what should I do?
Setting up Data Guard from scratch costs too much space and time for such a big db.
RMAN helps.
The action plan is:
1-Determine the last SCN on the standby db.
2-Stop log apply and transport services.
3-Back up the primary database incrementally, from the last SCN applied on the standby db.
4-Transfer the backup sets to the standby side.
5-Register the backup sets with the standby db.
6-Recover the standby db.
7-Create a new standby control file.
8-OPTIONAL - Transfer newly created files.
9-Re-start log apply and transport services.


The details, with the commands used:

1-Determine the last SCN on the standby db

PRIMARY
SQL> SELECT CURRENT_SCN FROM V$DATABASE;
CURRENT_SCN
-----------
3360225821

STANDBY
SQL> SELECT CURRENT_SCN FROM V$DATABASE;
CURRENT_SCN
-----------
3215410716


2-Stop log apply and transport services.


2.1 Stop redo shipping on the primary:
alter system set log_archive_dest_state_2 ='defer' scope=both ;
2.2 Stop redo apply on the standby:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

3-Back up the primary database incrementally, from the last SCN applied on the standby db.
--for a faster backup, try multiple channels
run {
allocate channel ch1 device type disk;
allocate channel ch2 device type disk;
allocate channel ch3 device type disk;
allocate channel ch4 device type disk;
allocate channel ch5 device type disk;
allocate channel ch6 device type disk;
BACKUP INCREMENTAL FROM SCN
3215410716 DATABASE FORMAT '/intl_migration/cdrdb/backup/tmpForStandby_%U' tag 'FORSTANDBY';
release channel ch1;
release channel ch2;
release channel ch3;
release channel ch4;
release channel ch5;
release channel ch6;
}
4-Transfer the backup sets to the standby side.

Because the incremental backup was 1 TB in size, I needed to spread it across different mount points.
Don't worry about keeping the pieces in different folders; we will register them.
SOURCE FOLDERS
/intl_migration/cdrdb/backup/

DEST FOLDERS
/medftp/backupDG
/app3/backupDG/


bin
prompt
lcd /intl_migration/cdrdb/backup/
cd /medftp/backupDG
mput tmpForStandby_rkk2lbe4_1_1 tmpForStandby_rlk2lbe5_1_1 tmpForStandby_rmk2lbe7_1_1 tmpForStandby_rnk2lbe9_1_1 tmpForStandby_rok2lbeb_1_1 tmpForStandby_rpk2lbed_1_1 tmpForStandby_rqk2m14c_1_1 tmpForStandby_rrk2m19j_1_1 tmpForStandby_rsk2m2bt_1_1 tmpForStandby_rtk2m2eu_1_1 tmpForStandby_ruk2m2km_1_1 tmpForStandby_rvk2m3m2_1_1

bin
prompt
lcd /intl_migration/cdrdb/backup/
cd /app3/backupDG/
mput tmpForStandby_s0k2mmu4_1_1 tmpForStandby_s1k2mn2d_1_1 tmpForStandby_s2k2mnlu_1_1 tmpForStandby_s3k2mnut_1_1 tmpForStandby_s4k2mob3_1_1 tmpForStandby_s5k2moee_1_1 tmpForStandby_s6k2nc22_1_1 tmpForStandby_s7k2ncda_1_1 tmpForStandby_s8k2nd6s_1_1 tmpForStandby_s9k2ne6n_1_1 tmpForStandby_sak2ne8i_1_1 tmpForStandby_sbk2nf3c_1_1 tmpForStandby_ssk2o28c_1_1

5-Register the backup sets with the standby db.
On the standby db:
rman target /
RMAN> CATALOG START WITH '/app3/backupDG/tmpForStandby';
RMAN> CATALOG START WITH '/medftp/backupDG/tmpForStandby';
6-Recover the standby db.
One important note: because this backup was taken only to synchronize the physical standby db, the NOREDO keyword is required.
See: http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmdupdb.htm#sthref955

RMAN>
run {
allocate channel ch1 device type disk;
allocate channel ch2 device type disk;
allocate channel ch3 device type disk;
allocate channel ch4 device type disk;
allocate channel ch5 device type disk;
allocate channel ch6 device type disk;
allocate channel ch7 device type disk;
allocate channel ch8 device type disk;

RECOVER DATABASE NOREDO;

release channel ch1;
release channel ch2;
release channel ch3;
release channel ch4;
release channel ch5;
release channel ch6;
release channel ch7;
release channel ch8;
}

7-Create a new standby control file.
Before re-starting the log apply service on the standby db, create a new standby controlfile on the primary db and copy it to the standby. Creating a new controlfile is my suggestion because, while logs were not being transferred and applied, changes affecting the controlfile may have been made, such as adding redo members, adding datafiles, adding new tablespaces, etc.

7-1 Shut down the standby db instance.
7-2 Create the new standby control file and move it to the standby side destinations (generally 3):
SQL> alter database create standby controlfile as '/tmp/stby.ctl'; --on primary db
scp /tmp/stby.ctl oracle@stdbyserver:/oradata/ctl<1,2,3>/ctl.dbf
7-3 Start the standby db in mount mode, and start the log apply service (Managed Recovery Process):
SQL> startup mount;

8-OPTIONAL - Transfer newly created files.
If new datafiles were added during the time that Data Guard was stopped, as happened to me, you need to copy the newly created files. They were not included in the incremental backup set, and they were not created on the standby because MRP was stopped.
8-1 Determine all datafiles from the database (remember we have just created a new controlfile, so both primary and standby have the same information):
SQL> spool '/tmp/hede.txt';
SQL> select 'file ' ,name from v$datafile;
# sh /tmp/hede.txt > fileSatus.txt
# cat fileSatus.txt | grep cannot
/oradata/file004.dbf : cannot open
/oradata/file005.dbf : cannot open
This means we have to copy these 2 files to the standby side.

8-2 After determining the missing datafiles, back them up as image copies on the primary db and copy them to the standby side:
BACKUP AS COPY DATAFILE '/oradata/file004.dbf' FORMAT '/tmp/file004.dbf' TAG stdbyImgCopy;
BACKUP AS COPY DATAFILE '/oradata/file005.dbf' FORMAT '/tmp/file005.dbf' TAG stdbyImgCopy;

scp /tmp/file004.dbf oracle@stdbyserver:/oradata/file004.dbf
scp /tmp/file005.dbf oracle@stdbyserver:/oradata/file005.dbf
9-Re-start log apply and transport services.

9.1 Start redo shipping on the primary:
alter system set log_archive_dest_state_2 ='enable' scope=both ;

9.2 Start redo apply on the standby:
SQL> startup mount;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

9.3 Check for any problems; you may encounter some. Check the alert.log and the status of the processes:
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
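
To confirm that the standby keeps catching up, you can also compare the highest shipped and applied log sequences (a sketch, run on the standby):

SQL> select applied, max(sequence#) from v$archived_log group by applied;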
After the success of this operation we were spared the time and space needed to re-establish the whole 30 TB database.
A similar workaround is documented on MetaLink for Oracle 9i: Doc ID 290817.1.

Tuesday, 10 January 2012

Reverse Key Index

Indexes are used to help Oracle retrieve data faster. But indexes have a drawback: if an index is created on a large table that undergoes massive inserts/updates, the index may suffer from contention. To get rid of this, Oracle provides a couple of solutions, and the reverse key index is one of them.

Real world case scenarios
Case 1: Suppose the table 'track_user' contains login-related information for a system. The primary key column 'ID' is populated by an increasing sequence, so every new index entry goes to the same blocks whenever a row is inserted into the table. This is how contention builds up!

As we all know, a primary key constraint is enforced on a column by creating a unique index on that column. So if we use a reverse key index in this case, the index entries will go to different blocks and contention will be reduced.

Case 2: You have a table with a column that is populated by an increasing sequence and that sometimes undergoes delete operations for old records. An index was created on that column; it is not used for range scans when you issue a select on that table, but it has contention issues on its index blocks.

To avoid the contention issue you can use Reverse key index as a solution.

What is a reverse key index?
The reverse key index was first introduced in Oracle 8. A reverse key index simply takes the index column values and reverses their bytes before inserting them into the index.

Uses

  1. One is in RAC environments. If you have a column populated by an increasing sequence, the new entries go to the same blocks as you insert rows. If many concurrent sessions insert rows from different RAC instances, you get contention for the same index blocks between nodes. If you use a reverse key index in this case, the new index entries will go to different blocks and contention will be reduced. For example, if you insert rows with keys 101, 102 and 103 into a table with a regular index, the rows are likely to be inserted into the same leaf block. In a reverse key index the keys in our example become 101, 201 and 301, and the rows are inserted into disparate parts of the index segment.
  2. In single-instance databases there is also a case where reverse key indexes can be helpful. If you have a column populated by an increasing sequence, you delete some old rows from the table, you do not do range scans on that column, and you have contention issues on index blocks, a reverse key index can be considered. The reverse key index will scatter the entries across different blocks during inserts, and your many concurrent sessions will not have index block contention issues.
  3. If you are deleting some old rows, the blocks of a normal index on that column will have some used and some empty space in them, but they will not be put on the freelist because they are not completely free. That empty space will never be reused, because the sequence values are always increasing and new entries never go back to those old blocks. With a reverse key index you will be able to use that space for different values.
 Disadvantages

One of the things to be careful about with reverse key indexes is that you cannot perform range scans on them. Because the entries are stored reversed, you lose the ability to range scan on that index; you can only perform fetch-by-key lookups or full index scans.
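You can see this limitation for yourself with a quick test (a sketch; the table t_rev and its index are hypothetical):

SQL> create table t_rev as select rownum id, object_name from all_objects;  -- hypothetical test table
SQL> create index t_rev_ind on t_rev(id) reverse;
SQL> explain plan for select * from t_rev where id = 100;
SQL> explain plan for select * from t_rev where id between 100 and 200;

The first query can fetch by key through the index; the second cannot use a range scan on the reversed entries and typically falls back to a full table scan.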

Create And Manage Reverse Key Index

You create a Reverse Key Index with the keyword REVERSE:

Create Index index_name on table_name (a,b,c) Reverse;

You can rebuild a Reverse Key Index into a regular index with the keyword NOREVERSE:

Alter Index index_name Rebuild Noreverse;

If you rebuild a Reverse Key Index without the keyword NOREVERSE, it is rebuilt as a Reverse Key Index:

Alter Index index_name Rebuild;

 I had a situation fitting the reasons to use a reverse key index. We have many concurrent programs that insert into the same table. The table has a primary key column populated by an increasing sequence. There are no range scans on that column. Data is deleted from time to time according to rules that leave some old data in the table. When these programs are running, statspack reports show high buffer busy waits for the index segment (more than 900,000 waits in a 30-minute period, causing 85% of all buffer busy waits). Also, as this database will soon be converted to a RAC database, this case seems very appropriate for a reverse key index on the related column.

Change an existing index into a reverse key index

Depending on your Oracle version, you may be able to rebuild a normal index directly into a reverse key index with ALTER INDEX ... REBUILD REVERSE; otherwise, drop the normal index and create the reverse key index.
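A sketch of both approaches (index and table names are illustrative):

-- drop and recreate
Drop Index index_name;
Create Index index_name on table_name (a,b,c) Reverse;

-- or, where supported, rebuild in place
Alter Index index_name Rebuild Reverse;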
Invisible Indexes

Trying to gather statistics on an invisible index (an 11g feature) using either gather_index_stats or gather_table_stats with the cascade option raises an error.

YAS@11G>create table t as select * from all_objects;

Table created.

YAS@11G>create index t_ind on t(object_name) invisible;

Index created.

USER_INDEXES has a new column named VISIBILITY to indicate whether the index is visible or not.
YAS@11G>select index_name,VISIBILITY from user_indexes where index_name='T_IND';

INDEX_NAME                     VISIBILITY
------------------------------ ---------
T_IND                          INVISIBLE
 
YAS@11G>exec dbms_stats.gather_index_stats(user,'T_IND');
BEGIN dbms_stats.gather_index_stats(user,'T_IND'); END;

*
ERROR at line 1:
ORA-00904: : invalid identifier
ORA-06512: at "SYS.DBMS_STATS", line 14037
ORA-06512: at "SYS.DBMS_STATS", line 14060
ORA-06512: at line 1

YAS@11G>exec dbms_stats.gather_table_stats(user,'T',cascade=>true);
BEGIN dbms_stats.gather_table_stats(user,'T',cascade=>true); END;

*
ERROR at line 1:
ORA-00904: : invalid identifier
ORA-06512: at "SYS.DBMS_STATS", line 17806
ORA-06512: at "SYS.DBMS_STATS", line 17827
ORA-06512: at line 1

YAS@11G>alter index t_ind visible;

Index altered.

YAS@11G>exec dbms_stats.gather_index_stats(user,'T_IND');

PL/SQL procedure successfully completed.

YAS@11G>exec dbms_stats.gather_table_stats(user,'T',cascade=>true);

PL/SQL procedure successfully completed.

I am not sure whether this is a bug or intended behavior, but we need a way to collect statistics for an invisible index; otherwise, how can we test whether it will be used? There is a way to collect statistics for an invisible index: set the parameter OPTIMIZER_USE_INVISIBLE_INDEXES.
YAS@11G>alter session set optimizer_use_invisible_indexes=true;

Session altered.

YAS@11G>exec dbms_stats.gather_index_stats(user,'T_IND');

PL/SQL procedure successfully completed.

YAS@11G>alter session set optimizer_use_invisible_indexes=false;

Session altered.

We can change the parameter OPTIMIZER_USE_INVISIBLE_INDEXES to see if the index will be used.
YAS@11G>alter session set optimizer_use_invisible_indexes=true;

Session altered.

YAS@11G>explain plan for select * from t where object_name='STANDARD';

Explained.

YAS@11G>select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 1376202287

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     2 |   202 |     4   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T     |     2 |   202 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T_IND |     2 |       |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

  2 - access("OBJECT_NAME"='STANDARD')

14 rows selected.

YAS@11G>alter session set optimizer_use_invisible_indexes=false;

Session altered.

If we make the index visible, we see that it is considered and used.
YAS@11G>alter index t_ind visible;

Index altered.

YAS@11G>explain plan for select * from t where object_name='STANDARD';

Explained.

YAS@11G>select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 1376202287

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     2 |   316 |     4   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T     |     2 |   316 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T_IND |     2 |       |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

  2 - access("OBJECT_NAME"='STANDARD')

Note
-----
  - dynamic sampling used for this statement

18 rows selected.

 


Monday, 9 January 2012

How to check the category of a Stored Outline

Stored outlines are an Oracle feature that helps preserve execution plan stability.
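For reference, an outline is created for a named category with CREATE OUTLINE (a minimal sketch; the outline name, category and query are illustrative):

SQL> create or replace outline emp_outline
  2  for category mycat
  3  on select * from emp where deptno = 10;

Outline created.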

The only way to determine whether outlines are being used, and under which category, is the oradebug utility.

SQL> oradebug setmypid
Statement processed.

SQL> oradebug dumpvar sga sgauso
qolprm sgauso_ [3C4C708, 3C4C72C) = 00000000 00000000 00000000 00000000 00000000 00000000 00000000 0

The above output indicates that no outlines are being used.

DUMPVAR
To dump an SGA variable, use:

ORADEBUG DUMPVAR SGA variable_name

e.g.

ORADEBUG DUMPVAR SGA sgauso

SQL> alter system set use_stored_outlines = true;

System altered.

SQL> oradebug dumpvar sga sgauso
qolprm sgauso_ [3C4C708, 3C4C72C) = 00000001 45440007 4C554146 00000054 00000000 00000000 00000000 0

The above output indicates that stored outlines are being used and the category
is DEFAULT: reading the dump in little-endian byte order gives 07 00 (the length of
the category name, 7 characters) followed by 44 45 46 41 55 4C 54, the ASCII codes
for 'DEFAULT'.
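You can verify the decoding from SQL itself (a small sketch using the standard UTL_RAW package):

SQL> select utl_raw.cast_to_varchar2(hextoraw('44454641554C54')) category from dual;

CATEGORY
--------
DEFAULT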

If you set the value at the session level, you need to check the UGA instead:

SQL> alter session set use_stored_outlines = false;

Session altered.

SQL> oradebug dumpvar uga ugauso
qolprm ugauso_p [8EE7A90, 8EE7AB4) = 00000004 00000000 00000000 00000000 00000000 00000000 00000000

The zeros in the category field show that no stored outlines are being used by the session.

SQL> alter session set use_stored_outlines = true;

Session altered.

SQL> oradebug dumpvar uga ugauso
qolprm ugauso_p [8EE7A90, 8EE7AB4) = 00000004 45440007 4C554146 00000054 00000000 00000000 00000000
The category DEFAULT appears again, meaning stored outlines are now enabled for this session.