Friday, 30 December 2011

Resizing a standby datafile when the disk runs out of space on the standby site

PURPOSE: To avoid recreating the standby database when a datafile cannot be resized on the standby.
ITLINUXDEVBLADE07 (PRIMARY)
The database is DGTEST9i.
[oracle@itlinuxdevblade07 dgtest9i]$ df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/cciss/c0d0p2 25G 21G 2.0G 92% /opt
[oracle@itlinuxdevblade07 dgtest9i]$
There is 2 GB of free space on the disk on the PRIMARY.
ITLINUXDEVBLADE08 (STANDBY)
[oracle@itlinuxdevblade08 oradata]$ df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/cciss/c0d0p2 25G 23G 550M 98% /opt
[oracle@itlinuxdevblade08 oradata]$
The corresponding disk on the standby site (as mapped by db_file_name_convert) has only 550 MB free.
Now create a new tablespace for testing purposes:
SQL> create tablespace dropme datafile '/opt/oracle/oradata/dgtest9i/dropme.dbf' size 200m;
Tablespace created.
SQL> alter system switch logfile;
System altered.
SQL> /
System altered.
[oracle@itlinuxdevblade07 dgtest9i]$ df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/cciss/c0d0p2 25G 22G 1.8G 93% /opt
[oracle@itlinuxdevblade07 dgtest9i]$
On STANDBY
[oracle@itlinuxdevblade08 dgtest9i]$ df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/cciss/c0d0p2 25G 23G 350M 99% /opt
[oracle@itlinuxdevblade08 dgtest9i]$
Available space has dropped from 550 MB to 350 MB.
The logs were successfully applied on the STANDBY, as there was still enough space on its disk.
Now grow the datafile beyond the space available on the standby disk by RESIZING it on the PRIMARY.
On the PRIMARY site:
SQL> alter database datafile '/opt/oracle/oradata/dgtest9i/dropme.dbf' RESIZE 1024M;
Database altered.
SQL> alter system switch logfile;
System altered.
Go to the STANDBY:
[oracle@itlinuxdevblade08 dgtest9i]$ df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/cciss/c0d0p2 25G 23G 350M 99% /opt
[oracle@itlinuxdevblade08 dgtest9i]$ ls -lrt
total 801848
-rw-r----- 1 oracle dba 209723392 May 10 14:23 dropme.dbf
-rw-r----- 1 oracle dba 7544832 May 10 14:23 control01.ctl
[oracle@itlinuxdevblade08 dgtest9i]$
The file has not been resized, as there was not enough space.
Now check the logs for errors.
On the standby, the alert log shows that media recovery failed because the datafile could not be resized, and media recovery stops.
The standby database cannot be opened now, as its files need recovery.
Media Recovery Log /opt/oracle/dgtest9i/arch/arch59.log
MRP0: Background Media Recovery terminated with error 1237
Wed May 10 14:23:04 2006
Errors in file /opt/oracle/admin/dgtest9i/bdump/dgtest9i_mrp0_10198.trc:
ORA-01237: cannot extend datafile 3
ORA-01110: data file 3: '/opt/oracle/dgtest9i/dropme.dbf'
ORA-19502: write error on file "/opt/oracle/dgtest9i/dropme.dbf", blockno 71297 (blocksize=8192)
ORA-27072: skgfdisp: I/O error
Linux-x86_64 Error: 2: No such file or directory
Additional information: 71296
Some recovered datafiles maybe left media fuzzy
Media recovery may continue but open resetlogs may fail
MRP0: Background Media Recovery process shutdown
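Before jumping to the fix, it can help to confirm which datafiles the standby considers in need of recovery. A minimal check, as a sketch, run on the mounted standby:

SQL> select file#, error, change# from v$recover_file;

The file that failed to extend should show up here along with its error text.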
STEPS to solve this problem.
1. Shutdown both databases.
2. Create a new db_file_name_convert path, as the file in question has to be moved to a different disk where space exists.
*.db_file_name_convert='/opt/oracle/dgtest9i/','/opt/oracle/oradata/dgtest9i/','/tmp/dgtest9i/','/tmp/dgtest9i/' (on both machines).
3. Copy dropme.dbf to /tmp/dgtest9i on both machines from original area.
4. startup mount; (PRIMARY database)
5. alter database rename file '/opt/oracle/oradata/dgtest9i/dropme.dbf' to '/tmp/dgtest9i/dropme.dbf';
6. alter database open;
7. alter database create standby controlfile as '/tmp/control01.ctl'; (on the primary site)
8. FTP the standby controlfile to /opt/oracle/dgtest9i on the standby (the controlfile location in the standby spfile).
9. startup nomount; (STANDBY site)
SQL> show parameter convert;
NAME TYPE VALUE
———————————— ———– ——————————
db_file_name_convert string /opt/oracle/oradata/dgtest9i/, /opt/oracle/dgtest9i/, /tmp/dgtest9i/, /tmp/dgtest9i/
log_file_name_convert string /opt/oracle/oradata/dgtest9i/, /opt/oracle/dgtest9i/
SQL> alter database mount standby database;
Database altered.
SQL> select name from v$datafile;
/opt/oracle/dgtest9i/system01.dbf
/opt/oracle/dgtest9i/undotbs01.dbf
/tmp/dgtest9i/dropme.dbf
On the primary site, switch a few logfiles.
Now start managed recovery on the standby:
SQL> recover managed standby database disconnect;
The alert log on the standby shows:
Completed: ALTER DATABASE RECOVER managed standby database d
Wed May 10 14:54:30 2006
RFS: Possible network disconnect with primary database
Wed May 10 14:54:38 2006
Fetching gap sequence for thread 1, gap sequence 59-63
Trying FAL server: DGTEST9I_BLADE07
Wed May 10 14:54:41 2006
RFS: Successfully opened standby logfile 4: ‘/opt/oracle/dgtest9i/st01.log’
Wed May 10 14:54:56 2006
Media Recovery Log /opt/oracle/dgtest9i/arch/arch59.log
Media Recovery Log /opt/oracle/dgtest9i/arch/arch60.log
Media Recovery Log /opt/oracle/dgtest9i/arch/arch61.log
Media Recovery Log /opt/oracle/dgtest9i/arch/arch62.log
Media Recovery Log /opt/oracle/dgtest9i/arch/arch63.log
Media Recovery Log /opt/oracle/dgtest9i/arch/arch64.log
Media Recovery Log /opt/oracle/dgtest9i/arch/arch65.log
Media Recovery Waiting for thread 1 seq# 66 (in transit)
Go to /tmp/dgtest9i on the STANDBY and verify that the file has been resized to 1 GB:
[oracle@itlinuxdevblade08 dgtest9i]$ ls -lrt
total 1049612
-rw-r----- 1 oracle dba 1073750016 May 10 14:55 dropme.dbf

Monday, 19 December 2011

Schema Refresh

I'm refreshing TESTDB with data from PRODDB; only one schema is refreshed here.

Source side:


Preparatory Steps:

Create a directory, or use an existing one, and grant the 'system' database user read and write permission on that directory (TEST_MIG):
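If the directory object does not exist yet, a minimal sketch for creating it (the path /u01/exports is an assumed example; point it at a real filesystem location):

SQL> create directory TEST_MIG as '/u01/exports';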

SQL> grant read,write on directory TEST_MIG to system;

Grant succeeded.

SQL> alter user system identified by TESTDBdba account unlock;

PRODDB:

Step 1: Export the data from the source database (PRODDB in our case).

vi expdp_refresh_schema_sep27.sh

$ expdp system/PRODDB@PRODDB DUMPFILE=REFRESH_SCHEMA.DMP DIRECTORY=DATA_PUMP_DIR SCHEMAS=REFRESH_SCHEMA LOGFILE=REFRESH_SCHEMA.log

$ nohup sh expdp_refresh_schema_sep27.sh>refresh_schema.out &

Step 2: Copy the dump file (the source data) to the target database server.
We can use the 'winscp' tool (a graphical utility for copying files between Windows and Linux), or ftp, scp, tar, or rsync, to copy the data from the source server to the target server.
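For example, a minimal scp sketch (the hostname and paths are assumed examples; the filesystem path behind DATA_PUMP_DIR can be looked up in dba_directories):

$ scp /u01/app/oracle/admin/PRODDB/dpdump/REFRESH_SCHEMA.DMP oracle@testdb-host:/u01/exports/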

Step 3: Import the data into the target database.

$ impdp system/TESTDBdba@TESTDB DUMPFILE=REFRESH_SCHEMA.DMP DIRECTORY=TEST_MIG REMAP_SCHEMA=REFRESH_SCHEMA:REFRESH_SCHEMA LOGFILE=REFRESH_SCHEMA.log


Step 4: Verify the data in the source and target databases, for example as below.
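One quick way to verify is to count objects per type in the refreshed schema on both databases and compare the two result sets (a sketch; run it on both source and target):

SQL> select object_type, count(*)
     from dba_objects
     where owner='REFRESH_SCHEMA'
     group by object_type
     order by object_type;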

Note:
In Oracle 11g Release 2 (version 11.2.0.1.0) there are 44 distinct object types; compared to previous releases this number is large.

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production


SQL> select distinct object_type from dba_objects;

OBJECT_TYPE
-------------------
EDITION
INDEX PARTITION
CONSUMER GROUP
SEQUENCE
TABLE PARTITION
SCHEDULE
QUEUE
RULE
JAVA DATA
PROCEDURE
OPERATOR

OBJECT_TYPE
-------------------
LOB PARTITION
DESTINATION
WINDOW
SCHEDULER GROUP
DATABASE LINK
LOB
PACKAGE
PACKAGE BODY
LIBRARY
PROGRAM
RULE SET

OBJECT_TYPE
-------------------
CONTEXT
TYPE BODY
JAVA RESOURCE
XML SCHEMA
TRIGGER
JOB CLASS
UNDEFINED
DIRECTORY
MATERIALIZED VIEW
TABLE
INDEX

OBJECT_TYPE
-------------------
SYNONYM
VIEW
FUNCTION
JAVA CLASS
JAVA SOURCE
INDEXTYPE
CLUSTER
TYPE
RESOURCE PLAN
JOB
EVALUATION CONTEXT

44 rows selected.

Source Database:

PRODDB:
---------

SQL> select count(*) from dba_objects
where owner='REFRESH_SCHEMA';

COUNT(*)
----------
132

SQL> select count(*) from dba_tables
where owner='REFRESH_SCHEMA';

COUNT(*)
----------
34

SQL> SELECT COUNT(*) FROM DBA_OBJECTS
WHERE OWNER='REFRESH_SCHEMA'
AND OBJECT_TYPE IN ('TABLE','JOB','VIEW','PACKAGE','TRIGGER','SYNONYM','FUNCTION','PROCEDURE','TYPE');

COUNT(*)
----------
62




TARGET DATABASE:


TESTDB:
-------------
SQL> select count(*) from dba_objects
where owner='REFRESH_SCHEMA';

COUNT(*)
----------
131

SQL> select count(*) from dba_tables
where owner='REFRESH_SCHEMA';

COUNT(*)
----------
34

SQL> SELECT COUNT(*) FROM DBA_OBJECTS
WHERE OWNER='REFRESH_SCHEMA'
AND OBJECT_TYPE IN ('TABLE','JOB','VIEW','PACKAGE','TRIGGER','SYNONYM','FUNCTION','PROCEDURE','TYPE');

COUNT(*)
----------
62

Friday, 16 December 2011

Diagnosing a standby database that lags behind the primary

Look at the current SCNs of the primary and standby databases. On the primary:

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1447102

On the standby:

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1301571

Clearly there is a difference, but that by itself does not indicate a problem, since the standby is expected to lag behind the primary (this is an asynchronous, non-real-time-apply setup). The real question is how far it lags in terms of wall-clock time. To find out, I used the scn_to_timestamp function to translate the SCN to a timestamp:

SQL> select scn_to_timestamp(1447102) from dual;

SCN_TO_TIMESTAMP(1447102)
-------------------------------
18-DEC-09 08.54.28.000000000 AM
Run the same query to get the timestamp associated with the SCN of the standby database:
SQL> select scn_to_timestamp(1301571) from dual;

SCN_TO_TIMESTAMP(1301571)
-------------------------------
15-DEC-09 07.19.27.000000000 PM

This shows that the standby is lagging by two and a half days! The data at this point is not just stale; it must be rotten.
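As a cross-check, on 10g and later the standby can report its own lag directly; a sketch, run on the standby, where the apply services populate these statistics:

SQL> select name, value from v$dataguard_stats
     where name in ('apply lag', 'transport lag');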

The next question was why it was lagging so far into the past. This is a 10.2 database, where the FAL server should automatically resolve any gaps in archived logs. Something must have happened to cause the FAL (fetch archived log) process to fail. To find the answer, I first checked the alert log of the standby instance and found these lines, which show the issue clearly:


Fri Dec 18 06:12:26 2009
Waiting for all non-current ORLs to be archived...
Media Recovery Waiting for thread 1 sequence 700
Fetching gap sequence in thread 1, gap sequence 700-700
… …
Fri Dec 18 06:13:27 2009
FAL[client]: Failed to request gap sequence
GAP - thread 1 sequence 700-700
DBID 846390698 branch 697108460
FAL[client]: All defined FAL servers have been attempted.

Going back in the alert log, I found these lines:

Tue Dec 15 17:16:15 2009
Fetching gap sequence in thread 1, gap sequence 700-700
Error 12514 received logging on to the standby
FAL[client, MRP0]: Error 12514 connecting to DEL1 for fetching gap sequence
Tue Dec 15 17:16:15 2009
Errors in file /opt/oracle/admin/DEL2/bdump/del2_mrp0_18308.trc:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
Tue Dec 15 17:16:45 2009
Error 12514 received logging on to the standby
FAL[client, MRP0]: Error 12514 connecting to DEL1 for fetching gap sequence
Tue Dec 15 17:16:45 2009
Errors in file /opt/oracle/admin/DEL2/bdump/del2_mrp0_18308.trc:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

This clearly showed the issue. On December 15 at 17:16:15, the Managed Recovery Process encountered an error while receiving the log information from the primary: ORA-12514 "TNS:listener does not currently know of service requested in connect descriptor". This is usually the case when the TNS connect string is incorrectly specified. The primary is called DEL1, and there is a connect string called DEL1 on the standby server.

The connect string works well; in fact, right now there is no issue with the standby getting the archived logs, so the connect string is fine now. The standby is receiving log information from the primary. There must have been some temporary hiccup that prevented that specific archived log from travelling to the standby. If that log was somehow skipped (an intermittent problem, perhaps), it should have been picked up by the FAL process later on; but that never happened. Since sequence# 700 was not applied, none of the logs received later (701, 702, and so on) were applied either. This has caused the standby to lag behind ever since.

So, the fundamental question was why FAL did not fetch archived log sequence# 700 from the primary. To get to that, I looked into the alert log of the primary instance. The following lines were of interest:


Tue Dec 15 19:19:58 2009
Thread 1 advanced to log sequence 701 (LGWR switch)
Current log# 2 seq# 701 mem# 0: /u01/oradata/DEL1/onlinelog/o1_mf_2_5bhbkg92_.log
Tue Dec 15 19:20:29 2009
Errors in file /opt/oracle/product/10gR2/db1/admin/DEL1/bdump/del1_arc1_14469.trc:
ORA-00308: cannot open archived log '/u01/oraback/1_700_697108460.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3
Tue Dec 15 19:20:29 2009
FAL[server, ARC1]: FAL archive failed, see trace file.
Tue Dec 15 19:20:29 2009
Errors in file /opt/oracle/product/10gR2/db1/admin/DEL1/bdump/del1_arc1_14469.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed.
Archiver continuing
Tue Dec 15 19:20:29 2009
ORACLE Instance DEL1 - Archival Error. Archiver continuing.

These lines showed everything clearly. The issue was:

ORA-00308: cannot open archived log '/u01/oraback/1_700_697108460.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory


The archived log simply was not available. The process could not see the file and could not ship it across to the standby site.

Upon further investigation I found that a DBA had removed archived logs to make some room in the filesystem, without realizing that he had also removed the most recent one, which was yet to be transmitted to the remote site. The mystery of why FAL did not fetch that log was finally cleared up.

Solution

Now that I knew the cause, the focus shifted to the resolution. If archived log sequence# 700 had still been available on the primary, I could easily have copied it over to the standby, registered the log file, and let the managed recovery process pick it up. But unfortunately the file was gone, and I couldn't just recreate it. Until that logfile is applied, recovery cannot move forward. So, what were my options?

One option is of course to recreate the standby: possible, but not practical considering the time required. The other option is to apply an incremental backup of the primary taken from that SCN. That's the key: the backup must start from a specific SCN. I have described the process here since it is not very obvious. The following shows the step-by-step approach, with each action marked [Standby] or [Primary] to show where it must be performed.

1. [Standby] Stop the managed standby apply process:

SQL> alter database recover managed standby database cancel;

Database altered.

2. [Standby] Shutdown the standby database

3. [Primary] On the primary, take an incremental backup from the SCN number where the standby has been stuck:

RMAN> run {
2> allocate channel c1 type disk format '/u01/oraback/%U.rmb';
3> backup incremental from scn 1301571 database;
4> }

using target database control file instead of recovery catalog
allocated channel: c1
channel c1: sid=139 devtype=DISK

Starting backup at 18-DEC-09
channel c1: starting full datafile backupset
channel c1: specifying datafile(s) in backupset
input datafile fno=00001 name=/u01/oradata/DEL1/datafile/o1_mf_system_5bhbh59c_.dbf
… …
piece handle=/u01/oraback/06l16u1q_1_1.rmb tag=TAG20091218T083619 comment=NONE
channel c1: backup set complete, elapsed time: 00:00:06
Finished backup at 18-DEC-09
released channel: c1

4. [Primary] On the primary, create a new standby controlfile:

SQL> alter database create standby controlfile as '/u01/oraback/DEL1_standby.ctl';

Database altered.

5. [Primary] Copy these files to standby host:

oracle@oradba1 /u01/oraback# scp *.rmb *.ctl oracle@oradba2:/u01/oraback
oracle@oradba2's password:
06l16u1q_1_1.rmb 100% 43MB 10.7MB/s 00:04
DEL1_standby.ctl 100% 43MB 10.7MB/s 00:04

6. [Standby] Bring up the instance in nomount mode:

SQL> startup nomount

7. [Standby] Check the location of the controlfile:

SQL> show parameter control_files

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
control_files string /u01/oradata/standby_cntfile.ctl

8. [Standby] Replace the controlfile with the one you just created on the primary:

9. [Standby] $ cp /u01/oraback/DEL1_standby.ctl /u01/oradata/standby_cntfile.ctl

10. [Standby] Mount the standby database:

SQL> alter database mount standby database;
11. [Standby] RMAN does not know about these files yet, so you must let it know through a process called cataloging. Catalog these files:

$ rman target=/

Recovery Manager: Release 10.2.0.4.0 - Production on Fri Dec 18 06:44:25 2009

Copyright (c) 1982, 2007, Oracle. All rights reserved.

connected to target database: DEL1 (DBID=846390698, not open)
RMAN> catalog start with '/u01/oraback';

using target database control file instead of recovery catalog
searching for all files that match the pattern /u01/oraback

List of Files Unknown to the Database
=====================================
File Name: /u01/oraback/DEL1_standby.ctl
File Name: /u01/oraback/06l16u1q_1_1.rmb

Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /u01/oraback/DEL1_standby.ctl
File Name: /u01/oraback/06l16u1q_1_1.rmb

12. [Standby] Recover the database using these files:

RMAN> recover database;

Starting recover at 18-DEC-09
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: /u01/oradata/DEL2/datafile/o1_mf_system_5lptww3f_.dbf
...…
channel ORA_DISK_1: reading from backup piece /u01/oraback/05l16u03_1_1.rmb
channel ORA_DISK_1: restored backup piece 1
piece handle=/u01/oraback/05l16u03_1_1.rmb tag=TAG20091218T083619
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07

starting media recovery

archive log thread 1 sequence 8012 is already on disk as file /u01/oradata/1_8012_697108460.dbf
archive log thread 1 sequence 8013 is already on disk as file /u01/oradata/1_8013_697108460.dbf
… …

13. [Standby] After some time, the recovery fails with this message:

archive log filename=/u01/oradata/1_8008_697108460.dbf thread=1 sequence=8009
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 12/18/2009 06:53:02
RMAN-11003: failure during parse/execution of SQL statement: alter database recover logfile '/u01/oradata/1_8008_697108460.dbf'
ORA-00310: archived log contains sequence 8008; sequence 8009 required
ORA-00334: archived log: '/u01/oradata/1_8008_697108460.dbf'

This happens because we have reached the end of the available archived logs. The expected archived log with sequence# 8009 has not been generated yet.

14. [Standby] At this point, exit RMAN and start the managed recovery process:

SQL> alter database recover managed standby database disconnect from session;

Database altered.

15. Check the SCNs on the primary and the standby:

[Standby] SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1447474
[Primary] SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1447478
Now they are very close to each other.
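To confirm that the apply process is healthy after the fix, a quick status check on the standby (a sketch) is:

SQL> select process, status, thread#, sequence# from v$managed_standby;

Once managed recovery is running, MRP0 should show a status such as APPLYING_LOG or WAIT_FOR_LOG.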

Thursday, 15 December 2011

Active Session History (ASH)

How to run the ashrpt.sql script

To generate a text report of ASH information, run the ashrpt.sql script at the SQL prompt:

@$ORACLE_HOME/rdbms/admin/ashrpt.sql

First, you need to specify whether you want an HTML or a text report.

Enter value for report_type: text

Specify the time frame for collecting ASH information by first giving the begin time, in minutes, prior to the system date.

Enter value for begin_time: -10
Note: enter just the number of minutes, e.g. -10 for ten minutes before the current time.

Next, enter the duration in minutes for which the report should capture ASH information, counted from the begin time. The default (system date minus begin time) is accepted in the following example:

Enter value for duration:
Note: leave it blank to accept the default, which runs the report up to SYSDATE.

The report in this example will gather ASH information beginning from 10 minutes before the current time and ending at the current time. Next, accept the default report name or enter a report name. The default name is accepted in the following example:

Enter value for report_name:
Using the report name ashrpt_1_0310_0131.txt
Note: leave it blank to accept the default.

The session history report is generated.

Purpose of ASH

ASH is a way of sampling the state of sessions connected to an Oracle database, both to monitor database load and to drill down into any performance issues that arise. Any session that is connected to the database and is waiting for an event that does not belong to the Idle wait class is considered an active session. ASH maintains a fixed-size circular buffer in the database System Global Area (SGA), allocated at database start-up. By default ASH samples once every second and logs some 30 to 60 (depending on version) pieces of information about any session that is active at the time of sampling. Each session sample is a set of rows, and the V$ACTIVE_SESSION_HISTORY view returns one row for each active session per sample, latest samples first. Because the samples are kept in a fixed-size circular buffer in the SGA, the greater the system activity, the fewer seconds of session history the buffer can hold. The number of seconds of session activity visible in the V$ view therefore depends entirely on database activity.
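The current size of that circular buffer can be checked in v$sgastat; a sketch (the entry is typically named 'ASH buffers' in the shared pool, which is worth verifying on your version):

SQL> select pool, name, bytes from v$sgastat where name = 'ASH buffers';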

ASH Analysis

1. Online analysis
2. Offline analysis
Let's start with the ONLINE analysis
========================
I have opened two sessions. The first one (SID 16) is running a DML statement:
SQL> delete test where rowid='AAAKB9AAEAAAAAiAAA';
From the second session (SID 15), run the same DML; obviously the second session will wait for the first session to commit.
Let's check the info in V$ACTIVE_SESSION_HISTORY.
------------------------------------------------
Run the following script:
SQL> select SESSION_ID,NAME,P1,P2,P3,WAIT_TIME,CURRENT_OBJ#,CURRENT_FILE#,CURRENT_BLOCK#
     from v$active_session_history ash, v$event_name enm
     where ash.event#=enm.event# and SESSION_ID=&SID and SAMPLE_TIME>=(sysdate-&minute/(24*60));
Enter value for sid: 15
Enter value for minute: 1  /* how many minutes of activity you want to see */
The output is 59 rows, as the session has been waiting for more than a minute:
SESSION_ID NAME                                   P1         P2         P3  WAIT_TIME CURRENT_OBJ# CURRENT_FILE# CURRENT_BLOCK#
---------- ------------------------------ ---------- ---------- ---------- ---------- ------------ ------------- --------------
        15 enq: TX - row lock contention  1415053318     589825        143          0        41085             4             34
        15 enq: TX - row lock contention  1415053318     589825        143          0        41085             4             34
        15 enq: TX - row lock contention  1415053318     589825        143          0        41085             4             34
continue .............
        15 enq: TX - row lock contention  1415053318     589825        143          0        41085             4             34
        15 enq: TX - row lock contention  1415053318     589825        143          0        41085             4             34
        15 enq: TX - row lock contention  1415053318     589825        143          0        41085             4             34
So you have the object details along with the problem info.
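To turn CURRENT_OBJ# into an object name, look it up in dba_objects; a sketch using the object ID from the output above:

SQL> select owner, object_name, object_type from dba_objects where object_id = 41085;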
 
Let's do the OFFLINE analysis of ASH
==============================
If you are not able to find the problem online, you can dump the ASH buffer to a trace file.
The command looks like one of the following, where the level means minutes of history; let's dump 10 minutes of history:
1. SQL> alter session set events 'immediate trace name ashdump level 10';
or
2. SQL> alter system set events 'immediate trace name ashdump level 10';
or
3. SQL> oradebug setmypid
    SQL> oradebug dump ashdump 10;
You will then find the trace file in the udump directory.

ASH Contains

* SQL identifier of the SQL statement (SQL_ID: identifier of the SQL statement that the session was executing at the time of sampling)
* Object number, file number, and block number (CURRENT_OBJ#: object ID of the object the session is referencing; CURRENT_FILE#: number of the file containing the block the session is referencing; CURRENT_BLOCK#: ID of the block the session is referencing)
* Wait event identifier (EVENT_ID: identifier of the resource or event for which the session is waiting or for which it last waited)
* Session identifier and session serial number (SID, SERIAL#: used to uniquely identify a session's objects)
* Module and action name (MODULE: name of the executing module when sampled; ACTION: name of the executing action when sampled)
* Client identifier of the session (CLIENT_ID)
* Service hash identifier (SERVICE_HASH: hash that identifies the service)
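Putting these columns to use, a common drill-down is ranking the top wait events over a recent window; a sketch (the 10-minute window is an arbitrary example):

SQL> select enm.name wait_event, count(*) samples
     from v$active_session_history ash, v$event_name enm
     where ash.event# = enm.event#
     and ash.sample_time >= sysdate - 10/(24*60)
     group by enm.name
     order by samples desc;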




Wednesday, 23 November 2011

Fractured block in Oracle


A fractured block is a block in which the header and footer are not consistent at a given SCN. In a user-managed backup, an operating system utility can back up a datafile at the same time that DBWR is updating the file. It is possible for the operating system utility to read a block in a half-updated state, so that the block that is copied to the backup media is updated in its first half, while the second half contains older data. In this case, the block is fractured.

For non-RMAN backups, the ALTER TABLESPACE ... BEGIN BACKUP or ALTER DATABASE BEGIN BACKUP command is the solution for the fractured block problem. When a tablespace is in backup mode, and a change is made to a data block, the database logs a copy of the entire block image before the change so that the database can reconstruct this block if media recovery finds that this block was fractured.
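A minimal sketch of the commands around a user-managed hot backup (the tablespace name and copy destination are assumed examples):

SQL> alter tablespace users begin backup;
$ cp /u01/oradata/users01.dbf /backup/users01.dbf
SQL> alter tablespace users end backup;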

The block that the operating system reads can be split: the top of the block is written at one point in time while the bottom of the block is written at another point in time. If you restore a file containing a fractured block and Oracle reads the block, the block is considered corrupt.

Thursday, 27 October 2011

Checking Conflicts Before Applying CPU Patch

Hi friends, today I want to share an important tip about applying the July 2011 CPU patch.

Most of the time we never check for conflicts before applying a CPU; this conflict-checking command is not even in the README.html file distributed with the patch.

Use the command below to check for conflicts against the ORACLE_HOME and avoid landing in problems:
Step 1: unzip the patch zip file
Step 2: run the command below
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <patch_directory>
Example:
$ unzip p9655017_10204_linux.zip
$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir 9655017
The other day, while patching a RAC database, I got the error below after executing the conflict-check command:
Following patches have conflicts. Please contact Oracle Support and get the merged patch of the patches :
6600051, 8836683
Whenever you get this type of error message, please contact Oracle Support by raising a service request (SR).
In my case, Oracle Support suggested applying merge patch 9347333 before applying the July 2011 CPU. Once the merge patch was applied, I applied the CPU patch without any further issues.
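For reference, the apply itself then proceeds as the patch README describes; a hedged sketch of the usual sequence on 10.2 (directory names follow the patch numbers above, and the CPU directory name is a placeholder):

$ cd 9347333
$ $ORACLE_HOME/OPatch/opatch apply
$ cd ../<july_cpu_patch_directory>
$ $ORACLE_HOME/OPatch/opatch napply -skip_subset -skip_duplicate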
Sometimes, apart from the above message, you may see the warning messages below, which you can safely ignore:
Summary of Conflict Analysis:
Patches that can be applied now without any conflicts are :
10013975, 10014009, 10014012, 10014015, 10325878, 10325885, 11787762, 11787763, 11787765, 11787766, 12419249, 12566121, 12566124, 12566126, 12566129, 12566131, 12566134, 12566136, 12566137, 12566139, 12566141, 12566142, 12566143, 7155248, 7155249, 7155250, 7155251, 7155252, 7155253, 7155254, 7197583, 7375611, 7375613, 7375617, 7609057, 8309592, 8309632, 8568395, 8568397, 8568398, 8568402, 8568404, 8836667, 8836671, 8836675, 8836677, 8836678, 8836683, 8836684, 8836686, 9173244, 9173253, 9442328, 9442331, 9442339, 9678690, 9678695, 9678697
Following patches are not required, as they are subset of the patches in Oracle Home or subset of the patches in the given list :
10249540, 8836681, 8568405
Following patches will be rolled back from Oracle Home on application of the patches in the given list :
10013975, 10014009, 10014012, 10014015, 10325878, 10249540, 8836681, 8568405, 7155248, 7155249, 7155250, 7155251, 7155252, 7197583, 7375611, 7375613, 7375617, 7609057, 8309592, 8309632, 8568395, 8568397, 8568398, 8568402, 8568404, 8836667, 8836671, 8836675, 8836677, 8836678, 8836683, 8836684, 8836686, 9173244, 9173253, 9442328, 9442331, 9442339, 9678690, 9678695

Thursday, 13 October 2011

SYSDBA AND SYSOPER

 SYSDBA:

The SYS user is automatically granted the SYSDBA privilege upon installation. When you log in as user SYS, you must connect to the database as SYSDBA. Connecting as a SYSDBA user invokes the SYSDBA privilege.
SYSDBA has full rights to do anything on the database.

SYSOPER:
SYSOPER lacks certain privileges:
 1. Change character set
 2. Create database
 3. Drop database
 4. ALTER DATABASE RECOVER (complete recovery only; any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA)

SYSTEM:
This account can perform all administrative functions except the following:
1.Backup and recovery
2.Database upgrade

When you connect with SYSDBA or SYSOPER privileges, you connect with a default schema, not with the schema that is generally associated with your user name. For SYSDBA this schema is SYS; for SYSOPER the schema is PUBLIC.
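A quick way to see the default schema in action (a sketch):

SQL> connect / as sysdba
Connected.
SQL> show user
USER is "SYS"

Connecting with AS SYSOPER instead would report the user as PUBLIC.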

Wednesday, 12 October 2011

FAL_SERVER & FAL_CLIENT IN DATAGUARD

FAL_CLIENT and FAL_SERVER are initialization parameters used to configure log gap detection and resolution at the standby database side of a physical database configuration. This functionality is provided by log apply services and is used by the physical standby database to manage the detection and resolution of archived redo logs.

FAL_CLIENT and FAL_SERVER only need to be defined in the initialization parameter file of the standby database(s). It is possible, however, to define these two parameters in the initialization parameter file of the primary database as well, to reduce the work required if the primary ever has to transition its role.

FAL_CLIENT specifies the TNS network service name for the standby database (sent to the FAL server process by log apply services), which the FAL server process uses to connect back to the standby database. The syntax is:

FAL_CLIENT=<net_service_name_of_standby_database>
 
FAL_SERVER specifies the TNS network service name that the standby database should use to connect to the FAL server process. The syntax is:

FAL_SERVER=<net_service_name_of_primary_database>
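For example, a minimal sketch for the standby's parameter file, assuming the primary's net service name is DEL1 and the standby's is DEL2:

*.fal_server='DEL1'
*.fal_client='DEL2'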



Oracle 11g Case Sensitive Passwords

Oracle 11g by default forces case sensitivity of user passwords: users have to provide their password in the same case (upper, lower, or mixed) in which it was created. Let's observe this behavior.

Case sensitive user passwords in Oracle 11g

SQL> create user u1 identified by U1;

User created.

SQL> grant create session to u1;

Grant succeeded.

SQL> conn u1/u1
ERROR:
ORA-01017: invalid username/password; logon denied


Warning: You are no longer connected to ORACLE.
We have created a user named u1 with the password U1. This user now has to provide the password as U1, not u1. The behavior is controlled by the initialization parameter SEC_CASE_SENSITIVE_LOGON, which defaults to TRUE.
SQL> conn / as sysdba
Connected.
SQL> show parameter sec_case_sensitive_logon;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sec_case_sensitive_logon             boolean     TRUE

SQL> alter system set sec_case_sensitive_logon=false scope=both;

System altered.

SQL> show parameter sec_case_sensitive_logon

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sec_case_sensitive_logon             boolean     FALSE

SQL> conn u1/u1
Connected.
Now that sec_case_sensitive_logon is set to false, the case of the password doesn't matter.
Remember that the password "U1" is stored in the case it was created with, so as soon as you switch case-sensitive passwords back on, the original case is enforced again.
SQL> alter system set sec_case_sensitive_logon=true scope=both;

System altered.

SQL> conn u1/u1
ERROR:
ORA-01017: invalid username/password; logon denied


Warning: You are no longer connected to ORACLE.

SQL> conn u1/U1
Connected.
SQL> 
Oracle releases before 11g did not have case-sensitive passwords, so when you import users from a pre-11g database into an 11g database their passwords remain case-insensitive. Those passwords become case-sensitive only when the users change them after the import, provided password case sensitivity is enabled. So, after importing users from a pre-11g release (e.g. 10g) into an 11g database, ask them to change their passwords immediately for better password security.
There is a new column in the DBA_USERS view, PASSWORD_VERSIONS, which shows the password version history of each user.
SQL> SELECT USERNAME,PASSWORD_VERSIONS FROM DBA_USERS;

USERNAME                       PASSWORD_VERSIONS
------------------------------ -----------------
SCOTT                          10G 11G
HR                             10G 11G
ZAHID                          10G 11G
U1                             11G
KAREEM                         10G
Users with PASSWORD_VERSIONS of '10G 11G' were imported from 10g and have changed their passwords after being imported into 11g.
Users with PASSWORD_VERSIONS of '11G' were created in the 11g database.
Users with PASSWORD_VERSIONS of '10G' were imported from 10g and haven't changed their passwords since being imported into 11g.
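A quick way to list the accounts still carrying only a pre-11g (case-insensitive) password (a sketch; PASSWORD_VERSIONS may carry trailing spaces, hence the TRIM):

SQL> select username from dba_users where trim(password_versions) = '10G';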

Case sensitive password in Password File in Oracle 11g

$ cd $ORACLE_HOME/dbs

$ orapwd file=orapwora11g entries=30 password=System  ignorecase=n

-- This will create a password file with a password "System" and ignorecase=n
-- ignorecase=n is the default.

$ sqlplus sys/system@ora11g as sysdba

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

ERROR:
ORA-01017: invalid username/password; logon denied


Enter user-name: 

-- When I provide the password as "system" the logon was denied.

$ sqlplus sys/System@ora11g as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Wed Jul 21 06:11:41 2010

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> 

-- I am successfully logged in when I used the password "System".
ignorecase=n is the default with the orapwd command in Oracle 11g, i.e. whether you mention it or not, the password will be case sensitive when users log in as SYSDBA remotely.
To turn off password case sensitivity in the password file, we need to explicitly specify ignorecase=y while creating the password file.
$ cd $ORACLE_HOME/dbs

$ rm orapwora11g

$ orapwd file=orapwora11g entries=30 password=System ignorecase=y

$ sqlplus sys/system@ora11g as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Wed Jul 21 06:19:24 2010

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL>
This time I created the password file with ignorecase=y, and I can log in with the password "system".
If the password file was created with ignorecase=n and you have imported a SYSDBA or SYSOPER user from a previous Oracle release, the password will be included in the password file, but it remains case-insensitive until the user changes it. So ask such users to change their passwords once they are moved to 11g, for better security.

Db Links Passwords Case Sensitivity in Oracle 11g

There could be three scenarios of db link connections.
1). A user connecting from a pre-11g database to 11g database

If password case sensitivity is switched on in the 11g database, recreate the db link with the password in uppercase before the user can connect to the 11g database, because db link passwords were always stored in uppercase in pre-11g releases, no matter what case was used when the db link was created. A sketch follows below.
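A hedged sketch of recreating such a link on the pre-11g side (the link name to_11g and connect string ORA11G are assumed examples; the quoted password matches the case stored for the 11g user):

SQL> drop database link to_11g;
SQL> create database link to_11g connect to u1 identified by "U1" using 'ORA11G';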
2). A user connecting from 11g database to 11g database.

If the user is connecting from an 11g database to an 11g database and password case sensitivity is enabled, the user must enter the password using the case in which it was created.
3). A user connecting from 11g database to a pre-11g database.

If the user is connecting from an 11g database to a pre-11g database, the case of the password doesn't matter, as there is no password case sensitivity in the pre-11g database.

Friday, 19 August 2011

What SQL is currently using the most resources?

SELECT active_session_history.user_id,
       dba_users.username,
       sqlarea.sql_text,
       SUM(active_session_history.wait_time + active_session_history.time_waited) total_wait_time
FROM   v$active_session_history active_session_history,
       v$sqlarea sqlarea,
       dba_users
WHERE  active_session_history.sample_time BETWEEN SYSDATE - 60/2880 AND SYSDATE
AND    active_session_history.sql_id = sqlarea.sql_id
AND    active_session_history.user_id = dba_users.user_id
GROUP  BY active_session_history.user_id, sqlarea.sql_text, dba_users.username
ORDER  BY 4;
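Note that 60/2880 of a day is 30 minutes, so the query covers the last half hour of ASH samples; and since ORDER BY 4 sorts ascending, the heaviest SQL appears at the bottom of the output.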