Friday, 30 December 2011

Resize standby datafile if disk runs out of space on standby site.

PURPOSE: to avoid having to recreate the standby database when a datafile cannot be resized on the standby.
ITLINUXDEVBLADE07 - PRIMARY
The database is DGTEST9i.
[oracle@itlinuxdevblade07 dgtest9i]$ df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/cciss/c0d0p2 25G 21G 2.0G 92% /opt
[oracle@itlinuxdevblade07 dgtest9i]$
There is 2 GB of free space on the disk on the PRIMARY.
ITLINUXDEVBLADE08 - STANDBY
[oracle@itlinuxdevblade08 oradata]$ df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/cciss/c0d0p2 25G 23G 550M 98% /opt
[oracle@itlinuxdevblade08 oradata]$
The corresponding disk on the standby site (as mapped by db_file_name_convert) has only 550 MB free.
Now create a new tablespace for testing purposes.
SQL> create tablespace dropme datafile '/opt/oracle/oradata/dgtest9i/dropme.dbf' size 200m;
Tablespace created.
SQL> alter system switch logfile;
System altered.
SQL> /
System altered.
[oracle@itlinuxdevblade07 dgtest9i]$ df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/cciss/c0d0p2 25G 22G 1.8G 93% /opt
[oracle@itlinuxdevblade07 dgtest9i]$
On STANDBY
[oracle@itlinuxdevblade08 dgtest9i]$ df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/cciss/c0d0p2 25G 23G 350M 99% /opt
[oracle@itlinuxdevblade08 dgtest9i]$
Available space has reduced from 550 MB to 350 MB.
The logs were successfully applied on the STANDBY as there was enough space on disk.
Now grow the file on the PRIMARY by more than the space available on the standby disk, by RESIZING the datafile.
On the PRIMARY site:
SQL> alter database datafile '/opt/oracle/oradata/dgtest9i/dropme.dbf' RESIZE 1024M;
Database altered.
SQL> alter system switch logfile;
System altered.
Go to the STANDBY:
[oracle@itlinuxdevblade08 dgtest9i]$ df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/cciss/c0d0p2 25G 23G 350M 99% /opt
[oracle@itlinuxdevblade08 dgtest9i]$ ls -lrt
total 801848
-rw-r----- 1 oracle dba 209723392 May 10 14:23 dropme.dbf
-rw-r----- 1 oracle dba 7544832 May 10 14:23 control01.ctl
[oracle@itlinuxdevblade08 dgtest9i]$
The file has not been resized, as there was not enough space.
Now check the logs for errors.
On the standby, the alert log shows that media recovery failed because the datafile could not be resized, and media recovery stops.
The standby database cannot be opened now, as the files need recovery:
Media Recovery Log /opt/oracle/dgtest9i/arch/arch59.log
MRP0: Background Media Recovery terminated with error 1237
Wed May 10 14:23:04 2006
Errors in file /opt/oracle/admin/dgtest9i/bdump/dgtest9i_mrp0_10198.trc:
ORA-01237: cannot extend datafile 3
ORA-01110: data file 3: '/opt/oracle/dgtest9i/dropme.dbf'
ORA-19502: write error on file "/opt/oracle/dgtest9i/dropme.dbf", blockno 71297 (blocksize=8192)
ORA-27072: skgfdisp: I/O error
Linux-x86_64 Error: 2: No such file or directory
Additional information: 71296
Some recovered datafiles maybe left media fuzzy
Media recovery may continue but open resetlogs may fail
MRP0: Background Media Recovery process shutdown
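Before applying the fix, you can confirm from inside the standby instance that managed recovery has stopped; a minimal sketch (V$MANAGED_STANDBY is queried on the mounted standby, output omitted here):

SQL> select process, status, sequence# from v$managed_standby;

If the MRP0 row is missing or shows an error state, the media recovery process is down, consistent with the alert log above.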
STEPS to solve this problem:
1. Shutdown both databases.
2. Create a new db_file_name_convert path, as the file in question has to be moved to a different disk where space exists (on both machines):
*.db_file_name_convert='/opt/oracle/dgtest9i/','/opt/oracle/oradata/dgtest9i/','/tmp/dgtest9i/','/tmp/dgtest9i/'
3. Copy dropme.dbf from its original location to /tmp/dgtest9i on both machines.
4. startup mount; ... PRIMARY database.
5. alter database rename file '/opt/oracle/oradata/dgtest9i/dropme.dbf' to '/tmp/dgtest9i/dropme.dbf';
6. alter database open;
7. alter database create standby controlfile as '/tmp/control01.ctl'; ... primary site.
8. ftp the standby controlfile to /opt/oracle/dgtest9i on the standby (the controlfile location set in the standby spfile).
9. startup nomount; ... STANDBY site.
SQL> show parameter convert;
NAME TYPE VALUE
———————————— ———– ——————————
db_file_name_convert string /opt/oracle/oradata/dgtest9i/, /opt/oracle/dgtest9i/, /tmp/dgtest9i/, /tmp/dgtest9i/
log_file_name_convert string /opt/oracle/oradata/dgtest9i/, /opt/oracle/dgtest9i/
SQL> alter database mount standby database;
Database altered.
SQL> select name from v$datafile;
/opt/oracle/dgtest9i/system01.dbf
/opt/oracle/dgtest9i/undotbs01.dbf
/tmp/dgtest9i/dropme.dbf
Primary site: switch a few logfiles.
Now start managed recovery on the standby:
SQL> recover managed standby database disconnect;
The standby alert log is below:
Completed: ALTER DATABASE RECOVER managed standby database d
Wed May 10 14:54:30 2006
RFS: Possible network disconnect with primary database
Wed May 10 14:54:38 2006
Fetching gap sequence for thread 1, gap sequence 59-63
Trying FAL server: DGTEST9I_BLADE07
Wed May 10 14:54:41 2006
RFS: Successfully opened standby logfile 4: '/opt/oracle/dgtest9i/st01.log'
Wed May 10 14:54:56 2006
Media Recovery Log /opt/oracle/dgtest9i/arch/arch59.log
Media Recovery Log /opt/oracle/dgtest9i/arch/arch60.log
Media Recovery Log /opt/oracle/dgtest9i/arch/arch61.log
Media Recovery Log /opt/oracle/dgtest9i/arch/arch62.log
Media Recovery Log /opt/oracle/dgtest9i/arch/arch63.log
Media Recovery Log /opt/oracle/dgtest9i/arch/arch64.log
Media Recovery Log /opt/oracle/dgtest9i/arch/arch65.log
Media Recovery Waiting for thread 1 seq# 66 (in transit)
Go to /tmp/dgtest9i on the STANDBY and make sure the file has been resized to 1 GB:
[oracle@itlinuxdevblade08 dgtest9i]$ ls -lrt
total 1049612
-rw-r----- 1 oracle dba 1073750016 May 10 14:55 dropme.dbf
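The new size can also be confirmed from inside the standby instance; a minimal sketch (V$DATAFILE is standard, the LIKE predicate is just for this example):

SQL> select name, bytes/1024/1024 mb
     from v$datafile
     where name like '%dropme%';

The dropme.dbf row should now report roughly 1024 MB, matching the resize done on the primary.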

Monday, 19 December 2011

Schema Refresh

I'm refreshing TESTDB with data taken from PRODDB; only one schema is refreshed here.

Source side:


Preparatory Steps:

Create a new directory object, or use an existing one (TEST_MIG here), and give the 'system' database user read and write permission on it so it can be used for the Data Pump files; a sketch of creating it is shown below.
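A minimal sketch of creating the directory object, assuming a hypothetical filesystem path (adjust it to a path that exists on the database server):

SQL> create or replace directory TEST_MIG as '/u01/app/oracle/test_mig';

After this, the grant below makes the directory usable by the system user.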

SQL> grant read,write on directory TEST_MIG to system;

Grant succeeded.

SQL> alter user system identified by TESTDBdba account unlock;

PRODDB:

Step 1: Export the data from the source database (PRODDB in our case).

vi expdp_refresh_schema_sep27.sh

$ expdp system/PRODDB@PRODDB DUMPFILE=REFRESH_SCHEMA.DMP DIRECTORY=DATA_PUMP_DIR SCHEMAS=REFRESH_SCHEMA LOGFILE=REFRESH_SCHEMA.log

$ nohup sh expdp_refresh_schema_sep27.sh>refresh_schema.out &

Step 2: Copy the dump file (source data) to the target database server.
We can use the 'winscp' tool (a graphical utility for copying files between Windows and Linux), or ftp, scp, tar, or rsync for copying data from the source server to the target server.

Step 3: Import the data into the target database.

$ impdp system/TESTDBdba@TESTDB DUMPFILE=REFRESH_SCHEMA.DMP DIRECTORY=TEST_MIG REMAP_SCHEMA=REFRESH_SCHEMA:REFRESH_SCHEMA LOGFILE=REFRESH_SCHEMA.log


Step 4: Verify the data in the source and target databases (a comparison query is sketched below).
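A minimal sketch of a per-object-type comparison that can be run on both PRODDB and TESTDB (the schema name follows the example above):

SQL> select object_type, count(*)
     from dba_objects
     where owner = 'REFRESH_SCHEMA'
     group by object_type
     order by object_type;

Comparing the two result sets row by row makes any missing object type obvious; the overall counts are also checked below.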

Note:
In Oracle 11g Release 2 (version 11.2.0.1.0) there are 44 distinct object types; compared to previous versions this number is large.

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production


SQL> select distinct object_type from dba_objects;

OBJECT_TYPE
-------------------
EDITION
INDEX PARTITION
CONSUMER GROUP
SEQUENCE
TABLE PARTITION
SCHEDULE
QUEUE
RULE
JAVA DATA
PROCEDURE
OPERATOR

OBJECT_TYPE
-------------------
LOB PARTITION
DESTINATION
WINDOW
SCHEDULER GROUP
DATABASE LINK
LOB
PACKAGE
PACKAGE BODY
LIBRARY
PROGRAM
RULE SET

OBJECT_TYPE
-------------------
CONTEXT
TYPE BODY
JAVA RESOURCE
XML SCHEMA
TRIGGER
JOB CLASS
UNDEFINED
DIRECTORY
MATERIALIZED VIEW
TABLE
INDEX

OBJECT_TYPE
-------------------
SYNONYM
VIEW
FUNCTION
JAVA CLASS
JAVA SOURCE
INDEXTYPE
CLUSTER
TYPE
RESOURCE PLAN
JOB
EVALUATION CONTEXT

44 rows selected.

Source Database:

PRODDB:
---------

SQL> select count(*) from dba_objects
where owner='REFRESH_SCHEMA';

COUNT(*)
----------
132

SQL> select count(*) from dba_tables
where owner='REFRESH_SCHEMA';

COUNT(*)
----------
34

SQL> SELECT COUNT(*) FROM DBA_OBJECTS
     WHERE OWNER='REFRESH_SCHEMA'
     AND OBJECT_TYPE IN ('TABLE','JOB','VIEW','PACKAGE','TRIGGER','SYNONYM','FUNCTION','PROCEDURE','TYPE');

COUNT(*)
----------
62


TARGET DATABASE:


TESTDB:
-------------
SQL> select count(*) from dba_objects
     where owner='REFRESH_SCHEMA';

COUNT(*)
----------
131

SQL> select count(*) from dba_tables
     where owner='REFRESH_SCHEMA';

COUNT(*)
----------
34

SQL> SELECT COUNT(*) FROM DBA_OBJECTS
     WHERE OWNER='REFRESH_SCHEMA'
     AND OBJECT_TYPE IN ('TABLE','JOB','VIEW','PACKAGE','TRIGGER','SYNONYM','FUNCTION','PROCEDURE','TYPE');

COUNT(*)
----------
62

Friday, 16 December 2011

The standby database lagging behind the primary database

  Look for the current SCN numbers of the primary and standby databases. On the primary:

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1447102

On the standby:

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1301571

Clearly there is a difference. But this by itself does not indicate a problem, since the standby is expected to lag behind the primary (this is an asynchronous, non-real-time apply setup). The real question is how much it is lagging in terms of wall-clock time. To know that, I used the scn_to_timestamp function to translate the SCN to a timestamp:

SQL> select scn_to_timestamp(1447102) from dual;

SCN_TO_TIMESTAMP(1447102)
-------------------------------
18-DEC-09 08.54.28.000000000 AM
Run the same query to get the timestamp associated with the SCN of the standby database as well:
SQL> select scn_to_timestamp(1301571) from dual;

SCN_TO_TIMESTAMP(1301571)
-------------------------------
15-DEC-09 07.19.27.000000000 PM

This shows that the standby is lagging by two and a half days! The data at this point is not just stale; it must be rotten.

The next question is why it would be lagging so far behind. This is a 10.2 database, where the FAL server should automatically resolve any gaps in archived logs. Something must have happened that caused the FAL (fetch archived log) process to fail. To get that answer, I first checked the alert log of the standby instance. I found these lines that showed the issue clearly:


Fri Dec 18 06:12:26 2009
Waiting for all non-current ORLs to be archived...
Media Recovery Waiting for thread 1 sequence 700
Fetching gap sequence in thread 1, gap sequence 700-700
… …
Fri Dec 18 06:13:27 2009
FAL[client]: Failed to request gap sequence
GAP - thread 1 sequence 700-700
DBID 846390698 branch 697108460
FAL[client]: All defined FAL servers have been attempted.

Going back in the alert log, I found these lines:

Tue Dec 15 17:16:15 2009
Fetching gap sequence in thread 1, gap sequence 700-700
Error 12514 received logging on to the standby
FAL[client, MRP0]: Error 12514 connecting to DEL1 for fetching gap sequence
Tue Dec 15 17:16:15 2009
Errors in file /opt/oracle/admin/DEL2/bdump/del2_mrp0_18308.trc:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
Tue Dec 15 17:16:45 2009
Error 12514 received logging on to the standby
FAL[client, MRP0]: Error 12514 connecting to DEL1 for fetching gap sequence
Tue Dec 15 17:16:45 2009
Errors in file /opt/oracle/admin/DEL2/bdump/del2_mrp0_18308.trc:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

This clearly showed the issue. On December 15th at 17:16:15, the managed recovery process encountered an error while receiving the log information from the primary. The error was ORA-12514 "TNS:listener does not currently know of service requested in connect descriptor". This is usually the case when the TNS connect string is incorrectly specified. The primary is called DEL1 and there is a connect string called DEL1 on the standby server.

The connect string works well. Actually, right now there is no issue with the standby getting the archived logs, so the connect string is fine now. The standby is receiving log information from the primary. There must have been some temporary hiccup that caused that specific archived log not to travel to the standby. If that log was somehow skipped (it could have been an intermittent problem), it should have been picked up by the FAL process later on; but that never happened. Since sequence# 700 was not applied, none of the logs received later (701, 702 and so on) were applied either. This has caused the standby to lag behind since that time.
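Any unresolved gap can also be checked directly on the standby; a minimal sketch (V$ARCHIVE_GAP is a standard view, queried on the mounted standby):

SQL> select thread#, low_sequence#, high_sequence# from v$archive_gap;

In this scenario it would have reported thread 1 with a low and high sequence of 700, matching the alert log entries.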

So, the fundamental question was why FAL did not fetch the archived log sequence# 700 from the primary. To get to that, I looked into the alert log of the primary instance. The following lines were of interest:


Tue Dec 15 19:19:58 2009
Thread 1 advanced to log sequence 701 (LGWR switch)
Current log# 2 seq# 701 mem# 0: /u01/oradata/DEL1/onlinelog/o1_mf_2_5bhbkg92_.log
Tue Dec 15 19:20:29 2009
Errors in file /opt/oracle/product/10gR2/db1/admin/DEL1/bdump/del1_arc1_14469.trc:
ORA-00308: cannot open archived log '/u01/oraback/1_700_697108460.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3
Tue Dec 15 19:20:29 2009
FAL[server, ARC1]: FAL archive failed, see trace file.
Tue Dec 15 19:20:29 2009
Errors in file /opt/oracle/product/10gR2/db1/admin/DEL1/bdump/del1_arc1_14469.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed.
Archiver continuing
Tue Dec 15 19:20:29 2009
ORACLE Instance DEL1 - Archival Error. Archiver continuing.

These lines showed everything clearly. The issue was:

ORA-00308: cannot open archived log '/u01/oraback/1_700_697108460.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory


The archived log simply was not available. The process could not see the file and couldn’t get it across to the standby site.

Upon further investigation I found that the DBA had actually removed the archived logs to make some room in the filesystem, without realizing that his action had removed the most current one, which was yet to be transmitted to the remote site. The mystery of why the FAL did not get that log was finally cleared up.

Solution

Now that I knew the cause, the focus was on the resolution. If the archived log sequence# 700 were available on the primary, I could have easily copied it over to the standby, registered the log file and let the managed recovery process pick it up. But unfortunately, the file was gone and I couldn't just recreate it. Until that logfile is applied, recovery will not move forward. So, what were my options?

One option is of course to recreate the standby - possible, but not practical considering the time required. The other option is to apply an incremental backup of the primary taken from that SCN number. That's the key: the backup must be from a specific SCN number. I have described the process since it is not very obvious. The following shows the step-by-step approach for resolving this problem. I have marked where each action must be performed: [Standby] or [Primary].

1. [Standby] Stop the managed standby apply process:

SQL> alter database recover managed standby database cancel;

Database altered.

2. [Standby] Shutdown the standby database

3. [Primary] On the primary, take an incremental backup from the SCN number where the standby has been stuck:

RMAN> run {
2> allocate channel c1 type disk format '/u01/oraback/%U.rmb';
3> backup incremental from scn 1301571 database;
4> }

using target database control file instead of recovery catalog
allocated channel: c1
channel c1: sid=139 devtype=DISK

Starting backup at 18-DEC-09
channel c1: starting full datafile backupset
channel c1: specifying datafile(s) in backupset
input datafile fno=00001 name=/u01/oradata/DEL1/datafile/o1_mf_system_5bhbh59c_.dbf
… …
piece handle=/u01/oraback/06l16u1q_1_1.rmb tag=TAG20091218T083619 comment=NONE
channel c1: backup set complete, elapsed time: 00:00:06
Finished backup at 18-DEC-09
released channel: c1

4. [Primary] On the primary, create a new standby controlfile:

SQL> alter database create standby controlfile as '/u01/oraback/DEL1_standby.ctl';

Database altered.

5. [Primary] Copy these files to standby host:

oracle@oradba1 /u01/oraback# scp *.rmb *.ctl oracle@oradba2:/u01/oraback
oracle@oradba2's password:
06l16u1q_1_1.rmb 100% 43MB 10.7MB/s 00:04
DEL1_standby.ctl 100% 43MB 10.7MB/s 00:04

6. [Standby] Bring up the instance in nomount mode:

SQL> startup nomount

7. [Standby] Check the location of the controlfile:

SQL> show parameter control_files

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
control_files string /u01/oradata/standby_cntfile.ctl

8. [Standby] Replace the controlfile with the one you just created on the primary:

9. $ cp /u01/oraback/DEL1_standby.ctl /u01/oradata/standby_cntfile.ctl

10. [Standby] Mount the standby database:

SQL> alter database mount standby database;
11. [Standby] RMAN does not know about these files yet, so you must let it know through a process called cataloging. Catalog these files:

$ rman target=/

Recovery Manager: Release 10.2.0.4.0 - Production on Fri Dec 18 06:44:25 2009

Copyright (c) 1982, 2007, Oracle. All rights reserved.

connected to target database: DEL1 (DBID=846390698, not open)
RMAN> catalog start with '/u01/oraback';

using target database control file instead of recovery catalog
searching for all files that match the pattern /u01/oraback

List of Files Unknown to the Database
=====================================
File Name: /u01/oraback/DEL1_standby.ctl
File Name: /u01/oraback/06l16u1q_1_1.rmb

Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /u01/oraback/DEL1_standby.ctl
File Name: /u01/oraback/06l16u1q_1_1.rmb

12. Recover the database using the cataloged files:

RMAN> recover database;

Starting recover at 18-DEC-09
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: /u01/oradata/DEL2/datafile/o1_mf_system_5lptww3f_.dbf
...…
channel ORA_DISK_1: reading from backup piece /u01/oraback/05l16u03_1_1.rmb
channel ORA_DISK_1: restored backup piece 1
piece handle=/u01/oraback/05l16u03_1_1.rmb tag=TAG20091218T083619
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07

starting media recovery

archive log thread 1 sequence 8012 is already on disk as file /u01/oradata/1_8012_697108460.dbf
archive log thread 1 sequence 8013 is already on disk as file /u01/oradata/1_8013_697108460.dbf
… …

13. After some time, the recovery fails with the message:

archive log filename=/u01/oradata/1_8008_697108460.dbf thread=1 sequence=8009
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 12/18/2009 06:53:02
RMAN-11003: failure during parse/execution of SQL statement: alter database recover logfile '/u01/oradata/1_8008_697108460.dbf'
ORA-00310: archived log contains sequence 8008; sequence 8009 required
ORA-00334: archived log: '/u01/oradata/1_8008_697108460.dbf'

This happens because we have come to the last of the archived logs. The expected archived log with sequence# 8009 has not been generated yet.

14. At this point, exit RMAN and start the managed recovery process:

SQL> alter database recover managed standby database disconnect from session;

Database altered.

15. Check the SCNs on the primary and standby:

[Standby] SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1447474
[Primary] SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1447478
Now they are very close to each other.
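The apply progress can also be tracked from the standby's archived log view; a minimal sketch (V$ARCHIVED_LOG and its APPLIED column are standard):

SQL> select max(sequence#) last_received,
            max(decode(applied, 'YES', sequence#)) last_applied
     from v$archived_log
     where thread# = 1;

As long as last_applied keeps closing in on last_received, the managed recovery process is working through the backlog.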

Thursday, 15 December 2011

Active Session History (ASH)

How to run ASHRPT.SQL script

To generate a text report of ASH information, run the ashrpt.sql script at the SQL prompt:

@$ORACLE_HOME/rdbms/admin/ashrpt.sql

First, you need to specify whether you want an HTML or a text report.

Enter value for report_type: text

Specify the time frame to collect ASH information by first specifying the begin time in minutes prior to the system date.

Enter value for begin_time: -10
Note: here you just enter a number of minutes, e.g. -10 for 10 minutes before the current time.

Next, enter the duration in minutes for which you want to capture ASH information, starting from the begin time. The default duration (system date minus begin time) is accepted in the following example:

Enter value for duration:
Note: left blank to accept the default duration, which is SYSDATE minus the begin time.

The report in this example will gather ASH information beginning from 10 minutes before the current time and ending at the current time. Next, accept the default report name or enter a report name. The default name is accepted in the following example:

Enter value for report_name:
Using the report name ashrpt_1_0310_0131.txt
Note: left blank to accept the default report name.

The session history report is generated.

Purpose of ASH

ASH is a way of sampling the state of sessions connected to an Oracle database in order to monitor database load as well as drill down into any performance issues that may arise. Any session that is connected to the database and is waiting for an event that does not belong to the Idle wait class is considered an active session. ASH maintains a fixed-size circular buffer in the database System Global Area (SGA), which is allocated at database start-up. By default, ASH samples once every second and logs some 30 to 60 (depending on version) pieces of information on any session that is active at the time of sampling.

Each session sample is a set of rows, and the V$ACTIVE_SESSION_HISTORY view returns one row for each active session per sample, returning the latest session sample rows first. Because the active session samples are stored in a fixed-size circular buffer in the SGA, the greater the system activity, the smaller the number of seconds of session activity that can be stored in the buffer. This means that the duration for which a session sample appears in the V$ view, or the number of seconds of session activity that is displayed in the V$ view, is completely dependent on the database activity.
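As a quick illustration of what can be pulled from this view, here is a minimal sketch of a query listing the wait events sampled over the last 10 minutes (the 10-minute window is just an example):

SQL> select event, count(*) samples
     from v$active_session_history
     where sample_time > sysdate - 10/1440
       and session_state = 'WAITING'
     group by event
     order by samples desc;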

ASH Analysis

1. Online Analysis
2. Offline Analysis
Let's start with ONLINE analysis
========================
I have opened two sessions. The first one (SID 16) is running DML like:
SQL> delete test where rowid='AAAKB9AAEAAAAAiAAA';
From the second session (SID 15) I run the same DML, and it is obvious that the second
session will wait for the first session to commit.
Let's check the info in V$ACTIVE_SESSION_HISTORY.
------------------------------------------------
Run the following script.
SQL> select SESSION_ID,NAME,P1,P2,P3,WAIT_TIME,CURRENT_OBJ#,CURRENT_FILE#,CURRENT_BLOCK#
     from v$active_session_history ash, v$event_name enm
     where ash.event#=enm.event# and SESSION_ID=&SID and SAMPLE_TIME>=(sysdate-&minute/(24*60));
The input is:
Enter value for sid: 15
Enter value for minute: 1  /* How many minutes of activity you want to see */
The output is 59 lines, as the session has been waiting for more than 1 minute:
SESSION_ID NAME                                   P1         P2         P3  WAIT_TIME CURRENT_OBJ# CURRENT_FILE# CURRENT_BLOCK#
---------- ------------------------------ ---------- ---------- ---------- ---------- ------------ ------------- --------------
        15 enq: TX - row lock contention  1415053318     589825        143          0        41085             4             34
        15 enq: TX - row lock contention  1415053318     589825        143          0        41085             4             34
        15 enq: TX - row lock contention  1415053318     589825        143          0        41085             4             34
continue .............
        15 enq: TX - row lock contention  1415053318     589825        143          0        41085             4             34
        15 enq: TX - row lock contention  1415053318     589825        143          0        41085             4             34
        15 enq: TX - row lock contention  1415053318     589825        143          0        41085             4             34
So you have the object details along with the problem info; the object can be looked up as shown below.
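The CURRENT_OBJ# seen above can be translated into an object name through DBA_OBJECTS (its OBJECT_ID column corresponds to CURRENT_OBJ#; 41085 is the value from the sample output above):

SQL> select owner, object_name, object_type
     from dba_objects
     where object_id = 41085;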
 
Let's do the OFFLINE analysis of ASH
==============================
So if you are not able to find the problem online, you can dump ASH to a trace file.
The command looks like the ones below, where level means minutes; let's dump 10 minutes of history:
1. SQL> alter session set events 'immediate trace name ashdump level 10';
or
2. SQL> alter system set events 'immediate trace name ashdump level 10';
or
3. SQL> oradebug setmypid
    SQL> oradebug dump ashdump 10;
So you will get the trace file in udump.

ASH Contains

* SQL identifier of the SQL statement (SQL_ID - identifier of the SQL statement that the session was executing at the time of sampling)
* Object number, file number, and block number (CURRENT_OBJ# - object ID of the object that the session is referencing, CURRENT_FILE# - number of the file containing the block that the session is referencing, CURRENT_BLOCK# - ID of the block that the session is referencing)
* Wait event identifier (EVENT_ID - identifier of the resource or event for which the session is waiting or for which the session last waited)
* Session identifier and session serial number (SID, SERIAL# - the serial number is used to uniquely identify a session's objects)
* Module and action name (MODULE - name of the executing module when sampled, ACTION - name of the executing action when sampled)
* Client identifier of the session (CLIENT_ID)
* Service hash identifier (SERVICE_HASH - hash that identifies the service)