Saturday, 14 December 2013

How to Kill Idle Session

    Consider a situation where the DBA has a maintenance task coming up. He has already asked all database users to stop using the database, but he finds that some users are still connected, and he cannot reach a particular user (perhaps that user is away from his desk). In that case the DBA chooses to kill the session. For this purpose you can either use Toad or run the PL/SQL block below.
declare
   ora_user varchar2(30) default 'HRMS'; --Oracle Username to kill all sessions for
begin
   execute immediate 'alter user '||ora_user||' account lock';
   for crs in (select sid,serial# from v$session where username = ora_user) loop
      execute immediate 'alter system kill session '''||crs.sid||','||crs.serial#||'''';
   end loop;
end;
/
How to Kill idle Session of particular Schema
Contents: kill_idle_sessions.bat
sqlplus orafin/**** @d:\kill_idle_sessions.sql

Contents: kill_idle_sessions.sql
connect / as sysdba
exec kill_idle_sessions
exit
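The kill_idle_sessions procedure called by the script above is not shown; a minimal sketch might look like the following (the schema name 'HRMS' and the one-hour idle threshold are assumptions — adjust them to your environment; the owner needs a direct SELECT grant on v$session):

```sql
-- Hypothetical sketch of a kill_idle_sessions procedure: kills sessions of
-- one schema that have been inactive for more than an hour.
create or replace procedure kill_idle_sessions as
begin
   for crs in (select sid, serial#
               from   v$session
               where  username = 'HRMS'          -- target schema (assumption)
               and    status   = 'INACTIVE'
               and    last_call_et > 3600)       -- idle for more than 1 hour
   loop
      execute immediate 'alter system kill session '''
                        || crs.sid || ',' || crs.serial# || ''' immediate';
   end loop;
end;
/
```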

Wednesday, 11 December 2013

ORA-29913: error in executing ODCIEXTTABLEOPEN callout

ORA-29913: error in executing ODCIEXTTABLEOPEN callout

Yesterday I worked on a DB migration: exporting a schema and importing it into another database.
I faced some issues which I would like to share.
The database version is 10.2.0.4 and the server is IBM AIX 5.3. The schema size is around 25 GB, so I used Data Pump and got the below error.

ORA-31693: Table data object "MANOJ"."RB_TABLE" failed to load/unload and
is being skipped due to error:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-39077: unable to subscribe agent ORA-39077: unable to subscribe
agent KUPC$A_1_20101210

I checked several URLs and they said it is related to external tables, but my table is a normal table.
I got the error only for "MANOJ"."RB_TABLE"; the remaining tables were exported successfully.
I suspected the PARALLEL parameter. While exporting with expdp I had used parallel=6.
I then tried parallel=2, 3 and 4 and got the same error.
Finally I set parallel=1 (the default value, so there is no need to specify it) and the export completed successfully.
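For reference, the expdp invocations are sketched below (the directory object DP_DIR, the dump file names and the credentials are placeholders, not the ones actually used):

```shell
# Export with 6 parallel workers (the run that failed with ORA-29913):
expdp system/**** schemas=MANOJ directory=DP_DIR \
      dumpfile=manoj_%U.dmp logfile=manoj_exp.log parallel=6

# Export with the default parallel=1 (the run that succeeded);
# the parallel parameter can simply be omitted:
expdp system/**** schemas=MANOJ directory=DP_DIR \
      dumpfile=manoj.dmp logfile=manoj_exp.log
```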

Note:

My server has 16 CPUs and parallel slaves were available on the database side.

Friday, 22 November 2013

ORA-00406: COMPATIBLE parameter needs to be 10.0.0.0.0 or greater

Recently I worked on a UAT database migration from one server to a new server. Once I got the request, I checked the space requirements and the other checklist details on the new server as well. Everything looked good.

At the scheduled time, I started the RMAN online backup with the compression option, because I had limited space in the backup location and no other file system to place the RMAN backup on.
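The backup was started with something like the command below (a sketch; channel configuration and tags are omitted):

```sql
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE;
```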

When the RMAN backup started, I got the below error.


RMAN-03009: failure of backup command on db_ch2 channel at 02/29/2013 06:22:28
ORA-00406: COMPATIBLE parameter needs to be 10.0.0.0.0 or greater
ORA-00722: Feature "Backup Compression"
continuing other job steps, job failed will not be re-run
released channel: db_ch1
released channel: db_ch2
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on db_ch1 channel at 02/29/2013 06:22:28
ORA-00406: COMPATIBLE parameter needs to be 10.0.0.0.0 or greater
ORA-00722: Feature "Backup Compression"
  
I checked the COMPATIBLE parameter: it was set to 9.2.0, but the database was running on 10.2.0.4.
We needed to change the COMPATIBLE parameter to 10.2.0.
I checked with the application team and tried to get approval for a quick database recycle, but they were doing some crucial tests and did not allow a DB recycle. After one week I got the approval, changed the init parameter and recycled the database. So while checking the prerequisites we should cover everything; if we miss anything, it comes back to bite us :-(
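The change itself is a one-line parameter edit followed by a recycle. A sketch, assuming an spfile is in use (with a plain init.ora you would edit the file directly instead):

```sql
SQL> ALTER SYSTEM SET compatible = '10.2.0' SCOPE = SPFILE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;
```

Note that COMPATIBLE can only be raised, never lowered, so this change is one-way.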

While bringing up the database, I got the below error.

ORA-32004: obsolete and/or deprecated parameter(s) specified
ORA-19905: log_archive_format must contain %s, %t and %r

SQL> !cat initDBUAT1.ora | grep log_archive_format
log_archive_format = _%t_%s.log

I have changed the log_archive_format init parameter and started the database.

SQL> !cat initDBUAT1.ora | grep log_archive_format
log_archive_format = _%t_%s_%r.log


Wednesday, 13 November 2013

Guaranteed Restore Points in Oracle

Guaranteed restore points came with Oracle 10gR2. They ensure that you can rewind the database to a point in time without traditional point-in-time recovery. They use the db_recovery_file_dest location to keep the changed blocks, even if flashback logging is not enabled for your database. They never age out of the control file and must be explicitly dropped.
A normal restore point assigns a restore point name to an SCN or specific point in time. The control file can maintain a record of thousands of normal restore points with no significant effect on database performance. Normal restore points eventually age out of the control file if not manually deleted, so they require no ongoing maintenance.
SQL>CREATE RESTORE POINT before_upgrade;
Creating a normal restore point eliminates the need to manually record an SCN in advance, or to determine the correct SCN after the fact by using features such as Flashback Query. Like a normal restore point, a guaranteed restore point serves as an alias for an SCN in recovery operations.
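As an illustration of a restore point acting as an SCN alias, it can be referenced directly in flashback operations, for example (the table name is hypothetical, and FLASHBACK TABLE requires row movement to be enabled on the table):

```sql
SQL> ALTER TABLE emp ENABLE ROW MOVEMENT;
SQL> FLASHBACK TABLE emp TO RESTORE POINT before_upgrade;
```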
A guaranteed restore point ensures that you can use Flashback Database to rewind a database to its state at the restore point SCN, even if the generation of flashback logs is disabled. If flashback logging is enabled, then a guaranteed restore point enforces the retention of flashback logs required for Flashback Database to any SCN after the earliest guaranteed restore point.
Caution: If we are using guaranteed restore points with flashback logging turned on, we need to take care to drop restore points that are no longer in use, or to allocate sufficient space for the flashback logs via the init.ora parameter db_recovery_file_dest_size. This is because, when guaranteed restore points exist, the flashback logs they require are retained and are not overwritten under space pressure.
To create the first guaranteed restore point with flashback logging off, start the database in MOUNT state after a consistent shutdown. Once mounted, you can create your first guaranteed restore point as below.
SQL> CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;
You can use the LIST command to list either a specific restore point or all restore points known to the RMAN repository:
LIST RESTORE POINT restore_point_name;
LIST RESTORE POINT ALL;
SCN      RSP Time  Type        Time      Name
-------- --------- ----------  --------  -------------
431843   4-NOV-13              4-NOV-13  NORMAL_RS
448619   4-NOV-13  GUARANTEED  4-NOV-13  GUARANTEED_RS
RMAN indicates the SCN and time of the restore point, the type of restore point, and its name. You can use the V$RESTORE_POINT control file view to check a list of all currently defined restore points (normal and guaranteed).
SQL> SELECT NAME, SCN, TIME, DATABASE_INCARNATION#,
GUARANTEE_FLASHBACK_DATABASE,STORAGE_SIZE
FROM V$RESTORE_POINT;
You can use the below query to find only Guaranteed restore points:
SQL> SELECT NAME, SCN, TIME, DATABASE_INCARNATION#,
GUARANTEE_FLASHBACK_DATABASE,STORAGE_SIZE
FROM V$RESTORE_POINT
WHERE GUARANTEE_FLASHBACK_DATABASE = 'YES';
Note: For normal restore points, STORAGE_SIZE is zero. For guaranteed restore points, STORAGE_SIZE indicates the amount of disk space in the flash recovery area used to retain logs required to guarantee FLASHBACK DATABASE to that restore point.
Dropping Restore Points:
When you are satisfied that you do not need an existing restore point, or when you want to create a new restore point with the name of an existing restore point, you can drop the restore point.
SQL>DROP RESTORE POINT before_app_upgrade;
Note: The same statement is used to drop both normal and guaranteed restore points. Normal restore points eventually age out of the control file, even if not explicitly dropped where as guaranteed restore points never age out of the control file. They remain until they are explicitly dropped.
How to Flashback database till Restore Points:
Login to RMAN and bring the database to mount mode.
FLASHBACK DATABASE TO RESTORE POINT 'before_upgrade';
FLASHBACK DATABASE TO SCN 448619;
ALTER DATABASE OPEN;
Requirement for using Guaranteed Restore Points:
The COMPATIBLE initialization parameter must be set to 10.2.0 or greater.
The database must be running in archivelog mode.
A flash recovery area must be configured.
If flashback database is not enabled, then the database must be mounted (not open) when creating the first guaranteed restore point.
Checking Existing Parameter before using Guaranteed Restore Points:
Check the existing database version. The guaranteed restore point facility is available from Oracle Database version 10.2 onwards.
SQL> Select * from V$Version;
Check the database mode. If it is not in archivelog mode then you need to convert it to archivelog mode. See the separate post on converting to archivelog mode: Convert Noarchive to Archive
SQL> select log_mode from v$database;
SQL> show parameter db_recovery_file_dest;
Check the separate post to set the recovery size and destination: Set Flash Recovery Destination
The DB_FLASHBACK_RETENTION_TARGET parameter should be at least 1 day (24 hours). You need enough space in the flash recovery area for the flashback logs to cover one day of changes.
SQL> show parameter DB_FLASHBACK_RETENTION_TARGET;
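The parameter is specified in minutes, so one day is 1440; it is a dynamic parameter and can be changed online, for example:

```sql
SQL> ALTER SYSTEM SET db_flashback_retention_target = 1440;
```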
If required, set LOG_BUFFER to at least 8 MB to give flashback database more buffer space in memory, and set PARALLEL_EXECUTION_MESSAGE_SIZE to at least 8192. This will improve the media recovery phase of any flashback database operation.
SQL> show parameter LOG_BUFFER;
SQL> show parameter PARALLEL_EXECUTION_MESSAGE_SIZE;
Enable Flashback database. If flashback database is not enabled, then the database must be mounted (not open) when creating the first guaranteed restore point.
SQL> select FLASHBACK_ON from v$database;
SQL> Alter database flashback on;
SQL> Select FLASHBACK_ON from v$database;
SQL> CREATE RESTORE POINT before_date_20121202 GUARANTEE FLASHBACK DATABASE;
Now create or update some object for the test purpose before applying the restore point.
SQL> Create table pay_payment_master_demo as select * from pay_payment_master;
SQL> commit;
SQL> flashback database to restore point before_date_20121202;
ERROR at line 1:
ORA-38757: Database must be mounted and not open to FLASHBACK.

Thus, to use this feature, the database must be in MOUNT mode.
Shutdown immediate;
Startup mount;
SQL> flashback database to restore point before_date_20121202;
Flashback complete.
SQL> alter database open resetlogs;
Database altered.
SQL> select * from pay_payment_master_demo;
ERROR at line 1:
ORA-00942: table or view does not exist
We cannot see the table that we created before the restore. Thus our database has been reverted back to the restore point.

Thursday, 24 October 2013

How to drop and re-create TEMP Tablespace in Oracle

1. Create a new temporary tablespace TEMP2
CREATE Temporary Tablespace TEMP2 tempfile 'd:\oracle\oradata\oradata\temp01' SIZE 1500M, 'd:\oracle\oradata\oradata\temp02' SIZE 1500M;
2. Move the database default temporary tablespace to TEMP2
Alter database default TEMPORARY tablespace TEMP2;
3. Make sure no sessions are using your Old Temp tablespace
SQL>Select username, session_num, session_addr from v$sort_usage;
If the result set contains any rows, then your next step is to find the SID from the V$SESSION view. You can find the session id by using SESSION_NUM or SESSION_ADDR from the previous result set.
SQL> Select sid, serial#, status from v$session where serial#=session_num;
or
SQL> Select sid, serial#, status from v$session where saddr=session_addr;
Now kill the session with the IMMEDIATE option (or you can do it directly from Toad):
SQL> Alter system kill session 'sid,serial#' immediate;
4. Drop temp tablespace
SQL> drop tablespace temp including contents and datafiles;
5. Recreate Tablespace Temp
SQL> create TEMPORARY tablespace TEMP tempfile 'D:\oracle\oradata\temp\temp01' size 1500m;
6. Move the database default temporary tablespace back to TEMP
SQL> Alter database default temporary tablespace TEMP;
7. Drop the intermediate temporary tablespace TEMP2
SQL> drop tablespace TEMP2 including contents and datafiles;
In fact there is no need to shut down the database while doing these operations. If anything happens with the temp tablespace, the Oracle database will ignore the error, but DML and SELECT queries that need sort space will suffer.

Wednesday, 9 October 2013

How to recover or re-create temporary tablespace in 10g

In a database you may discover that your temporary tablespace has been deleted at the OS level, or that it has become corrupted. To get it back you might think about recovering it. The recovery process is simply to restore the temporary file from backup and roll it forward using archived log files, if you are in archivelog mode.
Another solution is simply to drop the temporary tablespace, re-create a new one, and assign the new one as the default temporary tablespace for the database users.
SQL> Select File_Name, File_id, Tablespace_name from DBA_Temp_Files;
FILE_NAME                          FILE_ID TABLESPACE_NAME
---------------------------------- ------- ---------------
D:\ORACLE\ORADATA\MANOJ\TEMP02.DBF       1 TEMP

Take the affected temporary file offline, create a new TEMP tablespace and assign it as the default temporary tablespace:

SQL> Alter database tempfile 1 offline;
SQL> Create temporary tablespace TEMP1 tempfile 'D:\ORACLE\ORADATA\MANOJ\TEMP02.DBF' size 1500M;
SQL> alter database default temporary tablespace TEMP1;


Check for users who are still pointing to the old temp tablespace, reassign them explicitly, and finally drop the old tablespace.
SQL> Select temporary_tablespace, username from dba_users where temporary_tablespace='TEMP';
TEMPORARY_TABLESPACE       USERNAME
--------------------       ---------
TEMP                       SH1
TEMP                       SH2

SQL>alter user SH1 temporary tablespace TEMP1;
SQL>alter user SH2 temporary tablespace TEMP1;
SQL>Drop tablespace temp;

Wednesday, 2 October 2013

BACKUP CURRENT CONTROLFILE creates a Snapshot Controlfile

A short post today.

Recently, there was a discussion on forums about when a Snapshot controlfile is created.  It is known that the snapshot controlfile is created to re-synchronise with the Catalog.

But in my test database environment, I don't use a Catalog.  I only use the controlfile (and backups of it).

One poster said that he ran BACKUP DATABASE followed by BACKUP CURRENT CONTROLFILE, and asked if the same snapshot controlfile was used across the two commands. I pointed out that since they were two *separate* BACKUP calls, Oracle would create a fresh snapshot controlfile for the second BACKUP command.
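Incidentally, RMAN lets you see and change where the snapshot controlfile is written; the path below is just an example:

```sql
RMAN> SHOW SNAPSHOT CONTROLFILE NAME;
RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO
      '/home/oracle/app/oracle/product/11.2.0/dbhome_2/dbs/snapcf_orcl.f';
```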

Here is a simple demo (note : I don't use a Catalog)

First the current state of the directory ($ORACLE_HOME/dbs in my case) where the Snapshot Controlfile would be created :
[oracle@localhost dbs]$ pwd
/home/oracle/app/oracle/product/11.2.0/dbhome_2/dbs
[oracle@localhost dbs]$ date
Mon Sep 23 08:17:46 PDT 2013
[oracle@localhost dbs]$ ls -ltr|tail -3
-rw-r----- 1 oracle oracle 1536 Oct  2  2010 orapworcl
-rw-rw---- 1 oracle oracle 1544 Sep 23 08:16 hc_orcl.dat
-rw-r----- 1 oracle oracle 3584 Sep 23 08:16 spfileorcl.ora
There is no snapshot currently present.

Next, I run a BACKUP CURRENT CONTROLFILE :
[oracle@localhost dbs]$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Mon Sep 23 08:17:53 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL (DBID=1229390655)

RMAN> backup current controlfile;

Starting backup at 23-SEP-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=69 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 23-SEP-13
channel ORA_DISK_1: finished piece 1 at 23-SEP-13
piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2013_05_20/o1_mf_ncnnf_TAG20130520T081800_8snhob80_.bkp tag=TAG20130520T081800 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 23-SEP-13

RMAN> quit

Recovery Manager complete.
The controlfile backup got written to the FRA.

Is there a snapshot controlfile?
[oracle@localhost dbs]$ ls -ltr|tail -3
-rw-rw---- 1 oracle oracle    1544 Sep 23 08:16 hc_orcl.dat
-rw-r----- 1 oracle oracle    3584 Sep 23 08:16 spfileorcl.ora
-rw-rw---- 1 oracle oracle 9748480 Sep 23 08:18 snapcf_orcl.f
[oracle@localhost dbs]$
Yes, a snapshot controlfile did get created ..!

Sunday, 15 September 2013

Recover the Datafile without any prior backup

Today I planned to test “Recover the datafile without prior backup”

Currently there is no failure in the database. I shut down the database and removed the datafile "D:\ORACLE\APP\ORADATA\CHE\TEST.DBF" at the OS level.

Note:
1. The database must be in archivelog mode.
2. We need all the archived log files generated since the lost datafile was created.


1. Log in to the CHE database

C:\>set oracle_sid=CHE

SQL*Plus: Release 11.1.0.7.0 - Production on Wed May 19 14:21:03 2013

Copyright (c) 1982, 2008, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> startup
ORACLE instance started.

Total System Global Area 535662592 bytes
Fixed Size 1348508 bytes
Variable Size 226495588 bytes
Database Buffers 301989888 bytes
Redo Buffers 5828608 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 9 - see DBWR trace file
ORA-01110: data file 9: 'D:\ORACLE\APP\ORADATA\CHE\TEST.DBF'


SQL> alter database datafile 9 offline;

Database altered.

SQL> alter database open;

Database altered.

SQL> select * from v$recover_file;

FILE# ERROR TIME
---------- ----------------------------------- ---------
9 FILE NOT FOUND

SQL> select file#,status from v$datafile;

FILE# STATUS
---------- -------
1 SYSTEM
2 ONLINE
3 ONLINE
4 ONLINE
5 ONLINE
6 ONLINE
7 ONLINE
8 ONLINE
9 RECOVER

9 rows selected.

We need to recover the datafile 9.

SQL> alter database create datafile 'D:\ORACLE\APP\ORADATA\CHE\TEST.DBF';

Database altered.

2. Recover the datafile without backup
C:\>set oracle_sid=che

RMAN> connect target /

Recovery Manager: Release 11.1.0.7.0 - Production on Thu May 19 13:45:17 2013
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to target database: CHE (DBID=3460503574)

RMAN> recover datafile 9;

Starting recover at 19-MAY-13
Using channel ORA_DISK_1

Starting media recovery

archived log for thread 1 with sequence 30 is already on disk as file D:\ORACLE\
APP\FLASH_RECOVERY_AREA\CHE\ARCHIVELOG\2013_05_19\O1_MF_1_30_5Z7CTYPG_.ARC
archived log for thread 1 with sequence 31 is already on disk as file D:\ORACLE\
APP\FLASH_RECOVERY_AREA\CHE\ARCHIVELOG\2013_05_19\O1_MF_1_31_5Z7CV1QG_.ARC
archived log for thread 1 with sequence 32 is already on disk as file D:\ORACLE\
APP\FLASH_RECOVERY_AREA\CHE\ARCHIVELOG\2013_05_19\O1_MF_1_32_5Z7CV7OZ_.ARC
archived log for thread 1 with sequence 33 is already on disk as file D:\ORACLE\
APP\FLASH_RECOVERY_AREA\CHE\ARCHIVELOG\2013_05_19\O1_MF_1_33_5Z7CV9KL_.ARC
archived log for thread 1 with sequence 34 is already on disk as file D:\ORACLE\
APP\FLASH_RECOVERY_AREA\CHE\ARCHIVELOG\2013_05_19\O1_MF_1_34_5Z7D4FOH_.ARC
archived log file name=D:\ORACLE\APP\FLASH_RECOVERY_AREA\CHE\ARCHIVELOG\2013_05_
19\O1_MF_1_30_5Z7CTYPG_.ARC thread=1 sequence=30
archived log file name=D:\ORACLE\APP\FLASH_RECOVERY_AREA\CHE\ARCHIVELOG\2013_05_
19\O1_MF_1_31_5Z7CV1QG_.ARC thread=1 sequence=31
archived log file name=D:\ORACLE\APP\FLASH_RECOVERY_AREA\CHE\ARCHIVELOG\2013_05_
19\O1_MF_1_32_5Z7CV7OZ_.ARC thread=1 sequence=32
media recovery complete, elapsed time: 00:00:01
Finished recover at 19-MAY-13

3. Verify the database

SQL> alter database datafile 9 online;

Database altered.

SQL> select file#,error,time from v$recover_file;

no rows selected

SQL> select file#, status from v$datafile;

FILE# STATUS
---------- -------
1 SYSTEM
2 ONLINE
3 ONLINE
4 ONLINE
5 ONLINE
6 ONLINE
7 ONLINE
8 ONLINE
9 ONLINE

Wednesday, 4 September 2013

Is it Possible to 'DROP' a Datafile from a Tablespace?

In fact, Oracle does not provide an interface for dropping datafiles the way you can drop a schema object such as a table, a view or a user. Once you add a datafile to a tablespace, the datafile cannot simply be removed; but if you need to do it, there are some workarounds that come close.
How to deal with the different scenarios where you need to remove a datafile:
Select file_name, tablespace_name from dba_data_files where tablespace_name ='SDH_TIMS_DBF';
FILE_NAME                                 TABLESPACE_NAME
------------------------------------      ---------------
D:\ORACLE\ORADATA\MANOJ\SDH_TIMS01.DBF   SDH_TIMS_DBF
D:\ORACLE\ORADATA\MANOJ\SDH_TIMS02.DBF   SDH_TIMS_DBF
If the datafile you want to remove is the only datafile in that tablespace then simply drop the entire tablespace:
DROP TABLESPACE <tablespace name> INCLUDING CONTENTS;
Note: Before performing certain operations such as taking tablespaces/datafiles offline, and trying to drop them, ensure you have a full backup.
The DROP TABLESPACE command removes the tablespace, its datafiles and its contents from the data dictionary. Oracle will no longer have access to ANY object that was contained in this tablespace. Unless you specify INCLUDING CONTENTS AND DATAFILES, the physical datafiles must then be removed using an operating system command, because plain DROP TABLESPACE does not physically remove the files.
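For example, using the tablespace from the listing above (run this only with a verified backup in hand):

```sql
-- Drops the tablespace and deletes its files at the OS level as well:
SQL> DROP TABLESPACE sdh_tims_dbf INCLUDING CONTENTS AND DATAFILES;

-- Without AND DATAFILES, the files remain and must be deleted manually,
-- e.g. on Windows:  del D:\ORACLE\ORADATA\MANOJ\SDH_TIMS01.DBF
```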

If you have more than one datafile in the tablespace and you want to keep the objects of the first datafile, then you must export all the objects you want to keep and then drop the tablespace.
Select owner, segment_name, segment_type from dba_segments where tablespace_name='<name of tablespace>';
Note: Make sure you specify the tablespace name in capital letters.
OWNER  SEGMENT_NAME               SEGMENT_TYPE
-----  -------------------------  ---------------
TIMS   GEN_BUYER_OPEN_BALANCE     TABLE
TIMS   GEN_BUYER_PROFILE          TABLE
TIMS   GEN_BUYER_STATEMENT        TABLE
TIMS   GEN_COMPANY_QUANTITY_TYPE  TABLE
TIMS   GEN_CONTRACT_PROFILE       TABLE
TIMS   GEN_CONTRACT_WH_LOCATIONS  TABLE
TIMS   GEN_DEPOSIT_CU             TABLE
TIMS   GEN_DEPOSIT_INSTALLMENT    TABLE
TIMS   STK_ITEM_STATEMENT         TABLE PARTITION
TIMS   USR_SMAN_SALESMAN_FK_I     INDEX
TIMS   AG_DTL_PK                  INDEX
TIMS   AG_DTL_AGING_FK_I          INDEX

Now Re-create the tablespace with the desired datafiles then import the objects into that tablespace. 

If you have just added the datafile and Oracle has not yet allocated any space within it, then you can resize the datafile to make it smaller than 5 Oracle blocks. If the datafile is smaller than 5 Oracle blocks, it will never be considered for extent allocation. At a later date, the tablespace can be rebuilt to exclude the incorrect datafile.
ALTER DATABASE DATAFILE '<filename>' RESIZE <size>;
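As a concrete sketch: with the common 8 KB block size, 5 blocks is 40960 bytes, so resizing to 32K (4 blocks) takes the file below the threshold (the filename is taken from the listing above):

```sql
SQL> ALTER DATABASE DATAFILE 'D:\ORACLE\ORADATA\MANOJ\SDH_TIMS02.DBF' RESIZE 32K;
```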
Note that the OFFLINE DROP command is not included here, because it is not meant to allow you to remove a datafile:
ALTER DATABASE DATAFILE <datafile name> OFFLINE DROP;
ALTER DATABASE DATAFILE <datafile name> OFFLINE;  -- in archivelog mode
What the above command really means is that you are offlining the datafile with the intention of dropping the tablespace. Once the datafile is offline, Oracle no longer attempts to access it, but it is still considered part of that tablespace. The datafile is only marked as offline in the controlfile, and no SCN comparison is done between the controlfile and the datafile during startup. The entry for that datafile is not deleted from the controlfile, which gives us the opportunity to recover that datafile later.