Saturday, 27 July 2013

Difference Between OBSOLETE and EXPIRED Backups

RMAN considers backups of datafiles and control files obsolete, that is, no longer needed for recovery, according to the retention criteria we specify in the CONFIGURE command. We can then use the REPORT OBSOLETE command to view obsolete files and DELETE OBSOLETE to delete them.

For example, suppose we set our retention policy to REDUNDANCY 2. This means we always want to keep at least two backups; when we take a third backup, the oldest one becomes obsolete because three now exist and we only want to keep two. If the flash recovery area is full, obsolete backups can be overwritten.


A status of "expired" means that the backup piece or backup set is missing from the backup destination. Backup information is held in the control file and the recovery catalog, so the control file may believe there is a backup under a given directory and name even though someone has deleted the file at the operating system level. We can run the CROSSCHECK command to check whether these files exist; if RMAN finds a file missing, it marks that backup record as EXPIRED, meaning the file no longer exists.
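In practice, managing the two states looks something like this (a minimal sketch; the REDUNDANCY 2 setting matches the example above):

RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
RMAN> REPORT OBSOLETE;
RMAN> DELETE OBSOLETE;
RMAN> CROSSCHECK BACKUP;
RMAN> LIST EXPIRED BACKUP;
RMAN> DELETE EXPIRED BACKUP;

REPORT OBSOLETE and DELETE OBSOLETE work purely from the retention policy, while CROSSCHECK BACKUP physically checks each piece on disk or tape and marks the missing ones EXPIRED, so DELETE EXPIRED BACKUP can then remove their records from the control file/catalog.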

Tuesday, 23 July 2013

Can we rebuild an Oracle BITMAP index using the REBUILD ONLINE option?

Recently the development team came up with a new plan for the index rebuild maintenance activity: their own job will rebuild the indexes instead of a DBA running the rebuilds manually...

What will their code do?
Create a procedure at the database level - this procedure generates a dynamic index rebuild script covering the indexes with BLEVEL > 2.

Note: we frequently rebuild the indexes with BLEVEL > 2 to avoid fragmentation.
I suggested to the developers that they rebuild with PARALLEL 4 (degree value) for performance, so the indexes rebuild faster, and that they reset the degree to its default once the rebuild is done.
I also asked them to use the REBUILD ONLINE option for all indexes except bitmap indexes, because I believed we can't rebuild a bitmap index online.
After testing in a non-production region, they reported: "We are able to rebuild a BITMAP index online." I had rebuilt bitmap indexes several times before, but had only ever used the plain REBUILD option for them.
I was not sure, so I tested it again myself.
Conclusion: from Oracle 10g onwards, we can rebuild a BITMAP index using REBUILD ONLINE.
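For reference, a minimal sketch of the kind of dynamic rebuild script their procedure could generate (the owner filter is an illustrative assumption; the BLEVEL > 2 condition, PARALLEL 4 and ONLINE come from our standard above):

SELECT 'alter index ' || owner || '.' || index_name ||
       ' rebuild online parallel 4;'
FROM   dba_indexes
WHERE  blevel > 2
AND    owner NOT IN ('SYS', 'SYSTEM');

After the rebuilds finish, each index should be reset to its default degree, e.g. alter index AM.AM_objects_idx noparallel;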

On Oracle 11g:

SQL> create table AM.AM_objects as select * from dba_objects;
Table created.

SQL> create bitmap index AM.AM_objects_idx on AM.AM_objects(object_id);
Index created.

SQL> alter index AM.AM_objects_idx rebuild online;
Index altered.

On Oracle 10g:

SQL> create table AM.AM_objects as select * from dba_objects;
Table created.

SQL> create bitmap index AM.AM_objects_idx on AM.AM_objects(object_id);
Index created.

SQL> alter index AM.AM_objects_idx rebuild online;
Index altered.

On Oracle 9i:

SQL> create table AM.AM_objects as select * from dba_objects;
Table created.

SQL> create bitmap index AM.AM_objects_idx on AM.AM_objects(object_id);
Index created.

SQL> alter index AM.AM_objects_idx rebuild online;
alter index AM.AM_objects_idx rebuild online
*
ERROR at line 1:
ORA-08108: may not build or rebuild this type of index online

Monday, 8 July 2013

Automatic Monitoring and Alerting

Oracle 10g introduced the MMON background process, which collects metrics and statistics.
No more complex queries for monitoring the database.
No more heavy load on the database from our own monitoring scripts.
Just query one view or two.
Of course, you can enhance this mechanism by changing the thresholds and pushing the alerts out via mail...
All of this can be done via SQL or by using Enterprise Manager / Database Console.
Warning and critical thresholds can be set using the DBMS_SERVER_ALERT package.
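A minimal sketch that sets tablespace-usage thresholds (the 85/95 values and the USERS tablespace are illustrative assumptions):

BEGIN
  DBMS_SERVER_ALERT.SET_THRESHOLD(
    metrics_id              => DBMS_SERVER_ALERT.TABLESPACE_PCT_FULL,
    warning_operator        => DBMS_SERVER_ALERT.OPERATOR_GE,
    warning_value           => '85',
    critical_operator       => DBMS_SERVER_ALERT.OPERATOR_GE,
    critical_value          => '95',
    observation_period      => 1,    -- in minutes
    consecutive_occurrences => 1,
    instance_name           => NULL, -- NULL for database-wide objects
    object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE,
    object_name             => 'USERS');
END;
/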

Threshold definitions can be reviewed by using:
SELECT object_name, metrics_name, warning_value, critical_value
FROM dba_thresholds;

Outstanding alerts can be reviewed by using:
SELECT reason FROM dba_outstanding_alerts;
Once an alert is cleared, it can be reviewed by querying DBA_ALERT_HISTORY.

Sunday, 7 July 2013

WRH$_ACTIVE_SESSION_HISTORY table Does Not Get Purged

Yesterday I received a tablespace alert: "SYSAUX tablespace threshold reached 95%".
The SYSAUX tablespace total size was around 4.5 GB.

I thought the SYSAUX tablespace growth was abnormal.
I checked that the AWR retention is 7 days, which means AWR keeps the last 7 days of historical records and purges everything older automatically.

V$SYSAUX_OCCUPANTS also showed that AWR was using the most space in the SYSAUX tablespace.
The SYS.WRH$_ACTIVE_SESSION_HISTORY table and its indexes were the largest objects in SYSAUX.
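The occupant check is a simple query (a minimal sketch):

SELECT occupant_name, schema_name, space_usage_kbytes
FROM   v$sysaux_occupants
ORDER  BY space_usage_kbytes DESC;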

I verified how many snapshots were stored in the Automatic Workload Repository.
It held around 4000 snapshots, covering roughly the last 6 months of AWR data (the snapshot interval is 1 hour).
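These figures can be checked directly (a minimal sketch):

SELECT snap_interval, retention FROM dba_hist_wr_control;

SELECT MIN(snap_id), MAX(snap_id), COUNT(*) AS snapshots
FROM   dba_hist_snapshot;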

AWR had not purged the SYS.WRH$_ACTIVE_SESSION_HISTORY table.

Finally, I manually purged the AWR snapshots and reclaimed 1.5 GB of free space in the SYSAUX tablespace.

SQL> exec DBMS_WORKLOAD_REPOSITORY.drop_snapshot_range(low_snap_id => 1500, high_snap_id => 5500);
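If the retention or interval itself needs adjusting, the same package can do that too (a sketch; both values are in minutes and only illustrative):

SQL> exec DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(retention => 10080, interval => 60);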

Why didn't AWR purge it automatically?

Please refer to the MetaLink note below for more details (applies to 10.2.0.1 through 10.2.0.5):
WRH$_ACTIVE_SESSION_HISTORY Does Not Get Purged [ID 387914.1]

Wednesday, 3 July 2013

Why were my indexes not imported using impdp?

Last week I faced a fragmentation issue.
I followed the steps below, knowing that during an import impdp creates indexes much faster than manual index creation.

Steps:

1. Export the table
2. Truncate the table
3. Drop the indexes - the table was very large, so importing with the indexes in place would cause a performance issue. So I dropped the indexes.
4. Import the table

After the table was imported, I checked the index status: the index had not been created.
So I created the indexes manually, which took some time.

Why didn't impdp import the indexes?

I ran some test cases and found the root cause.
===========================================================================
-- Create a test table and index
SQL> create table AM_TEST as select * from dba_objects;
Table created.

SQL> select count(*) from AM_TEST; --60935 rows
SQL> create index obj_idx_AM_test on AM_TEST(object_id);
Index created.

-- Export the test table
$ cat exp_Manoj_AM_TEST_Jun22.par

userid=Manoj/Manoj
DIRECTORY=DP
DUMPFILE=exp_Manoj_tables_AM_TEST%u.dmp
LOGFILE=exp_Manoj_tables_AM_TEST.log
PARALLEL=4
ESTIMATE=STATISTICS
JOB_NAME=EXP_AM_Manoj
compression=ALL
TABLES=(Manoj.AM_TEST)

Take the table backup
$ expdp parfile=exp_Manoj_AM_TEST_Jun22.par
Export: Release 11.2.0.2.0 - Production on Fri Jun 22 14:17:19 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "Manoj"."EXP_AM_Manoj":  Manoj/******** parfile=exp_Manoj_AM_TEST_Jun22.par
Estimate in progress using STATISTICS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
.  estimated "Manoj"."AM_TEST"                             7 MB
Total estimation using STATISTICS method: 7 MB
. . exported "Manoj"."AM_TEST"                         647.9 KB   60935 rows
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Master table "Manoj"."EXP_AM_Manoj" successfully loaded/unloaded
******************************************************************************
Dump file set for Manoj.EXP_AM_Manoj is:
  /U01/backup/exp_Manoj_tables_AM_TEST01.dmp
  /U01/backup/exp_Manoj_tables_AM_TEST02.dmp
Job "Manoj"."EXP_AM_Manoj" successfully completed at 14:17:29

--Truncated the table and dropped the indexes
--Import the table
$ cat imp_467302_AM_TEST_Jun22.par
userid=Manoj/Manoj
DIRECTORY=DP
DUMPFILE=exp_Manoj_tables_AM_TEST%u.dmp
LOGFILE=imp_Manoj_tables_AM_TEST.log
PARALLEL=4
JOB_NAME=EXP_AM_Manoj
TABLES=(Manoj.AM_TEST)
TABLE_EXISTS_ACTION=APPEND

After the import, the data was loaded but the index was not there; I had reproduced the same issue.

SQL> select count(*) from Manoj.AM_test;             --60935 rows

SQL> select owner,index_name,table_name from dba_indexes where table_name='AM_TEST';

no rows selected

So I truncated the table again and re-imported, this time changing the parameter to TABLE_EXISTS_ACTION=REPLACE instead of APPEND:

$ cat imp_467302_AM_TEST_Jun22.par
userid=Manoj/Manoj
DIRECTORY=DP
DUMPFILE=exp_Manoj_tables_AM_TEST%u.dmp
LOGFILE=imp_Manoj_tables_AM_TEST.log
PARALLEL=4
JOB_NAME=EXP_AM_Manoj
TABLES=(Manoj.AM_TEST)
TABLE_EXISTS_ACTION=REPLACE

--Import the table

$ impdp parfile=imp_467302_AM_TEST_Jun22.par

Import: Release 11.2.0.2.0 - Production on Fri Jun 22 15:51:22 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "Manoj"."EXP_AM_Manoj" successfully loaded/unloaded
Starting " Manoj "."EXP_AM_ Manoj ":  Manoj /******** parfile=imp_467302_AM_TEST_Jun22.par
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported " Manoj "."AM_TEST"                         647.9 KB   60935 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Job " Manoj"."EXP_AM_ Manoj" successfully completed at 15:51:23


SQL> select owner,index_name,table_name from dba_indexes where table_name='AM_TEST';
OWNER                          INDEX_NAME                     TABLE_NAME
------------------------------ ------------------------------ ------------------------------
Manoj                        OBJ_IDX_AM_TEST                AM_TEST

Cause:

During the import we used the TABLE_EXISTS_ACTION=APPEND parameter. With this setting, Data Pump only loads data into the existing table and skips dependent objects such as indexes.

Solution:

1. Instead of truncating, drop the table and then import it. (or)
2. Use TABLE_EXISTS_ACTION=REPLACE on the import. (Internally it drops and recreates the table, including its indexes.)
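If neither option is possible, one workaround sketch is to pull just the index DDL out of the dump and run it by hand (the SQLFILE name is an illustrative assumption):

$ impdp Manoj/Manoj DIRECTORY=DP DUMPFILE=exp_Manoj_tables_AM_TEST%u.dmp TABLES=Manoj.AM_TEST INCLUDE=INDEX SQLFILE=AM_TEST_indexes.sql

With SQLFILE, impdp writes the CREATE INDEX statements to AM_TEST_indexes.sql instead of executing them; the script can then be run in SQL*Plus to recreate the indexes.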
  
I am not sure, but I believe TABLE_EXISTS_ACTION=APPEND does not import any DDL such as index or constraint statements, statistics, or grants…
Need to dig some more :-)

Which Sessions Are Generating Lots of Redo Logs in Oracle?

Last week I got a critical mail alert from one of my production databases: "archive log destination file system reached 98%".

We take the archive log backup every 3 hours using an RMAN script. I checked the backup log and the last run was successful. Since the RMAN jobs were running fine, we concluded that a lot of DML activity on the database was generating the large number of archive log files.

If the file system reaches 100%, the database will hang and database connections will fail.
To avoid this, I manually ran the RMAN archive log backup script and changed the archive log location to a new file system.

Then I investigated: which session/transaction is generating the most redo?

I found a good article on www.oracle.com.

Using the queries below, we can easily identify which session/transaction is generating the most redo.

Query 1:

V$Sess_io & V$Session

SELECT s.sid, s.serial#, s.username, s.program,
i.block_changes
FROM v$session s, v$sess_io i
WHERE s.sid = i.sid
ORDER BY 5 desc;

The BLOCK_CHANGES column shows how many blocks the session has changed.

Query 2:

The query below shows the number of undo blocks and undo records accessed by each transaction.


SELECT s.sid, s.serial#, s.username, s.program,
t.used_ublk, t.used_urec
FROM v$session s, v$transaction t
WHERE s.taddr = t.addr
ORDER BY 5 DESC, 6 DESC;
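A third option (a sketch of my own, based on the 'redo size' statistic in V$SESSTAT rather than the article) reports the cumulative redo bytes each session has generated:

SELECT s.sid, s.serial#, s.username, s.program, st.value AS redo_bytes
FROM   v$session s, v$sesstat st, v$statname sn
WHERE  s.sid = st.sid
AND    st.statistic# = sn.statistic#
AND    sn.name = 'redo size'
ORDER  BY st.value DESC;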

Note: one of our batch cycles normally processes 2 million records; that day it processed around 10 million records, which is why we were getting so many redo/archive logs. I informed the application team and they agreed to notify our team whenever a feed file exceeds 3 million records.

Tuesday, 2 July 2013

User Managed Backup --- Oracle

User Managed Backup

1. Physical Backup

1. Cold Backup (Consistent Backup)
2. Hot Backup (Inconsistent Backup)

2. Logical Backup

1. Export / Import
2. Expdp / Impdp (available from Oracle 10g onwards)

Now let's look at the physical backups.


Cold Backup (Consistent Backup)


A cold (or offline) backup is a backup performed while the database is shut down and unavailable to its users. Cold backups can be taken regardless of whether the database is in ARCHIVELOG or NOARCHIVELOG mode. It is easier to restore from offline backups because no recovery (from archived logs) is required to make the database consistent.

Why do you say a cold backup is a consistent backup?


We take the cold backup after shutting down the database. At that point the SCN in all datafile headers is the same, so when we restore a cold backup, no redo is needed for recovery. We back up the datafiles, logfiles, controlfile, parameter file and password file.


Cold Backup Steps:

1. Get the file path information using the queries below.

Select name from v$datafile;
select member from v$logfile;
select name from v$controlfile;

2. Take a backup of the password file and parameter file (optional).
3. Alter system switch logfile;
4. Shutdown immediate
5. Copy all the datafiles / controlfiles / logfiles to the backup path using OS commands.
6. Start up the database.
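Put together, a minimal cold-backup transcript might look like this, run from a SYSDBA session (the /u01/oracle/raja source path and /u01/backup destination are illustrative assumptions):

SQL> alter system switch logfile;
SQL> shutdown immediate
SQL> host cp /u01/oracle/raja/*.dbf /u01/backup/
SQL> host cp /u01/oracle/raja/*.ctl /u01/backup/
SQL> host cp /u01/oracle/raja/*.log /u01/backup/
SQL> startup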


Hot Backup (Inconsistent Backup)

A hot (or online) backup is a backup performed while the database is open and available for use (read and write activity). Except for Oracle exports, online backups are only possible when the database is in ARCHIVELOG mode.

Why do you say a hot backup is an inconsistent backup?

We take the hot backup while the database is running. During this time, the datafile header SCNs of the tablespace being backed up do not match those of the other tablespaces' datafiles. When we restore a hot backup, redo is needed for recovery.

Hot Backup Steps:

1. Get the file path information using the query below.
Select tablespace_name, file_name from dba_data_files order by 1;

2. Put the tablespace in begin backup mode, then copy the datafiles belonging to that tablespace to the backup path using OS commands.
3. Once the datafiles are copied, take the tablespace out of begin backup mode with end backup.
4. Repeat steps 1-3 for all your tablespaces.
5. Take the controlfile backup.


Alter Database backup controlfile to '/u01/backup/control01.ctl';   --> binary format

The following are in readable (text) format, used for cloning:

Alter Database backup controlfile to trace;
Alter database backup controlfile to trace as '/u01/backup/control01.ora';

6. Back up all the archive log files generated between the previous backup and this backup.
7. Take a backup of the password file and parameter file (optional).

Example steps:

1. Alter tablespace system begin backup;
To confirm the tablespace is in begin backup mode, use the query below (refer to the CHANGE# and TIME columns):
Select * from v$backup;

2. Host cp /u01/oracle/raja/system1.DBF /u01/backup/system01.dbf   --> copy the datafile using an OS command.

3. Alter tablespace system end backup;
To confirm the tablespace has been released from backup mode, query again:
Select * from v$backup;