RMAN

From dbawiki
Revision as of 12:16, 20 October 2018 by Stuart (section: Validate backups or restores)

list - tells you what has already been done
report - tells you what needs to be done
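A few standard examples of each:

```
list backup summary;        # what backups have been taken
list archivelog all;        # archived logs RMAN knows about
list copy of database;      # image copies of the database

report need backup;         # datafiles needing backup under the retention policy
report obsolete;            # backups no longer needed under the retention policy
report schema;              # datafiles and tempfiles in the target database
```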


Change date and time format to get better reporting

Set this at unix prompt before starting RMAN

export NLS_DATE_FORMAT="dd-mon-yyyy hh24:mi:ss"
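On Windows the equivalent, before launching RMAN from cmd.exe, would be:

```
set NLS_DATE_FORMAT=dd-mon-yyyy hh24:mi:ss
```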

Crosscheck the archivelogs

change archivelog all crosscheck;

What is the size and duration of the RMAN backup?

set pages 100 lines 200
col mb for 999,999,999
col completed for a11
col sortcol noprint

select vi.instance_name sid
,      to_char(completion_time, 'DD-MON-YYYY') completed
,      completion_time sortcol
,      type
,      round(sum(bytes)/1048576)      mb
,      round(sum(elapsed_seconds)/60) min
from
(
select
case
   when s.backup_type          = 'L'                             then 'ARCHIVELOG'
   when s.controlfile_included = 'YES'                           then 'CONTROLFILE'
   when s.backup_type          = 'D' and s.incremental_level = 0 then 'LEVEL 0'
   when s.backup_type          = 'I' and s.incremental_level = 1 then 'LEVEL 1'
   when s.backup_type          = 'D' and s.incremental_level is null then 'FULL'
   else s.backup_type
end type
,      trunc(s.completion_time) completion_time
,      p.bytes
,      s.elapsed_seconds
from   v$backup_piece p
,      v$backup_set   s
where  p.status    = 'A'
and    p.set_stamp = s.set_stamp
and    p.set_count = s.set_count
union all
select 'datafilecopy' type
,      trunc(completion_time) completion_time
,      output_bytes
,      0 elapsed_seconds
from   v$backup_copy_details
)
,      v$instance vi
group  by vi.instance_name, to_char(completion_time, 'DD-MON-YYYY'), completion_time, type
order  by 1,3,4
/

Delete archivelogs that have been backed up

delete noprompt archivelog all backed up 1 times to sbt_tape;
delete noprompt archivelog until time 'sysdate-1' backed up 1 times to device type 'sbt_tape';
delete noprompt archivelog until time 'sysdate-(1*6/24)' backed up 1 times to device type 'sbt_tape';
delete noprompt archivelog until logseq 42000 backed up 1 times to device type 'sbt_tape';
delete expired archivelog all;

A quick backup - suppressing autobackup

run {
set nocfau;     # this undocumented command will ensure that no autobackup is generated at end of this script
backup as copy database format '/u33/backup/prd1s1/rman/dbf/%U' tag='quick_copy_backup';
backup as copy archivelog all format '/u33/backup/prd1s1/rman/ctl/%U';
backup as copy current controlfile format '/u33/backup/prd1s1/rman/ctl/%U';
}

A quick backup check!

select to_char(start_time,'DD-MON-YY HH24:MI') starttime
,      to_char(end_time,'DD-MON-YY HH24:MI')   endtime
,      round((end_time-start_time)*24*60)      mins
,      input_type
,      status
from   v$rman_backup_job_details
where  start_time > trunc(sysdate)-1
order  by start_time
/

A quick backup using TSM

set echo on
connect target /;
connect catalog rman/rman@rman;

run
{
    allocate channel t0 type sbt_tape parms 'ENV=(TDPO_OPTFILE=/oracle/${ORACLE_SID}/admin/tdpo.opt)';
    backup database plus archivelog;
}

A quick backup using EMC Networker

set echo on
connect target /;
connect catalog rman/rman@rman;

run
{
    allocate channel t0 type sbt_tape parms 'ENV=(NSR_SERVER=hn6000.cln.be,NSR_CLIENT=hn511,NSR_DATA_VOLUME_POOL=DD1DAILY)';
    backup database plus archivelog;
}

A quick backup using Commvault

The library used here is for AIX

set echo on
connect target /;
connect catalog rman/rman@rman;

run
{
    allocate channel t0 type sbt_tape parms 'SBT_LIBRARY=/opt/commvault/Base64/libobk.a(shr.o),ENV=(cvsrcclientname=ci00031701-hn5219)';
    backup database plus archivelog;
}

...and this works on RHEL

set echo on
connect target /;
connect catalog rman/rman@rman;

run
{
    allocate channel t0 type sbt_tape parms 'SBT_LIBRARY=/opt/commvault/Base64/libobk.so,ENV=(cvsrcclientname=ci00028834-hn1018)' ;
    backup database plus archivelog;
}

Backup archivelogs that have not yet been backed up, then delete any older than 72 hours

run {
    allocate channel t1 type 'sbt_tape' format 'LOG_d%d_t%t_s%s_u%u' parms 'env=(tdpo_optfile=/oracle/JDBEOP1/admin/tdpo.opt)';
    backup archivelog all not backed up 1 times tag=bu20140808t170528p5685;
    delete noprompt archivelog until time 'sysdate-(1 * 72 / 24)'  backed up 1 times to device type 'sbt_tape';
}

Backup archivelogs between a range of sequence numbers

run {
    allocate channel t1 device type 'sbt_tape' format 'ARC_d%d_t%t_s%s_u%u' parms 'env=(tdpo_optfile=c:\home\ibmtools\scripts\rman\tdpo_golfp1.opt)';
    backup archivelog from logseq 224801  until logseq 224890;
}

Backup the archivelogs to tape then delete the ones on disk

In a space emergency, backup all the archivelogs to the configured tape and immediately delete them from disk

backup device type sbt_tape archivelog all delete all input;

or ensure the correct device...

run {
    allocate channel c1 type sbt_tape parms 'ENV=(TDPO_OPTFILE=/oracle/${ORACLE_SID}/admin/tdpo.opt)';
    backup device type sbt_tape archivelog all delete all input;
}

Restore archivelogs

    restore archivelog from logseq=38400 until logseq=38499;
    restore archivelog logseq=25444;

Monitoring FRA usage

If the Flash (or Fast) Recovery Area is used for archived redo logs as well as flashback logs, we need to monitor its size

col fsname   heading "Filesystem"     for a50
col space_limit  heading "Allocated GB"   for 999,999,990
col space_used heading "Used GB"        for a20
col percused heading "% Used"         for 90.0
col space_reclaimable  heading "Reclaimable GB" for a20
col percrec  heading "% Reclaimable"  for 90.0
set linesize 1000
with maxsizes as (
select name
,      floor(space_limit/1024/1024/1024) space_limit
from   v$recovery_file_dest
)
,    useds as (
select name
,      ceil(space_used/1024/1024/1024) space_used
from   v$recovery_file_dest
)
,    reclaimables as (
select name
,      (space_reclaimable/1024/1024/1024) space_reclaimable
from   v$recovery_file_dest
)
select m.name  fsname
,      trunc(m.space_limit)  space_limit
,      trunc(u.space_used)  ||' (' ||
       round((u.space_used/m.space_limit)*100,2) ||'%)' space_used
,      trunc(r.space_reclaimable)  ||' (' ||
       round((r.space_reclaimable/m.space_limit)*100,2)  ||'%)' space_reclaimable
from   maxsizes     m
,      useds        u
,      reclaimables r
where  m.name = u.name
and    u.name = r.name
order  by m.name
/

and in detail...

set linesize 1000
select *
from   v$recovery_area_usage
where  percent_space_used > 0;

size taken up by the flashback logs

select estimated_flashback_size
from   v$flashback_database_log;

Validate backups or restores

Validate that the database components are free of corruption

backup <something> validate;

or validate that the database components on tape are free of corruption

restore <something> validate;

eg (using Commvault):

connect target /
connect catalog rman/rman@rman
run {
allocate channel c0 type sbt_tape parms 'SBT_LIBRARY=/opt/commvault/Base64/libobk.a(shr.o),ENV=(cvsrcclientname=ci00031669-hn491)' ;
set until time "to_date('2018-08-17 16:00:00','YYYY-MM-DD HH24:MI:SS')";
restore database validate;
}

or

backup database validate;
backup archivelog all validate;
backup current controlfile validate;
backup tablespace users validate;
backup datafile 1 validate;

or

restore database validate;
restore archivelog all validate;
restore spfile validate;
restore tablespace users validate;

Use "preview" instead of "validate" to see the list of backup pieces that would be used in the restore process. "preview" only reports which pieces would be needed, without reading anything back; "validate" actually reads the backup pieces to check them for corruption.
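For example, to list what a restore would need without reading anything back from tape:

```
restore database preview;
restore database preview summary;
restore archivelog from logseq 38400 until logseq 38499 preview;
```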

Tuning RMAN performance

Identifying Bottlenecks with V$BACKUP_SYNC_IO and V$BACKUP_ASYNC_IO
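A starting-point sketch (swap in v$backup_sync_io when tape I/O is synchronous): a high ratio of long_waits to io_count for a file suggests that file is the bottleneck.

```
select device_type, type, filename
,      io_count, long_waits
,      round(long_waits/io_count*100,2) long_wait_pct
,      effective_bytes_per_second
from   v$backup_async_io
where  io_count > 0
order  by long_wait_pct desc
/
```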

Build a report of RMAN backup pieces using awk to munge the RMAN output

export NLS_DATE_FORMAT='yyyymmddhh24miss'; echo "restore controlfile preview; restore database preview;" | rman target / | awk '
/Finished restore at /{timestamp=$4}                                # remember when the preview finished
/Recovery must be done beyond SCN /{if ($7>scn) scn=$7 }            # keep the highest SCN recovery must reach
/^ *(Piece )?Name: / { sub(/^ *(Piece )?Name: /,"") ; files[$0]=1 } # collect unique backup piece / file names
END{ for (i in files) print i > "files-"timestamp"-SCN-"scn".txt" } # write the deduplicated list to a file
'

If that approach does not work, try this

vi restval.rman
connect target /
connect catalog rman/rman@rman
run {
    allocate channel t1 type 'sbt_tape' parms 'ENV=(NSR_SERVER=hn6000.cln.be, NSR_CLIENT=hn481, NSR_DATA_VOLUME_POOL=DD1DAILY)';
    restore controlfile preview;
    restore database preview;
    release channel t1;
}


cat restval.rman | NLS_DATE_FORMAT='yyyymmddhh24miss' rman | awk '
/Finished restore at /{timestamp=$4}
/Recovery must be done beyond SCN /{if ($7>scn) scn=$7 }
/^ *(Piece )?Name: / { sub(/^ *(Piece )?Name: /,"") ; files[$0]=1 }
END{ for (i in files) print i > "files-"timestamp"-SCN-"scn".txt" }
'

Script to validate the database and all archivelogs needed for a restore

Environment must be set before running...

export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"

rman <<EORMAN
set echo on
connect target;
restore database validate;
exit
EORMAN

if [[ $? != 0 ]]; then
    echo "*** ERROR: RMAN restore validate of database failed"
    exit 1;
fi

cmd=$(sqlplus -S "/ as sysdba" <<'EOSQL'
whenever sqlerror exit sql.sqlcode
whenever oserror exit 1
set newpa none pages 0 head off veri off feed off term off echo off pause off numw 32
select 'restore validate archivelog from scn '||min_first_change#||' until scn '||max_next_change# from v$backup_archivelog_summary;
exit
EOSQL
)

if [[ $? != 0 ]]; then
    echo "*** ERROR: $cmd"
    exit 1;
fi

rman <<EORMAN
set echo on
connect target;
${cmd};
exit
EORMAN

if [[ $? != 0 ]]; then
    echo "*** ERROR: RMAN restore validate of archivelogs failed"
    exit 1;
fi

Restore and recover datafile while database is running

We had a problem where a datafile was created with a bad name: the backspace key was mapped incorrectly, and a typing mistake left non-printable characters embedded in the filename.
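A quick way to spot such names (plain SQL; flags anything containing a non-printable character):

```
select file#, name
from   v$datafile
where  regexp_like(name, '[^[:print:]]');
```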

SQL> alter database datafile 67 offline;

rman target / catalog rman/rman@rman12d
run {
    allocate channel t1 type 'sbt_tape' parms 'ENV=(NSR_SERVER=hn6000.cln.be,NSR_CLIENT=hn491,NSR_DATA_VOLUME_POOL=DD1DAILY)';
    set newname for datafile 67 to '/cln/acc/ora_data3/adsa1/ADS_ARCHIVE_05.dbf';
    restore datafile 67;
    switch datafile 67;
    recover datafile 67;
}

SQL> alter database datafile 67 online;

In version 12c file renaming is more like it should be!

alter database move datafile '/cln/exp/ora_data2/clne/data8/ts_thaler_cu_31.dbf' to '/cln/exp/ora_data2/clne/data9/ts_thaler_cu_31.dbf';

Recover datafile blocks using RMAN

connect target /

backup validate datafile 00091;

run {
    set maxcorrupt for datafile 91 to 32;
    backup validate datafile 00091;
}

blockrecover datafile 91 block 889664;

Quick check that the tdpo configuration is correct

Channel test

run {
    allocate channel c1 type sbt_tape parms 'ENV=(TDPO_OPTFILE=/oracle/${ORACLE_SID}/admin/tdpo.opt)';
}

Quick backup to tape via tdpo

rman nocatalog target /
run {
    allocate channel c1 type sbt_tape parms 'ENV=(TDPO_OPTFILE=/oracle/${ORACLE_SID}/admin/tdpo.opt)';
    backup database;
}

An example of what Networker sends to Unix to be run on the command line

/usr/bin/nsrdasv -z /ux/HN512/backup/nmda.config/nmda_oracle_hn512_11.2.0.4.cfg \
                 -s hn6000.company.be \
                 -g CTI_2_ARCLOG_HN6014_LBK \
                 -LL \
                 -m hn512 \
                 -a DIRECT_ACCESS=No \
                 -l full \
                 -q \
                 -W 78 \
                 -N RMAN:/ux/HN512/backup/rman/backup_scripts/backup-lbk_archivelog.rman \
    rman send 'NSR_ENV (NSR_CLIENT=hn512, NSR_DIRECT_ACCESS=No, NSR_GROUP=CTI_2_ARCLOG_HN6014_LBK, NSR_PARENT_JOBID=11667069, \
               NSR_SERVER=hn6000.company.be, NSR_SAVESET_NAME=RMAN:/ux/HN512/backup/rman/backup_scripts/backup-lbk_archivelog.rman, \
               NSR_MIN_DDBOOST_VERSION=2.5.1.1)' \
         cmdfile '/ux/HN512/backup/rman/backup_scripts/backup-lbk_archivelog.rman'

Clone (or duplicate) a database

  • Connect to destination host
  • Set environment for destination (auxiliary) SID - easiest way is make sure SID is in oratab
  • Check sufficient disk space for datafiles
  • Make a minimal init.ora file - depending on version, it may need some memory parameters and the compatible parameter in addition to db_name
  • Add file_convert parameters to init.ora in case datafile location is different
  • Make directories for audit, datafiles and archivelogs (if new db)
  • Check source db is accessible via TNS (not necessary but useful depending on restore method)
  • Check catalog is accessible via TNS (not necessary but useful depending on restore method)
  • Create a password file (if new db)
  • Shutdown destination db (if already existing)
  • Delete destination db files (if already existing)
  • Startup auxiliary instance in nomount mode referencing the minimal pfile
  • Run an RMAN file like this one to clone the database. This one is specific to EMC (Legato) Networker
export NLS_LANG=american
export NLS_DATE_FORMAT=YYYY-MM-DD:HH24:MI:SS

rman target sys/sys@${FROM_SID} catalog rman/rman@catalog_db auxiliary / | tee -a $HOME/dup_${FROM_SID}_to_${TO_SID}_$(date +'%Y%m%d').log

run {
    allocate auxiliary channel t1 type 'sbt_tape' parms 'ENV=(NSR_SERVER=${TAPE_SERVER},NSR_CLIENT=${FROM_HOST},NSR_DATA_VOLUME_POOL=${TAPE_POOL})';
    allocate auxiliary channel t2 type 'sbt_tape' parms 'ENV=(NSR_SERVER=${TAPE_SERVER},NSR_CLIENT=${FROM_HOST},NSR_DATA_VOLUME_POOL=${TAPE_POOL})';
    allocate auxiliary channel t3 type 'sbt_tape' parms 'ENV=(NSR_SERVER=${TAPE_SERVER},NSR_CLIENT=${FROM_HOST},NSR_DATA_VOLUME_POOL=${TAPE_POOL})';
    allocate auxiliary channel t4 type 'sbt_tape' parms 'ENV=(NSR_SERVER=${TAPE_SERVER},NSR_CLIENT=${FROM_HOST},NSR_DATA_VOLUME_POOL=${TAPE_POOL})';
    set until time '${RECOVER_TO_TIME}';
    duplicate target database to ${TO_SID}
    nofilenamecheck
    spfile
        parameter_value_convert '/oracle/exp/ora_data2/','/oracle/kap/ora_data1/', '/oracle/exp/ora_bin2/','/oracle/kap/ora_bin1/'
        set controlfiles='/oracle/kap/ora_data1/common/db/bilk/control01.ctl', '/oracle/kap/ora_data1/common/db/bilk/control02.ctl', '/oracle/kap/ora_data1/common/db/bilk/control03.ctl'
        set db_file_name_convert='/oracle/exp/ora_data2/common/db/bile/','/oracle/kap/ora_data1/common/db/bilk/'
        set log_file_name_convert='/oracle/exp/ora_data2/common/db/bile/','/oracle/kap/ora_data1/common/db/bilk/'
    ;
    release channel t1;
    release channel t2;
    release channel t3;
    release channel t4;
 }
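For reference, the minimal init.ora mentioned in the checklist above might look something like this (names and paths are hypothetical, reusing the convert locations from the example; memory parameters depend on your version):

```
db_name=bilk
compatible=11.2.0.4
memory_target=2G
control_files='/oracle/kap/ora_data1/common/db/bilk/control01.ctl'
db_file_name_convert='/oracle/exp/ora_data2/common/db/bile/','/oracle/kap/ora_data1/common/db/bilk/'
log_file_name_convert='/oracle/exp/ora_data2/common/db/bile/','/oracle/kap/ora_data1/common/db/bilk/'
```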

RMAN Recovery (Duplication) using the BACKUP AS COPY command

There are several interesting reasons for using backup to copy instead of the default (backupset). Our scenario here is that:

  • a 36 TB database has been shrunk to 3 TB and needs reorganising
  • somehow the ckfs command shows that a filesystem needs repairing but the database is still working fine
  • it is a production database and needs minimal downtime

During the day, run the backup so that the files are ready for the intervention at night

A filesystem needs to be available with enough space to hold the (new) 3 TB database.
The %b format means just the filenames are used.

export NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS'

run
{
    allocate channel c1 type disk format '/path/to/backup/datafiles/%b';
    allocate channel c2 type disk format '/path/to/backup/datafiles/%b';
    allocate channel c3 type disk format '/path/to/backup/datafiles/%b';
    allocate channel c4 type disk format '/path/to/backup/datafiles/%b';
    allocate channel c5 type disk format '/path/to/backup/datafiles/%b';
    allocate channel c6 type disk format '/path/to/backup/datafiles/%b';
    allocate channel c7 type disk format '/path/to/backup/datafiles/%b';
    allocate channel c8 type disk format '/path/to/backup/datafiles/%b';
    backup as copy database;
    release channel c1;
    release channel c2;
    release channel c3;
    release channel c4;
    release channel c5;
    release channel c6;
    release channel c7;
    release channel c8;
}

This backup partially failed because, at some point in the past, someone had added a few datafiles to a tablespace with a space appended to the end of each filename, so we needed this additional step. The %U format generates a unique filename for each file (but includes the file_id, so the copy can be reassociated with its original name).

run {
    allocate channel c1 type disk format '/path/to/backup/datafiles/%U';
    backup as copy datafile 170;
    backup as copy datafile 171;
    release channel c1;
}

At intervention time, stop the applications accessing the database, stop the listeners and wait for any ongoing transactions to end.

Protect the archivelogs

Run an archivelog backup sending any new logs to tape but don't delete them from disk as they will be needed for recovery.

run {
    allocate channel t1 type 'sbt_tape' parms 'ENV=(NSR_SERVER=hn6000.crelan.be,NSR_CLIENT=hn512,NSR_DATA_VOLUME_POOL=DD2DAILY,NSR_SAVESET_BROWSE=1 MONTH,NSR_SAVESET_RETENTION=1 MONTH,NSR_END_ERROR_IGNORE=TRUE)';
    crosscheck archivelog all;
    backup format 'arc_%d_%I_%t_%s_%p' archivelog all not backed up 2 times;
    release channel t1;
}

Backup the controlfile

run {
    allocate channel c1 type disk;
    backup as copy current controlfile format '/home/oracle/control01.ctl';
    release channel c1;
}

Take a text copy as well just in case...

alter database backup controlfile to trace as '$HOME/lbk_control01.sql';

Create a pfile if a recent one does not exist

create pfile from spfile;

Protect the redo logs

Copy one member of each group just in case...

select 'cp -p '||member||' /path/to/backup/datafiles/ &' from v$logfile where member like '%a.log';

Edit the pfile

Change any parameters that have a directory path to reflect the location of the new files. At the very least, change the control_files parameter.
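A sed sketch of that edit, assuming a simple one-for-one path swap (the roots and file names here are made-up examples; the demo writes a stand-in pfile so the snippet is self-contained):

```shell
# Hypothetical old and new datafile roots
OLD='/oracle/exp/ora_data2'
NEW='/oracle/kap/ora_data1'

# Demo pfile with a path-bearing parameter (stand-in for the real one)
printf "control_files='%s/common/db/bile/control01.ctl'\n" "$OLD" > /tmp/initDEMO.ora

# Rewrite every occurrence of the old root; '|' as delimiter avoids escaping slashes
sed "s|${OLD}|${NEW}|g" /tmp/initDEMO.ora > /tmp/initNEW.ora
cat /tmp/initNEW.ora
```

Always eyeball the rewritten file before the startup nomount, particularly control_files and the *_dest parameters.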

Startup nomount

Start the instance in nomount mode, specifying the modified pfile

startup nomount pfile='/path/to/pfile/initSID.ora'

Mount the database

If everything has been modified correctly, mounting the database should succeed

alter database mount;

Rename the datafiles in the controlfile

set lines 1000 pages 0 trims on feed off
col txt for a500
spool rename_files_in_controlfile.sql
select 'alter database rename file ''' || name || ''' to ''' ||  replace(name, '/path/to/original/datafiles', '/path/to/backup/datafiles')  || ''';' txt from v$datafile
/
spool off
@rename_files_in_controlfile

Check all files have been renamed and that there are no spaces in the names!

select name from v$datafile where name not like '/path/to/backup/datafiles/%';
select name from v$datafile where name like '% %';

Disable block change tracking if it was previously enabled

alter database disable block change tracking;

Start the recovery process

export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"

run {
allocate channel c1 device type disk;
recover database;
release channel c1;
}

Moment of truth

alter database open;

Post recovery checks

set lines 1000 pages 1000
col file_name for a60
select file_name,status,online_status from dba_data_files;
select * from v$tempfile;
select member from v$logfile;

Script to check if sufficient space is available on the destination (auxiliary) filesystem to hold a duplicate copy of the source (target) database

#!/usr/bin/ksh
# ==============================================================================
# Name         : db_space_calculation.ksh
# Description  : Works out if there will be enough space to hold a clone of a
#                specified database on a particular filesystem
#
# Parameters   : none. Script asks questions
#
#
# Modification History
# ====================
# When      Who               What
# ========= ================= ==================================================
# 18-JUL-17 Stuart Barkley    Created
# 17-AUG-17 Stuart Barkley    Get control_files parameter
# 21-AUG-17 Stuart Barkley    Try to get Networker pool and rman stuff
# ==============================================================================
#
PROGNAME=$(basename $0)
# increment this every time it changes
VERSION=0.4.3


printf "\n%s\n" "${PROGNAME} version $VERSION"
printf "%s" "Enter name of source (clone from) database: "
read FROM_SID
printf "%s" "Enter sys password for ${FROM_SID} database: "
read FROM_SYS_PASSWORD
printf "%s" "Enter name of destination (clone to) database: "
read TO_SID

# ----------------------------------------------------------------------
# if destination database exists, we can use it to get datafile location
# ----------------------------------------------------------------------
grep "^${TO_SID}:" /etc/oratab >/dev/null 2>&1
RETVAL=$?
if [[ ${RETVAL} -eq 0 ]]; then
    ORAENV_ASK=NO
    export ORACLE_SID=${TO_SID}
    . oraenv >/dev/null 2>&1
    RETVAL=$((RETVAL+$?))
fi
if [[ ${RETVAL} -ne 0 ]]; then
    # destination database does not yet exist, ask user
    printf "%s" "Enter datafile directory/filesystem for ${TO_SID} database: "
    read TO_DATA_DIR
fi



function calc { awk "BEGIN { print $* }"; }


# --------------------------------------
# get host name of the FROM_SID database
# --------------------------------------
FROM_HOST=`sqlplus -s sys/${FROM_SYS_PASSWORD}@${FROM_SID} as sysdba<<'EOSQL'
set numwid 15 headi off newpa none feedb off
select host_name
from   v$instance
/
EOSQL
`
if [[ $? -ne 0 ]]; then
    printf "\n%s\n" "ERROR: Failed to get host name of ${FROM_SID} database."
    exit 1
fi
sleep 1


# --------------------------------------------------------
printf "\n%s"  "checking size of the ${FROM_SID} database"
# --------------------------------------------------------
DATABASE_SIZE=`sqlplus -s sys/${FROM_SYS_PASSWORD}@${FROM_SID} as sysdba<<'EOSQL'
set numwid 15 headi off newpa none feedb off
with used_space as
(
select  bytes
from    v$datafile
union   all
select  bytes
from    v$tempfile
union   all
select  bytes
from    v$log
)
, free_space as
(
select sum(bytes) sum_bytes
from dba_free_space
)
select trim(round(sum(used_space.bytes)/1024) - round(free_space.sum_bytes/1024)) kb_used
from   free_space
,      used_space
group  by free_space.sum_bytes
/
EOSQL
`
if [[ $? -ne 0 ]]; then
    echo " ...NOK"
    printf "\n%s\n" "ERROR: Failed to get size of ${FROM_SID} database."
    exit 1
else
    echo " ...OK (${DATABASE_SIZE} Kb)" && rm -f /tmp/results.$$
fi
sleep 1


# ---------------------
# get datafile location
# ---------------------
if [[ "${TO_DATA_DIR}" == "" ]]; then
    # it was not filled in manually, destination must exist
    sqlplus -s sys/${FROM_SYS_PASSWORD}@${TO_SID} as sysdba<<'EOSQL' | read TO_DATA_DIR
    set numwid 15 headi off newpa none feedb off
    select distinct substr(name,1,(instr(name,'/',-1,1)-1)) path_name from v$datafile;
EOSQL
fi


# ----------------------------------------------------------------
printf "%s"  "checking space available for the ${TO_SID} database"
# ----------------------------------------------------------------
if [[ "$(uname -s)" == "AIX" ]]; then
    SPACE_AVAILABLE=$(df -k $TO_DATA_DIR | sed 1d | awk 'BEGIN {i=0} {i=i+$3} END {print i}')
else
    # Linux, Darwin, Solaris
    SPACE_AVAILABLE=$(df -k $TO_DATA_DIR | sed 1d | awk 'BEGIN {i=0} {i=i+$4} END {print i}')
fi

SPACE_USED=$(du -ks ${TO_DATA_DIR} | awk 'BEGIN {i=0;} {i=i+$1} END {print i}')

# do the calculations in the database to avoid scientific notation issues
sqlplus -s sys/${FROM_SYS_PASSWORD}@${FROM_SID} as sysdba<<EOSQL | read SPACE_AFTER_INSTALL TOTAL_SPACE_AVAILABLE
set numwid 15 headi off newpa none feedb off
select sign(${SPACE_USED} + ${SPACE_AVAILABLE} - ${DATABASE_SIZE}), trim(${SPACE_USED} + ${SPACE_AVAILABLE})
from dual
/
EOSQL

if [[ ${SPACE_AFTER_INSTALL} -le 0 ]]; then
    echo " ...NOK"
    printf "\n%s\n" "ERROR: Insufficient space."
    echo "Size of database ${FROM_SID} : ${DATABASE_SIZE} Kb"
    echo "Space available for ${TO_SID}: ${TOTAL_SPACE_AVAILABLE} Kb"
    exit 1
else
    echo " ...OK, space is sufficient"
fi

sleep 1
printf "\n%s\n" "other bits useful for a sync..."
sleep 1
# ------------------------------------------------
# find the audit file area on source, if available
# ------------------------------------------------
RETVAL=`sqlplus -s sys/${FROM_SYS_PASSWORD}@${FROM_SID} as sysdba<<'EOSQL'
set numwid 15 headi off newpa none feedb off
col value for a100
select value from v$parameter where name = 'audit_file_dest'
/
EOSQL
`
echo "FROM_ADUMP_DIR='${RETVAL}'"

# -------------------------------------------------
# find the archive log area on source, if available
# -------------------------------------------------
RETVAL=`sqlplus -s sys/${FROM_SYS_PASSWORD}@${FROM_SID} as sysdba<<'EOSQL'
set numwid 15 headi off newpa none feedb off
col value for a100
select value from v$parameter where name = 'log_archive_dest_1'
/
EOSQL
`
RETVAL=$(echo ${RETVAL}|sed -e 's/LOCATION=//')
echo "FROM_ARCH_DIR='${RETVAL}'"


# -------------------------------------------------------
# work out the RMAN connection details and Networker pool
# -------------------------------------------------------
ssh ${FROM_HOST} "cat /ux/*/backup/rman/backup_scripts/backup-${FROM_SID}*arch*.rman" >/tmp/results.$$ 2>/dev/null
if [[ $? -eq 0 ]]; then
    TAPE_POOL=$(cat /tmp/results.$$ | perl -ne 'print $1 if /NSR_DATA_VOLUME_POOL=(\w+),?/')
    echo "TAPE_POOL=${TAPE_POOL}"
    FROM_RMAN=$(cat /tmp/results.$$ | perl -ne 'print "FROM_RMANCAT_USR=$1\nFROM_RMANCAT_PWD=$2\nFROM_RMANCAT=$3\n" if /rcvcat\s+(\w+)\/(\w+)@(\w+)$/')
    echo "${FROM_RMAN}"
else
    echo "Could not connect to ${FROM_HOST} to get the RMAN catalog and Networker pool name. Please check manually"
fi


# -----------------------------------------------------
# find the audit file area on destination, if available
# -----------------------------------------------------
RETVAL=`sqlplus -s sys/${FROM_SYS_PASSWORD}@${TO_SID} as sysdba<<'EOSQL'
set numwid 15 headi off newpa none feedb off
col value for a100
select value from v$parameter where name = 'audit_file_dest'
/
EOSQL
`
echo "TO_ADUMP_DIR='${RETVAL}'"

# ------------------------------------------------------
# find the archive log area on destination, if available
# ------------------------------------------------------
RETVAL=`sqlplus -s sys/${FROM_SYS_PASSWORD}@${TO_SID} as sysdba<<'EOSQL'
set numwid 15 lines 1000 headi off newpa none feedb off
col value for a100
select value from v$parameter where name = 'log_archive_dest_1'
/
EOSQL
`
RETVAL=$(echo ${RETVAL}|sed -e 's/LOCATION=//')
echo "TO_ARCH_DIR='${RETVAL}'"

# --------------------------------------------------
# find out where the controlfiles are on destination
# --------------------------------------------------
RETVAL=`sqlplus -s sys/${FROM_SYS_PASSWORD}@${TO_SID} as sysdba<<'EOSQL'
set numwid 15 lines 1000 headi off newpa none feedb off
col value for a300
select value from v$parameter where name = 'control_files'
/
EOSQL
`
echo "CONTROL_FILES=\"${RETVAL}\""

# ------------------------------------------------
# work out the DB_FILE_NAME_CONVERT RMAN parameter
# ------------------------------------------------
RETVAL=`sqlplus -s sys/${FROM_SYS_PASSWORD}@${FROM_SID} as sysdba<<'EOSQL'
set numwid 15 headi off newpa none feedb off
col value for a100
select distinct substr(name,1,(instr(name,'/',-1,1)-1)) path_name from v$datafile
/
EOSQL
`
DB_FILE_NAME_CONVERT="'${RETVAL}'"

RETVAL=`sqlplus -s sys/${FROM_SYS_PASSWORD}@${TO_SID} as sysdba<<'EOSQL'
set numwid 15 headi off newpa none feedb off
col value for a100
select distinct substr(name,1,(instr(name,'/',-1,1)-1)) path_name from v$datafile
/
EOSQL
`
echo DB_FILE_NAME_CONVERT="\"${DB_FILE_NAME_CONVERT},'${RETVAL}'\""


# -------------------------------------------------
# work out the LOG_FILE_NAME_CONVERT RMAN parameter
# -------------------------------------------------
RETVAL=`sqlplus -s sys/${FROM_SYS_PASSWORD}@${FROM_SID} as sysdba<<'EOSQL'
set numwid 15 headi off newpa none feedb off
col value for a100
select distinct substr(member,1,(instr(member,'/',-1,1)-1)) path_name from v$logfile
/
EOSQL
`
LOG_FILE_NAME_CONVERT="'${RETVAL}'"

RETVAL=`sqlplus -s sys/${FROM_SYS_PASSWORD}@${TO_SID} as sysdba<<'EOSQL'
set numwid 15 headi off newpa none feedb off
col value for a100
select distinct substr(member,1,(instr(member,'/',-1,1)-1)) path_name from v$logfile
/
EOSQL
`
echo LOG_FILE_NAME_CONVERT="\"${LOG_FILE_NAME_CONVERT},'${RETVAL}'\""

Backup database to disk, scp files and duplicate on different host

On source host

sqlplus / as sysdba
create pfile='/backup/init_destsid.ora' from spfile;
exit
#!/usr/bin/ksh
ORAENV_ASK=NO
export ORACLE_SID=sourcesid
. oraenv
export NLS_DATE_FORMAT="DD-MM-YY HH24:MI:SS"
rman nocatalog target /<<EORMAN
configure controlfile autobackup on;
configure device type disk parallelism 4;
configure maxsetsize to unlimited;
run {
allocate channel c1 device type disk maxpiecesize 2048M;
allocate channel c2 device type disk maxpiecesize 2048M;
allocate channel c3 device type disk maxpiecesize 2048M;
allocate channel c4 device type disk maxpiecesize 2048M;
backup as compressed backupset database format "/backup/df_d%d_t%t_s%s_r%r";
backup as compressed backupset archivelog all format "/backup/al_d%d_t%t_s%s_r%r";
backup as compressed backupset current controlfile format "/backup/cf_%F";
release channel c1;
release channel c2;
release channel c3;
release channel c4;
}
EORMAN
scp /backup/* dest_server:/big_directory/

On destination server

export NLS_DATE_FORMAT="DD-MM-YY HH24:MI:SS"
sqlplus / as sysdba
startup nomount pfile='/big_directory/init_destsid.ora';

Make sure there is only one set of backup files in the directory. RMAN could get confused about which controlfile to use even though the documentation says it will take the most recent.
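A quick sanity check, sketched with made-up file names matching the cf_%F format used above: count the controlfile pieces and warn if there is more than one.

```shell
# Demo directory standing in for /big_directory
DIR=/tmp/bigdir_demo
mkdir -p "$DIR"
touch "$DIR/cf_c-1234567890-20181019-00" "$DIR/cf_c-1234567890-20181020-00"

# Count controlfile backup pieces; more than one risks duplicating from the wrong one
COUNT=$(ls "$DIR"/cf_* 2>/dev/null | wc -l)
if [ "$COUNT" -gt 1 ]; then
    echo "WARNING: $COUNT controlfile backup pieces in $DIR - keep only one set"
fi
```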

rman auxiliary /
duplicate database to destsid
backup location '/big_directory'
nofilenamecheck;

Create a self-contained, autonomous backup

Maybe find out how many threads there are first and script it.
Find the archivelog sequence number before the backup starts.

select min(sequence#) from v$log where thread#=1;
MIN1=$(echo "select 'xX '||min(sequence#) from v\$log where thread#=1;" | sqlplus -s "/ as sysdba" | grep "xX" | awk '{print $2}')
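The "how many threads" step mentioned above is just:

```
select distinct thread# from v$log order by thread#;
```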

Run the backup switching logfile before and after.

rman target / nocatalog log=autonomous_backup.log
run {
allocate channel...
sql "alter system archive log current";
backup database include current controlfile;
sql "alter system archive log current";
restore database preview;
}

Find the archivelog sequence number now that the backup has finished.

MAX1=$(echo "select 'xX '||max(sequence#) from v\$log where thread#=1;" | sqlplus -s "/ as sysdba" | grep "xX" | awk '{print $2}')

Backup just the archivelogs that will be necessary to recover.

rman target / nocatalog log=autonomous_backup_arch.log
run {
allocate channel...
backup archivelog sequence between ${MIN1} and ${MAX1} thread 1;
}

Create a physical standby from an RMAN backup

Backup primary database to disk

#!/bin/ksh
#
# Inspired by the book RMAN Recipes for Oracle database 11g

export ORACLE_SID=JDBEOP1


echo `date`
HOSTNAME=`hostname`
TODAY=`date +%Y%m%d_%H%M`

ORAENV_ASK=NO
. oraenv

export NLS_DATE_FORMAT="dd-mon-yyyy hh24:mi:ss"

sqlplus / as sysdba<<EOSQL
set numwidth 20 lines 2000
col status for a12
select sysdate, current_scn from v\$database;
select group#,thread#,sequence#,archived,status,first_change#,first_time from v\$log;
EOSQL

rman target / nocatalog msglog rman_database_${ORACLE_SID}_${TODAY}.log<<EORMAN
run {
    allocate channel d1 type disk;
    allocate channel d2 type disk;
    allocate channel d3 type disk;
    allocate channel d4 type disk;
    allocate channel d5 type disk;
    allocate channel d6 type disk;
    backup format '/JDBEOP1-CLONE/rman_db_t%t_s%s_p%p'  database;
    sql 'alter system archive log current';
    backup format '/JDBEOP1-CLONE/rman_al_t%t_s%s_p%p'  archivelog all;
    backup format '/JDBEOP1-CLONE/spfile_t%t_s%s_p%p'   spfile;
    backup format '/JDBEOP1-CLONE/rman_ctl_t%t_s%s_p%p' current controlfile for standby;
    release channel d1;
    release channel d2;
    release channel d3;
    release channel d4;
    release channel d5;
    release channel d6;
}
EORMAN

echo `date`
ls -altr /JDBEOP1-CLONE

Copy RMAN backup files[edit]

Copy over the backup files from source to destination server

  • using scp
scp -p /CLONEDISK/* bemauerp12:/CLONEDISK/
  • using rsync
rsync -uav --progress /CLONEDISK/* bemauerp12:/CLONEDISK/
  • moving a Veritas (VxVM) disk group (Solaris)

On the source server, unmount the disk, deport the disk group and comment out the corresponding line in /etc/vfstab

umount /CLONEDISK
vxdg deport clonedg
vi /etc/vfstab

# /dev/vx/dsk/clonedg/lv_clonedg_jdbeop1clone     /dev/vx/rdsk/clonedg/lv_clonedg_jdbeop1clone    /CLONEDISK  vxfs    3       yes     -

On the destination server, uncomment the corresponding line in /etc/vfstab, import the disk group and mount the disk

vi /etc/vfstab

/dev/vx/dsk/clonedg/lv_clonedg_jdbeop1clone     /dev/vx/rdsk/clonedg/lv_clonedg_jdbeop1clone    /CLONEDISK  vxfs    3       yes     -

vxdg import clonedg
mount /CLONEDISK

Edit the tnsnames.ora file on the destination server[edit]

The standby database needs to be able to connect to the primary via TNS

vi $TNS_ADMIN/tnsnames.ora

PRIMARY =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = primary)(PORT = 1555))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = JDBEOP1)
    )
  )

STANDBY =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = standby)(PORT = 1555))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = JDBEOP1_DR)
    )
  )

Restore the database on the destination server[edit]

#!/usr/bin/ksh

HOSTNAME=`hostname`
TODAY=`date +%Y%m%d_%H%M`; export TODAY

# ===================
# set the environment
# ===================
export ORACLE_SID=JDBEOP1
ORAENV_ASK=NO
. oraenv

echo $ORACLE_SID
echo $ORACLE_HOME

# ==================================
# kill any previous (failed) attempt
# ==================================
sqlplus / as sysdba <<EOSQL
shutdown abort
EOSQL

# ======================
# start a dummy instance
# ======================
sqlplus / as sysdba <<EOSQL
startup nomount pfile='/oracle/JDBEOP1/admin/change/init_clone.ora'
EOSQL

# =====================
# start the duplication
# =====================
echo `date`
export NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS'
rman nocatalog cmdfile=duplicate4standby.cmd msglog=duplicate_${HOSTNAME}_${ORACLE_SID}_${TODAY}.log
echo `date`

init_clone.ora file[edit]

Used for the initial start of the standby database in the above script

DB_NAME=JDBEOP1
DB_UNIQUE_NAME=JDBEOP1_DR

duplicate4standby.cmd file[edit]

RMAN command file used in the above script

connect target sys/&syspasswordonprimary@JDBEOP1
connect auxiliary /
run {
allocate channel c1 device type disk;
allocate channel c2 device type disk;
allocate auxiliary channel a1 device type disk;
allocate auxiliary channel a2 device type disk;
set newname for tempfile 1 to '/oracle/JDBEOP1/oradata1/temp01.dbf';
set newname for tempfile 2 to '/oracle/JDBEOP1/oradata2/temp03.dbf';
set newname for datafile 518 to '/oracle/JDBEOP1/arcdata/ARCENG_05.dbf';
set newname for datafile 555 to '/oracle/JDBEOP1/arcdata/PRODARCT_09.dbf';
set newname for datafile 550 to '/oracle/JDBEOP1/arcdata/PRODARCI_08.dbf';
duplicate target database for standby 
nofilenamecheck
dorecover
spfile set db_unique_name          = 'JDBEOP1_DR'
       set sga_max_size            = '124748364800'
       set sga_target              = '124748364800'
       set db_cache_size           = '100000000000'
       set fal_server              = 'JDBEOP1'
       set fal_client              = 'JDBEOP1_DR' 
       set db_block_checksum       = 'FULL'
       set db_lost_write_protect   = 'TYPICAL'
       set db_recovery_file_dest   = '/oracle/JDBEOP1/recovery_area'
       set log_archive_dest_1      = 'LOCATION="USE_DB_RECOVERY_FILE_DEST"'
       set standby_archive_dest    = 'LOCATION="USE_DB_RECOVERY_FILE_DEST"'
;
}

Use clonedb to create an almost instant clone of a database[edit]

Harnesses an RMAN image copy backup as a read-only backing store to generate clones that can be used for dev/QA etc.
Only the blocks that differ from the backing RMAN datafiles are written to the cloned datafiles, giving huge space savings.
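A hedged sketch of driving clonedb from the shell. The environment variable names and the clonedb.pl invocation are from my recollection of the clonedb interface and are assumptions - verify them against your Oracle version before use:

```shell
# Hedged sketch: environment expected by clonedb.pl (names are assumptions -
# check the script shipped with your Oracle version).
export MASTER_COPY_DIR=/backup/prod_image_copy   # RMAN image copy of the source db
export CLONE_FILE_CREATE_DEST=/clones/dev1       # sparse clone datafiles land here
export CLONEDB_NAME=DEV1                         # name of the clone instance
# perl $ORACLE_HOME/rdbms/install/clonedb.pl initPROD.ora crtdb.sql dbren.sql
# then run the generated crtdb.sql and dbren.sql from SQL*Plus as sysdba
echo "clone ${CLONEDB_NAME} backed by ${MASTER_COPY_DIR}"
```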

A collection of ways to do High Availability with Oracle[edit]

Includes the following methods...

  • Data Guard
  • Active Data Guard
  • Recovery Manager (RMAN)
  • Flashback Technology
  • Data Guard and Applications
  • Oracle Secure Backup (OSB)
  • Global Data Services (GDS)
  • Oracle GoldenGate
  • Oracle Sharding
  • Cloud MAA

Backup the control file[edit]

backup current controlfile;

Take a binary copy of the controlfile that can be used without a catalog to recover an instance somewhere else[edit]

backup as copy current controlfile format '/tmp/MOCMMSP2.ctl';

Take a text copy of the controlfile that can be used without a catalog to rebuild an instance (worst case)[edit]

sql "alter database backup controlfile to trace as ''/tmp/MOCMMSP2.ctl''";

or from SQL prompt...

alter database backup controlfile to trace as '/tmp/database_cf.sql';

Check the status of currently running backups[edit]

Monitor progress of RMAN backups

select sid
,      serial#
,      context
,      sofar
,      totalwork
,      round(sofar/totalwork*100,2) "%_complete"
from   v$session_longops
where  1=1
and    opname     like 'RMAN%'
and    opname not like '%aggregate%'
and    totalwork != 0
and    sofar     != totalwork
/

Check the status of the database backups for the past n days[edit]

Run this one from sys on the database server

set pages 500 lines 200
col operation   for a12
col backup      for a27
col start_time  for a17
col end_time    for a17
col input_type  for a12
col status      for a12
col duration    for 90.99
col days_ago    for 90

select vrbsd.operation
,      vrbsd.command_id                              backup
,      to_char(vrbsd.start_time,'DD-MON-YY HH24:MI') start_time
,      to_char(vrbsd.end_time,'DD-MON-YY HH24:MI')   end_time
,      vrbd.input_type                               type
,      vrbsd.status                                  status
,      (vrbd.elapsed_seconds/3600)                   duration
,      trunc(sysdate)-trunc(vrbsd.start_time)        days_ago
from   v$rman_backup_job_details     vrbd
,      v$rman_backup_subjob_details  vrbsd
where  1=1
and    vrbd.session_key = vrbsd.session_key
and    vrbsd.start_time > sysdate-&days_back
order  by vrbsd.start_time
/

Check the status of the database backups from the RMAN catalog[edit]

Run from the RMAN catalog owner, this checks to see how long ago a full backup was taken from all databases in the catalog

set lines 300 pages 66
col status for a15
select s.db_name                                "Database"
,      s.db_key                                 dbid
,      s.status                                 "Status"
,      max(s.start_time)                        "Start time"
,      max(round(s.mbytes_processed/1024))      "Processed data (GB)"
,      max(round((end_time-start_time)*60*24))  "Duration (mins)"
,      trunc(sysdate)-trunc(s.start_time)       "Days since last backup"
from   rc_rman_status s
where  1=1
and    s.operation   = 'BACKUP'
and    s.object_type = 'DB FULL'
and    s.start_time  =
       (
       select max(md.start_time)
       from   rc_rman_status md
       where  1=1
       and    md.operation   = 'BACKUP'
       and    md.object_type = 'DB FULL'
       and    md.db_name     = s.db_name
       )
group  by s.db_name
,      s.db_key
,      s.object_type
,      s.operation
,      s.status
,      trunc(sysdate)-trunc(s.start_time)
order  by trunc(sysdate)-trunc(s.start_time) desc
/

Run from the RMAN catalog owner, this shows the backup history for a particular database in the catalog

set lines 300 pages 100
col status for a15
col objtype for a15
select s.db_name                           "Database"
,      s.db_key                            dbid
,      s.object_type                       objtype
,      s.status                            "Status"
,      s.start_time                        "Start time"
,      round(s.mbytes_processed)           "Processed data(Mb)"
,      round((end_time-start_time)*60*24)  "duration (min)"
,      round((sysdate-s.start_time)*24)    "Hours since last backup"
from   rc_rman_status s
where  1=1
and    db_name              = '&db_name'
and    s.operation          = 'BACKUP'
and    s.object_type        not in ('CONTROLFILE', 'ARCHIVELOG')
order  by trunc(sysdate)-trunc(s.start_time) desc
/

Run from the RMAN catalog owner, this shows the archivelog backup history for a particular database in the catalog

set lines 300 pages 100
col status for a15
col objtype for a15
select s.db_name                           "Database"
,      s.db_key                            dbid
,      s.object_type                       objtype
,      s.status                            "Status"
,      s.start_time                        "Start time"
,      round(s.mbytes_processed)           "Processed data(Mb)"
,      round((end_time-start_time)*60*24)  "duration (min)"
,      round((sysdate-s.start_time)*24)    "Hours since last backup"
from   rc_rman_status s
where  1=1
and    db_name              = '&db_name'
and    s.operation          = 'BACKUP'
and    s.object_type        ='ARCHIVELOG'
order  by trunc(sysdate)-trunc(s.start_time) desc
/

Clean up the RMAN catalog[edit]

Sometimes the catalog needs cleaning up. Maybe the backup scripts are not doing a crosscheck and delete properly.

list backupset of database summary completed between 'SYSDATE-3000' and 'SYSDATE-${RETENTION_DAYS}';

If this produces results, records exist that go back before the retention period and may need deleting (depending on the policy).

change backupset ${key} delete;
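The ${RETENTION_DAYS} reference above assumes the LIST command is generated by a shell wrapper, so the variable is substituted before RMAN ever sees it. A minimal sketch of that substitution:

```shell
# Sketch: build the RMAN command in the shell so ${RETENTION_DAYS} is
# expanded before the text is piped to rman.
RETENTION_DAYS=35
CMD="list backupset of database summary completed between 'SYSDATE-3000' and 'SYSDATE-${RETENTION_DAYS}';"
echo "$CMD"    # in real use: pipe to rman target / catalog catowner/...@catdb
```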

If you get this error...

RMAN-06091: no channel allocated for maintenance (of an appropriate type)

Allocate a fake tape channel for maintenance, outside a run block.
(Apparently this can also be done inside a run block, but that did not work for me.)

allocate channel for maintenance device type sbt parms 'SBT_LIBRARY=oracle.disksbt, ENV=(BACKUP_DIR=/tmp)';
delete force obsolete;

Which tapes are the backups on?[edit]

Not really necessary as RMAN can work it out but if you need to know...

select media
, to_char(completion_time,'DD-MON-YYYY HH24:MI:SS') backed_up
from   rc_backup_piece
where  1=1
and    db_id           = 4053457814
and    completion_Time > sysdate - 2
order  by 2
/

Block change tracking[edit]

Check / Set it

set lines 1000
select * from v$block_change_tracking;
show parameter db_create_file_dest
alter system set db_create_file_dest='/oracle/SID/oradata1' scope=both;

or check from the Operating System for the CTWR background process
Here we see 3 of the 8 databases on this machine have block change tracking enabled

hn5114 /nas/software/oracle/scripts/synchro $ps -ef | grep -i [c]twr
  oracle  3016096        1   0   Jul 17      - 11:46 ora_ctwr_adst
  oracle   132874        1   0   Aug 23      -  1:08 ora_ctwr_lbkrt
 oraebst 62392598        1   0   Jul 17      - 10:12 ora_ctwr_EBST

Enable it

alter database enable block change tracking;
alter database enable block change tracking using file '/oracle/SID/oradata1/change_track.dbf';

Disable it

alter database disable block change tracking;

Rename / Move it

alter database disable block change tracking;
alter database enable block change tracking using file '<something different>';

To remove all rows from v$rman_status[edit]

If the status of an RMAN backup is stuck in RUNNING, the next backup will not start up. Reset the view with this...

exec sys.dbms_backup_restore.resetcfilesection(28);

To reset all RMAN configuration back to defaults[edit]

execute dbms_backup_restore.resetconfig;

Convert timestamp to SCN[edit]

select timestamp_to_scn(to_timestamp('08-JAN-2015 14:24:34','DD-MON-YYYY HH24:MI:SS')) scn from dual;

Convert SCN to timestamp[edit]

select scn_to_timestamp(1234567890) timestamp from dual;

Check for hanging RMAN sessions by checking wait[edit]

COLUMN EVENT FORMAT a10
COLUMN SECONDS_IN_WAIT FORMAT 999
COLUMN STATE FORMAT a20
COLUMN CLIENT_INFO FORMAT a30

SELECT p.SPID
,      EVENT
,      SECONDS_IN_WAIT SEC_WAIT
,      sw.STATE
,      CLIENT_INFO
FROM   V$SESSION_WAIT sw
,      V$SESSION s
,      V$PROCESS p
WHERE  1=1
and    (sw.EVENT LIKE 'sbt%' or sw.EVENT LIKE '%MML%')
AND    s.SID    = sw.SID
AND    s.PADDR  = p.ADDR
;

Useful RMAN commands[edit]

Create RMAN catalog schema[edit]

create tablespace rmanTDPOTEST datafile '/oracle/RMANV11/oradata1/rmanTDPOTEST.dbf' size 5M autoextend on next 5M maxsize unlimited;

create user rmanTDPOTEST identified by threeguesses
       temporary tablespace temp
       default tablespace rmanTDPOTEST
       quota unlimited on rmanTDPOTEST;

grant connect, resource, recovery_catalog_owner to rmanTDPOTEST;

Connect RMAN to the database controlfile (assuming db environment is set)[edit]

rman target /

Connect RMAN to the recovery catalog (assuming db environment is set)[edit]

rman target / catalog catowner/catpass@catdb

The problem with this method is that anyone running 'ps -ef' can see the catalog password! Connect this way instead and the password will not be visible...

rman target /
Recovery Manager: Release 11.2.0.3.0 - Production on Sun Jun 8 10:49:57 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: EUDSTGP2 (DBID=1013086362)

RMAN> connect catalog catowner/catpass@catdb;

connected to recovery catalog database

Show what backups are available[edit]

list backup of database;
list backup of spfile;
list backup of controlfile;
list archivelog all;
list backup of archivelog all;
list backup of datafile 1;
list backup summary;
list incarnation;
list backup by file;
list backup of datafile 11 summary;
list backup of archivelog from sequence 1437;
list controlfilecopy "/tmp/cntrlfile.copy";
list backupset of datafile 1;
list failure;
list failure 641231 detail;

Show all the configured options[edit]

show all;

What needs backing up to satisfy retention policy (times backed up/recovery window etc)?[edit]

report need backup;

What can be deleted?[edit]

report obsolete;

Delete what is not needed[edit]

Obsolete - no longer needed because older than the retention period (backups are kept further back than the retention period if there was no full backup within it).
Expired - the backup is recorded in the controlfile/catalog but the physical piece no longer exists.

show retention policy;

crosscheck backup;

list backup summary;

delete noprompt expired archivelog all;

delete noprompt expired backup;

delete noprompt obsolete;

list backup summary;

Check whether the backup pieces registered in the controlfile still exist[edit]

CROSSCHECK BACKUP;
CROSSCHECK COPY;
CROSSCHECK backup of database;
CROSSCHECK backup of controlfile;
CROSSCHECK archivelog all;

Restore database from TSM (tape) to different server using RMAN[edit]

Scenario
Customer wants a complete database restored to a different server so that they can retrieve data that is no longer available on the source database.
It is not possible to restore from an export as the required dump files are no longer available.

Source: PROD1 on prod004
Destination: PRODX on test002

Method

  • Request Unix team to attach a temporary filesystem big enough to hold the complete database (and an extra bit for contingency) to the destination server
  • Request Unix team to restore (or copy from source) the Oracle software to the mounted filesystem
  • Recreate the /oracle/SID/admin directory (contains necessary tdpo files)
  • Update dsm.sys with SID stanza from source server
  • Test the tdpo access from destination server
  • Add destination SID to oratab
  • Create a minimal init.ora file
  • Check that the desired data can be restored (validate)
  • Create an RMAN command file to perform the work
  • Create a shell to run the process



Request Unix team to attach a temporary filesystem big enough to hold the complete database (and an extra bit for contingency) to the destination server
Either attach the storage directly to the destination server or nfs mount it (with hard mount options) from another server

Request Unix team to restore (or copy from source) the Oracle software to the mounted filesystem
The whole of the Oracle software tree can be copied from the source server to the destination server. The contents of the dbs directory can then be removed.

Recreate the /oracle/SID/admin directory (contains necessary tdpo files)
For this customer, we keep all the TSM related files in this directory. These include the tdpo.opt, dsm.opt and TSM.PWD files.

test002:/oracle/PROD1/admin# ll -tr
total 62
-rwxrwx---   1 oracle   dba          257 Jul 13 12:58 tdpo.opt
-rwxrwx---   1 oracle   dba          814 Jul 13 12:58 dsm.opt
-rwxrwx---   1 oracle   dba          268 Jul 13 12:58 inclexcl.lst
drwxrwx---   2 oracle   dba           96 Jul 13 13:18 tdpoerror_PROD1
-rwxrwx---   1 oracle   dba          184 Jul 13 13:18 TSM.PWD
-rwxrwx---   1 oracle   dba            0 Nov 12 11:59 dsmerror.log
drwxrwx---   2 oracle   dba         1024 Nov 12 16:50 change


Update dsm.sys with SID stanza from source server
The stanza from the source server relating to this database should be copied into the dsm.sys on the destination server so that tdpo can access the TSM server.

SErvername                 TSM_PROD1_TDPO
   TCPPort                 1518
   TCPServeraddress        tsm-bxt
   TCPBuffsize             31
   TCPWindowsize           1024
   LARGECOMMBUFFERS        YES
   TXNBytelimit            10000
   PASSWORDACCESS          GENERATE
   PASSWORDDIR             /oracle/PROD1/admin
   MAXCMDRETRIES           4
   INCLEXCL                /oracle/PROD1/admin/inclexcl.lst
   QUERYSCHEDPERIOD        3
   RETRYPERIOD             30
   SCHEDLOGNAME            /oracle/PROD1/admin/dsmsched.log
   ERRORLOGNAME            /oracle/PROD1/admin/dsmerror.log
   SCHEDLOGRETENTION       15 D
   SCHEDMODE               POLLING
   NODENAME                SO_U_PROD1_ORA_X_BAX
   DIRMC                   D01NOS_14


Test the tdpo access from destination server
Use tdpoconf to test access to the TSM server and to check the related files are all usable.

tdpoconf showenv -tdpo_opt=/oracle/PROD1/admin/tdpo.opt


Add destination SID to oratab
Add a line with the new SID

echo "PRODX:/oracle/PROD1/product/11204:Y" >> /etc/oratab


Create a minimal init.ora file
This is just a dummy file so that we can start an idle instance.

echo "DB_NAME=dummy" > /oracle/PROD1/admin/change/init_dummy.ora


Check that the desired data can be restored (validate)
Use an RMAN preview command to ensure that files can be restored as far back as the required date.
If you have no access to the catalog, you will need to use "set dbid=<DBID>;"

rman target / catalog catuser/catpass@catdb

run {
set until time "to_date('18-OCT-14 10:10:00','DD-MON-YY HH24:MI:SS')";
restore controlfile preview;
}


Create an RMAN command file to perform the work
This syntax does not require a connection to the target database

connect catalog catuser/catpass@catdb;
connect auxiliary /

run {
allocate auxiliary channel d1 device type disk;
allocate auxiliary channel a1 device type sbt_tape parms 'ENV=(TDPO_OPTFILE=/oracle/PROD1/admin/tdpo.opt)';
allocate auxiliary channel a2 device type sbt_tape parms 'ENV=(TDPO_OPTFILE=/oracle/PROD1/admin/tdpo.opt)';
allocate auxiliary channel a3 device type sbt_tape parms 'ENV=(TDPO_OPTFILE=/oracle/PROD1/admin/tdpo.opt)';
allocate auxiliary channel a4 device type sbt_tape parms 'ENV=(TDPO_OPTFILE=/oracle/PROD1/admin/tdpo.opt)';
allocate auxiliary channel a5 device type sbt_tape parms 'ENV=(TDPO_OPTFILE=/oracle/PROD1/admin/tdpo.opt)';

duplicate database PROD1 dbid 2992702833 to PRODX
    until time "to_date('18-OCT-14 10:10:00','DD-MON-YY HH24:MI:SS')"
    spfile
    set db_unique_name = 'PRODX'
    nofilenamecheck
;

}


Create a shell to run the process

#!/usr/bin/ksh

HOSTNAME=`hostname`
export TODAY=`date +%Y%m%d_%H%M`

export ORACLE_SID=PRODX
ORAENV_ASK=NO
. oraenv

echo $ORACLE_SID
echo $ORACLE_HOME

sqlplus / as sysdba <<EOSQL
shutdown abort
EOSQL

sqlplus / as sysdba <<EOSQL
startup nomount pfile='/oracle/PROD1/admin/change/init_dummy.ora'
EOSQL

echo `date`
export NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS'
rman cmdfile=/oracle/PROD1/admin/change/dupdb.cmd msglog=/oracle/PROD1/admin/change/dupdb_${HOSTNAME}_${ORACLE_SID}_${TODAY}.log
echo `date`


Bad Hair Day[edit]

When media failure strikes, look in the alert log for more information, if you can still get to it.
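A quick way to pull media-failure clues out of the alert log. A hedged sketch - the ORA- error list covers common media-failure messages and the alert log path is an assumption for your diagnostic_dest and SID; a throwaway file is used here so the pipeline itself is self-contained:

```shell
# Sketch: count common media-failure errors in the alert log. In real use:
# ALERT=/oracle/SID/diag/rdbms/sid/SID/trace/alert_SID.log (path assumed).
ALERT=$(mktemp)
cat > "$ALERT" <<'EOF'
ORA-01157: cannot identify/lock data file 6 - see DBWR trace file
ORA-01110: data file 6: '/oracle/SID/oradata1/users01.dbf'
EOF
HITS=$(grep -cE 'ORA-(01110|01157|00376|27041)' "$ALERT")
echo "$HITS media failure messages found"
rm -f "$ALERT"
```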

Restore and recover tablespace[edit]

If some/all datafiles in one tablespace are damaged, you can recover the tablespace.

rman target /
startup mount;
restore tablespace users;
recover tablespace users;
alter database open;

or while database is open...

rman target /
sql 'alter tablespace users offline immediate';
restore tablespace users;
recover tablespace users;
sql 'alter tablespace users online';

or until a particular moment in time (TSPITR)

RMAN> run {
    allocate channel t1 type 'sbt_tape' parms 'ENV=(NSR_SERVER=hn6000.xxxxxx.be,NSR_CLIENT=hn511,NSR_DATA_VOLUME_POOL=DD1DAILY)';
    recover tablespace "nh_PFT_0v_quartz_ts" until time "to_date('27-NOV-14 09:25:00','DD-MON-YY HH24:MI:SS')" auxiliary destination '/cln/exp/ora_data3/devaux';
}

NOTE 1: If you do not specify auxiliary destination, it will try to use the same location as the original datafiles - and will conflict!
NOTE 2: There is a bug which ignores the fact that a tape channel has been allocated.
The workaround is to use CONFIGURE to allocate a channel. Do not forget to reset it later.

rman target /
configure channel device type 'SBT_TAPE' PARMS 'ENV=(NSR_SERVER=hn6000.xxxxxx.be,NSR_CLIENT=hn511,NSR_DATA_VOLUME_POOL=DD1DAILY)';
run {
    sql 'alter tablespace TS_BAMBOO offline immediate';
    recover tablespace "TS_BAMBOO" until time "to_date('21-MAR-18 10:30:00','DD-MON-YY HH24:MI:SS')" auxiliary destination '/cln/exp/ora_data3/devaux';
    sql 'alter tablespace TS_BAMBOO online';
}
configure channel device type 'SBT_TAPE' clear;

Output of the above script:

rman cmdfile=tspitr_restore_devtools.rman

Recovery Manager: Release 12.1.0.2.0 - Production on Wed Mar 21 15:59:20 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

RMAN>
1> connect target *
2> connect catalog *
3> run {
4>     recover tablespace TS_BAMBOO until time "to_date('21-MAR-18 10:30:00','DD-MON-YY HH24:MI:SS')" auxiliary destination '/cln/exp/ora_data3/devaux';
5> }
6>
connected to target database: DEVTOOLS (DBID=2968626886)

connected to recovery catalog database

Starting recover at 21-MAR-18
starting full resync of recovery catalog
full resync complete
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=485 device type=DISK
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=509 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: NMDA Oracle v8.2.1

Creating automatic instance, with SID='aslo'

initialization parameters used for automatic instance:
db_name=DEVTOOLS
db_unique_name=aslo_pitr_DEVTOOLS
compatible=12.1.0.2.0
db_block_size=8192
db_files=200
diagnostic_dest=/cln/exp/ora_bin1/app/oracle
_system_trig_enabled=FALSE
sga_target=2560M
processes=200
db_create_file_dest=/cln/exp/ora_data3/devaux
log_archive_dest_1='location=/cln/exp/ora_data3/devaux'
#No auxiliary parameter file used


starting up automatic instance DEVTOOLS

Oracle instance started

Total System Global Area    2684354560 bytes

Fixed Size                     3788008 bytes
Variable Size                654312216 bytes
Database Buffers            1979711488 bytes
Redo Buffers                  46542848 bytes
Automatic instance created
Running TRANSPORT_SET_CHECK on recovery set tablespaces
TRANSPORT_SET_CHECK completed successfully

contents of Memory Script:
{
# set requested point in time
set until  time "to_date('21-MAR-18 10:30:00','DD-MON-YY HH24:MI:SS')";
# restore the controlfile
restore clone controlfile;

# mount the controlfile
sql clone 'alter database mount clone database';

# archive current online log
sql 'alter system archive log current';
# avoid unnecessary autobackups for structural changes during TSPITR
sql 'begin dbms_backup_restore.AutoBackupFlag(FALSE); end;';
# resync catalog
resync catalog;
}
executing Memory Script

executing command: SET until clause

Starting restore at 21-MAR-18
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=161 device type=DISK
allocated channel: ORA_AUX_SBT_TAPE_1
channel ORA_AUX_SBT_TAPE_1: SID=166 device type=SBT_TAPE
channel ORA_AUX_SBT_TAPE_1: NMDA Oracle v8.2.1

channel ORA_AUX_SBT_TAPE_1: starting datafile backup set restore
channel ORA_AUX_SBT_TAPE_1: restoring control file
channel ORA_AUX_SBT_TAPE_1: reading from backup piece c-2968626886-20180321-09
channel ORA_AUX_SBT_TAPE_1: piece handle=c-2968626886-20180321-09 tag=TAG20180321T085709
channel ORA_AUX_SBT_TAPE_1: restored backup piece 1
channel ORA_AUX_SBT_TAPE_1: restore complete, elapsed time: 00:00:25
output file name=/cln/exp/ora_data3/devaux/DEVTOOLS/controlfile/o1_mf_fc4ww0f3_.ctl
Finished restore at 21-MAR-18

sql statement: alter database mount clone database

sql statement: alter system archive log current

sql statement: begin dbms_backup_restore.AutoBackupFlag(FALSE); end;

starting full resync of recovery catalog
full resync complete

contents of Memory Script:
{
# set requested point in time
set until  time "to_date('21-MAR-18 10:30:00','DD-MON-YY HH24:MI:SS')";
plsql <<<--
declare
  sqlstatement       varchar2(512);
  pdbname            varchar2(30);
  offline_not_needed exception;
  pragma exception_init(offline_not_needed, -01539);
begin
   pdbname := null; -- pdbname
  sqlstatement := 'alter tablespace '||  'TS_BAMBOO' ||' offline immediate';
  krmicd.writeMsg(6162, sqlstatement);
  krmicd.execSql(sqlstatement, 0, pdbname);
exception
  when offline_not_needed then
    null;
end; >>>;
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile  1 to new;
set newname for clone datafile  3 to new;
set newname for clone datafile  2 to new;
set newname for clone tempfile  1 to new;
set newname for datafile  6 to
 "/cln/exp/ora_data3/devtools/bamboo_dev_01.dbf";
set newname for datafile  8 to
 "/cln/exp/ora_data3/devtools/bamboo_dev_02.dbf";
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  1, 3, 2, 6, 8;

switch clone datafile all;
}
executing Memory Script

executing command: SET until clause

sql statement: alter tablespace TS_BAMBOO offline immediate

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to /cln/exp/ora_data3/devaux/DEVTOOLS/datafile/o1_mf_temp_%u_.tmp in control file

Starting restore at 21-MAR-18
using channel ORA_AUX_DISK_1
using channel ORA_AUX_SBT_TAPE_1

channel ORA_AUX_SBT_TAPE_1: starting datafile backup set restore
channel ORA_AUX_SBT_TAPE_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_SBT_TAPE_1: restoring datafile 00006 to /cln/exp/ora_data3/devtools/bamboo_dev_01.dbf
channel ORA_AUX_SBT_TAPE_1: reading from backup piece dbfull_DEVTOOLS_2968626886_971292740_68520_1
channel ORA_AUX_SBT_TAPE_1: piece handle=dbfull_DEVTOOLS_2968626886_971292740_68520_1 tag=TAG20180320T193217
channel ORA_AUX_SBT_TAPE_1: restored backup piece 1
channel ORA_AUX_SBT_TAPE_1: restore complete, elapsed time: 00:06:45
channel ORA_AUX_SBT_TAPE_1: starting datafile backup set restore
channel ORA_AUX_SBT_TAPE_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_SBT_TAPE_1: restoring datafile 00002 to /cln/exp/ora_data3/devaux/DEVTOOLS/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_SBT_TAPE_1: reading from backup piece dbfull_DEVTOOLS_2968626886_971293571_68522_1
channel ORA_AUX_SBT_TAPE_1: piece handle=dbfull_DEVTOOLS_2968626886_971293571_68522_1 tag=TAG20180320T193217
channel ORA_AUX_SBT_TAPE_1: restored backup piece 1
channel ORA_AUX_SBT_TAPE_1: restore complete, elapsed time: 00:00:45
channel ORA_AUX_SBT_TAPE_1: starting datafile backup set restore
channel ORA_AUX_SBT_TAPE_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_SBT_TAPE_1: restoring datafile 00001 to /cln/exp/ora_data3/devaux/DEVTOOLS/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_SBT_TAPE_1: reading from backup piece dbfull_DEVTOOLS_2968626886_971293622_68524_1
channel ORA_AUX_SBT_TAPE_1: piece handle=dbfull_DEVTOOLS_2968626886_971293622_68524_1 tag=TAG20180320T193217
channel ORA_AUX_SBT_TAPE_1: restored backup piece 1
channel ORA_AUX_SBT_TAPE_1: restore complete, elapsed time: 00:00:45
channel ORA_AUX_SBT_TAPE_1: starting datafile backup set restore
channel ORA_AUX_SBT_TAPE_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_SBT_TAPE_1: restoring datafile 00003 to /cln/exp/ora_data3/devaux/DEVTOOLS/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_SBT_TAPE_1: reading from backup piece dbfull_DEVTOOLS_2968626886_971293662_68526_1
channel ORA_AUX_SBT_TAPE_1: piece handle=dbfull_DEVTOOLS_2968626886_971293662_68526_1 tag=TAG20180320T193217
channel ORA_AUX_SBT_TAPE_1: restored backup piece 1
channel ORA_AUX_SBT_TAPE_1: restore complete, elapsed time: 00:00:25
channel ORA_AUX_SBT_TAPE_1: starting datafile backup set restore
channel ORA_AUX_SBT_TAPE_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_SBT_TAPE_1: restoring datafile 00008 to /cln/exp/ora_data3/devtools/bamboo_dev_02.dbf
channel ORA_AUX_SBT_TAPE_1: reading from backup piece dbfull_DEVTOOLS_2968626886_971293677_68527_1
channel ORA_AUX_SBT_TAPE_1: piece handle=dbfull_DEVTOOLS_2968626886_971293677_68527_1 tag=TAG20180320T193217
channel ORA_AUX_SBT_TAPE_1: restored backup piece 1
channel ORA_AUX_SBT_TAPE_1: restore complete, elapsed time: 00:00:25
Finished restore at 21-MAR-18

datafile 1 switched to datafile copy
input datafile copy RECID=4 STAMP=971367011 file name=/cln/exp/ora_data3/devaux/DEVTOOLS/datafile/o1_mf_system_fc4xd5vb_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=5 STAMP=971367011 file name=/cln/exp/ora_data3/devaux/DEVTOOLS/datafile/o1_mf_undotbs1_fc4xf9rs_.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=6 STAMP=971367011 file name=/cln/exp/ora_data3/devaux/DEVTOOLS/datafile/o1_mf_sysaux_fc4xbkwr_.dbf

contents of Memory Script:
{
# set requested point in time
set until  time "to_date('21-MAR-18 10:30:00','DD-MON-YY HH24:MI:SS')";
# online the datafiles restored or switched
sql clone "alter database datafile  1 online";
sql clone "alter database datafile  3 online";
sql clone "alter database datafile  2 online";
sql clone "alter database datafile  6 online";
sql clone "alter database datafile  8 online";
# recover and open resetlogs
recover clone database tablespace  "TS_BAMBOO", "SYSTEM", "UNDOTBS1", "SYSAUX" delete archivelog;
alter clone database open resetlogs;
}
executing Memory Script

executing command: SET until clause

sql statement: alter database datafile  1 online

sql statement: alter database datafile  3 online

sql statement: alter database datafile  2 online

sql statement: alter database datafile  6 online

sql statement: alter database datafile  8 online

Starting recover at 21-MAR-18
using channel ORA_AUX_DISK_1
using channel ORA_AUX_SBT_TAPE_1

starting media recovery

archived log for thread 1 with sequence 13773 is already on disk as file /cln/exp/ora_data3/archivelog/devtools/log1_13773_884618118.arc
archived log for thread 1 with sequence 13774 is already on disk as file /cln/exp/ora_data3/archivelog/devtools/log1_13774_884618118.arc
archived log for thread 1 with sequence 13775 is already on disk as file /cln/exp/ora_data3/archivelog/devtools/log1_13775_884618118.arc
archived log for thread 1 with sequence 13776 is already on disk as file /cln/exp/ora_data3/archivelog/devtools/log1_13776_884618118.arc
archived log for thread 1 with sequence 13777 is already on disk as file /cln/exp/ora_data3/archivelog/devtools/log1_13777_884618118.arc
archived log for thread 1 with sequence 13778 is already on disk as file /cln/exp/ora_data3/archivelog/devtools/log1_13778_884618118.arc
archived log for thread 1 with sequence 13779 is already on disk as file /cln/exp/ora_data3/archivelog/devtools/log1_13779_884618118.arc
archived log for thread 1 with sequence 13780 is already on disk as file /cln/exp/ora_data3/archivelog/devtools/log1_13780_884618118.arc
archived log for thread 1 with sequence 13781 is already on disk as file /cln/exp/ora_data3/archivelog/devtools/log1_13781_884618118.arc
archived log file name=/cln/exp/ora_data3/archivelog/devtools/log1_13773_884618118.arc thread=1 sequence=13773
archived log file name=/cln/exp/ora_data3/archivelog/devtools/log1_13774_884618118.arc thread=1 sequence=13774
archived log file name=/cln/exp/ora_data3/archivelog/devtools/log1_13775_884618118.arc thread=1 sequence=13775
archived log file name=/cln/exp/ora_data3/archivelog/devtools/log1_13776_884618118.arc thread=1 sequence=13776
archived log file name=/cln/exp/ora_data3/archivelog/devtools/log1_13777_884618118.arc thread=1 sequence=13777
archived log file name=/cln/exp/ora_data3/archivelog/devtools/log1_13778_884618118.arc thread=1 sequence=13778
archived log file name=/cln/exp/ora_data3/archivelog/devtools/log1_13779_884618118.arc thread=1 sequence=13779
archived log file name=/cln/exp/ora_data3/archivelog/devtools/log1_13780_884618118.arc thread=1 sequence=13780
archived log file name=/cln/exp/ora_data3/archivelog/devtools/log1_13781_884618118.arc thread=1 sequence=13781
media recovery complete, elapsed time: 00:00:41
Finished recover at 21-MAR-18

database opened

contents of Memory Script:
{
# make read only the tablespace that will be exported
sql clone 'alter tablespace  TS_BAMBOO read only';
# create directory for datapump import
sql "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/cln/exp/ora_data3/devaux''";
# create directory for datapump export
sql clone "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/cln/exp/ora_data3/devaux''";
}
executing Memory Script

sql statement: alter tablespace  TS_BAMBOO read only

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/cln/exp/ora_data3/devaux''

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/cln/exp/ora_data3/devaux''

Performing export of metadata...
   EXPDP> Starting "SYS"."TSPITR_EXP_aslo_able":
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TRIGGER
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   EXPDP> Master table "SYS"."TSPITR_EXP_aslo_able" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TSPITR_EXP_aslo_able is:
   EXPDP>   /cln/exp/ora_data3/devaux/tspitr_aslo_93713.dmp
   EXPDP> ******************************************************************************
   EXPDP> Datafiles required for transportable tablespace TS_BAMBOO:
   EXPDP>   /cln/exp/ora_data3/devtools/bamboo_dev_01.dbf
   EXPDP>   /cln/exp/ora_data3/devtools/bamboo_dev_02.dbf
   EXPDP> Job "SYS"."TSPITR_EXP_aslo_able" successfully completed at Wed Mar 21 16:12:30 2018 elapsed 0 00:01:02
Export completed


contents of Memory Script:
{
# shutdown clone before import
shutdown clone abort
# drop target tablespaces before importing them back
sql 'drop tablespace  TS_BAMBOO including contents keep datafiles cascade constraints';
}
executing Memory Script

Oracle instance shut down

sql statement: drop tablespace  TS_BAMBOO including contents keep datafiles cascade constraints

Performing import of metadata...
   IMPDP> Master table "SYS"."TSPITR_IMP_aslo_adcd" successfully loaded/unloaded
   IMPDP> Starting "SYS"."TSPITR_IMP_aslo_adcd":
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/TRIGGER
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   IMPDP> Job "SYS"."TSPITR_IMP_aslo_adcd" successfully completed at Wed Mar 21 16:13:36 2018 elapsed 0 00:00:52
Import completed


contents of Memory Script:
{
# make read write and offline the imported tablespaces
sql 'alter tablespace  TS_BAMBOO read write';
sql 'alter tablespace  TS_BAMBOO offline';
# enable autobackups after TSPITR is finished
sql 'begin dbms_backup_restore.AutoBackupFlag(TRUE); end;';
# resync catalog
resync catalog;
}
executing Memory Script

sql statement: alter tablespace  TS_BAMBOO read write

sql statement: alter tablespace  TS_BAMBOO offline

sql statement: begin dbms_backup_restore.AutoBackupFlag(TRUE); end;

starting full resync of recovery catalog
full resync complete

Removing automatic instance
Automatic instance removed
auxiliary instance file /cln/exp/ora_data3/devaux/DEVTOOLS/datafile/o1_mf_temp_fc4xjs4o_.tmp deleted
auxiliary instance file /cln/exp/ora_data3/devaux/DEVTOOLS/onlinelog/o1_mf_3_fc4xjmgh_.log deleted
auxiliary instance file /cln/exp/ora_data3/devaux/DEVTOOLS/onlinelog/o1_mf_2_fc4xjgl4_.log deleted
auxiliary instance file /cln/exp/ora_data3/devaux/DEVTOOLS/onlinelog/o1_mf_1_fc4xj9s8_.log deleted
auxiliary instance file /cln/exp/ora_data3/devaux/DEVTOOLS/datafile/o1_mf_sysaux_fc4xbkwr_.dbf deleted
auxiliary instance file /cln/exp/ora_data3/devaux/DEVTOOLS/datafile/o1_mf_undotbs1_fc4xf9rs_.dbf deleted
auxiliary instance file /cln/exp/ora_data3/devaux/DEVTOOLS/datafile/o1_mf_system_fc4xd5vb_.dbf deleted
auxiliary instance file /cln/exp/ora_data3/devaux/DEVTOOLS/controlfile/o1_mf_fc4ww0f3_.ctl deleted
auxiliary instance file tspitr_aslo_93713.dmp deleted
Finished recover at 21-MAR-18

Recovery Manager complete.
 /home/oracle $sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Wed Mar 21 16:16:52 2018

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options


SQL> select tablespace_name,status from dba_tablespaces;

TABLESPACE_NAME                STATUS
------------------------------ ---------
SYSTEM                         ONLINE
SYSAUX                         ONLINE
UNDOTBS1                       ONLINE
TEMP                           ONLINE
USERS                          ONLINE
TS_STASH                       ONLINE
TS_BAMBOO                      OFFLINE
TS_SONAR                       ONLINE

8 rows selected.

SQL> alter tablespace TS_BAMBOO online;

Tablespace altered.

SQL> select tablespace_name,status from dba_tablespaces;

TABLESPACE_NAME                STATUS
------------------------------ ---------
SYSTEM                         ONLINE
SYSAUX                         ONLINE
UNDOTBS1                       ONLINE
TEMP                           ONLINE
USERS                          ONLINE
TS_STASH                       ONLINE
TS_BAMBOO                      ONLINE
TS_SONAR                       ONLINE

8 rows selected.

SQL> exit

...or even better, if flashback is enabled, use flashback database (this takes seconds instead of hours) - but note this rewinds the whole database, not just a tablespace!

SQL> flashback database to timestamp to_timestamp ('27-NOV-14 09:25:00','DD-MON-YY HH24:MI:SS');

Flashback complete.

SQL> alter database open resetlogs;

Database altered.

SQL> 

Recover a dropped table[edit]

Used the purge option? Pisser - no flashback to before drop. However, from 12c it is possible (at the cost of RMAN working very hard) to restore an individual table.
It was always possible in any version by doing the steps manually (as long as RMAN backups exist and the database is in archivelog mode), but 12c automates it down to one line:

recover table adam.sales until time '2015-07-02 09:33:00' auxiliary destination '/tmp/auxi';
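If the original table still exists, or you want to inspect the recovered data before replacing anything, the recovered copy can be renamed on the way in. A sketch only, reusing the hypothetical adam.sales table and auxiliary destination from the example above (the sales_recovered name is made up for illustration):

```
recover table adam.sales
  until time '2015-07-02 09:33:00'
  auxiliary destination '/tmp/auxi'
  remap table adam.sales:sales_recovered;
```

Alternatively, the notableimport clause skips the import step entirely and just leaves the Data Pump dump file behind for a manual impdp later.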

Restore and recover datafile[edit]

To get a list of datafiles and their numbers, use 'report schema' or 'select file#,name from v$datafile;' (this queries the control file)

rman target /
startup mount;
restore datafile 1;
recover datafile 1;
alter database open;

or while database is open...

rman target /
sql 'alter database datafile 3 offline';
restore datafile 3;
recover datafile 3;
sql 'alter database datafile 3 online';

Restore datafile to a different location and recover[edit]

rman target /
startup mount;
run {
set newname for datafile 2 to '/oracle/SID/users.dbf';
restore tablespace users;
switch datafile all;
recover tablespace users;
alter database open;
}

Restore database using RMAN[edit]

If all else fails, restore the database to a time before the bad thing happened, then recover it by rolling forward with the archivelogs.
Brief list of the actions involved. Not all steps will be needed - it depends on whether you restore to the same or a different server:

- create dsm.sys stanza to setup access to TSM server
- create tdpo.opt to allow Oracle access to TSM
- create dsm.opt to link tdpo.opt to dsm.sys
- create other required logging directories/files (as mentioned in dsm.sys stanza)
- start idle Oracle instance
- restore recent controlfile from TSM
- shutdown instance and restart in mount mode
- restore database from backup
- recover database and roll forward to desired point-in-time
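As an illustration of the first three steps, a minimal set of TSM client files might look like the sketch below. All server names, node names, and paths here are hypothetical - take the real values from the TSM administrator:

```
# dsm.sys - one stanza per TSM server
SERVERNAME         tsmsrv1
  COMMMETHOD       TCPIP
  TCPSERVERADDRESS tsmsrv1.example.com
  TCPPORT          1500
  PASSWORDACCESS   GENERATE
  NODENAME         dbhost1_oracle
  ERRORLOGNAME     /oracle/SID/admin/dsmerror.log

# tdpo.opt - links Oracle (via the sbt library) to the API config
DSMI_ORC_CONFIG    /oracle/SID/admin/dsm.opt
DSMI_LOG           /oracle/SID/admin
TDPO_NODE          dbhost1_oracle

# dsm.opt - points the TSM API at the right dsm.sys stanza
SERVERNAME         tsmsrv1
```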

Can do it by time, scn, logseq.
Can do it in one step or two.

export ORACLE_SID=<SID>
. oraenv
sqlplus / as sysdba
startup mount
exit

cat > start_restore.ksh <<EORMAN
rman target /
connect catalog catusr/catpwd@catdb;
run {
allocate channel c1 device type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=/oracle/${ORACLE_SID}/admin/tdpo.opt)';
set until time "to_date('20-MAR-14 10:00:00','DD-MON-YY HH24:MI:SS')";
restore database;
release channel c1;
}
EORMAN

Add this line if connecting to a catalog: connect catalog catusr/catpwd@catdb;

cat > start_recover.ksh <<EORMAN
rman target /
run {
allocate channel c1 device type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=/oracle/${ORACLE_SID}/admin/tdpo.opt)';
recover database until time "to_date('20-MAR-14 10:00:00','DD-MON-YY HH24:MI:SS')";
release channel c1;
}
EORMAN


or by scn...

rman target / catalog catusr/catpwd@catdb
run {
set until scn=10328571660050;
recover database;
}

or

export ORACLE_SID=<SID>
. oraenv
rman target / catalog catusr/catpwd@catdb
startup mount;
run {
set until logseq 101145;
restore database;
recover database;
}

Lost the control files as well?[edit]

Assuming database is started in NOMOUNT mode...

Catalog available[edit]

Restore controlfile(s) to original location(s)

RESTORE CONTROLFILE;

Restore the controlfile but only to a location other than those mentioned in the CONTROL_FILES parameter

RESTORE CONTROLFILE ... TO 'filename';

Restore controlfile(s) from specified file to original locations unless to 'filename' is specified (then only to this location).

RESTORE CONTROLFILE ... FROM 'piece handle'; or
RESTORE CONTROLFILE ... FROM 'piece handle' to 'filename'; or
RESTORE CONTROLFILE ... FROM TAG 'tag'; or
RESTORE CONTROLFILE ... FROM TAG 'tag' to 'filename';

No catalog available[edit]

Note: If the controlfile was backed up with the autobackup format set to a non-default location, the format must be given for the restore with: SET CONTROLFILE AUTOBACKUP FORMAT '<format>';
Without a catalog the following is not possible - RMAN has no way of knowing where to restore from:

RESTORE CONTROLFILE;

Restore controlfile(s) to original locations

RESTORE CONTROLFILE FROM AUTOBACKUP;

Restore the controlfile to specified location

RESTORE CONTROLFILE FROM AUTOBACKUP... TO 'filename';

Restore controlfile(s) from specified file to original locations unless to 'filename' is specified (then only to this location). Note: SET DBID must be used here.

SET DBID=<dbid>;
RESTORE CONTROLFILE ... FROM 'piece handle'; or
RESTORE CONTROLFILE ... FROM 'piece handle' to 'filename'; or
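Putting the no-catalog pieces together, a typical controlfile restore from a non-default autobackup location looks roughly like this (the DBID and disk path are hypothetical placeholders):

```
set dbid=3273853992;
startup nomount;
run {
  set controlfile autobackup format for device type disk to '/backup/%F';
  restore controlfile from autobackup;
}
alter database mount;
```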

Restoring from a tag is also not possible without a catalog - only the catalog knows which backup pieces carry which tags:

RESTORE CONTROLFILE ... FROM TAG 'tag'; or
RESTORE CONTROLFILE ... FROM TAG 'tag' to 'filename';

Another method using EMC NetWorker (NMDA)[edit]

RMAN> 
connected to target database: INSA (not mounted)

RMAN> connect catalog *
connected to recovery catalog database

RMAN> run {
2>     set dbid=3273853992;
3>     allocate channel t1 type 'sbt_tape' parms 'ENV=(NSR_SERVER=hn6000.xxxxxx.be,NSR_CLIENT=hn492,NSR_DATA_VOLUME_POOL=DD2DAILY)' debug 1;
4>     restore controlfile;
5> }
executing command: SET DBID
database name is "INSA" and DBID is 3273853992

allocated channel: t1
channel t1: SID=155 device type=SBT_TAPE
channel t1: NMDA Oracle v8.2.1

Starting restore at 21-MAR-18

channel t1: starting datafile backup set restore
channel t1: restoring control file
channel t1: reading from backup piece c-3273853992-20180319-0b
channel t1: piece handle=c-3273853992-20180319-0b tag=TAG20180319T103116
channel t1: restored backup piece 1
channel t1: restore complete, elapsed time: 00:00:25
output file name=/cln/acc/ora_data4/insa/control01.ctl
output file name=/cln/acc/ora_data4/insa/control02.ctl
output file name=/cln/acc/ora_data4/insa/control03.ctl
Finished restore at 21-MAR-18
released channel: t1

RMAN> **end-of-file**

RMAN> exit

rman
RMAN> connect target /

connected to target database: INSA (DBID=3273853992, not open)

RMAN> run {
2> allocate channel t1 type 'sbt_tape' parms 'ENV=(NSR_SERVER=hn6000.xxxxxx.be,NSR_CLIENT=hn492,NSR_DATA_VOLUME_POOL=DD2DAILY)' debug 1;
3> restore database;
4> }

using target database control file instead of recovery catalog
allocated channel: t1
channel t1: SID=155 device type=SBT_TAPE
channel t1: NMDA Oracle v8.2.1

Starting restore at 21-MAR-18

channel t1: starting datafile backup set restore
channel t1: specifying datafile(s) to restore from backup set
channel t1: restoring datafile 00006 to /cln/acc/ora_data4/insa/INS_DATA01.dbf
channel t1: reading from backup piece dbfull_INSA_3273853992_970341263_5178_1
channel t1: piece handle=dbfull_INSA_3273853992_970341263_5178_1 tag=TAG20180309T191421
channel t1: restored backup piece 1
channel t1: restore complete, elapsed time: 00:11:25
...
...
channel t1: starting datafile backup set restore
channel t1: specifying datafile(s) to restore from backup set
channel t1: restoring datafile 00023 to /cln/acc/ora_data4/insa/INS_DATA13.dbf
channel t1: reading from backup piece dbfull_INSA_3273853992_970353194_5201_1
channel t1: piece handle=dbfull_INSA_3273853992_970353194_5201_1 tag=TAG20180309T191421
channel t1: restored backup piece 1
channel t1: restore complete, elapsed time: 00:00:25
channel t1: starting datafile backup set restore
channel t1: specifying datafile(s) to restore from backup set
channel t1: restoring datafile 00003 to /cln/acc/ora_data4/insa/indx01.dbf
channel t1: reading from backup piece dbfull_INSA_3273853992_970353229_5202_1
channel t1: piece handle=dbfull_INSA_3273853992_970353229_5202_1 tag=TAG20180309T191421
channel t1: restored backup piece 1
channel t1: restore complete, elapsed time: 00:00:15
channel t1: starting datafile backup set restore
channel t1: specifying datafile(s) to restore from backup set
channel t1: restoring datafile 00005 to /cln/acc/ora_data4/insa/users01.dbf
channel t1: reading from backup piece dbfull_INSA_3273853992_970353254_5203_1
channel t1: piece handle=dbfull_INSA_3273853992_970353254_5203_1 tag=TAG20180309T191421
channel t1: restored backup piece 1
channel t1: restore complete, elapsed time: 00:00:15
channel t1: starting datafile backup set restore
channel t1: specifying datafile(s) to restore from backup set
channel t1: restoring datafile 00004 to /cln/acc/ora_data4/insa/tools01.dbf
channel t1: reading from backup piece dbfull_INSA_3273853992_970353261_5204_1
channel t1: piece handle=dbfull_INSA_3273853992_970353261_5204_1 tag=TAG20180309T191421
channel t1: restored backup piece 1
channel t1: restore complete, elapsed time: 00:00:15
Finished restore at 21-MAR-18
released channel: t1

RMAN> exit

SQL>
SQL> select open_mode from v$database;

OPEN_MODE
--------------------
MOUNTED

SQL> select controlfile_type from v$database;

CONTROL
-------
BACKUP

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
 /home/oracle $rman

Recovery Manager: Release 11.2.0.4.0 - Production on Wed Mar 21 13:33:19 2018

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

RMAN> exit


 /home/oracle $rman

Recovery Manager: Release 11.2.0.4.0 - Production on Wed Mar 21 13:44:20 2018

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target /

connected to target database: INSA (DBID=3273853992, not open)

RMAN> connect catalog rman/rman@rman12d;

connected to recovery catalog database

RMAN> run {
2> allocate channel t1 type 'sbt_tape' parms 'ENV=(NSR_SERVER=hn6000.xxxxxx.be,NSR_CLIENT=hn492,NSR_DATA_VOLUME_POOL=DD2DAILY)' debug 1;
3> recover database ;
4> }

allocated channel: t1
channel t1: SID=155 device type=SBT_TAPE
channel t1: NMDA Oracle v8.2.1

Starting recover at 21-MAR-18

starting media recovery

channel t1: starting archived log restore to default destination
channel t1: restoring archived log
archived log thread=1 sequence=1242
channel t1: reading from backup piece arclog_INSA_3273853992_970346643_5184_1
channel t1: piece handle=arclog_INSA_3273853992_970346643_5184_1 tag=TAG20180309T204403
channel t1: restored backup piece 1
channel t1: restore complete, elapsed time: 00:00:15
archived log file name=/cln/acc/ora_data2/archivelog/insaalog1_1242_960291562.arc thread=1 sequence=1242
...
...
channel t1: starting archived log restore to default destination
channel t1: restoring archived log
archived log thread=1 sequence=1338
channel t1: reading from backup piece arclog_INSA_3273853992_971166624_5584_1
channel t1: piece handle=arclog_INSA_3273853992_971166624_5584_1 tag=TAG20180319T083024
channel t1: restored backup piece 1
channel t1: restore complete, elapsed time: 00:00:15
archived log file name=/cln/acc/ora_data2/archivelog/insaa/log1_1338_960291562.arc thread=1 sequence=1338
channel t1: starting archived log restore to default destination
channel t1: restoring archived log
archived log thread=1 sequence=1339
channel t1: reading from backup piece arclog_INSA_3273853992_971173822_5588_1
channel t1: piece handle=arclog_INSA_3273853992_971173822_5588_1 tag=TAG20180319T103022
channel t1: restored backup piece 1
channel t1: restore complete, elapsed time: 00:00:15
archived log file name=/cln/acc/ora_data2/archivelog/insaa/log1_1339_960291562.arc thread=1 sequence=1339
unable to find archived log
archived log thread=1 sequence=1340
released channel: t1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 03/21/2018 14:21:52
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 1340 and starting SCN of 274665299833

(This RMAN-06054 is expected here: media recovery has applied every archived log that was backed up, so the database can now be opened with resetlogs.)
RMAN> alter database open resetlogs;

database opened
new incarnation of database registered in recovery catalog
starting full resync of recovery catalog
full resync complete

RMAN> exit

Restore database after failed upgrade[edit]

Problem[edit]

Upgrade has failed part way
Catalog is still on old ORACLE_HOME
Database is already on new ORACLE_HOME

Solution[edit]

  • Shut the database down
shutdown immediate
  • Set old home in oratab
vi /etc/oratab
  • Restore a controlfile from before the upgrade
rman target /
connect catalog user/pwd@cat
startup nomount;
list backup of controlfile;
restore controlfile from autobackup;
  • Read the controlfile
startup mount;
  • Restore the database
run {
allocate channel c1 device type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=/oracle/SID/admin/tdpo.opt)';
set until time "to_date('11-DEC-2015 10:30:00','DD-MON-YYYY HH24:MI:SS')";
restore database;
release channel c1;
}
  • Recover the database
run {
allocate channel c1 device type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=/oracle/SID/admin/tdpo.opt)';
set until time "to_date('11-DEC-2015 10:30:00','DD-MON-YYYY HH24:MI:SS')";
recover database;
release channel c1;
}
  • Start the database
alter database open resetlogs;

Duplicating an 11gR2 Oracle database with no connection to the target[edit]

One of the new Backup and Recovery features of Oracle 11g Release 2 (11gR2) is the ability to duplicate a database without connecting to the target database.
In RMAN terminology, the “target” database is the one you wish to duplicate.
In this case, the only requirement is that you provide a disk backup location where RMAN can find all the backups, data file copies, archived logs, and control file copies for database duplication.
This database duplication enhancement is helpful when it is not possible to connect to the target database and the recovery catalog.

This came in handy for me recently on a customer project. They wanted me to test a disaster recovery scenario in which the production database server was completely gone and we needed to restore it from tape onto another server.
This is a fairly typical DR situation, but in this case it was made more difficult because the directory structure on the test server didn’t match that of the production server.
Specifically, on the production server, everything was located on the E: drive while on the Test server, the only drive that exists is C:.
This means that all file locations need to be changed as part of the duplicate procedure.
This can all be done in one single RMAN command.

Note: One thing that makes this process easier is to enable controlfile autobackups on the production database.
This is a good practice in general and should be configured for all databases.

rman target / nocatalog
configure controlfile autobackup on;

Here are the steps that I followed:

Step 1) Copy the most recent backup files and archivelog files to the test server.
In this case, we restored from tape to the location c:\prod backup files. The backup files and archivelogs come from the following locations:

E:\app\flash_recovery_area\prod\ARCHIVELOG\2013_03_21
E:\app\flash_recovery_area\prod\BACKUPSET\2013_03_21
E:\app\flash_recovery_area\prod\AUTOBACKUP\2013_03_21

Step 2) Create a file called INITPROD2.ORA in %ORACLE_HOME%\database containing one line: db_name=PROD2

Step 3) At a command prompt:

set oracle_sid=prod2
oradim -new -sid prod2
sqlplus / as sysdba
startup nomount
exit
rman auxiliary / msglog=c:\restore.log

Step 4) At the RMAN prompt:

duplicate database to prod2
spfile
    set control_files='C:\app\oracle\oradata\prod2\control01.ctl'
    set db_file_name_convert='E:\app\oradata\prod','C:\app\oracle\oradata\prod2','E:\app\prod_tbs','C:\app\oracle\oradata\prod2'
    set log_file_name_convert='E:\app\oradata\prod','C:\app\oracle\oradata\prod2','C:\oracle\oradata\prod','C:\app\oracle\oradata\prod2'
    set db_recovery_file_dest='c:\app\administrator\flash_recovery_area'
    set diagnostic_dest='c:\app\administrator\diag'
    backup location "c:\prod backup files\";

That's all it takes! As you can see, I changed all file location parameters and converted datafile and logfile names with the _CONVERT parameters.
You can set any parameters for the new database in this way. For example you may want to disable all database jobs in the duplicate database by setting job_queue_processes=0.
Or, change the size of the SGA for a smaller server. RMAN will create the PROD2 database, perform as much recovery as possible given the available archive logs and open the database with the resetlogs option.
This is a really nice feature of RMAN.
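For instance, the job-disabling suggestion above is just one more set clause in the same duplicate command - a fragment only, not a complete statement:

```
duplicate database to prod2
spfile
    set job_queue_processes='0'
    set sga_target='2G'
    backup location "c:\prod backup files\";
```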

Update RMAN configuration parameters from SQL*Plus[edit]

If you back up the controlfile to trace with:

alter database backup controlfile to trace as '/oracle/${OSID}/admin/${NSID}_controlfile.sql';

the resulting trace file contains DBMS_BACKUP_RESTORE calls like the example below, which can be run from SQL*Plus to reinstate the RMAN configuration:

-- Configure snapshot controlfile filename
EXECUTE SYS.DBMS_BACKUP_RESTORE.CFILESETSNAPSHOTNAME('/oracle/product/11.2.0.3/dbs/snapshot_controlfile_WM820T.ctl');
-- Configure RMAN configuration record 1
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RETENTION POLICY','TO RECOVERY WINDOW OF 31 DAYS');
-- Configure RMAN configuration record 2
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEFAULT DEVICE TYPE TO','''SBT_TAPE''');
-- Configure RMAN configuration record 3
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CHANNEL','DEVICE TYPE ''SBT_TAPE'' FORMAT   ''LOG_%d_t%T_s%s_u%U'' PARMS  ''ENV=(TDPO_OPTFILE=/oracle/WM820T/admin/tdpo.opt)''');
-- Configure RMAN configuration record 4
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEVICE TYPE','''SBT_TAPE'' PARALLELISM 1 BACKUP TYPE TO BACKUPSET');
-- Configure RMAN configuration record 5
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE','''SBT_TAPE'' TO ''LOG_%d_controlfile_%F.rman''');
-- Configure RMAN configuration record 6
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CONTROLFILE AUTOBACKUP','ON');
-- Configure RMAN configuration record 7
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('SNAPSHOT CONTROLFILE NAME','TO ''snapshot_controlfile_WM820T.ctl''');
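The same settings are normally made (and are easier to read) with ordinary RMAN CONFIGURE commands; the configuration records above correspond to something like this:

```
configure retention policy to recovery window of 31 days;
configure default device type to 'sbt_tape';
configure channel device type 'sbt_tape' format 'LOG_%d_t%T_s%s_u%U' parms 'ENV=(TDPO_OPTFILE=/oracle/WM820T/admin/tdpo.opt)';
configure device type 'sbt_tape' parallelism 1 backup type to backupset;
configure controlfile autobackup format for device type 'sbt_tape' to 'LOG_%d_controlfile_%F.rman';
configure controlfile autobackup on;
configure snapshot controlfile name to 'snapshot_controlfile_WM820T.ctl';
```

The DBMS_BACKUP_RESTORE route is mainly useful when the settings have to be re-applied from SQL*Plus, for example after recreating the controlfile from a trace backup.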

Script to reorganise datafiles into neat new filesystems[edit]

This script can generate 'set newname' statements for use when duplicating a database for standby.
If the files are scattered all over the place, this will gather them together and spread the datafiles evenly over the available space.

-- =============================================================================
-- Filename    : reorganise_datafiles.sql
-- Description : Shares the datafiles on one host over several
--               filesystems on another host
--
-- Notes       : Fill the l_datafile_tab and optionally the l_archive_tab
--               with the names of the new filesystems
--
-- Modification History
-- ====================
-- Who               When      What
-- ================= ========= =================================================
-- Stuart Barkley    22-OCT-12 Created
-- Stuart Barkley    29-OCT-12 Added tempfiles to cursor c_row
-- Stuart Barkley    01-NOV-12 Added code to treat archive log files separately
-- =============================================================================

spool reorganise_datafiles
set serveroutput on format wrapped size unlimited
set termo on
set trims on
set lines 200
set pages 1000
set feedb off
set verif off
create or replace type l_filesystem_obj as object
    (
        name        varchar2(100)
    ,   bigness     number
    )
/

create or replace type l_filesystem_tabtype is table of l_filesystem_obj
/

declare

    -- =========================================================
    -- add/delete/replace the lines in this table with the names
    -- of the new mount points for normal datafiles
    -- =========================================================
    l_datafile_tab            l_filesystem_tabtype := l_filesystem_tabtype(
                                                                            l_filesystem_obj( '/oracle/JDBEOP1/oradata1/' ,0)
                                                                          , l_filesystem_obj( '/oracle/JDBEOP1/oradata2/' ,0)
                                                                          , l_filesystem_obj( '/oracle/JDBEOP1/oradata3/' ,0)
                                                                          , l_filesystem_obj( '/oracle/JDBEOP1/oradata4/' ,0)
                                                                          );

    -- =========================================================
    -- add/delete/replace the lines in this table with the names
    -- of the new mount points for archive log datafiles
    -- =========================================================
    l_archive_tab             l_filesystem_tabtype := l_filesystem_tabtype(
                                                                            l_filesystem_obj( '/oracle/JDBEOP1/arcdata/' ,0)
                                                                          );

    -- =====================================
    -- gather info about data and temp files
    -- =====================================
    cursor c_datafiles is
    select file_id                                        file_id
    ,      file_name                                      full_file_name
    ,      substr(file_name,instr(file_name,'/',-1)+1)    file_name
    ,      round(sum(bytes)/1024/1024/1024,2)             gb
    ,      'data'                                         file_type
    from   dba_data_files ddf
    where  1=1
    and    upper(file_name) not like '%ARC%'
    group  by file_id
    ,      file_name
    union
    select file_id                                        file_id
    ,      file_name                                      full_file_name
    ,      substr(file_name,instr(file_name,'/',-1)+1)    file_name
    ,      round(sum(bytes)/1024/1024/1024,2)             gb
    ,      'temp'                                         file_type
    from   dba_temp_files ddf
    where  1=1
    group  by file_id
    ,      file_name
    order  by 4 desc
    ;

    -- ===================================
    -- gather info about archive log files
    -- ===================================
    cursor c_archivefiles is
    select file_id                                        file_id
    ,      file_name                                      full_file_name
    ,      substr(file_name,instr(file_name,'/',-1)+1)    file_name
    ,      round(sum(bytes)/1024/1024/1024,2)             gb
    ,      'data'                                         file_type
    from   dba_data_files ddf
    where  1=1
    and    upper(file_name) like '%ARC%'
    group  by file_id
    ,      file_name
    order  by 4 desc
    ;

    l_running_total       pls_integer    := 0;
    l_fs_num              binary_integer := 1;
    l_total_db_size       number;

begin

    dbms_output.enable(null);

    for r_row in c_datafiles
    loop

        -- ===========================================
        -- when we get to the last bucket, start again
        -- ===========================================
        if l_fs_num > nvl(l_datafile_tab.count,0) then
           l_fs_num := 1;
        end if;

        -- ==============================
        -- update the size of this bucket
        -- ==============================
        l_datafile_tab(l_fs_num).bigness := l_datafile_tab(l_fs_num).bigness + r_row.gb;

        -- =====================
        -- print out a good line
        -- =====================
        dbms_output.put_line ( 'set newname for '||r_row.file_type||'file '||r_row.file_id||' to '''||l_datafile_tab(l_fs_num).name||r_row.file_name||''';' );
        l_fs_num := l_fs_num+1;

    end loop;  -- c_datafiles


    -- ========================================= 
    -- datafiles done, now the archive log files
    -- =========================================
    l_fs_num := 1;
    for r_row in c_archivefiles
    loop

        -- ===========================================
        -- when we get to the last bucket, start again
        -- ===========================================
        if l_fs_num > nvl(l_archive_tab.count,0) then
           l_fs_num := 1;
        end if;

        -- ==============================
        -- update the size of this bucket
        -- ==============================
        l_archive_tab(l_fs_num).bigness := l_archive_tab(l_fs_num).bigness + r_row.gb;

        -- =====================
        -- print out a good line
        -- =====================
        dbms_output.put_line ( 'set newname for '||r_row.file_type||'file '||r_row.file_id||' to '''||l_archive_tab(l_fs_num).name||r_row.file_name||''';' );
        l_fs_num := l_fs_num+1;

    end loop;  -- c_archivefiles


    -- =========
    -- summarise
    -- =========
    dbms_output.put_line (null);
    dbms_output.put_line ( 'Summary');
    dbms_output.put_line ( '=======');
    for l_fs_num in 1 .. nvl(l_datafile_tab.count,0)
    loop
        dbms_output.put_line ( 'Mount point '||l_datafile_tab(l_fs_num).name||' will have a total size of '||l_datafile_tab(l_fs_num).bigness||' Gb');
    end loop;

    for l_fs_num in 1 .. nvl(l_archive_tab.count,0)
    loop
        dbms_output.put_line ( 'Mount point '||l_archive_tab(l_fs_num).name||' will have a total size of '||l_archive_tab(l_fs_num).bigness||' Gb');
    end loop;
    dbms_output.put_line (null);


exception

when others then
    dbms_output.put_line(sqlerrm);
    raise_application_error(-20000,'Problem');

end;
/

spool off

drop type l_filesystem_tabtype
/

drop type l_filesystem_obj
/
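The PL/SQL block above deals files round-robin into filesystem "buckets", wrapping the index back to 1 once it passes the last bucket, and keeps a running GB total per mount point. A minimal Python sketch of the same allocation logic (the mount points, file names, and sizes here are made-up examples, and file IDs are simply enumerated rather than read from dba_data_files):

```python
# Sketch of the round-robin "set newname" allocation performed by the
# PL/SQL block above. Mount points and file sizes are illustrative only.

def allocate(files, mounts):
    """Deal (name, gb) files into mounts in turn; return the generated
    'set newname' lines and a running total of GB per mount point."""
    totals = {m: 0.0 for m in mounts}
    lines = []
    i = 0
    for file_id, (name, gb) in enumerate(files, start=1):
        if i >= len(mounts):        # past the last bucket: start again
            i = 0
        mount = mounts[i]
        totals[mount] += gb         # update the size of this bucket
        lines.append(f"set newname for datafile {file_id} to '{mount}{name}';")
        i += 1
    return lines, totals

mounts = ['/oracle/DB1/data1/', '/oracle/DB1/data2/']   # hypothetical
files = [('system01.dbf', 10.0), ('users01.dbf', 4.0), ('undo01.dbf', 6.0)]

lines, totals = allocate(files, mounts)
for line in lines:
    print(line)
for m, gb in totals.items():
    print(f"Mount point {m} will have a total size of {gb} Gb")
```

Because the real cursor orders files by size descending, the largest files are spread first, which keeps the bucket totals roughly even.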

RMAN reporting by André Araujo (check the status of backups using the RMAN metadata stored in the database)[edit]

Reference: pythian.com
Reproduced here in case it disappears from the internets.


A lot of metadata about the RMAN backup jobs can be found in the V$RMAN_% views.
These views show past RMAN jobs as well as jobs currently running.
Once the jobs complete backup sets, metadata about the sets and pieces is also added to the control file and can be accessed through the V$BACKUP_% views.

For the queries in this post I need only four of those views:

  • V$BACKUP_SET
  • V$BACKUP_SET_DETAILS
  • V$RMAN_BACKUP_JOB_DETAILS
  • GV$RMAN_OUTPUT

Query 1 (Backup jobs' status and metadata)

set lines 220
set pages 1000
col cf         for 9,999
col df         for 9,999
col elapsed_seconds heading "ELAPSED|SECONDS"
col i0         for 9,999
col i1         for 9,999
col l          for 9,999
col input_type for a13
col dow for a11
col start_time for a23
col end_time   for a19
col output_mbytes for 9,999,999 heading "OUTPUT|MBYTES"
col session_recid for 999999 heading "SESSION|RECID"
col session_stamp for 99999999999 heading "SESSION|STAMP"
col status for a10 trunc
col time_taken_display for a10 heading "TIME|TAKEN"
col output_instance for 9999 heading "OUT|INST" noprint
select j.session_recid
,      j.session_stamp
,      to_char(j.start_time, 'yyyy-mm-dd hh24:mi:ss') start_time
,      to_char(j.end_time, 'yyyy-mm-dd hh24:mi:ss')   end_time
,      (j.output_bytes/1024/1024)                     output_mbytes
,      j.status
,      j.input_type
,      decode(to_char(j.start_time, 'd')
             , 1, 'Sunday',   2, 'Monday'
             , 3, 'Tuesday',  4, 'Wednesday'
             , 5, 'Thursday', 6, 'Friday'
             , 7, 'Saturday')   dow
,      j.elapsed_seconds
,      j.time_taken_display
,  x.cf, x.df, x.i0, x.i1, x.l
,  ro.inst_id output_instance
from V$RMAN_BACKUP_JOB_DETAILS j
  left outer join (select
                     d.session_recid, d.session_stamp,
                     sum(case when d.controlfile_included = 'YES' then d.pieces else 0 end) CF,
                     sum(case when d.controlfile_included = 'NO'
                               and d.backup_type||d.incremental_level = 'D' then d.pieces else 0 end) DF,
                     sum(case when d.backup_type||d.incremental_level = 'D0' then d.pieces else 0 end) I0,
                     sum(case when d.backup_type||d.incremental_level = 'I1' then d.pieces else 0 end) I1,
                     sum(case when d.backup_type = 'L' then d.pieces else 0 end) L
                   from
                     V$BACKUP_SET_DETAILS d
                     join V$BACKUP_SET s on s.set_stamp = d.set_stamp and s.set_count = d.set_count
                   where s.input_file_scan_only = 'NO'
                   group by d.session_recid, d.session_stamp) x
    on x.session_recid = j.session_recid and x.session_stamp = j.session_stamp
  left outer join (select o.session_recid, o.session_stamp, min(inst_id) inst_id
                   from GV$RMAN_OUTPUT o
                   group by o.session_recid, o.session_stamp)
    ro on ro.session_recid = j.session_recid and ro.session_stamp = j.session_stamp
where j.start_time > trunc(sysdate)-&NUMBER_OF_DAYS
order by j.start_time;

Query 2 (Backup set details)

set lines 220
set pages 1000
col backup_type for a4 heading "TYPE"
col controlfile_included heading "CF?"
col incremental_level heading "INCR LVL"
col completion_time for a23
col pieces for 999 heading "PCS"
col elapsed_seconds heading "ELAPSED|SECONDS"
col device_type for a10 trunc heading "DEVICE|TYPE"
col compressed for a4 heading "ZIP?"
col output_mbytes for 9,999,999 heading "OUTPUT|MBYTES"
col input_file_scan_only for a4 heading "SCAN|ONLY"
select
  d.bs_key, d.backup_type, d.controlfile_included, d.incremental_level, d.pieces,
  to_char(d.start_time, 'yyyy-mm-dd hh24:mi:ss') start_time,
  to_char(d.completion_time, 'yyyy-mm-dd hh24:mi:ss') completion_time,
  d.elapsed_seconds, d.device_type, d.compressed, (d.output_bytes/1024/1024) output_mbytes, s.input_file_scan_only
from V$BACKUP_SET_DETAILS d
  join V$BACKUP_SET s on s.set_stamp = d.set_stamp and s.set_count = d.set_count
where session_recid = &SESSION_RECID
  and session_stamp = &SESSION_STAMP
order by d.start_time;

Query 3 (Backup job output)

set lines 200 pages 1000 verif off headi off newpa none feedb off
select output
from   gv$rman_output
where  session_recid = &SESSION_RECID
and    session_stamp = &SESSION_STAMP
order  by recid;

RMAN tracing/debug[edit]

1. Log in to RMAN with logging and tracing enabled:
   rman target <un/pw@target_db> catalog <un/pw@catalog_db> debug trace='/tmp/rman_trace.trc' log='/tmp/rman_log.log'
2. Rename the log and trace files following your naming convention.
3. Select two datafiles as samples: the biggest and the smallest.
4. Take two sample backups, one onto u03 and one onto u01, naming the log and trace files accordingly. For example:
run
{
allocate channel d1 device type disk format 'd:/backups/%U.bkp';
backup database include current controlfile;
release channel d1;
}
5. During the backup, run hanganalyze and take a system state dump; the performance team will analyze them.

Hang Analyzer and System state Dump:
===================================
$ sqlplus /nolog
connect / as sysdba
oradebug setmypid
oradebug unlimit
oradebug tracefile_name
oradebug hanganalyze 3
wait 90 seconds
oradebug hanganalyze 3
exit


$ sqlplus /nolog
connect / as sysdba
oradebug setmypid
oradebug unlimit
oradebug tracefile_name
oradebug dump systemstate 255
wait 90 seconds
oradebug dump systemstate 255
wait 90 seconds
oradebug dump systemstate 255
exit