Datapump

From dbawiki

Use the flashback parameter to export and import a consistent dump

This performs a consistent export of two schemas.
Keeping the parameters in a file allows them to be reused for the import:

expdp system/***** parfile=db_params.par

where db_params.par would look like this:

flashback_time=systimestamp
directory=DATA_PUMP_DIR
dumpfile=db_schemas.dmp
logfile=db_schemas.log
schemas=(apps,test)

Data Pump finds the SCN that most closely matches the specified timestamp and uses that SCN to enable Flashback.
The export operation is performed with data that is consistent as of this SCN.
By default, expdp is consistent only for the table it is currently exporting. Use flashback_time=systimestamp to simulate the old exp behaviour of 'consistent=y'.
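The timestamp-to-SCN mapping that Data Pump performs can be inspected by hand with the built-in TIMESTAMP_TO_SCN and SCN_TO_TIMESTAMP functions. This is purely an illustration; the export does not need it:

```sql
-- Which SCN roughly corresponds to the current timestamp?
select timestamp_to_scn(systimestamp) as flashback_scn from dual;

-- ...and map an SCN back to a timestamp
select scn_to_timestamp(timestamp_to_scn(systimestamp)) from dual;
```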

Typical export with Data Pump

This method is useful for dynamically building a script. Note the %u substitution variable used to generate the parallel dump file names...

SID=$1
TIMESTAMP=`date +'%Y%m%d%H%M%S'`
FLASHBACK_TO="to_timestamp('13-02-2018 13:35:00', 'DD-MM-YYYY HH24:MI:SS')"

expdp "/"                                                  \
dumpfile=expdp_${SID}_D_FULL_${TIMESTAMP}_%u.dmp           \
logfile=expdp_${SID}_D_FULL_${TIMESTAMP}.log               \
job_name=expdp_${SID}_D_FULL_${TIMESTAMP}                  \
flashback_time=${FLASHBACK_TO}                             \
directory=data_pump_dir                                    \
reuse_dumpfiles=y                                          \
parallel=8                                                 \
full=y                                                     \
exclude=statistics

Reduce the amount of data exported/imported with clauses like these...

exclude=table:"in (select table_name from dba_tables where owner = 'EUCVDTA' and table_name like 'F4%')"  \
exclude=table:"in (select table_name from dba_tables where owner = 'EUCRPDTA' and table_name like 'F4%')" \
query=prd1.op_log:'"where trunc(creation_date) > trunc(sysdate-7)"'   \
QUERY=SIM_HEADER:"WHERE run_time > (SELECT TRUNC(MAX(run_time))-7 FROM sim_header) OR run_time IN (SELECT TRUNC(run_time, 'YEAR')-1 FROM sim_header)"
QUERY=SIM_BLOB:"WHERE sim_run_id > (SELECT MIN(sim_run_id) FROM sim_header WHERE run_time > (SELECT TRUNC(MAX(run_time))-7 FROM sim_header)) OR sim_run_id IN (SELECT sim_run_id FROM sim_header WHERE run_time IN (SELECT TRUNC(run_time, 'YEAR')-1 FROM sim_header))"
QUERY=SIM_HEADER_DETAILS:"WHERE sim_run_id > (SELECT MIN(sim_run_id) FROM sim_header WHERE run_time > (SELECT TRUNC(MAX(run_time))-7 FROM sim_header)) OR sim_run_id IN (SELECT sim_run_id FROM sim_header WHERE run_time IN (SELECT TRUNC(run_time, 'YEAR')-1 FROM sim_header))"
QUERY=SIM_DETAILS_BLOB:"WHERE unique_sim_key IN (SELECT unique_sim_key FROM sim_header_details WHERE sim_run_id > (SELECT MIN(sim_run_id) FROM sim_header WHERE run_time > (SELECT TRUNC(MAX(run_time))-7 FROM sim_header)) OR sim_run_id IN (SELECT sim_run_id FROM sim_header WHERE run_time IN (SELECT TRUNC(run_time, 'YEAR')-1 FROM sim_header)))"

Note that dba_tables stores owner names in upper case, so quote them accordingly in the subqueries.

Could be made neater by using a parfile...

expdp / parfile=/home/ibmtools/etc/oracle/nhapplp1_full.parfile

and the parfile could look like this

full=y
reuse_dumpfiles=y
compression=all
exclude=statistics
parallel=6

Datapump pitfalls/gotchas

  • Exclude a single table from a full export

If you have a par file like this and expect it to work, you will be disappointed:

exclude=table:"in ('scott.emp')"
full=y

The emp table will still be exported! Object-name filters match the unqualified object name (in upper case), so 'scott.emp' never matches anything.
To get this to work, you need to use a trick: export zero rows instead...

query=scott.emp:"where rownum < 1"
full=y


  • exclude and include are conceptually different...
exclude=function:"in ('scott.raise_sal')"

is not the same as

include=function:"not in ('scott.raise_sal')"

The first does what you think it will do: it exports all objects except a function called raise_sal owned by scott.
The second, convoluted though it is, will not do what you might think: it exports all functions (and only functions) except raise_sal owned by scott.

Find partitioned tables in the database

Before converting a database from Enterprise Edition to Standard Edition (MOS note 139642.1), partitioned tables will need to be merged, as Standard Edition does not support partitioning:

select table_owner
,      table_name
,      count(partition_name)
from   dba_tab_partitions
where  1=1
and    table_owner not in ('SYS','SYSTEM','PUBLIC','CTXSYS','MDSYS','AUDSYS')
group  by table_owner
,      table_name
order  by 1,2;

Get more detailed information while Data Pump is working

To get an idea of how long it takes to export each object, add this to the parfile (or command line)

metrics=y

Keep the master table after a successful Data Pump session

To see what went on in the master table, add this to the parfile (or command line)

keep_master=y

The master table contains all the Data Pump log messages.
It is used to track the detailed progress of a Data Pump job, in more detail than the log messages alone.
Amongst other things it contains:

  • Completed rows per table.
  • Total number of errors during the Data Pump operation.
  • Elapsed time for each table's export/import operation.
  • The current set of dump files.
  • The current state of every object exported or imported and their locations in the dump file set.
  • The job's user-supplied parameters.
  • The status of every worker process.
  • The current job state and restart information.
  • The dump file locations and directory name information.
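With keep_master=y the master table survives the job, named after the job in the schema of the user who ran it. A quick look at per-object timings might be done like this (a sketch only: the job name is made up, and the column names are as found in recent versions):

```sql
select object_schema
,      object_name
,      object_type
,      completed_rows
,      elapsed_time
from   system.SYS_EXPORT_FULL_01   -- master table is named after the job
where  object_name is not null
order  by elapsed_time desc nulls last;
```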

Trace the Data Pump job

Tracing can be enabled by specifying a 7-digit hexadecimal mask in the TRACE parameter of expdp or impdp.
The first three digits enable tracing for a specific Data Pump component, while the last four digits are usually 0300.
Any leading zeros can be omitted, and the value specified for the TRACE parameter is not case sensitive.
TRACE does not add anything to the output of Data Pump; it creates additional trace files.
Add TRACE=<hex digits below> to the parfile (or command line)

Here is a summary of the Data Pump trace levels:
  10300 SHDW: To trace the Shadow process
  20300 KUPV: To trace Fixed table
  40300 'div' To trace Process services
  80300 KUPM: To trace Master Control Process
 100300 KUPF: To trace File Manager
 200300 KUPC: To trace Queue services
 400300 KUPW: To trace Worker process(es)        
 800300 KUPD: To trace Data Package
1000300 META: To trace Metadata Package
1FF0300 'all' To trace all components, full tracing

To trace an already running export job

expdp system/manager attach=sys_export_full_01
Export> stop_job
expdp system/manager attach=sys_export_full_01 trace=480300
Export> start_job
Export> continue_client

Check to see if the objects can be recovered from the recyclebin

This can save time compared with restoring from tape or dump files

select original_name,ts_name,operation,droptime,can_undrop from dba_recyclebin where owner='PRODDTA' order by droptime;
flashback table proddta.F5642026 to before drop;

Check the indexes and restore as necessary...

select index_name from dba_indexes where owner = 'PRODDTA' and table_name = 'F5642026';
-- the recycle-bin name must be double-quoted exactly as reported, and the new name takes no schema prefix:
alter index proddta."bin$dbo9uchtzsbgqfemiadccq==$0" rename to F5642026_IX;

Import tables from a dump file into a schema

impdp \'/ as sysdba\' parfile=import_tables.par

where import_tables.par looks something like this

directory=DATA_PUMP_DIR_JDBEOP1
dumpfile=expdp_JDBEOP1_D_PRODDTA_20131007200000_%U.dmp
logfile=RESTORE_20131012.log
job_name=RESTORE_20131012
tables=(PRODDTA.F47032,PRODDTA.F47031,PRODDTA.F47022,PRODDTA.F47021,PRODDTA.F4311Z1,PRODDTA.F4301Z1)
remap_schema=PRODDTA:DMIRROR
remap_tablespace=PRODDTAT:RESTORE_TS_2013
remap_tablespace=PRODDTAI:RESTORE_TS_2013
table_exists_action=replace

where directory has already been created with

create or replace directory DATA_PUMP_DIR_JDBEOP1 as '/oracle/export/ubenoa26_jdbeop1';
grant read, write on directory DATA_PUMP_DIR_JDBEOP1 to OPS$IMPDP;

Without a par file and with a transform to ignore table creation attributes

impdp \'/ as sysdba\'                        \
    DIRECTORY=DATA_PUMP_DIR                  \
    tables=SY812.F00950,SY812.F95921         \
    transform=segment_attributes:n           \
    parallel=5                               \
    remap_schema=SY812:QAC_REPORT_TEST       \
    remap_tablespace=SY812T:QAC_REPORT_TEST  \
    dumpfile=expdp_JDB_20140531200001_%u.dmp \
    job_name=impdp_SY812                     \
    logfile=impdp_JDBEOP1_SY812.log

Replace existing tables in an existing schema

impdp \'/ as sysdba\'                                      \
    directory=data_pump_dir                                \
    tables=proddta.F5642026,proddta.F596111                \
    transform=segment_attributes:n                         \
    parallel=5                                             \
    dumpfile=expdp_JDBEOP1_D_PRODDTA_20140725210000_%u.dmp \
    job_name=impdp_JDBEOP1_D_PRODDTA_F5642026_F596111      \
    logfile=impdp_JDBEOP1_D_PRODDTA_F5642026_F596111.log

Import existing tables into a different schema

impdp \'/ as sysdba\'                                                          \
directory=data_pump_dir                                                        \
tables=DWH.FACT_OPEX,DWH.FACT_OPEX_PTC,DWH.FACT_AFIN_SCEN,DWH.FACT_AFIN_PTC    \
remap_schema=DWH:_kantal                                                       \
remap_tablespace=USERS:TOOLS,INDX:TOOLS                                        \
dumpfile=backup_export_10g_CODWHP1_20151104210000_%u.dmp                       \
job_name=import_DWH_TABLES                                                     \
logfile=import_DWH_TABLES.log 

Import existing partitioned tables into a different schema

impdp \'/ as sysdba\'                                                          \
directory=data_pump_dir                                                        \
tables=TIP_PRD1.EODMARKETPNL:SYS_P1751,TIP_PRD1.PHYSICALPOSITION:SYS_P16591    \
remap_schema=DWH:_kantal                                                       \
remap_tablespace=USERS:TOOLS,INDX:TOOLS                                        \
dumpfile=backup_export_10g_CODWHP1_20151104210000_%u.dmp                       \
job_name=import_DWH_TABLES                                                     \
logfile=import_DWH_TABLES.log 

Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX

The import is taking a long time and seems stuck on this line. Is it waiting for something, or just slow?
Data Pump does not import indexes from the dump file; it recreates them using CREATE INDEX statements. This can be slow, especially as all the metadata is processed by just one worker.
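One common workaround (a sketch only; the directory and dump file names below are placeholders) is to import the data with indexes excluded, pull the CREATE INDEX statements out into a script with SQLFILE, and run them yourself after adding a PARALLEL clause:

```shell
# 1. import the data only, skipping index creation
impdp \'/ as sysdba\' directory=DATA_PUMP_DIR dumpfile=expdp_full_%u.dmp \
      exclude=index logfile=impdp_data.log

# 2. write the CREATE INDEX DDL to a script instead of executing it
impdp \'/ as sysdba\' directory=DATA_PUMP_DIR dumpfile=expdp_full_%u.dmp \
      include=index sqlfile=create_indexes.sql

# 3. edit create_indexes.sql to add a PARALLEL clause, then run it in sqlplus
```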

Check the status of the job

select * from dba_datapump_jobs;

Check to see if the impdp session is waiting for resources

select vsw.*
from   dba_datapump_sessions dds
,      v$session vs
,      v$session_wait vsw
where  dds.saddr = vs.saddr
and    vs.sid    = vsw.sid
/

Check dba_resumable for Data Pump jobs

select * from dba_resumable;

Check alert log for possible error/warning messages

Find out what Data Pump jobs are running

SET lines 200
COL owner_name FORMAT a10;
COL job_name FORMAT a20
COL state FORMAT a11
COL operation LIKE state
COL job_mode LIKE state

SELECT job_name
,      owner_name
,      operation
,      job_mode
,      state
,      attached_sessions
FROM   dba_datapump_jobs
WHERE  1=1
and    job_name NOT LIKE 'BIN$%'
ORDER  BY 2,1
/
JOB_NAME             OWNER_NAME OPERATION   JOB_MODE    STATE       ATTACHED_SESSIONS
-------------------- ---------- ----------- ----------- ----------- -----------------
EXP_20131018100328   OPS$ORADP  EXPORT      FULL        EXECUTING                   1

SQL> 

Kill a Data Pump job with PL/SQL

SET serveroutput on
SET lines 100
DECLARE
    l_handle NUMBER;
BEGIN
    l_handle := DBMS_DATAPUMP.ATTACH('EXP_20131018100328','OPS$ORADP');
    DBMS_DATAPUMP.STOP_JOB (l_handle,1,0);
END;
/

Connect to a running export Data Pump job from the command line

expdp / attach="expdp_JDBEOT21_D_FULL_20130228143835"

or

expdp \'/ as sysdba\' attach="expdp_JDBEOT21_D_FULL_20130228143835"

Suspend a running Data Pump job

stop_job

Restart a previously suspended Data Pump job

start_job

Terminate a Data Pump job

Also kills the operating system processes

kill_job

Drop erroneous Data Pump master tables

select 'DROP TABLE '||owner||'.'||table_name||' purge;'
from   dba_tables
where  1=1
and    table_name like '%SYS%EXPORT%'
/

and

set pages 100 lines 200
select 'drop table "' || o.owner||'"."'||object_name || '" purge;'
from   dba_objects o
,      dba_datapump_jobs j
where  1=1
and    o.owner       = j.owner_name
and    o.object_name = j.job_name
and    j.state      != 'EXECUTING'
/

Did the import work?

Moving a database with Data Pump? Check the number of rows imported against those exported!
I have noticed that sometimes (perhaps with tables containing BLOB/LOB columns?) the import does not import all rows...
eg:

Processing object type DATABASE_EXPORT/SCHEMA/TABLE/PRE_TABLE_ACTION
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
. . imported "WM712"."BIZDOCCONTENT"                     104.8 GB       0 out of 10389777 rows

Run this on the import logfile to check for this condition

perl -ne 'if (m/(\d+) out of (\d+) rows/) { print if $1 != $2; }' import_WM712P.log

or, for a more thorough approach, count the number of records in all tables both locally and remotely and compare.
Change the value of l_remote_tns. A file in CSV format is created in DATA_PUMP_DIR.

set serveroutput on
declare
    cursor c_tables is
    select owner
    ,      table_name
    from   dba_tables
    where  1=1
    and    tablespace_name not in ('SYSTEM','SYSAUX','TEMP')
    ;
    l_remote_tns       varchar2(100)  := 'wm712windows';
    l_sql              varchar2(4000) := null;
    l_local_row_count  number         := 0;
    l_remote_row_count number         := 0;
    f_file             utl_file.file_type;
begin
    dbms_output.enable(null);
    f_file := utl_file.fopen('DATA_PUMP_DIR','table_row_count.csv','W');
    utl_file.put_line (f_file, 'Table;Local;Remote');
    for r_tables in c_tables
    loop
        -- no INTO clause inside the dynamic SQL; EXECUTE IMMEDIATE ... INTO handles that
        l_sql := 'select count(1) from '||r_tables.owner||'.'||r_tables.table_name;
        execute immediate l_sql into l_local_row_count;
        l_sql := 'select count(1) from '||r_tables.owner||'.'||r_tables.table_name||'@'||l_remote_tns;
        l_remote_row_count := 0;
        begin
            execute immediate l_sql into l_remote_row_count;
        exception
        when others then
            null;  -- table may not exist remotely; leave the count at 0
        end;
        utl_file.put_line (f_file, r_tables.owner||'.'||r_tables.table_name||';'||l_local_row_count||';'||l_remote_row_count);
    end loop;
    utl_file.fclose (f_file);
exception
when others then
    dbms_output.put_line('Problem: '||sqlerrm);
end;
/

Import into schemas and tablespaces other than those contained in the export dump file

These will have to be manually pre-created

CREATE USER MACHPENAL PROFILE "DEFAULT" IDENTIFIED BY ****** DEFAULT TABLESPACE "ACCMACH" TEMPORARY TABLESPACE "MACH_TEMP" QUOTA UNLIMITED ON "ACCMACH" QUOTA UNLIMITED ON "MACHX_TBS" ACCOUNT UNLOCK;
GRANT ALTER SESSION TO "MACHPENAL";
GRANT CREATE JOB TO "MACHPENAL";
GRANT CREATE MATERIALIZED VIEW TO "MACHPENAL";
GRANT MANAGE SCHEDULER TO "MACHPENAL";
GRANT RESUMABLE TO "MACHPENAL";
GRANT UNLIMITED TABLESPACE TO "MACHPENAL";
GRANT EXECUTE ON "SYS"."DBMS_LOCK" TO "MACHPENAL";
GRANT "CONNECT" TO "MACHPENAL";
GRANT "MACHP_ROLE" TO "MACHPENAL";
impdp system/****** parfile=impdp_prd_accmach.par

where parfile looks something like this:

SCHEMAS=(CHEOPSPENAL,CHEOPSCIVIL,CHEOPSTECH,CHEOPSFIX)
#INCLUDE=POST_TABLE_ACTION
#INCLUDE=TRIGGER
#INCLUDE=PROCACT_SCHEMA
EXCLUDE=STATISTICS
REMAP_TABLESPACE=CHEOPSCIVIL_AXYLIS:ACCMACH                                     
REMAP_TABLESPACE=CHEOPSCIVIL_DEFAULT:ACCMACH                                    
REMAP_TABLESPACE=CHEOPSCIVIL_JPABL:ACCMACH                                      
REMAP_TABLESPACE=CHEOPSCIVIL_JPADL1:ACCMACH                                     
REMAP_TABLESPACE=CHEOPSCIVIL_JPADL2:ACCMACH                                     
REMAP_TABLESPACE=CHEOPSCIVIL_JPAND:ACCMACH                                      
REMAP_TABLESPACE=CHEOPSCIVIL_JPARL:ACCMACH                                      
REMAP_TABLESPACE=CHEOPSCIVIL_JPATH:ACCMACH   
...
REMAP_TABLESPACE=CHEOPSCIVIL_VGVRN:ACCMACH                                      
REMAP_TABLESPACE=CHEOPSCIVIL_VGVVD:ACCMACH                                      
REMAP_TABLESPACE=CHEOPSCIVIL_VGWLB:ACCMACH                                      
REMAP_TABLESPACE=CHEOPSCIVIL_VGWLW:ACCMACH                                      
REMAP_TABLESPACE=CHEOPSCIVIL_VGWRG:ACCMACH
REMAP_SCHEMA=CHEOPSFIX:MACHFIX
REMAP_SCHEMA=CHEOPSCIVIL:MACHCIVIL
REMAP_SCHEMA=CHEOPSTECH:MACHTECH
REMAP_SCHEMA=CHEOPSPENAL:MACHPENAL
DUMPFILE=expdp_prdmach_F_230311_%U.dmp
LOGFILE=impdp_refresh_240311.log
DIRECTORY=expdp_dir
PARALLEL=8

Change parallelism after expdp has started

Export started with 1 worker but taking a long time? Change the number of parallel workers!

expdp \'/ as sysdba\' attach=SYS_EXPORT_TABLE_01

Export: Release 12.1.0.2.0 - Production on Mon Oct 1 15:44:41 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Job: SYS_EXPORT_TABLE_01
  Owner: SYS
  Operation: EXPORT
  Creator Privs: TRUE
  GUID: 772B5F839E8000ECE0530A085A3842B1
  Start Time: Monday, 01 October, 2018 14:59:20
  Mode: TABLE
  Instance: crelant
  Max Parallelism: 1
  Timezone: +02:00
  Timezone version: 18
  Endianness: BIG
  NLS character set: AL32UTF8
  NLS NCHAR character set: AL16UTF16
  EXPORT Job Parameters:
  Parameter Name      Parameter Value:
     CLIENT_COMMAND        /******** AS SYSDBA parfile=/oracle/scripts/expdp_clnt_tables.par
  State: EXECUTING
  Bytes Processed: 73,116,744,528
  Percent Done: 50
  Current Parallelism: 1
  Job Error Count: 0
  Dump File: /cln/tst/ora_data1/clnt/archivelog/clnt_tables_20180926.dmp
    bytes written: 73,117,818,880

Worker 1 Status:
  Instance ID: 1
  Instance name: clnt
  Host name: hn5306.cln.be
  Process Name: DW00
  State: EXECUTING
  Object Schema: TPK
  Object Name: EID01
  Object Type: TABLE_EXPORT/TABLE/TABLE_DATA
  Completed Objects: 5
  Total Objects: 192
  Completed Rows: 835,298
  Worker Parallelism: 1
Export> parallel=4

Export> status
Job: SYS_EXPORT_TABLE_01
  Operation: EXPORT
  Mode: TABLE
  State: EXECUTING
  Bytes Processed: 73,116,744,528
  Percent Done: 50
  Current Parallelism: 4
  Job Error Count: 0
  Dump File: /cln/tst/ora_data1/clnt/archivelog/clnt_tables_20180926.dmp
    bytes written: 73,117,818,880


Worker 1 Status:
  Instance ID: 1
  Instance name: clnt
  Host name: hn5306.cln.be
  Process Name: DW00
  State: EXECUTING
  Object Schema: TPK
  Object Name: CCD01
  Object Type: TABLE_EXPORT/TABLE/TABLE_DATA
  Completed Objects: 10
  Total Objects: 192
  Completed Rows: 317,968
  Worker Parallelism: 1

Worker 2 Status:
  Instance ID: 1
  Instance name: clnt
  Host name: hn5306.cln.be
  Process Name: DW01
  State: EXECUTING
  Object Schema: TPK
  Object Name: BCD18
  Object Type: TABLE_EXPORT/TABLE/TABLE_DATA
  Completed Objects: 4
  Total Objects: 192
  Worker Parallelism: 1

Worker 3 Status:
  Instance ID: 1
  Instance name: clnt
  Host name: hn5306.cln.be
  Process Name: DW02
  State: EXECUTING
  Object Schema: TPK
  Object Name: FCD06
  Object Type: TABLE_EXPORT/TABLE/TABLE_DATA
  Completed Objects: 4
  Total Objects: 192
  Worker Parallelism: 1

Worker 4 Status:
  Instance ID: 1
  Instance name: clnt
  Host name: hn5306.cln.be
  Process Name: DW03
  State: EXECUTING
  Object Schema: TPK
  Object Name: CBD10
  Object Type: TABLE_EXPORT/TABLE/TABLE_DATA
  Completed Objects: 4
  Total Objects: 192
  Worker Parallelism: 1

Export>

Monitor Data Pump sessions

select s.sid
,      s.serial#
from   v$session s
,      dba_datapump_sessions dps
where s.saddr = dps.saddr;

Once the session is identified, you can see the parallel query slave sessions spawned off by the main Data Pump session by querying the V$PX_SESSION view:

select sid from v$px_session
where qcsid = <DP Session ID>;

How long will the export take?

Data Pump sends information about its progress to the longops view, which can be queried like this:

select sid
,      serial#
,      sofar
,      totalwork 
from   v$session_longops 
where  1=1
and    opname = '<DP Job Name>' 
and    sofar != totalwork
/

or

select vsl.*
from   v$session_longops vsl
where  1=1
and    (vsl.sid,vsl.serial#) in (select sid,serial# from dba_datapump_sessions)
and    vsl.sofar != vsl.totalwork
/

or

set serveroutput on
DECLARE
  ind NUMBER;              -- Loop index
  h1 NUMBER;               -- Data Pump job handle
  percent_done NUMBER;     -- Percentage of job complete
  job_state VARCHAR2(30);  -- To keep track of job state
  js ku$_JobStatus;        -- The job status from get_status
  ws ku$_WorkerStatusList; -- Worker status
  sts ku$_Status;          -- The status object returned by get_status
BEGIN
h1 := DBMS_DATAPUMP.attach('&JOB_NAME', '&OWNER'); -- job name and owner
dbms_datapump.get_status(h1,
           dbms_datapump.ku$_status_job_error +
           dbms_datapump.ku$_status_job_status +
           dbms_datapump.ku$_status_wip, 0, job_state, sts);
js := sts.job_status;
ws := js.worker_status_list;
      dbms_output.put_line('*** Job percent done = ' ||
                           to_char(js.percent_done));
      dbms_output.put_line('restarts - '||js.restart_count);
ind := ws.first;
  while ind is not null loop
    dbms_output.put_line('rows completed - '||ws(ind).completed_rows);
    ind := ws.next(ind);
  end loop;
DBMS_DATAPUMP.detach(h1);
end;
/

Debugging a Data Pump session

While exporting or importing, the Data Pump job can sometimes appear to hang. What is it doing?
Firstly, we can add "METRICS=Y" to the parameter file. This gives a bit more logging info.
Next, we can dig deeper into the sessions using the session id.

set lines 150 pages 100
col program for a30
col username for a15
col spid for a7
col job_name for a25
select to_char (sysdate, 'DD-MON-YY HH24:MI:SS') timenow
,      s.program
,      s.sid
,      s.status
,      s.username
,      dds.job_name
,      p.spid
,      s.serial#
,      p.pid
from   v$session s
,      v$process p
,      dba_datapump_sessions dds
where  p.addr  = s.paddr
and    s.saddr = dds.saddr
/
select sw.sid
,      sw.event
,      sw.seconds_in_wait
from   v$session s
,      dba_datapump_sessions dds
,      v$session_wait sw
where  dds.saddr = s.saddr
and    s.sid     = sw.sid
/
execute sys.dbms_system.set_ev (&sid, &serial, 10046, 8, '');

then go to the user dump directory and use tkprof to read the trace file.
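A typical tkprof invocation might look like this (the trace file name is a placeholder; Data Pump worker traces are named after the DWnn process):

```shell
cd $ORACLE_BASE/diag/rdbms/$ORACLE_SID/$ORACLE_SID/trace
tkprof ${ORACLE_SID}_dw00_12345.trc dw00.prf sort=exeela waits=yes sys=no
```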

Enable log timestamps during a Data Pump impdp/expdp

As of 12c, much better logging is available:

expdp / full=y dumpfile=test.dmp logfile=test.log logtime=all

This prefixes all messages in the logfile with a timestamp.

Enabling SQL trace[edit]

You cannot simply use:

alter session set sql_trace=true;

because Data Pump runs in its own sessions. An event has to be set in those sessions:

select sid
,      serial#
,      username
,      program
from   v$session
where  1=1
and    (   upper(program) like '%(DW%)'
        or upper(program) like '%(DM%)' )
/

then use the sid and serial# to set the event:

exec dbms_system.set_ev(<sid>, <serial#>, 10046, 12, '' );

Trace files will be in user_dump_dest

For the full beans on what Data Pump can do, visit Arup Nanda's Data Pump page

Restore a schema from a dumpfile into a new schema so that the user can pick out the objects they need

  • Restore relevant file from TSM
  • Connect to the database as sysdba and create a tablespace to hold the recovered data:
create tablespace temp_recovery
       datafile '/oracle/<ORACLE_SID>/oradata1/temp_recovery.dbf'
       size 100M autoextend on next 100M maxsize 20000M
       extent management local
       segment space management auto;

Create a new schema to hold the recovered data

create user temp_recovery identified by temp_recovery default tablespace temp_recovery temporary tablespace temp;
grant connect, resource to temp_recovery;

Create a par file to remap the schema and tablespace

vi impdp_schema.par   

directory=DATA_PUMP_DIR
# the dump lives in /oracle/export/<ORACLE_SID>/ - make sure the directory object points there,
# as dumpfile takes a file name, not a full path
dumpfile=expdp_<ORACLE_SID>_D_FULL_20120918213006_%U.dmp
logfile=impdp_schema.log
job_name=impdp_schema
schemas=SCHEMA_TO_RECOVER
remap_schema=SCHEMA_TO_RECOVER:temp_recovery
remap_tablespace=USERS:temp_recovery
table_exists_action=skip

Run the import

impdp \'/ as sysdba\' parfile=impdp_schema.par

Restore schemas from a dump. Clean out the schema objects first.

You could also drop the schema, but then you would lose all grants.
Check we have the dump files...

ll
-rw-r-----   1 GPTASKP2 dba      8540041216 Oct 17 21:17 expdp_GPTASKP2_D_FULL_20131017210003_01.dmp
-rw-r-----   1 GPTASKP2 dba      9391812608 Oct 17 21:17 expdp_GPTASKP2_D_FULL_20131017210003_02.dmp
-rw-r-----   1 GPTASKP2 dba      32161792 Oct 17 21:17 expdp_GPTASKP2_D_FULL_20131017210003_03.dmp

Backup schemas just in case...

cat expdp_just_in_case.ksh
expdp \'/ as sysdba\'                                      \
    DIRECTORY=DATA_PUMP_DIR                                \
    schemas=GPCOMP4,GPCOMP6                                \
    dumpfile=expdp_GPTASKP2_JUST_IN_CASE_201310181500.dmp  \
    job_name=expdp_GPTASKP2_GPCOMP4_GPCOMP6                \
    logfile=expdp_GPTASKP2_GPCOMP4_GPCOMP6.log

Clean out schemas to be imported...

#!/usr/bin/ksh
# ==============================================================================
# Name         : userman_clean_out_schema.ksh
# Description  : Rids a schema of all objects. An empty shell ready for import.
#
# Parameters   : 1 - -u <schema name to clean out>
#                2 - -e if you want the schema to be wiped automatically
#
# Notes        : Generates an SQL file and an execution spool file
#
# Modification History
# ====================
# When      Who               What
# ========= ================= ==================================================
# 09-NOV-13 Stuart Barkley    Created
# 06-JAN-14 Stuart Barkley    Added SYNONYM
# 23-JAN-14 Stuart Barkley    Added -e option and tidied
# 25-APR-14 Stuart Barkley    Create files in /tmp to avoid permissions errors
# ==============================================================================

ts=`date +"%Y%m%d"`

# ----------------------------
# get any command line options
# ----------------------------
unset USERNAME

DOIT="N"
while getopts "u:e" OPT
do
    case "$OPT" in
    u) USERNAME=$OPTARG;;
    e) DOIT="Y";;
    esac
done
shift $((OPTIND-1))


# --------------------------
# check mandatory parameters
# --------------------------
if [[ -z $USERNAME ]]; then
    echo ""
    echo "Usage:  `basename $0` [-e] -u <schemaname to wipe>"
    echo "specifying -e makes the script go ahead and do the wipe automatically"
    echo ""
    exit 1
fi

sqlplus -s / as sysdba <<EOSQL
set pages 0
set feedb off
set verif off
spool  /tmp/userman_clean_out_${USERNAME}.sql
select 'drop '||object_type||' "'||owner||'"."'||object_name||'"'||decode(object_type,'TABLE',' CASCADE CONSTRAINTS PURGE;',';') text
from   dba_objects
where  1=1
and    owner       = '$USERNAME'
order  by object_type
,      object_name
/
spool off
EOSQL

if [[ "$DOIT" == "Y" ]]; then
    echo "wiping user..."
    sqlplus -s / as sysdba <<EOSQL
spool /tmp/userman_clean_out_${USERNAME}_run.log
set termo on echo on feedb on
start /tmp/userman_clean_out_${USERNAME}.sql

purge recyclebin;
spool off
EOSQL
    echo "done wiping user"
else
    echo "To execute this automatically, add '-e' as a parameter to the script"
    echo "or you can manually run the sql file I just generated: userman_clean_out_${USERNAME}.sql"
fi


Import the required data...

cat impdp_GPTASKP2_GPCOMP4_GPCOMP6.ksh
impdp \'/ as sysdba\'                                     \
    DIRECTORY=DATA_PUMP_DIR                               \
    schemas=GPCOMP4,GPCOMP6                               \
    dumpfile=expdp_GPTASKP2_D_FULL_20131017210003_%u.dmp  \
    job_name=impdp_GPTASKP2_GPCOMP4_GPCOMP6               \
    logfile=impdp_GPTASKP2_GPCOMP4_GPCOMP6.log

Alternatively, drop and recreate the schema/user using dbms_metadata get_ddl and get_granted_ddl

Run this before dropping the schema to generate the DDL necessary to recreate it

sqlplus -s /nolog <<EOSQL 2>/tmp/results.$$
connect / as sysdba
whenever sqlerror continue
set hea off echo off pages 0 newp none long 99999 lin 2000 trims on feed off veri off

execute dbms_metadata.set_transform_param(dbms_metadata.session_transform,'SQLTERMINATOR',true);
spool /tmp/recreate_schemae_$$.sql

select to_char( dbms_metadata.get_ddl         ( 'USER',         du.username ) ) from dba_users du where du.username in upper('${SCHEMAE}');
select to_char( dbms_metadata.get_granted_ddl ( 'ROLE_GRANT',   du.username ) ) from dba_users du where du.username in upper('${SCHEMAE}');
select to_char( dbms_metadata.get_granted_ddl ( 'DEFAULT_ROLE', du.username ) ) from dba_users du where du.username in upper('${SCHEMAE}');
select to_char( dbms_metadata.get_granted_ddl ( 'SYSTEM_GRANT', du.username ) ) from dba_users du where du.username in upper('${SCHEMAE}');
select to_char( dbms_metadata.get_granted_ddl ( 'OBJECT_GRANT', du.username ) ) from dba_users du where du.username in upper('${SCHEMAE}');


-- db_links do not work properly with dbms_metadata if they are for a different schema
select 'create '                                      ||
       decode (u.name, 'PUBLIC', 'public ')           ||
       'database link "'                              ||
       decode (u.name, 'PUBLIC', null, u.name || '.') ||
       l.name                                         ||
       '" connect to "'                               ||
       l.userid                                       ||
       '" identified by values '''                    ||
       l.passwordx                                    ||
       ''' using '''                                  ||
       l.host                                         ||
       ''';' text
from   sys.link$ l
,      sys.user$ u
where  1=1
and    l.owner# = u.user#
and    u.name   in upper('${SCHEMAE}');

spool off
EOSQL
