Datapump

From dbawiki
Revision as of 22:29, 6 November 2012 by Stuart (talk | contribs) (Use of a par file to create a consistent parallel export)

Use the flashback parameter to export and import a consistent dump

This performs a consistent dump of two schemas.
Keeping the parameters in a file allows them to be reused for the import:

expdp system/***** parfile=db_params.par

where db_params.par would look like this:

flashback_time="to_timestamp(to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS')"
directory=DATAPUMP_DIR
dumpfile=db_schema.dmp
schemas=(apps,test)
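An alternative, if you want to pin the export to an exact SCN rather than a timestamp, is the FLASHBACK_SCN parameter. A minimal sketch (the SCN value shown is illustrative):

```sql
-- Capture the current SCN just before starting the export:
SELECT dbms_flashback.get_system_change_number AS current_scn FROM dual;

-- Then in the par file, replace FLASHBACK_TIME with the captured value, e.g.:
-- FLASHBACK_SCN=1234567890
```

FLASHBACK_TIME and FLASHBACK_SCN are mutually exclusive; use one or the other.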

Use of a par file to create a consistent parallel export

Note the %U substitution variable, which generates unique file names for the parallel workers:

SCHEMAS=(CHEOPSFIX,CHEOPSCIVIL,CHEOPSTECH,CHEOPSPENAL)
DUMPFILE=expdp_prdmach_F_230311_%U.dmp
FILESIZE=25G
LOGFILE=expdp_prdmach_F_230311.log
DIRECTORY=expdp_dir
FLASHBACK_TIME="TO_TIMESTAMP('23-03-2011 19:30:00', 'DD-MM-YYYY HH24:MI:SS')"
PARALLEL=8
#TABLES=(CHEOPSTECH.DRANSWR)
#CONTENT='METADATA_ONLY'
#FULL=Y
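Assuming the parameters above are saved as expdp_prdmach_F_230311.par (the file name here is illustrative), the export is started the same way as in the first example:

```shell
expdp system/***** parfile=expdp_prdmach_F_230311.par
```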

If the dump must be imported into different schema names, the target users have to be pre-created first:

CREATE USER MACHPENAL PROFILE "DEFAULT" IDENTIFIED BY machpenal DEFAULT TABLESPACE "ACCMACH" TEMPORARY TABLESPACE "MACH_TEMP" QUOTA UNLIMITED ON "ACCMACH" QUOTA UNLIMITED ON "MACHX_TBS" ACCOUNT UNLOCK;
GRANT ALTER SESSION TO "MACHPENAL";
GRANT CREATE JOB TO "MACHPENAL";
GRANT CREATE MATERIALIZED VIEW TO "MACHPENAL";
GRANT MANAGE SCHEDULER TO "MACHPENAL";
GRANT RESUMABLE TO "MACHPENAL";
GRANT UNLIMITED TABLESPACE TO "MACHPENAL";
GRANT EXECUTE ON "SYS"."DBMS_LOCK" TO "MACHPENAL";
GRANT "CONNECT" TO "MACHPENAL";
GRANT "MACHP_ROLE" TO "MACHPENAL";
impdp system/****** parfile=impdp_prd_accmach.par

where impdp_prd_accmach.par would look like this:

SCHEMAS=(CHEOPSPENAL,CHEOPSCIVIL,CHEOPSTECH,CHEOPSFIX)
#INCLUDE=POST_TABLE_ACTION
#INCLUDE=TRIGGER
#INCLUDE=PROCACT_SCHEMA
EXCLUDE=STATISTICS
REMAP_TABLESPACE=CHEOPSCIVIL_AXYLIS:ACCMACH                                     
REMAP_TABLESPACE=CHEOPSCIVIL_DEFAULT:ACCMACH                                    
REMAP_TABLESPACE=CHEOPSCIVIL_JPABL:ACCMACH                                      
REMAP_TABLESPACE=CHEOPSCIVIL_JPADL1:ACCMACH                                     
REMAP_TABLESPACE=CHEOPSCIVIL_JPADL2:ACCMACH                                     
REMAP_TABLESPACE=CHEOPSCIVIL_JPAND:ACCMACH                                      
REMAP_TABLESPACE=CHEOPSCIVIL_JPARL:ACCMACH                                      
REMAP_TABLESPACE=CHEOPSCIVIL_JPATH:ACCMACH   
...
REMAP_TABLESPACE=CHEOPSCIVIL_VGVRN:ACCMACH                                      
REMAP_TABLESPACE=CHEOPSCIVIL_VGVVD:ACCMACH                                      
REMAP_TABLESPACE=CHEOPSCIVIL_VGWLB:ACCMACH                                      
REMAP_TABLESPACE=CHEOPSCIVIL_VGWLW:ACCMACH                                      
REMAP_TABLESPACE=CHEOPSCIVIL_VGWRG:ACCMACH
REMAP_SCHEMA=CHEOPSFIX:MACHFIX
REMAP_SCHEMA=CHEOPSCIVIL:MACHCIVIL
REMAP_SCHEMA=CHEOPSTECH:MACHTECH
REMAP_SCHEMA=CHEOPSPENAL:MACHPENAL
DUMPFILE=expdp_prdmach_F_230311_%U.dmp
LOGFILE=impdp_refresh_240311.log
DIRECTORY=expdp_dir
PARALLEL=8
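Once running, a job can also be controlled from Data Pump's interactive-command mode by re-attaching to it. A sketch (the job name below is illustrative; look up the real name in DBA_DATAPUMP_JOBS):

```shell
# Re-attach to a running job
impdp system/****** attach=SYS_IMPORT_SCHEMA_01

# At the Import> prompt:
#   STATUS        -- show progress of the worker processes
#   PARALLEL=4    -- change the degree of parallelism
#   STOP_JOB      -- suspend the job (it can be restarted later)
```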

Monitor Datapump sessions

select s.sid
,      s.serial#
from   v$session s
,      dba_datapump_sessions dps
where s.saddr = dps.saddr;
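The jobs themselves, with their state and degree of parallelism, can be listed from DBA_DATAPUMP_JOBS:

```sql
SELECT owner_name, job_name, operation, job_mode, state, degree
FROM   dba_datapump_jobs;
```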

Once the session is identified, you can see the parallel query slave sessions spawned off by the main DataPump session by querying the V$PX_SESSION view:

select sid from v$px_session
where qcsid = <DP Session ID>;

How long will the export take?

DataPump records its progress in the V$SESSION_LONGOPS view, which can be queried thus:

select sid
,      serial#
,      sofar
,      totalwork 
from   v$session_longops 
where  1=1
and    opname = '<DP Job Name>' 
and    sofar != totalwork
/
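The same view can also report a completion percentage and an estimate of the remaining time (TIME_REMAINING is in seconds):

```sql
select sid
,      serial#
,      round(sofar / totalwork * 100, 2) as pct_done
,      time_remaining
from   v$session_longops
where  1=1
and    opname = '<DP Job Name>'
and    sofar != totalwork
/
```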

Enable trace mode

Two trace levels are available: standard and full.

impdp trace=0480300 ...

provides standard tracing. Trace files will be written to user_dump_dest.

impdp trace=1FF0300 ...

provides full tracing. Trace files will be written to user_dump_dest.

Enabling SQL trace

You cannot simply use:

alter session set sql_trace=true;

because DataPump does its work in separate master (DMnn) and worker (DWnn) sessions. Identify those sessions and set the event in them instead:

select sid
,      serial#
,      username
,      program
from   v$session 
where  1=1
and    (upper(program) like '%(DW%)'
or      upper(program) like '%(DM%)')
/

then use the sid and serial# to set the event:

exec dbms_system.set_ev(<sid>, <serial#>, 10046, 12, '');

Trace files will be written to user_dump_dest.
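When you are done, the same call with level 0 switches the trace off again:

```sql
exec dbms_system.set_ev(<sid>, <serial#>, 10046, 0, '');
```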

For the full beans on what DataPump can do, visit Nanda's DataPump page