Revision as of 22:14, 6 November 2012
Use the flashback parameter to export and import a consistent dump
This performs a consistent dump of two schemas. Keeping the parameters in a file allows them to be reused for the import:
expdp <user>/<pwd> parfile=db_params.par
where db_params.par would look like this:
flashback_time="to_timestamp(to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS')"
directory=DATAPUMP_DIR
dumpfile=db_schema.dmp
schemas=(apps,test)
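With the parameters in a file, the import is then a one-liner. One caveat: impdp only accepts flashback_time together with a network_link import, so for a plain dump-file import drop that line from the par file first (consistency was already fixed at export time):

impdp <user>/<pwd> parfile=db_params.par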
Use of a par file to create a consistent parallel dump
SCHEMAS=(CHEOPSFIX,CHEOPSCIVIL,CHEOPSTECH,CHEOPSPENAL)
DUMPFILE=expdp_prdmach_F_230311_%U.dmp
FILESIZE=25G
LOGFILE=expdp_prdmach_F_230311.log
DIRECTORY=expdp_dir
FLASHBACK_TIME="TO_TIMESTAMP('23-03-2011 19:30:00', 'DD-MM-YYYY HH24:MI:SS')"
PARALLEL=8
#TABLES=(CHEOPSTECH.DRANSWR)
#CONTENT='METADATA_ONLY'
#FULL=Y
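Running the parallel export is then just the par file plus credentials (the par file name below is an assumption). The %U in dumpfile expands to a two-digit sequence number, so each of the eight parallel workers can write to its own file:

expdp <user>/<pwd> parfile=expdp_prdmach_F_230311.par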
Monitor Datapump sessions
select s.sid , s.serial# from v$session s , dba_datapump_sessions dps where s.saddr = dps.saddr;
Once the session is identified, you can see the parallel query slave sessions spawned off by the main DataPump session by querying the V$PX_SESSION view:
select sid from v$px_session where qcsid = <DP Session ID>;
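The two queries can also be combined into one pass, listing each DataPump coordinator session alongside any parallel query slaves it has spawned; a sketch, using an outer join so coordinators without slaves still appear:

select s.sid, s.serial#, px.sid as slave_sid
from v$session s
join dba_datapump_sessions dps on s.saddr = dps.saddr
left join v$px_session px on px.qcsid = s.sid;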
How long will the export take?
DataPump reports its progress to the V$SESSION_LONGOPS view, which can be queried as follows:
select sid
,      serial#
,      sofar
,      totalwork
from   v$session_longops
where  1=1
and    opname = '<DP Job Name>'
and    sofar != totalwork
/
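If the job name is not known, default DataPump job names start with SYS_EXPORT or SYS_IMPORT, and a percentage-done figure is easy to derive from the same columns; a sketch:

select sid, serial#, opname, round(sofar/totalwork*100, 1) as pct_done
from v$session_longops
where opname like 'SYS_EXPORT%'
and sofar != totalwork;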
Enable trace mode
Two trace possibilities are available, standard and full:
impdp trace=0480300 ...
provides standard tracing. Trace files will be written to user_dump_dest.
impdp trace=1FF0300 ...
provides full tracing. Trace files will be written to user_dump_dest.
Enabling SQL trace
You cannot simply use:
alter session set sql_trace=true;
because DataPump does its work in separate worker sessions. An event has to be set in the worker session itself. First find it:
select sid
,      serial#
,      username
,      program
from   v$session
where  1=1
and    (   upper(program) like '%(DW%)'
        or upper(program) like '%(DM%)' )
/
then use the sid and serial# to set the event:
exec dbms_system.set_ev(<sid>, <serial#>, 10046, 12, '');
Trace files will be in user_dump_dest
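Once tracing is finished, remember to switch the event back off for the same session; level 0 disables it:

exec dbms_system.set_ev(<sid>, <serial#>, 10046, 0, '');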
For the full beans on what DataPump can do, visit Nanda's DataPump page