Datapump
Revision as of 12:30, 18 October 2013
Contents
- 1 Use the flashback parameter to export and import a consistent dump
- 2 Use of a par file to create a consistent parallel export
- 3 Import tables from a dump file into a schema
- 4 Find out what Datapump jobs are running
- 5 Kill a datapump job with PL/SQL
- 6 Connect to a running export Datapump job from command line
- 7 Suspend a running data pump
- 8 Restart a previously suspended data pump job
- 9 Terminate a data pump job
- 10 Drop erroneous Data Pump master tables
- 11 Import into schemas and tablespaces other than those contained in the export dump file
- 12 Monitor Data Pump sessions
- 13 How long will the export take?
- 14 Enable trace mode
- 15 Enabling SQL trace
- 16 Restore a schema from a dumpfile into a new schema so that the user can pick their objects
Use the flashback parameter to export and import a consistent dump
This performs a consistent dump of 2 schemas
Keeping the parameters in a file allows it to be reused for the import:
expdp system/***** parfile=db_params.par
where db_params.par would look like this:
flashback_time=systimestamp
directory=DATA_PUMP_DIR
dumpfile=db_schemas.dmp
logfile=db_schemas.log
schemas=(apps,test)
The SCN that most closely matches the specified time(stamp) is found, and this SCN is used to enable the Flashback utility.
The export operation is performed with data that is consistent as of this SCN.
By default, expdp is consistent only within each table it is currently exporting, not across the whole job. Use 'flashback_time=systimestamp' to emulate the old exp utility's 'consistent=y' behaviour.
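An SCN can also be supplied directly instead of a timestamp. A minimal sketch, assuming a 10g-or-later database and sufficient flashback privileges (the SCN value in the comment is a placeholder, not a real value):

```sql
-- Capture the SCN just before starting the export
select current_scn from v$database;

-- Then, in the par file, use flashback_scn with the captured value
-- in place of flashback_time, e.g.:
--   flashback_scn=<captured SCN>
```

This pins the export to an exact, known SCN, which is useful when the export must line up with another operation performed at that SCN.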
Use of a par file to create a consistent parallel export
Note the %U to generate the parallel file names
SID=$1
TIMESTAMP=`date +'%Y%m%d%H%M%S'`
expdp "/" \
dumpfile=expdp_${SID}_D_FULL_${TIMESTAMP}_%u.dmp \
logfile=expdp_${SID}_D_FULL_${TIMESTAMP}.log \
job_name=expdp_${SID}_D_FULL_${TIMESTAMP} \
directory=data_pump_dir \
reuse_dumpfiles=y \
parallel=8 \
full=y \
exclude=statistics
Other popular optional clauses
exclude=table:"in(select table_name from dba_tables where owner = 'eucvdta' and table_name like 'F4%')" \
exclude=table:"in(select table_name from dba_tables where owner = 'eucrpdta' and table_name like 'F4%')" \
query=prd1.op_log:'"where trunc(creation_date) > trunc(sysdate-7)"' \
Import tables from a dump file into a schema
impdp \'/ as sysdba\' parfile=import_tables.par
where import_tables.par looks something like this
directory=DATA_PUMP_DIR_JDBEOP1
dumpfile=expdp_JDBEOP1_D_PRODDTA_20131007200000_%U.dmp
logfile=RESTORE_20131012.log
job_name=RESTORE_20131012
tables=(PRODDTA.F47032,PRODDTA.F47031,PRODDTA.F47022,PRODDTA.F47021,PRODDTA.F4311Z1,PRODDTA.F4301Z1)
remap_schema=PRODDTA:DMIRROR
remap_tablespace=PRODDTAT:RESTORE_TS_2013
remap_tablespace=PRODDTAI:RESTORE_TS_2013
table_exists_action=replace
where directory has already been created with
create or replace directory DATA_PUMP_DIR_JDBEOP1 as '/oracle/export/ubenoa26_jdbeop1';
grant read,write on directory DATA_PUMP_DIR_JDBEOP1 to OPS$IMPDP;
Find out what Datapump jobs are running
SET lines 200
COL owner_name FORMAT a10
COL job_name FORMAT a20
COL state FORMAT a11
COL operation LIKE state
COL job_mode LIKE state
SELECT job_name
     , owner_name
     , operation
     , job_mode
     , state
     , attached_sessions
  FROM dba_datapump_jobs
 WHERE 1=1
   and job_name NOT LIKE 'BIN$%'
 ORDER BY 2,1
/
JOB_NAME             OWNER_NAME OPERATION   JOB_MODE    STATE       ATTACHED_SESSIONS
-------------------- ---------- ----------- ----------- ----------- -----------------
EXP_20131018100328   OPS$ORADP  EXPORT      FULL        EXECUTING                   1

SQL>
Kill a datapump job with PL/SQL
SET serveroutput on
SET lines 100
DECLARE
l_handle NUMBER;
BEGIN
l_handle := DBMS_DATAPUMP.ATTACH('EXP_20131018100328','OPS$ORADP');
  -- immediate => 1 (abort the workers now), keep_master => 0 (drop the master table)
  DBMS_DATAPUMP.STOP_JOB (l_handle,1,0);
END;
/
Connect to a running export Datapump job from command line
expdp / attach="expdp_JDBEOT21_D_FULL_20130228143835"
or
expdp \'/ as sysdba\' attach="expdp_JDBEOT21_D_FULL_20130228143835"
Suspend a running data pump
Attach to the job (as shown above), then issue at the interactive Export> prompt:
stop_job
Restart a previously suspended data pump job
Attach to the suspended job, then issue at the interactive Export> prompt:
start_job
Terminate a data pump job
This also kills the worker operating system processes and drops the master table. Attach to the job, then issue at the interactive Export> prompt:
kill_job
Drop erroneous Data Pump master tables
select 'DROP TABLE '||owner||'.'||table_name||' purge;'
  from dba_tables
 where 1=1
   and table_name like '%SYS%EXPORT%'
/
and
select 'drop table "' || o.owner||'"."'||object_name || '" purge;'
  from dba_objects o
     , dba_datapump_jobs j
 where 1=1
   and o.owner = j.owner_name
   and o.object_name = j.job_name
/
Import into schemas and tablespaces other than those contained in the export dump file
These will have to be manually pre-created
CREATE USER MACHPENAL PROFILE "DEFAULT"
  IDENTIFIED BY ******
  DEFAULT TABLESPACE "ACCMACH"
  TEMPORARY TABLESPACE "MACH_TEMP"
  QUOTA UNLIMITED ON "ACCMACH"
  QUOTA UNLIMITED ON "MACHX_TBS"
  ACCOUNT UNLOCK;
GRANT ALTER SESSION TO "MACHPENAL";
GRANT CREATE JOB TO "MACHPENAL";
GRANT CREATE MATERIALIZED VIEW TO "MACHPENAL";
GRANT MANAGE SCHEDULER TO "MACHPENAL";
GRANT RESUMABLE TO "MACHPENAL";
GRANT UNLIMITED TABLESPACE TO "MACHPENAL";
GRANT EXECUTE ON "SYS"."DBMS_LOCK" TO "MACHPENAL";
GRANT "CONNECT" TO "MACHPENAL";
GRANT "MACHP_ROLE" TO "MACHPENAL";
impdp system/****** parfile=impdp_prd_accmach.par
where parfile looks something like this:
SCHEMAS=(CHEOPSPENAL,CHEOPSCIVIL,CHEOPSTECH,CHEOPSFIX)
#INCLUDE=POST_TABLE_ACTION
#INCLUDE=TRIGGER
#INCLUDE=PROCACT_SCHEMA
EXCLUDE=STATISTICS
REMAP_TABLESPACE=CHEOPSCIVIL_AXYLIS:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_DEFAULT:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_JPABL:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_JPADL1:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_JPADL2:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_JPAND:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_JPARL:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_JPATH:ACCMACH
...
REMAP_TABLESPACE=CHEOPSCIVIL_VGVRN:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_VGVVD:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_VGWLB:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_VGWLW:ACCMACH
REMAP_TABLESPACE=CHEOPSCIVIL_VGWRG:ACCMACH
REMAP_SCHEMA=CHEOPSFIX:MACHFIX
REMAP_SCHEMA=CHEOPSCIVIL:MACHCIVIL
REMAP_SCHEMA=CHEOPSTECH:MACHTECH
REMAP_SCHEMA=CHEOPSPENAL:MACHPENAL
DUMPFILE=expdp_prdmach_F_230311_%U.dmp
LOGFILE=impdp_refresh_240311.log
DIRECTORY=expdp_dir
PARALLEL=8
Monitor Data Pump sessions
select s.sid
     , s.serial#
  from v$session s
     , dba_datapump_sessions dps
 where s.saddr = dps.saddr;
Once the session is identified, you can see the parallel query slave sessions spawned off by the main Data Pump session by querying the V$PX_SESSION view:
select sid from v$px_session where qcsid = <DP Session ID>;
How long will the export take?
Data Pump sends information about its progress to the longops view and can be consulted thus:
select sid
     , serial#
     , sofar
     , totalwork
  from v$session_longops
 where 1=1
   and opname = '<DP Job Name>'
   and sofar != totalwork
/
or
select vsl.*
  from v$session_longops vsl
 where 1=1
   and (vsl.sid,vsl.serial#) in (select sid,serial# from dba_datapump_sessions)
   and vsl.sofar != vsl.totalwork
/
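v$session_longops also carries Oracle's own estimate of the remaining time, so a rough completion percentage can be read directly. A sketch using the documented columns of that view (the opname filter is a placeholder to be replaced with the actual Data Pump job name):

```sql
select opname
     , round(sofar/totalwork*100, 1) as pct_done
     , time_remaining                as seconds_left
     , elapsed_seconds
  from v$session_longops
 where opname like 'EXP%'        -- adjust to the Data Pump job name
   and totalwork > 0
   and sofar != totalwork
/
```

time_remaining is only an estimate based on the rate so far, so treat it as indicative rather than exact.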
Enable trace mode
Two trace levels are available, standard and full:
impdp trace=0480300 ...
provides standard tracing. Trace files will be written to user_dump_dest.
impdp trace=1FF0300 ...
provides full tracing. Trace files will be written to user_dump_dest.
Enabling SQL trace
You cannot simply use:
alter session set sql_trace=true;
because Data Pump does its work in separate master (DM) and worker (DW) sessions. Find those sessions first:
select sid
     , serial#
     , username
     , program
  from v$session
 where 1=1
   and (   upper(program) like '%(DW%)'
        or upper(program) like '%(DM%)'
       )
/
then use the sid and serial# to set the event:
exec dbms_system.set_ev(<sid>, <serial#>, 10046, 12, '' );
Trace files will be in user_dump_dest
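To see where user_dump_dest actually points, it can be queried from the instance. A sketch using standard dictionary views (the paths returned will of course differ per installation):

```sql
-- The user_dump_dest initialisation parameter
select value from v$parameter where name = 'user_dump_dest';

-- On 11g and later, the ADR trace directory is also exposed here
select value from v$diag_info where name = 'Diag Trace';
```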
For the full beans on what Data Pump can do, visit Arup Nanda's Data Pump page
Restore a schema from a dumpfile into a new schema so that the user can pick their objects
- Restore relevant file from TSM
- Connect to the database as sysdba and
create tablespace temp_recovery
datafile '/oracle/<ORACLE_SID>/oradata1/temp_recovery.dbf'
size 100M autoextend on next 100M maxsize 20000M
extent management local
segment space management auto;
Create a new schema to put the recovered data
create user temp_recovery identified by temp_recovery
  default tablespace temp_recovery
  temporary tablespace temp;
grant connect, resource to temp_recovery;
Create a par file to remap the schema and tablespace
vi impdp_schema.par

directory=DATA_PUMP_DIR
dumpfile=/oracle/export/<ORACLE_SID>/expdp_<ORACLE_SID>_D_FULL_20120918213006_%U.dmp
logfile=impdp_schema.log
job_name=impdp_schema
schemas=SCHEMA_TO_RECOVER
remap_schema=SCHEMA_TO_RECOVER:temp_recovery
remap_tablespace=USERS:temp_recovery
table_exists_action=skip
Run the import
impdp \'/ as sysdba\' parfile=impdp_schema.par