Simple way to migrate a database using transportable tablespaces
Originally published at husnusensoy.wordpress.com
Introduction
There are various ways of migrating data in Oracle 10g. Data Pump, moving over a database link, RMAN duplicate, and transportable tablespaces (TTS) are the most common ones. Based on my experience, I would classify them as follows:
Method           Difficulty   Most Suitable For    Availability
Database Link    Easy         Less than 50 GB      High
Data Pump        Moderate     Over 50 GB           High
TTS              Moderate     Terabytes of data    Low
RMAN Duplicate   Hard         Full database        High
In this post I will explain how to do the job with TTS. Although the table lists TTS as most suitable for terabytes of data, TTS is also the best option when system resources on the source system are limited: every other technique except the database link requires a copy of the original data, and producing that copy consumes resources (CPU, disk, etc.).
Preparing the Data Before Move (Optional)
Preparing the data before the move is important for two reasons:
- Usually you do not need to move the whole data set, so processing and reducing the data before the move will shorten your transfer time.
- One drawback of TTS is that the tablespaces being moved must be placed in read-only mode for the duration of the move. If you copy the required subset of the data into a separate tablespace, only that copy needs to go read only, which increases the availability of the master data source.
SQL> CREATE TABLESPACE STAGING
       DATAFILE '/data08/asndb/staging1.dbf' SIZE 4G AUTOEXTEND ON NEXT 4G MAXSIZE 16G,
                '/data08/asndb/staging2.dbf' SIZE 4G AUTOEXTEND ON NEXT 4G MAXSIZE 16G,
                '/data08/asndb/staging3.dbf' SIZE 4G AUTOEXTEND ON NEXT 4G MAXSIZE 16G
       BLOCKSIZE 8192
       NOLOGGING
       ONLINE
       EXTENT MANAGEMENT LOCAL
       SEGMENT SPACE MANAGEMENT AUTO
       FLASHBACK OFF;

SQL> CREATE TABLE STAGING_TAB
       PCTFREE 1
       PARALLEL 3
       TABLESPACE STAGING
     AS SELECT <list of required columns>
          FROM ASN_USER.SMSC_LOG
         WHERE <filter predicate>;
In our case this filtering reduced the 330 GB of total data to 35 GB of migration data. Remember that filtering on the source side is a best practice for any migration activity.
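To see how much data you will actually be transporting, you can check the total size of the staging datafiles before the move. This is a sketch against the standard DBA_DATA_FILES view, assuming the STAGING tablespace created in the example above:

```sql
-- Total size of the STAGING tablespace's datafiles, in GB
SELECT tablespace_name,
       ROUND(SUM(bytes) / 1024 / 1024 / 1024, 2) AS size_gb
  FROM dba_data_files
 WHERE tablespace_name = 'STAGING'
 GROUP BY tablespace_name;
```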
TODO List on Source Database
There are a number of steps to perform on the source side:
- Validating that the migration tablespace set is self-contained.
- Altering the migration tablespace to read-only mode.
- Exporting the metadata.
Validating the Self-Contained Property
A transportable tablespace set must be self-contained: segments inside the migration tablespace set cannot depend on segments in tablespaces outside the set. Although this sounds like a complex check, it is done with a single procedure in the DBMS_TTS package.
SQL> begin
  2  sys.dbms_tts.transport_set_check('STAGING', TRUE);
  3  end;
  4  /

PL/SQL procedure successfully completed.

SQL> select * from sys.transport_set_violations;

no rows selected
As you can see, the STAGING tablespace is self-contained. If it were not, you would either remove the dependencies (by dropping the dependent objects) or add the tablespaces holding the referenced segments to the transportable set.
ALTER Tablespaces to READ ONLY Mode
This requirement should be obvious. In the next steps we will copy the tablespace datafiles at the file-system level. For those datafiles not to require recovery afterwards, they must stay consistent during the copy; that is, redo generation for them has to stop.
SQL> alter tablespace STAGING read only;

Export the Metadata

The final step on the source side is to take the metadata export. For this purpose you need a directory object; you can use the built-in ADMIN_DIR or create a new one.

bash-2.05$ expdp userid=temp_usr/****** directory=ADMIN_DIR dumpfile=staging_asn.dmp logfile=ADMIN_DIR:asn.log transport_tablespaces=staging transport_full_check=y

Export: Release 10.2.0.1.0 - 64bit Production on Friday, 11 July, 2008 13:02:42

Copyright (c) 2003, 2005, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Starting "TEMP_USR"."SYS_EXPORT_TRANSPORTABLE_01": userid=temp_usr/******** directory=DIR1 dumpfile=staging_asn.dmp logfile=DIR1:asn.log transport_tablespaces=staging transport_full_check=y
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "TEMP_USR"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for TEMP_USR.SYS_EXPORT_TRANSPORTABLE_01 is:
  /data08/asndb/staging_asn.dmp
Job "TEMP_USR"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at 13:03:29
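Before starting the copy, it is worth confirming that the tablespace really went read only. A quick sanity check against the standard DBA_TABLESPACES view (the STATUS column should show READ ONLY):

```sql
-- Verify the tablespace status before copying its datafiles
SELECT tablespace_name, status
  FROM dba_tablespaces
 WHERE tablespace_name = 'STAGING';
```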
TODO List on Target Database
The task list for the target database is simply:
- Copy the datafiles and export file to target server via FTP
- Create the necessary schemas
- Modify the db_n_cache_size parameter if necessary.
- Import the export file
Copy the datafiles via FTP
The very first step is to copy the datafiles and the metadata export file to the target server.
himalaya@oracle $ cd /export/home/oracle/hsensoy/datafile
himalaya@oracle $ ftp 10.x.x.x
...
Name (10.200.171.16:oracle): root
...
Password:
...
ftp> cd /data08/asndb
...
ftp> ls
...
asn.log
dump.log
dump.par
nohup.out
staging1.dbf
staging2.dbf
staging3.dbf
staging_asn.dmp
...
ftp> binary
...
ftp> get staging_asn.dmp
...
ftp> prompt
ftp> mget *.dbf
...
ftp> quit
...
Create Required Schemas
Remember that the metadata export was taken for the tablespace. When you import it into a new database, Data Pump will look for the schemas owning the objects that came from the source system, so you need to create those schemas in the new database first.
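To find out which schemas are needed, you can list the owners of the segments stored in the transported tablespace on the source database. A sketch using the standard DBA_SEGMENTS view, assuming the STAGING tablespace from the earlier examples:

```sql
-- Each distinct owner listed here needs a matching user on the target
SELECT DISTINCT owner
  FROM dba_segments
 WHERE tablespace_name = 'STAGING';
```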
SQL> CREATE USER TEMP_USR PROFILE DEFAULT IDENTIFIED BY ******
  2  DEFAULT TABLESPACE USERS_BTS
  3  TEMPORARY TABLESPACE PRIM_TMP_GRP
  4  QUOTA UNLIMITED ON USERS_BTS QUOTA UNLIMITED ON STAGING_BTS
  5  ACCOUNT UNLOCK;

User created.
Modify the db_n_cache_size (Optional)
If the source and target databases have different block sizes, Oracle will not allow you to import the metadata:
himalaya@oracle $ impdp userid=ddsbase/****** parfile=import.par

Import: Release 10.2.0.3.0 - 64bit Production on Friday, 11 July, 2008 14:38:56

Copyright (c) 2003, 2005, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Master table "DDSBASE"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "DDSBASE"."SYS_IMPORT_TRANSPORTABLE_01": userid=ddsbase/******** parfile=import.par
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29339: tablespace block size 8192 does not match configured block sizes
Job "DDSBASE"."SYS_IMPORT_TRANSPORTABLE_01" stopped due to fatal error at 14:38:59
Fortunately, Oracle allows tablespaces with different block sizes within the same database. The only thing to do is to explicitly set a buffer cache for the non-default block size:
himalaya@oracle $ srv

SQL*Plus: Release 10.2.0.3.0 - Production on Fri Jul 11 14:40:59 2008

Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> alter system set db_8k_cache_size=16M;

System altered.

SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Import the Metadata
The final step on the target side is to import the tablespace metadata. For this purpose you can simply create a Data Pump parameter file using your favorite text editor.
himalaya@oracle $ vi import.par

directory=ADMIN_DIR
dumpfile=staging_asn.dmp
logfile=ADMIN_DIR:asn.log
transport_datafiles='/export/home/oracle/hsensoy/datafile/staging1.dbf','/export/home/oracle/hsensoy/datafile/staging2.dbf','/export/home/oracle/hsensoy/datafile/staging3.dbf'
keep_master=y
Then, using this file, you can perform the import:
himalaya@oracle $ impdp userid=ddsbase/****** parfile=import.par

Import: Release 10.2.0.3.0 - 64bit Production on Friday, 11 July, 2008 14:41:14

Copyright (c) 2003, 2005, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Master table "DDSBASE"."SYS_IMPORT_TRANSPORTABLE_02" successfully loaded/unloaded
Starting "DDSBASE"."SYS_IMPORT_TRANSPORTABLE_02": userid=ddsbase/******** parfile=import.par
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "DDSBASE"."SYS_IMPORT_TRANSPORTABLE_02" successfully completed at 14:41:48
himalaya@oracle $
Your tablespace is ready to use in the target database.
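One follow-up worth noting: a transported tablespace is plugged in read only. If you want to modify its data on the target database, switch it back to read-write mode:

```sql
SQL> alter tablespace STAGING read write;
```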