Oracle errors
Revision as of 13:17, 3 November 2018
Contents
- 1 ORA-00600
- 2 ORA-19051: Cannot use fast path insert for this XMLType table
- 3 ORA-27092: size of file exceeds file size limit of the process
- 4 SP2-1503: Unable to initialize Oracle call interface
- 5 UDE-31623: operation generated ORACLE error 31623
- 6 ORA-00020: maximum number of processes (150) exceeded
- 7 ORA-01555: snapshot too old: rollback segment number nn with name "xxx" too small
- 8 Optimise Oracle UNDO Parameters
- 9 Check LOB undo retention
- 10 Error 6 initializing SQL*Plus
- 11 ORA-2002: error while writing to audit trail
- 12 ORA-00449: background process 'CKPT' unexpectedly terminated with error 7446
- 13 ORA-00845: MEMORY_TARGET not supported on this system
- 14 ORA-24247: network access denied by access control list (ACL)
- 15 ORA-00600: internal error code, arguments: [qmx: no ref]
- 16 ORA-00201: control file version nn.nn.nn.nn incompatible with ORACLE version
- 17 TNS-00525: Insufficient privilege for operation
- 18 ORA-12003: materialized view or zonemap "KPK"."EVD01_NEW" does not exist
- 19 ORA-12162: TNS:net service name is incorrectly specified
- 20 ORA-00054: resource busy and acquire with NOWAIT specified
- 21 ORA-12518: TNS:listener could not hand off client connection
- 22 ORA-12520: TNS:listener could not find available handler for requested type of server
- 23 ORA-12541: TNS: no listener
- 24 ORA-12514: TNS:listener does not currently know of service
- 25 ORA-12547: TNS lost contact
- 26 ORA-02374: conversion error loading table "BLENGADM"."ZZ_EVTCTA"
- 27 ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION
- 28 ORA-14404: partitioned table contains partitions in a different tablespace
- 29 ORA-28112: failed to execute policy function
- 30 Reason
- 31 Solution
- 32 ORA-17629: Cannot connect to the remote database server
- 33 ORA-01950: no privileges on tablespace 'SYSAUX'
- 34 ORA-39080: failed to create queues "KUPC$C_1_20150616165037" and "KUPC$S_1_20150616165037" for Data Pump job
- 35 ORA-14063: Unusable index exists on unique/primary constraint key
- 36 ORA-09925: Unable to create audit trail file
- 37 ORA-09925: Unable to create audit trail file
- 38 RMAN-06023: no backup or copy of datafile 1 found to restore
- 39 RMAN-06136: ORACLE error from auxiliary database: ORA-06502: PL/SQL: numeric or value error
- 40 ORA-39181: Only partial table data may be exported due to fine grain access control on "owner"."<table_name>"
- 41 Reason
- 42 Solution
- 43 ORA-04030: out of process memory when trying to allocate 2520 bytes
- 44 ORA-04031: unable to allocate 32 bytes of shared memory ("shared pool","select name,online$,contents...","KGLH0^e5705bfc","kglHeapInitialize:temp")
- 45 ORA-00392: log 1 of thread 1 is being cleared, operation not allowed
- 46 RMAN-04004: error from recovery catalog database: ORA-01017: invalid username/password; logon denied
- 47 ORA-00230: operation disallowed: snapshot control file enqueue unavailable
- 48 ORA-19511: non RMAN, but media manager or vendor specific failure, error text:
- 49 ORA-19554: error allocating device, device type: SBT_TAPE, device name:
- 50 RMAN-20033: control file SEQUENCE# too low
- 51 ORA-27302: failure occurred at: slgpn
- 52 RMAN errors after crosscheck backup and delete noprompt obsolete and delete noprompt expired backup
- 53 RMAN-06004: ORACLE error from recovery catalog database: RMAN-20003: target database incarnation not found in recovery catalog
- 54 RMAN-06059: expected archived log not found, loss of archived log compromises recoverability
- 55 RMAN-05537: DUPLICATE without TARGET connection when auxiliary instance is started with spfile cannot use SPFILE clause
- 56 PSDRPC returns significant error 1013
- 57 Database trigger to capture ORA errors
- 58 ORA-19809: limit exceeded for recovery files
- 59 RMAN-00554: initialization of internal recovery manager package failed
- 60 Using AIX commands genld, genkld and slibclean to avoid library file locking errors (libjox) (Doc ID 739963.1)
- 61 ORA-20011: Approximate NDV failed: ORA-29913: error in executing ODCIEXTTABLEOPEN callout
- 62 ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-00600
Master Note for Diagnosing ORA-600 (Doc ID 1092832.1)
ORA-19051: Cannot use fast path insert for this XMLType table
Think LD_LIBRARY_PATH on Linux or LIBPATH on AIX.
Export: Release 11.2.0.4.0 - Production on Thu Jun 1 06:41:55 2017
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "DWH_EXPORT"."SYS_EXPORT_TABLE_03":  dwh_export/******** parfile=cadis_foto_export_credit_dba.par
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64.08 GB
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.FETCH_XML_OBJECTS []
ORA-19051: Cannot use fast path insert for this XMLType table
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.KUPW$WORKER", line 9710
Problem
When the 11g DB was started, LD_LIBRARY_PATH and LIBPATH were still pointing to paths in the 10g Home.
Solution
There may have been some odd values set up in LD_LIBRARY_PATH before the call to oraenv (if oraenv was called at all!)
unset LD_LIBRARY_PATH LIBPATH
export ORACLE_SID=<correct 11g SID>
. oraenv
echo $LD_LIBRARY_PATH
echo $LIBPATH
They should now be correct.
ORA-27092: size of file exceeds file size limit of the process
Think ulimit...
Cause: an attempt was made to open a file that exceeds the process's file size limit (ulimit). The additional information shows the current limit (in logical blocks) and the size of the file (in logical blocks).
Action: increase the process's file size limit (ulimit) and retry.
truncate table pibadm.test_ph
*
Error at line 1 :
ORA-00604: error occurred at recursive SQL level 1
ORA-01116: error in opening database file 2
ORA-01110: datafile 2 : '/opt/u01/data/PIBQA/UNDO_01.dbf'
ORA-27092: size of file exceeds file size limit of the process
Additional information: 262143
Additional information: 2621441
Check the fs_size limit for this user in /etc/security/limits
Compare this with another user, say oracle. Probably -1 (unlimited)
SP2-1503: Unable to initialize Oracle call interface
When trying to log in to the database via SQL*Plus, these messages are displayed:
hn5118 /home/oracle $sqlplus / as sysdba
SP2-1503: Unable to initialize Oracle call interface
SP2-0152: ORACLE may not be functioning properly
Seems to be a problem unique to AIX (in this case).
Check to see what the LIBPATH variable is. It may still contain a path to an old ORACLE_HOME...
hn5118 /home/oracle $env|grep LIBPATH
LIBPATH=/oracle/exp/ora_bin1/app/oracle/product/10.2.0/db_1/lib:/oracle/exp/ora_bin1/app/oracle/product/10.2.0/db_1/lib
Disco! Unset it and rerun.
hn5118 /home/oracle $unset LIBPATH
hn5118 /home/oracle $sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jul 12 17:17:54 2017
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL>
UDE-31623: operation generated ORACLE error 31623
When using Data pump, get following errors almost immediately:
UDE-31623: operation generated ORACLE error 31623
ORA-31623: a job is not attached to this session via the specified handle
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3326
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4551
ORA-06512: at line 1
Solution 1:
In one case, I simplified the parameter file that was used. Maybe related to Streams Pool size, who knows?!
I removed these 2 lines and the export started working:
compression_algorithm=medium
logtime=all
Solution 2:
There is a note about this on Metalink, 1080775.1 which suggests that Streams Pool Size needs setting.
For this Data Pump error, you need to configure the database with some Streams Pool. Run the following query to find out how much STREAMS_POOL_SIZE has been allocated:

select * from v$sgainfo;
...
Streams Pool Size                              0  Yes

Data Pump now uses Advanced Queuing (AQ) internally, so that it can handle job stops/starts/restarts. AQ uses the Streams Pool, and this database currently has STREAMS_POOL_SIZE set to zero. Manually set STREAMS_POOL_SIZE (using ALTER SYSTEM or by changing the value in the PFILE/SPFILE), restart the database and re-attempt the Data Pump export. Alternatively, turn on Automatic Shared Memory Management (ASMM) by setting SGA_TARGET or MEMORY_TARGET.
From Oracle Documentation
Setting the Size of the Buffer Cache in a Streams Environment

Oracle Data Pump uses Streams functionality to communicate between processes. If the SGA_TARGET initialization parameter is set, then the STREAMS_POOL_SIZE initialization parameter is automatically set to a reasonable value. If the SGA_TARGET initialization parameter is not set and the STREAMS_POOL_SIZE initialization parameter is not defined, then the size of the streams pool automatically defaults to 10% of the size of the shared pool.

When the streams pool is created, the required SGA memory is taken from memory allocated to the buffer cache, reducing the size of the cache to less than what was specified by the DB_CACHE_SIZE initialization parameter. This means that if the buffer cache was configured with only the minimal required SGA, then Data Pump operations may not work properly. A minimum size of 10M is recommended for STREAMS_POOL_SIZE in order to ensure successful Data Pump operations.
ORA-00020: maximum number of processes (150) exceeded
Check the current resource limits with:
set lines 300
select * from v$resource_limit;
Unfortunately this limit cannot be increased without a restart.
It may be necessary to increase the limit with
alter system set processes=500 scope=spfile;
followed by a shutdown/startup
Or it may be some user is running away with the limit.
Check to see who is running what with:
[See_what_statements_all_users_are_running]
ORA-01555: snapshot too old: rollback segment number nn with name "xxx" too small
Problem
A long running transaction has run out of space to store its read-consistent image
Solution 1
If it's a one-off, maybe leave it alone. It may have been a user leaving a session open overnight without committing or rolling back.
Solution 2
Make sure there's enough space to allow the transaction to finish.
Things to check
How much undo did the biggest transaction take? What is the undo retention parameter set to?
set lines 200
col name for a30
col description for a50
col value for a20
col maxquerylen for 999999
select p.name
     , p.description
     , p.value
     , max(u.maxquerylen) maxquerylen
from   v$parameter p
     , v$undostat u
where  1=1
and    p.name = 'undo_retention'
group by p.name
     , p.description
     , p.value;
Set the retention to about 20% bigger than the maxquerylen.
select 'alter system set undo_retention='||round(1.2*max(maxquerylen))||' scope=both;' "Run this" from v$undostat;
and set the undo retention guarantee (being aware of the consequences)
select tablespace_name , retention from dba_tablespaces;
alter tablespace &undo_ts_name retention guarantee;
Optimise Oracle UNDO Parameters
The ideal undo_retention would be enough to hold rollback for the longest transaction that happened so far (can't see into the future)
So current undo_retention = current value of undo / (db_block_size * undo blocks per sec)
- Undo Blocks per Second
SELECT MAX(undoblks/((end_time-begin_time)*3600*24)) "UNDO_BLOCKS_PER_SEC" FROM v$undostat;
Optimal Undo Retention
col actual for 9999999999.99 heading "ACTUAL UNDO SIZE [MByte]"
col retent for 9999999999 heading "UNDO RETENTION [Sec]"
col needed for 9999999999.99 heading "OPTIMAL UNDO RETENTION [Sec]"
SELECT d.undo_size/(1024*1024) actual,
to_number(SUBSTR(e.value,1,25)) retent,
ROUND((d.undo_size / (to_number(f.value) *
g.undo_block_per_sec))) needed
FROM (
SELECT SUM(a.bytes) undo_size
FROM v$datafile a,
v$tablespace b,
dba_tablespaces c
WHERE c.contents = 'UNDO'
AND c.status = 'ONLINE'
AND b.name = c.tablespace_name
AND a.ts# = b.ts#
) d,
v$parameter e,
v$parameter f,
(
SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
undo_block_per_sec
FROM v$undostat
) g
WHERE e.name = 'undo_retention'
AND f.name = 'db_block_size'
/
- Calculate Needed UNDO Size for given Database Activity
col actual for 9999999999.99 heading "ACTUAL UNDO SIZE [MByte]"
col retent for 9999999999 heading "UNDO RETENTION [Sec]"
col needed for 9999999999.99 heading "NEEDED UNDO SIZE [MByte]"
SELECT d.undo_size/(1024*1024) actual
, to_number(SUBSTR(e.value,1,25)) retent
, (TO_NUMBER(e.value) * TO_NUMBER(f.value) *
g.undo_block_per_sec) / (1024*1024) needed
FROM (
SELECT SUM(a.bytes) undo_size
FROM v$datafile a,
v$tablespace b,
dba_tablespaces c
WHERE c.contents = 'UNDO'
AND c.status = 'ONLINE'
AND b.name = c.tablespace_name
AND a.ts# = b.ts#
) d,
v$parameter e,
v$parameter f,
(
SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
undo_block_per_sec
FROM v$undostat
) g
WHERE e.name = 'undo_retention'
AND f.name = 'db_block_size'
/
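The arithmetic behind the two queries above can be sketched outside the database. This is purely illustrative: the function names and sample figures below are made up, not Oracle views or real measurements.

```python
# Sketch of the undo-sizing arithmetic used by the queries above.

def undo_blocks_per_sec(undoblks, interval_days):
    # v$undostat rows cover an interval; rate = blocks / seconds in interval
    return undoblks / (interval_days * 3600 * 24)

def optimal_retention_secs(undo_size_bytes, db_block_size, blocks_per_sec):
    # retention (seconds) the current undo tablespace can actually sustain
    return undo_size_bytes / (db_block_size * blocks_per_sec)

def needed_undo_bytes(undo_retention_secs, db_block_size, blocks_per_sec):
    # space required to honour the configured undo_retention
    return undo_retention_secs * db_block_size * blocks_per_sec

# 2,073,600 undo blocks generated over a 1-day v$undostat window
rate = undo_blocks_per_sec(2_073_600, 1)
assert rate == 24.0  # blocks per second

# 8 GB undo tablespace, 8 KB blocks: how long can we retain?
print(optimal_retention_secs(8 * 1024**3, 8192, rate))   # ~43690 s (about 12 hours)

# undo_retention=36000 s: how much undo space is needed?
print(needed_undo_bytes(36000, 8192, rate) / 1024**2)    # 6750.0 MB
```

With these sample numbers the tablespace can sustain roughly 12 hours of retention, while honouring a 36000-second undo_retention would need about 6.6 GB.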
It might be that the Export still fails with ORA-01555 even though retention seems long enough.
Are you running an export with the flashback_time parameter? If so, the undo_retention should be longer than the total export running time!
Check LOB undo retention
The retention setting for LOBs is different from that for the rest of the database.
When you change the UNDO_RETENTION parameter, the LOB segment’s retention value is NOT modified.
If you query the RETENTION column of the DBA_LOBS view, you will notice the old UNDO_RETENTION value still remains after you have modified the UNDO_RETENTION parameter.
SQL> show parameter undo_

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      AUTO
undo_retention                       integer     36000
undo_tablespace                      string      UNDOTBS1
SQL> desc SITEADMIN.AUDIT_LOG_MESSAGE
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 AUDIT_LOG_MESSAGE_ID                      NOT NULL NUMBER
 AUDIT_LOG_ID                              NOT NULL NUMBER
 SEQUENCE                                           NUMBER
 MESSAGE                                            CLOB
 CREATE_DATE                                        TIMESTAMP(6)
 COMPONENT_ID                                       VARCHAR2(50 CHAR)
set lines 300
col column_name for a32
select table_name, column_name, pctversion, retention
from   dba_lobs
where  owner='&table_owner';

TABLE_NAME                COLUMN_NAME                    PCTVERSION  RETENTION
------------------------- ------------------------------ ---------- ----------
EMAIL_STORE               CONTENT                                 0
EMAIL_STORE_ATTACHEMENT   ATTACHEMENT                             0
TRACKING                  ORIGINAL_MESSAGE                               900
TRACKING                  MESSAGE                                        900
TRACKING                  SIEBEL_MESSAGE                                 900
WISE_WMS_LOG              WISE_WMS_XML                                   900
ATTACHMENT                ORIGINAL_MESSAGE                               900
PLAN_TABLE                OTHER_XML                                      900
AUDIT_LOG_MESSAGE         MESSAGE                                        900
EXCEPTIONS                STACK_TRACE                                    900
To change the LOB segment’s RETENTION value to match the new UNDO_RETENTION value, do the following:
ALTER TABLE my_table MODIFY LOB (lob_column) (PCTVERSION 20);
ALTER TABLE my_table MODIFY LOB (lob_column) (RETENTION);
By momentarily changing the LOB to use PCTVERSION and then back to RETENTION, the new value of UNDO_RETENTION will be used.
You need to do this for all LOB segments that you intend to modify.
select 'ALTER TABLE '||owner||'.'||table_name||' MODIFY LOB ('||column_name||') (PCTVERSION 20);' from dba_lobs where owner='TIBCO';
select 'ALTER TABLE '||owner||'.'||table_name||' MODIFY LOB ('||column_name||') (RETENTION);' from dba_lobs where owner='TIBCO';
There is a view, DBA_LOBS, that displays information about all the LOB columns in the database, but it doesn't actually store any LOB data, just metadata. The actual LOB segments in the database have system-generated names of the form SYS_LOB<identifier>$$. If you identify a segment named SYS_LOB<identifier>$$ that is consuming space, you can find out which column of which table that LOB segment maps to using DBA_LOBS:
set lines 1000
col owner for a20
col table_name for a32
col column_name for a32
select owner
     , table_name
     , column_name
from   dba_lobs
where  1=1
and    segment_name = 'SYS_LOB<<identifier>>$$';
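As a convenience, the object behind such a segment can often be read straight out of its name. On releases I have seen, the generated name embeds a 10-digit object_id and a 5-digit column number, but this layout is an undocumented internal detail; treat the sketch below (in Python, names my own) as a hint to be confirmed against DBA_LOBS / DBA_OBJECTS.

```python
import re

# Parse a system-generated LOB segment name of the commonly observed form
# SYS_LOB<10-digit object_id>C<5-digit column_id>$$ (LOB indexes look the
# same with a SYS_IL prefix). NOTE: this naming is an Oracle-internal
# convention, not a documented contract.
def parse_lob_segment_name(segment_name):
    m = re.fullmatch(r"SYS_LOB(\d{10})C(\d{5})\$\$", segment_name)
    if m is None:
        return None  # not a recognisable system-generated LOB segment name
    return {"object_id": int(m.group(1)), "column_id": int(m.group(2))}

print(parse_lob_segment_name("SYS_LOB0000263155C00007$$"))
# {'object_id': 263155, 'column_id': 7}
```

The extracted object_id can then be looked up in DBA_OBJECTS to find the owning table.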
The Undo Advisor PL/SQL Interface
You can activate the Undo Advisor by creating an undo advisor task through the advisor framework.
The following example creates an undo advisor task to evaluate the undo tablespace. The name of the advisor is 'Undo Advisor'.
The analysis is based on Automatic Workload Repository snapshots, which you must specify by setting parameters START_SNAPSHOT and END_SNAPSHOT.
In the following example, the START_SNAPSHOT is "1" and END_SNAPSHOT is "2".
DECLARE
tid NUMBER;
tname VARCHAR2(30);
oid NUMBER;
BEGIN
DBMS_ADVISOR.CREATE_TASK('Undo Advisor', tid, tname, 'Undo Advisor Task');
DBMS_ADVISOR.CREATE_OBJECT(tname, 'UNDO_TBS', null, null, null, 'null', oid);
DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'TARGET_OBJECTS', oid);
DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'START_SNAPSHOT', 1);
DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'END_SNAPSHOT', 2);
DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'INSTANCE', 1);
DBMS_ADVISOR.execute_task(tname);
END;
/
After you have created the advisor task, you can view the output and recommendations in the Automatic Database Diagnostic Monitor in Enterprise Manager.
This information is also available in the DBA_ADVISOR_* data dictionary views (DBA_ADVISOR_TASKS, DBA_ADVISOR_OBJECTS, DBA_ADVISOR_FINDINGS, DBA_ADVISOR_RECOMMENDATIONS, and so on).
Error 6 initializing SQL*Plus
Using sqlplus or rman under Cygwin, ORACLE_HOME still needs to be windows format!
Message file sp1<lang>.msb not found
SP2-0750: You may need to set ORACLE_HOME to your Oracle software directory
so...
export ORACLE_SID=KITRYD
ORAENV_ASK=NO
. oraenv
[[ "`uname`" == CYGWIN* ]] && ORACLE_HOME=`cygpath -w ${ORACLE_HOME}`
ORA-2002: error while writing to audit trail
Check the space left in the audit tablespace with:
select sum(bytes)/1024/1024 mb from dba_free_space where tablespace_name ='AUDIT_DT';
If plenty of space, it can be due to Oracle clearing out the recyclebin at the same time.
If not, extend the datafile...
select * from dba_data_files where tablespace_name ='AUDIT_DT';
alter database datafile 6 resize 30G;
If that is not possible, clean out some audit data or add a datafile.
ORA-00449: background process 'CKPT' unexpectedly terminated with error 7446
SQL> startup nomount pfile="/oracle/TRSCRP1/admin/initTRSCRP1.ora"
ORA-00449: background process 'CKPT' unexpectedly terminated with error 7446
ORA-07446: sdnfy: bad value '' for parameter .
SQL> exit
Solution
The dump directories referenced by the pfile did not exist. Create them (at the OS level) and retry the startup:
$ mkdir /oracle/TRSCRP1/admin/bdump /oracle/TRSCRP1/admin/cdump
ORA-00845: MEMORY_TARGET not supported on this system
- Reference: arjudba.blogspot.be
Problem Description
SQL> STARTUP
ORA-00845: MEMORY_TARGET not supported on this system
Cause
- Starting from Oracle 11g, the Automatic Memory Management feature is defined with the parameters MEMORY_TARGET and MEMORY_MAX_TARGET.
- On Linux, the shared memory file system needs to be mounted at /dev/shm.
- The size of /dev/shm needs to be greater than MEMORY_TARGET or MEMORY_MAX_TARGET.
- AMM (Automatic Memory Management) in 11g manages both SGA and PGA together via the MMAN process.
- The MEMORY_TARGET parameter in 11g replaces the (SGA_TARGET + PGA_AGGREGATE_TARGET) combination used in 10g.
- The MEMORY_MAX_TARGET parameter in 11g replaces the SGA_MAX_SIZE parameter used in 10g.
- ORA-00845 can arise for the following two reasons on a Linux system:
1) the shared memory mapped to /dev/shm is smaller than MEMORY_TARGET or MEMORY_MAX_TARGET, or
2) shared memory is not mapped to /dev/shm at all.
Solution
Make sure /dev/shm is properly mounted. You can check it with df -h or df -k. The output should be similar to:

$ df -k
Filesystem  Size  Used  Avail  Use%  Mounted on
...
shmfs        1G   512M   512M   50%  /dev/shm

Here 1G of memory is assigned to /dev/shm. If you set MEMORY_TARGET to more than 1G, the ORA-00845 above will arise. For example, if you have MEMORY_TARGET or MEMORY_MAX_TARGET set to 12G, you can mount the shared memory with 13G like this, as the root user:

# mount -t tmpfs shmfs -o size=13g /dev/shm

To make the setting persistent across machine restarts, add an entry to /etc/fstab similar to:

shmfs /dev/shm tmpfs size=13g 0
ORA-24247: network access denied by access control list (ACL)
Cause: No access control list (ACL) has been assigned to the target host or the privilege necessary to access the target host has not been granted to the user in the access control list. Action: Ensure that an access control list (ACL) has been assigned to the target host and the privilege necessary to access the target host has been granted to the user.
Your application will encounter an ORA-24247 error if it relies on one of the network packages and no proper ACL has been created. In 11g, an ACL for the application user must be in place to use the following packages:
- UTL_TCP
- UTL_SMTP
- UTL_MAIL
- UTL_HTTP
- UTL_INADDR
Here is a very good explanation of how to setup the necessary ACLs: [whitehorses.nl]
ORA-00600: internal error code, arguments: [qmx: no ref]
Search "Master Note for Oracle XML Database (XDB) Install / Deinstall (Doc ID 1292089.1)" for more info.
Problem:
Sometimes one or more of the following errors can be encountered when installing or upgrading XDB, configuring APEX, running an Export, or selecting from xdb.xdb$resource or sys.dba_network_acls:
- ORA-31159: XML DB is in an invalid state
- ORA-00600: internal error code, arguments: [unable to load XDB library]
- ORA-00600: internal error code, arguments: [qmx: no ref]
- ORA-00600: internal error code, arguments: [qmtGetColumnInfo1]
- ORA-00600: internal error code, arguments: [qmtb_init_len]
- ORA-00600: internal error code, arguments: [qmtGetBaseType]
- ORA-00600: internal error code, arguments: [psdnop-1], [600]
- ORA-00600: internal error code, arguments: [qmtInit1]
- ORA-07445: exception encountered: core dump [_memcpy()+224] [SIGSEGV] [Address not mapped to object]
- ORA-19051 Cannot Use Fast Path Insert For This XMLType Table
- ORA-31011: XML parsing failed
Errors of this sort generally occur when the init routines for the internal XDB functions are run in an invalid environment causing memory corruption.
This can happen if the database was ever started with the LD_LIBRARY_PATH (LIBPATH for AIX or SHLIB_PATH for HP) pointing to the wrong $ORACLE_HOME/lib directory rather than to the correct location for the instance.
The LD_LIBRARY_PATH/LIBPATH/SHLIB_PATH environment variable is used to resolve the location of the shared library "libxdb.so (libxdb.sl on HP)".
Solution:
- Stop Listener
- Stop database
- Ensure $ORACLE_HOME/lib is at the start of LD_LIBRARY_PATH / LIBPATH / SHLIB_PATH
- If AIX, run /usr/sbin/slibclean as root
- Start database
- Start listener
ORA-00201: control file version nn.nn.nn.nn incompatible with ORACLE version
Trying to restore a database after complete failure
first step is to startup the database in nomount mode and restore the controlfile - succeeded.
second step, startup the database in mount mode ready for database restore...
SQL*Plus: Release 10.2.0.3.0 - Production on Wed Aug 19 18:16:47 2015
Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SYS@CODWHP1> startup mount
ORACLE instance started.

Total System Global Area  260046848 bytes
Fixed Size                  2029296 bytes
Variable Size             113248528 bytes
Database Buffers          134217728 bytes
Redo Buffers               10551296 bytes
ORA-00201: control file version 10.2.0.0.0 incompatible with ORACLE version 10.2.0.3.0
ORA-00202: control file: '/oracle/CODWHP1/product/10203/dbs/cntrlCODWHP1.dbf'
The problem was that there was no COMPATIBLE parameter in the simple init file used to start the instance.
Adding the COMPATIBLE parameter to the init file, set to the same version (10.2.0.3), allowed the database to mount.
TNS-00525: Insufficient privilege for operation
lsnrctl start LISTENER_UCLID55V

LSNRCTL for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production on 26-MAR-2015 13:36:14
Copyright (c) 1991, 2011, Oracle.  All rights reserved.

Starting /oracle/product/11.2.0.3/bin/tnslsnr: please wait...

TNSLSNR for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
System parameter file is /oracle/product/11.2.0.3/network/admin/listener.ora
Log messages written to /oracle/product/diag/tnslsnr/solax082/listener_uclid55v/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=150.251.114.238)(PORT=1525)))
Error listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1525)))
TNS-12555: TNS:permission denied
 TNS-12560: TNS:protocol adapter error
  TNS-00525: Insufficient privilege for operation
   IBM/AIX RISC System/6000 Error: 1: Not owner
Listener failed to start. See the error message(s) above...
This would suggest the user attempting to start the listener has no access to either the Oracle binaries or the TNS listener/tnsnames files.
But... checkout the /tmp/.oracle directory :-)
That hidden gem shows who started the listener last time. Delete the relevant entry and try again!
To start from a clean sheet, stop all listeners, delete this directory and restart the listeners.
ORA-12003: materialized view or zonemap "KPK"."EVD01_NEW" does not exist
Here's one you shouldn't see very often.
Trying to drop a tablespace, but prevented from doing so because a materialized view has referential constraints on a table in this tablespace.
SQL> drop tablespace TS_THALER_PART_OLD including contents and datafiles cascade constraints;
drop tablespace TS_THALER_PART_OLD including contents and datafiles cascade constraints
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-12083: must use DROP MATERIALIZED VIEW to drop "KPK"."EVD01_NEW"
Ok, nice of Oracle to tell us what we have to do. Simple enough. Let's drop the materialized view.
SQL> DROP MATERIALIZED VIEW "KPK"."EVD01_NEW";
DROP MATERIALIZED VIEW "KPK"."EVD01_NEW"
*
ERROR at line 1:
ORA-12003: materialized view or zonemap "KPK"."EVD01_NEW" does not exist
Aah. Not so obvious then.
desc "KPK"."EVD01_NEW"
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 DATMAJ                                             VARCHAR2(8 CHAR)
 HEUMAJ                                             VARCHAR2(8 CHAR)
 PGMMAJ                                             VARCHAR2(6 CHAR)
 NATOPN                                             VARCHAR2(6 CHAR)
 REFEXN                                             VARCHAR2(50 CHAR)
 SWIOLN                                             VARCHAR2(1 CHAR)
 STAEVT                                             VARCHAR2(4 CHAR)
 REFEVT                                             VARCHAR2(16 CHAR)
 NUMIDT                                             VARCHAR2(30 CHAR)
 ORIEVT                                             VARCHAR2(10 CHAR)
 REFLOG                                             VARCHAR2(16 CHAR)
 MSGEVT_CPL                                         VARCHAR2(1500 CHAR)
 REFOPN                                             VARCHAR2(16 CHAR)
 DATEVT                                             VARCHAR2(8 CHAR)
 MSGEVT_FIL_1                                       VARCHAR2(2000 CHAR)
 MSGEVT_FIL_2                                       VARCHAR2(2000 CHAR)
 MSGEVT_SPE                                         VARCHAR2(200 CHAR)
 DATEFF                                             VARCHAR2(8 CHAR)
 TMSTMP                                             VARCHAR2(15 CHAR)
 VEREVT                                             VARCHAR2(3 CHAR)
 MSGEVT_FIL_3                                       VARCHAR2(2000 CHAR)
 MSGEVT_FIL_4                                       VARCHAR2(2000 CHAR)
 TYPEVT                                             VARCHAR2(5 CHAR)
 SBP_CODAPP                                         VARCHAR2(10 CHAR)
 SBP_REFINT                                         VARCHAR2(36 CHAR)
 SBP_TYPACT                                         VARCHAR2(2 CHAR)
It exists... but as a table.
Something has gone wrong in the database somewhere. As it turns out after some deeper investigation, this database is a copy of production but "with modifications".
After the cloning process, a schema needed renaming. Oracle have a procedure for this but it is very long-winded and time-consuming. It uses export and import with transportable tablespaces to do the remap of the schema.
An unsupported workaround to rename schema is to simply update the username in sys.user$ and restart the database.
This works for the most part but now we see a consequence of that action.
col owner for a30
col mview_name for a30
select owner
     , mview_name
from dba_mviews
/

OWNER                          MVIEW_NAME
------------------------------ ------------------------------
EPK                            EVD01_NEW
Ahaa, there's the materialized view but the owner is different from the one described by the database error message above. This one is the owner as it was on production.
This means the owner of materialized views has been denormalised for some reason. Reading the (complex) view text of dba_mviews, we see the owner column is based on sys.snap$.sowner
col sowner for a20
col vname for a20
col tname for a20
select sowner
     , vname
     , tname
from sys.snap$
/

SOWNER               VNAME                TNAME
-------------------- -------------------- --------------------
EPK                  EVD01_NEW            EVD01_NEW
For future reference when updating the username in sys.user$, update the snapshot owner in this table as well!
update sys.snap$
set sowner = 'KPK'
where sowner = 'EPK'
/
Try again
DROP MATERIALIZED VIEW "KPK"."EVD01_NEW";

Materialized view dropped.
yep, and the tablespace?
drop tablespace TS_THALER_PART_OLD including contents and datafiles cascade constraints;

Tablespace dropped.
ORA-12162: TNS:net service name is incorrectly specified
Probably nothing to do with TNS. Check your ORACLE_SID is set correctly!
ORA-00054: resource busy and acquire with NOWAIT specified
SELECT O.OBJECT_NAME, S.SID, S.SERIAL#, P.SPID, S.PROGRAM, S.USERNAME,
       S.MACHINE, S.PORT, S.LOGON_TIME, SQ.SQL_FULLTEXT
FROM   V$LOCKED_OBJECT L, DBA_OBJECTS O, V$SESSION S, V$PROCESS P, V$SQL SQ
WHERE  L.OBJECT_ID = O.OBJECT_ID
AND    L.SESSION_ID = S.SID
AND    S.PADDR = P.ADDR
AND    S.SQL_ADDRESS = SQ.ADDRESS;
select object_name, s.sid, s.serial#, p.spid from v$locked_object l, dba_objects o, v$session s, v$process p where l.object_id = o.object_id and l.session_id = s.sid and s.paddr = p.addr;
then
alter system kill session 'sid,serial#';
ORA-12518: TNS:listener could not hand off client connection
The database you are trying to connect to is probably on its knees. Check server memory is not exhausted.
ORA-12520: TNS:listener could not find available handler for requested type of server
Strangely this could be related to number of processes being exhausted.
show parameter processes
alter system set processes=350 scope=spfile;
shutdown immediate
startup
ORA-12541: TNS: no listener
Either very basic - start the listener!
or, as in our case - not so obvious.
We use virtual IP addresses so that the IP address stays the same after failover to standby.
Due to some confused configuration, the network adapters on two different servers ended up with the same IP address.
Stop the IP address on the bad server (assign another one) and all is fine again.
ORA-12514: TNS:listener does not currently know of service
lsnrctl status <listener_name>
Been altering tnsnames.ora or listener.ora?
- Check the local_listener parameter
Try resetting it to itself. Strange but if you are not using port 1521, this can work wonders!
Assuming listener is called LISTENER_WM9T:
alter system set local_listener=LISTENER_WM9T scope=both;
- Try and connect with the service name or directly with
sqlplus 'sys/*******'@<host>:<port>/<SID> as sysdba
Excellent description of how Oracle TNS listeners work and a checklist on how to fix this error
ORA-12547: TNS lost contact
If
sqlplus <user>/<pass>
fails with above error, try
sqlplus <user>/<pass>@<db connection>
If this works, it is most probably a permissions error.
Re-run the
$ORACLE_HOME/root.sh
script that was run as part of the original installation. This will reset the permissions on some important files.
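Before re-running it, a quick sanity check of the oracle binary itself can confirm the diagnosis - it normally carries the setuid/setgid bits:

```shell
ls -l $ORACLE_HOME/bin/oracle
# healthy output looks something like (sizes/dates will differ):
# -rwsr-s--x 1 oracle dba 239626641 ... oracle
```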
ORA-02374: conversion error loading table "BLENGADM"."ZZ_EVTCTA"
This is typical of importing data with Data Pump when the tables in the source database were created with byte semantics and the destination uses char semantics.
ORA-02374: conversion error loading table "BLENGADM"."ZZ_EVTCTA"
ORA-12899: value too large for column NOMCTA (actual: 257, maximum: 256)
ORA-02372: data for row: NOMCTA : 0X'4C6976726574206427E9706172676E65202020202020202020'
Problem
The root cause is that the table columns were created with BYTE length semantics, where one character is assumed to occupy one byte. After conversion in a multi-byte database, one character can occupy more than one byte (2, 3 or 4) and no longer fits.
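The effect is easy to demonstrate with LENGTH (characters) versus LENGTHB (bytes); in a multi-byte character set such as AL32UTF8, an accented character like 'é' occupies two bytes:

```sql
select length('Livret d''épargne')  as chars,
       lengthb('Livret d''épargne') as bytes
from   dual;
-- in an AL32UTF8 database, BYTES is one more than CHARS
-- because 'é' is stored as two bytes
```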
Solution
- import the metadata
Run the impdp command with content=metadata_only. This will create all the tables (still in their original form) but not the data.
- alter the BYTE columns in the affected tables to CHAR semantics
select column_name
     , char_used
     , data_length
     , data_type
from   user_tab_columns
where  table_name = 'T'
and    char_used  = 'B';

COLUMN_NAME  C DATA_LENGTH DATA_TYPE
------------ - ----------- ---------
X            B           1 VARCHAR2

SQL> alter table t modify x varchar2(1 char);

Table altered.
- Import the data
Rerun the impdp ensuring table_exists_action is not REPLACE (use truncate or append). This now imports the data into the modified tables.
This script does the donkey work of altering table column semantics from byte to char
The author of this script is unknown but he/she deserves a medal. It saves countless hours of work and is so well written.
/*
The script converts all the VARCHAR2 columns in the user's schema
to CHAR length semantics. Known to work on 10.2.0.4 and later.
Script must be executed against the database without any other client
and/or application connections to the affected schema.
Oracle does not support CHAR semantics for the built-in schemas,
like SYS, SYSTEM or SYSMAN, so the script will throw an error
if you attempt to run it against any one of them.
Upon completion database objects that have dependencies on tables
being modified will be invalidated. Oracle usually re-compiles invalidated
objects on the first access. Alternatively, a DBA can execute UTLPRP script
to recompile all invalid objects in the entire database.
Upon completion, please make sure that NLS_LENGTH_SEMANTICS instance parameter
is set to CHAR to ensure further upgradability of the schema.
*/
set serveroutput on size unlimited format wrap
declare
type sql_list_t is table of varchar2(32767) index by pls_integer;
type idx_list_t is table of user_indexes.index_name%type index by pls_integer;
type col_list_t is table of user_tab_columns%rowtype index by pls_integer;
--
lv_col_cnt pls_integer := 0;
lv_idx_cnt pls_integer := 0;
lv_char_set pls_integer := 0;
lv_user user_users.username%type;
lv_sql_stmt varchar2(32767);
lv_byte_cols col_list_t;
lv_idx_names idx_list_t;
lv_idx_ddls sql_list_t;
--
procedure run_sql_stmt
(
p_sql_stmt in varchar2
)
as
begin
execute immediate p_sql_stmt;
exception
when others
then
raise_application_error(-20202, 'Failed statement [' || p_sql_stmt || dbms_utility.format_error_backtrace() || ']', true);
end;
begin
--
lv_user := user();
if lv_user in ('SYS', 'SYSTEM', 'SYSMAN')
then
raise_application_error(-20202, 'This script cannot be executed against Oracle built-in schema [' || lv_user || '].');
end if;
--
select
*
bulk collect into lv_byte_cols
from
user_tab_columns uc
where
uc.char_used = 'B'
and
substr(uc.table_name, 1, 4) != 'BIN$'
and
not exists
(
select 1
from user_views uv
where uv.view_name = uc.table_name
)
;
--
lv_col_cnt := lv_byte_cols.count();
--
dbms_output.enable();
--
if lv_col_cnt > 0
then
for idx_rw in
(
select index_name
from user_indexes
where index_type = 'FUNCTION-BASED NORMAL'
)
loop
lv_idx_cnt := lv_idx_cnt + 1;
lv_idx_names(lv_idx_cnt) := idx_rw.index_name;
lv_idx_ddls(lv_idx_cnt) := replace
(
dbms_metadata.get_ddl
(
object_type => 'INDEX',
name => idx_rw.index_name
)
,
';'
);
lv_sql_stmt := 'drop index ' || idx_rw.index_name;
run_sql_stmt(lv_sql_stmt);
dbms_output.put_line('Dropped function-based index: ' || idx_rw.index_name);
dbms_output.put_line('In case of script failure, restore the index by running the DDL statement below:');
dbms_output.put_line(lv_idx_ddls(lv_idx_cnt) || ';');
end loop;
--
for i in 1..lv_col_cnt
loop
lv_sql_stmt := 'alter table ' || lv_byte_cols(i).table_name || ' modify (' ||
lv_byte_cols(i).column_name || ' ' || lv_byte_cols(i).data_type ||
'(' || lv_byte_cols(i).char_length || ' char))';
run_sql_stmt(lv_sql_stmt);
end loop;
--
dbms_output.new_line();
for i in 1..lv_idx_cnt
loop
run_sql_stmt(lv_idx_ddls(i));
dbms_output.put_line('Recreated function-based index: ' || lv_idx_names(i));
end loop;
--
dbms_output.new_line();
dbms_output.put_line
(
'Updated length semantics to CHAR for ' || lv_col_cnt || ' column' ||
case when lv_col_cnt > 1
then 's'
end ||
' in the schema [' || lv_user || '].'
);
else
dbms_output.put_line('No columns with BYTE length semantics were found in the schema [' || lv_user || '].');
end if;
--
select count(*) into lv_char_set
from nls_session_parameters
where parameter = 'NLS_LENGTH_SEMANTICS'
and
value = 'CHAR'
;
--
if lv_char_set = 0
then
dbms_output.put_line('WARNING: Do not forget to change instance parameter NLS_LENGTH_SEMANTICS to CHAR and restart the database.');
end if;
end;
/
ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION
Despite the error text, this is not especially to do with partitions - any DDL operation that affects the table as a whole can hit it, because such an operation touches the offending hidden column.
Hidden columns do not normally show up in a desc unless colinvisible is on
set colinvisible on
But these sneaky system-generated ones don't show up even with that setting on! You need to show them with this:
set pages 50 lines 1000
col column_name for a40
col data_default for a40
select internal_column_id, column_name, data_default, hidden_column, virtual_column from dba_tab_cols where table_name ='XND40';
INTERNAL_COLUMN_ID COLUMN_NAME DATA_DEFAULT HIDDEN_COLUM VIRTUAL_COLU
------------------ -------------------------------------------------- -------------------------------------------------- ------------ ------------
35 MONLOT_REC_CDR '0' NO NO
34 MONLOT_REC_DBT '0' NO NO
33 MONLOT_CAL_CDR '0' NO NO
32 MONLOT_CAL_DBT '0' NO NO
31 MONMVT_CPT '0' NO NO
30 SENNOT ' ' NO NO
29 NUMSEQ NO NO
28 NOMFIC NO NO
27 TOTCPT_REC NO NO
26 TOTCPT_CAL NO NO
25 MONLOT_REC NO NO
24 MONLOT_CAL NO NO
23 NBRELE_REC NO NO
22 NBRELE_CAL NO NO
21 NUMCPT_GLO NO NO
20 DEVLOT NO NO
19 CODERR NO NO
18 CANLOT NO NO
17 REFTEX NO NO
16 REFEXN NO NO
15 ETTLOT NO NO
14 TYPLOT NO NO
13 DATEXC NO NO
12 STALOT NO NO
11 REFLOT NO NO
10 USRAUT NO NO
9 HEUAUT NO NO
8 DATAUT NO NO
7 PGMMAJ NO NO
6 USRMAJ NO NO
5 HEUMAJ NO NO
4 DATMAJ NO NO
3 DATCRT NO NO
2 REFMAJ NO NO
1 NUMVER NO NO
36 SYS_STS#KR2WNV5L070XKV1L5H7REH SYS_OP_COMBINED_HASH("NOMFIC","NUMSEQ") YES YES
36 rows selected.
That last row means extended statistics have been generated for 2 columns in this table.
Solution
Drop the extended stats, do what you need to do and recreate the extended stats.
PROCEDURE DROP_EXTENDED_STATS
Argument Name Type In/Out Default?
------------------------------ ----------------------- ------ --------
OWNNAME VARCHAR2 IN
TABNAME VARCHAR2 IN
EXTENSION VARCHAR2 IN
exec dbms_stats.drop_extended_stats ( 'KPK', 'XND40', '("NOMFIC","NUMSEQ")' );
create table kpk.xnd40_new as select * from kpk.xnd40 where 1=2;
Table created.
select dbms_stats.create_extended_stats ( 'KPK', 'XND40', '("NOMFIC","NUMSEQ")' ) from dual;
ORA-14404: partitioned table contains partitions in a different tablespace
As part of an RMAN duplicate with a skip tablespace, this error is produced
Reenabling controlfile options for auxiliary database
Executing: alter database enable block change tracking using file '/oracle/acc/ora_data3/adsa1/bct_adsp_do_not_delete.f'
contents of Memory Script:
{
Alter clone database open resetlogs;
}
executing Memory Script
database opened
Dropping offline and skipped tablespaces
Executing: drop tablespace "ADS_ARCHIVE" including contents cascade constraints
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 10/16/2017 23:46:12
RMAN-05501: aborting duplication of target database
RMAN-06136: ORACLE error from auxiliary database: ORA-14404: partitioned table contains partitions in a different tablespace
Recovery Manager complete.
and when attempting to backup the database, we get errors
RMAN-06169: could not read file header for datafile 188 error reason 1
RMAN-06169: could not read file header for datafile 189 error reason 1
RMAN-06169: could not read file header for datafile 190 error reason 1
RMAN-06169: could not read file header for datafile 191 error reason 1
RMAN-06169: could not read file header for datafile 14 error reason 1
released channel: t1
released channel: t2
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup plus archivelog command at 10/17/2017 15:08:40
RMAN-06056: could not access datafile 14

Recovery Manager complete.
Problem
Tables in the tablespace to be dropped also have partitions in other tablespaces, so the tablespace cannot be dropped.
Check which table partitions are in which tablespaces
set lines 200
set pagesize 200
col table_name for a14
col table_owner for a14
col partition_name for a14
select table_owner
, table_name
, partition_name
, tablespace_name
from dba_tab_partitions
where (table_owner, table_name) in
(
select table_owner
, table_name
from dba_tab_partitions x
where x.tablespace_name = '&&TABLESPACE_TO_DROP'
and exists ( select *
from dba_tab_partitions y
where x.table_owner = y.table_owner
and x.table_name = y.table_name
and y.tablespace_name not in ('&TABLESPACE_TO_DROP')
group by table_owner
, table_name
)
)
order by 1,2,partition_position
/
undef TABLESPACE_TO_DROP
TABLE_OWNER    TABLE_NAME     PARTITION_NAME TABLESPACE_NAME
-------------- -------------- -------------- ------------------------------
KPK            XND20          P0910          TS_THALER_PART_OLD
KPK            XND20          P1112          TS_THALER_PART_OLD
KPK            XND20          P13            TS_THALER_PART_OLD
KPK            XND20          P14            TS_THALER_PART_OLD
KPK            XND20          P15            TS_THALER_PART_OLD
KPK            XND20          P16            TS_THALER_DATA
KPK            XND20          P17            TS_THALER_DATA
KPK            XND20          P18            TS_THALER_DATA
KPK            XND20          PMAX           TS_THALER_DATA
Solution
If the intention is to keep all the tables then run this query to generate statements to move the relevant tables out of this tablespace
select 'alter table '||b.table_owner||'.'||b.table_name||' move partition '||b.partition_name||' tablespace &NEW_TABLESPACE;'
from (
select distinct table_name
, partition_name
from dba_tab_partitions
where tablespace_name = '&&TABLESPACE_NAME'
) a
, dba_tab_partitions b
where a.table_name = b.table_name
and b.tablespace_name != '&TABLESPACE_NAME'
/
undef TABLESPACE_NAME
If however, the intention is to drop the tablespace anyway, the tables need to be dropped before the tablespace can be dropped.
But tables with partitions in the missing tablespace cannot simply be dropped - the partitions need to be dropped first!
Run the following query to generate statements to drop all the 'problem' tables. When these have gone, the tablespace can be dropped.
- drop table partitions
sqlplus / as sysdba<<EOSQL
set headi off newpa none feedb off trims on echo off
spool /tmp/drop_table_partitions.sql
select 'alter table '||table_owner||'.'||table_name||' drop partition '||partition_name||';'
from dba_tab_partitions
where tablespace_name = '$TABLESPACE_NAME'
/
spool off
EOSQL
echo "@/tmp/drop_table_partitions.sql" | sqlplus / as sysdba
- drop tables
sqlplus / as sysdba<<EOSQL
set headi off newpa none feedb off trims on echo off
spool /tmp/drop_tables.sql
select distinct 'drop table '||table_owner||'.'||table_name||';'
from dba_tab_partitions
where tablespace_name = '$TABLESPACE_NAME'
/
spool off
EOSQL
echo "@/tmp/drop_tables.sql" | sqlplus / as sysdba
- drop tablespace
drop tablespace ads_archive including contents and datafiles
/
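Before the final drop, a quick check that nothing is left in the tablespace avoids yet another error; the tablespace name below matches this example:

```sql
-- should return no rows once the partitions and tables are gone
select owner, segment_name, segment_type
from   dba_segments
where  tablespace_name = 'ADS_ARCHIVE';
```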
Find all table partitions in tablespaces with missing datafiles
set lines 1000 pages 100
col owner for a10
col segment_name for a40
col segment_type for a25
col partition_name for a20
col tablespace_name for a25
select owner
, segment_name
, segment_type
, partition_name
, tablespace_name
from dba_segments
where segment_type in ('TABLE PARTITION', 'TABLE SUBPARTITION')
and tablespace_name in (
select tablespace_name
from dba_data_files
where file_name like '%MISSING%'
)
order by 1, 2, decode (segment_type, 'TABLE SUBPARTITION', 1, 2),4
Use exchange partition to alter metadata of the tablespaces without moving the actual data
- this is what you are looking for!
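The idea behind the exchange is that ALTER TABLE ... EXCHANGE PARTITION swaps the segment of a partition with that of a standalone table purely in the data dictionary - no rows are physically moved. A minimal sketch, using the partitioned table from this example and a hypothetical swap table:

```sql
-- 1. empty table with the same column layout as the partitioned table
create table kpk.xnd20_swap as select * from kpk.xnd20 where 1=2;

-- 2. swap segments between the partition and the standalone table;
--    only dictionary metadata changes, the data stays where it is
alter table kpk.xnd20 exchange partition p16 with table kpk.xnd20_swap;
```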
Workaround
While working on the solution, a backup can be made using the "skip inaccessible" option. Put this in an rman cmdfile...
connect catalog rmanusr/rmanpwd@rmancat
connect target sys/syspwd@adsa1
run {
allocate channel t1 type 'sbt_tape' parms 'ENV=(NSR_SERVER=hn6000.company.be,NSR_CLIENT=hn491,NSR_DATA_VOLUME_POOL=DD1DAILY,NSR_END_ERROR_IGNORE=TRUE)';
allocate channel t2 type 'sbt_tape' parms 'ENV=(NSR_SERVER=hn6000.company.be,NSR_CLIENT=hn491,NSR_DATA_VOLUME_POOL=DD1DAILY,NSR_END_ERROR_IGNORE=TRUE)';
configure retention policy to recovery window of 28 days;
configure controlfile autobackup on;
crosscheck archivelog all;
SQL 'ALTER SYSTEM SWITCH LOGFILE';
SQL 'CREATE PFILE FROM SPFILE';
backup filesperset 1 format 'dbfull_%d_%I_%t_%s_%p' full database plus archivelog skip inaccessible delete all input;
backup current controlfile;
release channel t1;
release channel t2;
}
exit
Another way of deleting the offending objects
Taken from Recover from a failed RMAN duplicate
Need to check it before running it blindly!
set echo off feed off termo off trims on pages 0 newpa none lines 2000 headi off long 32000
col val for a1000
spool destroy_offending_objects.sql
select to_clob ('alter table '|| table_name||' drop constraint '||constraint_name||';') Val
from user_constraints where constraint_type in ('R','P') and table_name in (select distinct table_name from user_tab_partitions)
union all
select to_clob ('drop index '||index_name||';') Val from user_indexes where table_name in (select distinct table_name from user_tab_partitions)
union all
select to_clob ('alter table '||table_name|| ' drop partition '||partition_name||';') from dba_tab_partitions
where tablespace_name in (select tablespace_name from dba_data_files where file_name like '%MISSING%')
union all
Select case
when instr (VAl, 'PARTITION') = 0 Then
substr (Val,1,InStr (Val,')')+1)||';'
else
substr (Val,1,InStr (Val,')')+1)||' local;'
End val
from
(select dbms_metadata.get_ddl ('INDEX', index_name,'TEST') Val from user_indexes where table_name in (select distinct table_name from user_tab_partitions))
union all
Select Val
from
(select dbms_metadata.get_dependent_ddl ('REF_CONSTRAINT', table_name,'TEST') Val
from user_tables where table_name in (select distinct table_name from user_tab_partitions join user_constraints using ( table_name)
where constraint_type='R'))
union all
select Val
from
(select dbms_metadata.get_ddl ('CONSTRAINT', constraint_name,'TEST') Val
from user_constraints
where constraint_type='P' and table_name in (select distinct table_name from user_tab_partitions)
)
union all
select to_clob (val)
from (select distinct 'drop tablespace '||tablespace_name|| ' including contents and datafiles;' val
from dba_tab_partitions
where tablespace_name in (select tablespace_name from dba_data_files where file_name like '%MISSING%')
);
spoo off
@destroy_offending_objects
ORA-28112: failed to execute policy function
When exporting or importing using Data Pump. During expdp it is accompanied by:
ORA-31693: Table data object "SYSMAN"."MGMT_IP_REPORT_DEF" failed to load/unload and is being skipped due to error:
Reason
This happens when audit policies are defined on the table but the policy function is owned by another schema.
Solution
Either drop the policy after importing the table with:
begin
for rec in (select * from dba_audit_policies where object_schema = 'SYSMAN') loop
dbms_fga.drop_policy(object_schema => rec.object_schema,
object_name => rec.object_name,
policy_name => rec.policy_name);
end loop;
end;
/
or exclude the policies during export by adding this to the .par file:
exclude=FGA_POLICY
or even better, give the necessary grants to the exporting user...
grant exempt access policy to impexpusr;
The sysdba role has this automatically.
ORA-17629: Cannot connect to the remote database server
Active duplication fails to connect even though a tns entry exists in tnsnames.ora.
Problem:
DUPLICATE TARGET DATABASE FROM ACTIVE DATABASE DORECOVER NOFILENAMECHECK;

RMAN-03002: failure of Duplicate Db command at 05/27/2015 12:52:24
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 05/27/2015 12:52:23
ORA-17629: Cannot connect to the remote database server
ORA-17627: ORA-12154: TNS:could not resolve the connect identifier specified
ORA-17629: Cannot connect to the remote database server
Solution:
Update the tnsnames.ora of both destination and source ORACLE_HOMEs!
ORA-01950: no privileges on tablespace 'SYSAUX'
26-JUN-15 03:12:27.545: >>> ORA-31642: the following SQL statement fails:
BEGIN "SYS"."DBMS_RULE_EXP_RULES".SCHEMA_CALLOUT(:1,0,1,'12.01.00.02.00'); END;
ORA-01950: no privileges on tablespace 'SYSAUX'
Cause:
In this case, it seems SYSMAN has no rights to create objects in the SYSAUX tablespace
Solution:
alter user SYSMAN quota unlimited on sysaux;
More information here
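The quota can be checked before and after in DBA_TS_QUOTAS, where MAX_BYTES = -1 means unlimited:

```sql
select username, tablespace_name, bytes, max_bytes
from   dba_ts_quotas
where  username = 'SYSMAN';
```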
ORA-39080: failed to create queues "KUPC$C_1_20150616165037" and "KUPC$S_1_20150616165037" for Data Pump job
expdp /

Export: Release 11.2.0.4.0 - Production on Tue Jun 16 16:50:36 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

ORA-31626: job does not exist
ORA-31637: cannot create job SYS_EXPORT_SCHEMA_01 for user OPS$ORAIBM
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT_INT", line 798
ORA-39080: failed to create queues "KUPC$C_1_20150616165037" and "KUPC$S_1_20150616165037" for Data Pump job
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPC$QUE_INT", line 1534
ORA-24002: QUEUE_TABLE SYS.AQ$KUPC$DATAPUMP_QUETAB_1 does not exist
Cause:
Possibly some or all of the OLAP module has been deinstalled.
Solution:
Try:
- Recompiling invalid objects. @?/rdbms/admin/utlrp
- Reinstalling DBMS_CUBE packages by (re)running catproc.sql. @?/rdbms/admin/catproc followed by @?/rdbms/admin/utlrp
- Reinstalling OLAP. @?/olap/admin/olap.sql SYSAUX TEMP; But see Metalink note ID 296187.1 for full instructions.
- Check metalink for bugs. See ID 345198.1 and possibly ID 453796.1
ORA-14063: Unusable index exists on unique/primary constraint key
ORA-39083: Object type CONSTRAINT failed to create with error:
During a Data Pump import (impdp), the import failed because indexes were already unusable before the tables were exported. This causes the import of the rows to be skipped.
If the export cannot be redone (after rebuilding the indexes), the metadata can be imported first and the indexes rebuilt before importing the data.
- Import the metadata
cat<<EOCAT >impdp_adst_ads_archive_metadata.par
userid='/ as sysdba'
dumpfile=adst_ads_archive.dmp
logfile=impdp_adst_ads_archive_metadata.log
content=metadata_only
EOCAT
impdp parfile=impdp_adst_ads_archive_metadata.par
- Find the bad indexes and rebuild them
select 'alter index '||owner||'.'||index_name||' rebuild online parallel 8;'
from dba_indexes
where status = 'UNUSABLE'
/
- Import the data
cat<<EOCAT >impdp_adst_ads_archive_data.par
userid='/ as sysdba'
dumpfile=adst_ads_archive.dmp
logfile=impdp_adst_ads_archive_data.log
table_exists_action=append
content=data_only
EOCAT
impdp parfile=impdp_adst_ads_archive_data.par
ORA-09925: Unable to create audit trail file
SQL> startup nomount pfile="/oracle/TRSCRP1/admin/initTRSCRP1.ora"
ORA-09925: Unable to create audit trail file
SVR4 Error: 2: No such file or directory
Additional information: 9925
or
connected to auxiliary database (not started)

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 08/29/2017 15:53:09
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
RMAN-04014: startup failed: ORA-09925: Unable to create audit trail file
IBM AIX RISC System/6000 Error: 2: No such file or directory
Additional information: 9925

Recovery Manager complete.
Problem
This is almost certainly one of two issues: either the audit directory is not writable by the user that started the instance, or the directory simply does not exist.
Solution
Note there can be a difference between what you see when you type:
show parameter audit
and when you type:
strings -a $ORACLE_HOME/dbs/spfile${ORACLE_SID}.ora | grep -i audit
due to someone doing an "alter system" without the "scope=spfile" or "scope=both"
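As an illustration with a dynamic parameter (open_cursors is used here only as an example):

```sql
-- changes the running instance only; lost at the next restart
-- and never written to the spfile
alter system set open_cursors=500 scope=memory;

-- changes the running instance and the spfile
alter system set open_cursors=500 scope=both;
```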
SQL> host ls -al /oracle/TRSCRP1/admin/adump
SQL> host mkdir /oracle/TRSCRP1/admin/adump
ORA-09925: Unable to create audit trail file
When starting SQL*Plus with no db processes running (expecting to see "Connected to an idle instance"), we get this:
ERROR:
ORA-09925: Unable to create audit trail file
IBM AIX RISC System/6000 Error: 2: No such file or directory
Additional information: 9925
ORA-01075: you are currently logged on
- Filesystem permissions are fine
- 'strings spfile<SID>.ora | grep audit' shows the correct directory
- Running on AIX
AIX is the clue here... need to check for semaphores and locked shared memory objects
(0) UCLID55D oraibm@solax082:/oracle/product/11.2.0.3/bin> sysresv

IPC Resources for ORACLE_SID "UCLID55D" :
Shared Memory:
ID              KEY
8388640         0xffffffff
579862561       0xffffffff
384827426       0x326f32e8
Semaphores:
ID              KEY
167772287       0xf09f4914
Oracle Instance alive for sid "UCLID55D"

(0) UCLID55D oraibm@solax082:/oracle/product/11.2.0.3/bin> ipcrm -m 8388640
(0) UCLID55D oraibm@solax082:/oracle/product/11.2.0.3/bin> ipcrm -m 579862561
(0) UCLID55D oraibm@solax082:/oracle/product/11.2.0.3/bin> ipcrm -m 384827426
(0) UCLID55D oraibm@solax082:/oracle/product/11.2.0.3/bin> ipcrm -s 167772287
(0) UCLID55D oraibm@solax082:/oracle/product/11.2.0.3/bin> sysresv

IPC Resources for ORACLE_SID "UCLID55D" :
Shared Memory
ID              KEY
No shared memory segments used
Semaphores:
ID              KEY
No semaphore resources used
Oracle Instance not alive for sid "UCLID55D"

(0) UCLID55D oraibm@solax082:/oracle/product/11.2.0.3/bin> ps -ef | grep UCLID55D
  oraibm  3326 23776   0 14:12:05 pts/1  0:00 grep UCLID55D
RMAN-06023: no backup or copy of datafile 1 found to restore
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 05/16/2018 16:08:13
RMAN-05501: aborting duplication of target database
RMAN-05556: not all datafiles have backups that can be recovered to SCN 278250901080
RMAN-03015: error occurred in stored script Memory Script
RMAN-06026: some targets not found - aborting restore
RMAN-06023: no backup or copy of datafile 7 found to restore
RMAN-06023: no backup or copy of datafile 6 found to restore
RMAN-06023: no backup or copy of datafile 5 found to restore
RMAN-06023: no backup or copy of datafile 4 found to restore
RMAN-06023: no backup or copy of datafile 3 found to restore
RMAN-06023: no backup or copy of datafile 2 found to restore
RMAN-06023: no backup or copy of datafile 1 found to restore
Seems pretty straightforward? SCN was set too early for the retention period. But no, not this time.
RMAN> list backup of datafile 1;
List of Backup Sets
===================
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
110170340 Incr 0 132.90G SBT_TAPE 00:18:32 2018-05-14 01:19:40
BP Key: 110170345 Status: AVAILABLE Compressed: NO Tag: WEEKLY_FULL
Handle: 24639762_REPORTA_07t2r3uk_1_1 Media: V_17152809_77191330
List of Datafiles in backup set 110170340
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- ------------------- ----
1 0 Incr 277998128991 2018-05-14 01:01:08 /cln/acc/ora_data3/reporta/system01.dbf
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
110355808 Incr 1 204.00M SBT_TAPE 00:05:13 2018-05-15 01:06:22
BP Key: 110355813 Status: AVAILABLE Compressed: NO Tag: DAILY_INCREMENTAL
Handle: 24661053_REPORTA_3ft2toal_1_1 Media: V_17173010_77253424
List of Datafiles in backup set 110355808
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- ------------------- ----
1 1 Incr 278044267395 2018-05-15 01:01:09 /cln/acc/ora_data3/reporta/system01.dbf
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
110543802 Incr 1 237.00M SBT_TAPE 00:05:51 2018-05-16 01:07:40
BP Key: 110543807 Status: AVAILABLE Compressed: NO Tag: DAILY_INCREMENTAL
Handle: 24681602_REPORTA_6nt30cnt_1_1 Media: V_17133646_77315270
List of Datafiles in backup set 110543802
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- ------------------- ----
1 1 Incr 278286445856 2018-05-16 01:01:50 /cln/acc/ora_data3/reporta/system01.dbf
The SCN was only 12 hours old with a retention of 28 days! The backup piece dated 2018-05-15 01:06:22 is clearly AVAILABLE and on device type sbt_tape.
The trick here was to look back through the logs and look more closely at what the file restore message was saying...
channel t0: starting datafile backup set restore
channel t0: specifying datafile(s) to restore from backup set
channel t0: restoring datafile 00001 to /cln/acc/ora_data3/reporta/system01.dbf   <<<==============
channel t0: reading from backup piece dbfull_REPORTA_738887052_967580186_116_1
channel t2: ORA-19870: error while restoring backup piece dbfull_REPORTA_738887052_967579750_113_1
ORA-19507: failed to retrieve sequential file, handle="dbfull_REPORTA_738887052_967579750_113_1", parms=""
ORA-27029: skgfrtrv: sbtrestore returned error
ORA-19511: non RMAN, but media manager or vendor specific failure, error text:
CreateOraObject20: Job[0] thread[55116090]: Restore: GetBackupInfo() for Archive File[dbfull_REPORTA_738887052_967579750_113_1] failed.
ORA-0651
It was not the fact that it could not find the data on tape, it was that it could not save the file to the location indicated!!
In the RMAN DUPLICATE script, I had all the file paths redirected (a mixture of convert parameters and spfile set) to a /restore filesystem.
But I forgot one!
A handy thing to do is to search out the file paths with this to see if something is still wrong:
cd $ORACLE_HOME/dbs
strings -a spfile${ORACLE_SID}.ora | grep dest
This should show any paths still not getting caught by the DUPLICATE script.
RMAN-06136: ORACLE error from auxiliary database: ORA-06502: PL/SQL: numeric or value error
When cloning a database, the duplicate fails with this error.
Problem
This duplicate had no target connection, but did have catalog and auxiliary connections. The catalog (the one in which the target was registered) also contained an entry with the same name as the auxiliary - left over from the days when these two instances were Primary and Standby in a Data Guard configuration.
...
Datafile 414 skipped by request
Datafile 415 skipped by request
Datafile 419 skipped by request
Datafile 420 skipped by request
Datafile 421 skipped by request
DBGSQL:     AUXILIARY> begin :ofname := sys.dbms_backup_restore.convertFileName( fname => :ifname, ftype => :iftype, osftype => TRUE); end;
DBGSQL:        sqlcode = 6502
DBGSQL:     B :ofname = NULL
DBGSQL:     B :ifname =
DBGSQL:     B :iftype = 2
restarting auxiliary database without server parameter file

Oracle instance started

Total System Global Area   32068440064 bytes

Fixed Size                     2262200 bytes
Variable Size              16978545480 bytes
Database Buffers           15032385536 bytes
Redo Buffers                  55246848 bytes

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 10/04/2017 14:50:29
RMAN-05501: aborting duplication of target database
RMAN-06136: ORACLE error from auxiliary database: ORA-06502: PL/SQL: numeric or value error

Recovery Manager complete.
Solution
Connect to the catalog and unregister the name of the auxiliary database. It shouldn't be there anyway.
rman catalog rmanusr/rmanpwd@rmancat

Recovery Manager: Release 11.2.0.4.0 - Production on Wed Oct 4 15:00:00 2017

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to recovery catalog database

RMAN> set dbid 4260753020;

executing command: SET DBID
database name is "LBK" and DBID is 4260753020

RMAN> list db_unique_name of database;

List of Databases
DB Key     DB Name  DB ID             Database Role    Db_unique_name
-------    -------  ----------------- ---------------  ------------------
13301587   LBK      4260753020        PRIMARY          LBK
13301587   LBK      4260753020        STANDBY          ACC_PERF
13301587   LBK      4260753020        STANDBY          LBKRO

RMAN> unregister db_unique_name 'ACC_PERF';

database db_unique_name is "ACC_PERF", db_name is "LBK" and DBID is 4260753020

Want to unregister the database with target db_unique_name (enter YES or NO)? YES
database with db_unique_name ACC_PERF unregistered from the recovery catalog

RMAN> exit
ORA-39181: Only partial table data may be exported due to fine grain access control on "owner"."<table_name>"
Reason
This happens when fine-grained access control (VPD) policies are defined on the tables being exported.
Solution
Grant the necessary privileges to the exporting user...
grant exempt access policy to ops$oraibm;
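Whether the grant is in place can be verified afterwards:

```sql
select grantee, privilege
from   dba_sys_privs
where  privilege = 'EXEMPT ACCESS POLICY';
```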
ORA-04030: out of process memory when trying to allocate 2520 bytes
expdp problem (ORA-04030: out of process memory when trying to allocate 2520 bytes (pga heap,koh-kghu call heap))

select value from v$pgastat where name='maximum PGA allocated';
SHOW PARAMETER TARGET

ORA-04030: out of process memory when trying to allocate 2520 bytes (pga heap,koh-kghu call heap)

alter system reset pga_aggregate_target scope=spfile;
show sga
select * from v$pgastat;
show parameter process
select sum(value)/1024/1024 Mb
from v$sesstat s, v$statname n
where 1=1
and n.STATISTIC# = s.STATISTIC#
and name = 'session pga memory';
show parameter WORKAREA
select value from v$pgastat where name='maximum PGA allocated';
alter system set memory_max_target=10G scope=spfile;
alter system set memory_target=3G scope=both;
select * from v$memory_target_advice order by memory_size;
ORA-04031: unable to allocate bytes of shared memory ("","","","")
ORA-04031: unable to allocate 32 bytes of shared memory ("shared pool","select name,online$,contents...","KGLH0^e5705bfc","kglHeapInitialize:temp")
Incident details in: /oracle/diag/rdbms/mape/MAPE/incident/incdir_86754/MAPE_dm00_14221536_i86754.trc
Wed Jul 31 20:42:50 2013
Dumping diagnostic data in directory=[cdmp_20130731204250], requested by (instance=1, osid=14221536 (DM00)), summary=[incident=86753].
Wed Jul 31 20:42:51 2013
Sweep [inc][86754]: completed
Sweep [inc][86753]: completed
Sweep [inc2][86753]: completed
Wed Jul 31 20:42:57 2013
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Dumping diagnostic data in directory=[cdmp_20130731204257], requested by (instance=1, osid=14221536 (DM00)), summary=[incident=86754].
Wed Jul 31 20:43:35 2013
Errors in file /oracle/diag/rdbms/mape/MAPE/trace/MAPE_dw02_40567024.trc (incident=82993):
ORA-04031: unable to allocate 32 bytes of shared memory ("shared pool","select order#,columns,types ...","KGLH0^470434f8","kglHeapInitialize:temp")
The database had already crashed, so there was nothing to do but:
(0) MAPE ora@x025:/home/ora> sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Fri Aug 2 14:26:20 2013
Copyright (c) 1982, 2011, Oracle.  All rights reserved.
Connected.

SYS@MAPE> alter system flush shared_pool;
alter system flush shared_pool
*
ERROR at line 1:
ORA-01012: not logged on
Process ID: 0
Session ID: 0 Serial number: 0

SYS@MAPE> shutdown abort
ORACLE instance shut down.

SYS@MAPE> startup
ORACLE instance started.

Total System Global Area  734892032 bytes
Fixed Size                  2225128 bytes
Variable Size             398461976 bytes
Database Buffers          318767104 bytes
Redo Buffers               15437824 bytes
Database mounted.
Database opened.

SYS@MAPE> show parameter pool

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
buffer_pool_keep                     string
buffer_pool_recycle                  string
global_context_pool_size             string
java_pool_size                       big integer 0
large_pool_size                      big integer 0
olap_page_pool_size                  big integer 0
shared_pool_reserved_size            big integer 18M
shared_pool_size                     big integer 208M
streams_pool_size                    big integer 0

SYS@MAPE> show sga

Total System Global Area  734892032 bytes
Fixed Size                  2225128 bytes
Variable Size             385879064 bytes
Database Buffers          331350016 bytes
Redo Buffers               15437824 bytes

SYS@MAPE> exit
but for reference, the cause could well be that the applications running on the database are not using bind variables...
From asktom.oracle.com
... Just to give you a tiny idea of how huge of a difference this can make performance wise, you only need to run a very small test:

tkyte@TKYTE816> alter system flush shared_pool;
System altered.

tkyte@TKYTE816> declare
  2      type rc is ref cursor;
  3      l_rc rc;
  4      l_dummy all_objects.object_name%type;
  5      l_start number default dbms_utility.get_time;
  6  begin
  7      for i in 1 .. 1000
  8      loop
  9          open l_rc for
 10          'select object_name
 11             from all_objects
 12            where object_id = ' || i;
 13          fetch l_rc into l_dummy;
 14          close l_rc;
 15      end loop;
 16      dbms_output.put_line
 17      ( round( (dbms_utility.get_time-l_start)/100, 2 ) ||
 18        ' seconds...' );
 19  end;
 20  /
14.86 seconds...
PL/SQL procedure successfully completed.

tkyte@TKYTE816> declare
  2      type rc is ref cursor;
  3      l_rc rc;
  4      l_dummy all_objects.object_name%type;
  5      l_start number default dbms_utility.get_time;
  6  begin
  7      for i in 1 .. 1000
  8      loop
  9          open l_rc for
 10          'select object_name
 11             from all_objects
 12            where object_id = :x'
 13          using i;
 14          fetch l_rc into l_dummy;
 15          close l_rc;
 16      end loop;
 17      dbms_output.put_line
 18      ( round( (dbms_utility.get_time-l_start)/100, 2 ) ||
 19        ' seconds...' );
 20  end;
 21  /
1.27 seconds...
PL/SQL procedure successfully completed.

That is pretty dramatic. The fact is that not only does this execute much faster (we spent more time PARSING our queries then actually EXECUTING them!) it will let more users use your system simultaneously.
...
and from linked question
...
Library cache latch contention is typically caused by NOT using bind variables. It is due
to excessive parsing of statements.
One way to see if this might be the case in your situation is to run a script like:
create table t1 as select sql_text from v$sqlarea;
alter table t1 add sql_text_wo_constants varchar2(1000);
create or replace function
remove_constants( p_query in varchar2 ) return varchar2
as
l_query long;
l_char varchar2(1);
l_in_quotes boolean default FALSE;
begin
for i in 1 .. length( p_query )
loop
l_char := substr(p_query,i,1);
if ( l_char = '''' and l_in_quotes )
then
l_in_quotes := FALSE;
elsif ( l_char = '''' and NOT l_in_quotes )
then
l_in_quotes := TRUE;
l_query := l_query || '''#';
end if;
if ( NOT l_in_quotes ) then
l_query := l_query || l_char;
end if;
end loop;
l_query := translate( l_query, '0123456789', '@@@@@@@@@@' );
for i in 0 .. 8 loop
l_query := replace( l_query, lpad('@',10-i,'@'), '@' );
l_query := replace( l_query, lpad(' ',10-i,' '), ' ' );
end loop;
return upper(l_query);
end;
/
update t1 set sql_text_wo_constants = remove_constants(sql_text);
select sql_text_wo_constants, count(*)
from t1
group by sql_text_wo_constants
having count(*) > 100
order by 2
/
The output of that last query will show you statements that are identical in the shared
pool after all numbers and character string constants have been removed. These
statements -- and more importantly their counts -- are the potential bottlenecks. In
addition to causing the contention, they will be HUGE cpu consumers.
If you discover your applications do not use bind variables -- you must have this
corrected. You'll never have a good hit ratio if everyone submits "unique" sql. Your
shared pool will never be used right and you'll be using excessive CPU (90% of the time
it takes to process "insert into t values ( 1 )" is parsing). If you use "insert into t
values ( :x )", and bind the value of 1 -- then the next person that runs that insert
will benefit from your work and run that much faster.
...
This was from a ticket raised for a Data Pump export that consistently failed with:
Processing object type SCHEMA_EXPORT/TABLE/PROCACT_INSTANCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
ORA-39014: One or more workers have prematurely exited.
ORA-39029: worker 1 with process name "DW00" prematurely terminated
ORA-31671: Worker process DW00 had an unhandled exception.
ORA-04031: unable to allocate 52824 bytes of shared memory ("shared pool","unknown object","KTSL subheap","ktsl_load_disp-2")
ORA-06512: at "SYS.KUPW$WORKER", line 2122
ORA-06512: at line 2
The error ORA-4031 on the shared pool can indicate one of two things:
- insufficient space for the shared pool (so insufficient memory in the SGA at that time), OR
- although there is enough memory, it is fragmented and no contiguous chunk can be allocated to satisfy the latest memory request.

Each SGA has a little fragmentation of course because of the operations taking place, so in your case the fragmentation is represented by the 18M of free memory, which exists in uncontiguous chunks. This means that the root cause of the error is insufficient contiguous memory to allocate 12312 bytes of shared memory.

ACTION PLAN
===========
1. We see memory_target=2000M. This is too little to support a 12c database, even when it has little activity. As you may know, 1.7G for the SGA was acceptable when speaking of 32-bit OSs, where it was hardly possible to make use of more than that. Since you have a 64-bit system, the addressable memory is virtually unlimited. Furthermore, by using AMM (memory_target>0) this memory is divided between the SGA and the PGA (the user processes), which in fact, depending on the number of connections, can limit the SGA even more.

So the first recommendation is to increase the value of memory_target to 4G, for example. There is no ideal value, but this would be a reasonable starting value for tuning. You can do further tuning of the memory using the view V$MEMORY_TARGET_ADVICE, which provides tuning advice for the MEMORY_TARGET initialization parameter:

SQL> SELECT * FROM v$memory_target_advice ORDER BY memory_size;

The dynamic performance view V$MEMORY_DYNAMIC_COMPONENTS shows the current sizes of all dynamically tuned memory components, including the total sizes of the SGA and instance PGA. You can also use V$MEMORY_RESIZE_OPS, which has a circular history buffer of the last 800 memory resize requests.

2. Since AMM is used (you have memory_target>0), make sure to set _shared_pool_reserved_pct to 10 or 15 to ensure that when the Shared Pool grows or shrinks automatically, the Reserved Area will change as well.

SQL> alter system set "_shared_pool_reserved_pct"=15 scope=spfile;
--restart the instance

FYI: By default, Oracle configures a small Reserved Pool (or Reserved Area) inside the Shared Pool. This memory can be used for operations such as PL/SQL and trigger compilation or for temporary space while loading Java objects. After the memory allocated from the Reserved Pool is freed, it returns to the Reserved Pool. By default 5% of the Shared Pool (the hidden parameter _shared_pool_reserved_pct) is set aside as the Reserved Pool. In some application environments, 5% is too small.

3. Set shared_pool_size=500M or even more. This will represent a minimum amount of memory that will always be available to the shared pool. If more memory is needed at any moment, and it is available, it will be allocated to the shared pool over the 500M.

Implement the above steps, monitor the database and let us know the result.
ORA-00392: log 1 of thread 1 is being cleared, operation not allowed
After performing an active duplication, this happened when trying to open the database.
SQL> alter database open resetlogs
alter database open resetlogs
*
ERROR at line 1:
ORA-00392: log 1 of thread 1 is being cleared, operation not allowed
ORA-00312: online log 1 thread 1: '/oracle/oradata/testdb/redo/redo01.log'
SQL> select group#,thread#,status from v$log;

GROUP#   THREAD#   STATUS
-------  --------- ----------------
      1          1 CLEARING_CURRENT
      2          1 STALE
      3          1 STALE
Check the alert log! Always a good idea to get extra info. In here we find the answer
ORA-00344: unable to re-create online log '/oracle/oradata/testdb/redo/redo01.log'
ORA-27040: file create error, unable to create file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 1
So the clue is that the file cannot be created at the operating system level.
Doing a df -k /oracle/oradata/testdb/redo shows us that the filesystem was created too small for the destination database.
In this case, we do not want the redolog directory to be as big as on the source db as there will be almost no movement on the destination.
I should have specified the LOGFILE parameter in the DUPLICATE clause to set up new, smaller redolog files.
As this duplicate takes over 12 hours, I didn't want to do it again so I created symbolic links in the redolog directory pointing out to a larger filesystem with more space.
This allowed the creation of the redolog files. They can then be resized once the database has been opened.
cd /oracle/oradata/testdb/redo
ln -s /tmp/redo01.log redo01.log
ln -s /tmp/redo02.log redo02.log
ln -s /tmp/redo03.log redo03.log

SQL> alter database open resetlogs
Database opened.
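Once the database is open, the symlinked logs can be replaced with properly sized ones by adding new groups and dropping the old ones. A sketch of the resize (group numbers, paths and sizes here are illustrative, not from the actual incident):

```sql
-- Sketch only: adjust group numbers, paths and sizes to your environment.
alter database add logfile group 4 ('/oracle/oradata/testdb/redo/redo04.log') size 100m;
alter database add logfile group 5 ('/oracle/oradata/testdb/redo/redo05.log') size 100m;

-- Switch until none of the old groups is CURRENT, then checkpoint
-- so the old groups become INACTIVE.
alter system switch logfile;
alter system checkpoint;

-- Drop an old group only when v$log shows it INACTIVE,
-- then remove its file (or symlink) at operating system level.
alter database drop logfile group 1;
```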
RMAN-04004: error from recovery catalog database: ORA-01017: invalid username/password; logon denied
oracle@host:/export/home/ora> rman target / catalog catowner/catpass@catdb

Recovery Manager: Release 11.2.0.2.0 - Production on Thu Jun 12 16:51:03 2014
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: XXXXRAP1 (DBID=3557010742)

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00554: initialization of internal recovery manager package failed
RMAN-04004: error from recovery catalog database: ORA-01017: invalid username/password; logon denied
This is normally very straight-forward and self-explanatory.
The puzzle here is that a connection from SQL*Plus works!
sqlplus catowner/catpass@catdb

SQL*Plus: Release 11.2.0.2.0 Production on Thu Jun 12 16:53:18 2014
Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL>
Eventually worked out that the rman binary in the database's ORACLE_HOME on the source machine needed relinking!
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk irman
waiting for snapshot control file enqueue
waiting for snapshot control file enqueue
waiting for snapshot control file enqueue
waiting for snapshot control file enqueue
cannot make a snapshot control file
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03008: error while performing automatic resync of recovery catalog
ORA-00230: operation disallowed: snapshot control file enqueue unavailable
From the doc...
When RMAN needs to back up or resynchronize from the control file, it first creates a snapshot or consistent image of the control file.
If one RMAN job is already backing up the control file while another needs to create a new snapshot control file, then you may see the following message:
waiting for snapshot controlfile enqueue
Under normal circumstances, a job that must wait for the control file enqueue waits for a brief interval and then successfully obtains the enqueue.
RMAN makes up to five attempts to get the enqueue and then fails the job.
The conflict is usually caused when two jobs are both backing up the control file, and the job that first starts backing up the control file waits for service from the media manager.
To determine which job is holding the conflicting enqueue:
After you see the first message stating "RMAN-08512: waiting for snapshot controlfile enqueue", start a new SQL*Plus session on the target database:
% sqlplus 'SYS/oracle@trgt AS SYSDBA'
Execute the following query to determine which job is causing the wait:
SELECT s.SID, USERNAME AS "User", PROGRAM, MODULE,
ACTION, LOGON_TIME "Logon", l.*
FROM V$SESSION s, V$ENQUEUE_LOCK l
WHERE l.SID = s.SID
AND l.TYPE = 'CF'
AND l.ID1 = 0
AND l.ID2 = 2;
You should see output similar to the following (the output in this example has been truncated):
SID User Program Module Action Logon
--- ---- -------------------- ------------------------- ---------------- ---------
9 SYS rman@h13 (TNS V1-V3) backup full datafile: c1 0000210 STARTED 21-JUN-01
After you have determined which job is creating the enqueue, you can do one of the following:
Wait until the job creating the enqueue completes
Cancel the current job and restart it after the job creating the enqueue completes
Cancel the job creating the enqueue
Commonly, enqueue situations occur when a job is writing to a tape drive, but the tape drive is waiting for a new cassette to be inserted.
If you start a new job in this situation, then you will probably receive the enqueue message because the first job cannot complete until the new tape is loaded.
or more directly...
set lines 2000
col killer for a70
col program for a20
col module for a20
col action for a20
col logon_time for a20
select 'alter system disconnect session '''||s.sid||','||s.serial#||''' immediate;' killer
     , username
     , program
     , module
     , action
     , logon_time
from   v$session s
     , v$enqueue_lock l
where  l.sid = s.sid
and    l.type = 'CF'
and    l.id1 = 0
and    l.id2 = 2
/
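If jobs keep colliding because they share the default snapshot controlfile location, it can help to check where the snapshot controlfile lives and, if needed, move it. A sketch (RMAN commands; the path below is an example, not from the source):

```sql
-- RMAN: show the current snapshot controlfile location, then relocate it.
SHOW SNAPSHOT CONTROLFILE NAME;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/oracle/backup/snapcf_trgt.f';
```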
ORA-19511: non RMAN, but media manager or vendor specific failure, error text:
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 05/01/2018 14:23:09
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
ORA-27191: sbtinfo2 returned error
Additional information: 3586
ORA-19511: non RMAN, but media manager or vendor specific failure, error text:
   Unable to connect to NetWorker server 'hn6000.cln.be' because 'Authentication error; why = Invalid client credential'. (2:10:79)
Recovery Manager complete.
If this happens, check to see if the nsr daemon is running...
Good system:
/home/oracle> ps -ef | grep nsr
root 5701836 1 0 Mar 16 - 13:18 /bin/nsrexecd
oracle 18809292 22741502 0 15:03:58 pts/0 0:00 grep nsr
Bad system:
/home/oracle> ps -ef | grep nsr
  oracle 14745986 15139292   0 15:03:29  pts/2  0:00 grep nsr
ORA-19554: error allocating device, device type: SBT_TAPE, device name:
If using Cygwin...
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 05/19/2015 12:25:47
ORA-19554: error allocating device, device type: SBT_TAPE, device name:
ORA-27000: skgfqsbi: failed to initialize storage subsystem (SBT) layer
OSD-02534: Message 2534 not found;  product=RDBMS; facility=SOSD
- Check the tdpo.opt file is where you think it is!
- Check the path to the opt file looks like this (needs to be Windows format):
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' FORMAT 'LOG_%d_t%t_s%s_u%U' PARMS 'ENV=(TDPO_OPTFILE=E:\oracle\KITRYD\admin\tdpo.opt)';
- Check the content of the opt file is Windows format paths!
RMAN-20033: control file SEQUENCE# too low
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03008: error while performing automatic resync of recovery catalog
RMAN-20033: control file SEQUENCE# too low
From forums.oracle.com
This error generally happens if an archivelog backup happens while the RMAN database backup is running and if controlfile autobackup is configured.
The database backup is usually successful and it is just the controlfile backup that fails.
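If only the controlfile autobackup failed, a catalog resync usually brings things back in step; scheduling archivelog backups so they do not overlap the database backup avoids the clash in the first place. A minimal check/repair sketch (RMAN, connected to target and catalog):

```sql
-- RMAN: re-synchronise the catalog after the clash,
-- then confirm the controlfile can be backed up cleanly.
RESYNC CATALOG;
BACKUP CURRENT CONTROLFILE;
```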
ORA-27302: failure occurred at: slgpn
RMAN-03009: failure of Control File and SPFILE Autobackup command on c1 channel at 06/28/2011 18:59:38
ORA-19715: invalid format %b for generated name
ORA-27302: failure occurred at: slgpn
Problem
When doing a "backup as copy database", got these errors after 145 of 147 datafiles had been backed up!
Investigation
I had read elsewhere that this could be something to do with the format of the autobackup of the control file so I changed the format as suggested and reran the backup.
No luck. It ended exactly the same way after 145 files. I decided to compare the original list of datafiles with those that had been backed up in order to track down the 2 failures. Maybe I could see something odd with them.
Sure enough, these 2 files were different... there was a space at the end of the filenames :-)
select 'x'||file_name||'x'
from   dba_data_files
where  file_name like '% %'
/
Solution
Run the backup of the other 2 datafiles with a different format that did not rely on the filename.
run {
allocate channel c1 device type disk format '/path/to/datafiles/%U';
backup as copy datafile 170;
backup as copy datafile 171;
backup as copy current controlfile;
release channel c1;
}
The other 145 datafiles were backed up with a format of '/path/to/datafiles/%b' which preserves the filenames so all that needs to be done is rename the other 2 after the backup finishes.
If they are going to be used in a restore scenario, they should be renamed in the controlfile also.
alter database rename file '/path/to/datafiles/filename_with_a_space_at_end ' to '/path/to/datafiles/filename_without_a_space';
RMAN errors after crosscheck backup and delete noprompt obsolete and delete noprompt expired backup
RMAN-06207: WARNING: 3921 objects could not be deleted for SBT_TAPE channel(s) due
RMAN-06208: to mismatched status.  Use CROSSCHECK command to fix status
RMAN-06210: List of Mismatched objects
RMAN-06211: ==========================
RMAN-06212:   Object Type   Filename/Handle
RMAN-06213: --------------- ---------------------------------------------------
RMAN-06214: Backup Piece    DB_SID_t20140316_s263889_umhp3bpa7_1_1
...
Use the FORCE option with the DELETE command
The FORCE option tells RMAN to remove the records from the catalog regardless of whether it can find the corresponding pieces on the media or not.
delete force noprompt expired backup;
delete force noprompt obsolete;
RMAN-06004: ORACLE error from recovery catalog database: RMAN-20003: target database incarnation not found in recovery catalog
Problem:
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 12/05/2017 11:11:15
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
RMAN-06004: ORACLE error from recovery catalog database: RMAN-20003: target database incarnation not found in recovery catalog
Solution:
Strangely, in my particular case, this means that block change tracking is turned on in the target database and needs turning off in the auxiliary while the DUPLICATE is running.
It must be done while the auxiliary is in mount mode and restoring the datafiles (before recovery starts)
It is reported to be a bug fixed in 11.1 but somehow, we're still (sometimes) getting it in 12.1.0.2.
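The command itself (run on the auxiliary instance while it is in MOUNT mode and the datafiles are restoring, before recovery starts):

```sql
-- Run on the auxiliary, in MOUNT mode, before recovery starts.
alter database disable block change tracking;
```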
RMAN-06059: expected archived log not found, loss of archived log compromises recoverability
RMAN-06059: expected archived log not found, loss of archived log compromises recoverability
ORA-19625: error identifying file <filename>
This error can come from the fact that the database files (more likely archivelogs) are being deleted by the operating system so RMAN has no clue of their whereabouts.
Possibly the result of a database duplication after which the catalog has not been updated properly?
Solution
crosscheck archivelog all;
And then maybe take a backup to be on the safe side.
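The crosscheck marks the missing logs EXPIRED; a sketch of the usual follow-up (clear the expired records, then take a fresh archivelog backup to be safe):

```sql
-- RMAN: mark missing logs, clear the expired records, back up what remains.
crosscheck archivelog all;
delete noprompt expired archivelog all;
backup archivelog all;
```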
RMAN-05537: DUPLICATE without TARGET connection when auxiliary instance is started with spfile cannot use SPFILE clause
Problem:
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 12/05/2017 14:17:49
RMAN-05501: aborting duplication of target database
RMAN-05537: DUPLICATE without TARGET connection when auxiliary instance is started with spfile cannot use SPFILE clause
Usually happens after restarting a failed RMAN DUPLICATE database. The spfile has been regenerated on the auxiliary to a level where it has too much detail in it.
Solution:
$ cd $ORACLE_HOME/dbs
$ ls -altr *dwh_perf*
-rw-r----- 1 oracle oinstall 17645568 Aug  3 13:34 snapcf_dwh_perf.f
-rw-r----- 1 oracle oinstall     7680 Nov  4 09:22 orapwdwh_perf
-rw-r--r-- 1 oracle oinstall      104 Dec  5 09:34 initdwh_perf.ora
-rw-r----- 1 oracle oinstall     6656 Dec  5 11:11 spfiledwh_perf.ora
-rw-rw---- 1 oracle oinstall     1544 Dec  5 14:18 hc_dwh_perf.dat
$ mv hc_dwh_perf.dat hc_dwh_perf.dat.old
$ mv spfiledwh_perf.ora spfiledwh_perf.ora.old
$ cat initdwh_perf.ora
sga_max_size='6G'
sga_target='6G'
compatible='12.0.0'
db_files='500'
# Instance name
db_name=dwh_perf
Re-run the DUPLICATE. It should zoom past the already recovered datafiles.
PSDRPC returns significant error 1013
archived log thread=1 sequence=195008
channel t2: reading from backup piece arclog_LBK_4260753020_946042770_479194_1
channel t0: piece handle=arclog_LBK_4260753020_946042770_479192_1 tag=TAG20170607T133757
channel t0: restored backup piece 1
channel t0: restore complete, elapsed time: 00:00:35
archived log file name=/xxxxxx/acc/ora_bin2/app/oracle/product/11.2.0.4/dbhome_1/dbs/arch1_195006_815790039.dbf thread=1 sequence=195006
PSDRPC returns significant error 1013.
PSDRPC returns significant error 1013.
released channel: t0
released channel: t1
released channel: t2
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 06/09/2017 15:15:04
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
RMAN-11003: failure during parse/execution of SQL statement: alter database recover logfile '/xxxxxx/acc/ora_bin2/app/oracle/product/11.2.0.4/dbhome_1/dbs/arch1_195006_815790039.dbf'
ORA-10877: error signaled in parallel recovery slave
Recovery Manager complete.
Action
Check the alert log on the auxiliary instance.
ORA-279 signalled during: alter database recover logfile '/xxxxxx/acc/ora_bin2/app/oracle/product/11.2.0.4/dbhome_1/dbs/arch1_195005_815790039.dbf'...
alter database recover logfile '/xxxxxx/acc/ora_bin2/app/oracle/product/11.2.0.4/dbhome_1/dbs/arch1_195006_815790039.dbf'
Media Recovery Log /xxxxxx/acc/ora_bin2/app/oracle/product/11.2.0.4/dbhome_1/dbs/arch1_195006_815790039.dbf
Fri Jun 09 07:30:30 2017
Errors with log /xxxxxx/acc/ora_bin2/app/oracle/product/11.2.0.4/dbhome_1/dbs/arch1_195006_815790039.dbf
Recovery interrupted!
Recovered data files to a consistent state at change 238029030495
Media Recovery failed with error 19755
Errors in file /xxxxxx/acc/ora_bin2/app/oracle/diag/rdbms/lbkro/lbkro/trace/lbkro_pr00_13108374.trc:
ORA-00283: recovery session canceled due to errors
===>ORA-19755: could not open change tracking file
===>ORA-19750: change tracking file: '/xxxxxx/exp/ora_data2/lbk/data/bct_lbk_do_not_delete.f
===>ORA-27037: unable to obtain file status
===>IBM AIX RISC System/6000 Error: 2: No such file or directory
Additional information: 3
Fri Jun 09 07:30:36 2017
ORA-10877 signalled during: alter database recover logfile '/xxxxxx/acc/ora_bin2/app/oracle/product/11.2.0.4/dbhome_1/dbs/arch1_195006_815790039.dbf'...
Fri Jun 09 14:04:57 2017
alter database recover logfile '/xxxxxx/acc/ora_bin2/app/oracle/product/11.2.0.4/dbhome_1/dbs/arch1_195006_815790039.dbf'
Media Recovery Log /xxxxxx/acc/ora_bin2/app/oracle/product/11.2.0.4/dbhome_1/dbs/arch1_195006_815790039.dbf
Media Recovery failed with error 1112
ORA-283 signalled during: alter database recover logfile '/xxxxxx/acc/ora_bin2/app/oracle/product/11.2.0.4/dbhome_1/dbs/arch1_195006_815790039.dbf'...
Fri Jun 09 14:31:20 2017
Shutting down instance (immediate)
Shutting down instance: further logons disabled
Stopping background process MMNL
Stopping background process MMON
License high water mark = 49
ALTER DATABASE CLOSE NORMAL
ORA-1109 signalled during: ALTER DATABASE CLOSE NORMAL...
ALTER DATABASE DISMOUNT
Shutting down archive processes
Archiving is disabled
Completed: ALTER DATABASE DISMOUNT
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
Fri Jun 09 14:31:24 2017
This is a known bug. It is fixed in 11.2.0.3 but... this customer does not apply patches!
Solution
While the DUPLICATE is running (and in MOUNT mode), log on to the auxiliary instance and disable block change tracking.
alter database disable block change tracking;
A documented workaround is to set the db_file_name_convert parameter in the DUPLICATE clause of the run block but I tried this and it failed again.
The obvious solution is to patch the database but...
Database trigger to capture ORA errors
From ora-ssn.blogspot.be
CREATE TABLE stats$error_log (
err_dt TIMESTAMP,
db_user VARCHAR2(30),
msg_stack VARCHAR2(2000),
sqltxt VARCHAR2(1000))
tablespace users;
Now, create a trigger on the database server.
CREATE OR REPLACE TRIGGER log_server_errors
AFTER SERVERERROR
ON DATABASE
DECLARE
v_sqltext VARCHAR2(1000);
nl_sqltext ora_name_list_t;
BEGIN
-- Capture entire error text
FOR i IN 1 .. ora_sql_txt(nl_sqltext) LOOP
v_sqltext := v_sqltext || nl_sqltext(i);
END LOOP;
INSERT INTO STATS$ERROR_LOG
(err_dt, db_user, msg_stack, sqltxt)
VALUES
(systimestamp,
sys.login_user,
dbms_utility.format_error_stack, v_sqltext);
END log_server_errors;
/
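A quick way to check the trigger fires (a sketch: deliberately raise an error from another session, then query the log table):

```sql
-- Deliberately raise an error; AFTER SERVERERROR should fire and log it.
select 1/0 from dual;    -- ORA-01476: divisor is equal to zero

-- Check the error was captured.
select err_dt, db_user, msg_stack
from   stats$error_log
order  by err_dt desc;
```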
ORA-19809: limit exceeded for recovery files
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 07/15/2013 10:10:59
ORA-19809: limit exceeded for recovery files
ORA-19804: cannot reclaim 274716160 bytes disk space from 1468006400 limit
ORA-19809: limit exceeded for recovery files
Cause: The limit for recovery files specified by the db_recovery_file_dest_size was exceeded.
Action: The error is accompanied by 19804. See message 19804 for further details
ORA-19804: cannot reclaim string bytes disk space from string limit
Cause: Oracle cannot reclaim disk space of specified bytes from the DB_RECOVERY_FILE_DEST_SIZE limit.
Action: There are five possible solutions:
1) Take frequent backup of recovery area using RMAN.
2) Consider changing RMAN retention policy.
3) Consider changing RMAN archivelog deletion policy.
4) Add disk space and increase DB_RECOVERY_FILE_DEST_SIZE.
5) Delete files from recovery area using RMAN.
The solution here is to increase the value of db_recovery_file_dest_size but it might also indicate a problem with tapes if the archivelogs are being backed up to tape via tdpo.
SYS@DN7> alter system set db_recovery_file_dest_size=4000M scope=both;

System altered.
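To see how full the recovery area actually is before (or after) resizing it, the standard views can be queried. A sketch (V$RECOVERY_AREA_USAGE is the 11g name; older releases call it V$FLASH_RECOVERY_AREA_USAGE):

```sql
-- How much of db_recovery_file_dest_size is in use overall.
select name
     , space_limit/1024/1024 limit_mb
     , space_used/1024/1024  used_mb
from   v$recovery_file_dest;

-- Breakdown by file type (archivelogs, backup pieces, etc.).
select file_type
     , percent_space_used
     , percent_space_reclaimable
     , number_of_files
from   v$recovery_area_usage;
```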
Looking into it further, saw that RMAN parameters were missing...
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 14 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF;
CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE 'SBT_TAPE' TO 'LOG_%d_controlfile_%F.rman';
CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 1 BACKUP TYPE TO BACKUPSET;
RMAN-00554: initialization of internal recovery manager package failed
Getting this error after migrating a database from one machine to another and attaching to a nice new version 12c catalog on the new server
Recovery Manager: Release 11.2.0.2.0 - Production on Sat Jun 21 13:16:24 2014
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: SPMSUPP1 (DBID=3026014394)

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00554: initialization of internal recovery manager package failed
RMAN-04004: error from recovery catalog database: ORA-01017: invalid username/password; logon denied
We cannot connect to the catalog via rman but there is no problem if connecting via SQL*Plus!
Solution: relink the rman executable.
Using AIX commands genld, genkld and slibclean to avoid library file locking errors (libjox) (Doc ID 739963.1)
When installing interim (one-off) patches, CPU patches or patchsets, you may encounter errors with regard to files being locked, or OUI/OPatch being unable to copy files,
even though the databases, listeners and all other Oracle processes associated with the ORACLE_HOME to be patched were stopped.
This could be as a result of a process which requires termination, or an additional file needing to be unloaded from the system cache.
Run following commands to clean up...
genld -l | grep <ORACLE_HOME>
If genld returns data then a currently executing process has something open in the ORACLE_HOME directory, therefore terminate the process as required/recommended.
<pre>
genkld | grep <ORACLE_HOME>
</pre>
If the genkld command returns a list of shared objects currently loaded into the OS system cache, remove those entries from the cache by running the slibclean command as the root user:
<pre>
/usr/sbin/slibclean
</pre>
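Putting the checks together as one sequence (the ORACLE_HOME path below is only an example; substitute the home being patched):
<pre>
ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1   # example path only

genld -l | grep $ORACLE_HOME    # running processes with files open in the home
genkld   | grep $ORACLE_HOME    # shared objects still in the OS system cache
/usr/sbin/slibclean             # as root: flush unused libraries from the cache
</pre>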
===ORA-20011: Approximate NDV failed: ORA-29913: error in executing ODCIEXTTABLEOPEN callout===
This seems to be related to orphaned Datapump external tables. It generally shows up when stats jobs (dbms_stats) are running during the night.
- Check no datapump jobs are running
<pre>
select * from dba_datapump_jobs;
</pre>
- Find orphaned tables
<pre>
set lines 2000
select owner
     , object_name
     , object_type
     , status
     , to_char(created,'dd-mon-yyyy hh24:mi:ss') created
     , to_char(last_ddl_time,'dd-mon-yyyy hh24:mi:ss') last_ddl_time
from   dba_objects
where  1=1
and    object_name like 'ET$%';
</pre>
Check that any rows seen are actually external tables
<pre>
select owner
     , table_name
     , default_directory_name
     , access_type
from   dba_external_tables
order by 1,2;
</pre>
- Drop the tables if they are not currently being used (no datapump jobs running)
<pre>
drop table &&owner..&table_name;
</pre>
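If there are many orphaned ET$ tables, one approach is to generate the drop statements and review them before running (a sketch; the PURGE keyword is optional):
<pre>
select 'drop table "'||owner||'"."'||object_name||'" purge;' as ddl
from   dba_objects
where  object_name like 'ET$%';
</pre>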
===ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below===
Database is restored and now we want to recover it, but we see this...
<pre>
Starting recover at 24-JUL-15 11:26:19
starting media recovery

unable to find archive log
archive log thread=1 sequence=69763
Oracle Error:
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/oracle/TRSCRP1/oradata2/system01.dbf'
released channel: c1
</pre>
This means that although we have restored the database, we are still missing information needed to complete the recovery. That information is contained in the archived redo logs, which must also be restored (from tape if they are no longer on disk).
We can see here that we need an archive log with sequence number 69763. But what is the file called and where should it be restored to?
For this we can go to SQL*Plus and get the database to tell us:
<pre>
recover database using backup controlfile until cancel;
</pre>
and cancel straight away.
<pre>
ORA-00279: change 13796376414661 generated at 07/14/2015 13:20:07 needed for
thread 1
ORA-00289: suggestion : /oracle/TRSCRP1/archive/arch_TRSCRP1_1_69763.arc
ORA-00280: change 13796376414661 for thread 1 is in sequence #69763

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
cancel
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/oracle/TRSCRP1/oradata2/system01.dbf'
ORA-01112: media recovery not started
</pre>
Now we see that we need to find a file called arch_TRSCRP1_1_69763.arc and put it in the directory /oracle/TRSCRP1/archive.
Recovery will keep prompting for successive sequences up until the time specified for the recover, so restore a whole batch of them.
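With RMAN the missing logs can be restored in one go. A sketch, assuming a tape channel; the starting sequence 69763 comes from the output above, but the upper bound (69790) is purely illustrative:
<pre>
run {
  allocate channel t1 device type 'SBT_TAPE';
  restore archivelog from logseq=69763 until logseq=69790 thread=1;
}
</pre>
RMAN restores them to the current archive log destination, which is where the SQL*Plus recovery will look for them.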
Repeat the "recover database using backup controlfile until cancel;" command and type AUTO or press Enter until you have got to where you need to be.
Type CANCEL when that point is reached, then open the database:
<pre>
alter database open resetlogs;
</pre>