Oracle errors

From dbawiki

Revision as of 16:11, 2 July 2015


ORA-00600

Master Note for Diagnosing ORA-600 (Doc ID 1092832.1)

ORA-27092: size of file exceeds file size limit of the process

Think ulimit...

Cause: an attempt was made to open a file that exceeds the process's file size limit (ulimit); the additional information shows the current limit and the size of the file (both in logical blocks)
Action: increase the process's file size limit (ulimit) and retry
truncate table pibadm.test_ph
                      *
Error at line 1 :
ORA-00604: error occurred at recursive SQL level 1
ORA-01116: error in opening database file 2
ORA-01110: datafile 2 : '/opt/u01/data/PIBQA/UNDO_01.dbf'
ORA-27092: size of file exceeds file size limit of the process
Additional information: 262143
Additional information: 2621441

Check the fs_size limit for this user in /etc/security/limits (AIX).
Compare it with another user, say oracle; it is probably -1 (unlimited).
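A quick sanity check from the shell (a generic sketch; on AIX the per-user limits live in /etc/security/limits, on Linux in /etc/security/limits.conf):

```shell
# Show the current per-process file-size limit for this shell
# (prints "unlimited" or a block count)
ulimit -f

# Raise the soft limit for this session (must not exceed the hard limit),
# then restart the instance from a shell that has the corrected limit
ulimit -S -f unlimited
```

Remember the limit is inherited by child processes, so the database must be restarted from an environment with the corrected limit.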

UDE-31623: operation generated ORACLE error 31623

When using Data Pump, you get the following errors almost immediately:

UDE-31623: operation generated ORACLE error 31623
ORA-31623: a job is not attached to this session via the specified handle
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3326
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4551
ORA-06512: at line 1

Solution 1:
In one case, I simplified the parameter file that was used; it may be related to the Streams Pool size, who knows?!
I removed these two lines and the export started working:

compression_algorithm=medium
logtime=all

Solution 2:
There is a note about this on Metalink (Doc ID 1080775.1), which suggests that the Streams Pool size needs setting.

For this Data Pump error, you will need to configure the database with some Streams Pool.

Perform the following query to find out how much STREAMS_POOL_SIZE has been allocated:

 select * from v$sgainfo;
 ...
 Streams Pool Size                               0 Yes

Data Pump now uses Advanced Queuing (AQ) internally, so it can handle job stops/starts/re-starts. However, AQ uses the Streams Pool, and here the database has STREAMS_POOL_SIZE set to zero.

Manually set the STREAMS_POOL_SIZE (using ALTER SYSTEM or by changing the value in the PFILE/SPFILE), restart the database and re-attempt the Data Pump export.

Alternately, turn on Automatic Shared Memory management (ASMM) by setting SGA_TARGET or MEMORY_TARGET.
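For example (a sketch; 64M is just an illustrative value, size it to suit your SGA):

```sql
-- Give Data Pump's AQ layer a Streams Pool to work with
ALTER SYSTEM SET streams_pool_size = 64M SCOPE=SPFILE;
-- restart the instance, then re-attempt the export

-- or let ASMM size the pool automatically:
-- ALTER SYSTEM SET sga_target = 2G SCOPE=SPFILE;
```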

From Oracle Documentation

Setting the Size Of the Buffer Cache In a Streams Environment

Oracle Data Pump uses Streams functionality to communicate between processes.
If the SGA_TARGET initialization parameter is set, then the STREAMS_POOL_SIZE initialization parameter is automatically set to a reasonable value.
If the SGA_TARGET initialization parameter is not set and the STREAMS_POOL_SIZE initialization parameter is not defined,
then the size of the streams pool automatically defaults to 10% of the size of the shared pool.

When the streams pool is created, the required SGA memory is taken from memory allocated to the buffer cache,
reducing the size of the cache to less than what was specified by the DB_CACHE_SIZE initialization parameter.
This means that if the buffer cache was configured with only the minimal required SGA, then Data Pump operations may not work properly.
A minimum size of 10M is recommended for STREAMS_POOL_SIZE in order to ensure successful Data Pump operations.

ORA-00020: maximum number of processes (150) exceeded

Check the current resource limits with:

set lines 300
select * from v$resource_limit;

Unfortunately this limit cannot be increased without a restart.
It may be necessary to increase the limit with

alter system set processes=500 scope=spfile;

followed by a shutdown/startup
Or it may be that some user is running away with the limit.
Check to see who is running what with: [See_what_statements_all_users_are_running]

ORA-01555: snapshot too old: rollback segment number nn with name "xxx" too small


Problem

A long running transaction has run out of space to store its read-consistent image

Solution 1

If it's a one-off, maybe leave it alone. It may have been a user leaving a session open overnight without committing or rolling back.

Solution 2

Make sure there's enough space to allow the transaction to finish.

Things to check

How much undo did the biggest transaction take? What is the undo retention parameter set to?

set lines 200
col name        for a30
col description for a50
col value       for a20
col maxquerylen for 999999
select p.name
,      p.description
,      p.value
,      max(u.maxquerylen) maxquerylen
from   v$parameter p
,      v$undostat  u
where  1=1
and    p.name = 'undo_retention'
group  by p.name
,      p.description
,      p.value;

Set the retention to about 20% bigger than the maxquerylen.

select 'alter system set undo_retention='||round(1.2*max(maxquerylen))||' scope=both;' "Run this" from v$undostat;

and set the undo retention guarantee (being aware of the consequences)

select tablespace_name
,      retention
from   dba_tablespaces;
alter tablespace &undo_ts_name retention guarantee;

Optimise Oracle UNDO Parameters

The ideal undo_retention would be enough to hold rollback for the longest transaction that ever happens (we can't see into the future). So:

optimal undo_retention = actual undo size / (db_block_size * undo blocks per second)
Undo Blocks per Second

SELECT MAX(undoblks/((end_time-begin_time)*3600*24)) "UNDO_BLOCKS_PER_SEC"
FROM   v$undostat;

Optimal Undo Retention

col actual for 9999999999.99 heading "ACTUAL UNDO SIZE [MByte]"
col retent for 9999999999    heading "UNDO RETENTION [Sec]"
col needed for 9999999999.99 heading "OPTIMAL UNDO RETENTION [Sec]"

SELECT d.undo_size/(1024*1024) actual,
       to_number(SUBSTR(e.value,1,25)) retent,
       ROUND((d.undo_size / (to_number(f.value) *
       g.undo_block_per_sec))) needed
  FROM (
       SELECT SUM(a.bytes) undo_size
          FROM v$datafile a,
               v$tablespace b,
               dba_tablespaces c
         WHERE c.contents = 'UNDO'
           AND c.status = 'ONLINE'
           AND b.name = c.tablespace_name
           AND a.ts# = b.ts#
       ) d,
       v$parameter e,
       v$parameter f,
       (
       SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
              undo_block_per_sec
         FROM v$undostat
       ) g
WHERE e.name = 'undo_retention'
  AND f.name = 'db_block_size'
/

Calculate Needed UNDO Size for given Database Activity

col actual for 9999999999.99 heading "ACTUAL UNDO SIZE [MByte]"
col retent for 9999999999    heading "UNDO RETENTION [Sec]"
col needed for 9999999999.99 heading "NEEDED UNDO SIZE [MByte]"

SELECT d.undo_size/(1024*1024) actual
,      to_number(SUBSTR(e.value,1,25)) retent
,      (TO_NUMBER(e.value) * TO_NUMBER(f.value) *
       g.undo_block_per_sec) / (1024*1024)  needed
  FROM (
       SELECT SUM(a.bytes) undo_size
         FROM v$datafile a,
              v$tablespace b,
              dba_tablespaces c
        WHERE c.contents = 'UNDO'
          AND c.status = 'ONLINE'
          AND b.name = c.tablespace_name
          AND a.ts# = b.ts#
       ) d,
      v$parameter e,
       v$parameter f,
       (
       SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
         undo_block_per_sec
         FROM v$undostat
       ) g
 WHERE e.name = 'undo_retention'
  AND f.name = 'db_block_size'
/

It might be that the Export still fails with ORA-01555 even though retention seems long enough. This can be due to LOB retention.
The retention setting for LOBs is different from that for the rest of the database.
When you change the UNDO_RETENTION parameter, the LOB segment’s retention value is NOT modified. If you query the RETENTION column of the DBA_LOBS view, you will notice the old UNDO_RETENTION value still remains after you have modified the UNDO_RETENTION parameter.

SQL> show parameter undo_

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      AUTO
undo_retention                       integer     36000
undo_tablespace                      string      UNDOTBS1
SQL> desc SITEADMIN.AUDIT_LOG_MESSAGE
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 AUDIT_LOG_MESSAGE_ID                      NOT NULL NUMBER
 AUDIT_LOG_ID                              NOT NULL NUMBER
 SEQUENCE                                           NUMBER
 MESSAGE                                            CLOB
 CREATE_DATE                                        TIMESTAMP(6)
 COMPONENT_ID                                       VARCHAR2(50 CHAR)
SQL> select table_name,column_name, pctversion,retention from dba_lobs where owner='SITEADMIN';

TABLE_NAME                COLUMN_NAME                    PCTVERSION  RETENTION
------------------------- ------------------------------ ---------- ----------
EMAIL_STORE               CONTENT                                 0
EMAIL_STORE_ATTACHEMENT   ATTACHEMENT                             0
TRACKING                  ORIGINAL_MESSAGE                                 900
TRACKING                  MESSAGE                                          900
TRACKING                  SIEBEL_MESSAGE                                   900
WISE_WMS_LOG              WISE_WMS_XML                                     900
ATTACHMENT                ORIGINAL_MESSAGE                                 900
PLAN_TABLE                OTHER_XML                                        900
AUDIT_LOG_MESSAGE         MESSAGE                                          900
EXCEPTIONS                STACK_TRACE                                      900

To change the LOB segment’s RETENTION value to match the new UNDO_RETENTION value, do the following:

ALTER TABLE my_table MODIFY LOB (lob_column) (PCTVERSION 20);
ALTER TABLE my_table MODIFY LOB (lob_column) (RETENTION);

By momentarily changing the LOB to use PCTVERSION and then back to RETENTION, the new value of UNDO_RETENTION will be used.
You need to do this for all LOB segments that you intend to modify.

select 'ALTER TABLE '||owner||'.'||table_name||' MODIFY LOB ('||column_name||') (PCTVERSION 20);' from dba_lobs where owner='TIBCO';

select 'ALTER TABLE '||owner||'.'||table_name||' MODIFY LOB ('||column_name||') (RETENTION);'     from dba_lobs where owner='TIBCO';

The DBA_LOBS view displays information about all the LOB columns in the database, but it doesn't actually store any LOB data, just metadata. The actual LOB segments in the database have system-generated names of the form SYS_LOB<identifier>$$. If you identify a segment named SYS_LOB<identifier>$$ that is consuming space, you can find out which column of which table that LOB maps to using DBA_LOBS:

set lines 1000
col owner       for a20
col table_name  for a32
col column_name for a32
select owner
,      table_name
,      column_name
from   dba_lobs
where  1=1
and    segment_name = 'SYS_LOB<<identifier>>$$';

The Undo Advisor PL/SQL Interface

You can activate the Undo Advisor by creating an undo advisor task through the advisor framework.
The following example creates an undo advisor task to evaluate the undo tablespace. The name of the advisor is 'Undo Advisor'.
The analysis is based on Automatic Workload Repository snapshots, which you must specify by setting parameters START_SNAPSHOT and END_SNAPSHOT.
In the following example, the START_SNAPSHOT is "1" and END_SNAPSHOT is "2".

DECLARE
   tid    NUMBER;
   tname  VARCHAR2(30);
   oid    NUMBER;
BEGIN
   DBMS_ADVISOR.CREATE_TASK('Undo Advisor', tid, tname, 'Undo Advisor Task');
   DBMS_ADVISOR.CREATE_OBJECT(tname, 'UNDO_TBS', null, null, null, 'null', oid);
   DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'TARGET_OBJECTS', oid);
   DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'START_SNAPSHOT', 1);
   DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'END_SNAPSHOT', 2);
   DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'INSTANCE', 1);
   DBMS_ADVISOR.execute_task(tname);
END;
/

After you have created the advisor task, you can view the output and recommendations in the Automatic Database Diagnostic Monitor in Enterprise Manager.
This information is also available in the DBA_ADVISOR_* data dictionary views (DBA_ADVISOR_TASKS, DBA_ADVISOR_OBJECTS, DBA_ADVISOR_FINDINGS, DBA_ADVISOR_RECOMMENDATIONS, and so on).

References

Error 6 initializing SQL*Plus

When using sqlplus or rman under Cygwin, ORACLE_HOME still needs to be in Windows format!

Message file sp1<lang>.msb not found
SP2-0750: You may need to set ORACLE_HOME to your Oracle software directory

so...

export ORACLE_SID=KITRYD
ORAENV_ASK=NO
. oraenv
[[ "`uname`" == CYGWIN* ]] && ORACLE_HOME=`cygpath -w ${ORACLE_HOME}`

ORA-02002: error while writing to audit trail

Check the space left in the audit tablespace with:

select sum(bytes)/1024/1024 mb from dba_free_space where tablespace_name ='AUDIT_DT';

If there is plenty of space, it can be due to Oracle clearing out the recycle bin at the same time.
If not, extend the datafile:

select * from dba_data_files where tablespace_name ='AUDIT_DT';
alter database datafile 6 resize 30G;

If that is not possible, clean out some audit data or add a datafile.

ORA-00449: background process 'CKPT' unexpectedly terminated with error 7446

SQL> startup nomount pfile="/oracle/TRSCRP1/admin/initTRSCRP1.ora"
ORA-00449: background process 'CKPT' unexpectedly terminated with error 7446
ORA-07446: sdnfy: bad value '' for parameter .
SQL> exit

Solution

SQL> mkdir /oracle/TRSCRP1/admin/bdump /oracle/TRSCRP1/admin/cdump

ORA-09925: Unable to create audit trail file

SQL> startup nomount pfile="/oracle/TRSCRP1/admin/initTRSCRP1.ora"
ORA-09925: Unable to create audit trail file
SVR4 Error: 2: No such file or directory
Additional information: 9925

Solution

SQL> host mkdir /oracle/TRSCRP1/admin/adump

ORA-00845: MEMORY_TARGET not supported on this system

Problem Description

SQL> STARTUP
ORA-00845: MEMORY_TARGET not supported on this system

Cause

  • Starting from Oracle 11g, the automatic memory management feature is defined with the parameters MEMORY_TARGET and MEMORY_MAX_TARGET.
  • On Linux, the shared memory needs to be mounted on the /dev/shm directory.
  • The size of /dev/shm needs to be greater than MEMORY_TARGET or MEMORY_MAX_TARGET.
  • AMM (Automatic Memory Management) in 11g manages both SGA and PGA together via the MMAN process.
  • The MEMORY_TARGET parameter in 11g replaces the 10g combination of SGA_TARGET + PGA_AGGREGATE_TARGET.
  • The MEMORY_MAX_TARGET parameter in 11g replaces the 10g SGA_MAX_SIZE parameter.
  • ORA-00845 can arise for the following two reasons on a Linux system:

1) The shared memory mapped to the /dev/shm directory is smaller than MEMORY_TARGET or MEMORY_MAX_TARGET, or
2) the shared memory is not mapped to the /dev/shm directory.

Solution

Make sure /dev/shm is properly mounted. You can check it with the df -h or df -k command.
The output should be similar to:

$ df -k
Filesystem            Size  Used Avail Use% Mounted on
...
shmfs                 1G    512M 512M  50% /dev/shm


Here /dev/shm has been assigned 1G of memory. If you now set MEMORY_TARGET to more than 1G, the ORA-00845 above will arise. For example, if you have MEMORY_TARGET or MEMORY_MAX_TARGET set to 12G, you can mount the shared memory at 13G as below.
As the root user:

# mount -t tmpfs shmfs -o size=13g /dev/shm

To make the setting persist across reboots, add an entry to /etc/fstab similar to the following:

shmfs /dev/shm tmpfs size=13g 0 0

ORA-24247: network access denied by access control list (ACL)

Cause: No access control list (ACL) has been assigned to the target host or the privilege necessary to access the target host has not been granted to the user in the access control list.

Action: Ensure that an access control list (ACL) has been assigned to the target host and the privilege necessary to access the target host has been granted to the user.

Your application will encounter an ORA-24247 error if it relies on one of the network packages and no proper ACL has been created. In 11g it is mandatory to have an ACL in place for the application user when using the following packages:

  • UTL_TCP
  • UTL_SMTP
  • UTL_MAIL
  • UTL_HTTP
  • UTL_INADDR

Here is a very good explanation of how to setup the necessary ACLs: [whitehorses.nl]
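A minimal 11g-style sketch using the standard DBMS_NETWORK_ACL_ADMIN package (the ACL file name, user and host below are made up for illustration):

```sql
BEGIN
  -- Create an ACL granting the 'connect' privilege to user APPUSER
  -- (use privilege => 'resolve' for UTL_INADDR name lookups)
  DBMS_NETWORK_ACL_ADMIN.CREATE_ACL(
    acl         => 'appuser_mail.xml',      -- hypothetical ACL name
    description => 'Allow APPUSER to reach the mail host',
    principal   => 'APPUSER',               -- hypothetical user
    is_grant    => TRUE,
    privilege   => 'connect');

  -- Map the ACL to a target host (and optionally a port range)
  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL(
    acl        => 'appuser_mail.xml',
    host       => 'smtp.example.com',       -- hypothetical host
    lower_port => 25,
    upper_port => 25);

  COMMIT;
END;
/
```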

TNS-00525: Insufficient privilege for operation

lsnrctl start LISTENER_UCLID55V

LSNRCTL for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production on 26-MAR-2015 13:36:14

Copyright (c) 1991, 2011, Oracle.  All rights reserved.

Starting /oracle/product/11.2.0.3/bin/tnslsnr: please wait...

TNSLSNR for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
System parameter file is /oracle/product/11.2.0.3/network/admin/listener.ora
Log messages written to /oracle/product/diag/tnslsnr/solax082/listener_uclid55v/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=150.251.114.238)(PORT=1525)))
Error listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1525)))
TNS-12555: TNS:permission denied
 TNS-12560: TNS:protocol adapter error
  TNS-00525: Insufficient privilege for operation
   IBM/AIX RISC System/6000 Error: 1: Not owner

Listener failed to start. See the error message(s) above...

This would suggest the user attempting to start the listener has no access to either the Oracle binaries or the TNS listener/tnsnames files.
But... check out the /tmp/.oracle directory :-)
That hidden gem shows who started the listener last time. Delete the relevant entry and try again!
To start from a clean sheet, stop all listeners, delete this directory and restart the listeners.

ORA-12162: TNS:net service name is incorrectly specified

Probably nothing to do with TNS. Check your ORACLE_SID is set correctly!
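A quick check (the SID value below is hypothetical; use your own instance name):

```shell
# ORA-12162 usually means a local (bequeath) connection was attempted
# with ORACLE_SID unset or misspelled
echo "ORACLE_SID=${ORACLE_SID:-<not set>}"

# Set it to match the running instance (hypothetical SID)
export ORACLE_SID=ORCL
```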

ORA-00054: resource busy and acquire with NOWAIT specified

SELECT O.OBJECT_NAME, S.SID, S.SERIAL#, P.SPID, S.PROGRAM,S.USERNAME,
S.MACHINE,S.PORT , S.LOGON_TIME,SQ.SQL_FULLTEXT 
FROM V$LOCKED_OBJECT L, DBA_OBJECTS O, V$SESSION S, 
V$PROCESS P, V$SQL SQ 
WHERE L.OBJECT_ID = O.OBJECT_ID 
AND L.SESSION_ID = S.SID AND S.PADDR = P.ADDR 
AND S.SQL_ADDRESS = SQ.ADDRESS;
select object_name, s.sid, s.serial#, p.spid 
from v$locked_object l, dba_objects o, v$session s, v$process p
where l.object_id = o.object_id and l.session_id = s.sid and s.paddr = p.addr;

then

alter system kill session 'sid,serial#';

ORA-12518: TNS:listener could not hand off client connection

The database you are trying to connect to is probably on its knees. Check server memory is not exhausted.
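On Linux, a quick way to check (AIX and Solaris have vmstat/svmon equivalents):

```shell
# Available memory and remaining swap; exhaustion of either can make
# the listener unable to hand off new dedicated server processes
# (free -m gives a similar summary)
grep -E 'MemTotal|MemAvailable|SwapTotal|SwapFree' /proc/meminfo
```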

ORA-12520: TNS:listener could not find available handler for requested type of server

Strangely, this can be related to the number of processes being exhausted.

show parameter processes
alter system set processes=350 scope=spfile;
shutdown immediate
startup

ORA-12514: TNS:listener does not currently know of service

lsnrctl status <listener_name>

Been altering tnsnames.ora or listener.ora?

  • Check the local_listener parameter

Try resetting it to itself. Strange, but if you are not using port 1521, this can work wonders!
Assuming the listener is called LISTENER_WM9T:

alter system set local_listener=LISTENER_WM9T scope=both;

  • Try to connect with the service name, or directly with:

sqlplus 'sys/*******'@<host>:<port>/<SID> as sysdba

Excellent description of how Oracle TNS listeners work and a checklist on how to fix this error

ORA-12547: TNS:lost contact

If

sqlplus <user>/<pass>

fails with above error, try

sqlplus <user>/<pass>@<db connection>

If this works, it is most probably a permissions error.
Re-run the

$ORACLE_HOME/root.sh

script that was run as part of the original installation. This will reset the permissions on some important files.

ORA-28112: failed to execute policy function

Seen when exporting or importing using Data Pump. During expdp it is accompanied by: "ORA-31693: Table data object "SYSMAN"."MGMT_IP_REPORT_DEF" failed to load/unload and is being skipped due to error:"

Reason

This happens when audit policies are defined on the table but the policy function is defined in another schema.

Solution

Either drop the policy after importing the table with:

begin
    for rec in (select * from dba_audit_policies where object_schema = 'SYSMAN') loop
        dbms_fga.drop_policy(object_schema => rec.object_schema,
                             object_name => rec.object_name,
                             policy_name => rec.policy_name);  
    end loop;
end;
/

or exclude the policies during export by adding this to the .par file:

exclude=FGA_POLICY

or even better, give the necessary grants to the exporting user...

grant exempt access policy to impexpusr;

The sysdba role has this automatically.

ORA-01950: no privileges on tablespace 'SYSAUX'

26-JUN-15 03:12:27.545: >>> ORA-31642: the following SQL statement fails: 
BEGIN "SYS"."DBMS_RULE_EXP_RULES".SCHEMA_CALLOUT(:1,0,1,'12.01.00.02.00'); END;
ORA-01950: no privileges on tablespace 'SYSAUX'

Cause:
In this case, it seems SYSMAN has no rights to create objects in the SYSAUX tablespace
Solution:

alter user SYSMAN quota unlimited on sysaux;

More information here

ORA-39080: failed to create queues "KUPC$C_1_20150616165037" and "KUPC$S_1_20150616165037" for Data Pump job

expdp /

Export: Release 11.2.0.4.0 - Production on Tue Jun 16 16:50:36 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-31626: job does not exist
ORA-31637: cannot create job SYS_EXPORT_SCHEMA_01 for user OPS$ORAIBM
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT_INT", line 798
ORA-39080: failed to create queues "KUPC$C_1_20150616165037" and "KUPC$S_1_20150616165037" for Data Pump job
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPC$QUE_INT", line 1534
ORA-24002: QUEUE_TABLE SYS.AQ$KUPC$DATAPUMP_QUETAB_1 does not exist

Cause:
Possibly some or all of the OLAP module has been deinstalled.
Solution:
Try:

  • Recompiling invalid objects. @?/rdbms/admin/utlrp
  • Reinstalling DBMS_CUBE packages by (re)running catproc.sql. @?/rdbms/admin/catproc followed by @?/rdbms/admin/utlrp
  • Reinstalling OLAP. @?/olap/admin/olap.sql SYSAUX TEMP; But see Metalink note ID 296187.1 for full instructions.
  • Check metalink for bugs. See ID 345198.1 and possibly ID 453796.1

ORA-39181: Only partial table data may be exported due to fine grain access control on "owner"."<table_name>"

Reason

This happens when applications are designed with fine-grained access.

Solution

Grant the necessary privileges to the exporting user...

grant exempt access policy to ops$oraibm;

ORA-04030: out of process memory when trying to allocate 2520 bytes

An expdp problem: "ORA-04030: out of process memory when trying to allocate 2520 bytes (pga heap,koh-kghu call heap)". Useful commands for diagnosing and adjusting the memory settings:

select value from v$pgastat where name='maximum PGA allocated';

SHOW PARAMETER TARGET

ORA-04030: out of process memory when trying to allocate 2520 bytes (pga heap,koh-kghu call heap)

alter system reset pga_aggregate_target scope=spfile;

show sga

select * from v$pgastat;

show parameter process

select sum(value)/1024/1024 Mb
from   v$sesstat s, v$statname n
where  1=1
and    n.STATISTIC# = s.STATISTIC#
and    name = 'session pga memory';

show parameter WORKAREA

select value from v$pgastat where name='maximum PGA allocated';

alter system set memory_max_target=10G scope=spfile;

alter system set memory_target=3G scope=both;

select * from v$memory_target_advice order by memory_size;

ORA-04031: unable to allocate 32 bytes of shared memory ("shared pool","select name,online$,contents...","KGLH0^e5705bfc","kglHeapInitialize:temp")

ORA-04031: unable to allocate  bytes of shared memory ("","","","") 
ORA-04031: unable to allocate 32 bytes of shared memory ("shared pool","select name,online$,contents...","KGLH0^e5705bfc","kglHeapInitialize:temp") 
Incident details in: /oracle/diag/rdbms/mape/MAPE/incident/incdir_86754/MAPE_dm00_14221536_i86754.trc 
Wed Jul 31 20:42:50 2013 
Dumping diagnostic data in directory=[cdmp_20130731204250], requested by (instance=1, osid=14221536 (DM00)), summary=[incident=86753]. 
Wed Jul 31 20:42:51 2013 
Sweep [inc][86754]: completed 
Sweep [inc][86753]: completed 
Sweep [inc2][86753]: completed 
Wed Jul 31 20:42:57 2013 
Use ADRCI or Support Workbench to package the incident. 
See Note 411.1 at My Oracle Support for error and packaging details. 
Dumping diagnostic data in directory=[cdmp_20130731204257], requested by (instance=1, osid=14221536 (DM00)), summary=[incident=86754]. 
Wed Jul 31 20:43:35 2013 
Errors in file /oracle/diag/rdbms/mape/MAPE/trace/MAPE_dw02_40567024.trc  (incident=82993): 
ORA-04031: unable to allocate 32 bytes of shared memory ("shared pool","select order#,columns,types ...","KGLH0^470434f8","kglHeapInitialize:temp")

Database had already crashed so nothing to do but:

(0) MAPE ora@x025:/home/ora> sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Fri Aug 2 14:26:20 2013

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected.
SYS@MAPE> alter system flush shared_pool;
alter system flush shared_pool
*
ERROR at line 1:
ORA-01012: not logged on
Process ID: 0
Session ID: 0 Serial number: 0


SYS@MAPE> shutdown abort
ORACLE instance shut down.
SYS@MAPE> startup
ORACLE instance started.

Total System Global Area  734892032 bytes
Fixed Size                  2225128 bytes
Variable Size             398461976 bytes
Database Buffers          318767104 bytes
Redo Buffers               15437824 bytes
Database mounted.
Database opened.

SYS@MAPE> show parameter pool

NAME                                 TYPE                           VALUE
------------------------------------ ------------------------------ ------------------------------
buffer_pool_keep                     string
buffer_pool_recycle                  string
global_context_pool_size             string
java_pool_size                       big integer                    0
large_pool_size                      big integer                    0
olap_page_pool_size                  big integer                    0
shared_pool_reserved_size            big integer                    18M
shared_pool_size                     big integer                    208M
streams_pool_size                    big integer                    0
SYS@MAPE> show sga

Total System Global Area  734892032 bytes
Fixed Size                  2225128 bytes
Variable Size             385879064 bytes
Database Buffers          331350016 bytes
Redo Buffers               15437824 bytes
SYS@MAPE> exit

But for reference, the cause could well be that the applications running on the database are not using bind variables...
From asktom.oracle.com

...
Just to give you a tiny idea of how huge of a difference this 
can make performance wise, you only need to run a very small 
test:

tkyte@TKYTE816> alter system flush shared_pool;
System altered.

tkyte@TKYTE816> declare
  2      type rc is ref cursor;
  3      l_rc rc;
  4      l_dummy all_objects.object_name%type;
  5      l_start number default dbms_utility.get_time;
  6  begin
  7      for i in 1 .. 1000
  8      loop
  9          open l_rc for
 10          'select object_name
 11             from all_objects
 12            where object_id = ' || i;
 13          fetch l_rc into l_dummy;
 14          close l_rc;
 15      end loop;
 16      dbms_output.put_line
 17      ( round( (dbms_utility.get_time-l_start)/100, 2 ) ||
 18        ' seconds...' );
 19  end;
 20  /
14.86 seconds...

PL/SQL procedure successfully completed.

tkyte@TKYTE816> declare
  2      type rc is ref cursor;
  3      l_rc rc;
  4      l_dummy all_objects.object_name%type;
  5      l_start number default dbms_utility.get_time;
  6  begin
  7      for i in 1 .. 1000
  8      loop
  9          open l_rc for
 10          'select object_name
 11             from all_objects
 12            where object_id = :x'
 13          using i;
 14          fetch l_rc into l_dummy;
 15          close l_rc;
 16      end loop;
 17      dbms_output.put_line
 18      ( round( (dbms_utility.get_time-l_start)/100, 2 ) ||
 19        ' seconds...' );
 20  end;
 21  /
1.27 seconds...

PL/SQL procedure successfully completed.

That is pretty dramatic.  The fact is that not only does this 
execute much faster (we spent more time PARSING our queries then 
actually EXECUTING them!) it will let more users use your system 
simultaneously.
...

and from linked question

...
Library cache latch contention is typically caused by NOT using bind variables.  It is due 
to excessive parsing of statements.

One way to see if this might be the case in your situation is to run a script like:


create table t1 as select sql_text from v$sqlarea;

alter table t1 add sql_text_wo_constants varchar2(1000);

create or replace function 
remove_constants( p_query in varchar2 ) return varchar2
as
    l_query long;
    l_char  varchar2(1);
    l_in_quotes boolean default FALSE;
begin
    for i in 1 .. length( p_query )
    loop
        l_char := substr(p_query,i,1);
        if ( l_char = '''' and l_in_quotes )
        then
            l_in_quotes := FALSE;
        elsif ( l_char = '''' and NOT l_in_quotes )
        then
            l_in_quotes := TRUE;
            l_query := l_query || '''#';
        end if;
        if ( NOT l_in_quotes ) then
            l_query := l_query || l_char;
        end if;
    end loop;
    l_query := translate( l_query, '0123456789', '@@@@@@@@@@' );
    for i in 0 .. 8 loop
        l_query := replace( l_query, lpad('@',10-i,'@'), '@' );
        l_query := replace( l_query, lpad(' ',10-i,' '), ' ' );
    end loop;
    return upper(l_query);
end;
/
update t1 set sql_text_wo_constants = remove_constants(sql_text);

select sql_text_wo_constants, count(*)
  from t1
 group by sql_text_wo_constants
having count(*) > 100
 order by 2
/


The output of that last query will show you statements that are identical in the shared 
pool after all numbers and character string constants have been removed.  These 
statements -- and more importantly their counts -- are the potential bottlenecks.  In 
addition to causing the contention, they will be HUGE cpu consumers.  

If you discover your applications do not use bind variables -- you must have this 
corrected.  You'll never have a good hit ratio if everyone submits "unique" sql.  Your 
shared pool will never be used right and you'll be using excessive CPU (90% of the time 
it takes to process "insert into t values ( 1 )" is parsing.  If you use "insert into t 
values ( :x )", and bind the value of 1 -- then the next person that runs that insert 
will benefit from your work and run that much faster. 
...
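For quick experiments outside the database, the remove_constants logic above can be sketched in Python. This is a hypothetical port, not Oracle code: it mirrors the PL/SQL behaviour of replacing string literals with '# and collapsing digit runs to a single @, so that statements differing only in literals normalize to the same text.

```python
def remove_constants(query: str) -> str:
    """Normalize a SQL statement the way the PL/SQL function above does:
    string literals become '#', runs of digits become a single @."""
    out = []
    in_quotes = False
    for ch in query:
        if ch == "'" and in_quotes:
            in_quotes = False          # closing quote: keep it
        elif ch == "'" and not in_quotes:
            in_quotes = True           # opening quote: emit '# instead
            out.append("'#")
        if not in_quotes:
            out.append(ch)
    result = "".join(out).translate(str.maketrans("0123456789", "@" * 10))
    # Squeeze runs of @ (and runs of spaces), as the PL/SQL loop does
    for n in range(10, 1, -1):
        result = result.replace("@" * n, "@").replace(" " * n, " ")
    return result.upper()

print(remove_constants("insert into t values ( 123, 'abc' )"))
# -> INSERT INTO T VALUES ( @, '#' )
```

Two statements that differ only in their literals normalize identically, which is exactly why the count(*) in the query above exposes non-bind-variable SQL.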

Advice from Oracle concerning a concrete (12c) example of ORA-04031: unable to allocate 52824 bytes of shared memory ("shared pool",...

This was from a ticket raised for a Data Pump export that consistently failed with:

Processing object type SCHEMA_EXPORT/TABLE/PROCACT_INSTANCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
ORA-39014: One or more workers have prematurely exited.
ORA-39029: worker 1 with process name "DW00" prematurely terminated
ORA-31671: Worker process DW00 had an unhandled exception.
ORA-04031: unable to allocate 52824 bytes of shared memory ("shared pool","unknown object","KTSL subheap","ktsl_load_disp-2")
ORA-06512: at "SYS.KUPW$WORKER", line 2122
ORA-06512: at line 2
The error ORA-4031 on the shared pool can indicate one of two things:
- insufficient space for the shared pool (so insufficient memory in SGA at that time)
OR
- although there is enough memory, it is fragmented and no contiguous chunk can be allocated to satisfy the latest memory request.

Each SGA has a little fragmentation, of course, because of the operations taking place; in your case the fragmentation is represented by the 18M of free memory, which exists in non-contiguous chunks.
This means that the root cause of the error is insufficient contiguous memory to allocate 12312 bytes of shared memory.
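The distinction matters: the total free memory can comfortably exceed the failed request while no single contiguous chunk is big enough. A toy Python illustration of the idea (the chunk sizes are invented for the example; this is not Oracle internals):

```python
# Invented free-list: plenty of memory free in total, but scattered.
free_chunks = [8192, 4096, 8192, 2048]          # bytes, separate chunks

def can_allocate(request: int, chunks: list) -> bool:
    """A single allocation needs ONE chunk >= request; totals don't help."""
    return any(chunk >= request for chunk in chunks)

request = 12312
print(sum(free_chunks))                    # 22528 bytes free overall
print(sum(free_chunks) >= request)         # True: enough memory in total
print(can_allocate(request, free_chunks))  # False: ORA-04031 anyway
```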

ACTION PLAN
==============
1. We see memory_target=2000M. This is too little to support a 12c database, even when it has little activity.
As you may know 1.7G for the SGA was acceptable when speaking of 32bit OSs where it was hardly possible to make use of more than that.
Since you have a 64 bit system, the addressable memory is virtually unlimited.
Furthermore, by using AMM (memory_target>0) this memory is divided between the SGA and the PGA (the user processes) which in fact, depending on the number of connections, can limit the SGA even more.
So the first recommendation is to increase the value of memory_target to 4G for example. There is no ideal value, but this would be a reasonable starting value for tuning.

You can do further tuning of the memory using the view V$MEMORY_TARGET_ADVICE.
The view V$MEMORY_TARGET_ADVICE provides tuning advice for the MEMORY_TARGET initialization parameter:
SQL> SELECT * FROM v$memory_target_advice ORDER BY memory_size;

The dynamic performance view V$MEMORY_DYNAMIC_COMPONENTS shows the current sizes of all dynamically tuned memory components, including the total sizes of the SGA and instance PGA.

You can also use V$MEMORY_RESIZE_OPS which has a circular history buffer of the last 800 memory resize requests.

2. Since AMM is used (you have memory_target>0), make sure to set _shared_pool_reserved_pct to 10 or 15 to ensure that when the Shared Pool grows or shrinks automatically, the Reserved Area changes with it.

SQL> alter system set "_shared_pool_reserved_pct"=15 scope=spfile;
--restart the instance

FYI: By default, Oracle configures a small Reserved Pool (or Reserved Area) inside the Shared Pool.
This memory can be used for operations such as PL/SQL and trigger compilation or for temporary space while loading Java objects.
After the memory allocated from the Reserved Pool is freed, it returns to the Reserved Pool.
By default, 5% of the Shared Pool is set aside as the Reserved Pool; that percentage is what the hidden parameter _shared_pool_reserved_pct controls.
In some application environments, 5% is too small.

3. Set shared_pool_size=500M or even more. This will represent a minimum amount of memory that will always be available to the shared pool.
If more memory is needed at any moment, if it is available, it will be allocated to the shared pool over the 500M.
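Following the same pattern as step 2, the floor can be set like this (a sketch; pick a size and scope appropriate to the instance):

```sql
alter system set shared_pool_size=500M scope=both;
```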

Implement the above steps, monitor the database and let us know the result.

RMAN-04004: error from recovery catalog database: ORA-01017: invalid username/password; logon denied

oracle@host:/export/home/ora> rman target / catalog catowner/catpass@catdb

Recovery Manager: Release 11.2.0.2.0 - Production on Thu Jun 12 16:51:03 2014

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: XXXXRAP1 (DBID=3557010742)
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00554: initialization of internal recovery manager package failed
RMAN-04004: error from recovery catalog database: ORA-01017: invalid username/password; logon denied

This is normally very straightforward and self-explanatory.
The puzzle here is that a connection from SQL*Plus works!

sqlplus catowner/catpass@catdb

SQL*Plus: Release 11.2.0.2.0 Production on Thu Jun 12 16:53:18 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL>

Eventually worked out that the rman binary in the database's ORACLE_HOME on the source machine needed relinking!

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk irman 

ORA-00230: operation disallowed: snapshot control file enqueue unavailable

waiting for snapshot control file enqueue
waiting for snapshot control file enqueue
waiting for snapshot control file enqueue
waiting for snapshot control file enqueue
cannot make a snapshot control file
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03008: error while performing automatic resync of recovery catalog
ORA-00230: operation disallowed: snapshot control file enqueue unavailable

From the doc...

When RMAN needs to back up or resynchronize from the control file, it first creates a snapshot or consistent image of the control file.
If one RMAN job is already backing up the control file while another needs to create a new snapshot control file, then you may see the following message:

waiting for snapshot controlfile enqueue

Under normal circumstances, a job that must wait for the control file enqueue waits for a brief interval and then successfully obtains the enqueue.
RMAN makes up to five attempts to get the enqueue and then fails the job.
The conflict is usually caused when two jobs are both backing up the control file, and the job that first starts backing up the control file waits for service from the media manager.

To determine which job is holding the conflicting enqueue:

    After you see the first message stating "RMAN-08512: waiting for snapshot controlfile enqueue", start a new SQL*Plus session on the target database:

    % sqlplus 'SYS/oracle@trgt AS SYSDBA'

    Execute the following query to determine which job is causing the wait:

    SELECT s.SID, USERNAME AS "User", PROGRAM, MODULE, 
           ACTION, LOGON_TIME "Logon", l.* 
    FROM V$SESSION s, V$ENQUEUE_LOCK l
    WHERE l.SID = s.SID
    AND l.TYPE = 'CF'
    AND l.ID1 = 0
    AND l.ID2 = 2;

    You should see output similar to the following (the output in this example has been truncated):

    SID User Program              Module                    Action           Logon
    --- ---- -------------------- ------------------------- ---------------- ---------
      9 SYS  rman@h13 (TNS V1-V3) backup full datafile: c1  0000210 STARTED  21-JUN-01


After you have determined which job is creating the enqueue, you can do one of the following:

    Wait until the job creating the enqueue completes
    Cancel the current job and restart it after the job creating the enqueue completes
    Cancel the job creating the enqueue

Commonly, enqueue situations occur when a job is writing to a tape drive, but the tape drive is waiting for a new cassette to be inserted.
If you start a new job in this situation, then you will probably receive the enqueue message because the first job cannot complete until the new tape is loaded.

or more directly...

set lines 2000
col killer     for a70
col program    for a20
col module     for a20
col action     for a20
col logon_time for a20
select 'alter system disconnect session '''||s.sid||','||s.serial#||''' immediate;' killer
,      username
,      program
,      module
,      action
,      logon_time
from   v$session s
,      v$enqueue_lock l
where  l.sid  = s.sid
and    l.type = 'CF'
and    l.id1  = 0
and    l.id2  = 2
/

RMAN-03002: failure of backup command at 05/19/2015 12:25:47

If using Cygwin...

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 05/19/2015 12:25:47
ORA-19554: error allocating device, device type: SBT_TAPE, device name:
ORA-27000: skgfqsbi: failed to initialize storage subsystem (SBT) layer
OSD-02534: Message 2534 not found;  product=RDBMS; facility=SOSD
  • Check the tdpo.opt file is where you think it is!
  • Check the path to the opt file looks like this (needs to be Windows format):
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' FORMAT   'LOG_%d_t%t_s%s_u%U' PARMS  'ENV=(TDPO_OPTFILE=E:\oracle\KITRYD\admin\tdpo.opt)';
  • Check the content of the opt file is Windows format paths!

RMAN-20033: control file SEQUENCE# too low

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03008: error while performing automatic resync of recovery catalog
RMAN-20033: control file SEQUENCE# too low

From forums.oracle.com
This error generally occurs when an archivelog backup runs while the RMAN database backup is in progress and controlfile autobackup is configured.
The database backup itself is usually successful; it is just the controlfile backup that fails.

RMAN errors after crosscheck backup and delete noprompt obsolete and delete noprompt expired backup

RMAN-06207: WARNING: 3921 objects could not be deleted for SBT_TAPE channel(s) due
RMAN-06208:          to mismatched status.  Use CROSSCHECK command to fix status
RMAN-06210: List of Mismatched objects
RMAN-06211: ==========================
RMAN-06212:   Object Type   Filename/Handle
RMAN-06213: --------------- ---------------------------------------------------
RMAN-06214: Backup Piece    DB_SID_t20140316_s263889_umhp3bpa7_1_1
...

Use the FORCE option with the DELETE command.
The FORCE option tells RMAN to clean the information out of the catalog whether or not the backup pieces can be found on the media.

delete force noprompt expired backup;
delete force noprompt obsolete;

Database trigger to capture ORA errors

From ora-ssn.blogspot.be

CREATE TABLE stats$error_log (
        err_dt          TIMESTAMP,
        db_user         VARCHAR2(30),
        msg_stack       VARCHAR2(2000),
        sqltxt          VARCHAR2(1000))
tablespace users;

Now, create a trigger on the database server.

CREATE OR REPLACE TRIGGER log_server_errors
  AFTER SERVERERROR
  ON DATABASE
DECLARE
          v_sqltext VARCHAR2(1000);
          nl_sqltext ora_name_list_t;
  BEGIN
          -- Capture entire error text
          FOR i IN 1 .. ora_sql_txt(nl_sqltext) LOOP
            v_sqltext := v_sqltext || nl_sqltext(i);
          END LOOP;
          INSERT INTO STATS$ERROR_LOG
          (err_dt, db_user, msg_stack, sqltxt)
          VALUES
          (systimestamp,
           sys.login_user,
           dbms_utility.format_error_stack, v_sqltext);
  END log_server_errors;
/

ORA-19809: limit exceeded for recovery files

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 07/15/2013 10:10:59
ORA-19809: limit exceeded for recovery files
ORA-19804: cannot reclaim 274716160 bytes disk space from 1468006400 limit


    ORA-19809: limit exceeded for recovery files
    Cause: The limit for recovery files specified by the db_recovery_file_dest_size was exceeded.

    Action: The error is accompanied by 19804. See message 19804 for further details

    ORA-19804: cannot reclaim string bytes disk space from string limit
    Cause: Oracle cannot reclaim disk space of specified bytes from the DB_RECOVERY_FILE_DEST_SIZE limit.

    Action: There are five possible solutions:
       1) Take frequent backup of recovery area using RMAN.
       2) Consider changing RMAN retention policy.
       3) Consider changing RMAN archivelog deletion policy.
       4) Add disk space and increase DB_RECOVERY_FILE_DEST_SIZE.
       5) Delete files from recovery area using RMAN.

The solution here is to increase the value of db_recovery_file_dest_size, but the error might also indicate a problem with tapes if the archivelogs are being backed up to tape via tdpo.

SYS@DN7> alter system set db_recovery_file_dest_size=4000M scope=both;

System altered.

Looking into it further, saw RMAN parameters missing...

CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 14 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF;
CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE 'SBT_TAPE' TO 'LOG_%d_controlfile_%F.rman';
CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 1 BACKUP TYPE TO BACKUPSET;

RMAN-00554: initialization of internal recovery manager package failed

Getting this error after migrating a database from one machine to another and attaching it to a nice new 12c catalog on the new server:

Recovery Manager: Release 11.2.0.2.0 - Production on Sat Jun 21 13:16:24 2014

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: SPMSUPP1 (DBID=3026014394)
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00554: initialization of internal recovery manager package failed
RMAN-04004: error from recovery catalog database: ORA-01017: invalid username/password; logon denied

We cannot connect to the catalog via rman but there is no problem if connecting via SQL*Plus!
Solution: relink the rman executable.

Using AIX commands genld, genkld and slibclean to avoid library file locking errors (libjox) (Doc ID 739963.1)

When installing interim (one-off) patches, CPU patches or patchsets, you may encounter errors about files being locked or OUI/OPatch being unable to copy files,
even though the databases, listeners and all other Oracle processes associated with the ORACLE_HOME being patched were stopped.
This can be the result of a process that needs to be terminated or a file that needs to be unloaded from the system cache.
Run the following commands to clean up...

genld -l | grep <ORACLE_HOME>

If genld returns data then a currently executing process has something open in the ORACLE_HOME directory, therefore terminate the process as required/recommended.

genkld | grep <ORACLE_HOME>

If the genkld command returns a list of shared objects currently loaded onto the OS system cache then please remove the entries from the OS system cache by running the slibclean command as root user:

/usr/sbin/slibclean

ORA-20011: Approximate NDV failed: ORA-29913: error in executing ODCIEXTTABLEOPEN callout

This seems to be related to orphaned Data Pump external tables. It generally shows up when statistics jobs (dbms_stats) are running overnight.

  • Check no datapump jobs are running
select * from dba_datapump_jobs;
  • Find orphaned tables
set lines 2000
select owner
,      object_name
,      object_type
,      status
,      to_char(created,'dd-mon-yyyy hh24:mi:ss') created
,      to_char(last_ddl_time,'dd-mon-yyyy hh24:mi:ss') last_ddl_time
from   dba_objects
where  1=1
and    object_name like 'ET$%';

Check that any rows seen are actually external tables

select owner
,      table_name
,      default_directory_name
,      access_type
from   dba_external_tables
order  by 1,2;
  • Drop the tables if they are not currently being used (no datapump jobs running)
drop table &&owner..&table_name;