Oracle errors

From dbawiki
Revision as of 10:45, 2 May 2014

ORA-00600

Master Note for Diagnosing ORA-600 (Doc ID 1092832.1)

ORA-27092: size of file exceeds file size limit of the process

Cause: An attempt was made to open a file that exceeds the process's file size limit (ulimit). The additional information shows the current limit and the size of the file (both in logical blocks).
Action: Increase the process's file size limit (ulimit) and retry.
truncate table pibadm.test_ph
                      *
Error at line 1 :
ORA-00604: error occurred at recursive SQL level 1
ORA-01116: error in opening database file 2
ORA-01110: datafile 2 : '/opt/u01/data/PIBQA/UNDO_01.dbf'
ORA-27092: size of file exceeds file size limit of the process
Additional information: 262143
Additional information: 2621441

Check the fsize limit for this user in /etc/security/limits (AIX).
Compare it with another user, say oracle; it is probably -1 (unlimited).
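On the OS side, a quick sketch of checking and raising the limit for the current session (on AIX the per-user fsize stanza lives in /etc/security/limits; on Linux use ulimit and limits.conf):

```shell
# Show the current per-process file size limit (512-byte blocks, or "unlimited")
ulimit -f

# Raise it for this session before retrying the operation
# (raising a hard limit may need root, or an fsize change in /etc/security/limits on AIX)
ulimit -f unlimited
ulimit -f
```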

UDE-31623: operation generated ORACLE error 31623

When using Data Pump, the following errors appear almost immediately:

UDE-31623: operation generated ORACLE error 31623
ORA-31623: a job is not attached to this session via the specified handle
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3326
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4551
ORA-06512: at line 1

There is a note about this on Metalink (Doc ID 1080775.1) which suggests that the Streams Pool size needs setting.

For this Data Pump error, you will need to configure the database with some Streams Pool.

Run the following query to find out how much STREAMS_POOL_SIZE has been allocated:

 select * from v$sgainfo;
 ...
 Streams Pool Size                               0 Yes

Data Pump now uses Advanced Queuing (AQ) internally so that it can handle job stops/starts/restarts. However, AQ uses the Streams Pool, and this database currently has STREAMS_POOL_SIZE set to zero.

Manually set the STREAMS_POOL_SIZE (using ALTER SYSTEM or by changing the value in the PFILE/SPFILE), restart the database and re-attempt the Data Pump export.

Alternately, turn on Automatic Shared Memory management (ASMM) by setting SGA_TARGET or MEMORY_TARGET.
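A minimal sketch of both options (the sizes here are illustrative examples, not values taken from the note):

```sql
-- Option 1: set an explicit streams pool (the documentation quote below
-- recommends at least 10M); takes effect after a restart.
ALTER SYSTEM SET streams_pool_size = 64M SCOPE = SPFILE;

-- Option 2: let ASMM size the pools automatically instead.
-- ALTER SYSTEM SET sga_target = 2G SCOPE = SPFILE;
```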

From Oracle Documentation

Setting the Size Of the Buffer Cache In a Streams Environment

Oracle Data Pump uses Streams functionality to communicate between processes. If the SGA_TARGET initialization parameter is set, then the STREAMS_POOL_SIZE initialization parameter is automatically set to a reasonable value.

If the SGA_TARGET initialization parameter is not set and the STREAMS_POOL_SIZE initialization parameter is not defined, then the size of the streams pool automatically defaults to 10% of the size of the shared pool.

When the streams pool is created, the required SGA memory is taken from memory allocated to the buffer cache, reducing the size of the cache to less than what was specified by the DB_CACHE_SIZE initialization parameter. This means that if the buffer cache was configured with only the minimal required SGA, then Data Pump operations may not work properly. A minimum size of 10M is recommended for STREAMS_POOL_SIZE in order to ensure successful Data Pump operations.

ORA-00020: maximum number of processes (150) exceeded

Check the current resource limits with:

set lines 300
select * from v$resource_limit;

Unfortunately this limit cannot be increased without a restart.
It may be necessary to increase the limit with

alter system set processes=500 scope=spfile;

followed by a shutdown/startup.
Or it may be that some user is running away with the connections.
Check who is running what with: [See_what_statements_all_users_are_running]
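To spot a runaway user quickly, a simple sketch counting sessions per username:

```sql
-- Sessions per user; background processes show up with a NULL username
select username, count(*)
from   v$session
group  by username
order  by count(*) desc;
```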

ORA-24247: network access denied by access control list (ACL)

Cause: No access control list (ACL) has been assigned to the target host or the privilege necessary to access the target host has not been granted to the user in the access control list.

Action: Ensure that an access control list (ACL) has been assigned to the target host and the privilege necessary to access the target host has been granted to the user.

Your application will encounter an ORA-24247 error if it relies on one of the network packages and no proper ACL has been created. In 11g, an ACL for the application user is mandatory when using any of the following packages:

  • UTL_TCP
  • UTL_SMTP
  • UTL_MAIL
  • UTL_HTTP
  • UTL_INADDR

Here is a very good explanation of how to set up the necessary ACLs: [whitehorses.nl]
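For 11g, a minimal sketch of creating and assigning an ACL with DBMS_NETWORK_ACL_ADMIN; the user APPUSER, the host smtp.example.com and the file name app_network.xml are made-up examples:

```sql
BEGIN
  -- Create the ACL and grant the "connect" privilege to APPUSER (example name)
  DBMS_NETWORK_ACL_ADMIN.CREATE_ACL(
    acl         => 'app_network.xml',
    description => 'Outbound network access for APPUSER',
    principal   => 'APPUSER',
    is_grant    => TRUE,
    privilege   => 'connect');
  -- Assign the ACL to the target host (and port range, here SMTP)
  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL(
    acl        => 'app_network.xml',
    host       => 'smtp.example.com',
    lower_port => 25,
    upper_port => 25);
  COMMIT;
END;
/
```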

ORA-12162: TNS:net service name is incorrectly specified

Probably nothing to do with TNS. Check that your ORACLE_SID is set correctly!

ORA-00054: resource busy and acquire with NOWAIT specified

Find the session holding the lock:

SELECT O.OBJECT_NAME, S.SID, S.SERIAL#, P.SPID, S.PROGRAM,S.USERNAME,
S.MACHINE,S.PORT , S.LOGON_TIME,SQ.SQL_FULLTEXT 
FROM V$LOCKED_OBJECT L, DBA_OBJECTS O, V$SESSION S, 
V$PROCESS P, V$SQL SQ 
WHERE L.OBJECT_ID = O.OBJECT_ID 
AND L.SESSION_ID = S.SID AND S.PADDR = P.ADDR 
AND S.SQL_ADDRESS = SQ.ADDRESS;
select object_name, s.sid, s.serial#, p.spid 
from v$locked_object l, dba_objects o, v$session s, v$process p
where l.object_id = o.object_id and l.session_id = s.sid and s.paddr = p.addr;

then

alter system kill session 'sid,serial#';

ORA-12514: TNS:listener does not currently know of service

lsnrctl status <listener_name>

Been altering tnsnames.ora or listener.ora?

  • Check the local_listener parameter

Try resetting it to itself. Strange, but if you are not using port 1521 this can work wonders!
Assuming the listener is called LISTENER_WM9T:

alter system set local_listener=LISTENER_WM9T scope=both;
  • Try to connect with the service name, or directly with
sqlplus 'sys/*******'@<host>:<port>/<SID> as sysdba

Excellent description of how Oracle TNS listeners work and a checklist on how to fix this error

ORA-12547: TNS lost contact

If

sqlplus <user>/<pass>

fails with above error, try

sqlplus <user>/<pass>@<db connection>

If this works, it is most probably a permissions error.
Re-run the

$ORACLE_HOME/root.sh

script that was run as part of the original installation. This will reset the permissions on some important files.

ORA-04030: out of process memory when trying to allocate 2520 bytes

expdp fails with ORA-04030: out of process memory when trying to allocate 2520 bytes (pga heap,koh-kghu call heap). Useful diagnostics and possible remedies:

select value from v$pgastat where name='maximum PGA allocated';

SHOW PARAMETER TARGET

ORA-04030: out of process memory when trying to allocate 2520 bytes (pga heap,koh-kghu call heap)

alter system reset pga_aggregate_target scope=spfile;

show sga

select * from v$pgastat;

show parameter process

select sum(value)/1024/1024 mb
from   v$sesstat s, v$statname n
where  n.statistic# = s.statistic#
and    name = 'session pga memory';

show parameter WORKAREA

select value from v$pgastat where name='maximum PGA allocated';

alter system set memory_max_target=10G scope=spfile;

alter system set memory_target=3G scope=both;

select * from v$memory_target_advice order by memory_size;

ORA-04031: unable to allocate 32 bytes of shared memory ("shared pool","select name,online$,contents...","KGLH0^e5705bfc","kglHeapInitialize:temp")

ORA-04031: unable to allocate  bytes of shared memory ("","","","") 
ORA-04031: unable to allocate 32 bytes of shared memory ("shared pool","select name,online$,contents...","KGLH0^e5705bfc","kglHeapInitialize:temp") 
Incident details in: /oracle/diag/rdbms/mape/MAPE/incident/incdir_86754/MAPE_dm00_14221536_i86754.trc 
Wed Jul 31 20:42:50 2013 
Dumping diagnostic data in directory=[cdmp_20130731204250], requested by (instance=1, osid=14221536 (DM00)), summary=[incident=86753]. 
Wed Jul 31 20:42:51 2013 
Sweep [inc][86754]: completed 
Sweep [inc][86753]: completed 
Sweep [inc2][86753]: completed 
Wed Jul 31 20:42:57 2013 
Use ADRCI or Support Workbench to package the incident. 
See Note 411.1 at My Oracle Support for error and packaging details. 
Dumping diagnostic data in directory=[cdmp_20130731204257], requested by (instance=1, osid=14221536 (DM00)), summary=[incident=86754]. 
Wed Jul 31 20:43:35 2013 
Errors in file /oracle/diag/rdbms/mape/MAPE/trace/MAPE_dw02_40567024.trc  (incident=82993): 
ORA-04031: unable to allocate 32 bytes of shared memory ("shared pool","select order#,columns,types ...","KGLH0^470434f8","kglHeapInitialize:temp")

The database had already crashed, so there was nothing to do but:

(0) MAPE ora@x025:/home/ora> sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Fri Aug 2 14:26:20 2013

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected.
SYS@MAPE> alter database flush shared_pool;
alter database flush shared_pool
*
ERROR at line 1:
ORA-01012: not logged on
Process ID: 0
Session ID: 0 Serial number: 0


SYS@MAPE> shutdown abort
ORACLE instance shut down.
SYS@MAPE> startup
ORACLE instance started.

Total System Global Area  734892032 bytes
Fixed Size                  2225128 bytes
Variable Size             398461976 bytes
Database Buffers          318767104 bytes
Redo Buffers               15437824 bytes
Database mounted.
Database opened.

SYS@MAPE> show parameter pool

NAME                                 TYPE                           VALUE
------------------------------------ ------------------------------ ------------------------------
buffer_pool_keep                     string
buffer_pool_recycle                  string
global_context_pool_size             string
java_pool_size                       big integer                    0
large_pool_size                      big integer                    0
olap_page_pool_size                  big integer                    0
shared_pool_reserved_size            big integer                    18M
shared_pool_size                     big integer                    208M
streams_pool_size                    big integer                    0
SYS@MAPE> show sga

Total System Global Area  734892032 bytes
Fixed Size                  2225128 bytes
Variable Size             385879064 bytes
Database Buffers          331350016 bytes
Redo Buffers               15437824 bytes
SYS@MAPE> exit

For reference, the cause could well be that the applications running on the database are not using bind variables.
From asktom.oracle.com:

...
Just to give you a tiny idea of how huge of a difference this 
can make performance wise, you only need to run a very small 
test:

tkyte@TKYTE816> alter system flush shared_pool;
System altered.

tkyte@TKYTE816> declare
  2      type rc is ref cursor;
  3      l_rc rc;
  4      l_dummy all_objects.object_name%type;
  5      l_start number default dbms_utility.get_time;
  6  begin
  7      for i in 1 .. 1000
  8      loop
  9          open l_rc for
 10          'select object_name
 11             from all_objects
 12            where object_id = ' || i;
 13          fetch l_rc into l_dummy;
 14          close l_rc;
 15      end loop;
 16      dbms_output.put_line
 17      ( round( (dbms_utility.get_time-l_start)/100, 2 ) ||
 18        ' seconds...' );
 19  end;
 20  /
14.86 seconds...

PL/SQL procedure successfully completed.

tkyte@TKYTE816> declare
  2      type rc is ref cursor;
  3      l_rc rc;
  4      l_dummy all_objects.object_name%type;
  5      l_start number default dbms_utility.get_time;
  6  begin
  7      for i in 1 .. 1000
  8      loop
  9          open l_rc for
 10          'select object_name
 11             from all_objects
 12            where object_id = :x'
 13          using i;
 14          fetch l_rc into l_dummy;
 15          close l_rc;
 16      end loop;
 17      dbms_output.put_line
 18      ( round( (dbms_utility.get_time-l_start)/100, 2 ) ||
 19        ' seconds...' );
 20  end;
 21  /
1.27 seconds...

PL/SQL procedure successfully completed.

That is pretty dramatic.  The fact is that not only does this 
execute much faster (we spent more time PARSING our queries then 
actually EXECUTING them!) it will let more users use your system 
simultaneously.
...

and from a linked question:

...
Library cache latch contention is typically caused by NOT using bind variables.  It is due 
to excessive parsing of statements.

One way to see if this might be the case in your situation is to run a script like:


create table t1 as select sql_text from v$sqlarea;

alter table t1 add sql_text_wo_constants varchar2(1000);

create or replace function 
remove_constants( p_query in varchar2 ) return varchar2
as
    l_query long;
    l_char  varchar2(1);
    l_in_quotes boolean default FALSE;
begin
    for i in 1 .. length( p_query )
    loop
        l_char := substr(p_query,i,1);
        if ( l_char = '''' and l_in_quotes )
        then
            l_in_quotes := FALSE;
        elsif ( l_char = '''' and NOT l_in_quotes )
        then
            l_in_quotes := TRUE;
            l_query := l_query || '''#';
        end if;
        if ( NOT l_in_quotes ) then
            l_query := l_query || l_char;
        end if;
    end loop;
    l_query := translate( l_query, '0123456789', '@@@@@@@@@@' );
    for i in 0 .. 8 loop
        l_query := replace( l_query, lpad('@',10-i,'@'), '@' );
        l_query := replace( l_query, lpad(' ',10-i,' '), ' ' );
    end loop;
    return upper(l_query);
end;
/
update t1 set sql_text_wo_constants = remove_constants(sql_text);

select sql_text_wo_constants, count(*)
  from t1
 group by sql_text_wo_constants
having count(*) > 100
 order by 2
/


The output of that last query will show you statements that are identical in the shared 
pool after all numbers and character string constants have been removed.  These 
statements -- and more importantly their counts -- are the potential bottlenecks.  In 
addition to causing the contention, they will be HUGE cpu consumers.  

If you discover your applications do not use bind variables -- you must have this 
corrected.  You'll never have a good hit ratio if everyone submits "unique" sql.  Your 
shared pool will never be used right and you'll be using excessive CPU (90% of the time 
it takes to process "insert into t values ( 1 )" is parsing.  If you use "insert into t 
values ( :x )", and bind the value of 1 -- then the next person that runs that insert 
will benefit from your work and run that much faster. 
...

RMAN-20033: control file SEQUENCE# too low

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03008: error while performing automatic resync of recovery catalog
RMAN-20033: control file SEQUENCE# too low

From forums.oracle.com:
This error occurs occasionally. I believe the cause is that if an archivelog backup runs during a database backup, the controlfile is backed up by both, which may (but not always) result in these errors. The database backup is usually successful and it is just the controlfile backup that complains.

Database trigger to capture ORA errors

From ora-ssn.blogspot.be

CREATE TABLE stats$error_log (
        err_dt          TIMESTAMP,
        db_user         VARCHAR2(30),
        msg_stack       VARCHAR2(2000),
        sqltxt          VARCHAR2(1000))
tablespace users;

Now, create a trigger on the database server.

CREATE OR REPLACE TRIGGER log_server_errors
  AFTER SERVERERROR
  ON DATABASE
DECLARE
          v_sqltext VARCHAR2(1000);
          nl_sqltext ora_name_list_t;
  BEGIN
          -- Capture entire error text
          FOR i IN 1 .. ora_sql_txt(nl_sqltext) LOOP
            v_sqltext := v_sqltext || nl_sqltext(i);
          END LOOP;
          INSERT INTO STATS$ERROR_LOG
          (err_dt, db_user, msg_stack, sqltxt)
          VALUES
          (systimestamp,
           sys.login_user,
           dbms_utility.format_error_stack, v_sqltext);
  END log_server_errors;
/

ORA-19809: limit exceeded for recovery files

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 07/15/2013 10:10:59
ORA-19809: limit exceeded for recovery files
ORA-19804: cannot reclaim 274716160 bytes disk space from 1468006400 limit


    ORA-19809: limit exceeded for recovery files
    Cause: The limit for recovery files specified by the db_recovery_file_dest_size was exceeded.

    Action: The error is accompanied by 19804. See message 19804 for further details

    ORA-19804: cannot reclaim string bytes disk space from string limit
    Cause: Oracle cannot reclaim disk space of specified bytes from the DB_RECOVERY_FILE_DEST_SIZE limit.

    Action: There are five possible solutions:
       1) Take frequent backup of recovery area using RMAN.
       2) Consider changing RMAN retention policy.
       3) Consider changing RMAN archivelog deletion policy.
       4) Add disk space and increase DB_RECOVERY_FILE_DEST_SIZE.
       5) Delete files from recovery area using RMAN.

The solution here is to increase the value of db_recovery_file_dest_size, but it might also indicate a problem with tapes if the archivelogs are being backed up to tape via TDPO.

SYS@DN7> alter system set db_recovery_file_dest_size=4000M scope=both;

System altered.
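To see how full the recovery area actually is before and after resizing, the standard views can be queried (a sketch):

```sql
-- Overall usage against DB_RECOVERY_FILE_DEST_SIZE
select name, space_limit, space_used, space_reclaimable
from   v$recovery_file_dest;

-- Breakdown per file type (archived logs, backup pieces, ...)
select file_type, percent_space_used, percent_space_reclaimable
from   v$flash_recovery_area_usage;
```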

Looking into it further, I saw RMAN parameters were missing...

CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 14 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF;
CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE 'SBT_TAPE' TO 'LOG_%d_controlfile_%F.rman';
CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 1 BACKUP TYPE TO BACKUPSET;