
Showing posts with label Oracle. Show all posts

Friday, January 26, 2024

How to run scheduler job on specific instance in RAC database

Here I'll show you how to configure a scheduler job to run on a specific instance in a RAC database.
create a test job on the RAC database:
begin
	sys.dbms_scheduler.create_job(
	job_name        => 'SYS.TEST_SCHEDULER_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'declare var1 varchar2(1);
begin
	select * into var1 from dual;
	dbms_output.put_line(var1);
end;',
    start_date      => to_date('26-01-2024 00:00:00','dd-mm-yyyy hh24:mi:ss'),
	repeat_interval => 'Freq=Secondly;Interval=5',
	end_date        => to_date(null),
	job_class       => 'DEFAULT_JOB_CLASS',
	enabled         => true,
	auto_drop       => false,
	comments        => '');
end;
/
The scheduler job attributes INSTANCE_ID and INSTANCE_STICKINESS control job placement. INSTANCE_ID indicates the instance on which the job should run. INSTANCE_STICKINESS set to TRUE (the default) means the job keeps running on the node where it first started, until that node becomes heavily overloaded or unavailable; FALSE means the job can run on any available node. These attributes only matter in RAC; they have no effect on a standalone server.
check the job with its instance_id and instance_stickiness:
select OWNER,JOB_NAME,INSTANCE_STICKINESS,INSTANCE_ID from dba_scheduler_jobs where JOB_NAME='TEST_SCHEDULER_JOB';
change instance_id for job:
begin 
 dbms_scheduler.set_attribute(name => 'TEST_SCHEDULER_JOB' ,attribute=>'INSTANCE_ID', value=>'2');
end;
/
disable instance_stickiness attribute for job:
begin 
 dbms_scheduler.set_attribute(name => 'TEST_SCHEDULER_JOB' ,attribute=>'INSTANCE_STICKINESS', value=>FALSE);
end;
/
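To verify on which instance each run actually executed, the run history can be checked; a sketch, using the INSTANCE_ID column of dba_scheduler_job_run_details:

```sql
-- recent runs of the test job and the instance each run executed on
select job_name, instance_id, status, actual_start_date
  from dba_scheduler_job_run_details
 where job_name = 'TEST_SCHEDULER_JOB'
 order by actual_start_date desc;
```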

Saturday, December 23, 2023

How to restore an autobackup controlfile or spfile older than 7 days?

When trying to restore the spfile or controlfile from autobackup, the following error appears:

channel ORA_DISK_1: no autobackup in 7 days found

RMAN> restore spfile from autobackup;

Starting restore at 23-DEC-23

using channel ORA_DISK_1


channel ORA_DISK_1: looking for autobackup on day: 20231223

channel ORA_DISK_1: looking for autobackup on day: 20231222

channel ORA_DISK_1: looking for autobackup on day: 20231221

channel ORA_DISK_1: looking for autobackup on day: 20231220

channel ORA_DISK_1: looking for autobackup on day: 20231219

channel ORA_DISK_1: looking for autobackup on day: 20231218

channel ORA_DISK_1: looking for autobackup on day: 20231217

channel ORA_DISK_1: no autobackup in 7 days found

RMAN-00571: ===========================================================

RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============

RMAN-00571: ===========================================================

RMAN-03002: failure of restore command at 12/23/2023 18:41:09

RMAN-06172: no autobackup found or specified handle is not a valid copy or piece


When you are using RESTORE SPFILE FROM AUTOBACKUP or RESTORE CONTROLFILE FROM AUTOBACKUP,

RMAN begins the search on the current day, or on the day specified with the SET UNTIL clause.

If no autobackup is found on the current or SET UNTIL day, RMAN checks the previous day, and so on.

In this way RMAN by default searches for an autobackup within 7 days of the current or SET UNTIL day.


Solution:

If you want to extend the autobackup search beyond 7 days, use the MAXDAYS option
with the RESTORE command.

RMAN>restore spfile from autobackup maxdays 30;
or
RMAN>restore controlfile from autobackup maxdays 30;

In these cases autobackup searches will be performed up to 30 days from the current or SET UNTIL day.
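MAXDAYS can also be combined with SET UNTIL for point-in-time restores; a sketch, with an illustrative date:

```
RMAN> run {
  set until time "to_date('01-12-2023 12:00:00','dd-mm-yyyy hh24:mi:ss')";
  restore controlfile from autobackup maxdays 30;
}
```

Here RMAN starts looking for an autobackup on 01-12-2023 and walks back up to 30 days from that date.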

Friday, August 4, 2023

Exadata: after disk replacement, ASM disk remains with mount_status CLOSED and header_status CANDIDATE

One of the physical disks on an Exadata cell was dropped for replacement. After the engineer replaced the physical disk with a new one, the v$asm_disk view showed MOUNT_STATUS as CLOSED and HEADER_STATUS as CANDIDATE:

select g.NAME,
       d.disk_number dsk_num,
       d.mount_status,
       d.header_status,
       d.OS_MB,
       d.path
 from v$asm_disk d, v$asm_diskgroup g
 where
      d.GROUP_NUMBER=g.GROUP_NUMBER
      and d.disk_number = 26;
 

NAME           DSK_NUM MOUNT_STATUS HEADER_STATUS      OS_MB PATH
----------- ---------- ------------ ------------- ---------- ------------------------------------------------
RECO           26      CACHED       MEMBER            582080  o/192.168.10.17;192.168.10.18/RECO_FD_02_x81cel04
DATA           26      CLOSED       CANDIDATE         5217280 o/192.168.10.13;192.168.10.14/DATA_FD_02_x81cel02
Solution: the problem is resolved by adding the disk back to the relevant disk group, using the PATH value from the query above.
The commands must be performed while connected to the ASM instance as SYSASM.
First of all, check that no rebalance process is already running; if none is, add the disk manually.
check that no asm rebalance is running:
sqlplus / as sysasm

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Aug 4 16:21:21 2023
Version 19.17.0.0.0

Copyright (c) 1982, 2022, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.17.0.0.0

SQL> select * from v$asm_operation;

no rows selected

SQL>
add disk manually:

alter diskgroup DATA add disk 'o/192.168.10.13;192.168.10.14/DATA_FD_02_x81cel02' rebalance power 4;
you can check the progress of the rebalance again in v$asm_operation.
After the rebalance finishes, the disk statuses must be CACHED and MEMBER.
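A more targeted progress query on v$asm_operation might look like this (a sketch; EST_MINUTES is the estimated minutes remaining):

```sql
-- rebalance progress: work done so far vs. estimated total
select group_number, operation, state, power,
       sofar, est_work, est_minutes
  from v$asm_operation;
```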
 
select g.NAME,
       d.disk_number dsk_num,
       d.mount_status,
       d.header_status,
       d.OS_MB,
       d.path
 from v$asm_disk d, v$asm_diskgroup g
 where
      d.GROUP_NUMBER=g.GROUP_NUMBER
      and d.disk_number = 26;
 

NAME           DSK_NUM MOUNT_STATUS HEADER_STATUS      OS_MB PATH
----------- ---------- ------------ ------------- ---------- ------------------------------------------------
RECO           26      CACHED       MEMBER            582080  o/192.168.10.17;192.168.10.18/RECO_FD_02_x81cel04
DATA           26      CACHED       MEMBER            5217280 o/192.168.10.13;192.168.10.14/DATA_FD_02_x81cel02

Saturday, May 13, 2023

Oracle Database monitor tablespace script

With this script you can monitor tablespace sizes in an Oracle database. The script returns the following columns:

Tablespace - tablespace name

Size MB - currently allocated space of the tablespace, in megabytes

Free MB - free space within the currently allocated space, in megabytes

Max MB - maximum size to which the tablespace can extend, in megabytes

Maximum free space in MB - free space remaining until the tablespace is completely full; this is the real value to monitor, because it shows the true free space in the tablespace, in megabytes

Used MB - space used by segments out of the allocated space, in megabytes

Used % - percentage of used space in the tablespace (relative to its maximum size); this is also a real monitoring value

Status - based on common tablespace-usage thresholds: 85% - caution, 90% - alarm, 95% - critical; the script's Status column is driven by these values.


(
select df.tablespace_name "Tablespace",
       round(df.tablespace_size / 1024 / 1024, 2) "Size MB",
       round(nvl(fs.free_space, 0) / 1024 / 1024, 2) "Free MB",
       round(df.tablespace_maxsize / 1024 / 1024, 2) "Max MB",
       round((df.tablespace_maxsize - (df.tablespace_size - round(nvl(fs.free_space, 0), 2)))
       / 1024 / 1024,2) "Maximum free space in MB",
       round((df.tablespace_size / 1024 / 1024 - fs.free_space / 1024 / 1024),2) "Used MB",     
       to_char(round((df.tablespace_size - round(nvl(fs.free_space, 0), 2)) /
                     df.tablespace_maxsize * 100,
                     2),
               '990D99') || ' %' "Used %",
       case
         when round((df.tablespace_size - round(nvl(fs.free_space, 0), 2)) /
                    df.tablespace_maxsize * 100,
                    2) > 95 then
          'CRITICAL - 95%'
         when round((df.tablespace_size - round(nvl(fs.free_space, 0), 2)) /
                    df.tablespace_maxsize * 100,
                    2) > 90 then
          'ALARM - 90%'
         when round((df.tablespace_size - round(nvl(fs.free_space, 0), 2)) /
                    df.tablespace_maxsize * 100,
                    2) > 85 then
          'CAUTION - 85%'
         else
          'OK < 85%'
       end "Status"
  from (select 
               tablespace_name,
               sum(bytes) tablespace_size,
               sum(decode(maxbytes, 0, bytes, null, bytes, maxbytes)) tablespace_maxsize
          from dba_data_files
         group by tablespace_name) df,
       (select 
               tablespace_name, sum(bytes) free_space
          from dba_free_space
         group by tablespace_name) fs
 where df.tablespace_name = fs.tablespace_name(+)
 )
UNION
(
select 
       ts.tablespace_name "Tablespace",
       round(ts.tablespace_size / 1024 / 1024, 2) "Size MB",
       round(nvl(tfs.free_space, 0) / 1024 / 1024, 2) "Free MB",
       round(ts.tablespace_maxsize / 1024 / 1024, 2) "Max MB",
       round((ts.tablespace_maxsize - (ts.tablespace_size - tfs.free_space))
       / 1024 / 1024, 2) "Maximum free space in MB",
       round((ts.tablespace_size / 1024 / 1024 - tfs.free_space / 1024 / 1024),2)
        "Used MB",
       to_char(round((ts.tablespace_size - tfs.free_space) /
                     ts.tablespace_maxsize * 100,
                     2),
               '990D99') || ' %' "Used %",
       case
         when round((ts.tablespace_size - tfs.free_space) /
                    ts.tablespace_maxsize * 100,
                    2) > 95 then
          'CRITICAL - 95%'
         when round((ts.tablespace_size - tfs.free_space) /
                    ts.tablespace_maxsize * 100,
                    2) > 90 then
          'ALARM - 90%'
         when round((ts.tablespace_size - tfs.free_space) /
                    ts.tablespace_maxsize * 100,
                    2) > 85 then
          'CAUTION - 85%'
         else
          'OK < 85%'
       end "Status"
  from (select 
               tablespace_name,
               sum(bytes) tablespace_size,
               sum(decode(maxbytes, 0, bytes, null, bytes, maxbytes)) tablespace_maxsize
          from dba_temp_files
         group by tablespace_name) ts,
       (select
               tablespace_name, free_space from dba_temp_free_space) tfs
 where ts.tablespace_name = tfs.tablespace_name
    )
order by 7 desc, 1 asc;



note :
If the script runs slowly on your database, the main reason is usually the recyclebin: when it contains too many objects, the dba_free_space view becomes slow. The solution is to purge dba_recyclebin.
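As SYS, the purge is a single statement:

```sql
-- removes dropped objects from all users' recycle bins
purge dba_recyclebin;
```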

Monday, April 24, 2023

Tuning an IS NULL query in Oracle

When a simple query uses only an IS NULL predicate on a big table, the cost-based optimizer chooses a full table scan, so the query runs slowly. Here is a demonstration of how to tune similar queries:
create table my_test_table as select * from dba_objects;

insert into my_test_table  select * from dba_objects;
commit;

UPDATE my_test_table set object_name=null where rownum<=100;
commit;

create index my_test_table_i1 on my_test_table(object_name);

begin
  dbms_stats.gather_table_stats(ownname => 'QOBESA',
                                tabname => 'MY_TEST_TABLE',
                                cascade => true);
end;
/


select * from my_test_table where object_name is null;


------------------------------------------------------------------------------
| Id  | Operation           | Name          | Rows | Bytes | Cost | Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |               |  100 | 12500 |  108 | 00:00:01 |
| * 1 |   TABLE ACCESS FULL | MY_TEST_TABLE |  100 | 12500 |  108 | 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
------------------------------------------
* 1 - filter("OBJECT_NAME" IS NULL)
So we have a table with more than 100,000 rows, and in 100 of them the indexed column object_name is null; yet the optimizer does not use the index and performs a full table scan. If your table is huge and contains many gigabytes of data, this is a big problem. Many believe there is no way to tune this type of statement, because Oracle does not store index keys that are entirely NULL, but here is a little trick to use an index for NULL lookups:
create index my_test_table_i2 on my_test_table(object_name, -1);


begin
  dbms_stats.gather_table_stats(ownname => 'QOBESA',
                                tabname => 'MY_TEST_TABLE',
                                cascade => true);
end;
/


select * from my_test_table where object_name is null;


---------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name             | Rows | Bytes | Cost | Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                  |  100 | 12900 |   78 | 00:00:01 |
|   1 |   TABLE ACCESS BY INDEX ROWID BATCHED | MY_TEST_TABLE    |  100 | 12900 |   78 | 00:00:01 |
| * 2 |    INDEX RANGE SCAN                   | MY_TEST_TABLE_I2 |  100 |       |    3 | 00:00:01 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
------------------------------------------
* 2 - access("OBJECT_NAME" IS NULL)
So the solution is to add a constant as the trailing column of the index; the key is then never entirely NULL, so rows with a NULL object_name are stored in the index.

Friday, March 17, 2023

ORA-01873: the leading precision of the interval is too small

Hello,

Today we faced the error below when querying dba_jobs:

SQL> select * from dba_jobs where job=4474623;
select * from dba_jobs where job=4474623

ORA-01873: the leading precision of the interval is too small

SQL> 
The database is on version 19c, where DBMS_JOB jobs are automatically converted to Scheduler jobs, so querying the job this way works correctly:

select * from dba_scheduler_jobs where job_name='DBMS_JOB$_4474623';
so the problem is the dba_jobs view itself; here is what it looks like:

create or replace view dba_jobs as
select
    m.dbms_job_number JOB, j.creator LOG_USER, u.name PRIV_USER,
    u.name SCHEMA_USER,
    CAST(j.last_start_date AS DATE) LAST_DATE,
    substr(to_char(j.last_start_date,'HH24:MI:SS'),1,8) LAST_SEC,
    CAST(
      DECODE(BITAND(j.job_status,2), 2, j.last_start_date, NULL)
    AS DATE) THIS_DATE,
    DECODE(BITAND(j.job_status,2), 2,
           substr(to_char(j.last_start_date,'HH24:MI:SS'),1,8), NULL) THIS_SEC,
    CAST(j.next_run_date AS DATE) NEXT_DATE,
    substr(to_char(j.next_run_date,'HH24:MI:SS'),1,8) NEXT_SEC,
    (CASE WHEN j.last_end_date>j.last_start_date THEN
     extract(day from (j.last_end_date-j.last_start_date)*86400) ELSE 0 END)
     TOTAL_TIME, -- Scheduler does not track total time
    DECODE(BITAND(j.job_status,1),0,'Y','N') BROKEN,
    DECODE(BITAND(j.flags,1024+4096+134217728),
                  0, j.schedule_expr, NULL) INTERVAL,
    j.failure_count FAILURES, j.program_action WHAT,
    j.nls_env NLS_ENV, j.env MISC_ENV, NVL(j.instance_id, 0) INSTANCE
  from
    sys.scheduler$_dbmsjob_map m
    left outer join sys.obj$ o on (o.name = m.job_name)
    left outer join sys.user$ u on (u.name = m.job_owner)
    left outer join sys.scheduler$_job j on (j.obj# = o.obj#)
  where
    o.owner# = u.user#
;
comment on table DBA_JOBS is 'All jobs in the database';
comment on column DBA_JOBS.JOB is 'Identifier of job.  Neither import/export nor repeated executions change it.';
comment on column DBA_JOBS.LOG_USER is 'USER who was logged in when the job was submitted';
comment on column DBA_JOBS.PRIV_USER is 'USER whose default privileges apply to this job';
comment on column DBA_JOBS.SCHEMA_USER is 'select * from bar  means  select * from schema_user.bar ';
comment on column DBA_JOBS.LAST_DATE is 'Date that this job last successfully executed';
comment on column DBA_JOBS.LAST_SEC is 'Same as LAST_DATE.  This is when the last successful execution started.';
comment on column DBA_JOBS.THIS_DATE is 'Date that this job started executing (usually null if not executing)';
comment on column DBA_JOBS.THIS_SEC is 'Same as THIS_DATE.  This is when the last successful execution started.';
comment on column DBA_JOBS.NEXT_DATE is 'Date that this job will next be executed';
comment on column DBA_JOBS.NEXT_SEC is 'Same as NEXT_DATE.  The job becomes due for execution at this time.';
comment on column DBA_JOBS.TOTAL_TIME is 'Total wallclock time spent by the system on this job, in seconds';
comment on column DBA_JOBS.BROKEN is 'If Y, no attempt is being made to run this job.  See dbms_jobq.broken(job).';
comment on column DBA_JOBS.INTERVAL is 'A date function, evaluated at the start of execution, becomes next NEXT_DATE';
comment on column DBA_JOBS.FAILURES is 'How many times has this job started and failed since its last success?';
comment on column DBA_JOBS.WHAT is 'Body of the anonymous PL/SQL block that this job executes';
comment on column DBA_JOBS.NLS_ENV is 'alter session parameters describing the NLS environment of the job';
comment on column DBA_JOBS.MISC_ENV is 'a versioned raw maintained by the kernel, for other session parameters';
comment on column DBA_JOBS.INSTANCE is 'Instance number restricted to run the job';
Note the important comment on TOTAL_TIME: -- Scheduler does not track total time.
Solution:
We can modify the dba_jobs view to solve this problem (or simply query dba_scheduler_jobs instead). Only the TOTAL_TIME column calculation changes, to this:


( CASE
              WHEN j.last_end_date > j.last_start_date THEN
                 --EXTRACT (
                 --DAY FROM (j.last_end_date - j.last_start_date) * 86400)
          (CAST(j.last_end_date AS DATE)-CAST(j.last_start_date AS DATE)) * 86400
              ELSE
                 0
           END )
     TOTAL_TIME, -- Scheduler does not track total time

The whole view then looks like this:
create or replace view dba_jobs as
select
    m.dbms_job_number JOB, j.creator LOG_USER, u.name PRIV_USER,
    u.name SCHEMA_USER,
    CAST(j.last_start_date AS DATE) LAST_DATE,
    substr(to_char(j.last_start_date,'HH24:MI:SS'),1,8) LAST_SEC,
    CAST(
      DECODE(BITAND(j.job_status,2), 2, j.last_start_date, NULL)
    AS DATE) THIS_DATE,
    DECODE(BITAND(j.job_status,2), 2,
           substr(to_char(j.last_start_date,'HH24:MI:SS'),1,8), NULL) THIS_SEC,
    CAST(j.next_run_date AS DATE) NEXT_DATE,
    substr(to_char(j.next_run_date,'HH24:MI:SS'),1,8) NEXT_SEC,
    ( CASE
              WHEN j.last_end_date > j.last_start_date THEN
                 --EXTRACT (
                 --DAY FROM (j.last_end_date - j.last_start_date) * 86400)
          (CAST(j.last_end_date AS DATE)-CAST(j.last_start_date AS DATE)) * 86400
              ELSE
                 0
           END )
     TOTAL_TIME, -- Scheduler does not track total time
    DECODE(BITAND(j.job_status,1),0,'Y','N') BROKEN,
    DECODE(BITAND(j.flags,1024+4096+134217728),
                  0, j.schedule_expr, NULL) INTERVAL,
    j.failure_count FAILURES, j.program_action WHAT,
    j.nls_env NLS_ENV, j.env MISC_ENV, NVL(j.instance_id, 0) INSTANCE
  from
    sys.scheduler$_dbmsjob_map m
    left outer join sys.obj$ o on (o.name = m.job_name)
    left outer join sys.user$ u on (u.name = m.job_owner)
    left outer join sys.scheduler$_job j on (j.obj# = o.obj#)
  where
    o.owner# = u.user#
;
comment on table DBA_JOBS is 'All jobs in the database';
comment on column DBA_JOBS.JOB is 'Identifier of job.  Neither import/export nor repeated executions change it.';
comment on column DBA_JOBS.LOG_USER is 'USER who was logged in when the job was submitted';
comment on column DBA_JOBS.PRIV_USER is 'USER whose default privileges apply to this job';
comment on column DBA_JOBS.SCHEMA_USER is 'select * from bar  means  select * from schema_user.bar ';
comment on column DBA_JOBS.LAST_DATE is 'Date that this job last successfully executed';
comment on column DBA_JOBS.LAST_SEC is 'Same as LAST_DATE.  This is when the last successful execution started.';
comment on column DBA_JOBS.THIS_DATE is 'Date that this job started executing (usually null if not executing)';
comment on column DBA_JOBS.THIS_SEC is 'Same as THIS_DATE.  This is when the last successful execution started.';
comment on column DBA_JOBS.NEXT_DATE is 'Date that this job will next be executed';
comment on column DBA_JOBS.NEXT_SEC is 'Same as NEXT_DATE.  The job becomes due for execution at this time.';
comment on column DBA_JOBS.TOTAL_TIME is 'Total wallclock time spent by the system on this job, in seconds';
comment on column DBA_JOBS.BROKEN is 'If Y, no attempt is being made to run this job.  See dbms_jobq.broken(job).';
comment on column DBA_JOBS.INTERVAL is 'A date function, evaluated at the start of execution, becomes next NEXT_DATE';
comment on column DBA_JOBS.FAILURES is 'How many times has this job started and failed since its last success?';
comment on column DBA_JOBS.WHAT is 'Body of the anonymous PL/SQL block that this job executes';
comment on column DBA_JOBS.NLS_ENV is 'alter session parameters describing the NLS environment of the job';
comment on column DBA_JOBS.MISC_ENV is 'a versioned raw maintained by the kernel, for other session parameters';
comment on column DBA_JOBS.INSTANCE is 'Instance number restricted to run the job';


After changing the dba_jobs view, we can query the job without any error:
SQL> select * from dba_jobs where job=4474623;

Saturday, February 25, 2023

Disable or enable auto task jobs in oracle database

Automatic database maintenance tasks are used by Oracle to tune SQL queries, gather stale or missing optimizer statistics, and provide space advisories.

These automatic maintenance jobs are configured by default in Oracle Database:

  • Automatic Optimizer Statistics Collection - Gathers stale or missing statistics
  • Automatic Segment Advisor – Identifies segments that can be reorganized to save space
  • Automatic SQL Tuning Advisor – Tunes high-load SQL
How to disable automatic jobs in oracle?

To disable all these jobs you can :
begin
DBMS_AUTO_TASK_ADMIN.disable;
end;
/
How to enable automatic jobs in oracle?
begin
DBMS_AUTO_TASK_ADMIN.enable;
end;
/
How to disable sql tuning advisor, auto space advisor and auto optimizer stats collection separately one by one:
SQL> SELECT client_name, status FROM dba_autotask_client;

CLIENT_NAME                           STATUS
-------------------------------       ---------
auto optimizer stats collection       ENABLED
auto space advisor                    ENABLED
sql tuning advisor                    ENABLED

SQL> EXEC DBMS_AUTO_TASK_ADMIN.DISABLE(client_name=>'sql tuning advisor', operation=>NULL, window_name=>NULL);

SQL> EXEC DBMS_AUTO_TASK_ADMIN.DISABLE(client_name=>'auto space advisor', operation=>NULL, window_name=>NULL);

SQL> EXEC DBMS_AUTO_TASK_ADMIN.DISABLE(client_name=>'auto optimizer stats collection', operation=>NULL, window_name=>NULL);

PL/SQL procedure successfully completed.

Check the status again for the auto task jobs:

SQL> SELECT client_name, status FROM dba_autotask_client;

 CLIENT_NAME                             STATUS
----------------------------------       ---------------
auto optimizer stats collection          DISABLED
auto space advisor                       DISABLED
sql tuning advisor                       DISABLED
How to enable sql tuning advisor, auto space advisor and auto optimizer stats collection separately one by one:
SQL> EXEC DBMS_AUTO_TASK_ADMIN.ENABLE(client_name=>'sql tuning advisor', operation=>NULL, window_name=>NULL);

SQL> EXEC DBMS_AUTO_TASK_ADMIN.ENABLE(client_name=>'auto space advisor', operation=>NULL, window_name=>NULL);

SQL> EXEC DBMS_AUTO_TASK_ADMIN.ENABLE(client_name=>'auto optimizer stats collection', operation=>NULL, window_name=>NULL);

PL/SQL procedure successfully completed.

Wednesday, August 31, 2022

ORA-01000: maximum open cursors exceeded oracle.sysman.emSDK.emd.comm.MetricGetException: ORA-01000: maximum open cursors exceeded

Sometimes we do not get alerts from OEM (Oracle Enterprise Manager), and checking some metric values throws the error:
ORA-01000: maximum open cursors exceeded oracle.sysman.emSDK.emd.comm.MetricGetException: ORA-01000: maximum open cursors exceeded
I found an interesting doc about this issue on Metalink:
ORA-01000: MAXIMUM OPEN CURSORS EXCEEDED (Doc ID 2751726.1).
It seems it is Bug 31542357 - Use of setTimeToLiveTimeout causes leak of cursors and ORA-01000 eventually 
Solution: 
For 13.4 Cloud Control the patch will be included in the 13.4 RU7 and newer patches. 
The workaround from development is: on the agent side, add the property below to emd.properties and restart the agent. The property kills the session on the database side and avoids the cursor leak.
SqlConnectionCache._TimeToLiveTimeout=0
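A hedged sketch of applying the workaround; the agent instance home path below is only an example, adjust it to your installation:

```
# add the property to the agent's emd.properties (path is illustrative)
echo "SqlConnectionCache._TimeToLiveTimeout=0" >> /u01/app/oracle/agent/agent_inst/sysman/config/emd.properties

# restart the agent so the property takes effect
emctl stop agent
emctl start agent
```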

Saturday, May 7, 2022

Oracle Database World 2022

If you missed the biggest database event of 2022, here you can access 18 sessions and 3 hands-on labs: Oracle Database World 2022

Thursday, April 14, 2022

No ADR base is set Oracle RAC 19c

When I try to connect to adrci, the error below appears:
adrci

ADRCI: Release 19.0.0.0.0 - Production on Thu Apr 14 14:51:54 2022

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

No ADR base is set
adrci> show homes
No ADR homes are set
adrci> exit
Solution: check whether the file adrci_dir.mif exists in $ORACLE_HOME/log/diag. If the directory does not exist, do the following:
mkdir -p $ORACLE_HOME/log/diag
printf "%s" "/u01/app/oracle" > $ORACLE_HOME/log/diag/adrci_dir.mif
"/u01/app/oracle" will be your ADR base, which is actualy oracle_base. After that adrci will work:
adrci

ADRCI: Release 19.0.0.0.0 - Production on Thu Apr 14 15:01:21 2022

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

ADR base = "/u01/app/oracle"
adrci> show homes
ADR Homes: 
diag/asmtool/user_oracle/host_578647856_107
diag/asmtool/user_oracle/host_578647856_80
diag/asmtool/user_grid/host_578647856_107
diag/rdbms/testdb2/TESTDB2
diag/rdbms/testdb/TESTDB

Friday, January 28, 2022

TFA-00002 : Oracle Trace File Analyzer (TFA) is not running

When I tried to run oratop on the database server, the error below appeared:
tfactl oratop -database testdb
TFA-00002 Oracle Trace File Analyzer (TFA) is not running
Solution : As a root user
/etc/init.d/init.tfa  start  
Starting TFA..
start: Job is already running: oracle-tfa
Waiting up to 100 seconds for TFA to be started..
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
Successfully started TFA Process..
. . . . .
TFA-00002 : Oracle Trace File Analyzer (TFA) is not running
TFA Failed to start listening for commands

If it does not start successfully, check the following:
Run synctfanodes.sh if you have the root password.
If you do not have the root password, check whether these 4 files are identical on all nodes; if they are not, copy them to the node where TFA fails and restart TFA (this worked for me):
TFA_HOME/server.jks
TFA_HOME/client.jks
TFA_HOME/internal/ssl.properties
TFA_HOME/internal/portmapping.txt
 
./tfactl  -v "debug" start
Starting TFA..
start: Job is already running: oracle-tfa
Waiting up to 100 seconds for TFA to be started..
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
Successfully started TFA Process..
. . . . .
TFA Started and listening for commands

tfactl print status

.------------------------------------------------------------------------------------------------.
| Host    | Status of TFA | PID    | Port | Version    | Build ID             | Inventory Status |
+---------+---------------+--------+------+------------+----------------------+------------------+
| x82db01 | RUNNING       | 272939 | 5000 | 21.2.1.0.0 | 21210020210813042249 | COMPLETE         |
'---------+---------------+--------+------+------------+----------------------+------------------'
After that we can use oratop to monitor the database with: tfactl oratop -database testdb

Friday, December 17, 2021

ORA-00600 [17147] ORA-48216 When Querying V$DIAG_ALERT_EXT View

When I try to query v$diag_alert_ext, the following error appears:
ORA-00600: internal error code, arguments: [17147], [0x110A09510], [], [], [], [], [], [], [], [], [], []
ORA-48216: Field Len Exceeds Max Field Length [5] [30] [29286] [66]
Here is the query used for reading alert.log entries:
select instance_id, originating_timestamp, message_text, '1'
  from table(gv$(cursor (select instance_id, originating_timestamp, message_text
                    from v$diag_alert_ext a
                   where originating_timestamp > (sysdate - 1)
                     and (message_text like '%ORA-%' or
                         message_text like '%CRS-%'))))
 order by originating_timestamp desc;
As I found on Metalink, the issue matches unpublished Bug 21266522, a duplicate of unpublished Bug 18909196 - QUERIES INVOLVING X$DBGALERTEXT SOMETIMES FAIL DUE TO ORA-48216.
Solution:

1. Download and install the one-off Patch 18909196, available for your platform (the issue only exists in 11.2.0.4). If a patch does not exist for your platform, open a Service Request with Oracle Support.
-OR-
2. Implement the workaround: remove (i.e. delete) the XML-based alert log, log.xml (and log_*.xml), under the alert directory of the ADR home.

Tuesday, November 16, 2021

After startup database with srvctl, sqlplus as sysdba results in ORA-12547: TNS:lost contact

We had a very strange situation on one of our test environments:
1. If we use sqlplus to start the instance, then stopping it with srvctl fails with ORA-12547:
[oracle@test_host:/home/oracle]$sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Nov 15 12:57:46 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
[oracle@test_host:/home/oracle]$srvctl stop database -d testdb
PRCD-1334 : failed to stop database testdb
PRCD-1124 : Failed to stop database testdb and its services
PRCD-1131 : Failed to stop database testdb and its services on nodes
PRCR-1133 : Failed to stop database testdb and its running services
PRCR-1132 : Failed to stop resources using a filter
ORA-12547: TNS:lost contact
CRS-2675: Stop of 'ora.testdb.db' on 'test_host' failed
2. If we use srvctl to start the instance, sqlplus / as sysdba gets ORA-12547:
[oracle@test_host:/home/oracle]$sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Nov 15 12:59:21 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

ERROR:
ORA-12547: TNS:lost contact

Enter user-name:
ERROR:
ORA-12547: TNS:lost contact

Enter user-name:
ERROR:
ORA-12547: TNS:lost contact

SP2-0157: unable to CONNECT to ORACLE after 3 attempts, exiting SQL*Plus
3. We checked the permissions on the oracle binary, the libraries, and orabasetab; no problem was found:
$ls -l $ORACLE_HOME/bin/oracle
-rwsr-s--x 1 oracle asmadmin 441253072 Nov 15 12:29 /u01/app/oracle/product/19.0.0/db_1/bin/oracle
( No change after we start instance by using srvctl or sqlplus )
$ls -l $ORACLE_HOME|grep lib
drwxr-xr-x 2 oracle oinstall 4096 Apr 17 2019 jlib
drwxr-xr-x 4 oracle oinstall 12288 Nov 13 12:29 lib

$more orabasetab
#orabasetab file is used to track Oracle Home associated with Oracle Base
/u01/app/oracle/product/19.0.0/db_1:/u01/app/oracle:OraDB19Home1:N:
Solution: the issue is due to an incorrectly configured Oracle home in the OCR:
[grid@test_host:/home/grid]$srvctl config database -db testdb
Database unique name: testdb
Database name:
Oracle home: /u01/app/oracle/product/19.0.0/db_1/
So the problem is the "/" symbol at the end of the Oracle home path, and we can fix it as follows:
srvctl stop database -d testdb
srvctl modify database -d testdb -o /u01/app/oracle/product/19.0.0/db_1
srvctl start database -d testdb
The corresponding Oracle Support document is: DB started by sqlplus cannot be stopped by srvctl( ORA-12547 ), and vice versa (Doc ID 2612663.1)
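The fix above boils down to normalizing a trailing slash. As a sketch (the paths and the database name testdb are the ones from this post), the mismatch can be detected like this before re-registering the home:

```shell
# Detect a trailing "/" in the OCR-registered Oracle home, as printed
# by "srvctl config database -db testdb".
OCR_HOME="/u01/app/oracle/product/19.0.0/db_1/"   # value reported by srvctl
FIXED_HOME="${OCR_HOME%/}"                        # strip one trailing slash
if [ "$FIXED_HOME" != "$OCR_HOME" ]; then
    echo "trailing slash found; re-register with:"
    echo "srvctl modify database -d testdb -o $FIXED_HOME"
fi
```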

Tuesday, September 7, 2021

How To Grant Select On X$ Objects In Oracle

The X$ tables are owned by the SYS database user and are read-only, which is why they are called fixed tables (and the V$ views fixed views). Only SYS can query the X$ tables, and SELECT on them cannot be granted directly because they are internally protected. You can, however, create a view on an X$ table and grant SELECT on that view. If you try to grant SELECT on an X$ object directly, you will receive the following error:
SQL> grant select on sys.x$tables to scott;
grant select on sys.x$tables to scott
*
ERROR at line 1:
ORA-02030: can only select from fixed tables/views
Instead, we can create a view and grant SELECT on that view to another user. First, create a test user and give it the necessary grants:
SQL> create user qobesa identified by pass ;
User created.

SQL> grant resource to qobesa ;

Grant succeeded.

SQL> grant connect to qobesa ;

Grant succeeded.
Connect to the database as qobesa and try to select from one of the X$ tables:
SQL> connect qobesa/pass
Connected.

SQL> show user
USER is "qobesa"

SQL> select * from sys.x$ksppcv;
select * from sys.x$ksppcv
*
ERROR at line 1:
ORA-00942: table or view does not exist
As you can see, we hit ORA-00942. Now, back as SYS, let us try to grant SELECT to qobesa directly:
SQL> show user
USER is "SYS"
SQL> grant select on sys.x$ksppcv to qobesa;
grant select on sys.x$ksppcv to qobesa
*
ERROR at line 1:
ORA-02030: can only select from fixed tables/views
The direct grant fails with ORA-02030. So let us create a view and grant SELECT on that view instead:
SQL> create view vw_x$ksppcv as select * from sys.x$ksppcv;

View created.

SQL> grant select on sys.vw_x$ksppcv to qobesa;

Grant succeeded.
Now qobesa can query the X$ data through the new view:
SQL> select * from sys.vw_x$ksppcv;
That's all.
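As a small optional extension of the technique above (a sketch — the synonym name is a hypothetical choice of mine, not from the post), a synonym lets qobesa query the view without the SYS schema prefix:

```sql
-- Optional: let qobesa reference the view without the "sys." prefix.
-- The synonym name here is a placeholder for this example.
CREATE PUBLIC SYNONYM vw_x$ksppcv FOR sys.vw_x$ksppcv;

-- Then, connected as qobesa:
SELECT COUNT(*) FROM vw_x$ksppcv;
```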

Monday, June 14, 2021

How to convert physical standby database to snapshot standby database

The snapshot standby feature in a Data Guard environment gives us the opportunity to open a standby database in read-write mode and later convert it back to a physical standby.

We can use either DGMGRL or SQL*Plus to convert a physical standby database to a snapshot standby database; here I'll show both methods.

To open a physical standby database in read-write mode as a snapshot standby, flashback must be enabled.


Using SQL*Plus to convert a physical standby database to a snapshot standby database:

SQL> alter system set db_recovery_file_dest='+DG';
System altered.

SQL> alter system set db_recovery_file_dest_size=500G;
System altered.

SQL> recover managed standby database cancel;

Media recovery complete.

SQL> shutdown immediate;

SQL> startup mount;

SQL> alter database flashback on;
Database altered.

SQL> select flashback_on from v$database;
FLASHBACK_ON
------------------
YES

SQL> ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
Database altered.

SQL> select NAME, OPEN_MODE, GUARD_STATUS, DATABASE_ROLE from v$database;
NAME      OPEN_MODE            GUARD_S DATABASE_ROLE
--------- -------------------- ------- ----------------
TESTDB     MOUNTED              NONE    SNAPSHOT STANDBY

SQL> shut immediate

SQL> startup

SQL> Select NAME, OPEN_MODE, GUARD_STATUS, DATABASE_ROLE from v$database;
NAME      OPEN_MODE            GUARD_S DATABASE_ROLE
--------- -------------------- ------- ----------------
TESTDB     READ WRITE           NONE    SNAPSHOT STANDBY

Convert snapshot standby database back to physical standby database:
SQL> select NAME, OPEN_MODE, GUARD_STATUS, DATABASE_ROLE from v$database;
NAME      OPEN_MODE            GUARD_S DATABASE_ROLE
--------- -------------------- ------- ----------------
TESTDB     READ WRITE           NONE    SNAPSHOT STANDBY

SQL> shut immediate

SQL> Startup nomount

SQL> alter database mount;

SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
Database altered.

SQL> shut immediate

SQL> alter database mount standby database;
Database altered.

SQL> recover managed standby database disconnect from session;
Media recovery complete.

SQL> Select NAME, OPEN_MODE, GUARD_STATUS, DATABASE_ROLE from v$database;
NAME      OPEN_MODE            GUARD_S DATABASE_ROLE
--------- -------------------- ------- ----------------
TESTDB     MOUNTED              NONE    PHYSICAL STANDBY
Using DGMGRL to convert the physical standby database to a snapshot standby and back:

-- convert physical standby to snapshot standby
DGMGRL> CONVERT DATABASE TESTDB TO SNAPSHOT STANDBY;

-- convert snapshot standby to physical standby
DGMGRL> CONVERT DATABASE TESTDB to PHYSICAL STANDBY;

Tuesday, May 25, 2021

ORA-01450: maximum key length (3215) exceeded

Today I faced a very interesting thing and want to share it.
I was simply trying to create an index online and got this error:
SQL> create index owner.index_name_i1 on owner.table_name(column_name) tablespace tbl_name online;
 
ORA-00604: error occurred at recursive SQL level 1
ORA-01450: maximum key length (3215) exceeded
 
SQL> 
Problem:
When creating an index, the total key length cannot exceed a certain value, which depends primarily on DB_BLOCK_SIZE:
If 2K block size then maximum index key length=758
If 4K block size then maximum index key length=1578
If 8K block size then maximum index key length=3218 
If 16K block size then maximum index key length=6498

How is the maximum index key length calculated?
Maximum index key length = total index length (sum of the widths of all indexed columns + the number of indexed columns) + length of the key (2 bytes) + ROWID (6 bytes) + length of the ROWID (1 byte).
The index key size is limited by DB_BLOCK_SIZE because a key value may not span multiple blocks; in fact, any index block must contain at least TWO index entries. So the maximum key length for an index must be less than half of DB_BLOCK_SIZE. After also accounting for PCTFREE, INITRANS, and block overhead (block header, row directory, table directory, etc.), the space actually usable for the index key is just over one third of DB_BLOCK_SIZE.
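Following the formula above, you can estimate a prospective index key width before creating the index and compare it against the limit for your block size. This is a sketch — the owner, table, and column names are placeholders:

```sql
-- Estimate the index key width for a planned index (8K block limit ~3218).
-- OWNER, TABLE_NAME, and the column list below are placeholders.
SELECT SUM(data_length)   -- sum of the widths of the indexed columns
       + COUNT(*)         -- the number of indexed columns
       + 2 + 6 + 1        -- key length + ROWID + length of the ROWID
       AS estimated_key_width
FROM   dba_tab_columns
WHERE  owner       = 'OWNER'
AND    table_name  = 'TABLE_NAME'
AND    column_name IN ('COLUMN_NAME');
```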

Solution:
1) Increase the block size: create a tablespace with a bigger block size and create the index in that tablespace.
2) If the index covers multiple columns, split it into indexes of one or two columns so the key size stays within the limit.
3) Create/rebuild the index without the ONLINE clause.

Option 3 helps because an online index rebuild creates an internal journal table (an IOT) whose index contains additional columns; this is inherent to the online mechanism. The error arises here because the current DB_BLOCK_SIZE is not large enough to create that internal journal IOT.
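Solution 1 can be sketched as follows. This is an assumption-laden example: the cache size, datafile location, and all names are placeholders, and a non-default block size requires the matching db_Nk_cache_size parameter to be set first:

```sql
-- Sketch of solution 1: move the index to a 16K-block tablespace,
-- where the maximum key length rises to ~6498.
-- All names, sizes, and the '+DG' disk group are placeholders.
ALTER SYSTEM SET db_16k_cache_size = 64M SCOPE=BOTH;

CREATE TABLESPACE tbs_idx_16k BLOCKSIZE 16K
  DATAFILE '+DG' SIZE 1G AUTOEXTEND ON;

CREATE INDEX owner.index_name_i1 ON owner.table_name(column_name)
  TABLESPACE tbs_idx_16k ONLINE;
```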

Sunday, November 8, 2020

You (oracle) are not allowed to use this program (crontab)

On one of our servers we tried to schedule an OS script in cron, but it turned out that the oracle user is not allowed to:


$ crontab -l
You (oracle) are not allowed to use this program (crontab)
See crontab(1) for more information

Solution:
This error occurs because the oracle user does not have access to crontab.
Two files on the server control crontab access: /etc/cron.allow and /etc/cron.deny.
Check whether the oracle user is listed in cron.deny; if it is, remove it from cron.deny and add it to cron.allow.
If /etc/cron.allow and /etc/cron.deny do not exist on the server, you can create both files manually.
# cat /etc/cron.allow
root

So I added the oracle user to the cron.allow file; it now looks as follows:
# cat /etc/cron.allow
root
oracle

Now the oracle user is able to access crontab:
$ crontab -l
no crontab for oracle
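The edits above can be scripted. This demo runs against copies in a temp directory; on a real server the files are /etc/cron.allow and /etc/cron.deny and the same edits must be made as root:

```shell
# Demo of the cron.allow/cron.deny edit, using a temp directory
# instead of /etc so it can be run safely by any user.
DEMO=$(mktemp -d)
printf 'oracle\n' > "$DEMO/cron.deny"    # simulate: oracle is currently denied
printf 'root\n'   > "$DEMO/cron.allow"

sed -i '/^oracle$/d' "$DEMO/cron.deny"   # remove oracle from cron.deny
grep -qx 'oracle' "$DEMO/cron.allow" || echo 'oracle' >> "$DEMO/cron.allow"

cat "$DEMO/cron.allow"
```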

Thursday, October 29, 2020

ORA-16751: failed to switchover to physical standby database and Switchover Ends with Two Physical Standby Databases

You might find yourself in a situation where a switchover fails and leaves your environment with two physical standby databases.
From the broker you might see the following error:
ORA-16816: incorrect database role
Solution:
Important: first of all, stop the database instances on all servers.
1. Logon (as sysdba) to the instance that was your Primary database instance before the switchover.

2. Confirm the database role.
SQL> select database_role from v$database;

DATABASE_ROLE

-------------------------------

PHYSICAL STANDBY

3. Shutdown the instance.
SQL> shutdown immediate;

Oracle Instance Shutdown.

4. Mount the database.
SQL> startup mount;

Database mounted.

5. Cancel the MRP process. You will receive "ORA-16136: Managed Standby Recovery not active" if it is not running, but it can be ignored.
SQL> alter database recover managed standby database cancel;

6. Terminate the current switchover to Standby that never completed fully.
SQL> alter database recover managed standby database finish;

Database altered

7. Now switchover to Primary.
SQL> alter database commit to switchover to primary with session shutdown;

Database altered

8. Open the database.
SQL> alter database open;

Database altered.


9. Confirm the database role.
SQL> select database_role from v$database;

DATABASE_ROLE

------------------------------

PRIMARY



Additional Steps
When attempting to open the primary database you may encounter the following error:
SQL> alter database open

*

ERROR at line 1:

ORA-16649: possible failover to another database prevents this database being opened

In this case, before you can open the database, you must disable Data Guard Broker as follows:
 SQL> alter system set dg_broker_start=false scope=both;

 System altered.

SQL> alter database open;

 Database altered.

Now re-create the Data Guard broker configuration.
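If you disabled the broker in the step above, the final step can be sketched like this. This is a hedged example: the configuration name and connect identifiers are placeholders for your environment:

```sql
-- Re-enable the broker once both databases have the correct roles.
ALTER SYSTEM SET dg_broker_start = TRUE SCOPE=BOTH;

-- Then, from DGMGRL, recreate the configuration (all names are placeholders):
-- DGMGRL> CREATE CONFIGURATION dgconf AS
--           PRIMARY DATABASE IS testdb CONNECT IDENTIFIER IS testdb;
-- DGMGRL> ADD DATABASE testdb_stb AS
--           CONNECT IDENTIFIER IS testdb_stb MAINTAINED AS PHYSICAL;
-- DGMGRL> ENABLE CONFIGURATION;
```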

ORA-00600: internal error code, arguments: [qcisSetPlsqlCtx:tzi init], [], [], [], [], [], [], [], [], [], [], []

I was restoring an Oracle database on another host, and after opening the database with the RESETLOGS option, the internal error below appeared in the alert.log file:


ORA-00600: internal error code, arguments: [qcisSetPlsqlCtx:tzi init], [], [], [], [], [], [], [], [], [], [], []

Besides this, the database was extremely slow: every command took a long time and the database as a whole was unresponsive.
Solution:
1) Copy the timezone files from the $ORACLE_HOME/oracore/zoneinfo directory on the source server (where the backup was taken) over the files in $ORACLE_HOME/oracore/zoneinfo on the host where the restore was done.
2) Restart the database; after that everything runs smoothly.

Monday, October 19, 2020

ORA-00245: control file backup failed; in Oracle RAC, target might not be on shared storage

RAC database backup with RMAN failed with error:
Starting Control File and SPFILE Autobackup at 2015-09-14:23:13:44
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of Control File and SPFILE Autobackup command on Disk channel at 2015-09-14:23:13:45
ORA-00245: control file backup failed; in Oracle RAC, target might not be on shared storage

From 11gR2 onwards, the controlfile backup is taken without holding the controlfile enqueue. For a non-RAC database this changes nothing, but for a RAC database any instance in the cluster may write to the snapshot controlfile, so the snapshot controlfile must be visible to all instances. It MUST reside on storage accessible by all nodes of the RAC database; if it does not reside on a shared device, this error is raised when RMAN takes the controlfile snapshot. The same applies to backing up the controlfile from SQL*Plus or having controlfile autobackup configured on a non-shared location.
Solution:
This is a RAC-specific configuration issue: the changed behaviour requires that, in a RAC environment, the snapshot controlfile be on a shared location. The correct configuration is described below.
Check the snapshot controlfile location:
RMAN> show snapshot controlfile name;

Configure the snapshot controlfile to a shared disk :
RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+DG01/snapcf_RACDB.f';