Google it ....

Friday, January 26, 2024

How to run scheduler job on specific instance in RAC database

Here I'll show you how to configure a scheduler job to run on a specific instance in a RAC database.
Create a test job on the RAC database:
begin
  sys.dbms_scheduler.create_job(
    job_name        => 'SYS.TEST_SCHEDULER_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'declare var1 varchar2(1);
begin
  select * into var1 from dual;
  dbms_output.put_line(var1);
end;',
    start_date      => to_date('26-01-2024 00:00:00','dd-mm-yyyy hh24:mi:ss'),
    repeat_interval => 'Freq=Secondly;Interval=5',
    end_date        => to_date(null),
    job_class       => 'DEFAULT_JOB_CLASS',
    enabled         => true,
    auto_drop       => false,
    comments        => '');
end;
/
The target instance is controlled by two scheduler attributes: INSTANCE_ID and INSTANCE_STICKINESS. INSTANCE_ID indicates the instance on which the job should run. INSTANCE_STICKINESS set to TRUE (the default) means the job keeps running on the node where it first ran, until that node becomes extremely overloaded or unavailable; set to FALSE, the job can run on any available node. The attribute has no effect on a standalone (single-instance) server.
Check the job's instance_id and instance_stickiness:
select OWNER,JOB_NAME,INSTANCE_STICKINESS,INSTANCE_ID from dba_scheduler_jobs where JOB_NAME='TEST_SCHEDULER_JOB';
Change the instance_id for the job:
begin 
 dbms_scheduler.set_attribute(name => 'TEST_SCHEDULER_JOB' ,attribute=>'INSTANCE_ID', value=>'2');
end;
/
Disable the instance_stickiness attribute for the job:
begin 
 dbms_scheduler.set_attribute(name => 'TEST_SCHEDULER_JOB' ,attribute=>'INSTANCE_STICKINESS', value=>FALSE);
end;
/
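
A related approach in RAC is to bind the job to a database service through a job class, so the job runs wherever that service is active. A minimal sketch, assuming a hypothetical service SVC_NODE1 that runs only on the desired instance:

```sql
begin
  -- SVC_NODE1 is a hypothetical service configured to run on one instance only
  dbms_scheduler.create_job_class(
    job_class_name => 'NODE1_JOB_CLASS',
    service        => 'SVC_NODE1');
  -- attach the test job to the new job class
  dbms_scheduler.set_attribute(
    name      => 'SYS.TEST_SCHEDULER_JOB',
    attribute => 'JOB_CLASS',
    value     => 'NODE1_JOB_CLASS');
end;
/
```

The service-based approach survives instance failures more gracefully than a hard-coded INSTANCE_ID, because the job follows the service if it relocates.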

Saturday, December 23, 2023

How to restore an autobackup of the controlfile or spfile older than 7 days

When trying to restore the spfile or controlfile from autobackup, the following error appears:

channel ORA_DISK_1: no autobackup in 7 days found

RMAN> restore spfile from autobackup;

Starting restore at 23-DEC-23
using channel ORA_DISK_1

channel ORA_DISK_1: looking for autobackup on day: 20231223
channel ORA_DISK_1: looking for autobackup on day: 20231222
channel ORA_DISK_1: looking for autobackup on day: 20231221
channel ORA_DISK_1: looking for autobackup on day: 20231220
channel ORA_DISK_1: looking for autobackup on day: 20231219
channel ORA_DISK_1: looking for autobackup on day: 20231218
channel ORA_DISK_1: looking for autobackup on day: 20231217
channel ORA_DISK_1: no autobackup in 7 days found
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 12/23/2023 18:41:09
RMAN-06172: no autobackup found or specified handle is not a valid copy or piece


When you use RESTORE SPFILE FROM AUTOBACKUP or RESTORE CONTROLFILE FROM AUTOBACKUP,
RMAN begins the search on the current day, or on the day specified with the SET UNTIL clause.
If no autobackup is found for that day, RMAN checks the previous day, and so on.
By default, RMAN searches for an autobackup up to 7 days back from the current or SET UNTIL day.


Solution:

If you want to extend the autobackup search beyond 7 days, use the
MAXDAYS option of the RESTORE command.

RMAN> restore spfile from autobackup maxdays 30;
or
RMAN> restore controlfile from autobackup maxdays 30;

In these cases the autobackup search goes back up to 30 days from the current or SET UNTIL day.
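
MAXDAYS can also be combined with SET UNTIL when restoring to a point in the past. A sketch (the date below is only a placeholder):

```
RMAN> run {
  set until time "to_date('01-12-2023 00:00:00','dd-mm-yyyy hh24:mi:ss')";
  restore controlfile from autobackup maxdays 30;
}
```

Here the 30-day search window is counted back from the SET UNTIL time rather than from today.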

Friday, August 4, 2023

Exadata: after disk replacement the ASM disk remains MOUNT_STATUS CLOSED, HEADER_STATUS CANDIDATE

One of the physical disks on an Exadata cell was dropped for replacement. After the engineer replaced it with a new disk, the v$asm_disk view showed MOUNT_STATUS = CLOSED and HEADER_STATUS = CANDIDATE:

select g.NAME,
       d.disk_number dsk_num,
       d.mount_status,
       d.header_status,
       d.OS_MB,
       d.path
 from v$asm_disk d, v$asm_diskgroup g
 where
      d.GROUP_NUMBER=g.GROUP_NUMBER
      and d.disk_number = 26;
 

NAME           DSK_NUM MOUNT_STATUS HEADER_STATUS      OS_MB PATH
----------- ---------- ------------ ------------- ---------- ------------------------------------------------
RECO           26      CACHED       MEMBER            582080  o/192.168.10.17;192.168.10.18/RECO_FD_02_x81cel04
DATA           26      CLOSED       CANDIDATE         5217280 o/192.168.10.13;192.168.10.14/DATA_FD_02_x81cel02
Solution: the problem is resolved by adding the disk back to the relevant disk group, using the PATH value from the query above.
The steps must be performed while connected to the ASM instance as SYSASM.
First check that no rebalance is currently running; if none is, add the disk manually.
Check that no ASM rebalance is running:
sqlplus / as sysasm

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Aug 4 16:21:21 2023
Version 19.17.0.0.0

Copyright (c) 1982, 2022, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.17.0.0.0

SQL> select * from v$asm_operation;

no rows selected

SQL>
add disk manually:

alter diskgroup DATA add disk 'o/192.168.10.13;192.168.10.14/DATA_FD_02_x81cel02' rebalance power 4;
You can check the progress of the rebalance again in v$asm_operation.
After the rebalance finishes, the disk statuses should be CACHED and MEMBER.
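The progress check mentioned above can be sketched as a query against gv$asm_operation (on RAC; EST_MINUTES gives the estimated time remaining):

```sql
-- shows running rebalance operations across all instances;
-- no rows means no rebalance is in progress
select inst_id, operation, state, power, sofar, est_work, est_minutes
  from gv$asm_operation;
```

On a single ASM instance the same columns are available in v$asm_operation.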
 
select g.NAME,
       d.disk_number dsk_num,
       d.mount_status,
       d.header_status,
       d.OS_MB,
       d.path
 from v$asm_disk d, v$asm_diskgroup g
 where
      d.GROUP_NUMBER=g.GROUP_NUMBER
      and d.disk_number = 26;
 

NAME           DSK_NUM MOUNT_STATUS HEADER_STATUS      OS_MB PATH
----------- ---------- ------------ ------------- ---------- ------------------------------------------------
RECO           26      CACHED       MEMBER            582080  o/192.168.10.17;192.168.10.18/RECO_FD_02_x81cel04
DATA           26      CACHED       MEMBER            5217280 o/192.168.10.13;192.168.10.14/DATA_FD_02_x81cel02

Saturday, July 22, 2023

ERROR: relation pg_stat_statements does not exist

When I try to select * from pg_stat_statements to get some information about queries running on a PostgreSQL database, I get:

ERROR: relation pg_stat_statements does not exist
In this post I'll describe, step by step, how to install pg_stat_statements in PostgreSQL to analyze queries, which is very helpful during performance tuning.
The pg_stat_statements view holds information on queries that ran in the past, with one row per distinct query. It provides useful information in columns like:
pg_stat_statements.query — the query text
pg_stat_statements.calls — The number of times this query was run
pg_stat_statements.total_exec_time — The total time (ms) spent on the query.

Here is step by step how to install pg_stat_statements:

postgres=# select * from pg_stat_statements;
ERROR:  relation "pg_stat_statements" does not exist
LINE 1: select * from pg_stat_statements;
                      ^
postgres=# SELECT * FROM pg_available_extensions where name='pg_stat_statements';
        name        | default_version | installed_version |                                comment
--------------------+-----------------+-------------------+------------------------------------------------------------------------
 pg_stat_statements | 1.10            |                   | track planning and execution statistics of all SQL statements executed
(1 row)
Extensions in PostgreSQL are database-level. We have to create the extension in each database separately.

postgres=# CREATE EXTENSION pg_stat_statements;
CREATE EXTENSION
postgres=#
postgres=#
postgres=# SELECT * FROM pg_available_extensions where name='pg_stat_statements';
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
postgres=# SELECT * FROM pg_available_extensions where name='pg_stat_statements';
        name        | default_version | installed_version |                                comment
--------------------+-----------------+-------------------+------------------------------------------------------------------------
 pg_stat_statements | 1.10            | 1.10              | track planning and execution statistics of all SQL statements executed
(1 row)


postgres=# select * from pg_stat_statements;
ERROR:  pg_stat_statements must be loaded via shared_preload_libraries
postgres=#
In the postgresql.conf file, add these lines at the end:

shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.max = 10000
pg_stat_statements.track = all
Restart PostgreSQL:

systemctl restart postgresql-15
Now you can query pg_stat_statements and use it for performance tuning. For example, this query returns the top 10 statements by average elapsed time:

SELECT 
       query, 
       calls, 
       (total_exec_time/calls/1000)::integer AS avg_time_seconds 
FROM pg_stat_statements
WHERE 1=1 
--calls > 1000
ORDER BY avg_time_seconds DESC
LIMIT 10;
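To start measuring from a clean slate (for example, before a load test), the accumulated statistics can be cleared. pg_stat_statements_reset() is part of the extension:

```sql
-- discards all statistics gathered so far by pg_stat_statements;
-- requires superuser or pg_read_all_stats-style privileges
SELECT pg_stat_statements_reset();
```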
Happy Postgres tuning.

Saturday, May 13, 2023

Oracle Database monitor tablespace script

With this script you can monitor tablespace sizes in an Oracle database. The script returns the following columns:

Tablespace - tablespace name

Size MB - currently allocated space of the tablespace, in megabytes

Free MB - free space within the allocated space, in megabytes

Max MB - maximum size to which the tablespace can extend, in megabytes

Maximum free space in MB - free space remaining until the tablespace is completely full; this is the real value to monitor, because it shows the true free space in megabytes

Used MB - space used by segments out of the allocated space, in megabytes

Used % - percentage of used space relative to the maximum size; also a real monitoring value

Status - status derived from the used percentage: above 85% caution, above 90% alarm, above 95% critical; these are the thresholds the script uses.


(
select df.tablespace_name "Tablespace",
       round(df.tablespace_size / 1024 / 1024, 2) "Size MB",
       round(nvl(fs.free_space, 0) / 1024 / 1024, 2) "Free MB",
       round(df.tablespace_maxsize / 1024 / 1024, 2) "Max MB",
       round((df.tablespace_maxsize - (df.tablespace_size - round(nvl(fs.free_space, 0), 2)))
       / 1024 / 1024,2) "Maximum free space in MB",
       round((df.tablespace_size / 1024 / 1024 - fs.free_space / 1024 / 1024),2) "Used MB",     
       to_char(round((df.tablespace_size - round(nvl(fs.free_space, 0), 2)) /
                     df.tablespace_maxsize * 100,
                     2),
               '990D99') || ' %' "Used %",
       case
         when round((df.tablespace_size - round(nvl(fs.free_space, 0), 2)) /
                    df.tablespace_maxsize * 100,
                    2) > 95 then
          'CRITICAL - 95%'
         when round((df.tablespace_size - round(nvl(fs.free_space, 0), 2)) /
                    df.tablespace_maxsize * 100,
                    2) > 90 then
          'ALARM - 90%'
         when round((df.tablespace_size - round(nvl(fs.free_space, 0), 2)) /
                    df.tablespace_maxsize * 100,
                    2) > 85 then
          'CAUTION - 85%'
         else
          'OK < 85%'
       end "Status"
  from (select 
               tablespace_name,
               sum(bytes) tablespace_size,
               sum(decode(maxbytes, 0, bytes, null, bytes, maxbytes)) tablespace_maxsize
          from dba_data_files
         group by tablespace_name) df,
       (select 
               tablespace_name, sum(bytes) free_space
          from dba_free_space
         group by tablespace_name) fs
 where df.tablespace_name = fs.tablespace_name(+)
 )
UNION
(
select 
       ts.tablespace_name "Tablespace",
       round(ts.tablespace_size / 1024 / 1024, 2) "Size MB",
       round(nvl(tfs.free_space, 0) / 1024 / 1024, 2) "Free MB",
       round(ts.tablespace_maxsize / 1024 / 1024, 2) "Max MB",
       round((ts.tablespace_maxsize - (ts.tablespace_size - tfs.free_space))
       / 1024 / 1024, 2) "Maximum free space in MB",
       round((ts.tablespace_size / 1024 / 1024 - tfs.free_space / 1024 / 1024),2)
        "Used MB",
       to_char(round((ts.tablespace_size - tfs.free_space) /
                     ts.tablespace_maxsize * 100,
                     2),
               '990D99') || ' %' "Used %",
       case
         when round((ts.tablespace_size - tfs.free_space) /
                    ts.tablespace_maxsize * 100,
                    2) > 95 then
          'CRITICAL - 95%'
         when round((ts.tablespace_size - tfs.free_space) /
                    ts.tablespace_maxsize * 100,
                    2) > 90 then
          'ALARM - 90%'
         when round((ts.tablespace_size - tfs.free_space) /
                    ts.tablespace_maxsize * 100,
                    2) > 85 then
          'CAUTION - 85%'
         else
          'OK < 85%'
       end "Status"
  from (select 
               tablespace_name,
               sum(bytes) tablespace_size,
               sum(decode(maxbytes, 0, bytes, null, bytes, maxbytes)) tablespace_maxsize
          from dba_temp_files
         group by tablespace_name) ts,
       (select
               tablespace_name, free_space from dba_temp_free_space) tfs
 where ts.tablespace_name = tfs.tablespace_name
    )
order by 7 desc, 1 asc;



Note:
If this script runs slowly on your database, the main reason is usually the recyclebin: when it contains too many objects, the dba_free_space view becomes slow. The solution is to purge the recyclebin.
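
Purging can be done for the whole database when connected with sufficient privileges (note this permanently discards all dropped objects, so make sure nothing needs to be flashed back first):

```sql
-- as SYS: empties the recyclebin for all users
PURGE DBA_RECYCLEBIN;
```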