
Friday, January 26, 2024

How to run scheduler job on specific instance in RAC database

Here I'll show you how to configure a scheduler job to run on a specific instance in a RAC database.
Create a test job on the RAC database:
begin
  sys.dbms_scheduler.create_job(
    job_name        => 'SYS.TEST_SCHEDULER_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'declare
                          var1 varchar2(1);
                        begin
                          select * into var1 from dual;
                          dbms_output.put_line(var1);
                        end;',
    start_date      => to_date('26-01-2024 00:00:00','dd-mm-yyyy hh24:mi:ss'),
    repeat_interval => 'Freq=Secondly;Interval=5',
    end_date        => to_date(null),
    job_class       => 'DEFAULT_JOB_CLASS',
    enabled         => true,
    auto_drop       => false,
    comments        => '');
end;
/
Two scheduler job attributes control placement in RAC: INSTANCE_ID and INSTANCE_STICKINESS. INSTANCE_ID specifies the instance on which the job is to run. INSTANCE_STICKINESS defaults to TRUE, which means that in RAC the job keeps running on the instance where it first ran, unless that instance becomes extremely overloaded or unavailable; set it to FALSE and the job can run on any available instance. Neither attribute has any effect on a standalone (single-instance) server.
Check the job's INSTANCE_ID and INSTANCE_STICKINESS values:
select OWNER, JOB_NAME, INSTANCE_STICKINESS, INSTANCE_ID from dba_scheduler_jobs where JOB_NAME = 'TEST_SCHEDULER_JOB';
Change the INSTANCE_ID for the job:
begin
  dbms_scheduler.set_attribute(name => 'TEST_SCHEDULER_JOB', attribute => 'INSTANCE_ID', value => '2');
end;
/
Disable the INSTANCE_STICKINESS attribute for the job:
begin
  dbms_scheduler.set_attribute(name => 'TEST_SCHEDULER_JOB', attribute => 'INSTANCE_STICKINESS', value => FALSE);
end;
/
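
To confirm where the job actually executed, you can query its run history; DBA_SCHEDULER_JOB_RUN_DETAILS includes an INSTANCE_ID column per run (column names below are as of 11g, check your version's documentation):
select job_name, instance_id, status, actual_start_date
  from dba_scheduler_job_run_details
 where job_name = 'TEST_SCHEDULER_JOB'
 order by actual_start_date desc;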

Monday, October 19, 2020

ORA-00245: control file backup failed; in Oracle RAC, target might not be on shared storage

An RMAN backup of a RAC database failed with the following error:
Starting Control File and SPFILE Autobackup at 2015-09-14:23:13:44
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of Control File and SPFILE Autobackup command on Disk channel at 2015-09-14:23:13:45
ORA-00245: control file backup failed; in Oracle RAC, target might not be on shared storage

This happens because, from 11gR2 onwards, the controlfile backup is taken without holding the controlfile enqueue. For a non-RAC database this changes nothing, but in a RAC database any instance in the cluster may write to the snapshot controlfile, so the snapshot controlfile must be visible to all instances. It MUST be accessible by all nodes of the RAC database; if it does not reside on a shared device, RMAN raises this error while taking the controlfile snapshot. The same applies when backing up the controlfile from SQL*Plus or when controlfile autobackup is configured to a non-shared location.
Solution:
This is a RAC-specific configuration issue: the changed behaviour requires that the snapshot controlfile in a RAC environment reside on a shared location, so the correct configuration is as described below.
Check the snapshot controlfile location:
RMAN> show snapshot controlfile name;

Configure the snapshot controlfile to a shared disk :
RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+DG01/snapcf_RACDB.f';
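
It's also worth confirming that the target disk group (+DG01 above is just an example name, use one of your own shared disk groups) is mounted and visible from every instance, for example:
SQL> select inst_id, name, state from gv$asm_diskgroup;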

Thursday, October 26, 2017

DBA_JOBS_RUNNING view is not showing all running DBMS_JOBS across the nodes in RAC

If you have a Real Application Clusters (RAC) database and want to see the DBMS_JOB jobs currently running across all nodes, the DBA_JOBS_RUNNING view in Oracle 11g will not show them all: it only shows jobs running on the instance you query from, because its definition uses v$lock instead of gv$lock. The view definition looks like this:

create or replace view dba_jobs_running as
select v.SID, v.id2 JOB, j.FAILURES,
    LAST_DATE, substr(to_char(last_date,'HH24:MI:SS'),1,8) LAST_SEC,
    THIS_DATE, substr(to_char(this_date,'HH24:MI:SS'),1,8) THIS_SEC,
    j.field1 INSTANCE
  from sys.job$ j, v$lock v
  where v.type = 'JQ' and j.job (+)= v.id2;
comment on column DBA_JOBS_RUNNING.SID is 'Identifier of process which is executing the job.  See v$lock.';
comment on column DBA_JOBS_RUNNING.JOB is 'Identifier of job.  This job is currently executing.';
comment on column DBA_JOBS_RUNNING.FAILURES is 'How many times has this job started and failed since its last success?';
comment on column DBA_JOBS_RUNNING.LAST_DATE is 'Date that this job last successfully executed';
comment on column DBA_JOBS_RUNNING.LAST_SEC is 'Same as LAST_DATE.  This is when the last successful execution started.';
comment on column DBA_JOBS_RUNNING.THIS_DATE is 'Date that this job started executing (usually null if not executing)';
comment on column DBA_JOBS_RUNNING.THIS_SEC is 'Same as THIS_DATE.  This is when the last successful execution started.';
comment on column DBA_JOBS_RUNNING.INSTANCE is 'The instance number restricted to run the job';

In RAC we can use the following query to find all running jobs across all nodes:

select v.SID,
       v.id2 JOB,
       j.FAILURES,
       LAST_DATE,
       substr(to_char(last_date, 'HH24:MI:SS'), 1, 8) LAST_SEC,
       THIS_DATE,
       substr(to_char(this_date, 'HH24:MI:SS'), 1, 8) THIS_SEC,
       v.INST_ID instance
  from sys.job$ j, gv$lock v
 where v.type = 'JQ'
   and j.job(+) = v.id2;

You can also create a view from this query, call it dba_jobs_running_rac or something similar, and use that view in RAC databases to find all running jobs across all nodes. For example:
create or replace view dba_jobs_running_rac as
select v.SID,
       v.id2 JOB,
       j.FAILURES,
       LAST_DATE,
       substr(to_char(last_date, 'HH24:MI:SS'), 1, 8) LAST_SEC,
       THIS_DATE,
       substr(to_char(this_date, 'HH24:MI:SS'), 1, 8) THIS_SEC,
       v.INST_ID instance
  from sys.job$ j, gv$lock v
 where v.type = 'JQ'
   and j.job(+) = v.id2;
select * from dba_jobs_running_rac;
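
If you also need to see which session is executing each job and what it runs, a sketch along the same lines (joining gv$lock to gv$session and pulling the WHAT column from sys.job$) should work:

select s.inst_id,
       s.sid,
       s.serial#,
       v.id2 JOB,
       j.what
  from gv$lock v, gv$session s, sys.job$ j
 where v.type = 'JQ'
   and s.inst_id = v.inst_id
   and s.sid = v.sid
   and j.job(+) = v.id2;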

Wednesday, June 12, 2013

Recreate Oracle Enterprise Manager (OEM) Database Control 11g for Real Application Clusters (RAC)

If you want to recreate Enterprise Manager Database Control for RAC, the procedure is as follows:
[oracle@node1-230-231 ~]$ emca -config dbcontrol db -repos recreate -cluster

STARTED EMCA at Jun 12, 2013 12:15:19 PM
EM Configuration Assistant, Version 11.2.0.3.0 Production
Copyright (c) 2003, 2011, Oracle.  All rights reserved.

Enter the following information:
Database unique name: orcl
Service name: orcl
Listener ORACLE_HOME [ /u01/app/11.2.0/grid ]: 
Password for SYS user:  
Database Control is already configured for the database orcl
You have chosen to configure Database Control for managing the database orcl
This will remove the existing configuration and the default settings and 
perform a fresh configuration
----------------------------------------------------------------------
WARNING : While repository is dropped the database will be put in quiesce mode.
----------------------------------------------------------------------
Do you wish to continue? [yes(Y)/no(N)]: y
Password for DBSNMP user:  
Password for SYSMAN user:  
Cluster name: rac-scan

If you don't know the cluster name, open another terminal and run cemutlo as the grid user:
[oracle@node1-230-231 ~]$ su - grid
Password: 
[grid@node1-230-231 ~]$ cemutlo -n
rac-scan

Email address for notifications (optional): 
Outgoing Mail (SMTP) server for notifications (optional): 
ASM ORACLE_HOME [ /u01/app/11.2.0/grid ]: 
ASM port [ 1521 ]: 
ASM username [ ASMSNMP ]: 
ASM user password:  
-----------------------------------------------------------------

You have specified the following settings

Database ORACLE_HOME ................ /u01/app/oracle/product/11.2.0/dbhome_1

Database instance hostname ................
Listener ORACLE_HOME ................ /u01/app/11.2.0/grid
Listener port number ................ 1521
Cluster name ................ rac-scan
Database unique name ................ orcl
Email address for notifications ............... 
Outgoing Mail (SMTP) server for notifications ............... 
ASM ORACLE_HOME ................ /u01/app/11.2.0/grid
ASM port ................ 1521
ASM user role ................ SYSDBA
ASM username ................ ASMSNMP

-----------------------------------------------------------------
----------------------------------------------------------------------
WARNING : While repository is dropped the database will be put in quiesce mode.
----------------------------------------------------------------------
Do you wish to continue? [yes(Y)/no(N)]: y
Jun 12, 2013 12:17:27 PM oracle.sysman.emcp.EMConfig perform
INFO: This operation is being logged at 
/u01/app/oracle/cfgtoollogs/emca/orcl/emca_2013_06_12_12_15_19.log.
Jun 12, 2013 12:17:54 PM oracle.sysman.emcp.util.PortManager isPortInUse
WARNING: Specified port 5540 is already in use.
Jun 12, 2013 12:17:54 PM oracle.sysman.emcp.util.PortManager isPortInUse
WARNING: Specified port 5520 is already in use.
Jun 12, 2013 12:17:54 PM oracle.sysman.emcp.util.PortManager isPortInUse
WARNING: Specified port 1158 is already in use.
Jun 12, 2013 12:17:54 PM oracle.sysman.emcp.util.DBControlUtil stopOMS
INFO: Stopping Database Control (this may take a while) ...
Jun 12, 2013 12:18:18 PM oracle.sysman.emcp.EMReposConfig invoke
INFO: Dropping the EM repository (this may take a while) ...
Jun 12, 2013 12:22:02 PM oracle.sysman.emcp.EMReposConfig invoke
INFO: Repository successfully dropped
Jun 12, 2013 12:22:06 PM oracle.sysman.emcp.EMReposConfig createRepository
INFO: Creating the EM repository (this may take a while) ...
Jun 12, 2013 12:30:28 PM oracle.sysman.emcp.EMReposConfig invoke
INFO: Repository successfully created
Jun 12, 2013 12:30:47 PM oracle.sysman.emcp.EMReposConfig uploadConfigDataToRepository
INFO: Uploading configuration data to EM repository (this may take a while) ...
Jun 12, 2013 12:31:50 PM oracle.sysman.emcp.EMReposConfig invoke
INFO: Uploaded configuration data successfully
Jun 12, 2013 12:31:51 PM oracle.sysman.emcp.EMDBCConfig instantiateOC4JConfigFiles
INFO: Propagating 
/u01/app/oracle/product/11.2.0/dbhome_1/oc4j/j2ee/OC4J_DBConsole_node1-230-231_orcl to remote nodes ...
Jun 12, 2013 12:31:52 PM oracle.sysman.emcp.EMDBCConfig instantiateOC4JConfigFiles
INFO: Propagating 
/u01/app/oracle/product/11.2.0/dbhome_1/oc4j/j2ee/OC4J_DBConsole_node2-232-233_orcl to remote nodes ...
Jun 12, 2013 12:32:04 PM oracle.sysman.emcp.EMAgentConfig deployStateDirs
INFO: Propagating /u01/app/oracle/product/11.2.0/dbhome_1/node1-230-231_orcl to remote nodes ...
Jun 12, 2013 12:32:06 PM oracle.sysman.emcp.EMAgentConfig deployStateDirs
INFO: Propagating /u01/app/oracle/product/11.2.0/dbhome_1/node2-232-233_orcl to remote nodes ...
Jun 12, 2013 12:32:29 PM oracle.sysman.emcp.util.DBControlUtil secureDBConsole
INFO: Securing Database Control (this may take a while) ...
Jun 12, 2013 12:33:17 PM oracle.sysman.emcp.util.DBControlUtil startOMS
INFO: Starting Database Control (this may take a while) ...
Jun 12, 2013 12:33:59 PM oracle.sysman.emcp.EMDBPostConfig performConfiguration
INFO: Database Control started successfully
Jun 12, 2013 12:33:59 PM oracle.sysman.emcp.EMDBPostConfig performConfiguration
INFO: >>>>>>>>>>> The Database Control URL is 
https://node1-230-231:1158/em <<<<<<<<<<<
Jun 12, 2013 12:34:09 PM oracle.sysman.emcp.EMDBPostConfig showClusterDBCAgentMessage
INFO: 
****************  Current Configuration  ****************
 INSTANCE            NODE           DBCONTROL_UPLOAD_HOST
----------        ----------        ---------------------

orcl              node1-230-231              node1-230-231
orcl              node2-232-233              node1-230-231


Jun 12, 2013 12:34:09 PM oracle.sysman.emcp.EMDBPostConfig invoke
WARNING: 
************************  WARNING  ************************

Management Repository has been placed in secure mode wherein Enterprise Manager data will be encrypted.  
The encryption key has been placed in the file: 
/u01/app/oracle/product/11.2.0/dbhome_1/node1-230-231_orcl/sysman/config/emkey.ora. 
Ensure this file is backed up as the encrypted data will become unusable if this file is lost. 

***********************************************************
Enterprise Manager configuration completed successfully
FINISHED EMCA at Jun 12, 2013 12:34:09 PM


Check the status of Enterprise Manager:
[oracle@node1-230-231 ~]$ emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.3.0 
Copyright (c) 1996, 2011 Oracle Corporation.  All rights reserved.
https://node1-230-231:1158/em/console/aboutApplication
Oracle Enterprise Manager 11g is running. 
------------------------------------------------------------------
Logs are generated in directory 
/u01/app/oracle/product/11.2.0/dbhome_1/node1-230-231_orcl/sysman/log 
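
If you ever need to bounce Database Control afterwards (for example after a configuration change), the standard emctl commands, run as the oracle user on each node, are:
[oracle@node1-230-231 ~]$ emctl stop dbconsole
[oracle@node1-230-231 ~]$ emctl start dbconsole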


That's all.