Google it ....

Showing posts with label Oracle.

Friday, October 16, 2020

SYS.EXP_HEAD$ Growth in Tablespace SYSAUX

Optimizer expression tracking can allocate excessive space in SYSAUX through the following objects:


SYS.EXP_HEAD$
SYS.EXP_OBJ$
SYS.EXP_STAT$

Optimizer expression statistics monitoring was introduced in database version 12.2, where it is enabled by default. It was added to support a new optimizer feature that collects expression usage statistics from SQL queries; for specific applications, this can cause the repository segments to grow significantly. The feature is controlled by the hidden parameter "_column_tracking_level".

What is this feature used for? You can imagine it being useful for optimization in general, but right now it has a single client: Oracle Database In-Memory, where it is referred to as the Expression Statistics Store (ESS). Note: if In-Memory is disabled, disabling the feature has no effect on SQL execution plans.
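Before disabling or purging anything, it is worth checking how much SYSAUX space these segments actually occupy. A quick check, run as a DBA user, might look like this:

```sql
-- Gauge the size of the expression-tracking segments and their indexes in SYSAUX
select segment_name, segment_type, round(bytes/1024/1024) size_mb
  from dba_segments
 where owner = 'SYS'
   and segment_name in ('EXP_HEAD$', 'EXP_OBJ$', 'EXP_STAT$',
                        'I_EXP_HEAD$', 'I_EXP_STAT$', 'I_EXP_OBJ$')
 order by bytes desc;
```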
Solution:
Disable future monitoring:
alter system set "_column_tracking_level"=17 scope=both;
--wait 10 minutes

To purge data from tables:
exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO();
truncate table sys.exp_head$ drop storage;
truncate table sys.exp_obj$ drop storage;
truncate table sys.exp_stat$ drop storage;
alter index SYS.I_EXP_HEAD$ rebuild tablespace sysaux online;
alter index SYS.I_EXP_STAT$ rebuild tablespace sysaux online;
alter index SYS.I_EXP_OBJ$ rebuild tablespace sysaux online;

To re-enable expression statistics monitoring later:
alter system set "_column_tracking_level"=21 scope=both; 

Wednesday, September 30, 2020

PLS-908: The stored format is not supported by this release ORA-06541: PL/SQL: compilation error ORA-06553: PLS-1: ORA-06540

On one of our databases, a development team member was getting the error below when trying to use a remote package procedure in a local view (the local user has execute privilege on the remote object):

ORA-04052: error occurred when looking up remote object test.test_pack@testdb
ORA-06541: PL/SQL: compilation error - compilation aborted
ORA-06553: PLS-1: ORA-06540: PL/SQL: compilation error
ORA-06553: PLS-908: The stored format of test.test_pack@testdb is not supported by this release
On MetaLink there is a note, PLS-908 Error When Objects Are Accessed Across a DBLink (Doc ID 824951.1), which describes this as a bug. In our case, however, our version differs from the one in the note, and the development team member mentioned that the view had been working until recently. That was the hint I needed, so I tried flushing the shared pool with the command:
SQL> ALTER SYSTEM FLUSH SHARED_POOL;
on the database where the view exists, and bingo! It works now and everybody is happy!

Monday, September 7, 2020

ORA-16854: apply lag could not be determined in DGMGRL

On one of our Data Guard configurations, the error below appeared in DGMGRL when checking the configuration status, even though the primary and standby databases were fine, synchronized with no delay.
ORA-16854: apply lag could not be determined
DGMGRL> show database verbose testdbst;

Database - testdbst

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 1 second ago)
  Apply Lag:          (unknown)
  Average Apply Rate: (unknown)
  Active Apply Rate:  (unknown)
  Maximum Apply Rate: (unknown)
  Real Time Query:    ON
  Instance(s):
    testdbst

  Properties:
    DGConnectIdentifier             = 'testdbstby'
    ObserverConnectIdentifier       = ''
    LogXptMode                      = 'ASYNC'
    RedoRoutes                      = ''
    DelayMins                       = '0'
    Binding                         = 'OPTIONAL'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '300'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyLagThreshold               = '30'
    TransportLagThreshold           = '30'
    TransportDisconnectedThreshold  = '30'
    ApplyParallel                   = 'AUTO'
    ApplyInstances                  = '0'
    StandbyFileManagement           = 'AUTO'
    ArchiveLagTarget                = '900'
    LogArchiveMaxProcesses          = '4'
    LogArchiveMinSucceedDest        = '1'
    DataGuardSyncLatency            = '0'
    DbFileNameConvert               = '+DATA1/testdb, +DATA2/testdbst, +RECO1/testdb, +REC2/testdbst'
    LogFileNameConvert              = '+DATA1/testdb, +DATA2/testdbst, +RECO1/testdb, +RECO2/testdbst'
    FastStartFailoverTarget         = ''
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    PreferredObserverHosts          = ''
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=testdbst-dg-vip)(PORT=1522))(CONNECT_DATA=(SERVICE_NAME=testdbst)(SERVER=DEDICATED)))'
    StandbyArchiveLocation          = 'USE_DB_RECOVERY_FILE_DEST'
    AlternateLocation               = ''
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = '%t_%s_%r.dbf'
    TopWaitEvents                   = '(monitor)'

  Log file locations:
    Alert log               : /u01/app/oracle/diag/rdbms/testdbst/testdbst/trace/alert_testdbst.log
    Data Guard Broker log   : /u01/app/oracle/diag/rdbms/testdbst/testdbst/trace/drctestdbst.log

Database Status:
SUCCESS
Solution: 

1) Connect to the standby database and check whether the MRP process is still running. If it is, cancel it with the command:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
and start MRP process with command:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;
2) If the above does not work, try simply restarting the standby database.

3) If the warning "apply lag could not be determined" still persists, you can disable the apply lag check with the following command in DGMGRL:
DGMGRL> edit database 'testdbst' set property 'ApplyLagThreshold'=0;
This makes the warning go away, since a threshold of 0 disables the apply lag check.
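Independently of the broker, you can check on the standby whether the lag is being computed at all by querying v$dataguard_stats directly:

```sql
-- On the standby: if VALUE is null here, the broker cannot determine the lag either
select name, value, time_computed, datum_time
  from v$dataguard_stats
 where name in ('transport lag', 'apply lag');
```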

Thursday, September 3, 2020

CRS-4995: The command 'Modify resource' is invalid in crsctl. Use srvctl for this command.

Starting with Oracle 12c, when I use the command

crsctl modify resource ora.testdb.db -attr AUTO_START=1
to enable autostart for a specific resource, the following error appears:
CRS-4995: The command 'Modify resource' is invalid in crsctl. Use srvctl for this command.
Solution: 
To modify a resource with crsctl anyway, there is a special flag, -unsupported, and the whole command becomes:
crsctl modify resource ora.testdb.db -attr AUTO_START=1 -unsupported

Check it:
crsctl stat res ora.testdb.db -p

NAME=ora.testdb.db
TYPE=ora.database.type
AUTO_START=1

Monday, August 3, 2020

ORA-27206: requested file not found in media management catalog

While taking a backup of an Oracle database through Veritas NetBackup, the errors below appear:

RMAN-03009: failure of backup command on chan05 channel at 02/08/2020 05:24:23
ORA-19513: failed to identify sequential file
ORA-27206: requested file not found in media management catalog
Solution:
The problem can occur if you added the client's IP address on the NetBackup master server while configuring the backup policy.
Change the client entry to the client's hostname (with domain name) instead of its IP address.

Monday, July 20, 2020

How to install TFA - Oracle Trace File Analyzer

In this post I will explain how we can install and use TFA. Oracle Trace File Analyzer (TFA) provides a number of diagnostic tools in a single bundle, making it easy to gather diagnostic information about the Oracle database and clusterware, which in turn helps with problem resolution when dealing with Oracle Support.

In many cases Oracle Support will request output from this tool for your Service Requests.

First we need to download the latest TFA version from My Oracle Support; see TFA Collector – TFA with Database Support Tools Bundle (Doc ID 1513912.1).


[root@exadb01 ahf]# ./ahf_setup 

AHF Installer for Platform Linux Architecture x86_64

AHF Installation Log : /tmp/ahf_install_202000_255172_2020_07_20-12_49_45.log

Starting Autonomous Health Framework (AHF) Installation

AHF Version: 20.2.0 Build Date: 202006260723

TFA is already installed at : /u01/app/12.2.0.1/grid/suptools/tfa/exadb01/tfa_home

Installed TFA Version : 192000 Build ID : 20190405072025

Default AHF Location : /opt/oracle.ahf

Do you want to install AHF at [/opt/oracle.ahf] ? [Y]|N : N  

Please Enter new AHF Location : /u01/ahf

AHF Location : /u01/ahf/oracle.ahf

AHF Data Directory stores diagnostic collections and metadata.
AHF Data Directory requires at least 5GB (Recommended 10GB) of free space.

Choose Data Directory from below options : 

1. /u01/app/grid [Free Space : 31501 MB]
2. /u01/ahf/oracle.ahf [Free Space : 31501 MB]
3. Enter a different Location

Choose Option [1 - 3] : 2

AHF Data Directory : /u01/ahf/oracle.ahf/data

Shutting down TFA : /u01/app/12.2.0.1/grid/suptools/tfa/exadb01/tfa_home

Copying TFA Data Files from /u01/app/12.2.0.1/grid/suptools/tfa/exadb01/tfa_home

Uninstalling TFA : /u01/app/12.2.0.1/grid/suptools/tfa/exadb01/tfa_home

Do you want to add AHF Notification Email IDs ? [Y]|N : N

AHF will also be installed/upgraded on these Cluster Nodes :

1. exadb02

The AHF Location and AHF Data Directory must exist on the above nodes
AHF Location : /u01/ahf/oracle.ahf
AHF Data Directory : /u01/ahf/oracle.ahf/data

Do you want to install/upgrade AHF on Cluster Nodes ? [Y]|N : Y

Extracting AHF to /u01/ahf/oracle.ahf

Configuring TFA Services

Copying TFA Data Files to AHF

Discovering Nodes and Oracle Resources


TFA will configure Storage Cells using SSH Setup:

.-----------------------------------.
|   | EXADATA CELL | CURRENT STATUS |
+---+--------------+----------------+
| 1 | exa1cel04    | ONLINE         |
| 2 | exa1cel05    | ONLINE         |
| 3 | exa1cel06    | ONLINE         |
| 4 | exa1cel07    | ONLINE         |
'---+--------------+----------------'


Starting TFA Services

.----------------------------------------------------------------------------.
| Host   | Status of TFA | PID    | Port | Version    | Build ID             |
+--------+---------------+--------+------+------------+----------------------+
| exadb01 | RUNNING       | 386561 | 5000 | 20.2.0.0.0 | 20200020200626072308 |
'--------+---------------+--------+------+------------+----------------------'

Running TFA Inventory...

Adding default users to TFA Access list...

.--------------------------------------------------------.
|              Summary of AHF Configuration              |
+-----------------+--------------------------------------+
| Parameter       | Value                                |
+-----------------+--------------------------------------+
| AHF Location    | /u01/ahf/oracle.ahf                  |
| TFA Location    | /u01/ahf/oracle.ahf/tfa              |
| Exachk Location | /u01/ahf/oracle.ahf/exachk           |
| Data Directory  | /u01/ahf/oracle.ahf/data             |
| Repository      | /u01/ahf/oracle.ahf/data/repository  |
| Diag Directory  | /u01/ahf/oracle.ahf/data/exadb01/diag |
'-----------------+--------------------------------------'


Starting exachk scheduler from AHF ...

AHF install completed on exadb01

Installing AHF on Remote Nodes :

AHF will be installed on exadb02, Please wait.

Installing AHF on exadb02 :

[exadb02] Copying AHF Installer

[exadb02] Running AHF Installer

AHF binaries are available in /u01/ahf/oracle.ahf/bin

AHF is successfully installed

Moving /tmp/ahf_install_202000_255172_2020_07_20-12_49_45.log to /u01/ahf/oracle.ahf/data/exadb01/diag/ahf/
# Collect data for all components for a specific time period.
./tfactl diagcollect -from "2020-07-18 13:00:00" -to "2020-07-19 13:00:00"
# Collect data for all components for the last 12 hours.
./tfactl diagcollect
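Beyond diagcollect, a few other tfactl commands are commonly useful; these are sketched from the same installation, so verify them against your TFA version:

```shell
# Check that the TFA daemon is running on all nodes
./tfactl status

# Print the current TFA configuration
./tfactl print config

# Scan alert logs and system logs for errors in the last hour
./tfactl analyze -last 1h
```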

Monday, June 15, 2020

ORA-38856: CANNOT MARK INSTANCE UNNAMED_INSTANCE_2 (REDO THREAD 2) AS ENABLED

A backup was taken on a RAC database and we are trying to restore it on a single instance. During the open resetlogs, this error appears:
SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-38856: cannot mark instance UNNAMED_INSTANCE_2 (redo thread 2) as enabled
Details: ORA-38856: cannot mark instance UNNAMED_INSTANCE_2 (redo thread 2) as enabled. This is Oracle bug 4355382, ORA-38856: FAILED TO OPEN DATABASE WITH RESETLOGS WHEN USING RAC BACKUP. The problem is related to the number of redo threads used by the source database and an expectation that the cloned database must have an identical number of threads.

Solution: 
 1) Set the following parameter in the auxiliary init.ora file: _no_recovery_through_resetlogs=TRUE.
 2) Open the database with resetlogs.
 3) Remove _no_recovery_through_resetlogs=TRUE from the init.ora file.
 4) Restart the database.
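As a sketch of those steps in SQL*Plus (assuming an spfile is in use; with a plain init.ora you would edit the file directly as described above):

```sql
-- 1) Set the hidden parameter in the auxiliary instance
alter system set "_no_recovery_through_resetlogs"=TRUE scope=spfile;
-- restart the instance into mount state, then:
-- 2) Open the database with resetlogs
alter database open resetlogs;
-- 3) Remove the parameter again
alter system reset "_no_recovery_through_resetlogs" scope=spfile sid='*';
-- 4) Restart the database
shutdown immediate
startup
```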

Thursday, June 4, 2020

ORA-12516: TNS: listener could not find available handler with matching protocol stack

On one of our databases, an error appears when users try to connect:
ORA-12516: TNS: listener could not find available handler with matching protocol stack
What does it mean? It means the listener cannot find an available service handler. Let's check in the database how many processes we have:
sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Jun 4 11:51:25 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0


SQL> select limit_value, current_utilization, max_utilization from v$resource_limit 
     where resource_name='processes';

LIMIT_VALUE                              CURRENT_UTILIZATION MAX_UTILIZATION
---------------------------------------- ------------------- ---------------
       300                                               295             300

SQL> 
This is the reason: Oracle cannot create a server process for the corresponding user process because the limit is almost exhausted. Solution: just increase the processes parameter and restart the database.
alter system set processes=500 scope=spfile;
shutdown immediate;
startup;

Friday, May 22, 2020

Error in invoking target 'agent nmhs' of make file ins_emagent.mk while installing Oracle database software

Error in invoking target 'agent nmhs' of make file ins_emagent.mk while installing Oracle 11gR2 (11.2.0.4.0) on Red Hat Enterprise Linux 7
Problem:

While installing Oracle database software 11.2.0.4.0 on Linux 7 we may encounter this error during the installation:

Error in invoking target 'agent nmhs' of make file 
/u0/app/oracle/product/11.2.0.4.0/sysman/lib/ins_emagent.mk

Cause:

A linking failure in ins_emagent.mk: the link line for the nmhs agent target is missing the libnnz11 library.

Solution:

To fix it, edit $ORACLE_HOME/sysman/lib/ins_emagent.mk and change the following line:


$(MK_EMAGENT_NMECTL)

to

$(MK_EMAGENT_NMECTL) -lnnz11

then press the Retry button in the installer.
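If you prefer to script the change, a sed one-liner does the same edit. The sketch below works on a throwaway copy; on a real system the file is $ORACLE_HOME/sysman/lib/ins_emagent.mk, and you should back it up first:

```shell
# Demonstrate the edit on a temporary copy of the makefile line
MK=/tmp/ins_emagent_demo.mk
printf '\t$(MK_EMAGENT_NMECTL)\n' > "$MK"    # the original link line
cp "$MK" "$MK.bak"                           # keep a backup
# Append -lnnz11 to the link macro, exactly as the manual fix does
sed -i 's/\$(MK_EMAGENT_NMECTL)$/$(MK_EMAGENT_NMECTL) -lnnz11/' "$MK"
cat "$MK"    # now contains: $(MK_EMAGENT_NMECTL) -lnnz11
```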

Thursday, November 7, 2019

ORA-20001: Statistics Advisor: Invalid task name for the current user

In alert log we are getting error:

2019-11-06T01:40:00.897926+04:00
Errors in file /u0/app/oracle/diag/rdbms/testdb/TESTDB/trace/TESTDB_j000_90818.trc:
ORA-12012: error on auto execute of job "SYS"."ORA$AT_OS_OPT_SY_43482"
ORA-20001: Statistics Advisor: Invalid task name for the current user
ORA-06512: at "SYS.DBMS_STATS", line 47207
ORA-06512: at "SYS.DBMS_STATS_ADVISOR", line 882
ORA-06512: at "SYS.DBMS_STATS_INTERNAL", line 20059
ORA-06512: at "SYS.DBMS_STATS_INTERNAL", line 22201
ORA-06512: at "SYS.DBMS_STATS", line 47197

Cause:
The issue occurs when the database was created with DBCA and the advisor package was not initialized properly, leaving the Stats Advisor tasks missing from the created database.
It is a known Oracle bug and can appear when a database is created with DBCA in Oracle 12.2.

Solution:

1) Connect as a user with the SYSDBA privilege and run the following query:
col name for a30
select name, ctime, how_created
from sys.wri$_adv_tasks
where owner_name = 'SYS'
and name in ('AUTO_STATS_ADVISOR_TASK','INDIVIDUAL_STATS_ADVISOR_TASK');

no rows selected

2) Initialize the package with the following command:
EXEC dbms_stats.init_package();

3) Verify that the tasks were created; this fixes the issue:
col name for a30
select name, ctime, how_created
from sys.wri$_adv_tasks
where owner_name = 'SYS'
and name in ('AUTO_STATS_ADVISOR_TASK','INDIVIDUAL_STATS_ADVISOR_TASK');

NAME                           CTIME     HOW_CREATED                            
------------------------------ --------- --------------       
AUTO_STATS_ADVISOR_TASK        07-NOV-19 CMD                                    
INDIVIDUAL_STATS_ADVISOR_TASK  07-NOV-19 CMD 

Wednesday, October 23, 2019


During Installation ASM disks not shown

During installation of Oracle Automatic Storage Management (ASM), I choose "Configure Oracle Grid Infrastructure for a Standalone Server (Oracle Restart)" and click Next.



On the next screen it turns out that my disk for ASM is not shown.



Solution:

Click the "Change Discovery Path" button
and enter "/dev/oracleasm/disks".

After that the disk is shown and you can continue your installation/configuration.
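Before retrying, you can also confirm from the OS that ASMLib is actually presenting the disks under that path (this assumes ASMLib is in use; the disk name shown is an example):

```shell
ls -l /dev/oracleasm/disks
# expect entries like: brw-rw---- 1 grid asmadmin ... DISK1
```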



Monday, October 21, 2019

Error accessing PRODUCT_USER_PROFILE Warning: Product user profile information not loaded! You may need to run PUPBLD.SQL as SYSTEM

On one of our databases, when the customer tries to log in to SQL*Plus as protect (or any other account other than SYS or SYSTEM), the following set of messages appears:

Error accessing PRODUCT_USER_PROFILE
Warning: Product user profile information not loaded!
You may need to run PUPBLD.SQL as SYSTEM

This is only a warning message and the customer can of course still log in to the database, but making these messages disappear is pretty simple.

This warning message means PRODUCT_USER_PROFILE table has not been built in the SYSTEM account.

To create this table, a database administrator (DBA) must run the pupbld.sql script located in $ORACLE_HOME/sqlplus/admin; the script must be run as the SYSTEM user.

If the PRODUCT_USER_PROFILE table is created incorrectly, all users other than SYSTEM will see a warning when connecting to Oracle from SQL*Plus that the PRODUCT_USER_PROFILE information is not loaded.

Solution

$ cd $ORACLE_HOME/sqlplus/admin
$ ls
glogin.sql  help  libsqlplus.def  plustrce.sql  pupbld.sql

sqlplus 

Enter user-name: system
Enter password: ******

SQL> @ pupbld.sql

Wednesday, October 16, 2019

Distributed Transactions in Prepared State Fails and can't rollback or commit or purge with event "free global transaction table entry"

Distributed transactions stuck in the prepared state (visible in dba_2pc_pending) produce the errors below.
· Two kinds of errors can appear in the alert log, as shown below:
1. ORA-01591: lock held by in-doubt distributed transaction 103.10.39920
2. ORA-02050: transaction 9.2.446246 rolled back, some remote DBs may be in-doubt
ORA-03135: connection lost contact
Sun Aug 26 08:57:48 2012
DISTRIB TRAN db.ac.a68396a9.9.2.446246 is local tran 9.2.446246 (hex=09.02.6cf26)
insert pending collecting tran, scn=10238878734264 (hex=94f.ecbad7b8
(Note: at this stage the database deletes the transaction automatically. Otherwise the transaction goes into a dead state, with its state remaining prepared.)
· There is a chance of high concurrency on the database, with buffer busy waits and enq: index contention as well.
· Blocking sessions will show the wait event "Undo Segment Recovery".
· If you query the transaction from dba_2pc_pending, you can see that its state is prepared.

select * from dba_2pc_pending where state='prepared' order by fail_time;
This view has a column called GLOBAL_TRAN_ID; use it to track the transaction details on the other database, like below:

select * from dba_2pc_pending@db2 where GLOBAL_TRAN_ID='db2.232bc983.41.6.4224568';

The gateway database shows the transaction state as committed while the main database's state is prepared.
In some situations, only the main database's transaction state remains prepared!

Solution

Oracle's recommendation is to roll back or commit the transaction.

select * from dba_2pc_pending where local_tran_id='xx.39920';

103.10.xxx920 db1.a68396a9.103.10.39920 prepared

exec DBMS_TRANSACTION.rollback_force('local_trans_id');

example: exec DBMS_TRANSACTION.rollback_force('6.1xxxx297');
or
commit force 'local_trans_id';
example: commit force '103.10.39920';
If the rollback or commit hangs with the wait event "free global transaction table entry", try to purge the transaction:
exec dbms_transaction.purge_lost_db_entry('local_trans_id');
example: exec dbms_transaction.purge_lost_db_entry('103.10.39920');
This can also fail with an error:
ERROR at line 1:
ORA-06510: PL/SQL: unhandled user-defined exception
ORA-06512: at "SYS.DBMS_TRANSACTION", line 94
ORA-06512: at line 1

Only when rollback, commit, and purge have all failed with errors should you follow the steps below.

Case 1:

Login to the database as sysdba

select * from dba_2pc_pending;

Ensure that the transaction is in the PREPARED state:
SELECT KTUXEUSN, KTUXESLT, KTUXESQN, /* Transaction ID */
KTUXESTA Status,
KTUXECFL Flags
FROM x$ktuxe
WHERE ktuxesta!='INACTIVE'
AND ktuxeusn=6
AND KTUXESLT=12
AND KTUXESQN=399297;

SELECT * FROM sys.pending_trans$;
SELECT * FROM sys.pending_sessions$;
SELECT * FROM sys.pending_sub_sessions$;

/* <== Replace this with your local tran id */
delete from sys.pending_trans$ where local_tran_id = '103.10.39920'; 
/* <== Replace this with your local tran id */
delete from sys.pending_sessions$ where local_tran_id = '103.10.39920'; 
/* <== Replace this with your local tran id */
delete from sys.pending_sub_sessions$ where local_tran_id ='103.10.39920'; 

commit;

alter system disable distributed recovery;
Insert some dummy rows:
insert into pending_trans$
(LOCAL_TRAN_ID,
GLOBAL_TRAN_FMT,
GLOBAL_ORACLE_ID,
STATE,
STATUS,
SESSION_VECTOR,
RECO_VECTOR,
TYPE#,
FAIL_TIME,
RECO_TIME)
values
('103.10.39920', /* <== Replace this with your local tran id */
   306206, /* */
   'XXXXXXX.12345.1.2.3', /* These values can be used without any */
   'prepared',
   'P', /* modification. Most of the values are */
   hextoraw('00000001'), /* constant. */
   hextoraw('00000000'), /* */
   0,
   sysdate,
   sysdate);
insert into pending_sessions$
values
  ('103.10.39920', /* <==Replace only this with your local tran id */
   1,
   hextoraw('05004F003A1500000104'),
   'C',
   0,
   30258592,
   '',
   146);

commit;

commit force '103.10.39920';

BEGIN
  dbms_transaction.purge_lost_db_entry('103.10.39920');
END;
/

alter system enable distributed recovery;

Note: if Case 1 fails, follow Case 2.

Case 2:

Cause

ORA-02075 is raised because a commit force or rollback force was issued from a session in another instance that is performing recovery of pending transactions.

Solution

SQL> alter system disable distributed recovery;
SQL> delete from sys.pending_trans$ where local_tran_id = 'LOCAL TRANSACTION ID';
SQL> delete from sys.pending_sessions$ where local_tran_id = 'LOCAL TRANSACTION ID';
SQL> delete from sys.pending_sub_sessions$ where local_tran_id = 'LOCAL TRANSACTION ID';
SQL> commit;

SQL> alter system enable distributed recovery;

Thursday, September 26, 2019

Oracle Installation error: You do not have sufficient permissions to access the inventory

When I try to install the Oracle software, I get this error:
You do not have sufficient permissions to access the inventory '/u01/app/oraInventory'. Installation cannot continue. Make sure that you have read/write permissions to the inventory directory and restart the installer.: Permission denied

$ ./runInstaller
You do not have sufficient permissions to access
the inventory '/u01/app/oraInventory'. 
Installation cannot continue. 
Make sure that you have read/write permissions
to the inventory directory and restart 
the installer.: Permission denied

I try removing the oraInventory and running the installer again:

$ rm -rf /u01/app/oraInventory

$ ./runInstaller
You do not have sufficient permissions to access
the inventory '/u01/app/oraInventory'. 
Installation cannot continue. 
Make sure that you have read/write permissions
to the inventory directory and restart 
the installer.: Permission denied


Oracle thinks there is still an oraInventory, even though I dropped it.
There is a file somewhere (not in /u01) telling Oracle that the oraInventory exists.

It is here: /etc/oraInst.loc. After doing a few Oracle installs I guessed it was in /etc, but you could use find or locate to find this file.

as root:
rm /etc/oraInst.loc

as oracle:
$ ./runInstaller
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB. Actual 12603 MB Passed
Checking swap space: must be greater than 150 MB. Actual 2015 MB Passed
..
It works!

Wednesday, September 18, 2019

ORA-20013: dbms_qopatch ran mostly in non install area

Datapatch fails with the following errors:
./datapatch -verbose
SQL Patching tool version 12.1.0.2.0 on Fri Jul 17 09:45:14 2015
Copyright (c) 2015, Oracle. All rights reserved.

Log file for this invocation: /cfgtoollogs/sqlpatch/sqlpatch_22821_2015_07_17_09_45_14/sqlpatch_invocation.log

Connecting to database…OK
Bootstrapping registry and package to current versions…done

Queryable inventory could not determine the current opatch status.
Execute 'select dbms_sqlpatch.verify_queryable_inventory from dual'
and/or check the invocation log
/cfgtoollogs/sqlpatch/sqlpatch_22821_2015_07_17_09_45_14/sqlpatch_invocation.log
for the complete error.
Prereq check failed, exiting without installing any patches.

Please refer to MOS Note 1609718.1 and/or the invocation log
/cfgtoollogs/sqlpatch/sqlpatch_22821_2015_07_17_09_45_14/sqlpatch_invocation.log
for information on how to resolve the above errors.

SQL Patching tool complete on Fri Jul 17 09:45:21 2015


OR

Connecting to database...OK
Determining current state...DBD::Oracle::st execute failed: ORA-20009: Job Load_opatch_inventory_1execution failed
ORA-06512: at "SYS.DBMS_QOPATCH", line 1011
ORA-06512: at line 4 (DBD ERROR: OCIStmtExecute) [for Statement "DECLARE
x XMLType;
BEGIN
x := dbms_qopatch.get_pending_activity;
? := x.getStringVal();

END;" with ParamValues: :p1=undef] at /RAC/oracle/12.1.0.1/dbhome_1/sqlpatch/sqlpatch.pm line 648.

Further, accessing the inventory through DBMS_QOPATCH also fails with the following error:

SQL> select dbms_sqlpatch.verify_queryable_inventory from dual;
ORA-20013: DBMS_QOPATCH ran mostly in non install area

SQL> select dbms_qopatch.get_opatch_install_info() from dual;
ERROR:
ORA-20013: DBMS_QOPATCH ran mostly in non install area
ORA-06512: at "SYS.DBMS_QOPATCH", line 1937
ORA-06512: at "SYS.DBMS_QOPATCH", line 133

Cause:
OPATCH_INST_DIR was pointing to $ORACLE_HOME/QOpatch
or
OPATCH_LOG_DIR was pointing to $ORACLE_HOME/QOPatch
or
OPATCH_SCRIPT_DIR was pointing to $ORACLE_HOME/OPatch
or
Any one or more of the above conditions may be true.



This can be checked using the following query :
SQL> select DIRECTORY_NAME, DIRECTORY_PATH from dba_directories ;

Instead:
OPATCH_INST_DIR should point to $ORACLE_HOME/OPatch
OPATCH_LOG_DIR should point to $ORACLE_HOME/QOpatch
OPATCH_SCRIPT_DIR should point to $ORACLE_HOME/QOpatch

Solution:
Drop the directories and re-create them with the correct locations:
1) SQL> create or replace directory OPATCH_INST_DIR as '/OPatch';
2) SQL> create or replace directory OPATCH_LOG_DIR as '/QOpatch';
3) SQL> create or replace directory OPATCH_SCRIPT_DIR as '/QOpatch';
4) Invoke datapatch command:

   ./datapatch -prereq  
   ./datapatch -verbose
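After re-creating the directories, you can confirm the queryable inventory works before re-running datapatch; the same queries that failed earlier should now succeed:

```sql
-- These should now complete without ORA-20013
select dbms_sqlpatch.verify_queryable_inventory from dual;
select dbms_qopatch.get_opatch_install_info() from dual;
```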

Friday, August 23, 2019

Discoverer dis51pr: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory

Here is a new issue that I found while changing pref.txt in Discoverer Plus/Viewer 11g.

I changed the pref.txt in $ORACLE_INSTANCE/config/PreferenceServer/Discoverer_asinst_1.
And when I tried to run $ORACLE_INSTANCE/Discoverer/Discoverer_asinst_1/util/applypreferences.sh
I got an error:
dis51pr: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory

I checked $ORACLE_INSTANCE/config/OPMN/opmn/opmn.xml; there was no LD_ASSUME_KERNEL parameter there.

How to fix it? Here is the solution:

1. Backup $ORACLE_INSTANCE/Discoverer/Discoverer_asinst_1/util/discenv.sh file
2. Look for:

#Bug 7491881 - For LNX AMD, set the LD_ASSUME_KERNEL variable
export LD_ASSUME_KERNEL=2.4.19


and change it to


#Bug 7491881 - For LNX AMD, set the LD_ASSUME_KERNEL variable
#export LD_ASSUME_KERNEL=2.4.19
3. Run $ORACLE_INSTANCE/Discoverer/Discoverer_asinst_1/util/applypreferences.sh

Tuesday, August 6, 2019

Find which row a blocked session is waiting on when you have TX - row lock contention

There may be a situation when your session is waiting on a row lock; but which row is locked in the Oracle database? Here I'll show you how to find which row a blocked session is waiting on in an Oracle database when you have a row-level lock (enq: TX - row lock contention).
Let's demonstrate with a small demo.
Session 1

SQL> select * from scott.dept;
 
DEPTNO DNAME          LOC
------ -------------- -------------
    10 ACCOUNTING     NEW YORK
    20 RESEARCH       DALLAS
    30 SALES          CHICAGO
    40 OPERATIONS     BOSTON

SQL> update scott.dept set loc='SESSION1' where deptno=10; 
 
1 row updated


Session 2

SQL> select * from scott.dept;
 
DEPTNO DNAME          LOC
------ -------------- -------------
    10 ACCOUNTING     NEW YORK
    20 RESEARCH       DALLAS
    30 SALES          CHICAGO
    40 OPERATIONS     BOSTON

SQL> update scott.dept set loc='SESSION2' where deptno=10; 

Now the second session is waiting for the first session because of a row lock on table scott.dept, and we need to find which row the second session is waiting on.

SQL> select sid, event from gv$session where event like 'enq%';
 
       SID EVENT
---------- ----------------------------------------------------------------
       133 enq: TX - row lock contention
 
SQL> column owner format a10
SQL> column object_name format a12
SQL> column row_wait_obj# format a10
SQL> column row_wait_file# format a6
SQL> column row_wait_block# format a8
SQL> select o.owner,
           o.object_name,
           row_wait_obj#,
           row_wait_file#,
           row_wait_block#,
           row_wait_row#,
           dbms_rowid.rowid_create(1,
                                   o.DATA_OBJECT_ID,
                                   ROW_WAIT_FILE#,
                                  ROW_WAIT_BLOCK#,
                                  ROW_WAIT_ROW#) as "RowID"
     from gv$session s, 
          dba_objects o
     where sid = 133    ---- waiting session id
     and s.ROW_WAIT_OBJ# = o.OBJECT_ID;
 
OWNER      OBJECT_NAME  ROW_WAIT_O ROW_WA ROW_WAIT ROW_WAIT_ROW# RowID
---------- ------------ ---------- ------ -------- ------------- ------------------
SCOTT      DEPT              75333      4      135             0 AAASZFAAEAAAACHAAA
 
SQL> select * from scott.dept where rowid='AAASZFAAEAAAACHAAA';
 
DEPTNO DNAME          LOC
------ -------------- -------------
    10 ACCOUNTING     NEW YORK
 
SQL> 

Monday, July 1, 2019


[INS-30131] Initial setup required for the execution of installer validations failed

While trying to install the Oracle client on Linux 7, this error appears:
[INS-30131] Initial setup required for the execution of installer validations failed




In the details section is written:
Cause - Failed to access the temporary location.  
Action - Ensure that the current user has required permissions to access the temporary location. 
Additional Information:
 - PRVG-1901 : failed to setup CVU remote execution framework directory 
    "/tmp/CVU_12.2.0.1.0_oracle/" on nodes "dynatrace-ag"  
    - Cause:  An operation requiring remote execution could not complete because
         the attempt to set up the Cluster Verification Utility remote
         execution framework failed on the indicated nodes at the
         indicated directory location because the CVU remote execution
         framework version did not match the CVU java verification
         framework version. The accompanying message provides detailed
         failure information.  - Action:  Ensure that the directory indicated exists or can be created and
         the user executing the checks has sufficient permission to
         overwrite the contents of this directory. Also review the
         accompanying error messages and respond to them. 
   Summary of the failed nodes dynatrace-ag  
   - An internal error occurred within cluster verification framework
The command executed was "/tmp/CVU_12.2.0.1.0_oracle//exectask.sh -getver". 
The output from the command was "/tmp/CVU_12.2.0.1.0_oracle/exectask: 
error while loading shared libraries: libaio.so.1: cannot open shared object file: No such file or directory"
- Cause: Cause Of Problem Not Available  - Action: User Action Not Available  
- Version of exectask could not be retrieved from node "dynatrace-ag"  
- Cause: Cause Of Problem Not Available  - Action: User Action Not Available  
- An internal error occurred within cluster verification framework
The command executed was "/tmp/CVU_12.2.0.1.0_oracle//exectask.sh -getver". 
The output from the command was "/tmp/CVU_12.2.0.1.0_oracle/exectask: 
error while loading shared libraries: libaio.so.1: cannot open shared object file: No such file or directory"
- Cause: Cause Of Problem Not Available  - Action: User Action Not Available  
- Version of exectask could not be retrieved from node "dynatrace-ag"  
- Cause: Cause Of Problem Not Available  - Action: User Action Not Available 

So the solution is simple: install the libaio package and retry:
yum install libaio


Wednesday, June 5, 2019

Got ORA-1013 while running PLSQL

A big trace file was generated in udump.

The following content was in the trace file:
*** 2019-06-05 16:15:01.725
*** SESSION ID:(1159.11779) 2019-06-05 16:15:01.725
*** CLIENT ID:() 2019-06-05 16:15:01.725
*** SERVICE NAME:(TESTDB) 2019-06-05 16:15:01.725
*** MODULE NAME:(PL/SQL Developer) 2019-06-05 16:15:01.725
*** ACTION NAME:(Script for A.PRC_TEST@TESTDB) 2019-06-05 16:15:01.725
 
*********START PLSQL RUNTIME DUMP************
***Got internal error Exception caught in pl/sql run-time while running PLSQL***
***Got ORA-1013 while running PLSQL***
PACKAGE BODY TESTUSER.A:
library unit=7f877518 line=3 opcode=84 static link=2abba709c568 scope=1
FP=2abba709c718 PC=7b6ce24a Page=0 AP=2abba709c568 ST=2abba709c798
DL0=2abae5fd86e8 GF=2abae5fd8740 DL1=2abae5fd8708 DPF=2abae5fd8730 DS=7b6ce3f0
   DON library unit variable list instantiation
------ ------------ ------------- -------------

This is expected behavior: if Ctrl+C is used to terminate a running PL/SQL block, a trace file is generated for debugging purposes.

This issue can be reproduced by the following steps:
1. Open a session and running:
declare
n number;
begin
loop
n := n + 1;
end loop;
end;
/

==>Then terminate this session with ctrl+c

2. Check udump and a trace is generated:

Solution

To avoid generating big trace files, set MAX_DUMP_FILE_SIZE to a smaller value.

For example, the following command sets MAX_DUMP_FILE_SIZE to 1 GB:
SQL> alter system set max_dump_file_size='1024M';
SQL> show parameter max_dump_file_size
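If you need to find where these trace files are written, the diagnostic locations can be queried directly (11g and later; on older releases check the user_dump_dest parameter instead):

```sql
select name, value
  from v$diag_info
 where name in ('Diag Trace', 'Default Trace File');
```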

Tuesday, May 7, 2019

ORA-01628: max # extents (32765) reached for rollback segment

Error ORA-01628: max # extents (32765) reached for rollback segment occurs in a transaction that generates a lot of undo data, during an add-extent operation on an undo segment: the segment has hit MAXEXTENTS (32765) and cannot be extended any further.

Here are troubleshooting steps that can be followed when the ORA-01628: max # extents (32765) reached for rollback segment error occurs in your database.
(Doc ID 1580182.1)

1)
It is important to check UNDO tablespace utilization and the tuned undo retention as follows:
SQL> SELECT STATUS, TABLESPACE_NAME, SUM(BYTES), COUNT(*) FROM DBA_UNDO_EXTENTS GROUP BY STATUS, TABLESPACE_NAME;

SQL> select max(maxquerylen),max(tuned_undoretention) from v$undostat;

SQL> select max(maxquerylen),max(tuned_undoretention) from DBA_HIST_UNDOSTAT;

SQL> select sum(bytes) from dba_free_space where tablespace_name='&UNDOTBS';
Before proceeding, investigate and resolve any excessive allocation of ACTIVE/UNEXPIRED extents and any unusually high calculated tuned_undoretention.

2)
It is normal to see high undo usage when there are huge transactions.

Here is a query you can use to find out how much undo a transaction is using:
select a.sid, a.serial#, a.username, b.used_urec, b.used_ublk
from v$session a, v$transaction b
where a.saddr=b.ses_addr;
With automatic undo management you have no control over extent sizes, nor can you shrink them manually; this all happens automatically. The system decides on the extent size itself, though it will generally start allocating larger extents if an undo segment is extending a lot. A very large number of extents in a rollback segment is usually due to fragmentation in the undo tablespace: when Oracle can only allocate 64K extents because of fragmentation, the maximum-extents limit is reached much sooner.

The maximum number of extents for undo segments is limited to 32K, and a long/large running transaction can exhaust this limit by adding new extents (whenever the extent following the current one is not an expired one), finally receiving ORA-1628.

After ORA-1628 is raised in the transaction that extended the undo segment to its limit, future transactions will not be allowed to bind to that undo segment until it is shrunk (you may then see the number of extents decrease).

So the two major causes of ORA-1628 are a very large transaction and undo tablespace fragmentation.
A large transaction is addressed by splitting it into smaller ones (e.g. committing more frequently).

Undo tablespace fragmentation is addressed by recreating the undo tablespace.
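The first remedy, splitting a large transaction into smaller ones, can be sketched as a batched purge; the table name and predicate here are purely illustrative:

```sql
-- Hypothetical batched purge: capping the rows per transaction keeps the
-- undo footprint of each transaction small instead of one giant rollback.
begin
  loop
    delete from big_table                -- illustrative table name
     where created < sysdate - 365
       and rownum <= 10000;             -- at most 10,000 rows of undo per pass
    exit when sql%rowcount = 0;
    commit;                             -- release the undo before the next batch
  end loop;
end;
/
```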

Solutions :

1)
Set the parameter "_rollback_segment_count" to bring more UNDO segments online. The value should be the highest result obtained from the following queries:

select status,count(*) from dba_rollback_segs group by status; -- add the OFFLINE and ONLINE counts together

select max(maxconcurrency) from wrh$_undostat;

select max(maxconcurrency) from v$undostat;

2)
In case you have a large value for TUNED_UNDORETENTION:

SQL> select max(maxquerylen),max(tuned_undoretention) from v$undostat;

SQL> select max(maxquerylen),max(tuned_undoretention) from DBA_HIST_UNDOSTAT;
ALTER SYSTEM SET "_highthreshold_undoretention"=<value>;  -- a literal number: the max(maxquerylen) returned above, plus 1

3)
Before/after running large transactions, shrink undo segments once they reach a certain threshold (e.g. 15000 extents). Do not wait until a segment reaches its maximum (32765): bringing it below the threshold keeps the undo segment eligible for binding again.

a) select a.inst_id,
       a.ksppinm  "Parameter",
       b.ksppstvl "Session Value",
       c.ksppstvl "Instance Value"
  from x$ksppi a, x$ksppcv b, x$ksppsv c
 where a.indx = b.indx
   and a.indx = c.indx
   and a.inst_id = b.inst_id
   and b.inst_id = c.inst_id
   and a.ksppinm in ('_smu_debug_mode')
 order by 2;


select segment_name,
       round(nvl(sum(act), 0) / (1024 * 1024 * 1024), 3) "ACT GB BYTES",
       round(nvl(sum(unexp), 0) / (1024 * 1024 * 1024), 3) "UNEXP GB BYTES",
       round(nvl(sum(exp), 0) / (1024 * 1024 * 1024), 3) "EXP GB BYTES",
       NO_OF_EXTENTS
  from (select segment_name,
               nvl(sum(bytes), 0) act,
               00 unexp,
               00 exp,
               count(*) NO_OF_EXTENTS
          from DBA_UNDO_EXTENTS
         where status = 'ACTIVE'
           and tablespace_name = 'UNDOTBS1'
         group by segment_name
        union
        select segment_name,
               00 act,
               nvl(sum(bytes), 0) unexp,
               00 exp,
               count(*) NO_OF_EXTENTS
          from DBA_UNDO_EXTENTS
         where status = 'UNEXPIRED'
           and tablespace_name = 'UNDOTBS1'
         group by segment_name
        union
        select segment_name,
               00 act,
               00 unexp,
               nvl(sum(bytes), 0) exp,
               count(*) NO_OF_EXTENTS
          from DBA_UNDO_EXTENTS
         where status = 'EXPIRED'
           and tablespace_name = 'UNDOTBS1'
         group by segment_name)
 group by segment_name, NO_OF_EXTENTS
 order by 5 desc;


select sum(blocks), count(*) extents, segment_name
  from DBA_EXTENTS
 where tablespace_name = 'UNDOTBS1'
 group by segment_name
 order by 2 desc;

b) alter system set "_smu_debug_mode" = 4 scope=memory;

c) alter rollback segment "_SYSSMU<n>$" shrink;   -- use a segment name returned by the queries in step a

d) alter system set "_smu_debug_mode" = <original value from step a> scope=memory;

Then you can check the result of this measure by re-running the queries from step a after the three steps above.
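Putting steps b to d together, with a hypothetical undo segment name taken from the step a output (the default value of "_smu_debug_mode" is assumed to be 0; restore whatever value step a actually reported):

```sql
alter system set "_smu_debug_mode" = 4 scope=memory;     -- allow manual shrink of SMU segments
alter rollback segment "_SYSSMU10_1234567890$" shrink;   -- hypothetical name from step a
alter system set "_smu_debug_mode" = 0 scope=memory;     -- restore the original value
```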

4) Drop and recreate the undo tablespace (due to its fragmentation)

5) Minimize the generated undo as much as possible:

Example:

- split large transactions into smaller ones

- commit more often

- use direct path load rather than conventional path load to significantly reduce the amount of undo and thus avoid excessive fragmentation of the undo tablespace.
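For example, a conventional insert-select becomes a direct-path load with the APPEND hint (table names are illustrative); a direct-path insert loads above the high-water mark and generates only minimal undo for the table data:

```sql
insert /*+ APPEND */ into target_tab   -- hypothetical target table
select * from source_tab;              -- hypothetical source table
commit;  -- the table cannot be queried in the same session until the commit
```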