Google it ....

Showing posts with label tuning. Show all posts

Saturday, July 22, 2023

ERROR: relation pg_stat_statements does not exist

When I try to run select * from pg_stat_statements to get some information about the queries running on a PostgreSQL database, I get:

ERROR: relation pg_stat_statements does not exist
In this post I'll show step by step how to install pg_stat_statements in PostgreSQL to analyze queries, which is very helpful during performance tuning.
The pg_stat_statements view holds information on queries that ran in the past, with one row per distinct query. It provides useful information in columns like:
pg_stat_statements.query — the query text
pg_stat_statements.calls — the number of times the query was executed
pg_stat_statements.total_exec_time — the total time (in ms) spent executing the query
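
Once the extension is installed (the steps below cover that), these columns can be combined into a quick overview; a minimal sketch:

```sql
-- The five most frequently executed statements and their total execution time in ms
SELECT query, calls, total_exec_time
FROM pg_stat_statements
ORDER BY calls DESC
LIMIT 5;
```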

Here is step by step how to install pg_stat_statements:

postgres=# select * from pg_stat_statements;
ERROR:  relation "pg_stat_statements" does not exist
LINE 1: select * from pg_stat_statements;
                      ^
postgres=# SELECT * FROM pg_available_extensions where name='pg_stat_statements';
        name        | default_version | installed_version |                                comment
--------------------+-----------------+-------------------+------------------------------------------------------------------------
 pg_stat_statements | 1.10            |                   | track planning and execution statistics of all SQL statements executed
(1 row)
Extensions in PostgreSQL are created at the database level, so we have to create the extension in each database separately.
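
For example, to enable it in another database (the name appdb below is only a placeholder), connect to that database first and create the extension there; a sketch:

```sql
-- In psql: \c appdb   -- switch to the target database first
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
```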

postgres=# CREATE EXTENSION pg_stat_statements;
CREATE EXTENSION
postgres=#
postgres=#
postgres=# SELECT * FROM pg_available_extensions where name='pg_stat_statements';
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
postgres=# SELECT * FROM pg_available_extensions where name='pg_stat_statements';
        name        | default_version | installed_version |                                comment
--------------------+-----------------+-------------------+------------------------------------------------------------------------
 pg_stat_statements | 1.10            | 1.10              | track planning and execution statistics of all SQL statements executed
(1 row)


postgres=# select * from pg_stat_statements;
ERROR:  pg_stat_statements must be loaded via shared_preload_libraries
postgres=#
In the postgresql.conf file, add these lines at the end:

shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.max = 10000
pg_stat_statements.track = all
Restart PostgreSQL:

systemctl restart postgresql-15
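After the restart, you can verify that the module was actually preloaded before querying the view:

```sql
SHOW shared_preload_libraries;
-- The setting should now include 'pg_stat_statements'
```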
Now you can query pg_stat_statements and use it for performance tuning. For example, this query returns the top 10 statements in the database by average elapsed time:

SELECT 
       query, 
       calls, 
       (total_exec_time/calls/1000)::integer AS avg_time_seconds 
FROM pg_stat_statements
WHERE 1=1 
--calls > 1000
ORDER BY avg_time_seconds DESC
LIMIT 10;
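When comparing timings before and after a change, it helps to start from a clean slate; pg_stat_statements provides a reset function for that (it requires appropriate privileges):

```sql
-- Discard all statistics collected so far
SELECT pg_stat_statements_reset();
```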
Happy postgres tuning.

Monday, April 24, 2023

Tuning an IS NULL query in Oracle

When a simple query uses just an IS NULL predicate on a big table, the Oracle cost-based optimizer chooses a full table scan, so the query runs slowly. Here is a demonstration of how to tune queries like this:
create table my_test_table as select * from dba_objects;

insert into my_test_table  select * from dba_objects;
commit;

UPDATE my_test_table set object_name=null where rownum<=100;
commit;

create index my_test_table_i1 on my_test_table(object_name);

begin
  dbms_stats.gather_table_stats(ownname => 'QOBESA',
                                tabname => 'MY_TEST_TABLE',
                                cascade => true);
end;
/


select * from my_test_table where object_name is null;


------------------------------------------------------------------------------
| Id  | Operation           | Name          | Rows | Bytes | Cost | Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |               |  100 | 12500 |  108 | 00:00:01 |
| * 1 |   TABLE ACCESS FULL | MY_TEST_TABLE |  100 | 12500 |  108 | 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
------------------------------------------
* 1 - filter("OBJECT_NAME" IS NULL)
So we have a table with more than 100,000 rows, and in 100 of them the indexed column object_name is NULL, yet the optimizer does not use the index and performs a full table scan. If the table is huge and holds many gigabytes of data, this becomes a big problem. Many believe there is no way to tune this type of statement, because Oracle does not store entirely-NULL index keys in B-tree indexes, but here is a little trick that makes an index usable for NULL lookups:
create index my_test_table_i2 on my_test_table(object_name, -1);


begin
  dbms_stats.gather_table_stats(ownname => 'QOBESA',
                                tabname => 'MY_TEST_TABLE',
                                cascade => true);
end;
/


select * from my_test_table where object_name is null;


---------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name             | Rows | Bytes | Cost | Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                  |  100 | 12900 |   78 | 00:00:01 |
|   1 |   TABLE ACCESS BY INDEX ROWID BATCHED | MY_TEST_TABLE    |  100 | 12900 |   78 | 00:00:01 |
| * 2 |    INDEX RANGE SCAN                   | MY_TEST_TABLE_I2 |  100 |       |    3 | 00:00:01 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
------------------------------------------
* 2 - access("OBJECT_NAME" IS NULL)
So the solution is to append a constant to the end of the index key: since the constant is never NULL, every row of the table gets an index entry, including the rows where object_name is NULL.
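
The exact constant does not matter, as long as it is never NULL; it simply guarantees that every row is indexed. A variant of the same trick, just for illustration:

```sql
-- Any non-null literal works in place of -1
create index my_test_table_i3 on my_test_table(object_name, 0);
```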