NULLs vs NOT NULLs and Performance

When it comes to giving the cost-based optimizer the best possible chance to make the “right” decisions, many DBAs are diligent in keeping statistics up to date, using histograms where appropriate, and creating (or removing surplus) indexes.

However, one often neglected area is that the null-ness of columns also impacts optimizer decisions. NULL and NOT NULL do more than just act as constraints; they also add to (or detract from) the value of indexes on those columns. Here’s an example of how the null-ness of a column impacts optimizer decisions. I have a table T which is a copy of DBA_OBJECTS, indexed on OBJECT_ID.


SQL> create table t as
  2  select * from dba_objects
  3  where object_id is not null;

Table created.

SQL>
SQL> create index IX on T ( object_id );

Index created.

Now I want to perform a standard pagination-style query, namely, get the first 5 rows in order of OBJECT_ID:


SQL>
SQL> set autotrace traceonly
SQL> select *
  2  from
  3    ( select * from t
  4      order by object_id
  5    )
  6  where rownum <= 5;

5 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 3299198703

----------------------------------------------------------------------------------------
| Id  | Operation               | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |      |     5 |  2405 |       |  2755   (1)| 00:00:01 |
|*  1 |  COUNT STOPKEY          |      |       |       |       |            |          |
|   2 |   VIEW                  |      | 78750 |    36M|       |  2755   (1)| 00:00:01 |
|*  3 |    SORT ORDER BY STOPKEY|      | 78750 |     9M|    14M|  2755   (1)| 00:00:01 |
|   4 |     TABLE ACCESS FULL   | T    | 78750 |     9M|       |   428   (1)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ROWNUM<=5)
   3 - filter(ROWNUM<=5)


Statistics
----------------------------------------------------------
         52  recursive calls
        130  db block gets
       1591  consistent gets
          0  physical reads
      25420  redo size
       2735  bytes sent via SQL*Net to client
        607  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          5  rows processed

I scanned the entire table and burned around 1500 consistent gets. Now I’ll let the database know what I already know, namely, that the OBJECT_ID column contains no nulls. Then I’ll repeat the same query.


SQL>
SQL> alter table T modify object_id not null;

Table altered.

SQL>
SQL> select *
  2  from
  3    ( select * from t
  4      order by object_id
  5    )
  6  where rownum <= 5;

5 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 3114946973

--------------------------------------------------------------------------------------
| Id  | Operation                     | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |      |     5 |  2405 |     3   (0)| 00:00:01 |
|*  1 |  COUNT STOPKEY                |      |       |       |            |          |
|   2 |   VIEW                        |      |     5 |  2405 |     3   (0)| 00:00:01 |
|   3 |    TABLE ACCESS BY INDEX ROWID| T    | 78750 |     9M|     3   (0)| 00:00:01 |
|   4 |     INDEX FULL SCAN           | IX   |     5 |       |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ROWNUM<=5)


Statistics
----------------------------------------------------------
         85  recursive calls
        132  db block gets
         72  consistent gets
          1  physical reads
      27192  redo size
       2735  bytes sent via SQL*Net to client
        607  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
          5  rows processed

Woo hoo! Twenty times more efficient. The extra information we provided allowed the optimizer to consider more execution paths.
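(As a quick confirmation, the dictionary now records the column as mandatory; a standard USER_TAB_COLUMNS query should report NULLABLE = ‘N’ for the column:)

SQL> select column_name, nullable
  2  from   user_tab_columns
  3  where  table_name = 'T'
  4  and    column_name = 'OBJECT_ID';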

That’s all well and good. But how do we know which columns might be missing an appropriate NOT NULL constraint?

Well, the following routine might help.

The PL/SQL procedure below deduces a ‘number of rows that are null’ count for all indexed columns of all tables in the schema (passed as parameter P_SCHEMA), although you can pass a table name to restrict the procedure to just that table. For any column that contains no nulls, the appropriate ‘alter table … modify (column_name not null)’ command will be output. (Make sure you set serveroutput on.) A sketch of the code the procedure generates appears after the notes below.

Notes:

  • The client for whom I wrote this script had no NOT NULL constraints on any table, so the procedure only takes a rough stab at ALL_CONSTRAINTS for existing constraints.
  • It tries to keep things sensible – avoiding complex data types, tables that are auto-generated or used for queues, etc. – but there’s no guarantee it won’t either miss a table, or pick up one it should not.
  • This is a brute-force approach – it works by scanning every table in the schema, so use your discretion as to when would be a suitable time to run this routine. But it will only scan each table once to determine the null count for all candidate columns.
  • As with any diagnosis script, you should not apply its recommendations without some careful thought first.
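To make the mechanics concrete, here is the shape of the anonymous block the procedure builds and executes (formatted for readability) for a hypothetical table SCOTT.EMP with two indexed candidate columns, EMPNO and DEPTNO – the names are purely illustrative:

declare
  v1 number;
  v2 number;
begin
  select sum(decode(EMPNO,null,1,0)),
         sum(decode(DEPTNO,null,1,0))
  into   v1,
         v2
  from   SCOTT.EMP;
  -- a zero sum means the column never contained a null
  if v1 = 0 then
    dbms_output.put_line('alter table SCOTT.EMP modify (EMPNO not null);');
  end if;
  if v2 = 0 then
    dbms_output.put_line('alter table SCOTT.EMP modify (DEPTNO not null);');
  end if;
end;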

12.2 version


create or replace 
procedure check_indexed_columns_for_null(
                 p_schema varchar2, 
                 p_table_name varchar2 default null) is
  cursor x is
   select 
     table_name,
     column_name,
     count(*) over ( partition by table_name ) as colcount
     from
     (
       select 
         table_name,
         column_name,
         min(existing_constraint)
       from 
       (
       select  
           a.table_name, 
           a.column_name, 
           ( select  count(*)
             from    all_constraints x,
                     all_cons_columns cc
             where   x.owner = c.owner
             and     x.table_name = c.table_name
             and     cc.owner      = x.owner
             and     cc.constraint_name = x.constraint_name
             and     
            (
               ( x.constraint_type = 'C'  and replace(search_condition_vc,'"') = a.column_name||' IS NOT NULL' )
                   or 
               ( x.constraint_type = 'P'  and cc.column_name = a.column_name )
             )
            ) existing_constraint
       from 
         all_ind_columns a,
         all_tables c,
         all_tab_columns ccol
        where a.index_owner = p_schema
       and a.table_name = nvl(upper(p_table_name),a.table_name)
       and c.table_name = a.table_name
       and c.owner      = a.table_owner
       and c.owner      = ccol.owner
       and c.table_name = ccol.table_name
       and a.column_name = ccol.column_name
       and c.secondary = 'N'
       and c.temporary = 'N'
       and c.nested    = 'NO'
       and c.external  = 'NO'
       and ccol.data_type_owner is null
       and ccol.data_type not in ('LONG','LONG RAW','CLOB','UROWID','UNDEFINED','NCLOB','BLOB','BFILE','ROWID')
       and (c.owner,c.table_name) not in ( select owner, queue_table from all_queue_tables where owner = p_schema )
       )
       group by 
         table_name,
         column_name
       having min(existing_constraint) = 0
     );

  str0 varchar2(32767); 
  str1 varchar2(32767); 
  str2 varchar2(32767); 
  str3 varchar2(32767);

  prev varchar2(100) := '*';
  cnt number;
  trailer varchar2(5);

procedure do_sql(thesql varchar2) is
  tcursor integer;
  dummy integer;
begin
  -- dbms_output.put_line(thesql);
  execute immediate thesql;
end;

begin
for i in x loop
  if prev != i.table_name then
    str0 := 'declare ';
    str1 := 'begin select '; str2 := ' into ';
    str3 := ' '; cnt := 1;
  end if;
  if cnt = i.colcount then 
    trailer := ' ';
  else
    trailer := ','||chr(10);
  end if;
  str0 := str0 || 'v'||ltrim(cnt)||' number;';
  str1 := str1 || 'sum(decode('||i.column_name||',null,1,0))'||trailer;
  str2 := str2 || 'v'||ltrim(cnt)||trailer;
  str3 := str3 || 'if v'||ltrim(cnt)||' = 0 then '||
    'dbms_output.put_line(''alter table '||p_schema||'.'||i.table_name||
    ' modify ('||i.column_name||' not null);''); end if;'||chr(10);
  if cnt = i.colcount then
    str2 := str2 ||' from '||p_schema||'.'||i.table_name||';';
    str3 := str3 ||' end;';
    do_sql(str0||' '||str1||' '||str2||' '||str3);
  end if;
  prev := i.table_name;
  cnt := cnt + 1;
end loop;
end;
/
sho err

set serverout on
exec check_indexed_columns_for_null('SCOTT')
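If any candidate columns are found, the output is a ready-to-review list of statements along these lines (the table and column names here are purely hypothetical):

alter table SCOTT.EMP_STAGE modify (EMPNO not null);
alter table SCOTT.ORDERS modify (ORDER_DATE not null);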

11.2 version

(Needs a direct CREATE TABLE grant to the owner of the procedure)


create or replace
procedure check_indexed_columns_for_null(
                 p_schema varchar2,
                 p_table_name varchar2 default null) is
  cursor c_template is
    select table_name, column_name, 0 colcount
    from   all_tab_columns;

  type rowlist is table of c_template%rowtype;  
  r    rowlist;
  rc   sys_refcursor;

  str0 varchar2(32767);
  str1 varchar2(32767);
  str2 varchar2(32767);
  str3 varchar2(32767);

  prev varchar2(100) := '*';
  cnt number;
  trailer varchar2(5);

procedure do_sql(thesql varchar2) is
  tcursor integer;
  dummy integer;
begin
  dbms_output.put_line(thesql);
  execute immediate thesql;
end;

begin

  begin
    execute immediate 'drop table tmp$cons purge';
  exception when others then null;
  end;

  execute immediate 'create table tmp$cons as  select owner, table_name, constraint_name, constraint_type, to_lob(search_condition) search_condition_vc '||
                    'from all_constraints';

  open rc for
    q'{select
      table_name,
      column_name,
      count(*) over ( partition by table_name ) as colcount
      from
      (
        select
          table_name,
          column_name,
          min(existing_constraint)
        from
        (
        select
            a.table_name,
            a.column_name,
            ( select  count(*)
              from    ( select owner, table_name, constraint_name, constraint_type, cast(search_condition_vc as varchar2(4000)) search_condition_vc 
                        from tmp$cons ) x,
                      all_cons_columns cc
              where   x.owner = c.owner
              and     x.table_name = c.table_name
              and     cc.owner      = x.owner
              and     cc.constraint_name = x.constraint_name
              and
             (
                ( x.constraint_type = 'C'  and replace(search_condition_vc,'"') = a.column_name||' IS NOT NULL' )
                    or
                ( x.constraint_type = 'P'  and cc.column_name = a.column_name )
              )
             ) existing_constraint
        from
          all_ind_columns a,
          all_tables c,
          all_tab_columns ccol
        where a.index_owner = :p_schema
        and a.table_name = nvl(upper(:p_table_name),a.table_name)
        and c.table_name = a.table_name
        and c.owner      = a.table_owner
        and c.owner      = ccol.owner
        and c.table_name = ccol.table_name
        and a.column_name = ccol.column_name
        and c.secondary = 'N'
        and c.temporary = 'N'
        and c.nested    = 'NO'
        and (c.owner,c.table_name) not in ( select owner, table_name from all_external_tables where owner = :p_schema )
        and ccol.data_type_owner is null
        and ccol.data_type not in ('LONG','LONG RAW','CLOB','UROWID','UNDEFINED','NCLOB','BLOB','BFILE','ROWID')
        and (c.owner,c.table_name) not in ( select owner, queue_table from all_queue_tables where owner = :p_schema )
        )
        group by
          table_name,
          column_name
        having min(existing_constraint) = 0
      )
      }' using p_schema,p_table_name,p_schema,p_schema;

  fetch rc bulk collect into r;
  close rc;
  
  for i in 1 .. r.count loop
    if prev != r(i).table_name then
      str0 := 'declare ';
      str1 := 'begin select '; str2 := ' into ';
      str3 := ' '; cnt := 1;
    end if;
    if cnt = r(i).colcount then
      trailer := ' ';
    else
      trailer := ','||chr(10);
    end if;
    str0 := str0 || 'v'||ltrim(cnt)||' number;';
    str1 := str1 || 'sum(decode('||r(i).column_name||',null,1,0))'||trailer;
    str2 := str2 || 'v'||ltrim(cnt)||trailer;
    str3 := str3 || 'if v'||ltrim(cnt)||' = 0 then '||
      'dbms_output.put_line(''alter table '||p_schema||'.'||r(i).table_name||
      ' modify ('||r(i).column_name||' not null);''); end if;'||chr(10);
    if cnt = r(i).colcount then
      str2 := str2 ||' from '||p_schema||'.'||r(i).table_name||';';
      str3 := str3 ||' end;';
      do_sql(str0||' '||str1||' '||str2||' '||str3);
    end if;
    prev := r(i).table_name;
    cnt := cnt + 1;
  end loop;
end;
/

 

If you want to see the underlying table scan queries that are being run, simply uncomment the “dbms_output.put_line” call in the DO_SQL subroutine of the 12.2 version (the 11.2 version prints them already).

Enjoy!

Getting started…adding an account to use

If you’ve read my previous post about getting started with the Oracle database, then hopefully you now have your very own database installed and running, and you have explored a little with the sample schemas using SQL Developer. Perhaps now you want to venture out into your own database development, and for that, you will want to create your own user account and your own tables. Here’s another video which will guide you through the process.

The video shows the process of creating a user via SQL Developer. If you are a command line addict, then here are the commands you will need to run to achieve the same. You can use SQL*Plus or SQLcl to run these commands. If you’ve never done that, simply:

  • open up a terminal or MS-DOS command window,
  • type sqlplus and hit Enter,
  • type SYSTEM for the username and hit Enter,
  • type the password you specified when you created the database as per my previous blog post.

You will see a prompt “SQL>”. Enter the following commands to create your first user account.


SQL> create user DEVELOPER identified by MyCaseSensitivePassword;

User created.

SQL> grant connect, resource to DEVELOPER;

Grant succeeded.

SQL> alter user DEVELOPER quota unlimited on USERS;

User altered.

And you are now good to go. Fire up SQL Developer as shown in the video, and you are ready to connect as your new user account as shown at this moment in the video.
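Once connected, a quick sanity check that the privileges and quota are in order might look like this (the table name is just an example):

SQL> create table my_first_table ( id number, name varchar2(50) );

Table created.

SQL> insert into my_first_table values (1, 'hello');

1 row created.

SQL> commit;

Commit complete.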

Remember – we’re doing this to learn and explore Oracle. It is very easy to see all of the functionality available to you, and to leap in and start building a fully-fledged application. But don’t forget – for that, you need more than just a play-thing at home. You need backups; you need security; you need scalability; you need redundancy and failover. So if you’re ready to get serious, you probably need to think about using a cloud account or similar.

Enjoy your database!

Clone a table

Sometimes doing a CREATE TABLE AS SELECT is all we need to copy the data from an existing table. But what if we want more than that? What if we really want to clone that table to match the original as closely as possible? We had a question along these lines on AskTOM today. A standard CTAS copies the NOT NULL attributes and the data types, but not really much else. We know that Data Pump will take care of it, but that is more complex than a simple CTAS.

So here is a simple routine to wrap the Data Pump calls so that the CTAS can be achieved with just as simple a command.  A database link pointing back to the same database is all we need.

Note:  The true innovation in this blog post came from Laurent’s excellent idea here.  I am just adding a small wrapper to make the process a little more palatable.  So all credit to Laurent here please.


SQL> create table emp as select * from scott.emp;

Table created.

SQL> create sequence seq start with 8000;

Sequence created.

SQL> alter table emp modify empno default seq.nextval;

Table altered.

SQL> alter table emp add primary key ( empno );

Table altered.

SQL> alter table emp add unique ( ename );

Table altered.

SQL> alter table emp compress;

Table altered.

SQL> alter table emp enable row movement;

Table altered.

And here is the routine to clone it, whilst keeping all of those additional bits of metadata.


SQL> create or replace
  2  procedure clone_tab(p_source varchar2,p_target varchar2) is
  3    n number;
  4    g varchar2(30);
  5    j varchar2(30);
  6  begin
  7    select global_name into g from global_name;
  8    begin
  9      execute immediate 'alter session close database link tmp$1';
 10    exception
 11      when others then null;
 12    end;
 13
 14    begin
 15      execute immediate 'drop database link tmp$1';
 16    exception
 17      when others then null;
 18    end;
 19
 20    execute immediate 'create database link tmp$1 using '''||g||'''';
 21
 22    if p_target like '%.%' or p_source like '%.%' then
 23      raise_application_error(-20000,'No schema prefix allowed');
 24    end if;
 25
 26    n := dbms_datapump.open('IMPORT','TABLE','TMP$1');
 27    dbms_datapump.metadata_filter(n,'NAME_LIST',''''||upper(p_source)||'''');
 28    dbms_datapump.metadata_remap(n,'REMAP_TABLE',upper(p_source),upper(p_target));
 29    dbms_datapump.start_job(n);
 30    dbms_datapump.wait_for_job(n, j);
 31  end;
 32  /

Procedure created.

SQL>
SQL> set serverout on
SQL> exec clone_tab('emp','emp2');

PL/SQL procedure successfully completed.

SQL>
SQL> select dbms_metadata.get_ddl('TABLE','EMP2',user) from dual;

DBMS_METADATA.GET_DDL('TABLE','EMP2',USER)
---------------------------------------------------------------------------------------

  CREATE TABLE "MCDONAC"."EMP2"
   (    "EMPNO" NUMBER(4,0) DEFAULT "MCDONAC"."SEQ"."NEXTVAL" NOT NULL ENABLE,
        "ENAME" VARCHAR2(10),
        "JOB" VARCHAR2(9),
        "MGR" NUMBER(4,0),
        "HIREDATE" DATE,
        "SAL" NUMBER(7,2),
        "COMM" NUMBER(7,2),
        "DEPTNO" NUMBER(2,0),
         UNIQUE ("ENAME")
  USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "USERS"  ENABLE,
         PRIMARY KEY ("EMPNO")
  USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "USERS"  ENABLE
   ) SEGMENT CREATION IMMEDIATE
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 COMPRESS BASIC LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "USERS"  ENABLE ROW MOVEMENT

SQL> select count(*) from emp;

  COUNT(*)
----------
        14

SQL> select count(*) from emp2;

  COUNT(*)
----------
        14
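One small footnote on the wrapper: the loopback link TMP$1 is dropped and recreated at the start of each call, but left in place afterwards, so once you have finished cloning you may want to tidy up:

SQL> drop database link tmp$1;

Database link dropped.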

Licensed for Advanced Compression? Don’t forget the network

We often think of Advanced Compression as being exclusively about compressing data “at rest”, i.e., on some sort of storage device. And don’t get me wrong: if we consider just that part of Advanced Compression, it still covers a myriad of opportunities that could yield benefits for your databases and database applications:

  • Heat maps
  • Automatic Data Optimization
  • XML, JSON and LOB compression (including de-duplication)
  • Compression on backups
  • Compression on Data Pump files
  • Additional compression options on indexes and tables
  • Compressed Flashback Data Archive storage
  • Storage snapshot compression

However, if you are licensed for the option, there are other things that you can also take advantage of when it comes to compression of data on the network.

In 12c, Advanced Network Compression is part of the Advanced Compression option. Obviously, the modern data centre typically co-locates application servers and database servers within a high-speed, high-bandwidth network, but once data needs to be accessed over less optimal networks (for example, users running ad-hoc queries to their desktops), compressing data across the wire can yield benefits.

Trust me, when you live in Perth, Western Australia, any kind of improvement to accessing data across the network is a good thing!

Here’s an example of the option in action. Here’s my source table – which I’ve deliberately created with lots of potential for compression:


SQL> create table comp_test
  2  as select rownum x, rpad('x',1000,'x') y from dual
  3  connect by level <= 100000;

Table created.

Every row is almost a replica of the others, and column Y is 1000 repeated characters, so it is an artificial but perfect candidate to see the compression in action. To ensure that the network transmission of the data is an impactful factor here, I’ve located this table on a “database server” (my spare laptop) accessed via a slow wireless network from my other machine. We can see the impact of the network by starting with the default settings.

Default settings


C:\temp>sqlplus scott/tiger@pdb122

SQL*Plus: Release 12.2.0.1.0 Production on Wed Dec 27 13:48:05 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Last Successful login time: Wed Dec 27 2017 13:48:26 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production


SQL> set timing on
SQL> set arraysize 1000
SQL> set feedback only
SQL> select * from scott.comp_test;

100000 rows selected.

Elapsed: 00:00:24.68

So a touch more than 1000 bytes * 100000 rows yields the following session-level statistics:


SQL> @netstat

NAME                                                              VALUE
------------------------------------------------------------ ----------
bytes sent via SQL*Net to client                              102138497
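(The @netstat script itself is not shown in this post. A reasonable stand-in, and what is assumed here, is a simple query of your own session statistics:)

select n.name, s.value
from   v$mystat s,
       v$statname n
where  s.statistic# = n.statistic#
and    n.name = 'bytes sent via SQL*Net to client';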

Now I’ll activate compression by adding the following parameters to the sqlnet.ora on both client and server:


SQLNET.COMPRESSION=on
SQLNET.COMPRESSION_LEVELS=(low,high)
SQLNET.COMPRESSION_THRESHOLD=1024

and repeat the test in a fresh connection. To see if we can potentially use compression, we can check V$SESSION_CONNECT_INFO:


SQL> select sid, network_service_banner from V$SESSION_CONNECT_INFO 
  2  where sid = sys_context('USERENV','SID')
  3  @pr
==============================
SID                           : 324
NETWORK_SERVICE_BANNER        : Windows NT TCP/IP NT Protocol Adapter for 64-bit Windows: Version 12.2.0.1.0 - Production
==============================
SID                           : 324
NETWORK_SERVICE_BANNER        : Oracle Advanced Network Compression Service for 64-bit Windows: Version 12.2.0.1.0 - Production
==============================
SID                           : 324
NETWORK_SERVICE_BANNER        : NTS Authentication service adapter for 64-bit Windows: Version 2.0.0.0.0 - Production
==============================
SID                           : 324
NETWORK_SERVICE_BANNER        : Encryption service for 64-bit Windows: Version 12.2.0.1.0 - Production
==============================
SID                           : 324
NETWORK_SERVICE_BANNER        : Crypto-checksumming service for 64-bit Windows: Version 12.2.0.1.0 - Production

or, if you enable a SQL*Net trace, you’ll see entries like this in the log files:


nlstddp_dump_ptable:   SQLNET.COMPRESSION_LEVELS = HIGH
nlstddp_dump_ptable:   SQLNET.COMPRESSION = ON

Compression enabled


SQL> set timing on
SQL> set arraysize 1000
SQL> set feedback only
SQL> select * from scott.comp_test;

100000 rows selected.

Elapsed: 00:00:01.93

You can see the huge difference in elapsed time. Unfortunately, as far as I can ascertain, the session network statistics are derived after decompression, because the bytes transferred still reflect the original sizes:


SQL> @netstat

NAME                                                              VALUE
------------------------------------------------------------ ----------
bytes sent via SQL*Net to client                              102114310

We can also take advantage of the larger SDU sizes available in 12c. Now my sqlnet.ora looks like this:


DEFAULT_SDU_SIZE=1048576
SQLNET.COMPRESSION=on
SQLNET.COMPRESSION_LEVELS=(low,high)
SQLNET.COMPRESSION_THRESHOLD=1024

and then the test can be repeated.

Compression on, larger SDU


SQL> set timing on
SQL> set arraysize 1000
SQL> set feedback only
SQL> select * from scott.comp_test;

100000 rows selected.

Elapsed: 00:00:00.98

(* – elapsed times are averaged across a number of executions)

There is a whitepaper on Advanced Compression with more details here, so if you have the license, make sure you take a look to see if there are some benefits for you to maximize the return on your investment.
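One final aside: if you would rather scope compression to particular connections instead of the whole client, my understanding is that the same parameters can also be placed in a connect descriptor in tnsnames.ora, along these lines (a sketch only; the host and service names are placeholders, so check the Net Services reference for your version):

PDB122_COMP =
  (DESCRIPTION =
    (COMPRESSION = on)
    (COMPRESSION_LEVELS = (LEVEL = high))
    (ADDRESS = (PROTOCOL = TCP)(HOST = myhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = pdb122))
  )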

iASH – my “infinite ASH” routine

I love Active Session History (ASH) data, because a lot of the work I’ve done in my consulting life was “after the fact” diagnosis. By this I mean that many of us have been in a similar circumstance, where a problem gets mentioned not while it is occurring, but only when you contact the customer for some other potentially unrelated reason. At which point you will hear that dreaded sentence:

“Yeah, the Order Entry screen was really slow a couple of hours ago”

And this is where ASH is an awesome resource.  With the ASH data available, there is a good chance you will be able to diagnose the issue without having to make an embarrassing request for the customer to repeat the task so that you can trace the underlying database activity.  Because no-one likes to be the person that says:

“Yeah that performance must have really sucked for you … Hey, let’s do it again!”

But even ASH has its limitations, because sometimes the customer sentence is phrased slightly differently:

“Yeah, the Order Entry screen was really slow last Tuesday”

In this case, it is unlikely that the ASH data will still be available. Whilst a subset of the invaluable ASH data is retained in DBA_HIST_ACTIVE_SESS_HISTORY, I would prefer to keep the complete set of ASH data available for longer than is practical within the limitations of memory. So I wrote myself a simple little routine that keeps all of the ASH data around for longer. It’s hardly rocket science – just a little partitioned table to capture V$ACTIVE_SESSION_HISTORY at regular intervals. Let’s walk through it so you can understand it and adapt it for your own use.

First I’ll create a partitioned table to hold the ASH data.  I’m using partitioning to avoid the need to index the table, so the insertion cost is minimal.  I’m partitioning by day and the code assumes this, so take care if you intend to modify it.


SQL> create table ash_hist
  2  partition by range (sample_time)
  3  interval( numtodsinterval(1,'day'))
  4  (partition p1 values less than (timestamp' 2017-01-01 00:00:00'))
  5  as select * from sys.gv_$active_session_history;

Table created.

Here is my procedure to capture the data.  The essentials of the routine are:

  • Starting with the most recent partition, find the last recorded entry in ASH_HIST.  We’ll look back up to 10 days to find our starting point (hence the daily partitions).
  • If there is no data for the last 10 days, we’ll bomb out, because we haven’t been running the routine frequently enough.
  • Copy all the ASH data from this point to now into ASH_HIST using a nice efficient INSERT-APPEND, but we’ll skip the session that is doing the copying. (You can include it if you want just by removing line 8)
  • Once per week (you can control this by tinkering with the IF condition on line 34) we’ll drop the oldest partitions. By default I keep 90 days, but you can set this by altering “l_retention” on line 5.

SQL>
SQL> CREATE OR REPLACE procedure save_ash_hist is
  2    l_hi_val_as_string varchar2(1000);
  3    l_hi_val_as_date   date;
  4    l_max_recorded     timestamp;
  5    l_retention        number := 90;
  6
  7  begin
  8    dbms_application_info.set_module('$$SAVE_ASH$$','');
  9    -- we are looping to take advantage
 10    -- of partition elimination
 11
 12    for i in 0 .. 10 loop
 13       select max(sample_time)
 14       into   l_max_recorded
 15       from   ash_hist
 16       where  sample_time > systimestamp - i;
 17
 18       exit when l_max_recorded is not null;
 19    end loop;
 20
 21    if l_max_recorded is null then
 22      raise_application_error(-20000,'No max sample time within 10 days');
 23    end if;
 24    dbms_output.put_line('Last copied time was '||l_max_recorded);
 25
 26    insert /*+ append */ into ash_hist
 27    select *
 28    from sys.gv_$active_session_history
 29    where sample_time > l_max_recorded
 30    and   ( module != '$$SAVE_ASH$$' or module is null );
 31    dbms_output.put_line('Copied '||sql%rowcount||' rows');
 32    commit;
 33
 34    if to_char(sysdate,'DYHH24') between 'TUE01' and 'TUE06' then
 35
 36      begin
 37        execute immediate 'alter table ash_hist set interval ()';
 38      exception
 39        when others then null;
 40      end;
 41      execute immediate 'alter table ash_hist set interval (NUMTODSINTERVAL(1,''DAY''))';
 42
 43      for i in ( select *
 44                 from   user_tab_partitions
 45                 where  table_name = 'ASH_HIST'
 46                 and    partition_position > 1
 47                 order by partition_position )
 48      loop
 49        l_hi_val_as_string := i.high_value;
 50        execute immediate 'select '||l_hi_val_as_string||' from dual' into l_hi_val_as_date;
 51
 52        if l_hi_val_as_date < sysdate - l_retention then
 53          execute immediate 'alter table ash_hist drop partition '||i.partition_name;
 54        else
 55          exit;
 56        end if;
 57
 58      end loop;
 59    end if;
 60  end;
 61  /

Procedure created.

And that is all there is to it.  Each time we run the procedure, we’ll grab all the ASH data since the last time we ran and keep it in ASH_HIST.


SQL>
SQL> select count(*) from ash_hist;

  COUNT(*)
----------
       792

1 row selected.


SQL>
SQL> exec save_ash_hist

PL/SQL procedure successfully completed.

SQL>
SQL> select count(*) from ash_hist;

  COUNT(*)
----------
       819

1 row selected.

A simple scheduler job to run the routine every couple of hours (I’m assuming your SGA holds at least 2 hours of samples in V$ACTIVE_SESSION_HISTORY – if not, you’d need to adjust the frequency) and you’re off and running.


SQL>
SQL>
SQL> BEGIN
  2      dbms_scheduler.create_job (
  3         job_name           =>  'ASH_CAPTURE',
  4         job_type           =>  'PLSQL_BLOCK',
  5         job_action         =>  'save_ash_hist;',
  6         start_date         =>  CAST((TRUNC(SYSDATE,'HH') + (1/24) + (55/24/60)) AS TIMESTAMP), -- job commences at 55 mins past the next hour
  7         repeat_interval    =>  'FREQ=HOURLY; INTERVAL=2',
  8         enabled            =>  true,
  9         comments           =>  'Permanent record of ASH data');
 10  END;
 11  /

PL/SQL procedure successfully completed.
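And that is the payoff. “Last Tuesday” now becomes answerable with an ordinary ASH-style query against ASH_HIST, for example a top-SQL report for a past one-hour window (a sketch; the dates are illustrative). Because the table is partitioned by day, the SAMPLE_TIME predicate also prunes the scan to a single partition:

select sql_id, event, count(*) as samples
from   ash_hist
where  sample_time between timestamp' 2017-12-19 09:00:00'
                       and timestamp' 2017-12-19 10:00:00'
group by sql_id, event
order by samples desc
fetch first 10 rows only;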

You can assume all of the standard disclaimers here. Use at own risk, blah blah, no warranty, blah blah, etc

Addendum: I should add that you could write this complete level of detail directly to DBA_HIST_ACTIVE_SESS_HISTORY by tinkering with “_ash_disk_filter_ratio”, but please get the endorsement of Support first.

Parsing freeform data in flat files

SQL*Loader is a very cool utility that has existed for a long time within Oracle to load flat files into the database. However, sometimes people find the control file syntax quite cryptic, and when it comes to parsing very complicated structures, this can mean control files which are hard to maintain. For me, the best solution here is to use an external table. That way we can combine the power of the SQL*Loader control file syntax embedded within the external table definition with the full power of PL/SQL and SQL for additional parsing of the data.

Here is an example where the data is spread across multiple lines and the task is to bring all that data together into a natural form, namely an ID followed by text.

So here is my file containing free-format text:


10,some data
that is really 
annoying because it
spans lots of rows.
20,and some more than does not.
30,and a mixture of stuff and we 
will assume that numerics then comm as 
is the thing that tells us it is a new line. 
40,otherwise we will keep on going.

What I’m going to do is use an external table to bring that data in exactly as it appears in the file, and then use a PL/SQL function in pipelined mode to parse the data into its component attributes.


SQL>
SQL> create table freeform
  2  (
  3  line varchar2(200)
  4  )
  5  ORGANIZATION external
  6  (
  7  TYPE oracle_loader
  8  DEFAULT DIRECTORY temp
  9  ACCESS PARAMETERS
 10  (
 11  RECORDS DELIMITED BY NEWLINE
 12  FIELDS LDRTRIM
 13  (
 14  line char(200)
 15  )
 16  )
 17  location
 18  (
 19  'freeform.dat'
 20  )
 21  )REJECT LIMIT UNLIMITED ;

Table created.

SQL>
SQL> select * from freeform;

LINE
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
10,some data
that is really
annoying because it
spans lots of rows.
20,and some more than does not.
30,and a mixture of stuff and we
will assume that numerics then comm as
is the thing that tells us it is a new line.
40,otherwise we will keep on going.

9 rows selected.

SQL>
SQL> create or replace
  2  type output_data as object (id int, str varchar2(500))
  3  /

Type created.

SQL>
SQL> create or replace
  2  type output_list as table of output_data
  3  /

Type created.

SQL>
SQL> create or replace
  2  function cleanup_that_sucker(rc sys_refcursor) return output_list pipelined is
  3    type strlist is table of varchar2(200) index by pls_integer;
  4    s strlist;
  5    o output_data := output_data(null,null);
  6    c pls_integer := 0;
  7  begin
  8    loop
  9      fetch rc bulk collect into s limit 500;
 10
 11      for i in 1 .. s.count loop
 12        c := c + 1;
 13        if regexp_like(s(i),'^[[:digit:]]+,') then
 14          if c > 1 then
 15             pipe row (o);
 16          end if;
 17          o.id  := substr(s(i),1,instr(s(i),',')-1);
 18          o.str := substr(s(i),instr(s(i),',')+1);
 19        else
 20          o.str := o.str || ' ' || s(i);
 21        end if;
 22      end loop;
 23      exit when rc%notfound;
 24    end loop;
 25    pipe row (o);
 26    return;
 27  end;
 28  /

Function created.

SQL> sho err
No errors.
SQL>
SQL> select *
  2  from cleanup_that_sucker(cursor(
  3          select line from freeform
  4          ));

        ID STR
---------- ------------------------------------------------------------------------------------------------------------------------
        10 some data that is really annoying because it spans lots of rows.
        20 and some more than does not.
        30 and a mixture of stuff and we will assume that numerics then comm as is the thing that tells us it is a new line.
        40 otherwise we will keep on going.

4 rows selected.

SQL>
SQL>

And there we have it. We have the full power of PL/SQL and SQL at our fingertips even though we are parsing flat file data. This avoids complicated control file syntax and makes ongoing maintenance far easier. Performance is still good because we are using the bulk collect feature in PL/SQL. Also, because we take a cursor expression as input, we have lots of flexibility in terms of what data we pass in – we simply change the SQL expression in the cursor.
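For example, pointing the function at a different staging table requires no change to the function at all; we just supply a different cursor (the table and column names below are hypothetical):

SQL> select *
  2  from cleanup_that_sucker(cursor(
  3          select payload from another_staging_table
  4          ));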

Idle banter

When your car gets a flat tyre, it’s always handy to have a spare. We do the same with the database.


SQL> select
  2    x.ksppinm  name
  3  from
  4    sys.x$ksppi  x,
  5    sys.x$ksppcv2  y
  6  where
  7    x.inst_id = userenv('Instance') and
  8    y.inst_id = userenv('Instance') and
  9    x.indx+1 = y.kspftctxpn and
 10    x.ksppinm like '%spare%parameter%';

NAME
--------------------------------------------------------------------------------
_first_spare_parameter
_second_spare_parameter
_third_spare_parameter
_fourth_spare_parameter
_fifth_spare_parameter
_sixth_spare_parameter
_seventh_spare_parameter
_eighth_spare_parameter
_ninth_spare_parameter
_tenth_spare_parameter
_eleventh_spare_parameter
_twelfth_spare_parameter
_thirteenth_spare_parameter
_fourteenth_spare_parameter
_fifteenth_spare_parameter
_sixteenth_spare_parameter
_seventeenth_spare_parameter
_eighteenth_spare_parameter
_nineteenth_spare_parameter
_twentieth_spare_parameter
_twenty-first_spare_parameter
_twenty-second_spare_parameter
_twenty-third_spare_parameter
_twenty-fourth_spare_parameter
_twenty-fifth_spare_parameter
_twenty-sixth_spare_parameter
_twenty-seventh_spare_parameter
_twenty-eighth_spare_parameter
_twenty-ninth_spare_parameter
_thirtieth_spare_parameter
_thirty-first_spare_parameter
_thirty-second_spare_parameter
_thirty-third_spare_parameter
_thirty-fourth_spare_parameter
_thirty-fifth_spare_parameter
_thirty-sixth_spare_parameter
_thirty-seventh_spare_parameter
_thirty-eighth_spare_parameter
_thirty-ninth_spare_parameter
_fortieth_spare_parameter
_forty-first_spare_parameter
_forty-second_spare_parameter
_forty-third_spare_parameter
_forty-fourth_spare_parameter
_forty-fifth_spare_parameter
_forty-sixth_spare_parameter
_forty-seventh_spare_parameter
_forty-eighth_spare_parameter
_forty-ninth_spare_parameter
_fiftieth_spare_parameter
_fifty-first_spare_parameter
_fifty-second_spare_parameter
_fifty-third_spare_parameter
_fifty-fourth_spare_parameter
_fifty-fifth_spare_parameter
_fifty-sixth_spare_parameter
_fifty-seventh_spare_parameter
_fifty-eighth_spare_parameter
_fifty-ninth_spare_parameter
_sixtieth_spare_parameter
_sixty-first_spare_parameter
_sixty-second_spare_parameter
_sixty-third_spare_parameter
_sixty-fourth_spare_parameter
_sixty-fifth_spare_parameter
_sixty-sixth_spare_parameter
_sixty-seventh_spare_parameter
_sixty-eighth_spare_parameter
_sixty-ninth_spare_parameter
_seventieth_spare_parameter
_seventy-first_spare_parameter
_seventy-second_spare_parameter
_seventy-third_spare_parameter
_seventy-fourth_spare_parameter
_seventy-fifth_spare_parameter
_seventy-sixth_spare_parameter
_seventy-seventh_spare_parameter
_seventy-eighth_spare_parameter
_seventy-ninth_spare_parameter
_eightieth_spare_parameter
_eighty-first_spare_parameter
_eighty-second_spare_parameter
_eighty-third_spare_parameter
_eighty-fourth_spare_parameter
_eighty-fifth_spare_parameter
_eighty-sixth_spare_parameter
_eighty-seventh_spare_parameter
_eighty-eighth_spare_parameter
_eighty-ninth_spare_parameter
_ninetieth_spare_parameter
_ninety-first_spare_parameter
_ninety-second_spare_parameter
_ninety-third_spare_parameter
_ninety-fourth_spare_parameter
_ninety-fifth_spare_parameter
_ninety-sixth_spare_parameter
_ninety-seventh_spare_parameter
_ninety-eighth_spare_parameter
_ninety-ninth_spare_parameter
_one-hundredth_spare_parameter
_one-hundred-and-first_spare_parameter
_one-hundred-and-second_spare_parameter
_one-hundred-and-third_spare_parameter
_one-hundred-and-fourth_spare_parameter
_one-hundred-and-fifth_spare_parameter
_one-hundred-and-sixth_spare_parameter
_one-hundred-and-seventh_spare_parameter
_one-hundred-and-eighth_spare_parameter
_one-hundred-and-ninth_spare_parameter
_one-hundred-and-tenth_spare_parameter
_one-hundred-and-eleventh_spare_parameter
_one-hundred-and-twelfth_spare_parameter
_one-hundred-and-thirteenth_spare_parameter
_one-hundred-and-fourteenth_spare_parameter
_one-hundred-and-fifteenth_spare_parameter
_one-hundred-and-sixteenth_spare_parameter
_one-hundred-and-seventeenth_spare_parameter
_one-hundred-and-eighteenth_spare_parameter
_one-hundred-and-nineteenth_spare_parameter
_one-hundred-and-twentieth_spare_parameter
_one-hundred-and-twenty-first_spare_parameter
_one-hundred-and-twenty-second_spare_parameter
_one-hundred-and-twenty-third_spare_parameter
_one-hundred-and-twenty-fourth_spare_parameter
_one-hundred-and-twenty-fifth_spare_parameter
_one-hundred-and-twenty-sixth_spare_parameter
_one-hundred-and-twenty-seventh_spare_parameter
_one-hundred-and-twenty-eighth_spare_parameter
_one-hundred-and-twenty-ninth_spare_parameter
_one-hundred-and-thirtieth_spare_parameter
_one-hundred-and-thirty-first_spare_parameter
_one-hundred-and-thirty-second_spare_parameter
_one-hundred-and-thirty-third_spare_parameter
_one-hundred-and-thirty-fourth_spare_parameter
_one-hundred-and-thirty-fifth_spare_parameter
_one-hundred-and-thirty-sixth_spare_parameter
_one-hundred-and-thirty-seventh_spare_parameter
_one-hundred-and-thirty-eighth_spare_parameter
_one-hundred-and-thirty-ninth_spare_parameter
_one-hundred-and-fortieth_spare_parameter
_one-hundred-and-forty-first_spare_parameter
_one-hundred-and-forty-second_spare_parameter
_one-hundred-and-forty-third_spare_parameter
_one-hundred-and-forty-fourth_spare_parameter
_one-hundred-and-forty-fifth_spare_parameter
_one-hundred-and-forty-sixth_spare_parameter
_one-hundred-and-forty-seventh_spare_parameter
_one-hundred-and-forty-eighth_spare_parameter
_one-hundred-and-forty-ninth_spare_parameter
_one-hundred-and-fiftieth_spare_parameter
_one-hundred-and-fifty-first_spare_parameter
_one-hundred-and-fifty-second_spare_parameter
_one-hundred-and-fifty-third_spare_parameter
_one-hundred-and-fifty-fourth_spare_parameter
_one-hundred-and-fifty-fifth_spare_parameter
_one-hundred-and-fifty-sixth_spare_parameter
_one-hundred-and-fifty-seventh_spare_parameter
_one-hundred-and-fifty-eighth_spare_parameter
_one-hundred-and-fifty-ninth_spare_parameter
_one-hundred-and-sixtieth_spare_parameter
_one-hundred-and-sixty-first_spare_parameter
_one-hundred-and-sixty-second_spare_parameter
_one-hundred-and-sixty-third_spare_parameter
_one-hundred-and-sixty-fourth_spare_parameter
_one-hundred-and-sixty-fifth_spare_parameter
_one-hundred-and-sixty-sixth_spare_parameter
_one-hundred-and-sixty-seventh_spare_parameter
_one-hundred-and-sixty-eighth_spare_parameter
_one-hundred-and-sixty-ninth_spare_parameter
_one-hundred-and-seventieth_spare_parameter
_one-hundred-and-seventy-first_spare_parameter
_one-hundred-and-seventy-second_spare_parameter
_one-hundred-and-seventy-third_spare_parameter
_one-hundred-and-seventy-fourth_spare_parameter
_one-hundred-and-seventy-fifth_spare_parameter
_one-hundred-and-seventy-sixth_spare_parameter
_one-hundred-and-seventy-seventh_spare_parameter
_one-hundred-and-seventy-eighth_spare_parameter
_one-hundred-and-seventy-ninth_spare_parameter
_one-hundred-and-eightieth_spare_parameter
_one-hundred-and-eighty-first_spare_parameter
_one-hundred-and-eighty-second_spare_parameter
_one-hundred-and-eighty-third_spare_parameter
_one-hundred-and-eighty-fourth_spare_parameter
_one-hundred-and-eighty-fifth_spare_parameter
_one-hundred-and-eighty-sixth_spare_parameter
_one-hundred-and-eighty-seventh_spare_parameter
_one-hundred-and-eighty-eighth_spare_parameter
_one-hundred-and-eighty-ninth_spare_parameter
_one-hundred-and-ninetieth_spare_parameter
_one-hundred-and-ninety-first_spare_parameter
_one-hundred-and-ninety-second_spare_parameter
_one-hundred-and-ninety-third_spare_parameter
_one-hundred-and-ninety-fourth_spare_parameter
_one-hundred-and-ninety-fifth_spare_parameter
_one-hundred-and-ninety-sixth_spare_parameter
_one-hundred-and-ninety-seventh_spare_parameter
_one-hundred-and-ninety-eighth_spare_parameter
_one-hundred-and-ninety-ninth_spare_parameter
_two-hundredth_spare_parameter
_two-hundred-and-first_spare_parameter
_two-hundred-and-second_spare_parameter
_two-hundred-and-third_spare_parameter
_two-hundred-and-fourth_spare_parameter
_two-hundred-and-fifth_spare_parameter
_two-hundred-and-sixth_spare_parameter
_two-hundred-and-seventh_spare_parameter
_two-hundred-and-eighth_spare_parameter
_two-hundred-and-ninth_spare_parameter
_two-hundred-and-tenth_spare_parameter
_two-hundred-and-eleventh_spare_parameter
_two-hundred-and-twelfth_spare_parameter
_two-hundred-and-thirteenth_spare_parameter
_two-hundred-and-fourteenth_spare_parameter
_two-hundred-and-fifteenth_spare_parameter
_two-hundred-and-sixteenth_spare_parameter
_two-hundred-and-seventeenth_spare_parameter
_two-hundred-and-eighteenth_spare_parameter
_two-hundred-and-nineteenth_spare_parameter
_two-hundred-and-twentieth_spare_parameter
_spare_test_parameter

Standard common sense stuff: please don’t mess with these (or any) underscore parameters without consulting Support.