Execution plans on LiveSQL

To protect the integrity of people’s data and isolate sessions on LiveSQL, we lock down the environment. Clearly, if you are doing some testing with sensitive data, you don’t want an anonymous member of the user community mining V$SQL to see what commands you have been running. Conversely, we want to allow people to perform most of the tasks that they would do on a standard database installation without having to install or configure anything. That’s the great thing about LiveSQL.

So for that reason, you don’t get access to all of the features of DBMS_XPLAN. But since we do provide access to a limited set of V$ views, you can get execution plan details by going back to first principles. Here are some scripts you can use.


--
-- First run your query
--
select /*+ gather_plan_statistics */ .... (my_query)

--
--  Then locate the SQL_ID/CHILD_NUMBER for your just executed query
--
select * from v$sqlstats where sql_text like '...';


--
-- Plug them into the queries below.  
--
-- The first one gets the execution plan, the second one gets the runtime statistics
--
with plan_table as 
(
select * from v$sql_plan_statistics_all
where sql_id = 'my_sql_id'        -- substitute your SQL_ID
and child_number = 0              -- substitute your CHILD_NUMBER
)
select id "Id",
  rpad(
  lpad(' ',2*level) || operation || ' ' || options , 40 ) "Operation",
  object_name "Name",
  cardinality "Rows",
  bytes "Bytes", 
  cost "Cost",
  rtrim(to_char(trunc(time/3600),'fm00')||':'||
  to_char(mod(trunc(time/60),60),'fm00')||':'||
  to_char(trunc(mod(time,60)),'fm00'),':')     "Time"
from plan_table
connect by prior id = parent_id 
start with id = 0
union all
select null, null,null, null,null, null,null from dual
union all
select null, lpad(id,4) ||'-filter '||FILTER_PREDICATES,null, null,null, null,null 
from plan_table where FILTER_PREDICATES is not null
union all
select null, lpad(id,4) ||'-access '||ACCESS_PREDICATES,null, null,null, null,null 
from plan_table where ACCESS_PREDICATES is not null
order by 1 nulls last, 2 nulls first;



with plan_table as 
(
select * from v$sql_plan_statistics_all
where sql_id = 'd09nv17gy4j3z'
and child_number = 0
)
select id "Id",
  rpad(
  lpad(' ',2*level) || operation || ' ' || options , 40 ) "Operation",
  object_name "Name",
  last_starts "Starts",
  cardinality "E-Rows", 
  LAST_OUTPUT_ROWS "A-Rows",
  rtrim(to_char(trunc(time/3600),'fm00')||':'||
  to_char(mod(trunc(time/60),60),'fm00')||':'||
  to_char(trunc(mod(time,60)),'fm00'),':')     "Time",
  LAST_CR_BUFFER_GETS+LAST_CU_BUFFER_GETS "Buffers"
from plan_table p
connect by prior id = parent_id 
start with id = 0
union all
select null, null,null, null,null, null,null,null from dual
union all
select null, lpad(id,4) ||'-filter '||FILTER_PREDICATES,null, null,null, null,null,null 
from plan_table where FILTER_PREDICATES is not null
union all
select null, lpad(id,4) ||'-access '||ACCESS_PREDICATES,null, null,null, null,null,null 
from plan_table where ACCESS_PREDICATES is not null
order by 1 nulls last, 2 nulls first;


Oracle Database 18c

Yup…it’s arrived!

New name obviously, because we’ve jumped to our new naming model to align with the calendar year as opposed to a version number. You might be thinking “So what?”, but it’s a significant change in the way we get software to customers. Mike Dietrich blogged about this at length here so I won’t repeat what has been said, but in TL;DR form:

More frequent releases, with smaller amounts of change per release

In this way, the approach lets us focus more on solidifying existing features, and on being able to respond quickly to bugs that arise.

So 18c is more an incremental release on 12.2 (in fact, internally it was referred to as “12.2.0.2” for most of its build cycle), focussed on stability and hardening of existing features.

Don’t worry, we still managed to pack some cool new things in there – which you can read about in the New Features guide.

I’ll be doing some blog posts and videos on the 18c content soon, but here’s something to whet your appetite.

NVL vs COALESCE

Jonathan Lewis just published a blog post about NVL and COALESCE and the optimizer costings for each.

There is also perhaps a significant difference between NVL and COALESCE in that the former seems to have an in-built optimization for handling bind variables and nulls. 

Consider an application where users optionally pass in search criteria and you have to query a table based on that criteria.  You have three natural choices here to implement that:

WHERE column = :search_criteria or :search_criteria is null

or

WHERE column = nvl(:search_criteria ,column)

or

WHERE column = coalesce(:search_criteria,column)

Functionally they are identical*, but the implementation detail shows a nice little optimizer trick that only works with NVL.


SQL> create table t as select * From dba_objects;

Table created.

SQL> variable search_criteria number
SQL>
SQL> exec :search_criteria := 123

PL/SQL procedure successfully completed.

SQL>
SQL> create index ix1 on t ( object_id ) ;

Index created.

SQL>
SQL> set feedback only
SQL> select *
  2  from t
  3  where object_id = nvl(:search_criteria,object_id);

1 row selected.

SQL>
SQL> set feedback on
SQL> select * from dbms_xplan.display_cursor();

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------
SQL_ID  0g820t1jw00hm, child number 0
-------------------------------------
select * from t where object_id = nvl(:search_criteria,object_id)

Plan hash value: 2258578794

----------------------------------------------------------------------------------------------------------
| Id  | Operation                              | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                       |                 |       |       |   430 (100)|          |
|   1 |  VIEW                                  | VW_ORE_1B35BA0F | 78868 |    36M|   430   (1)| 00:00:01 |
|   2 |   UNION-ALL                            |                 |       |       |            |          |
|*  3 |    FILTER                              |                 |       |       |            |          |
|   4 |     TABLE ACCESS BY INDEX ROWID BATCHED| T               |     1 |   132 |     2   (0)| 00:00:01 |
|*  5 |      INDEX RANGE SCAN                  | IX1             |     1 |       |     1   (0)| 00:00:01 |
|*  6 |    FILTER                              |                 |       |       |            |          |
|*  7 |     TABLE ACCESS FULL                  | T               | 78867 |     9M|   428   (1)| 00:00:01 |
----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter(:SEARCH_CRITERIA IS NOT NULL)
   5 - access("OBJECT_ID"=:SEARCH_CRITERIA)
   6 - filter(:SEARCH_CRITERIA IS NULL)
   7 - filter("OBJECT_ID" IS NOT NULL)


27 rows selected.

SQL>
SQL> set feedback only
SQL> select *
  2  from t
  3  where object_id = coalesce(:search_criteria,object_id);

1 row selected.

SQL>
SQL> set feedback on
SQL> select * from dbms_xplan.display_cursor();

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------
SQL_ID  am3uvm7nvx5d9, child number 0
-------------------------------------
select * from t where object_id = coalesce(:search_criteria,object_id)

Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |   427 (100)|          |
|*  1 |  TABLE ACCESS FULL| T    |     1 |   132 |   427   (1)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("OBJECT_ID"=COALESCE(:SEARCH_CRITERIA,"OBJECT_ID"))


18 rows selected.

SQL>
SQL>
SQL>
SQL> set feedback only
SQL> select *
  2  from t
  3  where ( object_id = :search_criteria or :search_criteria is null );

1 row selected.

SQL>
SQL> set feedback on
SQL> select * from dbms_xplan.display_cursor();

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------
SQL_ID  ff0s2j51scxss, child number 0
-------------------------------------
select * from t where ( object_id = :search_criteria or
:search_criteria is null )

Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |   427 (100)|          |
|*  1 |  TABLE ACCESS FULL| T    |  3945 |   508K|   427   (1)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter((:SEARCH_CRITERIA IS NULL OR
              "OBJECT_ID"=:SEARCH_CRITERIA))


20 rows selected.

SQL>

Only NVL gets the benefit of the query being “split” into two pieces – one to handle the case where the passed criteria is null, and the other for when the criteria is not null.  The FILTER in line 3 shows that we will only run one or the other.
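For reference, the VW_ORE_1B35BA0F view in the plan corresponds conceptually to a rewrite along the following lines – a sketch of what the OR-expansion transformation is doing internally, not something you would normally code by hand:


select * from t
where  :search_criteria is not null
and    object_id = :search_criteria
union all
select * from t
where  :search_criteria is null
and    object_id is not null;
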

So for these particular types of queries, make sure you test all the possibilities – you might find NVL (currently) is your best bet.

 

* – Addendum: Thanks to Jonathan for pointing out that you can get discrepancies in the results for the three strategies above for columns that may contain nulls, so as always, take care.
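To see the kind of discrepancy that can arise, here is a tiny contrived example (the table and column names are made up purely for illustration). With a nullable column and a null bind, the OR version returns the row, whereas the NVL and COALESCE versions do not, because NULL = NULL evaluates to unknown:


create table t_nulls ( col number );
insert into t_nulls values ( null );

variable b number      -- bind deliberately left null

select count(*) from t_nulls where ( col = :b or :b is null );    -- counts the row
select count(*) from t_nulls where col = nvl(:b, col);            -- does not
select count(*) from t_nulls where col = coalesce(:b, col);       -- does not
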

NULLs vs NOT NULLs and Performance

When it comes to giving the cost based optimiser the best possible chance of making the “right” decisions, many DBAs are diligent in keeping statistics up to date, using histograms where appropriate, and creating more indexes (or removing surplus indexes).

However, one often neglected area is that the null-ness of columns also impacts optimiser decisions. NULL and NOT NULL do more than just act as constraints; they also add to (or detract from) the value of indexes on those columns. Here’s an example of how the null-ness of a column impacts optimizer decisions. I have a table T which is a copy of DBA_OBJECTS, indexed on OBJECT_ID.


SQL> create table t as
  2  select * from dba_objects
  3  where object_id is not null;

Table created.

SQL>
SQL> create index IX on T ( object_id );

Index created.

Now I want to perform a standard pagination-style query, namely, get the first 5 rows in order of OBJECT_ID.


SQL>
SQL> set autotrace traceonly
SQL> select *
  2  from
  3    ( select * from t
  4      order by object_id
  5    )
  6  where rownum <= 5;

5 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 3299198703

----------------------------------------------------------------------------------------
| Id  | Operation               | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |      |     5 |  2405 |       |  2755   (1)| 00:00:01 |
|*  1 |  COUNT STOPKEY          |      |       |       |       |            |          |
|   2 |   VIEW                  |      | 78750 |    36M|       |  2755   (1)| 00:00:01 |
|*  3 |    SORT ORDER BY STOPKEY|      | 78750 |     9M|    14M|  2755   (1)| 00:00:01 |
|   4 |     TABLE ACCESS FULL   | T    | 78750 |     9M|       |   428   (1)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ROWNUM<=5)
   3 - filter(ROWNUM<=5)


Statistics
----------------------------------------------------------
         52  recursive calls
        130  db block gets
       1591  consistent gets
          0  physical reads
      25420  redo size
       2735  bytes sent via SQL*Net to client
        607  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          5  rows processed

I scanned the entire table and burned around 1500 consistent gets. Now I’ll let the database know what I already know, namely that the OBJECT_ID column contains no nulls. Then I’ll repeat the same query.


SQL>
SQL> alter table T modify object_id not null;

Table altered.

SQL>
SQL> select *
  2  from
  3    ( select * from t
  4      order by object_id
  5    )
  6  where rownum <= 5;

5 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 3114946973

--------------------------------------------------------------------------------------
| Id  | Operation                     | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |      |     5 |  2405 |     3   (0)| 00:00:01 |
|*  1 |  COUNT STOPKEY                |      |       |       |            |          |
|   2 |   VIEW                        |      |     5 |  2405 |     3   (0)| 00:00:01 |
|   3 |    TABLE ACCESS BY INDEX ROWID| T    | 78750 |     9M|     3   (0)| 00:00:01 |
|   4 |     INDEX FULL SCAN           | IX   |     5 |       |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ROWNUM<=5)


Statistics
----------------------------------------------------------
         85  recursive calls
        132  db block gets
         72  consistent gets
          1  physical reads
      27192  redo size
       2735  bytes sent via SQL*Net to client
        607  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
          5  rows processed

Woo hoo! 20 times more efficient. The extra information we provided to the database allowed for more execution paths to be considered by the optimizer.

That’s all well and good. But how do we know which columns might be missing an appropriate NOT NULL constraint?

Well, the following routine might help.

The PL/SQL procedure below deduces a ‘number of rows that are null’ count for all columns that are indexed, for all tables in the schema (passed as parameter P_SCHEMA), although you can pass a table name to restrict the procedure to just that table. For any column that contains no nulls, the appropriate ‘alter table … modify ( column_name not null )’ command will be output. (Make sure you set serveroutput on.)

Notes:

  • The client for whom I wrote this script had no NOT NULL constraints on any table, so the procedure only takes a rough stab at ALL_CONSTRAINTS for existing constraints.
  • It tries to keep things sensible – avoiding complex data types, tables that are auto-generated or used for queues, etc. – but there’s no guarantee it won’t either miss a table or pick up one it should not.
  • This is a brute-force approach – it works by scanning every table in the schema, so use your discretion as to when would be a suitable time to run this routine. But it will only scan each table once to determine the null count for all candidate columns.
  • As with any diagnosis script, you should not apply its recommendations without some careful thought first.
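If you just want a quick, dictionary-only sanity check before running the full routine, a query along these lines (no table scans – it simply lists indexed columns that are still declared nullable, regardless of the actual data) can give you a feel for the size of the problem. The schema name is just an example:


select ic.table_name, ic.column_name, ic.index_name
from   all_ind_columns ic,
       all_tab_columns tc
where  ic.index_owner  = 'SCOTT'
and    tc.owner        = ic.table_owner
and    tc.table_name   = ic.table_name
and    tc.column_name  = ic.column_name
and    tc.nullable     = 'Y'
order by 1, 2;
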

12.2 version


create or replace 
procedure check_indexed_columns_for_null(
                 p_schema varchar2, 
                 p_table_name varchar2 default null) is
  cursor x is
   select 
     table_name,
     column_name,
     count(*) over ( partition by table_name ) as colcount
     from
     (
       select 
         table_name,
         column_name,
         min(existing_constraint)
       from 
       (
       select  
           a.table_name, 
           a.column_name, 
           ( select  count(*)
             from    all_constraints x,
                     all_cons_columns cc
             where   x.owner = c.owner
             and     x.table_name = c.table_name
             and     cc.owner      = x.owner
             and     cc.constraint_name = x.constraint_name
             and     
            (
               ( x.constraint_type = 'C'  and replace(search_condition_vc,'"') = a.column_name||' IS NOT NULL' )
                   or 
               ( x.constraint_type = 'P'  and cc.column_name = a.column_name )
             )
            ) existing_constraint
       from 
         all_ind_columns a,
         all_tables c,
         all_tab_columns ccol
       where a.index_owner = p_schema
       and a.index_owner = p_schema
       and a.table_name = nvl(upper(p_table_name),a.table_name)
       and c.table_name = a.table_name
       and c.owner      = a.table_owner
       and c.owner      = ccol.owner
       and c.table_name = ccol.table_name
       and a.column_name = ccol.column_name
       and c.secondary = 'N'
       and c.temporary = 'N'
       and c.nested    = 'NO'
       and c.external  = 'NO'
       and ccol.data_type_owner is null
       and ccol.data_type not in ('LONG','LONG RAW','CLOB','UROWID','UNDEFINED','NCLOB','BLOB','BFILE','ROWID')
       and (c.owner,c.table_name) not in ( select owner, queue_table from all_queue_tables where owner = p_schema )
       )
       group by 
         table_name,
         column_name
       having min(existing_constraint) = 0
     );

  str0 varchar2(32767); 
  str1 varchar2(32767); 
  str2 varchar2(32767); 
  str3 varchar2(32767);

  prev varchar2(100) := '*';
  cnt number;
  trailer varchar2(5);

procedure do_sql(thesql varchar2) is
  tcursor integer;
  dummy integer;
begin
  -- dbms_output.put_line(thesql);
  execute immediate thesql;
end;

begin
for i in x loop
  if prev != i.table_name then
    str0 := 'declare ';
    str1 := 'begin select '; str2 := ' into ';
    str3 := ' '; cnt := 1;
  end if;
  if cnt = i.colcount then 
    trailer := ' ';
  else
    trailer := ','||chr(10);
  end if;
  str0 := str0 || 'v'||ltrim(cnt)||' number;';
  str1 := str1 || 'sum(decode('||i.column_name||',null,1,0))'||trailer;
  str2 := str2 || 'v'||ltrim(cnt)||trailer;
  str3 := str3 || 'if v'||ltrim(cnt)||' = 0 then '||
    'dbms_output.put_line(''alter table '||p_schema||'.'||i.table_name||
    ' modify ('||i.column_name||' not null);''); end if;'||chr(10);
  if cnt = i.colcount then
    str2 := str2 ||' from '||p_schema||'.'||i.table_name||';';
    str3 := str3 ||' end;';
    do_sql(str0||' '||str1||' '||str2||' '||str3);
  end if;
  prev := i.table_name;
  cnt := cnt + 1;
end loop;
end;
/
sho err

set serverout on
exec check_indexed_columns_for_null('SCOTT')

11.2 version

(Needs a direct CREATE TABLE grant to the owner of the procedure)


create or replace
procedure check_indexed_columns_for_null(
                 p_schema varchar2,
                 p_table_name varchar2 default null) is
  cursor c_template is
    select table_name, column_name, 0 colcount
    from   all_tab_columns;

  type rowlist is table of c_template%rowtype;  
  r    rowlist;
  rc   sys_refcursor;

  str0 varchar2(32767);
  str1 varchar2(32767);
  str2 varchar2(32767);
  str3 varchar2(32767);

  prev varchar2(100) := '*';
  cnt number;
  trailer varchar2(5);

procedure do_sql(thesql varchar2) is
  tcursor integer;
  dummy integer;
begin
  dbms_output.put_line(thesql);
  execute immediate thesql;
end;

begin

  begin
    execute immediate 'drop table tmp$cons purge';
  exception when others then null;
  end;

  execute immediate 'create table tmp$cons as  select owner, table_name, constraint_name, constraint_type, to_lob(search_condition) search_condition_vc '||
                    'from all_constraints';

  open rc for
    q'{select
      table_name,
      column_name,
      count(*) over ( partition by table_name ) as colcount
      from
      (
        select
          table_name,
          column_name,
          min(existing_constraint)
        from
        (
        select
            a.table_name,
            a.column_name,
            ( select  count(*)
              from    ( select owner, table_name, constraint_name, constraint_type, cast(search_condition_vc as varchar2(4000)) search_condition_vc 
                        from tmp$cons ) x,
                      all_cons_columns cc
              where   x.owner = c.owner
              and     x.table_name = c.table_name
              and     cc.owner      = x.owner
              and     cc.constraint_name = x.constraint_name
              and
             (
                ( x.constraint_type = 'C'  and replace(search_condition_vc,'"') = a.column_name||' IS NOT NULL' )
                    or
                ( x.constraint_type = 'P'  and cc.column_name = a.column_name )
              )
             ) existing_constraint
        from
          all_ind_columns a,
          all_tables c,
          all_tab_columns ccol
        where a.index_owner = :p_schema
        and a.index_owner = :p_schema
        and a.table_name = nvl(upper(:p_table_name),a.table_name)
        and c.table_name = a.table_name
        and c.owner      = a.table_owner
        and c.owner      = ccol.owner
        and c.table_name = ccol.table_name
        and a.column_name = ccol.column_name
        and c.secondary = 'N'
        and c.temporary = 'N'
        and c.nested    = 'NO'
        and (c.owner,c.table_name) not in ( select owner, table_name from all_external_tables where owner = :p_schema )
        and ccol.data_type_owner is null
        and ccol.data_type not in ('LONG','LONG RAW','CLOB','UROWID','UNDEFINED','NCLOB','BLOB','BFILE','ROWID')
        and (c.owner,c.table_name) not in ( select owner, queue_table from all_queue_tables where owner = :p_schema )
        )
        group by
          table_name,
          column_name
        having min(existing_constraint) = 0
      )
      }' using p_schema,p_schema,p_table_name,p_schema,p_schema;

  fetch rc bulk collect into r;
  close rc;
  
  for i in 1 .. r.count loop
    if prev != r(i).table_name then
      str0 := 'declare ';
      str1 := 'begin select '; str2 := ' into ';
      str3 := ' '; cnt := 1;
    end if;
    if cnt = r(i).colcount then
      trailer := ' ';
    else
      trailer := ','||chr(10);
    end if;
    str0 := str0 || 'v'||ltrim(cnt)||' number;';
    str1 := str1 || 'sum(decode('||r(i).column_name||',null,1,0))'||trailer;
    str2 := str2 || 'v'||ltrim(cnt)||trailer;
    str3 := str3 || 'if v'||ltrim(cnt)||' = 0 then '||
      'dbms_output.put_line(''alter table '||p_schema||'.'||r(i).table_name||
      ' modify ('||r(i).column_name||' not null);''); end if;'||chr(10);
    if cnt = r(i).colcount then
      str2 := str2 ||' from '||p_schema||'.'||r(i).table_name||';';
      str3 := str3 ||' end;';
      do_sql(str0||' '||str1||' '||str2||' '||str3);
    end if;
    prev := r(i).table_name;
    cnt := cnt + 1;
  end loop;
end;
/

 

If you want to see the underlying table scan queries that are being run, simply comment back in the “dbms_output.put_line” in the DO_SQL subroutine.

Enjoy!

Getting started…adding an account to use

If you’ve read my previous post about getting started with the Oracle database, then hopefully you now have your very own database installed and running, and you have explored a little with the sample schemas using SQL Developer. Perhaps now you want to venture out into your own database development, and for that you will want to create your own user account and create your own tables. Here’s another video which will guide you through the process.

The video shows the process of creating a user via SQL Developer. If you are a command line addict, then here are the commands you will need to run to do the same. You can use SQL*Plus or SQLcl to run these commands. If you’ve never done that, simply:

  • open up a terminal or MS-DOS command window,
  • type sqlplus and hit Enter,
  • type SYSTEM for the username and hit Enter,
  • type the password you specified when you created the database as per my previous blog post.

You will see a prompt “SQL>”. Enter the following commands to create your first user account.


SQL> create user DEVELOPER identified by MyCaseSensitivePassword;

User created.

SQL> grant connect, resource to DEVELOPER;

Grant succeeded.

SQL> alter user DEVELOPER quota unlimited on USERS;

User altered.

And you are now good to go. Fire up SQL Developer as shown in the video, and you are ready to connect as your new user account as shown at this moment in the video.
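If you’d rather stay at the command line a little longer, a first test of the new account might look something like this (the table and data are just examples):


connect DEVELOPER/MyCaseSensitivePassword

create table my_first_table ( id number, note varchar2(50) );

insert into my_first_table values ( 1, 'Hello, Oracle!' );
commit;

select * from my_first_table;
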

Remember – we’re doing this to learn and explore Oracle. It is very easy to see all of the functionality available to you, and to leap in and start building a fully-fledged application. But don’t forget – for that, you need more than just a play-thing at home. You need backups; you need security; you need scalability; you need redundancy and failover. So if you’re ready to get serious, you probably need to think about using a cloud account or similar.

Enjoy your database!

Those pesky LONG columns

There was a time, many moons ago, when CLOB, BLOB and BFILE did not exist as data types. So if you had anything longer than a few kilobytes of data to store, you had to use a LONG or a LONG RAW. But those data types came with all sorts of restrictions and frustrations, and we all embraced the improvements that the LOB data types brought in Oracle 8. But of course, we carry a lot of that historical “baggage” in the data dictionary.


SQL> select owner, table_name, column_name
  2  from   dba_tab_cols
  3  where  data_type = 'LONG'
  4  and    substr(table_name,1,4) in ('DBA_','ALL_','USER')
  5  order by 1,2, 3;

OWNER                          TABLE_NAME                     COLUMN_NAME
------------------------------ ------------------------------ -------------------------
SYS                            ALL_ARGUMENTS                  DEFAULT_VALUE
SYS                            ALL_CLUSTER_HASH_EXPRESSIONS   HASH_EXPRESSION
SYS                            ALL_CONSTRAINTS                SEARCH_CONDITION
SYS                            ALL_IND_EXPRESSIONS            COLUMN_EXPRESSION
SYS                            ALL_IND_PARTITIONS             HIGH_VALUE
SYS                            ALL_IND_SUBPARTITIONS          HIGH_VALUE
SYS                            ALL_MVIEWS                     QUERY
SYS                            ALL_MVIEW_AGGREGATES           MEASURE
SYS                            ALL_MVIEW_ANALYSIS             QUERY
SYS                            ALL_NESTED_TABLE_COLS          DATA_DEFAULT
SYS                            ALL_REGISTERED_MVIEWS          QUERY_TXT
SYS                            ALL_REGISTERED_SNAPSHOTS       QUERY_TXT
SYS                            ALL_SNAPSHOTS                  QUERY
SYS                            ALL_SQLSET_PLANS               OTHER
SYS                            ALL_SUBPARTITION_TEMPLATES     HIGH_BOUND
SYS                            ALL_SUMMARIES                  QUERY
SYS                            ALL_SUMMARY_AGGREGATES         MEASURE
SYS                            ALL_TAB_COLS                   DATA_DEFAULT
SYS                            ALL_TAB_COLS_V$                DATA_DEFAULT
SYS                            ALL_TAB_COLUMNS                DATA_DEFAULT
SYS                            ALL_TAB_PARTITIONS             HIGH_VALUE
SYS                            ALL_TAB_SUBPARTITIONS          HIGH_VALUE
SYS                            ALL_TRIGGERS                   TRIGGER_BODY
SYS                            ALL_VIEWS                      TEXT
SYS                            ALL_VIEWS_AE                   TEXT
SYS                            ALL_ZONEMAPS                   QUERY
SYS                            ALL_ZONEMAP_MEASURES           MEASURE
SYS                            DBA_ADVISOR_SQLPLANS           OTHER
SYS                            DBA_ARGUMENTS                  DEFAULT_VALUE
SYS                            DBA_CLUSTER_HASH_EXPRESSIONS   HASH_EXPRESSION
SYS                            DBA_CONSTRAINTS                SEARCH_CONDITION
SYS                            DBA_IM_EXPRESSIONS             SQL_EXPRESSION
SYS                            DBA_IND_EXPRESSIONS            COLUMN_EXPRESSION
SYS                            DBA_IND_PARTITIONS             HIGH_VALUE
SYS                            DBA_IND_SUBPARTITIONS          HIGH_VALUE
SYS                            DBA_MVIEWS                     QUERY
SYS                            DBA_MVIEW_AGGREGATES           MEASURE
SYS                            DBA_MVIEW_ANALYSIS             QUERY
SYS                            DBA_NESTED_TABLE_COLS          DATA_DEFAULT
SYS                            DBA_OUTLINES                   SQL_TEXT
SYS                            DBA_REGISTERED_MVIEWS          QUERY_TXT
SYS                            DBA_REGISTERED_SNAPSHOTS       QUERY_TXT
SYS                            DBA_SNAPSHOTS                  QUERY
SYS                            DBA_SQLSET_PLANS               OTHER
SYS                            DBA_SQLTUNE_PLANS              OTHER
SYS                            DBA_SUBPARTITION_TEMPLATES     HIGH_BOUND
SYS                            DBA_SUMMARIES                  QUERY
SYS                            DBA_SUMMARY_AGGREGATES         MEASURE
SYS                            DBA_TAB_COLS                   DATA_DEFAULT
SYS                            DBA_TAB_COLS_V$                DATA_DEFAULT
SYS                            DBA_TAB_COLUMNS                DATA_DEFAULT
SYS                            DBA_TAB_PARTITIONS             HIGH_VALUE
SYS                            DBA_TAB_SUBPARTITIONS          HIGH_VALUE
SYS                            DBA_TRIGGERS                   TRIGGER_BODY
SYS                            DBA_VIEWS                      TEXT
SYS                            DBA_VIEWS_AE                   TEXT
SYS                            DBA_ZONEMAPS                   QUERY
SYS                            DBA_ZONEMAP_MEASURES           MEASURE
SYS                            USER_ADVISOR_SQLPLANS          OTHER
SYS                            USER_ARGUMENTS                 DEFAULT_VALUE
SYS                            USER_CLUSTER_HASH_EXPRESSIONS  HASH_EXPRESSION
SYS                            USER_CONSTRAINTS               SEARCH_CONDITION
SYS                            USER_IM_EXPRESSIONS            SQL_EXPRESSION
SYS                            USER_IND_EXPRESSIONS           COLUMN_EXPRESSION
SYS                            USER_IND_PARTITIONS            HIGH_VALUE
SYS                            USER_IND_SUBPARTITIONS         HIGH_VALUE
SYS                            USER_MVIEWS                    QUERY
SYS                            USER_MVIEW_AGGREGATES          MEASURE
SYS                            USER_MVIEW_ANALYSIS            QUERY
SYS                            USER_NESTED_TABLE_COLS         DATA_DEFAULT
SYS                            USER_OUTLINES                  SQL_TEXT
SYS                            USER_REGISTERED_MVIEWS         QUERY_TXT
SYS                            USER_REGISTERED_SNAPSHOTS      QUERY_TXT
SYS                            USER_SNAPSHOTS                 QUERY
SYS                            USER_SQLSET_PLANS              OTHER
SYS                            USER_SQLTUNE_PLANS             OTHER
SYS                            USER_SUBPARTITION_TEMPLATES    HIGH_BOUND
SYS                            USER_SUMMARIES                 QUERY
SYS                            USER_SUMMARY_AGGREGATES        MEASURE
SYS                            USER_TAB_COLS                  DATA_DEFAULT
SYS                            USER_TAB_COLS_V$               DATA_DEFAULT
SYS                            USER_TAB_COLUMNS               DATA_DEFAULT
SYS                            USER_TAB_PARTITIONS            HIGH_VALUE
SYS                            USER_TAB_SUBPARTITIONS         HIGH_VALUE
SYS                            USER_TRIGGERS                  TRIGGER_BODY
SYS                            USER_VIEWS                     TEXT
SYS                            USER_VIEWS_AE                  TEXT
SYS                            USER_ZONEMAPS                  QUERY
SYS                            USER_ZONEMAP_MEASURES          MEASURE
WMSYS                          ALL_WM_IND_EXPRESSIONS         COLUMN_EXPRESSION
WMSYS                          USER_WM_IND_EXPRESSIONS        COLUMN_EXPRESSION

So when we want to do some data mining against the dictionary, the following error is a common and frustrating one!


SQL> select *
  2  from   dba_views
  3  where  text like '%mystring%';
where  text like '%mystring%'
       *
ERROR at line 3:
ORA-00932: inconsistent datatypes: expected CHAR got LONG

I often hear the comment “Why not just change all those LONGs to CLOBs in the next version?”. I imagine it would be possible to do so, but just pause for a second and think of the amount of regression testing that would need to occur – not just inside the database group in the Oracle organization, but for every customer that has ever coded up a mechanism in PL/SQL, or C, or Java, or any other language that is potentially using a LONG column in one of their queries. I don’t know for sure, but I suspect that is the reason why we have added new columns to the dictionary rather than modify existing ones. For example, if you look at DBA_VIEWS, you can see the addition of a column TEXT_VC, which is a VARCHAR2 equivalent of the pre-existing TEXT column, which is a LONG.


SQL> desc DBA_VIEWS
 Name                                Null?    Type
 ----------------------------------- -------- ----------------
 OWNER                               NOT NULL VARCHAR2(128)
 VIEW_NAME                           NOT NULL VARCHAR2(128)
 TEXT_LENGTH                                  NUMBER
 TEXT                                         LONG
 TEXT_VC                                      VARCHAR2(4000)
 TYPE_TEXT_LENGTH                             NUMBER
 TYPE_TEXT                                    VARCHAR2(4000)
 OID_TEXT_LENGTH                              NUMBER
 OID_TEXT                                     VARCHAR2(4000)
 VIEW_TYPE_OWNER                              VARCHAR2(128)
 VIEW_TYPE                                    VARCHAR2(128)
 SUPERVIEW_NAME                               VARCHAR2(128)
 EDITIONING_VIEW                              VARCHAR2(1)
 READ_ONLY                                    VARCHAR2(1)
 CONTAINER_DATA                               VARCHAR2(1)
 BEQUEATH                                     VARCHAR2(12)
 ORIGIN_CON_ID                                NUMBER
 DEFAULT_COLLATION                            VARCHAR2(100)
 CONTAINERS_DEFAULT                           VARCHAR2(3)
 CONTAINER_MAP                                VARCHAR2(3)
 EXTENDED_DATA_LINK                           VARCHAR2(3)
 EXTENDED_DATA_LINK_MAP                       VARCHAR2(3)

TEXT_VC is a nice touch, but it would have been nice to have that as a CLOB column to get access to the entire DDL for the view should it exceed 4000 characters.  I can’t solve that problem, but I thought perhaps I can give you a workable compromise.  In 8i, the TO_LOB function was implemented to allow customers to migrate from LONG to CLOB with minimal fuss.  I can take advantage of that to provide a CLOB-based copy of DBA_VIEWS:


SQL> create table system.t  as
  2  select
  3     owner
  4    ,view_name
  5    ,text_length
  6    ,to_lob(text) text
  7    ,text_vc
  8    ,type_text_length
  9    ,type_text
 10    ,oid_text_length
 11    ,oid_text
 12    ,view_type_owner
 13    ,view_type
 14    ,superview_name
 15    ,editioning_view
 16    ,read_only
 17    ,container_data
 18    ,bequeath
 19    ,origin_con_id
 20    ,default_collation
 21    ,containers_default
 22    ,container_map
 23    ,extended_data_link
 24    ,extended_data_link_map
 25  from dba_views;

Table created.

SQL>
SQL> create index system.t_ix on system.t ( owner, view_name );

Index created.

That is all well and good, but of course, the moment I perform maintenance on any view in the system, that table will be “stale”. That still might be a perfectly reasonable solution for you – you just refresh the table contents (say) once per day, or when you do deployments of schema changes into your database.
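If you take that once-per-day approach, a scheduler job is one way to automate the refresh. The sketch below assumes a hypothetical REFRESH_VIEW_COPY procedure that you would write to delete from SYSTEM.T and re-run the INSERT … SELECT from DBA_VIEWS; the job name and schedule are arbitrary choices:


begin
  dbms_scheduler.create_job(
    job_name        => 'REFRESH_VIEW_TEXT_COPY',   -- arbitrary name
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'REFRESH_VIEW_COPY',        -- hypothetical wrapper procedure
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',      -- e.g. nightly at 2am
    enabled         => true);
end;
/

But perhaps we can do a little better than that. Using a DDL event trigger, we can capture changes on views and adjust our copy accordingly.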


SQL> create or replace
  2  trigger sys.view$clob$handler
  3  after create or alter or drop or rename
  4  on database
  5  when ( ora_dict_obj_type = 'VIEW' )
  6  declare
  7    l_obj_name  varchar2(128) := ora_dict_obj_name;
  8    l_obj_owner varchar2(128) := ora_dict_obj_owner;
  9
 10    l_text   varchar2(1000);
 11    sql_text ora_name_list_t;
 12    l_idx    pls_integer;
 13  begin
 14    lock table system.t in exclusive mode;
 15
 16    --
 17    -- remove the existing row for the view
 18    --
 19    delete from system.t where owner = l_obj_owner and view_name = l_obj_name;
 20
 21    --
 22    -- if it is a rename event, we will try to derive the new name
 23    -- from the sql statement by looking for a trailing "TO"
 24    --
 25    if ora_sysevent in ('RENAME') then
 26      l_idx := ora_sql_txt(sql_text);
 27      for i in 1 .. l_idx
 28      loop
 29          l_text := l_text || sql_text(i);
 30      end loop;
 31      l_idx := instr(lower(l_text),' to ');
 32      if l_idx = 0 then
 33        raise_application_error(-20000,'Could not find appropriate rename content');
 34      end if;
 35      --
 36      -- The SQL has a trailing chr(0) which we need to remove
 37      --
 38      l_text := rtrim(ltrim(substr(l_text,l_idx+4)),' '||chr(0));
 39      --
 40      -- The object name in quotes mean we preserve the case specified, otherwise
 41      -- we will normalize to upper
 42      --
 43      if l_text like '"%"' then
 44        l_obj_name := rtrim(ltrim(l_text,'"'),'"');
 45      else
 46        l_obj_name := upper(l_text);
 47      end if;
 48    end if;
 49
 50    --
 51    -- Now we insert the updated definition for the view, or
 52    -- perhaps its new name
 53    --
 54    if ora_sysevent in ('CREATE','ALTER','RENAME') then
 55      insert into system.t
 56      select
 57         owner
 58        ,view_name
 59        ,text_length
 60        ,to_lob(text) text
 61        ,text_vc
 62        ,type_text_length
 63        ,type_text
 64        ,oid_text_length
 65        ,oid_text
 66        ,view_type_owner
 67        ,view_type
 68        ,superview_name
 69        ,editioning_view
 70        ,read_only
 71        ,container_data
 72        ,bequeath
 73        ,origin_con_id
 74        ,default_collation
 75        ,containers_default
 76        ,container_map
 77        ,extended_data_link
 78        ,extended_data_link_map
 79      from dba_views
 80      where owner = l_obj_owner
 81      and   view_name = l_obj_name;
 82    end if;
 83
 84  end;
 85  /

Trigger created.

Obviously, for more serious usage you’ll be choosing a better table name and not using SYS, but this is just a demo on my laptop. Unlike a DML trigger, where we have access to “new” and “old” images of the data, for a RENAME command I had to probe the SQL text to try to derive the new name. So let’s now perform some view DDL and see how our tracking trigger accommodates the changes.


SQL> create view view1 as select * from ALL_objects;

View created.

SQL> create view view2 as select * from ALL_objects;

View created.

SQL> create view view3 as select * from view2;

View created.

SQL> drop view view1;

View dropped.

SQL> rename view2 to view2a;

Table renamed.

SQL>
SQL>
SQL> select count(*) from system.t;

  COUNT(*)
----------
      7347

SQL> select count(*) from dba_views;

  COUNT(*)
----------
      7347

SQL> select owner, view_name from dba_views
  2  minus
  3  select owner, view_name from system.t;

no rows selected

SQL>
SQL> select owner, view_name from system.t
  2  minus
  3  select owner, view_name from dba_views;

no rows selected

And there we have it – our own custom version of DBA_VIEWS where the TEXT is now exposed as a CLOB column. So now, mining that column for information is as easy as a simple predicate:


SQL> select owner, view_name
  2  from   system.t
  3  where  text like 'selec%ALL_objects';

OWNER                          VIEW_NAME
------------------------------ -------------------
MCDONAC                        VIEW2A
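
And because TEXT is now a genuine CLOB, the usual LOB tooling is available as well – for example, DBMS_LOB.INSTR can be used instead of LIKE (the search string is just an example):


select owner, view_name
from   system.t
where  dbms_lob.instr(text, 'ALL_objects') > 0;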