More triggers are better

Yes, you heard me correctly. If you’ve got one trigger on a table, then you might be surprised to find that having a second one could be a better option. Then again, I also love the sweet scent of a clickbaity, inflammatory blog post title to draw the readers in, so you’ll just have to read on to see which is true.

As much as I am not a fan of using triggers to populate metadata in tables, I concede that it is a popular approach taken by many of us as we code up our applications. Sure, we’d like our application code to store that all-important “who did this change” information with each database row, but more often than not, this gets shoe-horned into the codebase via triggers as an afterthought to the development process.

But our well-read developer will often be thinking: “I’m a savvy developer – I’ve been reading blogs and websites that tell me that fewer triggers are better, so I’ll use as few as I can”, which leads to a scenario like the one I’ll present now.

I’ll start with a standard parent and child table setup, connected via an obvious foreign key:


SQL> create table par (
  2    id           number(10)    not null primary key,
  3    description  varchar2(20)  not null,
  4    created_by   varchar2(20)  not null,
  5    updated_by   varchar2(20)  not null
  6    );

Table created.

SQL>
SQL> create sequence par_seq;

Sequence created.

SQL>
SQL> create sequence chd_seq;

Sequence created.

SQL>
SQL> create table chd (
  2    id           number(10)    not null primary key,
  3    par_id       number(10)    not null,
  4    constraint  fk foreign key ( par_id ) references par ( id )
  5    );

Table created.

SQL>
SQL>
SQL> create index chd_fk_idx on chd (par_id);

Index created.

I’ve got a sequence for each table, so to “bind” each sequence to its table’s primary key I’ll create a trigger. While I’m there, I’ll have the trigger implement some other common application requirements at the same time:

  • Populate the primary key with a sequence value,
  • Populate the CREATED_BY and UPDATED_BY columns on insert of a new row,
  • Amend the UPDATED_BY column when I later update that row.

Here is a trigger to implement that.


SQL> create or replace trigger trg_par
  2  before insert or update on par
  3  for each row
  4  begin
  5      if inserting then
  6         :new.id := par_seq.nextval;
  7         :new.created_by := user;
  8         :new.updated_by := user;
  9      end if;
 10
 11      if updating then
 12         :new.updated_by := user;
 13      end if;
 14  end;
 15  /

Trigger created.

Now my application is ready to go. I’ll insert some data to verify that my trigger has not broken any functionality.


SQL> insert into par (description) values ('test');

1 row created.

SQL> insert into par (description) values ('test2');

1 row created.

SQL> select * from par;

  ID DESCRIPTION          CREATED_BY           UPDATED_BY
---- -------------------- -------------------- --------------------
   1 test                 MCDONAC              MCDONAC
   2 test2                MCDONAC              MCDONAC

2 rows selected.

SQL>
SQL> insert into chd (id,par_id) values (chd_seq.nextval,1);

1 row created.

SQL> insert into chd (id,par_id) values (chd_seq.nextval,2);

1 row created.

SQL> select * from chd;

  ID     PAR_ID
---- ----------
   1          1
   2          2

2 rows selected.

SQL>
SQL> commit;

Commit complete.

So far so good. Now let’s see what happens when I do a simple update of the DESCRIPTION column on the parent table. Note that this column is not involved in any primary key, index or foreign key relationship – it’s just a simple string column.


SQL> update par set description = 'anything' where id = 1;

1 row updated.

SQL> select locked_mode, object_name
  2  from v$locked_object l, all_objects ob
  3  where ob.object_id =l.object_id;

LOCKED_MODE OBJECT_NAME
----------- ----------------------------------------
          3 PAR
          3 CHD

2 rows selected.

Notice the locks that have been taken. Understandably, a lock was taken on the PAR table because I’ve just updated a row in it – the database needs to make sure that no-one drops it or modifies its structure whilst I’ve got an outstanding transaction. But here is the interesting part: we also took a lock on the child table CHD. That seems superfluous because, as I said, we are not updating anything that could possibly impact the child table in any way.
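
As an aside, LOCKED_MODE 3 is a row-exclusive (SX) table lock. If you prefer names over numbers in that query, a simple DECODE of the common modes does the trick:


select decode(l.locked_mode,
              2, 'Row Share (SS)',
              3, 'Row Exclusive (SX)',
              4, 'Share (S)',
              5, 'Share Row Exclusive (SSX)',
              6, 'Exclusive (X)',
              to_char(l.locked_mode)) locked_mode,
       ob.object_name
from   v$locked_object l, all_objects ob
where  ob.object_id = l.object_id;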

Why did the database choose to add this extra lock?

The cause is the way I coded the trigger. Notice that the trigger body contains a reference to the primary key:


  5      if inserting then
  6         :new.id := par_seq.nextval;

Even though that part of the trigger body is not executed (because I am performing an update, not an insert), its mere presence is enough for the database to cautiously take that extra lock on the CHD table, because we “just might” be messing around with the parent table’s primary key here.
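
You can see this column dependency recorded in the dictionary – the *_TRIGGER_COLS views list the columns each trigger references and how they are used. I can’t promise this is exactly what the locking logic consults, but it does show that the reference to ID is tracked:


select trigger_name, column_name, column_usage
from   user_trigger_cols
where  table_name = 'PAR';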

So now I’ll roll back that update and convert the single trigger into two triggers – one for insert, and one dedicated to update.


SQL> rollback;

Rollback complete.

SQL>
SQL> drop trigger trg_par ;

Trigger dropped.

SQL>
SQL> create or replace trigger trg1_par
  2  before insert on par
  3  for each row
  4  begin
  5      :new.id := par_seq.nextval;
  6      :new.created_by := user;
  7      :new.updated_by := user;
  8  end;
  9  /

Trigger created.

SQL>
SQL> create or replace trigger trg2_par
  2  before update on par
  3  for each row
  4  begin
  5      if updating then
  6          :new.updated_by := user;
  7      end if;
  8  end;
  9  /

Trigger created.

Now I’ll run the same update:


SQL> update par set description = 'anything' where id = 1;

1 row updated.

SQL> select locked_mode, object_name
  2  from v$locked_object l, all_objects ob
  3  where ob.object_id =l.object_id;

LOCKED_MODE OBJECT_NAME
----------- ----------------------------------------
          3 PAR

1 row selected.

Because the reference to the primary key column of table PAR is no longer present in the firing (update) trigger, I no longer see the lock on the CHD table.

In practice, it is unlikely that this additional lock is going to cause you a lot of harm – it will only block operations on the child table such as a TRUNCATE, or DDL that makes a structural change, and it is reasonable to assume that these are rare occurrences. But I’m a fan of the philosophy of: have as few locks as possible, but always as many as required. So in this case, you might want to consider opting for two triggers instead of one.

And of course, perhaps getting to zero triggers might be the best option.

When WHEN went faster

Yeah…try saying that blog post title 10 times in a row as fast as you can.

But since we’re talking about doing things fast, this is just a quick post about a conversation I had on Twitter yesterday – namely, whether the WHEN clause on a trigger gives better performance than the equivalent IF test inside the trigger body.

That is an easy benchmark to whip up – I just need a couple of tables, each with a simple trigger, differing only in their use of the WHEN clause. Here is my setup:


SQL> create table t1 ( x int );

Table created.

SQL> create table t2 ( x int );

Table created.

SQL> create or replace
  2  trigger TRG1
  3  before insert on t1
  4  for each row
  5  begin
  6    if sys_context('USERENV','SID') = 0 then
  7       :new.x := 0;
  8    end if;
  9  end;
 10  /

Trigger created.

SQL> create or replace
  2  trigger TRG2
  3  before insert on t2
  4  for each row
  5  when ( sys_context('USERENV','SID') = 0 )
  6  begin
  7     :new.x := 0;
  8  end;
  9  /

Trigger created.

I’m using an always-false condition (no session has a SID of zero), so the body of each trigger will never run – only the WHEN clause or the IF test will be evaluated. Now I’m ready to slam a bunch of inserts into each table and measure the performance.


SQL> set timing on
SQL> insert into t1
  2  select 1 from dual
  3  connect by level <= 1000000;

1000000 rows created.

Elapsed: 00:00:02.52

SQL> insert into t2
  2  select 1 from dual
  3  connect by level <= 1000000;

1000000 rows created.

Elapsed: 00:00:00.41

There is an immediate urge here to go screaming from the rooftops that WHEN is the all-conquering hero of this benchmark. And yes, it is indeed faster, but make sure you keep the results in perspective. We gained back about 2 seconds per 1 million inserts. For me, once I’m inserting millions of rows into a table, I’m starting to think not about trigger performance but about whether this is the kind of table I want to have triggers on at all.

Those pesky LONG columns

There was a time, many moons ago, when CLOB, BLOB and BFILE did not exist as data types. So if you had anything longer than a few kilobytes of data to store, you had to use a LONG or a LONG RAW. But those data types came with all sorts of restrictions and frustrations, and we all embraced the improvements that the LOB data types brought in Oracle 8. But of course, we still carry a lot of that historical “baggage” in the data dictionary.


SQL> select owner, table_name, column_name
  2  from   dba_tab_cols
  3  where  data_type = 'LONG'
  4  and    substr(table_name,1,4) in ('DBA_','ALL_','USER')
  5  order by 1,2, 3;

OWNER                          TABLE_NAME                     COLUMN_NAME
------------------------------ ------------------------------ -------------------------
SYS                            ALL_ARGUMENTS                  DEFAULT_VALUE
SYS                            ALL_CLUSTER_HASH_EXPRESSIONS   HASH_EXPRESSION
SYS                            ALL_CONSTRAINTS                SEARCH_CONDITION
SYS                            ALL_IND_EXPRESSIONS            COLUMN_EXPRESSION
SYS                            ALL_IND_PARTITIONS             HIGH_VALUE
SYS                            ALL_IND_SUBPARTITIONS          HIGH_VALUE
SYS                            ALL_MVIEWS                     QUERY
SYS                            ALL_MVIEW_AGGREGATES           MEASURE
SYS                            ALL_MVIEW_ANALYSIS             QUERY
SYS                            ALL_NESTED_TABLE_COLS          DATA_DEFAULT
SYS                            ALL_REGISTERED_MVIEWS          QUERY_TXT
SYS                            ALL_REGISTERED_SNAPSHOTS       QUERY_TXT
SYS                            ALL_SNAPSHOTS                  QUERY
SYS                            ALL_SQLSET_PLANS               OTHER
SYS                            ALL_SUBPARTITION_TEMPLATES     HIGH_BOUND
SYS                            ALL_SUMMARIES                  QUERY
SYS                            ALL_SUMMARY_AGGREGATES         MEASURE
SYS                            ALL_TAB_COLS                   DATA_DEFAULT
SYS                            ALL_TAB_COLS_V$                DATA_DEFAULT
SYS                            ALL_TAB_COLUMNS                DATA_DEFAULT
SYS                            ALL_TAB_PARTITIONS             HIGH_VALUE
SYS                            ALL_TAB_SUBPARTITIONS          HIGH_VALUE
SYS                            ALL_TRIGGERS                   TRIGGER_BODY
SYS                            ALL_VIEWS                      TEXT
SYS                            ALL_VIEWS_AE                   TEXT
SYS                            ALL_ZONEMAPS                   QUERY
SYS                            ALL_ZONEMAP_MEASURES           MEASURE
SYS                            DBA_ADVISOR_SQLPLANS           OTHER
SYS                            DBA_ARGUMENTS                  DEFAULT_VALUE
SYS                            DBA_CLUSTER_HASH_EXPRESSIONS   HASH_EXPRESSION
SYS                            DBA_CONSTRAINTS                SEARCH_CONDITION
SYS                            DBA_IM_EXPRESSIONS             SQL_EXPRESSION
SYS                            DBA_IND_EXPRESSIONS            COLUMN_EXPRESSION
SYS                            DBA_IND_PARTITIONS             HIGH_VALUE
SYS                            DBA_IND_SUBPARTITIONS          HIGH_VALUE
SYS                            DBA_MVIEWS                     QUERY
SYS                            DBA_MVIEW_AGGREGATES           MEASURE
SYS                            DBA_MVIEW_ANALYSIS             QUERY
SYS                            DBA_NESTED_TABLE_COLS          DATA_DEFAULT
SYS                            DBA_OUTLINES                   SQL_TEXT
SYS                            DBA_REGISTERED_MVIEWS          QUERY_TXT
SYS                            DBA_REGISTERED_SNAPSHOTS       QUERY_TXT
SYS                            DBA_SNAPSHOTS                  QUERY
SYS                            DBA_SQLSET_PLANS               OTHER
SYS                            DBA_SQLTUNE_PLANS              OTHER
SYS                            DBA_SUBPARTITION_TEMPLATES     HIGH_BOUND
SYS                            DBA_SUMMARIES                  QUERY
SYS                            DBA_SUMMARY_AGGREGATES         MEASURE
SYS                            DBA_TAB_COLS                   DATA_DEFAULT
SYS                            DBA_TAB_COLS_V$                DATA_DEFAULT
SYS                            DBA_TAB_COLUMNS                DATA_DEFAULT
SYS                            DBA_TAB_PARTITIONS             HIGH_VALUE
SYS                            DBA_TAB_SUBPARTITIONS          HIGH_VALUE
SYS                            DBA_TRIGGERS                   TRIGGER_BODY
SYS                            DBA_VIEWS                      TEXT
SYS                            DBA_VIEWS_AE                   TEXT
SYS                            DBA_ZONEMAPS                   QUERY
SYS                            DBA_ZONEMAP_MEASURES           MEASURE
SYS                            USER_ADVISOR_SQLPLANS          OTHER
SYS                            USER_ARGUMENTS                 DEFAULT_VALUE
SYS                            USER_CLUSTER_HASH_EXPRESSIONS  HASH_EXPRESSION
SYS                            USER_CONSTRAINTS               SEARCH_CONDITION
SYS                            USER_IM_EXPRESSIONS            SQL_EXPRESSION
SYS                            USER_IND_EXPRESSIONS           COLUMN_EXPRESSION
SYS                            USER_IND_PARTITIONS            HIGH_VALUE
SYS                            USER_IND_SUBPARTITIONS         HIGH_VALUE
SYS                            USER_MVIEWS                    QUERY
SYS                            USER_MVIEW_AGGREGATES          MEASURE
SYS                            USER_MVIEW_ANALYSIS            QUERY
SYS                            USER_NESTED_TABLE_COLS         DATA_DEFAULT
SYS                            USER_OUTLINES                  SQL_TEXT
SYS                            USER_REGISTERED_MVIEWS         QUERY_TXT
SYS                            USER_REGISTERED_SNAPSHOTS      QUERY_TXT
SYS                            USER_SNAPSHOTS                 QUERY
SYS                            USER_SQLSET_PLANS              OTHER
SYS                            USER_SQLTUNE_PLANS             OTHER
SYS                            USER_SUBPARTITION_TEMPLATES    HIGH_BOUND
SYS                            USER_SUMMARIES                 QUERY
SYS                            USER_SUMMARY_AGGREGATES        MEASURE
SYS                            USER_TAB_COLS                  DATA_DEFAULT
SYS                            USER_TAB_COLS_V$               DATA_DEFAULT
SYS                            USER_TAB_COLUMNS               DATA_DEFAULT
SYS                            USER_TAB_PARTITIONS            HIGH_VALUE
SYS                            USER_TAB_SUBPARTITIONS         HIGH_VALUE
SYS                            USER_TRIGGERS                  TRIGGER_BODY
SYS                            USER_VIEWS                     TEXT
SYS                            USER_VIEWS_AE                  TEXT
SYS                            USER_ZONEMAPS                  QUERY
SYS                            USER_ZONEMAP_MEASURES          MEASURE
WMSYS                          ALL_WM_IND_EXPRESSIONS         COLUMN_EXPRESSION
WMSYS                          USER_WM_IND_EXPRESSIONS        COLUMN_EXPRESSION

So when we want to do some data mining against the dictionary, the following error is a common and frustrating one!


SQL> select *
  2  from   dba_views
  3  where  text like '%mystring%';
where  text like '%mystring%'
       *
ERROR at line 3:
ORA-00932: inconsistent datatypes: expected CHAR got LONG

I often hear the comment: “Why not just change all those LONGs to CLOBs in the next version?”. I imagine it would be possible to do so, but pause for a second and think of the amount of regression testing that would need to occur – not just for the database group within Oracle, but for every customer that has ever coded up a mechanism in PL/SQL, C, Java, or any other language that is potentially using a LONG column in one of their queries. I don’t know for sure, but I suspect that is why new columns have been added to the dictionary rather than modifying the existing ones. For example, if you look at DBA_VIEWS, you can see the addition of a column TEXT_VC, which is a VARCHAR2 equivalent of the pre-existing TEXT column, which is a LONG.


SQL> desc DBA_VIEWS
 Name                                Null?    Type
 ----------------------------------- -------- ----------------
 OWNER                               NOT NULL VARCHAR2(128)
 VIEW_NAME                           NOT NULL VARCHAR2(128)
 TEXT_LENGTH                                  NUMBER
 TEXT                                         LONG
 TEXT_VC                                      VARCHAR2(4000)
 TYPE_TEXT_LENGTH                             NUMBER
 TYPE_TEXT                                    VARCHAR2(4000)
 OID_TEXT_LENGTH                              NUMBER
 OID_TEXT                                     VARCHAR2(4000)
 VIEW_TYPE_OWNER                              VARCHAR2(128)
 VIEW_TYPE                                    VARCHAR2(128)
 SUPERVIEW_NAME                               VARCHAR2(128)
 EDITIONING_VIEW                              VARCHAR2(1)
 READ_ONLY                                    VARCHAR2(1)
 CONTAINER_DATA                               VARCHAR2(1)
 BEQUEATH                                     VARCHAR2(12)
 ORIGIN_CON_ID                                NUMBER
 DEFAULT_COLLATION                            VARCHAR2(100)
 CONTAINERS_DEFAULT                           VARCHAR2(3)
 CONTAINER_MAP                                VARCHAR2(3)
 EXTENDED_DATA_LINK                           VARCHAR2(3)
 EXTENDED_DATA_LINK_MAP                       VARCHAR2(3)

TEXT_VC is a nice touch, but it would have been even nicer as a CLOB column, giving access to the entire DDL for the view should it exceed 4000 characters. I can’t solve that problem, but I thought perhaps I could give you a workable compromise. In 8i, the TO_LOB function was implemented to allow customers to migrate from LONG to CLOB with minimal fuss. I can take advantage of that to provide a CLOB-based copy of DBA_VIEWS:


SQL> create table system.t  as
  2  select
  3     owner
  4    ,view_name
  5    ,text_length
  6    ,to_lob(text) text
  7    ,text_vc
  8    ,type_text_length
  9    ,type_text
 10    ,oid_text_length
 11    ,oid_text
 12    ,view_type_owner
 13    ,view_type
 14    ,superview_name
 15    ,editioning_view
 16    ,read_only
 17    ,container_data
 18    ,bequeath
 19    ,origin_con_id
 20    ,default_collation
 21    ,containers_default
 22    ,container_map
 23    ,extended_data_link
 24    ,extended_data_link_map
 25  from dba_views;

Table created.

SQL>
SQL> create index system.t_ix on system.t ( owner, view_name );

Index created.

That is all well and good, but of course, the moment I perform maintenance on any view in the system, that table will be “stale”. That still might be a perfectly reasonable solution for you – you just refresh the table contents (say) once per day, or whenever you deploy schema changes into your database.
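
If a scheduled refresh is good enough for you, a job along these lines would do it (a minimal sketch – the job name and run time are arbitrary choices of mine, and it assumes the SYSTEM.T table created above):


begin
  dbms_scheduler.create_job(
    job_name        => 'refresh_view_copy',   -- hypothetical name
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'{begin
                            delete from system.t;
                            insert into system.t
                            select owner, view_name, text_length, to_lob(text), text_vc,
                                   type_text_length, type_text, oid_text_length, oid_text,
                                   view_type_owner, view_type, superview_name, editioning_view,
                                   read_only, container_data, bequeath, origin_con_id,
                                   default_collation, containers_default, container_map,
                                   extended_data_link, extended_data_link_map
                            from   dba_views;
                            commit;
                          end;}',
    start_date      => systimestamp,
    repeat_interval => 'freq=daily; byhour=2; byminute=0; bysecond=0',
    enabled         => true);
end;
/

But perhaps we can do a little better than that. Using a DDL event trigger, we can capture changes on views and adjust our copy accordingly.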


SQL> create or replace
  2  trigger sys.view$clob$handler
  3  after create or alter or drop or rename
  4  on database
  5  when ( ora_dict_obj_type = 'VIEW' )
  6  declare
  7    l_obj_name  varchar2(128) := ora_dict_obj_name;
  8    l_obj_owner varchar2(128) := ora_dict_obj_owner;
  9
 10    l_text   varchar2(1000);
 11    sql_text ora_name_list_t;
 12    l_idx    pls_integer;
 13  begin
 14    lock table system.t in exclusive mode;
 15
 16    --
 17    -- remove the existing row for the view
 18    --
 19    delete from system.t where owner = l_obj_owner and view_name = l_obj_name;
 20
 21    --
 22    -- if it is a rename event, we will try to derive the new name
 23    -- from the sql statement by looking for a trailing "TO"
 24    --
 25    if ora_sysevent in ('RENAME') then
 26      l_idx := ora_sql_txt(sql_text);
 27      for i in 1 .. l_idx
 28      loop
 29          l_text := l_text || sql_text(i);
 30      end loop;
 31      l_idx := instr(lower(l_text),' to ');
 32      if l_idx = 0 then
 33        raise_application_error(-20000,'Could not find appropriate rename content');
 34      end if;
 35      --
 36      -- The SQL has a trailing chr(0) which we need to remove
 37      --
 38      l_text := rtrim(ltrim(substr(l_text,l_idx+4)),' '||chr(0));
 39      --
 40      -- The object name in quotes means we preserve the case specified, otherwise
 41      -- we will normalize to upper
 42      --
 43      if l_text like '"%"' then
 44        l_obj_name := rtrim(ltrim(l_text,'"'),'"');
 45      else
 46        l_obj_name := upper(l_text);
 47      end if;
 48    end if;
 49
 50    --
 51    -- Now we insert the updated definition for the view, or
 52    -- perhaps its new name
 53    --
 54    if ora_sysevent in ('CREATE','ALTER','RENAME') then
 55      insert into system.t
 56      select
 57         owner
 58        ,view_name
 59        ,text_length
 60        ,to_lob(text) text
 61        ,text_vc
 62        ,type_text_length
 63        ,type_text
 64        ,oid_text_length
 65        ,oid_text
 66        ,view_type_owner
 67        ,view_type
 68        ,superview_name
 69        ,editioning_view
 70        ,read_only
 71        ,container_data
 72        ,bequeath
 73        ,origin_con_id
 74        ,default_collation
 75        ,containers_default
 76        ,container_map
 77        ,extended_data_link
 78        ,extended_data_link_map
 79      from dba_views
 80      where owner = l_obj_owner
 81      and   view_name = l_obj_name;
 82    end if;
 83
 84  end;
 85  /

Trigger created.

Obviously, for more serious usage you’d choose a better table name and not use SYS, but this is just a demo on my laptop. Unlike a DML trigger, where we have access to the “new” and “old” images of the data, for a RENAME command I had to probe the SQL text to try to derive the new name. So let’s now perform some view DDL and see how our tracking trigger accommodates the changes.


SQL> create view view1 as select * from ALL_objects;

View created.

SQL> create view view2 as select * from ALL_objects;

View created.

SQL> create view view3 as select * from view2;

View created.

SQL> drop view view1;

View dropped.

SQL> rename view2 to view2a;

Table renamed.

SQL>
SQL>
SQL> select count(*) from system.t;

  COUNT(*)
----------
      7347

SQL> select count(*) from dba_views;

  COUNT(*)
----------
      7347

SQL> select owner, view_name from dba_views
  2  minus
  3  select owner, view_name from system.t;

no rows selected

SQL>
SQL> select owner, view_name from system.t
  2  minus
  3  select owner, view_name from dba_views;

no rows selected

And there we have it – our own custom version of DBA_VIEWS where the TEXT is now exposed as a CLOB column. So now, mining that column for information is as easy as a simple predicate:


SQL> select owner, view_name
  2  from   system.t
  3  where  text like 'selec%ALL_objects';

OWNER                          VIEW_NAME
------------------------------ -------------------
MCDONAC                        VIEW2A

Identity columns in 12c … just a sequence?

This question came to me over Twitter – when you use the new IDENTITY clause in 12c, is it really just a sequence under the covers? – so I thought I’d whip out a quick post on it.

Yes, we do implement the IDENTITY column via a sequence.  (Digression – I think this is a smart thing to do.  After all, we’ve had sequences for 20 years, so we know how they work, they are mature, tested, and rock solid, so why invent something new?)

So Peter’s question was – why not just do what we’ve always done and create a sequence, then assign it as the column default (something 12c now allows directly)? And yes, there is nothing wrong with doing that – it will work just fine. A minimal sketch of that approach:
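
create sequence t_seq;

create table t (
  x int default t_seq.nextval,
  y varchar2(10)
);

But there are a couple of subtle differences between that and using the IDENTITY syntax.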

1) The sequence is bound to the table, so when I drop the table, the sequence disappears as well


SQL> create table t ( x int generated as identity);

Table created.

SQL> select object_name from user_objects where created > sysdate - 60/86400;

OBJECT_NAME
----------------------------------------
T
ISEQ$$_195276

SQL> drop table t purge;

Table dropped.

SQL> select object_name from user_objects where created > sysdate - 60/86400;

no rows selected

2) You can lock down the column


SQL> create table t ( x int generated always as identity);

Table created.

SQL> insert into t values ( 12 ) ;
insert into t values ( 12 )
*
ERROR at line 1:
ORA-32795: cannot insert into a generated always identity column
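
For completeness, GENERATED BY DEFAULT sits between the two – the identity sequence supplies a value unless you provide one explicitly. A sketch (no output shown):

create table t ( x int generated by default as identity );

insert into t values (12);   -- an explicit value is now permitted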

Personally, I don’t have a strong preference. The most important thing to me is that sequences can now be used as a default value, which avoids the need for a trigger on every single table.

DDL triggers – interesting results

This question came in on AskTom, yielding a very interesting result when it comes to DDL triggers. To set the scene, I’ll first create a table called T which is just a copy of SCOTT.EMP.


SQL> create table scott.t as select * from scott.emp;

Table created.

SQL> desc scott.t
 Name                                                              Null?    Type
 ----------------------------------------------------------------- -------- -----------------
 EMPNO                                                             NOT NULL NUMBER(4)
 ENAME                                                                      VARCHAR2(10)
 JOB                                                                        VARCHAR2(9)
 MGR                                                                        NUMBER(4)
 HIREDATE                                                                   DATE
 SAL                                                                        NUMBER(7,2)
 COMM                                                                       NUMBER(7,2)
 DEPTNO                                                                     NUMBER(2)

Now let’s say our requirement is to capture anything that relates to columns for tables in the SCOTT schema. So if we create a table, we want to see all of its columns, and likewise, if we alter an existing table, we want to see what the table looks like after the columns have been added or dropped. Let’s build a simple DDL trigger to do that. First, I’ll create a procedure which lists all of the columns for any table name we pass it. We’ll work exclusively in SCOTT for this example.


SQL> create or replace
  2  procedure create_audit_trigger(p_tab varchar2) is
  3  begin
  4    for i in ( select column_name from dba_tab_columns
  5               where table_name = p_tab
  6               and owner = 'SCOTT'
  7             )
  8    loop
  9      dbms_output.put_line(i.column_name);
 10    end loop;
 11  end;
 12  /

Procedure created.

And we’ll give it a quick test to make sure it works:


SQL> set serverout on
SQL> exec create_audit_trigger('EMP')
EMPNO
ENAME
JOB
MGR
HIREDATE
SAL
COMM
DEPTNO

PL/SQL procedure successfully completed.

We’ll now call that procedure from a DDL trigger, so that it runs any time we perform an ALTER or CREATE operation on a table.


SQL> create or replace
  2  TRIGGER audit_trigger_update
  3  AFTER ALTER OR CREATE ON scott.SCHEMA
  4  WHEN (ORA_DICT_OBJ_TYPE = 'TABLE')
  5  BEGIN
  6    create_audit_trigger(ORA_DICT_OBJ_NAME);
  7  END;
  8  /

Trigger created.

SQL> conn scott/tiger
Connected.

Now we’ll create a table and see if it works.


SQL> set serverout on
SQL> create table t1 ( x int, y int );
X
Y

Table created.

And we’ll drop a column to test that


SQL> alter table t1 drop column y;
X

So far so good. Now let us add a column to the table.


SQL> alter table t add x int;
EMPNO
ENAME
JOB
MGR
HIREDATE
SAL
COMM
DEPTNO  -- ???? no X

Table altered.

This is an unexpected result. At the time the trigger queries the data dictionary to obtain the list of columns, the new column does not yet appear in the definition. Yet once the statement has completed, a describe of the table shows that the new column X is indeed there.


SQL> desc t
 Name                                                                    Null?    Type
 ----------------------------------------------------------------------- -------- --------------
 EMPNO                                                                   NOT NULL NUMBER(4)
 ENAME                                                                            VARCHAR2(10)
 JOB                                                                              VARCHAR2(9)
 MGR                                                                              NUMBER(4)
 HIREDATE                                                                         DATE
 SAL                                                                              NUMBER(7,2)
 COMM                                                                             NUMBER(7,2)
 DEPTNO                                                                           NUMBER(2)
 X                                                                                NUMBER(38)

The only workaround I can currently think of is to do the work asynchronously, via DBMS_JOB. Rather than DBMS_OUTPUT, I’ll log the column names in a table so we can view them after the fact:


SQL> create table col_debug ( ts timestamp, col_name varchar2(50));

Table created.

SQL> create or replace
  2  procedure create_audit_trigger(p_tab varchar2) is
  3    l_now timestamp := systimestamp;
  4  begin
  5    for i in ( select column_name from dba_tab_columns
  6               where table_name = p_tab
  7               and owner = 'SCOTT'
  8             )
  9    loop
 10      insert into col_debug values (l_now,i.column_name);
 11    end loop;
 12    commit;
 13  end;
 14  /

Procedure created.

SQL> create or replace
  2  TRIGGER audit_trigger_update
  3  AFTER ALTER OR CREATE ON scott.SCHEMA
  4  WHEN (ORA_DICT_OBJ_TYPE = 'TABLE')
  5  declare
  6    j int;
  7  BEGIN
  8    dbms_job.submit(j,'create_audit_trigger('''||ORA_DICT_OBJ_NAME||''');');
  9  END;
 10  /

Trigger created.

Now I repeat the addition of a column, and then check out the debugging table.


SQL> alter table t add x1 int;

Table altered.

SQL> select * from col_debug;

TS                                                                          COL_NAME
--------------------------------------------------------------------------- ---------------
24-FEB-17 03.02.36.553000 PM                                                EMPNO
24-FEB-17 03.02.36.553000 PM                                                ENAME
24-FEB-17 03.02.36.553000 PM                                                JOB
24-FEB-17 03.02.36.553000 PM                                                MGR
24-FEB-17 03.02.36.553000 PM                                                HIREDATE
24-FEB-17 03.02.36.553000 PM                                                SAL
24-FEB-17 03.02.36.553000 PM                                                COMM
24-FEB-17 03.02.36.553000 PM                                                DEPTNO
24-FEB-17 03.02.36.553000 PM                                                X
24-FEB-17 03.02.36.553000 PM                                                X1

10 rows selected.

So the trigger submits a job rather than doing the work itself. Because DBMS_JOB.SUBMIT is transactional, the job only becomes eligible to run once the DDL statement has completed and committed – by which time the data dictionary reflects the new column.

Why does an ADD column operation behave differently to DROP and CREATE? That remains a mystery.

FOLLOWS clause

In early Oracle releases, you could have only one trigger per triggering event per table (e.g. if you had a before-row-insert trigger, you could have only one). Nowadays there is no limit. But that raises the question – if I have “n” BEFORE ROW triggers on a table, in which order do they fire? Let’s look at the following example:


SQL> drop table t1 purge;

Table dropped.

SQL> create table T1 ( x int, y int, z int );

Table created.

SQL> create or replace
  2  trigger trg1
  3  before insert on t1
  4  for each row
  5  begin
  6    :new.y := :new.x;
  7  end;
  8  /

Trigger created.

SQL> create or replace
  2  trigger trg2
  3  before insert on t1
  4  for each row
  5  begin
  6    :new.z := :new.y;
  7  end;
  8  /

Trigger created.

Seems simple enough… copy ‘x’ into ‘y’, and then copy ‘y’ into ‘z’. So let’s see what happens:



SQL> insert into t1 (x) values (1);

1 row created.

SQL> select * from t1;

         X          Y          Z
---------- ---------- ----------
         1          1


Where did “Z” go? What happened was that TRG2 fired first, and then TRG1 fired. Without further instruction, the firing order is indeterminate.

To solve this, we can use the FOLLOWS clause to dictate the order in which triggers must fire.


SQL> create or replace
  2  trigger trg2
  3  before insert on t1
  4  for each row
  5  follows trg1
  6  begin
  7    :new.z := :new.y;
  8  end;
  9  /

Trigger created.

SQL> insert into t1 (x) values (1);

1 row created.

SQL> select * from t1;

         X          Y          Z
---------- ---------- ----------
         1          1
         1          1          1

SQL>
SQL>

So now you can see (if you have a lot of triggers) where FOLLOWS might come in useful.
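
To the best of my knowledge, these ordering relationships are also recorded in the dictionary, so if you inherit a schema full of triggers you can check them via the *_TRIGGER_ORDERING views (treat the exact column names here as an assumption on my part):

select trigger_name, referenced_trigger_name, ordering_type
from   user_trigger_ordering;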

Where did my triggers go?

You need to be careful when coding and using triggers when it comes to Datapump (or anything that transposes triggers between schemas). A lot of people make assumptions about what will happen with their triggers, and often get a nasty shock when they see the reality. Let’s explore with an example.

I’m going to add three triggers to the standard EMP table in the SCOTT schema – note the varying use of schema qualifiers in each one, both for the triggering table and for the DEPT table referenced in the body.


SQL> connect scott/tiger
Connected.
SQL> create or replace
  2  trigger TRG1 before insert on scott.EMP
  3  for each row
  4  declare
  5    x int;
  6  begin
  7    select count(*) into x from scott.dept;
  8  end;
  9  /

Trigger created.

SQL>
SQL>
SQL> create or replace
  2  trigger TRG2 before insert on EMP
  3  for each row
  4  declare
  5    x int;
  6  begin
  7    select count(*) into x from scott.dept;
  8  end;
  9  /

Trigger created.

SQL>
SQL>
SQL> create or replace
  2  trigger TRG3 before insert on EMP
  3  for each row
  4  declare
  5    x int;
  6  begin
  7    select count(*) into x from dept;
  8  end;
  9  /

Trigger created.

Now we’ll unload the entire schema and copy it to a new schema called SCOTT2


expdp directory=TEMP dumpfile=schema.dmp logfile=schema.log schemas=scott

Export: Release 12.1.0.2.0 - Production on Fri Nov 27 10:31:29 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Username: system/********

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  system/******** directory=TEMP dumpfile=schema.dmp logfile=schema.log schemas=scott
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 256 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
. . exported "SCOTT"."ABC"                               5.898 KB       1 rows
. . exported "SCOTT"."DEPT"                              6.031 KB       4 rows
. . exported "SCOTT"."EMP"                               8.781 KB      14 rows
. . exported "SCOTT"."SALGRADE"                          5.960 KB       5 rows
. . exported "SCOTT"."BONUS"                                 0 KB       0 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
  C:\TEMP\SCHEMA.DMP
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Fri Nov 27 10:31:56 2015 elapsed 0 00:00:24


impdp directory=TEMP dumpfile=schema.dmp logfile=schemaimp.log schemas=scott remap_schema=scott:scott2

Import: Release 12.1.0.2.0 - Production on Fri Nov 27 10:31:57 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Username: system/********

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_SCHEMA_01":  system/******** directory=TEMP dumpfile=schema.dmp logfile=schemaimp.log schemas=scott remap_schema=scott:scott2 Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "SCOTT2"."ABC"                              5.898 KB       1 rows
. . imported "SCOTT2"."DEPT"                             6.031 KB       4 rows
. . imported "SCOTT2"."EMP"                              8.781 KB      14 rows
. . imported "SCOTT2"."SALGRADE"                         5.960 KB       5 rows
. . imported "SCOTT2"."BONUS"                                0 KB       0 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
ORA-39082: Object type TRIGGER:"SCOTT2"."TRG1" created with compilation warnings
ORA-39082: Object type TRIGGER:"SCOTT2"."TRG2" created with compilation warnings
Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" completed with 2 error(s) at Fri Nov 27 10:32:16 2015 elapsed 0 00:00:14

SQL> select trigger_name, status
  2  from dba_triggers
  3  where owner = 'SCOTT2';

TRIGGER_NAME                   STATUS
------------------------------ --------
TRG1                           ENABLED
TRG2                           ENABLED
TRG3                           ENABLED

So at this point, everything seems to be ok. But we should look more closely…


SQL> select object_name, status
  2  from dba_objects
  3  where owner = 'SCOTT2'
  4  and object_name like 'TRG%';

OBJECT_NAME                              STATUS
---------------------------------------- -------
TRG3                                     VALID
TRG2                                     INVALID
TRG1                                     INVALID

Two of the triggers are invalid. And here’s where things really catch people out …


SQL> select trigger_name, table_owner, table_name
  2  from   dba_triggers
  3  where  owner = 'SCOTT2';

TRIGGER_NAME                   TABLE_OWNER          TABLE_NAME
------------------------------ -------------------- -----------
TRG1                           SCOTT                EMP
TRG2                           SCOTT2               EMP
TRG3                           SCOTT2               EMP

Ouch… we have a trigger owned by SCOTT2, but the triggering table is SCOTT.EMP! Probably not what we intended.

That is deliberate behaviour, because people forget that triggers do not have to be in the same schema as the object they are triggering on. For example, I might have a schema called “SCOTT_TRIGGERS” where I define all of my triggers for tables owned by SCOTT. So even though an import might ask to remap a schema, we can’t really assume that every reference to “SCOTT” in either the trigger definition or the trigger body should be changed. We can only remap the schema (hence the name of the parameter). In the case above, TRG2 is invalid because the trigger body is trying to reference SCOTT.DEPT and does not have any permissions to do so.

So be careful when transposing triggers between schemas, whether by Datapump or any other means.
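
A quick sanity check after any such exercise is to look for triggers that have ended up in a different schema from their base table:

select owner, trigger_name, table_owner, table_name
from   dba_triggers
where  owner != table_owner;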