DML error logging redux

I posted about a nice little feature of DML error logging recently, and a comment came in mentioning that caution is needed with DML error logging because the errors are logged persistently. If you run a load multiple times, or multiple sessions are using the facility, the error logging table can quickly become a soup of data that cannot be traced back to the code your own session was running.

By default this is true, which we can see with a trivial demo. I'll do the same load of bad rows twice and "forget" to clear out my error table. Here's my setup: a target table called TGT with a NOT NULL constraint on X and a check constraint requiring Z to be positive, and a source table SRC containing some rows that will violate those rules when we come to load the data.



SQL> create table tgt ( x int not null, y int, z int check ( z > 0 ));

Table created.

SQL> create table src ( sx int, sy int, sz int );

Table created.

SQL>
SQL> insert into src values (1,10,1);

1 row created.

SQL> insert into src values (2,11,1);

1 row created.

SQL> insert into src values (3,12,-2);

1 row created.

SQL> insert into src values (4,13,1);

1 row created.

SQL> insert into src values (5,14,1);

1 row created.

SQL> insert into src values (null,15,10);

1 row created.

SQL> exec dbms_errlog.create_error_log(dml_table_name=>'tgt')

PL/SQL procedure successfully completed.

SQL> insert into tgt
  2  select * from src
  3  log errors reject limit unlimited;

4 rows created.

SQL>
SQL> select * from err$_tgt
  2  @pr
==============================
ORA_ERR_NUMBER$               : 2290
ORA_ERR_MESG$                 : ORA-02290: check constraint (MCDONAC.SYS_C0086635) violated

ORA_ERR_ROWID$                :
ORA_ERR_OPTYP$                : I
ORA_ERR_TAG$                  :
X                             : 3
Y                             : 12
Z                             : -2
==============================
ORA_ERR_NUMBER$               : 1400
ORA_ERR_MESG$                 : ORA-01400: cannot insert NULL into ("MCDONAC"."TGT"."X")

ORA_ERR_ROWID$                :
ORA_ERR_OPTYP$                : I
ORA_ERR_TAG$                  :
X                             :
Y                             : 15
Z                             : 10

PL/SQL procedure successfully completed.

That is fine so far, but if I run the load from multiple sessions, it is no longer apparent which session did what. When session 2 runs the same code, there are now four errors in the table, but nothing to tie each error row back to the session that produced it.



SQL> insert into tgt
  2  select * from src
  3  log errors reject limit unlimited;

4 rows created.

SQL>
SQL> select * from err$_tgt
  2  @pr
==============================
ORA_ERR_NUMBER$               : 2290
ORA_ERR_MESG$                 : ORA-02290: check constraint (MCDONAC.SYS_C0086635) violated

ORA_ERR_ROWID$                :
ORA_ERR_OPTYP$                : I
ORA_ERR_TAG$                  :
X                             : 3
Y                             : 12
Z                             : -2
==============================
ORA_ERR_NUMBER$               : 1400
ORA_ERR_MESG$                 : ORA-01400: cannot insert NULL into ("MCDONAC"."TGT"."X")

ORA_ERR_ROWID$                :
ORA_ERR_OPTYP$                : I
ORA_ERR_TAG$                  :
X                             :
Y                             : 15
Z                             : 10
==============================
ORA_ERR_NUMBER$               : 2290
ORA_ERR_MESG$                 : ORA-02290: check constraint (MCDONAC.SYS_C0086635) violated

ORA_ERR_ROWID$                :
ORA_ERR_OPTYP$                : I
ORA_ERR_TAG$                  :
X                             : 3
Y                             : 12
Z                             : -2
==============================
ORA_ERR_NUMBER$               : 1400
ORA_ERR_MESG$                 : ORA-01400: cannot insert NULL into ("MCDONAC"."TGT"."X")

ORA_ERR_ROWID$                :
ORA_ERR_OPTYP$                : I
ORA_ERR_TAG$                  :
X                             :
Y                             : 15
Z                             : 10

PL/SQL procedure successfully completed.

However, the error logging table is just a standard database table, so as long as you do not interfere with the columns required to support error logging, you can augment it to capture readily available metadata.



SQL> delete err$_tgt;

4 rows deleted.

SQL> delete tgt;

8 rows deleted.

SQL> commit;

Commit complete.

SQL>
SQL> alter table err$_tgt add session_id  int
  2    default sys_context('USERENV','SID');

Table altered.

SQL> alter table err$_tgt add client_host varchar2(100)
  2    default sys_context('USERENV','HOST');

Table altered.

SQL> alter table err$_tgt add osuser      varchar2(100)
  2    default sys_context('USERENV','OS_USER');

Table altered.

SQL>
SQL> delete tgt;

0 rows deleted.

SQL> insert into tgt
  2  select * from src
  3  log errors reject limit unlimited;

4 rows created.

SQL>
SQL> select * from err$_tgt
  2  @pr
==============================
ORA_ERR_NUMBER$               : 2290
ORA_ERR_MESG$                 : ORA-02290: check constraint (MCDONAC.SYS_C0086635) violated

ORA_ERR_ROWID$                :
ORA_ERR_OPTYP$                : I
ORA_ERR_TAG$                  :
X                             : 3
Y                             : 12
Z                             : -2
SESSION_ID                    : 369
CLIENT_HOST                   : WORKGROUP\GTX
OSUSER                        : GTX\hamcdc
==============================
ORA_ERR_NUMBER$               : 1400
ORA_ERR_MESG$                 : ORA-01400: cannot insert NULL into ("MCDONAC"."TGT"."X")

ORA_ERR_ROWID$                :
ORA_ERR_OPTYP$                : I
ORA_ERR_TAG$                  :
X                             :
Y                             : 15
Z                             : 10
SESSION_ID                    : 369
CLIENT_HOST                   : WORKGROUP\GTX
OSUSER                        : GTX\hamcdc

PL/SQL procedure successfully completed.

Of course, the big question here is: is this approach supported? After all, we typically do not endorse people tinkering with internally/automatically created tables. As far as I can tell, you are "almost" covered here 🙂. The Database Administrator's Guide states:

If the error logging table contains a column with a name that does not match a DML table column, the column is ignored.

So we are allowed to add columns without violating that rule. However, whilst experimental observation suggests that columns with a DEFAULT clause do indeed preserve their default values, this is not guaranteed by the documentation. For example, it would be possible one day for the internal operation that populates the error logging table to force nulls into those columns. I'm not saying that will happen, just that Oracle would be within its rights to do so. If you're OK with sailing close to the wind on the definition of "supported", you can use the DEFAULT clause above, or Steven Feuerstein has put together some nice helper utilities for DML error logging. You can find them on LiveSQL here:

https://livesql.oracle.com/apex/livesql/file/content_CTMX9U1B173HA6PZZN9B6JK3T.html

https://livesql.oracle.com/apex/livesql/file/content_CTS18A4BCWHLNR9X1OC3DR6CF.html

If that is a concern, then an alternative that is covered in the docs is to use a nominated tag when logging the error, i.e.:

LOG ERRORS INTO my_table ('some tag')

and hence you could use a concatenation such as:


sys_context('USERENV','SID')    ||';'||
sys_context('USERENV','HOST')   ||';'||
sys_context('USERENV','OS_USER')

for your tag to capture the metadata you desire. Either way, you can track additional metadata to help with debugging when using DML error logging.
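To make the tag route concrete, here is a sketch of the earlier load using that concatenation as the tag (the tag lands in the ORA_ERR_TAG$ column of the generated error table, so you can filter on it afterwards):

```sql
insert into tgt
select * from src
log errors into err$_tgt (
  sys_context('USERENV','SID')    ||';'||
  sys_context('USERENV','HOST')   ||';'||
  sys_context('USERENV','OS_USER')
) reject limit unlimited;

-- locate only the errors logged by your own session
select ora_err_number$, ora_err_mesg$, ora_err_tag$, x, y, z
from   err$_tgt
where  ora_err_tag$ like sys_context('USERENV','SID') || ';%';
```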

(Shout out to Sergiusz Wolicki for providing extra information for this post)

One comment

  1. Just a comment on this.
    There is an issue with this approach if you have multiple transactions in your code.
    For example, if inside a loop you do a bulk collect with a LIMIT clause and then bulk-DML the results with intermediate commits.
    Or if your code goes through multiple phases, where each phase is committed and DML is performed on the same table in more than one phase.
    If it's OK to postpone the error handling until the very end of the code, your approach is perfect.
    But sometimes that is not what you want/need.
    And then you won't know whether an error occurred in this transaction or in a previous one, because they were in the same session.

    So, what I usually do in these cases is generate a GUID as soon as I start a new transaction (i.e. at the start of the code and after each commit or rollback).
    Then I put that GUID in the tag so I can check any errors that occurred during that transaction.
    Another approach would obviously be to delete all errors from your session at the end of each transaction (or create the table as an ON COMMIT DELETE ROWS global temporary table), but that means you lose the ability to look back at what happened during this run if, in retrospect, your own handling of the errors wasn't sufficient to see why rows weren't processed correctly.
