Oracle Code … Not for database people?

Jump over to the Oracle Code home page and you will see the “mission statement” of the Oracle Code conference series:

“Learn from technical experts in sessions for developing software in Java, Node.js, and other languages and frameworks.”

You might hence be thinking that “old school” stuff like (relational) database technology has no place at such a conference, and certainly the agenda looks slanted away from database technologies.  But I think you’re wrong, and here is why.  I did a talk at the Oracle Code event in Bangalore last week (which by the way was a wonderful event, so thanks to all who came along) on some SQL language techniques.  After the talk, one of the attendees came up to me, thanked me for the talk and said this (I’m paraphrasing):

“It was really interesting to see all the stuff that you could do in SQL.  I’m a Java person, and whenever I have complicated data requirements, I have always simply retrieved the data from the database and then done the complex part of the operations in Java.  But your session has convinced me to explore doing some of that in SQL”

It is so easy to have a bias for the technology (or technologies) that you are most capable with.  I am just as guilty of that as anyone – I’ll generally look for a SQL or PL/SQL means to solve a business problem before considering other options that may actually be more appropriate.  But here we had an attendee who was happy to look outside his sphere of expertise to focus on optimal solutions to problems, rather than just solutions that sat inside his “comfort zone”.  That really struck a chord with me, and made me feel like the entire trip was worthwhile.  Because when we have a bias toward a particular technology, it is easy to be lulled into an argument that other technologies are inappropriate for any usage.  And then suddenly we’re into a shouting match about why technology “X” is the best and anything that is not technology “X” is junk.  We all lose when that’s the case.

So there’s an argument to be made that Oracle Code is indeed not for database developers, but in the same way, it is not for middle tier developers, and not for front end developers.  Oracle Code is about creating the balanced developer – a developer that has expertise in one (or more) areas but, more importantly, can understand the whole stack and have an impartial, unclouded (no pun intended) view of the benefits of all layers in the application stack.  Because that balance ultimately leads to a better development community, and better opportunities to maximize the benefits of each of the components in the array of technologies that now permeate our development careers.

So whatever your area of expertise, Oracle Code has something for you, and perhaps the best thing you can do at an Oracle Code event is to attend something outside your current area of expertise.

SUM is better than DISTINCT

There is a good chance (based on this blog post’s title) that you’re expecting a post on SQL, and that’s understandable. But I’ll come clean nice and early – the title was just to lure you in!

The post is about SUM and DISTINCT, but not in the technical sense.

A few days ago, fellow OakTable member Jonathan Lewis put a post on his blog: https://jonathanlewis.wordpress.com/2017/04/10/ask-jonathan/ where he is launching a mechanism by which you can pose questions to him, and he will select topics of interest and write about them in the UKOUG Oracle Scene magazine.  What I found hilarious was that people started emailing me saying “Are you concerned about AskTOM?”.  Now, last time I checked, we don’t have Google Ads or Facebook Ads on AskTOM, so it is not as if we have some fiscal need to keep the website hits high, increase our “social engagement”, or whatever the latest buzzword is for that sort of thing.  If we did, you’d see lots of cat videos and references to Justin Bieber in our answers!

AskTOM does one thing…and one thing only – we try to help out the Oracle community, to make them more successful.  It’s what Tom did in the past, and it’s what the team do now.  That’s the same reason why our answers will often refer people to links / blog posts / tutorials outside of the oracle.com domain, for example, to the excellent work of some of the Oracle Aces and Ace Directors.  It’s about getting good information and getting the job done.  Similarly, when I give talks on AskTom to user groups, one of the things I encourage people to do is form their own “AskMe” concept within their own IT shops – get people asking questions, and discussing solutions at the department level, the organizational level, and ultimately the global community level.  Then we all benefit – we all win.

So rather than being “concerned”, I’m thrilled by anyone that wants to put their hand up and say “Hey, I’m here to help out the community”.  I encourage any of us who are experienced practitioners in the Oracle community to do the same.

The SUM of contributors to the Oracle community will be much better for that community than just a few DISTINCT people! … and hence the title of the post.

Pi Day, March 14

Geeks around the world will be celebrating Pi Day – a shameless excuse to put the month before the day, like the Americans do, so we can talk mathematics on “3.14” day.

So what better way to show how cool SQL can be than with some expressions that approach Pi via convergent series?

All we really need to know by way of background is the CONNECT BY LEVEL trick to generate arbitrary sequences of rows, e.g.


SQL> select level from dual connect by level <= 5;

     LEVEL
----------
         1
         2
         3
         4
         5

Now we can get some approximations to Pi using some of the standard SQL functions, SQRT and POWER
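Just for reference, here are the identities that (as I read them) the next four queries translate into SQL – all standard results, written out so you can map each query back to its series:

\[
\frac{\pi^{2}}{8} = \sum_{k=1}^{\infty}\frac{1}{(2k-1)^{2}}, \qquad
\frac{\pi^{2}}{6} = \sum_{k=1}^{\infty}\frac{1}{k^{2}}, \qquad
\frac{\pi}{8} = \sum_{k=1}^{\infty}\frac{1}{(4k-3)(4k-1)}
\]
\[
\frac{\pi}{4} = 12\arctan\frac{1}{18} + 8\arctan\frac{1}{57} - 5\arctan\frac{1}{239},
\qquad
\arctan x = \sum_{k=1}^{\infty}\frac{(-1)^{k-1}x^{2k-1}}{2k-1}
\]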



SQL> select sqrt(8*sum(1/power((level*2-1),2))) pi
  2  from dual
  3  connect by level <= 100000
  4  order by 1 desc;

        PI
----------
3.14158947

1 row selected.

SQL>
SQL>
SQL> select sqrt(6*sum(1/(level*level))) pi
  2  from dual
  3  connect by level <= 100000
  4  order by 1 desc;

        PI
----------
 3.1415831

1 row selected.

SQL>
SQL> select 8*sum(1/((4*(level-1)+1)*(4*(level-1)+3))) pi
  2  from dual
  3  connect by level <= 100000;

        PI
----------
3.14158765

1 row selected.

SQL>
SQL>
SQL> select
  2   4*sum(
  3     power(-1,level-1)/(level*2-1)*
  4        ( 12*power(1/18,level*2-1)+
  5           8*power(1/57,level*2-1)-
  6           5*power(1/239,level*2-1))) pi
  7  from dual
  8  connect by level <= 100;

        PI
----------
3.14159265

1 row selected.

SQL>

Some methods to calculate Pi need factorials, but there isn’t a native SQL function for that. But in 12c that’s no problem – we can define SQL functions on the fly, directly inside our SQL statement!


SQL>
SQL> with
  2    function factorial(n int) return int is
  3      f int := 1;
  4    begin
  5      for i in 1 .. n loop
  6        f := f * i;
  7      end loop;
  8      return f;
  9    end;
 10  select  2*sum(
 11    power(2,level-1)*power(factorial(level-1),2)/factorial(2*level-1)
 12    ) pi
 13  from dual
 14  connect by level <= 17;
 15  /

        PI
----------
 3.1415864

1 row selected.

But what if you’re not on 12c yet? Well, you should be! You can check out why it’s time to upgrade from Maria and me, chatting about it over coffee.

But whilst you’re on 11g, there are still plenty of cool SQL options to play with Pi. Here we can use recursive SQL to generate Pi, based on the formula:

 

2/π = (√2 / 2) · (√(2+√2) / 2) · (√(2+√(2+√2)) / 2) · …        (Viète’s formula)



SQL> with term(numerator,product,seq) as
  2  ( select sqrt(2) numerator, sqrt(2)/2 product , 1 seq from dual
  3    union all
  4    select sqrt(2+numerator), sqrt(2+numerator)*product/2 , seq+1
  5    from   term, dual
  6    where  term.seq <= 16
  7  )
  8  select 2/product pi
  9  from term
 10  where seq = 16;

        PI
----------
3.14159265

1 row selected.

 

SQL…still the most awesome language out there!

Partition count for interval partitioned tables

When dealing with a RANGE partitioned table, the defined partitions dictate all of the data that can be placed into the table. For example, if I have a SALES table as per below


SQL> create table SALES
  2    ( cal_year  date,
  3      txn_id    int,
         ...
         ...
 24    )
 25  partition by range ( cal_year )
 26  (
 27    partition p_low values less than ( date '2000-01-01' ),
 28    partition p2000 values less than ( date '2001-01-01' ),
         ...
         ...
 34    partition p2016 values less than ( date '2017-01-01' )
 35  );

Table created.

then the existing partitions define a natural upper bound on the value of CAL_YEAR that I can insert into the table. For example, if I attempt to add a row for the year 2018, I get the familiar ORA-14400 that has called out many a DBA at the stroke of midnight on New Year’s Eve.


SQL> insert into SALES
  2  values ( date '2018-01-01', .... );

insert into SALES
            *
ERROR at line 1:
ORA-14400: inserted partition key does not map to any partition

As many will know, the resolution to this is either a maintenance task to ensure that there are always sufficient partitions defined, or to use the INTERVAL partitioning method, which became available in 11g.
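The maintenance-task route is usually nothing more than a scheduled ADD PARTITION before each new period arrives – something along these lines for the RANGE-partitioned SALES table above (the partition name is purely illustrative):

alter table sales add partition p2017
  values less than ( date '2018-01-01' );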


SQL> create table SALES
  2    ( cal_year  date,
  3      txn_id    int,
         ...
         ...
 23    )
 24  partition by range ( cal_year )
 25  INTERVAL( NUMTOYMINTERVAL(1,'YEAR'))
 26  (
  27    partition p_low values less than ( date '2000-01-01' )
 28  );

Table created.

And I can observe partitions being created as required as data is added to the table


SQL> select PARTITION_NAME, HIGH_VALUE
  2  from   user_tab_partitions
  3  where  table_name = 'SALES';

PARTITION_NAME            HIGH_VALUE
------------------------- --------------------------------
P00                       TIMESTAMP' 2000-01-01 00:00:00'

SQL> insert into SALES
  2  values ( to_date('12-DEC-2011'),....);

SQL> select PARTITION_NAME, HIGH_VALUE
  2  from   user_tab_partitions
  3  where  table_name = 'SALES';

PARTITION_NAME            HIGH_VALUE
------------------------- --------------------------------
P00                       TIMESTAMP' 2000-01-01 00:00:00'
SYS_P362                  TIMESTAMP' 2012-01-01 00:00:00'

But this isn’t a post about how interval partitioning is defined, because it’s a topic that is now well understood and well detailed in the documentation and on many blogs.

I wanted to touch on something more subtle that you might encounter when using interval partitioned tables. Let me do a query on the SALES table, which has been recreated (as INTERVAL partitioned) but is empty. Here is the execution plan when I query the table.


SQL> select * from SALES; --empty


-------------------------------------------------------------
| Id  | Operation           | Name  | Rows  | Pstart| Pstop |
-------------------------------------------------------------
|   0 | SELECT STATEMENT    |       |     1 |       |       |
|   1 |  PARTITION RANGE ALL|       |     1 |     1 |1048575|
|   2 |   TABLE ACCESS FULL | SALES |     1 |     1 |1048575|
-------------------------------------------------------------

Wow! One million partitions! That might seem odd, because we know that our table has been defined with only a single partition, and even that might not be instantiated yet, depending on the setting of the “deferred_segment_creation” parameter on the database. But the explanation is relatively simple. The moment we define a table as interval partitioned, we in effect know “in advance” the definition of every single interval that will ever follow. The starting point for the intervals is known from the initial partition definition in the DDL, and the size/length of the interval maps out every possible future partition.


The maximum number of partitions is 1048575, which is then reflected in the execution plan.
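The data dictionary says the same thing about the table itself. A quick way to see it (assuming the SALES table above) is USER_PART_TABLES, where – at least on the versions I have looked at – PARTITION_COUNT also reports the 1048575 ceiling rather than the number of partitions actually created:

select table_name, partitioning_type, interval, partition_count
from   user_part_tables
where  table_name = 'SALES';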

You’ll see similar information when you create an index on such a table. If the index is local, and hence follows the same partitioning scheme as the underlying table, then it too has potentially 1048575 partitions all not yet in use, but known in advance. So if you look at the PARTITION_COUNT column for such an index, you’ll also see that the database will state that it has a (very) high partition count


SQL> create index sales_ix on sales ( some_col ) local;

Index created.

SQL> select TABLE_NAME,INDEX_NAME,PARTITION_COUNT from user_part_indexes;

TABLE_NAME                     INDEX_NAME                     PARTITION_COUNT
------------------------------ ------------------------------ ---------------
SALES                          SALES_IX                               1048575

1 row selected.

So if you see anything suggesting one million partitions, double check to see if you really have that many.
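A simple sanity check is to count the partitions that actually exist, rather than relying on the PARTITION_COUNT column – for example:

select count(*)
from   user_tab_partitions
where  table_name = 'SALES';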


Multi-table insert

An interesting question came through on AskTom recently.  The requirement was to perform a single pass through a source table, and load the data into three target tables.

Now that’s trivially achieved with a multi-table insert, but there was a subtle “twist” on this requirement.  Each of the three target tables may already contain some, none or all of the rows from the source table.  Hence the requirement was to “fill in the blanks”.

So here’s a little demo of one way we could achieve this.

First, here is our source table with 10 rows (1 through 10)


SQL> create table t_source as select rownum s from dual connect by level <= 10;

Table created.

SQL>
SQL> select * from t_source;

         S
----------
         1
         2
         3
         4
         5
         6
         7
         8
         9
        10

10 rows selected.

And here are our three target tables, T1, T2 and T3, each with a subset of the rows already


SQL> create table t1 as select rownum x from dual connect by level <= 5;

Table created.

SQL> create table t2 as select rownum y from dual connect by level <= 3;

Table created.

SQL> create table t3 as select rownum z from dual connect by level <= 6;

Table created.

SQL> select * from t1 order by 1;

         X
----------
         1
         2
         3
         4
         5

5 rows selected.

SQL> select * from t2 order by 1;

         Y
----------
         1
         2
         3

3 rows selected.

SQL> select * from t3 order by 1;

         Z
----------
         1
         2
         3
         4
         5
         6

6 rows selected.

Now obviously we could perform a simple insert-select-where-not-exists style operation for each table, but we need to meet our poster’s requirement of a single pass through the source table. So we will take advantage of an outer join to pick up just those rows that do not already match.
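For contrast, here is a sketch of the per-table approach we are trying to avoid – three separate statements, and hence three scans of T_SOURCE:

insert into t1 ( x )
select s from t_source
where not exists ( select null from t1 where t1.x = t_source.s );

-- ...and the same again for T2 and T3, giving three passes over T_SOURCE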



SQL> insert all
  2    when in_tab1 is null then
  3      into t1 (x ) values (s )
  4    when in_tab2 is null then
  5      into t2 (y ) values (s )
  6    when in_tab3 is null then
  7      into t3 (z ) values (s )
  8  select
  9    t_source.s,
 10    t1.x in_tab1,
 11    t2.y in_tab2,
 12    t3.z in_tab3
 13  from t_source, t1, t2, t3
 14  where t_source.s = t1.x(+)
 15  and t_source.s = t2.y(+)
 16  and t_source.s = t3.z(+)
 17  /

16 rows created.

SQL>
SQL> select * from t1 order by 1;

         X
----------
         1
         2
         3
         4
         5
         6
         7
         8
         9
        10

10 rows selected.

SQL> select * from t2 order by 1;

         Y
----------
         1
         2
         3
         4
         5
         6
         7
         8
         9
        10

10 rows selected.

SQL> select * from t3 order by 1;

         Z
----------
         1
         2
         3
         4
         5
         6
         7
         8
         9
        10

10 rows selected.

SQL>

And the job is done. Our poster never really elaborated on why a single pass was necessary – but let’s assume it was due to the source table being large. If we look at the execution plan, we see a swag of cascading hash joins, so whilst a single pass of the source table has been achieved, there is no guarantee that we’re not going to end up with other issues in processing all of those “concurrent” joins.



---------------------------------------------------------------------------------
| Id  | Operation            | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |          |    10 |   120 |     8   (0)| 00:00:01 |
|*  1 |  HASH JOIN OUTER     |          |    10 |   120 |     8   (0)| 00:00:01 |
|*  2 |   HASH JOIN OUTER    |          |    10 |    90 |     6   (0)| 00:00:01 |
|*  3 |    HASH JOIN OUTER   |          |    10 |    60 |     4   (0)| 00:00:01 |
|   4 |     TABLE ACCESS FULL| T_SOURCE |    10 |    30 |     2   (0)| 00:00:01 |
|   5 |     TABLE ACCESS FULL| T2       |     3 |     9 |     2   (0)| 00:00:01 |
|   6 |    TABLE ACCESS FULL | T1       |     5 |    15 |     2   (0)| 00:00:01 |
|   7 |   TABLE ACCESS FULL  | T3       |     6 |    18 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T_SOURCE"."S"="T3"."Z"(+))
   2 - access("T_SOURCE"."S"="T1"."X"(+))
   3 - access("T_SOURCE"."S"="T2"."Y"(+))

But that’s often life on AskTom. We only get half the story!

Performing a large correlated update

We had a question on AskTom recently asking for the best way to populate a new column on a very large table.  Four in-place update options are at our disposal:

  • Serial update SQL
  • Parallel DML update SQL
  • DBMS_PARALLEL_EXECUTE
  • PL/SQL batching (as long as sensible restart-after-error logic is readily available)

all of which will achieve the required outcome, but all of them come with a significant redo and undo cost.  Also, depending on the size of the new column, such an update might create chaos with row migration, because (whilst unlikely) there is the potential for every row to grow beyond the block space available to hold it in situ.
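For reference, the “simple” in-place version that each of those options would ultimately be running is just a correlated update – a sketch using the T1/T2 tables that appear later in this post:

update t1
set    t1.full_name = ( select t2.fname
                        from   t2
                        where  t2.object_id = t1.object_id );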

So a common approach to tackling a large “update” is to recast the problem as creating a new version (copy) of the table.  This can be done with a CREATE TABLE AS SELECT (CTAS), sketched after the list below.  Using this method we get the benefit of lower resource consumption, but two issues can get in the way of this approach:

  1. It requires significant downtime whilst the new table is populated
  2. It requires careful planning to ensure all of the dependent objects (indexes, constraints, etc etc) are correctly recreated on the new table
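Here is roughly what that CTAS route looks like for the T1/T2 example used later in this post – a sketch only, with the real pain being point 2 above:

create table t1_new as
select t1.*, t2.fname as full_name
from   t1, t2
where  t1.object_id = t2.object_id(+);

-- then drop/rename, and recreate indexes, constraints, grants, triggers, ...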

But perhaps there is an alternative – it might not be 100% as fast as a pure CTAS, but what if it solves both of the issues mentioned above?  That alternative is DBMS_REDEFINITION.

People often dismiss DBMS_REDEFINITION as being purely for changing the structure of a table without changing the underlying data (for example, partitioning a table).  But they fail to take notice of the column mapping functionality that is available – and as long as our column mapping expressions are deterministic, we can get more value out of the package, including a correlated update.

So here’s our example:

We will have a table called T1, which is a copy of ALL_OBJECTS and will be the table to which we want to add a new column




SQL>
SQL> create table t1  as
  2  select owner, object_name, subobject_name,
  3       object_id, data_object_id, object_type,
  4       created, last_ddl_time, timestamp, status,
  5       temporary, generated, secondary
  6    from all_objects
  7    where object_id is not null;

Table created.

SQL>
SQL> alter table t1
  2    add constraint t1_pk
  3    primary key(object_id);

Table altered.

T2 is the table from which we will want to source the new data to apply to T1. In this case, T2 has a primary key of OBJECT_ID which can be used to source a new data element called FNAME. In our example, this is just owner and object name concatenated.


SQL>
SQL> create table t2 as select object_id, owner||'.'||object_name fname
  2  from all_objects
  3    where object_id is not null;

Table created.

SQL>
SQL> alter table t2
  2    add constraint t2_pk
  3    primary key(object_id);

Table altered.

So here is the requirement. T1 gets a new column called FULL_NAME. It must be populated via the FNAME column on T2 via the OBJECT_ID lookup. We can see that T1 starts with FULL_NAME being null, and we can see a sample of the values that should come from T2


SQL> alter table t1 add full_name varchar2(200);

Table altered.

SQL>
SQL> select object_id, owner, full_name from t1
  2  where rownum < 10;

 OBJECT_ID OWNER                          FULL_NAME
---------- ------------------------------ --------------------------------------------------
        30 SYS
        47 SYS
        32 SYS
        49 SYS
        17 SYS
         2 SYS
        29 SYS
        45 SYS
        10 SYS

9 rows selected.

SQL>
SQL> select object_id, fname from t2
  2  where rownum < 10;

 OBJECT_ID FNAME
---------- ---------------------------
        30 SYS.I_COBJ#
        47 SYS.I_USER2
        32 SYS.CCOL$
        49 SYS.I_COL2
        17 SYS.FILE$
         2 SYS.C_OBJ#
        29 SYS.C_COBJ#
        45 SYS.I_TS1
        10 SYS.C_USER#

We can’t do joins or subqueries in DBMS_REDEFINITION but we can do deterministic expressions. In our case, we’ll use a deterministic PL/SQL function to mimic the join


SQL>
SQL> create or replace
  2  function get_full(p_id int )  return varchar2 deterministic is
  3    f varchar2(100);
  4  begin
  5    select  fname
  6    into f
  7    from t2
  8    where object_id = p_id;
  9
 10    return f;
 11  end;
 12  /

Function created.

As per normal, we create an interim table to hold the converted data as we redefine the table.


SQL>
SQL> create table t_interim (
  2   owner                         varchar2(128),
  3   object_name                   varchar2(128),
  4   subobject_name                varchar2(128),
  5   object_id                     number,
  6   data_object_id                number,
  7   object_type                   varchar2(23),
  8   created                       date,
  9   last_ddl_time                 date,
 10   timestamp                     varchar2(19),
 11   status                        varchar2(7),
 12   temporary                     varchar2(1),
 13   generated                     varchar2(1),
 14   secondary                     varchar2(1),
 15   full_name                     varchar2(200)
 16  );

Table created.

And now we are ready to use DBMS_REDEFINITION. The critical part here is the COL_MAPPING parameter. We are using the call to GET_FULL to populate the new column that will ultimately end up on T1.


SQL>
SQL>
SQL> declare
  2      l_colmap varchar(512);
  3    begin
  4      l_colmap :=
  5            'OWNER
  6            ,OBJECT_NAME
  7            ,SUBOBJECT_NAME
  8            ,OBJECT_ID
  9            ,DATA_OBJECT_ID
 10            ,OBJECT_TYPE
 11            ,CREATED
 12            ,LAST_DDL_TIME
 13            ,TIMESTAMP
 14            ,STATUS
 15            ,TEMPORARY
 16            ,GENERATED
 17            ,SECONDARY
 18            ,get_full(OBJECT_ID) FULL_NAME';
 19
 20      dbms_redefinition.start_redef_table
 21      (  uname           => user,
 22         orig_table      => 'T1',
 23         int_table       => 'T_INTERIM',
 24         col_mapping   => l_colmap );
 25   end;
 26   /

PL/SQL procedure successfully completed.

And here is the nice part. By using DBMS_REDEFINITION, we don’t need to worry about the dependencies – we can let COPY_TABLE_DEPENDENTS do the heavy lifting.


SQL>
SQL> variable nerrors number
SQL> begin
  2    dbms_redefinition.copy_table_dependents
  3      ( user, 'T1', 'T_INTERIM',
  4        copy_indexes => dbms_redefinition.cons_orig_params,
  5        num_errors => :nerrors );
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL>
SQL> print nerrors

   NERRORS
----------
         0

SQL> begin
  2    dbms_redefinition.finish_redef_table
  3      ( user, 'T1', 'T_INTERIM' );
  4  end;
  5  /

PL/SQL procedure successfully completed.

And we’re done! A correlated update using DBMS_REDEFINITION.


SQL>
SQL> select object_id, owner, full_name from t1
  2  where rownum < 10;

 OBJECT_ID OWNER                          FULL_NAME
---------- ------------------------------ --------------------------------------------------
        30 SYS                            SYS.I_COBJ#
        47 SYS                            SYS.I_USER2
        32 SYS                            SYS.CCOL$
        49 SYS                            SYS.I_COL2
        17 SYS                            SYS.FILE$
         2 SYS                            SYS.C_OBJ#
        29 SYS                            SYS.C_COBJ#
        45 SYS                            SYS.I_TS1
        10 SYS                            SYS.C_USER#

9 rows selected.

SQL>
SQL>

You might be thinking “But we’ll be slow because we’re calling that PL/SQL function millions of times”.  Yes, this will be slower than a pure CTAS operation, but by using DBMS_REDEFINITION the process is online, with nearly zero service disruption.  Hence the overall execution time is not nearly as critical as it would be for a CTAS, where applications are unavailable for the duration of the activity.

Code enhancements without changing code

An interesting suggestion came through on AskTom this week, which prompted the following exploration.

Let us assume you are populating a table with


INSERT INTO EMP SELECT * FROM SOURCE_EMPS

and it falls over with


ORA-12899: value too large for column

Capturing the data that caused that error is not trivial. Of course, we could edit the code to perform DML error logging, but that means changing the source code, which means change control, testing, red tape, etc. Is there a better way? Perhaps there is.

In 12c, there is the SQL translation framework, designed for applications being migrated from another database platform to Oracle. We intercept incoming SQL statements in (say) SQL Server syntax, and recast them on the fly into equivalent Oracle syntax. But we can also take advantage of that to handle this data error issue. Here’s a demo of that in action:


SQL> create user demo identified by demo quota 100m on users;

User created.

SQL> grant create session, alter session, create table, create sql translation profile, create procedure to DEMO;

Grant succeeded.

SQL>
SQL> conn demo/demo
Connected.

SQL> create table emp ( x varchar(10));

Table created.

SQL> create table source_emps as
  2  select cast(rpad('E',rownum,'E') as varchar2(20)) x_in
  3  from dual connect by level <= 15;

Table created.

SQL> select * from source_emps;

X_IN
--------------------
E
EE
EEE
EEEE
EEEEE
EEEEEE
EEEEEEE
EEEEEEEE
EEEEEEEEE
EEEEEEEEEE
EEEEEEEEEEE
EEEEEEEEEEEE
EEEEEEEEEEEEE
EEEEEEEEEEEEEE
EEEEEEEEEEEEEEE

15 rows selected.

So we have an EMP table with a 10-byte column “X”, and some source rows in SOURCE_EMPS with ever-increasing lengths for “X_IN”. Once our rows get larger than 10 bytes, we are going to have a problem inserting them into EMP.


SQL> INSERT INTO EMP SELECT * FROM SOURCE_EMPS;
INSERT INTO EMP SELECT * FROM SOURCE_EMPS
                       *
ERROR at line 1:
ORA-12899: value too large for column "DEMO"."EMP"."X" (actual: 11, maximum: 10)

So let us now intercept that SQL using the translation framework.


SQL>
SQL> begin
  2    dbms_sql_translator.drop_profile(profile_name => 'DEMO_PROFILE');
  3  exception when others then null;
  4  end;
  5  /

PL/SQL procedure successfully completed.

SQL>
SQL> exec dbms_sql_translator.create_profile('DEMO_PROFILE');

PL/SQL procedure successfully completed.

SQL>
SQL> BEGIN
  2      DBMS_SQL_TRANSLATOR.REGISTER_SQL_TRANSLATION(
  3        profile_name    => 'DEMO_PROFILE',
  4        sql_text        => 'INSERT INTO EMP SELECT * FROM SOURCE_EMPS',   
  5        translated_text => 'INSERT INTO EMP SELECT * FROM SOURCE_EMPS LOG ERRORS INTO ERR$_EMP (TO_CHAR(SYSTIMESTAMP)) REJECT LIMIT UNLIMITED');
  6  END;
  7  /

PL/SQL procedure successfully completed.

So we can create an error logging table and activate our translation profile – the 10601 event set below is what allows the translation profile to take effect in an ordinary SQL*Plus session.


SQL> exec dbms_errlog.create_error_log ( dml_table_name => 'EMP' );

PL/SQL procedure successfully completed.

SQL> alter session set sql_translation_profile = DEMO_PROFILE;

Session altered.

SQL> alter session set events = '10601 trace name context forever, level 32';

Session altered.

SQL>
SQL> INSERT INTO EMP SELECT * FROM SOURCE_EMPS;

10 rows created.

Notice now that our insert was “successful” because we are now capturing the errors.


SQL> select count(*) from err$_emp;

  COUNT(*)
----------
         5

SQL>
SQL> select ORA_ERR_MESG$ from err$_emp;

ORA_ERR_MESG$
---------------------------------------------------------------------------------
ORA-12899: value too large for column "DEMO"."EMP"."X" (actual: 11, maximum: 10)
ORA-12899: value too large for column "DEMO"."EMP"."X" (actual: 12, maximum: 10)
ORA-12899: value too large for column "DEMO"."EMP"."X" (actual: 13, maximum: 10)
ORA-12899: value too large for column "DEMO"."EMP"."X" (actual: 14, maximum: 10)
ORA-12899: value too large for column "DEMO"."EMP"."X" (actual: 15, maximum: 10)
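And because DBMS_ERRLOG also copies the table’s columns into the error table, the offending data itself has been captured – which was the original goal. A quick look would be something like this (the X column in ERR$_EMP holds the rejected value as a VARCHAR2):

select ora_err_mesg$, x
from   err$_emp;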