SMON_SCN_TIME and ORA-8181? Digging deeper

In recent versions of the Oracle database, we’ve had the ability to convert between a System Change Number (SCN) and the approximate time to which that SCN pertains. These functions are unsurprisingly called SCN_TO_TIMESTAMP and TIMESTAMP_TO_SCN. The only potential misnomer here is that even though the functions are named for timestamps and return a TIMESTAMP datatype, on most platforms you will notice that the granularity doesn’t run down into fractions of a second.


SQL> select scn_to_timestamp(14816563713652) from dual;

SCN_TO_TIMESTAMP(14816563713652)
---------------------------------------------------------
08-JUN-19 02.30.59.000000000 AM

This all looks great until you start poking around too far into the past, and you end up in territory like this:


SQL> select scn_to_timestamp(14816563693489) from dual;
select scn_to_timestamp(14816563693489) from dual
       *
ERROR at line 1:
ORA-08181: specified number is not a valid system change number
ORA-06512: at "SYS.SCN_TO_TIMESTAMP", line 1

If I poke around in V$LOG_HISTORY, I can pretty easily confirm that SCN 14816563693489 was indeed a valid SCN for this database at some stage in the past, so the fact that we can encounter ORA-08181 suggests that there is a finite structure holding the mapping between SCNs and the times to which they pertain. And indeed there is: the table is called SYS.SMON_SCN_TIME.


SQL> desc sys.smon_scn_time
 Name                                      Null?    Type
 ----------------------------------------- -------- --------------------
 THREAD                                             NUMBER
 TIME_MP                                            NUMBER
 TIME_DP                                            DATE
 SCN_WRP                                            NUMBER
 SCN_BAS                                            NUMBER
 NUM_MAPPINGS                                       NUMBER
 TIM_SCN_MAP                                        RAW(1200)
 SCN                                                NUMBER
 ORIG_THREAD                                        NUMBER

and in most instances, it will hold around a week’s worth of SCN to timestamp mapping information.


SQL> select min(time_dp), max(time_dp) from sys.smon_scn_time;

MIN(TIME_ MAX(TIME_
--------- ---------
07-JUN-19 13-JUN-19

SQL> select count(*) from  sys.smon_scn_time;

  COUNT(*)
----------
      1603

When SMON_SCN_TIME first came into existence, the granularity of data in this table was an entry approximately every 5 minutes. Hence you could map an SCN to, at best, a 5 minute window. In more recent releases, this has been improved to a granularity of 3 seconds. But the table does not appear to reflect that evolution if you look at some sample data.


SQL> select scn, time_dp
  2  from sys.smon_scn_time s
  3  where rownum <= 10;

               SCN TIME_DP
------------------ -------------------
    14816565366476 11/06/2019 08:25:11
    14816565366679 11/06/2019 08:30:38
    14816565366808 11/06/2019 08:35:11
    14816565367164 11/06/2019 08:40:44
    14816565367291 11/06/2019 08:45:12
    14816565367475 11/06/2019 08:50:32
    14816565029366 10/06/2019 01:28:13
    14816565029605 10/06/2019 01:33:40
    14816565032515 10/06/2019 01:38:13
    14816565032779 10/06/2019 01:43:40

It still has a row every 5 minutes to handle backward compatibility, and to drill down to the 3 second level, you are expected to use the functions SCN_TO_TIMESTAMP and TIMESTAMP_TO_SCN already mentioned.
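
For example, here is a quick round trip through both functions (just a sketch – the values returned will obviously depend on your database):

select timestamp_to_scn(systimestamp - interval '10' minute) scn_then,
       scn_to_timestamp(
         timestamp_to_scn(systimestamp - interval '10' minute)) ts_then
from   dual;

TIMESTAMP_TO_SCN returns the approximate SCN associated with that time, and mapping it back with SCN_TO_TIMESTAMP shows the same 3-second granularity in the other direction.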

But what if you wanted to store this SCN to timestamp mapping for longer? Activating Flashback Data Archive is one way of achieving that, but what if you just wanted to keep the mapping forever without it? At first it seems a simple task. I can simply get the lowest SCN from SMON_SCN_TIME:


SQL> select min(scn) scn from   sys.SMON_SCN_TIME;

               SCN
------------------
    14816563714099
    

and then use a familiar CONNECT BY technique to roll through the SCNs from that point onward


SQL> select 14816563714099+rownum scn,  
  2         scn_to_timestamp(14816563714099+rownum) ts
  3  from ( select 1 from dual connect by level <= 50 );

               SCN TS
------------------ -----------------------------------
    14816563714100 08-JUN-19 02.41.53.000000000 AM
    14816563714101 08-JUN-19 02.41.56.000000000 AM
    14816563714102 08-JUN-19 02.41.59.000000000 AM
    14816563714103 08-JUN-19 02.41.59.000000000 AM
    14816563714104 08-JUN-19 02.41.59.000000000 AM
    14816563714105 08-JUN-19 02.41.59.000000000 AM
    14816563714106 08-JUN-19 02.41.59.000000000 AM
    14816563714107 08-JUN-19 02.41.59.000000000 AM
    14816563714108 08-JUN-19 02.41.59.000000000 AM
    14816563714109 08-JUN-19 02.41.59.000000000 AM
    14816563714110 08-JUN-19 02.41.59.000000000 AM
    14816563714111 08-JUN-19 02.42.05.000000000 AM
    14816563714112 08-JUN-19 02.42.08.000000000 AM
    14816563714113 08-JUN-19 02.42.11.000000000 AM
    14816563714114 08-JUN-19 02.42.14.000000000 AM
    14816563714115 08-JUN-19 02.42.17.000000000 AM
    14816563714116 08-JUN-19 02.42.20.000000000 AM
    14816563714117 08-JUN-19 02.42.23.000000000 AM
    14816563714118 08-JUN-19 02.42.26.000000000 AM
    14816563714119 08-JUN-19 02.42.29.000000000 AM
    14816563714120 08-JUN-19 02.42.32.000000000 AM
    14816563714121 08-JUN-19 02.42.35.000000000 AM
    14816563714122 08-JUN-19 02.42.38.000000000 AM
    14816563714123 08-JUN-19 02.42.41.000000000 AM
    14816563714124 08-JUN-19 02.42.44.000000000 AM
    14816563714125 08-JUN-19 02.42.47.000000000 AM
    14816563714126 08-JUN-19 02.42.50.000000000 AM
    14816563714127 08-JUN-19 02.42.53.000000000 AM
    14816563714128 08-JUN-19 02.42.56.000000000 AM
    14816563714129 08-JUN-19 02.42.59.000000000 AM
    14816563714130 08-JUN-19 02.42.59.000000000 AM
    14816563714131 08-JUN-19 02.42.59.000000000 AM
    14816563714132 08-JUN-19 02.42.59.000000000 AM
    14816563714133 08-JUN-19 02.42.59.000000000 AM
    ...
    ...
  

But if we are going to store this data, we can start to see some things that can be improved.

Firstly, a lot of rows have the same timestamp. As mentioned, there is only a granularity of 3 seconds, and we can certainly burn through a lot of SCNs in a 3-second period. I can use some simple analytics to check whether a timestamp matches the preceding one, and filter it out if so.


SQL> with
  2  t as
  3    ( select min(scn) lo from   sys.smon_scn_time ),
  4  all_scns as
  5    ( select lo+rownum scn, scn_to_timestamp(lo+rownum) ts
  6      from  t
  7      connect by level <= 100 ),
  8  delta_scns as
  9    ( select a.*, row_number() over ( partition by ts order by scn ) as rn
 10      from all_scns a )
 11  select * from delta_scns
 12  where rn = 1;

               SCN TS                                               RN
------------------ ---------------------------------------- ----------
    14816563714100 08-JUN-19 02.41.53.000000000 AM                   1
    14816563714101 08-JUN-19 02.41.56.000000000 AM                   1
    14816563714102 08-JUN-19 02.41.59.000000000 AM                   1
    14816563714111 08-JUN-19 02.42.05.000000000 AM                   1
    14816563714112 08-JUN-19 02.42.08.000000000 AM                   1
    14816563714113 08-JUN-19 02.42.11.000000000 AM                   1
    14816563714114 08-JUN-19 02.42.14.000000000 AM                   1
    14816563714115 08-JUN-19 02.42.17.000000000 AM                   1
    14816563714116 08-JUN-19 02.42.20.000000000 AM                   1
    14816563714117 08-JUN-19 02.42.23.000000000 AM                   1
    14816563714118 08-JUN-19 02.42.26.000000000 AM                   1
    14816563714119 08-JUN-19 02.42.29.000000000 AM                   1
    14816563714120 08-JUN-19 02.42.32.000000000 AM                   1
    14816563714121 08-JUN-19 02.42.35.000000000 AM                   1
    14816563714122 08-JUN-19 02.42.38.000000000 AM                   1
    14816563714123 08-JUN-19 02.42.41.000000000 AM                   1
    14816563714124 08-JUN-19 02.42.44.000000000 AM                   1
    14816563714125 08-JUN-19 02.42.47.000000000 AM                   1
    14816563714126 08-JUN-19 02.42.50.000000000 AM                   1
    14816563714127 08-JUN-19 02.42.53.000000000 AM                   1
    14816563714128 08-JUN-19 02.42.56.000000000 AM                   1
    14816563714129 08-JUN-19 02.42.59.000000000 AM                   1
    ...
    ...
 

Secondly, we need to stop at a particular point. I can easily get the highest SCN from the table


SQL> select max(scn) scn from   sys.smon_scn_time;

               SCN
------------------
    14816565838367
 

but that is when things start to get problematic. Even for just this small set of data (just a few days), on my almost entirely idle database (it’s my laptop):


SQL> select max(scn) - 14816563714100 from   sys.smon_scn_time;

MAX(SCN)-14816563714100
-----------------------
                2124267
    

you can see that I’ve burned through over 2 million SCNs. When I put that back into my query


SQL> with
  2  t as
  3    ( select min(scn) lo, max(scn) hi from   sys.smon_scn_time ),
  4  all_scns as
  5    ( select lo+rownum scn, scn_to_timestamp(lo+rownum) ts
  6      from  t
  7      connect by level <= hi-lo  ),
  8  delta_scns as
  9    ( select a.*, row_number() over ( partition by ts order by scn ) as rn
 10      from all_scns a )
 11  select * from delta_scns
 12  where rn = 1;
 

it took a looonnnggg time to come back. This is understandable – it is millions of calls to extract a timestamp, followed by some heavy-duty analytics to remove the duplicates. It has become a very expensive operation to perform.

So that got me thinking – how does the database do it? In particular, how does the database get down to 3-second granularity when there is still only a row every 5 minutes in SMON_SCN_TIME? This is where the TIM_SCN_MAP column comes into play.


SQL> select scn, num_mappings, tim_scn_map
  2  from sys.smon_scn_time
  3  where scn = 14816565797824;

               SCN NUM_MAPPINGS TIM_SCN_MAP
------------------ ------------ -----------------------------------------------------------------------------
    14816565797824          100 2F90015DC12324C0790D00003290015DC22324C0790D00003590015DC32324C0790D0000...

Scanning along the raw data and looking for patterns, I could see that the “790D0000” pattern repeated every 24 characters (or 12 bytes). A simple check confirmed that the number of those groups appears to equate to the NUM_MAPPINGS column, so it is reasonable to assume that the raw data is a set of time mappings. Given that the maximum value for NUM_MAPPINGS is 100, this also fits the hypothesis, because 100 mappings at 3-second granularity yields 5 minutes, which is how often we get a row in SMON_SCN_TIME.


SQL> select scn, num_mappings, length(tim_scn_map)/24
  2  from   sys.smon_scn_time
  3  /

               SCN NUM_MAPPINGS LENGTH(TIM_SCN_MAP)/24
------------------ ------------ ----------------------
    14816565366476          100                    100
    14816565366679           83                     83
    14816565366808          100                    100
    14816565367164           82                     82
    14816565367291          100                    100
    14816565367475           86                     86
    14816565029366          100                    100
 ...
 

So 24 characters (12 bytes) is probably one occurrence of an SCN to timestamp mapping. The task now is to dig that information out of the raw encoding. I’ll grab the first 12 bytes from the row above, 2F90015DC12324C0790D0000, and see if we can align them to other values in the table. The first thing I noticed is that “790D0000” rarely changes across the whole table, so I converted it to decimal to see if I could match it to anything. We have to swap the bytes first, and then I got:


SQL> select to_number('0D79','xxxx') from dual;

TO_NUMBER('0D79','XXXX')
------------------------
                    3449

That matched the SCN_WRP column in SMON_SCN_TIME. Grabbing the next 4 bytes gives


SQL> select to_number('C02423C1','xxxxxxxx') from dual;

TO_NUMBER('C02423C1','XXXXXXXX')
--------------------------------
                      3223593921

That matched the SCN_BAS column in SMON_SCN_TIME. So SCN_WRP and SCN_BAS together are enough to form a complete SCN.
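
As an aside, this gives us the formula for reconstructing a full SCN from its two parts, something we will lean on later: full SCN = SCN_WRP * 4294967296 + SCN_BAS, where 4294967296 is 2^32, the wrap point of SCN_BAS. Checking that against the two values just decoded:

select 3449 * 4294967296 + 3223593921 full_scn from dual;

-- full_scn = 14816565797825, which sits right on top of the
-- base SCN 14816565797824 of the row we queried above

Which would mean that the remaining 4 bytes should be the time information.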


SQL> select to_number('5D01902F','xxxxxxxx') from dual;

TO_NUMBER('5D01902F','XXXXXXXX')
--------------------------------
                      1560383535
   

Obviously that does not look like a date or a time, but this is an internal format seen in various places around the Oracle database, for example in the redo logs. To explain where it comes from, it is easier to approach it from the other direction (starting with the time).

Let’s say I wanted a unique number for a clock time of just hours and minutes (eg: 09:37). I could do something like:

unique val = hours * 100 + minutes

That would work because I know that minutes are capped at 59, so there is no chance of an overlap when using a multiplier of 100. Here is how that concept can be taken to its full extent to get a unique number for a full date and time. I’ll pick a date/time of 2019-06-12 23:52:15 as an example:

  • Take the year, 2019.
  • We don’t need anything before Oracle was “born”, so subtract 1976 = 43
  • For the months, multiply by 13 (anything more than 12 months) = 559
  • Because we’re IT nerds, we’ll start months at zero (0=January), so we add 5 for June = 564
  • For the day in a month, multiply by 32 (anything more than 31) = 18048
  • Because we’re IT nerds, we’ll start day of month at zero (=1st), so we add 11 for the 12th = 18059
  • For the hours, multiply by 24 (because hours never exceed 23) = 433416
  • No need for nerdy stuff here, because hours are already normalised at 0-23, so we add 23 = 433439
  • For the minutes, multiply by 60 (minutes are 0 to 59) = 26006340
  • Then add the minutes = 26006392
  • Same for the seconds, multiply by 60 = 1560383520
  • and finally we add the seconds = 1560383535
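
As a cross-check, the bullet arithmetic above rolls up into a single SQL expression:

select (((((2019-1976)*13 + 5)*32 + 11)*24 + 23)*60 + 52)*60 + 15 encoded
from   dual;

-- encoded = 1560383535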

And voila! You can see that it matches the decimal expansion of the 4 bytes 5D01902F above. Now that we know how the time-to-number conversion is done, a little PL/SQL lets us do the reverse (number to time):


SQL> declare
  2    t int := 1560383535;
  3    secs int;
  4    mins int;
  5    hrs int;
  6    dy int;
  7    mon int;
  8    yr int;
  9  begin
 10   secs := mod(t,60);
 11   t := t - secs;
 12   t := floor(t / 60);
 13   mins := mod(t,60);
 14   t := t - mins;
 15   t := floor(t / 60);
 16   hrs := mod(t,24);
 17   t := t - hrs;
 18   t := floor(t / 24);
 19   dy := mod(t,32);
 20   t := t - dy;
 21   t := floor(t / 32);
 22   dy := dy + 1;
 23   mon := mod(t,13);
 24   t := t - mon;
 25   mon := mon + 1;
 26   t := floor(t / 13);
 27   yr := t + 1976;
 28   dbms_output.put_line(yr||'-'||mon||'-'||dy||' '||hrs||':'||mins||':'||secs);
 29  end;
 30  /
2019-6-12 23:52:15

To make all of this simpler and more modular, I’ll build a couple of functions to bring together all the bits and pieces we’ve learnt. First, a function to take the byte-swapped hex string and return an SCN:


SQL> create or replace
  2  function raw_to_scn(p_scn varchar2) return number is
  3  begin
  4    return to_number(
  5                  substr(p_scn,7,2)||
  6                  substr(p_scn,5,2)||
  7                  substr(p_scn,3,2)||
  8                  substr(p_scn,1,2),'xxxxxxxx');
  9  end;
 10  /

Function created.
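
A quick test against the SCN_BAS bytes from earlier (C12324C0 as they appear in the raw, before swapping):

select raw_to_scn('C12324C0') from dual;

-- returns 3223593921, matching the SCN_BAS value we decoded manually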

and then a function that extends the anonymous block above to take a byte-swapped hex string and return a timestamp (well, a date in reality).


SQL> create or replace
  2  function raw_to_date(p_time varchar2) return date is
  3    t int := to_number(
  4                  substr(p_time,7,2)||
  5                  substr(p_time,5,2)||
  6                  substr(p_time,3,2)||
  7                  substr(p_time,1,2),'xxxxxxxx');
  8    secs int;
  9    mins int;
 10    hrs int;
 11    dy int;
 12    mon int;
 13    yr int;
 14  begin
 15   secs := mod(t,60);
 16   t := t - secs;
 17   t := floor(t / 60);
 18   mins := mod(t,60);
 19   t := t - mins;
 20   t := floor(t / 60);
 21   hrs := mod(t,24);
 22   t := t - hrs;
 23   t := floor(t / 24);
 24   dy := mod(t,32);
 25   t := t - dy;
 26   t := floor(t / 32);
 27   dy := dy + 1;
 28   mon := mod(t,13);
 29   t := t - mon;
 30   mon := mon + 1;
 31   t := floor(t / 13);
 32   yr := t + 1976;
 33   return to_date(yr||'-'||mon||'-'||dy||' '||hrs||':'||mins||':'||secs,'yyyy-mm-dd hh24:mi:ss');
 34  end;
 35  /

Function created.
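
And a similar sanity check on the time bytes from the first mapping:

select to_char(raw_to_date('2F90015D'),'yyyy-mm-dd hh24:mi:ss') from dual;

-- returns 2019-06-12 23:52:15, matching the anonymous block output above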

With these in place, I can create a couple of types to serve as return datatypes for a pipelined function


SQL> create or replace
  2  type scn_dte as object
  3    ( scn int, dte date );
  4  /

Type created.

SQL>
SQL> create or replace
  2  type scn_dte_nt as table of scn_dte
  3  /

Type created.

and here is a pipelined function that takes the important components from SMON_SCN_TIME – namely SCN_WRP, SCN_BAS, NUM_MAPPINGS and the raw data in TIM_SCN_MAP – and spits out the 3-second interval data rather than just one row per 5 minutes.


SQL> create or replace
  2  function full_smon_scn_timestamp(p_start_scn int default 0) return scn_dte_nt pipelined as
  3    l_map varchar2(2400);
  4  begin
  5    for i in ( select * from sys.smon_scn_time
  6               where scn > p_start_scn
  7               order by scn
  8             )
  9    loop
 10      pipe row ( scn_dte(i.scn_wrp * 4294967296 + i.scn_bas,i.time_dp));
 11      l_map := i.tim_scn_map;
 12      for j in 1 .. i.num_mappings
 13      loop
 14         pipe row (
 15           scn_dte(
 16             i.scn_wrp * 4294967296 + raw_to_scn(substr(l_map,9,8)),
 17             raw_to_date(substr(l_map,1,8))
 18             )
 19          );
 20         l_map := substr(l_map,25);
 21      end loop;
 22    end loop;
 23  end;
 24  /

Function created.

Let’s give that function a run to see what comes out:


SQL> select * from full_smon_scn_timestamp()
  2  where rownum <= 20;

               SCN DTE
------------------ -------------------
    14816563726597 07/06/2019 21:50:43
    14816563726598 07/06/2019 21:50:46
    14816563726599 07/06/2019 21:50:49
    14816563726600 07/06/2019 21:50:52
    14816563726601 07/06/2019 21:50:55
    14816563726602 07/06/2019 21:50:58
    14816563726603 07/06/2019 21:51:01
    14816563726604 07/06/2019 21:51:04
    14816563726605 07/06/2019 21:51:07
    14816563726606 07/06/2019 21:51:10
    14816563726607 07/06/2019 21:51:13
    14816563726608 07/06/2019 21:51:16
    14816563726609 07/06/2019 21:51:19
    14816563726610 07/06/2019 21:51:22
    14816563726611 07/06/2019 21:51:25
    14816563726612 07/06/2019 21:51:28
    14816563726633 07/06/2019 21:51:34
    14816563726634 07/06/2019 21:51:37
    14816563726635 07/06/2019 21:51:40
    14816563726636 07/06/2019 21:51:43

Each row in SMON_SCN_TIME contains a starting value for the SCN plus all of the mappings, so the total number of 3-second intervals is:


SQL> select sum(num_mappings+1) from sys.smon_scn_time;

SUM(NUM_MAPPINGS+1)
-------------------
             145262
  

and I can compare that to the total number of rows that will be returned from my function


SQL> select count(*) from full_smon_scn_timestamp();

  COUNT(*)
----------
    145262
   

So there you have a way to grab all of those 3-second granularity entries in SMON_SCN_TIME without having to call SCN_TO_TIMESTAMP millions of times. You could run this function (say) daily, passing in a starting SCN equal to the highest value you have retrieved so far, to build up your own custom SMON_SCN_TIME variant that you can keep for as long as you want.
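
To make that concrete, here is a minimal sketch of such a daily harvest (the table name and structure are my own invention):

create table my_scn_map ( scn int, dte date );

declare
  l_hiwater int;
begin
  -- only fetch mappings beyond what we have already harvested
  select nvl(max(scn),0) into l_hiwater from my_scn_map;

  insert into my_scn_map
  select scn, dte
  from   full_smon_scn_timestamp(l_hiwater);

  commit;
end;
/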

Of course, poking around in raw fields is totally unsupported, so don’t come crying to me if you decide to use this for anything more than exploration and one day it all blows up in your face 🙂

Kscope Sunday will be awesome

Yeah yeah, I know. What kind of dufus cut-pastes a giant image into their blog post? That would be ….. me 🙂

But the reality is, there is just soooooo much going on this year on the Sunday before the “official” start of the Kscope conference, it was too hard to condense it into a few sentences. So I just dumped the image from the website, but you can read about it here.

It’s my first Kscope and it will be a blast to be running a day of high quality but dirt cheap learning with Maria, Jeff, Steven and Blaine.

And we’ll even have some fun and games. How good is your knowledge of the Oracle Database and the Oracle community? Our Sunday quiz will let you shine! (Well, to be honest, we’ve designed the quiz so that anyone in the community has a chance to win, so come along to have some fun and maybe pick up some prizes as well!)

See you in Seattle!

 

[Image: the Kscope Sunday program, dumped from the conference website]

With and without WITH_PLSQL within a WITH SQL statement

OK, let’s be honest right up front. The motivation for this post is solely to be able to roll out a tongue-twisting blog post title 🙂. But hopefully there’s some value in here for you as well if you’re hitting the error:

ORA-32034: unsupported use of WITH clause

First, some background. A cool little enhancement to the WITH clause came along in 12c that allowed PL/SQL functions to be defined within the scope of the executing SQL statement. To see the benefit of this, consider the following example, one that I have a personal affinity with (given my surname).

Let’s say I’ve allowed mixed-case data in a table that holds names.


SQL> select surname
  2  from   names;

SURNAME
------------------------------
jones
brown
SMITH

There’s nothing inherently wrong here, but in terms of rendering that data in a report or on screen, it would be nice to have some consistency.

“No problem” you think, “I’ll just slap an INITCAP in there”


SQL> select initcap(surname)
  2  from   names;

INITCAP(SURNAME)
------------------------------
Jones
Brown
Smith

That works fine of course until …. yours truly gets added into the mix 🙂. After a couple of new rows are added, we can start to see the shortcomings of INITCAP.


SQL> select initcap(surname)
  2  from   names;

INITCAP(SURNAME)
------------------------------
Jones
Brown
Smith
Mcdonald
Johnson'S

I’d like a capital D in my name, and letters that follow apostrophes have some nuances that might need handling. This can be solved without too much fuss – with a little bit of PL/SQL I can produce a custom version of INITCAP that will handle these exceptional cases.


SQL> create or replace
  2  function MY_INITCAP(p_string varchar2) return varchar2 is
  3    l_string varchar2(1000) := p_string;
  4  begin
  5    if regexp_like(l_string,'(Mac[A-Z]|Mc[A-Z])') then
  6        null;
  7    elsif l_string like '''%' then
  8        null;
  9    else
 10      l_string := initcap(l_string);
 11      if l_string like '_''S%' then
 12         null;
 13      else
 14         l_string := replace(l_string,'''S','''s');
 15      end if;
 16    end if;
 17
 18    return l_string;
 19  end;
 20  /

SQL> select my_initcap(surname)
  2  from   names;

MY_INITCAP(SURNAME)
--------------------------
Jones
Brown
Smith
McDonald
Johnson's

But perhaps I’d like that functionality inline with the SQL so that a future maintainer can directly see what I’ve done. Yes, I could refactor the code to be 100% SQL with no reliance on PL/SQL, using something like this:


SQL> select
  2    case
  3      when regexp_like(surname,'(Mac[A-Z]|Mc[A-Z])') then surname
  4      when surname like '''%' then surname
  5      when initcap(surname) like '_''S%' then surname
  6      else replace(initcap(surname),'''S','''s')
  7    end ugh
  8  from names;

UGH
-------------------------------
Jones
Brown
Smith
McDonald
Johnson's

But if I’m doing this to help a future maintainer….well… that convoluted CASE statement probably isn’t such a nice remnant to leave for them 🙂. So, since 12c, we’ve been able to add that PL/SQL code directly within the SQL statement itself.


SQL> WITH
  2    function my_initcap(p_string varchar2) return varchar2 is
  3      l_string varchar2(1000) := p_string;
  4    begin
  5      if regexp_like(l_string,'(Mac[A-Z]|Mc[A-Z])') then
  6          null;
  7      elsif l_string like '''%' then
       ...
 17
 18      return l_string;
 19    end;
 20  select my_initcap(surname)
 21  from   names;

MY_INITCAP(SURNAME)
-----------------------------------------
Jones
Brown
Smith
McDonald

Hopefully you can now see the benefit of the feature. Now, back to the topic at hand: the ORA-32034 error. If you attempt to use the feature within an INSERT, UPDATE or DELETE statement, you’ll get a surprise:


SQL> insert into TARGET_TABLE
  2  WITH
  3    function my_initcap(p_string varchar2) 
  4                return varchar2 is
  5      l_string varchar2(1000) := p_string;
  6    begin
          ...
 20    end;
 21  select my_initcap(surname)
 22  from   names;
 23  /

WITH
*
ERROR at line 2:
ORA-32034: unsupported use of WITH clause

To overcome this, you need to specify the WITH_PLSQL hint


SQL> insert /*+ WITH_PLSQL */ into TARGET_TABLE
  2  WITH
  3    function my_initcap(p_string varchar2) 
  4                return varchar2 is
  5      l_string varchar2(1000) := p_string;
  6    begin
          ...
 20    end;
 21  select my_initcap(surname)
 22  from   names;
 23  /

5 rows inserted.

I’ve never ascertained precisely why the hint is needed (I’m asking around internally within the database group), but I have a hypothesis: given that you could potentially write anything within a PL/SQL function (including independent transactions), my guess is that the hint is a flag to the database to say “have an extra-careful double-check of the function code to make sure it’s not doing anything dangerous, in particular to the table we are doing the DML on”.

I might be 100% wrong here – I’ll update the post if I get more information in future.

Advice on fragmentation and shrinkage

If you have performed some sort of data cleanup or similar on a table, then the deleted space will be reused by future insertions. But if

  • that cleanup was the last task you were performing on that table, i.e., you are not expecting a lot of new data to ever come in again, or
  • you are performing a lot of full scan queries on that table and you want to make sure they are as efficient as possible

then there may be benefits to performing a shrink on that table to reclaim that space. One of the cool things about the segment advisor is that it will detect whether there are benefits to be gained by shrinking a segment. Here’s an example of that: I create a large table and then delete every second row.


SQL> create table scott.demo_table as
  2  select d.* from dba_objects d,
  3   ( select 1 from dual connect by level <= 100 ) ;

Table created.

SQL>
SQL> delete from scott.demo_table
  2  where mod(object_id,2) = 0;

4125300 rows deleted.

SQL> commit;

Commit complete.

Now I run the segment advisor and I get a nice report on what can be done to reclaim that space.



SQL>
SQL>
SQL> set serveroutput on size 1000000
SQL> set linesize 200
SQL> set verify off
SQL>
SQL> declare
  2    l_object_id     number;
  3    l_task_name     varchar2(50) := 'TEST_TASK';
  4    l_object_type   varchar2(50) := upper('TABLE');
  5    l_attr1         varchar2(50) := upper('SCOTT');
  6    l_attr2         varchar2(50) := upper('DEMO_TABLE');
  7  begin
  8    begin  DBMS_ADVISOR.delete_task(task_name => l_task_name); exception when others then null; end;
  9
 10    dbms_advisor.create_task (
 11      advisor_name      => 'Segment Advisor',
 12      task_name         => l_task_name);
 13
 14    dbms_advisor.create_object (
 15      task_name   => l_task_name,
 16      object_type => l_object_type,
 17      attr1       => l_attr1,
 18      attr2       => l_attr2,
 19      attr3       => NULL,
 20      attr4       => 'null',
 21      attr5       => NULL,
 22      object_id   => l_object_id);
 23
 24    dbms_advisor.set_task_parameter (
 25      task_name => l_task_name,
 26      parameter => 'RECOMMEND_ALL',
 27      value     => 'TRUE');
 28
 29    dbms_advisor.execute_task(task_name => l_task_name);
 30
 31
 32    for cur_rec in (select f.impact,
 33                           o.type,
 34                           o.attr1,
 35                           o.attr2,
 36                           f.message,
 37                           f.more_info
 38                    from   dba_advisor_findings f
 39                           join dba_advisor_objects o on f.object_id = o.object_id and f.task_name = o.task_name
 40                    where  f.task_name = l_task_name
 41                    order by f.impact desc)
 42    loop
 43      dbms_output.put_line('..');
 44      dbms_output.put_line('Type             : ' || cur_rec.type);
 45      dbms_output.put_line('Attr1            : ' || cur_rec.attr1);
 46      dbms_output.put_line('Attr2            : ' || cur_rec.attr2);
 47      dbms_output.put_line('Message          : ' || cur_rec.message);
 48      dbms_output.put_line('More info        : ' || cur_rec.more_info);
 49    end loop;
 50
 51    dbms_advisor.delete_task(task_name => l_task_name);
 52  end;
 53  /
..
Type             : TABLE
Attr1            : SCOTT
Attr2            : DEMO_TABLE
Message          : Enable row movement of the table SCOTT.DEMO_TABLE and perform shrink, estimated savings is 596868412 bytes.
More info        : Allocated Space:1342177280: Used Space:745308868: Reclaimable Space :596868412:

PL/SQL procedure successfully completed.
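
Since this table sits in an ASSM tablespace, acting on that recommendation is just the two steps named in the advisor message:

alter table scott.demo_table enable row movement;
alter table scott.demo_table shrink space;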

But what if that segment sits in a manual segment space managed tablespace? We can see from the above that a shrink-style operation will yield some benefit, but there’s a problem: you cannot perform an ALTER TABLE SHRINK command unless the segment sits in an ASSM tablespace. So is the segment advisor of any use in these cases? Let’s re-run the demo to find out.


SQL> select * from dba_tablespaces where tablespace_name = 'NO_ASSM'
  2  @pr
==============================
TABLESPACE_NAME               : NO_ASSM
BLOCK_SIZE                    : 8192
INITIAL_EXTENT                : 65536
NEXT_EXTENT                   :
MIN_EXTENTS                   : 1
MAX_EXTENTS                   : 2147483645
MAX_SIZE                      : 2147483645
PCT_INCREASE                  :
MIN_EXTLEN                    : 65536
STATUS                        : ONLINE
CONTENTS                      : PERMANENT
LOGGING                       : LOGGING
FORCE_LOGGING                 : NO
EXTENT_MANAGEMENT             : LOCAL
ALLOCATION_TYPE               : SYSTEM
PLUGGED_IN                    : NO
SEGMENT_SPACE_MANAGEMENT      : MANUAL
DEF_TAB_COMPRESSION           : DISABLED
RETENTION                     : NOT APPLY
BIGFILE                       : NO
PREDICATE_EVALUATION          : HOST
ENCRYPTED                     : NO
COMPRESS_FOR                  :
DEF_INMEMORY                  : DISABLED
DEF_INMEMORY_PRIORITY         :
DEF_INMEMORY_DISTRIBUTE       :
DEF_INMEMORY_COMPRESSION      :
DEF_INMEMORY_DUPLICATE        :
SHARED                        : SHARED
DEF_INDEX_COMPRESSION         : DISABLED
INDEX_COMPRESS_FOR            :
DEF_CELLMEMORY                :
DEF_INMEMORY_SERVICE          :
DEF_INMEMORY_SERVICE_NAME     :
LOST_WRITE_PROTECT            : OFF
CHUNK_TABLESPACE              : N

PL/SQL procedure successfully completed.

SQL>
SQL> create table scott.demo_table tablespace no_assm as
  2  select d.* from dba_objects d,
  3   ( select 1 from dual connect by level <= 100 ) ;

Table created.

SQL>
SQL> delete from scott.demo_table
  2  where mod(object_id,2) = 0;

4125300 rows deleted.

SQL> commit;

Commit complete.

SQL>
SQL>
SQL> set serveroutput on size 1000000
SQL> set linesize 200
SQL> set verify off
SQL>
SQL> declare
  2    l_object_id     number;
  3    l_task_name     varchar2(50) := 'TEST_TASK';
  4    l_object_type   varchar2(50) := upper('TABLE');
  5    l_attr1         varchar2(50) := upper('SCOTT');
  6    l_attr2         varchar2(50) := upper('DEMO_TABLE');
  7  begin
  8    begin  DBMS_ADVISOR.delete_task(task_name => l_task_name); exception when others then null; end;
  9
 10    dbms_advisor.create_task (
 11      advisor_name      => 'Segment Advisor',
 12      task_name         => l_task_name);
 13
 14    dbms_advisor.create_object (
 15      task_name   => l_task_name,
 16      object_type => l_object_type,
 17      attr1       => l_attr1,
 18      attr2       => l_attr2,
 19      attr3       => NULL,
 20      attr4       => 'null',
 21      attr5       => NULL,
 22      object_id   => l_object_id);
 23
 24    dbms_advisor.set_task_parameter (
 25      task_name => l_task_name,
 26      parameter => 'RECOMMEND_ALL',
 27      value     => 'TRUE');
 28
 29    dbms_advisor.execute_task(task_name => l_task_name);
 30
 31
 32    for cur_rec in (select f.impact,
 33                           o.type,
 34                           o.attr1,
 35                           o.attr2,
 36                           f.message,
 37                           f.more_info
 38                    from   dba_advisor_findings f
 39                           join dba_advisor_objects o on f.object_id = o.object_id and f.task_name = o.task_name
 40                    where  f.task_name = l_task_name
 41                    order by f.impact desc)
 42    loop
 43      dbms_output.put_line('..');
 44      dbms_output.put_line('Type             : ' || cur_rec.type);
 45      dbms_output.put_line('Attr1            : ' || cur_rec.attr1);
 46      dbms_output.put_line('Attr2            : ' || cur_rec.attr2);
 47      dbms_output.put_line('Message          : ' || cur_rec.message);
 48      dbms_output.put_line('More info        : ' || cur_rec.more_info);
 49    end loop;
 50
 51    dbms_advisor.delete_task(task_name => l_task_name);
 52  end;
 53  /
..
Type             : TABLE
Attr1            : SCOTT
Attr2            : DEMO_TABLE
Message          : Perform re-org on the object DEMO_TABLE, estimated savings is 600175966 bytes.
More info        : Allocated Space:1342177280: Used Space:742001314: Reclaimable Space :600175966:

PL/SQL procedure successfully completed.

SQL>
SQL>

As you can see, the segment advisor takes that into account and adjusts its recommendation accordingly. And one of the cool things with 12.2 and above is that tables can be reorganised without an outage!


SQL> alter table scott.demo_table move online;

Table altered.

Nice!

Generic data models … generic applications … ugh

There’s a hesitation in publishing this example, because it may be interpreted as an endorsement of this approach, and it certainly isn’t. Over the years there have been plenty of articles describing the long-term pain that typically comes from generic data models. Here are a few to whet your appetite:

https://rodgersnotes.wordpress.com/2010/09/21/muck-massively-unified-code-key-generic-three-table-data-model/

https://www.red-gate.com/simple-talk/opinion/opinion-pieces/bad-carma/

https://asktom.oracle.com/pls/asktom/asktom.search?tag=query-on-design

But I’m posting this example because it serves as a nice tutorial for DBMS_SQL, and also because there is perhaps the suggestion that the requestor is moving away from a generic data model to a better-structured one. We’ll give them the benefit of the doubt here 🙂

The incoming data for this example was the “classic” generic data model, where the table name, column names and column values are defined not in the data dictionary but as values within a table:


SQL> CREATE TABLE data_table
  2    (
  3      row_seq      int,
  4      table_name   VARCHAR2(30),
  5      column_name  VARCHAR2(30),
  6      column_value VARCHAR2(30)
  7    );

Table created.

SQL>
SQL>
SQL>
SQL> insert into data_table values ( 1,'TEST_TAB','ORDER_NUMBER','ORD1001');

1 row created.

SQL> insert into data_table values ( 1,'TEST_TAB','CUST_NAME','CUST1001');

1 row created.

SQL> insert into data_table values ( 1,'TEST_TAB','INVOICE_NUMBER','INV001');

1 row created.

SQL> insert into data_table values ( 1,'TEST_TAB','ORDER_DATE','04/11/2018 15:14:00');

1 row created.

SQL> insert into data_table values ( 1,'TEST_TAB','AMOUNT','1001');

1 row created.

SQL>
SQL> insert into data_table values ( 2,'TEST_TAB','ORDER_NUMBER','ORD1002');

1 row created.

SQL> insert into data_table values ( 2,'TEST_TAB','CUST_NAME','CUST1002');

1 row created.

SQL> insert into data_table values ( 2,'TEST_TAB','INVOICE_NUMBER','INV002');

1 row created.

SQL> insert into data_table values ( 2,'TEST_TAB','ORDER_DATE','02/11/2018 15:14:00');

1 row created.

SQL> insert into data_table values ( 2,'TEST_TAB','AMOUNT','1002');

1 row created.

SQL>
SQL> commit;

Commit complete.

SQL> select * from data_table;

   ROW_SEQ TABLE_NAME                     COLUMN_NAME                    COLUMN_VALUE
---------- ------------------------------ ------------------------------ ------------------------------
         1 TEST_TAB                       ORDER_NUMBER                   ORD1001
         1 TEST_TAB                       CUST_NAME                      CUST1001
         1 TEST_TAB                       INVOICE_NUMBER                 INV001
         1 TEST_TAB                       ORDER_DATE                     04/11/2018 15:14:00
         1 TEST_TAB                       AMOUNT                         1001
         2 TEST_TAB                       ORDER_NUMBER                   ORD1002
         2 TEST_TAB                       CUST_NAME                      CUST1002
         2 TEST_TAB                       INVOICE_NUMBER                 INV002
         2 TEST_TAB                       ORDER_DATE                     02/11/2018 15:14:00
         2 TEST_TAB                       AMOUNT                         1002

10 rows selected.

The task here was to take those values and convert them into INSERT statements; with the sample data above, the aim is to insert those rows into a table called TEST_TAB. Given that the DML must be generated entirely from metadata, we can use DBMS_SQL to handle it. Constructing the DML is easier than you might think, thanks to some handy analytic SQL functions plus the ever-useful LISTAGG.


SQL> select
  2    row_number() over
  3      ( partition by table_name, row_seq order by column_name ) as seq,
  4    count(*) over
  5      ( partition by table_name, row_seq ) as col_cnt,
  6    listagg(column_name,',') within group
  7      ( order by column_name ) over ( partition by table_name, row_seq ) as cols,
  8    listagg(':'||column_name,',') within group
  9      ( order by column_name ) over ( partition by table_name, row_seq ) as bindcols,
 10    column_value
 11  from data_table
 12  order by table_name, row_seq, column_name
 13  @pr
==============================
SEQ                           : 1
COL_CNT                       : 5
COLS                          : AMOUNT,CUST_NAME,INVOICE_NUMBER,ORDER_DATE,ORDER_NUMBER
BINDCOLS                      : :AMOUNT,:CUST_NAME,:INVOICE_NUMBER,:ORDER_DATE,:ORDER_NUMBER
COLUMN_VALUE                  : 1001
==============================
SEQ                           : 2
COL_CNT                       : 5
COLS                          : AMOUNT,CUST_NAME,INVOICE_NUMBER,ORDER_DATE,ORDER_NUMBER
BINDCOLS                      : :AMOUNT,:CUST_NAME,:INVOICE_NUMBER,:ORDER_DATE,:ORDER_NUMBER
COLUMN_VALUE                  : CUST1001
==============================
SEQ                           : 3
COL_CNT                       : 5
COLS                          : AMOUNT,CUST_NAME,INVOICE_NUMBER,ORDER_DATE,ORDER_NUMBER
BINDCOLS                      : :AMOUNT,:CUST_NAME,:INVOICE_NUMBER,:ORDER_DATE,:ORDER_NUMBER
COLUMN_VALUE                  : INV001
==============================
SEQ                           : 4
COL_CNT                       : 5
COLS                          : AMOUNT,CUST_NAME,INVOICE_NUMBER,ORDER_DATE,ORDER_NUMBER
BINDCOLS                      : :AMOUNT,:CUST_NAME,:INVOICE_NUMBER,:ORDER_DATE,:ORDER_NUMBER
COLUMN_VALUE                  : 04/11/2018 15:14:00
==============================
SEQ                           : 5
COL_CNT                       : 5
COLS                          : AMOUNT,CUST_NAME,INVOICE_NUMBER,ORDER_DATE,ORDER_NUMBER
BINDCOLS                      : :AMOUNT,:CUST_NAME,:INVOICE_NUMBER,:ORDER_DATE,:ORDER_NUMBER
COLUMN_VALUE                  : ORD1001
==============================
SEQ                           : 1
COL_CNT                       : 5
COLS                          : AMOUNT,CUST_NAME,INVOICE_NUMBER,ORDER_DATE,ORDER_NUMBER
BINDCOLS                      : :AMOUNT,:CUST_NAME,:INVOICE_NUMBER,:ORDER_DATE,:ORDER_NUMBER
COLUMN_VALUE                  : 1002
==============================
SEQ                           : 2
COL_CNT                       : 5
COLS                          : AMOUNT,CUST_NAME,INVOICE_NUMBER,ORDER_DATE,ORDER_NUMBER
BINDCOLS                      : :AMOUNT,:CUST_NAME,:INVOICE_NUMBER,:ORDER_DATE,:ORDER_NUMBER
COLUMN_VALUE                  : CUST1002
==============================
SEQ                           : 3
COL_CNT                       : 5
COLS                          : AMOUNT,CUST_NAME,INVOICE_NUMBER,ORDER_DATE,ORDER_NUMBER
BINDCOLS                      : :AMOUNT,:CUST_NAME,:INVOICE_NUMBER,:ORDER_DATE,:ORDER_NUMBER
COLUMN_VALUE                  : INV002
==============================
SEQ                           : 4
COL_CNT                       : 5
COLS                          : AMOUNT,CUST_NAME,INVOICE_NUMBER,ORDER_DATE,ORDER_NUMBER
BINDCOLS                      : :AMOUNT,:CUST_NAME,:INVOICE_NUMBER,:ORDER_DATE,:ORDER_NUMBER
COLUMN_VALUE                  : 02/11/2018 15:14:00
==============================
SEQ                           : 5
COL_CNT                       : 5
COLS                          : AMOUNT,CUST_NAME,INVOICE_NUMBER,ORDER_DATE,ORDER_NUMBER
BINDCOLS                      : :AMOUNT,:CUST_NAME,:INVOICE_NUMBER,:ORDER_DATE,:ORDER_NUMBER
COLUMN_VALUE                  : ORD1002

Now that we have each data value, plus all the components for an INSERT statement (using bind variables rather than literals), we’re good to go:


SQL> create table test_tab (
  2    order_number varchar2(10),
  3    cust_name varchar2(10),
  4    invoice_number varchar2(10),
  5    order_date varchar2(30),
  6    amount varchar2(10)
  7  );

Table created.


SQL> declare
  2    l_sql varchar2(32000);
  3    l_cur     pls_integer := dbms_sql.open_cursor;
  4    l_execute pls_integer;
  5  begin
  6    for i in (
  7      select   table_name,
  8               column_name,
  9               row_number() over ( partition by table_name, row_seq order by column_name ) as seq,
 10               count(*) over ( partition by table_name, row_seq ) as col_cnt,
 11               listagg(column_name,',') within group ( order by column_name ) over ( partition by table_name, row_seq ) as cols,
 12               listagg(':'||column_name,',') within group ( order by column_name ) over ( partition by table_name, row_seq ) as bindcols,
 13               column_value
 14      from data_table
 15      order by table_name, row_seq, column_name
 16   ) loop
 17       if i.seq = 1 then
 18         l_sql := 'insert into '||i.table_name||'('||i.cols||') values ('||i.bindcols||')';
 19         dbms_sql.parse(l_cur,l_sql,dbms_sql.native);
 20       end if;
 21       dbms_sql.bind_variable(l_cur,i.column_name,i.column_value);
 22       if i.seq = i.col_cnt then
 23         l_execute := dbms_sql.execute(l_cur);
 24       end if;
 25   end loop;
 26   dbms_sql.close_cursor(l_cur);
 27  end;
 28  /

PL/SQL procedure successfully completed.

SQL>
SQL> select * from test_tab;

ORDER_NUMB CUST_NAME  INVOICE_NU ORDER_DATE                     AMOUNT
---------- ---------- ---------- ------------------------------ ----------
ORD1001    CUST1001   INV001     04/11/2018 15:14:00            1001
ORD1002    CUST1002   INV002     02/11/2018 15:14:00            1002

2 rows selected.

SQL>

DBMS_JOB – the joy of transactions

This is a follow-up to yesterday’s post on DBMS_JOB, and is critical if you’re upgrading to 19c soon. Mike Dietrich wrote a nice piece last week on the “under the covers” migration of the now-deprecated DBMS_JOB package to the new Scheduler architecture. You should check it out before reading on here.

Mike’s post mainly concerned what happens during the upgrade (spoiler: the DBMS_JOB jobs become scheduler jobs, but you can maintain them using the old API without drama), but Twitter was immediately abuzz with a couple of concerns that I wanted to address:

1) What about new jobs submitted via the old API after the upgrade?

For comparison, here’s what you see currently in 18c – DBMS_JOB is quite separate from the scheduler.


SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production

SQL> declare
  2    j int;
  3  begin
  4    dbms_job.submit(j,'begin dbms_session.sleep(60); end;',sysdate,'sysdate+1');
  5  end;
  6  /

PL/SQL procedure successfully completed.

SQL> select job, what from user_jobs;

       JOB WHAT
---------- --------------------------------------------------
       181 begin dbms_session.sleep(60); end;

SQL> select job_name, job_action from user_scheduler_jobs;

no rows selected


Now here’s the same in 19c


SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

SQL> declare
  2    j int;
  3  begin
  4    dbms_job.submit(j,'begin dbms_session.sleep(60); end;',sysdate,'sysdate+1');
  5  end;
  6  /

PL/SQL procedure successfully completed.

SQL> select job, what from user_jobs;

       JOB WHAT
---------- --------------------------------------------------
        22 begin dbms_session.sleep(60); end;

SQL> select job_name, job_action from user_scheduler_jobs;

JOB_NAME             JOB_ACTION
-------------------- ------------------------------------------------------------
DBMS_JOB$_22         begin dbms_session.sleep(60); end;

SQL> select * from user_scheduler_jobs
  2  @pr
==============================
JOB_NAME                      : DBMS_JOB$_22
JOB_SUBNAME                   :
JOB_STYLE                     : REGULAR
JOB_CREATOR                   : MCDONAC
CLIENT_ID                     :
GLOBAL_UID                    :
PROGRAM_OWNER                 :
PROGRAM_NAME                  :
JOB_TYPE                      : PLSQL_BLOCK
JOB_ACTION                    : begin dbms_session.sleep(60); end;
NUMBER_OF_ARGUMENTS           : 0
SCHEDULE_OWNER                :
SCHEDULE_NAME                 :
SCHEDULE_TYPE                 : PLSQL
START_DATE                    : 26-MAY-19 07.12.47.000000 PM +08:00
REPEAT_INTERVAL               : sysdate+1
EVENT_QUEUE_OWNER             :
EVENT_QUEUE_NAME              :
EVENT_QUEUE_AGENT             :
EVENT_CONDITION               :
EVENT_RULE                    :
FILE_WATCHER_OWNER            :
FILE_WATCHER_NAME             :
END_DATE                      :
JOB_CLASS                     : DEFAULT_JOB_CLASS
ENABLED                       : TRUE
AUTO_DROP                     : TRUE
RESTART_ON_RECOVERY           : FALSE
RESTART_ON_FAILURE            : FALSE
STATE                         : SCHEDULED
JOB_PRIORITY                  : 3
RUN_COUNT                     : 0
UPTIME_RUN_COUNT              :
MAX_RUNS                      :
FAILURE_COUNT                 : 0
UPTIME_FAILURE_COUNT          :
MAX_FAILURES                  :
RETRY_COUNT                   : 0
LAST_START_DATE               :
LAST_RUN_DURATION             :
NEXT_RUN_DATE                 : 27-MAY-19 07.12.47.000000 PM -07:00
SCHEDULE_LIMIT                :
MAX_RUN_DURATION              :
LOGGING_LEVEL                 : OFF
STORE_OUTPUT                  : TRUE
STOP_ON_WINDOW_CLOSE          : FALSE
INSTANCE_STICKINESS           : TRUE
RAISE_EVENTS                  :
SYSTEM                        : FALSE
JOB_WEIGHT                    : 1
NLS_ENV                       : NLS_LANGUAGE='AMERICAN' NLS_TERRITORY='AMERICA'...
SOURCE                        :
NUMBER_OF_DESTINATIONS        : 1
DESTINATION_OWNER             :
DESTINATION                   :
CREDENTIAL_OWNER              :
CREDENTIAL_NAME               :
INSTANCE_ID                   :
DEFERRED_DROP                 : FALSE
ALLOW_RUNS_IN_RESTRICTED_MODE : FALSE
COMMENTS                      :
FLAGS                         : 9007199254872624
RESTARTABLE                   : FALSE
HAS_CONSTRAINTS               : FALSE
CONNECT_CREDENTIAL_OWNER      :
CONNECT_CREDENTIAL_NAME       :
FAIL_ON_SCRIPT_ERROR          : FALSE

PL/SQL procedure successfully completed.

You can see that the job is enabled by default and is classed as a regular job. Even if you submit a one-off job, it will still be classed as regular, not lightweight.


SQL> declare
  2    j int;
  3  begin
  4    dbms_job.submit(j,'begin dbms_session.sleep(60); end;',sysdate);
  5  end;
  6  /

PL/SQL procedure successfully completed.

SQL> select job, what from user_jobs;

       JOB WHAT
---------- --------------------------------------------------
        25 begin dbms_session.sleep(60); end;

SQL> select * from user_scheduler_jobs
  2  @pr
==============================
JOB_NAME                      : DBMS_JOB$_25
JOB_SUBNAME                   :
JOB_STYLE                     : REGULAR
JOB_CREATOR                   : MCDONAC
CLIENT_ID                     :
GLOBAL_UID                    :
PROGRAM_OWNER                 :
PROGRAM_NAME                  :
JOB_TYPE                      : PLSQL_BLOCK
JOB_ACTION                    : begin dbms_session.sleep(60); end;
NUMBER_OF_ARGUMENTS           : 0
SCHEDULE_OWNER                :
SCHEDULE_NAME                 :
SCHEDULE_TYPE                 : ONCE
START_DATE                    : 26-MAY-19 08.37.09.000000 PM +08:00
REPEAT_INTERVAL               :
EVENT_QUEUE_OWNER             :
EVENT_QUEUE_NAME              :
EVENT_QUEUE_AGENT             :
EVENT_CONDITION               :
EVENT_RULE                    :
FILE_WATCHER_OWNER            :
FILE_WATCHER_NAME             :
END_DATE                      :
JOB_CLASS                     : DEFAULT_JOB_CLASS
ENABLED                       : TRUE
AUTO_DROP                     : TRUE
RESTART_ON_RECOVERY           : FALSE
RESTART_ON_FAILURE            : FALSE
STATE                         : SCHEDULED
JOB_PRIORITY                  : 3
RUN_COUNT                     : 0
UPTIME_RUN_COUNT              :
MAX_RUNS                      :
FAILURE_COUNT                 : 0
UPTIME_FAILURE_COUNT          :
MAX_FAILURES                  :
RETRY_COUNT                   : 0
LAST_START_DATE               :
LAST_RUN_DURATION             :
NEXT_RUN_DATE                 : 27-MAY-19 11.37.09.268652 AM +08:00
SCHEDULE_LIMIT                :
MAX_RUN_DURATION              :
LOGGING_LEVEL                 : OFF
STORE_OUTPUT                  : TRUE
STOP_ON_WINDOW_CLOSE          : FALSE
INSTANCE_STICKINESS           : TRUE
RAISE_EVENTS                  :
SYSTEM                        : FALSE
JOB_WEIGHT                    : 1
NLS_ENV                       : NLS_LANGUAGE='AMERICAN' NLS_TERRITORY='AMERICA' NLS_CURRENC
NLS_NUMERIC_CHARACTERS='.,' NLS_CALENDAR='GREGORIAN' NLS_DATE_FORMAT='DD-MON-RR' NLS_DATE_L
NLS_TIME_FORMAT='HH.MI.SSXFF AM' NLS_TIMESTAMP_FORMAT='DD-MON-RR HH.MI.SSXFF AM' NLS_TIME_T
NLS_TIMESTAMP_TZ_FORMAT='DD-MON-RR HH.MI.SSXFF AM TZR' NLS_DUAL_CURRENCY='$' NLS_COMP='BINA
NLS_NCHAR_CONV_EXCP='FALSE'
SOURCE                        :
NUMBER_OF_DESTINATIONS        : 1
DESTINATION_OWNER             :
DESTINATION                   :
CREDENTIAL_OWNER              :
CREDENTIAL_NAME               :
INSTANCE_ID                   :
DEFERRED_DROP                 : FALSE
ALLOW_RUNS_IN_RESTRICTED_MODE : FALSE
COMMENTS                      :
FLAGS                         : 9007199254880304
RESTARTABLE                   : FALSE
HAS_CONSTRAINTS               : FALSE
CONNECT_CREDENTIAL_OWNER      :
CONNECT_CREDENTIAL_NAME       :
FAIL_ON_SCRIPT_ERROR          : FALSE

PL/SQL procedure successfully completed.

Important Note: There is one critical thing you need to be aware of with this change. DBMS_JOB is an “old school” public API, hence pretty much anyone and everyone has access to it. Anyone familiar with DBMS_SCHEDULER will know that its components are true database objects, which can be protected with privileges. So when you upgrade to 19c, to ensure that you do not get any nasty surprises, users of DBMS_JOB will need the CREATE JOB privilege, otherwise their previous ability to submit jobs will disappear. For example:


SQL> conn scott/tiger@db192_pdb1
Connected.

SQL> declare
  2    j int;
  3  begin
  4    dbms_job.submit(j,'begin dbms_session.sleep(60); end;',sysdate);
  5  end;
  6  /
declare
*
ERROR at line 1:
ORA-27486: insufficient privileges
ORA-06512: at "SYS.DBMS_ISCHED", line 9387
ORA-06512: at "SYS.DBMS_ISCHED", line 9376
ORA-06512: at "SYS.DBMS_ISCHED", line 175
ORA-06512: at "SYS.DBMS_ISCHED", line 9302
ORA-06512: at "SYS.DBMS_IJOB", line 196
ORA-06512: at "SYS.DBMS_JOB", line 168
ORA-06512: at line 4
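
The remedy is a simple grant (issued by a suitably privileged user), after which the submission above succeeds:

grant create job to scott;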

2) What about the cherished transactional nature of DBMS_JOB?

If the old-style jobs are now scheduler jobs, do we lose the transactional element of DBMS_JOB? Nope. Even though we now get a paired scheduler entry, DBMS_JOB is still transactional (which I love!).


SQL> declare
  2    j int;
  3  begin
  4    dbms_job.submit(j,'begin dbms_session.sleep(60); end;',sysdate,'sysdate+1');
  5  end;
  6  /

PL/SQL procedure successfully completed.

SQL> select job, what from user_jobs;

       JOB WHAT
---------- --------------------------------------------------
        21 begin dbms_session.sleep(60); end;

SQL> select job_name, job_action from user_scheduler_jobs;

JOB_NAME             JOB_ACTION
-------------------- ------------------------------------------------------------
DBMS_JOB$_21         begin dbms_session.sleep(60); end;

SQL> roll;
Rollback complete.
SQL> select job, what from user_jobs;

no rows selected

SQL> select job_name, job_action from user_scheduler_jobs;

no rows selected

SQL>

If you love that transactional capability too, then make sure to visit the database ideas page to vote this idea up. I have no issue with DBMS_SCHEDULER committing by default, but it would be cool if (say) for lightweight jobs we had the option to choose whether to commit or not.

DBMS_JOB – watching for failures

I had a friend point this one out to me recently. They use DBMS_JOB to provide some “fire and forget” style functionality for users, and in their case the jobs are “best effort”: if they fail, it is not a big deal.

So whilst this may sound counter-intuitive, if you rely on jobs submitted via DBMS_JOB being allowed to fail quietly, then please read on.

By default, if a job fails 16 times in a row, it will be marked as broken by the database. Here’s a simple example of that, where the anonymous block will obviously fail because we are dividing by zero each time. I’ll set the job to run every 5 seconds, so that within a couple of minutes we’ll have 16 failures. First, I’ll run this on 11.2.0.4.


SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4 - 64bit Production

SQL> declare
  2    j int;
  3  begin
  4    dbms_job.submit(j,'declare x int; begin x := 1/0; end;',sysdate,'sysdate+5/86400');
  5  end;
  6  /

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

[wait for a bit]

SQL> select job, what, failures from user_jobs;

       JOB WHAT                                                 FAILURES
---------- -------------------------------------------------- ----------
        23 declare x int; begin x := 1/0; end;                         8

[wait for a bit]

SQL> /

       JOB WHAT                                                 FAILURES B
---------- -------------------------------------------------- ---------- -
         3 declare x int; begin x := 1/0; end;                         9 N

SQL> /

       JOB WHAT                                                 FAILURES B
---------- -------------------------------------------------- ---------- -
         3 declare x int; begin x := 1/0; end;                        16 Y

SQL> /

       JOB WHAT                                                 FAILURES B
---------- -------------------------------------------------- ---------- -
         3 declare x int; begin x := 1/0; end;                        16 Y

You can see the “breaking” of the job in action here. We got to 16 failures, and the database decided “enough is enough” 🙂 and the job will no longer run until some sort of intervention is performed by an administrator.

Now I’ll run that demo again in 12.2


SQL> declare
  2    j int;
  3  begin
  4    dbms_job.submit(j,'declare x int; begin x := 1/0; end;',sysdate,'sysdate+5/86400');
  5  end;
  6  /

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

[wait for a bit]

SQL> select job, what, failures from user_jobs;

       JOB WHAT                                                 FAILURES
---------- -------------------------------------------------- ----------
        23 declare x int; begin x := 1/0; end;                         8

[wait for a bit]

SQL> /

       JOB WHAT                                                 FAILURES B
---------- -------------------------------------------------- ---------- -
         3 declare x int; begin x := 1/0; end;                        13 N

SQL> /

       JOB WHAT                                                 FAILURES B
---------- -------------------------------------------------- ---------- -
         3 declare x int; begin x := 1/0; end;                        19 N

SQL> /

       JOB WHAT                                                 FAILURES B
---------- -------------------------------------------------- ---------- -
         3 declare x int; begin x := 1/0; end;                        25 N

You can see that in 12.2 (and I’ve tested 18 and 19 as well) failures can continue past 16 unabated. If you’re being hit by this, patches are available, so please get in touch with Support. A quick workaround until you can apply the patches is to use another job to monitor the others. Here’s a small anonymous block that you could run in each container as SYSDBA, scheduled (say) every few minutes:


SQL> begin
  2    for i in ( select job
  3               from dba_jobs
  4               where  failures > 16
  5               and    job not in ( sys_context('userenv','fg_job_id'),sys_context('userenv','bg_job_id'))
  6               and    job not in ( select job from dba_jobs_running )
  7               )
  8    loop
  9      dbms_ijob.broken(i.job,true);
 10    end loop;
 11    commit;
 12  end;
 13  /

PL/SQL procedure successfully completed.
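
And if you want the database to run that watchdog for you, a scheduler job along these lines would do it – the job name and the 5-minute interval are just my choices, and I’ve condensed the block above into the job action:

begin
  dbms_scheduler.create_job(
    job_name        => 'WATCH_FOR_BROKEN_JOBS',
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'{begin
                            for i in ( select job from dba_jobs
                                       where  failures > 16
                                       and    job not in ( select job from dba_jobs_running ) )
                            loop
                              dbms_ijob.broken(i.job,true);
                            end loop;
                            commit;
                          end;}',
    start_date      => systimestamp,
    repeat_interval => 'freq=minutely; interval=5',
    enabled         => true);
end;
/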

Ideally, though, you probably want to be moving to DBMS_SCHEDULER where possible, since it has greater flexibility and control over error handling, amongst many other things.