In recent versions of the Oracle database, we’ve had the ability to convert between a System Change Number (SCN) and the approximate time to which that SCN pertains. These functions are unsurprisingly called SCN_TO_TIMESTAMP and TIMESTAMP_TO_SCN. The only potential misnomer here is that even though the functions are named for timestamps and return a TIMESTAMP datatype, on most platforms you will notice that the granularity doesn’t run down into fractions of seconds:
SQL> select scn_to_timestamp(14816563713652) from dual;
SCN_TO_TIMESTAMP(14816563713652)
---------------------------------------------------------
08-JUN-19 02.30.59.000000000 AM
This all looks great until you start poking around too far into the past, and you end up in territory like this:
SQL> select scn_to_timestamp(14816563693489) from dual;
select scn_to_timestamp(14816563693489) from dual
*
ERROR at line 1:
ORA-08181: specified number is not a valid system change number
ORA-06512: at "SYS.SCN_TO_TIMESTAMP", line 1
If I poke around in V$LOG_HISTORY, I can pretty easily confirm that SCN 14816563693489 was indeed a valid SCN for this database at some stage in the past, so the fact that we can encounter ORA-08181 suggests that there is a finite structure holding the mapping between SCNs and the times to which they pertain. And indeed there is. The table is called SYS.SMON_SCN_TIME:
SQL> desc sys.smon_scn_time
Name Null? Type
----------------------------------------- -------- --------------------
THREAD NUMBER
TIME_MP NUMBER
TIME_DP DATE
SCN_WRP NUMBER
SCN_BAS NUMBER
NUM_MAPPINGS NUMBER
TIM_SCN_MAP RAW(1200)
SCN NUMBER
ORIG_THREAD NUMBER
and in most instances, it will hold around a week’s worth of SCN to timestamp mapping information.
SQL> select min(time_dp), max(time_dp) from sys.smon_scn_time;
MIN(TIME_ MAX(TIME_
--------- ---------
07-JUN-19 13-JUN-19
SQL> select count(*) from sys.smon_scn_time;
COUNT(*)
----------
1603
When SMON_SCN_TIME first came into existence, the granularity of data in this table was one entry approximately every 5 minutes. Hence you could map an SCN to, at best, a 5-minute window. In more recent releases, this has been improved to a granularity of 3 seconds. But the table does not appear to reflect that evolution if you look at some sample data:
SQL> select scn, time_dp
2 from sys.smon_scn_time s
3 where rownum <= 10;
SCN TIME_DP
------------------ -------------------
14816565366476 11/06/2019 08:25:11
14816565366679 11/06/2019 08:30:38
14816565366808 11/06/2019 08:35:11
14816565367164 11/06/2019 08:40:44
14816565367291 11/06/2019 08:45:12
14816565367475 11/06/2019 08:50:32
14816565029366 10/06/2019 01:28:13
14816565029605 10/06/2019 01:33:40
14816565032515 10/06/2019 01:38:13
14816565032779 10/06/2019 01:43:40
It still has a row every 5 minutes to handle backward compatibility, and to drill down to the 3 second level, you are expected to use the functions SCN_TO_TIMESTAMP and TIMESTAMP_TO_SCN already mentioned.
But what if you wanted to store this SCN to timestamp mapping for longer? Activating Flashback Data Archive is one way of achieving this, but what if you just wanted to store the mapping forever without doing so? At first, it seems a simple task. I can simply get the lowest SCN from SMON_SCN_TIME:
SQL> select min(scn) scn from sys.SMON_SCN_TIME;
SCN
------------------
14816563714099
and then use a familiar CONNECT BY technique to roll through the SCNs from that point onward:
SQL> select 14816563714099+rownum scn,
2 scn_to_timestamp(14816563714099+rownum) ts
3 from ( select 1 from dual connect by level <= 50 );
SCN TS
------------------ -----------------------------------
14816563714100 08-JUN-19 02.41.53.000000000 AM
14816563714101 08-JUN-19 02.41.56.000000000 AM
14816563714102 08-JUN-19 02.41.59.000000000 AM
14816563714103 08-JUN-19 02.41.59.000000000 AM
14816563714104 08-JUN-19 02.41.59.000000000 AM
14816563714105 08-JUN-19 02.41.59.000000000 AM
14816563714106 08-JUN-19 02.41.59.000000000 AM
14816563714107 08-JUN-19 02.41.59.000000000 AM
14816563714108 08-JUN-19 02.41.59.000000000 AM
14816563714109 08-JUN-19 02.41.59.000000000 AM
14816563714110 08-JUN-19 02.41.59.000000000 AM
14816563714111 08-JUN-19 02.42.05.000000000 AM
14816563714112 08-JUN-19 02.42.08.000000000 AM
14816563714113 08-JUN-19 02.42.11.000000000 AM
14816563714114 08-JUN-19 02.42.14.000000000 AM
14816563714115 08-JUN-19 02.42.17.000000000 AM
14816563714116 08-JUN-19 02.42.20.000000000 AM
14816563714117 08-JUN-19 02.42.23.000000000 AM
14816563714118 08-JUN-19 02.42.26.000000000 AM
14816563714119 08-JUN-19 02.42.29.000000000 AM
14816563714120 08-JUN-19 02.42.32.000000000 AM
14816563714121 08-JUN-19 02.42.35.000000000 AM
14816563714122 08-JUN-19 02.42.38.000000000 AM
14816563714123 08-JUN-19 02.42.41.000000000 AM
14816563714124 08-JUN-19 02.42.44.000000000 AM
14816563714125 08-JUN-19 02.42.47.000000000 AM
14816563714126 08-JUN-19 02.42.50.000000000 AM
14816563714127 08-JUN-19 02.42.53.000000000 AM
14816563714128 08-JUN-19 02.42.56.000000000 AM
14816563714129 08-JUN-19 02.42.59.000000000 AM
14816563714130 08-JUN-19 02.42.59.000000000 AM
14816563714131 08-JUN-19 02.42.59.000000000 AM
14816563714132 08-JUN-19 02.42.59.000000000 AM
14816563714133 08-JUN-19 02.42.59.000000000 AM
...
...
But if we are going to store this data, we can start to see some things that can be improved.
Firstly, a lot of rows have the same timestamp. As mentioned, the granularity is only 3 seconds, and we can certainly burn through a lot of SCNs in a 3-second period. I can use some simple analytics to see if a timestamp matches the preceding one and filter it out if that is the case:
SQL> with
2 t as
3 ( select min(scn) lo from sys.smon_scn_time ),
4 all_scns as
5 ( select lo+rownum scn, scn_to_timestamp(lo+rownum) ts
6 from t
7 connect by level <= 100 ),
8 delta_scns as
9 ( select a.*, row_number() over ( partition by ts order by scn ) as rn
10 from all_scns a )
11 select * from delta_scns
12 where rn = 1;
SCN TS RN
------------------ ---------------------------------------- ----------
14816563714100 08-JUN-19 02.41.53.000000000 AM 1
14816563714101 08-JUN-19 02.41.56.000000000 AM 1
14816563714102 08-JUN-19 02.41.59.000000000 AM 1
14816563714111 08-JUN-19 02.42.05.000000000 AM 1
14816563714112 08-JUN-19 02.42.08.000000000 AM 1
14816563714113 08-JUN-19 02.42.11.000000000 AM 1
14816563714114 08-JUN-19 02.42.14.000000000 AM 1
14816563714115 08-JUN-19 02.42.17.000000000 AM 1
14816563714116 08-JUN-19 02.42.20.000000000 AM 1
14816563714117 08-JUN-19 02.42.23.000000000 AM 1
14816563714118 08-JUN-19 02.42.26.000000000 AM 1
14816563714119 08-JUN-19 02.42.29.000000000 AM 1
14816563714120 08-JUN-19 02.42.32.000000000 AM 1
14816563714121 08-JUN-19 02.42.35.000000000 AM 1
14816563714122 08-JUN-19 02.42.38.000000000 AM 1
14816563714123 08-JUN-19 02.42.41.000000000 AM 1
14816563714124 08-JUN-19 02.42.44.000000000 AM 1
14816563714125 08-JUN-19 02.42.47.000000000 AM 1
14816563714126 08-JUN-19 02.42.50.000000000 AM 1
14816563714127 08-JUN-19 02.42.53.000000000 AM 1
14816563714128 08-JUN-19 02.42.56.000000000 AM 1
14816563714129 08-JUN-19 02.42.59.000000000 AM 1
...
...
Secondly, we need to stop at a particular point. I can easily get the highest SCN from the table:
SQL> select max(scn) scn from sys.smon_scn_time;
SCN
------------------
14816565838367
but that is when things start to get problematic. Even for just this small set of data (just a few days), on my almost entirely idle database (it’s my laptop):
SQL> select max(scn) - 14816563714100 from sys.smon_scn_time;
MAX(SCN)-14816563714100
-----------------------
2124267
you can see that I’ve burned through over 2 million SCNs. When I put that back into my query:
SQL> with
2 t as
3 ( select min(scn) lo, max(scn) hi from sys.smon_scn_time ),
4 all_scns as
5 ( select lo+rownum scn, scn_to_timestamp(lo+rownum) ts
6 from t
7 connect by level <= hi-lo ),
8 delta_scns as
9 ( select a.*, row_number() over ( partition by ts order by scn ) as rn
10 from all_scns a )
11 select * from delta_scns
12 where rn = 1
it took a looonnnggg time to come back. This is understandable – it is millions of calls to extract a timestamp, followed by some heavy-duty analytics to remove duplicates. It has become a very expensive operation to perform.
So that got me thinking – how does the database do it? I started with – how does the database get down to 3 second granularity when there is still only a row every 5 minutes in SMON_SCN_TIME? This is where the TIM_SCN_MAP column comes into play.
SQL> select scn, num_mappings, tim_scn_map
2 from sys.smon_scn_time
3 where scn = 14816565797824;
SCN NUM_MAPPINGS TIM_SCN_MAP
------------------ ------------ -----------------------------------------------------------------------------
14816565797824 100 2F90015DC12324C0790D00003290015DC22324C0790D00003590015DC32324C0790D0000...
Scanning along the raw data and looking for patterns, I could see that “D0000” was repeated every 24 characters (or 12 bytes). A simple check confirmed that this repeat count appears to equate to the NUM_MAPPINGS column, so it is reasonable to assume that the raw data is a set of time mappings. Given that the maximum value for NUM_MAPPINGS is 100, this also fits the hypothesis, because 100 entries at 3 seconds granularity yields 5 minutes, which is how often we get a row in SMON_SCN_TIME:
SQL> select scn, num_mappings, length(tim_scn_map)/24
2 from sys.smon_scn_time
3 /
SCN NUM_MAPPINGS LENGTH(TIM_SCN_MAP)/24
------------------ ------------ ----------------------
14816565366476 100 100
14816565366679 83 83
14816565366808 100 100
14816565367164 82 82
14816565367291 100 100
14816565367475 86 86
14816565029366 100 100
...
So 24 characters (12 bytes) is probably one occurrence of an SCN to timestamp mapping. The task now is to dig that information out of the raw encoding. I’ll grab the first 12 bytes from the row above, 2F90015DC12324C0790D0000, and see if I can align them to other values in the table. The first thing I noticed is that ‘790D0000’ rarely changes across the whole table, so I converted that to decimal to see if I could match it to anything. We have to swap the bytes first, and then I got:
SQL> select to_number('0D79','xxxx') from dual;
TO_NUMBER('0D79','XXXX')
------------------------
3449
That matched the SCN_WRP column in SMON_SCN_TIME. Grabbing the next 4 bytes gives
SQL> select to_number('C02423C1','xxxxxxxx') from dual;
TO_NUMBER('C02423C1','XXXXXXXX')
--------------------------------
3223593921
That matched the SCN_BAS column in SMON_SCN_TIME. So SCN_WRP and SCN_BAS are enough to form a complete SCN number, which would mean that the remaining 4 bytes should be the time information.
SQL> select to_number('5D01902F','xxxxxxxx') from dual;
TO_NUMBER('5D01902F','XXXXXXXX')
--------------------------------
1560383535
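Before trying to make sense of that number, the byte-swapped field extraction can be sanity-checked outside the database. Here is a minimal Python sketch, assuming the 12-byte layout hypothesised above (4 bytes of time, 4 bytes of SCN_BAS, 4 bytes of SCN_WRP, each byte-swapped); this layout is deduced from the data, not documented:

```python
# Decode one 12-byte TIM_SCN_MAP entry under the assumed layout:
# time(4) + scn_bas(4) + scn_wrp(4), each stored byte-swapped.
def le32(hex8):
    # Reverse the byte order of 8 hex characters, i.e. read the
    # 4 bytes as a little-endian 32-bit integer.
    return int.from_bytes(bytes.fromhex(hex8), "little")

entry = "2F90015DC12324C0790D0000"   # first 12 bytes of the row above

tim     = le32(entry[0:8])    # 1560383535 (the mystery number)
scn_bas = le32(entry[8:16])   # 3223593921 (matches SCN_BAS)
scn_wrp = le32(entry[16:24])  # 3449       (matches SCN_WRP)
```

All three values line up with what we extracted via TO_NUMBER above.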
Obviously that does not look like a date or a time, but this is an internal format seen in various places around the Oracle database, for example in the redo logs. To explain where it comes from, it is better to approach it from the other direction (starting with the time).
Let’s say I wanted a unique number for a clock time of just hours and minutes (e.g. 09:37). I could do something like:
unique val = hours * 100 + minutes
That would work because I know that minutes is capped at 59, so there is no chance of an overlap by using a multiplier of 100. So here is how that concept can be taken to its full extent to get a unique number for a full date and time. I’ll pick a date/time of 2019-6-12 23:52:15 as an example
- Take the year, 2019.
- We don’t need anything before Oracle was “born”, so subtract 1976 = 43
- For the months, multiply by 13 (anything more than 12 months) = 559
- Because we’re IT nerds, we’ll start months at zero (0=January), so we add 5 for June = 564
- For the day in a month, multiply by 32 (anything more than 31) = 18048
- Because we’re IT nerds, we’ll start the day of month at zero (0 = the 1st), so we add 11 for the 12th = 18059
- For the hours, multiply by 24 (because hours never exceed 23) = 433416
- No need for nerdy stuff here, because hours are already normalised at 0-23, so we add 23 = 433439
- For the minutes, multiply by 60 (minutes are 0 to 59) = 26006340
- Then add the minutes = 26006392
- Same for the seconds, multiply by 60 = 1560383520
- and finally we add the seconds = 1560383535
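The bullet points above describe a mixed-radix encoding, which can be condensed into a short Python function (a cross-check of the arithmetic, not an Oracle-supplied routine):

```python
def time_to_num(yr, mon, dy, hrs, mins, secs):
    # Mixed-radix encoding: each step multiplies by a number at least
    # as large as the range of the component being appended.
    t = yr - 1976            # years since Oracle was "born"
    t = t * 13 + (mon - 1)   # months, zero-based
    t = t * 32 + (dy - 1)    # day of month, zero-based
    t = t * 24 + hrs
    t = t * 60 + mins
    t = t * 60 + secs
    return t

print(time_to_num(2019, 6, 12, 23, 52, 15))   # 1560383535
```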
And voila! You can see that it matches the decimal expansion of the 4 bytes of 5D01902F above. Now that we know how the time-to-number conversion is done, a little PL/SQL lets us do the reverse (number to time):
SQL> declare
2 t int := 1560383535;
3 secs int;
4 mins int;
5 hrs int;
6 dy int;
7 mon int;
8 yr int;
9 begin
10 secs := mod(t,60);
11 t := t - secs;
12 t := floor(t / 60);
13 mins := mod(t,60);
14 t := t - mins;
15 t := floor(t / 60);
16 hrs := mod(t,24);
17 t := t - hrs;
18 t := floor(t / 24);
19 dy := mod(t,32);
20 t := t - dy;
21 t := floor(t / 32);
22 dy := dy + 1;
23 mon := mod(t,13);
24 t := t - mon;
25 mon := mon + 1;
26 t := floor(t / 13);
27 yr := t + 1976;
28 dbms_output.put_line(yr||'-'||mon||'-'||dy||' '||hrs||':'||mins||':'||secs);
29 end;
30 /
2019-6-12 23:52:15
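The same reversal can be expressed in Python as a cross-check of the PL/SQL above:

```python
def num_to_time(t):
    # Peel the components off in the reverse of the order
    # in which they were encoded.
    secs = t % 60; t //= 60
    mins = t % 60; t //= 60
    hrs  = t % 24; t //= 24
    dy   = t % 32 + 1; t //= 32   # day of month was stored zero-based
    mon  = t % 13 + 1; t //= 13   # month was stored zero-based
    yr   = t + 1976
    return (yr, mon, dy, hrs, mins, secs)

print(num_to_time(1560383535))   # (2019, 6, 12, 23, 52, 15)
```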
To make all of this simpler and more modular, I’ll build a couple of functions to bring together all the bits and pieces we’ve learnt. First, a function to byte-swap a raw hex string and convert it to an SCN:
SQL> create or replace
2 function raw_to_scn(p_scn varchar2) return number is
3 begin
4 return to_number(
5 substr(p_scn,7,2)||
6 substr(p_scn,5,2)||
7 substr(p_scn,3,2)||
8 substr(p_scn,1,2),'xxxxxxxx');
9 end;
10 /
Function created.
and then a function that extends the anonymous block above to byte-swap a raw hex string and return a timestamp (well, a date in reality):
SQL> create or replace
2 function raw_to_date(p_time varchar2) return date is
3 t int := to_number(
4 substr(p_time,7,2)||
5 substr(p_time,5,2)||
6 substr(p_time,3,2)||
7 substr(p_time,1,2),'xxxxxxxx');
8 secs int;
9 mins int;
10 hrs int;
11 dy int;
12 mon int;
13 yr int;
14 begin
15 secs := mod(t,60);
16 t := t - secs;
17 t := floor(t / 60);
18 mins := mod(t,60);
19 t := t - mins;
20 t := floor(t / 60);
21 hrs := mod(t,24);
22 t := t - hrs;
23 t := floor(t / 24);
24 dy := mod(t,32);
25 t := t - dy;
26 t := floor(t / 32);
27 dy := dy + 1;
28 mon := mod(t,13);
29 t := t - mon;
30 mon := mon + 1;
31 t := floor(t / 13);
32 yr := t + 1976;
33 return to_date(yr||'-'||mon||'-'||dy||' '||hrs||':'||mins||':'||secs,'yyyy-mm-dd hh24:mi:ss');
34 end;
35 /
Function created.
With these in place, I can create a couple of types to serve as return datatypes for a pipelined function:
SQL> create or replace
2 type scn_dte as object
3 ( scn int, dte date );
4 /
Type created.
SQL>
SQL> create or replace
2 type scn_dte_nt as table of scn_dte
3 /
Type created.
and here is a function that takes the important components from SMON_SCN_TIME, namely SCN_WRP, SCN_BAS, NUM_MAPPINGS and the raw data in TIM_SCN_MAP, and spits out the 3-second interval data rather than just one row per 5 minutes:
SQL> create or replace
2 function full_smon_scn_timestamp(p_start_scn int default 0) return scn_dte_nt pipelined as
3 l_map varchar2(2400);
4 begin
5 for i in ( select * from sys.smon_scn_time
6 where scn > p_start_scn
7 order by scn
8 )
9 loop
10 pipe row ( scn_dte(i.scn_wrp * 4294967296 + i.scn_bas,i.time_dp));
11 l_map := i.tim_scn_map;
12 for j in 1 .. i.num_mappings
13 loop
14 pipe row (
15 scn_dte(
16 i.scn_wrp * 4294967296 + raw_to_scn(substr(l_map,9,8)),
17 raw_to_date(substr(l_map,1,8))
18 )
19 );
20 l_map := substr(l_map,25);
21 end loop;
22 end loop;
23 end;
24 /
Function created.
Let’s give that function a run to see what comes out:
SQL> select * from full_smon_scn_timestamp()
2 where rownum <= 20;
SCN DTE
------------------ -------------------
14816563726597 07/06/2019 21:50:43
14816563726598 07/06/2019 21:50:46
14816563726599 07/06/2019 21:50:49
14816563726600 07/06/2019 21:50:52
14816563726601 07/06/2019 21:50:55
14816563726602 07/06/2019 21:50:58
14816563726603 07/06/2019 21:51:01
14816563726604 07/06/2019 21:51:04
14816563726605 07/06/2019 21:51:07
14816563726606 07/06/2019 21:51:10
14816563726607 07/06/2019 21:51:13
14816563726608 07/06/2019 21:51:16
14816563726609 07/06/2019 21:51:19
14816563726610 07/06/2019 21:51:22
14816563726611 07/06/2019 21:51:25
14816563726612 07/06/2019 21:51:28
14816563726633 07/06/2019 21:51:34
14816563726634 07/06/2019 21:51:37
14816563726635 07/06/2019 21:51:40
14816563726636 07/06/2019 21:51:43
Each row in SMON_SCN_TIME contains a starting value for the SCN plus all of the mappings, so the total number of 3-second intervals is:
SQL> select sum(num_mappings+1) from sys.smon_scn_time;
SUM(NUM_MAPPINGS+1)
-------------------
145262
and I can compare that to the total number of rows returned from my function:
SQL> select count(*) from full_smon_scn_timestamp();
COUNT(*)
----------
145262
So there you have a way to grab all of those 3-second granularity entries in SMON_SCN_TIME without having to call SCN_TO_TIMESTAMP millions of times. You could run this function (say) daily, passing in a starting SCN equal to the highest value you have retrieved so far, to build up your own custom SMON_SCN_TIME variant that you can keep for as long as you want.
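If you prefer to do the decoding client-side, the whole scheme can be sketched in Python: fetch SCN_WRP and the hex of TIM_SCN_MAP from SYS.SMON_SCN_TIME, then expand each 12-byte entry into an (SCN, timestamp) pair. This relies on the same deduced (and unsupported) layout, and note that the pipelined function also emits one extra row per SMON_SCN_TIME row for its own SCN and TIME_DP, which this sketch does not:

```python
import datetime

def decode_tim_scn_map(hex_map, scn_wrp):
    # Each 12-byte entry is time(4) + scn_bas(4) + scn_wrp(4),
    # all byte-swapped; that is 24 hex characters per entry.
    le32 = lambda h: int.from_bytes(bytes.fromhex(h), "little")
    rows = []
    for i in range(0, len(hex_map), 24):
        entry = hex_map[i:i + 24]
        # Full SCN = wrap * 2^32 + base, as in the PL/SQL function.
        scn = scn_wrp * 2**32 + le32(entry[8:16])
        # Reverse the mixed-radix time encoding.
        t = le32(entry[0:8])
        secs = t % 60; t //= 60
        mins = t % 60; t //= 60
        hrs  = t % 24; t //= 24
        dy   = t % 32 + 1; t //= 32
        mon  = t % 13 + 1; t //= 13
        rows.append((scn, datetime.datetime(t + 1976, mon, dy, hrs, mins, secs)))
    return rows
```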
Of course, poking around in raw fields is totally unsupported, so don’t come crying to me if you decide to use this for anything more than exploration and one day it all blows up in your face 🙂