A few months back I gave a talk on the importance of good physical design if you need to meet extreme levels of performance in your Oracle database. Topics included index selection, block contention, partitioning, and the issue of sequence contention.

Sequence caching

By default, sequences are cached in the Oracle database in order to improve their performance. When people hear the term “cache” in the context of sequences, they can mistakenly picture a “bucket” of numbers loaded into the SGA that are then fed out to database client processes as they request them. This is not how the caching implementation works. The cache setting is better thought of as a measure of how often we synchronise the data dictionary definition of the sequence with what is happening in memory.

For example, if you create a sequence with the default cache of 20, this means that every time 20 sequence values are consumed by clients, we will return to the data dictionary and update the internal SEQ$ table with details of the sequence usage. When you get the first sequence value (typically “1”), the data dictionary will show a sequence LAST_NUMBER of 21, indicating that values 1 to 20 are served from the SGA, and that when those are exhausted (and someone gets the value of 21), we will return to SEQ$ and update the LAST_NUMBER to become 41, and so forth. In this way, if the database gets bounced, or the sequence values are flushed out of the SGA, we can go back to the data dictionary table to work out where to start the next round of sequence values. This is also why there is always the possibility of gaps in sequence number generation.
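
You can see this for yourself with a couple of quick queries. The sketch below assumes a freshly created sequence (here called DEMO_SEQ) with the default cache of 20; after the very first NEXTVAL call, the dictionary already records the end of the first cached range.


SQL> create sequence demo_seq;

Sequence created.

SQL> select demo_seq.nextval from dual;

   NEXTVAL
----------
         1

SQL> select last_number
  2  from   user_sequences
  3  where  sequence_name = 'DEMO_SEQ';

LAST_NUMBER
-----------
         21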

Legacy Defaults

When sequences first arrived in the Oracle database, 50MHz was a fast speed for a database server CPU. Systems were measured in tens of transactions per second, so a sequence cache value of 20 was typically little cause for concern. Of course, modern systems are measured in thousands or even millions of transactions per second. Consider what that means for sequences. If I attempt to consume 1,000,000 sequence values per second and the sequence is defined with the default cache of 20, I now need to update the SEQ$ table in the data dictionary 50,000 times per second! That defeats the whole idea of caching to reduce data dictionary overheads.

We can observe this with a simple benchmark. I will create a sequence with a cache size of 10, and then run three concurrent loader sessions, each consuming 1,000,000 sequence values to insert into a target table.


SQL> create sequence seq cache 10;

Sequence created.

SQL> create table t ( x int );

Table created.

--
-- There are THREE concurrent sessions running this block
--
SQL> begin
  2   for i in 1 .. 1000000 loop
  3     insert into t values (seq.nextval);
  4   end loop;
  5   commit;
  6  end;
  7  /

PL/SQL procedure successfully completed.

Elapsed: 00:01:33.25

By checking V$SQL whilst the blocks run, we can see the regular updates to the internal SEQ$ table, ending up at 300,000 executions: one execution for every 10 sequence numbers consumed.


SQL> select executions, sql_text from v$sql
  2  where sql_text like 'update%seq$%';

EXECUTIONS SQL_TEXT
---------- ----------------------------------------------------------------
         1 update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,
           order$=:6,cache=:7,highwater=:8,audit$=:9,flags=:10 where obj#=:
           1


SQL> /

EXECUTIONS SQL_TEXT
---------- ----------------------------------------------------------------
     10546 update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,
           order$=:6,cache=:7,highwater=:8,audit$=:9,flags=:10 where obj#=:
           1


...
...

SQL> /

EXECUTIONS SQL_TEXT
---------- ----------------------------------------------------------------
    172992 update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,
           order$=:6,cache=:7,highwater=:8,audit$=:9,flags=:10 where obj#=:
           1


SQL> /

EXECUTIONS SQL_TEXT
---------- ----------------------------------------------------------------
    300001 update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,
           order$=:6,cache=:7,highwater=:8,audit$=:9,flags=:10 where obj#=:
           1


The solution to this is trivial – simply bump up the cache value to something that reflects the rate of consumption. As mentioned, the cache size has no bearing on memory consumption in the SGA. If I need 1,000,000 sequence values per second, then a cache size in the millions is probably required.
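
Raising the cache on an existing sequence is a one-line change, with no impact on the values already handed out; for example:


SQL> alter sequence seq cache 1000000;

Sequence altered.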

But here is the most common result I see when running a query against customer databases:



SQL> select distinct cache_size
  2  from   dba_sequences;

CACHE_SIZE
----------
        20
         0
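
(As an aside, if you want to track down the offenders in your own database, a variation along the following lines is a reasonable starting point. The threshold of 20 is just an assumption to catch default and nocache sequences, so adjust it to suit your workload.)


SQL> select sequence_owner, sequence_name, cache_size
  2  from   dba_sequences
  3  where  cache_size <= 20
  4  and    sequence_owner not in ('SYS','SYSTEM');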

People either go with the defaults or (even worse) opt for no caching at all, under the fallacy that this will avoid gaps or save memory, and then end up with performance issues when their applications try to scale to production-size workloads. Take note of the elapsed time in the demo above. Even with all of those dictionary updates, we still completed the task in around 90 seconds, representing over 30,000 inserts per second. But can we do better?

If you won't help yourself

It would be rude to say “Well, we just gave up asking people to increase the cache size” 🙂 so let's go with “We came up with some optimizations” in the latest release of the Oracle Database. I'm going to repeat the demo above, this time on Oracle 19c Release Update 12.


--
-- As before, there are THREE concurrent sessions running this block
--

SQL> begin
  2   for i in 1 .. 1000000 loop
  3     insert into t values (seq.nextval);
  4   end loop;
  5   commit;
  6  end;
  7  .
SQL>
SQL> /

PL/SQL procedure successfully completed.

Elapsed: 00:00:14.07

EXECUTIONS SQL_TEXT
---------- ----------------------------------------------------------------
         9 update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,
           order$=:6,cache=:7,highwater=:8,audit$=:9,flags=:10 where obj#=:
           1

The cache is still defined as 10, but notice the dramatic reduction in dictionary updates. As the consumption rate of the sequence increases, the database automatically detects this and internally adjusts the rate at which it updates the dictionary, which is equivalent to the functionality you would see with a larger cache.

Also notice the enormous performance boost that came as a consequence. We have dropped from 90 seconds down to around 15 seconds, a 6-fold performance improvement with no application or database changes!
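
If you want to observe the adaptive behaviour directly, one option (assuming you have privileges to query the SYS-owned SEQ$ table) is to poll the HIGHWATER column from the update statement we saw earlier while the load is running. Rather than advancing in steps of 10, it now leaps forward in much larger jumps. A sketch:


SQL> select s.highwater
  2  from   sys.seq$ s, user_objects o
  3  where  o.object_name = 'SEQ'
  4  and    s.obj# = o.object_id;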

We can observe approximately how large the caching got by running the demo again, this time with a commit for every row, and then killing the database instance mid-run. The dictionary will not have a chance to be updated, so the gap between the last used sequence value and the dictionary gives a measure of the caching (or, more precisely, the mechanism that is equivalent to a larger cache) that was taking place.


SQL> begin
  2   for i in 1 .. 1000000 loop
  3     insert into t values (seq.nextval);
  4     commit write nowait batch;
  5   end loop;
  6   commit;
  7  end;
  8  .
SQL>
SQL> /
begin
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 14016
Session ID: 1350 Serial number: 2105


Elapsed: 00:00:09.37

SQL> conn / as sysdba
Connected.
SQL> shutdown abort
ORACLE instance shut down.

--
-- AFTER RESTART
--
SQL> select max(x) from t;

    MAX(X)
----------
    537208
    
SQL> select last_number
  2  from   user_sequences
  3  where  sequence_name = 'SEQ';

LAST_NUMBER
-----------
     699193
     


In this case, the “caching” (i.e., the dictionary update frequency) looks like it was automatically pushed out to somewhere in the order of 100,000: the gap between the dictionary LAST_NUMBER and the last value actually used is a little over 160,000.

So there you go … we killed sequence contention. Dead cold. An ex-parrot, so to speak 🙂. More details are in the docs.

TL;DR: If your sequence caches aren't sized to match their usage, we'll take care of it for you automatically.

6 responses to “We killed sequence contention!”

  1. […] McDonald has just published a blog note about a tweak to sequences that appeared in recent versions of Oracle (19.10 – see tweet from […]

  2. Great, thanks

  3. Thank you.
    Will this new feature handle sequences defined with NOCACHE in the same manner?

    With this new functionality, it kind of makes one wonder why anyone would choose NOCACHE? It does not guarantee no gaps, and Oracle seems to be reinforcing here that more cache is a “good” thing.

    1. nocache sequences remain entirely nocached

  4. Andrew Markiewicz

    Good information. Thanks Connor.
    Do you happen to know if this applies to SE2 as well?
    I was reviewing a trace file and the seq$ update was near the top. Searching for information, I found your article.
    The sequence is currently caching at 20, and the executions to update seq$ are 37,696, which is close to an update every 20 rows.
    Was hoping to see this automatic caching increase but did not witness that.
    “Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 – Production
    Version 19.18.0.0.0”

    Probably will increase the cache size but if Oracle can do the needful that would be handy.
    Thanks

    1. I just checked 19.17 SE and all of the parameters are there, so I assume it will apply there as well
