Continuous Delivery – Moving to SECUREFILE

You’ve been Googling and you’ve seen articles (like this one: http://www.oracle.com/technetwork/articles/sql/11g-securefiles-084075.html) and you’re pretty pumped about using SECUREFILE features. You’ve got lots of existing CLOB data, and moving to SECUREFILE is going to make your life much easier. You’re also excited that none of your code has to change – you just alter the existing CLOB columns to be stored as SECUREFILE and you’ll have set yourself up for all sorts of feature goodness!

But how do we do it in a continuous delivery (CD) model? Because moving CLOBs sounds like downtime, doesn’t it?

And by default, that’s exactly what it will be. Let’s explore that with a faux application that uses CLOBs.

We’ll create an “application” which consists of a table, a sequence, a trigger* to populate the primary key from the sequence, and a procedure which is our application interface.

(* – yes, in 12c we can skip the trigger and just use the new DEFAULT facilities, but I’m keeping the trigger because that’s a very common pattern in the industry as people move to 12c)
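For reference, the trigger-free 12c version would look something like this – just a sketch, and it assumes the sequence is created before the table:

-- 12c sketch: default the key straight from the sequence, no trigger
-- (T1_SEQ must already exist in this variant)
create table T1 (
  x int default T1_SEQ.nextval,
  y clob,
  t timestamp default systimestamp,
  constraint T1_PK primary key (x) );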

SQL> create table T1 ( x int, y clob, t timestamp default systimestamp, constraint T1_PK primary key (x) ) ;

Table created.

SQL> create sequence T1_SEQ;

Sequence created.

SQL> create or replace trigger T1_TRG
  2  before insert on T1
  3  for each row
  4  begin
  5    :new.x := t1_seq.nextval;
  6  end;
  7  /

Trigger created.

SQL> create or replace
  2  procedure T1_LOGGER is
  3    l_long varchar2(32767) := rpad('x',32000,'x');
  4  begin
  5    for i in 1 .. 1000 loop
  6      insert into t1 (y ) values (l_long);
  7      commit;
  8      dbms_lock.sleep(0.1);
  9    end loop;
 10  end;
 11  /

Procedure created.

And there we have our application. The T1_LOGGER procedure will insert approximately 10 rows per second, each one containing a 32-kilobyte CLOB. We’ll start our application running:

SQL> --
SQL> -- This represents your app, busily inserting clobs
SQL> --
SQL> exec t1_logger;

And now here is where our continuous delivery comes in. I want to change to using SECUREFILE (and convert the existing CLOBs as well) without interrupting user services. So I log in to a second session and issue:

SQL> alter table T1 move lob ( y) store as securefile;

Table altered.
Elapsed: 00:00:46.11

Well, that all looks fine, until I return to the session where my application is running:

SQL> exec t1_logger;
BEGIN t1_logger; END;

*
ERROR at line 1:
ORA-01502: index 'SCOTT.T1_PK' or partition of such index is in unusable state
ORA-06512: at "SCOTT.T1_LOGGER", line 5
ORA-06512: at line 1

Ker-splat! I’ve broken my app, and it’s dead until I rebuild that index, which could take hours. Time to update my CV 🙂 And what’s worse: even if our application had not had a unique index, we would still have created all sorts of problems. Notice that our “alter table move” took 46 seconds. If we look at the insertion timestamps for our LOBs, we see:

SQL> select max(t) from t1;

MAX(T)
---------------------------------------------------------------------------
03-JUL-15 10.26.04.644000 AM

SQL> select systimestamp from dual;

SYSTIMESTAMP
---------------------------------------------------------------------------
03-JUL-15 10.26.51.748000 AM +08:00

We ended up with a “freeze” of 46 seconds in our application whilst the table was locked and moved. If this had been (say) a service-based interface, there’s a very good chance that our services would have reported timeouts and all sorts of other errors back to the calling environment.
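And don’t forget that unusable index from earlier – before the application could resume, we would be hunting it down and rebuilding it, along these lines (a sketch):

-- find any indexes left unusable by the move
select index_name, status
from   user_indexes
where  table_name = 'T1';

-- and then the rebuild we are now forced into (potentially hours)
alter index T1_PK rebuild online;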

So we take a squizz around the Oracle documentation and find this:

[Screenshot of the Oracle documentation: the ALTER TABLE … MOVE syntax, including an ONLINE option]

The “ONLINE” option looks promising. Let’s give that a whirl…

SQL> alter table T1 move lob ( y) store as securefile online;
alter table T1 move lob ( y) store as securefile online
                                                 *
ERROR at line 1:
ORA-00906: missing left parenthesis


SQL> alter table T1 move online lob ( y) store as securefile;
alter table T1 move online lob ( y) store as securefile
*
ERROR at line 1:
ORA-01735: invalid ALTER TABLE option

We don’t get any joy there, since a closer look at the documentation tells us that the ONLINE option is for index-organised tables, so it doesn’t apply here.
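For the curious, an index-organized table is the kind of thing that ONLINE clause is for – a quick sketch, separate from our demo (IOT_DEMO is just an illustrative name):

-- MOVE ONLINE applies to index-organized tables
create table IOT_DEMO ( x int primary key, y varchar2(30) )
organization index;

alter table IOT_DEMO move online;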

So it seems like we’re stuck… Enter DBMS_REDEFINITION. This is a very cool utility for redefining tables without making them inaccessible to calling applications. Let’s go through the same process, this time using DBMS_REDEFINITION. We will build and run our application from scratch.
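(The transcript below resumes after a cleanup step that wasn’t captured; I’m assuming a reset along these lines:)

-- assumed reset, not shown in the original transcript
drop table T1 purge;
drop sequence T1_SEQ;

create table T1 ( x int, y clob, t timestamp default systimestamp,
  constraint T1_PK primary key (x) );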

SQL> create sequence T1_SEQ;

Sequence created.

SQL> create or replace trigger T1_TRG
  2  before insert on T1
  3  for each row
  4  begin
  5    :new.x := t1_seq.nextval;
  6  end;
  7  /

Trigger created.

SQL> create or replace
  2  procedure T1_logger is
  3    l_long varchar2(32767) := rpad('x',32000,'x');
  4  begin
  5    for i in 1 .. 1000 loop
  6      insert into t1 (y ) values (l_long);
  7      commit;
  8      dbms_lock.sleep(0.1);
  9    end loop;
 10  end;
 11  /

Procedure created.

SQL> --
SQL> -- This represents your app, busily inserting clobs
SQL> --
SQL> exec t1_logger;

Now, whilst our application is running, we create a second session and redefine our table online, using a template table to inform the database what structure we want.

SQL> create table T2 ( x int , y clob, t timestamp default systimestamp  ) lob ( y) store as securefile;

Table created.

SQL> EXEC DBMS_REDEFINITION.can_redef_table(user, 'T1');

PL/SQL procedure successfully completed.

SQL> EXEC DBMS_REDEFINITION.start_redef_table(user, 'T1', 'T2');

PL/SQL procedure successfully completed.
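As an aside, if we had been renaming or transforming columns we would pass a column-mapping string as the fourth parameter of start_redef_table – something like this hypothetical variant:

-- hypothetical: explicit column mapping (we don't need it here,
-- since T1 and T2 have identical column names)
EXEC DBMS_REDEFINITION.start_redef_table(user, 'T1', 'T2', 'x x, y y, t t');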

SQL> SET SERVEROUTPUT ON
SQL> DECLARE
  2    l_num_errors PLS_INTEGER;
  3  BEGIN
  4    DBMS_REDEFINITION.copy_table_dependents(
  5      uname             => user,
  6      orig_table        => 'T1',
  7      int_table         => 'T2',
  8      copy_indexes      => DBMS_REDEFINITION.cons_orig_params,
  9      copy_triggers     => TRUE,  -- Default
 10      copy_constraints  => TRUE,  -- Default
 11      copy_privileges   => TRUE,  -- Default
 12      ignore_errors     => FALSE, -- Default
 13      num_errors        => l_num_errors);
 14    DBMS_OUTPUT.put_line('l_num_errors=' || l_num_errors);
 15  END;
 16  /
l_num_errors=0

PL/SQL procedure successfully completed.

SQL> EXEC DBMS_REDEFINITION.sync_interim_table(user, 'T1', 'T2');

PL/SQL procedure successfully completed.

SQL> EXEC DBMS_REDEFINITION.finish_redef_table(user, 'T1', 'T2');

PL/SQL procedure successfully completed.
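Before we declare victory, a quick check (not in the original run) confirms the LOB really is now a SECUREFILE:

-- USER_LOBS reports YES/NO in its SECUREFILE column
select table_name, column_name, securefile
from   user_lobs
where  table_name = 'T1';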

And we’re done. We flick back to our original application session and check for errors:

SQL> exec t1_logger;

PL/SQL procedure successfully completed.

It ran to completion without any problems. We have moved to SECUREFILE without any disruption to service. Very cool!

But although nothing crashed, maybe there was a blockage of service to the database? We can check that by looking at the maximum time between insertions in our application:

SQL> select max(delta)
  2  from
  3   ( select t - lag(t) over ( order by t) as delta from t1 );

MAX(DELTA)
---------------------------------------------------------------------------
+000000000 00:00:00.250000

We were inserting rows every 0.1 seconds, so a maximum gap of 0.25 seconds represents a tiny overhead that is unlikely to be noticed.

So when you need to make small structural changes to your database tables, have a think about DBMS_REDEFINITION. In a future blog post, I’ll talk about why you might want to redefine your tables even when you are not changing their structure at all!

5 comments

  1. Hi Connor. Nearly a “decade” since that post – but why/when do you HAVE to use an interim table? Everybody does these (to me unnecessary) steps – start (check) redef / sync / stop / drop etc. I have created an SR for this question but am awaiting a (good) answer. I just did your test and nothing broke – simply by issuing:

     BEGIN
       DBMS_REDEFINITION.REDEF_TABLE (
         uname                  => 'KEV',
         tname                  => 'T1',
         table_compression_type => 'COMPRESS FOR OLTP',
         table_part_tablespace  => 'BIGTEST',
         index_tablespace       => 'BIGTEST',
         lob_tablespace         => 'BIGTEST',
         lob_store_as           => 'SECUREFILE');
     END;
     /

     – to a new tablespace also, just for fun.

      1. Old school is cool 🙂 – But do you know the answer? Is it only perhaps when there are mappings / new column definitions / renamings and such that we need to create the interim table? Kind regards,
        Kevin

        1. REDEF_TABLE came later in the piece, so I imagine people (like myself) have plenty of existing scripts which use can/start/sync/finish, and we just start with those whenever we’re doing a redef.
