Interval partitioning just got better

Interval partitioning was a great feature when it arrived in version 11, because we no longer had to worry so much about ensuring partitions were available for new data when it arrived.  Partitions would just be created on the fly as required.  I’m not going to talk about interval partitioning in detail because there’s plenty of good content out there already.  But one key element of interval partitioning is that the intervals have to start from somewhere, which is why you always have to define the table with at least one range partition.

 


 

So what if I want to drop the partition that is the conceptual “starting point”?  Well…I get problems:


SQL> create table sales
  2  ( tstamp       timestamp    not null,
  3    empno        number(10)   not null,
  4    ename        varchar2(10) not null,
  5    deptno       varchar2(10) not null
  6  )
  7  partition by range (tstamp)
  8  interval( numtoyminterval(1,'YEAR'))
  9  (
 10    partition p00 values less than (timestamp '2010-01-01 00:00:00')
 11  );

Table created.

SQL>
SQL> insert into sales values ( timestamp '2011-01-01 00:00:00' , 0,0,0);

1 row created.

SQL> insert into sales values ( timestamp '2012-01-01 00:00:00' , 0,0,0);

1 row created.

SQL> insert into sales values ( timestamp '2013-01-01 00:00:00' , 0,0,0);

1 row created.

SQL>
SQL> col high_value format a36
SQL> col pname format a10
SQL>
SQL> select
  2    partition_name pname,
  3    partition_position pos,
  4    high_value,
  5    interval
  6  from   user_tab_partitions
  7  where  table_name = 'SALES';

PNAME             POS HIGH_VALUE                           INT
---------- ---------- ------------------------------------ ---
P00                 1 TIMESTAMP' 2010-01-01 00:00:00'      NO
SYS_P121            2 TIMESTAMP' 2012-01-01 00:00:00'      YES
SYS_P122            3 TIMESTAMP' 2013-01-01 00:00:00'      YES
SYS_P123            4 TIMESTAMP' 2014-01-01 00:00:00'      YES

4 rows selected.

SQL>
SQL> alter table sales drop partition p00;
alter table sales drop partition p00
                                 *
ERROR at line 1:
ORA-14758: Last partition in the range section cannot be dropped

There is a simple workaround for that issue. If you re-issue the INTERVAL definition for the table, all of the existing partitions will be “upgraded” (or should it be downgraded…I dunno) to being range partitions. Hence we will have “moved” the starting point, and can then drop the problem partition.


SQL> alter table sales set interval( numtoyminterval(1,'YEAR'));

Table altered.

SQL> select
  2    partition_name pname,
  3    partition_position pos,
  4    high_value,
  5    interval
  6  from   user_tab_partitions
  7  where  table_name = 'SALES';

PNAME             POS HIGH_VALUE                           INT
---------- ---------- ------------------------------------ ---
P00                 1 TIMESTAMP' 2010-01-01 00:00:00'      NO
SYS_P121            2 TIMESTAMP' 2012-01-01 00:00:00'      NO
SYS_P122            3 TIMESTAMP' 2013-01-01 00:00:00'      NO
SYS_P123            4 TIMESTAMP' 2014-01-01 00:00:00'      NO

4 rows selected.

SQL> alter table sales drop partition p00;

Table altered.

So that’s all pretty easy, but it means you would need to run this “re-interval” command all the time to be sure that you will always be able to drop any partition you want.
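
If you wanted to automate that, here is a rough sketch (using the INTERVAL column exposed by user_part_tables) that generates the “re-interval” command for every interval-partitioned table you own:


-- generate an "alter table ... set interval" command for each
-- interval-partitioned table in the current schema
select 'alter table '||table_name||' set interval( '||interval||' );' ddl
from   user_part_tables
where  interval is not null;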

Unless, of course…you’re on 12.2! Let’s repeat the demo on 12.2:


SQL> create table sales
  2  ( tstamp       timestamp    not null,
  3    empno        number(10)   not null,
  4    ename        varchar2(10) not null,
  5    deptno       varchar2(10) not null
  6  )
  7  partition by range (tstamp)
  8  interval( numtoyminterval(1,'YEAR'))
  9  (
 10    partition p00 values less than (timestamp '2010-01-01 00:00:00')
 11  );

Table created.

SQL>
SQL> insert into sales values ( timestamp '2011-01-01 00:00:00' , 0,0,0);

1 row created.

SQL> insert into sales values ( timestamp '2012-01-01 00:00:00' , 0,0,0);

1 row created.

SQL> insert into sales values ( timestamp '2013-01-01 00:00:00' , 0,0,0);

1 row created.

SQL>
SQL> col high_value format a36
SQL> col pname format a10
SQL>
SQL> select
  2    partition_name pname,
  3    partition_position pos,
  4    high_value,
  5    interval
  6  from   user_tab_partitions
  7  where  table_name = 'SALES';

PNAME             POS HIGH_VALUE                           INT
---------- ---------- ------------------------------------ ---
P00                 1 TIMESTAMP' 2010-01-01 00:00:00'      NO
SYS_P3415           2 TIMESTAMP' 2012-01-01 00:00:00'      YES
SYS_P3416           3 TIMESTAMP' 2013-01-01 00:00:00'      YES
SYS_P3417           4 TIMESTAMP' 2014-01-01 00:00:00'      YES

4 rows selected.

SQL>
SQL> alter table sales drop partition p00;

Table altered.

SQL>
SQL> select
  2    partition_name pname,
  3    partition_position pos,
  4    high_value,
  5    interval
  6  from   user_tab_partitions
  7  where  table_name = 'SALES';

PNAME             POS HIGH_VALUE                           INT
---------- ---------- ------------------------------------ ---
SYS_P3415           1 TIMESTAMP' 2012-01-01 00:00:00'      NO
SYS_P3416           2 TIMESTAMP' 2013-01-01 00:00:00'      YES
SYS_P3417           3 TIMESTAMP' 2014-01-01 00:00:00'      YES

3 rows selected.

How cool is that! Oracle now automatically converts one of the interval partitions to a range partition, so you won’t get the error.

The little features are often the coolest.

Quick tip–database link passwords

If you are relying on database links in your application, think carefully about how you want to manage the accounts that you connect with, in particular, when it comes to password expiry.

With a standard connect request to the database, if your password is going to expire soon, you will get some feedback on this:



SQL> conn demo/demo@np12
ERROR:
ORA-28002: the password will expire within 6 days

Connected.


But when using those same credentials via a database link, you will not get any warning, so when that password expires…you might be dead in the water.



SQL> create database link demolink connect to demo identified by demo using 'np12';

Database link created.

SQL> select * from tab@demolink;

TNAME
------------------------------------------------------------------------------------------------
TABTYPE  CLUSTERID
------- ----------
EMPLOYEES
TABLE
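
Since the database link gives you no warning, it is worth checking proactively. As a simple sketch, run something like this on the database that the links connect to, listing the accounts your links use (DEMO in the example above), and you can spot upcoming expiries before they bite:


-- accounts used by database links whose passwords expire within a fortnight
select username, account_status, expiry_date
from   dba_users
where  username in ('DEMO')
and    expiry_date < sysdate + 14;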

Quick tip–identity columns

Let’s say I’ve been reading about schema separation, and thus I am going to have a schema which owns all of my objects, which I’ll call APP_OWNER and which will have no connection privilege, plus a separate schema called APP_ADMIN which will take care of all of the DDL tasks.

Here’s my setup:


SQL> create user app_owner identified by app_owner;
 
User created.
 
SQL> grant
  2    create session,
  3    create any table,
  4    create any sequence
  5  to app_admin identified by app_admin;
 
Grant succeeded.

I have granted APP_ADMIN the ability to create tables in any schema, and to create sequences in any schema. I did the latter because I know that if I use the IDENTITY clause for a column, then behind the scenes I’ll be creating a sequence to populate those ascending values.  So it looks like I am ready to go and create my objects.  Let’s create that first table:


SQL> conn app_admin/app_admin
Connected.
 
SQL> create table app_owner.t(pk integer generated always as identity);
create table app_owner.t(pk integer generated always as identity)
*
ERROR at line 1:
ORA-01031: insufficient privileges

Hmmm….that is not what we were expecting. It turns out that to create an identity column in another schema you need more than just CREATE ANY SEQUENCE. You also need SELECT ANY SEQUENCE.


SQL> conn / as sysdba
Connected. 
SQL> grant
  2    create session,
  3    create any table,
  4    create any sequence,
  5    select any sequence
  6  to app_admin identified by app_admin;
 
Grant succeeded.
 
SQL> conn app_admin/app_admin
Connected.
 
SQL> create table app_owner.t(pk integer generated always as identity);
 
Table created.

And there we go.
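
As a quick sanity check, you can also see the system-generated sequence that backs the identity column via the dictionary (query as a user who can see the table; DBA_TAB_IDENTITY_COLS as SYSDBA works just as well):


-- show the sequence created behind the scenes for the identity column
select table_name, column_name, sequence_name
from   all_tab_identity_cols
where  owner = 'APP_OWNER';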

Footnote: If you’ve never seen the syntax “grant <privs> to <user> identified by <pass>”, it is a quick shortcut to both create the user account and assign privileges in a single command.

Session killin’ time

As developers, sometimes we set something running that we wish we hadn’t.  And naturally, we’d like to be good IT citizens and clean up the mess as quickly as we can.  (For most of us, this means: cover our tracks before the phone rings about smoke coming out of the server.)  But of course, getting an administrator to hand over the trigger, that is, to grant you the ALTER SYSTEM KILL SESSION command, is probably unlikely because…well…it’s just a bad, bad idea.  So here’s a wrapper which might serve as a starting point for you.  It exposes the kill command, but only under a restricted set of circumstances.

Sample Usage

By default, we report any session that has a status of active or killed. We’ll see the session details, whether it’s running or blocked, plus the SQL ID etc.


SQL> select * from table(sys.my_session.s);

Session       User/Elapsed Secs       Current SQL     Status        Program                                 Blocked by Session
--------------------------------------------------------------------------------------------------------------------------------
257,5501      ASKTOM (0)              89uk42w1xkdty   ACTIVE        sqlplus.exe-comcdona
261,8848      QDB_PROD (33747)                        KILLED        SQL Developer-stevenfeuerstein
368,43752     ASKTOM (12)             a40p1nyb24j18   ACTIVE        sqlplus.exe-comcdona

Alternatively, we can pass in “ALL” to see all sessions



SQL> select * from table(sys.my_session.s('all'));

Session       User/Elapsed Secs       Current SQL     Status        Program                                 Blocked by Session
--------------------------------------------------------------------------------------------------------------------------------
1,13808       SYS (1769614)                           INACTIVE      VKTM-oracle
2,23469       SYS (1769611)                           INACTIVE      DIAG-oracle
3,36185       SYS (1769611)                           INACTIVE      DBW0-oracle
4,21472       SYS (1769611)                           INACTIVE      SMON-oracle
5,8033        SYS (1769611)                           INACTIVE      PXMN-oracle
6,16680       SYS (1769605)                           INACTIVE      TMON-oracle
7,61493       SYS (1769605)                           INACTIVE      ARC3-oracle
9,16830       APEX_PUBLIC_USER (1122)                 INACTIVE      APEX Listener-oracle
11,3902       SYS (1769590)                           INACTIVE      CJQ0-oracle
12,20631      SYS (1769587)                           INACTIVE      QM00-oracle
14,50003      APEX_LISTENER (949)                     INACTIVE      APEX Listener-oracle
17,2037       SYS (1769585)                           INACTIVE      Q009-oracle
22,41550      SYS (125)                               INACTIVE      W002-oracle
26,52963      ORDS_PUBLIC_USER (99637)                INACTIVE      APEX Listener-oracle
28,27784      SYS (3693)                              INACTIVE      W007-oracle
30,9396       ORDS_PUBLIC_USER (1306)                 INACTIVE      APEX Listener-oracle
119,21406     SYS (1769614)                           INACTIVE      GEN0-oracle
120,9696      SYS (1769611)                           INACTIVE      DBRM-oracle
121,65040     SYS (1769611)                           INACTIVE      LGWR-oracle
122,64828     SYS (1769611)                           INACTIVE      LG01-oracle
123,65400     SYS (1769611)                           INACTIVE      MMON-oracle
...
...

By default, you can just call the “kill” routine and we’ll look for a single session that:

  • has been active for more than 10 seconds,
  • is owned by you,
  • is within a list of known user accounts,
  • is running on your terminal,
  • came from SQL*Plus or SQL Developer,
  • is not a parallel slave

etc, etc. Basically, you edit the code (at the end of this blog) to be as restrictive as required to protect people from themselves.
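
For example (a purely hypothetical extra rule, not part of the package below), you might decide that only sessions from known application modules are fair game. It’s worth testing any such predicate standalone against v$session before wiring it into the package’s WHERE clause:


-- hypothetical: restrict candidate sessions to known application modules
select sid, serial#, username, module
from   v$session
where  module in ('MY_BATCH_APP','AD_HOC_QUERY')
and    status = 'ACTIVE';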

I’m just dbms_output-ing in this case, but you get the idea. We first try a kill, and then attempt a disconnect as well.



SQL> exec sys.my_session.kill
alter system kill session '368,43752' immediate
alter system disconnect session '368,43752'

PL/SQL procedure successfully completed.

You can see that this was the only applicable ASKTOM process from the active list in the first query.

If we don’t find a session, or find more than one session, you’ll get errors, eg



SQL> exec sys.my_session.kill
BEGIN my_session.kill; END;

*
ERROR at line 1:
ORA-20000: No suitable session found for killing

SQL> exec sys.my_session.kill
BEGIN my_session.kill; END;

*
ERROR at line 1:
ORA-20000: More than one session found. Pass in the SID number

In the latter case, you can do a more selective kill by passing in the SID, eg



SQL> exec sys.my_session.kill(368)
alter system kill session '368,43752' immediate

PL/SQL procedure successfully completed.

Enjoy!

Source Code


create or replace
package sys.my_session is
  procedure kill(p_sid number default null);
  function s(p_type varchar2 default 'ACTIVE') return sys.odcivarchar2list pipelined;
end;
/
--
-- Choose (wisely) which schemas you want to give access to this
--
grant execute on sys.my_session to ???????;
create or replace
package body sys.my_session is
  procedure kill(p_sid number default null) is
    l_sid     int;
    l_serial# int;
    l_user    varchar2(30) := user;
  begin
    select s.sid, s.serial#
    into   l_sid, l_serial#
    from   v$session s,
           v$px_session p
    --
    -- Here is where you implement all the rules for controlling exactly *what* sessions
    -- people will be allowed to kill.
    --
    --
    -- must be my own login
    --
    where  s.username = l_user
    --
    -- must be running something for 10 seconds or more
    --
    and    s.status = 'ACTIVE'
    and    s.last_call_et > 10
    --
    -- must be SQL Dev or SQL Plus
    --
    and    ( upper(s.program) like '%SQLPLUS%'
          or upper(s.program) like '%SQL%DEVELOPER%'
           )
    --
    -- must be an account we're allowed to kill (just in case they circumvent the 'own login' check above)
    --
    and   s.username in ('ASKTOM','QDB_PROD')
    --
    -- must be on the same machine as the problem session
    --
    and   upper(s.terminal) = upper(sys_context('USERENV','TERMINAL'))
    --
    -- allow override for explicit sid
    --
    and   s.sid = nvl(p_sid,s.sid)
    --
    -- must not be a parallel slave
    --
    and   s.sid = p.sid(+)
    and   s.serial# = p.serial#(+)
    and   s.sid != p.qcsid(+)
    and   p.sid is null;
    begin
      dbms_output.put_line('alter system kill session '''||l_sid||','||l_serial#||''' immediate');
      -- execute immediate 'alter system kill session '''||l_sid||','||l_serial#||''' immediate';
    exception
      when others then null;
    end;

    begin
      dbms_output.put_line('alter system disconnect session '''||l_sid||','||l_serial#||'''');
      -- execute immediate 'alter system disconnect session '''||l_sid||','||l_serial#||'''';
    exception
      when others then null;
    end;

  exception
    --
    -- No session found, means you're fine or trying to be nasty
    --
    when no_data_found then
       raise_application_error(-20000,'No suitable session found for killing');
    --
    -- More than one session found, means we'll need the sid explicitly
    --
    when too_many_rows then
       raise_application_error(-20000,'More than one session found.  Pass in the SID number');
  end;
  function s(p_type varchar2 default 'ACTIVE') return sys.odcivarchar2list pipelined is
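  --
  -- Report sessions in a fixed-width layout. 'ACTIVE' (the default) lists
  -- active and killed sessions; 'ALL' lists everything. The outer join to
  -- v$lock (type JQ) detects running scheduler jobs, so background job
  -- sessions are reported as ACTIVE rather than INACTIVE.
  --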
  begin
   for i in ( 
      select row_number() over ( order by s.sid)  r, s.sid, s.serial#,
                nvl(s.username,'SYS')
        || ' ('||s.last_call_et||')' username,
        case when s.lockwait is null then
             case when s.username is null then
                nvl2(j.job_sid,'ACTIVE','INACTIVE')
             else
                s.status
             end
        else 'BLOCKED'
        end status,
        nvl(s.sql_id,' ') sql_id,
        nvl(case when s.program is not null then
               ( case when s.program like 'oracle%(%)%' then regexp_substr(s.program,'^oracle.*\((.*)\).*$',1, 1, 'i', 1)
                      else s.program
                      end )
             when s.username is null then ( select p.program
                                            from   v$process p
                                            where  s.PADDR = p.ADDR )
             end || '-' || s.osuser,' ') program ,
             nvl(to_char(blocking_session),' ') blocking_session,
             nvl(case
               when blocking_session is null then cast(null as varchar2(1))
               else
                cast(( select substr(s1.osuser||'-'||s1.program,1,60)
                  from   v$session s1
                  where s1.sid = s.blocking_session
                ) as varchar2(60))
             end,' ') blocker
      from v$session s,
                ( select sid job_sid
                  from   v$lock
                  where  type = 'JQ' ) j
      where s.sid = j.job_sid(+)
      and (
            ( upper(p_type) = 'ACTIVE' 
              and s.status in ('ACTIVE','KILLED')
              and ( s.username is not null or ( s.username is null and s.last_call_et < 300 ) )
            ) 
            or
            upper(p_type) = 'ALL'
          )
      order by s.sid
   ) 
   loop
     if i.r = 1 then
       pipe row ( rpad('Session',14)||
                  rpad('User/Elapsed Secs',24)||
                  rpad('Current SQL',16)||
                  rpad('Status',14)||
                  rpad('Program',40)||
                  rpad('Blocked by Session',20)
                 );
       pipe row ( rpad('-',14,'-')||
                  rpad('-',24,'-')||
                  rpad('-',16,'-')||
                  rpad('-',14,'-')||
                  rpad('-',40,'-')||
                  rpad('-',20,'-')
                 );
     end if;
     pipe row ( rpad(i.sid||','||i.serial#,14)||
                rpad(i.username,24)||
                rpad(i.sql_id,16)||
                rpad(i.status,14)||
                rpad(substr(i.program,1,38),40)||
                lpad(i.blocker,20)
              );
   end loop;
  end;
end;
/
sho err
select * from table(my_session.s('all'));

Transportable Tablespace–part 2

I did a little demo of sharing a tablespace between two databases a few days back – you can see the details here, or by just scrolling down if you’re on the home page.

To avoid clouding the demonstration I omitted something in the details, but I’ll share that now, because it could be critical depending on how you currently use transportable tablespaces.

Let me do the most basic of examples now, transporting a tablespace from one database to another:

First, we make our tablespace read only, and Datapump export the metadata:


SQL> alter tablespace DEMO read only;

Tablespace altered.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

C:\>expdp transport_tablespaces=DEMO directory=TEMP dumpfile=tts.dmp

Export: Release 12.1.0.2.0 - Production on Tue Apr 18 14:16:06 2017

Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TRANSPORTABLE_01":  /******** AS SYSDBA transport_tablespaces=DEMO directory=TEMP dumpfile=tts.dmp
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TRANSPORTABLE_01 is:
  C:\TEMP\TTS.DMP
******************************************************************************
Datafiles required for transportable tablespace DEMO:
  C:\ORACLE\ORADATA\NP12\DEMO.DBF
Job "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at Tue Apr 18 14:17:03 2017 elapsed 0 00:00:50

Then, I copy the datafile(s) to the target location and Datapump import the metadata:


C:\>copy C:\oracle\oradata\np12\DEMO.DBF C:\oracle\oradata\db122\DEMO.DBF
        1 file(s) copied.

C:\>impdp transport_datafiles=C:\oracle\oradata\db122\DEMO.DBF directory=TEMP dumpfile=tts.dmp

Import: Release 12.2.0.1.0 - Production on Tue Apr 18 14:17:27 2017

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Master table "SYS"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_TRANSPORTABLE_01":  /******** AS SYSDBA transport_datafiles=C:\oracle\oradata\db122\DEMO.DBF directory=TEMP dumpfile=tts.dmp
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYS"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at Tue Apr 18 14:17:46 2017 elapsed 0 00:00:15

And voila, there is my tablespace in the target database…


C:\>sql122

SQL*Plus: Release 12.2.0.1.0 Production on Tue Apr 18 14:19:08 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Last Successful login time: Tue Apr 18 2017 14:14:19 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production



SQL> select * from dba_tablespaces
  2  where tablespace_name = 'DEMO'
  3  @pr
==============================
TABLESPACE_NAME               : DEMO
BLOCK_SIZE                    : 8192
INITIAL_EXTENT                : 1048576
NEXT_EXTENT                   : 1048576
MIN_EXTENTS                   : 1
MAX_EXTENTS                   : 2147483645
MAX_SIZE                      : 2147483645
PCT_INCREASE                  : 0
MIN_EXTLEN                    : 1048576
STATUS                        : READ ONLY
CONTENTS                      : PERMANENT
LOGGING                       : LOGGING
FORCE_LOGGING                 : NO
EXTENT_MANAGEMENT             : LOCAL
ALLOCATION_TYPE               : UNIFORM
PLUGGED_IN                    : YES
SEGMENT_SPACE_MANAGEMENT      : AUTO
DEF_TAB_COMPRESSION           : DISABLED
RETENTION                     : NOT APPLY
BIGFILE                       : NO
PREDICATE_EVALUATION          : HOST
ENCRYPTED                     : NO
COMPRESS_FOR                  :
DEF_INMEMORY                  : DISABLED
DEF_INMEMORY_PRIORITY         :
DEF_INMEMORY_DISTRIBUTE       :
DEF_INMEMORY_COMPRESSION      :
DEF_INMEMORY_DUPLICATE        :
SHARED                        : SHARED
DEF_INDEX_COMPRESSION         : DISABLED
INDEX_COMPRESS_FOR            :
DEF_CELLMEMORY                :
DEF_INMEMORY_SERVICE          :
DEF_INMEMORY_SERVICE_NAME     :
LOST_WRITE_PROTECT            : OFF
CHUNK_TABLESPACE              : N

PL/SQL procedure successfully completed.

“Gee thanks, Connor” I can hear you muttering. A demonstration of the flippin’ obvious!

But there is one thing that is not apparent from the export or import logs. Let’s take a squizz at the database alert log for the target, that is, the database we imported the tablespace into.


DW00 started with pid=69, OS id=632, wid=1, job SYS.SYS_IMPORT_TRANSPORTABLE_01
2017-04-18T14:17:34.208631+08:00
Plug in tablespace DEMO with datafile
  'C:\oracle\oradata\db122\DEMO.DBF'
2017-04-18T14:17:46.199645+08:00
ALTER TABLESPACE "DEMO" READ WRITE
Completed: ALTER TABLESPACE "DEMO" READ WRITE
2017-04-18T14:17:46.665512+08:00
ALTER TABLESPACE "DEMO" READ ONLY
Completed: ALTER TABLESPACE "DEMO" READ ONLY

That is a change in 12c. Whilst our imported tablespace ends up read only, as it always has, during the import process there was a small window where the tablespace was READ WRITE. This is needed to make some metadata corrections to the tablespace on the way in.

So if you do intend to share tablespaces between databases, that is, share a single copy of the file, make sure you take some precautions. On my Windows laptop, standard Windows file locking prohibited me from causing any damage to my source datafile, but on other platforms you might want to set those files to read only at the OS level, just in case. Of course, you’ll then see a warning during the Datapump import saying that the momentary change to read/write could not be done, but that is not a critical problem.  The transport will still complete.
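
On Linux and similar platforms, that OS-level safeguard is just a file permission change. A minimal sketch, with a hypothetical path:


chmod 444 /u01/oradata/shared/demo.dbf     # shared datafile now read-only at the OS level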

Sharing a tablespace between 2 databases

I was reading an interesting discussion today about multiple databases each containing large amounts of read-only data.  If that read-only data is common, then it would make sense to have a single copy of that data and have both databases share it.

Well, as long as you can isolate that data into its own tablespace, then you can do that easily with Oracle by transporting the metadata between two databases and leaving the files in place.

Here’s an example

Source database


SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
CORE    12.1.0.2.0      Production
TNS for 64-bit Windows: Version 12.1.0.2.0 - Production
NLSRTL Version 12.1.0.2.0 - Production

SQL> create tablespace i_am_on_121 datafile 'C:\oracle\oradata\tts\my_tspace' size 50m;

Tablespace created.

SQL> create table t tablespace i_am_on_121 as select * from dba_objects;

Table created.

SQL> alter tablespace i_am_on_121 read only;

Tablespace altered.

C:\>expdp transport_tablespaces=i_am_on_121 directory=TEMP dumpfile=tts.dmp

Export: Release 12.1.0.2.0 - Production on Fri Apr 14 08:50:24 2017

Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.

Username: mcdonac/*****

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "MCDONAC"."SYS_EXPORT_TRANSPORTABLE_01":  mcdonac/******** transport_tablespaces=i_am_on_121 directory=TEMP dumpfile=tts.dmp
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "MCDONAC"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for MCDONAC.SYS_EXPORT_TRANSPORTABLE_01 is:
  C:\TEMP\TTS.DMP
******************************************************************************
Datafiles required for transportable tablespace I_AM_ON_121:
  C:\ORACLE\ORADATA\TTS\MY_TSPACE
Job "MCDONAC"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at Fri Apr 14 08:51:16 2017 elapsed 0 00:00:47

and then we import it into a different database (and this one is even a different version!).

Target database


C:\Users\hamcdc>impdp transport_datafiles=C:\oracle\oradata\tts\my_tspace directory=TEMP dumpfile=tts.dmp

Import: Release 12.2.0.1.0 - Production on Fri Apr 14 08:51:28 2017

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

Username: mcdonac/*****

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Master table "MCDONAC"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "MCDONAC"."SYS_IMPORT_TRANSPORTABLE_01":  mcdonac/******** transport_datafiles=C:\oracle\oradata\tts\my_tspace directory=TEMP dumpfile=tts.dmp
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER


SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
PL/SQL Release 12.2.0.1.0 - Production
CORE    12.2.0.1.0      Production
TNS for 64-bit Windows: Version 12.2.0.1.0 - Production
NLSRTL Version 12.2.0.1.0 - Production

SQL> select count(*) from t;

  COUNT(*)
----------
     92934

SQL> select * from dba_tablespaces
  2  where tablespace_name = 'I_AM_ON_121'
  3  @pr
==============================
TABLESPACE_NAME               : I_AM_ON_121
BLOCK_SIZE                    : 8192
INITIAL_EXTENT                : 65536
NEXT_EXTENT                   :
MIN_EXTENTS                   : 1
MAX_EXTENTS                   : 2147483645
MAX_SIZE                      : 2147483645
PCT_INCREASE                  :
MIN_EXTLEN                    : 65536
STATUS                        : READ ONLY
CONTENTS                      : PERMANENT
LOGGING                       : LOGGING
FORCE_LOGGING                 : NO
EXTENT_MANAGEMENT             : LOCAL
ALLOCATION_TYPE               : SYSTEM
PLUGGED_IN                    : YES
SEGMENT_SPACE_MANAGEMENT      : AUTO
DEF_TAB_COMPRESSION           : DISABLED
RETENTION                     : NOT APPLY
BIGFILE                       : NO
PREDICATE_EVALUATION          : HOST
ENCRYPTED                     : NO
COMPRESS_FOR                  :
DEF_INMEMORY                  : DISABLED
DEF_INMEMORY_PRIORITY         :
DEF_INMEMORY_DISTRIBUTE       :
DEF_INMEMORY_COMPRESSION      :
DEF_INMEMORY_DUPLICATE        :
SHARED                        : SHARED
DEF_INDEX_COMPRESSION         : DISABLED
INDEX_COMPRESS_FOR            :
DEF_CELLMEMORY                :
DEF_INMEMORY_SERVICE          :
DEF_INMEMORY_SERVICE_NAME     :
LOST_WRITE_PROTECT            : OFF
CHUNK_TABLESPACE              : N

PL/SQL procedure successfully completed.

Better to be safe than sorry…

I’ve always been worried about taking a script that is fine to run in my non-production environments (in particular a DROP script) and accidentally running it in a Production environment, shortly followed by the typing up of a fresh resume to look for a new job once the mistake is discovered.

The standard “least privilege required” model for Production access is a good step along the road to protecting the database. But ultimately, at one time or another, an administrator with appropriate privileges, good intentions, but (say) bad cut-and-paste skills might inadvertently run a drop command against a database they did not want to.  (A well-publicised example of this happened quite recently: https://www.theregister.co.uk/2017/02/01/gitlab_data_loss/)

So it would be nice to have a “double check” facility, so that we can guard against a DROP being run on a production system inadvertently. That way, even if you did run that drop script against Production, you would be “saved from yourself”. Here’s one possible approach you might want to consider: we’ll use a context variable that has to be set in order for DROP commands to be processed.


--
-- Our context
--
SQL> create context DDL_CHECK using ddl_checker;

Context created.

--
-- A routine to set it
--
SQL> create or replace
  2  procedure ddl_checker(p_val varchar2) is
  3  begin
  4    dbms_session.set_context('DDL_CHECK','ENABLED',p_val);
  5  end;
  6  /

Procedure created.

--
-- If we are on the Production database, DROP DDLs only proceed if the context variable is set
--
SQL> create or replace
  2  trigger ddl_checker_before
  3  before drop on scott.schema
  4  begin
  5    if sys.database_name like '%PROD%' and sys_context('DDL_CHECK','ENABLED') is null
  6    then
  7      raise_application_error(-20000,'DDL confirmation context is not set');
  8    end if;
  9  end;
 10  /

Trigger created.

--
-- And after every DROP, the context is reset for the next use so it cannot be left on inadvertently
--
SQL> create or replace
  2  trigger ddl_checker_after
  3  after drop on scott.schema
  4  begin
  5    ddl_checker(null);
  6  end;
  7  /

Trigger created.

SQL>
SQL>
SQL> grant execute on ddl_checker to scott;

Grant succeeded.

SQL>

So now we can see what happens when SCOTT attempts some DDL in Production by mistake.


SQL> conn scott/tiger
Connected.

SQL> drop table t purge;
drop table t purge
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-20000: DDL confirmation context is not set
ORA-06512: at line 4

--
-- We must explicitly express our intention to perform a DDL
--
SQL> exec admin.ddl_checker('true')

PL/SQL procedure successfully completed.

--
-- and only then are we allowed to run a DROP command
--
SQL> drop table t purge;

Table dropped.

--
-- and in doing so, we don't have any lingering access
--
SQL> drop table t1 purge;
drop table t1 purge
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-20000: DDL confirmation context is not set
ORA-06512: at line 4


SQL>

I stress – this is not about securing administrator access, or replacing your existing processes for auditing or locking down the privileges on your production systems. This is just about an extra check to stop your over-worked and/or over-tired administrators from committing a small mistake that becomes a catastrophic error.