Identity columns in 12c … just a sequence?

This question came to me over Twitter, so I thought I’d whip out a quick post on it.

 


Yes, we do implement the IDENTITY column via a sequence.  (Digression – I think this is a smart thing to do.  After all, we’ve had sequences for 20 years, so we know how they work, they are mature, tested, and rock solid, so why invent something new?)

So Peter’s question was: why not just do what we’ve always done – create a sequence and assign it as the column default?  And yes, there is nothing wrong with doing that – it will work just fine.  But there are a couple of subtle differences between that approach and using the IDENTITY syntax.
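For reference, the “manual” equivalent that Peter is describing looks something like this (a sketch – the names are illustrative):

```sql
-- A standalone sequence used as the column default.
-- Unlike an identity column, SEQ is not bound to the table,
-- so it survives a DROP TABLE.
create sequence seq;

create table t ( x int default seq.nextval );
```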

1) The sequence is bound to the table, so when I drop the table, the sequence disappears as well.


SQL> create table t ( x int generated as identity);

Table created.

SQL> select object_name from user_objects where created > sysdate - 60/86400;

OBJECT_NAME
----------------------------------------
T
ISEQ$$_195276

SQL> drop table t purge;

Table dropped.

SQL> select object_name from user_objects where created > sysdate - 60/86400;

no rows selected

2) You can lock down the column


SQL> create table t ( x int generated always as identity);

Table created.

SQL> insert into t values ( 12 ) ;
insert into t values ( 12 )
*
ERROR at line 1:
ORA-32795: cannot insert into a generated always identity column
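If you want automatic values but still want the option of supplying your own, the BY DEFAULT variant of the syntax allows that (a quick sketch):

```sql
create table t2 ( x int generated by default as identity );

-- An explicit value is now permitted ...
insert into t2 values ( 12 );

-- ... and omitting a value lets the underlying sequence supply one.
insert into t2 values ( default );
```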

Personally, I don’t have a preference.  The most important thing to me was that sequences can be used as a default value to avoid triggers on every single table.

LOBs and tiny typos

This one caught me out – I was racing around looking for bugs, or parameter files, or hidden settings that stopped SECUREFILE LOBs from being created.  Here was my incredibly simple test case – create a SECUREFILE LOB, and then make sure it’s a SECUREFILE.


SQL> create table t1 ( b blob ) lob ( b ) store as securfile;

Table created.

SQL> select securefile
  2  from   user_lobs
  3  where  table_name = 'T1';

SEC
---
NO

That had me bamboozled, but the explanation is trivial.  Notice that I did not spell “SECUREFILE” correctly.  As a result, the syntax is interpreted as the name of the LOB segment in the data dictionary, rather than the specification of how the LOB should be stored.


SQL> select segment_name
  2  from   user_lobs
  3  where  table_name = 'T1';

SEGMENT_NAME
------------------------------
SECURFILE

All it takes is the missing “e” in “securefile” and normal service is resumed.


SQL> drop table t1 purge;

Table dropped.

SQL> create table t1 ( b blob ) lob ( b ) store as securefile;

Table created.

SQL> select securefile
  2  from   user_lobs
  3  where  table_name = 'T1';

SEC
---
YES

So why blog about such a silly mistake?  Because this is a common issue for all of us as developers.  We see something unusual and our first assumption is that it must be some fundamental flaw in the product we’re using.  That mindset has been around as long as computer programming has existed, but 20 years ago it wasn’t so easy to make a fool of yourself by bleating about it on social media.  I remember when I first started programming, one of my mentors told me: “Remember, the COBOL compiler is not broken”, and of course, we can insert any programming language into that sentence.  So before you jump on to Twitter … just take a moment to re-check that script, or get a colleague to give it a “once over”.  You’ll either save yourself some embarrassment, or you’ll add additional rigour to your test case – it’s win-win.

12c Statistics on load – special cases

One of the cool features in 12c is the automatic collection of optimizer statistics when a table is either created or loaded via direct path from empty.  This makes a lot of sense because it saves us from what used to be the mandatory second step of gathering statistics whenever we loaded an empty table with data.

For example

11.2.0.4


SQL> create table t1 as
  2  select * from dba_objects;

Table created.

SQL>
SQL> select num_rows
  2  from user_tables
  3  where table_name = 'T1';

  NUM_ROWS
----------
(null)

SQL> select count(*) from t1;

  COUNT(*)
----------
     86802

So at this point, we’d then have to scan the table we just loaded with a DBMS_STATS call in order to come up with some representative statistics. This was fixed in 12c with “statistics on load”.

12c 


SQL> create table t1 as
  2  select * from dba_objects;

Table created.

SQL> -- no dbms_stats call
SQL> select num_rows
  2  from user_tables
  3  where table_name = 'T1';

  NUM_ROWS
----------
     78170

But this facility has been blogged about extensively, so I won’t labour the point. This post is more about a couple of things to be aware of when taking advantage of the facility.

Partitioned tables

The 12c feature is for statistics on load of the table. So if you are creating/loading a partitioned table, whilst you will still get statistics collected, they are at the table level only. For example,


SQL> create table t1
  2  partition by range ( object_id )
  3  interval ( 20000 )
  4  (
  5    partition p1 values less than ( 20000 ),
  6    partition p2 values less than ( 200000 )
  7  )
  8  as select d.* from dba_objects d
  9  where object_id is not null;

Table created.

SQL>
SQL> select num_rows
  2  from user_tables
  3  where table_name = 'T1';

  NUM_ROWS
----------
     78165

SQL>
SQL> select partition_name, num_rows
  2  from user_tab_partitions
  3  where table_name = 'T1';

PARTITION_   NUM_ROWS
---------- ----------
P1         (null)
P2         (null)

Notice that the partition-level statistics are not collected. Indexes, interestingly, will be fine whether they are created during or after the load.


--
-- indexes created after the load on the table above
--
SQL> create index ix1 on t1 ( object_id ) local;

Index created.

SQL> create index ix2 on t1 ( object_name);

Index created.

SQL>
SQL> select index_name, leaf_blocks
  2  from user_indexes
  3  where index_name in ( 'IX1','IX2');

INDEX_NAME                     LEAF_BLOCKS
------------------------------ -----------
IX1                                    173
IX2                                    535

2 rows selected.

SQL>
SQL> select index_name, partition_name, leaf_blocks
  2  from user_ind_partitions
  3  where index_name in ( 'IX1','IX2');

INDEX_NAME                     PARTITION_ LEAF_BLOCKS
------------------------------ ---------- -----------
IX1                            P1                  43
IX1                            P2                 130

--
-- indexes created as part of the table creation process
--
SQL> create table t1 (
  2     owner
  3    ,object_name
  4    ,subobject_name
  5    ,object_id
  6    ,data_object_id
  7    ,object_type
  8    ,created
  9    ,last_ddl_time
 10    ,timestamp
 11    ,status
 12    ,temporary
 13    ,generated
 14    ,secondary
 15    ,namespace
 16    ,edition_name
 17    ,sharing
 18    ,editionable
 19    ,oracle_maintained
 20    ,application
 21    ,default_collation
 22    ,duplicated
 23    ,sharded
 24    ,created_appid
 25    ,created_vsnid
 26    ,modified_appid
 27    ,modified_vsnid
 28    ,constraint pk primary key ( object_id ) using index local
 29    ,constraint uq unique ( owner, object_id ) using index
 30  )
 31  partition by range ( object_id )
 32  interval ( 20000 )
 33  (
 34    partition p1 values less than ( 20000 ),
 35    partition p2 values less than ( 200000 )
 36  )
 37  as select d.* from dba_objects d
 38  where object_id is not null;

Table created.

SQL>
SQL>
SQL> select index_name, leaf_blocks
  2  from user_indexes
  3  where index_name in ('PK','UQ');

INDEX_NAME                     LEAF_BLOCKS
------------------------------ -----------
PK                                     163
UQ                                     263

2 rows selected.

SQL>
SQL> select index_name, partition_name, leaf_blocks
  2  from user_ind_partitions
  3  where index_name in ('PK','UQ');

INDEX_NAME                     PARTITION_ LEAF_BLOCKS
------------------------------ ---------- -----------
PK                             P1                  41
PK                             P2                 122

2 rows selected.

CTAS empty is still OK

A common trick for creating an empty table whilst copying the definition of an existing table is to use a predicate with an always false condition. For example,


SQL> create table t1 as
  2  select * from dba_objects
  3  where 1=0;

Table created.

Statistics on load will still take place for such a table, i.e.


SQL> select num_rows
  2  from user_tables
  3  where table_name = 'T1';

  NUM_ROWS
----------
         0

Now you might be worried that, since there are now statistics on the table, a direct load using INSERT /*+ APPEND */ – which would normally perform statistics on load – will no longer do so. But there is no need to panic. We’ll still determine that the table is empty, and do another statistics-on-load operation when you perform that direct load.


SQL> insert /*+ APPEND */ into t1
  2  select * from dba_objects;

78171 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> select num_rows
  2  from user_tables
  3  where table_name = 'T1';

  NUM_ROWS
----------
     78171

But this also has implications if you want to retain some existing statistics on the table, because when we truncate the table, that same reset of statistics can occur on the next direct load.


SQL> create table t1 as
  2  select * from dba_objects
  3  where rownum <= 10;

Table created.

SQL> select num_rows
  2  from user_tables
  3  where table_name = 'T1';

  NUM_ROWS
----------
        10

SQL>
SQL> truncate table t1;

Table truncated.

SQL>
SQL> insert /*+ APPEND */ into t1
  2  select * from dba_objects;

78171 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> select num_rows
  2  from user_tables
  3  where table_name = 'T1';

  NUM_ROWS
----------
     78171

Similarly, all it takes is one transaction that “brings the table to life” for automatic statistics collection to be no longer active – even if that transaction rolls back. For example:


SQL> create table t1 as
  2  select * from dba_objects
  3  where 1=0;

Table created.

SQL> select num_rows
  2  from user_tables
  3  where table_name = 'T1';

  NUM_ROWS
----------
         0

SQL> -- a normal mode insert of a row
SQL> insert into t1
  2  select * from dba_objects where rownum = 1;

1 row created.

SQL>
SQL> rollback;

Rollback complete.

SQL>
SQL> insert /*+ APPEND */ into t1
  2  select * from dba_objects;

78171 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> select num_rows
  2  from user_tables
  3  where table_name = 'T1';

  NUM_ROWS
----------
         0

All of these behaviours are pretty much how I would expect them to work (your opinion may differ, of course), and I know that some people have opted to disable the automatic collection altogether to avoid any ambiguity, but that strikes me as overkill.

If you are really concerned about it, it is easy enough to suffix your load routines with a simple wrapper to check the statistics and issue the appropriate DBMS_STATS call to make up the shortfall.   And it would be remiss of me not to mention the latest optimizer whitepaper, which covers in detail the other optimizer enhancements in 12c. 
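Such a wrapper need not be elaborate. Something along these lines would do (a sketch – the procedure name and the staleness test are purely illustrative):

```sql
-- Illustrative wrapper: gather statistics only if the load left none
-- behind (or left a stale zero from a previously "alive" empty table).
create or replace procedure stats_shortfall( p_table in varchar2 ) as
  l_num_rows user_tables.num_rows%type;
begin
  select num_rows
  into   l_num_rows
  from   user_tables
  where  table_name = upper(p_table);

  if l_num_rows is null or l_num_rows = 0 then
    dbms_stats.gather_table_stats( ownname => user, tabname => p_table );
  end if;
end;
/
```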

Happy stats collecting!

Quick tip – identity columns

Let’s say I’ve been reading about schema separation, and so I am going to have a schema that owns all of my objects, called APP_OWNER, which will have no connection privilege, and a separate schema called APP_ADMIN which will take care of all of the DDL tasks.

Here’s my setup:


SQL> create user app_owner identified by app_owner;
 
User created.
 
SQL> grant
  2    create session,
  3    create any table,
  4    create any sequence
  5  to app_admin identified by app_admin;
 
Grant succeeded.

I have granted APP_ADMIN the ability to create tables in any schema, and to create sequences in any schema. I did the latter because I know that if I use the IDENTITY clause for a column, then behind the scenes a sequence will be created to populate those ascending values.  So it looks like I am ready to go and create my objects.  Let’s create that first table.


SQL> conn app_admin/app_admin
Connected.
 
SQL> create table app_owner.t(pk integer generated always as identity);
create table app_owner.t(pk integer generated always as identity)
*
ERROR at line 1:
ORA-01031: insufficient privileges

Hmmm … that is not what we were expecting. It turns out that to create an identity column in another schema, you need more than just CREATE ANY SEQUENCE – you also need SELECT ANY SEQUENCE.


SQL> conn / as sysdba
Connected. 
SQL> grant
  2    create session,
  3    create any table,
  4    create any sequence,
  5    select any sequence
  6  to app_admin identified by app_admin;
 
Grant succeeded.
 
SQL> conn app_admin/app_admin
Connected.
 
SQL> create table app_owner.t(pk integer generated always as identity);
 
Table created.

And there we go.

Footnote: If you’ve never seen the syntax “grant <privileges> to <user> identified by <password>”, it is a quick shortcut to both create the user account and assign privileges in a single command.

Transportable Tablespace – part 2

I did a little demo of sharing a tablespace between two databases a few days back – you can see the details here, or by just scrolling down if you’re on the home page.

To avoid clouding the demonstration I omitted something in the details, but I’ll share that now, because it could be critical depending on how you currently use transportable tablespaces.

Let me do the most basic of examples now, transporting a tablespace from one database to another:

First, we make our tablespace read only, and use Datapump to export the metadata:


SQL> alter tablespace DEMO read only;

Tablespace altered.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

C:\>expdp transport_tablespaces=DEMO directory=TEMP dumpfile=tts.dmp

Export: Release 12.1.0.2.0 - Production on Tue Apr 18 14:16:06 2017

Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TRANSPORTABLE_01":  /******** AS SYSDBA transport_tablespaces=DEMO directory=TEMP dumpfile=tts.dmp
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TRANSPORTABLE_01 is:
  C:\TEMP\TTS.DMP
******************************************************************************
Datafiles required for transportable tablespace DEMO:
  C:\ORACLE\ORADATA\NP12\DEMO.DBF
Job "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at Tue Apr 18 14:17:03 2017 elapsed 0 00:00:50

Then, I copy the datafile(s) to the target location and use Datapump to import the metadata:


C:\>copy C:\oracle\oradata\np12\DEMO.DBF C:\oracle\oradata\db122\DEMO.DBF
        1 file(s) copied.

C:\>impdp transport_datafiles=C:\oracle\oradata\db122\DEMO.DBF directory=TEMP dumpfile=tts.dmp

Import: Release 12.2.0.1.0 - Production on Tue Apr 18 14:17:27 2017

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Master table "SYS"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_TRANSPORTABLE_01":  /******** AS SYSDBA transport_datafiles=C:\oracle\oradata\db122\DEMO.DBF directory=TEMP dumpfile=tts.dmp
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYS"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at Tue Apr 18 14:17:46 2017 elapsed 0 00:00:15

And voila, there is my tablespace in the target database…


C:\>sql122

SQL*Plus: Release 12.2.0.1.0 Production on Tue Apr 18 14:19:08 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Last Successful login time: Tue Apr 18 2017 14:14:19 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production



SQL> select * from dba_tablespaces
  2  where tablespace_name = 'DEMO'
  3  @pr
==============================
TABLESPACE_NAME               : DEMO
BLOCK_SIZE                    : 8192
INITIAL_EXTENT                : 1048576
NEXT_EXTENT                   : 1048576
MIN_EXTENTS                   : 1
MAX_EXTENTS                   : 2147483645
MAX_SIZE                      : 2147483645
PCT_INCREASE                  : 0
MIN_EXTLEN                    : 1048576
STATUS                        : READ ONLY
CONTENTS                      : PERMANENT
LOGGING                       : LOGGING
FORCE_LOGGING                 : NO
EXTENT_MANAGEMENT             : LOCAL
ALLOCATION_TYPE               : UNIFORM
PLUGGED_IN                    : YES
SEGMENT_SPACE_MANAGEMENT      : AUTO
DEF_TAB_COMPRESSION           : DISABLED
RETENTION                     : NOT APPLY
BIGFILE                       : NO
PREDICATE_EVALUATION          : HOST
ENCRYPTED                     : NO
COMPRESS_FOR                  :
DEF_INMEMORY                  : DISABLED
DEF_INMEMORY_PRIORITY         :
DEF_INMEMORY_DISTRIBUTE       :
DEF_INMEMORY_COMPRESSION      :
DEF_INMEMORY_DUPLICATE        :
SHARED                        : SHARED
DEF_INDEX_COMPRESSION         : DISABLED
INDEX_COMPRESS_FOR            :
DEF_CELLMEMORY                :
DEF_INMEMORY_SERVICE          :
DEF_INMEMORY_SERVICE_NAME     :
LOST_WRITE_PROTECT            : OFF
CHUNK_TABLESPACE              : N

PL/SQL procedure successfully completed.

“Gee thanks, Connor” I can hear you muttering. A demonstration of the flippin’ obvious!

But there is one thing that is not apparent from the export or import logs. Let’s take a squizz at the database alert log for the target, that is, the database we imported the tablespace into.


DW00 started with pid=69, OS id=632, wid=1, job SYS.SYS_IMPORT_TRANSPORTABLE_01
2017-04-18T14:17:34.208631+08:00
Plug in tablespace DEMO with datafile
  'C:\oracle\oradata\db122\DEMO.DBF'
2017-04-18T14:17:46.199645+08:00
ALTER TABLESPACE "DEMO" READ WRITE
Completed: ALTER TABLESPACE "DEMO" READ WRITE
2017-04-18T14:17:46.665512+08:00
ALTER TABLESPACE "DEMO" READ ONLY
Completed: ALTER TABLESPACE "DEMO" READ ONLY

That is a change in 12c. Whilst our imported tablespace ends up as read only as it has always done, during the import process, there was a small window where the tablespace was READ WRITE. This is needed to make some metadata corrections to the tablespace on the way in.

So if you do intend to share tablespaces between databases – that is, share a single copy of the file – make sure you take some precautions. On my Windows laptop, standard Windows file locking prohibited me from causing any damage to my source datafile, but on other platforms you might want to set those files to read only at the OS level just in case. Of course, you’ll then see a warning during the Datapump import saying that the momentary change to read/write could not be done, but that is not a critical problem.  The transport will still complete.

Sharing a tablespace between 2 databases

I was reading an interesting discussion today about multiple databases each containing large amounts of read-only data.  If that read-only data is common, then it would make sense to have a single copy of that data and have both databases share it.

Well, as long as you can isolate that data into its own tablespace, then you can do that easily with Oracle by transporting the metadata between two databases and leaving the files in place.

Here’s an example

Source database


SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
CORE    12.1.0.2.0      Production
TNS for 64-bit Windows: Version 12.1.0.2.0 - Production
NLSRTL Version 12.1.0.2.0 - Production

SQL> create tablespace i_am_on_121 datafile 'C:\oracle\oradata\tts\my_tspace' size 50m;

Tablespace created.

SQL> create table t tablespace i_am_on_121 as select * from dba_objects;

Table created.

SQL> alter tablespace i_am_on_121 read only;

Tablespace altered.

C:\>expdp transport_tablespaces=i_am_on_121 directory=TEMP dumpfile=tts.dmp

Export: Release 12.1.0.2.0 - Production on Fri Apr 14 08:50:24 2017

Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.

Username: mcdonac/*****

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "MCDONAC"."SYS_EXPORT_TRANSPORTABLE_01":  mcdonac/******** transport_tablespaces=i_am_on_121 directory=TEMP dumpfile=tts.dmp
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "MCDONAC"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for MCDONAC.SYS_EXPORT_TRANSPORTABLE_01 is:
  C:\TEMP\TTS.DMP
******************************************************************************
Datafiles required for transportable tablespace I_AM_ON_121:
  C:\ORACLE\ORADATA\TTS\MY_TSPACE
Job "MCDONAC"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at Fri Apr 14 08:51:16 2017 elapsed 0 00:00:47

and then we import it into a different database (and this one is even a different version!).

Target database


C:\Users\hamcdc>impdp transport_datafiles=C:\oracle\oradata\tts\my_tspace directory=TEMP dumpfile=tts.dmp

Import: Release 12.2.0.1.0 - Production on Fri Apr 14 08:51:28 2017

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

Username: mcdonac/*****

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Master table "MCDONAC"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "MCDONAC"."SYS_IMPORT_TRANSPORTABLE_01":  mcdonac/******** transport_datafiles=C:\oracle\oradata\tts\my_tspace directory=TEMP dumpfile=tts.dmp
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER


SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
PL/SQL Release 12.2.0.1.0 - Production
CORE    12.2.0.1.0      Production
TNS for 64-bit Windows: Version 12.2.0.1.0 - Production
NLSRTL Version 12.2.0.1.0 - Production

SQL> select count(*) from t;

  COUNT(*)
----------
     92934

SQL> select * from dba_tablespaces
  2  where tablespace_name = 'I_AM_ON_121'
  3  @pr
==============================
TABLESPACE_NAME               : I_AM_ON_121
BLOCK_SIZE                    : 8192
INITIAL_EXTENT                : 65536
NEXT_EXTENT                   :
MIN_EXTENTS                   : 1
MAX_EXTENTS                   : 2147483645
MAX_SIZE                      : 2147483645
PCT_INCREASE                  :
MIN_EXTLEN                    : 65536
STATUS                        : READ ONLY
CONTENTS                      : PERMANENT
LOGGING                       : LOGGING
FORCE_LOGGING                 : NO
EXTENT_MANAGEMENT             : LOCAL
ALLOCATION_TYPE               : SYSTEM
PLUGGED_IN                    : YES
SEGMENT_SPACE_MANAGEMENT      : AUTO
DEF_TAB_COMPRESSION           : DISABLED
RETENTION                     : NOT APPLY
BIGFILE                       : NO
PREDICATE_EVALUATION          : HOST
ENCRYPTED                     : NO
COMPRESS_FOR                  :
DEF_INMEMORY                  : DISABLED
DEF_INMEMORY_PRIORITY         :
DEF_INMEMORY_DISTRIBUTE       :
DEF_INMEMORY_COMPRESSION      :
DEF_INMEMORY_DUPLICATE        :
SHARED                        : SHARED
DEF_INDEX_COMPRESSION         : DISABLED
INDEX_COMPRESS_FOR            :
DEF_CELLMEMORY                :
DEF_INMEMORY_SERVICE          :
DEF_INMEMORY_SERVICE_NAME     :
LOST_WRITE_PROTECT            : OFF
CHUNK_TABLESPACE              : N

PL/SQL procedure successfully completed.

12c Release 2, set feedback enhancement in SQL*Plus

There’s a nice little touch to the “set feedback” command in SQL*Plus in 12.2. There is a new ONLY clause in the SET FEEDBACK command, so you can run queries without the resulting rows being shown in the display. They are still queried, fetched and “returned”, just not rendered to the screen. Here’s an example.


$ sqlplus hr/hr

SQL*Plus: Release 12.2.0.1.0 Production on Tue Mar 14 22:59:15 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Last Successful login time: Sat Mar 11 2017 01:59:20 -04:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select * from regions;

 REGION_ID REGION_NAME
---------- -------------------------
         1 Europe
         2 Americas
         3 Asia
         4 Middle East and Africa

So that’s normal behaviour. Let’s now use the new ONLY option.


SQL> set feedback only 
SQL> select * from regions;

4 rows selected.

So why would you want that? Well, sometimes you just want to run the query so that you can use a subsequent DBMS_XPLAN.DISPLAY_CURSOR call to see the true execution plan. Or perhaps you just want to examine some options with regard to fetch performance. For example, here’s a demo of fetching from a large table called EMP.
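As an aside, that DBMS_XPLAN approach looks something like this (a sketch – DISPLAY_CURSOR with no arguments reports on the last statement executed in the session):

```sql
SQL> set feedback only
SQL> select * from regions;

4 rows selected.

SQL> set feedback on
SQL> select * from table(dbms_xplan.display_cursor);
```

With that out of the way, on to the fetch performance demo.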


SQL> show arraysize
arraysize 10

SQL> set timing on
SQL> select * from emp;

1753088 rows selected.

Elapsed: 00:00:26.27

So that took 26 seconds, with an arraysize of 10. Let’s see if we can do better than that – we’ll bump up the arraysize to 200


SQL> set arraysize 200
SQL> select * from emp;

1753088 rows selected.

Elapsed: 00:00:04.65

Wow, that’s pretty cool. Nearly six times faster just by tweaking the batch size of the fetch. Surely then we can just keep bumping it up?


SQL> set arraysize 5000
SQL> select * from emp;

1753088 rows selected.

Elapsed: 00:00:04.43

Apparently not. There is a “sweet spot” for arraysize, with diminishing returns on performance as you go higher and higher (at the cost of consuming memory and resources on your client machine to drag all those rows down at once). But this post isn’t about arraysize – it’s merely a conduit for the nice new SET FEEDBACK ONLY feature, which lets us test out such things without having to see all the rows presented back.