Partial Indexes – Take Care With Truncate

Partial indexes are a very cool feature that came along in Oracle 12c. The ability to control index existence at the partition level allows for a couple of obvious use cases:

1) You index the data in recent partitions only, because that relatively small amount of data is aggressively searched by applications and/or users, but you leave the older data unindexed because queries against it are either less frequent or more analytical in nature.

2) You index the data in older partitions only, because you are putting the recent data into the In-Memory store, so almost any kind of access to it is very fast, but you do not want to swamp the entire In-Memory store with older data that people rarely query. You still want reasonable access performance on that old data.

Whatever your use case, there is currently an outstanding issue with partial indexes that you need to be aware of. (This is not correct functionality, and it will be fixed in a future release.) If you truncate the table, the partial indexing definition in the dictionary may no longer be correctly observed for its partitions.

Here’s an example:


SQL> create table t ( x int, y int )
  2  partition by range ( x )
  3  (
  4    partition p1 values less than ( 1000 ) indexing on,
  5    partition p2 values less than ( 2000 ) indexing off
  6  );
 
Table created.
 
SQL> insert into t
  2  select rownum, rownum from dual connect by level < 2000;
 
1999 rows created.
 
SQL> create index ix on t ( x ) local indexing partial;
 
Index created.
 
SQL> select segment_name, partition_name
  2  from user_segments
  3  where segment_name = 'IX';
 
SEGMENT_NAME                   PARTITION_NAME
------------------------------ ------------------------------
IX                             P1
 
SQL> select partition_name, status
  2  from user_ind_partitions
  3  where index_name = 'IX';
 
PARTITION_NAME                 STATUS
------------------------------ --------
P1                             USABLE
P2                             UNUSABLE

So far so good… partial indexing is working as expected. Then I truncate the table:


SQL> truncate table t;
 
Table truncated.
 
SQL>
SQL> select segment_name, partition_name
  2  from user_segments
  3  where segment_name = 'IX';
 
SEGMENT_NAME                   PARTITION_NAME
------------------------------ ------------------------------
IX                             P1
IX                             P2
 
SQL> select partition_name, status
  2  from user_ind_partitions
  3  where index_name = 'IX';
 
PARTITION_NAME                 STATUS
------------------------------ --------
P1                             USABLE
P2                             USABLE
 

And now we seem to have dropped our knowledge of partial indexing, and the second index partition has sprung into existence. The dictionary definition for the table remains unaffected:


SQL> select partition_name, indexing
  2  from   user_tab_partitions
  3  where  table_name = 'T';
 
PARTITION_NAME                 INDE
------------------------------ ----
P1                             ON
P2                             OFF

The workaround is easy. You simply re-issue the partial indexing command:


SQL> alter table t modify partition p2 indexing off;

Table altered.

SQL>
SQL> select segment_name, partition_name
  2  from user_segments
  3  where segment_name = 'IX';

SEGMENT_NAME                   PARTITION_NAME
------------------------------ ---------------------------
IX                             P1

But obviously, you need to remember to do this.
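
If you want a quick way to spot partitions that have fallen foul of this after a truncate, the dictionary already has everything you need. Here is a sketch of such a check (it assumes the default behaviour where local index partitions inherit the table partition names):

SQL> select ip.index_name, ip.partition_name
  2  from   user_indexes i,
  3         user_ind_partitions ip,
  4         user_tab_partitions tp
  5  where  i.indexing        = 'PARTIAL'
  6  and    ip.index_name     = i.index_name
  7  and    tp.table_name     = i.table_name
  8  and    tp.partition_name = ip.partition_name
  9  and    tp.indexing       = 'OFF'
 10  and    ip.status         = 'USABLE';

Any rows returned are index partitions that have sprung back into existence against table partitions flagged INDEXING OFF, and each can be cleaned up by re-issuing the ALTER TABLE ... MODIFY PARTITION ... INDEXING OFF command as above.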

Easy as pi… hole.

A slight digression from my normal database-focussed content today.

In internet bandwidth- and latency-strapped Perth (Western Australia), every last drop of internet counts. Recently I stumbled upon this blog post by Troy Hunt about using a Raspberry Pi as a local DNS server to filter out unnecessary content. Personally, I don’t care about the content as such (I don’t like ads, but I do acknowledge that they are generally a necessary “evil”); for me it is about getting the most performance out of my lowly internet connection until the technology evolves in Australia.

So I headed over to https://pi-hole.net/ and downloaded the installation script. I don’t have a Raspberry Pi, which is its intended platform, but I do have VirtualBox floating around, because I use it extensively for the Oracle VMs that I need for tackling AskTOM questions on various versions of the database. I initially tried to install Pi-hole on an existing Oracle Enterprise Linux installation, but the script makes a fair few assumptions about the platform, and I had a lot of dramas. So I just downloaded a fresh install of Fedora 28, got it up and running in a small VM and voila! Pi-hole installed pretty much straight out of the gate.

Subjectively, things definitely seem a lot snappier now. My console suggests that a quarter of my traffic is being filtered out!


And I’m only using the pi-hole on a few machines at home currently. I’ll give it a few more days before I take the big jump and make it the DNS for my router so that all my devices can get the benefit.

But so far, so good. If you’re laden with slow clunky internet like I am, perhaps give pi-hole a look.

My APEX 18.2 upgrade in a nutshell

As always, you should read the Installation/Upgrade manual from top to bottom before upgrading any piece of software, and be aware of all of the prerequisites and processes. But for me, my Application Express 18.2 upgrade was as simple as:

  • Download Application Express
  • Unzip to my target location
  • sqlplus / as sysdba
    • SQL> @apexins.sql SYSAUX SYSAUX TEMP /i/
  • Go to my ORDS installation
    • java -jar ords.war validate
  • sqlplus / as sysdba
    • SQL> @apex_rest_config.sql

And voila! A freshly upgraded Application Express in a matter of minutes…


Sweet!

OpenWorld Monday…I’m so screwed

As I mentioned in a previous blog post, I whipped up a small Application Express application to let me plan out my OpenWorld activities. But there’s a small problem with being able to quickly and easily find things based on the schedule… There is too much to see!!!

I had a quick glance through the schedule just for Monday, and already I can see myself being very frustrated with all of the content that I am going to have to miss out on, simply because it clashes with other content I’d like to see.

And just to compound things, with any luck I’ll be able to score some time in the theatre in the demo grounds to give some lightning talks in both the morning and afternoon. More details on that soon!

Addendum: Yes I did score some time! Come see my SQL Tuning in 20mins session. Details here.

But here is my wish list below, just smashed full of conflicts… sigh.

Monday morning

The New Auto-Upgrade for Oracle Databases [TRN4031]
Daniel Overby Hansen, Database Administrator, SimCorp
Mike Dietrich, Master Product Manager – Database Upgrades and Migrations, Oracle
Monday, Oct 22, 9:00 a.m. – 9:45 a.m. | Moscone West – Room 3004

which overlaps with

High Availability and Sharding Deep Dive with Next-Generation Oracle Database [TRN4032]
Wei Hu, Vice President of Product Development, Oracle
Monday, Oct 22, 9:00 a.m. – 9:45 a.m. | Moscone West – Room 3007

which overlaps with

Oracle Linux and Oracle VM VirtualBox: The Enterprise Development Platform [PRO4724]
Simon Coter, Director of Product Management – Oracle VM & VirtualBox, Oracle
Sergio Leunissen, Vice President of Product Management, Oracle
Monday, Oct 22, 9:00 a.m. – 9:45 a.m. | Moscone South – Room 152

which overlaps with

A Day in the Life of a DBA in an Autonomous World [PRO4376]
Akshay Sangaonkar, Senior Principal Product Manager, Oracle
Karl Dias, Vice President, Database Manageability, Oracle
Monday, Oct 22, 9:00 a.m. – 9:45 a.m. | Marriott Marquis (Golden Gate Level) – Golden Gate C3

which overlaps with

Make AppDev Fun Again, with the Oracle RAD Stack [PRO4087]
Kris Rice, Senior Director, Oracle
Michael Hichwa, Software Development VP, Oracle
Joel Kallman, Senior Director, Software Development, Oracle
Monday, Oct 22, 9:00 a.m. – 9:45 a.m. | Moscone West – Room 3009

which overlaps with

Using Deep Learning and Neural Networks in Oracle Database 18c [BUS1891]
Brendan Tierney, Oralytics
Neil Chandler, Database Administrator, Chandler Systems Ltd
Monday, Oct 22, 9:00 a.m. – 9:45 a.m. | Moscone West – Room 3001

which overlaps with

Analytic Functions: A Developer’s Best Friend [DEV6244]
Timothy Hall, DBA, Developer, Author, Trainer, oracle-base.com
Code One Tracks: Database, Big Data, and Data Science
Session Type: Developer Session
Monday, Oct 22, 9:00 a.m. – 9:45 a.m. | Moscone West – Room 2003

 

Monday mid-morning

Oracle Linux and Oracle VM: Get Trained for Cloud, Hybrid, and On-Premises [TRN5828]
Avi Miller, Product Management Director, Oracle
Antoinette O’Sullivan, Director of Training and Documentation for Linux and Virtualization, Oracle
Monday, Oct 22, 10:30 a.m. – 11:15 a.m. | Moscone South – Room 154

which overlaps with

Why Citizen Developers Should Be Your New Best Friend [TRN4091]
David Peake, Senior Principal Product Manager, Oracle
Monday, Oct 22, 10:30 a.m. – 11:15 a.m. | Moscone West – Room 3009

which overlaps with

Oracle Autonomous Database Cloud [PKN3947]
Andrew Mendelsohn, Executive Vice President Database Server Technologies, Oracle
Monday, Oct 22, 11:00 a.m. – 12:15 p.m. | The Exchange @ Moscone South – The Arena

which overlaps with

3D: Docker for Database Developers [TIP1247]
Roel Hartman, Director, APEX Consulting
Monday, Oct 22, 11:30 a.m. – 12:15 p.m. | Moscone West – Room 3020

which overlaps with

Oracle Linux: State of the Penguin [PRO4720]
Wim Coekaerts, Senior Vice President, Operating Systems and Virtualization Engineering, Oracle
Monday, Oct 22, 11:30 a.m. – 12:15 p.m. | Moscone West – Room 2000

which overlaps with

50 Shades of Data: How, When, Why—Big, Relational, NoSQL, Elastic, Graph, Event [DEV4976]
Lucas Jellema, CTO, AMIS Services BV
Code One Tracks: Database, Big Data, and Data Science, Java Server-Side Development and Microservices
Session Type: Developer Session
Monday, Oct 22, 10:30 a.m. – 11:15 a.m. | Moscone West – Room 2007

which overlaps with

Microservices: Get Rid of Your DBA and Send the DB into Burnout [DEV5504]
Franck Pachot, Computing Engineer, CERN
Code One Tracks: Database, Big Data, and Data Science
Session Type: Developer Session
Monday, Oct 22, 10:30 a.m. – 11:15 a.m. | Moscone West – Room 2003

which overlaps with

REST-Enabled Neural Networks in Oracle Database 18c [DEV5026]
Brendan Tierney, Oralytics
Neil Chandler, Database Architect, Chandler Systems
Code One Tracks: Database, Big Data, and Data Science
Session Type: Developer Session
Monday, Oct 22, 11:30 a.m. – 12:15 p.m. | Moscone West – Room 2001

which overlaps with

Real-Life SQL Tuning: From Four Minutes to Eight Seconds in an Hour [DEV5668]
Liron Amitzi, Senior Database Consultant, Brillix LTD
Code One Tracks: Database, Big Data, and Data Science
Session Type: Developer Session
Monday, Oct 22, 11:30 a.m. – 12:15 p.m. | Moscone West – Room 2003

 

Monday afternoon

Oracle Database in an Asynchronous World [TRN5515]
Dominic Giles, Master Product Manager, Oracle
Monday, Oct 22, 3:45 p.m. – 4:30 p.m. | Moscone West – Room 3003

which overlaps with

A Day in the Life of a Real-World Performance Engineer [TRN4026]
Graham Wood, Software Architect, Oracle
Robert Carlin, Software Development Manager, Real World Performance, Oracle Database Development, Oracle
Mihajlo Tekic, Oracle
Monday, Oct 22, 3:45 p.m. – 4:30 p.m. | Moscone West – Room 3004

which overlaps with

Build a Web App with Oracle REST Data Services and Oracle JavaScript Extension Toolkit [HOL6325]
Jeff Smith, Senior Principal Product Manager, Oracle
Ashley Chen, Senior Product Manager, Oracle
Colm Divilly, Consulting Member of Technical Staff, Oracle
Elizabeth Saunders, Principal Technical Staff, Oracle
Monday, Oct 22, 3:45 p.m. – 4:45 p.m. | Marriott Marquis (Yerba Buena Level) – Salon 3/4

 

Monday late afternoon

Oracle Optimizer and the Road to the Latest Generation of Oracle Database [PRO4009]
Nigel Bayliss, Senior Principal Product Manager, Oracle
Monday, Oct 22, 4:45 p.m. – 5:30 p.m. | Moscone West – Room 3005

which overlaps with

Oracle Autonomous OLTP Database Cloud Overview and Roadmap [PRO3978]
Maria Colgan, Master Product Manager, Oracle
Juan Loaiza, Senior Vice President, Oracle
Monday, Oct 22, 4:45 p.m. – 5:30 p.m. | Moscone West – Room 3003

which overlaps with

Data Warehouse Like a Tech Startup with Oracle Autonomous Data Warehouse Cloud [BUS3194]
Mark Rittman, Managing Director, MJR Analytics ltd
Monday, Oct 22, 4:45 p.m. – 5:30 p.m. | Moscone West – Room 3006

 

Monday evening

Data Management in a Microservices World [TIP4175]
Gerald Venzl, Senior Principal Product Manager, Oracle
Monday, Oct 22, 5:45 p.m. – 6:30 p.m. | Moscone West – Room 3003

which overlaps with

18(ish) Things Developers Will Love About Oracle Database 18c [PRO4093]
Chris Saxon, Developer Advocate for SQL, Oracle
Monday, Oct 22, 5:45 p.m. – 6:30 p.m. | Moscone West – Room 3009

which overlaps with

What’s New for Oracle SQL Developer [PRO4058]
Jeff Smith, Senior Principal Product Manager, Oracle
Monday, Oct 22, 5:45 p.m. – 6:30 p.m. | Moscone West – Room 3006

which overlaps with

What Everyone Should Know About Oracle Partitioning [PRO4046]
Hermann Baer, Senior Director Product Management, Oracle
Monday, Oct 22, 5:45 p.m. – 6:30 p.m. | Moscone West – Room 3005

 

Correcting datatypes with minimal downtime

Just a quick post here by request of an attendee of the September Office Hours. I had a demo converting the data type of a primary key without losing data and with minimal impact to the availability of the database, using DBMS_REDEFINITION. You can see that video here, but here is the script used to run that demo:



SQL> drop table t purge;

Table dropped.

SQL> drop table t_interim purge;

Table dropped.

--
-- Source table with string primary key that we want to convert to numeric
--
SQL>
SQL> create table t as
  2  select
  3     to_char(object_id) pk
  4    ,owner
  5    ,object_name
  6    ,subobject_name
  7    ,object_id
  8    ,data_object_id
  9  from all_objects
 10  where object_id is not null;

Table created.

SQL>
SQL> alter table t add primary key ( pk );

Table altered.

--
-- Empty interim table with a numeric primary key, representing our corrected data type
--
SQL>
SQL> create table t_interim as
  2  select
  3     object_id pk
  4    ,owner
  5    ,object_name
  6    ,subobject_name
  7    ,object_id
  8    ,data_object_id
  9  from all_objects
 10  where 1=0;

Table created.

SQL>
SQL> desc t
 Name                                                                    Null?    Type
 ----------------------------------------------------------------------- -------- -------------------------------------
 PK                                                                      NOT NULL VARCHAR2(40)
 OWNER                                                                   NOT NULL VARCHAR2(128)
 OBJECT_NAME                                                             NOT NULL VARCHAR2(128)
 SUBOBJECT_NAME                                                                   VARCHAR2(128)
 OBJECT_ID                                                               NOT NULL NUMBER
 DATA_OBJECT_ID                                                                   NUMBER

SQL>
SQL> desc t_interim
 Name                                                                    Null?    Type
 ----------------------------------------------------------------------- -------- -------------------------------------
 PK                                                                      NOT NULL NUMBER
 OWNER                                                                   NOT NULL VARCHAR2(128)
 OBJECT_NAME                                                             NOT NULL VARCHAR2(128)
 SUBOBJECT_NAME                                                                   VARCHAR2(128)
 OBJECT_ID                                                               NOT NULL NUMBER
 DATA_OBJECT_ID                                                                   NUMBER

--
-- Standard call will fail, because the columns do not align
--

SQL>
SQL> begin
  2    dbms_redefinition.start_redef_table
  3         (  uname           => user,
  4            orig_table      => 'T',
  5            int_table       => 'T_INTERIM'
  6            );
  7  end;
  8  /
begin
*
ERROR at line 1:
ORA-42016: shape of interim table does not match specified column mapping
ORA-06512: at "SYS.DBMS_REDEFINITION", line 109
ORA-06512: at "SYS.DBMS_REDEFINITION", line 3887
ORA-06512: at "SYS.DBMS_REDEFINITION", line 5208
ORA-06512: at line 2

--
-- Column mapping call fails, because the column being manipulated is the primary key
--

SQL>
SQL> declare
  2    l_colmap varchar2(200) :=
  3      q'{   to_number(pk) as pk
  4           ,owner
  5           ,object_name
  6           ,subobject_name
  7           ,object_id
  8           ,data_object_id}';
  9  begin
 10    dbms_redefinition.start_redef_table
 11         (  uname           => user,
 12            orig_table      => 'T',
 13            int_table       => 'T_INTERIM',
 14            col_mapping   => l_colmap
 15            );
 16  end;
 17  /
declare
*
ERROR at line 1:
ORA-42008: error occurred while instantiating the redefinition
ORA-12016: materialized view does not include all primary key columns
ORA-06512: at "SYS.DBMS_REDEFINITION", line 109
ORA-06512: at "SYS.DBMS_REDEFINITION", line 3887
ORA-06512: at "SYS.DBMS_REDEFINITION", line 5208
ORA-06512: at line 10

--
-- Abort this run to reset our objects
--

SQL>
SQL> begin
  2    dbms_redefinition.abort_redef_table
  3         (  uname           => user,
  4            orig_table      => 'T',
  5            int_table       => 'T_INTERIM'
  6            );
  7  end;
  8  /

PL/SQL procedure successfully completed.

--
-- Use the ROWID in lieu of the primary key
--

SQL>
SQL> declare
  2    l_colmap varchar2(200) :=
  3      q'{   to_number(pk) as pk
  4           ,owner
  5           ,object_name
  6           ,subobject_name
  7           ,object_id
  8           ,data_object_id}';
  9  begin
 10    dbms_redefinition.start_redef_table
 11         (  uname           => user,
 12            orig_table      => 'T',
 13            int_table       => 'T_INTERIM',
 14            col_mapping   => l_colmap,
 15            options_flag=>DBMS_REDEFINITION.cons_use_rowid
 16            );
 17  end;
 18  /

PL/SQL procedure successfully completed.

--
-- Add our subordinate objects
--

SQL>
SQL> alter table t_interim add primary key ( pk );

Table altered.
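
--
-- In this demo the only subordinate object is the primary key, which we
-- add manually because its data type is the whole point of the exercise.
-- In a real system with triggers, grants and other indexes, this is the
-- spot where you would typically lean on COPY_TABLE_DEPENDENTS instead.
-- A sketch of that call (constraints switched off here because we are
-- handling the primary key ourselves). If T is seeing heavy change, then
-- periodic calls to DBMS_REDEFINITION.SYNC_INTERIM_TABLE will also keep
-- the final step's window small.
--

SQL> declare
  2    l_errors pls_integer;
  3  begin
  4    dbms_redefinition.copy_table_dependents
  5         (  uname            => user,
  6            orig_table       => 'T',
  7            int_table        => 'T_INTERIM',
  8            copy_indexes     => dbms_redefinition.cons_orig_params,
  9            copy_triggers    => true,
 10            copy_constraints => false,
 11            copy_privileges  => true,
 12            ignore_errors    => false,
 13            num_errors       => l_errors );
 14  end;
 15  /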

--
-- And finally finish off the process (this is the window of unavailability)
--

SQL>
SQL> begin
  2    dbms_redefinition.finish_redef_table
  3         (  uname           => user,
  4            orig_table      => 'T',
  5            int_table       => 'T_INTERIM');
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL>
SQL> desc t
 Name                                                                    Null?    Type
 ----------------------------------------------------------------------- -------- -------------------------------------
 PK                                                                      NOT NULL NUMBER
 OWNER                                                                   NOT NULL VARCHAR2(128)
 OBJECT_NAME                                                             NOT NULL VARCHAR2(128)
 SUBOBJECT_NAME                                                                   VARCHAR2(128)
 OBJECT_ID                                                               NOT NULL NUMBER
 DATA_OBJECT_ID                                                                   NUMBER

Enjoy!

Datatype conversion laziness … yet another reason

I’ve got two words to describe my thoughts on being lazy with data types… JUST DON’T!

There have been many blog posts, videos, AskTOM questions, etc. over the years with numerous examples of how the optimizer gets confused, how the SQL engine gets confused… and even how developers will ultimately get confused if you do not spend that tiny bit of extra effort in ensuring data type consistency.

I personally wish we had an init.ora parameter called (say) “response_to_data_type_conversion” = IMPLICIT (default) or ERROR, so that when we encountered a data type conversion we could choose to report it back to the developer as an error, or implicitly try to convert it as we currently do. Then we could have it set to “error” at least in non-production environments to get a handle on where we are not being diligent.

But anyway, enough ranting. The reason for this blog post is to give yet another example of how making assumptions about data type handling can lead to spurious errors.

Let’s assume we have a simple requirement – to collapse a result set into a single CSV result stored in a CLOB. Here is my first cut at the code:



SQL> set serverout on
SQL> declare
  2    l_csv  clob := empty_clob();
  3  begin
  4   for i in (select *
  5             from dba_objects )
  6   loop
  7        l_csv := l_csv || i.owner||','||i.object_name||','||i.object_id;
  8    end loop;
  9    dbms_output.put_line('length=' || dbms_lob.getlength(l_csv));
 10  end;
 11  /
declare
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error
ORA-06512: at line 7
ORA-06512: at line 7

And it fails. To get an idea as to why and where it fails, I’ll add an exception handler to capture the state of the CLOB.



SQL> set serverout on
SQL> declare
  2    l_csv  clob := empty_clob();
  3  begin
  4   for i in (select *
  5             from dba_objects )
  6   loop
  7        l_csv := l_csv || i.owner||','||i.object_name||','||i.object_id;
  8    end loop;
  9    dbms_output.put_line('length=' || dbms_lob.getlength(l_csv));
 10  exception
 11    when others then
 12      dbms_output.put_line('length=' || dbms_lob.getlength(l_csv));
 13      raise;
 14  end;
 15  /
length=32776
declare
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error
ORA-06512: at line 13
ORA-06512: at line 7
ORA-06512: at line 7

Now that number looks “interesting”, sitting right at the 32K length limit for a VARCHAR2 in PL/SQL. But why would a VARCHAR2 come into play here? We are dealing with a CLOB, which should be allowed to get much larger. To further confuse things, look what happens when I perform functionally the same operation, but use an intermediate variable for each row fetched from the loop.



SQL> declare
  2    l_csv  clob := empty_clob();
  3    l_line varchar2(255);
  4  begin
  5   for i in (select *
  6             from dba_objects )
  7   loop
  8        l_line := i.owner||','||i.object_name||','||i.object_id;
  9        l_csv := l_csv || l_line;
 10    end loop;
 11    dbms_output.put_line('length ' || length(l_csv));
 12  end;
 13  /

PL/SQL procedure successfully completed.

And now it works! That seems extraordinary because the logic would appear to be identical.

The answer here is once again – not taking sufficient care with our data type conversions. The OBJECT_ID we are fetching is numeric. Because we are simply slamming that into a concatenation operator (||), we need some implicit data type conversion, and to achieve that, some casting to VARCHAR2 takes place. We are not privy to how the PL/SQL execution engine performs the conversion, but the error suggests that all components of the expression (including the left-hand side) are being cast to VARCHAR2, and hence our “clob” ultimately exceeds the 32K limit.

The resolution is simple – take control of the data type conversion as we should have done anyway:



SQL> set serverout on
SQL> declare
  2    l_csv  clob := empty_clob();
  3  begin
  4   for i in (select *
  5             from dba_objects )
  6   loop
  7        l_csv := l_csv || i.owner||','||i.object_name||','||to_char(i.object_id);
  8    end loop;
  9    dbms_output.put_line('length=' || dbms_lob.getlength(l_csv));
 10  end;
 11  /
length=3491433

PL/SQL procedure successfully completed.

The moral of the story remains the same. Consistent and explicit handling of data type conversions will give you more robust code.
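
As a closing aside (my own addition, a sketch rather than a recommendation): if the CLOB you are building gets very large, then even with correct conversions, repeated “||” concatenation can become a performance issue, and a temporary LOB populated via DBMS_LOB.WRITEAPPEND is a common alternative:

SQL> set serverout on
SQL> declare
  2    l_csv  clob;
  3    l_line varchar2(32767);
  4  begin
  5    -- cached temporary LOB instead of growing a CLOB by concatenation
  6    dbms_lob.createtemporary(l_csv, true);
  7    for i in (select * from dba_objects)
  8    loop
  9      -- build each row in a VARCHAR2 buffer, with an explicit TO_CHAR
 10      l_line := i.owner||','||i.object_name||','||to_char(i.object_id);
 11      dbms_lob.writeappend(l_csv, length(l_line), l_line);
 12    end loop;
 13    dbms_output.put_line('length=' || dbms_lob.getlength(l_csv));
 14    dbms_lob.freetemporary(l_csv);
 15  end;
 16  /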

Another little 12c improvement

You’ve got a huge table, right? Massive! Immense! And then something bad happens. You get asked to remove one of the columns from that table.

“No problem” you think. “I won’t run the ‘drop column’ command because that will visit every block and take forever!”

So you settle on the perfect tool for such a scenario – simply mark the column as unused so that it is no longer available to application code and the developers that write that code.

But there’s a catch that not many people know about. SET UNUSED is meant to be just a trivial database dictionary operation. We don’t touch the data, we only manipulate the column definition, and thus an execution of SET UNUSED should be instantaneous. This is indeed the case most of the time, but as the example below shows, the way the column was added to the table can have a bearing on what it costs to have that column removed.


SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production

--
-- large-ish table to monitor the impact
--
SQL> create table t1
  2   as
  3   select d.*
  4   from dba_objects d,
  5   ( select 1 from dual connect by level <= 100);

Table created.

SQL> set timing on
SQL> select count(*) from t1;

  COUNT(*)
----------
   8713700

Elapsed: 00:00:01.92

--
-- Take an existing column to UNUSED ... instantaneous!
--
SQL> alter table t1 set unused column created;

Table altered.

Elapsed: 00:00:00.06


--
-- Add a new column, and then make it UNUSED ... instantaneous!
--
SQL> alter table t1 add new_col1 timestamp;

Table altered.

Elapsed: 00:00:00.01
SQL> alter table t1 set unused column new_col1;

Table altered.

Elapsed: 00:00:00.00


--
-- Add a new column with a NOT NULL default, and then make it UNUSED ... careful!
--
SQL> alter table t1 add new_col2 timestamp default sysdate not null;

Table altered.

Elapsed: 00:00:00.00
SQL> alter table t1 set unused column new_col2;

Table altered.

Elapsed: 00:01:35.39

It appears that the NOT NULL default, which was only a dictionary-level operation when the column was added, must be physically materialized into every row before the column can be marked unused, hence the 95 seconds of work. You will not get the same issue in 12c:


SQL> select banner from v$version;

BANNER
-----------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> create table t1  nologging
  2   as
  3   select d.*
  4   from dba_objects d,
  5   ( select 1 from dual connect by level <= 100);

Table created.

SQL>
SQL> set timing on
SQL> select count(*) from t1;

  COUNT(*)
----------
   7951500

Elapsed: 00:00:02.20
SQL> alter table t1 set unused column created;

Table altered.

Elapsed: 00:00:00.08
SQL>
SQL> alter table t1 add new_col1 timestamp;

Table altered.

Elapsed: 00:00:00.00
SQL> alter table t1 set unused column new_col1;

Table altered.

Elapsed: 00:00:00.00
SQL>
SQL> alter table t1 add new_col2 timestamp default sysdate not null;

Table altered.

Elapsed: 00:00:00.01
SQL> alter table t1 set unused column new_col2;

Table altered.

Elapsed: 00:00:00.01
SQL>
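
One final reminder, and this is my own addition rather than part of the demo above: an unused column still occupies space in every row until you drop it for real, so at some point you will want a maintenance window for the physical clean-up. The CHECKPOINT clause limits the undo generated along the way, and DBA_UNUSED_COL_TABS shows what is pending:

SQL> alter table t1 drop unused columns checkpoint 10000;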

Partial indexing – get the dictionary definitions right

Just a quick post to clear up some confusion about the partial indexing capabilities in Oracle Database 12c and above. I was at an event in the Oracle Perth office last week, and an attendee mentioned that they thought the feature was not working or was not enabled on their database, and presented the following demonstration to prove their case:



SQL> create table t ( x int, y int )
  2  partition by range ( x )
  3  (
  4    partition p1 values less than ( 1000 ) indexing on,
  5    partition p2 values less than ( 2000 ) indexing off
  6  );

Table created.

SQL> insert into t
  2  select rownum, rownum from dual connect by level < 2000;

1999 rows created.

SQL> create index ix on t ( x ) local;

Index created.

SQL> select segment_name, partition_name
  2  from user_segments
  3  where segment_name = 'IX';

SEGMENT_NAME                   PARTITION_NAME
------------------------------ ------------------------------
IX                             P1
IX                             P2

SQL> select partition_name, status
  2  from user_ind_partitions
  3  where index_name = 'IX';

PARTITION_NAME                 STATUS
------------------------------ --------
P1                             USABLE
P2                             USABLE

At first glance, this looks counter-intuitive. I have explicitly specified that I do not want indexing on partition P2, yet after creating a local index, I still have 2 segments, one for each partition, and double-checking USER_IND_PARTITIONS tells me that they are both “fully-fledged” usable index partitions.

As per the documentation, nominating INDEXING ON / OFF at the partition level in the table definition is not the whole story. When you create the index, you need to inform the database that you wish that particular index to respect that intent. This is because you might want some indexes to be partial and others not.

So by slightly changing my CREATE INDEX statement, I can get the desired outcome.


SQL> drop index ix;

Index dropped.

SQL> create index ix on t ( x ) local indexing partial;

Index created.

SQL> select segment_name, partition_name
  2  from user_segments
  3  where segment_name = 'IX';

SEGMENT_NAME                   PARTITION_NAME
------------------------------ ------------------------------
IX                             P1

SQL> select partition_name, status
  2  from user_ind_partitions
  3  where index_name = 'IX';

PARTITION_NAME                 STATUS
------------------------------ --------
P1                             USABLE
P2                             UNUSABLE
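
And if you ever need to check after the fact which of your indexes were created as partial, USER_INDEXES records it; the INDEXING column shows FULL or PARTIAL:

SQL> select index_name, indexing
  2  from   user_indexes
  3  where  table_name = 'T';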

Connor and Chris at OpenWorld

Chris and I will be at OpenWorld next month. Our sessions are listed below, and you can click the links to add entries to your calendar to make sure you don’t miss us. Don’t forget to use the official Schedule Builder to make sure you have booked your spot at our sessions!

If you can’t get to our sessions, you might catch us wandering the halls or at The Hub. Feel free to come and say Hello and talk tech!

See you at OpenWorld!

Addendum: See the slides from my talks here

Calendar ICS files

Tune ANY SQL in 20 minutes (.ics)

Fast Lane to Database Success (.ics)

Developers – Don’t Be The Person That Discombobulates Your Database (.ics)

18 Things Developers Will Love About Database 18c (.ics)

Why Isn’t My Query Using An Index (.ics)

 


The full stack developer… is BACK for 2018!

Last year, I flexed my technology muscles by building on the fine ground work of Lucas Jellema, using some Node, some REST, and some JSON to extract the full Oracle OpenWorld speaker catalogue, and then added some JSON parsing in the database and some SQL, and slapped a nice helping of Application Express on top of it all, to end up with an Application Express rendition of the Oracle OpenWorld speaker catalogue.

So it is that time of year again, and whilst some of the bits and pieces have changed, I’ve managed to stumble my way through all of the different pieces once again, and made some enhancements along the way to give you the new and improved 2018 version!

It’s waiting there on apex.oracle.com for you to explore.

https://tinyurl.com/oow18catalog

Enjoy!

Modifying tables without losing materialized views

Whilst the ever-increasing speed of storage and servers, and features like In-Memory, are decreasing the need for large numbers of materialized views in databases, there are still plenty of use cases where they can be a useful performance or integrity tool.

But what if that materialized view takes minutes or hours to build? Then even if it is fast refresh enabled, fast refresh is of no use if we have to rebuild the entire materialized view from scratch due to a structural change in the underlying table.

For example, let’s say I have a table and a simple fast refresh materialized view on that table as below:


SQL> create table t(x number(8) primary key);

Table created.

SQL> insert into t values(55);

1 row created.

SQL> create materialized view log on t with primary key, rowid;

Materialized view log created.

SQL> create materialized view mv
  2  build immediate
  3  refresh fast on demand as
  4  select * from t;

Materialized view created.

SQL>
SQL> select * from t;

         X
----------
        55

1 row selected.

SQL> select * from mv;

         X
----------
        55

1 row selected.

SQL> insert into t values (10);

1 row created.

SQL> insert into t values (20);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from t;

         X
----------
        10
        20
        55

3 rows selected.

SQL> select * from mv;

         X
----------
        55

1 row selected.

SQL>
SQL> exec dbms_mview.refresh('mv');

PL/SQL procedure successfully completed.

SQL> select * from mv;

         X
----------
        10
        20
        55

3 rows selected.

For the sake of this discussion, let’s assume the build of the materialized view takes hours. Naturally we want to avoid having to do a rebuild (or a complete refresh) of that materialized view. But then… the inevitable happens. We need to change the table T. In this case, the values for column X now exceed the limits of its definition.


SQL> insert into t values (123123123);
insert into t values (123123123)
                      *
ERROR at line 1:
ORA-01438: value larger than specified precision allowed for this column

Obviously, this is a trivial fix for the table. We simply alter the column to make it larger (which does not require any downtime or reorganization of the data).


SQL> alter table t modify x number(10);

But all is not well… Having a materialized view means that either the materialized view log or the materialized view itself may have replicas of that column, so they remain “broken”:


SQL> insert into t values (123123123);
insert into t values (123123123)
            *
ERROR at line 1:
ORA-12096: error in materialized view log on "MCDONAC"."T"
ORA-01438: value larger than specified precision allowed for this column

Notice the subtle difference in the error message. It is not an error on table T, but an error on the materialized view log. You might have the urge to simply jump in and run the alter commands on the materialized view log and the materialized view. And you might even find that this approach works. But please note – this approach is not supported, and thus we can’t guarantee that it will (a) work, or (b) not create problems later down the track when you attempt to refresh the view or perform other operations.

But if direct alteration is not supported, how do we solve the problem without having to rebuild the entire materialized view from scratch?

The solution here is the option to preserve the materialized view as a standalone table. We can drop the definition of the materialized view but hold on to the table that supports it. Now that it is a standalone table, we can alter the column definition so that it matches our source table T. Notice that before I drop the definition, I perform a final refresh to make sure the materialized view is totally up to date – so there is a little bit of coordination required here to make sure that you do not lose any changes that occur to table T during the process.


SQL> exec dbms_mview.refresh('mv');

PL/SQL procedure successfully completed.
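
--
-- (An extra check of my own, not part of the original demo: confirm the
--  materialized view log is empty after the final refresh, so no changes
--  to T can be lost. MLOG$_T is the default name of the log table on T.)
--

SQL> select count(*) from mlog$_t;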

SQL> drop materialized view mv preserve table;

Materialized view dropped.

SQL> drop materialized view log on t ;

Materialized view log dropped.

SQL>
SQL> alter table t modify x number(10);

Table altered.

SQL> alter table mv modify x number(10);

Table altered.

We still don’t have our materialized view back though. But we can recreate it without needing a full build cycle, using the PREBUILT table clause.


SQL> create materialized view log on t with PRIMARY KEY, rowid;

Materialized view log created.

SQL>
SQL> create materialized view mv
  2  on prebuilt table
  3  refresh fast  on demand as
  4  select * from t;

Materialized view created.

And we are done! An easy and supported means of altering the materialized view structure without a full rebuild of the data.
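
As a final sanity check (my own addition, with the transcript output omitted), it is worth confirming that the value that started all the trouble now goes in cleanly and flows through a fast refresh:

SQL> insert into t values (123123123);
SQL> commit;
SQL> exec dbms_mview.refresh('mv','F')
SQL> select * from mv;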

Complex materialized views? Try a table first

Just a quick post today that arose from an AskTOM question a little while back. Over the years and across the evolution of various versions of the Oracle database, the amount of “power” you can pack into a CREATE TABLE statement has grown. For example, I can do a CREATE-TABLE-AS-SELECT, along with a partitioning clause, including an explicit CREATE INDEX command, and add constraints, all in one atomic operation.



SQL> create table t
  2    (id1,
  3     id2,
  4     owner,
  5     object_name,
  6     constraint t_pk primary key ( id1) using index
  7        ( create index t_pk on t ( id1, owner) )
  8    )
  9  partition by range ( id2 )
 10  (
 11    partition p1 values less than ( 100000 ),
 12    partition p2 values less than ( 200000 )
 13  )
 14  as
 15    select
 16      object_id id1,
 17      object_id id2,
 18      owner,
 19      object_name
 20  from dba_objects
 21  where object_id is not null;

Table created.

That’s pretty cool, but one of the most common times you will be writing DDL that includes a “select * from” as part of the definition is during the creation of materialized views, because the view text naturally will contain a select statement. Unfortunately, we are not quite as “generous” when it comes to accepting all of the various physical implementation options in the DDL for materialized views. Trying to create a materialized view that matches our table T above is a struggle.


SQL> create materialized view t_mv
  2    (id1,
  3     id2,
  4     owner,
  5     object_name,
  6     constraint t_mv_pk primary key ( id1) using index
  7        ( create index t_mv_pk on t_mv ( id1, owner) )
  8    )
  9  partition by range ( id2 )
 10  (
 11    partition p1 values less than ( 100000 ),
 12    partition p2 values less than ( 200000 )
 13  )
 14  refresh complete on demand
 15  as
 16    select
 17      id1,
 18      id2,
 19      owner,
 20      object_name
 21  from t;
   constraint t_mv_pk primary key ( id1) using index
              *
ERROR at line 6:
ORA-00907: missing right parenthesis


SQL>
SQL> create materialized view t_mv
  2    (id1,
  3     id2,
  4     owner,
  5     object_name,
  6     constraint t_mv_pk primary key ( id1) using index
  7        ( create index t_mv_pk on t_mv ( id1, owner) )
  8    )
  9  refresh complete on demand
 10  partition by range ( id2 )
 11  (
 12    partition p1 values less than ( 100000 ),
 13    partition p2 values less than ( 200000 )
 14  )
 15  as
 16    select
 17      id1,
 18      id2,
 19      owner,
 20      object_name
 21  from t;
   constraint t_mv_pk primary key ( id1) using index
              *
ERROR at line 6:
ORA-00907: missing right parenthesis

When you encounter this limitation, don’t forget that one of the nice things you can do with materialized view creation is to pre-create the underlying table with all of its flexibility:



SQL> create table t_mv
  2    (id1,
  3     id2,
  4     owner,
  5     object_name,
  6     constraint t_mv_pk primary key ( id1) using index
  7        ( create index t_mv_pk on t_mv ( id1, owner) )
  8    )
  9  partition by range ( id2 )
 10  (
 11    partition p1 values less than ( 100000 ),
 12    partition p2 values less than ( 200000 )
 13  )
 14  as
 15    select
 16      id1,
 17      id2,
 18      owner,
 19      object_name
 20  from t;

Table created.

And then once that is done, you can use the PREBUILT TABLE clause to create your materialized view, which will now have all of the underlying physical structure elements you desired.



SQL> create materialized view t_mv
  2  on prebuilt table
  3  refresh complete on demand
  4  as select
  5      id1,
  6      id2,
  7      owner,
  8      object_name
  9  from t;

Materialized view created.
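
One closing tip on PREBUILT TABLE, based on my experience rather than the demo above: the column names and types of the pre-created table must line up with the materialized view query, and if there are benign precision differences between the two, the WITH REDUCED PRECISION clause lets the creation proceed anyway:

SQL> create materialized view t_mv
  2  on prebuilt table with reduced precision
  3  refresh complete on demand
  4  as select
  5      id1,
  6      id2,
  7      owner,
  8      object_name
  9  from t;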
