“Call me!” Many many times!

Some readers might recall the classic Blondie track “Call Me”.  Of course, some readers might be wishing that I wouldn’t harp on about great songs from the 80s. But bear with me, there is a (very tenuous) link to this post. If you haven’t heard the song, you can jump to the chorus right here.  Go on, I’ll wait until you get back.

This golden oldie is relevant when it comes to dealing with object types in PL/SQL, and in particular, when you are using them in a SQL-related context.  To set the scene, I’ll start with a trivial example – creating a simple object, utilizing that object in a function, and then using a simple SELECT-FROM-DUAL on that function to check that it works.


SQL> create or replace
  2  type three_values as object
  3   ( val_1 int,
  4     val_2 int,
  5     val_3 int
  6   );
  7  /

Type created.

SQL>
SQL> create or replace
  2  function f return three_values is
  3  begin
  4     return three_values(1,2,3);
  5  end;
  6  /

Function created.

SQL>
SQL> select f from dual;

F(VAL_1, VAL_2, VAL_3)
-----------------------------------------
THREE_VALUES(1, 2, 3)

So far so good. But as is often the case, the requirement is not to get the object as an entity in its own right, but to get at the scalar values within the object. Hence we wrap that query as an inline view, with an outer query to extract the individual columns:


SQL> select x.c.val_1, x.c.val_2, x.c.val_3
  2  from
  3  (select f c from dual ) x;

   C.VAL_1    C.VAL_2    C.VAL_3
---------- ---------- ----------
         1          2          3

1 row selected.

But let’s explore that a little more carefully. Observation of the SQL text might lead us to think that

  • we ran the inner query,
  • saved the result we’ve already seen into a temporary result of type THREE_VALUES,
  • then extracted the elements via the outer SELECT.

But now I’ll make a small amendment to the function. I’ll introduce a package variable so we can track executions:


SQL> create or replace package pkg as
  2    x int := 0;
  3  end;
  4  /

Package created.

SQL>
SQL> create or replace
  2  function f return three_values is
  3  begin
  4     pkg.x:= pkg.x+1;
  5     return three_values(1,2,3);
  6  end;
  7  /

Function created.

As you would expect, a simple SELECT-FROM-DUAL results in the function being executed once.


SQL> select f from dual;

F(VAL_1, VAL_2, VAL_3)
-----------------------------------------------------
THREE_VALUES(1, 2, 3)

1 row selected.

SQL> set serverout on
SQL> exec dbms_output.put_line('calls='||pkg.x);
calls=1

PL/SQL procedure successfully completed.

As before, I’ll now wrap that query in another SELECT to extract the column elements. I’ll reset my package variable to start the count again from zero.


SQL> exec pkg.x := 0;

PL/SQL procedure successfully completed.

SQL> select x.c.val_1, x.c.val_2, x.c.val_3
  2  from
  3  (select f c from dual ) x;

   C.VAL_1    C.VAL_2    C.VAL_3
---------- ---------- ----------
         1          2          3

1 row selected.

SQL> set serverout on
SQL> exec dbms_output.put_line('calls='||pkg.x);
calls=3

PL/SQL procedure successfully completed.

Notice the difference. The function was called three times. If that function were doing some “heavy lifting”, those additional executions might be a cause for concern. The number of calls to a function referenced from SQL has always been indeterminate, so this is nothing new, but it still might catch you by surprise. Additional references to those function scalar values throughout the SQL can bump the count up even more:


SQL> exec pkg.x := 0;

PL/SQL procedure successfully completed.

SQL> select x.c.val_1, x.c.val_2, x.c.val_3
  2  from
  3  (select f c from dual ) x
  4  where x.c.val_1 = 1 and x.c.val_2 = 2;

   C.VAL_1    C.VAL_2    C.VAL_3
---------- ---------- ----------
         1          2          3

1 row selected.

SQL> set serverout on
SQL> exec dbms_output.put_line('calls='||pkg.x);
calls=5

But conversely, you cannot simply assume that every reference will result in an additional execution. For example, adding an ORDER BY clause containing references does not increment the count.


SQL> exec pkg.x := 0;

PL/SQL procedure successfully completed.

SQL> select x.c.val_1, x.c.val_2, x.c.val_3
  2  from
  3  (select f c from dual ) x
  4  where x.c.val_1 = 1 and x.c.val_1+10 = 11
  5  order by x.c.val_1, x.c.val_2, x.c.val_3;

   C.VAL_1    C.VAL_2    C.VAL_3
---------- ---------- ----------
         1          2          3

1 row selected.

SQL> set serverout on
SQL> exec dbms_output.put_line('calls='||pkg.x);
calls=5

PL/SQL procedure successfully completed.

A 10053 trace on any of these queries reveals the reason why we can expect to see multiple executions. When I parse my initial query


select x.c.val_1, x.c.val_2, x.c.val_3
from
(select f c from dual ) x

then scrolling through the 10053 trace, I’ll end up with this:


Final query after transformations:******* UNPARSED QUERY IS *******
SELECT SYS_OP_ATG("SCOTT"."F"(),1,2,2) "C.VAL_1",SYS_OP_ATG("SCOTT"."F"(),2,3,2) "C.VAL_2",SYS_OP_ATG("SCOTT"."F"(),3,4,2) "C.VAL_3" 
FROM "SYS"."DUAL" "DUAL"

You can see that the query has been transformed to have explicit calls of the function for each column we requested.
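If you want to reproduce this, here is a sketch of how to capture a 10053 trace for a statement. (Note the trace is only written when the statement is hard parsed, and the file lands in your session’s trace directory.)

-- tag the trace file name so it is easy to find
alter session set tracefile_identifier = 'obj_calls';

-- switch on the optimizer trace
alter session set events '10053 trace name context forever, level 1';

-- (hard) parse the statement of interest
select x.c.val_1, x.c.val_2, x.c.val_3
from   (select f c from dual) x;

-- and switch it off again
alter session set events '10053 trace name context off';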

Can the multiple calls be avoided? Yes. There are a couple of simple options to explore here. The first is to use a scalar subquery, taking advantage of the query-level caching the database performs to avoid repeated executions of query sub-components.  Here are the previous examples repeated, but with a scalar subquery to access the function.


SQL> exec pkg.x := 0;

PL/SQL procedure successfully completed.

SQL> select x.c.val_1, x.c.val_2, x.c.val_3
  2  from
  3  (select ( select f from dual ) c
  4   from dual ) x;

   C.VAL_1    C.VAL_2    C.VAL_3
---------- ---------- ----------
         1          2          3

1 row selected.

SQL>
SQL> exec dbms_output.put_line('calls='||pkg.x);
calls=1

PL/SQL procedure successfully completed.

SQL>
SQL>
SQL>
SQL> exec pkg.x := 0;

PL/SQL procedure successfully completed.

SQL> select x.c.val_1, x.c.val_2, x.c.val_3
  2  from
  3  (select ( select f from dual ) c
  4   from dual ) x
  5  where x.c.val_1 = 1 and x.c.val_2 = 2;

   C.VAL_1    C.VAL_2    C.VAL_3
---------- ---------- ----------
         1          2          3

1 row selected.

SQL>
SQL> exec dbms_output.put_line('calls='||pkg.x);
calls=1

PL/SQL procedure successfully completed.

Alternatively, since we are effectively treating the object as a row source, we can reflect that by adding a nested table type and returning it from the function, which we can then query as a table:


SQL> create or replace
  2  type three_val_list as
  3   table of three_values
  4  /

Type created.

SQL>
SQL> create or replace
  2  function f return three_val_list is
  3  begin
  4     pkg.x:= pkg.x+1;
  5     return three_val_list(three_values(1,2,3));
  6  end;
  7  /

Function created.

SQL> exec pkg.x := 0;

PL/SQL procedure successfully completed.

SQL> select * from f();

     VAL_1      VAL_2      VAL_3
---------- ---------- ----------
         1          2          3

1 row selected.

SQL> exec dbms_output.put_line('calls='||pkg.x);
calls=1

PL/SQL procedure successfully completed.

Before you get carried away and start screaming “Performance Bug!” from the rooftops, the moral of the story here is probably to first ask yourself: Is there a valid reason for referencing my OBJECT data types via SQL? Very often I see the instantiation of any variable (complex or scalar) being done with a SELECT-FROM-DUAL. Don’t do it – it is almost never needed.
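For example, if all you need is the object in a PL/SQL variable, a direct assignment avoids the SQL engine entirely. A minimal sketch, assuming the original version of F that returned a single THREE_VALUES instance:

declare
  v three_values;
begin
  -- no SELECT-FROM-DUAL required; just call the function directly
  v := f();
  dbms_output.put_line(v.val_1 || ',' || v.val_2 || ',' || v.val_3);
end;
/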

But, if you have truly embraced the Object-Relational features and are genuinely gathering object types from SQL statements, then consider some of the workarounds above to avoid excessive function calls.

Attribute clustering….super cool

I’ve spoken about attribute clustering before here, here and here. So from that you can probably glean that I’m a fan.

I recently spoke about an example of this as well during my AskTOM Office Hours session, which you can watch below:

After posting that video, I had a follow-up question which I thought would be worth exploring. My examples so far on attribute clustering were either loading a table from empty, or issuing an “alter table move” on a table that already contained data. In both those situations, it is reasonable to assume that clustering is going to help because it aligns conceptually with what we’d expect to happen, namely

  • read all of the data
  • re-organize all of the data so that it is clustered by the nominated clustering key(s)
  • reload all of the data

But what if we do not have that luxury of moving all of the data around? What if I have a table that is already clustered, but now I need to load more data into it? As we know, we only see clustering in effect if we perform a direct load operation. But a direct load operation only creates new blocks. That would suggest we cannot cluster the data, no?

Let’s explore this scenario with an example.

As a control to this experiment, I’ll perform some index lookups to a “conventional” table, that is, random data without any clustering attributes.

I’ll create a table with the data arranged in random order, and then add the same amount of data again. (I could do this in a single operation, but I’m keeping it in alignment with the experiments I’ll perform shortly, to mimic the concept of loading data and then appending new data.)


SQL> create table t as
  2  select rownum id, rpad('x',100) data, mod(rownum,100) cust_id
  3  from dual
  4  connect by level <= 100000
  5  order by dbms_random.value;

Table created.

SQL>
SQL> insert /*+ APPEND */ into t
  2  select rownum+100000 id, rpad('x',100) data, mod(rownum,100) cust_id
  3  from dual
  4  connect by level <= 100000
  5  order by dbms_random.value;

100000 rows created.

SQL>
SQL> commit;

Commit complete.

I’ve got 200,000 randomly distributed rows for 100 distinct CUST_ID values. Now I’ll add an index and perform a simple lookup to examine the cost. (Note: the statistics you see are not from an initial execution, but from subsequent executions, to eliminate parsing and other costs.)


SQL> create index ix on t ( cust_id );

Index created.

SQL>
SQL>
SQL> set autotrace on stat
SQL> select /*+ index(t (cust_id)) */ max(data)
  2  from t
  3  where cust_id = 17;

MAX(DATA)
----------------------------------------------------------------------------------------------------
x

1 row selected.

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       1502  consistent gets
          0  physical reads
          0  redo size
        641  bytes sent via SQL*Net to client
        608  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

So our control figure is approximately 1500 logical I/Os.

In the previous blog posts on this topic, I then added attribute clustering definitions and performed an “alter table move” to reorganize all of the data into a nicely clustered arrangement. This time, I’ll repeat the above experiment in a different manner. The sequence of steps will be:

  1. load the data,
  2. add the clustering attributes,
  3. reorganize the data,
  4. add an index,
  5. observe the expected benefits of clustering,
  6. add more data, which we might expect could not be clustered because the table is already clustered now,
  7. perform more index lookups and see what the cost is.

If clustering is only useful for an initial data load, then we’d expect to see a significant degradation when the final index lookups are performed in step (7). First of all, just to reinforce the coolness of attribute clustering one more time, here are steps (1) through (5). Remember – the control figure here is 1500 LIOs.


SQL> create table t as
  2  select rownum id, rpad('x',100) data, mod(rownum,100) cust_id
  3  from dual
  4  connect by level <= 100000
  5  order by dbms_random.value;

Table created.

SQL>
SQL> create table t2 as select * from t;

Table created.

SQL>
SQL> alter table t ADD clustering by linear order(cust_id);

Table altered.

SQL> alter table t move;

Table altered.

SQL>
SQL> create index ix on t ( cust_id );

Index created.

SQL>
SQL> set autotrace on stat
SQL> select /*+ index(t (cust_id)) */ max(data)
  2  from t
  3  where cust_id = 17;

MAX(DATA)
----------------------------------------------------------------------------------------------------
x

1 row selected.


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
         20  consistent gets
          2  physical reads
          0  redo size
        641  bytes sent via SQL*Net to client
        608  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

Ooooh yeahhhhh… 20 LIOs!!! But now the real test comes. You can see I also created a table called T2, being a clone of T in its randomised data state. Now I am going to append the contents of T2 to my (clustered) T table.


SQL> insert /*+ APPEND */ into t select * from t2;

100000 rows created.

SQL> commit;

Commit complete.

One thing that does look promising is the “SORT ORDER BY” step in the execution plan. That suggests that even though we are appending new data to an existing set of data, some clustering might be taking place.


SQL> explain plan for insert /*+ APPEND */ into t select * from t2;

Explained.

SQL> select * from table(dbms_xplan.display());

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 528765404

-------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                 |      |   100K|    10M|       |  2905   (1)| 00:00:01 |
|   1 |  LOAD AS SELECT                  | T    |       |       |       |            |          |
|   2 |   OPTIMIZER STATISTICS GATHERING |      |   100K|    10M|       |  2905   (1)| 00:00:01 |
|   3 |    SORT ORDER BY                 |      |   100K|    10M|    11M|  2905   (1)| 00:00:01 |
|   4 |     TABLE ACCESS FULL            | T2   |   100K|    10M|       |   442   (1)| 00:00:01 |
-------------------------------------------------------------------------------------------------

11 rows selected.

But the real test comes in the index lookup. Let’s repeat that.


SQL>
SQL> set autotrace on stat
SQL> select /*+ index(t (cust_id)) */ max(data)
  2  from t
  3  where cust_id = 17;

MAX(DATA)
----------------------------------------------------------------------------------------------------
x

1 row selected.


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         43  consistent gets
          0  physical reads
          0  redo size
        641  bytes sent via SQL*Net to client
        608  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

And we’re still awesomely good. The amount of data for CUST_ID = 17 has doubled, and the LIOs approximately doubled as well. So we are still seeing benefits of clustering even though we are appending to a table with previously clustered data.
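Another lens on this is the clustering factor of the index, which measures how closely the table order matches the index order – the nearer it is to the number of blocks (rather than the number of rows), the better the clustering. A quick sketch, assuming fresh statistics:

exec dbms_stats.gather_table_stats('','T');

select i.index_name, i.clustering_factor, i.num_rows, t.blocks
from   user_indexes i, user_tables t
where  i.table_name = t.table_name
and    i.index_name = 'IX';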

Just how close to optimal is this? I’ll perform an ‘alter table move’ against the entire table, and re-measure:


SQL> alter table t move online;

Table altered.

SQL> select /*+ index(t (cust_id)) */ max(data)
  2  from t
  3  where cust_id = 17;

MAX(DATA)
---------------------------------------------------------------------------
x

1 row selected.

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         37  consistent gets
          0  physical reads
          0  redo size
        641  bytes sent via SQL*Net to client
        608  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

Hence optimal clustering would have yielded 37 LIOs for this data, so 43 is very close to that, and a huge step up in efficiency over the 1500 LIOs from the base experiment. In effect, we now have “clumps” of clustered data in the table – so larger, low-frequency data loads will give better results than smaller, higher-frequency loads. But still…very cool stuff that you should be taking advantage of in your databases.
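If you want to see those clumps for yourself, counting the distinct blocks that hold a given CUST_ID is a quick check – a well-clustered table will report far fewer blocks than a randomly loaded one. A sketch against the table T above:

select count(distinct dbms_rowid.rowid_block_number(rowid)) as blocks
from   t
where  cust_id = 17;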

You need scaling, huh? Maybe it’s just ego

I’ve just come back from OracleCode Singapore.  It was a great event – the venue was awesome and the attendees were engaged and interested in the content. But one thing I found amusing (disturbing, perhaps?) was the number of times people approached me on the topic of scaling.  The conversation would typically run along the lines of:

“What is your recommendation for scaling?”

which almost suggests that scaling is, of itself, the end solution here.  Not “Here is function X, and I need it to scale”, or “My business requirement is X, and it needs to scale”, but just “I need to scale”.

So I’d push back and ask more questions:

  • Scale what?
  • What data are you capturing?
  • Where is it coming from?
  • What speed? What volume?
  • What are your plans for the data you are capturing?
  • How do you intend to process the data?
  • Is it transient? Are you planning on storing it forever? Is it sensitive information?

And the scary thing is – more often than not, these were questions to which they did not have answers (yet?). I’d ask the questions, and very quickly the conversation would return to:

“Do I need sharding?”
“Should I use a NoSQL solution?”
“What ‘aaS’ option should I be using to achieve my scaling needs?”
“How many nodes do I need?”
“What server configuration is best?”

I’m seeing this more and more – the technological approach to achieving a business requirement is seen AS the business requirement. I hate to be brutal (well…that’s a lie, I like being brutal) but here’s the thing – stop being so damn focussed on scaling until you have an idea of what your true performance requirements are!

Don’t get me wrong – there are systems out there that need to be architected from the ground up to deal with scaling challenges that have perhaps never been tackled before.  But read those last few words again: “never been tackled before”.  Do you know what that also means?  It means it applies to an itsy-bitsy, tiny percentage of IT systems.  If it didn’t, then surprise surprise – those challenges have been tackled before.  Why does everyone in the IT industry think that the system they are about to build will need the same architectural treatment as the 0.00001% of systems in the world that truly do?

Because in almost all of my time in IT, for the other 99.999% of systems out there, the two critical steps to scaling a system to meet (and well and truly exceed) the performance requirements of the business are pretty simple:

1) don’t write crappy code,

2) don’t store data in a crappy way

That’s it.  When you can definitively demonstrate that

a) your code is well written,

b) your data is stored in a way that best serves the business requirements

and your application still cannot meet performance needs, then yes, it’s time to talk about architectural options for scaling. But more and more I see folks ignoring (a) and (b), or worse, just assuming that they are implicit and guaranteed to happen, and leaping straight into “I need a 10,000-node, geo-dispersed, NoSQL, cached, compressed, mem-optimized, column-based, non-ACID, distributed blah blah blah for my system to work”.

Here’s a reality check – you don’t.  Save yourself a lot of hassles and start simple and focus on quality. You’ll find things will probably scale just fine.

If you’ve made it this far through the post and you think I’m just ranting…well, that’s true, but let me also answer the next obvious question:

“So how do we make sure we write good code? How do we make sure we store our data intelligently?”

That’s why we (developer advocates) are here. We’re here to help you succeed. So check out our resources, reach out to us via social media channels, and we’ll help you every step of the journey.

Hybrid histograms

Just a quick post here so I could consolidate some information about histograms in 12c.

In my last Office Hours session, one of the questions posed asked for an explanation of the new types of histograms in use in 12c.  So I had a few slides and described at a high level how they work and what they represent.  If you’re new to 12c, or new to optimizer histograms in general, then take a look at the video below to get up to speed.  But for those who want to dig in at a deeper level, I also recently saw two absolutely cracking articles which discuss the algorithms and implementation in much more detail, so I’ve linked them here as well.

http://www.oracle.com/technetwork/articles/database/histogram-construction-oracle-12c-4426845.html

http://www.oracle.com/technetwork/articles/database/maintenance-histograms-db-12c-4426850.html
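As a quick companion to those articles, you can see which histogram type the database has chosen for each column straight from the dictionary. A sketch (MY_TABLE is a placeholder for your own table name):

select column_name, num_distinct, histogram
from   user_tab_col_statistics
where  table_name = 'MY_TABLE';

-- HISTOGRAM reports NONE, FREQUENCY, TOP-FREQUENCY, HYBRID or HEIGHT BALANCED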

On that note…if you celebrate Easter – have a safe and happy break.  See you on the other side of the weekend!


Text indexes for numbers

We had an AskTOM question recently about being able to search for numbers within a concatenated list. The particular issue was a list of mobile phone numbers, but this is really just about tackling the larger issue of finding numbers within a list.

Let’s create a simple example to see where things can break down (and how we can fix them).


SQL> create table t (  cid int, mobs varchar2(1000));

Table created.

SQL>
SQL> insert into t
  2  select c, listagg(num,',') within group ( order by num )
  3  from
  4  ( select mod(rownum,1000) c, trunc(dbms_random.value(1000000,9999999)) num from dual connect by level <= 4000 )
  5  group by c;

1000 rows created.

SQL>
SQL> exec dbms_stats.gather_table_stats('','T');

PL/SQL procedure successfully completed.

SQL>
SQL> select * from t
  2  where rownum <= 10;

       CID MOBS
---------- --------------------------------------------------
         0 2644307,3565512,5481105,7725189
         1 1570287,2092729,6127058,6546683
         2 6018800,6408347,6592531,8456137
         3 2087673,3086382,6692756,9377699
         4 2964558,3887606,6305557,7441515
         5 2219544,4331436,5246494,5303583
         6 1005450,1625403,2271986,4493049
         7 2605217,5143371,7444316,9073658
         8 1205487,4660509,5148296,9578099
         9 3736741,8385346,8758352,9496363

10 rows selected.

My sample table has 1000 rows, and there are four “mobile phone” numbers concatenated into a list in each row.

(Side note: I’m not using 555-prefixed numbers like you’ll see in the movies – see Why phone numbers in movies start with 555.)

Now let us try to query the table for one of the phone numbers (7444316) from the previous list.


SQL>
SQL> select * from t where mobs like '%7444316%';

       CID MOBS
---------- ----------------------------------------
         7 2605217,5143371,7444316,9073658

1 row selected.

SQL> set autotrace traceonly explain
SQL> select * from t where mobs like '%7444316%';

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    50 |  1800 |     5   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |    50 |  1800 |     5   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("MOBS" LIKE '%7444316%' AND "MOBS" IS NOT NULL)

I got the correct row back from the query, but the full scan of the table might be an issue as this table grows to larger sizes. (I’m working on the assumption here that a search for a single number will never return a large number of rows).

Since this is a “term that could be anywhere within the string” style of search, creating a Text index on the column seems an obvious choice. So I’ll do that and try again:


SQL> set autotrace off
SQL>
SQL> create index ix on t ( mobs ) indextype is ctxsys.context;

Index created.

SQL>
SQL> set autotrace traceonly explain
SQL> select * from t where contains(mobs,'7444316') > 0;

Execution Plan
----------------------------------------------------------
Plan hash value: 1339481741

------------------------------------------------------------------------------------
| Id  | Operation                   | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |      |     1 |    36 |     4   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T    |     1 |    36 |     4   (0)| 00:00:01 |
|*  2 |   DOMAIN INDEX              | IX   |       |       |     4   (0)| 00:00:01 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("CTXSYS"."CONTAINS"("MOBS",'7444316')>0)

That is looking a lot better. I’ll be able to take advantage of this index… or so it might first appear! (Cue ominous-sounding music…)

There is an ever so slight problem here – we don’t get the rows we need!


SQL> set autotrace off
SQL> select * from t where contains(mobs,'7444316') > 0;

no rows selected

In fact, even if we concocted a search for the value of the entire column, we still do not get a result back from our query.



SQL> select * from t where contains(mobs,'2605217,5143371,7444316,9073658') > 0;

no rows selected

You need to be careful when indexing terms that are not what could be thought of as “common” text. After all, it is a Text index, and by default the assumption is that we are indexing document-style text.
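You can see what actually made it into the index by peeking at the token table that Oracle Text generates, named DR$<index name>$I by default. A sketch against our index IX – on my database, the comma acts as a numeric group separator under the default lexer, so each list is tokenized as one long number rather than four, which explains the missing rows:

select token_text, token_type
from   dr$ix$i
where  rownum <= 10;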

But the solution is easy – we just need to adjust the standard lexer so that common numeric separators (comma, period, etc.) do not perturb our index entries. The NUMGROUP attribute nominates the character treated as a numeric group separator, and I’m opting for a tilde (~) because it does not appear in the source column, which means each number in the list will be indexed as its own token.



SQL>
SQL> drop index ix;

Index dropped.

SQL> begin
  2        ctx_ddl.drop_preference('my_lexer');
  3        ctx_ddl.create_preference('my_lexer', 'BASIC_LEXER');
  4        ctx_ddl.set_attribute('my_lexer', 'numgroup', '~');
  5  end;
  6  /

PL/SQL procedure successfully completed.

SQL>
SQL> create index ix on t ( mobs ) indextype is ctxsys.context PARAMETERS('lexer my_lexer');

Index created.

SQL>
SQL> set autotrace traceonly explain
SQL> select * from t
  2  where contains(mobs,'7444316') > 0;

Execution Plan
----------------------------------------------------------
Plan hash value: 1339481741

------------------------------------------------------------------------------------
| Id  | Operation                   | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |      |     1 |    36 |     4   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T    |     1 |    36 |     4   (0)| 00:00:01 |
|*  2 |   DOMAIN INDEX              | IX   |       |       |     4   (0)| 00:00:01 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("CTXSYS"."CONTAINS"("MOBS",'7444316')>0)

SQL>
SQL> set autotrace off
SQL> select * from t
  2  where contains(mobs,'7444316') > 0;

       CID MOBS
---------- --------------------------------------------------
         7 2605217,5143371,7444316,9073658

1 row selected.

Our execution plan still shows that we can take advantage of the Text index, and now I’m also getting the results I expected.

As an aside, Text indexes are one of those gems in the Oracle database that often get a great amount of new functionality with each new release. So every time you upgrade, take a fresh look at the Text index documentation. You might get a lot of cool ideas for how to use them in your applications.

Cool stuff with partition elimination

Sometimes in the IT world, the term “surprise” is not a good one.

“I woke up this morning and got a surprise…my database was down.”

“I ran a SELECT COUNT(*) on my most important table, and got a surprise result of zero rows.”

and so forth. Generally, as IT professionals, encountering the unexpected is not a good start to the day.

But sometimes, surprises can be a nice thing.  Here is my example for today – discovering that the database can do a remarkably good job of reducing the workload against partitioned tables.

Most people will already be familiar with the concept of partition elimination.  If you have a table partitioned into (say) yearly segments, and you ask for all of the data for the past 2 years, then the optimizer is intelligent enough to scan only the relevant partitions rather than the entire table.  Here is an example of that in action. I have a table that is partitioned by year on an END_DT column, and then sub-partitioned into quarters based on a START_DT column.


SQL> create table t
  2      partition by range( end_dt )
  3      subpartition by range( start_dt )
  4      (
  5         partition p_2014 values less than ( to_date('01-Jan-2015','dd-mon-yyyy') )
  6                 (
  7                         subpartition sp_2014_q1 values less than ( to_date('01-apr-2014','dd-mon-yyyy') ) ,
  8                         subpartition sp_2014_q2 values less than ( to_date('01-jul-2014','dd-mon-yyyy') ) ,
  9                         subpartition sp_2014_q3 values less than ( to_date('01-oct-2014','dd-mon-yyyy') ) ,
 10                         subpartition sp_2014_q4 values less than ( to_date('01-jan-2015','dd-mon-yyyy') )
 11                ) ,
 12        partition p_2015 values less than ( to_date('01-Jan-2016','dd-mon-yyyy') )
 13                (
 14                        subpartition sp_2015_q1 values less than ( to_date('01-apr-2015','dd-mon-yyyy') ) ,
 15                        subpartition sp_2015_q2 values less than ( to_date('01-jul-2015','dd-mon-yyyy') ) ,
 16                        subpartition sp_2015_q3 values less than ( to_date('01-oct-2015','dd-mon-yyyy') ) ,
 17                        subpartition sp_2015_q4 values less than ( to_date('01-jan-2016','dd-mon-yyyy') )
 18                )
 19     )
 20     as
 21     select a.* ,
 22        to_date('01-Jan-2014','dd-mon-yyyy') + mod(rownum,730)-1 as start_dt,
 23        to_date('01-Jan-2014','dd-mon-yyyy') + mod(rownum,730) as end_dt
 24     from all_objects a;

Table created.

SQL>
SQL> alter table t modify start_dt not null;

Table altered.

SQL> alter table t modify end_dt not null;

Table altered.

SQL>
SQL> set autotrace traceonly explain
SQL> select *
  2      from t
  3      where start_dt = to_date('13-Jan-2015','dd-mon-yyyy') ;

Execution Plan
----------------------------------------------------------
Plan hash value: 2183203583

------------------------------------------------------------------------------------------------
| Id  | Operation               | Name | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |      |   101 | 15049 |   491   (1)| 00:00:01 |       |       |
|   1 |  PARTITION RANGE ALL    |      |   101 | 15049 |   491   (1)| 00:00:01 |     1 |     2 |
|   2 |   PARTITION RANGE SINGLE|      |   101 | 15049 |   491   (1)| 00:00:01 |       |       |
|*  3 |    TABLE ACCESS FULL    | T    |   101 | 15049 |   491   (1)| 00:00:01 |       |       |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("START_DT"=TO_DATE(' 2015-01-13 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))


You can see that we had to scan all of the partitions, but within each partition we only had to scan a single sub-partition (as indicated by “PARTITION RANGE SINGLE”).  So for each year (based on END_DT) we scanned just one of the 4 subpartitions – only 25% of the work of scanning the entire table.

But one pleasant surprise I saw today was how the optimizer can take advantage of additional information to improve things even more.  Let us now add a fairly obvious rule about the data:


SQL>
SQL> alter table t add constraint t_chk check( start_dt < end_dt );

Table altered.


And have another look at that query execution plan.


SQL>
SQL> set autotrace traceonly explain
SQL> select *
  2      from t
  3      where start_dt = to_date('13-Jan-2015','dd-mon-yyyy') ;

Execution Plan
----------------------------------------------------------
Plan hash value: 3874588989

------------------------------------------------------------------------------------------------
| Id  | Operation               | Name | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |      |   101 | 15049 |   247   (1)| 00:00:01 |       |       |
|   1 |  PARTITION RANGE SINGLE |      |   101 | 15049 |   247   (1)| 00:00:01 |     2 |     2 |
|   2 |   PARTITION RANGE SINGLE|      |   101 | 15049 |   247   (1)| 00:00:01 |     1 |     1 |
|*  3 |    TABLE ACCESS FULL    | T    |   101 | 15049 |   247   (1)| 00:00:01 |     5 |     5 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("START_DT"=TO_DATE(' 2015-01-13 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "END_DT">TO_DATE(' 2015-01-13 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

SQL>
SQL>

How cool is that!  The constraint let us remove even more partitions from consideration – in fact, we ended up scanning just a single subpartition. Moreover, we never specified END_DT as a predicate, but you can see in the Predicate Information section that the optimizer synthesized one using the rule defined by the check constraint we added.
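As an aside, if you want to map the Pstart/Pstop numbers in the plan back to actual segments, the data dictionary can help – subpartitions are numbered in order across the whole table, so subpartition 5 here corresponds to SP_2015_Q1. A sketch against the table T above:

select partition_name, subpartition_name, subpartition_position
from   user_tab_subpartitions
where  table_name = 'T'
order  by partition_name, subpartition_position;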

I suppose the moral of the story is two-fold here.

1) The optimizer can be pretty smart with partition elimination,

2) As always, never hide rules and facts about the data from the database.  Make sure you define those constraints in the database tier.

ROWNUM and ORDER BY

Just a quick revisit on an old topic so I could link to this post in a Stack Overflow discussion.



SQL> create table t ( x int );

Table created.

SQL>
SQL> insert into t
  2  select dbms_random.value(1,1000)
  3  from dual
  4  connect by level <= 100;

100 rows created.

SQL>
SQL> select count(*) from t;

  COUNT(*)
----------
       100

1 row selected.

--
-- DOES NOT WORK
--
SQL>
SQL> select * from t
  2  where rownum <= 10
  3  order by x desc;

         X
----------
       995
       909
       873
       871
       640
       555
       318
       197
       134
        70

10 rows selected.

--
-- CORRECT ALTERNATIVES
--

SQL>
SQL> select *
  2  from
  3    ( select * from t order by x desc )
  4  where rownum <= 10;

         X
----------
      1000
       995
       991
       977
       977
       971
       955
       909
       909
       900

10 rows selected.

SQL> select * from t
  2  order by x desc
  3  fetch first 10 rows only;

         X
----------
      1000
       995
       991
       977
       977
       971
       955
       909
       909
       900

10 rows selected.

SQL>
SQL> select x
  2  from
  3    ( select t.*,
  4         row_number() over ( order by x desc ) r
  5      from t  )
  6  where r <= 10;

         X
----------
      1000
       995
       991
       977
       977
       971
       955
       909
       909
       900

10 rows selected.

As you can see, you cannot simply combine WHERE ROWNUM and ORDER BY in the same SELECT statement if you are after the “top” rows based on some criteria – ROWNUM is assigned to rows as they are fetched, before the ORDER BY is applied, so you end up sorting an arbitrary ten rows rather than selecting the top ten.
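If you want to see why, include ROWNUM in the select list of the sorted query. A sketch against the table above – the R values come out shuffled, because ROWNUM was assigned as the rows were fetched, before the sort took place:

select rownum r, x
from   t
order  by x desc
fetch  first 10 rows only;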