Things get a little interesting in the data dictionary when it comes to interval partitions. Consider the following example:
SQL> create table t
2 partition by range(x) interval(1)
3 ( partition p0 values less than (1) )
4 as
  5  select rownum x from dual connect by level <= 5;

Table created.

SQL> select count(*)
2 from user_tab_partitions
3 where table_name = 'T';
COUNT(*)
----------
6
So our table has 6 partitions. Let’s now check the execution plan for a sample query:
SQL> set autotrace traceonly explain
SQL> select * from t where x = 1235;
Execution Plan
----------------------------------------------------------
Plan hash value: 2931986080
-----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
-----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 3 | 4 (0)| 00:00:01 | | |
| 1 | PARTITION RANGE SINGLE| | 1 | 3 | 4 (0)| 00:00:01 | 1236 | 1236 |
|* 2 | TABLE ACCESS FULL | T | 1 | 3 | 4 (0)| 00:00:01 | 1236 | 1236 |
-----------------------------------------------------------------------------------------------
Well, that seems quite odd. If there are only 6 partitions, how come the optimizer suggests we will jump to partition 1236?!
This is explained by how interval partitioning works. With a conventional range partitioning scheme, each partition’s lower bound is the upper bound of the partition before it. With interval partitions, things are different. In effect, all of the partitions are known at table creation time, because the interval defines every possible partition that could ever exist; they just don’t come into “existence” until they are required to hold data. Hence partition 1236 is the partition that would come into existence, should the value x=1235 be inserted into the table.
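As a rough sanity check (this is my reading of the arithmetic, not anything the plan output spells out), the logical partition number appears to be 1 for P0, plus the number of interval-width slots from the transition point (the high bound of P0) up to and including the slot holding the value:

-- a back-of-envelope check, assuming the transition point (high bound of p0) is 1
-- and the interval width is 1, as defined in the example table:
--   1 (for p0)  +  floor( (x - transition_point) / interval )  +  1
select 1 + floor( (1235 - 1) / 1 ) + 1 as logical_partition_number
from   dual;

which gives 1236, matching the Pstart/Pstop values in the plan above.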
This can get a bit tricky when we actually put that value into the table.
SQL> insert into t values (1235);
1 row created.
So partition 1236 is now in existence, right? Let’s look at our partitions in USER_TAB_PARTITIONS:
SQL> select partition_position, partition_name, high_value
2 from user_tab_partitions
3 where table_name = 'T';
PARTITION_POSITION PARTITION_NAME HIGH_VALUE
------------------ ------------------------------ ----------
1 P0 1
2 SYS_P18027 2
3 SYS_P18028 3
4 SYS_P18029 4
5 SYS_P18030 5
6 SYS_P18031 6
7 SYS_P18032 1236
7 rows selected.
We can see the new partition SYS_P18032 with the boundary to hold our just-inserted row, but why is it partition 7 and not partition 1236? This is possibly why the column is called PARTITION_POSITION and not PARTITION_NUMBER. The partition is the 7th “instantiated” partition in our table, which under an interval partitioning scheme is not necessarily the same as the logical partition number you see in an execution plan. In fact, simply inserting another row whose value falls between the existing partitions (see the sketch below) shifts the position of this partition.
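That extra insert isn’t shown above, but judging by the HIGH_VALUE of 601 in the listing that follows, it would have been something along these lines (a hypothetical reconstruction; any value in the interval [600, 601) would create that partition):

-- not part of the original run: an insert that lands between the existing partitions
insert into t values (600);
commit;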
SQL> select partition_position, partition_name, high_value
2 from user_tab_partitions
3 where table_name = 'T';
PARTITION_POSITION PARTITION_NAME HIGH_VALUE
------------------ ------------------------------ ----------
1 P0 1
2 SYS_P18027 2
3 SYS_P18028 3
4 SYS_P18029 4
5 SYS_P18030 5
6 SYS_P18031 6
7 SYS_P18033 601
8 SYS_P18032 1236
8 rows selected.
The PARTITION_POSITION is dynamically assigned. A quick look at the definition for the xxx_TAB_PARTITIONS view shows a simple analytic function to assign the value:
row_number() over (partition by u.name, o.name order by tp.part#)
I have a hypothesis for this. I recall that back in Oracle 8, dropping or splitting partitions was very slow, and (I think) that was because PARTITION_POSITION was stored as a simple column attribute. Hence when you dropped a partition, every dictionary row for the remaining partitions had to be updated (incremented or decremented) to keep PARTITION_POSITION correct. So replacing that stored value with an analytic function is an obvious choice, and one made before interval partitioning existed. I stress – this all might be fiction; it is just my recollection.
So take care – the partition number you see in an execution plan might not map exactly to the PARTITION_POSITION values you see in the data dictionary.
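If you need to pin down a particular interval partition without relying on either number, or on its system-generated name, the extended partition syntax lets you reference it by a value it holds. A minimal sketch, reusing the table above:

-- address the partition via a value it (would) contain, rather than by name or position
select * from t partition for (1235);

-- the same syntax works for partition maintenance, e.g.
alter table t drop partition for (1235);

This avoids hard-coding SYS_Pnnnnn names or positions that can shift as new partitions are instantiated.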
Interval partitioning is incredibly useful, but it has a number of quirks that can be quite unexpected.