This is a bug encountered at a client site. The test script was carefully prepared by Mark Hoxey (http://markhoxey.wordpress.com/), and it shows how seemingly independent features in Oracle can sometimes collide. Some elements of the script have been omitted for brevity, but if you’d like a copy, drop me a line.
We have a partitioned table with 4,000,000 rows in it.
SQL> CREATE TABLE block_test
2 (id NUMBER(10) NOT NULL
3 ,tstamp TIMESTAMP NOT NULL
4 ,val NUMBER(10) NOT NULL
5 ,padding CHAR(50) NOT NULL)
6 TABLESPACE users
7 PARTITION BY RANGE (tstamp)
8 (PARTITION block_test_201201 VALUES LESS THAN (TO_TIMESTAMP('01/02/2012','dd/mm/yyyy'))
9 ,PARTITION block_test_201202 VALUES LESS THAN (TO_TIMESTAMP('01/03/2012','dd/mm/yyyy'))
10 ,PARTITION block_test_201203 VALUES LESS THAN (TO_TIMESTAMP('01/04/2012','dd/mm/yyyy'))
11 )
12 NOLOGGING
13 /
Table created.
SQL> INSERT /*+ APPEND */ INTO block_test
2 WITH generator AS
3 (SELECT -- materialize
4 ROWNUM AS n
5 FROM dual
6 CONNECT BY level <= 2000)
7 SELECT ROWNUM AS id
8 , TO_TIMESTAMP('01/01/2012','dd/mm/yyyy') + NUMTODSINTERVAL(ROWNUM/4000000*75*24*60*60,'SECOND')
9 , TRUNC(ROWNUM/10000)
10 , 'X'
11 FROM generator v1
12 , generator v2
13 /
4000000 rows created.
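Out of interest, we can sanity-check how those rows landed across the three monthly partitions. A query along these lines (my addition, not part of Mark’s script) does the job:

SQL> SELECT TRUNC(tstamp,'MM') AS mth
2 , COUNT(*) AS cnt
3 FROM block_test
4 GROUP BY TRUNC(tstamp,'MM')
5 ORDER BY 1;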
Now, working under the presumption that some of the data is now “old”, we opt to compress it to make it more efficient to query and to consume less space:
SQL> ALTER TABLE block_test MOVE PARTITION block_test_201201 PCTFREE 0 COMPRESS UPDATE INDEXES
2 /
Table altered.
SQL> ALTER TABLE block_test MOVE PARTITION block_test_201202 PCTFREE 0 COMPRESS UPDATE INDEXES
2 /
Table altered.
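A quick look at the dictionary (my check, not part of the original script) confirms what the MOVE has done to each partition:

SQL> SELECT partition_name, compression, pct_free
2 FROM user_tab_partitions
3 WHERE table_name = 'BLOCK_TEST'
4 ORDER BY partition_name;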
Now here is where things get interesting… Let’s resume inserting into the table in the normal way, i.e., grab the next primary key value from the sequence and insert. For each insert, we’ll record how many session IOs were done to perform it. Typically we’d expect very few – just a few on the primary key index and the table block itself.
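One of the pieces omitted for brevity is the setup of the primary key and its sequence. The test block below uses block_test_id_seq, so presumably the omitted DDL resembles this sketch (the constraint name and START WITH value are my guesses):

SQL> CREATE SEQUENCE block_test_id_seq START WITH 4000001;

SQL> ALTER TABLE block_test ADD CONSTRAINT block_test_pk PRIMARY KEY (id);

With that in place, the test block: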
SQL> DECLARE
2 TYPE t_stats_tab IS TABLE OF v$mystat%ROWTYPE;
3
4 c_iterations CONSTANT NUMBER(8) := 1000;
5 c_threshold_io CONSTANT NUMBER(4) := 100;
6 c_threshold_sec CONSTANT NUMBER(8) := 1;
7
8 l_start_tstamp TIMESTAMP;
9 l_end_tstamp TIMESTAMP;
10 l_start_io NUMBER(8);
11 l_end_io NUMBER(8);
12
13 l_start_stats t_stats_tab;
14 l_end_stats t_stats_tab;
15
16 l_min_duration INTERVAL DAY TO SECOND(6);
17 l_max_duration INTERVAL DAY TO SECOND(6);
18
19 l_min_io NUMBER(8);
20 l_max_io NUMBER(8);
21
22 l_id block_test.id%TYPE;
23 l_tstamp block_test.tstamp%TYPE;
24
[snip]
50 BEGIN
51
52 FOR i IN 1..c_iterations
53 LOOP
54
[snip]
59 SELECT SYSTIMESTAMP
60 , value
61 INTO l_start_tstamp
62 , l_start_io
63 FROM v$mystat
64 WHERE statistic# = 11; -- session logical reads
65
66 INSERT INTO block_test
67 (id, tstamp, val, padding)
68 VALUES
69 (block_test_id_seq.NEXTVAL, SYSTIMESTAMP, TRUNC(i/1000), 'X')
70 RETURNING id, tstamp
71 INTO l_id, l_tstamp;
72
73 SELECT SYSTIMESTAMP
74 , value
75 INTO l_end_tstamp
76 , l_end_io
77 FROM v$mystat
78 WHERE statistic# = 11; -- session logical reads
79
[snip]
84 COMMIT;
85
86 -- if we've exceeded what we've set as duration or IO thresholds then
87 -- log the details
88 IF (EXTRACT(SECOND FROM (l_end_tstamp-l_start_tstamp)) >= c_threshold_sec)
89 OR ((l_end_io - l_start_io) >= c_threshold_io)
90 THEN
91 dbms_output.put_line ('Entry ' || TO_CHAR(l_id) || ' started at ' || TO_CHAR(l_tstamp,'dd/mm/yyyy hh24:mi:ss') ||
92 ', took ' || TO_CHAR(EXTRACT(SECOND FROM (l_end_tstamp-l_start_tstamp))) || ' seconds' ||
93 ' and generated ' || TO_CHAR(l_end_io - l_start_io) || ' IOs');
94 END IF;
95
[snip]
105 END LOOP;
106
107 END;
108 /
Entry 4000170 started at 08/03/2012 20:52:31, took 3.431496 seconds and generated 19549 IOs
Bang! One of the inserts took over 3 seconds (an eternity in Oracle-speak) and burned through nearly 20,000 logical IOs.
Setting a couple of events on the advice of Oracle Support
SQL> ALTER SESSION SET EVENTS '10612 trace name context forever,level 3';
SQL> ALTER SESSION SET EVENTS '10224 trace name context forever,level 1';
shows some interesting diagnostics – it looks like a large number of blocks are being encountered as candidates for the insert but are in fact full, and hence rejected from consideration. We see thousands of entries like:
ktspfsrch: Returns: BlockDBA:0x01029a2f
kdisnew_bseg_srch_cbk reject block -mark full,dba 0x01029a2f,time 20:52:31.332
ktspfupdst: fdba: 0x01029a20 offset: 15 nblks: 1 newstate: 1
kdisnew_bseg_srch_cbk rejecting block ,dba 0x01029a2f,time 20:52:31.332
ktspfsrch:Cbk didnot like 0x01029a2f
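If you want to track down the trace file containing these entries for your own session, v$diag_info makes that easy on 11g (my addition):

SQL> SELECT value
2 FROM v$diag_info
3 WHERE name = 'Default Trace File';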
This is on 11.2.0.2… Interestingly, on 11.2.0.3 we see the same entries in the trace file, but you do not get the slowdown.