Yes it’s SCBT day here in Perth!
SCBT = Silly Click Bait Title
This post is just a cautionary tale about how easy it is to get caught up judging SQL performance on a few isolated metrics, rather than taking the more common-sense approach of assessing performance against the true requirements of the relevant component of the application. I say “true requirements” because they vary depending on what matters most for that particular component.
Most of the time, response time is probably what you will care about. Certainly for customer-facing components of an application, response time is likely to be the dominant factor in defining “success”. But it does not always have to be response time. You might have a SQL statement that gets executed millions (or billions) of times, in which case parse time CPU or overall latching activity might be the critical factor.
My point is this: not all SQL statements can be judged by a simple rule of “fewer consistent gets = better SQL”. Here’s an example that came up on AskTOM recently, hence the click bait title. I’ll start with 100 copies of DBA_OBJECTS as my source table.
SQL> create table t pctfree 0
2 as select owner, object_id, subobject_name
3 from dba_objects,
4 ( select 1 from dual connect by level <= 100 );
Table created.
SQL>
SQL> exec dbms_stats.gather_table_stats('','t');
PL/SQL procedure successfully completed.
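A quick sanity check on how much data we have to play with (this count matches the “rows selected” figures you’ll see shortly):

SQL> select count(*) from t;

  COUNT(*)
----------
   8350600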
We’ll connect with SQL*Plus and run a simple SELECT to retrieve all of the rows.
SQL> set autotrace traceonly statistics
SQL> select * from t;
8350600 rows selected.
Statistics
------------------------------------------------------
13 recursive calls
17 db block gets
849818 consistent gets
1 physical reads
3024 redo size
294152578 bytes sent via SQL*Net to client
9186257 bytes received via SQL*Net from client
835061 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
8350600 rows processed
Wow…that’s a lot of consistent gets. Here’s my simple solution to reduce that. We’ll just add an ORDER BY clause!
SQL> select * from t order by object_id;
8350600 rows selected.
Statistics
-------------------------------------------------------
170 recursive calls
4 db block gets
16469 consistent gets
21597 physical reads
0 redo size
201757283 bytes sent via SQL*Net to client
9186257 bytes received via SQL*Net from client
835061 SQL*Net roundtrips to/from client
0 sorts (memory)
1 sorts (disk)
8350600 rows processed
Woo hoo! Look how much I’ve dropped the consistent gets. Time for me to scream from the rooftops: “I am a SQL tuning genius!”
In reality, I don’t need an ORDER BY to improve the “consistent gets” number for this SELECT. (The ORDER BY only “helped” because the full scan visited each table block just once to feed the sort, and every fetch after that was served from the sort area — spilled to disk, hence the physical reads — rather than the buffer cache. Notice the SQL*Net roundtrips did not change at all.) One of the governing factors for how many consistent gets we will need to do is the fetch size, and my original execution was performed with a fetch size (arraysize in SQL*Plus) of just 10. That is why you see 835,061 roundtrips above — roughly one fetch call per 10 rows, and each fetch call has to revisit a block in consistent mode.
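You can check what your session is currently using with the SHOW command:

SQL> show arraysize
arraysize 10

Let’s bump that up and re-run the query: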
SQL> set arraysize 1000
SQL> select * from t;
8350600 rows selected.
Statistics
------------------------------------------------------
0 recursive calls
0 db block gets
24805 consistent gets
0 physical reads
0 redo size
133771032 bytes sent via SQL*Net to client
92458 bytes received via SQL*Net from client
8352 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
8350600 rows processed
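Where did 24,805 come from? It is roughly one consistent get per block in the table, plus one per fetch roundtrip: take away the 8,352 roundtrips and you are left with about 16,450, which lines up nicely with the 16,469 consistent gets in the ORDER BY run, where each block was visited just once to feed the sort. If you want to eyeball the block count yourself (assuming the stats we gathered earlier are still current), it is sitting in USER_TABLES:

SQL> select blocks from user_tables where table_name = 'T';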
Voila! An easy “fix” to consistent gets, and not an ORDER BY in sight. But don’t forget that consistent gets is just one piece of the performance puzzle. If I throw in some response time metrics for the above queries, then (as an application user) I know which query I would prefer to have running!
SQL> set autotrace off
SQL> set timing on
SQL> set feedback only
SQL> select * from t;
8350600 rows selected.
Elapsed: 00:00:02.08
SQL> select * from t order by object_id;
8350600 rows selected.
Elapsed: 00:00:09.70
SQL>
No prizes for guessing why: the version with fewer consistent gets was nearly five times slower, because it had to pay for a sort to disk (recall those 21,597 physical reads) before handing back exactly the same rows. So, click bait titles aside, make sure you don’t get too carried away focussing on particular metrics (e.g. consistent gets) when assessing and tuning the performance of your SQL statements.
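Footnote: arraysize is just the SQL*Plus flavour of the fetch size — every client API has an equivalent setting. If you were fetching these rows in PL/SQL, for example, a BULK COLLECT with a LIMIT clause plays the same role. A minimal sketch (what you do with each batch of rows is up to you):

declare
  type t_rows is table of t%rowtype;
  l_rows t_rows;
  cursor c_t is select * from t;
begin
  open c_t;
  loop
    -- fetch 1000 rows per call, the PL/SQL analogue of "set arraysize 1000"
    fetch c_t bulk collect into l_rows limit 1000;
    exit when l_rows.count = 0;
    for i in 1 .. l_rows.count loop
      null;  -- process each row here
    end loop;
  end loop;
  close c_t;
end;
/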