First of all … Happy New Year! This is my first post for 2020. Last year, I fell just short of 100 blog posts for the year – so this year, I’m starting early and hopefully I can crack the 100 mark! Anyway… onto the post.
The in-memory option in the Oracle database can yield some ridiculously good performance results. As someone who regularly gets to visit customers, I always enjoy that feel-good moment when you can take their data warehouse sample data and queries, which might be running in minutes or hours, slap on some in-memory parameters, and watch the amazement as those queries drop from hours to minutes, or minutes to seconds.
Don’t get me wrong; in-memory is not a magic wand that transforms every single query to insane performance, but whenever you are scanning large volumes of data, some of the performance benefits can really blow your mind. And yes, before you blast me in the comments, I know it’s not a free option. But because it is not free, it is important to make sure you are getting the benefits you have paid good money for, so I thought I would do a couple of blog posts to point out some traps that customers have stumbled into.
The first trap I’ve seen people fall into is thinking that they have specified the in-memory settings for a table but not taking the time to check that they are truly getting the benefits. Here is an example of this in action.
I created a table T, which is 20 copies of DBA_OBJECTS.
SQL> create table t
2 as
3 select d.*
4 from dba_objects d, ( select 1 from dual connect by level <= 20 );
Table created.
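Before diving into the in-memory settings, it is worth knowing the table's on-disk footprint, since that (less compression) is what will be competing for space in the in-memory area. Here is a quick check via the standard USER_SEGMENTS view (sizes will vary with the DBA_OBJECTS on your system):

-- On-disk size of T in megabytes; the in-memory footprint will
-- typically be smaller once MEMCOMPRESS kicks in.
select segment_name, round( bytes/1024/1024 ) mb
from   user_segments
where  segment_name = 'T';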
Now I diligently go about setting all the in-memory characteristics that I would like for this table.
SQL> alter table t
2 inmemory memcompress for query( owner,object_id )
3 inmemory memcompress for dml ( object_type)
4 inmemory memcompress for query high( object_name )
5 inmemory memcompress for capacity low( created )
6 inmemory memcompress for capacity high( status )
7 no inmemory( duplicated, sharded ) ;
Table altered.
Having done this, I need to prime the in-memory store by scanning the table, which kickstarts the background processing that performs the population.
SQL> select count(*) from t;
COUNT(*)
----------
1656440
At this point, after a short time period, I can check to see if the relevant in-memory segments have been populated.
SQL> select owner,segment_name,bytes,bytes_not_populated,populate_status from v$im_segments;
no rows selected
As you can see, the in-memory store is empty. Maybe I didn’t wait long enough, but even after several minutes, the store is empty and remains empty.
SQL> select owner,segment_name,bytes,bytes_not_populated,populate_status from v$im_segments;
no rows selected
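As an aside, when the store stays stubbornly empty like this, the first thing worth ruling out is that the in-memory area itself is enabled and has space. Here is a quick sanity check (a sketch, using the V$INMEMORY_AREA view that comes with the option):

-- Confirm the in-memory area is configured and see how much is in use.
-- If the INMEMORY_SIZE parameter is zero, nothing will ever be populated.
select pool, alloc_bytes, used_bytes, populate_status
from   v$inmemory_area;

In this demo the area is clearly configured, because (as you'll see shortly) population does eventually succeed, so the problem must lie elsewhere.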
Perhaps there is an error in the DDL that nominated the in-memory attributes for the columns in my table T. I can check that by querying V$IM_COLUMN_LEVEL.
SQL> select owner,segment_column_id,column_name, inmemory_compression
2 from v$im_column_level
3 order by 2 ;
OWNER SEGMENT_COLUMN_ID COLUMN_NAME INMEMORY_COMPRESSION
---------- ----------------- ------------------------------ ---------------------
MCDONAC 1 OWNER FOR QUERY LOW
MCDONAC 2 OBJECT_NAME FOR QUERY HIGH
MCDONAC 3 SUBOBJECT_NAME DEFAULT
MCDONAC 4 OBJECT_ID FOR QUERY LOW
MCDONAC 5 DATA_OBJECT_ID DEFAULT
MCDONAC 6 OBJECT_TYPE FOR DML
MCDONAC 7 CREATED FOR CAPACITY LOW
MCDONAC 8 LAST_DDL_TIME DEFAULT
MCDONAC 9 TIMESTAMP DEFAULT
MCDONAC 10 STATUS FOR CAPACITY HIGH
MCDONAC 11 TEMPORARY DEFAULT
MCDONAC 12 GENERATED DEFAULT
MCDONAC 13 SECONDARY DEFAULT
MCDONAC 14 NAMESPACE DEFAULT
MCDONAC 15 EDITION_NAME DEFAULT
MCDONAC 16 SHARING DEFAULT
MCDONAC 17 EDITIONABLE DEFAULT
MCDONAC 18 ORACLE_MAINTAINED DEFAULT
MCDONAC 19 APPLICATION DEFAULT
MCDONAC 20 DEFAULT_COLLATION DEFAULT
MCDONAC 21 DUPLICATED NO INMEMORY
MCDONAC 22 SHARDED NO INMEMORY
MCDONAC 23 CREATED_APPID DEFAULT
MCDONAC 24 CREATED_VSNID DEFAULT
MCDONAC 25 MODIFIED_APPID DEFAULT
MCDONAC 26 MODIFIED_VSNID DEFAULT
The column-level settings are all there, so the DDL was not the problem. The way I like to interpret this (and hence explain it to customers) is that the column-level specifications merely nominate the in-memory metadata to be used when the entire table is moved to the in-memory store. The column-level metadata is not an instruction to actually place the table into the in-memory store; that must be done explicitly at the table level.
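A quick way to see that distinction (a sketch using the in-memory columns that the dictionary views gained alongside the option) is to check the table-level attribute, which at this point should still report the table as disabled despite all of the column-level settings:

-- The table-level attribute is what actually drives population;
-- for table T right now this should show DISABLED.
select table_name, inmemory, inmemory_compression
from   user_tables
where  table_name = 'T';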
SQL> alter table t inmemory;
Table altered.
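As an aside, once the table-level attribute is set, you do not have to rely on a scan to trigger population. Assuming the DBMS_INMEMORY package is available in your release, you can request population explicitly; a minimal sketch:

-- Explicitly queue table T for population by the background
-- processes, rather than waiting for a full scan to trigger it.
exec dbms_inmemory.populate( schema_name => user, table_name => 'T' )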
Having enabled the table at that level, I will rescan it to prime the in-memory store, and voila… we now have full population.
SQL> select count(*) from t;
COUNT(*)
----------
1656440
SQL> select owner,segment_name,bytes,bytes_not_populated,populate_status from v$im_segments;
OWNER SEGMENT_NA BYTES BYTES_NOT_POPULATED POPULATE_STAT
---------- ---------- ---------- ------------------- -------------
MCDONAC T 262144000 0 COMPLETED
So always make sure you check V$IM_SEGMENTS to ensure that you have populated the in-memory store. In the next blog post, I’ll show an example where even V$IM_SEGMENTS being populated might not mean that the in-memory store is fully populated with the data you want.
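If you want to make that check routine rather than ad hoc, a query along these lines can flag tables that are marked in-memory but have no populated segment (a sketch; note that tables with the default priority of NONE will legitimately appear here until their first scan):

-- In-memory-enabled tables in the current schema with no
-- corresponding segment in the in-memory store.
select t.table_name
from   user_tables t
where  t.inmemory = 'ENABLED'
and    not exists ( select 1
                    from   v$im_segments s
                    where  s.owner = user
                    and    s.segment_name = t.table_name );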
You can watch the video version of this demo here.
Hello Connor,
I would just like to start this year by wishing you once again a HAPPY, HEALTHY AND LUCKY NEW YEAR 2020,
and also by expressing my very deep wish and hope that the infernal current situation in Australia will end as soon as possible, and this paradise place will return to its previous state.
It is one of the last paradises left on Earth and I am extremely sad thinking that human global activity probably contributed a lot to this ecological catastrophe and the death of so many unique and innocent beings.
Hopefully, the day will come when technology will be used to prevent such situations, rather than to create them.
Best Regards & Hope for the best,
Iudith Mentzel
Thank you for your kind words.