In the previous post, I described the importance of checking V$IM_SEGMENTS to confirm that the in-memory store has been populated, so that you can be confident you are actually getting the benefit of the feature. But even when the store has been populated, virtual columns need additional care before you can take full advantage of it. Here is an example of how you might not be getting the in-memory benefits you were expecting (and how to solve it).

I’ll start by creating a table T as 20 copies of DBA_OBJECTS, and I’ll add a virtual column called OBJECT_DETAILS. The virtual column definition is somewhat nonsensical, just a series of nested SQRT calls, but that expense is exactly what makes it easy to check for in-memory usage (as I’ll explain shortly).


SQL> set termout on
SQL> create table t (
  2         owner varchar2(30),
  3         object_type varchar2(80),
  4         object_id number,
  5         object_details generated always as ( sqrt(sqrt(sqrt(sqrt(object_id))))));

Table created.

SQL> insert /*+ APPEND */ into t (owner,object_type,object_id)
  2      select owner,object_type,object_id
  3      from dba_objects, ( select 1 from dual connect by level <= 20 );

1656460 rows created.

SQL> commit;

Commit complete.

Before I start to tinker with in-memory, I’ll do a full scan of T and force it to evaluate the virtual column OBJECT_DETAILS. Because of the many nested square root calls, this is a very expensive query and takes over 5 seconds even though the table is quite small.


SQL> set timing on
SQL> select max(object_details)
  2  from t;

MAX(OBJECT_DETAILS)
-------------------
         2.12899839

1 row selected.

Elapsed: 00:00:05.73

This elapsed time thus becomes our measuring stick for detecting in-memory usage. If the OBJECT_DETAILS column is pre-calculated and held in the in-memory store, then the elapsed time should be far lower. To achieve that, I’ll now put the table into the in-memory store.


SQL> alter table t inmemory;

Table altered.

And to kickstart the in-memory population process, I’ll scan the table.


SQL> select count(*) from t;

  COUNT(*)
----------
   1656460

1 row selected.
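
As an aside, this priming scan is only needed because the table currently has the default in-memory priority of NONE, where population is triggered by the first scan of the segment. An alternative (documented syntax, though not part of this demo) is to assign a population priority so that background processes load the segment without waiting for a query. A minimal sketch:


-- Not used in this demo: with PRIORITY HIGH (or CRITICAL), background
-- workers populate the segment automatically once the ALTER completes.
alter table t inmemory priority high;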

After waiting for a short while, I can check V$IM_SEGMENTS to make sure that the table has been populated.


SQL> select owner,segment_name,bytes,bytes_not_populated,populate_status from v$im_segments;

OWNER      SEGMENT_NA      BYTES BYTES_NOT_POPULATED POPULATE_STAT
---------- ---------- ---------- ------------------- -------------
MCDONAC    T            46047232                   0 COMPLETED

As I said in my previous post, the population of V$IM_SEGMENTS should mean that all is in readiness for me to bask in the performance goodness of in-memory. So now I’ll re-run my query.


SQL> select max(object_details)
  2  from t;

MAX(OBJECT_DETAILS)
-------------------
         2.12899839

1 row selected.

Elapsed: 00:00:05.41

Well, that was disappointing. The query performance was unchanged. Maybe the optimizer opted not to use the in-memory store for this query. I can check the execution plan to see if that was the case.


SQL> select * from dbms_xplan.display_cursor();

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------
SQL_ID  ay7fsgvavb6r4, child number 0
-------------------------------------
select max(object_details) from t

Plan hash value: 2966233522

------------------------------------------------------------------------------------
| Id  | Operation                   | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |      |       |       |    73 (100)|          |
|   1 |  SORT AGGREGATE             |      |     1 |    22 |            |          |
|   2 |   TABLE ACCESS INMEMORY FULL| T    |  1656K|    34M|    73  (20)| 00:00:01 |
------------------------------------------------------------------------------------

Even though the in-memory store has been populated, and the plan shows that an in-memory scan is being performed, additional care is required when combining the in-memory option with virtual columns. There is an initialization parameter that controls how the two interact.


SQL> show parameter inmemory_virtual_columns

NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------
inmemory_virtual_columns             string      MANUAL

The default setting of MANUAL means that it is the responsibility of the database administrator to explicitly nominate which virtual columns need to be placed into the in-memory store. So I’ll go ahead and do that.


SQL> alter table t inmemory(object_details);

Table altered.
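
Nominating virtual columns one at a time is not the only option. The parameter can instead be set to ENABLE (its documented values are ENABLE, MANUAL and DISABLE), which makes all virtual columns on in-memory tables eligible for population. A sketch, assuming you have the privileges to change the parameter:


-- Not used in this demo: with ENABLE, virtual columns on in-memory
-- tables no longer need per-column nomination.
alter system set inmemory_virtual_columns = enable;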

I’ll do a rescan of the table to prime the in-memory store and try again.


SQL> select count(*) from t;

  COUNT(*)
----------
   1656460

1 row selected.

SQL> select owner,segment_name,bytes,bytes_not_populated,populate_status from v$im_segments;

OWNER      SEGMENT_NA      BYTES BYTES_NOT_POPULATED POPULATE_STAT
---------- ---------- ---------- ------------------- -------------
MCDONAC    T            46047232                   0 COMPLETED

1 row selected.

SQL> select max(object_details)
  2  from t;

MAX(OBJECT_DETAILS)
-------------------
         2.12899839

1 row selected.

Elapsed: 00:00:05.30

Well, that was disappointing again. The query performance was still unchanged. If you refer back to the previous post, merely changing column metadata is not a catalyst for a (re)population of the in-memory store. Hence I will toggle the entire table to reset its in-memory status.


SQL> alter table t no inmemory;

Table altered.

SQL> alter table t inmemory;

Table altered.

SQL> alter table t inmemory(object_details);

Table altered.
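
As an aside, the DBMS_INMEMORY package also offers an on-demand repopulation routine, which a reader in the comments below suggests as a gentler alternative to the toggle, because NO INMEMORY discards any column-level in-memory settings you have defined. A sketch using the documented procedure; whether it yields the same end result for virtual columns is worth verifying on your own version:


-- Not used in this demo: request repopulation without toggling the
-- table's in-memory status.
begin
  dbms_inmemory.repopulate(
    schema_name => 'MCDONAC',
    table_name  => 'T',
    force       => true);
end;
/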

Now I will rescan the table to prime the in-memory store and try one more time.


SQL> select count(*) from t;

  COUNT(*)
----------
   1656460

1 row selected.

SQL> select owner,segment_name,bytes,bytes_not_populated,populate_status from V$IM_SEGMENTS;

OWNER      SEGMENT_NA      BYTES BYTES_NOT_POPULATED POPULATE_STAT
---------- ---------- ---------- ------------------- -------------
MCDONAC    T            46047232                   0 COMPLETED

1 row selected.

SQL> select max(object_details)
  2  from t;

MAX(OBJECT_DETAILS)
-------------------
         2.12899839

1 row selected.

Elapsed: 00:00:00.00

And there we go! The scan over the expensive virtual column now completes almost instantaneously, because the in-memory store has been populated with the pre-computed virtual column values.

Detecting the problem

You might be thinking: “Yes, but my virtual column is not a stack of nested SQRT calls, so how do I know if I have got things set up correctly?”

I’ll reset the table so that the OBJECT_DETAILS column is no longer in the in-memory store.


SQL> alter table t no inmemory;

Table altered.

SQL> alter table t inmemory;

Table altered.

SQL> select count(*) from t;

  COUNT(*)
----------
   1656460

1 row selected.

SQL> select owner,segment_name,bytes,bytes_not_populated,populate_status from v$im_segments;

OWNER      SEGMENT_NA      BYTES BYTES_NOT_POPULATED POPULATE_STAT
---------- ---------- ---------- ------------------- -------------
MCDONAC    T            46047232                   0 COMPLETED

At this point, the standard table columns are now populated into the in-memory store but the OBJECT_DETAILS column is not. I can query V$IM_COLUMN_LEVEL to check the in-memory metadata for each column.


SQL> select table_name, column_name, inmemory_compression
  2  from V$IM_COLUMN_LEVEL
  3  where table_name = 'T';

TABLE_NAME                     COLUMN_NAME                    INMEMORY_COMPRESSION
------------------------------ ------------------------------ --------------------
T                              OBJECT_DETAILS                 UNSPECIFIED
T                              OWNER                          DEFAULT
T                              OBJECT_TYPE                    DEFAULT
T                              OBJECT_ID                      DEFAULT
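
As an aside, that UNSPECIFIED value is a useful detection flag in its own right. Here is a sketch (assuming the V$IM_COLUMN_LEVEL columns shown above, plus the VIRTUAL_COLUMN flag in USER_TAB_COLS) that lists virtual columns on in-memory tables that have not yet been nominated:


-- A sketch, not from the original demo: virtual columns whose in-memory
-- compression is still UNSPECIFIED, i.e. not nominated for the store.
select c.table_name, c.column_name
from   v$im_column_level c,
       user_tab_cols     u
where  u.table_name     = c.table_name
and    u.column_name    = c.column_name
and    u.virtual_column = 'YES'
and    c.inmemory_compression = 'UNSPECIFIED';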

But as we saw above, just changing the metadata for OBJECT_DETAILS is not sufficient to force it into the in-memory store. I’ll nominate OBJECT_DETAILS, and you can see that V$IM_COLUMN_LEVEL immediately reflects that change, but the change alone does not force a repopulation of the in-memory store.


SQL> alter table t inmemory(object_details);

Table altered.


SQL> select table_name, column_name, inmemory_compression
  2  from v$im_column_level
  3  where table_name = 'T';

TABLE_NAME                     COLUMN_NAME                    INMEMORY_COMPRESSION
------------------------------ ------------------------------ --------------------
T                              OBJECT_DETAILS                 DEFAULT
T                              OWNER                          DEFAULT
T                              OBJECT_TYPE                    DEFAULT
T                              OBJECT_ID                      DEFAULT

Since V$IM_COLUMN_LEVEL does not tell us whether OBJECT_DETAILS has actually been populated into the in-memory store, we need to look elsewhere. A virtual column is an expression, so we need to look at the part of the in-memory store dedicated to holding expressions.


SQL> select count(*) from v$imeu_header;

  COUNT(*)
----------
         0

1 row selected.

You can see that this area is currently empty, which confirms that OBJECT_DETAILS has not been populated into the in-memory store. I’ll perform the steps listed earlier in the post to make sure that OBJECT_DETAILS is populated, and then we can see that V$IMEU_HEADER now contains some content.


SQL> alter table t no inmemory;

Table altered.

SQL> alter table t inmemory;

Table altered.

SQL> alter table t inmemory(object_details);

Table altered.

SQL> select count(*) from t;

  COUNT(*)
----------
   1656460

1 row selected.

SQL> select owner,segment_name,bytes,bytes_not_populated,populate_status from v$im_segments;

OWNER      SEGMENT_NA      BYTES BYTES_NOT_POPULATED POPULATE_STAT
---------- ---------- ---------- ------------------- -------------
MCDONAC    T            46047232                   0 COMPLETED

SQL> select count(*) from V$IMEU_HEADER;

  COUNT(*)
----------
         3

Since virtual columns are almost certainly going to be some sort of expression, evaluating them is always going to consume significant CPU if you are processing millions (or billions) of rows. The in-memory store can be a very useful means of reducing the cost of evaluating those expressions, but make sure you run your DDL commands in the correct sequence to guarantee that the virtual columns really are being stored in the in-memory area.
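
To wrap up, here is the sequence that worked in this demo, consolidated into a single sketch (using the same table and column names as above):


-- 1. Reset the table's in-memory status; this discards the old population.
alter table t no inmemory;

-- 2. Re-enable the table, then nominate the virtual column (required
--    while inmemory_virtual_columns is set to MANUAL).
alter table t inmemory;
alter table t inmemory(object_details);

-- 3. Prime the store with a scan, then verify both the population status
--    and the presence of expression data.
select count(*) from t;

select populate_status, bytes_not_populated
from   v$im_segments
where  segment_name = 'T';

select count(*) from v$imeu_header;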

You can watch the video version of this demo here.

Responses to “Getting the most out of in-memory – part 2”

Hello Connor,

I have just two short remarks regarding this excellent post series:

1. It looks a little bit conceptually inconsistent to me that you can define in-memory storage parameters for columns
*before* enabling the table itself for INMEMORY, but these definitions (shown in V$IM_COLUMN_LEVEL)
are dropped when you (re)set the table to NO INMEMORY, for example to trigger its repopulation in the column store.
So, regardless of the special case of virtual columns, the in-memory parameters you might have defined for other
columns are lost.
There is an example in the documentation that shows this:
https://docs.oracle.com/en/database/oracle/oracle-database/19/inmem/populating-objects-in-memory.html#GUID-9E123407-13BA-4566-933D-6E4473C8FE5C

In light of this behavior, couldn’t it be a better alternative to use DBMS_INMEMORY.REPOPULATE
for achieving on-demand repopulation, instead of ALTER TABLE, for (hopefully) preserving the column definitions?

2. The view V$IMEU_HEADER does not appear at all in the documentation (including 19c),
neither in the Database Reference nor in the In-Memory guide.
I think it is worth reporting this as a documentation miss.

Cheers & Best Regards,
Iudith
