For pretty much as long as I can remember, the view V$SESSION_LONGOPS has existed. It has a suite of columns, but almost every usage I’ve ever seen has been about one thing – namely, predicting how long an operation will take to finish. Before the advent of SQL Monitoring, which lets you dig deeply into the state of a running query, this was a useful nugget of information that could drive progress indicators and the like.
To see a simple example of the data that the view captures, I created a massive table on my local database, and also stored it in a tablespace on an old slow IDE drive plugged into my laptop via USB (because SSD/NVMe drives are so damn quick nowadays!).
SQL> create table bigtab as
2 select d.* from dba_objects d,
3 ( select 1 from dual connect by level <= 1000 );
Table created.
SQL>
SQL> select bytes from dba_segments
2 where segment_name = 'BIGTAB';
BYTES
----------
1.3355E+10
SQL>
SQL> select bytes/1024/1024/1024 from dba_segments
2 where segment_name = 'BIGTAB';
BYTES/1024/1024/1024
--------------------
12.4375
Now I’ll set a long query running against this 12GB table.
SQL> set timing on
SQL> select owner, sum(object_id)
2 from bigtab
3 group by owner;
(running)
and in another session we’ll see what information we can glean from V$SESSION_LONGOPS:
SQL> select * from v$session_longops
2 where username = 'MCDONAC'
3 @pr
==============================
SID : 735
SERIAL# : 29950
OPNAME : Table Scan
TARGET : MCDONAC.BIGTAB
TARGET_DESC :
SOFAR : 414988
TOTALWORK : 1622079
UNITS : Blocks
START_TIME : 11-FEB-22
LAST_UPDATE_TIME : 11-FEB-22
TIMESTAMP :
TIME_REMAINING : 79
ELAPSED_SECONDS : 17
CONTEXT : 0
MESSAGE : Table Scan: MCDONAC.BIGTAB: 414988 out of 1622079 Blocks done
USERNAME : MCDONAC
SQL_ADDRESS : 00007FFBF9F9CB98
SQL_HASH_VALUE : 896413448
SQL_ID : ggtzwg4uqwbs8
SQL_PLAN_HASH_VALUE : 4245726685
SQL_EXEC_START : 11-FEB-22
SQL_EXEC_ID : 16777216
SQL_PLAN_LINE_ID : 2
SQL_PLAN_OPERATION : TABLE ACCESS
SQL_PLAN_OPTIONS : FULL
QCSID : 0
CON_ID : 3
PL/SQL procedure successfully completed.
The columns of interest here are ELAPSED_SECONDS and TIME_REMAINING, because the sum of these gives an estimate of the total query execution time. The data is updated in (near) real time as execution continues, for example:
SQL> select * from v$session_longops
2 where username = 'MCDONAC'
3 @pr
==============================
SID : 735
SERIAL# : 29950
OPNAME : Table Scan
TARGET : MCDONAC.BIGTAB
TARGET_DESC :
SOFAR : 783196
TOTALWORK : 1622079
UNITS : Blocks
START_TIME : 11-FEB-22
LAST_UPDATE_TIME : 11-FEB-22
TIMESTAMP :
TIME_REMAINING : 64
ELAPSED_SECONDS : 32
CONTEXT : 0
MESSAGE : Table Scan: MCDONAC.BIGTAB: 783196 out of 1622079 Blocks done
USERNAME : MCDONAC
SQL_ADDRESS : 00007FFBF9F9CB98
SQL_HASH_VALUE : 896413448
SQL_ID : ggtzwg4uqwbs8
SQL_PLAN_HASH_VALUE : 4245726685
SQL_EXEC_START : 11-FEB-22
SQL_EXEC_ID : 16777216
SQL_PLAN_LINE_ID : 2
SQL_PLAN_OPERATION : TABLE ACCESS
SQL_PLAN_OPTIONS : FULL
QCSID : 0
CON_ID : 3
PL/SQL procedure successfully completed.
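If you just want the headline numbers rather than the full row, a simple query over the view gives a progress percentage and an estimated total duration. This is just a sketch – add whatever SID/username filters suit your environment:

```sql
-- Sketch: progress and estimated total duration for active long operations.
select opname,
       target,
       round(100 * sofar / totalwork, 1)   pct_done,
       elapsed_seconds + time_remaining    est_total_seconds
from   v$session_longops
where  totalwork > 0
and    sofar < totalwork;   -- still running
```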
and we also get some useful information in the MESSAGE column. We can see where the value 1622079 comes from for the table scan by taking a look at the optimizer statistics in the data dictionary.
SQL> select blocks
2 from user_tables
3 where table_name = 'BIGTAB';
BLOCKS
----------
1622079
So based on the timing data, we are expecting an execution time of around 96 seconds. Let’s see how accurate that was:
SQL> set timing on
SQL> select owner, sum(object_id)
2 from bigtab
3 group by owner;
OWNER SUM(OBJECT_ID)
------------------------------ --------------
SYS 2.9871E+12
SYSTEM 5142557000
DBSNMP 1211624000
APPQOSSYS 123415000
DEMO 1.2677E+10
51 rows selected.
Elapsed: 00:01:44.63
The true execution time was 104 seconds. The slightly longer duration is to be expected, because the longops information covered only the table scan; there are additional costs in the rest of the execution, i.e., grouping the data, potentially spilling to temporary storage, and so on. Note that once a query has completed, its information may remain in V$SESSION_LONGOPS, representing the final state of the execution.
SQL> select * from v$session_longops
2 where username = 'MCDONAC'
3 @pr
==============================
SID : 735
SERIAL# : 29950
OPNAME : Table Scan
TARGET : MCDONAC.BIGTAB
TARGET_DESC :
SOFAR : 1622079
TOTALWORK : 1622079
UNITS : Blocks
START_TIME : 11-FEB-22
LAST_UPDATE_TIME : 11-FEB-22
TIMESTAMP :
TIME_REMAINING : 0
ELAPSED_SECONDS : 104
CONTEXT : 0
MESSAGE : Table Scan: MCDONAC.BIGTAB: 1622079 out of 1622079 Blocks done
USERNAME : MCDONAC
SQL_ADDRESS : 00007FFBF9F9CB98
SQL_HASH_VALUE : 896413448
SQL_ID : ggtzwg4uqwbs8
SQL_PLAN_HASH_VALUE : 4245726685
SQL_EXEC_START : 11-FEB-22
SQL_EXEC_ID : 16777216
SQL_PLAN_LINE_ID : 2
SQL_PLAN_OPERATION : TABLE ACCESS
SQL_PLAN_OPTIONS : FULL
QCSID : 0
CON_ID : 3
PL/SQL procedure successfully completed.
For more complicated queries, the operation may involve multiple steps, and thus V$SESSION_LONGOPS can have an entry for each long-running step (sorts and the like).
But there’s a slight snag here. If a table is huge, or a query is expected to be long running, one of the most commonly used techniques to improve its performance is to throw more server grunt at it, i.e., run it in parallel.
Parallel worker threads take the single large task of scanning the table and break it up into smaller chunks. If I scan a 1000-block table with 4 parallel workers, then whilst in total they should end up scanning approximately 250 blocks each, the database hands out much smaller chunks of data to try to ensure an even division of work. For example, if one worker exhausts its small chunk of blocks, it can be allocated another chunk immediately whilst the others are still busy.
Here’s the same query this time executed in parallel:
SQL> set timing on
SQL> select /*+ parallel(8) */ owner, sum(object_id)
2 from bigtab
3 group by owner;
(running)
Whilst that was running, I took a look at v$session_longops – here’s the output:
SQL> select * from v$session_longops
2 where username = 'MCDONAC'
3 @pr
==============================
SID : 1109
SERIAL# : 52999
OPNAME : Rowid Range Scan
TARGET : MCDONAC.BIGTAB
TARGET_DESC :
SOFAR : 9853
TOTALWORK : 15568
UNITS : Blocks
START_TIME : 11-FEB-22
LAST_UPDATE_TIME : 11-FEB-22
TIMESTAMP :
TIME_REMAINING : 5
ELAPSED_SECONDS : 7
CONTEXT : 0
MESSAGE : Rowid Range Scan: MCDONAC.BIGTAB: 9853 out of 15568 Blocks done
USERNAME : MCDONAC
SQL_ADDRESS : 00007FFECCABF580
SQL_HASH_VALUE : 364932852
SQL_ID : 6a4r6pnaw0vrn
SQL_PLAN_HASH_VALUE : 3136705186
SQL_EXEC_START : 11-FEB-22
SQL_EXEC_ID : 16777220
SQL_PLAN_LINE_ID : 8
SQL_PLAN_OPERATION : TABLE ACCESS
SQL_PLAN_OPTIONS : FULL
QCSID : 124
CON_ID : 3
==============================
SID : 737
SERIAL# : 17228
OPNAME : Rowid Range Scan
TARGET : MCDONAC.BIGTAB
TARGET_DESC :
SOFAR : 9604
TOTALWORK : 15568
UNITS : Blocks
START_TIME : 11-FEB-22
LAST_UPDATE_TIME : 11-FEB-22
TIMESTAMP :
TIME_REMAINING : 5
ELAPSED_SECONDS : 7
CONTEXT : 0
MESSAGE : Rowid Range Scan: MCDONAC.BIGTAB: 9604 out of 15568 Blocks done
USERNAME : MCDONAC
SQL_ADDRESS : 00007FFECCABF580
SQL_HASH_VALUE : 364932852
SQL_ID : 6a4r6pnaw0vrn
SQL_PLAN_HASH_VALUE : 3136705186
SQL_EXEC_START : 11-FEB-22
SQL_EXEC_ID : 16777220
SQL_PLAN_LINE_ID : 8
SQL_PLAN_OPERATION : TABLE ACCESS
SQL_PLAN_OPTIONS : FULL
QCSID : 124
CON_ID : 3
==============================
...
... (8 rows in total)
...
PL/SQL procedure successfully completed.
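Because each parallel worker gets its own row in the view, you can aggregate them by the query coordinator’s SID (the QCSID column) to get a combined picture of chunk progress. Again, just a sketch:

```sql
-- Sketch: one line per running parallel query, summing worker progress.
select qcsid,
       count(*)                              active_workers,
       sum(sofar)                            blocks_done,
       max(elapsed_seconds + time_remaining) est_secs_per_chunk
from   v$session_longops
where  qcsid > 0            -- parallel workers only
and    sofar < totalwork
group  by qcsid;
```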
Once a query is running in parallel, there is no longer anything in the view that directly says “Hey, this is how long things are going to take.”
However, we can get some clues from the information that is there. We can see that each chunk being scanned by a parallel worker is 15568 blocks (“Rowid Range Scan: MCDONAC.BIGTAB: 9604 out of 15568 Blocks done”).
We know from USER_TABLES that there are 1622079 blocks in the table, so from these two facts we can deduce:
SQL> select 1622079 / 15568 from dual;
1622079/15568
-------------
104.193153
that there will need to be about 104 of these small chunk scans. The elapsed / remaining figures for a chunk add up to 12 seconds (I told you this table is on an old slow IDE drive!), which means the total scanning time is:
SQL> select 104*12 from dual;
104*12
----------
1248
Since we have 8 parallel workers going at this concurrently, this means the estimated query execution time will be:
SQL> select 1248 / 8 from dual;
1248/8
----------
156
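That whole back-of-the-envelope calculation can be rolled into a single query. This is only a sketch: it assumes the degree of parallelism is known to be 8, that a chunk scan row for our table is still active in the view, and that the dictionary block count is up to date.

```sql
-- Sketch: estimate total runtime of the parallel scan by combining chunk
-- progress from V$SESSION_LONGOPS with the block count from the dictionary.
select round( (t.blocks / l.totalwork)                -- ~number of chunks
            * (l.elapsed_seconds + l.time_remaining)  -- seconds per chunk
            / 8 )                                     -- spread over 8 workers
       est_total_seconds
from   v$session_longops l,
       user_tables t
where  t.table_name = 'BIGTAB'
and    l.opname     = 'Rowid Range Scan'
and    l.target     = 'MCDONAC.BIGTAB'
and    rownum       = 1;   -- any one worker's row will do
```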
Thus our estimate is that the query will take around 2 minutes 36 seconds to complete. Let’s see what really happened:
SQL> select /*+ parallel(8) */ owner, sum(object_id)
2 from bigtab
3 group by owner;
OWNER SUM(OBJECT_ID)
------------------------------ --------------
SYS 2.9489E+12
MYUSER 932761000
DVF 1595229000
ORDSYS 3.5752E+10
...
49 rows selected.
Elapsed: 00:02:47.18
In terms of an estimate, we’re still pretty close. Funnily enough, because I’ve thrown 8 workers at my poor USB-attached drive, the overall execution time has actually gone up compared to the serial run, so there’s an important lesson there as well for DBAs – don’t overstretch your I/O subsystem with parallel operations!
So if you’re not licensed for SQL Monitor, or you’re building some facilities to let ad-hoc query customers know roughly how long their queries have to run until completion, a little bit of data mining into V$SESSION_LONGOPS might be just what you need, even when running queries in parallel.
That @pr looks cool 🙂
Grab it from here:
https://github.com/connormcd/misc-scripts/tree/master/sql