I love Active Session History (ASH) data because a lot of the work I’ve done in my consulting life has been “after the fact” diagnosis. By this I mean that many of us have been in a similar circumstance, where the customer contacts you not when a problem is occurring, but only when you contact them for some other, potentially unrelated, reason. At which point you will hear that dreaded sentence:
“Yeah, the Order Entry screen was really slow a couple of hours ago”
And this is where ASH is an awesome resource. With the ASH data available, there is a good chance you will be able to diagnose the issue without having to make an embarrassing request for the customer to repeat the task so that you can trace the underlying database activity. Because no-one likes to be the person that says:
“Yeah that performance must have really sucked for you … Hey, let’s do it again!”
But even ASH has its limitations, because sometimes the customer’s sentence is phrased slightly differently:
“Yeah, the Order Entry screen was really slow last Tuesday”
In this case, it is unlikely that the ASH data will still be available. Whilst a subset of the invaluable ASH data is retained in DBA_HIST_ACTIVE_SESS_HISTORY, I would prefer to keep the complete set of ASH data available for longer than is practical to hold in memory. So I wrote myself a simple little routine that keeps all of the ASH data around for longer. It’s hardly rocket science – just a little partitioned table to capture V$ACTIVE_SESSION_HISTORY at regular intervals. Let’s walk through it so you can understand it and adapt it for your own use.
First I’ll create a partitioned table to hold the ASH data. I’m using partitioning to avoid the need to index the table, so the insertion cost is minimal. I’m partitioning by day and the code assumes this, so take care if you intend to modify it.
SQL> create table ash_hist
2 partition by range (sample_time)
3 interval( numtodsinterval(1,'day'))
4 (partition p1 values less than (timestamp' 2017-01-01 00:00:00'))
5 as select * from sys.gv_$active_session_history;
Table created.
Here is my procedure to capture the data. The essentials of the routine are:
- Starting with the most recent partition, find the last recorded entry in ASH_HIST. We’ll look back up to 10 days to find our starting point (hence the daily partitions).
- If there is no data for the last 10 days, we’ll bomb out, because we haven’t been running the routine frequently enough.
- Copy all the ASH data from this point to now into ASH_HIST using a nice efficient INSERT-APPEND, but we’ll skip the session that is doing the copying. (You can include it if you want just by removing line 8)
- Once per week (you can control this by tinkering with the IF conditions on line 34) we’ll drop the oldest partitions. By default I keep 90 days, but you can set this by altering “l_retention” on line 5.
SQL>
SQL> CREATE OR REPLACE procedure save_ash_hist is
2 l_hi_val_as_string varchar2(1000);
3 l_hi_val_as_date date;
4 l_max_recorded timestamp;
5 l_retention number := 90;
6
7 begin
8 dbms_application_info.set_module('$$SAVE_ASH$$','');
9 -- we are looping to take advantage
10 -- of partition elimination
11
12 for i in 0 .. 10 loop
13 select max(sample_time)
14 into l_max_recorded
15 from ash_hist
16 where sample_time > systimestamp - i;
17
18 exit when l_max_recorded is not null;
19 end loop;
20
21 if l_max_recorded is null then
22 raise_application_error(-20000,'No max sample time within 10 days');
23 end if;
24 dbms_output.put_line('Last copied time was '||l_max_recorded);
25
26 insert /*+ append */ into ash_hist
27 select *
28 from sys.gv_$active_session_history
29 where sample_time > l_max_recorded
30 and ( module != '$$SAVE_ASH$$' or module is null );
31 dbms_output.put_line('Copied '||sql%rowcount||' rows');
32 commit;
33
34 if to_char(sysdate,'DYHH24') between 'TUE01' and 'TUE06' then
35
36 begin
37 execute immediate 'alter table ash_hist set interval ()';
38 exception
39 when others then null;
40 end;
41 execute immediate 'alter table ash_hist set interval (NUMTODSINTERVAL(1,''DAY''))';
42
43 for i in ( select *
44 from user_tab_partitions
45 where table_name = 'ASH_HIST'
46 and partition_position > 1
47 order by partition_position )
48 loop
49 l_hi_val_as_string := i.high_value;
50 execute immediate 'select '||l_hi_val_as_string||' from dual' into l_hi_val_as_date;
51
52 if l_hi_val_as_date < sysdate - l_retention then
53 execute immediate 'alter table ash_hist drop partition '||i.partition_name;
54 else
55 exit;
56 end if;
57
58 end loop;
59 end if;
60 end;
61 /
Procedure created.
And that is all there is to it. Each time we run the procedure, we’ll grab all the ASH data since the last time we ran and keep it in ASH_HIST.
SQL>
SQL> select count(*) from ash_hist;
COUNT(*)
----------
792
1 row selected.
SQL>
SQL> exec save_ash_hist
PL/SQL procedure successfully completed.
SQL>
SQL> select count(*) from ash_hist;
COUNT(*)
----------
819
1 row selected.
A simple scheduler job to run the routine every couple of hours (I’m assuming your SGA holds at least 2 hours of samples in V$ACTIVE_SESSION_HISTORY – if not, you’d need to adjust the frequency) and you’re off and running.
SQL>
SQL>
SQL> BEGIN
2 dbms_scheduler.create_job (
3 job_name => 'ASH_CAPTURE',
4 job_type => 'PLSQL_BLOCK',
5 job_action => 'save_ash_hist;',
6 start_date => CAST((TRUNC(SYSDATE,'HH') + (1/24) + (55/24/60)) AS TIMESTAMP), -- job commences at 55 mins past the next hour
7 repeat_interval => 'FREQ=HOURLY; INTERVAL=2',
8 enabled => true,
9 comments => 'Permanent record of ASH data');
10 END;
11 /
PL/SQL procedure successfully completed.
You can assume all of the standard disclaimers here. Use at own risk, blah blah, no warranty, blah blah, etc
Addenda: I should add that you could write this complete level of detail directly to DBA_HIST_ACTIVE_SESS_HISTORY via tinkering with “_ash_disk_filter_ratio” , but please get the endorsement of Support first.
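For what it’s worth, that change would look something along these lines. This is a sketch only – it is a hidden parameter, the value of 1 is my assumption for “flush every sample”, and it should never be touched without Support’s endorsement:

```sql
-- Sketch only: by default only every 10th ASH sample is flushed to
-- disk (and hence into DBA_HIST_ACTIVE_SESS_HISTORY). A ratio of 1
-- would flush every sample. Get Oracle Support's blessing first.
alter system set "_ash_disk_filter_ratio" = 1 scope = spfile;
-- an instance restart may be required for the change to take effect
```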
Neat!
My own scripts to capture ASH to history tables didn’t use Interval Partitioning (why didn’t I think of that?!).
Hi Connor,
how about just changing the retention time to 90 days?
sys.dbms_workload_repository.modify_snapshot_settings(retention => 90*24*60);
Hi Chris
But in AWR you only have every 10th sample of ASH. Of course you could change this with the hidden parameter _ash_disk_filter_ratio
Connor,
Why not just base line the AWR snap’s -so that it won’t be purged by the retention policy and the details remains in DBA_HIST_ACTIVE_SESSION_HISTORY dictionary itself.
Won’t that be better than this manual approach?
https://docs.oracle.com/cd/E11882_01/server.112/e41573/autostat.htm#PFGRF94176
I think in this case you’ll get the standard information in DBA_HIST_ACTIVE_SESSION_HISTORY which is not all of the entries in V$ACTIVE_SESSION_HISTORY.
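For reference, pinning the relevant snapshots with a baseline would look something like this (the snapshot IDs and baseline name here are hypothetical – you would pick the snapshots covering your period of interest):

```sql
-- Hypothetical snapshot IDs: a baseline stops the AWR retention
-- policy from purging these snapshots, but note it still preserves
-- only the filtered ASH samples, not the full V$ data.
begin
  dbms_workload_repository.create_baseline(
    start_snap_id => 1234,
    end_snap_id   => 1240,
    baseline_name => 'ORDER_ENTRY_SLOWDOWN');
end;
/
```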
Hi Connor,
Can we capture and retain the data from other gv$ views in a similar fashion?
I ran into a typical case where one-day-old data (from the time when the issue occurred) was present in gv$active_session_history.
So I had all the SQL IDs that were executed during the time period in question, but I was not able to find the associated SQL text for all of them.
DBA_HIST_ACTIVE_SESS_HISTORY only contained a subset and not all the data.
At the same time, the SQL details had aged out of all the gv$ views that contain SQL ID and SQL text information.
In addition to this, the session statistics were also missing.
This was on Oracle 11.2.0.4, so even the SQL Monitoring report history was not there.
In 12c as well, SQL Monitoring will by default only kick in if a SQL statement runs for more than 4 seconds or in parallel. So there is still a gap, and it will not capture the data for all the SQL from the gv$ views.
Please suggest.
You can build as little or as much as you like. Obviously, the more you capture then the more overhead in capturing it, and the more overhead in storing it. You need to decide on the cost/benefit.
Not all V$ views are as “cheap” as ASH, which is queried without any latching.
Also note that querying GV$ can be very expensive as all rows will pass through the query coordinator. Best to query V$ from each instance.
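As a sketch of that idea (assuming ASH_HIST was created from GV$ as above, so its first column is INST_ID), a local capture run on each instance in turn might look like this; the bind variable name is my own:

```sql
-- Sketch only: run this connected to each instance separately.
-- V$ACTIVE_SESSION_HISTORY has no INST_ID column, so we supply the
-- local instance number ourselves. :last_copied is assumed to hold
-- the most recent SAMPLE_TIME already stored for this instance.
insert /*+ append */ into ash_hist
select to_number(sys_context('userenv','instance')) inst_id, a.*
from   v$active_session_history a
where  a.sample_time > :last_copied;
```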
Hi Connor,
Thanks for the script … just wondering if you can explain what the piece of code below does. Searching the web, it seems it converts the interval-partitioned table to a range-partitioned table (set interval ()), but can you explain why this code is needed?
34 if to_char(sysdate,'DYHH24') between 'TUE01' and 'TUE06' then
35
36 begin
37 execute immediate 'alter table ash_hist set interval ()';
38 exception
39 when others then null;
40 end;
41 execute immediate 'alter table ash_hist set interval (NUMTODSINTERVAL(1,''DAY''))';
If you are on 12.2 and above, it won’t be needed. Before that release, at least one partition had to be a “range” partition (instead of an “interval” one) to serve as a “starting” point, which meant you could never drop the oldest partition. The workaround above resets the existing interval partitions to range partitions so that things can keep moving forward over time. In 12.2, this is done automatically.
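By way of illustration, on 12.2 and above the oldest partition can (as I understand it) simply be dropped directly, with no interval reset first. A sketch, assuming the original partition name from the CREATE TABLE above:

```sql
-- 12.2+ sketch: the anchoring range partition of an
-- interval-partitioned table can be dropped directly, and the
-- transition point moves forward automatically.
alter table ash_hist drop partition p1;
```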