I love Active Session History (ASH) data because a lot of the work I’ve done in my consulting life was “after the fact” diagnosis. By this I mean that many of us have been in a similar circumstance: the customer contacts you not when a problem is occurring, but only when you contact them for some other, potentially unrelated reason. At which point you will hear that dreaded sentence:
“Yeah, the Order Entry screen was really slow a couple of hours ago”
And this is where ASH is an awesome resource. With the ASH data available, there is a good chance you will be able to diagnose the issue without having to make an embarrassing request for the customer to repeat the task so that you can trace the underlying database activity. Because no-one likes to be the person that says:
“Yeah that performance must have really sucked for you … Hey, let’s do it again!”
But even ASH has its limitations, because sometimes the customer sentence is phrased slightly differently:
“Yeah, the Order Entry screen was really slow last Tuesday”
In this case, it is unlikely that the ASH data will still be available. Whilst a subset of the invaluable ASH data is retained in DBA_HIST_ACTIVE_SESS_HISTORY, I would prefer to keep the complete set of ASH data available for longer than the in-memory buffer makes practical. So I wrote myself a simple little routine that keeps all of the ASH data around for longer. It’s hardly rocket science – just a little partitioned table to capture V$ACTIVE_SESSION_HISTORY at regular intervals. Let’s walk through it so you can understand it and adapt it for your own use.
First I’ll create a partitioned table to hold the ASH data. I’m using partitioning to avoid the need to index the table, so the insertion cost is minimal. I’m partitioning by day and the code assumes this, so take care if you intend to modify it.
SQL> create table ash_hist
  2  partition by range (sample_time)
  3  interval( numtodsinterval(1,'day'))
  4  (partition p1 values less than (timestamp' 2017-01-01 00:00:00'))
  5  as select * from sys.gv_$active_session_history;

Table created.
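Because the table is interval partitioned, Oracle will silently add a fresh daily partition whenever a new day’s worth of sample_time values arrives – no maintenance required on our part. If you want to watch the partitions appear, a quick query of the dictionary will show them (this is just an illustrative query, nothing specific to the routine):

```sql
-- List the daily partitions Oracle has created automatically
select partition_name, high_value
from   user_tab_partitions
where  table_name = 'ASH_HIST'
order  by partition_position;
```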
Here is my procedure to capture the data. The essentials of the routine are:
- Starting with the most recent partition, find the last recorded entry in ASH_HIST. We’ll look back up to 10 days to find our starting point (hence the daily partitions).
- If there is no data for the last 10 days, we’ll bomb out, because we haven’t been running the routine frequently enough.
- Copy all the ASH data from this point to now into ASH_HIST using a nice efficient INSERT-APPEND, but we’ll skip the session that is doing the copying. (You can include it if you want just by removing line 8)
- Once per week (you can control this by tinkering with the IF conditions on line 34) we’ll drop the oldest partitions. By default I keep 90 days, but you can set this by altering “l_retention” on line 5.
SQL> CREATE OR REPLACE procedure save_ash_hist is
  2    l_hi_val_as_string varchar2(1000);
  3    l_hi_val_as_date   date;
  4    l_max_recorded     timestamp;
  5    l_retention        number := 90;
  6
  7  begin
  8    dbms_application_info.set_module('$$SAVE_ASH$$','');
  9    -- we are looping to take advantage
 10    -- of partition elimination
 11
 12    for i in 0 .. 10 loop
 13      select max(sample_time)
 14      into   l_max_recorded
 15      from   ash_hist
 16      where  sample_time > systimestamp - i;
 17
 18      exit when l_max_recorded is not null;
 19    end loop;
 20
 21    if l_max_recorded is null then
 22      raise_application_error(-20000,'No max sample time within 10 days');
 23    end if;
 24    dbms_output.put_line('Last copied time was '||l_max_recorded);
 25
 26    insert /*+ append */ into ash_hist
 27    select *
 28    from   sys.gv_$active_session_history
 29    where  sample_time > l_max_recorded
 30    and    ( module != '$$SAVE_ASH$$' or module is null );
 31    dbms_output.put_line('Copied '||sql%rowcount||' rows');
 32    commit;
 33
 34    if to_char(sysdate,'DYHH24') between 'TUE01' and 'TUE06' then
 35
 36      begin
 37        execute immediate 'alter table ash_hist set interval ()';
 38      exception
 39        when others then null;
 40      end;
 41      execute immediate 'alter table ash_hist set interval (NUMTODSINTERVAL(1,''DAY''))';
 42
 43      for i in ( select *
 44                 from   user_tab_partitions
 45                 where  table_name = 'ASH_HIST'
 46                 and    partition_position > 1
 47                 order  by partition_position )
 48      loop
 49        l_hi_val_as_string := i.high_value;
 50        execute immediate 'select '||l_hi_val_as_string||' from dual' into l_hi_val_as_date;
 51
 52        if l_hi_val_as_date < sysdate - l_retention then
 53          execute immediate 'alter table ash_hist drop partition '||i.partition_name;
 54        else
 55          exit;
 56        end if;
 57
 58      end loop;
 59    end if;
 60  end;
 61  /

Procedure created.
And that is all there is to it. Each time we run the procedure, we’ll grab all the ASH data since the last time we ran and keep it in ASH_HIST.
SQL> select count(*) from ash_hist;

  COUNT(*)
----------
       792

1 row selected.

SQL> exec save_ash_hist

PL/SQL procedure successfully completed.

SQL> select count(*) from ash_hist;

  COUNT(*)
----------
       819

1 row selected.
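And this is the payoff when that “last Tuesday” phone call arrives: diagnosing the problem becomes a straightforward query against ASH_HIST, and partition pruning on sample_time keeps it cheap. Here is a sketch – the module name and time window are hypothetical, so substitute your own. Remember that each ASH row is roughly a one-second sample, so the counts approximate seconds of database time per wait event:

```sql
-- Hypothetical example: what was the 'Order Entry' module waiting on
-- between 2pm and 3pm last Tuesday? ('ON CPU' means no wait event.)
select nvl(event,'ON CPU') as event,
       count(*)            as samples
from   ash_hist
where  sample_time between timestamp '2017-03-07 14:00:00'
                       and timestamp '2017-03-07 15:00:00'
and    module = 'Order Entry'
group  by nvl(event,'ON CPU')
order  by samples desc;
```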
A simple scheduler job to run the routine every couple of hours (I’m assuming your SGA holds at least 2 hours of samples in V$ACTIVE_SESSION_HISTORY – if not, you’d need to adjust the frequency) and you’re off and running.
SQL> BEGIN
  2    dbms_scheduler.create_job (
  3      job_name        => 'ASH_CAPTURE',
  4      job_type        => 'PLSQL_BLOCK',
  5      job_action      => 'save_ash_hist;',
  6      start_date      => CAST((TRUNC(SYSDATE,'HH') + (1/24) + (55/24/60)) AS TIMESTAMP), -- job commences at 55 mins past the next hour
  7      repeat_interval => 'FREQ=HOURLY; INTERVAL=2',
  8      enabled         => true,
  9      comments        => 'Permanent record of ASH data');
 10  END;
 11  /

PL/SQL procedure successfully completed.
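If you’re not sure whether two hours is a safe interval on your system, a quick sanity check is to see how far back the in-memory ASH buffer currently reaches. If the result is less than your job interval, schedule the job more frequently:

```sql
-- How many hours of samples does the in-memory ASH buffer hold right now?
select min(sample_time) as oldest_sample,
       round( (cast(systimestamp as date) - cast(min(sample_time) as date)) * 24, 1)
         as hours_retained
from   v$active_session_history;
```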
You can assume all of the standard disclaimers here. Use at own risk, blah blah, no warranty, blah blah, etc
Addenda: I should add that you could write this complete level of detail directly to DBA_HIST_ACTIVE_SESS_HISTORY via tinkering with “_ash_disk_filter_ratio” , but please get the endorsement of Support first.