There is a cool little utility in $ORACLE_HOME/rdbms/admin called “awrextr.sql”.
When you run the script, the opening prompt describes the action that will be taken:
SQL> @awrextr
~~~~~~~~~~~~~
AWR EXTRACT
~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~ This script will extract the AWR data for a range of snapshots ~
~ into a dump file. The script will prompt users for the ~
~ following information: ~
~ (1) database id ~
~ (2) snapshot range to extract ~
~ (3) name of directory object ~
~ (4) name of dump file ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
That all seems straightforward enough, but you might be in for a surprise when you run the script and discover your SYSAUX tablespace has ballooned out to potentially twice the size it was when you started the script.
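Before running the script, you can gauge your exposure by checking how much of SYSAUX the AWR currently occupies and how much free space the tablespace has to absorb the temporary copies. A quick sanity-check sketch using standard dictionary views (nothing specific to awrextr):

```sql
-- How much of SYSAUX does the AWR currently consume?
select occupant_name, space_usage_kbytes
from   v$sysaux_occupants
where  occupant_name = 'SM/AWR';

-- How much free space does SYSAUX currently have?
select tablespace_name, round(sum(bytes)/1024/1024) free_mb
from   dba_free_space
where  tablespace_name = 'SYSAUX'
group  by tablespace_name;
```

If the free space is not comfortably larger than the SM/AWR footprint, read on before launching a full-repository extract.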
To explain why, let's first look at the full output from a run of the script:
SQL> @awrextr
~~~~~~~~~~~~~
AWR EXTRACT
~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~ This script will extract the AWR data for a range of snapshots ~
~ into a dump file. The script will prompt users for the ~
~ following information: ~
~ (1) database id ~
~ (2) snapshot range to extract ~
~ (3) name of directory object ~
~ (4) name of dump file ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Databases in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
DB Id DB Name Host
------------ ------------ ------------
* 1163970465 DB21 BAILEY
The default database id is the local one: '1163970465'. To use this
database id, press <return> to continue, otherwise enter an alternative.
Enter value for dbid:
Using 1163970465 for Database ID
Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed. Pressing <return> without
specifying a number lists all completed snapshots.
Enter value for num_days: 1
Listing the last day's Completed Snapshots
DB Name Snap Id Snap Started
------------ --------- ------------------
DB21 22376 06 May 2025 00:58
22377 06 May 2025 01:58
22378 06 May 2025 02:58
22379 06 May 2025 03:58
22380 06 May 2025 04:58
22381 06 May 2025 05:58
22382 06 May 2025 06:58
22383 06 May 2025 07:58
22384 06 May 2025 08:58
22385 06 May 2025 09:58
22386 06 May 2025 10:58
22387 06 May 2025 11:58
Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 22376
Begin Snapshot Id specified: 22376
Enter value for end_snap: 22387
End Snapshot Id specified: 22387
Specify the Directory Name
~~~~~~~~~~~~~~~~~~~~~~~~~~
Directory Name Directory Path
------------------------------ -------------------------------------------------
DATA_PUMP_DIR C:\oracle\admin\db21\dpdump
ORACLE_BASE C:\oracle
ORACLE_HOME C:\oracle\product\21
...
...
Choose a Directory Name from the above list (case-sensitive).
Enter value for directory_name: DATA_PUMP_DIR
Using the dump directory: DATA_PUMP_DIR
Specify the Name of the Extract Dump File
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The prefix for the default dump file name is awrdat_22376_22387.
To use this name, press <return> to continue, otherwise enter
an alternative.
Enter value for file_name:
Using the dump file prefix: awrdat_22376_22387
|
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| The AWR extract dump file will be located
| in the following directory/file:
| C:\oracle\admin\db21\dpdump/
| awrdat_22376_22387.dmp
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
| *** AWR Extract Started ...
|
| This operation will take a few moments. The
| progress of the AWR extract operation can be
| monitored in the following directory/file:
| C:\oracle\admin\db21\dpdump/
| awrdat_22376_22387.log
|
End of AWR Extract
You can see that we nominated a start and end snapshot ID, and a directory where DataPump would unload the data, and the net result was a DataPump export file containing the extract.
If we look at the DataPump log, we see the usual AWR tables with their unloaded data.
Starting "SYS"."SYS_EXPORT_TABLE_01":
W-1 Startup on instance 1 took 1 seconds
W-1 Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
W-1 Processing object type TABLE_EXPORT/TABLE/TABLE
W-1 Completed 141 TABLE objects in 0 seconds
W-1 . . exported "SYS"."WRH$$1_$$_ACTIVE_SESSION_HISTORY" 278.2 KB 928 rows in 0 seconds using direct_path
W-1 . . exported "SYS"."WRH$$1_$$_ASM_BAD_DISK" 0 KB 0 rows in 0 seconds using direct_path
W-1 . . exported "SYS"."WRH$$1_$$_ASM_DISKGROUP" 0 KB 0 rows in 0 seconds using direct_path
W-1 . . exported "SYS"."WRH$$1_$$_ASM_DISKGROUP_STAT" 0 KB 0 rows in 0 seconds using direct_path
W-1 . . exported "SYS"."WRH$$1_$$_ASM_DISK_STAT_SUMMARY" 0 KB 0 rows in 0 seconds using direct_path
W-1 . . exported "SYS"."WRH$$1_$$_AWR_TEST_1" 0 KB 0 rows in 0 seconds using direct_path
W-1 . . exported "SYS"."WRH$$1_$$_BG_EVENT_SUMMARY" 85.74 KB 1668 rows in 0 seconds using direct_path
W-1 . . exported "SYS"."WRH$$1_$$_BUFFERED_QUEUES" 21.58 KB 60 rows in 0 seconds using direct_path
W-1 . . exported "SYS"."WRH$$1_$$_BUFFERED_SUBSCRIBERS" 30.78 KB 108 rows in 0 seconds using direct_path
W-1 . . exported "SYS"."WRH$$1_$$_BUFFER_POOL_STATISTICS" 17.94 KB 24 rows in 0 seconds using direct_path
W-1 . . exported "SYS"."WRH$$1_$$_CELL_CONFIG" 0 KB 0 rows in 0 seconds using direct_path
W-1 . . exported "SYS"."WRH$$1_$$_CELL_CONFIG_DETAIL" 0 KB 0 rows in 0 seconds using direct_path
W-1 . . exported "SYS"."WRH$$1_$$_CELL_DB" 0 KB 0 rows in 0 seconds using direct_path
W-1 . . exported "SYS"."WRH$$1_$$_CELL_DISK_SUMMARY" 0 KB 0 rows in 0 seconds using direct_path
W-1 . . exported "SYS"."WRH$$1_$$_CELL_GLOBAL" 0 KB 0 rows in 0 seconds using direct_path
[snip]
W-1 . . exported "SYS"."WRM$$1_$$_PDB_IN_SNAP" 8.671 KB 36 rows in 0 seconds using direct_path
W-1 . . exported "SYS"."WRM$$1_$$_SNAPSHOT" 12.58 KB 12 rows in 0 seconds using direct_path
W-1 . . exported "SYS"."WRM$$1_$$_SNAPSHOT_DETAILS" 74.68 KB 1416 rows in 0 seconds using direct_path
W-1 . . exported "SYS"."WRM$$1_$$_SNAP_ERROR" 0 KB 0 rows in 0 seconds using direct_path
W-1 . . exported "SYS"."WRM$$1_$$_WR_CONTROL" 16.14 KB 1 rows in 0 seconds using direct_path
W-1 Completed 141 TABLE_EXPORT/TABLE/TABLE_DATA objects in 2 seconds
W-1 Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
C:\ORACLE\ADMIN\DB21\DPDUMP\AWRDAT_22376_22387.DMP
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Tue May 6 12:25:12 2025 elapsed 0 00:00:22
But take a closer look. Notice something a little odd about those table names? Whilst a lot of internal Oracle dictionary table names contain a “$” in their name, there seems to be a surplus 🙂 of “$” in these table names.
If we dig into the extract processing with some tracing, we can get to the bottom of what is going on here. Let’s say you nominate a snapshot ID range of 100 to 200. The extract process does not simply pass that range as a predicate to DataPump; instead it uses those values to create new tables, each a subset of the corresponding true repository table. Thus in the trace file, you’ll see a series of CREATE TABLE commands along the lines of:
create table WRM$$1_$$_DATABASE_INSTANCE tablespace SYSAUX as
select * from WRM$_DATABASE_INSTANCE where snap_id between 100 and 200
create table WRM$$1_$$_SNAPSHOT tablespace SYSAUX as
select * from WRM$_SNAPSHOT where snap_id between 100 and 200
create table WRM$$1_$$_WR_CONTROL tablespace SYSAUX as
select * from WRM$_WR_CONTROL where snap_id between 100 and 200
create table WRH$$1_$$_STAT_NAME tablespace SYSAUX as
select * from WRH$_STAT_NAME where snap_id between 100 and 200
create table WRH$$1_$$_PARAMETER_NAME tablespace SYSAUX as
select * from WRH$_PARAMETER_NAME where snap_id between 100 and 200
create table WRH$$1_$$_EVENT_NAME tablespace SYSAUX as
select * from WRH$_EVENT_NAME where snap_id between 100 and 200
create table WRH$$1_$$_LATCH_NAME tablespace SYSAUX as
select * from WRH$_LATCH_NAME where snap_id between 100 and 200
...
...
and so forth for all the tables required in the extract. If you are using this script to get an extract of all of the snapshots in your AWR repository, then you are going to need SYSAUX to be able to accommodate (at least temporarily) double the existing size of your AWR repository until DataPump has completed its extract.
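While an extract is running (or if a failed run has left debris behind), you can see these transient copies in the dictionary. A sketch, assuming the double-dollar naming shown in the trace above:

```sql
-- Transient extract copies are named WRH$$<n>_$$_... / WRM$$<n>_$$_...
-- ("$" is a literal character to LIKE, not a wildcard)
select segment_name, round(sum(bytes)/1024/1024) mb
from   dba_segments
where  tablespace_name = 'SYSAUX'
and   (segment_name like 'WRH$$%' or segment_name like 'WRM$$%')
group  by segment_name
order  by 2 desc;
```

The total of that query is roughly the extra SYSAUX space the extract is consuming at that moment.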
If you were really tight on space (for example, if you were doing this on the Express/Free editions of the database, where an expansion of SYSAUX could cause all sorts of problems), then you could conceivably use the database trace file to extract all of the CREATE TABLE commands, run them manually into their own tablespace, and then run DataPump against that. I’d consider this a last resort, because that would be a script you would have to revisit every single time you patched or upgraded the database. It might be better to do multiple extracts, each with a small snapshot ID range.
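Those multiple small extracts can be scripted rather than run interactively, because awrextr.sql is just a prompting wrapper around the dbms_swrf_internal.awr_extract procedure. A hedged sketch of a chunked extract: the procedure is internal and undocumented, so verify the call against your version's copy of awrextr.sql before relying on it, and note that the chunk size of 50 snapshots here is an arbitrary choice:

```sql
declare
  l_begin  number := 100;  -- first snapshot to extract
  l_end    number := 200;  -- last snapshot to extract
  l_chunk  number := 50;   -- snapshots per dump file (arbitrary)
  l_lo     number := l_begin;
  l_hi     number;
begin
  while l_lo <= l_end loop
    l_hi := least(l_lo + l_chunk - 1, l_end);
    -- the same call awrextr.sql makes under the covers
    dbms_swrf_internal.awr_extract(
      dmpfile => 'awrdat_' || l_lo || '_' || l_hi,
      dmpdir  => 'DATA_PUMP_DIR',
      bid     => l_lo,
      eid     => l_hi);
    l_lo := l_hi + 1;
  end loop;
end;
/
```

Each iteration creates (and cleans up) a much smaller set of transient tables, so the peak extra SYSAUX demand is bounded by the chunk size rather than the whole repository.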
But just be aware of this temporary space requirement, and make sure your SYSAUX can handle it.