[Image: a truck overfilled with cut logs driving on the highway]

Don’t run tight on PGA


The database on my laptop seemed slow this morning. Normally I would put this down to the fact that I’ve got 30 million browser tabs open and never seem to be able to close them down 🙂, but today things seemed especially slow, even just navigating around in SQL Developer and the like.

So I threw a trace on some sessions, and very quickly something stood out like a sore thumb in my trace files. Hundreds and hundreds of lines complaining that I was running too close to the wire with my PGA settings.


WAIT #1926887211704: nam='acknowledge over PGA limit' ela= 24953 limit=4294967296 margin=27959537 growth=0 obj#=40 tim=2335418232685
WAIT #1926887211704: nam='PGA memory operation' ela= 19 p1=65536 p2=1 p3=0 obj#=40 tim=2335418232811
WAIT #1926887211704: nam='acknowledge over PGA limit' ela= 15108 limit=4294967296 margin=27959537 growth=0 obj#=40 tim=2335418248411
WAIT #1926887211704: nam='PGA memory operation' ela= 27 p1=65536 p2=1 p3=0 obj#=40 tim=2335418248620
WAIT #1926887211704: nam='acknowledge over PGA limit' ela= 14368 limit=4294967296 margin=27959537 growth=0 obj#=40 tim=2335418263592
WAIT #1926887211704: nam='PGA memory operation' ela= 10 p1=65536 p2=1 p3=0 obj#=40 tim=2335418263673
WAIT #1926887211704: nam='acknowledge over PGA limit' ela= 30287 limit=4294967296 margin=27959537 growth=0 obj#=40 tim=2335418294198
WAIT #1926887211704: nam='PGA memory operation' ela= 10 p1=65536 p2=1 p3=0 obj#=40 tim=2335418294278
WAIT #1926887211704: nam='acknowledge over PGA limit' ela= 15963 limit=4294967296 margin=27959537 growth=0 obj#=40 tim=2335418310461
WAIT #1926887211704: nam='PGA memory operation' ela= 12 p1=65536 p2=1 p3=0 obj#=40 tim=2335418310556
WAIT #1926887211704: nam='acknowledge over PGA limit' ela= 15108 limit=4294967296 margin=27959537 growth=0 obj#=40 tim=2335418325726
WAIT #1926887211704: nam='PGA memory operation' ela= 14 p1=65536 p2=1 p3=0 obj#=40 tim=2335418325836
WAIT #1926887211704: nam='acknowledge over PGA limit' ela= 31366 limit=4294967296 margin=27959537 growth=0 obj#=40 tim=2335418357485
WAIT #1926887211704: nam='PGA memory operation' ela= 38 p1=65536 p2=1 p3=0 obj#=40 tim=2335418357678
WAIT #1926887211704: nam='acknowledge over PGA limit' ela= 13391 limit=4294967296 margin=27959537 growth=0 obj#=40 tim=2335418372768
WAIT #1926887211704: nam='PGA memory operation' ela= 45 p1=65536 p2=1 p3=0 obj#=40 tim=2335418373020
WAIT #1926887211704: nam='acknowledge over PGA limit' ela= 11527 limit=4294967296 margin=27959537 growth=0 obj#=40 tim=2335418387982
WAIT #1926887211704: nam='PGA memory operation' ela= 41 p1=65536 p2=1 p3=0 obj#=40 tim=2335418388263
WAIT #1926887211704: nam='acknowledge over PGA limit' ela= 29858 limit=4294967296 margin=27959537 growth=0 obj#=40 tim=2335418419179
WAIT #1926887211704: nam='PGA memory operation' ela= 27 p1=65536 p2=1 p3=0 obj#=40 tim=2335418419416
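
For the record, a quick way to get this kind of trace is DBMS_MONITOR. A minimal sketch, where the SID and SERIAL# are placeholders for the session of interest:

SQL> begin
  2    dbms_monitor.session_trace_enable(
  3      session_id => 123, serial_num => 4567,
  4      waits      => true, binds => false);
  5  end;
  6  /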

Whilst my laptop runs waaayy too many databases, and as such I have their PGAs and SGAs sized quite small, this is not something I normally see, so I started to dig around to see where all that PGA was being consumed. Here’s a simple query to show which sessions are holding a lot of PGA at this particular moment.


SQL> select s.sid, s.serial#, s.program, s.event, st.value
  2  from v$session s,
  3       v$sesstat st,
  4       v$statname sn
  5  where st.value > 100*1024*1024
  6  and st.sid = s.sid
  7  and st.statistic# = sn.statistic#
  8  and sn.name = 'session pga memory';

       SID    SERIAL# PROGRAM                        EVENT                               VALUE
---------- ---------- ------------------------------ ------------------------------ ----------
       497      10012 ORACLE.EXE (Q00F)              EMON slave idle wait            774071832
       737      35280 ORACLE.EXE (Q00H)              EMON slave idle wait            891446808
       860      43180 ORACLE.EXE (Q00I)              EMON slave idle wait            782394904
      1108      37179 ORACLE.EXE (Q00K)              EMON slave idle wait            773940760
      1228      52195 ORACLE.EXE (Q00L)              EMON slave idle wait            446785048
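
If you want the instance-wide picture rather than a per-session one, V$PGASTAT lets you compare total allocation against your target. A minimal sketch (the statistic names below are the standard ones in recent releases):

SQL> select name, round(value/1024/1024) mb
  2  from   v$pgastat
  3  where  name in ('aggregate PGA target parameter',
  4                  'total PGA allocated',
  5                  'maximum PGA allocated');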

It turns out that my older Windows 19c version (which is lagging behind in RUs) is hitting a known issue with EMON background threads. If you see something similar on your system, please get in touch with Support for patches. In my case, I simply bounced the database to free things up, but in a true production setting, you could kill these sessions and reset aq_tm_processes to its existing value (1 in my case) in order to kick-start the EMON threads back into action.
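
If you do go down the non-bounce route, the "reset" is nothing more exotic than setting the parameter back to the value it already has, which, as above, kicks the EMON threads back into action. A sketch, using the first SID/SERIAL# from the output above and whatever your current aq_tm_processes value happens to be:

SQL> alter system kill session '497,10012' immediate;
SQL> alter system set aq_tm_processes = 1;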

Lesson learned: Keep your software up to date to pick up the latest bug fixes!

Before I bounced the system, I was still curious about the slowness – these EMON sessions had been floating around for a while, so I thought that whilst my queries might be slow initially, once they got the PGA they needed, they would be fine after that. But my observation was that any kind of navigation around in the data dictionary with SQL Developer was still really slow. So I dug around some more in the trace files, and I found something else that was popping up frequently.


WAIT #1926623986200: nam='PX Deq: Join ACK' ela= 1275 sleeptime/senderid=1 passes=1 p3=0 obj#=159794 tim=2335414181723
WAIT #1926623986200: nam='PX Deq: Join ACK' ela= 5 sleeptime/senderid=0 passes=0 p3=0 obj#=159794 tim=2335414181803
WAIT #1926623986200: nam='PX Deq: Parse Reply' ela= 8134 sleeptime/senderid=2 passes=2 p3=0 obj#=159794 tim=2335414190164
WAIT #1926623986200: nam='PX Deq: Parse Reply' ela= 112569 sleeptime/senderid=4 passes=4 p3=0 obj#=159794 tim=2335414302786
WAIT #1926623986200: nam='PX Deq: Execute Reply' ela= 183140 sleeptime/senderid=5 passes=5 p3=0 obj#=159794 tim=2335414610874
WAIT #1926623986200: nam='PX Deq: Execute Reply' ela= 108 sleeptime/senderid=1 passes=1 p3=0 obj#=159794 tim=2335414611037
WAIT #1926623986200: nam='PX Deq: Execute Reply' ela= 3545459 sleeptime/senderid=27 passes=27 p3=0 obj#=159794 tim=2335418159420
WAIT #1926623986200: nam='PX Deq: Execute Reply' ela= 119 sleeptime/senderid=1 passes=1 p3=0 obj#=159794 tim=2335418159646
WAIT #1926623986200: nam='PX Deq: Signal ACK EXT' ela= 5464 sleeptime/senderid=1 passes=1 p3=0 obj#=159794 tim=2335418165163
WAIT #1926623986200: nam='PX Deq: Signal ACK EXT' ela= 6 sleeptime/senderid=0 passes=0 p3=0 obj#=159794 tim=2335418165210
WAIT #1926623986200: nam='PX Deq: Slave Session Stats' ela= 119 sleeptime/senderid=1 passes=1 p3=0 obj#=159794 tim=2335418165358
WAIT #1926623986200: nam='PX Deq: Slave Session Stats' ela= 4 sleeptime/senderid=0 passes=0 p3=0 obj#=159794 tim=2335418165398

Lots of parallel activity going on. But a ‘grep’ of the trace file revealed no PARALLEL hints, and similarly, my database is a “parallel free” zone 🙂


SQL> select owner, table_name, degree
  2  from   dba_tables
  3  where  trim(degree)  not in ( '1','DEFAULT');

no rows selected
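
Indexes carry their own DEGREE attribute as well, so a similar check against DBA_INDEXES is worth a look (a sketch in the same style):

SQL> select owner, index_name, degree
  2  from   dba_indexes
  3  where  trim(degree)  not in ( '1','DEFAULT');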

The reason you’ll see (lots of) parallel activity on any recent version of the Oracle database, even if nothing is “natively” parallel, is that in the multitenant architecture, a pluggable database contains references back to the root container for certain dictionary information. That information can be shared between the root and multiple pluggables, all being sourced from the root. Thus when internal dictionary queries are executed, they can spawn parallel threads in order to span the entire container architecture. Here’s a simple example of that:


SQL> set autotrace traceonly explain
SQL> select * from dba_procedures;

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------------------------

Plan hash value: 310276515

----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name               | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |                    | 20000 |  8378K|     1 (100)| 00:00:01 |       |       |        |      |            |
|   1 |  PX COORDINATOR            |                    |       |       |            |          |       |       |        |      |            |
|   2 |   PX SEND QC (RANDOM)      | :TQ10000           | 20000 |  8378K|     1 (100)| 00:00:01 |       |       |  Q1,00 | P->S | QC (RAND)  |
|   3 |    PX PARTITION LIST ALL   |                    | 20000 |  8378K|     1 (100)| 00:00:01 |     1 |     2 |  Q1,00 | PCWC |            |
|   4 |     EXTENDED DATA LINK FULL| INT$DBA_PROCEDURES | 20000 |  8378K|     1 (100)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
----------------------------------------------------------------------------------------------------------------------------------------------

Thus when you combine a PGA shortage with parallel threads being launched, you can end up with a lot of trauma on your database. Each parallel thread is its own session, so you get a “storm” of sessions coming into existence simultaneously, all asking for their own allocation of PGA at the same time. If you are already running tight on PGA, performance might drop dramatically.
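
If you want to see how many of those PX server sessions are in flight at a given moment, V$PX_PROCESS is a quick check. A minimal sketch:

SQL> select server_name, status, sid, serial#
  2  from   v$px_process;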

So remember to keep an eye on your PGA allocations, and take a closer look if any session is holding onto a lot of PGA for an excessively long time. You’ll start to see PGA-related events in V$SYSTEM_EVENT and in AWR reports if you’re generating them. These can help you home in on where you might need to take some reparative action.
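
For example, something along these lines will show whether the PGA limit event from my trace file has been accumulating since instance startup. A sketch:

SQL> select event, total_waits, round(time_waited_micro/1e6,1) seconds
  2  from   v$system_event
  3  where  event = 'acknowledge over PGA limit';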

In my case, I just should not have been lazy with my RUs 🙂

2 comments

  1. How can I find the place in my PL/SQL that is using the PGA?
    Maybe by setting some event on the session to turn on a trace? Or something different?

    PS: The program unit contains so much code that it’s very hard to see everything.
