Probably the most common usage for large objects (CLOBs and BLOBs) is to store them in a database table. In that case, it seems intuitive that memory won't be much of a concern, because the database simply stores those objects in datafiles like any other kind of data.
But BLOBs and CLOBs can also be declared as local variables in a PL/SQL block. We typically expect local variables to be housed within the memory for that session (the PGA). There are explicit calls in the DBMS_LOB package to create a temporary large object, but what if we do not use that API? What if we just keep bludgeoning a local variable with more and more data? Is this a threat to the session memory, and potentially to the database server?
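For reference, the explicit API mentioned above looks like this. A minimal sketch, assuming the standard DBMS_LOB package (CACHE and the DBMS_LOB.SESSION duration are the usual choices):

```sql
declare
  c clob;
begin
  -- explicitly create a temporary LOB for the duration of the session
  dbms_lob.createtemporary(c, cache => true, dur => dbms_lob.session);
  dbms_lob.writeappend(c, 5, 'Hello');
  dbms_output.put_line(dbms_lob.getlength(c));
  -- release the temporary allocation when done
  dbms_lob.freetemporary(c);
end;
/
```

But the tests below deliberately skip CREATETEMPORARY and just assign straight into a local variable.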
Time for a test!
To see what happens when we keep growing a BLOB, I’m going to monitor two attributes for a session:
- How much PGA is it consuming, and
- Whether it is using any temporary tablespace allocation.
To keep my code nice and compact, I'll determine in advance the statistic number for the PGA memory figure I want to monitor.
SQL> select name
2 from v$statname
3 where statistic# = 40;
NAME
--------------------------------
session pga memory max
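As an aside, statistic numbers are not guaranteed to be stable across database versions, so if you want to avoid hardcoding 40, a sketch of the same lookup joined on the statistic name:

```sql
select s.value
from   v$mystat   s,
       v$statname n
where  s.statistic# = n.statistic#
and    n.name = 'session pga memory max';
```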
Before doing anything, I'll check the maximum amount of PGA my session has used so far in its lifetime.
SQL> select 'pga '||value val
2 from v$mystat
3 where statistic# = 40;
VAL
--------------------------------
pga 2585904
Now I'll start with a simple routine that appends 10 kilobytes to an existing 1 kilobyte BLOB. We wouldn't expect such a variable to cause any problems, because this is less than what would easily fit in a VARCHAR2 anyway.
SQL> set serverout on
SQL> declare
2 l_raw blob := utl_raw.cast_to_raw(lpad('X',1000,'X'));
3 c blob;
4 begin
5 c := l_raw;
6 for i in 1 .. 10
7 loop
8 dbms_lob.writeappend (c, utl_raw.length(l_raw), l_raw);
9 end loop;
10
11 for i in (
12 select 'pga '||value val
13 from v$mystat
14 where statistic# = 40
15 union all
16 select 'blk '||blocks from v$sort_usage
17 where username = 'MCDONAC'
18 union all
19 select 'len '||dbms_lob.getlength(c) from dual
20 )
21 loop
22 dbms_output.put_line(i.val);
23 end loop;
24 end;
25 /
pga 2863896
len 11000
PL/SQL procedure successfully completed.
Notice also that the query portion on line 16, which reports any temporary space usage, has not returned any rows. So everything here is being done in the PGA. Let's ramp things up a little. Now I'll push to 100KB, which exceeds the 32,767-byte limit of a standard RAW or VARCHAR2 local variable.
SQL> set serverout on
SQL> declare
2 l_raw blob := utl_raw.cast_to_raw(lpad('X',1000,'X'));
3 c blob;
4 begin
5 c := l_raw;
6 for i in 1 .. 100
7 loop
8 dbms_lob.writeappend (c, utl_raw.length(l_raw), l_raw);
9 end loop;
10
11 for i in (
12 select 'pga '||value val
13 from v$mystat
14 where statistic# = 40
15 union all
16 select 'blk '||blocks from v$sort_usage
17 where username = 'MCDONAC'
18 union all
19 select 'len '||dbms_lob.getlength(c) from dual
20 )
21 loop
22 dbms_output.put_line(i.val);
23 end loop;
24 end;
25 /
pga 3453720
len 101000
PL/SQL procedure successfully completed.
Our PGA has jumped a little, and there is still no temporary space usage. So let's push onwards and see if I can blow up my session PGA. Now I'll push 10 megabytes into the BLOB.
SQL> set serverout on
SQL> declare
2 l_raw blob := utl_raw.cast_to_raw(lpad('X',1000,'X'));
3 c blob;
4 begin
5 c := l_raw;
6 for i in 1 .. 10000
7 loop
8 dbms_lob.writeappend (c, utl_raw.length(l_raw), l_raw);
9 end loop;
10
11 for i in (
12 select 'pga '||value val
13 from v$mystat
14 where statistic# = 40
15 union all
16 select 'blk '||blocks from v$sort_usage
17 where username = 'MCDONAC'
18 union all
19 select 'len '||dbms_lob.getlength(c) from dual
20 )
21 loop
22 dbms_output.put_line(i.val);
23 end loop;
24 end;
25 /
pga 2994968
blk 1280
blk 128
len 10001000
PL/SQL procedure successfully completed.
You can see what the database is doing here. Even though the BLOB is declared as a local variable, it is being handled like a true temporary BLOB. We store it in the PGA while it fits within the database's overall management of PGA memory, but once it gets large, we'll utilise temporary storage to ensure our session does not chew up excessive memory. This allows me to write as much data as I like into the variable without harming the PGA. Here's the BLOB pushed out to 200 megabytes, yet I'm still consuming just over 3 megabytes of PGA.
SQL> set serverout on
SQL> declare
2 l_raw blob := utl_raw.cast_to_raw(lpad('X',1000,'X'));
3 c blob;
4 begin
5 c := l_raw;
6 for i in 1 .. 200000
7 loop
8 dbms_lob.writeappend (c, utl_raw.length(l_raw), l_raw);
9 end loop;
10
11 for i in (
12 select 'pga '||value val
13 from v$mystat
14 where statistic# = 40
15 union all
16 select 'blk '||blocks from v$sort_usage
17 where username = 'MCDONAC'
18 union all
19 select 'len '||dbms_lob.getlength(c) from dual
20 )
21 loop
22 dbms_output.put_line(i.val);
23 end loop;
24 end;
25 /
pga 3584792
blk 24576
blk 128
len 200001000
PL/SQL procedure successfully completed.
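One last point worth noting: because the local variable really is a temporary LOB under the covers, you don't have to wait for the block or session to end to hand that temporary space back. A small sketch, assuming the standard DBMS_LOB API, confirming the variable with ISTEMPORARY and releasing it with FREETEMPORARY:

```sql
declare
  c blob;
begin
  c := utl_raw.cast_to_raw(lpad('X',1000,'X'));
  -- a local LOB populated this way is a temporary LOB under the covers
  dbms_output.put_line('temp='||dbms_lob.istemporary(c));
  -- hand any temporary tablespace allocation back immediately
  dbms_lob.freetemporary(c);
end;
/
```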
(picture credit: @jorgezapatag on Unsplash)
Thanks for the post, Connor. Tried this on a test db at work (same config as prod), and noticed that even with dbms_lob.writeappend FOR LOOP commented out, and padding reduced to 1, pga stays same & blk jumps to 128. What config controls this, and any good reasons for DBAs to have set them this way?
It might be that your pga_aggregate_target is set very low, so we quickly dump out to TEMP.
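To check that setting (a sketch; querying v$parameter needs the appropriate privileges):

```sql
SQL> show parameter pga_aggregate_target

-- or query the catalog directly
select name, display_value
from   v$parameter
where  name = 'pga_aggregate_target';
```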