One of the super cool features in 26ai is the ability to extend the VALUES clause in your INSERT statements to allow multiple rows per execution.
If you’re unfamiliar with that, here’s a quick video I did on this a while back.
This can be useful not just for brevity but also for performance, especially if you are pushing those INSERT statements across a network.
But of course that raises the question: What is the sweet spot?
Should I have 10 rows per INSERT? Maybe 50? Maybe 1000?
Even without running a test, there are some interesting trade-offs that might come into play here:
- Fewer trips to the database is usually a good thing.
- However, giant SQL statements with hundreds or thousands of literals take a long time to parse.
- Replacing those literals with bind variables would help (using cursor_sharing), but what is the cost of scanning long SQL statements to build that list of binds?
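The mechanics are easy to sketch before running anything. Here is a minimal Python sketch of chunking a row source into multi-row INSERT statements of the shape the harness below generates; the `multi_row_inserts` helper, the batch size, and the three-column sample rows are all illustrative, not part of the test itself.

```python
# Sketch: chunk a row source into multi-row INSERT statements,
# mirroring the shape of the statements the harness below generates.
def multi_row_inserts(rows, batch_size):
    """Yield one 'insert into perftest values ...' statement per batch."""
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        values = ",\n".join(
            "(" + ",".join(str(v) for v in row) + ")" for row in batch
        )
        yield f"insert into perftest values\n{values};"

rows = [(r, r + 1, r + 2) for r in range(1, 26)]   # 25 sample rows
stmts = list(multi_row_inserts(rows, 10))
print(len(stmts))                    # -> 3 (batches of 10, 10 and 5)
print(stmts[0].splitlines()[0])      # -> insert into perftest values
```

The trade-off question is then simply what `batch_size` should be.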
So here’s a little test harness you can use to see what’s best.
It will populate a sample table with 1,000,000 rows, and it will perform this task multiple times, each time using a different multiple of rows per INSERT statement.
Spoiler Alert: I quickly gave up on the non-cursor-sharing tests; they were all orders of magnitude slower than their cursor sharing equivalents, so we’ll focus on those.
Here’s the anonymous block to generate the required scripts. Edit the directory object TMP as per your own configuration.
drop table perftest purge;

create table perftest (
  n1 int,
  n2 int,
  n3 int,
  n4 int,
  n5 int,
  s1 varchar2(10),
  s2 varchar2(10),
  s3 varchar2(10),
  s4 varchar2(10),
  s5 varchar2(10),
  d1 date
) tablespace users;
declare
  f     utl_file.file_type;
  l_inc sys.odcinumberlist := sys.odcinumberlist(10,20,50,75,100,150,200,300,400,500,1000);
  qt    varchar2(1) := '''';
  cm    varchar2(1) := ',';

  procedure put(m varchar2) is
  begin
    utl_file.put_line(f,m);
  end;
begin
  for i in 1 .. l_inc.count loop
    f := utl_file.fopen('TMP','insperf'||l_inc(i)||'.sql','W');
    put('conn scott/tiger@db23');
    put('alter session set cursor_sharing = force;');
    put('truncate table perftest;');
    put('set feedback off');
    put('timing start');
    for r in 1 .. 1000000 loop
      if mod(r,l_inc(i)) = 1 then
        put('insert into perftest values');
      end if;
      put('('||r||cm||(r+1)||cm||(r+2)||cm||(r+3)||cm||(r+4)||cm||
          qt||(r+5)||qt||cm||qt||(r+6)||qt||cm||qt||(r+7)||qt||cm||qt||(r+8)||qt||cm||qt||(r+9)||qt||cm||
          'date '||qt||'2026-'||(mod(r,10)+1)||'-'||(mod(r,26)+1)||qt||')'||
          case when mod(r,l_inc(i)) = 0 or r = 1000000 then ';' else ',' end);
    end loop;
    put('timing stop');
    put('col name format a40');
    put('select s.name, st.value');
    put('from v$statname s, v$mystat st');
    put('where st.statistic# = s.statistic#');
    put('and s.name like ''%pga%'';');
    utl_file.fclose(f);
  end loop;
end;
/
At the end of this, I have a series of scripts, with each file name indicating the number of rows per INSERT statement:
insperf10.sql
insperf20.sql
insperf50.sql
insperf75.sql
insperf100.sql
insperf150.sql
insperf200.sql
insperf300.sql
insperf400.sql
insperf500.sql
insperf1000.sql
For example, the first few statements of insperf10.sql look like this:
conn scott/tiger@db23
alter session set cursor_sharing = force;
truncate table perftest;
set feedback off
timing start
-- 10 rows at a time, for 1 million rows
insert into perftest values
(1,2,3,4,5,'6','7','8','9','10',date '2026-2-2'),
(2,3,4,5,6,'7','8','9','10','11',date '2026-3-3'),
(3,4,5,6,7,'8','9','10','11','12',date '2026-4-4'),
(4,5,6,7,8,'9','10','11','12','13',date '2026-5-5'),
(5,6,7,8,9,'10','11','12','13','14',date '2026-6-6'),
(6,7,8,9,10,'11','12','13','14','15',date '2026-7-7'),
(7,8,9,10,11,'12','13','14','15','16',date '2026-8-8'),
(8,9,10,11,12,'13','14','15','16','17',date '2026-9-9'),
(9,10,11,12,13,'14','15','16','17','18',date '2026-10-10'),
(10,11,12,13,14,'15','16','17','18','19',date '2026-1-11');
insert into perftest values
(11,12,13,14,15,'16','17','18','19','20',date '2026-2-12'),
(12,13,14,15,16,'17','18','19','20','21',date '2026-3-13'),
(13,14,15,16,17,'18','19','20','21','22',date '2026-4-14'),
(14,15,16,17,18,'19','20','21','22','23',date '2026-5-15'),
(15,16,17,18,19,'20','21','22','23','24',date '2026-6-16'),
(16,17,18,19,20,'21','22','23','24','25',date '2026-7-17'),
(17,18,19,20,21,'22','23','24','25','26',date '2026-8-18'),
(18,19,20,21,22,'23','24','25','26','27',date '2026-9-19'),
(19,20,21,22,23,'24','25','26','27','28',date '2026-10-20'),
(20,21,22,23,24,'25','26','27','28','29',date '2026-1-21');
..
..
..
..
timing stop
col name format a40
select s.name, st.value
from v$statname s, v$mystat st
where st.statistic# = s.statistic#
and s.name like '%pga%';
I can now run each of those scripts against my 26ai database and check the results.
SQL> @c:\tmp\insperf10.sql
Connected.
Elapsed: 00:00:44.98
NAME VALUE
---------------------------------------- ----------
session pga memory 2324064
session pga memory max 3700320
SQL> @c:\tmp\insperf20.sql
Connected.
Elapsed: 00:00:29.70
NAME VALUE
---------------------------------------- ----------
session pga memory 2341128
session pga memory max 4897032
SQL> @c:\tmp\insperf50.sql
Connected.
Elapsed: 00:00:34.87
NAME VALUE
---------------------------------------- ----------
session pga memory 2520672
session pga memory max 7977224
SQL> @c:\tmp\insperf75.sql
Connected.
Elapsed: 00:00:47.73
NAME VALUE
---------------------------------------- ----------
session pga memory 2455136
session pga memory max 10729736
SQL> @c:\tmp\insperf100.sql
Connected.
Elapsed: 00:01:02.14
NAME VALUE
---------------------------------------- ----------
session pga memory 2782816
session pga memory max 13285640
SQL> @c:\tmp\insperf150.sql
Connected.
Elapsed: 00:01:29.12
NAME VALUE
---------------------------------------- ----------
session pga memory 2848352
session pga memory max 18528520
SQL> @c:\tmp\insperf200.sql
Connected.
Elapsed: 00:01:57.64
NAME VALUE
---------------------------------------- ----------
session pga memory 3438176
session pga memory max 23771400
SQL> @c:\tmp\insperf300.sql
Connected.
Elapsed: 00:02:51.60
NAME VALUE
---------------------------------------- ----------
session pga memory 2651744
session pga memory max 31242504
SQL> @c:\tmp\insperf400.sql
Connected.
Elapsed: 00:03:36.43
Surprising result! Bigger is not always better. It looks like it’s a non-trivial exercise to parse giant SQL statements and replace all of the literals with binds. In this case, the sweet spot seems to be as low as 20 rows per INSERT. Moreover, once we got over 100 rows, the PGA consumption started to grow significantly.
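To put numbers on that sweet spot, here is a quick throughput calculation using the elapsed times from the runs above (1,000,000 rows per run):

```python
# Elapsed seconds from the cursor_sharing runs above, keyed by rows-per-INSERT
timings = {
    10: 44.98, 20: 29.70, 50: 34.87, 75: 47.73, 100: 62.14,
    150: 89.12, 200: 117.64, 300: 171.60, 400: 216.43,
}

ROWS = 1_000_000
for batch, secs in sorted(timings.items()):
    print(f"{batch:>4} rows/INSERT: {ROWS / secs:>8.0f} rows/sec")

best = min(timings, key=timings.get)
print(f"fastest: {best} rows per INSERT")   # -> fastest: 20 rows per INSERT
```

At 20 rows per INSERT we push roughly 33,000 rows per second; at 400 rows per INSERT that drops below 5,000.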
A follow-up experiment is now needed. Is the cost of the INSERTs due to the number of rows, or is it just due to the number of binds? We can explore that by repeating our test with a slightly modified table that has fewer columns, which means fewer binds per INSERT statement for the same number of VALUES rows. A small tweak to the test harness lets us repeat the tests on a five-column table.
drop table perftest purge;

create table perftest (
  n1 int,
  n2 int,
  s1 varchar2(10),
  s2 varchar2(10),
  d1 date
) tablespace users;
declare
  f     utl_file.file_type;
  l_inc sys.odcinumberlist := sys.odcinumberlist(10,20,50,75,100,150,200,300,400,500,1000);
  qt    varchar2(1) := '''';
  cm    varchar2(1) := ',';

  procedure put(m varchar2) is
  begin
    utl_file.put_line(f,m);
  end;
begin
  for i in 1 .. l_inc.count loop
    f := utl_file.fopen('TMP','smallinsperf'||l_inc(i)||'.sql','W');
    put('conn scott/tiger@db23');
    put('alter session set cursor_sharing = force;');
    put('truncate table perftest;');
    put('set feedback off');
    put('timing start');
    for r in 1 .. 1000000 loop
      if mod(r,l_inc(i)) = 1 then
        put('insert into perftest values');
      end if;
      put('('||r||cm||(r+1)||cm||
          qt||(r+5)||qt||cm||qt||(r+6)||qt||cm||
          'date '||qt||'2026-'||(mod(r,10)+1)||'-'||(mod(r,26)+1)||qt||')'||
          case when mod(r,l_inc(i)) = 0 or r = 1000000 then ';' else ',' end);
    end loop;
    put('timing stop');
    put('col name format a40');
    put('select s.name, st.value');
    put('from v$statname s, v$mystat st');
    put('where st.statistic# = s.statistic#');
    put('and s.name like ''%pga%'';');
    utl_file.fclose(f);
  end loop;
end;
/
Now let’s perform the same tests and compare the output:
SQL> @c:\tmp\smallinsperf10.sql
Connected.
Elapsed: 00:00:26.57
NAME VALUE
---------------------------------------- ----------
session pga memory 3848456
session pga memory max 4372744
SQL> @c:\tmp\smallinsperf20.sql
Connected.
Elapsed: 00:00:23.31
NAME VALUE
---------------------------------------- ----------
session pga memory 2996488
session pga memory max 4372744
SQL> @c:\tmp\smallinsperf50.sql
Connected.
Elapsed: 00:00:16.92
NAME VALUE
---------------------------------------- ----------
session pga memory 3062024
session pga memory max 6535432
SQL> @c:\tmp\smallinsperf75.sql
Connected.
Elapsed: 00:00:15.26
NAME VALUE
---------------------------------------- ----------
session pga memory 2258528
session pga memory max 7190792
SQL> @c:\tmp\smallinsperf100.sql
Connected.
Elapsed: 00:00:16.13
NAME VALUE
---------------------------------------- ----------
session pga memory 2537736
session pga memory max 8829192
SQL> @c:\tmp\smallinsperf150.sql
Connected.
Elapsed: 00:00:21.20
NAME VALUE
---------------------------------------- ----------
session pga memory 2455136
session pga memory max 11974920
SQL> @c:\tmp\smallinsperf200.sql
Connected.
Elapsed: 00:00:29.48
NAME VALUE
---------------------------------------- ----------
session pga memory 2799880
session pga memory max 14989576
SQL> @c:\tmp\smallinsperf300.sql
Connected.
Elapsed: 00:00:41.59
NAME VALUE
---------------------------------------- ----------
session pga memory 2651744
session pga memory max 19577096
SQL> @c:\tmp\smallinsperf400.sql
Connected.
Elapsed: 00:00:56.40
NAME VALUE
---------------------------------------- ----------
session pga memory 3569248
session pga memory max 23050504
SQL> @c:\tmp\smallinsperf500.sql
Connected.
Elapsed: 00:01:12.44
NAME VALUE
---------------------------------------- ----------
session pga memory 2586208
session pga memory max 26130696
SQL>
So it would appear that the governing factor here is the total number of binds. Further exploration might reveal differences between numeric, character and other bind types, but ultimately the message here is:
Don’t go nuts with the number of rows.
Your database CPU and your database PGA will thank you for it.
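As a back-of-the-envelope check on that conclusion, here is the bind arithmetic for the two sweet spots observed above, assuming cursor_sharing=force turns every literal into a bind. The column and row counts come from the two tests; the "binds per statement" model is my rough interpretation, not something the tests measured directly.

```python
# binds per statement = rows-per-INSERT x columns
# (every literal becomes a bind under cursor_sharing=force)
wide_cols, narrow_cols = 11, 5        # the two test tables above
wide_sweet, narrow_sweet = 20, 75     # fastest rows-per-INSERT in each test

print(wide_sweet * wide_cols)         # -> 220 binds per statement
print(narrow_sweet * narrow_cols)     # -> 375 binds per statement
# Both sweet spots sit at a few hundred binds per statement, consistent with
# total bind count (rather than row count) being the governing factor.
```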




Got some thoughts? Leave a comment