When you are querying rows from an Oracle Database, a key element of the Oracle JDBC driver related to performance is an attribute known as the fetch size.
For those unfamiliar with the fetch size: when you ask our JDBC driver for a single row, it grabs a batch of rows behind the scenes and then feeds them to you one at a time. This reduces round trips to the database and improves efficiency, and by default the batch size is 10 rows.
Some colleagues were discussing the origins of the value 10, i.e., why was it chosen, etc. I don’t have an answer to that, but in the back-and-forth of the discussion, some important discoveries were made 😀
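To make the round-trip saving concrete, here is a tiny standalone sketch — just a model of batched fetching, not the driver's actual internals — counting the trips needed to deliver a set of rows at a given fetch size:

```java
// Toy model of batched fetching: one round trip per full or partial
// batch of rows. This is an illustration, not the Oracle driver itself.
class RoundTrips {

    static long roundTrips(long rows, int fetchSize) {
        return (rows + fetchSize - 1) / fetchSize; // ceiling division
    }

    public static void main(String[] args) {
        System.out.println(roundTrips(1000, 10));   // default of 10 => 100 trips
        System.out.println(roundTrips(1000, 500));  // 500 => just 2 trips
    }
}
```

Fifty times fewer trips for the same 1000 rows — that is the whole game here.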
Here’s a simple Java program you’re welcome to try on your own database. It creates a largish table, fetches all of the rows from it, and then displays some performance metrics. Just change the connection details and you should be good to go.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class FetchSize {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection("jdbc:oracle:thin:MYUSER/MYPASS@//MYHOST/MYDB")) {
        // try (Connection con = DriverManager.getConnection("jdbc:oracle:thin:MYUSER/MYPASS@//MYHOST/MYDB?defaultRowPrefetch=1000")) {
            try (Statement stmt = con.createStatement()) {
                System.out.println("Creating test table");
                try {
                    stmt.execute("drop table tmp_fetch_demo purge");
                }
                catch (SQLException ignored) {
                }
                stmt.execute("create table tmp_fetch_demo as select a.* from all_objects a, ( select 1 from dual connect by level <= 50) ");
                // stmt.execute("""
                //     create table tmp_fetch_demo as
                //     select a.*
                //     from all_objects a, (select 1 from dual connect by level <= 50)""");
                System.out.println("Done.");

                // If you want to trace this
                //
                // stmt.execute("alter session set tracefile_identifier = FETCH");
                // stmt.execute("alter session set sql_trace = true");

                int newFetchSize;
                if (args.length > 0 && (newFetchSize = Integer.parseInt(args[0])) > 0) {
                    System.out.println("Setting JDBC Statement default fetch size to " + newFetchSize);
                    stmt.setFetchSize(newFetchSize);
                }
                else if (stmt.getFetchSize() == 10) {
                    System.out.println("Keeping default JDBC Statement default fetch size (" + stmt.getFetchSize() + ")");
                }
                else {
                    System.out.println("JDBC Statement default fetch size set to " + stmt.getFetchSize() + " using connection string parameter");
                }

                try (ResultSet rs = stmt.executeQuery("select object_id, object_name from tmp_fetch_demo")) {
                    System.out.println("Querying all rows");
                    long startTime = System.nanoTime();
                    long cnt = 0;
                    while (rs.next()) {
                        int v1 = rs.getInt(1);
                        String v2 = rs.getString(2);
                        cnt++;
                    }
                    System.out.println("ResultSet fetch size " + rs.getFetchSize());
                    long duration = (System.nanoTime() - startTime) / 1000000L;
                    long tp = (long) ((double) cnt / duration * 1000d);
                    System.out.printf("Rows fetched %d%nDuration %d ms%nThroughput %d rows/sec%n", cnt, duration, tp);
                }
            }
        }
    }
}
(Don’t flame me if my Java isn’t great – I’m a database guy 😀)
If you prefer, you can just grab it from here https://github.com/connormcd/misc-scripts/blob/master/FetchSize.java
We can use this program to confirm the default fetch size by passing “0” (note: 0 is not a valid fetch size; the code simply leaves the fetch size untouched when 0 is passed)
x:\temp\java>java FetchSize 0
Creating test table
Done.
Keeping default JDBC Statement default fetch size (10)
Querying all rows
ResultSet fetch size 10
Rows fetched 4072000
Duration 11471 ms
Throughput 354982 rows/sec
I fetched a little over 4 million rows in just over 11 seconds. Note that this is perhaps a little pessimistic because I’ve also got a database trace enabled to capture the fetch statistics, but we can assume we’re going to get 350,000 rows per second or thereabouts.
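As a quick sanity check, the throughput figure is just the row count divided by the elapsed time — the same arithmetic the program itself uses:

```java
// Reproduce the throughput arithmetic from the run above:
// 4,072,000 rows in 11,471 ms with the default fetch size of 10.
class ThroughputCheck {

    static long rowsPerSec(long rows, long durationMs) {
        return (long) ((double) rows / durationMs * 1000d);
    }

    public static void main(String[] args) {
        System.out.println(rowsPerSec(4072000L, 11471L)); // prints 354982
    }
}
```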
When I take a look at the database trace, I can see the following:
FETCH #1998675021656:c=0,e=5,p=0,cr=1,cu=0,mis=0,r=10,dep=0,og=1,plh=1575980263,tim=179310469275
FETCH #1998675021656:c=0,e=5,p=0,cr=1,cu=0,mis=0,r=10,dep=0,og=1,plh=1575980263,tim=179310469549
FETCH #1998675021656:c=0,e=9,p=0,cr=1,cu=0,mis=0,r=10,dep=0,og=1,plh=1575980263,tim=179310469830
FETCH #1998675021656:c=0,e=6,p=0,cr=1,cu=0,mis=0,r=10,dep=0,og=1,plh=1575980263,tim=179310469986
The fetches were done in batches of 10. (This is the “r=10” entry on each line).
350,000 rows per second sounds impressive, but we can do a lot better than that. Just to show how critical the fetch size can be to performance, I’ll now repeat the run with a fetch size of 500.
x:\temp\java>java FetchSize 500
Creating test table
Done.
Setting JDBC Statement default fetch size to 500
Querying all rows
ResultSet fetch size 500
Rows fetched 4072000
Duration 1058 ms
Throughput 3848771 rows/sec
Wow! I got a 10-fold boost in performance. Now we are in millions-of-rows-per-second territory. I can confirm the fetch size of 500 via the trace file:
FETCH #1997623590632:c=0,e=38,p=0,cr=8,cu=0,mis=0,r=500,dep=0,og=1,plh=1575980263,tim=179351278248
FETCH #1997623590632:c=0,e=53,p=15,cr=9,cu=0,mis=0,r=500,dep=0,og=1,plh=1575980263,tim=179351278572
FETCH #1997623590632:c=0,e=33,p=0,cr=9,cu=0,mis=0,r=500,dep=0,og=1,plh=1575980263,tim=179351278890
FETCH #1997623590632:c=0,e=56,p=15,cr=8,cu=0,mis=0,r=500,dep=0,og=1,plh=1575980263,tim=179351279189
Also note that I’m running this entirely locally, i.e., the Java program is running on the database server, so there are no network transmission costs here. If my app server were separated from the database server, then fetch size would matter even more, because every round trip would also incur a network cost.
But this is not to say that you should just crank the fetch size up to extreme levels. You typically get diminishing returns above 500 (depending on various factors, such as the network).
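To see why the returns diminish, a back-of-the-envelope model helps: total time is roughly (round trips × per-trip latency) + (rows × per-row processing cost), and once the batch is large the per-row term dominates, so bigger batches barely move the needle. Here is a minimal sketch — the 0.1 ms per-trip and 0.0002 ms per-row costs are made-up illustrative numbers, not measurements:

```java
// Back-of-the-envelope model of total fetch time vs fetch size.
// latencyMs and perRowMs are ASSUMED illustrative costs, not measurements.
class FetchModel {

    static double modelMs(long rows, int fetchSize, double latencyMs, double perRowMs) {
        long trips = (rows + fetchSize - 1) / fetchSize; // one trip per batch
        return trips * latencyMs + rows * perRowMs;
    }

    public static void main(String[] args) {
        long rows = 4_072_000L;
        for (int size : new int[] {10, 500, 5000, 50000}) {
            // huge win from 10 -> 500, then the per-row cost dominates
            System.out.printf("fetch size %6d -> ~%.0f ms%n",
                    size, modelMs(rows, size, 0.1d, 0.0002d));
        }
    }
}
```

The shape matches the real runs above: an enormous jump from 10 to 500, and only marginal gains beyond that.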
x:\temp\java>java FetchSize 5000
Creating test table
Done.
Setting JDBC Statement default fetch size to 5000
Querying all rows
ResultSet fetch size 5000
Rows fetched 4072000
Duration 621 ms
Throughput 6557165 rows/sec
x:\temp\java>java FetchSize 50000
Creating test table
Done.
Setting JDBC Statement default fetch size to 50000
Querying all rows
ResultSet fetch size 50000
Rows fetched 4072000
Duration 618 ms
Throughput 6588996 rows/sec
For some of you, all of this may be old knowledge. But another thing that plays a role here is the version of the JDBC driver you are using.
The demos above were done with the 18c JDBC driver, simply because I had it handy on my machine. I then downloaded the latest 23ai driver, and look at the results I achieve without nominating a fetch size.
x:\temp\java>java FetchSize 0
Creating test table
Done.
Keeping default JDBC Statement default fetch size (10)
Querying all rows
ResultSet fetch size 250 <<<++++!!!!!!
Rows fetched 4072000
Duration 1098 ms
Throughput 3708561 rows/sec
Doesn’t that look nice! We got great performance without touching the fetch size at all. The trace file reveals what is going on.
FETCH #1997623585400:c=0,e=2721,p=28,cr=3,cu=0,mis=0,r=10,dep=0,og=1,plh=1575980263,tim=179456214593
FETCH #1997623585400:c=0,e=17,p=0,cr=1,cu=0,mis=0,r=10,dep=0,og=1,plh=1575980263,tim=179456260896
FETCH #1997623585400:c=0,e=7,p=0,cr=1,cu=0,mis=0,r=10,dep=0,og=1,plh=1575980263,tim=179456261293
FETCH #1997623585400:c=0,e=28,p=0,cr=5,cu=0,mis=0,r=250,dep=0,og=1,plh=1575980263,tim=179456261853
FETCH #1997623585400:c=0,e=24,p=0,cr=4,cu=0,mis=0,r=250,dep=0,og=1,plh=1575980263,tim=179456263399
FETCH #1997623585400:c=0,e=34,p=0,cr=5,cu=0,mis=0,r=250,dep=0,og=1,plh=1575980263,tim=179456264231
FETCH #1997623585400:c=0,e=80,p=15,cr=4,cu=0,mis=0,r=250,dep=0,og=1,plh=1575980263,tim=179456265090
FETCH #1997623585400:c=0,e=21,p=0,cr=4,cu=0,mis=0,r=250,dep=0,og=1,plh=1575980263,tim=179456265904
FETCH #1997623585400:c=0,e=33,p=0,cr=5,cu=0,mis=0,r=250,dep=0,og=1,plh=1575980263,tim=179456266503
FETCH #1997623585400:c=0,e=20,p=0,cr=5,cu=0,mis=0,r=250,dep=0,og=1,plh=1575980263,tim=179456266948
FETCH #1997623585400:c=0,e=50,p=15,cr=5,cu=0,mis=0,r=250,dep=0,og=1,plh=1575980263,tim=179456267361
I’m not saying there is a hard and fast rule of three fetches of 10 rows followed by fetches of 250, but we can be confident that there are some automatic optimizations at play here.
A couple of nice things about this approach:
- For the common (but not ideal) practice of fetching just the first row from a cursor as an existence check, we’re still at the default fetch size of 10, so there is no risk of a 250-row fetch hurting response latency
- If you are doing pagination of (say) 10 rows for a typical web or mobile app, then the same applies – you’re getting just the right number of rows
- If you are indeed churning through all of the rows, then the bump up to 250 gives you a significant performance boost without you having to worry about fetch size settings at all.
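The behaviour in the trace can be mimicked with a toy model, which makes the latency point above concrete: a one-row existence check only ever transfers a single 10-row batch. The "three fetches of 10, then 250" schedule below is simply what my trace happened to show, not a documented driver algorithm:

```java
// Hypothetical model of the adaptive behaviour seen in the trace above:
// the first few fetches stay at 10 rows, later fetches grow to 250.
// The schedule mimics the trace output; it is not the driver's
// documented algorithm.
class AdaptivePrefetch {

    static long rowsTransferred(long rowsWanted) {
        long sent = 0, fetchNo = 0;
        while (sent < rowsWanted) {
            sent += (fetchNo < 3) ? 10 : 250; // small batches first, bigger later
            fetchNo++;
        }
        return sent;
    }

    public static void main(String[] args) {
        System.out.println(rowsTransferred(1));    // existence check: only 10 rows
        System.out.println(rowsTransferred(1000)); // bulk read: mostly 250-row batches
    }
}
```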
Moral of the story:
Keep your JDBC drivers up to date (in particular, the latest 23ai versions), and the fetch size setting will by and large take care of itself. And yes, as the demos here show, millions of rows per second even in a single thread is well within the realms of possibility.