So I jumped onto an LLM and asked it to give me some PL/SQL to handle a 3-step series of DML statements, and out popped the following code. It all looks fine, and the coding style is definitely not unique to LLMs. I see it everywhere. It is always the familiar pattern:

begin
   [dml]
   [dml]
   [dml]
   commit;
exception
   when others then
      rollback;
      raise;
end;
/

Your code reviewer is going to give you a tick of approval. After all, you’ve been very careful to look after the error handling. It looks like you’ve done the responsible thing. It looks “enterprise ready” 🙂

But here’s the thing … It was probably unnecessary, mostly harmful, and in some cases, might even corrupt your database. Here’s why.

PL/SQL has Statement-Level Atomicity

Oracle already treats a PL/SQL block as a single executable statement from the perspective of the caller.

Consider the following PL/SQL block.

declare
   l_factor commissions.adjustment_factor%type;
begin
   insert into sales 
   values (...);

   insert into audit_log
   values (...);

   select adjustment_factor
   into   l_factor
   from   commissions
   where  ...;

   update accounts
   set    balance = balance + bonus / l_factor
   where  ...;
end;
/

Let’s assume that due to a data error, the ADJUSTMENT_FACTOR we fetched had a value of zero. This means the UPDATE statement is going to fail with a division-by-zero error.

Obviously the UPDATE will not work, so there is nothing to roll back there, but the two INSERT statements will also automatically be rolled back. That is one of the core features of a PL/SQL block: it is viewed by the database as a single call, just like a single DML statement, and so it exhibits statement-level transaction behaviour. It is atomic; it will completely work or completely fail.
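You can see this behaviour with a tiny self-contained demo. The table T and its single column are invented purely for this sketch:

```sql
create table t ( x number );

-- The second insert fails with ORA-01476 (divisor is equal to zero).
-- Because the block is a single call, the first insert is undone too.
begin
   insert into t values (1);
   insert into t values (1/0);
end;
/

-- The table is empty: the successful insert was rolled back
-- automatically, with no exception handler in sight.
select count(*) from t;
```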

You do not need this:

exception
   when others then
      rollback;
      raise;

because Oracle already did the rollback for the work performed inside that failed call.

So you might be thinking “OK then, so I do not need the rollback“, and opt to fall back to a handler like this:

exception
   when others then
      raise;

I see this a lot as well, usually because of a coding policy that says every block needs an exception section.

This adds exactly zero value.

In fact, by adding that handler you have made your code harder to debug when an error does occur. Consider what happens when my PL/SQL code has the (redundant) exception handler in there.

When this PL/SQL block fails, the error stack is going to tell me:

“Your code failed at line 20“

because that is where the “raise” was called. That piece of information is … useless. I don’t know if it was the INSERT that failed, or the SELECT, or the UPDATE. I get no clues.

However, without the exception handler, the error stack will be:

“Your code failed at line 13“

which is exactly what I need to debug the problem.
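If you want to convince yourself, a quick scratch test (using a throwaway table T) shows the difference. The exact line numbers depend on how you lay out the block, but the pattern is consistent:

```sql
-- No handler: ORA-06512 points at the statement that failed (line 2).
begin
   insert into t values (1/0);
end;
/

-- Redundant handler: ORA-06512 now points at the "raise" (line 5),
-- and the location of the real failure is lost.
begin
   insert into t values (1/0);
exception
   when others then
      raise;
end;
/
```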

I’ve seen people rebut this by commenting: “We can add DBMS_UTILITY.FORMAT_CALL_STACK and DBMS_UTILITY.FORMAT_ERROR_BACKTRACE to capture the true line number“. So now your code looks like some variant of this:

exception
   when others then
      rollback;
      raise_application_error(... 'Error occurred at '||DBMS_UTILITY.FORMAT_CALL_STACK||DBMS_UTILITY.FORMAT_ERROR_BACKTRACE);

or similar. You’re now adding more code and more complexity to reproduce the same behaviour you would have gotten if you didn’t have the code there at all!

But back to the rollback issue. It can be much worse than being just redundant code.

Corrupting your database

Let me take the PL/SQL block and store it as a procedure. (There is no difference in transactional behaviour between procedures and anonymous blocks; I’ve just done this to make the following example a little more concise and easier to digest.)

procedure process_payment is
   l_factor commissions.adjustment_factor%type;
begin
   insert into sales 
   values (...);

   insert into audit_log
   values (...);

   select adjustment_factor
   into   l_factor
   from   commissions
   where  ...;

   update accounts
   set    balance = balance + bonus / l_factor
   where  ...;

exception
   when others then
      rollback;
      raise;
end;
/

Now consider this flow of logic in your application. Unless your application subscribes to the SmartDB principle, there’s a very strong likelihood that it has already commenced some transactional work before entering your PL/SQL block, for example:

select ...
from   customers
where ...
for update;

insert into activity_log
values (...);

after which follows the call to your procedure:

process_payment;

When process_payment fails, the ROLLBACK doesn’t just undo the work inside process_payment. It rolls back the entire session transaction – including the earlier DML that happened before the PL/SQL block.

That might be what you wanted to happen, but generally it is not what you expect from modular code, which should never have an impact outside its own scope. In particular, the calling environment that handles the error has no clue as to what the current transaction state now is. It no longer knows whether a commit will save those changes or not.
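Here is a minimal sketch of that trap. PROCESS_PAYMENT is stubbed to simply fail, and the table name OUTER_WORK is invented for the demo, so we can watch the caller's work disappear:

```sql
create table outer_work ( x number );

create or replace procedure process_payment is
begin
   raise_application_error(-20000, 'payment failed');
exception
   when others then
      rollback;   -- undoes the caller's work as well
      raise;
end;
/

-- Caller does some (uncommitted) transactional work ...
insert into outer_work values (1);

-- ... then calls the procedure, catching the error so it can decide
-- for itself what to do with its own changes.
begin
   process_payment;
exception
   when others then
      null;
end;
/

-- The caller's insert is gone, even though the caller never
-- issued a rollback of its own.
select count(*) from outer_work;
```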

Savvy developers might opt to resolve this transaction scope creep with another PL/SQL feature – Savepoints. Here’s a “fix” to the transaction scope with the SAVEPOINT command:

procedure process_payment is
   l_factor commissions.adjustment_factor%type;
begin
   SAVEPOINT MY_SP;

   insert into sales 
   values (...);

   insert into audit_log
   values (...);

   select adjustment_factor
   into   l_factor
   from   commissions
   where  ...;

   update accounts
   set    balance = balance + bonus / l_factor
   where  ...;

exception
   when others then
      ROLLBACK TO MY_SP;
      raise;
end;
/

This is called … reinventing the wheel 🙂

You’ve just added code to solve an issue that is natively handled by PL/SQL without any additional code.

Savepoints absolutely have legitimate uses but “We should add rollback logic to every PL/SQL block” is not one of them.

This rollback pattern exists in many codebases, which of course means, as they get scraped into training data, your favourite LLM is probably going to regurgitate it ad infinitum. Rather than thinking “I need to add rollback handling“, the better approach is to ask “Do I need any specialised transaction handling at all in this procedure?“, because most of the time the answer will be no.

The Oracle Database is pretty good at knowing how transactions work 🙂 You’re probably best off trusting the defaults, and only reaching for explicit transaction control when you are intentionally managing transaction boundaries.

I’m not saying you should never have an exception handler. But the key phrase here is “handler”. If you need to handle something, only then should you be adding a handler. For example, if you need to take some sort of action when an error occurs, then a valid handler might be:

exception
   when others then
      my_custom_action_i_need_to_take;
      raise;

because you actually needed to do something when an error occurred. But often you’ll only need this at the highest call level in your PL/SQL hierarchy, because unhandled errors from deeper in the stack will automatically take care of the statement-level transaction details, as well as bubble the entire call stack back up to the calling level. It is incredibly unlikely that you need a handler in every PL/SQL procedure you write; that sounds more like something that has been (incorrectly) mandated in your coding standards document.
