Some people hate triggers, some people love triggers…
I am not that opinionated on them in either direction, but one thing I do hate, whether it be a trigger or not, is dumb code. And today’s post just happens to be about dumb code in a trigger.
Consider this simple trigger (you see these everywhere pre 12c):
CREATE OR REPLACE TRIGGER MY_TRG
BEFORE INSERT ON MY_TABLE
FOR EACH ROW
BEGIN
  SELECT MY_SEQ.NEXTVAL INTO :NEW.MY_PK_COL FROM DUAL;
END;
/
Seems harmless enough…and I’m not talking about whether we should be using “:new.col := seq.nextval”, because under the covers it will do a select-from-dual anyway.
The reason the code is really dumb…is you are totally screwed if you need to do some data maintenance…
Consider the following scenario:
Customer: “Last week, I accidentally deleted MY_PK_COL = 10, here’s what the row should look like, can you please put it back?”
Your response is simple: “No, I can’t.”
Why ? Because you can never, ever re-insert MY_PK_COL = 10 if the sequence has advanced past 10. All of your options are nasty…
a) Reset the sequence value? What about other transactions taking place?
b) Disable the trigger? Ditto.
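A quick sketch of the failed restore, using the table and sequence names from the trigger above (SOME_COL is a made-up column for illustration):

```sql
-- The sequence has long since advanced past 10
SELECT MY_SEQ.CURRVAL FROM DUAL;

-- Attempt to put the deleted row back with its original key
INSERT INTO MY_TABLE (MY_PK_COL, SOME_COL)
VALUES (10, 'the row the customer wants back');

-- The BEFORE INSERT trigger fires unconditionally and overwrites
-- :NEW.MY_PK_COL with MY_SEQ.NEXTVAL, so the row arrives with the
-- next sequence value instead of 10.
```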
Now people will get up on their soap box and say “It’s a surrogate key, the value should be meaningless, it shouldn’t matter what it is” etc etc…and I admire your tenacious grip on the religious argument. But that’s like saying “All databases should have referential integrity constraints!”…Well duh, but that’s not how the real world is 😦
It’s just a dumb way of coding. If you really need these kinds of triggers (hint: you don’t), then at least code them defensively:
CREATE OR REPLACE TRIGGER MY_TRG
BEFORE INSERT ON MY_TABLE
FOR EACH ROW
WHEN ( NEW.MY_PK_COL IS NULL )
BEGIN
  SELECT MY_SEQ.NEXTVAL INTO :NEW.MY_PK_COL FROM DUAL;
END;
/
so at least you’re not clobbering someone’s data.
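With the WHEN clause in place, a supplied key survives and a missing one still gets the sequence (again, SOME_COL is illustrative):

```sql
-- Explicit key: the WHEN clause stops the trigger firing, so 10 is kept
INSERT INTO MY_TABLE (MY_PK_COL, SOME_COL)
VALUES (10, 'restored row');

-- No key supplied: the trigger fires and assigns MY_SEQ.NEXTVAL
INSERT INTO MY_TABLE (SOME_COL)
VALUES ('brand new row');
```

And from 12c onwards you can get the same behaviour declaratively, with no trigger at all, by declaring the column as GENERATED BY DEFAULT ON NULL AS IDENTITY.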
Great point, this is similar to why I dislike triggers for maintaining application data.
I’ve worked on a couple of systems that have triggers on transaction tables. These then updated customer account balances etc. in separate tables.
These lead to similar issues to the one you describe if you need to do some (large-scale) data maintenance (e.g. correcting balances). Disabling the triggers to do the update means you need downtime; adding the new transactions in can work but can take a very long time to run.
At least these are generally slightly easier to overcome than re-inserting the id situation here!
Allowing any application or user to arbitrarily assign their primary key isn’t any better, in my humble opinion. At some point, the sequence will cross the arbitrary values inserted, so you’re setting yourself up for failed inserts due to uniqueness violations. I deal with “Oops I deleted a record” situations infrequently enough that I’d prefer inserting the record & updating the primary key back to its original value. As with anything, it depends.
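The collision the commenter describes plays out like this with the unconditional trigger from the post (the constraint name is illustrative):

```sql
-- Suppose someone manually inserted MY_PK_COL = 500 while the
-- sequence was still at 420. Some eighty inserts later...
INSERT INTO MY_TABLE (SOME_COL) VALUES ('normal insert');

-- ...the trigger assigns MY_SEQ.NEXTVAL = 500 and the insert fails:
-- ORA-00001: unique constraint (SCHEMA.MY_TABLE_PK) violated
```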
Incidentally, I’ll take the benefits of foreign key constraints in spite of the pain they bring when copying data between environments. That said, I’m not going to go creating them on every table as I’m sure it’d break numerous applications that were written when the Earth was still flat (or before a full-time DBA was employed).
Please note the author’s realisation that “…Well duh, but that’s not how the real world is”. The design of the data model should be revised so that logical/physical deletes can be accommodated without the issues that arise in the “real world”. Deleting a record from the database is not entirely valid in any case; why should a record be deleted at all? This is an issue of design, not the fault of the trigger that was specified. It is not a simple matter to allow flexibility for the sake of “maintenance”, as there may well be legitimate reasons the model was designed with a concrete trigger as outlined.