I’ve dragged this little demo up from the past, because friend Toon (http://thehelsinkideclaration.blogspot.com) has been blogging about triggers. This demo shows the overhead of using triggers to do things that can easily be done without them. But I do stress – and I’m with Toon on this one – this is not a call for a blanket ban on triggers.
Someone asked why having triggers to populate primary keys with a sequence was a bad idea…so a simple demo is called for.
SQL> create table T ( x number );

Table created.

SQL> create sequence S cache 1000;

Sequence created.

SQL> create or replace
  2  trigger TRG before insert on T
  3  for each row
  4  begin
  5    :new.x := s.nextval;
  6  end;
  7  /

Trigger created.

SQL> set timing on
SQL> insert into T
  2  select rownum from dual
  3  connect by level <= 100000;

100000 rows created.

Elapsed: 00:00:08.79
So it takes 9 seconds to load 100,000 rows – and everyone starts asking for more CPU, more RAM, faster servers, etc etc etc….
But let’s try it without the trigger, and just code the insert using the sequence directly.
SQL> insert into T
  2  select s.nextval from dual
  3  connect by level <= 100000;

100000 rows created.

Elapsed: 00:00:01.81
Bottom line – if you want a 400% performance penalty on insert, then triggers for sequences are just what you are looking for 🙂
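As an aside – if you’re on 12c or later, you don’t need the trigger or even a visible sequence reference: an identity column does the numbering for you. A minimal sketch (the table and column names here are just for illustration):

```sql
-- Oracle 12c+ only: the identity clause creates and manages the
-- backing sequence behind the scenes, so no trigger is required.
create table T2 (
  x number generated by default as identity,
  y varchar2(20)
);

-- x is populated automatically for each row
insert into T2 ( y )
select 'row '||rownum from dual
connect by level <= 100000;
```

GENERATED BY DEFAULT still lets you supply your own value for x when loading existing data; GENERATED ALWAYS would forbid that.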
The triggers that I write to automatically create PKs from a sequence always look like this:
IF NVL(:new.my_pk,0) < 1 THEN
   SELECT my_seq.nextval INTO :new.my_pk FROM dual;
END IF;

(On an 11g or later database, a direct assignment works instead of the SELECT: :new.my_pk := my_seq.nextval;)

This makes sure it only hits the sequence if the PK was zero or null. So you can assign the PK directly from the sequence if you want, or load data from records that already have a PK, such as when we recreate test data from production. Since it still has to execute the trigger, I’m not sure how much this helps the performance problem you cited – I’ll have to try it sometime.