MERGE and ORA-30926

Just a quick blog post on MERGE and the “unable to get a stable set of rows” error that often bamboozles people. This is actually just the script output from a YouTube video I’ve already done on this topic (see below), but I had a few requests for the SQL example end-to-end, so here it is.

Imagine the AskTOM team had a simple table defining the two core members, Chris Saxon and myself. But in the style of my true Aussie laziness, I was very slack about checking the quality of the data I inserted.


SQL> create table oracle_team (
  2      pk number primary key,
  3      first_name varchar2(10),
  4      last_name varchar2(10)
  5  );

Table created.

SQL>
SQL> insert into oracle_team (pk, first_name, last_name) values (1, 'Connor', 'Macdonald');

1 row created.

SQL> insert into oracle_team (pk, first_name, last_name) values (3, 'kris', 'saxon');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from oracle_team;

        PK FIRST_NAME LAST_NAME
---------- ---------- ----------
         1 Connor     Macdonald
         3 kris       saxon

2 rows selected.

You can see that the data is garbage. Both of our names are wrong so they need fixing. So I build a table called FIXES which lets people “queue up” fix requests to the table. I’ll add the 2 obvious fixes to that table.


SQL> create table fixes (
  2      team_pk number,
  3      first_name varchar2(10),
  4      last_name  varchar2(10),
  5      applied    date
  6  );

Table created.

SQL> insert into fixes values (1, 'Connor', 'McDonald',sysdate);

1 row created.

SQL> insert into fixes values (3, 'Chris', 'Saxon',sysdate);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from fixes;

   TEAM_PK FIRST_NAME LAST_NAME  APPLIED
---------- ---------- ---------- ---------
         1 Connor     McDonald   18-FEB-19
         3 Chris      Saxon      18-FEB-19

2 rows selected.

To apply those fixes to the table, a simple MERGE command is all I need. Notice that MERGE does not have to be a “full” merge (ie, update AND insert); you can pick and choose which elements you want. MERGE is very powerful and flexible in that regard.


SQL>
SQL>
SQL> merge into oracle_team target
  2  using (select team_pk, first_name, last_name
  3         from fixes
  4        ) source
  5  on (target.pk = source.team_pk)
  6  when matched then
  7  update set
  8      target.first_name = source.first_name,
  9      target.last_name = source.last_name
 10
SQL> pause

SQL> /

2 rows merged.

SQL>
SQL> select * from oracle_team;

        PK FIRST_NAME LAST_NAME
---------- ---------- ----------
         1 Connor     McDonald
         3 Chris      Saxon

2 rows selected.

So all looks well. Let me now show how what seems like a simple repeat of that operation can get us into trouble. I’ve dropped all the tables, so that I can recreate the demo from scratch.


SQL>
SQL> create table oracle_team (
  2      pk number primary key,
  3      first_name varchar2(10),
  4      last_name varchar2(10)
  5  );

Table created.

SQL>
SQL> insert into oracle_team (pk, first_name, last_name) values (1, 'Connor', 'Macdonald');

1 row created.

SQL> insert into oracle_team (pk, first_name, last_name) values (3, 'kris', 'saxon');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from oracle_team;

        PK FIRST_NAME LAST_NAME
---------- ---------- ----------
         1 Connor     Macdonald
         3 kris       saxon

2 rows selected.

SQL> create table fixes (
  2      team_pk number,
  3      first_name varchar2(10),
  4      last_name  varchar2(10),
  5      applied    date
  6  );

Table created.

This time we’ll assume that repeated fix requests have come in for a single AskTOM team member (PK=1). My first fix request was to change the “Mac” to “Mc” in McDonald, but then I got all picky and realised that I’d like to have a capital “D” in McDonald. Fussy fussy fussy 🙂


SQL> insert into fixes values (1, 'Connor', 'Mcdonald',sysdate-1);

1 row created.

SQL> insert into fixes values (1, 'Connor', 'McDonald',sysdate);

1 row created.

SQL> insert into fixes values (3, 'Chris', 'Saxon',sysdate);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from fixes;

   TEAM_PK FIRST_NAME LAST_NAME  APPLIED
---------- ---------- ---------- ---------
         1 Connor     Mcdonald   17-FEB-19
         1 Connor     McDonald   18-FEB-19
         3 Chris      Saxon      18-FEB-19

3 rows selected.

SQL>

Look what happens now when I re-attempt the MERGE.


SQL>
SQL>
SQL> merge into oracle_team target
  2  using (select team_pk, first_name, last_name
  3         from fixes
  4        ) source
  5  on (target.pk = source.team_pk)
  6  when matched then
  7  update set
  8      target.first_name = source.first_name,
  9      target.last_name = source.last_name
 10
SQL> pause

SQL> /
merge into oracle_team target
           *
ERROR at line 1:
ORA-30926: unable to get a stable set of rows in the source tables

Conceptually this example (hopefully) makes it clear why the error occurred. Depending on which FIXES row the database sees first for my target row (PK=1), the end result of the MERGE could be different. And we really can’t allow that, because it means the results are pseudo-random.

So how can we fix these things? We need a stable set of rows in the sense that the MERGE result should never depend on the order in which the input rows are processed. In the example above, we can reasonably assume that the last fix request for a given primary key value is the one that should always take precedence. Hence the source set of fixes can be altered as below.


SQL>
SQL>
SQL> select *
  2  from
  3  (select team_pk,
  4          first_name,
  5          last_name,
  6          row_number() over ( partition by team_pk order by applied desc ) as r
  7   from fixes
  8  ) where r = 1;

   TEAM_PK FIRST_NAME LAST_NAME           R
---------- ---------- ---------- ----------
         1 Connor     McDonald            1
         3 Chris      Saxon               1

2 rows selected.

which then becomes an input into the MERGE command and the error is avoided.


SQL>
SQL> merge into oracle_team target
  2  using
  3  ( select *
  4    from
  5    (select team_pk,
  6            first_name,
  7            last_name,
  8            row_number() over ( partition by team_pk order by applied desc ) as r
  9     from fixes
 10    ) where r = 1
 11  ) source
 12  on (target.pk = source.team_pk)
 13  when matched then
 14  update set
 15      target.first_name = source.first_name,
 16      target.last_name = source.last_name
 17

SQL> /

2 rows merged.

SQL>
SQL> select * from oracle_team;

        PK FIRST_NAME LAST_NAME
---------- ---------- ----------
         1 Connor     McDonald
         3 Chris      Saxon

2 rows selected.
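The same “last fix wins” de-duplication pattern is portable to any engine with window functions. Here is a hedged sketch (SQLite via Python, not Oracle, and without a real MERGE) that mirrors the demo above: reduce the source to at most one row per key, then apply the updates.

```python
import sqlite3

# Sketch only: reproduce the tables from the demo above in SQLite.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table oracle_team (pk integer primary key,
                              first_name text, last_name text);
    create table fixes (team_pk integer, first_name text,
                        last_name text, applied text);
    insert into oracle_team values (1,'Connor','Macdonald'),(3,'kris','saxon');
    insert into fixes values
      (1,'Connor','Mcdonald','2019-02-17'),
      (1,'Connor','McDonald','2019-02-18'),
      (3,'Chris','Saxon','2019-02-18');
""")

# Keep only the most recent fix per team_pk, so each target row
# matches at most one source row -- the "stable set".
deduped = conn.execute("""
    select team_pk, first_name, last_name
    from (select f.*, row_number() over
            (partition by team_pk order by applied desc) r
          from fixes f)
    where r = 1
""").fetchall()

conn.executemany(
    "update oracle_team set first_name = ?, last_name = ? where pk = ?",
    [(fn, ln, pk) for pk, fn, ln in deduped])

print(conn.execute("select * from oracle_team order by pk").fetchall())
# -> [(1, 'Connor', 'McDonald'), (3, 'Chris', 'Saxon')]
```

The key point is the same as in the MERGE: the de-duplication happens in the source query, before any row in the target is touched.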

The video walking through this demo came from an Office Hours session, and you can watch it below

The death of UTL_FILE – part 2

I wrote a post a while back called “The Death of UTL_FILE”, and probably because of its click-bait title I got lots of feedback, so I’m back to flog that horse 🙂. Seriously though, I stand behind my assertion in that post: the majority of usages of UTL_FILE I’ve seen in my career are mimicking the spooling behaviour of a SQL*Plus script. And as that post pointed out, you can now achieve that functionality directly with the scheduler.

That is well and good for writing files from the database, and I added:

“I’ll come back to UTL_FILE to read files in a future post”.

That time has come! Many developers will be aware of the feature known as external tables, but a criticism of those has been that whereas UTL_FILE only needs permissions on the appropriate directory object to read a file, an external table requires a DDL definition, which typically means a lot of red tape and process to follow when trying to implement changes.

Some of that pain was alleviated in 12c Release 2 when it became possible to modify portions of the external table definition on the fly at query execution time. For example, if you needed to change the file name each day, then rather than having to re-craft the external table DDL, or even issue an ALTER command, it could be done as required by the query:


SQL> select * from ext_emp
  2    external modify ( location ('emp20161002.dat') );

     EMPNO ENAME      JOB              MGR HIREDATE         SAL
---------- ---------- --------- ---------- --------- ----------
      7902 FORD       ANALYST         7566 03-DEC-81       3000
      7934 MILLER     CLERK           7782 23-JAN-82       1300
      7566 JONES      MANAGER         7839 02-APR-81       2975

That still of course does not alleviate the need for the initial table DDL. But in 18c, you can query an external file and the only thing you need is appropriate privileges on the directory object. Sounds a lot like UTL_FILE, no? We can query an external source directly with a simple query. Here’s an example using the following data file (emp.dat) as a source:


7369,SMITH,CLERK,7902,800,,20
7499,ALLEN,SALESMAN,7698,1600,300,30
7521,WARD,SALESMAN,7698,1250,500,30
7566,JONES,MANAGER,7839,2975,,20
7654,MARTIN,SALESMAN,7698,1250,1400,30
7698,BLAKE,MANAGER,7839,2850,,30
7782,CLARK,MANAGER,7839,2450,,10
7788,SCOTT,ANALYST,7566,3000,,20
7839,KING,PRESIDENT,,5000,,10
7844,TURNER,SALESMAN,7698,1500,,30
7876,ADAMS,CLERK,7788,1100,,20
7900,JAMES,CLERK,7698,950,,30
7902,FORD,ANALYST,7566,3000,,20
7934,MILLER,CLERK,7782,1300,,10

The query (in this case) just requires the EXTERNAL modifier and a definition of the columns and we’re done!


SQL> select * from
  2  external (
  3     (
  4       empno     number(4),
  5       ename     varchar2(12),
  6       job       varchar2(12),
  7       mgr       number(4),
  8       sal       number(7,2),
  9       comm      number(7,2),
 10       deptno    number(2)
 11     )
 12  type oracle_loader
 13  default directory TMP
 14  location ('emp.dat')
 15  );

     EMPNO ENAME        JOB                 MGR        SAL       COMM     DEPTNO
---------- ------------ ------------ ---------- ---------- ---------- ----------
      7369 SMITH        CLERK              7902        800                    20
      7499 ALLEN        SALESMAN           7698       1600        300         30
      7521 WARD         SALESMAN           7698       1250        500         30
      7566 JONES        MANAGER            7839       2975                    20
      7654 MARTIN       SALESMAN           7698       1250       1400         30
      7698 BLAKE        MANAGER            7839       2850                    30
      7782 CLARK        MANAGER            7839       2450                    10
      7788 SCOTT        ANALYST            7566       3000                    20
      7839 KING         PRESIDENT                     5000                    10
      7844 TURNER       SALESMAN           7698       1500                    30
      7876 ADAMS        CLERK              7788       1100                    20
      7900 JAMES        CLERK              7698        950                    30
      7902 FORD         ANALYST            7566       3000                    20
      7934 MILLER       CLERK              7782       1300                    10

14 rows selected.
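Under the covers, the ORACLE_LOADER access driver is (loosely speaking) doing typed delimited-file parsing. As a rough sketch of that idea in plain Python (not Oracle's implementation, and with `float` standing in for NUMBER), the first few rows of emp.dat could be consumed like this:

```python
import csv, io

# Hedged sketch: typed parsing of a comma-separated file, with empty
# fields treated as NULL (None) -- roughly what the access driver does.
emp_dat = """\
7369,SMITH,CLERK,7902,800,,20
7499,ALLEN,SALESMAN,7698,1600,300,30
7839,KING,PRESIDENT,,5000,,10
"""

cols = ("empno", "ename", "job", "mgr", "sal", "comm", "deptno")
numeric = {"empno", "mgr", "sal", "comm", "deptno"}

def typed(name, raw):
    # Empty field -> NULL; numeric columns are converted, others kept as text.
    if raw == "":
        return None
    return float(raw) if name in numeric else raw

rows = [dict(zip(cols, (typed(c, v) for c, v in zip(cols, line))))
        for line in csv.reader(io.StringIO(emp_dat))]

print(rows[2]["ename"], rows[2]["mgr"])   # KING None
```

The appeal of the 18c syntax is precisely that you no longer need to hand-roll this kind of parsing code, or a permanent DDL definition, to get typed rows out of a flat file.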

Of course, UTL_FILE is more about total flexibility in the content you read. After all, you (typically) read a line from the file and then parse it however you please. But the same is possible with the 18c syntax. Here I’ll take some free-format text from the Oracle documentation in a file plain.dat.


9.5 System Triggers
A system trigger is created on either a schema or the database.
Its triggering event is composed of either DDL statements (listed in "ddl_event") or database operation statements (listed in "database_event").
A system trigger fires at exactly one of these timing points:
Before the triggering statement runs
(The trigger is called a BEFORE statement trigger or statement-level BEFORE trigger.)
After the triggering statement runs
(The trigger is called a AFTER statement trigger or statement-level AFTER trigger.)
Instead of the triggering CREATE statement
(The trigger is called an INSTEAD OF CREATE trigger.)
Topics
SCHEMA Triggers
DATABASE Triggers
INSTEAD OF CREATE Triggers
9.5.1 SCHEMA Triggers
A SCHEMA trigger is created on a schema and fires whenever the user who owns it is the current user and initiates the triggering event.
Suppose that both user1 and user2 own schema triggers, and user1 invokes a DR unit owned by user2. Inside the DR unit, user2 is the current user. Therefore, if the DR unit initiates the triggering event of a schema trigger that user2 owns, then that trigger fires. However, if the DR unit initiates the triggering event of a schema trigger that user1 owns, then that trigger does not fire.
Example 9-19 creates a BEFORE statement trigger on the sample schema HR. When a user connected as HR tries to drop a database object, the database fires the trigger before dropping the object.

To mimic the line-by-line read style of UTL_FILE, I can simply define a single column holding a line from the data file.


SQL> select * from
  2  external (
  3     (
  4       text varchar2(4000)
  5     )
  6  type oracle_loader
  7  default directory TMP
  8  location ('plain.dat')
  9  );

TEXT
------------------------------------------------------------
9.5 System Triggers
A system trigger is created on either a schema or the databa
se.

Its triggering event is composed of either DDL statements (l
isted in "ddl_event") or database operation statements (list
ed in "database_event").

A system trigger fires at exactly one of these timing points
:

Before the triggering statement runs
(The trigger is called a BEFORE statement trigger or stateme
nt-level BEFORE trigger.)

After the triggering statement runs
(The trigger is called a AFTER statement trigger or statemen
t-level AFTER trigger.)

Instead of the triggering CREATE statement
(The trigger is called an INSTEAD OF CREATE trigger.)
Topics
SCHEMA Triggers
DATABASE Triggers
INSTEAD OF CREATE Triggers
9.5.1 SCHEMA Triggers
A SCHEMA trigger is created on a schema and fires whenever t
he user who owns it is the current user and initiates the tr
iggering event.

Suppose that both user1 and user2 own schema triggers
Example 9-19 creates a BEFORE statement trigger on the sampl
e schema HR. When a user connected as HR tries to drop a dat
abase object


18 rows selected.

So there is plenty of potential there to reduce the amount of source code to maintain using this nice syntax. The line between database data and external data gets blurrier, which is a great convenience for developers and ad-hoc users in being able to bring all the power of SQL to more data sources.

Footnote: When you query an external data source in 18c in the way mentioned above, behind the scenes, the database may create global temporary tables to support the operation. In the first example above, the database performed the following DDL recursively:


CREATE GLOBAL TEMPORARY TABLE "SCOTT"."SYS_TEMP_0FD9D678B_BDA818A8" SHARING=NONE  
("EMPNO" NUMBER(4),
 "ENAME" VARCHAR2(12) COLLATE "USING_NLS_COMP" ,
 "JOB" VARCHAR2(12) COLLATE "USING_NLS_COMP" ,
 "MGR" NUMBER(4),"SAL" NUMBER(7,2),
 "COMM" NUMBER(7,2),
 "DEPTNO" NUMBER(2) 
) 
IN_MEMORY_METADATA CURSOR_SPECIFIC_SEGMENT INLINE_XT  
STORAGE (OBJNO 4254951307 ) NOPARALLEL

That should not be cause for concern, but if you have hundreds of differently structured external data sources, and/or triggers that fire on DDL operations on the database, then you might see some impact. As with all things, thorough testing should always precede production implementation.

Use the features available!

Advance warning: This post is just another normal Friday morning rant. If you’re not interested in my pontificating, move along…nothing else to see here 🙂

Sometimes you can’t make use of a facility that you normally would, and you have to substitute in something else. For example, if I would normally take the train to the basketball game, but today it’s not running due to track maintenance, then I’ll take the bus. I have no problem with that, because there’s a reason that I can’t take the train that day.

What does get my goat is on a day when the train is running, you come to me and say:

“Can you tell how to get the basketball game? Oh, by the way, I can’t take the train; I won’t use the bus; I don’t have a car; planes bother me; I get sea sick; I’m allergic to horses, and my feet hurt when I walk.”

Because all I’m thinking then is: “Then just don’t go to the damn game!” 🙂

So let’s bring that metaphor into the world of Oracle. I always chuckle when AskTOM questions come in along the lines of:

“I need to achieve ‘X’. I have read that there is a feature ‘Y’ that achieves ‘X’. But I cannot use ‘Y'”

Don’t get me wrong. If there is a reason why “Y” is not a potential solution (eg, it requires Enterprise Edition or perhaps an optional feature), then fine. We can then try to work within those restrictions. But often what seems to be the case is that the statement:

“I cannot use ‘Y'”

is really just someone saying:

“I am not familiar with ‘Y’ so I’d rather not take the time and effort to become familiar with it”

and that’s…well…annoying.

No-one expects you to have an immediate grasp and knowledge of the entire suite of features available in the Oracle database. I doubt that anyone has that breadth of knowledge. That is why we have the documentation; that is why we have community forums and sites like AskTOM, so that the collective knowledge of the community can be of assistance. But if you’re not prepared to spread your wings a little and learn those new things, then you’re missing out on some of the great functionality available to you in the Oracle database.

AskTOM Question: How can I restrict the number of sessions a user has?

Us: You can use a profile for this.

Response: I’d rather not use a profile

Us: OK then, perhaps look at using Resource Manager to limit session activity.

Response: I’d rather not use Resource Manager

Us: OK then, perhaps look at using a logon trigger to perform your own custom logic to manage sessions.

Response: I don’t want to write a trigger for this.

Us: Then just don’t go to the damn game! 🙂

 

The 19c database has just come out, and I know it’s always tempting to focus on the new features that come with a release, but rest assured that inside Oracle the focus for 19c was less on new features and more on honing and improving the existing features in the database, to make them even more compelling solutions for your business requirements. But all of that won’t count for much if you’re not prepared to give them a chance.

DBMS_JOB is an asynchronous mechanism

One of the very cool things about DBMS_JOB is that a job does not “exist” as such until the session that submitted the job commits the transaction. (This in my opinion is a critical feature that is missing from the DBMS_SCHEDULER package which, other than this omission, is superior to DBMS_JOB in every way.)

Because DBMS_JOB is transactional, we can use it to make “non-transactional” things appear transactional. For example, if part of the workflow for hiring someone is to send an email to the Human Resources department, we can do the email via job submission so that an email is not sent if the employee record is not created successfully or is rolled back manually, eg:


begin
  insert into EMP (empno, ename) values (123, 'CONNOR');
  dbms_job.submit( :j,'notify_hr(empno=>123);');
end;
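This is essentially the classic “transactional outbox” pattern. As a hedged sketch in SQLite via Python (not DBMS_JOB itself, and with a hypothetical `jobs` table and `hire` helper standing in for job submission), the notification row only survives if the enclosing transaction commits:

```python
import sqlite3

# Sketch only: a "job" queued inside a transaction exists only if
# the transaction commits -- the property the post relies on.
conn = sqlite3.connect(":memory:")
conn.execute("create table emp (empno integer, ename text)")
conn.execute("create table jobs (what text)")

def hire(ename, empno, ok=True):
    try:
        with conn:  # one transaction: commit on success, rollback on error
            conn.execute("insert into emp values (?,?)", (empno, ename))
            conn.execute("insert into jobs values (?)",
                         (f"notify_hr(empno=>{empno})",))
            if not ok:
                raise RuntimeError("hire failed")
    except RuntimeError:
        pass

hire("CONNOR", 123)            # commits: job row exists
hire("BROKEN", 999, ok=False)  # rolls back: no job row, no emp row
print(conn.execute("select what from jobs").fetchall())
# -> [('notify_hr(empno=>123)',)]
```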

This post is not about that – that cool usage scenario is well known. But as a consequence, people tend to think that as soon as I commit, the job will spring into life. This is typically the case, but there are no guarantees that your job will run immediately, even if job queue processes are available.

Here are a couple of examples of that in action.


--
-- This one is almost instant
--
SQL> create table t
  2   ( submitted timestamp,
  3     started   timestamp );

Table created.

SQL>
SQL> create or replace
  2  procedure prc is
  3  begin
  4    update t set started = systimestamp;
  5    commit;
  6  end;
  7  /

Procedure created.

SQL>
SQL> declare
  2    j int;
  3  begin
  4    dbms_job.submit( j,'prc;');
  5    insert into t(submitted) values (systimestamp);
  6    commit;
  7  end;
  8  /

PL/SQL procedure successfully completed.

SQL>
SQL> col submitted format a32
SQL> col started format a32
SQL> select * from t;

SUBMITTED                        STARTED
-------------------------------- --------------------------------
13-FEB-19 09.31.10.956989 PM     13-FEB-19 09.31.10.994153 PM

--
-- This one not so much :-)
--

SQL> declare
  2    j int;
  3  begin
  4    dbms_job.submit( j,'prc;');
  5    insert into t(submitted) values (systimestamp);
  6    commit;
  7  end;
  8  /

PL/SQL procedure successfully completed.

SQL>
SQL> col submitted format a32
SQL> col started format a32
SQL> select * from t;

SUBMITTED                        STARTED
-------------------------------- --------------------------------
14-FEB-19 01.22.45.928000 PM

SQL> /

SUBMITTED                        STARTED
-------------------------------- --------------------------------
14-FEB-19 01.22.45.928000 PM

SQL> /

SUBMITTED                        STARTED
-------------------------------- --------------------------------
14-FEB-19 01.22.45.928000 PM     14-FEB-19 01.22.59.775000 PM

In particular, I tend to notice this more on Windows platforms than others. MOS Note 2109840.1 is also a good starting point if you are seeing huge delays – you may have hit upon a bug.

So just be aware that DBMS_JOB will run your job soon after you commit, but “soon” is a flexible term 🙂

Work Life Travel balance

I thought about writing a post on juggling work commitments, travel with my job and time at home with children and family. And then I came across this post from community friend Robin Moffatt.

https://rmoff.net/2019/02/08/travelling-for-work-with-kids-at-home/

And … well … it’s exactly the thoughts, sentiments and motivation that I wanted to put in writing. Pretty much exactly. (Except the part about a spreadsheet – I’m a DBA, we don’t use spreadsheets…ever! 🙂). Robin will probably be pissed with me saying this (because he writes some great tech stuff) but I reckon this post is his best work. It captures so well the importance of … the important stuff. This quote in particular is spot on:

In ten years who will give two monkeys if you presented at SuperKool Konference 2008? Conferences come and go. Kids just go

So true.

Hence I don’t need to write a post, because Robin has captured it perfectly. The only augmentation I have is due to the difference in ages of our children. Robin’s are 5 and 7, and mine are now 12 and 14. I’ve never been a huge fan of getting children onto the tech gadget bandwagon too soon, but now that mine are both in secondary school, I’ve changed that tune a little and got them a phone each. Obviously that comes with the risks and drawbacks that all kids tend to exhibit once they have a phone (I’m looking at you, Instagram and Snapchat!). But (for me) the positives of being able to reach the boys at a moment’s notice when I’m away outweigh that. Now I can immediately share with them the experiences of overseas adventures, and hopefully pique their interest to one day travel and explore the world when they’re older. Even if it’s just a “Hey, I’ve just landed in transit in Dubai”, it’s a reminder to them that their Dad is thinking about them when he’s away. Similarly, if they’re concerned about something, or even just stuck on their Maths homework, a quick WhatsApp or Telegram and we’re hooked up, at least academically albeit not geographically. My next plan is to get them to occasionally come on these work trips with me; such experiences I think will be of more benefit than the smattering of school days they might miss.

So kudos to Robin. If you travel for work, or even if you just work long hours – that post is well worth your time.

LISTAGG hits prime time

It’s a simple requirement. We want to transform this:


SQL> select deptno, ename
  2  from   emp
  3  order by 1,2;

    DEPTNO ENAME
---------- ----------
        10 CLARK
        10 KING
        10 MILLER
        20 ADAMS
        20 FORD
        20 JONES
        20 SCOTT
        20 SMITH
        30 ALLEN
        30 BLAKE
        30 JAMES
        30 MARTIN
        30 TURNER
        30 WARD

into this:


    DEPTNO MEMBERS
---------- -------------------------------------
        10 CLARK,KING,MILLER
        20 SMITH,JONES,SCOTT,ADAMS,FORD
        30 ALLEN,WARD,MARTIN,BLAKE,TURNER,JAMES

Dinosaurs like myself remember the terrible trouble we used to go to in order to solve this seemingly simple problem. We could use the MODEL clause,


SQL> select deptno , rtrim(ename,',') enames
  2  from ( select deptno,ename,rn
  3         from emp
  4         model
  5         partition by (deptno)
  6         dimension by (
  7            row_number() over
  8             (partition by deptno order by ename) rn
  9               )
 10         measures (cast(ename as varchar2(40)) ename)
 11         rules
 12         ( ename[any]
 13             order by rn desc = ename[cv()]||','||ename[cv()+1])
 14         )
 15   where rn = 1
 16   order by deptno;

    DEPTNO ENAMES
---------- ----------------------------------------
        10 CLARK,KING,MILLER
        20 ADAMS,FORD,JONES,SCOTT,SMITH
        30 ALLEN,BLAKE,JAMES,MARTIN,TURNER,WARD

or we could use hierarchical trickery,


SQL> select deptno,
  2         substr(max(sys_connect_by_path(ename, ',')), 2) members
  3  from (select deptno, ename,
  4               row_number ()
  5                   over (partition by deptno order by empno) rn
  6        from emp)
  7  start with rn = 1
  8  connect by prior rn = rn - 1
  9  and prior deptno = deptno
 10  group by deptno
 11  /

    DEPTNO MEMBERS
---------- ---------------------------------------------------------
        30 ALLEN,WARD,MARTIN,BLAKE,TURNER,JAMES
        20 SMITH,JONES,SCOTT,ADAMS,FORD
        10 CLARK,KING,MILLER

or we could build our own aggregation routine from the ground up,


SQL> create or replace type string_agg_type as object
  2  (
  3     total varchar2(4000),
  4  
  5     static function
  6          ODCIAggregateInitialize(sctx IN OUT string_agg_type )
  7          return number,
  8  
  9     member function
 10          ODCIAggregateIterate(self IN OUT string_agg_type ,
 11                               value IN varchar2 )
 12          return number,
 13  
 14     member function
 15          ODCIAggregateTerminate(self IN string_agg_type,
 16                                 returnValue OUT  varchar2,
 17                                 flags IN number)
 18          return number,
 19  
 20     member function
 21          ODCIAggregateMerge(self IN OUT string_agg_type,
 22                             ctx2 IN string_agg_type)
 23          return number
 24  );
 25  /

Or, if we had some sort of personality disorder 🙂, we could resort to manipulating some XML via XMLDB.


SQL> select deptno,
  2     xmltransform
  3     ( sys_xmlagg
  4        ( sys_xmlgen(ename)
  5        ),
  6       xmltype
  7       (
  8         '<?xml version="1.0"?><xsl:stylesheet version="1.0"
  9           xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
 10            <xsl:template match="/">
 11              <xsl:for-each select="/ROWSET/ENAME">
 12                <xsl:value-of select="text()"/>,</xsl:for-each>
 13            </xsl:template>
 14          </xsl:stylesheet>'
 15       )
 16    ).getstringval() members
 17  from emp
 18  group by deptno;

    DEPTNO MEMBERS
---------- --------------------------------------------------------
        10 CLARK,MILLER,KING,
        20 SMITH,FORD,ADAMS,SCOTT,JONES,
        30 ALLEN,JAMES,TURNER,BLAKE,MARTIN,WARD,

Thankfully all of these were solved once we made it to 11g with the simple LISTAGG function


SQL> select deptno,
  2         listagg( ename, ',')
  3              within group (order by empno) members
  4  from   emp
  5  group  by deptno;

    DEPTNO MEMBERS
---------- -----------------------------------------
        10 CLARK,KING,MILLER
        20 SMITH,JONES,SCOTT,ADAMS,FORD
        30 ALLEN,WARD,MARTIN,BLAKE,TURNER,JAMES

I’ve always liked LISTAGG because the function reads like the business requirement we are trying to meet:

“AGG-regate a LIST of ENAMEs in sequence of EMPNO within the grouping of DEPTNO”

But there has been one problem with LISTAGG since its arrival in 11g, and that is what to do with duplicate data. Duplicates can just look messy, for example, when I swap out ENAME for JOB:


SQL> select deptno,
 2          listagg(job,',') within group ( order by job) as jobs
 3   from   scott.emp
 4   group by deptno
 5   order by 1;
 
   DEPTNO JOBS
--------- --------------------------------------------------
       10 CLERK,MANAGER,PRESIDENT
       20 ANALYST,ANALYST,CLERK,CLERK,MANAGER
       30 CLERK,MANAGER,SALESMAN,SALESMAN,SALESMAN,SALESMAN

Or they could be even worse, in the situation where the number of duplicates results in data exceeding database limits. In the example below, there are hundreds of objects with the same object type within each schema, and hence the aggregation blows the length limit for a VARCHAR2.


SQL> select owner,
 2          listagg(object_type,',') within group 
 3              ( order by object_id ) as types
 4   from   all_objects
 5   group by owner
 6   order by 1;
ERROR:
ORA-01499: result of string concatenation is too long

With 19c, our (technical) prayers have been answered with the long-awaited arrival of the DISTINCT extension to the syntax.


SQL> select deptno,
  2        listagg(distinct job,',') within group ( order by job ) as jobs
  3  from   scott.emp
  4  group by deptno
  5  order by 1;

    DEPTNO JOBS
---------- ------------------------------------------------------------
        10 CLERK,MANAGER,PRESIDENT
        20 ANALYST,CLERK,MANAGER
        30 CLERK,MANAGER,SALESMAN

3 rows selected.

SQL> select owner,
  2        listagg(distinct object_type,',') within group ( order by object_type ) as types
  3  from   all_objects
  4  group by owner
  5  order by 1;

OWNER
------------------------------
TYPES
------------------------------------------------------------------------------------------------------------
------------
APPQOSSYS
SYNONYM,TABLE

AUDSYS
INDEX PARTITION,LIBRARY,PACKAGE,PACKAGE BODY,TABLE,TABLE PARTITION,VIEW

CTXSYS
FUNCTION,INDEX,INDEXTYPE,LIBRARY,OPERATOR,PACKAGE,PACKAGE BODY,PROCEDURE,SEQUENCE,TABLE,TYPE,TYPE BODY,VIEW

etc
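And if you are stuck on a pre-19c release, the semantics of LISTAGG(DISTINCT …) are easy to reproduce client-side. A hedged sketch in plain Python (not Oracle, using the same (DEPTNO, JOB) pairs from EMP) of “aggregate a distinct, ordered list per group”:

```python
from collections import defaultdict

# Sketch: LISTAGG(DISTINCT job, ',') WITHIN GROUP (ORDER BY job),
# reproduced over the same (deptno, job) pairs as the demo above.
emp = [(10, "CLERK"), (10, "MANAGER"), (10, "PRESIDENT"),
       (20, "ANALYST"), (20, "ANALYST"), (20, "CLERK"),
       (20, "CLERK"), (20, "MANAGER"),
       (30, "CLERK"), (30, "MANAGER"), (30, "SALESMAN"),
       (30, "SALESMAN"), (30, "SALESMAN"), (30, "SALESMAN")]

groups = defaultdict(set)
for deptno, job in emp:
    groups[deptno].add(job)            # the set gives us DISTINCT

listagg = {d: ",".join(sorted(jobs))   # sorted = WITHIN GROUP (ORDER BY job)
           for d, jobs in groups.items()}

print(listagg[20])   # ANALYST,CLERK,MANAGER
```

Of course, pulling every row out of the database just to aggregate it is exactly the kind of thing the in-database function saves you from.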

Patch conflicts

My last post was about patching my home databases from 18.3 to 18.5 on Windows, and how I encountered a patch conflict when I tried to patch the JVM. I thought I’d give a little bit of info for anyone who runs into patch conflicts from time to time. They can be stressful, especially if unforeseen, or if you are in the middle of a limited-time outage window.

So before you jump into applying a patch, a nice little tool you might like to explore is the patch conflict checker on My Oracle Support. You can get it via:

https://support.oracle.com/epmos/faces/PatchConflictCheck

It is straightforward to use: you simply fill in the platform and your current patch inventory details, and then list the patches you intend to apply.


One caveat: the online tool does not work with Windows environments 🙁, but you can get around that by downloading the patches you intend to apply to your local machine. Then you can use opatch itself to perform the check:


opatch prereq CheckConflictAmongPatchesWithDetail -phBaseDir path

where “path” is where you have unzipped the patch to.

Like all patching activities, the more preparation you can do before actually commencing the work, the better your chances of not encountering a sticky moment along the way.