The death of UTL_FILE – part 2

I wrote a post a while back called “The Death of UTL_FILE”, and probably because of its click-bait title I got lots of feedback, so I’m back to flog that horse :-) Seriously though, I stand behind my assertion in that post that the majority of usages of UTL_FILE I’ve seen in my career are mimicking the spooling behaviour of a SQL*Plus script. And as that post pointed out, you can now achieve that functionality directly with the scheduler.

That is all well and good for writing files from the database, and I added:

“I’ll come back to UTL_FILE to read files in a future post”.

That time has come! Many developers will be aware of the feature known as external tables, but a common criticism of them has been that whereas UTL_FILE only needs permissions on the appropriate directory object to read a file, an external table requires a DDL definition, which typically means a lot of red tape and process to follow when trying to implement changes.

Some of that pain was alleviated in 12c Release 2, when it became possible to modify portions of the external table definition on the fly at query execution time. For example, if you needed to change the file name each day, then rather than having to re-craft the external table DDL, or even issue an ‘alter’ command, it could be done as required by the query:


SQL> select * from ext_emp
  2    external modify ( location ('emp20161002.dat') );

     EMPNO ENAME      JOB              MGR HIREDATE         SAL
---------- ---------- --------- ---------- --------- ----------
      7902 FORD       ANALYST         7566 03-DEC-81       3000
      7934 MILLER     CLERK           7782 23-JAN-82       1300        
      7566 JONES      MANAGER         7839 02-APR-81       2975

That still of course does not alleviate the need for the initial table DDL. But in 18c, we can query an external source directly, and the only thing we need is appropriate privileges on the directory object. Sounds a lot like UTL_FILE, no? Here’s a simple example using the following data file (emp.dat) as a source:


7369,SMITH,CLERK,7902,800,,20
7499,ALLEN,SALESMAN,7698,1600,300,30
7521,WARD,SALESMAN,7698,1250,500,30
7566,JONES,MANAGER,7839,2975,,20
7654,MARTIN,SALESMAN,7698,1250,1400,30
7698,BLAKE,MANAGER,7839,2850,,30
7782,CLARK,MANAGER,7839,2450,,10
7788,SCOTT,ANALYST,7566,3000,,20
7839,KING,PRESIDENT,,5000,,10
7844,TURNER,SALESMAN,7698,1500,,30
7876,ADAMS,CLERK,7788,1100,,20
7900,JAMES,CLERK,7698,950,,30
7902,FORD,ANALYST,7566,3000,,20
7934,MILLER,CLERK,7782,1300,,10

The query (in this case) just requires the EXTERNAL modifier and a definition of the columns and we’re done!


SQL> select * from
  2  external (
  3     (
  4       empno     number(4),
  5       ename     varchar2(12),
  6       job       varchar2(12),
  7       mgr       number(4),
  8       sal       number(7,2),
  9       comm      number(7,2),
 10       deptno    number(2)
 11     )
 12  type oracle_loader
 13  default directory TMP
 14  location ('emp.dat')
 15  );

     EMPNO ENAME        JOB                 MGR        SAL       COMM     DEPTNO
---------- ------------ ------------ ---------- ---------- ---------- ----------
      7369 SMITH        CLERK              7902        800                    20
      7499 ALLEN        SALESMAN           7698       1600        300         30
      7521 WARD         SALESMAN           7698       1250        500         30
      7566 JONES        MANAGER            7839       2975                    20
      7654 MARTIN       SALESMAN           7698       1250       1400         30
      7698 BLAKE        MANAGER            7839       2850                    30
      7782 CLARK        MANAGER            7839       2450                    10
      7788 SCOTT        ANALYST            7566       3000                    20
      7839 KING         PRESIDENT                     5000                    10
      7844 TURNER       SALESMAN           7698       1500                    30
      7876 ADAMS        CLERK              7788       1100                    20
      7900 JAMES        CLERK              7698        950                    30
      7902 FORD         ANALYST            7566       3000                    20
      7934 MILLER       CLERK              7782       1300                    10

14 rows selected.

Of course, UTL_FILE is more about total flexibility in the content you read. After all, you (typically) read a line from the file and then parse it however you please. But the same is possible with the 18c syntax. Here I’ll take some free-format text from the Oracle documentation in a file called plain.dat.


9.5 System Triggers
A system trigger is created on either a schema or the database.
Its triggering event is composed of either DDL statements (listed in "ddl_event") or database operation statements (listed in "database_event").
A system trigger fires at exactly one of these timing points:
Before the triggering statement runs
(The trigger is called a BEFORE statement trigger or statement-level BEFORE trigger.)
After the triggering statement runs
(The trigger is called a AFTER statement trigger or statement-level AFTER trigger.)
Instead of the triggering CREATE statement
(The trigger is called an INSTEAD OF CREATE trigger.)
Topics
SCHEMA Triggers
DATABASE Triggers
INSTEAD OF CREATE Triggers
9.5.1 SCHEMA Triggers
A SCHEMA trigger is created on a schema and fires whenever the user who owns it is the current user and initiates the triggering event.
Suppose that both user1 and user2 own schema triggers, and user1 invokes a DR unit owned by user2. Inside the DR unit, user2 is the current user. Therefore, if the DR unit initiates the triggering event of a schema trigger that user2 owns, then that trigger fires. However, if the DR unit initiates the triggering event of a schema trigger that user1 owns, then that trigger does not fire.
Example 9-19 creates a BEFORE statement trigger on the sample schema HR. When a user connected as HR tries to drop a database object, the database fires the trigger before dropping the object.

To mimic the line-by-line read style of UTL_FILE, I can simply define a single column holding a line from the data file.


SQL> select * from
  2  external (
  3     (
  4       text varchar2(4000)
  5     )
  6  type oracle_loader
  7  default directory TMP
  8  location ('plain.dat')
  9  );

TEXT
------------------------------------------------------------
9.5 System Triggers
A system trigger is created on either a schema or the databa
se.

Its triggering event is composed of either DDL statements (l
isted in "ddl_event") or database operation statements (list
ed in "database_event").

A system trigger fires at exactly one of these timing points
:

Before the triggering statement runs
(The trigger is called a BEFORE statement trigger or stateme
nt-level BEFORE trigger.)

After the triggering statement runs
(The trigger is called a AFTER statement trigger or statemen
t-level AFTER trigger.)

Instead of the triggering CREATE statement
(The trigger is called an INSTEAD OF CREATE trigger.)
Topics
SCHEMA Triggers
DATABASE Triggers
INSTEAD OF CREATE Triggers
9.5.1 SCHEMA Triggers
A SCHEMA trigger is created on a schema and fires whenever t
he user who owns it is the current user and initiates the tr
iggering event.

Suppose that both user1 and user2 own schema triggers
Example 9-19 creates a BEFORE statement trigger on the sampl
e schema HR. When a user connected as HR tries to drop a dat
abase object


18 rows selected.

So there is plenty of potential here to reduce the amount of source code we have to maintain, using this nice syntax. The line between database data and external data gets blurrier, which is a great convenience for developers and ad-hoc users, bringing all the power of SQL to more data sources.

Footnote: When you query an external data source in 18c in the way mentioned above, behind the scenes, the database may create global temporary tables to support the operation. In the first example above, the database performed the following DDL recursively:


CREATE GLOBAL TEMPORARY TABLE "SCOTT"."SYS_TEMP_0FD9D678B_BDA818A8" SHARING=NONE  
("EMPNO" NUMBER(4),
 "ENAME" VARCHAR2(12) COLLATE "USING_NLS_COMP" ,
 "JOB" VARCHAR2(12) COLLATE "USING_NLS_COMP" ,
 "MGR" NUMBER(4),"SAL" NUMBER(7,2),
 "COMM" NUMBER(7,2),
 "DEPTNO" NUMBER(2) 
) 
IN_MEMORY_METADATA CURSOR_SPECIFIC_SEGMENT INLINE_XT  
STORAGE (OBJNO 4254951307 ) NOPARALLEL

That should not be cause for concern, but if you have hundreds of differently structured external data sources, and/or triggers that fire on DDL operations on the database, then you might see some impact. As with all things, thorough testing should always precede production implementation.

Use the features available!

Advance warning: This post is just another normal Friday morning rant. If you’re not interested in my pontificating, move along…nothing else to see here :-)

Sometimes you can’t make use of a facility that you normally would, and you have to substitute in something else. For example, if I would normally take the train to the basketball game, but today it’s not running due to track maintenance, then I’ll take the bus. I have no problem with that, because there’s a reason that I can’t take the train that day.

What does get my goat is on a day when the train is running, you come to me and say:

“Can you tell me how to get to the basketball game? Oh, by the way, I can’t take the train; I won’t use the bus; I don’t have a car; planes bother me; I get sea sick; I’m allergic to horses, and my feet hurt when I walk.”

Because all I’m thinking then is: “Then just don’t go to the damn game!” :-)

So let’s bring that metaphor into the world of Oracle. I always chuckle when AskTOM questions come in along the lines of:

“I need to achieve ‘X’. I have read that there is a feature ‘Y’ that achieves ‘X’. But I cannot use ‘Y’”

Don’t get me wrong. If there is a reason why “Y” is not a potential solution (eg, it requires Enterprise Edition or perhaps an optional feature), then fine. We can then try to work within those restrictions. But often what seems to be the case is that the statement:

“I cannot use ‘Y’”

is really just someone saying:

“I am not familiar with ‘Y’ so I’d rather not take the time and effort to become familiar with it”

and that’s…well…annoying.

No-one expects you to have an immediate grasp of the entire suite of features available in the Oracle database. I doubt that anyone has that breadth of knowledge. That is why we have the documentation; that is why we have community forums and sites like AskTOM, so that the collective knowledge of the community can be of assistance. But if you’re not prepared to spread your wings a little and learn those new things, then you’re missing out on some of the great functionality available to you in the Oracle database.

AskTOM Question: How can I restrict the number of sessions a user has?

Us: You can use a profile for this.

Response: I’d rather not use a profile

Us: OK, then perhaps look at using Resource Manager to limit session activity.

Response: I’d rather not use Resource Manager

Us: OK, then perhaps look at using a logon trigger to perform your own custom logic to manage sessions.

Response: I don’t want to write a trigger for this.

Us: Then just don’t go to the damn game! :-)
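
For what it’s worth, the profile route is only a couple of statements. Here’s a minimal sketch – the ONE_SESSION_ONLY name and the SCOTT account are purely illustrative, and it assumes the RESOURCE_LIMIT initialization parameter is enabled (which it is by default in recent releases):


-- cap each named user at a single concurrent session
create profile one_session_only
limit  sessions_per_user  1;

alter user scott profile one_session_only;

-- a second concurrent logon as SCOTT should then fail with
-- ORA-02391: exceeded simultaneous SESSIONS_PER_USER limit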

 

The 19c database has just come out, and I know it’s always tempting to focus on the new features that come with a release, but rest assured that inside Oracle, the focus for 19c was less on new features and more on honing and improving the existing features in the Oracle database, to make them even more compelling solutions for your business requirements. But all of that won’t count for much if you’re not prepared to give them a chance.

DBMS_JOB is an asynchronous mechanism

One of the very cool things about DBMS_JOB is that a job does not “exist” as such until the session that submitted the job commits the transaction. (This in my opinion is a critical feature that is missing from the DBMS_SCHEDULER package which, other than this omission, is superior to DBMS_JOB in every way.)

Because DBMS_JOB is transactional, we can use it to make “non-transactional” things appear transactional. For example, if part of the workflow for hiring someone is to send an email to the Human Resources department, we can do the email via job submission so that an email is not sent if the employee record is not created successfully or is rolled back manually, eg:


declare
  j int;
begin
  insert into EMP (empno, ename) values (123, 'CONNOR');
  dbms_job.submit( j, 'notify_hr(empno=>123);');
end;
/

This post is not about that – that cool usage scenario is well known. But as a consequence, people tend to think that as soon as they commit, the job will spring into life. This is typically the case, but there are no guarantees that your job will run immediately, even if job queue processes are available.

Here are a couple of examples of that in action.


--
-- This one is almost instant
--
SQL> create table t
  2   ( submitted timestamp,
  3     started   timestamp );

Table created.

SQL>
SQL> create or replace
  2  procedure prc is
  3  begin
  4    update t set started = systimestamp;
  5    commit;
  6  end;
  7  /

Procedure created.

SQL>
SQL> declare
  2    j int;
  3  begin
  4    dbms_job.submit( j,'prc;');
  5    insert into t(submitted) values (systimestamp);
  6    commit;
  7  end;
  8  /

PL/SQL procedure successfully completed.

SQL>
SQL> col submitted format a32
SQL> col started format a32
SQL> select * from t;

SUBMITTED                        STARTED
-------------------------------- --------------------------------
13-FEB-19 09.31.10.956989 PM     13-FEB-19 09.31.10.994153 PM

--
-- This one not so much :-)
--

SQL> declare
  2    j int;
  3  begin
  4    dbms_job.submit( j,'prc;');
  5    insert into t(submitted) values (systimestamp);
  6    commit;
  7  end;
  8  /

PL/SQL procedure successfully completed.

SQL>
SQL> col submitted format a32
SQL> col started format a32
SQL> select * from t;

SUBMITTED                        STARTED
-------------------------------- --------------------------------
14-FEB-19 01.22.45.928000 PM

SQL> /

SUBMITTED                        STARTED
-------------------------------- --------------------------------
14-FEB-19 01.22.45.928000 PM

SQL> /

SUBMITTED                        STARTED
-------------------------------- --------------------------------
14-FEB-19 01.22.45.928000 PM     14-FEB-19 01.22.59.775000 PM

In particular, I tend to notice this more on Windows platforms than others. MOS Note 2109840.1 is also a good starting point if you are seeing huge delays – you may have hit upon a bug.

So just be aware that DBMS_JOB will run your job soon after you commit, but “soon” is a flexible term :-)
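
If your job does seem to be lingering, a couple of quick sanity checks are worth doing before assuming the worst – a sketch:


-- jobs will never run if this is zero
show parameter job_queue_processes

-- a BROKEN flag of 'Y' means the job has failed 16 times
-- and the database has given up trying to run it
select job, what, next_date, broken
from   dba_jobs;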

Work Life Travel balance

I thought about writing a post on juggling work commitments, travel with my job and time at home with children and family. And then I came across this post from community friend Robin Moffatt.

https://rmoff.net/2019/02/08/travelling-for-work-with-kids-at-home/

And … well … it’s exactly the thoughts, sentiments and motivation that I wanted to put in writing. Pretty much exactly. (Except the part about a spreadsheet – I’m a DBA, we don’t use spreadsheets…ever! :-)) Robin will probably be pissed with me saying this (because he writes some great tech stuff) but I reckon this post is his best work. It captures so well the importance of … the important stuff. This quote in particular is spot on:

In ten years who will give two monkeys if you presented at SuperKool Konference 2008? Conferences come and go. Kids just go

So true.

Hence I don’t need to write a post, because Robin has captured it perfectly. The only augmentation I have is due to the differences in ages of our children. Robin’s are 5 and 7, and mine are now 12 and 14. I’ve never been a huge fan of getting children onto the tech gadget bandwagon too soon, but now that mine are both in secondary school, I’ve changed that tune a little and got them a phone each. Obviously that comes with the risks and drawbacks that all kids tend to exhibit once they have a phone (I’m looking at you, Instagram and Snapchat!). But (for me) the positives of being able to reach out to the boys at a moment’s notice when I’m away outweigh that. Now I can immediately share with them the experiences of overseas adventures, and hopefully pique their interest to one day travel and explore the world when they’re older. Or even if it’s just a “Hey, I’ve just landed in transit in Dubai”, it’s a reminder to them that their Dad is thinking about them when he’s away. Similarly, if they’re concerned about something or even stuck on their Maths homework, a quick WhatsApp or Telegram and we’re hooked up, at least academically albeit not geographically. My next plan is to get them to occasionally come on these work trips with me; such experiences I think will be of more benefit than the smattering of school days they might miss.

So kudos to Robin. If you travel for work, or even if you just work long hours – that post is well worth your time.

LISTAGG hits prime time

It’s a simple requirement. We want to transform this:


SQL> select deptno, ename
  2  from   emp
  3  order by 1,2;

    DEPTNO ENAME
---------- ----------
        10 CLARK
        10 KING
        10 MILLER
        20 ADAMS
        20 FORD
        20 JONES
        20 SCOTT
        20 SMITH
        30 ALLEN
        30 BLAKE
        30 JAMES
        30 MARTIN
        30 TURNER
        30 WARD

into this:


    DEPTNO MEMBERS
---------- -------------------------------------
        10 CLARK,KING,MILLER
        20 SMITH,JONES,SCOTT,ADAMS,FORD
        30 ALLEN,WARD,MARTIN,BLAKE,TURNER,JAMES

Dinosaurs like myself remember the terrible trouble we used to go to in order to solve this seemingly simple problem. We could use the MODEL clause,


SQL> select deptno , rtrim(ename,',') enames
  2  from ( select deptno,ename,rn
  3         from emp
  4         model
  5         partition by (deptno)
  6         dimension by (
  7            row_number() over
  8             (partition by deptno order by ename) rn
  9               )
 10         measures (cast(ename as varchar2(40)) ename)
 11         rules
 12         ( ename[any]
 13             order by rn desc = ename[cv()]||','||ename[cv()+1])
 14         )
 15   where rn = 1
 16   order by deptno;

    DEPTNO ENAMES
---------- ----------------------------------------
        10 CLARK,KING,MILLER
        20 ADAMS,FORD,JONES,SCOTT,SMITH
        30 ALLEN,BLAKE,JAMES,MARTIN,TURNER,WARD

or we could use hierarchical trickery,


SQL> select deptno,
  2         substr(max(sys_connect_by_path(ename, ',')), 2) members
  3  from (select deptno, ename,
  4               row_number ()
  5                   over (partition by deptno order by empno) rn
  6        from emp)
  7  start with rn = 1
  8  connect by prior rn = rn - 1
  9  and prior deptno = deptno
 10  group by deptno
 11  /

    DEPTNO MEMBERS
---------- ---------------------------------------------------------
        30 ALLEN,WARD,MARTIN,BLAKE,TURNER,JAMES
        20 SMITH,JONES,SCOTT,ADAMS,FORD
        10 CLARK,KING,MILLER

or we could build our own aggregation routine from the ground up,


SQL> create or replace type string_agg_type as object
  2  (
  3     total varchar2(4000),
  4  
  5     static function
  6          ODCIAggregateInitialize(sctx IN OUT string_agg_type )
  7          return number,
  8  
  9     member function
 10          ODCIAggregateIterate(self IN OUT string_agg_type ,
 11                               value IN varchar2 )
 12          return number,
 13  
 14     member function
 15          ODCIAggregateTerminate(self IN string_agg_type,
 16                                 returnValue OUT  varchar2,
 17                                 flags IN number)
 18          return number,
 19  
 20     member function
 21          ODCIAggregateMerge(self IN OUT string_agg_type,
 22                             ctx2 IN string_agg_type)
 23          return number
 24  );
 25  /

Or, if we had some sort of personality disorder :-), we could resort to manipulating some XML via XMLDB.


SQL> select deptno,
  2     xmltransform
  3     ( sys_xmlagg
  4        ( sys_xmlgen(ename)
  5        ),
  6       xmltype
  7       (
  8         '<?xml version="1.0"?><xsl:stylesheet version="1.0"
  9           xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
 10            <xsl:template match="/">
 11              <xsl:for-each select="/ROWSET/ENAME">
 12                <xsl:value-of select="text()"/>,</xsl:for-each>
 13            </xsl:template>
 14          </xsl:stylesheet>'
 15       )
 16    ).getstringval() members
 17  from emp
 18  group by deptno;

    DEPTNO MEMBERS
---------- --------------------------------------------------------
        10 CLARK,MILLER,KING,
        20 SMITH,FORD,ADAMS,SCOTT,JONES,
        30 ALLEN,JAMES,TURNER,BLAKE,MARTIN,WARD,

Thankfully, all of this pain went away once we made it to 11g, with the simple LISTAGG function:


SQL> select deptno,
  2         listagg( ename, ',')
  3              within group (order by empno) members
  4  from   emp
  5  group  by deptno;

    DEPTNO MEMBERS
---------- -----------------------------------------
        10 CLARK,KING,MILLER
        20 SMITH,JONES,SCOTT,ADAMS,FORD
        30 ALLEN,WARD,MARTIN,BLAKE,TURNER,JAMES

I’ve always liked LISTAGG because the function reads like the business requirement we are trying to meet:

“AGG-regate a LIST of ENAMEs in sequence of EMPNO within the grouping of DEPTNO”

But there has been one problem with LISTAGG since its arrival in 11g, and that is what to do with duplicate data. Duplicates can either just look messy, for example, when I swap out ENAME for JOB:


SQL> select deptno,
 2          listagg(job,',') within group ( order by job) as jobs
 3   from   scott.emp
 4   group by deptno
 5   order by 1;
 
   DEPTNO JOBS
--------- --------------------------------------------------
       10 CLERK,MANAGER,PRESIDENT
       20 ANALYST,ANALYST,CLERK,CLERK,MANAGER
       30 CLERK,MANAGER,SALESMAN,SALESMAN,SALESMAN,SALESMAN

Or they could be even worse in situations where the number of duplicates results in data exceeding database limits. In the example below, there are hundreds of objects with the same object type within each schema, and hence the aggregation blows the length limit for a VARCHAR2.


SQL> select owner,
 2          listagg(object_type,',') within group
 3              ( order by object_id ) as types
 4   from   all_objects
 5   group by owner
 6   order by 1;
ERROR:
ORA-01489: result of string concatenation is too long
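
As an aside, if it is the sheer length of the result rather than the duplicates that is hurting you, 12.2 already lets you cap the output with the ON OVERFLOW clause. A quick sketch of the same query:


select owner,
       listagg(object_type,',' on overflow truncate) within group
           ( order by object_id ) as types
from   all_objects
group by owner
order by 1;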

With 19c, our (technical) prayers have been answered with the long-awaited arrival of the DISTINCT extension to the syntax.


SQL> select deptno,
  2        listagg(distinct job,',') within group ( order by job ) as jobs
  3  from   scott.emp
  4  group by deptno
  5  order by 1;

    DEPTNO JOBS
---------- ------------------------------------------------------------
        10 CLERK,MANAGER,PRESIDENT
        20 ANALYST,CLERK,MANAGER
        30 CLERK,MANAGER,SALESMAN

3 rows selected.

SQL> select owner,
  2        listagg(distinct object_type,',') within group ( order by object_type ) as types
  3  from   all_objects
  4  group by owner
  5  order by 1;

OWNER
------------------------------
TYPES
------------------------------------------------------------------------------------------------------------
------------
APPQOSSYS
SYNONYM,TABLE

AUDSYS
INDEX PARTITION,LIBRARY,PACKAGE,PACKAGE BODY,TABLE,TABLE PARTITION,VIEW

CTXSYS
FUNCTION,INDEX,INDEXTYPE,LIBRARY,OPERATOR,PACKAGE,PACKAGE BODY,PROCEDURE,SEQUENCE,TABLE,TYPE,TYPE BODY,VIEW

etc

Patch conflicts

My last post was about patching my home databases from 18.3 to 18.5 on Windows, and how I encountered a patch conflict when I tried to patch the JVM. I thought I’d give a little bit of info for anyone who runs into patch conflicts from time to time. They can be stressful, especially if they are unforeseen or you are in the middle of a limited-time outage window.

So before you jump into applying a patch, a nice little tool you might like to explore is the patch conflict checker on My Oracle Support. You can get it via:

https://support.oracle.com/epmos/faces/PatchConflictCheck

It is straightforward to use: you simply fill in the platform and your current patch inventory details, and then list out the patches you intend to apply.


One caveat: the online tool does not work with Windows environments, but you can get around that by downloading the patches you intend to apply to your local machine. Then you can use opatch itself to perform that check:


opatch prereq CheckConflictAmongPatchesWithDetail -phBaseDir path

where “path” is where you have unzipped the patch to.

Like all patching activities, the more preparation work you do before actually commencing, the less likely you are to encounter a sticky moment during the work.

From Database 18.3 to 18.5 (on Windows)

Contrary to wild rumours on the internet, it was not a fear of the number 13 that led to a numbering jump from version 12c to version 18c. The jump was part of our new, more flexible release mechanism so that we can get fixes and enhancements to customers on a more frequent and predictable schedule. In a nutshell, smaller bundles of features and fixes, more frequently.

I won’t dwell on that – if you’re unfamiliar with the new strategy, the best place to start is MOS Note 2285040.1, which has a description and a FAQ. But in terms of (as the saying goes) eating one’s own dog food, I downloaded the 18.5 release update which came out this week, applied it to my 18.3 installation, and thought I’d share the process.

You might be wondering: “Why am I skipping 18.4?” Well, that can be summarised simply as ignorance on my part – I missed the email a couple of months back that told me 18.4 was available :-) You might also be wondering: “I read that there were 4 release updates per year, how come there is an 18.5?” That is a common misinterpretation. We aim to provide a release update each quarter, but we never said that this means there would only ever be 4.

Now I stress – I’m just sharing my experiences here, not claiming that this is your go-to guide for applying the release update. I was just updating 18c on a Windows laptop, not a server. You should always follow the installation documents that come with the release update. For example, I had to reboot my Windows machine a couple of times, because it had transpired that some webcam software I’d installed had opted to use the Microsoft C++ shared libraries in my 18c Oracle Home! I think we can safely assume your servers probably don’t have webcams on them!

Step 1: Download a fresh OPatch

Seriously…just do this every time. As it turned out, my OPatch was already at a high enough level to apply the release update, but years ago I adopted the mindset: “If I am going to apply a patch, then download a new OPatch”. It just makes it a no-brainer.
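
Once the fresh OPatch is unzipped into the Oracle Home, a quick version check confirms you are running the one you think you are (paths as per my machine):


C:\>set ORACLE_HOME=C:\oracle\product\18
C:\>%ORACLE_HOME%\OPatch\opatch version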

Step 2: Shut everything down.

I am running single instance, no “rolling” patching for me.

Step 3: Unzip the patch, set the environment, and patch


C:\>set ORACLE_HOME=C:\oracle\product\18
C:\>set PATH=%ORACLE_HOME%\OPatch;%ORACLE_HOME%\perl\bin;%PATH%
C:\>set PERL5LIB=
C:\>cd \oracle\stage\29124511
C:\oracle\stage\29124511>opatch apply

Oracle Interim Patch Installer version 12.2.0.1.14
Copyright (c) 2019, Oracle Corporation.  All rights reserved.


Oracle Home       : C:\oracle\product\18
Central Inventory : C:\Program Files\Oracle\Inventory
   from           :
OPatch version    : 12.2.0.1.14
OUI version       : 12.2.0.4.0
Log file location : C:\oracle\product\18\cfgtoollogs\opatch\opatch2019-01-18_09-36-46AM_1.log

Verifying environment and performing prerequisite checks...

Conflicts/Supersets for each patch are:

Patch : 29124511

        Conflict with 28267731
        Conflict details:
        C:\oracle\product\18\javavm\admin\classes.bin
        C:\oracle\product\18\javavm\admin\lfclasses.bin
        C:\oracle\product\18\javavm\jdk\jdk8\admin\classes.bin
        C:\oracle\product\18\javavm\jdk\jdk8\admin\lfclasses.bin
        C:\oracle\product\18\rdbms\admin\oracle.sym
        C:\oracle\product\18\bin\oracle.exe

        Bug Superset of 27783303
        Super set bugs are:
        27748954, 27604293, 27747869, 27984028, 27550341, 27389352, 27433163, 27538461, 27341181,....

Well…that didn’t go so well :-) The first thing that confused me was: “How could there be any pre-existing patch to conflict with?”. After all, this was a complete 18c installation that I had downloaded from OTN when it first became available. But then I remembered: this is 18.3. So while it’s true that I downloaded it as a standalone complete piece of software, it is still a patched release of the database. This is so much better than the old days where, if you wanted (say) version 9.2.0.8, you had to download 9.2.0.1 and then apply the patch on top of it before using the software. A listing of the patch inventory showed that I already had some patches installed with my initial fresh download.


C:\oracle\stage\29124511>opatch lsinventory
Oracle Interim Patch Installer version 12.2.0.1.14
Copyright (c) 2019, Oracle Corporation.  All rights reserved.


Oracle Home       : C:\oracle\product\18
Central Inventory : C:\Program Files\Oracle\Inventory
   from           :
OPatch version    : 12.2.0.1.14
OUI version       : 12.2.0.4.0
Log file location : C:\oracle\product\18\cfgtoollogs\opatch\opatch2019-01-18_09-39-23AM_1.log

Lsinventory Output file location : C:\oracle\product\18\cfgtoollogs\opatch\lsinv\lsinventory2019-01-18_09-39-23AM.txt

--------------------------------------------------------------------------------
Local Machine Information::
Hostname: gtx
ARU platform id: 233
ARU platform description:: Microsoft Windows (64-bit AMD)


Installed Top-level Products (1):

Oracle Database 18c                                                  18.0.0.0.0
There are 1 products installed in this Oracle Home.


Interim patches (3) :

Patch  28267731     : applied on Sat Aug 18 17:34:22 AWST 2018
Unique Patch ID:  22301563
Patch description:  "WINDOWS OJVM BUNDLE PATCH : 18.3.0.0.180717 (28267731)"
   Created on 8 Jul 2018, 06:48:38 hrs PST8PDT
   Bugs fixed:
     27642235, 27952586, 27304131, 27461740, 27636900, 27539876
   This patch overlays patches:
     27783303
   This patch needs patches:
     27783303
   as prerequisites

Patch  27908644     : applied on Sat Aug 18 17:31:26 AWST 2018
Unique Patch ID:  22299245
Patch description:  "UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171"
   Created on 29 Jun 2018, 02:51:19 hrs PST8PDT
   Bugs fixed:
     27908644

Patch  27783303     : applied on Sat Aug 18 17:25:42 AWST 2018
Unique Patch ID:  22238986
Patch description:  "Windows Database Bundle Patch : 18.3.0.0.180717 (27783303)"
   Created on 16 Aug 2018, 07:58:43 hrs PST8PDT
   Bugs fixed:
     27026401, 27994333, 27680509, 27314206, 27345231, 24925863, 27447452
     26792891, 27236110, 27608669, 27670484, 27421101, 27240246, 27213140
     27740844, 27616657, 18701017, 27177852, 27697092, 27379956, 26598422
     27688099, 27580996, 27534509, 27602488, 27333978, 27098733, 27163313
     27551855, 27012915, 27603841, 27224987, 28165545, 27259983, 27919283
     28039953, 27357773, 27302730, 27263996, 27345498, 27517818, 27377219
     26882126, 27396377, 27701279, 27285557, 27779886, 27739006, 27585755
     27321834, 27748954, 27950708, 26646549, 26961415, 27061736, 27066519
     27498477, 28174926, 21806121, 24489904, 27570318, 27365139, 27028251
     27435537, 27869339, 27226913, 27192754, 27038986, 27537472, 27483974
     27329812, 27356373, 27714373, 27422874, 27334648, 27339115, 25035594
     27128580, 27952762, 27691717, 27534289, 27425622, 27434974, 27518227
     27426363, 27352600, 26450454, 27631506, 27143882, 27346949, 27181521
     27199245, 27252023, 27911160, 27365702, 27497950, 26586174, 12816839
     27389352, 25287072, 27215007, 27345190, 27475272, 25634405, 27813267
     27726269, 27463879, 27086406, 27283029, 27850736, 27338838, 27428790
     27395404, 27924147, 27284286, 27430254, 27262945, 27250547, 27346329
     27693713, 27347126, 27745220, 27341036, 27481765, 28174827, 27450355
     27214085, 27503413, 27451182, 27359178, 27625274, 27587672, 28320117
     27367194, 27782464, 27735534, 27396365, 27210872, 27501327, 27984028
     27309182, 27520070, 27999597, 27381383, 27302415, 27378103, 27861909
     27782339, 27254851, 27086821, 27101273, 27465480, 27232983, 27941514
     27486253, 27489719, 27222626, 27560562, 27244785, 27458829, 27262650
     27155549, 25743479, 27897639, 27615608, 27459909, 27267992, 27304936
     27663370, 27602091, 27448162, 27434486, 26933599, 26928317, 27586810
     27153755, 27348081, 27314390, 27786669, 27573408, 27532375, 26818960
     25824236, 27563767, 27060859, 27126666, 27284499, 27210038, 25726981
     9062315, 27301308, 27725967, 27452760, 28188330, 27834984, 27748321
     26990202, 27811439, 27376871, 27654039, 27080748, 27333664, 28059199
     27420715, 27315159, 27331374, 27398660, 27680162, 25035599, 27718914
     27599689, 27595801, 26615291, 27040560, 26476244, 27801774, 27450783
     27066451, 27935826, 28098865, 26226953, 27501465, 27558559, 27496806
     27381656, 27299455, 27124867, 27496224, 27707544, 27163928, 27147979
     27395416, 27532009, 21766220, 27727843, 27607805, 27271876, 26860285
     27997875, 28204423, 27204133, 27627992, 27592466, 27395794, 27430802
     27511196, 27302800, 27204476, 27941896, 27560702, 27053044, 24689376
     27447687, 27451049, 27302695, 26843664, 27181537, 27744211, 27445462
     26427905, 27106915, 27896388, 27812593, 27926113, 27487795, 27135647
     27496308, 28239335, 27452897, 26986173, 27434050, 27513114, 27114112
     27265816, 27274536, 27396624, 28090453, 27396666, 27573409, 27331069
     27379846, 27270197, 27016503, 27934468, 27595973, 27410595, 27833369
     27577758, 26966120, 27757979, 27434193, 27393421, 27032726, 27613080
     27208953, 27538461, 27581484, 27321179, 27263677, 26898279, 27573154
     27492916, 27961746, 27591842, 27868727, 27047831, 27783289, 27405242
     27970265, 27333693, 27506774, 27112686, 27379233, 27471876, 27425507
     28205874, 27544030, 27401637, 27740854, 27993298, 27193810, 27212208
     27184253, 27288230, 27399499, 27786772, 27978668, 26423085, 27873643
     27481406, 27182006, 27547732, 27889841, 27169796, 27501413, 27679488
     27518310, 27545630, 27346644, 27625010, 27360126, 27378959, 27772815
     27525909, 27945870, 27275136, 27321575, 28182503, 26336101, 27216224
     27375260, 27249544, 27174948, 27294480, 27957892, 27774539, 27664702
     27839732, 27166715, 27432355, 27257509, 27657920, 27190851, 27773602
     27774320, 27508936, 27457666, 27330161, 27339396, 27092991, 27101652
     27803665, 27396672, 27472969, 27579969, 27610269, 27302594, 27778433
     27339495, 25724089, 27739957, 28023410, 27433163, 27222121, 27851757
     28109698, 27732323, 27691809, 27503208, 26822620, 28264172, 26846077
     27189611, 27222423, 28132287, 27121566, 27282707, 27133637, 27451531
     27613247, 27560735, 27702244, 27341181, 28240153, 27479358, 27370933
     27396357, 27153641, 26827699, 27238258, 27364916, 27307868, 27791223
     27041253, 27090765, 27892488, 27034318, 27349393, 27412805, 27399762
     27302960, 27679664, 27815347, 27399985, 27241247, 26654411, 27077948
     26987877, 27354783, 27701795, 27304410, 27882176, 27119621, 26956033
     27300007, 27339165, 28106402, 27451187, 27058530, 21547051, 28025398
     27682288, 27398080, 27586895, 27679806, 27164122, 27243810, 13554903
     27993289, 27504190, 26587652, 27212837, 27274143, 27768034, 27550341
     27558861, 27060167, 27600706, 28022847, 27241221, 27131377, 26992964
     27690578, 27747407, 27305318, 27230645, 27818871, 27346709, 28057267
     27405696, 27523368, 27574335, 27526362, 27174938, 27931506, 27392187
     27221900, 27797290, 28184554, 27401618, 27410300, 26313403, 27864737
     27362190, 27439835, 24737581, 27368850, 27593587, 27751006, 23840305
     26996813, 27625050, 27657467, 27073066, 27302711, 27453225, 27984314
     27274456, 27522245, 27417186, 27469329, 27338946, 27396813, 27786699
     27285244, 27692215, 27519708, 23003564, 27339483, 27783059, 26882316
     27757567, 26527054, 27862636, 27563629, 27635508, 27508985, 26785169
     27472898, 27971575, 28413955, 27302681, 27609819, 27345450, 27788520
     27018734, 27766679, 27101527, 27906509, 27593389, 27625620, 27036408
     27358232, 27335682, 23698980, 27144533, 27585800, 27458164, 22734786
     27523800, 28169711, 27384222, 27723002, 27473800, 27310092, 27604293
     27731346, 27365993, 27005278, 27320985, 27445330, 27649707, 27709046
     27313687, 27249215, 25348956, 27065091, 26433972, 27339654, 27318988
     27333658, 27533819, 27403244, 27520900, 27534651, 27030974, 27427805
     27359368, 23310101, 27644757, 27314512, 27044575, 27223171, 27240570
     27156355, 27528204, 27989849, 27143756, 27679961, 27110878, 25943740
     27747869, 27734470, 27283960, 27682151, 27719187, 26894737, 27869283
     27652302, 27182064, 27467543, 27334353, 26843558, 27840386, 27457891
     27334316, 27372756, 27705761, 27484556, 27708711, 27753336, 27364891
     27964051, 27503318, 27423251, 27865439, 27780562, 26731697, 27358241
     27634676, 27726780, 27444727, 27441326, 27326204, 27812560, 27432338
     27577122, 27177551, 27275776, 27558557, 27375542, 26299684, 27301568
     27593263, 27258578, 27222938, 27703242, 27512439, 27508984, 27398223
     27330158, 27807441, 27079545, 27404599, 27259386, 27688692, 28039471
     27292213, 27392256, 27307934, 27617522, 27505603, 27450400, 27348707
     27460675, 27238077, 27346984, 27364947, 26943660, 27236052, 27338912
     27244337, 28021205, 28032758, 28033429, 27263276, 27579353, 27233563
     27220610, 28099592, 27433870, 27426277, 26647619, 27847259, 25929650
     27738679, 27502420, 25709124, 28045209, 27668379, 27318869, 27832643
     27302777, 28072130, 27442041, 27430219, 27614272, 27930478



--------------------------------------------------------------------------------

OPatch succeeded.

The conflict was with 28267731, which is the OJVM patch. I remembered from older 12c versions that the remedy for this was to roll back the older JVM patch before applying the new one. So I ran


C:\oracle\stage\29124511>opatch rollback -id 28267731

and then downloaded the 18.5 OJVM patch (28790647) as well in readiness.

After that, everything went as planned. I applied the DB patch and then applied the 18.5 OJVM patch. The final step was to open my database (and all of the pluggable databases) and run in the database-level patch changes using datapatch.


Microsoft Windows [Version 10.0.17134.523]
(c) 2018 Microsoft Corporation. All rights reserved.

C:\>set ORACLE_HOME=C:\oracle\product\18

C:\>set PATH=%ORACLE_HOME%\OPatch;%ORACLE_HOME%\perl\bin;%PATH%

C:\>set PERL5LIB=

C:\>cd C:\oracle\product\18\bin

C:\oracle\product\18\bin>sqlplus / as sysdba

SQL*Plus: Release 18.0.0.0.0 - Production on Fri Jan 18 12:33:19 2019
Version 18.5.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.


Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.5.0.0.0


SQL> alter pluggable database all open;

Pluggable database altered.

SQL> exit
Disconnected from Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.5.0.0.0

C:\oracle\product\18\bin>cd %ORACLE_HOME%/OPatch

C:\oracle\product\18\OPatch>datapatch -verbose
SQL Patching tool version 18.0.0.0.0 Production on Fri Jan 18 12:33:32 2019
Copyright (c) 2012, 2019, Oracle.  All rights reserved.

Log file for this invocation: C:\oracle\cfgtoollogs\sqlpatch\sqlpatch_16792_2019_01_18_12_33_32\sqlpatch_invocation.log

Connecting to database...OK
Gathering database info...done

Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)

Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of interim SQL patches:
Interim patch 28267731 (WINDOWS OJVM BUNDLE PATCH : 18.3.0.0.180717 (28267731)):
  Binary registry: Not installed
  PDB CDB$ROOT: Applied successfully on 23-AUG-18 10.31.18.372000 AM
  PDB PDB$SEED: Applied successfully on 23-AUG-18 10.36.41.858000 AM
  PDB PDB1: Applied successfully on 23-AUG-18 10.36.41.858000 AM
Interim patch 28790647 (OJVM RELEASE UPDATE: 18.5.0.0.190115 (28790647)):
  Binary registry: Installed
  PDB CDB$ROOT: Not installed
  PDB PDB$SEED: Not installed
  PDB PDB1: Not installed

Current state of release update SQL patches:
  Binary registry:
    18.5.0.0.0 Release_Update 1812202039: Installed
  PDB CDB$ROOT:
    Applied 18.3.0.0.0 Release_Update 1808132056 successfully on 23-AUG-18 10.31.18.366000 AM
  PDB PDB$SEED:
    Applied 18.3.0.0.0 Release_Update 1808132056 successfully on 23-AUG-18 10.36.41.852000 AM
  PDB PDB1:
    Applied 18.3.0.0.0 Release_Update 1808132056 successfully on 23-AUG-18 10.36.41.852000 AM

Adding patches to installation queue and performing prereq checks...done
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED PDB1
    The following interim patches will be rolled back:
      28267731 (WINDOWS OJVM BUNDLE PATCH : 18.3.0.0.180717 (28267731))
    Patch 29124511 (Windows Database Bundle Patch : 18.5.0.0.190115 (29124511)):
      Apply from 18.3.0.0.0 Release_Update 1808132056 to 18.5.0.0.0 Release_Update 1812202039
    The following interim patches will be applied:
      28790647 (OJVM RELEASE UPDATE: 18.5.0.0.190115 (28790647))

Installing patches...
Patch installation complete.  Total patches installed: 9

Validating logfiles...done
Patch 28267731 rollback (pdb CDB$ROOT): SUCCESS
  logfile: C:\oracle\cfgtoollogs\sqlpatch\28267731\22301563/28267731_rollback_DB18_CDBROOT_2019Jan18_12_34_14.log (no errors)
Patch 29124511 apply (pdb CDB$ROOT): SUCCESS
  logfile: C:\oracle\cfgtoollogs\sqlpatch\29124511\22646084/29124511_apply_DB18_CDBROOT_2019Jan18_12_34_47.log (no errors)
Patch 28790647 apply (pdb CDB$ROOT): SUCCESS
  logfile: C:\oracle\cfgtoollogs\sqlpatch\28790647\22646085/28790647_apply_DB18_CDBROOT_2019Jan18_12_35_21.log (no errors)
Patch 28267731 rollback (pdb PDB$SEED): SUCCESS
  logfile: C:\oracle\cfgtoollogs\sqlpatch\28267731\22301563/28267731_rollback_DB18_PDBSEED_2019Jan18_12_35_23.log (no errors)
Patch 29124511 apply (pdb PDB$SEED): SUCCESS
  logfile: C:\oracle\cfgtoollogs\sqlpatch\29124511\22646084/29124511_apply_DB18_PDBSEED_2019Jan18_12_35_37.log (no errors)
Patch 28790647 apply (pdb PDB$SEED): SUCCESS
  logfile: C:\oracle\cfgtoollogs\sqlpatch\28790647\22646085/28790647_apply_DB18_PDBSEED_2019Jan18_12_36_07.log (no errors)
Patch 28267731 rollback (pdb PDB1): SUCCESS
  logfile: C:\oracle\cfgtoollogs\sqlpatch\28267731\22301563/28267731_rollback_DB18_PDB1_2019Jan18_12_35_24.log (no errors)
Patch 29124511 apply (pdb PDB1): SUCCESS
  logfile: C:\oracle\cfgtoollogs\sqlpatch\29124511\22646084/29124511_apply_DB18_PDB1_2019Jan18_12_35_41.log (no errors)
Patch 28790647 apply (pdb PDB1): SUCCESS
  logfile: C:\oracle\cfgtoollogs\sqlpatch\28790647\22646085/28790647_apply_DB18_PDB1_2019Jan18_12_36_12.log (no errors)
SQL Patching tool complete on Fri Jan 18 12:36:17 2019
C:\oracle\product\18\OPatch>
C:\oracle\product\18\OPatch>
C:\oracle\product\18\OPatch>sqlplus / as sysdba

SQL*Plus: Release 18.0.0.0.0 - Production on Fri Jan 18 12:36:28 2019
Version 18.5.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.


Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.5.0.0.0


SQL> shutdown
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 2936010432 bytes
Fixed Size                  9033408 bytes
Variable Size            1023410176 bytes
Database Buffers         1895825408 bytes
Redo Buffers                7741440 bytes
Database mounted.
Database opened.
SQL> alter pluggable database all open;

Pluggable database altered.

SQL> @?/rdbms/admin/utlrp.sql

Session altered.


TIMESTAMP
----------------------------------------------------------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN              2019-01-18 12:37:58

DOC>   The following PL/SQL block invokes UTL_RECOMP to recompile invalid
DOC>   objects in the database. Recompilation time is proportional to the
DOC>   number of invalid objects in the database, so this command may take
DOC>   a long time to execute on a database with a large number of invalid
DOC>   objects.
DOC>
DOC>   Use the following queries to track recompilation progress:
DOC>
DOC>   1. Query returning the number of invalid objects remaining. This
DOC>      number should decrease with time.
DOC>         SELECT COUNT(*) FROM obj$ WHERE status IN (4, 5, 6);
DOC>
DOC>   2. Query returning the number of objects compiled so far. This number
DOC>      should increase with time.
DOC>         SELECT COUNT(*) FROM UTL_RECOMP_COMPILED;
DOC>
DOC>   This script automatically chooses serial or parallel recompilation
DOC>   based on the number of CPUs available (parameter cpu_count) multiplied
DOC>   by the number of threads per CPU (parameter parallel_threads_per_cpu).
DOC>   On RAC, this number is added across all RAC nodes.
DOC>
DOC>   UTL_RECOMP uses DBMS_SCHEDULER to create jobs for parallel
DOC>   recompilation. Jobs are created without instance affinity so that they
DOC>   can migrate across RAC nodes. Use the following queries to verify
DOC>   whether UTL_RECOMP jobs are being created and run correctly:
DOC>
DOC>   1. Query showing jobs created by UTL_RECOMP
DOC>         SELECT job_name FROM dba_scheduler_jobs
DOC>            WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>
DOC>   2. Query showing UTL_RECOMP jobs that are running
DOC>         SELECT job_name FROM dba_scheduler_running_jobs
DOC>            WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>#

PL/SQL procedure successfully completed.


TIMESTAMP
----------------------------------------------------------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END              2019-01-18 12:38:01

DOC> The following query reports the number of invalid objects.
DOC>
DOC> If the number is higher than expected, please examine the error
DOC> messages reported with each object (using SHOW ERRORS) to see if they
DOC> point to system misconfiguration or resource constraints that must be
DOC> fixed before attempting to recompile these objects.
DOC>#

OBJECTS WITH ERRORS
-------------------
                  0

DOC> The following query reports the number of exceptions caught during
DOC> recompilation. If this number is non-zero, please query the error
DOC> messages in the table UTL_RECOMP_ERRORS to see if any of these errors
DOC> are due to misconfiguration or resource constraints that must be
DOC> fixed before objects can compile successfully.
DOC> Note: Typical compilation errors (due to coding errors) are not
DOC>       logged into this table: they go into DBA_ERRORS instead.
DOC>#

ERRORS DURING RECOMPILATION
---------------------------
                          0


Function created.


PL/SQL procedure successfully completed.


Function dropped.


PL/SQL procedure successfully completed.

SQL>

================================

And that was all there was to it. I’ve now picked up over 1,000 fixes and improvements to the software. If you are one of those people who likes to patch rarely, I strongly recommend you have a re-think and take a closer look at the release update cycle. We want to get you better software, faster.

Enjoy your 18.5 release!

EXPORT not GATHER with DBMS_STATS

Just a short post today on something that came in as a question for the upcoming Office Hours session. I thought it could be covered quickly in a blog post, without needing the more detailed discussion for which Office Hours is better suited.

The question was:

“When I gather statistics using DBMS_STATS, can I just create a statistics table and pass that as a parameter to get the results of the gather?”

And the answer simply is “No” :-) but let me clear up the confusion.

Many DBMS_STATS routines allow you to pass the name of a “statistics table” into which statistical information for a table, schema, database etc will be stored. For example, you can see the STAT-prefixed parameters to GATHER_TABLE_STATS:


PROCEDURE GATHER_TABLE_STATS
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 OWNNAME                        VARCHAR2                IN
 TABNAME                        VARCHAR2                IN
 PARTNAME                       VARCHAR2                IN     DEFAULT
 ESTIMATE_PERCENT               NUMBER                  IN     DEFAULT
 BLOCK_SAMPLE                   BOOLEAN                 IN     DEFAULT
 METHOD_OPT                     VARCHAR2                IN     DEFAULT
 DEGREE                         NUMBER                  IN     DEFAULT
 GRANULARITY                    VARCHAR2                IN     DEFAULT
 CASCADE                        BOOLEAN                 IN     DEFAULT
 STATTAB                        VARCHAR2                IN     DEFAULT
 STATID                         VARCHAR2                IN     DEFAULT
 STATOWN                        VARCHAR2                IN     DEFAULT
 NO_INVALIDATE                  BOOLEAN                 IN     DEFAULT
 STATTYPE                       VARCHAR2                IN     DEFAULT
 FORCE                          BOOLEAN                 IN     DEFAULT
 CONTEXT                        CCONTEXT                IN     DEFAULT
 OPTIONS                        VARCHAR2                IN     DEFAULT

The statistics table must have a particular structure, which is also managed via the API, using the CREATE_STAT_TABLE routine.


SQL> exec dbms_stats.create_stat_table('','st')

PL/SQL procedure successfully completed.

SQL> desc ST
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 STATID                                 VARCHAR2(128)
 TYPE                                   CHAR(1)
 VERSION                                NUMBER
 FLAGS                                  NUMBER
 C1                                     VARCHAR2(128)
 C2                                     VARCHAR2(128)
 C3                                     VARCHAR2(128)
 C4                                     VARCHAR2(128)
 C5                                     VARCHAR2(128)
 C6                                     VARCHAR2(128)
 N1                                     NUMBER
 N2                                     NUMBER
 N3                                     NUMBER
 N4                                     NUMBER
 N5                                     NUMBER
 N6                                     NUMBER
 N7                                     NUMBER
 N8                                     NUMBER
 N9                                     NUMBER
 N10                                    NUMBER
 N11                                    NUMBER
 N12                                    NUMBER
 N13                                    NUMBER
 D1                                     DATE
 T1                                     TIMESTAMP(6) WITH TI
                                        ME ZONE
 R1                                     RAW(1000)
 R2                                     RAW(1000)
 R3                                     RAW(1000)
 CH1                                    VARCHAR2(1000)
 CL1                                    CLOB
 BL1                                    BLOB

But when calling the GATHER_xxx routines, if you pass the name of a statistics table, please note that it is used to take a copy of the relevant statistics before the fresh gather is done. We can see this with a simple demo.


SQL> create table t1 as select * from dba_objects;

Table created.

SQL>
SQL> select num_rows
  2  from   user_tables
  3  where  table_name = 'T1';

  NUM_ROWS
----------
     83608

1 row selected.

So we can see that the table T1 has ~83,000 rows in it. Since I’m running this on 12c, there was no need to gather statistics after this initial load, because they were collected automatically as part of the loading process. (Very nifty!)
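
In fact you can see that these statistics came from the bulk load itself – a quick sketch, assuming the 12c online statistics gathering kicked in (the NOTES column shows STATS_ON_LOAD for stats gathered during a CTAS or direct path load):


select table_name, num_rows, notes
from   user_tab_statistics
where  table_name = 'T1';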

I now add another ~250,000 rows and perform a new GATHER_TABLE_STATS call, this time passing in the name of the statistics table (ST) that I just created.


SQL>
SQL> insert into t1
  2  select * from t1;

83608 rows created.

SQL>
SQL> insert into t1
  2  select * from t1;

167216 rows created.

SQL>
SQL> begin
  2  dbms_stats.gather_table_stats
  3   ( ownname=>'MCDONAC',
  4     tabname=>'T1',
  5     stattab=>'ST',
  6     statid=>'STATS'
  7  );
  8  end;
  9  /

PL/SQL procedure successfully completed.

Digging into the statistics table data, you can see that the number of rows recorded for T1 is 83597, which is the before image of the optimizer statistics, not the after image.


SQL> select * from st where c1 = 'T1' and type = 'T'
  2  @pr
==============================
STATID                        : STATS
TYPE                          : T
VERSION                       : 8
FLAGS                         : 2
C1                            : T1
C2                            :
C3                            :
C4                            :
C5                            : MCDONAC
C6                            :
N1                            : 83597
N2                            : 1607
N3                            : 129
N4                            : 83597
N5                            :
N6                            :
N7                            :
N8                            :
N9                            : 0
N10                           :
N11                           :
N12                           :
N13                           :
D1                            : 16-JAN-19
T1                            :
R1                            :
R2                            :
R3                            :
CH1                           :
CL1                           :
BL1                           :

PL/SQL procedure successfully completed.

So just remember that if you want to capture the DBMS_STATS information that is the result of a GATHER_xxx call, you can follow it up with the appropriate EXPORT_xxx call to dump the data.
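
For example, following on from the demo above (same schema and ST statistics table, with an illustrative STATID):


-- copy the current (post-gather) stats for T1 into ST
begin
  dbms_stats.export_table_stats
   ( ownname=>'MCDONAC',
     tabname=>'T1',
     stattab=>'ST',
     statid=>'AFTER_GATHER'
  );
end;
/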

My APEX was fine and then it wasn’t

I got a nasty shock this morning when I fired up my local Application Express installation.


It had been working fine, and all of a sudden…just dead. I sounded like all of those family members that we, as I.T. practitioners, have to deal with (and that we’re so sceptical of) when they say: “I didn’t change anything…it just stopped!” :-)

In keeping with the treatment of family members, I then adopted the advice that I normally give them first.

“Have you tried turning it off and on again?”

and upon restart, I saw the following during the startup


C:\oracle\ords181>java -jar ords.war standalone
2019-01-11 11:47:36.071:INFO::main: Logging initialized @1378ms to org.eclipse.jetty.util.log.StdErrLog
Jan 11, 2019 11:47:36 AM
INFO: HTTP and HTTP/2 cleartext listening on port: 8080
Jan 11, 2019 11:47:36 AM
INFO: The document root is serving static resources located in: C:\oracle\ords181\conf\ords\standalone\doc_root
2019-01-11 11:47:36.409:INFO:oejs.Server:main: jetty-9.4.z-SNAPSHOT, build timestamp: 2017-11-22T05:27:37+08:00, git hash: 82b8fb23f757335bb3329d540ce37a2a2615f0a8
2019-01-11 11:47:36.422:INFO:oejs.session:main: DefaultSessionIdManager workerName=node0
2019-01-11 11:47:36.423:INFO:oejs.session:main: No SessionScavenger set, using defaults
2019-01-11 11:47:36.423:INFO:oejs.session:main: Scavenging every 600000ms
Jan 11, 2019 11:47:37 AM
WARNING: The pool named: |apex|| is invalid and will be ignored: The connection pool named: apex is not correctly configured, due to the following error(s): ORA-28001: the password has expired

Jan 11, 2019 11:47:37 AM
WARNING: The pool named: |apex|al| is invalid and will be ignored: The connection pool named: apex_al is not correctly configured, due to the following error(s): ORA-28001: the password has expired

Since security is our #1 thing for 2019, and probably should be the #1 item on your agenda for 2019, this was caused by some improvements to the Oracle defaults when you perform a database installation. Rather than the “flexible” (aka loose :-)) policy we used to have when it comes to password management, we’ve gone for some more sensible options out of the box.


SQL> select * from dba_profiles order by 1,2
PROFILE          RESOURCE_NAME                    RESOURCE LIMIT
---------------- -------------------------------- -------- -----------
DEFAULT          COMPOSITE_LIMIT                  KERNEL   UNLIMITED
DEFAULT          CONNECT_TIME                     KERNEL   UNLIMITED
DEFAULT          CPU_PER_CALL                     KERNEL   UNLIMITED
DEFAULT          CPU_PER_SESSION                  KERNEL   UNLIMITED
DEFAULT          FAILED_LOGIN_ATTEMPTS            PASSWORD 10
DEFAULT          IDLE_TIME                        KERNEL   UNLIMITED
DEFAULT          INACTIVE_ACCOUNT_TIME            PASSWORD UNLIMITED
DEFAULT          LOGICAL_READS_PER_CALL           KERNEL   UNLIMITED
DEFAULT          LOGICAL_READS_PER_SESSION        KERNEL   UNLIMITED
DEFAULT          PASSWORD_GRACE_TIME              PASSWORD 7
DEFAULT          PASSWORD_LIFE_TIME               PASSWORD 180
DEFAULT          PASSWORD_LOCK_TIME               PASSWORD 1
DEFAULT          PASSWORD_REUSE_MAX               PASSWORD UNLIMITED
DEFAULT          PASSWORD_REUSE_TIME              PASSWORD UNLIMITED
DEFAULT          PASSWORD_VERIFY_FUNCTION         PASSWORD NULL
DEFAULT          PRIVATE_SGA                      KERNEL   UNLIMITED
DEFAULT          SESSIONS_PER_USER                KERNEL   UNLIMITED

So if you want your APEX public accounts (and I stress, no others!) to have a non-expiring password, then you should create a custom profile for those accounts and assign them accordingly.


create profile no_expire 
limit  password_life_time  unlimited;
  
alter user apex_public_user      profile no_expire ;
alter user apex_rest_public_user profile no_expire ;
alter user apex_listener         profile no_expire ;
alter user ords_public_user      profile no_expire ;

This will ensure you don’t get an unexpected drama next time you want to fire up Application Express.
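
One extra note: if (as in my case) the passwords have already expired, moving the accounts to the new profile does not un-expire them. You will also need to reset each password (the value below is obviously just a placeholder) and update the password stored in your ORDS connection pool configuration to match:


SQL> alter user apex_public_user identified by "SomeNewPassword123";

User altered.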

2018 - what grabbed your attention

Here are the blog posts that you hit on most this year, with the most viewed entry on top. Unsurprisingly the top spot is about the release of 18c, but interestingly the ORA-14758 and Active Sessions posts have come up again from last year’s list, so it appears they are still common issues for the modern Oracle professional. And of course, it is nice to see that my Openworld Mega-download is helping the community.

Thanks for supporting the blog, and as always, there will be more content next year!

Your New Year’s Resolution

Aligning roughly with the calendar year, based on the Chinese zodiac we’re about to go from the year of the dog to the year of the pig. But for me, in the “Information Technology zodiac” Smile , 2018 was the year of the hack, just as 2017 was and just as 2019 will be.

I’ve not dedicated much time to keeping a record of all of the high profile breaches this year, but just off the top of my head I can think of:

  • The Elasticsearch breach,
  • The Starwood breach which, as a frequent traveller, probably guarantees I’ve been impacted in some as yet unknown way,
  • The Quora breach, a site that is very popular amongst I.T. professionals,
  • The Google+ breach, which peeved me more because it was hushed up by Google in order to avoid reputational damage. Um…what happened to looking after customers?

Adding to that list, whilst we were discussing security this year at Sangam 18, Indian friends reminded me of the enormous Aadhaar breach at the start of the year, which is perhaps one of the largest of all time in terms of the number of people impacted.

And to be honest, I lost track of the sheer number of dramas that Facebook had, let alone the what, which and where of them.

I’m a realist, so I’m not saying that hacks can always be avoided, or that anyone who gets hacked is being negligent. Sometimes even the best efforts are not good enough, because the hacks get more and more sophisticated all the time. But at the very least, I think we should set a goal for 2019 that all of our data at rest should be encrypted. If we can start with that small step, then if our data at rest is somehow nefariously obtained, it will still have a veil of protection.
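
In an Oracle context, one way to make a start on that is Transparent Data Encryption at the tablespace level. Here’s a minimal sketch, assuming a TDE keystore has already been created and opened, and that Oracle Managed Files is in use so no datafile name is needed; “secure_data” is just a hypothetical name:


SQL> -- assumes an open TDE keystore and OMF for datafile placement
SQL> create tablespace secure_data
  2  datafile size 100m
  3  encryption using 'AES256'
  4  default storage (encrypt);

Tablespace created.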

So what’s your I.T. resolution for next year? I hope it’s security related.

Another day…another "use the right datatype" post

Here’s an interesting little oddity (aka bug) with scalar subqueries.

We’ll start with a simple working example.


SQL> create table t1 ( c1 number );

Table created.

SQL> insert into t1 values (1);

1 row created.

SQL> create table t2 ( c1 int, c2 varchar2(10));

Table created.

SQL>
SQL> insert into t2 values(1,'t1');

1 row created.

SQL> insert into t2 values(1,'t01');

1 row created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats('','T1')

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_table_stats('','T2')

PL/SQL procedure successfully completed.

SQL>
SQL>
SQL> select a.c1,
  2    ( select max(b.c2) from t2 b where a.c1 = b.c1 )
  3  from t1 a;

        C1 (SELECTMAX
---------- ----------
         1 t1

1 row selected.

That all seems straightforward:

  • We got the value 1 from T1.C1,
  • used that as an input to the scalar subquery on T2,
  • got the maximum of the 2 matching rows in table T2,
  • returned the result as a column in the outer query.

Let us now repeat the example, but this time we’ll make column C1 in table T2 a VARCHAR2. The query will remain unchanged, so now we are comparing a numeric “1” with VARCHAR2 values of “1” and “01”.


SQL> create table t1 ( c1 number );

Table created.

SQL> insert into t1 values (1);

1 row created.

SQL> create table t2 ( c1 varchar2(10), c2 varchar2(10));

Table created.

SQL>
SQL> insert into t2 values('1','t1');

1 row created.

SQL> insert into t2 values('01','t01');

1 row created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats('','T1')

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_table_stats('','T2')

PL/SQL procedure successfully completed.

SQL>
SQL>
SQL> select a.c1,
  2    ( select max(b.c2) from t2 b where a.c1 = b.c1 )
  3  from t1 a;

        C1 (SELECTMAX
---------- ----------
         1 t1
         1 t01

2 rows selected.

Whoops…that doesn’t look right.

Don’t get me wrong, that is an error in the way we are processing the query, because we should not be getting 2 rows back from it. Even though the implicit conversion means both T2 rows now match (TO_NUMBER('01') is 1 as well), the scalar subquery should still aggregate them with MAX into a single value for the single row in T1. But it is another example of where you can encounter boundary cases (and hence bugs) when you stray from the well-trodden route. Something is obviously going awry during the implicit conversion, because if we explicitly take care of it, then things are fine.


SQL> select a.c1,
  2    ( select max(b.c2) from t2 b where a.c1 = to_number(b.c1) )
  3  from t1 a;

        C1 (SELECTMAX
---------- ----------
         1 t1

1 row selected.

I’d love it if there were an init.ora parameter that could be used in (say) development environments to error out any SQL statement containing an implicit data type conversion, but until that day comes (and it may never come!) please make sure you take care with your data types!
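
In the meantime, one general technique for spotting implicit conversions (not something from the example above, just a suggestion) is to check the predicate section of the execution plan, where the silently added TO_NUMBER gives the game away:


SQL> set serveroutput off
SQL> select a.c1,
  2    ( select max(b.c2) from t2 b where a.c1 = b.c1 )
  3  from t1 a;

SQL> select * from table(dbms_xplan.display_cursor());

In the Predicate Information section of that output you should see a filter along the lines of TO_NUMBER("B"."C1")=:B1, which is the implicit conversion in action.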

Happy New Year everyone!