18.2 patch… painless for me

18.2 was released a few days ago, so I thought I’d throw it against my 18c instance and see how things played out.  This was just a single-instance database, running with a single pluggable database.

Step 1 for me was to download the latest OPatch.  To be honest, I don’t even check whether I already have the latest version in any installed ORACLE_HOME anymore.  If I’m going to apply a patch, I’m always going to download the latest OPatch unless the README explicitly tells me a reason why I should not.  I figure getting the latest version reduces the chances I’ll hit problems.  For me, I grabbed version “12.2.0.1.13”, but your experience may differ depending on when you come across this post.
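
As a hedged sketch of that routine: OPatch ships on My Oracle Support as patch 6880880, and refreshing it is just a matter of unzipping the new distribution over the existing OPatch directory in the home.  The zip file name below is illustrative – yours will match the version and platform you download.

[oracle@host18 ~]$ $ORACLE_HOME/OPatch/opatch version
[oracle@host18 ~]$ cd $ORACLE_HOME
[oracle@host18 dbhome_1]$ mv OPatch OPatch.old
[oracle@host18 dbhome_1]$ unzip -q /tmp/p6880880_122010_Linux-x86-64.zip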

So I then downloaded patch 27676517 (note, this does not include patches for the JVM which are done separately).

From there it was just a case of following the README instructions and I had no dramas at all.  Software patching went through smoothly:


[oracle@host18 27676517]$ export PATH=$PATH:$ORACLE_HOME/OPatch
[oracle@host18 27676517]$ opatch apply
Oracle Interim Patch Installer version 12.2.0.1.13
Copyright (c) 2018, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/oracle/product/18.0.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/18.0.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.13
OUI version       : 12.2.0.4.0
Log file location : /u01/app/oracle/product/18.0.0/dbhome_1/cfgtoollogs/opatch/opatch2018-04-19_08-32-51AM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches:   27676517  

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/product/18.0.0/dbhome_1')


Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '27676517' to OH '/u01/app/oracle/product/18.0.0/dbhome_1'
ApplySession: Optional component(s) [ oracle.assistants.usm, 18.0.0.0.0 ] , [ oracle.has.crs, 18.0.0.0.0 ] , [ oracle.network.cman, 18.0.0.0.0 ] , [ oracle.assistants.asm, 18.0.0.0.0 ]  not present in the Oracle Home or a higher version is found.

Patching component oracle.network.rsf, 18.0.0.0.0...

Patching component oracle.rdbms.crs, 18.0.0.0.0...

Patching component oracle.assistants.acf, 18.0.0.0.0...

Patching component oracle.sqlplus.ic, 18.0.0.0.0...

Patching component oracle.rdbms.deconfig, 18.0.0.0.0...

Patching component oracle.sqlplus, 18.0.0.0.0...

Patching component oracle.rdbms.util, 18.0.0.0.0...

Patching component oracle.rdbms, 18.0.0.0.0...

Patching component oracle.rdbms.dbscripts, 18.0.0.0.0...

Patching component oracle.assistants.deconfig, 18.0.0.0.0...

Patching component oracle.assistants.server, 18.0.0.0.0...

Patching component oracle.rdbms.install.plugins, 18.0.0.0.0...

Patching component oracle.rdbms.rsf, 18.0.0.0.0...

Patching component oracle.rdbms.rman, 18.0.0.0.0...

Patching component oracle.javavm.client, 18.0.0.0.0...

Patching component oracle.ldap.owm, 18.0.0.0.0...

Patching component oracle.ldap.security.osdt, 18.0.0.0.0...
Patch 27676517 successfully applied.
Log file location: /u01/app/oracle/product/18.0.0/dbhome_1/cfgtoollogs/opatch/opatch2018-04-19_08-32-51AM_1.log

OPatch succeeded.

 

And the datapatch stage to update the database(s) also went through without any problems.


[oracle@host18 OPatch]$ ./datapatch -verbose
SQL Patching tool version 18.0.0.0.0 Production on Thu Apr 19 08:37:40 2018
Copyright (c) 2012, 2018, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_10552_2018_04_19_08_37_40/sqlpatch_invocation.log

Connecting to database...OK
Gathering database info...done

Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)

Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of interim SQL patches:
  No interim patches found

Current state of release update SQL patches:
  Binary registry:
    18.2.0.0.0 Release_Update 1804041635: Installed
  PDB CDB$ROOT:
    No release update patches installed
  PDB PDB$SEED:
    No release update patches installed
  PDB PDB1:
    No release update patches installed

Adding patches to installation queue and performing prereq checks...done
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED PDB1
    No interim patches need to be rolled back
    Patch 27676517 (Database Release Update : 18.2.0.0.180417 (27676517)):
      Apply from 18.1.0.0.0 Feature Release to 18.2.0.0.0 Release_Update 1804041635
    No interim patches need to be applied

Installing patches...
Patch installation complete.  Total patches installed: 3

Validating logfiles...done
Patch 27676517 apply (pdb CDB$ROOT): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27676517/22097537/27676517_apply_DB18_CDBROOT_2018Apr19_08_38_45.log (no errors)
Patch 27676517 apply (pdb PDB$SEED): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27676517/22097537/27676517_apply_DB18_PDBSEED_2018Apr19_08_39_58.log (no errors)
Patch 27676517 apply (pdb PDB1): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27676517/22097537/27676517_apply_DB18_PDB1_2018Apr19_08_40_01.log (no errors)
SQL Patching tool complete on Thu Apr 19 08:41:13 2018
[oracle@host18 OPatch]$ 




[oracle@host18 ~]$ sqlplus scott/tiger@pdb1

SQL*Plus: Release 18.0.0.0.0 - Production on Thu Apr 19 08:44:27 2018
Version 18.2.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.


Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.2.0.0.0

SQL> 
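
To double-check that the SQL side of the patch really landed, the patch registry is the place to look.  A hedged example (the view reports on the current container, so run it in each PDB you care about):

SQL> select patch_id, status, description
  2  from   dba_registry_sqlpatch;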

So all in all, a painless and straightforward experience for me.

One minor annoyance for me was the “Bugs Fixed” document.  The README references this as:

[screenshot: the “Bugs Fixed” link in the README]

but that document is not accessible currently.  However, if you follow a different link in the README

[screenshot: the alternative documentation link in the README]

then you will get to the document which contains the list of fixes etc.

Footnote: If you are not familiar with the new nomenclature of “Patch”, “Release Update”, and so on, here is a good place to start

https://mikedietrichde.com/2017/11/07/ru-rur-recommendations-facts/

and follow the various links within that post for more information.

Happy upgrading!

Addendum: When it comes to the ‘datapatch’ script, the README is perhaps a little ambiguous.  More on that here:

https://mikedietrichde.com/2018/04/19/do-you-have-to-execute-datapatch-when-you-create-a-new-database/

 

When you screw up … make it positive for your users

Yesterday I was caught up in an interesting SNAFU at my local supermarket.  All of the checkout registers shut down, making it impossible to pay for groceries.  Later, on Twitter, the company apologized, and we discovered it was actually a nationwide outage!

 

[screenshot: the Woolworths apology on Twitter]

News of the outage spread like wildfire through the media:

http://www.news.com.au/finance/business/retail/woolworths-checkouts-hit-by-national-outage/news-story/5611943156249d4ecc6427ef0b447c18

https://www.smh.com.au/business/consumer-affairs/woolworths-meltdown-closes-stores-across-australia-20180416-p4z9y4.html

http://www.abc.net.au/news/2018-04-16/woolworths-checkouts-across-australia-down/9663904

https://www.sbs.com.au/news/woolworths-hit-by-nationwide-technical-outage

https://www.9news.com.au/national/2018/04/16/16/29/woolworths-outage-stops-trading-across-australia

https://www.lifehacker.com.au/2018/04/the-woolworths-outage-is-a-lesson-in-business-continuity-planning/

TL;DR – people were forced to abandon their shopping trolleys and had to leave the stores. 

Needless to say, consumers vented loudly and forcefully at Woolworths, and the negative press naturally struck a chord with the general public, because we all feel empathy for the parent in front of the television cameras lamenting their inability to feed their family that night.  (For the record, my boys had meatballs and salad last night, made with the leftovers we had in the fridge.)

In a perfect world, IT system updates would never cause pain for the users of those systems.  But no matter how careful the testing and planning, I think it is reasonable to assert that we can never totally eliminate the chance of a major problem during an upgrade; our aim is always to shrink that probability as close to zero as possible.

That brings me to the point of this post – and its perhaps slightly controversial stance.  I don’t think IT outages really matter that much from the perspective of the customer.  For example, a while back Amazon had a huge outage here in Australia due to storms in Sydney.  Delta Airlines had a big outage in late 2016.  But last time I checked, people are still flying Delta and still buying stuff they didn’t need from Amazon.  Customers will forgive an outage, but only if you prioritize their needs over yours during the crisis.  People are still ripping into Woolworths today because a Twitter apology doesn’t really get consumers any closer to taking groceries home.

So this is what I would have done if I were Woolworths…  Make an announcement in each store that the store needs to close unexpectedly, and ask customers to take their trolleys to the nearest checkout (even though I know that the checkouts are not working).  At that point, simply let people take what they have accumulated so far in their trolleys for no charge.  The news articles above already mentioned that the stores had security staff on hand to assist with closing the stores – so there is protection against a “looting mentality” being created.  Yes, there will still be some negative press from those customers who could not get into the stores once they closed, but I contend that ultimately this would have turned into a positive result for Woolworths.  Yes, you take a hit on the bottom line for yesterday’s revenue, but the media attention becomes the mums and dads walking out of the stores smiling about the free shop they just got, rather than swearing they’ll never shop at Woolworths again.

Outages don’t matter.  Meeting the customer need is what matters.

Don’t get me wrong – I’m not claiming that any and every company I have ever worked for, or worked with, has a glowing record of understanding how to meet customer needs during times of crisis.  My point is that it should be something to always strive for – when you inflict pain on your customers due to the information technology solutions you build, then do your best to own the problem, and bust a gut trying to make the experience as bearable as possible for your customers, or even a win for them. 

Whether you turn bad into good, or bad into worse, rest assured your customers will remember you for it.

Concurrency … the path to success and the path to failure

Let’s face it. Concurrency is a good thing when it comes to database applications. After all, if there is only a single user of your application, then chances are it is not a successful application.  Of course there are exceptions to this rule, but by and large, most of the applications we build are going to be used by large populations of users.  And given the recent publicity around users’ data and privacy, we can also be pretty confident that we want the data in our applications to be correct at all times.

As a developer, it is absolutely critical to keep concurrency in mind when building applications.  Here is a simple demonstration to drive home the point.

The task assigned to the developer here is simple – move all of the rows satisfying a particular condition from table T1 to table T2.  Let’s create our database objects first:


SQL> create table t1 ( x int, y int, z int );

Table created.

SQL> create table t2 as select * from t1 where 1=0;

Table created.

SQL>
SQL> insert into t1
  2  select rownum, rownum, rownum
  3  from dual connect by level <= 10000;

10000 rows created.

SQL> commit;

Commit complete.

So table T1 has 10,000 rows and T2 is empty, and our condition to determine which rows to transfer will be X being non-zero.  That happens to be all of the rows, but of course, it could be any filtering rule on the existing data. I’ve chosen all of the rows so I can keep the numbers easy to digest in the upcoming examples.

There is no such thing as a MOVE command in SQL, so we can implement this with two statements: 

  • INSERT-SELECT to copy the rows from T1 to T2
  • DELETE to remove the rows we just copied.

Here is the simple PL/SQL routine to do that, and a first test run of the code.


SQL>
SQL> set serverout on
SQL> begin
  2      insert into t2 select * from t1 where x != 0;
  3      delete from t1 where x != 0;
  4      commit;
  5  end;
  6  /

PL/SQL procedure successfully completed.

SQL>
SQL> select count(*) from t1;

  COUNT(*)
----------
         0

1 row selected.

SQL> select count(*) from t2;

  COUNT(*)
----------
     10000

1 row selected.

SQL>

Everything looks fine, doesn’t it? But it’s not.  In fact, I’ll do what any good developer should do, and add some instrumentation to the code to get more information about its operations.


SQL> create table t1 ( x int, y int, z int );

Table created.

SQL> create table t2 as select * from t1 where 1=0;

Table created.

SQL>
SQL> insert into t1
  2  select rownum, rownum, rownum
  3  from dual connect by level <= 10000;

10000 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> set serverout on
SQL> begin
  2      insert into t2 select * from t1 where x != 0;
  3      dbms_output.put_line(sql%rowcount);           
  4      delete from t1 where x != 0;
  5      dbms_output.put_line(sql%rowcount);
  6      commit;
  7  end;
  8  /
10000
10100      <=========== !!!!!

PL/SQL procedure successfully completed.

SQL>
SQL> select count(*) from t1;

  COUNT(*)
----------
         0

1 row selected.

SQL> select count(*) from t2;

  COUNT(*)
----------
     10000

1 row selected.

SQL>
SQL>

Look at the output from the PL/SQL block.  I copied (inserted) 10,000 rows from T1 to T2, but the deletion phase removed more than 10,000 rows. How was that possible?  Because databases are about concurrency.  In another session, I had a simple script running that just adds new rows to the table continuously (a hedged approximation of it follows below).  As a developer, it is vital that you do not assume you are the only person accessing or manipulating a table.  My code in its existing form has corrupted the database: 100 rows have been deleted that were never copied to T2.  They’re gone forever!
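
The competing workload was nothing exotic.  Something along these lines was running in the other session – the exact values don’t matter, only that the new rows satisfy the x != 0 filter:

SQL> begin
  2    for i in 1 .. 100000 loop
  3      insert into t1 values (i, i, i);
  4      commit;
  5    end loop;
  6  end;
  7  /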

So… on to some potential solutions.  Let’s try locking the rows first:


SQL> create table t1 ( x int, y int, z int );

Table created.

SQL> create table t2 as select * from t1 where 1=0;

Table created.

SQL>
SQL> insert into t1
  2  select rownum, rownum, rownum
  3  from dual connect by level <= 10000;

10000 rows created.

SQL> commit;

Commit complete.

SQL> set serverout on
SQL> declare
  2    cursor c is select * from t1 where x != 0 FOR UPDATE;
  3  begin
  4    open c;
  5    insert into t2 select * from t1 where x != 0;
  6    dbms_output.put_line(sql%rowcount);               
  7    delete from t1 where x != 0;
  8    dbms_output.put_line(sql%rowcount);
  9    commit;
 10    close c;
 11  end;
 12  /
10000
10100

PL/SQL procedure successfully completed.

SQL>
SQL> select count(*) from t1;

  COUNT(*)
----------
         0

1 row selected.

SQL> select count(*) from t2;

  COUNT(*)
----------
     10000

1 row selected.

As you can see, that has not yielded any benefit, because a FOR UPDATE lock applies only to rows that already exist.  The new rows arriving from a separate session are unhindered, and hence my DELETE statement still picks up the new rows when it should not. I could work around this by locking the entire table (see the sketch below), but what if my filtering condition matched only a small percentage of the rows?  It seems overly aggressive to lock everyone out of the table just to copy a small subset of rows.  In this case, I can use PL/SQL to help me out. Using the BULK COLLECT and FORALL (bulk bind) facilities, I can restrict my DML operations to just the ROWIDs of the rows I am copying. 
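
For completeness, a hedged sketch of that heavy-handed table-lock variant.  Once the exclusive lock is granted, no other session can add rows to T1 until we commit, so the DELETE can no longer see rows the INSERT did not – but at the cost of blocking all concurrent DML on the table:

SQL> begin
  2    lock table t1 in exclusive mode;
  3    insert into t2 select * from t1 where x != 0;
  4    delete from t1 where x != 0;
  5    commit;
  6  end;
  7  /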


SQL> create table t1 ( x int, y int, z int );

Table created.

SQL> create table t2 as select * from t1 where 1=0;

Table created.

SQL>
SQL> insert into t1
  2  select rownum, rownum, rownum
  3  from dual connect by level <= 10000;

10000 rows created.

SQL> commit;

Commit complete.

SQL> set serverout on
SQL> declare
  2    cursor c is select rowid rid, t1.* from t1 where x != 0 for update;
  3    type array is table of c%rowtype index by pls_integer;
  4    l_rows array;
  5    l_idx int;
  6  begin
  7    open c;
  8    loop
  9      fetch c bulk collect into l_rows limit 20000;
 10      exit when l_rows.count = 0;
 11
 12      forall i in 1 .. l_rows.count
 13         insert into t2 values (
 14            l_rows(i).x,
 15            l_rows(i).y,
 16            l_rows(i).z );
 17      dbms_output.put_line(sql%rowcount);       
 18      forall i in 1 .. l_rows.count
 19        delete from t1
 20        where rowid = l_rows(i).rid;
 21      dbms_output.put_line(sql%rowcount);
 22    end loop;
 23  end;
 24  /
10000
10000

PL/SQL procedure successfully completed.

SQL>
SQL>
SQL> select count(*) from t1;

  COUNT(*)
----------
       147

1 row selected.

SQL> select count(*) from t2;

  COUNT(*)
----------
     10000

1 row selected.

I am collecting the 10,000 ROWIDs of interest into an array, and then deleting only those rows.  So even though new rows are arriving, I will never see them, because they will not be in the set of ROWIDs that I am moving to table T2.  We can see that during the operation, 147 new rows arrived in T1 from other sessions.

(Note: normally you would choose a bulk collect LIMIT of perhaps 1000 or similar, and loop around repeatedly to avoid excessive PGA use.  I bulk collected the entire set here to show the rowcounts more intuitively.)

You might be thinking that perhaps we no longer need the “FOR UPDATE” clause on the cursor, but it is still a required element.  Without it, it is possible that a different session could update a row in the candidate data set, and that updated row would still be deleted.  Or a different session could even delete a row, commit, and insert a fresh row that happens to pick up the same ROWID as one from the original set.

Bottom Line – keep concurrency front and foremost in your mind when you are writing code to manipulate data.

TO_DOG_YEAR

Some members of the Oracle community got well and truly into the April Fools’ Day spirit this year.

There were plenty of very earnest-looking blog posts about a new 18c function – “TO_DOG_YEAR”.  You can read their posts here:

http://www.oralytics.com/2018/04/predicting-ibs-in-dogs-using-oracle-18c.html 
https://blog.dbi-services.com/after-iot-iop-makes-its-way-to-the-database/
http://berxblog.blogspot.ie/2018/04/more-fun-with-ages.html
http://vanpupi.stepi.net/2018/04/01/exploring-18c-exadata-functions/

They even enlisted the help of the team here at AskTOM where they posed a question looking for more details here.

But naturally, it was important to get as many puns and hints into our answer as possible – did you spot them all?

 

[screenshot: the AskTOM answer]

Hybrid histograms

Just a quick post here so I could consolidate some information about histograms in 12c.

In my last Office Hours session, one of the questions posed was asking for an explanation of the new types of histograms in use in 12c.  So I had a few slides and described, conceptually and at a high level, how they work and what they represent.  If you’re new to 12c, or new to optimizer histograms in general, then take a look at the video below to get up to speed.  But for those who want to dig to a deeper level, I also recently saw two absolutely cracking articles which discuss the algorithms and implementation in much more detail, so I’ve linked them here as well.

http://www.oracle.com/technetwork/articles/database/histogram-construction-oracle-12c-4426845.html

http://www.oracle.com/technetwork/articles/database/maintenance-histograms-db-12c-4426850.html
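
And if you want to see which flavour of histogram you ended up with on your own data once statistics are gathered, the dictionary will tell you.  A hedged example against a hypothetical table T – the HISTOGRAM column will report values such as HYBRID or TOP-FREQUENCY for the new 12c types:

SQL> select column_name, num_buckets, histogram
  2  from   user_tab_col_statistics
  3  where  table_name = 'T';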

On that note… if you celebrate Easter, have a safe and happy break.  See you on the other side of the weekend!


Trip down memory lane

I did a little video for St Patrick’s Day, but it also brought back memories of my first computer experiences: a Sinclair ZX80, a Commodore 64, and many other machines that I thought were so cool for their time.

Feel free to share your experiences in comments.