Upgrade with no effort

We all know what upgrades mean.  Installation…Configuration…Patching…Testing…Deployment…Change approvals…Meetings…Long hours…Perhaps pulling an “overnighter” but hopefully never having to enact the dreaded “Backout Plan”.

Or…. that stuff can “just happen” without you worrying about it at all! 

I logged on to my Exadata Cloud Express database on the weekend, and lo and behold, it is not Oracle Database 12c anymore.  It is 12c Release 2 !

A look back through my email inbox showed that I had indeed been notified of the upcoming upgrade some time ago, but I was too lazy to pay attention to it 🙂


Now that’s the kind of upgrade I like !

Apex upgrade 4.2 to 5.0.2

Just a quick note for anyone upgrading Apex on their systems.

The installation (into a non-multitenant 12.1.0.2 instance) went through with no problems, but tracing the installation suggests it will flush the shared pool 6 times during installation/upgrade.

That might have some impact on other applications/sessions running on that database, so best to find a quiet time to do it.
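
If you want to gauge that impact yourself, a simple (and entirely optional) sanity check is to capture the shared pool free memory before and after the upgrade window – something along these lines:

-- optional before/after check of shared pool free memory around the APEX upgrade
select pool, name, round(bytes/1024/1024) mb
from   v$sgastat
where  pool = 'shared pool'
and    name = 'free memory';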

VirtualBox 5.0

I like it when stuff just “works”.

I’ve been using VirtualBox 4 for a long time now, and since my needs are relatively simple (i.e., run a VM!), I’ve not had any great cause to move to version 5.  But today I had a spare few minutes and thought “Well, it’s probably time”.

So the steps I followed were:

  • download
  • run the installer
  • start version 5

And .. well, all my stuff (aka all my existing VMs) worked just fine.  In fact, the only “error” I got was when I tried to start up too many of them at once and VirtualBox helpfully told me that I was either kidding myself or needed a better laptop.

This is how upgrades should be.  Simple and pain free.  Love it.

Downloads are here https://www.virtualbox.org/wiki/Downloads


Upgraded and can’t see the tables you could before ?

If you take a look at the “alter user” command in the old 9i documentation, you’ll see this:

DEFAULT ROLE Clause

Specify the roles granted by default to the user at logon. This clause can contain only roles that have been granted directly to the user with a GRANT statement. You cannot use the DEFAULT ROLE clause to enable:

  • Roles not granted to the user
  • Roles granted through other roles
  • Roles managed by an external service (such as the operating system), or by the Oracle Internet Directory

Oracle enables default roles at logon without requiring the user to specify their passwords.

And if you do a similar examination of the 10g documentation, it’s pretty much the same, with some more elaboration.

DEFAULT ROLE Clause

Specify the roles granted by default to the user at logon. This clause can contain only roles that have been granted directly to the user with a GRANT statement. You cannot use the DEFAULT ROLE clause to enable:

  • Roles not granted to the user
  • Roles granted through other roles
  • Roles managed by an external service (such as the operating system), or by the Oracle Internet Directory

Oracle Database enables default roles at logon without requiring the user to specify their passwords or otherwise be authenticated. If you have granted an application role to the user, you should use the DEFAULT ROLE ALL EXCEPT role clause to ensure that, in subsequent logons by the user, the role will not be enabled except by applications using the authorized package.

So let’s take a look at the 11.2 documentation for the same section:

DEFAULT ROLE Clause

Specify the roles granted by default to the user at logon. This clause can contain only roles that have been granted directly to the user with a GRANT statement, or roles created by the user with the CREATE ROLE privilege. You cannot use the DEFAULT ROLE clause to specify:

  • Roles not granted to the user
  • Roles granted through other roles
  • Roles managed by an external service (such as the operating system), or by the Oracle Internet Directory
  • Roles that are enabled by the SET ROLE statement, such as password-authenticated roles and secure application roles

Notice the new fourth bullet point, which changes the handling of default roles from previous versions. If a role is password protected then, even if it is set as a default role, it will not be enabled automatically. We can see this with a simple example:

SQL> conn / as sysdba
Connected.
SQL> create user NO_PRIVS identified by blah;

User created.

SQL> grant create session to NO_PRIVS;

Grant succeeded.

SQL> create role SUPER_DOOPER identified by SECRET_PASSWORD;

Role created.

SQL> create role DOES_NOTHING;

Role created.

SQL> grant delete any table, select any table to SUPER_DOOPER;

Grant succeeded.

SQL> grant SUPER_DOOPER to NO_PRIVS;

Grant succeeded.

SQL> grant DOES_NOTHING to NO_PRIVS;

Grant succeeded.

SQL> alter user NO_PRIVS default role all;

User altered.

SQL> conn NO_PRIVS/blah
Connected.
SQL> select count(*) from hr.employees;
select count(*) from hr.employees
                        *
ERROR at line 1:
ORA-00942: table or view does not exist


SQL> set role SUPER_DOOPER identified by SECRET_PASSWORD;

Role set.

SQL> select count(*) from hr.employees;

  COUNT(*)
----------
       107

SQL>
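
So if you have upgraded and suddenly “lost” access to tables you could see before, it is worth checking whether any of your default roles are password protected. A query along these lines (just a starting point – adapt as required) will list the candidates:

-- default roles that are password protected, and hence no longer auto-enabled at logon
select rp.grantee, rp.granted_role, r.password_required
from   dba_role_privs rp,
       dba_roles r
where  r.role = rp.granted_role
and    rp.default_role = 'YES'
and    r.password_required = 'YES';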

Cross-platform database migration

Last weekend we tackled one of the larger challenges that I’ve faced in my time at my current client: migrate multiple database systems, across multiple sites, all to new hardware (with a new endian format) and a new database version (12.1.0.2)… and of course, try to do it with as small a disruption to the service as possible.

We are not a Goldengate customer, so pursuing a zero-downtime migration was not considered, and to be honest, even if we had such facilities at our disposal, I think we would still have taken a small outage just to allow a “quiet time” to do verification checks etc.

I had done a similar exercise many years ago, with a client moving from 9i to 10g.  In those days, a low-downtime migration was a massively complex affair.  I remember countless SQL and shell scripts – multiple physical networks, multiple concurrent jobs pulling tables over db links, concurrent index builds, and so on – all designed to keep that outage time to a minimum.  High-risk days indeed.

In contrast, using cross-platform transportable tablespaces, this current migration ran very smoothly indeed.  Below is a rough guide to what is needed to perform such a migration – I didn’t want to list our exact steps, because people will mistakenly treat that as the definitive prescription for how to do it.  This is more of an overview, which will get you started, but hopefully you’ll be doing your own thorough planning and testing when the time comes !

Firstly, you create a fresh new database on your target system.  It will just have the usual tablespaces (SYSTEM, SYSAUX, etc).  It will be the recipient of all of the non-default tablespaces that we will be transporting over from the source system.
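
Before going any further, it is worth confirming the platform name and endian format of both source and target, because you will need the source platform name for the RMAN convert step later. A quick check on each system:

-- platform name and endian format of the current database
select d.platform_name, tp.endian_format
from   v$database d,
       v$transportable_platform tp
where  tp.platform_name = d.platform_name;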

On the source system, first you’ll need a datapump of all the object definitions, without unloading any data.

expdp ... dumpfile=metadata_full.dmp full=y content=metadata_only exclude=user,role,role_grant,profile exclude=table_statistics exclude=index_statistics

and for each datafile, you need an image copy using RMAN, for example:

backup as copy tag 'my_tablespace' datafile 50,60,61 format '/backup_dest/file%f.bkp';

And I do this for all of the user tablespaces, i.e., not those that would be created as part of a fresh db install.  Note: ensure compression is turned off for all RMAN parts of this process.
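
Rather than hand-typing a backup command per tablespace, you could generate them from the dictionary. Here is a rough helper along those lines – the exclusion list and destination path are just placeholders, and LISTAGG assumes the source is at least 11.2:

-- generate one "backup as copy" command per user tablespace
select 'backup as copy tag '''||tablespace_name||''' datafile '||
       listagg(file_id, ',') within group (order by file_id)||
       ' format ''/backup_dest/file%f.bkp'';' rman_cmd
from   dba_data_files
where  tablespace_name not in ('SYSTEM','SYSAUX','USERS','UNDOTBS1')
group  by tablespace_name;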

Notice I excluded statistics in the datapump.  Whilst datapump can also unload the optimizer stats for you, we found impdp to be very slow at bringing those stats back into the target system, so we took control of that ourselves:

exec dbms_stats.create_stat_table(user,'STATS'); 
exec dbms_stats.export_database_stats('STATS','WHOLE_DB',user); 

Now – so far, our source system is still running.  There has been no outage or disruption to service.  Those datafile copy backups we took now need to be converted to the endian format of the target system, once again using RMAN, with a convert command for each file.

convert from platform 'AIX-Based Systems (64-bit)' datafile '/backup_dest/file50.bkp'  format '/target_dest/file50.dbf'; 

So now we have a converted set of files, but of course, they are a “fuzzy” copy of the data since the copies were taken with the source system still active.  This is where the nice part of cross-platform migration comes in.  I can regularly take incrementals from my source system, and convert/apply them to the freshly converted datafile copies.  Thus I can keep my target system up to date with my source system, without taking my source system offline.

So I can find the base checkpoint# across all the files, and take an incremental of each tablespace since that point in time:

select MIN(checkpoint_change#) from v$datafile; 

backup incremental from scn <mincheckpoint> tablespace 'MY_TSPACE' format  '/backup_dest/%U'; 

The incrementals can then be converted/applied to the target system using a little bit of PL/SQL:

DECLARE 
  d  varchar2(512); 
  h  varchar2(512) ; 
  t  varchar2(30) ; 
  b1 boolean ; 
  b2 boolean ;

BEGIN 
  -- allocate a disk device and prepare to apply an incremental to a datafile copy
  d := sys.dbms_backup_restore.deviceAllocate; 
  sys.dbms_backup_restore.applysetdatafile(check_logical=>false, cleanup=>false);

  -- the converted datafile copy on the target that the incremental will be applied to
  sys.dbms_backup_restore.applyDatafileTo( 
     dfnumber=>50, 
     toname =>'/target_dest/file50.dbf', 
     fuzziness_hint=>0, 
     max_corrupt =>0, 
     islevel0=>0, 
     recid=>0, 
     stamp=>0);

  -- the incremental backup piece taken on the source system
  sys.dbms_backup_restore.restoreSetPiece( 
     handle=>'/backup_dest/incr12345', 
     tag=>null, 
     fromdisk=>true, 
     recid=>0, 
     stamp=>0);

  -- apply the piece; b1 and b2 receive the "done" and "failover" status
  sys.dbms_backup_restore.restoreBackupPiece( 
    done=>b1, 
    params=>null, 
    outhandle=>h, 
    outtag=>t, 
    failover=>b2);

  sys.dbms_backup_restore.restoreCancel(TRUE); 
  sys.dbms_backup_restore.deviceDeallocate;

END; 
/
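
The handle passed to restoreSetPiece is simply the name of the incremental backup piece you took on the source. If you need to look those up, something like this on the source (the path is a placeholder) lists the pieces and when they completed:

-- list the incremental backup pieces to feed into restoreSetPiece above
select bp.handle, bp.tag, bp.completion_time
from   v$backup_piece bp
where  bp.handle like '/backup_dest/%'
order  by bp.completion_time;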

 

This process can be repeated indefinitely until you are ready to cut over to the new system.  When that time comes, the process is similar.

Your source system now must go read-only (and hence this is most likely the commencement of your outage):

begin 
for i in ( 
select tablespace_name from dba_tablespaces 
where tablespace_name not in ('SYSTEM','SYSAUX') and contents = 'PERMANENT' 
) 
loop 
  execute immediate 'alter tablespace '||i.tablespace_name||' read only'; 
end loop; 
end; 
/

You now take a datapump of the tablespaces you are going to transport over.  We are omitting the statistics, because we already have them.

expdp ... exclude=table_statistics exclude=index_statistics dumpfile=transport.dmp transport_full_check=no transport_tablespaces=tspace1,tspace2,... 

And you take a final incremental backup, and apply it to the target system.  Now all is in readiness.

Create all the required users by datapump-ing across the network

impdp ... network_link=source_db_link full=y include=user,role,role_grant,profile 

and then import the transportable tablespace definitions

impdp dumpfile=transport.dmp transport_datafiles='/target_dest/file10.dbf','/target_dest/file11.dbf','/target_dest/file12.dbf',... 

And then set your tablespaces to read-write

begin 
for i in ( 
select tablespace_name from dba_tablespaces 
where tablespace_name not in ('SYSTEM','SYSAUX') and contents = 'PERMANENT' and status = 'READ ONLY' 
) 
loop 
  execute immediate 'alter tablespace '||i.tablespace_name||' read write'; 
end loop; 
end; 
/

And bring in all of the other bits and pieces (views, PL/SQL, etc etc) using the metadata dump you took earlier

impdp ... dumpfile=metadata_full.dmp full=y exclude=table_statistics exclude=index_statistics

and voila…you will have a migrated database with minimal downtime.  In our case, the transportable datapump export/import, plus the last incremental (thanks to block change tracking), was around 30mins, and the import of all of our PL/SQL etc around 30mins as well.  Not too bad for multiple multi-terabyte databases.

We then upgraded our stats table, used dbms_stats to import the statistics, and started verification testing.
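
In dbms_stats terms, that amounted to roughly the following on the target, mirroring the export shown earlier (same 'STATS' table name):

exec dbms_stats.upgrade_stat_table(user,'STATS'); 
exec dbms_stats.import_database_stats('STATS','WHOLE_DB',user); 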

That covers the broad steps you’ll be taking.  There’s plenty of information on MOS and on OTN about the nitty gritty, but all in all, we were very happy with the outcome.

Edition based redefinition – an apology

In April 2008 (wow, does time fly!) I used the following picture in my “11g features for Developers” presentation at the Australian Oracle User Group conference.

[image from the presentation]

I think the picture is from the movie “Indiana Jones and the Last Crusade”… where they sought the Holy Grail.

I used the picture because I said that Edition Based Redefinition (EBR) was the Holy Grail of 24/7 Oracle-based applications.  Like most Oracle presentations on EBR at the time (and since), I then presented several examples of evolving an application without service interruption.  It’s not a trivial task – there are editions to manage, special editioning views to be created, forward and reverse cross-edition triggers potentially required… but if the quest is no service interruption, then editions are the way to go.

So… has the Oracle community embraced editions ?  Well… to my knowledge, it has not been a runaway success.  Perhaps it’s the complexity?  Or perhaps for many customers, there simply isn’t the need to be strictly 24/7 in terms of application availability.  Easier to just take a small outage, deploy the code, and resume.  I’m sure people have their reasons – but maybe some of those reasons are due to Oracle practitioners like myself always talking about EBR from an application continuity perspective.

Which is why I come offering an apology 🙂  Like everyone else, I’ve always approached EBR from a view of application continuity.  But as I was creating a standard database trigger at work recently, I realised that stance was doing EBR a disservice, because it’s not necessarily about continuity and version control at all.  Here’s why:

Since 11g, you have been able to create triggers in an initial status of DISABLED.  That is very very cool.  Why ?  Because if you are creating a trigger, and for some reason that trigger will not compile successfully (eg, something as simple as a syntactical error or something less obvious, such as a privilege missing on an object that the trigger references) – then that error is a very very public one.  If the trigger is created ENABLED, but does not compile, then the table that the trigger is on is effectively broken, because all DML on that table will return an error.  11g fixed this problem by allowing the initial state for the trigger to be DISABLED.  So if the trigger does not compile, there is no damage done.  The problem can be resolved without impacting DML on the table.
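
As a refresher, here is a minimal sketch of that feature (the trigger, table and column names are made up) – create the trigger DISABLED, and only enable it once you know it compiles cleanly:

create or replace trigger TRG_AUDIT_DEMO    -- hypothetical name
before insert on MY_TABLE                   -- hypothetical table
for each row
disable
begin
  :new.created_date := sysdate;             -- hypothetical column
end;
/

-- no compilation errors ?  then it is safe to switch it on
alter trigger TRG_AUDIT_DEMO enable;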

Which brings me back to EBR.  Even if you don’t care about application continuity, and even if you are deploying your application changes during a scheduled outage, it dawned on me that EBR takes the disabled trigger metaphor and extends it to other database objects.  Consider now how safe your deployments could be if you did this:

  • create a new edition,
  • (If needed) create editioning views representing the target state of your new tables,
  • compile all of your code changes into the new edition,
  • drop the edition

Whoa there!!!….drop the edition ?  What’s all that about?  Well…I’m presuming that you are currently not using EBR because there is something about it that makes you uncomfortable. And that’s fine – this post is not trying to dissuade you from your current stance.  My point is that even in this instance, you can be using EBR to test your deployments on the target database that your true deployment will be on.  How incredibly cool is that !?  Imagine the confidence you will have if you can roll out all of your changes in advance in deployment “test mode”.  You’ll catch any errors that may have had you in a panic and/or considering a backout if they had occurred during the true deployment.
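
To make that concrete, a trial deployment might look something like the sketch below. All of the names are hypothetical, and it assumes the schema owner has already been editions-enabled:

-- as a suitably privileged user (CREATE ANY EDITION)
create edition DEPLOY_TRIAL;
grant use on edition DEPLOY_TRIAL to SCOTT;

-- as SCOTT: switch the session into the new edition and compile the changes
alter session set edition = DEPLOY_TRIAL;

create or replace procedure MY_NEW_API as   -- the "new" version of some code
begin
  null;
end;
/

-- everything compiled cleanly ?  step back out and discard the trial
alter session set edition = ORA$BASE;
drop edition DEPLOY_TRIAL cascade;          -- needs DROP ANY EDITION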

And once you start doing “trial deployments” in this way…well, who knows, you might end up deciding to simply stay on the new edition, and voila – you’ve begun the journey toward application continuity in an easy and safe manner.

So here’s my new succinct summary on EBR:

If you have not enabled EBR on your database, you are not being the sharpest knife in the drawer 🙂

Happy Easter.

Upgrade to 12c … credentials

We did a “real” upgrade to 12c this weekend, where “real” means a production system, as opposed to my laptop, a play VM etc etc 🙂

It all went relatively smoothly except for one interesting thing, which I can’t say with 100% certainty was caused by the upgrade, but it would appear to be the case.

After the upgrade, our scheduler external jobs started failing. A quick look in the alert log revealed:

Sun Jun 29 09:26:11 2014
ORA-12012: error on auto execute of job "FIRE_SCHED_DAILY"
ORA-27496: credential "LOCAL_OS_ACCT" is disabled

So it’s possible (not proven) that upgrading to 12c might disable credentials. In this particular case, the database went from standalone to a pluggable database.

The remedy was to simply drop and recreate the credential.
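
In DBMS_CREDENTIAL terms, that boils down to something like this (the OS account name and password below are placeholders); depending on the circumstances, ENABLE_CREDENTIAL alone might also do the trick:

exec dbms_credential.drop_credential('LOCAL_OS_ACCT', force=>true)
exec dbms_credential.create_credential('LOCAL_OS_ACCT','os_account','os_password')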