18c–If you can’t wait

You’ve seen the tweet!


but perhaps the accompanying blog post has tempered your enthusiasm.


You might be thinking:

“I’m not on Exadata – how can I play with 18c?”

Well, there is still a way to get ahead of the game and skill up on 18c.  We’ve upgraded livesql.oracle.com to 18c, and we’ve pre-loaded some demo scripts for you as well.
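If you want to double-check what you’re running against once you’re logged in, a quick query along these lines (just a sanity check, and assuming the Live SQL environment lets you query it) will show the release:

select banner from v$version;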


So get cracking! Oracle Database 18c builds upon the foundation set by 12.2.

Enjoy!

 

Oracle Database 18c

Yup…it’s arrived!

New name obviously, because we’ve jumped to our new naming model to align with the calendar year as opposed to version number.  You might be thinking “So what?” but it’s a significant change in the way we get software to customers.  Mike Dietrich blogged about this at length here so I won’t repeat what has been said, but in TL;DR form:

More frequent releases, with smaller amounts of change per release

In this way, the approach lets us focus more on solidifying existing features, and respond quickly to bugs that arise.

So 18c is more of an incremental release on 12.2 (in fact, internally it was referred to as “12.2.0.2” for most of its build cycle), focussed on stability and hardening of existing features.

Don’t worry, we still managed to pack some cool new things in there – which you can read about in the New Features guide.

I’ll be doing some blog posts and videos on the 18c content soon, but here’s something to whet your appetite.

Upgrade with no effort

We all know what upgrades mean.  Installation…Configuration…Patching…Testing…Deployment…Change approvals…Meetings…Long hours…Perhaps pulling an “overnighter” but hopefully never having to enact the dreaded “Backout Plan”.

Or… that stuff can “just happen” without you worrying about it at all!

I logged on to my Exadata Cloud Express database on the weekend, and lo and behold, it is not Oracle Database 12c anymore.  It is 12c Release 2!

A look back through my email inbox showed that I had indeed been notified of the upcoming upgrade some time ago, but I was too lazy to pay attention to it.


Now that’s the kind of upgrade I like!

Apex upgrade 4.2 to 5.0.2

Just a quick note for anyone upgrading Apex on their systems.

The installation (into a non-multitenant 12.1.0.2 instance) went through with no problems, but tracing the installation suggests it will flush the shared pool 6 times during installation/upgrade.

That might have some impact on other applications/sessions running on that database, so best to find a quiet time to do it.
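If you want a rough feel for what those flushes might cost on your system before scheduling the upgrade, a query along these lines gives an indication of how much parsed SQL is currently cached (purely indicative, and nothing to do with the APEX installer itself):

select count(*) cached_cursors,
       round(sum(sharable_mem)/1024/1024) approx_mb
from   v$sql;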

VirtualBox 5.0

I like it when stuff just “works”.

I’ve been using VirtualBox 4 for a long time now, and since my needs are relatively simple (ie, run a VM!) I’ve not had any great cause to move to version 5.  But today I had a spare few minutes and thought “Well, it’s probably time”.

So the steps I followed were:

  • download
  • run the installer
  • start version 5

And… well, all my stuff (aka all my existing VMs) worked just fine.  In fact, the only “error” I got was when I tried to start up too many of them at once and VirtualBox helpfully told me that I was either kidding myself or needed a better laptop.

This is how upgrades should be.  Simple and pain free.  Love it.

Downloads are here: https://www.virtualbox.org/wiki/Downloads


Upgraded and can’t see the tables you could before?

If you take a look at the “alter user” command in the old 9i documentation, you’ll see this:

DEFAULT ROLE Clause

Specify the roles granted by default to the user at logon. This clause can contain only roles that have been granted directly to the user with a GRANT statement. You cannot use the DEFAULT ROLE clause to enable:

  • Roles not granted to the user
  • Roles granted through other roles
  • Roles managed by an external service (such as the operating system), or by the Oracle Internet Directory

Oracle enables default roles at logon without requiring the user to specify their passwords.

And if you do a similar examination of the 10g documentation, it’s pretty much the same, with some more elaboration.

DEFAULT ROLE Clause

Specify the roles granted by default to the user at logon. This clause can contain only roles that have been granted directly to the user with a GRANT statement. You cannot use the DEFAULT ROLE clause to enable:

  • Roles not granted to the user
  • Roles granted through other roles
  • Roles managed by an external service (such as the operating system), or by the Oracle Internet Directory

Oracle Database enables default roles at logon without requiring the user to specify their passwords or otherwise be authenticated. If you have granted an application role to the user, you should use the DEFAULT ROLE ALL EXCEPT role clause to ensure that, in subsequent logons by the user, the role will not be enabled except by applications using the authorized package.

So let’s take a look at the 11.2 documentation for the same section:

DEFAULT ROLE Clause

Specify the roles granted by default to the user at logon. This clause can contain only roles that have been granted directly to the user with a GRANT statement, or roles created by the user with the CREATE ROLE privilege. You cannot use the DEFAULT ROLE clause to specify:

  • Roles not granted to the user
  • Roles granted through other roles
  • Roles managed by an external service (such as the operating system), or by the Oracle Internet Directory
  • Roles that are enabled by the SET ROLE statement, such as password-authenticated roles and secure application roles

Notice the new 4th bullet point, which changes the handling of default roles from previous versions. If a role is password protected, then even if it is set as a default role, it will not be enabled automatically at logon. We can see this with a simple example:

SQL> conn / as sysdba
Connected.

SQL> create user NO_PRIVS identified by blah;

User created.

SQL> grant create session to NO_PRIVS;

Grant succeeded.

SQL> create role SUPER_DOOPER identified by SECRET_PASSWORD;

Role created.

SQL> create role DOES_NOTHING;

Role created.

SQL> grant delete any table, select any table to SUPER_DOOPER;

Grant succeeded.

SQL> grant SUPER_DOOPER to NO_PRIVS;

Grant succeeded.

SQL> grant DOES_NOTHING to NO_PRIVS;

Grant succeeded.

SQL> alter user NO_PRIVS default role all;

User altered.

SQL> conn NO_PRIVS/blah
Connected.

SQL> select count(*) from hr.employees;
select count(*) from hr.employees
                        *
ERROR at line 1:
ORA-00942: table or view does not exist

SQL> set role SUPER_DOOPER identified by SECRET_PASSWORD;

Role set.

SQL> select count(*) from hr.employees;

  COUNT(*)
----------
       107

SQL>
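If the password protection on the role isn’t actually a requirement in your environment, one way of getting the old behaviour back (a sketch only, and obviously check with your security folks first) is simply to remove the password from the role:

SQL> conn / as sysdba
Connected.

SQL> alter role SUPER_DOOPER not identified;

Role altered.

Once the role is no longer password protected, a fresh logon by NO_PRIVS will have it enabled automatically as a default role. Otherwise, keep the password and have the application issue the SET ROLE call (or use a secure application role) as the documentation suggests.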

Cross-platform database migration

Last weekend, we faced one of the larger challenges of my time at my current client.  Migrate multiple database systems, across multiple sites, all to new hardware (with a new endian format) and a new database version (12.1.0.2)…and of course, try to do it with as small a disruption to the service as possible.

We are not a GoldenGate customer, so pursuing a zero-downtime migration was not considered, and to be honest, even if we had such facilities at our disposal, I think we would still have taken a small outage just to allow a “quiet time” to do verification checks etc.

I had done a similar exercise many years ago, with a client moving from 9i to 10g.  In those days, migration with low downtime was a massively complex affair.  I remember countless SQL and shell scripts, all designed to do as much work concurrently as possible (multiple physical networks, multiple concurrent jobs pulling tables over db links, concurrent index builds, etc), all with the aim of keeping that outage time to a minimum.  High risk days indeed.

In contrast, using cross-platform transportable tablespaces, this current migration ran very smoothly indeed.  Below is a rough guide as to what is needed to perform such a migration – I didn’t want to list precisely our steps, because people will mistakenly treat that as the definitive prescription for how to do it.  This is more of an overview, which will get you started, but hopefully you’ll be doing your own thorough planning and testing when the time comes!

Firstly, you create a fresh new database on your target system.  It will just have the usual tablespaces (SYSTEM, SYSAUX, etc).  It will be the recipient of all of the non-default tablespaces we will be transporting over from our source system.

On the source system, first you’ll need a datapump of all the object definitions, without unloading any data.

expdp ... dumpfile=metadata_full.dmp full=y content=metadata_only exclude=user,role,role_grant,profile exclude=table_statistics exclude=index_statistics

and for each datafile, you need an image copy using RMAN, for example:

backup as copy tag 'my_tablespace' datafile 50,60,61 format '/backup_dest/file%f.bkp';

And I do this for all of the user tablespaces, ie, not those that would be created as part of a fresh db install.  Note: Ensure compression is turned off for all RMAN parts of this process.
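If you’re not sure whether compression is in play, it is worth checking the RMAN configuration before you start (a quick sketch; your own settings will obviously differ):

RMAN> show device type;
RMAN> show compression algorithm;

If the disk device type is configured for COMPRESSED BACKUPSET, switch it back to a plain BACKUPSET for the duration of the migration.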

Notice I excluded statistics in the datapump.  Whilst datapump can also unload the optimizer stats for you, we found impdp to be very slow at bringing those stats back into the target system, so we took control of that ourselves:

exec dbms_stats.create_stat_table(user,'STATS'); 
exec dbms_stats.export_database_stats('STATS','WHOLE_DB',user);

Now – so far, our source system is still running.  There has been no outage or disruption to service.  Those datafile copy backups we took now need to be converted to the endian format of the target system, once again using RMAN, with a convert command for each file.

convert from platform 'AIX-Based Systems (64-bit)' datafile '/backup_dest/file50.bkp'  format '/target_dest/file50.dbf'; 
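If you’re not sure of the exact platform string to use in the convert command, it can be taken straight from the database rather than guessed (run on either system):

select platform_name, endian_format
from   v$transportable_platform
order  by platform_name;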

So now we have a converted set of files, but of course, they are a “fuzzy” copy of the data since the copies were taken with the source system still active.  This is where the nice part of cross-platform migration comes in.  I can regularly take incrementals from my source system, and convert/apply them to the freshly converted datafile copies.  Thus I can keep my target system up to date with my source system, without taking my source system offline.

So I can find the base checkpoint# from all files, and take an incremental of each tablespace since that point in time

select MIN(checkpoint_change#) from v$datafile; 

backup incremental from scn <mincheckpoint> tablespace 'MY_TSPACE' format  '/backup_dest/%U'; 
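These repeated incrementals are also where block change tracking (which I mention again later) earns its keep on the source system.  If it isn’t already enabled there, it is a one-liner (the file name below is just a made-up example):

alter database enable block change tracking using file '/source_dest/bct.chg';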

The incrementals can then be converted/applied to the target system using a little bit of PL/SQL:

DECLARE
  d  varchar2(512);
  h  varchar2(512);
  t  varchar2(30);
  b1 boolean;
  b2 boolean;
BEGIN
  -- allocate a disk channel for the restore/apply
  d := sys.dbms_backup_restore.deviceAllocate;
  sys.dbms_backup_restore.applysetdatafile(check_logical=>false, cleanup=>false);

  -- nominate the converted datafile copy that the incremental will be applied to
  sys.dbms_backup_restore.applyDatafileTo(
     dfnumber=>50,
     toname=>'/target_dest/file50.dbf',
     fuzziness_hint=>0,
     max_corrupt=>0,
     islevel0=>0,
     recid=>0,
     stamp=>0);

  -- nominate the incremental backup piece taken on the source
  sys.dbms_backup_restore.restoreSetPiece(
     handle=>'/backup_dest/incr12345',
     tag=>null,
     fromdisk=>true,
     recid=>0,
     stamp=>0);

  -- apply the incremental to the datafile copy
  sys.dbms_backup_restore.restoreBackupPiece(
    done=>b1,
    params=>null,
    outhandle=>h,
    outtag=>t,
    failover=>b2);

  sys.dbms_backup_restore.restoreCancel(TRUE);
  sys.dbms_backup_restore.deviceDeallocate;
END;
/

 

This process can be repeated indefinitely until you are ready to cut over to the new system.  When that time comes, the process is similar.

Your source system now must go read-only (and hence this is most likely the commencement of your outage):

begin 
for i in ( 
select tablespace_name from dba_tablespaces 
where tablespace_name not in ('SYSTEM','SYSAUX') and contents = 'PERMANENT' 
) 
loop 
  execute immediate 'alter tablespace '||i.tablespace_name||' read only'; 
end loop; 
end; 
/

You now take a datapump of the tablespaces you are going to transport over.  We are omitting the statistics, because we already have them.

expdp ... exclude=table_statistics exclude=index_statistics dumpfile=transport.dmp transport_full_check=no transport_tablespaces=tspace1,tspace2,... 

And you take a final incremental backup, and apply it to the target system.  Now all is in readiness.

Create all the required users by datapump-ing across the network

impdp ... network_link=source_db_link full=y include=user,role,role_grant,profile 

and then import the transportable tablespace definitions

impdp dumpfile=transport.dmp transport_datafiles='/target_dest/file10.dbf','/target_dest/file11.dbf','/target_dest/file12.dbf',... 

And then set your tablespaces to read-write

begin 
for i in ( 
select tablespace_name from dba_tablespaces 
where tablespace_name not in ('SYSTEM','SYSAUX') and contents = 'PERMANENT' and status = 'READ ONLY' 
) 
loop 
  execute immediate 'alter tablespace '||i.tablespace_name||' read write'; 
end loop; 
end; 
/
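At this point a quick sanity check that everything has been plugged in and is writable doesn’t hurt (just an indicative query; your own verification will no doubt be more thorough):

select tablespace_name, status, contents
from   dba_tablespaces
order  by tablespace_name;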

And bring in all of the other bits and pieces (views, PL/SQL, etc etc) using the metadata dump you took earlier

impdp ... dumpfile=metadata_full.dmp full=y exclude=table_statistics exclude=index_statistics

and voila…you will have a migrated database with minimal downtime.  In our case, the transportable datapump export/import, plus the last incremental (thanks to block change tracking), took around 30 minutes, and the import of all of our PL/SQL etc took around 30 minutes as well.  Not too bad for multiple multi-terabyte databases.

We then upgraded our stats table, and used dbms_stats to import the statistics, and started verification testing.
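In case it’s useful, that stats step is just the reverse of the export we did on the source, using the same 'STATS' table name as earlier (upgrade it first, since it came from an older version):

exec dbms_stats.upgrade_stat_table(user,'STATS');
exec dbms_stats.import_database_stats('STATS','WHOLE_DB',user);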

That covers the broad steps you’ll be taking.  There’s plenty of information on MOS and on OTN about the nitty gritty, but all in all, we were very happy with the outcome.