Cross-platform database migration

Last weekend, we faced one of the larger challenges that we’ve faced in my time at my current client: migrate multiple database systems, across multiple sites, all to new hardware (with a new endian format) and a new database version… and of course, try to do it with as small a disruption to the service as possible.

We are not a GoldenGate customer, so pursuing a zero-downtime migration was not considered, and to be honest, even if we had such facilities at our disposal, I think we would still have taken a small outage just to allow a “quiet time” to do verification checks etc.

I had done a similar exercise many years ago, with a client moving from 9i to 10g.  In those days, migration with low downtime was a massively complex affair.  I remember countless SQL and shell scripts, all designed to do as much work concurrently as possible (multiple physical networks, multiple concurrent jobs pulling tables over db links, concurrent index builds, etc etc etc), all designed to keep that outage time to a minimum.  High risk days indeed.

In contrast, using cross-platform transportable tablespaces, this current migration ran very smoothly indeed.  Below is a rough guide as to what is needed to perform such a migration – I didn’t want to list precisely our steps, because people will mistakenly treat that as the definitive prescription for how to do it.  This is more of an overview, which will get you started, but hopefully you’ll be doing your own thorough planning and testing when the time comes!

Firstly, you create a fresh new database on your target system.  It will just have the usual tablespaces (SYSTEM, SYSAUX, etc).  It will be the recipient of all of the non-default tablespaces that we will be transporting over from the source system.

On the source system, first you’ll need a datapump of all the object definitions, without unloading any data.

expdp ... dumpfile=metadata_full.dmp full=y content=metadata_only exclude=user,role,role_grant,profile exclude=table_statistics exclude=index_statistics

and for each datafile, you need an imagecopy using RMAN, for example:

backup as copy tag 'my_tablespace' datafile 50,60,61 format '/backup_dest/file%f.bkp';

And I do this for all of the user tablespaces, i.e., not those that would be created as part of a fresh db install.  Note: ensure compression is turned off for all RMAN parts of this process.
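On the RMAN side, that means checking your configured defaults aren’t silently compressing things – something along these lines (exact configuration will vary per site, so treat this as a sketch):

```text
RMAN> show all;
-- make sure the disk default is NOT "compressed backupset"
RMAN> configure device type disk backup type to backupset;
```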

Notice I excluded statistics in the datapump.  Whilst datapump can also unload the optimizer stats for you, we found impdp to be very slow at bringing those stats back into the target system, so we took control of that ourselves:

exec dbms_stats.create_stat_table(user,'STATS'); 
exec dbms_stats.export_database_stats('STATS','WHOLE_DB',user); 

Now – so far, our source system is still running.  There has been no outage or disruption to service.  Those datafile copy backups we took now need to be converted to the endian format of the target system, once again using RMAN, with a convert command for each file.

convert from platform 'AIX-Based Systems (64-bit)' datafile '/backup_dest/file50.bkp'  format '/target_dest/file50.dbf'; 
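If you’re unsure of the exact platform name strings, they (and their endian formats) can be listed on either system:

```sql
select platform_name, endian_format
from   v$transportable_platform
order  by platform_name;
```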

So now we have a converted set of files, but of course, they are a “fuzzy” copy of the data since the copies were taken with the source system still active.  This is where the nice part of cross-platform migration comes in.  I can regularly take incrementals from my source system, and convert/apply them to the freshly converted datafile copies.  Thus I can keep my target system up to date with my source system, without taking my source system offline.

So I can find the base checkpoint# across all files, and take an incremental of each tablespace since that point in time:

select MIN(checkpoint_change#) from v$datafile; 

backup incremental from scn <mincheckpoint> tablespace 'MY_TSPACE' format  '/backup_dest/%U'; 
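A quick aside – what keeps these repeated incrementals small and fast is block change tracking, which can be enabled on the source well in advance (the file location here is just an example):

```sql
alter database enable block change tracking
  using file '/u01/oradata/MYDB/bct.chg';
```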

The incrementals can then be converted/applied to the target system using a little bit of PL/SQL.  The full scripts ship with MOS note 1389592.1 – the skeleton below (file number, paths and backup piece name are just examples) applies one incremental piece to one converted datafile:

declare 
  d  varchar2(512); 
  h  varchar2(512); 
  t  varchar2(30); 
  b1 boolean; 
  b2 boolean; 
begin 
  d := sys.dbms_backup_restore.deviceAllocate; 
  sys.dbms_backup_restore.applySetDatafile(check_logical=>false, cleanup=>false); 
  sys.dbms_backup_restore.applyDatafileTo( 
     dfnumber=>50, 
     toname =>'/target_dest/file50.dbf', 
     fuzziness_hint=>0, 
     max_corrupt =>0, 
     islevel0=>0, recid=>0, stamp=>0); 
  sys.dbms_backup_restore.restoreSetPiece( 
     handle=>'/backup_dest/incr_piece', 
     tag=>null, fromdisk=>true, recid=>0, stamp=>0); 
  sys.dbms_backup_restore.restoreBackupPiece( 
     done=>b1, params=>null, outhandle=>h, outtag=>t, failover=>b2); 
  sys.dbms_backup_restore.restoreCancel(true); 
  sys.dbms_backup_restore.deviceDeallocate; 
end; 
/

This process can be repeated indefinitely until you are ready to cutover to the new system.  When that time comes, the process is similar.

Your source system now must go read-only (and hence this is most likely the commencement of your outage):

begin 
  for i in ( select tablespace_name 
             from   dba_tablespaces 
             where  tablespace_name not in ('SYSTEM','SYSAUX') 
             and    contents = 'PERMANENT' ) 
  loop 
    execute immediate 'alter tablespace '||i.tablespace_name||' read only'; 
  end loop; 
end; 
/

You now take a datapump of the tablespaces you are going to transport over.  We are omitting the statistics, because we already have them.

expdp ... exclude=table_statistics exclude=index_statistics dumpfile=transport.dmp transport_full_check=no transport_tablespaces=tspace1,tspace2,... 
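Since transport_full_check=no skips the deeper containment validation, a sensible pre-flight check beforehand is to verify the tablespace set is self-contained (tablespace names here are examples):

```sql
exec dbms_tts.transport_set_check('TSPACE1,TSPACE2', true);
select * from transport_set_violations;
```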

And you take a final incremental backup, and apply it to the target system.  Now all is in readiness.

Create all the required users by datapump-ing across the network

impdp ... network_link=source_db_link full=y include=user,role,role_grant,profile 

and then import the transportable tablespace definitions

impdp dumpfile=transport.dmp transport_datafiles='/target_dest/file10.dbf','/target_dest/file11.dbf','/target_dest/file12.dbf',... 

And then set your tablespaces to read-write

begin 
  for i in ( select tablespace_name 
             from   dba_tablespaces 
             where  tablespace_name not in ('SYSTEM','SYSAUX') 
             and    contents = 'PERMANENT' 
             and    status = 'READ ONLY' ) 
  loop 
    execute immediate 'alter tablespace '||i.tablespace_name||' read write'; 
  end loop; 
end; 
/

And bring in all of the other bits and pieces (views, PL/SQL, etc etc) using the metadata dump you took earlier

impdp ... dumpfile=metadata_full.dmp full=y exclude=table_statistics exclude=index_statistics

and voila… you will have a migrated database with minimal downtime.  In our case, the transportable datapump export/import, plus the last incremental (thanks to block change tracking), was around 30 mins, and the import of all of our PL/SQL etc around 30 mins as well.  Not too bad for multiple multi-terabyte databases.

We then upgraded our stats table, and used dbms_stats to import the statistics, and started verification testing.
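For reference, that stats step – assuming the same ‘STATS’ staging table created earlier – boils down to roughly:

```sql
-- on the target, once the STATS table has arrived
exec dbms_stats.upgrade_stat_table(user,'STATS');
exec dbms_stats.import_database_stats('STATS','WHOLE_DB',user);
```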

That covers the broad steps you’ll be taking.  There’s plenty of information on MOS and on OTN about the nitty gritty, but all in all, we were very happy with the outcome.

14 thoughts on “Cross-platform database migration”

    • Good point – I should have elaborated on that. Sequence *definitions* will come in the datapump, but the next sequence value to be used is incorrect (since it’s basically just the DDL to recreate the sequence). So you need to do your own recreate script, which grabs the last_number+cache value as a starting point from dba_sequences.
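      A sketch of what such a recreate script could generate from dba_sequences (adjust the owner filter and options to suit – this is illustrative, not our exact script):

```sql
select 'create sequence '||sequence_owner||'.'||sequence_name||
       ' start with '||(last_number + cache_size)||
       ' increment by '||increment_by||';' as ddl
from   dba_sequences
where  sequence_owner not in ('SYS','SYSTEM');
```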


    • It was our starting point (and I don’t want to understate the value of that), but its handling of concurrency wasn’t particularly flash, plus it had a few problems here and there when it came to datapump generation, and the way it handled incrementals. It’s also the same reason for my blog post being a “guide” rather than a “prescription” – every customer / environment will have their particular nuances, and to blindly adopt any instructional list is a recipe for problems 🙂


  1. (..) from platform ‘AIX-Based Systems (64-bit)’… please, what was the rationale behind such a migration? Is it some wider industry trend, just cost cutting, technology refresh?

  2. Transportable tablespaces are so useful for this type of work I don’t even want to remember what it was like ages ago when they were not available!
    One of the less run of the mill uses for them I have is to preserve tablespaces in uat and dev refreshes from production backups. Rather than data pump all required extra tablespaces from the targets into flat files and then reimporting after refresh, I prefer to read-only and just transport out the developers’ tablespaces from the test and dev dbs, then rman restore from a prod backup followed by adding the tablespaces back in after rman is done! That way I can very easily preserve developers’ work in progress and still refresh the data from a simple rman backup. Works a treat and saves me large amounts of time every month.

  3. Hello Connor,
    we’re facing a similar task, 10 databases with about 160 TB, cross platform, so I’m interested how many TB you moved overall?
    How big have your “multiple multi-terabyte databases” been together ?
    Regards Dirk

    • Hi Dirk,

      The size should be of minimal consequence, because the ‘outage’ will be defined by

      a- length of time of last incremental (which with block change tracking should be small)
      b- length of time to export/import the transportable tablespace definitions
      c- length of time to export/import the non-segment definitions (plsql etc etc)

      So (b) and (c) are where the challenges are – eg, if you had eBiz or similar, and you’re compiling tens of thousands of plsql packages, then you’d be thinking about alternate strategies (eg a change freeze and pre-compiling beforehand etc). Similarly, large number of partitions means longer import times just for the dictionary maintenance and stats import. (If its any use – we totalled around 15,000 table partitions and 40,000 index partitions across the board)

      But to answer your question, I haven’t looked too closely, but I’m guessing between 10 and 20 in total.

    • Hi Sergio,

      I *think* that one of the requirements is that you have to be or higher in order to do cross-platform incrementals, which of course is the critical element in order to not suffer downtime proportional to database size.

      You’d probably need to check with Support to get a definitive answer.


  4. Hello,
    is the same procedure applicable from Windows 2008
    to Oracle on OEL6?
    Actually I am facing some issues in the convert of the incremental backups –
    I can’t find the RMAN command that converts incremental backups,
    and the above procedure returned an error on ”failover=>failover);”


  5. Hello Connor,
    I don’t quite understand the incremental apply procedure.
    Do you have to run this procedure for every datafile on the target database?
    What happens if your incremental backup has more than one file?


    • The incremental will be applied for each tablespace. But as I said in the post – “I didn’t want to list precisely our steps”.

      Start with MOS note 1389592.1 and use that as a starting point to build an initial workflow, then customize to suit your needs.
