In the world of DevOps, continuous integration, and repeatable test cases, the demand to be able to

  • quickly build a suite of database objects,
  • utilise it for a series of tests,
  • then throw the objects away

has become far more common. This is one of the many great use cases for pluggable databases with all of the powerful cloning facilities available. In particular, now that you can take advantage of pluggable databases without* incurring additional license fees, there are some great opportunities there…but that is the topic for another post.

What if the “unit of work” is not an entire pluggable database? What if we want to clone just a schema within a pluggable database? Gone are the days when a DBA might be asked to clone a schema once per month, or even once per week. Cloning a schema is now something developers want to do multiple times per day, as part of an automated process, without having to involve anyone at all! Welcome to DevOps! 🙂

Unfortunately, we do not yet have a command in the Oracle database which lets you run:

create user SCOTT2 from SCOTT;

so I thought I’d throw something together which is hopefully the next best thing. We can use the PL/SQL API to the Data Pump feature to facilitate this. Taking advantage of a few concepts, namely:

  • Network based import,
  • The implicit database link associated with a global name,
  • Dynamically alterable external table definitions

we can build a PL/SQL procedure that is our one line clone schema resource for developers.


--
-- if you want to allow the drop user option, then the
-- owning schema will need the following privilege
--
-- Needless to say, you might want to wrap this in a procedure
-- in its own right to ensure people don't drop the WRONG user
--
-- For example:
--
-- create or replace
-- procedure customised_drop_user(p_user varchar2) is
-- begin
--   if .... then
--      execute immediate 'drop user '||p_user||' cascade';
--   else
--      raise_application_error(-20000,'What the hell?!?!?');
--   end if;
-- end;
--

grant drop user to MY_USER;
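
If you do go down the wrapper route, here is a minimal sketch of the sort of guard you might use. The allow-list table CLONEABLE_USERS is purely hypothetical; substitute whatever rule makes sense for your site.

create table cloneable_users ( username varchar2(128) primary key );

create or replace
procedure customised_drop_user(p_user varchar2) is
  l_ok int;
begin
  --
  -- only drop users that appear in our (hypothetical) allow-list
  --
  select count(*)
  into   l_ok
  from   cloneable_users
  where  username = upper(p_user);

  if l_ok = 1 then
    execute immediate 'drop user '||p_user||' cascade';
  else
    raise_application_error(-20000,'User '||p_user||' is not on the clone allow-list');
  end if;
end;
/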

drop table datapump_clone_log;

--
-- the initial file in the definition (dummy.log) must
-- exist, and the directory you are using (TEMP) must match
-- the declaration in the PL/SQL proc that follows
--
create table datapump_clone_log (
     msg varchar2(4000)
)
organization external
( type oracle_loader
  default directory TEMP
  access parameters
  ( records delimited by newline
    fields terminated by ','
    missing field values are null
   ( msg )
   )
   location ('dummy.log')
) reject limit unlimited;
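
If the directory object and the placeholder file do not already exist, something along these lines will create them ("host" is the SQL*Plus command to shell out, and the /tmp path is just an example; point it wherever suits your environment):

create or replace directory TEMP as '/tmp';
host touch /tmp/dummy.log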

--
-- p_old      = existing schema
-- p_new      = target schema
-- p_drop_new = whether we drop the target schema first
-- p_asynch   = whether we wait, or simply launch the import and return
--
-- I'd recommend leaving p_asynch as false, because that way the
-- import log is echoed right back to your screen
--
create or replace
procedure clone_schema(
              p_old varchar2, 
              p_new varchar2, 
              p_drop_new boolean default true,
              p_asynch boolean default false
              ) is
  l_handle       number;
  l_status       ku$_status; 
  l_state        varchar2(30);
  l_link         varchar2(128);
  l_job_name     varchar2(128) := upper(p_old)||'_SCHEMA_IMP';
  l_log_file     varchar2(128) := lower(p_old)||'_import.log';
  l_default_dir  varchar2(128) := 'TEMP';
  rc             sys_refcursor;
  l_msg          varchar2(4000);
  
  procedure info(m varchar2,p_dbms_out boolean default false) is
  begin
    dbms_application_info.set_client_info(to_char(sysdate,'hh24miss')||':'||m);
    if p_dbms_out then
      dbms_output.put_line(to_char(sysdate,'hh24miss')||':'||m);
    end if;
  end;
BEGIN
  if p_drop_new then
    begin
      info('Dropping '||p_new,p_dbms_out=>true);
      --
      -- See notes about potentially wrapping this for safety
      --
      execute immediate 'drop user '||p_new||' cascade';
    exception
      when others then
        if sqlcode != -1918 then raise; end if;
    end;
  end if;
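  --
  -- the database global name doubles as an implicit loopback database
  -- link, which lets us run a network-mode import within the same database
  --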
  select global_name into l_link from global_name;
  
  l_handle := dbms_datapump.open(
    operation   => 'IMPORT',
    job_mode    => 'SCHEMA',
    remote_link => l_link,
    job_name    => l_job_name);

  dbms_datapump.add_file(
    handle    => l_handle,
    filename  => l_log_file,
    directory => l_default_dir,
    filetype  => dbms_datapump.ku$_file_type_log_file,
    reusefile => 1);

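  --
  -- bring across only the source schema
  --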
  dbms_datapump.metadata_filter(
    handle => l_handle,
    name   => 'SCHEMA_EXPR',
    value  => '= '''||p_old||'''');

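  --
  -- create the copied objects under the target schema name
  --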
  dbms_datapump.metadata_remap(
    handle    => l_handle,
    name      => 'REMAP_SCHEMA',
    old_value => p_old,
    value     => p_new);

  info('Starting job',p_dbms_out=>true);
  dbms_datapump.start_job(l_handle);

  if not p_asynch then
    loop
      begin
        dbms_lock.sleep(3);
        dbms_datapump.get_status(
          handle    => l_handle,
          mask      => dbms_datapump.ku$_status_job_status,
          job_state => l_state,
          status    => l_status);
          info('l_state='||l_state);
      exception
        when others then
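          --
          -- ORA-31626 (job does not exist) at this point just means
          -- the job has already completed and cleaned itself up
          --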
          if sqlcode = -31626 then
             l_state := 'COMPLETED';
          else
             raise;
          end if;
      end;
      exit when (l_state = 'COMPLETED') or (l_state = 'STOPPED');
    end loop;
    info('Final state:'||l_state,p_dbms_out=>true);
  end if;
   
  dbms_datapump.detach(l_handle);

  if not p_asynch then
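    --
    -- 18c syntax: repoint the external table at the Data Pump log file
    -- at query time, then echo its contents to the screen
    --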
    open rc for 'select msg from datapump_clone_log external modify ( location ( '''||l_log_file||''' ) )';
    loop
      fetch rc into l_msg;
      exit when rc%notfound;
      dbms_output.put_line(l_msg);
    end loop;
    close rc;
  end if;
    
end;
/
sho err

You can also get the source from my repo here.

Now let’s have a look at the routine in action. This is from my 18c database.


SQL> set serverout on
SQL> exec clone_schema('SCOTT','SCOTT2');
172055:Dropping SCOTT2
172057:Starting job
172232:Final state:COMPLETED
Starting "MCDONAC"."SCOTT_SCHEMA_IMP":
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 184.1 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . imported "SCOTT2"."BIGT"                            1146660 rows
. . imported "SCOTT2"."DEPT"                                  4 rows
. . imported "SCOTT2"."EMP"                                  14 rows
. . imported "SCOTT2"."SALGRADE"                              5 rows
. . imported "SCOTT2"."BONUS"                                 0 rows
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
ORA-39082: Object type PROCEDURE:"SCOTT2"."BLAH2" created with compilation warnings
ORA-39082: Object type PROCEDURE:"SCOTT2"."BLAH" created with compilation warnings
Job "MCDONAC"."SCOTT_SCHEMA_IMP" completed with 2 error(s) at Wed Nov 6 17:21:29 2019 elapsed 0 00:00:33

PL/SQL procedure successfully completed.

Note: If you want to run this on a version of the database below 18c, you can simply break the dynamic external table alteration into a separate ALTER TABLE statement to change the location, and then query the external table as per normal, as shown in the sketch below. All of the rest of the code should work without alteration.
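
For example, the final block of the procedure might become something like this (a sketch reusing the variable names from above):

  if not p_asynch then
    --
    -- pre-18c: alter the location first, then query the table as normal
    --
    execute immediate
      'alter table datapump_clone_log location ('''||l_log_file||''')';
    open rc for 'select msg from datapump_clone_log';
    loop
      fetch rc into l_msg;
      exit when rc%notfound;
      dbms_output.put_line(l_msg);
    end loop;
    close rc;
  end if;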

Enjoy!

* To see the details about the new license-free options for pluggable databases, check out my video.

16 responses to “Cloning a schema with a one line command”

  1. Hi, thank you for sharing such a great tip. Please advise on what you mean by “If you want to run this on a version of the database below 18c, you can simply break the dynamic external table alteration into
    an ALTER statement to change the location”

  2. In 12.2 you will probably be ok, but before that you cannot modify the location at *query* time, so you would need to do:

    alter table datapump_clone_log location ('my_import_log');
    open rc for select * from datapump_clone_log;

    I’ve added a new version to handle this at my git repo

  3. Is there a way to include the OBJECT grants with the schema import ? I am trying (from 12.2) to import a schema from 11.2.0.4 and the schema import is fine, EXCEPT the grants for objects outside the user’s schema which are *not* copied.

  4. […] Published at DZone with permission of Connor McDonald, DZone MVB. See the original article here. […]

  5. Hi, thank you very much for sharing!

    Because I’ve been working with dbms_datapump for the last few days and tried out your “one line schema cloning” today, I ran into the following issues. I think they’re worth mentioning here and I hope this is okay!

    The error “PLS-00201: identifier 'DBMS_LOCK' must be declared”
    is raised when running the script to create the clone_schema procedure, because there is no public access to that SYS package. Execute access must be granted manually (11g, 12c, 18xe and 19c).

    On 12c, 18xe and 19c I needed to explicitly grant “DROP USER” to the exporting user.

    Executing the clone_schema procedure raises ORA-31626: job does not exist on 11g, 12c, 18xe and 19c.
    As you mentioned in 2017 on Ask Tom, an explicit GRANT CREATE TABLE to the export user is also necessary.

    After that, the export works but with some errors:

    First I get the following, if a schema contains types:
    ORA-39083: Object type TYPE_NAME failed to create with error:
    ORA-02304: invalid object identifier literal while doing impdp

    This can be corrected by adding:
    dbms_datapump.metadata_transform(
      handle => l_handle,
      name   => 'OID',
      value  => 0);
    to the code.

    I’m ashamed to admit it, but the schema I tested contains tables with LONG columns (our developers still live in the nineties).
    Because ORA-31679 is raised when using Data Pump import over a network link on a table that has a LONG column,
    I see no solution other than using a dump file for cloning these tables.

    Lastly, I get ORA-00001: unique constraint (SYS.I_JOB_JOB) violated, because the tested schema also contains a database job. I had to manually create the job in the target schema.

    That’s it. I hope this will help someone coming along.

    Again thanks for all the input you give with your blog and on Ask Tom!

    1. Awesome input. Thanks for stopping by

      1. Michael Milligan

        Connor, thanks for this script. Question: did you experience any of the errors that Maurice mentioned? Maybe not the LONG type errors but the others? For example, having to use dbms_datapump.metadata_transform to correct ORA-39083 when an object type exists? Is this a known bug or is something else going on?

    2. Michael Milligan

      Thank you Maurice for these workarounds.

  6. Rajeshwaran Jeyabal

    Connor,

    what if the schema is spread across a set of (dedicated) tablespaces – with no other schema having objects in them – then the create pluggable database using “user_tablespaces” clause won’t work here?

    https://docs.oracle.com/en/database/oracle/oracle-database/18/sqlrf/CREATE-PLUGGABLE-DATABASE.html#GUID-F2DBA8DD-EEA8-4BB7-A07F-78DC04DB1FFC

  7. Not sure what you mean here, and how it relates to datapump.

  8. When I follow this and create the procedure in my schema, I can only clone my own schema to my own schema. Which is a bit pointless but it does run successfully to completion.

    If I specify a different schema to clone FROM I get “ORA-31631: privileges are required” when calling the metadata_filter. If I specify a different schema to clone TO I get “ORA-31631: privileges are required” when calling the metadata_remap.

    I have CREATE TABLE, CREATE ANY TABLE, IMPORT FULL DATABASE and EXPORT FULL DATABASE all granted directly to my user (not via a role). I also have the DBA role, plus IMP_FULL_DATABASE and EXP_FULL_DATABASE. Version 18.

    Which privileges am I missing?

    Thanks

  9. Could be this – from MOS 2327733.1

    CAUSE
    One of the users specified in the network link does not have the appropriate roles assigned as default roles. Unless a role is a default role for a user, the role will not be enabled by default when the user connects. The Data Pump roles must be default roles for the users involved with the network link.

    In the following example, all of USER123’s granted roles are default roles (having “YES” in the DEFAULT column of DBA_ROLE_PRIVS):

    SQL> select * from dba_role_privs where grantee = 'USER123';

    GRANTEE    GRANTED_ROLE                   ADMI DELE DEFA COMM
    ---------- ------------------------------ ---- ---- ---- ----
    USER123    CONNECT                        NO   NO   YES  NO
    USER123    DBA                            NO   NO   YES  NO
    USER123    IMP_FULL_DATABASE              NO   NO   YES  NO
    USER123    RESOURCE                       NO   NO   YES  NO
    USER123    SELECT_CATALOG_ROLE            NO   NO   YES  NO
    USER123    EXP_FULL_DATABASE              NO   NO   YES  NO
    USER123    DATAPUMP_EXP_FULL_DATABASE     NO   NO   YES  NO
    USER123    DATAPUMP_IMP_FULL_DATABASE     NO   NO   YES  NO

    In contrast, on the other database in the network link, USER123 has similar roles, but most are not default:

    SQL> select * from dba_role_privs where GRANTEE = 'USER123';

    GRANTEE    GRANTED_ROLE                   ADMI DELE DEFA COMM
    ---------- ------------------------------ ---- ---- ---- ----
    USER123    CONNECT                        NO   NO   YES  NO
    USER123    DBA                            NO   NO   NO   NO
    USER123    IMP_FULL_DATABASE              NO   NO   NO   NO
    USER123    RESOURCE                       NO   NO   YES  NO
    USER123    SELECT_CATALOG_ROLE            NO   NO   NO   NO
    USER123    EXP_FULL_DATABASE              NO   NO   NO   NO
    USER123    DATAPUMP_EXP_FULL_DATABASE     NO   NO   NO   NO
    USER123    DATAPUMP_IMP_FULL_DATABASE     NO   NO   NO   NO

    This will cause the Data Pump job to fail with:

    ORA-31631: privileges are required
    ORA-39149: cannot link privileged user to non-privileged user

    SOLUTION
    Change the non-default Data Pump roles on the relevant database to be default roles.

    For example, the following statement makes all roles granted directly to USER123 default roles, except the DBA_MANAGER role:

    ALTER USER USER123
    DEFAULT ROLE ALL EXCEPT DBA_MANAGER;

    At the beginning of USER123’s next session, Oracle Database enables all roles granted directly to USER123 except the DBA_MANAGER role.

  10. Hello Connor,

    Thank you for sharing a great approach.
    Would you know if I can use this db-link-based approach without generating a logfile? I checked the documentation but could not find any reference that explains how I can use the NOLOGFILE option from expdp/impdp while using DBMS_DATAPUMP.

    1. I think if you just omit the logfile part of the API call (the dbms_datapump.add_file call with the log file type), it will not create one.

  11. Hi Connor,

    Firstly thanks very much for sharing this.

    I’d found something similar a while ago on another blog which worked fine – except when it didn’t! And, as it created no logfile, when it didn’t work, it was very difficult to troubleshoot.

    I’ve used your ‘12.2’ version a few times (on 11.2.0.4 – I know, I’m sorry we’re still on 11.2.0.4 too !!) and the exp/imp works fine, source and target object counts are identical, but I always hit this:

    SQL> BEGIN xxxx.clone_schema('DWHMIG','DWHMIG2'); END;

    *
    ERROR at line 1:
    ORA-06564: object DATA_PUMP_MIGRATE does not exist
    ORA-06512: at “xxxx.CLONE_SCHEMA”, line 93
    ORA-06512: at line 1

    The OS and database directories exist, the user owning the procedure has READ and WRITE on the Oracle directory and the log file is created OK.

    I have to confess to not understanding what purpose the dummy.log serves, but I’ve tried the procedure with and without this (empty) file being present in the Oracle directory – same result.

    Do you have any idea what I’m missing please?

    Thanks again for sharing and, not for the first time I’m sure, you’ve succeeded in your ‘Why I’m here’ goal!! 😉

    Just in case it helps anyone else, like Maurice, I also hit the ‘DBMS_LOCK’ and ‘GRANT CREATE TABLE’ issues and resolved them as he did.

    1. Take a look at MOS Doc ID 2784518.1
