Raw partitions?

Here’s a quirky one for you. It can happen when you are dealing with a partitioned table where the partition key is defined as RAW. To be honest, I really can’t think of a reason why you would ever want a table with a raw partition key. (If you have one, please let me know in the comments.) Anyway, here’s the demo. I’ll start with a table using the cool automatic list partition facility and load a single row into it.


C:\>sql18

SQL*Plus: Release 18.0.0.0.0 - Production on Mon Apr 29 16:38:19 2019
Version 18.6.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.

Last Successful login time: Mon Apr 29 2019 16:37:30 +08:00

Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.6.0.0.0


SQL> create table test
  2  (
  3     id         raw(16),
  4     groupid    raw(16),
  5     chardata   varchar2(20),
  6     numbdata   number,
  7     datedata   date
  8  )
  9    partition by list (groupid) automatic (partition P_ values ('FF'))
 10  ;

Table created.

SQL> alter table test
  2  add constraint test_pk
  3  primary key (id)
  4  using index;

Table altered.

SQL> insert into test
  2  values (sys_guid(), sys_guid(), 'some char data', 123, sysdate - 3);

1 row created.

SQL> commit;

Commit complete.

SQL> exit
Disconnected from Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.6.0.0.0

No dramas there. Now I’ll take a DataPump export of that table using the defaults.


C:\>expdp mcdonac/*****@db18_pdb1 directory=TEMP dumpfile=test_exp.dmp tables=test

Export: Release 18.0.0.0.0 - Production on Mon Apr 29 16:38:19 2019
Version 18.6.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Starting "MCDONAC"."SYS_EXPORT_TABLE_01":  mcdonac/********@db18_pdb1 directory=TEMP dumpfile=test_exp.dmp tables=test
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
. . exported "MCDONAC"."TEST":"P_"                           0 KB       0 rows
. . exported "MCDONAC"."TEST":"SYS_P7389"                6.804 KB       1 rows
Master table "MCDONAC"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for MCDONAC.SYS_EXPORT_TABLE_01 is:
  C:\TEMP\TEST_EXP.DMP
Job "MCDONAC"."SYS_EXPORT_TABLE_01" successfully completed at Mon Apr 29 16:38:40 2019 elapsed 0 00:00:16

Everything is still going OK. Having taken a DataPump export, it’s reasonable to assume that I’ll want to load it into a new database somewhere. To mimic that, I’ll simply drop the table I just created so that I can import it back into the same schema.


C:\>sql18

SQL*Plus: Release 18.0.0.0.0 - Production on Mon Apr 29 16:38:42 2019
Version 18.6.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.

Last Successful login time: Mon Apr 29 2019 16:38:19 +08:00

Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.6.0.0.0



SQL> drop table test purge;

Table dropped.

SQL> exit
Disconnected from Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.6.0.0.0

And now we’ll run a simple DataPump import to put my table back in place.



C:\>impdp mcdonac/******@db18_pdb1 directory=TEMP dumpfile=test_exp.dmp tables=test

Import: Release 18.0.0.0.0 - Production on Mon Apr 29 16:38:43 2019
Version 18.6.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Master table "MCDONAC"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "MCDONAC"."SYS_IMPORT_TABLE_01":  mcdonac/********@db18_pdb1 directory=TEMP dumpfile=test_exp.dmp tables=test
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39083: Object type TABLE:"MCDONAC"."TEST" failed to create with error:
ORA-14308: partition bound element must be one of: string, datetime or interval literal, number, or NULL

Failing sql is:
CREATE TABLE "MCDONAC"."TEST" ("ID" RAW(16), "GROUPID" RAW(16), "CHARDATA" VARCHAR2(20 BYTE) COLLATE "USING_NLS_COMP", "NUMBDATA" NUMB
ER, "DATEDATA" DATE)  DEFAULT COLLATION "USING_NLS_COMP" PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255  STORAGE( BUFFER_POOL DEFAULT F
LASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "USERS"  PARTITION BY LIST ("GROUPID") AUTOMATIC  (PARTITION "P_"  VALUES ('FF
') SEGMENT CREATION DEFERRED PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255  NOCOMPRESS LOGGING  STORAGE( BUFFER_POOL DEFAULT FLASH_CAC
HE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "USERS"  READ WRITE , PARTITION "SYS_P7389"  VALUES (HEXTORAW('38E50ADF7A7840149B16767
9433196C5')) SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255  NOCOMPRESS LOGGING  STORAGE(INITIAL 8388608 NEX
T 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_
FLASH_CACHE DEFAULT) TABLESPACE "USERS"  READ WRITE )

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
ORA-39112: Dependent object type CONSTRAINT:"MCDONAC"."TEST_PK" skipped, base object type TABLE:"MCDONAC"."TEST" creation failed

Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "MCDONAC"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at Mon Apr 29 16:38:46 2019 elapsed 0 00:00:03

Uh oh… You can see from the failing SQL in the import log that DataPump sees the RAW datatype definition and hence tries to map the partition key values to the same datatype using HEXTORAW. It’s an interesting contradiction: you are allowed to partition a table by raw, but if you try to nominate a raw datatype in the actual partition definition values, then the database gets upset with you. Here’s the same DDL in simpler form showing the same error.



SQL> create table test
  2  (
  3     id         raw(16),
  4     groupid    raw(16),
  5     chardata   varchar2(20),
  6     numbdata   number,
  7     datedata   date
  8  )  partition by list (groupid) (partition P_ values (hextoraw('FF')));
)  partition by list (groupid) (partition P_ values (hextoraw('FF')))
                                                                   *
ERROR at line 8:
ORA-14308: partition bound element must be one of: string, datetime or interval literal, number, or NULL

To be honest, I’m not losing any sleep over this, because I can’t think of a reason to partition by raw. But if you do have a valid case for it, then you’ll need to pre-create the table (without the HEXTORAW expressions) and then use DataPump to load the data back into the table.
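
If you do find yourself there, here is a sketch of that workaround (hedged: the impdp parameters are illustrative, so check the DataPump documentation for your scenario). Pre-create the table with only the seed partition, so that the HEXTORAW bounds never appear in any DDL, then ask DataPump to append into the existing table:


create table test
(
   id         raw(16),
   groupid    raw(16),
   chardata   varchar2(20),
   numbdata   number,
   datedata   date
)
partition by list (groupid) automatic (partition P_ values ('FF'));

impdp mcdonac/*****@db18_pdb1 directory=TEMP dumpfile=test_exp.dmp tables=test table_exists_action=append

Automatic list partitioning then re-creates the system-named partitions as the rows arrive.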

Take care with automatic indexes

This one came in via an AskTOM question. You need to be careful when disabling constraints that are underpinned by an index, especially when it comes to the storage for that index.

Let’s take a look at a simple example. I’ll explicitly nominate the LARGETS tablespace for my index that will support the primary key.



SQL> create table t as
  2      select 1 x, 1 y from dual;

Table created.

SQL>
SQL> alter table t add constraint
  2    t_pk primary key ( x ) using index tablespace largets;

Table altered.

SQL>
SQL> select tablespace_name from user_segments
  2  where segment_name = 'T_PK';

TABLESPACE_NAME
--------------------
LARGETS

1 row selected.

SQL>
SQL> alter table t modify constraint t_pk disable;

Table altered.

SQL> select * from user_indexes where index_name = 'T_PK';

no rows selected

SQL> alter table t modify constraint t_pk enable;

Table altered.

Now let’s look at where the index needed for that constraint has been re-created.


SQL> select tablespace_name from user_segments
  2  where segment_name = 'T_PK';

TABLESPACE_NAME
--------------------
USERS

1 row selected.

The index is quite a distinct entity from the constraint, and unless you nominate a tablespace, it will be created in the default tablespace for the schema, which in my case was USERS.
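
If pre-creating the index isn’t convenient, another option is to nominate the tablespace at the moment you re-enable the constraint. A hedged sketch of the enable_clause syntax (verify against the SQL Language Reference for your version):


alter table t enable constraint t_pk using index tablespace largets;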

So take care – you might need to pre-create indexes or explicitly rebuild them with a tablespace specification before re-enabling those constraints. Here is the same demo with a pre-created index, which survives the disable/enable cycle intact.


SQL> create table t as
  2      select 1 x, 1 y from dual;

Table created.

SQL>
SQL> create unique index t_pk on t ( x ) tablespace largets;

Index created.

SQL> alter table t add constraint   t_pk primary key ( x ) using index ;

Table altered.

SQL>
SQL> select tablespace_name from user_segments
  2  where segment_name = 'T_PK';

TABLESPACE_NAME
--------------------
LARGETS

1 row selected.

SQL>
SQL> alter table t modify constraint t_pk disable;

Table altered.

SQL> select tablespace_name from user_segments
  2  where segment_name = 'T_PK';

TABLESPACE_NAME
--------------------
LARGETS

1 row selected.

SQL> alter table t modify constraint t_pk enable;

Table altered.

SQL>
SQL> select tablespace_name from user_segments
  2  where segment_name = 'T_PK';

TABLESPACE_NAME
--------------------
LARGETS

1 row selected.

 

Grab all the DDL

I posted a video a couple of days ago showing a trigger mechanism to customize the capture of DDL that is issued on your database. The aim here is to be more generous with letting developers execute DDL on their Development databases, whilst still having a thorough record of the changes that are occurring. Of course, it goes without saying, which is why I am saying it 🙂 that this is not a replacement for good source code control.

Here’s the video

and here is the supporting code


DROP TABLE SYSTEM.DDL_LOG CASCADE CONSTRAINTS
/

CREATE TABLE SYSTEM.DDL_LOG
(
  TSTAMP       TIMESTAMP(6)   NOT NULL,
  HOST         VARCHAR2(100),
  IP_ADDRESS   VARCHAR2(100),
  MODULE       VARCHAR2(100),
  OS_USER      VARCHAR2(100),
  TERMINAL     VARCHAR2(100),
  OPERATION    VARCHAR2(100),
  OWNER        VARCHAR2(50),
  OBJECT_NAME  VARCHAR2(50),
  OBJECT_TYPE  VARCHAR2(50),
  SQLTEXT      CLOB,
  PREVSQLTEXT  CLOB
)
/


DROP TRIGGER SYSTEM.capture_all_ddl
/

CREATE OR REPLACE TRIGGER SYSTEM.CAPTURE_ALL_DDL
after create or alter or drop on database
begin
  --
  -- lots of flexibility here in choosing what you want to log
  -- and when etc etc.
  --
  if ora_dict_obj_owner in ('....')
    and  dbms_utility.format_call_stack not like '%NIGHTLY%'  -- not the nightly maint jobs
    and  nvl(sys_context('USERENV','MODULE'),'x') != 'DBMS_SCHEDULER'  -- not jobs
  then
    --
    -- and we can capture all the usual sys_context values
    --
    insert into SYSTEM.ddl_log
    values (systimestamp,
                sys_context('USERENV','HOST'),
                sys_context('USERENV','IP_ADDRESS'),
                sys_context('USERENV','MODULE'),
                sys_context('USERENV','OS_USER'),
                sys_context('USERENV','TERMINAL'),
                ora_sysevent,
                ora_dict_obj_owner,
                ora_dict_obj_name,
                ora_dict_obj_type,
                --
                -- In my case I choose to not log PL/SQL source, just the fact that it had been changed
                -- but you can do whatever you like here.
                --
                case when ora_dict_obj_type not in ('PACKAGE','PROCEDURE','FUNCTION','PACKAGE BODY') and ora_sysevent != 'DROP' then
                  ( select sql_fulltext from v$sql
                    where sql_id = ( select sql_id from v$session where sid = sys_context('USERENV','SID') ) 
                    and rownum = 1 
                  )
                end,
                case when ora_dict_obj_type not in ('PACKAGE','PROCEDURE','FUNCTION','PACKAGE BODY') and ora_sysevent != 'DROP' then
                  ( select sql_fulltext from v$sql
                    where sql_id = ( select prev_sql_id from v$session where sid = sys_context('USERENV','SID') ) 
                    and rownum = 1 
                  )
                end
           );
     
  end if;
exception
  when others then null;  -- we will not STOP the DDL if we fail to track it
end;
/
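
If you deploy something like this, it’s worth a quick sanity test. A hedged example (DEMO_DDL_TEST is just a hypothetical table name, and remember the ora_dict_obj_owner filter in the trigger must cover the schema you test from):


-- issue some DDL from a schema that passes the trigger's filter
create table demo_ddl_test ( x int );

-- and confirm it was captured
select tstamp, operation, owner, object_name, object_type
from   system.ddl_log
order  by tstamp desc;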

PL/SQL – Don’t mix and match scope

Here’s a simple little PL/SQL block where we call an inner procedure PARAMETER_TESTER from its parent block. Pay particular attention to the parameter we pass to the procedure, and its value throughout the execution of that procedure.


SQL> set serverout on
SQL> declare
  2
  3     glob_var  int := 0;
  4     local_var int;
  5
  6     procedure PARAMETER_TESTER(param int) is
  7     begin
  8         dbms_output.put_line('Param came in as: '||param);
  9         glob_var := glob_var + 1;
 10         dbms_output.put_line('Param left as   : '||param);
 11         dbms_output.put_line('glob_var is now : '||glob_var);
 12     end;
 13
 14  begin
 15     parameter_tester(100);
 16  end;
 17  /
Param came in as: 100
Param left as   : 100
glob_var is now : 1

PL/SQL procedure successfully completed.

Now I’ll slowly extend the code, and just by eyeballing it, see if you can predict what the output will be before looking past the end of the PL/SQL block.



SQL> set serverout on
SQL> declare
  2
  3     glob_var  int := 0;
  4     local_var int;
  5
  6     procedure PARAMETER_TESTER(param int) is
  7     begin
  8         dbms_output.put_line('Param came in as: '||param);
  9         glob_var := glob_var + 1;
 10         dbms_output.put_line('Param left as   : '||param);
 11         dbms_output.put_line('glob_var is now : '||glob_var);
 12     end;
 13
 14  begin
 15     local_var := glob_var;
 16     parameter_tester(local_var);
 17  end;
 18  /
Param came in as: 0
Param left as   : 0
glob_var is now : 1

PL/SQL procedure successfully completed.

So far so good, I imagine. The parameter came in as zero; we incremented the global variable, which of course had no impact on the parameter. Let’s now up the ante a little.



SQL> set serverout on
SQL> declare
  2
  3     glob_var  int := 0;
  4     local_var int;
  5
  6     procedure PARAMETER_TESTER(param int) is
  7     begin
  8         dbms_output.put_line('Param came in as: '||param);
  9         glob_var := glob_var + 1;
 10         dbms_output.put_line('Param left as   : '||param);
 11         dbms_output.put_line('glob_var is now : '||glob_var);
 12     end;
 13
 14  begin
 15     parameter_tester(glob_var);
 16  end;
 17  /
Param came in as: 0
Param left as   : 1
glob_var is now : 1

PL/SQL procedure successfully completed.

This is perhaps the first one that you might find a little unexpected. Notice that the value of the parameter passed to the procedure has changed within the inner procedure, even though it was passed (implicitly) as an IN parameter. People often assume that if you pass anything to a procedure without an IN OUT or OUT specification, then the parameter is “read only” and cannot be touched by code. This is true to the extent that you cannot perform an assignment to that parameter, as you can see below.



SQL> set serverout on
SQL> declare
  2
  3     glob_var  int := 0;
  4     local_var int;
  5
  6     procedure PARAMETER_TESTER(param int) is
  7     begin
  8         param := param + 1;
  9     end;
 10
 11  begin
 12     parameter_tester(glob_var);
 13  end;
 14  /
       param := param + 1;
       *
ERROR at line 8:
ORA-06550: line 8, column 8:
PLS-00363: expression 'PARAM' cannot be used as an assignment target
ORA-06550: line 8, column 8:
PL/SQL: Statement ignored

but that is not the same as saying that the parameter is fixed in value throughout the duration of the call. This behaviour is documented in the PL/SQL language manual: an IN parameter may be passed by reference rather than by value.

[Excerpt from the PL/SQL Language Reference on parameter passing methods]

Now I’ll explore some other examples of how you might get caught out by this. I’ll modify the example slightly so that I’m passing an expression rather than just “glob_var”.



SQL> set serverout on
SQL> declare
  2
  3     glob_var  int := 0;
  4     local_var int;
  5
  6     procedure PARAMETER_TESTER(param int) is
  7     begin
  8         dbms_output.put_line('Param came in as: '||param);
  9         glob_var := glob_var + 1;
 10         dbms_output.put_line('Param left as   : '||param);
 11         dbms_output.put_line('glob_var is now : '||glob_var);
 12     end;
 13
 14  begin
 15     parameter_tester(glob_var+1);
 16  end;
 17  /
Param came in as: 1
Param left as   : 1
glob_var is now : 1

PL/SQL procedure successfully completed.

Now “normal” service has been resumed: the expression is evaluated first and hence is passed by value to the procedure, leaving the parameter value unchanged throughout.

So it would appear that an expression disables the “pass by reference” mechanism? Well, let’s try two more examples.



SQL> set serverout on
SQL> declare
  2
  3     glob_var  int := 0;
  4     local_var int;
  5
  6     procedure PARAMETER_TESTER(param int) is
  7     begin
  8         dbms_output.put_line('Param came in as: '||param);
  9         glob_var := glob_var + 1;
 10         dbms_output.put_line('Param left as   : '||param);
 11         dbms_output.put_line('glob_var is now : '||glob_var);
 12     end;
 13
 14  begin
 15    parameter_tester(to_char(glob_var));
 16  end;
 17  /
Param came in as: 0
Param left as   : 0
glob_var is now : 1

PL/SQL procedure successfully completed.

SQL>
SQL> set serverout on
SQL> declare
  2
  3     glob_var  int := 0;
  4     local_var int;
  5
  6     procedure PARAMETER_TESTER(param int) is
  7     begin
  8         dbms_output.put_line('Param came in as: '||param);
  9         glob_var := glob_var + 1;
 10         dbms_output.put_line('Param left as   : '||param);
 11         dbms_output.put_line('glob_var is now : '||glob_var);
 12     end;
 13
 14  begin
 15    parameter_tester(to_number(glob_var));
 16  end;
 17  /
Param came in as: 0
Param left as   : 1
glob_var is now : 1

PL/SQL procedure successfully completed.

Notice the last one in particular. Even though I had a TO_NUMBER expression around “glob_var”, it was still passed by reference, and hence the parameter value changed throughout the execution of the inner procedure. This is because the PL/SQL compiler detected that the TO_NUMBER function was redundant and optimized it out during compilation. That left just a parameter input of “glob_var”, which could then be passed by reference.

Bottom line – in any language, it’s generally considered poor programming practice to mix and match the scope of variables, in this case using glob_var both within and outside the inner procedure. Trying to subvert the behaviour of the PL/SQL engine with expressions such as to_char(glob_var) is a recipe for disaster. Who knows what additional optimizations the next release of the PL/SQL compiler will have? Perhaps it will optimize out “glob_var+1” or to_char(glob_var) and hence pass them by reference as well. If your functionality really depends on coding with these blurred scopes, then make sure you perform an assignment to a local variable and pass that, to avoid unexpected side-effects – as in the final example below.



SQL> set serverout on
SQL> declare
  2
  3     glob_var  int := 0;
  4     local_var int;
  5
  6     procedure PARAMETER_TESTER(param int) is
  7     begin
  8         dbms_output.put_line('Param came in as: '||param);
  9         glob_var := glob_var + 1;
 10         dbms_output.put_line('Param left as   : '||param);
 11         dbms_output.put_line('glob_var is now : '||glob_var);
 12     end;
 13
 14  begin
 15     local_var := glob_var;
 16     parameter_tester(local_var);
 17  end;
 18  /
Param came in as: 0
Param left as   : 0
glob_var is now : 1

PL/SQL procedure successfully completed.

Quick and easy masking

I had a request from a client a while back regarding masking of data. They had an application with sensitive data in the Production environment (where access and audit were very tightly controlled), but the issue was how to respect that sensitivity in non-Production environments whilst still preserving full-size data volumes for application testing.

After some conversations about requirements, it turned out that since (even in non-Production environments) all access to application components was logged and audited, the issue was simply protecting against “inadvertent” access to sensitive data. For example, in the application, if I searched for “males with black hair with an interest in technology” I should never see the name “Connor McDonald” on screen in the results, because simply viewing that data could be treated as a breach of privacy.

Addendum: Anyone adding a comment to this post with “Shouldn’t that be males with black hair with a hint of grey” will be banned forever 🙂

I suggested Data Redaction as a potential solution, but they wanted the data actually obfuscated within the database. Then I suggested Data Masking, which definitely does the job, but they felt this offered far more than their simple requirement: just obfuscate the data and retain its distribution to preserve performance characteristics as much as possible.
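
(For reference, the Data Redaction option would have been only a few lines of PL/SQL. A sketch only – the APP_OWNER/STAFF/SURNAME names are hypothetical – and note it masks query results rather than the stored data, which is exactly why it didn’t fit the brief:)


begin
  dbms_redact.add_policy(
    object_schema => 'APP_OWNER',   -- hypothetical schema
    object_name   => 'STAFF',       -- hypothetical table
    column_name   => 'SURNAME',     -- hypothetical column
    policy_name   => 'redact_staff_names',
    function_type => dbms_redact.random,
    expression    => '1=1');        -- applies to all queries
end;
/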

So ultimately, here is the basis of a little routine I created for them to obfuscate their data, satisfying their requirement of never viewing meaningful data in their non-Production environments. Please read the footnote at the end of this post before you consider using this code. The logic here is straightforward:

  • Take the alphanumeric characters in a logical order as a single string.
  • Use DBMS_RANDOM to come up with 255 variations of that string and store them in an array.
  • Use the TRANSLATE function to perform a simple obfuscation of the input string.

That translation could then be used during the process of copying data from Production, or as a post-copy task.

First I’ll demo the concept of coming up with randomised strings. Here’s a simple anonymous block to create 10 random strings based on a base string of: ABC….Zabc…z0123456789


SQL> set serverout on
SQL> declare
  2    type rand_list is table of varchar2(62) index by pls_integer;
  3    source varchar2(62) := 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
  4    perms  rand_list;
  5    source_variant varchar2(62);
  6  begin
  7    for i in 1 .. 10 loop
  8      source_variant := null;
  9      for j in ( select rownum r from dual connect by level <= 52 order by dbms_random.value )
 10      loop
 11        source_variant := source_variant || substr(source,j.r,1);
 12      end loop;
 13      for j in ( select rownum r from dual connect by level <= 10 order by dbms_random.value )
 14      loop
 15        source_variant := source_variant || substr(source,52+j.r,1);
 16      end loop;
 17      perms(i) := source_variant;
 18      dbms_output.put_line(source_variant);
 19    end loop;
 20  end;
 21  /
xrLCuowbNfvXlMJSBKctPyheZWTnQUjYAdizkFVOIHGsmDRpaEqg5637918402
OkVxApdsLqTlyvgYXQGaSeNPMCjZEJhHRtbcIBrowKUunifzDFmW8013742695
gcVxtNeMdBuRJhvYrnkCIWzoyFwfLpqSATabQGUZKmDOEilPHsXj5601892743
vOUuisITyJjbaQLqfnpFVBoCrYtzHeDAmWExPkNwgScXKRlZGMhd3104879256
VLZkXndmytCWRwKeNgYifvTESzGFhHcqaMJPoljxUsAQrubOpBDI7302189564
VfPLKmObZzBMCpgrjUaQYieRNJykDwtnuWXFxdTEIlScqvohAGsH0138924756
ABMkfozebljnPGqHJhpNwxYEUCuQLyDrOasIgtFZKVmciTXSdvRW5702139864
iNjrmVTlyzFSLswfBvaWKUAHIZOcgMPYXCGqeoDJbhpktERdQunx1976045328
hiYGzxMEZFPcmkutbDlrCeynSITKHNJaXAwfpRsdqLjvQUoWVOgB0479632158
uZpPXOHLAvzricdjtqSQBKVhYoFfmUxEDNCwWlIJsbRgkMeGynaT6451328790

So a simple obfuscation could be:


SQL> select
  2    translate(
  3      'Connor McDonald',
  4      'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789',
  5      'ehZAFjmtYwWilTRDnCcLrvQqEpakKGPMxXyudVUNsoJOBfzSIbHg2163095784')
  6  from dual;

TRANSLATE('CONN
---------------
ZsNNsO lKAsNaVG

Armed with this, I can create a package that extends the concept to choose from 255 random strings and use them as a simple obfuscator for source data.


SQL> create or replace
  2  package masker is
  3    function rand_masks(p_size int default 255, p_refresh varchar2 default 'N',p_seed number default 0) return  sys.odcivarchar2list pipelined;
  4    function standard_source return varchar2;
  5    function idx_entry(p_idx int, p_size int default 255, p_refresh varchar2 default 'N',p_seed number default 0) return varchar2;
  6    function mask(p_input varchar2) return varchar2;
  7  end;
  8  /

Package created.

SQL> create or replace
  2  package body masker is
  3    perms sys.odcivarchar2list := sys.odcivarchar2list();
  4
  5  procedure init(p_size int,p_seed int) is
  6    source varchar2(62) := 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
  7    source_variant varchar2(62);
  8  begin
  9    dbms_random.seed(p_seed);
 10    perms.delete;
 11    perms.extend(p_size);
 12    for i in 1 .. p_size loop
 13      source_variant := null;
 14      for j in ( select rownum r from dual connect by level <= 52 order by dbms_random.value )
 15      loop
 16        source_variant := source_variant || substr(source,j.r,1);
 17      end loop;
 18      for j in ( select rownum r from dual connect by level <= 10 order by dbms_random.value )
 19      loop
 20        source_variant := source_variant || substr(source,52+j.r,1);
 21      end loop;
 22      perms(i) := source_variant;
 23  --      dbms_output.put_line(source_variant);
 24    end loop;
 25  end;
 26
 27  function rand_masks(p_size int default 255, p_refresh varchar2 default 'N',p_seed number default 0) return sys.odcivarchar2list pipelined is
 28  begin
 29    if perms.count < p_size or p_refresh in ('Y','y') then
 30      init(p_size,p_seed);
 31    end if;
 32
 33    for i in 1 .. p_size
 34    loop
 35      pipe row ( perms(i));
 36    end loop;
 37
 38    return;
 39  end;
 40
 41  function standard_source return varchar2 is
 42  begin
 43     return 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
 44  end;
 45
 46  function idx_entry(p_idx int, p_size int default 255, p_refresh varchar2 default 'N',p_seed number default 0) return varchar2 is
 47  begin
 48    if perms.count < p_size or p_refresh in ('Y','y') then
 49      init(p_size,p_seed);
 50    end if;
 51    return perms(p_idx);
 52  end;
 53
 54  function mask(p_input varchar2) return varchar2 is
 55  begin
 56    return translate(p_input,masker.standard_source,masker.idx_entry(ascii(p_input)));
 57  end;
 58
 59  end;
 60  /

Package body created.

Now I’ll test this with some sample data


SQL> create table t ( pk int, n varchar2(50));

Table created.

SQL>
SQL> insert into t values (1,'Connor McDonald');

1 row created.

SQL> insert into t values (2,'Chris Saxon');

1 row created.

SQL> insert into t values (3,'Maria Colgan');

1 row created.

SQL> insert into t values (4,'Bobby Curtis');

1 row created.

SQL> insert into t values (5,'Joel Kallman');

1 row created.

SQL> insert into t values (6,'Steven Feuerstein');

1 row created.

SQL> insert into t values (7,'Connor McDonald');

1 row created.

SQL> select pk, n, masker.mask(n) diddled
  2  from t;

        PK N                                                  DIDDLED
---------- -------------------------------------------------- ------------------------------
         1 Connor McDonald                                    sAJJAe CvnAJjWt
         2 Chris Saxon                                        sweOy RjrAJ
         3 Maria Colgan                                       czEJz BhMbzm
         4 Bobby Curtis                                       nkjjI EpzLBS
         5 Joel Kallman                                       oYfi luiiIuj
         6 Steven Feuerstein                                  CyUrUE SUtUWQyUXE
         7 Connor McDonald                                    sAJJAe CvnAJjWt

7 rows selected.

There we go! A simple obfuscator that runs quite efficiently on source data whilst preserving the distribution of the data.
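
Circling back to the earlier point about using this during the copy from Production, or as a post-copy task: the post-copy variant can be as simple as an in-place update. A sketch only – for large tables you would probably batch the work, for example with DBMS_PARALLEL_EXECUTE:


update t
set    n = masker.mask(n);

commit;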

Footnote: Remember that any masking scheme that does not completely randomize the obfuscation of source data is not a complete encryption or security solution. Most mechanisms to deduce source data from a manipulated variant of that data involve distribution analysis of letters, n-grams, words and phrases. The moment you preserve the distribution of source data for (say) performance and/or optimizer plan preservation, you allow source data to be derived if there is enough of it available to perform that analysis. Remember that this is not a substitute for appropriate security and encryption protections.

APEX Upgrade redux

I posted about my APEX upgrade to 19 yesterday, and someone was quick to point out to me that they believed I hadn’t covered all of the steps.

“What if your APEX instance needs to call web services?” they said. “You need to update your Access Control Lists.”

I hadn’t thought of that, so I logged onto one of my other APEX instances that was still at version 18, and checked the current ACLs



SYS> select acl,
  2         principal,
  3         privilege
  4  from   dba_network_acl_privileges
  5  order by acl, principal, privilege;

ACL                                                PRINCIPAL            PRIVILEGE
-------------------------------------------------- -------------------- ----------
/sys/acls/oracle-sysman-ocm-Resolve-Access.xml     ORACLE_OCM           resolve
NETWORK_ACL_192DBA180AEB40AD98A73ACCD309FF8F       APEX_180200          http
NETWORK_ACL_296C00CF7F2744BAB526D4C4E85FE189       GGSYS                resolve
NETWORK_ACL_296C00CF7F2744BAB526D4C4E85FE189       GSMADMIN_INTERNAL    resolve

4 rows selected.

I can see the potential issue here. On upgrade, I’ll have a new schema that would need the same ACLs granted as the existing APEX 18 schema. Then I went ahead and upgraded this instance to version 19, and lo and behold, check out this nice little touch in the upgrade.


SYS> select acl,
  2         principal,
  3         privilege
  4  from   dba_network_acl_privileges
  5  order by acl, principal, privilege;

ACL                                                PRINCIPAL            PRIVILEGE
-------------------------------------------------- -------------------- ----------
/sys/acls/oracle-sysman-ocm-Resolve-Access.xml     ORACLE_OCM           resolve
NETWORK_ACL_0F93A8F653EC43DC9D90457B1151A330       APEX_190100          http
NETWORK_ACL_192DBA180AEB40AD98A73ACCD309FF8F       APEX_180200          http
NETWORK_ACL_296C00CF7F2744BAB526D4C4E85FE189       GGSYS                resolve
NETWORK_ACL_296C00CF7F2744BAB526D4C4E85FE189       GSMADMIN_INTERNAL    resolve

5 rows selected.

The upgrade took care of the ACLs for me! That’s pretty cool.
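
(Had the upgrade not done this, the manual equivalent would be along these lines – a sketch using the DBMS_NETWORK_ACL_ADMIN API from 12c onwards, with the host pattern purely illustrative:)


begin
  dbms_network_acl_admin.append_host_ace(
    host => '*',    -- illustrative; scope this to the hosts your web services call
    ace  => xs$ace_type(
              privilege_list => xs$name_list('http'),
              principal_name => 'APEX_190100',
              principal_type => xs_acl.ptype_db));
end;
/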

Application Express 19.1

AskTOM moved to Application Express 19.1 without any major issues last weekend. That in itself is a nice endorsement for APEX, given that the AskTOM application dates back nearly 20 years to 2001, and predates even the existence of APEX.

The only fix we had to make relates to the static CDN files that Joel Kallman blogged about, which AskTOM uses to make it nice and snappy wherever in the world it is used. The references to those files contain a hard-coded version number, so they needed to be updated. For AskTOM, we have a plugin using some jQuery elements that went pear-shaped when referencing the old version 18 files, but after a quick fix to that reference all was well.


Given that AskTOM is running on APEX 19, I figured I had best upgrade my local PC installation as well. I find the standard documentation a little cryptic when I want to perform an upgrade, because the docs have to cover all of the possible scenarios (CDB vs non-CDB, web listener vs EPG, etc), and as such, when you click on the section on Upgrading, you don’t appear to get a step-by-step guide on what to do.


But the secret is in the second paragraph:

“following any of the installations scenarios in the guide upgrades your Oracle Application Express instance…”

so here’s what I did to upgrade my local installation, which is installed within a pluggable database and running with a standalone ORDS listener. (I stress: always consult the standard documentation set before commencing your upgrade.)

  • Downloaded APEX and unzipped it into an appropriate location
  • Connected as SYSDBA in the pluggable database where APEX is installed
  • Ran the standard installation script (which automatically detects that this is an upgrade)

@apexins.sql SYSAUX SYSAUX TEMP /i/

  • Let ORDS know that an upgrade had been done

java -jar ords.war validate

  • And then reset the REST services component post-upgrade

@apex_rest_config.sql

and I was done! No dramas encountered, and APEX on my PC is now on 19.1.

Note: I moved my original APEX installation to a backup area, and placed the new version 19 installation in place of the old location, so I did not have to move or copy any images and static files around. Your approach may be different, so just to reiterate – consult the standard docs before diving into your upgrade.

Of course, all the dark mode fan bois will be telling me to switch to Dark Mode 🙂


but for me, I think it would be wrong to maintain a classic 2000s application such as AskTOM in anything but brilliant white 🙂