These tweets somewhat befuddle me.
Because to be honest, no matter whether I say “Yes” or “No”, for someone to then take that response at face value…well…whilst I’m flattered by the level of trust, I’m also a little saddened, because I think our community is best served by everyone always insisting on evidence for claims.
So naturally, it behoves me to conduct an experiment, which is the cause of my befuddlement. If the only way to answer this question with any confidence is via experiment, then perhaps these tweets should have been “Hey, look at the results of my experiment!” or at least “Has anyone performed this experiment already?”. I’ll give the benefit of the doubt on this one and assume the latter.
Anyway…let us proceed. I’ll create a table with a BASICFILE LOB column, and load it up with a little under 4GB of data from ~300 files taken at random from my PC, ranging from a few kilobytes up to 500MB. (The loader block below quietly skips any file it cannot load, which is why the final row count comes in a little under 300.)
SQL> create table t_basic ( id int generated as identity, data blob ) tablespace largets lob ( data ) store as basicfile;
Table created.
SQL> declare
  2    flist sys.odcivarchar2list :=
  3      sys.odcivarchar2list(
  4         '12c_DBFS_setup.pdf'
...
...
271        ,'zoom_dino.png'
272        ,'zoom_dino2.jpg'
273        ,'zoom_dino3.jpg'
274        ,'zoom_shark.jpg'
275        ,'zoom_shark.webp'
276      );
277
278    bf   bfile;
279    b    blob;
280    dest int;
281    src  int;
282
283  begin
284    for i in 1 .. flist.count
285    loop
286      -- each file becomes one row in the table
287      begin
288        insert into t_basic (data)
289        values (empty_blob())
290        return data into b;
291        dest := 1;
292        src  := 1;
293        bf := bfilename('TMP', flist(i));
294        dbms_lob.fileopen(bf, dbms_lob.file_readonly);
295        dbms_lob.loadblobfromfile(b, bf, dbms_lob.lobmaxsize, dest, src);
296        dbms_lob.fileclose(bf);
297        commit;
298      exception
299        when others then
300          -- skip any file that cannot be loaded, but make sure the bfile gets closed
301          begin dbms_lob.fileclose(bf); exception when others then null; end;
302      end;
303    end loop;
304  end;
305  /
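A quick aside: the block above assumes a directory object called TMP pointing at the folder holding the source files, and the DataPump steps later assume a directory called D_DRIVE. If you are playing along at home, that setup is just the usual directory-object housekeeping, along the lines of the statements below; the paths are placeholders for wherever your files actually live.

-- run as a suitably privileged user; the paths are just examples
create or replace directory tmp     as 'C:\tmp';
create or replace directory d_drive as 'D:\oracle\oradata';
grant read        on directory tmp     to mcdonac;
grant read, write on directory d_drive to mcdonac;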
SQL> select count(*), sum(dbms_lob.getlength(data))
2 from t_basic;
  COUNT(*) SUM(DBMS_LOB.GETLENGTH(DATA))
---------- -----------------------------
       272                    3508933562
SQL>
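Before doing anything else, it is worth double-checking what LOB storage we have actually ended up with. The SECUREFILE column in USER_LOBS reports YES or NO for each LOB column, so a quick query along these lines confirms that T_BASIC is currently a BASICFILE, and can be re-run after each test to confirm the storage in play.

select table_name, column_name, securefile
from   user_lobs
where  table_name = 'T_BASIC';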
Now we’ll DataPump export that out, and look at a few options for reloading.
C:\>expdp userid=mcdonac/xxxxxxxx@db19_pdb1 directory=D_DRIVE tables=T_BASIC dumpfile=t_basic.dmp
Export: Release 19.0.0.0.0 - Production on Wed Jun 24 11:15:16 2020
Version 19.7.0.0.0
Copyright (c) 1982, 2020, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Starting "MCDONAC"."SYS_EXPORT_TABLE_01": userid=mcdonac/********@db19_pdb1 directory=D_DRIVE tables=T_BASIC dumpfile=t_basic.dmp
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/IDENTITY_COLUMN
. . exported "MCDONAC"."T_BASIC" 3.268 GB 272 rows
Master table "MCDONAC"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for MCDONAC.SYS_EXPORT_TABLE_01 is:
D:\ORACLE\ORADATA\T_BASIC.DMP
Job "MCDONAC"."SYS_EXPORT_TABLE_01" successfully completed at Wed Jun 24 11:16:17 2020 elapsed 0 00:00:59
Test 1: Reload as BASICFILE
SQL> truncate table t_basic reuse storage;
Table truncated.
C:\>impdp userid=mcdonac/xxxxxxxx@db19_pdb1 directory=D_DRIVE tables=T_BASIC dumpfile=t_basic.dmp TABLE_EXISTS_ACTION=APPEND
Import: Release 19.0.0.0.0 - Production on Wed Jun 24 11:23:31 2020
Version 19.7.0.0.0
Copyright (c) 1982, 2020, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Master table "MCDONAC"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "MCDONAC"."SYS_IMPORT_TABLE_01": userid=mcdonac/********@db19_pdb1 directory=D_DRIVE tables=T_BASIC dumpfile=t_basic.dmp TABLE_EXISTS_ACTION=APPEN
D
Processing object type TABLE_EXPORT/TABLE/TABLE
Table "MCDONAC"."T_BASIC" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "MCDONAC"."T_BASIC" 3.268 GB 272 rows
Processing object type TABLE_EXPORT/TABLE/IDENTITY_COLUMN
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "MCDONAC"."SYS_IMPORT_TABLE_01" successfully completed at Wed Jun 24 11:25:02 2020 elapsed 0 00:01:30
So 90 seconds is our baseline value to reload the table “as is” (BASICFILE into BASICFILE) back into the database.
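As an aside, if you want more granular detail than the single elapsed figure at the end of the log, DataPump has the standard METRICS and LOGTIME parameters, which add per-object-type work metrics and timestamps to the output. I didn’t need them for a single-table test like this, but the command would look something like this:

impdp userid=mcdonac/xxxxxxxx@db19_pdb1 directory=D_DRIVE tables=T_BASIC dumpfile=t_basic.dmp table_exists_action=APPEND metrics=yes logtime=all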
Test 2: Alter to SECUREFILE in database, and reload
SQL> truncate table t_basic reuse storage;
Table truncated.
SQL> alter table t_basic move lob ( data ) store as securefile;
Table altered.
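Note that the MOVE above simply converts the (currently empty) LOB to a plain SECUREFILE. If you also wanted to take advantage of SECUREFILE-only features such as compression or deduplication, they can be specified as part of the same operation, along the lines of the statement below (keeping in mind that compression and deduplication require the Advanced Compression option).

alter table t_basic move lob ( data ) store as securefile ( compress medium deduplicate );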
C:\>impdp userid=mcdonac/xxxxxxxx@db19_pdb1 directory=D_DRIVE tables=T_BASIC dumpfile=t_basic.dmp TABLE_EXISTS_ACTION=APPEND
Import: Release 19.0.0.0.0 - Production on Wed Jun 24 11:25:55 2020
Version 19.7.0.0.0
Copyright (c) 1982, 2020, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Master table "MCDONAC"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "MCDONAC"."SYS_IMPORT_TABLE_01": userid=mcdonac/********@db19_pdb1 directory=D_DRIVE tables=T_BASIC dumpfile=t_basic.dmp TABLE_EXISTS_ACTION=APPEN
D
Processing object type TABLE_EXPORT/TABLE/TABLE
Table "MCDONAC"."T_BASIC" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "MCDONAC"."T_BASIC" 3.268 GB 272 rows
Processing object type TABLE_EXPORT/TABLE/IDENTITY_COLUMN
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "MCDONAC"."SYS_IMPORT_TABLE_01" successfully completed at Wed Jun 24 11:26:50 2020 elapsed 0 00:00:55
The SECUREFILE load is faster. Only 55 seconds this time.
Test 3: Transform via IMPDP and reload
SQL> drop table t_basic purge;
Table dropped.
C:\>impdp userid=mcdonac/xxxxxxxx@db19_pdb1 directory=D_DRIVE tables=T_BASIC dumpfile=t_basic.dmp transform=lob_storage:securefile
Import: Release 19.0.0.0.0 - Production on Wed Jun 24 11:28:00 2020
Version 19.7.0.0.0
Copyright (c) 1982, 2020, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Master table "MCDONAC"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "MCDONAC"."SYS_IMPORT_TABLE_01": userid=mcdonac/********@db19_pdb1 directory=D_DRIVE tables=T_BASIC dumpfile=t_basic.dmp transform=lob_storage:sec
urefile
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "MCDONAC"."T_BASIC" 3.268 GB 272 rows
Processing object type TABLE_EXPORT/TABLE/IDENTITY_COLUMN
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "MCDONAC"."SYS_IMPORT_TABLE_01" successfully completed at Wed Jun 24 11:29:01 2020 elapsed 0 00:01:00
And almost identical results from the transform parameter via DataPump. The little bit of extra time is probably attributable to the cost of creating a brand new table, and also to extending the high water mark as it grows throughout the import.
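And if you are curious whether the change of storage format alters the on-disk footprint, comparing the segment sizes before and after the conversion will tell you. Given everything in this example lives in the LARGETS tablespace, a query along these lines does the job:

select segment_name, segment_type, round(bytes/1024/1024) size_mb
from   user_segments
where  tablespace_name = 'LARGETS'
order  by bytes desc;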
Thus to answer the question: there is a good chance that migrating to SECUREFILE will not only give you the functionality benefits of the newer storage type, but that the reload itself will probably be faster as well… and let’s hope that everyone in the community is motivated to always perform their own experiments to build our collective knowledge.