Generally my blog is just snippets of tech content that catch my interest as I encounter them (most commonly when looking at AskTOM). If I think they’ll be useful, I’ll just plonk them out right there and then. If you prefer your content in longer (and more structured) form, then I also publish longer-form articles in Oracle Magazine every couple of months. Below is a consolidated list of my articles. I’ll try to keep this list updated as I add new ones.
Old Dog, New Tricks, Part 2
Here’s a new SQL syntax for hierarchy processing.
Improved Defaults in Oracle Database 12c
The new DEFAULT clause provides better values for getting started and better performance.
New features in Oracle Database 12c Release 2 make external tables even more flexible.
Tighter PL/SQL and SQL Integration
PL/SQL functions perform better in SQL in Oracle Database 12c.
All Aboard the SQL*Loader Express
A new processing mode takes the hassle out of dealing with flat file loading.
Long and Overflowing
LISTAGG in Oracle Database 12c Release 2 solves the problem of excessively long lists.
Assume the Best; Plan for the Worst
Here’s a technique for delivering better performance through optimistic code programming.
A Fresh Look at Auditing Row Changes
Triggers can provide auditing information, but there’s a future in flashback.
Better Tools for Better Data
New functions in Oracle Database 12c Release 2 solve data validation challenges.
Unintended Side Effects
Ensure that the code you write does not create problems elsewhere in your applications.
Write in a Read-Only Database
Run reports and DML against your standby database with Oracle Active Data Guard.
Open for Exchange
FOR EXCHANGE in Oracle Database 12c Release 2 takes the detective work out of partition exchanges.
A Higher-Level Perspective on SQL Tuning
The commonly missed first steps of tuning a SQL statement.
Are We All on the Same Page?
Pagination of data can make (or break) your database.
Old Dog, New Tricks
Take advantage of SQL extensions for hierarchy processing.
There are plenty of blog posts about using the pre-processor facility in external tables to make OS-level information available from inside the database. Here’s a simple example of getting a directory listing:
run_os.bat
==========
@echo off
cd \oracle
dir /b
SQL> create table fs_size (
  2    disk varchar2(64)
  3  )
  4  organization external
  5  ( type oracle_loader
  6    default directory temp
  7    access parameters
  8    ( records delimited by newline
  9      preprocessor temp:'run_os.bat'
 10    )
 11    location ( temp:'empty.txt' )
 12  ) reject limit unlimited ;

Table created.

SQL>
SQL> select * from fs_size;

DISK
------------------------------------------------
18c_cloud
admin
adw_cloud
atp_cloud
audit
cfgtoollogs
checkpoints
datamodeler
dbsat
diag
instantclient
jdbc183
ords.184
ords122
ords181
ords184
ordsconf
product
sql
sqlcl
sqldeveloper
swingbench
wallet
So far so good. But on Windows, it can be quite particular about the runtime environment. For example, if I fire up a command prompt, as a logged in user I can easily get a listing of disk partitions on my machine:
C:\>wmic logicaldisk get caption
Caption
C:
D:
E:
F:
G:
M:
P:
X:
Z:
It would seem logical that this would be a simple drop-in replacement for the directory listing batch file I used before. I’ll amend the batch file to list out disk partitions
run_os.bat
==========
@echo off
wmic logicaldisk get caption
and now I’ll run my query again.
SQL> select * from fs_size;

no rows selected
Hmmm… that didn’t go as well as planned. Looking in the log file, there are no clues about the cause.
LOG file opened at 03/06/19 14:33:42

KUP-05007: Warning: Intra source concurrency disabled because the preprocessor option is being used.

Field Definitions for table FS_SIZE
  Record format DELIMITED BY NEWLINE
  Data in file has same endianness as the platform
  Rows with all null fields are accepted

  Fields in Data Source:

    DISK                            CHAR (255)
      Terminated by ","
      Trim whitespace same as SQL Loader
The issue here is that many Windows programs require a minimum environment configuration before they will run. In this case, I need to let Windows know the location of the OS installation.
run_os.bat
==========
@echo off
set SystemRoot=C:\WINDOWS
wmic logicaldisk get caption
And voila! I get access to many more Windows commands to probe information from the operating system.
SQL> select * from fs_size;

DISK
--------------------------------
Caption
C:
D:
E:
F:
G:
M:
P:
X:
Z:
Wow… it is nearly 4 years now since I wrote an article on connecting to the database via a wallet to avoid having to hard-code passwords into scripts. That article is here:
So I went to do a similar exercise on my new 18c Windows database today, and to my surprise things went pear-shaped at the very first step:
c:\oracle\product\18\bin>mkstore -create -wrl c:\temp\wallet
The syntax of the command is incorrect.
Not a lot of feedback there
Unfortunately it’s just a small error in the script (see MOS Note 2406404.1). All you need to do is copy the mkstore.bat from a 12.1 client and you’ll be fine. I called mine mkstore121.bat.
c:\oracle\product\18\bin>mkstore121.bat -create -wrl c:\temp\wallet
Oracle Secret Store Tool Release 18.0.0.0.0 - Production
Version 18.4.0.0.0
Copyright (c) 2004, 2017, Oracle and/or its affiliates. All rights reserved.

Enter password:   *********
Enter password again:   *********
Just a normal start to the day today…I had my coffee
and then started working on some AskTOM questions. Naturally pretty much the first thing I needed to do is connect to my database, and then this happened:
C:\oracle\product\18\bin>sqlplus scott/tiger@//gtx:1518/pdb1

SQL*Plus: Release 18.0.0.0.0 - Production on Wed Mar 6 09:23:09 2019
Version 18.4.0.0.0

Copyright (c) 1982, 2018, Oracle. All rights reserved.

ERROR:
ORA-12154: TNS:could not resolve the connect identifier specified
I made the standard assumptions, so once I’d ticked off the normal culprits, I then tried the other software installations on the same machine:
C:\oracle\instantclient>sqlplus scott/tiger@//gtx:1518/pdb1

SQL*Plus: Release 12.1.0.2.0 Production on Wed Mar 6 09:21:44 2019

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Wed Mar 06 2019 09:13:31 +08:00

Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
C:\oracle\product\xe\18\dbhomeXE\bin>sqlplus scott/tiger@//gtx:1518/pdb1

SQL*Plus: Release 18.0.0.0.0 - Production on Wed Mar 6 09:22:38 2019
Version 18.4.0.0.0

Copyright (c) 1982, 2018, Oracle. All rights reserved.

Last Successful login time: Wed Mar 06 2019 09:21:44 +08:00

Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0
and voila! Obviously everything is OK from the listener and database perspective.
So back to the original installation, and I had the light-bulb moment. I had been tinkering with sqlnet.ora to help out with a different question yesterday, and I had isolated the connection options down to TNSNAMES only.
In such a scenario, you need to explicitly allow the EZCONNECT method for the connection to work:
NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)
and then everything was fine back in the original installation directory.
C:\oracle\product\18\bin>sqlplus scott/tiger@//gtx:1518/pdb1

SQL*Plus: Release 18.0.0.0.0 - Production on Wed Mar 6 09:50:52 2019
Version 18.4.0.0.0

Copyright (c) 1982, 2018, Oracle. All rights reserved.

Last Successful login time: Wed Mar 06 2019 09:45:06 +08:00

Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0
If you have never touched your sqlnet.ora, it probably looks like this:
NAMES.DIRECTORY_PATH= (TNSNAMES, ONAMES, HOSTNAME)
but there is no need to panic. Those default values allow EZCONNECT as well.
It’s a holiday here in Perth today, so just a very brief blog post, because I’ve been busy doing other things.
When I’m contacted by customers or friends and asked to help out with tuning their applications, probably the two most common issues are one or two poorly performing SQL statements, or the server being overly stressed with CPU load with no single SQL being the obvious cause.
In either case, a common plan of attack is to probe V$SQL to gauge which statements are the potential troublemakers. Thanks to the myriad of statistical metrics stored in V$SQL, it is a very flexible view for discovering potential problems just by adjusting the predicates and ordering. For example:
Excessive disk I/O?

order by DISK_READS desc

Excessive buffer access?

order by BUFFER_GETS desc

Poor parsing applications?

order by PARSE_CALLS / nullif(EXECUTIONS,0) desc nulls last

Hogging the shared pool?

order by SHARABLE_MEM desc
each of which can be tweaked to home in on more specific requirements. But there is a risk in doing this. In particular, if you are diagnosing a system that is under duress, then hammering away at V$SQL may do more damage than good. Don’t forget that a system under duress is probably having its shared memory structures contended for just as heavily, and by aggressively querying V$SQL you may be adding to that workload and/or contention.
You might be thinking this leaves us in a Catch-22 situation: how can we diagnose a struggling system if diagnosing it will make it struggle more? Fortunately, a simple remedy is at hand.
Instead of querying V$SQL, try querying V$SQLSTATS instead. From the documentation:
V$SQLSTATS displays basic performance statistics for SQL cursors and contains one row per SQL statement (that is, one row per unique value of SQL_ID). The column definitions for columns in V$SQLSTATS are identical to those in the V$SQL and V$SQLAREA views. However, the V$SQLSTATS view differs from V$SQL and V$SQLAREA in that it is faster, more scalable, and has a greater data retention (the statistics may still appear in this view, even after the cursor has been aged out of the shared pool). Note that V$SQLSTATS contains a subset of columns that appear in V$SQL and V$SQLAREA.
Obviously this isn’t a leave pass to smash away at this performance view, but by avoiding V$SQL you are steering clear of a critical shared memory structure which any session wishing to parse/execute SQL will be competing for. V$SQLSTATS has been present since 10g and should be a drop-in replacement for your existing queries that use V$SQL.
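As an illustrative sketch (the column choices and top-10 cut-off here are just examples, not a prescription), an existing V$SQL query usually ports across with nothing more than a change of view name:

```sql
-- Top 10 statements by logical I/O, taken from V$SQLSTATS rather than
-- V$SQL, to avoid adding contention on the library cache of a busy system.
select *
from   ( select sql_id, executions, buffer_gets, disk_reads, sql_text
         from   v$sqlstats
         order  by buffer_gets desc )
where  rownum <= 10;
```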
Yes, I know it’s been a while
Yes, I know people have been angry at the delay
But, can we put that behind us, and rejoice in the fact…that YES
Yes, 18c XE for Windows is now available.
Most probably, the majority of developers in your organization are running a Windows PC. Now every single one of them can have a fully functioning Oracle database for developing, prototyping, testing, deploying, learning, exploring, educating… the list goes on… on their own PC, for absolutely zero cost.
Way back in Oracle 8.0 we introduced the “Object-Relational” database, which was “the next big thing” in the database community back then. Every vendor was scrambling to show just how cool their database technology was with the object-oriented programming paradigm.
Don’t get me wrong – using Oracle database object types and the features associated with them has made my programming life a lot easier over the years. But for me, it’s always been pretty much limited to that, i.e., programming, not actually using the object types in a database design as such. Nevertheless, using objects as columns, or even creating tables of objects, is supported by the database. For example, I can create an object type MY_OBJECT (which could itself be made up of objects) and then have a table, not with that object as a column, but actually a table of that object.
SQL> create type myobj as object ( x int, y int );
  2  /

Type created.

SQL> create table t of myobj;

Table created.
To populate that table, I need to construct objects before they can be inserted, because we are inserting objects, not rows.
SQL> insert into t
  2  select myobj(rownum,rownum)
  3  from dual connect by level <= 10;

10 rows created.
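As a quick aside (standard object-table syntax, not something specific to this example): because each row is an object instance, you can pull the whole object back out with the VALUE function rather than its relational projection:

```sql
-- Each row of an object table is an object instance; VALUE() returns
-- the object itself, e.g. MYOBJ(1, 1), rather than separate columns.
select value(t) from t t;
```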
Obviously, for good performance, we always need to gather optimizer statistics on database tables so that the optimizer can derive the best execution plans it can for any SQL statements that access this object table. Which leads to the question: Where can we see the optimizer statistics for an object table? Because once we get into object table territory, the first thing that normally takes DBAs and Developers by surprise is that the standard dictionary view for tables looks … sparse.
SQL> exec dbms_stats.gather_table_stats('','T');

PL/SQL procedure successfully completed.

SQL>
SQL> select *
  2  from user_tables
  3  where table_name = 'T';

no rows selected
A quick perusal of the documentation reveals where we can get information for object tables – the aptly named USER_OBJECT_TABLES view:
SQL> select *
  2  from user_object_tables
  3  where table_name = 'T'
  4  @pr
==============================
TABLE_NAME                    : T
TABLESPACE_NAME               : USERS
CLUSTER_NAME                  :
IOT_NAME                      :
STATUS                        : VALID
PCT_FREE                      : 10
PCT_USED                      :
INI_TRANS                     : 1
MAX_TRANS                     : 255
INITIAL_EXTENT                : 65536
NEXT_EXTENT                   : 1048576
MIN_EXTENTS                   : 1
MAX_EXTENTS                   : 2147483645
PCT_INCREASE                  :
FREELISTS                     :
FREELIST_GROUPS               :
LOGGING                       : YES
BACKED_UP                     : N
NUM_ROWS                      : 10
BLOCKS                        : 5
EMPTY_BLOCKS                  : 0
AVG_SPACE                     : 0
CHAIN_CNT                     : 0
AVG_ROW_LEN                   : 23
AVG_SPACE_FREELIST_BLOCKS     : 0
NUM_FREELIST_BLOCKS           : 0
DEGREE                        : 1
INSTANCES                     : 1
CACHE                         : N
TABLE_LOCK                    : ENABLED
SAMPLE_SIZE                   : 10
LAST_ANALYZED                 : 18-FEB-19
PARTITIONED                   : NO
IOT_TYPE                      :
OBJECT_ID_TYPE                : SYSTEM GENERATED
TABLE_TYPE_OWNER              : MCDONAC
TABLE_TYPE                    : MYOBJ
TEMPORARY                     : N
SECONDARY                     : N
NESTED                        : NO
BUFFER_POOL                   : DEFAULT
FLASH_CACHE                   : DEFAULT
CELL_FLASH_CACHE              : DEFAULT
ROW_MOVEMENT                  : DISABLED
GLOBAL_STATS                  : YES
USER_STATS                    : NO
DURATION                      :
SKIP_CORRUPT                  : DISABLED
MONITORING                    : YES
CLUSTER_OWNER                 :
DEPENDENCIES                  : DISABLED
COMPRESSION                   : DISABLED
COMPRESS_FOR                  :
DROPPED                       : NO
SEGMENT_CREATED               : YES
INMEMORY                      : DISABLED
INMEMORY_PRIORITY             :
INMEMORY_DISTRIBUTE           :
INMEMORY_COMPRESSION          :
INMEMORY_DUPLICATE            :
EXTERNAL                      : NO
CELLMEMORY                    :
INMEMORY_SERVICE              :
INMEMORY_SERVICE_NAME         :
MEMOPTIMIZE_READ              : DISABLED
MEMOPTIMIZE_WRITE             : DISABLED
HAS_SENSITIVE_COLUMN          : NO

PL/SQL procedure successfully completed.
So far so good. But if I want to really dig down into the optimizer statistics for a table, I need to look further afield than just USER_TABLES, at the dictionary views dedicated purely to optimizer statistics information, and that is when things get a little interesting for object tables.
SQL> select *
  2  from user_tab_statistics
  3  where table_name = 'T';

no rows selected

SQL> select *
  2  from user_tab_col_statistics
  3  where table_name = 'T'
  4  @pr
==============================
TABLE_NAME                    : T
COLUMN_NAME                   : SYS_NC_OID$
NUM_DISTINCT                  : 10
LOW_VALUE                     : 265DF9DB62004A2B90B6FB3D550CB9FD
HIGH_VALUE                    : E868127216F045C89F54B55A48EAD6CF
DENSITY                       : .1
NUM_NULLS                     : 0
NUM_BUCKETS                   : 1
LAST_ANALYZED                 : 18-FEB-19
SAMPLE_SIZE                   : 10
GLOBAL_STATS                  : YES
USER_STATS                    : NO
NOTES                         :
AVG_COL_LEN                   : 17
HISTOGRAM                     : NONE
SCOPE                         : SHARED
==============================
TABLE_NAME                    : T
COLUMN_NAME                   : X
NUM_DISTINCT                  : 10
LOW_VALUE                     : C102
HIGH_VALUE                    : C10B
DENSITY                       : .1
NUM_NULLS                     : 0
NUM_BUCKETS                   : 1
LAST_ANALYZED                 : 18-FEB-19
SAMPLE_SIZE                   : 10
GLOBAL_STATS                  : YES
USER_STATS                    : NO
NOTES                         :
AVG_COL_LEN                   : 3
HISTOGRAM                     : NONE
SCOPE                         : SHARED
==============================
TABLE_NAME                    : T
COLUMN_NAME                   : Y
NUM_DISTINCT                  : 10
LOW_VALUE                     : C102
HIGH_VALUE                    : C10B
DENSITY                       : .1
NUM_NULLS                     : 0
NUM_BUCKETS                   : 1
LAST_ANALYZED                 : 18-FEB-19
SAMPLE_SIZE                   : 10
GLOBAL_STATS                  : YES
USER_STATS                    : NO
NOTES                         :
AVG_COL_LEN                   : 3
HISTOGRAM                     : NONE
SCOPE                         : SHARED

PL/SQL procedure successfully completed.
I can get column level statistics, but there is no entry in USER_TAB_STATISTICS for my object table. Checking the documentation indicates that unlike the USER_TABLES/USER_OBJECT_TABLES pairing, there is no partnering object-based view to match USER_TAB_STATISTICS. Digging into the definition for the USER_TAB_STATISTICS view shows that object tables are explicitly excluded:
and bitand(t.property, 1) = 0 /* not a typed table */
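If you want to verify that predicate yourself, the view text can be pulled straight out of the dictionary (a sketch; the view is owned by SYS, and SET LONG is just there so SQL*Plus does not truncate the output):

```sql
-- Display the definition of USER_TAB_STATISTICS; the
-- bitand(t.property, 1) = 0 filter appears in the view text.
set long 20000
select text
from   all_views
where  owner = 'SYS'
and    view_name = 'USER_TAB_STATISTICS';
```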
That might not seem such a big deal, but what happens if you lock the statistics for a table?
SQL> exec dbms_stats.lock_table_stats('','T');

PL/SQL procedure successfully completed.
The STATTYPE_LOCKED column is not in USER_TABLES, and since we do not have an entry for the table in USER_TAB_STATISTICS, there is no direct means of seeing whether an object table has locked statistics. If you are faced with this problem, you have a couple of options at your disposal:
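To make the gap concrete, here is the check you would normally run against a heap table; for our object table T it simply comes back empty (same table as above):

```sql
-- For a regular table this would show STATTYPE_LOCKED = 'ALL' once the
-- stats are locked, but object tables are filtered out of this view,
-- so no row is returned for T.
select table_name, stattype_locked
from   user_tab_statistics
where  table_name = 'T';
```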
1) It would appear that the same flag on the internal dictionary table that indicates locked statistics is set for object tables just as it is for standard relational tables. Hence you could create a clone of the USER_TAB_STATISTICS view and remove the BITAND condition on the PROPERTY column. That of course is a very unsupported thing to do, and is just making a rod for your own back every time you patch or upgrade the database.
2) The other option is to assume that no-one is going to “mix and match” locking table statistics with index statistics. Every object table has an underlying index that is automatically created, so you can look at the locked status for this underlying index as a representative indicator of the table’s statistics locked state.
SQL> select *
  2  from user_ind_statistics
  3  where table_name = 'T'
  4  @pr
==============================
INDEX_NAME                    : SYS_C0028743
TABLE_OWNER                   : MCDONAC
TABLE_NAME                    : T
PARTITION_NAME                :
PARTITION_POSITION            :
SUBPARTITION_NAME             :
SUBPARTITION_POSITION         :
OBJECT_TYPE                   : INDEX
BLEVEL                        : 0
LEAF_BLOCKS                   : 1
DISTINCT_KEYS                 : 10
AVG_LEAF_BLOCKS_PER_KEY       : 1
AVG_DATA_BLOCKS_PER_KEY       : 1
CLUSTERING_FACTOR             : 1
NUM_ROWS                      : 10
AVG_CACHED_BLOCKS             :
AVG_CACHE_HIT_RATIO           :
SAMPLE_SIZE                   : 10
LAST_ANALYZED                 : 18-FEB-19
GLOBAL_STATS                  : YES
USER_STATS                    : NO
STATTYPE_LOCKED               : ALL
STALE_STATS                   : NO
SCOPE                         : SHARED

PL/SQL procedure successfully completed.
TL;DR: Querying optimizer statistics for object tables takes a little more care than with normal tables.