You’ve got a huge table, right? Massive! Immense! And then something bad happens: you get asked to remove one of the columns from that table.
“No problem” you think. “I won’t run the ‘drop column’ command because that will visit every block and take forever!”
So you settle on the perfect tool for such a scenario: simply mark the column as unused, so that it is no longer available to application code and the developers who write that code.
But there’s a catch that not many people know about. SET UNUSED is meant to be just a trivial data dictionary operation: we don’t touch the data, we only manipulate the column definition, so an execution of SET UNUSED should be instantaneous. This is indeed the case most of the time, but as the example below shows, the way the column was added to the table can have a bearing on what it costs to have that column removed.
SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production

--
-- large-ish table to monitor the impact
--
SQL> create table t1
  2  as
  3  select d.*
  4  from dba_objects d,
  5       ( select 1 from dual connect by level <= 100);

Table created.

SQL> set timing on
SQL> select count(*) from t1;

  COUNT(*)
----------
   8713700

Elapsed: 00:00:01.92

--
-- Take an existing column to UNUSED ... instantaneous!
--
SQL> alter table t1 set unused column created;

Table altered.

Elapsed: 00:00:00.06

--
-- Add a new column, and then make it UNUSED ... instantaneous!
--
SQL> alter table t1 add new_col1 timestamp;

Table altered.

Elapsed: 00:00:00.01

SQL> alter table t1 set unused column new_col1;

Table altered.

Elapsed: 00:00:00.00

--
-- Add a new column with a not null default, and then make it UNUSED ... careful!
--
SQL> alter table t1 add new_col2 timestamp default sysdate not null;

Table altered.

Elapsed: 00:00:00.00

SQL> alter table t1 set unused column new_col2;

Table altered.

Elapsed: 00:01:35.39
You will not get the same issue in 12c.
SQL> select banner from v$version;

BANNER
-----------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

SQL> create table t1 nologging
  2  as
  3  select d.*
  4  from dba_objects d,
  5       ( select 1 from dual connect by level <= 100);

Table created.

SQL> set timing on
SQL> select count(*) from t1;

  COUNT(*)
----------
   7951500

Elapsed: 00:00:02.20

SQL> alter table t1 set unused column created;

Table altered.

Elapsed: 00:00:00.08

SQL> alter table t1 add new_col1 timestamp;

Table altered.

Elapsed: 00:00:00.00

SQL> alter table t1 set unused column new_col1;

Table altered.

Elapsed: 00:00:00.00

SQL> alter table t1 add new_col2 timestamp default sysdate not null;

Table altered.

Elapsed: 00:00:00.01

SQL> alter table t1 set unused column new_col2;

Table altered.

Elapsed: 00:00:00.01
Just a quick post to clear up some confusion about the partial indexing capabilities in Oracle Database 12c and above. I was at an event in the Oracle Perth office last week, and an attendee mentioned that they thought the feature was not working, or was not enabled on their database, and presented the following demonstration to prove their case:
SQL> create table t ( x int, y int )
  2  partition by range ( x )
  3  (
  4    partition p1 values less than ( 1000 ) indexing on,
  5    partition p2 values less than ( 2000 ) indexing off
  6  );

Table created.

SQL> insert into t
  2  select rownum, rownum from dual connect by level < 2000;

1999 rows created.

SQL> create index ix on t ( x ) local;

Index created.

SQL> select segment_name, partition_name
  2  from user_segments
  3  where segment_name = 'IX';

SEGMENT_NAME                   PARTITION_NAME
------------------------------ ------------------------------
IX                             P1
IX                             P2

SQL> select partition_name, status
  2  from user_ind_partitions
  3  where index_name = 'IX';

PARTITION_NAME                 STATUS
------------------------------ --------
P1                             USABLE
P2                             USABLE
At first glance, this looks counter-intuitive. I have explicitly specified that I do not want indexing on partition P2, yet after creating a local index, I still have 2 segments, one for each partition, and double-checking USER_IND_PARTITIONS tells me that they are both “fully-fledged” usable index partitions.
As per the documentation linked above, nominating the INDEXING ON / OFF at the partition level on the table definition is not the whole story. When you create the index, you need to inform the database that you wish a particular index to respect that intent. This is because you might want some indexes to be partial and others not to be.
So by slightly changing my CREATE INDEX statement, I can get the desired outcome.
SQL> drop index ix;

Index dropped.

SQL> create index ix on t ( x ) local indexing partial;

Index created.

SQL> select segment_name, partition_name
  2  from user_segments
  3  where segment_name = 'IX';

SEGMENT_NAME                   PARTITION_NAME
------------------------------ ------------------------------
IX                             P1

SQL> select partition_name, status
  2  from user_ind_partitions
  3  where index_name = 'IX';

PARTITION_NAME                 STATUS
------------------------------ --------
P1                             USABLE
P2                             UNUSABLE
Chris and I will be at OpenWorld next month. Our session details are below, and you can click the links to add entries to your calendar so that you don’t miss us. Don’t forget to use the official Schedule Builder to make sure you have booked your spot at our sessions!
If you can’t get to our sessions, you might catch us wandering the halls or at The Hub. Feel free to come and say Hello and talk tech!
See you at OpenWorld!
Addendum: see the slides from my talks here
Calendar ICS files
Tune ANY SQL in 20 minutes (.ics)
Fast Lane to Database Success (.ics)
Developers – Don’t Be The Person That Discombobulates Your Database (.ics)
18 Things Developers Will Love About Database 18c (.ics)
Why Isn’t My Query Using An Index (.ics)
Last year, I flexed my technology muscles by building on the fine groundwork of Lucas Jellema, using some Node, some REST, and some JSON to extract the full Oracle OpenWorld speaker catalogue. I then added some JSON parsing in the database and some SQL, and slapped a nice helping of Application Express on top of it all, to end up with an Application Express rendition of the Oracle OpenWorld speaker catalogue.
So it is that time of year again, and whilst some of the bits and pieces have changed, I’ve managed to stumble my way through all of the different pieces once again, and made some enhancements along the way to give you the new and improved 2018 version!
It’s waiting there on apex.oracle.com for you to explore.
Whilst the ever-increasing speed of storage and servers, and features like In-Memory, are decreasing the need for large numbers of materialized views in databases, there are still plenty of use cases where they can be a useful performance or integrity tool.
But what if that materialized view takes minutes or hours to build? Even if it is fast refresh enabled, fast refresh is of no use if we have to rebuild the entire materialized view from scratch due to a structural change in the underlying table.
For example, let’s say I have a table and a simple fast refresh materialized view on that table as below:
SQL> create table t(x number(8) primary key);

Table created.

SQL> insert into t values(55);

1 row created.

SQL> create materialized view log on t with primary key, rowid;

Materialized view log created.

SQL> create materialized view mv
  2  build immediate
  3  refresh fast on demand as
  4  select * from t;

Materialized view created.

SQL> select * from t;

         X
----------
        55

1 row selected.

SQL> select * from mv;

         X
----------
        55

1 row selected.

SQL> insert into t values (10);

1 row created.

SQL> insert into t values (20);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from t;

         X
----------
        10
        20
        55

3 rows selected.

SQL> select * from mv;

         X
----------
        55

1 row selected.

SQL> exec dbms_mview.refresh('mv');

PL/SQL procedure successfully completed.

SQL> select * from mv;

         X
----------
        10
        20
        55

3 rows selected.
For the sake of this discussion, let’s assume the build of the materialized view takes hours. Naturally we want to avoid having to do a rebuild (or a complete refresh) of that materialized view. But then… the inevitable happens. We need to change the table T. In this case, the values for column X now exceed the limits of the definition.
SQL> insert into t values (123123123);
insert into t values (123123123)
                      *
ERROR at line 1:
ORA-01438: value larger than specified precision allowed for this column
Obviously, this is a trivial fix for the table. We simply alter the column to make it larger (which does not require any downtime or reorganization of the data).
SQL> alter table t modify x number(10);
But all is not well… Having a materialized view means that either the materialized view log, or the materialized view itself, may have replicas of that column, so they remain “broken”.
SQL> insert into t values (123123123);
insert into t values (123123123)
            *
ERROR at line 1:
ORA-12096: error in materialized view log on "MCDONAC"."T"
ORA-01438: value larger than specified precision allowed for this column
Notice the subtle difference in the error message. It is not an error on table T, but an error on the materialized view log. You might have the urge to simply jump in and run the alter commands against the materialized view log and the materialized view. And you might even find that this approach works. But please note: this approach is not supported, and thus we can’t guarantee that it will (a) work, or (b) not create problems later down the track when you attempt to refresh the view or perform other operations.
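To make that concrete, the tempting direct route would look something like the sketch below, assuming the default MLOG$_ naming convention for the materialized view log table. To repeat: this is the unsupported path, shown only so you know what to avoid.

SQL> -- UNSUPPORTED - do not run these while the materialized view still exists
SQL> alter table mlog$_t modify x number(10);
SQL> alter table mv modify x number(10);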
But if direct alteration is not supported, how do we solve the problem without having to rebuild the entire materialized view from scratch?
The solution here is the option to preserve the materialized view as a standalone table. We can drop the definition of the materialized view but hold on to the table that supports it. Once it is a standalone table, we can alter the column definition so that it matches our source table T. Notice that before I drop the definition, I perform a final refresh to make sure the materialized view is totally up to date, so there is a little bit of coordination required here to make sure that you do not lose any changes that occur to table T during the process.
SQL> exec dbms_mview.refresh('mv');

PL/SQL procedure successfully completed.

SQL> drop materialized view mv preserve table;

Materialized view dropped.

SQL> drop materialized view log on t;

Materialized view log dropped.

SQL> alter table t modify x number(10);

Table altered.

SQL> alter table mv modify x number(10);

Table altered.
We still don’t have our materialized view back though. But we can recreate it without needing a full build cycle, using the PREBUILT table clause.
SQL> create materialized view log on t with primary key, rowid;

Materialized view log created.

SQL> create materialized view mv
  2  on prebuilt table
  3  refresh fast on demand as
  4  select * from t;

Materialized view created.
And we are done! An easy and supported means of altering the materialized view structure without a full rebuild of the data.
Just a quick post today that arose from an AskTOM question a little while back. Over the years, and across the evolution of various versions of the Oracle database, the amount of “power” you can pack into a CREATE TABLE statement has grown. For example, I can do a CREATE-TABLE-AS-SELECT, along with a partitioning clause, including an explicit CREATE INDEX command, and add constraints, all in one atomic operation.
SQL> create table t
  2  (id1,
  3   id2,
  4   owner,
  5   object_name,
  6     constraint t_pk primary key ( id1 ) using index
  7       ( create index t_pk on t ( id1, owner ) )
  8  )
  9  partition by range ( id2 )
 10  (
 11    partition p1 values less than ( 100000 ),
 12    partition p2 values less than ( 200000 )
 13  )
 14  as
 15  select
 16    object_id id1,
 17    object_id id2,
 18    owner,
 19    object_name
 20  from dba_objects
 21  where object_id is not null;

Table created.
That’s pretty cool, but one of the most common times you will write DDL that includes a “select * from” as part of the definition is during the creation of materialized views, because the view text naturally contains a select statement. Unfortunately, we are not quite as “generous” when it comes to accepting all of the various physical implementation options in the DDL for materialized views. Trying to create a materialized view that matches our table T above is a struggle.
SQL> create materialized view t_mv
  2  (id1,
  3   id2,
  4   owner,
  5   object_name,
  6     constraint t_mv_pk primary key ( id1 ) using index
  7       ( create index t_mv_pk on t_mv ( id1, owner ) )
  8  )
  9  partition by range ( id2 )
 10  (
 11    partition p1 values less than ( 100000 ),
 12    partition p2 values less than ( 200000 )
 13  )
 14  refresh complete on demand
 15  as
 16  select
 17    id1,
 18    id2,
 19    owner,
 20    object_name
 21  from t;
  constraint t_mv_pk primary key ( id1 ) using index
  *
ERROR at line 6:
ORA-00907: missing right parenthesis

SQL> create materialized view t_mv
  2  (id1,
  3   id2,
  4   owner,
  5   object_name,
  6     constraint t_mv_pk primary key ( id1 ) using index
  7       ( create index t_mv_pk on t_mv ( id1, owner ) )
  8  )
  9  refresh complete on demand
 10  partition by range ( id2 )
 11  (
 12    partition p1 values less than ( 100000 ),
 13    partition p2 values less than ( 200000 )
 14  )
 15  as
 16  select
 17    id1,
 18    id2,
 19    owner,
 20    object_name
 21  from t;
  constraint t_mv_pk primary key ( id1 ) using index
  *
ERROR at line 6:
ORA-00907: missing right parenthesis
When you encounter this limitation, don’t forget that one of the nice things you can do with materialized view creation is pre-create the underlying table with all of its flexibility:
SQL> create table t_mv
  2  (id1,
  3   id2,
  4   owner,
  5   object_name,
  6     constraint t_mv_pk primary key ( id1 ) using index
  7       ( create index t_mv_pk on t_mv ( id1, owner ) )
  8  )
  9  partition by range ( id2 )
 10  (
 11    partition p1 values less than ( 100000 ),
 12    partition p2 values less than ( 200000 )
 13  )
 14  as
 15  select
 16    id1,
 17    id2,
 18    owner,
 19    object_name
 20  from t;

Table created.
And then once that is done, you can use the PREBUILT TABLE clause to create your materialized view, which will now satisfy all of the underlying physical structure elements you desired.
SQL> create materialized view t_mv
  2  on prebuilt table
  3  refresh complete on demand
  4  as select
  5    id1,
  6    id2,
  7    owner,
  8    object_name
  9  from t;

Materialized view created.
Hopefully you’ve followed my very simple and easy guide to downloading the 18c database software for Windows. But of course, software on its own is not much use – we need a database! So let’s get cracking and create one. Using the Start menu like I’ve done below, or using the Windows panels, locate the Database Configuration Assistant and start it.
After a few seconds the initial screen will ask what you want to do. Choose “Create Database”.
If you like, you could just go with “Typical Configuration” and you’ll be done in just a couple of clicks, but I always prefer to opt for the “Advanced Configuration” for two reasons. Firstly, even if you accept all of the defaults, it gives you a better idea of what options are going to be installed, where the files will be stored, etc. And secondly… well… we all like to think of ourselves as advanced, don’t we?
For just research and exploration on your own Windows machine, you’ll probably want to opt for a single instance database; RAC takes a bit more setup and complexity. For a faster install, choose one of the options that has the datafiles included. This way, the installer will just copy some existing files and seed them as your database, rather than building the entire instance from scratch.
Now choose a unique and memorable name for your database. I’ve reached into the depths of my imagination and come up with “db18” for my version 18 database. Go figure. I’d also recommend you go with a container database configuration, because that is the strategic direction for Oracle going forward, so if you are going to have a database to skill up on, it makes sense for that database to be a container database.
To keep things simple, I’m just nominating a single location for all of my database files. It can be anywhere, but a common convention is that wherever you house them, you’ll have a folder called “oradata” and then a folder for each database you create under that.
I’m skipping the fast recovery area and archiving at this stage. If I start to get serious with testing things like backup and recovery, then I would revisit this after database creation to enable at least archiving so that I can explore all the goodies that RMAN has to offer.
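If you do revisit it later, flipping the database into archivelog mode is a quick job. A minimal sketch, run as SYSDBA, noting that the database is briefly unavailable during the bounce:

SQL> -- enable archiving after the fact (database must be cleanly mounted)
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;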
On a brand new installation, it is likely you will not have a pre-existing listener to choose from. (If you had a previous installation, or had run the Network Configuration Assistant already, then you would see a listener to use).
I’m creating one called LISTENER18. The default port is normally 1521, but I’ve opted for 1518 just to align it with the version I’m using.
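For reference, the resulting entry in listener.ora should look roughly like the sketch below. The host and IPC key shown here are assumptions; yours will reflect your own machine:

LISTENER18 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1518))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1518))
    )
  )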
I’m skipping Data Vault and Label Security, but hopefully you can now see why it’s cool to go with the “Advanced Configuration” – you get to see all the potential functionality areas of the database that you might want to explore.
Now you choose how much of your server/desktop/laptop you’re going to allow this database to grab. My machine has plenty of RAM, but it also has a stack of other database versions running on it to handle my AskTOM daily tasks. So I’ll keep this dude at around 6G.
Now we’ll flick across the other tabs on this screen to see if there is anything of note. The default for processes seems to have gone up in this version (I think it used to be around 300 in 12c), but in any event, that’s more than enough for me on this machine.
I have a simple rule for character sets: UTF all the way. Single-byte character sets are soooo last century. We live in a global village, so you should be able to handle characters from all over the world!
And for the last tab, I’m going to opt for the sample schemas, so that when my database is created I’m not just left with an empty database shell. I want some sample data there so I can jump straight in and start experimenting with this release.
If I’m a DBA, I might go for some EM management to see how that all works, but for now, I’m skipping that.
Being a sandbox for experimenting, I’m setting all the passwords to a common value. Naturally this will not be what you do for your production databases!
So now I’m ready to go. I always tick the “Generate Scripts” option because it lets me see what is actually occurring when the creation assistant is doing its job. Even so, the scripts are probably not what I would use to automate a database install, since the ‘dbca’ command has nice command line options nowadays, along with the ‘-silent’ option, so you can create an entire database with just a single command.
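As a rough sketch only (the database name, passwords and memory figure below are just my examples, and parameter details can vary between versions, so check ‘dbca -help’ on your own install), a silent creation along the lines of what I’ve clicked through above might look like:

REM silent database creation resembling the wizard choices above
dbca -silent -createDatabase ^
  -templateName General_Purpose.dbc ^
  -gdbName db18 -sid db18 ^
  -createAsContainerDatabase true ^
  -characterSet AL32UTF8 ^
  -totalMemory 6144 ^
  -sysPassword mysecret -systemPassword mysecret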
I finally get a summary of what is about to occur, and we’re off! Database creation is go for launch!
The total time to create your database will depend on your hardware, in particular how fast your storage is. I’ve done a few creations now using both flash storage and conventional hard drives, and unsurprisingly the flash storage is faster. You’re probably looking at around 10 minutes to complete.
When your database creation is complete, you’ll get the standard summary screen and you are ready to go.
And here it is – the finished product! My 18c database on Windows is ready to go. By default, the underlying Windows service will have a Start status of “Automatic”, which means your database will start every time Windows starts. If you are short on memory, or do not want the database started unless you explicitly choose to start it, you can set this to Manual via “services.msc”.
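If you prefer a command line over clicking through services.msc, something like this should do the trick. The service name follows the OracleService<SID> convention, so adjust it for your own database name:

REM run from an elevated (Administrator) command prompt
sc config OracleServiceDB18 start= demand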
You can watch the video version of this installation here
Enjoy your 18c Windows database !
If you’re a Windows enterprise, or you want to run your 18c database on your Windows laptop/desktop for research and education, then there has been some good news this week. The software is now available to you on the OTN network page. Here’s a walkthrough of the software installation process.
Head to the standard database downloads page
Accept the license agreement and choose the Windows version to download
Note – if you want to see all of the various Windows 18c components (grid, client, examples, etc), you can get that here
Once you have downloaded the software, note that when you unzip it, you are unzipping it directly into place, not into a staging area. So unzip to a folder that you intend to be your ORACLE_HOME location.
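For example, using PowerShell (the zip file name and target folder here are just illustrative; use your actual download name and whatever ORACLE_HOME path suits you):

# unzip straight into the intended ORACLE_HOME, not a staging area
Expand-Archive -Path .\WINDOWS.X64_180000_db_home.zip -DestinationPath C:\oracle\product\18.0.0\dbhome_1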
Once the unzip has completed, there will be a setup.exe file in the base directory where you unzipped the files. Double-click on that to launch the installer.
The familiar Java-based software installer will appear. For this blog post, I opted to go solely with software configuration – I’ll cover database creation in a separate post.
For my use, I’m just using a home laptop, so single instance for me. If you do want RAC, you’ll be needing to download more components anyway (eg Grid)
You’ll then get the standard pre-installation checks on your machine. For reference, the machine I’m installing the software on is a 32G RAM machine running Windows 10, and I did not get any warnings.
I nominated my existing ORACLE_BASE location as the target for this ORACLE_HOME as well
and went with the default option of using a virtual Windows account to own the software:
Note: The first time I did this install, I went with the Windows Built-In account, because I had an old 11g database installation under the same ORACLE_BASE, and I thought at least it would be consistent. Whilst the installation and subsequent use of 18c worked fine, and my 11g instance was fine, it totally hosed my 12c installation, which was also under the same ORACLE_BASE. The 12c installation had been done with the default virtual account, and it appears the subsequent addition of 18c using the built-in Windows SYSTEM account reset the permissions on the critical diagnostic directory paths. From that point on, my 12c installation could not start, and got “permission denied” errors when trying to access various destinations.
So my advice would be – adopt a consistent approach for any software under a common ORACLE_BASE.
Next you choose the edition you want to install. Enterprise for me naturally
And then you are ready to install. Just hit the Install button, sit back and relax.
You will see the familiar progress dialog box, and like all progress boxes from any vendor, the percentage complete will typically have no true bearing on how long the process will take. For the record, my installation took around 8-10 mins at this stage, most of it configuring the central inventory.
All things going well, you finally get the confirmation screen, and voila! Your 18c database software is installed!
Here’s an (accelerated) video showing the above steps as they were performed on my machine.
Enjoy 18c on Windows!
In an Office Hours session a couple of months back, I covered an important change that comes to regular expressions once you upgrade to 12c Release 2. You can see the video covering the issue here:
but for the TL;DR brigade reading this post: regular expressions are not deterministic once you take NLS settings into account, and thus cannot be used in constraints and/or function-based indexes.
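As a quick sketch of the problem (the exact behaviour you see will depend on your own NLS configuration), the same predicate can give different answers in different sessions, because the default case sensitivity of the regular expression engine is derived from NLS_SORT:

SQL> -- with a binary NLS_SORT, 'SMITH' should not match a lower-case range
SQL> select count(*) from dual where regexp_like('SMITH', '[a-z]+');
SQL>
SQL> alter session set nls_sort = binary_ci;
SQL>
SQL> -- the same predicate can now match, because the default regex
SQL> -- case sensitivity follows NLS_SORT
SQL> select count(*) from dual where regexp_like('SMITH', '[a-z]+');

Two sessions with different NLS_SORT settings can therefore disagree on whether a row passes the “constraint”, which is precisely why such expressions are not deterministic.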
This is just a post to quickly revisit the topic for anyone thinking of upgrading from an earlier release to 12c Release 2. An AskTOM question came in asking what would happen to such constraints during the upgrade process.
The sad answer is … nothing. For example, if you successfully (and by strict definition, incorrectly) created a constraint with a regular expression in 11g, then after the upgrade, that constraint will still be present in your new 12c Release 2 system. It will continue to work as it did in 11g, and even if you disable/enable it, or put it through a validate command to exercise the data, it will work as it did before.
To be honest, I do not like this, because it can become what I call a “sleeper” problem. If, six months after you upgrade, you happen to drop and recreate that constraint, you’ll be most distressed to find that it cannot be added, and you will most probably have long forgotten that it was caused by an event that occurred six months ago, namely the upgrade. And perhaps worse, you now have an index or constraint that could potentially be corrupted by innocent tinkering with session NLS settings.
So before you upgrade, definitely take a cursory glance through your constraint definitions and take remedial action if needed.
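If you want something more thorough than a cursory glance, a small PL/SQL sweep over the dictionary can do the hunting for you. A minimal sketch (SEARCH_CONDITION is a LONG column, hence fetching it in PL/SQL rather than filtering in SQL; the simple ‘%REGEXP%’ filter is just my rough heuristic):

SQL> set serveroutput on
SQL> begin
  2    for c in ( select owner, table_name, constraint_name, search_condition
  3                 from dba_constraints
  4                where constraint_type = 'C' )
  5    loop
  6      -- search_condition is LONG in SQL, but arrives as a varchar2 in PL/SQL
  7      if upper(c.search_condition) like '%REGEXP%' then
  8        dbms_output.put_line(c.owner||'.'||c.table_name||': '||c.constraint_name);
  9      end if;
 10    end loop;
 11  end;
 12  /

A similar pass over the COLUMN_EXPRESSION column of DBA_IND_EXPRESSIONS will catch function-based indexes built on regular expressions.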
Do a quick Google search and you’ll find plenty of blog posts about why GUIDs are superior to integers for a unique identifier, and of course, an equal number of posts about why integers are superior to GUIDs. In the Oracle world, most people have been using sequence numbers since they were pretty much the only option available to us in earlier versions. But developers coming from other platforms often prefer GUIDs simply due to their familiarity with them.
I’m pretty much ambivalent when it comes to which one to use. In fact, a good example is the AskTOM database, which had exclusively sequence-based primary keys at inception, but as the database has evolved and developers of different backgrounds and ages have worked on it, there is now a mix of strategies: the older tables have sequence-based primary keys, and many of the newer tables have GUIDs as primary keys. Don’t get me wrong – I’m not advocating that you should have a mix; for a given database schema I’d recommend picking one regime and sticking with it. But my point is that even with the mix of approaches in the AskTOM schema, I’ve never encountered any problems or issues with either.
However, there is one use case where I would strongly recommend using some caution on the use of GUIDs, and that is in the arena of systems that load data in bulk (eg data warehouses).
GUIDs are not cheap to generate. A quick look at the history of the structure and generation of unique IDs at https://en.wikipedia.org/wiki/Universally_unique_identifier shows they all come down to a common component: the need to generate a good random number, and “good” can be a struggle for computers, because you need algorithms that ensure sufficient randomness and distribution of the generated numbers. That takes CPU cycles, and whilst that is something you will never notice when using a GUID for the 100 customers in your customer table, it definitely will be noticeable if you are going to attach a GUID to every one of your 10 million daily sales transactions, or telephone records, or similar.
Here’s a simple example where I’ll load 100 million rows into a table. First I’ll try the conventional (when it comes to Oracle) approach of using a sequence number to uniquely identify each row.
SQL> create sequence seq cache 50000;

Sequence created.

SQL> create table t1
  2  ( pk    int,
  3    data1 int,
  4    data2 varchar2(10),
  5    data3 int,
  6    data4 varchar2(10)
  7  )
  8  tablespace demo
  9  /

Table created.

SQL> set timing on
SQL> insert /*+ APPEND */ into t1
  2  select seq.nextval, int_val, char_val, int_val, char_val
  3  from
  4    ( select level int_val, to_char(level) char_val from dual connect by level <= 10000 ),
  5    ( select 1 from dual connect by level <= 10000 );

100000000 rows created.

Elapsed: 00:03:31.42

SQL> commit;

Commit complete.

Elapsed: 00:00:00.01
3 minutes 30 seconds for 100 million rows is pretty good performance for a laptop, although obviously the table structure here is very simple.
And now I’ll repeat the exercise with the same table structure, but using a raw column to hold the output of a call to SYS_GUID().
SQL> create table t2
  2  ( pk    raw(20),
  3    data1 int,
  4    data2 varchar2(10),
  5    data3 int,
  6    data4 varchar2(10)
  7  )
  8  tablespace demo
  9  /

Table created.

SQL> set timing on
SQL> insert /*+ APPEND */ into t2
  2  select sys_guid(), int_val, char_val, int_val, char_val
  3  from
  4    ( select level int_val, to_char(level) char_val from dual connect by level <= 10000 ),
  5    ( select 1 from dual connect by level <= 10000 );

100000000 rows created.

Elapsed: 00:30:56.78

SQL> commit;

Commit complete.

Elapsed: 00:00:00.03
That’s right – we’ve blown out to 30 minutes. As you can see, there can be a large cost when the row volumes (and hence the number of calls to generate a GUID) get large. We can even take the INSERT out of the equation here, and simply run a raw stress test to see how quickly we can generate GUIDs from the SQL engine, using the following test harness.
SQL> create table t ( sz int, dur interval day to second );

Table created.

SQL> declare
  2    ts_start timestamp;
  3    ts_end timestamp;
  4    iter int;
  5    dummy raw(32);
  6  begin
  7    for i in 1 .. 8 loop
  8      iter := power(10,i);
  9
 10      ts_start := systimestamp;
 11      if iter <= 10000 then
 12        select max(x) into dummy from
 13        (
 14          select sys_guid() x from
 15          ( select 1 from dual connect by level <= iter )
 16        );
 17      else
 18        select max(x) into dummy from
 19        (
 20          select sys_guid() x from
 21          ( select 1 from dual connect by level <= iter/10000 ),
 22          ( select 1 from dual connect by level <= 10000 )
 23        );
 24      end if;
 25
 26      ts_end := systimestamp;
 27      insert into t values (iter, ts_end - ts_start );
 28      commit;
 29
 30    end loop;
 31  end;
 32  /

PL/SQL procedure successfully completed.

SQL> select * from t order by 1;

        SZ DUR
---------- ----------------------------------------------------------
        10 +00 00:00:00.000000
       100 +00 00:00:00.000000
      1000 +00 00:00:00.015000
     10000 +00 00:00:00.172000
    100000 +00 00:00:01.607000
   1000000 +00 00:00:16.083000
  10000000 +00 00:02:49.713000
 100000000 +00 00:26:46.570000
I’m not trying to scare you off GUIDs – but like any functionality or feature you’re using to build applications, make sure you test it for business requirements you need to satisfy and make an informed decision on how best to use (or not use) it.
I’m typing this at about 38,000 feet in the air, travelling home from the Oracle Latin America tour. It was a great event, but I’ll save the details for another blog post. In reality, I should be sleeping, but as luck would have it, one of the lights on our plane is malfunctioning. So whilst the rest of the plane is sleeping happily in restful darkness, my row and the rows around it are bathed in blazing artificial light.
Ah…the joys of travel. Anyway, I digress.
Before I boarded the flight, I did what most people do in the airport: I signed my life away to a deluge of advertising emails, as one does in order to get 30 precious minutes of free wi-fi. I checked some emails, and had a glance through Twitter when I came upon this one.
As you might expect, there’s plenty of social media activity about this tweet, much of it about the contradiction between the language and what you might expect to come out of the Office of the President of the United States. I’m not going to enter into that debate, or get into political discussion. Let’s face it – Donald Trump strikes me as more a Sybase man (Hey don’t flame me Sybase readers…these are just jokes ok?)
But the tweet made me think about the kind of language we sometimes see on AskTOM and on technical forums in general.
So let’s talk about etiquette on technical forums, or “Techiquette” as I’ve coined it.
We’re actually pretty lucky in technical circles with our discussion forums, because most of the conversation revolves around topics that can be backed up with cold hard facts. In those instances, even robust discussion stays civil, because it’s pretty hard to disagree with solid evidence when it’s presented. A hypothetical example might be something like:
Forum Participant #1: “I need to have a table with 500 columns”
Forum Participant #2: “You cannot – Oracle is limited to 255 columns”
Forum Participant #3: “I disagree – here is an example DDL demonstrating 800 columns”
Participant #2 can’t really dispute the response, because the example is right there in black and white.
But here is where I think conversations can go off the rails and slide downhill into abuse and insult: when the content is both technical and emotional, for example:
Forum Participant #1: “I love using RMAN – it made our backup processes much easier”
Forum Participant #2: “RMAN sucks – it is too complex, anyone using it obviously has no clue about anything”
Forum Participant #1: “Dumb ass”
and away we go…The conversation goes from friendly to flame war.
The problem is – I don’t want my forums to be devoid of emotion, even when the topic is technical. Ever been to a presentation where the presenter does not show any joy or enthusiasm for the material? It’s like spending 45 minutes having root canal work on your teeth. We want people to be passionate about the things that interest them.
So how do we avoid forums descending into abuse whilst still encouraging emotion, and hence potentially emotionally charged content?
Here are my ground rules:
Simple mathematics tells us that if we can stick to the above, the average “positivity index” in a tech conversation will always be greater than or equal to zero.
I’m not claiming to be an angel here – I don’t always manage to follow my own guidelines as well as I should. But I’m always striving to improve, and hopefully if we can all follow them to the best of our ability, then in the main, we’ll have much more civil and interesting community conversations as a result.
I recently participated in the Oracle Developer Community Yatra tour throughout India. It is a hectic event with 7 cities covered in a mere 9 days, so you can imagine how frantic the pace was. A typical day would be:
Yet as a speaker at this event, it never felt like the logistics were out of control. This is mainly due to the incredible work of the people in the AIOUG (All India Oracle User Group), coordinated by Sai Penumuru. The smoothness of the organization prompted me to blog about how user groups could follow the AIOUG lead in terms of running successful events.
Every time I do an event in India, not only do I come away with a stronger network and connection with the attendees, I also gain new and strong friendships with the speakers, and this is in no small part due to the organizational skills of Sai and the volunteers.
So from that reference point – namely, the smart way in which Sai and the user group foster a great feeling of community amongst the speakers, I thought I’d share the mechanisms via which user groups can organize events that make speakers feel welcome and keen to return.
When I travel to India, Sai provides a recommended hotel for all speakers. Obviously, no speaker is compelled to stay there, but we all know that the recommendation Sai makes has taken into account:
So most of us will always use his recommendation, and it makes the decision-making process easy. But most of all, it is a catalyst for speakers to meet in a casual environment outside the conference hours, and build relationships.
The AIOUG organizes transport to and from the venue, and from the airport as well. And when this is not possible, they will provide a recommended means of transport (such as Uber) so that speakers never run the risk of picking an unreliable or unsafe option. For multi-city events such as Yatra, the AIOUG also recommends a common itinerary for flights, once again making the entire planning process much easier for speakers.
A communication mechanism
Before a conference, the AIOUG sets up a WhatsApp group containing all of the speakers and the key representatives from the user group. In this way, all communication is sent in a consistent fashion. We don’t have to jump between email, Twitter, etc to see if any correspondence has been missed. And this group also helps build the relationship between speakers and user group. Humorous stories and pictures can be shared, but most importantly, if there is an issue or problem, everyone is aware of it immediately, making resolution fast and effective.
And perhaps most importantly, it helps accentuate the human element – messages are sent as people land or take off, and when people arrive at the hotel. Organizers regularly send messages making sure no-one is having any difficulties. All these things make the speakers feel more comfortable before, during and after the event.
A typical day
Perhaps the most valuable piece of information conveyed by the user group is ‘local knowledge’. For example, each evening a WhatsApp message would be sent out detailing
So even though it may be a first visit to a city for the speakers, there is a degree of familiarity with what is expected to happen, and hence knowledge of whether things are departing from the norm.
The most stressful 5 minutes for any speaker is the time they are setting up for their talk. Making sure the projector works, internet connectivity, schedule changes and so forth are all things that can sabotage a good talk. The AIOUG always has someone visit the room in those critical 5 minutes, so a speaker does not have to go hunting for technical assistance.
In summary, as you can see, none of these things is particularly arduous to do, but the benefits are huge for speakers. We feel comfortable and welcome, which means a much better chance of a successful talk, and makes us much more likely to want to return.
So if you are reading this, and are part of a user group committee, please consider some of the tips above for your local events. If your speakers have a good experience, they’ll be much more keen to offer continued support for your events.