Your AskTOM Top 10 for 2018

Here we go, folks… here are the top 10 most viewed questions in AskTOM this year!

We'll count them down from 10 to 1.

10) Inserting values into a table with ‘&’

Viewed 80,000 times.

First asked in 2012, this actually is not a database issue but a client issue. Many client tools treat the ampersand as a substitution variable, and thus they intercept the command before it is sent to the database. Full details here.
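As a quick illustration (a sketch, assuming a scratch table T with a single VARCHAR2 column MSG), the usual SQL*Plus and SQLcl workaround is to switch off substitution before running the statement:

set define off
insert into t ( msg ) values ('Fish & Chips');
set define on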

9) What is a SID, how to change it, how to find out what it is

Viewed 95,000 times.

A timeless classic. This question was asked in June 2000 – only 1 month after AskTOM went live. Given that most of our documentation uses, and most people are aware of, the terms "instance" and "database", I'll never really understand why we didn't go with ORACLE_INSTANCE instead of ORACLE_SID. Full details here.
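If you just need to discover the instance you are connected to, a couple of hedged one-liners (subject to having the relevant privileges) are:

select instance_name from v$instance;

select sys_context('USERENV','INSTANCE_NAME') from dual;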

8) How to Update millions or records in a table

Viewed 100,000 times.

Asked back in 2002, this is one whose popularity I can understand. Perhaps the most common performance tuning issue is data conversion or data loading, and sometimes you just need to forget the word UPDATE and re-load the data – it can be much faster. Full details here.
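A minimal sketch of that reload approach, assuming a hypothetical table T where we want to recompute a STATUS column for every row (indexes, constraints and grants would of course need to be re-created as well):

create table t_new nologging as
select pk_col,
       upper(status) status   -- the "update" applied during the reload
from   t;

drop table t;
rename t_new to t;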

7) How to connect SQLPlus without tnsnames.ora

Viewed 110,000 times.

Is it that much of a big deal to have a tnsnames.ora file? I guess so. Nowadays the EZConnect syntax makes connecting to any database trivial. Full details here.
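For example, assuming a listener on a host called db-server, port 1521, servicing a database called pdb1, no tnsnames.ora entry is needed at all:

sqlplus scott/tiger@//db-server:1521/pdb1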

6) How To FULL DB EXPORT/IMPORT

Viewed 130,000 times.

Another understandable one for the top 10, because of the transition from old-style export/import to Data Pump around the time this question was first asked. There is also the classic chicken-and-egg issue to deal with, namely, that you need a database in order to import, but won't a full import then clash with all the existing objects? Full details here.
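A hedged sketch of the Data Pump equivalent, assuming a directory object DMP_DIR already exists on both source and target:

expdp system full=y directory=DMP_DIR dumpfile=full.dmp logfile=full_exp.log

impdp system full=y directory=DMP_DIR dumpfile=full.dmp logfile=full_imp.log

By default the import skips objects that already exist, which is how the chicken-and-egg problem typically resolves itself.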

5) Converting CLOBS TO VARCHAR

Viewed 132,000 times.

Another golden oldie, from 2001! It seems an odd proposition – if you have a CLOB, there's a good chance you have it because the data exceeds the allowable size for VARCHAR2, so converting back down seems a risky exercise. Full details here.
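The common approach, sketched here assuming a table T with a CLOB column C, where we only care about the leading portion that will fit in a VARCHAR2:

select dbms_lob.substr(c, 4000, 1) c_as_varchar2
from   t;

DBMS_LOB.SUBSTR takes the LOB, the amount to return and the starting offset – anything beyond the amount is silently left behind, which is exactly the risk mentioned above.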

4) ORA-12560: TNS:protocol adapter error

Viewed 135,000 times.

Ah yes, not our finest hour, this one. Try to use a database on Windows that has not been started, and do you get an error saying "Database not started"? No. You get a cryptic TNS error. Full details here.
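The usual culprits on Windows are the database service not running, or ORACLE_SID not being set. A hedged sketch of the typical fix, assuming a SID of ORCL:

set ORACLE_SID=ORCL
net start OracleServiceORCL
sqlplus / as sysdba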

3) Format the Number for display

Viewed 143,000 times.

There is such a thing as "too much of a good thing", and perhaps our incredible flexibility when it comes to formatting data, and the sheer range of format masks, is tough for people to swallow. Full details here.
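For instance, a quick sketch of just a few of the masks you could throw at the same number:

select to_char(1234567.891, '9,999,999.99')   x1,
       to_char(1234567.891, 'FM9,999,999.00') x2,
       to_char(1234567.891, '9.99EEEE')       x3
from   dual;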

2) IN (vs) EXISTS and NOT IN (vs) NOT EXISTS

Viewed 144,000 times.

Well, there's a simple reason this gem keeps coming back year after year after year. For years, books and blogs on performance tuning claimed that EXISTS was better than IN, or vice versa. Amazingly, it has never been the case that one was guaranteed to be better than the other, and more alarmingly, a straight swap from one to the other can even change the results you get back. I think we'll see this guy in the list every year… forever. Full details here.
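The classic demonstration of that last point is what a single NULL does to NOT IN. A small sketch, assuming nothing beyond two throwaway tables:

create table parent as select 1 x from dual;
create table child  as select to_number(null) x from dual;

select * from parent
where  x not in (select x from child);          -- no rows: the NULL poisons NOT IN

select * from parent p
where  not exists
  (select null from child c where c.x = p.x);   -- returns the row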

And finally we have arrived at the number 1 most viewed question this year. Which prompts me to ask:

Why do so many of you need to know how many rows are in each table? 🙂

1) Finding the number of rows in each table by a single sql

Viewed 510,000 times.

Wow. Half a million people need to know how many rows are in their database tables. I’ve no idea why, because the moment you count them, you’re already out of date. Full details here.
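For the record, a hedged sketch of the kind of answer involved – either trust the optimizer statistics (cheap, but only as fresh as the last stats gather), or count for real with a little XML trickery:

select table_name, num_rows
from   user_tables;

select table_name,
       to_number(extractvalue(xmltype(
         dbms_xmlgen.getxml('select count(*) c from "'||table_name||'"')),
         '/ROWSET/ROW/C')) row_count
from   user_tables;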

And there you have it… our top 10 for 2018.

Have a great festive season!

It’s all downhill past 30

Yes, it sounds like a lamentation of the aging process, and from my own experience, it is definitely true that the wheels started falling off for my knees, back and plenty of other body parts once I turned 30. But that is perhaps a legacy of too much alcohol, not eating well in my youth and failing to stretch, rather than any particular chronological milestone 🙂

But this post is not about that. This one is about the magical 30-character limit on identifiers that we finally lifted in version 12c. For static SQL, this is obviously a no-impact change – you either define your tables and columns with longer names or you don't, and the static SQL you write reflects that position.

But dynamic SQL is a different proposition; in particular, consider this discovery I made with DBMS_SQL recently. DBMS_SQL has a number of package data types that reflect the maximum identifier length in the database, for example:


  type desc_rec is record (
        col_type            binary_integer := 0,
        col_max_len         binary_integer := 0,
        col_name            varchar2(32)   := '',
        col_name_len        binary_integer := 0,
        col_schema_name     varchar2(32)   := '',
        col_schema_name_len binary_integer := 0,
        col_precision       binary_integer := 0,
        col_scale           binary_integer := 0,
        col_charsetid       binary_integer := 0,
        col_charsetform     binary_integer := 0,
        col_null_ok         boolean        := TRUE);

For dynamic SQL, where we need to determine the column names at runtime from an arbitrary SQL statement, DBMS_SQL uses this data type in its API calls. The example below lists out some user names from the database and dynamically derives the column names (even though we obviously know them in advance for this simple example).


SQL> set serverout on format wrapped
SQL> declare
  2    c clob := 'select username, created from dba_users';
  3    l_cur     int := dbms_sql.open_cursor;
  4    l_val     varchar2(4000);
  5    l_status  int;
  6    l_desc    dbms_sql.desc_tab;
  7    l_cnt     int;
  8    l_longcol int := 0;
  9  begin
 10    dbms_sql.parse(l_cur,c,dbms_sql.native );
 11    dbms_sql.describe_columns(l_cur,l_cnt,l_desc);
 12    for i in 1 .. l_cnt loop
 13      dbms_sql.define_column(l_cur,i,l_val,4000);
 14      l_longcol := greatest(l_longcol,length(l_desc(i).col_name));
 15    end loop;
 16    l_status := dbms_sql.execute(l_cur);
 17    while dbms_sql.fetch_rows(l_cur) > 0
 18    loop
 19      for i in 1 .. l_cnt
 20      loop
 21        dbms_sql.column_value(l_cur, i,l_val);
 22        dbms_output.put_line(
 23          rpad(l_desc(i).col_name,l_longcol)||': '||l_val );
 24      end loop;
 25      dbms_output.new_line;
 26    end loop;
 27  exception
 28    when others then
 29      dbms_output.put_line(dbms_utility.format_error_backtrace);
 30      raise;
 31  end;
 32  /
USERNAME: SYS
CREATED : 08-MAR-17

USERNAME: SYSTEM
CREATED : 08-MAR-17

USERNAME: XS$NULL
CREATED : 08-MAR-17

USERNAME: OJVMSYS
CREATED : 08-MAR-17

USERNAME: SYSADMIN
CREATED : 19-JUL-18

It might seem a trivial change to increase the size limit in line with the move from 11g to 12c, but DBMS_SQL is caught between a rock and a hard place here. If you change the limit, you risk breaking backward compatibility for existing customers. If you do not change the limit, you run into problems like the demo below:


SQL> create table t ( using_my_new_cool_long_column_names_in_12c )
  2  as select 1 from dual;

Table created.

SQL>
SQL>
SQL> set serverout on format wrapped
SQL> declare
  2    c clob := 'select * from t';
  3    l_cur     int := dbms_sql.open_cursor;
  4    l_val     varchar2(4000);
  5    l_status  int;
  6    l_desc    dbms_sql.desc_tab;
  7    l_cnt     int;
  8    l_longcol int := 0;
  9  begin
 10    dbms_sql.parse(l_cur,c,dbms_sql.native );
 11    dbms_sql.describe_columns(l_cur,l_cnt,l_desc);
 12    for i in 1 .. l_cnt loop
 13      dbms_sql.define_column(l_cur,i,l_val,4000);
 14      l_longcol := greatest(l_longcol,length(l_desc(i).col_name));
 15    end loop;
 16    l_status := dbms_sql.execute(l_cur);
 17    while dbms_sql.fetch_rows(l_cur) > 0
 18    loop
 19      for i in 1 .. l_cnt
 20      loop
 21        dbms_sql.column_value(l_cur, i,l_val);
 22        dbms_output.put_line(
 23          rpad(l_desc(i).col_name,l_longcol)||': '||l_val );
 24      end loop;
 25      dbms_output.new_line;
 26    end loop;
 27  exception
 28    when others then
 29      dbms_output.put_line(dbms_utility.format_error_backtrace);
 30      raise;
 31  end;
 32  /
ORA-06512: at "SYS.DBMS_SQL", line 2084
ORA-06512: at line 11

declare
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at line 30
ORA-06512: at "SYS.DBMS_SQL", line 2084
ORA-06512: at line 11

The solution here is a simple one. The existing definitions in DBMS_SQL were left unchanged to preserve that backward compatibility, and additional structures were added to handle longer column names. We simply need to alter our code to use the new "desc_tab2" data type and its accompanying "describe_columns2" call.


SQL> declare
  2    c clob := 'select * from t';
  3    l_cur     int := dbms_sql.open_cursor;
  4    l_val     varchar2(4000);
  5    l_status  int;
  6    l_desc    dbms_sql.desc_tab2;
  7    l_cnt     int;
  8    l_longcol int := 0;
  9  begin
 10    dbms_sql.parse(l_cur,c,dbms_sql.native );
 11    dbms_sql.describe_columns2(l_cur,l_cnt,l_desc);
 12    for i in 1 .. l_cnt loop
 13      dbms_sql.define_column(l_cur,i,l_val,4000);
 14      l_longcol := greatest(l_longcol,length(l_desc(i).col_name));
 15    end loop;
 16    l_status := dbms_sql.execute(l_cur);
 17    while dbms_sql.fetch_rows(l_cur) > 0
 18    loop
 19      for i in 1 .. l_cnt
 20      loop
 21        dbms_sql.column_value(l_cur, i,l_val);
 22        dbms_output.put_line(
 23          rpad(l_desc(i).col_name,l_longcol)||': '||l_val );
 24      end loop;
 25      dbms_output.new_line;
 26    end loop;
 27  exception
 28    when others then
 29      dbms_output.put_line(dbms_utility.format_error_backtrace);
 30      raise;
 31  end;
 32  /
USING_MY_NEW_COOL_LONG_COLUMN_NAMES_IN_12C: 1


PL/SQL procedure successfully completed.

In fact, a column name could always be more than 32 characters, because an un-aliased column expression can yield a column name of almost arbitrary length, so these additions were made several releases ago. But 12c has brought the issue to the fore, because even "normal" column names can now exceed the previous limit.
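If you have never bumped into that, a quick sketch – describe an un-aliased expression and the derived column name is the full expression text, well past the old 32-character buffer even on an older release:

select to_char(sysdate,'yyyy')||'-'||to_char(sysdate,'mm') from dual;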

Now if only I could add a "2" to the end of my aging body to handle years up to 128, just like the column names in 12c 🙂

Automatic sequences not being dropped

One of the nice new things in 12c was the concept of identity columns. In terms of the functionality they provide (an automatic number default) it is really no different from anything we’ve had for years in the database via sequences, but native support for the declarative syntax makes migration from other database platforms a lot easier.

Under the covers, identity columns are implemented as sequences. This makes a lot of sense – why invent a new piece of functionality when you can exploit something that has already been tried and tested exhaustively for 20 years? So when you create a table with an identity column, you'll see the appearance of a system-named sequence to support it.

Another nice thing about identity columns is that if you drop the table, the underlying sequence that supports the column is also automatically dropped….. (cue ominous music…) or is it? I had this demo come my way via email asking that exact question:


SQL> create table t1 ( x int generated always as identity );

Table created.

SQL> create table t2 ( x int generated by default as identity );

Table created.

SQL> select table_name, sequence_name
  2  from   user_tab_identity_cols;

TABLE_NAME           SEQUENCE_NAME
-------------------- --------------------
T1                   ISEQ$$_139912
T2                   ISEQ$$_139914

2 rows selected.

SQL> drop table t1;

Table dropped.

SQL> drop table t2;

Table dropped.

SQL> select table_name, sequence_name
  2  from   user_tab_identity_cols;

no rows selected

SQL> select sequence_name from user_sequences;

SEQUENCE_NAME
--------------------
ISEQ$$_139912
ISEQ$$_139914

Is that a bug? Why are the sequences still there?

There is no need to panic – there is a very simple explanation for this. From 10g onwards, by default, when you drop a table we do not actually drop it (ie, release the data and the space it consumed). We simply rename it, and it becomes a member of the recycle bin. This is a nifty insurance policy against losing data inadvertently. The table "disappears" from the standard dictionary views but the sequences remain, which is what causes the confusion. We don't destroy the sequences because, if the table were brought back from the recycle bin, recreating them would require knowing the current high-water mark of each sequence.

Purging the recycle bin brings everything back into alignment.


SQL> purge recyclebin;

Recyclebin purged.

SQL> select sequence_name from user_sequences;

no rows selected
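For completeness, a hedged sketch of the other direction – had we rescued a table from the recycle bin instead of purging it, the preserved sequence is precisely what lets the identity column carry on from where it left off:

flashback table t2 to before drop;
insert into t2 values (default);   -- resumes from the preserved sequence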

The phantom tablespace

(Cueing my deep baritone Morpheus voice…) What if I told you that you can reference non-existent tablespaces in your DDL?

OK, it sounds like a gimmick but there is a real issue that I'll get to shortly. But first the gimmick 🙂

I've created a partitioned table called "T" (I'll pause here for your applause at my incredible imagination skills for table naming 🙂) and to show you the complete DDL, I'll extract it using the familiar DBMS_METADATA package.


SQL> select dbms_metadata.get_ddl('TABLE','T','SCOTT') x from dual;

X
-------------------------------------------------------------------------------------------------
CREATE TABLE "SCOTT"."T"
 (    "X" NUMBER(*,0)
 ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
STORAGE (  
  BUFFER_POOL DEFAULT 
  FLASH_CACHE DEFAULT 
  CELL_FLASH_CACHE DEFAULT)
TABLESPACE "DEMO"
PARTITION BY LIST ("X")
(PARTITION "P1"  VALUES (1) 
   SEGMENT CREATION IMMEDIATE  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
   STORAGE (
     INITIAL 8388608 
     NEXT 1048576 
     MINEXTENTS 1 
     MAXEXTENTS 2147483645  
     PCTINCREASE 0 
     FREELISTS 1 
     FREELIST GROUPS 1
     BUFFER_POOL DEFAULT 
     FLASH_CACHE DEFAULT 
     CELL_FLASH_CACHE DEFAULT)
   TABLESPACE "USERS" ,
 PARTITION "P2"  VALUES (2) 
   SEGMENT CREATION IMMEDIATE  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
   STORAGE (
     INITIAL 8388608 
     NEXT 1048576 
     MINEXTENTS 1 
     MAXEXTENTS 2147483645  
     PCTINCREASE 0 
     FREELISTS 1 
     FREELIST GROUPS 1
     BUFFER_POOL DEFAULT 
     FLASH_CACHE DEFAULT 
     CELL_FLASH_CACHE DEFAULT)
   TABLESPACE "LARGETS" )

Looking through that DDL, you can see that there are three tablespaces that pertain to this table:

  • DEMO
  • USERS
  • LARGETS

But look what happens when I query the data dictionary for those tablespaces:


SQL> select tablespace_name
  2  from   dba_tablespaces
  3  where  tablespace_name in ('DEMO','USERS','LARGETS');

TABLESPACE_NAME
------------------------
LARGETS
USERS

Where is DEMO? Where has it gone? More startlingly, how can I have an existing table that needs that tablespace, and yet the tablespace is not present in the database? Have I lost data? Is there corruption?

Fortunately, the answer to all of these questions does not involve data loss and/or corruption. It is a quirk of the syntax that can be used for partitioned tables. Here is the DDL as I wrote it for table T.


SQL> create table t ( x int ) tablespace demo
  2  partition by list ( x )
  3  ( partition p1 values (1) tablespace users,
  4    partition p2 values (2) tablespace largets
  5  );

Table created.

And immediately after I created the table, I did the following


SQL> drop tablespace demo including contents and datafiles;

Tablespace dropped.

You might be thinking that such an operation would surely drop the table I just created as well, but it is still here just fine.


SQL> desc t
 Name                                                              Null?    Type
 ----------------------------------------------------------------- -------- ---------------
 X                                                                          NUMBER(38)

The tablespace specified at table level for a partitioned table merely nominates the default tablespace for each partition, in case one is not specified at partition level. Because I specified a tablespace explicitly for both partitions of T, the tablespace DEMO does not contain any data, or any partitions for that matter, which is why I was able to drop it without any problems. Compounding the confusion is that you won't see the tablespace name DEMO listed in the USER_TABLES dictionary view even if I had not dropped the tablespace.


SQL> select tablespace_name from user_tables
  2  where table_name = 'T';

TABLESPACE_NAME
------------------------
(null)

The tablespace name in USER_TABLES nominates the tablespace for the segment that will be associated with the table. You will see a similar null value in this column when the table is an index-organized table, because it is the underlying index that maps to a tablespace, not the table definition. For a partitioned table, to see the default tablespace you need to query the USER_PART_TABLES dictionary view.


SQL> select def_tablespace_name
  2  from user_part_tables
  3  where table_name = 'T';

DEF_TABLESPACE_NAME
------------------------------
DEMO

Besides being some trickery with tablespace definitions, there is a good reason why you should know about segment tablespaces and the default tablespace for partitioned objects. As it stands, I could run a Data Pump export command on table T and it would successfully be unloaded to a dump file. However, if I attempt to run a Data Pump import, the creation of the table will fail because of the (now illegal) reference to the DEMO tablespace.
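If you are already in that position, Data Pump's REMAP_TABLESPACE parameter is the standard escape hatch on import – a quick sketch, where the directory object and dump file names are mine, not from the demo above:

impdp scott directory=DMP_DIR dumpfile=t.dmp remap_tablespace=DEMO:USERS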

So if you are planning to run a Data Pump export, here's a query I whipped up to check your database for any references to tablespaces that no longer exist.


SQL> with all_possible_ts as
  2  (
  3  select tablespace_name from dba_lobs                       union all
  4  select tablespace_name from dba_clusters                   union all
  5  select tablespace_name from dba_indexes                    union all
  6  select tablespace_name from dba_rollback_segs              union all
  7  select tablespace_name from dba_tables                     union all
  8  select tablespace_name from dba_object_tables              union all
  9  select def_tablespace_name from dba_part_tables            union all
 10  select def_tablespace_name from dba_part_indexes           union all
 11  select tablespace_name from dba_tab_partitions             union all
 12  select tablespace_name from dba_ind_partitions             union all
 13  select tablespace_name from dba_tab_subpartitions          union all
 14  select tablespace_name from dba_ind_subpartitions          union all
 15  select def_tablespace_name from dba_part_lobs              union all
 16  select tablespace_name from dba_lob_partitions             union all
 17  select tablespace_name from dba_lob_subpartitions          union all
 18  select tablespace_name from dba_subpartition_templates     union all
 19  select tablespace_name from dba_lob_templates              union all
 20  select tablespace_name from dba_segments                   union all
 21  select tablespace_name from dba_extents                    union all
 22  select tablespace_name from dba_undo_extents
 23  )
 24  select tablespace_name from all_possible_ts
 25  minus
 26  select tablespace_name from dba_tablespaces;

TABLESPACE_NAME
--------------------
DEMO

IRR function in PLSQL

We had an AskTOM question recently about how to calculate the IRR, ie, the Internal Rate of Return. To be honest, I had not really heard of the function, but readers were quick to point out that it is a commonly used function in the most "popular" database on earth, namely Microsoft Excel 🙂

It turns out that it is an iterative calculation, ie, there is no equation that calculates it directly – you start with an opening guess and then iterate, each guess getting closer to the answer, until you are within an accepted tolerance. A little research took me on a nice trip down memory lane to my university (college) days as a maths major, where we learned the Newton-Raphson method (although apparently it is more commonly referred to as Newton's method).
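For those who want the one-line refresher: the IRR is the rate r at which the net present value of the cash flows is zero, and Newton's method walks toward that root using the derivative. Roughly:

NPV(r) = sum( cf(i) / (1+r)^i , i = 0 .. n ) = 0

r(next) = r - NPV(r) / NPV'(r)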

So a little PL/SQL later, I've re-embraced my youthful mathematical roots. If only we could iterate like this function back to our youth as well 🙂

Here is some test data that we can use:


SQL> create table test_tbl
  2  (
  3  id number,
  4  cash_flow number
  5  );

Table created.

SQL>
SQL> insert into test_tbl (id,cash_flow) values (0,-6000000);
SQL> insert into test_tbl (id,cash_flow)
  2  select level, 50243.25
  3  from   dual
  4  connect by level <= 179;
SQL> insert into test_tbl (id,cash_flow) values (180,54802.38);

And here is the function using Newton's method. You can see at line 15 that we iterate until we get within a nominated threshold. I could have coded the function just to read my test data above, but that would not make it very practical for an arbitrary set of data, so I've parameterised the function to take a cursor variable so that any result set can be passed to it. As long as the data is passed in the correct order, the function should operate just fine.

Warning: Because this is a function that iterates until a threshold is reached, if you send in garbage data it is quite possible that the function will either never return, or crash with a numeric overflow error. If you are planning on using the code below, you may want to add some sanity checks to “harden” it for production usage.


SQL> create or replace
  2  function irr(rc sys_refcursor) return number is
  3    type nlist is table of number index by pls_integer;
  4    l_values nlist;
  5
  6    l_threshold number := 0.005;
  7    l_guess number := l_threshold + 1;
  8    l_next_guess number := 2;
  9    l_irr number := 1;
 10
 11  begin
 12    fetch rc bulk collect into l_values;
 13    close rc;
 14
 15    while abs(l_guess) > l_threshold
 16    loop
 17      l_guess := 0;
 18      l_next_guess := 0;
 19      for i in 1 .. l_values.count
 20      loop
 21        l_guess := l_guess + l_values(i)/power(1+l_irr/100, i-1);
 22        l_next_guess := l_next_guess + -i*l_values(i)/power(1+l_irr/100, i-1);
 23      end loop;
 24      l_irr := l_irr - l_guess/l_next_guess;
 25
 26      --dbms_output.put_line('l_irr='||l_irr);
 27      --dbms_output.put_line('l_guess='||l_guess);
 28      --dbms_output.put_line('l_next_guess='||l_next_guess);
 29    end loop;
 30    return l_irr;
 31  end;
 32  /

Function created.

And let’s give it a test.


SQL>
SQL> select irr(cursor(select cash_flow from test_tbl order by id asc )) irr
  2  from dual;

       IRR
----------
.490408759

1 row selected.

SQL>

You need 10 minutes! Not 5, not 15, not 7

What follows is just my opinion… Well duh, it's a blog, of course it's just my opinion, but I'll try to back up my rantings and pontificating with some reasoned arguments. Whether, at the end of it, you still call my claims total BS is of course entirely within your rights and your opinion 🙂

I’ve just completed a number of conference days as part of the APAC Groundbreakers tour. All of them were successful and rewarding events. My compliments, congratulations and thanks to the various organizers in each of the locations – it was a pleasure to be involved with them.

But it is interesting to see how different organizers approach the task of scheduling in different ways.

Now before I go any further, let me set one thing straight – the length of the session does not really matter if you have capable speakers. Of course, every speaker would probably prefer 50 minutes over 40, and 60 over 50, and so forth. But in reality, as long as we know the expected length of the session, we can always tailor our content to meet that requirement. We simply adjust our mindset from "quick tip" to "overview session" to "detailed session" to "workshop", and so on. It is not that hard to cut or add content to a presentation.

But what I do want to talk about is the gap between sessions. Here is where I'm going to lay it out nice and simple 🙂

Ten minutes is the sweet spot.

Here's what I've experienced in the past few months at various events around the world:

0 minutes

No, I'm not kidding. The first talk finishes at (say) 10am, and the next talk starts at 10am! How anyone could possibly think this could ever work is beyond me. Somehow the two speakers involved will be exchanging positions on stage, switching microphones and connecting laptops to AV equipment, all in a nanosecond, seamlessly heading from one session into the next. It's comical.

But let's assume for just a moment, in some mythical land where pixies and fairies exist, that this transition could be achieved in just a few seconds. It is still a ridiculous proposition – because now the attendees have no time to change rooms if they had planned to. Even if the event is a single stream, it is just as ridiculous, because no attendee can make the necessary context switch from digesting the content of the first session straight into the second, and very few people have the mental fortitude to continuously absorb content hour after hour without a break and still get any tangible benefit from the sessions.

What ultimately happens is that sessions invariably start and run later and later over their allocated time slots, and then you have the terrible situation of attendees not knowing what time sessions later in the day will really commence or when they will truly finish; concurrent sessions get out of alignment, and everything just breaks down into a giant mess.

Just don't ever do this 🙂

5 minutes

Is pushing it. Think about it for a second – if the first speaker runs just 1 minute over their allotment, which is only a 1-2% error on their part in terms of timing, you've just sucked up 20% of the switch-over time. What if there is a problem with the HDMI or VGA cable for the next speaker? What if they need to grab an audio technician? What if an attendee wants to come up after the session for a quick question, or a selfie, or similar? All of that pretty much goes out the window once you limit the switch-over to 5 minutes. Everything has to run perfectly all day long for this to work. That's a big ask.

10 minutes

10 minutes is the sweet spot. It gives attendees the necessary down time to mentally take a quick break. It gives them time to easily switch between sessions, or grab a toilet break if they need one. It removes much of the switch-over stress for speakers, but is still short enough that neither organizers nor attendees feel their time and/or money is being wasted on dead space during the day.

15 minutes

Is a coffee break 🙂

It can work, and it is the next best option after 10 minutes, especially if you are running 45-minute sessions, because the scheduling then neatly rounds to the hour. And if you are running a massive event, with hundreds or thousands of attendees in sessions, then this falls back to being equivalent to the arguments I've made for the 10-minute gap.

But for smaller events, 15 minutes can be just a bit too long for attendees to stay engaged with the event. They'll grab a coffee and then often feel awkward about coming back into a session 5 minutes late, and thus skip it entirely rather than be "that person" who walks in late. (Pro tip – we speakers don't really mind if you come in late, as long as you're polite about it.)

So those are my thoughts on scheduling sessions for an event. Make them as long or as short as you like, but keep that 10-minute switch-over time. It's the best.

Happy Thanksgiving!

Just a quick blog post from Wellington, New Zealand, where we have just wrapped up the 2018 APAC Groundbreakers tour. It was a great way to finish the event, with a small but enthusiastic crowd here in New Zealand.


Obviously our American counterparts in the Oracle community have other things on their mind this weekend, celebrating Thanksgiving. And without trying to be tooooo corny and cheesy about it 🙂, the thanks I give is for 18c XE! A free database for all – the perfect springboard for developers and DBAs everywhere to explore, learn and experiment with the Oracle database.


This is why I mentioned the APAC tour at the start of this post. Whilst many will use 18c XE as a fully fledged database for their free applications, another benefit I see of XE is that it can be a catalyst for all of us to build and learn, and then share our experiences with the community, whether it be via blog posts, videos, or contributing content to your local user group.

So how about making this a pledge for Thanksgiving, or perhaps a new year resolution? Namely, download 18c XE, learn something new about the Oracle Database and then share that with the community. It’s free, it’s easy and we all benefit!

(Thanks to Sandesh for the NZ pic)