The devastating death PC emoji of doom

So there I am.  Sitting at Perth airport.  My flight from Perth to Sydney, the first leg of my trip to OpenWorld 2017, is delayed.  Of course, delays are not unusual and do not normally bother me, because I can just flip open my laptop and knock off a few AskTOM questions while I wait.

But not this time.  I press the Power button and I get this:

[Image: Windows 10 blue screen – “Your PC ran into a problem”, collecting memory dump]

That is the first time I’ve ever seen that on my nice Dell XPS 13 laptop, but I figure “No big drama.  Probably just some sleep glitch” and let the machine reboot.

That is when the real fun started.  As it is restarting I get a similar screen (which I can’t screen dump…because the laptop isn’t booting!) saying the machine cannot boot.

Now I’m starting to be concerned, because I’m a few days out from 6 conference talks at the biggest Oracle conference of the year.  The laptop then goes into a “Cannot boot, so we’ll reboot” cycle which I don’t seem to be able to escape, and then the boarding call comes over the public address system at the airport.

Aggghhhhh!!!

So I have to do a forced power off on the laptop, jump on the plane…and now sit there in a cold sweat for 5 hours while I fly to Sydney (because I’m now too scared to tackle this issue without a power supply).

I arrive in Sydney and renew the battle.  I power it up, and the laptop again says it cannot boot, but this time asks whether I would like to repair the startup.  Why it has changed its tune I have no idea, but I give it a shot.  After a lot of progress bars, and messages about how long this might take, the laptop finally displays the normal login screen.

Phew !!!!!!!!!

But all is not totally right.  If I sleep or shutdown the laptop, then on boot up, the first message I get is a sad emoji BSOD like the one above, after which it reboots again and then appears fine.

So … here I am, at the hotel with a working laptop… and zero confidence in it.  Grrrrrr.

Not the best start to the week 😦

AskTOM TV–episode 11

Just a quick note to give the supporting collateral to the latest episode of AskTOM TV.

The question I tackled is this one:

https://asktom.oracle.com/pls/apex/asktom.search?tag=want-to-retrive-numbers-in-words

which was a fun one to answer because it showcases several useful SQL techniques:

  • Using CONNECT BY to synthesize rows,
  • Using regular expressions to parse text,
  • Using MULTISET in Oracle 10g to emulate the native LISTAGG function available from 11g onwards,
  • Using the hooks into the ODCI aggregation facilities to build custom aggregates,
  • The JSP format mask as a means of generating numbers as words

And here is the entire script from the episode if you want to run it yourself.



drop type string_agg_type;
col column_value format a60
col digit format a60
col concat_str format a60
drop table  t purge;


select to_char(to_date('7','J'),'JSP') from dual;

select to_char(to_date('0','J'),'JSP') from dual;

select
  case x
    when '0' then 'zero'
    when '1' then 'one'
    when '2' then 'two'
    when '3' then 'three'
    when '4' then 'four'
    when '5' then 'five'
    when '6' then 'six'
    when '7' then 'seven'
    when '8' then 'eight'
    when '9' then 'nine'
  end
from ( select '3' x from dual ) 

/

select
  case x
    when '0' then 'zero'
    when '1' then 'one'
    when '2' then 'two'
    when '3' then 'three'
    when '4' then 'four'
    when '5' then 'five'
    when '6' then 'six'
    when '7' then 'seven'
    when '8' then 'eight'
    when '9' then 'nine'
  end
from (
  select substr('123',rownum,1) x
  from dual
  connect by level <= 3
  ) 

/  


create or replace type string_list is table of varchar2(1000);
/

create table t ( x int );
insert into t values (101);
insert into t values (456);
insert into t values (789);

select *
from t,
     table(cast(multiset(
        select substr(to_char(t.x),rownum,1)
        from dual
        connect by level <= length(to_char(t.x))) as string_list)
)

/


select
  x,
  digit
from (
  select x, column_value digit
  from t,
       table(cast(multiset(
          select 
            case substr(to_char(t.x),rownum,1)
              when '0' then 'zero'
              when '1' then 'one'
              when '2' then 'two'
              when '3' then 'three'
              when '4' then 'four'
              when '5' then 'five'
              when '6' then 'six'
              when '7' then 'seven'
              when '8' then 'eight'
              when '9' then 'nine'
            end str
          from dual
          connect by level <= length(to_char(t.x))) as string_list)
  )
)

/

create or replace type string_agg_type as object
(
   data  string_list,

   static function
        ODCIAggregateInitialize(sctx IN OUT string_agg_type )
        return number,

   member function
        ODCIAggregateIterate(self IN OUT string_agg_type ,
                             value IN varchar2 )
        return number,

   member function
        ODCIAggregateTerminate(self IN string_agg_type,
                               returnValue OUT  varchar2,
                               flags IN number)
        return number,

   member function
        ODCIAggregateMerge(self IN OUT string_agg_type,
                           ctx2 IN string_agg_type)
        return number
);
/
 
create or replace type body string_agg_type
is

static function ODCIAggregateInitialize(sctx IN OUT string_agg_type)
return number
is
begin
    sctx := string_agg_type( string_list() );
    return ODCIConst.Success;
end;

member function ODCIAggregateIterate(self IN OUT string_agg_type,
                                     value IN varchar2 )
return number
is
begin
    data.extend;
    data(data.count) := value;
    return ODCIConst.Success;
end;

member function ODCIAggregateTerminate(self IN string_agg_type,
                                       returnValue OUT varchar2,
                                       flags IN number)
return number
is
    l_data varchar2(4000);
begin
    for x in ( select column_value from TABLE(data) order by 1 )
    loop
            l_data := l_data || ',' || x.column_value;
    end loop;
    returnValue := ltrim(l_data,',');
    return ODCIConst.Success;
end;

member function ODCIAggregateMerge(self IN OUT string_agg_type,
                                   ctx2 IN string_agg_type)
return number
is
begin
    for i in 1 .. ctx2.data.count
    loop
            data.extend;
            data(data.count) := ctx2.data(i);
    end loop;
    return ODCIConst.Success;
end;

end;
/
 
CREATE or replace
FUNCTION stragg(input varchar2 )
RETURN varchar2
PARALLEL_ENABLE AGGREGATE USING string_agg_type;
/


with source_data as
(
    select
      x,
      digit
    from (
      select x, column_value digit
      from t,
           table(cast(multiset(
              select 
                case substr(to_char(t.x),rownum,1)
                  when '0' then 'zero'
                  when '1' then 'one'
                  when '2' then 'two'
                  when '3' then 'three'
                  when '4' then 'four'
                  when '5' then 'five'
                  when '6' then 'six'
                  when '7' then 'seven'
                  when '8' then 'eight'
                  when '9' then 'nine'
                end str
              from dual
              connect by level <= length(to_char(t.x))) as string_list)
      )
    )
)
select x, stragg(digit) concat_str
from   source_data
group by x
order by 1

/
 
with source_data as
(
    select
      x,
      digit
    from (
      select x, column_value digit
      from t,
           table(cast(multiset(
              select '@'||lpad(level,10,'0')||'~'||
                case substr(to_char(t.x),rownum,1)
                  when '0' then 'zero'
                  when '1' then 'one'
                  when '2' then 'two'
                  when '3' then 'three'
                  when '4' then 'four'
                  when '5' then 'five'
                  when '6' then 'six'
                  when '7' then 'seven'
                  when '8' then 'eight'
                  when '9' then 'nine'
                end str
              from dual
              connect by level <= length(to_char(t.x))) as string_list)
      )
    )
)
select x, stragg(digit) concat_str
from   source_data
group by x
order by 1

/


with source_data as
(
    select
      x,
      digit
    from (
      select x, column_value digit
      from t,
           table(cast(multiset(
              select '@'||lpad(level,10,'0')||'~'||
                case substr(to_char(t.x),rownum,1)
                  when '0' then 'zero'
                  when '1' then 'one'
                  when '2' then 'two'
                  when '3' then 'three'
                  when '4' then 'four'
                  when '5' then 'five'
                  when '6' then 'six'
                  when '7' then 'seven'
                  when '8' then 'eight'
                  when '9' then 'nine'
                end str
              from dual
              connect by level <= length(to_char(t.x))) as string_list)
      )
    )
)
select x, regexp_replace(stragg(digit),'\@[0-9]*\~') concat_str
from   source_data
group by x
order by 1


/
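
For what it’s worth, if you are on 11g Release 2 or later you don’t need the custom aggregate at all – the native LISTAGG function does the job, and because it takes an explicit ordering clause, the ‘@’||lpad(…) prefix trick is not needed either. Here is a minimal sketch against the same table T (the POSITIONS subquery is just one of several ways to number the digits; adapt to taste):

```sql
with positions as (
  -- one row per possible digit position across the table
  select level pos
  from dual
  connect by level <= ( select max(length(to_char(x))) from t )
), digits as (
  select t.x, p.pos,
    case substr(to_char(t.x), p.pos, 1)
      when '0' then 'zero'  when '1' then 'one'
      when '2' then 'two'   when '3' then 'three'
      when '4' then 'four'  when '5' then 'five'
      when '6' then 'six'   when '7' then 'seven'
      when '8' then 'eight' when '9' then 'nine'
    end digit
  from t, positions p
  where p.pos <= length(to_char(t.x))
)
select x,
       listagg(digit, ',') within group ( order by pos ) concat_str
from   digits
group  by x
order  by 1;
```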




OpenWorld 2017–where is the tech content ?

I can’t actually remember how many OpenWorld conferences I have attended. At a guess it would be around 7 or 8. Every year there is a criticism that there is “not enough technical content”.

Well I’m calling “fake news” on that. I can’t remember a time that I’ve struggled to find sessions that I’m interested in. Don’t get me wrong – I would love it if there was double the technical content, especially in the core database area, but there’s a pretty simple reason for that. I’m a technically-oriented, database-oriented kinda guy 🙂. I’m sure that as I type this post, there is probably an Oracle Apps Functional Analyst preparing a blog post about how database content has gone overboard and there should be more Apps content. It’s a fact of life that you can’t please everyone.

“But I looked at the agenda, and a lot of it says ‘in the cloud'” I hear you exclaim. “It will all just be advertising!”.

Yeah, I’m calling “fake news” on that claim as well. Think about it for a second. If I’m submitting a talk for a conference that has a heavy emphasis on cloud architectures, then “Duh!”, of course I’m going to have “cloud” in the title or the synopsis. Because one of the prerequisites for giving a talk at the conference is getting the damn thing accepted! I suspect some of the cynics who claim the agenda does not have enough technical content have not taken the effort to actually study it.

But you need not worry… Some kind person has done that for you 🙂. Having enough technical content is the least of your problems if you’re attending OpenWorld 2017. I’ve been through the session catalog and scraped out those sessions that I have an interest in, and even from my narrow database-centric perspective, I still have a massively tough task of deciding between multiple clashing sessions for almost every single timeslot. And as an Oracle employee, it will be even harder, because I have to wait until all of the conference attendees have entered the session, and only then do I get in if there are a few seats still vacant!

So here’s my list of sessions I’d love to attend at OpenWorld. Click on the code for each to get more details, or (if you are an attendee) to add them to your session builder … you lucky devils! 🙂

Sunday 9:45

  • Franck Pachot Multitenant: Trust It by Understanding Beyond What Is Documented SUN1051 Marriott Marquis (Golden Gate Level) – Golden Gate A 01-OCT-2017 09:45 01-OCT-2017 09:45
  • Gary Gordhamer Oracle and NLS SUN5637 Moscone South – Room 155 01-OCT-2017 09:45 01-OCT-2017 09:45
  • Jason Arneil Cloud Networking for Beginners SUN1230 Marriott Marquis (Yerba Buena Level) – Nob Hill A/B 01-OCT-2017 09:45 01-OCT-2017 09:45

Sunday 10:45

  • Adeesh Fulay Docker 101 for Oracle DBAs SUN5617 Moscone South – Room 151 01-OCT-2017 10:45 01-OCT-2017 10:45
  • Francis Mignault Cloudy with a Chance of Oracle Application Express SUN6246 Marriott Marquis (Yerba Buena Level) – Salon 4-6 01-OCT-2017 10:45 01-OCT-2017 10:45
  • Debra Lilley,Ralf Koelling,Jan-Peter Timmermann EOUC Database ACES Share Their Favorite Database Things SUN2413 Marriott Marquis (Golden Gate Level) – Golden Gate A 01-OCT-2017 10:45 01-OCT-2017 10:45
  • Peter Koletzke Leveraging Oracle Database SQL and PL/SQL to Simplify User Interface App Dev SUN6252 Marriott Marquis (Yerba Buena Level) – Salon 10/11 01-OCT-2017 10:45 01-OCT-2017 10:45
  • Sai Janakiram Penumuru A 360-Degree View of Oracle Database Cloud Service SUN5296 Moscone South – Room 160 01-OCT-2017 10:45 01-OCT-2017 10:45
  • Stephen Kost The Thrifty DBA Does Database Security SUN5690 Moscone South – Room 159 01-OCT-2017 10:45 01-OCT-2017 10:45
  • Wayne Martin,Arun Sar Ingesting Data from Oracle Database 12cR2 in a Hadoop System SUN5615 Moscone South – Room 157 01-OCT-2017 10:45 01-OCT-2017 10:45

Sunday 11:45

  • Alex Zaballa Let’s Get Started with Oracle Database Cloud SUN1871 Moscone South – Room 314 01-OCT-2017 11:45 01-OCT-2017 11:45
  • Debra Lilley,Ralf Koelling,Jan-Peter Timmermann EOUC Developer Champions Show the Cool Tech They Use SUN2415 Marriott Marquis (Golden Gate Level) – Golden Gate A 01-OCT-2017 11:45 01-OCT-2017 11:45
  • Harris Baskaran SSL Securing Oracle Database: Goodbye Passwords SUN5642 Moscone South – Room 159 01-OCT-2017 11:45 01-OCT-2017 11:45
  • James Czuprynski Oracle Database In-Memory: Adventures with SwingBench TPC-DS SUN5644 Moscone South – Room 153 01-OCT-2017 11:45 01-OCT-2017 11:45
  • Yalim Gerger Version Control for PL/SQL SUN5700 Moscone South – Room 157 01-OCT-2017 11:45 01-OCT-2017 11:45

Sunday 12:45

  • Ludovico Caldara Get the Most Out of Oracle Data Guard SUN1399 Marriott Marquis (Yerba Buena Level) – Nob Hill A/B 01-OCT-2017 12:45 01-OCT-2017 12:45
  • Janis Griffin Need for Speed: Top Five Oracle Performance Tuning Tips SUN5647 Moscone South – Room 153 01-OCT-2017 12:45 01-OCT-2017 12:45
  • Oren Nakdimon Write Less (Code) with More: Oracle Database 12c New Features SUN2990 Marriott Marquis (Golden Gate Level) – Golden Gate A 01-OCT-2017 12:45 01-OCT-2017 12:45
  • Riyaj Shamsudeen Oracle Automatic Storage Management and Internals SUN5682 Moscone South – Room 155 01-OCT-2017 12:45 01-OCT-2017 12:45

Sunday 13:45

  • Brendan Tierney SQL: One Language to Rule All Your Data SUN1238 Marriott Marquis (Golden Gate Level) – Golden Gate A 01-OCT-2017 13:45 01-OCT-2017 13:45
  • Jonathan Dixon Alexa, How Do You Work with Oracle REST Data Services? SUN6251 Marriott Marquis (Yerba Buena Level) – Salon 4-6 01-OCT-2017 13:45 01-OCT-2017 13:45
  • Julian Dontcheff DBA Types SUN1111 Marriott Marquis (Yerba Buena Level) – Nob Hill A/B 01-OCT-2017 13:45 01-OCT-2017 13:45
  • Pini Dibask Database Consolidation Using Oracle’s Multitenant Architecture SUN5677 Moscone South – Room 153 01-OCT-2017 13:45 01-OCT-2017 13:45
  • Simon Pane,Ivica Arsov Securing Your DBA Monitoring and Backup Scripts SUN5687 Moscone South – Room 159 01-OCT-2017 13:45 01-OCT-2017 13:45
  • Stewart Bryson Kafka, Data Streaming, and Analytics Microservices SUN5693 Moscone South – Room 157 01-OCT-2017 13:45 01-OCT-2017 13:45
  • Tariq Farooq,francisco munoz alvarez Under a Cloudy Sky: The Different Types of Skies SUN4396 Moscone South – Room 313 01-OCT-2017 13:45 01-OCT-2017 13:45
  • Tim Gorman Linux/UNIX Tools for Oracle DBAs SUN5694 Moscone South – Room 151 01-OCT-2017 13:45 01-OCT-2017 13:45

Sunday 14:45

  • Alex Nuijten Life After Oracle Forms: The Move to Oracle Application Express SUN6274 Marriott Marquis (Yerba Buena Level) – Salon 4-6 01-OCT-2017 14:45 01-OCT-2017 14:45
  • Ami Aharonovich Exploring Oracle Database Partitioning New Features and Best Practices SUN5065 Marriott Marquis (Golden Gate Level) – Golden Gate A 01-OCT-2017 14:45 01-OCT-2017 14:45
  • Markus Flechtner Application Containers: Multitenancy for Database Applications SUN3110 Marriott Marquis (Yerba Buena Level) – Nob Hill A/B 01-OCT-2017 14:45 01-OCT-2017 14:45
  • Nelson Calero Redefining Tables Online Without Surprises SUN3541 Moscone South – Room 314 01-OCT-2017 14:45 01-OCT-2017 14:45

Sunday 15:45

  • Heli Helskyaho,Elise Valin-Raki,Tuomas Pystynen Oracle Private Cloud Built with Engineered Systems in Fennia Insurance Company SUN4487 Marriott Marquis (Golden Gate Level) – Golden Gate A 01-OCT-2017 15:45 01-OCT-2017 15:45
  • John King Oracle Database 12c New Features for Developers and DBAs SUN5654 Moscone South – Room 157 01-OCT-2017 15:45 01-OCT-2017 15:45
  • Martin D’Souza Explore Oracle Application Express APIs SUN6244 Marriott Marquis (Yerba Buena Level) – Salon 4-6 01-OCT-2017 15:45 01-OCT-2017 15:45
  • Mauro Pagano Adaptive Plans on Oracle Database 12c SUN5664 Moscone South – Room 155 01-OCT-2017 15:45 01-OCT-2017 15:45
  • Rita Nunez Managing Oracle Database in Oracle Database Exadata Express Cloud Service SUN3575 Moscone South – Room 314 01-OCT-2017 15:45 01-OCT-2017 15:45

Monday 11:00

  • Burt Clouse,Michael Sukhenko,Sreedhar Bobbadi Database Management with Gold Images: Easily Provision, Patch, and Upgrade CON6706 Moscone West – Room 3004 02-OCT-2017 11:00 02-OCT-2017 11:00
  • Geertjan Wielenga,Lucas Jellema,John Brock,Andrejus Baranovskis,Filip Huysmans,Luc Bors Getting Started with Oracle JavaScript Extension Toolkit CON3935 Moscone West – Room 3001 02-OCT-2017 11:00 02-OCT-2017 11:00
  • Mohamed Taman Effective Design of RESTful APIs CON1206 Moscone West – Room 2005 02-OCT-2017 11:00 02-OCT-2017 11:00
  • Perry Harrington,Mirko Ortensi MySQL DBA Primer HOL7298 Hilton San Francisco Union Square (Ballroom Level) – Continental Ballroom 7 02-OCT-2017 11:00 02-OCT-2017 11:00
  • Sergiusz Wolicki New Paradigm for Case-Sensitivity and Collation on Oracle Database 12c Release 2 CON6551 Moscone West – Room 3008 02-OCT-2017 11:00 02-OCT-2017 11:00

Monday 11:30

  • I’ll be in the theatre area in the Demo Grounds doing a lightning talk on Multi-tenant and In-Memory in 12c Release 2

Monday 12:15

  • Geertjan Wielenga,Lucas Jellema,John Brock,Andrejus Baranovskis,Filip Huysmans,Luc Bors Java Versus JavaScript for the Enterprise? CON6063 Moscone West – Room 2007 02-OCT-2017 12:15 02-OCT-2017 12:15
  • John Heimann How Safe Am I? Choosing the Right Security Tool for the Job CON6303 Moscone South – Room 313 02-OCT-2017 12:15 02-OCT-2017 12:15
  • Shay Shmeltzer,Patrik Frankovic Introduction to Oracle’s Visual Low-Code Development Platform CON6510 Moscone West – Room 3001 02-OCT-2017 12:15 02-OCT-2017 12:15

Monday 13:15

  • Avi Miller,Marshall Weymouth,Brian Grad Tips for Securing Your Docker Containers CON6462 Marriott Marquis (Yerba Buena Level) – Salon 13 02-OCT-2017 13:15 02-OCT-2017 13:15
  • Bruce Lowenthal,Juan Perez-Etchegoyen To Patch or Not to Patch: Answering the CPU Question CON6302 Moscone South – Room 313 02-OCT-2017 13:15 02-OCT-2017 13:15
  • Eleanor Meritt New Release Model for Oracle Database CON6550 Moscone West – Room 3014 02-OCT-2017 13:15 02-OCT-2017 13:15
  • Jeff Smith What’s New in Oracle REST Data Services and Oracle SQL Developer CON6669 Moscone West – Room 3012 02-OCT-2017 13:15 02-OCT-2017 13:15

Monday 14:15

  • George Lumpkin What’s New in Oracle Database 12c for Data Warehousing CON6643 Moscone West – Room 3010 02-OCT-2017 14:15 02-OCT-2017 14:15
  • Heli Helskyaho,Elise Valin-Raki,Tuomas Pystynen Design Your Databases Straight to the Cloud CON1846 Moscone West – Room 3007 02-OCT-2017 14:15 02-OCT-2017 14:15
  • Juan Loaiza Oracle Exadata: Disruptive New Memory and Cloud Technologies CON6661 Moscone West – Room 3014 02-OCT-2017 14:15 02-OCT-2017 14:15
  • Roel Hartman Docker for Dummies CON1049 Moscone West – Room 3005 02-OCT-2017 14:15 02-OCT-2017 14:15

Monday 15:15

  • Alan Williams,Keith Wilcox NEW FEATURE! Centralized Database User Management Using Active Directory CON6574 Moscone West – Room 3011 02-OCT-2017 15:15 02-OCT-2017 15:15
  • Jeff Smith Oracle SQL Developer: GUI, CLI, or Browser? CON6670 Moscone West – Room 3012 02-OCT-2017 15:15 02-OCT-2017 15:15


Monday 15:30

  • I’ll be in the theatre area in the Demo Grounds doing a lightning talk on hierarchy SQL and the WITH clause

Monday 16:45

  • Guido Schmutz Apache Kafka: Scalable Message Processing and More CON6156 Moscone West – Room 2004 02-OCT-2017 16:30 02-OCT-2017 16:30
  • Chaitanya Koratamaddi Oracle Application Express 5.2 New Features CON6739 Moscone West – Room 3002 02-OCT-2017 16:45 02-OCT-2017 16:45
  • Chaitanya Pydimukkala,Sai Devabhaktuni,Bobby Curtis Oracle Data Integration Platform Empowers Enterprise-Grade Big Data Solutions CON6893 Marriott Marquis (Golden Gate Level) – Golden Gate B 02-OCT-2017 16:45 02-OCT-2017 16:45
  • George Lumpkin Data Warehousing for Everybody: Oracle Data Warehouse Cloud Service CON6647 Moscone West – Room 3004 02-OCT-2017 16:45 02-OCT-2017 16:45
  • Julian Dontcheff Oracle RAC 12c Release 2: Operational Best Practices CON6684 Moscone West – Room 3014 02-OCT-2017 16:45 02-OCT-2017 16:45
  • Lisa Considine Five Inexpensive Tricks to Accelerate Oracle Database Using x86 CON7580 Marriott Marquis (Yerba Buena Level) – Nob Hill A/B 02-OCT-2017 16:45 02-OCT-2017 16:45
  • Nancy Ikeda,Christopher Jones,Anthony Tuininga Oracle Net Services 12c: Best Practices for Database Performance and Scalability CON6718 Moscone West – Room 3011 02-OCT-2017 16:45 02-OCT-2017 16:45
  • Santanu Datta,Christian Shay Oracle and .NET: Intro and What’s New CON6722 Moscone West – Room 3012 02-OCT-2017 16:45 02-OCT-2017 16:45

Monday 17:45

  • Lawrence Mcintosh,Ken Kutzer Best Practices for implementing Database Security CON6434 Marriott Marquis (Yerba Buena Level) – Nob Hill A/B 02-OCT-2017 17:45 02-OCT-2017 17:45
  • Michael Hichwa Low-Code Rapid Application Development CON6685 Moscone West – Room 3002 02-OCT-2017 17:45 02-OCT-2017 17:45
  • Pedro Lopes NEW! Database Security Assessment Tool Discovers Top Security Risks CON6575 Moscone West – Room 3011 02-OCT-2017 17:45 02-OCT-2017 17:45

Tuesday 11:00

  • Chris Richardson ACID Is So Yesterday: Maintaining Data Consistency with Sagas CON2832 Moscone West – Room 2005 03-OCT-2017 11:00 03-OCT-2017 11:00
  • Hermann Baer,Yasin Baskan What Everybody Should Know About Oracle Partitioning CON6638 Moscone West – Room 3008 03-OCT-2017 11:00 03-OCT-2017 11:00
  • Julian Dontcheff,Anil Nair,Markus Michalewicz Oracle RAC 12c Release 2 and Cluster Architecture Internals CON6690 Moscone West – Room 3004 03-OCT-2017 11:00 03-OCT-2017 11:00

Tuesday 11:30

  • Dominic Giles, Maria Colgan Oracle Database and the Internet of Things CON7121 Moscone West – Room 3011 03-OCT-2017 11:30 03-OCT-2017 11:30
  • Colm Divilly Oracle REST Data Services/Oracle Database REST API CON6662 Moscone West – Room 3012 03-OCT-2017 11:30 03-OCT-2017 11:30
  • Gp Gongloor,Sriram Vrinda,Konstantin Kerekovski SQL Tuning for Expert DBAs CON7091 Moscone West – Room 3018 03-OCT-2017 11:30 03-OCT-2017 11:30
  • Tirthankar Lahiri,Ananth Raghavan,Doug Hood Oracle Database In-Memory Deep Dive: Past, Present, and Future CON6584 Moscone West – Room 3014 03-OCT-2017 11:30 03-OCT-2017 11:30

Tuesday 12:45

  • Bryn Llewellyn Ten Rules for Doing a PL/SQL Performance Experiment CON7639 Moscone West – Room 3011 03-OCT-2017 12:45 03-OCT-2017 12:45
  • Colm Divilly Securing Your RESTful Services CON6667 Moscone West – Room 3012 03-OCT-2017 12:45 03-OCT-2017 12:45

Tuesday 13:15

  • Connor Mcdonald Leaner, Faster Code with Advanced SQL Techniques CON3557 Moscone West – Room 2002 03-OCT-2017 13:15 03-OCT-2017 13:15 RubberStamp-Must-See

Tuesday 15:00

  • Dan Mcghan Creating RESTful Web Services the Easy Way with Node.js CON1242 Moscone West – Room 2002 03-OCT-2017 15:00 03-OCT-2017 15:00

Tuesday 15:45

  • Dan Mcghan,Chris Saxon 12 Things Developers Will Love About Oracle Database 12c Release 2 CON6734 Moscone West – Room 3014 03-OCT-2017 15:45 03-OCT-2017 15:45
  • Martin Gubar Oracle Big Data SQL: Roadmap to High-Performance Data Virtualization CON6644 Moscone West – Room 3010 03-OCT-2017 15:45 03-OCT-2017 15:45

Tuesday 16:45

  • Stephan Haisley,Sai Krishnamurthy Maximizing Availability for Oracle GoldenGate Microservices CON6570 Moscone West – Room 3014 03-OCT-2017 16:45 03-OCT-2017 16:45

Tuesday 17:45

  • Connor Mcdonald Using Advanced SQL Techniques for Faster Applications CON6735 Moscone West – Room 3014 03-OCT-2017 17:45 03-OCT-2017 17:45 RubberStamp-Must-See

Wednesday 09:45

  • Nancy Ikeda,Christopher Jones,Anthony Tuininga Python and Oracle Database 12c: Scripting for the Future HOL7605 Hilton San Francisco Union Square (Ballroom Level) – Continental Ballroom 6 04-OCT-2017 09:45 04-OCT-2017 09:45

Wednesday 11:00

  • Greg Drobish Oracle Database 12c Storage Features that Cut Database Tuning Time by 67 Percent CON4671 Marriott Marquis (Yerba Buena Level) – Nob Hill C/D 04-OCT-2017 11:00 04-OCT-2017 11:00
  • Juergen Mueller,Andrew Holdsworth Why the Fastest IO Is Still the One You Don’t Do CON6560 Moscone West – Room 3012 04-OCT-2017 11:00 04-OCT-2017 11:00
  • Mark Drake NoSQL Development and JSON Support in the Next Generation of Oracle Database CON6636 Moscone West – Room 3011 04-OCT-2017 11:00 04-OCT-2017 11:00
  • Mark Fallon Inside the Head of a Database Hacker CON6572 Moscone West – Room 3014 04-OCT-2017 11:00 04-OCT-2017 11:00
  • Thomas Baby Oracle Multitenant: Ask the Architects CON6731 Moscone West – Room 3010 04-OCT-2017 11:00 04-OCT-2017 11:00
  • Tim Goh,Michael Schulman NoSQL Data Modeling for RDBMS Users CON6545 Moscone West – Room 3008 04-OCT-2017 11:00 04-OCT-2017 11:00

Wednesday 12:00

  • Gregg Christman Get the Best Out of Oracle Compression CON6577 Moscone West – Room 3006 04-OCT-2017 12:00 04-OCT-2017 12:00
  • Pravin Jha Oracle Database 12.2: Lockdown Profiles CON6424 Moscone West – Room 3016 04-OCT-2017 12:00 04-OCT-2017 12:00
  • Gerald Venzl, Maria Colgan Oracle Database Features Every Developer Should Know About CON6558 Moscone West – Room 3020 04-OCT-2017 12:00 04-OCT-2017 12:00

Wednesday 13:00

  • Connor Mcdonald War of the Worlds: DBAs Versus Developers CON6737 Moscone West – Room 3014 04-OCT-2017 13:00 04-OCT-2017 13:00 RubberStamp-Must-See

Wednesday 14:00

  • Mike Dietrich,Jay Barnhart Migrate a 100 TB Database Cross-Platform in Less Than a Day CON6467 Moscone West – Room 3004 04-OCT-2017 14:00 04-OCT-2017 14:00
  • Dominic Giles, Maria Colgan Five Things You Might Not Know About Oracle Database CON6631 Moscone West – Room 3020 04-OCT-2017 14:00 04-OCT-2017 14:00

Wednesday 14:45

  • Dan Mcghan Building Real-time Data in Web Applications with Node.js CON1240 Moscone West – Room 2003 04-OCT-2017 14:45 04-OCT-2017 14:45

Wednesday 15:30

  • Bryn Llewellyn Guarding Your Data Behind a Hard Shell PL/SQL API CON6633 Moscone West – Room 3014 04-OCT-2017 15:30 04-OCT-2017 15:30

Wednesday 16:30

  • Scott Rotondo Top Five Tips for Building Secure Database Applications CON6578 Moscone West – Room 3011 04-OCT-2017 16:30 04-OCT-2017 16:30
  • Ashish Agrawal SQL Analytics: Using Automatic Workload Repository and Active Session History CON7065 Moscone West – Room 3018 04-OCT-2017 17:00 04-OCT-2017 17:00
  • Tim Hall Make the RDBMS Relevant Again with RESTful Web Services and JSON CON7358 Moscone West – Room 2002 04-OCT-2017 16:30 04-OCT-2017 16:30

Wednesday 17:30

  • John King Blockchain? What Is Blockchain? Why Do I Care? CON2276 Moscone West – Room 2010 04-OCT-2017 17:30 04-OCT-2017 17:30
  • Lucas Jellema,Jurgen Leijer JavaScript to SQL: Diagnose Application Issues Across a Hybrid Infrastructure CON6914 Moscone West – Room 2024 04-OCT-2017 17:30 04-OCT-2017 17:30
  • Stewart Bryson Practical Advice from a Developer Using a Cloud-Based DevOps Environment CON7377 Moscone West – Room 2002 04-OCT-2017 17:30 04-OCT-2017 17:30

Thursday 10:45

  • Beda Hammerschmidt,Vikas Arora,Maxim Orgiyan The Top 10 Dos and Don’ts of JSON Processing in a Database CON4647 Marriott Marquis (Golden Gate Level) – Golden Gate C1/C2 05-OCT-2017 10:45 05-OCT-2017 10:45

Thursday 13:45

  • Brendan Tierney Is SQL the Best Language for Statistics and Machine Learning? CON7350 Marriott Marquis (Golden Gate Level) – Golden Gate C3 05-OCT-2017 13:45 05-OCT-2017 13:45


Thursday 14:45

  • Heli Helskyaho,Elise Valin-Raki,Tuomas Pystynen The Basics of Machine Learning CON7354 Marriott Marquis (Golden Gate Level) – Golden Gate C1/C2 05-OCT-2017 14:45 05-OCT-2017 14:45
  • Lucas Jellema,Jurgen Leijer How End User Monitoring and Selenium Can Help Troubleshoot Customer issues CON6912 Marriott Marquis (Golden Gate Level) – Golden Gate C3 05-OCT-2017 14:45 05-OCT-2017 14:45

 

And as I mentioned before…the whole AskTOM team will be at OpenWorld for the week.

Partition-wise join

So just what is a “partition-wise” join?  We will use a metaphor to hopefully 🙂 explain the benefit.


Let’s say two people, Logan and Shannon, decide to move in together.  If each of them already has an existing residence, they will both have a lot of the common items that you find in any household.  So they have a decision to make – do they keep two of everything, or do they have a bit of a “cull” of the things they have in common?  In this imaginary scenario, we will focus on household items in the bathroom and the kitchen.  Logan grabs a set of kitchen knives in a knife block, calls Shannon and asks: “Hey Shannon, do you already have a knife block?”

What do you think Shannon will do ? Search the entire house for an existing knife block ?  Of course not.  If there is a knife block, then the only place it will be located will be in the kitchen.  In fact, when matching up the items throughout the house, Shannon and Logan will restrict their investigation to the room that makes sense for the item in question.  That is just common sense – why would anyone search in the bathroom for (say) forks and spoons ?  It would just be a waste of effort.

(Editor’s note:  Anyone with young children will of course dispute this metaphor, stating quite correctly that you can probably find every possible household item in every possible room, and probably outside as well 🙂 but we’ll omit that possibility for the sake of this discussion.)


And that is exactly what a partition-wise join enables us to do in the database.  If two tables are partitioned with the same definition, and we are joining on the partition key, then that definition guarantees that for a row in one table with partition key “K” and hence partition “P”, we only need to seek that row in the same partition in the table we are joining to (where “same” is based on the partitioning definition).  It is the partitioning equivalent of “only searching in the kitchen and not the bathroom”.  We can see this via the execution plan when doing such a join.  Let’s create two tables with equal partition definitions and then join on the partition key.


SQL> --
SQL> -- Example 1
SQL> --
SQL>
SQL> drop table t1 purge;

Table dropped.

SQL> drop table t2 purge;

Table dropped.

SQL>
SQL> create table t1 ( x int, y int )
  2  partition by range ( x )
  3  (
  4  partition p_kitchen values less than (10000),
  5  partition p_bathroom values less than (20000),
  6  partition p_dining values less than (30000)
  7  );

Table created.

SQL>
SQL>
SQL> create table t2 ( x int, y int )
  2  partition by range ( x )
  3  (
  4  partition p_kitchen values less than (10000),
  5  partition p_bathroom values less than (20000),
  6  partition p_dining values less than (30000)
  7  );

Table created.

SQL>
SQL>
SQL> insert into t1 select rownum, rownum from dual connect by level < 30000;

29999 rows created.

SQL> insert into t2 select * from t1;

29999 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats('','t1')

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_table_stats('','t2')

PL/SQL procedure successfully completed.

SQL>
SQL> --
SQL> set autotrace traceonly explain
SQL> select count(t1.y), count(t2.y)
  2  from t1,t2
  3  where t1.x = t2.x;

Execution Plan
----------------------------------------------------------
Plan hash value: 3155849676

---------------------------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |     1 |    20 |  1641   (1)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE      |      |     1 |    20 |            |          |       |       |
|   2 |   PARTITION RANGE ALL|      | 29999 |   585K|  1641   (1)| 00:00:01 |     1 |     3 |
|*  3 |    HASH JOIN         |      | 29999 |   585K|  1641   (1)| 00:00:01 |       |       |
|   4 |     TABLE ACCESS FULL| T1   | 29999 |   292K|   820   (1)| 00:00:01 |     1 |     3 |
|   5 |     TABLE ACCESS FULL| T2   | 29999 |   292K|   820   (1)| 00:00:01 |     1 |     3 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("T1"."X"="T2"."X")

SQL> set autotrace off
SQL>

The key part of the execution plan here is that the HASH JOIN occurs within (or "under") the PARTITION RANGE ALL iteration.  This can be interpreted as: "Start with the first partition in each table and do a hash join on that partition pair.  Then move on to the next partition and do a hash join on that pair", and so on.  This is efficient on resources because at no point are we trying (and obviously failing) to join a row from table T1 partition P_KITCHEN to table T2 partition P_BATHROOM or P_DINING.  Each hash join is a smaller operation and hence more likely to complete within the available PGA allocation for that session.  Also, when it comes to running such a query in parallel, each parallel slave can tackle a partition in isolation from the other slaves.
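The per-partition approach can be sketched conceptually in a few lines of Python (illustrative only – this is not how Oracle implements it internally).  The point is that both tables share one partitioning rule, so each hash table is built and probed over a single small partition pair:

```python
# Conceptual sketch of a partition-wise hash join (not Oracle internals).
# Both "tables" are range-partitioned identically on the join key:
# key < 10000 -> partition 0, < 20000 -> partition 1, < 30000 -> partition 2.

def partition_of(key):
    """Map a join key to its partition number (same rule for both tables)."""
    return key // 10000

def partition_wise_join(t1_rows, t2_rows, num_partitions=3):
    # Distribute each table's (key, value) rows using the shared rule.
    t1_parts = [[] for _ in range(num_partitions)]
    t2_parts = [[] for _ in range(num_partitions)]
    for key, val in t1_rows:
        t1_parts[partition_of(key)].append((key, val))
    for key, val in t2_rows:
        t2_parts[partition_of(key)].append((key, val))

    results = []
    # Join one partition pair at a time: each hash table is small, and no
    # row is ever probed against a non-matching partition.
    for p in range(num_partitions):
        hash_table = {}
        for key, val in t1_parts[p]:
            hash_table.setdefault(key, []).append(val)
        for key, val in t2_parts[p]:
            for v1 in hash_table.get(key, []):
                results.append((key, v1, val))
    return results
```

Because each iteration of the loop is independent, it is also easy to see why parallel slaves can each take a partition pair without coordinating with one another.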

If the partitions do not align (see the Editor's note above 🙂), then our join will not be as efficient.


SQL> --
SQL> -- Example 2
SQL> --
SQL>
SQL>
SQL> drop table t2 purge;

Table dropped.

SQL> create table t2 ( x int, y int )
  2  partition by range ( x )
  3  (
  4  partition p1 values less than (15000),
  5  partition p3 values less than (30000)
  6  );

Table created.

SQL>
SQL> --
SQL> -- all partitions do NOT align, so we do NOT see partition-wise join
SQL> --
SQL>
SQL> insert into t2 select * from t1;

29999 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats('','t2')

PL/SQL procedure successfully completed.

SQL> set autotrace traceonly explain
SQL> select count(t1.y), count(t2.y)
  2  from t1,t2
  3  where t1.x = t2.x;

Execution Plan
----------------------------------------------------------
Plan hash value: 666786458

---------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name    | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |         |     1 |    20 |  1369   (1)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE               |         |     1 |    20 |            |          |       |       |
|*  2 |   HASH JOIN                   |         | 29999 |   585K|  1369   (1)| 00:00:01 |       |       |
|   3 |    PART JOIN FILTER CREATE    | :BF0000 | 29999 |   585K|  1369   (1)| 00:00:01 |       |       |
|   4 |     PARTITION RANGE ALL       |         | 29999 |   292K|   820   (1)| 00:00:01 |     1 |     3 |
|   5 |      TABLE ACCESS FULL        | T1      | 29999 |   292K|   820   (1)| 00:00:01 |     1 |     3 |
|   6 |    PARTITION RANGE JOIN-FILTER|         | 29999 |   292K|   548   (1)| 00:00:01 |:BF0000|:BF0000|
|   7 |     TABLE ACCESS FULL         | T2      | 29999 |   292K|   548   (1)| 00:00:01 |:BF0000|:BF0000|
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("T1"."X"="T2"."X")

Note
-----
   - this is an adaptive plan

SQL> set autotrace off
SQL>
SQL>

The key element here is that the HASH JOIN now sits above the iteration through all of the partitions.  In earlier releases of Oracle, you would not see the line containing :BF0000, so it would be a simple join across all the rows as if the tables were not partitioned at all.  But when the partitions do not align, things are slightly better in modern releases.  We use a "Bloom filter" (hence the :BF prefix) to reduce the overhead of joining the two tables.  Since I'm using metaphors in this post, think of "phoning ahead" to the cinema to see if there are seats available for your favourite movie.  If the cinema owner says the movie is sold out, you have saved yourself a car trip.  But just because the owner says there are seats available, it is still possible you might drive there and find that the movie has sold out in the meantime.  A Bloom filter is like phoning ahead – there's a good chance you can avoid some work, but it is not a guarantee.  You can read about Bloom filters here in a great whitepaper by Christian Antognini.
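To make the "phone ahead" idea concrete, here is a minimal Bloom filter sketch in Python (illustrative only – Oracle's internal implementation differs).  A lookup can answer "definitely not here" or "possibly here", which is why the join filter can safely skip partitions but may still probe some that turn out to contain no matches:

```python
import hashlib

class BloomFilter:
    """Tiny illustrative Bloom filter: no false negatives, but possible
    false positives (the 'seats available but sold out on arrival' case)."""

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = [False] * num_bits

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # False => definitely absent; True => only *possibly* present.
        return all(self.bits[pos] for pos in self._positions(item))
```

In the execution plan above, the :BF0000 filter is built from the T1 join keys and consulted while scanning T2, so partitions whose keys definitely cannot match are never probed.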

Note that all of the partitions must align.  Here is an example where the first three partitions are in alignment, with boundaries at 10000, 20000 and 30000, but our second table T2 has an additional partition defined.  Once again, we fall back to the Bloom filter option.


SQL> --
SQL> -- Example 3
SQL> --
SQL> drop table t2 purge;

Table dropped.

SQL> create table t2 ( x int, y int )
  2  partition by range ( x )
  3  (
  4  partition p1 values less than (10000),
  5  partition p2 values less than (20000),
  6  partition p3 values less than (30000),
  7  partition p4 values less than (40000)
  8  );

Table created.

SQL>
SQL> --
SQL> -- all partitions do NOT align, so we do NOT see partition-wise join
SQL> --
SQL>
SQL> insert into t2 select rownum, rownum from dual connect by level < 40000;

39999 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats('','t2')

PL/SQL procedure successfully completed.

SQL> set autotrace traceonly explain
SQL> select count(t1.y), count(t2.y)
  2  from t1,t2
  3  where t1.x = t2.x;

Execution Plan
----------------------------------------------------------
Plan hash value: 666786458

---------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name    | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |         |     1 |    20 |  1913   (1)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE               |         |     1 |    20 |            |          |       |       |
|*  2 |   HASH JOIN                   |         | 29999 |   585K|  1913   (1)| 00:00:01 |       |       |
|   3 |    PART JOIN FILTER CREATE    | :BF0000 | 29999 |   585K|  1913   (1)| 00:00:01 |       |       |
|   4 |     PARTITION RANGE ALL       |         | 29999 |   292K|   820   (1)| 00:00:01 |     1 |     3 |
|   5 |      TABLE ACCESS FULL        | T1      | 29999 |   292K|   820   (1)| 00:00:01 |     1 |     3 |
|   6 |    PARTITION RANGE JOIN-FILTER|         | 39999 |   390K|  1093   (1)| 00:00:01 |:BF0000|:BF0000|
|   7 |     TABLE ACCESS FULL         | T2      | 39999 |   390K|  1093   (1)| 00:00:01 |:BF0000|:BF0000|
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("T1"."X"="T2"."X")

Note
-----
   - this is an adaptive plan

SQL> set autotrace off
SQL>
SQL>
SQL>


So faster queries on partitioned tables are not just about partition pruning.  Partition-wise joins can also have a beneficial impact on query response times.

Updating indexes with partition maintenance

An index is basically a structure that maps keys (values) in columns to the physical location of their corresponding rows in a table.  So if you move the rows (i.e., change the physical location of a row), then the index entries for those rows need to be updated, or the index is no longer usable.  And as most people are aware, the latter is the default when you perform a partition maintenance operation on a table that relocates rows.  For example, we'll create a simple partitioned table, add both a local and a global index, split one of the partitions, and then see what the impact on those indexes is.


SQL>
SQL> drop table t purge;

Table dropped.

SQL>
SQL> create table t (x int, y int, z int )
  2  partition by range (x)
  3  (
  4    partition p1 values less than (4000),
  5    partition p2 values less than (8000)
  6  );

Table created.

SQL>
SQL> insert into t select q*1000,rownum,rownum from dba_objects, ( select rownum q from dual connect by level <= 7 );

548338 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> create index ix_local on t ( y ) local;

Index created.

SQL> create index ix_global on t ( z ) ;

Index created.

SQL>
SQL> alter table t split partition p2 at (6000)
  2    into ( partition p2a, partition p2b ) ;

Table altered.

SQL>
SQL> select index_name, partition_name, status
  2  from user_ind_partitions
  3  where index_name like 'IX%';

INDEX_NAME                     PARTITION_NAME       STATUS
------------------------------ -------------------- --------
IX_LOCAL                       P1                   USABLE
IX_LOCAL                       P2A                  UNUSABLE
IX_LOCAL                       P2B                  UNUSABLE

3 rows selected.

SQL>
SQL> select index_name, status
  2  from user_indexes
  3  where index_name like 'IX%';

INDEX_NAME                     STATUS
------------------------------ --------
IX_GLOBAL                      UNUSABLE
IX_LOCAL                       N/A

2 rows selected.

SQL>
SQL>
SQL>

The result is what we would expect.  The global index has become unusable because a portion of the rows that had entries in the index have been moved as part of the partition split operation.  The local index has also become unusable, but only for those partitions that were impacted by the split.  Partition P1 was not affected by the split, and hence the corresponding local index partition is still valid.  Just a quick footnote – the "N/A" in USER_INDEXES for the local index indicates that the STATUS of a local index must be determined by looking at the status of each underlying partition.  In early releases of Oracle this could create problems in terms of availability for partitioned tables, because if you tried to use an index that had become unusable, you would get an error: "ORA-01502: index or partition of such index is in unusable state", which is not the most pleasant thing to be sending back to your application users 🙂

In Oracle 9, a workaround to the problem was introduced: the UPDATE GLOBAL INDEXES clause.  This keeps the global index entries valid by correcting them as the partition operation is performed.  It has some cost in terms of resource consumption, because obviously maintaining a large number of index entries will consume CPU and redo, but it keeps the index available to application users.  Here's a demo of that in action:


SQL> drop table t purge;

Table dropped.

SQL>
SQL> create table t (x int, y int, z int )
  2  partition by range (x)
  3  (
  4    partition p1 values less than (4000),
  5    partition p2 values less than (8000)
  6  );

Table created.

SQL>
SQL> insert into t select q*1000,rownum,rownum from dba_objects, ( select rownum q from dual connect by level <= 7 );

548338 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> create index ix_local on t ( y ) local;

Index created.

SQL> create index ix_global on t ( z ) ;

Index created.

SQL>
SQL> alter table t split partition p2 at (6000)
  2    into ( partition p2a, partition p2b )
  3    update global indexes;

Table altered.

SQL>
SQL> select index_name, partition_name, status
  2  from user_ind_partitions
  3  where index_name like 'IX%';

INDEX_NAME                     PARTITION_NAME       STATUS
------------------------------ -------------------- --------
IX_LOCAL                       P1                   USABLE
IX_LOCAL                       P2A                  UNUSABLE
IX_LOCAL                       P2B                  UNUSABLE

3 rows selected.

SQL>
SQL> select index_name, status
  2  from user_indexes
  3  where index_name like 'IX%';

INDEX_NAME                     STATUS
------------------------------ --------
IX_GLOBAL                      VALID
IX_LOCAL                       N/A

2 rows selected.

SQL>
SQL> alter index ix_local rebuild partition P2A;

Index altered.

SQL> alter index ix_local rebuild partition P2B;

Index altered.

But notice that the local index has not been spared from being marked unusable.  We had to rebuild each local index partition after the operation.  You might be thinking that having only UPDATE GLOBAL INDEXES was hence a waste of time, but don't forget that some partition operations, such as DROP and TRUNCATE, do not impact local indexes, so updating the global index entries is all that is required:


SQL> alter table t drop partition p2a update global indexes;

Table altered.

SQL>
SQL> select index_name, partition_name, status
  2  from user_ind_partitions
  3  where index_name like 'IX%';

INDEX_NAME                     PARTITION_NAME       STATUS
------------------------------ -------------------- --------
IX_LOCAL                       P1B                  USABLE
IX_LOCAL                       P2B                  USABLE

2 rows selected.

SQL>
SQL> select index_name,  status
  2  from user_indexes
  3  where index_name like 'IX%';

INDEX_NAME                     STATUS
------------------------------ --------
IX_GLOBAL                      VALID
IX_LOCAL                       N/A

2 rows selected.

SQL>
SQL>

But if you do have existing scripts that contain the UPDATE GLOBAL INDEXES clause, it is perhaps worth revisiting them to take advantage of the more complete implementation of this facility, which arrived in Oracle 11.  The UPDATE INDEXES clause will take care of both global and local indexes during partition maintenance operations.


SQL>
SQL> alter table t split partition p1 at (2000)
  2    into ( partition p1a, partition p1b )
  3    update indexes;

Table altered.

SQL>
SQL> select index_name, partition_name, status
  2  from user_ind_partitions
  3  where index_name like 'IX%';

INDEX_NAME                     PARTITION_NAME       STATUS
------------------------------ -------------------- --------
IX_LOCAL                       P1A                  USABLE
IX_LOCAL                       P1B                  USABLE
IX_LOCAL                       P2A                  USABLE
IX_LOCAL                       P2B                  USABLE

4 rows selected.

SQL>
SQL> select index_name,  status
  2  from user_indexes
  3  where index_name like 'IX%';

INDEX_NAME                     STATUS
------------------------------ --------
IX_GLOBAL                      VALID
IX_LOCAL                       N/A

2 rows selected.

SQL>
SQL> alter table t drop partition p1a update indexes;

Table altered.

SQL>
SQL> select index_name, partition_name, status
  2  from user_ind_partitions
  3  where index_name like 'IX%';

INDEX_NAME                     PARTITION_NAME       STATUS
------------------------------ -------------------- --------
IX_LOCAL                       P1B                  USABLE
IX_LOCAL                       P2A                  USABLE
IX_LOCAL                       P2B                  USABLE

3 rows selected.

SQL>
SQL> select index_name,  status
  2  from user_indexes
  3  where index_name like 'IX%';

INDEX_NAME                     STATUS
------------------------------ --------
IX_GLOBAL                      VALID
IX_LOCAL                       N/A

2 rows selected.

SQL>

Not that I can see a real need for it, but it's worth noting that these are your only two options.  There is no such thing as keeping only the local index entries updated and not the global ones.


SQL>  alter table t split partition p2 at (6000)
  2      into ( partition p2a, partition p2b ) update local indexes;
    into ( partition p2a, partition p2b ) update local indexes
                                                 *
ERROR at line 2:
ORA-00905: missing keyword

Finally, remember that since Oracle 10g we decided that it is better to allow a query to run slowly rather than have it crash with an error, so the default value for "skip_unusable_indexes" is now TRUE.  So if you have indexes with a status of UNUSABLE, you will not detect this via errors in your SQL queries.  The queries will just be optimized as if the indexes did not exist, which might have an impact on their execution time.  Here is a simple starting point for a monitoring query to keep an eye on unusable indexes (and rebuild them if necessary):


SQL> set serverout on
SQL> declare
  2    procedure ddl(p_cmd varchar2) is
  3    begin
  4      --execute immediate p_cmd;
  5      dbms_output.put_line(p_cmd);
  6    end;
  7  begin
  8      for i in (
  9          select index_owner, index_name, partition_name, 'partition' ddl_type
 10          from all_ind_partitions
 11          where status = 'UNUSABLE'
 12          union all
 13          select index_owner, index_name, subpartition_name, 'subpartition' ddl_type
 14          from all_ind_subpartitions
 15          where status = 'UNUSABLE'
 16          union all
 17          select owner, index_name, null, null
 18          from all_indexes
 19          where status = 'UNUSABLE'
 20      )
 21      loop
 22        if i.ddl_type is null then
 23          ddl('alter index '||i.index_owner||'.'||i.index_name||' rebuild');
 24        else
 25          ddl('alter index '||i.index_owner||'.'||i.index_name||' modify '||i.ddl_type||' '||i.partition_name||' rebuild');
 26        end if;
 27      end loop;
 28  end;
 29  /
alter index MCDONAC.T_PAR_IX modify partition P1 rebuild
alter index MCDONAC.T_PAR_IX modify partition P2 rebuild

PL/SQL procedure successfully completed.

And finally, as Tim reminded me, with 12c Release 2, many partition operations (and many other maintenance operations as well) can now be done online simply by specifying ONLINE as a suffix to the operation. That (as the name suggests) will keep all indexes in a USABLE state.


SQL>  create table t (x int, y int, z int )
  2      partition by range (x)
  3      (
  4        partition p1 values less than (4000),
  5        partition p2 values less than (8000)
  6      );

Table created.

SQL>
SQL> insert into t select q*1000,rownum,rownum from dba_objects, ( select rownum q from dual connect by level <= 7 );

547862 rows created.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> create index ix_local on t ( y ) local;

Index created.

SQL>
SQL> create index ix_global on t ( z ) ;

Index created.

SQL>
SQL> alter table t split partition p2 at (6000)
  2      into ( partition p2a, partition p2b ) online;

Table altered.

SQL> select index_name, partition_name, status
  2  from user_ind_partitions
  3  where index_name like 'IX%';

INDEX_NAME                     PARTITION_NAME                 STATUS
------------------------------ ------------------------------ --------
IX_LOCAL                       P1                             USABLE
IX_LOCAL                       P2A                            USABLE
IX_LOCAL                       P2B                            USABLE

3 rows selected.

SQL>
SQL> select index_name, status
  2  from user_indexes
  3  where index_name like 'IX%';

INDEX_NAME                     STATUS
------------------------------ --------
IX_GLOBAL                      VALID
IX_LOCAL                       N/A

2 rows selected.



LOBs and tiny typos

This one caught me out – I was racing around looking for bugs, or parameter files, or hidden settings that stopped SECUREFILE lobs from being created.  Here was my incredibly simple test case – create a securefile LOB, and then make sure it’s a securefile.


SQL> create table t1 ( b blob ) lob ( b ) store as securfile;

Table created.

SQL> select securefile
  2  from   user_lobs
  3  where  table_name = 'T1';

SEC
---
NO

That had me bamboozled, but the explanation is trivial.  Notice that I did not spell "SECUREFILE" correctly.  As a result, the syntax is interpreted as the name of the LOB segment in the data dictionary, rather than the specification of how the LOB should be stored.


SQL> select segment_name
  2  from   user_lobs
  3  where  table_name = 'T1';

SEGMENT_NAME
------------------------------
SECURFILE

All it took was the correct "e" in "securefile", and normal service was resumed 🙂


SQL> drop table t1 purge;

Table dropped.

SQL> create table t1 ( b blob ) lob ( b ) store as securefile;

Table created.

SQL> select securefile
  2  from   user_lobs
  3  where  table_name = 'T1';

SEC
---
YES

So why blog about such a silly mistake?  Because this is a common issue for all of us as developers.  We see something unusual and our first assumption is that it must be some fundamental flaw in the product we're using.  That mindset has been around as long as computer programming has existed, but 20 years ago it wasn't so easy to make a fool of yourself by bleating about it on social media 🙂.  I remember when I first started programming, one of my mentors told me: "Remember, the COBOL compiler is not broken", and of course, we can insert any programming language into that sentence.  So before you jump onto Twitter … just take a moment to re-check that script, or get a colleague to give it a "once over".  You'll either save yourself some embarrassment, or you'll add additional rigour to your test case – it's win-win.