iASH – my “infinite ASH” routine

I love Active Session History (ASH) data because a lot of the work I’ve done in my consulting life was “after the fact” diagnosis.  By this I mean that many of us have been in a similar circumstance, where the customer does not contact you when a problem is occurring, but mentions it only later, perhaps when you contact them for some other, potentially unrelated reason.  At which point you will hear that dreaded sentence:

“Yeah, the Order Entry screen was really slow a couple of hours ago.”

And this is where ASH is an awesome resource.  With the ASH data available, there is a good chance you will be able to diagnose the issue without having to make an embarrassing request for the customer to repeat the task so that you can trace the underlying database activity.  Because no-one likes to be the person that says:

“Yeah that performance must have really sucked for you … Hey, let’s do it again!”

But even ASH has its limitations, because sometimes the customer’s sentence is phrased slightly differently:

“Yeah, the Order Entry screen was really slow last Tuesday.”

In this case, it is unlikely that the ASH data will still be available.  Whilst a subset of the invaluable ASH data is retained in DBA_HIST_ACTIVE_SESS_HISTORY, I would prefer to keep the complete set of ASH data for longer than is practical in memory.  So I wrote myself a simple little routine that keeps all of the ASH data around for longer.  It’s hardly rocket science: just a little partitioned table to capture V$ACTIVE_SESSION_HISTORY at regular intervals.  Let’s walk through it so you can understand it and adapt it for your own use.

First I’ll create a partitioned table to hold the ASH data.  I’m using partitioning to avoid the need to index the table, so the insertion cost is minimal.  I’m partitioning by day and the code assumes this, so take care if you intend to modify it.


SQL> create table ash_hist
  2  partition by range (sample_time)
  3  interval( numtodsinterval(1,'day'))
  4  (partition p1 values less than (timestamp' 2017-01-01 00:00:00'))
  5  as select * from sys.gv_$active_session_history;

Table created.

Here is my procedure to capture the data.  The essentials of the routine are:

  • Starting with the most recent partition, find the last recorded entry in ASH_HIST.  We’ll look back up to 10 days to find our starting point (hence the daily partitions).
  • If there is no data for the last 10 days, we’ll bomb out, because we haven’t been running the routine frequently enough.
  • Copy all the ASH data from this point to now into ASH_HIST using a nice efficient INSERT-APPEND, but we’ll skip the session that is doing the copying. (You can include it if you want just by removing line 8)
  • Once per week (you can control this by tinkering with the IF conditions on line 34) we’ll drop the oldest partitions.  By default I keep 90 days, but you can set this by altering “l_retention” on line 5.

SQL>
SQL> CREATE OR REPLACE procedure save_ash_hist is
  2    l_hi_val_as_string varchar2(1000);
  3    l_hi_val_as_date   date;
  4    l_max_recorded     timestamp;
  5    l_retention        number := 90;
  6
  7  begin
  8    dbms_application_info.set_module('$$SAVE_ASH$$','');
  9    -- we are looping to take advantage
 10    -- of partition elimination
 11
 12    for i in 0 .. 10 loop
 13       select max(sample_time)
 14       into   l_max_recorded
 15       from   ash_hist
 16       where  sample_time > systimestamp - i;
 17
 18       exit when l_max_recorded is not null;
 19    end loop;
 20
 21    if l_max_recorded is null then
  22      raise_application_error(-20000,'No max sample time within 10 days');
 23    end if;
 24    dbms_output.put_line('Last copied time was '||l_max_recorded);
 25
 26    insert /*+ append */ into ash_hist
 27    select *
 28    from sys.gv_$active_session_history
 29    where sample_time > l_max_recorded
 30    and   ( module != '$$SAVE_ASH$$' or module is null );
 31    dbms_output.put_line('Copied '||sql%rowcount||' rows');
 32    commit;
 33
 34    if to_char(sysdate,'DYHH24') between 'TUE01' and 'TUE06' then
 35
 36      begin
 37        execute immediate 'alter table ash_hist set interval ()';
 38      exception
 39        when others then null;
 40      end;
 41      execute immediate 'alter table ash_hist set interval (NUMTODSINTERVAL(1,''DAY''))';
 42
 43      for i in ( select *
 44                 from   user_tab_partitions
 45                 where  table_name = 'ASH_HIST'
 46                 and    partition_position > 1
 47                 order by partition_position )
 48      loop
 49        l_hi_val_as_string := i.high_value;
 50        execute immediate 'select '||l_hi_val_as_string||' from dual' into l_hi_val_as_date;
 51
 52        if l_hi_val_as_date < sysdate - l_retention then
 53          execute immediate 'alter table ash_hist drop partition '||i.partition_name;
 54        else
 55          exit;
 56        end if;
 57
 58      end loop;
 59    end if;
 60  end;
 61  /

Procedure created.

And that is all there is to it.  Each time we run the procedure, we’ll grab all the ASH data since the last time we ran and keep it in ASH_HIST.


SQL>
SQL> select count(*) from ash_hist;

  COUNT(*)
----------
       792

1 row selected.


SQL>
SQL> exec save_ash_hist

PL/SQL procedure successfully completed.

SQL>
SQL> select count(*) from ash_hist;

  COUNT(*)
----------
       819

1 row selected.

Add a simple scheduler job to run the routine every couple of hours (I’m assuming your SGA holds at least 2 hours of samples in V$ACTIVE_SESSION_HISTORY; if not, you’ll need to adjust the frequency) and you’re off and running.


SQL>
SQL>
SQL> BEGIN
  2      dbms_scheduler.create_job (
  3         job_name           =>  'ASH_CAPTURE',
  4         job_type           =>  'PLSQL_BLOCK',
  5         job_action         =>  'save_ash_hist;',
  6         start_date         =>  CAST((TRUNC(SYSDATE,'HH') + (1/24) + (55/24/60)) AS TIMESTAMP), -- job commences at 55 mins past the next hour
  7         repeat_interval    =>  'FREQ=HOURLY; INTERVAL=2',
  8         enabled            =>  true,
  9         comments           =>  'Permanent record of ASH data');
 10  END;
 11  /

PL/SQL procedure successfully completed.

You can assume all of the standard disclaimers here. Use at own risk, blah blah, no warranty, blah blah, etc

Addendum:  I should add that you could write this complete level of detail directly to DBA_HIST_ACTIVE_SESS_HISTORY by tinkering with “_ash_disk_filter_ratio”, but please get the endorsement of Oracle Support first.

LOBs from afar

This has always been a nuisance.  There you are, getting all the bells and whistles with LOBs… until a database link enters the room.


--
-- Database: DB11
--
SQL> create table t ( id int, c clob );

Table created.

SQL> insert into t values (1,rpad('x',32000,'x'));

1 row created.

SQL> select * 
  2  from   t;

        ID C
---------- --------------------------------------------
         1 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

--
-- Database: anywhere except DB11
--
SQL> select * 
  2  from   t@db11;

ERROR:
ORA-22992: cannot use LOB locators selected from remote tables


Well that sucks. But look what has snuck its way into 12.2!


SQL> select * 
  2  from   t@db122;

        ID C
---------- --------------------------------------------
         1 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx


There is a whole chapter on it in the official docs.  Very nice indeed.

Buffer cache hit ratio – blast from the past

I was perusing some old content during a hard drive “spring clean” the other day, and I found an old gem from way back in 2001: a time when the database community was trying to dispel the myth that all database performance issues could be traced back to, and solved via, the database buffer cache hit ratio.  Thankfully, much of that folklore has now passed into the realm of fiction, but I remember at the time, as a means of showing how silly some of the claims were, I published a routine that would generate any buffer cache hit ratio you desired.  It simply ran a query to burn through logical I/Os (and burn a hole in your CPU!) until the required number of operations bumped the buffer cache hit ratio up to whatever number you liked.

Less performance, more work done… all to get a nice summary number.

The kinds of statistics that the database collects, and what each one represents, have changed over the years and versions of Oracle, but I figured I’d present the routine in its original form as a nostalgic reminder that statistics without an understanding behind them are as good as no statistics at all.
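The arithmetic of the trick is easy to flip around: if the hit ratio is 1 minus physical reads divided by logical reads (db block gets plus consistent gets), then for any desired ratio you can solve for the extra consistent gets required. Here is a minimal sketch in Python (the starting statistics below are made up purely for illustration):

```python
def hit_ratio(phy, db, con):
    # classic buffer cache hit ratio: 1 - physical reads / logical reads
    return 1 - phy / (db + con)

def extra_gets_needed(phy, db, con, target_pct):
    # flip the formula: how many more consistent gets push the ratio to the target
    return int(phy / (1 - target_pct / 100) - db - con)

# hypothetical starting statistics
phy, db, con = 1770, 400, 9600
print(round(hit_ratio(phy, db, con) * 100, 3))          # 82.3
extra = extra_gets_needed(phy, db, con, 90)
print(extra)                                            # 7700
print(round(hit_ratio(phy, db, con + extra) * 100, 3))  # 90.0
```

Grinding through that many additional logical I/Os is exactly what the PL/SQL routine below does with its connect-by query.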

Enjoy !


create or replace
procedure choose_a_hit_ratio(p_ratio number default 99,p_show_only boolean default false) is
  v_phy                number;
  v_db                 number;
  v_con                number;
  v_count              number;
  v_additional_congets number;
  v_hit number;
  
  procedure show_hit is
  begin
    select p.value, d.value, c.value
    into v_phy, v_db, v_con
    from 
      ( select value from v$sysstat where name = 'physical reads' ) p,
      ( select value from v$sysstat where name = 'db block gets' ) d,
      ( select value from v$sysstat where name = 'consistent gets' ) c;
    v_hit := 1-(v_phy/(v_db+v_con));
    dbms_output.put_line('Current ratio is: '||round(v_hit*100,5));
  end;
begin
--
-- First we work out the ratio in the normal fashion
--
  show_hit;

  if p_ratio/100 < v_hit or p_ratio > 99.9999999 then
    dbms_output.put_line('Sorry - I cannot help you');
    return;
  end if;
--
-- Flipping the formula we can work out how many more consistent gets
-- we need to increase the hit ratio
--
  v_additional_congets := trunc(v_phy/(1-p_ratio/100)-v_db - v_con);

  dbms_output.put_line('Another '||v_additional_congets||' consistent gets needed...');

  if p_show_only then return; end if;
--
-- Create a simple table to hold 200 rows in a single block
--
  begin
    execute immediate 'drop table dummy';
  exception 
    when others then null;
  end;

  execute immediate 'create table dummy (n primary key) organization index as '||
                    'select rownum n from all_objects where rownum <= 200';
--
-- Turn off any new 9i connect-by features to ensure we still do lots of 
-- logical IO
--
  begin
    execute immediate 'alter session set "_old_connect_by_enabled" = true';
  exception 
    when others then null;
  end;
--
-- Grind away until we do all those additional gets
--
  execute immediate '
    select count(*) 
    from (
      select n
      from dummy
      connect by n > prior n
      start with n = 1 )
    where rownum < :v_additional_congets' into v_count using v_additional_congets;

  show_hit;
end;
/

And some output to keep the hit ratio fanatics happy!


SQL> exec choose_a_hit_ratio(85,true);
Current ratio is: 82.30833
Another 29385 consistent gets needed...

PL/SQL procedure successfully completed.

SQL> exec choose_a_hit_ratio(85);
Current ratio is: 82.30833
Another 29385 consistent gets needed...
Current ratio is: 86.24548

PL/SQL procedure successfully completed.

SQL> exec choose_a_hit_ratio(90,true);
Current ratio is: 86.24731
Another 79053 consistent gets needed...

PL/SQL procedure successfully completed.

SQL> exec choose_a_hit_ratio(90);
Current ratio is: 86.24731
Another 79053 consistent gets needed...
Current ratio is: 90.5702

PL/SQL procedure successfully completed.

SQL> exec choose_a_hit_ratio(98,true);
Current ratio is: 90.5709
Another 1141299 consistent gets needed...

PL/SQL procedure successfully completed.

SQL> exec choose_a_hit_ratio(98);
Current ratio is: 90.5709
Another 1141299 consistent gets needed...
Current ratio is: 98.02386

PL/SQL procedure successfully completed.

UKOUG is coming

Yes it is just a few more weeks until the UKOUG conference swings by.

This has been one of my favourite conferences for years, dating back to my first one in 2002!  You can see from the picture at the tail of this post that whilst times have changed in those 15 years, the basic tenets of community, networking and technical content remain unchanged to this day.

The AskTOM team will all be there and there’s a fantastic agenda spread over 4 days.  This user group really knows how to “bake” a good conference.  Maybe they followed my recipe.


OakTable area, UKOUG 2002


Parsing freeform data in flat files

SQL*Loader is a very cool utility that has existed for a long time within Oracle to load flat files into the database. However, sometimes people find the control file syntax quite cryptic, and when it comes to parsing very complicated structures, this can mean control files which are hard to maintain. For me the best solution here is to use an external table. That way we can combine the power of the SQL*Loader control file syntax embedded within the external table definition with the full power of PL/SQL and SQL for additional parsing of that data.

Here is an example where the data is spread across multiple lines and the task is to bring all that data together into a natural form, namely an ID followed by text.

So here is my file that has free-format text:


10,some data
that is really 
annoying because it
spans lots of rows.
20,and some more than does not.
30,and a mixture of stuff and we 
will assume that numerics then comm as 
is the thing that tells us it is a new line. 
40,otherwise we will keep on going.

What I’m going to do is use an external table to bring that data in as it is within the file, and then use a PL/SQL function in pipelined mode to parse the data into the component attributes.
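In plain terms, the parsing rule is: a line starting with digits-then-comma begins a new record, and any other line is appended to the text of the current record. Just to illustrate the logic before we do the real work in PL/SQL, here is that same rule sketched in Python as a generator:

```python
import re

def group_records(lines):
    """Yield (id, text) pairs; a leading 'digits,' marks the start of a new record."""
    rec = None
    for line in lines:
        if re.match(r'^\d+,', line):
            if rec is not None:
                yield rec                             # emit the previous record
            ident, text = line.split(',', 1)
            rec = (int(ident), text)
        elif rec is not None:
            rec = (rec[0], rec[1] + ' ' + line)       # continuation line
    if rec is not None:
        yield rec                                     # don't forget the last record

sample = ["10,some data", "that is really", "annoying because it",
          "spans lots of rows.", "20,and some more than does not."]
for ident, text in group_records(sample):
    print(ident, text)
```

The pipelined PL/SQL function below follows exactly this shape, with the same “pipe the previous row when a new ID arrives” pattern.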


SQL>
SQL> create table freeform
  2  (
  3  line varchar2(200)
  4  )
  5  ORGANIZATION external
  6  (
  7  TYPE oracle_loader
  8  DEFAULT DIRECTORY temp
  9  ACCESS PARAMETERS
 10  (
 11  RECORDS DELIMITED BY NEWLINE
 12  FIELDS LDRTRIM
 13  (
 14  line char(200)
 15  )
 16  )
 17  location
 18  (
 19  'freeform.dat'
 20  )
 21  )REJECT LIMIT UNLIMITED ;

Table created.

SQL>
SQL> select * from freeform;

LINE
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
10,some data
that is really
annoying because it
spans lots of rows.
20,and some more than does not.
30,and a mixture of stuff and we
will assume that numerics then comm as
is the thing that tells us it is a new line.
40,otherwise we will keep on going.

9 rows selected.

SQL>
SQL> create or replace
  2  type output_data as object (id int, str varchar2(500))
  3  /

Type created.

SQL>
SQL> create or replace
  2  type output_list as table of output_data
  3  /

Type created.

SQL>
SQL> create or replace
  2  function cleanup_that_sucker(rc sys_refcursor) return output_list pipelined is
  3    type strlist is table of varchar2(200) index by pls_integer;
  4    s strlist;
  5    o output_data := output_data(null,null);
  6    c pls_integer := 0;
  7  begin
  8    loop
  9      fetch rc bulk collect into s limit 500;
 10
 11      for i in 1 .. s.count loop
 12        c := c + 1;
 13        if regexp_like(s(i),'^[[:digit:]]+,') then
 14          if c > 1 then
 15             pipe row (o);
 16          end if;
 17          o.id  := substr(s(i),1,instr(s(i),',')-1);
 18          o.str := substr(s(i),instr(s(i),',')+1);
 19        else
 20          o.str := o.str || ' ' || s(i);
 21        end if;
 22      end loop;
 23      exit when rc%notfound;
 24    end loop;
 25    pipe row (o);
 26    return;
 27  end;
 28  /

Function created.

SQL> sho err
No errors.
SQL>
SQL> select *
  2  from cleanup_that_sucker(cursor(
  3          select line from freeform
  4          ));

        ID STR
---------- ------------------------------------------------------------------------------------------------------------------------
        10 some data that is really annoying because it spans lots of rows.
        20 and some more than does not.
        30 and a mixture of stuff and we will assume that numerics then comm as is the thing that tells us it is a new line.
        40 otherwise we will keep on going.

4 rows selected.

SQL>
SQL>

And there we have it. We have the full power of PL/SQL and SQL at our fingertips even though we are parsing flat file data. This avoids complicated control file syntax and makes ongoing maintenance far easier. Performance is still good because we are using the bulk collect feature in PL/SQL. And because we take a cursor expression as input, we have lots of flexibility in what data we pass in: we simply change the SQL expression in the cursor.

“Oh…another language is too hard”

We had a request on AskTOM a few days ago asking for an implementation of the XIRR function in PL/SQL.

I didn’t really know much about what that function was, or what it did, but a quick web search yielded plenty of examples in Excel, where it is a pre-delivered function, as described here:

“Returns the internal rate of return for a schedule of cash flows that is not necessarily periodic.”

That explains what the function does, but doesn’t explain how the function result is derived.  So I pushed back to the author of the question saying that if they could give us an algorithm for the function, then we could take a look at implementing the algorithm in PL/SQL.  After some research they came back with the following diagram taken from: http://maestro.ipcc.ca/files/rate_of_return/XIRR_ROR.pdf


The formula, with some cool mathematical symbols on the page, looks like an algorithm but it is not; the real nuts and bolts are in bullet point “3”.  There is no “formula” as such for XIRR.  We need an iterative approach: try an initial set of conditions that yields an answer “close” to the desired result, and if it is not close enough, iterate using the previous results to slowly narrow in on the result until we meet some nominated tolerance.  This was starting to sound complicated.
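To make that iterative idea concrete, here is a minimal Newton’s-method sketch in Python. It is just an illustration of the approach, not the code I ported: a numerical derivative is used for simplicity, and the cash flows are the sample set used in the test at the end of this post.

```python
from datetime import date

def xnpv(rate, cashflows):
    # net present value of irregularly spaced cash flows at the given rate
    t0 = min(d for d, _ in cashflows)
    return sum(amt / (1 + rate) ** ((d - t0).days / 365.0) for d, amt in cashflows)

def xirr(cashflows, guess=0.1, tol=1e-6, max_iter=100):
    # Newton's method: refine the rate until xnpv(rate) is (close to) zero
    rate = guess
    for _ in range(max_iter):
        h = 1e-7
        f = xnpv(rate, cashflows)
        df = (xnpv(rate + h, cashflows) - f) / h   # numerical derivative
        new_rate = rate - f / df
        if abs(new_rate - rate) < tol:
            return new_rate
        rate = new_rate
    raise ArithmeticError("no convergence within max_iter iterations")

flows = [(date(2008, 1, 1), -10000), (date(2008, 3, 1), 2750),
         (date(2008, 10, 30), 4250), (date(2009, 2, 15), 3250),
         (date(2009, 4, 1), 2750)]
print(round(xirr(flows), 6))   # about 0.373363
```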

Luckily, our author also posted a link to a solution that had been written in C#.  So I posed myself the question: “How hard is it to port from C#…” (a language I have never used professionally as a developer) “…to PL/SQL running in the database?”.  After all, we often hear that database-centric code cannot be tackled in an organization because the task of understanding PL/SQL as a non-PL/SQL developer is simply “too onerous”.

Here is the original C# code.


using System;
using System.Collections.Generic;
using System.Linq;


namespace Xirr
{
    public class Program
    {
        private const Double DaysPerYear = 365.0;
        private const int MaxIterations = 100;
        private const double DefaultTolerance = 1E-6;
        private const double DefaultGuess = 0.1;

    private static readonly Func<IEnumerable<CashItem>, Double> NewthonsMethod =
        cf => NewtonsMethodImplementation(cf, Xnpv, XnpvPrime);

    private static readonly Func<IEnumerable<CashItem>, Double> BisectionMethod =
        cf => BisectionMethodImplementation(cf, Xnpv);

    public static void Main(string[] args)
    {
        RunScenario(new[]
            {
                // this scenario fails with Newton's but succeeds with slower Bisection
                new CashItem(new DateTime(2012, 6, 1), 0.01),
                new CashItem(new DateTime(2012, 7, 23), 3042626.18),
                new CashItem(new DateTime(2012, 11, 7), -491356.62),
                new CashItem(new DateTime(2012, 11, 30), 631579.92),
                new CashItem(new DateTime(2012, 12, 1), 19769.5),
                new CashItem(new DateTime(2013, 1, 16), 1551771.47),
                new CashItem(new DateTime(2013, 2, 8), -304595),
                new CashItem(new DateTime(2013, 3, 26), 3880609.64),
                new CashItem(new DateTime(2013, 3, 31), -4331949.61)
            });
        RunScenario(new[]
            {
                new CashItem(new DateTime(2001, 5, 1), 10000),
                new CashItem(new DateTime(2002, 3, 1), 2000),
                new CashItem(new DateTime(2002, 5, 1), -5500),
                new CashItem(new DateTime(2002, 9, 1), 3000),
                new CashItem(new DateTime(2003, 2, 1), 3500),
                new CashItem(new DateTime(2003, 5, 1), -15000)
            });
    }

    private static void RunScenario(IEnumerable<CashItem> cashFlow)
    {
        try
        {
            try
            {
                var result = CalcXirr(cashFlow, NewthonsMethod);
                Console.WriteLine("XIRR [Newton's] value is {0}", result);
            }
            catch (InvalidOperationException)
            {
                // Failed: try another algorithm
                var result = CalcXirr(cashFlow, BisectionMethod);
                Console.WriteLine("XIRR [Bisection] (Newton's failed) value is {0}", result);
            }
        }
        catch (ArgumentException e)
        {
            Console.WriteLine(e.Message);
        }
        catch (InvalidOperationException exception)
        {
            Console.WriteLine(exception.Message);
        }
    }

    private static double CalcXirr(IEnumerable<CashItem> cashFlow, Func<IEnumerable<CashItem>, double> method)
    {
        if (cashFlow.Count(cf => cf.Amount > 0) == 0)
            throw new ArgumentException("Add at least one positive item");

        if (cashFlow.Count(c => c.Amount < 0) == 0)
            throw new ArgumentException("Add at least one negative item");

        var result = method(cashFlow);

        if (Double.IsInfinity(result))
            throw new InvalidOperationException("Could not calculate: Infinity");

        if (Double.IsNaN(result))
            throw new InvalidOperationException("Could not calculate: Not a number");

        return result;
    }

    private static Double NewtonsMethodImplementation(IEnumerable<CashItem> cashFlow,
                                                      Func<IEnumerable<CashItem>, Double, Double> f,
                                                      Func<IEnumerable<CashItem>, Double, Double> df,
                                                      Double guess = DefaultGuess,
                                                      Double tolerance = DefaultTolerance,
                                                      int maxIterations = MaxIterations)
    {
        var x0 = guess;
        var i = 0;
        Double error;
        do
        {
            var dfx0 = df(cashFlow, x0);
            if (Math.Abs(dfx0 - 0) < Double.Epsilon)
                throw new InvalidOperationException("Could not calculate: No solution found. df(x) = 0");

            var fx0 = f(cashFlow, x0);
            var x1 = x0 - fx0/dfx0;
            error = Math.Abs(x1 - x0);

            x0 = x1;
        } while (error > tolerance && ++i < maxIterations);
        if (i == maxIterations)
            throw new InvalidOperationException("Could not calculate: No solution found. Max iterations reached.");

        return x0;
    }

    internal static Double BisectionMethodImplementation(IEnumerable<CashItem> cashFlow,
                                                         Func<IEnumerable<CashItem>, Double, Double> f,
                                                         Double tolerance = DefaultTolerance,
                                                         int maxIterations = MaxIterations)
    {
        // From "Applied Numerical Analysis" by Gerald
        var brackets = Brackets.Find(Xnpv, cashFlow);
        if (Math.Abs(brackets.First - brackets.Second) < Double.Epsilon)
            throw new ArgumentException("Could not calculate: bracket failed");

        Double f3;
        Double result;
        var x1 = brackets.First;
        var x2 = brackets.Second;

        var i = 0;
        do
        {
            var f1 = f(cashFlow, x1);
            var f2 = f(cashFlow, x2);

            if (Math.Abs(f1) < Double.Epsilon && Math.Abs(f2) < Double.Epsilon)
                throw new InvalidOperationException("Could not calculate: No solution found");

            if (f1*f2 > 0)
                throw new ArgumentException("Could not calculate: bracket failed for x1, x2");

            result = (x1 + x2)/2;
            f3 = f(cashFlow, result);

            if (f3*f1 < 0)
                x2 = result;
            else
                x1 = result;
        } while (Math.Abs(x1 - x2)/2 > tolerance && Math.Abs(f3) > Double.Epsilon && ++i < maxIterations);

        if (i == maxIterations)
            throw new InvalidOperationException("Could not calculate: No solution found");

        return result;
    }

    private static Double Xnpv(IEnumerable<CashItem> cashFlow, Double rate)
    {
        if (rate <= -1)
            rate = -1 + 1E-10; // Very funky ... Better check what an IRR <= -100% means

        var startDate = cashFlow.OrderBy(i => i.Date).First().Date;
        return
            (from item in cashFlow
             let days = -(item.Date - startDate).Days
             select item.Amount*Math.Pow(1 + rate, days/DaysPerYear)).Sum();
    }

    private static Double XnpvPrime(IEnumerable<CashItem> cashFlow, Double rate)
    {
        var startDate = cashFlow.OrderBy(i => i.Date).First().Date;
        return (from item in cashFlow
                let daysRatio = -(item.Date - startDate).Days/DaysPerYear
                select item.Amount*daysRatio*Math.Pow(1.0 + rate, daysRatio - 1)).Sum();
    }

    public struct Brackets
    {
        public readonly Double First;
        public readonly Double Second;

        public Brackets(Double first, Double second)
        {
            First = first;
            Second = second;
        }

        internal static Brackets Find(Func<IEnumerable<CashItem>, Double, Double> f,
                                      IEnumerable<CashItem> cashFlow,
                                      Double guess = DefaultGuess,
                                      int maxIterations = MaxIterations)
        {
            const Double bracketStep = 0.5;
            var leftBracket = guess - bracketStep;
            var rightBracket = guess + bracketStep;
            var i = 0;
            while (f(cashFlow, leftBracket)*f(cashFlow, rightBracket) > 0 && i++ < maxIterations)
            {
                leftBracket -= bracketStep;
                rightBracket += bracketStep;
            }

            return i >= maxIterations
                       ? new Brackets(0, 0)
                       : new Brackets(leftBracket, rightBracket);
        }
    }

    public struct CashItem
    {
        public DateTime Date;
        public Double Amount;

        public CashItem(DateTime date, Double amount)
        {
            Date = date;
            Amount = amount;
        }
    }
    }
}

And after maybe 30 minutes, I had a PL/SQL version up and running.


SQL> set serverout on
SQL> declare
  2
  3    cDaysPerYear      number := 365.0;
  4    cMaxIterations    number := 100;
  5    cDefaultTolerance number := 1E-6;
  6    cDefaultGuess     number := 0.1;
  7    cDouble_Epsilon   number := 0.00000000001;
  8
  9    type cash_obj is record ( dte date, amount number );
 10    type cash_list is table of cash_obj index by pls_integer;
 11    cash_item cash_list;
 12
 13    type bracket_rec is record ( frst number, second number );
 14
 15    function Xnpv(cashFlow cash_list, prate number) return number is
 16      startdate date := cashFlow(1).dte;
 17      dys number;
 18      tot number := 0;
 19      rate number := prate;
 20    begin
 21        if (rate <= -1) then rate := -1 + 1E-10; end if;
 22
 23        for item in 1 .. cashFlow.count loop
 24             dys := -(cashFlow(item).Dte - startDate);
 25             tot := tot + cashFlow(item).Amount*Power(1 + rate, dys/cDaysPerYear);
 26        end loop;
 27        return tot;
 28    end;
 29
 30    function XnpvPrime(cashFlow cash_list, prate number) return number is
 31      startdate date := cashFlow(1).dte;
 32      daysRatio number;
 33      tot number := 0;
 34      rate number := prate;
 35    begin
 36        for item in 1 .. cashFlow.count loop
 37             daysRatio := -(cashFlow(item).Dte - startDate)/cDaysPerYear;
 38             tot := tot + cashFlow(item).Amount*daysRatio*Power(1 + rate, daysRatio - 1);
 39        end loop;
 40        return tot;
 41    end;
 42
 43    function BracketsFind(cashFlow cash_list,guess number := cDefaultGuess) return bracket_rec is
 44      x bracket_rec;
 45      bracketStep number := 0.5;
 46      leftBracket number := guess - bracketStep;
 47      rightBracket number := guess + bracketStep;
 48      i int := 0;
 49    begin
 50      while Xnpv(cashFlow, leftBracket)*Xnpv(cashFlow, rightBracket) > 0 and i < cMaxIterations
 51      loop
 52          leftBracket := leftBracket - bracketStep;
 53          rightBracket:= rightBracket + bracketStep;
 54          i := i + 1;
 55      end loop;
 56
 57      if i >= cmaxIterations then
 58         x.frst := 0;
 59         x.second := 0;
 60         return x;
 61      else
 62         x.frst := leftBracket;
 63         x.second := rightBracket;
 64         return x;
 65      end if;
 66    end;
 67
 68    function BisectionMethodImplementation(cashFlow cash_list,tolerance number := cDefaultTolerance, maxIterations number := cMaxIterations) return number is
 69
 70       brackets bracket_rec;
 71       f3 number;
 72       result number;
 73       x1 number;
 74       x2 number;
 75       i int := 0;
 76       f1 number;
 77       f2 number;
 78    begin
 79       brackets := BracketsFind(cashFlow);
 80       if Abs(brackets.Frst - brackets.Second) < cDouble_Epsilon then
 81          raise_application_error(-20000,'Could not calculate: bracket failed');
 82       end if;
 83       x1 := brackets.Frst;
 84       x2 := brackets.Second;
 85
 86       loop
 87
 88          f1 := Xnpv(cashFlow, x1);
 89          f2 := Xnpv(cashFlow, x2);
 90
 91          if Abs(f1) < cDouble_Epsilon and Abs(f2) < cDouble_Epsilon then
 92              raise_application_error(-20000,'Could not calculate: No solution found');
 93          end if;
 94
 95          if (f1*f2 > 0) then
 96              raise_application_error(-20000,'Could not calculate: bracket failed for x1, x2');
 97          end if;
 98
 99          result := (x1 + x2)/2;
100          f3 := Xnpv(cashFlow, result);
101
102          if (f3*f1 < 0) then
103              x2 := result;
104          else
105              x1 := result;
106          end if;
107          i := i + 1;
108          exit when not
109            (Abs(x1 - x2)/2 > tolerance and Abs(f3) > cDouble_Epsilon and i < cmaxIterations);
110       end loop;
111
112       if (i = cmaxIterations) then
113          raise_application_error(-20000,'Could not calculate: No solution found');
114       end if;
115
116       return result;
117    end;
118
119    function NewtonsMethodImplementation(cashFlow cash_list,tolerance number := cDefaultTolerance, maxIterations number := cMaxIterations,guess number := cDefaultGuess) return number is
120        x0 number := guess;
121        i int := 0;
122        error number;
123        fx0 number;
124        x1 number;
125        dfx0 number;
126    begin
127        loop
128            dfx0 := XnpvPrime(cashFlow, x0);
129            if (Abs(dfx0 - 0) < cDouble_Epsilon) then
130                raise_application_error(-20000,'Could not calculate: No solution found. df(x) = 0');
131            end if;
132
133            fx0 := Xnpv(cashFlow, x0);
134            x1 := x0 - fx0/dfx0;
135            error := Abs(x1 - x0);
136
137            x0 := x1;
138          i := i + 1;
139          exit when not (error > tolerance and  i < cmaxIterations );
140        end loop;
141
142        if (i = maxIterations) then
143            raise_application_error(-20000,'Could not calculate: No solution found. Max iterations reached.');
144        end if;
145        return x0;
146      end;
147
148      function CalcXirr(cashFlow cash_list) return number is
149        ok boolean := false;
150      begin
151        for i in 1 .. cashFlow.count loop
152           ok := ok or cashFlow(i).amount > 0;
153        end loop;
154        if not ok then raise_application_error(-20000,'Add at least one positive item');   end if;
155
156        ok := false;
157        for i in 1 .. cashFlow.count loop
158           ok := ok or cashFlow(i).amount < 0;
159        end loop;
160        if not ok then raise_application_error(-20000,'Add at least one negative item');   end if;
161
162  --      return BisectionMethodImplementation(cashFlow);
163        return NewtonsMethodImplementation(cashFlow);
164
165      end;
166
167  begin
168
169    cash_item(1).dte := date '2008-01-01';
170    cash_item(2).dte := date '2008-03-01';
171    cash_item(3).dte := date '2008-10-30';
172    cash_item(4).dte := date '2009-02-15';
173    cash_item(5).dte := date '2009-04-01';
174
175    cash_item(1).Amount := -10000;
176    cash_item(2).Amount := 2750;
177    cash_item(3).Amount := 4250;
178    cash_item(4).Amount := 3250;
179    cash_item(5).Amount := 2750;
180
181    dbms_output.put_line(CalcXirr(cash_item));
182  end;
183  /
.3733625335188315103083924482521883278399

PL/SQL procedure successfully completed.

I validated the results against various data examples I found on the web.  The example above maps to the example in the Excel link previously mentioned.


Let me assure you, I am in no way claiming that this was some sort of incredible feat on my part.  It was simply taking each function in turn, making some assumptions about the variables and the coding constructs, iteratively converting the code over, and then testing it for validity within an anonymous block.  And similarly, the PL/SQL code is not a shining example of quality coding standards; quite the opposite.  It deliberately stays close to the naming standards and conventions that were in the C# source.  Besides a couple of web searches (for example, to examine what Double.Epsilon represented), the process was relatively straightforward.

You may be rolling your eyes and thinking to yourself: “Here we go… he’s going to tell us that C# is junk and that we should refactor our C# applications into the database”, but that is definitely not the case.  My point is: if you have some data-intensive code in the application tier that would benefit from being located closer to the database, you might surprise yourself by discovering how easy it is to port the logic from one coding language to another, in this case, PL/SQL.  And for your Oracle databases, there is simply no better data-centric language than PL/SQL.