Listener log data mining with SQL

If you take a look at the log files created by the listener, there is obviously a nice wealth of information in there. We get service updates, connections and more, all of which might be useful, particularly in terms of auditing security.

However, it is also in a fairly loose text format, which means ideally I’d like to utilise the power of SQL to mine the data.


16-DEC-2019 09:54:20 * service_update * db18 * 0
2019-12-16T09:54:23.563403+08:00
16-DEC-2019 09:54:23 * service_update * db18 * 0
2019-12-16T09:54:34.442995+08:00
16-DEC-2019 09:54:34 * (CONNECT_DATA=(CID=(PROGRAM=null)(HOST=__jdbc__)(USER=null))(SERVICE_NAME=pdb1)) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=61716)) * establish * pdb1 * 0
2019-12-16T09:54:38.578569+08:00
16-DEC-2019 09:54:38 * (CONNECT_DATA=(CID=(PROGRAM=null)(HOST=__jdbc__)(USER=null))(SERVICE_NAME=pdb1)) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=61720)) * establish * pdb1 * 0
16-DEC-2019 09:54:38 * (CONNECT_DATA=(CID=(PROGRAM=null)(HOST=__jdbc__)(USER=null))(SERVICE_NAME=db18)) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=61721)) * establish * db18 * 0
2019-12-16T09:55:04.116322+08:00
16-DEC-2019 09:55:04 * (CONNECT_DATA=(CID=(PROGRAM=null)(HOST=__jdbc__)(USER=null))(SERVICE_NAME=db18)) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=61742)) * establish * db18 * 0
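As an aside, before we bring the database to bear, note that each entry is delimited by ` * `, so the structure is easy to tease apart in any scripting language. Here is a rough Python sketch (my own illustration, not part of the original post) of how the fields in an establish record break apart:

```python
import re

def parse_listener_line(line):
    """Split a listener log entry on its ' * ' delimiters.

    An 'establish' entry has six fields: timestamp, CONNECT_DATA,
    ADDRESS, event, service name, and return code.
    """
    fields = line.split(" * ")
    if len(fields) == 6 and fields[3] == "establish":
        connect_data, address = fields[1], fields[2]
        return {
            "timestamp": fields[0],
            "event": fields[3],
            "service": fields[4],
            "return_code": int(fields[5]),
            # pull individual attributes out of the nested (KEY=value) syntax
            "host": re.search(r"HOST=([^)]*)", connect_data).group(1),
            "port": re.search(r"PORT=([^)]*)", address).group(1),
        }
    return None

line = ("16-DEC-2019 09:54:34 * (CONNECT_DATA=(CID=(PROGRAM=null)"
        "(HOST=__jdbc__)(USER=null))(SERVICE_NAME=pdb1)) * "
        "(ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=61716)) * "
        "establish * pdb1 * 0")
print(parse_listener_line(line))
```

The function name and the dictionary keys are my own choosing; the point is simply that the delimiter structure is regular enough to mine.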

It is easy to create an external table that simply grabs each line of the log file and returns it as a row from a table. We can see that below, returning a single column called LINE.


SQL> create or replace directory LISTENER_LOG as 'C:\oracle\diag\tnslsnr\gtx\listener18\trace';

Directory created.

SQL> create table listener
  2  (
  3     line varchar2(4000)
  4  )
  5  organization external (
  6     type   oracle_loader
  7     default directory LISTENER_LOG
  8     access parameters
  9     (
 10        records delimited by newline
 11        nobadfile
 12        nologfile
 13        nodiscardfile
 14        fields ldrtrim
 15        missing field values are null
 16        reject rows with all null fields
 17        (
 18            line char(4000)
 19        )
 20     )
 21     location ('listener.log')
 22  )
 23  reject limit unlimited
 24  /

Table created.

SQL>
SQL> select * from listener where rownum <= 20;

LINE
----------------------------------------------------------------------------------------------------------------------------------
2019-02-18T23:48:42.647524-08:00
Create Relation ADR_CONTROL
Create Relation ADR_INVALIDATION
Create Relation INC_METER_IMPT_DEF
Create Relation INC_METER_PK_IMPTS
LISTENER for Linux: Version 19.0.0.0.0 - Production
Version 19.2.0.0.0
System parameter file is /u01/app/oracle/product/19.2.0/dbhome_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/diag/tnslsnr/db192/listener/alert/log.xml
Trace information written to /u01/app/oracle/diag/tnslsnr/db192/listener/trace/ora_4087_140671000284160.trc

While I have achieved something by having this data exposed as a table, there are still some challenges to face. It now becomes our job to extract the metadata from each line. For example, to appropriately timestamp each row, I need to look for date-formatted strings and use some analytics to carry that date down through the subsequent rows, assigning it to the rest of the data. In the example below I’ve done that with LAST_VALUE, and given an example of how we could extract the HOST metadata.


SQL> with details as
  2  (
  3  select last_value(tstamp ignore nulls) over ( order by tstamp ) as tstamp,
  4         substr(host,1,instr(host,')')-1) host
  5  from
  6    ( select
  7        case when line like '__-___-____ __:__:__ %' then to_date(substr(line,1,20),'DD-MON-YYYY HH24:MI:SS') end tstamp,
  8        case when line like '%HOST=%' then substr(line,instr(line,'HOST=')+5) end host
  9      from listener
 10    )
 11  )
 12  select  *
 13  from details
 14  where host is not null;

TSTAMP              HOST
------------------- ----------------------------------------
18/02/2019 23:48:43 db192.localdomain
18/02/2019 23:49:28 db192.localdomain
18/02/2019 23:49:28 db192.localdomain
18/02/2019 23:49:28 db192.localdomain
18/02/2019 23:49:28 db192.localdomain
18/02/2019 23:49:28 db192.localdomain
18/02/2019 23:49:28 db192.localdomain
18/02/2019 23:49:28 db192.localdomain
18/02/2019 23:49:29 db192.localdomain
18/02/2019 23:49:29 db192.localdomain
18/02/2019 23:49:29 db192.localdomain
18/02/2019 23:49:29 db192.localdomain
18/02/2019 23:50:09 192.168.1.182
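The same carry-the-timestamp-forward idea can be prototyped outside the database as well. Here is a small Python sketch (my own illustration, not from the post; the function name is invented) that walks the file in order, remembers the last timestamp seen, and pairs it with the first HOST= value on each line:

```python
import re
from datetime import datetime

def tag_hosts(lines):
    """Carry the most recent timestamp down through subsequent lines
    (the same idea as LAST_VALUE ... IGNORE NULLS) and pair it with
    the first HOST= value found on each line."""
    ts = re.compile(r"^\d{2}-[A-Z]{3}-\d{4} \d{2}:\d{2}:\d{2}")
    results, current = [], None
    for line in lines:
        m = ts.match(line)
        if m:
            current = datetime.strptime(m.group(0), "%d-%b-%Y %H:%M:%S")
        h = re.search(r"HOST=([^)]*)", line)
        if h and current:
            results.append((current, h.group(1)))
    return results

sample = [
    "2019-12-16T09:54:23.563403+08:00",   # ISO lines carry no DD-MON date; skipped
    "16-DEC-2019 09:54:20 * service_update * db18 * 0",
    "16-DEC-2019 09:54:34 * (CONNECT_DATA=(CID=(PROGRAM=null)"
    "(HOST=__jdbc__)(USER=null))(SERVICE_NAME=pdb1)) * "
    "(ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=61716)) * "
    "establish * pdb1 * 0",
]
print(tag_hosts(sample))
```

Because this walks the lines in file order, the "last timestamp" is always the one physically above the row, which is the behaviour the SQL is reaching for.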

That kind of SQL manipulation is cool and fun, but it is also going to get complex fast! Here is perhaps an alternative way of tackling the problem. As well as the ‘trace’ folder, the listener log is also written as XML to the ‘alert’ folder. Looking at the log.xml file in this folder, we can see the same listener log information in XML format.

<msg time='2019-11-14T15:27:35.463+08:00' org_id='oracle' comp_id='tnslsnr'
type='UNKNOWN' level='16' host_id='GTX'
host_addr='fe80::65ad:2a40:1ed2:b65a%24' pid='4696' version='1'>
<txt>Creating new log segment:
</txt>
</msg>
<msg time='2019-11-14T15:27:56.452+08:00' org_id='oracle' comp_id='tnslsnr'
type='UNKNOWN' level='16' host_id='GTX'
host_addr='fe80::65ad:2a40:1ed2:b65a%24' pid='4696'>
<txt>14-NOV-2019 15:27:56 * service_update * db18 * 0
</txt>
</msg>
 

Rather than using an external table, perhaps I can load that data as XML because then much of the metadata is already appropriately tagged for me. No more complex SQL!


SQL> create table t ( c clob );

Table created.

SQL> create or replace directory LISTENER_LOG as 'C:\oracle\diag\tnslsnr\gtx\listener18\alert';

Directory created.

SQL> declare
  2     l_c clob;
  3     b bfile := bfilename('LISTENER_LOG', 'log.xml');
  4     dest_offset integer := 1;
  5     src_offset integer := 1;
  6     src_csid number := nls_charset_id('UTF8');
  7     lang_context integer := dbms_lob.default_lang_ctx;
  8     warning integer;
  9
 10  begin
 11    insert into t values (empty_clob()) returning c into l_c;
 12    dbms_lob.open(b);
 13    dbms_lob.loadclobfromfile(l_c, b, dbms_lob.getlength(b), dest_offset, src_offset, src_csid, lang_context, warning);
 14    dbms_lob.close(b);
 15    commit;
 16  end;
 17  /

PL/SQL procedure successfully completed.

So far so good. I now have my listener XML log loaded as a single CLOB into a table. But what happens when I try to manipulate that data using XMLTYPE?


SQL> select xmltype(c) from t;
ERROR:
ORA-31011: XML parsing failed
ORA-19213: error occurred in XML processing at lines 7
LPX-00245: extra data after end of document
ORA-06512: at "SYS.XMLTYPE", line 272
ORA-06512: at line 1



no rows selected

Hmmm…that’s no good. Initially I suspected that maybe the log file was not valid XML, so I extracted just a single MSG entry from the file and tried to convert that to XMLTYPE.


SQL> delete t;

1 row deleted.

SQL> declare
  2    x clob := q'{<msg time='2019-02-27T18:00:32.372+08:00' version='1'> <txt>Creating new log segment: </txt></msg> }';
  3  begin
  4  delete t;
  5  insert into t values (x);
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL> select * from t;

C
--------------------------------------------------------------------------------
<msg time='2019-02-27T18:00:32.372+08:00' version='1'> <txt>Creating new log seg

1 row selected.

SQL> select xmltype(c) from t;

XMLTYPE(C)
-----------------------------------------------------------------------------------------------------------------------------

<msg time="2019-02-27T18:00:32.372+08:00" version="1">
  <txt>Creating new log s


1 row selected.

So a single row is fine, but what happens if I try to repeat that with the same row duplicated? I know that the row is valid XML, because I just proved that above. But look what happens.



SQL> delete t;

1 row deleted.

SQL> declare
  2    x clob := q'{ <msg time='2019-02-27T18:00:32.372+08:00' version='1'> <txt>Creating new log segment: </txt></msg> }';
  3  begin
  4  delete t;
  5  insert into t values (x||x);
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL> select * from t;

C
--------------------------------------------------------------------------------
<msg time='2019-02-27T18:00:32.372+08:00' version='1'> <txt>Creating new log seg

1 row selected.

SQL> select xmltype(c) from t;
ERROR:
ORA-31011: XML parsing failed
ORA-19213: error occurred in XML processing at lines 1
LPX-00245: extra data after end of document
ORA-06512: at "SYS.XMLTYPE", line 272
ORA-06512: at line 1

This is an expected result, because the XMLTYPE conversion is expecting a single XML document. Hence, in order to accommodate multiple MSG tags, I need to enclose the entire CLOB in a single enclosing tag. I’ll use LISTENER.
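This single-root requirement is not an Oracle quirk; any conforming XML parser behaves the same way. A quick sketch with Python’s ElementTree (my own illustration, not from the post) shows both the failure and the wrapper fix:

```python
import xml.etree.ElementTree as ET

msg = ("<msg time='2019-02-27T18:00:32.372+08:00' version='1'>"
       "<txt>Creating new log segment:</txt></msg>")

# Two root elements back to back: not a well-formed document
try:
    ET.fromstring(msg + msg)
except ET.ParseError as e:
    print("parse failed:", e)

# Wrapped in a single enclosing tag, the same content parses fine
doc = ET.fromstring("<listener>" + msg + msg + "</listener>")
print(len(doc.findall("msg")))  # 2
```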



SQL> delete t;

1 row deleted.

SQL> declare
  2     l_c clob;
  3     b bfile := bfilename('LISTENER_LOG', 'log.xml');
  4     dest_offset integer := 1;
  5     src_offset integer := 1;
  6     src_csid number := nls_charset_id('UTF8');
  7     lang_context integer := dbms_lob.default_lang_ctx;
  8     warning integer;
  9
 10     l_xml clob;
 11  begin
 12    dbms_lob.createtemporary(l_c,true);
 13    dbms_lob.open(b);
 14    dbms_lob.loadclobfromfile(l_c, b, dbms_lob.getlength(b), dest_offset, src_offset, src_csid, lang_context, warning);
 15    dbms_lob.close(b);
 16
 17    insert into t values (empty_clob()) returning c into l_xml;
 18    dbms_lob.writeappend(l_xml,10,'<listener>');
 19    dbms_lob.append(l_xml,l_c);
 20    dbms_lob.writeappend(l_xml,11,'</listener>');
 21
 22    commit;
 23  end;
 24  /

PL/SQL procedure successfully completed.

And now finally, the clob can be treated as an XMLTYPE without any issues.



SQL> select xmltype(c) from t;

XMLTYPE(C)
------------------------------------------------------------------------------------------------------------------------------


SQL> select xt.*
  2  from  t,
  3        xmltable('//listener/msg' passing xmltype(t.c)
  4                                       columns
  5                                           xorg_id      varchar2(10) path '@org_id',
  6                                           xcomp_id     varchar2(10) path '@comp_id',
  7                                           xtime        varchar2(30) path '@time',
  8                                           xtype        varchar2(10) path '@type',
  9                                           xlevel       varchar2(10) path '@level',
 10                                           xhost_id     varchar2(10) path '@host_id',
 11                                           xhost_addr   varchar2(30) path '@host_addr',
 12                                           xpid         varchar2(10) path '@pid',
 13                                           xtxt         varchar2(300) PATH 'txt'
 14                                   ) as xt;

XORG_ID    XCOMP_ID   XTIME                          XTYPE      XLEVEL     XHOST_ID   XHOST_ADDR                     XPID       XTXT
---------- ---------- ------------------------------ ---------- ---------- ---------- ------------------------------ ---------- ------------------------------
oracle     tnslsnr    2019-11-14T15:27:35.463+08:00  UNKNOWN    16         GTX        fe80::65ad:2a40:1ed2:b65a%24   4696       Creating new log segment:
oracle     tnslsnr    2019-11-14T15:27:56.452+08:00  UNKNOWN    16         GTX        fe80::65ad:2a40:1ed2:b65a%24   4696       14-NOV-2019 15:27:56 * service
oracle     tnslsnr    2019-11-14T15:36:56.539+08:00  UNKNOWN    16         GTX        fe80::65ad:2a40:1ed2:b65a%24   4696       14-NOV-2019 15:36:56 * service
oracle     tnslsnr    2019-11-14T15:37:35.542+08:00  UNKNOWN    16         GTX        fe80::65ad:2a40:1ed2:b65a%24   4696       14-NOV-2019 15:37:35 * service
oracle     tnslsnr    2019-11-14T15:37:56.544+08:00  UNKNOWN    16         GTX        fe80::65ad:2a40:1ed2:b65a%24   4696       14-NOV-2019 15:37:56 * service
oracle     tnslsnr    2019-11-14T15:38:02.545+08:00  UNKNOWN    16         GTX        fe80::65ad:2a40:1ed2:b65a%24   4696       14-NOV-2019 15:38:02 * service
oracle     tnslsnr    2019-11-14T15:44:26.610+08:00  UNKNOWN    16         GTX        fe80::65ad:2a40:1ed2:b65a%24   4696       14-NOV-2019 15:44:26 * service
oracle     tnslsnr    2019-11-14T15:44:47.613+08:00  UNKNOWN    16         GTX        fe80::65ad:2a40:1ed2:b65a%24   4696       14-NOV-2019 15:44:47 * service
oracle     tnslsnr    2019-11-14T15:44:56.613+08:00  UNKNOWN    16         GTX        fe80::65ad:2a40:1ed2:b65a%24   4696       14-NOV-2019 15:44:56 * service
oracle     tnslsnr    2019-11-14T15:47:35.640+08:00  UNKNOWN    16         GTX        fe80::65ad:2a40:1ed2:b65a%24   4696       14-NOV-2019 15:47:35 * service
oracle     tnslsnr    2019-11-14T15:47:56.642+08:00  UNKNOWN    16         GTX        fe80::65ad:2a40:1ed2:b65a%24   4696       14-NOV-2019 15:47:56 * service

So there you go. Data mining your listener logs by taking advantage of the XML facilities. I bet you never thought you’d hear a database guy singing the praises of XML 🙂

6 comments

  1. Hi Connor,

    Regarding the complex SQL …

    I think that there is a problem in using “last_value(tstamp ignore nulls) over ( order by tstamp )”
    when the tstamp column contains NULLs, because the ORDER BY will sort all the NULLs as highest,
    so all the rows that have a NULL tstamp will ultimately get the (same) highest non-null tstamp value.

    The following small example shows it:

    with t as (
    select rownum rn,
    decode(mod(rownum,3), 0, sysdate + rownum, null) tstamp
    from dual
    connect by level <= 15
    )
    select t.*,
    last_value(tstamp) over(order by tstamp) last_tstamp,
    last_value(tstamp ignore nulls) over(order by tstamp) last_tstamp_nn
    from t
    order by rn
    /

    RN TSTAMP    LAST_TSTAMP LAST_TSTAMP_NN
    -- --------- ----------- --------------
     1 -         -           31-DEC-19
     2 -         -           31-DEC-19
     3 19-DEC-19 19-DEC-19   19-DEC-19
     4 -         -           31-DEC-19
     5 -         -           31-DEC-19
     6 22-DEC-19 22-DEC-19   22-DEC-19
     7 -         -           31-DEC-19
     8 -         -           31-DEC-19
     9 25-DEC-19 25-DEC-19   25-DEC-19
    10 -         -           31-DEC-19
    11 -         -           31-DEC-19
    12 28-DEC-19 28-DEC-19   28-DEC-19
    13 -         -           31-DEC-19
    14 -         -           31-DEC-19
    15 31-DEC-19 31-DEC-19   31-DEC-19

    15 rows selected.

    The workaround is to select the ROWNUM as RN from the "listener" external table and then use ORDER BY RN
    in the LAST_VALUE function.

    Cheers & Best Regards,
    Iudith

      1. Thanks a lot for your fast feedback 🙂
        You are right regarding the “database guy singing praises to XML”, don’t worry, you are not alone 🙂

        The XML stuff is not “easily edible”, and it’s somewhat hard “to digest” the Oracle XML documentation if you don’t already come with a very serious XML background from elsewhere …
        I wonder whether Oracle will continue to stick to using XML files for its various management files
        (like in your example above), or whether it will switch to the (in my opinion more friendly) JSON … or maybe both?

        Cheers again,
        Iudith

  2. I have analyzed WebLogic log output using the ELK stack (Elasticsearch, Logstash, Kibana, and Filebeat) to spot ORA- errors and other anomalies: Filebeat to enable a constant flow, Logstash to parse the log, Elasticsearch for the inverted index, and Kibana for the visualization. It would be nice to do the same with listener logs. Just an idea. Thinking out loud …

    1. Definitely. This was not a post suggesting that log management/consolidation/analysis should stay entirely within the realm of the database. I’ve seen similar things done with Splunk etc. as well as ELK. They all have their place.
