We often think of Advanced Compression as being exclusively about compressing data “at rest”, i.e., on some sort of storage device. And don’t get me wrong, if we consider just that part of Advanced Compression, it still covers a myriad of opportunities that could yield benefits for your databases and database applications:
- Heat maps
- Automatic Data Optimization
- XML, JSON and LOB compression (including de-duplication)
- Compression on backups
- Compression on Data Pump files
- Additional compression options on indexes and tables
- Compressed Flashback Data Archive storage
- Storage snapshot compression
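To give a flavour of the table and index entries in that list, here is a sketch of the 12c syntax. The table, index, and column names are hypothetical; licensing requirements vary by feature, so check the documentation for your edition.

```sql
-- Hypothetical table using Advanced Row Compression (12c syntax;
-- requires the Advanced Compression option).
create table sales_history
  ( sale_id   number,
    sale_date date,
    amount    number )
  row store compress advanced;

-- Advanced index compression (12c syntax) on a key column.
create index sales_history_ix
  on sales_history ( sale_date )
  compress advanced low;
```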
However, if you are licensed for the option, there are other things you can also take advantage of when it comes to compressing data on the network.
In 12c, Advanced Network Compression is part of the Advanced Compression option. Obviously, the modern data centre typically co-locates application servers and database servers on a high-speed, high-bandwidth network, but once data needs to be accessed over less optimal networks (for example, users running ad-hoc queries from their desktops), compressing data across the wire can yield benefits.
Trust me, when you live in Perth, Western Australia, any kind of improvement to access data across the network is a good thing!
Here’s the option in action. My source table has been deliberately created with lots of potential for compression:
SQL> create table comp_test
2 as select rownum x, rpad('x',1000,'x') y from dual
3 connect by level <= 100000;
Table created.
Every row is almost a replica of the others, and column Y is 1000 repeated characters, so it is an artificial but perfect candidate for seeing compression in action. To ensure that network transmission of the data is a significant factor here, I’ve located this table on a “database server” (my spare laptop) accessed via a slow wireless network from my other machine. We can see the impact of the network by starting with the default settings.
Default settings
C:\temp>sqlplus scott/tiger@pdb122
SQL*Plus: Release 12.2.0.1.0 Production on Wed Dec 27 13:48:05 2017
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Last Successful login time: Wed Dec 27 2017 13:48:26 +08:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
SQL> set timing on
SQL> set arraysize 1000
SQL> set feedback only
SQL> select * from scott.comp_test;
100000 rows selected.
Elapsed: 00:00:24.68
So a touch more than 1000 bytes * 100000 rows yields the following session-level statistics:
SQL> @netstat
NAME VALUE
------------------------------------------------------------ ----------
bytes sent via SQL*Net to client 102138497
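The @netstat script itself is not listed in this post; a minimal stand-in (my own sketch, assuming SELECT access on the V$ views) that reports the current session’s SQL*Net byte counts could look like this:

```sql
-- Possible netstat.sql: SQL*Net byte statistics for the current session.
-- (A sketch only - the @netstat script used above is not shown in the post.)
column name format a60
select n.name, s.value
from   v$mystat   s,
       v$statname n
where  s.statistic# = n.statistic#
and    n.name like 'bytes%SQL*Net%client'
order  by n.name;
```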
Now I’ll activate compression by adding the following parameters to the sqlnet.ora on both client and server:
SQLNET.COMPRESSION=on
SQLNET.COMPRESSION_LEVELS=(low,high)
SQLNET.COMPRESSION_THRESHOLD=1024
and repeat the test in a fresh connection. To see whether compression is available to the session, we can check V$SESSION_CONNECT_INFO:
SQL> select sid, network_service_banner from V$SESSION_CONNECT_INFO
2 where sid = sys_context('USERENV','SID')
3 @pr
==============================
SID : 324
NETWORK_SERVICE_BANNER : Windows NT TCP/IP NT Protocol Adapter for 64-bit Windows: Version 12.2.0.1.0 - Production
==============================
SID : 324
NETWORK_SERVICE_BANNER : Oracle Advanced Network Compression Service for 64-bit Windows: Version 12.2.0.1.0 - Production
==============================
SID : 324
NETWORK_SERVICE_BANNER : NTS Authentication service adapter for 64-bit Windows: Version 2.0.0.0.0 - Production
==============================
SID : 324
NETWORK_SERVICE_BANNER : Encryption service for 64-bit Windows: Version 12.2.0.1.0 - Production
==============================
SID : 324
NETWORK_SERVICE_BANNER : Crypto-checksumming service for 64-bit Windows: Version 12.2.0.1.0 - Production
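(The @pr script above is a pretty-print helper that displays each row vertically; it is not listed in this post. Without such a helper, the same information can be read with standard SQL*Plus column formatting, as in this sketch:)

```sql
-- Plain SQL*Plus alternative to a vertical pretty-print helper:
-- widen the display so each banner fits on one line.
set linesize 200
column network_service_banner format a120
select sid, network_service_banner
from   v$session_connect_info
where  sid = sys_context('USERENV','SID');
```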
or if you enable a SQL*Net trace, you’ll see entries like these in the log files:
nlstddp_dump_ptable: SQLNET.COMPRESSION_LEVELS = HIGH
nlstddp_dump_ptable: SQLNET.COMPRESSION = ON
Compression enabled
SQL> set timing on
SQL> set arraysize 1000
SQL> set feedback only
SQL> select * from scott.comp_test;
100000 rows selected.
Elapsed: 00:00:01.93
You can see the huge difference in elapsed time. Unfortunately, as far as I can ascertain, the session network statistics are derived after decompression, because the bytes transferred still reflect the original sizes:
SQL> @netstat
NAME VALUE
------------------------------------------------------------ ----------
bytes sent via SQL*Net to client 102114310
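As an aside, compression does not have to be enabled globally in sqlnet.ora; it can also be requested per connection in the connect descriptor. A sketch (the alias, host, and service names are placeholders, not from this post):

```
PDB122_COMP =
  (DESCRIPTION =
    (COMPRESSION = on)
    (COMPRESSION_LEVELS = (LEVEL = low)(LEVEL = high))
    (ADDRESS = (PROTOCOL = TCP)(HOST = myhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = pdb122))
  )
```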
We can also take advantage of the larger SDU (session data unit) sizes available in 12c. Now my sqlnet.ora looks like this:
DEFAULT_SDU_SIZE=1048576
SQLNET.COMPRESSION=on
SQLNET.COMPRESSION_LEVELS=(low,high)
SQLNET.COMPRESSION_THRESHOLD=1024
and then the test can be repeated.
Compression on, larger SDU
SQL> set timing on
SQL> set arraysize 1000
SQL> set feedback only
SQL> select * from scott.comp_test;
100000 rows selected.
Elapsed: 00:00:00.98
(Elapsed times shown are averaged across a number of executions.)
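As with compression, the SDU can be set per connection in the connect descriptor rather than globally via DEFAULT_SDU_SIZE. A sketch (alias, host, and service names are placeholders):

```
PDB122_BIGSDU =
  (DESCRIPTION =
    (SDU = 1048576)
    (ADDRESS = (PROTOCOL = TCP)(HOST = myhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = pdb122))
  )
```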
There is a whitepaper on Advanced Compression with more details here, so if you have the license, take a look to see whether there are benefits for you, and maximize the return on your investment.