Development

More tricks with OPT_PARAM

XTended Oracle SQL - Tue, 2021-11-30 18:24

Did you know you can set most parameters for the execution of a single statement without using an Alter Session by using an OPT_PARAM hint? For example, regular parameters (here forcing the storage clause in the query plan):

SQL> select /*+ OPT_PARAM('cell_offload_plan_display' 'always') */ col1 From table1;

and underscore parameters:

SQL>  select /*+ OPT_PARAM('_arch_comp_dbg_scan',2048) */ n_name from nation;

However, if you try conflicting settings, i.e. setting a different value in an inner query block, the value set in the outermost query block prevails. In this trivial example _arch_comp_dbg_scan=2048 will prevail:

SQL> select /*+ OPT_PARAM('_arch_comp_dbg_scan',2048) */ n_name from nation
where n_nationkey = (
    select /*+ OPT_PARAM('_arch_comp_dbg_scan',0) */ min(r_regionkey)
    from region where n_name < r_name); 

Another point to remember is that not all parameters get their value from the cursor environment. For example the buffer cache gets the raw value of _serial_direct_read so it cannot be overruled in this way: 

SQL> select /*+ OPT_PARAM('_serial_direct_read', 'always') */ n_name from nation; 

will not force a DR scan of the table, but an Alter Session will.
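
For example, the following does take effect (a sketch; underscore parameters should only be changed when advised by Oracle Support):

SQL> alter session set "_serial_direct_read" = always;
SQL> select n_name from nation;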

Just one more tool to keep in your toolkit for when you need it.


Roger

Categories: Development

Improvements to HCC with wide tables in 12.2

XTended Oracle SQL - Tue, 2021-11-30 18:20
HCC Compression Unit Sizing 

Since the beginning Oracle has provided four compression levels to offer a trade-off between the compression ratio and various other factors, including table scans and the performance of single-row retrieval. I cannot emphasize enough that these trade-offs mean that YMMV with the different levels, and you should always test what works best with your application and hardware when choosing the level. Historically people have rarely used Query Low, since its fast compression but reduced compression ratio means the extra disk I/O is slower than the decompression cost of Query High. The one time that Query Low makes sense on spinning disks is if you still have a significant number of row retrieval operations (including from index access joins).

NVMe Flash

X5 introduced NVMe technology, which means the extra I/O from Query Low is now faster than ZLIB decompression, making Query Low beneficial. So we needed to reassess the sizing of Compression Units. From 11.2.0.1 to 12.1.2.4 the sizing guidelines were as follows:

Name          Target Rows    Target Minimum Size   Target Maximum Size   Compression
Query Low     1000 to 8000   32 kb                 32 kb                 LZO
Query High    1000 to 8000   32 kb                 64 kb                 ZLIB
Archive Low   8000           64 kb                 256 kb                ZLIB
Archive High  8000           256 kb                256 kb                BZ2

So, for example, Compress for Query High aims to pivot around at least 1000 rows and create a minimum compression unit size of 32 kb and a maximum of 64 kb. Using 12.1.2.3, I ran tests against a TPC-H Lineitem table that contained between 1 and 6 copies of each column.

For Query Low fixed 32 kb CUs this gave us the following:

Additional copies of lineitem   Rows per 32 kb CU
0                               2797
1                                580
2                                318
3                                216
4                                162
5                                129

and for Query High 32 to 64 kb CUs this gave us:

Additional copies of lineitem   Rows per CU   CU ave size (kb)
0                               5031          32
1                               1010          32
2                                936          51
3                                794          63
4                                595          67
5                                476          63

so we see that the CU size remains at 32 kb as long as we are getting 1000 rows or more, then increases towards 64 kb to try and fit in at least 1000 rows.

It became clear that this size range was inadequate for wide tables, so to get more efficient compression and longer column runs for faster predicate performance (and also better CELLMEMORY rewrites), we removed the fixed size for Query Low and increased the maximums:

Query Low: 32 kb to 64 kb

Query High:  32 kb to 80 kb

This will not affect narrow tables at all, but wider tables should see better table compression and faster scans at the cost of slightly slower single-row retrieval for data loaded by a 12.2 RDBMS. If you have wide HCC tables and typically cache them on flash cache, you should consider re-evaluating Query Low for data loaded in 12.2 (or use Alter Table Move Compress to recompress existing data, as sketched below).
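
A minimal sketch of such a recompression (hypothetical table name):

SQL> alter table my_wide_table move compress for query low;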

Roger 

Categories: Development

How to tell if the Exadata column cache is fully loaded

XTended Oracle SQL - Tue, 2021-11-30 18:11

When a background activity is happening on the cell, you typically can’t use RDBMS v$ views to monitor it in the same way. One such question is how to tell if a segment is fully loaded in the Exadata column cache, since this does not appear in the equivalent In-Memory v$ views.

When a segment is scanned by Smart Scan sufficiently often to be eligible for the AUTOKEEP pool (typically that means at least twice an hour), the eligible 1MB chunks are written to flash in 12.1.0.2-style format and put on a background queue. Lower-priority tasks pick up the queued 1MB 12.1.0.2-format chunks from the flash cache, run them through the In-Memory loader, and rewrite the pure columnar representation in place of the old 12.1.0.2-style column cache chunks.

The easiest way that I know of to tell when this completes is to monitor that background activity with the following query until it shows zero:

select name, sum(value) value from (
      select extractvalue(value(t),'/stat/@name') name,
            extractvalue(value(t),'/stat') value
      from v$cell_state cs,
           table(xmlsequence(extract(xmltype(cs.statistics_value),
                                     '//stats[@type="columnarcache"]/stat'))) t
     where statistics_type='CELL')
     where name in ('outstanding_imcpop_requests')
     group by name;
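
If you also want to watch the population activity while this drains, the same query can sum extra counters (a sketch; columnar_cache_pop and columnar_cache_pop_bytes are population counters from the same columnarcache stats group in the v$cell_state XML):

select name, sum(value) value from (
      select extractvalue(value(t),'/stat/@name') name,
            extractvalue(value(t),'/stat') value
      from v$cell_state cs,
           table(xmlsequence(extract(xmltype(cs.statistics_value),
                                     '//stats[@type="columnarcache"]/stat'))) t
     where statistics_type='CELL')
     where name in ('outstanding_imcpop_requests','columnar_cache_pop','columnar_cache_pop_bytes')
     group by name;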

Categories: Development

External Tables Part 1 – Project Columns All vs Referenced

XTended Oracle SQL - Tue, 2021-11-30 18:09

I normally blog about table scans on Oracle native data but Oracle also supports a wide variety of features for scanning external tables and I need to cover these too. One of the ways I learn new things is being sent a bug report and saying to myself “Oh! I didn’t know you could do that”. So today I’m going to start with the grammar:

Alter Table <xt> Project Column [ All | Referenced ]  

This DDL changes the value in the Property column displayed in user_external_tables:

SQL> select property
2    from user_external_tables
3    where table_name='C_ET';

PROPERTY
----------
ALL       

Here we have an external table defined using the legacy driver ORACLE_LOADER. This driver defaults to projecting all the columns in the base table rather than just those needed to satisfy the query (i.e. the referenced columns), and to discarding rows that have data conversion issues, up to the reject limit. 

So, for example, if we have a DATE column in our external table that contains dirty data that won’t convert cleanly to Oracle internal dates using the supplied mask, we can either import it as a VARCHAR2 so as not to lose values, or import it as a date but lose rows even on queries that don’t need that date column. We can change the behaviour to only project the referenced columns by:

SQL> alter table c_et project column referenced;
 
Table altered.
 
SQL> 
SQL> select property 
2    from user_external_tables
3    where table_name='C_ET';
 
PROPERTY   
----------
REFERENCED 

The driver will now ignore unreferenced columns: if the date column is not used, we will get all the rows in the external data and the query will run faster (since datatype conversion is expensive), but we may get an inconsistent number of rows depending on which columns are used in any given query. This is OK if the DBA knows a priori that there are no conversion errors, or if they are willing to live with inconsistency in the number of rows returned.

The big data drivers such as ORACLE_HIVE have a different default behaviour, which is to project only the referenced columns and to replace data with conversion errors with NULL values, i.e. they default to returning a consistent number of rows with best performance.

Take away: in order to get consistency and fast table scan performance with ORACLE_LOADER, the trick is to define the external table with the ‘convert_error store_null’ option and switch to ‘Project Column Referenced’. For example:

CREATE TABLE "T_XT"
(
  c0 varchar(10),
  c1 varchar(10)
)
ORGANIZATION external
(
  TYPE oracle_loader
  DEFAULT DIRECTORY DMPDIR
  ACCESS PARAMETERS
  (
    RECORDS DELIMITED BY NEWLINE
    CHARACTERSET  al32utf8
    ESCAPE
    FIELDS CSV WITH EMBEDDED terminated by ',' enclosed by '|'
    REJECT ROWS WITH ALL NULL FIELDS
    nullif equal blanks
    truncate_columns
    convert_error store_null
    (
      "C0" CHAR(10),
      "C1" CHAR(10)
    )
  )
  location
  (
    'simple.dat'
  )
) REJECT LIMIT UNLIMITED
parallel 4;
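
Having created the table with that option, we can then switch the projection mode as before (a sketch using the example table):

SQL> alter table t_xt project column referenced;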

If you want more information on handling conversion errors when scanning an external table, please check the Utilities Guide section on each driver type.

Categories: Development

Examining the new Columnar Cache with v$cell_state

XTended Oracle SQL - Tue, 2021-11-30 18:03

12.1.0.2 introduced the new Columnar Flash Cache, where 1MB of blocks that are all in HCC format are rewritten so as to make each column from each CU contiguous. This works by first writing all the block headers to an array, then writing all the CU headers to an array, then writing all the Column 1 column-level-CUs, then all the Column 2 column-level-CUs, etc.

The flash cache hash table maintains a simple mapping of column numbers to 64KB flash pages so, for any given query, we can simply do asynchronous disk I/O of the minimum set of 64KB pages required to cover the referenced columns.

Within the “flashcache” cell stats there is a new nested cell stat called “columnarcache” that enables you to track how it is working.

> set long 50000000
> set pagesize 10000
> select xmltype(STATISTICS_VALUE).getclobval(2,2) from v$cell_state;

XMLTYPE(STATISTICS_VALUE).GETCLOBVAL(2,2)
--------------------------------------------------------------------------------
....

  <stats type="flashcache">
    <stats type="columnarcache">
      <stat name="columnar_cache_size">0</stat>
      <stat name="columnar_cache_keep_size">0</stat>
      <stat name="columnar_cache_keep_read">0</stat>
      <stat name="columnar_cache_keep_read_bytes">0</stat>
      <stat name="columnar_attempted_read">0</stat>
      <stat name="columnar_attempted_read_bytes">0</stat>
      <stat name="columnar_cache_hits">0</stat>
      <stat name="columnar_cache_hits_read_bytes">0</stat>
      <stat name="columnar_cache_hits_saved_bytes">0</stat>
      <stat name="columnar_cache_pop">0</stat>
      <stat name="columnar_cache_pop_bytes">0</stat>
      <stat name="columnar_cache_pop_throttled">0</stat>
      <stat name="columnar_cache_pop_invalid_buf">0</stat>
      <stat name="columnar_cache_pop_size_align">0</stat>
      <stat name="columnar_cache_pop_findchdrfailure_mainchdr">0</stat>
      <stat name="columnar_cache_pop_findchdrfailure_childchdr">0</stat>
    </stats>
  </stats>

I typically spool the output of this and wade through it in an editor, but if we want to monitor how it is working under some workload, we need to extract individual stats. For example, I bounced the cells and verified the cache was empty:

> select xmlcast(xmlquery('/cell_stats/stats/stats/stat[@name="columnar_cache_size"]' passing xmltype(STATISTICS_VALUE) returning content) as varchar2(200) ) "COLUMNAR CACHE SIZE"
from v$cell_state
where xmlexists('/cell_stats/stats/stats[@type="columnarcache"]' passing xmltype(STATISTICS_VALUE));

COLUMNAR CACHE SIZE
--------------------------------------------------------------------------------
0

I am using the 1GB TPC-H schema, which takes a little more than 400MB on disk when compressed with Query Low:

SQL> select sum(bytes) from user_segments where SEGMENT_NAME in ('SUPPLIER','PARTSUPP','LINEITEM','ORDERS','PART','CUSTOMER');

SUM(BYTES)
----------
 420675584

and checking the columnar cache again shows about half of the data has been rewritten into columnar cache format instead of caching raw blocks:

SQL> select xmlcast(xmlquery('/cell_stats/stats/stats/stat[@name="columnar_cache_size"]' passing xmltype(STATISTICS_VALUE) returning content) as varchar2(200) ) "COLUMNAR CACHE SIZE"
from v$cell_state
where xmlexists('/cell_stats/stats/stats[@type="columnarcache"]' passing xmltype(STATISTICS_VALUE));

COLUMNAR CACHE SIZE
--------------------------------------------------------------------------------
179306496

So let’s look at how the cache helped:

select
   xmlcast(
      xmlquery(
         '/cell_stats/stats/stats/stat[@name="columnar_cache_hits_read_bytes"]' 
         passing xmltype(STATISTICS_VALUE) 
         returning content
      ) as varchar2(200) 
   ) "HIT READ BYTES"
from v$cell_state
where xmlexists('/cell_stats/stats/stats[@type="columnarcache"]' passing xmltype(STATISTICS_VALUE));

 
HIT READ BYTES
--------------------------------------------------------------------------------
1909456896

select
   xmlcast(
      xmlquery(
         '/cell_stats/stats/stats/stat[@name="columnar_cache_hits_saved_bytes"]'
         passing xmltype(STATISTICS_VALUE)
         returning content
      ) as varchar2(200) 
   ) "HIT SAVED BYTES"
from v$cell_state
where xmlexists('/cell_stats/stats/stats[@type="columnarcache"]' passing xmltype(STATISTICS_VALUE));

 
HIT SAVED BYTES
--------------------------------------------------------------------------------
1128267776

which shows we were able to eliminate about 35% of the disk I/O for this query run!

We could, of course, have gotten that information more simply with the regular stat “cell physical IO bytes saved by columnar cache” but I wanted to show how to pull values from v$cell_state for use in scripts.
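
For reference, a sketch of reading that session statistic via the standard v$statname/v$mystat join:

select n.name, s.value
  from v$statname n, v$mystat s
 where s.statistic# = n.statistic#
   and n.name = 'cell physical IO bytes saved by columnar cache';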


Many people only use Query High compression, as they find the increased disk I/O from Query Low more than offsets the savings from cheaper decompression costs. However, with the columnar cache in place, those trade-offs have changed: it may be worth re-evaluating the decision as to when to use Query Low vs. Query High, particularly on CPU-bound cells.

Addendum: performance bug 20525311 affecting the columnar cache with selective predicates is fixed in the next rpm.

Roger MacNicol
Oracle Data Storage Technology 
Categories: Development

Create External Table as Select

XTended Oracle SQL - Tue, 2021-11-30 17:51

I was looking through a test script and saw something I didn’t know you could do in Oracle. I mentioned it to an Oracle ACE and he didn’t know it either. I then said to the External Table engineers “Oh I see you’ve added this cool new feature” and got the dry reply – “Yes, we added it in Oracle 10.1”. Ouch! So just in case you also didn’t know, you can create an External Table using a CTAS and the ORACLE_DATAPUMP driver.

This feature only works with the ORACLE_DATAPUMP access driver (it does NOT work with the LOADER, HIVE, or HDFS drivers) and we can use it like this:

SQL> create table cet_test organization external
  2  (
  3    type ORACLE_DATAPUMP
  4    default directory T_WORK
  5    location ('xt_test01.dmp','xt_test02.dmp')
  6  ) parallel 2
  7  as select * from lineitem;
 
Table created.

Checking the results shows us

-rw-rw---- ... 786554880 Mar 9 10:48 xt_test01.dmp 
-rw-rw---- ... 760041472 Mar 9 10:48 xt_test02.dmp

This can be a great way of creating a (redacted) sample of data to give to a developer to test, or a bug repro to give to support, or to move data between systems. 
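
And since the result is itself a fully functional external table, a quick sanity check before shipping the dump files is simply (a sketch):

SQL> select count(*) from cet_test;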

Categories: Development

Correct syntax for the table_stats hint

XTended Oracle SQL - Tue, 2021-11-30 17:33

A friend contacted me to ask why they were having problems using the table_stats hint to influence optimizer decision making, and also to influence the decision to use direct read or buffer cache scan, so this is just a quick blog post to clarify its syntax, as it is not well documented.

table_stats(<table_name> <method> {<keyword>=<value>} )

Method is one of: DEFAULT, SET, SCALE, SAMPLE

Keyword is one of: BLOCKS, ROWS, ROW_LENGTH

The most useful methods are SET, which does for the statement duration what dbms_stats.set_table_stats does globally; and SCALE, which scales up the current size of the segment and can therefore be used to try “what if” scenarios on the performance effect of the segment growing.

For example:

select /*+  table_stats(scott.emp set rows=14 blocks=1  row_length=10)  */ * from scott.emp;
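
A corresponding SCALE sketch following the same grammar (hypothetical values; I am assuming the keyword values act as scaling factors on the current statistics, so verify the semantics on your version):

select /*+ table_stats(scott.emp scale blocks=10) */ * from scott.emp;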

Effect of the table_stats hint on table scans

Since this is a table scan blog, let’s look at the impact on table scans. Using the Scale 1 customers table with 150,000 rows

SQL> exec dbms_stats.gather_table_stats(USER,'RDM');

SQL> select sum(BLOCKS) from user_segments where segment_name='RDM';
SUM(BLOCKS)
-----------
       1792

and use trace events

event="trace[NSMTIO] disk medium"                  # Direct I/O decision making
event="10358 trace name context forever, level 2"  # Buffer cache decision making

We see this segment is smaller than the small table threshold for this buffer cache (kcbstt=9458), so decision making is short-circuited and the buffer cache will be used:

kcbism: islarge 0 next 0 nblks 1689 type 2, bpid 3, kcbisdbfc 0 kcbnhl 8192 kcbstt 9458 keep_nb 0 kcbnbh 461198 kcbnwp 1 kcbpstt 0, BCRM_ON 0
NSMTIO: qertbFetch:NoDirectRead:[- STT < OBJECT_SIZE < MTT]:Obect's size: 1689 (blocks), Threshold: MTT(46119 blocks),

Now let’s try the same query with the hint shown in the example above:

kcbism: islarge 1 next 0 nblks 66666666 type 2, bpid 3, kcbisdbfc 0 kcbnhl 8192 kcbstt 9458 keep_nb 0 kcbnbh 461198 kcbnwp 1 kcbpstt 0, BCRM_ON 0
kcbimd: nblks 66666666 kcbstt 9458 kcbnbh 46119 bpid 3 kcbisdbfc 0 is_medium 0
kcbivlo: nblks 66666666 vlot 500 pnb 461198 kcbisdbfc 0 is_large 1
NSMTIO: qertbFetch:DirectRead:[OBJECT_SIZE>VLOT]
NSMTIO: Additional Info: VLOT=2305990
Object# = 75638, Object_Size = 66666666 blocks

Now the size of the table in blocks is far larger than our small table threshold, so we go on to evaluate whether it is a medium table; it is too large to be considered medium (the cutoff is 10% of the cache, i.e. kcbnbh=46119 blocks), so it is then evaluated as a very large table, which it is (VLOT here is 500% of the cache: 461198 buffers × 5 = 2305990 blocks), so direct read will be used.

Making the new value permanent

If for some reason we wanted to make some value permanent (caveat emptor) after doing experiments with the hint, we can set the table stats like this:

BEGIN
  DBMS_STATS.SET_TABLE_STATS( 
    ownname => 'TPCH'
  , tabname => 'RDM'
  , numrows => 2000000
  , numblks => 10000 );
END;
/

SQL>  select NUM_ROWS,BLOCKS,EMPTY_BLOCKS from DBA_TAB_STATISTICS where TABLE_NAME='RDM';

NUM_ROWS     BLOCKS EMPTY_BLOCKS
---------- ---------- ------------
   2000000     10000             0

and now we see the size we decided upon after the needed experiments being used without a hint:

kcbism: islarge 1 next 0 nblks 10000 type 2, bpid 3, kcbisdbfc 0 kcbnhl 8192 kcbstt 9458 keep_nb 0 kcbnbh 461198 kcbnwp 1 kcbpstt 0, BCRM_ON 0
kcbimd: nblks 10000 kcbstt 9458 kcbnbh 46119 bpid 3 kcbisdbfc 0 is_medium 1
kcbcmt1: hit age_diff adjts last_ts nbuf nblk has_val kcbisdbfc cache_it 191 23693 23502 461198 10000 1 0 1
NSMTIO: qertbFetch:NoDirectRead:[- STT < OBJECT_SIZE < MTT]:Obect's size: 10000 (blocks), Threshold: MTT(46119 blocks),

Our table is no longer small, as 10,000 blocks is larger than STT=9458 blocks, so it is now a medium table; but as it is smaller than the medium table threshold it will still use the buffer cache.

I hope you found this useful.

Roger

Categories: Development

Controlling the offload of specific operators

XTended Oracle SQL - Tue, 2021-11-30 17:25

One of the joys of regexp is that you can write a pattern that is painfully expensive to match, and offloading these to the cells can cause significant impact on other users and overall throughput (including heartbeat issues). If you have a user who is prone to writing bad regexp expressions, you as DBA can prevent regexp (or any other operator) from being offloaded to the cells.

Let’s take a very simple example using a cut down version of TPC-H Query 16 and a NOT LIKE predicate: 

SQL> explain plan for select p_brand, p_type, p_size
from part
where p_brand <> 'Brand#45'
and p_type not like 'MEDIUM POLISHED%'
and p_size in (49, 14, 23, 45, 19, 3, 36, 9)
group by p_brand, p_type, p_size;

SQL> select * FROM TABLE(DBMS_XPLAN.DISPLAY);

  |*  3 |    TABLE ACCESS STORAGE FULL| PART | 29833 |  1048K|    |   217   (2)| 00:00:01 |  1 |  8 
------------------------------------------------------------------------------------------------------------
      3 - storage(("P_SIZE"=3 OR "P_SIZE"=9 OR "P_SIZE"=14 OR "P_SIZE"=19 
      OR "P_SIZE"=23 OR "P_SIZE"=36 OR "P_SIZE"=45 OR "P_SIZE"=49) 
      AND "P_BRAND"<>'Brand#45' AND "P_TYPE" NOT LIKE 'MEDIUM POLISHED%')

Here we see all the predicates get offloaded as expected. So, for example, to stop NOT LIKE being offloaded we would need to find the operator in v$sqlfn_metadata:

SQL> column descr format a18
SQL> select func_id, descr, offloadable from v$sqlfn_metadata where descr like '%LIKE%';

   FUNC_ID DESCR              OFF
---------- ------------------ ---
        26  LIKE              YES
        27  NOT LIKE          YES
        99  LIKE              NO
       120  LIKE              YES
       121  NOT LIKE          YES
       ...
       524  REGEXP_LIKE       YES
       525  NOT REGEXP_LIKE   YES
       537  REGEXP_LIKE       YES
       538  NOT REGEXP_LIKE   YES

we can ignore all but the two basic LIKE operators in this case, so to disable the offload of our LIKE predicates we use:

SQL> alter session set cell_offload_parameters="OPT_DISABLED={26,27};";

and we see this reflected in the offloadable column in v$sqlfn_metadata.

SQL> select func_id, descr, offloadable from v$sqlfn_metadata where descr like '%LIKE%';

   FUNC_ID DESCR              OFF
---------- ------------------ ---
        26  LIKE              NO
        27  NOT LIKE          NO
        99  LIKE              NO
       120  LIKE              YES
       121  NOT LIKE          YES

To re-enable them you would use:

SQL> alter session set cell_offload_parameters="OPT_DISABLED={};";
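
If you need several operators disabled at once, list all their func_ids in a single statement, e.g. a sketch using the ids from the listing above:

SQL> alter session set cell_offload_parameters="OPT_DISABLED={26,27,524,525,537,538};";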

One thing to note about this parameter is that it doesn’t work like events (whose settings are additive): here it replaces the previous value, so every operator you want disabled has to be included in the same Alter Session (and the parameter is limited to a maximum of 255 characters, which limits the number of operators that can be disabled). With the offload of LIKE and NOT LIKE disabled we can see the impact on the plan:

SQL> explain plan for select p_brand, p_type, p_size
from part
where p_brand <> 'Brand#45'
and p_type not like 'MEDIUM POLISHED%'
and p_size in (49, 14, 23, 45, 19, 3, 36, 9)
group by p_brand, p_type, p_size;

SQL> select * FROM TABLE(DBMS_XPLAN.DISPLAY); 

 |*  3 |    TABLE ACCESS STORAGE FULL| PART | 29833 |  1048K|    |   217   (2)| 00:00:01 |  1 |  8 
------------------------------------------------------------------------------------------------------------

     3 - storage(("P_SIZE"=3 OR "P_SIZE"=9 OR "P_SIZE"=14 OR "P_SIZE"=19 OR "P_SIZE"=23
     OR "P_SIZE"=36 OR "P_SIZE"=45 OR "P_SIZE"=49) AND "P_BRAND"<>'Brand#45')

and the NOT LIKE is no longer in the storage filter. Now let’s say that you as DBA are faced with a more complex problem and want to temporarily halt all complex processing on the cells. There is a parameter that will disable everything except the simple comparison operators and NULL checks:

SQL> alter session set "_cell_offload_complex_processing"=FALSE;

Now let’s see what happens:

SQL> explain plan for select p_brand, p_type, p_size
from part
where p_brand <> 'Brand#45'
and p_type not like 'MEDIUM POLISHED%'
and p_size in (49, 14, 23, 45, 19, 3, 36, 9)
group by p_brand, p_type, p_size;

SQL> select * FROM TABLE(DBMS_XPLAN.DISPLAY); 

 |*  3 |    TABLE ACCESS STORAGE FULL| PART | 29833 |  1048K|    |   217   (2)| 00:00:01 |  1 |  8 
------------------------------------------------------------------------------------------------------------

    3 - filter(("P_SIZE"=3 OR "P_SIZE"=9 OR "P_SIZE"=14 OR "P_SIZE"=19 OR "P_SIZE"=23
    OR "P_SIZE"=36 OR "P_SIZE"=45 OR "P_SIZE"=49) AND "P_BRAND"<>'Brand#45'
    AND "P_TYPE" NOT LIKE 'MEDIUM POLISHED%')

Well, we got no storage predicates at all, and we didn’t expect that, because we had one simple predicate, namely p_brand != 'Brand#45', and the IN predicate had been rewritten to a series of OR’ed comparisons. So what happened? This parameter only permits simple predicates that are linked by ANDs and can be attached directly to one column. Disjuncts are not pushable, so they are normally evaluated by an eva tree or by pcode, neither of which are sent to the cell with this parameter set to FALSE. So why wasn’t our one simple predicate offloaded? Well, note where it is in the explain plan: it comes after the rewritten IN, and since the predicates are sorted by the optimizer on effectiveness, we stop looking as soon as we see one that can’t be offloaded. Let’s remove the IN and see what happens:

SQL> explain plan for select  p_brand, p_type, p_size
from part
where p_brand <> 'Brand#45'
and p_type not like 'MEDIUM POLISHED%';

|*  2 |   TABLE ACCESS STORAGE FULL| PART |   190K|  6686K|   217   (2)| 00:00:01 | 1 | 8 |
---------------------------------------------------------------------------------------------------

    2 - storage("P_BRAND"<>'Brand#45')
       filter("P_BRAND"<>'Brand#45' AND "P_TYPE" NOT LIKE 'MEDIUM POLISHED%')

As expected, the simple predicate is now offloaded. If you look at v$sqlfn_metadata you’ll see this parameter reflected in the offloadable column:

SQL> select func_id, descr, offloadable from v$sqlfn_metadata where descr like '%LIKE%';

   FUNC_ID DESCR              OFF
---------- ------------------ ---
        26  LIKE              NO
        27  NOT LIKE          NO
        99  LIKE              NO
       120  LIKE              NO
       ...
       121  NOT LIKE          NO
       524  REGEXP_LIKE       NO
       525  NOT REGEXP_LIKE   NO
       537  REGEXP_LIKE       NO
       538  NOT REGEXP_LIKE   NO

I hope you never need any of this in real life but it’s good to have it in the toolbag.

Categories: Development

Alter Table Shrink Space and SmartScan

XTended Oracle SQL - Tue, 2021-11-30 17:10

There are three main tools available for cleaning up a segment (Alter Table Shrink, Alter Table Move, and export/import), but one of them isn’t as helpful as you might have thought.  

Consider the following sequence of events where we update the 256th column to cause widespread fragmentation:

SQL> update t set c256 = 'abcdefghijklmnopqrstuvwxyz'; 
2000000 rows updated.
SQL> commit;
Commit complete.

SQL> analyze table t compute statistics;
Table analyzed.

SQL> select chain_cnt from dba_tables where table_name = 'T' and owner = 'FUSION';
CHAIN_CNT
----------
   2000000

SQL> select sum(blocks) from user_segments where segment_name = 'T';

SUM(BLOCKS)
-----------
     139264
SQL> alter table t enable row movement;
Table altered.
SQL> alter table t shrink space cascade;
Table altered.
SQL> analyze table t compute statistics;
Table analyzed.

SQL> select chain_cnt from dba_tables where table_name = 'T' and owner = 'FUSION';

CHAIN_CNT
----------
   1970068

1 row selected.

Note: ‘chain_cnt’ does not count chained rows; rather, it counts rows whose row pieces are chained across more than one block. A row that is in three pieces, but with all three pieces in the same block, has a zero chain_cnt.

In this particular artificial scenario Shrink has not gained us much reduction in space used, and more importantly it hasn’t reduced the kind of fragmentation that affects SmartScan performance.

This is because Shrink works in two phases. In Phase 1, the segment is scanned down from the end of the segment to the beginning. Rows with their head piece in the currently scanned block are moved together with all their row pieces, with the segment scanned from the beginning upwards looking for space for the entire row. When it is unable to move any more entire rows, Phase 2 starts scanning down again from the end of the segment, trying to move individual row pieces to blocks with space. This meant that while Phase 1 could potentially reduce chaining for relocated rows, Phase 2 was very unlikely to reduce the chain count and could in fact increase the chain_cnt. The moral of this is that Shrink really is for freeing up blocks close to the High Water Mark and not for cleaning up fragmented rows.

Now let’s try Alter Table move with the same segment:

SQL> alter table t move; 

Table altered.

SQL> analyze table t compute statistics;
Table analyzed.

SQL> select chain_cnt from dba_tables where table_name = 'T' and owner = 'FUSION';
CHAIN_CNT
----------
     45976
1 row selected.

SQL> select sum(blocks) from user_segments where segment_name = 'T';

SUM(BLOCKS)
-----------
      92160
1 row selected.

OK, that did what we hoped: more space has been reclaimed but more importantly for SmartScan, the number of fragmented rows has been reduced considerably.

With the fix for bug 19433200, the mechanics of Shrink have been reworked and it is now better at reducing the chain_cnt. However, even with the improvements made, when faced with heavily fragmented rows, Alter Table Move or export/import are likely to provide significantly better table scan performance with SmartScan.

Roger MacNicol

Oracle Data Storage Technology 

Categories: Development

Source Control for Oracle, a Customer Success Story

Gerger Consulting - Tue, 2021-09-28 02:20

Gitora is a version control tool for the Oracle Database. It helps DBAs and developers manage their database code and objects using Git, and enables them to use modern software development workflows.

Attend the webinar by Plexxis to learn how they are using Gitora to improve their software development process.

Click here to register.

Categories: Development

Where does the commit or rollback happen in PL/SQL code?

XTended Oracle SQL - Sat, 2021-09-11 17:01

One of the easiest ways to find out is to use diagnostic events:

alter session set events 'sql_trace {callstack: fname xctend} errorstack(1)';
Categories: Development

Another interesting troubleshooting case

XTended Oracle SQL - Fri, 2021-05-28 16:58

Got an interesting question today in RuOUG:

Some very simple PL/SQL procedures usually complete within ~50ms, but sporadically take longer than a second. For example, the simplest of these procedures:

create or replace PROCEDURE XXXX (
  P_ORG_NUM IN number,
  p_result OUT varchar2, 
  p_seq OUT number 
) AS 
BEGIN
  p_seq := P_ORG_NUM; p_result:='';
END;

sql_trace shows that it was executed for 1.001sec and all the time was “ON CPU”:

Trace file /opt/oracle/diag/rdbms/rdb4/rdb/trace/rdb_ora_7100.trc
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.10.0.0.0
Build label:    RDBMS_19.10.0.0.0DBRU_SOLARIS.SPARC64_210107
ORACLE_HOME:    /opt/oracle/product/19
System name:	SunOS
Node name:	node2.zzzzz.ru
Release:	5.11
Version:	11.3
Machine:	sun4v
Oracle process number: 137
Unix process pid: 7100, image: oracle@node2.zzzzz.ru

*** 2021-05-26T15:38:31.321550+03:00
*** SESSION ID:(68.47940) 2021-05-26T15:38:31.321609+03:00
*** CLIENT ID:() 2021-05-26T15:38:31.321633+03:00
*** SERVICE NAME:(SYS$USERS) 2021-05-26T15:38:31.321656+03:00
*** MODULE NAME:(JDBC Thin Client) 2021-05-26T15:38:31.321679+03:00
*** ACTION NAME:() 2021-05-26T15:38:31.321703+03:00
*** CLIENT DRIVER:(jdbcthin : 19.3.0.0.0) 2021-05-26T15:38:31.321728+03:00

=====================
PARSING IN CURSOR #18446744071368904384 len=53 dep=0 uid=51 oct=47 lid=51 tim=190436799743741 hv=804128640 ad='12345678' sqlid='aabbaabbaa123'
BEGIN AAAA.XXXX(:1 ,:2 ,:3 ); END;
END OF STMT
PARSE #18446744071368904384:c=28,e=28,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=0,tim=190436799743741
BINDS #18446744071368904384:

 Bind#0
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=03 fl2=1000000 frm=01 csi=873 siz=24 off=0
  kxsbbbfp=ffffffff747c6c98  bln=22  avl=02  flg=05
  value=6
 Bind#1
  oacdty=01 mxl=32767(32767) mxlc=00 mal=00 scl=00 pre=00
  oacflg=03 fl2=1400010 frm=01 csi=873 siz=32767 off=0
  kxsbbbfp=ffffffff747b8020  bln=32767  avl=00  flg=05
 Bind#2
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=03 fl2=1400000 frm=01 csi=873 siz=24 off=0
  kxsbbbfp=ffffffff747c6c68  bln=22  avl=00  flg=05

*** 2021-05-26T15:38:33.172899+03:00
WAIT #18446744071368904384: nam='SQL*Net message to client' ela= 3 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=190436800744970
EXEC #18446744071368904384:c=1104,e=1001110,p=0,cr=0,cu=0,mis=0,r=1,dep=0,og=1,plh=0,tim=190436800745125
XCTEND rlbk=0, rd_only=1, tim=190436800745192


Originally I asked for dtrace/flamegraph and got just this:

              unix`_resume_from_idle+0x280
              genunix`cv_timedwait_sig_hires+0x1c4
              genunix`cv_waituntil_sig+0x90
              genunix`nanosleep+0xd4
              unix`syscall_trap+0x228

              0xffffffff7c04dbcc
              0xffffffff7c0393ec
              0xffffffff7c02e084
              0xffffffff7c02f860
              0x1176ef724
              0x118376518
              0x118366b84
              0x118365d14
              0x118366078
              0x118314b60
              0x1183973cc
              0x1182c707c
              0x1102edef8
              0x10aa60064
              0x10c224038
              0x10aa50ca8
              0x115bbc370
              0x10aa196dc
              0x10aa2c59c
              0x10aa50ca8
  ns                                                
           value  ------------- Distribution ------------- count    
       268435456 |                                         0        
       536870912 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 1        
      1073741824 |                                         0      

But truss showed much more interesting details:

 0.000037    write(11, "   B i n d # 2\n", 8)           = 8
 0.000045    write(12, " 2 0 m 7 } M X 8\n", 9)         = 9
 0.000022    lseek(11, 0, SEEK_CUR)                             = 4673
 0.000038    write(11, "     o a c d t y = 0 2  ".., 104)       = 104
 0.000040    write(12, " 2 0 m N } 5 Y e 1\n", 10)              = 10
 0.000020    lseek(11, 0, SEEK_CUR)                             = 4777
 0.000037    write(11, "     k x s b b b f p = f".., 52)        = 52
 0.000038    write(12, " 2 0 m 7 } j Y q\n", 9)         = 9
 0.000028    pathconf("/tmp/", _PC_NAME_MAX)                    = 255
 0.000029    open("/tmp/.SHMLPESLD_rdb4_26_000000047334dc80", O_RDWR|O_CREAT|O_EXCL|O_NOFOLLOW|O_CLOEXEC, 0666) Err#17 EEXIST
 1.000029    nanosleep(0xFFFFFFFF7FFF5FF0, 0xFFFFFFFF7FFF5FE0) = 0
 0.000038    open("/tmp/.SHMLPESLD_rdb4_26_000000047334dc80", O_RDWR|O_CREAT|O_EXCL|O_NOFOLLOW|O_CLOEXEC, 0666) = 13
 0.000023    close(13)                                  = 0
 0.000026    open("/tmp/.SHMDPESLD_rdb4_26_000000047334dc80", O_RDONLY|O_NOFOLLOW|O_CLOEXEC) = 13
 0.000028    unlink("/tmp/.SHMLPESLD_rdb4_26_000000047334dc80") = 0
 0.000052    mmap(0x00000000, 528, PROT_READ|PROT_EXEC, MAP_PRIVATE, 13, 0) = 0xFFFFFFFF74700000
 0.000020    close(13)                                  = 0
 0.000023    lseek(11, 0, SEEK_CUR)                             = 4829
 0.000044    write(11, "\n * * *   2 0 2 1 - 0 5".., 38)        = 38
 0.000040    write(12, " - T + 7 } d b k t . c *".., 16)        = 16
 0.000020    lseek(11, 0, SEEK_CUR)                             = 4867
 0.000037    write(11, " W A I T   # 1 8 4 4 6 7".., 130)       = 130
 0.000037    write(12, " 8 4 j U 1 y 5 $ o 7 2 2".., 18)        = 18
 0.000020    lseek(11, 0, SEEK_CUR)                             = 4997
 0.000037    write(11, " E X E C   # 1 8 4 4 6 7".., 105)       = 105
 0.000037    write(12, " 8 0 j d 0 y 5 $ k x s t".., 20)        = 20
 0.000022    lseek(11, 0, SEEK_CUR)                             = 5102

As you can see, it calls nanosleep at line #11 and sleeps for 1 sec, but more interesting are the line just before and the few lines after nanosleep: Oracle tries to open a file "/tmp/.SHMLPESLD_rdb4_XXX" and fails with Err#17 (EEXIST), which means it is trying to open a PL/SQL procedure compiled in native mode (pesld – Plsql code Execution Shared object manager native code Loader). I asked to check the compilation parameters of the procedure, and it was indeed compiled in native mode:

PLSQL_OPTIMIZE_LEVEL 2
PLSQL_CODE_TYPE NATIVE
PLSQL_DEBUG FALSE

So the obvious workaround of recompiling those procedures in INTERPRETED mode fixed the issue.
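
A sketch of that recompilation for the example procedure (REUSE SETTINGS preserves the remaining compiler settings):

SQL> alter procedure XXXX compile plsql_code_type = interpreted reuse settings;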

UPDATE: I forgot to show what exactly is going on here. Finally I got a normal stack:

              unix`_resume_from_idle+0x280
              genunix`cv_timedwait_sig_hires+0x1c4
              genunix`cv_waituntil_sig+0x90
              genunix`nanosleep+0xd4
              unix`syscall_trap+0x228

              libc.so.1`__nanosleep+0x4
              libc.so.1`sleep+0x14
              libc.so.1`__pos4obj_lock+0x68
              libc.so.1`shm_open+0x50
              oracle`pesld02_Map_Object+0xbc4
              oracle`penglur+0x1d18
              oracle`pfrins+0x8e4
              oracle`pfri7_inst_body_common+0x214
              oracle`pfri3_inst_body+0x18
              oracle`pfrrun+0x9e0
              oracle`plsql_run+0x22c
              oracle`peicnt+0xfc
              oracle`kkxexe+0x2f8
              oracle`opiexe+0x5784
              oracle`kpoal8+0x918
              oracle`opiodr+0x448
              oracle`ttcpip+0x430
              oracle`opitsk+0x83c
              oracle`opiino+0x39c
              oracle`opiodr+0x448
  ns                                                
           value  ------------- Distribution ------------- count    
       268435456 |                                         0        
       536870912 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 3        
      1073741824 |                                         0    

As you can see, pesld02_Map_Object is trying to load (map) a user-defined function (native compilation) and calls shm_open, which calls __pos4obj_lock, and if we look at that function we can find why and when it calls sleep (which calls nanosleep internally): if the lock file already exists (the EEXIST we saw in the truss output), it sleeps for one second and retries:

int
__pos4obj_lock(const char *name, const char *ltype)
{
    char *dfile;
    int fd;
    int limit = 64;

    if ((dfile = __pos4obj_name(name, ltype)) == NULL) {
        return (-1);
    }

    while (limit-- > 0) {
        if ((fd = __open_nc(dfile, O_RDWR | O_CREAT | O_EXCL, 0666))
            < 0) {
            if (errno != EEXIST)
                break;
            (void) sleep(1);
            continue;
        }

        (void) __close_nc(fd);
        free(dfile);
        return (1);
    }

    free(dfile);
    return (-1);
}
Categories: Development

SQL*Plus tips #9: Reading traces and incident files

XTended Oracle SQL - Thu, 2021-05-27 19:57

@tracefile_read_last_by_mask filemask [regexp] [ignore_regexp] – finds the last trace file by filemask, keeps rows matching regexp and filters out rows matching ignore_regexp.

@tracefile_by_mask.sql [mask] – finds and shows the last 10 trace files by mask

@tracefile_current – shows the file name of your session’s trace file
@tracefile_read_current – reads a trace file of the current session

@tracefile_last [n] – shows last N trace files (by default 10)

@tracefile_read filename – reads a specified trace file

@tracefile_read_full.sql <filename> – reads a specified trace file and shows it with additional columns: component_name, operation_name, file_name, function_name

@tracefile_spool <tracefilename> <dest_file> – saves tracefilename to dest_file on the local machine.

Incidents, problems and incident files

@diag/problems_last.sql [N] – shows the last N problems (by default 10)

@diag/incidents_last.sql [N] – shows last N incidents (by default 10)

@diag/incidents_by_problem.sql <problem_id> [N] – shows last N incidents by problem_id

@diag/adr_home_create.sql – creates a directory object for reading incident files.

@diag/incident_files_by_mask.sql filemask – shows incident file names by file mask

SQL> @diag/incident_files_by_mask.sql %incident%387.%

INCIDENT_ID   OWNER_ID      FLAGS CON_ID BFILE
----------- ---------- ---------- ------ ------------------------------------------------------------
      26387          1          1      3 <ADR_HOME>/incident/incdir_26387/ORA19_ora_2942_i26387.trc#0

1 row selected.

@diag/incident_files_by_id.sql incident_id – shows incident files by incident_id

@diag/incident_files_spool_by_id.sql incident_id – saves an incident file on the local machine.

Categories: Development

Oracle diagnostic events — Cheat sheet

XTended Oracle SQL - Wed, 2021-05-19 20:13

Oracle diagnostic events are a great feature, but unfortunately poorly documented and nonintuitive, so it’s difficult to remember all the events/actions/parameters and even to read their internal documentation using oradebug. So I decided to compile that internal doc into a more convenient HTML version (http://orasql.org/files/events/) and make a cheat sheet of some unknown or little-known use cases.

alter system set events 
   'kg_event[1476]
        {occurence: start_after 1, end_after 3}
            trace("stack is: %\n", shortstack())
            errorstack(2)
    ';
  1. kg_event[errno] – Kernel Generic event in library Generic for error number events, which instructs Oracle to trace ORA-errno errors;
  2. {occurence: start_after X, end_after Y} – is a filter which instructs Oracle to skip the first X event checks and then trace just Y of them;
  3. trace(format, str1, str2, …, str15) – is a function from ACTIONS for printing into a trace file;
  4. shortstack() – is a function from ACTIONS , which returns a short call stack as a string;
  5. errorstack(level) – is a function from ACTIONS, which prints extended info (level: 0 – errorstack only, 1 – errorstack + call stack, 2 – as level 1 + processstate, 3 – as level 2 + context area). You can get more details with PROCESSSTATE or SYSTEMSTATE. If you need just a call stack, you can use CALLSTACK(level), with function arguments in case of level>1 (see the sketch below).
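
A minimal sketch combining the pieces above (hypothetical: dump a call stack with function arguments on the first ORA-1476 only):

alter system set events 'kg_event[1476] {occurence: end_after 1} callstack(2)';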
alter system set events 
    'trace[SQL_Compiler.* | SQL_Execution.*]
        [SQL: ...]
            {process: ospid = ...}
            {occurence:end_after 3}
                controlc_signal()';
  1. trace[component] – is the main diagnostic event, which allows you to specify Components to trace. This command shows how to specify all child components of SQL_Compiler and SQL_Execution.
  2. [SQL: sqlid] – is the only SCOPE in library RDBMS; it allows you to trace a specific SQL_ID and all its recursive calls (for example, if you specify the sql_id of a PL/SQL call, all its internal queries will be traced too);
  3. {process: …} – is a filter which allows you to filter processes;
  4. controlc_signal() – is an ACTION that raises “ORA-01013: user requested cancel of current operation”, i.e. it is as if the session had interrupted its own call.

I used this command recently when I had to help developers find where some suboptimal and unnecessary queries, discovered during performance load testing, were coming from in their code (a huge webapp with Hibernate).

alter system set events 
    'sql_trace {process: ospid = ...} 
     trace("sqlid(%): %\n", sqlid(), evfunc())
     ';

sql_trace – is just the good old trace event #10046, and here the trace() action forces Oracle to print a function name and sql_id for each sql_trace event. Simple example:

Enabling the event, then executing a query twice: first with default settings and then with _rowsource_statistics_sampfreq=1. The difference in the trace is noticeable: 12 vs 26 traced events.

As you can see, we get 26 “occurrences” with “_rowsource_statistics_sampfreq”=1 instead of just 12 by default. More details about this here: https://alexanderanokhin.com/2012/12/24/timing-rowsource-statistics-part-2-overhead-and-inconsistent-time/

alter system set events 
    'wait_event["enq: TM - contention"]
        {wait: minwait=1000} 
            errorstack(1)
            trace("event=[%] sqlid=%, ela=% p1=% p2=% p3=%\n", 
                   evargs(5), sqlid(), evargn(1), evargn(2), evargn(3), evargn(4))
    ';
  1. wait_event[name] – event to control wait event post-wakeup actions; the first argument is a wait event name, and all wait event names with their parameters (P1, P2, P3) can be found in v$event_name:
select 
   wait_class,name
  ,parameter1,parameter2,parameter3
  ,display_name 
from v$event_name;
  2. {wait: … } – is a filter that allows you to specify both the minimum wait time (ms) and the parameters P1, P2, P3. For example, for this “TM-contention” wait event, P2 is “object #”, so we can specify {wait: minwait=1000; p2=12345}, i.e. trace only table TM-lock waits on the table with object_id=12345 that last longer than 1 sec;
  3. evargX() – functions from ACTIONs which return the event check argument values, where the 1st argument is the elapsed time (ms), the 2nd–4th are the parameters p1–p3, and the 5th is the wait event name. kg_event has its own related functions – errargX().

Or another example: when you need to find out which session parameters have been changed. Let’s imagine someone forgot to specify NLS parameters in to_number() and didn’t specify the “on conversion error” clause, and some sessions sporadically get “ORA-01722: invalid number”:

-- original parameters:
SQL> alter session set nls_numeric_characters='.,';

Session altered.

-- all works fine...
SQL> select to_number('1,200.3','999g999d999') + 10 from dual;

TO_NUMBER('1,200.3','999G999D999')+10
-------------------------------------
                               1210.3

-- until they've been changed at some point:
SQL> alter session set nls_numeric_characters=".'";

Session altered.

-- obviously we get errors:
SQL> select to_number('1,200.3','999g999d999') + 10 from dual;
select to_number('1,200.3','999g999d999') + 10 from dual
                 *
ERROR at line 1:
ORA-01722: invalid number

There is no special view for the session parameters which are not a part of v$ses_optimizer_env, but we can easily get them using MODIFIED_PARAMETERS():

alter system set events 
    'kg_event[1722] 
        {process: ospid=27556}
        {occurence:end_after 1}
            MODIFIED_PARAMETERS()';

And since we have v$diag_alert_ext for alert.log and v$diag_trace_file_contents for trace files, we can get them using this simple query:

select c.payload
from v$diag_trace_file_contents c
where 1=1
  and c.session_id   = &sid -- session sid 
  and c.serial#      = &serial -- session serial#
  and c.section_name = 'Error Stack' -- they will be in "Error Stack"
--  and c.payload like '%nls_numeric%' -- we can filter only specific nls parameters
  and c.timestamp>systimestamp-interval'15'minute -- last 15 minutes
;

-- Results:
DYNAMICALLY MODIFIED PARAMETERS:
  nls_language             = "AMERICAN"
  nls_territory            = "AMERICA"
  nls_sort                 = "BINARY"
  nls_date_language        = "AMERICAN"
  nls_date_format          = "yyyy-mm-dd hh24:mi:ss"
  nls_currency             = "$"
  nls_numeric_characters   = ".'"
  nls_iso_currency         = "AMERICA"
  nls_calendar             = "GREGORIAN"
  nls_time_format          = "hh24:mi:ssxff"
  nls_timestamp_format     = "yyyy-mm-dd hh24:mi:ssxff"
  nls_time_tz_format       = "hh24:mi:ssxff TZR"
  nls_timestamp_tz_format  = "yyyy-mm-dd hh24:mi:ssxff TZR"
  nls_dual_currency        = "$"
  nls_comp                 = "BINARY"
  local_listener           = ""

Update: I’ve just got a question in RuOUG:

We use the “Extensible Optimizer Interface“, i.e. the Oracle CBO calls such code internally, and it fails periodically with “ORA-06550 PL/SQL compile error”. Can we specify both “kg_event” and “sql_id” so we can get those errors with details in the alert.log?

Of course, simple example:

alter system set events 
    'kg_event[6550]
        [sql: 5cmnpq8t0g7pd]
            {occurence: start_after 1, end_after 3}
                errorstack(3)
                incident(plserrors)
      ';

As you can see, we use incident(label) here to generate an incident file (which you can easily pack and send to Oracle Support or consultants) and kg_event[6550] to trace “ORA-06550: line %, column %” errors.

Let’s generate an error; we can then see that an incident file was successfully generated.

PS: more details in Tanel Poder’s articles:
1. The full power of Oracle’s diagnostic events, part 1: Syntax for KSD debug event handling
2. The full power of Oracle’s diagnostic events, part 2: ORADEBUG DOC and 11g improvements
3. What Caused This Wait Event: Using Oracle’s wait_event[] tracing
4. http://tech.e2sn.com/oracle/troubleshooting/oradebug-doc

Ora C functions on Frits Hoogland’s site: http://orafun.info/

HTML version of the internal oradebug/diagnostic events doc: http://orasql.org/files/events

Categories: Development

v$blog #funny #friday

XTended Oracle SQL - Sat, 2021-04-24 18:22
select title, short_url 
from v$blog 
where pubDate>=systimestamp - interval '5' month;
with v$blog as (
    select  
        title
       ,to_timestamp_tz(pubDate,'DY, dd mon yyyy hh24:mi:ss TZR') pubDate
       ,short_url
       ,description
    from xmltable(
            '/rss/channel/item'
            passing
                xmltype(httpuritype('http://orasql.org/feed/').getclob())
            columns
                 title       varchar2(70)  path 'title'
                ,pubDate     varchar2(40)   path 'pubDate'
                ,url         varchar2(128)  path 'link'
                ,short_url   varchar2(50)   path 'guid'
                ,description varchar2(500)  path 'fn:substring(./description,1,500)'
    ) 
)
select title, short_url from v$blog where pubDate>=systimestamp - interval '5' month;
Categories: Development

Murder in the Age of Enlightenment

Greg Pavlik - Fri, 2021-04-09 17:51

I had a few days of downtime to deal with some medical issues and turned to some short story collections to fill the time. My companions for a bit were Ryūnosuke Akutagawa and Anton Chekhov. I was quite delighted with a new translation of Akutagawa from Pushkin Press, Murder in the Age of Enlightenment. What sparse but sharp imagery - taken from Japanese history, European literature, Mahayana Buddhism, Christianity, Chinese writings - it was a bit of a smorgasbord. Akutagawa can be dark: his preoccupation with suicide in his writing no doubt reflected in his own suicide at age 35; I found his piece Madonna in Black on a peculiarly evil Maria-Kannon to be troubling, not least because I have a kind of devotional fascination with Maria-Kannon as our Lady of Mercy. But still Akutagawa is deeply humanistic and wide-ranging. The Karetnyk translation can be digested in an afternoon, no doubt time well spent.

My Chekhov choice was the recent translation of fifty-two stories by the unsurpassable translator pair Richard Pevear and Larissa Volokhonsky. These two are artists in their own right... I can't say enough good things about their portfolio of translations. They are so good I've been forced to re-read a number of novels just to digest their interpretative readings over the years.

But back to Akutagawa. Here I post a translation done under Creative Commons license* of the story The Spider's Thread. I don't know if this is a re-telling of Dostoevsky's "Tale of the Onion" in Karamazov for sure, though the story line is so close that I find it impossible to believe otherwise: Lord Buddha Shakyamuni simply replacing the Guardian Angel. Get the Pushkin Press book to read it in a slightly more refined form, but I found this a wonderful read as well:


ONE


One day, the Buddha was strolling alone along the edge of a lotus pond in Paradise. The blooming lotus flowers in the pond were each pure white like jewels, and the place was filled with the indescribably wondrous fragrance continually emitted from each flower’s golden center. It was just morning in Paradise.

After a time, the Buddha paused at the edge of the pond and from between the lotus leaves that covered it saw a glimpse of the state of things below. Now this celestial pond just happened to lie directly over Hell, and peering through that crystal-clear water was like looking through a magnifying glass at the River of Death and the Mountain of Needles and such.

The Buddha saw there, in the depths of Hell, a single man writhing along with the other sinners. This man was named Kandata, and he had been a notorious thief who had performed murder and arson and other acts of evil. In his past, however, he had performed just one good deed: one day, when walking through the deep forest, he saw a spider crawling along the road. At first he raised his foot to crush it, but suddenly he changed his mind and stopped, saying, “No, small though it may be, a spider, too, has life. It would be a pity to meaninglessly end it,” and so did not kill it.

Looking down upon the captives in Hell the Buddha recalled this kind act that Kandata had performed, and thought to use his good deed as a way to save him from his fate. Looking aside, there on a jade-colored lotus leaf he saw a single spider, spinning out a web of silver thread. The Buddha carefully took the spider’s thread into his hand, and lowered it straight down between the jewel-like white lotuses into the depths of Hell.


TWO


Kandata was floating and sinking along with the other sinners in the Lake of Blood at the bottom of Hell. It was pitch black no matter which way he looked, and the occasional glimpse of light that he would see in the darkness would turn out to be just the glint of the terrible Mountain of Needles. How lonely he must have felt! All about him was the silence of the grave, the only occasional sound being a faint sigh from one of the damned. Those who were so evil as to be sent to this place were tired by its various torments, and left without even the strength to cry out. Even the great thief Kandata could only squirm like a dying frog as he choked in the Lake of Blood.

But one day, raising up his head and glancing at the sky above the lake, in the empty darkness Kandata saw a silver spider’s thread being lowered from the ceiling so far, far away. The thread seemed almost afraid to be seen, emitting a frail, constant light as it came down to just above Kandata’s head. Seeing this, Kandata couldn’t help but clap his hands in joy. If he were to cling to this thread and climb up it, he may be able to climb out of Hell! Perhaps he could even climb all the way to Paradise! Then he would never be chased up the Mountain of Needles, nor drowned in the Lake of Blood again.

Thinking so, he firmly grasped the spider’s thread with both hands and began to climb the thread, higher and higher. Having once been a great thief, he was used to tasks such as this. But the distance between Hell and Paradise is tens of thousands of miles, and so it would seem that no amount of effort would make this an easy journey. After climbing for some time Kandata tired, and couldn’t climb a bit higher. Having no other recourse, he hung there from the thread, resting, and while doing so looked down below.

He saw that he had made a good deal of progress. The Lake of Blood that he had been trapped in was now hidden in the dark below, and he had even climbed higher than the dimly glowing Mountain of Needles. If he could keep up this pace, perhaps he could escape from Hell after all. Kandata grasped the thread with both hands, and laughingly spoke in a voice that he hadn’t used in the many years since he had come here, “I’ve done it! I’ve done it!”

Looking down, however, what did he see but an endless queue of sinners, intently following him up the thread like a line of ants! Seeing this, surprise and fear kept Kandata hanging there for a time with mouth open and eyes blinking like a fool. How could this slender spider’s web, which should break even under just his weight, support the weight of all these other people? If the thread were to snap, all of his effort would be wasted and he would fall back into Hell with the others! That just would not do. But even as he thought these thoughts, hundreds more, thousands more of the damned came crawling up from the Lake of Blood, forming a line and scurrying up the thread. If he didn’t do something fast, surely the thread would snap in the middle and he would fall back down.

Kandata shouted out, “Hey! You sinners! This thread is mine! Who said you could climb up it? Get off! Get off!”

Though the thread had been fine until just then, with these words it snapped with a twang right where Kandata held it. Poor Kandata fell headfirst through the air, spinning like a top, right down through the darkness. The severed end of the silver thread hung there, suspended from heaven, shining with its pale light in that moonless, starless sky.


THREE


The Buddha stood in Paradise at the edge of the lotus pond, silently watching these events. After Kandata sank like a stone to the bottom of the Lake of Blood, he continued his stroll with a sad face. He must have been surprised that even after such severe punishment Kandata’s lack of compassion would lead him right back into Hell.

Yet the lotus blossoms in the lotus ponds of Paradise care nothing about such matters. Their jewel-like white flowers waved about the feet of the Buddha, and each flower’s golden center continuously filled the place with their indescribably wondrous fragrance. It was almost noon in Paradise.


(16 April 1918)

* Translation http://tonygonz.blogspot.com/2006/05/spiders-thread-akutagawa-ryunosuke.html

Triaging Smart Scan

XTended Oracle SQL - Thu, 2021-04-08 09:27

This document is my attempt to bring together the available options for determining the root cause of a Smart Scan issue: a roadmap to help support engineers narrow down the cause of a problem.

It is a living document and will be edited and amended as time goes by. Please do check back again in the future.

Warning: these parameters should only be used in conjunction with an Oracle Support Engineer and are not intended for DBAs to self-triage; nor should they be left set after triage without discussion with an Oracle Support Engineer.

The Basics
  1. Check if the issue reproduces without SmartScan
    alter session set cell_offload_processing=FALSE;

    or

    select /*+ opt_param('cell_offload_processing','false') */ <col> from <tab>;

    This completely turns off Smart Scan; the RDBMS behaves as it would on non-Exadata storage and does its own disk I/O through the buffer cache or direct path reads. (A quick way to confirm whether Smart Scan actually engaged is shown after this list.)
     
  2. Check if the issue reproduces in cell pass thru mode
    alter session set "_kcfis_cell_passthru_enabled"=TRUE;

    This still uses Smart Scan but turns off the smarts: the blocks are read by the offload server and returned unprocessed.
     
  3. Check if it reproduces in emulation mode
    alter session set "_rdbms_internal_fplib_enabled"=TRUE;
    alter session set "_serial_direct_read"=TRUE;

    This mode runs the copy of Smart Scan that is linked into the RDBMS, to see whether the issue stems from the offload environment rather than from the Smart Scan code itself. Be aware: bug fixes and patches are delivered to the RDBMS and to the offload server independently; one may have fixes that the other does not, and vice versa.
     
  4. There is NO point in trying
    -- alter session set "_kcfis_rdbms_blockio_enabled"=TRUE;

    It simply forces the 'file intelligent storage' layer to divert to the 'direct file' layer, i.e. regular block I/O. This achieves exactly the same thing as cell_offload_processing=FALSE, but in a roundabout way.
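Whichever of these steps you try, it helps to confirm whether Smart Scan actually engaged for the statement. A minimal sketch, assuming a recent Exadata release where the usual cell statistics exist (check the exact names in v$statname on your version):
    select n.name, s.value
      from v$statname n, v$mystat s
     where n.statistic# = s.statistic#
       and n.name in ('cell physical IO bytes eligible for predicate offload',
                      'cell physical IO interconnect bytes returned by smart scan');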
Triaging issues with Storage Index
  1. Check if the issue reproduces with storage index disabled
    alter session set "_kcfis_storageidx_disabled"=TRUE;

    This completely disables Storage Index; all chunks will be processed by Smart Scan with no SI filtering. (A query to measure how much I/O SI normally saves is shown after this list.)
     
  2. Check if the issue reproduces in Diagnostic mode
    alter session set "_kcfis_storageidx_diag_mode"=1;

    This runs the query both with and without SI and then compares the two to make sure SI would have returned the same result for each chunk.
     
  3. Check whether SI Min/Max processing is the issue:
    cellcli> alter cell offloadGroupEvents = "immediate cellsrv.cellsrv_setparam('_cell_pred_enable_fp_preprocess', 'FALSE')";
     
  4. Check whether any Set Membership metadata stored in SI is the issue (this only works in conjunction with the IM format columnar cache, a.k.a. CC2).
    alter session set "_kcfis_storageidx_set_membership_disabled"=TRUE;
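A useful companion to these settings is measuring how much I/O Storage Index is actually saving; if the statistic below stays at zero, the SI settings are unlikely to matter for your query. A sketch, assuming the standard statistic name (verify in v$statname on your version):
    select n.name, s.value
      from v$statname n, v$mystat s
     where n.statistic# = s.statistic#
       and n.name = 'cell physical IO bytes saved by storage index';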
Triaging issues with Flash Cache
  1. If the objective is to bypass FC for a single table, the correct way to eliminate FC as a cause is to disable caching for that segment, which also flushes any blocks already cached for it.
    alter table <foo> storage (cell_flash_cache NONE);

    In order to resume default behaviour one would use:
    alter table <foo> storage (cell_flash_cache DEFAULT);
     
  2. If the goal is to completely bypass the FC layer, we need to change the caching policy of the grid disk, which flushes the current contents and prevents both write-through and write-back caching. (Queries to confirm both the segment setting and the grid disk policy are shown after this list.)

    cellcli> ALTER GRIDDISK grid_disk_name CACHINGPOLICY="none";

    In order to resume normal caching policy, one would use:
    cellcli> ALTER GRIDDISK grid_disk_name CACHINGPOLICY="default";
     
  3. Note: the parameter "_kcfis_kept_in_cellfc_enabled" is NOT the correct way to bypass FC because in many cases the disk I/O must go through FC anyway.
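Either change can be verified before re-running the test. A quick sketch; the object names are placeholders, and the CELL_FLASH_CACHE column in the *_TABLES views is assumed to be present, as it is on recent releases:
    SQL> select table_name, cell_flash_cache from user_tables where table_name = 'FOO';

    cellcli> LIST GRIDDISK ATTRIBUTES name, cachingPolicy;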
Triaging issues with the Columnar Cache
  1. Check if the columnar cache is the cause of the issue:
    alter session set "_enable_columnar_cache"=0;

    and to turn it back on again with default behaviour:
    alter session set "_enable_columnar_cache"=1;

    Note: do not use "_kcfis_cellcache_disabled"; that is not the correct way to triage this.
     
  2. Check if the IM format (a.k.a. CC2) columnar cache is the cause of the issue by forcing version 1 format to be used:
    alter session set "_enable_columnar_cache"=33;  -- 0x01 + 0x20
     
  3. Check whether using the columnar cache with row-major blocks is the cause of the issue
    alter session set "_enable_columnar_cache"=16385; -- 0x01 + 0x4000
     
  4. Check whether using the columnar cache with Hybrid Columnar blocks is the cause of the issue
    alter session set "_enable_columnar_cache"=32769; -- 0x01 + 0x8000
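As the comments above show, these settings are bit flags combined with the 0x01 enable bit. A composite value can be sanity-checked by converting it to hex, assuming the bit meanings listed above:
    SQL> select to_char(32769, 'XXXXX') as hex_value from dual;  -- gives 8001, i.e. 0x8000 + 0x01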
Tracing offload processing

Please see:

  1. Using trace events in an offload server
  2. Tracing Hybrid Columnar Compression Offload
  3. Tracing an Offload Group
Triaging PCODE processing
  1. PCODE is our new byte code for evaluating predicates and aggregates. To go back to the old evaluation code, use:
    alter session set "_kdz_pcode_flags" = 1;

    and to turn it back on again:
    alter session set "_kdz_pcode_flags" = 0;
Useful Debug Scan values
  1. Disable LOB predicate pushdown to Smart Scan:
    alter session set "_dbg_scan"=1;
     
  2. Disable rowset function evaluation in Smart Scan:
    alter session set "_dbg_scan"=4096;
     
  3. Disable aggregation pushdown to Smart Scan:
    alter session set "_dbg_scan"=8192;
     
  4. Disable Hybrid IM scan: this is where an In-Memory scan is interleaved with Smart Scan
    alter session set "_dbg_scan"=131072;
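The powers-of-two values suggest _dbg_scan is a bitmask, so it may be possible to disable several behaviours at once by adding the values together; this is an assumption that should be confirmed with your support engineer before relying on it:
    alter session set "_dbg_scan"=12288; -- assumption: 4096 + 8192 would disable both rowset evaluation and aggregation pushdown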
Categories: Development

Silence in 4 Movements

Greg Pavlik - Wed, 2021-03-24 12:05

"What is the relation of [contemplation] to action? Simply this. He who attempts to act and do things for others or for the world without deepening his own self-understanding, freedom, integrity and capacity to love will not have anything to give others. He will communicate to them nothing but the contagion of his own obsessions, his aggressiveness, his ego-centered ambitions, his delusions about ends and means, his doctrinaire prejudices and ideas."
—Thomas Merton


"Those who know do not talk.

Those who talk do not know.

~

Stop talking,

block off your senses,

meditate in silence,

release your worries,

blunt your sharpness,

untie your knots,

soften your glare,

harmonise your inner light

and unite the world into one whole!

This is the primal union or secret embrace."

Tao Te Ching 56

"Make stillness your criterion for testing the value of everything, and choose always what contributes to it."

-Evagrius Ponticus




"Silence and Beauty - Eco" (Minerals and gesso on canvas, 2016) by contemporary Japanese American artist Fujimura Makoto (藤村真, born in Boston, Massachusetts in 1960). Abstract expressionist piece done with nihonga techniques. Picture found online.

I read Fujimura's book Silence and Beauty last year, which was inspired by Endo Shusaku's 20th-century novel Silence (itself adapted to film by the great Martin Scorsese). Fujimura reflects on his relationship with Japanese culture in the light of Endo's work, the Hiroshima bombing, and his own experience as a Japanese-American: most importantly, how it has manifested in his work as an artist. Endo's work itself dwells on "silence" as absence. But I think this painting shows absence-as-presence: something is there, something beautiful, but it's not clear what or even why; in fact, that presence changes over time for the viewer, depending on vantage point or even focus.

Version Control for Oracle Webinar Video

Gerger Consulting - Wed, 2021-02-24 05:34

 

On February 16th, we hosted a webinar about Gitora 5. Many thanks to everyone who attended. The recording of the webinar is below.

There were a few questions we received several times during the webinar. We'd like to reiterate our answers below:

How does Gitora work with GitLab, GitHub or similar services...?
Gitora repos work with these services just like any other local Git repo does. From the point of view of GitLab, GitHub or similar services, Gitora repos are just local repos that you can push to these services.

How can I create a merge/pull request with Gitora?
A merge/pull request is not a Git feature. It is a GitHub/GitLab feature. You can continue creating merge/pull requests on these services using the same workflow you use for any other local repo. To create a merge/pull request for a Gitora repo, first push your repo to GitHub/GitLab using the Gitora web app (or use Git directly). Next, go to your GitHub/GitLab account and create your merge/pull request.
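For example, publishing a Gitora repo to GitHub from the command line looks exactly like pushing any other local repo. A minimal sketch, where the local path and remote URL are hypothetical placeholders:

# hypothetical local path of the Gitora repo
cd /path/to/gitora/repo
# add the GitHub remote (placeholder URL) and push the current branch
git remote add origin https://github.com/your-org/your-repo.git
git push -u origin master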

How can I work on two different projects with different deadlines in the same development database and send only code related to one project to the test database?
You can't. Gitora cannot do magic. :-) There will never be a tool that helps you do this, because it is impossible. What Gitora empowers you to do is have multiple databases per project. You can then work on the same logical objects in different databases and use Gitora to merge the code bases.
You can watch a full workflow and detailed explanation of this process at this link.

Categories: Development
