A Recipe for Summoning the RBO Monster (even in Oracle 12c): On Delete Cascade, Function-Based Index and Missing Table Statistics

The last version of Oracle in which CHOOSE was officially supported as an OPTIMIZER_MODE parameter value was 9.2.
This is what the documentation of Oracle 9.2 says about it:

choose
The optimizer chooses between a cost-based approach and a rule-based approach based on whether statistics are available.
If the data dictionary contains statistics for at least one of the accessed tables, then the optimizer uses a cost-based approach and optimizes with a goal of best throughput. If the data dictionary contains only some statistics, then the cost-based approach is used, and the optimizer must guess the statistics for the subjects without any statistics. This can result in sub-optimal execution plans. If the data dictionary contains no statistics for any of the accessed tables, then the optimizer uses a rule-based approach.

In subsequent versions CHOOSE does not appear in the documentation as a valid value, and the “What’s New in Oracle Performance?” chapter in the documentation of Oracle 10.1 includes this sentence:

Some consequences of the desupport of RBO are:
• CHOOSE and RULE are no longer supported as OPTIMIZER_MODE initialization parameter values …

But the Oracle database keeps using CHOOSE internally, and therefore may still use the RBO, even in version 12c. And last week I hit a performance issue that was caused by this fact combined with some unfortunate circumstances.

Here is a simplified example, running on Oracle 12.1.0.2.

We have two tables – PARENTS and CHILDREN – with an “on delete cascade” foreign key constraint.

> create table parents (
    id number not null primary key
  );

Table created.

> insert into parents select rownum from dual connect by level<=10000;

10000 rows created.

> create table children (
    id number not null primary key,
    parent_id number not null references parents (id) on delete cascade,
    filler varchar2(4000)
  );

Table created.

> insert into children
  select rownum,mod(rownum,10000)+1,lpad('x',4000,'x')
  from dual
  connect by level<=100000;

100000 rows created.

> commit;

Commit complete.

There is an index that supports the foreign key, and it is a function-based index. Note that the leading column of the index is simply the referencing column.

> create index children_idx on children (parent_id, substr(filler,1,10));

Index created. 

Now let’s delete a record from the PARENTS table.

> alter system flush buffer_cache;

System altered.

> set timing on
> delete parents where id=1;

1 row deleted.

Elapsed: 00:00:27.80
> set timing off
> roll
Rollback complete.

Why did it take so much time?

When we delete a record from PARENTS, Oracle implicitly deletes the child records of this parent (because of the “on delete cascade” foreign key). This is the recursive SQL statement:

 delete from "DEMO"."CHILDREN" where "PARENT_ID" = :1
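
If you want to locate this recursive statement yourself, one quick (and slightly crude) sketch is to search V$SQL by its text, assuming it is still in the shared pool; this is how the SQL_ID used below was found (the NOT LIKE predicate keeps the search query from matching itself):

> select sql_id, child_number, executions
  from v$sql
  where sql_text like '%delete from "DEMO"."CHILDREN"%'
  and sql_text not like '%v$sql%';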

Oracle uses the CHOOSE optimizer mode for the implicit deletion from CHILDREN.

> select value,isdefault
  from v$sql_optimizer_env
  where sql_id='f7j1aq9z8k6r1'
  and child_number=0
  and name='optimizer_mode';

VALUE                ISDEFAULT
-------------------- ---------
choose               NO

1 row selected.

Unfortunately, I forgot to gather statistics on the CHILDREN table. Because of the missing statistics, Oracle chose to use the rule-based optimizer for the implicit DELETE statement.
And this means that it cannot use the CHILDREN_IDX index, because the RBO does not consider function-based indexes.
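
As a side check, the data dictionary confirms that the index is function-based (for this index, INDEX_TYPE should show FUNCTION-BASED NORMAL):

> select index_name, index_type
  from user_indexes
  where index_name = 'CHILDREN_IDX';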

> SELECT * FROM table(DBMS_XPLAN.DISPLAY_CURSOR('f7j1aq9z8k6r1',0));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  f7j1aq9z8k6r1, child number 0
-------------------------------------
 delete from "DEMO"."CHILDREN" where "PARENT_ID" = :1

Plan hash value: 3379301338

---------------------------------------
| Id  | Operation          | Name     |
---------------------------------------
|   0 | DELETE STATEMENT   |          |
|   1 |  DELETE            | CHILDREN |
|*  2 |   TABLE ACCESS FULL| CHILDREN |
---------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("PARENT_ID"=:1)

Note
-----
   - rule based optimizer used (consider using cbo)


23 rows selected.

If I execute the DELETE FROM CHILDREN statement explicitly, the CBO chooses a good execution plan that uses the index, because the statistics on the index were gathered automatically during the index creation. It’s the missing statistics on the table that caused the implicit (recursive) DELETE statement to use the RBO.

> alter system flush buffer_cache;

System altered.

> set timing on
> delete from "DEMO"."CHILDREN" where "PARENT_ID" = 1;

10 rows deleted.

Elapsed: 00:00:00.21
> set timing off
> select * from table(dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  0cm664fx8b944, child number 0
-------------------------------------
delete from "DEMO"."CHILDREN" where "PARENT_ID" = 1

Plan hash value: 1030488021

----------------------------------------------------------------------------------
| Id  | Operation         | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | DELETE STATEMENT  |              |       |       |     2 (100)|          |
|   1 |  DELETE           | CHILDREN     |       |       |            |          |
|*  2 |   INDEX RANGE SCAN| CHILDREN_IDX |    10 | 40100 |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("PARENT_ID"=1)


19 rows selected.

Even if I didn’t have statistics on the index, the CBO would still choose the index for performing the explicit DELETE, because it would use dynamic sampling:

> exec dbms_stats.delete_index_stats (user,'CHILDREN_IDX')

PL/SQL procedure successfully completed.

> delete from "DEMO"."CHILDREN" where "PARENT_ID" = 3;

10 rows deleted.

> select * from table(dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  a8x8wszgx1g6r, child number 0
-------------------------------------
delete from "DEMO"."CHILDREN" where "PARENT_ID" = 3

Plan hash value: 1030488021

----------------------------------------------------------------------------------
| Id  | Operation         | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | DELETE STATEMENT  |              |       |       |     3 (100)|          |
|   1 |  DELETE           | CHILDREN     |       |       |            |          |
|*  2 |   INDEX RANGE SCAN| CHILDREN_IDX |  1066 |  2111K|     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("PARENT_ID"=3)

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)


23 rows selected.

Finally, if the index were a regular one rather than function-based, then a good plan would have been chosen for the recursive SQL, because the RBO prefers an index range scan over a full table scan:

> drop index children_idx;

Index dropped.

> create index children_idx on children (parent_id, id);

Index created.

> alter system flush buffer_cache;

System altered.

> set timing on
> delete parents where id=4;

1 row deleted.

Elapsed: 00:00:00.14
> set timing off

> select * from table(DBMS_XPLAN.DISPLAY_CURSOR('f7j1aq9z8k6r1',0));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  f7j1aq9z8k6r1, child number 0
-------------------------------------
 delete from "DEMO"."CHILDREN" where "PARENT_ID" = :1

Plan hash value: 1030488021

------------------------------------------
| Id  | Operation         | Name         |
------------------------------------------
|   0 | DELETE STATEMENT  |              |
|   1 |  DELETE           | CHILDREN     |
|*  2 |   INDEX RANGE SCAN| CHILDREN_IDX |
------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("PARENT_ID"=:1)

Note
-----
   - rule based optimizer used (consider using cbo)


23 rows selected.
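
The straightforward fix, of course, is to make sure the table has statistics, so that CHOOSE resolves to the CBO for the recursive DELETE as well. A minimal sketch:

> exec dbms_stats.gather_table_stats (user,'CHILDREN')

Once CHILDREN has statistics, the implicit deletion from CHILDREN should use the function-based index just like the explicit one did.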

The Vicious Index that Sabotages Plans it is not Part of

We all know that adding a new index may change the execution plans of existing queries – sometimes improving the performance, other times causing a performance degradation (and sometimes not making any difference).
But is it true even if the new index is not used by the new execution plan? And even if the new index is invisible?
I’ve recently discovered that the answer is yes for a specific scenario that involves an index that is both descending and function-based.

Here is a simple example – tested on 11.2.0.4 and 12.1.0.2.

create table t (
  n1 number not null,
  n2 number not null,
  n3 number not null,
  str varchar2(100)
);

insert /*+ append */ into t 
  select rownum,rownum,rownum,rownum 
  from dual 
  connect by level<=100000;
commit;

exec dbms_stats.gather_table_stats (user,'T')

create index t_i1 on t (n1, round(n3) desc);

explain plan for
  select null
  from   t
  where  n1 = :n1
  order  by n1,
            round(n3) desc;

select * from table(dbms_xplan.display());

-------------------------------------------------------------------------
| Id  | Operation        | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT |      |     1 |    10 |     2   (0)| 00:00:01 |
|*  1 |  INDEX RANGE SCAN| T_I1 |     1 |    10 |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("N1"=TO_NUMBER(:N1))

The execution plan simply retrieves results from the T_I1 index. The results are already sorted in the desired way, as they are stored like that in the index, so the plan does not include a sorting step. Also, since we don’t need any column from the table, the plan does not access T at all.

Now let’s add a second index:

create index t_i2 on t (n2, round(n3));

Note that this index contains the same expression as in the first index – round(n3) – but in the “regular” ascending order. As the new index starts with n2, it is quite obvious that the previous plan – simply range scanning the T_I1 index – is still the best option.
But now the plan has changed: it still accesses T_I1, but it performs two additional steps, accessing the table and sorting:

explain plan for
  select null
  from   t
  where  n1 = :n1
  order  by n1,
            round(n3) desc;

select * from table(dbms_xplan.display());

---------------------------------------------------------------------------------------------
| Id  | Operation                            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |      |     1 |    18 |     4  (25)| 00:00:01 |
|   1 |  SORT ORDER BY                       |      |     1 |    18 |     4  (25)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T    |     1 |    18 |     3   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | T_I1 |     1 |       |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("N1"=TO_NUMBER(:N1))

Even if the second index is invisible, it still confuses the optimizer:

alter index t_i2 invisible;

explain plan for
  select null
  from   t
  where  n1 = :n1
  order  by n1,
            round(n3) desc;

select * from table(dbms_xplan.display());

---------------------------------------------------------------------------------------------
| Id  | Operation                            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |      |     1 |    18 |     4  (25)| 00:00:01 |
|   1 |  SORT ORDER BY                       |      |     1 |    18 |     4  (25)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T    |     1 |    18 |     3   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | T_I1 |     1 |       |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("N1"=TO_NUMBER(:N1))

Only after dropping the second index do we get our original plan back:

drop index t_i2;

explain plan for
  select null
  from   t
  where  n1 = :n1
  order  by n1,
            round(n3) desc;

select * from table(dbms_xplan.display());

-------------------------------------------------------------------------
| Id  | Operation        | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT |      |     1 |    10 |     2   (0)| 00:00:01 |
|*  1 |  INDEX RANGE SCAN| T_I1 |     1 |    10 |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("N1"=TO_NUMBER(:N1))

Who Rewrote My SQL?

There are several new features in Oracle 12c that are implemented under the hood by changing the SQL statement that we write to a different statement (e.g., by adding some hidden predicates).
At OUG Ireland 2016 I talked about two such features – In-Database Archiving and Temporal Validity – as part of my “Write Less (Code) with More (Oracle12c New Features)” presentation. I usually talk about another such feature in this presentation – the Row Limiting clause. This time I skipped it, but Tim Hall talked about it two hours later in his “Analytic Functions: An Oracle Developer’s Best Friend” presentation. Following these presentations I had two short and interesting chats with Tim and with Jonathan Lewis about when, during statement processing, Oracle rewrites the statements for these features. These chats are the motivation for this post.

When a SQL statement is processed, it goes through several stages, in this order: parsing, optimization, row source generation, and execution.

Note: Parsing is a confusing term, as many times when we say “parsing” (especially “hard parsing”) we actually mean “parsing + optimization + row source generation”.

The first stage, the parsing, is not too complex. The Parser basically checks the syntax and the semantics of the statement. If needed, it also expands the statement. For example, it replaces each view referenced in the statement with its definition, so after parsing the statement refers only to actual tables. Another example: it expands * to the actual column list.
The second stage, the optimization, is much more complex. The Optimizer has several components, and the first one is the Query Transformer. This component may further rewrite the SQL statement that it gets from the Parser, but here the purpose is to find an equivalent statement with a lower cost.

In Oracle 12c we have a simple (and documented) way to see the output of the expansion that is done by the Parser – using the DBMS_UTILITY.EXPAND_SQL_TEXT procedure.

Note: to be more precise, I assume that this procedure reveals everything that the Parser does during the expansion stage, and only that. The documentation of DBMS_UTILITY.EXPAND_SQL_TEXT is very limited. It only says “This procedure recursively replaces any view references in the input SQL query with the corresponding view subquery”, and the Usage Notes imply that it also shows the outcome of applying VPD policies.
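
For example, here is a minimal sketch of the star-expansion case (the exact output formatting may differ between versions):

var x clob
begin
  dbms_utility.expand_sql_text(
    input_sql_text  => 'select * from dual',
    output_sql_text => :x);
end;
/

print x

The output shows * expanded to the explicit column list – something like SELECT "A1"."DUMMY" "DUMMY" FROM "SYS"."DUAL" "A1".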

Row Limiting

Apparently the new Row Limiting clause, used for Top-N and paging queries, is implemented at the expansion stage. We can see that a query that uses the new Row Limiting syntax is expanded to a pre-12c syntax using analytic functions:

var x clob
begin
  dbms_utility.expand_sql_text(
    input_sql_text => '
      select project_id,
             person_id,
             assignment_id,
             assignment_period_start,
             assignment_period_end
      from project_assignments
      order by project_id,person_id
      OFFSET 100 ROWS 
      FETCH NEXT 4 ROWS ONLY',
    output_sql_text => :x);
end;
/
 
print x

-- I formatted the output to make it more readable
SELECT "A1"."PROJECT_ID"              "PROJECT_ID",
       "A1"."PERSON_ID"               "PERSON_ID",
       "A1"."ASSIGNMENT_ID"           "ASSIGNMENT_ID",
       "A1"."ASSIGNMENT_PERIOD_START" "ASSIGNMENT_PERIOD_START",
       "A1"."ASSIGNMENT_PERIOD_END"   "ASSIGNMENT_PERIOD_END"
FROM   (SELECT "A2"."PROJECT_ID" "PROJECT_ID",
               "A2"."PERSON_ID" "PERSON_ID",
               "A2"."ASSIGNMENT_ID" "ASSIGNMENT_ID",
               "A2"."ASSIGNMENT_PERIOD_START" "ASSIGNMENT_PERIOD_START",
               "A2"."ASSIGNMENT_PERIOD_END" "ASSIGNMENT_PERIOD_END",
               "A2"."PROJECT_ID" "rowlimit_$_0",
               "A2"."PERSON_ID" "rowlimit_$_1",
               row_number() over(ORDER BY "A2"."PROJECT_ID", "A2"."PERSON_ID") "rowlimit_$$_rownumber"
        FROM   "DEMO5"."PROJECT_ASSIGNMENTS""A2") "A1"
WHERE  "A1"."rowlimit_$$_rownumber" <= CASE WHEN (100 >= 0) THEN floor(to_number(100)) ELSE 0 END + 4
AND    "A1"."rowlimit_$$_rownumber" > 100
ORDER  BY "A1"."rowlimit_$_0",
          "A1"."rowlimit_$_1"

For more examples like this, and more details about Row Limiting in general, see Write Less with More – Part 5.

Temporal Validity

Temporal Validity allows us to apply filtering based on a validity period (or range), either explicitly or implicitly.
Explicit filtering is done at the statement level. Implicit filtering is done by a session-level control.
We can see that both statement-level control and session-level control are implemented at the expansion stage:

alter table project_assignments 
       add PERIOD FOR assignment_period;

var x clob
begin
  dbms_utility.expand_sql_text(
    input_sql_text => '
      select person_id,
             project_id,
             assignment_period_start,
             assignment_period_end
      from   project_assignments
      AS OF PERIOD FOR assignment_period SYSDATE',
    output_sql_text => :x);
end;
/
 
print x

SELECT "A1"."PERSON_ID"               "PERSON_ID",
       "A1"."PROJECT_ID"              "PROJECT_ID",
       "A1"."ASSIGNMENT_PERIOD_START" "ASSIGNMENT_PERIOD_START",
       "A1"."ASSIGNMENT_PERIOD_END"   "ASSIGNMENT_PERIOD_END"
FROM   (SELECT "A2"."ASSIGNMENT_PERIOD_START" "ASSIGNMENT_PERIOD_START",
               "A2"."ASSIGNMENT_PERIOD_END"   "ASSIGNMENT_PERIOD_END",
               "A2"."ASSIGNMENT_ID"           "ASSIGNMENT_ID",
               "A2"."PERSON_ID"               "PERSON_ID",
               "A2"."PROJECT_ID"              "PROJECT_ID"
        FROM   "DEMO5"."PROJECT_ASSIGNMENTS" "A2"
        WHERE  ("A2"."ASSIGNMENT_PERIOD_START" IS NULL OR "A2"."ASSIGNMENT_PERIOD_START" <= SYSDATE)
        AND    ("A2"."ASSIGNMENT_PERIOD_END" IS NULL OR "A2"."ASSIGNMENT_PERIOD_END" > SYSDATE)) "A1"
> exec dbms_flashback_archive.enable_at_valid_time('CURRENT')

PL/SQL procedure successfully completed.

var x clob
begin
  dbms_utility.expand_sql_text(
    input_sql_text => '
      select person_id,
             project_id,
             assignment_period_start,
             assignment_period_end
      from   project_assignments',
    output_sql_text => :x);
end;
/
 
print x

SELECT "A1"."PERSON_ID"               "PERSON_ID",
       "A1"."PROJECT_ID"              "PROJECT_ID",
       "A1"."ASSIGNMENT_PERIOD_START" "ASSIGNMENT_PERIOD_START",
       "A1"."ASSIGNMENT_PERIOD_END"   "ASSIGNMENT_PERIOD_END"
FROM   (SELECT "A2"."ASSIGNMENT_PERIOD_START" "ASSIGNMENT_PERIOD_START",
               "A2"."ASSIGNMENT_PERIOD_END"   "ASSIGNMENT_PERIOD_END",
               "A2"."ASSIGNMENT_ID"           "ASSIGNMENT_ID",
               "A2"."PERSON_ID"               "PERSON_ID",
               "A2"."PROJECT_ID"              "PROJECT_ID"
        FROM   "DEMO5"."PROJECT_ASSIGNMENTS" "A2"
        WHERE  ("A2"."ASSIGNMENT_PERIOD_START" IS NULL OR "A2"."ASSIGNMENT_PERIOD_START" <= systimestamp(6))
        AND    ("A2"."ASSIGNMENT_PERIOD_END" IS NULL OR "A2"."ASSIGNMENT_PERIOD_END" > systimestamp(6))) "A1"

For more details about Temporal Validity, see Write Less with More – Part 4.

In-Database Archiving

Tables that are defined as ROW ARCHIVAL have the hidden column ORA_ARCHIVE_STATE. By default, when we select from such tables a hidden predicate is added automatically: ORA_ARCHIVE_STATE = ‘0’.
As shown in Write Less with More – Part 3:

drop table projects cascade constraints;
create table projects (
       project_id integer not null constraint projects_pk primary key,
       project_name varchar2(100) not null,
       status number(1) not null,
       last_days_to_show_in_reports integer not null
)
ROW ARCHIVAL;

insert into projects values (1,'Project A',1,2);
insert into projects values (2,'Project B',2,3);
insert into projects values (3,'Project C',1,4);
insert into projects values (4,'Project D',2,3);
commit;

> update projects set ORA_ARCHIVE_STATE='1' where project_id in (1,3);

2 rows updated.

> select * from projects; 

                                      LAST_DAYS_TO
PROJECT_ID PROJECT_NAME     STATUS SHOW_IN_REPORTS
---------- ------------ ---------- ---------------
         2 Project B             2               3
         4 Project D             2               3 

> select * from table(dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
SQL_ID  dcthaywgmzra7, child number 1
-------------------------------------
select * from projects

Plan hash value: 2188942312

------------------------------------------------------------------------------
| Id  | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |          |       |       |     3 (100)|          |
|*  1 |  TABLE ACCESS FULL| PROJECTS |     4 |  8372 |     3   (0)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("PROJECTS"."ORA_ARCHIVE_STATE"='0')

But when does Oracle add this predicate?

In this case, it’s not during expansion:

var x clob
begin
  dbms_utility.expand_sql_text(
    input_sql_text => 'select * from projects',
    output_sql_text => :x);
end;
/

PL/SQL procedure successfully completed.

print x

SELECT "A1"."PROJECT_ID"                   "PROJECT_ID",
       "A1"."PROJECT_NAME"                 "PROJECT_NAME",
       "A1"."STATUS"                       "STATUS",
       "A1"."LAST_DAYS_TO_SHOW_IN_REPORTS" "LAST_DAYS_TO_SHOW_IN_REPORTS"
FROM   "DEMO5"."PROJECTS" "A1"

A 10053 trace shows that the predicate is not added by the Query Transformer either (which I think is a good thing, as the Transformer should not change the meaning of the query):

.
.
.
******************************************
----- Current SQL Statement for this session (sql_id=dcthaywgmzra7) -----
select * from projects
*******************************************
.
.
.
=====================================
SPD: BEGIN context at statement level
=====================================
Stmt: ******* UNPARSED QUERY IS *******
SELECT "PROJECTS"."PROJECT_ID" "PROJECT_ID","PROJECTS"."PROJECT_NAME" "PROJECT_NAME","PROJECTS"."STATUS" "STATUS","PROJECTS"."LAST_DAYS_TO_SHOW_IN_REPORTS" "LAST_DAYS_TO_SHOW_IN_REPORTS" FROM "DEMO5"."PROJECTS" "PROJECTS" WHERE "PROJECTS"."ORA_ARCHIVE_STATE"='0'
Objects referenced in the statement
  PROJECTS[PROJECTS] 113224, type = 1
Objects in the hash table
  Hash table Object 113224, type = 1, ownerid = 8465150763180795273:
    No Dynamic Sampling Directives for the object
Return code in qosdInitDirCtx: ENBLD
===================================
SPD: END context at statement level
===================================
Final query after transformations:******* UNPARSED QUERY IS *******
SELECT "PROJECTS"."PROJECT_ID" "PROJECT_ID","PROJECTS"."PROJECT_NAME" "PROJECT_NAME","PROJECTS"."STATUS" "STATUS","PROJECTS"."LAST_DAYS_TO_SHOW_IN_REPORTS" "LAST_DAYS_TO_SHOW_IN_REPORTS" FROM "DEMO5"."PROJECTS" "PROJECTS" WHERE "PROJECTS"."ORA_ARCHIVE_STATE"='0'
kkoqbc: optimizing query block SEL$1 (#0)
.
.
.

A 10046 trace file contains no indication of ORA_ARCHIVE_STATE at all.

Comparing the 10053 trace files of the statement select * from projects between two executions – one with the default behavior, where the predicate is added, and one with “alter session set ROW ARCHIVAL VISIBILITY = ALL”, which returns all the records with no filtering on ORA_ARCHIVE_STATE – shows only one significant difference: under Compilation Environment Dump we see ilm_filter = 0 in the former and ilm_filter = 1 in the latter.
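
For reference, this is the session-level switch mentioned above; setting it back to ACTIVE restores the default filtering:

alter session set row archival visibility = all;
-- ... queries now see archived rows as well ...
alter session set row archival visibility = active;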

So the predicate on ORA_ARCHIVE_STATE is probably added by neither the Parser nor the Query Transformer. I don’t know who adds it and when, but it seems that it is not done in the “standard” way Oracle usually does such things. Perhaps if it had been done in the standard way, this bug (look at the “The Bad News” section) would not have happened.

When X+0 and X-0 are not the same

In the old days, when the Rule Based Optimizer (RBO) ruled, there was a very common technique to help the optimizer choose one plan over the other (if they had the same rank) by preventing the use of an index.

Look at the following query, where there are unique indexes on T1.ID and T2.ID and non-unique indexes on T1.A and T2.B:

select *
  from T1,T2
 where T1.ID = T2.ID
   and T1.A = :value1
   and T2.B = :value2;

One possible plan is to use nested loops, starting by accessing T1 via the index on A and then accessing T2 via the index on ID.
A second possible plan is to use nested loops, starting by accessing T2 via the index on B and then accessing T1 via the index on ID.

For the RBO these two plans were similar. If we wanted to convince the RBO to choose the second plan, a common trick was to prevent the use of the index on T1.A by adding “+0” to the condition:

select *
  from T1,T2
 where T1.ID = T2.ID
   and T1.A + 0 = :value1
   and T2.B = :value2;

What happens if we try using the same technique today, with the Cost Based Optimizer (CBO)?
Not that I recommend it!
It’s certainly not the way to influence the CBO, but I admit that I recently used it myself (it was in some weird constrained situation, please don’t ask…) and something surprised me. I’ll show you what surprised me in this post.

But first…
What is the cardinality of “column = some_value” (i.e., how many rows will be returned by this condition)? When the CBO doesn’t have histograms, its best guess is the number of rows in the table divided by the number of distinct values in the column.
And what is the cardinality of “some_function_of_column = some_value”? Without extended statistics or virtual columns, the CBO simply guesses that the cardinality is 1% of the table.
Keep this in mind.

I’ll start by crafting an example:

-- 10000 rows in T1
-- 5000 distinct values in A
-- A=0 in 5000 rows
create table t1 as 
  select rownum id,
         case when rownum<5000 then rownum else 0 end a, 
         lpad('x',100,'x') x
    from dual
 connect by level<=10000;

alter table t1 
  add constraint t1_pk primary key (id);

create index t1_i1 on t1 (a);

-- 1000 rows in T2
-- 21 distinct values in B
-- B=0 in 1 row only
create table t2 as 
  select rownum id,
         case when rownum<1000 then ceil(rownum/50) else 0 end b,
         lpad('x',10,'x') x 
    from dual 
 connect by level<=1000;

alter table t2 
  add constraint t2_pk primary key (id);

create index t2_i1 on t2 (b);

begin
  dbms_stats.gather_table_stats(user,'T1',method_opt =>'for all columns size 1',cascade => true);
  dbms_stats.gather_table_stats(user,'T2',method_opt =>'for all columns size 1',cascade => true);
end;
/

Note that I gathered statistics, but without histograms.
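
Just to see the numbers the optimizer will work with, we can look at the dictionary (for T1.A the guess will be NUM_ROWS / NUM_DISTINCT = 10000 / 5000 = 2 rows):

> select table_name, num_rows
  from user_tables
  where table_name in ('T1','T2');

> select table_name, column_name, num_distinct
  from user_tab_col_statistics
  where table_name in ('T1','T2')
  and column_name in ('A','B');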

Now let’s run the following query:

> set autotrace on

> select *
    from T1,T2
   where T1.ID = T2.ID
     and T1.A = 0
     and T2.B = 0;

no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 1001848667

---------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |     2 |   252 |     3   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                 |       |     2 |   252 |     3   (0)| 00:00:01 |
|   2 |   NESTED LOOPS                |       |     2 |   252 |     3   (0)| 00:00:01 |
|   3 |    TABLE ACCESS BY INDEX ROWID| T1    |     2 |   216 |     2   (0)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN          | T1_I1 |     2 |       |     1   (0)| 00:00:01 |
|*  5 |    INDEX UNIQUE SCAN          | T2_PK |     1 |       |     0   (0)| 00:00:01 |
|*  6 |   TABLE ACCESS BY INDEX ROWID | T2    |     1 |    18 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("T1"."A"=0)
   5 - access("T1"."ID"="T2"."ID")
   6 - filter("T2"."B"=0)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         99  consistent gets
          0  physical reads
          0  redo size
        646  bytes sent via SQL*Net to client
        513  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

Without histograms the optimizer assumes uniform distribution, so the estimated cardinality of “T1.A=0” is 2 rows (=10000 rows in the table / 5000 distinct values in A), as we can see in the execution plan. And the estimated cardinality of “T2.B=0” is about 48 rows (=1000 rows in the table / 21 distinct values).
But we know that the data is skewed. The actual cardinality of “T1.A=0” is 5000 rows (there are 5000 rows out of 10000 with the value 0 in T1.A), and the actual cardinality of “T2.B=0” is 1 row. So it will be much better in this case to start the plan by accessing the index on T2.B.
The sensible way would be to gather histograms on the columns with the skewed data, to use the cardinality hint, or to take some other “modern” approach (a sketch of that appears at the end of this post). But what will happen if we use the old trick of preventing the use of the index on T1.A by adding “+0”? Let’s see:

> select *
    from T1,T2
   where T1.ID = T2.ID
     and T1.A + 0 = 0
     and T2.B = 0;

no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 1837274416

--------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |     2 |   252 |    49   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                |       |     2 |   252 |    49   (0)| 00:00:01 |
|   2 |   NESTED LOOPS               |       |     2 |   252 |    49   (0)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL         | T1    |     2 |   216 |    48   (0)| 00:00:01 |
|*  4 |    INDEX UNIQUE SCAN         | T2_PK |     1 |       |     0   (0)| 00:00:01 |
|*  5 |   TABLE ACCESS BY INDEX ROWID| T2    |     1 |    18 |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("T1"."A"+0=0)
   4 - access("T1"."ID"="T2"."ID")
   5 - filter("T2"."B"=0)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        170  consistent gets
          0  physical reads
          0  redo size
        646  bytes sent via SQL*Net to client
        513  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

Hurray! We succeeded in preventing the use of the index on T1.A. But, wait, the cardinality estimate has not changed – it is still 2. I would expect it to be 100 rows – 1% of 10000 – the usual guess of the CBO for “function_of_some_column = some_value”.
And because of this estimate, the plan still starts from T1, but this time with a full table scan (as we neutralized the index use). And this is a worse plan than the first one – note the 170 consistent gets compared to 99 before.

So the CBO is smart enough to realize that A+0 = A.

What about A-0 or A*1 ? Let’s try:

> select *
    from T1,T2
   where T1.ID = T2.ID
     and T1.A - 0 = 0
     and T2.B = 0;

no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 1272512196

---------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |    48 |  6048 |    50   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                 |       |    48 |  6048 |    50   (0)| 00:00:01 |
|   2 |   NESTED LOOPS                |       |    48 |  6048 |    50   (0)| 00:00:01 |
|   3 |    TABLE ACCESS BY INDEX ROWID| T2    |    48 |   864 |     2   (0)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN          | T2_I1 |    48 |       |     1   (0)| 00:00:01 |
|*  5 |    INDEX UNIQUE SCAN          | T1_PK |     1 |       |     0   (0)| 00:00:01 |
|*  6 |   TABLE ACCESS BY INDEX ROWID | T1    |     1 |   108 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("T2"."B"=0)
   5 - access("T1"."ID"="T2"."ID")
   6 - filter("T1"."A"-0=0)

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          6  consistent gets
          0  physical reads
          0  redo size
        646  bytes sent via SQL*Net to client
        513  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

We changed “A+0” to “A-0” and got a totally different execution plan! Unlike with A+0, the CBO does not realize that A-0=A. It treats A-0 like any other function, and guesses that the cardinality of “A-0=0” is 1% of T1, which is 100 rows.

Remember that the example I used has no real significance. I just tried to mimic an old technique that was relevant for the RBO and is not appropriate for the CBO.

The important part is that the CBO gives special treatment to COLUMN + 0 when it calculates cardinality. We can see it more explicitly from the following execution plans:

> select * from T1 where A = :val;

Execution Plan
----------------------------------------------------------
Plan hash value: 1429545322

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     2 |   216 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     2 |   216 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_I1 |     2 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("A"=TO_NUMBER(:VAL))
> select * from T1 where A + 0 = :val;

Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     2 |   216 |    48   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     2 |   216 |    48   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("A"+0=TO_NUMBER(:VAL))
> select * from T1 where A - 0 = :val;

Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |   100 | 10800 |    48   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |   100 | 10800 |    48   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("A"-0=TO_NUMBER(:VAL))
> select * from T1 where A * 1 = :val;

Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |   100 | 10800 |    48   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |   100 | 10800 |    48   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("A"*1=TO_NUMBER(:VAL))

One last check: what happens if, instead of the literal 0, we use a bind variable, and bind 0 at the first hard parse (with bind peeking enabled)? Let’s see:

> var zero number
> exec :zero := 0
> select * from T1 where A + :zero = 1;
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |    48 (100)|          |
|*  1 |  TABLE ACCESS FULL| T1   |   100 | 10800 |    48   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("A"+:ZERO=1)

So the special treatment happens only with 0 as a literal, not as a bind variable.
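
And here is the sketch of the “sensible way” promised earlier – gathering histograms on the skewed columns. It is shown as a sketch only (the demos above were deliberately run without it):

begin
  dbms_stats.gather_table_stats(user,'T1',method_opt => 'for columns a size 254',cascade => true);
  dbms_stats.gather_table_stats(user,'T2',method_opt => 'for columns b size 254',cascade => true);
end;
/

With histograms in place, the CBO should estimate about 5000 rows for “T1.A=0” and 1 row for “T2.B=0”, and start the nested loops from T2 without any need for tricks.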

Do you know of other cases like this one?

Note: I executed all the examples in 11.2.0.4 and in 12.1.0.2.

Optimizer bug leads to wrong results

A few days ago a query that I wrote did not return the expected results. After some investigation I came to the conclusion that the reason is an optimizer bug. Here is a very simplified example (quite silly functionality-wise, but I guarantee you that the original, more complex, query does make sense).
I tried it with Oracle 11.2.0.2, 11.2.0.4 and 12.1.0.2, and got the same (wrong) behavior in all three cases.

Update [August 2, 2017]
I tested it with 12.2.0.1, and it’s working fine!

The query is based on the following table:

create table t(
  id   number not null constraint t_pk primary key,
  num1 number not null
);

Let’s fill it with some records:

insert into t values (1,74);
insert into t values (2,96);
insert into t values (3,41);

We start with this query (that works just fine):

select (select max(level) from dual connect by level <= t.num1) 
from   t
where  t.id = :id;

For our sample data we would expect the query to return 74 for :id=1, 96 for :id=2 and 41 for :id=3, and indeed these are the results that we get.
This is the execution plan:

--------------------------------------------------------------------------------------
| Id  | Operation                     | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |      |     1 |    26 |     3   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE               |      |     1 |       |            |          |
|*  2 |   CONNECT BY WITHOUT FILTERING|      |       |       |            |          |
|   3 |    FAST DUAL                  |      |     1 |       |     2   (0)| 00:00:01 |
|   4 |  TABLE ACCESS BY INDEX ROWID  | T    |     1 |    26 |     1   (0)| 00:00:01 |
|*  5 |   INDEX UNIQUE SCAN           | T_PK |     1 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(LEVEL<=:B1)
   5 - access("T"."ID"=TO_NUMBER(:ID))

Now, I will only add an innocent inline view to the query:

select (select max(level) from dual connect by level <= t.num1) 
from   t,
       (select distinct 1 from dual)
where  t.id = :id;

Obviously the existence of the inline view – (select distinct 1 from dual) – should make no difference. It returns a single record, we don't use its result set, and we have no join conditions in the query (so we actually have a Cartesian product between 1 record and [0 or 1] records).

But now the results are wrong. The query returns 1 in all cases, and the reason is that the scalar subquery – (select max(level) from dual connect by level <= t.num1) – is always executed with t.num1 = null.
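
By the way, this explains why the result is 1 rather than NULL: in a CONNECT BY query the root row is returned regardless of the condition, so max(level) is 1 even when the bound is NULL. A quick way to verify (on my system this returns 1):

select max(level) from dual connect by level <= null;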

Let's look at the execution plan for this case:

-------------------------------------------------------------------------------------------
| Id  | Operation                     | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |           |     1 |    13 |     6  (17)| 00:00:01 |
|   1 |  SORT AGGREGATE               |           |     1 |       |            |          |
|*  2 |   CONNECT BY WITHOUT FILTERING|           |       |       |            |          |
|   3 |    FAST DUAL                  |           |     1 |       |     2   (0)| 00:00:01 |
|   4 |  VIEW                         | VM_NWVW_0 |     1 |    13 |     4  (25)| 00:00:01 |
|   5 |   NESTED LOOPS                |           |     1 |    38 |     3   (0)| 00:00:01 |
|   6 |    FAST DUAL                  |           |     1 |       |     2   (0)| 00:00:01 |
|*  7 |    INDEX UNIQUE SCAN          | T_PK      |     1 |       |     0   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(LEVEL<=:B1)
   7 - access("T"."ID"=TO_NUMBER(:ID))

For some reason, the optimizer decided that it is unnecessary to go to the table T. Oracle accesses only the index T_PK, which contains only the ID column. With this execution plan it is impossible to get the value of the NUM1 column, and indeed the results show that NUM1 is allegedly NULL.

Step 4 in the execution plan refers to VM_NWVW_0, which is usually related to Complex View Merging. So let's try to disable view merging using the NO_MERGE hint:

select (select max(level) from dual connect by level <= t.num1) 
from   t,
       (select /*+ NO_MERGE */ distinct 1 from dual)
where  t.id = :id;

And indeed we get the correct results, with this execution plan:

--------------------------------------------------------------------------------------
| Id  | Operation                     | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |      |     1 |    26 |     6  (17)| 00:00:01 |
|   1 |  SORT AGGREGATE               |      |     1 |       |            |          |
|*  2 |   CONNECT BY WITHOUT FILTERING|      |       |       |            |          |
|   3 |    FAST DUAL                  |      |     1 |       |     2   (0)| 00:00:01 |
|   4 |  NESTED LOOPS                 |      |     1 |    26 |     4  (25)| 00:00:01 |
|   5 |   TABLE ACCESS BY INDEX ROWID | T    |     1 |    26 |     1   (0)| 00:00:01 |
|*  6 |    INDEX UNIQUE SCAN          | T_PK |     1 |       |     1   (0)| 00:00:01 |
|   7 |   VIEW                        |      |     1 |       |     3  (34)| 00:00:01 |
|   8 |    FAST DUAL                  |      |     1 |       |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(LEVEL<=:B1)
   6 - access("T"."ID"=TO_NUMBER(:ID))

Also, if we remove the DISTINCT from the inline view, there will be no complex view merging anymore, and the execution plan (and therefore the results) is fine:

select (select max(level) from dual connect by level <= t.num1) 
from   t,
       (select 1 from dual)
where  t.id = :id;

--------------------------------------------------------------------------------------
| Id  | Operation                     | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |      |     1 |    26 |     5   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE               |      |     1 |       |            |          |
|*  2 |   CONNECT BY WITHOUT FILTERING|      |       |       |            |          |
|   3 |    FAST DUAL                  |      |     1 |       |     2   (0)| 00:00:01 |
|   4 |  NESTED LOOPS                 |      |     1 |    26 |     3   (0)| 00:00:01 |
|   5 |   FAST DUAL                   |      |     1 |       |     2   (0)| 00:00:01 |
|   6 |   TABLE ACCESS BY INDEX ROWID | T    |     1 |    26 |     1   (0)| 00:00:01 |
|*  7 |    INDEX UNIQUE SCAN          | T_PK |     1 |       |     0   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(LEVEL<=:B1)
   7 - access("T"."ID"=TO_NUMBER(:ID))

(However, in my original query the DISTINCT was required...)

You are most welcome to comment or to ask questions on this page, or to write to me at oren@db-oriented.com.