Write (Even) Less with More – VALIDATE_CONVERSION

I wrote the post Write Less with More – Part 8 – PL/SQL in the WITH Clause in November 2015, when the latest released Oracle version was 12.1.
In that post I explained PL/SQL in the WITH Clause – a new 12.1 feature – and demonstrated it using the following example:

with
  function is_date(i_info in varchar2) return number as
    l_date date;
  begin
    if i_info is null then
      return 0;
    else
      l_date := to_date(i_info, 'dd/mm/yyyy');
      return 1;
    end if;
  exception
    when others then
      return 0;
  end;
select p.*
from   people p
where  is_date(p.general_info) = 1;
/

Since then Oracle 12.2 has been released, introducing a new feature that solves this task in a simpler way – the VALIDATE_CONVERSION function. This function takes an expression and a data type (and optionally a format model), and returns 1 if the expression can be converted to that data type and 0 if it cannot.
Using the same setup from the original post, the requested query becomes as simple as:

> select *
  from   people
  where  general_info is not null
  and    validate_conversion(general_info as date, 'dd/mm/yyyy') = 1;

PERSON_ID FIRST_NAME LAST_NAME       GENERAL_INFO
---------- ---------- --------------- --------------------
       102 Paul       McCartney       18/6/1942
       202 Ella       Fitzgerald      15/6/1996
       203 Etta       James           20/1/2012
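
VALIDATE_CONVERSION is not limited to dates – it accepts any convertible data type. A minimal sketch of my own (not from the original post):

> select validate_conversion('12345' as number) good_number,
         validate_conversion('123abc' as number) bad_number
  from   dual;

This returns 1 and 0, respectively.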

In addition to introducing the new VALIDATE_CONVERSION function, Oracle 12.2 enhanced CAST and some of the TO_* conversion functions with a DEFAULT ON CONVERSION ERROR clause, so when a data type conversion fails we can get a default value instead of an error.

> select p.person_id,
         p.first_name,
         p.last_name,
         to_date(p.general_info default null on conversion error, 'dd/mm/yyyy') my_date
  from   people p;

 PERSON_ID FIRST_NAME LAST_NAME       MY_DATE
---------- ---------- --------------- ----------
       101 John       Lennon
       102 Paul       McCartney       18/06/1942
       103 Ringo      Starr
       104 George     Harisson
       201 Louis      Armstrong
       202 Ella       Fitzgerald      15/06/1996
       203 Etta       James           20/01/2012
       317 Julie      Andrews

8 rows selected.
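
The DEFAULT ON CONVERSION ERROR clause works with CAST as well. A minimal sketch of my own (not from the original post), returning a sentinel date instead of NULL:

> select p.person_id,
         cast(p.general_info as date default date '1900-01-01' on conversion error, 'dd/mm/yyyy') my_date
  from   people p;

Rows whose GENERAL_INFO cannot be converted to a date get 01/01/1900 instead of raising an error (NULL values simply remain NULL).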

PL/SQL in SQL in View in SQL in PL/SQL

I presented “Write Less (Code) With More (Oracle 12c New Features)” yesterday at OGh Tech Experience 2017.
One of the features I talked about was PL/SQL in the WITH Clause. One of the restrictions of this feature is that you cannot embed in PL/SQL a static SQL query that contains PL/SQL in the WITH clause (see the section PL/SQL in SQL in PL/SQL in this post).
I was asked, regarding this restriction, whether it’s possible to embed in PL/SQL a static “regular” SQL query that selects from a view that contains PL/SQL in the WITH clause. The answer is yes, since the restriction is only syntactic.

Using the same example from the original post:

create view people_with_dates_v as      
with 
  function is_date(i_info in varchar2) return number as
    l_date date;
  begin
    if i_info is null then
      return 0;
    else
      l_date := to_date(i_info, 'dd/mm/yyyy');
      return 1;
    end if;
  exception
    when others then
      return 0;
  end;
select p.*
from   people p
where  is_date(p.general_info) = 1;
/

View created.

create or replace procedure show_date_people as
begin
    for l_rec in (
      select * from people_with_dates_v
      )
    loop
        dbms_output.put_line(l_rec.person_id || ': ' || l_rec.first_name || ' ' || l_rec.last_name);
    end loop;
end show_date_people;
/

Procedure created.

> exec show_date_people
102: Paul McCartney
202: Ella Fitzgerald
203: Etta James

PL/SQL procedure successfully completed.

RETURNING INTO

The RETURNING INTO clause is one of my favorite PL/SQL features. It allows us to write less code, improves readability, and reduces context switches between PL/SQL and SQL.
In this post I’d like to highlight some lesser-known characteristics of the RETURNING INTO clause and emphasize differences that exist when it is used in different DML statements.

Supported Statements

The RETURNING INTO clause is supported by the UPDATE, DELETE, and single-table single-row (“values-based”) INSERT statements.
It is not supported by subquery-based INSERT, multi-table INSERT, and MERGE. Some additional restrictions are documented.
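
A minimal sketch of my own (using the table T that is created in the next section) showing the supported and unsupported forms:

declare
  l_id number;
begin
  -- supported: single-table, single-row, values-based INSERT
  insert into t (data) values (10) returning id into l_id;
  -- not supported: subquery-based INSERT (rejected with a syntax error)
  -- insert into t (data) select 20 from dual returning id into l_id;
  rollback;
end;
/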

What is returned?

The RETURNING INTO clause returns data from the rows that have been affected by the DML statement.
For INSERT it returns the after-insert values of the new row’s columns.
For UPDATE it returns the after-update values of the affected rows’ columns.
For DELETE it returns the before-delete values of the affected rows’ columns.

> create table t (
    id number generated as identity,
    data number
  );

Table created.

> declare
    ret1 number;
    ret2 number;
  begin
    insert into t (data) values (10) returning id,data into ret1,ret2;
    dbms_output.put_line('id='||ret1||' ; data='||ret2);
    insert into t (data) values (20) returning id,data into ret1,ret2;
    dbms_output.put_line('id='||ret1||' ; data='||ret2);
    insert into t (data) values (30) returning id,data into ret1,ret2;
    dbms_output.put_line('id='||ret1||' ; data='||ret2);
    commit;
  end;
  /

id=1 ; data=10
id=2 ; data=20
id=3 ; data=30

PL/SQL procedure successfully completed.

> declare
    ret number;
  begin
    update t set data=111 where id=1 returning data into ret;
    dbms_output.put_line('post-update value = '||ret);
    delete t where id=2 returning data into ret;
    dbms_output.put_line('pre-delete value = '||ret);
    rollback;
  end;
  /

post-update value = 111
pre-delete value = 20

PL/SQL procedure successfully completed.

Expressions

The returned values are not limited to simple column names. They can be virtually any expression you could place in the select list of a SELECT statement against the affected table.

> declare
    ret1 number;
    ret2 varchar2(10);
  begin
    update t
    set data = round(dbms_random.value(1,100))
    where id = 1
    returning data, decode(mod(data,2),0,'even','odd')
    into ret1, ret2;
 
    dbms_output.put_line('the new value is '||ret1||', an '||ret2||' number');
    rollback;
  end;
  /

the new value is 19, an odd number

PL/SQL procedure successfully completed.

Warning: you can even include in the returned expression a scalar subquery against the affected table. This is probably not a very good idea, especially with INSERT and UPDATE, which return the post-DML values, while the scalar subquery obeys the statement-level read consistency rule – it returns the data as it was before the DML statement ran. So it may lead to confusing, inconsistent results:

> declare
    ret1 number;
    ret2 number;
  begin
    update t
    set data = 29081969
    where id = 1
    returning data, (select max(data) from t)
    into ret1, ret2;

    dbms_output.put_line('just set DATA of a row to ' || ret1 ||
                         ' and the maximum value is (no! *was*) ' || ret2);
    rollback;
  end;
  /

just set DATA of a row to 29081969 and the maximum value is (no! *was*) 30

PL/SQL procedure successfully completed.

BULK COLLECT

The returned values can be placed into collection variables, using RETURNING with the BULK COLLECT INTO clause. This is very useful for UPDATE and DELETE statements, which may affect multiple rows in a single statement (though this syntax is supported also for the INSERT statement – perhaps as a preparation for supporting RETURNING in multi-row INSERTs in the future?).

> declare
    type ntt is table of number;
    ret1 ntt;
    ret2 ntt;
  begin
    update t
    set data = data * 2
    returning id, data
    bulk collect into ret1, ret2;
 
    for i in 1..ret1.count loop
      dbms_output.put_line('id='||ret1(i)||' ; data='||ret2(i));
    end loop;
 
    rollback;
  end;
  /

id=1 ; data=20
id=2 ; data=40
id=3 ; data=60

PL/SQL procedure successfully completed.

Aggregation

For UPDATE and DELETE the returned expression can be an aggregate function. The aggregation is performed on the affected rows, and (since the aggregated value is scalar) BULK COLLECT is not needed when using this option. This is not supported for INSERT.

> declare
    ret1 number;
    ret2 number;
    ret3 varchar2(100);
  begin
    update t
    set data = data * 2
    where id < 3
    returning sum(data),
              avg(data),
              listagg(id||':'||data,',') within group (order by id)
    into ret1, ret2, ret3;
 
    dbms_output.put_line('sum(data) of updated rows = ' || ret1);
    dbms_output.put_line('avg(data) of updated rows = ' || ret2);
    dbms_output.put_line('new values = ' || ret3);
 
    rollback;
  end;
  /

sum(data) of updated rows = 60
avg(data) of updated rows = 30
new values = 1:20,2:40

PL/SQL procedure successfully completed.

RETURNING or RETURN?

One last anecdote: the RETURN keyword can be used instead of RETURNING. I prefer RETURNING.
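
A minimal sketch of my own (reusing the table T from above):

> declare
    ret number;
  begin
    delete t where id=3 return data into ret;  -- RETURN instead of RETURNING
    dbms_output.put_line('pre-delete value = '||ret);
    rollback;
  end;
  /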

What about MERGE?

Like many others, I wish the MERGE statement supported the RETURNING INTO clause as well.
However, since MERGE is a combination of INSERT, UPDATE and DELETE, and given the differences in the way RETURNING works with these 3 DML statements (as listed above), supporting RETURNING INTO in MERGE is clearly not a simple task.
It may also lead to confusing situations – think for example about a MERGE statement that includes the UPDATE SET and DELETE WHERE clauses – what will RETURNING return for rows affected by the DELETE WHERE clause? The pre-MERGE value, the post-UPDATE-pre-DELETE value, or perhaps simply 42?

Adding a Column with a Default Value and a Constraint

In the previous parts of this series I showed that Oracle does a nice optimization – one that may save plenty of time – when a single ALTER TABLE statement adds a new nullable column with no default value together with a corresponding:
– inline (a.k.a. “column-level”) check constraint
or
– foreign key constraint (either inline or out-of-line)

In these cases Oracle enables the constraint (for future DML statements) and marks the constraint as VALIDATED (stating that existing records respect the constraint) without actually performing any check – counting on the fact that all the existing records have NULL in the new column. This is a beautiful optimization – if the table already contains many records, then skipping the (unnecessary) validation phase can save lots of time (during which the table is locked in a highly restrictive mode [I’ll discuss locking in the next part of this series]).
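
We can verify this with a quick sketch of my own (reusing the T and C_CHK names from the earlier posts of the series) – immediately after such an ALTER TABLE, the constraint already appears as ENABLED and VALIDATED, although no rows were scanned:

> alter table t add (c number constraint c_chk check (c > 0));

> select constraint_name, status, validated
  from   user_constraints
  where  constraint_name = 'C_CHK';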

What about adding a new column with a (constant) default value?

Since all the existing records will have the same value in the new column (the default value), the validation phase can be very simple and short: Oracle should only check that this single value respects the constraint; there is no reason to visit each and every record in the table and repeat the same validation over and over again. Unfortunately, this kind of optimization is not done.

I added this suggestion to the OTN Database Ideas section: http://community.oracle.com/ideas/17751. Please vote up if you think it’s a good idea.

Such an optimization would be particularly valuable in the latest releases of Oracle, because as of 11g adding a NOT NULL column with a default value is a metadata-only operation, and as of 12c the same is true also for adding a nullable column with a default value.

Here is an example with a check constraint:

> alter table t add (c number default 8 not null constraint c_chk check(c>0)); 

We can see in the SQL trace file the following:

=====================
PARSING IN CURSOR #407309168 len=110 dep=1 uid=194 oct=3 lid=0 tim=745434447778 hv=3957567910 ad='7ffb3434b250' sqlid='2z018fgpy7cd6'
 select /*+ all_rows ordered dynamic_sampling(2) */ A.rowid, :1, :2, :3 from "DEMO"."T" A where not ( c>0)
END OF STMT
.
.
.
FETCH #407309168:c=781250,e=3187349,p=142858,cr=142882,cu=0,mis=0,r=0,dep=1,og=1,plh=1601196873,tim=745437636908
STAT #407309168 id=1 cnt=0 pid=0 pos=1 obj=99064 op='TABLE ACCESS FULL T (cr=142882 pr=142858 pw=0 time=3187348 us cost=38895 size=1250000 card=50000)'
CLOSE #407309168:c=0,e=3,dep=1,type=0,tim=745437636980

And here is an example with a foreign key constraint:

> alter table c add (
  parent_id number default 1 not null constraint c_fk_p references p(id)
  );
Again, we can see the validation query in the SQL trace file:

PARSING IN CURSOR #386313832 len=207 dep=1 uid=0 oct=3 lid=0 tim=747182328580 hv=3249972061 ad='7ffb09624948' sqlid='83w6f8v0vd8ux'
 select /*+ all_rows ordered dynamic_sampling(2) */ A.rowid, :1, :2, :3 
 from "DEMO"."C" A , "DEMO"."P" B 
 where( "A"."PARENT_ID" is not null) and( "B"."ID" (+)= "A"."PARENT_ID") and( "B"."ID" is null)
END OF STMT
.
.
.
FETCH #386313832:c=906250,e=3396711,p=142857,cr=142883,cu=0,mis=0,r=0,dep=1,og=1,plh=1351743368,tim=747185726213
STAT #386313832 id=1 cnt=0 pid=0 pos=1 obj=0 op='HASH JOIN ANTI (cr=142883 pr=142857 pw=0 time=3396711 us cost=40650 size=38000000 card=1000000)'
STAT #386313832 id=2 cnt=1000000 pid=1 pos=1 obj=98387 op='TABLE ACCESS FULL C (cr=142882 pr=142857 pw=0 time=616429 us cost=38895 size=25000000 card=1000000)'
STAT #386313832 id=3 cnt=1 pid=1 pos=2 obj=98386 op='INDEX FULL SCAN P_PK (cr=1 pr=0 pw=0 time=8 us cost=0 size=13 card=1)'
CLOSE #386313832:c=0,e=3,dep=1,type=0,tim=747185726320

COLLECT DISTINCT in PL/SQL Works in Oracle 12.2

About a year ago I wrote the post Subtleties – Part 1 (SQL and PL/SQL), in which I wrote:

Almost every valid SQL statement (i.e., that is executed successfully by the SQL engine) can be embedded successfully as a static SQL in PL/SQL. Almost, but not every statement.
One example is the COLLECT aggregate function with the DISTINCT option.

And I showed an example that was executed in 11.2.0.4 and in 12.1.0.2.

Today I tried it in 12.2.0.1, and I was very pleased to see that COLLECT DISTINCT now works in PL/SQL as well.
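
A minimal sketch of my own of the kind of statement that now compiles (the exact example in the original post may differ; assuming the PEOPLE table from the posts above):

> declare
    l_names sys.odcivarchar2list;
  begin
    select cast(collect(distinct p.last_name) as sys.odcivarchar2list)
    into   l_names
    from   people p;
    dbms_output.put_line(l_names.count || ' distinct last names');
  end;
  /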

(Lack of) Optimization of Unique Constraint Creation

In the previous parts of this series I showed that Oracle does a nice optimization – one that may save plenty of time – when a single ALTER TABLE statement adds a new nullable column with no default value together with a corresponding:
– inline (a.k.a. “column-level”) check constraint
or
– foreign key constraint (either inline or out-of-line)

What about unique constraints?

When we create a unique constraint on some column, Oracle creates by default a corresponding unique B*Tree index on that column, in order to enforce the constraint.
When we add a new column with no default value to a non-empty table, and in the same ALTER TABLE statement we create a unique constraint on that column, then the corresponding index will be created, by definition, with no entries (as B*Tree indexes do not contain entirely-NULL keys). It would be nice if Oracle simply created an empty index in this case, without scanning all the table rows, but unfortunately this kind of optimization is not done.
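
For reference, a sketch of my own of the kind of statement discussed (T as in the previous posts of the series) – its validation phase still scans the entire table:

> alter table t add (c number, constraint c_uk unique (c));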

Optimization of Foreign Key Constraint Creation

In a recent post I showed that Oracle does a nice optimization when we add a new nullable column with no default value together with an inline (a.k.a. “column-level”) check constraint in a single ALTER TABLE statement. This optimization does not apply to out-of-line (“table-level”) check constraints.

So, what about foreign key constraints?

Clearly, when we add a new nullable column with no default value to a table that contains records, the new column, by definition, has NULL in all the existing records. If in the same ALTER TABLE statement we also create a foreign key constraint on the new column, then, by definition, the constraint is valid for all the existing records, because NULLs always pass the foreign key check. So in this case it is unnecessary to access each and every record of the table during the validation phase of the foreign key constraint creation, and indeed Oracle applies this optimization to both inline and out-of-line foreign key constraints. Let’s see it, starting with a quick sketch of the two forms below.
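
A minimal sketch of my own of the two forms (assuming a parent table P with primary key ID and a child table C, as in this series’ examples):

> alter table c add (parent_id number constraint c_fk_p references p(id));

or, with an out-of-line constraint:

> alter table c add (parent_id number, constraint c_fk_p foreign key (parent_id) references p(id));

In both cases the constraint is created ENABLED and VALIDATED without visiting the existing rows of C.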

Optimization that Violates Data Integrity

In the previous post I showed that Oracle does a nice optimization when we add a new (nullable with no default value) column with an inline check constraint in a single ALTER TABLE statement.
However, there is one case where this optimization allows a data integrity violation instead of forbidding it (which makes it, in this specific case, a bug rather than an optimization). It happens when the check constraint is “column IS NOT NULL”.

I’ll set up a table with a few records to demonstrate this:

SQL> create table t (x number);

Table created.

SQL> insert into t select rownum from dual connect by level<=10;

10 rows created.

SQL> commit;

Commit complete.

First, let's try to add a column with an out-of-line constraint, so the optimization will not apply:

SQL>
SQL> ALTER TABLE T ADD (
  2    C NUMBER,
  3    CONSTRAINT C_CHK CHECK (C IS NOT NULL)
  4  );
  CONSTRAINT C_CHK CHECK (C IS NOT NULL)
             *
ERROR at line 3:
ORA-02293: cannot validate (TRANZMATE_PRODUCTION.C_CHK) - check constraint violated

It failed, and that's good.

Now, let's try to add the same column with the same check constraint, but this time we'll use an inline constraint:

SQL> ALTER TABLE T ADD (
  2    C NUMBER CONSTRAINT C_CHK CHECK (C IS NOT NULL)
  3  );

Table altered.

And it succeeded, although it should have failed.
So now we have a validated constraint, stating that C IS NOT NULL, and at the same time the table contains records, and in all of them C IS NULL:

SQL> select table_name,search_condition,status,validated,rely
  2  from user_constraints where constraint_name='C_CHK';

TABLE_NAME SEARCH_CONDITION     STATUS     VALIDATED  RELY
---------- -------------------- ---------- ---------- ------------
T          C IS NOT NULL        ENABLED    VALIDATED

SQL> select * from t;

         X          C
---------- ----------
         1
         2
         3
         4
         5
         6
         7
         8
         9
        10

10 rows selected.

Since we're talking about adding a nullable column with no default value to a table that already contains rows, specifying a check constraint stating that this column is not nullable doesn't make much sense, and if someone does it, it's probably their bug. Still, it would be nice to get an error in this case, rather than violating the data integrity.

Why does it happen?

When Oracle validates a check constraint, it looks for records for which the constraint condition is FALSE. When the column is NULL, the condition usually evaluates to NULL, not to FALSE, so the constraint is considered valid. For example, when C is NULL, "C>0" evaluates to NULL (neither FALSE nor TRUE).
I guess this is why Oracle skips the validation step completely as part of this optimization.
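
We can see this three-valued logic with a sketch of my own, using the validation predicate itself (the same shape as the recursive validation query shown in the first post of this series):

SQL> select count(*) violating_rows from t where not (c > 0);

C is NULL in every row, so "not (c > 0)" evaluates to NULL rather than TRUE, no row qualifies, and the count is 0 – the validation finds nothing to complain about.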

What should happen?

It's good that Oracle doesn't validate each and every record, as it's really unnecessary, but to make this optimization foolproof it should validate a single value – NULL.

Optimization of Check Constraint Creation

I have a table T with many records and I want to add a new column C to this table with some check constraint on C.

Does it matter if I use the following statement

ALTER TABLE T ADD (C NUMBER, CONSTRAINT C_CHK CHECK (C>0));

or this one

ALTER TABLE T ADD (C NUMBER CONSTRAINT C_CHK CHECK (C>0));

?
Note that the only difference is the comma that appears in the first option and not in the second one.

It certainly matters.
In the first option the constraint is an out-of-line (“table-level”) check constraint, while in the second option it is an inline (“column-level”) one.
With an out-of-line check constraint, Oracle scans the table and validates that each and every record satisfies the constraint. This is similar to adding the column in one statement, and then adding the constraint in a second statement.
But when we add the column with an inline check constraint in a single statement, Oracle optimizes this process: since the added column is NULL in all the existing records, it is actually unnecessary to access the records in order to perform the validation.

We can see a small example (tested in 11.2.0.4, 12.1.0.2, 12.2.0.1). First, adding the column with the out-of-line check constraint:

> create table t (x varchar2(1000));

Table created.

> insert /*+ append */ into t
> select lpad(rownum,1000,'x') from dual connect by level<=1e6;

1000000 rows created.

> commit;

Commit complete.

> col value new_value value_0 noprint
> select b.value from v$statname a, v$mystat b 
> where a.statistic# = b.statistic# and a.name = 'session logical reads';

> set timing on
> ALTER TABLE T ADD (C NUMBER, CONSTRAINT C_CHK CHECK (C>0));

Table altered.

Elapsed: 00:00:04.27

> set timing off
> select b.value-&VALUE_0 logical_reads from v$statname a, v$mystat b
> where a.statistic# = b.statistic# and a.name = 'session logical reads';

LOGICAL_READS
-------------
       143243

Note the elapsed time and the number of logical reads – most of them are due to accessing all the rows of T in order to validate the check constraint.
Now, let’s see the same example but with the inline check constraint instead of the out-of-line one:

> drop table t purge;

Table dropped.

> create table t (x varchar2(1000));

Table created.

> insert /*+ append */ into t
> select lpad(rownum,1000,'x') from dual connect by level<=1e6;

1000000 rows created.

> commit;

Commit complete.

> col value new_value value_0 noprint
> select b.value from v$statname a, v$mystat b
> where a.statistic# = b.statistic# and a.name = 'session logical reads';

> set timing on
> ALTER TABLE T ADD (C NUMBER CONSTRAINT C_CHK CHECK (C>0));

Table altered.

Elapsed: 00:00:00.01

> set timing off
> select b.value-&VALUE_0 logical_reads from v$statname a, v$mystat b
> where a.statistic# = b.statistic# and a.name = 'session logical reads';

LOGICAL_READS
-------------
          359

Much faster, with significantly fewer logical reads, since accessing the actual records is skipped.

A Recipe for Summoning the RBO Monster (even in Oracle 12c): On Delete Cascade, Function-Based Index and Missing Table Statistics

The last version of Oracle in which CHOOSE was officially supported as an OPTIMIZER_MODE parameter value was 9.2.
This is what the documentation of Oracle 9.2 says about it:

choose
The optimizer chooses between a cost-based approach and a rule-based approach based on whether statistics are available.
If the data dictionary contains statistics for at least one of the accessed tables, then the optimizer uses a cost-based approach and optimizes with a goal of best throughput. If the data dictionary contains only some statistics, then the cost-based approach is used, and the optimizer must guess the statistics for the subjects without any statistics. This can result in sub-optimal execution plans. If the data dictionary contains no statistics for any of the accessed tables, then the optimizer uses a rule-based approach.

In subsequent versions CHOOSE does not appear in the documentation as a valid value, and the “What’s New in Oracle Performance?” chapter in the documentation of Oracle 10.1 includes this sentence:

Some consequences of the desupport of RBO are:
• CHOOSE and RULE are no longer supported as OPTIMIZER_MODE initialization parameter values …

But the Oracle database keeps using CHOOSE – and therefore keeps potentially using the RBO – internally, even in version 12c. And last week I hit a performance issue that was caused by this fact combined with some unlucky circumstances.

Here is a simplified example, running on Oracle 12.1.0.2.

We have two tables – PARENTS and CHILDREN – with an “on delete cascade” foreign key constraint.

> create table parents (
    id number not null primary key
  );

Table created.

> insert into parents select rownum from dual connect by level<=10000;

10000 rows created.

> create table children (
    id number not null primary key,
    parent_id number not null references parents (id) on delete cascade,
    filler varchar2(4000)
  );

Table created.

> insert into children
  select rownum,mod(rownum,10000)+1,lpad('x',4000,'x')
  from dual
  connect by level<=100000;

100000 rows created.

> commit;

Commit complete.

There is an index that supports the foreign key, and it is a function-based index. Note that the leading column of the index is simply the referencing column.

> create index children_idx on children (parent_id, substr(filler,1,10));

Index created. 

Now let’s delete a record from the PARENTS table.

> alter system flush buffer_cache;

System altered.

> set timing on
> delete parents where id=1;

1 row deleted.

Elapsed: 00:00:27.80
> set timing off
> roll
Rollback complete.

Why did it take so long?

When we delete a record from PARENTS, Oracle implicitly deletes the child records of this parent (because of the “on delete cascade” foreign key). This is the recursive SQL statement:

 delete from "DEMO"."CHILDREN" where "PARENT_ID" = :1

Oracle uses the CHOOSE optimizer mode for the implicit deletion from CHILDREN.

> select value,isdefault
  from v$sql_optimizer_env
  where sql_id='f7j1aq9z8k6r1'
  and child_number=0
  and name='optimizer_mode';

VALUE                ISDEFAULT
-------------------- ---------
choose               NO

1 row selected.

Unfortunately I forgot to gather statistics on the CHILDREN table. The lack of statistics means that Oracle chose the rule-based optimizer for the implicit DELETE statement.
And this means that it cannot use the CHILDREN_IDX index, because the RBO does not consider function-based indexes.

> SELECT * FROM table(DBMS_XPLAN.DISPLAY_CURSOR('f7j1aq9z8k6r1',0));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  f7j1aq9z8k6r1, child number 0
-------------------------------------
 delete from "DEMO"."CHILDREN" where "PARENT_ID" = :1

Plan hash value: 3379301338

---------------------------------------
| Id  | Operation          | Name     |
---------------------------------------
|   0 | DELETE STATEMENT   |          |
|   1 |  DELETE            | CHILDREN |
|*  2 |   TABLE ACCESS FULL| CHILDREN |
---------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("PARENT_ID"=:1)

Note
-----
   - rule based optimizer used (consider using cbo)


23 rows selected.

If I execute the DELETE FROM CHILDREN statement explicitly, the CBO chooses a good execution plan that uses the index, because the statistics on the index were gathered automatically during the index creation. It’s the missing statistics on the table that caused the implicit (recursive) DELETE statement to use the RBO.

> alter system flush buffer_cache;

System altered.

> set timing on
> delete from "DEMO"."CHILDREN" where "PARENT_ID" = 1;

10 rows deleted.

Elapsed: 00:00:00.21
> set timing off
> select * from table(dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  0cm664fx8b944, child number 0
-------------------------------------
delete from "DEMO"."CHILDREN" where "PARENT_ID" = 1

Plan hash value: 1030488021

----------------------------------------------------------------------------------
| Id  | Operation         | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | DELETE STATEMENT  |              |       |       |     2 (100)|          |
|   1 |  DELETE           | CHILDREN     |       |       |            |          |
|*  2 |   INDEX RANGE SCAN| CHILDREN_IDX |    10 | 40100 |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("PARENT_ID"=1)


19 rows selected.

Even if I didn’t have statistics on the index, the CBO would still choose the index for performing the explicit DELETE, because it would use dynamic sampling:

> exec dbms_stats.delete_index_stats (user,'CHILDREN_IDX')

PL/SQL procedure successfully completed.

> delete from "DEMO"."CHILDREN" where "PARENT_ID" = 3;

10 rows deleted.

> select * from table(dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  a8x8wszgx1g6r, child number 0
-------------------------------------
delete from "DEMO"."CHILDREN" where "PARENT_ID" = 3

Plan hash value: 1030488021

----------------------------------------------------------------------------------
| Id  | Operation         | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | DELETE STATEMENT  |              |       |       |     3 (100)|          |
|   1 |  DELETE           | CHILDREN     |       |       |            |          |
|*  2 |   INDEX RANGE SCAN| CHILDREN_IDX |  1066 |  2111K|     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("PARENT_ID"=3)

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)


23 rows selected.

Finally, if the index were a regular one rather than function-based, then a good plan would have been chosen for the recursive SQL, because the RBO prefers an index range scan over a full table scan:

> drop index children_idx;

Index dropped.

> create index children_idx on children (parent_id, id);

Index created.

> alter system flush buffer_cache;

System altered.

> set timing on
> delete parents where id=4;

1 row deleted.

Elapsed: 00:00:00.14
> set timing off

> select * from table(DBMS_XPLAN.DISPLAY_CURSOR('f7j1aq9z8k6r1',0));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  f7j1aq9z8k6r1, child number 0
-------------------------------------
 delete from "DEMO"."CHILDREN" where "PARENT_ID" = :1

Plan hash value: 1030488021

------------------------------------------
| Id  | Operation         | Name         |
------------------------------------------
|   0 | DELETE STATEMENT  |              |
|   1 |  DELETE           | CHILDREN     |
|*  2 |   INDEX RANGE SCAN| CHILDREN_IDX |
------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("PARENT_ID"=:1)

Note
-----
   - rule based optimizer used (consider using cbo)


23 rows selected.
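
One straightforward remedy follows from this diagnosis: gathering statistics on the table makes the recursive DELETE go through the CBO, which can use the function-based index. A minimal sketch of my own:

> exec dbms_stats.gather_table_stats(user, 'CHILDREN')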