ilOUG Day

ilOUG, the Israeli Oracle User Group, is introducing a new and refreshing concept for its SIG meetings. On Monday, March 23rd 2015, for the first time there will be a meeting for the entire technological community, unlike past meetings, each of which focused on a specific interest group (and was therefore often quite small). The upcoming event will feature 15 lectures in 5 parallel tracks:

  • Database Administration
  • Database Development
  • Engineered Systems, Hardware & OS
  • Development Technologies and Tools
  • Big Data & BI

There is great variety, and participants can freely switch tracks at any point, so I’m sure anyone can find something of interest (some of you may even find it hard to choose between simultaneous presentations, but that’s a good problem to have). In addition, such a community-wide event allows more mingling and networking than the previous, smaller events.

I will present the lecture Write Less (Code) with More (Oracle 12c New Features) in the Database Development track. Here is the abstract:

Oracle 12c introduced many new features that allow us developers to write less code than in previous releases, and become more efficient and productive.
Some features, such as Row Limiting and Lateral Inline Views, enhance the SQL language, so SQL statements can become much shorter and more readable than before.
Other features, such as Temporal Validity and In-Database Archiving, offer built-in functionality that previously had to be implemented by the application.
Attend this session to learn about several of these new features, and see many useful examples.

The meeting will take place at the Dan Panorama hotel in Tel Aviv, from 2pm. Participation is free of charge, but requires registration in advance.

For full details see: http://www.iloug.org.il/ilOUGDay/?page=Agenda2


Optimizer bug leads to wrong results

A few days ago a query that I wrote did not return the expected results. After some investigation I came to the conclusion that the reason is an optimizer bug. Here is a very simplified example (quite silly functionality-wise, but I guarantee you that the original, more complex, query does make sense).
I tried it with Oracle 11.2.0.2, 11.2.0.4 and 12.1.0.2, and got the same (wrong) behavior in all three cases.

Update [August 2, 2017]
I tested it with 12.2.0.1, and it’s working fine!

The query is based on the following table:

create table t(
  id   number not null constraint t_pk primary key,
  num1 number not null
);

Let’s fill it with some records:

insert into t values (1,74);
insert into t values (2,96);
insert into t values (3,41);

We start with this query (that works just fine):

select (select max(level) from dual connect by level <= t.num1) 
from   t
where  t.id = :id;

For our sample data we would expect the query to return 74 for :id=1, 96 for :id=2 and 41 for :id=3, and indeed these are the results that we get.
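
By the way, a quick way to try this for yourself in SQL*Plus:

variable id number
exec :id := 1

select (select max(level) from dual connect by level <= t.num1) max_level
from   t
where  t.id = :id;

 MAX_LEVEL
----------
        74
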
This is the execution plan:

--------------------------------------------------------------------------------------
| Id  | Operation                     | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |      |     1 |    26 |     3   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE               |      |     1 |       |            |          |
|*  2 |   CONNECT BY WITHOUT FILTERING|      |       |       |            |          |
|   3 |    FAST DUAL                  |      |     1 |       |     2   (0)| 00:00:01 |
|   4 |  TABLE ACCESS BY INDEX ROWID  | T    |     1 |    26 |     1   (0)| 00:00:01 |
|*  5 |   INDEX UNIQUE SCAN           | T_PK |     1 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(LEVEL<=:B1)
   5 - access("T"."ID"=TO_NUMBER(:ID))

Now I will simply add an innocent inline view to the query:

select (select max(level) from dual connect by level <= t.num1) 
from   t,
       (select distinct 1 from dual)
where  t.id = :id;

Obviously the existence of the inline view - (select distinct 1 from dual) - should make no difference. It returns a single record, we don't use its result set, and we have no join conditions in the query (so we actually have a Cartesian product between 1 record and [0 or 1] records).

But now the results are wrong. The query returns 1 in all the cases, and the reason is that the scalar subquery - (select max(level) from dual connect by level <= t.num1) - is always executed with t.num1 = null.

Let's look at the execution plan for this case:

-------------------------------------------------------------------------------------------
| Id  | Operation                     | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |           |     1 |    13 |     6  (17)| 00:00:01 |
|   1 |  SORT AGGREGATE               |           |     1 |       |            |          |
|*  2 |   CONNECT BY WITHOUT FILTERING|           |       |       |            |          |
|   3 |    FAST DUAL                  |           |     1 |       |     2   (0)| 00:00:01 |
|   4 |  VIEW                         | VM_NWVW_0 |     1 |    13 |     4  (25)| 00:00:01 |
|   5 |   NESTED LOOPS                |           |     1 |    38 |     3   (0)| 00:00:01 |
|   6 |    FAST DUAL                  |           |     1 |       |     2   (0)| 00:00:01 |
|*  7 |    INDEX UNIQUE SCAN          | T_PK      |     1 |       |     0   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(LEVEL<=:B1)
   7 - access("T"."ID"=TO_NUMBER(:ID))

For some reason, the optimizer decided that it is unnecessary to access the table T. Oracle accesses only the index T_PK, which contains only the ID column. With this execution plan it is impossible to get the value of the NUM1 column, and indeed the results behave as if NUM1 were NULL.

Step 4 in the execution plan refers to VM_NWVW_0, which is usually related to Complex View Merging. So let's try to disable view merging using the NO_MERGE hint:

select (select max(level) from dual connect by level <= t.num1) 
from   t,
       (select /*+ NO_MERGE */ distinct 1 from dual)
where  t.id = :id;

And indeed we get the correct results, with this execution plan:

--------------------------------------------------------------------------------------
| Id  | Operation                     | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |      |     1 |    26 |     6  (17)| 00:00:01 |
|   1 |  SORT AGGREGATE               |      |     1 |       |            |          |
|*  2 |   CONNECT BY WITHOUT FILTERING|      |       |       |            |          |
|   3 |    FAST DUAL                  |      |     1 |       |     2   (0)| 00:00:01 |
|   4 |  NESTED LOOPS                 |      |     1 |    26 |     4  (25)| 00:00:01 |
|   5 |   TABLE ACCESS BY INDEX ROWID | T    |     1 |    26 |     1   (0)| 00:00:01 |
|*  6 |    INDEX UNIQUE SCAN          | T_PK |     1 |       |     1   (0)| 00:00:01 |
|   7 |   VIEW                        |      |     1 |       |     3  (34)| 00:00:01 |
|   8 |    FAST DUAL                  |      |     1 |       |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(LEVEL<=:B1)
   6 - access("T"."ID"=TO_NUMBER(:ID))

Also, if we remove the DISTINCT from the inline view, there will be no complex view merging anymore, and the execution plan (and therefore the results) is fine:

select (select max(level) from dual connect by level <= t.num1) 
from   t,
       (select 1 from dual)
where  t.id = :id;

--------------------------------------------------------------------------------------
| Id  | Operation                     | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |      |     1 |    26 |     5   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE               |      |     1 |       |            |          |
|*  2 |   CONNECT BY WITHOUT FILTERING|      |       |       |            |          |
|   3 |    FAST DUAL                  |      |     1 |       |     2   (0)| 00:00:01 |
|   4 |  NESTED LOOPS                 |      |     1 |    26 |     3   (0)| 00:00:01 |
|   5 |   FAST DUAL                   |      |     1 |       |     2   (0)| 00:00:01 |
|   6 |   TABLE ACCESS BY INDEX ROWID | T    |     1 |    26 |     1   (0)| 00:00:01 |
|*  7 |    INDEX UNIQUE SCAN          | T_PK |     1 |       |     0   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(LEVEL<=:B1)
   7 - access("T"."ID"=TO_NUMBER(:ID))

(However, in my original query the DISTINCT was required...)

You are most welcome to comment or to ask questions in this page, or to write me at oren@db-oriented.com.

DOAG 2014

I’ll present two lectures at DOAG 2014 – the annual conference of the German Oracle user group.
The conference will be on November 18th-20th 2014 in Nuremberg.
Here is the full program of the conference.

The first presentation is Edition Based Redefinition best practices. Here is the abstract:
Edition-Based Redefinition (EBR) is a powerful and fascinating feature of Oracle (added in version 11.2), that enables application upgrades with zero downtime, while the application is actively used and operational. Join this session to learn how to use EBR, see many live examples, and get tips from real-life experience in a production site using EBR extensively.

The second presentation is Write Less (Code) with More (Oracle 12c New Features). Here is the abstract:
Oracle 12c introduced many new features that allow us developers to write less code than in previous releases, and become more efficient and productive.
Some features, such as Row Limiting and Lateral Inline Views, enhance the SQL language, so SQL statements can become much shorter and more readable than before.
Other features, such as Temporal Validity and In-Database Archiving, offer built-in functionality that previously had to be implemented by the application.
Attend this session to learn about several of these new features, and see many useful examples.

impdp – which rows failed?

Since version 11.1, Data Pump Import supports the SKIP_CONSTRAINT_ERRORS option, which specifies that you want the import operation to proceed even if (non-deferred) constraint violations are encountered. It logs any rows that cause constraint violations, but does not stop, and does load the other rows. If SKIP_CONSTRAINT_ERRORS is not set, then the default behavior is to roll back the entire load of the data object on which constraint violations are encountered.

For example:

The table presidents was created by the following statement:

create table presidents (
  id         number       primary key,
  first_name varchar2(20),
  last_name  varchar2(20)
);

and is currently containing the following data:

        ID FIRST_NAME           LAST_NAME
---------- -------------------- --------------------
         1 George               Washington
         2 John                 Adams
         3 Thomas               Jefferson
         4 James                Madison
         5 James                Monroe
         6 John Quincy          Adams
         7 Andrew               Jackson
         8 Martin               Van Buren

We have a dump file that contains an export of the presidents table (from somewhere else or from some previous point in time), and we’d like to append the data from the dump file to the table.
The dump file contains the following data:

        ID FIRST_NAME           LAST_NAME
---------- -------------------- --------------------
         7 William              Harrison
         8 John                 Tyler
         9 James                Polk
        10 Zachary              Taylor
        11 Millard              Fillmore
        12 Franklin             Pierce

Obviously, the records with ID 7 and 8 cannot be added to the table, because a primary key constraint is defined on the ID column, and the table already contains records with these ID values.

If we try to execute the following import, without the SKIP_CONSTRAINT_ERRORS option, we will get an error and the whole operation will be rolled back:

impdp oren/oren tables=presidents dumpfile=presidents.dmp table_exists_action=append logfile=presidents_imp1.log

Import: Release 11.2.0.4.0 - Production on Fri Jul 18 19:33:33 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Release 11.2.0.1.0 - 64bit Production
 Master table "OREN"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
 Starting "OREN"."SYS_IMPORT_TABLE_01": oren/******** tables=presidents dumpfile=presidents.dmp table_exists_action=append logfile=presidents_imp1.log
 Processing object type TABLE_EXPORT/TABLE/TABLE
 ORA-39152: Table "OREN"."PRESIDENTS" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
 Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
 ORA-31693: Table data object "OREN"."PRESIDENTS" failed to load/unload and is being skipped due to error:
 ORA-00001: unique constraint (OREN.SYS_C0023857) violated
 Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
 Job "OREN"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at 16:33:35

As expected, if we check the contents of the table we see that no records have been added from the dump file:

select * from presidents;

        ID FIRST_NAME           LAST_NAME
---------- -------------------- --------------------
         1 George               Washington
         2 John                 Adams
         3 Thomas               Jefferson
         4 James                Madison
         5 James                Monroe
         6 John Quincy          Adams
         7 Andrew               Jackson
         8 Martin               Van Buren

Now let’s execute the import with the SKIP_CONSTRAINT_ERRORS option:

impdp oren/oren tables=presidents dumpfile=presidents.dmp table_exists_action=append logfile=presidents_imp2.log data_options=SKIP_CONSTRAINT_ERRORS

Import: Release 11.2.0.4.0 - Production on Fri Jul 18 19:38:00 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Release 11.2.0.1.0 - 64bit Production
Master table "OREN"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "OREN"."SYS_IMPORT_TABLE_01": oren/******** tables=presidents dumpfile=presidents.dmp table_exists_action=append logfile=presidents_imp2.log data_options=SKIP_CONSTRAINT_ERRORS
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "OREN"."PRESIDENTS" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "OREN"."PRESIDENTS" 5.945 KB 4 out of 6 rows
2 row(s) were rejected with the following error:
ORA-00001: unique constraint (OREN.SYS_C0023857) violated

Rejected rows with the primary keys are:
Rejected row #1:
column ID: 7
Rejected row #2:
column ID: 8
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Job "OREN"."SYS_IMPORT_TABLE_01" completed with 1 error(s) at 16:38:06

We see that 4 rows were imported and that 2 rows were rejected due to unique constraint violation. We can also see from the log which rows were rejected, or more precisely, what are the primary key values of the rejected rows.

But what if we want to know the values of the other columns in the rejected rows?
I don’t know of such a feature, but I did some digging and managed to come up with a solution.

First I looked at the statements that Data Pump executed behind the scenes (by querying V$SQL), and one of the statements I found was this:

INSERT INTO "OREN"."ERR$DP004BC9640001" (
  ORA_ERR_NUMBER$, ORA_ERR_MESG$, ORA_ERR_ROWID$, ORA_ERR_OPTYP$, ORA_ERR_TAG$,
  "ID", "FIRST_NAME", "LAST_NAME")
VALUES (:1, :2, :3, :4, :5, :6, :7, :8)

So apparently Data Pump simply uses DML Error Logging to support the SKIP_CONSTRAINT_ERRORS option. And this means that the values of all the columns (except for LOBs etc.) of the rejected rows are basically available. All we need to do is to select from this ERR$DP004BC9640001 table, right?
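
As an aside, this is the same mechanism that we can use directly with the LOG ERRORS clause; here is a minimal sketch, assuming a hypothetical staging table PRESIDENTS_STAGE:

exec dbms_errlog.create_error_log(dml_table_name => 'PRESIDENTS')

insert into presidents
select * from presidents_stage
log errors into err$_presidents ('manual load') reject limit unlimited;

ERR$_PRESIDENTS is the default error logging table name created by DBMS_ERRLOG.CREATE_ERROR_LOG, and the rejected rows (with all their column values) stay in it after the statement completes.
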
Well, there are two problems with selecting from this ERR$DP… table:

  • The name of the table is not consistent. I ran it several times and got a different name in each run (but it always started with ERR$DP).
  • The ERR$DP… table is dropped as soon as the import is done, so by the time we want to select from it, it is already gone.

But we can overcome these issues, with the following flow:

1. Before starting the import process, open a SQL*Plus session and execute:

lock table presidents in share mode;

and keep this session (and transaction) open. This will block the import process (as well as any other process!) when it tries to insert records into the table, and will give us the opportunity to check the name of the error logging table, which, as we’ll see, will already exist at that point in time.

2. Start the import:

impdp oren/oren tables=presidents dumpfile=presidents.dmp table_exists_action=append logfile=presidents_imp.log data_options=SKIP_CONSTRAINT_ERRORS

Import: Release 11.2.0.4.0 - Production on Fri Jul 18 23:13:14 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Release 11.2.0.1.0 - 64bit Production
Master table "OREN"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "OREN"."SYS_IMPORT_TABLE_01": oren/******** tables=presidents dumpfile=presidents.dmp table_exists_action=append logfile=presidents_imp.log data_options=SKIP_CONSTRAINT_ERRORS
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "OREN"."PRESIDENTS" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

3. The import is blocked now. Going back to the SQL*Plus session, let’s find out the error logging table name:

select table_name from user_tables where table_name like 'ERR$DP%';

TABLE_NAME
-----------------------
ERR$DP0047ABBF0001

4. Start a second SQL*Plus session and lock the error logging table in a mode that will allow the import to fill it but not to drop it:

lock table ERR$DP0047ABBF0001 in row share mode;

5. In the first SQL*Plus session, release the lock on the presidents table so the import can continue:

Rollback;

And the import continues:

. . imported "OREN"."PRESIDENTS" 5.945 KB 4 out of 6 rows
2 row(s) were rejected with the following error:
ORA-00001: unique constraint (OREN.SYS_C0023889) violated

Rejected rows with the primary keys are:
Rejected row #1:
column ID: 7
Rejected row #2:
column ID: 8
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Job "OREN"."SYS_IMPORT_TABLE_01" completed with 1 error(s) at 20:35:21

6. When the import is done, we can see all the rejected rows from one of the SQL*Plus sessions:

select id,first_name,last_name from ERR$DP0047ABBF0001;

ID  FIRST_NAME           LAST_NAME
--- -------------------- --------------------
7   William              Harrison
8   John                 Tyler

7. And finally, to cleanup, we can drop the error logging table from the second SQL*Plus session:

drop table ERR$DP0047ABBF0001;

You are most welcome to comment or to ask questions in this page, or to write me at oren@db-oriented.com.

3 Points about Indexes and Order

When dealing with indexes, order can be very significant, in several respects.
Perhaps that's not surprising for a data structure whose very purpose is to keep data in order.
We'll look at three different questions:
1. Is the order of columns in a composite index important?
2. Which order is better: filling a table with data and then creating indexes, or creating indexes and then filling the table with data?
3. When creating several indexes, is the order of creation important?
There is not always one correct answer that covers all cases, but it is always worth asking these questions before rushing to execute.

Is the order of columns in a composite index important?
Certainly yes. Let’s take a look at the following two indexes – they both contain the same two columns, but in a different order:

CREATE INDEX T_IDX_1 ON T (COL_A,COL_B);
CREATE INDEX T_IDX_2 ON T (COL_B,COL_A);

Let’s assume that the table T contains many records and that the following queries are highly selective (i.e., they return a relatively small number of records from the table), so it is likely that using an index is better than a full table scan.

For the following query both indexes are equally good:

SELECT *
FROM T
WHERE COL_A = :VAL1
  AND COL_B = :VAL2;

But for the following query, the index T_IDX_1 is good, while T_IDX_2 is not:

SELECT *
FROM T
WHERE COL_A = :VAL1;

Although the index T_IDX_2 contains the column COL_A, that is not enough, because COL_A is not in its leading part. The order of columns in the index matters.
The phone book makes the difference easy to understand: the names are ordered first by last name and then by first name, so we can very quickly find all the subscribers whose last name is “Nakdimon”, but we must scan the whole phone book to find all the subscribers whose first name is “Oren”.

Note: if COL_B contains a relatively small number of distinct values, Oracle will still be able to use the index T_IDX_2 to answer the last query, using the Index Skip Scan access path, but using T_IDX_1 with an Index Range Scan will still be better.

For the next two questions we’ll consider the following case: we need to create a new table with the following two indexes, and fill it with plenty of data:

CREATE TABLE T (
   COL_A NUMBER,
   COL_B DATE,
   COL_C VARCHAR2(4000),
   …
);
CREATE INDEX T_IDX_A ON T (COL_A);
CREATE INDEX T_IDX_BA ON T (COL_B,COL_A);

Which order is better: filling a table with data and then creating indexes, or creating indexes and then filling the table with data?
Usually working in the former order (creating the indexes when the table is already filled) will take less time than working in the latter order (filling the table when the indexes already exist), since in the latter case the indexes need to be maintained with the insertion of each and every record.

When creating several indexes, is the order of creation important?
Here the answer is positive in certain cases.
Suppose that we created the table T and filled it with many rows, and now it’s time to create the indexes. We can create T_IDX_A first and T_IDX_BA second, or vice versa. Let’s examine both options:

Option 1:

  • We’ll create T_IDX_A first. For that, Oracle will do a Full Table Scan of T (and will take the value of COL_A from every record, and of course the ROWID of every record to know where to point to from the index)
  • Now we’ll create T_IDX_BA. Once again, Oracle will do a Full Table Scan of T (and will take the values of COL_B and COL_A and the ROWID from every record)

Option 2:

  • We’ll create T_IDX_BA first. For that, Oracle will do a Full Table Scan of T (and will take the values of COL_B and COL_A and the ROWID from every record)
  • Now we’ll create T_IDX_A, and this is where the plot changes. Oracle can do a Full Table Scan of T here as well, but in this case it has another alternative, a better one in most cases. The only details that are needed in order to build the index are the values of COL_A and the ROWID of all the records in the table (where COL_A is not null), and these details already exist in the index T_IDX_BA. Therefore, Oracle can do an Index Fast Full Scan of T_IDX_BA, instead of Full Table Scan of the table.

So, if all the columns of one index are included in a second index, it is recommended to create the second (larger) index first and only then the first one, enabling Oracle to consider more alternatives. The more columns the table has that do not appear in the indexes, the more significant the improvement in the creation time of the second index (Index Fast Full Scan instead of Full Table Scan).
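
To make the last two points concrete with our example, here is the recommended order (a sketch; whether Oracle actually uses an Index Fast Full Scan is the optimizer's decision, and can be verified in the execution plan):

-- 1. fill T with plenty of data, and only then build the indexes

-- 2. create the "containing" index first...
CREATE INDEX T_IDX_BA ON T (COL_B,COL_A);

-- 3. ...so that this one can be built by fast-full-scanning T_IDX_BA
--    instead of full-scanning the (much wider) table
CREATE INDEX T_IDX_A ON T (COL_A);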

You are most welcome to comment or to ask questions in this page, or to write me at oren@db-oriented.com.

One Problem – Many Solutions

One of the things I love in the SQL language is that one problem may have many different solutions, or more specifically, one functional question may be solved by different SQL queries.

It doesn’t mean that all the solutions are equivalent in all aspects. If all the solutions solve the functional question correctly, then all the result sets must be the same. But they may differ in performance, for example (it’s so satisfying to reduce the execution time of a query from hours to seconds just by rewriting it).

Another example: one query may be short and elegant while another “equivalent” query may be long and cumbersome.

And not all the solutions are even available in all versions of the database: many times a new feature makes a query much simpler. Analytic functions are a great example; I still remember how their appearance (in Oracle 8.1.6, if I remember correctly) had a huge effect on my query writing, making queries shorter and often faster, and I see the same effect on people today when I rewrite their queries and introduce them to analytic functions at the same time.

Oracle 12c introduced some new features that let us write less code for the same functionality, like Lateral Inline Views or the Row Limiting clause.

Here is a nice example for one problem with several solutions.

A few days ago, Lucas Jellema from AMIS raised in this blog post the question of “packing” multiple rows that represent adjacent time periods into a single row that represents the unified period, and suggested the following solution, which nicely demonstrates some important features such as the LAG and RANK analytic functions, Subquery Factoring (supported since Oracle 9i), and Recursive Subquery Factoring (introduced in Oracle 11g):

with chairpeople as
( select chairperson
  ,      date_from
  ,      date_to
  ,      case date_from - lag(date_to) over (partition by chairperson order by date_from asc)
         when 1 then 'PACK'
         end action
  ,      rank()  over (partition by chairperson order by date_from asc) rnk
  from   chairmanships
)
, packing_chairs (chair, date_from, date_to, lvl) as
( select chairperson, date_from, date_to, 1
  from   chairpeople
  where  action is null
  union all
  select p.chair, p.date_from, c.date_to, lvl+1
  from   chairpeople c
         join
         packing_chairs p
         on (c.chairperson = p.chair and c.rnk = p.lvl+1)
  where  c.action='PACK'
  )
, packed_chairs as
( select chair, date_from, nullif(max(nvl(date_to,date'9999-12-31')),date'9999-12-31') date_to
  from   packing_chairs
  group
  by     chair, date_from
)
select *
from   packed_chairs
order
by     date_from;

-- note: this is a slightly revised version of the query from the original post
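
By the way, for a quick self-contained test of all the solutions in this post, a minimal dataset may look like this (an assumption for illustration; Lucas’ post contains the real data):

create table chairmanships (
  chairperson varchar2(30),
  date_from   date,
  date_to     date
);

insert into chairmanships values ('Smith', date '2014-01-01', date '2014-01-31');
insert into chairmanships values ('Smith', date '2014-02-01', date '2014-03-15'); -- adjacent to the previous period, so packed with it
insert into chairmanships values ('Smith', date '2014-05-01', date '2014-06-30'); -- a gap, so a new period

All the solutions should pack these three rows into two periods: 2014-01-01 .. 2014-03-15 and 2014-05-01 .. 2014-06-30.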

I suggested another solution, based on the LAST_VALUE analytic function:

select chairperson,
       date_from,
       max(date_to) keep(dense_rank last order by date_to) date_to
from   (select chairperson,
               last_value(new_period_date_from ignore nulls) over(partition by chairperson order by date_from) date_from,
               date_to
        from   (select chairperson,
                       date_from,
                       date_to,
                        case when lnnvl(date_from - lag(date_to) over(partition by chairperson order by date_from) = 1) then date_from end new_period_date_from
                from   chairmanships))
group  by chairperson,
          date_from
order  by date_from;

In another comment to Lucas’ post, Sayan Malakshinov suggested an even simpler and shorter solution:

select 
  chairperson
 ,min(date_from) keep (dense_rank first order by date_from,date_to) as date_from
 ,max(date_to  ) keep (dense_rank last  order by date_from,date_to) as date_to
from (
      select
           chairperson 
         , date_from 
         , date_to 
         , sum(flag) over(partition by chairperson order by date_from,date_to) grp
      from (
            select 
                 chairperson 
               , date_from 
               , date_to 
               , decode( 1 + lag(date_to)over(partition by chairperson order by date_from,date_to), date_from, 0, 1) flag
            from chairmanships
           )
     )
group by chairperson, grp
order by chairperson, grp;

Finally (for now at least), I suggested yet another solution, using a new Oracle 12c feature – Pattern Matching:

select * from chairmanships 
  match_recognize (
     partition by chairperson
     order by date_from
     measures frst.date_from as date_from,
              date_to as date_to
     one row per match
     pattern (frst nxt*)
     define nxt as nxt.date_from = prev(nxt.date_to)+1) 
order by chairperson,date_from;

So, there you go – one question, four very different solutions.

The COALESCE Function

Everybody knows the NVL function and uses it frequently. On the other hand, the COALESCE function, which belongs to the same family of functions, is less known, although it has existed since version 9i and has more capabilities than the good old NVL.

Let’s start with the similarities between these functions:

What?

Both functions have a similar purpose – to return the first non-null element from a list of elements. If all the elements in the list are null, the result will be null as well.

Where?

Both functions can be called from SQL and from PL/SQL.


Regarding the differences between the functions – there are two major and significant ones:

How Many?

The first difference is in the number of arguments the functions get.

NVL takes exactly two arguments. The expression

NVL(a,b)

returns the value of a if it is not null, and the value of b if a is null.

The COALESCE function supports two or more arguments.

Let’s look at the expression

COALESCE(X1, X2, X3, ..., Xn)

  • If X1 is not null, its value is returned
  • Otherwise, if X2 is not null, its value is returned
  • Otherwise, if X3 is not null, its value is returned
  • And so on…

So if you sometimes write something like

NVL(a,NVL(b,c))

or alternatively

CASE WHEN a IS NOT NULL THEN a WHEN b IS NOT NULL THEN b ELSE c END

then you can write instead

COALESCE(a,b,c)

When?

The second difference is less evident, but very significant in some cases.

For many years I assumed that if I write

NVL(expression1,expression2)

then expression2 will be evaluated only if expression1 is null. It seems logical, doesn’t it? But this is not how it works: in NVL both expressions are always evaluated. In COALESCE, on the other hand, the expressions are evaluated only when needed (as is also the case with DECODE and CASE, by the way).

Let’s see an example where this behavior of NVL is problematic:

In the following query we’d like to return the value of the DESCRIPTION column if it is not null. If the column is empty, we’d like to return the result of some function – get_description – that gets as input the columns A_ID and B_ID.

SELECT NVL(DESCRIPTION, get_description(A_ID,B_ID))
FROM TEST_TABLE;

It doesn’t matter what get_description does exactly. What matters is that every call to the function when DESCRIPTION contains a non-null value is unnecessary, but this is exactly what happens here, and performance may suffer significantly.

But if we use COALESCE instead of NVL, we’ll achieve the same functionality, while removing the unnecessary calls. get_description will be called only for records in which the DESCRIPTION column is empty:

SELECT COALESCE(DESCRIPTION, get_description(A_ID,B_ID))
FROM TEST_TABLE;
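
We can easily see this behavior for ourselves; here is a minimal sketch, assuming a toy implementation of get_description that announces every call:

create or replace function get_description(a_id in number, b_id in number)
  return varchar2
is
begin
  dbms_output.put_line('get_description called for ' || a_id || ',' || b_id);
  return 'some generated description';
end;
/

set serveroutput on
-- with NVL the message is printed for every row in TEST_TABLE;
-- with COALESCE it is printed only for rows where DESCRIPTION is null
SELECT NVL(DESCRIPTION, get_description(A_ID,B_ID)) FROM TEST_TABLE;
SELECT COALESCE(DESCRIPTION, get_description(A_ID,B_ID)) FROM TEST_TABLE;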

One advantage NVL does have over COALESCE is the number of characters we need to type.

So if we have exactly two expressions, and evaluating both of them is harmless (simple columns or constants, for example), we can continue using NVL. In the other cases, using COALESCE is probably a better choice.

Multiple indexes on the same column list

Zohar Elkayam wrote about the new feature added in Oracle 12c that enables creation of multiple indexes on the same column list (as long as they are different somehow [unique vs. non-unique, b*tree vs. bitmap, etc.] and only one of them is visible).

If the same capability is needed in pre-12c versions, create the new index with an extra column containing a constant, as shown below. For example, if the original index is based on (X, Y), then the new index can be based on (X, Y, 1).
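
A minimal sketch (with hypothetical index and column names):

create index t_idx_1 on t (x, y);

-- pre-12c, another index on exactly (x, y) fails with
-- ORA-01408: such column list already indexed

-- the workaround: an extra "column" containing a constant
create index t_idx_2 on t (x, y, 1);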

Oracle 12c New Features for Developers

One of the seminars I gave in Oracle Week 2013 was “Oracle Database 12c New Features for Developers”. I covered quite a lot of features (well, it was a full day seminar), and I tried to present them in a rational order, by dividing them into categories and talking about one category at a time. Of course, there are features that fall into more than one category, so I had to choose where to discuss such features in depth and where to only mention them.

Following is the complete list of features I covered in the seminar, with a very very short description for each one of them. I plan to write dedicated posts about some of them. In the meantime, I’m adding links to some good references.

The presentation is available here.


For starters (probably not the most important new feature 😉 )

  • Last login
C:\Users\orenn>sqlplus

SQL*Plus: Release 12.1.0.1.0 Production on Sat Nov 9 08:24:09 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Enter user-name: hr
Enter password:
Last Successful login time: Fri Nov 08 2013 17:26:27 +02:00

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL>

Schema-level new features

  • Invisible columns: Invisible columns are invisible, unless explicitly specified (for example, they will not appear in the result set of “select * from table”) [a post by Tom Kyte]
  • Column defaults (see the sketch after this list) [an article by Tim Hall]
    • Sequence: a column can be associated with a user-defined sequence
    • On null: a default can be defined for explicit Null values
    • Identity: a column can be defined as “identity”, which implicitly associates it with a system-generated sequence
    • New optional column: adding an optional column with default is a meta-data only operation
  • Extended strings: the maximum length of RAW and N/VARCHAR2 columns can be extended to 32KB [an article by Tim Hall]
  • Multiple indexes on the same column list: it is possible to define several indexes on the same column list, as long as they are different somehow (unique vs. non-unique, b*tree vs. bitmap, etc.) and only one of them is visible [a post by Richard Foote]
  • Session level sequences: non-persistent sequences, return a range of numbers which is unique within a session
  • Temporary undo: undo for temporary tables can be stored in the temporary tablespace
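
A minimal sketch of the new column default options from the list above (EMP_DEMO is a hypothetical table):

create table emp_demo (
  id     number generated by default on null as identity,  -- identity column
  name   varchar2(30),
  status varchar2(10) default on null 'NEW'                -- applied to explicit NULLs too
);

insert into emp_demo (id, name, status) values (null, 'Scott', null);
-- ID gets the next identity value, and STATUS gets 'NEW'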

Optimizer new features

  • Histograms: new types of histograms – “top frequency” and “hybrid” – can give much better information than the legacy “frequency” and “height balanced” ones [posts by Jonathan Lewis: part 1, part 2, part 3 (with a very clear example of hybrid histograms)]
  • Online statistics gathering: table and column (but not index and histogram) statistics are gathered automatically during direct path load into empty tables/partitions
  • Dynamic statistics: a new level of dynamic statistics (formerly known as dynamic sampling), level 11, enables the optimizer to automatically decide to use dynamic statistics for any SQL statement, and the results are persisted in the cache
  • Global temporary tables: statistics gathered on global temporary tables are kept and used either in session-level (the default) or shared across sessions (as in pre-12c versions)
  • Concurrent execution of UNION/ALL: different branches of UNION operations may be executed in parallel (if at least one of the branches is expected to be processed in parallel)

PL/SQL new features

  • White lists: a new clause – ACCESSIBLE BY – can be added to packages, procedures, functions and types to specify which objects are able to reference the PL/SQL object directly (see the sketch after this list) [an article by Tim Hall]
  • Invoker’s rights enhancements
    • Granting roles to program units: roles can be granted to program units [a post by Tom Kyte]
    • BEQUEATH: a new clause in view definition – BEQUEATH – specifies whether functions referenced in the view are executed using the view invoker’s rights or the view definer’s rights
    • INHERIT PRIVILEGES: when executing an invoker’s rights procedure the owner gets access to the invoker’s privileges; in 12c the owner must be granted INHERIT PRIVILEGES on the invoker to achieve this
    • Function result cache: invoker’s rights functions can be RESULT_CACHE-ed too (not only definer’s rights functions)
  • SQL text expansion: DBMS_UTILITY.EXPAND_SQL_TEXT reveals the actual SQL executed for a given query [a post by Tom Kyte]
  • Introspection: the new package UTL_CALL_STACK gives a modeled access to call stacks and error backtrace, and a fine-grained resolution within the program unit [a post by Tom Kyte]
  • Binding: it is possible to bind PL/SQL-only data types to SQL statements
  • SELECT from collections: it is possible to select from package-level collections
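
A minimal sketch of the ACCESSIBLE BY clause from the list above (HELPER_PKG and API_PKG are hypothetical names):

create or replace package helper_pkg
  accessible by (api_pkg)  -- only API_PKG may reference HELPER_PKG
as
  procedure do_work;
end helper_pkg;
/

-- referencing HELPER_PKG from anywhere else fails with
-- PLS-00904: insufficient privilege to access object HELPER_PKG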

Easier migration to Oracle

  • Implicit result sets: it is possible to return result sets not only explicitly through cursor variables, but also implicitly [an article by Tim Hall]
  • SQL translation framework: SQL Translation Framework translates SQL statements of a client program from a non-Oracle dialect into the Oracle dialect (and can also be used to substitute an Oracle SQL statement with another Oracle statement to address a semantic or a performance issue) [a post by Kerry Osborne]

SQL new features

  • Row limiting: a new clause of the SELECT statement makes it possible to fetch a specific chunk of X rows or X% of the rows (see the sketch after this list) [an article by Tim Hall]
  • PL/SQL in the WITH clause: the WITH clause can include not only subquery factoring but also PL/SQL declarations of functions that can be used in the query (and procedures that can be used in the functions) [an article by Tim Hall]
  • Pattern matching: the SELECT syntax allows enhanced analysis of row sequences
  • TRUNCATE CASCADE: it is possible to truncate the parent table of an enabled foreign key constraint, and to truncate the parent table of a reference-partitioned table (assuming the foreign key is defined with ON DELETE CASCADE)
  • New join syntax: now supported – CROSS APPLY, OUTER APPLY, LATERAL inline views, multi-table “left outer join” with Oracle syntax (+)
  • Online operations: more operations can be executed online (DROP INDEX, DROP CONSTRAINT, SET UNUSED COLUMN, ALTER INDEX UNUSABLE, ALTER INDEX VISIBLE|INVISIBLE, ALTER TABLE MOVE (SUB)PARTITION)
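
Minimal sketches of Row Limiting and of LATERAL inline views from the list above (assuming the standard HR sample schema):

-- Row Limiting: skip the 10 highest-paid employees and fetch the next 5
select last_name, salary
from   employees
order  by salary desc
offset 10 rows fetch next 5 rows only;

-- LATERAL inline view: the inline view may reference a preceding table
select d.department_name, e.last_name, e.salary
from   departments d,
       lateral (select last_name, salary
                from   employees emp
                where  emp.department_id = d.department_id) e;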

Utilities new features

  • Data Pump enhancements: added options to disable redo logging in impdp, to change table compression in impdp, to export views as tables in expdp, to add timestamps to the log file
  • SQL*Loader express mode: SQL*Loader can be executed with no control file and with many defaults, and it generates a log file for future use including control file, CREATE EXTERNAL TABLE statement and a corresponding INSERT statement

Transaction Guard

  • Transaction Guard: a generic tool for applications to use for at-most-once execution in case of outages

Information lifecycle management

  • Temporal validity: tables can have pairs of columns defining “time periods”, and data can be selected by a specific valid time (or range)
  • In-database archiving: tables can be defined as ROW ARCHIVAL, which implicitly adds a hidden column to the table, holding an archiving (“logical deletion”) state; “archived” rows become invisible (or visible, when we want); see the sketch after this list
  • Heat map: the Heat Map feature provides data access tracking (last read time, last update time, last full table scan, last index scan)
  • Automatic data optimization: policies can be defined to automatically move data between storage tiers and to automatically compress data [an Oracle Magazine article by Arup Nanda]
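
Minimal sketches of Temporal Validity and In-Database Archiving from the list above (CONTRACTS and DOCUMENTS are hypothetical tables):

-- Temporal Validity: a valid-time period per row
create table contracts (
  id number primary key,
  period for valid_time  -- implicitly adds VALID_TIME_START and VALID_TIME_END columns
);

-- rows that were valid on a specific date
select * from contracts as of period for valid_time date '2015-01-01';

-- In-Database Archiving
create table documents (
  id number primary key
) row archival;

-- "logically delete" a row; it becomes invisible to queries by default
update documents set ora_archive_state = '1' where id = 42;

-- make archived rows visible again in this session
alter session set row archival visibility = all;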

Partitioning new features

  • Partitioning schemes: the combination of reference partitioning and interval partitioning is possible
  • Asynchronous global index maintenance: DROP/TRUNCATE PARTITION of a table with global indexes can be now a fast operation while keeping the global indexes usable; the “orphaned” index entries are deleted asynchronously [posts by Richard Foote: part 1, part 2, part 3]
  • TRUNCATE/EXCHANGE CASCADE: it is possible to TRUNCATE/EXCHANGE partitions of the parent table of a reference-partitioned table (assuming the foreign key is defined with ON DELETE CASCADE)
  • Multiple (sub)partitions in single DDL operations: it is possible to handle multiple partitions in a single statement (depending on the partitioning scheme) – ADD | DROP | MERGE | SPLIT | TRUNCATE (see the sketch after this list)
  • Partial indexes: it is possible to create an index on a subset of partitions [posts by Richard Foote: part 1, part 2]
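
For example, a minimal sketch of handling multiple partitions in a single DDL statement (SALES is a hypothetical range-partitioned table):

-- add two partitions at once
alter table sales add
  partition sales_2014_q3 values less than (date '2014-10-01'),
  partition sales_2014_q4 values less than (date '2015-01-01');

-- drop two partitions at once
alter table sales drop partitions sales_2014_q3, sales_2014_q4;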