Constraint Optimization Summary


This is the last part of a series about Constraint Optimization.
In this post I’ll summarize the conclusions from the previous parts.

When we add a constraint to an existing table, there are two aspects that are worth taking into consideration: duration and availability.

Duration

When the table contains a significant number of rows, adding a constraint may take a lot of time. In some cases Oracle applies a very nice optimization that can reduce this time to almost zero.

In part 1 we saw that such optimization happens when adding a column and an inline check constraint on that column in a single statement, and that this optimization does not happen for out-of-line check constraints.
In part 2 we saw that this optimization may lead to data integrity violation in some extreme cases.
In part 3 we saw that such optimization happens when adding a column and a foreign key constraint on that column in a single statement (for both inline and out-of-line constraints).
In part 4 we saw that unfortunately this optimization never happens for unique constraints.
In part 5 we saw that this optimization doesn’t happen if the added column is defined with a default value.

Availability

By default, adding a constraint is an offline operation. It means that it cannot start as long as the table is locked by active transactions, and that it blocks DML statements on the table for the duration of the operation. Obviously, the longer the constraint addition takes, the higher the significance of the availability issue.
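To illustrate what “offline” means here, consider the following sketch (two sessions; it assumes a table t with a numeric column c, and the default ddl_lock_timeout of 0, under which DDL fails immediately instead of waiting):

-- session 1: leave an open transaction on t
insert into t (c) values (1);

-- session 2: the constraint addition cannot start while session 1
-- holds row locks on t; with ddl_lock_timeout=0 it fails immediately:
alter table t add constraint c_chk check (c > 0);
-- ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired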

In part 6 we saw how to add check constraints and foreign key constraints in an online way, by splitting the single operation into several ones.
In part 7 we saw how to add unique constraints in an online way (assuming Enterprise Edition).

Summary

The following table summarizes all of the above:

                          Adding a column and       Adding a column and        Adding a column and
                          an inline constraint      an out-of-line constraint  a constraint in
                          in a single statement     in a single statement      separate statements
------------------------  ------------------------  -------------------------  ---------------------------
Check Constraint          Fast;                     Duration depends on        Duration depends on
                          Offline                   table size;                table size;
                                                    Offline                    Online can be achieved

Foreign Key Constraint    Fast;                     Fast;                      Duration depends on
                          Offline                   Offline                    table size;
                                                                               Online can be achieved

Unique Constraint         Duration depends on       Duration depends on       Duration depends on
                          table size;               table size;               table size;
                          Offline                   Offline                    Online can be achieved
                                                                               in Enterprise Edition

Adding a Unique Constraint in an Online Way


Note: unlike most of my posts, this one assumes Enterprise Edition.

I have a table t and I want to add a unique constraint on one of its columns – c1.

The Offline Way

The most straightforward and simple way to do it is using a single alter table statement:

SQL> alter table t add constraint c1_uk unique (c1);

Table altered.

By default, in this operation Oracle creates a unique constraint (named c1_uk) and a corresponding unique index (also named c1_uk) that enforces the constraint.
The downside is that this is an offline operation – the table is locked in Share mode.
This is true even if we specify that the creation of the index is online:

SQL> alter table t add constraint c1_uk unique (c1) using index online;

Table altered.

If the table contains many records, the creation of the index may take a significant amount of time, during which the table is locked and DML operations on the table are blocked.
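By the way, we can verify that the constraint is indeed enforced by a unique index with a quick dictionary query (a sketch; the columns come from USER_CONSTRAINTS and USER_INDEXES):

select c.constraint_name,
       c.index_name,
       i.uniqueness
from   user_constraints c,
       user_indexes     i
where  c.constraint_name = 'C1_UK'
and    i.index_name = c.index_name;
-- expected: C1_UK, enforced by the unique index C1_UK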

The Online Way

We can create the unique constraint in an online way, by splitting the operation into three steps:

Step 1: Creating the Unique Index Explicitly

Instead of letting Oracle create the index implicitly, we’ll create it explicitly, using the online keyword:

SQL> create unique index c1_uk on t(c1) online;

Index created.

This operation may take some time, depending on the size of the table, but it is an online operation.

Step 2: Creating the Constraint

Now we can add the constraint, and associate it with the already-existing index. This is a fast operation, as the index already exists, but the default alter table… add constraint operation is an offline one. To make it online we should create the constraint as NOT VALIDATED:

SQL> alter table t add constraint c1_uk unique (c1)
  2  using index c1_uk
  3  enable novalidate;

Table altered.

So now the constraint is marked as ENABLED, which means that future DML statements will not be able to violate it, and as NOT VALIDATED, which means that existing records may violate it:

SQL> select status,validated,generated,index_name
  2  from user_constraints
  3  where constraint_name='C1_UK';

STATUS     VALIDATED       GENERATED  INDEX_NAME
---------- --------------- ---------- ----------
ENABLED    NOT VALIDATED   USER NAME  C1_UK
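Note that the ENABLED part is already enforced for new DML at this point – for example, a duplicate insert is rejected (a sketch, assuming t already contains a row with c1 = 1):

SQL> insert into t (c1) values (1);
insert into t (c1) values (1)
*
ERROR at line 1:
ORA-00001: unique constraint (DEMO.C1_UK) violated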

De facto we know that no existing record violates the constraint, because the unique index enforces the uniqueness for the entire table. To make this fact “officially documented”, we’ll go to the third step.

Step 3: Validating the Constraint

To mark the constraint as VALIDATED, we’ll issue the following statement:

SQL> alter table t enable validate constraint c1_uk;

Table altered.

This is an online operation, but is it also a fast operation?
When we validate a check constraint or a foreign key constraint, Oracle scans all the records in the table to make sure no record violates the constraint, and for big tables this may take a significant amount of time.

But in our case Oracle knows that the unique index already enforces the constraint for all the existing records in the table, and it optimizes the validation phase.
Using SQL trace we can see that the query that performs the actual validation looks like this:

select /*+ all_rows ordered dynamic_sampling(2) */ A.rowid, :1, :2, :3 
from "DEMO"."T" A 
where 1=0

so this validation phase is fast, regardless of the size of the table, as the validation does not visit the table records at all.

Enforcing a Unique Constraint by a Non-Unique Index

Oracle can enforce a unique constraint also by using a non-unique index, as long as the columns in the constraint are the leading columns of the index.
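For example, an index on (c1, c2) can enforce a unique constraint on c1, because c1 is a leading column of that index (a sketch; c2 is an assumed additional column of t):

create index c1_c2_idx on t(c1, c2) online;

alter table t add constraint c1_uk unique (c1)
using index c1_c2_idx
enable novalidate;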

Let’s repeat the steps from the previous section with a non-unique index (after dropping and recreating the table).

SQL> create /* non-unique */ index c1_idx on t(c1) online;

Index created.

SQL> alter table t add constraint c1_uk unique (c1)
  2  using index c1_idx
  3  enable novalidate;

Table altered.

SQL> alter table t enable validate constraint c1_uk;

Table altered.

In this case the third step is not just a “rubber stamp”, as it was when we used a unique index. Here the existing records actually have to be validated, and indeed we can see it in the trace file:

select /*+ all_rows ordered dynamic_sampling(2) */ A.rowid, :1, :2, :3
from "DEMO"."T" A,
     (select /*+ all_rows */ "C1" from "DEMO"."T" A 
      where ("C1" is not null)
      group by "C1" having count(1) > 1) B
where( "A"."C1" is not null)
and (sys_op_map_nonnull("A"."C1") =  sys_op_map_nonnull("B"."C1"))
STAT #2199513034064 id=1 cnt=0 pid=0 pos=1 obj=0 op='HASH JOIN  (cr=4180 pr=0 pw=0 str=1 time=706676 us cost=2308 size=1150000 card=50000)'
STAT #2199513034064 id=2 cnt=1000000 pid=1 pos=1 obj=113672 op='TABLE ACCESS FULL T (cr=2090 pr=0 pw=0 str=1 time=103204 us cost=582 size=10000000 card=1000000)'
STAT #2199513034064 id=3 cnt=0 pid=1 pos=2 obj=0 op='VIEW  (cr=2090 pr=0 pw=0 str=1 time=425247 us cost=620 size=650000 card=50000)'
STAT #2199513034064 id=4 cnt=0 pid=3 pos=1 obj=0 op='FILTER  (cr=2090 pr=0 pw=0 str=1 time=425246 us)'
STAT #2199513034064 id=5 cnt=1000000 pid=4 pos=1 obj=0 op='SORT GROUP BY (cr=2090 pr=0 pw=0 str=1 time=404184 us cost=620 size=250000 card=50000)'
STAT #2199513034064 id=6 cnt=1000000 pid=5 pos=1 obj=113672 op='TABLE ACCESS FULL T (cr=2090 pr=0 pw=0 str=1 time=24097 us cost=582 size=5000000 card=1000000)'


Fast but Offline, or Online but Slow?




In the previous parts of this series I showed that Oracle does a nice optimization – one that may save plenty of time – when, in a single ALTER TABLE statement, we add a new nullable column with no default value and a corresponding:
– inline (a.k.a. “column-level”) check constraint
or
– foreign key constraint (either inline or out-of-line)

In these cases Oracle enables the constraint (for future DML statements) and marks the constraint as VALIDATED (stating that existing records respect the constraint) without actually performing any check – counting on the fact that all the existing records have NULL in the new column. This is a beautiful optimization – if the table already contains many records, then skipping the (unnecessary) validation phase can save lots of time.
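For reference, these are the combined forms the optimization applies to (a sketch; parent(id) stands for an assumed referenced table):

-- adding a column and an inline check constraint in one statement:
alter table t add (c number constraint c_chk check (c > 0));

-- adding a column and a foreign key constraint (here in its inline form):
alter table t add (c number constraint c_fk references parent (id));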

So, is it always better to add the new column and the corresponding constraint in a single ALTER TABLE statement? No, it isn’t. Sorry for the cliché, but… it depends.

The advantage is clear: skipping the unnecessary validation phase can save lots of time (the bigger the table, the higher the saving).

But there is also a downside: such a combined operation is an offline one. The table is locked – although only for a short time – in eXclusive mode.

If we cannot afford to do it in one short offline operation, we can do it in three separate online operations, but without the aforementioned optimization:

1. Adding the column (a short operation)

alter table t add (c number);

2. Adding the constraint without validating it (a short operation)

alter table t add (constraint c_chk check (c>0) enable novalidate);

3. Validating the constraint (the bigger the table, the longer this operation takes)

alter table t enable validate constraint c_chk;

By default, when we add a constraint it is both enabled and validated, and this is done as an offline operation. This is why I split it into two steps – both are online: the first step is adding the constraint and enabling it without validation, and the second one is validating the constraint.

EBR – Part 5: Explicit Actualization of Dependent Objects


This is part 5 of a post series about EBR.
In part 1 we created the baseline model and code – a table (PEOPLE) and two packages (PEOPLE_DL and APP_MGR).
In part 2 we saw that even a simple change – a package body compilation – can be dangerous in a busy system.
In part 3 we learned about editions and how they can be used for solving the problems described in part 2, so changes can be applied in a way that is both online and safe. I discussed enabling, creating and using editions; session edition; and editioned objects.
In part 4 we discussed another challenge of online upgrades – invalidation of dependent objects.
In this part I’ll demonstrate everything I wrote about in the previous post.

Visit the index page for all the parts of the series

Creating a New Edition

We want to change the PEOPLE_DL spec, and as in any online upgrade, we start by creating a new edition. Let’s create edition V2 and grant use on it to DEMO_EBR:

-- connected as system
ORA$BASE> create edition v2;

Edition created.

ORA$BASE> grant use on edition v2 to demo_ebr;

Grant succeeded.

Invalidation of Dependent Objects

Before doing the online upgrade properly (i.e., using the new edition), let’s see what happens in a non-EBR environment.
First let’s see the objects in our schema and the dependencies between them before the upgrade:

SQL> break on object_type on object_name on status skip 1
SQL> select o.object_type,
  2         o.object_name,
  3         o.status,
  4         nullif(
  5           d.referenced_name || ' [' || lower(d.referenced_type) || ']',
  6           ' []'
  7         ) depends_on
  8  from   user_objects      o,
  9         user_dependencies d
 10  where  o.object_type like 'PACKAGE%'
 11  and    d.name(+) = o.object_name
 12  and    d.type(+) = o.object_type
 13  and    d.referenced_owner(+) = user
 14  order by o.object_type,
 15           o.object_name,
 16           d.referenced_name;

OBJECT_TYPE  OBJECT_NAME  STATUS  DEPENDS_ON
------------ ------------ ------- --------------------
PACKAGE      APP_MGR      VALID

             PEOPLE_DL    VALID   PEOPLE [table]

PACKAGE BODY APP_MGR      VALID   APP_MGR [package]
                                  PEOPLE_DL [package]

             PEOPLE_DL    VALID   PEOPLE [table]
                                  PEOPLE_DL [package]


6 rows selected.

Compiling the PEOPLE_DL spec will invalidate its dependent objects – the PEOPLE_DL body and the APP_MGR body:

SQL> alter package people_dl compile specification;

Package altered.

SQL> select o.object_type,
  2         o.object_name,
  3         o.status,
  4         nullif(
  5           d.referenced_name || ' [' || lower(d.referenced_type) || ']',
  6           ' []'
  7         ) depends_on
  8  from   user_objects      o,
  9         user_dependencies d
 10  where  o.object_type like 'PACKAGE%'
 11  and    d.name(+) = o.object_name
 12  and    d.type(+) = o.object_type
 13  and    d.referenced_owner(+) = user
 14  order by o.object_type,
 15           o.object_name,
 16           d.referenced_name;

OBJECT_TYPE  OBJECT_NAME  STATUS  DEPENDS_ON
------------ ------------ ------- --------------------
PACKAGE      APP_MGR      VALID

             PEOPLE_DL    VALID   PEOPLE [table]

PACKAGE BODY APP_MGR      INVALID APP_MGR [package]
                                  PEOPLE_DL [package]

             PEOPLE_DL    INVALID PEOPLE [table]
                                  PEOPLE_DL [package]


6 rows selected. 

This is the expected behavior, and, as explained in the previous post, one of the motivations for using EBR for online upgrades.
The dependent objects can be revalidated explicitly (using the ALTER PACKAGE … COMPILE BODY statement) or implicitly (as soon as they are used; for example, by executing app_mgr.do_something).
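For example, explicit revalidation of both bodies looks like this (a sketch):

alter package people_dl compile body;
alter package app_mgr compile body;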

Delayed Actualization

Now, let’s return to our edition-enabled environment.
In the developer session, let’s change the session edition to V2:


-- the developer session
V1> @set_edition v2

Session altered.

V2>

At this point, right after creating the new edition, no editioned objects are actual in this edition; all of them are inherited.
Let’s see the objects in our schema and the dependencies between them, and also the edition in which each editioned object is actual:


V2> break on object_type on object_name on status on edition_name skip 1
V2>  select o.object_type,
  2         o.object_name,
  3         o.status,
  4         o.edition_name,
  5         nullif(
  6           d.referenced_name || ' [' || lower(d.referenced_type) || ']',
  7           ' []'
  8         ) depends_on
  9  from   user_objects      o,
 10         user_dependencies d
 11  where  d.name(+) = o.object_name
 12  and    d.type(+) = o.object_type
 13  and    d.referenced_owner(+) = user
 14  order by nvl2(o.edition_name, 2, 1),
 15           o.object_type,
 16           o.object_name,
 17           d.referenced_name;

OBJECT_TYPE  OBJECT_NAME  STATUS  EDITION_NAME DEPENDS_ON
------------ ------------ ------- ------------ --------------------
INDEX        PEOPLE_PK    VALID

TABLE        PEOPLE       VALID

PACKAGE      APP_MGR      VALID   ORA$BASE

             PEOPLE_DL    VALID   ORA$BASE     PEOPLE [table]

PACKAGE BODY APP_MGR      VALID   ORA$BASE     APP_MGR [package]
                                               PEOPLE_DL [package]

             PEOPLE_DL    VALID   V1           PEOPLE [table]
                                               PEOPLE_DL [package]


8 rows selected.

Now let’s do our task – changing the PEOPLE_DL spec. To show how it works we don’t have to change anything – we can simply recompile the spec:


V2> alter package people_dl compile specification;

Package altered.

Compiling an existing object (using the ALTER statement) explicitly actualizes the object in the current edition (if it’s not actual in this edition yet).
Let’s see it by repeating the query of the objects and their dependencies:


V2> select o.object_type,
  2         o.object_name,
  3         o.status,
  4         o.edition_name,
  5         nullif(
  6           d.referenced_name || ' [' || lower(d.referenced_type) || ']',
  7           ' []'
  8         ) depends_on
  9  from   user_objects      o,
 10         user_dependencies d
 11  where  d.name(+) = o.object_name
 12  and    d.type(+) = o.object_type
 13  and    d.referenced_owner(+) = user
 14  order by nvl2(o.edition_name, 2, 1),
 15           o.object_type,
 16           o.object_name,
 17           d.referenced_name;

OBJECT_TYPE  OBJECT_NAME  STATUS  EDITION_NAME DEPENDS_ON
------------ ------------ ------- ------------ --------------------
INDEX        PEOPLE_PK    VALID

TABLE        PEOPLE       VALID

PACKAGE      APP_MGR      VALID   ORA$BASE

             PEOPLE_DL    VALID   V2           PEOPLE [table]

PACKAGE BODY APP_MGR      VALID   ORA$BASE     APP_MGR [package]

             PEOPLE_DL    VALID   V1           PEOPLE [table]


6 rows selected.

And indeed, we can see that the PEOPLE_DL spec is now actual in the current edition, V2.
But we can also observe some weird things regarding the PEOPLE_DL body and the APP_MGR body – the dependent objects of the PEOPLE_DL spec:

  • It seems they do not depend on the PEOPLE_DL spec anymore
  • They seem valid, although (as we’ve seen above in the non-EBR example) they must have been invalidated by the compilation of the PEOPLE_DL spec
  • It seems that we still inherit them from ancestor editions, although (as explained in the previous post) this is impossible under the EBR rules

All of this is the result of the fact that Oracle delays the actualization of the dependent objects until they are used.

Let’s call the app_mgr.do_something procedure, and as a result Oracle will actualize and validate the package bodies.


V2> exec app_mgr.do_something

wow! this is not the base edition anymore

PL/SQL procedure successfully completed.

V2> select o.object_type,
  2         o.object_name,
  3         o.status,
  4         o.edition_name,
  5         nullif(
  6           d.referenced_name || ' [' || lower(d.referenced_type) || ']',
  7           ' []'
  8         ) depends_on
  9  from   user_objects      o,
 10         user_dependencies d
 11  where  d.name(+) = o.object_name
 12  and    d.type(+) = o.object_type
 13  and    d.referenced_owner(+) = user
 14  order by nvl2(o.edition_name, 2, 1),
 15           o.object_type,
 16           o.object_name,
 17           d.referenced_name;

OBJECT_TYPE  OBJECT_NAME  STATUS  EDITION_NAME DEPENDS_ON
------------ ------------ ------- ------------ --------------------
INDEX        PEOPLE_PK    VALID

TABLE        PEOPLE       VALID

PACKAGE      APP_MGR      VALID   ORA$BASE

             PEOPLE_DL    VALID   V2           PEOPLE [table]

PACKAGE BODY APP_MGR      VALID   V2           APP_MGR [package]
                                               PEOPLE_DL [package]

             PEOPLE_DL    VALID   V2           PEOPLE [table]
                                               PEOPLE_DL [package]


8 rows selected.

That’s better! Now everything is in order and makes sense.
But remember that we cannot count on the implicit actualization and revalidation mechanism in an online upgrade scenario, because it is deferred until the objects are actually used.

Explicit Actualization

So one of the most important pieces of advice I can give you about EBR is:

Before exposing a new edition, always actualize all the dependent objects explicitly

How to do it? Let’s repeat the demo, but this time we’ll explicitly actualize and revalidate the dependent objects.

-- connected as system
ORA$BASE> drop edition v2 cascade;

Edition dropped.

ORA$BASE> create edition v2;

Edition created.

ORA$BASE> grant use on edition v2 to demo_ebr;

Grant succeeded.

-- the developer session (connected as demo_ebr)
V1> @set_edition v2

Session altered.

V2> alter package people_dl compile specification;
Package altered.

V2> select o.object_type,
  2         o.object_name,
  3         o.status,
  4         o.edition_name,
  5         nullif(
  6           d.referenced_name || ' [' || lower(d.referenced_type) || ']',
  7           ' []'
  8         ) depends_on
  9  from   user_objects      o,
 10         user_dependencies d
 11  where  d.name(+) = o.object_name
 12  and    d.type(+) = o.object_type
 13  and    d.referenced_owner(+) = user
 14  order by nvl2(o.edition_name, 2, 1),
 15           o.object_type,
 16           o.object_name,
 17           d.referenced_name;

OBJECT_TYPE  OBJECT_NAME  STATUS  EDITION_NAME DEPENDS_ON
------------ ------------ ------- ------------ --------------------
INDEX        PEOPLE_PK    VALID

TABLE        PEOPLE       VALID

PACKAGE      APP_MGR      VALID   ORA$BASE

             PEOPLE_DL    VALID   V2           PEOPLE [table]

PACKAGE BODY APP_MGR      VALID   ORA$BASE     APP_MGR [package]

             PEOPLE_DL    VALID   V1           PEOPLE [table]


6 rows selected.

It’s important to note that it’s not enough to compile the objects that are in status “INVALID”, as you may do in non-EBR environments, because the objects may appear (misleadingly) as “VALID”, like in our example.
You can just go and compile all the objects in the schema, but this will unnecessarily actualize all the objects in the new edition.
You can use dbms_utility.compile_schema (with compile_all=>false) or the UTL_RECOMP procedures, but (at least in 11g) these procedures suffered from some nasty bugs (for example bug 13502183), so beware.
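For reference, the dbms_utility.compile_schema call looks like this (a sketch – subject to the caveats above):

begin
    dbms_utility.compile_schema(schema => user, compile_all => false);
end;
/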

I use the following script:


V2> --
V2> -- Name:    revalidate.sql
V2> -- Author:  Oren Nakdimon
V2> -- Tested:  11.2; 12.1; 12.2
V2> --
V2> -- this script is provided AS IS without warranty of any kind
V2> --
V2> declare
  2      type obj_t is record(
  3          object_name user_objects.object_name%type,
  4          namespace   user_objects.namespace%type);
  5      type obj_tt is table of obj_t;
  6
  7      l_obj_list obj_tt;
  8
  9      l_obj_count binary_integer := 0;
 10
 11  begin
 12      loop
 13          select object_name,namespace
 14          bulk   collect
 15          into   l_obj_list
 16          from   user_objects
 17          where  edition_name != sys_context('userenv', 'session_edition_name')
 18          or     status = 'INVALID';
 19
 20          exit when l_obj_list.count = l_obj_count;
 21
 22          l_obj_count := l_obj_list.count;
 23
 24          for i in 1 .. l_obj_count
 25          loop
 26              dbms_utility.validate(user, l_obj_list(i).object_name, l_obj_list(i).namespace);
 27          end loop;
 28      end loop;
 29  end;
 30  /

PL/SQL procedure successfully completed.

The script calls dbms_utility.validate for all the current user’s objects that are either:

  • Invalid

Or

  • Inherited by the current edition (i.e., they are actualized in an ancestor edition and not in the current edition)

If the object should be actualized, dbms_utility.validate does the actualization as part of the revalidation.
This process is done repeatedly until it has no more work to do (i.e., all the required actualization was done, and all the objects are either valid or cannot be validated [due to “real” compilation errors]).


V2> select o.object_type,
  2          o.object_name,
  3          o.status,
  4          o.edition_name,
  5          nullif(
  6            d.referenced_name || ' [' || lower(d.referenced_type) || ']',
  7            ' []'
  8          ) depends_on
  9   from   user_objects      o,
 10          user_dependencies d
 11   where  d.name(+) = o.object_name
 12   and    d.type(+) = o.object_type
 13   and    d.referenced_owner(+) = user
 14   order by nvl2(o.edition_name, 2, 1),
 15            o.object_type,
 16            o.object_name,
 17            d.referenced_name;

OBJECT_TYPE  OBJECT_NAME  STATUS  EDITION_NAME DEPENDS_ON
------------ ------------ ------- ------------ --------------------
INDEX        PEOPLE_PK    VALID

TABLE        PEOPLE       VALID

PACKAGE      APP_MGR      VALID   ORA$BASE

             PEOPLE_DL    VALID   V2           PEOPLE [table]

PACKAGE BODY APP_MGR      VALID   V2           APP_MGR [package]
                                               PEOPLE_DL [package]

             PEOPLE_DL    VALID   V2           PEOPLE [table]
                                               PEOPLE_DL [package]


8 rows selected.

For other parts of the EBR series, please visit the index page

EBR – Part 4: Invalidation and Actualization of Dependent Objects


This is part 4 of a post series about EBR.
In part 1 we created the baseline model and code – a table (PEOPLE) and two packages (PEOPLE_DL and APP_MGR).
In part 2 we saw that even a simple change – a package body compilation – can be dangerous in a busy system.
In part 3 we learned about editions and how they can be used for solving the problems described in part 2, so changes can be applied in a way that is both online and safe. I discussed enabling, creating and using editions; session edition; and editioned objects.
In this part we’ll see more challenges that online upgrades bring – this time when changing a package spec.

Visit the index page for all the parts of the series

Invalidation

This time we need to change the PEOPLE_DL package spec. There are no table changes, and of course, as we speak about EBR, the upgrade from the previous version to the new one should be online.

An online upgrade means that the application users should be able to continue working without interruption. The code objects that they use should remain valid and available at all times.

In addition to the challenges raised by the first use case – changing the PEOPLE_DL package body – the current use case introduces another challenge: invalidation. Changing the PEOPLE_DL package spec will cause its dependent objects to become invalid.
Even if the change does not cause any compilation errors in the dependent objects, they become invalid.
Actually, even if we just recompile an object, without making any changes in it, its dependent objects still become invalid.
Now, it’s true that in these cases revalidation will occur automatically as soon as the invalid object is used, but in an online upgrade scenario this is usually unacceptable.

Moreover, many times we need to change multiple interrelated objects. For example, changing the API of some procedure in one package, and consequently changing it in the package body and changing the calls to that procedure from another package. In this case we cannot avoid having broken objects – invalid and with compilation errors – during the upgrade, and this obviously cannot really be considered an online upgrade.

The Transaction Concept

It would be nice if we could compile all these objects in a single transaction, as one atomic operation, so that at any point in time the users see only a consistent view of all the objects – all of them in a “Valid” state.
Well, actually, editions allow us to achieve exactly this concept. We make all the changes in a new edition, and expose the new edition to the clients only when all the objects are valid.

Actualization

We start this post’s example at the same point we ended the previous post. We have two editions and four editioned objects. In the V1 edition one of the objects – the PEOPLE_DL package body – is actual, and the other three objects are inherited from the ORA$BASE edition:

Now we want to change the PEOPLE_DL spec, and we already know that the safe way to do it online is by using a new edition.

So we’ll create edition V2 and we’ll compile the new version of the PEOPLE_DL spec in V2, so the PEOPLE_DL spec will be actual in V2, like this:

But something is very wrong with this picture. Recall the following statement from the previous post:

By definition, when we are in the context of a specific edition, we see and use the objects that are actual in this edition and the inherited objects from ancestor editions, but never objects from descendant editions.
This observation may seem trivial, but in my opinion it’s the cornerstone of EBR. This is what lets us apply the new or changed code in the privacy of a new edition, where no client is exposed to the new version until we want to expose it.

In the picture, the PEOPLE_DL package body from V1 and the APP_MGR package body from ORA$BASE refer to the spec from V2, and this violates the “privacy rule”.
It means that whenever an object becomes actual in some edition, all its dependent objects must become actual in this edition as well. In our case, the correct picture is this:

Now, I have good news, bad news, very bad news, and really good news.

Good News

The good news is that this actualization of dependent objects happens automatically.

Bad News

The bad news is that it happens too late. It doesn’t happen when we actualize the object, but only when the dependent objects are used (just like with regular revalidation), and that means we cannot count on the automatic actualization in an online upgrade scenario.

Very Bad News

The very bad news (in my opinion, of course) is that once we actualize an object in the new edition, we get a false and very misleading picture from the data dictionary views, until the dependent objects are actualized as well:

  • By looking at the USER_OBJECTS view it seems that we still inherit the dependent objects from ancestor editions, although this is impossible under the EBR rules.
  • There is no indication that the dependent objects became invalid as a result of the actualization.
  • The dependencies from the dependent objects to the object we’ve just actualized disappear from the USER_DEPENDENCIES view.

All of this is a direct consequence of the fact that the actualization of the dependent objects is delayed.

Really Good News

So what is the really good news?
That once we know and understand this behavior, it’s easy to deal with it.
In the next post I’ll show how, as well as examples for everything described in this post.

For other parts of the EBR series, please visit the index page.

EBR – Part 3: Editions and Editioned Objects


This is part 3 of a post series about EBR.
In part 1 we created the baseline model and code – a table (PEOPLE) and two packages (PEOPLE_DL and APP_MGR).
In part 2 we saw that even a simple change – such as a package body compilation – can be dangerous in a busy system.
In this post we’ll see how EBR solves the problems described in the previous post, so such a change can be applied in a way that is both online and safe.

Visit the index page for all the parts of the series

Overview

The basic concept of EBR is that while the clients are connected to the database and use the objects – say the package pkg from the previous post – we can create another instance of pkg – a copy of pkg – and make all the necessary changes in this new instance.
We create this new pkg in the same schema, so now we actually have two packages named pkg in one schema. But each session sees only one of these packages – the clients continue seeing the old instance of pkg, and we the developers currently see the new instance of pkg.
When we are happy with the new implementation, we can expose it, so clients that will open new connections to the database from now on, will see the new pkg.
We achieve this using editions. Imagine that our schema is divided into multiple separate areas. At any point in time, each session sees one and only one area.
These separate areas are called editions, and we can keep basically any code object there – procedures, functions, packages, triggers, views, and more.
We can’t have two objects named pkg in the same edition, but we can in two separate editions.
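In a nutshell, the mechanics look like this (a sketch; the details follow in the rest of this post and the next ones):

-- as a privileged user: create the new edition and allow the schema to use it
create edition v2;
grant use on edition v2 to demo_ebr;

-- in the developer session: switch to the new edition and compile there,
-- creating a new, private instance of pkg
alter session set edition = v2;
create or replace package pkg as
    procedure do_something;
end pkg;
/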

Editions

Editions are database-level objects Continue reading

EBR – Part 2: Locking, Blocking and ORA-04068


This is part 2 of a post series about EBR.
In part 1 we created the baseline model and code – a table (PEOPLE) and two packages (PEOPLE_DL and APP_MGR).
In this post we’ll start handling the first type of change request: changing a package body.

Visit the index page for all the parts of the series

The Task

We need to change the implementation of the PEOPLE_DL package; i.e. we need to change the package body.
There are no API changes (the package spec is not changed) and no table changes.
And of course, as we speak about EBR, the upgrade from the base version to the new one should be online.

The Problems

Locking and Blocking

An online upgrade means Continue reading

EBR – Part 1: Overview and Setup


This is a link to an index page for all the parts of the series

I have been using EBR in a real production system for more than 4 years now.
EBR – an acronym for Edition-Based Redefinition – is a powerful and unique feature (or, more precisely, a set of features) that enables patching and upgrading live Oracle-based applications in an online fashion, with zero downtime.

As an Oracle Developer and DBA I find EBR one of the most important tools in my toolkit, and I take advantage of every opportunity to let other Oracle developers and DBAs know about it.

I love EBR because:

  • It enables us to make any change to active production systems – from a small bug fix to a major version upgrade – in an online fashion, using hot rollover: we expose a new version (a new edition) while users are still using the old version, and different users may use two (or even more) versions at the same time.
  • We perform the upgrade in the privacy of a new unexposed edition, which is a huge benefit, in my opinion, because it eliminates the usual pressure of completing the upgrade as soon as possible to reduce downtime, and the fear of accidentally harming a live system.
  • We can do the upgrade at any time, not necessarily at off-peak hours.
  • And EBR is supported in all the database editions (since Oracle 11.2), including Standard Edition, and requires no special license.

I’m starting today a series of posts about Continue reading

ODC Appreciation Day: Collections in SQL


Here’s my contribution to the ODC Appreciation Day.

Overview

Last week I had the privilege to participate in the EOUC Database ACES Share Their Favorite Database Things session at Oracle OpenWorld, so I think that the best topic to write about, as part of the ODC Appreciation Day, is the one I talked about in this session.
My 5-minute presentation was about Collections in SQL.

Collections are very useful in PL/SQL development. This is a well-known fact, and collections are indeed used a lot by PL/SQL developers.
But it is less well known that collections can also be very useful in SQL statements, not only in PL/SQL. The Oracle database supports user-defined collection types (Nested Tables and Varrays) and built-in functions that operate on collections, and they add yet more power to the already-powerful SQL language.
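As a tiny taste (a sketch; number_ntt is an assumed type name):

create type number_ntt as table of number;
/

select column_value
from   table(number_ntt(10, 20, 30));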

Examples

Here are some links to blog posts that I’ve written in the past, in which I used collections in SQL.

This post includes an example of a PL/SQL procedure that gets a collection parameter as input and needs to perform some manipulation on it. The implementation of this procedure becomes easy using Collection Unnesting – simply writing a SQL statement (MERGE in this case) that treats the collection parameter as if it were a simple table.

In this post we take advantage of the PowerMultiSet built-in function for creating a Kakuro cheat-sheet.

In this post we observe subtle differences between the two types of collections – Nested Tables and Varrays – with regard to the SQL functions that can be applied on them.

This post shows how sometimes collection functions such as SET and CARDINALITY enable solving a problem in a more concise and simple way than the alternatives.

And here is a simple solution for some problem that was asked on Ask TOM, using the SQL function with probably the longest name – POWERMULTISET_BY_CARDINALITY.


RETURNING INTO – Enhancement Suggestion


The RETURNING INTO clause is one of my favorite features.
It returns data from the rows that have been affected by the DML statement, and as I wrote in this previous post:
For INSERT it returns the after-insert values of the new row’s columns.
For UPDATE it returns the after-update values of the affected rows’ columns.
For DELETE it returns the before-delete values of the affected rows’ columns.
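For example, returning a generated key from an INSERT (a sketch; employees_seq and the local variable are assumed):

declare
    l_employee_id employees.employee_id%type;
begin
    insert into employees (employee_id, salary)
    values (employees_seq.nextval, 8000)
    returning employee_id into l_employee_id;
end;
/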

For INSERT there are no before-insert values, so the “after-insert values” is the only reasonable option.
Likewise, for DELETE there are no after-delete values, so the “before-delete values” is the only reasonable option.
For UPDATE, however, there are both before-update and after-update values, but currently the RETURNING INTO clause supports only the after-update values.

I think it would be really useful if, for UPDATE, we could get the before-update values in addition to the after-update values. This would also make the feature more symmetric and complete, in my opinion.
Currently, if we need to know the before-update values, we have to make two SQL statement calls: SELECT (FOR UPDATE) for the record(s) we’re about to update, and then the UPDATE itself.

As a simple example, consider the following procedure, which increases the salary of an employee by some amount, up to a maximum of 10000, and returns the percentage by which the salary was changed:

create or replace procedure raise_salary
(
    i_employee_id  in employees.employee_id%type,
    i_raise_amount in employees.salary%type,
    o_raise_pct    out number
) as
    l_old_salary employees.salary%type;
    l_new_salary employees.salary%type;
begin
    select e.salary
    into   l_old_salary
    from   employees e
    where  e.employee_id = i_employee_id
    for    update of e.salary;

    update employees e
    set    e.salary = least(e.salary + i_raise_amount, greatest (e.salary,10000))
    where  e.employee_id = i_employee_id
    returning e.salary into l_new_salary;

    o_raise_pct := l_new_salary / l_old_salary;

end raise_salary;

With the suggested enhancement, we could achieve the same functionality with one context switch to the SQL engine rather than two (obviously the syntax I’m using is made-up, just for this example):

create or replace procedure raise_salary
(
    i_employee_id  in employees.employee_id%type,
    i_raise_amount in employees.salary%type,
    o_raise_pct    out number
) as
    l_old_salary employees.salary%type;
    l_new_salary employees.salary%type;
begin

    update employees e
    set    e.salary = least(e.salary + i_raise_amount, greatest (e.salary,10000))
    where  e.employee_id = i_employee_id
    returning old e.salary, new e.salary 
    into l_old_salary, l_new_salary;

    o_raise_pct := l_new_salary / l_old_salary;

end raise_salary;

I submitted this suggestion to the Database Ideas space of the Oracle Developer Community (formerly known as OTN) web site.
If you think it’s a good idea, please vote up 🙂

Note

As a workaround – a dirty and confusing one, and most probably inferior performance-wise to the “real” solution – we can achieve the same functionality for this example today, using the fact that a scalar subquery in the RETURNING INTO clause obeys the statement-level read consistency rule: it returns the data as it was before the DML happened:

create or replace procedure raise_salary
(
    i_employee_id  in employees.employee_id%type,
    i_raise_amount in employees.salary%type,
    o_raise_pct    out number
) as
    l_old_salary employees.salary%type;
    l_new_salary employees.salary%type;
begin

    update employees e
    set    e.salary = least(e.salary + i_raise_amount, greatest (e.salary,10000))
    where  e.employee_id = i_employee_id
    returning 
      (select e1.salary from employees e1 where e1.employee_id = i_employee_id),
      e.salary
    into l_old_salary, l_new_salary;

    o_raise_pct := l_new_salary / l_old_salary;

end raise_salary;