EBR – Part 1: Overview and Setup

Here is a link to an index page for all the parts of the series.

I have been using EBR in a real production system for more than 4 years now.
EBR – an acronym for Edition-Based Redefinition – is a powerful and unique feature (or, more precisely, a set of features) that enables patching and upgrading live Oracle-based applications in an online fashion, with zero downtime.

As an Oracle Developer and DBA I find EBR one of the most important tools in my toolkit, and I take advantage of every opportunity to let other Oracle developers and DBAs know about it.

I love EBR because:

  • It enables us to make any change to active production systems – from a small bug fix to a major version upgrade – in an online fashion, using hot rollover: we expose a new version (a new edition) while users are still using the old version, and different users may use two (or even more) versions at the same time.
  • We perform the upgrade in the privacy of a new, unexposed edition, which is a huge benefit, in my opinion, because it eliminates the usual pressure to complete the upgrade as soon as possible to reduce downtime, and the stress of worrying about accidentally harming a live system.
  • We can do the upgrade at any time, not necessarily at off-peak hours.
  • And EBR is supported in all database editions (since Oracle 11.2), including Standard Edition, and requires no special license.

I’m starting today a series of posts about EBR.

ODC Appreciation Day: Collections in SQL

Here’s my contribution to the ODC Appreciation Day.

Overview

Last week I had the privilege to participate in the EOUC Database ACES Share Their Favorite Database Things session at Oracle OpenWorld, so I think that the best topic to write about, as part of the ODC Appreciation Day, is the one I talked about in this session.
My 5-minute presentation was about Collections in SQL.

Collections are very useful in PL/SQL development. This is a well-known fact, and collections are indeed used a lot by PL/SQL developers.
But it is less known that collections can be very useful also in SQL statements, not only in PL/SQL. The Oracle database supports user-defined collection types (Nested Tables and Varrays) and built-in functions that operate on collections, and they add yet more power to the already-powerful SQL language.
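
As a quick taste, here is a minimal sketch (the INTEGER_NTT type is created here just for the example):

create type integer_ntt as table of integer
/

-- COLLECT aggregates values into a collection, and CARDINALITY then
-- operates on that collection – all within a single SQL statement:
select cardinality(cast(collect(level) as integer_ntt)) cnt
from   dual
connect by level <= 5;

       CNT
----------
         5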

Examples

Here are some links to blog posts that I’ve written in the past, in which I used collections in SQL.

This post includes an example of a PL/SQL procedure that receives a collection parameter as input and needs to perform some manipulation on it. The implementation of this procedure becomes easy using Collection Unnesting – simply writing a SQL statement (a MERGE in this case) that treats the collection parameter as if it were a simple table.
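
The idea, in a nutshell, looks something like this (a hypothetical sketch – the MEMBERS table, the INTEGER_NTT type and the procedure name are made up for illustration, not taken from the linked post):

create or replace procedure add_members
(
    i_member_ids in integer_ntt -- a nested table of integers
) as
begin
    -- Collection Unnesting: the TABLE operator lets the MERGE treat
    -- the collection parameter as if it were a simple table
    merge into members m
    using (select t.column_value member_id
           from   table(i_member_ids) t) src
    on (m.member_id = src.member_id)
    when not matched then
        insert (member_id) values (src.member_id);
end add_members;
/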

In this post we take advantage of the PowerMultiSet built-in function for creating a Kakuro cheat-sheet.

In this post we observe subtle differences between the two types of collections – Nested Tables and Varrays – with regard to the SQL functions that can be applied on them.

This post shows how sometimes collection functions such as SET and CARDINALITY enable solving a problem in a more concise and simple way than the alternatives.

And here is a simple solution to a problem that was asked on Ask TOM, using the SQL function with probably the longest name – POWERMULTISET_BY_CARDINALITY.


RETURNING INTO – Enhancement Suggestion

Update: this enhancement suggestion was accepted, and it is now part of Oracle Database 23c.

The RETURNING INTO clause is one of my favorite features.
It returns data from the rows that have been affected by the DML statement, and as I wrote in this previous post:
  • For INSERT it returns the after-insert values of the new row’s columns.
  • For UPDATE it returns the after-update values of the affected rows’ columns.
  • For DELETE it returns the before-delete values of the affected rows’ columns.

For INSERT there are no before-insert values, so the after-insert values are the only reasonable option.
Likewise, for DELETE there are no after-delete values, so the before-delete values are the only reasonable option.
For UPDATE, however, there are both before-update and after-update values, but currently the RETURNING INTO clause supports only the after-update values.

I think it would be really useful if, for UPDATE, we could get the before-update values in addition to the after-update values. This would also make the feature more symmetric and complete, in my opinion.
Currently, if we need to know the before-update values, we have to make two SQL statement calls: SELECT (FOR UPDATE) for the record(s) we’re about to update, and then the UPDATE itself.

As a simple example, consider the following procedure, which increases the salary of an employee by some amount, up to a maximum of 10000, and returns the ratio between the new salary and the old one:

create or replace procedure raise_salary
(
    i_employee_id  in employees.employee_id%type,
    i_raise_amount in employees.salary%type,
    o_raise_pct    out number
) as
    l_old_salary employees.salary%type;
    l_new_salary employees.salary%type;
begin
    select e.salary
    into   l_old_salary
    from   employees e
    where  e.employee_id = i_employee_id
    for    update of e.salary;

    update employees e
    set    e.salary = least(e.salary + i_raise_amount, greatest (e.salary,10000))
    where  e.employee_id = i_employee_id
    returning e.salary into l_new_salary;

    o_raise_pct := l_new_salary / l_old_salary;

end raise_salary;

With the suggested enhancement, we could achieve the same functionality with one context switch to the SQL engine rather than two (obviously the syntax I’m using is made-up, just for this example):

create or replace procedure raise_salary
(
    i_employee_id  in employees.employee_id%type,
    i_raise_amount in employees.salary%type,
    o_raise_pct    out number
) as
    l_old_salary employees.salary%type;
    l_new_salary employees.salary%type;
begin

    update employees e
    set    e.salary = least(e.salary + i_raise_amount, greatest (e.salary,10000))
    where  e.employee_id = i_employee_id
    returning old e.salary, new e.salary 
    into l_old_salary, l_new_salary;

    o_raise_pct := l_new_salary / l_old_salary;

end raise_salary;

I submitted this suggestion to the Database Ideas space of the Oracle Developer Community (formerly known as OTN) website.
If you think it’s a good idea, please vote up 🙂

Note

As a workaround – a dirty and confusing one, and most probably inferior performance-wise to the “real” solution – we can achieve the same functionality for this example today, using the fact that a scalar subquery in the RETURNING INTO clause obeys the statement-level read consistency rule, i.e. it returns the data as it was before the DML happened:

create or replace procedure raise_salary
(
    i_employee_id  in employees.employee_id%type,
    i_raise_amount in employees.salary%type,
    o_raise_pct    out number
) as
    l_old_salary employees.salary%type;
    l_new_salary employees.salary%type;
begin

    update employees e
    set    e.salary = least(e.salary + i_raise_amount, greatest (e.salary,10000))
    where  e.employee_id = i_employee_id
    returning 
      (select e1.salary from employees e1 where e1.employee_id = i_employee_id),
      e.salary
    into l_old_salary, l_new_salary;

    o_raise_pct := l_new_salary / l_old_salary;

end raise_salary;

Implementing Arc Relationships with Virtual Columns? Or Not?

I wrote a post some time ago about implementing arc relationships using virtual columns.
Recently, Toon Koppelaars wrote a detailed and reasoned comment on that post. Since I admire Toon, getting his point of view on something that I wrote is a privilege for me, regardless of whether he agrees with me or disagrees (and just to be clear, it’s the latter this time). I think that having a public (and civilized) discussion – this time about principles of data modeling and implementation – is a good thing, even if in the end we don’t convince each other.
Therefore I thought Toon’s comment deserves a post of its own. So I’ll quote here everything that he wrote in the comment, with my response after every sentence or paragraph. And everyone is welcome to add their own points of view in the comments section.

I’d like to emphasize that basically I agree with Toon about the general guidelines. What makes life harder, as always, are the nuances, or as we know them well as “it depends”.

And just one general observation first. The starting point of the original post was that the design included an arc. That was the given fact, and my intent was to suggest an implementation for it. I don’t want to put words in your mouth (or keyboard), Toon, but I think that the basis for your arguments is that the arc, as a design concept, is wrong. I think you would prefer to see several entities in the ERD instead of the one with the arc. Am I right in this interpretation?
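
(For readers who haven’t read the original post, the suggested implementation was essentially along these lines – a condensed sketch, assuming PEOPLE and COMPANIES tables with numeric primary keys; the exact DDL is in the original post:)

create table entity_addresses (
  address_id    integer not null,
  owner_type_id char(1) not null check (owner_type_id in ('P','C')),
  owner_id      integer not null,
  -- each virtual column is non-null only for "its" owner type,
  -- so a regular foreign key constraint can be defined on it
  person_id  invisible generated always as
             (decode(owner_type_id, 'P', owner_id)) virtual
             constraint entity_addresses_fk_people references people,
  company_id invisible generated always as
             (decode(owner_type_id, 'C', owner_id)) virtual
             constraint entity_addresses_fk_companies references companies
);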

Toon wrote:
Oren, I just cannot, not comment on this post 😉

And I really appreciate it. Really.

Toon wrote:
Re: Multiple Tables
1) What do you mean by “maintaining another table”? And why is that a disadvantage?

I’ll refer to it after the next paragraph.

Toon wrote:
In my opinion, you should always have a software factory in which the act of introducing a new table is cheap. Why? Because otherwise you end up with database designs with few tables, into which more than few “things” are stored. Which is bad. It’s bad for understandability, it’s bad for performance, it’s bad for future maintenance.

And I completely agree with you.

Toon wrote:
Apparently adding a table is not cheap in your factory?
But, yet, dropping and recreating a CHECK constraint (to involve a new single-char value) is cheap?

As you know me and my database development principles, beliefs and guidelines (for example, here and here), I assume this is a rhetorical question. Of course adding a table is cheap. Actually it is one of the cheapest tasks.
And dropping and recreating a CHECK constraint is not expensive either. It may take some time to VALIDATE it if the table is big, but this is not a reason not to do it. And anyway, if, in order to implement a good design, something is “expensive” (but still realistic), it should not be a reason not to do it (at least, not a good reason).

But, if we have, by design, several tables that are identical – in column names, data types, indexes, constraints – everything except for the table on the other side of the “owner” foreign key, then I start to feel uncomfortable. I get the same feeling that I get when I see two procedures that do the same (or almost the same) thing.
And although many times the arc appears for simple join tables, as in my example, this is not always the case. The entity with the arc may include many columns.
In addition, by “maintaining another table” I did not mean just the CREATE TABLE statement. When a new attribute is added to the designed entity, we need to add a new column – exactly the same column – to all the tables. And the same is true for any change – renaming a column, changing data types, adding or removing constraints, etc.
In my opinion, if I have to repeat the same work over and over instead of doing it just once, I’m probably doing something wrong.
It also reminds me of denormalization. Usually when we talk about denormalization we mean that the same data is kept more than once. I think it’s the same here – just that the duplicate data is kept in the data dictionary tables and not in our schema. And one of the disadvantages of denormalization is that it increases the risk of bugs; in this case, due to a human error we may end up with PERSON_ADDRESSES and COMPANY_ADDRESSES that are not exactly the same, although they should be.

Toon wrote:
2) Why is the implied code-difference for this solution, more difficult to maintain?

Here I was mainly thinking about the lack of reusability.

Toon wrote:
Instead of having these two inserts:
insert into entity_addresses values(AID1,'P',PID1), and insert into entity_addresses values(AID2,'C',CID1).

You’d have these two:
insert into person_addresses(AID1,PID1), and insert into company_addresses(AID2,CID1).

Why is the former “cheaper”?

I was thinking of writing only one INSERT statement:

CREATE OR REPLACE PACKAGE BODY address_mgr AS

    PROCEDURE add_entity_address
    (
        i_address_id    IN entity_addresses.address_id%TYPE,
        i_owner_type_id IN entity_addresses.owner_type_id%TYPE,
        i_owner_id      IN entity_addresses.owner_id%TYPE
    ) IS
    BEGIN
        INSERT INTO entity_addresses
            (address_id,
             owner_type_id,
             owner_id)
        VALUES
            (i_address_id,
             i_owner_type_id,
             i_owner_id);
    END add_entity_address;
.
.
.
END address_mgr;
/

Toon wrote:
And this trickles down to all other kinds of SQL statements. Why is:

select *
from addresses a
,entity_addresses ea
,persons p
where a.id = ea.address_id
and ea.object_type = 'P'
and ea.object_id = p.person_id

apparently easier to write and/or maintain (i.e. cheaper), than:

select *
from addresses a
,person_addresses pa
,persons p
where a.id = pa.address_id
and pa.person_id = p.person_id

?
I just don’t get that…

Again, I was thinking of more generic code, rather than duplicating the same code pattern. Something like this:

create type address_t as object (
  id integer,
  street varchar2(30),
  house_number varchar2(10),
  city varchar2(30),
  country varchar2(30)
)
/

create type address_tt as table of address_t 
/

CREATE OR REPLACE PACKAGE BODY address_mgr AS
.
.
.
    PROCEDURE get_addresses
    (
        i_owner_type_id IN entity_addresses.owner_type_id%TYPE,
        i_owner_id      IN entity_addresses.owner_id%TYPE,
        o_addresses     OUT address_tt
    ) IS
    BEGIN
        SELECT address_t(a.id, a.street, a.house_number, a.city, a.country)
        BULK   COLLECT
        INTO   o_addresses
        FROM   addresses        a,
               entity_addresses ea
        WHERE  ea.owner_type_id = i_owner_type_id
        AND    ea.owner_id = i_owner_id
        AND    a.id = ea.address_id;
    END get_addresses;
.
.
.
END address_mgr;
/

CREATE OR REPLACE PACKAGE BODY person_mgr AS

    PROCEDURE get_person
    (
        i_person_id  IN people.id%TYPE,
        o_first_name OUT people.first_name%TYPE,
        o_last_name  OUT people.last_name%TYPE,
        o_addresses  OUT address_tt
    ) IS
    BEGIN
        SELECT p.first_name,
               p.last_name
        INTO   o_first_name,
               o_last_name
        FROM   people p
        WHERE  p.id = i_person_id;
    
        address_mgr.get_addresses(i_owner_type_id => 'P',
                                  i_owner_id      => i_person_id,
                                  o_addresses     => o_addresses);
    END get_person;
.
.
.
END person_mgr;
/

CREATE OR REPLACE PACKAGE BODY company_mgr AS

    PROCEDURE get_company
    (
        i_company_id          IN companies.id%TYPE,
        o_name                OUT companies.name%TYPE,
        o_number_of_employees OUT companies.number_of_employees%TYPE,
        o_description         OUT companies.description%TYPE,
        o_addresses           OUT address_tt
    ) IS
    BEGIN
        SELECT c.name,
               c.number_of_employees,
               c.description
        INTO   o_name,
               o_number_of_employees,
               o_description
        FROM   companies c
        WHERE  c.id = i_company_id;
    
        address_mgr.get_addresses(i_owner_type_id => 'C',
                                  i_owner_id      => i_company_id,
                                  o_addresses     => o_addresses);
    END get_company;
.
.
.
END company_mgr;
/

By the way, we can achieve the same level of code reusability with the “multiple tables” option, by using dynamic SQL instead of static SQL, but I think neither of us wants to go there, do we?
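
Just to illustrate the point (a hypothetical sketch, reusing the address_t/address_tt types from above – and an argument against itself, since both the table name and the column name must be glued in at runtime, losing compile-time checking and dependency tracking):

    PROCEDURE get_addresses
    (
        i_owner_type_id IN VARCHAR2,
        i_owner_id      IN INTEGER,
        o_addresses     OUT address_tt
    ) IS
        l_table  VARCHAR2(30) := CASE i_owner_type_id
                                     WHEN 'P' THEN 'person_addresses'
                                     WHEN 'C' THEN 'company_addresses'
                                 END;
        l_column VARCHAR2(30) := CASE i_owner_type_id
                                     WHEN 'P' THEN 'person_id'
                                     WHEN 'C' THEN 'company_id'
                                 END;
    BEGIN
        EXECUTE IMMEDIATE
            'select address_t(a.id, a.street, a.house_number, a.city, a.country)
             from   addresses a, ' || l_table || ' oa
             where  oa.' || l_column || ' = :owner_id
             and    a.id = oa.address_id'
            BULK COLLECT INTO o_addresses
            USING i_owner_id;
    END get_addresses;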

Toon wrote:
To me this alternative (Multiple Tables) *is* the way to deal with the information requirement.
Having entity_addresses introduces a column in my design (owner_id) whose meaning depends upon the value of another column (owner_type), which is just bad in my opinion.

Isn’t this the case with every composite key, that we need the values of all the columns that compose the key in order to uniquely identify the entity?

Toon wrote:
This alternative also offers the best chances of being able to easily cater for possible future differences in information requirements at the relationship-level. Eg. we may want to track since when a person has had a particular address, but we don’t care for that information requirement in case of companies: you then just add the ‘as-of’ date column to the person_addresses table only, and be done with it.

In my opinion, this becomes a different case. If the entities are not identical in their attributes, I would consider them different entities, and therefore each one deserves its own table. And since I’m not afraid of changing my schema and my code, I would split the generic table into multiple tables only when the need arises.

Toon wrote:
It is the most simple, and easy to understand for others that come join the maintenance team.
Imagine coming in and seeing the table with invisible generated virtual columns that have FK’s on them. Hmmm…

I would probably say: “what a cool idea” 😉

PL/SQL in SQL in View in SQL in PL/SQL

I presented “Write Less (Code) With More (Oracle 12c New Features)” yesterday at OGh Tech Experience 2017.
One of the features I talked about was PL/SQL in the WITH Clause. One of the restrictions of this feature is that you cannot embed in PL/SQL a static SQL query that contains PL/SQL in the WITH clause (see the section PL/SQL in SQL in PL/SQL in this post).
I was asked, regarding this restriction, if it’s possible to embed in PL/SQL a static “regular” SQL query that selects from a view that contains PL/SQL in the WITH clause. The answer is yes, since the restriction is only syntactic.

Using the same example from the original post:

create view people_with_dates_v as      
with 
  function is_date(i_info in varchar2) return number as
    l_date date;
  begin
    if i_info is null then
      return 0;
    else
      l_date := to_date(i_info, 'dd/mm/yyyy');
      return 1;
    end if;
  exception
    when others then
      return 0;
  end;
select p.*
from   people p
where  is_date(p.general_info) = 1
/

View created.
create or replace procedure show_date_people as
begin
    for l_rec in (
      select * from people_with_dates_v
      )
    loop
        dbms_output.put_line(l_rec.person_id || ': ' || l_rec.first_name || ' ' || l_rec.last_name);
    end loop;
end show_date_people;
/
> exec show_date_people
102: Paul McCartney
202: Ella Fitzgerald
203: Etta James

PL/SQL procedure successfully completed.

RETURNING INTO

The RETURNING INTO clause is one of my favorite PL/SQL features. It allows us to write less code, improves readability, and reduces the number of context switches between PL/SQL and SQL.
In this post I’d like to highlight some less-known characteristics of the RETURNING INTO clause and emphasize differences that exist when it is used in different DML statements.

Supported Statements

The RETURNING INTO clause is supported by the UPDATE, DELETE, and single-table single-row (“values-based”) INSERT statements.
It is not supported by subquery-based INSERT, multi-table INSERT, and MERGE. Some additional restrictions are documented.
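
A quick illustration, using the T table that is created in the next section (the subquery-based variant is shown commented out, because with it the block does not even compile):

declare
    ret number;
begin
    -- values-based, single-row INSERT: RETURNING INTO is supported
    insert into t (data) values (40) returning id into ret;

    -- subquery-based INSERT: not supported; uncommenting these lines
    -- makes the whole block fail to compile
    -- insert into t (data)
    -- select 50 from dual
    -- returning id into ret;

    rollback;
end;
/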

What is returned?

The RETURNING INTO clause returns data from the rows that have been affected by the DML statement.
  • For INSERT it returns the after-insert values of the new row’s columns.
  • For UPDATE it returns the after-update values of the affected rows’ columns.
  • For DELETE it returns the before-delete values of the affected rows’ columns.

> create table t (
    id number generated as identity,
    data number
  );

Table created.

> declare
    ret1 number;
    ret2 number;
  begin
    insert into t (data) values (10) returning id,data into ret1,ret2;
    dbms_output.put_line('id='||ret1||' ; data='||ret2);
    insert into t (data) values (20) returning id,data into ret1,ret2;
    dbms_output.put_line('id='||ret1||' ; data='||ret2);
    insert into t (data) values (30) returning id,data into ret1,ret2;
    dbms_output.put_line('id='||ret1||' ; data='||ret2);
    commit;
  end;
  /

id=1 ; data=10
id=2 ; data=20
id=3 ; data=30

PL/SQL procedure successfully completed.
> declare
    ret number;
  begin
    update t set data=111 where id=1 returning data into ret;
    dbms_output.put_line('post-update value = '||ret);
    delete t where id=2 returning data into ret;
    dbms_output.put_line('pre-delete value = '||ret);
    rollback;
  end;
  /

post-update value = 111
pre-delete value = 20

PL/SQL procedure successfully completed.

Expressions

The returned values are not limited to simple column names. They can be virtually any expression that you can place in the select list of a SELECT statement on the affected table.

> declare
    ret1 number;
    ret2 varchar2(10);
  begin
    update t
    set data = round(dbms_random.value(1,100))
    where id = 1
    returning data, decode(mod(data,2),0,'even','odd')
    into ret1, ret2;
 
    dbms_output.put_line('the new value is '||ret1||', an '||ret2||' number');
    rollback;
  end;
  /

the new value is 19, an odd number

PL/SQL procedure successfully completed.

Warning: you can even include in the returned expression a scalar subquery against the affected table. This is probably not a very good idea, especially with INSERT and UPDATE, which return the post-DML values. The scalar subquery, on the other hand, obeys the statement-level read consistency rule – it returns the data as it was before the DML happened. So it may lead to confusing inconsistent results:

> declare
    ret1 number;
    ret2 number;
  begin
    update t
    set data = 29081969
    where id = 1
    returning data, (select max(data) from t)
    into ret1, ret2;

    dbms_output.put_line('just set DATA of a row to ' || ret1 ||
                         ' and the maximum value is (no! *was*) ' || ret2);
    rollback;
  end;
  /

just set DATA of a row to 29081969 and the maximum value is (no! *was*) 30

PL/SQL procedure successfully completed.

BULK COLLECT

The returned values can be placed into collection variables, using RETURNING with the BULK COLLECT INTO clause. This is very useful for the UPDATE and DELETE statements, which may affect multiple rows in a single statement (but this syntax is also supported for the INSERT statement – perhaps as a preparation for supporting RETURNING in multi-row INSERTs in the future?).

> declare
    type ntt is table of number;
    ret1 ntt;
    ret2 ntt;
  begin
    update t
    set data = data * 2
    returning id, data
    bulk collect into ret1, ret2;
 
    for i in 1..ret1.count loop
      dbms_output.put_line('id='||ret1(i)||' ; data='||ret2(i));
    end loop;
 
    rollback;
  end;
  /

id=1 ; data=20
id=2 ; data=40
id=3 ; data=60

PL/SQL procedure successfully completed.

Aggregation

For UPDATE and DELETE the returned expression can be an aggregate function. The aggregation is performed on the affected rows, and (since the aggregated value is scalar) BULK COLLECT is not needed when using this option. This is not supported for INSERT.

> declare
    ret1 number;
    ret2 number;
    ret3 varchar2(100);
  begin
    update t
    set data = data * 2
    where id < 3
    returning sum(data),
              avg(data),
              listagg(id||':'||data,',') within group (order by id)
    into ret1, ret2, ret3;
 
    dbms_output.put_line('sum(data) of updated rows = ' || ret1);
    dbms_output.put_line('avg(data) of updated rows = ' || ret2);
    dbms_output.put_line('new values = ' || ret3);
 
    rollback;
  end;
  /
sum(data) of updated rows = 60
avg(data) of updated rows = 30
new values = 1:20,2:40

PL/SQL procedure successfully completed.

RETURNING or RETURN?

One last anecdote: the RETURN keyword can be used instead of RETURNING. I prefer RETURNING.
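
For example, this behaves exactly like the DELETE example above (a quick sketch using the same T table):

> declare
    ret number;
  begin
    delete t where id=3 return data into ret;
    dbms_output.put_line('pre-delete value = '||ret);
    rollback;
  end;
  /

pre-delete value = 30

PL/SQL procedure successfully completed.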

What about MERGE?

Like many others, I wish the MERGE statement would support the RETURNING INTO clause as well.
However, since MERGE is a combination of INSERT, UPDATE and DELETE, and given the differences in the way RETURNING works with these 3 DML statements (as listed above), supporting RETURNING INTO in MERGE is clearly not a simple task.
It may also lead to confusing situations – think for example about a MERGE statement that includes the UPDATE SET and DELETE WHERE clauses – what will RETURNING return for rows affected by the DELETE WHERE clause? The pre-MERGE value, the post-UPDATE-pre-DELETE value, or perhaps simply 42?

COLLECT DISTINCT in PL/SQL Works in Oracle 12.2

About a year ago I wrote the post Subtleties – Part 1 (SQL and PL/SQL). I wrote there:

Almost every valid SQL statement (i.e., one that is executed successfully by the SQL engine) can be embedded successfully as static SQL in PL/SQL. Almost, but not every statement.
One example is the COLLECT aggregate function with the DISTINCT option.

And I showed an example that was executed in 11.2.0.4 and in 12.1.0.2.

Today I tried it in 12.2.0.1, and I was very pleased to see that COLLECT DISTINCT now works in PL/SQL as well.
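
For example, a block like the following (assuming the INTEGER_NTT type and the PROJECT_ASSIGNMENTS table that appear in the Subtleties posts below) failed to compile in 12.1, and compiles and runs successfully in 12.2:

declare
    l_projects integer_ntt;
begin
    select cast(collect(distinct pa.project_id) as integer_ntt)
    into   l_projects
    from   project_assignments pa;
    dbms_output.put_line(l_projects.count || ' distinct projects');
end;
/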

The “Guardian Trigger” Design Pattern

Suppose that every time we add records into the T1 table we have to do some additional stuff.
One option to implement this is by using an AFTER INSERT trigger that will perform this additional stuff, but I really dislike this option (mainly because the code becomes hidden in a way, and there may be a negative impact on performance).
I prefer writing a procedure that inserts the records into T1 and performs this additional stuff.

CREATE OR REPLACE PACKAGE my_package AS 
    PROCEDURE insert_t1 (...);
END my_package;
/
CREATE OR REPLACE PACKAGE BODY my_package AS 
    PROCEDURE insert_t1 (...) IS 
    BEGIN 
        INSERT INTO t1 ...;
        -- do some additional stuff, that must be done
        -- whenever inserting new records into T1 
    END insert_t1;
END my_package;
/

But what will happen if someone (perhaps even myself a year from now) inserts records into T1 from another place (in another procedure or directly via an INSERT statement) and doesn’t know (or forgets) about the mandatory additional stuff?
The best thing would be to make this procedure the one and only way to insert records into T1. But how can we do that?
I like doing it in a way that I call “the guardian trigger”, using a combination of a global variable and a trigger (but this trigger doesn’t do any actual work – it is only a gatekeeper).

I’m adding a Boolean global variable, initialized to FALSE. I change it to TRUE at the beginning of the insert_t1 procedure, and make sure to set it back to FALSE when leaving the procedure.

CREATE OR REPLACE PACKAGE BODY my_package AS 
    g_is_insert_into_t1_allowed BOOLEAN := FALSE; 
    PROCEDURE insert_t1 (...) IS 
    BEGIN 
        g_is_insert_into_t1_allowed := TRUE; 
        INSERT INTO t1 ...;
        -- do some additional stuff, that must be done
        -- whenever inserting new records into T1 
        g_is_insert_into_t1_allowed := FALSE; 
    EXCEPTION 
        WHEN OTHERS THEN 
            g_is_insert_into_t1_allowed := FALSE; 
            RAISE; 
    END insert_t1;
END my_package;
/

Now I’m adding a function – a public function – to get the current value of the global variable:

CREATE OR REPLACE PACKAGE my_package AS 
    FUNCTION is_insert_into_t1_allowed RETURN BOOLEAN;  
    PROCEDURE insert_t1 (...);
END my_package;
/

CREATE OR REPLACE PACKAGE BODY my_package AS 
    g_is_insert_into_t1_allowed BOOLEAN := FALSE;
    FUNCTION is_insert_into_t1_allowed RETURN BOOLEAN IS 
    BEGIN 
        RETURN g_is_insert_into_t1_allowed; 
    END is_insert_into_t1_allowed;  
    PROCEDURE insert_t1 (...) IS 
    BEGIN 
        g_is_insert_into_t1_allowed := TRUE;
        INSERT INTO t1 ...;
        -- do some additional stuff, that must be done
        -- whenever inserting new records into T1 
        g_is_insert_into_t1_allowed := FALSE;
    EXCEPTION 
        WHEN OTHERS THEN 
            g_is_insert_into_t1_allowed := FALSE;
            RAISE;
    END insert_t1;
END my_package;
/

And as the final step I’m creating a statement-level trigger – BEFORE INSERT ON T1. This trigger doesn’t change anything; it just checks that the global variable is currently set to TRUE.

CREATE OR REPLACE TRIGGER t1_trig
    BEFORE INSERT ON t1
BEGIN 
    IF NOT my_package.is_insert_into_t1_allowed THEN 
        raise_application_error(-20000,
          'INSERTs into T1 are allowed only via my_package.insert_t1');
    END IF;
END t1_trig;
/

So if I try to insert records into T1 not via the procedure, the trigger will not allow it; it will raise an exception saying “INSERTs into T1 are allowed only via my_package.insert_t1”.
Obviously this pattern can be implemented for other operations, not only for INSERT.
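
For example, a hypothetical variant that guards all DML on T1 could look like this (assuming the package exposes a corresponding is_dml_on_t1_allowed function, built exactly like is_insert_into_t1_allowed above):

CREATE OR REPLACE TRIGGER t1_trig
    BEFORE INSERT OR UPDATE OR DELETE ON t1
BEGIN 
    IF NOT my_package.is_dml_on_t1_allowed THEN 
        raise_application_error(-20000,
          'DML on T1 is allowed only via my_package');
    END IF;
END t1_trig;
/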

OTN Appreciation Day: Edition-Based Redefinition

Here’s my contribution to the OTN Appreciation Day.

Edition-Based Redefinition (EBR) is a really great feature, or more accurately a set of features, that was introduced in Oracle 11.2 and allows for online application upgrades using hot rollover. If you know me then you know that I talk (ok, preach) about it a lot, based on long and successful experience.

It is impossible to learn EBR in 5 minutes, but perhaps you’ll understand why I like EBR from this concise 5-minute introductory video, and hopefully you’ll be intrigued to learn more:

Subtleties – Part 2 (Nested Tables and Varrays)

In Part 1 we saw that the SQL function COLLECT with the DISTINCT option is not natively supported in PL/SQL.
One suggested workaround was to apply the SET function on the result of the “simple” COLLECT function (without the DISTINCT option).
This works fine, in both SQL and PL/SQL, as long as the collection type that we use is Nested Table.

create type integer_ntt as table of integer
/

select person_id,set(cast(collect(project_id) as integer_ntt)) project_id_list
from project_assignments
group by person_id
order by person_id;

PERSON_ID PROJECT_ID_LIST
---------- -----------------------------------------
       101 INTEGER_NTT(1, 3, 2)
       102 INTEGER_NTT(2)
       103 INTEGER_NTT(3)

But what if the collection type we use is a Varray and not a Nested Table?

create type integer_vt as varray(100) of integer
/

select person_id,set(cast(collect(project_id) as integer_vt)) project_id_list 
from project_assignments 
group by person_id
order by person_id;

select person_id,set(cast(collect(project_id) as integer_vt)) project_id_list
                     *
ERROR at line 1:
ORA-00932: inconsistent datatypes: expected UDT got DEMO.INTEGER_VT

So not every operation that we can do with one collection type (Nested Table) is supported with the other collection type (Varray).
SET is one example of that, and actually it makes sense: the elements within a set are unordered by definition, while a Varray is an ordered list of elements.
Another example of a function that works for nested tables but not for varrays is the CARDINALITY collection function:

select cardinality(integer_ntt(7,8,9)) from dual;

CARDINALITY(INTEGER_NTT(7,8,9))
-------------------------------
                              3

select cardinality(integer_vt(7,8,9)) from dual;

select cardinality(integer_vt(7,8,9)) from dual
                   *
ERROR at line 1:
ORA-00932: inconsistent datatypes: expected UDT got DEMO.INTEGER_VT
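
By the way, if CARDINALITY-like functionality is needed for a varray, one workaround that seems to work (an assumption worth verifying – it is not part of the original post) is to CAST the varray into a nested table first, since CAST can convert between collection types with the same element type. The following should return 3:

select cardinality(cast(integer_vt(7,8,9) as integer_ntt)) from dual;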