EBR – Part 13: The Trouble with Foreign Keys with ON DELETE Clause and Related Triggers

This is part 13 of a post series about Oracle Edition-Based Redefinition.

Visit the index page for all the parts of the series

Introduction

As I wrote in Part 7: Editioning Views:

Editioning views are the interface between the application and the tables

This is one of the most important rules when using EBR.
Code objects should never refer to tables.
Instead of referring to a table, code objects should refer to the corresponding editioning view – the editioning view that covers that table.

Code objects include packages, procedures, functions, views, regular (i.e., not cross-edition) triggers, etc. You can see the full list in Part 6: Editionable and Non-Editionable, Editioned and Non-Editioned.
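As a reminder, here is a minimal sketch of this setup (the table and view names are made up for illustration, and the schema is assumed to be editions-enabled):

```sql
-- The base table; code objects should never reference it directly
create table employees_tbl (
  employee_id number primary key,
  first_name  varchar2(30)
);

-- The editioning view that covers the table; this is what code references
create editioning view employees as
  select employee_id, first_name
  from   employees_tbl;
```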

Cross-edition triggers are excluded from this list, as they are not really part of our code base – they are temporary objects whose lifespan is limited to the upgrade timeframe (as soon as no client uses the pre-upgrade edition anymore, they are dropped).
And, by definition, they should refer to the base tables.

However, there is one case (that I can think of) of a code object that has to refer to the base table, as referring to the editioning view is not enough.

The ON DELETE Clause

Let’s say we have two tables with a foreign key constraint between them. The foreign key is defined with the ON DELETE clause (either CASCADE or SET NULL), which means that whenever a record is deleted from the parent table, the corresponding child records are automatically deleted (if the ON DELETE CASCADE option is used) or updated (if ON DELETE SET NULL is used).
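As a minimal illustration of the ON DELETE CASCADE behavior (the PARENTS and CHILDREN tables here are made up for this example):

```sql
-- Hypothetical tables, for illustration only
create table parents (
  id number primary key
);

create table children (
  id        number primary key,
  parent_id number references parents (id) on delete cascade
);

insert into parents values (1);
insert into children values (10, 1);

-- Deleting the parent automatically deletes its child records
delete from parents where id = 1;

select count(*) from children; -- 0
```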

Happy New Square Year

At a family dinner a few days ago, we realized that the upcoming year – 2025 – is a square number 🤩
The kids promptly checked and announced that it’s the only year in the 21st century that is a square number.
We wondered how many square years there are per century over time, and, as always, such questions are answered using a simple SQL query.

with
   n as (select 30 centuries from dual),
   c as (select rownum century from n connect by level <= n.centuries),
   y as (select rownum*rownum year from n connect by level * level <= n.centuries*100),
   yc as (select y.year, to_number(to_char(to_date(y.year,'yyyy'),'cc')) century from y)
select
  c.century,
  count(yc.year) number_of_square_years,
  decode(count(yc.year), 1, any_value(yc.year)) the_only_square_in_the_century
from c, yc
where yc.century(+) = c.century
group by c.century
order by c.century;

   CENTURY NUMBER_OF_SQUARE_YEARS THE_ONLY_SQUARE_IN_THE_CENTURY
---------- ---------------------- ------------------------------
         1                     10
         2                      4
         3                      3
         4                      3
         5                      2
         6                      2
         7                      2
         8                      2
         9                      2
        10                      1                            961
        11                      2
        12                      1                           1156
        13                      2
        14                      1                           1369
        15                      1                           1444
        16                      2
        17                      1                           1681
        18                      1                           1764
        19                      1                           1849
        20                      1                           1936
        21                      1                           2025 -- We Are Here!
        22                      1                           2116
        23                      1                           2209
        24                      1                           2304
        25                      2
        26                      0
        27                      1                           2601
        28                      1                           2704
        29                      1                           2809
        30                      1                           2916

30 rows selected.

Looking at the first 30 centuries, the 10th century was the first to have only one square year. Since the 17th century, each century has had exactly one square year, except for the 25th, which has two, and the 26th, which has none.

Happy new year!

EBR – Even If Downtime Is Acceptable

So, You Don’t Need Online Upgrades

EBR is a set of features that enables patching and upgrading live Oracle-based applications in an online fashion, with zero downtime.

If you are responsible for an Oracle-based application, and you have a maintenance window, during which users do not use the application, you don’t need EBR.
You can upgrade in an offline fashion, by taking the system down during the maintenance window.

But, although you don’t have to use EBR, using EBR may improve your quality of life and your users’ satisfaction.

Improve Your Users’ Satisfaction

The website of the bank that manages my account is “temporarily unavailable” every night between midnight and 4am. I guess this is their maintenance window, during which they do (among other tasks) offline application upgrades.

My user experience is usually ok.
Rarely do I want to check my balance or make a transaction in the middle of the night, and even when I do, it’s not a matter of life and death – I can wait for the morning.
But if the bank’s website were available at night as well, then my experience as a user would be excellent, rather than just ok.

Online upgrades mean less downtime, and less downtime means better user experience.
Even if downtime is acceptable, users are happier with no (or less) downtime.

Improve Your Quality Of Life

Upgrades can be done at any time

With EBR you are not limited to doing upgrades only during the “maintenance window”.

Upgrades can be done at any time. We don’t need to look for off-peak hours, which are usually at night or on weekends – when we are tired and frustrated. We can do it when it’s most convenient for us – during work hours.

But it’s not just about our convenience.
If something goes wrong with an upgrade that is executed during work hours, rather than at night or on a weekend, then solving the problem is much easier and usually faster. Our colleagues who can help with the problem are there. We don’t need to wake someone in the middle of the night, or to track someone down on a weekend family trip.

And most importantly, in my opinion, we can deploy features as soon as they are ready.
With offline upgrades, waiting for the maintenance window – which may be once a week, once a month, once a quarter, or any other frequency – means that your users do not get the best service.
EBR allows for agile development – you deploy features (and bug fixes) as soon as they are ready.

Upgrades can take as long as needed

The maintenance window is usually limited to several hours. What if the upgrade takes more than that? With EBR, the upgrade takes as long as needed. It is not limited to some arbitrary period of time.

When the upgrade is limited in time, we tend to be stressed, because we’re in a race against the clock. This is obviously not a good thing. And this stress may cause us to make some errors, which means the upgrade will take even longer, with a higher risk of not finishing it on time, which makes the whole situation even more stressful.

With EBR the upgrades can be done properly, with no pressure.

We can start an upgrade with a new edition in the morning, handle something more urgent that comes up, and get back to the upgrade later on – maybe even in the evening – and complete it.

Or, if we start an upgrade before lunch time, there is no problem having lunch before finishing the upgrade.

If during the upgrade there are some “half-baked” objects, or even invalid objects, in the production schema, it doesn’t matter.
Not even if it’s true for many hours.
As long as these objects are in the new (unexposed) edition, nobody knows about them, and the end users (who are exposed only to the pre-upgrade edition) keep using the application without interruption.

Database-side upgrades can be done independently of server-side upgrade readiness

With EBR, we can prepare a new edition, expose it to a new service, and the app server upgrade can be done at any time afterwards.
If I’m brave enough, I can even go on vacation once I expose the new edition, and the server may start using it even days later 🙂

Flexible exposure of new versions

With EBR, we don’t have to expose the new edition to all the servers at once. We can take advantage of that, and at first expose it only to a dedicated testing server. All the “real” users continue using the old edition during this time, without being affected.

Another flexibility we gain with EBR is that different types of app servers may use different versions (i.e., editions). Sometimes a database change affects several types of app servers, and only some of them are ready to upgrade. No problem – they can continue using an older edition, while the app servers that are ready can upgrade and start using the new edition.

Summary

EBR is not just about reducing downtime.
It gives several benefits that can improve the quality of life for both the developers and the end users.
And, at least based on my experience, the additional effort required to develop and deploy with EBR is not high.

My First Decade in the International Oracle Community

Exactly ten years ago, in November 2014, I gave my first presentation at a conference outside my country and not in my native language.

At that point, I already had over twenty years of career experience in software development, focusing on databases – my long-lasting passion. I’ve always believed in knowledge sharing, and during those years, I enjoyed giving occasional presentations – sharing what I had learned and the experience I had gained – but I always presented in Hebrew and always in Israel: in companies I worked for, in the historical DBA Forum of ilOUG (Israel Oracle User Group), and at some local conferences.

Sharing and Learning

I discovered that when you build a presentation and prepare to deliver it, you learn a tremendous amount yourself. For example, in 2013, I took it upon myself to deliver a full-day seminar on the new features for developers in Oracle 12c. Since this version had just been released at the time, no one here had the experience and knowledge – and consequently, the desire – to build and deliver such a seminar in Israel. I volunteered to do it and spent many days preparing the seminar, during which I learned a lot. It’s likely that without the goal of preparing and delivering the seminar, I wouldn’t have learned so much new (and useful) content in a relatively short time period and at such a deep level.
To learn, I also used materials published by members of the international Oracle community – both Oracle users and Oracle employees.
Here’s a post I published then, containing the list of topics I included in that seminar.

I regularly read materials shared by community members (and learned a lot from them), and I heard about user group conferences around the world.
I realized I could develop more professionally, learn more, and share my knowledge with more people if I stepped out of my comfort zone, traveled to conferences worldwide, and presented in English – a language I naturally find more challenging to express myself in.
It wasn’t easy – it was even scary – but I took heart and decided to try. I submitted abstracts to several conferences, and eventually two of them were accepted.

My Trial by Fire

And so, in November 2014, I arrived in Nuremberg for the DOAG conference, one of the largest Oracle user group conferences in the world.
It was a fascinating experience, somewhat frightening, and very enlightening.

I understood that this step meant much more than I had thought.

For instance, the opportunity to meet product managers face-to-face – the people directly responsible for the software and features I use. To talk with them, consult with them, complain to them. Even to have lunch with them, like what happened with Bryn Llewellyn, who was then the product manager for PL/SQL and EBR.

The first presentation I gave at that conference (and as mentioned, my first-ever presentation abroad, in English, and in front of a large audience) was about EBR.

As I was preparing on stage, just before starting the presentation, nervous and excited, someone approached me and said “Excuse me, I’d like to introduce myself: I’m Bryn, the product manager for the topic you’re about to present on.”
Of course, that wasn’t the best antidote for my nerves…

I replied that I knew very well who he was (I had read his articles and seen his videos), and that it was a great honor for me, as well as terrifying. He was incredibly nice and said “I just wanted to say I promise not to interfere, and if you want my help, you’re welcome to ask.”


The presentation went quite well, and afterward, I had lunch with Bryn, where he already solved one problem I had with EBR (how to run a job in a specific edition – you associate the job with a class, the class with a service, and the service with an edition).
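The recipe Bryn shared over that lunch can be sketched roughly like this (the service, class, job, and edition names here are all made up for illustration; details such as starting the service and granting privileges may vary by environment):

```sql
-- 1. Create and start a service whose sessions run in the desired edition
begin
  dbms_service.create_service(
    service_name => 'my_edition_svc',
    network_name => 'my_edition_svc',
    edition      => 'v2_edition');
  dbms_service.start_service('my_edition_svc');
end;
/

-- 2. Create a job class associated with that service
begin
  dbms_scheduler.create_job_class(
    job_class_name => 'v2_job_class',
    service        => 'my_edition_svc');
end;
/

-- 3. Create the job in that class; it will run in the service's edition
begin
  dbms_scheduler.create_job(
    job_name   => 'my_job',
    job_type   => 'PLSQL_BLOCK',
    job_action => 'begin my_pkg.do_work; end;',
    job_class  => 'v2_job_class',
    enabled    => true);
end;
/
```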

Bryn helped me quite a lot afterward; we continued to meet at conferences and exchange opinions. When he came to Israel he visited my workplace. He encouraged me to attend Oracle’s big conference in San Francisco – Oracle OpenWorld – where we even gave a joint presentation, and he was a staunch supporter.

None of this would have happened if I had stayed at home and only learned from what he and others published.

Access to Oracle product managers not only helps solve problems (like the advice I received at that lunch ten years ago) but also allows influencing the product’s future. There’s a lot of listening to users that translates into actions.
An example from my experience is the USER_TRIGGERS_AE dictionary view that was added in version 19c.
Another example is the support of both pre-update and post-update values in the DML RETURNING clause that was added in version 23ai.
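For example, a minimal sketch of that 23ai feature (the EMPLOYEES table and column names here are hypothetical):

```sql
-- Hypothetical table, for illustration only
declare
  l_old_salary number;
  l_new_salary number;
begin
  update employees
  set    salary = salary * 1.1
  where  employee_id = 101
  returning old salary, new salary
  into      l_old_salary, l_new_salary;

  -- Both the pre-update and post-update values are available
  dbms_output.put_line('Before: ' || l_old_salary || ', after: ' || l_new_salary);
end;
/
```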

The Community

Since DOAG 2014, I’ve participated in dozens of conferences in about 20 countries. I became part of a community of hundreds of people from around the world, with shared interests, united most of all by their desire to contribute to others, share their knowledge and experience, and collaborate with each other.

The Oracle ACE program has a big and important role in building and maintaining this community, and many of us are members of the program. The program recognizes, rewards, and promotes technical experts and community leaders for their contributions to the Oracle community.

Another group I’m a member of is Symposium 42, which was created by the Oracle database community to support the Oracle database community. All members are experts in creating & sharing technical content or organizing community events and activities.

I have many interactions with my community peers on professional topics.
There’s always someone to consult with and get guidance from.
Getting honest feedback on your post or presentation from people whose opinions you greatly value is priceless.
The opportunity to do a joint presentation with someone like Connor is something I wouldn’t have dared to dream about ten years ago.

The Friendship

But this community is much more than just a group of professionals with shared interests. Real friendships form, people support each other, and of course, enjoy drinking and eating together.

In times when there are many troubles and conflicts in the world, it’s heartwarming, and even hopeful, to see people from different countries and different cultures simply sit, drink, chat, and enjoy together.
Here for example – 7 people from 7 different countries:


During this journey that I started ten years ago, I gained a new circle of friends from many countries.
People I used to admire from afar for their knowledge and the way they share it in books, blogs, or videos – I’m proud today to call them friends (and still admire some of them 🙂).

I remember one evening many years ago at OOW in San Francisco, when I ran into Tim on the street, and he noticed I was a bit lost and down, and immediately took me to get something to eat. It was the most natural thing for him, and for me it emphasized how much this community is far beyond just sharing knowledge about features.

And there are many more examples. Here are just a few of them:

The warm hospitality of Alex and his family when I came with my wife and children for a vacation in their country.

The first time I met Erik, when I came to a conference in the Netherlands, and he was so welcoming and warm that in an instant I felt comfortable in the company of people I hadn’t known before.

When Jože heard at a conference in Ireland that I also bake bread, and gave me half of the sourdough starter he had brought for Martin.
Actually, sourdough bread is a serious matter of deep discussions with Martin, Jonathan, Gerald, and others. Which also reminds me of the discussions about pizza ovens with Maria.

Driving a very small car in Slovenia with Heli, Javed and Kamran (once again, each one of us from a different country).

A very unique experience was traveling around India together with my son Nimrod at OCYatra, along with Sai, Mike, Connor, Sandesh, Dimitri, Pedro, and others.

In the last year – a challenging period in my country during which I haven’t traveled – many friends moved me by showing interest and concern for my family’s well-being.

Local User Groups

Many members of the international community are also active in their local user groups, which benefits the various user groups and allows even smaller ones to bring international-level content to their members.

I, for example, as an active member in the Israeli organization, helped organize conferences and meetups in Israel where speakers from around the world participated.

The crown jewel for me was in 2020, the last conference we held in Israel before COVID, during which I had the honor of hosting my fellow speakers at my home.


I hope to continue being active and contributing more to the community. I’ll update again in 2034 🙂

Optimization of NOT NULL Constraint Creation

Several years ago I wrote a series of 8 posts about constraint creation optimization. I think it’s time to add some more posts to the series.
I showed there, among other things, that Oracle does a nice optimization – that may save plenty of time – when we add in a single ALTER TABLE statement a new (nullable with no default value) column and a corresponding:
– inline (a.k.a. “column-level”) check constraint
or
– foreign key constraint (either inline or out-of-line)

What about NOT NULL constraints?

Oracle has a special optimization for the case of adding a column with a NOT NULL constraint (and no default value).
It checks whether the table contains at least one record. If it does, the new column cannot be added, because the NOT NULL constraint would be violated for all the existing rows. If the table doesn’t contain any records, the new column can be added.

Let’s start with an empty table:
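A minimal sketch of both cases (the table name is made up for illustration):

```sql
create table t (a number);

-- Empty table: the mandatory column is added successfully
alter table t add (b number not null);

insert into t (a, b) values (1, 1);

-- Non-empty table: adding another mandatory column now fails with
-- ORA-01758: table must be empty to add mandatory (NOT NULL) column
alter table t add (c number not null);
```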

DECODE With Ranges

Tim Hall wrote this post yesterday (as part of Joel Kallman Day 2022) with some examples for “the old way” vs. “the new way” to do some stuff.
One of these examples is DECODE and CASE Expressions. As Tim wrote:

CASE expressions were first released in Oracle 8i […] Prior to that if we wanted something similar to a CASE expression in our SQL, we had to use the DECODE function.

This reminded me of a trick I came up with many years ago, probably before 8i 🙂

A table T has a column X, and we need a query in which one of the returned items is a rank – either “Low”, “Medium” or “High”. This rank is determined by the value of X and two thresholds – LOW_TH and HIGH_TH.

With a (Searched) CASE Expression it’s as simple as this:
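Here is a sketch of both forms; the DECODE variant uses SIGN to collapse the ranges into discrete values, which is my guess at the kind of trick the post refers to, not necessarily the exact one (note that it treats the threshold values themselves as “Medium”):

```sql
-- Searched CASE expression vs. a DECODE/SIGN equivalent
select t.x,
       case
         when t.x < low_th  then 'Low'
         when t.x > high_th then 'High'
         else                    'Medium'
       end rank_case,
       -- sign(x-low)+sign(x-high) is -2 below both thresholds,
       -- +2 above both, and -1..1 in between
       decode(sign(t.x - low_th) + sign(t.x - high_th),
              -2, 'Low',
               2, 'High',
                  'Medium') rank_decode
from   t;
```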

Eurovision Analysis #JoelKallmanDay

Here’s my contribution to Joel Kallman Day 2022.

The picture is from the last time I met Joel. It was taken at ilOUG Tech Days 2020, just before an invigorating keynote by Joel.

Four years ago, as part of ODC Appreciation Day 2018 (as the annual Oracle community appreciation day was called that year), I wrote about Pattern Matching in SQL. That post was not a technical one.

This time I decided to return to the same topic, with a technical post.
I recorded a video that explains Pattern Matching by analyzing the winners of the Eurovision Song Contest over the years.

Pre-19c Crossedition Triggers

As shown in the previous post, finding all the crossedition triggers in the database is easy using DBA_TRIGGERS_AE, but this can be done only as of Oracle 19c – the release in which this data dictionary view was added.
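For reference, the 19c-onwards query can be as simple as this sketch (the CROSSEDITION column distinguishes forward and reverse crossedition triggers from regular ones):

```sql
-- As of 19c: list crossedition triggers across all editions
select owner, trigger_name, crossedition
from   dba_triggers_ae
where  crossedition != 'NO';
```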

Here are two options that can be used before 19c.

A Proper Way

We can use DBA_OBJECTS_AE to find all the triggers across all editions, and then for each one “jump” to its edition using dbms_sql and check whether it is a crossedition trigger or not using DBA_TRIGGERS.

USER_TRIGGERS_AE

One of the fundamental building blocks of EBR (Edition Based Redefinition) is Crossedition Triggers.

This special type of trigger is used when we change the structure of tables in an online application upgrade, and the data of these tables may be changed while we do the upgrade and afterwards, until no session uses the pre-upgrade edition. These triggers keep the pre-upgrade representation of the data and the post-upgrade representation of the data in sync.

Crossedition triggers are temporary in nature – they should be dropped when all the clients use the edition in which the triggers were created (or one of its descendant editions).

In theory, crossedition triggers live for a very short period.

In real life, when we have multiple active editions and different types of clients that may keep using older editions for a while before upgrading to the latest edition, it may take days and even weeks before we can drop the crossedition triggers. In my opinion, this is perfectly fine, and actually one of the benefits that I see in EBR.

Another significant difference between crossedition triggers and regular triggers is that crossedition triggers are not inherited by descendant editions. However, they may be triggered by DML operations that are executed in a descendant edition.

This means that we need a way to check which crossedition triggers exist in our schema/database.

Frequency-Based Ranges – Generalized with SQL Macro

In the previous post I used MATCH_RECOGNIZE to convert a list of concrete event times to frequency-based ranges.
The same technique can be applied with different tables, so I wrapped the query in a Table SQL Macro.

The parameters are the table, the “type” column, the “time” column, and (optionally) the period used for filtering.
I assumed the frequencies are always in minutes. The macro can be further generalized to support different time resolutions.

Here is the code:

create or replace function freq_view
(
    i_table     in dbms_tf.table_t,
    i_columns   in dbms_tf.columns_t,
    i_from_time in date default date '1970-01-01',
    i_to_time   in date default date '9999-12-31'
) return varchar2 sql_macro as
begin
    return 'select event_type ' || i_columns(1) || ',
       to_char(from_time,''yyyy-mm-dd'') "DATE",
       to_char(from_time,''hh24:mi'') ||
         nvl2(to_time,
              '' - '' || to_char(to_time,''hh24:mi'')
                    || '' every '' || minutes || '' minutes (''
                    || number_of_events || '' events)'',
              null) event_times
from (
  select ' || i_columns(1) || ' as event_type, trunc(' || i_columns(2) || ',''mi'') as event_time
  from   i_table
  where  ' || i_columns(2) || ' between i_from_time and i_to_time
) match_recognize (
    partition by event_type
    order by event_time
    measures X.event_time from_time,
             last(Y.event_time) to_time,
             round((first(Y.event_time)-X.event_time)*24*60) minutes,
             count(*) number_of_events
    pattern (X Y{2,} | X)
    define
      Y as event_time-prev(event_time) = first(Y.event_time)-X.event_time
)
order by event_type, from_time';
end freq_view;
/

Let’s use it for the same data set used in the original post:

SQL> break on event_type skip 1 dup
SQL> select * from freq_view(events,columns(event_type,event_time),timestamp'2022-07-29 08:00:00');

EVENT_ DATE           EVENT_TIMES
------ -------------- ----------------------------------------------
A      2022-07-29     08:10 - 08:50 every 10 minutes (5 events)
A      2022-07-29     08:58 - 09:22 every 8 minutes (4 events)
A      2022-07-29     09:33
A      2022-07-29     09:42 - 10:52 every 10 minutes (8 events)
A      2022-07-29     10:59 - 11:11 every 4 minutes (4 events)
A      2022-07-29     11:21 - 12:01 every 10 minutes (5 events)

B      2022-07-29     08:15 - 09:55 every 20 minutes (6 events)
B      2022-07-29     10:10
B      2022-07-29     10:25 - 12:05 every 25 minutes (5 events)

C      2022-07-29     08:02 - 08:32 every 2 minutes (16 events)
C      2022-07-29     08:49 - 09:05 every 2 minutes (9 events)
C      2022-07-29     09:08 - 09:20 every 4 minutes (4 events)
C      2022-07-29     09:22 - 10:16 every 2 minutes (28 events)
C      2022-07-29     11:46 - 12:20 every 2 minutes (18 events)


14 rows selected.

And now let’s use it for something else – the DBMS_SCHEDULER executions from the last 3 hours:

SQL> select * from freq_view(user_scheduler_job_run_details,columns(job_name,log_date),sysdate-3/24);

JOB_NAME     DATE            EVENT_TIMES
------------ --------------- ------------------------------------------------
JOB_AAA      2022-07-30      08:45 - 10:45 every 60 minutes (3 events)
JOB_BBB      2022-07-30      08:10 - 10:10 every 60 minutes (3 events)
JOB_CCC      2022-07-30      08:07 - 11:02 every 5 minutes (36 events)
JOB_DDD      2022-07-30      10:00
JOB_EEE      2022-07-30      08:06 - 11:05 every 1 minutes (180 events)