
Preplugin Backups for non-CDB to PDB Conversions and the RMAN-06054

Preplugin Backups and the need for “Trick 17”

The following comes from a customer project. Imagine you have a huge non-CDB database, say several terabytes, and you want to bring it to the CDB architecture. The obvious approach is to convert the non-CDB to a PDB while plugging it into a CDB. To be on the safe side you should have backups in place right after the plug. You also need to minimize the downtime, so there is time neither to copy the entire database nor to back it up completely during the maintenance window. One way would be to prepare a copy as a standby database beforehand, but let us assume you want to save even that space. This is where the 18c preplugin backups feature becomes crucial for the described demands: it allows you to use backups from preplugin times (in this case, non-CDB times) for restore and recovery scenarios after the plug operation. You can read about it here https://support.oracle.com/epmos/faces/DocumentDisplay?id=2358523.1 and here https://oracle-base.com/articles/18c/multitenant-usable-backups-of-non-cdbs-and-relocated-pdbs-18c .

As Tim Hall already stated in his nice article, there is a little catch, and that is what I want to talk about. There is a missing piece: one more archived redo log needs to be recovered, namely the current online log of the source database, which contains some indispensable redo. Tim’s solution was to open the former database read write and do a log switch. That is fine, but impossible if you reused the same data files, because you cannot open that database any more. Here I describe how we can fix that missing piece.

And now all the details from the beginning. All shown output comes from 19.7 Enterprise Edition databases on Linux:

1. You have regular RMAN backups of the non-CDB database in question, and of course also of its archived redo logs as usual; check with RMAN’s LIST BACKUP.
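For illustration, a minimal check in RMAN could look like the following sketch (the exact backup strategy is up to you; these are just generic commands, not from the original setup):

```sql
-- Run in RMAN, connected to the non-CDB as target.
-- Ensure a current full backup plus archived redo logs exists:
BACKUP DATABASE PLUS ARCHIVELOG;

-- Verify that backups of the data files and archived logs are known:
LIST BACKUP SUMMARY;
LIST BACKUP OF ARCHIVELOG ALL;
```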

2. Shortly before the maintenance window starts, you export the backup metadata into internal tables; this is part of the preplugin backups feature:

SQL> EXEC dbms_pdb.exportrmanbackup;

With that command you store the backup metadata in the non-CDB database, in the SYS schema, SYSAUX tablespace. Some details (if not interested, jump to 3.):

It is stored in 34 tables:

SELECT table_name FROM dba_tables WHERE owner='SYS' AND table_name LIKE 'RPP$X$KC%';

See some documentation about the tables in OH/rdbms/admin/catpplb.sql. The tables are truncated and reloaded whenever the procedure is called; it actually copies the contents of the associated “X$KC%” fixed tables. The data is accessible via 34 views:

SELECT view_name FROM dba_views WHERE owner='SYS' AND view_name LIKE 'CDB_RPP$X$KC%';

or via 34 PUBLIC synonyms:

SELECT synonym_name FROM dba_synonyms WHERE owner='PUBLIC' AND synonym_name LIKE 'CDB_RPP$X$KC%';

for everybody with the SELECT_CATALOG_ROLE or the SELECT ANY DICTIONARY privilege.

Later on, when you use that backup metadata in the CDB, it will be available in tables in the root container prefixed with ROPP:

SELECT table_name FROM dba_tables WHERE owner='SYS' AND table_name LIKE 'ROPP$X$KC%';

3. Downtime starts; you prepare the plug-in on the non-CDB:

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP OPEN READ ONLY
SQL> EXEC dbms_pdb.describe(pdb_descr_file => '/home/oracle/my_good_old_nonCDB.xml');
SQL> SHUTDOWN IMMEDIATE

Note that you should also do the checks with dbms_pdb.check_plug_compatibility and pdb_plug_in_violations at that stage. So far so good. But before we move on too fast, here comes the missing piece: we need a log switch, yet must not change the data files any more.
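A minimal sketch of that compatibility check, run in the root of the target CDB after transferring the XML manifest (the file path matches the describe call above; the PDB name PROD is just what we will use later):

```sql
-- Run as SYSDBA in the CDB root.
SET SERVEROUTPUT ON
DECLARE
  compatible BOOLEAN;
BEGIN
  compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
                  pdb_descr_file => '/home/oracle/my_good_old_nonCDB.xml',
                  pdb_name       => 'PROD');
  DBMS_OUTPUT.PUT_LINE(CASE WHEN compatible THEN 'YES' ELSE 'NO' END);
END;
/

-- Inspect any reported violations and their severity:
SELECT name, type, message, status
FROM   pdb_plug_in_violations
WHERE  name = 'PROD';
```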

The following does the trick, also referred to as good old “Trick 17”:

Start up the non-CDB again, but only in mount first:

SQL> STARTUP MOUNT

Let us note down the SCNs of all data files. They should be identical for all data files at this stage, because we shut down cleanly, as long as you do not play around with offline files or read-only tablespaces. In that case, good luck.

SELECT COUNT(*), checkpoint_change#
FROM v$datafile_header
GROUP BY checkpoint_change#
ORDER BY checkpoint_change#;

  COUNT(*) CHECKPOINT_CHANGE#
---------- ------------------
         4            4072501

Okay, okay, only 4 data files in my lab. We find the same SCN in the XML manifest we already created.

# grep fcpsb /home/oracle/my_good_old_nonCDB.xml | sort | uniq
      <fcpsb>4072501</fcpsb>

Now we place a guaranteed restore point.

SQL> CREATE RESTORE POINT before_plugin_and_logswitch GUARANTEE FLASHBACK DATABASE;

Restore point created.

SQL> SELECT scn FROM v$restore_point WHERE name='BEFORE_PLUGIN_AND_LOGSWITCH';

       SCN
----------
   4072500

Let us check the log situation at that step:

SQL> SELECT group#, sequence#, archived, status, first_change#, next_change# FROM v$log;

    GROUP#  SEQUENCE# ARCHIVED  STATUS    FIRST_CHANGE#         NEXT_CHANGE#
---------- ---------- --------- --------- ------------- --------------------
         1         16 YES       INACTIVE        4057479              4064652
         3         18 NO        CURRENT         4067731  9295429630892703743
         2         17 YES       INACTIVE        4064652              4067731

We have archives of sequence# 16 and 17, as ARCHIVED='YES' shows. We need an archive of sequence# 18. We can only switch the log file after open. But we now have a safety net: the guaranteed restore point. So let us open again and do the log switch.

SQL> ALTER DATABASE OPEN;

Database altered.

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

System altered.

SQL> SELECT name FROM v$archived_log WHERE sequence#=18;

NAME
-------------------------------------------------------------------------------------
/u02/fast_recovery_area/NASTY_SITE1/archivelog/2020_07_02/o1_mf_1_18_hhvm2fhn_.arc

The data files meanwhile advanced to higher SCNs because we opened read write, and therefore they need to be flashed back:

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> FLASHBACK DATABASE TO RESTORE POINT before_plugin_and_logswitch;

Check that the data files reverted to the former SCN:

SQL> SELECT COUNT(*), checkpoint_change# FROM v$datafile_header GROUP BY checkpoint_change# ORDER BY checkpoint_change#;

  COUNT(*) CHECKPOINT_CHANGE#
---------- ------------------
         4            4072501

This is fine. So now we do the final shutdown of the non-CDB.

SQL> SHUTDOWN IMMEDIATE

4. Hence, in good conscience, we can plug it into the CDB. Note: there is no time to copy the database, as we said, thus we use NOCOPY. The next steps are performed on the CDB:

SQL> CREATE PLUGGABLE DATABASE prod USING '/home/oracle/my_good_old_nonCDB.xml' NOCOPY TEMPFILE REUSE;

Pluggable database created.

After that we do the magic with the noncdb_to_pdb script.

SQL> ALTER SESSION SET CONTAINER=prod;
SQL> SPOOL noncdb_to_pdb.log
SQL> SET TERMOUT OFF
SQL> @?/rdbms/admin/noncdb_to_pdb.sql
SQL> SET TERMOUT ON
SQL> SPOOL OFF
SQL> ALTER PLUGGABLE DATABASE prod OPEN;

Everything is fine so far. The database can be used again from now on; the downtime is over.
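Since the preplugin backups only cover the past of the non-CDB, it is wise to start a fresh backup of the new PDB as soon as possible after the conversion; a minimal sketch in RMAN, connected to the CDB (the PDB name matches the one created above):

```sql
-- Take a first regular backup of the freshly plugged-in PDB:
BACKUP PLUGGABLE DATABASE prod PLUS ARCHIVELOG;
```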

5. Now, after a certain while, but before a new backup has completed, you have a problem and need to restore and recover.

We can see the backups and known archived logs with:

RMAN> SET PREPLUGIN CONTAINER prod;
RMAN> LIST PREPLUGIN BACKUP;
RMAN> LIST PREPLUGIN ARCHIVELOG ALL;

We can catalog missing archived logs, including our magic sequence# 18, with:

SET PREPLUGIN CONTAINER prod;
CATALOG PREPLUGIN ARCHIVELOG '/u02/fast_recovery_area/NASTY_SITE1/archivelog/2020_07_02/o1_mf_1_18_hhvm2fhn_.arc';

Et voilà: all needed preplugin archived log information is now complete.

RMAN> SET PREPLUGIN CONTAINER prod;
RMAN> LIST PREPLUGIN ARCHIVELOG ALL;

List of Archived Log Copies for database with db_unique_name CYPECORE_SITE1
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - -------------------
...

21      1    17      A 02.07.2020 13:12:21
        Name: /u02/fast_recovery_area/NASTY_SITE1/archivelog/2020_07_02/o1_mf_1_17_hhvk9wr9_.arc

22      1    18      A 02.07.2020 13:30:04
        Name: /u02/fast_recovery_area/NASTY_SITE1/archivelog/2020_07_02/o1_mf_1_18_hhvm2fhn_.arc

And now let us try the restore and recover of that PDB.

RMAN> ALTER PLUGGABLE DATABASE prod CLOSE IMMEDIATE;

Statement processed

RMAN> SET PREPLUGIN CONTAINER prod;

executing command: SET PREPLUGIN CONTAINER
using target database control file instead of recovery catalog

RMAN> RESTORE PLUGGABLE DATABASE prod FROM PREPLUGIN;

Starting restore at 02.07.2020 14:23:35
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=278 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00071 to /u01/oradata/NASTY/system01NASTY.dbf
channel ORA_DISK_1: restoring datafile 00072 to /u01/oradata/NASTY/sysaux01NASTY.dbf
channel ORA_DISK_1: restoring datafile 00073 to /u01/oradata/NASTY/undotbs01NASTY.dbf
channel ORA_DISK_1: restoring datafile 00074 to /u01/oradata/NASTY/users01NASTY.dbf
channel ORA_DISK_1: reading from backup piece /u02/fast_recovery_area/NASTY_SITE1/backupset/2020_07_02/o1_mf_nnndf_TAG20200702T114723_hhvc9d5v_.bkp
channel ORA_DISK_1: piece handle=/u02/fast_recovery_area/NASTY_SITE1/backupset/2020_07_02/o1_mf_nnndf_TAG20200702T114723_hhvc9d5v_.bkp tag=TAG20200702T114723
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
Finished restore at 02.07.2020 14:23:52

RMAN> RECOVER PLUGGABLE DATABASE prod FROM PREPLUGIN;

Starting recover at 02.07.2020 14:24:06
using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 14 is already on disk as file /u02/fast_recovery_area/NASTY_SITE1/archivelog/2020_07_02/o1_mf_1_14_hhvcb8c1_.arc
archived log for thread 1 with sequence 15 is already on disk as file /u02/fast_recovery_area/NASTY_SITE1/archivelog/2020_07_02/o1_mf_1_15_hhvfkccc_.arc
archived log for thread 1 with sequence 16 is already on disk as file /u02/fast_recovery_area/NASTY_SITE1/archivelog/2020_07_02/o1_mf_1_16_hhvj8osz_.arc
archived log for thread 1 with sequence 17 is already on disk as file /u02/fast_recovery_area/NASTY_SITE1/archivelog/2020_07_02/o1_mf_1_17_hhvk9wr9_.arc
archived log for thread 1 with sequence 18 is already on disk as file /u02/fast_recovery_area/NASTY_SITE1/archivelog/2020_07_02/o1_mf_1_18_hhvm2fhn_.arc
media recovery complete, elapsed time: 00:00:01
Finished recover at 02.07.2020 14:24:09

RMAN> RESTORE PLUGGABLE DATABASE prod SKIP PREPLUGIN;

Starting restore at 02.07.2020 14:24:23
using channel ORA_DISK_1

data file 71 not processed because it is a plugged in data file
data file 72 not processed because it is a plugged in data file
data file 73 not processed because it is a plugged in data file
data file 74 not processed because it is a plugged in data file
restore not done; all files read only, offline, excluded, or already restored
Finished restore at 02.07.2020 14:24:23

RMAN> RECOVER PLUGGABLE DATABASE prod;

Starting recover at 02.07.2020 14:24:46
using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 26 is already on disk as file /u02/fast_recovery_area/CYPECORE_SITE1/archivelog/2020_07_02/o1_mf_1_26_hhvmldoh_.arc
...
archived log for thread 1 with sequence 140 is already on disk as file /u02/fast_recovery_area/CYPECORE_SITE1/archivelog/2020_07_02/o1_mf_1_140_hhvndsm6_.arc
archived log file name=/u02/fast_recovery_area/CYPECORE_SITE1/archivelog/2020_07_02/o1_mf_1_26_hhvmldoh_.arc thread=1 sequence=26
...
archived log file name=/u02/fast_recovery_area/CYPECORE_SITE1/archivelog/2020_07_02/o1_mf_1_138_hhvn2w0f_.arc thread=1 sequence=138
media recovery complete, elapsed time: 00:01:02
Finished recover at 02.07.2020 14:25:49

RMAN> ALTER PLUGGABLE DATABASE prod OPEN;

Statement processed

Yes, we made it.