By Sudipta Bhaskar

Downgrade 19c Grid to 12.2 in an Oracle RAC environment.

I upgraded my two-node RAC from 12.2 to 19.8 using the GUI method. But I want to write a blog post on the upgrade using the silent method, so I am downgrading my RAC grid home from 19.8 back to the previous 12.2 version.


My environment


[oracle@OEL7N1 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [oel7n1] is [19.0.0.0.0]
[oracle@OEL7N1 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [19.0.0.0.0]
[oracle@OEL7N1 ~]$ crsctl query crs softwarepatch
Oracle Clusterware patch level on node oel7n1 is [441346801].


[oracle@OEL7N2 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [oel7n2] is [19.0.0.0.0]
[oracle@OEL7N2 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [19.0.0.0.0]
[oracle@OEL7N2 ~]$ crsctl query crs softwarepatch
Oracle Clusterware patch level on node oel7n2 is [441346801].

We need to delete the management database (GIMR) first. Let's check which node it is running on, because the delete command must be run from that node only.

[oracle@OEL7N2 ~]$ srvctl status mgmtdb
Database is enabled
Database is not running.

The crsctl stat res -t output shows the following:

ora.mgmtdb
      1        ONLINE  OFFLINE                               Instance Shutdown,ST
                                                             ABLE

Let's start it up. Nothing seems to work; the start fails on both nodes with VIP errors:

[oracle@OEL7N1 ~]$ srvctl start mgmtdb
PRCR-1079 : Failed to start resource ora.mgmtdb
CRS-5005: IP Address: 127.0.0.1 is already in use in the network
CRS-2674: Start of 'ora.oel7n2.vip' on 'oel7n2' failed
CRS-5005: IP Address: 127.0.0.1 is already in use in the network
CRS-2674: Start of 'ora.oel7n1.vip' on 'oel7n1' failed
CRS-2632: There are no more servers to try to place resource 'ora.mgmtdb' on that would satisfy its placement policy

[oracle@OEL7N2 ~]$ srvctl start mgmtdb
PRCR-1079 : Failed to start resource ora.mgmtdb
CRS-5005: IP Address: 127.0.0.1 is already in use in the network
CRS-2674: Start of 'ora.oel7n2.vip' on 'oel7n2' failed
CRS-5005: IP Address: 127.0.0.1 is already in use in the network
CRS-2674: Start of 'ora.oel7n1.vip' on 'oel7n1' failed
CRS-2632: There are no more servers to try to place resource 'ora.mgmtdb' on that would satisfy its placement policy
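The CRS-5005 errors suggest the node VIP hostnames are resolving to 127.0.0.1 instead of their real addresses. A quick way to check (a sketch using standard srvctl/nslookup syntax; adjust the node names for your cluster):

# What VIP address does the cluster expect for each node?
srvctl config vip -node oel7n1
srvctl config vip -node oel7n2
# What do the node names actually resolve to right now?
nslookup oel7n1
nslookup oel7n2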

Let's try to delete the mgmtdb even though it is not running.

[oracle@OEL7N1 bin]$ ./dbca -silent -deleteDatabase -sourceDB -MGMTDB
[FATAL] [DBT-10003] Delete operation for Oracle Grid Infrastructure Management Repository (GIMR) cannot be performed on the current node (oel7n1).
   CAUSE: Oracle GIMR is running on a remote node (oel7n2).
   ACTION: Invoke DBCA on the remote node (oel7n2) to delete Oracle GIMR.
[oracle@OEL7N1 bin]$

[oracle@OEL7N2 bin]$ ./dbca -silent -deleteDatabase -sourceDB -MGMTDB
[WARNING] [DBT-11503] The instance (-MGMTDB) is not running on the local node. This may result in partial delete of Oracle database.
   CAUSE: A locally running instance is required for complete deletion of Oracle database instance and database files.
   ACTION: Specify a locally running database, or execute DBCA on a node where the database instance is running.
[WARNING] [DBT-19202] The Database Configuration Assistant will delete the Oracle instances and datafiles for your database. All information in the database will be destroyed.
Prepare for db operation
32% complete
Connecting to database
35% complete
39% complete
42% complete
[WARNING] The data files for database with SID "-MGMTDB" could not be determined because the database could not be started. DBCA will proceed with the service deletion.
65% complete
Updating network configuration files
68% complete
Deleting instance and datafiles
84% complete
100% complete
Database deletion completed.
Look at the log file "/u01/app/grid/cfgtoollogs/dbca/-MGMTDB/-MGMTDB.log" for further details.

This behaviour may be because mgmtdb is optional in 19c. Let's dig further for more details.


Let's run the downgrade command

[root@OEL7N1 ~]# cd /home/oracle
[root@OEL7N1 oracle]# pwd
/home/oracle
[root@OEL7N1 oracle]# /grid/app/oracle/19.3/grid/crs/install/rootcrs.sh -downgrade
Using configuration parameter file: /grid/app/oracle/19.3/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/oel7n1/crsconfig/crsdowngrade_oel7n1_2021-04-28_11-14-09AM.log
2021/04/28 11:14:43 CLSRSC-180: An error occurred while executing the command 'cluutil -doesHomeExist -nodelist oel7n1,oel7n2 -oraclehome /grid/app/oracle/19.3/grid'
Died at /grid/app/oracle/19.3/grid/crs/install/crsutils.pm line 18053.
The command '/grid/app/oracle/19.3/grid/perl/bin/perl -I/grid/app/oracle/19.3/grid/perl/lib -I/grid/app/oracle/19.3/grid/crs/install -I/grid/app/oracle/19.3/grid/xag /grid/app/oracle/19.3/grid/crs/install/rootcrs.pl -downgrade' execution failed

It failed. Let's investigate why.

The log says:

2021-04-28 11:14:43: Executing cmd: /grid/app/oracle/19.3/grid/bin/clsecho -p has -f clsrsc -m 180 'cluutil -doesHomeExist -nodelist oel7n1,oel7n2 -oraclehome /grid/app/oracle/19.3/grid'
2021-04-28 11:14:43: Executing cmd: /grid/app/oracle/19.3/grid/bin/clsecho -p has -f clsrsc -m 180 'cluutil -doesHomeExist -nodelist oel7n1,oel7n2 -oraclehome /grid/app/oracle/19.3/grid'
2021-04-28 11:14:43: Command output:
>  CLSRSC-180: An error occurred while executing the command 'cluutil -doesHomeExist -nodelist oel7n1,oel7n2 -oraclehome /grid/app/oracle/19.3/grid'
>End Command output
2021-04-28 11:14:43: CLSRSC-180: An error occurred while executing the command 'cluutil -doesHomeExist -nodelist oel7n1,oel7n2 -oraclehome /grid/app/oracle/19.3/grid'
2021-04-28 11:14:43: ###### Begin DIE Stack Trace ######
2021-04-28 11:14:43:     Package         File                 Line Calling
2021-04-28 11:14:43:     --------------- -------------------- ---- ----------
2021-04-28 11:14:43:  1: main            rootcrs.pl            357 crsutils::dietrap
2021-04-28 11:14:43:  2: crsutils        crsutils.pm          18053 main::__ANON__
2021-04-28 11:14:43:  3: crsdowngrade    crsdowngrade.pm       970 crsutils::checkHomeExists
2021-04-28 11:14:43:  4: crsdowngrade    crsdowngrade.pm      1106 crsdowngrade::isLastNodeToDowngrade
2021-04-28 11:14:43:  5: crsdowngrade    crsdowngrade.pm       352 crsdowngrade::lastnodeCheck
2021-04-28 11:14:43:  6: crsdowngrade    crsdowngrade.pm       227 crsdowngrade::downgrade_validate
2021-04-28 11:14:43:  7: crsdowngrade    crsdowngrade.pm       141 crsdowngrade::CRSDowngrade
2021-04-28 11:14:43:  8: main            rootcrs.pl            370 crsdowngrade::new
2021-04-28 11:14:43: ####### End DIE Stack Trace #######

2021-04-28 11:14:43:  checkpoint has failed
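The failing step is the cluutil home-existence check quoted in CLSRSC-180. Re-running it by hand with the same arguments makes it easier to see whether node-to-node communication is the real problem (a sketch; I'm assuming cluutil sits under the grid home's bin directory):

/grid/app/oracle/19.3/grid/bin/cluutil -doesHomeExist -nodelist oel7n1,oel7n2 -oraclehome /grid/app/oracle/19.3/grid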

This looks like a network or name-resolution issue. As this is a lab environment, these kinds of issues are quite normal. Let me check my DNS settings.


Sure enough, my DNS settings are broken and /etc/resolv.conf does not contain my DNS server's IP. Most probably I forgot to run chattr +i on /etc/resolv.conf, so NetworkManager overwrote it.

[root@OEL7N1 oracle]# nslookup OEL7N2
Server:         192.168.1.1
Address:        192.168.1.1#53

** server can't find OEL7N2: NXDOMAIN

[root@OEL7N1 oracle]# cat /etc/resolv.conf
# Generated by NetworkManager
search localdomain
nameserver 192.168.1.1
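The fix is simply to put the correct nameserver back and make the file immutable so NetworkManager cannot rewrite it. A minimal sketch, run as root (the DNS server IP is the one from my lab, as the nslookup output below shows):

cat > /etc/resolv.conf <<EOF
search localdomain
nameserver 192.168.126.102
EOF
# stop NetworkManager from overwriting the file again
chattr +i /etc/resolv.conf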

After fixing the DNS settings:

[root@OEL7N1 oracle]# nslookup OEL7N2
Server:         192.168.126.102
Address:        192.168.126.102#53

Name:   OEL7N2.localdomain
Address: 192.168.126.21

Let's resume the downgrade.
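After the DNS fix, the cluster stack was restarted. For reference, bouncing the stack on a node is the usual crsctl stop/start as root from the (still active) 19c grid home — a sketch:

/grid/app/oracle/19.3/grid/bin/crsctl stop crs
/grid/app/oracle/19.3/grid/bin/crsctl start crs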


With the stack back up, we can see that mgmtdb is running again, so we need to delete it once more.


[root@OEL7N2 ~]# ps -ef | grep pmon
oracle   20763     1  0 11:35 ?        00:00:00 asm_pmon_+ASM2
oracle   21755     1  0 11:36 ?        00:00:00 ora_pmon_orcl2
oracle   21937     1  0 11:36 ?        00:00:00 apx_pmon_+APX2
oracle   21944     1  0 11:36 ?        00:00:00 mdb_pmon_-MGMTDB
root     26465  5002  0 11:40 pts/0    00:00:00 grep --color=auto pmon

[oracle@OEL7N2 bin]$ cd /grid/app/oracle/19.3/grid/bin/
[oracle@OEL7N2 bin]$ ./dbca -silent -deleteDatabase -sourceDB -MGMTDB
[WARNING] [DBT-19202] The Database Configuration Assistant will delete the Oracle instances and datafiles for your database. All information in the database will be destroyed.
Prepare for db operation
32% complete
Connecting to database
35% complete
39% complete
42% complete
45% complete
48% complete
52% complete
65% complete
Updating network configuration files
68% complete
Deleting instance and datafiles
84% complete
100% complete
Database deletion completed.
Look at the log file "/u01/app/grid/cfgtoollogs/dbca/_mgmtdb/_mgmtdb.log" for further details.

Let's run the downgrade again.

[root@OEL7N1 oracle]# /grid/app/oracle/19.3/grid/crs/install/rootcrs.sh -downgrade
Using configuration parameter file: /grid/app/oracle/19.3/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/oel7n1/crsconfig/crsdowngrade_oel7n1_2021-04-28_11-49-17AM.log
2021/04/28 11:53:28 CLSRSC-591: successfully downgraded Oracle Clusterware stack on this node
[root@OEL7N1 oracle]# 2021/04/28 11:53:45 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.

This time it went quite well.


Last few lines from the log.


2021-04-28 11:53:28: The 'ROOTCRS_FIRSTNODE' status is SUCCESS
2021-04-28 11:53:28: Global ckpt 'ROOTCRS_FIRSTNODE' state: SUCCESS
2021-04-28 11:53:28: First node operations have been done by forcing first node on a non-installer node.
2021-04-28 11:53:28: Local node: oel7n1 is not the first node.
2021-04-28 11:53:28: Successfully downgraded Oracle Clusterware stack on this node
2021-04-28 11:53:28: Executing cmd: /grid/app/oracle/19.3/grid/bin/clsecho -p has -f clsrsc -m 591
2021-04-28 11:53:28: Executing cmd: /grid/app/oracle/19.3/grid/bin/clsecho -p has -f clsrsc -m 591
2021-04-28 11:53:28: Command output:
>  CLSRSC-591: successfully downgraded Oracle Clusterware stack on this node
>End Command output
2021-04-28 11:53:28: CLSRSC-591: successfully downgraded Oracle Clusterware stack on this node
2021-04-28 11:53:45: Command output:
>
>  AHF Installer for Platform Linux Architecture x86_64
>
>  AHF Installation Log : /tmp/ahf_install_204400_18374_2021_04_28-11_53_26.log
>
>  Starting Autonomous Health Framework (AHF) Installation
>
>  AHF Version: 20.4.4 Build Date: 202103031514
>
>  AHF is already installed at /opt/oracle.ahf
>
>  Installed AHF Version: 20.4.4 Build Date: 202103031514
>
>  AHF is already running latest version. No need to upgrade.
>  Starting TFA..
>  Created symlink from /etc/systemd/system/multi-user.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.
>  Created symlink from /etc/systemd/system/graphical.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.
>  Waiting up to 100 seconds for TFA to be started..
>  . . . . .
>  . . . . .
>  Successfully started TFA Process..
>  . . . . .
>  TFA Started and listening for commands
>
>End Command output
2021-04-28 11:53:45: Executing cmd: /grid/app/oracle/19.3/grid/bin/clsecho -p has -f clsrsc -m 4002
2021-04-28 11:53:45: Executing cmd: /grid/app/oracle/19.3/grid/bin/clsecho -p has -f clsrsc -m 4002
2021-04-28 11:53:45: Command output:
>  CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
>End Command output
2021-04-28 11:53:45: CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.

Let's downgrade on the second node.


[root@OEL7N2 oracle]# /grid/app/oracle/19.3/grid/crs/install/rootcrs.sh -downgrade
Using configuration parameter file: /grid/app/oracle/19.3/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/oel7n2/crsconfig/crsdowngrade_oel7n2_2021-04-28_12-01-01AM.log
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.evmd' on 'oel7n2'
CRS-2672: Attempting to start 'ora.mdnsd' on 'oel7n2'
CRS-2676: Start of 'ora.mdnsd' on 'oel7n2' succeeded
CRS-2676: Start of 'ora.evmd' on 'oel7n2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'oel7n2'
CRS-2676: Start of 'ora.gpnpd' on 'oel7n2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'oel7n2'
CRS-2672: Attempting to start 'ora.gipcd' on 'oel7n2'
CRS-2676: Start of 'ora.cssdmonitor' on 'oel7n2' succeeded
CRS-2676: Start of 'ora.gipcd' on 'oel7n2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'oel7n2'
CRS-2672: Attempting to start 'ora.diskmon' on 'oel7n2'
CRS-2676: Start of 'ora.diskmon' on 'oel7n2' succeeded
CRS-2676: Start of 'ora.cssd' on 'oel7n2' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'oel7n2'
CRS-2672: Attempting to start 'ora.ctssd' on 'oel7n2'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'oel7n2'
CRS-2676: Start of 'ora.crf' on 'oel7n2' succeeded
CRS-2676: Start of 'ora.ctssd' on 'oel7n2' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'oel7n2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'oel7n2'
CRS-2676: Start of 'ora.asm' on 'oel7n2' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'oel7n2'
CRS-2676: Start of 'ora.storage' on 'oel7n2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'oel7n2'
CRS-2676: Start of 'ora.crsd' on 'oel7n2' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'oel7n2'
CRS-2672: Attempting to start 'ora.storage' on 'oel7n2'
CRS-2676: Start of 'ora.storage' on 'oel7n2' succeeded
CRS-2676: Start of 'ora.crf' on 'oel7n2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'oel7n2'
CRS-2676: Start of 'ora.crsd' on 'oel7n2' succeeded
2021/04/28 12:07:02 CLSRSC-338: Successfully downgraded OCR to version 12.2.0.1.0
CRS-5702: Resource 'ora.crsd' is already running on 'oel7n2'
CRS-4000: Command Start failed, or completed with errors.
2021/04/28 12:10:12 CLSRSC-591: successfully downgraded Oracle Clusterware stack on this node
2021/04/28 12:10:13 CLSRSC-640: To complete the downgrade operation, ensure that the node inventory on all nodes points to the configured Grid Infrastructure home '/grid/app/oracle/12.2'.
2021/04/28 12:10:14 CLSRSC-592: Run 'crsctl start crs' from home /grid/app/oracle/12.2 on each node to complete downgrade.
[root@OEL7N2 oracle]#    

Let's follow that instruction and run 'crsctl start crs' from the 12.2 home (/grid/app/oracle/12.2) on each node to complete the downgrade.

[root@OEL7N1 oracle]# . oraenv
ORACLE_SID = [+ASM1] ?
ORACLE_HOME = [/home/oracle] ? /grid/app/oracle/12.2
The Oracle base remains unchanged with value /u01/app/grid
[root@OEL7N1 oracle]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

[root@OEL7N2 oracle]# . oraenv
ORACLE_SID = [/grid/app/oracle/12.2] ? +ASM2
ORACLE_HOME = [/home/oracle] ? /grid/app/oracle/12.2
The Oracle base remains unchanged with value /u01/app/grid
[root@OEL7N2 oracle]#
[root@OEL7N2 oracle]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[root@OEL7N2 oracle]#
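Before looking at the individual resources, a quick cluster-wide status check (standard crsctl syntax) shows whether the stack is fully up on both nodes:

crsctl check cluster -all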

Everything came up fine apart from mgmtdb, since we deleted it earlier. We will need to create it again.

[root@OEL7N1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCH.dg
               ONLINE  ONLINE       oel7n1                   STABLE
               ONLINE  ONLINE       oel7n2                   STABLE
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       oel7n1                   STABLE
               ONLINE  ONLINE       oel7n2                   STABLE
ora.DATA.dg
               ONLINE  ONLINE       oel7n1                   STABLE
               ONLINE  ONLINE       oel7n2                   STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       oel7n1                   STABLE
               ONLINE  ONLINE       oel7n2                   STABLE
ora.VOTE.dg
               ONLINE  ONLINE       oel7n1                   STABLE
               ONLINE  ONLINE       oel7n2                   STABLE
ora.chad
               ONLINE  OFFLINE      oel7n1                   STABLE
               ONLINE  OFFLINE      oel7n2                   STABLE
ora.net1.network
               ONLINE  ONLINE       oel7n1                   STABLE
               ONLINE  ONLINE       oel7n2                   STABLE
ora.ons
               ONLINE  ONLINE       oel7n1                   STABLE
               ONLINE  ONLINE       oel7n2                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       oel7n2                   STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       oel7n1                   STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       oel7n1                   STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       oel7n2                   169.254.185.253 192.
                                                             168.100.21,STABLE
ora.asm
      1        ONLINE  ONLINE       oel7n1                   Started,STABLE
      2        ONLINE  ONLINE       oel7n2                   Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       oel7n1                   STABLE
ora.mgmtdb
      1        ONLINE  OFFLINE                               Instance Shutdown,ST
                                                             ABLE
ora.oel7n1.vip
      1        ONLINE  ONLINE       oel7n1                   STABLE
ora.oel7n2.vip
      1        ONLINE  ONLINE       oel7n2                   STABLE
ora.orcl.db
      1        ONLINE  ONLINE       oel7n1                   Open,HOME=/dboracle/
                                                             app/oracle/product/1
                                                             2.2.0/dbhome_1,STABL
                                                             E
      2        ONLINE  ONLINE       oel7n2                   Open,HOME=/dboracle/
                                                             app/oracle/product/1
                                                             2.2.0/dbhome_1,STABL
                                                             E
ora.orcl.pdborcl_srv.svc
      1        ONLINE  ONLINE       oel7n1                   STABLE
      2        ONLINE  ONLINE       oel7n2                   STABLE
ora.qosmserver
      1        ONLINE  ONLINE       oel7n1                   STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       oel7n2                   STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       oel7n1                   STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       oel7n1                   STABLE
--------------------------------------------------------------------------------
[root@OEL7N1 ~]#



Before that, we need to remove the 19c grid home from the Active Cluster Inventory.

This needs to be run from one node only, using the 19c home.

[oracle@OEL7N1 grid]$ /grid/app/oracle/19.3/grid/oui/bin/runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=false ORACLE_HOME=/grid/app/oracle/19.3/grid "CLUSTER_NODES=OEL7N1,OEL7N2" -doNotUpdateNodeList
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 9949 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
You can find the log of this install session at:
 /u01/app/oraInventory/logs/UpdateNodeList2021-04-28_12-35-16PM.log
'UpdateNodeList' was successful.

Now update the Active Cluster Inventory with the 12.2 home.


[oracle@OEL7N1 grid]$ /grid/app/oracle/12.2/oui/bin/runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true ORACLE_HOME=/grid/app/oracle/12.2 "CLUSTER_NODES=OEL7N1,OEL7N2"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 9951 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
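To double-check, the central inventory (its location comes from /etc/oraInst.loc; /u01/app/oraInventory in my lab) should now flag only the 12.2 home as the clusterware home:

# only the 12.2 grid home should carry CRS="true" after the two updateNodeList runs
grep 'CRS="true"' /u01/app/oraInventory/ContentsXML/inventory.xml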

Let's remove the mgmtdb resource from the cluster.


[oracle@OEL7N1 grid]$ srvctl remove mgmtdb
Remove the database _mgmtdb? (y/[n]) y
PRKO-3077 : Failed to remove database _mgmtdb: PRCD-1032 : Failed to remove database resource _mgmtdb
PRCR-1028 : Failed to remove resource ora.mgmtdb
PRCR-1072 : Failed to unregister resource ora.mgmtdb
CRS-2730: Resource 'ora.chad' depends on resource 'ora.mgmtdb'

Let's fix this. CRS-2730 tells us that ora.chad (the Cluster Health Advisor resource) depends on ora.mgmtdb, so the resource cannot be unregistered as-is.


Stop the ora.crf resource on both nodes and disable it so that it doesn't start automatically.

[oracle@OEL7N1 ~]$ crsctl stop resource ora.crf -init
CRS-2673: Attempting to stop 'ora.crf' on 'oel7n1'
CRS-2677: Stop of 'ora.crf' on 'oel7n1' succeeded

[root@OEL7N1 19.3]# crsctl modify resource ora.crf -attr ENABLED=0 -init

[root@OEL7N2 oracle]# crsctl stop resource ora.crf -init
CRS-2673: Attempting to stop 'ora.crf' on 'oel7n2'
CRS-2677: Stop of 'ora.crf' on 'oel7n2' succeeded
[root@OEL7N2 oracle]#
[root@OEL7N2 oracle]# crsctl modify resource ora.crf -attr ENABLED=0 -init
[root@OEL7N2 oracle]#

I also commented out the /etc/oratab entry for mgmtdb.
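For reference, the commented-out entry in /etc/oratab looks roughly like this (the usual sid:home:N format; the home path here is an assumption based on my lab setup):

#-MGMTDB:/grid/app/oracle/19.3/grid:N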

[oracle@OEL7N1 ~]$ srvctl remove mgmtdb
Remove the database _mgmtdb? (y/[n]) y
PRKO-3077 : Failed to remove database _mgmtdb: PRCD-1032 : Failed to remove database resource _mgmtdb
PRCR-1028 : Failed to remove resource ora.mgmtdb
PRCR-1072 : Failed to unregister resource ora.mgmtdb
CRS-2730: Resource 'ora.chad' depends on resource 'ora.mgmtdb'
[oracle@OEL7N1 ~]$

Still the same dependency error. Let's make sure the database and the management listener are stopped, and then remove the database resource with the -force option:
[oracle@OEL7N1 ~]$ srvctl stop mgmtdb
PRCC-1016 : _mgmtdb was already stopped
[oracle@OEL7N1 ~]$ srvctl stop mgmtlsnr

[oracle@OEL7N1 ~]$ srvctl remove mgmtdb -force

[oracle@OEL7N1 ~]$ srvctl remove mgmtdb
PRCD-1120 : The resource for database _mgmtdb could not be found.
PRCR-1001 : Resource ora.mgmtdb does not exist


The mgmtdb resource is gone now. Let's verify the clusterware versions on both nodes.


Everything seems to be okay now.

[oracle@OEL7N1 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [oel7n1] is [12.2.0.1.0]
[oracle@OEL7N1 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.2.0.1.0]
[oracle@OEL7N1 ~]$ crsctl query crs softwarepatch
Oracle Clusterware patch level on node oel7n1 is [1831702305].


[root@OEL7N2 oracle]# crsctl query crs softwareversion
Oracle Clusterware version on node [oel7n2] is [12.2.0.1.0]
[root@OEL7N2 oracle]# crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.2.0.1.0]
[root@OEL7N2 oracle]#  crsctl query crs softwarepatch
Oracle Clusterware patch level on node oel7n2 is [1831702305].

Let's re-create the mgmtdb container database. After that we will create the mgmtdb PDB and re-enable the ora.crf resource on both nodes.


Download the mdbutil.pl Perl script from "MDBUtil: GI Management Repository configuration tool" (Doc ID 2065175.1).



[oracle@OEL7N1 ~]$ ./mdbutil.pl --addmdb --target=+DATA
mdbutil.pl version : 1.99
2021-04-28 14:15:15: I Starting To Configure MGMTDB at +DATA...
2021-04-28 14:15:27: I Container database creation in progress... for GI 12.2.0.1.0
2021-04-28 14:42:37: I Plugable database creation in progress...
2021-04-28 14:51:46: I Executing "/tmp/mdbutil.pl --addchm" on oel7n1 as root to configure CHM.
root@oel7n1's password:
2021-04-28 14:59:35: W Not able to execute "/tmp/mdbutil.pl --addchm" on oel7n1 as root to configure CHM.
2021-04-28 14:59:36: I Executing "/tmp/mdbutil.pl --addchm" on oel7n2 as root to configure CHM.
root@oel7n2's password:
2021-04-28 14:59:52: W Not able to execute "/tmp/mdbutil.pl --addchm" on oel7n2 as root to configure CHM.
2021-04-28 14:59:52: I MGMTDB & CHM configuration done!

[oracle@OEL7N1 ~]$ srvctl status MGMTDB
Database is enabled
Instance -MGMTDB is running on node oel7n1

mdbutil.pl could not run its "--addchm" step as root on either node (the two warnings above), so we run mgmtca manually and then re-enable and start the ora.crf resource on both nodes ourselves.

[oracle@OEL7N1 ~]$ mgmtca
[oracle@OEL7N1 ~]$
[root@OEL7N1 templates]# crsctl status res ora.crf
CRS-2613: Could not find resource 'ora.crf'.
[root@OEL7N1 templates]# crsctl modify res ora.crf -attr ENABLED=1 -init
[root@OEL7N1 templates]# crsctl start res ora.crf -init
CRS-2672: Attempting to start 'ora.crf' on 'oel7n1'
CRS-2676: Start of 'ora.crf' on 'oel7n1' succeeded

[root@OEL7N2 templates]# crsctl status res ora.crf
CRS-2613: Could not find resource 'ora.crf'.
[root@OEL7N2 templates]# crsctl modify res ora.crf -attr ENABLED=1 -init
[root@OEL7N2 templates]# crsctl start res ora.crf -init
CRS-2672: Attempting to start 'ora.crf' on 'oel7n2'
CRS-2676: Start of 'ora.crf' on 'oel7n2' succeeded

[oracle@OEL7N1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
32542421;TOMCAT RELEASE UPDATE 12.2.0.1.0(ID:RELEASE) (32542421)
32507738;Database Apr 2021 Release Update : 12.2.0.1.210420 (32507738)
32231681;ACFS JAN 2021 RELEASE UPDATE 12.2.0.1.210119 (32231681)
31802727;OCW OCT 2020 RELEASE UPDATE 12.2.0.1.201020 (31802727)
26839277;DBWLM RELEASE UPDATE 12.2.0.1.0(ID:170913) (26839277)

[oracle@OEL7N2 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
26839277;DBWLM RELEASE UPDATE 12.2.0.1.0(ID:170913) (26839277)
32542421;TOMCAT RELEASE UPDATE 12.2.0.1.0(ID:RELEASE) (32542421)
32231681;ACFS JAN 2021 RELEASE UPDATE 12.2.0.1.210119 (32231681)
32507738;Database Apr 2021 Release Update : 12.2.0.1.210420 (32507738)
31802727;OCW OCT 2020 RELEASE UPDATE 12.2.0.1.201020 (31802727)

Both nodes report the same 12.2 release update patches, just listed in a different order. The grid downgrade from 19.8 to 12.2 is complete.
