Bo's Oracle Station

[Blog 2023] Zero-downtime rolling upgrade of an Oracle MAA architecture: the physru.sh method vs. the dbms_rolling method (18c to 19c as the example)

2023-10-24 10:20 | Posted by: admin | Author: Bo Tang

Abstract: Using an 18c-to-19c upgrade as the example, this post walks through the complete procedure for both zero-downtime rolling-upgrade methods in an Oracle MAA architecture, and compares the physru.sh and dbms_rolling approaches in detail.


 

Author: Bo Tang

 

1. Starting environment:

 

    The starting point is an Admin Managed 18c MAA environment whose Grid Infrastructure has already been successfully rolling-upgraded from 18c to 19c. The primary database is c01orcl.example.com and the standby database is c02orcl.example.com. The points to watch for the corresponding Policy Managed environment will be covered in the next blog post.

    Query the Grid Infrastructure version on the two nodes hosting the primary:

 

[grid@station5 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [19.0.0.0.0]

 

[grid@station6 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [19.0.0.0.0]

 

    Query the Grid Infrastructure version on the two nodes hosting the standby:

 

[grid@station7 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [19.0.0.0.0]

 

[grid@station8 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [19.0.0.0.0]

 

    Query the Data Guard broker configuration:

 

[oracle@station5 ~]$ dgmgrl
DGMGRL for Linux: Release 18.0.0.0.0 - Production on Mon Oct 2 05:19:43 2023
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/oracle_4U@c01orcl
Connected to "c01orcl"
Connected as SYSDBA.
DGMGRL> show configuration verbose;

Configuration - dg_config

  Protection Mode: MaxPerformance
  Members:
  c01orcl - Primary database
    c02orcl - Physical standby database

  Properties:
    FastStartFailoverThreshold      = '30'
    OperationTimeout                = '30'
    TraceLevel                      = 'USER'
    FastStartFailoverLagLimit       = '30'
    CommunicationTimeout            = '180'
    ObserverReconnect               = '0'
    FastStartFailoverAutoReinstate  = 'TRUE'
    FastStartFailoverPmyShutdown    = 'TRUE'
    BystandersFollowRoleChange      = 'ALL'
    ObserverOverride                = 'FALSE'
    ExternalDestination1            = ''
    ExternalDestination2            = ''
    PrimaryLostWriteAction          = 'CONTINUE'
    ConfigurationWideServiceName    = 'c01orcl_CFG'

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

 

 

2. Preparation before execution:

          

Preparation before execution

physru.sh approach:

1. Flashback database must be enabled on both the primary and the standby.

2. (Supplemental logging is not required.)

3. The Data Guard broker must be disabled, on all versions.

4. The standby is in read-only mode, with the MRP0 process recovering redo.

dbms_rolling approach:

1. Flashback database must be enabled on both the primary and the standby.

2. Supplemental logging must be enabled on both the primary and the standby.

3. The Data Guard broker must be disabled only on releases up to and including 12.1.0.

4. The standby is in mount mode, with the MRP0 process recovering redo.
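As a quick sanity check for these prerequisites, the usual queries and commands look like this (a minimal sketch using standard v$database columns; run them on the primary and standby as appropriate for your configuration):

```sql
-- Is flashback database on, and is minimal supplemental logging enabled?
select flashback_on, supplemental_log_data_min from v$database;

-- Enable flashback database (db_recovery_file_dest must already be set):
alter database flashback on;

-- For the dbms_rolling approach, enable minimal supplemental logging:
alter database add supplemental log data;
```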

 

     Note 1 for this step

     The log_archive_dest_state_2 parameter must be set correctly on both the primary and the standby (it must not be "reset"). Otherwise redo will stop shipping during the later steps, which is the most serious problem a Data Guard configuration can have. For the dbms_rolling approach, it additionally causes a serious error in build_plan, which will fail to find the standby.

 

SQL> alter system set log_archive_dest_2='service=c01orcl valid_for=(primary_role,online_logfiles) db_unique_name=c01orcl';

SQL> alter system set log_archive_dest_state_2=enable;
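To confirm that the destination really is enabled and shipping, the standard destination views can be checked on each side (a sketch; dest_id 2 as configured above):

```sql
-- 'VALID' means the destination is usable; 'DEFERRED' or an error text
-- would reproduce exactly the problem described above:
select dest_id, status, error from v$archive_dest_status where dest_id = 2;

-- The parameter itself should report 'enable' (SQL*Plus command):
show parameter log_archive_dest_state_2
```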

 

Note 2 for this step

     In a multitenant architecture, pay special attention: so that the conversion of the standby into a logical standby and the subsequent upgrade run without errors, all pluggable databases must be open on every instance of both the primary and the standby, and that open state must be remembered with the command alter pluggable database all save state instances=all. If this cannot be done on the standby for the moment, then during the conversion to a logical standby below, do not forget to open all pluggable databases on every instance of the logical standby and save that state with the same command.
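The commands the note refers to, spelled out (standard multitenant syntax; run on each database in the configuration):

```sql
-- Open every PDB on all RAC instances, and remember that state across restarts:
alter pluggable database all open instances=all;
alter pluggable database all save state instances=all;

-- Verify: every PDB should show READ WRITE on every instance:
select inst_id, name, open_mode from gv$pdbs order by inst_id, name;
```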

 

3. First execution on the primary:

 

     During this first run, the standby is converted from a physical standby into a logical standby and becomes open read/write. Before the conversion, the standby creates a guaranteed restore point recording its physical-standby state, so that it can later be converted back into a physical standby.

 

First execution on the primary

physru.sh approach: the script itself prompts, at the right point, for the commands needed to reduce the standby to a single instance (see the WARN block in the output below).

dbms_rolling approach: the standby must already be running as a single instance before the procedure starts.

The physru.sh command to run:

[oracle@station5 ~]$ ./physru.sh sys c01orcl1 c02orcl1 c01orcl c02orcl 19.0.0.0.0

Please enter the sysdba password:

 

### Initialize script to either start over or resume execution

Oct 02 18:21:19 2023 [0-1] Identifying rdbms software version

Oct 02 18:21:19 2023 [0-1] database c01orcl is at version 18.0.0.0.0

Oct 02 18:21:19 2023 [0-1] database c02orcl is at version 18.0.0.0.0

Oct 02 18:21:21 2023 [0-1] verifying flashback database is enabled at c01orcl and c02orcl

Oct 02 18:21:22 2023 [0-1] verifying available flashback restore points

Oct 02 18:21:22 2023 [0-1] verifying DG Broker is disabled

Oct 02 18:21:23 2023 [0-1] looking up prior execution history

Oct 02 18:21:23 2023 [0-1] purging script execution state from database c01orcl

Oct 02 18:21:23 2023 [0-1] purging script execution state from database c02orcl

Oct 02 18:21:24 2023 [0-1] starting new execution of script

 

### Stage 1: Backup user environment in case rolling upgrade is aborted

Oct 02 18:21:24 2023 [1-1] stopping media recovery on c02orcl

Oct 02 18:21:27 2023 [1-1] creating restore point PRU_0000_0001 on database c02orcl

Oct 02 18:21:27 2023 [1-1] backing up current control file on c02orcl

Oct 02 18:21:27 2023 [1-1] created backup control file /u01/app/oracle/product/18.3.0/dbhome_1/dbs/PRU_0001_c02orcl_f.f

Oct 02 18:21:27 2023 [1-1] creating restore point PRU_0000_0001 on database c01orcl

Oct 02 18:21:28 2023 [1-1] backing up current control file on c01orcl

Oct 02 18:21:28 2023 [1-1] created backup control file /u01/app/oracle/product/18.3.0/dbhome_1/dbs/PRU_0001_c01orcl_f.f

 

NOTE: Restore point PRU_0000_0001 and backup control file PRU_0001_c02orcl_f.f

      can be used to restore c02orcl back to its original state as a

      physical standby, in case the rolling upgrade operation needs to be aborted

      prior to the first switchover done in Stage 4.

 

### Stage 2: Create transient logical standby from existing physical standby

Oct 02 18:21:29 2023 [2-1] verifying RAC is disabled at c02orcl

 

WARN: c02orcl is a RAC database.  Before this script can continue, you

      must manually reduce the RAC to a single instance, disable the RAC, and

      restart instance c02orcl1 in mounted mode.  This can be accomplished

      with the following steps:

 

        1) Shutdown all instances other than instance c02orcl1.

           eg: srvctl stop instance -d c02orcl -i c02orcl2 -o abort

 

        2) On instance c02orcl1, set the cluster_database parameter to FALSE.

           eg: SQL> alter system set cluster_database=false scope=spfile;

 

        3) Shutdown instance c02orcl1.

           eg: SQL> shutdown abort;

 

        4) Startup instance c02orcl1 in mounted mode.

           eg: SQL> startup mount;

 

      Once these steps have been performed, enter 'y' to continue the script.

      If desired, you may enter 'n' to exit the script to perform the required

      steps, and recall the script to resume from this point.

 

Are you ready to continue? (y/n):

--------------------------------------------------------------------------------------------------------

 

Oct 02 18:24:53 2023 [2-1] continuing

Oct 02 18:24:53 2023 [2-1] verifying RAC is disabled at c02orcl

Oct 02 18:24:54 2023 [2-1] verifying database roles

Oct 02 18:24:54 2023 [2-1] verifying physical standby is mounted

Oct 02 18:24:55 2023 [2-1] verifying database protection mode

Oct 02 18:24:55 2023 [2-1] verifying transient logical standby datatype support

Oct 02 18:24:57 2023 [2-2] starting media recovery on c02orcl

Oct 02 18:25:04 2023 [2-2] confirming media recovery is running

Oct 02 18:25:04 2023 [2-2] waiting for apply lag to fall under 30 seconds

Oct 02 18:25:10 2023 [2-2] apply lag measured at 5 seconds

Oct 02 18:25:11 2023 [2-2] stopping media recovery on c02orcl

Oct 02 18:25:12 2023 [2-2] executing dbms_logstdby.build on database c01orcl

Oct 02 18:25:30 2023 [2-2] converting physical standby into transient logical standby

Oct 02 18:25:38 2023 [2-3] opening database c02orcl

./physru.sh: line 3337: warning: here-document at line 3332 delimited by end-of-file (wanted `EOF')

Oct 02 18:25:52 2023 [2-4] configuring transient logical standby parameters for rolling upgrade

Oct 02 18:25:53 2023 [2-4] starting logical standby on database c02orcl

Oct 02 18:25:55 2023 [2-4] waiting until logminer dictionary has fully loaded

Oct 02 18:26:59 2023 [2-4] dictionary load 32% complete

Oct 02 18:27:10 2023 [2-4] dictionary load 62% complete

Oct 02 18:27:20 2023 [2-4] dictionary load 99% complete

Oct 02 18:43:53 2023 [2-4] dictionary load is complete

Oct 02 18:43:54 2023 [2-4] waiting for apply lag to fall under 30 seconds

Oct 02 18:44:01 2023 [2-4] apply lag measured at 7 seconds

 

NOTE: Database c02orcl is now ready to be upgraded.  This script has left the

      database open in case you want to perform any further tasks before

      upgrading the database.  Once the upgrade is complete, the database must

      opened in READ WRITE mode before this script can be called to resume the

      rolling upgrade.

 

NOTE: Database c02orcl may be reverted back to a RAC database upon completion

      of the rdbms upgrade.  This can be accomplished by performing the

      following steps:

 

          1) On instance c02orcl1, set the cluster_database parameter to TRUE.

          eg: SQL> alter system set cluster_database=true scope=spfile;

 

          2) Shutdown instance c02orcl1.

          eg: SQL> shutdown abort;

 

          3) Startup and open all instances for database c02orcl.

          eg: srvctl start database -d c02orcl

dbms_rolling approach, the commands to run:

 

exec dbms_rolling.init_plan(future_primary=>'c02orcl');

(the value of future_primary must be a db_unique_name listed in log_archive_config)

exec dbms_rolling.build_plan;

exec dbms_rolling.start_plan;
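Should the procedure need to be abandoned before the switchover, DBMS_ROLLING also provides documented rollback calls; a hedged sketch (consult the package documentation for the exact restrictions in your release):

```sql
-- Current state of the rolling upgrade as a whole:
select * from dba_rolling_status;

-- Abort the upgrade and flash the transient logical standby back to a
-- physical standby (only valid before the role transition):
exec dbms_rolling.rollback_plan;

-- Remove the plan and its rolling-upgrade metadata entirely:
exec dbms_rolling.destroy_plan;
```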

 

 

    Note 1 for this step

     A reminder: during the conversion to a logical standby, do not forget to open all pluggable databases on every instance of the logical standby and record that state with the command alter pluggable database all save state instances=all.

    Note 2 for this step

While the dbms_rolling procedure runs, its progress can be followed in these data dictionary views:

dba_rolling_parameters
dba_rolling_unsupported
dba_rolling_plan
dba_rolling_events
dba_rolling_status
dba_rolling_databases

The most important one is dba_rolling_plan. After dbms_rolling.start_plan completes successfully, its output looks like the following:

 

......

1    1    c01orcl    c01orcl    START    SUCCESS    COMPLETE    Notify Data Guard broker that DBMS_ROLLING has started    0        01-OCT-23 05.59.33.765609000 PM    01-OCT-23 05.59.33.777607000 PM
1    2    c01orcl    c02orcl    START    SUCCESS    COMPLETE    Notify Data Guard broker that DBMS_ROLLING has started    0        01-OCT-23 05.59.33.785985000 PM    01-OCT-23 05.59.33.810804000 PM
2    3    c01orcl    c01orcl    START    SUCCESS    COMPLETE    Verify database is a primary    0        01-OCT-23 05.59.33.820043000 PM    01-OCT-23 05.59.33.821118000 PM
2    4    c01orcl    c01orcl    START    SUCCESS    COMPLETE    Verify MAXIMUM PROTECTION is disabled    0        01-OCT-23 05.59.33.822495000 PM    01-OCT-23 05.59.33.829887000 PM
3    5    c01orcl    c02orcl    START    SUCCESS    COMPLETE    Verify database is a physical standby    0        01-OCT-23 05.59.33.837520000 PM    01-OCT-23 05.59.33.837953000 PM
3    6    c01orcl    c02orcl    START    SUCCESS    COMPLETE    Verify physical standby is mounted    0        01-OCT-23 05.59.33.844561000 PM    01-OCT-23 05.59.33.844779000 PM
4    7    c01orcl    c02orcl    START    SUCCESS    COMPLETE    Verify future primary is configured with standby redo logs    0        01-OCT-23 05.59.33.852133000 PM    01-OCT-23 05.59.33.865419000 PM
5    8    c01orcl    c01orcl    START    SUCCESS    COMPLETE    Verify server parameter file exists and is modifiable    0        01-OCT-23 05.59.33.871090000 PM    01-OCT-23 05.59.33.871341000 PM
5    9    c01orcl    c02orcl    START    SUCCESS    COMPLETE    Verify server parameter file exists and is modifiable    0        01-OCT-23 05.59.33.879613000 PM    01-OCT-23 05.59.33.879815000 PM
6    10    c01orcl    c01orcl    START    SUCCESS    COMPLETE    Verify Data Guard broker configuartion is enabled    0        01-OCT-23 05.59.33.883636000 PM    01-OCT-23 05.59.33.884035000 PM
6    11    c01orcl    c02orcl    START    SUCCESS    COMPLETE    Verify Data Guard broker configuartion is enabled    0        01-OCT-23 05.59.33.893851000 PM    01-OCT-23 05.59.33.894038000 PM
7    12    c01orcl    c01orcl    START    SUCCESS    COMPLETE    Verify Fast-Start Failover is disabled    0        01-OCT-23 05.59.33.897599000 PM    01-OCT-23 05.59.33.897756000 PM
7    13    c01orcl    c02orcl    START    SUCCESS    COMPLETE    Verify Fast-Start Failover is disabled    0        01-OCT-23 05.59.33.904082000 PM    01-OCT-23 05.59.33.904256000 PM
8    14    c01orcl    c01orcl    START    SUCCESS    COMPLETE    Verify fast recovery area is configured    0        01-OCT-23 05.59.33.908784000 PM    01-OCT-23 05.59.33.946399000 PM
8    15    c01orcl    c01orcl    START    SUCCESS    COMPLETE    Verify available flashback restore points    0        01-OCT-23 05.59.33.947901000 PM    01-OCT-23 05.59.33.967184000 PM
8    16    c01orcl    c02orcl    START    SUCCESS    COMPLETE    Verify fast recovery area is configured    0        01-OCT-23 05.59.33.972590000 PM    01-OCT-23 05.59.33.988973000 PM
8    17    c01orcl    c02orcl    START    SUCCESS    COMPLETE    Verify available flashback restore points    0        01-OCT-23 05.59.33.993987000 PM    01-OCT-23 05.59.34.002199000 PM
9    18    c01orcl    c02orcl    START    SUCCESS    COMPLETE    Stop media recovery    0        01-OCT-23 05.59.34.011066000 PM    01-OCT-23 05.59.35.048752000 PM
10    19    c01orcl    c02orcl    START    SUCCESS    COMPLETE    Drop guaranteed restore point DBMSRU_INITIAL    0        01-OCT-23 05.59.35.060059000 PM    01-OCT-23 05.59.35.097693000 PM
10    20    c01orcl    c02orcl    START    SUCCESS    COMPLETE    Create guaranteed restore point DBMSRU_INITIAL    0        01-OCT-23 05.59.35.106405000 PM    01-OCT-23 05.59.48.768301000 PM
11    21    c01orcl    c01orcl    START    SUCCESS    COMPLETE    Drop guaranteed restore point DBMSRU_INITIAL    0        01-OCT-23 05.59.48.774330000 PM    01-OCT-23 05.59.48.818631000 PM
11    22    c01orcl    c01orcl    START    SUCCESS    COMPLETE    Create guaranteed restore point DBMSRU_INITIAL    0        01-OCT-23 05.59.48.819982000 PM    01-OCT-23 05.59.58.127047000 PM
12    23    c01orcl    c02orcl    START    SUCCESS    COMPLETE    Start media recovery    0        01-OCT-23 05.59.58.138471000 PM    01-OCT-23 06.00.05.242673000 PM
12    24    c01orcl    c02orcl    START    SUCCESS    COMPLETE    Verify media recovery is running    0        01-OCT-23 06.00.05.251237000 PM    01-OCT-23 06.00.05.252468000 PM
13    25    c01orcl    c01orcl    START    SUCCESS    COMPLETE    Verify db_recovery_file_dest has been specified    0        01-OCT-23 06.00.05.258677000 PM    01-OCT-23 06.00.05.279936000 PM
13    26    c01orcl    c01orcl    START    SUCCESS    COMPLETE    Backup control file to rolling_change_backup.f    0        01-OCT-23 06.00.05.281109000 PM    01-OCT-23 06.00.05.938718000 PM
13    27    c01orcl    c02orcl    START    SUCCESS    COMPLETE    Verify db_recovery_file_dest has been specified    0        01-OCT-23 06.00.05.944642000 PM    01-OCT-23 06.00.05.949419000 PM
13    28    c01orcl    c02orcl    START    SUCCESS    COMPLETE    Backup control file to rolling_change_backup.f    0        01-OCT-23 06.00.05.958352000 PM    01-OCT-23 06.00.06.532592000 PM
14    29    c01orcl    c01orcl    START    SUCCESS    COMPLETE    Get current supplemental logging on the primary database    0        01-OCT-23 06.00.06.537821000 PM    01-OCT-23 06.00.06.538041000 PM
15    30    c01orcl    c01orcl    START    SUCCESS    COMPLETE    Get current redo branch of the primary database    0        01-OCT-23 06.00.06.540305000 PM    01-OCT-23 06.00.06.542866000 PM
16    31    c01orcl    c02orcl    START    SUCCESS    COMPLETE    Wait until recovery is active on the primary's redo branch    0        01-OCT-23 06.00.06.549227000 PM    01-OCT-23 06.00.06.551788000 PM
17    32    c01orcl    c02orcl    START    SUCCESS    COMPLETE    Reduce to a single instance if database is a RAC    0        01-OCT-23 06.00.06.557263000 PM    01-OCT-23 06.00.08.459858000 PM
18    33    c01orcl    c02orcl    START    SUCCESS    COMPLETE    Verify only a single instance is active if future primary is RAC    0        01-OCT-23 06.00.08.470384000 PM    01-OCT-23 06.00.08.470604000 PM
19    34    c01orcl    c02orcl    START    SUCCESS    COMPLETE    Stop media recovery    0        01-OCT-23 06.00.08.479704000 PM    01-OCT-23 06.00.09.488575000 PM
20    35    c01orcl    c01orcl    START    SUCCESS    COMPLETE    Execute dbms_logstdby.build    0        01-OCT-23 06.00.09.493866000 PM    01-OCT-23 06.01.22.645916000 PM
21    36    c01orcl    c02orcl    START    SUCCESS    COMPLETE    Convert into a transient logical standby    0        01-OCT-23 06.01.22.796084000 PM    01-OCT-23 06.01.30.619452000 PM
22    37    c01orcl    c02orcl    START    SUCCESS    COMPLETE    Open database including instance-peers if RAC    0        01-OCT-23 06.01.30.699637000 PM    01-OCT-23 06.01.47.952456000 PM
23    38    c01orcl    c02orcl    START    SUCCESS    COMPLETE    Verify logical standby is open read/write    0        01-OCT-23 06.01.48.069977000 PM    01-OCT-23 06.01.48.070190000 PM
23    39    c01orcl    c02orcl    START    SUCCESS    COMPLETE    Get redo branch of transient logical standby    0        01-OCT-23 06.01.48.099317000 PM    01-OCT-23 06.01.48.112069000 PM
23    40    c01orcl    c02orcl    START    SUCCESS    COMPLETE    Get reset scn of transient logical redo branch    0        01-OCT-23 06.01.48.182857000 PM    01-OCT-23 06.01.48.198615000 PM
23    41    c01orcl    c02orcl    START    SUCCESS    COMPLETE    Configure logical standby parameters    0        01-OCT-23 06.01.48.212137000 PM    01-OCT-23 06.01.49.124815000 PM
23    42    c01orcl    c02orcl    START    SUCCESS    COMPLETE    Start logical standby apply    0        01-OCT-23 06.01.49.136317000 PM    01-OCT-23 06.01.50.952436000 PM
24    43    c01orcl    c02orcl    START    SUCCESS    COMPLETE    Wait until logminer dictionary has been loaded    0    loaded: 100%%    01-OCT-23 06.01.51.006000000 PM    01-OCT-23 06.03.29.166443000 PM
25    44    c01orcl    c02orcl    START    SUCCESS    COMPLETE    Enable compatibility advance despite presence of GRPs    0        01-OCT-23 06.03.29.235201000 PM    01-OCT-23 06.03.29.268315000 PM
26    45    c01orcl    c01orcl    START    SUCCESS    COMPLETE    Log pre-switchover instructions to events table    0        01-OCT-23 06.03.29.300574000 PM    01-OCT-23 06.03.29.359985000 PM
27    46    c01orcl    c02orcl    START    SUCCESS    COMPLETE    Record start of user upgrade of c02orcl    0        01-OCT-23 06.03.29.393410000 PM    01-OCT-23 06.03.29.554957000 PM
28    47    c01orcl    c02orcl    SWITCH    PENDING    PENDING    Verify database is in OPENRW mode    0
29    48    c01orcl    c02orcl    SWITCH    PENDING    PENDING    Record completion of user upgrade of c02orcl    0
30    49    c01orcl    c02orcl    SWITCH    PENDING    PENDING    Scan LADs for presence of c01orcl destination    0
31    50    c01orcl    c02orcl    SWITCH    PENDING    PENDING    Test if c01orcl is reachable using configured TNS service    0
32    51    c01orcl    c01orcl    SWITCH    PENDING    PENDING    Call Data Guard broker to enable redo transport to c02orcl    0
33    52    c01orcl    c01orcl    SWITCH    PENDING    PENDING    Archive all current online redo logs    0
34    53    c01orcl    c02orcl    SWITCH    PENDING    PENDING    Archive all current online redo logs    0
35    54    c01orcl    c02orcl    SWITCH    PENDING    PENDING    Stop logical standby apply    0
36    55    c01orcl    c02orcl    SWITCH    PENDING    PENDING    Start logical standby apply    0
37    56    c01orcl    c02orcl    SWITCH    PENDING    PENDING    Wait until apply lag has fallen below 600 seconds    0
38    57    c01orcl    c01orcl    SWITCH    PENDING    PENDING    Notify Data Guard broker that switchover to logical standby database is starting    0
39    58    c01orcl    c01orcl    SWITCH    PENDING    PENDING    Log post-switchover instructions to events table    0
40    59    c01orcl    c01orcl    SWITCH    PENDING    PENDING    Switch database to a logical standby    0
41    60    c01orcl    c01orcl    SWITCH    PENDING    PENDING    Notify Data Guard broker that switchover to logical standby database has completed    0
42    61    c01orcl    c02orcl    SWITCH    PENDING    PENDING    Wait until end-of-redo has been applied    0
43    62    c01orcl    c01orcl    SWITCH    PENDING    PENDING    Archive all current online redo logs    0
44    63    c01orcl    c01orcl    SWITCH    PENDING    PENDING    Notify Data Guard broker that switchover to primary is starting    0
45    64    c01orcl    c02orcl    SWITCH    PENDING    PENDING    Switch database to a primary    0
46    65    c01orcl    c02orcl    SWITCH    PENDING    PENDING    Notify Data Guard broker that switchover to primary has completed    0
47    66    c01orcl    c01orcl    SWITCH    PENDING    PENDING    Enable compatibility advance despite presence of GRPs    0
48    67    c01orcl    c01orcl    SWITCH    PENDING    PENDING    Synchronize plan with new primary    0
49    68    c02orcl    c01orcl    FINISH    PENDING    PENDING    Verify only a single instance is active    0
50    69    c02orcl    c01orcl    FINISH    PENDING    PENDING    Verify database is mounted    0
51    70    c02orcl    c01orcl    FINISH    PENDING    PENDING    Flashback database    0
52    71    c02orcl    c01orcl    FINISH    PENDING    PENDING    Convert into a physical standby    0
53    72    c02orcl    c02orcl    FINISH    PENDING    PENDING    Verify database is open    0
54    73    c02orcl    c02orcl    FINISH    PENDING    PENDING    Save the DBID of the new primary    0
55    74    c02orcl    c02orcl    FINISH    PENDING    PENDING    Save the logminer session start scn    0
56    75    c02orcl    c01orcl    FINISH    PENDING    PENDING    Wait until transient logical redo branch has been registered    0
57    76    c02orcl    c01orcl    FINISH    PENDING    PENDING    Start media recovery    0
58    77    c02orcl    c01orcl    FINISH    PENDING    PENDING    Wait until apply/recovery has started on the transient branch    0
59    78    c02orcl    c01orcl    FINISH    PENDING    PENDING    Wait until upgrade redo has been fully recovered    0
60    79    c02orcl    c01orcl    FINISH    PENDING    PENDING    Prevent compatibility advance if GRPs are present    0
61    80    c02orcl    c02orcl    FINISH    PENDING    PENDING    Prevent compatibility advance if GRPs are present    0
62    81    c02orcl    c01orcl    FINISH    PENDING    PENDING    Drop guaranteed restore point DBMSRU_INITIAL    0
62    82    c02orcl    c02orcl    FINISH    PENDING    PENDING    Drop guaranteed restore point DBMSRU_INITIAL    0
63    83    c02orcl    c02orcl    FINISH    PENDING    PENDING    Purge logical standby metadata from database if necessary    0
64    84    c02orcl    c01orcl    FINISH    PENDING    PENDING    Notify Data Guard broker that DBMS_ROLLING has finished    0
64    85    c02orcl    c02orcl    FINISH    PENDING    PENDING    Notify Data Guard broker that DBMS_ROLLING has finished    0
65    86    c02orcl    c02orcl    FINISH    PENDING    PENDING    Restore Supplemental Logging    0
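A compact way to follow this plan is to aggregate the listing by phase; a small sketch, assuming the PHASE and PROGRESS columns shown in the output above:

```sql
-- Instructions completed/remaining per phase (START, SWITCH, FINISH):
select phase, progress, count(*) as steps
from   dba_rolling_plan
group  by phase, progress
order  by phase, progress;
```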


 

    Note 3 for this step

    After the standby is converted into a logical standby, common problems can be spotted through these views:

    select * from v$logstdby_progress; (make sure applied_scn keeps up closely with latest_scn)
    select * from v$logstdby_process;
    select * from v$logstdby_state;
    select * from v$logstdby_stats;
    select * from v$logstdby_transaction;

The command that starts logical apply is: alter database start logical standby apply immediate;

 

SQL> alter database start logical standby apply immediate;

SQL> select * from v$logstdby_process;

399    28337    -1    99288    COORDINATOR    16116    ORA-16116: no work available    3463307    1

409    26761    0    99312    ANALYZER    16116    ORA-16116: no work available    3463306    1

19    43161    1    99314    APPLIER    16116    ORA-16116: no work available    3453850    1

161    34831    2    99316    APPLIER    16116    ORA-16116: no work available    3463275    1

290    1174    3    99318    APPLIER    16117    ORA-16117: processing    3463287    1

411    25254    4    99320    APPLIER    16125    ORA-16125: large transaction 7 32 890 is waiting for more data    3463305    1

32    50149    5    99322    APPLIER    16116    ORA-16116: no work available        1

163    10891    6    99324    APPLIER    16116    ORA-16116: no work available        1

293    64813    7    99326    APPLIER    16116    ORA-16116: no work available        1

413    11549    8    99328    APPLIER    16116    ORA-16116: no work available        1

34    8187    9    99330    APPLIER    16116    ORA-16116: no work available        1

165    46730    10    99332    APPLIER    16116    ORA-16116: no work available        1

295    56090    11    99334    APPLIER    16116    ORA-16116: no work available        1

31    1620    1024    99294    READER    16117    ORA-16117: processing    3463834    1

406    45033    1025    99297    BUILDER    44604    ORA-44604: Barrier synchronization on DDL with XID 9.7.1100 (waiting on 2 transactions)    3463307    1

157    54996    1026    99299    PREPARER    16127    ORA-16127: stalled waiting for additional transactions to be applied    3463830    1

 

 

   Note 4 for this step

    During this step, watch the single most important Data Guard concern: redo transport.

    In the first part of the step, transport goes to a physical standby, and every log must reach the physical standby.

    Check on the primary with: asmcmd ls +fra/c01orcl/archivelog/<date>/ | sort -n -t . -k 3

    Check on the standby with: asmcmd ls +fra/c02orcl/archivelog/<date>/ | sort -n -t . -k 3

    In the later part of the step, transport goes to a logical standby, and every log must reach the logical standby.

Check on the primary with: asmcmd ls +fra/c01orcl/archivelog/<date>/ | sort -n -t . -k 3

 

[grid@station5 ~]$ asmcmd ls +fra/c01orcl/archivelog/2023_10_02/  | sort -n -t . -k 3

thread_1_seq_85.431.1149136313

thread_2_seq_81.429.1149136313

thread_2_seq_82.430.1149136313

thread_2_seq_83.432.1149136313

thread_2_seq_84.433.1149136325

thread_1_seq_86.434.1149136669

thread_2_seq_85.435.1149136681

thread_1_seq_87.442.1149138393

thread_2_seq_86.443.1149138483

......

thread_1_seq_127.518.1149196435

thread_2_seq_121.519.1149197131

thread_1_seq_128.520.1149198235

thread_2_seq_122.521.1149198931

thread_1_seq_129.523.1149200035

thread_2_seq_123.524.1149200733

thread_1_seq_130.525.1149201835

thread_2_seq_124.526.1149202533

thread_1_seq_131.527.1149203635

thread_2_seq_125.528.1149204333

 

Check on the standby with: asmcmd ls +fra/c02orcl/foreign_archivelog/c01orcl/<date>/ | sort -n -t . -k 3

 

[grid@station7 ~]$ asmcmd ls +fra/c02orcl/foreign_archivelog/c01orcl/2023_10_02/ | sort -n -t . -k 3

thread_1_seq_118.497.1149186403

thread_1_seq_119.496.1149186403

thread_1_seq_120.495.1149186403

thread_2_seq_113.493.1149186403

thread_2_seq_114.492.1149186403

thread_2_seq_115.494.1149186403

......

thread_2_seq_121.576.1149197133

thread_1_seq_128.577.1149198235

thread_2_seq_122.579.1149198931

thread_1_seq_129.580.1149200035

thread_2_seq_123.582.1149200733

thread_1_seq_130.583.1149201835

thread_2_seq_124.585.1149202533

thread_1_seq_131.586.1149203637

thread_2_seq_125.589.1149204333
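The `sort -n -t . -k 3` idiom used above orders the ASM file names by their third dot-separated field, the ASM creation stamp, so the listings come out in creation order. A self-contained illustration using three names taken from the listing above:

```shell
# Sort a few ASM archived-log file names by their 3rd dot-field
# (the creation stamp), which puts them in creation order:
printf '%s\n' \
  'thread_1_seq_86.434.1149136669' \
  'thread_2_seq_81.429.1149136313' \
  'thread_1_seq_85.431.1149136313' \
  | sort -n -t . -k 3
# -> thread_1_seq_85.431.1149136313
#    thread_2_seq_81.429.1149136313
#    thread_1_seq_86.434.1149136669
```

Ties on the creation stamp (the two files from 1149136313) fall back to sort's whole-line comparison, which is why thread_1 appears before thread_2 there.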

 

4. Upgrading the standby:

 

    The standby upgrade procedure is the same for both approaches, and the command line is recommended. (Using dbua in a multitenant environment causes many problems. For reference, the basic dbua steps are: 1. choose a parallel upgrade; 2. recompile invalid objects; 3. upgrade time zone data; 4. gather optimizer statistics before the upgrade; 5. be sure to use a new guaranteed restore point as the backup option. The dbua method leaves junk files such as parameter files in the dbs directory under ORACLE_HOME, and it also generates a superfluous tnsnames.ora file.)

    The first command-line step is to run the pre-upgrade check from the 18c environment:

 

[oracle@station7 ~]$ /u01/app/oracle/product/18.3.0/dbhome_1/jdk/bin/java -jar /u01/app/oracle/product/19.3.0/dbhome_1/rdbms/admin/preupgrade.jar

================== PREUPGRADE SUMMARY ==================

/u01/app/oracle/cfgtoollogs/c02orcl/preupgrade/preupgrade.log
/u01/app/oracle/cfgtoollogs/c02orcl/preupgrade/preupgrade_fixups.sql
/u01/app/oracle/cfgtoollogs/c02orcl/preupgrade/postupgrade_fixups.sql

Execute fixup scripts across the entire CDB:

Before upgrade:

1. Execute preupgrade fixups with the below command
$ORACLE_HOME/perl/bin/perl -I$ORACLE_HOME/perl/lib -I$ORACLE_HOME/rdbms/admin $ORACLE_HOME/rdbms/admin/catcon.pl -l /u01/app/oracle/cfgtoollogs/c02orcl/preupgrade/ -b preup_c02orcl /u01/app/oracle/cfgtoollogs/c02orcl/preupgrade/preupgrade_fixups.sql

2. Review logs under /u01/app/oracle/cfgtoollogs/c02orcl/preupgrade/

After the upgrade:

1. Execute postupgrade fixups with the below command
$ORACLE_HOME/perl/bin/perl -I$ORACLE_HOME/perl/lib -I$ORACLE_HOME/rdbms/admin $ORACLE_HOME/rdbms/admin/catcon.pl -l /u01/app/oracle/cfgtoollogs/c02orcl/preupgrade/ -b postup_c02orcl /u01/app/oracle/cfgtoollogs/c02orcl/preupgrade/postupgrade_fixups.sql

2. Review logs under /u01/app/oracle/cfgtoollogs/c02orcl/preupgrade/

Preupgrade complete: 2023-10-01T18:30:12

 

[oracle@station7 ~]$ $ORACLE_HOME/perl/bin/perl -I$ORACLE_HOME/perl/lib -I$ORACLE_HOME/rdbms/admin $ORACLE_HOME/rdbms/admin/catcon.pl -l /u01/app/oracle/cfgtoollogs/c02orcl/preupgrade/ -b preup_c02orcl /u01/app/oracle/cfgtoollogs/c02orcl/preupgrade/preupgrade_fixups.sql

 

    The second command-line step is to switch the standby over to running from the 19c environment:

 

1. In the original 18c environment, as the oracle user, shut down the standby on its first host. (Do not change cluster_database to true, even if physru.sh suggests doing so. For the dbms_rolling approach, set cluster_database to false manually.)

2. As the root user, on both standby hosts, edit /etc/oratab so that the standby's ORACLE_HOME is /u01/app/oracle/product/19.3.0/dbhome_1.

3. As the root user on both standby hosts, run ". oraenv" to set the +ASM1/+ASM2 environment respectively, then run: setasmgid o=/u01/app/oracle/product/19.3.0/dbhome_1/bin/oracle. This command gives the new database software read/write access to the ASM disk groups.

4. As the grid user on the first standby host, run srvctl stop scan_listener and srvctl stop listener. On both standby hosts, change the listeners' static registration to point to /u01/app/oracle/product/19.3.0/dbhome_1, then on the first standby host run srvctl start scan_listener and srvctl start listener.

5. As the oracle user on both standby hosts, copy /u01/app/oracle/product/18.3.0/dbhome_1/network/admin/tnsnames.ora to /u01/app/oracle/product/19.3.0/dbhome_1/network/admin/tnsnames.ora.

6. As the oracle user on the first standby host, run ". oraenv" to set the 19c environment, then run: srvctl upgrade database -d c02orcl -o /u01/app/oracle/product/19.3.0/dbhome_1

7. As the oracle user on both standby hosts, update .bash_profile accordingly.

8. In the 19c environment, as the oracle user on the first standby host, run startup upgrade.

9. Open all pluggable databases with: alter pluggable database all open upgrade;

 

     The third command-line step is to run dbupgrade against the standby in the 19c environment:

 

1. dbupgrade -n <number> (the number of parallel SQL processes)

2. Set the cluster_database parameter to true.

3. shutdown immediate

4. In the 19c environment, as the oracle user on the first standby host, run: srvctl start database -d c02orcl -o open. The -o open clause is essential; without it the logical standby c02orcl opens read-only (the clusterware still registers it as a physical standby), and the upgrade cannot proceed.

5. alter pluggable database all open instances=all;

6. /u01/app/oracle/product/19.3.0/dbhome_1/OPatch/datapatch -pdbs='pdb$seed','pdb1_1'

7. srvctl stop database -d c02orcl

8. srvctl start database -d c02orcl -o open

9. Run @?/rdbms/admin/utlrp.sql

10. Run @/u01/app/oracle/cfgtoollogs/c02orcl/preupgrade/postupgrade_fixups.sql
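After datapatch and utlrp.sql, it is worth confirming that the registry reports the 19c components as expected; a minimal check using the standard dba_registry columns:

```sql
-- Every component should show the 19c version and a VALID status:
select comp_id, version, status from dba_registry order by comp_id;
```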

 

Output of the dbupgrade run:

[oracle@station7 dbs]$ dbupgrade -n 4

 

Argument list for [/u01/app/oracle/product/19.3.0/dbhome_1/rdbms/admin/catctl.pl]

For Oracle internal use only A = 0

Run in                       c = 0

Do not run in                C = 0

Input Directory              d = 0

Echo OFF                     e = 1

Simulate                     E = 0

Forced cleanup               F = 0

Log Id                       i = 0

Child Process                I = 0

Log Dir                      l = 0

Priority List Name           L = 0

Upgrade Mode active          M = 0

SQL Process Count            n = 4

SQL PDB Process Count        N = 0

Open Mode Normal             o = 0

Start Phase                  p = 0

End Phase                    P = 0

Reverse Order                r = 0

AutoUpgrade Resume           R = 0

Script                       s = 0

Serial Run                   S = 0

RO User Tablespaces          T = 0

Display Phases               y = 0

Debug catcon.pm              z = 0

Debug catctl.pl              Z = 0

 

catctl.pl VERSION: [19.0.0.0.0]

           STATUS: [Production]

            BUILD: [RDBMS_19.3.0.0.0DBRU_LINUX.X64_190417]

 

 

/u01/app/oracle/product/19.3.0/dbhome_1/rdbms/admin/orahome = [/u01/app/oracle/product/19.3.0/dbhome_1]

/u01/app/oracle/product/19.3.0/dbhome_1/bin/orabasehome = [/u01/app/oracle/product/19.3.0/dbhome_1]

catctlGetOraBaseLogDir = [/u01/app/oracle/product/19.3.0/dbhome_1]

 

Analyzing file /u01/app/oracle/product/19.3.0/dbhome_1/rdbms/admin/catupgrd.sql

 

Log file directory = [/tmp/cfgtoollogs/upgrade20231001192633]

 

catcon::set_log_file_base_path: ALL catcon-related output will be written to [/tmp/cfgtoollogs/upgrade20231001192633/catupgrd_catcon_93243.lst]

 

catcon::set_log_file_base_path: catcon: See [/tmp/cfgtoollogs/upgrade20231001192633/catupgrd*.log] files for output generated by scripts

 

catcon::set_log_file_base_path: catcon: See [/tmp/cfgtoollogs/upgrade20231001192633/catupgrd_*.lst] files for spool files, if any

 

 

Number of Cpus        = 4

Database Name         = c02orcl

DataBase Version      = 18.0.0.0.0

catcon::set_log_file_base_path: ALL catcon-related output will be written to [/u01/app/oracle/product/19.3.0/dbhome_1/cfgtoollogs/c02orcl/upgrade20231001192649/catupgrdcdbroot_catcon_93243.lst]

 

catcon::set_log_file_base_path: catcon: See [/u01/app/oracle/product/19.3.0/dbhome_1/cfgtoollogs/c02orcl/upgrade20231001192649/catupgrdcdbroot*.log] files for output generated by scripts

 

catcon::set_log_file_base_path: catcon: See [/u01/app/oracle/product/19.3.0/dbhome_1/cfgtoollogs/c02orcl/upgrade20231001192649/catupgrdcdbroot_*.lst] files for spool files, if any

 

 

Log file directory = [/u01/app/oracle/product/19.3.0/dbhome_1/cfgtoollogs/c02orcl/upgrade20231001192649]

 

Parallel SQL Process Count (PDB)      = 2

Parallel SQL Process Count (CDB$ROOT) = 4

Concurrent PDB Upgrades               = 2

Generated PDB Inclusion:[PDB$SEED PDB1_1]

Components in [CDB$ROOT]

    Installed [APS CATALOG CATJAVA CATPROC CONTEXT DV JAVAVM OLS ORDIM OWM RAC SDO XDB XML XOQ]

Not Installed [APEX EM MGW ODM WK]

 

------------------------------------------------------

Phases [0-107]         Start Time:[2023_10_01 19:27:11]

Container Lists Inclusion:[CDB$ROOT] Exclusion:[NONE]

------------------------------------------------------

***********   Executing Change Scripts   ***********

Serial   Phase #:0    [CDB$ROOT] Files:1    Time: 34s

***************   Catalog Core SQL   ***************

Serial   Phase #:1    [CDB$ROOT] Files:5    Time: 60s

Restart  Phase #:2    [CDB$ROOT] Files:1    Time: 2s

***********   Catalog Tables and Views   ***********

Parallel Phase #:3    [CDB$ROOT] Files:19   Time: 23s

Restart  Phase #:4    [CDB$ROOT] Files:1    Time: 2s

*************   Catalog Final Scripts   ************

Serial   Phase #:5    [CDB$ROOT] Files:7    Time: 26s

*****************   Catproc Start   ****************

Serial   Phase #:6    [CDB$ROOT] Files:1    Time: 18s

*****************   Catproc Types   ****************

Serial   Phase #:7    [CDB$ROOT] Files:2    Time: 18s

Restart  Phase #:8    [CDB$ROOT] Files:1    Time: 3s

****************   Catproc Tables   ****************

Parallel Phase #:9    [CDB$ROOT] Files:67   Time: 26s

Restart  Phase #:10   [CDB$ROOT] Files:1    Time: 4s

*************   Catproc Package Specs   ************

Serial   Phase #:11   [CDB$ROOT] Files:1    Time: 76s

Restart  Phase #:12   [CDB$ROOT] Files:1    Time: 3s

**************   Catproc Procedures   **************

Parallel Phase #:13   [CDB$ROOT] Files:94   Time: 14s

Restart  Phase #:14   [CDB$ROOT] Files:1    Time: 4s

Parallel Phase #:15   [CDB$ROOT] Files:120  Time: 19s

Restart  Phase #:16   [CDB$ROOT] Files:1    Time: 3s

Serial   Phase #:17   [CDB$ROOT] Files:22   Time: 9s

Restart  Phase #:18   [CDB$ROOT] Files:1    Time: 3s

*****************   Catproc Views   ****************

Parallel Phase #:19   [CDB$ROOT] Files:32   Time: 19s

Restart  Phase #:20   [CDB$ROOT] Files:1    Time: 2s

Serial   Phase #:21   [CDB$ROOT] Files:3    Time: 18s

Restart  Phase #:22   [CDB$ROOT] Files:1    Time: 4s

Parallel Phase #:23   [CDB$ROOT] Files:25   Time: 119s

Restart  Phase #:24   [CDB$ROOT] Files:1    Time: 4s

Parallel Phase #:25   [CDB$ROOT] Files:12   Time: 83s

Restart  Phase #:26   [CDB$ROOT] Files:1    Time: 4s

Serial   Phase #:27   [CDB$ROOT] Files:1    Time: 0s

Serial   Phase #:28   [CDB$ROOT] Files:3    Time: 9s

Serial   Phase #:29   [CDB$ROOT] Files:1    Time: 0s

Restart  Phase #:30   [CDB$ROOT] Files:1    Time: 2s

***************   Catproc CDB Views   **************

Serial   Phase #:31   [CDB$ROOT] Files:1    Time: 6s

Restart  Phase #:32   [CDB$ROOT] Files:1    Time: 4s

Serial   Phase #:34   [CDB$ROOT] Files:1    Time: 0s

*****************   Catproc PLBs   *****************

Serial   Phase #:35   [CDB$ROOT] Files:293  Time: 73s

Serial   Phase #:36   [CDB$ROOT] Files:1    Time: 0s

Restart  Phase #:37   [CDB$ROOT] Files:1    Time: 3s

Serial   Phase #:38   [CDB$ROOT] Files:6    Time: 9s

Restart  Phase #:39   [CDB$ROOT] Files:1    Time: 4s

***************   Catproc DataPump   ***************

Serial   Phase #:40   [CDB$ROOT] Files:3    Time: 56s

Restart  Phase #:41   [CDB$ROOT] Files:1    Time: 4s

******************   Catproc SQL   *****************

Parallel Phase #:42   [CDB$ROOT] Files:13   Time: 85s

Restart  Phase #:43   [CDB$ROOT] Files:1    Time: 2s

Parallel Phase #:44   [CDB$ROOT] Files:11   Time: 11s

Restart  Phase #:45   [CDB$ROOT] Files:1    Time: 3s

Parallel Phase #:46   [CDB$ROOT] Files:3    Time: 8s

Restart  Phase #:47   [CDB$ROOT] Files:1    Time: 3s

*************   Final Catproc scripts   ************

Serial   Phase #:48   [CDB$ROOT] Files:1    Time: 32s

Restart  Phase #:49   [CDB$ROOT] Files:1    Time: 3s

**************   Final RDBMS scripts   *************

Serial   Phase #:50   [CDB$ROOT] Files:1    Time: 9s

************   Upgrade Component Start   ***********

Serial   Phase #:51   [CDB$ROOT] Files:1    Time: 6s

Restart  Phase #:52   [CDB$ROOT] Files:1    Time: 1s

**********   Upgrading Java and non-Java   *********

Serial   Phase #:53   [CDB$ROOT] Files:2

 

 

 

*****************   Upgrading XDB   ****************

Restart  Phase #:54   [PDB1_1] Files:1    Time: 0s

Serial   Phase #:56   [PDB1_1] Files:3    Time: 14s

Serial   Phase #:57   [PDB$SEED] Files:3    Time: 13s

Serial   Phase #:57   [PDB1_1] Files:3    Time: 9s

Parallel Phase #:58   [PDB$SEED] Files:10    Time: 9s

Parallel Phase #:58   [PDB1_1] Files:10   Time: 10s

Parallel Phase #:59   [PDB$SEED] Files:25   Time: 11s

Parallel Phase #:59   [PDB1_1] Files:25   Time: 13s

Serial   Phase #:60   [PDB$SEED] Files:4   Time: 12s

Serial   Phase #:60   [PDB1_1] Files:4    Time: 15s

Serial   Phase #:61   [PDB$SEED] Files:1    Time: 0s

Serial   Phase #:62   [PDB$SEED] Files:32    Time: 16s

Serial   Phase #:61   [PDB1_1] Files:1    Time: 0s

Serial   Phase #:62   [PDB1_1] Files:32   Time: 14s

Serial   Phase #:63   [PDB1_1] Files:1    Time: 0s

Parallel Phase #:64   [PDB1_1] Files:6   Time: 15s

Serial   Phase #:63   [PDB$SEED] Files:1    Time: 0s

Parallel Phase #:64   [PDB$SEED] Files:6    Time: 13s

Serial   Phase #:65   [PDB$SEED] Files:2    Time: 13s

Serial   Phase #:65   [PDB1_1] Files:2    Time: 22s

Serial   Phase #:66   [PDB$SEED] Files:3    Time: 22s

Serial   Phase #:66   [PDB1_1] Files:3    Time: 29s

****************   Upgrading ORDIM   ***************

Restart  Phase #:67   [PDB1_1] Files:1    Time: 30s

****************   Upgrading ORDIM   ***************

Restart  Phase #:67   [PDB$SEED] Files:1    Time: 2s

Serial   Phase #:69   [PDB1_1] Files:1    Time: 1s

Serial   Phase #:69   [PDB$SEED] Files:1    Time: 8s

Parallel Phase #:70   [PDB1_1] Files:2    Time: 9s

Parallel Phase #:70   [PDB$SEED] Files:2    Time: 18s

Restart  Phase #:71   [PDB1_1] Files:1    Time: 17s

Restart  Phase #:71   [PDB$SEED] Files:1    Time: 1s

Parallel Phase #:72   [PDB$SEED] Files:2    Time: 1s

Parallel Phase #:72   [PDB1_1] Files:2    Time: 10s

Serial   Phase #:73   [PDB$SEED] Files:2    Time: 10s

Serial   Phase #:73   [PDB1_1] Files:2    Time: 8s

*****************   Upgrading SDO   ****************

Restart  Phase #:74   [PDB$SEED] Files:1    Time: 8s

*****************   Upgrading SDO   ****************

Restart  Phase #:74   [PDB1_1] Files:1    Time: 1s

Serial   Phase #:76   [PDB1_1] Files:1    Time: 1s

Serial   Phase #:76   [PDB$SEED] Files:1    Time: 41s

Serial   Phase #:77   [PDB1_1] Files:2    Time: 41s

Serial   Phase #:77   [PDB$SEED] Files:2    Time: 9s

Restart  Phase #:78   [PDB1_1] Files:1    Time: 9s

Restart  Phase #:78   [PDB$SEED] Files:1    Time: 0s

Serial   Phase #:79   [PDB$SEED] Files:1    Time: 2s

Serial   Phase #:79   [PDB1_1] Files:1    Time: 13s

Restart  Phase #:80   [PDB$SEED] Files:1    Time: 11s

Restart  Phase #:80   [PDB1_1] Files:1    Time: 1s

Parallel Phase #:81   [PDB$SEED] Files:3    Time: 2s

Parallel Phase #:81   [PDB1_1] Files:3    Time: 62s

Restart  Phase #:82   [PDB1_1] Files:1    Time: 64s

Restart  Phase #:82   [PDB$SEED] Files:1    Time: 2s

Serial   Phase #:83   [PDB1_1] Files:1    Time: 2s

Serial   Phase #:83   [PDB$SEED] Files:1    Time: 12s

Restart  Phase #:84   [PDB$SEED] Files:1    Time: 13s

Restart  Phase #:84   [PDB1_1] Files:1    Time: 2s

Serial   Phase #:85   [PDB$SEED] Files:1    Time: 2s

Serial   Phase #:85   [PDB1_1] Files:1    Time: 14s

Restart  Phase #:86   [PDB$SEED] Files:1    Time: 1s

Parallel Phase #:87   [PDB$SEED] Files:4    Time: 15s

Restart  Phase #:86   [PDB1_1] Files:1    Time: 2s

Parallel Phase #:87   [PDB1_1] Files:4    Time: 93s

Restart  Phase #:88   [PDB$SEED] Files:1    Time: 91s

Restart  Phase #:88   [PDB1_1] Files:1    Time: 2s

Serial   Phase #:89   [PDB$SEED] Files:1    Time: 2s

Serial   Phase #:89   [PDB1_1] Files:1    Time: 8s

Restart  Phase #:90   [PDB$SEED] Files:1    Time: 8s

Restart  Phase #:90   [PDB1_1] Files:1    Time: 2s

Serial   Phase #:91   [PDB$SEED] Files:2    Time: 2s

Serial   Phase #:91   [PDB1_1] Files:2    Time: 22s

Restart  Phase #:92   [PDB$SEED] Files:1    Time: 22s

Restart  Phase #:92   [PDB1_1] Files:1    Time: 1s

Serial   Phase #:93   [PDB$SEED] Files:1    Time: 1s

Serial   Phase #:93   [PDB1_1] Files:1    Time: 6s

Restart  Phase #:94   [PDB$SEED] Files:1    Time: 6s

Restart  Phase #:94   [PDB1_1] Files:1    Time: 2s

*******   Upgrading ODM, WK, EXF, RUL, XOQ   *******

Serial   Phase #:95   [PDB$SEED] Files:1    Time: 2s

*******   Upgrading ODM, WK, EXF, RUL, XOQ   *******

Serial   Phase #:95   [PDB1_1] Files:1    Time: 16s

Restart  Phase #:96   [PDB1_1] Files:1    Time: 17s

Restart  Phase #:96   [PDB$SEED] Files:1    Time: 1s

***********   Final Component scripts    ***********

Serial   Phase #:97   [PDB$SEED] Files:1    Time: 2s

***********   Final Component scripts    ***********

Serial   Phase #:97   [PDB1_1] Files:1    Time: 7s

*************   Final Upgrade scripts   ************

Serial   Phase #:98   [PDB$SEED] Files:1    Time: 8s

*************   Final Upgrade scripts   ************

Serial   Phase #:98   [PDB1_1] Files:1    Time: 141s

*******************   Migration   ******************

Serial   Phase #:99   [PDB1_1] Files:1    Time: 145s

*******************   Migration   ******************

Serial   Phase #:99   [PDB$SEED] Files:1    Time: 10s

***   End PDB Application Upgrade Pre-Shutdown   ***

Serial   Phase #:100  [PDB1_1] Files:1    Time: 8s

***   End PDB Application Upgrade Pre-Shutdown   ***

Serial   Phase #:100  [PDB$SEED] Files:1    Time: 6s

Serial   Phase #:101  [PDB1_1] Files:1    Time: 6s

Serial   Phase #:101  [PDB$SEED] Files:1    Time: 5s

Serial   Phase #:102  [PDB1_1] Files:1    Time: 6s

Serial   Phase #:102  [PDB$SEED] Files:1    Time: 7s

*****************   Post Upgrade   *****************

Serial   Phase #:103  [PDB1_1] Files:1    Time: 6s

*****************   Post Upgrade   *****************

Serial   Phase #:103  [PDB$SEED] Files:1    Time: 30s

****************   Summary report   ****************

Serial   Phase #:104  [PDB1_1] Files:1    Time: 7s

***   End PDB Application Upgrade Post-Shutdown   **

Serial   Phase #:105  [PDB1_1] Files:1    Time: 6s

Serial   Phase #:106  [PDB1_1] Files:1    Time: 3s

Serial   Phase #:107  [PDB1_1] Files:1     Time: 0s

 

------------------------------------------------------

Phases [0-107]         End Time:[2023_10_01 20:44:02]

Container Lists Inclusion:[PDB1_1] Exclusion:[NONE]

------------------------------------------------------

 

Grand Total Time: 2101s [PDB1_1]

 

 LOG FILES: (/u01/app/oracle/product/19.3.0/dbhome_1/cfgtoollogs/c02orcl/upgrade20231001192649/catupgrdpdb1_1*.log)

 

Upgrade Summary Report Located in:

/u01/app/oracle/product/19.3.0/dbhome_1/cfgtoollogs/c02orcl/upgrade20231001192649/upg_summary.log

   Time: 365s

****************   Summary report   ****************

Serial   Phase #:104  [PDB$SEED] Files:1    Time: 6s

***   End PDB Application Upgrade Post-Shutdown   **

Serial   Phase #:105  [PDB$SEED] Files:1    Time: 7s

Serial   Phase #:106  [PDB$SEED] Files:1    Time: 6s

Serial   Phase #:107  [PDB$SEED] Files:1     Time: 0s

 

------------------------------------------------------

Phases [0-107]         End Time:[2023_10_01 20:49:41]

Container Lists Inclusion:[PDB$SEED] Exclusion:[NONE]

------------------------------------------------------

 

Grand Total Time: 2445s [PDB$SEED]

 

 LOG FILES: (/u01/app/oracle/product/19.3.0/dbhome_1/cfgtoollogs/c02orcl/upgrade20231001192649/catupgrdpdb_seed*.log)

 

Upgrade Summary Report Located in:

/u01/app/oracle/product/19.3.0/dbhome_1/cfgtoollogs/c02orcl/upgrade20231001192649/upg_summary.log

 

     Time: 2466s For CDB$ROOT

     Time: 2493s For PDB(s)

 

Grand Total Time: 4959s

 

 LOG FILES: (/u01/app/oracle/product/19.3.0/dbhome_1/cfgtoollogs/c02orcl/upgrade20231001192649/catupgrdcdbroot*.log)

 

Upgrade Summary Report Located in:

/u01/app/oracle/product/19.3.0/dbhome_1/cfgtoollogs/c02orcl/upgrade20231001192649/upg_summary.log

 

Grand Total Upgrade Time:    [0d:1h:22m:39s]

 

datapatch output:

 

SQL>alter pluggable database all open instances=all;

 

Warning: execution completed with warning

 

Pluggable database ALL altered.

 

SQL>select name , open_mode  ,restricted  from v$pdbs;

PDB$SEED    READ ONLY    YES

PDB1_1    READ WRITE    YES

 

SQL>select  name, type, message from pdb_plug_in_violations;

PDB$SEED    ERROR    Interim patch 27923415/22239273 (OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)): Installed in the CDB but not in the PDB

PDB$SEED    ERROR    '18.3.0.0.0 Release_Update 1806280943' is installed in the CDB but no release updates are installed in the PDB

PDB$SEED    ERROR    PDB's version does not match CDB's version: PDB's version 18.0.0.0.0. CDB's version 19.0.0.0.0.

PDB1_1    ERROR    PDB's version does not match CDB's version: PDB's version 18.0.0.0.0. CDB's version 19.0.0.0.0.

PDB$SEED    ERROR    19.3.0.0.0 Release_Update 1904101227: APPLY with status WITH ERRORS in the PDB

PDB1_1    ERROR    19.3.0.0.0 Release_Update 1904101227: APPLY with status WITH ERRORS in the PDB

 

-----------------------------------------------------------------------------------------------------------------

[oracle@station7 ~]$ /u01/app/oracle/product/19.3.0/dbhome_1/OPatch/datapatch -pdbs='pdb$seed',pdb1_1

SQL Patching tool version 19.3.0.0.0 Production on Mon Oct  2 23:27:02 2023

Copyright (c) 2012, 2019, Oracle.  All rights reserved.

 

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_90892_2023_10_02_23_27_02/sqlpatch_invocation.log

 

Connecting to database...OK

Gathering database info...done

 

Note:  Datapatch will only apply or rollback SQL fixes for PDBs

       that are in an open state, no patches will be applied to closed PDBs.

       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation

       (Doc ID 1585822.1)

 

Bootstrapping registry and package to current versions...done

Determining current state...done

 

Current state of interim SQL patches:

  No interim patches found

 

Current state of release update SQL patches:

  Binary registry:

    19.3.0.0.0 Release_Update 190410122720: Installed

  PDB PDB$SEED:

    Applied 19.3.0.0.0 Release_Update 190410122720 with errors on 02-OCT-23 09.12.14.144430 PM

  PDB PDB1_1:

    Applied 19.3.0.0.0 Release_Update 190410122720 with errors on 02-OCT-23 09.12.13.792314 PM

 

Adding patches to installation queue and performing prereq checks...done

Installation queue:

  For the following PDBs: PDB$SEED PDB1_1

    No interim patches need to be rolled back

    Patch 29517242 (Database Release Update : 19.3.0.0.190416 (29517242)):

      Apply from 19.1.0.0.0 Feature Release to 19.3.0.0.0 Release_Update 190410122720

    No interim patches need to be applied

 

Installing patches...

Patch installation complete.  Total patches installed: 2

 

Validating logfiles...done

Patch 29517242 apply (pdb PDB$SEED): SUCCESS

  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/29517242/22862832/29517242_apply_C01ORCL_PDBSEED_2023Oct02_23_27_40.log (no errors)

Patch 29517242 apply (pdb PDB1_1): SUCCESS

  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/29517242/22862832/29517242_apply_C01ORCL_PDB1_1_2023Oct02_23_27_41.log (no errors)

SQL Patching tool complete on Mon Oct  2 23:50:26 2023

 

 

 

[oracle@station7 ~]$ srvctl stop database -d c02orcl

[oracle@station7 ~]$ srvctl start database -d c02orcl -o open

 

-----------------------------------------------------------------------------------------------------------------

 

SQL>select name , open_mode  ,restricted  from v$pdbs;

PDB$SEED    READ ONLY    NO

PDB1_1    READ WRITE    NO
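After this restart, it is worth confirming that the plug-in violations reported earlier have cleared; a sketch:

```sql
-- Any rows not yet RESOLVED would indicate leftover patch or version issues
SELECT name, type, message, status
FROM   pdb_plug_in_violations
WHERE  status <> 'RESOLVED';
```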

 

utlrp.sql output:

 

SQL> @?/rdbms/admin/utlrp.sql

 

Session altered.

 

 

TIMESTAMP

--------------------------------------------------------------------------------

COMP_TIMESTAMP UTLRP_BGN          2023-10-03 01:08:25

 

DOC>   The following PL/SQL block invokes UTL_RECOMP to recompile invalid

DOC>   objects in the database. Recompilation time is proportional to the

DOC>   number of invalid objects in the database, so this command may take

DOC>   a long time to execute on a database with a large number of invalid

DOC>   objects.

DOC>

DOC>   Use the following queries to track recompilation progress:

DOC>

DOC>   1. Query returning the number of invalid objects remaining. This

DOC>      number should decrease with time.

DOC>         SELECT COUNT(*) FROM obj$ WHERE status IN (4, 5, 6);

DOC>

DOC>   2. Query returning the number of objects compiled so far. This number

DOC>      should increase with time.

DOC>         SELECT COUNT(*) FROM UTL_RECOMP_COMPILED;

DOC>

DOC>   This script automatically chooses serial or parallel recompilation

DOC>   based on the number of CPUs available (parameter cpu_count) multiplied

DOC>   by the number of threads per CPU (parameter parallel_threads_per_cpu).

DOC>   On RAC, this number is added across all RAC nodes.

DOC>

DOC>   UTL_RECOMP uses DBMS_SCHEDULER to create jobs for parallel

DOC>   recompilation. Jobs are created without instance affinity so that they

DOC>   can migrate across RAC nodes. Use the following queries to verify

DOC>   whether UTL_RECOMP jobs are being created and run correctly:

DOC>

DOC>   1. Query showing jobs created by UTL_RECOMP

DOC>         SELECT job_name FROM dba_scheduler_jobs

DOC>        WHERE job_name like 'UTL_RECOMP_SLAVE_%';

DOC>

DOC>   2. Query showing UTL_RECOMP jobs that are running

DOC>         SELECT job_name FROM dba_scheduler_running_jobs

DOC>        WHERE job_name like 'UTL_RECOMP_SLAVE_%';

DOC>#

 

PL/SQL procedure successfully completed.

 

 

TIMESTAMP

--------------------------------------------------------------------------------

COMP_TIMESTAMP UTLRP_END          2023-10-03 01:08:36

 

DOC> The following query reports the number of invalid objects.

DOC>

DOC> If the number is higher than expected, please examine the error

DOC> messages reported with each object (using SHOW ERRORS) to see if they

DOC> point to system misconfiguration or resource constraints that must be

DOC> fixed before attempting to recompile these objects.

DOC>#

 

OBJECTS WITH ERRORS

-------------------

          0

 

DOC> The following query reports the number of exceptions caught during

DOC> recompilation. If this number is non-zero, please query the error

DOC> messages in the table UTL_RECOMP_ERRORS to see if any of these errors

DOC> are due to misconfiguration or resource constraints that must be

DOC> fixed before objects can compile successfully.

DOC> Note: Typical compilation errors (due to coding errors) are not

DOC>       logged into this table: they go into DBA_ERRORS instead.

DOC>#

 

ERRORS DURING RECOMPILATION

---------------------------

              0

 

 

Function created.

 

 

PL/SQL procedure successfully completed.

 

 

Function dropped.

 

 

PL/SQL procedure successfully completed.

 

5. Second execution, on the primary (still 18c):

 

    During this execution, after the switchover the original primary briefly becomes a logical standby (18c) while the original standby becomes the new primary (19c); the original primary is then flashed back to the guaranteed restore point and converted back into a physical standby. From this point on, the original primary must run on the 19c software. physru.sh and dbms_rolling do essentially the same thing in this step; they differ only in how they name the guaranteed restore points.
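The guaranteed restore points either method creates can be inspected on each database; a sketch (the PRU_0000_0001 name appears in the physru.sh log below, while dbms_rolling uses its own naming):

```sql
-- Guaranteed restore points created for the rolling upgrade
SELECT name, scn, time, guarantee_flashback_database
FROM   v$restore_point;
```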

 

Comparison: physru.sh approach vs. dbms_rolling approach.

physru.sh approach:

1. Command run:

[oracle@station5 ~]$ ./physru.sh sys c01orcl1 c02orcl1 c01orcl c02orcl 19.0.0.0.0

Please enter the sysdba password:

 

### Initialize script to either start over or resume execution

Oct 03 01:26:42 2023 [0-1] Identifying rdbms software version

Oct 03 01:26:43 2023 [0-1] database c01orcl is at version 18.0.0.0.0

Oct 03 01:26:43 2023 [0-1] database c02orcl is at version 19.0.0.0.0

Oct 03 01:26:44 2023 [0-1] verifying flashback database is enabled at c01orcl and c02orcl

Oct 03 01:26:45 2023 [0-1] verifying available flashback restore points

Oct 03 01:26:45 2023 [0-1] verifying DG Broker is disabled

Oct 03 01:26:46 2023 [0-1] looking up prior execution history

Oct 03 01:26:46 2023 [0-1] last completed stage [2-4] using script version 0001

Oct 03 01:26:46 2023 [0-1] resuming execution of script

 

### Stage 3: Validate upgraded transient logical standby

Oct 03 01:26:46 2023 [3-1] database c02orcl is no longer in OPEN MIGRATE mode

Oct 03 01:26:46 2023 [3-1] database c02orcl is at version 19.0.0.0.0

 

### Stage 4: Switch the transient logical standby to be the new primary

Oct 03 01:26:47 2023 [4-1] waiting for c02orcl to catch up (this could take a while)

Oct 03 01:26:48 2023 [4-1] waiting for apply lag to fall under 30 seconds

Oct 03 01:26:53 2023 [4-1] apply lag measured at 5 seconds

Oct 03 01:26:54 2023 [4-2] switching c01orcl to become a logical standby

Oct 03 01:28:37 2023 [4-2] c01orcl is now a logical standby

Oct 03 01:28:37 2023 [4-3] waiting for standby c02orcl to process end-of-redo from primary

Oct 03 01:28:38 2023 [4-4] switching c02orcl to become the new primary

Oct 03 01:29:13 2023 [4-4] c02orcl is now the new primary

 

### Stage 5: Flashback former primary to pre-upgrade restore point and convert to physical

Oct 03 01:29:14 2023 [5-1] verifying instance c01orcl1 is the only active instance

 

WARN: c01orcl is a RAC database.  Before this script can continue, you

      must manually reduce the RAC to a single instance.  This can be

      accomplished with the following step:

 

        1) Shutdown all instances other than instance c01orcl1.

           eg: srvctl stop instance -d c01orcl -i c01orcl2 -o abort

 

      Once these steps have been performed, enter 'y' to continue the script.

      If desired, you may enter 'n' to exit the script to perform the required

      steps, and recall the script to resume from this point.

 

Are you ready to continue? (y/n):

[oracle@station5 ~]$ srvctl stop instance -d c01orcl -i c01orcl2 -o abort

-----------------------------------------------------

Are you ready to continue? (y/n): y

 

Oct 03 01:38:41 2023 [5-1] continuing

Oct 03 01:38:41 2023 [5-1] verifying instance c01orcl1 is the only active instance

Oct 03 01:38:42 2023 [5-1] shutting down database c01orcl

./physru.sh: line 3674: warning: here-document at line 3670 delimited by end-of-file (wanted `EOF')

Oct 03 01:39:29 2023 [5-1] mounting database c01orcl

./physru.sh: line 3301: warning: here-document at line 3296 delimited by end-of-file (wanted `EOF')

Oct 03 01:39:49 2023 [5-2] flashing back database c01orcl to restore point PRU_0000_0001

Oct 03 01:39:57 2023 [5-3] converting c01orcl into physical standby

Oct 03 01:39:58 2023 [5-4] shutting down database c01orcl

./physru.sh: line 3674: warning: here-document at line 3670 delimited by end-of-file (wanted `EOF')

 

NOTE: Database c01orcl has been shutdown, and is now ready to be started

      using the newer version Oracle binary.  This script requires the

      database to be mounted (on all active instances, if RAC) before calling

      this script to resume the rolling upgrade.

 

NOTE: Database c01orcl is no longer limited to single instance operation since

      the database has been successfully converted into a physical standby.

      For increased availability, Oracle recommends starting all instances in

      the RAC on the newer binary by performing the following step:

 

        1) Startup and mount all instances for database c01orcl

        eg: srvctl start database -d c01orcl -o mount

dbms_rolling approach:

1. Command run:

exec dbms_rolling.switchover;

 

 

    Notes for this step:

With the dbms_rolling approach, progress can be monitored through the following data dictionary views:         

dba_rolling_parameters

dba_rolling_unsupported

dba_rolling_plan

dba_rolling_events

dba_rolling_status

dba_rolling_databases

The most important output to check is dba_rolling_plan.
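A sketch of monitoring queries against these views (the column list follows the examples in Oracle's DBMS_ROLLING documentation; verify against your release):

```sql
-- Per-phase plan and progress of the rolling upgrade
SELECT instid, target, phase, description
FROM   dba_rolling_plan;

-- Overall state of the current rolling upgrade operation
SELECT * FROM dba_rolling_status;
```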

 

6. Run the original primary on the 19c software:

 

1. In the original 18c environment, as the oracle user on the first host, shut down the database.

2. As the root user on both hosts, edit /etc/oratab so that the database's ORACLE_HOME is /u01/app/oracle/product/19.3.0/dbhome_1.

3. As the root user on both hosts, run . oraenv to set the +ASM1/+ASM2 environment variables respectively, then run setasmgid o=/u01/app/oracle/product/19.3.0/dbhome_1/bin/oracle. This command gives the new database software read/write access to the ASM disk groups.

4. As the grid user on the first host, run: srvctl stop scan_listener and srvctl stop listener. On both hosts, change the listeners' static registration entries to point to /u01/app/oracle/product/19.3.0/dbhome_1, then on the first host run srvctl start scan_listener and srvctl start listener.

5. As the oracle user on both hosts, copy /u01/app/oracle/product/18.3.0/dbhome_1/network/admin/tnsnames.ora to /u01/app/oracle/product/19.3.0/dbhome_1/network/admin/tnsnames.ora.

6. As the oracle user on the first host, run . oraenv to set the 19c environment, then run: srvctl upgrade database -d c01orcl -o /u01/app/oracle/product/19.3.0/dbhome_1.

7. As the oracle user, update .bash_profile on both hosts.

8. Start the original primary in mount state.

 

7. Final execution:

    During this execution, the original primary is implicitly upgraded to 19c by applying the redo generated while the original standby was being upgraded.

 

Comparison: physru.sh approach vs. dbms_rolling approach.

physru.sh approach:
1. Run on the original primary.
2. The script prompts for the role switchover during execution.

dbms_rolling approach:
1. Run on the original standby.
2. The role switchover must be performed manually.

3. Command run (physru.sh):

[oracle@station5 ~]$ ./physru.sh sys c01orcl1 c02orcl1 c01orcl c02orcl 19.0.0.0.0

Please enter the sysdba password:

 

### Initialize script to either start over or resume execution

Oct 03 04:19:58 2023 [0-1] Identifying rdbms software version

Oct 03 04:19:59 2023 [0-1] database c01orcl is at version 19.0.0.0.0

Oct 03 04:20:00 2023 [0-1] database c02orcl is at version 19.0.0.0.0

Oct 03 04:20:07 2023 [0-1] verifying flashback database is enabled at c01orcl and c02orcl

Oct 03 04:20:09 2023 [0-1] verifying available flashback restore points

Oct 03 04:20:11 2023 [0-1] verifying DG Broker is disabled

Oct 03 04:20:13 2023 [0-1] looking up prior execution history

Oct 03 04:20:16 2023 [0-1] last completed stage [5-4] using script version 0001

Oct 03 04:20:16 2023 [0-1] resuming execution of script

 

### Stage 6: Run media recovery through upgrade redo

Oct 03 04:20:23 2023 [6-1] upgrade redo region identified as scn range [3402842, 6634337]

Oct 03 04:20:27 2023 [6-1] waiting for media recovery to initialize v$recovery_progress

Oct 03 04:20:43 2023 [6-1] monitoring media recovery's progress

Oct 03 04:20:48 2023 [6-4] media recovery has finished recovering through upgrade

 

### Stage 7: Switch back to the original roles prior to the rolling upgrade

 

NOTE: At this point, you have the option to perform a switchover

     which will restore c01orcl back to a primary database and

     c02orcl back to a physical standby database.  If you answer 'n'

     to the question below, c01orcl will remain a physical standby

     database and c02orcl will remain a primary database.

 

Do you want to perform a switchover? (y/n):

---------------------------------------------------------------------------------------------------------------

 

Do you want to perform a switchover? (y/n): y

 

Oct 03 04:27:28 2023 [7-1] continuing

Oct 03 04:27:29 2023 [7-2] verifying instance c02orcl1 is the only active instance

 

WARN: c02orcl is a RAC database.  Before this script can continue, you

      must manually reduce the RAC to a single instance.  This can be

      accomplished with the following step:

 

        1) Shutdown all instances other than instance c02orcl1.

           eg: srvctl stop instance -d c02orcl -i c02orcl2

 

      Once these steps have been performed, enter 'y' to continue the script.

      If desired, you may enter 'n' to exit the script to perform the required

      steps, and recall the script to resume from this point.

 

Are you ready to continue? (y/n): y

--------------------------------------------------------------------------------------------------------------------

Oct 03 04:29:34 2023 [7-2] continuing

Oct 03 04:29:35 2023 [7-2] verifying instance c02orcl1 is the only active instance

Oct 03 04:29:44 2023 [7-2] waiting for apply lag to fall under 30 seconds

Oct 03 04:29:52 2023 [7-2] apply lag measured at 6 seconds

Oct 03 04:29:56 2023 [7-3] switching c02orcl to become a physical standby

Oct 03 04:30:14 2023 [7-3] c02orcl is now a physical standby

Oct 03 04:30:14 2023 [7-3] shutting down database c02orcl

./physru.sh: line 3674: warning: here-document at line 3670 delimited by end-of-file (wanted `EOF')

Oct 03 04:30:19 2023 [7-3] mounting database c02orcl

./physru.sh: line 3301: warning: here-document at line 3296 delimited by end-of-file (wanted `EOF')

-----------------------------------------------------------------

[oracle@station7 ~]$ srvctl stop database -d c02orcl

[oracle@station7 ~]$ srvctl start database -d c02orcl

-------------------------------------------------------------------------------------------------------------------

[oracle@station5 ~]$ ./physru.sh sys c01orcl1 c02orcl1 c01orcl c02orcl 19.0.0.0.0

Please enter the sysdba password:

 

### Initialize script to either start over or resume execution

Oct 03 04:42:16 2023 [0-1] Identifying rdbms software version

Oct 03 04:42:17 2023 [0-1] database c01orcl is at version 19.0.0.0.0

Oct 03 04:42:19 2023 [0-1] database c02orcl is at version 19.0.0.0.0

Oct 03 04:42:25 2023 [0-1] verifying flashback database is enabled at c01orcl and c02orcl

Oct 03 04:42:27 2023 [0-1] verifying available flashback restore points

Oct 03 04:42:29 2023 [0-1] verifying DG Broker is disabled

Oct 03 04:42:32 2023 [0-1] looking up prior execution history

Oct 03 04:42:34 2023 [0-1] last completed stage [7-2] using script version 0001

 

WARN: The last execution of this script either exited in error or at the

      user's request.  At this point, there are three available options:

 

        1) resume the rolling upgrade where the last execution left off

        2) restart the script from scratch

        3) exit the script

 

      Option (2) assumes the user has restored the primary and physical

      standby back to the original configuration as required by this script.

 

Enter your selection (1/2/3): 1

 

Oct 03 04:42:36 2023 [0-1] resuming execution of script

Oct 03 04:42:37 2023 [7-3] verifying instance c02orcl1 is the only active instance

 

WARN: c02orcl is a RAC database.  Before this script can continue, you

      must manually reduce the RAC to a single instance.  This can be

      accomplished with the following step:

 

        1) Shutdown all instances other than instance c02orcl1.

           eg: srvctl stop instance -d c02orcl -i c02orcl2

 

      Once these steps have been performed, enter 'y' to continue the script.

      If desired, you may enter 'n' to exit the script to perform the required

      steps, and recall the script to resume from this point.

 

Are you ready to continue? (y/n): y

------------------------------------------------------------------------------------------------------------------------

Oct 03 04:43:43 2023 [7-3] continuing

Oct 03 04:43:44 2023 [7-3] verifying instance c02orcl1 is the only active instance

Oct 03 04:43:46 2023 [7-3] c02orcl is already a physical standby

Oct 03 04:43:49 2023 [7-4] waiting for standby c01orcl to process end-of-redo from primary

Oct 03 04:43:56 2023 [7-5] switching c01orcl to become the new primary

Oct 03 04:44:17 2023 [7-5] c01orcl is now the new primary

Oct 03 04:44:17 2023 [7-5] opening database c01orcl

./physru.sh: line 3337: warning: here-document at line 3332 delimited by end-of-file (wanted `EOF')

Oct 03 04:44:31 2023 [7-6] starting media recovery on c02orcl

Oct 03 04:44:38 2023 [7-6] confirming media recovery is running

 

NOTE: Database c01orcl has completed the switchover to the primary role, but

      instance c01orcl1 is the only open instance.  For increased availability,

      Oracle recommends opening the remaining active instances which are

      currently in mounted mode by performing the following steps:

 

        1) Shutdown all instances other than instance c01orcl1.

        eg: srvctl stop instance -d c01orcl -i c01orcl2

 

        2) Startup and open all inactive instances for database c01orcl.

        eg: srvctl start database -d c01orcl

 

NOTE: Database c02orcl is no longer limited to single instance operation since

      it has completed the switchover to the physical standby role.  For

      increased  availability, Oracle recommends starting the inactive

      instances in the RAC by performing the following step:

 

        1) Startup and mount inactive instances for database c02orcl

        eg: srvctl start database -d c02orcl -o mount

 

### Stage 8: Statistics

script start time:                                           02-Oct-23 18:21:27

script finish time:                                          03-Oct-23 04:44:53

total script execution time:                                       +00 10:23:26

wait time for user upgrade:                                        +00 06:42:44

active script execution time:                                      +00 03:40:42

transient logical creation start time:                       02-Oct-23 18:24:56

transient logical creation finish time:                      02-Oct-23 18:25:38

primary to logical switchover start time:                    03-Oct-23 01:26:53

logical to primary switchover finish time:                   03-Oct-23 01:29:14

primary services offline for:                                      +00 00:02:21

total time former primary in physical role:                        +00 02:47:00

time to reach upgrade redo:                                        

time to recover upgrade redo:                                      

primary to physical switchover start time:                   03-Oct-23 04:27:27

physical to primary switchover finish time:                  03-Oct-23 04:44:29

primary services offline for:                                      +00 00:17:02

 

SUCCESS: The physical rolling upgrade is complete

3. Command to run:

exec dbms_rolling.finish_plan;

 

 

   

   Notes for this step:

    During this phase, the single most important Data Guard item to watch is redo transport: redo is being shipped to a physical standby, and every log must reach the physical standby (the former primary).

    On the primary (the former standby), check with: asmcmd ls +fra/c02orcl/archivelog/<date>/ | sort -n -t . -k 3

On the standby (the former primary), check with: asmcmd ls +fra/c01orcl/archivelog/<date>/ | sort -n -t . -k 3
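A note on the sort flags in the asmcmd pipeline above: ASM file names end in a numeric file number and creation stamp, so `sort -n -t . -k 3` orders the listing numerically by the third dot-separated field, i.e. roughly by creation order. A minimal illustration with made-up file names:

```shell
# ASM archive log names have the form <name>.<file#>.<stamp>.
# "-t ." splits on dots, "-k 3" keys on the 3rd field, "-n" sorts it
# numerically, so newly created files end up at the bottom of the list.
printf '%s\n' \
  'thread_1_seq_120.572.1149221285' \
  'thread_1_seq_5.565.1149221283' \
  'thread_1_seq_8.575.1149221293' |
sort -n -t . -k 3
```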

Query the following view until no log remains unapplied:

 

SQL> select name from v$archived_log
     where deleted='NO' and applied='NO';

 

+FRA/C01ORCL/ARCHIVELOG/2023_10_03/thread_1_seq_120.572.1149221285

+FRA/C01ORCL/ARCHIVELOG/2023_10_03/thread_2_seq_115.573.1149221285

+FRA/C01ORCL/ARCHIVELOG/2023_10_03/thread_1_seq_6.566.1149221283

+FRA/C01ORCL/ARCHIVELOG/2023_10_03/thread_1_seq_5.565.1149221283

+FRA/C01ORCL/ARCHIVELOG/2023_10_03/thread_1_seq_7.574.1149221291

+FRA/C01ORCL/ARCHIVELOG/2023_10_03/thread_1_seq_8.575.1149221293

+FRA/C01ORCL/ARCHIVELOG/2023_10_03/thread_1_seq_9.576.1149221293

......

+FRA/C01ORCL/ARCHIVELOG/2023_10_03/thread_1_seq_61.630.1149221403

+FRA/C01ORCL/ARCHIVELOG/2023_10_03/thread_1_seq_62.631.1149221405

+FRA/C01ORCL/ARCHIVELOG/2023_10_03/thread_1_seq_64.633.1149221407

+FRA/C01ORCL/ARCHIVELOG/2023_10_03/thread_1_seq_65.634.1149221409

+FRA/C01ORCL/ARCHIVELOG/2023_10_03/thread_1_seq_66.635.1149221409

+FRA/C01ORCL/ARCHIVELOG/2023_10_03/thread_1_seq_67.636.1149221409
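As a cross-check of the file listing above, the received-versus-applied gap can also be summarized per redo thread. A minimal sketch against the standard v$archived_log columns (run on the standby; this query is illustrative, not from the original procedure):

```sql
-- Highest received vs. highest applied log sequence for each thread;
-- apply has caught up once the two values match for every thread.
SELECT thread#,
       MAX(sequence#)                                    AS last_received,
       MAX(CASE WHEN applied = 'YES' THEN sequence# END) AS last_applied
FROM   v$archived_log
GROUP  BY thread#
ORDER  BY thread#;
```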

 

Query the following view to watch redo apply progress:

 

SQL> select * from gv$recovery_progress;

INST_ID  START_TIME           TYPE            ITEM                     UNITS      SOFAR  TOTAL  TIMESTAMP            COMMENTS                    CON_ID
      1  2023-10-03 04:09:27  Media Recovery  Log Files                Files        103      0                                                        1
      1  2023-10-03 04:09:27  Media Recovery  Active Apply Rate        KB/sec     17220      0                                                        1
      1  2023-10-03 04:09:27  Media Recovery  Average Apply Rate       KB/sec     11572      0                                                        1
      1  2023-10-03 04:09:27  Media Recovery  Maximum Apply Rate       KB/sec     31691      0                                                        1
      1  2023-10-03 04:09:27  Media Recovery  Redo Applied             Megabytes   9696      0                                                        1
      1  2023-10-03 04:09:27  Media Recovery  Recovery ID              RCVID          0      0                       RCVID: 1169020322124815392       1
      1  2023-10-03 04:09:27  Media Recovery  Last Applied Redo        SCN+Time       0      0  2023-10-03 04:23:46  SCN: 6691026                     1
      1  2023-10-03 04:09:27  Media Recovery  Active Time              Seconds      400      0                                                        1
      1  2023-10-03 04:09:27  Media Recovery  Apply Time per Log       Seconds        3      0                                                        1
      1  2023-10-03 04:09:27  Media Recovery  Checkpoint Time per Log  Seconds        0      0                                                        1
      1  2023-10-03 04:09:27  Media Recovery  Elapsed Time             Seconds      858      0                                                        1

 

8. Final steps:

       Enable the broker configuration.

Drop all guaranteed flashback restore points created on the primary and the standby during the rolling upgrade.

Set the compatible parameter on the primary and the standby to 19c.
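The restore-point and compatible items above can be sketched as follows (the restore point name is hypothetical; list the actual names from v$restore_point first, and note that the compatible change only takes effect after every instance is restarted):

```sql
-- 1) List the guaranteed restore points left over from the rolling upgrade
--    (run on both primary and standby):
SELECT name FROM v$restore_point WHERE guarantee_flashback_database = 'YES';

-- 2) Drop each one by name (the name shown here is illustrative):
DROP RESTORE POINT SOME_UPGRADE_RESTORE_POINT;

-- 3) Raise compatible to 19c; requires a restart of all instances to apply:
ALTER SYSTEM SET COMPATIBLE = '19.0.0' SCOPE = SPFILE SID = '*';
```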

 

[oracle@station5 ~]$ srvctl status database -d c01orcl  -v

Instance c01orcl1 is running on node station5 with online services pserv2,serv2. Instance status: Open.

Instance c01orcl2 is running on node station6. Instance status: Mounted (Closed).

[oracle@station5 ~]$ srvctl stop instance -d c01orcl -i c01orcl2

[oracle@station5 ~]$ srvctl start database -d c01orcl

[oracle@station5 ~]$ srvctl status database -d c01orcl  -v

Instance c01orcl1 is running on node station5 with online services pserv2,serv2. Instance status: Open.

Instance c01orcl2 is running on node station6 with online services pserv2,serv2. Instance status: Open.

 

[oracle@station7 ~]$ srvctl status database -d c02orcl -v

Instance c02orcl1 is running on node station7. Instance status: Open,Readonly.

Instance c02orcl2 is not running on node station8

[oracle@station7 ~]$ srvctl start database -d c02orcl

[oracle@station7 ~]$ srvctl status database -d c02orcl -v

Instance c02orcl1 is running on node station7. Instance status: Open,Readonly.

Instance c02orcl2 is running on node station8. Instance status: Open,Readonly.

 

DGMGRL> enable configuration ;

Enabled.

DGMGRL> show configuration verbose;

 

Configuration - dg_config

 

  Protection Mode: MaxPerformance

  Members:

  c01orcl - Primary database

    c02orcl - Physical standby database

 

  Properties:

    FastStartFailoverThreshold      = '30'

    OperationTimeout                = '30'

    TraceLevel                      = 'USER'

    FastStartFailoverLagLimit       = '30'

    CommunicationTimeout            = '180'

    ObserverReconnect               = '0'

    FastStartFailoverAutoReinstate  = 'TRUE'

    FastStartFailoverPmyShutdown    = 'TRUE'

    BystandersFollowRoleChange      = 'ALL'

    ObserverOverride                = 'FALSE'

    ExternalDestination1            = ''

    ExternalDestination2            = ''

    PrimaryLostWriteAction          = 'CONTINUE'

    ConfigurationWideServiceName    = 'c01orcl_CFG'

 

Fast-Start Failover:  Disabled

 

Configuration Status:

SUCCESS

 

