
[Blog Post 2023] Best Practice: Removing a Node from a Policy-Managed RAC Database Without Downtime or Leftovers (Detaching the Grid Infrastructure Portion)

2023-05-09 15:27 | Posted by: botang | Original author: Bo Tang

Abstract: This article walks through detaching and deinstalling the Grid Infrastructure software on the node being removed. That operation must be followed by a correction of the ASM instance count. Once the count is corrected, the RAC One Node database is converted to RAC, followed by some final leftover-removal steps and verification.


Author: Bo Tang

1. Detaching and Deinstalling the Grid Infrastructure Software on the Node:

    After completing "Best Practice: Removing a Node from a Policy-Managed RAC Database Without Downtime or Leftovers (Dropping the Instance and Detaching the Database Home Portion)", run the following on the nodes that will be kept:

[grid@station3 ~]$ /u01/app/12.2.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.2.0/grid "CLUSTER_NODES={station3,station4}"

Starting Oracle Universal Installer...


Checking swap space: must be greater than 500 MB. Actual 4095 MB Passed

The inventory pointer is located at /etc/oraInst.loc

'UpdateNodeList' was successful.


[grid@station4 ~]$ /u01/app/12.2.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.2.0/grid "CLUSTER_NODES={station3,station4}"

Starting Oracle Universal Installer...


Checking swap space: must be greater than 500 MB. Actual 4095 MB Passed

The inventory pointer is located at /etc/oraInst.loc

'UpdateNodeList' was successful.
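
    To confirm that the central inventory now records only the surviving nodes, the NODE_LIST stored in inventory.xml can be inspected. A minimal sketch, assuming the default inventory location recorded in /etc/oraInst.loc:

# locate the central inventory, then show the node list recorded
# for the Grid home -- expect only station3 and station4
grep inventory_loc /etc/oraInst.loc
grep -B 1 -A 3 'LOC="/u01/app/12.2.0/grid"' /u01/app/oraInventory/ContentsXML/inventory.xml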


    Run the following on the node being removed (unlike detaching and deinstalling the database software on that node, this operation does not require detachHome):

[grid@station7 ~]$ /u01/app/12.2.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.2.0/grid "CLUSTER_NODES={station7}" -local

Starting Oracle Universal Installer...


Checking swap space: must be greater than 500 MB. Actual 4095 MB Passed

The inventory pointer is located at /etc/oraInst.loc

'UpdateNodeList' was successful.


    The -local flag in the command below must not be omitted; without it, Grid Infrastructure would be deinstalled from the entire cluster:

[grid@station7 ~]$ /u01/app/12.2.0/grid/deinstall/deinstall -local

Checking for required files and bootstrapping ...

Please wait ...

Location of logs /tmp/deinstall2023-05-02_06-51-54PM/logs/


############ ORACLE DECONFIG TOOL START ############



######################### DECONFIG CHECK OPERATION START #########################

## [START] Install check configuration ##



Checking for existence of the Oracle home location /u01/app/12.2.0/grid

Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster

Oracle Base selected for deinstall is: /u01/app/grid

Checking for existence of central inventory location /u01/app/oraInventory

Checking for existence of the Oracle Grid Infrastructure home /u01/app/12.2.0/grid

The following nodes are part of this cluster: station7,station4,station3

Checking for sufficient temp space availability on node(s) : 'station7'


## [END] Install check configuration ##


Traces log file: /tmp/deinstall2023-05-02_06-51-54PM/logs//crsdc_2023-05-02_06-52-21-PM.log


Network Configuration check config START


Network de-configuration trace file location: /tmp/deinstall2023-05-02_06-51-54PM/logs/netdc_check2023-05-02_06-52-24-PM.log


Network Configuration check config END


Asm Check Configuration START


ASM de-configuration trace file location: /tmp/deinstall2023-05-02_06-51-54PM/logs/asmcadc_check2023-05-02_06-52-24-PM.log


Database Check Configuration START


Database de-configuration trace file location: /tmp/deinstall2023-05-02_06-51-54PM/logs/databasedc_check2023-05-02_06-52-25-PM.log


Oracle Grid Management database was found in this Grid Infrastructure home


Database Check Configuration END


######################### DECONFIG CHECK OPERATION END #########################



####################### DECONFIG CHECK OPERATION SUMMARY #######################

Oracle Grid Infrastructure Home is: /u01/app/12.2.0/grid

The following nodes are part of this cluster: station7,station4,station3

The cluster node(s) on which the Oracle home deinstallation will be performed are:station7

Oracle Home selected for deinstall is: /u01/app/12.2.0/grid

Inventory Location where the Oracle home registered is: /u01/app/oraInventory

Option -local will not modify any ASM configuration.

Oracle Grid Management database was found in this Grid Infrastructure home

Local configuration of Oracle Grid Management database will be removed

Do you want to continue (y - yes, n - no)? [n]: y

A log of this session will be written to: '/tmp/deinstall2023-05-02_06-51-54PM/logs/deinstall_deconfig2023-05-02_06-52-16-PM.out'

Any error messages from this session will be written to: '/tmp/deinstall2023-05-02_06-51-54PM/logs/deinstall_deconfig2023-05-02_06-52-16-PM.err'


######################## DECONFIG CLEAN OPERATION START ########################

Database de-configuration trace file location: /tmp/deinstall2023-05-02_06-51-54PM/logs/databasedc_clean2023-05-02_06-53-13-PM.log

ASM de-configuration trace file location: /tmp/deinstall2023-05-02_06-51-54PM/logs/asmcadc_clean2023-05-02_06-53-13-PM.log

ASM Clean Configuration END


Network Configuration clean config START


Network de-configuration trace file location: /tmp/deinstall2023-05-02_06-51-54PM/logs/netdc_clean2023-05-02_06-53-13-PM.log


Network Configuration clean config END



Run the following command as the root user or the administrator on node "station7".


/u01/app/12.2.0/grid/crs/install/rootcrs.sh -force -deconfig -paramfile "/tmp/deinstall2023-05-02_06-51-54PM/response/deinstall_OraGI12Home2.rsp"


Press Enter after you finish running the above commands


<----------------------------------------

After the command shown above finishes, press Enter. The output of that command appears in the next code box.


######################### DECONFIG CLEAN OPERATION END #########################



####################### DECONFIG CLEAN OPERATION SUMMARY #######################

Local configuration of Oracle Grid Management database was removed successfully

Oracle Clusterware is stopped and successfully de-configured on node "station7"

Oracle Clusterware is stopped and de-configured successfully.

#######################################################################



############# ORACLE DECONFIG TOOL END #############


Using properties file /tmp/deinstall2023-05-02_06-51-54PM/response/deinstall_2023-05-02_06-52-16-PM.rsp

Location of logs /tmp/deinstall2023-05-02_06-51-54PM/logs/


############ ORACLE DEINSTALL TOOL START ############






####################### DEINSTALL CHECK OPERATION SUMMARY #######################

A log of this session will be written to: '/tmp/deinstall2023-05-02_06-51-54PM/logs/deinstall_deconfig2023-05-02_06-52-16-PM.out'

Any error messages from this session will be written to: '/tmp/deinstall2023-05-02_06-51-54PM/logs/deinstall_deconfig2023-05-02_06-52-16-PM.err'


######################## DEINSTALL CLEAN OPERATION START ########################

## [START] Preparing for Deinstall ##

Setting LOCAL_NODE to station7

Setting CLUSTER_NODES to station7

Setting CRS_HOME to true

Setting oracle.installer.invPtrLoc to /tmp/deinstall2023-05-02_06-51-54PM/oraInst.loc

Setting oracle.installer.local to true


## [END] Preparing for Deinstall ##


Setting the force flag to false

Setting the force flag to cleanup the Oracle Base

Oracle Universal Installer clean START


Detach Oracle home '/u01/app/12.2.0/grid' from the central inventory on the local node : Done


Delete directory '/u01/app/12.2.0/grid' on the local node : Done


Delete directory '/u01/app/oraInventory' on the local node : Done


Failed to delete the directory '/u01/app/grid/log/diag/asmcmd/user_root/station7.example.com/trace'. Either user has no permission to delete or it is in use.

The Oracle Base directory '/u01/app/grid' will not be removed on local node. The directory is not empty.


Oracle Universal Installer cleanup was successful.


Oracle Universal Installer clean END



## [START] Oracle install clean ##



## [END] Oracle install clean ##



######################### DEINSTALL CLEAN OPERATION END #########################



####################### DEINSTALL CLEAN OPERATION SUMMARY #######################

Successfully detached Oracle home '/u01/app/12.2.0/grid' from the central inventory on the local node.

Successfully deleted directory '/u01/app/12.2.0/grid' on the local node.

Successfully deleted directory '/u01/app/oraInventory' on the local node.

Oracle Universal Installer cleanup was successful.



Run 'rm -r /etc/oraInst.loc' as root on node(s) 'station7' at the end of the session.


Run 'rm -r /opt/ORCLfmap' as root on node(s) 'station7' at the end of the session.

Review the permissions and contents of '/u01/app/grid' on nodes(s) 'station7'.

If there are no Oracle home(s) associated with '/u01/app/grid', manually delete '/u01/app/grid' and its contents.

Oracle deinstall tool successfully cleaned up temporary directories.

#######################################################################



############# ORACLE DEINSTALL TOOL END #############


    Output of the /u01/app/12.2.0/grid/crs/install/rootcrs.sh -force -deconfig -paramfile "/tmp/deinstall2023-05-02_06-51-54PM/response/deinstall_OraGI12Home2.rsp" command (run as root on station7):

[root@station7 ~]# /u01/app/12.2.0/grid/crs/install/rootcrs.sh -force -deconfig -paramfile "/tmp/deinstall2023-05-02_06-51-54PM/response/deinstall_OraGI12Home2.rsp"

Using configuration parameter file: /tmp/deinstall2023-05-02_06-51-54PM/response/deinstall_OraGI12Home2.rsp

The log of current session can be found at:

/tmp/deinstall2023-05-02_06-51-54PM/logs/crsdeconfig_station7_2023-05-02_06-54-24PM.log

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'station7'

CRS-2673: Attempting to stop 'ora.crsd' on 'station7'

CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'station7'

CRS-2673: Attempting to stop 'ora.FRA.dg' on 'station7'

CRS-2673: Attempting to stop 'ora.DATA.dg' on 'station7'

CRS-2673: Attempting to stop 'ora.data.acfs_vol1.acfs' on 'station7'

CRS-2677: Stop of 'ora.DATA.dg' on 'station7' succeeded

CRS-2677: Stop of 'ora.FRA.dg' on 'station7' succeeded

CRS-2673: Attempting to stop 'ora.asm' on 'station7'

CRS-2677: Stop of 'ora.asm' on 'station7' succeeded

CRS-2673: Attempting to stop 'ora.ASMNET2LSNR_ASM.lsnr' on 'station7'

CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'station7'

CRS-2677: Stop of 'ora.data.acfs_vol1.acfs' on 'station7' succeeded

CRS-2673: Attempting to stop 'ora.DATA.ACFS_VOL1.advm' on 'station7'

CRS-2677: Stop of 'ora.DATA.ACFS_VOL1.advm' on 'station7' succeeded

CRS-2673: Attempting to stop 'ora.proxy_advm' on 'station7'

CRS-2677: Stop of 'ora.ASMNET2LSNR_ASM.lsnr' on 'station7' succeeded

CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'station7' succeeded

CRS-2677: Stop of 'ora.proxy_advm' on 'station7' succeeded

CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'station7' has completed

CRS-2677: Stop of 'ora.crsd' on 'station7' succeeded

CRS-2673: Attempting to stop 'ora.asm' on 'station7'

CRS-2673: Attempting to stop 'ora.crf' on 'station7'

CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'station7'

CRS-2673: Attempting to stop 'ora.gpnpd' on 'station7'

CRS-2673: Attempting to stop 'ora.mdnsd' on 'station7'

CRS-2677: Stop of 'ora.drivers.acfs' on 'station7' succeeded

CRS-2677: Stop of 'ora.gpnpd' on 'station7' succeeded

CRS-2677: Stop of 'ora.crf' on 'station7' succeeded

CRS-2677: Stop of 'ora.mdnsd' on 'station7' succeeded

CRS-2677: Stop of 'ora.asm' on 'station7' succeeded

CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'station7'

CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'station7' succeeded

CRS-2673: Attempting to stop 'ora.ctssd' on 'station7'

CRS-2673: Attempting to stop 'ora.evmd' on 'station7'

CRS-2677: Stop of 'ora.ctssd' on 'station7' succeeded

CRS-2677: Stop of 'ora.evmd' on 'station7' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'station7'

CRS-2677: Stop of 'ora.cssd' on 'station7' succeeded

CRS-2673: Attempting to stop 'ora.driver.afd' on 'station7'

CRS-2673: Attempting to stop 'ora.gipcd' on 'station7'

CRS-2677: Stop of 'ora.driver.afd' on 'station7' succeeded

CRS-2677: Stop of 'ora.gipcd' on 'station7' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'station7' has completed

CRS-4133: Oracle High Availability Services has been stopped.

2023/05/02 18:56:26 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.

2023/05/02 18:56:46 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.

2023/05/02 18:56:50 CLSRSC-336: Successfully deconfigured Oracle Clusterware stack on this node
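
    Per the deinstall summary above, a few leftovers still have to be removed manually as root on station7 at the end of the session. A minimal sketch; the final rm -rf is only safe after confirming no Oracle homes remain under /u01/app/grid:

# run as root on station7, per the deinstall summary
rm -r /etc/oraInst.loc
rm -r /opt/ORCLfmap
# review /u01/app/grid first; delete it only if no Oracle homes are left there
ls -lR /u01/app/grid
rm -rf /u01/app/grid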


2. Correcting the ASM Instance Count

    Remove the node's leftover registration from the cluster:

[grid@station3 ~]$ olsnodes -n -t -s

station3 1 Active Unpinned

station4 2 Active Unpinned

station7 3 Inactive Unpinned

[grid@station3 ~]$ su -

Password:


[root@station3 ~]# . oraenv

ORACLE_SID = [root] ? +ASM1

The Oracle base has been set to /u01/app/grid

[root@station3 ~]# crsctl delete node -n station7

CRS-4661: Node station7 successfully deleted.
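
    As a quick sanity check (not shown in the original session), olsnodes should now omit station7 entirely rather than listing it as Inactive:

# expect only station3 and station4 in the output
olsnodes -n -t -s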


    Verify that the node's leftover information has been partially removed. ASM still shows a leftover instance, because Grid Infrastructure's deinstall -local does not touch the ASM configuration (see the earlier output line "Option -local will not modify any ASM configuration."):

[grid@station3 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       station3                 STABLE
               ONLINE  ONLINE       station4                 STABLE
ora.ASMNET2LSNR_ASM.lsnr
               ONLINE  ONLINE       station3                 STABLE
               ONLINE  ONLINE       station4                 STABLE
ora.DATA.ACFS_VOL1.advm
               ONLINE  ONLINE       station3                 STABLE
               ONLINE  ONLINE       station4                 STABLE
ora.DATA.dg
               ONLINE  ONLINE       station3                 STABLE
               ONLINE  ONLINE       station4                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       station3                 STABLE
               ONLINE  ONLINE       station4                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       station3                 STABLE
               ONLINE  ONLINE       station4                 STABLE
ora.data.acfs_vol1.acfs
               ONLINE  OFFLINE      station3                 volume /home/oracle/
                                                             data offline,STABLE
               ONLINE  OFFLINE      station4                 stale fs on /home/or
                                                             acle/data,STABLE
ora.net1.network
               ONLINE  ONLINE       station3                 STABLE
               ONLINE  ONLINE       station4                 STABLE
ora.ons
               ONLINE  ONLINE       station3                 STABLE
               ONLINE  ONLINE       station4                 STABLE
ora.proxy_advm
               ONLINE  ONLINE       station3                 STABLE
               ONLINE  ONLINE       station4                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       station3                 STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       station3                 STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       station4                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       station4                 169.254.82.210 172.3
                                                             1.118.4 172.31.118.2
                                                             04,STABLE
ora.asm
      1        ONLINE  ONLINE       station3                 Started,STABLE
      2        ONLINE  ONLINE       station4                 Started,STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.c01orcl.db
      2        ONLINE  ONLINE       station3                 Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             /dbhome_1,STABLE
ora.c01orcl.sales_r.svc
      1        OFFLINE OFFLINE                               STABLE
ora.c01orcl.sales_r2.svc
      1        OFFLINE OFFLINE                               STABLE
ora.c01orcl.serv2.svc
      2        ONLINE  ONLINE       station3                 STABLE
ora.c01orcl.serv3.svc
      2        ONLINE  ONLINE       station3                 STABLE
ora.cvu
      1        ONLINE  ONLINE       station4                 STABLE
ora.gns
      1        ONLINE  ONLINE       station4                 STABLE
ora.gns.vip
      1        ONLINE  ONLINE       station4                 STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       station4                 Open,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       station4                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       station3                 STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       station3                 STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       station4                 STABLE
ora.station3.vip
      1        ONLINE  ONLINE       station3                 STABLE
ora.station4.vip
      1        ONLINE  ONLINE       station4                 STABLE
--------------------------------------------------------------------------------


    Remove the leftover ASM information by correcting the ASM instance count:

[grid@station3 ~]$ srvctl modify asm -count 2
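
    The new cardinality can be confirmed with srvctl (a minimal check, not part of the original session):

# expect "ASM instance count: 2" in the output
srvctl config asm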


    Verify that the leftover ASM information has been removed:

[grid@station3 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       station3                 STABLE
               ONLINE  ONLINE       station4                 STABLE
ora.ASMNET2LSNR_ASM.lsnr
               ONLINE  ONLINE       station3                 STABLE
               ONLINE  ONLINE       station4                 STABLE
ora.DATA.ACFS_VOL1.advm
               ONLINE  ONLINE       station3                 STABLE
               ONLINE  UNKNOWN      station4                 STABLE
ora.DATA.dg
               ONLINE  ONLINE       station3                 STABLE
               ONLINE  ONLINE       station4                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       station3                 STABLE
               ONLINE  ONLINE       station4                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       station3                 STABLE
               ONLINE  ONLINE       station4                 STABLE
ora.data.acfs_vol1.acfs
               ONLINE  OFFLINE      station3                 volume /home/oracle/
                                                             data offline,STABLE
               ONLINE  UNKNOWN      station4                 stale fs on /home/or
                                                             acle/data,STABLE
ora.net1.network
               ONLINE  ONLINE       station3                 STABLE
               ONLINE  ONLINE       station4                 STABLE
ora.ons
               ONLINE  ONLINE       station3                 STABLE
               ONLINE  ONLINE       station4                 STABLE
ora.proxy_advm
               ONLINE  ONLINE       station3                 STABLE
               ONLINE  ONLINE       station4                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       station3                 STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       station3                 STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       station4                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       station4                 169.254.82.210 172.3
                                                             1.118.4 172.31.118.2
                                                             04,STABLE
ora.asm
      1        ONLINE  ONLINE       station3                 Started,STABLE
      2        ONLINE  ONLINE       station4                 Started,STABLE
ora.c01orcl.db
      2        ONLINE  ONLINE       station3                 Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             /dbhome_1,STABLE
ora.c01orcl.sales_r.svc
      1        OFFLINE OFFLINE                               STABLE
ora.c01orcl.sales_r2.svc
      1        OFFLINE OFFLINE                               STABLE
ora.c01orcl.serv2.svc
      2        ONLINE  ONLINE       station3                 STABLE
ora.c01orcl.serv3.svc
      2        ONLINE  ONLINE       station3                 STABLE
ora.cvu
      1        ONLINE  ONLINE       station4                 STABLE
ora.gns
      1        ONLINE  ONLINE       station4                 STABLE
ora.gns.vip
      1        ONLINE  ONLINE       station4                 STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       station4                 Open,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       station4                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       station3                 STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       station3                 STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       station4                 STABLE
ora.station3.vip
      1        ONLINE  ONLINE       station3                 STABLE
ora.station4.vip
      1        ONLINE  ONLINE       station4                 STABLE
--------------------------------------------------------------------------------


3. Converting RAC One Node to RAC

    Adjust the server pool's maximum size (with station7 removed, only two hub nodes remain, so lower Max from 3 to 2):

[grid@station3 ~]$ srvctl status serverpool

Server pool name: Free

Active servers count: 0

Server pool name: Generic

Active servers count: 0

Server pool name: racdbpool

Active servers count: 2

[grid@station3 ~]$ srvctl config serverpool -g racdbpool

Server pool name: racdbpool

Importance: 5, Min: 1, Max: 3

Category: hub

Candidate server names:

[grid@station3 ~]$ srvctl modify serverpool -g racdbpool -u 2

[grid@station3 ~]$ srvctl config serverpool -g racdbpool

Server pool name: racdbpool

Importance: 5, Min: 1, Max: 2

Category: hub

Candidate server names:


    Convert the RAC One Node database to RAC, still using the original server pool:

[oracle@station3 ~]$ srvctl convert database -d c01orcl -dbtype RAC
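
    Right after the conversion, instance placement can be checked (an optional step, not in the original session):

# with dbtype RAC and a server pool of Max 2, expect one instance
# on each of station3 and station4
srvctl status database -d c01orcl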


[oracle@station3 ~]$ srvctl config database -d c01orcl

Database unique name: c01orcl

Database name: c01orcl

Oracle home: /u01/app/oracle/product/12.2.0/dbhome_1

Oracle user: oracle

Spfile: +data/c01orcl/spfilec01orcl.ora

Password file: +data/c01orcl/orapwc01orcl

Domain: example.com

Start options: open

Stop options: immediate

Database role: PRIMARY

Management policy: AUTOMATIC

Server pools: racdbpool

Disk Groups: FRA,DATA

Mount point paths:

Services: sales_r,sales_r2,serv2,serv3

Type: RAC

Start concurrency:

Stop concurrency:

OSDBA group: dba

OSOPER group: oper

Database instances:

Configured nodes:

CSS critical: no

CPU count: 0

Memory target: 0

Maximum memory: 0

Default network number for database services:

Database is policy managed


    Check the cluster status after the conversion to RAC:

[grid@station3 ~]$ crsctl status res -t

--------------------------------------------------------------------------------

Name Target State Server State details

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.ASMNET1LSNR_ASM.lsnr

ONLINE ONLINE station3 STABLE

ONLINE ONLINE station4 STABLE

ora.ASMNET2LSNR_ASM.lsnr

ONLINE ONLINE station3 STABLE

ONLINE ONLINE station4 STABLE

ora.DATA.ACFS_VOL1.advm

ONLINE ONLINE station3 STABLE

ONLINE ONLINE station4 STABLE

ora.DATA.dg

ONLINE ONLINE station3 STABLE

ONLINE ONLINE station4 STABLE

ora.FRA.dg

ONLINE ONLINE station3 STABLE

ONLINE ONLINE station4 STABLE

ora.LISTENER.lsnr

ONLINE ONLINE station3 STABLE

ONLINE ONLINE station4 STABLE

ora.data.acfs_vol1.acfs

ONLINE OFFLINE station3 volume /home/oracle/

data offline,STABLE

ONLINE OFFLINE station4 stale fs on /home/or

acle/data,STABLE

ora.net1.network

ONLINE ONLINE station3 STABLE

ONLINE ONLINE station4 STABLE

ora.ons

ONLINE ONLINE station3 STABLE

ONLINE ONLINE station4 STABLE

ora.proxy_advm

ONLINE ONLINE station3 STABLE

ONLINE ONLINE station4 STABLE

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

1 ONLINE ONLINE station3 STABLE

ora.LISTENER_SCAN2.lsnr

1 ONLINE ONLINE station3 STABLE

ora.LISTENER_SCAN3.lsnr

1 ONLINE ONLINE station4 STABLE

ora.MGMTLSNR

1 ONLINE ONLINE station4 169.254.82.210 172.3

1.118.4 172.31.118.2

04,STABLE

ora.asm

1 ONLINE ONLINE station3 Started,STABLE

2 ONLINE ONLINE station4 Started,STABLE

ora.c01orcl.db

1 ONLINE ONLINE station4 Open,HOME=/u01/app/o

racle/product/12.2.0

/dbhome_1,STABLE

2 ONLINE ONLINE station3 Open,HOME=/u01/app/o

racle/product/12.2.0

/dbhome_1,STABLE

ora.c01orcl.sales_r.svc

1 OFFLINE OFFLINE STABLE

2 OFFLINE OFFLINE STABLE

ora.c01orcl.sales_r2.svc

1 OFFLINE OFFLINE STABLE

2 OFFLINE OFFLINE STABLE

ora.c01orcl.serv2.svc

1 ONLINE ONLINE station4 STABLE

2 ONLINE ONLINE station3 STABLE

ora.c01orcl.serv3.svc

1 ONLINE ONLINE station4 STABLE

2 ONLINE ONLINE station3 STABLE

ora.cvu

1 ONLINE ONLINE station4 STABLE

ora.gns

1 ONLINE ONLINE station4 STABLE

ora.gns.vip

1 ONLINE ONLINE station4 STABLE

ora.mgmtdb

1 ONLINE ONLINE station4 Open,STABLE

ora.qosmserver

1 ONLINE ONLINE station4 STABLE

ora.scan1.vip

1 ONLINE ONLINE station3 STABLE

ora.scan2.vip

1 ONLINE ONLINE station3 STABLE

ora.scan3.vip

1 ONLINE ONLINE station4 STABLE

ora.station3.vip

1 ONLINE ONLINE station3 STABLE

ora.station4.vip

1 ONLINE ONLINE station4 STABLE


4. Final Leftover-Removal Operations and Verification:

[oracle@station3 ~]$ sqlplus /nolog


SQL*Plus: Release 12.2.0.1.0 Production on Tue May 2 19:13:46 2023


Copyright (c) 1982, 2016, Oracle. All rights reserved.


SQL> conn sys/oracle_4U@c01orcl as sysdba

Connected.


SQL> select thread#, status from v$thread;


THREAD# STATUS

---------- ------

1 OPEN

2 OPEN

3 CLOSED


SQL> alter database disable thread 3;


Database altered.



SQL> select GROUP#, THREAD# from v$log;


GROUP# THREAD#

---------- ----------

1 1

2 1

3 2

4 2

11 3

12 3


6 rows selected.




SQL> alter database drop logfile group 11;


Database altered.


SQL> alter database drop logfile group 12;


Database altered.


SQL> select thread#, status from v$thread;


THREAD# STATUS

---------- ------

1 OPEN

2 OPEN


SQL>
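
    One leftover this session does not address is the undo tablespace that served the dropped thread. A hedged sketch, assuming it was named UNDOTBS3 (verify the actual name before dropping anything):

-- list undo tablespaces; if one remains from instance 3 and is no longer
-- referenced by any instance's undo_tablespace parameter, it can be dropped
-- (UNDOTBS3 is an assumed name)
select tablespace_name from dba_tablespaces where contents = 'UNDO';
drop tablespace undotbs3 including contents and datafiles;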


[grid@station3 ~]$ cluvfy stage -post nodedel -n station7 -verbose


Verifying Node Removal ...

Verifying CRS Integrity ...PASSED

Verifying Clusterware Version Consistency ...PASSED

Verifying Node Removal ...PASSED


Post-check for node removal was successful.


CVU operation performed: stage -post nodedel

Date: May 2, 2023 7:18:23 PM

CVU home: /u01/app/12.2.0/grid/

User: grid





