Removing a Node from Oracle 11g RAC (Part 2)

5. Removing the Node at the GRID Layer (Clusterware)

5.1 Check That All Nodes Are Unpinned

olsnodes -s -t
If the node to be removed is pinned, manually run the following command as root:
crsctl unpin css -n rac3
[root@rac1 ~]# olsnodes -s -t
rac1    Active  Unpinned
rac2    Active  Unpinned
rac3    Active  Unpinned
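
If you want to script this check, a minimal sketch such as the following (run as root on a surviving node; rac3 is this example's target node) unpins the node only when olsnodes actually reports it as Pinned:

# Unpin rac3 only if olsnodes -t reports it as Pinned
if olsnodes -t | grep -w rac3 | grep -qiw pinned; then
    crsctl unpin css -n rac3
fi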

5.2 Run deconfig as root on Node 3

/u01/grid/11.2.0.4/crs/install/rootcrs.pl -deconfig -deinstall -force
Verify:
olsnodes -s -t
crsctl stat res -t
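
You can also confirm on rac3 itself that the stack is fully down (an extra check, not part of the original run; CRS-4639 is the expected response once OHAS is stopped):

crsctl check crs         # expect CRS-4639: Could not contact Oracle High Availability Services
ps -ef | grep '[o]hasd'  # no ohasd processes should remain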

The process is as follows:
[root@rac3 install]# /u01/grid/11.2.0.4/crs/install/rootcrs.pl -deconfig -deinstall -force
Using configuration parameter file: /u01/grid/11.2.0.4/crs/install/crsconfig_params
Network exists: 1/192.168.174.0/255.255.255.0/eth0, type static
VIP exists: /rac1vip/192.168.174.123/192.168.174.0/255.255.255.0/eth0, hosting node rac1
VIP exists: /rac2vip/192.168.174.124/192.168.174.0/255.255.255.0/eth0, hosting node rac2
VIP exists: /rac3vip/192.168.174.127/192.168.174.0/255.255.255.0/eth0, hosting node rac3
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rac3'
CRS-2677: Stop of 'ora.registry.acfs' on 'rac3' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac3'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac3'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac3'
CRS-2673: Attempting to stop 'ora.ARCH.dg' on 'rac3'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac3'
CRS-2673: Attempting to stop 'ora.OCR.dg' on 'rac3'
CRS-2677: Stop of 'ora.ARCH.dg' on 'rac3' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'rac3' succeeded
CRS-2677: Stop of 'ora.OCR.dg' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac3'
CRS-2677: Stop of 'ora.asm' on 'rac3' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac3' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac3'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac3'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac3'
CRS-2673: Attempting to stop 'ora.asm' on 'rac3'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac3'
CRS-2677: Stop of 'ora.evmd' on 'rac3' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac3' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac3' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac3' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac3'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac3'
CRS-2677: Stop of 'ora.cssd' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac3'
CRS-2677: Stop of 'ora.crf' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac3'
CRS-2677: Stop of 'ora.gipcd' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac3'
CRS-2677: Stop of 'ora.gpnpd' on 'rac3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Can't exec "/usr/bin/lsb_release": No such file or directory at /u01/grid/11.2.0.4/lib/osds_acfslib.pm line 542.
Use of uninitialized value $LSB_RELEASE in split at /u01/grid/11.2.0.4/lib/osds_acfslib.pm line 547.
Can't exec "/usr/bin/lsb_release": No such file or directory at /u01/grid/11.2.0.4/lib/osds_acfslib.pm line 542.
Use of uninitialized value $LSB_RELEASE in split at /u01/grid/11.2.0.4/lib/osds_acfslib.pm line 547.
Can't exec "/usr/bin/lsb_release": No such file or directory at /u01/grid/11.2.0.4/lib/osds_acfslib.pm line 542.
Use of uninitialized value $LSB_RELEASE in split at /u01/grid/11.2.0.4/lib/osds_acfslib.pm line 547.
Can't exec "/usr/bin/lsb_release": No such file or directory at /u01/grid/11.2.0.4/lib/osds_acfslib.pm line 542.
Use of uninitialized value $LSB_RELEASE in split at /u01/grid/11.2.0.4/lib/osds_acfslib.pm line 547.
Can't exec "/usr/bin/lsb_release": No such file or directory at /u01/grid/11.2.0.4/lib/osds_acfslib.pm line 542.
Use of uninitialized value $LSB_RELEASE in split at /u01/grid/11.2.0.4/lib/osds_acfslib.pm line 547.
Can't exec "/usr/bin/lsb_release": No such file or directory at /u01/grid/11.2.0.4/lib/osds_acfslib.pm line 542.
Use of uninitialized value $LSB_RELEASE in split at /u01/grid/11.2.0.4/lib/osds_acfslib.pm line 547.
Removing Trace File Analyzer
Successfully deconfigured Oracle clusterware stack on this node

[root@rac1 ~]# olsnodes -s -t
rac1    Active  Unpinned
rac2    Active  Unpinned
rac3    Inactive        Unpinned
[root@rac1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCH.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.DATA.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.OCR.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.asm
               ONLINE  ONLINE       rac1                     Started             
               ONLINE  ONLINE       rac2                     Started             
ora.gsd
               OFFLINE OFFLINE      rac1                                         
               OFFLINE OFFLINE      rac2                                         
ora.net1.network
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.ons
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.registry.acfs
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1                                         
ora.cvu
      1        ONLINE  ONLINE       rac1                                         
ora.oc4j
      1        ONLINE  ONLINE       rac1                                         
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                                         
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                                         
ora.racdb.db
      1        ONLINE  ONLINE       rac1                     Open                
      2        ONLINE  ONLINE       rac2                     Open                
ora.scan1.vip
      1        ONLINE  ONLINE       rac1       

5.3 Run crsctl as root on Node 1 to Delete the Node

crsctl delete node -n rac3
[root@rac1 ~]# crsctl delete node -n rac3
CRS-4661: Node rac3 successfully deleted.
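
A quick follow-up check (hedged; the expected result mirrors the olsnodes output in section 6, where only rac1 and rac2 remain):

olsnodes -s -t    # rac3 should no longer be listed at all, not even as Inactive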

5.4 Run updateNodeList as grid on Node 3 to Update the Inventory

su - grid
$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/grid/11.2.0.4/ "CLUSTER_NODES=rac3" -silent -local
Check:
cat /u01/app/oraInventory/ContentsXML/inventory.xml
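
If you prefer the update and the check in one scripted step, a small sketch (assuming ORACLE_HOME is set for the grid user, as in the transcript below):

# Shrink this node's inventory node list to rac3 only, then confirm
$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=rac3" -silent -local \
  && grep 'NODE NAME' /u01/app/oraInventory/ContentsXML/inventory.xml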

The process is as follows:
[root@rac3 install]# su - grid
[grid@rac3 ~]$ echo $ORACLE_HOME
/u01/grid/11.2.0.4/
[grid@rac3 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/grid/11.2.0.4/ "CLUSTER_NODES=rac3" -silent -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 2990 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[grid@rac3 ~]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2013, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>11.2.0.4.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/grid/11.2.0.4" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="rac3"/>
   </NODE_LIST>
</HOME>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>

5.5 Run deinstall as grid on Node 3 to Remove the GRID_HOME Software

As the grid user, run the following command to remove the Grid software:
$ORACLE_HOME/deinstall/deinstall -local
During the run, you will need to press Enter several times to confirm the prompts, then enter y to confirm the removal. Finally, run a command similar to the following as root:
/tmp/deinstall2021-01-04_11-04-44AM/perl/bin/perl -I/tmp/deinstall2021-01-04_11-04-44AM/perl/lib -I/tmp/deinstall2021-01-04_11-04-44AM/crs/install /tmp/deinstall2021-01-04_11-04-44AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2021-01-04_11-04-44AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
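
The deinstall tool can also be driven non-interactively via a response file (a hedged sketch; -checkonly and -paramfile are documented 11.2 deinstall options, but verify the generated .rsp on your system before trusting it, and the /path/to placeholder is hypothetical):

# Generate a response file without removing anything, then deinstall silently
$ORACLE_HOME/deinstall/deinstall -local -checkonly
$ORACLE_HOME/deinstall/deinstall -local -silent -paramfile /path/to/deinstall_Ora11g_gridinfrahome1.rsp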

The process is as follows:
[grid@rac3 ~]$ $ORACLE_HOME/deinstall/deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2021-01-04_11-04-44AM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################
## [START] Install check configuration ##

Checking for existence of the Oracle home location /u01/grid/11.2.0.4
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home 
The following nodes are part of this cluster: rac3
Checking for sufficient temp space availability on node(s) : 'rac3'

## [END] Install check configuration ##

Traces log file: /tmp/deinstall2021-01-04_11-04-44AM/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "rac3"[null]
 > 

The following information can be collected by running "/sbin/ifconfig -a" on node "rac3"
Enter the IP netmask of Virtual IP "rac3-vip" on node "rac3"[255.255.255.0]
 > 

Enter the network interface name on which the virtual IP address "rac3-vip" is active
 > 

Enter an address or the name of the virtual IP[]
 > 

Network Configuration check config START

Network de-configuration trace file location: /tmp/deinstall2021-01-04_11-04-44AM/logs/netdc_check2021-01-04_11-08-46-AM.log

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER]:

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /tmp/deinstall2021-01-04_11-04-44AM/logs/asmcadc_check2021-01-04_11-09-11-AM.log

######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: 
The cluster node(s) on which the Oracle home deinstallation will be performed are:rac3
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rac3', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/grid/11.2.0.4
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2021-01-04_11-04-44AM/logs/deinstall_deconfig2021-01-04_11-05-26-AM.out'
Any error messages from this session will be written to: '/tmp/deinstall2021-01-04_11-04-44AM/logs/deinstall_deconfig2021-01-04_11-05-26-AM.err'

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2021-01-04_11-04-44AM/logs/asmcadc_clean2021-01-04_11-09-15-AM.log
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /tmp/deinstall2021-01-04_11-04-44AM/logs/netdc_clean2021-01-04_11-09-15-AM.log

De-configuring RAC listener(s): LISTENER

De-configuring listener: LISTENER
    Stopping listener on node "rac3": LISTENER
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.

De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "rac3".

/tmp/deinstall2021-01-04_11-04-44AM/perl/bin/perl -I/tmp/deinstall2021-01-04_11-04-44AM/perl/lib -I/tmp/deinstall2021-01-04_11-04-44AM/crs/install /tmp/deinstall2021-01-04_11-04-44AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2021-01-04_11-04-44AM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Press Enter after you finish running the above commands

<----------------------------------------

Remove the directory: /tmp/deinstall2021-01-04_11-04-44AM on node: 
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/grid/11.2.0.4' from the central inventory on the local node : Done

Delete directory '/u01/grid/11.2.0.4' on the local node : Done

Delete directory '/u01/app/oraInventory' on the local node : Done

Delete directory '/u01/app/grid' on the local node : Done

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END

## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2021-01-04_11-04-44AM' on node 'rac3'

## [END] Oracle install clean ##

######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################
Following RAC listener(s) were de-configured successfully: LISTENER
Oracle Clusterware is stopped and successfully de-configured on node "rac3"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/grid/11.2.0.4' from the central inventory on the local node.
Successfully deleted directory '/u01/grid/11.2.0.4' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Successfully deleted directory '/u01/app/grid' on the local node.
Oracle Universal Installer cleanup was successful.

Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'rac3' at the end of the session.

Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac3' at the end of the session.
Run 'rm -rf /etc/oratab' as root on node(s) 'rac3' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################

############# ORACLE DEINSTALL & DECONFIG TOOL END #############

[root@rac3 deinstall2021-01-04_11-04-44AM]# /tmp/deinstall2021-01-04_11-04-44AM/perl/bin/perl -I/tmp/deinstall2021-01-04_11-04-44AM/perl/lib -I/tmp/deinstall2021-01-04_11-04-44AM/crs/install /tmp/deinstall2021-01-04_11-04-44AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2021-01-04_11-04-44AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2021-01-04_11-04-44AM/response/deinstall_Ora11g_gridinfrahome1.rsp
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Modify failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Delete failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
################################################################
# You must kill processes or reboot the system to properly #
# cleanup the processes started by Oracle clusterware          #
################################################################
Can't exec "/usr/bin/lsb_release": No such file or directory at /u01/grid/11.2.0.4/lib/osds_acfslib.pm line 542.
Use of uninitialized value $LSB_RELEASE in split at /u01/grid/11.2.0.4/lib/osds_acfslib.pm line 547.
Can't exec "/usr/bin/lsb_release": No such file or directory at /u01/grid/11.2.0.4/lib/osds_acfslib.pm line 542.
Use of uninitialized value $LSB_RELEASE in split at /u01/grid/11.2.0.4/lib/osds_acfslib.pm line 547.
Can't exec "/usr/bin/lsb_release": No such file or directory at /u01/grid/11.2.0.4/lib/osds_acfslib.pm line 542.
Use of uninitialized value $LSB_RELEASE in split at /u01/grid/11.2.0.4/lib/osds_acfslib.pm line 547.
Can't exec "/usr/bin/lsb_release": No such file or directory at /u01/grid/11.2.0.4/lib/osds_acfslib.pm line 542.
Use of uninitialized value $LSB_RELEASE in split at /u01/grid/11.2.0.4/lib/osds_acfslib.pm line 547.
Can't exec "/usr/bin/lsb_release": No such file or directory at /u01/grid/11.2.0.4/lib/osds_acfslib.pm line 542.
Use of uninitialized value $LSB_RELEASE in split at /u01/grid/11.2.0.4/lib/osds_acfslib.pm line 547.
ACFS-9313: No ADVM/ACFS installation detected.
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node

5.6 Run updateNodeList as grid on Node 1 to Update the Inventory

On node 1, run as the grid user:
$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/grid/11.2.0.4/ "CLUSTER_NODES=rac1,rac2" -silent
Check:
cat /u01/app/oraInventory/ContentsXML/inventory.xml
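
To spot-check that rac3 is gone from every home registered in the inventory, a one-liner like the following can help (hedged; a count of 0 is the goal):

grep -c 'NODE NAME="rac3"' /u01/app/oraInventory/ContentsXML/inventory.xml   # expect 0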

Process:
[grid@rac1 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/grid/11.2.0.4/ "CLUSTER_NODES=rac1,rac2" -silent
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4581 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[grid@rac1 ~]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2013, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>11.2.0.4.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/grid/11.2.0.4" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="rac1"/>
      <NODE NAME="rac2"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0.4/db_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="rac1"/>
      <NODE NAME="rac2"/>
   </NODE_LIST>
</HOME>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>

5.7 Update the /etc/hosts File

Update the /etc/hosts file to remove the entries related to node 3:
vi /etc/hosts
192.168.174.121  rac1
192.168.174.122  rac2
192.168.160.121 rac1priv
192.168.160.122  rac2priv
192.168.174.123  rac1vip
192.168.174.124  rac2vip
192.168.174.125 racscanip
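
Rather than editing by hand, the node 3 lines can be stripped with sed (a sketch; back the file up first, and adjust the pattern if your naming differs from the rac3/rac3priv/rac3vip convention used here):

cp /etc/hosts /etc/hosts.bak
sed -i '/rac3/d' /etc/hosts    # delete every line that mentions rac3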

5.8 Verify the Node Removal with CVU

On node 1, run as the grid user:
cluvfy stage -post nodedel -n rac3 -verbose

[grid@rac1 ~]$ cluvfy stage -post nodedel -n rac3 -verbose

Performing post-checks for node removal 

Checking CRS integrity...

Clusterware version consistency passed
The Oracle Clusterware is healthy on node "rac2"
The Oracle Clusterware is healthy on node "rac1"

CRS integrity check passed
Result: 
Node removal check passed

Post-check for node removal was successful
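
If this check is part of a decommissioning script, the same cluvfy run can gate the next step (hedged; whether cluvfy sets a nonzero exit status on failed checks can vary by release, so confirm on your version):

cluvfy stage -post nodedel -n rac3 && echo "post-nodedel check passed"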

6. Verification

olsnodes -n -s -t 
crsctl stat res -t
[grid@rac1 ~]$ olsnodes -n -s -t 
rac1    1       Active  Unpinned
rac2    2       Active  Unpinned

As a further check, you can try restarting CRS on nodes 1 and 2:
crsctl stop crs -f
crsctl start crs
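
A minimal restart-and-verify sequence for one node (run as root, and do one node at a time so the cluster stays available; crsctl check crs and crsctl stat res -t are the confirmation steps):

crsctl stop crs -f
crsctl start crs
# wait for the stack to come up, then confirm
crsctl check crs
crsctl stat res -t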

7. Clean Up Files Left Over on Node 3

Some directories may still exist on node 3; they can be removed with the following commands.
Remove the home directories:
rm -rf /u01/app/grid_home
rm -rf /home/oracle
Remove the related files:
rm -rf /tmp/.oracle
rm -rf /var/tmp/.oracle
rm -rf /etc/init/oracle-ohasd.conf
rm -rf /etc/init.d/ohasd
rm -rf /etc/init.d/init.ohasd
rm -rf /etc/oraInst.loc
rm -rf /etc/oratab
rm -rf /etc/oracle
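
If the same cleanup has to be repeated on other decommissioned nodes, the removals above can be wrapped in a small root script (a sketch; the paths simply mirror the list above and should be adjusted to your layout):

#!/bin/bash
# Remove leftover Grid Infrastructure artifacts on a deconfigured node
for f in /u01/app/grid_home /home/oracle \
         /tmp/.oracle /var/tmp/.oracle \
         /etc/init/oracle-ohasd.conf /etc/init.d/ohasd /etc/init.d/init.ohasd \
         /etc/oraInst.loc /etc/oratab /etc/oracle; do
    rm -rf "$f"
done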
