OPatch in RAC
===============
1. All-Node Patch
   . Shut down all Oracle instances on all nodes
   . Apply the patch to all nodes
   . Bring all nodes up
2. Minimum Downtime
   . Shut down the Oracle instance on node 1
   . Apply the patch to the Oracle instance on node 1
   . Shut down the Oracle instance on node 2
   . Apply the patch to the Oracle instance on node 2
   . Shut down the Oracle instance on node 3
   . At this point, the patched instances on nodes 1 and 2 can be brought up
   . Apply the patch to the Oracle instance on node 3
   . Start the Oracle instance on node 3
3. Rolling Patch (no downtime; a command-level sketch follows this list)
   . Shut down the Oracle instance on node 1
   . Apply the patch to the Oracle instance on node 1
   . Start the Oracle instance on node 1
   . Shut down the Oracle instance on node 2
   . Apply the patch to the Oracle instance on node 2
   . Start the Oracle instance on node 2
   . Shut down the Oracle instance on node 3
   . Apply the patch to the Oracle instance on node 3
   . Start the Oracle instance on node 3
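A command-level sketch of one rolling iteration, using the srvctl syntax that appears later in these notes (database and instance names are illustrative):
# node 1 shown; repeat on nodes 2 and 3 with their instance names
srvctl stop instance -d ebizdb -i ebizdb1    # stop only this node's instance
opatch apply -local                          # run from the unzipped patch directory
srvctl start instance -d ebizdb -i ebizdb1   # bring it back before moving to the next node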
How to determine whether a patch is a "rolling patch"
- 9i or 10gR1: run
$ opatch query -is_rolling
OPatch will prompt for the patch location and then report whether or not the
patch is a rolling patch.
- 10gR2: run
$ opatch query -all | grep rolling
Patching one node at a time
The OPatch strategies discussed above (All-Node, Minimum Downtime, and Rolling)
presume that all nodes are patched in the same maintenance window. Alternatively,
each node can be patched individually, at different times, using the "-local"
option, which patches only the local node:
opatch apply -local
What is OCR and Voting disk
===========================
The voting disk is a shared file that records cluster node membership; CSS uses
it to arbitrate which nodes are active members of the cluster.
OCR (Oracle Cluster Registry) is a shared file that stores the cluster and RAC
configuration, including the resources managed by the clusterware.
Version of CRS
---------------
crsctl query crs activeversion
Command that lists the voting disks
-----------------------------------
crsctl query css votedisk
To back up the voting disk
--------------------------
dd if=voting_disk_name of=backup_file_name
alpgadbq02sec:/export/users/crsqa $ dd if=/ocrvoteqa/crsqa/oracle/admin/vote_file of=/export/users/crsqa/vote_file2
20000+0 records in
20000+0 records out
alpgadbq02sec:/export/users/crsqa $ ls -lrt /export/users/crsqa/vote_file2
-rw-r--r-- 1 crsqa dbaqa 10240000 Mar 25 09:38 /export/users/crsqa/vote_file2
alpgadbq02sec:/export/users/crsqa $ ls -lrt /ocrvoteqa/crsqa/oracle/admin/vote_file
-rw-r--r-- 1 crsqa dbaqa 10240000 Mar 25 09:39 /ocrvoteqa/crsqa/oracle/admin/vote_file
alpgadbq02sec:/export/users/crsqa $ file /export/users/crsqa/vote_file2
Recover voting disk from file
-----------------------------
Take a backup of all voting disks:
dd if=voting_disk_name of=backup_file_name
The following can be used to restore the voting disk from the backup file created.
dd if=backup_file_name of=voting_disk_name
As CRS User
cinohapdbp01sec:/crslab1/oracle/product/10.2.0/crslab1/cdata/crslab1 $ dd if=/ocrvote/crslab1/oracle/admin/Vote_File of=/crslab1/oracle/product/10.2.0/crslab1/log/vote
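A minimal restore sketch, assuming the backup taken above and that the clusterware is down on all nodes while the copy runs (paths as in the example above):
crsctl stop crs        # as root, on all nodes
dd if=/crslab1/oracle/product/10.2.0/crslab1/log/vote of=/ocrvote/crslab1/oracle/admin/Vote_File
crsctl start crs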
crsctl command to add and delete the voting disks
-------------------------------------------------
In 11g, voting disks can be added and removed online; in 10g this is an offline
operation (cluster down).
cinohapdbp01sec # /crslab1/oracle/product/10.2.0/crslab1/bin/crsctl delete css votedisk /ocrvote/crslab1/oracle/admin/Vote_File2 -force
successful deletion of votedisk /ocrvote/crslab1/oracle/admin/Vote_File2.
cinohapdbp01sec #
crsctl delete css votedisk <path>
crsctl add css votedisk <path>
Use the -force option when the cluster is down:
crsctl add css votedisk <path> -force
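In 10g, where the change is offline, the whole flow looks roughly like this (a sketch; the new voting disk path is illustrative):
crsctl stop crs        # as root, on all nodes
crsctl add css votedisk /ocrvote/crslab1/oracle/admin/Vote_File3 -force
crsctl start crs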
OCR backup location
===================
ocrconfig -showbackup
Oracle Clusterware takes automatic OCR backups every 4 hours and also retains
daily and weekly backups.
ocrconfig -help
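To restore OCR from one of these automatic physical backups (a sketch; take the actual file name from ocrconfig -showbackup, the one below is illustrative):
crsctl stop crs        # as root, on all nodes
ocrconfig -restore $ORA_CRS_HOME/cdata/crslab1/backup00.ocr
crsctl start crs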
Manual backup of OCR (export/import)
------------------------------------
As root:
Backup  => ocrconfig -export <file>
Restore => bring down CRS, run ocrconfig -import <file>, bring up CRS
cinohapdbp01sec # /crslab1/oracle/product/10.2.0/crslab1/bin/ocrconfig -export /crslab1/oracle/product/10.2.0/crslab1/log/ocr_bak
cinohapdbp01sec # /crslab1/oracle/product/10.2.0/crslab1/bin/ocrconfig -import /crslab1/oracle/product/10.2.0/crslab1/log/ocr_bak
To change the OCR backup location
=================================
ocrconfig -backuploc <path>
Voting disk and OCR details
===========================
crsctl query css votedisk
ocrcheck
To check the number of active instances
========================================
SELECT * FROM V$ACTIVE_INSTANCES;
pfile parameters
=================
ebizdb1.__db_cache_size=377487360
ebizdb2.__db_cache_size=394264576  (and so on for each instance)
*.cluster_database_instances=2
*.cluster_database=true
ebizdb2.instance_number=2
ebizdb1.instance_number=1
ebizdb1.undo_tablespace='UNDOTBS1'
ebizdb2.undo_tablespace='UNDOTBS2'
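The instance-name prefixes (ebizdb1., ebizdb2.) scope a parameter to one instance, while '*' applies to all instances. With an spfile, the same scoping uses the SID clause, e.g.:
-- instance-specific settings, using the values above
ALTER SYSTEM SET undo_tablespace='UNDOTBS1' SID='ebizdb1';
ALTER SYSTEM SET undo_tablespace='UNDOTBS2' SID='ebizdb2';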
To stop the instance and DB
============================
export ORACLE_SID=ebizdb2
emctl stop dbconsole
srvctl stop instance -d ebizdb -i ebizdb1
srvctl stop asm -n ebizdb1
srvctl stop nodeapps -n ebizdb1
srvctl status database -d ebizdb
srvctl status instance -d ebizdb -i ebizdb1
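Starting everything back up is the same sequence in reverse (same illustrative names):
srvctl start nodeapps -n ebizdb1
srvctl start asm -n ebizdb1
srvctl start instance -d ebizdb -i ebizdb1
emctl start dbconsole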
Number of nodes
===============
$ORA_CRS_HOME/bin/olsnodes -n
cihsnsddb001.sensing.ge.com
To stop clusterware
====================
crsctl check crs
crsctl stop crs
/etc/init.d/init.crs stop
crs_stat -t
crs_start -all
crsctl query css votedisk
Can a node be deleted from the RAC cluster and added back again? (See the node
removal and node addition sections below.) Useful storage checks first:
fdisk -l
iscsi-ls
/etc/init.d/o2cb status
OCFS
======
Create:
mkfs.ocfs2 -b 4K -C 32K -N 4 -L ebizdata_2 /dev/sdd1
mkfs.ocfs2 -b 4K -C 32K -N 4 -L ebizdata_3 /dev/sdc1
mkfs.ocfs2 -b 4K -C 32K -N 4 -L R12data_1 /dev/hda1
Mount (the entries also belong in /etc/fstab; see the sketch below):
mount -t ocfs2 -o datavolume,nointr -L "crs" /d01/crsdata/ebizdb
mount -t ocfs2 -o datavolume,nointr -L "ebizdata_3" /d02/oradata/ebizdb
ASM
====
Create:
/etc/init.d/oracleasm createdisk VOL2 /dev/sde1
Check:
/etc/init.d/oracleasm listdisks
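A disk created with createdisk on one node usually needs a rescan before the other RAC nodes see it; a sketch using the same init script:
/etc/init.d/oracleasm scandisks     # run on each of the other nodes
/etc/init.d/oracleasm listdisks     # VOL2 should now be listed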
openfiler
============
service iscsi-target restart
rac node
========
service iscsi restart
/etc/init.d/oracleasm listdisks
After mounting, add an entry for each volume in /etc/fstab (see the OCFS sketch above).
dmesg | sort | grep '^Attached scsi disk'
Add "DiscoveryAddress=openfiler-priv:3260" to /etc/iscsi.conf:
DiscoveryAddress=openfiler-priv:3260
rpm -Uvh ocfs2-2.6.9-42.ELsmp-1.2.9-1.el4.i686.rpm ocfs2console-1.2.7-1.el4.i386.rpm ocfs2-tools-1.2.7-1.el4.i386.rpm
ebizdb1:/root $ ./iscsi-ls-map.sh
Host / SCSI ID SCSI Device Name iSCSI Target Name
---------------- ----------------------- -----------------
2 /dev/sdc ebizdata_3
3 /dev/sdb ebizdata_2
4 /dev/sdd ebizdata_1
5 /dev/sde asm3
6 /dev/sdf asm2
7 /dev/sdh asm1
8 /dev/sdg crs
ebizdb1:/root $
ebizdb1:/root $
ebizdb1:/root $ fdisk /dev/sdc
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1019, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1019, default 1019):
Using default value 1019
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
ebizdb1:/root $ ./iscsi-ls-map.sh
Host / SCSI ID SCSI Device Name iSCSI Target Name
---------------- ----------------------- -----------------
2 /dev/sdc ebizdata_3
3 /dev/sdb ebizdata_2
4 /dev/sdd ebizdata_1
5 /dev/sde asm3
6 /dev/sdf asm2
7 /dev/sdh asm1
8 /dev/sdg crs
ebizdb1:/root $ mkfs.ocfs2 -b 4K -C 32K -N 4 -L ebizdata_3 /dev/sdc1
Doc ID: 388577.1 - convert to RAC
[root@ebizdb1 ~]# service iscsi restart
Searching for iscsi-based multipath maps
Found 0 maps
Stopping iscsid: [ OK ]
Removing iscsi driver: ERROR: Module iscsi_sfnet is in use
[FAILED]
[root@ebizdb1 ~]#
[root@ebizdb1 ~]#
[root@ebizdb1 ~]# service iscsi restart
Searching for iscsi-based multipath maps
Found 0 maps
Stopping iscsid: iscsid not running
Removing iscsi driver: ERROR: Module iscsi_sfnet is in use
[FAILED]
(The iscsi_sfnet module cannot be unloaded while iSCSI-backed volumes are still
in use; unmount the OCFS2/ASM volumes or stop the cluster stack before
restarting the iscsi service.)
To install
===========
rpm -Uvh compat*
Check
=====
rpm -qa | grep compat
[root@ebizapp apps_12]# rpm -Uvh compat*
warning: compat-gcc-c++-7.3-2.96.128.i386.rpm: V3 DSA signature: NOKEY, key ID 73307de6
Preparing... ########################################### [100%]
file /usr/lib/libstdc++-2-libc6.1-1-2.9.0.so from install of compat-libstdc++-7.3-2.96.128 conflicts with file from package compat-libstdc++-296-2.96-132.7.2
file /usr/lib/libstdc++-3-libc6.2-2-2.10.0.so from install of compat-libstdc++-7.3-2.96.128 conflicts with file from package compat-libstdc++-296-2.96-132.7.2
[root@ebizapp apps_12]#
./runcluvfy.sh stage -post crsinst -n ebizdb1,ebizdb2 -verbose
./runcluvfy.sh stage -pre dbinst -n ebizdb1,ebizdb2 -r 10gR2 -verbose
Manually deleting the services and cleaning up
Removing a Node from a 10g RAC Cluster (Doc ID: 269320.1)
=========================================================
/etc/init.d/init.evmd stop
/etc/init.d/init.evmd disable
/etc/init.d/init.cssd stop
/etc/init.d/init.cssd disable
/etc/init.d/init.crsd stop
/etc/init.d/init.crsd disable
/etc/init.d/init.crs stop
/etc/init.d/init.crs disable
rm -rf /etc/oracle /etc/oraInst.loc /etc/oratab
rm -rf /etc/init.d/init.crsd /etc/init.d/init.crs /etc/init.d/init.cssd /etc/init.d/init.evmd
rm -rf /etc/rc2.d/K96init.crs /etc/rc2.d/S96init.crs /etc/rc3.d/K96init.crs
rm -rf /etc/rc3.d/S96init.crs /etc/rc4.d/K96init.crs /etc/rc4.d/S96init.crs
rm -rf /etc/rc5.d/K96init.crs /etc/rc5.d/S96init.crs /etc/rc.d/rc0.d/K96init.crs
rm -rf /etc/rc.d/rc1.d/K96init.crs /etc/rc.d/rc6.d/K96init.crs /etc/rc.d/rc4.d/K96init.crs
cp /etc/inittab.orig /etc/inittab
rm -rf /etc/inittab.crs /etc/inittab.no_crs
rm -rf /tmp/*
rm -rf /tmp/.oracle
rm -rf /usr/local/bin/dbhome /usr/local/bin/oraenv /usr/local/bin/coraenv
rm -rf /var/tmp/.oracle
rm -rf /opt/oracle/*
rm -rf /u03/oracrs/
./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1 CLUSTER_NODES=ebizdb1,ebizdb2
runInstaller -updateNodeList -noClusterEnabled ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1 CLUSTER_NODES=ebizdb1,ebizdb2 CRS=false "INVENTORY_LOCATION=/u01/app/oraInventory" LOCAL_NODE=.
[root@ebizdb3 install]# ./rootdelete.sh local nosharedvar nosharedhome
[root@ebizdb1 install]# ./rootdeletenode.sh ebizdb3,3
runInstaller -updateNodeList ORACLE_HOME=/u01/app/crs CLUSTER_NODES=ebizdb1,ebizdb2 CRS=TRUE
Node addition
===============
Prepare the hardware on the new node to match the existing nodes.
Verify remote access / user equivalence between all nodes.
Run $ORA_CRS_HOME/oui/bin/addNode.sh (a silent-mode sketch follows this list).
The installer asks for the scripts below to be run as root:
$ORA_CRS_HOME/install/rootaddnode.sh from an existing (running) node
$ORA_CRS_HOME/root.sh from the new node
orainstRoot.sh creates the inventory location if it does not already exist.
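A sketch of a silent addNode.sh invocation (node names are illustrative; the interactive OUI run described above works the same way):
cd $ORA_CRS_HOME/oui/bin
./addNode.sh -silent "CLUSTER_NEW_NODES={ebizdb3}" \
  "CLUSTER_NEW_PRIVATE_NODE_NAMES={ebizdb3-priv}" \
  "CLUSTER_NEW_VIRTUAL_HOSTNAMES={ebizdb3-vip}"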
Although Oracle CRS replaces the Oracle Cluster Manager (ORACM) of Oracle9i RAC,
it continues to support the Global Services Daemon (GSD), which in Oracle9i was
responsible for communicating with the RAC database. In Oracle 10g, GSD's sole
purpose is to serve Oracle9i clients (such as SRVCTL, Database Configuration
Assistant, and Oracle Enterprise Manager). This is a cost benefit, since
existing Oracle9i clients and hardware do not have to be replaced to work with
an Oracle 10g database.
To check interfaces between the RAC nodes
----------------------------------------
$ORA_CRS_HOME/bin/oifcfg iflist
cinohapdbp01sec:/export/users/crslab1 $ $ORA_CRS_HOME/bin/oifcfg iflist
ce0 3.112.44.0
ce2 3.112.44.0
ce3 192.168.22.0
ce3 192.168.21.0
ce5 172.17.0.0
ce7 192.168.24.0
ce7 192.168.23.0
ce7 192.168.25.0
ce9 3.24.138.0
cinohapdbp01sec:/export/users/crslab1 $
3.112.45.199 cinohapdbp01sec cinohapdbp01sec.security.ge.com loghost
3.112.45.205 cinohapdbp01sec-vip cinohapdbp01sec-vip.security.ge.com
192.168.21.1 cinohapdbp01sec-priv
3.112.45.202 cinohapdbp02sec cinohapdbp02sec.security.ge.com
3.112.45.206 cinohapdbp02sec-vip cinohapdbp02sec-vip.security.ge.com
192.168.21.2 cinohapdbp02sec-priv
/usr/bin/netstat -f inet
Logfile Location
-----------------
cd $ORA_CRS_HOME/log/<node_name>
alert<node_name>.log
evmd/evmd.log
cssd/ocssd.log
crsd/crsd.log
cd racg
ora.<node_name>.vip.log
ora.<node_name>.ons.log
ora.<node_name>.gsd.log
To relocate the crs services
----------------------------
crs_relocate
crs_relocate ora.cinohapdbp02sec.vip
Enable debugging for a particular RAC resource
----------------------------------------------
crsctl debug log res 'ora.cinohapdbp02sec.vip:5'
To disable, set the level back to 0:
crsctl debug log res 'ora.cinohapdbp02sec.vip:0'
cinohapdbp02sec:/export/users/crslab1 $ crsctl help
Usage: crsctl check crs - checks the viability of the Oracle Clusterware
crsctl check cssd - checks the viability of Cluster Synchronization Services
crsctl check crsd - checks the viability of Cluster Ready Services
crsctl check evmd - checks the viability of Event Manager
crsctl check cluster [-node <nodename>] - checks the viability of CSS across nodes
crsctl set css <parameter> <value> - sets a parameter override
crsctl get css <parameter> - gets the value of a Cluster Synchronization Services parameter
crsctl unset css <parameter> - sets the Cluster Synchronization Services parameter to its default
crsctl query css votedisk - lists the voting disks used by Cluster Synchronization Services
crsctl add css votedisk <path> - adds a new voting disk
crsctl delete css votedisk <path> - removes a voting disk
crsctl enable crs - enables startup for all Oracle Clusterware daemons
crsctl disable crs - disables startup for all Oracle Clusterware daemons
crsctl start crs [-wait] - starts all Oracle Clusterware daemons
crsctl stop crs [-wait] - stops all Oracle Clusterware daemons. Stops Oracle Clusterware managed resources in case of cluster.
crsctl start resources - starts Oracle Clusterware managed resources
crsctl stop resources - stops Oracle Clusterware managed resources
crsctl debug statedump css - dumps state info for Cluster Synchronization Services objects
crsctl debug statedump crs - dumps state info for Cluster Ready Services objects
crsctl debug statedump evm - dumps state info for Event Manager objects
crsctl debug log css [module:level] {,module:level} ... - turns on debugging for Cluster Synchronization Services
crsctl debug log crs [module:level] {,module:level} ... - turns on debugging for Cluster Ready Services
crsctl debug log evm [module:level] {,module:level} ... - turns on debugging for Event Manager
crsctl debug log res [resname:level] ... - turns on debugging for resources
crsctl debug trace css [module:level] {,module:level} ... - turns on debugging for Cluster Synchronization Services
crsctl debug trace crs [module:level] {,module:level} ... - turns on debugging for Cluster Ready Services
crsctl debug trace evm [module:level] {,module:level} ... - turns on debugging for Event Manager
crsctl query crs softwareversion [<nodename>] - lists the version of Oracle Clusterware software installed
crsctl query crs activeversion - lists the Oracle Clusterware operating version
crsctl lsmodules css - lists the Cluster Synchronization Services modules that can be used for debugging
crsctl lsmodules crs - lists the Cluster Ready Services modules that can be used for debugging
crsctl lsmodules evm - lists the Event Manager modules that can be used for debugging
If necessary any of these commands can be run with additional tracing by adding a 'trace'
argument at the very front. Example: crsctl trace check css
Manually checking/starting a VIP with racgvip
---------------------------------------------
For node 1:
export _USR_ORA_VIP=3.112.45.205
export _USR_ORA_NETMASK=255.255.252.0
export _USR_ORA_IF=ce2
export _CAA_NAME=cinohapdbp01sec
For node 2:
export _USR_ORA_VIP=3.112.45.206
export _USR_ORA_NETMASK=255.255.252.0
export _USR_ORA_IF=ce2
export _CAA_NAME=cinohapdbp02sec
cinohapdbp02sec # /crslab1/oracle/product/10.2.0/crslab1/bin/racgvip check
cinohapdbp02sec # /crslab1/oracle/product/10.2.0/crslab1/bin/racgvip start
checking the cluster configuration
===================================
cinohapdbp02sec:/crslab1/oracle/product/10.2.0/crslab1/log/cinohapdbp02sec/evmd $ srvctl config nodeapps -n cinohapdbp02sec
VIP exists.: /cinohapdbp02sec-vip/3.112.45.206/255.255.252.0/ce2
GSD exists.
ONS daemon exists.
Listener does not exist.
cinohapdbp02sec:/crslab1/oracle/product/10.2.0/crslab1/log/cinohapdbp02sec/evmd $
Doc ID: 283107.1
-----------------
bash-3.00$ ifconfig -a
lo0: flags=2001000849 mtu 8232 index 1
inet 127.0.0.1 netmask ffffff00
ce0: flags=19040843 mtu 1500 index 2
inet 3.112.45.207 netmask fffffc00 broadcast 3.112.47.255
groupname IPMP0
ce2: flags=29040843 mtu 1500 index 3
inet 3.112.45.208 netmask fffffc00 broadcast 3.112.47.255
groupname IPMP0
ce2:1: flags=21000843 mtu 1500 index 3
inet 3.112.45.199 netmask fffffc00 broadcast 3.112.47.255
ce2:2: flags=21000843 mtu 1500 index 3
inet 3.112.45.201 netmask fffffc00 broadcast 3.112.47.255
ce2:3: flags=21040843 mtu 1500 index 3
inet 3.112.45.205 netmask fffffc00 broadcast 3.112.47.255
ce3: flags=1000843 mtu 1500 index 7
inet 192.168.22.1 netmask ffffff00 broadcast 192.168.22.255
ce3:1: flags=1000843 mtu 1500 index 7
inet 192.168.21.1 netmask ffffff00 broadcast 192.168.21.255
ce5: flags=1000843 mtu 1500 index 4
inet 172.17.7.185 netmask ffffc000 broadcast 172.17.63.255
ce7: flags=1000843 mtu 1500 index 8
inet 192.168.24.1 netmask ffffff00 broadcast 192.168.24.255
ce7:1: flags=1000843 mtu 1500 index 8
inet 192.168.23.1 netmask ffffff00 broadcast 192.168.23.255
ce7:2: flags=1000843 mtu 1500 index 8
inet 192.168.25.1 netmask ffffff00 broadcast 192.168.25.255
ce9: flags=1000843 mtu 1500 index 5
inet 3.24.138.216 netmask fffffe00 broadcast 3.24.139.255
bash-3.00$ hostname
cinohapdbp01sec
cinohapdbp02sec # ifconfig -a
lo0: flags=2001000849 mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
ce0: flags=9040843 mtu 1500 index 2
inet 3.112.45.210 netmask fffffc00 broadcast 3.112.47.255
groupname IPMP0
ether 0:14:4f:74:53:64
ce0:1: flags=1000843 mtu 1500 index 2
inet 3.112.45.202 netmask fffffc00 broadcast 3.112.47.255
ce0:2: flags=1040843 mtu 1500 index 2
inet 3.112.45.206 netmask fffffc00 broadcast 3.112.47.255
ce2: flags=69040843 mtu 1500 index 3
inet 3.112.45.211 netmask fffffc00 broadcast 3.112.47.255
groupname IPMP0
ether 0:14:4f:1f:3d:f8
ce3: flags=1000843 mtu 1500 index 6
inet 192.168.22.2 netmask ffffff00 broadcast 192.168.22.255
ether 0:14:4f:1f:3d:f9
ce3:1: flags=1000843 mtu 1500 index 6
inet 192.168.21.2 netmask ffffff00 broadcast 192.168.21.255
ce5: flags=1000843 mtu 1500 index 4
inet 172.17.7.186 netmask ffffc000 broadcast 172.17.63.255
ether 0:14:4f:1f:3d:fb
ce7: flags=1000843 mtu 1500 index 8
inet 192.168.23.2 netmask ffffff00 broadcast 192.168.23.255
ether 0:14:4f:1f:1c:9d
ce7:1: flags=1000843 mtu 1500 index 8
inet 192.168.25.2 netmask ffffff00 broadcast 192.168.25.255
ce7:2: flags=1000843 mtu 1500 index 8
inet 192.168.24.2 netmask ffffff00 broadcast 192.168.24.255
ce9: flags=1000843 mtu 1500 index 5
inet 3.24.138.214 netmask fffffe00 broadcast 3.24.139.255
ether 0:14:4f:1f:1c:9f
cinohapdbp02sec #
ce2 is the standby interface in the IPMP group.
What is the difference between VIP and IPMP ?
================================================
IPMP can fail over an address to another interface on the same node, but it
cannot fail over to another node.
The Oracle VIP can fail over to another interface on the same node or to
another host in the cluster.
cinohapdbp02sec # srvctl modify nodeapps -n cinohapdbp02sec -A cinohapdbp02sec-vip/255.255.252.0/ce2\|ce0
cinohapdbp02sec # srvctl config nodeapps -n cinohapdbp02sec
VIP exists.: /cinohapdbp02sec-vip/3.112.45.206/255.255.252.0/ce2:ce0
GSD exists.
ONS daemon exists.
Listener does not exist.
cinohapdbp02sec # srvctl modify nodeapps -n cinohapdbp01sec -A cinohapdbp01sec-vip/255.255.252.0/ce2\|ce0
cinohapdbp02sec # srvctl start nodeapps -n cinohapdbp02sec
cinohapdbp02sec # srvctl status nodeapps -n cinohapdbp02sec
VIP is running on node: cinohapdbp02sec
GSD is running on node: cinohapdbp02sec
PRKO-2016 : Error in checking condition of listener on node: cinohapdbp02sec
ONS daemon is running on node: cinohapdbp02sec
cinohapdbp02sec #
srvctl commands to add a new database and instances
---------------------------------------------------
srvctl add database -d lab1 -o /lab1/oracle/product/10.2.0/lab1
srvctl add instance -d lab1 -i lab11 -n cinohapdbp01sec
srvctl add instance -d lab1 -i lab12 -n cinohapdbp02sec
srvctl setenv instance -d lab1 -i lab11 -t TNS_ADMIN=/lab1/oracle/product/10.2.0/lab1/network/admin/lab11_cinohapdbp01sec
srvctl setenv instance -d lab1 -i lab12 -t TNS_ADMIN=/lab1/oracle/product/10.2.0/lab1/network/admin/lab12_cinohapdbp02sec
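To verify what was registered (same names as above):
srvctl config database -d lab1
srvctl getenv instance -d lab1 -i lab11
srvctl status database -d lab1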
3.112.45.199 cinohapdbp01sec cinohapdbp01sec.security.ge.com loghost
3.112.45.205 cinohapdbp01sec-vip cinohapdbp01sec-vip.security.ge.com
192.168.21.1 cinohapdbp01sec-priv
3.112.45.202 cinohapdbp02sec cinohapdbp02sec.security.ge.com loghost
3.112.45.206 cinohapdbp02sec-vip cinohapdbp02sec-vip.security.ge.com
192.168.21.2 cinohapdbp02sec-priv
$ORA_CRS_HOME/bin/oifcfg getif
cinohapdbp01sec $ ./oifcfg getif
ce2 3.112.44.0 global public
ce3 192.168.21.0 global cluster_interconnect
cinohapdbp02sec $ ./oifcfg getif
ce2 3.112.44.0 global public
ce3 192.168.21.0 global cluster_interconnect
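If an interface registration is missing or wrong, setif takes the same interface/subnet:type notation that getif prints (values from this cluster):
oifcfg setif -global ce2/3.112.44.0:public
oifcfg setif -global ce3/192.168.21.0:cluster_interconnect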
From the 10.2.0.1 software, before cluster install:
cd /lab1/oradata/recovery/arch01/lab11/software/10201_cluster/cluvfy
./runcluvfy.sh comp nodecon -n cinohapdbp01sec,cinohapdbp02sec -verbose
./runcluvfy.sh stage -pre crsinst -n all -verbose
After cluster install:
$ORA_CRS_HOME/bin/cluvfy stage -post crsinst -n cinohapdbp01sec,cinohapdbp02sec
From the 11.1.0.6 software:
cd /lab1/oradata/recovery/arch01/lab11/software/11106_software/clusterware
./runcluvfy.sh stage -pre crsinst -n cinohapdbp01sec,cinohapdbp02sec -verbose
Before DB binary install:
./runcluvfy.sh stage -pre dbinst -n cinohapdbp02sec,cinohapdbp01sec -verbose
Before DBCA:
./runcluvfy.sh stage -pre dbcfg -n cinohapdbp02sec,cinohapdbp01sec -d /lab1/oracle/product/11.1.0/lab1 -verbose
(After installation, cluvfy can also be run from $ORA_CRS_HOME/bin.)
TAF
---
Add the following entry to tnsnames.ora on both the client and the database server:
LAB1_TAF =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = cinohapdbp01sec-vip.security.ge.com)(PORT = 1529))
(ADDRESS = (PROTOCOL = TCP)(HOST = cinohapdbp02sec-vip.security.ge.com)(PORT = 1529))
(LOAD_BALANCE = yes)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = LAB1_TAF)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)
)
)
Check the failover Information
===============================
COLUMN instance_name FORMAT a13
COLUMN host_name FORMAT a9
COLUMN failover_method FORMAT a15
COLUMN failed_over FORMAT a11
SELECT
instance_name
, host_name
, NULL AS failover_type
, NULL AS failover_method
, NULL AS failed_over
FROM v$instance
UNION
SELECT
NULL
, NULL
, failover_type
, failover_method
, failed_over
FROM v$session
WHERE username = 'SYSTEM';
Check Veritas cluster file system installation
----------------------------------------------
modinfo | grep vxfs
cinohapdbp01sec # hastatus -summary
-- SYSTEM STATE
-- System State Frozen
A cinohapdbp01sec RUNNING 0
A cinohapdbp02sec RUNNING 0
-- GROUP STATE
-- Group System Probed AutoDisabled State
B clustmon cinohapdbp01sec Y N ONLINE
B clustmon cinohapdbp02sec Y N OFFLINE
B cvm cinohapdbp01sec Y N ONLINE
B cvm cinohapdbp02sec Y N ONLINE
Check Veritas mounts on the system
----------------------------------
bash-3.00$ df -F vxfs
/crslab1/oracle (/dev/vx/dsk/cinohapdbp01secdg/crsprod_oracle): 9530832 blocks 148914 files
/lab1/oradata/system01(/dev/vx/dsk/lab1system_dg/system01):82405744 blocks 1287565 files
/lab1/oradata/recovery/redo04/lab12(/dev/vx/dsk/lab1redo_dg/redo04_2):26761600 blocks 418136 files
/lab1/oradata/recovery/redo04/lab11(/dev/vx/dsk/lab1redo_dg/redo04):26462288 blocks 413470 files
/backup/data (/dev/vx/dsk/bkpdata_dg/bkpdata):459034640 blocks 7172416 files
/lab1/oradata/custom01(/dev/vx/dsk/lab1custom_dg/custom01):47658544 blocks 744638 files
/lab1/oradata/data01(/dev/vx/dsk/lab1data_dg/data01):59864944 blocks 935387 files
/lab1/oradata/recovery/redo01/lab11(/dev/vx/dsk/lab1redo_dg/redo01):26446368 blocks 413216 files
/lab1/oradata/recovery/redo01/lab12(/dev/vx/dsk/lab1redo_dg/redo01_2):26705264 blocks 417243 files
/lab1/oradata/data05(/dev/vx/dsk/lab1data_dg/data05):102242272 blocks 1597506 files
/lab1/oradata/data04(/dev/vx/dsk/lab1data_dg/data04):96421840 blocks 1506561 files
/lab1/oradata/recovery/redo02/lab12(/dev/vx/dsk/lab1redo_dg/redo02_2):26705584 blocks 417271 files
/lab1/oradata/recovery/redo02/lab11(/dev/vx/dsk/lab1redo_dg/redo02):24309104 blocks 379828 files
/lab1/oradata/data02(/dev/vx/dsk/lab1data_dg/data02):81311632 blocks 1270493 files
/lab1/oradata/data03(/dev/vx/dsk/lab1data_dg/data03):90363840 blocks 1411904 files
/lab1/oradata/recovery/redo03/lab11(/dev/vx/dsk/lab1redo_dg/redo03):24426528 blocks 381658 files
/lab1/oradata/recovery/redo03/lab12(/dev/vx/dsk/lab1redo_dg/redo03_2):26761600 blocks 418136 files
/ocrvote (/dev/vx/dsk/ocrvotedg1/ocrvotevol): 1956522 blocks 244564 files
/lab1/oracle (/dev/vx/dsk/lab1bin_dg/oracle):14202576 blocks 221888 files
VCS configuration file: /etc/VRTSvcs/conf/config/main.cf
Online redo log files
--------------------
select l.group#, thread#, member
from v$log l, v$logfile lf
where l.group#=lf.group#;
select THREAD#,GROUPS,INSTANCE,SEQUENCE# from V$thread;
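Each RAC instance uses its own redo thread; when adding an instance, a new thread must be created and enabled. An illustrative example (group numbers, file paths, and sizes are assumptions):
-- create and enable a redo thread for the second instance
ALTER DATABASE ADD LOGFILE THREAD 2
  GROUP 5 ('/lab1/oradata/recovery/redo01/lab12/redo05.log') SIZE 100M,
  GROUP 6 ('/lab1/oradata/recovery/redo02/lab12/redo06.log') SIZE 100M;
ALTER DATABASE ENABLE PUBLIC THREAD 2;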
HAPPY LEARNING!
Ajith Pathiyil