5) Openfiler – NAS/SAN Appliance Download/Install
5.1 Install Openfiler
This section provides a summary of the screens used to install the Openfiler software. For the purpose of this article, we opted to install Openfiler with all default options. The only manual change required was to configure the local network settings.
Once the install has completed, the server will reboot to make sure all required components, services and drivers are started and recognized. After the reboot, any external hard drives (if connected) will be discovered by the Openfiler server.
After downloading and burning the Openfiler ISO image (ISO file) to CD, insert the CD into the network storage server (openfiler1 in this example), power it on, and answer the installation screen prompts as noted below.
5.2 Boot Screen
The first screen is the Openfiler boot screen. At
the boot: prompt, hit [Enter] to start the installation process.
5.3 Media Test
When asked to test the CD media, tab over to [Skip]
and hit [Enter]. If there were any errors, the media burning software would
have warned us. After several seconds, the installer should then detect the
video card, monitor, and mouse. The installer then goes into GUI mode.
5.4 Welcome to Openfiler NSA
At the welcome screen, click [Next] to continue.
5.5 Keyboard Configuration
The next screen prompts you for the Keyboard
settings. Make the appropriate selection for your configuration.
5.6 Disk Partitioning Setup
The next screen asks whether to perform disk
partitioning using "Automatic Partitioning" or "Manual
Partitioning with Disk Druid". Although the official Openfiler documentation suggests using Manual Partitioning, we opted for "Automatic Partitioning" given the simplicity of our example configuration.
Select [Automatically partition] and click [Next] to continue.
5.7 Automatic Partitioning
If there was a previous installation of Linux on this machine, the next screen will ask if you want to "remove" or "keep" old partitions. Select the option to [Remove all partitions on this system]. For our example configuration, we selected ONLY the 500GB SATA internal hard drive [sda] for the operating system and Openfiler application installation. We de-selected the 73GB SCSI internal hard drive since this disk will be used exclusively in the next section to create a single "Volume Group" for all iSCSI-based shared disk storage requirements for Oracle RAC 10g.
We also kept the checkbox [Review (and modify if needed) the partitions created] selected. Click [Next] to continue.
You will then be prompted with a dialog window
asking if you really want to remove all partitions. Click [Yes] to acknowledge
this warning.
5.8 Partitioning
The installer will then allow you to view (and
modify if needed) the disk partitions it automatically chose for hard disks
selected in the previous screen. In almost all cases, the installer will choose
100MB for /boot, an adequate amount of swap, and the rest going to the root (/)
partition for that disk (or disks). In this example, we are satisfied with the installer's recommended partitioning for /dev/sda.
The installer will also show any other internal hard disks it discovered. For our example configuration, the installer found the 73GB SCSI internal hard drive as /dev/sdb. For now, we will "Delete" any and all partitions on this drive (there was only one, /dev/sdb1). In the
next section, we will create the required partition for this particular hard
disk.
5.9 Network Configuration
We made sure to install all NIC interfaces (cards)
in the network storage server before starting the Openfiler installation. This
screen should have successfully detected each of the network devices.
First, make sure that each of the network devices is set to [Active on boot]. The installer may choose not to activate eth1 by default.
Second, [Edit] both eth0 and eth1 as follows. You
may choose to use different IP addresses for both eth0 and eth1 and that is OK.
You must, however, configure eth1 (the storage network) to be on the same
subnet you configured for eth1 on linux1 and linux2:
eth0:
- Check OFF the option to [Configure using DHCP]
- Leave the [Activate on boot] checked ON
- IP Address: 192.168.1.195
- Netmask: 255.255.255.0
eth1:
- Check OFF the option to [Configure using DHCP]
- Leave the [Activate on boot] checked ON
- IP Address: 192.168.2.195
- Netmask: 255.255.255.0
Continue by setting your hostname manually. We used
a hostname of "openfiler1". Finish this dialog off by supplying your
gateway and DNS servers.
Time Zone Selection
The next screen allows you to configure your time
zone information. Make the appropriate selection for your location.
Set Root Password
Select a root password and click [Next] to
continue.
About to Install
This screen is basically a confirmation screen.
Click [Next] to start the installation.
Congratulations
And that's it. You have successfully installed
Openfiler on the network storage server. The installer will eject the CD from
the CD-ROM drive. Take out the CD and click [Reboot] to reboot the system.
If everything was successful after the reboot, you
should now be presented with a text login screen and the URL(s) to use for
administering the Openfiler server.
5.10 Modify /etc/hosts File on Openfiler Server
Although not mandatory, we typically copy the
contents of the /etc/hosts file from one of the Oracle RAC nodes to the new
Openfiler server. This allows convenient name resolution when testing the
network for the cluster.
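A minimal sketch of this copy (assuming root SSH access between the nodes and the hostnames used in this article):
# scp linux1:/etc/hosts /etc/hosts
# cat /etc/hosts    # verify the cluster node entries are present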
5.11 Configure iSCSI Volumes using Openfiler
Openfiler administration is performed using the Openfiler Storage Control Center, a browser-based tool over an https connection on port 446. For example:
https://openfiler1.linux5.info:446/
From the Openfiler Storage Control Center home page, log in as an administrator. The default administration login credentials for Openfiler are:
- Username: openfiler
- Password: password
The first page the administrator sees is the [Status] / [System Information] screen.
To use Openfiler as an iSCSI storage server, we have to perform six major tasks: set up iSCSI services, configure network access, identify and partition the physical storage, create a new volume group, create all logical volumes, and finally, create new iSCSI targets for each of the logical volumes.
5.12 Services
To control services, we use the Openfiler Storage Control Center and navigate to [Services] / [Manage Services]:
Enable iSCSI Openfiler Service
To enable the iSCSI service, click on the 'Enable' link under the 'iSCSI target server' service name. After that, the 'iSCSI target server' status should change to 'Enabled'.
The ietd program implements the user-level part of the iSCSI Enterprise Target software for building an iSCSI storage system on Linux. With the iSCSI target enabled, we should be able to SSH into the Openfiler server and see the iscsi-target service running:
[root@openfiler1 ~]# service iscsi-target status
ietd (pid 3839) is running...
5.13 Network Access Configuration
The next step is to configure network access in Openfiler to identify both Oracle RAC nodes (linux1 and linux2) that will need to access the iSCSI volumes through the storage (private) network. Note that iSCSI volumes will be created later on in this section. Also note that this step does not actually grant the appropriate permissions to the iSCSI volumes required by both Oracle RAC nodes. That will be accomplished later in this section by updating the ACL for each new logical volume.
As in the previous section, configuring network access is accomplished using the Openfiler Storage Control Center by navigating to [System] / [Network Setup]. The "Network Access Configuration" section (at the bottom of the page) allows an administrator to set up networks and/or hosts that will be allowed to access resources exported by the Openfiler appliance. For the purpose of this article, we will want to add both Oracle RAC nodes individually rather than allowing the entire 192.168.2.0 network to have access to Openfiler resources.
When entering each of the Oracle RAC nodes, note that the 'Name' field is just a logical name used for reference only. As a convention when entering nodes, we use the node name defined for that IP address. Next, when entering the actual node in the 'Network/Host' field, always use its IP address even though its host name may already be defined in your /etc/hosts file or DNS. Lastly, when entering actual hosts in our Class C network, use a subnet mask of 255.255.255.255.
It is important to remember that you will be entering the IP address of the private network (eth1) for each of the RAC nodes in the cluster.
The following image shows the results of adding both Oracle RAC nodes:
Figure 7:
Configure Openfiler Network Access for Oracle RAC Nodes
5.14 Physical Storage
In this section, we will be creating the five iSCSI volumes to be used as shared storage by both of the Oracle RAC nodes in the cluster. This involves multiple steps that will be performed on the internal 73GB 15K SCSI hard disk connected to the Openfiler server.
Storage devices like internal IDE/SATA/SCSI/SAS disks, storage arrays, external USB drives, external FireWire drives, or ANY other storage can be connected to the Openfiler server and served to the clients. Once these devices are discovered at the OS level, the Openfiler Storage Control Center can be used to set up and manage all of that storage.
In our case, we have a 73GB internal SCSI hard drive for our shared storage needs. On the Openfiler server this drive is seen as /dev/sdb (MAXTOR ATLAS15K2_73SCA). To see this and to start the process of creating our iSCSI volumes, navigate to [Volumes] / [Block Devices] from the Openfiler Storage Control Center:
Openfiler Physical Storage - Block Device Management
5.15 Partitioning the Physical Disk
The first step we will perform is to create a single primary partition on the /dev/sdb internal hard disk. By clicking on the /dev/sdb link, we are presented with the options to 'Edit' or 'Create' a partition. Since we will be creating a single primary partition that spans the entire disk, most of the options can be left at their default setting; the only modification is to change the 'Partition Type' from 'Extended partition' to 'Physical volume'. Here are the values we specified to create the primary partition on /dev/sdb:
Mode: Primary
Partition Type: Physical volume
Starting Cylinder: 1
Ending Cylinder: 8924
The size now shows 68.36 GB. To accept, we click on the "Create" button. This results in a new partition (/dev/sdb1) on our internal hard disk:
Partition the Physical Volume
5.16 Volume Group Management
The next step is to create a Volume Group. We will be creating a single volume group named rac1 that contains the newly created primary partition.
From the Openfiler Storage Control Center, navigate to [Volumes] / [Volume Groups]. There we would see any existing volume groups, or none as in our case. Using the Volume Group Management screen, enter the name of the new volume group (rac1), click on the checkbox in front of /dev/sdb1 to select that partition, and finally click on the 'Add volume group' button.
After that we are presented with the list that now shows our newly created volume group named "rac1":
5.17 Logical Volumes
We can now create the five logical volumes in the newly created volume group (rac1).
From the Openfiler Storage Control Center, navigate to [Volumes] / [Add Volume]. There we will see the newly created volume group (rac1) along with its block storage statistics. Also available at the bottom of this screen is the option to create a new volume in the selected volume group - (Create a volume in "rac1"). Use this screen to create the following five logical (iSCSI) volumes. After creating each logical volume, the application will point you to the "Manage Volumes" screen. You will then need to click back to the "Add Volume" tab to create the next logical volume until all five iSCSI volumes are created:
iSCSI / Logical Volumes

Volume Name   Volume Description           Required Space (MB)   Filesystem Type
racdb-crs     racdb - Oracle Clusterware   2,048                 iSCSI
racdb-asm1    racdb - ASM Volume 1         16,984                iSCSI
racdb-asm2    racdb - ASM Volume 2         16,984                iSCSI
racdb-asm3    racdb - ASM Volume 3         16,984                iSCSI
racdb-asm4    racdb - ASM Volume 4         16,984                iSCSI
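For reference, the same five logical volumes could also be created from a shell on the Openfiler server with standard LVM2 commands. This is only a sketch of the CLI equivalent (sizes mirror the table above); it is not required if you use the web interface:
# lvcreate -L 2048M  -n racdb-crs  rac1    # Oracle Clusterware volume
# lvcreate -L 16984M -n racdb-asm1 rac1    # ASM volume 1
# lvcreate -L 16984M -n racdb-asm2 rac1    # ASM volume 2
# lvcreate -L 16984M -n racdb-asm3 rac1    # ASM volume 3
# lvcreate -L 16984M -n racdb-asm4 rac1    # ASM volume 4
# lvs rac1                                 # verify the logical volumes in volume group rac1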
In effect we have created five iSCSI disks that can now be presented to iSCSI clients (linux1 and linux2) on the network. The "Manage Volumes" screen should look as follows:
5.18 iSCSI Targets
At this point we have five iSCSI
logical volumes. Before an iSCSI client can have access to them, however, an
iSCSI target will need to be created for each of these five volumes. Each iSCSI
logical volume will be mapped to a specific iSCSI target and the appropriate
network access permissions to that target will be granted to both Oracle RAC
nodes. For the purpose of this article, there will be a one-to-one mapping
between an iSCSI logical volume and an iSCSI target.
There are three steps involved in creating and configuring an iSCSI target: create a unique Target IQN (basically, the universal name for the new iSCSI target), map one of the iSCSI logical volumes created in the previous section to the newly created iSCSI target, and finally, grant both of the Oracle RAC nodes access to the new iSCSI target. Please note that this process will need to be performed for each of the five iSCSI logical volumes created in the previous section.
For the purpose of this article, the
following table lists the new iSCSI target names (the Target IQN) and which
iSCSI logical volume it will be mapped to:
iSCSI Target / Logical Volume Mappings

Target IQN                             iSCSI Volume Name   Volume Description
iqn.2006-01.com.openfiler:racdb.crs    racdb-crs           racdb - Oracle Clusterware
iqn.2006-01.com.openfiler:racdb.asm1   racdb-asm1          racdb - ASM Volume 1
iqn.2006-01.com.openfiler:racdb.asm2   racdb-asm2          racdb - ASM Volume 2
iqn.2006-01.com.openfiler:racdb.asm3   racdb-asm3          racdb - ASM Volume 3
iqn.2006-01.com.openfiler:racdb.asm4   racdb-asm4          racdb - ASM Volume 4
We are now ready to create the five
new iSCSI targets - one for each of the iSCSI logical volumes. The example
below illustrates the three steps required to create a new iSCSI target by
creating the Oracle Clusterware / racdb-crs target (iqn.2006-01.com.openfiler:racdb.crs).
This three step process will need to be repeated for each of the five new iSCSI
targets listed in the table above.
5.19 Create New Target IQN
From the Openfiler Storage Control
Center, navigate to [Volumes] / [iSCSI Targets]. Verify the grey sub-tab
"Target Configuration" is selected. This page allows you to create a
new iSCSI target. A default value is automatically generated for the name of the
new iSCSI target (better known as the "Target IQN"). An example
Target IQN is "iqn.2006-01.com.openfiler:tsn.ae4683b67fd3":
We prefer to replace the last segment of the default Target IQN with something more meaningful. For the first iSCSI target (Oracle Clusterware / racdb-crs), we will modify the default Target IQN by replacing the string "tsn.ae4683b67fd3" with "racdb.crs" as shown in the figure below:
Create New
iSCSI Target : Replace Default Target IQN
Once you are satisfied with the new Target IQN, click the "Add" button. This will create a new iSCSI target and then bring up a page that allows you to modify a number of settings for the new iSCSI target. For the purpose of this article, none of the settings for the new iSCSI target need to be changed.
5.20 LUN Mapping
After creating the new iSCSI target, the next step is to map the appropriate iSCSI logical volume to it. Under the "Target Configuration" sub-tab, verify the correct iSCSI target is selected in the section "Select iSCSI Target". If not, use the pull-down menu to select the correct iSCSI target and hit the "Change" button.
Next, click on the grey sub-tab named "LUN Mapping" (next to the "Target Configuration" sub-tab). Locate the appropriate iSCSI logical volume (/dev/rac1/racdb-crs in this case) and click the "Map" button. You do not need to change any settings on this page. Your screen should look similar to the figure below after clicking the "Map" button for volume /dev/rac1/racdb-crs:
Create New
iSCSI Target : Map LUN
5.21 Network ACL
Before an iSCSI client can have access to the newly created iSCSI target, it needs to be granted the appropriate permissions. A while back, we configured network access in Openfiler for two hosts (the Oracle RAC nodes). These are the two nodes that will need to access the new iSCSI targets through the storage (private) network. We now need to grant both of the Oracle RAC nodes access to the new iSCSI target.
Click on the grey sub-tab named "Network ACL" (next to the "LUN Mapping" sub-tab). For the current iSCSI target, change the "Access" for both hosts from 'Deny' to 'Allow' and click the 'Update' button:
Create New
iSCSI Target : Update Network ACL
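After the ACL update (and again once these steps have been repeated for the remaining volumes), an optional sanity check is to inspect the target configuration directly on the storage server. This is a sketch that assumes shell access to openfiler1 and the iSCSI Enterprise Target (IET) implementation used by this Openfiler release:
# cat /proc/net/iet/volume     # each Target IQN with the LUN/logical volume mapped to it
# cat /proc/net/iet/session    # initiators currently logged in to each target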
Go back to the Create New Target IQN section and perform these three tasks for the remaining four iSCSI logical volumes, substituting the values found in the "iSCSI Target / Logical Volume Mappings" table.
5.22 Configure iSCSI Volumes on Oracle RAC Nodes
An iSCSI client can be any system (Linux, Unix, MS Windows, Apple Mac, etc.) for which iSCSI support (a driver) is available. In our case, the clients are two Linux servers, linux1 and linux2, running CentOS 5.3.
In this section we will be configuring the iSCSI software initiator on both of the Oracle RAC nodes. CentOS 5.3 includes the Open-iSCSI iSCSI software initiator, which can be found in the iscsi-initiator-utils RPM. This is a change from previous versions of CentOS (4.x), which included the Linux iscsi-sfnet software driver developed as part of the Linux-iSCSI Project. All iSCSI management tasks like discovery and logins will use the command-line interface iscsiadm, which is included with Open-iSCSI.
The iSCSI software initiator will be configured to automatically log in to the network storage server (openfiler1) and discover the iSCSI volumes created in the previous section. We will then go through the steps of creating persistent local SCSI device names (i.e. /dev/iscsi/asm1) for each of the iSCSI target names discovered, using udev. Having a consistent local SCSI device name and knowing which iSCSI target it maps to is required in order to know which volume (device) is to be used for OCFS2 and which volumes belong to ASM. Before we can do any of this, however, we must first install the iSCSI initiator software!
5.23 Installing the iSCSI (initiator) service
With CentOS 5.3, the Open-iSCSI iSCSI software initiator does not get installed by default. The software is included in the iscsi-initiator-utils package which can be found on CD #1. To determine if this package is installed (which in most cases, it will not be), perform the following on both Oracle RAC nodes:
# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep iscsi-initiator-utils
If the iscsi-initiator-utils package is not installed, load CD #1 into each of the Oracle RAC nodes and perform the following:
# mount -r /dev/cdrom /media/cdrom
# cd /media/cdrom/CentOS
# rpm -Uvh iscsi-initiator-utils-*
# cd /
# eject
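Alternatively (assuming the Oracle RAC nodes can reach a CentOS package repository over the network), the same package can be installed with yum instead of mounting the CD:
# yum install -y iscsi-initiator-utils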
5.24 Configure the iSCSI (initiator) service
After verifying that the iscsi-initiator-utils package is installed on both Oracle RAC nodes, start the iscsid service and enable it to automatically start when the system boots. We will also configure the iscsi service to start automatically, which logs into the iSCSI targets needed at system startup.
# service iscsid start
Turning off network shutdown. Starting iSCSI daemon: [ OK ]
[ OK ]
# chkconfig iscsid on
# chkconfig iscsi on
Now that the iSCSI service is started, use the iscsiadm command-line interface to discover all available targets on the network storage server. This should be performed on both Oracle RAC nodes to verify the configuration is functioning properly:
# iscsiadm -m discovery -t sendtargets -p openfiler1-priv
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm3
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm4
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.crs
5.25 Manually Log In to iSCSI Targets
At this point the iSCSI initiator service has been started and each of the Oracle RAC nodes was able to discover the available targets from the network storage server. The next step is to manually log in to each of the available targets, which can be done using the iscsiadm command-line interface. This needs to be run on both Oracle RAC nodes. Note that we had to specify the IP address and not the host name of the network storage server (openfiler1-priv) - we believe this is required given the discovery (above) shows the targets using the IP address.
# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.asm1 -p 192.168.2.195 -l
# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.asm2 -p 192.168.2.195 -l
# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.asm3 -p 192.168.2.195 -l
# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.asm4 -p 192.168.2.195 -l
# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs -p 192.168.2.195 -l
5.26 Configure Automatic Log In
The next step is to ensure the
client will automatically log in to each of the targets listed above when the
machine is booted (or the iSCSI initiator service is started/restarted). As
with the manual log in process described above, perform the following on both
Oracle RAC nodes:
# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.asm1 -p 192.168.2.195 --op update -n node.startup -v automatic
# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.asm2 -p 192.168.2.195 --op update -n node.startup -v automatic
# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.asm3 -p 192.168.2.195 --op update -n node.startup -v automatic
# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.asm4 -p 192.168.2.195 --op update -n node.startup -v automatic
# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs -p 192.168.2.195 --op update -n node.startup -v automatic
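To confirm the change took effect (an optional check of our own, not part of the original steps), dump the node record for any of the targets and look for the startup setting:
# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs -p 192.168.2.195 | grep node.startup
node.startup = automatic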
5.27 Create Persistent Local SCSI Device Names
In this section, we will go through the steps to create persistent local SCSI device names for each of the iSCSI target names. This will be done using udev. Having a consistent local SCSI device name and knowing which iSCSI target it maps to is required in order to know which volume (device) is to be used for OCFS2 and which volumes belong to ASM.
When either of the Oracle RAC nodes boots and the iSCSI initiator service is started, it will automatically log in to each of the configured targets in a random fashion and map them to the next available local SCSI device name. For example, the target iqn.2006-01.com.openfiler:racdb.asm1 may get mapped to /dev/sda. We can actually determine the current mappings for all targets by looking at the /dev/disk/by-path directory:
# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.asm1-lun-0 -> ../../sda
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.asm2-lun-0 -> ../../sdb
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.asm3-lun-0 -> ../../sdc
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.asm4-lun-0 -> ../../sdd
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs-lun-0 -> ../../sde
Using the output from the above
listing, we can establish the following current mappings:
Current iSCSI Target Name to Local SCSI Device Name Mappings

iSCSI Target Name                      SCSI Device Name
iqn.2006-01.com.openfiler:racdb.asm1   /dev/sda
iqn.2006-01.com.openfiler:racdb.asm2   /dev/sdb
iqn.2006-01.com.openfiler:racdb.asm3   /dev/sdc
iqn.2006-01.com.openfiler:racdb.asm4   /dev/sdd
iqn.2006-01.com.openfiler:racdb.crs    /dev/sde
This mapping, however, may change every time the Oracle RAC node is rebooted. For example, after a reboot it may be determined that the iSCSI target iqn.2006-01.com.openfiler:racdb.asm1 gets mapped to the local SCSI device /dev/sdd. It is therefore impractical to rely on the local SCSI device name given there is no way to predict the iSCSI target mappings after a reboot.
What we need is a consistent device name we can reference (i.e. /dev/iscsi/asm1) that will always point to the appropriate iSCSI target through reboots. This is where the Dynamic Device Management tool named udev comes in. udev provides a dynamic device directory using symbolic links that point to the actual device using a configurable set of rules. When udev receives a device event (for example, the client logging in to an iSCSI target), it matches its configured rules against the available device attributes provided in sysfs to identify the device. Rules that match may provide additional device information or specify a device node name and multiple symlink names and instruct udev to run additional programs (a SHELL script for example) as part of the device event handling process.
The first step is to create a new rules file. The file will be named /etc/udev/rules.d/55-openiscsi.rules and contain only a single line of name=value pairs used to match the events we are interested in. It will also define a call-out SHELL script (/etc/udev/scripts/iscsidev.sh) to handle the event.
Create the following rules file /etc/udev/rules.d/55-openiscsi.rules on both Oracle RAC nodes:
# /etc/udev/rules.d/55-openiscsi.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM="/etc/udev/scripts/iscsidev.sh %b", SYMLINK+="iscsi/%c/part%n"
We now need to create the UNIX SHELL script that will be called when this event is received. Let's first create a separate directory on both Oracle RAC nodes where udev scripts can be stored:
# mkdir -p /etc/udev/scripts
Next, create the UNIX shell script /etc/udev/scripts/iscsidev.sh on both Oracle RAC nodes:
#!/bin/sh
# FILE: /etc/udev/scripts/iscsidev.sh

BUS=${1}
HOST=${BUS%%:*}

[ -e /sys/class/iscsi_host ] || exit 1

file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"
target_name=$(cat ${file})

# This is not an open-iscsi drive
if [ -z "${target_name}" ]; then
   exit 1
fi

# Check if QNAP drive
check_qnap_target_name=${target_name%%:*}
if [ "${check_qnap_target_name}" = "iqn.2004-04.com.qnap" ]; then
    target_name=`echo "${target_name%.*}"`
fi

echo "${target_name##*.}"
After creating the UNIX SHELL script, make it executable:
# chmod 755 /etc/udev/scripts/iscsidev.sh
Now that udev is configured, restart
the iSCSI service on both Oracle RAC nodes:
# service iscsi stop
Logging out of session [sid: 1, target: iqn.2006-01.com.openfiler:racdb.asm1, portal: 192.168.2.195,3260]
Logging out of session [sid: 2, target: iqn.2006-01.com.openfiler:racdb.asm2, portal: 192.168.2.195,3260]
Logging out of session [sid: 3, target: iqn.2006-01.com.openfiler:racdb.asm3, portal: 192.168.2.195,3260]
Logging out of session [sid: 4, target: iqn.2006-01.com.openfiler:racdb.asm4, portal: 192.168.2.195,3260]
Logging out of session [sid: 5, target: iqn.2006-01.com.openfiler:racdb.crs, portal: 192.168.2.195,3260]
Logout of [sid: 1, target: iqn.2006-01.com.openfiler:racdb.asm1, portal: 192.168.2.195,3260]: successful
Logout of [sid: 2, target: iqn.2006-01.com.openfiler:racdb.asm2, portal: 192.168.2.195,3260]: successful
Logout of [sid: 3, target: iqn.2006-01.com.openfiler:racdb.asm3, portal: 192.168.2.195,3260]: successful
Logout of [sid: 4, target: iqn.2006-01.com.openfiler:racdb.asm4, portal: 192.168.2.195,3260]: successful
Logout of [sid: 5, target: iqn.2006-01.com.openfiler:racdb.crs, portal: 192.168.2.195,3260]: successful
Stopping iSCSI daemon: [ OK ]

# service iscsi start
iscsid dead but pid file exists
Turning off network shutdown. Starting iSCSI daemon: [ OK ]
[ OK ]
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.asm2, portal: 192.168.2.195,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.asm1, portal: 192.168.2.195,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs, portal: 192.168.2.195,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.asm4, portal: 192.168.2.195,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.asm3, portal: 192.168.2.195,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.asm2, portal: 192.168.2.195,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.asm1, portal: 192.168.2.195,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs, portal: 192.168.2.195,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.asm4, portal: 192.168.2.195,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.asm3, portal: 192.168.2.195,3260]: successful
[ OK ]
Let's see if our hard work paid off:
# ls -l /dev/iscsi/*
/dev/iscsi/asm1:
total 0
lrwxrwxrwx 1 root root 9 Aug 16 00:49 part -> ../../sda

/dev/iscsi/asm2:
total 0
lrwxrwxrwx 1 root root 9 Aug 16 00:49 part -> ../../sdc

/dev/iscsi/asm3:
total 0
lrwxrwxrwx 1 root root 9 Aug 16 00:49 part -> ../../sdb

/dev/iscsi/asm4:
total 0
lrwxrwxrwx 1 root root 9 Aug 16 00:49 part -> ../../sde

/dev/iscsi/crs:
total 0
lrwxrwxrwx 1 root root 9 Aug 16 00:49 part -> ../../sdd
The listing above shows that udev did the job it was supposed to do! We now have a consistent set of local device names that can be used to reference the iSCSI targets. For example, we can safely assume that the device name /dev/iscsi/asm1/part will always reference the iSCSI target iqn.2006-01.com.openfiler:racdb.asm1. We now have a consistent iSCSI target name to local device name mapping, which is described in the following table:
iSCSI Target Name to Local Device Name Mappings

iSCSI Target Name                      Local Device Name
iqn.2006-01.com.openfiler:racdb.asm1   /dev/iscsi/asm1/part
iqn.2006-01.com.openfiler:racdb.asm2   /dev/iscsi/asm2/part
iqn.2006-01.com.openfiler:racdb.asm3   /dev/iscsi/asm3/part
iqn.2006-01.com.openfiler:racdb.asm4   /dev/iscsi/asm4/part
iqn.2006-01.com.openfiler:racdb.crs    /dev/iscsi/crs/part
5.28 Create Partitions on iSCSI Volumes
We now need to create a single primary partition on each of the iSCSI volumes that spans the entire size of the volume. As mentioned earlier in this article, we will be using Oracle's Cluster File System, Release 2 (OCFS2) to store the two files to be shared for Oracle's Clusterware software. We will then be using Automatic Storage Management (ASM) to create four ASM volumes; two for all physical database files (data/index files, online redo log files, and control files) and two for the Flash Recovery Area (RMAN backups and archived redo log files).
The following table lists the five
iSCSI volumes and what file systems they will support:
Oracle Shared Drive Configuration

File System Type   iSCSI Target (short) Name   Size     Mount Point   ASM Diskgroup Name     File Types
OCFS2              crs                         2GB      /u02          -                      Oracle Cluster Registry (OCR) File - (~250 MB), Voting Disk - (~20MB)
ASM                asm1                        17.8GB   ORCL:VOL1     +RACDB_DATA1           Oracle Database Files
ASM                asm2                        17.8GB   ORCL:VOL2     +RACDB_DATA1           Oracle Database Files
ASM                asm3                        17.8GB   ORCL:VOL3     +FLASH_RECOVERY_AREA   Oracle Flash Recovery Area
ASM                asm4                        17.8GB   ORCL:VOL4     +FLASH_RECOVERY_AREA   Oracle Flash Recovery Area
Total                                          73.2GB
As shown in the table above, we will need to create a single Linux primary partition on each of the five iSCSI volumes. The fdisk command is used in Linux for creating (and removing) partitions. For each of the five iSCSI volumes, you can use the default values when creating the primary partition as the default action is to use the entire disk. You can safely ignore any warnings that may indicate the device does not contain a valid DOS partition (or Sun, SGI or OSF disklabel).
In this example, we will be running the fdisk command from linux1 to create a single primary partition on each iSCSI target using the local device names created by udev in the previous section:
- /dev/iscsi/asm1/part
- /dev/iscsi/asm2/part
- /dev/iscsi/asm3/part
- /dev/iscsi/asm4/part
- /dev/iscsi/crs/part
# ---------------------------------------
# fdisk /dev/iscsi/asm1/part
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-16992, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-16992, default 16992): 16992

Command (m for help): p

Disk /dev/iscsi/asm1/part: 17.8 GB, 17817403392 bytes
64 heads, 32 sectors/track, 16992 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

                Device Boot      Start         End      Blocks   Id  System
/dev/iscsi/asm1/part1                1       16992    17399792   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

# ---------------------------------------
# fdisk /dev/iscsi/asm2/part
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-16992, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-16992, default 16992): 16992

Command (m for help): p

Disk /dev/iscsi/asm2/part: 17.8 GB, 17817403392 bytes
64 heads, 32 sectors/track, 16992 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

                Device Boot      Start         End      Blocks   Id  System
/dev/iscsi/asm2/part1                1       16992    17399792   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

# ---------------------------------------
# fdisk /dev/iscsi/asm3/part
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-16992, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-16992, default 16992): 16992

Command (m for help): p

Disk /dev/iscsi/asm3/part: 17.8 GB, 17817403392 bytes
64 heads, 32 sectors/track, 16992 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

                Device Boot      Start         End      Blocks   Id  System
/dev/iscsi/asm3/part1                1       16992    17399792   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

# ---------------------------------------
# fdisk /dev/iscsi/asm4/part
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-16960, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-16960, default 16960): 16960

Command (m for help): p

Disk /dev/iscsi/asm4/part: 17.7 GB, 17783848960 bytes
64 heads, 32 sectors/track, 16960 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

                Device Boot      Start         End      Blocks   Id  System
/dev/iscsi/asm4/part1                1       16960    17367024   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

# ---------------------------------------
# fdisk /dev/iscsi/crs/part
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1009, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-1009, default 1009): 1009

Command (m for help): p

Disk /dev/iscsi/crs/part: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes

               Device Boot      Start         End      Blocks   Id  System
/dev/iscsi/crs/part1                1        1009     2095662   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
5.29 Verify New Partitions
After creating all required partitions from linux1, you should now inform the kernel of the partition changes using the following command as the "root" user account from all remaining nodes in the Oracle RAC cluster (linux2). Note that the mapping of iSCSI target names discovered from Openfiler to the local SCSI device name will be different on both Oracle RAC nodes. This is not a concern and will not cause any problems since we will not be using the local SCSI device names but rather the local device names created by udev in the previous section.
From linux2, run the following commands:
# partprobe
# fdisk -l

Disk /dev/hda: 40.0 GB, 40000000000 bytes
255 heads, 63 sectors/track, 4863 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          13      104391   83  Linux
/dev/hda2              14        4863    38957625   8e  Linux LVM

Disk /dev/sda: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1        1009     2095662   83  Linux

Disk /dev/sdb: 17.7 GB, 17783848960 bytes
64 heads, 32 sectors/track, 16960 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       16960    17367024   83  Linux

Disk /dev/sdc: 17.8 GB, 17817403392 bytes
64 heads, 32 sectors/track, 16992 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       16992    17399792   83  Linux

Disk /dev/sdd: 17.8 GB, 17817403392 bytes
64 heads, 32 sectors/track, 16992 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1       16992    17399792   83  Linux

Disk /dev/sde: 17.8 GB, 17817403392 bytes
64 heads, 32 sectors/track, 16992 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1       16992    17399792   83  Linux
As a final step you should run the
following command on both Oracle RAC nodes to verify that udev created the new
symbolic links for each new partition:
# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.asm1-lun-0 -> ../../sdc
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.asm1-lun-0-part1 -> ../../sdc1
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.asm2-lun-0 -> ../../sdd
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.asm2-lun-0-part1 -> ../../sdd1
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.asm3-lun-0 -> ../../sde
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.asm3-lun-0-part1 -> ../../sde1
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.asm4-lun-0 -> ../../sdb
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.asm4-lun-0-part1 -> ../../sdb1
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs-lun-0 -> ../../sda
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs-lun-0-part1 -> ../../sda1
The listing above shows that udev did indeed create new device names for each of the new partitions. We will be using these new device names when configuring the volumes for OCFS2 and ASMLib later in this guide:
- /dev/iscsi/asm1/part1
- /dev/iscsi/asm2/part1
- /dev/iscsi/asm3/part1
- /dev/iscsi/asm4/part1
- /dev/iscsi/crs/part1
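As an optional final check of our own (assuming the udev symlinks shown above), a short loop on each node confirms that every partition device is present:
# for v in asm1 asm2 asm3 asm4 crs; do ls -l /dev/iscsi/$v/part1; done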
7) R12 Installation paths
7.1 R12 Installation (Initially 2 mid-tier & 1 DB-tier node)
Installation Requirement :
1) Disk space requirement:
Node Space Required
-----------------------------------------------
Mid-Tier 50GB X 2 (2 nodes)
DB Tier – Vision 250GB
2) Installation is done by root OS user
3) Database OS User created (oracle)
# mkdir -p /u01/app/oracle
# chown -R oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/app/oracle
At the end of this
procedure, you will have the following:
- /u01 owned by root.
- /u01/app owned by root.
- /u01/app/oracle owned by oracle:oinstall with 775 permissions. This ownership and these permissions enable the OUI to create the oraInventory directory, in the path /u01/app/oracle/oraInventory.
4) Apps OS User created (applmgr)
5) OS Utilities like ar, gcc, g++, ld, ksh, make, and an X Display Server must exist in the PATH.
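A quick way to confirm the required utilities are available (our own sketch; the list simply mirrors the requirement above):
# which ar gcc g++ ld ksh make
# echo $DISPLAY    # should point to a working X display server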
Installation Process:
Below are some screen-shots of the Oracle Applications R12 Vision Instance installation. The screens are largely self-explanatory, but we have also added some explanation to a few of them.
Screen 1: Welcome Screen: Review everything and press Next.
Screen 2: Wizard Operation: Choose whether you want a fresh installation or an upgrade from a previous version.
Screen 3: Oracle Configuration Manager: Accept this if you have a Metalink account and you want support from Oracle. It will ask for the Metalink account and Support Identifier on the next screen.
Screen 4: Oracle Configuration Manager Details:
Screen 5: Configuration Choice: If you already have a configuration file from a previous installation, locate the file and load the configuration from it; otherwise create a new configuration. This option is very helpful if you got an error during installation and want to restart the installation: in that case you can reuse the previously selected configuration.
Screen 6: Database Node setup: Provide the DB node details, e.g. DB SID, HostName, Domain Name, OS, OS User and Group, and Installation Base Directory.
Please note that the installation is done by the root user and you need a separate OS user for the DB account, e.g. oracle with the dba group.
Screen 7: Primary Apps Node setup: Next is the Primary Application Node setup. Provide the appropriate details.
Notice that it asks for an "Instance Directory". This is the $INST_TOP for this particular node.
Screen 8: Enable/Disable Application Services for Primary Node: As explained earlier, we can have many Appl nodes. We have taken the example of two appl nodes (appl_node1 and appl_node2). We have disabled the Batch Processing Services on appl_node1 and will enable them on appl_node2, as explained earlier in "Shared File System Architecture".
Screen 9: Node Information: This screen shows the DB and primary node information. Now click on the Add Server button to add an additional Appl node.
Screen 10: Additional node config: This screen shows the additional node setup. Note the Shared File System checkbox. Check this if you want a single installation and to share the installation setup between both nodes (by NFS mount). If you want a separate $INST_TOP for the additional node, don't check the box and provide the paths for it.
Screen 11: Additional node application services: Enable Root
Service group and batch processing services for the additional Appl node.
Screen 12: Node Information: Now this screen shows the information for all three nodes.
Screen 13: System Status Check
Screen 14: Pre-install Checks: Once all the checks have passed, proceed with the installation.
Screen 15: Install in Progress
Screen 16: Post install checks: If any of the checks failed, view the error by clicking the icon near the item and try to resolve the errors, then check again. If everything is fine, click Next and then Finish on the next screen.
Screen 17: Final Screen
Congratulations... Your Oracle Applications R12 installation is successfully done. Now you can just type the URL in the browser and see the beautiful screen of Oracle Apps R12. You can do the initial login with user SYSADMIN and password sysadmin, then create new users with the System Administrator responsibility.
LOG FILE LOCATIONS FOR VARIOUS TYPES OF LOGS
Log files are useful in troubleshooting issues in Oracle Applications. Here is a list of log file locations in Oracle Applications for Startup/Shutdown, Cloning, Patching, DB & Apps Listener and various other components in Apps R12/12i:
A. Startup/Shutdown Log files for Application Tier in R12
Instance Top ($INST_TOP) is a new TOP added in R12. Most of the logs are located under INST_TOP. Below are the locations of the log files for startup/shutdown processes:
- Startup/Shutdown error message text files like adapcctl.txt, adcmctl.txt...
$INST_TOP/apps/$CONTEXT_NAME/logs/appl/admin/log
- Startup/Shutdown error messages related to the tech stack (10.1.2, 10.1.3 forms/reports/web)
$INST_TOP/apps/$CONTEXT_NAME/logs/ora/ (10.1.2 & 10.1.3)
$INST_TOP/apps/$CONTEXT_NAME/logs/ora/10.1.3/Apache/error_log[timestamp]
$INST_TOP/apps/$CONTEXT_NAME/logs/ora/10.1.3/opmn/ (OC4J~..., oa*, opmn.log)
$INST_TOP/apps/$CONTEXT_NAME/logs/ora/10.1.2/network/ (listener log)
$INST_TOP/apps/$CONTEXT_NAME/logs/appl/conc/log (CM log files)
B. Autoconfig related log files in R12
i) Database Tier Autoconfig log:
$ORACLE_HOME/appsutil/log/$CONTEXT_NAME/MMDDHHMM/adconfig.log
$ORACLE_HOME/appsutil/log/$CONTEXT_NAME/MMDDHHMM/NetServiceHandler.log
ii) Application Tier Autoconfig log:
$INST_TOP/apps/$CONTEXT_NAME/admin/log/$MMDDHHMM/adconfig.log
Autoconfig context file location in R12:
$INST_TOP/apps/$CONTEXT_NAME/appl/admin/$CONTEXT_NAME.xml
C. R12 Installation Logs
Database Tier Installation (RDBMS $ORACLE_HOME):
$ORACLE_HOME/appsutil/log/$CONTEXT_NAME/.log
$ORACLE_HOME/appsutil/log/$CONTEXT_NAME/ApplyDBTechStack_.log
$ORACLE_HOME/appsutil/log/$CONTEXT_NAME/ohclone.log
$ORACLE_HOME/appsutil/log/$CONTEXT_NAME/make_.log
$ORACLE_HOME/appsutil/log/$CONTEXT_NAME/installdbf.log
$ORACLE_HOME/appsutil/log/$CONTEXT_NAME/adcrdb_.log
$ORACLE_HOME/appsutil/log/$CONTEXT_NAME/ApplyDatabase_.log
$ORACLE_HOME/appsutil/log/$CONTEXT_NAME//adconfig.log
$ORACLE_HOME/appsutil/log/$CONTEXT_NAME//NetServiceHandler.log
Application Tier Installation:
$INST_TOP/logs/.log
$APPL_TOP/admin/$CONTEXT_NAME/log/ApplyAppsTechStack.log
$INST_TOP/logs/ora/10.1.2/install/make_.log
$INST_TOP/logs/ora/10.1.3/install/make_.log
$INST_TOP/admin/log/ApplyAppsTechStack.log
$INST_TOP/admin/log/ohclone.log
$APPL_TOP/admin/$CONTEXT_NAME/log/installAppl.log
$APPL_TOP/admin/$CONTEXT_NAME/log/ApplyAppltop_.log
$APPL_TOP/admin/$CONTEXT_NAME/log//adconfig.log
$APPL_TOP/admin/$CONTEXT_NAME/log//NetServiceHandler.log
8) R12 DB-Tier RAC Conversion
8.1 Check the hardware setup done previously
- Shared Storage – setup already done before the R12 Installation
- Network – setup already done before the R12 Installation
- Configure SSH between both DB nodes – explained below
- Run the Cluster Verify utility – PreInstall – runcluvfy (see the sketch below)
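A minimal sketch of the pre-install check (assuming the node names linux1 and linux2 used earlier in this document and the directory where the 10gR2 clusterware software was staged; substitute your own DB tier host names), run as the oracle user:
$ ./runcluvfy.sh stage -pre crsinst -n linux1,linux2 -verbose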
8.2 CRS Installation
- Install CRS 10.2.0.1 using runInstaller (/u01/apps/oracle/VIS/db/racdb/crs)
- Verify the install: /u01/apps/oracle/VIS/db/racdb/crs/bin/olsnodes
8.3 Install Oracle DB 10.2.0.1 Binaries
8.4 Install Oracle DB Components from Components CD
8.5 Upgrade CRS & Database software to 10.2.0.2
8.6 Upgrade the apps database to 10.2.0.2 (utlu102i.sql)
- Interoperability patch note: 454750.1
8.7 Listener Configuration
- Create vis_lnsr under $ORACLE_HOME/network/admin
- On all cluster nodes run NetCA to create listeners
8.8 Create ASM Instance and ASM Diskgroups
8.9 Run DBCA to configure ASM instances
- Create diskgroup '+DATADG' on '/dev/raw/ASM1'
- Verify the ASM instance is registered in CRS ($CRS_HOME/bin/crs_stat -t)
8.10 Prepare ConvertToRAC.xml using rconfig
- File location: $ORACLE_HOME/assistants/rconfig/sampleXMLs
- Edit the following parameters:
  SourceDBHome
  TargetDBHome
  RAC Nodes
  Instance Prefix
  SharedStorage type = ASM
  TargetDatabaseArea: ASM diskgroup name
  Convert verify: "NO" | "ONLY"
8.11 Create a new spfile on ASM for the target (RAC) DB_HOME
- Create spfile='+DATADG/spfilevis.ora' from pfile;
- Link the init.ora to the spfile in the target DB_HOME
- Startup the database from the target DB_HOME
- Use NetCA to create local and remote listeners
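A minimal sketch of the first two bullets (assuming the instance prefix "vis" from the rconfig parameters above, so the first instance's init file is initvis1.ora; names and paths are illustrative only):
SQL> create spfile='+DATADG/spfilevis.ora' from pfile;
$ echo "SPFILE='+DATADG/spfilevis.ora'" > $ORACLE_HOME/dbs/initvis1.ora    # init.ora now just points to the spfile on ASM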
8.12 Run rconfig
$ORACLE_HOME/bin/rconfig ConvertToRAC.xml
- Migrate the DB to ASM storage; create the DB instances
- Configure the Listener & Net services; configure/register CRS
- Start the instance on all nodes
8.13 Enable Autoconfig on database tier
- Generate appsutil.zip from the apps tier
  o Configure tnsnames.ora on the apps tier to point to one DB instance
  o Execute $AD_TOP/bin/admkappsutil.pl to create appsutil.zip
- Create appsutil in the RAC ORACLE_HOME on the DB tier
  o Copy appsutil.zip to the RAC ORACLE_HOME and unzip it
  o Copy the instance information from the old ORACLE_HOME:
    perl $OLD_ORACLE_HOME/appsutil/scripts//adpreclone.pl
    Copy context_file and jre from the old ORACLE_HOME to RAC ORACLE_HOME/appsutil
- Create an instance-specific XML context file
  o De-register the current config: exec fnd_conc_clone.setup_clean
  o Create a pairsfile.txt file under the new RAC ORACLE_HOME/appsutil/clone:
    s_undo_tablespace=UNDOTBS2
    s_dbClusterInst=2
    s_db_oh=/u01/apps/oracle/VIS/racdb/10/2/0
  o Run perl adclonectx.pl to create the instance-specific XML context file
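A brief sketch of the appsutil.zip hand-off (assuming the standard R12 output location under $INST_TOP/admin/out and the DB node name linux1; run the first command as applmgr on the apps tier and the rest as oracle on the DB tier):
$ perl $AD_TOP/bin/admkappsutil.pl                                  # creates appsutil.zip (typically under $INST_TOP/admin/out)
$ scp $INST_TOP/admin/out/appsutil.zip oracle@linux1:$ORACLE_HOME
$ cd $ORACLE_HOME && unzip -o appsutil.zip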
8.14 Run Autoconfig on database tier node-1 & node-2
8.15 Check database & nodeapps status using srvctl
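For example (a sketch assuming the database name VIS and the DB node names linux1 and linux2 used earlier; substitute your own):
$ srvctl status database -d VIS
$ srvctl status nodeapps -n linux1
$ srvctl status nodeapps -n linux2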
8.16 Establish application environment for RAC
- Run adconfig on the application nodes
  o Prepare tnsnames.ora to connect to RAC node-1
  o Set jdbc_url in the context_file to instance-1 of the RAC database
  o Execute autoconfig ($AD_TOP/bin/autoconfig.sh context_file=<>.xml)
  o Check tnsnames.ora in $INST_TOP/ora/10.1.2 & 10.1.3
  o Verify the DBC file in $FND_SECURE
- Load balancing: edit the context_file and set TWO_TASK
- Run autoconfig ($AD_TOP/adconfig.sh)
- Set the profile option "Application Database ID" to point to the DBC file at $FND_SECURE
HAPPY LEARNING!