Quick Reference to shell

Command Description Example
& Run the previous command in the background ls &
&& Logical AND if [ "$foo" -ge "0" ] && [ "$foo" -le "9" ]
|| Logical OR if [ "$foo" -lt "0" ] || [ "$foo" -gt "9" ] (not in Bourne shell)
^ Start of line grep "^foo"
$ End of line grep "foo$"
= String equality (cf. -eq) if [ "$foo" = "bar" ]
! Logical NOT if [ "$foo" != "bar" ]
$$ PID of current shell echo "my PID = $$"
$! PID of last background command ls & echo "PID of ls = $!"
$? exit status of last command ls ; echo "ls returned code $?"
$0 Name of current command (as called) echo "I am $0"
$1 First parameter passed to the current command echo "My first argument is $1"
$9 Ninth parameter passed to the current command echo "My ninth argument is $9"
$@ All of current command’s parameters (preserving whitespace and quoting) echo "My arguments are $@"
$* All of current command’s parameters (not preserving whitespace and quoting) echo "My arguments are $*"
-eq Numeric Equality if [ "$foo" -eq "9" ]
-ne Numeric Inequality if [ "$foo" -ne "9" ]
-lt Less Than if [ "$foo" -lt "9" ]
-le Less Than or Equal if [ "$foo" -le "9" ]
-gt Greater Than if [ "$foo" -gt "9" ]
-ge Greater Than or Equal if [ "$foo" -ge "9" ]
-z String is zero length if [ -z "$foo" ]
-n String is not zero length if [ -n "$foo" ]
-nt Newer Than if [ "$file1" -nt "$file2" ]
-d Is a Directory if [ -d /bin ]
-f Is a File if [ -f /bin/ls ]
-r Is a readable file if [ -r /bin/ls ]
-w Is a writable file if [ -w /bin/ls ]
-x Is an executable file if [ -x /bin/ls ]
() { … } Function definition myfunc() { echo hello; }
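
To tie several of these constructs together, here is a small illustrative script; the file-counting task and the names in it are invented purely for the example:

#!/bin/sh
# Count how many of the supplied arguments are readable regular files.
count=0
for f in "$@"; do                        # "$@" keeps each argument intact
    if [ -f "$f" ] && [ -r "$f" ]; then
        count=`expr $count + 1`
    fi
done
echo "$0 (PID $$): $count readable file(s) among $# argument(s)"
[ "$count" -gt 0 ] || echo "No readable files were given"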

Unique VLAN ID for SEA failover control channel setup

Always select a unique VLAN ID, one that does not exist anywhere on your organization's network, when setting up dual VIOS with a control channel for SEA failover. Failure to follow this may result in a network storm. (Very important, and I couldn't find any note about it on the IBM site.)

Requirements for Configuring SEA Failover

  • One SEA on one VIOS acts as the primary (active) adapter and the second SEA on the second VIOS acts as a backup (standby) adapter.
  • Each SEA must have at least one virtual Ethernet adapter with the “Access external network” flag (previously known as “trunk” flag) checked. This enables the SEA to provide bridging functionality between the two VIO servers.
  • This adapter on both the SEAs has the same PVID, but will have a different priority value.
  • A SEA in ha_mode (failover mode) might have more than one trunk adapter, in which case all of them should have the same priority value.
  • The priority value defines which of the two SEAs will be the primary and which will be the backup. The lower the priority value, the higher the priority, e.g. an adapter with priority 1 will have the highest priority.
  • An additional virtual Ethernet adapter, which belongs to a unique VLAN on the system, is used to create the control channel between the SEAs, and must be specified in each SEA when configured in ha_mode (see the example mkvdev command after this list).
  • The purpose of this control channel is to communicate between the two SEA adapters to determine when a failover should take place.
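
As a rough sketch only (the adapter names, default PVID and ha_mode value below are assumptions for illustration, not taken from this document), the SEA on each VIOS is typically built with something like:

$ mkvdev -sea ent0 -vadapter ent1 -default ent1 -defaultid 1 -attr ha_mode=auto ctl_chan=ent2

Here ent0 would be the physical adapter, ent1 the trunk virtual adapter and ent2 the control-channel virtual adapter on its unique VLAN; the second VIOS uses the same form of command with its own adapter names and a different trunk priority on its virtual adapter.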

Upgrading PowerPath in a dual VIO server environment

When upgrading PowerPath in a dual Virtual I/O (VIO) server environment, the virtual target devices need to be unconfigured (rather than removed) so that the existing mapping information is preserved.

To upgrade PowerPath in a dual VIO server environment:
1. On one of the VIO servers, run lsmap -all.
This command displays the mapping between physical, logical,
and virtual devices.

$ lsmap -all
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- -------------------
vhost1          U8203.E4A.10B9141-V1-C30                     0x00000000

VTD                   vtscsi1
Status                Available
LUN                   0x8100000000000000
Backing device        hdiskpower5
Physloc               U789C.001.DQD0564-P1-C2-T1-L67

2. Log in on the same VIO server as the padmin user.

3. Unconfigure the PowerPath pseudo devices listed in step 1 by
running:
rmdev -dev <VTD> -ucfg
where <VTD> is the virtual target device.
For example: rmdev -dev vtscsi1 -ucfg
The VTD status changes to Defined.
Note: Run rmdev -dev <VTD> -ucfg for all VTDs displayed in step 1.
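
If step 1 listed several VTDs, the same command is simply repeated for each of them; for example (vtscsi2 here is a made-up name for a second VTD):

$ rmdev -dev vtscsi1 -ucfg
$ rmdev -dev vtscsi2 -ucfg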

4. Upgrade PowerPath

=======================================================================

1. Close all applications that use PowerPath devices, and vary off all
volume groups except the root volume group (rootvg).

In a CLARiiON environment, if the Navisphere Host Agent is
running, type:
/etc/rc.agent stop

2. Optional. Run powermt save in PowerPath 4.x to save the
changes made in the configuration file.

Run powermt config.
5. Optional. Run powermt load to load the previously saved
configuration file.
When upgrading from PowerPath 4.x to PowerPath 5.3, an error
message is displayed after running powermt load, due to
differences in the PowerPath architecture. This is an expected
result and the error message can be ignored.
Even if the command succeeds in updating the saved
configuration, the following error message is displayed by
running powermt load:
host1a 5300-08-01-0819:/ #powermt load
Error loading auto-restore value
Warning: Error occurred loading saved driver state from file /etc/powermt.custom
Loading continues…
Error loading auto-restore value
When you upgrade from an unlicensed to a licensed version of
PowerPath, the load balancing and failover device policy is set to
bf/nr (BasicFailover/NoRedirect). You can change the policy by
using the powermt set policy command.
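
For example, on a Symmetrix array the licensed SymmOpt policy could be re-applied with something along these lines (policy names vary by array type and PowerPath version, so treat this as a sketch and check the powermt documentation for your release):

# powermt set policy=so dev=all
# powermt display dev=all | grep policy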

=======================================================================

5. Run powermt config.

6. Log in as the padmin user and then configure the VTD
unconfigured from step 3 by running:
cfgdev -dev <VTD>
Where <VTD> is the virtual target device.
For example: cfgdev -dev vtscsi1
The VTD status changes to Available.
Note: Run cfgdev -dev <VTD> for all VTDs unconfigured in step 3

7. Run lspath -h on all clients to verify all paths are Available.

8. Perform steps 1 through 7 on the second VIO server.

A sample ASM install process using EMC PowerPath (Symmetrix) with AIX 5.3

Basic System Setup
1. Install AIX 5.3 + latest maintenance level, and check metalink note 282036.1 for any additional
system prerequisites for Oracle
2. Verify the following filesets are installed, or install if not present:
•bos.adt.base
•bos.adt.lib
•bos.adt.libm
•bos.adt.syscalls
•bos.perf.libperfstat
•bos.perf.perfstat
•bos.perf.proctools
•bos.perf.gtools
•rsct.basic
•rsct.basic.compat
3. Create dba and oinstall groups with the same GID across all cluster nodes
4. Create oracle user with the same UID across all cluster nodes, primary group dba
5. set date and timezone (smit system)
6. start xntpd (smit xntpd)
7. implement tuning parameters from the Tuning Parameters and Settings for ASM section of this document

Configure Network Settings & Services
1. Set up tcpip on the en0 adapter
# smitty tcpip
– Minimum configuration and startup for en0 ** public network **
– rac1: 10.1.1.101
– rac2: 10.1.1.102
– rac3: 10.1.1.103
– Minimum configuration and startup for en1 ** RAC Interconnect **
– rac1-en1: 10.1.10.101
– rac2-en1: 10.1.10.102
– rac3-en1: 10.1.10.103

2. Update /etc/hosts with all IP/DNS entries
3. Create entries in /etc/hosts.equiv for the oracle user
rac1 oracle
rac2 oracle
rac3 oracle
rac1-en1 oracle
rac2-en1 oracle
rac3-en1 oracle

Logical Volumes & Filesystems
1. Increase filesystem sizes:
– / = 256 MB

– /tmp = at least 500 MB free
– /var = 512 MB
2. Make filesystems for Oracle SW ($ORACLE_HOME), ASM ($ORACLE_ASM_HOME) and
CRS ($ORA_CRS_HOME),
– $ORACLE_HOME, eg /opt/oracle/product/10.2.0, should be ~ 5-6 GB
– $ORA_CRS_HOME, eg /crs/oracle/product/10.2.0, should be ~ 2 GB
– mount filesystems after creation
– change ownerships & permissions, example:
– chown -R oracle:oinstall /opt/oracle
– chmod -R 775 /opt/oracle
– mkdir -p /crs/oracle/product/10.2.0
– chown -R oracle:oinstall /crs/oracle
– chmod -R 755 /crs/oracle
3. Add $ORA_CRS_HOME/bin to root’s PATH

POWERPATH installation
See the EMC Host Connectivity Guide for IBM AIX, P/N 300-000-608, for full details
1. Install EMC ODM support package
– 5.3.0.2 from ftp://ftp.emc.com/pub/elab/aix/ODM_DEFINITIONS/EMC.AIX.5.3.0.2.tar.Z
– uncompress and extract the tar ball into a new directory
– install using smit install
2. remove any existing devices attached to the EMC
# rmdev -dl hdiskX
3. run /usr/lpp/EMC/Symmetrix/bin/emc_cfgmgr to detect devices
4. Install PowerPath using smit install
5. register PowerPath
# emcpreg -install
6. initialize PowerPath devices
# powermt config
7. verify that all PowerPath devices are named consistently across all cluster nodes
# /usr/lpp/EMC/Symmetrix/bin/inq.aix64 | grep hdiskpower
– compare results. Consistent naming is not required for ASM devices, but LUNs used
for the OCR and VOTE functions must have the same device names on all RAC systems.
8. On all hdiskpower devices to be used by Oracle for ASM, voting, or the OCR, the reserve_lock
attribute must be set to “no”
# chdev -l hdiskpowerX -a reserve_lock=no
9. Verify the attribute is set
# lsattr -El hdiskpowerX

10. Identify two small luns to be used for OCR and voting
11. Set permissions on all hdiskpower drives to be used for ASM, voting, or the OCR as follows (a scripted sketch covering steps 8 through 11 follows this list):
# chown oracle:dba /dev/rhdiskpowerX
# chmod 660 /dev/rhdiskpowerX
The Oracle Installer will change these permissions and ownership as necessary during the
CRS install process.
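
Steps 8 through 11 can be scripted across many devices. A rough sketch, assuming (purely for illustration) that hdiskpower4 through hdiskpower9 are the LUNs set aside for ASM, the OCR and voting:

# for i in 4 5 6 7 8 9; do
>   chdev -l hdiskpower$i -a reserve_lock=no
>   chown oracle:dba /dev/rhdiskpower$i
>   chmod 660 /dev/rhdiskpower$i
> done
# lsattr -El hdiskpower4 | grep reserve_lock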

Oracle 10g RAC installation
1. Add the following to the oracle user’s .profile:
ORACLE_BASE=<oracle base directory>; export ORACLE_BASE
ORA_CRS_HOME=<ora crs home>; export ORA_CRS_HOME
AIXTHREAD_SCOPE=S; export AIXTHREAD_SCOPE
umask 022
2. Run the Oracle installer to install CRS
$ export LOG=/tmp/orainstall.log
$ export ORACLE_HOME=/crs/oracle/product/10.2.0
Load the CRS install cd
run rootpre.sh on ALL nodes
$ runInstaller -ignoreSysPrereqs
3. Check crs install for the correct number of nodes and interfaces
[rac1]/crs/oracle/product/10.1.0/bin> # ./olsnodes -n
rac1 1
rac2 2
rac3 3
[rac1]/crs/oracle/product/10.1.0/bin> # ./oifcfg getif
en0 10.1.3.0 global public
en1 10.1.30.0 global cluster_interconnect
4. Install Oracle Binaries
$export ORACLE_HOME=/home/oracle/product/10.2.0
$ cd <10g DVD directory, Disk1>
$ runInstaller -ignoreSysPrereqs
5. Install latest 10g patchset
6. Install any additional Oracle patches listed in the PowerPath for AIX installation guide.
7. For CLARiiON systems, refer to the “Requirements for Oracle 10g RAC with ASM on AIX
5L” document from EMC to set the misscount setting appropriately.
8. Run DBCA to set up ASM instances and create database
Create Data and Recovery disk groups, each with external redundancy
use /dev/rhdisk* as the disk discovery path
Choose option to Use Oracle-Managed Files

WebLogic 10.3 on AIX 6.1: Java 6 requirements

Downloading and Installing IBM SDK Java 6 with Service Release 2, Service Refresh 4+IZ48590

Complete the following procedure to download and install IBM SDK Java 6 with Service Release 2.

  1. Go to the IBM Support: Fix Center download site at the following URL:
     http://www.ibm.com/developerworks/java/jdk/aix/service.html
  2. Click the Fix Info link for your JDK version.
  3. Select your APAR/SR number from the table and follow the instructions and/or prompts displayed on the screen to download and install the fix package on your system.

     For SR2 (32-bit), use IZ30723.

     For SR2 (64-bit), use IZ30726.

     For SR4+IZ48590 (32-bit), use IZ50170.

     For SR4+IZ48590 (64-bit), use IZ50167.

  4. Select your APAR from the list and follow the instructions and/or prompts displayed on the screen to download and install the specified APAR on your system.
  5. Set the JAVA_HOME environment variable to the directory in which IBM Java 6 is installed, and export JAVA_HOME. For example:

     export JAVA_HOME=/usr/java6

  6. Set the PATH variable to include $JAVA_HOME/bin. For example:

     export PATH=$JAVA_HOME/bin:$PATH

Downloading and Installing IBM SDK Java 6 64-bit with Service Refresh 2 + iFixes (IZ51489 + IZ32747 + IZ45701 + IZ26955 + IZ33606 + IZ52413) (Applicable to WebLogic Integration 10.3 Only)

This is applicable to WebLogic Integration 10.3 only. Complete the following procedure to download and install IBM SDK Java 6 with Service Refresh 2 + iFixes.

  1. Go to the IBM site:
     https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?lang=en_US&source=swg-ibmjavaisv
  2. Register for a valid IBM ID if necessary.
  3. Click the Sign In button.
  4. Log in using your IBM ID and password.
  5. Enter the Access Key MJ3D7TQGMK.
  6. Click Submit.
  7. From “IBM SDK FOR AIX on 64-bit iSeries/pSeries”, select the following tar file to download:

     Java6_64.0.0.58.tar.gz (SR2+IZ51489+IZ32747+IZ45701+IZ26955+IZ33606+IZ52413) (119 MB)

  8. Click I Confirm to start the file download.
  9. Set the JAVA_HOME environment variable to the directory in which IBM Java 6 is installed, and export JAVA_HOME. For example:

     export JAVA_HOME=/usr/java6_64

  10. Set the PATH variable to include $JAVA_HOME/bin. For example:

      export PATH=$JAVA_HOME/bin:$PATH

http://www.ibm.com/developerworks/java/jdk/aix/faqs.html

RMAN TDP 11g

RMAN provides consistent and secure backup, restore, and recovery performance for Oracle databases. While Oracle RMAN initiates a backup or restore, Data Protection for Oracle acts as the interface to the Tivoli Storage Manager server. The Tivoli Storage Manager server then applies administrator-defined storage management policies to the data. Data Protection for Oracle implements the Oracle-defined Media Management application program interface (SBTAPI) 2.0. This SBTAPI interfaces with RMAN and translates Oracle commands into Tivoli Storage Manager API calls to the Tivoli Storage Manager server. With the use of RMAN, Data Protection for Oracle allows you to perform the following functions:

  • Full and incremental backups of the following while online or offline: databases, tablespaces, datafiles, archive log files, and control files
  • Full database restores while offline
  • Tablespace and datafile restores while online or offline
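
A typical way to drive such a backup is an RMAN script that allocates an SBT channel pointing at the Data Protection for Oracle options file. The sketch below assumes OS authentication and a tdpo.opt path that is only an example; both must match your own installation:

$ rman target / <<EOF
run {
  allocate channel t1 type 'SBT_TAPE'
    parms 'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
  backup database plus archivelog;
  release channel t1;
}
EOF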

cfg2html tool configuration on RHEL 5

cfg2html: it collects the system configuration, sorts it out, and creates a nice HTML report

unzip cfg2html-linux-1.60-20090415_all.zip
rpm -ivh cfg2html-linux-1.60-1.noarch.rpm

rpm -Fvh cfg2html-linux-1.60-1.noarch.rpm

vi /etc/cfg2html/systeminfo

cfg2html-linux

DONE!

RHEL : Enabling Telnet and FTP Services

Edit the gssftp file under /etc/xinetd.d:

server_args     = -l -a

Remove -a
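
After removing -a, the relevant lines of /etc/xinetd.d/gssftp should look roughly like this (the disable line is shown only as a reminder; it must be set to no for xinetd to offer the service at all):

server_args     = -l
disable         = no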

restart the service with service xinetd reload

Boom… check the allow/deny entries in the ftpusers config file; it should work now. If not, follow the next section.

Red Hat Enterprise Linux: RHEL3 / RHEL4

Enabling Telnet and FTP Services

Linux is configured to run the Telnet and FTP servers, but by default these services are not enabled. To enable the telnet service, log in to the server as the root user and run the following commands:

# chkconfig telnet on
# service xinetd reload
Reloading configuration: [  OK  ]

Starting with the Red Hat Enterprise Linux 3.0 release (and in CentOS Enterprise Linux), the FTP server (wu-ftpd) is no longer available with xinetd. It has been replaced with vsftp and can be started from /etc/init.d/vsftpd as in the following:

# /etc/init.d/vsftpd start
Starting vsftpd for vsftpd:         [ OK ]

If you want the vsftpd service to start and stop when recycling (rebooting) the machine, you can create the following symbolic links:

# ln -s /etc/init.d/vsftpd /etc/rc3.d/S56vsftpd
# ln -s /etc/init.d/vsftpd /etc/rc4.d/S56vsftpd
# ln -s /etc/init.d/vsftpd /etc/rc5.d/S56vsftpd
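
Equivalently, if the vsftpd init script is registered with chkconfig, the same runlevel links can be created with:

# chkconfig --level 345 vsftpd on
# chkconfig --list vsftpd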

Allowing Root Logins to Telnet and FTP Services

Now before getting into the details of how to configure Red Hat Linux for root logins, keep in mind that this is VERY BAD security. Make sure that you NEVER configure your production servers for this type of login.

Configure Telnet for root logins

Simply edit the file /etc/securetty and add the following to the end of the file:

pts/0
pts/1
pts/2
pts/3
pts/4
pts/5
pts/6
pts/7
pts/8
pts/9

This will allow up to 10 telnet sessions to the server as root.

Configure FTP for root logins

Edit the files /etc/vsftpd.ftpusers and /etc/vsftpd.user_list and remove the ‘root’ line from each file.
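
If you prefer a one-liner, and assuming your sed is a GNU sed new enough to support in-place editing (true on RHEL3 and later), something like this removes the root entry from both files:

# sed -i '/^root$/d' /etc/vsftpd.ftpusers /etc/vsftpd.user_list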

11g Req on AIX 6.1

Ref: http://download.oracle.com/docs/cd/B28359_01/relnotes.111/b32075.pdf

In addition to the information in the installation guides, the following section contains the system requirements for AIX 6.1:

■ Operating System Requirement

■ Operating System Filesets for AIX 6L

Operating System Requirement

In addition to the supported operating systems listing in the installation guide, AIX 6L, version 6.1, 64-bit kernel is supported with service pack 04 or later. Refer to Oracle Database Installation Guide for AIX 5L Based Systems (64-Bit) for additional information on operating system listings.

Operating System Filesets for AIX 6L

The following operating system filesets are required for Oracle Database 11g Release 1 (11.1); a quick lslpp check follows the list:

■ bos.adt.base

■ bos.adt.lib

■ bos.adt.libm

■ bos.perf.libperfstat

■ bos.perf.perfstat

■ bos.perf.proctools

■ xlC.aix61.rte:9.0.0.1 or later

■ xlC.rte:9.0.0.1 or later
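
A quick way to verify that the filesets are present is lslpp (standard on AIX); take the exact fileset names from the list above:

# lslpp -l bos.adt.base bos.adt.lib bos.adt.libm \
        bos.perf.libperfstat bos.perf.perfstat bos.perf.proctools
# lslpp -l | grep xlC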

Oracle RAC Vs Single Instance : System Admin point of view

An “Oracle Real Application Cluster” (RAC) is a clustered Oracle database. If a RAC is properly set up, all the nodes (servers) are active at the same time, acting on the same one database. This is very different from a failover cluster. Let’s first take a bird’s-eye overview of a single-instance architecture, compared to RAC architecture.

Overview of Single Instance:

If you look at a (traditional) Single Server where a single Oracle Instance (8i, 9i, 10g, 11g) is involved, you would see the following situation.

Files:

You can find a number of database files, residing on a disksystem, amongst others are:

  • system.dbf:   contains the dictionary (users, grants, table properties, packages etc..)
  • undo.dbf:    contains “undo/rollback” information about all modifying SQL statements, and thus containing the “former situation” before transactions are committed to the DB.
  • redo logs:    in case of a crash, these write-ahead logs can be used to redo committed transactions that were not yet written to the datafiles but were logged in the redo logs.
  • user defined data and index tablespaces: These are data files, organized in the logical concept of “tablespaces”. These tablespaces contain the tables (and indexes)

 

Note: a tablespace consists of one or more files. To the operating system there are only files to be concerned with, but from the database perspective the DBA can create a logical entity called a “tablespace”, consisting of possibly multiple files, possibly distributed over multiple disks. If the DBA then creates a table (or index), he or she should specify a tablespace, thereby distributing the (future) table content over multiple files, which might increase I/O performance. So, the DBA might create tablespaces with names like, for example, “DATA_BIG”, “DATA_SMALL”, “INDEX_BIG” etc.

 

Memory structure and processes:

  • The Instance gets created in memory when the DBA (or the system) “starts the database”. Starting the database means that a number of processes get active, and that a rather complex shared memory area gets created. This memory area is called the SGA (System Global Area) and contains some buffers and other pools, of which the following are the most noticeable:
    • buffer cache: datablocks from disk are cached in this buffer. Most of this cached data are blocks from tables and indexes.
    • log buffer: a small memory area which contains modified data that is about to be written to the redo logs.
    • Shared pool: all used SQL queries and procedures are cached in this pool.
    • Library cache: the system’s metadata is cached in this structure.

 

By the way, an Oracle Instance is largely configured through a configuration file (traditionally the file “init.ora”, an ASCII file that can be edited to adjust values). Some of the parameters in that file determine the sizes of the different caches and pools. For example, here is a section that determines the SGA of a small database:

 

db_cache_size        = 268435456

java_pool_size       = 67108864

shared_pool_size     = 67108864

streams_pool_size    = 67108864

 

So, for a system administrator “an instance” is not synonymous with the database files on disk, but is really the “stuff” that gets loaded or created in memory. After an Oracle Database has started, a number of processes are running:

 

pmon    : process monitor
smon    : system monitor
ckpt    : checkpoint process
dbwr    : database writer process
lgwr    : the process that writes the redo logs

 

Overview RAC:


 

Let’s begin by reviewing the structure of a Real Application Clusters database. Physically, a RAC consists of several nodes (servers), connected to each other by a private interconnect. The database files are kept on a shared storage subsystem, where they’re accessible to all nodes. And each node has a public network connection.

  • A cluster is a set of 2 or more machines (nodes) that share or coordinate resources to perform the same task.
  • A RAC system is 2 or more instances running on a set of clustered nodes, with all instances accessing a shared set of database files (one Database).

 

Depending on the O/S platform, a RAC database may be deployed on a cluster that uses vendor clusterware plus Oracle’s own clusterware (Cluster Ready Services, CRS), or on a cluster that solely uses Oracle’s own clusterware.

 

Thus, every RAC sits on a cluster that is running Cluster Ready Services. srvctl is the primary tool DBAs use to configure CRS for their RAC database and processes.
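
A few typical srvctl invocations, purely as a sketch (the database name RACDB and instance name RACDB1 are made-up examples):

$ srvctl status database -d RACDB
$ srvctl start instance -d RACDB -i RACDB1
$ srvctl stop database -d RACDB
$ srvctl status nodeapps -n rac1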

 

  • Cluster Ready Services and the OCR:  Cluster Ready Services, or CRS, is a new feature for 10g RAC. Essentially, it is Oracle’s own clusterware. On most platforms, Oracle supports vendor clusterware; in these cases, CRS interoperates with the vendor clusterware, providing high availability support and service and workload management. On Linux and Windows clusters, CRS serves as the sole clusterware. In all cases, CRS provides a standard cluster interface that is consistent across all platforms. CRS consists of four processes (crsd, ocssd, evmd, and evmlogger) and two disks: the Oracle Cluster Registry (OCR), and the voting disk.

The CRSD manages the HA functionality by starting, stopping, and failing over the application resources and maintaining the profiles and current states in the Oracle Cluster Registry (OCR), whereas the OCSSD manages the participating nodes in the cluster by using the voting disk. The OCSSD also protects against the data corruption potentially caused by “split brain” syndrome by forcing a machine to reboot.

So, on most platforms, you may see the following processes:

 

oprocd    Process Monitor Daemon
crsd      Cluster Ready Services Daemon (CRSD)
ocssd     Oracle Cluster Synchronization Services Daemon
evmd      Event Manager Daemon

 

To start and stop CRS when the machine boots or shuts down, there are rc scripts in place on UNIX.

We can also, as root, manually start, stop, enable or disable the services.
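
On 10.2 the usual root-level commands for this are along the following lines (a sketch; the crsctl binary lives in the Clusterware home, so adjust the path to your installation):

# $ORA_CRS_HOME/bin/crsctl check crs
# $ORA_CRS_HOME/bin/crsctl stop crs
# $ORA_CRS_HOME/bin/crsctl start crs
# $ORA_CRS_HOME/bin/crsctl disable crs
# $ORA_CRS_HOME/bin/crsctl enable crs

The disable and enable forms control whether CRS is started automatically at boot.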

  • CRS manages the following resources:
    • The ASM instances on each node
    • Databases
    • The instances on each node
    • Oracle Services on each node
    • The cluster nodes themselves, including the following processes, or “nodeapps”:
      • VIP
      • GSD
      • The listener
      • The ONS daemon

 

CRS stores information about these resources in the OCR. If the information in the OCR for one of these resources becomes damaged or inconsistent, then CRS is no longer able to manage that resource. Fortunately, the OCR automatically backs itself up regularly and frequently.
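
To see what backups exist, or to take a manual export as well, the usual commands are roughly these (run as root from the Clusterware home; the export file name is just an example):

# ocrcheck
# ocrconfig -showbackup
# ocrconfig -export /tmp/ocr_manual_export.dmp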

 

So, 10g RAC (10.2) uses, or depends on:

 

  • Oracle Clusterware (10.2), formerly referred to as CRS “Cluster Ready Services” (10.1).
  • Oracle’s optional Cluster File System OCFS (This is optional), or use ASM and RAW.
  • Oracle Database extensions

 

RAC is “scale out” technology: just add commodity nodes to the system. The key component is “cache fusion”: data is transferred from one node to another via very fast interconnects. Essential to 10g RAC is this “shared cache” technology. The Automatic Workload Repository (AWR) plays a role as well. The Fast Application Notification (FAN) mechanism that is part of RAC publishes events describing the current service level being provided by each instance to AWR. The load balancing advisory information is then used to determine the best instance to serve the new request.

 

  • With RAC, ALL Instances of ALL nodes in a cluster, access a SINGLE database.
  • But every instance has its own UNDO tablespace and REDO logs.

 

Oracle Clusterware comprises several background processes that facilitate cluster operations. The Cluster Synchronization Services (CSS), Event Management (EVM), and Oracle Cluster components communicate with the corresponding cluster component layers in the other instances within the same cluster database environment.

 

 

Per implementation, questions arise in the following areas:

  • Storage
  • Computer Systems/Storage-Interconnect
  • Database
  • Application Server
  • Public and Private networks
  • Application Control & Display

On the Storage level, it can be said that RAC supports

– Automatic Storage Management (ASM)

– Oracle Cluster File System (OCFS)

– Network File System (NFS) – limited (largely theoretical in practice)

– Disk raw partitions

– Third party cluster file systems, like GPFS

 

For application control and tools, it can be said that 10g RAC supports

– OEM Grid Control     http://hostname:5500/em

– OEM Database Control http://hostname:1158/em

– “srvctl” is a command line interface to manage the cluster configuration, for example, starting and stopping all nodes in one command.

– Cluster Verification Utility (cluvfy) can be used for an installation and sanity check.
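
For example, a pre-installation check of the Clusterware stage across the three nodes used earlier in this document might look like this (a sketch; adjust the node names and stage to your situation):

$ cluvfy stage -pre crsinst -n rac1,rac2,rac3 -verbose
$ cluvfy stage -post crsinst -n rac1,rac2,rac3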

 

Failure in client connections: depending on the Net configuration, type of connection, type of transaction etc., Oracle Net Services provides a feature called “Transparent Application Failover” which can fail over a client session to another backup connection.

About HA and DR:

– RAC is HA (High Availability): it keeps things up and running within one site.

– Data Guard is DR (Disaster Recovery): it is able to mirror one site to another, remote site.

 

 

Storage with RAC: We have the following storage options:

Raw      Raw devices, no filesystem present
ASM      Automatic Storage Management
CFS      Cluster File System
OCFS     Oracle Cluster File System
LVM      Logical Volume Manager
NFS      Network File System (must be on a certified NAS device)

 

Storage                               Oracle Clusterware    Database    Recovery area
-----------------------------------   ------------------    --------    -------------
Automatic Storage Management          No                    Yes         Yes
Cluster file system (OCFS or other)   Yes                   Yes         Yes
Shared raw storage                    Yes                   Yes         No

 

Here is a description of the file types. A regular single-instance database has three basic types of files:

1. database software and dump files (alert log, trace files and that stuff);

2. datafiles, spfile, control files and log files, often referred to as “database files”;

3. and it may have recovery files, if using RMAN.

and, in case of RAC:

4. A RAC database has an additional type of file referred to as “CRS files”. These consist of the Oracle Cluster Registry (OCR) and the voting disk.

 

Not all of these files have to be on the shared storage subsystem. The database files and CRS files must be accessible to all instances, so these *must be* on the shared storage subsystem. The database software can be on the shared subsystem and shared between nodes; or each node can have its own ORACLE_HOME. The flash recovery area must be shared by all instances, if used.

 

Some storage options can’t handle all of these file types. To take an obvious example, the database software and dump files can’t be stored on raw devices. This isn’t important for the dump files, but it does mean that choosing raw devices precludes having a shared ORACLE_HOME on the shared storage device.

 

Remarks:

  • On a particular platform, there might exist a vendor specific solution for shared storage. For example, on AIX it is usually IBM GPFS that is used as a shared file system. But for this platform you might also use SFRAC of Veritas. VERITAS Storage Foundation for Oracle Real Application Clusters (SFRAC) provides an integrated solution stack for using clustered filesystems with Oracle RAC on AIX, as an alternative to using raw logical volumes, Automatic Storage Management (ASM) or the AIX General Parallel Filesystem (GPFS).

  • SAN solutions: as far as SAN goes, there is no inherent SAN protocol that allows for block-level locking between hosts; your clustered filesystem is responsible for providing that.