RMAN TDP 11g

RMAN provides consistent and secure backup, restore, and recovery performance for Oracle databases. While Oracle RMAN initiates a backup or restore, Data Protection for Oracle acts as the interface to the Tivoli Storage Manager server. The Tivoli Storage Manager server then applies administrator-defined storage management policies to the data. Data Protection for Oracle implements the Oracle-defined Media Management application program interface (SBTAPI) 2.0. This SBTAPI interfaces with RMAN and translates Oracle commands into Tivoli Storage Manager API calls to the Tivoli Storage Manager server. With the use of RMAN, Data Protection for Oracle allows you to perform the following functions:

  • Full and incremental backup of the following, while online or offline:
    – Databases
    – Tablespaces
    – Datafiles
    – Archive log files
    – Control files
  • Full database restores while offline
  • Tablespace and datafile restores while online or offline
  • LAN-free data transfer
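As a sketch, a minimal RMAN backup that goes through Data Protection for Oracle could look like this: allocate an 'sbt_tape' channel so the data flows to the TSM media manager instead of to disk (the tdpo.opt path is an example; adjust it to where TDP for Oracle is installed on your system):

rman target / <<'EOF'
run {
  allocate channel t1 type 'sbt_tape'
    parms 'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
  backup database plus archivelog;
  release channel t1;
}
EOF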

cfg2html tool configuration on RHEL 5

cfg2html: it collects the complete system configuration, sorts it out, and creates a nice HTML report.

unzip cfg2html-linux-1.60-20090415_all.zip
rpm -ivh cfg2html-linux-1.60-1.noarch.rpm

rpm -Fvh cfg2html-linux-1.60-1.noarch.rpm    (freshen: use -F/--freshen instead of -i when upgrading an existing install)

vi /etc/cfg2html/systeminfo    (optional: adjust what gets collected)

cfg2html-linux    (run the collector; it writes an HTML report named after the host)
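To keep the report current, you could also run it from cron; the crontab entry below runs it every Sunday at 03:00 (the -o output-directory flag and the paths are assumptions; check cfg2html-linux -h for your version):

0 3 * * 0 /usr/bin/cfg2html-linux -o /var/log/cfg2html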

DONE!

RHEL : Enabling Telnet and FTP Services

Edit the gssftp file under /etc/xinetd.d and find this line:

server_args     = -l -a

Remove the -a option, so the line reads “server_args = -l”.
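For reference, after the change the file should look roughly like this (a sketch, not a verbatim copy; the server path can differ per release):

service ftp
{
        socket_type     = stream
        wait            = no
        user            = root
        server          = /usr/kerberos/sbin/ftpd
        server_args     = -l
        log_on_failure  += USERID
        disable         = no
}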

Restart the service:

# service xinetd reload

Boom… check the allow/deny entries in the ftpusers config file; it should work. If not, follow the next section….

Red Hat Enterprise Linux: RHEL3 / RHEL4

Enabling Telnet and FTP Services

Linux comes with the Telnet and FTP servers installed, but by default these services are not enabled. To enable the telnet service, log in to the server as the root user account and run the following commands:

# chkconfig telnet on
# service xinetd reload
Reloading configuration: [  OK  ]

Starting with the Red Hat Enterprise Linux 3.0 release (and in CentOS Enterprise Linux), the FTP server (wu-ftpd) is no longer available with xinetd. It has been replaced with vsftpd and can be started from /etc/init.d/vsftpd as in the following:

# /etc/init.d/vsftpd start
Starting vsftpd for vsftpd:         [ OK ]

If you want the vsftpd service to start and stop when recycling (rebooting) the machine, you can create the following symbolic links:

# ln -s /etc/init.d/vsftpd /etc/rc3.d/S56vsftpd
# ln -s /etc/init.d/vsftpd /etc/rc4.d/S56vsftpd
# ln -s /etc/init.d/vsftpd /etc/rc5.d/S56vsftpd
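Alternatively, chkconfig can create and manage these runlevel links for you:

# chkconfig vsftpd on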

Allowing Root Logins to Telnet and FTP Services

Now before getting into the details of how to configure Red Hat Linux for root logins, keep in mind that this is VERY BAD security. Make sure that you NEVER configure your production servers for this type of login.

Configure Telnet for root logins

Simply edit the file /etc/securetty and add the following to the end of the file:

pts/0
pts/1
pts/2
pts/3
pts/4
pts/5
pts/6
pts/7
pts/8
pts/9

This will allow up to 10 telnet sessions to the server as root.
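Instead of typing those lines by hand, a one-liner appends them:

# for i in $(seq 0 9); do echo "pts/$i" >> /etc/securetty; done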

Configure FTP for root logins

Edit the files /etc/vsftpd.ftpusers and /etc/vsftpd.user_list and remove the ‘root’ line from each file.
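A quick way to do that with GNU sed (a .bak backup of each file is kept):

# sed -i.bak '/^root$/d' /etc/vsftpd.ftpusers /etc/vsftpd.user_list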

11g Req on AIX 6.1

Ref: http://download.oracle.com/docs/cd/B28359_01/relnotes.111/b32075.pdf

In addition to the information in the installation guides, the following section contains the system requirements for AIX 6.1:

■ Operating System Requirement

■ Operating System Filesets for AIX 6L

Operating System Requirement

In addition to the supported operating systems listed in the installation guide, AIX 6L, version 6.1, 64-bit kernel is supported with Service Pack 04 or later. Refer to the Oracle Database Installation Guide for AIX 5L Based Systems (64-Bit) for additional information on operating system listings.

Operating System Filesets for AIX 6L

The following operating system filesets are required for Oracle Database 11g Release 1 (11.1):

■ bos.adt.base

■ bos.adt.lib

■ bos.adt.libm

■ bos.perf.libperfstat

■ bos.perf.perfstat

■ bos.perf.proctools

■ xlC.aix61.rte:9.0.0.1 or later

■ xlC.rte:9.0.0.1 or later
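A quick way to verify whether these filesets are installed is lslpp, for example:

# lslpp -l bos.adt.base bos.adt.lib bos.adt.libm \
          bos.perf.libperfstat bos.perf.perfstat bos.perf.proctools
# lslpp -l | grep xlC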

Oracle RAC vs. Single Instance: a System Admin’s point of view

An “Oracle Real Application Cluster” (RAC) is a clustered Oracle database. If a RAC is properly set up, all the nodes (servers) are active at the same time, acting on the same single database. This is very different from a failover cluster. Let’s first take a bird’s-eye overview of a single-instance architecture, compared to the RAC architecture.

Overview of Single Instance:

If you look at a (traditional) Single Server where a single Oracle Instance (8i, 9i, 10g, 11g) is involved, you would see the following situation.

Files:

You can find a number of database files residing on a disk system, among them:

  • system.dbf: contains the dictionary (users, grants, table properties, packages, etc.)
  • undo.dbf: contains “undo/rollback” information about all modifying SQL statements, and thus contains the “former situation” from before transactions are committed to the DB.
  • redo logs: in case of a crash, these write-ahead logs can be used to redo committed transactions that were not yet written to the datafiles, but were logged in the redo logs.
  • user-defined data and index tablespaces: these are datafiles, organized in the logical concept of “tablespaces”. The tablespaces contain the tables (and indexes).

 

Note: a tablespace consists of one or more files. To the operating system there are only files to be concerned with, but from the database perspective the DBA can create a logical entity called a “tablespace”, consisting of possibly multiple files, possibly distributed over multiple disks. If the DBA then creates a table (or index), he or she specifies a tablespace, thereby distributing the (future) table content over multiple files, which might increase I/O performance. So the DBA might create tablespaces with names like, for example, “DATA_BIG”, “DATA_SMALL”, “INDEX_BIG” etc. (see the sketch below).
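As a sketch, creating such a multi-file tablespace and placing a table in it could look like this (the paths, sizes, and names are made up):

sqlplus / as sysdba <<'EOF'
CREATE TABLESPACE data_big
  DATAFILE '/u01/oradata/db1/data_big01.dbf' SIZE 500M,
           '/u02/oradata/db1/data_big02.dbf' SIZE 500M;
CREATE TABLE scott.orders (order_id NUMBER, amount NUMBER)
  TABLESPACE data_big;
EOF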

 

Memory structure and processes:

  • The instance gets created in memory when the DBA (or the system) “starts the database”. Starting the database means that a number of processes become active, and that a rather complex shared memory area gets created. This memory area is called the SGA (System Global Area) and contains some buffers and other pools, of which the following are the most noticeable:
    • buffer cache: datablocks from disk are cached in this buffer. Most of this cached data are blocks from tables and indexes.
    • log buffer: a small memory area which contains modified data that is about to be written to the redo logs.
    • shared pool: all used SQL queries and procedures are cached in this pool (in its library cache).
    • dictionary cache: the system’s metadata is cached in this structure.

 

By the way, an Oracle instance can be extensively configured through a configuration file (traditionally that is the file “init.ora”, which is an ASCII file and can be edited to adjust values). Some of the parameters in that file determine the sizes of the different caches and pools. For example, here is a section that determines the SGA of a small database:

 

db_cache_size        = 268435456

java_pool_size       =  67108864

shared_pool_size     =  67108864

streams_pool_size    =  67108864

 

So, for a system administrator, “an instance” is not a synonym for the database files on disk; it is really the “stuff” that gets loaded or created in memory. After an Oracle database has started, a number of processes are running:

 

pmon : process monitor

smon : system monitor

ckpt : checkpoint process

dbwr : database writer process

lgwr : log writer, the process that writes the redo logs
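On a running system you can see these background processes with ps; for an instance called ORCL (a made-up name) they show up as ora_pmon_ORCL, ora_smon_ORCL, and so on:

# ps -ef | grep '[o]ra_'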

 

Overview of RAC:


 

Let’s begin by reviewing the structure of a Real Application Cluster. Physically, a RAC consists of several nodes (servers), connected to each other by a private interconnect. The database files are kept on a shared storage subsystem, where they’re accessible to all nodes. And each node has a public network connection.

  • A cluster is a set of 2 or more machines (nodes) that share or coordinate resources to perform the same task.
  • A RAC system is 2 or more instances running on a set of clustered nodes, with all instances accessing a shared set of database files (one Database).

 

Depending on the O/S platform, a RAC database may be deployed on a cluster that uses vendor clusterware plus Oracle’s own clusterware (Cluster Ready Services, CRS), or on a cluster that solely uses Oracle’s own clusterware.

 

Thus, every RAC sits on a cluster that is running Cluster Ready Services. srvctl is the primary tool DBAs use to configure CRS for their RAC database and processes.
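A few srvctl examples, using 10g syntax (the database name ORCL, instance name ORCL1, and node name node1 are made up):

# srvctl start database -d ORCL
# srvctl stop instance -d ORCL -i ORCL1
# srvctl status nodeapps -n node1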

 

  • Cluster Ready Services and the OCR: Cluster Ready Services, or CRS, is a new feature for 10g RAC. Essentially, it is Oracle’s own clusterware. On most platforms, Oracle supports vendor clusterware; in these cases, CRS interoperates with the vendor clusterware, providing high availability support and service and workload management. On Linux and Windows clusters, CRS serves as the sole clusterware. In all cases, CRS provides a standard cluster interface that is consistent across all platforms. CRS consists of four processes (crsd, ocssd, evmd, and evmlogger) and two disks: the Oracle Cluster Registry (OCR), and the voting disk.

The CRSD manages the HA functionality by starting, stopping, and failing over the application resources, and by maintaining the profiles and current states in the Oracle Cluster Registry (OCR), whereas the OCSSD manages the participating nodes in the cluster by using the voting disk. The OCSSD also protects against the data corruption potentially caused by “split brain” syndrome, by forcing a machine to reboot.

So, on most platforms, you may see the following processes:

 

oprocd     Process Monitor Daemon

crsd       Cluster Ready Services Daemon (CRSD)

ocssd      Oracle Cluster Synchronization Services Daemon

evmd       Event Manager Daemon

 

To start and stop CRS when the machine boots or shuts down, there are rc scripts in place on unix. We can also, as root, manually start, stop, enable, or disable the services.
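For example, with 10g syntax (run as root):

# crsctl start crs
# crsctl stop crs
# crsctl enable crs       (start CRS automatically at boot)
# crsctl disable crs      (do not start CRS at boot)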

  • CRS manages the following resources:
    • The ASM instances on each node
    • Databases
    • The instances on each node
    • Oracle Services on each node
    • The cluster nodes themselves, including the following processes, or “nodeapps”:
      • VIP
      • GSD
      • The listener
      • The ONS daemon

 

CRS stores information about these resources in the OCR. If the information in the OCR for one of these resources becomes damaged or inconsistent, then CRS is no longer able to manage that resource. Fortunately, the OCR automatically backs itself up regularly and frequently.
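You can list these automatically created backups with ocrconfig (as root):

# ocrconfig -showbackup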

 

So, 10g RAC (10.2) uses, or depends on:

 

  • Oracle Clusterware (10.2), formerly referred to as CRS, “Cluster Ready Services” (10.1).
  • Oracle’s optional Cluster File System (OCFS), or alternatively ASM and raw devices.
  • Oracle Database extensions

 

RAC is “scale out” technology: just add commodity nodes to the system. The key component is “cache fusion”: data is transferred from one node to another via very fast interconnects. Essential to 10g RAC is this “shared cache” technology. The Automatic Workload Repository (AWR) also plays a role. The Fast Application Notification (FAN) mechanism that is part of RAC publishes events to AWR that describe the current service level being provided by each instance. The load balancing advisory information is then used to determine the best instance to serve a new request.

 

  • With RAC, ALL instances on ALL nodes in a cluster access a SINGLE database.
  • But every instance has its own UNDO tablespace and REDO logs.

 

The Oracle Clusterware comprises several background processes that facilitate cluster operations. The Cluster Synchronization Service (CSS), Event Management (EVM), and Oracle Cluster components communicate with the corresponding component layers in the other instances within the same cluster database environment.

 

 

Per implementation, questions arise on the following points:

  • Storage
  • Computer Systems/Storage-Interconnect
  • Database
  • Application Server
  • Public and Private networks
  • Application Control & Display

On the Storage level, it can be said that RAC supports

– Automatic Storage Management (ASM)

– Oracle Cluster File System (OCFS)

– Network File System (NFS) – limited (only on certified NAS devices)

– Disk raw partitions

– Third party cluster file systems, like GPFS

 

For application control and tools, it can be said that 10g RAC supports

– OEM Grid Control: http://hostname:5500/em

– OEM Database Control: http://hostname:1158/em

– “srvctl” is a command line interface to manage the cluster configuration, for example, starting and stopping all nodes in one command.

– Cluster Verification Utility (cluvfy) can be used for an installation and sanity check.

 

Failure of client connections: depending on the Net configuration, type of connection, type of transaction, etc., Oracle Net Services provides a feature called “Transparent Application Failover” (TAF), which can fail over a client session to another backup connection.
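As a sketch, a TAF-enabled tnsnames.ora entry could look like this (the alias, host names, and service name are made up):

ORCL =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
      (LOAD_BALANCE = yes)
    )
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC))
    )
  )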

About HA and DR:

– RAC is HA, High Availability: it keeps things up and running within one site.

– Data Guard is DR, Disaster Recovery: it is able to mirror one site to another remote site.

 

 

Storage with RAC: We have the following storage options:

Raw     Raw devices, no filesystem present

ASM     Automatic Storage Management

CFS     Cluster File System

OCFS    Oracle Cluster File System

LVM     Logical Volume Manager

NFS     Network File System (must be on a certified NAS device)

 

Storage                               Oracle Clusterware    Database    Recovery area
-----------------------------------   ------------------    --------    -------------
Automatic Storage Management          No                    Yes         Yes
Cluster file system (OCFS or other)   Yes                   Yes         Yes
Shared raw storage                    Yes                   Yes         No

 

Here is a description of the file types. A regular single-instance database has three basic types of files:

1. database software and dump files (alert log, trace files and that stuff);

2. datafiles, spfile, control files and log files, often referred to as “database files”;

3. and it may have recovery files, if using RMAN.

and, in case of RAC:

4. A RAC database has an additional type of file referred to as “CRS files”. These consist of the Oracle Cluster Registry (OCR) and the voting disk.

 

Not all of these files have to be on the shared storage subsystem. The database files and CRS files must be accessible to all instances, so these *must be* on the shared storage subsystem. The database software can be on the shared subsystem and shared between nodes, or each node can have its own ORACLE_HOME. The flash recovery area, if used, must be shared by all instances.

 

Some storage options can’t handle all of these file types. To take an obvious example, the database software and dump files can’t be stored on raw devices. This isn’t important for the dump files, but it does mean that choosing raw devices precludes having a shared ORACLE_HOME on the shared storage device.

 

Remarks:

  • On a particular platform, there might exist a vendor-specific solution for shared storage. For example, on AIX it is usually IBM GPFS that is used as a shared file system. But on this platform you might also use Veritas SFRAC: VERITAS Storage Foundation for Oracle Real Application Clusters (SFRAC) provides an integrated solution stack for using clustered filesystems with Oracle RAC on AIX, as an alternative to using raw logical volumes, Automatic Storage Management (ASM), or the AIX General Parallel File System (GPFS).

  • SAN solutions: as far as SANs are concerned, there is no inherent SAN protocol that allows for block-level locking between hosts. Your clustered filesystem is responsible for providing that.