Oracle RAC vs. Single Instance: a System Admin's point of view

An “Oracle Real Application Cluster” (RAC) is a clustered Oracle database. In a properly set up RAC, all nodes (servers) are active at the same time, working on one and the same database. This is very different from a failover cluster. Let’s first take a bird’s-eye overview of a single-instance architecture, compared to the RAC architecture.

Overview of Single Instance:

If you look at a (traditional) single server running a single Oracle instance (8i, 9i, 10g, 11g), you would see the following situation.

Files:

You can find a number of database files residing on a disk system, among them:

  • system.dbf:   contains the dictionary (users, grants, table properties, packages, etc.)
  • undo.dbf:    contains “undo/rollback” information for all modifying SQL statements, and thus holds the “former situation” before transactions are committed to the database.
  • redo logs:    in case of a crash, these write-ahead logs can be used to redo committed transactions that were not yet written to the datafiles, but were logged in the redo logs.
  • user-defined data and index tablespaces: these are data files, organized in the logical concept of “tablespaces”. The tablespaces contain the tables (and indexes).

 

Note: a tablespace consists of one or more files. To the operating system there are only files to be concerned with, but from the database perspective the DBA can create a logical entity called a “tablespace”, consisting of possibly multiple files, possibly distributed over multiple disks. If the DBA then creates a table (or index), he or she specifies a tablespace, thereby distributing the (future) table content over multiple files, which might increase I/O performance. So the DBA might create tablespaces with names like, for example, “DATA_BIG”, “DATA_SMALL”, “INDEX_BIG”, etc.
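As a purely illustrative sketch (the tablespace name, datafile paths and sizes below are made up), the DBA could create such a multi-file tablespace, and place a table in it, from SQL*Plus like this:

sqlplus / as sysdba <<'EOF'
-- a tablespace spread over two files on two different disks
CREATE TABLESPACE data_big
  DATAFILE '/u01/oradata/prod/data_big01.dbf' SIZE 500M,
           '/u02/oradata/prod/data_big02.dbf' SIZE 500M;
-- a table explicitly placed in that tablespace
CREATE TABLE scott.orders (order_id NUMBER, order_date DATE) TABLESPACE data_big;
EOF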

 

Memory structure and processes:

  • The Instance gets created in memory when the DBA (or the system) “starts the database”. Starting the database means that a number of processes become active, and that a rather complex shared memory area gets created. This memory area is called the SGA (System Global Area) and contains several buffers and other pools, of which the following are the most noticeable:
    • buffer cache: data blocks from disk are cached in this buffer. Most of this cached data are blocks from tables and indexes.
    • log buffer: a small memory area containing modified data which is about to be written to the redo logs.
    • shared pool: all recently used SQL queries and procedures are cached in this pool
    • library cache: part of the shared pool; it caches parsed SQL and PL/SQL, while the related dictionary cache holds the system’s metadata

 

By the way, an Oracle instance is largely configured through a configuration file (traditionally the file “init.ora”, an ASCII file that can be edited to adjust values). Some of the parameters in that file determine the sizes of the different caches and pools. For example, here is a section that determines the SGA of a small database:

 

db_cache_size        = 268435456

java_pool_size       = 67108864

shared_pool_size     = 67108864

streams_pool_size    = 67108864
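To see what a running instance actually allocated, a quick sketch (exact output differs per version) is to ask SQL*Plus:

sqlplus / as sysdba <<'EOF'
show sga
show parameter db_cache_size
show parameter shared_pool_size
EOF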

 

So, for a system administrator, “an instance” is not synonymous with the database files on disk; it is really the “stuff” that gets loaded or created in memory. After an Oracle database has started, a number of background processes are running:

 

pmon    : process monitor

smon    : system monitor

ckpt    : checkpoint process

dbwr    : database writer process

lgwr    : log writer, the process that writes the redo logs
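From the OS side these background processes are easy to spot; a minimal check (the instance name ORCL is just an example) is:

ps -ef | grep ora_ | grep -v grep
# typical lines look like ora_pmon_ORCL, ora_smon_ORCL, ora_ckpt_ORCL, ora_dbw0_ORCL, ora_lgwr_ORCL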

 

Overview of RAC:


 

Let’s begin by reviewing the structure of a Real Application Clusters database. Physically, a RAC consists of several nodes (servers), connected to each other by a private interconnect. The database files are kept on a shared storage subsystem, where they’re accessible to all nodes. And each node has a public network connection.

  • A cluster is a set of 2 or more machines (nodes) that share or coordinate resources to perform the same task.
  • A RAC system is 2 or more instances running on a set of clustered nodes, with all instances accessing a shared set of database files (one Database).

 

Depending on the O/S platform, a RAC database may be deployed on a cluster that uses vendor clusterware plus Oracle’s own clusterware (Cluster Ready Services, CRS), or on a cluster that solely uses Oracle’s own clusterware.

 

Thus, every RAC sits on a cluster that is running Cluster Ready Services. srvctl is the primary tool DBAs use to configure CRS for their RAC database and processes.
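A few typical srvctl calls, just as a sketch (the database name RACDB, instance name RACDB1 and node name node1 are placeholders):

srvctl status database -d RACDB             # state of all instances of the database
srvctl start instance -d RACDB -i RACDB1    # start a single instance
srvctl stop database -d RACDB               # stop the whole RAC database
srvctl status nodeapps -n node1             # VIP, GSD, listener and ONS on one node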

 

  • Cluster Ready Services and the OCR:  Cluster Ready Services, or CRS, is a new feature for 10g RAC. Essentially, it is Oracle’s own clusterware. On most platforms, Oracle supports vendor clusterware; in these cases, CRS interoperates with the vendor clusterware, providing high availability support and service and workload management. On Linux and Windows clusters, CRS serves as the sole clusterware. In all cases, CRS provides a standard cluster interface that is consistent across all platforms. CRS consists of four processes (crsd, ocssd, evmd, and evmlogger) and two disks: the Oracle Cluster Registry (OCR) and the voting disk.

The CRSD manages the HA functionality by starting, stopping, and failing over the application resources and by maintaining the profiles and current states in the Oracle Cluster Registry (OCR), whereas the OCSSD manages the participating nodes in the cluster by using the voting disk. The OCSSD also protects against the data corruption potentially caused by “split brain” syndrome by forcing a machine to reboot.

So, on most platforms, you may see the following processes:

 

oprocd          the Process Monitor Daemon

crsd            the Cluster Ready Services Daemon (CRSD)

ocssd           the Oracle Cluster Synchronization Services Daemon

evmd            the Event Manager Daemon

 

To start and stop CRS when the machine starts or shuts down, there are rc scripts in place on Unix.

We can also, as root, manually start, stop, enable or disable the services.
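As a sketch of those root-level operations on 10.2 (exact commands and paths can differ per platform and release):

crsctl check crs       # quick health check of the CSS, CRS and EVM daemons
crsctl stop crs        # stop the clusterware stack on this node (as root)
crsctl start crs       # start it again (as root)
crsctl disable crs     # do not start CRS automatically at boot
crsctl enable crs      # re-enable automatic startup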

  • CRS manages the following resources:
    • The ASM instances on each node
    • Databases
    • The instances on each node
    • Oracle Services on each node
    • The cluster nodes themselves, including the following processes, or “nodeapps”:
      • VIP
      • GSD
      • The listener
      • The ONS daemon

 

CRS stores information about these resources in the OCR. If the information in the OCR for one of these resources becomes damaged or inconsistent, then CRS is no longer able to manage that resource. Fortunately, the OCR automatically backs itself up regularly and frequently.
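Whether the OCR is healthy, and where those backups live, can be checked with the OCR tools; a minimal sketch (the export file name is just an example):

ocrcheck                          # verify OCR integrity, location and size
ocrconfig -showbackup             # list the automatic OCR backups kept by CRS
ocrconfig -export /tmp/ocr.exp    # manual logical export of the OCR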

 

So, summarizing:

 

10g RAC (10.2) uses, or depends on:

 

  • Oracle Clusterware (10.2), formerly referred to as CRS, “Cluster Ready Services” (10.1).
  • Oracle’s optional Cluster File System (OCFS), or alternatively ASM and/or raw devices.
  • Oracle Database extensions

 

RAC is “scale out” technology: just add commodity nodes to the system. The key component is “cache fusion”: data is transferred from one node to another via very fast interconnects. Essential to 10g RAC is this “shared cache” technology. The Automatic Workload Repository (AWR) plays a role as well: the Fast Application Notification (FAN) mechanism that is part of RAC publishes events to AWR that describe the current service level being provided by each instance. The load balancing advisory information is then used to determine the best instance to serve a new request.

 

  • With RAC, ALL instances on ALL nodes in the cluster access a SINGLE database.
  • But every instance has its own UNDO tablespace and its own REDO logs (see the parameter sketch below).
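In the (shared) parameter file this is typically expressed with instance-prefixed parameters; a sketch with made-up instance names RAC1 and RAC2:

RAC1.instance_number=1
RAC2.instance_number=2
RAC1.thread=1
RAC2.thread=2
RAC1.undo_tablespace='UNDOTBS1'
RAC2.undo_tablespace='UNDOTBS2'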

 

The Oracle Clusterware comprises several background processes that facilitate cluster operations. The Cluster Synchronization Services (CSS), Event Management (EVM), and Oracle Cluster components communicate with the corresponding component layers in the other instances within the same cluster database environment.

 

 

Per implementation, questions arise in the following areas:

  • Storage
  • Computer Systems/Storage-Interconnect
  • Database
  • Application Server
  • Public and Private networks
  • Application Control & Display

On the storage level, it can be said that RAC supports:

– Automatic Storage Management (ASM)
– Oracle Cluster File System (OCFS)
– Network File System (NFS) – limited support (and, in practice, mostly theoretical)
– Raw disk partitions
– Third-party cluster file systems, like GPFS

 

For application control and tools, it can be said that 10g RAC supports:

– OEM Grid Control     http://hostname:5500/em
– OEM Database Control http://hostname:1158/em
– “srvctl”, a command-line interface to manage the cluster configuration, for example starting and stopping all nodes with one command.
– Cluster Verification Utility (cluvfy), which can be used for installation and sanity checks.
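Typical cluvfy invocations, as a sketch (node names are placeholders):

cluvfy comp nodecon -n node1,node2 -verbose    # check node connectivity over the interconnect
cluvfy stage -pre crsinst -n node1,node2       # checks before installing the clusterware
cluvfy stage -post crsinst -n node1,node2      # sanity check after the clusterware install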

 

Failure in client connections: depending on the Net configuration, the type of connection, the type of transaction, etc., Oracle Net Services provides a feature called “Transparent Application Failover” (TAF), which can fail over a client session to a backup connection.
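A hedged example of such a TAF entry in tnsnames.ora (host, port and service names are placeholders):

RACDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVICE_NAME = racdb)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 20)(DELAY = 5))
    )
  )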

About HA and DR:

– RAC is HA (High Availability): it keeps things up and running within one site.

– Data Guard is DR (Disaster Recovery): it is able to mirror one site to another, remote site.

 

 

Storage with RAC: We have the following storage options:

Raw       Raw devices, no filesystem present
ASM       Automatic Storage Management
CFS       Cluster File System
OCFS      Oracle Cluster File System
LVM       Logical Volume Manager
NFS       Network File System (must be on a certified NAS device)

 

Storage                                  Oracle Clusterware    Database    Recovery area
--------------------------------------   ------------------    --------    -------------
Automatic Storage Management             No                    Yes         Yes
Cluster file system (OCFS or other)      Yes                   Yes         Yes
Shared raw storage                       Yes                   Yes         No

 

Here is a description of the file types involved. A regular single-instance database has three basic types of files:

1. database software and dump files (alert log, trace files and that stuff);

2. datafiles, spfile, control files and log files, often referred to as “database files”;

3. and it may have recovery files, if using RMAN.

And, in the case of RAC:

4. a RAC database has an additional type of file, referred to as “CRS files”. These consist of the Oracle Cluster Registry (OCR) and the voting disk.

 

Not all of these files have to be on the shared storage subsystem. The database files and CRS files must be accessible to all instances, so these *must be* on the shared storage subsystem. The database software can be on the shared subsystem and shared between nodes, or each node can have its own ORACLE_HOME. The flash recovery area must be shared by all instances, if used.

 

Some storage options can’t handle all of these file types. To take an obvious example, the database software and dump files can’t be stored on raw devices. This isn’t important for the dump files, but it does mean that choosing raw devices precludes having a shared ORACLE_HOME on the shared storage device.

 

Remarks:

  • On a particular platform, there might exist a vendor-specific solution for shared storage. For example, on AIX it is usually IBM GPFS that is used as a shared file system. But on this platform you might also use SFRAC from Veritas. VERITAS Storage Foundation for Oracle Real Application Clusters (SFRAC) provides an integrated solution stack for using clustered filesystems with Oracle RAC on AIX, as an alternative to using raw logical volumes, Automatic Storage Management (ASM) or the AIX General Parallel File System (GPFS).

  • SAN solutions: as far as SAN goes, there is no inherent SAN protocol that allows for block-level locking between hosts. Your clustered filesystem is responsible for providing that.

 

EMC LUN info gathering on AIX host

# list every EMC PowerPath device (hdiskpower*) with its size in MB
for emcdisk in $(lsdev -C -c disk -F name | grep hdiskpower)
do
  emcsize=$(bootinfo -s $emcdisk)   # bootinfo -s reports the disk size in MB
  echo ${emcdisk} ${emcsize}
done

The figures are in MB. Assuming the loop is saved as emcinfo.sh, the hdiskpower name and size columns can be extracted like this:

./emcinfo.sh | tr -s ' ' ';' | cut -d';' -f1
./emcinfo.sh | tr -s ' ' ';' | cut -d';' -f2

File system sizes:
df -mI | tr -s ' ' ';' | cut -d';' -f6
df -mI | tr -s ' ' ';' | cut -d';' -f3
df -mI | tr -s ' ' ';' | cut -d';' -f4
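And a quick way to total the LUN capacity reported by the same script (just a convenience sketch, reusing the emcinfo.sh name from above):

./emcinfo.sh | awk '{ total += $2 } END { print "Total MB:", total }'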

Resetting an unknown root password

  1. Insert the product media for the same version and level as the current installation into the appropriate drive.
  2. Power on the machine.
  3. When the screen of icons appears, or when you hear a double beep, press the F1 key repeatedly until the System Management Services menu appears.
  4. Select Multiboot.
  5. Select Install From.
  6. Select the device that holds the product media and then select Install.
  7. Select the AIX version icon.
  8. Define your current system as the system console by pressing the F1 key and then press Enter.
  9. Select the number of your preferred language and press Enter.
  10. Choose Start Maintenance Mode for System Recovery by typing 3 and press Enter.
  11. Select Access a Root Volume Group. A message displays explaining that you will not be able to return to the Installation menus without rebooting if you change the root volume group at this point.
  12. Type 0 and press Enter.
  13. Type the number of the appropriate volume group from the list and press Enter.
  14. Select Access this Volume Group and start a shell by typing 1 and press Enter.
  15. At the # (number sign) prompt, type the passwd command to reset the root password. For example:
    # passwd
    Changing password for "root"
    root's New password: 
    Enter the new password again:
  16. To write everything from the buffer to the hard disk and reboot the system, type the following:
    sync;sync;sync;reboot

How to install and run CDE on a non-graphical AIX system.

I recently came across a standalone / non-HMC system without a console and had to install the CDE X11 filesets from the base OS. X11.Dt was on Volume 2 of the AIX base OS media; it does prompt for Volume 3 if needed.

1. Install X11.Dt.* by running smitty install_all -> cd0 -> F4 -> /X11.Dt
2. Run /usr/dt/bin/dtconfig -e
3. Run the script /etc/rc.dt

IBM Network card configuration

If you change network card hardware on IBM systems, make sure the “media_speed” attribute is set to “100_Full_Duplex”, or backups will suffer severely. By default, cards are set to “Auto_Negotiation”.

Current settings:

lsattr -El ent0

busmem 0xe0080000 Bus memory address False

rom_mem 0xe0040000 ROM memory address False

busintr 179 Bus interrupt level False

intr_priority 3 Interrupt priority False

txdesc_que_sz 512 TX descriptor queue size True

rxdesc_que_sz 1024 RX descriptor queue size True

tx_que_sz 8192 Software transmit queue size True

media_speed Auto_Negotiation Media speed True

copy_bytes 2048 Copy packet if this many or less bytes True

use_alt_addr no Enable alternate ethernet address True

alt_addr 0x000000000000 Alternate ethernet address True

slih_hog 10 Interrupt events processed per interrupt True

rx_hog 1000 RX buffers processed per RX interrupt True

intr_rate 10000 Interrupt events processed per interrupt True

compat_mode no Gigabit Backward compatability True

flow_ctrl yes Enable Transmit and Receive Flow Control True

jumbo_frames no Transmit jumbo frames True

chksum_offload yes Enable hardware transmit and receive checksum True

large_send yes Enable hardware TX TCP resegmentation True

rxbuf_pool_sz 1024 RX descriptor queue size True

Checking available settings for “media_speed”

lsattr -El ent0 -a media_speed

10_Half_Duplex

10_Full_Duplex

100_Half_Duplex

100_Full_Duplex

Auto_Negotiation

Using WebSM, open a console to the system you want to change.

To correct the “media_speed” setting, do the following:

rmdev -l en0

rmdev -l ent0

chdev -l ent0 -a media_speed=100_Full_Duplex

cfgmgr

Check results:

lsattr -El ent0

media_speed 100_Full_Duplex Media speed True

Reboot the server to make sure your settings take effect.


AIX – Performance Tuning Standards

  • AIO Servers

Default Values:

Minservers = 2
Maxservers = 10
Maxrequests = 4096

Rule of Thumb for an Oracle Database System:

maxservers = 300
minservers = 100
maxreqs = 8192

Command:   chdev -l aio0 -P -a maxservers=$MAX -a minservers=$MIN -a maxreqs=8192
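The current AIO settings can be verified before and after the change, for example:

lsattr -El aio0 -a minservers -a maxservers -a maxreqs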

  • JFS Buffer-Cache

Default Values:

maxperm = 80%
minperm = 20%
strict_maxperm = 0

Rule of Thumb for Database System

DB is on FileSystem & Mounted as DIO

Strict_maxperm = 1
maxperm = 20%
minperm  = 5%

Rule of Thumb for system with > 2 Gbyte of RAM

Strict_maxperm = 1
maxperm = 20%
minperm  = 5%

Command: [AIX 5.2 and above] vmo -p -o maxclient%=20
Command: [AIX 5.2 and above] vmo -p -o strict_maxperm=1
Command: [AIX 5.2 and above] vmo -p -o maxperm%=20
Command: [AIX 5.2 and above] vmo -p -o minperm%=5
Command: [AIX 5.1 and below] vmtune -p $MINPERM -P $MAXPERM

To view all currently set values:  [AIX 5.2 and above] vmo -a
To view an individual value:  [AIX 5.2 and above] vmo -o maxclient%

  • Client File Pages [JFS2 Buffer Cache]

Default

maxclient = 80%
strict_maxclient = 1

Rule of Thumb for Database System

DB is on FileSystem & Mounted as DIO

maxclient = maxperm

Note: strict_maxclient by default is already turned on

Command: [AIX 5.1 and below] vmtune -t $MAXCLIENT
Command: [AIX 5.2 and above] vmo -p -o maxclient%=20

  • Maxfree/Minfree Memory [Page Stealing]

Default  Values:

minfree = 120
maxfree = 128

Rule of Thumb for System

minfree = 120 * Quantity of CPUs * Quantity of Memory Pools
maxfree = (minfree + maxpgahead) * Quantity of CPUs

The quantity of memory pools and the maxpgahead value can be determined by executing vmtune -a and looking for the total memory pools and maxpgahead values.
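A worked example of the rule of thumb, assuming a box with 4 CPUs, 1 memory pool and maxpgahead = 8:

minfree = 120 * 4 * 1   = 480
maxfree = (480 + 8) * 4 = 1952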

Command: [AIX 5.1 and below] vmtune -f $MIN -F $MAX
Command: [AIX 5.2 and above] vmo -p -o minfree=$MIN -o maxfree=$MAX

  • Fibre-Channel Device Settings (HBA)

Maximum I/O Transfer Size
Default Value
max_xfer_size = 0x100000  [ 1 MB ]

Maximum number of COMMANDS to queue to the adapter
Default Value
num_cmd_elems = 200

HBA Direct Memory Access transfer buffer
Default value
lg_term_dma = 0x200000 [ 2 MB ]

Rule of Thumb

max_xfer_size = 0x400000  [ 4 MB ]
num_cmd_elems = 512  ( 1024 if a FA is dedicated to that HBA )
lg_term_dma = 0x1000000 [  16 MB ]

Command: chdev -l fcs0 -P -a max_xfer_size=0x400000 -a num_cmd_elems=512 -a lg_term_dma=0x1000000

*Note: this changes the values for device fcs0; there might be multiple HBAs (e.g. fcs1, fcs2, etc.).
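The current HBA values can be checked before (and after) making the change, for example:

lsattr -El fcs0 -a max_xfer_size -a num_cmd_elems -a lg_term_dma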

  • HDISK tuning – high I/O systems

On high-I/O systems (like a data warehouse), we set the following on each hdisk.

Note:  hdisk can only be tuned while mount points are not mounted

Queue Depth
Default Value
queue_depth=8

Max transfer buffer
Default Value
max_transfer=

Rule of Thumb
queue_depth = [ 32 if disk is a 4 way meta ]  [ 64 if disk is a 8 way meta ]
max_transfer = 0x100000 [ 1 MB ]

The following commands should be run with the hdisk? and hdiskpower? devices in Defined state. The symptom of the underlying problem is that, while attempting to add a disk to a volume group, you get a message like “extendvg: LTG must be less than or equal to max_transfer, blah, blah”.

root # rmdev -l hdiskpower?
root # rmdev -l hdisk?

root # chdev -l hdiskpower? -P -a queue_depth=32 -a max_transfer=0x100000
root # cfgmgr

Note:  The -P flag on the chdev command allows you to make the change to the device’s characteristics permanently in the Customized Devices object class without actually changing the device. This is useful for devices that cannot be made unavailable and cannot be changed while in the Available state.  In most cases, as in changing characteristics on a new disk, you would not use the -P flag.
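A minimal sketch to apply the rule of thumb to every PowerPath device in one go (assumes EMC PowerPath naming as in the LUN script earlier, and that the disks are not in use; review the device list before running it):

for d in $(lsdev -C -c disk -F name | grep hdiskpower)
do
  rmdev -l $d                                            # put the device in Defined state first
  chdev -l $d -a queue_depth=32 -a max_transfer=0x100000
done
cfgmgr                                                   # bring the devices back to Available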

Moving an LPAR to another frame

Someone asked me this and I couldn’t explain it offhand, so for whoever faces it:

1. Have Storage zone the LPAR’s disks to the new HBA(s).  Also have them add an additional 40GB drive for the new boot disk.  By doing this we have a back-out path to the old boot disk on the old frame.

2. Collect data from the current LPAR:

a. Network information – write down the IP and IPv4 alias(es) for each interface

b. Run “oslevel -r” – you will need this when setting up NIM for the mksysb recovery

c. Is the LPAR running AIO? If so, it will need to be configured after the mksysb recovery

d. Run “lspv” and save the output; it contains volume group and PVID information

e. Any other customizations you deem necessary
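A minimal sketch that captures most of step 2 in one file (the file name is just an example):

( hostname; ifconfig -a; oslevel -r; lsattr -El aio0; lspv ) > /tmp/lpar_predata.txt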

3. Create a mksysb backup of this LPAR

4. Reconfigure the NIM machine for this LPAR, with new Ethernet MAC address.  Foolproof method is to remove the machine and re-create it.

5. In NIM, configure the LPAR for a mksysb recovery.  Select the appropriate SPOT and LPP Source, based on the “oslevel -r” data collected in step 2.

6. Shut down the LPAR on the old frame (Halt the LPAR)

7. Move network cables, fibre cables, disk and zoning, as needed, to the LPAR on the new frame

9. On the HMC, bring up the LPAR on the new frame in SMS mode and select a network boot.  Verify that the SMS profile has only a single HBA (if CLARiiON attached, zoned to a single SP), otherwise the recovery will fail with a 554.

10. Follow the prompts for building a new OS.  Select the new 40GB drive for the boot disk (use the lspv info collected in step 2 to identify the correct 40GB drive).  Leave the defaults (NO) for the remaining questions (shrink file systems, recover devices, and import volume groups).

11. After the LPAR has booted, from the console (the network interface may be down):

a. lspv                                            Note the hdisk# of the boot disk

b. bootlist -m normal -o                           Verify the boot list is set; if not, set it:

   bootlist -m normal -o hdisk#

c. ifconfig en0 down                               If the interface got configured, down it

d. ifconfig en0 detach                             and remove it

e. lsdev -Cc adapter                               Note the Ethernet interfaces (e.g. ent0, ent1)

f. rmdev -dl <en#>                                 Remove all en devices

g. rmdev -dl <ent#>                                Remove all ent devices

h. cfgmgr                                          Will rediscover the en/ent devices

i. chdev -l <ent#> -a media_speed=100_Full_Duplex  Set on each interface unless running GIG; otherwise leave the defaults

j. Configure the network interfaces and aliases, using the info recorded in step 2:

   mktcpip -h <hostname> -a <IP> -m <netmask> -i <en#> -g <gateway> -A no -t N/A -s

   chdev -l en# -a alias4=<alias IP>,<netmask>

k. Verify that the network is working.

12. If LPAR was running AIO (data collected in Step 2), verify it is running (smitty aio)

13. Check for any other customizations which may have been made on this LPAR

14. Vary on the volume groups; use the “lspv” data collected in step 2 to identify, by PVID, an hdisk in each volume group.  Run for each volume group:

a. importvg -y <vgname> hdisk#                     Will vary on all hdisks in the volume group

b. varyonvg <vgname>

c. mount all                                       Verify the mounts are good

15. Verify paging space is configured appropriately

a. lsps -a                                         Look for Active and Auto set to yes

b. chps -ay pagingXX                               Run for each paging space; sets Auto

c. swapon /dev/pagingXX                            Run for each paging space; sets Active

16. Verify the LPAR is running the 64-bit kernel

a. bootinfo -K                                     If it returns 64, you are good

b. ln -sf /usr/lib/boot/unix_64 /unix              If 32, change to run 64-bit:

c. ln -sf /usr/lib/boot/unix_64 /usr/lib/boot/unix

d. bosboot -ak /usr/lib/boot/unix_64

17. If the LPAR has PowerPath

a. Run “powermt config”                            Creates the powerpath0 device

b. Run “pprootdev on”                              Sets PowerPath control of the boot disk

c. If CLARiiON, make configuration changes to enable SP failover:

chdev -l powerpath0 -Pa QueueDepthAdj=1

chdev -l fcsX -Pa num_cmd_elems=2048               For each fibre adapter

chdev -l fscsiX -Pa fc_err_recov=fast_fail         For each fibre adapter

d. Halt the LPAR

e. Activate the Normal profile                     If Sym/DMX, verify two HBAs in the profile

f. If CLARiiON attached, have Storage add a zone to the 2nd SP

   i. Run cfgmgr                                   Configures the 2nd set of disks

g. Run “pprootdev fix”                             Puts the rootdisk PVIDs back on the hdisks

h. lspv | grep rootvg                              Get the boot disk hdisk#

i. bootlist -m normal -o hdisk# hdisk#             Set the boot list with both hdisks

20. From the HMC, remove the LPAR profile from the old frame

21. Pull cables from the old LPAR (Ethernet and fiber), deactivate patch panel ports

22. Update documentation, Server Master, AIX Hardware spreadsheet, Patch Panel spreadsheet

23. Return the old boot disk to storage.

Boot Problem Management – Quick Guide

LED 553
Access the rootvg.  Issue 'df -k'.  Check if /tmp, /usr or / are full.

LED 553
Access the rootvg.  Check /etc/inittab (empty, missing or corrupt?).  Check /etc/environment.

LED 551, 555, 557
Access the rootvg.  Re-create the BLV:
# bosboot -ad /dev/hdiskX

LED 551, 552, 554, 555, 556, 557
Access rootvg before mounting the rootvg filesystems.  Re-create the JFS log:
# logform /dev/hd8
Run fsck afterwards.

LED 552, 554, 556
Run fsck against all rootvg filesystems.  If fsck indicates errors (not an AIX V4 filesystem), repair the superblock (each filesystem has two superblocks, one in logical block 1 and a copy in logical block 31, so copy block 31 to block 1):
# dd count=1 bs=4k skip=31 seek=1 if=/dev/hd4 of=/dev/hd4

LED 551
Access rootvg and unlock the rootvg:
chvg -u rootvg

LED 523 – 534
ODM files are missing or inaccessible.  Restore the missing files from a system backup.

LED 518
Mount of /usr or /var failed?  Check /etc/filesystems.  Check the network (remote mounts), the filesystems (fsck) and the hardware.

http://publib16.boulder.ibm.com/pseries/en_US/infocenter/base/ledsrch.htm

Ether Channel configuration

System requirements

Two network interfaces (ent0 & ent1) in “Available”  state.

1. Put in a request to the netdatacom team to activate two ports on the patch panel specifically for EtherChannel.

2. AIX 5.2 ML 03 (minimum requirement)

Procedure

Log on to the system as root from the console  ----  VERY IMPORTANT

Check available interfaces

lsdev -Cc adapter

Down and detach the interfaces that will be used for EtherChannel

ifconfig en0 down detach

ifconfig en1 down detach

Remove the devices from device list

rmdev -dl ent0
rmdev -dl ent1
rmdev -dl ent2
rmdev -dl en0
rmdev -dl en1
rmdev -dl en2

Bring back the devices to device list

cfgmgr

Set up the speed of the NICs to 100 Mbps Full Duplex

chdev -l ent0 -a media_speed=100_Full_Duplex

chdev -l ent1 -a media_speed=100_Full_Duplex

Set up the interfaces for EtherChannel via smitty

smit etherchannel

 Add An EtherChannel / Link Aggregation

Select primary interface ent0

Move the cursor to “Backup Adapter” and hit “Esc-4” to list devices. (Please note that F4 might not work on the console.)

Choose the backup adapter: ent1

Leave other values at default and hit “Enter”

The EtherChannel device ent2 will be created.

Do the standard procedure for adding an IP address to interface en2.  The preferred way to do this is

smit mktcpip (and select en2 as the interface).
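If you prefer the command line over smit for that last step, assigning the IP to the new interface could look like this (address and netmask are placeholders):

chdev -l en2 -a netaddr=10.1.1.10 -a netmask=255.255.255.0 -a state=up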

UNKNOWN_ user in /etc/security/failedlogin AIX

An UNKNOWN_ entry appears when somebody tries to log on with a user ID that is not known to the system. It would be possible to show the user ID they attempted to use, but this is not done, because a common mistake is to enter the password instead of the user ID; if this were recorded, it would be a security risk.