About Us

RSInfoMinds is a web-based IT training and consultancy firm, established with the ambition of training people in the IT infrastructure field. We provide online and classroom training in various areas of IT infrastructure management.

Join Us: http://www.facebook.com/RSInfoMinds
Mail Us: rsinfominds@gmail.com
Twitter: @RSInfoMinds

We specialize in the following courses:

Redhat Linux Admin
Redhat Linux Cluster
Redhat Virtualization
IBM AIX Admin
IBM AIX Virtualization
IBM AIX Cluster
HP Unix Admin
HP Unix Cluster
HP Unix Virtualization
Shell Scripting
Veritas Volume Manager
Veritas Cluster
Oracle Core DBA
VMware


We provide training in such a way that you gain in-depth knowledge of the courses you choose.

We also ensure you become confident in each and every technical aspect that the IT industry needs and expects from you.

We also conduct workshops on the latest technologies, where faculty with real-time industry experience share their work experience to make you the best.

Monday 27 February 2012

How to find the devices attached to the FC Card in AIX and HP-Unix

This post shows the easiest way to find the devices connected to an FC HBA.

AIX:

# lsdev -Cc adapter | grep -i fcs : Command to list the FC adapters attached to the machine.

The output of the command shows the location code of each FC adapter as "port_number-slot_number". Make a note of these numbers.

# lsdev -Cc disk | grep "##-##" : Command to fetch the devices connected to that "port_number-slot_number".

HP-Unix:

# ioscan -funC fc : Command to list the FC adapters attached to the machine.

Make a note of the device file name of the FC adapter ("/dev/fcd#" or "/dev/td#", depending on the driver).

# /opt/fcms/bin/fcmsutil /dev/fcd# get remote all : Command that lists the devices connected to that adapter.
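As a rough illustration of how the two AIX outputs fit together, here is a small Python sketch that matches disks to an adapter's location code. The sample lsdev lines below are hypothetical; real output formats vary by system and driver.

```python
# Hypothetical `lsdev -Cc adapter` and `lsdev -Cc disk` output lines.
adapters = """\
fcs0 Available 04-08 FC Adapter
fcs1 Available 04-09 FC Adapter"""
disks = """\
hdisk2 Available 04-08-01 MPIO FC Disk
hdisk3 Available 04-08-01 MPIO FC Disk
hdisk4 Available 04-09-01 MPIO FC Disk"""

def disks_behind(adapter_lines, disk_lines, adapter_name):
    """Return disks whose location code starts with the adapter's code."""
    loc = None
    for line in adapter_lines.splitlines():
        fields = line.split()
        if fields[0] == adapter_name:
            loc = fields[2]            # e.g. "04-08"
    return [l.split()[0] for l in disk_lines.splitlines()
            if l.split()[2].startswith(loc)]

print(disks_behind(adapters, disks, "fcs0"))   # ['hdisk2', 'hdisk3']
```

On a real machine you would of course just grep the live lsdev output as shown above; the sketch only demonstrates the location-code matching.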


Tuesday 21 February 2012

Priority of a Process in AIX

A process is a program under execution.

A process can be executed in the foreground or in the background.

The priority of a process executed in the foreground is 20.

The priority of a process executed in the background is 24.

The priority of a process can be changed with the nice command.

The nice increment for a process ranges from -20 to +19.

The smaller the value, the more favored the process.

# vmstat 2 5 : Command executed with the priority value of 20.

# nice -n 5 vmstat 2 5 : Command executed with the priority value of 20+5 = 25 (less favored).

# nice -n -5 vmstat 2 5 : Command executed with the priority value of 20-5 = 15 (more favored).
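The arithmetic above can be sketched in a few lines of Python, assuming the default foreground value of 20 stated earlier:

```python
DEFAULT_FOREGROUND = 20  # default value of a foreground process, as stated above

def effective_priority(increment=0, base=DEFAULT_FOREGROUND):
    """Priority value after `nice -n <increment>`: smaller = more favored."""
    return base + increment

print(effective_priority())     # 20 : plain `vmstat 2 5`
print(effective_priority(5))    # 25 : `nice -n 5 ...`, less favored
print(effective_priority(-5))   # 15 : `nice -n -5 ...`, more favored
```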

A process can have a fixed or a non-fixed (dynamic) priority.

A fixed priority can be assigned to a process using the setpri() subroutine:

retcode = setpri(ProcessID, PriorityValue);

# ps -lu <user> : Command that shows the priority of all the processes owned by the user.

Monday 20 February 2012

Disk Based Heart Beat Configuration in HACMP_AIX

This post shows the step-by-step procedure for configuring disk-based heartbeat polling in an HACMP environment.

Heartbeat polling refers to the process of checking the connectivity between the nodes in the cluster.

Heartbeat polling can be configured over a TCP/IP or a non-TCP/IP network.

When the heartbeat fails, failover happens based on the policy configured in HACMP.

1) Select the smallest common (shared) physical volume.

# lspv
# bootinfo -s hdisk# : Command to view the size of the disk.

2) Create an enhanced concurrent volume group.

# mkvg -y disk_beat_polling -s 4 -C -n hdisk1

or

# smitty hacmp --> C-SPOC --> HACMP Concurrent Logical Volume Management --> Concurrent Volume Groups --> Create a Concurrent Volume Group --> select the nodes in the cluster --> select the PVID --> enter the volume group name and press Enter.

3) Perform discovery of HACMP-related information.

4) Add the disk heartbeat network.

# smitty hacmp --> Extended Configuration --> Extended Topology Configuration --> Configure HACMP Networks --> Add a Network to the HACMP Cluster --> select "diskhb" from the predefined serial device types and press Enter.

The default TCP/IP network name : net_ether_##

The default Disk based network name : net_diskhb_##

5) Configure communication Interfaces / Devices.

# smitty hacmp --> Extended Configuration --> Extended Topology Configuration --> Configure HACMP Communication Interfaces/Devices --> Add Communication Interfaces/Devices --> Add a Discovered Communication Interface and Devices --> Communication Devices --> select the nodes in the cluster and press Enter.

Now perform verification and synchronization. Finally, start the cluster services.

Successful configuration of disk heartbeat polling can be verified with:

# lssrc -ls topsvcs

Look for the columns "HB Interval", "Sensitivity" and "Missed HBs".

Wednesday 15 February 2012

How to check the "Run Queue and Swap Queue Size"

# sar -q 5 3

The above command is used to determine the run queue and swap queue sizes.

We can find the number of threads/processes waiting in the run queue using the "vmstat" command, but the above gives much more visibility in terms of the percentage of time the run and swap queues were occupied.

runq-sz : Shows the average no. of threads present in the run queue.

%runocc : Percentage of time the run queue was occupied.

swpq-sz : Shows the average no. of threads present in the swap queue.

%swpocc : Percentage of time the swap queue was occupied.
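As an illustration, the four columns can be picked out of an averages line with a few lines of Python. The sample line below is hypothetical; actual sar output spacing and headers vary by AIX level.

```python
# Hypothetical "Average" line from `sar -q` output:
#   <label>  runq-sz  %runocc  swpq-sz  %swpocc
sample = "Average   2.0  60   1.0  20"

# split() collapses the variable whitespace; skip the leading label.
runq_sz, runocc, swpq_sz, swpocc = sample.split()[1:]

print(float(runq_sz), float(runocc))   # 2.0 60.0
print(float(swpq_sz), float(swpocc))   # 1.0 20.0
```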

Use "vmstat" to find the disk performance

# vmstat hdisk1 hdisk2 hdisk3 hdisk4 1 10

The above command is used to find the "disk transfer" rate. The "disk xfer" column displays the transfer rate of the listed hdisks. A maximum of 4 hdisks can be given on the command line.

Tuesday 14 February 2012

Calculate the Efficiency of your Logical Volume in AIX

# lslv -l hd2 : Command to view the LV fragmentation.

hd2:/usr
PV                COPIES      IN BAND        DISTRIBUTION
hdisk0        114:000:000        22%           000:042:026:000:046

COPIES: 114:000:000

114 : No. of LPs in the first copy.
000 : No LPs, so no second copy.
000 : No LPs, so no third copy.

Therefore the LV is not mirrored.

IN BAND : Shows how well the intra-allocation policy of the LV is followed.

Here it is 22%; the higher the percentage, the better the allocation efficiency.

Each logical volume has its own intra-allocation policy. If the operating system cannot meet this requirement, it chooses the best available way to meet it.

DISTRIBUTION : edge : middle : center : inner-middle : inner-edge

               000     042       026         000         046      -----> 114 LPs.

Since the LV was created with the intra-allocation policy "center", the no. of LPs at the center is 26. So, 26 / (0+42+26+0+46) = 0.2280, i.e. about 22%.
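The IN BAND calculation can be reproduced with a short Python sketch using the DISTRIBUTION values from the lslv output above:

```python
# DISTRIBUTION of LPs across the five intra-disk regions of hdisk0,
# taken from the lslv output above.
distribution = {"edge": 0, "middle": 42, "center": 26,
                "inner-middle": 0, "inner-edge": 46}
intra_policy = "center"   # intra-allocation policy of the LV

total = sum(distribution.values())              # 114 LPs in the copy
in_band = distribution[intra_policy] * 100 // total

print(total, in_band)   # 114 22  -> 22% of LPs sit in the requested band
```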

HACMP 2 Node Cluster Steps- Part II

Resource Configuration:

# smitty hacmp --> Extended Configuration --> Extended Resource Configuration --> HACMP Extended Resource Configuration--> Configure HACMP Application and Configure HACMP Service IP Labels and IP Address.

In this wizard we select the option "Configure HACMP Application" to create an application server and provide the "Startup" and "Stop" scripts.

We select "Configure HACMP Service IP Labels and IP Address" to add the "Service IP" to the cluster network which we created.

So, from "Configure HACMP Service IP Labels and IP Address" --> "Add a Service IP Label/Address"
and select "Configurable on Multiple Nodes" and select the network which we created "net_ether_##" and select the service ip and press "Enter".

The above process binds the "Service IP" with the "Cluster Network".

Next we proceed with Resource Group Configuration:

# smitty hacmp --> Extended Configuration --> Extended Resource Configuration --> HACMP Extended Resource Group Configuration --> Add a Resource Group.

Enter the resource group name and the participating nodes (Node 1 and Node 2), set the cluster policy and press Enter.

# smitty hacmp --> Extended Configuration --> Extended Resource Configuration --> HACMP Extended Resource Group Configuration --> Change/Show the attributes of a Resource Group.

Select the "RG" which you created. In this window, select the "Service IP" and the "Application Server" which we configured earlier.

Now perform a discovery of the cluster information.

Next we see configuration of Volume Group:

The resource group which holds the resources should be placed in a shared volume group, in the form of a shared file system on a shared logical volume, so that the application which binds to the service IP is available across all the nodes in the cluster.

# smitty hacmp --> System Management (C-SPOC) --> HACMP Logical Volume Management --> Shared Volume Group --> Create a Shared Volume Group --> Select the nodes (Node 1 and Node 2) --> Select the PVID --> Enter the Volume Group Name / Major Number and the PP size and press Enter.

Create Shared Logical Volume:

# smitty hacmp --> System Management (C-SPOC) --> HACMP Logical Volume Management --> Shared Logical Volume --> Add a Shared Logical Volume --> Select the Volume Group Name --> Select the Physical Volume --> Enter the Logical Volume Name and the No. of LPs and press Enter.

Create Shared File System:

# smitty hacmp --> System Management (C-SPOC) --> HACMP Logical Volume Management --> Shared File System --> Journaled File System --> Add a Journaled File System on a Previously Defined Logical Volume --> Add a Standard Journaled File System --> Select the Shared Logical Volume --> Enter the Mount Point and press Enter.

We are good to go now.

Finally, perform verification and synchronization:

# smitty hacmp --> Extended Configuration --> Extended Verification and Synchronization.

Verify and synchronize on both the nodes (Node 1 and Node 2) and press Enter.

Cluster Configuration is Over.




Wednesday 8 February 2012

HACMP 2 Node Cluster Steps- Part I

Cluster configuration between Node A and Node B.


Node A :


Bootip : 192.168.1.1
Standby ip : 192.168.1.2
Persistent ip : 192.168.1.3
Service ip : 10.0.0.1


Node B:


Bootip : 192.168.2.1
Standby ip : 192.168.2.2
Persistent ip : 192.168.2.3
Service ip : 10.0.0.2




Ensure that the above entries are added to the "/etc/hosts" file of both Node A and Node B.
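For example, the addresses above could be recorded in "/etc/hosts" as follows (the host name labels here are illustrative; use your own naming convention):

```
# Node A
192.168.1.1   nodeA-boot
192.168.1.2   nodeA-standby
192.168.1.3   nodeA-pers
10.0.0.1      nodeA-svc

# Node B
192.168.2.1   nodeB-boot
192.168.2.2   nodeB-standby
192.168.2.3   nodeB-pers
10.0.0.2      nodeB-svc
```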


We assume that the required file sets for HACMP are already installed on the nodes and both the nodes are restarted.


1) Configure Cluster (Cluster Name: test_cluster)
2) Node Configuration : (Node A and Node B).
3) Network Configuration.
4) Resource Configuration.
5) Verification and Synchronization.


Configure Cluster:


# smitty hacmp --> Extended Configuration --> Extended Topology Configuration --> Configure Cluster --> Add/Change/Show HACMP Cluster


Enter the name of the cluster, "test_cluster", and press Enter.


Configuration of Nodes:


# smitty hacmp --> Extended Configuration --> Extended Topology Configuration --> Configure HACMP Nodes--> Add a node to HACMP Cluster.


Select the node name "Node_A_test_cluster" and enter the communication path to the node as the boot IP of Node A, which is "192.168.1.1". Repeat the same for Node B: "Node_B_test_cluster" with the communication path to the node "192.168.2.1".


Discovery of Configured Nodes:


Now we are going to do a discovery of the nodes added to the cluster.


# smitty hacmp --> Extended Configuration --> Discover HACMP-related Information from Configured Nodes

Network Configuration:


# smitty hacmp --> Extended Configuration --> Extended Topology Configuration--> Configure HACMP Networks --> Add a Network to the cluster --> Select "ether" and Enter.


Enter:


Network Name : A default name appears ("net_ether_##")
Network Type : ether
Netmask : The mask value appears
Enable IP Address Takeover : Select "Yes" if you opt for the "IPAT over Aliases" method.


Enter.


Adding Communication devices to the cluster


As the name implies, here we add the communication devices to the cluster: the interfaces through which "Node A" and "Node B" communicate with each other. The communication happens via the boot IP and standby IP of Node_A and Node_B.


# smitty hacmp --> Extended Configuration --> Extended Topology Configuration--> Configure HACMP Communication Interfaces/Devices --> Add communication interfaces/device--> Add a Discovered Communication Interface and Devices and select "Communication Interfaces"


Now select the network which you have configured, "net_ether_##". This in turn opens a window with the interfaces of "Node_A_test_cluster" and "Node_B_test_cluster" [boot IP and standby IP].
Select the IPs (4 in total, i.e. 2 per node) and press Enter.


Continued in Part II...




Thursday 2 February 2012

HACMP Overview

What is HACMP in AIX ?

High Availability Cluster Multi-Processing, a term which mainly focuses on the availability of applications. The term means a lot when we speak about "Fault Tolerance" (FT). Fault tolerance, as the name implies, is the ability to persist across any hardware failure, such as CPU, memory or interface failures, using redundant components. In this model, the fault-tolerant machine is capable of providing high availability for an application at any given time.

But the drawback of the "Fault Tolerant" methodology is that, since it relies on redundant components, the cost factor is high. This led to the development of "HA" (High Availability), which is capable of providing high availability for an application by sharing the application across multiple nodes.

So how HA differs from FT:

FT : High cost involved.
HA : Comparatively less.

FT : No down time.
HA : Less down time.