About Us

RSInfoMinds is a web-based IT training and consultancy firm, established with the ambition of training people in the IT infrastructure field. We provide online and classroom training in various areas of IT infrastructure management.

Join Us: http://www.facebook.com/RSInfoMinds
Mail Us: rsinfominds@gmail.com
Twitter: @RSInfoMinds

We specialize in the below courses:

Redhat Linux Admin
Redhat Linux Cluster
Redhat Virtualization
IBM AIX Admin
IBM AIX Virtualization
IBM AIX Cluster
HP Unix Admin
HP Unix Cluster
HP Unix Virtualization
Shell Scripting
Veritas Volume Manager
Veritas Cluster
Oracle Core DBA
VMware


We provide training in such a way that you gain in-depth knowledge of the courses you choose.

We ensure you are confident in each and every technical aspect that the IT industry needs and expects from you.

We also conduct workshops on the latest technologies, with real-time faculty sharing their work experience to make you the best.

Saturday 22 December 2012

How to check NTP synchronization?

# ntpq -c rl | grep -i reftime

If "reftime" shows a hexadecimal value, the host is synchronized with the NTP server; if it shows a zero value, NTP is not in synchronization.

The same can be confirmed in /var/adm/syslog/syslog.log by searching for "synchronization".
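For illustration, on a synchronized host the output looks roughly like this (the timestamp value is hypothetical):

# ntpq -c rl | grep -i reftime
reftime=d4596fc1.8ac5c000  Sat, Dec 22 2012 10:15:13.542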

Online JFS not working

You may come across a situation where your OnlineJFS is not working.

# swlist -l product | grep -i online : Command to display the OnlineJFS product.

You have verified that OnlineJFS has been installed.


When you try to extend the filesystem size online, you get the following message:



# fsadm -b 52166656 /u01
fsadm: /etc/default/fs is used for determining the file system type
fsadm: You don't have a license to run this program
#

# /sbin/fs/vxfs/vxenablef -a : Command fixes the problem by re-enabling the licensed VxFS (OnlineJFS) features.
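After re-enabling the license, the online extension should succeed; an illustrative re-run with the same size and mount point as above:

# /sbin/fs/vxfs/vxenablef -a
# fsadm -b 52166656 /u01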

How to check device status in HP-UX 11i v3

# ioscan -P health -C disk/tape/fc : Command to check the device status (give one class at a time).

Possible status:

1) online

2) offline

3) limited

State of a Patch in HPUX

A patch can be in one of the following states:

1) Applied.

2) Committed.

3) Superseded.

When a patch has been superseded, the old patch can be committed using the command # cleanup -c <n>

<n> : The number of times a patch must have been superseded for it to be committed.
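For example, to commit all patches that have been superseded at least once (an illustrative invocation):

# cleanup -c 1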

How to check patches in HPUX

The below post shows how to check patches in HP-UX:

# swlist -l patch : Command to list the patches installed in HP-UX.

# check_patches : Command to check the patches on the box and their state.

# apply_patches : Command to list the installed patches.

Monday 10 December 2012

Find Your HBA Connection


This post shows how to find your HBA WWN and where your HBA is connected.

# lsdev -Cc adapter 

# lsdev -p fcs0

fcs0 is made up of fscsi0 and fcnet0.

fscsi is the protocol device used for data transfer over the FC HBA.

# lsattr -EHl fscsi0

Locate the field called "connected_to", which shows where your HBA is connected.
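To see the WWN itself, one common approach on AIX (assuming a standard adapter configuration) is to read the "Network Address" field of the adapter:

# lscfg -vpl fcs0 | grep -i network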

Boot Information in NVRAM

NVRAM : Non-Volatile Random Access Memory.

It is memory in which the boot device information is stored.

# bootlist -m both -o -v

The output will be similar to the /etc/path_to_inst file in Solaris.

Sunday 18 November 2012

Preventive Measure Before Replacing A Root Disk

This post ensures we take the proper precautions before replacing a failed disk that belongs to rootvg.

Imagine that your volume group is made up of hdisk0 and hdisk1.

hdisk1 has gone bad. Before you replace it, make sure hdisk0 is in good condition.

# ipl_varyon -i


PVNAME          BOOT DEVICE     PVID                    VOLUME GROUP ID
hdisk0          YES             00046474e2326aa40000000000000000        0004647400004c00

Make sure the "BOOT DEVICE" is set to "YES".

If not, make the disk a bootable one,

# bosboot -ad /dev/hdisk0

# mkboot -cd /dev/hdisk0

Verify the server will boot from the good disk (valid bootlist modes are normal, service and both):

# bootlist -m normal -o hdisk0

# bootlist -m both -o -v

NVRAM variable: (boot-device=/pci@fef00000/scsi@e/sd@8:2 /pci@fef00000/scsi@e/sd@c:2)
Path name: (/pci@fef00000/scsi@e/sd@8:2)

Verify the bootable path is updated in NVRAM.


HPUX SG Commands

# cmviewcl : Command to view Serviceguard packages and resources.

# cmruncl : Command to start the cluster services.

# cmhaltcl : Command to stop the cluster services.

# cmrunpkg -v <pkg-name> : Command to start a package.

# cmhaltpkg -v <pkg-name> : Command to halt a package.

# cmmodpkg -e <pkg-name> : Command to enable auto-run on a package.

# cmmodpkg -d <pkg-name> : Command to disable auto-run on a package.



Tuesday 6 November 2012

AIX Monitoring

Thanks mate for coming up with the query on AIX Performance Monitoring.

Here are a few pointers for performance monitoring:

Performance is monitored for CPU, memory, disk, network and virtual memory.

CPU:

# topas, sar, vmstat : Commands to monitor CPU performance.

The value of "usr + sys" should not exceed 80%.

Memory:

# sar -m, vmstat, svmon, topas : Commands to monitor memory performance.

The "avm" and "free" values are the ones to watch.

Disk:

# iostat : Command to monitor disks.

% tm_act and tps : should not be consistently high.
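For illustration, to sample disk activity every 2 seconds for 3 reports (hdisk0 is a hypothetical disk name):

# iostat -d hdisk0 2 3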

Get the Port in AIX


# netstat -Aan | grep "<Port_Number>" : Command on AIX that shows the socket using the respective port, along with the address of its protocol control block (PCB).

A port can be a static or dynamic port, a well-known port, or a registered or unregistered port.

To check the registered ports, look at the file "/etc/services".
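On AIX, the PCB address from "netstat -Aan" can be fed to rmsock to reveal the owning process. A sketch for port 22, with a hypothetical PCB address and PID:

# netstat -Aan | grep 22
f10006000abcb398 tcp4       0      0  *.22       *.*        LISTEN
# rmsock f10006000abcb398 tcpcb
The socket 0xabcb008 is being held by process 262284 (sshd).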

Which Program Is Running On My Port?

# lsof : List Open Files. It is one of the utilities to find which program or process is running on a specific port.

# lsof -nP -i | grep "<Port_Number>"

The command output will show which process is running on which port.

Tuesday 9 October 2012

SSH Password less configuration

This post shows the steps to configure passwordless SSH between UNIX machines.

Machine A and Machine B:

On Machine A:

1) Log in as root.

2) # ssh-keygen -t rsa (or -t dsa)

3) # cd ~/.ssh/

4) Step 2 generates a public key and a private key.

5) Copy the public key to Machine B's ~/.ssh/authorized_keys file.

6) Now you can ssh from Machine A to Machine B without a password.
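A minimal end-to-end sketch, assuming OpenSSH on both machines and that "machineB" resolves from Machine A (names are hypothetical):

# ssh-keygen -t rsa
# cat ~/.ssh/id_rsa.pub | ssh root@machineB 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
# ssh root@machineB

Where available, "ssh-copy-id root@machineB" performs the middle step in one command.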

Wednesday 3 October 2012

Scripts for TCP/IP and NFS

Hi Team,

This post covers the scripts that control TCP/IP and NFS in AIX.

/etc/rc.tcpip : Script for controlling TCP/IP Networks.

/etc/rc.nfs : Script for controlling NFS.

Thursday 23 August 2012

PVG-Strict LVM

How to increase a filesystem in HP-UX where the LV allocation policy is PVG-strict:

1) # lvdisplay <lv_path> : Verify the allocation property is PVG-strict.

2) # lvchange -s n <lv_path> : Disable PVG-strict allocation.

3) # lvextend -l <No.of LE> Logical_Volume_Name

4) # fsadm -F vxfs -b <Size> Mount_Point

5) # bdf : Verify the size.
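A worked sketch with hypothetical names (logical volume /dev/vg01/lvol1 mounted at /u01; the extent count and size are illustrative):

# lvdisplay /dev/vg01/lvol1 | grep -i allocation
# lvchange -s n /dev/vg01/lvol1
# lvextend -l 500 /dev/vg01/lvol1
# fsadm -F vxfs -b 2097152 /u01
# bdf /u01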

Monday 20 August 2012

Hi Followers/Readers...

Really sorry for the absence in sharing posts.

Thursday 21 June 2012

Sendmail in AIX

You may come across a problem where the sendmail daemon is started using the "startsrc" command but stops immediately.

Solution:

# lssrc -s sendmail : Command to check the sendmail subsystem.

# ps -ef | grep -i sendmail

# stopsrc -s sendmail

# startsrc -s sendmail -a "-bd -q30m" : Command to start the sendmail daemon.

# lssrc -s sendmail : Command to verify the sendmail daemon.

If the daemon still shows "inoperative":

# /etc/rc.tcpip : Script that starts the sendmail daemon.

Note: Ensure the mail log filesystem is not full.

Monday 18 June 2012

Varyoffvg in AIX

# varyoffvg <Volume_Group> : Command to deactivate a volume group.

When a volume group is not active, the logical volumes and physical volumes in it cannot be listed.

To overcome this, we can apply the "-s" flag when varying off the volume group; this flag is for maintenance mode. It ensures we can still view the attributes of the volume group even though it is deactivated.

# varyoffvg -s vgp
# lsvg -l vgp
vgp:
LV NAME             TYPE       LPs   PPs   PVs  LV STATE      MOUNT POINT
lvn                 ???        10    10    1    closed/syncd  N/A
loglv00             ???        1     1     1    closed/syncd  N/A
#
# lsvg -p vgp
vgp:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk5            active            542         531         109..97..108..108..109
#

Sunday 10 June 2012

Basic Veritas Volume Manager Commands

# vxprint -g <Disk_Group> : Command to show information about a disk group.

# vxdctl mode : Command to check the status of the Veritas configuration daemon.

# vxdctl stop : Command to stop the Veritas configuration daemon.

# vxconfigd -m enable : Command to start the Veritas configuration daemon.

# vxinstall : Command to initialize Veritas Volume Manager.

# vxdisksetup -i <disk_name> : Command to prepare a disk for VxVM.

# vxdiskunsetup -C <disk_name> : Command to unconfigure the disk from VxVM.

# vxdg init <DG_Name> <Disk_Name> : Command to create a disk group.

# vxassist -g <DG_Name> make <Volume_Name> <Size> : Command to create a logical volume.

# newfs -F vxfs /dev/vx/rdsk/<DG_Name>/<Volume_Name> : Command to create a filesystem on a logical volume.
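Putting the pieces together, a hypothetical end-to-end run (disk, disk group and volume names are illustrative):

# vxdisksetup -i disk01
# vxdg init datadg datadg01=disk01
# vxassist -g datadg make datavol 1g
# newfs -F vxfs /dev/vx/rdsk/datadg/datavol
# mount -F vxfs /dev/vx/dsk/datadg/datavol /data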

Wednesday 23 May 2012

HP-UX Storage Troubleshooting Commands

Hi all. This post mainly focuses on how to handle storage-related problems from the server end.

# cat /var/adm/syslog/syslog.log : Analyze the file to find what kind of problem has been reported.

# ioscan -funC disk/tape : Command to check the status of disks and tapes; note whether any device is in the "no_hw" state.

# ioscan -funH <Hardware_Path> : Command to check the status of the hardware path reported in syslog.log.

# ioscan -funC fc : Command to check the status of the FC HBAs.

# fcmsutil /dev/fcd# : Command to check the status of the HBA. The status should be "ONLINE", not "AWAITING LINK UP".

# fcmsutil /dev/fcd# stat -s  : Command to check the statistics of the FC HBA.

# fcmsutil /dev/fcd# get remote all : Command to get all the remote ports connected to this FC HBA.

Tuesday 8 May 2012

Multipath in Linux

We all know the concept of multipathing, which is configured for redundancy to ensure the LUNs from the storage team are always available at the server.

1) # multipath -l : Command to view the configured multipath devices.

2) # cd /proc/scsi : Location of the HBA files, which contain the WWN numbers. In Linux the HBA entries are numbered 0, 1, ...; the same numbering notation appears in the "multipath" output.

3) /etc/multipath.conf : Configuration file for multipath.

4) # multipathd -k : Command which opens the interactive console with the ">" prompt.

>show paths : Command shows all the configured multipaths.
>show maps : Command shows all mapping of LUN.
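For reference, a minimal /etc/multipath.conf is sketched below; treat it purely as an illustrative starting point, since option names and defaults vary by distribution and multipath-tools version:

defaults {
        user_friendly_names yes
}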
 

Sunday 6 May 2012

mksysb error while taking OS backup

# mksysb -vipX /file_system

The above command takes a backup of rootvg (the operating system) in AIX to the given filesystem.

When the above command fails:

1) Check the size of the destination filesystem; it must be large enough to hold the used space in rootvg.

2) Check the # ulimit values (the file size limit in particular).

3) Check the /etc/exclude.rootvg file.

Sunday 22 April 2012

VG Configuration Backup and Restore in HP-UX

This post shows how to back up and restore a volume group configuration.

# cd /etc/lvmconf : This directory by default holds the backups of all the volume group configurations, always stored with the ".conf" extension (one file per volume group).

A configuration backup is taken automatically whenever LVM commands modify the volume group.

# vgcfgbackup -f <filename.conf> <volume_group> : Command to take a backup of the volume group configuration.

# vgcfgrestore -f <filename.conf> </dev/rdsk/c#t#d#> : Command to restore the volume group configuration onto a physical volume.

The above process is used during disk replacement. When a faulty disk in a volume group is replaced, instead of manually re-mirroring the data onto the new disk, we simply restore the configuration onto it and execute the "# vgsync" command, which synchronizes the data onto the new disk.
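A disk-replacement sketch using the default backup location (volume group and device names are hypothetical):

# vgcfgrestore -n vg01 /dev/rdsk/c2t1d0
# vgchange -a y vg01
# vgsync vg01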



Monday 16 April 2012

Software Management in HPUX

Software in HP-UX is organized into depots, bundles, products and filesets.

# swlist -l depot : Command shows the depots on the machine.

# swlist -l bundle : Command shows the bundle on the machine.

# swlist -l product : Command shows the products on the machine.

# swlist -l fileset : Command shows the fileset on the machine.

# swlist -l patch : Command shows the patches on the machine.

# swlist -l bundle/product/fileset -a revision : Command to view the revision on the bundle/product/fileset

# swlist -l depot/bundle/product/fileset -a software_spec : Command to view the version, revision, arch, vendor details.

# cd /var/adm/sw : Location of all the software logs on the machine.


Thursday 12 April 2012

LUN Information in HPUX

scsimgr : The SCSI manager is a utility to get LUN information from the server.

# scsimgr -h get_attr : Command to get basic attributes of all SCSI Devices.

# scsimgr -v get_stat : Command to get global statistics.

# scsimgr get_stat -D /dev/rdisk/disk0 : Command to get statistics of a particular disk.

# scsimgr get_stat -D /dev/rdisk/disk0 all_lpt : Command to get statistics for all the LUN paths of a disk.

# scsimgr -v get_stat -H 0/4/1/0.0x60060b000014f45a0001000000000011 : Command to get statistics for a Hardware path.


Monday 2 April 2012

Unmirrorvg Cont...

Original copy : hdisk4
1st copy : hdisk3
2nd copy : hdisk5

Now I unmirror the 2nd copy of the volume group "data".


bash-3.2# lsvg -p data
data:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk4            active            1084        1019        217..152..216..217..217
hdisk3            active            1084        1019        217..152..216..217..217
hdisk5            active            542         477         109..43..108..108..109
bash-3.2# unmirrorvg -c 2 data
bash-3.2# lsvg -p data
data:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk4            active            1084        1019        217..152..216..217..217
hdisk3            active            1084        1019        217..152..216..217..217
hdisk5            active            542         542         109..108..108..108..109




Unmirrorvg Cont...

Now, we discuss the mirroring using 3 disks.


bash-3.2# lsvg -p data
data:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk4            active            1084        1019        217..152..216..217..217
hdisk3            active            1084        1084        217..217..216..217..217
hdisk5            active            542         542         109..108..108..108..109

hdisk4 : Original Copy.

bash-3.2# mirrorvg -c 3 data hdisk3 hdisk5
0516-1125 mirrorvg: Quorum requirement turned off, varyoff and varyon
        volume group for this to take effect.
bash-3.2# lsvg -p data
data:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk4            active            1084        1019        217..152..216..217..217
hdisk3            active            1084        1019        217..152..216..217..217
hdisk5            active            542         477         109..43..108..108..109
bash-3.2# lsvg -l data
data:
LV NAME             TYPE       LPs   PPs   PVs  LV STATE      MOUNT POINT
loglv01             jfs2log    1     3     3    open/syncd    N/A
fslv01              jfs2       64    192   3    open/syncd    /data
bash-3.2#

So, as per the command executed, I can say:

hdisk4: Original.
hdisk3: 1st mirror.
hdisk5: 2nd mirror.

Now, we will try this.

bash-3.2# lsvg -p data
data:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk4            active            1084        1019        217..152..216..217..217
hdisk3            active            1084        1019        217..152..216..217..217
hdisk5            active            542         477         109..43..108..108..109
bash-3.2# unmirrorvg -c 2 data
bash-3.2# lsvg -p data
data:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk4            active            1084        1019        217..152..216..217..217
hdisk3            active            1084        1019        217..152..216..217..217
hdisk5            active            542         542         109..108..108..108..109
bash-3.2#

The above command proves that the 2nd copy was on hdisk5 and it has been removed.

Now we will mirror it back.

bash-3.2# mirrorvg data hdisk5

0516-1118 mirrorvg: No logical volumes in volume group to mirror.
0516-1200 mirrorvg: Failed to mirror the volume group.
bash-3.2# lsvg -l data
data:
LV NAME             TYPE       LPs   PPs   PVs  LV STATE      MOUNT POINT
loglv01             jfs2log    1     2     2    open/syncd    N/A
fslv01              jfs2       64    128   2    open/syncd    /data

The above command failed to mirror, even though the syntax looks correct.

But it worked when I executed it like this:

bash-3.2# mirrorvg -c 3 data hdisk5
bash-3.2# lsvg -l data
data:
LV NAME             TYPE       LPs   PPs   PVs  LV STATE      MOUNT POINT
loglv01             jfs2log    1     3     3    open/syncd    N/A
fslv01              jfs2       64    192   3    open/syncd    /data
bash-3.2# lsvg -p data
data:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk4            active            1084        1019        217..152..216..217..217
hdisk3            active            1084        1019        217..152..216..217..217
hdisk5            active            542         477         109..43..108..108..109

This needs an explanation: mirrorvg defaults to a total of two copies (-c 2). Since every logical volume already had two copies (on hdisk4 and hdisk3), plain "mirrorvg data hdisk5" found nothing to mirror. Specifying "-c 3" asks for a third copy, which is then placed on hdisk5.


Unmirrorvg Command

This post shows what the unmirrorvg command really does:


bash-3.2# lsvg -p data

data:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk4            active            1084        1084        217..217..216..217..217
hdisk3            active            1084        1019        217..152..216..217..217
hdisk5            active            542         542         109..108..108..108..109

bash-3.2# lspv -l hdisk5
bash-3.2# lspv -l hdisk4
bash-3.2# lspv -l hdisk3
hdisk3:
LV NAME               LPs   PPs   DISTRIBUTION          MOUNT POINT
loglv01               1     1     00..01..00..00..00    N/A
fslv01                64    64    00..64..00..00..00    /data

The above output shows that I have a volume group called "data" whose contents currently live on the physical volume "hdisk3", holding data under the mount point "/data".

bash-3.2# mirrorvg data hdisk4    (mirror the "data" VG onto hdisk4)

0516-1125 mirrorvg: Quorum requirement turned off, varyoff and varyon
        volume group for this to take effect.
bash-3.2# lsvg -p data
data:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk4            active            1084        1019        217..152..216..217..217
hdisk3            active            1084        1019        217..152..216..217..217
hdisk5            active            542         542         109..108..108..108..109
bash-3.2#

Now,

hdisk3 holds Original Copy.
hdisk4 holds 1st mirror Copy.

Now I go ahead and remove the original copy from hdisk3.

bash-3.2# unmirrorvg data hdisk3
0516-1133 unmirrorvg: Quorum requirement turned on, varyoff and varyon
        volume group for this to take effect.
bash-3.2# lsvg -l data
data:
LV NAME             TYPE       LPs   PPs   PVs  LV STATE      MOUNT POINT
loglv01             jfs2log    1     1     1    open/syncd    N/A
fslv01              jfs2       64    64    1    open/syncd    /data
bash-3.2# lsvg -p data
data:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk4            active            1084        1019        217..152..216..217..217
hdisk3            active            1084        1084        217..217..216..217..217
hdisk5            active            542         542         109..108..108..108..109
bash-3.2#

Now I mirror again, onto hdisk5.

So,

hdisk4 holds the original copy (the former 1st mirror copy).
hdisk5 holds the mirror copy.

bash-3.2# lsvg -l data
data:
LV NAME             TYPE       LPs   PPs   PVs  LV STATE      MOUNT POINT
loglv01             jfs2log    1     2     2    open/syncd    N/A
fslv01              jfs2       64    128   2    open/syncd    /data
bash-3.2# lsvg -p data
data:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk4            active            1084        1019        217..152..216..217..217
hdisk3            active            1084        1084        217..217..216..217..217
hdisk5            active            542         477         109..43..108..108..109
bash-3.2# lspv -l hdisk5
hdisk5:
LV NAME               LPs   PPs   DISTRIBUTION          MOUNT POINT
loglv01               1     1     00..01..00..00..00    N/A
fslv01                64    64    00..64..00..00..00    /data
bash-3.2# lspv -l hdisk4
hdisk4:
LV NAME               LPs   PPs   DISTRIBUTION          MOUNT POINT
loglv01               1     1     00..01..00..00..00    N/A
fslv01                64    64    00..64..00..00..00    /data
bash-3.2# lspv -l hdisk3
bash-3.2#

So now I have only one mirror copy, on "hdisk5".

Next, I unmirror the volume group.

bash-3.2# unmirrorvg data
0516-1133 unmirrorvg: Quorum requirement turned on, varyoff and varyon
        volume group for this to take effect.
bash-3.2# lsvg -l data
data:
LV NAME             TYPE       LPs   PPs   PVs  LV STATE      MOUNT POINT
loglv01             jfs2log    1     1     1    open/syncd    N/A
fslv01              jfs2       64    64    1    open/syncd    /data

bash-3.2# lsvg -p data
data:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk4            active            1084        1019        217..152..216..217..217
hdisk3            active            1084        1084        217..217..216..217..217
hdisk5            active            542         542         109..108..108..108..109
bash-3.2#


So the mirror copy on hdisk5 has been removed.


Friday 30 March 2012

WWID in HPUX

This post shows how to get the WWID (UUID) of the LUNs allocated from HP storage.

# scsimgr -v get_info -H <Hardware_Path> : Command to get the WWID of the LUNs allocated through that hardware path.

# fcmsutil /dev/fc# get remote all : Command to check the communication between the server HBA and all the remote ports (switch and storage HBAs).

Saturday 24 March 2012

HBA statistics in HP Unix

The below post shows how to handle and troubleshoot an FC HBA in HP-UX:


# ioscan -funC fc : Command shows all the FC HBAs on the box.

# fcmsutil <device_file_name> : Command shows the N_Port WWN and switch port WWN.

# fcmsutil <device_file_name> stat -s : Command shows I/O statistics on the HBA.

# fcmsutil <device_file_name> get remote all : Command to check the HBA's remote connections.

# fcmsutil <device_file_name> disable : Command to disable the HBA.

# fcmsutil <device_file_name> enable : Command to enable the HBA.

Hot Swappable Disk Replacement in AIX

This post concentrates on disk replacement in AIX:

We conclude the disk has failed after looking at the "errpt" diagnosis and seeing the PV state as "missing":

# errpt -N hdisk#

# lspv pvname

If the disk is a part of a  mirrored volume group, follow the below steps:

1) # unmirrorvg vgname <failed_pv> : Command to unmirror the volume group from the failed physical volume.

2) # reducevg -d -f vgname <failed_pv> : Command to remove the failed disk from the volume group.

3) # lscfg -vpl <failed_pv> : Make a note of the physical location of the failed physical volume.

4) # rmdev -dl <failed_pv> : Remove the failed disk from the ODM.

5) # diag --> Task Selection --> Hot Plug Task --> SCSI and SCSI RAID Hot Plug Manager --> Replace/Remove a Device Attached to an SCSI Hot Swap Enclosure --> Select the device to be replaced and press Enter.
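After the physical replacement, the usual sequence to bring the new disk back into the mirror (drawn from the mirroring steps elsewhere on this blog; hdisk# is the new disk):

# cfgmgr
# extendvg vgname hdisk#
# mirrorvg vgname hdisk#
# bosboot -ad /dev/hdisk# : Only if the volume group is rootvg.
# bootlist -m both -o hdisk# : Only if the volume group is rootvg.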



Sunday 18 March 2012

Veritas Volume Manager Commands

# vxdisk list : Command to list all the disks recognized by VxVM.

# vxdisksetup -i <disk_name> : Command to initialize a physical volume in VxVM format.

# vxdiskunsetup <disk_name> : Command to remove a physical volume from VxVM format.

# vxdg init <disk_group_name> cds=on <disk_name>=<Physical_volume> : Command to create a disk group with the disk.

# vxprint -htg <disk_group>: Command to view the disk group created.

# vxinfo -g <disk_group_name> : Command shows the volume created in the disk group.

# vxinfo -g <disk_group_name> -p : Shows the plexes in the disk group.

Friday 16 March 2012

Identifying the boot disk in AIX and HPUX

# bootinfo -b :Command to identify the present boot disk in IBM AIX.

# bootinfo -d :Command to identify the last boot disk  in IBM AIX.

# setboot -v : Command displays the Hardware path the of the primary and alternate boot disk in HPUX.

# ioscan -funH <Hardware_path> -C disk : Command to get the disk on the path in HP-UX.

# lvlnboot -v : Command to get the boot disk in HP-UX.

Virtual Home Directory

In UNIX, a user can log in only when they have an account on the server, i.e., an entry in /etc/passwd and related files.

The default location where a user lands at login is their home directory, /home/<username>, except for "root", which logs into "/".

The home directory is created when the # useradd command is used with the "-m" flag.

# useradd -m test : Command creates a user called "test" with a home directory of the same name under "/home".

# useradd test : Command creates the user "test" with the default attributes.

So what makes the difference when this is executed:

#su - test
#echo $HOME

In both cases the output will be "/home/test". But the difference is:

# useradd -m test : Creates the home directory.

# useradd test : Leaves the home directory "virtual": the directory does not really exist on disk, yet the user can still log in with "# su - test"; however, the user cannot create or read files in the nonexistent home directory.


Wednesday 14 March 2012

AIX LVM Header file

When an LVM command fails, it returns a numerical code along with the message.

Each value has a meaning that describes the nature of the LVM command executed and its outcome.


bash-3.2# exportvg vg01
0516-764 exportvg: The volume group must be varied off before exporting.

Here I am trying to export a volume group that is still active, and the command returns a numerical code (0516-764).

/usr/include/lvm.h : This header file contains the declarations for the LVM structures.

Monday 12 March 2012

Booting Phases of HPUX


The booting process of HP-UX is classified into 5 steps:

1) PDC : Processor Dependent Code.

2) ISL : Initial System Loader.

3) Loading the kernel.

4) Startup scripts.

5) Console login.

Useful HPUX Administrator Commands


# ioscan -funC disk : Scans for disks attached to the machine.

# ioscan -funC fc : Scans for FC adapter on the machine.

# ioscan -P health -C disk : Shows health status of all the disk.

# ioscan -funC tape : Scans for tape drive on the machine.

# ioscan -m dsf : Mapping between Persistent DSF and Legacy DSF.

# ioscan -funH <Hardware_Path> : Shows list of devices connected to the hardware path.

# ioscan -m lun : Command show the LUN on the machine.

# ioscan -m lun -h <Hardware_Path> : Mapping between the LUN and the Hardware path.

# ioscan -fn : Shows all the devices on the machine.

# lanscan -aip : Shows all the interfaces' MAC addresses and instance numbers.

# lanadmin -g <Instance_Number> : Complete information for a particular Interface.

# linkloop -i <Instance_Number> <MAC> : Check the physical cable connectivity of the interface.

# nwmgr : Starting with HP-UX 11i v3, shows the physical connectivity of all the interfaces.

# ioscan -fn | grep -i "no_hw" : Get the devices that are not claimed or not in use.

# strings /etc/lvmtab : Command to get the volume groups created and the physical volumes in them.

# lvlnboot -v : Shows current boot logical volume, swap, dump and root logical volume.

# swapinfo -atm : Information about swap space.

# top : Complete information about the machine.

# glance : Complete information about the machine. Similar to "topas" in AIX.

# vmstat -d 1 5 : Shows virtual memory information with "Disk Performance".


Sunday 4 March 2012

Configuration of Dump Device in Hp Unix

This post shows how to configure and activate a dump device in HP-UX.

A dump device plays a vital role during a system crash: when a crash occurs, the machine reboots and an image of system memory is written to the dump device (and can later be saved to a filesystem). This information can be used for debugging, to analyze the reason for the crash.

Steps for configuration:

In HP-UX, by default the paging space ("lvol2") is also used as the dump device.

1) The dump device should be larger than the size of the RAM.

# dmesg | grep -i physical : Command to get the RAM size.

2) Create a logical volume with contiguous allocation and bad block relocation disabled.

# lvcreate -n dump -L 1024 -C y -r n  <Volum_Group_Name> : Command to create a logical volume.

3) Assign the created logical volume as a dump device.

# lvlnboot -d /dev/vg_name/lv_name

4) Verify the dump has been assigned.

# lvlnboot -v

Reboot the machine for the dump device to take effect.

Saturday 3 March 2012

Swap Device in HP_Unix

This post deals with the management of swap space in HP-UX.

Swap is also called virtual memory or paging space, depending on the UNIX flavor.

The importance of swap space is that it allows the system to commit more memory than the machine physically has.

The default swap space in HP-UX is "/dev/vg##/lvol2".

Logical volume "lvol2" is assigned as the default swap space; this space cannot be extended or reduced.

# lvcreate -n <Logical_Volume_Name> -L <Size_of_LV> <Volume_Group_Name> : Command to create a logical volume of the desired size.

Swap space should be twice the size of physical memory (a traditional rule of thumb).

# swapon <Logical_Volume_Name> : Command to activate the created logical volume as a swap device.

# swapinfo -dtm : Command to verify the swap device is created and activated.

Ensure an entry has been made in "/etc/fstab" for the swap space to persist across reboots.

/etc/fstab

<Logical_Volume_Name>   "/"   swap   defaults 0 0
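A concrete sketch with hypothetical names (a 4 GB secondary swap volume in vg00):

# lvcreate -n lvswap2 -L 4096 vg00
# swapon /dev/vg00/lvswap2
# swapinfo -tm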

Monday 27 February 2012

How to find the devices attached to the FC Card in AIX and HP-Unix

This post shows the best way to find out the devices connected to the FC HBA.

AIX:

# lsdev -Cc adapter | grep -i fcs : Command to get the FC adapters connected to the machine.

The output shows the location code of the FC adapter ("port_number-slot_number"). Make a note of the numbers.

# lsdev -Cc disk | grep "##-##" : Command to fetch the device connected to the "port_number-slot_number"

HP-Unix:

# ioscan -funC fc : Command  to get the FC adapters connected to the machine.

Make a note of the device file name for the FC adapter ("/dev/fcd# or /dev/td#", depending on the manufacturer).

# /opt/fcms/bin/fcmsutil /dev/fcd# get remote all : Command that returns the devices connected to it.


Tuesday 21 February 2012

Priority of a Process in AIX

A process is a program under execution.

A process can be executed in the foreground or in the background.

Priority of a process executed in foreground is 20.

Priority of a process executed in background is 24.

The priority of process can be changed with the nice command.

The range of priority values for a process is +20 to -19.

The smaller the value, the more the process is favored.

# vmstat 2 5 : Command executed with the default priority value of 20.

# nice -n 5 vmstat 2 5 : Command executed with a priority value of 20+5 = 25 (less favored).

# nice -n -5 vmstat 2 5 : Command executed with a priority value of 20-5 = 15 (more favored).

A process's priority can be fixed or non-fixed (dynamic).

A fixed priority can be assigned to a process using the setpri() subroutine:

retcode = setpri(ProcessID, PriorityValue);

# ps -lu <user> : Command shows the priority of all the process owned by the user.

Monday 20 February 2012

Disk Based Heart Beat Configuration in HACMP_AIX

This post shows the step-by-step procedure for configuring disk-based heartbeat polling in an HACMP environment.

Heartbeat polling is the process of checking connectivity between the nodes in the cluster.

Heartbeat polling can be configured over a TCP/IP network or a non-TCP/IP network.

When the heartbeat fails, failover happens based on the policy configured in HACMP.

1) Select a common (shared) physical volume of the smallest size.

# lspv
# bootinfo -s hdisk# : Command to view the size of the disk.

2) Create an enhanced concurrent volume group.

# mkvg -y disk_beat_polling -s 4 -C -n hdisk1

or

# smitty hacmp-->c-spoc-->hacmp concurrent logical volume management--> concurrent volume group--> create a concurrent volume group-->  select the nodes in the cluster --> select the PVID--> enter the Volume group name and Enter.

3) Perform discovery of HACMP-related information.

4) Add the disk heartbeat network.

# smitty hacmp --> extended configuration --> resource configuration --> add a network to hacmp cluster --> Select "diskhb" from the predefined serial device types and press Enter.

The default TCP/IP network name : net_ether_##

The default Disk based network name : net_diskhb_##

5) Configure communication Interfaces / Devices.

smitty hacmp-->extended configuration--> extended topology configuration--> configure hacmp communication interfaces/devices  --> add communication interfaces/devices--> add a discovered communication interface and devices --> communication devices -->select the nodes in the cluster and Enter.

Now perform verification and synchronization. Finally, start the cluster services.

Successful configuration of disk heartbeat polling can be verified with:

# lssrc -ls topsvcs

Look for the columns showing "HB Interval", "Sensitivity" and "Missed HBs".

Wednesday 15 February 2012

How to check the "Run Queue and Swap Queue Size"

# sar -q 5 3

The above command is used to determine the Run Queue and Swap Queue Size.

We can find the number of threads/processes waiting in the run queue using the "vmstat" command, but the above gives more visibility, in terms of the percentage of time the run and swap queues were occupied.

runq-sz : Average number of threads in the run queue.

%runocc : Percentage of time the run queue was occupied.

swpq-sz : Average number of threads in the swap queue.

%swpocc : Percentage of time the swap queue was occupied.

Use "vmstat" to find the disk performace

# vmstat hdisk1 hdisk2 hdisk3 hdisk4 1 10

The above command is used to find the disk transfer rate: the "disk xfer" column shows the transfer rate for the named hdisks. A maximum of four hdisks can be given on the command line.

Tuesday 14 February 2012

Calculate the Efficiency of your Logical Volume in AIX

# lslv -l hd2 : Command to view the LV fragmentation.

hd2:/usr
PV                COPIES      IN BAND        DISTRIBUTION
hdisk0        114:000:000        22%           000:042:026:000:046

COPIES: 114:000:000

114 : Number of LPs in the first copy.
000 : No LPs, so no second copy.
000 : No LPs, so no third copy.

Therefore the LV is not mirrored.

IN BAND : Shows how well the intra-disk allocation policy of the LV is followed.

22% : The higher the percentage, the better the allocation efficiency.

Each logical volume has its own intrapolicy. If the operating system cannot meet this requirement, it chooses the best way to meet the requirements.

DISTRIBUTION : edge : middle : center : inner-middle : inner-edge

               000     042       026         000         046   -----> 114 LPs in total.

Since the LV was created with the intra-allocation policy "center", the number of LPs at the center is 26. So IN BAND = 26 / (0+42+26+0+46) = 26/114 ≈ 0.228, i.e. 22%.

HACMP 2 Node Cluster Steps- Part II

Resource Configuration:

# smitty hacmp --> Extended Configuration --> Extended Resource Configuration --> HACMP Extended Resource Configuration--> Configure HACMP Application and Configure HACMP Service IP Labels and IP Address.

In this wizard we select the option "Configure HACMP Application" to create an application server and provide the "Startup" and "Stop" scripts.

We select "Configure HACMP Service IP Labels and IP Address" to add the service IP to the cluster network we created.

So, from "Configure HACMP Service IP Labels and IP Address" --> "Add a Service IP Label/Address"
and select "Configurable on Multiple Nodes" and select the network which we created "net_ether_##" and select the service ip and press "Enter".

The above process binds the "Service IP" with the "Cluster Network".

Next we proceed with Resource Group Configuration:

# smitty hacmp --> Extended Configuration --> Extended Resource Configuration --> HACMP Extended Resource Group Configuration --> Add a Resource Group.

Enter the resource group name and participating nodes (Node 1 and Node 2), set the cluster policies, and press "Enter".

# smitty hacmp --> Extended Configuration --> Extended Resource Configuration --> HACMP Extended Resource Group Configuration --> Change/Show the attributes of Resource Group.

Select the "RG" which you have selected. In this window you have select the "Service IP" and "Application Server" which we configured earlier.

Now do a "Discovery of cluster"

Next we see the configuration of the volume group:

The resources held by the resource group should be placed in a shared volume group, as a shared filesystem on a shared logical volume, so that the application bound to the service IP is available across all the nodes in the cluster.

# smitty hacmp --> System Management (C-SPOC) --> HACMP Logical Volume Management --> Shared Volume Group --> Create a Shared Volume Group --> Select the nodes (Node 1 and Node 2) --> Select the PVID --> Enter the volume group name / major number and the PP size, and press Enter.

Create Shared Logical Volume:

# smitty hacmp --> System Management (C-SPOC) --> HACMP Logical Volume Management --> Shared Logical Volume --> Add a Shared Logical Volume --> Select the volume group name --> Select the physical volume --> Enter the logical volume name and number of LPs, and press Enter.

Create Shared File System:

# smitty hacmp --> System Management (C-SPOC) --> Hacmp Logical Volume Management --> Shared File System --> Journal File System --> Add a Journal Filesystem on a Previously Defined Logical Volume -->Add a standard Journaled File System --> Select the Shared Logical Volume -->Enter the Mount Point and Enter.

We are good to go now.

Finally, perform verification and synchronization:

# smitty hacmp --> Extended Configuration --> Extended Verification and Synchronization.

Verify and synchronize on both nodes (Node 1 and Node 2) and press Enter.

Cluster Configuration is Over.




Wednesday 8 February 2012

HACMP 2 Node Cluster Steps- Part I

Cluster Configuration between Node A and  Node B.


Node A :


Bootip : 192.168.1.1
Standby ip : 192.168.1.2
Persistent ip : 192.168.1.3
Service ip : 10.0.0.1


Node B:


Bootip : 192.168.2.1
Standby ip : 192.168.2.2
Persistent ip : 192.168.2.3
Service ip : 10.0.0.2




Ensure that the above entries are added to the "/etc/hosts" file of Node A and Node B.


We assume that the required file sets for HACMP are already installed on the nodes and both the nodes are restarted.


1) Configure Cluster (Cluster Name: test_cluster)
2) Node Configuration : (Node A and Node B).
3) Network Configuration.
4) Resource Configuration.
5) Verification and Synchronization.


Configure Cluster:


# smitty hacmp --> Extended Configuration --> Extended Topology Configuration --> Configure Cluster-->Add/Change/Show Hacmp Cluster


Enter the name of the cluster "test_cluster" and Enter.


Configuration of Nodes:


# smitty hacmp --> Extended Configuration --> Extended Topology Configuration --> Configure HACMP Nodes--> Add a node to HACMP Cluster.


Select Node Name "Node_A_test_cluster" and enter communcation path to the node as boot ip of node A which "192.168.1.1" and repeat the same for node B "Node_B_test_cluster" and the communication path to the ndoe "192.168.2.1".


Discovery of Configured Nodes:


Now we are going to do a discovery of the nodes added to the cluster.


# smitty hacmp --> Extended Configuration --> Discover HACMP-related Information from Configured Nodes

Network Configuration:


# smitty hacmp --> Extended Configuration --> Extended Topology Configuration--> Configure HACMP Networks --> Add a Network to the cluster --> Select "ether" and Enter.


Enter:


Network Name : Default name appear "net_ether_##"
Network Type : ether
Netmask : Mask value appears
Enable IP Address Takeover : Select "Yes" if you opt for "IPAT over Alias" Method.


Enter.


Adding Communication devices to the cluster


As the name implies, here we add the communication interfaces/devices to the cluster, i.e., define how Node A and Node B communicate with each other. The communication happens via the Node A and Node B boot and standby IPs.


# smitty hacmp --> Extended Configuration --> Extended Topology Configuration--> Configure HACMP Communication Interfaces/Devices --> Add communication interfaces/device--> Add a Discovered Communication Interface and Devices and select "Communication Interfaces"


Now select the network you configured, "net_ether_##". This in turn opens a window with "Node_A_test_cluster" and "Node_B_test_cluster" [boot IP and standby IP]; select the IPs (four in total, i.e., two per node) and press "Enter".


Cont........Part II




Thursday 2 February 2012

HACMP Overview

What is HACMP in AIX ?

High Availability Cluster Multi-Processing, a term that mainly focuses on the availability of applications. It is best understood in contrast with Fault Tolerance (FT). A fault-tolerant machine, as the name implies, can survive hardware failures (CPU, memory, interfaces and others) using redundant components, and is therefore capable of keeping an application available at any given time.

But the drawback of the fault-tolerant methodology is that, since it relies on redundant components, the cost factor is high. This led to the development of High Availability (HA), which provides high availability for an application by sharing the application across multiple nodes.

So how HA differs from FT:

FT : High cost involved.
HA : Comparatively less.

FT : No downtime required.
HA : A little downtime required.

Monday 30 January 2012

Operating System Cloning in IBM AIX and HP Unix

This post shows how to clone the operating system. Since we have already discussed the AIX cloning method (alt_disk_copy), we will go straight to HP-UX cloning.

In HP-UX we use a utility called Dynamic Root Disk (DRD) to clone the OS disk.

1) # uname -a : Check the version of Hp Unix.

2) # model : Check the model of the machine.

3) # swlist -l product -l bundle | grep -i dynamic : Ensure the DRD software is installed on the box.

4) # strings /etc/lvmtab : Find out the disk on which the OS is installed.

5) # diskinfo -v /dev/rdsk/c#t#d# : Command to find the size of the disk.

6) # ioscan -funC disk : Look for a free disk whose size is equal to that of the OS disk.

7) # pvdisplay -v /dev/rdsk/c#t#d# : Ensure the LVM status of the disk is "No"; the physical volume should not be part of any LVM structure.

8) # drd clone -p -v -t /dev/dsk/c#t#d# : Command to preview the clone and verify that the target disk can hold it. If the preview is successful:

9) # drd clone -v -t /dev/dsk/c#t#d# : Command to clone the disk. It can take around 30 minutes.

The default name of the operating system volume group is "vg##", and the cloned one is called "drd##".

10) # bdf : Command to verify the cloned Operating System.

11) # drd umount : Command to unmount the cloned operating system.
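To boot from the clone later, DRD can also activate it; an illustrative step, assuming a standard DRD installation:

# drd activate -x reboot=true : Command to set the clone as the primary boot disk and reboot onto it.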



Friday 27 January 2012

Virtual SCSI Mapping Through HMC rather than "mkvdev" CLI

This post shows an easy way to map a disk between the virtual I/O server and virtual I/O client through the Hardware Management Console (HMC), rather than using "mkvdev" on the CLI.

Disadvantage of this method: only direct mapping of whole disks is possible; mapping a logical volume or a filesystem is not supported. In those cases we need to go back to "mkvdev" for the mapping.

So we assume that a disk has already been mapped to the VIO_Server from the SAN side.

Login into HMC-->System Management-->Server(Managed System to which the VIO_Server and VIO_Client Belongs to)-->Configuration-->Virtual Resources-->Virtual Storage Management.

This opens a windows like below.


In the window, at the top left corner you will find a drop-down called "VIOS". Select the appropriate VIO_Server; this lists all the physical volumes available on it. To begin the mapping, select the disk you would like to assign and click "Modify Assignment" at the bottom left corner.


In the next window that opens upon selecting "Modify Assignment", select the appropriate "Virtual SCSI Server Adapter" (vhost#) and click "OK".

Now the mapping is done.


Cloning rootvg to external disk_AIX_ : Mapping to VIO_Client: Part III

VIO_Server

$ lsdev -virtual or $ lsdev | grep "vhost*" or $ lsdev -type adapter : List all the virtual adapters on the VIO_Server.

$ lsmap -vadapter vhost* : Show the mapping of the Virtual SCSI Server Adapter.

$ lspv : Command to list the Physical Volumes.

$ mkvdev -vdev hdisk# -vadapter vhost# : Command to provide mapping between the virtual SCSI Server and Client Adapter.

$ lsmap -vadapter vhost# : Command to verify the mapping has been done.

VIO_Client:

Now boot the client through SMS mode from the HMC, selecting the newly mapped disk; this boots the client from the cloned disk. Once the operating system comes up, log in to the client.

# lspv : List all the physical volume.

Cloning rootvg to external disk_AIX_ : Mapping to VIO_Client: Part II

Now inform the storage team to map the concerned LUN to the VIO_Server by providing the WWN of the HBA attached to the VIO_Server. Once that is done:

Login into the VIO_Server as "padmin".

$ cfgmgr : Configuration Manager.

$ lspv : List the Physical volumes in the VIO_Server.

$ chkdev -dev hdisk# -verbose : Command to get the PVID, IEEE ID, Unique ID of the disk and VIO_Attributes.

Now check the "unique_id" of the disk which we made a note earlier and compare it with the "unique id" displayed in the above command. If the "unique id" matches then the proper "LUN" is mapped else need to check with the storage team to map the correct LUN.

Next step would be create a Virtual_SCSI_Server_Adapter on VIO_Server and Virtual_SCSI_Clinet Adpater on the VIO_Client.

This has been explained in my previous posts.

Login into HMC --> System Management --> Server --> Select the VIO_Server --> Configuration --> Manage Profile --> Actions --> Create --> Virtual SCSI Adapter.

Make a note of the adapter ID and ensure the VIO server adapter is mapped to the correct VIO client.

Follow the same process at the VIO_Client end as well.

Mapping between VIO_Server and VIO_SCSI...Part III

Cloning rootvg to external disk_AIX_ : Mapping to VIO_Client: Part I

In the method we are going to discuss, rootvg is cloned onto a LUN; the LUN is then mapped to a VIO_Server, which presents it to a VIO_Client.

Rootvg Cloning:

# lspv : Command to list the physical volumes in the box.

# bootinfo -b : Command to find the boot disk.

# bootinfo -s hdisk# : Command to find the size of the disk.

Now select an empty disk of the same size as the boot disk for cloning. Make sure the selected physical volume has a "unique_id":

# lsattr -EHl hdisk# | grep -i "unique" : Command to view the unique id of the disk and make a note of id.

# alt_disk_copy -d hdisk# : Command to create OS cloning.

# lsvg : Command to view all the volume groups.

The name of the cloned volume group is "alt_inst_rootvg".

The bootlist is automatically updated upon successful cloning of rootvg, putting "alt_inst_rootvg" first and the original rootvg second.

Now we go ahead and remove the cloned rootvg disk (the SAN LUN) from the server.

# rmdev -dl hdisk# : Command to remove the cloned rootvg disk from the server.

# bootlist -m both -o hdisk# : Make sure the bootlist is updated properly, so that the machine boots from rootvg rather than the clone, since the cloned disk has been removed from the server.

If the rmdev command shows an error, we can instead disable the clone:

# alt_rootvg_op -S -t hdisk3 : Command to disable the cloned rootvg.

Now the cloning part is complete. Next we move to the VIO_Server side configuration... Part II

Thursday 26 January 2012

Import and Export of a Volume Group in IBM AIX and Hp_Unix

Importing and exporting is the process of removing a volume group definition and re-creating it. It can be used simply to remove a volume group from a machine, or to export a volume group from one machine and use it on another.

# lspv : List the physical volumes.

# lsvg -o : List active volume groups in the machine.

Prior to exporting a volume group we need to make sure it is not active, so it has to be deactivated, which in turn requires all of its filesystems to be unmounted.

# lsvgfs <volume_group_name> : Command to list the filesystem on the volume group.

Before unmounting a filesystem, check who is using it.

# fuser -cu <filesystem> : Shows the process and user using that filesystem.

# kill -9 <PID> : Kill corresponding process.

# umount /filesystem : Unmount the filesystem.

# varyoffvg <volume_group_name> : Deactivate the volumegroup name.

# lsvg -o  : Ensure the volume group is not active.

# exportvg <volume_group_name> : Command to export the volume group.

Note that exporting removes the volume group definition from the system (on AIX, from the ODM), but not its on-disk configuration: the volume group information remains recorded in the VGDA of each physical volume that was part of the exported volume group.

That is why, when you try to create a volume group on a physical disk that was part of an exported volume group, you get the error "Physical volume belongs to a volume group".

In that case we can use the "-f" option to force creation of the volume group. Otherwise we can remove the disk definition from the ODM and reconfigure it:

# rmdev -l hdisk#  : Command puts the disk into the Defined state.
# rmdev -dl hdisk# : Command removes the ODM information about the disk.
# cfgmgr : Command to rediscover and redefine the disk.

Now, let us get back to the exported volume group. It can be imported on the same machine or on a different machine.

When moving to a different machine, the disk has to be moved physically; in the case of a LUN, the mapping has to be redone to the WWN of the other server. Once that is done, log in to the other server:

# lsdev -Cc disk : Look for the disk.

# lspv : List the newly allocated disk. The disk remains in "none" state.

# importvg -y <volume_group_name> <physical_volume_name> : Command to import a volume group.

The VGDA on the physical volume plays a vital role in importing the volume group.

The "-y" option specifies the name of the imported volume group; if it is not specified, the volume group is imported with a default name "vg##".

"-n" : Flag can be used to synchronize the imported volume group.
"-V" : Flag specifies the major number of the imported volume group.

# lvlstmajor : Command to view the list of free major numbers.
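An end-to-end sketch with hypothetical names (moving volume group "datavg", which lives on hdisk4 and holds /data):

# umount /data
# varyoffvg datavg
# exportvg datavg
(physically move the disk, or remap the LUN to the target server)
# importvg -y datavg hdisk4
# mount /data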

HP_Unix..continued...

Tuesday 24 January 2012

Good to know about the I-nodes in Hp_Unix

Consider the scenario, where you have a filesystem called "/myfile" which was mounted onto a logical volume called "/dev/vg00/lvol1" that belongs to the volume group "vg00".

Every file and directory created in a UNIX environment has a unique "i-node" value within its filesystem. But there are cases where the same directory appears with different "i-node" values.

So, is there any chance of a single directory holding two different "i-node" values?

Let me explain what is "i-node" ?

I-node is a pointer variable which holds address and other other attributes of an object. Where the object is referred to an file or directory.

So I-node is composed of ( File/Directory creation time, modified time, access time, its metadata, owner of the object, group of the object, permissions, location of the file/directory on the disk).

Now suppose I am asked to create a filesystem. First I initialize the physical volume.

# pvcreate /dev/rdsk/c0t0d1

Now I create volume group and logical volume.

# vgcreate myvolume /dev/dsk/c0t0d1

# lvcreate -L 512M -n mylogical myvolume

Now the logical volume is created. Next I go ahead and create a filesystem on it.

# newfs -F vxfs /dev/myvolume/rmylogical

Now I create a mount point i.e., a directory.

# mkdir /myfile : This makes the OS allocate an i-node for this directory.

# cd /

# ls -il | grep -i myfile : Obtain the i-node of the created directory; say it shows a value of "1234".

Now I proceed in mounting the file system.

# mount /dev/myvolume/mylogical /myfile

# bdf : Verify the filesystem is mounted; # cd /myfile showing "lost+found" confirms it as well.

Now I try to get the i-node of the same directory "/myfile", which is now acting as a mount point.

# cd /

# ls -il | grep myfile : Now it shows a different value: the i-node of the root directory of the mounted filesystem.

Now unmount the filesystem:

# umount /myfile

Now if you try to get the "i-node" value of the directory "/myfile" it will show "1234".

Friday 20 January 2012

Replacing a disk in the LVM environment

Scenario:

In this LVM scenario, we want to replace hdisk2 in the volume group vioc_rootvg_1 on the VIOS, which contains the LV vioc_1_rootvg associated with vtscsi0. The setup has the following attributes:

 
 
 

> The volume group on the virtual I/O client is rootvg.
> The virtual SCSI adapter on the virtual I/O client is vscsi1.
> The virtual SCSI adapter on the virtual I/O server is vhost0.
> The failing disk on the virtual I/O client is hdisk1.
> The virtual disk is LVM mirrored on the virtual I/O client.
> The size is 32 GB.
 

On the VIO_Client:

1) # unmirrorvg rootvg hdisk1 : Unmirror the failing disk.

2) # bosboot -ad /dev/hdisk0 : Create a boot image on hdisk0.

3) # bootlist -m both -o hdisk0 : Change the boot order.

4) # reducevg -d -f rootvg hdisk1 : Remove the failing disk from the rootvg

5) # rmdev -dl hdisk1 : Command to remove the disk from the machine.

Now the disk can be removed from the VIO_Server:

The "hdisk1" is presented at the VIO_Client as the logical volume of name "vioc_1_rootvg" that belongs to the volume group called "vioc_rootvg_1" that is made up of physical volume hdisk2 at the VIOS end.

Since the hdisk1 is the failed disk which has to be replace, which in turn refer to a logical volume at the VIOS. We need to remove the logical volume.

Note: The logical volume is associated with the Virtual Targer Device "vtscsi0" and mapped to the virtual client SCSI Adapter "vscsi1".

1) Login into VIOS_Sever as padmin.

2) $ lslv vioc_1_rootvg  : Make a note of the size of the LV (No.of LP/PP Counts).

3) $ rmvdev -vtd vtscsi0 : Command to remove the virtual target device.

4) $ rmlv -f  vioc_1_rootvg  : Command to remove the logical volume.

5) $ lsvg -lv vioc_rootvg_1 : Command to verify the logical volume has been remove from the volume group.

6) $ mklv -lv vioc_1_rootvg vioc_rootvg_1 32G : Command to recreate a logical volume of the same size.

7) $ lslv vioc_1_rootvg  : Command to verify the logical volume has been created.

8) $ mkvdev -vdev vioc_1_rootvg -vadapter vhost0 : Map the logical volume to the virtual SCSI server adapter.

9) $ lsmap -vadapter vhost0 : Verify the mapping has been done successfully.

On the client:

1) # cfgmgr : Command to look for any new devices added.

2) # lspv : List the physical volumes. Look for the one in "None" state.

3) # extendvg rootvg hdisk# : Command to add the new disk to the rootvg.

4) # mirrorvg rootvg hdisk# : Command to mirror the rootvg to the newly added disk.

5) # lsvg -m rootvg  : Command to verify the rootvg is mirrored onto new disk.

6) # bosboot  -ad /dev/hdisk# : Create a boot image on the newly added disk.

7) # bootlist -m both -o hdisk# : Update the bootlist.


To extend a logical volume on the virtual I/O server and recognize the change on the virtual I/O client:

1) Log in to the VIOS as padmin.

$ lslv <logical_volume_name>

$ lslv db_lv
LOGICAL VOLUME: db_lv VOLUME GROUP: db_sp
LV IDENTIFIER: 00cddeec00004c000000000c4a8b3d81.1 PERMISSION: read/write
VG STATE: active/complete LV STATE: opened/syncd
TYPE: jfs WRITE VERIFY: off
MAX LPs: 32512 PP SIZE: 32 megabyte(s)
COPIES: 1 SCHED POLICY: parallel
LPs: 320 PPs: 320
STALE PPs: 0 BB POLICY: non-relocatable
INTER-POLICY: minimum RELOCATABLE: yes
INTRA-POLICY: middle UPPER BOUND: 1024
MOUNT POINT: N/A LABEL: None
MIRROR WRITE CONSISTENCY: on/ACTIVE
EACH LP COPY ON A SEPARATE PV ?: yes
Serialize IO ?: NO
DEVICESUBTYPE : DS_LVZ

2) $ extendlv <logical_volume_name> <Size>

$ extendlv db_lv 5G

$ lslv db_lv
LOGICAL VOLUME: db_lv VOLUME GROUP: db_sp
LV IDENTIFIER: 00cddeec00004c000000000c4a8b3d81.1 PERMISSION: read/write
VG STATE: active/complete LV STATE: opened/syncd
TYPE: jfs WRITE VERIFY: off
MAX LPs: 32512 PP SIZE: 32 megabyte(s)
COPIES: 1 SCHED POLICY: parallel
LPs: 480 PPs: 480
STALE PPs: 0 BB POLICY: non-relocatable
INTER-POLICY: minimum RELOCATABLE: yes
INTRA-POLICY: middle UPPER BOUND: 1024
MOUNT POINT: N/A LABEL: None
MIRROR WRITE CONSISTENCY: on/ACTIVE
EACH LP COPY ON A SEPARATE PV ?: yes
Serialize IO ?: NO
DEVICESUBTYPE :
DS_LVZ

3) # chvg -g <volume_group_name> : On the virtual I/O client, command to make the volume group examine its disks and take the increased size into account.

Thursday 19 January 2012

Unmirroring of Operating System in IBM AIX and Hp_Unix

IBM AIX:

1) # lsvg -m rootvg : Check the volume group is mirrored or not.

2) # lsvg -p rootvg : Check the physical volume that belong to the volume group.

3) # lsvg rotvg : Ensure that there are no stale partitions on the volume group.

4) # unmirrorvg rootvg hdisk# : Command to un mirror the Operating System from the mentioned hdisk#.

5) # lsvg -m rootvg : Check the volume group is mirrored or not.

6) # chpv -c hdisk# : Command to clear the boot image from the physical volume.

7) # reducevg -d -f rootvg hdisk3# : Command to remove the hdisk# from the root volume group.

Hp_Unix:

1) # vgdisplay -v vg00 | grep "PV Name" : Command to view the Physical volume of the OS volume group.

2) # vgdisplay -v vg00 | grep "LV Name" : Command to view the Logical volumes on the OS volume group.

3) # lvdisplay -v <lv_name> : Command to view the logical volume distribution over the physical volume.

4) # lvreduce -m 0 <logical_volume_name> <physical_volume_name> : Command to unmirror the OS volume group.

5) # lvlnboot -R : Command to refresh the boot table.

6) # lvlnboot -v : View the updated boot list.

7) # vgreduce vg00 /dev/dsk/c#t#d# : Command to remove the disk from the OS volume group.

8) Remove the disk entry from the file "/stand/bootconf"...

Mirroring of Operating System in IBM AIX and Hp_Unix-Part II

1) # ioscan -funC disk : Select an empty disk which is of same size of the OS disk.

2) # diskinfo -v /dev/rdsk/c#t#d# : Command to verify the size of the disk.

3) # vi /tmp/idf : File that contains the partition entries and the amount of space allocated to the EFI, OS and HPSP partitions.

3
EFI 500MB
HPUX 100%
HPSP 400MB

Save the file.

4) # idisk -wf /tmp/idf /dev/rdsk/c#t#d# : Command to create partition as described in the /tmp/idf.

5) # idisk /dev/rdsk/c#t#d# : Command to verify the created partition.

6) # insf -eC disk : Command to create DSF for the partition.

7) #ioscan -kfnN /dev/dsk/c#t#d# : Command to ensure the DSF created successfully.

8) # mkboot -e -l /dev/rdsk/c#t#d# : Command to copy the "/efi/hpux/hpux.efi" bootloader to the new partition.

9) # efi_ls -d /dev/rdsk/c#t#d#_p1 /efi/hpux : Command to verify the "hpux.efi" bootloader is copied into the new partition.

10) # mkboot -a "boot vmunix -lq" /dev/rdsk/c#t#d# : Command to disable quorum on the disk.

11) # insf -eC disk : Command to create DSF.

12) # pvcreate -fB /dev/rdsk/c#t#d#_p2  : Command to create a bootable disk.

13) # vgextend vg00 /dev/dsk/c#t#d# : Command to add physical volume to existing OS Volume group.

14) # lvlnboot -R : Command to update the LABEL file of OS Partition.

15) # lvlnboot -v :  Command to verify the boot disks.

16) # vi /stand/bootconf : Add the below lines to the file to record the boot disks (the leading "l" indicates an LVM-managed disk):
l /dev/dsk/c#t#d#_p2
l /dev/dsk/c#t#d#_p2

Save the file.

17) # setboot -p /dev/dsk/c#t#d#  : Command to set the primary boot path.

18) # setboot -h /dev/dsk/c#t#d# : Command to set the alternate (secondary) boot path.

19) # setboot : Command to verify the boot path.

Mirroring of Operating System in IBM AIX and Hp_Unix

IBM_AIX:

# lspv : Command to list the physical volumes. Identify the disk that belongs to the volume group "rootvg".

# extendvg rootvg hdisk# : We add another empty disk to the rootvg to mirror the Operating System.

# mirrorvg rootvg : Command to mirror the Operating System.

# lsvg -l rootvg : Command to check the rootvg is mirrored.

# lsvg -m rootvg : Command to check the rootvg is mirrored.

# bosboot -ad /dev/hdisk# : Command to create boot image on the mirrored disk.

# bootlist -m both -o hdisk# : Command to update the bootlist.

The above procedure is simple enough in AIX, but it is really complicated in HP-UX.

HP_UNIX:

Understanding the boot disk structure:

Boot disk is divided into 3 partition:

* EFI (Extensible Firmware Interface) Partition.
* OS Partition.
* HPSP (HP Service Partition).

EFI Partition:  Location /dev/rdsk/disk1_p1

OS loader is called "\efi\hpux\hpux.efi"

"\efi\hpux\auto" file that holds several system boot string and trouble shooting utilities.

1) Contains the Master Boot Record at the top of the disk.

2) Each EFI partition has a GUID (Globally Unique Identifier), and the locations are recorded in the EFI GUID Partition Table.

3) Contains OS loader for loading OS in memory during the boot process.

OS Partition: Location /dev/rdsk/disk1_p2

1) The OS partition contains a LIF (Logical Interchange Format) area with a LABEL file that identifies the locations of the boot, swap and root filesystems.

2) It also includes PVRA, VGRA, BDRA, BBRA.

HPSP:  Location /dev/rdsk/disk1_p3

1) Contains offline diagnostic utilities. It is a FAT32 filesystem.


Continued.....