About Us

RSInfoMinds is a web-based IT training and consultancy firm, established with the ambition of training people in the IT infrastructure field. We provide online and classroom training in various areas of IT infrastructure management.

Join Us: http://www.facebook.com/RSInfoMinds
Mail Us: rsinfominds@gmail.com
Twitter: @RSInfoMinds

We specialize in the following courses:

Redhat Linux Admin
Redhat Linux Cluster
Redhat Virtualization
IBM AIX Admin
IBM AIX Virtualization
IBM AIX Cluster
HP Unix Admin
HP Unix Cluster
HP Unix Virtualization
Shell Scripting
Veritas Volume Manager
Veritas Cluster
Oracle Core DBA
VMware


We structure our training so that you gain in-depth knowledge of the courses you take.

We ensure you are confident in each and every technical aspect that the IT industry needs and expects from you.

We also conduct workshops on the latest technologies, with practitioners sharing their real-world work experience to make you the best.

Monday, 30 January 2012

Operating System Cloning in IBM AIX and HP Unix

This post shows how to clone the operating system. Since we have already discussed the AIX cloning method (alt_disk_copy), we will go straight to HP-UX cloning.

In HP-UX we use a utility called Dynamic Root Disk (DRD) to clone the OS disk.

1) # uname -a : Check the HP-UX version.

2) # model : Check the model of the machine.

3) # swlist -l product -l bundle | grep -i dynamic : Ensure the DRD software is installed on the box.

4) # strings /etc/lvmtab : Find the disk on which the OS is installed.

5) # diskinfo -v /dev/rdsk/c#t#d# : Find the size of the disk.

6) # ioscan -funC disk : Look for a free disk. Select a disk of the same size as the OS disk.

7) # pvdisplay -v /dev/rdsk/c#t#d# : Ensure the LVM status of the disk is "No". The physical volume must not be part of any LVM structure.

8) # drd clone -p -v -t /dev/dsk/c#t#d# : Preview the clone and analyze the target disk to ensure that it can hold the clone.

9) # drd clone -v -t /dev/dsk/c#t#d# : If the preview succeeds, clone the disk. This takes roughly 30 minutes.

The default name of the operating system volume group is "vg##", and the cloned one is called "drd##".

10) # bdf : Verify the cloned operating system.

11) # drd umount : Unmount the cloned operating system.
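
Putting it together, here is a sketch of a typical run, assuming the OS disk is /dev/dsk/c0t0d0 and /dev/dsk/c1t0d0 is a free disk of the same size (both device names are hypothetical):

# drd clone -p -v -t /dev/dsk/c1t0d0 : Preview only; verifies the target can hold the clone.
# drd clone -v -t /dev/dsk/c1t0d0 : Perform the actual clone.
# drd mount : Mount the cloned image (by default under /var/opt/drd/mnts/sysimage_001).
# bdf | grep drd : Verify the cloned file systems.
# drd umount : Unmount the cloned image when done.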



Friday, 27 January 2012

Virtual SCSI Mapping Through HMC rather than "mkvdev" CLI

This post shows an easy way to map a disk between the Virtual I/O Server and Virtual I/O Client through the Hardware Management Console, rather than on the CLI using "mkvdev".

The disadvantage of this method: only direct mapping of a whole disk is possible; mapping a logical volume or a file system is not supported. In those cases we need to go back to "mkvdev" for the mapping.

So we assume that a disk has already been mapped to the VIO_Server from the SAN side.

Login into HMC-->System Management-->Server (the managed system to which the VIO_Server and VIO_Client belong)-->Configuration-->Virtual Resources-->Virtual Storage Management.

This opens the Virtual Storage Management window.

In that window, in the top left corner, you will find a drop-down called "VIOS". Select the appropriate VIO_Server; this lists all the physical volumes available on that VIO_Server. To begin the mapping, select the disk you would like to assign and click "Modify Assignment" in the bottom left corner.

In the window that opens next, select the appropriate "Virtual SCSI Server Adapter" (vhost#) and click "OK".

Now the mapping is done.


Cloning rootvg to external disk_AIX_ : Mapping to VIO_Client: Part III

VIO_Server

$ lsdev -virtual, $ lsdev | grep vhost, or $ lsdev -type adapter : List the virtual adapters on the VIO_Server.

$ lsmap -vadapter vhost# : Show the mappings of the given Virtual SCSI Server Adapter.

$ lspv : Command to list the Physical Volumes.

$ mkvdev -vdev hdisk# -vadapter vhost# : Command to provide mapping between the virtual SCSI Server and Client Adapter.

$ lsmap -vadapter vhost# : Verify that the mapping has been created; a worked example follows.
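
As a concrete illustration, suppose the cloned LUN shows up as hdisk5 on the VIO_Server and the client's server adapter is vhost2 (both names are hypothetical):

$ lsmap -vadapter vhost2 : Confirm vhost2 has no backing device yet.
$ mkvdev -vdev hdisk5 -vadapter vhost2 -dev vtscsi_clone : Map the disk; the optional -dev flag gives the Virtual Target Device a readable name.
$ lsmap -vadapter vhost2 : The output should now show hdisk5 as the backing device.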

VIO_Client:

Now boot the client through SMS mode from the HMC, selecting the newly mapped disk; this boots the client from the "cloned" disk. Once the client comes up, log in to it.

# lspv : List all the physical volumes.

Cloning rootvg to external disk_AIX_ : Mapping to VIO_Client: Part II

Now ask the storage team to map the LUN in question to the VIO_Server, providing the WWN of the HBA attached to the VIO_Server.

Once that is done, log in to the VIO_Server as "padmin".

$ cfgdev : Scan for new devices (the VIOS restricted-shell equivalent of cfgmgr).

$ lspv : List the Physical volumes in the VIO_Server.

$ chkdev -dev hdisk# -verbose : Command to get the PVID, IEEE ID, Unique ID of the disk and VIO_Attributes.

Now check the "unique_id" of the disk which we made a note earlier and compare it with the "unique id" displayed in the above command. If the "unique id" matches then the proper "LUN" is mapped else need to check with the storage team to map the correct LUN.

The next step is to create a Virtual SCSI Server Adapter on the VIO_Server and a Virtual SCSI Client Adapter on the VIO_Client.

This has been explained in my previous posts.

Login into HMC-->System Management-->Server-->Select the VIO_Server-->Configuration-->Manage Profiles-->Actions-->Create-->Virtual SCSI Adapter.

Make a note of the Adapter ID and ensure the VIO_Server adapter is mapped to the correct VIO_Client.

Follow the same process at the VIO_Client end as well.

The mapping between the VIO_Server and VIO_Client continues in Part III.

Cloning rootvg to external disk_AIX_ : Mapping to VIO_Client: Part I

In the method we are going to discuss, rootvg is cloned onto a LUN; the LUN is mapped to a VIO_Server, which then presents it to a VIO_Client.

Rootvg Cloning:

# lspv : Command to list the physical volumes in the box.

# bootinfo -b : Command to find the boot disk.

# bootinfo -s hdisk# : Command to find the size of the disk.

Now select an empty disk of the same size as the boot disk for cloning. Make sure the selected physical volume has a unique ID.

# lsattr -EHl hdisk# | grep -i "unique" : View the unique ID of the disk and make a note of it.

# alt_disk_copy -d hdisk# : Create the OS clone.

# lsvg : View all the volume groups.

The cloned volume group is named "altinst_rootvg".

The bootlist is automatically updated upon successful cloning of the rootvg, putting "altinst_rootvg" first and the original rootvg disk second.

Now we go ahead and remove the cloned rootvg (SAN) disk from the server.

# rmdev -dl hdisk# : Remove the cloned rootvg disk from the server.

# bootlist -m both -o hdisk# : Make sure the bootlist is updated properly, so that the machine boots from the original rootvg rather than the clone, since the cloned disk has been removed.

If the rmdev command shows an error, we can instead disable the clone by putting it to sleep.

# alt_rootvg_op -S -t hdisk# : Put the cloned rootvg to sleep.
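
For reference, the whole cloning sequence on a box where rootvg lives on hdisk0 and the SAN LUN is hdisk3 (disk numbers are hypothetical) condenses to:

# lsattr -El hdisk3 | grep -i unique_id : Note the unique ID for later comparison on the VIOS.
# alt_disk_copy -d hdisk3 : Clone rootvg onto hdisk3.
# lsvg : "altinst_rootvg" should now be listed.
# bootlist -m both -o hdisk0 : Point the bootlist back at the original rootvg disk.
# rmdev -dl hdisk3 : Remove the cloned disk from the server before unmapping the LUN.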

Now the cloning part is completed. Next we move on to the VIO_Server side configuration in Part II.

Thursday, 26 January 2012

Import and Export of a Volume Group in IBM AIX and Hp_Unix

Exporting removes a volume group and its configuration from a machine; importing brings it back. This can be used simply to remove a volume group from a machine, or to export a volume group from one machine and use it on another.

# lspv : List the physical volumes.

# lsvg -o : List active volume groups in the machine.

Prior to exporting a volume group, we need to make sure that it is not active; it has to be deactivated, which in turn requires all of its file systems to be unmounted.

# lsvgfs <volume_group_name> : List the file systems on the volume group.

Before unmounting the file systems, check their status.

# fuser -cu <filesystem> : Show the processes and users using the file system.

# kill -9 <PID> : Kill the corresponding processes.

# umount /filesystem : Unmount the file system.

# varyoffvg <volume_group_name> : Deactivate the volume group.

# lsvg -o : Ensure the volume group is no longer active.

# exportvg <volume_group_name> : Export the volume group.

Note that exporting removes the volume group definition from the machine (on AIX, from the ODM) but not from the disks themselves: the volume group information remains in the VGDA on each physical volume that was part of the exported volume group.

That is why creating a volume group on a physical disk that was part of an exported volume group fails with the error "Physical volume belongs to a volume group".

In that case we use the "-f" option to force creation of the volume group. Alternatively, we can remove the disk definition from the ODM and reconfigure it.

# rmdev -l hdisk#  : Put the disk into the Defined state.
# rmdev -dl hdisk# : Remove the ODM information about the disk.
# cfgmgr : Rediscover and redefine the disk.

Now, back to exporting the volume group. The exported volume group can be imported on the same machine or on a different machine.

When moving to a different machine, the disk has to be moved physically; in the case of a LUN, it has to be mapped to the WWN of the other server. Once that is done, log in to the other server.

# lsdev -Cc disk : Look for the disk.

# lspv : List the newly allocated disk. It shows in the "None" state (no volume group).

# importvg -y <volume_group_name> <physical_volume_name> : Command to import a volume group.

The VGDA on the physical volume plays the vital role in importing the volume group: it supplies the metadata. The "-y" option specifies the name for the imported volume group; if it is not specified, the volume group is imported with a default name of the form "vg##".

"-n" : Flag can be used to synchronize the imported volume group.
"-V" : Specifies the major number for the imported volume group.

# lvlstmajor : Command to view the list of free major numbers.
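
As a worked sketch, moving a volume group called datavg between two machines (all names are hypothetical):

On the source machine:

# umount /data : Unmount every file system in the volume group.
# varyoffvg datavg : Deactivate the volume group.
# exportvg datavg : Remove its definition from the system.

On the target machine, once the disk is visible:

# lvlstmajor : Pick a free major number, say 60.
# importvg -V 60 -y datavg hdisk4 : Import the volume group as datavg with major number 60; importvg varies it on automatically.
# mount /data : Mount the file system again.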

HP_Unix..continued...

Tuesday, 24 January 2012

Good to know about the I-nodes in Hp_Unix

Consider the scenario where you have a file system called "/myfile" mounted on a logical volume called "/dev/vg00/lvol1" that belongs to the volume group "vg00".

Every file and directory created in a UNIX environment has an "i-node" value that is unique within its file system, though objects in different file systems can share the same i-node number.

So, is there any chance of a single directory holding two different "i-node" values?

First, let me explain what an "i-node" is.

An i-node is a structure that holds the location and other attributes of an object, where the object is a file or a directory.

So an i-node is composed of the file/directory creation time, modification time, access time, other metadata, the owner of the object, the group of the object, its permissions, and the location of the file/directory on the disk.

Now I am asked to create a file system. First I would initialize the physical volume.

# pvcreate /dev/rdsk/c0t0d1

Now I create volume group and logical volume.

# vgcreate myvolume /dev/dsk/c0t0d1

# lvcreate -L 512M -n mylogical myvolume

Now the logical volume is created. Next I format the logical volume.

# newfs -F vxfs /dev/myvolume/rmylogical

Now I create a mount point i.e., a directory.

# mkdir /myfile : This makes the OS allocate an i-node to the directory.

# cd /

# ls -il | grep -i myfile : Obtain the i-node of the created directory. It shows a value of "1234".

Now I proceed in mounting the file system.

# mount /dev/myvolume/mylogical /myfile

# bdf : Verify the file system is mounted; listing /myfile shows "lost+found" as well.

Now I check the i-node of the same directory "/myfile", which is now acting as a mount point.

# cd /

# ls -il | grep myfile : Now it shows a different value: the i-node of the root directory of the newly mounted file system, not the underlying directory's i-node.

Now unmount the file system.

# umount /myfile

If you check the "i-node" value of the directory "/myfile" again, it shows "1234" once more.
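
The effect is easy to reproduce side by side (the i-node numbers are illustrative):

# umount /myfile
# ls -ild /myfile : Shows the i-node of the underlying directory, e.g. 1234.
# mount /dev/myvolume/mylogical /myfile
# ls -ild /myfile : Now shows the i-node of the root directory of the mounted file system instead.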

Friday, 20 January 2012

Replacing a disk in the LVM environment

Scenario:

In this LVM scenario, we want to replace hdisk2 in the volume group vioc_rootvg_1, which contains the LV vioc_1_rootvg associated with vtscsi0. The setup has the following attributes:

> The virtual SCSI adapter on the virtual I/O server is vhost0.
> The volume group on the virtual I/O client is rootvg.
> The virtual SCSI adapter on the virtual I/O client is vscsi1.
> The failing disk on the virtual I/O client is hdisk1.
> The virtual disk is LVM mirrored on the virtual I/O client.
> The size is 32 GB.

On the VIO_Client:

1) # unmirrorvg rootvg hdisk1 : Unmirror rootvg off the failing disk.

2) # bosboot -ad /dev/hdisk0 : Create a boot image on hdisk0.

3) # bootlist -m both -o hdisk0 : Change the boot order.

4) # reducevg -d -f rootvg hdisk1 : Remove the failing disk from rootvg.

5) # rmdev -dl hdisk1 : Remove the disk from the machine.

Now the disk can be removed from the VIO_Server:

The "hdisk1" is presented at the VIO_Client as the logical volume of name "vioc_1_rootvg" that belongs to the volume group called "vioc_rootvg_1" that is made up of physical volume hdisk2 at the VIOS end.

Since the hdisk1 is the failed disk which has to be replace, which in turn refer to a logical volume at the VIOS. We need to remove the logical volume.

Note: The logical volume is associated with the Virtual Targer Device "vtscsi0" and mapped to the virtual client SCSI Adapter "vscsi1".

1) Login into the VIOS as padmin.

2) $ lslv vioc_1_rootvg : Make a note of the size of the LV (LP/PP counts).

3) $ rmvdev -vtd vtscsi0 : Remove the Virtual Target Device.

4) $ rmlv -f vioc_1_rootvg : Remove the logical volume.

5) $ lsvg -lv vioc_rootvg_1 : Verify the logical volume has been removed from the volume group.

6) $ mklv -lv vioc_1_rootvg vioc_rootvg_1 32G : Recreate a logical volume of the same size.

7) $ lslv vioc_1_rootvg : Verify the logical volume has been created.

8) $ mkvdev -vdev vioc_1_rootvg -vadapter vhost0 : Map the logical volume to the Virtual SCSI Server Adapter.

9) $ lsmap -vadapter vhost0 : Verify the mapping has been done successfully.

On the client:

1) # cfgmgr : Scan for newly added devices.

2) # lspv : List the physical volumes. Look for the one in "None" state.

3) # extendvg rootvg hdisk# : Add the new disk to rootvg.

4) # mirrorvg rootvg hdisk# : Mirror rootvg onto the newly added disk.

5) # lsvg -m rootvg : Verify that rootvg is mirrored onto the new disk.

6) # bosboot -ad /dev/hdisk# : Create a boot image on the newly added disk.

7) # bootlist -m both -o hdisk# : Update the bootlist.


To extend a logical volume on the Virtual I/O Server and recognize the change on the virtual I/O client:

1) Login into the VIOS as padmin.

$ lslv <logical_volume_name>

$ lslv db_lv
LOGICAL VOLUME: db_lv VOLUME GROUP: db_sp
LV IDENTIFIER: 00cddeec00004c000000000c4a8b3d81.1 PERMISSION: read/write
VG STATE: active/complete LV STATE: opened/syncd
TYPE: jfs WRITE VERIFY: off
MAX LPs: 32512 PP SIZE: 32 megabyte(s)
COPIES: 1 SCHED POLICY: parallel
LPs: 320 PPs: 320
STALE PPs: 0 BB POLICY: non-relocatable
INTER-POLICY: minimum RELOCATABLE: yes
INTRA-POLICY: middle UPPER BOUND: 1024
MOUNT POINT: N/A LABEL: None
MIRROR WRITE CONSISTENCY: on/ACTIVE
EACH LP COPY ON A SEPARATE PV ?: yes
Serialize IO ?: NO
DEVICESUBTYPE : DS_LVZ

2) $ extendlv <logical_volume_name> <Size>

$ extendlv db_lv 5G

$ lslv db_lv
LOGICAL VOLUME: db_lv VOLUME GROUP: db_sp
LV IDENTIFIER: 00cddeec00004c000000000c4a8b3d81.1 PERMISSION: read/write
VG STATE: active/complete LV STATE: opened/syncd
TYPE: jfs WRITE VERIFY: off
MAX LPs: 32512 PP SIZE: 32 megabyte(s)
COPIES: 1 SCHED POLICY: parallel
LPs: 480 PPs: 480
STALE PPs: 0 BB POLICY: non-relocatable
INTER-POLICY: minimum RELOCATABLE: yes
INTRA-POLICY: middle UPPER BOUND: 1024
MOUNT POINT: N/A LABEL: None
MIRROR WRITE CONSISTENCY: on/ACTIVE
EACH LP COPY ON A SEPARATE PV ?: yes
Serialize IO ?: NO
DEVICESUBTYPE : DS_LVZ

3) On the virtual I/O client: # chvg -g <volume_group_name> : Examine the disks in the volume group; if any have grown in size, the volume group is expanded to use the new space.
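
For example, on the virtual I/O client whose hdisk2 backs the volume group db_vg (names are hypothetical):

# lspv hdisk2 | grep "TOTAL PPs" : Note the PP count before the change.
# chvg -g db_vg : Re-examine the disks; the volume group grows into the new space.
# lspv hdisk2 | grep "TOTAL PPs" : The PP count should now reflect the larger disk.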

Thursday, 19 January 2012

Unmirroring of Operating System in IBM AIX and Hp_Unix

IBM AIX:

1) # lsvg -m rootvg : Check whether the volume group is mirrored.

2) # lsvg -p rootvg : Check the physical volumes that belong to the volume group.

3) # lsvg rootvg : Ensure that there are no stale partitions in the volume group.

4) # unmirrorvg rootvg hdisk# : Unmirror the operating system from the given hdisk#.

5) # lsvg -m rootvg : Confirm the volume group is no longer mirrored.

6) # chpv -c hdisk# : Clear the boot image from the physical volume.

7) # reducevg -d -f rootvg hdisk# : Remove the hdisk# from the root volume group.

Hp_Unix:

1) # vgdisplay -v vg00 | grep "PV Name" : Command to view the Physical volume of the OS volume group.

2) # vgdisplay -v vg00 | grep "LV Name" : Command to view the Logical volumes on the OS volume group.

3) # lvdisplay -v <lv_name> : Command to view the logical volume distribution over the physical volume.

4) # lvreduce -m 0 <logical_volume_name> <physical_volume_name> : Command to unmirror the OS volume group, run once per logical volume (see the loop sketch after this list).

5) # lvlnboot -R : Command to refresh the boot table.

6) # lvlnboot -v : View the updated boot list.

7) # vgreduce vg00 /dev/dsk/c#t#d# : Command to remove the disk from the OS volume group.

8) Remove the disk entry from the file "/stand/bootconf"...
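
Since step 4 must be repeated for every logical volume in vg00, a small loop saves typing (a sketch; the disk being removed is hypothetical):

# for lv in $(vgdisplay -v vg00 | grep "LV Name" | awk '{print $3}'); do lvreduce -m 0 $lv /dev/dsk/c1t0d0; done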

Mirroring of Operating System in IBM AIX and Hp_Unix-Part II

1) # ioscan -funC disk : Select an empty disk of the same size as the OS disk.

2) # diskinfo -v /dev/rdsk/c#t#d# : Verify the size of the disk.

3) # vi /tmp/idf : Create a file that describes the partitions and the space allocated to the EFI, OS, and HPSP partitions.

3
EFI 500MB
HPUX 100%
HPSP 400MB

Save the file.

4) # idisk -wf /tmp/idf /dev/rdsk/c#t#d# : Create the partitions as described in /tmp/idf.

5) # idisk /dev/rdsk/c#t#d# : Verify the created partitions.

6) # insf -eC disk : Create the DSFs (device special files) for the partitions.

7) # ioscan -kfnN /dev/dsk/c#t#d# : Ensure the DSFs were created successfully.

8) # mkboot -e -l /dev/rdsk/c#t#d# : Copy the "/efi/hpux/hpux.efi" bootloader to the new partition.

9) # efi_ls -d /dev/rdsk/c#t#d#_p1 /efi/hpux : Verify the "hpux.efi" bootloader was copied into the new partition.

10) # mkboot -a "boot vmunix -lq" /dev/rdsk/c#t#d# : Disable quorum checking on the disk.

11) # insf -eC disk : Command to create DSF.

12) # pvcreate -fB /dev/rdsk/c#t#d#_p2  : Command to create a bootable disk.

13) # vgextend vg00 /dev/dsk/c#t#d# : Command to add physical volume to existing OS Volume group.

14) # lvlnboot -R : Command to update the LABEL file of OS Partition.

15) # lvlnboot -v :  Command to verify the boot disks.

16) # vi /stand/bootconf : Add the lines below to the file to record the boot disks (the leading "l" indicates an LVM-managed disk).
l /dev/dsk/c#t#d#_p2
l /dev/dsk/c#t#d#_p2

Save the file.

17) # setboot -p /dev/dsk/c#t#d#  : Command to set the primary boot path.

18) # setboot -h /dev/dsk/c#t#d# : Command to set the secondary (high-availability alternate) boot path.

19) # setboot : Command to verify the boot path.

Mirroring of Operating System in IBM AIX and Hp_Unix

IBM_AIX:

# lspv : List the physical volumes. Identify the disks that belong to the volume group "rootvg".

# extendvg rootvg hdisk# : Add another empty disk to rootvg so we can mirror the operating system.

# mirrorvg rootvg : Mirror the operating system.

# lsvg -l rootvg : Check that the rootvg logical volumes show two copies.

# lsvg -m rootvg : Check that rootvg is mirrored.

# bosboot -ad /dev/hdisk# : Create a boot image on the mirrored disk.

# bootlist -m both -o hdisk# : Update the bootlist.

The above procedure is simple enough in AIX, but it is really complicated in HP-UX.

HP_UNIX:

Understanding the boot disk structure:

The boot disk is divided into 3 partitions:

* EFI (Extensible Firmware Interface) Partition.
* OS Partition.
* HPSP (HP Service Partition).

EFI Partition: Location /dev/rdsk/disk1_p1

The OS loader is called "\efi\hpux\hpux.efi".

"\efi\hpux\auto" is a file that holds the system boot string and options for troubleshooting utilities.

1) Contains the Master Boot Record at the top of the disk.

2) Each EFI partition has a GUID (Globally Unique Identifier), and the locations are recorded in the EFI GUID Partition Table.

3) Contains the OS loader for loading the OS into memory during the boot process.

3) Contains OS loader for loading OS in memory during the boot process.

OS Partition: Location /dev/rdsk/disk1_p2

1) The LIF (Logical Interchange Format) area in the OS partition contains a LABEL file that identifies the location of the boot, swap, and root file systems.

2) It also includes the PVRA (Physical Volume Reserved Area), VGRA (Volume Group Reserved Area), BDRA (Boot Data Reserved Area), and BBRA (Bad Block Relocation Area).

HPSP:  Location /dev/rdsk/disk1_p3

1) Contains offline diagnostic utilities. It is a FAT-32 file system.


Continued.....

Wednesday, 18 January 2012

Increasing The Filesystem Size in AIX and Hp_Unix

IBM AIX:

# chfs -a size=+<Size in Units> <File_System_Name> : Command to increase the size of the file system.

# df : Verify the size of the file system.

Hp_Unix:

# fsadm -F <filesystem_type> -b <New_Total_Size> <Mount_Point> : Command to increase the file system size. Note that -b takes the new total size, not the increment.

# bdf : Verify the size of the file system.
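
For example, growing a file system called /data on each platform (mount points, volume groups, and sizes are hypothetical). On HP-UX the underlying logical volume must be large enough first:

IBM AIX:

# chfs -a size=+1G /data : Grow /data by 1 GB online (JFS2).
# df -g /data : Confirm the new size.

HP-UX (with OnlineJFS):

# lvextend -L 2048 /dev/vg01/lvol1 : Extend the logical volume to 2048 MB.
# fsadm -F vxfs -b 2097152 /data : Grow the file system to 2097152 KB (2 GB) online.
# bdf /data : Confirm the new size.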

Tuesday, 17 January 2012

Paging Space Handling in IBM AIX and Hp_UNIX

Paging Space in IBM AIX: Recorded in the "/etc/swapspaces" file

# lsps -a : List the available paging space.

# lsps -s : Summary about the paging space.

# mkps -a -n -s <No. of LPs> <Volume_Group_Name> <Physical_Volume_Name> : Create a paging space (-a activates it at every restart, -n activates it immediately).

# swapon <paging_device> : Activate a paging space.

# swapoff /dev/<paging_device> : Deactivate a paging space.

# rmps <paging_device> : Remove a paging device.

# chps -a y|n <paging_space> : Enable/disable automatic activation of the paging device at restart.

# chps -d <PP Count> <paging_space> : Decrease the size of the paging device.

# chps -s <PP Count> <paging_space> : Increase the size of the paging device.

Swap Space in Hp_Unix: Recorded in "/etc/fstab"

# swapinfo -dtm : List the swap spaces on the machine.

# lvcreate -L <Size of Swap> -n <Swap_Name> -C y <Volume_Group_Name> : Create a logical volume for swap; -C y requests contiguous allocation.

# swapon /dev/volume_group_name/swap_space_name : Activate the new swap area.
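
For example, adding a 2 GB device swap area on HP-UX (volume group, name, and size are hypothetical):

# lvcreate -L 2048 -n swap2 -C y vg00 : Create a 2048 MB contiguous logical volume.
# swapon /dev/vg00/swap2 : Activate it as swap.
# swapinfo -dtm : Confirm the new swap area is in use.

Add a matching entry to "/etc/fstab" so the swap area is activated at every boot.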



Handling Stale Partitions in AIX and HP_Unix

IBM AIX :

# syncvg -v <Volume Group name> : Synchronize all the stale partitions in the volume group.

# syncvg -p <Physical Volume name> : Synchronize all the stale partitions on the physical volume.

# syncvg -l <Logical Volume name> : Synchronize all the stale partitions in the logical volume.


Hp_UNIX:

# vgsync /dev/<volume group name> : Synchronize the volume group.

# lvsync /dev/volume_group_name/logical_volume_name : Synchronize the logical volume.

Mirroring Constraints in IBM AIX and HP_UNIX

Steps for mirroring in IBM AIX (a worked sketch follows the list):

1) # lspv : View the physical volumes in "None" state.

2) # mkvg -y <vgname> -s <PP Size> <Physical Volumes...> : Create a volume group.

3) # mklv -y <lvname> -t <filesystem type> -c <no. of copies> <volume group name> <no. of PPs> <Physical Volume Names> : Create a logical volume with mirror copies.

4) # crfs -v <filesystem type> -d </dev/logical_volume_name> -m <mount point> -A yes : Create a file system.

5) # mount /<mount point> : Mount the file system.
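
A worked sketch of those five steps, creating a two-copy (mirrored) file system across two disks (all names are hypothetical):

# mkvg -y datavg -s 64 hdisk1 hdisk2 : Volume group with 64 MB physical partitions.
# mklv -y datalv -t jfs2 -c 2 datavg 10 hdisk1 hdisk2 : 10 LPs, each with 2 copies kept on separate disks.
# crfs -v jfs2 -d /dev/datalv -m /data -A yes : File system on the mirrored logical volume, mounted automatically at boot.
# mount /data : Mount it.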

Hp_Unix

1) # ioscan -funC disk : View the disks available in the machine.

2) # pvcreate /dev/rdsk/c0t0d0 : Create a physical volume.
    # pvcreate /dev/rdsk/c0t0d1 : Create a physical volume.

3) # vgcreate <Volume Group name> /dev/dsk/c0t0d0 /dev/dsk/c0t0d1 : Create a volume group.

4) # vgdisplay -v <Volume Group name> | grep "PV Name" : View the physical volumes that make up the volume group.

5) # lvcreate -n <Name of the logical volume> -L <Size of the logical volume> -m <no. of copies> <volume group name> : Create a mirrored logical volume.

6) # lvdisplay -v <logical volume name> : View the created logical volume and its copies on the physical volumes.

7) # newfs -F vxfs <Raw logical volume name> : Format the logical volume.

8) # mount <logical volume name> <mount point> : Mount it. A worked example of the whole sequence follows.
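
Here is the HP-UX side as one worked sequence (device and volume names are hypothetical; the -m option requires the MirrorDisk/UX product):

# pvcreate /dev/rdsk/c0t1d0
# pvcreate /dev/rdsk/c0t2d0
# vgcreate datavg /dev/dsk/c0t1d0 /dev/dsk/c0t2d0
# lvcreate -n datalv -L 512 -m 1 datavg : 512 MB logical volume with one mirror copy.
# newfs -F vxfs /dev/datavg/rdatalv
# mkdir /data ; mount /dev/datavg/datalv /data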


Migration of Stand Alone SCSI Data to Virtual SCSI Data _ IBM AIX Part III

Now we modify the VIOS Profile to add the SCSI Adapter which we removed from the AIX_Server.




Select the adapter and click on "Add as Required".


1) Login to VIOS as "padmin" and execute $ cfgdev


$ lsdev | grep vhost : Locate the newly created vhost adapter.
vhost0 Available Virtual SCSI Server Adapter
vhost1 Available Virtual SCSI Server Adapter
vhost2 Available Virtual SCSI Server Adapter
vhost3 Available Virtual SCSI Server Adapter
vhost4 Available Virtual SCSI Server Adapter


vhost4 is the newly created Virtual SCSI Server Adapter.


$ lsmap -vadapter vhost4


2) Now locate the new disk mapped to the VIOS.


$ lsdev -type disk


name status description
hdisk0 Available SAS Disk Drive
hdisk1 Available SAS Disk Drive
hdisk2 Available SAS Disk Drive
hdisk3 Available SAS Disk Drive
hdisk4 Available SAS Disk Drive
hdisk5 Available SAS Disk Drive
hdisk6 Available SAS Disk Drive
hdisk7 Available SAS Disk Drive
hdisk8 Available 16 Bit LVD SCSI Disk Drive


hdisk8 has been added and is a SCSI disk.


3) Confirm that hdisk8 has the same UDID as the one found on the AIX_Server.


$ chkdev -dev hdisk8 -verbose


NAME: hdisk8
IDENTIFIER: 22080004B9710BST3146807LC03IBMscsi
PHYS2VIRT_CAPABLE: YES
VIRT2NPIV_CAPABLE: NA
VIRT2PHYS_CAPABLE: NA

IEEE:
VTD:


4) Now, map the disk from the VIOS to the Client Partition.


$ mkvdev -vdev hdisk8 -vadapter vhost4
vtscsi0 Available


5) Verify the mapping.


$ lsmap -all or lsmap -vadapter vhost4
$ lsmap -vadapter vhost4
SVSA Physloc Client Partition ID
--------------- -------------------------------------------- ------------------
vhost4 U8204.E8A.10FE401-V1-C15 0x00000000
VTD vtscsi0
Status Available
LUN 0x8100000000000000
Backing device hdisk8
Physloc U78A0.001.0000000-P1-C4-T2-L1-L0


Now log in to the client partition and run # cfgmgr followed by # lspv to locate the new disk. Its identifiers should match the ones noted earlier:
PVID: 002631cd31ad04f50000000000000000
UDID: 22080004B9710BST3146807LC03IBMscsi

Migration of Stand Alone SCSI Data to Virtual SCSI Data _ IBM AIX Part II

On HMC:


1) Modify the AIX_Server partition profile so that the SCSI adapter is removed. After removing the adapter, save the partition profile.






Select the adapter with the slot number obtained from the # lscfg -vpl sisscsia0 output. Once the adapter is selected, click "Remove".


Next we need to create a virtual SCSI server adapter and a virtual SCSI client adapter to map the disk.


Select the VIOS server. The "vhost#" adapter can be added through a dynamic LPAR operation or through Manage Profiles.


Select the "VIOS Server"-->Tasks-->Configurations-->Manage Profile-->Virtual Adapter-->Action-->Create-->Virtual SCSI Adapter.






Follow the same process to add a virtual SCSI client adapter on the client LPAR.


Continued...Part III

Migration of Stand Alone SCSI Data to Virtual SCSI Data _ IBM AIX Part I

The process below shows the migration of data from a standalone SCSI disk to an LPAR managed by a VIOS.

Step 1 on AIX_Server:

1) Identify the disk that has to be moved to the LPAR.

# lspv
hdisk0 002631cd31ad04f5 rootvg active

2) Check that the disk to be migrated has a unique UDID.

# lsattr -EHl hdisk0

attribute       value                              description                   user_settable
PCM             PCM/friend/scsiscsd                Path Control Module           False
algorithm       fail_over                          Algorithm                     True
dist_err_pcnt   0                                  Distributed Error Percentage  True
dist_tw_width   50                                 Distributed Error Sample Time True
hcheck_interval 0                                  Health Check Interval         True
hcheck_mode     nonactive                          Health Check Mode             True
max_transfer    0x40000                            Maximum TRANSFER Size         True
pvid            002631cd31ad04f50000000000000000   Physical volume identifier    False
queue_depth     3                                  Queue DEPTH                   False
reserve_policy  single_path                        Reserve Policy                True
size_in_mb      146800                             Size in Megabytes             False
unique_id       22080004B9710BST3146807LC03IBMscsi Unique device identifier      False

3) Identify the parent of the hdisk0

# lsdev -Cl hdisk0 -F parent
scsi1
# lsdev -l scsi1 -F parent
sisscsia0

So the hdisk0 is attached to the scsi1 which has the parent sisscsia0.

4) The next step is to identify any other resources attached to the same sisscsia0.

# lsdev | grep -i sisscsia0
sisscsia0 Available 00-08 PCI-X Dual Channel Ultra320 SCSI Adapter

# lsdev | grep "00-08"
hdisk0 Available 00-08-01-1,0 16 Bit LVD SCSI Disk Drive
scsi0 Available 00-08-00 PCI-X Dual Channel Ultra320 SCSI Adapter bus
scsi1 Available 00-08-01 PCI-X Dual Channel Ultra320 SCSI Adapter bus
ses0 Available 00-08-00-14,0 SCSI Enclosure Services Device
ses1 Available 00-08-00-15,0 SCSI Enclosure Services Device
ses2 Available 00-08-01-14,0 SCSI Enclosure Services Device
ses3 Available 00-08-01-15,0 SCSI Enclosure Services Device
sisscsia0 Available 00-08 PCI-X Dual Channel Ultra320 SCSI Adapter

So only one hdisk is allocated to the sisscsia0.

5) Obtain the hardware location code of the adapter.

# lscfg -vpl sisscsia0

sisscsia0 U78A0.001.0000000-P1-C4 PCI-X Dual Channel Ultra320 SCSI Adapter

PCI-X Dual Channel Ultra320 SCSI Adapter:
Part Number.................97P3359
FRU Number..................97P3359
Serial Number...............YL10C4061142
Manufacture ID..............000C
EC Level....................0
ROM Level.(alterable).......05080092
Product Specific.(Z0).......5702
Hardware Location Code......U78A0.001.0000000-P1-C4

6) Shutdown the AIX_Server with the shutdown command.

Continued...in Part II


Saturday, 14 January 2012

Hp Unix LVM Commands

# extendfs : To Extend a file system.
# lvmadm : Display the limit associated with the volume group version.
# lvchange : Change the attribute of a logical volume.
# lvcreate : To create a logical volume.
# lvdisplay : To display information about a logical volume.
# lvextend -m : Add a mirror to a logical volume.
# lvextend -L : Increase the size of the logical volume.
# lvlnboot : Prepares a logical volume to become a root, swap, dump area.
# lvmerge : Merge the split logical volumes.
# lvmove : Migrate a logical volume to a new disk.
# lvreduce -L : Reduce the size of the logical volume.
# lvreduce -m : Reduce the number of mirror copies of a logical volume.
# lvremove  : Remove a logical volume.
# lvrmboot : Removes a logical volume link to root, swap, dump.
# lvsplit : Split a mirrored logical volume into two logical volumes.
# lvsync : Synchronize the stale logical volumes.

Tuesday, 10 January 2012

Need of PVID, UDID and IEEE

To export a physical volume as a virtual device, the physical volume must have an IEEE volume attribute, a unique identifier (UDID), or a physical identifier (PV ID).

Physical Volume Identifiers

A physical volume in an AIX Box (hdisk#) could be identified with Physical Volume ID, UDID (Unique Device Identifier), IEEE Identifier.

PVID : A 32-digit identifier in which the first 16 digits uniquely identify the physical volume and the remaining 16 digits are zeros.

UDID : A value assigned to physical volumes that are managed by MPIO.

$ chkdev -dev hdisk# -verbose : Command to view the PVID, UDID and IEEE ID of a physical volume (run as padmin on the VIOS).

# lsattr -EHl hdisk# : Command to view the PVID and UDID for a disk.

$ chkdev -dev hdisk1 -verbose

NAME:                hdisk1
IDENTIFIER:          210ChpO-c4HkKBc904N37006NETAPPfcp
PHYS2VIRT_CAPABLE:   YES
VIRT2NPIV_CAPABLE:   NA
VIRT2PHYS_CAPABLE:   NA
PVID:                00c58e40599f2f900000000000000000
UDID:                2708ECVBZ1SC10IC35L146UCDY10-003IBXscsi
IEEE:
VTD:
PHYS2VIRT_CAPABLE:  Shows that the resource can be virtualized.
VIRT2NPIV_CAPABLE:  Shows whether the resource supports N-Port ID Virtualization.
VIRT2PHYS_CAPABLE:  Shows whether the resource can be used as a physical device again.