About Us

RSInfoMinds is a web-based IT training and consultancy firm, established with the goal of training people in the IT infrastructure field. We provide online and classroom training in various areas of IT infrastructure management.

Join Us: http://www.facebook.com/RSInfoMinds
Mail Us: rsinfominds@gmail.com
Twitter: @RSInfoMinds

We specialize in the following courses:

• Redhat Linux Admin
• Redhat Linux Cluster
• Redhat Virtualization
• IBM AIX Admin
• IBM AIX Virtualization
• IBM AIX Cluster
• HP Unix Admin
• HP Unix Cluster
• HP Unix Virtualization
• Shell Scripting
• Veritas Volume Manager
• Veritas Cluster
• Oracle Core DBA
• VMWare


We provide training in a way that gives you in-depth knowledge of the courses you choose.

We ensure you are confident in each and every technical aspect that the IT industry needs and expects from you.

We also conduct workshops on the latest technologies, with real-time faculty sharing their work experience to make you the best.

Saturday 31 May 2014

AIX Page Replacement

The AIX page replacement daemons scan memory a page at a time to find pages to evict in order to free up memory.

minperm and maxperm

These tunable parameters are used to indicate how much memory the AIX kernel should use to cache non-computational pages.

The maxperm tunable parameter indicates the maximum amount of memory that should be used to cache non-computational pages.

By default, maxperm is an "un-strict" limit, meaning that the limit can be exceeded.

Making maxperm an un-strict limit allows more non-computational files to be cached in memory when there is available free memory.

The maxperm limit can be made a "strict" limit by setting the strict_maxperm tunable parameter to 1.

When maxperm is a strict-limit, the kernel does not allow the number of non-computational pages to exceed maxperm, even if there is free memory available.


The minperm limit indicates the target minimum amount of memory that should be used for non-computational pages.

The number of non-computational pages is referred to as numperm:

The vmstat -v command displays the numperm value for a system as a percentage of the system's real memory.

When the number of non-computational pages (numperm) is greater than or equal to maxperm, the AIX page replacement daemons strictly target non-computational pages.

When the number of non-computational pages (numperm) is less than or equal to minperm, the AIX page replacement daemons target both computational and non-computational pages.

When the number of non-computational pages (numperm) is between minperm and maxperm, the lru_file_repage tunable parameter controls what kind of pages the AIX page replacement daemons should steal.

When numperm is between minperm and maxperm, the AIX page replacement daemons determine what type of pages to target based on their internal re-paging table when the lru_file_repage tunable parameter is set to 1.
# vmstat -v

20.0 minperm percentage   <<- system's minperm% setting
80.0 maxperm percentage   <<- system's maxperm% setting
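These tunables can also be displayed and changed with the vmo command. A minimal sketch (the tunable names minperm%, maxperm%, and strict_maxperm are standard on modern AIX levels; the values shown are purely illustrative):

```shell
# Display the current page-replacement tunables
vmo -o minperm% -o maxperm% -o strict_maxperm

# Lower maxperm% so file caching yields to computational pages sooner;
# -p makes the change persistent across reboots
vmo -p -o maxperm%=50 -o minperm%=10
```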

AIX Virtual Memory Manager

The AIX® virtual memory manager (AIX VMM) is a page-based virtual memory manager.

A page is a fixed-size block of data.

A page might be resident in memory (that is, mapped into a location in physical memory), or a page might be resident on
a disk (that is, paged out of physical memory into paging space or a file system).

AIX maps pages into real memory based on demand.

When an application references a page that is not mapped into real memory, the system generates a page fault.

To resolve the page fault, the AIX kernel loads the referenced page to a location in real memory.

If the referenced page is a new page (that is, a page in a data heap of the process that has never been previously referenced), "loading" the referenced page simply means filling a real memory location with zeros (that is, providing a zero-filled page).

If the referenced page is a pre-existing page (that is, a page in a file or a previously paged out page), loading the referenced page involves reading the page from the disk (paging space or disk file system) into a location in real memory.


Once a page is loaded into real memory, it is marked as unmodified.

If a process or the kernel modifies the page, the state of the page changes to modified.

This allows AIX to keep track of whether a page has been modified after it was loaded into memory.

As the system adds more pages into real memory, the number of empty locations in real memory that can contain pages decreases.

When the number of free page frames drops below a threshold, the AIX kernel must empty some locations in real memory so they can be reused for new pages.
This process is known as page replacement.

The AIX VMM has background daemons responsible for page replacement. A page replacement daemon is referred to as lrud (it shows up as lrud in the output of ps -k).
The lrud daemons are responsible for scanning in-memory pages and evicting pages in order to empty locations in real memory. When a page
replacement daemon determines that it wants to evict a specific page, it does one of two things:




• If the page is modified, the page replacement daemon writes the page out to a secondary storage location.

• If the page is unmodified, the page replacement daemon can simply mark the physical memory block as free, and the physical memory block can then be reused for another page.

The page replacement daemons target different types of pages for eviction based on system memory usage and tunable parameters.

Fundamentally, there are two types of pages on AIX:

• Working storage pages (Computational pages)
• Permanent storage pages (Non-computational pages)

Working storage pages are pages that contain volatile data (in other words, data that is not preserved across a reboot), for example:

* Process data
* Stack
* Shared memory
* Kernel data

When modified working storage pages need to be paged out (moved from memory to the disk), they are written to paging space. Working storage pages are never written to a file system.


When a process exits, the system releases all of its private working storage pages.

Permanent storage pages are pages that contain permanent data (that is, data that is preserved across a reboot). This permanent data is just file data. So, permanent storage pages are basically just pieces of files cached in memory.


When a modified permanent storage page needs to be paged out (moved from memory to disk), it is written to a file system.


As mentioned earlier, an unmodified permanent storage page can just be released without being written to the file
system, since the file system contains a pristine copy of the data.

You can divide permanent storage pages into two sub-types:

• Client pages
• Non-client pages

When you first open a file, the AIX kernel creates an internal VMM object to represent the file and marks it as non-computational; in other words, all files start out as non-computational.


As a program does reads and writes to the file, the AIX kernel caches the file's data in memory as non-computational permanent storage pages.


If the file is closed, the AIX kernel continues to cache the file data in memory (in permanent storage pages). The kernel continues to cache the file for performance;

for example, if another process comes along later and uses the same file, the file data is still in memory, and the AIX kernel does not have to read the file data in from disk.

AIX IO Tuning

AIX's disk and adapter drivers each use a queue to handle IO, split into an in-service queue and a wait queue.

Note that even though the disk is attached to the adapter, the hdisk driver code is utilized before the adapter driver code.

IO requests in the in-service queue are sent to the storage, and the queue slot is freed when the IO is complete.

IO requests in the wait queue stay there until an in-service queue slot is free, at which time they are moved to the in-service queue and sent to the storage.

IO requests in the in-service queue are also called in-flight from the perspective of the device driver.


The size of the hdisk driver in-service queue is specified by the queue_depth attribute, while the size of the adapter driver in-service queue is specified by the num_cmd_elems attribute.



root # lsattr -EHl fcs0
attribute value description user_settable
intr_priority 3 Interrupt priority False
lg_term_dma 0x800000 Long term DMA True
max_xfer_size 0x100000 Maximum Transfer Size True
num_cmd_elems 200 Maximum Number of COMMAND Elements True
sw_fc_class 2 FC Class for Fabric True

root # lsattr -EHl hdisk0
attribute value description user_settable
PCM PCM/friend/vscsi Path Control Module False
algorithm fail_over Algorithm True
hcheck_cmd test_unit_rdy Health Check Command True
hcheck_interval 60 Health Check Interval True
hcheck_mode enabled Health Check Mode True
max_transfer 0x40000 Maximum TRANSFER Size True
pvid 00c4c6c7b35f29770000000000000000 Physical volume identifier False
queue_depth 3 Queue DEPTH True
reserve_policy no_reserve Reserve Policy True
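Both in-service queue sizes can be changed with chdev. A sketch, assuming the hdisk0 and fcs0 devices shown above (the values are illustrative; -P defers the change to the next reboot, which is needed when the device is in use):

```shell
# Raise the hdisk driver's in-service queue size
chdev -l hdisk0 -a queue_depth=16 -P

# Raise the FC adapter's command element count
chdev -l fcs0 -a num_cmd_elems=400 -P
```

Changes made with -P are recorded in the ODM and take effect after the next reboot (or after the device is removed and reconfigured).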

A physical disk can only do one IO at a time, but knowing several of the IO requests allows the disk to do
the IOs using an elevator algorithm to minimize actuator movement and latency.

Virtual disks typically are backed by many physical disks, so can do many IOs in parallel.

Maximum LUN IOPS = queue_depth/ (avg. IO service time)

Maximum adapter IOPS = num_cmd_elems/ (avg. IO service time)

Current default queue sizes (num_cmd_elems) for FC adapters range from 200 to 500, with maximum values of 2048 or 4096.
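As a worked example of the formulas above (with hypothetical numbers): a LUN with queue_depth=16 and an average IO service time of 5 ms tops out at 16 / 0.005 s = 3200 IOPS:

```shell
# Maximum LUN IOPS = queue_depth / avg IO service time (in seconds)
queue_depth=16
avg_service_time_ms=5
max_iops=$(( queue_depth * 1000 / avg_service_time_ms ))
echo "$max_iops"   # 3200
```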

Rename A Device In AIX Using # rendev

# lspv
hdisk0          00f61ab2f73e46e2                    rootvg          active
hdisk1          00f61ab20bf28ac6                    None
hdisk2          00f61ab2202f7c0b                    None
hdisk4          00f61ab20b97190d                    None
hdisk3          00f61ab2202f93ab                    None

# rendev -l hdisk3 -n hdisk300

# lspv
hdisk0          00f61ab2f73e46e2                    rootvg          active
hdisk1          00f61ab20bf28ac6                    None
hdisk2          00f61ab2202f7c0b                    None
hdisk4          00f61ab20b97190d                    None
hdisk300        00f61ab2202f93ab                    None

Friday 30 May 2014

Password Expiration

# chage --list test

# chage -M 60 test : Makes the password expire after 60 days.

# chage -W 5 test : Sets the number of warning days before the password expires.

Linux Swappiness


The swappiness parameter controls the tendency of the kernel to move processes out of physical memory and onto the swap disk. Because disks are much slower than RAM, this can lead to slower response times for system and applications if processes are too aggressively moved out of memory.

swappiness can have a value of between 0 and 100

swappiness=0 tells the kernel to avoid swapping processes out of physical memory for as long as possible.

swappiness=100 tells the kernel to aggressively swap processes out of physical memory and move them to swap cache.
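The current value can be read from /proc and changed with sysctl. A sketch (writing the value requires root; the value 10 below is only an example):

```shell
# Read the current swappiness value
cat /proc/sys/vm/swappiness

# Change it for the running kernel (requires root)
sysctl vm.swappiness=10

# To persist across reboots, add this line to /etc/sysctl.conf:
#   vm.swappiness=10
```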

How to Lock / UnLock (Enable / Disable) Linux User Account

To lock an account, you can use the following command:

# passwd -l username (where username is the login id).

To unlock the same account:

# passwd -u username 

VCS Command For Reference Part II

Service Group Operations
hagrp -list – List all service groups
hagrp -resources [service_group] – List a service group’s resources
hagrp -dep [service_group] – List a service group’s dependencies
hagrp -display [service_group] – Get detailed information about a service group
hagrp -online groupname -sys systemname – Start a service group and bring its resources online
hagrp -offline groupname -sys systemname – Stop a service group and take its resources offline
hagrp -switch groupname -to systemname – Switch a service group from one system to another
hagrp -freeze group_name [-persistent] – Put a service group into maintenance mode; freezing disables online and offline operations
hagrp -unfreeze group_name [-persistent] – Take the service group out of maintenance mode
hagrp -enable service_group [-sys system] – Enable a service group
hagrp -disable service_group [-sys system] – Disable a service group
hagrp -enableresources service_group – Enable all the resources in a service group
hagrp -disableresources service_group – Disable all the resources in a service group
hagrp -link parent_group child_group relationship – Specify the dependency relationship between two service groups
hagrp -unlink parent_group child_group – Remove the dependency between two service groups
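Putting a few of these together, a planned failover of a hypothetical service group websg from node1 to node2 might look like this (the group and system names are illustrative):

```shell
# Check where the group is currently online
hagrp -state websg

# Switch it to the other node; VCS takes it offline on node1
# and brings it online on node2 in dependency order
hagrp -switch websg -to node2

# Confirm the move
hastatus -summary
```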


VCS Commands For Reference

LLT status
lltconfig -a list – List all the MAC addresses in cluster
lltstat -l – Lists information about each configured LLT link
lltstat [-nvv|-n] – Verify status of links in cluster

Starting and stopping LLT
lltconfig -c – Start the LLT service
lltconfig -U – Stop the LLT service

GAB status
gabconfig -a – List membership, verify if GAB is operating
gabdiskhb -l – Check the disk heartbeat status
gabdiskx -l – lists all the exclusive GAB disks and their membership information

Starting and stopping GAB
gabconfig -c -n seed_number – Start the GAB
gabconfig -U – Stop the GAB

Cluster Status
hastatus -summary – Outputs the status of cluster
hasys -display – Displays the cluster operation status

Start or Stop services
hastart [-force|-stale] – '-force' is used to load the local configuration
hasys -force 'system' – Start the cluster using the config file from the mentioned 'system'
hastop -local [-force|-evacuate] – '-local' stops the service only on the system where you type the command
hastop -sys 'system' [-force|-evacuate] – '-sys' stops had on the system you specify
hastop -all [-force] – '-all' stops had on all systems in the cluster

Change VCS Configuration online
haconf -makerw – Makes the VCS configuration writable (read/write mode)
haconf -dump -makero – Writes the configuration changes to disk and makes it read-only again

Agent Operations
haagent -start agent_name -sys system – Starts an agent
haagent -stop agent_name -sys system – Stops an agent

Cluster Operations
haclus -display – Displays cluster information and status
haclus -enable LinkMonitoring – Enables heartbeat link monitoring in the GUI
haclus -disable LinkMonitoring – Disables heartbeat link monitoring in the GUI

Add and Delete Users
hauser -add user_name – Adds a user with read/write access
hauser -add VCSGuest – Adds a user with read-only access
hauser -modify user_name – Modifies a user's password
hauser -delete user_name – Deletes a user
hauser -display [user_name] – Displays all users if username is not specified

System Operations
hasys -list – List systems in the cluster
hasys -display – Get detailed information about each system
hasys -add system – Add a system to cluster
hasys -delete system – Delete a system from cluster

Resource Types
hatype -list – List resource types
hatype -display [type_name] – Get detailed information about a resource type
hatype -resources type_name – List all resources of a particular type
hatype -add resource_type – Add a resource type
hatype -modify .... – Set the value of static attributes
hatype -delete resource_type – Delete a resource type

Resource Operations
hares -list – List all resources
hares -dep [resource] – List a resource’s dependencies
hares -display [resource] – Get detailed information about a resource
hares -add resource_type service_group – Add a resource
hares -modify resource attribute_name value – Modify the attributes of the new resource
hares -delete resource – Delete a resource
hares -online resource -sys systemname – Bring a resource online on a system
hares -offline resource -sys systemname – Take a resource offline on a system
hares -probe resource -sys system – Cause a resource’s agent to immediately monitor the resource on a particular system
hares -clear resource [-sys system] – Clear a faulted resource
hares -local resource attribute_name value – Make a resource’s attribute value local
hares -global resource attribute_name value – Make a resource’s attribute value global
hares -link parent_res child_res – Specify a dependency between two resources
hares -unlink parent_res child_res – Remove the dependency relationship between two resources


Saturday 24 May 2014

IBM AIX® (internal) Location System Codes

lsdev -C -H -F "name status physloc location description" : Get the AIX and (if present) physical location codes.
lsdev -Cc disk -F 'name location physloc' : Get the AIX and physical location codes of all disks.
lsdev -Cl hdisk0 -F physloc : Get the physical location code of hdisk0.
lscfg -vpl hdisk0 : Get extended information on hdisk0.
lsdev -C | grep hdisk0 : Get the AIX location code of hdisk0.
lsparent -Cl hdisk0 : Get the parent devices of hdisk0.
lscfg -l fcs0 : Get information about the fcs0 device.

Location Codes are divided as:

Unit enclosure type - Enclosure model - Serial number - Location

U789C - 001 - DQD3F62 - P2-D3

U : Unit
C : Card
P : Planar
D : Device

Locate Timezone In Linux

[root@linux_lab ]# cat /etc/sysconfig/clock
# The ZONE parameter is only evaluated by system-config-date.
# The timezone of the system is defined by the contents of /etc/localtime.
ZONE="America/New_York"
UTC=true
ARC=false
[root@linux_lab ]# echo $TZ

[root@linux_lab ]#


How To Enable FTP Service on AIX 6.1

aix_lab# ftp localhost
ftp: connect: Connection refused
ftp> bye
aix_lab#

As we can see, the FTP service is not enabled, so let's enable it:

aix_lab# chsubserver -a -v ftp -p tcp
aix_lab# refresh -s inetd
0513-095 The request for subsystem refresh was completed successfully.
aix_lab#


aix_lab# ftp localhost
Connected to loopback.
220 aix_lab FTP server (Version 4.2 Tue Dec 22 14:13:26 CST 2009) ready.
Name (localhost:root):

Sunday 18 May 2014

VERITAS Volume Manager on RHEL 6

VxVM is already installed on the machine, and the procedure below describes adding disks to VERITAS Volume Manager and provisioning storage for use. In this scenario the attached disks are local SCSI disks.
Procedure:
1)    List the available disk using vxdisk command.
[root@node1 /]# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
sda          auto:none       -            -            online invalid
sdb          auto:LVM        -            -            LVM
sdc          auto:none       -            -            online invalid
sdd          auto:none       -            -            online invalid
sde          auto:none       -            -            online invalid
sdf          auto:none       -            -            online invalid
sdg          auto:none       -            -            online invalid
[root@node1 /]#
Note:  
online invalid in the STATUS column indicates that a disk has yet to be added to or initialized for VxVM control.
2)    Initialize the disks using vxdisksetup command.
[root@node1 /]# vxdisksetup -i sdc
[root@node1 /]# vxdisksetup -i sdd
3)    Verify that the disks have been initialized using the vxdisk command.
[root@node1 /]# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
sda          auto:none       -            -            online invalid
sdb          auto:LVM        -            -            LVM
sdc          auto:cdsdisk    -            -            online
sdd          auto:cdsdisk    -            -            online
sde          auto:none       -            -            online invalid
sdf          auto:none       -            -            online invalid
[root@node1 /]#
Note:  
Disks that are listed as online are initialized and under VxVM control.
4)    Add the desired disk to a disk group using the vxdg command. In this scenario we create a disk group named ITOCDG and add disk sdc to the disk group.
[root@node1 /]# vxdg init ITOCDG disk1=sdc
[root@node1 /]# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
sda          auto:none       -            -            online invalid
sdb          auto:LVM        -            -            LVM
sdc          auto:cdsdisk    disk1        ITOCDG       online
sdd          auto:cdsdisk    -            -            online
sde          auto:none       -            -            online invalid
sdf          auto:none       -            -            online invalid
[root@node1 /]#
Note :
We can use vxdg with the adddisk switch to add new disks to an existing disk group.
[root@node1 /]# vxdg -g ITOCDG adddisk disk2=sdd
[root@node1 /]# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
sda          auto:none       -            -            online invalid
sdb          auto:LVM        -            -            LVM
sdc          auto:cdsdisk    disk1        ITOCDG       online
sdd          auto:cdsdisk    disk2        ITOCDG       online
sde          auto:none       -            -            online invalid
sdf          auto:none       -            -            online invalid
[root@node1 /]#

5)    To list disk group properties we can use following command.
[root@node1 /]# vxdg list ITOCDG
Group:     ITOCDG
dgid:      1334435551.48.node1.redhat.com
import-id: 1024.47
flags:     cds
version:   170
alignment: 8192 (bytes)
ssb:            on 
autotagging:    on
detach-policy: global
dg-fail-policy: obsolete
copies:    nconfig=default nlog=default
config:    seqno=0.1030 permlen=51360 free=51356 templen=2 loglen=4096
config disk sdc copy 1 len=51360 state=clean online
config disk sdd copy 1 len=51360 state=clean online
log disk sdc copy 1 len=4096
log disk sdd copy 1 len=4096
[root@node1 /]#
6)     Create a volume on the disk group with the desired size. In this example we create volume VOL1 with a size of 100 MB.
[root@node1 /]# vxassist -g ITOCDG make VOL1 100m
7)    To list volume details we can use the vxlist volume command:
   [root@node1 /]# vxlist volume
TY   VOLUME   DISKGROUP        SIZE STATUS    LAYOUT   LINKAGE
vol  VOL1     ITOCDG        100.00m healthy   concat   -

8)    Create a file system on the volume using the mkfs command. In this example we create a VxFS file system:
[root@node1 /]# mkfs -t vxfs /dev/vx/rdsk/ITOCDG/VOL1
    version 9 layout   
    204800 sectors, 102400 blocks of size 1024, log size 1024 blocks
    rcq size 1024 blocks
    largefiles supported
Note :
/dev/vx/rdsk/ITOCDG/VOL1 is the device file for volume VOL1. To verify the file system type we can use the fstyp command.
[root@node1 /]# fstyp /dev/vx/dsk/ITOCDG/VOL1
9)    Create a mount point and mount the file system using mount command.
[root@node1 /]# mkdir /veritas1
[root@node1 /]# mount -t vxfs /dev/vx/dsk/ITOCDG/VOL1 /veritas1/
10)  Verify the mounted file system using the mount or df command.
[root@node1 /]# mount
/dev/mapper/vg_node1-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/sda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
none on /dev/odm type vxodmfs (rw,smartsync)
/dev/sr0 on /repo type iso9660 (ro)
/dev/sr1 on /media/201204090917 type iso9660 (ro,nosuid,nodev,uhelper=udisks,uid=0,gid=0,iocharset=utf8,mode=0400,dmode=0500)
/dev/vx/dsk/ITOCDG/VOL1 on /veritas1 type vxfs (rw,delaylog,largefiles,ioerror=mwdisable)
Note:
df displays the amount of disk space available on the file system.
[root@node1 /]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_node1-lv_root
                      8.5G  4.5G  3.6G  57% /
tmpfs                 499M  208K  499M   1% /dev/shm
/dev/sda1             485M   32M  429M   7% /boot
/dev/sr0              3.4G  3.4G     0 100% /repo
/dev/sr1              927M  927M     0 100% /media/201204090917
/dev/vx/dsk/ITOCDG/VOL1
                      100M  3.2M   91M   4% /veritas1

[root@node1 /]#
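To make the mount persistent across reboots on Linux, an /etc/fstab entry along these lines can be added (a sketch; the vxfs mount options shown are the defaults reported by mount above):

```
/dev/vx/dsk/ITOCDG/VOL1  /veritas1  vxfs  delaylog,largefiles  0 0
```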

Kernel Build

1. Take a backup of the current running kernel's initrd image.

# cp /boot/initrd-$(uname -r).img /boot/initrd-$(uname -r).img.bak

2.     Rebuild the initrd image using the mkinitrd command:


# mkinitrd -f -v /boot/initrd-$(uname -r).img $(uname -r)

Saturday 17 May 2014

Linux Memory Usage Analyzer


ps -eo rss,vsz,pid,cputime,cmd --width 100 --sort rss,vsz | tail --lines 10

ps aux --sort -rss | head

ps axo %mem,pid,euser,cmd | sort -nr | head -n 10

ps aux | sort -nk +4 | tail

ps -e -orss=,args= | sort -b -k1,1n | pr -TW$COLUMNS

Saturday 10 May 2014

Symptoms Of Paging Space Issue


  • INIT: Paging space is low

  • ksh: cannot fork no swap space

  • Not enough memory

  • Fork function failed

  • fork () system call failed

  • Unable to fork, too many processes

  • Fork failure - not enough memory available

  • Fork function not allowed. Not enough memory available.

  • Cannot fork: Not enough space
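When these symptoms appear, the first step is usually to check paging space utilization and, if needed, grow it. A sketch using standard AIX commands (the paging space name hd6 and the size increment are illustrative):

```shell
# Show all paging spaces and their % used
lsps -a

# Show the summary view
lsps -s

# Add 4 logical partitions to paging space hd6
# (requires free partitions in its volume group)
chps -s 4 hd6
```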

To create a SEA Failover scenario in a Dual VIO configuration


Follow the procedure below step by step as written.

1. Create two virtual adapters on both VIO servers, say ent1 and ent2, keeping in mind that ent0 is the physical (real) adapter.

2. While creating them, keep the PVIDs of the two adapters different; for example, ent1 with PVID "1" and ent2 with PVID "99".

3. Make ent1 a trunk adapter with priority 1 on VIO 1 and with priority 2 on VIO 2. To create a trunk adapter, select the option to use the adapter for bridging.

4. Create the SEA adapter by issuing the command below on both VIO servers. If your configuration matches the above, just copy and paste it on both VIO servers.


mkvdev -sea ent0 -vadapter ent1 -default ent1 -defaultid 1 -attr ha_mode=auto ctl_chan=ent2

5. Add an IP address on the new SEA adapter, ent3:


mktcpip -hostname $(hostname) -inetaddr <your ip address> -interface en3 -netmask <your netmask>


6. Make sure you have created the SEA adapter and configured the IP address.

7. Test the status of the adapters by issuing the following command:

entstat -all ent3 | grep -i state

It will show PRIMARY or BACKUP depending on which VIO server you run it on.

8. Change the state of the adapter using the command below to test failover:

chdev -dev ent3 -attr ha_mode=standby

9. Test the status of the adapters again:

entstat -all ent3 | grep -i state

It should now show the failover.